From patchwork Sun Mar 15 09:01:06 2020
X-Patchwork-Submitter: Qi Zhang
X-Patchwork-Id: 66663
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Qi Zhang
To: beilei.xing@intel.com, yahui.cao@intel.com
Cc: xiaolong.ye@intel.com, dev@dpdk.org, Qi Zhang
Date: Sun, 15 Mar 2020 17:01:06 +0800
Message-Id: <20200315090106.14987-1-qi.z.zhang@intel.com>
X-Mailer: git-send-email 2.13.6
In-Reply-To: <20200315032227.44523-1-qi.z.zhang@intel.com>
References: <20200315032227.44523-1-qi.z.zhang@intel.com>
Subject: [dpdk-dev] [PATCH v3] net/ice: code clean for queue interrupt config

Decouple the intr_handle from ice_vsi_queues_bind_intr(), which is shared
by the interrupt binding of both the data Rx queues and the FDIR Rx queue.

The current implementation of ice_vsi_queues_bind_intr() has no functional
issue, but it relies on the assumption that FDIR setup (where the FDIR
queue's interrupt is bound) is always called before dev_start (where the
data queues' interrupts are bound), which is not good practice.

In more detail: if FDIR setup were deferred until after dev_start, then
configuring the FDIR queue could overwrite the vector recorded for Rx
queue 0 with the FDIR queue's vector number. Interrupt Rx mode would then
stop working on Rx queue 0, because the vector number used by
rte_eth_dev_rx_intr_ctl() would be corrupted.

Apply the same decoupling to ice_vsi_enable_queues_intr() and
ice_vsi_disable_queues_intr() so that all the queue interrupt APIs stay
consistent.

Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
---
v3:
- fix a couple of bugs

v2:
- rewrite the commit log; this is not a fix but a code cleanup.
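Note (not part of the patch, just an illustrative sketch reconstructed from
the diff below): after the decoupling, each caller resolves the intr_handle
itself and passes only what the VSI needs, so the FDIR path can no longer
touch the vector table recorded for the data Rx queues. The call sites and
arguments are taken from ice_rxq_intr_setup() and ice_fdir_setup() and
trimmed for illustration.

    /* Data-path VSI (ice_rxq_intr_setup): the caller owns intr_handle and
     * hands over the efd count and the vector table explicitly.
     */
    ice_vsi_queues_bind_intr(vsi,
                             !rte_intr_allow_others(intr_handle), /* use_misc */
                             intr_handle->nb_efd,                 /* intr_num_max */
                             intr_handle->intr_vec);              /* intr_vec */
    ice_vsi_enable_queues_intr(vsi, !rte_intr_allow_others(intr_handle));

    /* FDIR VSI (ice_fdir_setup): always uses the misc vector and passes a
     * NULL intr_vec, so it never writes the vector recorded for Rx queue 0.
     */
    ice_vsi_queues_bind_intr(vsi, true, 1, NULL);
    ice_vsi_enable_queues_intr(vsi, true);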
 drivers/net/ice/ice_ethdev.c      | 53 +++++++++++++++++----------------------
 drivers/net/ice/ice_ethdev.h      |  9 ++++---
 drivers/net/ice/ice_fdir_filter.c |  6 ++---
 3 files changed, 32 insertions(+), 36 deletions(-)

diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index e59761c22..dd9105eed 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -2318,11 +2318,8 @@ ice_release_vsi(struct ice_vsi *vsi)
 }

 void
-ice_vsi_disable_queues_intr(struct ice_vsi *vsi)
+ice_vsi_disable_queues_intr(struct ice_vsi *vsi, bool use_misc)
 {
-        struct rte_eth_dev *dev = vsi->adapter->eth_dev;
-        struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
-        struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
         struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
         uint16_t msix_intr, i;

@@ -2333,7 +2330,7 @@ ice_vsi_disable_queues_intr(struct ice_vsi *vsi)
                 rte_wmb();
         }

-        if (rte_intr_allow_others(intr_handle))
+        if (!use_misc)
                 /* vfio-pci */
                 for (i = 0; i < vsi->nb_msix; i++) {
                         msix_intr = vsi->msix_intr + i;
@@ -2368,7 +2365,8 @@ ice_dev_stop(struct rte_eth_dev *dev)
                 ice_tx_queue_stop(dev, i);

         /* disable all queue interrupts */
-        ice_vsi_disable_queues_intr(main_vsi);
+        ice_vsi_disable_queues_intr(main_vsi,
+                        !rte_intr_allow_others(intr_handle));

         if (pf->init_link_up)
                 ice_dev_set_link_up(dev);
@@ -2616,16 +2614,15 @@ __vsi_queues_bind_intr(struct ice_vsi *vsi, uint16_t msix_vect,
 }

 void
-ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
+ice_vsi_queues_bind_intr(struct ice_vsi *vsi,
+                         bool use_misc,
+                         int intr_num_max,
+                         int *intr_vec)
 {
-        struct rte_eth_dev *dev = vsi->adapter->eth_dev;
-        struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
-        struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
         struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
         uint16_t msix_vect = vsi->msix_intr;
-        uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_handle->nb_efd);
+        uint16_t nb_msix = RTE_MIN(vsi->nb_msix, intr_num_max);
         uint16_t queue_idx = 0;
-        int record = 0;
         int i;

         /* clear Rx/Tx queue interrupt */
@@ -2634,15 +2631,12 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
                 ICE_WRITE_REG(hw, QINT_RQCTL(vsi->base_queue + i), 0);
         }

-        /* PF bind interrupt */
-        if (rte_intr_dp_is_en(intr_handle)) {
-                queue_idx = 0;
-                record = 1;
-        }
+        if (use_misc)
+                nb_msix = 1;

         for (i = 0; i < vsi->nb_used_qps; i++) {
                 if (nb_msix <= 1) {
-                        if (!rte_intr_allow_others(intr_handle))
+                        if (use_misc)
                                 msix_vect = ICE_MISC_VEC_ID;

                         /* uio mapping all queue to one msix_vect */
@@ -2650,9 +2644,8 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
                                                vsi->base_queue + i,
                                                vsi->nb_used_qps - i);

-                        for (; !!record && i < vsi->nb_used_qps; i++)
-                                intr_handle->intr_vec[queue_idx + i] =
-                                        msix_vect;
+                        for (; intr_vec && i < vsi->nb_used_qps; i++)
+                                intr_vec[queue_idx + i] = msix_vect;
                         break;
                 }

@@ -2660,8 +2653,8 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
                 __vsi_queues_bind_intr(vsi, msix_vect,
                                        vsi->base_queue + i, 1);

-                if (!!record)
-                        intr_handle->intr_vec[queue_idx + i] = msix_vect;
+                if (intr_vec)
+                        intr_vec[queue_idx + i] = msix_vect;

                 msix_vect++;
                 nb_msix--;
@@ -2669,15 +2662,12 @@ ice_vsi_queues_bind_intr(struct ice_vsi *vsi)
 }

 void
-ice_vsi_enable_queues_intr(struct ice_vsi *vsi)
+ice_vsi_enable_queues_intr(struct ice_vsi *vsi, bool use_misc)
 {
-        struct rte_eth_dev *dev = vsi->adapter->eth_dev;
-        struct rte_pci_device *pci_dev = ICE_DEV_TO_PCI(dev);
-        struct rte_intr_handle *intr_handle = &pci_dev->intr_handle;
         struct ice_hw *hw = ICE_VSI_TO_HW(vsi);
         uint16_t msix_intr, i;

-        if (rte_intr_allow_others(intr_handle))
+        if (!use_misc)
                 for (i = 0; i < vsi->nb_used_qps; i++) {
                         msix_intr = vsi->msix_intr + i;
                         ICE_WRITE_REG(hw, GLINT_DYN_CTL(msix_intr),
@@ -2733,10 +2723,13 @@ ice_rxq_intr_setup(struct rte_eth_dev *dev)

         /* Map queues with MSIX interrupt */
         vsi->nb_used_qps = dev->data->nb_rx_queues;
-        ice_vsi_queues_bind_intr(vsi);
+        ice_vsi_queues_bind_intr(vsi,
+                        !rte_intr_allow_others(intr_handle),
+                        intr_handle->nb_efd,
+                        intr_handle->intr_vec);

         /* Enable interrupts for all the queues */
-        ice_vsi_enable_queues_intr(vsi);
+        ice_vsi_enable_queues_intr(vsi, !rte_intr_allow_others(intr_handle));

         rte_intr_enable(intr_handle);

diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index da557a254..392b937a9 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -461,9 +461,12 @@ struct ice_vsi *
 ice_setup_vsi(struct ice_pf *pf, enum ice_vsi_type type);
 int
 ice_release_vsi(struct ice_vsi *vsi);
-void ice_vsi_enable_queues_intr(struct ice_vsi *vsi);
-void ice_vsi_disable_queues_intr(struct ice_vsi *vsi);
-void ice_vsi_queues_bind_intr(struct ice_vsi *vsi);
+void ice_vsi_enable_queues_intr(struct ice_vsi *vsi, bool use_misc);
+void ice_vsi_disable_queues_intr(struct ice_vsi *vsi, bool use_misc);
+void ice_vsi_queues_bind_intr(struct ice_vsi *vsi,
+                              bool use_misc,
+                              int intr_num_max,
+                              int *intr_vec);

 static inline int
 ice_align_floor(int n)
diff --git a/drivers/net/ice/ice_fdir_filter.c b/drivers/net/ice/ice_fdir_filter.c
index 6342b560c..fdcfbd78f 100644
--- a/drivers/net/ice/ice_fdir_filter.c
+++ b/drivers/net/ice/ice_fdir_filter.c
@@ -501,8 +501,8 @@ ice_fdir_setup(struct ice_pf *pf)

         /* Enable FDIR MSIX interrupt */
         vsi->nb_used_qps = 1;
-        ice_vsi_queues_bind_intr(vsi);
-        ice_vsi_enable_queues_intr(vsi);
+        ice_vsi_queues_bind_intr(vsi, true, 1, NULL);
+        ice_vsi_enable_queues_intr(vsi, true);

         /* reserve memory for the fdir programming packet */
         snprintf(z_name, sizeof(z_name), "ICE_%s_%d",
@@ -628,7 +628,7 @@ ice_fdir_teardown(struct ice_pf *pf)
         if (!vsi)
                 return;

-        ice_vsi_disable_queues_intr(vsi);
+        ice_vsi_disable_queues_intr(vsi, true);

         err = ice_fdir_tx_queue_stop(eth_dev, pf->fdir.txq->queue_id);
         if (err)