From patchwork Fri Jul 24 13:58:13 2015
X-Patchwork-Submitter: "Ananyev, Konstantin"
X-Patchwork-Id: 6579
From: Konstantin Ananyev
To: dev@dpdk.org
Date: Fri, 24 Jul 2015 14:58:13 +0100
Message-Id: <1437746295-12184-4-git-send-email-konstantin.ananyev@intel.com>
In-Reply-To: <1437746295-12184-1-git-send-email-konstantin.ananyev@intel.com>
References: <1437667506-3890-2-git-send-email-bruce.richardson@intel.com>
 <1437746295-12184-1-git-send-email-konstantin.ananyev@intel.com>
Subject: [dpdk-dev] [PATCHv2 3/5] ixgbe: fix bug on release of mbufs from queue

The calculations of which mbufs were valid in the RX and TX queues were
incorrect when freeing the mbufs for the vector PMD. This led to crashes
due to invalid reference counts when mbuf debugging was turned on, and
possibly to other, more subtle problems (such as mbufs being freed while
still in use) in other cases.

To fix this, the following changes were made:
* correct the counts and post-loop values in the TX release function for
  the vector code.
* create a new, separate RX release function for the RX vector code, since
  the tracking of which mbufs are valid differs on that code path.

Fixes: c95584dc2b18 ("ixgbe: new vectorized functions for Rx/Tx")

v2 changes:
- Make sure that rx_using_sse is reset to zero if the scalar RX function
  was chosen.
- Fix checkpatch.pl errors.
Signed-off-by: Bruce Richardson
Signed-off-by: Konstantin Ananyev
---
 drivers/net/ixgbe/ixgbe_rxtx.c     | 20 +++++++++++++++
 drivers/net/ixgbe/ixgbe_rxtx.h     |  3 +++
 drivers/net/ixgbe/ixgbe_rxtx_vec.c | 52 +++++++++++++++++++++++++-------------
 3 files changed, 58 insertions(+), 17 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 75c5555..db2454c 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2270,6 +2270,14 @@ ixgbe_rx_queue_release_mbufs(struct ixgbe_rx_queue *rxq)
 {
 	unsigned i;
 
+#ifdef RTE_IXGBE_INC_VECTOR
+	/* SSE Vector driver has a different way of releasing mbufs. */
+	if (rxq->rx_using_sse) {
+		ixgbe_rx_queue_release_mbufs_vec(rxq);
+		return;
+	}
+#endif
+
 	if (rxq->sw_ring != NULL) {
 		for (i = 0; i < rxq->nb_rx_desc; i++) {
 			if (rxq->sw_ring[i].mbuf != NULL) {
@@ -3925,6 +3933,7 @@ ixgbe_set_ivar(struct rte_eth_dev *dev, u8 entry, u8 vector, s8 type)
 void __attribute__((cold))
 ixgbe_set_rx_function(struct rte_eth_dev *dev)
 {
+	uint16_t i, rx_using_sse;
 	struct ixgbe_adapter *adapter =
 		(struct ixgbe_adapter *)dev->data->dev_private;
 
@@ -4013,6 +4022,17 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev)
 
 		dev->rx_pkt_burst = ixgbe_recv_pkts;
 	}
+
+	/* Propagate information about RX function choice through all queues. */
+
+	rx_using_sse =
+		(dev->rx_pkt_burst == ixgbe_recv_scattered_pkts_vec ||
+		dev->rx_pkt_burst == ixgbe_recv_pkts_vec);
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		struct ixgbe_rx_queue *rxq = dev->data->rx_queues[i];
+		rxq->rx_using_sse = rx_using_sse;
+	}
 }
 
 /**
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 64e6bb9..befdc3a 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -121,6 +121,8 @@ struct ixgbe_rx_queue {
 	uint16_t rx_next_avail; /**< idx of next staged pkt to ret to app */
 	uint16_t rx_free_trigger; /**< triggers rx buffer allocation */
 #endif
+	uint16_t rx_using_sse;
+	/**< indicates that vector RX is in use */
 #ifdef RTE_IXGBE_INC_VECTOR
 	uint16_t rxrearm_nb;     /**< number of remaining to be re-armed */
 	uint16_t rxrearm_start;  /**< the idx we start the re-arming from */
@@ -284,6 +286,7 @@ uint16_t ixgbe_recv_scattered_pkts_vec(void *rx_queue,
 		struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
 int ixgbe_rx_vec_dev_conf_condition_check(struct rte_eth_dev *dev);
 int ixgbe_rxq_vec_setup(struct ixgbe_rx_queue *rxq);
+void ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq);
 
 #ifdef RTE_IXGBE_INC_VECTOR
diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec.c b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
index 47ff990..c01da7b 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx_vec.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx_vec.c
@@ -726,27 +726,45 @@ ixgbe_tx_queue_release_mbufs(struct ixgbe_tx_queue *txq)
 {
 	unsigned i;
 	struct ixgbe_tx_entry_v *txe;
-	uint16_t nb_free, max_desc;
+	const uint16_t max_desc = (uint16_t)(txq->nb_tx_desc - 1);
 
-	if (txq->sw_ring != NULL) {
-		/* release the used mbufs in sw_ring */
-		nb_free = txq->nb_tx_free;
-		max_desc = (uint16_t)(txq->nb_tx_desc - 1);
-		for (i = txq->tx_next_dd - (txq->tx_rs_thresh - 1);
-		     nb_free < max_desc && i != txq->tx_tail;
-		     i = (i + 1) & max_desc) {
-			txe = (struct ixgbe_tx_entry_v *)&txq->sw_ring[i];
-			if (txe->mbuf != NULL)
-				rte_pktmbuf_free_seg(txe->mbuf);
-		}
-		/* reset tx_entry */
-		for (i = 0; i < txq->nb_tx_desc; i++) {
-			txe = (struct ixgbe_tx_entry_v *)&txq->sw_ring[i];
-			txe->mbuf = NULL;
-		}
+	if (txq->sw_ring == NULL || txq->nb_tx_free == max_desc)
+		return;
+
+	/* release the used mbufs in sw_ring */
+	for (i = txq->tx_next_dd - (txq->tx_rs_thresh - 1);
+	     i != txq->tx_tail;
+	     i = (i + 1) & max_desc) {
+		txe = &((struct ixgbe_tx_entry_v *)txq->sw_ring)[i];
+		rte_pktmbuf_free_seg(txe->mbuf);
+	}
+	txq->nb_tx_free = max_desc;
+
+	/* reset tx_entry */
+	for (i = 0; i < txq->nb_tx_desc; i++) {
+		txe = (struct ixgbe_tx_entry_v *)&txq->sw_ring[i];
+		txe->mbuf = NULL;
 	}
 }
 
+void __attribute__((cold))
+ixgbe_rx_queue_release_mbufs_vec(struct ixgbe_rx_queue *rxq)
+{
+	const unsigned mask = rxq->nb_rx_desc - 1;
+	unsigned i;
+
+	if (rxq->sw_ring == NULL || rxq->rxrearm_nb >= rxq->nb_rx_desc)
+		return;
+
+	/* free all mbufs that are valid in the ring */
+	for (i = rxq->rx_tail; i != rxq->rxrearm_start; i = (i + 1) & mask)
+		rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf);
+	rxq->rxrearm_nb = rxq->nb_rx_desc;
+
+	/* set all entries to NULL */
+	memset(rxq->sw_ring, 0, sizeof(rxq->sw_ring[0]) * rxq->nb_rx_desc);
+}
+
 static void __attribute__((cold))
 ixgbe_tx_free_swring(struct ixgbe_tx_queue *txq)
 {
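
For reference, both release functions above walk a power-of-two descriptor
ring from a start index to a stop index using an "(i + 1) & mask" wrap-around,
freeing one mbuf segment per slot. The standalone sketch below illustrates
only that pattern; it is not ixgbe code, and the names (sw_entry,
ring_release, NB_DESC) plus the use of malloc/free in place of mbufs are
illustrative assumptions.

	#include <stdio.h>
	#include <stdlib.h>

	/* Illustrative stand-in for a software-ring entry; the real driver
	 * stores struct rte_mbuf pointers in sw_ring[]. */
	struct sw_entry {
		void *mbuf;	/* hypothetical payload pointer */
	};

	/* Free every entry from 'start' up to (but not including) 'end' on a
	 * ring whose size is a power of two, wrapping with "(i + 1) & mask"
	 * exactly as the release functions in the patch do. */
	static void
	ring_release(struct sw_entry *ring, unsigned nb_desc,
		     unsigned start, unsigned end)
	{
		const unsigned mask = nb_desc - 1;	/* nb_desc must be 2^n */
		unsigned i;

		for (i = start; i != end; i = (i + 1) & mask) {
			free(ring[i].mbuf);	/* stands in for rte_pktmbuf_free_seg() */
			ring[i].mbuf = NULL;
		}
	}

	int
	main(void)
	{
		enum { NB_DESC = 8 };		/* small power-of-two ring */
		struct sw_entry ring[NB_DESC] = {0};
		unsigned i;

		/* populate slots 6, 7, 0, 1 to exercise the wrap-around case */
		for (i = 6; i != 2; i = (i + 1) & (NB_DESC - 1))
			ring[i].mbuf = malloc(16);

		ring_release(ring, NB_DESC, 6, 2);

		for (i = 0; i < NB_DESC; i++)
			printf("slot %u: %s\n", i, ring[i].mbuf ? "in use" : "free");
		return 0;
	}

The guard clauses in the patch ("nb_tx_free == max_desc" on TX,
"rxrearm_nb >= nb_rx_desc" on RX) serve the same purpose as skipping the
walk entirely when start == end here: if nothing in the ring is in use,
there is nothing to free.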