From patchwork Thu Nov 24 09:54:17 2016
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Olivier Matz
X-Patchwork-Id: 17236
X-Patchwork-Delegate: thomas@monjalon.net
From: Olivier Matz <olivier.matz@6wind.com>
To: dev@dpdk.org
Cc: thomas.monjalon@6wind.com, konstantin.ananyev@intel.com,
	wenzhuo.lu@intel.com, helin.zhang@intel.com
Date: Thu, 24 Nov 2016 10:54:17 +0100
Message-Id: <1479981261-19512-6-git-send-email-olivier.matz@6wind.com>
X-Mailer: git-send-email 2.8.1
In-Reply-To: <1479981261-19512-1-git-send-email-olivier.matz@6wind.com>
References: <1479981261-19512-1-git-send-email-olivier.matz@6wind.com>
Subject: [dpdk-dev] [RFC 5/9] net/ixgbe: add handler for Tx queue descriptor count

Like for Rx, use a binary search algorithm to get the number of used Tx
descriptors.

PR=52423

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Ivan Boule
---
 drivers/net/ixgbe/ixgbe_ethdev.c |  1 +
 drivers/net/ixgbe/ixgbe_ethdev.h |  4 ++-
 drivers/net/ixgbe/ixgbe_rxtx.c   | 57 ++++++++++++++++++++++++++++++++++++++++
 drivers/net/ixgbe/ixgbe_rxtx.h   |  2 ++
 4 files changed, 63 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index baffc71..0ba098a 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -553,6 +553,7 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = {
 	.rx_queue_intr_disable = ixgbe_dev_rx_queue_intr_disable,
 	.rx_queue_release = ixgbe_dev_rx_queue_release,
 	.rx_queue_count = ixgbe_dev_rx_queue_count,
+	.tx_queue_count = ixgbe_dev_tx_queue_count,
 	.rx_descriptor_done = ixgbe_dev_rx_descriptor_done,
 	.tx_queue_setup = ixgbe_dev_tx_queue_setup,
 	.tx_queue_release = ixgbe_dev_tx_queue_release,
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h
index 4ff6338..e060c3d 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.h
+++ b/drivers/net/ixgbe/ixgbe_ethdev.h
@@ -348,7 +348,9 @@ int ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		const struct rte_eth_txconf *tx_conf);
 
 uint32_t ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev,
-	uint16_t rx_queue_id);
+		uint16_t rx_queue_id);
+uint32_t ixgbe_dev_tx_queue_count(struct rte_eth_dev *dev,
+		uint16_t tx_queue_id);
 
 int ixgbe_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
 int ixgbevf_dev_rx_descriptor_done(void *rx_queue, uint16_t offset);
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index 07509b4..5bf6b1a 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2437,6 +2437,7 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
 
 	txq->nb_tx_desc = nb_desc;
 	txq->tx_rs_thresh = tx_rs_thresh;
+	txq->tx_rs_thresh_div = nb_desc / tx_rs_thresh;
 	txq->tx_free_thresh = tx_free_thresh;
 	txq->pthresh = tx_conf->tx_thresh.pthresh;
 	txq->hthresh = tx_conf->tx_thresh.hthresh;
@@ -2906,6 +2907,62 @@ ixgbe_dev_rx_queue_count(struct rte_eth_dev *dev, uint16_t rx_queue_id)
 	return offset;
 }
 
+uint32_t
+ixgbe_dev_tx_queue_count(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct ixgbe_tx_queue *txq;
+	uint32_t status;
+	int32_t offset, interval, idx = 0;
+	int32_t max_offset, used_desc;
+
+	txq = dev->data->tx_queues[tx_queue_id];
+
+	/* if DD on next threshold desc is not set, assume used packets
+	 * are pending.
+	 */
+	status = txq->tx_ring[txq->tx_next_dd].wb.status;
+	if (!(status & rte_cpu_to_le_32(IXGBE_ADVTXD_STAT_DD)))
+		return txq->nb_tx_desc - txq->nb_tx_free - 1;
+
+	/* browse DD bits between tail starting from tx_next_dd: we have
+	 * to be careful since DD bits are only set every tx_rs_thresh
+	 * descriptor.
+	 */
+	interval = txq->tx_rs_thresh_div >> 1;
+	offset = interval * txq->tx_rs_thresh;
+
+	/* don't go beyond tail */
+	max_offset = txq->tx_tail - txq->tx_next_dd;
+	if (max_offset < 0)
+		max_offset += txq->nb_tx_desc;
+
+	do {
+		interval >>= 1;
+
+		if (offset >= max_offset) {
+			offset -= (interval * txq->tx_rs_thresh);
+			continue;
+		}
+
+		idx = txq->tx_next_dd + offset;
+		if (idx >= txq->nb_tx_desc)
+			idx -= txq->nb_tx_desc;
+
+		status = txq->tx_ring[idx].wb.status;
+		if (status & rte_cpu_to_le_32(IXGBE_ADVTXD_STAT_DD))
+			offset += (interval * txq->tx_rs_thresh);
+		else
+			offset -= (interval * txq->tx_rs_thresh);
+	} while (interval > 0);
+
+	/* idx is now the index of the head */
+	used_desc = txq->tx_tail - idx;
+	if (used_desc < 0)
+		used_desc += txq->nb_tx_desc;
+
+	return used_desc;
+}
+
 int
 ixgbe_dev_rx_descriptor_done(void *rx_queue, uint16_t offset)
 {
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 2608b36..f69b5de 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -221,6 +221,8 @@ struct ixgbe_tx_queue {
 	uint16_t tx_free_thresh;
 	/** Number of TX descriptors to use before RS bit is set. */
 	uint16_t tx_rs_thresh;
+	/** Number of TX descriptors divided by tx_rs_thresh. */
+	uint16_t tx_rs_thresh_div;
 	/** Number of TX descriptors used since RS bit was set. */
 	uint16_t nb_tx_used;
 	/** Index to last TX descriptor to have been cleaned. */
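
[Editor's note] To show the search strategy in isolation, here is a minimal standalone
sketch of the same idea; it is not the driver code above. NB_DESC, RS_THRESH,
struct desc, tx_queue_count() and the ring state in main() are simplified
stand-ins invented only for illustration.

/*
 * Standalone model of counting used descriptors by binary searching
 * DD bits that are only written back every RS_THRESH descriptors.
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NB_DESC   128  /* ring size, a multiple of RS_THRESH */
#define RS_THRESH 32   /* DD is only written back on every RS_THRESH-th desc */

struct desc {
	uint32_t dd;   /* stand-in for the DD status bit of the write-back */
};

/* Count descriptors still in use between next_dd and tail. Only multiples
 * of RS_THRESH are probed, since the NIC sets DD only on RS-marked
 * descriptors; the result is therefore accurate to within RS_THRESH. */
static uint32_t
tx_queue_count(const struct desc *ring, uint16_t next_dd, uint16_t tail)
{
	int32_t interval, offset, idx = next_dd;
	int32_t max_offset;

	/* if even the next RS-marked descriptor is not done, treat the
	 * whole span up to tail as used */
	if (!ring[next_dd].dd)
		return (tail - next_dd + NB_DESC) % NB_DESC;

	interval = (NB_DESC / RS_THRESH) >> 1;
	offset = interval * RS_THRESH;

	/* never probe past the tail */
	max_offset = (tail - next_dd + NB_DESC) % NB_DESC;

	do {
		interval >>= 1;

		if (offset >= max_offset) {
			offset -= interval * RS_THRESH;
			continue;
		}

		idx = (next_dd + offset) % NB_DESC;
		if (ring[idx].dd)
			offset += interval * RS_THRESH;   /* head is further on */
		else
			offset -= interval * RS_THRESH;   /* overshot the head */
	} while (interval > 0);

	/* idx now approximates the head; everything up to tail is used */
	return (tail - idx + NB_DESC) % NB_DESC;
}

int main(void)
{
	struct desc ring[NB_DESC];
	uint16_t next_dd = 31, tail = 100;

	memset(ring, 0, sizeof(ring));
	/* pretend the NIC has completed descriptors up to index 63: DD is
	 * written back on the RS-marked descriptors 31 and 63 only */
	ring[31].dd = 1;
	ring[63].dd = 1;

	/* ~36 descriptors are in flight; the estimate is within RS_THRESH */
	printf("used descriptors: %u\n",
	       (unsigned)tx_queue_count(ring, next_dd, tail));
	return 0;
}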