From patchwork Wed Aug 7 14:37:24 2019
X-Patchwork-Submitter: Thierry Herbelot
X-Patchwork-Id: 57528
From: Thierry Herbelot
To: dev@dpdk.org
Cc: Olivier Matz, stable@dpdk.org, Thomas Monjalon
Date: Wed, 7 Aug 2019 16:37:24 +0200
Message-Id: <09ef3a984386ade27828e549c542e290825c3c13.1565188248.git.thierry.herbelot@6wind.com>
Subject: [dpdk-dev] [PATCH 19.11 05/12] net/ixgbe: fix Tx descriptor status API

From: Olivier Matz

The Tx descriptor status API was not behaving as expected. This API is
used to inspect the content of the descriptors in the Tx ring to
determine the length of the Tx queue. Since the software advances the
tail pointer and the hardware advances the head pointer, the Tx queue
is located before txq->tx_tail in the ring. Therefore, a call to
rte_eth_tx_descriptor_status(..., offset=20) should inspect the 20th
descriptor before the tail, not after it.
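For illustration only (this usage sketch is not part of the patch; the
port_id/queue_id parameters and the linear backward probe are assumptions
for the example), the fixed semantics let an application estimate the Tx
queue occupancy by walking offsets away from the tail:

#include <rte_ethdev.h>

/* Count descriptors still in flight on a Tx queue. With the corrected
 * semantics, offset 0 is the descriptor just before the tail (the most
 * recently queued one) and larger offsets walk backward to older ones. */
static uint16_t
tx_queue_in_flight(uint16_t port_id, uint16_t queue_id, uint16_t nb_desc)
{
	uint16_t off;

	for (off = 0; off < nb_desc; off++) {
		/* Anything not reported as FULL has already been processed
		 * (or the query failed), so everything older is done too. */
		if (rte_eth_tx_descriptor_status(port_id, queue_id, off) !=
		    RTE_ETH_TX_DESC_FULL)
			break;
	}
	return off;
}

A real application would typically probe a single, coarser offset (for
example half the ring size) instead of scanning linearly; the loop only
keeps the example self-contained.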
As before, we still need to take care to check only descriptors that
have the RS bit set. Additionally, we can avoid an access to the ring
when offset is greater than or equal to nb_tx_desc - nb_tx_free.

Fixes: 5da8b8814178 ("net/ixgbe: implement descriptor status API")
Cc: stable@dpdk.org

Signed-off-by: Olivier Matz
---
 drivers/net/ixgbe/ixgbe_rxtx.c | 45 +++++++++++++++++++++++++++++++-----------
 drivers/net/ixgbe/ixgbe_rxtx.h |  1 +
 2 files changed, 34 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c
index edcfa60cec98..68e3aea5ed46 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.c
+++ b/drivers/net/ixgbe/ixgbe_rxtx.c
@@ -2627,10 +2627,15 @@ ixgbe_dev_tx_queue_setup(struct rte_eth_dev *dev,
 	    hw->mac.type == ixgbe_mac_X540_vf ||
 	    hw->mac.type == ixgbe_mac_X550_vf ||
 	    hw->mac.type == ixgbe_mac_X550EM_x_vf ||
-	    hw->mac.type == ixgbe_mac_X550EM_a_vf)
+	    hw->mac.type == ixgbe_mac_X550EM_a_vf) {
 		txq->tdt_reg_addr = IXGBE_PCI_REG_ADDR(hw, IXGBE_VFTDT(queue_idx));
-	else
+		txq->tdh_reg_addr = IXGBE_PCI_REG_ADDR(hw,
+				IXGBE_VFTDH(queue_idx));
+	} else {
 		txq->tdt_reg_addr = IXGBE_PCI_REG_ADDR(hw, IXGBE_TDT(txq->reg_idx));
+		txq->tdh_reg_addr = IXGBE_PCI_REG_ADDR(hw,
+				IXGBE_TDH(txq->reg_idx));
+	}
 
 	txq->tx_ring_phys_addr = tz->iova;
 	txq->tx_ring = (union ixgbe_adv_tx_desc *) tz->addr;
@@ -3163,22 +3168,38 @@ ixgbe_dev_tx_descriptor_status(void *tx_queue, uint16_t offset)
 {
 	struct ixgbe_tx_queue *txq = tx_queue;
 	volatile uint32_t *status;
-	uint32_t desc;
+	int32_t desc, dd;
 
 	if (unlikely(offset >= txq->nb_tx_desc))
 		return -EINVAL;
+	if (offset >= txq->nb_tx_desc - txq->nb_tx_free)
+		return RTE_ETH_TX_DESC_DONE;
+
+	desc = txq->tx_tail - offset - 1;
+	if (desc < 0)
+		desc += txq->nb_tx_desc;
 
-	desc = txq->tx_tail + offset;
-	/* go to next desc that has the RS bit */
-	desc = ((desc + txq->tx_rs_thresh - 1) / txq->tx_rs_thresh) *
-		txq->tx_rs_thresh;
-	if (desc >= txq->nb_tx_desc) {
-		desc -= txq->nb_tx_desc;
-		if (desc >= txq->nb_tx_desc)
-			desc -= txq->nb_tx_desc;
+	/* offset is too small, no other way than reading PCI reg */
+	if (unlikely(offset < txq->tx_rs_thresh)) {
+		int16_t tx_head, queue_size;
+		tx_head = ixgbe_read_addr(txq->tdh_reg_addr);
+		queue_size = txq->tx_tail - tx_head;
+		if (queue_size < 0)
+			queue_size += txq->nb_tx_desc;
+		return queue_size > offset ? RTE_ETH_TX_DESC_FULL :
+			RTE_ETH_TX_DESC_DONE;
 	}
 
-	status = &txq->tx_ring[desc].wb.status;
+	/* index of the dd bit to look at */
+	dd = (desc / txq->tx_rs_thresh + 1) * txq->tx_rs_thresh - 1;
+
+	/* In full featured mode, RS bit is only set in the last descriptor */
+	/* of a multisegments packet */
+	if (!((txq->offloads == 0) &&
+	      (txq->tx_rs_thresh >= RTE_PMD_IXGBE_TX_MAX_BURST)))
+		dd = txq->sw_ring[dd].last_id;
+
+	status = &txq->tx_ring[dd].wb.status;
 	if (*status & rte_cpu_to_le_32(IXGBE_ADVTXD_STAT_DD))
 		return RTE_ETH_TX_DESC_DONE;
 
diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h
index 505d344b9cee..05fd4167576c 100644
--- a/drivers/net/ixgbe/ixgbe_rxtx.h
+++ b/drivers/net/ixgbe/ixgbe_rxtx.h
@@ -201,6 +201,7 @@ struct ixgbe_tx_queue {
 		struct ixgbe_tx_entry_v *sw_ring_v; /**< address of SW ring for vector PMD */
 	};
 	volatile uint32_t *tdt_reg_addr; /**< Address of TDT register. */
+	volatile uint32_t *tdh_reg_addr; /**< Address of TDH register. */
 	uint16_t            nb_tx_desc;    /**< number of TX descriptors. */
 	uint16_t            tx_tail;       /**< current value of TDT reg. */
 	/**< Start freeing TX buffers if there are less free descriptors than
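For reference, the index arithmetic used by the patched
ixgbe_dev_tx_descriptor_status() above can be modelled in isolation. This
is a simplified sketch only: the txq_model struct, the dd[] array standing
in for the hardware DD bits, and the assumption that nb_tx_desc is a
multiple of tx_rs_thresh (which the PMD enforces at queue setup) are
illustrative, and the head-register fallback and sw_ring[].last_id
handling from the patch are omitted.

#include <stdbool.h>
#include <stdint.h>

#define MODEL_MAX_DESC 4096

struct txq_model {
	uint16_t nb_tx_desc;     /* ring size */
	uint16_t tx_tail;        /* next slot software will use */
	uint16_t nb_tx_free;     /* descriptors not currently in flight */
	uint16_t tx_rs_thresh;   /* RS bit requested every tx_rs_thresh descs */
	bool dd[MODEL_MAX_DESC]; /* stand-in for per-descriptor DD bits */
};

/* Return 1 if the descriptor 'offset' entries behind the tail is done,
 * 0 if it is still in flight, -1 if offset is out of range. */
static int
txq_model_descriptor_done(const struct txq_model *q, uint16_t offset)
{
	int32_t desc, dd;

	if (offset >= q->nb_tx_desc)
		return -1;
	/* Beyond the in-flight region: nothing left to check in the ring. */
	if (offset >= q->nb_tx_desc - q->nb_tx_free)
		return 1;

	/* Slot located offset + 1 entries behind the tail, with wrap-around. */
	desc = (int32_t)q->tx_tail - offset - 1;
	if (desc < 0)
		desc += q->nb_tx_desc;

	/* The hardware only writes DD at the end of each RS group, so look
	 * at the last descriptor of the group containing 'desc'. */
	dd = (desc / q->tx_rs_thresh + 1) * q->tx_rs_thresh - 1;

	return q->dd[dd] ? 1 : 0;
}

Under those assumptions the function mirrors the common path of the
patched driver code, where offset is at least tx_rs_thresh and the DD bit
can be read directly from the ring rather than from the TDH register.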