From patchwork Wed Mar 1 17:19:09 2017
X-Patchwork-Submitter: Olivier Matz
X-Patchwork-Id: 21020
From: Olivier Matz
To: dev@dpdk.org, thomas.monjalon@6wind.com, konstantin.ananyev@intel.com,
	wenzhuo.lu@intel.com, helin.zhang@intel.com, jingjing.wu@intel.com,
	adrien.mazarguil@6wind.com, nelio.laranjeiro@6wind.com
Cc: ferruh.yigit@intel.com, bruce.richardson@intel.com
Date: Wed, 1 Mar 2017 18:19:09 +0100
Message-Id: <1488388752-1819-4-git-send-email-olivier.matz@6wind.com>
X-Mailer: git-send-email 2.8.1
In-Reply-To: <1488388752-1819-1-git-send-email-olivier.matz@6wind.com>
References: <1479981261-19512-1-git-send-email-olivier.matz@6wind.com>
	<1488388752-1819-1-git-send-email-olivier.matz@6wind.com>
Subject: [dpdk-dev] [PATCH 3/6] net/e1000: implement descriptor status API (igb)
List-Id: DPDK patches and discussions

Signed-off-by: Olivier Matz
---
 drivers/net/e1000/e1000_ethdev.h |  5 +++++
 drivers/net/e1000/igb_ethdev.c   |  2 ++
 drivers/net/e1000/igb_rxtx.c     | 46 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 53 insertions(+)

diff --git a/drivers/net/e1000/e1000_ethdev.h b/drivers/net/e1000/e1000_ethdev.h
index 81a6dbb..2b63b1a 100644
--- a/drivers/net/e1000/e1000_ethdev.h
+++ b/drivers/net/e1000/e1000_ethdev.h
@@ -311,6 +311,11 @@ uint32_t eth_igb_rx_queue_count(struct rte_eth_dev *dev,
 
 int eth_igb_rx_descriptor_done(void *rx_queue, uint16_t offset);
 
+int eth_igb_rx_descriptor_status(struct rte_eth_dev *dev,
+	uint16_t rx_queue_id, uint16_t offset);
+int eth_igb_tx_descriptor_status(struct rte_eth_dev *dev,
+	uint16_t tx_queue_id, uint16_t offset);
+
 int eth_igb_tx_queue_setup(struct rte_eth_dev *dev, uint16_t tx_queue_id,
 		uint16_t nb_tx_desc, unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
diff --git a/drivers/net/e1000/igb_ethdev.c b/drivers/net/e1000/igb_ethdev.c
index a112b38..f6ed824 100644
--- a/drivers/net/e1000/igb_ethdev.c
+++ b/drivers/net/e1000/igb_ethdev.c
@@ -406,6 +406,8 @@ static const struct eth_dev_ops eth_igb_ops = {
 	.rx_queue_release = eth_igb_rx_queue_release,
 	.rx_queue_count = eth_igb_rx_queue_count,
 	.rx_descriptor_done = eth_igb_rx_descriptor_done,
+	.rx_descriptor_status = eth_igb_rx_descriptor_status,
+	.tx_descriptor_status = eth_igb_tx_descriptor_status,
 	.tx_queue_setup = eth_igb_tx_queue_setup,
 	.tx_queue_release = eth_igb_tx_queue_release,
 	.dev_led_on = eth_igb_led_on,
diff --git a/drivers/net/e1000/igb_rxtx.c b/drivers/net/e1000/igb_rxtx.c
index c9cf392..838413c 100644
--- a/drivers/net/e1000/igb_rxtx.c
+++ b/drivers/net/e1000/igb_rxtx.c
@@ -1606,6 +1606,52 @@ eth_igb_rx_descriptor_done(void *rx_queue, uint16_t offset)
 	return !!(rxdp->wb.upper.status_error & E1000_RXD_STAT_DD);
 }
 
+int
+eth_igb_rx_descriptor_status(struct rte_eth_dev *dev, uint16_t rx_queue_id,
+	uint16_t offset)
+{
+	volatile uint32_t *status;
+	struct igb_rx_queue *rxq;
+	uint32_t desc;
+
+	rxq = dev->data->rx_queues[rx_queue_id];
+	if (unlikely(offset >= rxq->nb_rx_desc))
+		return -EINVAL;
+
+	if (offset >= rxq->nb_rx_desc - rxq->nb_rx_hold)
+		return RTE_ETH_RX_DESC_USED;
+
+	desc = rxq->rx_tail + offset;
+	if (desc >= rxq->nb_rx_desc)
+		desc -= rxq->nb_rx_desc;
+
+	status = &rxq->rx_ring[desc].wb.upper.status_error;
+	if (*status & rte_cpu_to_le_32(E1000_RXD_STAT_DD))
+		return RTE_ETH_RX_DESC_DONE;
+
+	return RTE_ETH_RX_DESC_AVAIL;
+}
+
+int
+eth_igb_tx_descriptor_status(struct rte_eth_dev *dev, uint16_t tx_queue_id,
+	uint16_t offset)
+{
+	volatile uint32_t *status;
+	struct igb_tx_queue *txq;
+	uint32_t desc;
+
+	txq = dev->data->tx_queues[tx_queue_id];
+	if (unlikely(offset >= txq->nb_tx_desc))
+		return -EINVAL;
+
+	desc = txq->tx_tail + offset;
+	status = &txq->tx_ring[desc].wb.status;
+	if (*status & rte_cpu_to_le_32(E1000_TXD_STAT_DD))
+		return RTE_ETH_TX_DESC_DONE;
+
+	return RTE_ETH_TX_DESC_FULL;
+}
+
 void
 igb_dev_clear_queues(struct rte_eth_dev *dev)
 {
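
For context, a minimal sketch of how an application could exercise these new
driver callbacks through the generic rte_eth_rx_descriptor_status() /
rte_eth_tx_descriptor_status() helpers added earlier in this series. The
poll_descriptor_status() wrapper, the port/queue ids and the chosen offset are
illustrative assumptions only, not part of this patch:

#include <rte_ethdev.h>

/* Query the state of the descriptor located 'offset' entries past the
 * software tail of the given RX and TX queues of 'port_id'. */
static void
poll_descriptor_status(uint8_t port_id, uint16_t queue_id, uint16_t offset)
{
	int status;

	/* RX side: has the NIC written a packet at this offset yet? */
	status = rte_eth_rx_descriptor_status(port_id, queue_id, offset);
	if (status == RTE_ETH_RX_DESC_DONE) {
		/* descriptors up to this offset are filled: worth calling
		 * rte_eth_rx_burst() now */
	} else if (status == RTE_ETH_RX_DESC_AVAIL) {
		/* descriptor still owned by the NIC, no packet there yet */
	}
	/* RTE_ETH_RX_DESC_USED or a negative errno are also possible,
	 * as the igb implementation above shows */

	/* TX side: has the descriptor at this offset been transmitted? */
	status = rte_eth_tx_descriptor_status(port_id, queue_id, offset);
	if (status == RTE_ETH_TX_DESC_DONE) {
		/* transmission completed, the slot can be reused */
	}
}

Compared to the older rx_descriptor_done callback, the status variant also
reports descriptors that the driver has processed but not yet handed back to
hardware (the RTE_ETH_RX_DESC_USED path driven by nb_rx_hold above).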