From patchwork Thu Mar 23 10:43:28 2023 X-Patchwork-Submitter: Feifei Wang X-Patchwork-Id: 125449 X-Patchwork-Delegate: ferruh.yigit@amd.com From: Feifei Wang To: Thomas Monjalon, Ferruh Yigit, Andrew Rybchenko Cc: dev@dpdk.org, mb@smartsharesystems.com, konstantin.v.ananyev@yandex.ru, nd@arm.com, Feifei Wang, Honnappa Nagarahalli, Ruifeng Wang Subject: [PATCH v4 1/3] ethdev: add API for buffer recycle mode Date: Thu, 23 Mar 2023 18:43:28 +0800 Message-Id: <20230323104330.3823251-2-feifei.wang2@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230323104330.3823251-1-feifei.wang2@arm.com> References: <20211224164613.32569-1-feifei.wang2@arm.com> <20230323104330.3823251-1-feifei.wang2@arm.com> There are 4 upper-level APIs for buffer recycle mode: 1. 'rte_eth_rx_queue_buf_recycle_info_get' This retrieves buffer ring information about a given port's Rx queue in buffer recycle mode. Thanks to this, buffer recycle is no longer limited to Rx and Tx queues of the same driver. 2. 'rte_eth_dev_buf_recycle' Users can call this API to enable buffer recycle mode in the data path. It wraps two internal APIs, one for Rx and one for Tx. 3. 'rte_eth_tx_buf_stash' Internal API for buffer recycle mode. It stashes used Tx buffers into the Rx buffer ring. 4. 'rte_eth_rx_descriptors_refill' Internal API for buffer recycle mode. It refills Rx descriptors. The above APIs are only implemented at the ethdev level; each driver needs to define its own specific functions for them.
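Below is a minimal usage sketch (not part of the patch) showing how an application could combine the two public APIs in a run-to-completion forwarding loop. The port/queue identifiers, the burst size, and the placement of rte_eth_dev_buf_recycle() ahead of the Rx burst are illustrative assumptions; the only steps taken from the patch are fetching the recycle info once with rte_eth_rx_queue_buf_recycle_info_get() before entering the data path, and calling rte_eth_dev_buf_recycle() each iteration with that info (which internally stashes Tx buffers first and then refills Rx descriptors).

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define BURST_SIZE 32	/* hypothetical burst size */

/* Hypothetical forwarding core: Rx on (rx_port, rx_queue), Tx on
 * (tx_port, tx_queue); both queues are assumed to use the same mempool,
 * as required by the non-fast-free stash path.
 */
static void
recycle_fwd_loop(uint16_t rx_port, uint16_t rx_queue,
		 uint16_t tx_port, uint16_t tx_queue)
{
	struct rte_eth_rxq_buf_recycle_info recycle_info;
	struct rte_mbuf *pkts[BURST_SIZE];
	uint16_t nb_rx, nb_tx, i;

	/* Query the Rx buffer ring information once, outside the data path. */
	if (rte_eth_rx_queue_buf_recycle_info_get(rx_port, rx_queue,
						  &recycle_info) != 0)
		return;

	for (;;) {
		/* Move used Tx buffers straight into the Rx buffer ring and
		 * refill the Rx descriptors, bypassing the mempool put/get pair.
		 */
		rte_eth_dev_buf_recycle(rx_port, rx_queue,
					tx_port, tx_queue, &recycle_info);

		nb_rx = rte_eth_rx_burst(rx_port, rx_queue, pkts, BURST_SIZE);
		if (nb_rx == 0)
			continue;

		nb_tx = rte_eth_tx_burst(tx_port, tx_queue, pkts, nb_rx);
		/* Drop whatever could not be transmitted. */
		for (i = nb_tx; i < nb_rx; i++)
			rte_pktmbuf_free(pkts[i]);
	}
}

Whether the recycle call is better placed before the Rx burst or between Rx and Tx is left to the application; the patch only requires that stashing happens before refilling, and rte_eth_dev_buf_recycle() already enforces that ordering internally.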
Suggested-by: Honnappa Nagarahalli Suggested-by: Ruifeng Wang Signed-off-by: Feifei Wang Reviewed-by: Ruifeng Wang Reviewed-by: Honnappa Nagarahalli --- lib/ethdev/ethdev_driver.h | 10 ++ lib/ethdev/ethdev_private.c | 2 + lib/ethdev/rte_ethdev.c | 37 ++++++ lib/ethdev/rte_ethdev.h | 230 +++++++++++++++++++++++++++++++++++ lib/ethdev/rte_ethdev_core.h | 11 ++ lib/ethdev/version.map | 6 + 6 files changed, 296 insertions(+) diff --git a/lib/ethdev/ethdev_driver.h b/lib/ethdev/ethdev_driver.h index 2c9d615fb5..412f064975 100644 --- a/lib/ethdev/ethdev_driver.h +++ b/lib/ethdev/ethdev_driver.h @@ -59,6 +59,10 @@ struct rte_eth_dev { eth_rx_descriptor_status_t rx_descriptor_status; /** Check the status of a Tx descriptor */ eth_tx_descriptor_status_t tx_descriptor_status; + /** Stash Tx used buffers into RX ring in buffer recycle mode */ + eth_tx_buf_stash_t tx_buf_stash; + /** Refill Rx descriptors in buffer recycle mode */ + eth_rx_descriptors_refill_t rx_descriptors_refill; /** * Device data that is shared between primary and secondary processes @@ -504,6 +508,10 @@ typedef void (*eth_rxq_info_get_t)(struct rte_eth_dev *dev, typedef void (*eth_txq_info_get_t)(struct rte_eth_dev *dev, uint16_t tx_queue_id, struct rte_eth_txq_info *qinfo); +typedef void (*eth_rxq_buf_recycle_info_get_t)(struct rte_eth_dev *dev, + uint16_t rx_queue_id, + struct rte_eth_rxq_buf_recycle_info *rxq_buf_recycle_info); + typedef int (*eth_burst_mode_get_t)(struct rte_eth_dev *dev, uint16_t queue_id, struct rte_eth_burst_mode *mode); @@ -1247,6 +1255,8 @@ struct eth_dev_ops { eth_rxq_info_get_t rxq_info_get; /** Retrieve Tx queue information */ eth_txq_info_get_t txq_info_get; + /** Get Rx queue buffer recycle information */ + eth_rxq_buf_recycle_info_get_t rxq_buf_recycle_info_get; eth_burst_mode_get_t rx_burst_mode_get; /**< Get Rx burst mode */ eth_burst_mode_get_t tx_burst_mode_get; /**< Get Tx burst mode */ eth_fw_version_get_t fw_version_get; /**< Get firmware version */ diff --git a/lib/ethdev/ethdev_private.c b/lib/ethdev/ethdev_private.c index 14ec8c6ccf..f8d0ae9226 100644 --- a/lib/ethdev/ethdev_private.c +++ b/lib/ethdev/ethdev_private.c @@ -277,6 +277,8 @@ eth_dev_fp_ops_setup(struct rte_eth_fp_ops *fpo, fpo->rx_queue_count = dev->rx_queue_count; fpo->rx_descriptor_status = dev->rx_descriptor_status; fpo->tx_descriptor_status = dev->tx_descriptor_status; + fpo->tx_buf_stash = dev->tx_buf_stash; + fpo->rx_descriptors_refill = dev->rx_descriptors_refill; fpo->rxq.data = dev->data->rx_queues; fpo->rxq.clbk = (void **)(uintptr_t)dev->post_rx_burst_cbs; diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 4d03255683..2e049f2176 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -5784,6 +5784,43 @@ rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id, return 0; } +int +rte_eth_rx_queue_buf_recycle_info_get(uint16_t port_id, uint16_t queue_id, + struct rte_eth_rxq_buf_recycle_info *rxq_buf_recycle_info) +{ + struct rte_eth_dev *dev; + + RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV); + dev = &rte_eth_devices[port_id]; + + if (queue_id >= dev->data->nb_rx_queues) { + RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u\n", queue_id); + return -EINVAL; + } + + if (rxq_buf_recycle_info == NULL) { + RTE_ETHDEV_LOG(ERR, "Cannot get ethdev port %u Rx queue %u buf_recycle_info to NULL\n", + port_id, queue_id); + return -EINVAL; + } + + if (dev->data->rx_queues == NULL || + dev->data->rx_queues[queue_id] == NULL) { + RTE_ETHDEV_LOG(ERR, + "Rx queue %"PRIu16" of device with port_id=%" 
+ PRIu16" has not been setup\n", + queue_id, port_id); + return -EINVAL; + } + + if (*dev->dev_ops->rxq_buf_recycle_info_get == NULL) + return -ENOTSUP; + + dev->dev_ops->rxq_buf_recycle_info_get(dev, queue_id, rxq_buf_recycle_info); + + return 0; +} + int rte_eth_rx_burst_mode_get(uint16_t port_id, uint16_t queue_id, struct rte_eth_burst_mode *mode) diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index 99fe9e238b..977075782e 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -1820,6 +1820,29 @@ struct rte_eth_txq_info { uint8_t queue_state; /**< one of RTE_ETH_QUEUE_STATE_*. */ } __rte_cache_min_aligned; +/** + * @warning + * @b EXPERIMENTAL: this structure may change without prior notice. + * + * Ethernet device Rx queue buffer ring information structure in buffer recycle mode. + * Used to retrieve Rx queue buffer ring information when the Tx queue stashes used buffers + * into the Rx buffer ring. + */ +struct rte_eth_rxq_buf_recycle_info { + struct rte_mbuf **buf_ring; /**< buffer ring of Rx queue. */ + struct rte_mempool *mp; /**< mempool of Rx queue. */ + uint16_t *refill_head; /**< head of buffer ring refilling descriptors. */ + uint16_t *receive_tail; /**< tail of buffer ring receiving pkts. */ + uint16_t buf_ring_size; /**< configured size of the buffer ring. */ + /** + * Requested number of Rx refill buffers. + * For some PMD drivers, the number of Rx refill buffers should be aligned with + * the buffer ring size. This is to simplify ring wraparound. + * Value 0 means there is no such request. + */ + uint16_t refill_request; +} __rte_cache_min_aligned; + /* Generic Burst mode flag definition, values can be ORed. */ /** @@ -4809,6 +4832,32 @@ int rte_eth_rx_queue_info_get(uint16_t port_id, uint16_t queue_id, int rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id, struct rte_eth_txq_info *qinfo); +/** + * @warning + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice + * + * Retrieve buffer ring information about a given port's Rx queue in buffer recycle + * mode. + * + * @param port_id + * The port identifier of the Ethernet device. + * @param queue_id + * The Rx queue on the Ethernet device for which buffer ring information + * will be retrieved. + * @param rxq_buf_recycle_info + * A pointer to a structure of type *rte_eth_rxq_buf_recycle_info* to be filled. + * + * @return + * - 0: Success + * - -ENODEV: If *port_id* is invalid. + * - -ENOTSUP: routine is not supported by the device PMD. + * - -EINVAL: The queue_id is out of range. + */ +__rte_experimental +int rte_eth_rx_queue_buf_recycle_info_get(uint16_t port_id, + uint16_t queue_id, + struct rte_eth_rxq_buf_recycle_info *rxq_buf_recycle_info); + /** * Retrieve information about the Rx packet burst mode. * @@ -5987,6 +6036,71 @@ rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id) return (int)(*p->rx_queue_count)(qd); } +/** + * @internal + * Rx routine for rte_eth_dev_buf_recycle(). + * Refill Rx descriptors in buffer recycle mode. + * + * @note + * This API can only be called by rte_eth_dev_buf_recycle(). + * Before calling this API, rte_eth_tx_buf_stash() should be + * called to stash Tx used buffers into Rx buffer ring. + * + * When this functionality is not implemented in the driver, the return + * buffer number is 0. + * + * @param port_id + * The port identifier of the Ethernet device. + * @param queue_id + * The index of the receive queue. + * The value must be in the range [0, nb_rx_queue - 1] previously supplied + * to rte_eth_dev_configure().
+ * @param nb + * The number of Rx descriptors to be refilled. + * @return + * The number of Rx descriptors actually refilled. + * - ENODEV: bad port or queue (only if compiled with debug). + */ +static inline uint16_t rte_eth_rx_descriptors_refill(uint16_t port_id, + uint16_t queue_id, uint16_t nb) +{ + struct rte_eth_fp_ops *p; + void *qd; + +#ifdef RTE_ETHDEV_DEBUG_RX + if (port_id >= RTE_MAX_ETHPORTS || + queue_id >= RTE_MAX_QUEUES_PER_PORT) { + RTE_ETHDEV_LOG(ERR, + "Invalid port_id=%u or queue_id=%u\n", + port_id, queue_id); + rte_errno = ENODEV; + return 0; + } +#endif + + p = &rte_eth_fp_ops[port_id]; + qd = p->rxq.data[queue_id]; + +#ifdef RTE_ETHDEV_DEBUG_RX + if (!rte_eth_dev_is_valid_port(port_id)) { + RTE_ETHDEV_LOG(ERR, "Invalid Rx port_id=%u\n", port_id); + rte_errno = ENODEV; + return 0; + } + + if (qd == NULL) { + RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u for port_id=%u\n", + queue_id, port_id); + rte_errno = ENODEV; + return 0; + } +#endif + + if (!p->rx_descriptors_refill) + return 0; + + return p->rx_descriptors_refill(qd, nb); +} + /**@{@name Rx hardware descriptor states * @see rte_eth_rx_descriptor_status */ @@ -6483,6 +6597,122 @@ rte_eth_tx_buffer(uint16_t port_id, uint16_t queue_id, return rte_eth_tx_buffer_flush(port_id, queue_id, buffer); } +/** + * @internal + * Tx routine for rte_eth_dev_buf_recycle(). + * Stash Tx used buffers into Rx buffer ring in buffer recycle mode. + * + * @note + * This API can only be called by rte_eth_dev_buf_recycle(). + * After calling this API, rte_eth_rx_descriptors_refill() should be + * called to refill Rx ring descriptors. + * + * When this functionality is not implemented in the driver, the return + * buffer number is 0. + * + * @param port_id + * The port identifier of the Ethernet device. + * @param queue_id + * The index of the transmit queue. + * The value must be in the range [0, nb_tx_queue - 1] previously supplied + * to rte_eth_dev_configure(). + * @param rxq_buf_recycle_info + * A pointer to a structure of Rx queue buffer ring information in buffer + * recycle mode. + * + * @return + * The number of buffers actually stashed into the Rx buffer ring. + * - ENODEV: bad port or queue (only if compiled with debug). + */ +static inline uint16_t rte_eth_tx_buf_stash(uint16_t port_id, uint16_t queue_id, + struct rte_eth_rxq_buf_recycle_info *rxq_buf_recycle_info) +{ + struct rte_eth_fp_ops *p; + void *qd; + +#ifdef RTE_ETHDEV_DEBUG_TX + if (port_id >= RTE_MAX_ETHPORTS || + queue_id >= RTE_MAX_QUEUES_PER_PORT) { + RTE_ETHDEV_LOG(ERR, + "Invalid port_id=%u or queue_id=%u\n", + port_id, queue_id); + rte_errno = ENODEV; + return 0; + } +#endif + + p = &rte_eth_fp_ops[port_id]; + qd = p->txq.data[queue_id]; + +#ifdef RTE_ETHDEV_DEBUG_TX + if (!rte_eth_dev_is_valid_port(port_id)) { + RTE_ETHDEV_LOG(ERR, "Invalid Tx port_id=%u\n", port_id); + rte_errno = ENODEV; + return 0; + } + + if (qd == NULL) { + RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n", + queue_id, port_id); + rte_errno = ENODEV; + return 0; + } +#endif + + if (p->tx_buf_stash == NULL) + return 0; + + return p->tx_buf_stash(qd, rxq_buf_recycle_info); +} + +/** + * @warning + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice + * + * Buffer recycle mode lets the Tx queue directly put used buffers into the Rx buffer + * ring. This avoids freeing buffers to the mempool and allocating buffers from the + * mempool. + * + * @param rx_port_id + * Port identifying the receive side.
+ * @param rx_queue_id + * The index of the receive queue identifying the receive side. + * The value must be in the range [0, nb_rx_queue - 1] previously supplied + * to rte_eth_dev_configure(). + * @param tx_port_id + * Port identifying the transmit side. + * @param tx_queue_id + * The index of the transmit queue identifying the transmit side. + * The value must be in the range [0, nb_tx_queue - 1] previously supplied + * to rte_eth_dev_configure(). + * @param rxq_buf_recycle_info + * A pointer to a structure of type *rte_eth_rxq_buf_recycle_info* previously filled + * by rte_eth_rx_queue_buf_recycle_info_get(). + * @return + * - (0) on success or when there is no buffer to recycle. + * - (-EINVAL) rxq_buf_recycle_info is NULL. + */ +__rte_experimental +static inline int +rte_eth_dev_buf_recycle(uint16_t rx_port_id, uint16_t rx_queue_id, + uint16_t tx_port_id, uint16_t tx_queue_id, + struct rte_eth_rxq_buf_recycle_info *rxq_buf_recycle_info) +{ + /* The number of recycling buffers. */ + uint16_t nb_buf; + + if (!rxq_buf_recycle_info) + return -EINVAL; + + /* Stash Tx used buffers into Rx buffer ring */ + nb_buf = rte_eth_tx_buf_stash(tx_port_id, tx_queue_id, + rxq_buf_recycle_info); + /* If there are recycling buffers, refill Rx queue descriptors. */ + if (nb_buf) + rte_eth_rx_descriptors_refill(rx_port_id, rx_queue_id, + nb_buf); + + return 0; +} + /** * @warning * @b EXPERIMENTAL: this API may change without prior notice diff --git a/lib/ethdev/rte_ethdev_core.h b/lib/ethdev/rte_ethdev_core.h index dcf8adab92..10f9d5cbe7 100644 --- a/lib/ethdev/rte_ethdev_core.h +++ b/lib/ethdev/rte_ethdev_core.h @@ -56,6 +56,13 @@ typedef int (*eth_rx_descriptor_status_t)(void *rxq, uint16_t offset); /** @internal Check the status of a Tx descriptor */ typedef int (*eth_tx_descriptor_status_t)(void *txq, uint16_t offset); +/** @internal Stash Tx used buffers into RX ring in buffer recycle mode */ +typedef uint16_t (*eth_tx_buf_stash_t)(void *txq, + struct rte_eth_rxq_buf_recycle_info *rxq_buf_recycle_info); + +/** @internal Refill Rx descriptors in buffer recycle mode */ +typedef uint16_t (*eth_rx_descriptors_refill_t)(void *rxq, uint16_t nb); + /** * @internal * Structure used to hold opaque pointers to internal ethdev Rx/Tx * @@ -90,6 +97,8 @@ struct rte_eth_fp_ops { eth_rx_queue_count_t rx_queue_count; /** Check the status of a Rx descriptor. */ eth_rx_descriptor_status_t rx_descriptor_status; + /** Refill Rx descriptors in buffer recycle mode */ + eth_rx_descriptors_refill_t rx_descriptors_refill; /** Rx queues data. */ struct rte_ethdev_qdata rxq; uintptr_t reserved1[3]; @@ -106,6 +115,8 @@ struct rte_eth_fp_ops { eth_tx_prep_t tx_pkt_prepare; /** Check the status of a Tx descriptor. */ eth_tx_descriptor_status_t tx_descriptor_status; + /** Stash Tx used buffers into RX ring in buffer recycle mode */ + eth_tx_buf_stash_t tx_buf_stash; /** Tx queues data.
*/ struct rte_ethdev_qdata txq; uintptr_t reserved2[3]; diff --git a/lib/ethdev/version.map b/lib/ethdev/version.map index 357d1a88c0..8a4b1dac80 100644 --- a/lib/ethdev/version.map +++ b/lib/ethdev/version.map @@ -299,6 +299,10 @@ EXPERIMENTAL { rte_flow_action_handle_query_update; rte_flow_async_action_handle_query_update; rte_flow_async_create_by_index; + + # added in 23.07 + rte_eth_dev_buf_recycle; + rte_eth_rx_queue_buf_recycle_info_get; }; INTERNAL { @@ -328,4 +332,6 @@ INTERNAL { rte_eth_representor_id_get; rte_eth_switch_domain_alloc; rte_eth_switch_domain_free; + rte_eth_tx_buf_stash; + rte_eth_rx_descriptors_refill; }; From patchwork Thu Mar 23 10:43:29 2023 X-Patchwork-Submitter: Feifei Wang X-Patchwork-Id: 125450 X-Patchwork-Delegate: ferruh.yigit@amd.com From: Feifei Wang To: Yuying Zhang, Beilei Xing Cc: dev@dpdk.org, mb@smartsharesystems.com, konstantin.v.ananyev@yandex.ru, nd@arm.com, Feifei Wang, Honnappa Nagarahalli, Ruifeng Wang Subject: [PATCH v4 2/3] net/i40e: implement recycle buffer mode Date: Thu, 23 Mar 2023 18:43:29 +0800 Message-Id: <20230323104330.3823251-3-feifei.wang2@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230323104330.3823251-1-feifei.wang2@arm.com> References: <20211224164613.32569-1-feifei.wang2@arm.com> <20230323104330.3823251-1-feifei.wang2@arm.com> Define the specific function implementations for the i40e driver. Currently, buffer recycle mode supports the 128-bit vector path and can be enabled in both fast-free and non-fast-free modes.
Suggested-by: Honnappa Nagarahalli Signed-off-by: Feifei Wang Reviewed-by: Ruifeng Wang Reviewed-by: Honnappa Nagarahalli --- drivers/net/i40e/i40e_ethdev.c | 1 + drivers/net/i40e/i40e_ethdev.h | 2 + drivers/net/i40e/i40e_rxtx.c | 24 +++++ drivers/net/i40e/i40e_rxtx.h | 4 + drivers/net/i40e/i40e_rxtx_vec_common.h | 128 ++++++++++++++++++++++++ 5 files changed, 159 insertions(+) diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c index cb0070f94b..456fb256f5 100644 --- a/drivers/net/i40e/i40e_ethdev.c +++ b/drivers/net/i40e/i40e_ethdev.c @@ -496,6 +496,7 @@ static const struct eth_dev_ops i40e_eth_dev_ops = { .flow_ops_get = i40e_dev_flow_ops_get, .rxq_info_get = i40e_rxq_info_get, .txq_info_get = i40e_txq_info_get, + .rxq_buf_recycle_info_get = i40e_rxq_buf_recycle_info_get, .rx_burst_mode_get = i40e_rx_burst_mode_get, .tx_burst_mode_get = i40e_tx_burst_mode_get, .timesync_enable = i40e_timesync_enable, diff --git a/drivers/net/i40e/i40e_ethdev.h b/drivers/net/i40e/i40e_ethdev.h index 9b806d130e..83c5ff5859 100644 --- a/drivers/net/i40e/i40e_ethdev.h +++ b/drivers/net/i40e/i40e_ethdev.h @@ -1355,6 +1355,8 @@ void i40e_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, struct rte_eth_rxq_info *qinfo); void i40e_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, struct rte_eth_txq_info *qinfo); +void i40e_rxq_buf_recycle_info_get(struct rte_eth_dev *dev, uint16_t queue_id, + struct rte_eth_rxq_buf_recycle_info *rxq_buf_recycle_info); int i40e_rx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id, struct rte_eth_burst_mode *mode); int i40e_tx_burst_mode_get(struct rte_eth_dev *dev, uint16_t queue_id, diff --git a/drivers/net/i40e/i40e_rxtx.c b/drivers/net/i40e/i40e_rxtx.c index 788ffb51c2..479964c6c4 100644 --- a/drivers/net/i40e/i40e_rxtx.c +++ b/drivers/net/i40e/i40e_rxtx.c @@ -3197,6 +3197,28 @@ i40e_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, qinfo->conf.offloads = txq->offloads; } +void +i40e_rxq_buf_recycle_info_get(struct rte_eth_dev *dev, uint16_t queue_id, + struct rte_eth_rxq_buf_recycle_info *rxq_buf_recycle_info) +{ + struct i40e_rx_queue *rxq; + + rxq = dev->data->rx_queues[queue_id]; + + rxq_buf_recycle_info->buf_ring = (void *)rxq->sw_ring; + rxq_buf_recycle_info->mp = rxq->mp; + rxq_buf_recycle_info->buf_ring_size = rxq->nb_rx_desc; + rxq_buf_recycle_info->refill_request = RTE_I40E_RXQ_REARM_THRESH; + +#if RTE_BYTE_ORDER == RTE_BIG_ENDIAN + rxq_buf_recycle_info->refill_head = &rxq->rxrearm_start + 0xF; + rxq_buf_recycle_info->receive_tail = &rxq->rx_tail + 0xF; +#else + rxq_buf_recycle_info->refill_head = &rxq->rxrearm_start; + rxq_buf_recycle_info->receive_tail = &rxq->rx_tail; +#endif +} + #ifdef RTE_ARCH_X86 static inline bool get_avx_supported(bool request_avx512) @@ -3273,6 +3295,7 @@ i40e_set_rx_function(struct rte_eth_dev *dev) if (ad->rx_vec_allowed && rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) { + dev->rx_descriptors_refill = i40e_rx_descriptors_refill_vec; #ifdef RTE_ARCH_X86 if (dev->data->scattered_rx) { if (ad->rx_use_avx512) { @@ -3465,6 +3488,7 @@ i40e_set_tx_function(struct rte_eth_dev *dev) if (ad->tx_simple_allowed) { if (ad->tx_vec_allowed && rte_vect_get_max_simd_bitwidth() >= RTE_VECT_SIMD_128) { + dev->tx_buf_stash = i40e_tx_buf_stash_vec; #ifdef RTE_ARCH_X86 if (ad->tx_use_avx512) { #ifdef CC_AVX512_SUPPORT diff --git a/drivers/net/i40e/i40e_rxtx.h b/drivers/net/i40e/i40e_rxtx.h index 5e6eecc501..0ad8f530f9 100644 --- a/drivers/net/i40e/i40e_rxtx.h +++ b/drivers/net/i40e/i40e_rxtx.h @@ 
-233,6 +233,10 @@ uint32_t i40e_dev_rx_queue_count(void *rx_queue); int i40e_dev_rx_descriptor_status(void *rx_queue, uint16_t offset); int i40e_dev_tx_descriptor_status(void *tx_queue, uint16_t offset); +uint16_t i40e_tx_buf_stash_vec(void *tx_queue, + struct rte_eth_rxq_buf_recycle_info *rxq_buf_recycle_info); +uint16_t i40e_rx_descriptors_refill_vec(void *rx_queue, uint16_t nb); + uint16_t i40e_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); uint16_t i40e_recv_scattered_pkts_vec(void *rx_queue, diff --git a/drivers/net/i40e/i40e_rxtx_vec_common.h b/drivers/net/i40e/i40e_rxtx_vec_common.h index fe1a6ec75e..068ce694f2 100644 --- a/drivers/net/i40e/i40e_rxtx_vec_common.h +++ b/drivers/net/i40e/i40e_rxtx_vec_common.h @@ -156,6 +156,134 @@ tx_backlog_entry(struct i40e_tx_entry *txep, txep[i].mbuf = tx_pkts[i]; } +uint16_t +i40e_tx_buf_stash_vec(void *tx_queue, + struct rte_eth_rxq_buf_recycle_info *rxq_buf_recycle_info) +{ + struct i40e_tx_queue *txq = tx_queue; + struct i40e_tx_entry *txep; + struct rte_mbuf **rxep; + struct rte_mbuf *m[RTE_I40E_TX_MAX_FREE_BUF_SZ]; + int i, j, n; + uint16_t avail = 0; + uint16_t buf_ring_size = rxq_buf_recycle_info->buf_ring_size; + uint16_t mask = rxq_buf_recycle_info->buf_ring_size - 1; + uint16_t refill_request = rxq_buf_recycle_info->refill_request; + uint16_t refill_head = *rxq_buf_recycle_info->refill_head; + uint16_t receive_tail = *rxq_buf_recycle_info->receive_tail; + + /* Get available recycling Rx buffers. */ + avail = (buf_ring_size - (refill_head - receive_tail)) & mask; + + /* Check Tx free thresh and Rx available space. */ + if (txq->nb_tx_free > txq->tx_free_thresh || avail <= txq->tx_rs_thresh) + return 0; + + /* check DD bits on threshold descriptor */ + if ((txq->tx_ring[txq->tx_next_dd].cmd_type_offset_bsz & + rte_cpu_to_le_64(I40E_TXD_QW1_DTYPE_MASK)) != + rte_cpu_to_le_64(I40E_TX_DESC_DTYPE_DESC_DONE)) + return 0; + + n = txq->tx_rs_thresh; + + /* Buffer recycle can only handle the case without buffer ring wraparound. + * There are two cases for this: + * + * case 1: The refill head of the Rx buffer ring needs to be aligned with + * the buffer ring size. In this case, the number of freed Tx buffers + * should be equal to refill_request. + * + * case 2: The refill head of the Rx buffer ring does not need to be aligned + * with the buffer ring size. In this case, the update of the refill head + * cannot exceed the Rx buffer ring size. + */ + if (refill_request != n || + (!refill_request && (refill_head + n > buf_ring_size))) + return 0; + + /* First buffer to free from S/W ring is at index + * tx_next_dd - (tx_rs_thresh-1). + */ + txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)]; + rxep = rxq_buf_recycle_info->buf_ring; + rxep += refill_head; + + if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) { + /* Directly put mbufs from Tx to Rx. */ + for (i = 0; i < n; i++, rxep++, txep++) + *rxep = txep[0].mbuf; + } else { + for (i = 0, j = 0; i < n; i++) { + /* Abort if the txq contains buffers from an unexpected mempool. */ + if (unlikely(rxq_buf_recycle_info->mp + != txep[i].mbuf->pool)) + return 0; + + m[j] = rte_pktmbuf_prefree_seg(txep[i].mbuf); + + /* In case 1, each of the Tx buffers should be the + * last reference. + */ + if (unlikely(m[j] == NULL && refill_request)) + return 0; + /* In case 2, the number of valid Tx free + * buffers should be recorded. + */ + j++; + } + rte_memcpy(rxep, m, sizeof(void *) * j); + } + + /* Update counters for Tx.
*/ + txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh); + txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh); + if (txq->tx_next_dd >= txq->nb_tx_desc) + txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1); + + return n; +} + +uint16_t +i40e_rx_descriptors_refill_vec(void *rx_queue, uint16_t nb) +{ + struct i40e_rx_queue *rxq = rx_queue; + struct i40e_rx_entry *rxep; + volatile union i40e_rx_desc *rxdp; + uint16_t rx_id; + uint64_t paddr; + uint64_t dma_addr; + uint16_t i; + + rxdp = rxq->rx_ring + rxq->rxrearm_start; + rxep = &rxq->sw_ring[rxq->rxrearm_start]; + + for (i = 0; i < nb; i++) { + /* Initialize rxdp descs. */ + paddr = (rxep[i].mbuf)->buf_iova + RTE_PKTMBUF_HEADROOM; + dma_addr = rte_cpu_to_le_64(paddr); + /* flush desc with pa dma_addr */ + rxdp[i].read.hdr_addr = 0; + rxdp[i].read.pkt_addr = dma_addr; + } + + /* Update the descriptor initializer index */ + rxq->rxrearm_start += nb; + if (rxq->rxrearm_start >= rxq->nb_rx_desc) + rxq->rxrearm_start = 0; + + rxq->rxrearm_nb -= nb; + + rx_id = (uint16_t)((rxq->rxrearm_start == 0) ? + (rxq->nb_rx_desc - 1) : (rxq->rxrearm_start - 1)); + + rte_io_wmb(); + /* Update the tail pointer on the NIC */ + I40E_PCI_REG_WRITE_RELAXED(rxq->qrx_tail, rx_id); + + return nb; +} + static inline void _i40e_rx_queue_release_mbufs_vec(struct i40e_rx_queue *rxq) { From patchwork Thu Mar 23 10:43:30 2023 X-Patchwork-Submitter: Feifei Wang X-Patchwork-Id: 125451 X-Patchwork-Delegate: ferruh.yigit@amd.com From: Feifei Wang To: Qiming Yang, Wenjun Wu Cc: dev@dpdk.org, mb@smartsharesystems.com, konstantin.v.ananyev@yandex.ru, nd@arm.com, Feifei Wang, Honnappa Nagarahalli, Ruifeng Wang Subject: [PATCH v4 3/3] net/ixgbe: implement recycle buffer mode Date: Thu, 23 Mar 2023 18:43:30 +0800 Message-Id: <20230323104330.3823251-4-feifei.wang2@arm.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230323104330.3823251-1-feifei.wang2@arm.com> References: <20211224164613.32569-1-feifei.wang2@arm.com> <20230323104330.3823251-1-feifei.wang2@arm.com> Define the specific function implementations for the ixgbe driver. Currently, buffer recycle mode supports the 128-bit vector path and can be enabled in both fast-free and non-fast-free modes.
Suggested-by: Honnappa Nagarahalli Signed-off-by: Feifei Wang Reviewed-by: Ruifeng Wang Reviewed-by: Honnappa Nagarahalli --- drivers/net/ixgbe/ixgbe_ethdev.c | 1 + drivers/net/ixgbe/ixgbe_ethdev.h | 3 + drivers/net/ixgbe/ixgbe_rxtx.c | 25 +++++ drivers/net/ixgbe/ixgbe_rxtx.h | 4 + drivers/net/ixgbe/ixgbe_rxtx_vec_common.h | 127 ++++++++++++++++++++++ 5 files changed, 160 insertions(+) diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c index 88118bc305..3bada9abbd 100644 --- a/drivers/net/ixgbe/ixgbe_ethdev.c +++ b/drivers/net/ixgbe/ixgbe_ethdev.c @@ -543,6 +543,7 @@ static const struct eth_dev_ops ixgbe_eth_dev_ops = { .set_mc_addr_list = ixgbe_dev_set_mc_addr_list, .rxq_info_get = ixgbe_rxq_info_get, .txq_info_get = ixgbe_txq_info_get, + .rxq_buf_recycle_info_get = ixgbe_rxq_buf_recycle_info_get, .timesync_enable = ixgbe_timesync_enable, .timesync_disable = ixgbe_timesync_disable, .timesync_read_rx_timestamp = ixgbe_timesync_read_rx_timestamp, diff --git a/drivers/net/ixgbe/ixgbe_ethdev.h b/drivers/net/ixgbe/ixgbe_ethdev.h index 48290af512..ca6aa0da64 100644 --- a/drivers/net/ixgbe/ixgbe_ethdev.h +++ b/drivers/net/ixgbe/ixgbe_ethdev.h @@ -625,6 +625,9 @@ void ixgbe_rxq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, void ixgbe_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, struct rte_eth_txq_info *qinfo); +void ixgbe_rxq_buf_recycle_info_get(struct rte_eth_dev *dev, uint16_t queue_id, + struct rte_eth_rxq_buf_recycle_info *rxq_buf_recycle_info); + int ixgbevf_dev_rx_init(struct rte_eth_dev *dev); void ixgbevf_dev_tx_init(struct rte_eth_dev *dev); diff --git a/drivers/net/ixgbe/ixgbe_rxtx.c b/drivers/net/ixgbe/ixgbe_rxtx.c index c9d6ca9efe..ad276cbf33 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx.c +++ b/drivers/net/ixgbe/ixgbe_rxtx.c @@ -2558,6 +2558,7 @@ ixgbe_set_tx_function(struct rte_eth_dev *dev, struct ixgbe_tx_queue *txq) (rte_eal_process_type() != RTE_PROC_PRIMARY || ixgbe_txq_vec_setup(txq) == 0)) { PMD_INIT_LOG(DEBUG, "Vector tx enabled."); + dev->tx_buf_stash = ixgbe_tx_buf_stash_vec; dev->tx_pkt_burst = ixgbe_xmit_pkts_vec; } else dev->tx_pkt_burst = ixgbe_xmit_pkts_simple; @@ -4852,6 +4853,7 @@ ixgbe_set_rx_function(struct rte_eth_dev *dev) RTE_IXGBE_DESCS_PER_LOOP, dev->data->port_id); + dev->rx_descriptors_refill = ixgbe_rx_descriptors_refill_vec; dev->rx_pkt_burst = ixgbe_recv_pkts_vec; } else if (adapter->rx_bulk_alloc_allowed) { PMD_INIT_LOG(DEBUG, "Rx Burst Bulk Alloc Preconditions are " @@ -5623,6 +5625,29 @@ ixgbe_txq_info_get(struct rte_eth_dev *dev, uint16_t queue_id, qinfo->conf.tx_deferred_start = txq->tx_deferred_start; } +void +ixgbe_rxq_buf_recycle_info_get(struct rte_eth_dev *dev, uint16_t queue_id, + struct rte_eth_rxq_buf_recycle_info *rxq_buf_recycle_info) +{ + struct ixgbe_rx_queue *rxq; + struct ixgbe_adapter *adapter = dev->data->dev_private; + + rxq = dev->data->rx_queues[queue_id]; + + rxq_buf_recycle_info->buf_ring = (void *)rxq->sw_ring; + rxq_buf_recycle_info->mp = rxq->mb_pool; + rxq_buf_recycle_info->buf_ring_size = rxq->nb_rx_desc; + rxq_buf_recycle_info->receive_tail = &rxq->rx_tail; + + if (adapter->rx_vec_allowed) { + rxq_buf_recycle_info->refill_request = RTE_IXGBE_RXQ_REARM_THRESH; + rxq_buf_recycle_info->refill_head = &rxq->rxrearm_start; + } else { + rxq_buf_recycle_info->refill_request = rxq->rx_free_thresh; + rxq_buf_recycle_info->refill_head = &rxq->rx_free_trigger; + } +} + /* * [VF] Initializes Receive Unit. 
*/ diff --git a/drivers/net/ixgbe/ixgbe_rxtx.h b/drivers/net/ixgbe/ixgbe_rxtx.h index 668a5b9814..18f890f91a 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx.h +++ b/drivers/net/ixgbe/ixgbe_rxtx.h @@ -295,6 +295,10 @@ int ixgbe_dev_tx_done_cleanup(void *tx_queue, uint32_t free_cnt); extern const uint32_t ptype_table[IXGBE_PACKET_TYPE_MAX]; extern const uint32_t ptype_table_tn[IXGBE_PACKET_TYPE_TN_MAX]; +uint16_t ixgbe_tx_buf_stash_vec(void *tx_queue, + struct rte_eth_rxq_buf_recycle_info *rxq_buf_recycle_info); +uint16_t ixgbe_rx_descriptors_refill_vec(void *rx_queue, uint16_t nb); + uint16_t ixgbe_xmit_fixed_burst_vec(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); int ixgbe_txq_vec_setup(struct ixgbe_tx_queue *txq); diff --git a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h index a4d9ec9b08..e66a4a2d5b 100644 --- a/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h +++ b/drivers/net/ixgbe/ixgbe_rxtx_vec_common.h @@ -139,6 +139,133 @@ tx_backlog_entry(struct ixgbe_tx_entry_v *txep, txep[i].mbuf = tx_pkts[i]; } +uint16_t +ixgbe_tx_buf_stash_vec(void *tx_queue, + struct rte_eth_rxq_buf_recycle_info *rxq_buf_recycle_info) +{ + struct ixgbe_tx_queue *txq = tx_queue; + struct ixgbe_tx_entry *txep; + struct rte_mbuf **rxep; + struct rte_mbuf *m[RTE_IXGBE_TX_MAX_FREE_BUF_SZ]; + int i, j, n; + uint32_t status; + uint16_t avail = 0; + uint16_t buf_ring_size = rxq_buf_recycle_info->buf_ring_size; + uint16_t mask = rxq_buf_recycle_info->buf_ring_size - 1; + uint16_t refill_request = rxq_buf_recycle_info->refill_request; + uint16_t refill_head = *rxq_buf_recycle_info->refill_head; + uint16_t receive_tail = *rxq_buf_recycle_info->receive_tail; + + /* Get available recycling Rx buffers. */ + avail = (buf_ring_size - (refill_head - receive_tail)) & mask; + + /* Check Tx free thresh and Rx available space. */ + if (txq->nb_tx_free > txq->tx_free_thresh || avail <= txq->tx_rs_thresh) + return 0; + + /* check DD bits on threshold descriptor */ + status = txq->tx_ring[txq->tx_next_dd].wb.status; + if (!(status & IXGBE_ADVTXD_STAT_DD)) + return 0; + + n = txq->tx_rs_thresh; + + /* Buffer recycle can only handle the case without buffer ring wraparound. + * There are two cases for this: + * + * case 1: The refill head of the Rx buffer ring needs to be aligned with + * the buffer ring size. In this case, the number of freed Tx buffers + * should be equal to refill_request. + * + * case 2: The refill head of the Rx buffer ring does not need to be aligned + * with the buffer ring size. In this case, the update of the refill head + * cannot exceed the Rx buffer ring size. + */ + if (refill_request != n || + (!refill_request && (refill_head + n > buf_ring_size))) + return 0; + + /* First buffer to free from S/W ring is at index + * tx_next_dd - (tx_rs_thresh-1). + */ + txep = &txq->sw_ring[txq->tx_next_dd - (n - 1)]; + rxep = rxq_buf_recycle_info->buf_ring; + rxep += refill_head; + + if (txq->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE) { + /* Directly put mbufs from Tx to Rx. */ + for (i = 0; i < n; i++, rxep++, txep++) + *rxep = txep[0].mbuf; + } else { + for (i = 0, j = 0; i < n; i++) { + /* Abort if the txq contains buffers from an unexpected mempool. */ + if (unlikely(rxq_buf_recycle_info->mp + != txep[i].mbuf->pool)) + return 0; + + m[j] = rte_pktmbuf_prefree_seg(txep[i].mbuf); + + /* In case 1, each of the Tx buffers should be the + * last reference. + */ + if (unlikely(m[j] == NULL && refill_request)) + return 0; + /* In case 2, the number of valid Tx free + * buffers should be recorded.
+ */ + j++; + } + rte_memcpy(rxep, m, sizeof(void *) * j); + } + + /* Update counters for Tx. */ + txq->nb_tx_free = (uint16_t)(txq->nb_tx_free + txq->tx_rs_thresh); + txq->tx_next_dd = (uint16_t)(txq->tx_next_dd + txq->tx_rs_thresh); + if (txq->tx_next_dd >= txq->nb_tx_desc) + txq->tx_next_dd = (uint16_t)(txq->tx_rs_thresh - 1); + + return n; +} + +uint16_t +ixgbe_rx_descriptors_refill_vec(void *rx_queue, uint16_t nb) +{ + struct ixgbe_rx_queue *rxq = rx_queue; + struct ixgbe_rx_entry *rxep; + volatile union ixgbe_adv_rx_desc *rxdp; + uint16_t rx_id; + uint64_t paddr; + uint64_t dma_addr; + uint16_t i; + + rxdp = rxq->rx_ring + rxq->rxrearm_start; + rxep = &rxq->sw_ring[rxq->rxrearm_start]; + + for (i = 0; i < nb; i++) { + /* Initialize rxdp descs. */ + paddr = (rxep[i].mbuf)->buf_iova + RTE_PKTMBUF_HEADROOM; + dma_addr = rte_cpu_to_le_64(paddr); + /* flush desc with pa dma_addr */ + rxdp[i].read.hdr_addr = 0; + rxdp[i].read.pkt_addr = dma_addr; + } + + /* Update the descriptor initializer index */ + rxq->rxrearm_start += nb; + if (rxq->rxrearm_start >= rxq->nb_rx_desc) + rxq->rxrearm_start = 0; + + rxq->rxrearm_nb -= nb; + + rx_id = (uint16_t)((rxq->rxrearm_start == 0) ? + (rxq->nb_rx_desc - 1) : (rxq->rxrearm_start - 1)); + + /* Update the tail pointer on the NIC */ + IXGBE_PCI_REG_WRITE(rxq->rdt_reg_addr, rx_id); + + return nb; +} + static inline void _ixgbe_tx_queue_release_mbufs_vec(struct ixgbe_tx_queue *txq) {