From patchwork Fri Jan 27 18:37:58 2017
X-Patchwork-Submitter: Billy McFall <bmcfall@redhat.com>
X-Patchwork-Id: 20064
X-Patchwork-Delegate: thomas@monjalon.net
From: Billy McFall <bmcfall@redhat.com>
To: thomas.monjalon@6wind.com, wenzhuo.lu@intel.com, olivier.matz@6wind.com
Cc: dev@dpdk.org, Billy McFall <bmcfall@redhat.com>
Date: Fri, 27 Jan 2017 13:37:58 -0500
Message-Id: <20170127183800.27466-2-bmcfall@redhat.com>
In-Reply-To: <20170127183800.27466-1-bmcfall@redhat.com>
References: <20170123211340.22570-1-bmcfall@redhat.com>
 <20170127183800.27466-1-bmcfall@redhat.com>
Subject: [dpdk-dev] [PATCH v5 1/3] ethdev: new API to free consumed buffers
 in Tx ring

Add a new API to force free consumed buffers on the Tx ring. The API returns
the number of packets freed (0-n), or an error code if the feature is not
supported (-ENOTSUP) or the input is invalid (-ENODEV).

Signed-off-by: Billy McFall <bmcfall@redhat.com>
---
 doc/guides/nics/features/default.ini  |  1 +
 doc/guides/prog_guide/mempool_lib.rst | 29 +++++++++++++++++++++++++++++
 lib/librte_ether/rte_ethdev.c         | 14 ++++++++++++++
 lib/librte_ether/rte_ethdev.h         | 31 +++++++++++++++++++++++++++++++
 4 files changed, 75 insertions(+)

diff --git a/doc/guides/nics/features/default.ini b/doc/guides/nics/features/default.ini
index 6e4a043..e2d8c83 100644
--- a/doc/guides/nics/features/default.ini
+++ b/doc/guides/nics/features/default.ini
@@ -55,6 +55,7 @@ FW version           =
 EEPROM dump          =
 Registers dump       =
 Multiprocess aware   =
+Free TX ring buffers =
 BSD nic_uio          =
 Linux UIO            =
 Linux VFIO           =
diff --git a/doc/guides/prog_guide/mempool_lib.rst b/doc/guides/prog_guide/mempool_lib.rst
index ffdc109..92c6fd5 100644
--- a/doc/guides/prog_guide/mempool_lib.rst
+++ b/doc/guides/prog_guide/mempool_lib.rst
@@ -132,6 +132,35 @@ These user-owned caches can be explicitly passed to ``rte_mempool_generic_put()`
 The ``rte_mempool_default_cache()`` call returns the default internal cache if any.
 In contrast to the default caches, user-owned caches can be used by non-EAL threads too.
 
+
+Driver Cache
+~~~~~~~~~~~~
+
+In addition to a core's local cache, many drivers do not release the mbuf back to the mempool, or local cache,
+immediately after the packet has been transmitted.
+Instead, they leave the mbuf in their Tx ring and either perform a bulk release when the tx_rs_thresh has been crossed
+or free the mbuf when a slot in the Tx ring is needed.
+
+An application can request the driver to release used mbufs with the ``rte_eth_tx_done_cleanup()`` API.
+This API requests the driver to release mbufs that are no longer in use, independent of whether or not the tx_rs_thresh
+has been crossed.
+There are two scenarios in which an application may want the mbuf back immediately:
+
+* When a given packet needs to be sent to multiple destination interfaces (either for Layer 2 flooding or Layer 3 multicast).
+  One option is to make a copy of the packet, or a copy of the header portion that needs to be manipulated.
+  A second option is to transmit the packet and then poll the ``rte_eth_tx_done_cleanup()`` API until the reference count
+  on the packet is decremented.
+  Then the same packet can be transmitted to the next destination interface.
+
+* If an application is designed to make multiple runs, like a packet generator, and one run has completed,
+  the application may want to reset to a clean state.
+  In this case, it may want to call the ``rte_eth_tx_done_cleanup()`` API to request each destination interface it has been
+  using to release all of its used mbufs.
+
+To determine if a driver supports this API, check for the *Free TX ring buffers* feature in the *Network Interface
+Controller Drivers* document.
+
+
 Mempool Handlers
 ------------------------
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 61f44e2..1cbf6d0 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -1253,6 +1253,20 @@ rte_eth_tx_buffer_set_err_callback(struct rte_eth_dev_tx_buffer *buffer,
 }
 
 int
+rte_eth_tx_done_cleanup(uint8_t port_id, uint16_t queue_id, uint32_t free_cnt)
+{
+	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
+
+	/* Validate input data. Bail if not valid or not supported. */
+	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
+	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_done_cleanup, -ENOTSUP);
+
+	/* Call driver to free pending mbufs. */
+	return (*dev->dev_ops->tx_done_cleanup)(dev->data->tx_queues[queue_id],
+			free_cnt);
+}
+
+int
 rte_eth_tx_buffer_init(struct rte_eth_dev_tx_buffer *buffer, uint16_t size)
 {
 	int ret = 0;
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index c17bbda..b23886c 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -1183,6 +1183,9 @@ typedef int (*eth_fw_version_get_t)(struct rte_eth_dev *dev,
 				     char *fw_version, size_t fw_size);
 /**< @internal Get firmware information of an Ethernet device. */
 
+typedef int (*eth_tx_done_cleanup_t)(void *txq, uint32_t free_cnt);
+/**< @internal Force mbufs to be freed from the TX ring. */
+
 typedef void (*eth_rxq_info_get_t)(struct rte_eth_dev *dev,
 	uint16_t rx_queue_id, struct rte_eth_rxq_info *qinfo);
 
@@ -1487,6 +1490,7 @@ struct eth_dev_ops {
 	eth_rx_disable_intr_t      rx_queue_intr_disable; /**< Disable Rx queue interrupt. */
 	eth_tx_queue_setup_t       tx_queue_setup;/**< Set up device TX queue. */
 	eth_queue_release_t        tx_queue_release; /**< Release TX queue. */
+	eth_tx_done_cleanup_t      tx_done_cleanup; /**< Free TX ring mbufs. */
 	eth_dev_led_on_t           dev_led_on;    /**< Turn on LED. */
 	eth_dev_led_off_t          dev_led_off;   /**< Turn off LED. */
 
@@ -3091,6 +3095,33 @@ rte_eth_tx_buffer(uint8_t port_id, uint16_t queue_id,
 }
 
 /**
+ * Request the driver to free mbufs currently cached by the driver. The
+ * driver will only free the mbuf if it is no longer in use. It is the
+ * application's responsibility to ensure rte_eth_tx_buffer_flush(..) is
+ * called if needed.
+ *
+ * @param port_id
+ *   The port identifier of the Ethernet device.
+ * @param queue_id
+ *   The index of the transmit queue through which output packets must be
+ *   sent.
+ *   The value must be in the range [0, nb_tx_queue - 1] previously supplied
+ *   to rte_eth_dev_configure().
+ * @param free_cnt
+ *   Maximum number of packets to free. Use 0 to indicate all possible packets
+ *   should be freed. Note that a packet may be using multiple mbufs.
+ * @return
+ *   Failure: < 0
+ *     -ENODEV: Invalid interface
+ *     -ENOTSUP: Driver does not support the function
+ *   Success: >= 0
+ *     0-n: Number of packets freed. More packets may still remain in the ring
+ *     that are in use.
+ */
+int
+rte_eth_tx_done_cleanup(uint8_t port_id, uint16_t queue_id, uint32_t free_cnt);
+
+/**
  * Configure a callback for buffered packets which cannot be sent
  *
  * Register a specific callback to be called when an attempt is made to send
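For illustration only, not part of the patch: a minimal sketch of how an application
might use the new call for the Layer 2 flooding scenario described in the mempool guide
hunk above. The helper name flood_pkt, the use of queue 0, and the retry bound are
assumptions made up for the example.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Illustrative sketch: transmit one mbuf to several ports, polling
 * rte_eth_tx_done_cleanup() after each send until the driver has dropped
 * its reference, so the same mbuf can be sent again.
 */
static int
flood_pkt(struct rte_mbuf *m, const uint8_t *ports, unsigned int nb_ports)
{
	unsigned int i;
	int tries;

	for (i = 0; i < nb_ports; i++) {
		/* Keep our own reference so the driver's free cannot release it. */
		rte_pktmbuf_refcnt_update(m, 1);
		if (rte_eth_tx_burst(ports[i], 0, &m, 1) != 1) {
			rte_pktmbuf_refcnt_update(m, -1);
			return -1;	/* Tx queue full, mbuf not taken. */
		}
		/* Ask the driver to free consumed mbufs (free_cnt 0 = as many as
		 * possible) until ours comes back; the 100-try bound is arbitrary.
		 */
		for (tries = 0; rte_mbuf_refcnt_read(m) > 1 && tries < 100; tries++)
			rte_eth_tx_done_cleanup(ports[i], 0, 0);
		if (rte_mbuf_refcnt_read(m) > 1)
			return -1;	/* Driver still holds the mbuf. */
	}
	rte_pktmbuf_free(m);	/* Release our own reference. */
	return 0;
}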