[dpdk-dev,v2] ethdev: introduce lock-free txq capability flag

Message ID 20170706062120.3895-1-jerin.jacob@caviumnetworks.com (mailing list archive)
State Changes Requested, archived

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK

Commit Message

Jerin Jacob July 6, 2017, 6:21 a.m. UTC
  Introducing the DEV_TX_OFFLOAD_MT_LOCKFREE TX capability flag.
If a PMD advertises DEV_TX_OFFLOAD_MT_LOCKFREE, multiple threads
can invoke rte_eth_tx_burst() concurrently on the same tx queue without
a SW lock. This PMD feature is found in the OCTEON family of NPUs and
will be useful in the following use cases:

1) Remove the explicit spinlock in applications where lcores
are not mapped to TX queues 1:1.
Example: OVS has such an instance:
https://github.com/openvswitch/ovs/blob/master/lib/netdev-dpdk.c#L299
https://github.com/openvswitch/ovs/blob/master/lib/netdev-dpdk.c#L1859
See the usage of the tx_lock spinlock.

2) In the eventdev use case, avoid dedicating a separate TX core for
transmitting, thus enabling more scaling as all workers can
send packets (a sketch of the intended usage follows below).
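
A minimal sketch of the intended usage contrast; the worker_tx() helper
and the mt_lockfree flag are illustrative only, not part of this patch:

/* Transmit from any worker lcore. When the PMD advertises
 * DEV_TX_OFFLOAD_MT_LOCKFREE, rte_eth_tx_burst() may be called
 * concurrently on one tx queue; otherwise serialize with a spinlock,
 * as OVS does today with its tx_lock.
 */
#include <rte_ethdev.h>
#include <rte_spinlock.h>

static rte_spinlock_t tx_lock = RTE_SPINLOCK_INITIALIZER;

static uint16_t
worker_tx(uint8_t port_id, uint16_t queue_id,
	  struct rte_mbuf **pkts, uint16_t nb_pkts, int mt_lockfree)
{
	uint16_t sent;

	if (mt_lockfree)
		return rte_eth_tx_burst(port_id, queue_id, pkts, nb_pkts);

	rte_spinlock_lock(&tx_lock);
	sent = rte_eth_tx_burst(port_id, queue_id, pkts, nb_pkts);
	rte_spinlock_unlock(&tx_lock);
	return sent;
}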

v2:
- Changed the flag name to DEV_TX_OFFLOAD_MT_LOCKFREE (Thomas)
- Updated the documentation in doc/guides/prog_guide/poll_mode_drv.rst
and the rte_eth_tx_burst() doxygen comments (Thomas)

Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
---
 doc/guides/prog_guide/poll_mode_drv.rst | 14 ++++++++++++--
 lib/librte_ether/rte_ethdev.h           |  8 ++++++++
 2 files changed, 20 insertions(+), 2 deletions(-)
  

Comments

Thomas Monjalon July 8, 2017, 4:08 p.m. UTC | #1
Hi Jerin,

Thanks for the update.
I think we can add this new flag in 17.08.
I prefer to wait for John's review, especially for the doc wording,
before applying it. I consider it does not hurt to add it post-rc1.

See below for my first comment on the doc.

06/07/2017 08:21, Jerin Jacob:
> Introducing the DEV_TX_OFFLOAD_MT_LOCKFREE TX capability flag.
> If a PMD advertises DEV_TX_OFFLOAD_MT_LOCKFREE, multiple threads
> can invoke rte_eth_tx_burst() concurrently on the same tx queue without
> a SW lock. This PMD feature is found in the OCTEON family of NPUs and
> will be useful in the following use cases:
> 
> 1) Remove the explicit spinlock in applications where lcores
> are not mapped to TX queues 1:1.
> Example: OVS has such an instance:
> https://github.com/openvswitch/ovs/blob/master/lib/netdev-dpdk.c#L299
> https://github.com/openvswitch/ovs/blob/master/lib/netdev-dpdk.c#L1859
> See the usage of the tx_lock spinlock.
> 
> 2) In the eventdev use case, avoid dedicating a separate TX core for
> transmitting, thus enabling more scaling as all workers can
> send packets.
> 
> v2:
> - Changed the flag name to DEV_TX_OFFLOAD_MT_LOCKFREE (Thomas)
> - Updated the documentation in doc/guides/prog_guide/poll_mode_drv.rst
> and the rte_eth_tx_burst() doxygen comments (Thomas)
> 
> Signed-off-by: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> ---
[...]
> +If the PMD is ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
> +concurrently on the same tx queue without SW lock.This PMD feature found in some NICs and

A space is missing after the dot.
Note: my preference is to start next sentence on a new line (in RST source).

> +useful in the following use cases if PMD supports it. See `Hardware Offload`_ for details.

This sentence is confusing. I would remove "if PMD supports it".
After "following use cases", should we add a colon?
The relation with `Hardware Offload`_ is not obvious.

> +*  Remove explicit spinlock in some applications where lcores to TX queues are not mapped 1:1.

Can we reword "lcores to TX queues"?
I suggest "lcores are not mapped to Tx queues with 1:1 relation".

> +*  In the eventdev use case, Avoid dedicating a separate TX core for transmitting and thus

Uppercase in the middle of the sentence spotted.

> +   enables more scaling as all workers can send the packets.
  
Jerin Jacob July 10, 2017, 4:56 p.m. UTC | #2
-----Original Message-----
> Date: Sat, 08 Jul 2017 18:08:57 +0200
> From: Thomas Monjalon <thomas@monjalon.net>
> To: Jerin Jacob <jerin.jacob@caviumnetworks.com>
> Cc: dev@dpdk.org, ferruh.yigit@intel.com, john.mcnamara@intel.com
> Subject: Re: [dpdk-dev] [PATCH v2] ethdev: introduce lock-free txq
>  capability flag
> 
> Hi Jerin,
> 

Hi Thomas. Thanks for the review.

> Thanks for the update.
> I think we can add this new flag in 17.08.
> I prefer to wait for John's review, especially for the doc wording,

Makes sense. I will send the next version.

> before applying it. I consider it does not hurt to add it post-rc1.
> 
> See below for my first comment on the doc.

Fixed in next version.
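
For reference, before the patch itself: a minimal sketch of how an
application could probe the new capability at initialization. The
txq_is_mt_lockfree() helper is illustrative only; it relies on the
existing rte_eth_dev_info_get() API and the tx_offload_capa field it
fills in, as referenced in the doxygen comment added below.

/* Return non-zero if workers may share a tx queue on this port
 * without a SW lock, per the DEV_TX_OFFLOAD_MT_LOCKFREE flag
 * introduced by this patch.
 */
#include <rte_ethdev.h>

static int
txq_is_mt_lockfree(uint8_t port_id)
{
	struct rte_eth_dev_info dev_info;

	rte_eth_dev_info_get(port_id, &dev_info);
	return (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MT_LOCKFREE) != 0;
}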
  

Patch

diff --git a/doc/guides/prog_guide/poll_mode_drv.rst b/doc/guides/prog_guide/poll_mode_drv.rst
index 4987f70a1..e6ac2978f 100644
--- a/doc/guides/prog_guide/poll_mode_drv.rst
+++ b/doc/guides/prog_guide/poll_mode_drv.rst
@@ -84,7 +84,7 @@  Whenever needed and appropriate, asynchronous communication should be introduced
 
 Avoiding lock contention is a key issue in a multi-core environment.
 To address this issue, PMDs are designed to work with per-core private resources as much as possible.
-For example, a PMD maintains a separate transmit queue per-core, per-port.
+For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capable.
 In the same way, every receive queue of a port is assigned to and polled by a single logical core (lcore).
 
 To comply with Non-Uniform Memory Access (NUMA), memory management is designed to assign to each logical core
@@ -146,6 +146,15 @@  This is also true for the pipe-line model provided all logical cores used are lo
 
 Multiple logical cores should never share receive or transmit queues for interfaces since this would require global locks and hinder performance.
 
+If the PMD is ``DEV_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
+concurrently on the same tx queue without SW lock.This PMD feature found in some NICs and
+useful in the following use cases if PMD supports it. See `Hardware Offload`_ for details.
+
+*  Remove explicit spinlock in some applications where lcores to TX queues are not mapped 1:1.
+
+*  In the eventdev use case, Avoid dedicating a separate TX core for transmitting and thus
+   enables more scaling as all workers can send the packets.
+
 Device Identification and Configuration
 ---------------------------------------
 
@@ -290,7 +299,8 @@  Hardware Offload
 
 Depending on driver capabilities advertised by
 ``rte_eth_dev_info_get()``, the PMD may support hardware offloading
-feature like checksumming, TCP segmentation or VLAN insertion.
+feature like checksumming, TCP segmentation, VLAN insertion or
+lockfree multithreaded TX burst on the same TX queue.
 
 The support of these offload features implies the addition of dedicated
 status bit(s) and value field(s) into the rte_mbuf data structure, along
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index d1076c821..8874ea504 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -901,6 +901,10 @@  struct rte_eth_conf {
 #define DEV_TX_OFFLOAD_IPIP_TNL_TSO     0x00000800    /**< Used for tunneling packet. */
 #define DEV_TX_OFFLOAD_GENEVE_TNL_TSO   0x00001000    /**< Used for tunneling packet. */
 #define DEV_TX_OFFLOAD_MACSEC_INSERT    0x00002000
+#define DEV_TX_OFFLOAD_MT_LOCKFREE      0x00004000
+/**< Multiple threads can invoke rte_eth_tx_burst() concurrently on the same
+ * tx queue without SW lock.
+ */
 
 struct rte_pci_device;
 
@@ -2933,6 +2937,10 @@  static inline int rte_eth_tx_descriptor_status(uint8_t port_id,
  * rte_eth_tx_burst() function must [attempt to] free the *rte_mbuf*  buffers
  * of those packets whose transmission was effectively completed.
  *
+ * If the PMD is DEV_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
+ * invoke this function concurrently on the same tx queue without SW lock.
+ * @see rte_eth_dev_info_get, struct rte_eth_txconf::txq_flags
+ *
  * @param port_id
  *   The port identifier of the Ethernet device.
  * @param queue_id