[v2,01/11] eventdev: improve doxygen introduction text

Message ID 20240119174346.108905-2-bruce.richardson@intel.com (mailing list archive)
State Changes Requested, archived
Delegated to: Jerin Jacob
Series: improve eventdev API specification/documentation

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/iol-testing warning apply patch failure

Commit Message

Bruce Richardson Jan. 19, 2024, 5:43 p.m. UTC
  Make some textual improvements to the introduction to eventdev and event
devices in the eventdev header file. This text appears in the doxygen
output for the header file, and introduces the key concepts, for
example: events, event devices, queues, ports and scheduling.

This patch makes the following improvements:
* small textual fixups, e.g. correcting use of singular/plural
* rewrites of some sentences to improve clarity
* using doxygen markdown to split the whole large block up into
  sections, thereby making it easier to read.

No large-scale changes are made, and blocks are not reordered

Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
---
 lib/eventdev/rte_eventdev.h | 112 +++++++++++++++++++++---------------
 1 file changed, 66 insertions(+), 46 deletions(-)
  

Comments

Mattias Rönnblom Jan. 23, 2024, 8:57 a.m. UTC | #1
On 2024-01-19 18:43, Bruce Richardson wrote:
> Make some textual improvements to the introduction to eventdev and event
> devices in the eventdev header file. This text appears in the doxygen
> output for the header file, and introduces the key concepts, for
> example: events, event devices, queues, ports and scheduling.
> 

Great stuff, Bruce.

> This patch makes the following improvements:
> * small textual fixups, e.g. correcting use of singular/plural
> * rewrites of some sentences to improve clarity
> * using doxygen markdown to split the whole large block up into
>    sections, thereby making it easier to read.
> 
> No large-scale changes are made, and blocks are not reordered
> 
> Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> ---
>   lib/eventdev/rte_eventdev.h | 112 +++++++++++++++++++++---------------
>   1 file changed, 66 insertions(+), 46 deletions(-)
> 
> diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> index ec9b02455d..a36c89c7a4 100644
> --- a/lib/eventdev/rte_eventdev.h
> +++ b/lib/eventdev/rte_eventdev.h
> @@ -12,12 +12,13 @@
>    * @file
>    *
>    * RTE Event Device API
> + * ====================
>    *
>    * In a polling model, lcores poll ethdev ports and associated rx queues

"In a polling model, lcores pick up packets from Ethdev ports and 
associated RX queues, run the processing to completion, and enqueue 
the completed packets to a TX queue. NIC-level receive-side scaling 
(RSS) may be used to balance the load across multiple CPU cores."

I thought it might be worth being a little more verbose about what 
reference model Eventdev is compared to. Maybe you can add "traditional" 
or "archetypal", or "simple" as a prefix to the "polling model". (I 
think I would call this a "simple run-to-completion model" rather than 
"polling model".)

"By contrast, in Eventdev, ingressing* packets are fed into an event 
device, which schedules packets across available lcores in accordance 
with its configuration. This event-driven programming model offers 
applications automatic multicore scaling, dynamic load balancing, 
pipelining, packet order maintenance, synchronization, and quality of 
service."

* Is this a word?

> - * directly to look for packet. In an event driven model, by contrast, lcores
> - * call the scheduler that selects packets for them based on programmer
> - * specified criteria. Eventdev library adds support for event driven
> - * programming model, which offer applications automatic multicore scaling,
> + * directly to look for packets. In an event driven model, in contrast, lcores
> + * call a scheduler that selects packets for them based on programmer
> + * specified criteria. The eventdev library adds support for the event driven
> + * programming model, which offers applications automatic multicore scaling,
>    * dynamic load balancing, pipelining, packet ingress order maintenance and
>    * synchronization services to simplify application packet processing.
>    *
> @@ -25,12 +26,15 @@
>    *
>    * - The application-oriented Event API that includes functions to setup
>    *   an event device (configure it, setup its queues, ports and start it), to
> - *   establish the link between queues to port and to receive events, and so on.
> + *   establish the links between queues and ports to receive events, and so on.
>    *
>    * - The driver-oriented Event API that exports a function allowing
> - *   an event poll Mode Driver (PMD) to simultaneously register itself as
> + *   an event poll Mode Driver (PMD) to register itself as
>    *   an event device driver.
>    *
> + * Application-oriented Event API
> + * ------------------------------
> + *
>    * Event device components:
>    *
>    *                     +-----------------+
> @@ -75,27 +79,33 @@
>    *            |                                                           |
>    *            +-----------------------------------------------------------+
>    *
> - * Event device: A hardware or software-based event scheduler.
> + * **Event device**: A hardware or software-based event scheduler.
>    *
> - * Event: A unit of scheduling that encapsulates a packet or other datatype
> - * like SW generated event from the CPU, Crypto work completion notification,
> - * Timer expiry event notification etc as well as metadata.
> - * The metadata includes flow ID, scheduling type, event priority, event_type,
> + * **Event**: A unit of scheduling that encapsulates a packet or other datatype,

"Event: Represents an item of work and is the smallest unit of 
scheduling. An event carries metadata, such as queue ID, scheduling 
type, and event priority, and data such as one or more packets or other 
kinds of buffers. Examples of events are a software-generated item of 
work originating from a lcore carrying a packet to be processed, a 
crypto work completion notification and a timer expiry notification."

I've found "work scheduler" a helpful term for describing the role an 
event device serves in the system; an event thus represents an item of 
work. "Event" and "Event device" are also good names, but lead some 
people to think of libevent or an event loop, which is not exactly right.

> + * such as: SW generated event from the CPU, crypto work completion notification,
> + * timer expiry event notification etc., as well as metadata about the packet or data.
> + * The metadata includes a flow ID (if any), scheduling type, event priority, event_type,
>    * sub_event_type etc.
>    *
> - * Event queue: A queue containing events that are scheduled by the event dev.
> + * **Event queue**: A queue containing events that are scheduled by the event device.
>    * An event queue contains events of different flows associated with scheduling
>    * types, such as atomic, ordered, or parallel.
> + * Each event given to an eventdev must have a valid event queue id field in the metadata,
"eventdev" -> "event device"

> + * to specify on which event queue in the device the event must be placed,
> + * for later scheduling to a core.

Events aren't necessarily scheduled to cores, so remove the last part.

>    *
> - * Event port: An application's interface into the event dev for enqueue and
> + * **Event port**: An application's interface into the event dev for enqueue and
>    * dequeue operations. Each event port can be linked with one or more
>    * event queues for dequeue operations.
> - *
> - * By default, all the functions of the Event Device API exported by a PMD
> - * are lock-free functions which assume to not be invoked in parallel on
> - * different logical cores to work on the same target object. For instance,
> - * the dequeue function of a PMD cannot be invoked in parallel on two logical
> - * cores to operates on same  event port. Of course, this function
> + * Each port should be associated with a single core (enqueue and dequeue is not thread-safe).

Should, or must?

Either it's a MT safety issue, and any lcore can access the port with 
the proper serialization, or it's something where the lcore id is used to 
store state between invocations, or some other mechanism that prevents a 
port from being used by multiple threads (lcore or not).

> + * To schedule events to a core, the event device will schedule them to the event port(s)
> + * being polled by that core.

"core" -> "lcore" ?

> + *
> + * *NOTE*: By default, all the functions of the Event Device API exported by a PMD
> + * are lock-free functions, which must not be invoked on the same object in parallel on
> + * different logical cores.

This is a one-sentence contradiction. The term "lock free" implies a 
data structure which is MT safe, achieving this goal without the use of 
locks. A lock-free object thus *may* be called from different threads, 
including different lcore threads.

Ports are not MT safe, and thus one port should not be acted upon by 
more than one thread (either in parallel, or throughout the lifetime of 
the event device/port; see above).

The event device is MT safe, provided the different parallel callers use 
different ports.

A more subtle question, and one with a less obvious answer, is whether 
the caller also *must* be an EAL thread, or whether a registered non-EAL 
thread or even an unregistered non-EAL thread may call the "fast path" 
functions (enqueue, dequeue etc).

For EAL threads, the event device implementation may safely use 
non-preemption safe constructs (like the default ring variant and spin 
locks).

If the caller is a registered non-EAL thread or an EAL thread, the lcore 
id may be used to index various data structures.

If "lcore id"-less threads may call the fast path APIs, what are the MT 
safety guarantees in that case? Like rte_random.h, or something else.
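Whatever the answer on non-EAL threads turns out to be, the pattern that is unambiguously safe today is one thread per port. A sketch of that shape — dev_id, done, nb_ports, port_ids[], lcore_ids[] and handle() are all assumed to be defined by the application, and this is not the upstream example code:

```c
/* Sketch: give each worker thread its own private event port, so no
 * port is ever shared between threads; the device as a whole is then
 * used safely in parallel. */
static int
worker(void *arg)
{
	uint8_t port_id = *(uint8_t *)arg;  /* this thread's private port */
	struct rte_event ev;

	while (!done)
		if (rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0) > 0)
			handle(&ev);        /* application processing */
	return 0;
}

static void
launch_workers(void)
{
	/* launch one worker per port on its own lcore */
	for (unsigned int i = 0; i < nb_ports; i++)
		rte_eal_remote_launch(worker, &port_ids[i], lcore_ids[i]);
}
```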

> + * For instance, the dequeue function of a PMD cannot be invoked in parallel on two logical
> + * cores to operate on same  event port. Of course, this function
>    * can be invoked in parallel by different logical cores on different ports.
>    * It is the responsibility of the upper level application to enforce this rule.
>    *
> @@ -107,22 +117,19 @@
>    *
>    * Event devices are dynamically registered during the PCI/SoC device probing
>    * phase performed at EAL initialization time.
> - * When an Event device is being probed, a *rte_event_dev* structure and
> - * a new device identifier are allocated for that device. Then, the
> - * event_dev_init() function supplied by the Event driver matching the probed
> - * device is invoked to properly initialize the device.
> + * When an Event device is being probed, an *rte_event_dev* structure is allocated
> + * for it and the event_dev_init() function supplied by the Event driver
> + * is invoked to properly initialize the device.
>    *
> - * The role of the device init function consists of resetting the hardware or
> - * software event driver implementations.
> + * The role of the device init function is to reset the device hardware or
> + * to initialize the software event driver implementation.
>    *
> - * If the device init operation is successful, the correspondence between
> - * the device identifier assigned to the new device and its associated
> - * *rte_event_dev* structure is effectively registered.
> - * Otherwise, both the *rte_event_dev* structure and the device identifier are
> - * freed.
> + * If the device init operation is successful, the device is assigned a device
> + * id (dev_id) for application use.
> + * Otherwise, the *rte_event_dev* structure is freed.
>    *
>    * The functions exported by the application Event API to setup a device
> - * designated by its device identifier must be invoked in the following order:
> + * must be invoked in the following order:
>    *     - rte_event_dev_configure()
>    *     - rte_event_queue_setup()
>    *     - rte_event_port_setup()
> @@ -130,10 +137,15 @@
>    *     - rte_event_dev_start()
>    *
>    * Then, the application can invoke, in any order, the functions
> - * exported by the Event API to schedule events, dequeue events, enqueue events,
> - * change event queue(s) to event port [un]link establishment and so on.
> - *
> - * Application may use rte_event_[queue/port]_default_conf_get() to get the
> + * exported by the Event API to dequeue events, enqueue events,
> + * and link and unlink event queue(s) to event ports.
> + *
> + * Before configuring a device, an application should call rte_event_dev_info_get()
> + * to determine the capabilities of the event device, and any queue or port
> + * limits of that device. The parameters set in the various device configuration
> + * structures may need to be adjusted based on the max values provided in the
> + * device information structure returned from the info_get API.
> + * An application may use rte_event_[queue/port]_default_conf_get() to get the
>    * default configuration to set up an event queue or event port by
>    * overriding few default values.
>    *
> @@ -145,7 +157,11 @@
>    * when the device is stopped.
>    *
>    * Finally, an application can close an Event device by invoking the
> - * rte_event_dev_close() function.
> + * rte_event_dev_close() function. Once closed, a device cannot be
> + * reconfigured or restarted.
> + *
> + * Driver-Oriented Event API
> + * -------------------------
>    *
>    * Each function of the application Event API invokes a specific function
>    * of the PMD that controls the target device designated by its device
> @@ -164,10 +180,13 @@
>    * supplied in the *event_dev_ops* structure of the *rte_event_dev* structure.
>    *
>    * For performance reasons, the address of the fast-path functions of the
> - * Event driver is not contained in the *event_dev_ops* structure.
> + * Event driver are not contained in the *event_dev_ops* structure.

It's one address, so it should remain "is"?

>    * Instead, they are directly stored at the beginning of the *rte_event_dev*
>    * structure to avoid an extra indirect memory access during their invocation.
>    *
> + * Event Enqueue, Dequeue and Scheduling
> + * -------------------------------------
> + *
>    * RTE event device drivers do not use interrupts for enqueue or dequeue
>    * operation. Instead, Event drivers export Poll-Mode enqueue and dequeue
>    * functions to applications.
> @@ -179,21 +198,22 @@
>    * crypto work completion notification etc
>    *
>    * The *dequeue* operation gets one or more events from the event ports.
> - * The application process the events and send to downstream event queue through
> - * rte_event_enqueue_burst() if it is an intermediate stage of event processing,
> - * on the final stage, the application may use Tx adapter API for maintaining
> - * the ingress order and then send the packet/event on the wire.
> + * The application processes the events and sends them to a downstream event queue through
> + * rte_event_enqueue_burst(), if it is an intermediate stage of event processing.
> + * On the final stage of processing, the application may use the Tx adapter API for maintaining
> + * the event ingress order while sending the packet/event on the wire via NIC Tx.
>    *
>    * The point at which events are scheduled to ports depends on the device.
>    * For hardware devices, scheduling occurs asynchronously without any software
>    * intervention. Software schedulers can either be distributed
>    * (each worker thread schedules events to its own port) or centralized
>    * (a dedicated thread schedules to all ports). Distributed software schedulers
> - * perform the scheduling in rte_event_dequeue_burst(), whereas centralized
> - * scheduler logic need a dedicated service core for scheduling.
> - * The RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag is not set
> - * indicates the device is centralized and thus needs a dedicated scheduling
> - * thread that repeatedly calls software specific scheduling function.
> + * perform the scheduling inside the enqueue or dequeue functions, whereas centralized
> + * software schedulers need a dedicated service core for scheduling.
> + * The absence of the RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag
> + * indicates that the device is centralized and thus needs a dedicated scheduling
> + * thread, generally a service core,
> + * that repeatedly calls the software specific scheduling function.

In the SW case, what you have is a service that needs to be mapped to a 
service lcore.

"generally a RTE service that should be mapped to one or more service 
lcores"
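For reference, wiring such a centralized scheduler to a service lcore looks roughly like the following — a sketch only, assuming dev_id names a configured device without the DISTRIBUTED_SCHED capability and sched_lcore is a spare lcore id:

```c
/* Sketch: map the event device's scheduling service to a service lcore.
 * rte_event_dev_service_id_get() returns 0 when the device exposes
 * its scheduler as a service. */
uint32_t service_id;

if (rte_event_dev_service_id_get(dev_id, &service_id) == 0) {
	rte_service_lcore_add(sched_lcore);
	rte_service_map_lcore_set(service_id, sched_lcore, 1);
	rte_service_runstate_set(service_id, 1);
	rte_service_lcore_start(sched_lcore);
}
```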

>    *
>    * An event driven worker thread has following typical workflow on fastpath:
>    * \code{.c}
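The quoted hunk is truncated at the code example; the typical fastpath loop it refers to looks roughly like this (a reconstruction, not the verbatim upstream snippet — dev_id, port_id, timeout, next_stage_queue and process() are assumed to be set up by the application):

```c
/* Sketch of the typical eventdev worker-thread fastpath loop:
 * dequeue from this thread's port, process, forward to the next
 * pipeline stage's queue. */
while (!done) {
	struct rte_event ev;
	uint16_t nb = rte_event_dequeue_burst(dev_id, port_id, &ev, 1, timeout);

	if (nb == 0)
		continue;
	process(&ev);                    /* application-specific work */
	ev.queue_id = next_stage_queue;  /* target the next stage */
	ev.op = RTE_EVENT_OP_FORWARD;
	rte_event_enqueue_burst(dev_id, port_id, &ev, 1);
}
```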
  
Bruce Richardson Jan. 23, 2024, 9:06 a.m. UTC | #2
On Tue, Jan 23, 2024 at 09:57:58AM +0100, Mattias Rönnblom wrote:
> On 2024-01-19 18:43, Bruce Richardson wrote:
> > Make some textual improvements to the introduction to eventdev and event
> > devices in the eventdev header file. This text appears in the doxygen
> > output for the header file, and introduces the key concepts, for
> > example: events, event devices, queues, ports and scheduling.
> > 
> 
> Great stuff, Bruce.
> 
Thanks, good feedback here. I'll take that into account and do a v3 later
when all feedback on this v2 is in.

/Bruce
  
Mattias Rönnblom Jan. 24, 2024, 11:37 a.m. UTC | #3
On 2024-01-23 10:06, Bruce Richardson wrote:
> On Tue, Jan 23, 2024 at 09:57:58AM +0100, Mattias Rönnblom wrote:
>> On 2024-01-19 18:43, Bruce Richardson wrote:
>>> Make some textual improvements to the introduction to eventdev and event
>>> devices in the eventdev header file. This text appears in the doxygen
>>> output for the header file, and introduces the key concepts, for
>>> example: events, event devices, queues, ports and scheduling.
>>>
>>
>> Great stuff, Bruce.
>>
> Thanks, good feedback here. I'll take that into account and do a v3 later
> when all feedback on this v2 is in.
> 
> /Bruce

Sorry for such a piecemeal review. I didn't have time to do it all in 
one go.
  
Bruce Richardson Jan. 31, 2024, 1:45 p.m. UTC | #4
On Tue, Jan 23, 2024 at 09:57:58AM +0100, Mattias Rönnblom wrote:
> On 2024-01-19 18:43, Bruce Richardson wrote:
> > Make some textual improvements to the introduction to eventdev and event
> > devices in the eventdev header file. This text appears in the doxygen
> > output for the header file, and introduces the key concepts, for
> > example: events, event devices, queues, ports and scheduling.
> > 
> 
> Great stuff, Bruce.
> 
> > This patch makes the following improvements:
> > * small textual fixups, e.g. correcting use of singular/plural
> > * rewrites of some sentences to improve clarity
> > * using doxygen markdown to split the whole large block up into
> >    sections, thereby making it easier to read.
> > 
> > No large-scale changes are made, and blocks are not reordered
> > 
> > Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
> > ---
> >   lib/eventdev/rte_eventdev.h | 112 +++++++++++++++++++++---------------
> >   1 file changed, 66 insertions(+), 46 deletions(-)
> > 
> > diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
> > index ec9b02455d..a36c89c7a4 100644
> > --- a/lib/eventdev/rte_eventdev.h
> > +++ b/lib/eventdev/rte_eventdev.h
> > @@ -12,12 +12,13 @@
> >    * @file
> >    *
> >    * RTE Event Device API
> > + * ====================
> >    *
> >    * In a polling model, lcores poll ethdev ports and associated rx queues
> 
> "In a polling model, lcores pick up packets from Ethdev ports and associated
> RX queues, run the processing to completion, and enqueue the completed
> packets to a TX queue. NIC-level receive-side scaling (RSS) may be used to
> balance the load across multiple CPU cores."
> 
> I thought it might be worth being a little more verbose about what reference
> model Eventdev is compared to. Maybe you can add "traditional" or
> "archetypal", or "simple" as a prefix to the "polling model". (I think I
> would call this a "simple run-to-completion model" rather than "polling
> model".)
> 
> "By contrast, in Eventdev, ingressing* packets are fed into an event device,
> which schedules packets across available lcores in accordance with its
> configuration. This event-driven programming model offers applications
> automatic multicore scaling, dynamic load balancing, pipelining, packet
> order maintenance, synchronization, and quality of service."
> 
> * Is this a word?
> 
Ack, taking these suggestions with minor tweaks. Changed "ingressing" to
"incoming", which should be clear enough and is definitely a word.

> > - * directly to look for packet. In an event driven model, by contrast, lcores
> > - * call the scheduler that selects packets for them based on programmer
> > - * specified criteria. Eventdev library adds support for event driven
> > - * programming model, which offer applications automatic multicore scaling,
> > + * directly to look for packets. In an event driven model, in contrast, lcores
> > + * call a scheduler that selects packets for them based on programmer
> > + * specified criteria. The eventdev library adds support for the event driven
> > + * programming model, which offers applications automatic multicore scaling,
> >    * dynamic load balancing, pipelining, packet ingress order maintenance and
> >    * synchronization services to simplify application packet processing.
> >    *
> > @@ -25,12 +26,15 @@
> >    *
> >    * - The application-oriented Event API that includes functions to setup
> >    *   an event device (configure it, setup its queues, ports and start it), to
> > - *   establish the link between queues to port and to receive events, and so on.
> > + *   establish the links between queues and ports to receive events, and so on.
> >    *
> >    * - The driver-oriented Event API that exports a function allowing
> > - *   an event poll Mode Driver (PMD) to simultaneously register itself as
> > + *   an event poll Mode Driver (PMD) to register itself as
> >    *   an event device driver.
> >    *
> > + * Application-oriented Event API
> > + * ------------------------------
> > + *
> >    * Event device components:
> >    *
> >    *                     +-----------------+
> > @@ -75,27 +79,33 @@
> >    *            |                                                           |
> >    *            +-----------------------------------------------------------+
> >    *
> > - * Event device: A hardware or software-based event scheduler.
> > + * **Event device**: A hardware or software-based event scheduler.
> >    *
> > - * Event: A unit of scheduling that encapsulates a packet or other datatype
> > - * like SW generated event from the CPU, Crypto work completion notification,
> > - * Timer expiry event notification etc as well as metadata.
> > - * The metadata includes flow ID, scheduling type, event priority, event_type,
> > + * **Event**: A unit of scheduling that encapsulates a packet or other datatype,
> 
> "Event: Represents an item of work and is the smallest unit of scheduling.
> An event carries metadata, such as queue ID, scheduling type, and event
> priority, and data such as one or more packets or other kinds of buffers.
> Examples of events are a software-generated item of work originating from a
> lcore carrying a packet to be processed, a crypto work completion
> notification and a timer expiry notification."
> 
> I've found "work scheduler" a helpful term for describing the role an event
> device serves in the system; an event thus represents an item of work.
> "Event" and "Event device" are also good names, but lead some people to
> think of libevent or an event loop, which is not exactly right.
> 

Ack.

> > + * such as: SW generated event from the CPU, crypto work completion notification,
> > + * timer expiry event notification etc., as well as metadata about the packet or data.
> > + * The metadata includes a flow ID (if any), scheduling type, event priority, event_type,
> >    * sub_event_type etc.
> >    *
> > - * Event queue: A queue containing events that are scheduled by the event dev.
> > + * **Event queue**: A queue containing events that are scheduled by the event device.
> >    * An event queue contains events of different flows associated with scheduling
> >    * types, such as atomic, ordered, or parallel.
> > + * Each event given to an eventdev must have a valid event queue id field in the metadata,
> "eventdev" -> "event device"
> 
> > + * to specify on which event queue in the device the event must be placed,
> > + * for later scheduling to a core.
> 
> Events aren't necessarily scheduled to cores, so remove the last part.
> 
> >    *
> > - * Event port: An application's interface into the event dev for enqueue and
> > + * **Event port**: An application's interface into the event dev for enqueue and
> >    * dequeue operations. Each event port can be linked with one or more
> >    * event queues for dequeue operations.
> > - *
> > - * By default, all the functions of the Event Device API exported by a PMD
> > - * are lock-free functions which assume to not be invoked in parallel on
> > - * different logical cores to work on the same target object. For instance,
> > - * the dequeue function of a PMD cannot be invoked in parallel on two logical
> > - * cores to operates on same  event port. Of course, this function
> > + * Each port should be associated with a single core (enqueue and dequeue is not thread-safe).
> 
> Should, or must?
> 
> Either it's a MT safety issue, and any lcore can access the port with the
> proper serialization, or it's something where the lcore id is used to store
> state between invocations, or some other mechanism that prevents a port from
> being used by multiple threads (lcore or not).
> 

Rewording this to start with the fact that enqueue and dequeue functions are
not "thread-safe", and then stating that the expected configuration is that
each port is assigned to an lcore, otherwise sync mechanisms are needed.

> > + * To schedule events to a core, the event device will schedule them to the event port(s)
> > + * being polled by that core.
> 
> "core" -> "lcore" ?
> 
> > + *
> > + * *NOTE*: By default, all the functions of the Event Device API exported by a PMD
> > + * are lock-free functions, which must not be invoked on the same object in parallel on
> > + * different logical cores.
> 
> This is a one-sentence contradiction. The term "lock free" implies a data
> structure which is MT safe, achieving this goal without the use of locks. A
> lock-free object thus *may* be called from different threads, including
> different lcore threads.
> 

Changed lock-free to non-thread-safe.

> Ports are not MT safe, and thus one port should not be acted upon by more
> than one thread (either in parallel, or throughout the lifetime of the event
> device/port; see above).
> 
> The event device is MT safe, provided the different parallel callers use
> different ports.
> 
> A more subtle question, and one with a less obvious answer, is whether the
> caller also *must* be an EAL thread, or whether a registered non-EAL thread
> or even an unregistered non-EAL thread may call the "fast path" functions
> (enqueue, dequeue etc).
> 
> For EAL threads, the event device implementation may safely use
> non-preemption safe constructs (like the default ring variant and spin
> locks).
> 
> If the caller is a registered non-EAL thread or an EAL thread, the lcore id
> may be used to index various data structures.
> 
> If "lcore id"-less threads may call the fast path APIs, what are the MT
> safety guarantees in that case? Like rte_random.h, or something else.
> 

I don't know the answer to this. I believe right now that most/all eventdev
functions are callable on non-EAL threads, but I'm not sure we want to
guarantee that - e.g. some drivers may require registered threads. I think
we need to resolve and document this, but I'm not going to do so in this
patch(set).

> > + * For instance, the dequeue function of a PMD cannot be invoked in parallel on two logical
> > + * cores to operate on same  event port. Of course, this function
> >    * can be invoked in parallel by different logical cores on different ports.
> >    * It is the responsibility of the upper level application to enforce this rule.
> >    *
> > @@ -107,22 +117,19 @@
> >    *
> >    * Event devices are dynamically registered during the PCI/SoC device probing
> >    * phase performed at EAL initialization time.
> > - * When an Event device is being probed, a *rte_event_dev* structure and
> > - * a new device identifier are allocated for that device. Then, the
> > - * event_dev_init() function supplied by the Event driver matching the probed
> > - * device is invoked to properly initialize the device.
> > + * When an Event device is being probed, an *rte_event_dev* structure is allocated
> > + * for it and the event_dev_init() function supplied by the Event driver
> > + * is invoked to properly initialize the device.
> >    *
> > - * The role of the device init function consists of resetting the hardware or
> > - * software event driver implementations.
> > + * The role of the device init function is to reset the device hardware or
> > + * to initialize the software event driver implementation.
> >    *
> > - * If the device init operation is successful, the correspondence between
> > - * the device identifier assigned to the new device and its associated
> > - * *rte_event_dev* structure is effectively registered.
> > - * Otherwise, both the *rte_event_dev* structure and the device identifier are
> > - * freed.
> > + * If the device init operation is successful, the device is assigned a device
> > + * id (dev_id) for application use.
> > + * Otherwise, the *rte_event_dev* structure is freed.
> >    *
> >    * The functions exported by the application Event API to setup a device
> > - * designated by its device identifier must be invoked in the following order:
> > + * must be invoked in the following order:
> >    *     - rte_event_dev_configure()
> >    *     - rte_event_queue_setup()
> >    *     - rte_event_port_setup()
> > @@ -130,10 +137,15 @@
> >    *     - rte_event_dev_start()
> >    *
> >    * Then, the application can invoke, in any order, the functions
> > - * exported by the Event API to schedule events, dequeue events, enqueue events,
> > - * change event queue(s) to event port [un]link establishment and so on.
> > - *
> > - * Application may use rte_event_[queue/port]_default_conf_get() to get the
> > + * exported by the Event API to dequeue events, enqueue events,
> > + * and link and unlink event queue(s) to event ports.
> > + *
> > + * Before configuring a device, an application should call rte_event_dev_info_get()
> > + * to determine the capabilities of the event device, and any queue or port
> > + * limits of that device. The parameters set in the various device configuration
> > + * structures may need to be adjusted based on the max values provided in the
> > + * device information structure returned from the info_get API.
> > + * An application may use rte_event_[queue/port]_default_conf_get() to get the
> >    * default configuration to set up an event queue or event port by
> >    * overriding few default values.
> >    *
> > @@ -145,7 +157,11 @@
> >    * when the device is stopped.
> >    *
> >    * Finally, an application can close an Event device by invoking the
> > - * rte_event_dev_close() function.
> > + * rte_event_dev_close() function. Once closed, a device cannot be
> > + * reconfigured or restarted.
> > + *
> > + * Driver-Oriented Event API
> > + * -------------------------
> >    *
> >    * Each function of the application Event API invokes a specific function
> >    * of the PMD that controls the target device designated by its device
> > @@ -164,10 +180,13 @@
> >    * supplied in the *event_dev_ops* structure of the *rte_event_dev* structure.
> >    *
> >    * For performance reasons, the address of the fast-path functions of the
> > - * Event driver is not contained in the *event_dev_ops* structure.
> > + * Event driver are not contained in the *event_dev_ops* structure.
> 
> It's one address, so it should remain "is"?

I think it should be "addresses of the functions", so adjusting that and
keeping it as "are". Next sentence already uses "they" in the plural too,
so then everything aligns nicely.

> 
> >    * Instead, they are directly stored at the beginning of the *rte_event_dev*
> >    * structure to avoid an extra indirect memory access during their invocation.
> >    *
> > + * Event Enqueue, Dequeue and Scheduling
> > + * -------------------------------------
> > + *
> >    * RTE event device drivers do not use interrupts for enqueue or dequeue
> >    * operation. Instead, Event drivers export Poll-Mode enqueue and dequeue
> >    * functions to applications.
> > @@ -179,21 +198,22 @@
> >    * crypto work completion notification etc
> >    *
> >    * The *dequeue* operation gets one or more events from the event ports.
> > - * The application process the events and send to downstream event queue through
> > - * rte_event_enqueue_burst() if it is an intermediate stage of event processing,
> > - * on the final stage, the application may use Tx adapter API for maintaining
> > - * the ingress order and then send the packet/event on the wire.
> > + * The application processes the events and sends them to a downstream event queue through
> > + * rte_event_enqueue_burst(), if it is an intermediate stage of event processing.
> > + * On the final stage of processing, the application may use the Tx adapter API for maintaining
> > + * the event ingress order while sending the packet/event on the wire via NIC Tx.
> >    *
> >    * The point at which events are scheduled to ports depends on the device.
> >    * For hardware devices, scheduling occurs asynchronously without any software
> >    * intervention. Software schedulers can either be distributed
> >    * (each worker thread schedules events to its own port) or centralized
> >    * (a dedicated thread schedules to all ports). Distributed software schedulers
> > - * perform the scheduling in rte_event_dequeue_burst(), whereas centralized
> > - * scheduler logic need a dedicated service core for scheduling.
> > - * The RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag is not set
> > - * indicates the device is centralized and thus needs a dedicated scheduling
> > - * thread that repeatedly calls software specific scheduling function.
> > + * perform the scheduling inside the enqueue or dequeue functions, whereas centralized
> > + * software schedulers need a dedicated service core for scheduling.
> > + * The absence of the RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag
> > + * indicates that the device is centralized and thus needs a dedicated scheduling
> > + * thread, generally a service core,
> > + * that repeatedly calls the software specific scheduling function.
> 
> In the SW case, what you have is a service that needs to be mapped to a
> service lcore.
> 
> "generally a RTE service that should be mapped to one or more service
> lcores"
> 
Ack, will use that rewording.
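
As a side note for readers of the thread: in the centralized (non-DISTRIBUTED_SCHED) case, the scheduling work is exposed through the service core framework. A minimal sketch of the mapping step, using the public rte_service/rte_eventdev APIs, might look like the following (illustrative only; `dev_id` is assumed already known, lcore 1 is an arbitrary example, and error handling is omitted):

```c
/* Sketch: run a centralized SW scheduler on a dedicated service lcore.
 * rte_event_dev_service_id_get() returns 0 only if the device exposes
 * its scheduler as a service (i.e. DISTRIBUTED_SCHED is not set). */
uint32_t service_id;
const uint32_t service_lcore = 1; /* example lcore reserved for scheduling */

if (rte_event_dev_service_id_get(dev_id, &service_id) == 0) {
	rte_service_lcore_add(service_lcore);                 /* enroll lcore as a service core */
	rte_service_map_lcore_set(service_id, service_lcore, 1); /* map service to that lcore */
	rte_service_runstate_set(service_id, 1);              /* allow the service to run */
	rte_service_lcore_start(service_lcore);               /* start the service loop */
}
```

The service can equally be mapped to more than one service lcore, per Mattias's wording above.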
  

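For reference, the setup ordering the header comment documents (query device info, configure, set up queues and ports, link, start) can be sketched as below. This is an illustrative fragment only: a single-queue, single-port configuration with values taken from the info struct, error handling elided, and `dev_id` assumed already obtained from device probing:

```c
struct rte_event_dev_info info;
struct rte_event_dev_config config = {0};
struct rte_event_queue_conf qconf;
struct rte_event_port_conf pconf;

rte_event_dev_info_get(dev_id, &info);       /* query capabilities/limits first */

config.nb_event_queues = 1;                  /* must not exceed info.max_event_queues */
config.nb_event_ports = 1;                   /* must not exceed info.max_event_ports */
config.nb_events_limit = info.max_num_events;
config.nb_event_queue_flows = info.max_event_queue_flows;
config.nb_event_port_dequeue_depth = info.max_event_port_dequeue_depth;
config.nb_event_port_enqueue_depth = info.max_event_port_enqueue_depth;
config.dequeue_timeout_ns = info.min_dequeue_timeout_ns;
rte_event_dev_configure(dev_id, &config);

rte_event_queue_default_conf_get(dev_id, 0, &qconf); /* start from defaults */
rte_event_queue_setup(dev_id, 0, &qconf);

rte_event_port_default_conf_get(dev_id, 0, &pconf);
rte_event_port_setup(dev_id, 0, &pconf);

/* NULL queue/priority arrays link all configured queues at default priority */
rte_event_port_link(dev_id, 0, NULL, NULL, 0);

rte_event_dev_start(dev_id);
```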
Patch

diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index ec9b02455d..a36c89c7a4 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -12,12 +12,13 @@ 
  * @file
  *
  * RTE Event Device API
+ * ====================
  *
  * In a polling model, lcores poll ethdev ports and associated rx queues
- * directly to look for packet. In an event driven model, by contrast, lcores
- * call the scheduler that selects packets for them based on programmer
- * specified criteria. Eventdev library adds support for event driven
- * programming model, which offer applications automatic multicore scaling,
+ * directly to look for packets. In an event driven model, in contrast, lcores
+ * call a scheduler that selects packets for them based on programmer
+ * specified criteria. The eventdev library adds support for the event driven
+ * programming model, which offers applications automatic multicore scaling,
  * dynamic load balancing, pipelining, packet ingress order maintenance and
  * synchronization services to simplify application packet processing.
  *
@@ -25,12 +26,15 @@ 
  *
  * - The application-oriented Event API that includes functions to setup
  *   an event device (configure it, setup its queues, ports and start it), to
- *   establish the link between queues to port and to receive events, and so on.
+ *   establish the links between queues and ports to receive events, and so on.
  *
  * - The driver-oriented Event API that exports a function allowing
- *   an event poll Mode Driver (PMD) to simultaneously register itself as
+ *   an event poll Mode Driver (PMD) to register itself as
  *   an event device driver.
  *
+ * Application-oriented Event API
+ * ------------------------------
+ *
  * Event device components:
  *
  *                     +-----------------+
@@ -75,27 +79,33 @@ 
  *            |                                                           |
  *            +-----------------------------------------------------------+
  *
- * Event device: A hardware or software-based event scheduler.
+ * **Event device**: A hardware or software-based event scheduler.
  *
- * Event: A unit of scheduling that encapsulates a packet or other datatype
- * like SW generated event from the CPU, Crypto work completion notification,
- * Timer expiry event notification etc as well as metadata.
- * The metadata includes flow ID, scheduling type, event priority, event_type,
+ * **Event**: A unit of scheduling that encapsulates a packet or other datatype,
+ * such as: SW generated event from the CPU, crypto work completion notification,
+ * timer expiry event notification etc., as well as metadata about the packet or data.
+ * The metadata includes a flow ID (if any), scheduling type, event priority, event_type,
  * sub_event_type etc.
  *
- * Event queue: A queue containing events that are scheduled by the event dev.
+ * **Event queue**: A queue containing events that are scheduled by the event device.
  * An event queue contains events of different flows associated with scheduling
  * types, such as atomic, ordered, or parallel.
+ * Each event given to an eventdev must have a valid event queue id field in the metadata,
+ * to specify on which event queue in the device the event must be placed,
+ * for later scheduling to a core.
  *
- * Event port: An application's interface into the event dev for enqueue and
+ * **Event port**: An application's interface into the event dev for enqueue and
  * dequeue operations. Each event port can be linked with one or more
  * event queues for dequeue operations.
- *
- * By default, all the functions of the Event Device API exported by a PMD
- * are lock-free functions which assume to not be invoked in parallel on
- * different logical cores to work on the same target object. For instance,
- * the dequeue function of a PMD cannot be invoked in parallel on two logical
- * cores to operates on same  event port. Of course, this function
+ * Each port should be associated with a single core (enqueue and dequeue is not thread-safe).
+ * To schedule events to a core, the event device will schedule them to the event port(s)
+ * being polled by that core.
+ *
+ * *NOTE*: By default, all the functions of the Event Device API exported by a PMD
+ * are lock-free functions, which must not be invoked on the same object in parallel on
+ * different logical cores.
+ * For instance, the dequeue function of a PMD cannot be invoked in parallel on two logical
+ * cores to operate on same  event port. Of course, this function
  * can be invoked in parallel by different logical cores on different ports.
  * It is the responsibility of the upper level application to enforce this rule.
  *
@@ -107,22 +117,19 @@ 
  *
  * Event devices are dynamically registered during the PCI/SoC device probing
  * phase performed at EAL initialization time.
- * When an Event device is being probed, a *rte_event_dev* structure and
- * a new device identifier are allocated for that device. Then, the
- * event_dev_init() function supplied by the Event driver matching the probed
- * device is invoked to properly initialize the device.
+ * When an Event device is being probed, an *rte_event_dev* structure is allocated
+ * for it and the event_dev_init() function supplied by the Event driver
+ * is invoked to properly initialize the device.
  *
- * The role of the device init function consists of resetting the hardware or
- * software event driver implementations.
+ * The role of the device init function is to reset the device hardware or
+ * to initialize the software event driver implementation.
  *
- * If the device init operation is successful, the correspondence between
- * the device identifier assigned to the new device and its associated
- * *rte_event_dev* structure is effectively registered.
- * Otherwise, both the *rte_event_dev* structure and the device identifier are
- * freed.
+ * If the device init operation is successful, the device is assigned a device
+ * id (dev_id) for application use.
+ * Otherwise, the *rte_event_dev* structure is freed.
  *
  * The functions exported by the application Event API to setup a device
- * designated by its device identifier must be invoked in the following order:
+ * must be invoked in the following order:
  *     - rte_event_dev_configure()
  *     - rte_event_queue_setup()
  *     - rte_event_port_setup()
@@ -130,10 +137,15 @@ 
  *     - rte_event_dev_start()
  *
  * Then, the application can invoke, in any order, the functions
- * exported by the Event API to schedule events, dequeue events, enqueue events,
- * change event queue(s) to event port [un]link establishment and so on.
- *
- * Application may use rte_event_[queue/port]_default_conf_get() to get the
+ * exported by the Event API to dequeue events, enqueue events,
+ * and link and unlink event queue(s) to event ports.
+ *
+ * Before configuring a device, an application should call rte_event_dev_info_get()
+ * to determine the capabilities of the event device, and any queue or port
+ * limits of that device. The parameters set in the various device configuration
+ * structures may need to be adjusted based on the max values provided in the
+ * device information structure returned from the info_get API.
+ * An application may use rte_event_[queue/port]_default_conf_get() to get the
  * default configuration to set up an event queue or event port by
  * overriding few default values.
  *
@@ -145,7 +157,11 @@ 
  * when the device is stopped.
  *
  * Finally, an application can close an Event device by invoking the
- * rte_event_dev_close() function.
+ * rte_event_dev_close() function. Once closed, a device cannot be
+ * reconfigured or restarted.
+ *
+ * Driver-Oriented Event API
+ * -------------------------
  *
  * Each function of the application Event API invokes a specific function
  * of the PMD that controls the target device designated by its device
@@ -164,10 +180,13 @@ 
  * supplied in the *event_dev_ops* structure of the *rte_event_dev* structure.
  *
  * For performance reasons, the address of the fast-path functions of the
- * Event driver is not contained in the *event_dev_ops* structure.
+ * Event driver are not contained in the *event_dev_ops* structure.
  * Instead, they are directly stored at the beginning of the *rte_event_dev*
  * structure to avoid an extra indirect memory access during their invocation.
  *
+ * Event Enqueue, Dequeue and Scheduling
+ * -------------------------------------
+ *
  * RTE event device drivers do not use interrupts for enqueue or dequeue
  * operation. Instead, Event drivers export Poll-Mode enqueue and dequeue
  * functions to applications.
@@ -179,21 +198,22 @@ 
  * crypto work completion notification etc
  *
  * The *dequeue* operation gets one or more events from the event ports.
- * The application process the events and send to downstream event queue through
- * rte_event_enqueue_burst() if it is an intermediate stage of event processing,
- * on the final stage, the application may use Tx adapter API for maintaining
- * the ingress order and then send the packet/event on the wire.
+ * The application processes the events and sends them to a downstream event queue through
+ * rte_event_enqueue_burst(), if it is an intermediate stage of event processing.
+ * On the final stage of processing, the application may use the Tx adapter API for maintaining
+ * the event ingress order while sending the packet/event on the wire via NIC Tx.
  *
  * The point at which events are scheduled to ports depends on the device.
  * For hardware devices, scheduling occurs asynchronously without any software
  * intervention. Software schedulers can either be distributed
  * (each worker thread schedules events to its own port) or centralized
  * (a dedicated thread schedules to all ports). Distributed software schedulers
- * perform the scheduling in rte_event_dequeue_burst(), whereas centralized
- * scheduler logic need a dedicated service core for scheduling.
- * The RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag is not set
- * indicates the device is centralized and thus needs a dedicated scheduling
- * thread that repeatedly calls software specific scheduling function.
+ * perform the scheduling inside the enqueue or dequeue functions, whereas centralized
+ * software schedulers need a dedicated service core for scheduling.
+ * The absence of the RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED capability flag
+ * indicates that the device is centralized and thus needs a dedicated scheduling
+ * thread, generally a service core,
+ * that repeatedly calls the software specific scheduling function.
  *
  * An event driven worker thread has following typical workflow on fastpath:
  * \code{.c}