[v3,1/8] eventdev: introduce event vector capability

Message ID 20210316200156.252-2-pbhagavatula@marvell.com (mailing list archive)
State Superseded, archived
Delegated to: Jerin Jacob
Series: Introduce event vectorization

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Performance-Testing fail build patch failure

Commit Message

Pavan Nikhilesh Bhagavatula March 16, 2021, 8:01 p.m. UTC
  From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Introduce the rte_event_vector data structure, which is capable of
holding multiple uintptr_t values of the same flow, thereby allowing
applications to vectorize their pipeline and reduce the complexity of
pipelining events across multiple stages.
This approach also reduces the scheduling overhead on an event device.

Add an event vector mempool create handler to create mempools based on
the best mempool ops available on a given platform.

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 doc/guides/prog_guide/eventdev.rst |  36 +++++++++-
 lib/librte_eventdev/rte_eventdev.h | 110 ++++++++++++++++++++++++++++-
 lib/librte_eventdev/version.map    |   3 +
 3 files changed, 146 insertions(+), 3 deletions(-)
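
To make the commit message concrete, here is a minimal, hedged sketch of
how an application might create a vector pool and emit a CPU-generated
vector event with the API this patch adds. The vec union member of
rte_event is assumed from the documentation hunk (the review below notes
it is missing from this revision); the scheduling type, ids and sizes
are illustrative only.

    #include <errno.h>
    #include <rte_eventdev.h>

    /* Hedged sketch: build and enqueue one vector event. ev.vec is
     * assumed from the doc update in this patch (reported missing from
     * v3 in the review below). All sizes and ids are illustrative.
     */
    static int
    enqueue_ptr_vector(uint8_t dev_id, uint8_t port_id, uint8_t queue_id,
                       struct rte_mempool *vpool, void **objs, uint16_t n)
    {
            struct rte_event_vector *vec;
            struct rte_event ev = {0};
            uint16_t i;

            /* Get a vector object from the helper-created mempool. */
            if (rte_mempool_get(vpool, (void **)&vec) < 0)
                    return -ENOENT;

            for (i = 0; i < n; i++)
                    vec->ptrs[i] = objs[i];
            vec->nb_elem = n;

            ev.event_type = RTE_EVENT_TYPE_CPU_VECTOR;
            ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
            ev.queue_id = queue_id;
            ev.op = RTE_EVENT_OP_NEW;
            ev.vec = vec; /* assumed union member, per the doc hunk */

            if (rte_event_enqueue_burst(dev_id, port_id, &ev, 1) != 1)
                    return -ENOSPC;
            return 0;
    }

A matching pool would come from
rte_event_vector_pool_create("vector_pool", nb_vecs, cache_sz, n, socket_id)
as the commit message describes.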
  

Comments

Jayatheerthan, Jay March 18, 2021, 6:19 a.m. UTC | #1
Posting my comments at the beginning. No further comments.

@pbhagavatula@marvell.com, it seems like v3 patch 1 drops some changes from v2 patch 1 (e.g., the missing event vector pointer in rte_event), causing a build failure. Could you have a look?

Thanks!

-Jay
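
For context, a hedged reconstruction of the union member this refers to,
based on the ``struct rte_event_vector *vec`` entry in the eventdev.rst
hunk of this patch (a hypothetical sketch, not the author's v4 fix):

    /* Hedged sketch of the rte_event payload union as the doc update
     * describes it. The vec member is the one reported missing from v3;
     * its exact placement here is an assumption.
     */
    union {
            uint64_t u64;                 /* opaque 64-bit value */
            void *event_ptr;              /* opaque pointer */
            struct rte_mbuf *mbuf;        /* single packet */
            struct rte_event_vector *vec; /* vector of elements */
    };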


Pavan Nikhilesh Bhagavatula March 18, 2021, 6:23 a.m. UTC | #2
>Posting my comments at the beginning. No further comments.
>
>@pbhagavatula@marvell.com, it seems like v3 patch 1 drops some changes
>from v2 patch 1 (e.g., the missing event vector pointer in rte_event),
>causing a build failure. Could you have a look?

Ah, my bad, my auto-formatter really messed up the series.
I will fix it in v4.

>
>Thanks!
>
>-Jay
>

Thanks,
Pavan.


Patch

diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst
index ccde086f6..fda9c3743 100644
--- a/doc/guides/prog_guide/eventdev.rst
+++ b/doc/guides/prog_guide/eventdev.rst
@@ -63,13 +63,45 @@  the actual event being scheduled is. The payload is a union of the following:
 * ``uint64_t u64``
 * ``void *event_ptr``
 * ``struct rte_mbuf *mbuf``
+* ``struct rte_event_vector *vec``
 
-These three items in a union occupy the same 64 bits at the end of the rte_event
+These four items in a union occupy the same 64 bits at the end of the rte_event
 structure. The application can utilize the 64 bits directly by accessing the
-u64 variable, while the event_ptr and mbuf are provided as convenience
+u64 variable, while the event_ptr, mbuf and vec are provided as convenience
 variables.  For example the mbuf pointer in the union can used to schedule a
 DPDK packet.
 
+Event Vector
+~~~~~~~~~~~~
+
+The rte_event_vector struct contains a vector of elements defined by the event
+type specified in the ``rte_event``. The event_vector structure contains the
+following data:
+
+* ``nb_elem`` - The number of elements held within the vector.
+
+Similar to ``rte_event``, the payload of an event vector is also a union,
+allowing flexibility in what the actual vector is.
+
+* ``struct rte_mbuf *mbufs[0]`` - An array of mbufs.
+* ``void *ptrs[0]`` - An array of pointers.
+* ``uint64_t u64s[0]`` - An array of uint64_t elements.
+
+The size of the event vector is related to the total number of elements it is
+configured to hold; this is achieved by making ``rte_event_vector`` a
+variable-length structure.
+A helper function is provided to create a mempool that holds event vectors. It
+takes the name of the pool, the total number of required ``rte_event_vector``
+objects, the cache size, the number of elements per vector and the socket id.
+
+.. code-block:: c
+
+        rte_event_vector_pool_create("vector_pool", nb_event_vectors, cache_sz,
+                                     nb_elements_per_vector, socket_id);
+
+The function ``rte_event_vector_pool_create`` creates a mempool with the best
+platform mempool ops.
+
 Queues
 ~~~~~~
 
diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h
index ce1fc2ce0..c0d01c873 100644
--- a/lib/librte_eventdev/rte_eventdev.h
+++ b/lib/librte_eventdev/rte_eventdev.h
@@ -212,8 +212,10 @@  extern "C" {
 
 #include <rte_common.h>
 #include <rte_config.h>
-#include <rte_memory.h>
 #include <rte_errno.h>
+#include <rte_mbuf_pool_ops.h>
+#include <rte_memory.h>
+#include <rte_mempool.h>
 
 #include "rte_eventdev_trace_fp.h"
 
@@ -913,6 +915,25 @@  rte_event_dev_stop_flush_callback_register(uint8_t dev_id,
 int
 rte_event_dev_close(uint8_t dev_id);
 
+/**
+ * Event vector structure.
+ */
+struct rte_event_vector {
+	uint64_t nb_elem : 16;
+	/**< Number of elements in this event vector. */
+	uint64_t rsvd : 48;
+	uint64_t impl_opaque;
+	union {
+		struct rte_mbuf *mbufs[0];
+		void *ptrs[0];
+		uint64_t u64s[0];
+	} __rte_aligned(16);
+	/**< Start of the vector array union. Depending upon the event type the
+	 * vector array can be an array of mbufs or pointers or opaque u64
+	 * values.
+	 */
+};
+
 /* Scheduler type definitions */
 #define RTE_SCHED_TYPE_ORDERED          0
 /**< Ordered scheduling
@@ -986,6 +1007,21 @@  rte_event_dev_close(uint8_t dev_id);
  */
 #define RTE_EVENT_TYPE_ETH_RX_ADAPTER   0x4
 /**< The event generated from event eth Rx adapter */
+#define RTE_EVENT_TYPE_VECTOR           0x8
+/**< Indicates that the event is a vector.
+ * All vector event types should be a logical OR of RTE_EVENT_TYPE_VECTOR.
+ * This simplifies the pipeline design as the application can split processing
+ * between vector events and normal events across event types.
+ * Example:
+ *	if (ev.event_type & RTE_EVENT_TYPE_VECTOR) {
+ *		// Classify and handle vector event.
+ *	} else {
+ *		// Classify and handle event.
+ *	}
+ */
+#define RTE_EVENT_TYPE_CPU_VECTOR (RTE_EVENT_TYPE_VECTOR | RTE_EVENT_TYPE_CPU)
+/**< The event vector generated from the CPU for pipelining. */
+
 #define RTE_EVENT_TYPE_MAX              0x10
 /**< Maximum number of event types */
 
@@ -2023,6 +2059,78 @@  rte_event_dev_xstats_reset(uint8_t dev_id,
  */
 int rte_event_dev_selftest(uint8_t dev_id);
 
+/**
+ * Create a mempool to hold event vectors, where each vector can hold up to
+ * ``nb_elem`` elements.
+ * The helper sets the best platform mempool ops on the created mempool.
+ *
+ * @param name
+ *   The name of the vector pool.
+ * @param n
+ *   The number of event vectors in the pool.
+ * @param cache_size
+ *   Size of the per-core object cache. See rte_mempool_create() for
+ *   details.
+ * @param nb_elem
+ *   The number of elements that a single event vector should be able to hold.
+ * @param socket_id
+ *   The socket identifier where the memory should be allocated. The
+ *   value can be *SOCKET_ID_ANY* if there is no NUMA constraint for the
+ *   reserved zone.
+ *
+ * @return
+ *   The pointer to the newly allocated mempool, on success. NULL on error
+ *   with rte_errno set appropriately. Possible rte_errno values include:
+ *    - E_RTE_NO_CONFIG - function could not get pointer to rte_config structure
+ *    - E_RTE_SECONDARY - function was called from a secondary process instance
+ *    - EINVAL - cache size provided is too large, or nb_elem is 0.
+ *    - ENOSPC - the maximum number of memzones has already been allocated
+ *    - EEXIST - a memzone with the same name already exists
+ *    - ENOMEM - no appropriate memory area found in which to create memzone
+ */
+__rte_experimental
+static inline struct rte_mempool *
+rte_event_vector_pool_create(const char *name, unsigned int n,
+			     unsigned int cache_size, uint16_t nb_elem,
+			     int socket_id)
+{
+	const char *mp_ops_name;
+	struct rte_mempool *mp;
+	unsigned int elt_sz;
+	int ret;
+
+	if (!nb_elem) {
+		RTE_LOG(ERR, EVENTDEV,
+			"Invalid number of elements=%d requested\n", nb_elem);
+		rte_errno = EINVAL;
+		return NULL;
+	}
+
+	elt_sz =
+		sizeof(struct rte_event_vector) + (nb_elem * sizeof(uintptr_t));
+	mp = rte_mempool_create_empty(name, n, elt_sz, cache_size, 0, socket_id,
+				      0);
+	if (mp == NULL)
+		return NULL;
+
+	mp_ops_name = rte_mbuf_best_mempool_ops();
+	ret = rte_mempool_set_ops_byname(mp, mp_ops_name, NULL);
+	if (ret != 0) {
+		RTE_LOG(ERR, EVENTDEV, "error setting mempool handler\n");
+		goto err;
+	}
+
+	ret = rte_mempool_populate_default(mp);
+	if (ret < 0)
+		goto err;
+
+	return mp;
+err:
+	rte_mempool_free(mp);
+	rte_errno = -ret;
+	return NULL;
+}
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_eventdev/version.map b/lib/librte_eventdev/version.map
index 3e5c09cfd..a070ef56e 100644
--- a/lib/librte_eventdev/version.map
+++ b/lib/librte_eventdev/version.map
@@ -138,6 +138,9 @@  EXPERIMENTAL {
 	__rte_eventdev_trace_port_setup;
 	# added in 20.11
 	rte_event_pmd_pci_probe_named;
+
+	# added in 21.05
+	rte_event_vector_pool_create;
 };
 
 INTERNAL {
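
As a usage illustration of the classification pattern shown in the
RTE_EVENT_TYPE_VECTOR comment above, here is a hedged worker-loop
sketch. The vec member of rte_event is assumed from the documentation
hunk, and process_one_mbuf() is a hypothetical application handler:

    #include <rte_eventdev.h>
    #include <rte_mempool.h>

    /* Hypothetical per-packet handler; stands in for application logic. */
    static void process_one_mbuf(struct rte_mbuf *m);

    /* Hedged sketch: split handling between vector and scalar events.
     * ev.vec is assumed from the doc hunk in this patch.
     */
    static void
    worker_loop(uint8_t dev_id, uint8_t port_id)
    {
            struct rte_event ev;
            uint16_t i;

            while (rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0)) {
                    if (ev.event_type & RTE_EVENT_TYPE_VECTOR) {
                            struct rte_event_vector *vec = ev.vec;

                            /* Handle each element, then return the vector
                             * object to the mempool it was allocated from.
                             */
                            for (i = 0; i < vec->nb_elem; i++)
                                    process_one_mbuf(vec->mbufs[i]);
                            rte_mempool_put(rte_mempool_from_obj(vec), vec);
                    } else {
                            process_one_mbuf(ev.mbuf);
                    }
            }
    }

Keeping the vector and scalar paths behind a single event_type test is
exactly what the logical-OR encoding of RTE_EVENT_TYPE_VECTOR enables.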