[v6,2/2] mbuf: implement generic format for sched field

Message ID 20181219154237.836-2-reshma.pattan@intel.com (mailing list archive)
State Superseded, archived
Delegated to: Cristian Dumitrescu
Series: [v6,1/2] meter: add new rte color definition

Checks

Context              Check    Description
ci/checkpatch        success  coding style OK
ci/Intel-compilation success  Compilation OK

Commit Message

Pattan, Reshma Dec. 19, 2018, 3:42 p.m. UTC
  This patch implements the changes proposed in the deprecation
notes [1][2].

librte_mbuf changes:
The mbuf::hash::sched field is updated to support generic
definition in line with the ethdev TM and MTR APIs. The new generic
format contains: queue ID, traffic class, color.

Added public APIs to set and get these new fields to and from mbuf.

librte_sched changes:
In addition, the following API functions of the sched library have
been modified with an additional parameter of type struct
rte_sched_port to accommodate the changes made to the mbuf sched field.
(i) rte_sched_port_pkt_write()
(ii) rte_sched_port_pkt_read_tree_path()

librte_pipeline, the qos_sched unit test and the qos_sched example
application are updated to make use of these changes.

Also, mbuf::hash::txadapter has been added for the eventdev Tx queue;
rte_event_eth_tx_adapter_txq_set() and rte_event_eth_tx_adapter_txq_get()
are updated to use mbuf::hash::txadapter.txq.

doc:
Release notes updated.
Removed deprecation notice for mbuf::hash::sched and sched API.

[1] http://mails.dpdk.org/archives/dev/2018-February/090651.html
[2] https://mails.dpdk.org/archives/dev/2018-November/119051.html

Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
---
v6: added scheduler comment

v5:
Removed librte_meter from librte_mbuf dependency list.
Changed the mbuf set/get functions to use uint8_t for color.

v4: converted mbuf::hash::sched to an instantiation of struct rte_mbuf_sched.

v3: addressed review comments given in the below link.
http://patches.dpdk.org/patch/48587/
Updated library ABI versioning in meson build.
---
 doc/guides/rel_notes/deprecation.rst          |  10 --
 doc/guides/rel_notes/release_19_02.rst        |   4 +-
 examples/qos_sched/app_thread.c               |   7 +-
 examples/qos_sched/main.c                     |   1 +
 .../rte_event_eth_tx_adapter.h                |  18 ++-
 lib/librte_mbuf/Makefile                      |   2 +-
 lib/librte_mbuf/meson.build                   |   2 +-
 lib/librte_mbuf/rte_mbuf.h                    | 119 +++++++++++++++++-
 lib/librte_pipeline/rte_table_action.c        |  44 +++----
 lib/librte_sched/Makefile                     |   2 +-
 lib/librte_sched/meson.build                  |   1 +
 lib/librte_sched/rte_sched.c                  |  90 ++++++-------
 lib/librte_sched/rte_sched.h                  |  10 +-
 test/test/test_sched.c                        |   9 +-
 14 files changed, 197 insertions(+), 122 deletions(-)
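
For context, a minimal usage sketch combining the new rte_sched_port_pkt_write()
prototype with the mbuf accessors added by this patch (not part of the patch
itself; the subport/pipe/traffic class/queue values below are made up):

#include <stdio.h>
#include <rte_mbuf.h>
#include <rte_sched.h>

static void
classify_and_inspect(struct rte_sched_port *port, struct rte_mbuf *pkt)
{
	uint32_t queue_id;
	uint8_t traffic_class, color;

	/* The port handle is now the first parameter of the write API. */
	rte_sched_port_pkt_write(port, pkt, /* subport */ 0, /* pipe */ 5,
				 /* traffic_class */ 2, /* queue */ 1,
				 e_RTE_METER_GREEN);

	/* Read the generic fields back through the new mbuf accessors. */
	rte_mbuf_sched_get(pkt, &queue_id, &traffic_class, &color);
	printf("queue_id=%u tc=%u color=%u\n", (unsigned)queue_id,
	       (unsigned)traffic_class, (unsigned)color);
}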
  

Comments

Ananyev, Konstantin Dec. 19, 2018, 6:14 p.m. UTC | #1
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Reshma Pattan
> Sent: Wednesday, December 19, 2018 3:43 PM
> To: dev@dpdk.org; jerin.jacob@caviumnetworks.com; Rao, Nikhil <nikhil.rao@intel.com>; olivier.matz@6wind.com;
> thomas@monjalon.net; Singh, Jasvinder <jasvinder.singh@intel.com>; Dumitrescu, Cristian <cristian.dumitrescu@intel.com>
> Cc: Pattan, Reshma <reshma.pattan@intel.com>
> Subject: [dpdk-dev] [PATCH v6 2/2] mbuf: implement generic format for sched field
> 
> This patch implements the changes proposed in the deprecation
> notes [1][2].
> 
> librte_mbuf changes:
> The mbuf::hash::sched field is updated to support generic
> definition in line with the ethdev TM and MTR APIs. The new generic
> format contains: queue ID, traffic class, color.
> 
> Added public APIs to set and get these new fields to and from mbuf.
> 
> librte_sched changes:
> In addition, the following API functions of the sched library have
> been modified with an additional parameter of type struct
> rte_sched_port to accommodate the changes made to the mbuf sched field.
> (i) rte_sched_port_pkt_write()
> (ii) rte_sched_port_pkt_read_tree_path()
> 
> librte_pipeline, the qos_sched unit test and the qos_sched example
> application are updated to make use of these changes.
> 
> Also, mbuf::hash::txadapter has been added for the eventdev Tx queue;
> rte_event_eth_tx_adapter_txq_set() and rte_event_eth_tx_adapter_txq_get()
> are updated to use mbuf::hash::txadapter.txq.
> 
> doc:
> Release notes updated.
> Removed deprecation notice for mbuf::hash::sched and sched API.
> 
> [1] http://mails.dpdk.org/archives/dev/2018-February/090651.html
> [2] https://mails.dpdk.org/archives/dev/2018-November/119051.html
> 
> Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
> Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
> Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
> ---
> v6: added scheduler comment
> 
> v5:
> Removed librte_meter from librte_mbuf dependency list.
> Changed the mbuf set/get functions to use uint8_t for color.
> 
> v4: converted mbuf::hash::sched to an instantiation of struct rte_mbuf_sched.
> 
> v3: addressed review comments given in the below link.
> http://patches.dpdk.org/patch/48587/
> Updated library ABI versioning in meson build.
> ---

Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>

> 2.17.1
  
Olivier Matz Dec. 20, 2018, 8:29 a.m. UTC | #2
Hi Reshma,

Please find few minor comments below.

On Wed, Dec 19, 2018 at 03:42:37PM +0000, Reshma Pattan wrote:
> This patch implements the changes proposed in the deprecation
> notes [1][2].
> 
> librte_mbuf changes:
> The mbuf::hash::sched field is updated to support generic

I think it would be clearer to use mbuf->hash.sched (instead of ::).

> definition in line with the ethdev TM and MTR APIs. The new generic
> format contains: queue ID, traffic class, color.

I wonder if it wouldn't be worth specifying "traffic manager" and
"meter" instead of or in addition to TM and MTR.

[...]

> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -468,6 +468,17 @@ __extension__
>  typedef uint64_t MARKER64[0]; /**< marker that allows us to overwrite 8 bytes
>                                 * with a single assignment */
>  
> +struct rte_mbuf_sched {
> +	uint32_t queue_id;   /**< Queue ID. */
> +	uint8_t traffic_class;
> +	/**< Traffic class ID. Traffic class 0
> +	 * is the highest priority traffic class.
> +	 */
> +	uint8_t color;
> +	/**< Color. @see enum rte_color.*/
> +	uint16_t reserved;   /**< Reserved. */
> +};
> +
>  /**
>   * The generic rte_mbuf, containing a packet mbuf.
>   */
> @@ -574,14 +585,16 @@ struct rte_mbuf {
>  				 * on PKT_RX_FDIR_* flag in ol_flags.
>  				 */
>  			} fdir;	/**< Filter identifier if FDIR enabled */
> +			struct rte_mbuf_sched sched; /**< Hierarchical scheduler */


What about directly embedding the structure like the others? Since mbuf
is a very packed structure, I think it helps to show that rte_mbuf_sched
does not exceed the size of the union.

I mean something like this:

			struct rte_mbuf_sched {
				uint32_t queue_id;      /**< Queue ID. */
				uint8_t traffic_class;
				/**< Traffic class ID (0 = highest priority). */
				uint8_t color;
				/**< Color. @see enum rte_color. */
				uint16_t reserved;      /**< Reserved. */
			} sched;

It would require small changes, see below.

>  			struct {
> -				uint32_t lo;
> -				uint32_t hi;
> +				uint32_t reserved1;
> +				uint16_t reserved2;
> +				uint16_t txq;
>  				/**< The event eth Tx adapter uses this field
>  				 * to store Tx queue id.
>  				 * @see rte_event_eth_tx_adapter_txq_set()
>  				 */
> -			} sched;          /**< Hierarchical scheduler */
> +			} txadapter; /**< Eventdev ethdev Tx adapter */
>  			/**< User defined tags. See rte_distributor_process() */
>  			uint32_t usr;
>  		} hash;                   /**< hash information */
> @@ -2289,6 +2302,106 @@ rte_pktmbuf_linearize(struct rte_mbuf *mbuf)
>   */
>  void rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len);
>  
> +/**
> + * Get the value of mbuf sched queue_id field.
> + */
> +static inline uint32_t
> +rte_mbuf_sched_queue_get(const struct rte_mbuf *m)
> +{
> +	return m->hash.sched.queue_id;
> +}
> +
> +/**
> + * Get the value of mbuf sched traffic_class field.
> + */
> +static inline uint8_t
> +rte_mbuf_sched_traffic_class_get(const struct rte_mbuf *m)
> +{
> +	return m->hash.sched.traffic_class;
> +}
> +
> +/**
> + * Get the value of mbuf sched color field.
> + */
> +static inline uint8_t
> +rte_mbuf_sched_color_get(const struct rte_mbuf *m)
> +{
> +	return m->hash.sched.color;
> +}
> +
> +/**
> + * Get the values of mbuf sched queue_id, traffic_class and color.

missing an empty comment line here.

> + * @param m
> + *   Mbuf to read
> + * @param queue_id
> + *  Returns the queue id
> + * @param traffic_class
> + *  Returns the traffic class id
> + * @param color
> + *  Returns the colour id
> + */
> +static inline void
> +rte_mbuf_sched_get(const struct rte_mbuf *m, uint32_t *queue_id,
> +			uint8_t *traffic_class,
> +			uint8_t *color)
> +{
> +	struct rte_mbuf_sched sched = m->hash.sched;
> +
> +	*queue_id = sched.queue_id;
> +	*traffic_class = sched.traffic_class;
> +	*color = sched.color;

I don't think there is a need to have an additional local copy.

*queue_id = m->hash.sched.queue_id;
*traffic_class = m->hash.sched.traffic_class;
*color = m->hash.sched.color;


> +}
> +
> +/**
> + * Set the mbuf sched queue_id to the defined value.
> + */
> +static inline void
> +rte_mbuf_sched_queue_set(struct rte_mbuf *m, uint32_t queue_id)
> +{
> +	m->hash.sched.queue_id = queue_id;
> +}
> +
> +/**
> + * Set the mbuf sched traffic_class id to the defined value.
> + */
> +static inline void
> +rte_mbuf_sched_traffic_class_set(struct rte_mbuf *m, uint8_t traffic_class)
> +{
> +	m->hash.sched.traffic_class = traffic_class;
> +}
> +
> +/**
> + * Set the mbuf sched color id to the defined value.
> + */
> +static inline void
> +rte_mbuf_sched_color_set(struct rte_mbuf *m, uint8_t color)
> +{
> +	m->hash.sched.color = color;
> +}
> +
> +/**
> + * Set the mbuf sched queue_id, traffic_class and color.

empty comment missing

> + * @param m
> + *   Mbuf to set
> + * @param queue_id
> + *  Queue id value to be set
> + * @param traffic_class
> + *  Traffic class id value to be set
> + * @param color
> + *  Color id to be set
> + */
> +static inline void
> +rte_mbuf_sched_set(struct rte_mbuf *m, uint32_t queue_id,
> +			uint8_t traffic_class,
> +			uint8_t color)
> +{
> +	m->hash.sched = (struct rte_mbuf_sched){
> +				.queue_id = queue_id,
> +				.traffic_class = traffic_class,
> +				.color = color,
> +			};

Why not this?

m->hash.sched.queue_id = queue_id;
m->hash.sched.traffic_class = traffic_class;
m->hash.sched.color = color;


Apart from this, the mbuf part looks ok to me.

Thanks,
Olivier
  
Cristian Dumitrescu Dec. 20, 2018, 11:28 a.m. UTC | #3
Hi Olivier,

Thanks for your reply!

<snip>

> >   * The generic rte_mbuf, containing a packet mbuf.
> >   */
> > @@ -574,14 +585,16 @@ struct rte_mbuf {
> >  				 * on PKT_RX_FDIR_* flag in ol_flags.
> >  				 */
> >  			} fdir;	/**< Filter identifier if FDIR enabled */
> > +			struct rte_mbuf_sched sched; /**< Hierarchical
> scheduler */
> 
> 
> What about directly embedding the structure like the others? Since mbuf
> is a very packed structure, I think it helps to show that rte_mbuf_sched
> does not exceed the size of the union.
> 
> I mean something like this:
> 
> 			struct rte_mbuf_sched {
> 				uint32_t queue_id;      /**< Queue ID. */
> 				uint8_t traffic_class;
> 				/**< Traffic class ID (0 = highest priority). */
> 				uint8_t color;
> 				/**< Color. @see enum rte_color. */
> 				uint16_t reserved;      /**< Reserved. */
> 			} sched;

If this syntax does not limit the scope of struct rte_mbuf_sched to just within its parent struct rte_mbuf, then it would also fit my needs and I am more than happy to use it.

All I need is a name for this rte_mbuf_sched structure, so I can use it to get a decent implementation of set/get functions.

<snip>

> > + * @param m
> > + *   Mbuf to read
> > + * @param queue_id
> > + *  Returns the queue id
> > + * @param traffic_class
> > + *  Returns the traffic class id
> > + * @param color
> > + *  Returns the colour id
> > + */
> > +static inline void
> > +rte_mbuf_sched_get(const struct rte_mbuf *m, uint32_t *queue_id,
> > +			uint8_t *traffic_class,
> > +			uint8_t *color)
> > +{
> > +	struct rte_mbuf_sched sched = m->hash.sched;
> > +
> > +	*queue_id = sched.queue_id;
> > +	*traffic_class = sched.traffic_class;
> > +	*color = sched.color;
> 
> I don't think there is a need to have an additional local copy.
> 
> *queue_id = m->hash.sched.queue_id;
> *traffic_class = m->hash.sched.traffic_class;
> *color = m->hash.sched.color;
> 

With the local copy, the compiler typically generates a single 8-byte read instruction. Without the local copy, the compiler typically generates 3x read instructions.

The set/get functions are used in some performance critical actions, so this is the reason to make sure we get them right.
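
A small standalone illustration of the two getter variants being compared
(plain C; the struct and function names here are invented, not DPDK code).
The whole-struct copy lets the compiler fetch all 8 bytes at once:

#include <stdint.h>

struct sched8 {
	uint32_t queue_id;
	uint8_t traffic_class;
	uint8_t color;
	uint16_t reserved;
};

/* Variant kept in the patch: copy the 8-byte struct once, then pick
 * the fields out of the local copy (typically one 64-bit load). */
static inline void
sched8_get_copy(const struct sched8 *s, uint32_t *q, uint8_t *tc, uint8_t *c)
{
	struct sched8 tmp = *s;

	*q = tmp.queue_id;
	*tc = tmp.traffic_class;
	*c = tmp.color;
}

/* Field-by-field variant: typically three separate loads from *s. */
static inline void
sched8_get_fields(const struct sched8 *s, uint32_t *q, uint8_t *tc, uint8_t *c)
{
	*q = s->queue_id;
	*tc = s->traffic_class;
	*c = s->color;
}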

<snip>

> > + * @param m
> > + *   Mbuf to set
> > + * @param queue_id
> > + *  Queue id value to be set
> > + * @param traffic_class
> > + *  Traffic class id value to be set
> > + * @param color
> > + *  Color id to be set
> > + */
> > +static inline void
> > +rte_mbuf_sched_set(struct rte_mbuf *m, uint32_t queue_id,
> > +			uint8_t traffic_class,
> > +			uint8_t color)
> > +{
> > +	m->hash.sched = (struct rte_mbuf_sched){
> > +				.queue_id = queue_id,
> > +				.traffic_class = traffic_class,
> > +				.color = color,
> > +			};
> 
> Why not this?
> 
> m->hash.sched.queue_id = queue_id;
> m->hash.sched.traffic_class = traffic_class;
> m->hash.sched.color = color;
> 

Same here, we need the compiler to generate a single 8-byte write instruction rather than 3x read-modify-write operations. Makes sense?
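
The same idea on the write side, again as a standalone sketch with invented
names: assigning a compound literal lets the compiler emit a single 64-bit
store, which also clears the reserved field in the same write:

#include <stdint.h>

struct sched8 { uint32_t queue_id; uint8_t traffic_class; uint8_t color; uint16_t reserved; };

static inline void
sched8_set_one_store(struct sched8 *s, uint32_t q, uint8_t tc, uint8_t c)
{
	/* Whole-struct assignment: typically one 64-bit store. */
	*s = (struct sched8){ .queue_id = q, .traffic_class = tc, .color = c };
}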

> 
> Apart from this, the mbuf part looks ok to me.
> 
> Thanks,
> Olivier

Thanks,
Cristian
  
Olivier Matz Dec. 20, 2018, 12:41 p.m. UTC | #4
Hi Cristian,

[...]

On Thu, Dec 20, 2018 at 11:28:01AM +0000, Dumitrescu, Cristian wrote:
> > > + * @param m
> > > + *   Mbuf to read
> > > + * @param queue_id
> > > + *  Returns the queue id
> > > + * @param traffic_class
> > > + *  Returns the traffic class id
> > > + * @param color
> > > + *  Returns the colour id
> > > + */
> > > +static inline void
> > > +rte_mbuf_sched_get(const struct rte_mbuf *m, uint32_t *queue_id,
> > > +			uint8_t *traffic_class,
> > > +			uint8_t *color)
> > > +{
> > > +	struct rte_mbuf_sched sched = m->hash.sched;
> > > +
> > > +	*queue_id = sched.queue_id;
> > > +	*traffic_class = sched.traffic_class;
> > > +	*color = sched.color;
> > 
> > I don't think there is a need to have an additional local copy.
> > 
> > *queue_id = m->hash.sched.queue_id;
> > *traffic_class = m->hash.sched.traffic_class;
> > *color = m->hash.sched.color;
> > 
> 
> With the local copy, the compiler typically generates a single 8-byte read instruction. Without the local copy, the compiler typically generates 3x read instructions.
> 
> The set/get functions are used in some performance critical actions, so this is the reason to make sure we get them right.

Ok, that makes sense, thanks for clarification.


Regards,
Olivier
  

Patch

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index ac7fb29a7..ec80dcc7b 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -49,16 +49,6 @@  Deprecation Notices
   structure would be made internal (or removed if all dependencies are cleared)
   in future releases.
 
-* mbuf: The opaque ``mbuf->hash.sched`` field will be updated to support generic
-  definition in line with the ethdev TM and MTR APIs. Currently, this field
-  is defined in librte_sched in a non-generic way. The new generic format
-  will contain: queue ID, traffic class, color. Field size will not change.
-
-* sched: Some API functions will change prototype due to the above
-  deprecation note for mbuf->hash.sched, e.g. ``rte_sched_port_pkt_write()``
-  and ``rte_sched_port_pkt_read()`` will likely have an additional parameter
-  of type ``struct rte_sched_port``.
-
 * mbuf: the macro ``RTE_MBUF_INDIRECT()`` will be removed in v18.08 or later and
   replaced with ``RTE_MBUF_CLONED()`` which is already added in v18.05. As
   ``EXT_ATTACHED_MBUF`` is newly introduced in v18.05, ``RTE_MBUF_INDIRECT()``
diff --git a/doc/guides/rel_notes/release_19_02.rst b/doc/guides/rel_notes/release_19_02.rst
index 8deb68b9a..1f9ec9fc1 100644
--- a/doc/guides/rel_notes/release_19_02.rst
+++ b/doc/guides/rel_notes/release_19_02.rst
@@ -156,7 +156,7 @@  The libraries prepended with a plus sign were incremented in this version.
      librte_kvargs.so.1
      librte_latencystats.so.1
      librte_lpm.so.2
-     librte_mbuf.so.4
+   + librte_mbuf.so.5
      librte_member.so.1
      librte_mempool.so.5
      librte_meter.so.2
@@ -178,7 +178,7 @@  The libraries prepended with a plus sign were incremented in this version.
      librte_rawdev.so.1
      librte_reorder.so.1
      librte_ring.so.2
-     librte_sched.so.1
+   + librte_sched.so.2
      librte_security.so.1
      librte_table.so.3
      librte_timer.so.1
diff --git a/examples/qos_sched/app_thread.c b/examples/qos_sched/app_thread.c
index a59274236..bec4deee3 100644
--- a/examples/qos_sched/app_thread.c
+++ b/examples/qos_sched/app_thread.c
@@ -73,8 +73,11 @@  app_rx_thread(struct thread_conf **confs)
 			for(i = 0; i < nb_rx; i++) {
 				get_pkt_sched(rx_mbufs[i],
 						&subport, &pipe, &traffic_class, &queue, &color);
-				rte_sched_port_pkt_write(rx_mbufs[i], subport, pipe,
-						traffic_class, queue, (enum rte_meter_color) color);
+				rte_sched_port_pkt_write(conf->sched_port,
+						rx_mbufs[i],
+						subport, pipe,
+						traffic_class, queue,
+						(enum rte_meter_color) color);
 			}
 
 			if (unlikely(rte_ring_sp_enqueue_bulk(conf->rx_ring,
diff --git a/examples/qos_sched/main.c b/examples/qos_sched/main.c
index e7c97bd64..c0ed16b68 100644
--- a/examples/qos_sched/main.c
+++ b/examples/qos_sched/main.c
@@ -55,6 +55,7 @@  app_main_loop(__attribute__((unused))void *dummy)
 			flow->rx_thread.rx_port = flow->rx_port;
 			flow->rx_thread.rx_ring =  flow->rx_ring;
 			flow->rx_thread.rx_queue = flow->rx_queue;
+			flow->rx_thread.sched_port = flow->sched_port;
 
 			rx_confs[rx_idx++] = &flow->rx_thread;
 
diff --git a/lib/librte_eventdev/rte_event_eth_tx_adapter.h b/lib/librte_eventdev/rte_event_eth_tx_adapter.h
index 81456d4a9..2a50656d9 100644
--- a/lib/librte_eventdev/rte_event_eth_tx_adapter.h
+++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.h
@@ -63,13 +63,11 @@ 
  * function can be used to retrieve the adapter's service function ID.
  *
  * The ethernet port and transmit queue index to transmit the mbuf on are
- * specified using the mbuf port and the higher 16 bits of
- * struct rte_mbuf::hash::sched:hi. The application should use the
- * rte_event_eth_tx_adapter_txq_set() and rte_event_eth_tx_adapter_txq_get()
- * functions to access the transmit queue index since it is expected that the
- * transmit queue will be eventually defined within struct rte_mbuf and using
- * these macros will help with minimizing application impact due to
- * a change in how the transmit queue index is specified.
+ * specified using the mbuf port struct rte_mbuf::hash::txadapter:txq.
+ * The application should use the rte_event_eth_tx_adapter_txq_set()
+ * and rte_event_eth_tx_adapter_txq_get() functions to access the transmit
+ * queue index, using these macros will help with minimizing application
+ * impact due to a change in how the transmit queue index is specified.
  */
 
 #ifdef __cplusplus
@@ -300,8 +298,7 @@  rte_event_eth_tx_adapter_queue_del(uint8_t id,
 static __rte_always_inline void __rte_experimental
 rte_event_eth_tx_adapter_txq_set(struct rte_mbuf *pkt, uint16_t queue)
 {
-	uint16_t *p = (uint16_t *)&pkt->hash.sched.hi;
-	p[1] = queue;
+	pkt->hash.txadapter.txq = queue;
 }
 
 /**
@@ -320,8 +317,7 @@  rte_event_eth_tx_adapter_txq_set(struct rte_mbuf *pkt, uint16_t queue)
 static __rte_always_inline uint16_t __rte_experimental
 rte_event_eth_tx_adapter_txq_get(struct rte_mbuf *pkt)
 {
-	uint16_t *p = (uint16_t *)&pkt->hash.sched.hi;
-	return p[1];
+	return pkt->hash.txadapter.txq;
 }
 
 /**
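
A short usage sketch of the adapter helpers after this change (the queue
number is arbitrary); both now map directly onto pkt->hash.txadapter.txq
instead of casting into the old hash.sched.hi field:

#include <rte_mbuf.h>
#include <rte_event_eth_tx_adapter.h>

static void
tag_tx_queue(struct rte_mbuf *pkt)
{
	uint16_t txq;

	/* Stores into pkt->hash.txadapter.txq after this patch. */
	rte_event_eth_tx_adapter_txq_set(pkt, 3);

	/* Reads it back without any pointer casts. */
	txq = rte_event_eth_tx_adapter_txq_get(pkt);
	(void)txq;	/* txq == 3 */
}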
diff --git a/lib/librte_mbuf/Makefile b/lib/librte_mbuf/Makefile
index e2b98a254..8c4c7d790 100644
--- a/lib/librte_mbuf/Makefile
+++ b/lib/librte_mbuf/Makefile
@@ -11,7 +11,7 @@  LDLIBS += -lrte_eal -lrte_mempool
 
 EXPORT_MAP := rte_mbuf_version.map
 
-LIBABIVER := 4
+LIBABIVER := 5
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MBUF) := rte_mbuf.c rte_mbuf_ptype.c rte_mbuf_pool_ops.c
diff --git a/lib/librte_mbuf/meson.build b/lib/librte_mbuf/meson.build
index 94d9c4c96..e37da0250 100644
--- a/lib/librte_mbuf/meson.build
+++ b/lib/librte_mbuf/meson.build
@@ -1,7 +1,7 @@ 
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2017 Intel Corporation
 
-version = 4
+version = 5
 sources = files('rte_mbuf.c', 'rte_mbuf_ptype.c', 'rte_mbuf_pool_ops.c')
 headers = files('rte_mbuf.h', 'rte_mbuf_ptype.h', 'rte_mbuf_pool_ops.h')
 deps += ['mempool']
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 3dbc6695e..dd69123a1 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -468,6 +468,17 @@  __extension__
 typedef uint64_t MARKER64[0]; /**< marker that allows us to overwrite 8 bytes
                                * with a single assignment */
 
+struct rte_mbuf_sched {
+	uint32_t queue_id;   /**< Queue ID. */
+	uint8_t traffic_class;
+	/**< Traffic class ID. Traffic class 0
+	 * is the highest priority traffic class.
+	 */
+	uint8_t color;
+	/**< Color. @see enum rte_color.*/
+	uint16_t reserved;   /**< Reserved. */
+};
+
 /**
  * The generic rte_mbuf, containing a packet mbuf.
  */
@@ -574,14 +585,16 @@  struct rte_mbuf {
 				 * on PKT_RX_FDIR_* flag in ol_flags.
 				 */
 			} fdir;	/**< Filter identifier if FDIR enabled */
+			struct rte_mbuf_sched sched; /**< Hierarchical scheduler */
 			struct {
-				uint32_t lo;
-				uint32_t hi;
+				uint32_t reserved1;
+				uint16_t reserved2;
+				uint16_t txq;
 				/**< The event eth Tx adapter uses this field
 				 * to store Tx queue id.
 				 * @see rte_event_eth_tx_adapter_txq_set()
 				 */
-			} sched;          /**< Hierarchical scheduler */
+			} txadapter; /**< Eventdev ethdev Tx adapter */
 			/**< User defined tags. See rte_distributor_process() */
 			uint32_t usr;
 		} hash;                   /**< hash information */
@@ -2289,6 +2302,106 @@  rte_pktmbuf_linearize(struct rte_mbuf *mbuf)
  */
 void rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len);
 
+/**
+ * Get the value of mbuf sched queue_id field.
+ */
+static inline uint32_t
+rte_mbuf_sched_queue_get(const struct rte_mbuf *m)
+{
+	return m->hash.sched.queue_id;
+}
+
+/**
+ * Get the value of mbuf sched traffic_class field.
+ */
+static inline uint8_t
+rte_mbuf_sched_traffic_class_get(const struct rte_mbuf *m)
+{
+	return m->hash.sched.traffic_class;
+}
+
+/**
+ * Get the value of mbuf sched color field.
+ */
+static inline uint8_t
+rte_mbuf_sched_color_get(const struct rte_mbuf *m)
+{
+	return m->hash.sched.color;
+}
+
+/**
+ * Get the values of mbuf sched queue_id, traffic_class and color.
+ * @param m
+ *   Mbuf to read
+ * @param queue_id
+ *  Returns the queue id
+ * @param traffic_class
+ *  Returns the traffic class id
+ * @param color
+ *  Returns the colour id
+ */
+static inline void
+rte_mbuf_sched_get(const struct rte_mbuf *m, uint32_t *queue_id,
+			uint8_t *traffic_class,
+			uint8_t *color)
+{
+	struct rte_mbuf_sched sched = m->hash.sched;
+
+	*queue_id = sched.queue_id;
+	*traffic_class = sched.traffic_class;
+	*color = sched.color;
+}
+
+/**
+ * Set the mbuf sched queue_id to the defined value.
+ */
+static inline void
+rte_mbuf_sched_queue_set(struct rte_mbuf *m, uint32_t queue_id)
+{
+	m->hash.sched.queue_id = queue_id;
+}
+
+/**
+ * Set the mbuf sched traffic_class id to the defined value.
+ */
+static inline void
+rte_mbuf_sched_traffic_class_set(struct rte_mbuf *m, uint8_t traffic_class)
+{
+	m->hash.sched.traffic_class = traffic_class;
+}
+
+/**
+ * Set the mbuf sched color id to the defined value.
+ */
+static inline void
+rte_mbuf_sched_color_set(struct rte_mbuf *m, uint8_t color)
+{
+	m->hash.sched.color = color;
+}
+
+/**
+ * Set the mbuf sched queue_id, traffic_class and color.
+ * @param m
+ *   Mbuf to set
+ * @param queue_id
+ *  Queue id value to be set
+ * @param traffic_class
+ *  Traffic class id value to be set
+ * @param color
+ *  Color id to be set
+ */
+static inline void
+rte_mbuf_sched_set(struct rte_mbuf *m, uint32_t queue_id,
+			uint8_t traffic_class,
+			uint8_t color)
+{
+	m->hash.sched = (struct rte_mbuf_sched){
+				.queue_id = queue_id,
+				.traffic_class = traffic_class,
+				.color = color,
+			};
+}
+
 #ifdef __cplusplus
 }
 #endif
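
A quick way to check that the new structure keeps the same 8-byte footprint
as the { lo, hi } pair it replaces inside the hash union (a standalone sketch,
not part of the patch):

#include <rte_mbuf.h>

/* 4 (queue_id) + 1 (traffic_class) + 1 (color) + 2 (reserved) = 8 bytes. */
_Static_assert(sizeof(struct rte_mbuf_sched) == 8,
	       "rte_mbuf_sched must not grow the mbuf hash union");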
diff --git a/lib/librte_pipeline/rte_table_action.c b/lib/librte_pipeline/rte_table_action.c
index 7c7c8dd82..8a2bb13dc 100644
--- a/lib/librte_pipeline/rte_table_action.c
+++ b/lib/librte_pipeline/rte_table_action.c
@@ -107,14 +107,6 @@  mtr_cfg_check(struct rte_table_action_mtr_config *mtr)
 	return 0;
 }
 
-#define MBUF_SCHED_QUEUE_TC_COLOR(queue, tc, color)        \
-	((uint16_t)((((uint64_t)(queue)) & 0x3) |          \
-	((((uint64_t)(tc)) & 0x3) << 2) |                  \
-	((((uint64_t)(color)) & 0x3) << 4)))
-
-#define MBUF_SCHED_COLOR(sched, color)                     \
-	(((sched) & (~0x30LLU)) | ((color) << 4))
-
 struct mtr_trtcm_data {
 	struct rte_meter_trtcm trtcm;
 	uint64_t stats[e_RTE_METER_COLORS];
@@ -176,7 +168,7 @@  mtr_data_size(struct rte_table_action_mtr_config *mtr)
 struct dscp_table_entry_data {
 	enum rte_meter_color color;
 	uint16_t tc;
-	uint16_t queue_tc_color;
+	uint16_t tc_queue;
 };
 
 struct dscp_table_data {
@@ -319,8 +311,7 @@  pkt_work_mtr(struct rte_mbuf *mbuf,
 	uint32_t dscp,
 	uint16_t total_length)
 {
-	uint64_t drop_mask, sched;
-	uint64_t *sched_ptr = (uint64_t *) &mbuf->hash.sched;
+	uint64_t drop_mask;
 	struct dscp_table_entry_data *dscp_entry = &dscp_table->entry[dscp];
 	enum rte_meter_color color_in, color_meter, color_policer;
 	uint32_t tc, mp_id;
@@ -329,7 +320,6 @@  pkt_work_mtr(struct rte_mbuf *mbuf,
 	color_in = dscp_entry->color;
 	data += tc;
 	mp_id = MTR_TRTCM_DATA_METER_PROFILE_ID_GET(data);
-	sched = *sched_ptr;
 
 	/* Meter */
 	color_meter = rte_meter_trtcm_color_aware_check(
@@ -346,7 +336,7 @@  pkt_work_mtr(struct rte_mbuf *mbuf,
 	drop_mask = MTR_TRTCM_DATA_POLICER_ACTION_DROP_GET(data, color_meter);
 	color_policer =
 		MTR_TRTCM_DATA_POLICER_ACTION_COLOR_GET(data, color_meter);
-	*sched_ptr = MBUF_SCHED_COLOR(sched, color_policer);
+	rte_mbuf_sched_color_set(mbuf, (uint8_t)color_policer);
 
 	return drop_mask;
 }
@@ -368,9 +358,8 @@  tm_cfg_check(struct rte_table_action_tm_config *tm)
 }
 
 struct tm_data {
-	uint16_t queue_tc_color;
-	uint16_t subport;
-	uint32_t pipe;
+	uint32_t queue_id;
+	uint32_t reserved;
 } __attribute__((__packed__));
 
 static int
@@ -397,9 +386,9 @@  tm_apply(struct tm_data *data,
 		return status;
 
 	/* Apply */
-	data->queue_tc_color = 0;
-	data->subport = (uint16_t) p->subport_id;
-	data->pipe = p->pipe_id;
+	data->queue_id = p->subport_id <<
+				(__builtin_ctz(cfg->n_pipes_per_subport) + 4) |
+				p->pipe_id << 4;
 
 	return 0;
 }
@@ -411,12 +400,11 @@  pkt_work_tm(struct rte_mbuf *mbuf,
 	uint32_t dscp)
 {
 	struct dscp_table_entry_data *dscp_entry = &dscp_table->entry[dscp];
-	struct tm_data *sched_ptr = (struct tm_data *) &mbuf->hash.sched;
-	struct tm_data sched;
-
-	sched = *data;
-	sched.queue_tc_color = dscp_entry->queue_tc_color;
-	*sched_ptr = sched;
+	uint32_t queue_id = data->queue_id |
+				(dscp_entry->tc << 2) |
+				dscp_entry->tc_queue;
+	rte_mbuf_sched_set(mbuf, queue_id, dscp_entry->tc,
+				(uint8_t)dscp_entry->color);
 }
 
 /**
@@ -2580,17 +2568,13 @@  rte_table_action_dscp_table_update(struct rte_table_action *action,
 			&action->dscp_table.entry[i];
 		struct rte_table_action_dscp_table_entry *entry =
 			&table->entry[i];
-		uint16_t queue_tc_color =
-			MBUF_SCHED_QUEUE_TC_COLOR(entry->tc_queue_id,
-				entry->tc_id,
-				entry->color);
 
 		if ((dscp_mask & (1LLU << i)) == 0)
 			continue;
 
 		data->color = entry->color;
 		data->tc = entry->tc_id;
-		data->queue_tc_color = queue_tc_color;
+		data->tc_queue = entry->tc_queue_id;
 	}
 
 	return 0;
diff --git a/lib/librte_sched/Makefile b/lib/librte_sched/Makefile
index 46c53ed71..644fd9d15 100644
--- a/lib/librte_sched/Makefile
+++ b/lib/librte_sched/Makefile
@@ -18,7 +18,7 @@  LDLIBS += -lrte_timer
 
 EXPORT_MAP := rte_sched_version.map
 
-LIBABIVER := 1
+LIBABIVER := 2
 
 #
 # all source are stored in SRCS-y
diff --git a/lib/librte_sched/meson.build b/lib/librte_sched/meson.build
index f85d64df8..8e989e5f6 100644
--- a/lib/librte_sched/meson.build
+++ b/lib/librte_sched/meson.build
@@ -1,6 +1,7 @@ 
 # SPDX-License-Identifier: BSD-3-Clause
 # Copyright(c) 2017 Intel Corporation
 
+version = 2
 sources = files('rte_sched.c', 'rte_red.c', 'rte_approx.c')
 headers = files('rte_sched.h', 'rte_sched_common.h',
 		'rte_red.h', 'rte_approx.h')
diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
index 587d5e602..dd7739172 100644
--- a/lib/librte_sched/rte_sched.c
+++ b/lib/librte_sched/rte_sched.c
@@ -128,22 +128,6 @@  enum grinder_state {
 	e_GRINDER_READ_MBUF
 };
 
-/*
- * Path through the scheduler hierarchy used by the scheduler enqueue
- * operation to identify the destination queue for the current
- * packet. Stored in the field pkt.hash.sched of struct rte_mbuf of
- * each packet, typically written by the classification stage and read
- * by scheduler enqueue.
- */
-struct rte_sched_port_hierarchy {
-	uint16_t queue:2;                /**< Queue ID (0 .. 3) */
-	uint16_t traffic_class:2;        /**< Traffic class ID (0 .. 3)*/
-	uint32_t color:2;                /**< Color */
-	uint16_t unused:10;
-	uint16_t subport;                /**< Subport ID */
-	uint32_t pipe;		         /**< Pipe ID */
-};
-
 struct rte_sched_grinder {
 	/* Pipe cache */
 	uint16_t pcache_qmask[RTE_SCHED_GRINDER_PCACHE_SIZE];
@@ -185,6 +169,7 @@  struct rte_sched_port {
 	/* User parameters */
 	uint32_t n_subports_per_port;
 	uint32_t n_pipes_per_subport;
+	uint32_t n_pipes_per_subport_log2;
 	uint32_t rate;
 	uint32_t mtu;
 	uint32_t frame_overhead;
@@ -645,6 +630,8 @@  rte_sched_port_config(struct rte_sched_port_params *params)
 	/* User parameters */
 	port->n_subports_per_port = params->n_subports_per_port;
 	port->n_pipes_per_subport = params->n_pipes_per_subport;
+	port->n_pipes_per_subport_log2 =
+			__builtin_ctz(params->n_pipes_per_subport);
 	port->rate = params->rate;
 	port->mtu = params->mtu + params->frame_overhead;
 	port->frame_overhead = params->frame_overhead;
@@ -1006,44 +993,52 @@  rte_sched_port_pipe_profile_add(struct rte_sched_port *port,
 	return 0;
 }
 
+static inline uint32_t
+rte_sched_port_qindex(struct rte_sched_port *port,
+	uint32_t subport,
+	uint32_t pipe,
+	uint32_t traffic_class,
+	uint32_t queue)
+{
+	return ((subport & (port->n_subports_per_port - 1)) <<
+			(port->n_pipes_per_subport_log2 + 4)) |
+			((pipe & (port->n_pipes_per_subport - 1)) << 4) |
+			((traffic_class &
+			    (RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE - 1)) << 2) |
+			(queue & (RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS - 1));
+}
+
 void
-rte_sched_port_pkt_write(struct rte_mbuf *pkt,
-			 uint32_t subport, uint32_t pipe, uint32_t traffic_class,
+rte_sched_port_pkt_write(struct rte_sched_port *port,
+			 struct rte_mbuf *pkt,
+			 uint32_t subport, uint32_t pipe,
+			 uint32_t traffic_class,
 			 uint32_t queue, enum rte_meter_color color)
 {
-	struct rte_sched_port_hierarchy *sched
-		= (struct rte_sched_port_hierarchy *) &pkt->hash.sched;
-
-	RTE_BUILD_BUG_ON(sizeof(*sched) > sizeof(pkt->hash.sched));
-
-	sched->color = (uint32_t) color;
-	sched->subport = subport;
-	sched->pipe = pipe;
-	sched->traffic_class = traffic_class;
-	sched->queue = queue;
+	uint32_t queue_id = rte_sched_port_qindex(port, subport, pipe,
+			traffic_class, queue);
+	rte_mbuf_sched_set(pkt, queue_id, traffic_class, (uint8_t)color);
 }
 
 void
-rte_sched_port_pkt_read_tree_path(const struct rte_mbuf *pkt,
+rte_sched_port_pkt_read_tree_path(struct rte_sched_port *port,
+				  const struct rte_mbuf *pkt,
 				  uint32_t *subport, uint32_t *pipe,
 				  uint32_t *traffic_class, uint32_t *queue)
 {
-	const struct rte_sched_port_hierarchy *sched
-		= (const struct rte_sched_port_hierarchy *) &pkt->hash.sched;
+	uint32_t queue_id = rte_mbuf_sched_queue_get(pkt);
 
-	*subport = sched->subport;
-	*pipe = sched->pipe;
-	*traffic_class = sched->traffic_class;
-	*queue = sched->queue;
+	*subport = queue_id >> (port->n_pipes_per_subport_log2 + 4);
+	*pipe = (queue_id >> 4) & (port->n_pipes_per_subport - 1);
+	*traffic_class = (queue_id >> 2) &
+				(RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE - 1);
+	*queue = queue_id & (RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS - 1);
 }
 
 enum rte_meter_color
 rte_sched_port_pkt_read_color(const struct rte_mbuf *pkt)
 {
-	const struct rte_sched_port_hierarchy *sched
-		= (const struct rte_sched_port_hierarchy *) &pkt->hash.sched;
-
-	return (enum rte_meter_color) sched->color;
+	return (enum rte_meter_color)rte_mbuf_sched_color_get(pkt);
 }
 
 int
@@ -1100,18 +1095,6 @@  rte_sched_queue_read_stats(struct rte_sched_port *port,
 	return 0;
 }
 
-static inline uint32_t
-rte_sched_port_qindex(struct rte_sched_port *port, uint32_t subport, uint32_t pipe, uint32_t traffic_class, uint32_t queue)
-{
-	uint32_t result;
-
-	result = subport * port->n_pipes_per_subport + pipe;
-	result = result * RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE + traffic_class;
-	result = result * RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS + queue;
-
-	return result;
-}
-
 #ifdef RTE_SCHED_DEBUG
 
 static inline int
@@ -1272,11 +1255,8 @@  rte_sched_port_enqueue_qptrs_prefetch0(struct rte_sched_port *port,
 #ifdef RTE_SCHED_COLLECT_STATS
 	struct rte_sched_queue_extra *qe;
 #endif
-	uint32_t subport, pipe, traffic_class, queue, qindex;
-
-	rte_sched_port_pkt_read_tree_path(pkt, &subport, &pipe, &traffic_class, &queue);
+	uint32_t qindex = rte_mbuf_sched_queue_get(pkt);
 
-	qindex = rte_sched_port_qindex(port, subport, pipe, traffic_class, queue);
 	q = port->queue + qindex;
 	rte_prefetch0(q);
 #ifdef RTE_SCHED_COLLECT_STATS
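
To make the flat queue index introduced above concrete, here is a standalone
sketch of the packing done by rte_sched_port_qindex() and the unpacking done
by rte_sched_port_pkt_read_tree_path(). The configuration values are assumed
(4096 pipes per subport, 4 traffic classes, 4 queues per class):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
	uint32_t n_pipes = 4096;			/* pipes per subport, power of two */
	uint32_t pipes_log2 = __builtin_ctz(n_pipes);	/* 12 */
	uint32_t subport = 1, pipe = 5, tc = 2, queue = 3;

	/* Layout: | subport | pipe | tc (2 bits) | queue (2 bits) | */
	uint32_t queue_id = (subport << (pipes_log2 + 4)) |
			(pipe << 4) | (tc << 2) | queue;

	/* Unpack the same way rte_sched_port_pkt_read_tree_path() does. */
	printf("queue_id=0x%x -> subport=%u pipe=%u tc=%u queue=%u\n",
		queue_id,
		queue_id >> (pipes_log2 + 4),
		(queue_id >> 4) & (n_pipes - 1),
		(queue_id >> 2) & 0x3,
		queue_id & 0x3);
	return 0;
}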
diff --git a/lib/librte_sched/rte_sched.h b/lib/librte_sched/rte_sched.h
index 84fa896de..243efa1d4 100644
--- a/lib/librte_sched/rte_sched.h
+++ b/lib/librte_sched/rte_sched.h
@@ -355,6 +355,8 @@  rte_sched_queue_read_stats(struct rte_sched_port *port,
  * Scheduler hierarchy path write to packet descriptor. Typically
  * called by the packet classification stage.
  *
+ * @param port
+ *   Handle to port scheduler instance
  * @param pkt
  *   Packet descriptor handle
  * @param subport
@@ -369,7 +371,8 @@  rte_sched_queue_read_stats(struct rte_sched_port *port,
  *   Packet color set
  */
 void
-rte_sched_port_pkt_write(struct rte_mbuf *pkt,
+rte_sched_port_pkt_write(struct rte_sched_port *port,
+			 struct rte_mbuf *pkt,
 			 uint32_t subport, uint32_t pipe, uint32_t traffic_class,
 			 uint32_t queue, enum rte_meter_color color);
 
@@ -379,6 +382,8 @@  rte_sched_port_pkt_write(struct rte_mbuf *pkt,
  * enqueue operation. The subport, pipe, traffic class and queue
  * parameters need to be pre-allocated by the caller.
  *
+ * @param port
+ *   Handle to port scheduler instance
  * @param pkt
  *   Packet descriptor handle
  * @param subport
@@ -392,7 +397,8 @@  rte_sched_port_pkt_write(struct rte_mbuf *pkt,
  *
  */
 void
-rte_sched_port_pkt_read_tree_path(const struct rte_mbuf *pkt,
+rte_sched_port_pkt_read_tree_path(struct rte_sched_port *port,
+				  const struct rte_mbuf *pkt,
 				  uint32_t *subport, uint32_t *pipe,
 				  uint32_t *traffic_class, uint32_t *queue);
 
diff --git a/test/test/test_sched.c b/test/test/test_sched.c
index 32e500ba9..40e411cab 100644
--- a/test/test/test_sched.c
+++ b/test/test/test_sched.c
@@ -76,7 +76,7 @@  create_mempool(void)
 }
 
 static void
-prepare_pkt(struct rte_mbuf *mbuf)
+prepare_pkt(struct rte_sched_port *port, struct rte_mbuf *mbuf)
 {
 	struct ether_hdr *eth_hdr;
 	struct vlan_hdr *vlan1, *vlan2;
@@ -95,7 +95,8 @@  prepare_pkt(struct rte_mbuf *mbuf)
 	ip_hdr->dst_addr = IPv4(0,0,TC,QUEUE);
 
 
-	rte_sched_port_pkt_write(mbuf, SUBPORT, PIPE, TC, QUEUE, e_RTE_METER_YELLOW);
+	rte_sched_port_pkt_write(port, mbuf, SUBPORT, PIPE, TC, QUEUE,
+					e_RTE_METER_YELLOW);
 
 	/* 64 byte packet */
 	mbuf->pkt_len  = 60;
@@ -138,7 +139,7 @@  test_sched(void)
 	for (i = 0; i < 10; i++) {
 		in_mbufs[i] = rte_pktmbuf_alloc(mp);
 		TEST_ASSERT_NOT_NULL(in_mbufs[i], "Packet allocation failed\n");
-		prepare_pkt(in_mbufs[i]);
+		prepare_pkt(port, in_mbufs[i]);
 	}
 
 
@@ -155,7 +156,7 @@  test_sched(void)
 		color = rte_sched_port_pkt_read_color(out_mbufs[i]);
 		TEST_ASSERT_EQUAL(color, e_RTE_METER_YELLOW, "Wrong color\n");
 
-		rte_sched_port_pkt_read_tree_path(out_mbufs[i],
+		rte_sched_port_pkt_read_tree_path(port, out_mbufs[i],
 				&subport, &pipe, &traffic_class, &queue);
 
 		TEST_ASSERT_EQUAL(subport, SUBPORT, "Wrong subport\n");