[v2,1/3] mbuf: implement generic format for sched field

Message ID 1544193116-7058-1-git-send-email-reshma.pattan@intel.com (mailing list archive)
State Superseded, archived
Delegated to: Cristian Dumitrescu
Series [v2,1/3] mbuf: implement generic format for sched field

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK
ci/mellanox-Performance-Testing success Performance Testing PASS
ci/intel-Performance-Testing success Performance Testing PASS

Commit Message

Pattan, Reshma Dec. 7, 2018, 2:31 p.m. UTC
  This patch implements the changes proposed in the deprecation
notes [1][2].

librte_mbuf changes:
The mbuf::hash::sched field is updated to support a generic
definition in line with the ethdev TM and MTR APIs. The new generic
format contains: queue ID, traffic class, color.

Added public APIs to set and get these new fields to and from mbuf.

librte_sched changes:
In addition, the following API functions of the sched library have
been modified with an additional parameter of type struct
rte_sched_port to accommodate the changes made to the mbuf sched field.
(i)  rte_sched_port_pkt_write()
(ii) rte_sched_port_pkt_read_tree_path()

librte_pipeline, qos_sched UT, qos_sched app are updated
to make use of new changes.

Also, mbuf::hash::txadapter has been added for the eventdev Tx queue;
rte_event_eth_tx_adapter_txq_set() and rte_event_eth_tx_adapter_txq_get()
are updated to use mbuf::hash::txadapter.txq.

doc:
Release notes updated.
Removed deprecation notice for mbuf::hash::sched and sched API.

[1] http://mails.dpdk.org/archives/dev/2018-February/090651.html
[2] https://mails.dpdk.org/archives/dev/2018-November/119051.html

Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
---
 doc/guides/rel_notes/deprecation.rst          |  10 --
 doc/guides/rel_notes/release_19_02.rst        |  15 ++-
 examples/qos_sched/app_thread.c               |   7 +-
 examples/qos_sched/main.c                     |   1 +
 .../rte_event_eth_tx_adapter.h                |  14 +--
 lib/librte_mbuf/Makefile                      |   2 +-
 lib/librte_mbuf/rte_mbuf.h                    | 112 +++++++++++++++++-
 lib/librte_pipeline/rte_table_action.c        |  75 +++++++-----
 lib/librte_sched/Makefile                     |   2 +-
 lib/librte_sched/rte_sched.c                  |  97 +++++++--------
 lib/librte_sched/rte_sched.h                  |  10 +-
 test/test/test_sched.c                        |   9 +-
 12 files changed, 234 insertions(+), 120 deletions(-)
  

Comments

Cristian Dumitrescu Dec. 11, 2018, 7:02 p.m. UTC | #1
> -----Original Message-----
> From: Pattan, Reshma
> Sent: Friday, December 7, 2018 2:32 PM
> To: dev@dpdk.org; Dumitrescu, Cristian <cristian.dumitrescu@intel.com>;
> jerin.jacob@caviumnetworks.com; Singh, Jasvinder
> <jasvinder.singh@intel.com>
> Cc: Pattan, Reshma <reshma.pattan@intel.com>
> Subject: [PATCH v2 1/3] mbuf: implement generic format for sched field
> 
> This patch implements the changes proposed in the deprecation
> notes [1][2].
> 
> librte_mbuf changes:
> The mbuf::hash::sched field is updated to support generic
> definition in line with the ethdev TM and MTR APIs. The new generic
> format contains: queue ID, traffic class, color.
> 
> Added public APIs to set and get these new fields to and from mbuf.
> 
> librte_sched changes:
> In addtion, following API functions of the sched library have
> been modified with an additional parameter of type struct
> rte_sched_port to accommodate the changes made to mbuf sched field.
> (i)  rte_sched_port_pkt_write()
> (ii) rte_sched_port_pkt_read_tree_path()
> 
> librte_pipeline, qos_sched UT, qos_sched app are updated
> to make use of new changes.
> 
> Also mbuf::hash::txadpater have been added for eventdev txq,
> rte_event_eth_tx_adapter_txq_set and
> rte_event_eth_tx_adapter_txq_get()
> are updated to use uses mbuf::hash::txadpater.txq.
> 
> doc:
> Release notes updated.
> Removed deprecation notice for mbuf::hash::sched and sched API.
> 
> [1] http://mails.dpdk.org/archives/dev/2018-February/090651.html
> [2] https://mails.dpdk.org/archives/dev/2018-November/119051.html
> 
> Signed-off-by: Jasvinder Singh <jasvinder.singh@intel.com>
> Signed-off-by: Reshma Pattan <reshma.pattan@intel.com>
> ---
>  doc/guides/rel_notes/deprecation.rst          |  10 --
>  doc/guides/rel_notes/release_19_02.rst        |  15 ++-
>  examples/qos_sched/app_thread.c               |   7 +-
>  examples/qos_sched/main.c                     |   1 +
>  .../rte_event_eth_tx_adapter.h                |  14 +--
>  lib/librte_mbuf/Makefile                      |   2 +-
>  lib/librte_mbuf/rte_mbuf.h                    | 112 +++++++++++++++++-
>  lib/librte_pipeline/rte_table_action.c        |  75 +++++++-----
>  lib/librte_sched/Makefile                     |   2 +-
>  lib/librte_sched/rte_sched.c                  |  97 +++++++--------
>  lib/librte_sched/rte_sched.h                  |  10 +-
>  test/test/test_sched.c                        |   9 +-
>  12 files changed, 234 insertions(+), 120 deletions(-)
> 

<snip>

> diff --git a/doc/guides/rel_notes/release_19_02.rst
> b/doc/guides/rel_notes/release_19_02.rst
> index a94fa86a7..24ba00d04 100644
> --- a/doc/guides/rel_notes/release_19_02.rst
> +++ b/doc/guides/rel_notes/release_19_02.rst
> @@ -83,6 +83,11 @@ API Changes
>     Also, make sure to start the actual text at the margin.
>     =========================================================
> 
> +* The below functions of the sched library have been modified with an
> +  additional parameter of type ``struct rte_sched_port`` to accommodate
> +  the changes made to mbuf::hash::sched field.
> +  (i)  rte_sched_port_pkt_write()
> +  (ii) rte_sched_port_pkt_read_tree_path()
> 
>  ABI Changes
>  -----------
> @@ -99,6 +104,12 @@ ABI Changes
>     Also, make sure to start the actual text at the margin.
>     =========================================================
> 
> +* The mbuf::hash::sched field is updated to support generic
> +  definition in line with the ethdev TM and MTR APIs. The new generic
> +  format contains: ``queue ID, traffic class, color``.
> +
> +* The mbuf:hash:txadapter is now added for eventdev txq. The txadapter
> +  format now contains ``reserved1, reserved2 and txq``.
> 

IMO these changes are too small to make it into the release notes; I suggest we remove them from here.

<snip>

> diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
> index 3dbc6695e..accd98d9b 100644
> --- a/lib/librte_mbuf/rte_mbuf.h
> +++ b/lib/librte_mbuf/rte_mbuf.h
> @@ -575,13 +575,23 @@ struct rte_mbuf {
>  				 */
>  			} fdir;	/**< Filter identifier if FDIR enabled */
>  			struct {
> -				uint32_t lo;
> -				uint32_t hi;
> +				uint32_t queue_id;   /**< Queue ID. */
> +				uint8_t traffic_class;
> +				/**< Traffic class ID. Traffic class 0 is high
> +				 * priority traffic class.
> +				 */

This comment is too fuzzy; my initial suggestion was: Traffic class 0 is the highest priority traffic class.

> +				uint8_t color;   /**< Color. */

Already suggested the following:
1. Include the new file "rte_color.h" in this file
2. Clearly connect this field to its real type, i.e. enum rte_color through the proper Doxygen syntax: /**< Color. @see enum rte_color. */

> +				uint16_t reserved;   /**< Reserved. */
> +			} sched;          /**< Hierarchical scheduler */
> +			struct {
> +				uint32_t reserved1;
> +				uint16_t reserved2;
> +				uint16_t txq;
>  				/**< The event eth Tx adapter uses this field
>  				 * to store Tx queue id.
>  				 * @see
> rte_event_eth_tx_adapter_txq_set()
>  				 */
> -			} sched;          /**< Hierarchical scheduler */
> +			} txadapter; /**< Eventdev ethdev Tx adapter */
>  			/**< User defined tags. See
> rte_distributor_process() */
>  			uint32_t usr;
>  		} hash;                   /**< hash information */
> @@ -2289,6 +2299,102 @@ rte_pktmbuf_linearize(struct rte_mbuf *mbuf)
>   */
>  void rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned
> dump_len);
> 
> +/**
> + * Reads the value of an mbuf's sched queue_id field.
> + */
> +static inline uint32_t
> +rte_mbuf_sched_queue_read(const struct rte_mbuf *m)
> +{
> +	return m->hash.sched.queue_id;
> +}

Let's name these functions *_get instead of *_read, which pairs up with the *_set functions below.

Applicable to all get/set functions: The comment should contain an imperative verb, i.e. "Get the value ...." instead of "Gets the value ...".
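
E.g. a minimal sketch of the first getter with both suggestions applied:

/**
 * Get the value of the mbuf sched queue_id field.
 */
static inline uint32_t
rte_mbuf_sched_queue_get(const struct rte_mbuf *m)
{
	return m->hash.sched.queue_id;
}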

> +
> +/**
> + * Reads the value of an mbuf's sched traffic_class field.
> + */
> +static inline uint8_t
> +rte_mbuf_sched_traffic_class_read(const struct rte_mbuf *m)
> +{
> +	return m->hash.sched.traffic_class;
> +}
> +
> +/**
> + * Reads the value of an mbuf's sched color field.
> + */
> +static inline uint8_t
> +rte_mbuf_sched_color_read(const struct rte_mbuf *m)
> +{
> +	return m->hash.sched.color;
> +}

The type of the return value should be enum rte_color instead of uint8_t:

static inline enum rte_color
rte_mbuf_sched_color_read(const struct rte_mbuf *m)
{
	return (enum rte_color)m->hash.sched.color;
}

> +
> +/**
> + * Sets an mbuf's sched queue_id to the defined value.
> + */
> +static inline void
> +rte_mbuf_sched_queue_set(struct rte_mbuf *m, uint32_t qid)
> +{
> +	m->hash.sched.queue_id = qid;
> +}

Let's use the full name queue_id instead of qid, please.

> +
> +/**
> + * Sets an mbuf's sched traffic_class id to the defined value.
> + */
> +static inline void
> +rte_mbuf_sched_traffic_class_set(struct rte_mbuf *m, uint8_t tc)
> +{
> +	m->hash.sched.traffic_class = tc;
> +}
> +

Same here.

> +/**
> + * Sets an mbuf's sched color id to the defined value.
> + */
> +static inline void
> +rte_mbuf_sched_color_set(struct rte_mbuf *m, uint8_t color)
> +{
> +	m->hash.sched.color = color;
> +}

The color parameter type should be enum rte_color, not uint8_t:

static inline void
rte_mbuf_sched_color_set(struct rte_mbuf *m, enum rte_color color)
{
	m->hash.sched.color = (uint8_t)color;
}

> +
> +/**
> + * Reads the values of an mbuf's sched queue_id, traffic_class and color.
> + * @param m
> + *   Mbuf to read
> + * @param qid
> + *  Returns the queue id
> + * @param tc
> + *  Returns the traffic class id
> + * @param color
> + *  Returns the colour id
> + */
> +static inline void
> +rte_mbuf_sched_read(const struct rte_mbuf *m, uint32_t *qid,
> +			uint8_t *tc,
> +			uint8_t *color)
> +{
> +	*qid = m->hash.sched.queue_id;
> +	*tc = m->hash.sched.traffic_class;
> +	*color = m->hash.sched.color;
> +}

Color type should be enum rte_color instead of uint8_t.

Full names, not abbreviated names, please.

Please move this function above with the other *_get functions (i.e. before any *_set function).
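
Putting these together, a sketch of the combined getter:

/**
 * Get the values of the mbuf sched queue_id, traffic_class and color fields.
 */
static inline void
rte_mbuf_sched_get(const struct rte_mbuf *m, uint32_t *queue_id,
			uint8_t *traffic_class,
			enum rte_color *color)
{
	*queue_id = m->hash.sched.queue_id;
	*traffic_class = m->hash.sched.traffic_class;
	*color = (enum rte_color)m->hash.sched.color;
}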

> +
> +/**
> + * Sets the mbuf's sched queue_id, traffic_class and color.
> + * @param m
> + *   Mbuf to set
> + * @param qid
> + *  Queue id value to be set
> + * @param tc
> + *  Traffic class id value to be set
> + * @param color
> + *  Color id to be set
> + */
> +static inline void
> +rte_mbuf_sched_set(struct rte_mbuf *m, uint32_t qid,
> +			uint8_t tc,
> +			uint8_t color)
> +{
> +	m->hash.sched.queue_id = qid;
> +	m->hash.sched.traffic_class = tc;
> +	m->hash.sched.color = color;
> +}

Color type should be enum rte_color instead of uint8_t.

Full names, not abbreviated names, please.
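
I.e., a sketch of the combined setter with both changes:

/**
 * Set the mbuf sched queue_id, traffic_class and color fields.
 */
static inline void
rte_mbuf_sched_set(struct rte_mbuf *m, uint32_t queue_id,
			uint8_t traffic_class,
			enum rte_color color)
{
	m->hash.sched.queue_id = queue_id;
	m->hash.sched.traffic_class = traffic_class;
	m->hash.sched.color = (uint8_t)color;
}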

> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/librte_pipeline/rte_table_action.c
> b/lib/librte_pipeline/rte_table_action.c
> index 7c7c8dd82..f9768b9cc 100644
> --- a/lib/librte_pipeline/rte_table_action.c
> +++ b/lib/librte_pipeline/rte_table_action.c
> @@ -107,14 +107,6 @@ mtr_cfg_check(struct rte_table_action_mtr_config
> *mtr)
>  	return 0;
>  }
> 
> -#define MBUF_SCHED_QUEUE_TC_COLOR(queue, tc, color)        \
> -	((uint16_t)((((uint64_t)(queue)) & 0x3) |          \
> -	((((uint64_t)(tc)) & 0x3) << 2) |                  \
> -	((((uint64_t)(color)) & 0x3) << 4)))
> -
> -#define MBUF_SCHED_COLOR(sched, color)                     \
> -	(((sched) & (~0x30LLU)) | ((color) << 4))
> -
>  struct mtr_trtcm_data {
>  	struct rte_meter_trtcm trtcm;
>  	uint64_t stats[e_RTE_METER_COLORS];
> @@ -176,7 +168,7 @@ mtr_data_size(struct rte_table_action_mtr_config
> *mtr)
>  struct dscp_table_entry_data {
>  	enum rte_meter_color color;
>  	uint16_t tc;
> -	uint16_t queue_tc_color;
> +	uint32_t queue;
>  };

In this data structure, let's rename queue as tc_queue, as it better illustrates what it is, i.e. queue within the traffic class.

In order to preserve the size of this data structure (8 bytes), let's use uint16_t to store tc_queue field.
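
I.e., a sketch of the updated structure (8 bytes, assuming a 4-byte enum):

struct dscp_table_entry_data {
	enum rte_meter_color color;
	uint16_t tc;
	uint16_t tc_queue;
};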

> 
>  struct dscp_table_data {
> @@ -319,8 +311,7 @@ pkt_work_mtr(struct rte_mbuf *mbuf,
>  	uint32_t dscp,
>  	uint16_t total_length)
>  {
> -	uint64_t drop_mask, sched;
> -	uint64_t *sched_ptr = (uint64_t *) &mbuf->hash.sched;
> +	uint64_t drop_mask;
>  	struct dscp_table_entry_data *dscp_entry = &dscp_table-
> >entry[dscp];
>  	enum rte_meter_color color_in, color_meter, color_policer;
>  	uint32_t tc, mp_id;
> @@ -329,7 +320,6 @@ pkt_work_mtr(struct rte_mbuf *mbuf,
>  	color_in = dscp_entry->color;
>  	data += tc;
>  	mp_id = MTR_TRTCM_DATA_METER_PROFILE_ID_GET(data);
> -	sched = *sched_ptr;
> 
>  	/* Meter */
>  	color_meter = rte_meter_trtcm_color_aware_check(
> @@ -346,7 +336,7 @@ pkt_work_mtr(struct rte_mbuf *mbuf,
>  	drop_mask =
> MTR_TRTCM_DATA_POLICER_ACTION_DROP_GET(data, color_meter);
>  	color_policer =
>  		MTR_TRTCM_DATA_POLICER_ACTION_COLOR_GET(data,
> color_meter);
> -	*sched_ptr = MBUF_SCHED_COLOR(sched, color_policer);
> +	rte_mbuf_sched_color_set(mbuf, color_policer);
> 
>  	return drop_mask;
>  }
> @@ -368,11 +358,16 @@ tm_cfg_check(struct rte_table_action_tm_config
> *tm)
>  }
> 
>  struct tm_data {
> -	uint16_t queue_tc_color;
>  	uint16_t subport;
>  	uint32_t pipe;
>  } __attribute__((__packed__));
> 

In order to preserve the current size for this data structure (8 bytes), let's have subport type as uint32_t.

> +/* log2 representation of tm configuration */

I suggest to remove this comment.

> +struct tm_cfg {
> +	uint32_t subports_per_port_log2;
> +	uint32_t pipes_per_subport_log2;
> +} __attribute__((__packed__));
> +

For readability purposes, it is probably better to preserve the names, so please use n_subports_per_port_log2 and n_pipes_per_subport_log2.
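
Taken together, a sketch of the two structures with these changes:

struct tm_data {
	uint32_t subport;
	uint32_t pipe;
} __attribute__((__packed__));

struct tm_cfg {
	uint32_t n_subports_per_port_log2;
	uint32_t n_pipes_per_subport_log2;
} __attribute__((__packed__));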

>  static int
>  tm_apply_check(struct rte_table_action_tm_params *p,
>  	struct rte_table_action_tm_config *cfg)
> @@ -397,26 +392,37 @@ tm_apply(struct tm_data *data,
>  		return status;
> 
>  	/* Apply */
> -	data->queue_tc_color = 0;
>  	data->subport = (uint16_t) p->subport_id;
>  	data->pipe = p->pipe_id;
> 
>  	return 0;
>  }
> 
> +static uint32_t
> +tm_sched_qindex(struct tm_data *data,
> +	struct dscp_table_entry_data *dscp,
> +	struct tm_cfg *tm)
> +{
> +	uint32_t result;
> +
> +	result = (data->subport << tm->pipes_per_subport_log2) + data-
> >pipe;
> +	result = result * RTE_TABLE_ACTION_TC_MAX + dscp->tc;
> +	result = result * RTE_TABLE_ACTION_TC_QUEUE_MAX + dscp-
> >queue;
> +
> +	return result;
> +}
> +

We can do this in a single assignment (replacing the multiplications by 4 with left shifts by 2).
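
E.g. (a sketch, assuming RTE_TABLE_ACTION_TC_MAX and RTE_TABLE_ACTION_TC_QUEUE_MAX are both 4, and using the renamed fields suggested above):

static uint32_t
tm_sched_qindex(struct tm_data *data,
	struct dscp_table_entry_data *dscp,
	struct tm_cfg *tm)
{
	/* Subport/pipe first, then 2 bits of tc, then 2 bits of tc_queue */
	return (((((data->subport << tm->n_pipes_per_subport_log2) +
		data->pipe) << 2) + dscp->tc) << 2) + dscp->tc_queue;
}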

>  static __rte_always_inline void
>  pkt_work_tm(struct rte_mbuf *mbuf,
>  	struct tm_data *data,
>  	struct dscp_table_data *dscp_table,
> -	uint32_t dscp)
> +	uint32_t dscp,
> +	struct tm_cfg *tm)
>  {
>  	struct dscp_table_entry_data *dscp_entry = &dscp_table-
> >entry[dscp];
> -	struct tm_data *sched_ptr = (struct tm_data *) &mbuf->hash.sched;
> -	struct tm_data sched;
> +	uint32_t queue = tm_sched_qindex(data, dscp_entry, tm);
> 
> -	sched = *data;
> -	sched.queue_tc_color = dscp_entry->queue_tc_color;
> -	*sched_ptr = sched;
> +	rte_mbuf_sched_set(mbuf, queue, dscp_entry->tc, dscp_entry-
> >color);
>  }
> 
>  /**
> @@ -2440,6 +2446,7 @@ struct rte_table_action {
>  	struct ap_data data;
>  	struct dscp_table_data dscp_table;
>  	struct meter_profile_data mp[METER_PROFILES_MAX];
> +	struct tm_cfg tm;
>  };
> 
>  struct rte_table_action *
> @@ -2465,6 +2472,11 @@ rte_table_action_create(struct
> rte_table_action_profile *profile,
>  	memcpy(&action->cfg, &profile->cfg, sizeof(profile->cfg));
>  	memcpy(&action->data, &profile->data, sizeof(profile->data));
> 
> +	action->tm.subports_per_port_log2 =
> +			__builtin_ctz(profile->cfg.tm.n_subports_per_port);
> +	action->tm.pipes_per_subport_log2 =
> +			__builtin_ctz(profile->cfg.tm.n_pipes_per_subport);
> +
>  	return action;
>  }
> 
> @@ -2580,17 +2592,13 @@ rte_table_action_dscp_table_update(struct
> rte_table_action *action,
>  			&action->dscp_table.entry[i];
>  		struct rte_table_action_dscp_table_entry *entry =
>  			&table->entry[i];
> -		uint16_t queue_tc_color =
> -			MBUF_SCHED_QUEUE_TC_COLOR(entry-
> >tc_queue_id,
> -				entry->tc_id,
> -				entry->color);
> 
>  		if ((dscp_mask & (1LLU << i)) == 0)
>  			continue;
> 
>  		data->color = entry->color;
>  		data->tc = entry->tc_id;
> -		data->queue_tc_color = queue_tc_color;
> +		data->queue = entry->tc_queue_id;
>  	}
> 
>  	return 0;
> @@ -2882,7 +2890,8 @@ pkt_work(struct rte_mbuf *mbuf,
>  		pkt_work_tm(mbuf,
>  			data,
>  			&action->dscp_table,
> -			dscp);
> +			dscp,
> +			&action->tm);
>  	}
> 
>  	if (cfg->action_mask & (1LLU << RTE_TABLE_ACTION_DECAP)) {
> @@ -3108,22 +3117,26 @@ pkt4_work(struct rte_mbuf **mbufs,
>  		pkt_work_tm(mbuf0,
>  			data0,
>  			&action->dscp_table,
> -			dscp0);
> +			dscp0,
> +			&action->tm);
> 
>  		pkt_work_tm(mbuf1,
>  			data1,
>  			&action->dscp_table,
> -			dscp1);
> +			dscp1,
> +			&action->tm);
> 
>  		pkt_work_tm(mbuf2,
>  			data2,
>  			&action->dscp_table,
> -			dscp2);
> +			dscp2,
> +			&action->tm);
> 
>  		pkt_work_tm(mbuf3,
>  			data3,
>  			&action->dscp_table,
> -			dscp3);
> +			dscp3,
> +			&action->tm);
>  	}
> 
>  	if (cfg->action_mask & (1LLU << RTE_TABLE_ACTION_DECAP)) {
> diff --git a/lib/librte_sched/Makefile b/lib/librte_sched/Makefile
> index 46c53ed71..644fd9d15 100644
> --- a/lib/librte_sched/Makefile
> +++ b/lib/librte_sched/Makefile
> @@ -18,7 +18,7 @@ LDLIBS += -lrte_timer
> 
>  EXPORT_MAP := rte_sched_version.map
> 
> -LIBABIVER := 1
> +LIBABIVER := 2
> 
>  #
>  # all source are stored in SRCS-y
> diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
> index 587d5e602..a6d9a5886 100644
> --- a/lib/librte_sched/rte_sched.c
> +++ b/lib/librte_sched/rte_sched.c
> @@ -41,6 +41,9 @@
>  #define RTE_SCHED_PIPE_INVALID                UINT32_MAX
>  #define RTE_SCHED_BMP_POS_INVALID             UINT32_MAX
> 
> +#define RTE_SCHED_QUEUES_PER_PIPE_LOG2 \
> +	__builtin_ctz(RTE_SCHED_QUEUES_PER_PIPE)
> +
>  /* Scaling for cycles_per_byte calculation
>   * Chosen so that minimum rate is 480 bit/sec
>   */
> @@ -128,22 +131,6 @@ enum grinder_state {
>  	e_GRINDER_READ_MBUF
>  };
> 
> -/*
> - * Path through the scheduler hierarchy used by the scheduler enqueue
> - * operation to identify the destination queue for the current
> - * packet. Stored in the field pkt.hash.sched of struct rte_mbuf of
> - * each packet, typically written by the classification stage and read
> - * by scheduler enqueue.
> - */
> -struct rte_sched_port_hierarchy {
> -	uint16_t queue:2;                /**< Queue ID (0 .. 3) */
> -	uint16_t traffic_class:2;        /**< Traffic class ID (0 .. 3)*/
> -	uint32_t color:2;                /**< Color */
> -	uint16_t unused:10;
> -	uint16_t subport;                /**< Subport ID */
> -	uint32_t pipe;		         /**< Pipe ID */
> -};
> -
>  struct rte_sched_grinder {
>  	/* Pipe cache */
>  	uint16_t pcache_qmask[RTE_SCHED_GRINDER_PCACHE_SIZE];
> @@ -228,6 +215,7 @@ struct rte_sched_port {
>  	uint8_t *bmp_array;
>  	struct rte_mbuf **queue_array;
>  	uint8_t memory[0] __rte_cache_aligned;
> +
>  } __rte_cache_aligned;
> 
>  enum rte_sched_port_array {
> @@ -242,13 +230,11 @@ enum rte_sched_port_array {
>  };
> 
>  #ifdef RTE_SCHED_COLLECT_STATS
> -
>  static inline uint32_t
>  rte_sched_port_queues_per_subport(struct rte_sched_port *port)
>  {
>  	return RTE_SCHED_QUEUES_PER_PIPE * port-
> >n_pipes_per_subport;
>  }
> -
>  #endif
> 
>  static inline uint32_t
> @@ -1006,44 +992,56 @@ rte_sched_port_pipe_profile_add(struct
> rte_sched_port *port,
>  	return 0;
>  }
> 
> +static inline uint32_t
> +rte_sched_port_qindex(struct rte_sched_port *port,
> +	uint32_t subport,
> +	uint32_t pipe,
> +	uint32_t traffic_class,
> +	uint32_t queue)
> +{
> +	uint32_t result;
> +
> +	result = (subport << __builtin_ctz(port->n_pipes_per_subport)) +
> pipe;
> +	result = result * RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE +
> traffic_class;
> +	result = result * RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS + queue;
> +
> +	return result;
> +}
> +
>  void
> -rte_sched_port_pkt_write(struct rte_mbuf *pkt,
> +rte_sched_port_pkt_write(struct rte_sched_port *port,
> +			 struct rte_mbuf *pkt,
>  			 uint32_t subport, uint32_t pipe, uint32_t
> traffic_class,
>  			 uint32_t queue, enum rte_meter_color color)
>  {
> -	struct rte_sched_port_hierarchy *sched
> -		= (struct rte_sched_port_hierarchy *) &pkt->hash.sched;
> -
> -	RTE_BUILD_BUG_ON(sizeof(*sched) > sizeof(pkt->hash.sched));
> -
> -	sched->color = (uint32_t) color;
> -	sched->subport = subport;
> -	sched->pipe = pipe;
> -	sched->traffic_class = traffic_class;
> -	sched->queue = queue;
> +	uint32_t queue_id = rte_sched_port_qindex(port, subport, pipe,
> +			traffic_class, queue);
> +	rte_mbuf_sched_set(pkt, queue_id, traffic_class, (uint8_t)color);
>  }
> 
>  void
> -rte_sched_port_pkt_read_tree_path(const struct rte_mbuf *pkt,
> +rte_sched_port_pkt_read_tree_path(struct rte_sched_port *port,
> +				  const struct rte_mbuf *pkt,
>  				  uint32_t *subport, uint32_t *pipe,
>  				  uint32_t *traffic_class, uint32_t *queue)
>  {
> -	const struct rte_sched_port_hierarchy *sched
> -		= (const struct rte_sched_port_hierarchy *) &pkt-
> >hash.sched;
> -
> -	*subport = sched->subport;
> -	*pipe = sched->pipe;
> -	*traffic_class = sched->traffic_class;
> -	*queue = sched->queue;
> +	uint32_t qid = rte_mbuf_sched_queue_read(pkt);
> +
> +	*subport = qid >>
> +		(__builtin_ctz
> +			(RTE_SCHED_QUEUES_PER_PIPE <<
> +			__builtin_ctz(port->n_pipes_per_subport)
> +			)
> +		);
> +	*pipe = qid >> RTE_SCHED_QUEUES_PER_PIPE_LOG2;
> +	*traffic_class = rte_mbuf_sched_traffic_class_read(pkt);
> +	*queue = qid & (RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS - 1);
>  }
> 
>  enum rte_meter_color
>  rte_sched_port_pkt_read_color(const struct rte_mbuf *pkt)
>  {
> -	const struct rte_sched_port_hierarchy *sched
> -		= (const struct rte_sched_port_hierarchy *) &pkt-
> >hash.sched;
> -
> -	return (enum rte_meter_color) sched->color;
> +	return (enum rte_meter_color)rte_mbuf_sched_color_read(pkt);
>  }
> 
>  int
> @@ -1100,18 +1098,6 @@ rte_sched_queue_read_stats(struct
> rte_sched_port *port,
>  	return 0;
>  }
> 
> -static inline uint32_t
> -rte_sched_port_qindex(struct rte_sched_port *port, uint32_t subport,
> uint32_t pipe, uint32_t traffic_class, uint32_t queue)
> -{
> -	uint32_t result;
> -
> -	result = subport * port->n_pipes_per_subport + pipe;
> -	result = result * RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE +
> traffic_class;
> -	result = result * RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS + queue;
> -
> -	return result;
> -}
> -
>  #ifdef RTE_SCHED_DEBUG
> 
>  static inline int
> @@ -1272,11 +1258,8 @@ rte_sched_port_enqueue_qptrs_prefetch0(struct
> rte_sched_port *port,
>  #ifdef RTE_SCHED_COLLECT_STATS
>  	struct rte_sched_queue_extra *qe;
>  #endif
> -	uint32_t subport, pipe, traffic_class, queue, qindex;
> -
> -	rte_sched_port_pkt_read_tree_path(pkt, &subport, &pipe,
> &traffic_class, &queue);
> +	uint32_t qindex = rte_mbuf_sched_queue_read(pkt);
> 
> -	qindex = rte_sched_port_qindex(port, subport, pipe, traffic_class,
> queue);
>  	q = port->queue + qindex;
>  	rte_prefetch0(q);
>  #ifdef RTE_SCHED_COLLECT_STATS
> diff --git a/lib/librte_sched/rte_sched.h b/lib/librte_sched/rte_sched.h
> index 84fa896de..243efa1d4 100644
> --- a/lib/librte_sched/rte_sched.h
> +++ b/lib/librte_sched/rte_sched.h
> @@ -355,6 +355,8 @@ rte_sched_queue_read_stats(struct rte_sched_port
> *port,
>   * Scheduler hierarchy path write to packet descriptor. Typically
>   * called by the packet classification stage.
>   *
> + * @param port
> + *   Handle to port scheduler instance
>   * @param pkt
>   *   Packet descriptor handle
>   * @param subport
> @@ -369,7 +371,8 @@ rte_sched_queue_read_stats(struct rte_sched_port
> *port,
>   *   Packet color set
>   */
>  void
> -rte_sched_port_pkt_write(struct rte_mbuf *pkt,
> +rte_sched_port_pkt_write(struct rte_sched_port *port,
> +			 struct rte_mbuf *pkt,
>  			 uint32_t subport, uint32_t pipe, uint32_t
> traffic_class,
>  			 uint32_t queue, enum rte_meter_color color);
> 
> @@ -379,6 +382,8 @@ rte_sched_port_pkt_write(struct rte_mbuf *pkt,
>   * enqueue operation. The subport, pipe, traffic class and queue
>   * parameters need to be pre-allocated by the caller.
>   *
> + * @param port
> + *   Handle to port scheduler instance
>   * @param pkt
>   *   Packet descriptor handle
>   * @param subport
> @@ -392,7 +397,8 @@ rte_sched_port_pkt_write(struct rte_mbuf *pkt,
>   *
>   */
>  void
> -rte_sched_port_pkt_read_tree_path(const struct rte_mbuf *pkt,
> +rte_sched_port_pkt_read_tree_path(struct rte_sched_port *port,
> +				  const struct rte_mbuf *pkt,
>  				  uint32_t *subport, uint32_t *pipe,
>  				  uint32_t *traffic_class, uint32_t *queue);
> 
> diff --git a/test/test/test_sched.c b/test/test/test_sched.c
> index 32e500ba9..40e411cab 100644
> --- a/test/test/test_sched.c
> +++ b/test/test/test_sched.c
> @@ -76,7 +76,7 @@ create_mempool(void)
>  }
> 
>  static void
> -prepare_pkt(struct rte_mbuf *mbuf)
> +prepare_pkt(struct rte_sched_port *port, struct rte_mbuf *mbuf)
>  {
>  	struct ether_hdr *eth_hdr;
>  	struct vlan_hdr *vlan1, *vlan2;
> @@ -95,7 +95,8 @@ prepare_pkt(struct rte_mbuf *mbuf)
>  	ip_hdr->dst_addr = IPv4(0,0,TC,QUEUE);
> 
> 
> -	rte_sched_port_pkt_write(mbuf, SUBPORT, PIPE, TC, QUEUE,
> e_RTE_METER_YELLOW);
> +	rte_sched_port_pkt_write(port, mbuf, SUBPORT, PIPE, TC, QUEUE,
> +					e_RTE_METER_YELLOW);
> 
>  	/* 64 byte packet */
>  	mbuf->pkt_len  = 60;
> @@ -138,7 +139,7 @@ test_sched(void)
>  	for (i = 0; i < 10; i++) {
>  		in_mbufs[i] = rte_pktmbuf_alloc(mp);
>  		TEST_ASSERT_NOT_NULL(in_mbufs[i], "Packet allocation
> failed\n");
> -		prepare_pkt(in_mbufs[i]);
> +		prepare_pkt(port, in_mbufs[i]);
>  	}
> 
> 
> @@ -155,7 +156,7 @@ test_sched(void)
>  		color = rte_sched_port_pkt_read_color(out_mbufs[i]);
>  		TEST_ASSERT_EQUAL(color, e_RTE_METER_YELLOW, "Wrong
> color\n");
> 
> -		rte_sched_port_pkt_read_tree_path(out_mbufs[i],
> +		rte_sched_port_pkt_read_tree_path(port, out_mbufs[i],
>  				&subport, &pipe, &traffic_class, &queue);
> 
>  		TEST_ASSERT_EQUAL(subport, SUBPORT, "Wrong
> subport\n");
> --
> 2.17.1
  
Cristian Dumitrescu Dec. 12, 2018, 6:17 p.m. UTC | #2
Hi Reshma,

More comments on this patch.

<snip>

> diff --git a/lib/librte_pipeline/rte_table_action.c
> b/lib/librte_pipeline/rte_table_action.c
> index 7c7c8dd82..f9768b9cc 100644
> --- a/lib/librte_pipeline/rte_table_action.c
> +++ b/lib/librte_pipeline/rte_table_action.c
> @@ -107,14 +107,6 @@ mtr_cfg_check(struct rte_table_action_mtr_config
> *mtr)
>  	return 0;
>  }
> 

<snip>

> @@ -368,11 +358,16 @@ tm_cfg_check(struct rte_table_action_tm_config
> *tm)
>  }
> 
>  struct tm_data {
> -	uint16_t queue_tc_color;
>  	uint16_t subport;
>  	uint32_t pipe;
>  } __attribute__((__packed__));
> 

Currently, the tm_data has the same memory layout as mbuf->hash.sched, as defined by librte_sched.
Now, we are basically replacing sched with queue_id, whose format is defined by librte_sched. So why not keep the same approach going forward?
What does this mean in practice:
- queue_id simply packs (subport ID, pipe ID, tc, tc_queue) into the 32-bit queue_id.
- we can simply pre-pack subport ID and pipe ID into queue_id, and initially use tc = 0 and tc_queue = 0.
- later on, as we read tc and tc_queue from the DSCP table, we simply update them in the queue_id; basically, we change the least significant 4 bits of queue_id.

Does it make sense?

Let's also keep tm_data size of 8 bytes to preserve the current 8-byte action data alignment:

struct tm_data {
  	uint32_t queue_id;
	uint32_t reserved;
} __attribute__((__packed__));
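
With this layout, tm_apply() can pre-pack the tree path at entry add time, e.g. (a sketch; assumes tm_apply() keeps its (data, p, cfg) parameters and that tc and tc_queue occupy the 4 least significant bits, as described above):

	/* Pre-pack subport and pipe into queue_id; tc and tc_queue
	 * start as 0 and are filled in later from the DSCP table.
	 */
	data->queue_id = (p->subport_id <<
		(__builtin_ctz(cfg->n_pipes_per_subport) + 4)) |
		(p->pipe_id << 4);
	data->reserved = 0;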

> +/* log2 representation of tm configuration */
> +struct tm_cfg {
> +	uint32_t subports_per_port_log2;
> +	uint32_t pipes_per_subport_log2;
> +} __attribute__((__packed__));

No need to worry about _log2 values if we follow this approach, as they will only be needed (and computed) by apply function, not the run-time tm_work function.

> +
>  static int
>  tm_apply_check(struct rte_table_action_tm_params *p,
>  	struct rte_table_action_tm_config *cfg)
> @@ -397,26 +392,37 @@ tm_apply(struct tm_data *data,
>  		return status;
> 
>  	/* Apply */
> -	data->queue_tc_color = 0;
>  	data->subport = (uint16_t) p->subport_id;
>  	data->pipe = p->pipe_id;
> 
>  	return 0;
>  }
> 
> +static uint32_t
> +tm_sched_qindex(struct tm_data *data,
> +	struct dscp_table_entry_data *dscp,
> +	struct tm_cfg *tm)
> +{
> +	uint32_t result;
> +
> +	result = (data->subport << tm->pipes_per_subport_log2) + data-
> >pipe;
> +	result = result * RTE_TABLE_ACTION_TC_MAX + dscp->tc;
> +	result = result * RTE_TABLE_ACTION_TC_QUEUE_MAX + dscp-
> >queue;
> +
> +	return result;
> +}
> +

This function is no longer needed with the above approach.

>  static __rte_always_inline void
>  pkt_work_tm(struct rte_mbuf *mbuf,
>  	struct tm_data *data,
>  	struct dscp_table_data *dscp_table,
> -	uint32_t dscp)
> +	uint32_t dscp,
> +	struct tm_cfg *tm)
>  {
>  	struct dscp_table_entry_data *dscp_entry = &dscp_table-
> >entry[dscp];
> -	struct tm_data *sched_ptr = (struct tm_data *) &mbuf->hash.sched;
> -	struct tm_data sched;
> +	uint32_t queue = tm_sched_qindex(data, dscp_entry, tm);
> 
> -	sched = *data;
> -	sched.queue_tc_color = dscp_entry->queue_tc_color;
> -	*sched_ptr = sched;
> +	rte_mbuf_sched_set(mbuf, queue, dscp_entry->tc, dscp_entry-
> >color);
>  }
> 

With the above approach:

uint32_t queue_id = data->queue_id | (dscp_entry->tc << 2) | dscp_entry->tc_queue;
rte_mbuf_sched_set(mbuf, queue_id, dscp_entry->tc, dscp_entry->color);

>  /**
> @@ -2440,6 +2446,7 @@ struct rte_table_action {
>  	struct ap_data data;
>  	struct dscp_table_data dscp_table;
>  	struct meter_profile_data mp[METER_PROFILES_MAX];
> +	struct tm_cfg tm;
>  };
> 

Not needed with the above approach.

>  struct rte_table_action *
> @@ -2465,6 +2472,11 @@ rte_table_action_create(struct
> rte_table_action_profile *profile,
>  	memcpy(&action->cfg, &profile->cfg, sizeof(profile->cfg));
>  	memcpy(&action->data, &profile->data, sizeof(profile->data));
> 
> +	action->tm.subports_per_port_log2 =
> +			__builtin_ctz(profile->cfg.tm.n_subports_per_port);
> +	action->tm.pipes_per_subport_log2 =
> +			__builtin_ctz(profile->cfg.tm.n_pipes_per_subport);
> +
>  	return action;
>  }
> 
> @@ -2580,17 +2592,13 @@ rte_table_action_dscp_table_update(struct
> rte_table_action *action,
>  			&action->dscp_table.entry[i];
>  		struct rte_table_action_dscp_table_entry *entry =
>  			&table->entry[i];
> -		uint16_t queue_tc_color =
> -			MBUF_SCHED_QUEUE_TC_COLOR(entry-
> >tc_queue_id,
> -				entry->tc_id,
> -				entry->color);
> 
>  		if ((dscp_mask & (1LLU << i)) == 0)
>  			continue;
> 
>  		data->color = entry->color;
>  		data->tc = entry->tc_id;
> -		data->queue_tc_color = queue_tc_color;
> +		data->queue = entry->tc_queue_id;
>  	}
> 
>  	return 0;
> @@ -2882,7 +2890,8 @@ pkt_work(struct rte_mbuf *mbuf,
>  		pkt_work_tm(mbuf,
>  			data,
>  			&action->dscp_table,
> -			dscp);
> +			dscp,
> +			&action->tm);
>  	}
> 
>  	if (cfg->action_mask & (1LLU << RTE_TABLE_ACTION_DECAP)) {
> @@ -3108,22 +3117,26 @@ pkt4_work(struct rte_mbuf **mbufs,
>  		pkt_work_tm(mbuf0,
>  			data0,
>  			&action->dscp_table,
> -			dscp0);
> +			dscp0,
> +			&action->tm);
> 
>  		pkt_work_tm(mbuf1,
>  			data1,
>  			&action->dscp_table,
> -			dscp1);
> +			dscp1,
> +			&action->tm);
> 
>  		pkt_work_tm(mbuf2,
>  			data2,
>  			&action->dscp_table,
> -			dscp2);
> +			dscp2,
> +			&action->tm);
> 
>  		pkt_work_tm(mbuf3,
>  			data3,
>  			&action->dscp_table,
> -			dscp3);
> +			dscp3,
> +			&action->tm);
>  	}
> 
>  	if (cfg->action_mask & (1LLU << RTE_TABLE_ACTION_DECAP)) {
> diff --git a/lib/librte_sched/Makefile b/lib/librte_sched/Makefile
> index 46c53ed71..644fd9d15 100644
> --- a/lib/librte_sched/Makefile
> +++ b/lib/librte_sched/Makefile
> @@ -18,7 +18,7 @@ LDLIBS += -lrte_timer
> 
>  EXPORT_MAP := rte_sched_version.map
> 
> -LIBABIVER := 1
> +LIBABIVER := 2
> 
>  #
>  # all source are stored in SRCS-y
> diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
> index 587d5e602..a6d9a5886 100644
> --- a/lib/librte_sched/rte_sched.c
> +++ b/lib/librte_sched/rte_sched.c
> @@ -41,6 +41,9 @@
>  #define RTE_SCHED_PIPE_INVALID                UINT32_MAX
>  #define RTE_SCHED_BMP_POS_INVALID             UINT32_MAX
> 
> +#define RTE_SCHED_QUEUES_PER_PIPE_LOG2 \
> +	__builtin_ctz(RTE_SCHED_QUEUES_PER_PIPE)
> +
>  /* Scaling for cycles_per_byte calculation
>   * Chosen so that minimum rate is 480 bit/sec
>   */
> @@ -128,22 +131,6 @@ enum grinder_state {
>  	e_GRINDER_READ_MBUF
>  };
> 
> -/*
> - * Path through the scheduler hierarchy used by the scheduler enqueue
> - * operation to identify the destination queue for the current
> - * packet. Stored in the field pkt.hash.sched of struct rte_mbuf of
> - * each packet, typically written by the classification stage and read
> - * by scheduler enqueue.
> - */
> -struct rte_sched_port_hierarchy {
> -	uint16_t queue:2;                /**< Queue ID (0 .. 3) */
> -	uint16_t traffic_class:2;        /**< Traffic class ID (0 .. 3)*/
> -	uint32_t color:2;                /**< Color */
> -	uint16_t unused:10;
> -	uint16_t subport;                /**< Subport ID */
> -	uint32_t pipe;		         /**< Pipe ID */
> -};
> -
>  struct rte_sched_grinder {
>  	/* Pipe cache */
>  	uint16_t pcache_qmask[RTE_SCHED_GRINDER_PCACHE_SIZE];
> @@ -228,6 +215,7 @@ struct rte_sched_port {
>  	uint8_t *bmp_array;
>  	struct rte_mbuf **queue_array;
>  	uint8_t memory[0] __rte_cache_aligned;
> +
>  } __rte_cache_aligned;
> 

Please remove this blank line.

>  enum rte_sched_port_array {
> @@ -242,13 +230,11 @@ enum rte_sched_port_array {
>  };
> 
>  #ifdef RTE_SCHED_COLLECT_STATS
> -
>  static inline uint32_t
>  rte_sched_port_queues_per_subport(struct rte_sched_port *port)
>  {
>  	return RTE_SCHED_QUEUES_PER_PIPE * port-
> >n_pipes_per_subport;
>  }
> -
>  #endif
> 

Please put back the two blank lines that got removed.

>  static inline uint32_t
> @@ -1006,44 +992,56 @@ rte_sched_port_pipe_profile_add(struct
> rte_sched_port *port,
>  	return 0;
>  }
> 
> +static inline uint32_t
> +rte_sched_port_qindex(struct rte_sched_port *port,
> +	uint32_t subport,
> +	uint32_t pipe,
> +	uint32_t traffic_class,
> +	uint32_t queue)
> +{
> +	uint32_t result;
> +
> +	result = (subport << __builtin_ctz(port->n_pipes_per_subport)) +
> pipe;
> +	result = result * RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE +
> traffic_class;
> +	result = result * RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS + queue;
> +
> +	return result;
> +}
> +

This is a performance critical run-time function, so let's optimize it by pre-computing log2 value for n_pipes_per_subport and store it in the port data structure: port->n_pipes_per_subport_log2.

We should also make sure we truncate subport ID and pipe ID in case they are bigger than max, which is easily done since the max values are enforced to be power of 2.

static inline uint32_t
rte_sched_port_qindex(struct rte_sched_port *port,
	uint32_t subport,
	uint32_t pipe,
	uint32_t traffic_class,
	uint32_t queue)
{
	return ((subport & (port->n_subports_per_port - 1)) << (port->n_pipes_per_subport_log2 + 4)) |
		((pipe & (port->n_pipes_per_subport - 1)) << 4) |
		((traffic_class & (RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE - 1)) << 2) |
		(queue & (RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS - 1));
}

>  void
> -rte_sched_port_pkt_write(struct rte_mbuf *pkt,
> +rte_sched_port_pkt_write(struct rte_sched_port *port,
> +			 struct rte_mbuf *pkt,
>  			 uint32_t subport, uint32_t pipe, uint32_t
> traffic_class,
>  			 uint32_t queue, enum rte_meter_color color)
>  {
> -	struct rte_sched_port_hierarchy *sched
> -		= (struct rte_sched_port_hierarchy *) &pkt->hash.sched;
> -
> -	RTE_BUILD_BUG_ON(sizeof(*sched) > sizeof(pkt->hash.sched));
> -
> -	sched->color = (uint32_t) color;
> -	sched->subport = subport;
> -	sched->pipe = pipe;
> -	sched->traffic_class = traffic_class;
> -	sched->queue = queue;
> +	uint32_t queue_id = rte_sched_port_qindex(port, subport, pipe,
> +			traffic_class, queue);
> +	rte_mbuf_sched_set(pkt, queue_id, traffic_class, (uint8_t)color);
>  }
> 
>  void
> -rte_sched_port_pkt_read_tree_path(const struct rte_mbuf *pkt,
> +rte_sched_port_pkt_read_tree_path(struct rte_sched_port *port,
> +				  const struct rte_mbuf *pkt,
>  				  uint32_t *subport, uint32_t *pipe,
>  				  uint32_t *traffic_class, uint32_t *queue)
>  {
> -	const struct rte_sched_port_hierarchy *sched
> -		= (const struct rte_sched_port_hierarchy *) &pkt-
> >hash.sched;
> -
> -	*subport = sched->subport;
> -	*pipe = sched->pipe;
> -	*traffic_class = sched->traffic_class;
> -	*queue = sched->queue;
> +	uint32_t qid = rte_mbuf_sched_queue_read(pkt);

queue_id instead of qid, please.

> +
> +	*subport = qid >>
> +		(__builtin_ctz
> +			(RTE_SCHED_QUEUES_PER_PIPE <<
> +			__builtin_ctz(port->n_pipes_per_subport)
> +			)
> +		);

Not correct.

*subport = queue_id >> (port->n_pipes_per_subport_log2 + 4);

> +	*pipe = qid >> RTE_SCHED_QUEUES_PER_PIPE_LOG2;

Not correct.

*pipe = (queue_id >> 4) & (port->n_pipes_per_subport - 1);

> +	*traffic_class = rte_mbuf_sched_traffic_class_read(pkt);

Let's read traffic class from queue_id in order to keep consistency with subport ID and pipe ID, please:

*traffic_class = (queue_id >> 2) & (RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE - 1);
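
Putting the three corrections together, the function would then read (a sketch; assumes port->n_pipes_per_subport_log2 is pre-computed at port config time, as suggested above):

void
rte_sched_port_pkt_read_tree_path(struct rte_sched_port *port,
				  const struct rte_mbuf *pkt,
				  uint32_t *subport, uint32_t *pipe,
				  uint32_t *traffic_class, uint32_t *queue)
{
	uint32_t queue_id = rte_mbuf_sched_queue_read(pkt);

	*subport = queue_id >> (port->n_pipes_per_subport_log2 + 4);
	*pipe = (queue_id >> 4) & (port->n_pipes_per_subport - 1);
	*traffic_class = (queue_id >> 2) &
		(RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE - 1);
	*queue = queue_id & (RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS - 1);
}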

> +	*queue = qid & (RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS - 1);
>  }
> 
>  enum rte_meter_color
>  rte_sched_port_pkt_read_color(const struct rte_mbuf *pkt)
>  {
> -	const struct rte_sched_port_hierarchy *sched
> -		= (const struct rte_sched_port_hierarchy *) &pkt-
> >hash.sched;
> -
> -	return (enum rte_meter_color) sched->color;
> +	return (enum rte_meter_color)rte_mbuf_sched_color_read(pkt);
>  }
> 
>  int
> @@ -1100,18 +1098,6 @@ rte_sched_queue_read_stats(struct
> rte_sched_port *port,
>  	return 0;
>  }
> 
> -static inline uint32_t
> -rte_sched_port_qindex(struct rte_sched_port *port, uint32_t subport,
> uint32_t pipe, uint32_t traffic_class, uint32_t queue)
> -{
> -	uint32_t result;
> -
> -	result = subport * port->n_pipes_per_subport + pipe;
> -	result = result * RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE +
> traffic_class;
> -	result = result * RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS + queue;
> -
> -	return result;
> -}
> -
>  #ifdef RTE_SCHED_DEBUG
> 
>  static inline int
> @@ -1272,11 +1258,8 @@ rte_sched_port_enqueue_qptrs_prefetch0(struct
> rte_sched_port *port,
>  #ifdef RTE_SCHED_COLLECT_STATS
>  	struct rte_sched_queue_extra *qe;
>  #endif
> -	uint32_t subport, pipe, traffic_class, queue, qindex;
> -
> -	rte_sched_port_pkt_read_tree_path(pkt, &subport, &pipe,
> &traffic_class, &queue);
> +	uint32_t qindex = rte_mbuf_sched_queue_read(pkt);
> 
> -	qindex = rte_sched_port_qindex(port, subport, pipe, traffic_class,
> queue);
>  	q = port->queue + qindex;
>  	rte_prefetch0(q);
>  #ifdef RTE_SCHED_COLLECT_STATS
> diff --git a/lib/librte_sched/rte_sched.h b/lib/librte_sched/rte_sched.h
> index 84fa896de..243efa1d4 100644
> --- a/lib/librte_sched/rte_sched.h
> +++ b/lib/librte_sched/rte_sched.h
> @@ -355,6 +355,8 @@ rte_sched_queue_read_stats(struct rte_sched_port
> *port,
>   * Scheduler hierarchy path write to packet descriptor. Typically
>   * called by the packet classification stage.
>   *
> + * @param port
> + *   Handle to port scheduler instance
>   * @param pkt
>   *   Packet descriptor handle
>   * @param subport
> @@ -369,7 +371,8 @@ rte_sched_queue_read_stats(struct rte_sched_port
> *port,
>   *   Packet color set
>   */
>  void
> -rte_sched_port_pkt_write(struct rte_mbuf *pkt,
> +rte_sched_port_pkt_write(struct rte_sched_port *port,
> +			 struct rte_mbuf *pkt,
>  			 uint32_t subport, uint32_t pipe, uint32_t
> traffic_class,
>  			 uint32_t queue, enum rte_meter_color color);
> 
> @@ -379,6 +382,8 @@ rte_sched_port_pkt_write(struct rte_mbuf *pkt,
>   * enqueue operation. The subport, pipe, traffic class and queue
>   * parameters need to be pre-allocated by the caller.
>   *
> + * @param port
> + *   Handle to port scheduler instance
>   * @param pkt
>   *   Packet descriptor handle
>   * @param subport
> @@ -392,7 +397,8 @@ rte_sched_port_pkt_write(struct rte_mbuf *pkt,
>   *
>   */
>  void
> -rte_sched_port_pkt_read_tree_path(const struct rte_mbuf *pkt,
> +rte_sched_port_pkt_read_tree_path(struct rte_sched_port *port,
> +				  const struct rte_mbuf *pkt,
>  				  uint32_t *subport, uint32_t *pipe,
>  				  uint32_t *traffic_class, uint32_t *queue);
> 
> diff --git a/test/test/test_sched.c b/test/test/test_sched.c
> index 32e500ba9..40e411cab 100644
> --- a/test/test/test_sched.c
> +++ b/test/test/test_sched.c
> @@ -76,7 +76,7 @@ create_mempool(void)
>  }
> 
>  static void
> -prepare_pkt(struct rte_mbuf *mbuf)
> +prepare_pkt(struct rte_sched_port *port, struct rte_mbuf *mbuf)
>  {
>  	struct ether_hdr *eth_hdr;
>  	struct vlan_hdr *vlan1, *vlan2;
> @@ -95,7 +95,8 @@ prepare_pkt(struct rte_mbuf *mbuf)
>  	ip_hdr->dst_addr = IPv4(0,0,TC,QUEUE);
> 
> 
> -	rte_sched_port_pkt_write(mbuf, SUBPORT, PIPE, TC, QUEUE,
> e_RTE_METER_YELLOW);
> +	rte_sched_port_pkt_write(port, mbuf, SUBPORT, PIPE, TC, QUEUE,
> +					e_RTE_METER_YELLOW);
> 
>  	/* 64 byte packet */
>  	mbuf->pkt_len  = 60;
> @@ -138,7 +139,7 @@ test_sched(void)
>  	for (i = 0; i < 10; i++) {
>  		in_mbufs[i] = rte_pktmbuf_alloc(mp);
>  		TEST_ASSERT_NOT_NULL(in_mbufs[i], "Packet allocation
> failed\n");
> -		prepare_pkt(in_mbufs[i]);
> +		prepare_pkt(port, in_mbufs[i]);
>  	}
> 
> 
> @@ -155,7 +156,7 @@ test_sched(void)
>  		color = rte_sched_port_pkt_read_color(out_mbufs[i]);
>  		TEST_ASSERT_EQUAL(color, e_RTE_METER_YELLOW, "Wrong
> color\n");
> 
> -		rte_sched_port_pkt_read_tree_path(out_mbufs[i],
> +		rte_sched_port_pkt_read_tree_path(port, out_mbufs[i],
>  				&subport, &pipe, &traffic_class, &queue);
> 
>  		TEST_ASSERT_EQUAL(subport, SUBPORT, "Wrong
> subport\n");
> --
> 2.17.1
  
Rao, Nikhil Dec. 13, 2018, 8:51 a.m. UTC | #3
Hi Reshma,

> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Reshma Pattan
> Sent: Friday, December 7, 2018 8:02 PM
> To: dev@dpdk.org; Dumitrescu, Cristian <cristian.dumitrescu@intel.com>;
> jerin.jacob@caviumnetworks.com; Singh, Jasvinder
> <jasvinder.singh@intel.com>
> Cc: Pattan, Reshma <reshma.pattan@intel.com>
> Subject: [dpdk-dev] [PATCH v2 1/3] mbuf: implement generic format for
> sched field
> 
> This patch implements the changes proposed in the deprecation notes [1][2].
> 
> librte_mbuf changes:
> The mbuf::hash::sched field is updated to support generic definition in line
> with the ethdev TM and MTR APIs. The new generic format contains: queue
> ID, traffic class, color.
> 
> Added public APIs to set and get these new fields to and from mbuf.
> 
> librte_sched changes:
> In addtion, following API functions of the sched library have been modified
> with an additional parameter of type struct rte_sched_port to accommodate
> the changes made to mbuf sched field.
> (i)  rte_sched_port_pkt_write()
> (ii) rte_sched_port_pkt_read_tree_path()
> 
> librte_pipeline, qos_sched UT, qos_sched app are updated to make use of
> new changes.
> 
> Also mbuf::hash::txadpater
s/txadpater/txadapter

> have been added for eventdev txq,
s/have/has

> rte_event_eth_tx_adapter_txq_set and rte_event_eth_tx_adapter_txq_get()
> are updated to use uses mbuf::hash::txadpater.txq.

s/txadpater/txadapter

>
<snip>
> 
> diff --git a/lib/librte_eventdev/rte_event_eth_tx_adapter.h
> b/lib/librte_eventdev/rte_event_eth_tx_adapter.h
> index 81456d4a9..b406660ab 100644
> --- a/lib/librte_eventdev/rte_event_eth_tx_adapter.h
> +++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.h
> @@ -63,11 +63,11 @@
>   * function can be used to retrieve the adapter's service function ID.
>   *
>   * The ethernet port and transmit queue index to transmit the mbuf on are
> - * specified using the mbuf port and the higher 16 bits of
> - * struct rte_mbuf::hash::sched:hi. The application should use the
> - * rte_event_eth_tx_adapter_txq_set() and
> rte_event_eth_tx_adapter_txq_get()
> - * functions to access the transmit queue index since it is expected that the
> - * transmit queue will be eventually defined within struct rte_mbuf and using
> + * specified using the mbuf port struct rte_mbuf::hash::txadapter:txq.
> + * The application should use the rte_event_eth_tx_adapter_txq_set()
> + * and rte_event_eth_tx_adapter_txq_get() functions to access the
> + transmit
> + * queue index 

>since it is expected that the transmit queue will be
> + * eventually defined within struct rte_mbuf and 
Can we delete the text above?

> using
>   * these macros will help with minimizing application impact due to
>   * a change in how the transmit queue index is specified.
>   */
Retaining the macro looks OK to me.

> @@ -300,7 +300,7 @@ rte_event_eth_tx_adapter_queue_del(uint8_t id,
> static __rte_always_inline void __rte_experimental
> rte_event_eth_tx_adapter_txq_set(struct rte_mbuf *pkt, uint16_t queue)  {
> -	uint16_t *p = (uint16_t *)&pkt->hash.sched.hi;
> +	uint16_t *p = (uint16_t *)&pkt->hash.txadapter.txq;
>  	p[1] = queue;
>  }
Replace with 
	pkt->hash.txadapter.txq = queue;

> 
> @@ -320,7 +320,7 @@ rte_event_eth_tx_adapter_txq_set(struct rte_mbuf
> *pkt, uint16_t queue)  static __rte_always_inline uint16_t
> __rte_experimental  rte_event_eth_tx_adapter_txq_get(struct rte_mbuf
> *pkt)  {
> -	uint16_t *p = (uint16_t *)&pkt->hash.sched.hi;
> +	uint16_t *p = (uint16_t *)&pkt->hash.txadapter.txq;
>  	return p[1];
>  }
Replace with 
	return pkt->hash.txadapter.txq;

Thanks,
Nikhil
  

Patch

diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index b48486d36..60e081a54 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -49,16 +49,6 @@  Deprecation Notices
   structure would be made internal (or removed if all dependencies are cleared)
   in future releases.
 
-* mbuf: The opaque ``mbuf->hash.sched`` field will be updated to support generic
-  definition in line with the ethdev TM and MTR APIs. Currently, this field
-  is defined in librte_sched in a non-generic way. The new generic format
-  will contain: queue ID, traffic class, color. Field size will not change.
-
-* sched: Some API functions will change prototype due to the above
-  deprecation note for mbuf->hash.sched, e.g. ``rte_sched_port_pkt_write()``
-  and ``rte_sched_port_pkt_read()`` will likely have an additional parameter
-  of type ``struct rte_sched_port``.
-
 * mbuf: the macro ``RTE_MBUF_INDIRECT()`` will be removed in v18.08 or later and
   replaced with ``RTE_MBUF_CLONED()`` which is already added in v18.05. As
   ``EXT_ATTACHED_MBUF`` is newly introduced in v18.05, ``RTE_MBUF_INDIRECT()``
diff --git a/doc/guides/rel_notes/release_19_02.rst b/doc/guides/rel_notes/release_19_02.rst
index a94fa86a7..24ba00d04 100644
--- a/doc/guides/rel_notes/release_19_02.rst
+++ b/doc/guides/rel_notes/release_19_02.rst
@@ -83,6 +83,11 @@  API Changes
    Also, make sure to start the actual text at the margin.
    =========================================================
 
+* The below functions of the sched library have been modified with an
+  additional parameter of type ``struct rte_sched_port`` to accommodate
+  the changes made to mbuf::hash::sched field.
+  (i)  rte_sched_port_pkt_write()
+  (ii) rte_sched_port_pkt_read_tree_path()
 
 ABI Changes
 -----------
@@ -99,6 +104,12 @@  ABI Changes
    Also, make sure to start the actual text at the margin.
    =========================================================
 
+* The mbuf::hash::sched field is updated to support generic
+  definition in line with the ethdev TM and MTR APIs. The new generic
+  format contains: ``queue ID, traffic class, color``.
+
+* The mbuf:hash:txadapter is now added for eventdev txq. The txadapter
+  format now contains ``reserved1, reserved2 and txq``.
 
 Shared Library Versions
 -----------------------
@@ -146,7 +157,7 @@  The libraries prepended with a plus sign were incremented in this version.
      librte_kvargs.so.1
      librte_latencystats.so.1
      librte_lpm.so.2
-     librte_mbuf.so.4
+   + librte_mbuf.so.5
      librte_member.so.1
      librte_mempool.so.5
      librte_meter.so.2
@@ -168,7 +179,7 @@  The libraries prepended with a plus sign were incremented in this version.
      librte_rawdev.so.1
      librte_reorder.so.1
      librte_ring.so.2
-     librte_sched.so.1
+   + librte_sched.so.2
      librte_security.so.1
      librte_table.so.3
      librte_timer.so.1
diff --git a/examples/qos_sched/app_thread.c b/examples/qos_sched/app_thread.c
index a59274236..bec4deee3 100644
--- a/examples/qos_sched/app_thread.c
+++ b/examples/qos_sched/app_thread.c
@@ -73,8 +73,11 @@  app_rx_thread(struct thread_conf **confs)
 			for(i = 0; i < nb_rx; i++) {
 				get_pkt_sched(rx_mbufs[i],
 						&subport, &pipe, &traffic_class, &queue, &color);
-				rte_sched_port_pkt_write(rx_mbufs[i], subport, pipe,
-						traffic_class, queue, (enum rte_meter_color) color);
+				rte_sched_port_pkt_write(conf->sched_port,
+						rx_mbufs[i],
+						subport, pipe,
+						traffic_class, queue,
+						(enum rte_meter_color) color);
 			}
 
 			if (unlikely(rte_ring_sp_enqueue_bulk(conf->rx_ring,
diff --git a/examples/qos_sched/main.c b/examples/qos_sched/main.c
index e7c97bd64..c0ed16b68 100644
--- a/examples/qos_sched/main.c
+++ b/examples/qos_sched/main.c
@@ -55,6 +55,7 @@  app_main_loop(__attribute__((unused))void *dummy)
 			flow->rx_thread.rx_port = flow->rx_port;
 			flow->rx_thread.rx_ring =  flow->rx_ring;
 			flow->rx_thread.rx_queue = flow->rx_queue;
+			flow->rx_thread.sched_port = flow->sched_port;
 
 			rx_confs[rx_idx++] = &flow->rx_thread;
 
diff --git a/lib/librte_eventdev/rte_event_eth_tx_adapter.h b/lib/librte_eventdev/rte_event_eth_tx_adapter.h
index 81456d4a9..b406660ab 100644
--- a/lib/librte_eventdev/rte_event_eth_tx_adapter.h
+++ b/lib/librte_eventdev/rte_event_eth_tx_adapter.h
@@ -63,11 +63,11 @@ 
  * function can be used to retrieve the adapter's service function ID.
  *
  * The ethernet port and transmit queue index to transmit the mbuf on are
- * specified using the mbuf port and the higher 16 bits of
- * struct rte_mbuf::hash::sched:hi. The application should use the
- * rte_event_eth_tx_adapter_txq_set() and rte_event_eth_tx_adapter_txq_get()
- * functions to access the transmit queue index since it is expected that the
- * transmit queue will be eventually defined within struct rte_mbuf and using
+ * specified using the mbuf port struct rte_mbuf::hash::txadapter:txq.
+ * The application should use the rte_event_eth_tx_adapter_txq_set()
+ * and rte_event_eth_tx_adapter_txq_get() functions to access the transmit
+ * queue index since it is expected that the transmit queue will be
+ * eventually defined within struct rte_mbuf and using
  * these macros will help with minimizing application impact due to
  * a change in how the transmit queue index is specified.
  */
@@ -300,7 +300,7 @@  rte_event_eth_tx_adapter_queue_del(uint8_t id,
 static __rte_always_inline void __rte_experimental
 rte_event_eth_tx_adapter_txq_set(struct rte_mbuf *pkt, uint16_t queue)
 {
-	uint16_t *p = (uint16_t *)&pkt->hash.sched.hi;
+	uint16_t *p = (uint16_t *)&pkt->hash.txadapter.txq;
 	p[1] = queue;
 }
 
@@ -320,7 +320,7 @@  rte_event_eth_tx_adapter_txq_set(struct rte_mbuf *pkt, uint16_t queue)
 static __rte_always_inline uint16_t __rte_experimental
 rte_event_eth_tx_adapter_txq_get(struct rte_mbuf *pkt)
 {
-	uint16_t *p = (uint16_t *)&pkt->hash.sched.hi;
+	uint16_t *p = (uint16_t *)&pkt->hash.txadapter.txq;
 	return p[1];
 }
 
diff --git a/lib/librte_mbuf/Makefile b/lib/librte_mbuf/Makefile
index e2b98a254..8c4c7d790 100644
--- a/lib/librte_mbuf/Makefile
+++ b/lib/librte_mbuf/Makefile
@@ -11,7 +11,7 @@  LDLIBS += -lrte_eal -lrte_mempool
 
 EXPORT_MAP := rte_mbuf_version.map
 
-LIBABIVER := 4
+LIBABIVER := 5
 
 # all source are stored in SRCS-y
 SRCS-$(CONFIG_RTE_LIBRTE_MBUF) := rte_mbuf.c rte_mbuf_ptype.c rte_mbuf_pool_ops.c
diff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h
index 3dbc6695e..accd98d9b 100644
--- a/lib/librte_mbuf/rte_mbuf.h
+++ b/lib/librte_mbuf/rte_mbuf.h
@@ -575,13 +575,23 @@  struct rte_mbuf {
 				 */
 			} fdir;	/**< Filter identifier if FDIR enabled */
 			struct {
-				uint32_t lo;
-				uint32_t hi;
+				uint32_t queue_id;   /**< Queue ID. */
+				uint8_t traffic_class;
+				/**< Traffic class ID. Traffic class 0 is high
+				 * priority traffic class.
+				 */
+				uint8_t color;   /**< Color. */
+				uint16_t reserved;   /**< Reserved. */
+			} sched;          /**< Hierarchical scheduler */
+			struct {
+				uint32_t reserved1;
+				uint16_t reserved2;
+				uint16_t txq;
 				/**< The event eth Tx adapter uses this field
 				 * to store Tx queue id.
 				 * @see rte_event_eth_tx_adapter_txq_set()
 				 */
-			} sched;          /**< Hierarchical scheduler */
+			} txadapter; /**< Eventdev ethdev Tx adapter */
 			/**< User defined tags. See rte_distributor_process() */
 			uint32_t usr;
 		} hash;                   /**< hash information */
@@ -2289,6 +2299,102 @@  rte_pktmbuf_linearize(struct rte_mbuf *mbuf)
  */
 void rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len);
 
+/**
+ * Reads the value of an mbuf's sched queue_id field.
+ */
+static inline uint32_t
+rte_mbuf_sched_queue_read(const struct rte_mbuf *m)
+{
+	return m->hash.sched.queue_id;
+}
+
+/**
+ * Reads the value of an mbuf's sched traffic_class field.
+ */
+static inline uint8_t
+rte_mbuf_sched_traffic_class_read(const struct rte_mbuf *m)
+{
+	return m->hash.sched.traffic_class;
+}
+
+/**
+ * Reads the value of an mbuf's sched color field.
+ */
+static inline uint8_t
+rte_mbuf_sched_color_read(const struct rte_mbuf *m)
+{
+	return m->hash.sched.color;
+}
+
+/**
+ * Sets an mbuf's sched queue_id to the defined value.
+ */
+static inline void
+rte_mbuf_sched_queue_set(struct rte_mbuf *m, uint32_t qid)
+{
+	m->hash.sched.queue_id = qid;
+}
+
+/**
+ * Sets an mbuf's sched traffic_class id to the defined value.
+ */
+static inline void
+rte_mbuf_sched_traffic_class_set(struct rte_mbuf *m, uint8_t tc)
+{
+	m->hash.sched.traffic_class = tc;
+}
+
+/**
+ * Sets an mbuf's sched color id to the defined value.
+ */
+static inline void
+rte_mbuf_sched_color_set(struct rte_mbuf *m, uint8_t color)
+{
+	m->hash.sched.color = color;
+}
+
+/**
+ * Reads the values of an mbuf's sched queue_id, traffic_class and color.
+ * @param m
+ *   Mbuf to read
+ * @param qid
+ *  Returns the queue id
+ * @param tc
+ *  Returns the traffic class id
+ * @param color
+ *  Returns the colour id
+ */
+static inline void
+rte_mbuf_sched_read(const struct rte_mbuf *m, uint32_t *qid,
+			uint8_t *tc,
+			uint8_t *color)
+{
+	*qid = m->hash.sched.queue_id;
+	*tc = m->hash.sched.traffic_class;
+	*color = m->hash.sched.color;
+}
+
+/**
+ * Sets the mbuf's sched queue_id, traffic_class and color.
+ * @param m
+ *   Mbuf to set
+ * @param qid
+ *  Queue id value to be set
+ * @param tc
+ *  Traffic class id value to be set
+ * @param color
+ *  Color id to be set
+ */
+static inline void
+rte_mbuf_sched_set(struct rte_mbuf *m, uint32_t qid,
+			uint8_t tc,
+			uint8_t color)
+{
+	m->hash.sched.queue_id = qid;
+	m->hash.sched.traffic_class = tc;
+	m->hash.sched.color = color;
+}
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_pipeline/rte_table_action.c b/lib/librte_pipeline/rte_table_action.c
index 7c7c8dd82..f9768b9cc 100644
--- a/lib/librte_pipeline/rte_table_action.c
+++ b/lib/librte_pipeline/rte_table_action.c
@@ -107,14 +107,6 @@  mtr_cfg_check(struct rte_table_action_mtr_config *mtr)
 	return 0;
 }
 
-#define MBUF_SCHED_QUEUE_TC_COLOR(queue, tc, color)        \
-	((uint16_t)((((uint64_t)(queue)) & 0x3) |          \
-	((((uint64_t)(tc)) & 0x3) << 2) |                  \
-	((((uint64_t)(color)) & 0x3) << 4)))
-
-#define MBUF_SCHED_COLOR(sched, color)                     \
-	(((sched) & (~0x30LLU)) | ((color) << 4))
-
 struct mtr_trtcm_data {
 	struct rte_meter_trtcm trtcm;
 	uint64_t stats[e_RTE_METER_COLORS];
@@ -176,7 +168,7 @@  mtr_data_size(struct rte_table_action_mtr_config *mtr)
 struct dscp_table_entry_data {
 	enum rte_meter_color color;
 	uint16_t tc;
-	uint16_t queue_tc_color;
+	uint32_t queue;
 };
 
 struct dscp_table_data {
@@ -319,8 +311,7 @@  pkt_work_mtr(struct rte_mbuf *mbuf,
 	uint32_t dscp,
 	uint16_t total_length)
 {
-	uint64_t drop_mask, sched;
-	uint64_t *sched_ptr = (uint64_t *) &mbuf->hash.sched;
+	uint64_t drop_mask;
 	struct dscp_table_entry_data *dscp_entry = &dscp_table->entry[dscp];
 	enum rte_meter_color color_in, color_meter, color_policer;
 	uint32_t tc, mp_id;
@@ -329,7 +320,6 @@  pkt_work_mtr(struct rte_mbuf *mbuf,
 	color_in = dscp_entry->color;
 	data += tc;
 	mp_id = MTR_TRTCM_DATA_METER_PROFILE_ID_GET(data);
-	sched = *sched_ptr;
 
 	/* Meter */
 	color_meter = rte_meter_trtcm_color_aware_check(
@@ -346,7 +336,7 @@  pkt_work_mtr(struct rte_mbuf *mbuf,
 	drop_mask = MTR_TRTCM_DATA_POLICER_ACTION_DROP_GET(data, color_meter);
 	color_policer =
 		MTR_TRTCM_DATA_POLICER_ACTION_COLOR_GET(data, color_meter);
-	*sched_ptr = MBUF_SCHED_COLOR(sched, color_policer);
+	rte_mbuf_sched_color_set(mbuf, color_policer);
 
 	return drop_mask;
 }
@@ -368,11 +358,16 @@  tm_cfg_check(struct rte_table_action_tm_config *tm)
 }
 
 struct tm_data {
-	uint16_t queue_tc_color;
 	uint16_t subport;
 	uint32_t pipe;
 } __attribute__((__packed__));
 
+/* TM configuration sizes stored as log2, so the flat queue index can
+ * be computed with shifts instead of multiplications.
+ */
+struct tm_cfg {
+	uint32_t subports_per_port_log2;
+	uint32_t pipes_per_subport_log2;
+} __attribute__((__packed__));
+
 static int
 tm_apply_check(struct rte_table_action_tm_params *p,
 	struct rte_table_action_tm_config *cfg)
@@ -397,26 +392,37 @@  tm_apply(struct tm_data *data,
 		return status;
 
 	/* Apply */
-	data->queue_tc_color = 0;
 	data->subport = (uint16_t) p->subport_id;
 	data->pipe = p->pipe_id;
 
 	return 0;
 }
 
+static uint32_t
+tm_sched_qindex(struct tm_data *data,
+	struct dscp_table_entry_data *dscp,
+	struct tm_cfg *tm)
+{
+	uint32_t result;
+
+	result = (data->subport << tm->pipes_per_subport_log2) + data->pipe;
+	result = result * RTE_TABLE_ACTION_TC_MAX + dscp->tc;
+	result = result * RTE_TABLE_ACTION_TC_QUEUE_MAX + dscp->queue;
+
+	return result;
+}
+
 static __rte_always_inline void
 pkt_work_tm(struct rte_mbuf *mbuf,
 	struct tm_data *data,
 	struct dscp_table_data *dscp_table,
-	uint32_t dscp)
+	uint32_t dscp,
+	struct tm_cfg *tm)
 {
 	struct dscp_table_entry_data *dscp_entry = &dscp_table->entry[dscp];
-	struct tm_data *sched_ptr = (struct tm_data *) &mbuf->hash.sched;
-	struct tm_data sched;
+	uint32_t queue = tm_sched_qindex(data, dscp_entry, tm);
 
-	sched = *data;
-	sched.queue_tc_color = dscp_entry->queue_tc_color;
-	*sched_ptr = sched;
+	rte_mbuf_sched_set(mbuf, queue, dscp_entry->tc, dscp_entry->color);
 }
 
 /**
@@ -2440,6 +2446,7 @@  struct rte_table_action {
 	struct ap_data data;
 	struct dscp_table_data dscp_table;
 	struct meter_profile_data mp[METER_PROFILES_MAX];
+	struct tm_cfg tm;
 };
 
 struct rte_table_action *
@@ -2465,6 +2472,11 @@  rte_table_action_create(struct rte_table_action_profile *profile,
 	memcpy(&action->cfg, &profile->cfg, sizeof(profile->cfg));
 	memcpy(&action->data, &profile->data, sizeof(profile->data));
 
+	action->tm.subports_per_port_log2 =
+			__builtin_ctz(profile->cfg.tm.n_subports_per_port);
+	action->tm.pipes_per_subport_log2 =
+			__builtin_ctz(profile->cfg.tm.n_pipes_per_subport);
+
 	return action;
 }
 
@@ -2580,17 +2592,13 @@  rte_table_action_dscp_table_update(struct rte_table_action *action,
 			&action->dscp_table.entry[i];
 		struct rte_table_action_dscp_table_entry *entry =
 			&table->entry[i];
-		uint16_t queue_tc_color =
-			MBUF_SCHED_QUEUE_TC_COLOR(entry->tc_queue_id,
-				entry->tc_id,
-				entry->color);
 
 		if ((dscp_mask & (1LLU << i)) == 0)
 			continue;
 
 		data->color = entry->color;
 		data->tc = entry->tc_id;
-		data->queue_tc_color = queue_tc_color;
+		data->queue = entry->tc_queue_id;
 	}
 
 	return 0;
@@ -2882,7 +2890,8 @@  pkt_work(struct rte_mbuf *mbuf,
 		pkt_work_tm(mbuf,
 			data,
 			&action->dscp_table,
-			dscp);
+			dscp,
+			&action->tm);
 	}
 
 	if (cfg->action_mask & (1LLU << RTE_TABLE_ACTION_DECAP)) {
@@ -3108,22 +3117,26 @@  pkt4_work(struct rte_mbuf **mbufs,
 		pkt_work_tm(mbuf0,
 			data0,
 			&action->dscp_table,
-			dscp0);
+			dscp0,
+			&action->tm);
 
 		pkt_work_tm(mbuf1,
 			data1,
 			&action->dscp_table,
-			dscp1);
+			dscp1,
+			&action->tm);
 
 		pkt_work_tm(mbuf2,
 			data2,
 			&action->dscp_table,
-			dscp2);
+			dscp2,
+			&action->tm);
 
 		pkt_work_tm(mbuf3,
 			data3,
 			&action->dscp_table,
-			dscp3);
+			dscp3,
+			&action->tm);
 	}
 
 	if (cfg->action_mask & (1LLU << RTE_TABLE_ACTION_DECAP)) {
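
tm_sched_qindex() above flattens the (subport, pipe, tc, queue) tuple
into the single queue id now carried in mbuf::hash::sched. A worked
example, assuming 4096 pipes per subport (pipes_per_subport_log2 = 12)
and RTE_TABLE_ACTION_TC_MAX = RTE_TABLE_ACTION_TC_QUEUE_MAX = 4,
matching librte_sched's 4 TC x 4 queue hierarchy:

    uint32_t subport = 2, pipe = 5, tc = 1, queue = 3;  /* example path */

    uint32_t qid = (subport << 12) + pipe;  /* 2 * 4096 + 5  =   8197 */
    qid = qid * 4 + tc;                     /* 8197 * 4 + 1  =  32789 */
    qid = qid * 4 + queue;                  /* 32789 * 4 + 3 = 131159 */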
diff --git a/lib/librte_sched/Makefile b/lib/librte_sched/Makefile
index 46c53ed71..644fd9d15 100644
--- a/lib/librte_sched/Makefile
+++ b/lib/librte_sched/Makefile
@@ -18,7 +18,7 @@  LDLIBS += -lrte_timer
 
 EXPORT_MAP := rte_sched_version.map
 
-LIBABIVER := 1
+LIBABIVER := 2
 
 #
 # all source are stored in SRCS-y
diff --git a/lib/librte_sched/rte_sched.c b/lib/librte_sched/rte_sched.c
index 587d5e602..a6d9a5886 100644
--- a/lib/librte_sched/rte_sched.c
+++ b/lib/librte_sched/rte_sched.c
@@ -41,6 +41,9 @@ 
 #define RTE_SCHED_PIPE_INVALID                UINT32_MAX
 #define RTE_SCHED_BMP_POS_INVALID             UINT32_MAX
 
+#define RTE_SCHED_QUEUES_PER_PIPE_LOG2 \
+	__builtin_ctz(RTE_SCHED_QUEUES_PER_PIPE)
+
 /* Scaling for cycles_per_byte calculation
  * Chosen so that minimum rate is 480 bit/sec
  */
@@ -128,22 +131,6 @@  enum grinder_state {
 	e_GRINDER_READ_MBUF
 };
 
-/*
- * Path through the scheduler hierarchy used by the scheduler enqueue
- * operation to identify the destination queue for the current
- * packet. Stored in the field pkt.hash.sched of struct rte_mbuf of
- * each packet, typically written by the classification stage and read
- * by scheduler enqueue.
- */
-struct rte_sched_port_hierarchy {
-	uint16_t queue:2;                /**< Queue ID (0 .. 3) */
-	uint16_t traffic_class:2;        /**< Traffic class ID (0 .. 3)*/
-	uint32_t color:2;                /**< Color */
-	uint16_t unused:10;
-	uint16_t subport;                /**< Subport ID */
-	uint32_t pipe;		         /**< Pipe ID */
-};
-
 struct rte_sched_grinder {
 	/* Pipe cache */
 	uint16_t pcache_qmask[RTE_SCHED_GRINDER_PCACHE_SIZE];
@@ -228,6 +215,7 @@  struct rte_sched_port {
 	uint8_t *bmp_array;
 	struct rte_mbuf **queue_array;
 	uint8_t memory[0] __rte_cache_aligned;
+
 } __rte_cache_aligned;
 
 enum rte_sched_port_array {
@@ -242,13 +230,11 @@  enum rte_sched_port_array {
 };
 
 #ifdef RTE_SCHED_COLLECT_STATS
-
 static inline uint32_t
 rte_sched_port_queues_per_subport(struct rte_sched_port *port)
 {
 	return RTE_SCHED_QUEUES_PER_PIPE * port->n_pipes_per_subport;
 }
-
 #endif
 
 static inline uint32_t
@@ -1006,44 +992,56 @@  rte_sched_port_pipe_profile_add(struct rte_sched_port *port,
 	return 0;
 }
 
+static inline uint32_t
+rte_sched_port_qindex(struct rte_sched_port *port,
+	uint32_t subport,
+	uint32_t pipe,
+	uint32_t traffic_class,
+	uint32_t queue)
+{
+	uint32_t result;
+
+	result = (subport << __builtin_ctz(port->n_pipes_per_subport)) + pipe;
+	result = result * RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE + traffic_class;
+	result = result * RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS + queue;
+
+	return result;
+}
+
 void
-rte_sched_port_pkt_write(struct rte_mbuf *pkt,
+rte_sched_port_pkt_write(struct rte_sched_port *port,
+			 struct rte_mbuf *pkt,
 			 uint32_t subport, uint32_t pipe, uint32_t traffic_class,
 			 uint32_t queue, enum rte_meter_color color)
 {
-	struct rte_sched_port_hierarchy *sched
-		= (struct rte_sched_port_hierarchy *) &pkt->hash.sched;
-
-	RTE_BUILD_BUG_ON(sizeof(*sched) > sizeof(pkt->hash.sched));
-
-	sched->color = (uint32_t) color;
-	sched->subport = subport;
-	sched->pipe = pipe;
-	sched->traffic_class = traffic_class;
-	sched->queue = queue;
+	uint32_t queue_id = rte_sched_port_qindex(port, subport, pipe,
+			traffic_class, queue);
+	rte_mbuf_sched_set(pkt, queue_id, traffic_class, (uint8_t)color);
 }
 
 void
-rte_sched_port_pkt_read_tree_path(const struct rte_mbuf *pkt,
+rte_sched_port_pkt_read_tree_path(struct rte_sched_port *port,
+				  const struct rte_mbuf *pkt,
 				  uint32_t *subport, uint32_t *pipe,
 				  uint32_t *traffic_class, uint32_t *queue)
 {
-	const struct rte_sched_port_hierarchy *sched
-		= (const struct rte_sched_port_hierarchy *) &pkt->hash.sched;
-
-	*subport = sched->subport;
-	*pipe = sched->pipe;
-	*traffic_class = sched->traffic_class;
-	*queue = sched->queue;
+	uint32_t qid = rte_mbuf_sched_queue_read(pkt);
+
+	*subport = qid >>
+		(RTE_SCHED_QUEUES_PER_PIPE_LOG2 +
+		__builtin_ctz(port->n_pipes_per_subport));
+	*pipe = (qid >> RTE_SCHED_QUEUES_PER_PIPE_LOG2) &
+		(port->n_pipes_per_subport - 1);
+	*traffic_class = rte_mbuf_sched_traffic_class_read(pkt);
+	*queue = qid & (RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS - 1);
 }
 
 enum rte_meter_color
 rte_sched_port_pkt_read_color(const struct rte_mbuf *pkt)
 {
-	const struct rte_sched_port_hierarchy *sched
-		= (const struct rte_sched_port_hierarchy *) &pkt->hash.sched;
-
-	return (enum rte_meter_color) sched->color;
+	return (enum rte_meter_color)rte_mbuf_sched_color_read(pkt);
 }
 
 int
@@ -1100,18 +1098,6 @@  rte_sched_queue_read_stats(struct rte_sched_port *port,
 	return 0;
 }
 
-static inline uint32_t
-rte_sched_port_qindex(struct rte_sched_port *port, uint32_t subport, uint32_t pipe, uint32_t traffic_class, uint32_t queue)
-{
-	uint32_t result;
-
-	result = subport * port->n_pipes_per_subport + pipe;
-	result = result * RTE_SCHED_TRAFFIC_CLASSES_PER_PIPE + traffic_class;
-	result = result * RTE_SCHED_QUEUES_PER_TRAFFIC_CLASS + queue;
-
-	return result;
-}
-
 #ifdef RTE_SCHED_DEBUG
 
 static inline int
@@ -1272,11 +1258,8 @@  rte_sched_port_enqueue_qptrs_prefetch0(struct rte_sched_port *port,
 #ifdef RTE_SCHED_COLLECT_STATS
 	struct rte_sched_queue_extra *qe;
 #endif
-	uint32_t subport, pipe, traffic_class, queue, qindex;
-
-	rte_sched_port_pkt_read_tree_path(pkt, &subport, &pipe, &traffic_class, &queue);
+	uint32_t qindex = rte_mbuf_sched_queue_read(pkt);
 
-	qindex = rte_sched_port_qindex(port, subport, pipe, traffic_class, queue);
 	q = port->queue + qindex;
 	rte_prefetch0(q);
 #ifdef RTE_SCHED_COLLECT_STATS
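
The decode in rte_sched_port_pkt_read_tree_path() reverses those
shifts. Continuing the illustrative numbers from the earlier example
(n_pipes_per_subport = 4096, 16 queues per pipe, so
RTE_SCHED_QUEUES_PER_PIPE_LOG2 = 4):

    uint32_t qid = 131159;

    uint32_t subport = qid >> (4 + 12);       /* = 2 */
    uint32_t pipe = (qid >> 4) & (4096 - 1);  /* = 5 */
    uint32_t queue = qid & (4 - 1);           /* = 3 */
    /* tc (= 1) is read from hash.sched.traffic_class, not from qid */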
diff --git a/lib/librte_sched/rte_sched.h b/lib/librte_sched/rte_sched.h
index 84fa896de..243efa1d4 100644
--- a/lib/librte_sched/rte_sched.h
+++ b/lib/librte_sched/rte_sched.h
@@ -355,6 +355,8 @@  rte_sched_queue_read_stats(struct rte_sched_port *port,
  * Scheduler hierarchy path write to packet descriptor. Typically
  * called by the packet classification stage.
  *
+ * @param port
+ *   Handle to port scheduler instance
  * @param pkt
  *   Packet descriptor handle
  * @param subport
@@ -369,7 +371,8 @@  rte_sched_queue_read_stats(struct rte_sched_port *port,
  *   Packet color set
  */
 void
-rte_sched_port_pkt_write(struct rte_mbuf *pkt,
+rte_sched_port_pkt_write(struct rte_sched_port *port,
+			 struct rte_mbuf *pkt,
 			 uint32_t subport, uint32_t pipe, uint32_t traffic_class,
 			 uint32_t queue, enum rte_meter_color color);
 
@@ -379,6 +382,8 @@  rte_sched_port_pkt_write(struct rte_mbuf *pkt,
  * enqueue operation. The subport, pipe, traffic class and queue
  * parameters need to be pre-allocated by the caller.
  *
+ * @param port
+ *   Handle to port scheduler instance
  * @param pkt
  *   Packet descriptor handle
  * @param subport
@@ -392,7 +397,8 @@  rte_sched_port_pkt_write(struct rte_mbuf *pkt,
  *
  */
 void
-rte_sched_port_pkt_read_tree_path(const struct rte_mbuf *pkt,
+rte_sched_port_pkt_read_tree_path(struct rte_sched_port *port,
+				  const struct rte_mbuf *pkt,
 				  uint32_t *subport, uint32_t *pipe,
 				  uint32_t *traffic_class, uint32_t *queue);
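
A minimal caller-side sketch of the updated prototypes (the path values
are illustrative; port is assumed to come from rte_sched_port_config()):

    static void
    stamp_and_verify(struct rte_sched_port *port, struct rte_mbuf *pkt)
    {
    	uint32_t subport, pipe, tc, queue;

    	rte_sched_port_pkt_write(port, pkt, 0, 5, 1, 3,
    				 e_RTE_METER_GREEN);
    	rte_sched_port_pkt_read_tree_path(port, pkt,
    			&subport, &pipe, &tc, &queue);
    	/* subport/pipe/tc/queue now hold 0/5/1/3 */
    }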
 
diff --git a/test/test/test_sched.c b/test/test/test_sched.c
index 32e500ba9..40e411cab 100644
--- a/test/test/test_sched.c
+++ b/test/test/test_sched.c
@@ -76,7 +76,7 @@  create_mempool(void)
 }
 
 static void
-prepare_pkt(struct rte_mbuf *mbuf)
+prepare_pkt(struct rte_sched_port *port, struct rte_mbuf *mbuf)
 {
 	struct ether_hdr *eth_hdr;
 	struct vlan_hdr *vlan1, *vlan2;
@@ -95,7 +95,8 @@  prepare_pkt(struct rte_mbuf *mbuf)
 	ip_hdr->dst_addr = IPv4(0,0,TC,QUEUE);
 
 
-	rte_sched_port_pkt_write(mbuf, SUBPORT, PIPE, TC, QUEUE, e_RTE_METER_YELLOW);
+	rte_sched_port_pkt_write(port, mbuf, SUBPORT, PIPE, TC, QUEUE,
+					e_RTE_METER_YELLOW);
 
 	/* 64 byte packet */
 	mbuf->pkt_len  = 60;
@@ -138,7 +139,7 @@  test_sched(void)
 	for (i = 0; i < 10; i++) {
 		in_mbufs[i] = rte_pktmbuf_alloc(mp);
 		TEST_ASSERT_NOT_NULL(in_mbufs[i], "Packet allocation failed\n");
-		prepare_pkt(in_mbufs[i]);
+		prepare_pkt(port, in_mbufs[i]);
 	}
 
 
@@ -155,7 +156,7 @@  test_sched(void)
 		color = rte_sched_port_pkt_read_color(out_mbufs[i]);
 		TEST_ASSERT_EQUAL(color, e_RTE_METER_YELLOW, "Wrong color\n");
 
-		rte_sched_port_pkt_read_tree_path(out_mbufs[i],
+		rte_sched_port_pkt_read_tree_path(port, out_mbufs[i],
 				&subport, &pipe, &traffic_class, &queue);
 
 		TEST_ASSERT_EQUAL(subport, SUBPORT, "Wrong subport\n");