[dpdk-dev,v5,1/3] ethdev: introduce Rx queue offloads API

Message ID bb9cc7e65e5167ce42ec1c809b5c8a8dc0d0633e.1506624250.git.shahafs@mellanox.com (mailing list archive)
State Superseded, archived

Checks

Context               Check     Description
ci/checkpatch         success   coding style OK
ci/Intel-compilation  success   Compilation OK

Commit Message

Shahaf Shuler Sept. 28, 2017, 6:54 p.m. UTC
  Introduce a new API to configure Rx offloads.

In the new API, offloads are divided into per-port and per-queue
offloads. The PMD reports capability for each of them.
Offloads are enabled using the existing DEV_RX_OFFLOAD_* flags.
To enable per-port offload, the offload should be set on both device
configuration and queue configuration. To enable per-queue offload, the
offloads can be set only on queue configuration.

Applications should set the ignore_offload_bitfield bit on rxmode
structure in order to move to the new API.

The old Rx offloads API is kept for the meanwhile, in order to enable a
smooth transition for PMDs and application to the new API.

Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
---
 doc/guides/nics/features.rst  |  33 ++++----
 lib/librte_ether/rte_ethdev.c | 156 +++++++++++++++++++++++++++++++++----
 lib/librte_ether/rte_ethdev.h |  51 +++++++++++-
 3 files changed, 210 insertions(+), 30 deletions(-)
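
A minimal usage sketch of the flow described in the commit message (illustrative only, not from the patch; the port id, ring size and mempool are placeholders, and Tx setup and error handling are omitted):

#include <rte_ethdev.h>
#include <rte_mempool.h>

static int
configure_rx(uint8_t port_id, struct rte_mempool *mb_pool)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf conf = { 0 };
	struct rte_eth_rxconf rxq_conf;
	int ret;

	rte_eth_dev_info_get(port_id, &dev_info);

	/* Move to the new API: the old rxmode bitfield is ignored. */
	conf.rxmode.ignore_offload_bitfield = 1;

	/* A per-port offload must be set in the device configuration ... */
	if (dev_info.rx_offload_capa & DEV_RX_OFFLOAD_CHECKSUM)
		conf.rxmode.offloads |= DEV_RX_OFFLOAD_CHECKSUM;

	ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
	if (ret < 0)
		return ret;

	/* ... and repeated in the configuration of each queue. */
	rxq_conf = dev_info.default_rxconf;
	rxq_conf.offloads = conf.rxmode.offloads;

	/* A per-queue offload may be set on the queue configuration alone. */
	if (dev_info.rx_queue_offload_capa & DEV_RX_OFFLOAD_VLAN_STRIP)
		rxq_conf.offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;

	return rte_eth_rx_queue_setup(port_id, 0, 512,
				      rte_eth_dev_socket_id(port_id),
				      &rxq_conf, mb_pool);
}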
  

Comments

Ferruh Yigit Oct. 3, 2017, 12:32 a.m. UTC | #1
On 9/28/2017 7:54 PM, Shahaf Shuler wrote:
> Introduce a new API to configure Rx offloads.
> 
> In the new API, offloads are divided into per-port and per-queue
> offloads. The PMD reports capability for each of them.
> Offloads are enabled using the existing DEV_RX_OFFLOAD_* flags.
> To enable per-port offload, the offload should be set on both device
> configuration and queue configuration. To enable per-queue offload, the
> offloads can be set only on queue configuration.
> 
> Applications should set the ignore_offload_bitfield bit on rxmode
> structure in order to move to the new API.
> 
> The old Rx offloads API is kept for the meanwhile, in order to enable a
> smooth transition for PMDs and application to the new API.
> 
> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>

<...>

> @@ -1102,8 +1193,18 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
>  	if (rx_conf == NULL)
>  		rx_conf = &dev_info.default_rxconf;
>  
> +	local_conf = *rx_conf;
> +	if (dev->data->dev_conf.rxmode.ignore_offload_bitfield == 0) {
> +		/**
> +		 * Reflect port offloads to queue offloads in order for
> +		 * offloads to not be discarded.
> +		 */
> +		rte_eth_convert_rx_offload_bitfield(&dev->data->dev_conf.rxmode,
> +						    &local_conf.offloads);
> +	}

If an application switches to the new method, it will set "offloads", and
if the underlying PMD doesn't support the new method it will simply do
nothing with the "offloads" variable. The problem is that the application
won't know the PMD ignored them; it may think the per-queue offloads were
set.

Does it make sense to notify the application that the PMD doesn't
understand the new "offloads" flag?

> +
>  	ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
> -					      socket_id, rx_conf, mp);
> +					      socket_id, &local_conf, mp);
>  	if (!ret) {
>  		if (!dev->data->min_rx_buf_size ||
>  		    dev->data->min_rx_buf_size > mbp_buf_size)

<...>

>  /**
> @@ -691,6 +712,12 @@ struct rte_eth_rxconf {
>  	uint16_t rx_free_thresh; /**< Drives the freeing of RX descriptors. */
>  	uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
>  	uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
> +	/**
> +	 * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
> +	 * Only offloads set on rx_queue_offload_capa or rx_offload_capa
> +	 * fields on rte_eth_dev_info structure are allowed to be set.
> +	 */

How will an application use the above "capa" flags to decide what to set?
Since "rx_queue_offload_capa" is a new field introduced with this patch, no
PMD implements it yet; does that mean no application will be able to use
per-queue offloads yet?

> +	uint64_t offloads;
>  };
>  

<...>
  
Shahaf Shuler Oct. 3, 2017, 6:25 a.m. UTC | #2
Hi Ferruh,

Tuesday, October 3, 2017 3:32 AM, Ferruh Yigit:
> On 9/28/2017 7:54 PM, Shahaf Shuler wrote:
> > Introduce a new API to configure Rx offloads.
> >
> > In the new API, offloads are divided into per-port and per-queue
> > offloads. The PMD reports capability for each of them.
> > Offloads are enabled using the existing DEV_RX_OFFLOAD_* flags.
> > To enable per-port offload, the offload should be set on both device
> > configuration and queue configuration. To enable per-queue offload,
> > the offloads can be set only on queue configuration.
> >
> > Applications should set the ignore_offload_bitfield bit on rxmode
> > structure in order to move to the new API.
> >
> > The old Rx offloads API is kept for the meanwhile, in order to enable
> > a smooth transition for PMDs and application to the new API.
> >
> > Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
>
> <...>
>
> > @@ -1102,8 +1193,18 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
> >  	if (rx_conf == NULL)
> >  		rx_conf = &dev_info.default_rxconf;
> >
> > +	local_conf = *rx_conf;
> > +	if (dev->data->dev_conf.rxmode.ignore_offload_bitfield == 0) {
> > +		/**
> > +		 * Reflect port offloads to queue offloads in order for
> > +		 * offloads to not be discarded.
> > +		 */
> > +		rte_eth_convert_rx_offload_bitfield(&dev->data->dev_conf.rxmode,
> > +						    &local_conf.offloads);
> > +	}
>
> If an application switches to the new method, it will set "offloads", and
> if the underlying PMD doesn't support the new method it will simply do
> nothing with the "offloads" variable. The problem is that the application
> won't know the PMD ignored them; it may think the per-queue offloads were
> set.
>
> Does it make sense to notify the application that the PMD doesn't
> understand the new "offloads" flag?

I don't think it is needed. In the new API the per-queue Rx offload caps are reported using the new rx_queue_offload_capa field. An old PMD will not set it, therefore an application which uses the new API will see that the underlying PMD supports only per-port Rx offloads.
This should be enough for it to understand that the per-queue offloads won't be set.
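
For example, a check along these lines (just a sketch; port_id is assumed to be a valid, already-probed port) is enough for the application to tell the two cases apart:

	struct rte_eth_dev_info dev_info;

	rte_eth_dev_info_get(port_id, &dev_info);
	if (dev_info.rx_queue_offload_capa == 0) {
		/*
		 * The PMD reports no per-queue Rx offload capabilities:
		 * only the per-port offloads listed in rx_offload_capa
		 * can be requested, through dev_conf.rxmode.offloads.
		 */
	}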

>
> > +
> >  	ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
> > -					      socket_id, rx_conf, mp);
> > +					      socket_id, &local_conf, mp);
> >  	if (!ret) {
> >  		if (!dev->data->min_rx_buf_size ||
> >  		    dev->data->min_rx_buf_size > mbp_buf_size)
>
> <...>
>
> >  /**
> > @@ -691,6 +712,12 @@ struct rte_eth_rxconf {
> >  	uint16_t rx_free_thresh; /**< Drives the freeing of RX descriptors. */
> >  	uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
> >  	uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
> > +	/**
> > +	 * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
> > +	 * Only offloads set on rx_queue_offload_capa or rx_offload_capa
> > +	 * fields on rte_eth_dev_info structure are allowed to be set.
> > +	 */
>
> How will an application use the above "capa" flags to decide what to set?
> Since "rx_queue_offload_capa" is a new field introduced with this patch, no
> PMD implements it yet; does that mean no application will be able to use
> per-queue offloads yet?

Yes.
An application which uses the new offloads API should query the device info and look into rx_offload_capa and rx_queue_offload_capa.
According to those two caps it will decide how to set the offloads.
Per-queue Rx offloads are a new functionality introduced in this series. Of course old PMDs will not support it, and this will be reflected in rx_queue_offload_capa.
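
For example, a rough sketch of that decision, assuming dev_conf and rxq_conf are the rte_eth_conf and rte_eth_rxconf structures that will later be passed to rte_eth_dev_configure() and rte_eth_rx_queue_setup(), and taking DEV_RX_OFFLOAD_TCP_LRO as the wanted offload:

	uint64_t want = DEV_RX_OFFLOAD_TCP_LRO;
	struct rte_eth_dev_info dev_info;

	rte_eth_dev_info_get(port_id, &dev_info);

	if (dev_info.rx_queue_offload_capa & want) {
		/* Supported per queue: enable it only on the queues that need it. */
		rxq_conf.offloads |= want;
	} else if (dev_info.rx_offload_capa & want) {
		/* Per port only: enable it in dev_conf and on every queue. */
		dev_conf.rxmode.offloads |= want;
		rxq_conf.offloads |= want;
	} else {
		/* Not supported by this device. */
	}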


>
> > +	uint64_t offloads;
> >  };
> >
>
> <...>
  
Ferruh Yigit Oct. 3, 2017, 7:46 p.m. UTC | #3
On 10/3/2017 7:25 AM, Shahaf Shuler wrote:
> Hi Ferruh,
> 
> Tuesday, October 3, 2017 3:32 AM, Ferruh Yigit:
>> On 9/28/2017 7:54 PM, Shahaf Shuler wrote:
>>> Introduce a new API to configure Rx offloads.
>>>
>>> In the new API, offloads are divided into per-port and per-queue
>>> offloads. The PMD reports capability for each of them.
>>> Offloads are enabled using the existing DEV_RX_OFFLOAD_* flags.
>>> To enable per-port offload, the offload should be set on both device
>>> configuration and queue configuration. To enable per-queue offload,
>>> the offloads can be set only on queue configuration.
>>>
>>> Applications should set the ignore_offload_bitfield bit on rxmode
>>> structure in order to move to the new API.
>>>
>>> The old Rx offloads API is kept for the meanwhile, in order to enable
>>> a smooth transition for PMDs and application to the new API.
>>>
>>> Signed-off-by: Shahaf Shuler <shahafs@mellanox.com>
>>
>> <...>
>>
>>> @@ -1102,8 +1193,18 @@ rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
>>>  	if (rx_conf == NULL)
>>>  		rx_conf = &dev_info.default_rxconf;
>>>
>>> +	local_conf = *rx_conf;
>>> +	if (dev->data->dev_conf.rxmode.ignore_offload_bitfield == 0) {
>>> +		/**
>>> +		 * Reflect port offloads to queue offloads in order for
>>> +		 * offloads to not be discarded.
>>> +		 */
>>> +		rte_eth_convert_rx_offload_bitfield(&dev->data->dev_conf.rxmode,
>>> +						    &local_conf.offloads);
>>> +	}
>>
>> If an application switches to the new method, it will set "offloads", and
>> if the underlying PMD doesn't support the new method it will simply do
>> nothing with the "offloads" variable. The problem is that the application
>> won't know the PMD ignored them; it may think the per-queue offloads were
>> set.
>>
>> Does it make sense to notify the application that the PMD doesn't
>> understand the new "offloads" flag?
> 
> I don't think it is needed. In the new API the per-queue Rx offload caps are reported using the new rx_queue_offload_capa field. An old PMD will not set it, therefore an application which uses the new API will see that the underlying PMD supports only per-port Rx offloads.
> This should be enough for it to understand that the per-queue offloads won't be set.

OK, makes sense, so the application should check the queue-based offload
capabilities the PMD returned and decide whether to use port-based or
queue-based offloads.

> 
>>
>>> +
>>>  	ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
>>> -					      socket_id, rx_conf, mp);
>>> +					      socket_id, &local_conf, mp);
>>>  	if (!ret) {
>>>  		if (!dev->data->min_rx_buf_size ||
>>>  		    dev->data->min_rx_buf_size > mbp_buf_size)
>>

<...>
  

Patch

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 37ffbc68c..4e68144ef 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -179,7 +179,7 @@  Jumbo frame
 
 Supports Rx jumbo frames.
 
-* **[uses]    user config**: ``dev_conf.rxmode.jumbo_frame``,
+* **[uses]    rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_JUMBO_FRAME``.
   ``dev_conf.rxmode.max_rx_pkt_len``.
 * **[related] rte_eth_dev_info**: ``max_rx_pktlen``.
 * **[related] API**: ``rte_eth_dev_set_mtu()``.
@@ -192,7 +192,7 @@  Scattered Rx
 
 Supports receiving segmented mbufs.
 
-* **[uses]       user config**: ``dev_conf.rxmode.enable_scatter``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_SCATTER``.
 * **[implements] datapath**: ``Scattered Rx function``.
 * **[implements] rte_eth_dev_data**: ``scattered_rx``.
 * **[provides]   eth_dev_ops**: ``rxq_info_get:scattered_rx``.
@@ -206,11 +206,11 @@  LRO
 
 Supports Large Receive Offload.
 
-* **[uses]       user config**: ``dev_conf.rxmode.enable_lro``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
 * **[implements] datapath**: ``LRO functionality``.
 * **[implements] rte_eth_dev_data**: ``lro``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
-* **[provides]   rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
 
 
 .. _nic_features_tso:
@@ -363,7 +363,7 @@  VLAN filter
 
 Supports filtering of a VLAN Tag identifier.
 
-* **[uses]       user config**: ``dev_conf.rxmode.hw_vlan_filter``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_FILTER``.
 * **[implements] eth_dev_ops**: ``vlan_filter_set``.
 * **[related]    API**: ``rte_eth_dev_vlan_filter()``.
 
@@ -499,7 +499,7 @@  CRC offload
 
 Supports CRC stripping by hardware.
 
-* **[uses] user config**: ``dev_conf.rxmode.hw_strip_crc``.
+* **[uses] rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_CRC_STRIP``.
 
 
 .. _nic_features_vlan_offload:
@@ -509,11 +509,10 @@  VLAN offload
 
 Supports VLAN offload to hardware.
 
-* **[uses]       user config**: ``dev_conf.rxmode.hw_vlan_strip``,
-  ``dev_conf.rxmode.hw_vlan_filter``, ``dev_conf.rxmode.hw_vlan_extend``.
+* **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_VLAN_STRIP,DEV_RX_OFFLOAD_VLAN_FILTER,DEV_RX_OFFLOAD_VLAN_EXTEND``.
 * **[implements] eth_dev_ops**: ``vlan_offload_set``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_VLAN_STRIPPED``, ``mbuf.vlan_tci``.
-* **[provides]   rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
+* **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_VLAN_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_VLAN_INSERT``.
 * **[related]    API**: ``rte_eth_dev_set_vlan_offload()``,
   ``rte_eth_dev_get_vlan_offload()``.
@@ -526,10 +525,11 @@  QinQ offload
 
 Supports QinQ (queue in queue) offload.
 
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_QINQ_STRIP``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_QINQ_PKT``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_QINQ_STRIPPED``, ``mbuf.vlan_tci``,
    ``mbuf.vlan_tci_outer``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_QINQ_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_QINQ_INSERT``.
 
 
@@ -540,13 +540,13 @@  L3 checksum offload
 
 Supports L3 checksum offload.
 
-* **[uses]     user config**: ``dev_conf.rxmode.hw_ip_checksum``.
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_IP_CKSUM_UNKNOWN`` |
   ``PKT_RX_IP_CKSUM_BAD`` | ``PKT_RX_IP_CKSUM_GOOD`` |
   ``PKT_RX_IP_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_IPV4_CKSUM``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_IPV4_CKSUM``.
 
 
@@ -557,13 +557,14 @@  L4 checksum offload
 
 Supports L4 checksum offload.
 
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_L4_NO_CKSUM`` | ``PKT_TX_TCP_CKSUM`` |
   ``PKT_TX_SCTP_CKSUM`` | ``PKT_TX_UDP_CKSUM``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_L4_CKSUM_UNKNOWN`` |
   ``PKT_RX_L4_CKSUM_BAD`` | ``PKT_RX_L4_CKSUM_GOOD`` |
   ``PKT_RX_L4_CKSUM_NONE``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_UDP_CKSUM,DEV_RX_OFFLOAD_TCP_CKSUM``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_UDP_CKSUM,DEV_TX_OFFLOAD_TCP_CKSUM,DEV_TX_OFFLOAD_SCTP_CKSUM``.
 
 
@@ -574,8 +575,9 @@  MACsec offload
 
 Supports MACsec.
 
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_MACSEC_STRIP``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_MACSEC``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_MACSEC_STRIP``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_MACSEC_INSERT``.
 
 
@@ -586,13 +588,14 @@  Inner L3 checksum
 
 Supports inner packet L3 checksum.
 
+* **[uses]     rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``.
 * **[uses]     mbuf**: ``mbuf.ol_flags:PKT_TX_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_IPV4`` | ``PKT_TX_IPV6``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IP_CKSUM``,
   ``mbuf.ol_flags:PKT_TX_OUTER_IPV4`` | ``PKT_TX_OUTER_IPV6``.
 * **[uses]     mbuf**: ``mbuf.outer_l2_len``, ``mbuf.outer_l3_len``.
 * **[provides] mbuf**: ``mbuf.ol_flags:PKT_RX_EIP_CKSUM_BAD``.
-* **[provides] rte_eth_dev_info**: ``rx_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
+* **[provides] rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM``,
   ``tx_offload_capa:DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM``.
 
 
diff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c
index 1849a3bdd..9b73d2377 100644
--- a/lib/librte_ether/rte_ethdev.c
+++ b/lib/librte_ether/rte_ethdev.c
@@ -688,12 +688,90 @@  rte_eth_speed_bitflag(uint32_t speed, int duplex)
 	}
 }
 
+/**
+ * A conversion function from rxmode bitfield API.
+ */
+static void
+rte_eth_convert_rx_offload_bitfield(const struct rte_eth_rxmode *rxmode,
+				    uint64_t *rx_offloads)
+{
+	uint64_t offloads = 0;
+
+	if (rxmode->header_split == 1)
+		offloads |= DEV_RX_OFFLOAD_HEADER_SPLIT;
+	if (rxmode->hw_ip_checksum == 1)
+		offloads |= DEV_RX_OFFLOAD_CHECKSUM;
+	if (rxmode->hw_vlan_filter == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_FILTER;
+	if (rxmode->hw_vlan_strip == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_STRIP;
+	if (rxmode->hw_vlan_extend == 1)
+		offloads |= DEV_RX_OFFLOAD_VLAN_EXTEND;
+	if (rxmode->jumbo_frame == 1)
+		offloads |= DEV_RX_OFFLOAD_JUMBO_FRAME;
+	if (rxmode->hw_strip_crc == 1)
+		offloads |= DEV_RX_OFFLOAD_CRC_STRIP;
+	if (rxmode->enable_scatter == 1)
+		offloads |= DEV_RX_OFFLOAD_SCATTER;
+	if (rxmode->enable_lro == 1)
+		offloads |= DEV_RX_OFFLOAD_TCP_LRO;
+
+	*rx_offloads = offloads;
+}
+
+/**
+ * A conversion function from rxmode offloads API.
+ */
+static void
+rte_eth_convert_rx_offloads(const uint64_t rx_offloads,
+			    struct rte_eth_rxmode *rxmode)
+{
+
+	if (rx_offloads & DEV_RX_OFFLOAD_HEADER_SPLIT)
+		rxmode->header_split = 1;
+	else
+		rxmode->header_split = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_CHECKSUM)
+		rxmode->hw_ip_checksum = 1;
+	else
+		rxmode->hw_ip_checksum = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_FILTER)
+		rxmode->hw_vlan_filter = 1;
+	else
+		rxmode->hw_vlan_filter = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_STRIP)
+		rxmode->hw_vlan_strip = 1;
+	else
+		rxmode->hw_vlan_strip = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_VLAN_EXTEND)
+		rxmode->hw_vlan_extend = 1;
+	else
+		rxmode->hw_vlan_extend = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_JUMBO_FRAME)
+		rxmode->jumbo_frame = 1;
+	else
+		rxmode->jumbo_frame = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_CRC_STRIP)
+		rxmode->hw_strip_crc = 1;
+	else
+		rxmode->hw_strip_crc = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_SCATTER)
+		rxmode->enable_scatter = 1;
+	else
+		rxmode->enable_scatter = 0;
+	if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+		rxmode->enable_lro = 1;
+	else
+		rxmode->enable_lro = 0;
+}
+
 int
 rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		      const struct rte_eth_conf *dev_conf)
 {
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
+	struct rte_eth_conf local_conf = *dev_conf;
 	int diag;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
@@ -723,8 +801,20 @@  rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		return -EBUSY;
 	}
 
+	/*
+	 * Convert between the two offloads APIs to enable PMDs to support
+	 * only one of them.
+	 */
+	if (dev_conf->rxmode.ignore_offload_bitfield == 0) {
+		rte_eth_convert_rx_offload_bitfield(
+				&dev_conf->rxmode, &local_conf.rxmode.offloads);
+	} else {
+		rte_eth_convert_rx_offloads(dev_conf->rxmode.offloads,
+					    &local_conf.rxmode);
+	}
+
 	/* Copy the dev_conf parameter into the dev structure */
-	memcpy(&dev->data->dev_conf, dev_conf, sizeof(dev->data->dev_conf));
+	memcpy(&dev->data->dev_conf, &local_conf, sizeof(dev->data->dev_conf));
 
 	/*
 	 * Check that the numbers of RX and TX queues are not greater
@@ -768,7 +858,7 @@  rte_eth_dev_configure(uint8_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 	 * If jumbo frames are enabled, check that the maximum RX packet
 	 * length is supported by the configured device.
 	 */
-	if (dev_conf->rxmode.jumbo_frame == 1) {
+	if (local_conf.rxmode.offloads & DEV_RX_OFFLOAD_JUMBO_FRAME) {
 		if (dev_conf->rxmode.max_rx_pkt_len >
 		    dev_info.max_rx_pktlen) {
 			RTE_PMD_DEBUG_TRACE("ethdev port_id=%d max_rx_pkt_len %u"
@@ -1032,6 +1122,7 @@  rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	uint32_t mbp_buf_size;
 	struct rte_eth_dev *dev;
 	struct rte_eth_dev_info dev_info;
+	struct rte_eth_rxconf local_conf;
 	void **rxq;
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);
@@ -1102,8 +1193,18 @@  rte_eth_rx_queue_setup(uint8_t port_id, uint16_t rx_queue_id,
 	if (rx_conf == NULL)
 		rx_conf = &dev_info.default_rxconf;
 
+	local_conf = *rx_conf;
+	if (dev->data->dev_conf.rxmode.ignore_offload_bitfield == 0) {
+		/**
+		 * Reflect port offloads to queue offloads in order for
+		 * offloads to not be discarded.
+		 */
+		rte_eth_convert_rx_offload_bitfield(&dev->data->dev_conf.rxmode,
+						    &local_conf.offloads);
+	}
+
 	ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
-					      socket_id, rx_conf, mp);
+					      socket_id, &local_conf, mp);
 	if (!ret) {
 		if (!dev->data->min_rx_buf_size ||
 		    dev->data->min_rx_buf_size > mbp_buf_size)
@@ -2007,7 +2108,8 @@  rte_eth_dev_vlan_filter(uint8_t port_id, uint16_t vlan_id, int on)
 
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
-	if (!(dev->data->dev_conf.rxmode.hw_vlan_filter)) {
+	if (!(dev->data->dev_conf.rxmode.offloads &
+	      DEV_RX_OFFLOAD_VLAN_FILTER)) {
 		RTE_PMD_DEBUG_TRACE("port %d: vlan-filtering disabled\n", port_id);
 		return -ENOSYS;
 	}
@@ -2083,23 +2185,41 @@  rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 
 	/*check which option changed by application*/
 	cur = !!(offload_mask & ETH_VLAN_STRIP_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_strip);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_STRIP);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_strip = (uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_STRIP;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_STRIP;
 		mask |= ETH_VLAN_STRIP_MASK;
 	}
 
 	cur = !!(offload_mask & ETH_VLAN_FILTER_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_filter);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_FILTER);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_filter = (uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_FILTER;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_FILTER;
 		mask |= ETH_VLAN_FILTER_MASK;
 	}
 
 	cur = !!(offload_mask & ETH_VLAN_EXTEND_OFFLOAD);
-	org = !!(dev->data->dev_conf.rxmode.hw_vlan_extend);
+	org = !!(dev->data->dev_conf.rxmode.offloads &
+		 DEV_RX_OFFLOAD_VLAN_EXTEND);
 	if (cur != org) {
-		dev->data->dev_conf.rxmode.hw_vlan_extend = (uint8_t)cur;
+		if (cur)
+			dev->data->dev_conf.rxmode.offloads |=
+				DEV_RX_OFFLOAD_VLAN_EXTEND;
+		else
+			dev->data->dev_conf.rxmode.offloads &=
+				~DEV_RX_OFFLOAD_VLAN_EXTEND;
 		mask |= ETH_VLAN_EXTEND_MASK;
 	}
 
@@ -2108,6 +2228,13 @@  rte_eth_dev_set_vlan_offload(uint8_t port_id, int offload_mask)
 		return ret;
 
 	RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->vlan_offload_set, -ENOTSUP);
+
+	/*
+	 * Convert to the offload bitfield API just in case the underlying PMD
+	 * still supports it.
+	 */
+	rte_eth_convert_rx_offloads(dev->data->dev_conf.rxmode.offloads,
+				    &dev->data->dev_conf.rxmode);
 	(*dev->dev_ops->vlan_offload_set)(dev, mask);
 
 	return ret;
@@ -2122,13 +2249,16 @@  rte_eth_dev_get_vlan_offload(uint8_t port_id)
 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
 	dev = &rte_eth_devices[port_id];
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_strip)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_STRIP)
 		ret |= ETH_VLAN_STRIP_OFFLOAD;
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_filter)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_FILTER)
 		ret |= ETH_VLAN_FILTER_OFFLOAD;
 
-	if (dev->data->dev_conf.rxmode.hw_vlan_extend)
+	if (dev->data->dev_conf.rxmode.offloads &
+	    DEV_RX_OFFLOAD_VLAN_EXTEND)
 		ret |= ETH_VLAN_EXTEND_OFFLOAD;
 
 	return ret;
diff --git a/lib/librte_ether/rte_ethdev.h b/lib/librte_ether/rte_ethdev.h
index 99cdd54d4..e02d57881 100644
--- a/lib/librte_ether/rte_ethdev.h
+++ b/lib/librte_ether/rte_ethdev.h
@@ -348,7 +348,18 @@  struct rte_eth_rxmode {
 	enum rte_eth_rx_mq_mode mq_mode;
 	uint32_t max_rx_pkt_len;  /**< Only used if jumbo_frame enabled. */
 	uint16_t split_hdr_size;  /**< hdr buf size (header_split enabled).*/
+	/**
+	 * Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Only offloads set on rx_offload_capa field on rte_eth_dev_info
+	 * structure are allowed to be set.
+	 */
+	uint64_t offloads;
 	__extension__
+	/**
+	 * The bitfield API below is obsolete. Applications should
+	 * enable per-port offloads using the offloads field
+	 * above.
+	 */
 	uint16_t header_split : 1, /**< Header Split enable. */
 		hw_ip_checksum   : 1, /**< IP/UDP/TCP checksum offload enable. */
 		hw_vlan_filter   : 1, /**< VLAN filter enable. */
@@ -357,7 +368,17 @@  struct rte_eth_rxmode {
 		jumbo_frame      : 1, /**< Jumbo Frame Receipt enable. */
 		hw_strip_crc     : 1, /**< Enable CRC stripping by hardware. */
 		enable_scatter   : 1, /**< Enable scatter packets rx handler */
-		enable_lro       : 1; /**< Enable LRO */
+		enable_lro       : 1, /**< Enable LRO */
+		/**
+		 * When set, the offload bitfield should be ignored.
+		 * Instead, per-port Rx offloads should be set on the
+		 * offloads field above.
+		 * Per-queue offloads should be set on the rte_eth_rxconf
+		 * structure.
+		 * This bit is temporary until the rxmode bitfield offloads
+		 * API is deprecated.
+		 */
+		ignore_offload_bitfield : 1;
 };
 
 /**
@@ -691,6 +712,12 @@  struct rte_eth_rxconf {
 	uint16_t rx_free_thresh; /**< Drives the freeing of RX descriptors. */
 	uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
 	uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
+	/**
+	 * Per-queue Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
+	 * Only offloads set on rx_queue_offload_capa or rx_offload_capa
+	 * fields on rte_eth_dev_info structure are allowed to be set.
+	 */
+	uint64_t offloads;
 };
 
 #define ETH_TXQ_FLAGS_NOMULTSEGS 0x0001 /**< nb_segs=1 for all mbufs */
@@ -907,6 +934,18 @@  struct rte_eth_conf {
 #define DEV_RX_OFFLOAD_QINQ_STRIP  0x00000020
 #define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM 0x00000040
 #define DEV_RX_OFFLOAD_MACSEC_STRIP     0x00000080
+#define DEV_RX_OFFLOAD_HEADER_SPLIT	0x00000100
+#define DEV_RX_OFFLOAD_VLAN_FILTER	0x00000200
+#define DEV_RX_OFFLOAD_VLAN_EXTEND	0x00000400
+#define DEV_RX_OFFLOAD_JUMBO_FRAME	0x00000800
+#define DEV_RX_OFFLOAD_CRC_STRIP	0x00001000
+#define DEV_RX_OFFLOAD_SCATTER		0x00002000
+#define DEV_RX_OFFLOAD_CHECKSUM (DEV_RX_OFFLOAD_IPV4_CKSUM | \
+				 DEV_RX_OFFLOAD_UDP_CKSUM | \
+				 DEV_RX_OFFLOAD_TCP_CKSUM)
+#define DEV_RX_OFFLOAD_VLAN (DEV_RX_OFFLOAD_VLAN_STRIP | \
+			     DEV_RX_OFFLOAD_VLAN_FILTER | \
+			     DEV_RX_OFFLOAD_VLAN_EXTEND)
 
 /**
  * TX offload capabilities of a device.
@@ -949,8 +988,11 @@  struct rte_eth_dev_info {
 	/** Maximum number of hash MAC addresses for MTA and UTA. */
 	uint16_t max_vfs; /**< Maximum number of VFs. */
 	uint16_t max_vmdq_pools; /**< Maximum number of VMDq pools. */
-	uint32_t rx_offload_capa; /**< Device RX offload capabilities. */
+	uint64_t rx_offload_capa;
+	/**< Device per port RX offload capabilities. */
 	uint32_t tx_offload_capa; /**< Device TX offload capabilities. */
+	uint64_t rx_queue_offload_capa;
+	/**< Device per queue RX offload capabilities. */
 	uint16_t reta_size;
 	/**< Device redirection table size, the total number of entries. */
 	uint8_t hash_key_size; /**< Hash key size in bytes */
@@ -1874,6 +1916,9 @@  uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex);
  *        each statically configurable offload hardware feature provided by
  *        Ethernet devices, such as IP checksum or VLAN tag stripping for
  *        example.
+ *        The Rx offload bitfield API is obsolete and will be deprecated.
+ *        Applications should set the ignore_offload_bitfield bit on *rxmode*
+ *        structure and use offloads field to set per-port offloads instead.
  *     - the Receive Side Scaling (RSS) configuration when using multiple RX
  *         queues per port.
  *
@@ -1927,6 +1972,8 @@  void _rte_eth_dev_reset(struct rte_eth_dev *dev);
  *   The *rx_conf* structure contains an *rx_thresh* structure with the values
  *   of the Prefetch, Host, and Write-Back threshold registers of the receive
  *   ring.
+ *   In addition it contains the hardware offloads features to activate using
+ *   the DEV_RX_OFFLOAD_* flags.
  * @param mb_pool
  *   The pointer to the memory pool from which to allocate *rte_mbuf* network
  *   memory buffers to populate each descriptor of the receive ring.