net/iavf: fix multi-process shared data

Message ID 20210928033753.1955674-1-dapengx.yu@intel.com (mailing list archive)
State Superseded, archived
Delegated to: Qi Zhang
Series: net/iavf: fix multi-process shared data

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/github-robot: build success github build: passed
ci/iol-broadcom-Functional success Functional Testing PASS
ci/iol-broadcom-Performance success Performance Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-aarch64-compile-testing success Testing PASS
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-x86_64-unit-testing success Testing PASS
ci/iol-x86_64-compile-testing success Testing PASS

Commit Message

Yu, DapengX Sept. 28, 2021, 3:37 a.m. UTC
  From: Dapeng Yu <dapengx.yu@intel.com>

When the iavf_adapter instance is not completely initialized in the
primary process and the secondary process accesses its "rte_eth_dev"
member, the secondary process crashes.

This patch replaces adapter->eth_dev with rte_eth_devices[port_id] in
the data paths where the rte_eth_dev instance is accessed.

Fixes: f978c1c9b3b5 ("net/iavf: add RSS hash parsing in AVX path")
Fixes: 9c9aa0040344 ("net/iavf: add offload path for Rx AVX512 flex descriptor")
Fixes: 63660ea3ee0b ("net/iavf: add RSS hash parsing in SSE path")
Cc: stable@dpdk.org

Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
---
 drivers/net/iavf/iavf_rxtx_vec_avx2.c   | 5 +++--
 drivers/net/iavf/iavf_rxtx_vec_avx512.c | 5 +++--
 drivers/net/iavf/iavf_rxtx_vec_sse.c    | 3 ++-
 3 files changed, 8 insertions(+), 5 deletions(-)
  

Comments

Qi Zhang Sept. 28, 2021, 11:12 a.m. UTC | #1
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of dapengx.yu@intel.com
> Sent: Tuesday, September 28, 2021 11:38 AM
> To: Richardson, Bruce <bruce.richardson@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Xing,
> Beilei <beilei.xing@intel.com>
> Cc: dev@dpdk.org; Yu, DapengX <dapengx.yu@intel.com>; stable@dpdk.org
> Subject: [dpdk-dev] [PATCH] net/iavf: fix multi-process shared data
> 
> From: Dapeng Yu <dapengx.yu@intel.com>
> 
> When the iavf_adapter instance is not initialized completedly in the primary
> process, the secondary process accesses its "rte_eth_dev"
> member, it causes secondary process crash.
> 
> This patch replaces adapter->eth_dev with rte_eth_devices[port_id] in the data
> paths where rte_eth_dev instance is accessed.
> 
> Fixes: f978c1c9b3b5 ("net/iavf: add RSS hash parsing in AVX path")
> Fixes: 9c9aa0040344 ("net/iavf: add offload path for Rx AVX512 flex
> descriptor")
> Fixes: 63660ea3ee0b ("net/iavf: add RSS hash parsing in SSE path")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>

Acked-by: Qi Zhang <qi.z.zhang@intel.com>

Applied to dpdk-next-net-intel.

Thanks
Qi
  
Ferruh Yigit Sept. 29, 2021, 4:28 p.m. UTC | #2
On 9/28/2021 4:37 AM, dapengx.yu@intel.com wrote:
> From: Dapeng Yu <dapengx.yu@intel.com>
> 
> When the iavf_adapter instance is not initialized completedly in the
> primary process, the secondary process accesses its "rte_eth_dev"
> member, it causes secondary process crash.
> 
> This patch replaces adapter->eth_dev with rte_eth_devices[port_id] in
> the data paths where rte_eth_dev instance is accessed.
> 
> Fixes: f978c1c9b3b5 ("net/iavf: add RSS hash parsing in AVX path")
> Fixes: 9c9aa0040344 ("net/iavf: add offload path for Rx AVX512 flex descriptor")
> Fixes: 63660ea3ee0b ("net/iavf: add RSS hash parsing in SSE path")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
> ---
>  drivers/net/iavf/iavf_rxtx_vec_avx2.c   | 5 +++--
>  drivers/net/iavf/iavf_rxtx_vec_avx512.c | 5 +++--
>  drivers/net/iavf/iavf_rxtx_vec_sse.c    | 3 ++-
>  3 files changed, 8 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
> index 475070e036..59b086ade5 100644
> --- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
> +++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
> @@ -525,6 +525,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
>  #define IAVF_DESCS_PER_LOOP_AVX 8
>  
>  	const uint32_t *type_table = rxq->vsi->adapter->ptype_tbl;
> +	struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
>  

It is not a good idea to access a global variable directly from the driver.

The problem definition is correct: eth_dev is unique per process, so it can't be
saved to a shared struct.

But here I assume the real intention is to be able to access PMD-specific data from
the queue struct. For that, what about storing 'rte_eth_dev_data' in the
'iavf_rx_queue'? This should solve the problem without accessing the global variable.
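
For illustration, a minimal sketch of that suggestion (the 'eth_data' field name is
hypothetical, not something in the current driver):

	/* iavf_rxtx.h, hypothetical: keep a pointer to the per-port
	 * rte_eth_dev_data (allocated in shared memory) in the queue,
	 * instead of reaching it through the per-process eth_dev pointer
	 * stored in iavf_adapter.
	 */
	struct iavf_rx_queue {
		/* ... existing fields ... */
		struct rte_eth_dev_data *eth_data;
	};

	/* The Rx data path could then check offloads without touching
	 * either adapter->eth_dev or the global rte_eth_devices[] array:
	 */
	if (rxq->eth_data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH) {
		/* ... RSS hash parsing ... */
	}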
  
Yu, DapengX Sept. 30, 2021, 9:11 a.m. UTC | #3
> -----Original Message-----
> From: Yigit, Ferruh <ferruh.yigit@intel.com>
> Sent: Thursday, September 30, 2021 12:28 AM
> To: Yu, DapengX <dapengx.yu@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
> Xing, Beilei <beilei.xing@intel.com>
> Cc: dev@dpdk.org; stable@dpdk.org
> Subject: Re: [dpdk-stable] [PATCH] net/iavf: fix multi-process shared data
> 
> On 9/28/2021 4:37 AM, dapengx.yu@intel.com wrote:
> > From: Dapeng Yu <dapengx.yu@intel.com>
> >
> > When the iavf_adapter instance is not initialized completedly in the
> > primary process, the secondary process accesses its "rte_eth_dev"
> > member, it causes secondary process crash.
> >
> > This patch replaces adapter->eth_dev with rte_eth_devices[port_id] in
> > the data paths where rte_eth_dev instance is accessed.
> >
> > Fixes: f978c1c9b3b5 ("net/iavf: add RSS hash parsing in AVX path")
> > Fixes: 9c9aa0040344 ("net/iavf: add offload path for Rx AVX512 flex
> > descriptor")
> > Fixes: 63660ea3ee0b ("net/iavf: add RSS hash parsing in SSE path")
> > Cc: stable@dpdk.org
> >
> > Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
> > ---
> >  drivers/net/iavf/iavf_rxtx_vec_avx2.c   | 5 +++--
> >  drivers/net/iavf/iavf_rxtx_vec_avx512.c | 5 +++--
> >  drivers/net/iavf/iavf_rxtx_vec_sse.c    | 3 ++-
> >  3 files changed, 8 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
> > b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
> > index 475070e036..59b086ade5 100644
> > --- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
> > +++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
> > @@ -525,6 +525,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct
> > iavf_rx_queue *rxq,  #define IAVF_DESCS_PER_LOOP_AVX 8
> >
> >  	const uint32_t *type_table = rxq->vsi->adapter->ptype_tbl;
> > +	struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
> >
> 
> It is not good idea to access global variable directly from the driver.
In "lib/ethdev/rte_ethdev.h", the global variable rte_eth_devices is used.
So I think use it in a PMD should be also acceptable since it is just read.
rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
		 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
{
	struct rte_eth_dev *dev = &rte_eth_devices[port_id];

> 
> The problem definition is correct, eth_dev is unique per process, so it can't
> be saved to a shared struct.
> 
> But here I assume real intention is to be able to access PMD specific data
> from queue struct, for this what about storing 'rte_eth_dev_data' in the
> 'iavf_rx_queue', this should sove the problem without accessing the global
> variable.

The intention is to read the offload properties of the device configuration, so it is
not queue-specific or PMD-specific. It is already in a public data structure.
If it is stored in 'iavf_rx_queue' again, the data will be duplicated.
  
Ferruh Yigit Sept. 30, 2021, 10:57 a.m. UTC | #4
On 9/30/2021 10:11 AM, Yu, DapengX wrote:
> 
> 
>> -----Original Message-----
>> From: Yigit, Ferruh <ferruh.yigit@intel.com>
>> Sent: Thursday, September 30, 2021 12:28 AM
>> To: Yu, DapengX <dapengx.yu@intel.com>; Richardson, Bruce
>> <bruce.richardson@intel.com>; Ananyev, Konstantin
>> <konstantin.ananyev@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
>> Xing, Beilei <beilei.xing@intel.com>
>> Cc: dev@dpdk.org; stable@dpdk.org
>> Subject: Re: [dpdk-stable] [PATCH] net/iavf: fix multi-process shared data
>>
>> On 9/28/2021 4:37 AM, dapengx.yu@intel.com wrote:
>>> From: Dapeng Yu <dapengx.yu@intel.com>
>>>
>>> When the iavf_adapter instance is not initialized completedly in the
>>> primary process, the secondary process accesses its "rte_eth_dev"
>>> member, it causes secondary process crash.
>>>
>>> This patch replaces adapter->eth_dev with rte_eth_devices[port_id] in
>>> the data paths where rte_eth_dev instance is accessed.
>>>
>>> Fixes: f978c1c9b3b5 ("net/iavf: add RSS hash parsing in AVX path")
>>> Fixes: 9c9aa0040344 ("net/iavf: add offload path for Rx AVX512 flex
>>> descriptor")
>>> Fixes: 63660ea3ee0b ("net/iavf: add RSS hash parsing in SSE path")
>>> Cc: stable@dpdk.org
>>>
>>> Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
>>> ---
>>>  drivers/net/iavf/iavf_rxtx_vec_avx2.c   | 5 +++--
>>>  drivers/net/iavf/iavf_rxtx_vec_avx512.c | 5 +++--
>>>  drivers/net/iavf/iavf_rxtx_vec_sse.c    | 3 ++-
>>>  3 files changed, 8 insertions(+), 5 deletions(-)
>>>
>>> diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
>>> b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
>>> index 475070e036..59b086ade5 100644
>>> --- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
>>> +++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
>>> @@ -525,6 +525,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct
>>> iavf_rx_queue *rxq,  #define IAVF_DESCS_PER_LOOP_AVX 8
>>>
>>>  	const uint32_t *type_table = rxq->vsi->adapter->ptype_tbl;
>>> +	struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
>>>
>>
>> It is not good idea to access global variable directly from the driver.
> In "lib/ethdev/rte_ethdev.h", the global variable rte_eth_devices is used.
> So I think use it in a PMD should be also acceptable since it is just read.

It is expected for ethdev APIs to access the array. The application knows only the
port_id, the ethdev layer converts this port_id to a device struct by accessing the
global array, and a driver should operate only with its own device.

> rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
> 		 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts)
> {
> 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> 
>>
>> The problem definition is correct, eth_dev is unique per process, so it can't
>> be saved to a shared struct.
>>
>> But here I assume real intention is to be able to access PMD specific data
>> from queue struct, for this what about storing 'rte_eth_dev_data' in the
>> 'iavf_rx_queue', this should sove the problem without accessing the global
>> variable.
> 
> The intention is to read the offload properties of device configuration, so it not 
> queue specific or PMD specific. It is already in public data structure.
> If it is stored in 'iavf_rx_queue' again, the data will be duplicate.
> 

I can see the intention. This is more of a design concern: the fact that you can
access that data structure doesn't mean you should.

You would just store a pointer to the 'data'; is that duplication?
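
(For example, under the hypothetical 'eth_data' field sketched above, queue setup
would only store a pointer, not a copy of the configuration:)

	/* in iavf_dev_rx_queue_setup(), hypothetical: */
	rxq->eth_data = dev->data;	/* pointer into shared memory, nothing copied */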
  
Qi Zhang Oct. 7, 2021, 4:50 a.m. UTC | #5
> -----Original Message-----
> From: dev <dev-bounces@dpdk.org> On Behalf Of Ferruh Yigit
> Sent: Thursday, September 30, 2021 6:57 PM
> To: Yu, DapengX <dapengx.yu@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>; Xing,
> Beilei <beilei.xing@intel.com>
> Cc: dev@dpdk.org; stable@dpdk.org
> Subject: Re: [dpdk-dev] [dpdk-stable] [PATCH] net/iavf: fix multi-process shared
> data
> 
> On 9/30/2021 10:11 AM, Yu, DapengX wrote:
> >
> >
> >> -----Original Message-----
> >> From: Yigit, Ferruh <ferruh.yigit@intel.com>
> >> Sent: Thursday, September 30, 2021 12:28 AM
> >> To: Yu, DapengX <dapengx.yu@intel.com>; Richardson, Bruce
> >> <bruce.richardson@intel.com>; Ananyev, Konstantin
> >> <konstantin.ananyev@intel.com>; Wu, Jingjing <jingjing.wu@intel.com>;
> >> Xing, Beilei <beilei.xing@intel.com>
> >> Cc: dev@dpdk.org; stable@dpdk.org
> >> Subject: Re: [dpdk-stable] [PATCH] net/iavf: fix multi-process shared
> >> data
> >>
> >> On 9/28/2021 4:37 AM, dapengx.yu@intel.com wrote:
> >>> From: Dapeng Yu <dapengx.yu@intel.com>
> >>>
> >>> When the iavf_adapter instance is not initialized completedly in the
> >>> primary process, the secondary process accesses its "rte_eth_dev"
> >>> member, it causes secondary process crash.
> >>>
> >>> This patch replaces adapter->eth_dev with rte_eth_devices[port_id]
> >>> in the data paths where rte_eth_dev instance is accessed.
> >>>
> >>> Fixes: f978c1c9b3b5 ("net/iavf: add RSS hash parsing in AVX path")
> >>> Fixes: 9c9aa0040344 ("net/iavf: add offload path for Rx AVX512 flex
> >>> descriptor")
> >>> Fixes: 63660ea3ee0b ("net/iavf: add RSS hash parsing in SSE path")
> >>> Cc: stable@dpdk.org
> >>>
> >>> Signed-off-by: Dapeng Yu <dapengx.yu@intel.com>
> >>> ---
> >>>  drivers/net/iavf/iavf_rxtx_vec_avx2.c   | 5 +++--
> >>>  drivers/net/iavf/iavf_rxtx_vec_avx512.c | 5 +++--
> >>>  drivers/net/iavf/iavf_rxtx_vec_sse.c    | 3 ++-
> >>>  3 files changed, 8 insertions(+), 5 deletions(-)
> >>>
> >>> diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
> >>> b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
> >>> index 475070e036..59b086ade5 100644
> >>> --- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
> >>> +++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
> >>> @@ -525,6 +525,7 @@ _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct
> >>> iavf_rx_queue *rxq,  #define IAVF_DESCS_PER_LOOP_AVX 8
> >>>
> >>>  	const uint32_t *type_table = rxq->vsi->adapter->ptype_tbl;
> >>> +	struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
> >>>
> >>
> >> It is not good idea to access global variable directly from the driver.
> > In "lib/ethdev/rte_ethdev.h", the global variable rte_eth_devices is used.
> > So I think use it in a PMD should be also acceptable since it is just read.
> 
> It is expected for ehtdev APIs to access the array. Application knows only
> port_id, ethdev layer converts this port_id to device struct by accessing the
> global array, and drivers should be able to operate only with its device.
> 
> > rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id,
> > 		 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts) {
> > 	struct rte_eth_dev *dev = &rte_eth_devices[port_id];
> >
> >>
> >> The problem definition is correct, eth_dev is unique per process, so
> >> it can't be saved to a shared struct.
> >>
> >> But here I assume real intention is to be able to access PMD specific
> >> data from queue struct, for this what about storing
> >> 'rte_eth_dev_data' in the 'iavf_rx_queue', this should sove the
> >> problem without accessing the global variable.
> >
> > The intention is to read the offload properties of device
> > configuration, so it not queue specific or PMD specific. It is already in public
> data structure.
> > If it is stored in 'iavf_rx_queue' again, the data will be duplicate.
> >
> 
> I can see the intention. This is more design concern, you can access to that
> data structure doesn't mean you should.
> 
> You will just store the pointer of the 'data', is it duplication?

+1, accessing rte_eth_devices directly is not good practice in a PMD.

I think to fix the known issue, we can just replace eth_dev with eth_dev_data in iavf_adapter (this is actually what the PF's fix does).

And to avoid a long pointer chain like "rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads" in the data path,
we should introduce a per-queue cache, but this could be in a separate patch.
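
(A rough sketch of both ideas; the 'eth_dev_data' and per-queue 'offloads' fields are
assumptions for illustration, not existing code:)

	/* iavf.h: store the dev data (in shared memory) instead of eth_dev */
	struct iavf_adapter {
		/* struct rte_eth_dev *eth_dev;  -- replaced */
		struct rte_eth_dev_data *eth_dev_data;
		/* ... */
	};

	/* iavf_rxtx.h: cache the Rx offload flags at queue setup time */
	struct iavf_rx_queue {
		/* ... */
		uint64_t offloads;	/* copy of dev_conf.rxmode.offloads */
	};

	/* the data path then needs only a single dereference: */
	if (rxq->offloads & DEV_RX_OFFLOAD_RSS_HASH) {
		/* ... */
	}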
  

Patch

diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx2.c b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
index 475070e036..59b086ade5 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx2.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx2.c
@@ -525,6 +525,7 @@  _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 #define IAVF_DESCS_PER_LOOP_AVX 8
 
 	const uint32_t *type_table = rxq->vsi->adapter->ptype_tbl;
+	struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
 
 	const __m256i mbuf_init = _mm256_set_epi64x(0, 0,
 			0, rxq->mbuf_initializer);
@@ -903,7 +904,7 @@  _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 		 * needs to load 2nd 16B of each desc for RSS hash parsing,
 		 * will cause performance drop to get into this context.
 		 */
-		if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
+		if (dev->data->dev_conf.rxmode.offloads &
 				DEV_RX_OFFLOAD_RSS_HASH ||
 				rxq->rx_flags & IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
 			/* load bottom half of every 32B desc */
@@ -956,7 +957,7 @@  _iavf_recv_raw_pkts_vec_avx2_flex_rxd(struct iavf_rx_queue *rxq,
 					(_mm256_castsi128_si256(raw_desc_bh0),
 					raw_desc_bh1, 1);
 
-			if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
+			if (dev->data->dev_conf.rxmode.offloads &
 					DEV_RX_OFFLOAD_RSS_HASH) {
 				/**
 				 * to shift the 32b RSS hash value to the
diff --git a/drivers/net/iavf/iavf_rxtx_vec_avx512.c b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
index 571161c0cd..ed64a232e7 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_avx512.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_avx512.c
@@ -713,6 +713,7 @@  _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 #ifdef IAVF_RX_PTYPE_OFFLOAD
 	const uint32_t *type_table = rxq->vsi->adapter->ptype_tbl;
 #endif
+	struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
 
 	const __m256i mbuf_init = _mm256_set_epi64x(0, 0, 0,
 						    rxq->mbuf_initializer);
@@ -1137,7 +1138,7 @@  _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 			 * needs to load 2nd 16B of each desc for RSS hash parsing,
 			 * will cause performance drop to get into this context.
 			 */
-			if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
+			if (dev->data->dev_conf.rxmode.offloads &
 			    DEV_RX_OFFLOAD_RSS_HASH ||
 			    rxq->rx_flags & IAVF_RX_FLAGS_VLAN_TAG_LOC_L2TAG2_2) {
 				/* load bottom half of every 32B desc */
@@ -1190,7 +1191,7 @@  _iavf_recv_raw_pkts_vec_avx512_flex_rxd(struct iavf_rx_queue *rxq,
 						(_mm256_castsi128_si256(raw_desc_bh0),
 						 raw_desc_bh1, 1);
 
-				if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
+				if (dev->data->dev_conf.rxmode.offloads &
 						DEV_RX_OFFLOAD_RSS_HASH) {
 					/**
 					 * to shift the 32b RSS hash value to the
diff --git a/drivers/net/iavf/iavf_rxtx_vec_sse.c b/drivers/net/iavf/iavf_rxtx_vec_sse.c
index ee1e905525..1231d0f63d 100644
--- a/drivers/net/iavf/iavf_rxtx_vec_sse.c
+++ b/drivers/net/iavf/iavf_rxtx_vec_sse.c
@@ -645,6 +645,7 @@  _recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq,
 	int pos;
 	uint64_t var;
 	const uint32_t *ptype_tbl = rxq->vsi->adapter->ptype_tbl;
+	struct rte_eth_dev *dev = &rte_eth_devices[rxq->port_id];
 	__m128i crc_adjust = _mm_set_epi16
 				(0, 0, 0,       /* ignore non-length fields */
 				 -rxq->crc_len, /* sub crc on data_len */
@@ -817,7 +818,7 @@  _recv_raw_pkts_vec_flex_rxd(struct iavf_rx_queue *rxq,
 		 * needs to load 2nd 16B of each desc for RSS hash parsing,
 		 * will cause performance drop to get into this context.
 		 */
-		if (rxq->vsi->adapter->eth_dev->data->dev_conf.rxmode.offloads &
+		if (dev->data->dev_conf.rxmode.offloads &
 				DEV_RX_OFFLOAD_RSS_HASH) {
 			/* load bottom half of every 32B desc */
 			const __m128i raw_desc_bh3 =