kni: fix rtnl deadlocks and race conditions

Message ID 20201126144613.4986-1-eladv6@gmail.com (mailing list archive)
State Changes Requested, archived
Delegated to: Ferruh Yigit
Headers
Series kni: fix rtnl deadlocks and race conditions

Checks

Context Check Description
ci/checkpatch warning coding style issues
ci/iol-broadcom-Functional success Functional Testing PASS
ci/iol-broadcom-Performance success Performance Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-testing success Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/Intel-compilation success Compilation OK
ci/travis-robot success Travis build: passed
ci/iol-mellanox-Performance success Performance Testing PASS

Commit Message

Elad Nachman Nov. 26, 2020, 2:46 p.m. UTC
  This patch builds on Stephen Hemminger's 64106 patch from Dec 2019
and fixes the issues reported by Ferruh and Igor:

A. The KNI sync lock is taken while rtnl is held.
If two threads call kni_net_process_request(),
the first one will take the sync lock, release the rtnl lock and then sleep.
The second thread will try to take the sync lock while still holding rtnl.
The first thread will wake and try to take rtnl, resulting in a deadlock.
The remedy is to release rtnl before taking the KNI sync lock.
Since nothing in between touches the Linux network stack,
no rtnl locking is needed there.

B. There is a race condition in __dev_close_many() processing the
close_list while the application terminates.
It appears that if two vEth devices are terminating
and one releases the rtnl lock, the other takes it
and updates the close_list while it is in an unstable state,
turning the close_list into a circular linked list,
so list_for_each_entry() loops endlessly inside
__dev_close_many().
Since the description of the original patch indicates that the
original motivation was bringing the device up,
I have changed kni_net_process_request() to keep holding the rtnl mutex
when bringing the device down, since that is the path called
from __dev_close_many() and the one corrupting the close_list.
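
In outline, the resulting locking pattern in kni_net_process_request()
looks like this (a simplified sketch of the patch below, not the full code):

	/* rtnl is held by the caller (ndo callbacks run under rtnl) */
	ASSERT_RTNL();

	if (!req_is_dev_stop) {
		dev_hold(dev);	/* keep the device alive while rtnl is dropped */
		rtnl_unlock();	/* avoids the sync_lock/rtnl deadlock (A) */
	}

	mutex_lock(&kni->sync_lock);
	/* ... post the request and wait for the user-space response ... */
	mutex_unlock(&kni->sync_lock);

	if (!req_is_dev_stop) {
		rtnl_lock();	/* restore the caller's rtnl before returning */
		dev_put(dev);
	}

For the device-stop request, rtnl stays held for the whole wait so that
__dev_close_many() never loses the lock while walking close_list (B).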



Signed-off-by: Elad Nachman <eladv6@gmail.com>
---
 kernel/linux/kni/kni_net.c | 47 +++++++++++++++++++++++++-------------
 1 file changed, 31 insertions(+), 16 deletions(-)
  

Comments

Ferruh Yigit Feb. 19, 2021, 6:41 p.m. UTC | #1
On 11/26/2020 2:46 PM, Elad Nachman wrote:
> This patch leverages on Stephen Hemminger's 64106 patch from Dec 2019,
> and fixes the issues reported by Ferruh and Igor:
> 
> A. KNI sync lock is being locked while rtnl is held.
> If two threads are calling kni_net_process_request() ,
> then the first one wil take the sync lock, release rtnl lock then sleep.
> The second thread will try to lock sync lock while holding rtnl.
> The first thread will wake, and try to lock rtnl, resulting in a deadlock.
> The remedy is to release rtnl before locking the KNI sync lock.
> Since in between nothing is accessing Linux network-wise,
> no rtnl locking is needed.

Hi Elad,

Thanks for the explanation, that clarifies the issue.
I also confirm I don't see the hang, at least as far as I have tested.

> 
> B. There is a race condition in __dev_close_many() processing the
> close_list while the application terminates.
> It looks like if two vEth devices are terminating,
> and one releases the rtnl lock, the other takes it,
> updating the close_list in an unstable state,
> causing the close_list to become a circular linked list,
> hence list_for_each_entry() will endlessly loop inside
> __dev_close_many() .
> Since the description for the original patch indicate the
> original motivation was bringing the device up,
> I have changed kni_net_process_request() to hold the rtnl mutex
> in case of bringing the device down since this is the path called
> from __dev_close_many() , causing the corruption of the close_list.
> 

I can't reproduce this case; I see the protection in the code, but it is better to
get confirmation from Igor.



Overall the issue seems to be calling a function pointed to by 'rte_kni_ops' which
requires acquiring the rtnl lock.
So I wonder if this can't be handled in the ops function by processing the
request asynchronously:
recording the request, returning from 'rte_kni_ops', and processing the request
afterwards?
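
A rough sketch of that idea on the application side (hypothetical: the
deferred-queue names and the config_network_if_async() helper are illustrative
only, not part of any existing KNI code):

#include <errno.h>
#include <stdlib.h>
#include <rte_ring.h>

struct deferred_if_req {
	uint16_t port_id;
	uint8_t if_up;
};

static struct rte_ring *req_ring;	/* created at init; illustrative */

static int
config_network_if_async(uint16_t port_id, uint8_t if_up)
{
	struct deferred_if_req *r = malloc(sizeof(*r));

	if (r == NULL)
		return -ENOMEM;
	r->port_id = port_id;
	r->if_up = if_up;
	/* Returning immediately lets rte_kni_handle_request() answer the
	 * kernel, so rtnl is released quickly; a worker thread later
	 * dequeues req_ring and applies the change.
	 */
	return rte_ring_enqueue(req_ring, r);
}

The drawback, as noted in the reply below, is that the real outcome of the
change can no longer be reported back through the request.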

I assume the application we mention is not the kni sample application.

> 
> 
> Signed-off-by: Elad Nachman <eladv6@gmail.com>
> ---
>   kernel/linux/kni/kni_net.c | 47 +++++++++++++++++++++++++-------------
>   1 file changed, 31 insertions(+), 16 deletions(-)
> 
> diff --git a/kernel/linux/kni/kni_net.c b/kernel/linux/kni/kni_net.c
> index 4b752083d..cf5b0845d 100644
> --- a/kernel/linux/kni/kni_net.c
> +++ b/kernel/linux/kni/kni_net.c
> @@ -17,6 +17,7 @@
>   #include <linux/skbuff.h>
>   #include <linux/kthread.h>
>   #include <linux/delay.h>
> +#include <linux/rtnetlink.h>
>   
>   #include <rte_kni_common.h>
>   #include <kni_fifo.h>
> @@ -102,18 +103,26 @@ get_data_kva(struct kni_dev *kni, void *pkt_kva)
>    * It can be called to process the request.
>    */
>   static int
> -kni_net_process_request(struct kni_dev *kni, struct rte_kni_request *req)
> +kni_net_process_request(struct net_device *dev, struct rte_kni_request *req)
>   {
> +	struct kni_dev *kni = netdev_priv(dev);
>   	int ret = -1;
>   	void *resp_va;
>   	uint32_t num;
>   	int ret_val;
> +	int req_is_dev_stop = 0;
>   
> -	if (!kni || !req) {
> -		pr_err("No kni instance or request\n");
> -		return -EINVAL;
> -	}
> +	if (req->req_id == RTE_KNI_REQ_CFG_NETWORK_IF &&
> +			req->if_up == 0)
> +		req_is_dev_stop = 1;
>   
> +	ASSERT_RTNL();
> +
> +	if (!req_is_dev_stop) {
> +		dev_hold(dev);
> +		rtnl_unlock();
> +	}
> +
>   	mutex_lock(&kni->sync_lock);
>   
>   	/* Construct data */
> @@ -125,8 +134,13 @@ kni_net_process_request(struct kni_dev *kni, struct rte_kni_request *req)
>   		goto fail;
>   	}
>   
> +	/* Since we need to wait and RTNL mutex is held
> +	 * drop the mutex and hold refernce to keep device
> +	 */
> +

Comment seems left here, need to go up. s/refernce/reference

>   	ret_val = wait_event_interruptible_timeout(kni->wq,
>   			kni_fifo_count(kni->resp_q), 3 * HZ);
> +
>   	if (signal_pending(current) || ret_val <= 0) {
>   		ret = -ETIME;
>   		goto fail;
> @@ -144,6 +158,13 @@ kni_net_process_request(struct kni_dev *kni, struct rte_kni_request *req)
>   
>   fail:
>   	mutex_unlock(&kni->sync_lock);
> +
> +

extra empty line

> +	if (!req_is_dev_stop) {
> +		rtnl_lock();
> +		dev_put(dev);
> +	}
> +
>   	return ret;
>   }
>   
> @@ -155,7 +176,6 @@ kni_net_open(struct net_device *dev)
>   {
>   	int ret;
>   	struct rte_kni_request req;
> -	struct kni_dev *kni = netdev_priv(dev);
>   
>   	netif_start_queue(dev);
>   	if (kni_dflt_carrier == 1)
> @@ -168,7 +188,7 @@ kni_net_open(struct net_device *dev)
>   
>   	/* Setting if_up to non-zero means up */
>   	req.if_up = 1;
> -	ret = kni_net_process_request(kni, &req);
> +	ret = kni_net_process_request(dev, &req);
>   

Although it is not too confusing, these lines and the following ones are noise for
this patch; they are just for the 'kni_net_process_request' parameter change.

What do you think about doing the 'kni_net_process_request' parameter change in a
first patch and fixing the issue in a second, so that the second patch contains
only the actual changes required for the fix?
  
Elad Nachman Feb. 21, 2021, 8:03 a.m. UTC | #2
Hi,

Regarding the asynchronous call - I thought about it, but then the
request will always return OK to user-space and I will have no way to
return failure error codes back to user-space.
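
For reference, this is how the synchronous path propagates the result today
(taken from kni_net_open() in the patch below):

	/* Setting if_up to non-zero means up */
	req.if_up = 1;
	ret = kni_net_process_request(dev, &req);

	/* req.result carries the user-space response, so the ndo callback
	 * can report the real outcome; an asynchronous scheme would lose it.
	 */
	return (ret == 0) ? req.result : ret;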

If the above explanation is acceptable, per your other comments - I
can send a new patch without the parameter change, without the empty
line, and with the comment moved to the proper place in the code.

Waiting for your decision,

Elad.

On Fri, Feb 19, 2021 at 8:42 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>
> On 11/26/2020 2:46 PM, Elad Nachman wrote:
> > This patch leverages on Stephen Hemminger's 64106 patch from Dec 2019,
> > and fixes the issues reported by Ferruh and Igor:
> >
> > A. KNI sync lock is being locked while rtnl is held.
> > If two threads are calling kni_net_process_request() ,
> > then the first one wil take the sync lock, release rtnl lock then sleep.
> > The second thread will try to lock sync lock while holding rtnl.
> > The first thread will wake, and try to lock rtnl, resulting in a deadlock.
> > The remedy is to release rtnl before locking the KNI sync lock.
> > Since in between nothing is accessing Linux network-wise,
> > no rtnl locking is needed.
>
> Hi Elad,
>
> Thanks for explanation, that clarifies the issue.
> Also I confirm I don't see the hang, at least as much as I test.
>
> >
> > B. There is a race condition in __dev_close_many() processing the
> > close_list while the application terminates.
> > It looks like if two vEth devices are terminating,
> > and one releases the rtnl lock, the other takes it,
> > updating the close_list in an unstable state,
> > causing the close_list to become a circular linked list,
> > hence list_for_each_entry() will endlessly loop inside
> > __dev_close_many() .
> > Since the description for the original patch indicate the
> > original motivation was bringing the device up,
> > I have changed kni_net_process_request() to hold the rtnl mutex
> > in case of bringing the device down since this is the path called
> > from __dev_close_many() , causing the corruption of the close_list.
> >
>
> I can't reproduce this case, I see the protection in the code, but better to get
> confirmation from Igor.
>
>
>
> Overall the issue seems calling a function pointed by 'rte_kni_ops' which
> requires to acquire the rtnl lock.
> So I wonder if this can't be handled in the ops function, by processing the
> request asynchronously,
> like recording the request, return from 'rte_kni_ops', and process the request
> afterwards?
>
> I assume the application we mention is not kni sample application.
>
> >
> >
> > Signed-off-by: Elad Nachman <eladv6@gmail.com>
> > ---
> >   kernel/linux/kni/kni_net.c | 47 +++++++++++++++++++++++++-------------
> >   1 file changed, 31 insertions(+), 16 deletions(-)
> >
> > diff --git a/kernel/linux/kni/kni_net.c b/kernel/linux/kni/kni_net.c
> > index 4b752083d..cf5b0845d 100644
> > --- a/kernel/linux/kni/kni_net.c
> > +++ b/kernel/linux/kni/kni_net.c
> > @@ -17,6 +17,7 @@
> >   #include <linux/skbuff.h>
> >   #include <linux/kthread.h>
> >   #include <linux/delay.h>
> > +#include <linux/rtnetlink.h>
> >
> >   #include <rte_kni_common.h>
> >   #include <kni_fifo.h>
> > @@ -102,18 +103,26 @@ get_data_kva(struct kni_dev *kni, void *pkt_kva)
> >    * It can be called to process the request.
> >    */
> >   static int
> > -kni_net_process_request(struct kni_dev *kni, struct rte_kni_request *req)
> > +kni_net_process_request(struct net_device *dev, struct rte_kni_request *req)
> >   {
> > +     struct kni_dev *kni = netdev_priv(dev);
> >       int ret = -1;
> >       void *resp_va;
> >       uint32_t num;
> >       int ret_val;
> > +     int req_is_dev_stop = 0;
> >
> > -     if (!kni || !req) {
> > -             pr_err("No kni instance or request\n");
> > -             return -EINVAL;
> > -     }
> > +     if (req->req_id == RTE_KNI_REQ_CFG_NETWORK_IF &&
> > +                     req->if_up == 0)
> > +             req_is_dev_stop = 1;
> >
> > +     ASSERT_RTNL();
> > +
> > +     if (!req_is_dev_stop) {
> > +             dev_hold(dev);
> > +             rtnl_unlock();
> > +     }
> > +
> >       mutex_lock(&kni->sync_lock);
> >
> >       /* Construct data */
> > @@ -125,8 +134,13 @@ kni_net_process_request(struct kni_dev *kni, struct rte_kni_request *req)
> >               goto fail;
> >       }
> >
> > +     /* Since we need to wait and RTNL mutex is held
> > +      * drop the mutex and hold refernce to keep device
> > +      */
> > +
>
> Comment seems left here, need to go up. s/refernce/reference
>
> >       ret_val = wait_event_interruptible_timeout(kni->wq,
> >                       kni_fifo_count(kni->resp_q), 3 * HZ);
> > +
> >       if (signal_pending(current) || ret_val <= 0) {
> >               ret = -ETIME;
> >               goto fail;
> > @@ -144,6 +158,13 @@ kni_net_process_request(struct kni_dev *kni, struct rte_kni_request *req)
> >
> >   fail:
> >       mutex_unlock(&kni->sync_lock);
> > +
> > +
>
> extra empty line
>
> > +     if (!req_is_dev_stop) {
> > +             rtnl_lock();
> > +             dev_put(dev);
> > +     }
> > +
> >       return ret;
> >   }
> >
> > @@ -155,7 +176,6 @@ kni_net_open(struct net_device *dev)
> >   {
> >       int ret;
> >       struct rte_kni_request req;
> > -     struct kni_dev *kni = netdev_priv(dev);
> >
> >       netif_start_queue(dev);
> >       if (kni_dflt_carrier == 1)
> > @@ -168,7 +188,7 @@ kni_net_open(struct net_device *dev)
> >
> >       /* Setting if_up to non-zero means up */
> >       req.if_up = 1;
> > -     ret = kni_net_process_request(kni, &req);
> > +     ret = kni_net_process_request(dev, &req);
> >
>
> Althoug it is not soo confusing, these lines and following ones are noise for
> this patch, they are just for 'kni_net_process_request' paramter change.
>
> What do you think do the 'kni_net_process_request' parameter change in first
> patch, and fix the issue in second, this way second patch can contain only the
> actual changes required for fix.
  
Ferruh Yigit Feb. 22, 2021, 3:58 p.m. UTC | #3
On 2/21/2021 8:03 AM, Elad Nachman wrote:
> Hi,
> 
> Regarding the asynchronous call - thought about it, but then the
> request will always return OK to user-space and I will have no way to
> return failure error codes back to user-space.
> 

Right, let's continue with this patch. Can you please send a new version with the
updates mentioned below?

> If the above explanation is acceptable, per your other comments - I
> can send a new patch without the parameter change , without the empty
> line, and with the comment moved to the proper place in the code.
> 
> Waiting for your decision,
> 
> Elad.
> 
> On Fri, Feb 19, 2021 at 8:42 PM Ferruh Yigit <ferruh.yigit@intel.com> wrote:
>>
>> On 11/26/2020 2:46 PM, Elad Nachman wrote:
>>> This patch leverages on Stephen Hemminger's 64106 patch from Dec 2019,
>>> and fixes the issues reported by Ferruh and Igor:
>>>
>>> A. KNI sync lock is being locked while rtnl is held.
>>> If two threads are calling kni_net_process_request() ,
>>> then the first one wil take the sync lock, release rtnl lock then sleep.
>>> The second thread will try to lock sync lock while holding rtnl.
>>> The first thread will wake, and try to lock rtnl, resulting in a deadlock.
>>> The remedy is to release rtnl before locking the KNI sync lock.
>>> Since in between nothing is accessing Linux network-wise,
>>> no rtnl locking is needed.
>>
>> Hi Elad,
>>
>> Thanks for explanation, that clarifies the issue.
>> Also I confirm I don't see the hang, at least as much as I test.
>>
>>>
>>> B. There is a race condition in __dev_close_many() processing the
>>> close_list while the application terminates.
>>> It looks like if two vEth devices are terminating,
>>> and one releases the rtnl lock, the other takes it,
>>> updating the close_list in an unstable state,
>>> causing the close_list to become a circular linked list,
>>> hence list_for_each_entry() will endlessly loop inside
>>> __dev_close_many() .
>>> Since the description for the original patch indicate the
>>> original motivation was bringing the device up,
>>> I have changed kni_net_process_request() to hold the rtnl mutex
>>> in case of bringing the device down since this is the path called
>>> from __dev_close_many() , causing the corruption of the close_list.
>>>
>>
>> I can't reproduce this case, I see the protection in the code, but better to get
>> confirmation from Igor.
>>
>>
>>
>> Overall the issue seems calling a function pointed by 'rte_kni_ops' which
>> requires to acquire the rtnl lock.
>> So I wonder if this can't be handled in the ops function, by processing the
>> request asynchronously,
>> like recording the request, return from 'rte_kni_ops', and process the request
>> afterwards?
>>
>> I assume the application we mention is not kni sample application.
>>
>>>
>>>
>>> Signed-off-by: Elad Nachman <eladv6@gmail.com>
>>> ---
>>>    kernel/linux/kni/kni_net.c | 47 +++++++++++++++++++++++++-------------
>>>    1 file changed, 31 insertions(+), 16 deletions(-)
>>>
>>> diff --git a/kernel/linux/kni/kni_net.c b/kernel/linux/kni/kni_net.c
>>> index 4b752083d..cf5b0845d 100644
>>> --- a/kernel/linux/kni/kni_net.c
>>> +++ b/kernel/linux/kni/kni_net.c
>>> @@ -17,6 +17,7 @@
>>>    #include <linux/skbuff.h>
>>>    #include <linux/kthread.h>
>>>    #include <linux/delay.h>
>>> +#include <linux/rtnetlink.h>
>>>
>>>    #include <rte_kni_common.h>
>>>    #include <kni_fifo.h>
>>> @@ -102,18 +103,26 @@ get_data_kva(struct kni_dev *kni, void *pkt_kva)
>>>     * It can be called to process the request.
>>>     */
>>>    static int
>>> -kni_net_process_request(struct kni_dev *kni, struct rte_kni_request *req)
>>> +kni_net_process_request(struct net_device *dev, struct rte_kni_request *req)
>>>    {
>>> +     struct kni_dev *kni = netdev_priv(dev);
>>>        int ret = -1;
>>>        void *resp_va;
>>>        uint32_t num;
>>>        int ret_val;
>>> +     int req_is_dev_stop = 0;
>>>
>>> -     if (!kni || !req) {
>>> -             pr_err("No kni instance or request\n");
>>> -             return -EINVAL;
>>> -     }
>>> +     if (req->req_id == RTE_KNI_REQ_CFG_NETWORK_IF &&
>>> +                     req->if_up == 0)
>>> +             req_is_dev_stop = 1;
>>>
>>> +     ASSERT_RTNL();
>>> +
>>> +     if (!req_is_dev_stop) {
>>> +             dev_hold(dev);
>>> +             rtnl_unlock();
>>> +     }
>>> +
>>>        mutex_lock(&kni->sync_lock);
>>>
>>>        /* Construct data */
>>> @@ -125,8 +134,13 @@ kni_net_process_request(struct kni_dev *kni, struct rte_kni_request *req)
>>>                goto fail;
>>>        }
>>>
>>> +     /* Since we need to wait and RTNL mutex is held
>>> +      * drop the mutex and hold refernce to keep device
>>> +      */
>>> +
>>
>> Comment seems left here, need to go up. s/refernce/reference
>>
>>>        ret_val = wait_event_interruptible_timeout(kni->wq,
>>>                        kni_fifo_count(kni->resp_q), 3 * HZ);
>>> +
>>>        if (signal_pending(current) || ret_val <= 0) {
>>>                ret = -ETIME;
>>>                goto fail;
>>> @@ -144,6 +158,13 @@ kni_net_process_request(struct kni_dev *kni, struct rte_kni_request *req)
>>>
>>>    fail:
>>>        mutex_unlock(&kni->sync_lock);
>>> +
>>> +
>>
>> extra empty line
>>
>>> +     if (!req_is_dev_stop) {
>>> +             rtnl_lock();
>>> +             dev_put(dev);
>>> +     }
>>> +
>>>        return ret;
>>>    }
>>>
>>> @@ -155,7 +176,6 @@ kni_net_open(struct net_device *dev)
>>>    {
>>>        int ret;
>>>        struct rte_kni_request req;
>>> -     struct kni_dev *kni = netdev_priv(dev);
>>>
>>>        netif_start_queue(dev);
>>>        if (kni_dflt_carrier == 1)
>>> @@ -168,7 +188,7 @@ kni_net_open(struct net_device *dev)
>>>
>>>        /* Setting if_up to non-zero means up */
>>>        req.if_up = 1;
>>> -     ret = kni_net_process_request(kni, &req);
>>> +     ret = kni_net_process_request(dev, &req);
>>>
>>
>> Althoug it is not soo confusing, these lines and following ones are noise for
>> this patch, they are just for 'kni_net_process_request' paramter change.
>>
>> What do you think do the 'kni_net_process_request' parameter change in first
>> patch, and fix the issue in second, this way second patch can contain only the
>> actual changes required for fix.
  
Ferruh Yigit Feb. 23, 2021, 12:53 p.m. UTC | #4
On 2/23/2021 12:05 PM, Elad Nachman wrote:
> This version 2 of the patch leverages on Stephen Hemminger's 64106
> patch from Dec 2019,
> and fixes the issues reported by Ferruh and Igor:
> 
> A. KNI sync lock is being locked while rtnl is held.
> If two threads are calling kni_net_process_request() ,
> then the first one will take the sync lock, release rtnl lock then sleep.
> The second thread will try to lock sync lock while holding rtnl.
> The first thread will wake, and try to lock rtnl, resulting in a deadlock.
> The remedy is to release rtnl before locking the KNI sync lock.
> Since in between nothing is accessing Linux network-wise,
> no rtnl locking is needed.
> 
> B. There is a race condition in __dev_close_many() processing the
> close_list while the application terminates.
> It looks like if two vEth devices are terminating,
> and one releases the rtnl lock, the other takes it,
> updating the close_list in an unstable state,
> causing the close_list to become a circular linked list,
> hence list_for_each_entry() will endlessly loop inside
> __dev_close_many() .
> Since the description for the original patch indicate the
> original motivation was bringing the device up,
> I have changed kni_net_process_request() to hold the rtnl mutex
> in case of bringing the device down since this is the path called
> from __dev_close_many() , causing the corruption of the close_list.
> 
> Depends-on: patch-64106 ("kni: fix kernel deadlock when using mlx devices")
 >

Can you please make a new version of the patches on top of the latest git head,
not on the existing patches; we don't support incremental updates.

> 
> Signed-off-by: Elad Nachman <eladv6@gmail.com>
> ---
> V2:
> * rebuild the patch as increment from patch 64106
> * fix comment and blank lines
> 
> ---
>   kernel/linux/kni/kni_net.c | 25 +++++++++++++++++--------
>   1 file changed, 17 insertions(+), 8 deletions(-)
> 
> diff --git a/kernel/linux/kni/kni_net.c b/kernel/linux/kni/kni_net.c
> index f0b6e9a8d..b41360220 100644
> --- a/kernel/linux/kni/kni_net.c
> +++ b/kernel/linux/kni/kni_net.c
> @@ -110,9 +110,22 @@ kni_net_process_request(struct net_device *dev, struct rte_kni_request *req)
>   	void *resp_va;
>   	uint32_t num;
>   	int ret_val;
> +	int req_is_dev_stop = 0;
> +

One more thing, can you please add a comment to the code explaining why the "stop"
request is special? You have it in the commit log, but a short description in the
code can also be helpful.
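
For illustration, a comment along these lines (wording is only a suggestion,
not the final patch) would capture the reasoning from the commit log:

	/* The "stop" request (RTE_KNI_REQ_CFG_NETWORK_IF with if_up == 0)
	 * is issued from __dev_close_many(), which walks close_list under
	 * rtnl. Dropping rtnl here would let another terminating vEth
	 * device modify close_list mid-walk and corrupt it, so for this
	 * request rtnl is kept held for the whole wait.
	 */
	if (req->req_id == RTE_KNI_REQ_CFG_NETWORK_IF && req->if_up == 0)
		req_is_dev_stop = 1;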
  

Patch

diff --git a/kernel/linux/kni/kni_net.c b/kernel/linux/kni/kni_net.c
index 4b752083d..cf5b0845d 100644
--- a/kernel/linux/kni/kni_net.c
+++ b/kernel/linux/kni/kni_net.c
@@ -17,6 +17,7 @@ 
 #include <linux/skbuff.h>
 #include <linux/kthread.h>
 #include <linux/delay.h>
+#include <linux/rtnetlink.h>
 
 #include <rte_kni_common.h>
 #include <kni_fifo.h>
@@ -102,18 +103,26 @@  get_data_kva(struct kni_dev *kni, void *pkt_kva)
  * It can be called to process the request.
  */
 static int
-kni_net_process_request(struct kni_dev *kni, struct rte_kni_request *req)
+kni_net_process_request(struct net_device *dev, struct rte_kni_request *req)
 {
+	struct kni_dev *kni = netdev_priv(dev);
 	int ret = -1;
 	void *resp_va;
 	uint32_t num;
 	int ret_val;
+	int req_is_dev_stop = 0;
 
-	if (!kni || !req) {
-		pr_err("No kni instance or request\n");
-		return -EINVAL;
-	}
+	if (req->req_id == RTE_KNI_REQ_CFG_NETWORK_IF &&
+			req->if_up == 0)
+		req_is_dev_stop = 1;
 
+	ASSERT_RTNL();
+
+	if (!req_is_dev_stop) {
+		dev_hold(dev);
+		rtnl_unlock();
+	}
+
 	mutex_lock(&kni->sync_lock);
 
 	/* Construct data */
@@ -125,8 +134,13 @@  kni_net_process_request(struct kni_dev *kni, struct rte_kni_request *req)
 		goto fail;
 	}
 
+	/* Since we need to wait and RTNL mutex is held
+	 * drop the mutex and hold refernce to keep device
+	 */
+
 	ret_val = wait_event_interruptible_timeout(kni->wq,
 			kni_fifo_count(kni->resp_q), 3 * HZ);
+
 	if (signal_pending(current) || ret_val <= 0) {
 		ret = -ETIME;
 		goto fail;
@@ -144,6 +158,13 @@  kni_net_process_request(struct kni_dev *kni, struct rte_kni_request *req)
 
 fail:
 	mutex_unlock(&kni->sync_lock);
+
+
+	if (!req_is_dev_stop) {
+		rtnl_lock();
+		dev_put(dev);
+	}
+
 	return ret;
 }
 
@@ -155,7 +176,6 @@  kni_net_open(struct net_device *dev)
 {
 	int ret;
 	struct rte_kni_request req;
-	struct kni_dev *kni = netdev_priv(dev);
 
 	netif_start_queue(dev);
 	if (kni_dflt_carrier == 1)
@@ -168,7 +188,7 @@  kni_net_open(struct net_device *dev)
 
 	/* Setting if_up to non-zero means up */
 	req.if_up = 1;
-	ret = kni_net_process_request(kni, &req);
+	ret = kni_net_process_request(dev, &req);
 
 	return (ret == 0) ? req.result : ret;
 }
@@ -178,7 +198,6 @@  kni_net_release(struct net_device *dev)
 {
 	int ret;
 	struct rte_kni_request req;
-	struct kni_dev *kni = netdev_priv(dev);
 
 	netif_stop_queue(dev); /* can't transmit any more */
 	netif_carrier_off(dev);
@@ -188,7 +207,7 @@  kni_net_release(struct net_device *dev)
 
 	/* Setting if_up to 0 means down */
 	req.if_up = 0;
-	ret = kni_net_process_request(kni, &req);
+	ret = kni_net_process_request(dev, &req);
 
 	return (ret == 0) ? req.result : ret;
 }
@@ -643,14 +662,13 @@  kni_net_change_mtu(struct net_device *dev, int new_mtu)
 {
 	int ret;
 	struct rte_kni_request req;
-	struct kni_dev *kni = netdev_priv(dev);
 
 	pr_debug("kni_net_change_mtu new mtu %d to be set\n", new_mtu);
 
 	memset(&req, 0, sizeof(req));
 	req.req_id = RTE_KNI_REQ_CHANGE_MTU;
 	req.new_mtu = new_mtu;
-	ret = kni_net_process_request(kni, &req);
+	ret = kni_net_process_request(dev, &req);
 	if (ret == 0 && req.result == 0)
 		dev->mtu = new_mtu;
 
@@ -661,7 +679,6 @@  static void
 kni_net_change_rx_flags(struct net_device *netdev, int flags)
 {
 	struct rte_kni_request req;
-	struct kni_dev *kni = netdev_priv(netdev);
 
 	memset(&req, 0, sizeof(req));
 
@@ -683,7 +700,7 @@  kni_net_change_rx_flags(struct net_device *netdev, int flags)
 			req.promiscusity = 0;
 	}
 
-	kni_net_process_request(kni, &req);
+	kni_net_process_request(netdev, &req);
 }
 
 /*
@@ -742,7 +759,6 @@  kni_net_set_mac(struct net_device *netdev, void *p)
 {
 	int ret;
 	struct rte_kni_request req;
-	struct kni_dev *kni;
 	struct sockaddr *addr = p;
 
 	memset(&req, 0, sizeof(req));
@@ -754,8 +770,7 @@  kni_net_set_mac(struct net_device *netdev, void *p)
 	memcpy(req.mac_addr, addr->sa_data, netdev->addr_len);
 	memcpy(netdev->dev_addr, addr->sa_data, netdev->addr_len);
 
-	kni = netdev_priv(netdev);
-	ret = kni_net_process_request(kni, &req);
+	ret = kni_net_process_request(netdev, &req);
 
 	return (ret == 0 ? req.result : ret);
 }