From patchwork Thu Nov 26 14:46:13 2020
X-Patchwork-Submitter: Elad Nachman
X-Patchwork-Id: 84595
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Elad Nachman
To: Ferruh Yigit
Cc: dev@dpdk.org, Elad Nachman
Date: Thu, 26 Nov 2020 16:46:13 +0200
Message-Id: <20201126144613.4986-1-eladv6@gmail.com>
Subject: [dpdk-dev] [PATCH] kni: fix rtnl deadlocks and race conditions

This patch builds on Stephen Hemminger's patch 64106 from Dec 2019
and fixes the issues reported by Ferruh and Igor:

A. The KNI sync lock is taken while rtnl is held.
If two threads call kni_net_process_request(), the first takes the
sync lock, releases rtnl, then sleeps. The second thread tries to
take the sync lock while still holding rtnl. When the first thread
wakes and tries to re-take rtnl, the two threads deadlock.
The remedy is to release rtnl before taking the KNI sync lock:
nothing in that window accesses the Linux network stack, so rtnl
does not need to be held there.
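The interleaving is easier to see in a minimal userspace analogy.
This is a sketch only, not kernel code: pthread mutexes stand in for
rtnl and the KNI sync lock, and sleep() makes the deadlocking
interleaving overwhelmingly likely. Built with `gcc -pthread`, the
program hangs:

/* deadlock_demo.c: rtnl_mtx stands in for rtnl,
 * sync_mtx for kni->sync_lock.
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t rtnl_mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t sync_mtx = PTHREAD_MUTEX_INITIALIZER;

static void *request(void *name)
{
	pthread_mutex_lock(&rtnl_mtx);   /* callers hold rtnl on entry */
	pthread_mutex_lock(&sync_mtx);   /* take the sync lock... */
	pthread_mutex_unlock(&rtnl_mtx); /* ...then drop rtnl and wait */
	sleep(1);                        /* "wait for userspace response" */
	pthread_mutex_lock(&rtnl_mtx);   /* blocks forever: the other
					  * thread holds rtnl and is itself
					  * blocked on sync_mtx above */
	pthread_mutex_unlock(&rtnl_mtx);
	pthread_mutex_unlock(&sync_mtx);
	printf("%s done\n", (char *)name); /* never reached */
	return NULL;
}

int main(void)
{
	pthread_t t1, t2;

	pthread_create(&t1, NULL, request, "t1");
	pthread_create(&t2, NULL, request, "t2");
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);
	return 0;
}

Taking sync_mtx only after rtnl_mtx has been released, which is what
this patch does, removes the circular wait.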
B. There is a race condition in __dev_close_many() while it processes
close_list as the application terminates: if two vEth devices are
terminating and one releases rtnl, the other takes it and updates
close_list while it is in an unstable state. close_list can then
become a circular linked list, so list_for_each_entry() loops
endlessly inside __dev_close_many().

Since the description of the original patch indicates its motivation
was bringing the device up, kni_net_process_request() now keeps the
rtnl mutex held when bringing the device down, because that is the
path called from __dev_close_many() and the one that corrupted
close_list.
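Condensed into code, the rule the patch applies is the sketch below.
The helper name is hypothetical; the diff below open-codes the same
test into a local req_is_dev_stop flag:

/* Sketch only. An interface-down request reaches KNI via
 * __dev_close_many() -> ndo_stop -> kni_net_release(), i.e. while
 * close_list is being walked under rtnl, so rtnl must stay held for
 * it. Every other request may drop rtnl while waiting for userspace.
 */
static int
kni_req_is_dev_stop(const struct rte_kni_request *req)
{
	return req->req_id == RTE_KNI_REQ_CFG_NETWORK_IF &&
	       req->if_up == 0;
}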
Signed-off-by: Elad Nachman <eladv6@gmail.com>
---
 kernel/linux/kni/kni_net.c | 47 +++++++++++++++++++++++++++++++++----------------
 1 file changed, 31 insertions(+), 16 deletions(-)

diff --git a/kernel/linux/kni/kni_net.c b/kernel/linux/kni/kni_net.c
index 4b752083d..cf5b0845d 100644
--- a/kernel/linux/kni/kni_net.c
+++ b/kernel/linux/kni/kni_net.c
@@ -17,6 +17,7 @@
 #include <linux/skbuff.h>
 #include <linux/kthread.h>
 #include <linux/delay.h>
+#include <linux/rtnetlink.h>
 
 #include <rte_kni_common.h>
 #include <kni_fifo.h>
@@ -102,18 +103,26 @@ get_data_kva(struct kni_dev *kni, void *pkt_kva)
  * It can be called to process the request.
  */
 static int
-kni_net_process_request(struct kni_dev *kni, struct rte_kni_request *req)
+kni_net_process_request(struct net_device *dev, struct rte_kni_request *req)
 {
+	struct kni_dev *kni = netdev_priv(dev);
 	int ret = -1;
 	void *resp_va;
 	uint32_t num;
 	int ret_val;
+	int req_is_dev_stop = 0;
 
-	if (!kni || !req) {
-		pr_err("No kni instance or request\n");
-		return -EINVAL;
-	}
+	if (req->req_id == RTE_KNI_REQ_CFG_NETWORK_IF &&
+	    req->if_up == 0)
+		req_is_dev_stop = 1;
 
+	ASSERT_RTNL();
+
+	if (!req_is_dev_stop) {
+		dev_hold(dev);
+		rtnl_unlock();
+	}
+
 	mutex_lock(&kni->sync_lock);
 
 	/* Construct data */
@@ -125,8 +134,13 @@ kni_net_process_request(struct kni_dev *kni, struct rte_kni_request *req)
 		goto fail;
 	}
 
+	/* Since we need to wait and the RTNL mutex is held,
+	 * drop the mutex and hold a reference to keep the device
+	 */
+
 	ret_val = wait_event_interruptible_timeout(kni->wq,
 			kni_fifo_count(kni->resp_q), 3 * HZ);
+
 	if (signal_pending(current) || ret_val <= 0) {
 		ret = -ETIME;
 		goto fail;
@@ -144,6 +158,13 @@ kni_net_process_request(struct kni_dev *kni, struct rte_kni_request *req)
 
 fail:
 	mutex_unlock(&kni->sync_lock);
+
+
+	if (!req_is_dev_stop) {
+		rtnl_lock();
+		dev_put(dev);
+	}
+
 	return ret;
 }
 
@@ -155,7 +176,6 @@ kni_net_open(struct net_device *dev)
 {
 	int ret;
 	struct rte_kni_request req;
-	struct kni_dev *kni = netdev_priv(dev);
 
 	netif_start_queue(dev);
 	if (kni_dflt_carrier == 1)
@@ -168,7 +188,7 @@ kni_net_open(struct net_device *dev)
 
 	/* Setting if_up to non-zero means up */
 	req.if_up = 1;
-	ret = kni_net_process_request(kni, &req);
+	ret = kni_net_process_request(dev, &req);
 
 	return (ret == 0) ? req.result : ret;
 }
@@ -178,7 +198,6 @@ kni_net_release(struct net_device *dev)
 {
 	int ret;
 	struct rte_kni_request req;
-	struct kni_dev *kni = netdev_priv(dev);
 
 	netif_stop_queue(dev); /* can't transmit any more */
 	netif_carrier_off(dev);
@@ -188,7 +207,7 @@ kni_net_release(struct net_device *dev)
 
 	/* Setting if_up to 0 means down */
 	req.if_up = 0;
-	ret = kni_net_process_request(kni, &req);
+	ret = kni_net_process_request(dev, &req);
 
 	return (ret == 0) ? req.result : ret;
 }
@@ -643,14 +662,13 @@ kni_net_change_mtu(struct net_device *dev, int new_mtu)
 {
 	int ret;
 	struct rte_kni_request req;
-	struct kni_dev *kni = netdev_priv(dev);
 
 	pr_debug("kni_net_change_mtu new mtu %d to be set\n", new_mtu);
 
 	memset(&req, 0, sizeof(req));
 	req.req_id = RTE_KNI_REQ_CHANGE_MTU;
 	req.new_mtu = new_mtu;
-	ret = kni_net_process_request(kni, &req);
+	ret = kni_net_process_request(dev, &req);
 	if (ret == 0 && req.result == 0)
 		dev->mtu = new_mtu;
 
@@ -661,7 +679,6 @@ static void
 kni_net_change_rx_flags(struct net_device *netdev, int flags)
 {
 	struct rte_kni_request req;
-	struct kni_dev *kni = netdev_priv(netdev);
 
 	memset(&req, 0, sizeof(req));
 
@@ -683,7 +700,7 @@ kni_net_change_rx_flags(struct net_device *netdev, int flags)
 			req.promiscusity = 0;
 	}
 
-	kni_net_process_request(kni, &req);
+	kni_net_process_request(netdev, &req);
 }
 
 /*
@@ -742,7 +759,6 @@ kni_net_set_mac(struct net_device *netdev, void *p)
 {
 	int ret;
 	struct rte_kni_request req;
-	struct kni_dev *kni;
 	struct sockaddr *addr = p;
 
 	memset(&req, 0, sizeof(req));
@@ -754,8 +770,7 @@ kni_net_set_mac(struct net_device *netdev, void *p)
 	memcpy(req.mac_addr, addr->sa_data, netdev->addr_len);
 	memcpy(netdev->dev_addr, addr->sa_data, netdev->addr_len);
 
-	kni = netdev_priv(netdev);
-	ret = kni_net_process_request(kni, &req);
+	ret = kni_net_process_request(netdev, &req);
 
 	return (ret == 0 ? req.result : ret);
 }
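For readers skimming the hunks, the locking behavior of
kni_net_process_request() after this patch condenses to the sketch
below. It is not a drop-in replacement: kni_request_roundtrip() is a
placeholder for the unchanged FIFO-put / wait_event_interruptible_timeout() /
FIFO-get sequence, and error handling is elided.

static int
kni_net_process_request(struct net_device *dev, struct rte_kni_request *req)
{
	struct kni_dev *kni = netdev_priv(dev);
	int req_is_dev_stop = (req->req_id == RTE_KNI_REQ_CFG_NETWORK_IF &&
			       req->if_up == 0);
	int ret;

	ASSERT_RTNL();			/* every caller arrives holding rtnl */

	if (!req_is_dev_stop) {
		dev_hold(dev);		/* keep the device across the window */
		rtnl_unlock();		/* fix A: never sleep on sync_lock
					 * while rtnl is held */
	}

	mutex_lock(&kni->sync_lock);
	ret = kni_request_roundtrip(kni, req);	/* placeholder, see above */
	mutex_unlock(&kni->sync_lock);

	if (!req_is_dev_stop) {		/* fix B: dev-stop kept rtnl held */
		rtnl_lock();		/* restore the caller's locking state */
		dev_put(dev);
	}

	return ret;
}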