From patchwork Tue Mar 31 17:13:57 2020
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 67509
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger, stable@dpdk.org
Date: Tue, 31 Mar 2020 10:13:57 -0700
Message-Id: <20200331171404.23596-2-stephen@networkplumber.org>
In-Reply-To: <20200331171404.23596-1-stephen@networkplumber.org>
Subject: [dpdk-dev] [PATCH v4 1/8] net/netvsc: propagate descriptor limits from VF to netvsc

If the application cares about descriptor limits, the netvsc device
values should reflect those of the VF as well.
Fixes: dc7680e8597c ("net/netvsc: support integrated VF")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger
---
 drivers/net/netvsc/hn_vf.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/drivers/net/netvsc/hn_vf.c b/drivers/net/netvsc/hn_vf.c
index 7a3734cadfa4..1261b2e2ef85 100644
--- a/drivers/net/netvsc/hn_vf.c
+++ b/drivers/net/netvsc/hn_vf.c
@@ -167,6 +167,17 @@ hn_nvs_handle_vfassoc(struct rte_eth_dev *dev,
 		hn_vf_remove(hv);
 }
 
+static void
+hn_vf_merge_desc_lim(struct rte_eth_desc_lim *lim,
+		     const struct rte_eth_desc_lim *vf_lim)
+{
+	lim->nb_max = RTE_MIN(vf_lim->nb_max, lim->nb_max);
+	lim->nb_min = RTE_MAX(vf_lim->nb_min, lim->nb_min);
+	lim->nb_align = RTE_MAX(vf_lim->nb_align, lim->nb_align);
+	lim->nb_seg_max = RTE_MIN(vf_lim->nb_seg_max, lim->nb_seg_max);
+	lim->nb_mtu_seg_max = RTE_MIN(vf_lim->nb_mtu_seg_max, lim->nb_mtu_seg_max);
+}
+
 /*
  * Merge the info from the VF and synthetic path.
  * use the default config of the VF
@@ -196,11 +207,13 @@ static int hn_vf_info_merge(struct rte_eth_dev *vf_dev,
 				    info->max_tx_queues);
 	info->tx_offload_capa &= vf_info.tx_offload_capa;
 	info->tx_queue_offload_capa &= vf_info.tx_queue_offload_capa;
+	hn_vf_merge_desc_lim(&info->tx_desc_lim, &vf_info.tx_desc_lim);
 
 	info->min_rx_bufsize = RTE_MAX(vf_info.min_rx_bufsize,
 				       info->min_rx_bufsize);
 	info->max_rx_pktlen  = RTE_MAX(vf_info.max_rx_pktlen,
 				       info->max_rx_pktlen);
+	hn_vf_merge_desc_lim(&info->rx_desc_lim, &vf_info.rx_desc_lim);
 
 	return 0;
 }

From patchwork Tue Mar 31 17:13:58 2020
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 67510
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger, stable@dpdk.org
Date: Tue, 31 Mar 2020 10:13:58 -0700
Message-Id: <20200331171404.23596-3-stephen@networkplumber.org>
In-Reply-To: <20200331171404.23596-1-stephen@networkplumber.org>
Subject: [dpdk-dev] [PATCH v4 2/8] net/netvsc: handle receive packets during multi-channel setup

A packet can arrive while the configuration process is setting up
multiple queue mode, which would cause configure to fail. Fix this by
simply ignoring received data packets while waiting for control
commands. Use the receive ring lock to avoid possible races with
oddly behaved applications that do rx_burst and control operations
concurrently.
Fixes: 4e9c73e96e83 ("net/netvsc: add Hyper-V network device")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger
---
 drivers/net/netvsc/hn_nvs.c | 41 +++++++++++++++++++++++++++++++++++--
 1 file changed, 39 insertions(+), 2 deletions(-)

diff --git a/drivers/net/netvsc/hn_nvs.c b/drivers/net/netvsc/hn_nvs.c
index 6b518685ab6f..477202b2a0b7 100644
--- a/drivers/net/netvsc/hn_nvs.c
+++ b/drivers/net/netvsc/hn_nvs.c
@@ -54,7 +54,7 @@ static int hn_nvs_req_send(struct hn_data *hv,
 }
 
 static int
-hn_nvs_execute(struct hn_data *hv,
+__hn_nvs_execute(struct hn_data *hv,
 	       void *req, uint32_t reqlen,
 	       void *resp, uint32_t resplen,
 	       uint32_t type)
@@ -62,6 +62,7 @@ hn_nvs_execute(struct hn_data *hv,
 	struct vmbus_channel *chan = hn_primary_chan(hv);
 	char buffer[NVS_RESPSIZE_MAX];
 	const struct hn_nvs_hdr *hdr;
+	uint64_t xactid;
 	uint32_t len;
 	int ret;
 
@@ -77,7 +78,7 @@ hn_nvs_execute(struct hn_data *hv,
 
  retry:
 	len = sizeof(buffer);
-	ret = rte_vmbus_chan_recv(chan, buffer, &len, NULL);
+	ret = rte_vmbus_chan_recv(chan, buffer, &len, &xactid);
 	if (ret == -EAGAIN) {
 		rte_delay_us(HN_CHAN_INTERVAL_US);
 		goto retry;
@@ -88,7 +89,20 @@ hn_nvs_execute(struct hn_data *hv,
 		return ret;
 	}
 
+	if (len < sizeof(*hdr)) {
+		PMD_DRV_LOG(ERR, "response missing NVS header");
+		return -EINVAL;
+	}
+
 	hdr = (struct hn_nvs_hdr *)buffer;
+
+	/* Silently drop received packets while waiting for response */
+	if (hdr->type == NVS_TYPE_RNDIS) {
+		hn_nvs_ack_rxbuf(chan, xactid);
+		--hv->rxbuf_outstanding;
+		goto retry;
+	}
+
 	if (hdr->type != type) {
 		PMD_DRV_LOG(ERR, "unexpected NVS resp %#x, expect %#x",
 			    hdr->type, type);
@@ -108,6 +122,29 @@ hn_nvs_execute(struct hn_data *hv,
 	return 0;
 }
 
+
+/*
+ * Execute one control command and get the response.
+ * Only one command can be active on a channel at a time.
+ * Unlike BSD, DPDK does not have an interrupt context,
+ * so polling is required to wait for the response.
+ */
+static int
+hn_nvs_execute(struct hn_data *hv,
+	       void *req, uint32_t reqlen,
+	       void *resp, uint32_t resplen,
+	       uint32_t type)
+{
+	struct hn_rx_queue *rxq = hv->primary;
+	int ret;
+
+	rte_spinlock_lock(&rxq->ring_lock);
+	ret = __hn_nvs_execute(hv, req, reqlen, resp, resplen, type);
+	rte_spinlock_unlock(&rxq->ring_lock);
+
+	return ret;
+}
+
 static int
 hn_nvs_doinit(struct hn_data *hv, uint32_t nvs_ver)
 {

From patchwork Tue Mar 31 17:13:59 2020
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 67511
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger, stable@dpdk.org
Date: Tue, 31 Mar 2020 10:13:59 -0700
Message-Id: <20200331171404.23596-4-stephen@networkplumber.org>
In-Reply-To: <20200331171404.23596-1-stephen@networkplumber.org>
Subject: [dpdk-dev] [PATCH v4 3/8] net/netvsc: split send buffers from transmit descriptors

The VMBus has a reserved transmit area (per device) and transmit
descriptors (per queue).
The previous code always had a 1:1 mapping between send buffers and
descriptors. This can lead to one queue starving another and also to
buffer bloat. Change to work more like FreeBSD, where there is a pool
of transmit descriptors per queue. If a send buffer is not available
then no aggregation happens, but the queue can still drain.

Fixes: 4e9c73e96e83 ("net/netvsc: add Hyper-V network device")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger
---
 drivers/net/netvsc/hn_ethdev.c |   9 +-
 drivers/net/netvsc/hn_rxtx.c   | 271 ++++++++++++++++++++-------------
 drivers/net/netvsc/hn_var.h    |  10 +-
 3 files changed, 180 insertions(+), 110 deletions(-)

diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 564620748daf..ac6610838008 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -257,6 +257,9 @@ static int hn_dev_info_get(struct rte_eth_dev *dev,
 	dev_info->max_rx_queues = hv->max_queues;
 	dev_info->max_tx_queues = hv->max_queues;
 
+	dev_info->tx_desc_lim.nb_min = 1;
+	dev_info->tx_desc_lim.nb_max = 4096;
+
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
 
@@ -982,7 +985,7 @@ eth_hn_dev_init(struct rte_eth_dev *eth_dev)
 	if (err)
 		goto failed;
 
-	err = hn_tx_pool_init(eth_dev);
+	err = hn_chim_init(eth_dev);
 	if (err)
 		goto failed;
 
@@ -1018,7 +1021,7 @@ eth_hn_dev_init(struct rte_eth_dev *eth_dev)
 failed:
 	PMD_INIT_LOG(NOTICE, "device init failed");
 
-	hn_tx_pool_uninit(eth_dev);
+	hn_chim_uninit(eth_dev);
 	hn_detach(hv);
 	return err;
 }
@@ -1042,7 +1045,7 @@ eth_hn_dev_uninit(struct rte_eth_dev *eth_dev)
 	eth_dev->rx_pkt_burst = NULL;
 
 	hn_detach(hv);
-	hn_tx_pool_uninit(eth_dev);
+	hn_chim_uninit(eth_dev);
 	rte_vmbus_chan_close(hv->primary->chan);
 	rte_free(hv->primary);
 	ret = rte_eth_dev_owner_delete(hv->owner.id);
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index 7212780c156e..32c03e3da0c7 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -18,6 +18,7 @@
#include #include #include +#include #include #include #include @@ -83,7 +84,7 @@ struct hn_txdesc { struct rte_mbuf *m; uint16_t queue_id; - uint16_t chim_index; + uint32_t chim_index; uint32_t chim_size; uint32_t data_size; uint32_t packets; @@ -98,11 +99,13 @@ struct hn_txdesc { RNDIS_PKTINFO_SIZE(NDIS_LSO2_INFO_SIZE) + \ RNDIS_PKTINFO_SIZE(NDIS_TXCSUM_INFO_SIZE)) +#define HN_RNDIS_PKT_ALIGNED RTE_ALIGN(HN_RNDIS_PKT_LEN, RTE_CACHE_LINE_SIZE) + /* Minimum space required for a packet */ #define HN_PKTSIZE_MIN(align) \ RTE_ALIGN(RTE_ETHER_MIN_LEN + HN_RNDIS_PKT_LEN, align) -#define DEFAULT_TX_FREE_THRESH 32U +#define DEFAULT_TX_FREE_THRESH 32 static void hn_update_packet_stats(struct hn_stats *stats, const struct rte_mbuf *m) @@ -150,63 +153,77 @@ hn_rndis_pktmsg_offset(uint32_t ofs) static void hn_txd_init(struct rte_mempool *mp __rte_unused, void *opaque, void *obj, unsigned int idx) { + struct hn_tx_queue *txq = opaque; struct hn_txdesc *txd = obj; - struct rte_eth_dev *dev = opaque; - struct rndis_packet_msg *pkt; memset(txd, 0, sizeof(*txd)); - txd->chim_index = idx; - - pkt = rte_malloc_socket("RNDIS_TX", HN_RNDIS_PKT_LEN, - rte_align32pow2(HN_RNDIS_PKT_LEN), - dev->device->numa_node); - if (!pkt) - rte_exit(EXIT_FAILURE, "can not allocate RNDIS header"); - txd->rndis_pkt = pkt; + txd->queue_id = txq->queue_id; + txd->chim_index = NVS_CHIM_IDX_INVALID; + txd->rndis_pkt = (struct rndis_packet_msg *)(char *)txq->tx_rndis + + idx * HN_RNDIS_PKT_ALIGNED; } -/* - * Unlike Linux and FreeBSD, this driver uses a mempool - * to limit outstanding transmits and reserve buffers - */ int -hn_tx_pool_init(struct rte_eth_dev *dev) +hn_chim_init(struct rte_eth_dev *dev) { struct hn_data *hv = dev->data->dev_private; - char name[RTE_MEMPOOL_NAMESIZE]; - struct rte_mempool *mp; + uint32_t i, chim_bmp_size; + + rte_spinlock_init(&hv->chim_lock); + chim_bmp_size = rte_bitmap_get_memory_footprint(hv->chim_cnt); + hv->chim_bmem = rte_zmalloc("hn_chim_bitmap", chim_bmp_size, + 
RTE_CACHE_LINE_SIZE); + if (hv->chim_bmem == NULL) { + PMD_INIT_LOG(ERR, "failed to allocate bitmap size %u", + chim_bmp_size); + return -1; + } - snprintf(name, sizeof(name), - "hn_txd_%u", dev->data->port_id); - - PMD_INIT_LOG(DEBUG, "create a TX send pool %s n=%u size=%zu socket=%d", - name, hv->chim_cnt, sizeof(struct hn_txdesc), - dev->device->numa_node); - - mp = rte_mempool_create(name, hv->chim_cnt, sizeof(struct hn_txdesc), - HN_TXD_CACHE_SIZE, 0, - NULL, NULL, - hn_txd_init, dev, - dev->device->numa_node, 0); - if (!mp) { - PMD_DRV_LOG(ERR, - "mempool %s create failed: %d", name, rte_errno); - return -rte_errno; + hv->chim_bmap = rte_bitmap_init(hv->chim_cnt, + hv->chim_bmem, chim_bmp_size); + if (hv->chim_bmap == NULL) { + PMD_INIT_LOG(ERR, "failed to init chim bitmap"); + return -1; } - hv->tx_pool = mp; + for (i = 0; i < hv->chim_cnt; i++) + rte_bitmap_set(hv->chim_bmap, i); + return 0; } void -hn_tx_pool_uninit(struct rte_eth_dev *dev) +hn_chim_uninit(struct rte_eth_dev *dev) { struct hn_data *hv = dev->data->dev_private; - if (hv->tx_pool) { - rte_mempool_free(hv->tx_pool); - hv->tx_pool = NULL; + rte_bitmap_free(hv->chim_bmap); + rte_free(hv->chim_bmem); + hv->chim_bmem = NULL; +} + +static uint32_t hn_chim_alloc(struct hn_data *hv) +{ + uint32_t index = NVS_CHIM_IDX_INVALID; + uint64_t slab; + + rte_spinlock_lock(&hv->chim_lock); + if (rte_bitmap_scan(hv->chim_bmap, &index, &slab)) + rte_bitmap_clear(hv->chim_bmap, index); + rte_spinlock_unlock(&hv->chim_lock); + + return index; +} + +static void hn_chim_free(struct hn_data *hv, uint32_t chim_idx) +{ + if (chim_idx >= hv->chim_cnt) { + PMD_DRV_LOG(ERR, "Invalid chimney index %u", chim_idx); + } else { + rte_spinlock_lock(&hv->chim_lock); + rte_bitmap_set(hv->chim_bmap, chim_idx); + rte_spinlock_unlock(&hv->chim_lock); } } @@ -220,15 +237,16 @@ static void hn_reset_txagg(struct hn_tx_queue *txq) int hn_dev_tx_queue_setup(struct rte_eth_dev *dev, - uint16_t queue_idx, uint16_t nb_desc __rte_unused, + 
uint16_t queue_idx, uint16_t nb_desc, unsigned int socket_id, const struct rte_eth_txconf *tx_conf) { struct hn_data *hv = dev->data->dev_private; struct hn_tx_queue *txq; + char name[RTE_MEMPOOL_NAMESIZE]; uint32_t tx_free_thresh; - int err; + int err = -ENOMEM; PMD_INIT_FUNC_TRACE(); @@ -244,14 +262,42 @@ hn_dev_tx_queue_setup(struct rte_eth_dev *dev, tx_free_thresh = tx_conf->tx_free_thresh; if (tx_free_thresh == 0) - tx_free_thresh = RTE_MIN(hv->chim_cnt / 4, + tx_free_thresh = RTE_MIN(nb_desc / 4, DEFAULT_TX_FREE_THRESH); - if (tx_free_thresh >= hv->chim_cnt - 3) - tx_free_thresh = hv->chim_cnt - 3; + if (tx_free_thresh + 3 >= nb_desc) { + PMD_INIT_LOG(ERR, + "tx_free_thresh must be less than the number of TX entries minus 3(%u)." + " (tx_free_thresh=%u port=%u queue=%u)\n", + nb_desc - 3, + tx_free_thresh, dev->data->port_id, queue_idx); + return -EINVAL; + } txq->free_thresh = tx_free_thresh; + snprintf(name, sizeof(name), + "hn_txd_%u_%u", dev->data->port_id, queue_idx); + + PMD_INIT_LOG(DEBUG, "TX descriptor pool %s n=%u size=%zu", + name, nb_desc, sizeof(struct hn_txdesc)); + + txq->tx_rndis = rte_calloc("hn_txq_rndis", nb_desc, + HN_RNDIS_PKT_ALIGNED, RTE_CACHE_LINE_SIZE); + if (txq->tx_rndis == NULL) + goto error; + + txq->txdesc_pool = rte_mempool_create(name, nb_desc, + sizeof(struct hn_txdesc), + 0, 0, NULL, NULL, + hn_txd_init, txq, + dev->device->numa_node, 0); + if (txq->txdesc_pool == NULL) { + PMD_DRV_LOG(ERR, + "mempool %s create failed: %d", name, rte_errno); + goto error; + } + txq->agg_szmax = RTE_MIN(hv->chim_szmax, hv->rndis_agg_size); txq->agg_pktmax = hv->rndis_agg_pkts; txq->agg_align = hv->rndis_agg_align; @@ -260,31 +306,57 @@ hn_dev_tx_queue_setup(struct rte_eth_dev *dev, err = hn_vf_tx_queue_setup(dev, queue_idx, nb_desc, socket_id, tx_conf); - if (err) { - rte_free(txq); - return err; + if (err == 0) { + dev->data->tx_queues[queue_idx] = txq; + return 0; } - dev->data->tx_queues[queue_idx] = txq; - return 0; +error: + if 
(txq->txdesc_pool) + rte_mempool_free(txq->txdesc_pool); + rte_free(txq->tx_rndis); + rte_free(txq); + return err; +} + + +static struct hn_txdesc *hn_txd_get(struct hn_tx_queue *txq) +{ + struct hn_txdesc *txd; + + if (rte_mempool_get(txq->txdesc_pool, (void **)&txd)) { + ++txq->stats.ring_full; + PMD_TX_LOG(DEBUG, "tx pool exhausted!"); + return NULL; + } + + txd->m = NULL; + txd->packets = 0; + txd->data_size = 0; + txd->chim_size = 0; + + return txd; +} + +static void hn_txd_put(struct hn_tx_queue *txq, struct hn_txdesc *txd) +{ + rte_mempool_put(txq->txdesc_pool, txd); } void hn_dev_tx_queue_release(void *arg) { struct hn_tx_queue *txq = arg; - struct hn_txdesc *txd; PMD_INIT_FUNC_TRACE(); if (!txq) return; - /* If any pending data is still present just drop it */ - txd = txq->agg_txd; - if (txd) - rte_mempool_put(txq->hv->tx_pool, txd); + if (txq->txdesc_pool) + rte_mempool_free(txq->txdesc_pool); + rte_free(txq->tx_rndis); rte_free(txq); } @@ -292,6 +364,7 @@ static void hn_nvs_send_completed(struct rte_eth_dev *dev, uint16_t queue_id, unsigned long xactid, const struct hn_nvs_rndis_ack *ack) { + struct hn_data *hv = dev->data->dev_private; struct hn_txdesc *txd = (struct hn_txdesc *)xactid; struct hn_tx_queue *txq; @@ -312,9 +385,11 @@ hn_nvs_send_completed(struct rte_eth_dev *dev, uint16_t queue_id, ++txq->stats.errors; } - rte_pktmbuf_free(txd->m); + if (txd->chim_index != NVS_CHIM_IDX_INVALID) + hn_chim_free(hv, txd->chim_index); - rte_mempool_put(txq->hv->tx_pool, txd); + rte_pktmbuf_free(txd->m); + hn_txd_put(txq, txd); } /* Handle transmit completion events */ @@ -1036,28 +1111,15 @@ static int hn_flush_txagg(struct hn_tx_queue *txq, bool *need_sig) return ret; } -static struct hn_txdesc *hn_new_txd(struct hn_data *hv, - struct hn_tx_queue *txq) -{ - struct hn_txdesc *txd; - - if (rte_mempool_get(hv->tx_pool, (void **)&txd)) { - ++txq->stats.ring_full; - PMD_TX_LOG(DEBUG, "tx pool exhausted!"); - return NULL; - } - - txd->m = NULL; - txd->queue_id = 
txq->queue_id; - txd->packets = 0; - txd->data_size = 0; - txd->chim_size = 0; - - return txd; -} - +/* + * Try and find a place in a send chimney buffer to put + * the small packet. If space is available, this routine + * returns a pointer of where to place the data. + * If no space, caller should try direct transmit. + */ static void * -hn_try_txagg(struct hn_data *hv, struct hn_tx_queue *txq, uint32_t pktsize) +hn_try_txagg(struct hn_data *hv, struct hn_tx_queue *txq, + struct hn_txdesc *txd, uint32_t pktsize) { struct hn_txdesc *agg_txd = txq->agg_txd; struct rndis_packet_msg *pkt; @@ -1085,7 +1147,7 @@ hn_try_txagg(struct hn_data *hv, struct hn_tx_queue *txq, uint32_t pktsize) } chim = (uint8_t *)pkt + pkt->len; - + txq->agg_prevpkt = chim; txq->agg_pktleft--; txq->agg_szleft -= pktsize; if (txq->agg_szleft < HN_PKTSIZE_MIN(txq->agg_align)) { @@ -1095,18 +1157,21 @@ hn_try_txagg(struct hn_data *hv, struct hn_tx_queue *txq, uint32_t pktsize) */ txq->agg_pktleft = 0; } - } else { - agg_txd = hn_new_txd(hv, txq); - if (!agg_txd) - return NULL; - - chim = (uint8_t *)hv->chim_res->addr - + agg_txd->chim_index * hv->chim_szmax; - txq->agg_txd = agg_txd; - txq->agg_pktleft = txq->agg_pktmax - 1; - txq->agg_szleft = txq->agg_szmax - pktsize; + hn_txd_put(txq, txd); + return chim; } + + txd->chim_index = hn_chim_alloc(hv); + if (txd->chim_index == NVS_CHIM_IDX_INVALID) + return NULL; + + chim = (uint8_t *)hv->chim_res->addr + + txd->chim_index * hv->chim_szmax; + + txq->agg_txd = txd; + txq->agg_pktleft = txq->agg_pktmax - 1; + txq->agg_szleft = txq->agg_szmax - pktsize; txq->agg_prevpkt = chim; return chim; @@ -1329,13 +1394,18 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) return (*vf_dev->tx_pkt_burst)(sub_q, tx_pkts, nb_pkts); } - if (rte_mempool_avail_count(hv->tx_pool) <= txq->free_thresh) + if (rte_mempool_avail_count(txq->txdesc_pool) <= txq->free_thresh) hn_process_events(hv, txq->queue_id, 0); for (nb_tx = 0; nb_tx < nb_pkts; 
nb_tx++) { struct rte_mbuf *m = tx_pkts[nb_tx]; uint32_t pkt_size = m->pkt_len + HN_RNDIS_PKT_LEN; struct rndis_packet_msg *pkt; + struct hn_txdesc *txd; + + txd = hn_txd_get(txq); + if (txd == NULL) + break; /* For small packets aggregate them in chimney buffer */ if (m->pkt_len < HN_TXCOPY_THRESHOLD && pkt_size <= txq->agg_szmax) { @@ -1346,7 +1416,8 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) goto fail; } - pkt = hn_try_txagg(hv, txq, pkt_size); + + pkt = hn_try_txagg(hv, txq, txd, pkt_size); if (unlikely(!pkt)) break; @@ -1360,21 +1431,13 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) hn_flush_txagg(txq, &need_sig)) goto fail; } else { - struct hn_txdesc *txd; - - /* can send chimney data and large packet at once */ - txd = txq->agg_txd; - if (txd) { - hn_reset_txagg(txq); - } else { - txd = hn_new_txd(hv, txq); - if (unlikely(!txd)) - break; - } + /* Send any outstanding packets in buffer */ + if (txq->agg_txd && hn_flush_txagg(txq, &need_sig)) + goto fail; pkt = txd->rndis_pkt; txd->m = m; - txd->data_size += m->pkt_len; + txd->data_size = m->pkt_len; ++txd->packets; hn_encap(pkt, queue_id, m); @@ -1383,7 +1446,7 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts) if (unlikely(ret != 0)) { PMD_TX_LOG(NOTICE, "sg send failed: %d", ret); ++txq->stats.errors; - rte_mempool_put(hv->tx_pool, txd); + hn_txd_put(txq, txd); goto fail; } } diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h index 05bc492511ec..822d737bd3cc 100644 --- a/drivers/net/netvsc/hn_var.h +++ b/drivers/net/netvsc/hn_var.h @@ -52,6 +52,8 @@ struct hn_tx_queue { uint16_t port_id; uint16_t queue_id; uint32_t free_thresh; + struct rte_mempool *txdesc_pool; + void *tx_rndis; /* Applied packet transmission aggregation limits. 
 */
 	uint32_t agg_szmax;
 
@@ -115,8 +117,10 @@ struct hn_data {
 	uint16_t   num_queues;
 	uint64_t   rss_offloads;
 
+	rte_spinlock_t	chim_lock;
 	struct rte_mem_resource *chim_res;	/* UIO resource for Tx */
-	struct rte_mempool *tx_pool;		/* Tx descriptors */
+	struct rte_bitmap *chim_bmap;		/* Send buffer map */
+	void *chim_bmem;
 	uint32_t chim_szmax;			/* Max size per buffer */
 	uint32_t chim_cnt;			/* Max packets per buffer */
 
@@ -157,8 +161,8 @@ uint16_t hn_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 uint16_t hn_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		      uint16_t nb_pkts);
 
-int hn_tx_pool_init(struct rte_eth_dev *dev);
-void hn_tx_pool_uninit(struct rte_eth_dev *dev);
+int hn_chim_init(struct rte_eth_dev *dev);
+void hn_chim_uninit(struct rte_eth_dev *dev);
 int hn_dev_link_update(struct rte_eth_dev *dev, int wait);
 int hn_dev_tx_queue_setup(struct rte_eth_dev *dev,
 			  uint16_t queue_idx, uint16_t nb_desc,
 			  unsigned int socket_id,

From patchwork Tue Mar 31 17:14:00 2020
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 67512
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger, stable@dpdk.org
Date: Tue, 31 Mar 2020 10:14:00 -0700
Message-Id: <20200331171404.23596-5-stephen@networkplumber.org>
In-Reply-To: <20200331171404.23596-1-stephen@networkplumber.org>
Subject: [dpdk-dev] [PATCH v4 4/8] net/netvsc: fix invalid rte_free on dev_close

The netvsc PMD was putting the MAC address in private data, but the
core rte_ethdev doesn't allow that. It has to be in rte_malloc'd
memory, or a message will be printed on shutdown/close:
  EAL: Invalid memory

Fixes: f8279f47dd89 ("net/netvsc: fix crash in secondary process")
Cc: stable@dpdk.org

Signed-off-by: Stephen Hemminger
---
 drivers/net/netvsc/hn_ethdev.c | 16 ++++++++++------
 drivers/net/netvsc/hn_var.h    |  2 --
 2 files changed, 10 insertions(+), 8 deletions(-)

diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index ac6610838008..05f1a25a1abc 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -134,8 +134,6 @@ eth_dev_vmbus_allocate(struct rte_vmbus_device *dev, size_t private_data_size)
 static void
 eth_dev_vmbus_release(struct rte_eth_dev *eth_dev)
 {
-	/* mac_addrs must not be freed alone because part of dev_private */
-	eth_dev->data->mac_addrs = NULL;
 
 	/* free ether device */
 	rte_eth_dev_release_port(eth_dev);
@@ -937,9 +935,6 @@ eth_hn_dev_init(struct rte_eth_dev *eth_dev)
 	eth_dev->tx_pkt_burst = &hn_xmit_pkts;
 	eth_dev->rx_pkt_burst = &hn_recv_pkts;
 
-	/* Since Hyper-V only supports one MAC address, just use local data */
-	eth_dev->data->mac_addrs = &hv->mac_addr;
-
 	/*
 	 * for secondary processes, we don't initialize any further as primary
 	 * has already done this work.
@@ -947,6 +942,15 @@ eth_hn_dev_init(struct rte_eth_dev *eth_dev)
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
 
+	/* Since Hyper-V only supports one MAC address */
+	eth_dev->data->mac_addrs = rte_calloc("hv_mac", HN_MAX_MAC_ADDRS,
+					      sizeof(struct rte_ether_addr), 0);
+	if (eth_dev->data->mac_addrs == NULL) {
+		PMD_INIT_LOG(ERR,
+			     "Failed to allocate memory store MAC addresses");
+		return -ENOMEM;
+	}
+
 	hv->vmbus = vmbus;
 	hv->rxbuf_res = &vmbus->resource[HV_RECV_BUF_MAP];
 	hv->chim_res  = &vmbus->resource[HV_SEND_BUF_MAP];
@@ -989,7 +993,7 @@ eth_hn_dev_init(struct rte_eth_dev *eth_dev)
 	if (err)
 		goto failed;
 
-	err = hn_rndis_get_eaddr(hv, hv->mac_addr.addr_bytes);
+	err = hn_rndis_get_eaddr(hv, eth_dev->data->mac_addrs->addr_bytes);
 	if (err)
 		goto failed;
 
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index 822d737bd3cc..b4c61717379f 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -139,8 +139,6 @@ struct hn_data {
 	uint8_t  rss_key[40];
 	uint16_t rss_ind[128];
 
-	struct rte_ether_addr mac_addr;
-
 	struct rte_eth_dev_owner owner;
 	struct rte_intr_handle vf_intr;

From patchwork Tue Mar 31 17:14:01 2020
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 67513
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger, stable@dpdk.org
Date: Tue, 31 Mar 2020 10:14:01 -0700
Message-Id: <20200331171404.23596-6-stephen@networkplumber.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200331171404.23596-1-stephen@networkplumber.org>
References: <20200316235612.29854-1-stephen@networkplumber.org> <20200331171404.23596-1-stephen@networkplumber.org>
Subject: [dpdk-dev] [PATCH v4 5/8] net/netvsc: remove process event optimization

Remove the unlocked check for data in the receive ring. The check is not
safe because it is done without the necessary memory barriers.

Fixes: 4e9c73e96e83 ("net/netvsc: add Hyper-V network device")
Cc: stable@dpdk.org

Signed-off-by: Stephen Hemminger
---
 drivers/net/netvsc/hn_rxtx.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index 32c03e3da0c7..e8df84604202 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -969,10 +969,6 @@ uint32_t hn_process_events(struct hn_data *hv, uint16_t queue_id,
 
 	rxq = queue_id == 0 ? hv->primary : dev->data->rx_queues[queue_id];
 
-	/* If no pending data then nothing to do */
-	if (rte_vmbus_chan_rx_empty(rxq->chan))
-		return 0;
-
 	/*
 	 * Since channel is shared between Rx and TX queue need to have a lock
 	 * since DPDK does not force same CPU to be used for Rx/Tx.
From patchwork Tue Mar 31 17:14:02 2020
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 67514
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger, stable@dpdk.org
Date: Tue, 31 Mar 2020 10:14:02 -0700
Message-Id: <20200331171404.23596-7-stephen@networkplumber.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200331171404.23596-1-stephen@networkplumber.org>
References: <20200316235612.29854-1-stephen@networkplumber.org> <20200331171404.23596-1-stephen@networkplumber.org>
Subject: [dpdk-dev] [PATCH v4 6/8] net/netvsc: handle transmit completions based on burst size

If tx_free_thresh is set quite low, it is possible that the driver also
needs to clean up completed transmits based on the burst size, not only
on the threshold.
Fixes: fc30efe3a22e ("net/netvsc: change Rx descriptor setup and sizing")
Cc: stable@dpdk.org

Signed-off-by: Stephen Hemminger
---
 drivers/net/netvsc/hn_rxtx.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index e8df84604202..cbdfcc628b75 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -1375,7 +1375,7 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	struct hn_data *hv = txq->hv;
 	struct rte_eth_dev *vf_dev;
 	bool need_sig = false;
-	uint16_t nb_tx;
+	uint16_t nb_tx, avail;
 	int ret;
 
 	if (unlikely(hv->closed))
@@ -1390,7 +1390,8 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		return (*vf_dev->tx_pkt_burst)(sub_q, tx_pkts, nb_pkts);
 	}
 
-	if (rte_mempool_avail_count(txq->txdesc_pool) <= txq->free_thresh)
+	avail = rte_mempool_avail_count(txq->txdesc_pool);
+	if (nb_pkts > avail || avail <= txq->free_thresh)
 		hn_process_events(hv, txq->queue_id, 0);
 
 	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {

From patchwork Tue Mar 31 17:14:03 2020
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 67515
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger
Date: Tue, 31 Mar 2020 10:14:03 -0700
Message-Id: <20200331171404.23596-8-stephen@networkplumber.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200331171404.23596-1-stephen@networkplumber.org>
References: <20200316235612.29854-1-stephen@networkplumber.org> <20200331171404.23596-1-stephen@networkplumber.org>
Subject: [dpdk-dev] [PATCH v4 7/8] bus/vmbus: simplify args to need_signal

The transmit need-signal function can avoid an unnecessary dereference by
being passed the right pointer. This also makes the code better match the
FreeBSD driver.

Signed-off-by: Stephen Hemminger
---
 drivers/bus/vmbus/vmbus_bufring.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/bus/vmbus/vmbus_bufring.c b/drivers/bus/vmbus/vmbus_bufring.c
index c88001605dbb..c4aa07b307ff 100644
--- a/drivers/bus/vmbus/vmbus_bufring.c
+++ b/drivers/bus/vmbus/vmbus_bufring.c
@@ -54,10 +54,10 @@ void vmbus_br_setup(struct vmbus_br *br, void *buf, unsigned int blen)
  * data have arrived.
  */
 static inline bool
-vmbus_txbr_need_signal(const struct vmbus_br *tbr, uint32_t old_windex)
+vmbus_txbr_need_signal(const struct vmbus_bufring *vbr, uint32_t old_windex)
 {
 	rte_smp_mb();
-	if (tbr->vbr->imask)
+	if (vbr->imask)
 		return false;
 
 	rte_smp_rmb();
 
	/*
	 * This is the only case we need to signal when the
	 * ring transitions from being empty to non-empty.
	 */
-	return old_windex == tbr->vbr->rindex;
+	return old_windex == vbr->rindex;
 }
 
 static inline uint32_t
@@ -163,7 +163,7 @@ vmbus_txbr_write(struct vmbus_br *tbr, const struct iovec iov[], int iovlen,
 		rte_pause();
 
 	/* If host had read all data before this, then need to signal */
-	*need_sig |= vmbus_txbr_need_signal(tbr, old_windex);
+	*need_sig |= vmbus_txbr_need_signal(vbr, old_windex);
 	return 0;
 }

From patchwork Tue Mar 31 17:14:04 2020
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 67516
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger
Date: Tue, 31 Mar 2020 10:14:04 -0700
Message-Id: <20200331171404.23596-9-stephen@networkplumber.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200331171404.23596-1-stephen@networkplumber.org>
References: <20200316235612.29854-1-stephen@networkplumber.org> <20200331171404.23596-1-stephen@networkplumber.org>
Subject: [dpdk-dev] [PATCH v4 8/8] net/netvsc: avoid possible live lock

Since the ring buffer shared with the host carries both transmit
completions and received packets, the transmitter can get starved if the
receive ring fills up.
It is better to process all outstanding events, which frees up transmit
buffer slots, even if it means dropping some packets.

Fixes: 7e6c82430702 ("net/netvsc: avoid over filling Rx descriptor ring")

Signed-off-by: Stephen Hemminger
---
 drivers/net/netvsc/hn_rxtx.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index cbdfcc628b75..19f00a05285f 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -1032,9 +1032,6 @@ uint32_t hn_process_events(struct hn_data *hv, uint16_t queue_id,
 
 		if (tx_limit && tx_done >= tx_limit)
 			break;
-
-		if (rxq->rx_ring && rte_ring_full(rxq->rx_ring))
-			break;
 	}
 
 	if (bytes_read > 0)