From patchwork Tue Jul 24 21:08:50 2018
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 43323
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger
Date: Tue, 24 Jul 2018 14:08:50 -0700
Message-Id: <20180724210853.22767-2-stephen@networkplumber.org>
X-Mailer: git-send-email 2.18.0
In-Reply-To: <20180724210853.22767-1-stephen@networkplumber.org>
References: <20180724210853.22767-1-stephen@networkplumber.org>
Subject: [dpdk-dev] [PATCH 1/4] netvsc: change rx descriptor setup and sizing
List-Id: DPDK patches and discussions

Increase the size of the ring used to hold mbufs received but not yet
processed. The default is now based on the size of the receive mbuf
pool, not the number of sections from the host.
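The new sizing rule can be sketched in stand-alone C (a minimal model, not the driver code itself: `align32pow2()` is a local stand-in for DPDK's `rte_align32pow2()`, and the parameter names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Local stand-in for rte_align32pow2(): round up to the next power of two,
 * since rte_ring sizes must be powers of two. */
static uint32_t align32pow2(uint32_t x)
{
	x--;
	x |= x >> 1;
	x |= x >> 2;
	x |= x >> 4;
	x |= x >> 8;
	x |= x >> 16;
	return x + 1;
}

/* Sketch of the patch's sizing rule: split the mempool population across
 * the rx queues, clamp the requested descriptor count to that share, then
 * round up for the ring allocation. */
static uint32_t rx_ring_size(uint32_t nb_desc, uint32_t mp_avail,
			     uint16_t nb_rx_queues)
{
	uint32_t count = mp_avail / nb_rx_queues;

	if (nb_desc == 0 || nb_desc > count)
		nb_desc = count;
	return align32pow2(nb_desc);
}
```

With an 8192-mbuf pool and 4 queues, a request of 0 (or anything above 2048) yields a 2048-entry ring, while a request of 100 is rounded up to 128.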
Signed-off-by: Stephen Hemminger
---
 drivers/net/netvsc/hn_rxtx.c | 24 +++++++-----------------
 1 file changed, 7 insertions(+), 17 deletions(-)

diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index 6d2f41c4c011..9a2dd9cb1beb 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -728,18 +728,12 @@ hn_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		      struct rte_mempool *mp)
 {
 	struct hn_data *hv = dev->data->dev_private;
-	uint32_t qmax = hv->rxbuf_section_cnt;
 	char ring_name[RTE_RING_NAMESIZE];
 	struct hn_rx_queue *rxq;
 	unsigned int count;
-	size_t size;
-	int err = -ENOMEM;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (nb_desc == 0 || nb_desc > qmax)
-		nb_desc = qmax;
-
 	if (queue_idx == 0) {
 		rxq = hv->primary;
 	} else {
@@ -749,14 +743,9 @@ hn_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	}
 
 	rxq->mb_pool = mp;
-
-	count = rte_align32pow2(nb_desc);
-	size = sizeof(struct rte_ring) + count * sizeof(void *);
-	rxq->rx_ring = rte_malloc_socket("RX_RING", size,
-					 RTE_CACHE_LINE_SIZE,
-					 socket_id);
-	if (!rxq->rx_ring)
-		goto fail;
+	count = rte_mempool_avail_count(mp) / dev->data->nb_rx_queues;
+	if (nb_desc == 0 || nb_desc > count)
+		nb_desc = count;
 
 	/*
 	 * Staging ring from receive event logic to rx_pkts.
@@ -765,9 +754,10 @@ hn_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	 */
 	snprintf(ring_name, sizeof(ring_name),
 		 "hn_rx_%u_%u", dev->data->port_id, queue_idx);
-	err = rte_ring_init(rxq->rx_ring, ring_name,
-			    count, 0);
-	if (err)
+	rxq->rx_ring = rte_ring_create(ring_name,
+				       rte_align32pow2(nb_desc),
+				       socket_id, 0);
+	if (!rxq->rx_ring)
 		goto fail;
 
 	dev->data->rx_queues[queue_idx] = rxq;

From patchwork Tue Jul 24 21:08:51 2018
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 43324
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger
Date: Tue, 24 Jul 2018 14:08:51 -0700
Message-Id: <20180724210853.22767-3-stephen@networkplumber.org>
In-Reply-To: <20180724210853.22767-1-stephen@networkplumber.org>
References: <20180724210853.22767-1-stephen@networkplumber.org>
Subject: [dpdk-dev] [PATCH 2/4] netvsc: avoid over filling receive descriptor ring

If the number of packets requested is already present in the rx_ring,
skip reading the ring buffer from the host. If the staging ring between
the poll and receive side is full, don't poll (let incoming packets
stay on the host). If no more transmit descriptors are available, still
try to flush any outstanding data.
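The receive-path gating described above can be modeled in stand-alone C (an illustrative sketch, not the DPDK API: `rx_model`, `process_events()` and `recv_burst()` are invented stand-ins for the staging ring, `hn_process_events()` and `hn_recv_pkts()`):

```c
#include <assert.h>

/* Illustrative model of the rx path: a fixed-capacity staging ring fed
 * from a host ring. */
#define STAGE_CAP 64U

struct rx_model {
	unsigned int staged;	/* mbufs sitting in the staging ring */
	unsigned int host;	/* packets still on the host ring */
	unsigned int polls;	/* how many times the host was drained */
};

/* Drain host events into the staging ring, stopping when it fills up --
 * the analogue of the rte_ring_full() check added by this patch. */
static void process_events(struct rx_model *m)
{
	m->polls++;
	while (m->host > 0 && m->staged < STAGE_CAP) {
		m->staged++;
		m->host--;
	}
}

/* Analogue of the hn_recv_pkts() change: touch the host ring only when
 * the staging ring cannot already satisfy the requested burst. */
static unsigned int recv_burst(struct rx_model *m, unsigned int nb_pkts)
{
	unsigned int n;

	if (m->staged < nb_pkts)
		process_events(m);

	n = m->staged < nb_pkts ? m->staged : nb_pkts;
	m->staged -= n;
	return n;
}
```

A burst that the staging ring can satisfy never touches the host; a larger burst triggers exactly one drain, and draining stops once the staging ring is full, leaving the excess on the host side.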
Signed-off-by: Stephen Hemminger
---
 drivers/net/netvsc/hn_rxtx.c | 15 ++++++++-------
 1 file changed, 8 insertions(+), 7 deletions(-)

diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index 9a2dd9cb1beb..1aff64ee3ae5 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -878,11 +878,11 @@ void hn_process_events(struct hn_data *hv, uint16_t queue_id)
 			PMD_DRV_LOG(ERR, "unknown chan pkt %u", pkt->type);
 			break;
 		}
+
+		if (rxq->rx_ring && rte_ring_full(rxq->rx_ring))
+			break;
 	}
 	rte_spinlock_unlock(&rxq->ring_lock);
-
-	if (unlikely(ret != -EAGAIN))
-		PMD_DRV_LOG(ERR, "channel receive failed: %d", ret);
 }
 
 static void hn_append_to_chim(struct hn_tx_queue *txq,
@@ -1248,7 +1248,7 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 			pkt = hn_try_txagg(hv, txq, pkt_size);
 			if (unlikely(!pkt))
-				goto fail;
+				break;
 
 			hn_encap(pkt, txq->queue_id, m);
 			hn_append_to_chim(txq, pkt, m);
@@ -1269,7 +1269,7 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		} else {
 			txd = hn_new_txd(hv, txq);
 			if (unlikely(!txd))
-				goto fail;
+				break;
 		}
 
 		pkt = txd->rndis_pkt;
@@ -1310,8 +1310,9 @@ hn_recv_pkts(void *prxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	if (unlikely(hv->closed))
 		return 0;
 
-	/* Get all outstanding receive completions */
-	hn_process_events(hv, rxq->queue_id);
+	/* If ring is empty then process more */
+	if (rte_ring_count(rxq->rx_ring) < nb_pkts)
+		hn_process_events(hv, rxq->queue_id);
 
 	/* Get mbufs off staging ring */
 	return rte_ring_sc_dequeue_burst(rxq->rx_ring, (void **)rx_pkts,

From patchwork Tue Jul 24 21:08:52 2018
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 43325
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger
Date: Tue, 24 Jul 2018 14:08:52 -0700
Message-Id: <20180724210853.22767-4-stephen@networkplumber.org>
In-Reply-To: <20180724210853.22767-1-stephen@networkplumber.org>
References: <20180724210853.22767-1-stephen@networkplumber.org>
Subject: [dpdk-dev] [PATCH 3/4] netvsc: implement queue info get handlers

This helps when diagnosing ring issues in testpmd.

Signed-off-by: Stephen Hemminger
---
 drivers/net/netvsc/hn_ethdev.c |  2 ++
 drivers/net/netvsc/hn_rxtx.c   | 22 ++++++++++++++++++++++
 drivers/net/netvsc/hn_var.h    |  4 ++++
 3 files changed, 28 insertions(+)

diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 47ed760b825d..78b842ba2d68 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -536,6 +536,8 @@ static const struct eth_dev_ops hn_eth_dev_ops = {
 	.dev_stop = hn_dev_stop,
 	.dev_close = hn_dev_close,
 	.dev_infos_get = hn_dev_info_get,
+	.txq_info_get = hn_dev_tx_queue_info,
+	.rxq_info_get = hn_dev_rx_queue_info,
 	.promiscuous_enable = hn_dev_promiscuous_enable,
 	.promiscuous_disable = hn_dev_promiscuous_disable,
 	.allmulticast_enable = hn_dev_allmulticast_enable,
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index 1aff64ee3ae5..17cebeb74456 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -268,6 +268,17 @@ hn_dev_tx_queue_release(void *arg)
 	rte_free(txq);
 }
 
+void
+hn_dev_tx_queue_info(struct rte_eth_dev *dev, uint16_t queue_idx,
+		     struct rte_eth_txq_info *qinfo)
+{
+	struct hn_data *hv = dev->data->dev_private;
+	struct hn_tx_queue *txq = dev->data->tx_queues[queue_idx];
+
+	qinfo->conf.tx_free_thresh = txq->free_thresh;
+	qinfo->nb_desc = hv->tx_pool->size;
+}
+
 static void
 hn_nvs_send_completed(struct rte_eth_dev *dev, uint16_t queue_id,
 		      unsigned long xactid, const struct hn_nvs_rndis_ack *ack)
@@ -790,6 +801,17 @@ hn_dev_rx_queue_release(void *arg)
 	}
 }
 
+void
+hn_dev_rx_queue_info(struct rte_eth_dev *dev, uint16_t queue_idx,
+		     struct rte_eth_rxq_info *qinfo)
+{
+	struct hn_rx_queue *rxq = dev->data->rx_queues[queue_idx];
+
+	qinfo->mp = rxq->mb_pool;
+	qinfo->scattered_rx = 1;
+	qinfo->nb_desc = rte_ring_get_capacity(rxq->rx_ring);
+}
+
 static void
 hn_nvs_handle_notify(const struct vmbus_chanpkt_hdr *pkthdr,
 		     const void *data)
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index f0358c58226a..3f3b442697af 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -141,6 +141,8 @@ int hn_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			  uint16_t nb_desc, unsigned int socket_id,
 			  const struct rte_eth_txconf *tx_conf);
 void hn_dev_tx_queue_release(void *arg);
+void hn_dev_tx_queue_info(struct rte_eth_dev *dev, uint16_t queue_idx,
+			  struct rte_eth_txq_info *qinfo);
 
 struct hn_rx_queue *hn_rx_queue_alloc(struct hn_data *hv,
 				      uint16_t queue_id,
@@ -151,3 +153,5 @@ int hn_dev_rx_queue_setup(struct rte_eth_dev *dev,
 			  const struct rte_eth_rxconf *rx_conf,
 			  struct rte_mempool *mp);
 void hn_dev_rx_queue_release(void *arg);
+void hn_dev_rx_queue_info(struct rte_eth_dev *dev, uint16_t queue_idx,
+			  struct rte_eth_rxq_info *qinfo);

From patchwork Tue Jul 24 21:08:53 2018
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 43326
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger
Date: Tue, 24 Jul 2018 14:08:53 -0700
Message-Id: <20180724210853.22767-5-stephen@networkplumber.org>
In-Reply-To: <20180724210853.22767-1-stephen@networkplumber.org>
References: <20180724210853.22767-1-stephen@networkplumber.org>
Subject: [dpdk-dev] [PATCH 4/4] netvsc/vmbus: avoid signalling host on read

Don't signal the host that the receive ring has been read until all
events have been processed. This reduces the number of guest exits and
therefore improves performance.

Signed-off-by: Stephen Hemminger
---
 drivers/bus/vmbus/rte_bus_vmbus.h           | 13 +++++-
 drivers/bus/vmbus/rte_bus_vmbus_version.map |  1 +
 drivers/bus/vmbus/vmbus_bufring.c           |  3 ++
 drivers/bus/vmbus/vmbus_channel.c           | 45 +++++++++----------
 drivers/net/netvsc/hn_rxtx.c                | 49 +++++++--------------
 drivers/net/netvsc/hn_var.h                 |  3 +-
 6 files changed, 56 insertions(+), 58 deletions(-)

diff --git a/drivers/bus/vmbus/rte_bus_vmbus.h b/drivers/bus/vmbus/rte_bus_vmbus.h
index 0100f80ff9a0..4a2c1f6fd918 100644
--- a/drivers/bus/vmbus/rte_bus_vmbus.h
+++ b/drivers/bus/vmbus/rte_bus_vmbus.h
@@ -337,12 +337,23 @@ int rte_vmbus_chan_recv(struct vmbus_channel *chan,
  * @param len
  *	Pointer to size of receive buffer (in/out)
  * @return
- *   On success, returns 0
+ *   On success, returns number of bytes read.
  *   On failure, returns negative errno.
 */
 int rte_vmbus_chan_recv_raw(struct vmbus_channel *chan,
			     void *data, uint32_t *len);
 
+/**
+ * Notify host of bytes read (after recv_raw).
+ * Signals host if required.
+ *
+ * @param chan
+ *	Pointer to vmbus_channel structure.
+ * @param bytes_read
+ *	Number of bytes read since last signal.
+ */
+void rte_vmbus_chan_signal_read(struct vmbus_channel *chan, uint32_t bytes_read);
+
 /**
  * Determine sub channel index of the given channel
  *
diff --git a/drivers/bus/vmbus/rte_bus_vmbus_version.map b/drivers/bus/vmbus/rte_bus_vmbus_version.map
index 5324fef4662c..dabb9203104b 100644
--- a/drivers/bus/vmbus/rte_bus_vmbus_version.map
+++ b/drivers/bus/vmbus/rte_bus_vmbus_version.map
@@ -10,6 +10,7 @@ DPDK_18.08 {
	rte_vmbus_chan_rx_empty;
	rte_vmbus_chan_send;
	rte_vmbus_chan_send_sglist;
+	rte_vmbus_chan_signal_read;
	rte_vmbus_chan_signal_tx;
	rte_vmbus_irq_mask;
	rte_vmbus_irq_read;
diff --git a/drivers/bus/vmbus/vmbus_bufring.c b/drivers/bus/vmbus/vmbus_bufring.c
index c2d7d8cc2254..c88001605dbb 100644
--- a/drivers/bus/vmbus/vmbus_bufring.c
+++ b/drivers/bus/vmbus/vmbus_bufring.c
@@ -221,6 +221,9 @@ vmbus_rxbr_read(struct vmbus_br *rbr, void *data, size_t dlen, size_t skip)
 	if (vmbus_br_availread(rbr) < dlen + skip + sizeof(uint64_t))
 		return -EAGAIN;
 
+	/* Record where host was when we started read (for debug) */
+	rbr->windex = rbr->vbr->windex;
+
 	/*
 	 * Copy channel packet from RX bufring.
 	 */
diff --git a/drivers/bus/vmbus/vmbus_channel.c b/drivers/bus/vmbus/vmbus_channel.c
index f9feada9b047..cc5f3e8379a5 100644
--- a/drivers/bus/vmbus/vmbus_channel.c
+++ b/drivers/bus/vmbus/vmbus_channel.c
@@ -176,49 +176,37 @@ bool rte_vmbus_chan_rx_empty(const struct vmbus_channel *channel)
 	return br->vbr->rindex == br->vbr->windex;
 }
 
-static int vmbus_read_and_signal(struct vmbus_channel *chan,
-				 void *data, size_t dlen, size_t skip)
+/* Signal host after reading N bytes */
+void rte_vmbus_chan_signal_read(struct vmbus_channel *chan, uint32_t bytes_read)
 {
 	struct vmbus_br *rbr = &chan->rxbr;
-	uint32_t write_sz, pending_sz, bytes_read;
-	int error;
-
-	/* Record where host was when we started read (for debug) */
-	rbr->windex = rbr->vbr->windex;
-
-	/* Read data and skip packet header */
-	error = vmbus_rxbr_read(rbr, data, dlen, skip);
-	if (error)
-		return error;
+	uint32_t write_sz, pending_sz;
 
 	/* No need for signaling on older versions */
 	if (!rbr->vbr->feature_bits.feat_pending_send_sz)
-		return 0;
+		return;
 
 	/* Make sure reading of pending happens after new read index */
 	rte_mb();
 
 	pending_sz = rbr->vbr->pending_send;
 	if (!pending_sz)
-		return 0;
+		return;
 
 	rte_smp_rmb();
 	write_sz = vmbus_br_availwrite(rbr, rbr->vbr->windex);
-	bytes_read = dlen + skip + sizeof(uint64_t);
 
 	/* If there was space before then host was not blocked */
 	if (write_sz - bytes_read > pending_sz)
-		return 0;
+		return;
 
 	/* If pending write will not fit */
 	if (write_sz <= pending_sz)
-		return 0;
+		return;
 
 	vmbus_set_event(chan->device, chan);
-	return 0;
 }
 
-/* TODO: replace this with inplace ring buffer (no copy) */
 int rte_vmbus_chan_recv(struct vmbus_channel *chan, void *data, uint32_t *len,
			 uint64_t *request_id)
 {
@@ -256,10 +244,16 @@ int rte_vmbus_chan_recv(struct vmbus_channel *chan, void *data, uint32_t *len,
 	if (request_id)
 		*request_id = pkt.xactid;
 
-	/* Read data and skip the header */
-	return vmbus_read_and_signal(chan, data, dlen, hlen);
+	/* Read data and skip packet header */
+	error = vmbus_rxbr_read(&chan->rxbr, data, dlen, hlen);
+	if (error)
+		return error;
+
+	rte_vmbus_chan_signal_read(chan, dlen + hlen + sizeof(uint64_t));
+	return 0;
 }
 
+/* TODO: replace this with inplace ring buffer (no copy) */
 int rte_vmbus_chan_recv_raw(struct vmbus_channel *chan,
			     void *data, uint32_t *len)
 {
@@ -291,8 +285,13 @@ int rte_vmbus_chan_recv_raw(struct vmbus_channel *chan,
 	if (unlikely(dlen > bufferlen))
 		return -ENOBUFS;
 
-	/* Put packet header in data buffer */
-	return vmbus_read_and_signal(chan, data, dlen, 0);
+	/* Read data and skip packet header */
+	error = vmbus_rxbr_read(&chan->rxbr, data, dlen, 0);
+	if (error)
+		return error;
+
+	/* Return the number of bytes read */
+	return dlen + sizeof(uint64_t);
 }
 
 int vmbus_chan_create(const struct rte_vmbus_device *device,
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index 17cebeb74456..38c1612a6ac6 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -40,7 +40,7 @@
 #define HN_TXCOPY_THRESHOLD	512
 #define HN_RXCOPY_THRESHOLD	256
 
-#define HN_RXQ_EVENT_DEFAULT	1024
+#define HN_RXQ_EVENT_DEFAULT	2048
 
 struct hn_rxinfo {
 	uint32_t	vlan_info;
@@ -709,7 +709,8 @@ struct hn_rx_queue *hn_rx_queue_alloc(struct hn_data *hv,
 {
 	struct hn_rx_queue *rxq;
 
-	rxq = rte_zmalloc_socket("HN_RXQ", sizeof(*rxq),
+	rxq = rte_zmalloc_socket("HN_RXQ",
+				 sizeof(*rxq) + HN_RXQ_EVENT_DEFAULT,
				 RTE_CACHE_LINE_SIZE, socket_id);
 	if (rxq) {
 		rxq->hv = hv;
@@ -717,16 +718,6 @@ struct hn_rx_queue *hn_rx_queue_alloc(struct hn_data *hv,
 		rte_spinlock_init(&rxq->ring_lock);
 		rxq->port_id = hv->port_id;
 		rxq->queue_id = queue_id;
-
-		rxq->event_sz = HN_RXQ_EVENT_DEFAULT;
-		rxq->event_buf = rte_malloc_socket("RX_EVENTS",
-						   rxq->event_sz,
-						   RTE_CACHE_LINE_SIZE,
-						   socket_id);
-		if (!rxq->event_buf) {
-			rte_free(rxq);
-			rxq = NULL;
-		}
 	}
 	return rxq;
 }
@@ -835,6 +826,7 @@ void hn_process_events(struct hn_data *hv, uint16_t queue_id)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[hv->port_id];
 	struct hn_rx_queue *rxq;
+	uint32_t bytes_read = 0;
 	int ret = 0;
 
 	rxq = queue_id == 0 ? hv->primary : dev->data->rx_queues[queue_id];
@@ -852,34 +844,21 @@ void hn_process_events(struct hn_data *hv, uint16_t queue_id)
 
 	for (;;) {
 		const struct vmbus_chanpkt_hdr *pkt;
-		uint32_t len = rxq->event_sz;
+		uint32_t len = HN_RXQ_EVENT_DEFAULT;
 		const void *data;
 
 		ret = rte_vmbus_chan_recv_raw(rxq->chan, rxq->event_buf, &len);
 		if (ret == -EAGAIN)
 			break;	/* ring is empty */
-		if (ret == -ENOBUFS) {
-			/* expanded buffer needed */
-			len = rte_align32pow2(len);
-			PMD_DRV_LOG(DEBUG, "expand event buf to %u", len);
-
-			rxq->event_buf = rte_realloc(rxq->event_buf,
-						     len, RTE_CACHE_LINE_SIZE);
-			if (rxq->event_buf) {
-				rxq->event_sz = len;
-				continue;
-			}
-
-			rte_exit(EXIT_FAILURE, "can not expand event buf!\n");
-			break;
-		}
-
-		if (ret != 0) {
-			PMD_DRV_LOG(ERR, "vmbus ring buffer error: %d", ret);
-			break;
-		}
+		else if (ret == -ENOBUFS)
+			rte_exit(EXIT_FAILURE, "event buffer not big enough (%u < %u)",
+				 HN_RXQ_EVENT_DEFAULT, len);
+		else if (ret <= 0)
+			rte_exit(EXIT_FAILURE,
+				 "vmbus ring buffer error: %d", ret);
 
+		bytes_read += ret;
 		pkt = (const struct vmbus_chanpkt_hdr *)rxq->event_buf;
 		data = (char *)rxq->event_buf + vmbus_chanpkt_getlen(pkt->hlen);
@@ -904,6 +883,10 @@ void hn_process_events(struct hn_data *hv, uint16_t queue_id)
 		if (rxq->rx_ring && rte_ring_full(rxq->rx_ring))
 			break;
 	}
+
+	if (bytes_read > 0)
+		rte_vmbus_chan_signal_read(rxq->chan, bytes_read);
+
 	rte_spinlock_unlock(&rxq->ring_lock);
 }
 
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index 3f3b442697af..f7ff8585bc1c 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -69,7 +69,6 @@ struct hn_rx_queue {
 	struct vmbus_channel *chan;
 	struct rte_mempool *mb_pool;
 	struct rte_ring *rx_ring;
-	void *event_buf;
 
 	rte_spinlock_t ring_lock;
 	uint32_t event_sz;
@@ -77,6 +76,8 @@ struct hn_rx_queue {
 	uint16_t queue_id;
 	struct hn_stats stats;
 	uint64_t ring_full;
+
+	uint8_t event_buf[];
 };
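The batched read-signalling that this last patch introduces can be sketched with a stand-alone model (the names `chan_sim`, `recv_raw()` and `signal_read()` are illustrative stand-ins for `rte_vmbus_chan_recv_raw()` and `rte_vmbus_chan_signal_read()`, not the real vmbus API):

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of a vmbus rx ring: pending bytes plus a count of host
 * notifications issued. */
struct chan_sim {
	uint32_t pending;	/* bytes waiting in the ring */
	unsigned int signals;	/* host notifications issued */
};

/* Stand-in for rte_vmbus_chan_recv_raw(): consume up to one event's worth
 * of bytes, or 0 when the ring is empty. */
static uint32_t recv_raw(struct chan_sim *c)
{
	uint32_t n;

	if (c->pending == 0)
		return 0;
	n = c->pending > 256 ? 256 : c->pending;
	c->pending -= n;
	return n;
}

/* Stand-in for rte_vmbus_chan_signal_read(): one notification covering
 * everything read since the last signal. */
static void signal_read(struct chan_sim *c, uint32_t bytes_read)
{
	if (bytes_read > 0)
		c->signals++;
}

/* One event-processing pass, mirroring hn_process_events(): many reads,
 * at most one host signal at the end. */
static uint32_t process_events(struct chan_sim *c)
{
	uint32_t bytes_read = 0, n;

	while ((n = recv_raw(c)) > 0)
		bytes_read += n;
	signal_read(c, bytes_read);
	return bytes_read;
}
```

Draining 1000 pending bytes takes four reads in this model but produces a single host notification, which is the guest-exit reduction the commit message claims.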