From patchwork Tue Jul 24 21:08:50 2018
X-Patchwork-Id: 43323
From: Stephen Hemminger <stephen@networkplumber.org>
To: dev@dpdk.org
Date: Tue, 24 Jul 2018 14:08:50 -0700
Message-Id: <20180724210853.22767-2-stephen@networkplumber.org>
In-Reply-To: <20180724210853.22767-1-stephen@networkplumber.org>
References: <20180724210853.22767-1-stephen@networkplumber.org>
Subject: [dpdk-dev] [PATCH 1/4] netvsc: change rx descriptor setup and sizing

Increase the size of the ring used to hold mbufs that have been
received but not yet processed. The default is now based on the size
of the receive mbuf pool rather than the number of sections from the
host.
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 drivers/net/netvsc/hn_rxtx.c | 24 +++++++-----------------
 1 file changed, 7 insertions(+), 17 deletions(-)

diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index 6d2f41c4c011..9a2dd9cb1beb 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -728,18 +728,12 @@ hn_dev_rx_queue_setup(struct rte_eth_dev *dev,
 		      struct rte_mempool *mp)
 {
 	struct hn_data *hv = dev->data->dev_private;
-	uint32_t qmax = hv->rxbuf_section_cnt;
 	char ring_name[RTE_RING_NAMESIZE];
 	struct hn_rx_queue *rxq;
 	unsigned int count;
-	size_t size;
-	int err = -ENOMEM;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (nb_desc == 0 || nb_desc > qmax)
-		nb_desc = qmax;
-
 	if (queue_idx == 0) {
 		rxq = hv->primary;
 	} else {
@@ -749,14 +743,9 @@ hn_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	}
 
 	rxq->mb_pool = mp;
-
-	count = rte_align32pow2(nb_desc);
-	size = sizeof(struct rte_ring) + count * sizeof(void *);
-	rxq->rx_ring = rte_malloc_socket("RX_RING", size,
-					 RTE_CACHE_LINE_SIZE,
-					 socket_id);
-	if (!rxq->rx_ring)
-		goto fail;
+	count = rte_mempool_avail_count(mp) / dev->data->nb_rx_queues;
+	if (nb_desc == 0 || nb_desc > count)
+		nb_desc = count;
 
 	/*
 	 * Staging ring from receive event logic to rx_pkts.
@@ -765,9 +754,10 @@ hn_dev_rx_queue_setup(struct rte_eth_dev *dev,
 	 */
 	snprintf(ring_name, sizeof(ring_name),
 		 "hn_rx_%u_%u", dev->data->port_id, queue_idx);
-	err = rte_ring_init(rxq->rx_ring, ring_name,
-			    count, 0);
-	if (err)
+	rxq->rx_ring = rte_ring_create(ring_name,
+				       rte_align32pow2(nb_desc),
+				       socket_id, 0);
+	if (!rxq->rx_ring)
 		goto fail;
 
 	dev->data->rx_queues[queue_idx] = rxq;
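
[Editor's note] For context, a standalone sketch of the sizing arithmetic
this patch introduces (not part of the patch itself): the mbuf pool
capacity is split evenly across the Rx queues, the requested descriptor
count is clamped to that share, and the result is rounded up to a power
of two, since a default rte_ring requires a power-of-two slot count.
The constants below and the align32pow2() helper are illustrative
stand-ins for rte_mempool_avail_count(), dev->data->nb_rx_queues and
DPDK's rte_align32pow2().

	#include <stdint.h>
	#include <stdio.h>

	/* Stand-in for rte_align32pow2(): round up to the next power of 2. */
	static uint32_t align32pow2(uint32_t x)
	{
		x--;
		x |= x >> 1;
		x |= x >> 2;
		x |= x >> 4;
		x |= x >> 8;
		x |= x >> 16;
		return x + 1;
	}

	int main(void)
	{
		uint32_t pool_avail = 8192; /* rte_mempool_avail_count(mp) */
		uint16_t nb_rx_queues = 4;  /* dev->data->nb_rx_queues */
		uint16_t nb_desc = 0;       /* 0 means "use the default" */

		/* Each queue gets an equal share of the pool. */
		uint32_t share = pool_avail / nb_rx_queues;

		if (nb_desc == 0 || nb_desc > share)
			nb_desc = share;

		/* rte_ring_create() needs a power-of-two slot count. */
		printf("ring size: %u\n", (unsigned)align32pow2(nb_desc));
		return 0;
	}

A side effect of switching from rte_malloc_socket() plus rte_ring_init()
to rte_ring_create() is that the ring library now computes and allocates
the backing memory itself, which removes the hand-rolled size
calculation and simplifies the error path.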