From patchwork Tue Mar 31 17:14:02 2020
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 67514
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger, stable@dpdk.org
Date: Tue, 31 Mar 2020 10:14:02 -0700
Message-Id: <20200331171404.23596-7-stephen@networkplumber.org>
In-Reply-To: <20200331171404.23596-1-stephen@networkplumber.org>
References: <20200316235612.29854-1-stephen@networkplumber.org>
 <20200331171404.23596-1-stephen@networkplumber.org>
Subject: [dpdk-dev] [PATCH v4 6/8] net/netvsc: handle transmit completions
 based on burst size
List-Id: DPDK patches and discussions

If tx_free_thresh is quite low, the burst size can exceed the number of
available transmit descriptors, so completed transmits must also be
cleaned up based on the burst size, not only on the threshold.
Fixes: fc30efe3a22e ("net/netvsc: change Rx descriptor setup and sizing")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger
---
 drivers/net/netvsc/hn_rxtx.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index e8df84604202..cbdfcc628b75 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -1375,7 +1375,7 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 	struct hn_data *hv = txq->hv;
 	struct rte_eth_dev *vf_dev;
 	bool need_sig = false;
-	uint16_t nb_tx;
+	uint16_t nb_tx, avail;
 	int ret;
 
 	if (unlikely(hv->closed))
@@ -1390,7 +1390,8 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		return (*vf_dev->tx_pkt_burst)(sub_q, tx_pkts, nb_pkts);
 	}
 
-	if (rte_mempool_avail_count(txq->txdesc_pool) <= txq->free_thresh)
+	avail = rte_mempool_avail_count(txq->txdesc_pool);
+	if (nb_pkts > avail || avail <= txq->free_thresh)
 		hn_process_events(hv, txq->queue_id, 0);
 
 	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
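For context, the `txq->free_thresh` compared against in the diff is derived from the `tx_free_thresh` an application passes at queue setup through the standard ethdev API. The fragment below is illustrative only (port, queue, descriptor count, and threshold are assumed example values) and needs a full DPDK build environment; it is not part of the patch:

```c
/* Illustrative configuration fragment: a low tx_free_thresh like this is
 * the situation the patch addresses, since cleanup would otherwise run
 * only when very few descriptors remain. */
struct rte_eth_txconf txconf = {
	.tx_free_thresh = 4,	/* reclaim only when <= 4 descriptors free */
};

/* port 0, queue 0, 512 Tx descriptors -- assumed example values */
ret = rte_eth_tx_queue_setup(0, 0, 512, rte_eth_dev_socket_id(0), &txconf);
```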