From patchwork Thu Jun 16 08:20:31 2022
X-Patchwork-Submitter: Maxime Coquelin
X-Patchwork-Id: 112851
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: dev@dpdk.org, chenbo.xia@intel.com, david.marchand@redhat.com
Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
Subject: [PATCH] vhost: rename number of available entries
Date: Thu, 16 Jun 2022 10:20:31 +0200
Message-Id: <20220616082031.5005-1-maxime.coquelin@redhat.com>
List-Id: DPDK patches and discussions

This patch renames the local variables free_entries to avail_entries in
the dequeue path. Indeed, this variable represents the number of new
packets available in the Virtio transmit queue, so these entries are
actually used, not free.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
---
 lib/vhost/virtio_net.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 68a26eb17d..84cdf7e3b1 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -2774,7 +2774,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	bool legacy_ol_flags)
 {
 	uint16_t i;
-	uint16_t free_entries;
+	uint16_t avail_entries;
 	uint16_t dropped = 0;
 	static bool allocerr_warned;

@@ -2782,9 +2782,9 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
	 * The ordering between avail index and
	 * desc reads needs to be enforced.
	 */
-	free_entries = __atomic_load_n(&vq->avail->idx, __ATOMIC_ACQUIRE) -
+	avail_entries = __atomic_load_n(&vq->avail->idx, __ATOMIC_ACQUIRE) -
 			vq->last_avail_idx;
-	if (free_entries == 0)
+	if (avail_entries == 0)
 		return 0;

 	rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);

@@ -2792,7 +2792,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	VHOST_LOG_DATA(DEBUG, "(%s) %s\n", dev->ifname, __func__);

 	count = RTE_MIN(count, MAX_PKT_BURST);
-	count = RTE_MIN(count, free_entries);
+	count = RTE_MIN(count, avail_entries);
 	VHOST_LOG_DATA(DEBUG, "(%s) about to dequeue %u buffers\n",
 			dev->ifname, count);

@@ -3288,7 +3288,7 @@ virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 {
 	static bool allocerr_warned;
 	bool dropped = false;
-	uint16_t free_entries;
+	uint16_t avail_entries;
 	uint16_t pkt_idx, slot_idx = 0;
 	uint16_t nr_done_pkts = 0;
 	uint16_t pkt_err = 0;

@@ -3302,9 +3302,9 @@ virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	 * The ordering between avail index and
 	 * desc reads needs to be enforced.
 	 */
-	free_entries = __atomic_load_n(&vq->avail->idx, __ATOMIC_ACQUIRE) -
+	avail_entries = __atomic_load_n(&vq->avail->idx, __ATOMIC_ACQUIRE) -
 			vq->last_avail_idx;
-	if (free_entries == 0)
+	if (avail_entries == 0)
 		goto out;

 	rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);

@@ -3312,7 +3312,7 @@ virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	async_iter_reset(async);

 	count = RTE_MIN(count, MAX_PKT_BURST);
-	count = RTE_MIN(count, free_entries);
+	count = RTE_MIN(count, avail_entries);
 	VHOST_LOG_DATA(DEBUG, "(%s) about to dequeue %u buffers\n",
 			dev->ifname, count);