From patchwork Wed Jan 31 09:31:12 2024
X-Patchwork-Submitter: Maxime Coquelin
X-Patchwork-Id: 136227
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Maxime Coquelin
To: dev@dpdk.org, chenbox@nvidia.com, david.marchand@redhat.com,
 bnemeth@redhat.com, echaudro@redhat.com
Cc: Maxime Coquelin, stable@dpdk.org
Subject: [PATCH 1/2] vhost: fix memory leak in Virtio Tx split path
Date: Wed, 31 Jan 2024 10:31:12 +0100
Message-ID: <20240131093113.2208894-1-maxime.coquelin@redhat.com>

When vIOMMU is enabled and the Virtio device is bound to a kernel driver
in the guest, rte_vhost_dequeue_burst() will often return early because
of IOTLB misses.

This patch fixes a mbuf leak occurring in this case.
Fixes: 242695f6122a ("vhost: allocate and free packets in bulk in Tx split")
Cc: stable@dpdk.org

Signed-off-by: Maxime Coquelin
---
 lib/vhost/virtio_net.c | 20 ++++++++------------
 1 file changed, 8 insertions(+), 12 deletions(-)

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 280d4845f8..db9985c9b9 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -3120,11 +3120,8 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 				   VHOST_ACCESS_RO) < 0))
 			break;
 
-		update_shadow_used_ring_split(vq, head_idx, 0);
-
 		if (unlikely(buf_len <= dev->vhost_hlen)) {
-			dropped += 1;
-			i++;
+			dropped = 1;
 			break;
 		}
 
@@ -3143,8 +3140,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 					buf_len, mbuf_pool->name);
 				allocerr_warned = true;
 			}
-			dropped += 1;
-			i++;
+			dropped = 1;
 			break;
 		}
 
@@ -3155,17 +3151,17 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 				VHOST_DATA_LOG(dev->ifname, ERR, "failed to copy desc to mbuf.");
 				allocerr_warned = true;
 			}
-			dropped += 1;
-			i++;
+			dropped = 1;
 			break;
 		}
+		update_shadow_used_ring_split(vq, head_idx, 0);
 	}
 
-	if (dropped)
-		rte_pktmbuf_free_bulk(&pkts[i - 1], count - i + 1);
+	if (unlikely(count != i))
+		rte_pktmbuf_free_bulk(&pkts[i], count - i);
 
-	vq->last_avail_idx += i;
+	vq->last_avail_idx += i + dropped;
 
 	do_data_copy_dequeue(vq);
 
 	if (unlikely(i < count))
@@ -3175,7 +3171,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		vhost_vring_call_split(dev, vq);
 	}
 
-	return (i - dropped);
+	return i;
 }
 
 __rte_noinline

From patchwork Wed Jan 31 09:31:13 2024
X-Patchwork-Submitter: Maxime Coquelin
X-Patchwork-Id: 136228
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Maxime Coquelin
To: dev@dpdk.org, chenbox@nvidia.com, david.marchand@redhat.com,
 bnemeth@redhat.com, echaudro@redhat.com
Cc: Maxime Coquelin
Subject: [PATCH 2/2] vhost: add new mbuf allocation failure statistic
Date: Wed, 31 Jan 2024 10:31:13 +0100
Message-ID: <20240131093113.2208894-2-maxime.coquelin@redhat.com>
In-Reply-To: <20240131093113.2208894-1-maxime.coquelin@redhat.com>
References: <20240131093113.2208894-1-maxime.coquelin@redhat.com>

This patch introduces a new, per-virtqueue, mbuf allocation failure
statistic. It can be useful to troubleshoot packet drops due to
insufficient mempool size or memory leaks.

Signed-off-by: Maxime Coquelin
---
 lib/vhost/vhost.c      |  1 +
 lib/vhost/vhost.h      |  1 +
 lib/vhost/virtio_net.c | 17 +++++++++++++----
 3 files changed, 15 insertions(+), 4 deletions(-)

diff --git a/lib/vhost/vhost.c b/lib/vhost/vhost.c
index 5912a42979..ac71d17784 100644
--- a/lib/vhost/vhost.c
+++ b/lib/vhost/vhost.c
@@ -55,6 +55,7 @@ static const struct vhost_vq_stats_name_off vhost_vq_stat_strings[] = {
 	{"iotlb_misses", offsetof(struct vhost_virtqueue, stats.iotlb_misses)},
 	{"inflight_submitted", offsetof(struct vhost_virtqueue, stats.inflight_submitted)},
 	{"inflight_completed", offsetof(struct vhost_virtqueue, stats.inflight_completed)},
+	{"mbuf_alloc_failed", offsetof(struct vhost_virtqueue, stats.mbuf_alloc_failed)},
 };
 
 #define VHOST_NB_VQ_STATS RTE_DIM(vhost_vq_stat_strings)
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index 470dadbba6..371c3e3858 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -156,6 +156,7 @@ struct virtqueue_stats {
 	uint64_t iotlb_misses;
 	uint64_t inflight_submitted;
 	uint64_t inflight_completed;
+	uint64_t mbuf_alloc_failed;
 	uint64_t guest_notifications_suppressed;
 	/* Counters below are atomic, and should be incremented as such. */
 	RTE_ATOMIC(uint64_t) guest_notifications;
diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index db9985c9b9..b056c83d8f 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -2975,6 +2975,7 @@ desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		if (mbuf_avail == 0) {
 			cur = rte_pktmbuf_alloc(mbuf_pool);
 			if (unlikely(cur == NULL)) {
+				vq->stats.mbuf_alloc_failed++;
 				VHOST_DATA_LOG(dev->ifname, ERR,
 					"failed to allocate memory for mbuf.");
 				goto error;
@@ -3103,8 +3104,10 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	count = RTE_MIN(count, avail_entries);
 	VHOST_DATA_LOG(dev->ifname, DEBUG, "about to dequeue %u buffers", count);
 
-	if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts, count))
+	if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts, count)) {
+		vq->stats.mbuf_alloc_failed += count;
 		return 0;
+	}
 
 	for (i = 0; i < count; i++) {
 		struct buf_vector buf_vec[BUF_VECTOR_MAX];
@@ -3481,8 +3484,10 @@ virtio_dev_tx_packed(struct virtio_net *dev,
 {
 	uint32_t pkt_idx = 0;
 
-	if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts, count))
+	if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts, count)) {
+		vq->stats.mbuf_alloc_failed += count;
 		return 0;
+	}
 
 	do {
 		rte_prefetch0(&vq->desc_packed[vq->last_avail_idx]);
@@ -3729,8 +3734,10 @@ virtio_dev_tx_async_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	count = RTE_MIN(count, avail_entries);
 	VHOST_DATA_LOG(dev->ifname, DEBUG, "about to dequeue %u buffers", count);
 
-	if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts_prealloc, count))
+	if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts_prealloc, count)) {
+		vq->stats.mbuf_alloc_failed += count;
 		goto out;
+	}
 
 	for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
 		uint16_t head_idx = 0;
@@ -4019,8 +4026,10 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 
 	async_iter_reset(async);
 
-	if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts_prealloc, count))
+	if (rte_pktmbuf_alloc_bulk(mbuf_pool, pkts_prealloc, count)) {
+		vq->stats.mbuf_alloc_failed += count;
 		goto out;
+	}
 
 	do {
 		struct rte_mbuf *pkt = pkts_prealloc[pkt_idx];