From patchwork Tue Dec  6 04:46:16 2022
X-Patchwork-Submitter: Mike Pattrick <mkp@redhat.com>
X-Patchwork-Id: 120467
From: Mike Pattrick <mkp@redhat.com>
To: Maxime Coquelin, Chenbo Xia
Cc: dev@dpdk.org, Mike Pattrick <mkp@redhat.com>
Subject: [PATCH] vhost: exclude VM hugepages from coredumps
Date: Mon, 5 Dec 2022 23:46:16 -0500
Message-Id: <20221206044616.725392-1-mkp@redhat.com>

Currently, if an application wants to include shared hugepages in
coredumps in conjunction with the vhost library, the coredump will be
larger than expected and will include unneeded virtual machine memory.

This patch marks all vhost hugepages as DONTDUMP, except for select
pages used by DPDK.
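For reference, the mem_set_dump() helper added below is a thin wrapper
around madvise(2): MADV_DONTDUMP excludes a mapping from coredumps and
MADV_DODUMP opts it back in. A minimal standalone sketch of that
mechanism (illustrative only, not part of this patch; the anonymous
mapping and the 2M size are made up):

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
	size_t len = 2 * 1024 * 1024;	/* stand-in for one 2M hugepage */
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	if (p == MAP_FAILED) {
		perror("mmap");
		return 1;
	}

	/* Exclude the region from any coredump of this process. */
	if (madvise(p, len, MADV_DONTDUMP) == -1)
		fprintf(stderr, "MADV_DONTDUMP: %s\n", strerror(errno));

	/* Opt the region back in if it must appear in dumps. */
	if (madvise(p, len, MADV_DODUMP) == -1)
		fprintf(stderr, "MADV_DODUMP: %s\n", strerror(errno));

	munmap(p, len);
	return 0;
}

MADV_DONTDUMP and MADV_DODUMP are Linux-specific (kernel 3.4+), which
is why the helper below compiles to a no-op when MADV_DONTDUMP is
undefined.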
Signed-off-by: Mike Pattrick <mkp@redhat.com>
---
 lib/vhost/iotlb.c      |  5 +++++
 lib/vhost/vhost.h      | 11 +++++++++++
 lib/vhost/vhost_user.c | 10 ++++++++++
 3 files changed, 26 insertions(+)

diff --git a/lib/vhost/iotlb.c b/lib/vhost/iotlb.c
index 6a729e8804..2f89f88817 100644
--- a/lib/vhost/iotlb.c
+++ b/lib/vhost/iotlb.c
@@ -149,6 +149,7 @@ vhost_user_iotlb_cache_remove_all(struct vhost_virtqueue *vq)
 	rte_rwlock_write_lock(&vq->iotlb_lock);
 
 	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
+		mem_set_dump((void *)node->uaddr, node->size, true);
 		TAILQ_REMOVE(&vq->iotlb_list, node, next);
 		vhost_user_iotlb_pool_put(vq, node);
 	}
@@ -170,6 +171,7 @@ vhost_user_iotlb_cache_random_evict(struct vhost_virtqueue *vq)
 
 	RTE_TAILQ_FOREACH_SAFE(node, &vq->iotlb_list, next, temp_node) {
 		if (!entry_idx) {
+			mem_set_dump((void *)node->uaddr, node->size, true);
 			TAILQ_REMOVE(&vq->iotlb_list, node, next);
 			vhost_user_iotlb_pool_put(vq, node);
 			vq->iotlb_cache_nr--;
@@ -222,12 +224,14 @@ vhost_user_iotlb_cache_insert(struct virtio_net *dev, struct vhost_virtqueue *vq
 			vhost_user_iotlb_pool_put(vq, new_node);
 			goto unlock;
 		} else if (node->iova > new_node->iova) {
+			mem_set_dump((void *)node->uaddr, node->size, true);
 			TAILQ_INSERT_BEFORE(node, new_node, next);
 			vq->iotlb_cache_nr++;
 			goto unlock;
 		}
 	}
 
+	mem_set_dump((void *)node->uaddr, node->size, true);
 	TAILQ_INSERT_TAIL(&vq->iotlb_list, new_node, next);
 	vq->iotlb_cache_nr++;
 
@@ -255,6 +259,7 @@ vhost_user_iotlb_cache_remove(struct vhost_virtqueue *vq,
 			break;
 
 		if (iova < node->iova + node->size) {
+			mem_set_dump((void *)node->uaddr, node->size, true);
 			TAILQ_REMOVE(&vq->iotlb_list, node, next);
 			vhost_user_iotlb_pool_put(vq, node);
 			vq->iotlb_cache_nr--;
diff --git a/lib/vhost/vhost.h b/lib/vhost/vhost.h
index ef211ed519..09e1d5d97b 100644
--- a/lib/vhost/vhost.h
+++ b/lib/vhost/vhost.h
@@ -987,4 +987,15 @@ mbuf_is_consumed(struct rte_mbuf *m)
 
 	return true;
 }
+
+static __rte_always_inline void
+mem_set_dump(void *ptr, size_t size, bool enable)
+{
+#ifdef MADV_DONTDUMP
+	if (madvise(ptr, size, enable ? MADV_DODUMP : MADV_DONTDUMP) == -1) {
+		rte_log(RTE_LOG_INFO, vhost_config_log_level,
+			"VHOST_CONFIG: could not set coredump preference (%s).\n",
+			strerror(errno));
+	}
+#endif
+}
 #endif /* _VHOST_NET_CDEV_H_ */
diff --git a/lib/vhost/vhost_user.c b/lib/vhost/vhost_user.c
index 9902ae9944..8f33d5f4d9 100644
--- a/lib/vhost/vhost_user.c
+++ b/lib/vhost/vhost_user.c
@@ -793,6 +793,9 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
 			return;
 		}
 
+		mem_set_dump(vq->desc_packed, len, true);
+		mem_set_dump(vq->driver_event, len, true);
+		mem_set_dump(vq->device_event, len, true);
 		vq->access_ok = true;
 		return;
 	}
@@ -846,6 +849,9 @@ translate_ring_addresses(struct virtio_net **pdev, struct vhost_virtqueue **pvq)
 			"some packets maybe resent for Tx and dropped for Rx\n");
 	}
 
+	mem_set_dump(vq->desc, len, true);
+	mem_set_dump(vq->avail, len, true);
+	mem_set_dump(vq->used, len, true);
 	vq->access_ok = true;
 
 	VHOST_LOG_CONFIG(dev->ifname, DEBUG, "mapped address desc: %p\n", vq->desc);
@@ -1224,6 +1230,7 @@ vhost_user_mmap_region(struct virtio_net *dev,
 	region->mmap_addr = mmap_addr;
 	region->mmap_size = mmap_size;
 	region->host_user_addr = (uint64_t)(uintptr_t)mmap_addr + mmap_offset;
+	mem_set_dump(mmap_addr, mmap_size, false);
 
 	if (dev->async_copy) {
 		if (add_guest_pages(dev, region, alignment) < 0) {
@@ -1528,6 +1535,7 @@ inflight_mem_alloc(struct virtio_net *dev, const char *name, size_t size, int *f
 		return NULL;
 	}
 
+	mem_set_dump(ptr, size, false);
 	*fd = mfd;
 	return ptr;
 }
@@ -1736,6 +1744,7 @@ vhost_user_set_inflight_fd(struct virtio_net **pdev,
 		dev->inflight_info->fd = -1;
 	}
 
+	mem_set_dump(addr, mmap_size, false);
 	dev->inflight_info->fd = fd;
 	dev->inflight_info->addr = addr;
 	dev->inflight_info->size = mmap_size;
@@ -2283,6 +2292,7 @@ vhost_user_set_log_base(struct virtio_net **pdev,
 	dev->log_addr = (uint64_t)(uintptr_t)addr;
 	dev->log_base = dev->log_addr + off;
 	dev->log_size = size;
+	mem_set_dump(addr, size, false);
 
 	for (i = 0; i < dev->nr_vring; i++) {
 		struct vhost_virtqueue *vq = dev->virtqueue[i];
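
As a rough sanity check outside of this patch, the effect of the advice
can be observed in /proc/<pid>/smaps: on kernels that expose the
VmFlags line, a region marked DONTDUMP carries the "dd" flag. A
hypothetical helper (assumes the Linux smaps format; it matches any
mapping of the process rather than a specific one):

#include <stdio.h>
#include <string.h>

/* Return 1 if any mapping of this process carries the "dd"
 * (do-not-dump) flag, 0 if none does, -1 on error.
 */
static int any_region_marked_dontdump(void)
{
	char line[256];
	FILE *f = fopen("/proc/self/smaps", "r");
	int found = 0;

	if (f == NULL)
		return -1;
	while (fgets(line, sizeof(line), f) != NULL) {
		if (strncmp(line, "VmFlags:", 8) == 0 &&
		    strstr(line, " dd") != NULL) {
			found = 1;
			break;
		}
	}
	fclose(f);
	return found;
}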