From patchwork Thu Jul 14 08:18:39 2016
X-Patchwork-Submitter: Ilya Maximets
X-Patchwork-Id: 14829
From: Ilya Maximets
To: dev@dpdk.org, Huawei Xie, Yuanhan Liu, Rich Lane
Cc: Dyasly Sergey, Heetae Ahn, Jianfeng Tan, Stephen Hemminger,
 Thomas Monjalon, Ilya Maximets
Date: Thu, 14 Jul 2016 11:18:39 +0300
Message-id: <1468484319-26906-1-git-send-email-i.maximets@samsung.com>
X-Mailer: git-send-email 2.7.4
In-reply-to: <1463748604-27251-1-git-send-email-i.maximets@samsung.com>
References: <1463748604-27251-1-git-send-email-i.maximets@samsung.com>
Subject: [dpdk-dev] [PATCH v2] vhost: fix segfault on bad descriptor address

In the current implementation vhost crashes with a segmentation fault
if a malicious or buggy virtio application breaks the descriptor
addresses. Before commit 0823c1cb0a73 this crash was reproducible even
with a normal DPDK application that tries to change the number of
virtqueues dynamically inside the VM.

Fix that by checking the descriptor addresses before using them.

Also fix the error return value of 'copy_mbuf_to_desc_mergeable()'
from '-1' to '0', because the function returns an unsigned value that
means the number of used descriptors.

Signed-off-by: Ilya Maximets
---
Version 2:
* Rebased on top of current master.
* Host address is now checked in the mergeable case, because the needed
  refactoring is already done.
* Commit message changed, because the old issue with virtio reload was
  accidentally fixed by commit 0823c1cb0a73.

 lib/librte_vhost/vhost_rxtx.c | 28 +++++++++++++++++++++-------
 1 file changed, 21 insertions(+), 7 deletions(-)

diff --git a/lib/librte_vhost/vhost_rxtx.c b/lib/librte_vhost/vhost_rxtx.c
index 15ca956..31e8b58 100644
--- a/lib/librte_vhost/vhost_rxtx.c
+++ b/lib/librte_vhost/vhost_rxtx.c
@@ -147,10 +147,10 @@ copy_mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	struct virtio_net_hdr_mrg_rxbuf virtio_hdr = {{0, 0, 0, 0, 0, 0}, 0};
 
 	desc = &vq->desc[desc_idx];
-	if (unlikely(desc->len < dev->vhost_hlen))
+	desc_addr = gpa_to_vva(dev, desc->addr);
+	if (unlikely(desc->len < dev->vhost_hlen || !desc_addr))
 		return -1;
 
-	desc_addr = gpa_to_vva(dev, desc->addr);
 	rte_prefetch0((void *)(uintptr_t)desc_addr);
 
 	virtio_enqueue_offload(m, &virtio_hdr.hdr);
@@ -182,7 +182,10 @@ copy_mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 				return -1;
 
 			desc = &vq->desc[desc->next];
-			desc_addr = gpa_to_vva(dev, desc->addr);
+			desc_addr = gpa_to_vva(dev, desc->addr);
+			if (unlikely(!desc_addr))
+				return -1;
+
 			desc_offset = 0;
 			desc_avail  = desc->len;
 		}
@@ -387,10 +390,10 @@ copy_mbuf_to_desc_mergeable(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	LOG_DEBUG(VHOST_DATA, "(%d) current index %d | end index %d\n",
 		dev->vid, cur_idx, end_idx);
 
-	if (buf_vec[vec_idx].buf_len < dev->vhost_hlen)
-		return -1;
-
 	desc_addr = gpa_to_vva(dev, buf_vec[vec_idx].buf_addr);
+	if (buf_vec[vec_idx].buf_len < dev->vhost_hlen || !desc_addr)
+		return 0;
+
 	rte_prefetch0((void *)(uintptr_t)desc_addr);
 
 	virtio_hdr.num_buffers = end_idx - start_idx;
@@ -425,6 +428,8 @@ copy_mbuf_to_desc_mergeable(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			vec_idx++;
 			desc_addr = gpa_to_vva(dev, buf_vec[vec_idx].buf_addr);
+			if (unlikely(!desc_addr))
+				return 0;
 
 			/* Prefetch buffer address. */
 			rte_prefetch0((void *)(uintptr_t)desc_addr);
 
@@ -507,7 +512,7 @@ virtio_dev_merge_rx(struct virtio_net *dev, uint16_t queue_id,
 		*(volatile uint16_t *)&vq->used->idx += nr_used;
 		vhost_log_used_vring(dev, vq, offsetof(struct vring_used, idx),
 			sizeof(vq->used->idx));
-		vq->last_used_idx = end;
+		vq->last_used_idx += nr_used;
 	}
 
 	if (likely(pkt_idx)) {
@@ -688,6 +693,9 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		return -1;
 
 	desc_addr = gpa_to_vva(dev, desc->addr);
+	if (unlikely(!desc_addr))
+		return -1;
+
 	hdr = (struct virtio_net_hdr *)((uintptr_t)desc_addr);
 	rte_prefetch0(hdr);
 
@@ -701,6 +709,9 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		desc = &vq->desc[desc->next];
 
 		desc_addr = gpa_to_vva(dev, desc->addr);
+		if (unlikely(!desc_addr))
+			return -1;
+
 		rte_prefetch0((void *)(uintptr_t)desc_addr);
 
 		desc_offset = 0;
@@ -737,6 +748,9 @@ copy_desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			desc = &vq->desc[desc->next];
 
 			desc_addr = gpa_to_vva(dev, desc->addr);
+			if (unlikely(!desc_addr))
+				return -1;
+
 			rte_prefetch0((void *)(uintptr_t)desc_addr);
 
 			desc_offset = 0;
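
[Editor's note: the standalone sketch below is not part of the patch. It only
illustrates the validation pattern the fix applies: translate the guest
physical address first, then treat a zero result as an error before
prefetching or copying through it, so a broken descriptor fails the request
instead of crashing the vhost process. translate_gpa(), copy_from_desc(),
GUEST_BASE and guest_mem are hypothetical stand-ins for gpa_to_vva() and the
vhost copy routines, used only to keep the example self-contained.]

#include <stdint.h>
#include <string.h>
#include <stdio.h>

/* Toy stand-in for gpa_to_vva(): maps a single guest region starting at
 * GUEST_BASE onto a local buffer and returns 0 for anything outside it. */
#define GUEST_BASE 0x1000u
static uint8_t guest_mem[256];

static uintptr_t
translate_gpa(uint64_t gpa)
{
	if (gpa < GUEST_BASE || gpa >= GUEST_BASE + sizeof(guest_mem))
		return 0;
	return (uintptr_t)&guest_mem[gpa - GUEST_BASE];
}

/* The pattern the patch applies: translate, then check the result before
 * copying, so a bad descriptor address supplied by the guest is rejected
 * instead of being dereferenced as a NULL/invalid pointer. */
static int
copy_from_desc(uint64_t desc_gpa, uint32_t desc_len, void *dst, uint32_t len)
{
	uintptr_t desc_addr = translate_gpa(desc_gpa);

	if (desc_addr == 0 || desc_len < len)
		return -1;	/* drop the request, do not segfault */

	memcpy(dst, (const void *)desc_addr, len);
	return 0;
}

int
main(void)
{
	uint8_t buf[16];

	memset(guest_mem, 0xab, sizeof(guest_mem));
	printf("valid descriptor:  %d\n", copy_from_desc(GUEST_BASE, 64, buf, 16));
	printf("broken descriptor: %d\n", copy_from_desc(0xdead0000u, 64, buf, 16));
	return 0;
}

[The same reasoning drives the copy_mbuf_to_desc_mergeable() change in the
patch: its return type is unsigned and encodes the number of used
descriptors, so its error path must report 0 rather than -1.]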