From patchwork Fri Feb 26 07:33:21 2021
X-Patchwork-Submitter: Marvin Liu
X-Patchwork-Id: 88262
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Marvin Liu
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, Marvin Liu, stable@dpdk.org
Date: Fri, 26 Feb 2021 15:33:21 +0800
Message-Id: <20210226073321.66996-1-yong.liu@intel.com>
X-Mailer: git-send-email 2.17.1
Subject: [dpdk-dev] [PATCH] vhost: fix potential buffer overflow

In the vhost datapath, a descriptor's length is typically used in two
consecutive operations: first for address translation, then for the memory
copy between guest and host. The interval between these two steps gives a
malicious guest a window in which it can change the descriptor length after
vhost has calculated the buffer size, which may lead to a buffer overflow on
the vhost side. This potential risk can be eliminated by accessing the
descriptor length only once.
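For illustration, here is a minimal C sketch of the pattern the fix relies
on; it is not part of the patch, and the type and helper names (shared_desc,
copy_desc_racy, copy_desc_once) are hypothetical. The descriptor lives in
memory shared with the guest, so its length must be snapshotted into a local
variable and that single value used for both the size check and the copy.

#include <stdint.h>
#include <string.h>

/* Descriptor layout shared with the guest (hypothetical, for illustration). */
struct shared_desc {
	uint64_t addr;
	uint32_t len;
};

/*
 * Vulnerable pattern: desc->len is read twice. A malicious guest can grow
 * len between the bounds check and the copy and overflow host_buf.
 */
static int copy_desc_racy(uint8_t *host_buf, size_t host_len,
			  volatile struct shared_desc *desc,
			  const uint8_t *guest_mem)
{
	if (desc->len > host_len)				/* first read */
		return -1;
	memcpy(host_buf, guest_mem + desc->addr, desc->len);	/* second read */
	return 0;
}

/*
 * Fixed pattern: read the length exactly once and use the snapshot for both
 * the check and the copy, mirroring the dlen local introduced by the patch.
 */
static int copy_desc_once(uint8_t *host_buf, size_t host_len,
			  volatile struct shared_desc *desc,
			  const uint8_t *guest_mem)
{
	uint32_t dlen = desc->len;				/* single read */

	if (dlen > host_len)
		return -1;
	memcpy(host_buf, guest_mem + desc->addr, dlen);
	return 0;
}

The dlen locals added below apply the same idea: the value used to grow the
accumulated buffer length is guaranteed to be the value that was mapped.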
Fixes: 1be4ebb1c464 ("vhost: support indirect descriptor in mergeable Rx")
Fixes: 2f3225a7d69b ("vhost: add vector filling support for packed ring")
Fixes: 75ed51697820 ("vhost: add packed ring batch dequeue")

Signed-off-by: Marvin Liu
Cc: stable@dpdk.org
Reviewed-by: Maxime Coquelin

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 583bf379c6..0a7d008a91 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -548,10 +548,11 @@ fill_vec_buf_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			return -1;
 		}
 
-		len += descs[idx].len;
+		dlen = descs[idx].len;
+		len += dlen;
 
 		if (unlikely(map_one_desc(dev, vq, buf_vec, &vec_id,
-						descs[idx].addr, descs[idx].len,
+						descs[idx].addr, dlen,
 						perm))) {
 			free_ind_table(idesc);
 			return -1;
@@ -668,9 +669,10 @@ fill_vec_buf_packed_indirect(struct virtio_net *dev,
 			return -1;
 		}
 
-		*len += descs[i].len;
+		dlen = descs[i].len;
+		*len += dlen;
 		if (unlikely(map_one_desc(dev, vq, buf_vec, &vec_id,
-						descs[i].addr, descs[i].len,
+						descs[i].addr, dlen,
 						perm)))
 			return -1;
 	}
@@ -691,6 +693,7 @@ fill_vec_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	bool wrap_counter = vq->avail_wrap_counter;
 	struct vring_packed_desc *descs = vq->desc_packed;
 	uint16_t vec_id = *vec_idx;
+	uint64_t dlen;
 
 	if (avail_idx < vq->last_avail_idx)
 		wrap_counter ^= 1;
@@ -723,11 +726,12 @@ fill_vec_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 							len, perm) < 0))
 				return -1;
 		} else {
-			*len += descs[avail_idx].len;
+			dlen = descs[avail_idx].len;
+			*len += dlen;
 
 			if (unlikely(map_one_desc(dev, vq, buf_vec, &vec_id,
 							descs[avail_idx].addr,
-							descs[avail_idx].len,
+							dlen,
 							perm)))
 				return -1;
 		}
@@ -2314,7 +2318,7 @@ vhost_reserve_avail_batch_packed(struct virtio_net *dev,
 	}
 
 	vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
-		pkts[i]->pkt_len = descs[avail_idx + i].len - buf_offset;
+		pkts[i]->pkt_len = lens[i] - buf_offset;
 		pkts[i]->data_len = pkts[i]->pkt_len;
 		ids[i] = descs[avail_idx + i].id;
 	}
From patchwork Thu Mar 25 03:01:38 2021
X-Patchwork-Submitter: Marvin Liu
X-Patchwork-Id: 89782
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Marvin Liu
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, Marvin Liu, stable@dpdk.org
Date: Thu, 25 Mar 2021 11:01:38 +0800
Message-Id: <20210325030139.2486-2-yong.liu@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210325030139.2486-1-yong.liu@intel.com>
References: <20210226073321.66996-1-yong.liu@intel.com> <20210325030139.2486-1-yong.liu@intel.com>
Subject: [dpdk-dev] [PATCH 2/3] vhost: fix packed ring potential buffer overflow

As with the split ring, accessing the descriptor length multiple times
opens a window of potential risk. Accessing the descriptor length only
once eliminates this risk.

Fixes: 2f3225a7d69b ("vhost: add vector filling support for packed ring")
Cc: stable@dpdk.org

Signed-off-by: Marvin Liu
Reviewed-by: Maxime Coquelin

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 576a0a20c0..de43686522 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -669,9 +669,10 @@ fill_vec_buf_packed_indirect(struct virtio_net *dev,
 			return -1;
 		}
 
-		*len += descs[i].len;
+		dlen = descs[i].len;
+		*len += dlen;
 		if (unlikely(map_one_desc(dev, vq, buf_vec, &vec_id,
-						descs[i].addr, descs[i].len,
+						descs[i].addr, dlen,
 						perm)))
 			return -1;
 	}
@@ -692,6 +693,7 @@ fill_vec_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	bool wrap_counter = vq->avail_wrap_counter;
 	struct vring_packed_desc *descs = vq->desc_packed;
 	uint16_t vec_id = *vec_idx;
+	uint64_t dlen;
 
 	if (avail_idx < vq->last_avail_idx)
 		wrap_counter ^= 1;
@@ -724,11 +726,12 @@ fill_vec_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 							len, perm) < 0))
 				return -1;
 		} else {
-			*len += descs[avail_idx].len;
+			dlen = descs[avail_idx].len;
+			*len += dlen;
 
 			if (unlikely(map_one_desc(dev, vq, buf_vec, &vec_id,
 							descs[avail_idx].addr,
-							descs[avail_idx].len,
+							dlen,
 							perm)))
 				return -1;
 		}
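For illustration, a condensed C sketch (not part of the patch) of the
fill-loop shape this packed ring fix targets; struct pdesc and map_one() are
hypothetical stand-ins for the real vhost types and helpers. Each iteration
snapshots the guest-controlled length into dlen and uses that same value
both to grow the accumulated size and to map the region, so the size the
caller later trusts cannot diverge from what was actually mapped.

#include <stdint.h>

struct pdesc {			/* packed descriptor, shared with the guest */
	uint64_t addr;
	uint32_t len;
};

/* Hypothetical mapping helper: would register [addr, addr + len) for the copy. */
static int map_one(uint64_t addr, uint64_t len)
{
	(void)addr;
	(void)len;
	return 0;
}

/*
 * Accumulate the total buffer size across a chain of descriptors.
 * Each length is read exactly once, so *total and the mapped regions are
 * always derived from the same snapshot.
 */
static int fill_chain(volatile struct pdesc *descs, uint16_t count,
		      uint64_t *total)
{
	uint64_t dlen;
	uint16_t i;

	for (i = 0; i < count; i++) {
		dlen = descs[i].len;	/* single read of guest memory */
		*total += dlen;

		if (map_one(descs[i].addr, dlen) < 0)
			return -1;
	}
	return 0;
}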
From patchwork Thu Mar 25 03:01:39 2021
X-Patchwork-Submitter: Marvin Liu
X-Patchwork-Id: 89783
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Marvin Liu
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, Marvin Liu, stable@dpdk.org
Date: Thu, 25 Mar 2021 11:01:39 +0800
Message-Id: <20210325030139.2486-3-yong.liu@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210325030139.2486-1-yong.liu@intel.com>
References: <20210226073321.66996-1-yong.liu@intel.com> <20210325030139.2486-1-yong.liu@intel.com>
Subject: [dpdk-dev] [PATCH 3/3] vhost: fix potential buffer overflow when batch dequeue

As with the single-packet dequeue path, accessing the descriptor length
multiple times opens a window of potential risk. Accessing the descriptor
length only once eliminates this risk.

Fixes: 75ed51697820 ("vhost: add packed ring batch dequeue")
Cc: stable@dpdk.org

Signed-off-by: Marvin Liu
Reviewed-by: Maxime Coquelin

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index de43686522..0a7d008a91 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -2318,7 +2318,7 @@ vhost_reserve_avail_batch_packed(struct virtio_net *dev,
 	}
 
 	vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
-		pkts[i]->pkt_len = descs[avail_idx + i].len - buf_offset;
+		pkts[i]->pkt_len = lens[i] - buf_offset;
 		pkts[i]->data_len = pkts[i]->pkt_len;
 		ids[i] = descs[avail_idx + i].id;
 	}
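For illustration, a small C sketch (not part of the patch) of the batch
dequeue idea behind this change; BATCH, struct desc, struct pkt and
batch_dequeue() are hypothetical stand-ins for the real vhost types. The
descriptor lengths are copied into a local lens[] array once, validated
against the receive buffer size, and the very same snapshot is then used to
set pkt_len, so a concurrent guest update of the descriptor length after
validation cannot inflate the packet length.

#include <stdint.h>

#define BATCH 4

struct desc {			/* shared with the guest */
	uint64_t addr;
	uint32_t len;
	uint16_t id;
};

struct pkt {
	uint32_t pkt_len;
	uint32_t data_len;
};

/*
 * Returns 0 if all BATCH descriptors fit into buffers of buf_size bytes
 * (after a buf_offset header) and fills pkts from the same length snapshot.
 */
static int batch_dequeue(volatile struct desc *descs, uint16_t avail_idx,
			 struct pkt *pkts[BATCH],
			 uint32_t buf_offset, uint32_t buf_size)
{
	uint32_t lens[BATCH];
	int i;

	for (i = 0; i < BATCH; i++)
		lens[i] = descs[avail_idx + i].len;	/* one read per desc */

	for (i = 0; i < BATCH; i++) {
		if (lens[i] < buf_offset || lens[i] - buf_offset > buf_size)
			return -1;			/* does not fit */
	}

	for (i = 0; i < BATCH; i++) {
		/* reuse the validated snapshot, never re-read descs[].len */
		pkts[i]->pkt_len = lens[i] - buf_offset;
		pkts[i]->data_len = pkts[i]->pkt_len;
	}
	return 0;
}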