From patchwork Thu Jul 8 10:44:32 2021
X-Patchwork-Submitter: "Jiang, Cheng1"
X-Patchwork-Id: 95551
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Cheng Jiang <cheng1.jiang@intel.com>
To: maxime.coquelin@redhat.com, Chenbo.Xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, yvonnex.yang@intel.com,
	Cheng Jiang <cheng1.jiang@intel.com>, stable@dpdk.org
Date: Thu, 8 Jul 2021 10:44:32 +0000
Message-Id: <20210708104432.46275-1-cheng1.jiang@intel.com>
Subject: [dpdk-dev] [PATCH] vhost: fix index overflow issue in async vhost

We introduced some new indexes in async vhost.
If these indexes are not managed carefully, they will eventually
overflow and lead to errors. This patch checks the indexes after each
update and keeps them within a valid range.

Fixes: 873e8dad6f49 ("vhost: support packed ring in async datapath")
Cc: stable@dpdk.org

Signed-off-by: Cheng Jiang <cheng1.jiang@intel.com>
---
 lib/vhost/virtio_net.c | 22 +++++++++++++++-------
 1 file changed, 15 insertions(+), 7 deletions(-)

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index f4a2c88d8b..61cb5a126c 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -1614,6 +1614,7 @@ store_dma_desc_info_packed(struct vring_used_elem_packed *s_ring,

 	if (d_idx + count <= ring_size) {
 		rte_memcpy(d_ring + d_idx, s_ring + s_idx, count * elem_size);
+
 	} else {
 		uint16_t size = ring_size - d_idx;

@@ -2036,7 +2037,7 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev,

 		slot_idx = (vq->async_pkts_idx + num_async_pkts) % vq->size;
 		if (it_pool[it_idx].count) {
-			uint16_t from, to;
+			uint16_t from;

 			async_descs_idx += num_descs;
 			async_fill_desc(&tdes[pkt_burst_idx++],
@@ -2055,11 +2056,13 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev,
 			 * descriptors.
 			 */
 			from = vq->shadow_used_idx - num_buffers;
-			to = vq->async_buffer_idx_packed % vq->size;
 			store_dma_desc_info_packed(vq->shadow_used_packed,
-					vq->async_buffers_packed, vq->size, from, to, num_buffers);
+					vq->async_buffers_packed, vq->size, from,
+					vq->async_buffer_idx_packed, num_buffers);

 			vq->async_buffer_idx_packed += num_buffers;
+			if (vq->async_buffer_idx_packed >= vq->size)
+				vq->async_buffer_idx_packed -= vq->size;
 			vq->shadow_used_idx -= num_buffers;
 		} else {
 			comp_pkts[num_done_pkts++] = pkts[pkt_idx];
@@ -2112,6 +2115,8 @@ virtio_dev_rx_async_submit_packed(struct virtio_net *dev,
 		dma_error_handler_packed(vq, async_descs, async_descs_idx, slot_idx, pkt_err,
 					&pkt_idx, &num_async_pkts, &num_done_pkts);
 	vq->async_pkts_idx += num_async_pkts;
+	if (vq->async_pkts_idx >= vq->size)
+		vq->async_pkts_idx -= vq->size;
 	*comp_count = num_done_pkts;

 	if (likely(vq->shadow_used_idx)) {
@@ -2160,7 +2165,7 @@ write_back_completed_descs_packed(struct vhost_virtqueue *vq,
 	uint16_t from, to;

 	do {
-		from = vq->last_async_buffer_idx_packed % vq->size;
+		from = vq->last_async_buffer_idx_packed;
 		to = (from + nr_left) % vq->size;
 		if (to > from) {
 			vhost_update_used_packed(vq, vq->async_buffers_packed + from, to - from);
@@ -2169,7 +2174,7 @@ write_back_completed_descs_packed(struct vhost_virtqueue *vq,
 		} else {
 			vhost_update_used_packed(vq, vq->async_buffers_packed + from,
 				vq->size - from);
-			vq->last_async_buffer_idx_packed += vq->size - from;
+			vq->last_async_buffer_idx_packed = 0;
 			nr_left -= vq->size - from;
 		}
 	} while (nr_left > 0);
@@ -2252,10 +2257,13 @@ uint16_t rte_vhost_poll_enqueue_completed(int vid, uint16_t queue_id,
 			vhost_vring_call_split(dev, vq);
 		}
 	} else {
-		if (vq_is_packed(dev))
+		if (vq_is_packed(dev)) {
 			vq->last_async_buffer_idx_packed += n_buffers;
-		else
+			if (vq->last_async_buffer_idx_packed >= vq->size)
+				vq->last_async_buffer_idx_packed -= vq->size;
+		} else {
 			vq->last_async_desc_idx_split += n_descs;
+		}
 	}

 done: