From patchwork Thu Jul 14 05:11:06 2022
X-Patchwork-Submitter: "Ma, WenwuX" <wenwux.ma@intel.com>
X-Patchwork-Id: 113956
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Wenwu Ma <wenwux.ma@intel.com>
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com, dev@dpdk.org
Cc: jiayu.hu@intel.com, yinan.wang@intel.com, xingguang.he@intel.com,
	weix.ling@intel.com, yuanx.wang@intel.com, Wenwu Ma <wenwux.ma@intel.com>,
	stable@dpdk.org
Subject: [PATCH] examples/vhost: fix use after free
Date: Thu, 14 Jul 2022 13:11:06 +0800
Message-Id: <20220714051106.1134222-1-wenwux.ma@intel.com>
X-Mailer: git-send-email 2.25.1
List-Id: DPDK patches and discussions

In async_enqueue_pkts(), the packets that fail to be enqueued are freed
before the function returns, but the caller may still retry those same
packets afterwards, which results in a use-after-free. Fix it by freeing
the failed packets in the caller, only after the retries are done.

Fixes: 1907ce4baec3 ("examples/vhost: fix retry logic on Rx path")
Cc: stable@dpdk.org

Signed-off-by: Wenwu Ma <wenwux.ma@intel.com>
Tested-by: Wei Ling <weix.ling@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
---
 examples/vhost/main.c | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 7e1666f42a..7956dc4f13 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -1073,8 +1073,13 @@ drain_vhost(struct vhost_dev *vdev)
 				__ATOMIC_SEQ_CST);
 	}
 
-	if (!dma_bind[vid2socketid[vdev->vid]].dmas[VIRTIO_RXQ].async_enabled)
+	if (!dma_bind[vid2socketid[vdev->vid]].dmas[VIRTIO_RXQ].async_enabled) {
 		free_pkts(m, nr_xmit);
+	} else {
+		uint16_t enqueue_fail = nr_xmit - ret;
+		if (enqueue_fail > 0)
+			free_pkts(&m[ret], enqueue_fail);
+	}
 }
 
 static __rte_always_inline void
@@ -1350,17 +1355,12 @@ async_enqueue_pkts(struct vhost_dev *dev, uint16_t queue_id,
 		struct rte_mbuf **pkts, uint32_t rx_count)
 {
 	uint16_t enqueue_count;
-	uint16_t enqueue_fail = 0;
 	uint16_t dma_id = dma_bind[vid2socketid[dev->vid]].dmas[VIRTIO_RXQ].dev_id;
 
 	complete_async_pkts(dev);
 	enqueue_count = rte_vhost_submit_enqueue_burst(dev->vid,
 				queue_id, pkts, rx_count, dma_id, 0);
 
-	enqueue_fail = rx_count - enqueue_count;
-	if (enqueue_fail)
-		free_pkts(&pkts[enqueue_count], enqueue_fail);
-
 	return enqueue_count;
 }
 
@@ -1405,8 +1405,13 @@ drain_eth_rx(struct vhost_dev *vdev)
 				__ATOMIC_SEQ_CST);
 	}
 
-	if (!dma_bind[vid2socketid[vdev->vid]].dmas[VIRTIO_RXQ].async_enabled)
+	if (!dma_bind[vid2socketid[vdev->vid]].dmas[VIRTIO_RXQ].async_enabled) {
 		free_pkts(pkts, rx_count);
+	} else {
+		uint16_t enqueue_fail = rx_count - enqueue_count;
+		if (enqueue_fail > 0)
+			free_pkts(&pkts[enqueue_count], enqueue_fail);
+	}
 }
 
 uint16_t
 async_dequeue_pkts(struct vhost_dev *dev, uint16_t queue_id,
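
The ownership rule behind the fix: whoever retries a burst must keep the
failed mbufs alive until it has given up on them, so only the caller may
free them. Below is a minimal standalone C sketch of that ordering, not
the DPDK code; submit_burst() and struct pkt are hypothetical stand-ins
for rte_vhost_submit_enqueue_burst() and rte_mbuf.

/*
 * Sketch of the fixed flow: retry the not-yet-accepted tail first,
 * free whatever is still left only after the retries are exhausted.
 */
#include <stdio.h>
#include <stdlib.h>

#define BURST 4
#define MAX_RETRIES 4

struct pkt { char payload[64]; };	/* stand-in for struct rte_mbuf */

/* Pretend enqueue: accepts half of what it is given, rounded down. */
static unsigned int submit_burst(struct pkt **pkts, unsigned int count)
{
	(void)pkts;
	return count / 2;
}

int main(void)
{
	struct pkt *pkts[BURST];
	unsigned int i, done, retries;

	for (i = 0; i < BURST; i++)
		pkts[i] = malloc(sizeof(*pkts[i]));

	/* 1. First attempt, then retry only the rejected tail. */
	done = submit_burst(pkts, BURST);
	for (retries = 0; done < BURST && retries < MAX_RETRIES; retries++)
		done += submit_burst(&pkts[done], BURST - done);

	/* 2. Only now is it safe to drop the packets that never made it. */
	if (done < BURST) {
		printf("freeing %u packets that failed after retry\n",
		       BURST - done);
		for (i = done; i < BURST; i++)
			free(pkts[i]);
	}

	/*
	 * Nothing consumes the accepted packets in this toy model, so
	 * free them too to keep the example leak-free.
	 */
	for (i = 0; i < done; i++)
		free(pkts[i]);

	return 0;
}

Freeing inside the submit path, as the old async_enqueue_pkts() did,
would hand the retry loop dangling pointers, which is exactly the
use-after-free this patch removes.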