From patchwork Thu Jul 7 06:55:13 2022
X-Patchwork-Submitter: "Ding, Xuan" <xuan.ding@intel.com>
X-Patchwork-Id: 113772
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: xuan.ding@intel.com
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, xingguang.he@intel.com,
 yvonnex.yang@intel.com, cheng1.jiang@intel.com, Xuan Ding <xuan.ding@intel.com>
Subject: [PATCH] vhost: fix unnecessary dirty page logging
Date: Thu, 7 Jul 2022 06:55:13 +0000
Message-Id: <20220707065513.66458-1-xuan.ding@intel.com>

From: Xuan Ding <xuan.ding@intel.com>

Dirty page logging is only required in the vhost enqueue direction for
live migration: only the enqueue path writes into guest memory, while
the dequeue path merely reads from it. This patch removes the
unnecessary dirty page logging in the vhost dequeue direction, which
otherwise causes a performance drop. Some if-else branches are also
restructured to improve performance.
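For background, the dirty log is conceptually a bitmap over guest
physical memory: whenever the backend writes into a guest page, the
corresponding bit is set so that live migration knows to re-send that
page. A minimal sketch of the idea follows (illustrative only: the
helper name log_write_range and the flat-bitmap layout are
simplifications, not the actual vhost_log_cache_write_iova()
implementation, which additionally batches updates in a per-queue
log cache):

#include <stdint.h>

#define LOG_PAGE_SHIFT 12	/* dirty log granularity: 4 KiB pages */

/*
 * Mark the guest-physical range [iova, iova + len) dirty, assuming
 * len > 0. Bit N set means guest page N was written and must be
 * re-sent to the destination host during live migration.
 */
static inline void
log_write_range(uint8_t *log_base, uint64_t iova, uint64_t len)
{
	uint64_t page = iova >> LOG_PAGE_SHIFT;
	uint64_t last = (iova + len - 1) >> LOG_PAGE_SHIFT;

	for (; page <= last; page++)
		/* Atomic OR (GCC builtin): virtqueues may log concurrently. */
		__atomic_fetch_or(&log_base[page / 8],
				(uint8_t)(1 << (page % 8)), __ATOMIC_RELAXED);
}

Since only writes can dirty a page, only the enqueue (host-to-guest)
path ever needs such a call; the dequeue path copies out of guest
memory, so the logging removed below was pure overhead.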
Fixes: 6d823bb302c7 ("vhost: prepare sync for descriptor to mbuf refactoring")
Fixes: b6eee3e83402 ("vhost: fix sync dequeue offload")

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Reviewed-by: Jiayu Hu <jiayu.hu@intel.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Tested-by: Xingguang He <xingguang.he@intel.com>
---
 lib/vhost/virtio_net.c | 31 +++++++++++++------------------
 1 file changed, 13 insertions(+), 18 deletions(-)

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index e842c35fef..12b7fbe7f9 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -1113,27 +1113,27 @@ sync_fill_seg(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			rte_memcpy((void *)((uintptr_t)(buf_addr)),
 				rte_pktmbuf_mtod_offset(m, void *, mbuf_offset),
 				cpy_len);
+			vhost_log_cache_write_iova(dev, vq, buf_iova, cpy_len);
+			PRINT_PACKET(dev, (uintptr_t)(buf_addr), cpy_len, 0);
 		} else {
 			rte_memcpy(rte_pktmbuf_mtod_offset(m, void *, mbuf_offset),
 				(void *)((uintptr_t)(buf_addr)),
 				cpy_len);
 		}
-		vhost_log_cache_write_iova(dev, vq, buf_iova, cpy_len);
-		PRINT_PACKET(dev, (uintptr_t)(buf_addr), cpy_len, 0);
 	} else {
 		if (to_desc) {
 			batch_copy[vq->batch_copy_nb_elems].dst =
 				(void *)((uintptr_t)(buf_addr));
 			batch_copy[vq->batch_copy_nb_elems].src =
 				rte_pktmbuf_mtod_offset(m, void *, mbuf_offset);
+			batch_copy[vq->batch_copy_nb_elems].log_addr = buf_iova;
+			batch_copy[vq->batch_copy_nb_elems].len = cpy_len;
 		} else {
 			batch_copy[vq->batch_copy_nb_elems].dst =
 				rte_pktmbuf_mtod_offset(m, void *, mbuf_offset);
 			batch_copy[vq->batch_copy_nb_elems].src =
 				(void *)((uintptr_t)(buf_addr));
 		}
-		batch_copy[vq->batch_copy_nb_elems].log_addr = buf_iova;
-		batch_copy[vq->batch_copy_nb_elems].len = cpy_len;
 		vq->batch_copy_nb_elems++;
 	}
 }
@@ -2739,18 +2739,14 @@ desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			if (async_fill_seg(dev, vq, cur, mbuf_offset,
 					   buf_iova + buf_offset, cpy_len, false) < 0)
 				goto error;
+		} else if (likely(hdr && cur == m)) {
+			rte_memcpy(rte_pktmbuf_mtod_offset(cur, void *, mbuf_offset),
+				(void *)((uintptr_t)(buf_addr + buf_offset)),
+				cpy_len);
 		} else {
-			if (hdr && cur == m) {
-				rte_memcpy(rte_pktmbuf_mtod_offset(cur, void *, mbuf_offset),
-					(void *)((uintptr_t)(buf_addr + buf_offset)),
-					cpy_len);
-				vhost_log_cache_write_iova(dev, vq, buf_iova + buf_offset, cpy_len);
-				PRINT_PACKET(dev, (uintptr_t)(buf_addr + buf_offset), cpy_len, 0);
-			} else {
-				sync_fill_seg(dev, vq, cur, mbuf_offset,
-					buf_addr + buf_offset,
-					buf_iova + buf_offset, cpy_len, false);
-			}
+			sync_fill_seg(dev, vq, cur, mbuf_offset,
+				buf_addr + buf_offset,
+				buf_iova + buf_offset, cpy_len, false);
 		}
 
 		mbuf_avail  -= cpy_len;
@@ -2804,9 +2800,8 @@ desc_to_mbuf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		async_iter_finalize(async);
 		if (hdr)
 			pkts_info[slot_idx].nethdr = *hdr;
-	} else {
-		if (hdr)
-			vhost_dequeue_offload(dev, hdr, m, legacy_ol_flags);
+	} else if (hdr) {
+		vhost_dequeue_offload(dev, hdr, m, legacy_ol_flags);
 	}
 
 	return 0;
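A note on the restructured branches: folding the nested if into
"else if (likely(hdr && cur == m))" both removes a branch level and
adds a prediction hint. In DPDK, likely()/unlikely() are thin
wrappers around the GCC builtin:

/* from lib/eal/include/rte_branch_prediction.h */
#define likely(x)	__builtin_expect(!!(x), 1)
#define unlikely(x)	__builtin_expect(!!(x), 0)

The hint tells the compiler to lay out the expected case here, the
first mbuf segment carrying the virtio-net header, as the
fall-through path.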