From patchwork Fri Nov 1 08:54:09 2019
X-Patchwork-Submitter: "Hu, Jiayu"
X-Patchwork-Id: 62298
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Jiayu Hu
To: dev@dpdk.org
Cc: tiwei.bie@intel.com, maxime.coquelin@redhat.com, zhihong.wang@intel.com,
 bruce.richardson@intel.com, Jiayu Hu
Date: Fri, 1 Nov 2019 04:54:09 -0400
Message-Id: <1572598450-245091-2-git-send-email-jiayu.hu@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1572598450-245091-1-git-send-email-jiayu.hu@intel.com>
References: <1569507973-247570-1-git-send-email-jiayu.hu@intel.com>
 <1572598450-245091-1-git-send-email-jiayu.hu@intel.com>
Subject: [dpdk-dev] [RFC v2 1/2] vhost: populate guest memory for DMA-accelerated vhost-user

DMA engines, like I/OAT, are efficient at moving large blocks of data
within memory. Offloading large copies on the vhost side to a DMA engine
can save precious CPU cycles and improve vhost performance. However,
using a DMA engine requires the guest's memory to be populated.

This patch enables DMA-accelerated vhost-user to populate the guest's
memory.

Signed-off-by: Jiayu Hu
---
 lib/librte_vhost/rte_vhost.h  |  1 +
 lib/librte_vhost/socket.c     | 11 +++++++++++
 lib/librte_vhost/vhost.h      |  2 ++
 lib/librte_vhost/vhost_user.c |  3 ++-
 4 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/lib/librte_vhost/rte_vhost.h b/lib/librte_vhost/rte_vhost.h
index 7b5dc87..7716939 100644
--- a/lib/librte_vhost/rte_vhost.h
+++ b/lib/librte_vhost/rte_vhost.h
@@ -34,6 +34,7 @@ extern "C" {
 #define RTE_VHOST_USER_EXTBUF_SUPPORT	(1ULL << 5)
 /* support only linear buffers (no chained mbufs) */
 #define RTE_VHOST_USER_LINEARBUF_SUPPORT	(1ULL << 6)
+#define RTE_VHOST_USER_DMA_COPY	(1ULL << 7)
 
 /** Protocol features. */
 #ifndef VHOST_USER_PROTOCOL_F_MQ
diff --git a/lib/librte_vhost/socket.c b/lib/librte_vhost/socket.c
index a34bc7f..9db6f6b 100644
--- a/lib/librte_vhost/socket.c
+++ b/lib/librte_vhost/socket.c
@@ -62,6 +62,8 @@ struct vhost_user_socket {
 	 */
 	int vdpa_dev_id;
 
+	bool dma_enabled;
+
 	struct vhost_device_ops const *notify_ops;
 };
 
@@ -240,6 +242,13 @@ vhost_user_add_connection(int fd, struct vhost_user_socket *vsocket)
 	if (vsocket->linearbuf)
 		vhost_enable_linearbuf(vid);
 
+	if (vsocket->dma_enabled) {
+		struct virtio_net *dev;
+
+		dev = get_device(vid);
+		dev->dma_enabled = true;
+	}
+
 	RTE_LOG(INFO, VHOST_CONFIG, "new device, handle is %d\n", vid);
 
 	if (vsocket->notify_ops->new_connection) {
@@ -889,6 +898,8 @@ rte_vhost_driver_register(const char *path, uint64_t flags)
 		goto out_mutex;
 	}
 
+	vsocket->dma_enabled = flags & RTE_VHOST_USER_DMA_COPY;
+
 	/*
 	 * Set the supported features correctly for the builtin vhost-user
 	 * net driver.
diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 9f11b28..b61a790 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -383,6 +383,8 @@ struct virtio_net {
 	 */
 	int vdpa_dev_id;
 
+	bool dma_enabled;
+
 	/* context data for the external message handlers */
 	void *extern_data;
 	/* pre and post vhost user message handlers for the device */
diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
index 2a9fa7c..12722b9 100644
--- a/lib/librte_vhost/vhost_user.c
+++ b/lib/librte_vhost/vhost_user.c
@@ -1067,7 +1067,8 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *msg,
 		}
 		mmap_size = RTE_ALIGN_CEIL(mmap_size, alignment);
 
-		populate = (dev->dequeue_zero_copy) ? MAP_POPULATE : 0;
+		populate = (dev->dequeue_zero_copy || dev->dma_enabled) ?
+			MAP_POPULATE : 0;
 		mmap_addr = mmap(NULL, mmap_size, PROT_READ | PROT_WRITE,
 				 MAP_SHARED | populate, fd, 0);
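
For reviewers, here is a minimal sketch (not part of the patch) of how an
application would opt in to the new flag. The socket path and function name
are hypothetical; only rte_vhost_driver_register(), the callback-register and
start calls from the existing vhost-user API are used, and error handling is
reduced to early returns:

#include <rte_vhost.h>

/* Sketch: register a vhost-user socket with DMA copy enabled, so that
 * vhost_user_set_mem_table() mmap()s guest memory with MAP_POPULATE.
 * The path below is a placeholder chosen for illustration only.
 */
static int
register_dma_vhost_socket(const struct vhost_device_ops *ops)
{
	const char *path = "/tmp/vhost-user-dma.sock";

	if (rte_vhost_driver_register(path, RTE_VHOST_USER_DMA_COPY) < 0)
		return -1;

	if (rte_vhost_driver_callback_register(path, ops) < 0)
		return -1;

	return rte_vhost_driver_start(path);
}

Without RTE_VHOST_USER_DMA_COPY (and without dequeue zero-copy), guest pages
are faulted in lazily on first access, which is the behavior this flag changes
for DMA users.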