From patchwork Thu Oct 29 23:51:53 2015
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Jianfeng Tan
X-Patchwork-Id: 8321
From: Jianfeng Tan
To: dev@dpdk.org
Date: Fri, 30 Oct 2015 07:51:53 +0800
Message-Id: <1446162713-130100-1-git-send-email-jianfeng.tan@intel.com>
X-Mailer: git-send-email 2.1.4
Subject: [dpdk-dev] [PATCH] vhost: fix mmap failure as len not aligned with hugepage size
List-Id: patches and discussions about DPDK

This patch fixes a bug seen on older Linux kernels: mmap() fails when the
requested length is not aligned with the hugepage size. Round the mapping
length up to the hugepage size (the block size reported by fstat() on the
region fd) before calling mmap().

Signed-off-by: Jianfeng Tan
Acked-by: Changchun Ouyang

---
 lib/librte_vhost/vhost_user/virtio-net-user.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/lib/librte_vhost/vhost_user/virtio-net-user.c b/lib/librte_vhost/vhost_user/virtio-net-user.c
index a998ad8..641561c 100644
--- a/lib/librte_vhost/vhost_user/virtio-net-user.c
+++ b/lib/librte_vhost/vhost_user/virtio-net-user.c
@@ -147,6 +147,10 @@ user_set_mem_table(struct vhost_device_ctx ctx, struct VhostUserMsg *pmsg)
 		/* This is ugly */
 		mapped_size = memory.regions[idx].memory_size +
 			memory.regions[idx].mmap_offset;
+
+		alignment = get_blk_size(pmsg->fds[idx]);
+		mapped_size = RTE_ALIGN_CEIL(mapped_size, alignment);
+
 		mapped_address = (uint64_t)(uintptr_t)mmap(NULL,
 			mapped_size,
 			PROT_READ | PROT_WRITE, MAP_SHARED,
@@ -154,9 +158,11 @@ user_set_mem_table(struct vhost_device_ctx ctx, struct VhostUserMsg *pmsg)
 			0);
 
 		RTE_LOG(INFO, VHOST_CONFIG,
-			"mapped region %d fd:%d to %p sz:0x%"PRIx64" off:0x%"PRIx64"\n",
+			"mapped region %d fd:%d to:%p sz:0x%"PRIx64" "
+			"off:0x%"PRIx64" align:0x%"PRIx64"\n",
 			idx, pmsg->fds[idx], (void *)(uintptr_t)mapped_address,
-			mapped_size, memory.regions[idx].mmap_offset);
+			mapped_size, memory.regions[idx].mmap_offset,
+			alignment);
 
 		if (mapped_address == (uint64_t)(uintptr_t)MAP_FAILED) {
 			RTE_LOG(ERR, VHOST_CONFIG,
@@ -166,7 +172,7 @@ user_set_mem_table(struct vhost_device_ctx ctx, struct VhostUserMsg *pmsg)
 
 		pregion_orig[idx].mapped_address = mapped_address;
 		pregion_orig[idx].mapped_size = mapped_size;
-		pregion_orig[idx].blksz = get_blk_size(pmsg->fds[idx]);
+		pregion_orig[idx].blksz = alignment;
 		pregion_orig[idx].fd = pmsg->fds[idx];
 
 		mapped_address += memory.regions[idx].mmap_offset;
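
For context (not part of the patch), here is a minimal standalone sketch of the
alignment technique used above: query the hugepage size backing the fd via
fstat() and round the mmap() length up to that size before mapping.
get_blk_size() mirrors the helper already present in virtio-net-user.c;
map_region() and ALIGN_CEIL() are illustrative stand-ins, not the DPDK code
itself.

#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>

/* Hugepage (block) size of the file backing 'fd'; for a hugetlbfs file,
 * st_blksize is the hugepage size.  Returns 0 on error. */
static uint64_t
get_blk_size(int fd)
{
	struct stat st;

	if (fstat(fd, &st) == -1)
		return 0;
	return (uint64_t)st.st_blksize;
}

/* Round 'v' up to a multiple of the power-of-two 'align';
 * equivalent in effect to DPDK's RTE_ALIGN_CEIL(). */
#define ALIGN_CEIL(v, align) (((v) + (align) - 1) & ~((uint64_t)(align) - 1))

/* Hypothetical helper: map a guest memory region of 'size' bytes that starts
 * 'offset' bytes into the shared fd.  Older kernels reject a hugetlbfs mmap()
 * whose length is not a multiple of the hugepage size, hence the rounding of
 * mapped_size before the call. */
static void *
map_region(int fd, uint64_t size, uint64_t offset)
{
	uint64_t alignment = get_blk_size(fd);
	uint64_t mapped_size;
	void *addr;

	if (alignment == 0)
		return MAP_FAILED;
	mapped_size = ALIGN_CEIL(size + offset, alignment);

	addr = mmap(NULL, mapped_size, PROT_READ | PROT_WRITE,
		    MAP_SHARED, fd, 0);
	if (addr == MAP_FAILED)
		perror("mmap");
	return addr;
}

The caller would then add 'offset' to the returned address to reach the start
of the guest memory inside the mapping, just as user_set_mem_table() adds
mmap_offset after a successful map.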