From patchwork Tue Jul 23 10:01:20 2019
From: Anatoly Burakov
To: dev@dpdk.org
Cc: andrius.sirvys@intel.com, stable@dpdk.org
Date: Tue, 23 Jul 2019 11:01:20 +0100
Message-Id: <6ee9d8ddb5de3b2de880ad42c37b012888a6facd.1563876069.git.anatoly.burakov@intel.com>
Subject: [dpdk-dev] [PATCH] vfio: use contiguous mapping for IOVA as VA mode

When using IOVA as VA mode, there is no need to map segments page by
page. This normally isn't a problem, but it becomes one when attempting
to use DPDK in no-huge mode, where the VFIO subsystem simply runs out
of space to store mappings.

Fix this for x86 by triggering different callbacks based on whether
IOVA as VA mode is enabled.

Fixes: 73a639085938 ("vfio: allow to map other memory regions")
Cc: stable@dpdk.org

Signed-off-by: Anatoly Burakov
Tested-by: Andrius Sirvys
---
 lib/librte_eal/linux/eal/eal_vfio.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/lib/librte_eal/linux/eal/eal_vfio.c b/lib/librte_eal/linux/eal/eal_vfio.c
index ed04231b1..501c74f23 100644
--- a/lib/librte_eal/linux/eal/eal_vfio.c
+++ b/lib/librte_eal/linux/eal/eal_vfio.c
@@ -1231,6 +1231,19 @@ rte_vfio_get_group_num(const char *sysfs_base,
 	return 1;
 }
 
+static int
+type1_map_contig(const struct rte_memseg_list *msl, const struct rte_memseg *ms,
+		size_t len, void *arg)
+{
+	int *vfio_container_fd = arg;
+
+	if (msl->external)
+		return 0;
+
+	return vfio_type1_dma_mem_map(*vfio_container_fd, ms->addr_64, ms->iova,
+			len, 1);
+}
+
 static int
 type1_map(const struct rte_memseg_list *msl, const struct rte_memseg *ms,
 		void *arg)
@@ -1300,6 +1313,13 @@ vfio_type1_dma_mem_map(int vfio_container_fd, uint64_t vaddr, uint64_t iova,
 static int
 vfio_type1_dma_map(int vfio_container_fd)
 {
+	if (rte_eal_iova_mode() == RTE_IOVA_VA) {
+		/* with IOVA as VA mode, we can get away with mapping contiguous
+		 * chunks rather than going page-by-page.
+		 */
+		return rte_memseg_contig_walk(type1_map_contig,
+				&vfio_container_fd);
+	}
 	return rte_memseg_walk(type1_map, &vfio_container_fd);
 }
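
For context, here is a minimal, illustrative sketch (not part of the patch)
of why the contiguous walk helps: it counts how many DMA mappings a
page-by-page walk would create versus a VA-contiguous walk. It only uses the
public rte_memseg_walk()/rte_memseg_contig_walk() API that the patch relies
on; the program itself, its name and its counting logic are assumptions made
for illustration, not code from the tree.

/*
 * Illustrative sketch: count potential VFIO type1 DMA mappings under a
 * per-page walk vs. a VA-contiguous walk. The callback signatures match
 * rte_memseg_walk_t and rte_memseg_contig_walk_t as used in eal_vfio.c.
 */
#include <stdio.h>

#include <rte_eal.h>
#include <rte_memory.h>

static int
count_page(const struct rte_memseg_list *msl, const struct rte_memseg *ms,
		void *arg)
{
	unsigned int *cnt = arg;

	(void)msl;
	(void)ms;
	(*cnt)++; /* one VFIO_IOMMU_MAP_DMA per page-sized segment */
	return 0;
}

static int
count_contig(const struct rte_memseg_list *msl, const struct rte_memseg *ms,
		size_t len, void *arg)
{
	unsigned int *cnt = arg;

	(void)msl;
	(void)ms;
	(void)len;
	(*cnt)++; /* one VFIO_IOMMU_MAP_DMA per VA-contiguous chunk */
	return 0;
}

int
main(int argc, char **argv)
{
	unsigned int per_page = 0, per_chunk = 0;

	if (rte_eal_init(argc, argv) < 0)
		return 1;

	rte_memseg_walk(count_page, &per_page);
	rte_memseg_contig_walk(count_contig, &per_chunk);

	printf("IOVA mode: %s\n",
			rte_eal_iova_mode() == RTE_IOVA_VA ? "VA" : "PA/DC");
	printf("page-by-page mappings: %u, contiguous mappings: %u\n",
			per_page, per_chunk);

	rte_eal_cleanup();
	return 0;
}

When run in no-huge mode, the per-page count grows with the number of 4K
pages while the contiguous count stays small; that growth is what the commit
message describes as the VFIO subsystem running out of space to store
mappings, and it is the case the new type1_map_contig() path avoids when
IOVA as VA mode is enabled.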