From patchwork Tue Feb 25 13:24:48 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Burakov, Anatoly"
X-Patchwork-Id: 66041
X-Patchwork-Delegate: david.marchand@redhat.com
From: Anatoly Burakov
To: dev@dpdk.org
Date: Tue, 25 Feb 2020 13:24:48 +0000
Message-Id:
X-Mailer: git-send-email 2.17.1
Subject: [dpdk-dev] [PATCH] vfio: map contiguous areas in one go
List-Id: DPDK patches and discussions

Currently, when we are creating DMA mappings for memory that is either
external or backed by hugepages in IOVA-as-PA mode, we assume that each
page is necessarily discontiguous. This may not actually be the case,
especially for external memory, where the user is able to create their
own IOVA table and make it contiguous.
This is a problem because VFIO has a limited number of DMA mappings and
does not appear to concatenate adjacent mappings: each mapping is
treated as separate, even when it covers an area adjacent to an
existing one.

Fix this by always mapping contiguous memory in a single chunk, as
opposed to mapping each segment separately.

Signed-off-by: Anatoly Burakov
---
 lib/librte_eal/linux/eal/eal_vfio.c | 59 +++++++++++++++++++++++++----
 1 file changed, 51 insertions(+), 8 deletions(-)

diff --git a/lib/librte_eal/linux/eal/eal_vfio.c b/lib/librte_eal/linux/eal/eal_vfio.c
index 01b5ef3f42..4502aefed3 100644
--- a/lib/librte_eal/linux/eal/eal_vfio.c
+++ b/lib/librte_eal/linux/eal/eal_vfio.c
@@ -514,9 +514,11 @@ static void
 vfio_mem_event_callback(enum rte_mem_event type, const void *addr, size_t len,
 		void *arg __rte_unused)
 {
+	rte_iova_t iova_start, iova_expected;
 	struct rte_memseg_list *msl;
 	struct rte_memseg *ms;
 	size_t cur_len = 0;
+	uint64_t va_start;
 
 	msl = rte_mem_virt2memseg_list(addr);
 
@@ -545,22 +547,63 @@ vfio_mem_event_callback(enum rte_mem_event type, const void *addr, size_t len,
 #endif
 	/* memsegs are contiguous in memory */
 	ms = rte_mem_virt2memseg(addr, msl);
+
+	/*
+	 * This memory is not guaranteed to be contiguous, but it still could
+	 * be, or it could have some small contiguous chunks. Since the number
+	 * of VFIO mappings is limited, and VFIO appears to not concatenate
+	 * adjacent mappings, we have to do this ourselves.
+	 *
+	 * So, find contiguous chunks, then map them.
+	 */
+	va_start = ms->addr_64;
+	iova_start = iova_expected = ms->iova;
 	while (cur_len < len) {
+		bool new_contig_area = ms->iova != iova_expected;
+		bool last_seg = (len - cur_len) == ms->len;
+		bool skip_last = false;
+
+		/* only do mappings when current contiguous area ends */
+		if (new_contig_area) {
+			if (type == RTE_MEM_EVENT_ALLOC)
+				vfio_dma_mem_map(default_vfio_cfg, va_start,
+						iova_start,
+						iova_expected - iova_start, 1);
+			else
+				vfio_dma_mem_map(default_vfio_cfg, va_start,
+						iova_start,
+						iova_expected - iova_start, 0);
+			va_start = ms->addr_64;
+			iova_start = ms->iova;
+		}
 		/* some memory segments may have invalid IOVA */
 		if (ms->iova == RTE_BAD_IOVA) {
 			RTE_LOG(DEBUG, EAL, "Memory segment at %p has bad IOVA, skipping\n",
 					ms->addr);
-			goto next;
+			skip_last = true;
 		}
-		if (type == RTE_MEM_EVENT_ALLOC)
-			vfio_dma_mem_map(default_vfio_cfg, ms->addr_64,
-					ms->iova, ms->len, 1);
-		else
-			vfio_dma_mem_map(default_vfio_cfg, ms->addr_64,
-					ms->iova, ms->len, 0);
-next:
+		iova_expected = ms->iova + ms->len;
 		cur_len += ms->len;
 		++ms;
+
+		/*
+		 * don't count previous segment, and don't attempt to
+		 * dereference a potentially invalid pointer.
+		 */
+		if (skip_last && !last_seg) {
+			iova_expected = iova_start = ms->iova;
+			va_start = ms->addr_64;
+		} else if (!skip_last && last_seg) {
+			/* this is the last segment and we're not skipping */
+			if (type == RTE_MEM_EVENT_ALLOC)
+				vfio_dma_mem_map(default_vfio_cfg, va_start,
+						iova_start,
+						iova_expected - iova_start, 1);
+			else
+				vfio_dma_mem_map(default_vfio_cfg, va_start,
+						iova_start,
+						iova_expected - iova_start, 0);
+		}
 	}
 #ifdef RTE_ARCH_PPC_64
 	cur_len = 0;