mem: check DMA mask for user-supplied IOVA addresses

Message ID e092c8e06b192c629f906ef0ae128f41f62c1028.1588693697.git.anatoly.burakov@intel.com (mailing list archive)
State Changes Requested, archived
Delegated to: David Marchand
Series: mem: check DMA mask for user-supplied IOVA addresses

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-nxp-Performance success Performance Testing PASS
ci/travis-robot warning Travis build: failed
ci/iol-mellanox-Performance success Performance Testing PASS
ci/Intel-compilation fail Compilation issues
ci/iol-testing success Testing PASS

Commit Message

Anatoly Burakov May 5, 2020, 3:48 p.m. UTC
  Currently, the external memory API will silently succeed even if the
IOVA addresses supplied by the user do not fit into the DMA mask. This
can cause hard-to-debug issues, or lead to failed kernel VFIO DMA
mappings being accepted.

Fix it so that, if IOVA addresses are provided, they are checked
against the DMA mask.

Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
---
 lib/librte_eal/common/rte_malloc.c | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)
  

Comments

Anatoly Burakov May 5, 2020, 4:26 p.m. UTC | #1
On 05-May-20 4:48 PM, Anatoly Burakov wrote:
> Currently, the external memory API will silently succeed even if the
> IOVA addresses supplied by the user do not fit into the DMA mask. This
> can cause hard-to-debug issues, or lead to failed kernel VFIO DMA
> mappings being accepted.
> 
> Fix it so that, if IOVA addresses are provided, they are checked
> against the DMA mask.
> 
> Signed-off-by: Anatoly Burakov <anatoly.burakov@intel.com>
> ---
To avoid any misunderstandings, this is only for external memory - we 
already do this for internal memory. See the original thread [1] for 
context.

Also, this needs a Fixes: tag and a unit test, so I will submit a v2
some time this week.

[1] http://patches.dpdk.org/patch/69566/
  

Patch

diff --git a/lib/librte_eal/common/rte_malloc.c b/lib/librte_eal/common/rte_malloc.c
index f1b73168bd..0d3a3ef93f 100644
--- a/lib/librte_eal/common/rte_malloc.c
+++ b/lib/librte_eal/common/rte_malloc.c
@@ -392,6 +392,29 @@  find_named_heap(const char *name)
 	return NULL;
 }
 
+static int
+check_iova_addrs_dma_mask(rte_iova_t iova_addrs[], unsigned int n_pages,
+		size_t page_sz)
+{
+	unsigned int i, bits;
+	rte_iova_t max = 0;
+
+	/* we only care for the biggest address we will get */
+	for (i = 0; i < n_pages; i++) {
+		rte_iova_t first = iova_addrs[i];
+		rte_iova_t last = first + page_sz - 1;
+		max = RTE_MAX(last, max);
+	}
+
+	bits = rte_fls_u64(max);
+	if (rte_mem_check_dma_mask(bits) != 0) {
+		RTE_LOG(ERR, EAL, "IOVA 0x%" PRIx64 " does not fit into the DMA mask\n",
+				max);
+		return -1;
+	}
+	return 0;
+}
+
 int
 rte_malloc_heap_memory_add(const char *heap_name, void *va_addr, size_t len,
 		rte_iova_t iova_addrs[], unsigned int n_pages, size_t page_sz)
@@ -412,6 +435,12 @@  rte_malloc_heap_memory_add(const char *heap_name, void *va_addr, size_t len,
 		rte_errno = EINVAL;
 		return -1;
 	}
+	/* check if all IOVA's fit into the DMA mask */
+	if (iova_addrs != NULL && check_iova_addrs_dma_mask(iova_addrs,
+			n_pages, page_sz) != 0) {
+		rte_errno = EINVAL;
+		return -1;
+	}
 	rte_mcfg_mem_write_lock();
 
 	/* find our heap */