[v4,0/4] fix issue with partial DMA unmap

Message ID 20201202054647.3449-1-ndabilpuram@marvell.com (mailing list archive)
Nithin Dabilpuram Dec. 2, 2020, 5:46 a.m. UTC
Partial DMA unmap is not supported by the VFIO type1 IOMMU
in Linux. Although the unmap ioctl returns zero (success),
the reported unmap size does not match the requested size.
This series adds a test case and fixes both heap-triggered
DMA mapping and user-triggered DMA mapping/unmapping.

See vfio_dma_do_unmap() in drivers/vfio/vfio_iommu_type1.c;
a snippet of its comment is quoted below.

        /*
         * vfio-iommu-type1 (v1) - User mappings were coalesced together to
         * avoid tracking individual mappings.  This means that the granularity
         * of the original mapping was lost and the user was allowed to attempt
         * to unmap any range.  Depending on the contiguousness of physical
         * memory and page sizes supported by the IOMMU, arbitrary unmaps may
         * or may not have worked.  We only guaranteed unmap granularity
         * matching the original mapping; even though it was untracked here,
         * the original mappings are reflected in IOMMU mappings.  This
         * resulted in a couple unusual behaviors.  First, if a range is not
         * able to be unmapped, ex. a set of 4k pages that was mapped as a
         * 2M hugepage into the IOMMU, the unmap ioctl returns success but with
         * a zero sized unmap.  Also, if an unmap request overlaps the first
         * address of a hugepage, the IOMMU will unmap the entire hugepage.
         * This also returns success and the returned unmap size reflects the
         * actual size unmapped.
         *
         * We attempt to maintain compatibility with this "v1" interface, but
         * we take control out of the hands of the IOMMU.  Therefore, an unmap
         * request offset from the beginning of the original mapping will
         * return success with zero sized unmap.  And an unmap request covering
         * the first iova of mapping will unmap the entire range.
         */

This behavior can be verified by applying the first patch and adding a
return check for dma_unmap.size != len in vfio_type1_dma_mem_map().

v4:
- Fixed issue with patch 4/4 on x86 builds.

v3:
- Fixed external memory test case(4/4) to use system page size
  instead of 4K.
- Fixed check-git-log.sh issue and rebased.
- Added acked-by from anatoly.burakov@intel.com to first 3 patches.

v2: 
- Reverted earlier commit that enables merging contiguous mappings for
  IOVA as PA. (see 1/3)
- Updated documentation about kernel dma mapping limits and vfio
  module parameter.
- Moved vfio test to test_vfio.c and handled comments from
  Anatoly.

Nithin Dabilpuram (4):
  vfio: revert changes for map contiguous areas in one go
  vfio: fix DMA mapping granularity for type1 IOVA as VA
  test: add test case to validate VFIO DMA map/unmap
  test: change external memory test to use system page sz

 app/test/meson.build                   |   1 +
 app/test/test_external_mem.c           |   3 +-
 app/test/test_vfio.c                   | 103 +++++++++++++++++++++++++++++++++
 doc/guides/linux_gsg/linux_drivers.rst |  10 ++++
 lib/librte_eal/linux/eal_vfio.c        |  93 ++++++++++++-----------------
 lib/librte_eal/linux/eal_vfio.h        |   1 +
 6 files changed, 153 insertions(+), 58 deletions(-)
 create mode 100644 app/test/test_vfio.c