
[v8,0/3] fix issue with partial DMA unmap

Message ID: 20210115073243.7025-1-ndabilpuram@marvell.com

Message

Nithin Dabilpuram Jan. 15, 2021, 7:32 a.m. UTC
Partial DMA unmap is not supported by the VFIO type1 IOMMU in Linux.
Although the unmap ioctl returns zero (success), the DMA unmap size it
reports is not the same as the requested size. So add a test case and
fix both heap-triggered DMA mapping and user-triggered DMA
mapping/unmapping.
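
For context, "user triggered" here means an application mapping and
unmapping memory for DMA through DPDK's VFIO helpers. Below is a minimal
sketch of that path; the rte_vfio_container_dma_map()/unmap() calls are
from DPDK's rte_vfio.h, while the function itself is illustrative and not
part of this series.

    #include <stdint.h>
    #include <rte_vfio.h>

    /* Map a whole region for DMA in one go, then attempt to unmap a
     * single page at an offset into it. Per the kernel comment quoted
     * below, VFIO type1 reports success for such a partial unmap but
     * with a zero-sized result, i.e. nothing is actually unmapped.
     */
    static int
    map_then_partial_unmap(uint64_t vaddr, uint64_t iova, uint64_t len,
                           uint64_t pgsz)
    {
            /* Map [vaddr, vaddr + len) as a single mapping. */
            if (rte_vfio_container_dma_map(RTE_VFIO_DEFAULT_CONTAINER_FD,
                            vaddr, iova, len) != 0)
                    return -1;

            /* Partial unmap: one page at offset pgsz into the mapping. */
            return rte_vfio_container_dma_unmap(RTE_VFIO_DEFAULT_CONTAINER_FD,
                            vaddr + pgsz, iova + pgsz, pgsz);
    }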

Refer to vfio_dma_do_unmap() in drivers/vfio/vfio_iommu_type1.c; a
snippet of its comment is below.

        /*
         * vfio-iommu-type1 (v1) - User mappings were coalesced together to
         * avoid tracking individual mappings.  This means that the granularity
         * of the original mapping was lost and the user was allowed to attempt
         * to unmap any range.  Depending on the contiguousness of physical
         * memory and page sizes supported by the IOMMU, arbitrary unmaps may
         * or may not have worked.  We only guaranteed unmap granularity
         * matching the original mapping; even though it was untracked here,
         * the original mappings are reflected in IOMMU mappings.  This
         * resulted in a couple unusual behaviors.  First, if a range is not
         * able to be unmapped, ex. a set of 4k pages that was mapped as a
         * 2M hugepage into the IOMMU, the unmap ioctl returns success but with
         * a zero sized unmap.  Also, if an unmap request overlaps the first
         * address of a hugepage, the IOMMU will unmap the entire hugepage.
         * This also returns success and the returned unmap size reflects the
         * actual size unmapped.

         * We attempt to maintain compatibility with this "v1" interface, but
         * we take control out of the hands of the IOMMU.  Therefore, an unmap
         * request offset from the beginning of the original mapping will
         * return success with zero sized unmap.  And an unmap request covering
         * the first iova of mapping will unmap the entire range.

This behavior can be verified by applying the first patch and adding a
return check for dma_unmap.size != len in vfio_type1_dma_mem_map().
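
For illustration, a minimal sketch of that check is below, using the VFIO
UAPI structures from linux/vfio.h; the function is a simplified stand-in
for the unmap path in lib/librte_eal/linux/eal_vfio.c, not the actual
patch.

    #include <errno.h>
    #include <stdint.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/vfio.h>

    static int
    type1_dma_unmap_checked(int vfio_container_fd, uint64_t iova,
                            uint64_t len)
    {
            struct vfio_iommu_type1_dma_unmap dma_unmap;

            memset(&dma_unmap, 0, sizeof(dma_unmap));
            dma_unmap.argsz = sizeof(dma_unmap);
            dma_unmap.iova = iova;
            dma_unmap.size = len;

            if (ioctl(vfio_container_fd, VFIO_IOMMU_UNMAP_DMA, &dma_unmap) != 0)
                    return -errno;

            /* The ioctl reports success even when a partial unmap was
             * rejected; only the size written back by the kernel shows
             * how much was really unmapped.
             */
            if (dma_unmap.size != len)
                    return -EINVAL;

            return 0;
    }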

v8:
- Added Cc: stable to patch 3/3.

v7:
- Dropped the vfio test case (patch 3/4), i.e.
  "test: add test case to validate VFIO DMA map/unmap",
  as it couldn't be supported on POWER9 systems.

v6:
- Fixed issue with x86-32 build introduced by v5.

v5:
- Changed the vfio test in test_vfio.c to use system pages allocated from
  the heap instead of mmap(), so that they fall within the initially
  configured DMA window on POWER9 systems.
- Added Acked-by from David for patches 1/4 and 2/4.

v4:
- Fixed issue with patch 4/4 on x86 builds.

v3:
- Fixed the external memory test case (4/4) to use the system page size
  instead of 4K.
- Fixed the check-git-log.sh issue and rebased.
- Added Acked-by from anatoly.burakov@intel.com to the first 3 patches.

v2:
- Reverted the earlier commit that enabled merging of contiguous mappings
  for IOVA as PA (see 1/3).
- Updated documentation about kernel DMA mapping limits and the vfio
  module parameter.
- Moved the vfio test to test_vfio.c and handled comments from
  Anatoly.

Nithin Dabilpuram (3):
  vfio: revert changes for map contiguous areas in one go
  vfio: fix DMA mapping granularity for type1 IOVA as VA
  test: change external memory test to use system page sz

 app/test/test_external_mem.c           |  3 +-
 doc/guides/linux_gsg/linux_drivers.rst | 10 ++++
 lib/librte_eal/linux/eal_vfio.c        | 93 +++++++++++++---------------------
 lib/librte_eal/linux/eal_vfio.h        |  1 +
 4 files changed, 49 insertions(+), 58 deletions(-)
  

Comments

Burakov, Anatoly Feb. 16, 2021, 1:14 p.m. UTC | #1
On 15-Jan-21 7:32 AM, Nithin Dabilpuram wrote:
> [snip: full cover letter and changelog quoted above]
> 

Is there anything preventing this from getting merged? Let's try for 
21.05 :)
  
Nithin Dabilpuram Feb. 22, 2021, 9:41 a.m. UTC | #2
Ping.
Can this be merged for 21.05? It has been pending for a few releases.

--
Thanks
Nithin

On Tue, Feb 16, 2021 at 01:14:37PM +0000, Burakov, Anatoly wrote:
> On 15-Jan-21 7:32 AM, Nithin Dabilpuram wrote:
> > [snip: full cover letter and changelog quoted above]
> 
> Is there anything preventing this from getting merged? Let's try for 21.05
> :)
> 
> -- 
> Thanks,
> Anatoly
  
David Marchand Feb. 22, 2021, 10:06 a.m. UTC | #3
On Mon, Feb 22, 2021 at 10:42 AM Nithin Dabilpuram
<nithind1988@gmail.com> wrote:
>
> Can this be merged for 21.05? It has been pending for a few releases.

I'll get them this week, hopefully.
  
David Marchand March 1, 2021, 12:13 p.m. UTC | #4
On Fri, Jan 15, 2021 at 8:33 AM Nithin Dabilpuram
<ndabilpuram@marvell.com> wrote:
>
> [snip: full cover letter and changelog quoted above]

Series applied, thanks.