[1/1] vhost: fix iotlb mempool single-consumer flag

Message ID 20200810141103.8015-2-eperezma@redhat.com (mailing list archive)
State Superseded, archived
Delegated to: Maxime Coquelin
Series vhost: fix iotlb mempool single-consumer flag

Checks

Context Check Description
ci/checkpatch warning coding style issues
ci/Intel-compilation success Compilation OK
ci/iol-mellanox-Performance success Performance Testing PASS
ci/travis-robot success Travis build: passed
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-testing success Testing PASS

Commit Message

Eugenio Perez Martin Aug. 10, 2020, 2:11 p.m. UTC
  Bugzilla bug: 523

Using testpmd as a vhost-user backend with IOMMU:

/home/dpdk/build/app/dpdk-testpmd -l 1,3 \
        --vdev net_vhost0,iface=/tmp/vhost-user1,queues=1,iommu-support=1 \
        -- --auto-start --stats-period 5 --forward-mode=txonly

And qemu with packed virtqueue:

    <interface type='vhostuser'>
      <mac address='88:67:11:5f:dd:02'/>
      <source type='unix' path='/tmp/vhost-user1' mode='client'/>
      <model type='virtio'/>
      <driver name='vhost' rx_queue_size='1024' iommu='on' ats='on'/>
      <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </interface>
...

  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='device.net1.packed=on'/>
  </qemu:commandline>

--

It is possible to consume the iotlb entries of the mempool from different
threads. ThreadSanitizer example output (after changing the rwlocks to POSIX
ones so that ThreadSanitizer can track them):

WARNING: ThreadSanitizer: data race (pid=76927)
  Write of size 8 at 0x00017ffd5628 by thread T5:
    #0 vhost_user_iotlb_cache_insert ../lib/librte_vhost/iotlb.c:181 (dpdk-testpmd+0x769343)
    #1 vhost_user_iotlb_msg ../lib/librte_vhost/vhost_user.c:2380 (dpdk-testpmd+0x78e4bf)
    #2 vhost_user_msg_handler ../lib/librte_vhost/vhost_user.c:2848 (dpdk-testpmd+0x78fcf8)
    #3 vhost_user_read_cb ../lib/librte_vhost/socket.c:311 (dpdk-testpmd+0x770162)
    #4 fdset_event_dispatch ../lib/librte_vhost/fd_man.c:286 (dpdk-testpmd+0x7591c2)
    #5 ctrl_thread_init ../lib/librte_eal/common/eal_common_thread.c:193 (dpdk-testpmd+0xa2890b)
    #6 <null> <null> (libtsan.so.0+0x2a68d)

  Previous read of size 8 at 0x00017ffd5628 by thread T3:
    #0 vhost_user_iotlb_cache_find ../lib/librte_vhost/iotlb.c:252 (dpdk-testpmd+0x76ee96)
    #1 __vhost_iova_to_vva ../lib/librte_vhost/vhost.c:42 (dpdk-testpmd+0x77488c)
    #2 vhost_iova_to_vva ../lib/librte_vhost/vhost.h:753 (dpdk-testpmd+0x7abeb3)
    #3 map_one_desc ../lib/librte_vhost/virtio_net.c:497 (dpdk-testpmd+0x7abeb3)
    #4 fill_vec_buf_packed ../lib/librte_vhost/virtio_net.c:751 (dpdk-testpmd+0x7abeb3)
    #5 vhost_enqueue_single_packed ../lib/librte_vhost/virtio_net.c:1170 (dpdk-testpmd+0x7abeb3)
    #6 virtio_dev_rx_single_packed ../lib/librte_vhost/virtio_net.c:1346 (dpdk-testpmd+0x7abeb3)
    #7 virtio_dev_rx_packed ../lib/librte_vhost/virtio_net.c:1384 (dpdk-testpmd+0x7abeb3)
    #8 virtio_dev_rx ../lib/librte_vhost/virtio_net.c:1435 (dpdk-testpmd+0x7b0654)
    #9 rte_vhost_enqueue_burst ../lib/librte_vhost/virtio_net.c:1465 (dpdk-testpmd+0x7b0654)
    #10 eth_vhost_tx ../drivers/net/vhost/rte_eth_vhost.c:470 (dpdk-testpmd+0x1ddfbd8)
    #11 rte_eth_tx_burst ../lib/librte_ethdev/rte_ethdev.h:4800 (dpdk-testpmd+0x505fdb)
    #12 pkt_burst_transmit ../app/test-pmd/txonly.c:365 (dpdk-testpmd+0x5106ad)
    #13 run_pkt_fwd_on_lcore ../app/test-pmd/testpmd.c:2080 (dpdk-testpmd+0x4f8951)
    #14 start_pkt_forward_on_core ../app/test-pmd/testpmd.c:2106 (dpdk-testpmd+0x4f89d7)
    #15 eal_thread_loop ../lib/librte_eal/linux/eal_thread.c:127 (dpdk-testpmd+0xa5b20a)
    #16 <null> <null> (libtsan.so.0+0x2a68d)

  Location is global '<null>' at 0x000000000000 (rtemap_0+0x00003ffd5628)

  Thread T5 'vhost-events' (tid=76933, running) created by main thread at:
    #0 pthread_create <null> (libtsan.so.0+0x2cd42)
    #1 rte_ctrl_thread_create ../lib/librte_eal/common/eal_common_thread.c:216 (dpdk-testpmd+0xa289e7)
    #2 rte_vhost_driver_start ../lib/librte_vhost/socket.c:1190 (dpdk-testpmd+0x7728ef)
    #3 vhost_driver_setup ../drivers/net/vhost/rte_eth_vhost.c:1028 (dpdk-testpmd+0x1de233d)
    #4 eth_dev_configure ../drivers/net/vhost/rte_eth_vhost.c:1126 (dpdk-testpmd+0x1de29cc)
    #5 rte_eth_dev_configure ../lib/librte_ethdev/rte_ethdev.c:1439 (dpdk-testpmd+0x991ce2)
    #6 start_port ../app/test-pmd/testpmd.c:2450 (dpdk-testpmd+0x4f9b45)
    #7 main ../app/test-pmd/testpmd.c:3777 (dpdk-testpmd+0x4fe1ac)

  Thread T3 'lcore-slave-3' (tid=76931, running) created by main thread at:
    #0 pthread_create <null> (libtsan.so.0+0x2cd42)
    #1 rte_eal_init ../lib/librte_eal/linux/eal.c:1244 (dpdk-testpmd+0xa46e2b)
    #2 main ../app/test-pmd/testpmd.c:3673 (dpdk-testpmd+0x4fdd75)

--

Or:
WARNING: ThreadSanitizer: data race (pid=76927)
  Write of size 1 at 0x00017ffd00f8 by thread T5:
    #0 vhost_user_iotlb_cache_insert ../lib/librte_vhost/iotlb.c:182 (dpdk-testpmd+0x769370)
    #1 vhost_user_iotlb_msg ../lib/librte_vhost/vhost_user.c:2380 (dpdk-testpmd+0x78e4bf)
    #2 vhost_user_msg_handler ../lib/librte_vhost/vhost_user.c:2848 (dpdk-testpmd+0x78fcf8)
    #3 vhost_user_read_cb ../lib/librte_vhost/socket.c:311 (dpdk-testpmd+0x770162)
    #4 fdset_event_dispatch ../lib/librte_vhost/fd_man.c:286 (dpdk-testpmd+0x7591c2)
    #5 ctrl_thread_init ../lib/librte_eal/common/eal_common_thread.c:193 (dpdk-testpmd+0xa2890b)
    #6 <null> <null> (libtsan.so.0+0x2a68d)

  Previous write of size 1 at 0x00017ffd00f8 by thread T3:
    #0 vhost_user_iotlb_pending_insert ../lib/librte_vhost/iotlb.c:86 (dpdk-testpmd+0x75eb0c)
    #1 __vhost_iova_to_vva ../lib/librte_vhost/vhost.c:58 (dpdk-testpmd+0x774926)
    #2 vhost_iova_to_vva ../lib/librte_vhost/vhost.h:753 (dpdk-testpmd+0x7a79d1)
    #3 virtio_dev_rx_batch_packed ../lib/librte_vhost/virtio_net.c:1295 (dpdk-testpmd+0x7a79d1)
    #4 virtio_dev_rx_packed ../lib/librte_vhost/virtio_net.c:1376 (dpdk-testpmd+0x7a79d1)
    #5 virtio_dev_rx ../lib/librte_vhost/virtio_net.c:1435 (dpdk-testpmd+0x7b0654)
    #6 rte_vhost_enqueue_burst ../lib/librte_vhost/virtio_net.c:1465 (dpdk-testpmd+0x7b0654)
    #7 eth_vhost_tx ../drivers/net/vhost/rte_eth_vhost.c:470 (dpdk-testpmd+0x1ddfbd8)
    #8 rte_eth_tx_burst ../lib/librte_ethdev/rte_ethdev.h:4800 (dpdk-testpmd+0x505fdb)
    #9 pkt_burst_transmit ../app/test-pmd/txonly.c:365 (dpdk-testpmd+0x5106ad)
    #10 run_pkt_fwd_on_lcore ../app/test-pmd/testpmd.c:2080 (dpdk-testpmd+0x4f8951)
    #11 start_pkt_forward_on_core ../app/test-pmd/testpmd.c:2106 (dpdk-testpmd+0x4f89d7)
    #12 eal_thread_loop ../lib/librte_eal/linux/eal_thread.c:127 (dpdk-testpmd+0xa5b20a)
    #13 <null> <null> (libtsan.so.0+0x2a68d)

--

As a consequence, the two threads can modify the same entry of the mempool.
Usually, this causes a loop in the iotlb_pending_entries list, as the sketch
below illustrates.
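
A minimal, self-contained sketch of that loop (plain C using the BSD TAILQ
macros, the same list flavor the iotlb code uses; struct and variable names
are illustrative, not the vhost sources). If the racing gets hand the same
free entry to both threads, the entry is linked into the pending list twice,
its next pointer ends up pointing at itself, and any traversal spins forever:

#include <stdio.h>
#include <sys/queue.h>

/* Illustrative stand-in for the real vhost_iotlb_entry. */
struct iotlb_entry {
	TAILQ_ENTRY(iotlb_entry) next;
	int iova;
};

TAILQ_HEAD(entry_list, iotlb_entry);

int main(void)
{
	struct entry_list pending = TAILQ_HEAD_INITIALIZER(pending);
	struct iotlb_entry e = { .iova = 42 };

	/* Both threads were handed the same object, so it is queued twice. */
	TAILQ_INSERT_TAIL(&pending, &e, next);
	TAILQ_INSERT_TAIL(&pending, &e, next);

	/* The second insert sets e.next.tqe_next = &e, a self-cycle: an
	 * unbounded walk never terminates. Bound it to show the revisits. */
	struct iotlb_entry *it = TAILQ_FIRST(&pending);
	for (int i = 0; i < 5 && it != NULL; i++, it = TAILQ_NEXT(it, next))
		printf("visiting entry iova=%d\n", it->iova);

	return 0;
}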

Fixes: d012d1f293f4 ("vhost: add IOTLB helper functions")
Signed-off-by: Eugenio Pérez <eperezma@redhat.com>
---
 lib/librte_vhost/iotlb.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
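
The one-line fix (full hunk in the Patch section at the bottom) drops
MEMPOOL_F_SC_GET so the pool's dequeue path becomes multi-consumer safe,
while MEMPOOL_F_SP_PUT is kept. Below is a hedged sketch of the misuse as a
standalone DPDK program rather than the vhost code; the pool name, sizes and
the pthread driver are illustrative assumptions, and unlike the vhost case
this demo races the put side as well:

#include <pthread.h>
#include <rte_eal.h>
#include <rte_mempool.h>

static struct rte_mempool *pool;

static void *consumer(void *arg)
{
	void *obj;
	int i;

	(void)arg;
	for (i = 0; i < 100000; i++) {
		/* With MEMPOOL_F_SC_GET this dequeue assumes a single
		 * consumer; two threads racing here can be handed the very
		 * same object, which is what happened to the iotlb entries. */
		if (rte_mempool_get(pool, &obj) == 0)
			rte_mempool_put(pool, obj);
	}
	return NULL;
}

int main(int argc, char **argv)
{
	pthread_t t1, t2;

	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* Same flag combination the buggy code used. */
	pool = rte_mempool_create("sc_get_demo", 512, 64, 0, 0,
			NULL, NULL, NULL, NULL, SOCKET_ID_ANY,
			MEMPOOL_F_NO_CACHE_ALIGN |
			MEMPOOL_F_SP_PUT | MEMPOOL_F_SC_GET);
	if (pool == NULL)
		return -1;

	pthread_create(&t1, NULL, consumer, NULL);
	pthread_create(&t2, NULL, consumer, NULL);
	pthread_join(t1, NULL);
	pthread_join(t2, NULL);

	return 0;
}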
  

Comments

Kevin Traynor Aug. 25, 2020, 9:17 a.m. UTC | #1
On 10/08/2020 15:11, Eugenio Pérez wrote:
> Bugzilla bug: 523
> 
> [snip]

Looks ok to me, but would need review from vhost maintainer.
  
Chenbo Xia Aug. 26, 2020, 6:28 a.m. UTC | #2
Hi Eugenio,

> -----Original Message-----
> From: Eugenio Pérez <eperezma@redhat.com>
> Sent: Monday, August 10, 2020 10:11 PM
> To: dev@dpdk.org
> Cc: Adrian Moreno Zapata <amorenoz@redhat.com>; Maxime Coquelin
> <maxime.coquelin@redhat.com>; stable@dpdk.org; Wang, Zhihong
> <zhihong.wang@intel.com>; Xia, Chenbo <chenbo.xia@intel.com>
> Subject: [PATCH 1/1] vhost: fix iotlb mempool single-consumer flag
> 
> Bugzilla bug: 523
> 
> [snip]

The fix looks fine to me. But the commit message is a little bit complicated
to me (also, some lines are too long). Since this bug is clear, it could be
described by something like 'the control thread which handles iotlb messages
and the forwarding thread which uses the iotlb to translate addresses may
modify the same mempool entry and cause a loop in the iotlb_pending_entries
list'. Do you think that makes sense?

Thanks for the fix!
Chenbo

> [snip]
  
Eugenio Perez Martin Aug. 26, 2020, 12:50 p.m. UTC | #3
Hi Chenbo.

On Wed, Aug 26, 2020 at 8:29 AM Xia, Chenbo <chenbo.xia@intel.com> wrote:
>
> Hi Eugenio,
>
> > [snip]
>
> The fix looks fine to me. But the commit message is a little bit complicated
> to me (also, some lines are too long). Since this bug is clear, it could be
> described by something like 'the control thread which handles iotlb messages
> and the forwarding thread which uses the iotlb to translate addresses may
> modify the same mempool entry and cause a loop in the
> iotlb_pending_entries list'. Do you think that makes sense?

Sure, I just wanted to give enough information to reproduce it, but
that can be in the bugzilla case too if you prefer. Do you need me to
send a v2?

Thanks!

>
> Thanks for the fix!
> Chenbo
>
> > [snip]
>
  
Chenbo Xia Aug. 27, 2020, 1:20 a.m. UTC | #4
Hi Eugenio,

> -----Original Message-----
> From: Eugenio Perez Martin <eperezma@redhat.com>
> Sent: Wednesday, August 26, 2020 8:51 PM
> To: Xia, Chenbo <chenbo.xia@intel.com>
> Cc: dev@dpdk.org; Adrian Moreno Zapata <amorenoz@redhat.com>; Maxime
> Coquelin <maxime.coquelin@redhat.com>; stable@dpdk.org; Wang, Zhihong
> <zhihong.wang@intel.com>
> Subject: Re: [PATCH 1/1] vhost: fix iotlb mempool single-consumer flag
> 
> Hi Chenbo.
> 
> On Wed, Aug 26, 2020 at 8:29 AM Xia, Chenbo <chenbo.xia@intel.com> wrote:
> > Hi Eugenio,
> >
> > [snip]
> >
> > The fix looks fine to me. But the commit message is a little bit complicated
> > to me (also, some lines are too long). Since this bug is clear, it could be
> > described by something like 'the control thread which handles iotlb messages
> > and the forwarding thread which uses the iotlb to translate addresses may
> > modify the same mempool entry and cause a loop in the
> > iotlb_pending_entries list'. Do you think that makes sense?
> 
> Sure, I just wanted to give enough information to reproduce it, but
> that can be in the bugzilla case too if you prefer. Do you need me to
> send a v2?
> 

Yes, the information is very detailed for review! Since there's already a
warning for the commit message in patchwork, I'd like a brief description
with the bugzilla link, and the details could be in that link. Is this OK
for you?

Thanks!
Chenbo  

> Thanks!
> 
> >
> > Thanks for the fix!
> > Chenbo
> >
> > > [snip]
  
Jens Freimann Aug. 28, 2020, 6:40 p.m. UTC | #5
Hi Eugenio,

On Mon, Aug 10, 2020 at 04:11:03PM +0200, Eugenio Pérez wrote:
>Bugzilla bug: 523
>
>[snip]

looks good to me.

Reviewed-by: Jens Freimann <jfreimann@redhat.com>

regards,
Jens
  

Patch

diff --git a/lib/librte_vhost/iotlb.c b/lib/librte_vhost/iotlb.c
index 5b3a0c090..e0b67721b 100644
--- a/lib/librte_vhost/iotlb.c
+++ b/lib/librte_vhost/iotlb.c
@@ -321,8 +321,7 @@  vhost_user_iotlb_init(struct virtio_net *dev, int vq_index)
 			IOTLB_CACHE_SIZE, sizeof(struct vhost_iotlb_entry), 0,
 			0, 0, NULL, NULL, NULL, socket,
 			MEMPOOL_F_NO_CACHE_ALIGN |
-			MEMPOOL_F_SP_PUT |
-			MEMPOOL_F_SC_GET);
+			MEMPOOL_F_SP_PUT);
 	if (!vq->iotlb_pool) {
 		VHOST_LOG_CONFIG(ERR,
 				"Failed to create IOTLB cache pool (%s)\n",