[1/5] vhost: enforce avail index and desc read ordering

Message ID 20181205094957.1938-2-maxime.coquelin@redhat.com (mailing list archive)
State Superseded, archived
Delegated to: Maxime Coquelin
Series vhost: add missing barriers, remove useless volatiles

Checks

Context Check Description
ci/Intel-compilation success Compilation OK
ci/mellanox-Performance-Testing success Performance Testing PASS
ci/intel-Performance-Testing success Performance Testing PASS

Commit Message

Maxime Coquelin Dec. 5, 2018, 9:49 a.m. UTC
  A read barrier is required to ensure the ordering between
available index and the descriptor reads is enforced.

Fixes: 4796ad63ba1f ("examples/vhost: import userspace vhost application")
Cc: stable@dpdk.org

Reported-by: Jason Wang <jasowang@redhat.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)
  

Comments

Ilya Maximets Dec. 5, 2018, 11:30 a.m. UTC | #1
On 05.12.2018 12:49, Maxime Coquelin wrote:
> A read barrier is required to ensure the ordering between
> available index and the descriptor reads is enforced.
> 
> Fixes: 4796ad63ba1f ("examples/vhost: import userspace vhost application")
> Cc: stable@dpdk.org
> 
> Reported-by: Jason Wang <jasowang@redhat.com>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
>  lib/librte_vhost/virtio_net.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
> 
> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> index 5e1a1a727..f11ebb54f 100644
> --- a/lib/librte_vhost/virtio_net.c
> +++ b/lib/librte_vhost/virtio_net.c
> @@ -791,6 +791,12 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>  	rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);
>  	avail_head = *((volatile uint16_t *)&vq->avail->idx);
>  
> +	/*
> +	 * The ordering between avail index and
> +	 * desc reads needs to be enforced.
> +	 */
> +	rte_smp_rmb();
> +

Hmm. This looks weird to me.
Could you please describe the bad scenario here? (It would be good to have it
in the commit message too.)

As I understand, you're enforcing the read of avail->idx to happen before
reading the avail->ring[avail_idx]. Is it correct?

But we have the following code sequence:

1. read avail->idx (avail_head).
2. check that last_avail_idx != avail_head.
3. read from the ring using last_avail_idx.

So, there is a strict dependency between all 3 steps, and the memory
transaction will be finished at step #2 in any case. There is no
way to read the ring before reading the avail->idx.

Am I missing something?
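
For concreteness, here is a minimal, self-contained model of the reads in
question (a sketch only -- the struct and function names are invented, not
the actual DPDK code, though the avail ring layout matches the split ring):

	#include <stdint.h>

	struct avail_model {
		uint16_t flags;
		uint16_t idx;
		uint16_t ring[];
	};

	struct vq_model {
		struct avail_model *avail;
		uint16_t last_avail_idx;
		uint16_t size;	/* power of two, as for a vring */
	};

	/* Returns the next available descriptor id, or UINT16_MAX if the
	 * ring is empty. */
	static uint16_t peek_next_desc_id(struct vq_model *vq)
	{
		uint16_t avail_head =
			*(volatile uint16_t *)&vq->avail->idx;    /* step 1 */
		uint16_t cur_idx = vq->last_avail_idx;            /* step 2 */

		if (cur_idx == avail_head)                 /* check of step 2 */
			return UINT16_MAX;
		/* step 3: the slot address is built from cur_idx only;
		 * avail_head never feeds an address */
		return vq->avail->ring[cur_idx & (vq->size - 1)];
	}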

>  	for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
>  		uint32_t pkt_len = pkts[pkt_idx]->pkt_len + dev->vhost_hlen;
>  		uint16_t nr_vec = 0;
> @@ -1373,6 +1379,12 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>  	if (free_entries == 0)
>  		return 0;
>  
> +	/*
> +	 * The ordering between avail index and
> +	 * desc reads needs to be enforced.
> +	 */
> +	rte_smp_rmb();
> +

This one is strange too.

	free_entries = *((volatile uint16_t *)&vq->avail->idx) -
			vq->last_avail_idx;
	if (free_entries == 0)
		return 0;

The code reads the value of avail->idx and uses the value on the next
line, regardless of compiler optimizations. There is no way for the CPU to
postpone the actual read.

>  	VHOST_LOG_DEBUG(VHOST_DATA, "(%d) %s\n", dev->vid, __func__);
>  
>  	count = RTE_MIN(count, MAX_PKT_BURST);
>
  
Jason Wang Dec. 6, 2018, 4:17 a.m. UTC | #2
On 2018/12/5 7:30 PM, Ilya Maximets wrote:
> On 05.12.2018 12:49, Maxime Coquelin wrote:
>> A read barrier is required to ensure the ordering between
>> available index and the descriptor reads is enforced.
>>
>> Fixes: 4796ad63ba1f ("examples/vhost: import userspace vhost application")
>> Cc: stable@dpdk.org
>>
>> Reported-by: Jason Wang <jasowang@redhat.com>
>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>> ---
>>   lib/librte_vhost/virtio_net.c | 12 ++++++++++++
>>   1 file changed, 12 insertions(+)
>>
>> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
>> index 5e1a1a727..f11ebb54f 100644
>> --- a/lib/librte_vhost/virtio_net.c
>> +++ b/lib/librte_vhost/virtio_net.c
>> @@ -791,6 +791,12 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>>   	rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);
>>   	avail_head = *((volatile uint16_t *)&vq->avail->idx);
>>   
>> +	/*
>> +	 * The ordering between avail index and
>> +	 * desc reads needs to be enforced.
>> +	 */
>> +	rte_smp_rmb();
>> +
> Hmm. This looks weird to me.
> Could you please describe the bad scenario here? (It'll be good to have it
> in commit message too)
>
> As I understand, you're enforcing the read of avail->idx to happen before
> reading the avail->ring[avail_idx]. Is it correct?
>
> But we have following code sequence:
>
> 1. read avail->idx (avail_head).
> 2. check that last_avail_idx != avail_head.
> 3. read from the ring using last_avail_idx.
>
> So, there is a strict dependency between all 3 steps and the memory
> transaction will be finished at the step #2 in any case. There is no
> way to read the ring before reading the avail->idx.
>
> Am I missing something?


Nope, I kind of get what you mean now. And even if we continue with:

4. read the descriptor from the descriptor ring using the id read in step 3.

5. read the descriptor content according to the address from step 4.

these are still dependent memory accesses. So there's no need for an rmb.
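
Spelled out as a sketch (an illustrative fragment; 'descs' stands for the
descriptor table and 'translate' for the guest-address translation helper,
neither being the real function names):

	uint16_t idx = vq->avail->ring[cur_idx & (vq->size - 1)]; /* step 3 */
	struct vring_desc d = descs[idx]; /* step 4: address derives from 3 */
	void *buf = translate(d.addr);    /* step 5: address derives from 4 */

Each load's address is computed from the preceding load's result, which is
what makes steps 3, 4 and 5 address-dependent on one another.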


>
>>   	for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
>>   		uint32_t pkt_len = pkts[pkt_idx]->pkt_len + dev->vhost_hlen;
>>   		uint16_t nr_vec = 0;
>> @@ -1373,6 +1379,12 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>>   	if (free_entries == 0)
>>   		return 0;
>>   
>> +	/*
>> +	 * The ordering between avail index and
>> +	 * desc reads needs to be enforced.
>> +	 */
>> +	rte_smp_rmb();
>> +
> This one is strange too.
>
> 	free_entries = *((volatile uint16_t *)&vq->avail->idx) -
> 			vq->last_avail_idx;
> 	if (free_entries == 0)
> 		return 0;
>
> The code reads the value of avail->idx and uses the value on the next
> line even with any compiler optimizations. There is no way for CPU to
> postpone the actual read.


Yes.

Thanks


>
>>   	VHOST_LOG_DEBUG(VHOST_DATA, "(%d) %s\n", dev->vid, __func__);
>>   
>>   	count = RTE_MIN(count, MAX_PKT_BURST);
>>
  
Ilya Maximets Dec. 6, 2018, 12:48 p.m. UTC | #3
On 06.12.2018 7:17, Jason Wang wrote:
> 
> On 2018/12/5 7:30 PM, Ilya Maximets wrote:
>> On 05.12.2018 12:49, Maxime Coquelin wrote:
>>> A read barrier is required to ensure the ordering between
>>> available index and the descriptor reads is enforced.
>>>
>>> Fixes: 4796ad63ba1f ("examples/vhost: import userspace vhost application")
>>> Cc: stable@dpdk.org
>>>
>>> Reported-by: Jason Wang <jasowang@redhat.com>
>>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>>> ---
>>>   lib/librte_vhost/virtio_net.c | 12 ++++++++++++
>>>   1 file changed, 12 insertions(+)
>>>
>>> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
>>> index 5e1a1a727..f11ebb54f 100644
>>> --- a/lib/librte_vhost/virtio_net.c
>>> +++ b/lib/librte_vhost/virtio_net.c
>>> @@ -791,6 +791,12 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>>>       rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);
>>>       avail_head = *((volatile uint16_t *)&vq->avail->idx);
>>>   +    /*
>>> +     * The ordering between avail index and
>>> +     * desc reads needs to be enforced.
>>> +     */
>>> +    rte_smp_rmb();
>>> +
>> Hmm. This looks weird to me.
>> Could you please describe the bad scenario here? (It'll be good to have it
>> in commit message too)
>>
>> As I understand, you're enforcing the read of avail->idx to happen before
>> reading the avail->ring[avail_idx]. Is it correct?
>>
>> But we have following code sequence:
>>
>> 1. read avail->idx (avail_head).
>> 2. check that last_avail_idx != avail_head.
>> 3. read from the ring using last_avail_idx.
>>
>> So, there is a strict dependency between all 3 steps and the memory
>> transaction will be finished at the step #2 in any case. There is no
>> way to read the ring before reading the avail->idx.
>>
>> Am I missing something?
> 
> 
> Nope, I kind of get what you meaning now. And even if we will
> 
> 4. read descriptor from descriptor ring using the id read from 3
> 
> 5. read descriptor content according to the address from 4
> 
> They still have dependent memory access. So there's no need for rmb.
> 

On a second glance I changed my mind.
The code looks like this:

1. read avail_head = avail->idx
2. read cur_idx    = last_avail_idx
if (cur_idx != avail_head) {
    3. read idx = avail->ring[cur_idx]
    4. read desc[idx]
}

There is an address (data) dependency: 2 -> 3 -> 4.
These reads cannot be reordered.

But there is only a control dependency between 1 and (3, 4), because 'avail_head'
is not used to calculate 'cur_idx'. With aggressive speculative execution,
1 could be reordered with 3, resulting in a read of a not-yet-updated 'idx'.

I'm not sure speculative execution could go that far while 'avail_head' has
not been read yet, but it should be possible in theory.

Thoughts ?
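
To illustrate the difference (same invented model as above; not real code):

	/* Control dependency only: avail_head steers the branch but never
	 * feeds an address. A CPU that predicts the branch as taken can
	 * issue the ring load of step 3 before the avail->idx load of
	 * step 1 has completed. */
	uint16_t avail_head =
		*(volatile uint16_t *)&vq->avail->idx;           /* step 1 */
	uint16_t idx;

	if (cur_idx != avail_head)
		idx = vq->avail->ring[cur_idx & (vq->size - 1)]; /* step 3 */

A read barrier between steps 1 and 3 -- what the patch adds -- is precisely
what forbids that reordering on architectures where it can happen.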

>>
>>>       for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
>>>           uint32_t pkt_len = pkts[pkt_idx]->pkt_len + dev->vhost_hlen;
>>>           uint16_t nr_vec = 0;
>>> @@ -1373,6 +1379,12 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>>>       if (free_entries == 0)
>>>           return 0;
>>>   +    /*
>>> +     * The ordering between avail index and
>>> +     * desc reads needs to be enforced.
>>> +     */
>>> +    rte_smp_rmb();
>>> +
>> This one is strange too.
>>
>>     free_entries = *((volatile uint16_t *)&vq->avail->idx) -
>>             vq->last_avail_idx;
>>     if (free_entries == 0)
>>         return 0;
>>
>> The code reads the value of avail->idx and uses the value on the next
>> line even with any compiler optimizations. There is no way for CPU to
>> postpone the actual read.
> 
> 
> Yes.
> 

It's a similar situation here, but 'avail_head' is somehow involved in the
'cur_idx' calculation because of
	fill_vec_buf_split(..., vq->last_avail_idx + i, ...)
and 'i' depends on 'free_entries'. But we would need to look at the exact asm
code to be sure. I think we may add a barrier here to avoid possible issues.
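
Condensed, the tx path in question looks roughly like this (a simplified
sketch; the real virtio_dev_tx_split() has more around it):

	free_entries = *(volatile uint16_t *)&vq->avail->idx -
			vq->last_avail_idx;
	if (free_entries == 0)
		return 0;

	count = RTE_MIN(count, MAX_PKT_BURST);
	count = RTE_MIN(count, free_entries);

	for (i = 0; i < count; i++)
		/* the ring address is built from last_avail_idx + i;
		 * the avail->idx value only bounds the loop via count */
		fill_vec_buf_split(dev, vq, vq->last_avail_idx + i, ...);

So whether a data dependency survives down to the ring access depends on how
the compiler materializes 'count', which is why the barrier is the safe
choice.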

> Thanks
> 
> 
>>
>>>       VHOST_LOG_DEBUG(VHOST_DATA, "(%d) %s\n", dev->vid, __func__);
>>>         count = RTE_MIN(count, MAX_PKT_BURST);
>>>
> 
>
  
Jason Wang Dec. 6, 2018, 1:25 p.m. UTC | #4
On 2018/12/6 8:48 PM, Ilya Maximets wrote:
> On 06.12.2018 7:17, Jason Wang wrote:
>> On 2018/12/5 7:30 PM, Ilya Maximets wrote:
>>> On 05.12.2018 12:49, Maxime Coquelin wrote:
>>>> A read barrier is required to ensure the ordering between
>>>> available index and the descriptor reads is enforced.
>>>>
>>>> Fixes: 4796ad63ba1f ("examples/vhost: import userspace vhost application")
>>>> Cc: stable@dpdk.org
>>>>
>>>> Reported-by: Jason Wang <jasowang@redhat.com>
>>>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>>>> ---
>>>>    lib/librte_vhost/virtio_net.c | 12 ++++++++++++
>>>>    1 file changed, 12 insertions(+)
>>>>
>>>> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
>>>> index 5e1a1a727..f11ebb54f 100644
>>>> --- a/lib/librte_vhost/virtio_net.c
>>>> +++ b/lib/librte_vhost/virtio_net.c
>>>> @@ -791,6 +791,12 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>>>>        rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);
>>>>        avail_head = *((volatile uint16_t *)&vq->avail->idx);
>>>>    +    /*
>>>> +     * The ordering between avail index and
>>>> +     * desc reads needs to be enforced.
>>>> +     */
>>>> +    rte_smp_rmb();
>>>> +
>>> Hmm. This looks weird to me.
>>> Could you please describe the bad scenario here? (It'll be good to have it
>>> in commit message too)
>>>
>>> As I understand, you're enforcing the read of avail->idx to happen before
>>> reading the avail->ring[avail_idx]. Is it correct?
>>>
>>> But we have following code sequence:
>>>
>>> 1. read avail->idx (avail_head).
>>> 2. check that last_avail_idx != avail_head.
>>> 3. read from the ring using last_avail_idx.
>>>
>>> So, there is a strict dependency between all 3 steps and the memory
>>> transaction will be finished at the step #2 in any case. There is no
>>> way to read the ring before reading the avail->idx.
>>>
>>> Am I missing something?
>>
>> Nope, I kind of get what you meaning now. And even if we will
>>
>> 4. read descriptor from descriptor ring using the id read from 3
>>
>> 5. read descriptor content according to the address from 4
>>
>> They still have dependent memory access. So there's no need for rmb.
>>
> On a second glance I changed my mind.
> The code looks like this:
>
> 1. read avail_head = avail->idx
> 2. read cur_idx    = last_avail_idx
> if (cur_idx != avail_head) {
>      3. read idx = avail->ring[cur_idx]
>      4. read desc[idx]
> }
>
> There is an address (data) dependency: 2 -> 3 -> 4.
> These reads could not be reordered.
>
> But it's only control dependency between 1 and (3, 4), because 'avail_head'
> is not used to calculate 'cur_idx'. In case of aggressive speculative
> execution, 1 could be reordered with 3 resulting with reading of not yet
> updated 'idx'.
>
> Not sure if speculative execution could go so far while 'avail_head' is not
> read yet, but it's should be possible in theory.
>
> Thoughts ?


I think I've changed my mind as well; this is similar to the discussion of
desc_is_avail(). So I think it's possible.


>
>>>>        for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
>>>>            uint32_t pkt_len = pkts[pkt_idx]->pkt_len + dev->vhost_hlen;
>>>>            uint16_t nr_vec = 0;
>>>> @@ -1373,6 +1379,12 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>>>>        if (free_entries == 0)
>>>>            return 0;
>>>>    +    /*
>>>> +     * The ordering between avail index and
>>>> +     * desc reads needs to be enforced.
>>>> +     */
>>>> +    rte_smp_rmb();
>>>> +
>>> This one is strange too.
>>>
>>>      free_entries = *((volatile uint16_t *)&vq->avail->idx) -
>>>              vq->last_avail_idx;
>>>      if (free_entries == 0)
>>>          return 0;
>>>
>>> The code reads the value of avail->idx and uses the value on the next
>>> line even with any compiler optimizations. There is no way for CPU to
>>> postpone the actual read.
>>
>> Yes.
>>
> It's kind of similar situation here, but 'avail_head' is involved somehow
> in 'cur_idx' calculation because of
> 	fill_vec_buf_split(..., vq->last_avail_idx + i, ...)
> And 'i' depends on 'free_entries'.


I agree it depends on the compiler; it can choose to remove such a data
dependency.


> But we need to look at the exact asm
> code to be sure.


I think it's probably hard to draw a conclusion by checking the asm code
generated by one specific compiler version or vendor.


>   I think, we may add barrier here to avoid possible issues.


Yes.


Thanks.


>
>> Thanks
>>
>>
>>>>        VHOST_LOG_DEBUG(VHOST_DATA, "(%d) %s\n", dev->vid, __func__);
>>>>          count = RTE_MIN(count, MAX_PKT_BURST);
>>>>
>>
  
Michael S. Tsirkin Dec. 6, 2018, 1:48 p.m. UTC | #5
On Thu, Dec 06, 2018 at 12:17:38PM +0800, Jason Wang wrote:
> 
> On 2018/12/5 7:30 PM, Ilya Maximets wrote:
> > On 05.12.2018 12:49, Maxime Coquelin wrote:
> > > A read barrier is required to ensure the ordering between
> > > available index and the descriptor reads is enforced.
> > > 
> > > Fixes: 4796ad63ba1f ("examples/vhost: import userspace vhost application")
> > > Cc: stable@dpdk.org
> > > 
> > > Reported-by: Jason Wang <jasowang@redhat.com>
> > > Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> > > ---
> > >   lib/librte_vhost/virtio_net.c | 12 ++++++++++++
> > >   1 file changed, 12 insertions(+)
> > > 
> > > diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> > > index 5e1a1a727..f11ebb54f 100644
> > > --- a/lib/librte_vhost/virtio_net.c
> > > +++ b/lib/librte_vhost/virtio_net.c
> > > @@ -791,6 +791,12 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
> > >   	rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);
> > >   	avail_head = *((volatile uint16_t *)&vq->avail->idx);
> > > +	/*
> > > +	 * The ordering between avail index and
> > > +	 * desc reads needs to be enforced.
> > > +	 */
> > > +	rte_smp_rmb();
> > > +
> > Hmm. This looks weird to me.
> > Could you please describe the bad scenario here? (It'll be good to have it
> > in commit message too)
> > 
> > As I understand, you're enforcing the read of avail->idx to happen before
> > reading the avail->ring[avail_idx]. Is it correct?
> > 
> > But we have following code sequence:
> > 
> > 1. read avail->idx (avail_head).
> > 2. check that last_avail_idx != avail_head.
> > 3. read from the ring using last_avail_idx.
> > 
> > So, there is a strict dependency between all 3 steps and the memory
> > transaction will be finished at the step #2 in any case. There is no
> > way to read the ring before reading the avail->idx.
> > 
> > Am I missing something?
> 
> 
> Nope, I kind of get what you meaning now. And even if we will
> 
> 4. read descriptor from descriptor ring using the id read from 3
> 
> 5. read descriptor content according to the address from 4
> 
> They still have dependent memory access. So there's no need for rmb.

I am pretty sure on some architectures there is a need for a barrier
here.  This is an execution dependency, since avail_head is not used as an
index, and reads can be speculated.  So the read from the ring can be
speculated and executed before the read of avail_head and the check.

However SMP rmb is/should be free on x86.  So unless someone on this
thread is actually testing performance on non-x86, you are both wasting
cycles discussing removal of nop macros and also risk pushing untested
software on users.


> 
> > 
> > >   	for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
> > >   		uint32_t pkt_len = pkts[pkt_idx]->pkt_len + dev->vhost_hlen;
> > >   		uint16_t nr_vec = 0;
> > > @@ -1373,6 +1379,12 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
> > >   	if (free_entries == 0)
> > >   		return 0;
> > > +	/*
> > > +	 * The ordering between avail index and
> > > +	 * desc reads needs to be enforced.
> > > +	 */
> > > +	rte_smp_rmb();
> > > +
> > This one is strange too.
> > 
> > 	free_entries = *((volatile uint16_t *)&vq->avail->idx) -
> > 			vq->last_avail_idx;
> > 	if (free_entries == 0)
> > 		return 0;
> > 
> > The code reads the value of avail->idx and uses the value on the next
> > line even with any compiler optimizations. There is no way for CPU to
> > postpone the actual read.
> 
> 
> Yes.
> 
> Thanks
> 
> 
> > 
> > >   	VHOST_LOG_DEBUG(VHOST_DATA, "(%d) %s\n", dev->vid, __func__);
> > >   	count = RTE_MIN(count, MAX_PKT_BURST);
> > >
  
Ilya Maximets Dec. 7, 2018, 2:58 p.m. UTC | #6
On 06.12.2018 16:48, Michael S. Tsirkin wrote:
> On Thu, Dec 06, 2018 at 12:17:38PM +0800, Jason Wang wrote:
>>
>> On 2018/12/5 7:30 PM, Ilya Maximets wrote:
>>> On 05.12.2018 12:49, Maxime Coquelin wrote:
>>>> A read barrier is required to ensure the ordering between
>>>> available index and the descriptor reads is enforced.
>>>>
>>>> Fixes: 4796ad63ba1f ("examples/vhost: import userspace vhost application")
>>>> Cc: stable@dpdk.org
>>>>
>>>> Reported-by: Jason Wang <jasowang@redhat.com>
>>>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>>>> ---
>>>>   lib/librte_vhost/virtio_net.c | 12 ++++++++++++
>>>>   1 file changed, 12 insertions(+)
>>>>
>>>> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
>>>> index 5e1a1a727..f11ebb54f 100644
>>>> --- a/lib/librte_vhost/virtio_net.c
>>>> +++ b/lib/librte_vhost/virtio_net.c
>>>> @@ -791,6 +791,12 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>>>>   	rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);
>>>>   	avail_head = *((volatile uint16_t *)&vq->avail->idx);
>>>> +	/*
>>>> +	 * The ordering between avail index and
>>>> +	 * desc reads needs to be enforced.
>>>> +	 */
>>>> +	rte_smp_rmb();
>>>> +
>>> Hmm. This looks weird to me.
>>> Could you please describe the bad scenario here? (It'll be good to have it
>>> in commit message too)
>>>
>>> As I understand, you're enforcing the read of avail->idx to happen before
>>> reading the avail->ring[avail_idx]. Is it correct?
>>>
>>> But we have following code sequence:
>>>
>>> 1. read avail->idx (avail_head).
>>> 2. check that last_avail_idx != avail_head.
>>> 3. read from the ring using last_avail_idx.
>>>
>>> So, there is a strict dependency between all 3 steps and the memory
>>> transaction will be finished at the step #2 in any case. There is no
>>> way to read the ring before reading the avail->idx.
>>>
>>> Am I missing something?
>>
>>
>> Nope, I kind of get what you meaning now. And even if we will
>>
>> 4. read descriptor from descriptor ring using the id read from 3
>>
>> 5. read descriptor content according to the address from 4
>>
>> They still have dependent memory access. So there's no need for rmb.
> 
> I am pretty sure on some architectures there is a need for a barrier
> here.  This is an execution dependency since avail_head is not used as an
> index. And reads can be speculated.  So the read from the ring can be
> speculated and execute before the read of avail_head and the check.
> 
> However SMP rmb is/should be free on x86.

rte_smp_rmb() turns into a compiler barrier on x86. And compiler barriers
can be harmful too in some cases.
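
For reference, rte_smp_rmb() expands roughly as follows in the per-arch
headers (paraphrased from the 18.11-era rte_atomic headers -- check the
tree for the authoritative definitions):

	/* x86: loads are not reordered with other loads, so only the
	 * compiler needs restraining */
	#define rte_compiler_barrier() do {		\
		asm volatile ("" : : : "memory");	\
	} while (0)
	#define rte_smp_rmb() rte_compiler_barrier()

	/* ARMv8: a real load barrier, inner shareable domain */
	#define rte_smp_rmb() asm volatile("dmb ishld" : : : "memory")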

> So unless someone on this
> thread is actually testing performance on non-x86, you are both wasting
> cycles discussing removal of nop macros and also risk pushing untested
> software on users.

Since DPDK supports more than just x86, we have to consider possible
performance issues on different architectures. Given that this patch is not
needed on x86, the only thing we need to consider is stability and performance
on non-x86 architectures. If we don't pay attention to things like this,
vhost-user could become completely unusable on non-x86 architectures someday.

It would be great if someone could test patches (an autotest would be nice
too) on ARM at least. But, unfortunately, testing of DPDK is still far from
ideal, and the lack of hardware is the main issue. I'm running vhost with
qemu on my ARMv8 platform from time to time, but it's definitely not enough,
and I cannot test every patch on the list.

However, I ran a few tests on ARMv8, and this patch shows no significant
performance difference. But it makes the performance a bit more stable
between runs, which is nice.

> 
> 
>>
>>>
>>>>   	for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
>>>>   		uint32_t pkt_len = pkts[pkt_idx]->pkt_len + dev->vhost_hlen;
>>>>   		uint16_t nr_vec = 0;
>>>> @@ -1373,6 +1379,12 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>>>>   	if (free_entries == 0)
>>>>   		return 0;
>>>> +	/*
>>>> +	 * The ordering between avail index and
>>>> +	 * desc reads needs to be enforced.
>>>> +	 */
>>>> +	rte_smp_rmb();
>>>> +
>>> This one is strange too.
>>>
>>> 	free_entries = *((volatile uint16_t *)&vq->avail->idx) -
>>> 			vq->last_avail_idx;
>>> 	if (free_entries == 0)
>>> 		return 0;
>>>
>>> The code reads the value of avail->idx and uses the value on the next
>>> line even with any compiler optimizations. There is no way for CPU to
>>> postpone the actual read.
>>
>>
>> Yes.
>>
>> Thanks
>>
>>
>>>
>>>>   	VHOST_LOG_DEBUG(VHOST_DATA, "(%d) %s\n", dev->vid, __func__);
>>>>   	count = RTE_MIN(count, MAX_PKT_BURST);
>>>>
> 
>
  
Michael S. Tsirkin Dec. 7, 2018, 3:44 p.m. UTC | #7
On Fri, Dec 07, 2018 at 05:58:24PM +0300, Ilya Maximets wrote:
> On 06.12.2018 16:48, Michael S. Tsirkin wrote:
> > On Thu, Dec 06, 2018 at 12:17:38PM +0800, Jason Wang wrote:
> >>
> >> On 2018/12/5 7:30 PM, Ilya Maximets wrote:
> >>> On 05.12.2018 12:49, Maxime Coquelin wrote:
> >>>> A read barrier is required to ensure the ordering between
> >>>> available index and the descriptor reads is enforced.
> >>>>
> >>>> Fixes: 4796ad63ba1f ("examples/vhost: import userspace vhost application")
> >>>> Cc: stable@dpdk.org
> >>>>
> >>>> Reported-by: Jason Wang <jasowang@redhat.com>
> >>>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> >>>> ---
> >>>>   lib/librte_vhost/virtio_net.c | 12 ++++++++++++
> >>>>   1 file changed, 12 insertions(+)
> >>>>
> >>>> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> >>>> index 5e1a1a727..f11ebb54f 100644
> >>>> --- a/lib/librte_vhost/virtio_net.c
> >>>> +++ b/lib/librte_vhost/virtio_net.c
> >>>> @@ -791,6 +791,12 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
> >>>>   	rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);
> >>>>   	avail_head = *((volatile uint16_t *)&vq->avail->idx);
> >>>> +	/*
> >>>> +	 * The ordering between avail index and
> >>>> +	 * desc reads needs to be enforced.
> >>>> +	 */
> >>>> +	rte_smp_rmb();
> >>>> +
> >>> Hmm. This looks weird to me.
> >>> Could you please describe the bad scenario here? (It'll be good to have it
> >>> in commit message too)
> >>>
> >>> As I understand, you're enforcing the read of avail->idx to happen before
> >>> reading the avail->ring[avail_idx]. Is it correct?
> >>>
> >>> But we have following code sequence:
> >>>
> >>> 1. read avail->idx (avail_head).
> >>> 2. check that last_avail_idx != avail_head.
> >>> 3. read from the ring using last_avail_idx.
> >>>
> >>> So, there is a strict dependency between all 3 steps and the memory
> >>> transaction will be finished at the step #2 in any case. There is no
> >>> way to read the ring before reading the avail->idx.
> >>>
> >>> Am I missing something?
> >>
> >>
> >> Nope, I kind of get what you meaning now. And even if we will
> >>
> >> 4. read descriptor from descriptor ring using the id read from 3
> >>
> >> 5. read descriptor content according to the address from 4
> >>
> >> They still have dependent memory access. So there's no need for rmb.
> > 
> > I am pretty sure on some architectures there is a need for a barrier
> > here.  This is an execution dependency since avail_head is not used as an
> > index. And reads can be speculated.  So the read from the ring can be
> > speculated and execute before the read of avail_head and the check.
> > 
> > However SMP rmb is/should be free on x86.
> 
> rte_smp_rmd() turns into compiler barrier on x86. And compiler barriers
> could be harmful too in some cases.
> 
> > So unless someone on this
> > thread is actually testing performance on non-x86, you are both wasting
> > cycles discussing removal of nop macros and also risk pushing untested
> > software on users.
> 
> Since DPDK supports not only x86, we have to consider possible performance
> issues on different architectures. In fact that this patch makes no sense
> on x86, the only thing we need to consider is the stability and performance
> on non-x86 architectures. If we'll not pay attention to things like this,
> vhost-user could become completely unusable on non-x86 architectures someday.
> 
> It'll be cool if someone could test patches (autotest would be nice too) on
> ARM at least. But, unfortunately, testing of DPDK is still far from being
> ideal. And the lack of hardware is the main issue. I'm running vhost with
> qemu on my ARMv8 platform from time to time, but it's definitely not enough.
> And I can not test every patch on a list.
> 
> However I made a few tests on ARMv8 and this patch shows no significant
> performance difference. But it makes the performance a bit more stable
> between runs, which is nice.

I'm sorry about being unclear. I think a barrier is required, so this
patch is good.  I was trying to say that splitting hairs trying to prove
that the barrier can be omitted without testing that omitting it gives a
performance benefit doesn't make sense. Since you observed that adding a
barrier actually helps performance stability, it's all good.


> > 
> > 
> >>
> >>>
> >>>>   	for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
> >>>>   		uint32_t pkt_len = pkts[pkt_idx]->pkt_len + dev->vhost_hlen;
> >>>>   		uint16_t nr_vec = 0;
> >>>> @@ -1373,6 +1379,12 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
> >>>>   	if (free_entries == 0)
> >>>>   		return 0;
> >>>> +	/*
> >>>> +	 * The ordering between avail index and
> >>>> +	 * desc reads needs to be enforced.
> >>>> +	 */
> >>>> +	rte_smp_rmb();
> >>>> +
> >>> This one is strange too.
> >>>
> >>> 	free_entries = *((volatile uint16_t *)&vq->avail->idx) -
> >>> 			vq->last_avail_idx;
> >>> 	if (free_entries == 0)
> >>> 		return 0;
> >>>
> >>> The code reads the value of avail->idx and uses the value on the next
> >>> line even with any compiler optimizations. There is no way for CPU to
> >>> postpone the actual read.
> >>
> >>
> >> Yes.
> >>
> >> Thanks
> >>
> >>
> >>>
> >>>>   	VHOST_LOG_DEBUG(VHOST_DATA, "(%d) %s\n", dev->vid, __func__);
> >>>>   	count = RTE_MIN(count, MAX_PKT_BURST);
> >>>>
> > 
> >
  
Ilya Maximets Dec. 11, 2018, 10:38 a.m. UTC | #8
On 05.12.2018 12:49, Maxime Coquelin wrote:
> A read barrier is required to ensure the ordering between
> available index and the descriptor reads is enforced.
> 
> Fixes: 4796ad63ba1f ("examples/vhost: import userspace vhost application")
> Cc: stable@dpdk.org
> 
> Reported-by: Jason Wang <jasowang@redhat.com>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
>  lib/librte_vhost/virtio_net.c | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
> 

I'd like to have a bit more detail about the bad scenario in the commit
message, because it's not an obvious change at first glance.

Otherwise,
Acked-by: Ilya Maximets <i.maximets@samsung.com>


> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> index 5e1a1a727..f11ebb54f 100644
> --- a/lib/librte_vhost/virtio_net.c
> +++ b/lib/librte_vhost/virtio_net.c
> @@ -791,6 +791,12 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>  	rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);
>  	avail_head = *((volatile uint16_t *)&vq->avail->idx);
>  
> +	/*
> +	 * The ordering between avail index and
> +	 * desc reads needs to be enforced.
> +	 */
> +	rte_smp_rmb();
> +
>  	for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
>  		uint32_t pkt_len = pkts[pkt_idx]->pkt_len + dev->vhost_hlen;
>  		uint16_t nr_vec = 0;
> @@ -1373,6 +1379,12 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>  	if (free_entries == 0)
>  		return 0;
>  
> +	/*
> +	 * The ordering between avail index and
> +	 * desc reads needs to be enforced.
> +	 */
> +	rte_smp_rmb();
> +
>  	VHOST_LOG_DEBUG(VHOST_DATA, "(%d) %s\n", dev->vid, __func__);
>  
>  	count = RTE_MIN(count, MAX_PKT_BURST);
>
  
Maxime Coquelin Dec. 11, 2018, 2:46 p.m. UTC | #9
On 12/11/18 11:38 AM, Ilya Maximets wrote:
> On 05.12.2018 12:49, Maxime Coquelin wrote:
>> A read barrier is required to ensure the ordering between
>> available index and the descriptor reads is enforced.
>>
>> Fixes: 4796ad63ba1f ("examples/vhost: import userspace vhost application")
>> Cc: stable@dpdk.org
>>
>> Reported-by: Jason Wang <jasowang@redhat.com>
>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>> ---
>>   lib/librte_vhost/virtio_net.c | 12 ++++++++++++
>>   1 file changed, 12 insertions(+)
>>
> 
> I'd like to have a bit more details about a bad scenario in a commit
> message because it's not an obvious change at a first glance.

Sure, I'll rework the commit message in the v2.

> Otherwise,
> Acked-by: Ilya Maximets <i.maximets@samsung.com>

Thanks,
Maxime

> 
>> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
>> index 5e1a1a727..f11ebb54f 100644
>> --- a/lib/librte_vhost/virtio_net.c
>> +++ b/lib/librte_vhost/virtio_net.c
>> @@ -791,6 +791,12 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>>   	rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);
>>   	avail_head = *((volatile uint16_t *)&vq->avail->idx);
>>   
>> +	/*
>> +	 * The ordering between avail index and
>> +	 * desc reads needs to be enforced.
>> +	 */
>> +	rte_smp_rmb();
>> +
>>   	for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
>>   		uint32_t pkt_len = pkts[pkt_idx]->pkt_len + dev->vhost_hlen;
>>   		uint16_t nr_vec = 0;
>> @@ -1373,6 +1379,12 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>>   	if (free_entries == 0)
>>   		return 0;
>>   
>> +	/*
>> +	 * The ordering between avail index and
>> +	 * desc reads needs to be enforced.
>> +	 */
>> +	rte_smp_rmb();
>> +
>>   	VHOST_LOG_DEBUG(VHOST_DATA, "(%d) %s\n", dev->vid, __func__);
>>   
>>   	count = RTE_MIN(count, MAX_PKT_BURST);
>>
  

Patch

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 5e1a1a727..f11ebb54f 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -791,6 +791,12 @@  virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);
 	avail_head = *((volatile uint16_t *)&vq->avail->idx);
 
+	/*
+	 * The ordering between avail index and
+	 * desc reads needs to be enforced.
+	 */
+	rte_smp_rmb();
+
 	for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
 		uint32_t pkt_len = pkts[pkt_idx]->pkt_len + dev->vhost_hlen;
 		uint16_t nr_vec = 0;
@@ -1373,6 +1379,12 @@  virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	if (free_entries == 0)
 		return 0;
 
+	/*
+	 * The ordering between avail index and
+	 * desc reads needs to be enforced.
+	 */
+	rte_smp_rmb();
+
 	VHOST_LOG_DEBUG(VHOST_DATA, "(%d) %s\n", dev->vid, __func__);
 
 	count = RTE_MIN(count, MAX_PKT_BURST);