net/ring: advertise multi segment support.

Message ID 1600425415-31834-1-git-send-email-dceara@redhat.com (mailing list archive)
State Superseded, archived
Delegated to: Ferruh Yigit
Series: net/ring: advertise multi segment support.

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/iol-broadcom-Performance success Performance Testing PASS
ci/iol-broadcom-Functional success Functional Testing PASS
ci/travis-robot success Travis build: passed
ci/iol-testing success Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/Intel-compilation success Compilation OK

Commit Message

Dumitru Ceara Sept. 18, 2020, 10:36 a.m. UTC
  Even though ring interfaces don't support any other TX/RX offloads, they
do support sending multi-segment packets, and this should be advertised
in order not to break applications that use ring interfaces.

Signed-off-by: Dumitru Ceara <dceara@redhat.com>
---
 drivers/net/ring/rte_eth_ring.c | 1 +
 1 file changed, 1 insertion(+)
  

Comments

Ferruh Yigit Sept. 22, 2020, 2:21 p.m. UTC | #1
On 9/18/2020 11:36 AM, Dumitru Ceara wrote:
> Even though ring interfaces don't support any other TX/RX offloads they
> do support sending multi segment packets and this should be advertised
> in order to not break applications that use ring interfaces.
 >

Does the ring PMD support sending multi-segment packets?

As far as I can see, the ring PMD doesn't know about the mbuf segments.
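
For reference, the TX path in question boils down to a plain pointer
enqueue. A lightly simplified sketch of eth_ring_tx() from
drivers/net/ring/rte_eth_ring.c (stats bookkeeping omitted):

static uint16_t
eth_ring_tx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
{
	struct ring_queue *r = q;	/* r->rng is the underlying rte_ring */

	/* Only the head mbuf pointers are enqueued; any chained segments
	 * hang off mbuf->next and travel with the head, untouched and
	 * uninspected. */
	return (uint16_t)rte_ring_enqueue_burst(r->rng, (void **)bufs,
			nb_bufs, NULL);
}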

> 
> Signed-off-by: Dumitru Ceara <dceara@redhat.com>
> ---
>   drivers/net/ring/rte_eth_ring.c | 1 +
>   1 file changed, 1 insertion(+)
> 
> diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c
> index 733c898..59d1e67 100644
> --- a/drivers/net/ring/rte_eth_ring.c
> +++ b/drivers/net/ring/rte_eth_ring.c
> @@ -160,6 +160,7 @@ struct pmd_internals {
>   	dev_info->max_mac_addrs = 1;
>   	dev_info->max_rx_pktlen = (uint32_t)-1;
>   	dev_info->max_rx_queues = (uint16_t)internals->max_rx_queues;
> +	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
>   	dev_info->max_tx_queues = (uint16_t)internals->max_tx_queues;
>   	dev_info->min_rx_bufsize = 0;
>   
>
  
Dumitru Ceara Sept. 28, 2020, 7:31 a.m. UTC | #2
On 9/22/20 4:21 PM, Ferruh Yigit wrote:
> On 9/18/2020 11:36 AM, Dumitru Ceara wrote:
>> Even though ring interfaces don't support any other TX/RX offloads they
>> do support sending multi segment packets and this should be advertised
>> in order to not break applications that use ring interfaces.
>>
> 
> Does ring PMD support sending multi segmented packets?
> 

Yes, sending multi-segment packets works fine with the ring PMD.

> As far as I can see ring PMD doesn't know about the mbuf segments.
> 

Right, the PMD doesn't care about the mbuf segments, but it implicitly
supports sending multi-segment packets. From what I see, this is actually
the case for most PMDs, in the sense that most don't even check the
DEV_TX_OFFLOAD_MULTI_SEGS flag, and if the application sends multi-segment
packets they are simply accepted.
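
For illustration, building and sending a two-segment packet from an
application looks roughly like this (a sketch; setup of the mempool and
port is assumed to happen elsewhere):

#include <rte_mbuf.h>
#include <rte_ethdev.h>

static void
send_two_seg_packet(struct rte_mempool *pool, uint16_t port_id)
{
	struct rte_mbuf *head = rte_pktmbuf_alloc(pool);
	struct rte_mbuf *tail = rte_pktmbuf_alloc(pool);

	if (head == NULL || tail == NULL)
		return; /* error handling elided */

	/* ... fill payload and set data_len/pkt_len on both mbufs ... */

	if (rte_pktmbuf_chain(head, tail) == 0) {
		/* head->nb_segs is now 2; most TX paths just accept it. */
		if (rte_eth_tx_burst(port_id, 0, &head, 1) == 0)
			rte_pktmbuf_free(head); /* frees the whole chain */
	}
}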

However, the fact that the ring PMD doesn't advertise this implicit
support forces applications that use the ring PMD to special-case the
handling of ring interfaces. If the ring PMD advertised
DEV_TX_OFFLOAD_MULTI_SEGS, upper layers could be oblivious to the type of
the underlying interface.

Thanks,
Dumitru
  
Ferruh Yigit Sept. 28, 2020, 10:25 a.m. UTC | #3
On 9/28/2020 8:31 AM, Dumitru Ceara wrote:
> On 9/22/20 4:21 PM, Ferruh Yigit wrote:
>> On 9/18/2020 11:36 AM, Dumitru Ceara wrote:
>>> Even though ring interfaces don't support any other TX/RX offloads they
>>> do support sending multi segment packets and this should be advertised
>>> in order to not break applications that use ring interfaces.
>>>
>>
>> Does ring PMD support sending multi segmented packets?
>>
> 
> Yes, sending multi segmented packets works fine with ring PMD.
> 

Define "works fine" :)

All PMDs can put the first mbuf of a chained mbuf into the ring; in that
case, what is the difference between the ones that support
'DEV_TX_OFFLOAD_MULTI_SEGS' and the ones that don't?

If the traffic is only from ring PMD to ring PMD, you won't notice the
difference between segmented and non-segmented mbufs, and it will look like
segmented packets work fine.
But if other PMDs are involved in the forwarding, or if the packets need to
be processed, will it still work fine?

>> As far as I can see ring PMD doesn't know about the mbuf segments.
>>
> 
> Right, the PMD doesn't care about the mbuf segments but it implicitly
> supports sending multi segmented packets. From what I see it's actually
> the case for most of the PMDs, in the sense that most don't even check
> the DEV_TX_OFFLOAD_MULTI_SEGS flag and if the application sends multi
> segment packets they are just accepted.
 >

As far as I can see, if segmented packets are sent, the ring PMD will put
the first mbuf into the ring without doing anything specific with the next
segments.

If 'DEV_TX_OFFLOAD_MULTI_SEGS' is supported, I would expect it to detect
segmented packets and put each chained mbuf into a separate slot in the ring.

> 
> However, the fact that the ring PMD doesn't advertise this implicit
> support forces applications that use ring PMD to have a special case for
> handling ring interfaces. If the ring PMD would advertise
> DEV_TX_OFFLOAD_MULTI_SEGS this would allow upper layers to be oblivious
> to the type of underlying interface.
> 

This is not special-casing the ring PMD; this is why we have the offload
capability flags. Applications should behave according to the capability
flags, not per specific PMD.

Is there any specific use case you are trying to cover?
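
For example, behaving according to the capability flags means a check along
these lines at setup time (a sketch; the helper name is hypothetical):

#include <rte_ethdev.h>

/* Sketch: query whether chained mbufs may be handed to this port's TX
 * path. Callers branch on the result instead of on the PMD type. */
static int
port_supports_multi_seg_tx(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;

	if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
		return 0;

	return (dev_info.tx_offload_capa & DEV_TX_OFFLOAD_MULTI_SEGS) != 0;
}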
  
Ananyev, Konstantin Sept. 28, 2020, 11 a.m. UTC | #4
> On 9/28/2020 8:31 AM, Dumitru Ceara wrote:
> > On 9/22/20 4:21 PM, Ferruh Yigit wrote:
> >> On 9/18/2020 11:36 AM, Dumitru Ceara wrote:
> >>> Even though ring interfaces don't support any other TX/RX offloads they
> >>> do support sending multi segment packets and this should be advertised
> >>> in order to not break applications that use ring interfaces.
> >>>
> >>
> >> Does ring PMD support sending multi segmented packets?
> >>
> >
> > Yes, sending multi segmented packets works fine with ring PMD.
> >
> 
> Define "works fine" :)
> 
> All PMDs can put the first mbuf of the chained mbuf to the ring, in that case
> what is the difference between the ones supports 'DEV_TX_OFFLOAD_MULTI_SEGS' and
> the ones doesn't support?
> 
> If the traffic is only from ring PMD to ring PMD, you won't recognize the
> difference between segmented or not-segmented mbufs, and it will look like
> segmented packets works fine.
> But if there is other PMDs involved in the forwarding, or if need to process the
> packets, will it still work fine?
> 
> >> As far as I can see ring PMD doesn't know about the mbuf segments.
> >>
> >
> > Right, the PMD doesn't care about the mbuf segments but it implicitly
> > supports sending multi segmented packets. From what I see it's actually
> > the case for most of the PMDs, in the sense that most don't even check
> > the DEV_TX_OFFLOAD_MULTI_SEGS flag and if the application sends multi
> > segment packets they are just accepted.
>  >
> 
> As far as I can see, if the segmented packets sent, the ring PMD will put the
> first mbuf into the ring without doing anything specific to the next segments.
> 
> If the 'DEV_TX_OFFLOAD_MULTI_SEGS' is supported I expect it should detect the
> segmented packets and put each chained mbuf into the separate field in the ring.

Hmm, I wonder why you think this is necessary?
From my perspective, the current behaviour is sufficient for TX-ing
multi-seg packets over the ring.

> 
> >
> > However, the fact that the ring PMD doesn't advertise this implicit
> > support forces applications that use ring PMD to have a special case for
> > handling ring interfaces. If the ring PMD would advertise
> > DEV_TX_OFFLOAD_MULTI_SEGS this would allow upper layers to be oblivious
> > to the type of underlying interface.
> >
> 
> This is not handling the special case for the ring PMD, this is why he have the
> offload capability flag. Application should behave according capability flags,
> not per specific PMD.
> 
> Is there any specific usecase you are trying to cover?
  
Bruce Richardson Sept. 28, 2020, 11:01 a.m. UTC | #5
On Mon, Sep 28, 2020 at 11:25:34AM +0100, Ferruh Yigit wrote:
> On 9/28/2020 8:31 AM, Dumitru Ceara wrote:
> > On 9/22/20 4:21 PM, Ferruh Yigit wrote:
> > > On 9/18/2020 11:36 AM, Dumitru Ceara wrote:
> > > > Even though ring interfaces don't support any other TX/RX offloads they
> > > > do support sending multi segment packets and this should be advertised
> > > > in order to not break applications that use ring interfaces.
> > > > 
> > > 
> > > Does ring PMD support sending multi segmented packets?
> > > 
> > 
> > Yes, sending multi segmented packets works fine with ring PMD.
> > 
> 
> Define "works fine" :)
> 
> All PMDs can put the first mbuf of the chained mbuf to the ring, in that
> case what is the difference between the ones supports
> 'DEV_TX_OFFLOAD_MULTI_SEGS' and the ones doesn't support?
> 
> If the traffic is only from ring PMD to ring PMD, you won't recognize the
> difference between segmented or not-segmented mbufs, and it will look like
> segmented packets works fine.
> But if there is other PMDs involved in the forwarding, or if need to process
> the packets, will it still work fine?
> 

What other PMDs do or don't do should be irrelevant here, I think. The fact
that multi-segment packets make it through the ring PMD in valid form should
be sufficient to mark the capability as supported.

> > > As far as I can see ring PMD doesn't know about the mbuf segments.
> > > 
> > 
> > Right, the PMD doesn't care about the mbuf segments but it implicitly
> > supports sending multi segmented packets. From what I see it's actually
> > the case for most of the PMDs, in the sense that most don't even check
> > the DEV_TX_OFFLOAD_MULTI_SEGS flag and if the application sends multi
> > segment packets they are just accepted.
> >
> 
> As far as I can see, if the segmented packets sent, the ring PMD will put
> the first mbuf into the ring without doing anything specific to the next
> segments.
> 
> If the 'DEV_TX_OFFLOAD_MULTI_SEGS' is supported I expect it should detect
> the segmented packets and put each chained mbuf into the separate field in
> the ring.
> 

Why? What would be the advantage of that? Right now, if you send a valid
packet chain into the ring PMD, you get a valid packet chain out again on
the other side, so I don't see what needs to change about that behaviour.

> > 
> > However, the fact that the ring PMD doesn't advertise this implicit
> > support forces applications that use ring PMD to have a special case for
> > handling ring interfaces. If the ring PMD would advertise
> > DEV_TX_OFFLOAD_MULTI_SEGS this would allow upper layers to be oblivious
> > to the type of underlying interface.
> > 
> 
> This is not handling the special case for the ring PMD, this is why he have
> the offload capability flag. Application should behave according capability
> flags, not per specific PMD.
> 
> Is there any specific usecase you are trying to cover?
  
Ferruh Yigit Sept. 28, 2020, 12:42 p.m. UTC | #6
On 9/28/2020 12:00 PM, Ananyev, Konstantin wrote:
>> On 9/28/2020 8:31 AM, Dumitru Ceara wrote:
>>> On 9/22/20 4:21 PM, Ferruh Yigit wrote:
>>>> On 9/18/2020 11:36 AM, Dumitru Ceara wrote:
>>>>> Even though ring interfaces don't support any other TX/RX offloads they
>>>>> do support sending multi segment packets and this should be advertised
>>>>> in order to not break applications that use ring interfaces.
>>>>>
>>>>
>>>> Does ring PMD support sending multi segmented packets?
>>>>
>>>
>>> Yes, sending multi segmented packets works fine with ring PMD.
>>>
>>
>> Define "works fine" :)
>>
>> All PMDs can put the first mbuf of the chained mbuf to the ring, in that case
>> what is the difference between the ones supports 'DEV_TX_OFFLOAD_MULTI_SEGS' and
>> the ones doesn't support?
>>
>> If the traffic is only from ring PMD to ring PMD, you won't recognize the
>> difference between segmented or not-segmented mbufs, and it will look like
>> segmented packets works fine.
>> But if there is other PMDs involved in the forwarding, or if need to process the
>> packets, will it still work fine?
>>
>>>> As far as I can see ring PMD doesn't know about the mbuf segments.
>>>>
>>>
>>> Right, the PMD doesn't care about the mbuf segments but it implicitly
>>> supports sending multi segmented packets. From what I see it's actually
>>> the case for most of the PMDs, in the sense that most don't even check
>>> the DEV_TX_OFFLOAD_MULTI_SEGS flag and if the application sends multi
>>> segment packets they are just accepted.
>>   >
>>
>> As far as I can see, if the segmented packets sent, the ring PMD will put the
>> first mbuf into the ring without doing anything specific to the next segments.
>>
>> If the 'DEV_TX_OFFLOAD_MULTI_SEGS' is supported I expect it should detect the
>> segmented packets and put each chained mbuf into the separate field in the ring.
> 
> Hmm, wonder why do you think this is necessary?
>  From my perspective current behaviour is sufficient for TX-ing multi-seg packets
> over the ring.
> 

I was thinking based on what some PMDs are already doing, but right, the
ring may not need to do that.

Also, consider the case where one application is sending multi-segment
packets to the ring, and another application is pulling packets from the
ring and sending them to a PMD that does NOT support multi-seg TX. I thought
a ring PMD claiming multi-seg Tx support should serialize packets to cover
this case, but instead the ring claiming the 'DEV_RX_OFFLOAD_SCATTER'
capability can work by pushing the responsibility to the application.

So in this case the ring should support both 'DEV_TX_OFFLOAD_MULTI_SEGS' &
'DEV_RX_OFFLOAD_SCATTER'; what do you think?
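
In that scenario, the application pulling from the ring could flatten
chained mbufs itself before handing them to the non-multi-seg PMD. A sketch
(the egress_supports_multi_seg flag is hypothetical, derived from
tx_offload_capa at setup; rte_pktmbuf_linearize() copies the chain into the
first segment when there is room):

#include <rte_mbuf.h>
#include <rte_ethdev.h>

static void
forward_one(struct rte_mbuf *m, uint16_t egress_port,
	    int egress_supports_multi_seg)
{
	if (!egress_supports_multi_seg && m->nb_segs > 1 &&
			rte_pktmbuf_linearize(m) < 0) {
		/* Not enough tailroom in the first segment to flatten. */
		rte_pktmbuf_free(m);
		return;
	}
	if (rte_eth_tx_burst(egress_port, 0, &m, 1) == 0)
		rte_pktmbuf_free(m);
}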

>>
>>>
>>> However, the fact that the ring PMD doesn't advertise this implicit
>>> support forces applications that use ring PMD to have a special case for
>>> handling ring interfaces. If the ring PMD would advertise
>>> DEV_TX_OFFLOAD_MULTI_SEGS this would allow upper layers to be oblivious
>>> to the type of underlying interface.
>>>
>>
>> This is not handling the special case for the ring PMD, this is why he have the
>> offload capability flag. Application should behave according capability flags,
>> not per specific PMD.
>>
>> Is there any specific usecase you are trying to cover?
  
Ferruh Yigit Sept. 28, 2020, 12:45 p.m. UTC | #7
On 9/28/2020 12:01 PM, Bruce Richardson wrote:
> On Mon, Sep 28, 2020 at 11:25:34AM +0100, Ferruh Yigit wrote:
>> On 9/28/2020 8:31 AM, Dumitru Ceara wrote:
>>> On 9/22/20 4:21 PM, Ferruh Yigit wrote:
>>>> On 9/18/2020 11:36 AM, Dumitru Ceara wrote:
>>>>> Even though ring interfaces don't support any other TX/RX offloads they
>>>>> do support sending multi segment packets and this should be advertised
>>>>> in order to not break applications that use ring interfaces.
>>>>>
>>>>
>>>> Does ring PMD support sending multi segmented packets?
>>>>
>>>
>>> Yes, sending multi segmented packets works fine with ring PMD.
>>>
>>
>> Define "works fine" :)
>>
>> All PMDs can put the first mbuf of the chained mbuf to the ring, in that
>> case what is the difference between the ones supports
>> 'DEV_TX_OFFLOAD_MULTI_SEGS' and the ones doesn't support?
>>
>> If the traffic is only from ring PMD to ring PMD, you won't recognize the
>> difference between segmented or not-segmented mbufs, and it will look like
>> segmented packets works fine.
>> But if there is other PMDs involved in the forwarding, or if need to process
>> the packets, will it still work fine?
>>
> 
> What other PMDs do or don't do should be irrelevant here, I think. The fact
> that multi-segment PMDs make it though the ring PMD in valid form should be
> sufficient to mark it as supported.
> 
>>>> As far as I can see ring PMD doesn't know about the mbuf segments.
>>>>
>>>
>>> Right, the PMD doesn't care about the mbuf segments but it implicitly
>>> supports sending multi segmented packets. From what I see it's actually
>>> the case for most of the PMDs, in the sense that most don't even check
>>> the DEV_TX_OFFLOAD_MULTI_SEGS flag and if the application sends multi
>>> segment packets they are just accepted.
>>>
>>
>> As far as I can see, if the segmented packets sent, the ring PMD will put
>> the first mbuf into the ring without doing anything specific to the next
>> segments.
>>
>> If the 'DEV_TX_OFFLOAD_MULTI_SEGS' is supported I expect it should detect
>> the segmented packets and put each chained mbuf into the separate field in
>> the ring.
>>
> 
> Why, what would be the advantage of that? Right now if you send in a valid
> packet chain to the Ring PMD, you get a valid packet chain out again the
> other side, so I don't see what needs to change about that behaviour.
> 

Got it. Konstantin also had a similar comment; I have replied there.

>>>
>>> However, the fact that the ring PMD doesn't advertise this implicit
>>> support forces applications that use ring PMD to have a special case for
>>> handling ring interfaces. If the ring PMD would advertise
>>> DEV_TX_OFFLOAD_MULTI_SEGS this would allow upper layers to be oblivious
>>> to the type of underlying interface.
>>>
>>
>> This is not handling the special case for the ring PMD, this is why he have
>> the offload capability flag. Application should behave according capability
>> flags, not per specific PMD.
>>
>> Is there any specific usecase you are trying to cover?
  
Ananyev, Konstantin Sept. 28, 2020, 1:10 p.m. UTC | #8
> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Monday, September 28, 2020 1:43 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Dumitru Ceara <dceara@redhat.com>; dev@dpdk.org
> Cc: Richardson, Bruce <bruce.richardson@intel.com>
> Subject: Re: [dpdk-dev] [PATCH] net/ring: advertise multi segment support.
> 
> On 9/28/2020 12:00 PM, Ananyev, Konstantin wrote:
> >> On 9/28/2020 8:31 AM, Dumitru Ceara wrote:
> >>> On 9/22/20 4:21 PM, Ferruh Yigit wrote:
> >>>> On 9/18/2020 11:36 AM, Dumitru Ceara wrote:
> >>>>> Even though ring interfaces don't support any other TX/RX offloads they
> >>>>> do support sending multi segment packets and this should be advertised
> >>>>> in order to not break applications that use ring interfaces.
> >>>>>
> >>>>
> >>>> Does ring PMD support sending multi segmented packets?
> >>>>
> >>>
> >>> Yes, sending multi segmented packets works fine with ring PMD.
> >>>
> >>
> >> Define "works fine" :)
> >>
> >> All PMDs can put the first mbuf of the chained mbuf to the ring, in that case
> >> what is the difference between the ones supports 'DEV_TX_OFFLOAD_MULTI_SEGS' and
> >> the ones doesn't support?
> >>
> >> If the traffic is only from ring PMD to ring PMD, you won't recognize the
> >> difference between segmented or not-segmented mbufs, and it will look like
> >> segmented packets works fine.
> >> But if there is other PMDs involved in the forwarding, or if need to process the
> >> packets, will it still work fine?
> >>
> >>>> As far as I can see ring PMD doesn't know about the mbuf segments.
> >>>>
> >>>
> >>> Right, the PMD doesn't care about the mbuf segments but it implicitly
> >>> supports sending multi segmented packets. From what I see it's actually
> >>> the case for most of the PMDs, in the sense that most don't even check
> >>> the DEV_TX_OFFLOAD_MULTI_SEGS flag and if the application sends multi
> >>> segment packets they are just accepted.
> >>   >
> >>
> >> As far as I can see, if the segmented packets sent, the ring PMD will put the
> >> first mbuf into the ring without doing anything specific to the next segments.
> >>
> >> If the 'DEV_TX_OFFLOAD_MULTI_SEGS' is supported I expect it should detect the
> >> segmented packets and put each chained mbuf into the separate field in the ring.
> >
> > Hmm, wonder why do you think this is necessary?
> >  From my perspective current behaviour is sufficient for TX-ing multi-seg packets
> > over the ring.
> >
> 
> I was thinking based on what some PMDs already doing, but right ring may not
> need to do it.
> 
> Also for the case, one application is sending multi segmented packets to the
> ring, and other application pulling packets from the ring and sending to a PMD
> that does NOT support the multi-seg TX. I thought ring PMD claiming the
> multi-seg Tx support should serialize packets to support this case, but instead
> ring claiming 'DEV_RX_OFFLOAD_SCATTER' capability can work by pushing the
> responsibility to the application.
> 
> So in this case ring should support both 'DEV_TX_OFFLOAD_MULTI_SEGS' &
> 'DEV_RX_OFFLOAD_SCATTER', what do you think?

Seems so...
Another question - should we allow DEV_TX_OFFLOAD_MULTI_SEGS here, if
DEV_RX_OFFLOAD_SCATTER was not specified?


> 
> >>
> >>>
> >>> However, the fact that the ring PMD doesn't advertise this implicit
> >>> support forces applications that use ring PMD to have a special case for
> >>> handling ring interfaces. If the ring PMD would advertise
> >>> DEV_TX_OFFLOAD_MULTI_SEGS this would allow upper layers to be oblivious
> >>> to the type of underlying interface.
> >>>
> >>
> >> This is not handling the special case for the ring PMD, this is why he have the
> >> offload capability flag. Application should behave according capability flags,
> >> not per specific PMD.
> >>
> >> Is there any specific usecase you are trying to cover?
  
Ferruh Yigit Sept. 28, 2020, 1:26 p.m. UTC | #9
On 9/28/2020 2:10 PM, Ananyev, Konstantin wrote:
> 
> 
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>> Sent: Monday, September 28, 2020 1:43 PM
>> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Dumitru Ceara <dceara@redhat.com>; dev@dpdk.org
>> Cc: Richardson, Bruce <bruce.richardson@intel.com>
>> Subject: Re: [dpdk-dev] [PATCH] net/ring: advertise multi segment support.
>>
>> On 9/28/2020 12:00 PM, Ananyev, Konstantin wrote:
>>>> On 9/28/2020 8:31 AM, Dumitru Ceara wrote:
>>>>> On 9/22/20 4:21 PM, Ferruh Yigit wrote:
>>>>>> On 9/18/2020 11:36 AM, Dumitru Ceara wrote:
>>>>>>> Even though ring interfaces don't support any other TX/RX offloads they
>>>>>>> do support sending multi segment packets and this should be advertised
>>>>>>> in order to not break applications that use ring interfaces.
>>>>>>>
>>>>>>
>>>>>> Does ring PMD support sending multi segmented packets?
>>>>>>
>>>>>
>>>>> Yes, sending multi segmented packets works fine with ring PMD.
>>>>>
>>>>
>>>> Define "works fine" :)
>>>>
>>>> All PMDs can put the first mbuf of the chained mbuf to the ring, in that case
>>>> what is the difference between the ones supports 'DEV_TX_OFFLOAD_MULTI_SEGS' and
>>>> the ones doesn't support?
>>>>
>>>> If the traffic is only from ring PMD to ring PMD, you won't recognize the
>>>> difference between segmented or not-segmented mbufs, and it will look like
>>>> segmented packets works fine.
>>>> But if there is other PMDs involved in the forwarding, or if need to process the
>>>> packets, will it still work fine?
>>>>
>>>>>> As far as I can see ring PMD doesn't know about the mbuf segments.
>>>>>>
>>>>>
>>>>> Right, the PMD doesn't care about the mbuf segments but it implicitly
>>>>> supports sending multi segmented packets. From what I see it's actually
>>>>> the case for most of the PMDs, in the sense that most don't even check
>>>>> the DEV_TX_OFFLOAD_MULTI_SEGS flag and if the application sends multi
>>>>> segment packets they are just accepted.
>>>>    >
>>>>
>>>> As far as I can see, if the segmented packets sent, the ring PMD will put the
>>>> first mbuf into the ring without doing anything specific to the next segments.
>>>>
>>>> If the 'DEV_TX_OFFLOAD_MULTI_SEGS' is supported I expect it should detect the
>>>> segmented packets and put each chained mbuf into the separate field in the ring.
>>>
>>> Hmm, wonder why do you think this is necessary?
>>>   From my perspective current behaviour is sufficient for TX-ing multi-seg packets
>>> over the ring.
>>>
>>
>> I was thinking based on what some PMDs already doing, but right ring may not
>> need to do it.
>>
>> Also for the case, one application is sending multi segmented packets to the
>> ring, and other application pulling packets from the ring and sending to a PMD
>> that does NOT support the multi-seg TX. I thought ring PMD claiming the
>> multi-seg Tx support should serialize packets to support this case, but instead
>> ring claiming 'DEV_RX_OFFLOAD_SCATTER' capability can work by pushing the
>> responsibility to the application.
>>
>> So in this case ring should support both 'DEV_TX_OFFLOAD_MULTI_SEGS' &
>> 'DEV_RX_OFFLOAD_SCATTER', what do you think?
> 
> Seems so...
> Another question - should we allow DEV_TX_OFFLOAD_MULTI_SEGS here,
>   if DEV_RX_OFFLOAD_SCATTER was not specified?
> 

I think it's better to have a new version of the patch that claims both
capabilities together.

> 
>>
>>>>
>>>>>
>>>>> However, the fact that the ring PMD doesn't advertise this implicit
>>>>> support forces applications that use ring PMD to have a special case for
>>>>> handling ring interfaces. If the ring PMD would advertise
>>>>> DEV_TX_OFFLOAD_MULTI_SEGS this would allow upper layers to be oblivious
>>>>> to the type of underlying interface.
>>>>>
>>>>
>>>> This is not handling the special case for the ring PMD, this is why he have the
>>>> offload capability flag. Application should behave according capability flags,
>>>> not per specific PMD.
>>>>
>>>> Is there any specific usecase you are trying to cover?
>
  
Dumitru Ceara Sept. 28, 2020, 1:58 p.m. UTC | #10
On 9/28/20 3:26 PM, Ferruh Yigit wrote:
> On 9/28/2020 2:10 PM, Ananyev, Konstantin wrote:
>>
>>
>>> -----Original Message-----
>>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>>> Sent: Monday, September 28, 2020 1:43 PM
>>> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Dumitru Ceara
>>> <dceara@redhat.com>; dev@dpdk.org
>>> Cc: Richardson, Bruce <bruce.richardson@intel.com>
>>> Subject: Re: [dpdk-dev] [PATCH] net/ring: advertise multi segment
>>> support.
>>>
>>> On 9/28/2020 12:00 PM, Ananyev, Konstantin wrote:
>>>>> On 9/28/2020 8:31 AM, Dumitru Ceara wrote:
>>>>>> On 9/22/20 4:21 PM, Ferruh Yigit wrote:
>>>>>>> On 9/18/2020 11:36 AM, Dumitru Ceara wrote:
>>>>>>>> Even though ring interfaces don't support any other TX/RX
>>>>>>>> offloads they
>>>>>>>> do support sending multi segment packets and this should be
>>>>>>>> advertised
>>>>>>>> in order to not break applications that use ring interfaces.
>>>>>>>>
>>>>>>>
>>>>>>> Does ring PMD support sending multi segmented packets?
>>>>>>>
>>>>>>
>>>>>> Yes, sending multi segmented packets works fine with ring PMD.
>>>>>>
>>>>>
>>>>> Define "works fine" :)
>>>>>
>>>>> All PMDs can put the first mbuf of the chained mbuf to the ring, in
>>>>> that case
>>>>> what is the difference between the ones supports
>>>>> 'DEV_TX_OFFLOAD_MULTI_SEGS' and
>>>>> the ones doesn't support?
>>>>>
>>>>> If the traffic is only from ring PMD to ring PMD, you won't
>>>>> recognize the
>>>>> difference between segmented or not-segmented mbufs, and it will
>>>>> look like
>>>>> segmented packets works fine.
>>>>> But if there is other PMDs involved in the forwarding, or if need
>>>>> to process the
>>>>> packets, will it still work fine?
>>>>>
>>>>>>> As far as I can see ring PMD doesn't know about the mbuf segments.
>>>>>>>
>>>>>>
>>>>>> Right, the PMD doesn't care about the mbuf segments but it implicitly
>>>>>> supports sending multi segmented packets. From what I see it's
>>>>>> actually
>>>>>> the case for most of the PMDs, in the sense that most don't even
>>>>>> check
>>>>>> the DEV_TX_OFFLOAD_MULTI_SEGS flag and if the application sends multi
>>>>>> segment packets they are just accepted.
>>>>>    >
>>>>>
>>>>> As far as I can see, if the segmented packets sent, the ring PMD
>>>>> will put the
>>>>> first mbuf into the ring without doing anything specific to the
>>>>> next segments.
>>>>>
>>>>> If the 'DEV_TX_OFFLOAD_MULTI_SEGS' is supported I expect it should
>>>>> detect the
>>>>> segmented packets and put each chained mbuf into the separate field
>>>>> in the ring.
>>>>
>>>> Hmm, wonder why do you think this is necessary?
>>>>   From my perspective current behaviour is sufficient for TX-ing
>>>> multi-seg packets
>>>> over the ring.
>>>>
>>>
>>> I was thinking based on what some PMDs already doing, but right ring
>>> may not
>>> need to do it.
>>>
>>> Also for the case, one application is sending multi segmented packets
>>> to the
>>> ring, and other application pulling packets from the ring and sending
>>> to a PMD
>>> that does NOT support the multi-seg TX. I thought ring PMD claiming the
>>> multi-seg Tx support should serialize packets to support this case,
>>> but instead
>>> ring claiming 'DEV_RX_OFFLOAD_SCATTER' capability can work by pushing
>>> the
>>> responsibility to the application.
>>>
>>> So in this case ring should support both 'DEV_TX_OFFLOAD_MULTI_SEGS' &
>>> 'DEV_RX_OFFLOAD_SCATTER', what do you think?
>>
>> Seems so...
>> Another question - should we allow DEV_TX_OFFLOAD_MULTI_SEGS here,
>>   if DEV_RX_OFFLOAD_SCATTER was not specified?
>>
> 
> I think better to have a new version of the patch to claim both
> capabilities together.
> 

OK, I can do that and send a v2 to claim both caps together.

Just so that it's clear to me, though: these capabilities will only be
advertised, and the current behavior of the ring PMD at tx/rx will remain
unchanged, right?

Thanks,
Dumitru

>>
>>>
>>>>>
>>>>>>
>>>>>> However, the fact that the ring PMD doesn't advertise this implicit
>>>>>> support forces applications that use ring PMD to have a special
>>>>>> case for
>>>>>> handling ring interfaces. If the ring PMD would advertise
>>>>>> DEV_TX_OFFLOAD_MULTI_SEGS this would allow upper layers to be
>>>>>> oblivious
>>>>>> to the type of underlying interface.
>>>>>>
>>>>>
>>>>> This is not handling the special case for the ring PMD, this is why
>>>>> he have the
>>>>> offload capability flag. Application should behave according
>>>>> capability flags,
>>>>> not per specific PMD.
>>>>>
>>>>> Is there any specific usecase you are trying to cover?
>>
>
  
Ferruh Yigit Sept. 28, 2020, 3:02 p.m. UTC | #11
On 9/28/2020 2:58 PM, Dumitru Ceara wrote:
> On 9/28/20 3:26 PM, Ferruh Yigit wrote:
>> On 9/28/2020 2:10 PM, Ananyev, Konstantin wrote:
>>>
>>>
>>>> -----Original Message-----
>>>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>>>> Sent: Monday, September 28, 2020 1:43 PM
>>>> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Dumitru Ceara
>>>> <dceara@redhat.com>; dev@dpdk.org
>>>> Cc: Richardson, Bruce <bruce.richardson@intel.com>
>>>> Subject: Re: [dpdk-dev] [PATCH] net/ring: advertise multi segment
>>>> support.
>>>>
>>>> On 9/28/2020 12:00 PM, Ananyev, Konstantin wrote:
>>>>>> On 9/28/2020 8:31 AM, Dumitru Ceara wrote:
>>>>>>> On 9/22/20 4:21 PM, Ferruh Yigit wrote:
>>>>>>>> On 9/18/2020 11:36 AM, Dumitru Ceara wrote:
>>>>>>>>> Even though ring interfaces don't support any other TX/RX
>>>>>>>>> offloads they
>>>>>>>>> do support sending multi segment packets and this should be
>>>>>>>>> advertised
>>>>>>>>> in order to not break applications that use ring interfaces.
>>>>>>>>>
>>>>>>>>
>>>>>>>> Does ring PMD support sending multi segmented packets?
>>>>>>>>
>>>>>>>
>>>>>>> Yes, sending multi segmented packets works fine with ring PMD.
>>>>>>>
>>>>>>
>>>>>> Define "works fine" :)
>>>>>>
>>>>>> All PMDs can put the first mbuf of the chained mbuf to the ring, in
>>>>>> that case
>>>>>> what is the difference between the ones supports
>>>>>> 'DEV_TX_OFFLOAD_MULTI_SEGS' and
>>>>>> the ones doesn't support?
>>>>>>
>>>>>> If the traffic is only from ring PMD to ring PMD, you won't
>>>>>> recognize the
>>>>>> difference between segmented or not-segmented mbufs, and it will
>>>>>> look like
>>>>>> segmented packets works fine.
>>>>>> But if there is other PMDs involved in the forwarding, or if need
>>>>>> to process the
>>>>>> packets, will it still work fine?
>>>>>>
>>>>>>>> As far as I can see ring PMD doesn't know about the mbuf segments.
>>>>>>>>
>>>>>>>
>>>>>>> Right, the PMD doesn't care about the mbuf segments but it implicitly
>>>>>>> supports sending multi segmented packets. From what I see it's
>>>>>>> actually
>>>>>>> the case for most of the PMDs, in the sense that most don't even
>>>>>>> check
>>>>>>> the DEV_TX_OFFLOAD_MULTI_SEGS flag and if the application sends multi
>>>>>>> segment packets they are just accepted.
>>>>>>     >
>>>>>>
>>>>>> As far as I can see, if the segmented packets sent, the ring PMD
>>>>>> will put the
>>>>>> first mbuf into the ring without doing anything specific to the
>>>>>> next segments.
>>>>>>
>>>>>> If the 'DEV_TX_OFFLOAD_MULTI_SEGS' is supported I expect it should
>>>>>> detect the
>>>>>> segmented packets and put each chained mbuf into the separate field
>>>>>> in the ring.
>>>>>
>>>>> Hmm, wonder why do you think this is necessary?
>>>>>    From my perspective current behaviour is sufficient for TX-ing
>>>>> multi-seg packets
>>>>> over the ring.
>>>>>
>>>>
>>>> I was thinking based on what some PMDs already doing, but right ring
>>>> may not
>>>> need to do it.
>>>>
>>>> Also for the case, one application is sending multi segmented packets
>>>> to the
>>>> ring, and other application pulling packets from the ring and sending
>>>> to a PMD
>>>> that does NOT support the multi-seg TX. I thought ring PMD claiming the
>>>> multi-seg Tx support should serialize packets to support this case,
>>>> but instead
>>>> ring claiming 'DEV_RX_OFFLOAD_SCATTER' capability can work by pushing
>>>> the
>>>> responsibility to the application.
>>>>
>>>> So in this case ring should support both 'DEV_TX_OFFLOAD_MULTI_SEGS' &
>>>> 'DEV_RX_OFFLOAD_SCATTER', what do you think?
>>>
>>> Seems so...
>>> Another question - should we allow DEV_TX_OFFLOAD_MULTI_SEGS here,
>>>    if DEV_RX_OFFLOAD_SCATTER was not specified?
>>>
>>
>> I think better to have a new version of the patch to claim both
>> capabilities together.
>>
> 
> OK, I can do that and send a v2 to claim both caps together.
> 
> Just so that it's clear to me though, these capabilities will only be
> advertised and the current behavior of the ring PMD at tx/rx will remain
> unchanged, right?
> 

Yes, the PMD's behavior won't change; only the PMD's hint to applications
about what it supports will change.

> 
>>>
>>>>
>>>>>>
>>>>>>>
>>>>>>> However, the fact that the ring PMD doesn't advertise this implicit
>>>>>>> support forces applications that use ring PMD to have a special
>>>>>>> case for
>>>>>>> handling ring interfaces. If the ring PMD would advertise
>>>>>>> DEV_TX_OFFLOAD_MULTI_SEGS this would allow upper layers to be
>>>>>>> oblivious
>>>>>>> to the type of underlying interface.
>>>>>>>
>>>>>>
>>>>>> This is not handling the special case for the ring PMD, this is why
>>>>>> he have the
>>>>>> offload capability flag. Application should behave according
>>>>>> capability flags,
>>>>>> not per specific PMD.
>>>>>>
>>>>>> Is there any specific usecase you are trying to cover?
>>>
>>
>
  

Patch

diff --git a/drivers/net/ring/rte_eth_ring.c b/drivers/net/ring/rte_eth_ring.c
index 733c898..59d1e67 100644
--- a/drivers/net/ring/rte_eth_ring.c
+++ b/drivers/net/ring/rte_eth_ring.c
@@ -160,6 +160,7 @@  struct pmd_internals {
 	dev_info->max_mac_addrs = 1;
 	dev_info->max_rx_pktlen = (uint32_t)-1;
 	dev_info->max_rx_queues = (uint16_t)internals->max_rx_queues;
+	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS;
 	dev_info->max_tx_queues = (uint16_t)internals->max_tx_queues;
 	dev_info->min_rx_bufsize = 0;