[v4,1/3] ethdev: support API to set max LRO packet size

Message ID 4c64b7941e1e9416ae7946cb44d50a01888d70c4.1573129825.git.dekelp@mellanox.com (mailing list archive)
State Superseded, archived
Delegated to: Ferruh Yigit
Series: support API to set max LRO packet size

Checks

Context                      Check    Description
ci/checkpatch                success  coding style OK
ci/iol-intel-Performance     fail     Performance Testing issues
ci/iol-compilation           success  Compile Testing PASS
ci/iol-mellanox-Performance  success  Performance Testing PASS
ci/Intel-compilation         success  Compilation OK

Commit Message

Dekel Peled Nov. 7, 2019, 12:35 p.m. UTC
This patch implements [1], adding API for configuration and
validation of the maximum size of an LRO aggregated packet.
API change notice [2] is removed, and the release notes for 19.11
are updated accordingly.

Various PMDs that use the LRO offload are updated; the new data
members are initialized to ensure they don't fail validation.

[1] http://patches.dpdk.org/patch/58217/
[2] http://patches.dpdk.org/patch/57492/

Signed-off-by: Dekel Peled <dekelp@mellanox.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
---
 doc/guides/nics/features.rst             |  2 ++
 doc/guides/rel_notes/deprecation.rst     |  4 ---
 doc/guides/rel_notes/release_19_11.rst   |  8 ++++++
 drivers/net/bnxt/bnxt_ethdev.c           |  1 +
 drivers/net/hinic/hinic_pmd_ethdev.c     |  1 +
 drivers/net/ixgbe/ixgbe_ethdev.c         |  2 ++
 drivers/net/ixgbe/ixgbe_vf_representor.c |  1 +
 drivers/net/mlx5/mlx5.h                  |  3 +++
 drivers/net/mlx5/mlx5_ethdev.c           |  1 +
 drivers/net/mlx5/mlx5_rxq.c              |  1 -
 drivers/net/qede/qede_ethdev.c           |  1 +
 drivers/net/virtio/virtio_ethdev.c       |  1 +
 drivers/net/vmxnet3/vmxnet3_ethdev.c     |  1 +
 lib/librte_ethdev/rte_ethdev.c           | 44 ++++++++++++++++++++++++++++++++
 lib/librte_ethdev/rte_ethdev.h           |  4 +++
 15 files changed, 70 insertions(+), 5 deletions(-)
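
For context, a minimal sketch of how an application might use the new fields once this
series is applied; the port id, queue counts, and the 9KB cap are illustrative
assumptions, not part of the patch:

#include <rte_ethdev.h>

/* Sketch: enable LRO and cap aggregated packets, clamping the requested
 * cap to the device capability reported in the new dev_info field.
 */
static int
configure_port_with_lro(uint16_t port_id)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf conf = {0};
	uint32_t lro_cap = 9 * 1024;	/* illustrative application choice */

	rte_eth_dev_info_get(port_id, &dev_info);
	if (lro_cap > dev_info.max_lro_pkt_size)
		lro_cap = dev_info.max_lro_pkt_size;

	conf.rxmode.offloads = DEV_RX_OFFLOAD_TCP_LRO;
	/* Mandatory when LRO is enabled, per the check added by this patch. */
	conf.rxmode.max_lro_pkt_size = lro_cap;

	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}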
  

Comments

Ferruh Yigit Nov. 7, 2019, 8:15 p.m. UTC | #1
On 11/7/2019 12:35 PM, Dekel Peled wrote:
> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
>  							RTE_ETHER_MAX_LEN;
>  	}
>  
> +	/*
> +	 * If LRO is enabled, check that the maximum aggregated packet
> +	 * size is supported by the configured device.
> +	 */
> +	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> +		ret = check_lro_pkt_size(
> +				port_id, dev_conf->rxmode.max_lro_pkt_size,
> +				dev_info.max_lro_pkt_size);
> +		if (ret != 0)
> +			goto rollback;
> +	}
> +

This check forces applications that enable LRO to provide a 'max_lro_pkt_size'
config value.

- Why is it mandatory now? How was it working before, if it is a mandatory value?

- What happens if a PMD doesn't provide 'max_lro_pkt_size', so it is '0'?

- What do you think about setting the 'max_lro_pkt_size' config value to what the
PMD provided, if the application doesn't provide it?
  
Matan Azrad Nov. 8, 2019, 6:54 a.m. UTC | #2
Hi

From: Ferruh Yigit
> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> > @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> >  							RTE_ETHER_MAX_LEN;
> >  	}
> >  
> > +	/*
> > +	 * If LRO is enabled, check that the maximum aggregated packet
> > +	 * size is supported by the configured device.
> > +	 */
> > +	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> > +		ret = check_lro_pkt_size(
> > +				port_id, dev_conf->rxmode.max_lro_pkt_size,
> > +				dev_info.max_lro_pkt_size);
> > +		if (ret != 0)
> > +			goto rollback;
> > +	}
> > +
> 
> This check forces applications that enable LRO to provide 'max_lro_pkt_size'
> config value.

Yes. (We can break the API here; we announced it in advance.)

> - Why it is mandatory now, how it was working before if it is mandatory
> value?

It is the same as max_rx_pkt_len, which is mandatory for the jumbo frame offload.
So now, when the user configures the LRO offload, he must set the max LRO packet length.
We don't want to confuse the user here relative to the max_rx_pkt_len configuration and
behavior; the two should follow the same logic.

This parameter defines the LRO behavior well.
Before this, each PMD took its own interpretation of what the maximum size of an LRO
aggregated packet should be.
Now, the user must state his intention, and ethdev can limit it according to the device
capability.
This way, the PMD can also organize/optimize its data path better.
Also, the application can create different mempools for LRO queues, to allow receiving
bigger packets for the LRO traffic.
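
To illustrate that last point, a minimal sketch of per-queue mempools (pool names,
element counts, and sizes are illustrative assumptions, not from the patch):

#include <rte_ethdev.h>
#include <rte_lcore.h>
#include <rte_mbuf.h>

/* Sketch: a default pool for regular Rx queues and a big-mbuf pool for
 * LRO queues, so aggregated packets fit into fewer segments. Called
 * after rte_eth_dev_configure(); steering LRO flows to queue 1 is
 * assumed to be done elsewhere, e.g. by flow rules.
 */
static int
setup_rx_pools(uint16_t port_id)
{
	struct rte_mempool *rx_pool = rte_pktmbuf_pool_create(
			"rx_pool", 8192, 256, 0,
			RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
	struct rte_mempool *lro_pool = rte_pktmbuf_pool_create(
			"lro_pool", 4096, 256, 0,
			16 * 1024 + RTE_PKTMBUF_HEADROOM, rte_socket_id());

	if (rx_pool == NULL || lro_pool == NULL)
		return -1;

	/* Queue 0 carries regular traffic, queue 1 the LRO traffic. */
	if (rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(),
			NULL, rx_pool) != 0)
		return -1;
	return rte_eth_rx_queue_setup(port_id, 1, 512, rte_socket_id(),
			NULL, lro_pool);
}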
 
> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it is '0'?
Yes, you can see the feature description Dekel added.
This patch also updates all the PMDs that support LRO to report a non-zero value.

The same as max_rx_pkt_len, no?

> - What do you think setting 'max_lro_pkt_size' config value to what PMD
> provided if application doesn't provide it?
Same answers as above.
  
Ferruh Yigit Nov. 8, 2019, 9:19 a.m. UTC | #3
On 11/8/2019 6:54 AM, Matan Azrad wrote:
> Hi
> 
> From: Ferruh Yigit
>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
>>>  							RTE_ETHER_MAX_LEN;
>>>  	}
>>>  
>>> +	/*
>>> +	 * If LRO is enabled, check that the maximum aggregated packet
>>> +	 * size is supported by the configured device.
>>> +	 */
>>> +	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
>>> +		ret = check_lro_pkt_size(
>>> +				port_id, dev_conf->rxmode.max_lro_pkt_size,
>>> +				dev_info.max_lro_pkt_size);
>>> +		if (ret != 0)
>>> +			goto rollback;
>>> +	}
>>> +
>>
>> This check forces applications that enable LRO to provide 'max_lro_pkt_size'
>> config value.
> 
> Yes.(we can break an API, we noticed it)

I am not talking about API/ABI breakage, that part is OK.
With this check, if the application requested the LRO offload but did not provide a
'max_lro_pkt_size' value, device configuration will fail.

Can there be a case where the application is fine with whatever the PMD can support as max?

> 
>> - Why it is mandatory now, how it was working before if it is mandatory
>> value?
> 
> It is the same as max_rx_pkt_len which is mandatory for jumbo frame offload.
> So now, when the user configures a LRO offload he must to set max lro pkt len.
> We don't want to confuse the user here with the max rx pkt len configurations and behaviors, they should be with same logic.
> 
> This parameter defines well the LRO behavior.
> Before this, each PMD took its own interpretation to what should be the maximum size for LRO aggregated packets.
> Now, the user must say what is his intension, and the ethdev can limit it according to the device capability.
> By this way, also, the PMD can organize\optimize its data-path more.
> Also, the application can create different mempools for LRO queues to allow bigger packet receiving for LRO traffic.
>  
>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it is '0'?
> Yes, you can see the feature description Dekel added.
> This patch also updates all the PMDs support an LRO for non-0 value.

Of course I can see the updates, Matan; my point is "What happens if a PMD doesn't
provide 'max_lro_pkt_size'?"
1) There is no check for it, right, so it is acceptable?
2) Are we making this field mandatory for PMDs to provide? It is easy to make new
fields mandatory for PMDs, but is this really necessary?

> 
> as same as max rx pkt len, no?
> 
>> - What do you think setting 'max_lro_pkt_size' config value to what PMD
>> provided if application doesn't provide it?
> Same answers as above.
> 

If the application doesn't care about the value, as has been the case till now, and has
not provided an explicit 'max_lro_pkt_size', why not have the ethdev level use the value
provided by the PMD instead of failing?
  
Matan Azrad Nov. 8, 2019, 10:10 a.m. UTC | #4
From: Ferruh Yigit
> On 11/8/2019 6:54 AM, Matan Azrad wrote:
> > Hi
> >
> > From: Ferruh Yigit
> >> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> >>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> >>>  							RTE_ETHER_MAX_LEN;
> >>>  	}
> >>>  
> >>> +	/*
> >>> +	 * If LRO is enabled, check that the maximum aggregated packet
> >>> +	 * size is supported by the configured device.
> >>> +	 */
> >>> +	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> >>> +		ret = check_lro_pkt_size(
> >>> +				port_id, dev_conf->rxmode.max_lro_pkt_size,
> >>> +				dev_info.max_lro_pkt_size);
> >>> +		if (ret != 0)
> >>> +			goto rollback;
> >>> +	}
> >>> +
> >>
> >> This check forces applications that enable LRO to provide
> 'max_lro_pkt_size'
> >> config value.
> >
> > Yes.(we can break an API, we noticed it)
> 
> I am not talking about API/ABI breakage, that part is OK.
> With this check, if the application requested LRO offload but not provided
> 'max_lro_pkt_size' value, device configuration will fail.
> 
Yes
> Can there be a case application is good with whatever the PMD can support
> as max?
Yes, there can be - you know, we can do anything we want, but it is better to be
consistent: since the max_rx_pkt_len field is mandatory for the jumbo frame offload,
the max LRO packet length should be mandatory for the LRO offload.

So your question is actually why both max sizes, for non-LRO packets and for LRO
packets, are mandatory...

I think these are important values for network application management.
They are also good for mbuf size management.

> >
> >> - Why it is mandatory now, how it was working before if it is
> >> mandatory value?
> >
> > It is the same as max_rx_pkt_len which is mandatory for jumbo frame
> offload.
> > So now, when the user configures a LRO offload he must to set max lro pkt
> len.
> > We don't want to confuse the user here with the max rx pkt len
> configurations and behaviors, they should be with same logic.
> >
> > This parameter defines well the LRO behavior.
> > Before this, each PMD took its own interpretation to what should be the
> maximum size for LRO aggregated packets.
> > Now, the user must say what is his intension, and the ethdev can limit it
> according to the device capability.
> > By this way, also, the PMD can organize\optimize its data-path more.
> > Also, the application can create different mempools for LRO queues to
> allow bigger packet receiving for LRO traffic.
> >
> >> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it is '0'?
> > Yes, you can see the feature description Dekel added.
> > This patch also updates all the PMDs support an LRO for non-0 value.
> 
> Of course I can see the updates Matan, my point is "What happens if PMD
> doesn't provide 'max_lro_pkt_size'",
> 1) There is no check for it right, so it is acceptable?

There is a check.
If the capability is 0, any non-zero configuration will fail.
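
For illustration, the check described above could look roughly like this; a sketch of
the ethdev-internal helper, based on the behavior discussed in this thread, not the
exact patch body:

/* Would live in rte_ethdev.c, next to the configure() validation. */
static int
check_lro_pkt_size(uint16_t port_id, uint32_t config_size,
		   uint32_t dev_info_size)
{
	/* The configured value must be non-zero and must not exceed the
	 * capability the PMD reported; a zero capability therefore rejects
	 * any non-zero configuration.
	 */
	if (config_size == 0 || config_size > dev_info_size) {
		RTE_ETHDEV_LOG(ERR,
			"Ethdev port_id=%u invalid max_lro_pkt_size %u (capability %u)\n",
			port_id, config_size, dev_info_size);
		return -EINVAL;
	}
	return 0;
}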

> 2) Are we making this filed mandatory to provide for PMDs, it is easy to make
> new fields mandatory for PMDs but is this really necessary?

Yes, for consistency.

> >
> > as same as max rx pkt len, no?
> >
> >> - What do you think setting 'max_lro_pkt_size' config value to what
> >> PMD provided if application doesn't provide it?
> > Same answers as above.
> >
> 
> If application doesn't care the value, as it has been till now, and not provided
> explicit 'max_lro_pkt_size', why not ethdev level use the value provided by
> PMD instead of failing?

Again, same question we can ask on max rx pkt len.

Looks like the packet size is very important value which should be set by the application.

Previous applications have no option to configure it, so they haven't configure it, (probably cover it somehow) I think it is our miss to supply this info.

Let's do it in same way as we do max rx pkt len (as this patch main idea).
Later, we can change both to other meaning.

Matan
  
Ferruh Yigit Nov. 8, 2019, 11:37 a.m. UTC | #5
On 11/8/2019 10:10 AM, Matan Azrad wrote:
> 
> 
> From: Ferruh Yigit
>> On 11/8/2019 6:54 AM, Matan Azrad wrote:
>>> Hi
>>>
>>> From: Ferruh Yigit
>>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
>>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
>>>>>  							RTE_ETHER_MAX_LEN;
>>>>>  	}
>>>>>  
>>>>> +	/*
>>>>> +	 * If LRO is enabled, check that the maximum aggregated packet
>>>>> +	 * size is supported by the configured device.
>>>>> +	 */
>>>>> +	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
>>>>> +		ret = check_lro_pkt_size(
>>>>> +				port_id, dev_conf->rxmode.max_lro_pkt_size,
>>>>> +				dev_info.max_lro_pkt_size);
>>>>> +		if (ret != 0)
>>>>> +			goto rollback;
>>>>> +	}
>>>>> +
>>>>
>>>> This check forces applications that enable LRO to provide
>> 'max_lro_pkt_size'
>>>> config value.
>>>
>>> Yes.(we can break an API, we noticed it)
>>
>> I am not talking about API/ABI breakage, that part is OK.
>> With this check, if the application requested LRO offload but not provided
>> 'max_lro_pkt_size' value, device configuration will fail.
>>
> Yes
>> Can there be a case application is good with whatever the PMD can support
>> as max?
> Yes can be - you know, we can do everything we want but it is better to be consistent:
> Due to the fact of Max rx pkt len field is mandatory for JUMBO offload, max lro pkt len should be mandatory for LRO offload.
> 
> So your question is actually why both, non-lro packets and LRO packets max size are mandatory...
> 
> 
> I think it should be important values for net applications management.
> Also good for mbuf size managements.
> 
>>>
>>>> - Why it is mandatory now, how it was working before if it is
>>>> mandatory value?
>>>
>>> It is the same as max_rx_pkt_len which is mandatory for jumbo frame
>> offload.
>>> So now, when the user configures a LRO offload he must to set max lro pkt
>> len.
>>> We don't want to confuse the user here with the max rx pkt len
>> configurations and behaviors, they should be with same logic.
>>>
>>> This parameter defines well the LRO behavior.
>>> Before this, each PMD took its own interpretation to what should be the
>> maximum size for LRO aggregated packets.
>>> Now, the user must say what is his intension, and the ethdev can limit it
>> according to the device capability.
>>> By this way, also, the PMD can organize\optimize its data-path more.
>>> Also, the application can create different mempools for LRO queues to
>> allow bigger packet receiving for LRO traffic.
>>>
>>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it is '0'?
>>> Yes, you can see the feature description Dekel added.
>>> This patch also updates all the PMDs support an LRO for non-0 value.
>>
>> Of course I can see the updates Matan, my point is "What happens if PMD
>> doesn't provide 'max_lro_pkt_size'",
>> 1) There is no check for it right, so it is acceptable?
> 
> There is check.
> If the capability is 0, any non-zero configuration will fail. 
> 
>> 2) Are we making this filed mandatory to provide for PMDs, it is easy to make
>> new fields mandatory for PMDs but is this really necessary?
> 
> Yes, for consistence.
> 
>>>
>>> as same as max rx pkt len, no?
>>>
>>>> - What do you think setting 'max_lro_pkt_size' config value to what
>>>> PMD provided if application doesn't provide it?
>>> Same answers as above.
>>>
>>
>> If application doesn't care the value, as it has been till now, and not provided
>> explicit 'max_lro_pkt_size', why not ethdev level use the value provided by
>> PMD instead of failing?
> 
> Again, same question we can ask on max rx pkt len.
> 
> Looks like the packet size is very important value which should be set by the application.
> 
> Previous applications have no option to configure it, so they haven't configure it, (probably cover it somehow) I think it is our miss to supply this info.
> 
> Let's do it in same way as we do max rx pkt len (as this patch main idea).
> Later, we can change both to other meaning.
> 

I think "'max_rx_pkt_len' does it" is not a good reason to introduce a new mandatory
config option for the application.

Will it work as follows:
- if the application doesn't provide this value, use the PMD max;
- if both the application and the PMD don't provide this value, fail on configure()?
  
Matan Azrad Nov. 8, 2019, 11:56 a.m. UTC | #6
From: Ferruh Yigit
> On 11/8/2019 10:10 AM, Matan Azrad wrote:
> >
> >
> > From: Ferruh Yigit
> >> On 11/8/2019 6:54 AM, Matan Azrad wrote:
> >>> Hi
> >>>
> >>> From: Ferruh Yigit
> >>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> >>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> >>>>>  							RTE_ETHER_MAX_LEN;
> >>>>>  	}
> >>>>>  
> >>>>> +	/*
> >>>>> +	 * If LRO is enabled, check that the maximum aggregated packet
> >>>>> +	 * size is supported by the configured device.
> >>>>> +	 */
> >>>>> +	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> >>>>> +		ret = check_lro_pkt_size(
> >>>>> +				port_id, dev_conf->rxmode.max_lro_pkt_size,
> >>>>> +				dev_info.max_lro_pkt_size);
> >>>>> +		if (ret != 0)
> >>>>> +			goto rollback;
> >>>>> +	}
> >>>>> +
> >>>>
> >>>> This check forces applications that enable LRO to provide
> >> 'max_lro_pkt_size'
> >>>> config value.
> >>>
> >>> Yes.(we can break an API, we noticed it)
> >>
> >> I am not talking about API/ABI breakage, that part is OK.
> >> With this check, if the application requested LRO offload but not
> >> provided 'max_lro_pkt_size' value, device configuration will fail.
> >>
> > Yes
> >> Can there be a case application is good with whatever the PMD can
> >> support as max?
> > Yes can be - you know, we can do everything we want but it is better to be
> consistent:
> > Due to the fact of Max rx pkt len field is mandatory for JUMBO offload, max
> lro pkt len should be mandatory for LRO offload.
> >
> > So your question is actually why both, non-lro packets and LRO packets max
> size are mandatory...
> >
> >
> > I think it should be important values for net applications management.
> > Also good for mbuf size managements.
> >
> >>>
> >>>> - Why it is mandatory now, how it was working before if it is
> >>>> mandatory value?
> >>>
> >>> It is the same as max_rx_pkt_len which is mandatory for jumbo frame
> >> offload.
> >>> So now, when the user configures a LRO offload he must to set max
> >>> lro pkt
> >> len.
> >>> We don't want to confuse the user here with the max rx pkt len
> >> configurations and behaviors, they should be with same logic.
> >>>
> >>> This parameter defines well the LRO behavior.
> >>> Before this, each PMD took its own interpretation to what should be
> >>> the
> >> maximum size for LRO aggregated packets.
> >>> Now, the user must say what is his intension, and the ethdev can
> >>> limit it
> >> according to the device capability.
> >>> By this way, also, the PMD can organize\optimize its data-path more.
> >>> Also, the application can create different mempools for LRO queues
> >>> to
> >> allow bigger packet receiving for LRO traffic.
> >>>
> >>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it is '0'?
> >>> Yes, you can see the feature description Dekel added.
> >>> This patch also updates all the PMDs support an LRO for non-0 value.
> >>
> >> Of course I can see the updates Matan, my point is "What happens if
> >> PMD doesn't provide 'max_lro_pkt_size'",
> >> 1) There is no check for it right, so it is acceptable?
> >
> > There is check.
> > If the capability is 0, any non-zero configuration will fail.
> >
> >> 2) Are we making this filed mandatory to provide for PMDs, it is easy
> >> to make new fields mandatory for PMDs but is this really necessary?
> >
> > Yes, for consistence.
> >
> >>>
> >>> as same as max rx pkt len, no?
> >>>
> >>>> - What do you think setting 'max_lro_pkt_size' config value to what
> >>>> PMD provided if application doesn't provide it?
> >>> Same answers as above.
> >>>
> >>
> >> If application doesn't care the value, as it has been till now, and
> >> not provided explicit 'max_lro_pkt_size', why not ethdev level use
> >> the value provided by PMD instead of failing?
> >
> > Again, same question we can ask on max rx pkt len.
> >
> > Looks like the packet size is very important value which should be set by
> the application.
> >
> > Previous applications have no option to configure it, so they haven't
> configure it, (probably cover it somehow) I think it is our miss to supply this
> info.
> >
> > Let's do it in same way as we do max rx pkt len (as this patch main idea).
> > Later, we can change both to other meaning.
> >
> 
> I think it is not a good reason to introduce a new mandatory config option for
> application because of 'max_rx_pkt_len' does it.

It is mandatory only if the LRO offload is configured.

> Will it work, if:
> - If application doesn't provide this value, use the PMD max

That may cause a problem if the mbuf size is not big enough for the PMD maximum.
 
> - If both application and PMD doesn't provide this value, fail on configure()?

It will work.
In my opinion - not ideal.

Matan
  
Ferruh Yigit Nov. 8, 2019, 12:51 p.m. UTC | #7
On 11/8/2019 11:56 AM, Matan Azrad wrote:
> 
> 
> From: Ferruh Yigit
>> On 11/8/2019 10:10 AM, Matan Azrad wrote:
>>>
>>>
>>> From: Ferruh Yigit
>>>> On 11/8/2019 6:54 AM, Matan Azrad wrote:
>>>>> Hi
>>>>>
>>>>> From: Ferruh Yigit
>>>>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
>>>>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
>>>>>>>  							RTE_ETHER_MAX_LEN;
>>>>>>>  	}
>>>>>>>  
>>>>>>> +	/*
>>>>>>> +	 * If LRO is enabled, check that the maximum aggregated packet
>>>>>>> +	 * size is supported by the configured device.
>>>>>>> +	 */
>>>>>>> +	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
>>>>>>> +		ret = check_lro_pkt_size(
>>>>>>> +				port_id, dev_conf->rxmode.max_lro_pkt_size,
>>>>>>> +				dev_info.max_lro_pkt_size);
>>>>>>> +		if (ret != 0)
>>>>>>> +			goto rollback;
>>>>>>> +	}
>>>>>>> +
>>>>>>
>>>>>> This check forces applications that enable LRO to provide
>>>> 'max_lro_pkt_size'
>>>>>> config value.
>>>>>
>>>>> Yes.(we can break an API, we noticed it)
>>>>
>>>> I am not talking about API/ABI breakage, that part is OK.
>>>> With this check, if the application requested LRO offload but not
>>>> provided 'max_lro_pkt_size' value, device configuration will fail.
>>>>
>>> Yes
>>>> Can there be a case application is good with whatever the PMD can
>>>> support as max?
>>> Yes can be - you know, we can do everything we want but it is better to be
>> consistent:
>>> Due to the fact of Max rx pkt len field is mandatory for JUMBO offload, max
>> lro pkt len should be mandatory for LRO offload.
>>>
>>> So your question is actually why both, non-lro packets and LRO packets max
>> size are mandatory...
>>>
>>>
>>> I think it should be important values for net applications management.
>>> Also good for mbuf size managements.
>>>
>>>>>
>>>>>> - Why it is mandatory now, how it was working before if it is
>>>>>> mandatory value?
>>>>>
>>>>> It is the same as max_rx_pkt_len which is mandatory for jumbo frame
>>>> offload.
>>>>> So now, when the user configures a LRO offload he must to set max
>>>>> lro pkt
>>>> len.
>>>>> We don't want to confuse the user here with the max rx pkt len
>>>> configurations and behaviors, they should be with same logic.
>>>>>
>>>>> This parameter defines well the LRO behavior.
>>>>> Before this, each PMD took its own interpretation to what should be
>>>>> the
>>>> maximum size for LRO aggregated packets.
>>>>> Now, the user must say what is his intension, and the ethdev can
>>>>> limit it
>>>> according to the device capability.
>>>>> By this way, also, the PMD can organize\optimize its data-path more.
>>>>> Also, the application can create different mempools for LRO queues
>>>>> to
>>>> allow bigger packet receiving for LRO traffic.
>>>>>
>>>>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it is '0'?
>>>>> Yes, you can see the feature description Dekel added.
>>>>> This patch also updates all the PMDs support an LRO for non-0 value.
>>>>
>>>> Of course I can see the updates Matan, my point is "What happens if
>>>> PMD doesn't provide 'max_lro_pkt_size'",
>>>> 1) There is no check for it right, so it is acceptable?
>>>
>>> There is check.
>>> If the capability is 0, any non-zero configuration will fail.
>>>
>>>> 2) Are we making this filed mandatory to provide for PMDs, it is easy
>>>> to make new fields mandatory for PMDs but is this really necessary?
>>>
>>> Yes, for consistence.
>>>
>>>>>
>>>>> as same as max rx pkt len, no?
>>>>>
>>>>>> - What do you think setting 'max_lro_pkt_size' config value to what
>>>>>> PMD provided if application doesn't provide it?
>>>>> Same answers as above.
>>>>>
>>>>
>>>> If application doesn't care the value, as it has been till now, and
>>>> not provided explicit 'max_lro_pkt_size', why not ethdev level use
>>>> the value provided by PMD instead of failing?
>>>
>>> Again, same question we can ask on max rx pkt len.
>>>
>>> Looks like the packet size is very important value which should be set by
>> the application.
>>>
>>> Previous applications have no option to configure it, so they haven't
>> configure it, (probably cover it somehow) I think it is our miss to supply this
>> info.
>>>
>>> Let's do it in same way as we do max rx pkt len (as this patch main idea).
>>> Later, we can change both to other meaning.
>>>
>>
>> I think it is not a good reason to introduce a new mandatory config option for
>> application because of 'max_rx_pkt_len' does it.
> 
> It is mandatory only if LRO offload is configured.
> 
>> Will it work, if:
>> - If application doesn't provide this value, use the PMD max
> 
> May cause a problem if the mbuf size is not enough for the PMD maximum.

OK, this is what I was missing; for this case I was thinking 'max_rx_pkt_len' would be
used, but you already explained that the application may want to use different mempools
for LRO queues.

For this case, shouldn't PMDs take 'rxmode.max_lro_pkt_size' into account and program
the device accordingly (in the LRO-enabled case, of course)?
This part seems missing and should be highlighted to other PMD maintainers.
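
For illustration, a sketch of what that PMD side could look like, with hypothetical
xyz_* names standing in for a real driver:

#include <rte_ethdev_driver.h>

/* Hypothetical device limit and HW hook, for illustration only. */
#define XYZ_HW_MAX_LRO_SIZE (64 * 1024)

static void
xyz_hw_set_lro_limit(struct rte_eth_dev *dev, uint32_t limit)
{
	/* Device-specific register programming would go here. */
	RTE_SET_USED(dev);
	RTE_SET_USED(limit);
}

/* Report the new capability field added by this series. */
static int
xyz_dev_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *info)
{
	RTE_SET_USED(dev);
	info->max_lro_pkt_size = XYZ_HW_MAX_LRO_SIZE;
	return 0;
}

/* Called from the driver's Rx setup path: program the configured limit
 * instead of a driver-private default.
 */
static void
xyz_setup_lro(struct rte_eth_dev *dev)
{
	struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;

	if (rxmode->offloads & DEV_RX_OFFLOAD_TCP_LRO)
		xyz_hw_set_lro_limit(dev, rxmode->max_lro_pkt_size);
}

The dev_infos_get part mirrors what this series already does for the drivers in the
diffstat; the Rx-setup part is the piece noted above as missing for PMDs other than mlx5.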

>  
>> - If both application and PMD doesn't provide this value, fail on configure()?
> 
> It will work.
> In my opinion - not ideal.
> 
> Matan
> 
>
  
Ananyev, Konstantin Nov. 8, 2019, 1:11 p.m. UTC | #8
> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Friday, November 8, 2019 11:56 AM
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
> 
> 
> 
> From: Ferruh Yigit
> > On 11/8/2019 10:10 AM, Matan Azrad wrote:
> > >
> > >
> > > From: Ferruh Yigit
> > >> On 11/8/2019 6:54 AM, Matan Azrad wrote:
> > >>> Hi
> > >>>
> > >>> From: Ferruh Yigit
> > >>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> > >>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> > >>>>>  							RTE_ETHER_MAX_LEN;
> > >>>>>  	}
> > >>>>>  
> > >>>>> +	/*
> > >>>>> +	 * If LRO is enabled, check that the maximum aggregated packet
> > >>>>> +	 * size is supported by the configured device.
> > >>>>> +	 */
> > >>>>> +	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> > >>>>> +		ret = check_lro_pkt_size(
> > >>>>> +				port_id, dev_conf->rxmode.max_lro_pkt_size,
> > >>>>> +				dev_info.max_lro_pkt_size);
> > >>>>> +		if (ret != 0)
> > >>>>> +			goto rollback;
> > >>>>> +	}
> > >>>>> +
> > >>>>
> > >>>> This check forces applications that enable LRO to provide
> > >> 'max_lro_pkt_size'
> > >>>> config value.
> > >>>
> > >>> Yes.(we can break an API, we noticed it)
> > >>
> > >> I am not talking about API/ABI breakage, that part is OK.
> > >> With this check, if the application requested LRO offload but not
> > >> provided 'max_lro_pkt_size' value, device configuration will fail.
> > >>
> > > Yes
> > >> Can there be a case application is good with whatever the PMD can
> > >> support as max?
> > > Yes can be - you know, we can do everything we want but it is better to be
> > consistent:
> > > Due to the fact of Max rx pkt len field is mandatory for JUMBO offload, max
> > lro pkt len should be mandatory for LRO offload.
> > >
> > > So your question is actually why both, non-lro packets and LRO packets max
> > size are mandatory...
> > >
> > >
> > > I think it should be important values for net applications management.
> > > Also good for mbuf size managements.
> > >
> > >>>
> > >>>> - Why it is mandatory now, how it was working before if it is
> > >>>> mandatory value?
> > >>>
> > >>> It is the same as max_rx_pkt_len which is mandatory for jumbo frame
> > >> offload.
> > >>> So now, when the user configures a LRO offload he must to set max
> > >>> lro pkt
> > >> len.
> > >>> We don't want to confuse the user here with the max rx pkt len
> > >> configurations and behaviors, they should be with same logic.
> > >>>
> > >>> This parameter defines well the LRO behavior.
> > >>> Before this, each PMD took its own interpretation to what should be
> > >>> the
> > >> maximum size for LRO aggregated packets.
> > >>> Now, the user must say what is his intension, and the ethdev can
> > >>> limit it
> > >> according to the device capability.
> > >>> By this way, also, the PMD can organize\optimize its data-path more.
> > >>> Also, the application can create different mempools for LRO queues
> > >>> to
> > >> allow bigger packet receiving for LRO traffic.
> > >>>
> > >>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it is '0'?
> > >>> Yes, you can see the feature description Dekel added.
> > >>> This patch also updates all the PMDs support an LRO for non-0 value.
> > >>
> > >> Of course I can see the updates Matan, my point is "What happens if
> > >> PMD doesn't provide 'max_lro_pkt_size'",
> > >> 1) There is no check for it right, so it is acceptable?
> > >
> > > There is check.
> > > If the capability is 0, any non-zero configuration will fail.
> > >
> > >> 2) Are we making this filed mandatory to provide for PMDs, it is easy
> > >> to make new fields mandatory for PMDs but is this really necessary?
> > >
> > > Yes, for consistence.
> > >
> > >>>
> > >>> as same as max rx pkt len, no?
> > >>>
> > >>>> - What do you think setting 'max_lro_pkt_size' config value to what
> > >>>> PMD provided if application doesn't provide it?
> > >>> Same answers as above.
> > >>>
> > >>
> > >> If application doesn't care the value, as it has been till now, and
> > >> not provided explicit 'max_lro_pkt_size', why not ethdev level use
> > >> the value provided by PMD instead of failing?
> > >
> > > Again, same question we can ask on max rx pkt len.
> > >
> > > Looks like the packet size is very important value which should be set by
> > the application.
> > >
> > > Previous applications have no option to configure it, so they haven't
> > configure it, (probably cover it somehow) I think it is our miss to supply this
> > info.
> > >
> > > Let's do it in same way as we do max rx pkt len (as this patch main idea).
> > > Later, we can change both to other meaning.
> > >
> >
> > I think it is not a good reason to introduce a new mandatory config option for
> > application because of 'max_rx_pkt_len' does it.
> 
> It is mandatory only if LRO offload is configured.

So 'max_rx_pkt_len' will remain the max size of one packet,
while 'max_lro_pkt_size' will be the max accumulated size for each LRO session?

BTW, I think that for ixgbe the max LRO size is RTE_IPV4_MAX_PKT_LEN.
ixgbe_vf, as I remember, doesn't support LRO at all.

> 
> > Will it work, if:
> > - If application doesn't provide this value, use the PMD max
> 
> May cause a problem if the mbuf size is not enough for the PMD maximum.

Another question: what will happen if a PMD ignores that value and generates packets
bigger than requested?

> 
> > - If both application and PMD doesn't provide this value, fail on configure()?
> 
> It will work.
> In my opinion - not ideal.
> 
> Matan
>
  
Dekel Peled Nov. 8, 2019, 2:10 p.m. UTC | #9
Thanks, PSB.

> -----Original Message-----
> From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Sent: Friday, November 8, 2019 3:11 PM
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
> 
> 
> 
> > -----Original Message-----
> > From: Matan Azrad <matan@mellanox.com>
> > Sent: Friday, November 8, 2019 11:56 AM
> > Cc: dev@dpdk.org
> > Subject: RE: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
> >
> >
> >
> > From: Ferruh Yigit
> > > On 11/8/2019 10:10 AM, Matan Azrad wrote:
> > > >
> > > >
> > > > From: Ferruh Yigit
> > > >> On 11/8/2019 6:54 AM, Matan Azrad wrote:
> > > >>> Hi
> > > >>>
> > > >>> From: Ferruh Yigit
> > > >>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> > > >>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> > > >>>>>  							RTE_ETHER_MAX_LEN;
> > > >>>>>  	}
> > > >>>>>  
> > > >>>>> +	/*
> > > >>>>> +	 * If LRO is enabled, check that the maximum aggregated packet
> > > >>>>> +	 * size is supported by the configured device.
> > > >>>>> +	 */
> > > >>>>> +	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> > > >>>>> +		ret = check_lro_pkt_size(
> > > >>>>> +				port_id, dev_conf->rxmode.max_lro_pkt_size,
> > > >>>>> +				dev_info.max_lro_pkt_size);
> > > >>>>> +		if (ret != 0)
> > > >>>>> +			goto rollback;
> > > >>>>> +	}
> > > >>>>> +
> > > >>>>
> > > >>>> This check forces applications that enable LRO to provide
> > > >> 'max_lro_pkt_size'
> > > >>>> config value.
> > > >>>
> > > >>> Yes.(we can break an API, we noticed it)
> > > >>
> > > >> I am not talking about API/ABI breakage, that part is OK.
> > > >> With this check, if the application requested LRO offload but not
> > > >> provided 'max_lro_pkt_size' value, device configuration will fail.
> > > >>
> > > > Yes
> > > >> Can there be a case application is good with whatever the PMD can
> > > >> support as max?
> > > > Yes can be - you know, we can do everything we want but it is
> > > > better to be
> > > consistent:
> > > > Due to the fact of Max rx pkt len field is mandatory for JUMBO
> > > > offload, max
> > > lro pkt len should be mandatory for LRO offload.
> > > >
> > > > So your question is actually why both, non-lro packets and LRO
> > > > packets max
> > > size are mandatory...
> > > >
> > > >
> > > > I think it should be important values for net applications management.
> > > > Also good for mbuf size managements.
> > > >
> > > >>>
> > > >>>> - Why it is mandatory now, how it was working before if it is
> > > >>>> mandatory value?
> > > >>>
> > > >>> It is the same as max_rx_pkt_len which is mandatory for jumbo
> > > >>> frame
> > > >> offload.
> > > >>> So now, when the user configures a LRO offload he must to set
> > > >>> max lro pkt
> > > >> len.
> > > >>> We don't want to confuse the user here with the max rx pkt len
> > > >> configurations and behaviors, they should be with same logic.
> > > >>>
> > > >>> This parameter defines well the LRO behavior.
> > > >>> Before this, each PMD took its own interpretation to what should
> > > >>> be the
> > > >> maximum size for LRO aggregated packets.
> > > >>> Now, the user must say what is his intension, and the ethdev can
> > > >>> limit it
> > > >> according to the device capability.
> > > >>> By this way, also, the PMD can organize\optimize its data-path more.
> > > >>> Also, the application can create different mempools for LRO
> > > >>> queues to
> > > >> allow bigger packet receiving for LRO traffic.
> > > >>>
> > > >>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it is
> '0'?
> > > >>> Yes, you can see the feature description Dekel added.
> > > >>> This patch also updates all the PMDs support an LRO for non-0 value.
> > > >>
> > > >> Of course I can see the updates Matan, my point is "What happens
> > > >> if PMD doesn't provide 'max_lro_pkt_size'",
> > > >> 1) There is no check for it right, so it is acceptable?
> > > >
> > > > There is check.
> > > > If the capability is 0, any non-zero configuration will fail.
> > > >
> > > >> 2) Are we making this filed mandatory to provide for PMDs, it is
> > > >> easy to make new fields mandatory for PMDs but is this really
> necessary?
> > > >
> > > > Yes, for consistence.
> > > >
> > > >>>
> > > >>> as same as max rx pkt len, no?
> > > >>>
> > > >>>> - What do you think setting 'max_lro_pkt_size' config value to
> > > >>>> what PMD provided if application doesn't provide it?
> > > >>> Same answers as above.
> > > >>>
> > > >>
> > > >> If application doesn't care the value, as it has been till now,
> > > >> and not provided explicit 'max_lro_pkt_size', why not ethdev
> > > >> level use the value provided by PMD instead of failing?
> > > >
> > > > Again, same question we can ask on max rx pkt len.
> > > >
> > > > Looks like the packet size is very important value which should be
> > > > set by
> > > the application.
> > > >
> > > > Previous applications have no option to configure it, so they
> > > > haven't
> > > configure it, (probably cover it somehow) I think it is our miss to
> > > supply this info.
> > > >
> > > > Let's do it in same way as we do max rx pkt len (as this patch main idea).
> > > > Later, we can change both to other meaning.
> > > >
> > >
> > > I think it is not a good reason to introduce a new mandatory config
> > > option for application because of 'max_rx_pkt_len' does it.
> >
> > It is mandatory only if LRO offload is configured.
> 
> So max_rx_pkt_len will remain max size of one packet, while max_lro_len
> will be max accumulate size for each LRO session?
> 

Yes.

> BTW, I think that for ixgbe max lro is RTE_IPV4_MAX_PKT_LEN.

Please see my change in drivers/net/ixgbe/ixgbe_ethdev.c.
Change to RTE_IPV4_MAX_PKT_LEN?

> ixgbe_vf, as I remember, doesn’t support LRO at all.

Please see my change in drivers/net/ixgbe/ixgbe_vf_representor.c
Remove it?

> 
> >
> > > Will it work, if:
> > > - If application doesn't provide this value, use the PMD max
> >
> > May cause a problem if the mbuf size is not enough for the PMD maximum.
> 
> Another question, what will happen if PMD will ignore that value and will
> generate packets bigger then requested?

The PMD should use this value and not ignore it.

> 
> >
> > > - If both application and PMD doesn't provide this value, fail on
> configure()?
> >
> > It will work.
> > In my opinion - not ideal.
> >
> > Matan
> >
  
Ananyev, Konstantin Nov. 8, 2019, 2:52 p.m. UTC | #10
> > > > >>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> > > > >>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> > > > >>>>>  							RTE_ETHER_MAX_LEN;
> > > > >>>>>  	}
> > > > >>>>>  
> > > > >>>>> +	/*
> > > > >>>>> +	 * If LRO is enabled, check that the maximum aggregated packet
> > > > >>>>> +	 * size is supported by the configured device.
> > > > >>>>> +	 */
> > > > >>>>> +	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> > > > >>>>> +		ret = check_lro_pkt_size(
> > > > >>>>> +				port_id, dev_conf->rxmode.max_lro_pkt_size,
> > > > >>>>> +				dev_info.max_lro_pkt_size);
> > > > >>>>> +		if (ret != 0)
> > > > >>>>> +			goto rollback;
> > > > >>>>> +	}
> > > > >>>>> +
> > > > >>>>
> > > > >>>> This check forces applications that enable LRO to provide
> > > > >> 'max_lro_pkt_size'
> > > > >>>> config value.
> > > > >>>
> > > > >>> Yes.(we can break an API, we noticed it)
> > > > >>
> > > > >> I am not talking about API/ABI breakage, that part is OK.
> > > > >> With this check, if the application requested LRO offload but not
> > > > >> provided 'max_lro_pkt_size' value, device configuration will fail.
> > > > >>
> > > > > Yes
> > > > >> Can there be a case application is good with whatever the PMD can
> > > > >> support as max?
> > > > > Yes can be - you know, we can do everything we want but it is
> > > > > better to be
> > > > consistent:
> > > > > Due to the fact of Max rx pkt len field is mandatory for JUMBO
> > > > > offload, max
> > > > lro pkt len should be mandatory for LRO offload.
> > > > >
> > > > > So your question is actually why both, non-lro packets and LRO
> > > > > packets max
> > > > size are mandatory...
> > > > >
> > > > >
> > > > > I think it should be important values for net applications management.
> > > > > Also good for mbuf size managements.
> > > > >
> > > > >>>
> > > > >>>> - Why it is mandatory now, how it was working before if it is
> > > > >>>> mandatory value?
> > > > >>>
> > > > >>> It is the same as max_rx_pkt_len which is mandatory for jumbo
> > > > >>> frame
> > > > >> offload.
> > > > >>> So now, when the user configures a LRO offload he must to set
> > > > >>> max lro pkt
> > > > >> len.
> > > > >>> We don't want to confuse the user here with the max rx pkt len
> > > > >> configurations and behaviors, they should be with same logic.
> > > > >>>
> > > > >>> This parameter defines well the LRO behavior.
> > > > >>> Before this, each PMD took its own interpretation to what should
> > > > >>> be the
> > > > >> maximum size for LRO aggregated packets.
> > > > >>> Now, the user must say what is his intension, and the ethdev can
> > > > >>> limit it
> > > > >> according to the device capability.
> > > > >>> By this way, also, the PMD can organize\optimize its data-path more.
> > > > >>> Also, the application can create different mempools for LRO
> > > > >>> queues to
> > > > >> allow bigger packet receiving for LRO traffic.
> > > > >>>
> > > > >>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it is
> > '0'?
> > > > >>> Yes, you can see the feature description Dekel added.
> > > > >>> This patch also updates all the PMDs support an LRO for non-0 value.
> > > > >>
> > > > >> Of course I can see the updates Matan, my point is "What happens
> > > > >> if PMD doesn't provide 'max_lro_pkt_size'",
> > > > >> 1) There is no check for it right, so it is acceptable?
> > > > >
> > > > > There is check.
> > > > > If the capability is 0, any non-zero configuration will fail.
> > > > >
> > > > >> 2) Are we making this filed mandatory to provide for PMDs, it is
> > > > >> easy to make new fields mandatory for PMDs but is this really
> > necessary?
> > > > >
> > > > > Yes, for consistence.
> > > > >
> > > > >>>
> > > > >>> as same as max rx pkt len, no?
> > > > >>>
> > > > >>>> - What do you think setting 'max_lro_pkt_size' config value to
> > > > >>>> what PMD provided if application doesn't provide it?
> > > > >>> Same answers as above.
> > > > >>>
> > > > >>
> > > > >> If application doesn't care the value, as it has been till now,
> > > > >> and not provided explicit 'max_lro_pkt_size', why not ethdev
> > > > >> level use the value provided by PMD instead of failing?
> > > > >
> > > > > Again, same question we can ask on max rx pkt len.
> > > > >
> > > > > Looks like the packet size is very important value which should be
> > > > > set by
> > > > the application.
> > > > >
> > > > > Previous applications have no option to configure it, so they
> > > > > haven't
> > > > configure it, (probably cover it somehow) I think it is our miss to
> > > > supply this info.
> > > > >
> > > > > Let's do it in same way as we do max rx pkt len (as this patch main idea).
> > > > > Later, we can change both to other meaning.
> > > > >
> > > >
> > > > I think it is not a good reason to introduce a new mandatory config
> > > > option for application because of 'max_rx_pkt_len' does it.
> > >
> > > It is mandatory only if LRO offload is configured.
> >
> > So max_rx_pkt_len will remain max size of one packet, while max_lro_len
> > will be max accumulate size for each LRO session?
> >
> 
> Yes.
> 
> > BTW, I think that for ixgbe max lro is RTE_IPV4_MAX_PKT_LEN.
> 
> Please see my change in drivers/net/ixgbe/ixgbe_ethdev.c.
> Change to RTE_IPV4_MAX_PKT_LEN?
> 
> > ixgbe_vf, as I remember, doesn’t support LRO at all.
> 
> Please see my change in drivers/net/ixgbe/ixgbe_vf_representor.c
> Remove it?

Yes, please for both.

> 
> >
> > >
> > > > Will it work, if:
> > > > - If application doesn't provide this value, use the PMD max
> > >
> > > May cause a problem if the mbuf size is not enough for the PMD maximum.
> >
> > Another question, what will happen if PMD will ignore that value and will
> > generate packets bigger then requested?
> 
> PMD should use this value and not ignore it.

Hmm, OK, but this patch updates the mlx5 driver only...
I suppose you expect other PMD maintainers to do the job for their PMDs, right?
If so, are they aware of (and do they agree with) this new hard requirement and the
changes required?
Again, what should a PMD do if it can't support the exact value?
Let's say the user asked for max_lro_pkt_size=20KB but the PMD can do only 16KB or 24KB?
Should it fail, or round down to the smaller value, or ...?

Actually I wonder, should it really be a hard requirement, or more like guidance to the
PMD? Why does the app need an *exact* value for the LRO size?
 

> >
> > >
> > > > - If both application and PMD doesn't provide this value, fail on
> > configure()?
> > >
> > > It will work.
> > > In my opinion - not ideal.
> > >
> > > Matan
> > >
  
Dekel Peled Nov. 8, 2019, 4:08 p.m. UTC | #11
Thanks, PSB.

> -----Original Message-----
> From: Ananyev, Konstantin <konstantin.ananyev@intel.com>
> Sent: Friday, November 8, 2019 4:53 PM
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
> 
> 
> > > > > >>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> > > > > >>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> > > > > >>>>>  							RTE_ETHER_MAX_LEN;
> > > > > >>>>>  	}
> > > > > >>>>>  
> > > > > >>>>> +	/*
> > > > > >>>>> +	 * If LRO is enabled, check that the maximum aggregated packet
> > > > > >>>>> +	 * size is supported by the configured device.
> > > > > >>>>> +	 */
> > > > > >>>>> +	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
> > > > > >>>>> +		ret = check_lro_pkt_size(
> > > > > >>>>> +				port_id, dev_conf->rxmode.max_lro_pkt_size,
> > > > > >>>>> +				dev_info.max_lro_pkt_size);
> > > > > >>>>> +		if (ret != 0)
> > > > > >>>>> +			goto rollback;
> > > > > >>>>> +	}
> > > > > >>>>> +
> > > > > >>>>
> > > > > >>>> This check forces applications that enable LRO to provide
> > > > > >> 'max_lro_pkt_size'
> > > > > >>>> config value.
> > > > > >>>
> > > > > >>> Yes.(we can break an API, we noticed it)
> > > > > >>
> > > > > >> I am not talking about API/ABI breakage, that part is OK.
> > > > > >> With this check, if the application requested LRO offload but
> > > > > >> not provided 'max_lro_pkt_size' value, device configuration will
> fail.
> > > > > >>
> > > > > > Yes
> > > > > >> Can there be a case application is good with whatever the PMD
> > > > > >> can support as max?
> > > > > > Yes can be - you know, we can do everything we want but it is
> > > > > > better to be
> > > > > consistent:
> > > > > > Due to the fact of Max rx pkt len field is mandatory for JUMBO
> > > > > > offload, max
> > > > > lro pkt len should be mandatory for LRO offload.
> > > > > >
> > > > > > So your question is actually why both, non-lro packets and LRO
> > > > > > packets max
> > > > > size are mandatory...
> > > > > >
> > > > > >
> > > > > > I think it should be important values for net applications
> management.
> > > > > > Also good for mbuf size managements.
> > > > > >
> > > > > >>>
> > > > > >>>> - Why it is mandatory now, how it was working before if it
> > > > > >>>> is mandatory value?
> > > > > >>>
> > > > > >>> It is the same as max_rx_pkt_len which is mandatory for
> > > > > >>> jumbo frame
> > > > > >> offload.
> > > > > >>> So now, when the user configures a LRO offload he must to
> > > > > >>> set max lro pkt
> > > > > >> len.
> > > > > >>> We don't want to confuse the user here with the max rx pkt
> > > > > >>> len
> > > > > >> configurations and behaviors, they should be with same logic.
> > > > > >>>
> > > > > >>> This parameter defines well the LRO behavior.
> > > > > >>> Before this, each PMD took its own interpretation to what
> > > > > >>> should be the
> > > > > >> maximum size for LRO aggregated packets.
> > > > > >>> Now, the user must say what is his intension, and the ethdev
> > > > > >>> can limit it
> > > > > >> according to the device capability.
> > > > > >>> By this way, also, the PMD can organize\optimize its data-path
> more.
> > > > > >>> Also, the application can create different mempools for LRO
> > > > > >>> queues to
> > > > > >> allow bigger packet receiving for LRO traffic.
> > > > > >>>
> > > > > >>>> - What happens if PMD doesn't provide 'max_lro_pkt_size',
> > > > > >>>> so it is
> > > '0'?
> > > > > >>> Yes, you can see the feature description Dekel added.
> > > > > >>> This patch also updates all the PMDs support an LRO for non-0
> value.
> > > > > >>
> > > > > >> Of course I can see the updates Matan, my point is "What
> > > > > >> happens if PMD doesn't provide 'max_lro_pkt_size'",
> > > > > >> 1) There is no check for it right, so it is acceptable?
> > > > > >
> > > > > > There is check.
> > > > > > If the capability is 0, any non-zero configuration will fail.
> > > > > >
> > > > > >> 2) Are we making this filed mandatory to provide for PMDs, it
> > > > > >> is easy to make new fields mandatory for PMDs but is this
> > > > > >> really
> > > necessary?
> > > > > >
> > > > > > Yes, for consistence.
> > > > > >
> > > > > >>>
> > > > > >>> as same as max rx pkt len, no?
> > > > > >>>
> > > > > >>>> - What do you think setting 'max_lro_pkt_size' config value
> > > > > >>>> to what PMD provided if application doesn't provide it?
> > > > > >>> Same answers as above.
> > > > > >>>
> > > > > >>
> > > > > >> If application doesn't care the value, as it has been till
> > > > > >> now, and not provided explicit 'max_lro_pkt_size', why not
> > > > > >> ethdev level use the value provided by PMD instead of failing?
> > > > > >
> > > > > > Again, same question we can ask on max rx pkt len.
> > > > > >
> > > > > > Looks like the packet size is very important value which
> > > > > > should be set by
> > > > > the application.
> > > > > >
> > > > > > Previous applications have no option to configure it, so they
> > > > > > haven't
> > > > > configure it, (probably cover it somehow) I think it is our miss
> > > > > to supply this info.
> > > > > >
> > > > > > Let's do it in same way as we do max rx pkt len (as this patch main
> idea).
> > > > > > Later, we can change both to other meaning.
> > > > > >
> > > > >
> > > > > I think it is not a good reason to introduce a new mandatory
> > > > > config option for application because of 'max_rx_pkt_len' does it.
> > > >
> > > > It is mandatory only if LRO offload is configured.
> > >
> > > So max_rx_pkt_len will remain max size of one packet, while
> > > max_lro_len will be max accumulate size for each LRO session?
> > >
> >
> > Yes.
> >
> > > BTW, I think that for ixgbe max lro is RTE_IPV4_MAX_PKT_LEN.
> >
> > Please see my change in drivers/net/ixgbe/ixgbe_ethdev.c.
> > Change to RTE_IPV4_MAX_PKT_LEN?
> >
> > > ixgbe_vf, as I remember, doesn’t support LRO at all.
> >
> > Please see my change in drivers/net/ixgbe/ixgbe_vf_representor.c
> > Remove it?
> 
> Yes, please for both.

Will change in v5.

> 
> >
> > >
> > > >
> > > > > Will it work, if:
> > > > > - If application doesn't provide this value, use the PMD max
> > > >
> > > > May cause a problem if the mbuf size is not enough for the PMD
> maximum.
> > >
> > > Another question, what will happen if PMD will ignore that value and
> > > will generate packets bigger then requested?
> >
> > PMD should use this value and not ignore it.
> 
> Hmm, ok but this patch updates mxl driver only...
> I suppose you expect other PMD maintainers to do the job for their PMDs,
> right?
> If so, are they aware (and agree) for this new hard requirement and changes
> required?
> Again what PMD should do if it can't support exact value?
> Let say user asked max_lro_size=20KB but PMD can do only 16KB or 24KB?
> Should it fail, or round to smallest, or ...?
> 
> Actually I wonder, should it really be a hard requirement or more like a
> guidance to PMD?
> Why app needs and *exact* value for LRO size?

The exact value should be configured to the HW as the LRO session limit.

> 
> 
> > >
> > > >
> > > > > - If both application and PMD doesn't provide this value, fail
> > > > > on
> > > configure()?
> > > >
> > > > It will work.
> > > > In my opinion - not ideal.
> > > >
> > > > Matan
> > > >
  
Dekel Peled Nov. 8, 2019, 4:11 p.m. UTC | #12
Thanks, PSB.

> -----Original Message-----
> From: Ferruh Yigit <ferruh.yigit@intel.com>
> Sent: Friday, November 8, 2019 2:52 PM
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
> 
> On 11/8/2019 11:56 AM, Matan Azrad wrote:
> >
> >
> > From: Ferruh Yigit
> >> On 11/8/2019 10:10 AM, Matan Azrad wrote:
> >>>
> >>>
> >>> From: Ferruh Yigit
> >>>> On 11/8/2019 6:54 AM, Matan Azrad wrote:
> >>>>> Hi
> >>>>>
> >>>>> From: Ferruh Yigit
> >>>>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> >>>>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> >>>>>>>
> >>>>>> 	RTE_ETHER_MAX_LEN;
> >>>>>>>  	}
> >>>>>>>
> >>>>>>> +	/*
> >>>>>>> +	 * If LRO is enabled, check that the maximum aggregated
> >> packet
> >>>>>>> +	 * size is supported by the configured device.
> >>>>>>> +	 */
> >>>>>>> +	if (dev_conf->rxmode.offloads &
> >> DEV_RX_OFFLOAD_TCP_LRO) {
> >>>>>>> +		ret = check_lro_pkt_size(
> >>>>>>> +				port_id, dev_conf-
> >>>>>>> rxmode.max_lro_pkt_size,
> >>>>>>> +				dev_info.max_lro_pkt_size);
> >>>>>>> +		if (ret != 0)
> >>>>>>> +			goto rollback;
> >>>>>>> +	}
> >>>>>>> +
> >>>>>>
> >>>>>> This check forces applications that enable LRO to provide
> >>>> 'max_lro_pkt_size'
> >>>>>> config value.
> >>>>>
> >>>>> Yes.(we can break an API, we noticed it)
> >>>>
> >>>> I am not talking about API/ABI breakage, that part is OK.
> >>>> With this check, if the application requested LRO offload but not
> >>>> provided 'max_lro_pkt_size' value, device configuration will fail.
> >>>>
> >>> Yes
> >>>> Can there be a case application is good with whatever the PMD can
> >>>> support as max?
> >>> Yes can be - you know, we can do everything we want but it is better
> >>> to be
> >> consistent:
> >>> Due to the fact of Max rx pkt len field is mandatory for JUMBO
> >>> offload, max
> >> lro pkt len should be mandatory for LRO offload.
> >>>
> >>> So your question is actually why both, non-lro packets and LRO
> >>> packets max
> >> size are mandatory...
> >>>
> >>>
> >>> I think it should be important values for net applications management.
> >>> Also good for mbuf size managements.
> >>>
> >>>>>
> >>>>>> - Why it is mandatory now, how it was working before if it is
> >>>>>> mandatory value?
> >>>>>
> >>>>> It is the same as max_rx_pkt_len which is mandatory for jumbo
> >>>>> frame
> >>>> offload.
> >>>>> So now, when the user configures a LRO offload he must to set max
> >>>>> lro pkt
> >>>> len.
> >>>>> We don't want to confuse the user here with the max rx pkt len
> >>>> configurations and behaviors, they should be with same logic.
> >>>>>
> >>>>> This parameter defines well the LRO behavior.
> >>>>> Before this, each PMD took its own interpretation to what should
> >>>>> be the
> >>>> maximum size for LRO aggregated packets.
> >>>>> Now, the user must say what is his intension, and the ethdev can
> >>>>> limit it
> >>>> according to the device capability.
> >>>>> By this way, also, the PMD can organize\optimize its data-path more.
> >>>>> Also, the application can create different mempools for LRO queues
> >>>>> to
> >>>> allow bigger packet receiving for LRO traffic.
> >>>>>
> >>>>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it is
> '0'?
> >>>>> Yes, you can see the feature description Dekel added.
> >>>>> This patch also updates all the PMDs support an LRO for non-0 value.
> >>>>
> >>>> Of course I can see the updates Matan, my point is "What happens if
> >>>> PMD doesn't provide 'max_lro_pkt_size'",
> >>>> 1) There is no check for it right, so it is acceptable?
> >>>
> >>> There is check.
> >>> If the capability is 0, any non-zero configuration will fail.
> >>>
> >>>> 2) Are we making this filed mandatory to provide for PMDs, it is
> >>>> easy to make new fields mandatory for PMDs but is this really
> necessary?
> >>>
> >>> Yes, for consistence.
> >>>
> >>>>>
> >>>>> as same as max rx pkt len, no?
> >>>>>
> >>>>>> - What do you think setting 'max_lro_pkt_size' config value to
> >>>>>> what PMD provided if application doesn't provide it?
> >>>>> Same answers as above.
> >>>>>
> >>>>
> >>>> If application doesn't care the value, as it has been till now, and
> >>>> not provided explicit 'max_lro_pkt_size', why not ethdev level use
> >>>> the value provided by PMD instead of failing?
> >>>
> >>> Again, same question we can ask on max rx pkt len.
> >>>
> >>> Looks like the packet size is very important value which should be
> >>> set by
> >> the application.
> >>>
> >>> Previous applications have no option to configure it, so they
> >>> haven't
> >> configure it, (probably cover it somehow) I think it is our miss to
> >> supply this info.
> >>>
> >>> Let's do it in same way as we do max rx pkt len (as this patch main idea).
> >>> Later, we can change both to other meaning.
> >>>
> >>
> >> I think it is not a good reason to introduce a new mandatory config
> >> option for application because of 'max_rx_pkt_len' does it.
> >
> > It is mandatory only if LRO offload is configured.
> >
> >> Will it work, if:
> >> - If application doesn't provide this value, use the PMD max
> >
> > May cause a problem if the mbuf size is not enough for the PMD maximum.
> 
> OK, this is what I was missing, for this case I was thinking max_rx_pkt_len will
> be used but you already explained that application may want to use different
> mempools for LRO queues.
> 
> For this case shouldn't PMDs take the 'rxmode.max_lro_pkt_size' into
> account and program the device accordingly (of course in LRO enabled case)
> ?
> This part seems missing and should be highlighted to other PMD maintainers.
> 

All relevant PMDs were modified and maintainers are copied on this patch series.

> >
> >> - If both application and PMD doesn't provide this value, fail on
> configure()?
> >
> > It will work.
> > In my opinion - not ideal.
> >
> > Matan
> >
> >
  
Ananyev, Konstantin Nov. 8, 2019, 4:28 p.m. UTC | #13
> >
> >
> > > > > > >>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> > > > > > >>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> > > > > > >>>>>
> > > > > > >>>> 	RTE_ETHER_MAX_LEN;
> > > > > > >>>>>  	}
> > > > > > >>>>>
> > > > > > >>>>> +	/*
> > > > > > >>>>> +	 * If LRO is enabled, check that the maximum
> > aggregated
> > > > > > packet
> > > > > > >>>>> +	 * size is supported by the configured device.
> > > > > > >>>>> +	 */
> > > > > > >>>>> +	if (dev_conf->rxmode.offloads &
> > > > > > DEV_RX_OFFLOAD_TCP_LRO) {
> > > > > > >>>>> +		ret = check_lro_pkt_size(
> > > > > > >>>>> +				port_id, dev_conf-
> > > > > > >>>>> rxmode.max_lro_pkt_size,
> > > > > > >>>>> +				dev_info.max_lro_pkt_size);
> > > > > > >>>>> +		if (ret != 0)
> > > > > > >>>>> +			goto rollback;
> > > > > > >>>>> +	}
> > > > > > >>>>> +
> > > > > > >>>>
> > > > > > >>>> This check forces applications that enable LRO to provide
> > > > > > >> 'max_lro_pkt_size'
> > > > > > >>>> config value.
> > > > > > >>>
> > > > > > >>> Yes.(we can break an API, we noticed it)
> > > > > > >>
> > > > > > >> I am not talking about API/ABI breakage, that part is OK.
> > > > > > >> With this check, if the application requested LRO offload but
> > > > > > >> not provided 'max_lro_pkt_size' value, device configuration will
> > fail.
> > > > > > >>
> > > > > > > Yes
> > > > > > >> Can there be a case application is good with whatever the PMD
> > > > > > >> can support as max?
> > > > > > > Yes can be - you know, we can do everything we want but it is
> > > > > > > better to be
> > > > > > consistent:
> > > > > > > Due to the fact of Max rx pkt len field is mandatory for JUMBO
> > > > > > > offload, max
> > > > > > lro pkt len should be mandatory for LRO offload.
> > > > > > >
> > > > > > > So your question is actually why both, non-lro packets and LRO
> > > > > > > packets max
> > > > > > size are mandatory...
> > > > > > >
> > > > > > >
> > > > > > > I think it should be important values for net applications
> > management.
> > > > > > > Also good for mbuf size managements.
> > > > > > >
> > > > > > >>>
> > > > > > >>>> - Why it is mandatory now, how it was working before if it
> > > > > > >>>> is mandatory value?
> > > > > > >>>
> > > > > > >>> It is the same as max_rx_pkt_len which is mandatory for
> > > > > > >>> jumbo frame
> > > > > > >> offload.
> > > > > > >>> So now, when the user configures a LRO offload he must to
> > > > > > >>> set max lro pkt
> > > > > > >> len.
> > > > > > >>> We don't want to confuse the user here with the max rx pkt
> > > > > > >>> len
> > > > > > >> configurations and behaviors, they should be with same logic.
> > > > > > >>>
> > > > > > >>> This parameter defines well the LRO behavior.
> > > > > > >>> Before this, each PMD took its own interpretation to what
> > > > > > >>> should be the
> > > > > > >> maximum size for LRO aggregated packets.
> > > > > > >>> Now, the user must say what is his intension, and the ethdev
> > > > > > >>> can limit it
> > > > > > >> according to the device capability.
> > > > > > >>> By this way, also, the PMD can organize\optimize its data-path
> > more.
> > > > > > >>> Also, the application can create different mempools for LRO
> > > > > > >>> queues to
> > > > > > >> allow bigger packet receiving for LRO traffic.
> > > > > > >>>
> > > > > > >>>> - What happens if PMD doesn't provide 'max_lro_pkt_size',
> > > > > > >>>> so it is
> > > > '0'?
> > > > > > >>> Yes, you can see the feature description Dekel added.
> > > > > > >>> This patch also updates all the PMDs support an LRO for non-0
> > value.
> > > > > > >>
> > > > > > >> Of course I can see the updates Matan, my point is "What
> > > > > > >> happens if PMD doesn't provide 'max_lro_pkt_size'",
> > > > > > >> 1) There is no check for it right, so it is acceptable?
> > > > > > >
> > > > > > > There is check.
> > > > > > > If the capability is 0, any non-zero configuration will fail.
> > > > > > >
> > > > > > >> 2) Are we making this filed mandatory to provide for PMDs, it
> > > > > > >> is easy to make new fields mandatory for PMDs but is this
> > > > > > >> really
> > > > necessary?
> > > > > > >
> > > > > > > Yes, for consistence.
> > > > > > >
> > > > > > >>>
> > > > > > >>> as same as max rx pkt len, no?
> > > > > > >>>
> > > > > > >>>> - What do you think setting 'max_lro_pkt_size' config value
> > > > > > >>>> to what PMD provided if application doesn't provide it?
> > > > > > >>> Same answers as above.
> > > > > > >>>
> > > > > > >>
> > > > > > >> If application doesn't care the value, as it has been till
> > > > > > >> now, and not provided explicit 'max_lro_pkt_size', why not
> > > > > > >> ethdev level use the value provided by PMD instead of failing?
> > > > > > >
> > > > > > > Again, same question we can ask on max rx pkt len.
> > > > > > >
> > > > > > > Looks like the packet size is very important value which
> > > > > > > should be set by
> > > > > > the application.
> > > > > > >
> > > > > > > Previous applications have no option to configure it, so they
> > > > > > > haven't
> > > > > > configure it, (probably cover it somehow) I think it is our miss
> > > > > > to supply this info.
> > > > > > >
> > > > > > > Let's do it in same way as we do max rx pkt len (as this patch main
> > idea).
> > > > > > > Later, we can change both to other meaning.
> > > > > > >
> > > > > >
> > > > > > I think it is not a good reason to introduce a new mandatory
> > > > > > config option for application because of 'max_rx_pkt_len' does it.
> > > > >
> > > > > It is mandatory only if LRO offload is configured.
> > > >
> > > > So max_rx_pkt_len will remain max size of one packet, while
> > > > max_lro_len will be max accumulate size for each LRO session?
> > > >
> > >
> > > Yes.
> > >
> > > > BTW, I think that for ixgbe max lro is RTE_IPV4_MAX_PKT_LEN.
> > >
> > > Please see my change in drivers/net/ixgbe/ixgbe_ethdev.c.
> > > Change to RTE_IPV4_MAX_PKT_LEN?
> > >
> > > > ixgbe_vf, as I remember, doesn’t support LRO at all.
> > >
> > > Please see my change in drivers/net/ixgbe/ixgbe_vf_representor.c
> > > Remove it?
> >
> > Yes, please for both.
> 
> Will change in v5.
> 
> >
> > >
> > > >
> > > > >
> > > > > > Will it work, if:
> > > > > > - If application doesn't provide this value, use the PMD max
> > > > >
> > > > > May cause a problem if the mbuf size is not enough for the PMD
> > maximum.
> > > >
> > > > Another question, what will happen if PMD will ignore that value and
> > > > will generate packets bigger then requested?
> > >
> > > PMD should use this value and not ignore it.
> >
> > Hmm, ok but this patch updates mxl driver only...
> > I suppose you expect other PMD maintainers to do the job for their PMDs,
> > right?
> > If so, are they aware (and agree) for this new hard requirement and changes
> > required?
> > Again what PMD should do if it can't support exact value?
> > Let say user asked max_lro_size=20KB but PMD can do only 16KB or 24KB?
> > Should it fail, or round to smallest, or ...?
> >
> > Actually I wonder, should it really be a hard requirement or more like a
> > guidance to PMD?
> > Why app needs and *exact* value for LRO size?
> 
> The exact value should be configured to HW as LRO session limit.

But what if the HW can't support this exact value, as in the example above?
In fact, shouldn't we allow a PMD to forbid the user to configure the max LRO size?
Let's say if in dev_info max_lro_size==0, then the PMD doesn't support LRO size
configuration at all.
That way, PMDs which do support LRO but don't want to (or can't)
support a configurable LRO size will stay untouched.
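
Something like this from the application side, assuming the
0-means-not-configurable convention above; 'app_lro_limit' is just an
illustrative application-chosen value, not part of the API:

#include <rte_common.h>
#include <rte_ethdev.h>

static int
configure_with_lro(uint16_t port_id, uint32_t app_lro_limit)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_conf conf = { 0 };
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;
	conf.rxmode.offloads |= DEV_RX_OFFLOAD_TCP_LRO;
	/* Under this proposal, max_lro_pkt_size == 0 in dev_info would
	 * mean the PMD does not support configuring the LRO size. */
	if (dev_info.max_lro_pkt_size != 0)
		conf.rxmode.max_lro_pkt_size =
			RTE_MIN(app_lro_limit, dev_info.max_lro_pkt_size);
	return rte_eth_dev_configure(port_id, 1, 1, &conf);
}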

> 
> >
> >
> > > >
> > > > >
> > > > > > - If both application and PMD doesn't provide this value, fail
> > > > > > on
> > > > configure()?
> > > > >
> > > > > It will work.
> > > > > In my opinion - not ideal.
> > > > >
> > > > > Matan
> > > > >
  
Ferruh Yigit Nov. 8, 2019, 4:53 p.m. UTC | #14
On 11/8/2019 4:11 PM, Dekel Peled wrote:
> Thanks, PSB.
> 
>> -----Original Message-----
>> From: Ferruh Yigit <ferruh.yigit@intel.com>
>> Sent: Friday, November 8, 2019 2:52 PM
>> Subject: Re: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO
>> packet size
>>
>> On 11/8/2019 11:56 AM, Matan Azrad wrote:
>>>
>>>
>>> From: Ferruh Yigit
>>>> On 11/8/2019 10:10 AM, Matan Azrad wrote:
>>>>>
>>>>>
>>>>> From: Ferruh Yigit
>>>>>> On 11/8/2019 6:54 AM, Matan Azrad wrote:
>>>>>>> Hi
>>>>>>>
>>>>>>> From: Ferruh Yigit
>>>>>>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
>>>>>>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
>>>>>>>>>
>>>>>>>> 	RTE_ETHER_MAX_LEN;
>>>>>>>>>  	}
>>>>>>>>>
>>>>>>>>> +	/*
>>>>>>>>> +	 * If LRO is enabled, check that the maximum aggregated
>>>> packet
>>>>>>>>> +	 * size is supported by the configured device.
>>>>>>>>> +	 */
>>>>>>>>> +	if (dev_conf->rxmode.offloads &
>>>> DEV_RX_OFFLOAD_TCP_LRO) {
>>>>>>>>> +		ret = check_lro_pkt_size(
>>>>>>>>> +				port_id, dev_conf-
>>>>>>>>> rxmode.max_lro_pkt_size,
>>>>>>>>> +				dev_info.max_lro_pkt_size);
>>>>>>>>> +		if (ret != 0)
>>>>>>>>> +			goto rollback;
>>>>>>>>> +	}
>>>>>>>>> +
>>>>>>>>
>>>>>>>> This check forces applications that enable LRO to provide
>>>>>> 'max_lro_pkt_size'
>>>>>>>> config value.
>>>>>>>
>>>>>>> Yes.(we can break an API, we noticed it)
>>>>>>
>>>>>> I am not talking about API/ABI breakage, that part is OK.
>>>>>> With this check, if the application requested LRO offload but not
>>>>>> provided 'max_lro_pkt_size' value, device configuration will fail.
>>>>>>
>>>>> Yes
>>>>>> Can there be a case application is good with whatever the PMD can
>>>>>> support as max?
>>>>> Yes can be - you know, we can do everything we want but it is better
>>>>> to be
>>>> consistent:
>>>>> Due to the fact of Max rx pkt len field is mandatory for JUMBO
>>>>> offload, max
>>>> lro pkt len should be mandatory for LRO offload.
>>>>>
>>>>> So your question is actually why both, non-lro packets and LRO
>>>>> packets max
>>>> size are mandatory...
>>>>>
>>>>>
>>>>> I think it should be important values for net applications management.
>>>>> Also good for mbuf size managements.
>>>>>
>>>>>>>
>>>>>>>> - Why it is mandatory now, how it was working before if it is
>>>>>>>> mandatory value?
>>>>>>>
>>>>>>> It is the same as max_rx_pkt_len which is mandatory for jumbo
>>>>>>> frame
>>>>>> offload.
>>>>>>> So now, when the user configures a LRO offload he must to set max
>>>>>>> lro pkt
>>>>>> len.
>>>>>>> We don't want to confuse the user here with the max rx pkt len
>>>>>> configurations and behaviors, they should be with same logic.
>>>>>>>
>>>>>>> This parameter defines well the LRO behavior.
>>>>>>> Before this, each PMD took its own interpretation to what should
>>>>>>> be the
>>>>>> maximum size for LRO aggregated packets.
>>>>>>> Now, the user must say what is his intension, and the ethdev can
>>>>>>> limit it
>>>>>> according to the device capability.
>>>>>>> By this way, also, the PMD can organize\optimize its data-path more.
>>>>>>> Also, the application can create different mempools for LRO queues
>>>>>>> to
>>>>>> allow bigger packet receiving for LRO traffic.
>>>>>>>
>>>>>>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it is
>> '0'?
>>>>>>> Yes, you can see the feature description Dekel added.
>>>>>>> This patch also updates all the PMDs support an LRO for non-0 value.
>>>>>>
>>>>>> Of course I can see the updates Matan, my point is "What happens if
>>>>>> PMD doesn't provide 'max_lro_pkt_size'",
>>>>>> 1) There is no check for it right, so it is acceptable?
>>>>>
>>>>> There is check.
>>>>> If the capability is 0, any non-zero configuration will fail.
>>>>>
>>>>>> 2) Are we making this filed mandatory to provide for PMDs, it is
>>>>>> easy to make new fields mandatory for PMDs but is this really
>> necessary?
>>>>>
>>>>> Yes, for consistence.
>>>>>
>>>>>>>
>>>>>>> as same as max rx pkt len, no?
>>>>>>>
>>>>>>>> - What do you think setting 'max_lro_pkt_size' config value to
>>>>>>>> what PMD provided if application doesn't provide it?
>>>>>>> Same answers as above.
>>>>>>>
>>>>>>
>>>>>> If application doesn't care the value, as it has been till now, and
>>>>>> not provided explicit 'max_lro_pkt_size', why not ethdev level use
>>>>>> the value provided by PMD instead of failing?
>>>>>
>>>>> Again, same question we can ask on max rx pkt len.
>>>>>
>>>>> Looks like the packet size is very important value which should be
>>>>> set by
>>>> the application.
>>>>>
>>>>> Previous applications have no option to configure it, so they
>>>>> haven't
>>>> configure it, (probably cover it somehow) I think it is our miss to
>>>> supply this info.
>>>>>
>>>>> Let's do it in same way as we do max rx pkt len (as this patch main idea).
>>>>> Later, we can change both to other meaning.
>>>>>
>>>>
>>>> I think it is not a good reason to introduce a new mandatory config
>>>> option for application because of 'max_rx_pkt_len' does it.
>>>
>>> It is mandatory only if LRO offload is configured.
>>>
>>>> Will it work, if:
>>>> - If application doesn't provide this value, use the PMD max
>>>
>>> May cause a problem if the mbuf size is not enough for the PMD maximum.
>>
>> OK, this is what I was missing, for this case I was thinking max_rx_pkt_len will
>> be used but you already explained that application may want to use different
>> mempools for LRO queues.
>>
>> For this case shouldn't PMDs take the 'rxmode.max_lro_pkt_size' into
>> account and program the device accordingly (of course in LRO enabled case)
>> ?
>> This part seems missing and should be highlighted to other PMD maintainers.
>>
> 
> All relevant PMDs were modified and maintainers are copied on this patch series.
> 

What is modified is the PMD announcing a 'dev_info->max_lro_pkt_size' value, which is good.

But PMDs are not using the user-provided 'rxmode.max_lro_pkt_size' value; I assume
they are still using 'max_rx_pkt_len' to configure the device.

+1 to cc'ing maintainers, but not everyone is able to follow all patches, and I am
not sure every maintainer read the patch and recognized they should update their
driver. I think it is better to highlight these things in the cover letter / emails etc.
I hope it is more clear now.



Not for this patch, but generally:
As a process, I previously proposed keeping a todo list under the documentation
for PMDs for these kinds of things, so that each PMD maintainer can go there to
figure out what kind of changes are required because of others' changes, but that
didn't go in.
The other option is that whoever updates the library updates all PMDs fully, but
depending on the feature it can be very hard to update other PMDs.

Overall these gaps are causing inconsistencies between PMDs and we need a proper
solution.
  
Matan Azrad Nov. 9, 2019, 6:20 p.m. UTC | #15
Hi

From: Ferruh Yigit
> On 11/8/2019 11:56 AM, Matan Azrad wrote:
> >
> >
> > From: Ferruh Yigit
> >> On 11/8/2019 10:10 AM, Matan Azrad wrote:
> >>>
> >>>
> >>> From: Ferruh Yigit
> >>>> On 11/8/2019 6:54 AM, Matan Azrad wrote:
> >>>>> Hi
> >>>>>
> >>>>> From: Ferruh Yigit
> >>>>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> >>>>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> >>>>>>>
> >>>>>> 	RTE_ETHER_MAX_LEN;
> >>>>>>>  	}
> >>>>>>>
> >>>>>>> +	/*
> >>>>>>> +	 * If LRO is enabled, check that the maximum aggregated
> >> packet
> >>>>>>> +	 * size is supported by the configured device.
> >>>>>>> +	 */
> >>>>>>> +	if (dev_conf->rxmode.offloads &
> >> DEV_RX_OFFLOAD_TCP_LRO) {
> >>>>>>> +		ret = check_lro_pkt_size(
> >>>>>>> +				port_id, dev_conf-
> >>>>>>> rxmode.max_lro_pkt_size,
> >>>>>>> +				dev_info.max_lro_pkt_size);
> >>>>>>> +		if (ret != 0)
> >>>>>>> +			goto rollback;
> >>>>>>> +	}
> >>>>>>> +
> >>>>>>
> >>>>>> This check forces applications that enable LRO to provide
> >>>> 'max_lro_pkt_size'
> >>>>>> config value.
> >>>>>
> >>>>> Yes.(we can break an API, we noticed it)
> >>>>
> >>>> I am not talking about API/ABI breakage, that part is OK.
> >>>> With this check, if the application requested LRO offload but not
> >>>> provided 'max_lro_pkt_size' value, device configuration will fail.
> >>>>
> >>> Yes
> >>>> Can there be a case application is good with whatever the PMD can
> >>>> support as max?
> >>> Yes can be - you know, we can do everything we want but it is better
> >>> to be
> >> consistent:
> >>> Due to the fact of Max rx pkt len field is mandatory for JUMBO
> >>> offload, max
> >> lro pkt len should be mandatory for LRO offload.
> >>>
> >>> So your question is actually why both, non-lro packets and LRO
> >>> packets max
> >> size are mandatory...
> >>>
> >>>
> >>> I think it should be important values for net applications management.
> >>> Also good for mbuf size managements.
> >>>
> >>>>>
> >>>>>> - Why it is mandatory now, how it was working before if it is
> >>>>>> mandatory value?
> >>>>>
> >>>>> It is the same as max_rx_pkt_len which is mandatory for jumbo
> >>>>> frame
> >>>> offload.
> >>>>> So now, when the user configures a LRO offload he must to set max
> >>>>> lro pkt
> >>>> len.
> >>>>> We don't want to confuse the user here with the max rx pkt len
> >>>> configurations and behaviors, they should be with same logic.
> >>>>>
> >>>>> This parameter defines well the LRO behavior.
> >>>>> Before this, each PMD took its own interpretation to what should
> >>>>> be the
> >>>> maximum size for LRO aggregated packets.
> >>>>> Now, the user must say what is his intension, and the ethdev can
> >>>>> limit it
> >>>> according to the device capability.
> >>>>> By this way, also, the PMD can organize\optimize its data-path more.
> >>>>> Also, the application can create different mempools for LRO queues
> >>>>> to
> >>>> allow bigger packet receiving for LRO traffic.
> >>>>>
> >>>>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it is
> '0'?
> >>>>> Yes, you can see the feature description Dekel added.
> >>>>> This patch also updates all the PMDs support an LRO for non-0 value.
> >>>>
> >>>> Of course I can see the updates Matan, my point is "What happens if
> >>>> PMD doesn't provide 'max_lro_pkt_size'",
> >>>> 1) There is no check for it right, so it is acceptable?
> >>>
> >>> There is check.
> >>> If the capability is 0, any non-zero configuration will fail.
> >>>
> >>>> 2) Are we making this filed mandatory to provide for PMDs, it is
> >>>> easy to make new fields mandatory for PMDs but is this really
> necessary?
> >>>
> >>> Yes, for consistence.
> >>>
> >>>>>
> >>>>> as same as max rx pkt len, no?
> >>>>>
> >>>>>> - What do you think setting 'max_lro_pkt_size' config value to
> >>>>>> what PMD provided if application doesn't provide it?
> >>>>> Same answers as above.
> >>>>>
> >>>>
> >>>> If application doesn't care the value, as it has been till now, and
> >>>> not provided explicit 'max_lro_pkt_size', why not ethdev level use
> >>>> the value provided by PMD instead of failing?
> >>>
> >>> Again, same question we can ask on max rx pkt len.
> >>>
> >>> Looks like the packet size is very important value which should be
> >>> set by
> >> the application.
> >>>
> >>> Previous applications have no option to configure it, so they
> >>> haven't
> >> configure it, (probably cover it somehow) I think it is our miss to
> >> supply this info.
> >>>
> >>> Let's do it in same way as we do max rx pkt len (as this patch main idea).
> >>> Later, we can change both to other meaning.
> >>>
> >>
> >> I think it is not a good reason to introduce a new mandatory config
> >> option for application because of 'max_rx_pkt_len' does it.
> >
> > It is mandatory only if LRO offload is configured.
> >
> >> Will it work, if:
> >> - If application doesn't provide this value, use the PMD max
> >
> > May cause a problem if the mbuf size is not enough for the PMD maximum.
> 
> OK, this is what I was missing, for this case I was thinking max_rx_pkt_len will
> be used but you already explained that application may want to use different
> mempools for LRO queues.
> 
So, do you agree with the idea?

> For this case shouldn't PMDs take the 'rxmode.max_lro_pkt_size' into
> account and program the device accordingly (of course in LRO enabled case)
> ?
> This part seems missing and should be highlighted to other PMD maintainers.


Yes, you are right.
PMDs must limit the LRO aggregated packet size according to the new field,
and it is probably very hard for the patch introducer to understand how to do it for each PMD.

I think each new configuration requires the other maintainers/developers to adjust their own PMD code to the new configuration, and it should be done in a limited time.

My suggestion here:
1. To reserve the info field and the configuration field for rc2 (if it is critical not to break the ABI for rc3).
2. To merge the ethdev patch at the start of rc3.
3. To request each relevant PMD maintainer to adjust the PMD to the new configuration by the end of rc3.
	Note: this should be a small change and only for ~5 PMDs:
		a. Introduce the info field according to the device ability.
		b. For each LRO queue:
			Use the LRO max size configuration instead of the current max rx pkt len configuration (a small condition; see the sketch below).

What do you think?
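
For 3.b, a minimal sketch of the selection each PMD would need at Rx queue
setup, assuming 'lro_enabled' is derived from the queue/port offload flags
(illustrative naming only):

#include <rte_ethdev.h>

static uint32_t
rxq_max_pkt_size(const struct rte_eth_conf *conf, int lro_enabled)
{
	/* LRO queues are limited by the max aggregated packet size;
	 * non-LRO queues keep the existing max_rx_pkt_len semantics. */
	return lro_enabled ? conf->rxmode.max_lro_pkt_size :
			conf->rxmode.max_rx_pkt_len;
}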
  
Matan Azrad Nov. 9, 2019, 6:26 p.m. UTC | #16
Hi Konstantin

From: Ananyev, Konstantin
> Sent: Friday, November 8, 2019 6:29 PM
> Subject: RE: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO
> packet size
> 
> 
> > >
> > >
> > > > > > > >>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> > > > > > > >>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> > > > > > > >>>>>
> > > > > > > >>>> 	RTE_ETHER_MAX_LEN;
> > > > > > > >>>>>  	}
> > > > > > > >>>>>
> > > > > > > >>>>> +	/*
> > > > > > > >>>>> +	 * If LRO is enabled, check that the maximum
> > > aggregated
> > > > > > > packet
> > > > > > > >>>>> +	 * size is supported by the configured device.
> > > > > > > >>>>> +	 */
> > > > > > > >>>>> +	if (dev_conf->rxmode.offloads &
> > > > > > > DEV_RX_OFFLOAD_TCP_LRO) {
> > > > > > > >>>>> +		ret = check_lro_pkt_size(
> > > > > > > >>>>> +				port_id, dev_conf-
> > > > > > > >>>>> rxmode.max_lro_pkt_size,
> > > > > > > >>>>> +				dev_info.max_lro_pkt_size);
> > > > > > > >>>>> +		if (ret != 0)
> > > > > > > >>>>> +			goto rollback;
> > > > > > > >>>>> +	}
> > > > > > > >>>>> +
> > > > > > > >>>>
> > > > > > > >>>> This check forces applications that enable LRO to
> > > > > > > >>>> provide
> > > > > > > >> 'max_lro_pkt_size'
> > > > > > > >>>> config value.
> > > > > > > >>>
> > > > > > > >>> Yes.(we can break an API, we noticed it)
> > > > > > > >>
> > > > > > > >> I am not talking about API/ABI breakage, that part is OK.
> > > > > > > >> With this check, if the application requested LRO offload
> > > > > > > >> but not provided 'max_lro_pkt_size' value, device
> > > > > > > >> configuration will
> > > fail.
> > > > > > > >>
> > > > > > > > Yes
> > > > > > > >> Can there be a case application is good with whatever the
> > > > > > > >> PMD can support as max?
> > > > > > > > Yes can be - you know, we can do everything we want but it
> > > > > > > > is better to be
> > > > > > > consistent:
> > > > > > > > Due to the fact of Max rx pkt len field is mandatory for
> > > > > > > > JUMBO offload, max
> > > > > > > lro pkt len should be mandatory for LRO offload.
> > > > > > > >
> > > > > > > > So your question is actually why both, non-lro packets and
> > > > > > > > LRO packets max
> > > > > > > size are mandatory...
> > > > > > > >
> > > > > > > >
> > > > > > > > I think it should be important values for net applications
> > > management.
> > > > > > > > Also good for mbuf size managements.
> > > > > > > >
> > > > > > > >>>
> > > > > > > >>>> - Why it is mandatory now, how it was working before if
> > > > > > > >>>> it is mandatory value?
> > > > > > > >>>
> > > > > > > >>> It is the same as max_rx_pkt_len which is mandatory for
> > > > > > > >>> jumbo frame
> > > > > > > >> offload.
> > > > > > > >>> So now, when the user configures a LRO offload he must
> > > > > > > >>> to set max lro pkt
> > > > > > > >> len.
> > > > > > > >>> We don't want to confuse the user here with the max rx
> > > > > > > >>> pkt len
> > > > > > > >> configurations and behaviors, they should be with same logic.
> > > > > > > >>>
> > > > > > > >>> This parameter defines well the LRO behavior.
> > > > > > > >>> Before this, each PMD took its own interpretation to
> > > > > > > >>> what should be the
> > > > > > > >> maximum size for LRO aggregated packets.
> > > > > > > >>> Now, the user must say what is his intension, and the
> > > > > > > >>> ethdev can limit it
> > > > > > > >> according to the device capability.
> > > > > > > >>> By this way, also, the PMD can organize\optimize its
> > > > > > > >>> data-path
> > > more.
> > > > > > > >>> Also, the application can create different mempools for
> > > > > > > >>> LRO queues to
> > > > > > > >> allow bigger packet receiving for LRO traffic.
> > > > > > > >>>
> > > > > > > >>>> - What happens if PMD doesn't provide
> > > > > > > >>>> 'max_lro_pkt_size', so it is
> > > > > '0'?
> > > > > > > >>> Yes, you can see the feature description Dekel added.
> > > > > > > >>> This patch also updates all the PMDs support an LRO for
> > > > > > > >>> non-0
> > > value.
> > > > > > > >>
> > > > > > > >> Of course I can see the updates Matan, my point is "What
> > > > > > > >> happens if PMD doesn't provide 'max_lro_pkt_size'",
> > > > > > > >> 1) There is no check for it right, so it is acceptable?
> > > > > > > >
> > > > > > > > There is check.
> > > > > > > > If the capability is 0, any non-zero configuration will fail.
> > > > > > > >
> > > > > > > >> 2) Are we making this filed mandatory to provide for
> > > > > > > >> PMDs, it is easy to make new fields mandatory for PMDs
> > > > > > > >> but is this really
> > > > > necessary?
> > > > > > > >
> > > > > > > > Yes, for consistence.
> > > > > > > >
> > > > > > > >>>
> > > > > > > >>> as same as max rx pkt len, no?
> > > > > > > >>>
> > > > > > > >>>> - What do you think setting 'max_lro_pkt_size' config
> > > > > > > >>>> value to what PMD provided if application doesn't provide
> it?
> > > > > > > >>> Same answers as above.
> > > > > > > >>>
> > > > > > > >>
> > > > > > > >> If application doesn't care the value, as it has been
> > > > > > > >> till now, and not provided explicit 'max_lro_pkt_size',
> > > > > > > >> why not ethdev level use the value provided by PMD instead
> of failing?
> > > > > > > >
> > > > > > > > Again, same question we can ask on max rx pkt len.
> > > > > > > >
> > > > > > > > Looks like the packet size is very important value which
> > > > > > > > should be set by
> > > > > > > the application.
> > > > > > > >
> > > > > > > > Previous applications have no option to configure it, so
> > > > > > > > they haven't
> > > > > > > configure it, (probably cover it somehow) I think it is our
> > > > > > > miss to supply this info.
> > > > > > > >
> > > > > > > > Let's do it in same way as we do max rx pkt len (as this
> > > > > > > > patch main
> > > idea).
> > > > > > > > Later, we can change both to other meaning.
> > > > > > > >
> > > > > > >
> > > > > > > I think it is not a good reason to introduce a new mandatory
> > > > > > > config option for application because of 'max_rx_pkt_len' does it.
> > > > > >
> > > > > > It is mandatory only if LRO offload is configured.
> > > > >
> > > > > So max_rx_pkt_len will remain max size of one packet, while
> > > > > max_lro_len will be max accumulate size for each LRO session?
> > > > >
> > > >
> > > > Yes.
> > > >
> > > > > BTW, I think that for ixgbe max lro is RTE_IPV4_MAX_PKT_LEN.
> > > >
> > > > Please see my change in drivers/net/ixgbe/ixgbe_ethdev.c.
> > > > Change to RTE_IPV4_MAX_PKT_LEN?
> > > >
> > > > > ixgbe_vf, as I remember, doesn’t support LRO at all.
> > > >
> > > > Please see my change in drivers/net/ixgbe/ixgbe_vf_representor.c
> > > > Remove it?
> > >
> > > Yes, please for both.
> >
> > Will change in v5.
> >
> > >
> > > >
> > > > >
> > > > > >
> > > > > > > Will it work, if:
> > > > > > > - If application doesn't provide this value, use the PMD max
> > > > > >
> > > > > > May cause a problem if the mbuf size is not enough for the PMD
> > > maximum.
> > > > >
> > > > > Another question, what will happen if PMD will ignore that value
> > > > > and will generate packets bigger then requested?
> > > >
> > > > PMD should use this value and not ignore it.
> > >
> > > Hmm, ok but this patch updates mxl driver only...
> > > I suppose you expect other PMD maintainers to do the job for their
> > > PMDs, right?
> > > If so, are they aware (and agree) for this new hard requirement and
> > > changes required?
> > > Again what PMD should do if it can't support exact value?
> > > Let say user asked max_lro_size=20KB but PMD can do only 16KB or
> 24KB?
> > > Should it fail, or round to smallest, or ...?
> > >
> > > Actually I wonder, should it really be a hard requirement or more
> > > like a guidance to PMD?
> > > Why app needs and *exact* value for LRO size?
> >
> > The exact value should be configured to HW as LRO session limit.
> 
> But if the HW can't support this exact value, see the example above?
> In fact, shouldn't we allow PMD to forbid user to configure max LRO size?
> Let say if in dev_info max_lro_size==0, then PMD doesn't support LRO size
> configuration at all.
> That way PMDs who do support LRO, but don't want to (can't to) support
> configurable LRO size will stay untouched.

Each HW should support packet size limitation no matter whether it is an LRO packet or not:
How does the PMD limit the packet size for the max rx packet len configuration?
How does the PMD limit the packet size to the mbuf size?
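
For reference, the mbuf limit is typically derived from the mempool at Rx
queue setup, along these lines:

#include <rte_mbuf.h>

static uint16_t
rxq_buf_size(struct rte_mempool *mp)
{
	/* Usable data bytes per mbuf: data room minus the reserved
	 * headroom; the HW Rx buffer length is programmed from this. */
	return rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
}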
  
Ananyev, Konstantin Nov. 10, 2019, 10:51 p.m. UTC | #17
Hi Matan,

> > > > > > > > >>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> > > > > > > > >>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> > > > > > > > >>>>>
> > > > > > > > >>>> 	RTE_ETHER_MAX_LEN;
> > > > > > > > >>>>>  	}
> > > > > > > > >>>>>
> > > > > > > > >>>>> +	/*
> > > > > > > > >>>>> +	 * If LRO is enabled, check that the maximum
> > > > aggregated
> > > > > > > > packet
> > > > > > > > >>>>> +	 * size is supported by the configured device.
> > > > > > > > >>>>> +	 */
> > > > > > > > >>>>> +	if (dev_conf->rxmode.offloads &
> > > > > > > > DEV_RX_OFFLOAD_TCP_LRO) {
> > > > > > > > >>>>> +		ret = check_lro_pkt_size(
> > > > > > > > >>>>> +				port_id, dev_conf-
> > > > > > > > >>>>> rxmode.max_lro_pkt_size,
> > > > > > > > >>>>> +				dev_info.max_lro_pkt_size);
> > > > > > > > >>>>> +		if (ret != 0)
> > > > > > > > >>>>> +			goto rollback;
> > > > > > > > >>>>> +	}
> > > > > > > > >>>>> +
> > > > > > > > >>>>
> > > > > > > > >>>> This check forces applications that enable LRO to
> > > > > > > > >>>> provide
> > > > > > > > >> 'max_lro_pkt_size'
> > > > > > > > >>>> config value.
> > > > > > > > >>>
> > > > > > > > >>> Yes.(we can break an API, we noticed it)
> > > > > > > > >>
> > > > > > > > >> I am not talking about API/ABI breakage, that part is OK.
> > > > > > > > >> With this check, if the application requested LRO offload
> > > > > > > > >> but not provided 'max_lro_pkt_size' value, device
> > > > > > > > >> configuration will
> > > > fail.
> > > > > > > > >>
> > > > > > > > > Yes
> > > > > > > > >> Can there be a case application is good with whatever the
> > > > > > > > >> PMD can support as max?
> > > > > > > > > Yes can be - you know, we can do everything we want but it
> > > > > > > > > is better to be
> > > > > > > > consistent:
> > > > > > > > > Due to the fact of Max rx pkt len field is mandatory for
> > > > > > > > > JUMBO offload, max
> > > > > > > > lro pkt len should be mandatory for LRO offload.
> > > > > > > > >
> > > > > > > > > So your question is actually why both, non-lro packets and
> > > > > > > > > LRO packets max
> > > > > > > > size are mandatory...
> > > > > > > > >
> > > > > > > > >
> > > > > > > > > I think it should be important values for net applications
> > > > management.
> > > > > > > > > Also good for mbuf size managements.
> > > > > > > > >
> > > > > > > > >>>
> > > > > > > > >>>> - Why it is mandatory now, how it was working before if
> > > > > > > > >>>> it is mandatory value?
> > > > > > > > >>>
> > > > > > > > >>> It is the same as max_rx_pkt_len which is mandatory for
> > > > > > > > >>> jumbo frame
> > > > > > > > >> offload.
> > > > > > > > >>> So now, when the user configures a LRO offload he must
> > > > > > > > >>> to set max lro pkt
> > > > > > > > >> len.
> > > > > > > > >>> We don't want to confuse the user here with the max rx
> > > > > > > > >>> pkt len
> > > > > > > > >> configurations and behaviors, they should be with same logic.
> > > > > > > > >>>
> > > > > > > > >>> This parameter defines well the LRO behavior.
> > > > > > > > >>> Before this, each PMD took its own interpretation to
> > > > > > > > >>> what should be the
> > > > > > > > >> maximum size for LRO aggregated packets.
> > > > > > > > >>> Now, the user must say what is his intension, and the
> > > > > > > > >>> ethdev can limit it
> > > > > > > > >> according to the device capability.
> > > > > > > > >>> By this way, also, the PMD can organize\optimize its
> > > > > > > > >>> data-path
> > > > more.
> > > > > > > > >>> Also, the application can create different mempools for
> > > > > > > > >>> LRO queues to
> > > > > > > > >> allow bigger packet receiving for LRO traffic.
> > > > > > > > >>>
> > > > > > > > >>>> - What happens if PMD doesn't provide
> > > > > > > > >>>> 'max_lro_pkt_size', so it is
> > > > > > '0'?
> > > > > > > > >>> Yes, you can see the feature description Dekel added.
> > > > > > > > >>> This patch also updates all the PMDs support an LRO for
> > > > > > > > >>> non-0
> > > > value.
> > > > > > > > >>
> > > > > > > > >> Of course I can see the updates Matan, my point is "What
> > > > > > > > >> happens if PMD doesn't provide 'max_lro_pkt_size'",
> > > > > > > > >> 1) There is no check for it right, so it is acceptable?
> > > > > > > > >
> > > > > > > > > There is check.
> > > > > > > > > If the capability is 0, any non-zero configuration will fail.
> > > > > > > > >
> > > > > > > > >> 2) Are we making this filed mandatory to provide for
> > > > > > > > >> PMDs, it is easy to make new fields mandatory for PMDs
> > > > > > > > >> but is this really
> > > > > > necessary?
> > > > > > > > >
> > > > > > > > > Yes, for consistence.
> > > > > > > > >
> > > > > > > > >>>
> > > > > > > > >>> as same as max rx pkt len, no?
> > > > > > > > >>>
> > > > > > > > >>>> - What do you think setting 'max_lro_pkt_size' config
> > > > > > > > >>>> value to what PMD provided if application doesn't provide
> > it?
> > > > > > > > >>> Same answers as above.
> > > > > > > > >>>
> > > > > > > > >>
> > > > > > > > >> If application doesn't care the value, as it has been
> > > > > > > > >> till now, and not provided explicit 'max_lro_pkt_size',
> > > > > > > > >> why not ethdev level use the value provided by PMD instead
> > of failing?
> > > > > > > > >
> > > > > > > > > Again, same question we can ask on max rx pkt len.
> > > > > > > > >
> > > > > > > > > Looks like the packet size is very important value which
> > > > > > > > > should be set by
> > > > > > > > the application.
> > > > > > > > >
> > > > > > > > > Previous applications have no option to configure it, so
> > > > > > > > > they haven't
> > > > > > > > configure it, (probably cover it somehow) I think it is our
> > > > > > > > miss to supply this info.
> > > > > > > > >
> > > > > > > > > Let's do it in same way as we do max rx pkt len (as this
> > > > > > > > > patch main
> > > > idea).
> > > > > > > > > Later, we can change both to other meaning.
> > > > > > > > >
> > > > > > > >
> > > > > > > > I think it is not a good reason to introduce a new mandatory
> > > > > > > > config option for application because of 'max_rx_pkt_len' does it.
> > > > > > >
> > > > > > > It is mandatory only if LRO offload is configured.
> > > > > >
> > > > > > So max_rx_pkt_len will remain max size of one packet, while
> > > > > > max_lro_len will be max accumulate size for each LRO session?
> > > > > >
> > > > >
> > > > > Yes.
> > > > >
> > > > > > BTW, I think that for ixgbe max lro is RTE_IPV4_MAX_PKT_LEN.
> > > > >
> > > > > Please see my change in drivers/net/ixgbe/ixgbe_ethdev.c.
> > > > > Change to RTE_IPV4_MAX_PKT_LEN?
> > > > >
> > > > > > ixgbe_vf, as I remember, doesn’t support LRO at all.
> > > > >
> > > > > Please see my change in drivers/net/ixgbe/ixgbe_vf_representor.c
> > > > > Remove it?
> > > >
> > > > Yes, please for both.
> > >
> > > Will change in v5.
> > >
> > > >
> > > > >
> > > > > >
> > > > > > >
> > > > > > > > Will it work, if:
> > > > > > > > - If application doesn't provide this value, use the PMD max
> > > > > > >
> > > > > > > May cause a problem if the mbuf size is not enough for the PMD
> > > > maximum.
> > > > > >
> > > > > > Another question, what will happen if PMD will ignore that value
> > > > > > and will generate packets bigger then requested?
> > > > >
> > > > > PMD should use this value and not ignore it.
> > > >
> > > > Hmm, ok but this patch updates mxl driver only...
> > > > I suppose you expect other PMD maintainers to do the job for their
> > > > PMDs, right?
> > > > If so, are they aware (and agree) for this new hard requirement and
> > > > changes required?
> > > > Again what PMD should do if it can't support exact value?
> > > > Let say user asked max_lro_size=20KB but PMD can do only 16KB or
> > 24KB?
> > > > Should it fail, or round to smallest, or ...?
> > > >
> > > > Actually I wonder, should it really be a hard requirement or more
> > > > like a guidance to PMD?
> > > > Why app needs and *exact* value for LRO size?
> > >
> > > The exact value should be configured to HW as LRO session limit.
> >
> > But if the HW can't support this exact value, see the example above?
> > In fact, shouldn't we allow PMD to forbid user to configure max LRO size?
> > Let say if in dev_info max_lro_size==0, then PMD doesn't support LRO size
> > configuration at all.
> > That way PMDs who do support LRO, but don't want to (can't to) support
> > configurable LRO size will stay untouched.
> 
> Each HW should support packet size limitation no matter if it is LRO packet or not:
> How does the PMD limit the packet size for max rx packet len conf?
> How does the PMD limit the packet size for the mbuf size?

Not sure I understand your statement and questions above...
For sure the PMD has to support max_rx_pkt_len, but how does it relate to max_lro?
Konstantin
  
Ananyev, Konstantin Nov. 10, 2019, 11:40 p.m. UTC | #18
> 
> From: Ferruh Yigit
> > On 11/8/2019 11:56 AM, Matan Azrad wrote:
> > >
> > >
> > > From: Ferruh Yigit
> > >> On 11/8/2019 10:10 AM, Matan Azrad wrote:
> > >>>
> > >>>
> > >>> From: Ferruh Yigit
> > >>>> On 11/8/2019 6:54 AM, Matan Azrad wrote:
> > >>>>> Hi
> > >>>>>
> > >>>>> From: Ferruh Yigit
> > >>>>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> > >>>>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> > >>>>>>>
> > >>>>>> 	RTE_ETHER_MAX_LEN;
> > >>>>>>>  	}
> > >>>>>>>
> > >>>>>>> +	/*
> > >>>>>>> +	 * If LRO is enabled, check that the maximum aggregated
> > >> packet
> > >>>>>>> +	 * size is supported by the configured device.
> > >>>>>>> +	 */
> > >>>>>>> +	if (dev_conf->rxmode.offloads &
> > >> DEV_RX_OFFLOAD_TCP_LRO) {
> > >>>>>>> +		ret = check_lro_pkt_size(
> > >>>>>>> +				port_id, dev_conf-
> > >>>>>>> rxmode.max_lro_pkt_size,
> > >>>>>>> +				dev_info.max_lro_pkt_size);
> > >>>>>>> +		if (ret != 0)
> > >>>>>>> +			goto rollback;
> > >>>>>>> +	}
> > >>>>>>> +
> > >>>>>>
> > >>>>>> This check forces applications that enable LRO to provide
> > >>>> 'max_lro_pkt_size'
> > >>>>>> config value.
> > >>>>>
> > >>>>> Yes.(we can break an API, we noticed it)
> > >>>>
> > >>>> I am not talking about API/ABI breakage, that part is OK.
> > >>>> With this check, if the application requested LRO offload but not
> > >>>> provided 'max_lro_pkt_size' value, device configuration will fail.
> > >>>>
> > >>> Yes
> > >>>> Can there be a case application is good with whatever the PMD can
> > >>>> support as max?
> > >>> Yes can be - you know, we can do everything we want but it is better
> > >>> to be
> > >> consistent:
> > >>> Due to the fact of Max rx pkt len field is mandatory for JUMBO
> > >>> offload, max
> > >> lro pkt len should be mandatory for LRO offload.
> > >>>
> > >>> So your question is actually why both, non-lro packets and LRO
> > >>> packets max
> > >> size are mandatory...
> > >>>
> > >>>
> > >>> I think it should be important values for net applications management.
> > >>> Also good for mbuf size managements.
> > >>>
> > >>>>>
> > >>>>>> - Why it is mandatory now, how it was working before if it is
> > >>>>>> mandatory value?
> > >>>>>
> > >>>>> It is the same as max_rx_pkt_len which is mandatory for jumbo
> > >>>>> frame
> > >>>> offload.
> > >>>>> So now, when the user configures a LRO offload he must to set max
> > >>>>> lro pkt
> > >>>> len.
> > >>>>> We don't want to confuse the user here with the max rx pkt len
> > >>>> configurations and behaviors, they should be with same logic.
> > >>>>>
> > >>>>> This parameter defines well the LRO behavior.
> > >>>>> Before this, each PMD took its own interpretation to what should
> > >>>>> be the
> > >>>> maximum size for LRO aggregated packets.
> > >>>>> Now, the user must say what is his intension, and the ethdev can
> > >>>>> limit it
> > >>>> according to the device capability.
> > >>>>> By this way, also, the PMD can organize\optimize its data-path more.
> > >>>>> Also, the application can create different mempools for LRO queues
> > >>>>> to
> > >>>> allow bigger packet receiving for LRO traffic.
> > >>>>>
> > >>>>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it is
> > '0'?
> > >>>>> Yes, you can see the feature description Dekel added.
> > >>>>> This patch also updates all the PMDs support an LRO for non-0 value.
> > >>>>
> > >>>> Of course I can see the updates Matan, my point is "What happens if
> > >>>> PMD doesn't provide 'max_lro_pkt_size'",
> > >>>> 1) There is no check for it right, so it is acceptable?
> > >>>
> > >>> There is check.
> > >>> If the capability is 0, any non-zero configuration will fail.
> > >>>
> > >>>> 2) Are we making this filed mandatory to provide for PMDs, it is
> > >>>> easy to make new fields mandatory for PMDs but is this really
> > necessary?
> > >>>
> > >>> Yes, for consistence.
> > >>>
> > >>>>>
> > >>>>> as same as max rx pkt len, no?
> > >>>>>
> > >>>>>> - What do you think setting 'max_lro_pkt_size' config value to
> > >>>>>> what PMD provided if application doesn't provide it?
> > >>>>> Same answers as above.
> > >>>>>
> > >>>>
> > >>>> If application doesn't care the value, as it has been till now, and
> > >>>> not provided explicit 'max_lro_pkt_size', why not ethdev level use
> > >>>> the value provided by PMD instead of failing?
> > >>>
> > >>> Again, same question we can ask on max rx pkt len.
> > >>>
> > >>> Looks like the packet size is very important value which should be
> > >>> set by
> > >> the application.
> > >>>
> > >>> Previous applications have no option to configure it, so they
> > >>> haven't
> > >> configure it, (probably cover it somehow) I think it is our miss to
> > >> supply this info.
> > >>>
> > >>> Let's do it in same way as we do max rx pkt len (as this patch main idea).
> > >>> Later, we can change both to other meaning.
> > >>>
> > >>
> > >> I think it is not a good reason to introduce a new mandatory config
> > >> option for application because of 'max_rx_pkt_len' does it.
> > >
> > > It is mandatory only if LRO offload is configured.
> > >
> > >> Will it work, if:
> > >> - If application doesn't provide this value, use the PMD max
> > >
> > > May cause a problem if the mbuf size is not enough for the PMD maximum.
> >
> > OK, this is what I was missing, for this case I was thinking max_rx_pkt_len will
> > be used but you already explained that application may want to use different
> > mempools for LRO queues.
> >
> So , are you agree with the idea?
> 
> > For this case shouldn't PMDs take the 'rxmode.max_lro_pkt_size' into
> > account and program the device accordingly (of course in LRO enabled case)
> > ?
> > This part seems missing and should be highlighted to other PMD maintainers.
> 
> 
> Yes, you are right.
> PMDs must limit the LRO aggregated packet according to the new field,
> And it probably very hard for the patch introducer to understand how to do it for each PMD.
> 
> I think each new configuration requires other maintainers\developers to adjust their own PMD code to the new configuration and it should
> be done in limited time.
> 
> My suggestion here:
> 1. To reserve the info field and the configuration field for rc2.(if it is critical not to break ABI for rc3)
> 2. To merge the ethdev patch in the start of rc3.
> 3. Request each relevant PMD to adjust its PMD to the new configuration for the end of rc3.
> 	Note: this should be small change and only for ~5 PMDs:
> 		a. Introduce the info field according to the device ability.
> 		b. For each LRO queue:
> 			Use the LRO max size configuration instead of the current max rx pkt len configuration(looks like small condition).

That definitely looks like a significant behavior change for existing apps and PMDs,
and I wonder what for?
Why can't we keep the max_rx_pkt_len semantics as they are right now,
and just add an optional ability to limit the max size of LRO aggregations?
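
A sketch of that optional semantics, as a hypothetical variant of the check
quoted above (not what the patch currently does), where max_lro_pkt_size == 0
would mean "use the PMD default":

	/* Hypothetical: validate the LRO size only when the application
	 * explicitly asked for a specific value. */
	if ((dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) &&
	    dev_conf->rxmode.max_lro_pkt_size != 0) {
		ret = check_lro_pkt_size(
				port_id, dev_conf->rxmode.max_lro_pkt_size,
				dev_info.max_lro_pkt_size);
		if (ret != 0)
			goto rollback;
	}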
 
> 
> What do you think?
> 
> 
>
  
Matan Azrad Nov. 11, 2019, 6:53 a.m. UTC | #19
Hi

From: Ananyev, Konstantin
> Hi Matan,
> 
> > > > > > > > > >>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> > > > > > > > > >>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> > > > > > > > > >>>>>
> > > > > > > > > >>>> 	RTE_ETHER_MAX_LEN;
> > > > > > > > > >>>>>  	}
> > > > > > > > > >>>>>
> > > > > > > > > >>>>> +	/*
> > > > > > > > > >>>>> +	 * If LRO is enabled, check that the maximum
> > > > > aggregated
> > > > > > > > > packet
> > > > > > > > > >>>>> +	 * size is supported by the configured device.
> > > > > > > > > >>>>> +	 */
> > > > > > > > > >>>>> +	if (dev_conf->rxmode.offloads &
> > > > > > > > > DEV_RX_OFFLOAD_TCP_LRO) {
> > > > > > > > > >>>>> +		ret = check_lro_pkt_size(
> > > > > > > > > >>>>> +				port_id, dev_conf-
> > > > > > > > > >>>>> rxmode.max_lro_pkt_size,
> > > > > > > > > >>>>> +				dev_info.max_lro_pkt_size);
> > > > > > > > > >>>>> +		if (ret != 0)
> > > > > > > > > >>>>> +			goto rollback;
> > > > > > > > > >>>>> +	}
> > > > > > > > > >>>>> +
> > > > > > > > > >>>>
> > > > > > > > > >>>> This check forces applications that enable LRO to
> > > > > > > > > >>>> provide
> > > > > > > > > >> 'max_lro_pkt_size'
> > > > > > > > > >>>> config value.
> > > > > > > > > >>>
> > > > > > > > > >>> Yes.(we can break an API, we noticed it)
> > > > > > > > > >>
> > > > > > > > > >> I am not talking about API/ABI breakage, that part is OK.
> > > > > > > > > >> With this check, if the application requested LRO
> > > > > > > > > >> offload but not provided 'max_lro_pkt_size' value,
> > > > > > > > > >> device configuration will
> > > > > fail.
> > > > > > > > > >>
> > > > > > > > > > Yes
> > > > > > > > > >> Can there be a case application is good with whatever
> > > > > > > > > >> the PMD can support as max?
> > > > > > > > > > Yes can be - you know, we can do everything we want
> > > > > > > > > > but it is better to be
> > > > > > > > > consistent:
> > > > > > > > > > Due to the fact of Max rx pkt len field is mandatory
> > > > > > > > > > for JUMBO offload, max
> > > > > > > > > lro pkt len should be mandatory for LRO offload.
> > > > > > > > > >
> > > > > > > > > > So your question is actually why both, non-lro packets
> > > > > > > > > > and LRO packets max
> > > > > > > > > size are mandatory...
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > I think it should be important values for net
> > > > > > > > > > applications
> > > > > management.
> > > > > > > > > > Also good for mbuf size managements.
> > > > > > > > > >
> > > > > > > > > >>>
> > > > > > > > > >>>> - Why it is mandatory now, how it was working
> > > > > > > > > >>>> before if it is mandatory value?
> > > > > > > > > >>>
> > > > > > > > > >>> It is the same as max_rx_pkt_len which is mandatory
> > > > > > > > > >>> for jumbo frame
> > > > > > > > > >> offload.
> > > > > > > > > >>> So now, when the user configures a LRO offload he
> > > > > > > > > >>> must to set max lro pkt
> > > > > > > > > >> len.
> > > > > > > > > >>> We don't want to confuse the user here with the max
> > > > > > > > > >>> rx pkt len
> > > > > > > > > >> configurations and behaviors, they should be with same
> logic.
> > > > > > > > > >>>
> > > > > > > > > >>> This parameter defines well the LRO behavior.
> > > > > > > > > >>> Before this, each PMD took its own interpretation to
> > > > > > > > > >>> what should be the
> > > > > > > > > >> maximum size for LRO aggregated packets.
> > > > > > > > > >>> Now, the user must say what is his intension, and
> > > > > > > > > >>> the ethdev can limit it
> > > > > > > > > >> according to the device capability.
> > > > > > > > > >>> By this way, also, the PMD can organize\optimize its
> > > > > > > > > >>> data-path
> > > > > more.
> > > > > > > > > >>> Also, the application can create different mempools
> > > > > > > > > >>> for LRO queues to
> > > > > > > > > >> allow bigger packet receiving for LRO traffic.
> > > > > > > > > >>>
> > > > > > > > > >>>> - What happens if PMD doesn't provide
> > > > > > > > > >>>> 'max_lro_pkt_size', so it is
> > > > > > > '0'?
> > > > > > > > > >>> Yes, you can see the feature description Dekel added.
> > > > > > > > > >>> This patch also updates all the PMDs support an LRO
> > > > > > > > > >>> for
> > > > > > > > > >>> non-0
> > > > > value.
> > > > > > > > > >>
> > > > > > > > > >> Of course I can see the updates Matan, my point is
> > > > > > > > > >> "What happens if PMD doesn't provide
> > > > > > > > > >> 'max_lro_pkt_size'",
> > > > > > > > > >> 1) There is no check for it right, so it is acceptable?
> > > > > > > > > >
> > > > > > > > > > There is check.
> > > > > > > > > > If the capability is 0, any non-zero configuration will fail.
> > > > > > > > > >
> > > > > > > > > >> 2) Are we making this filed mandatory to provide for
> > > > > > > > > >> PMDs, it is easy to make new fields mandatory for
> > > > > > > > > >> PMDs but is this really
> > > > > > > necessary?
> > > > > > > > > >
> > > > > > > > > > Yes, for consistence.
> > > > > > > > > >
> > > > > > > > > >>>
> > > > > > > > > >>> as same as max rx pkt len, no?
> > > > > > > > > >>>
> > > > > > > > > >>>> - What do you think setting 'max_lro_pkt_size'
> > > > > > > > > >>>> config value to what PMD provided if application
> > > > > > > > > >>>> doesn't provide
> > > it?
> > > > > > > > > >>> Same answers as above.
> > > > > > > > > >>>
> > > > > > > > > >>
> > > > > > > > > >> If application doesn't care the value, as it has been
> > > > > > > > > >> till now, and not provided explicit
> > > > > > > > > >> 'max_lro_pkt_size', why not ethdev level use the
> > > > > > > > > >> value provided by PMD instead
> > > of failing?
> > > > > > > > > >
> > > > > > > > > > Again, same question we can ask on max rx pkt len.
> > > > > > > > > >
> > > > > > > > > > Looks like the packet size is very important value
> > > > > > > > > > which should be set by
> > > > > > > > > the application.
> > > > > > > > > >
> > > > > > > > > > Previous applications have no option to configure it,
> > > > > > > > > > so they haven't
> > > > > > > > > configure it, (probably cover it somehow) I think it is
> > > > > > > > > our miss to supply this info.
> > > > > > > > > >
> > > > > > > > > > Let's do it in same way as we do max rx pkt len (as
> > > > > > > > > > this patch main
> > > > > idea).
> > > > > > > > > > Later, we can change both to other meaning.
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > > I think it is not a good reason to introduce a new
> > > > > > > > > mandatory config option for application because of
> 'max_rx_pkt_len' does it.
> > > > > > > >
> > > > > > > > It is mandatory only if LRO offload is configured.
> > > > > > >
> > > > > > > So max_rx_pkt_len will remain max size of one packet, while
> > > > > > > max_lro_len will be max accumulate size for each LRO session?
> > > > > > >
> > > > > >
> > > > > > Yes.
> > > > > >
> > > > > > > BTW, I think that for ixgbe max lro is RTE_IPV4_MAX_PKT_LEN.
> > > > > >
> > > > > > Please see my change in drivers/net/ixgbe/ixgbe_ethdev.c.
> > > > > > Change to RTE_IPV4_MAX_PKT_LEN?
> > > > > >
> > > > > > > ixgbe_vf, as I remember, doesn’t support LRO at all.
> > > > > >
> > > > > > Please see my change in
> > > > > > drivers/net/ixgbe/ixgbe_vf_representor.c
> > > > > > Remove it?
> > > > >
> > > > > Yes, please for both.
> > > >
> > > > Will change in v5.
> > > >
> > > > >
> > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > > > Will it work, if:
> > > > > > > > > - If application doesn't provide this value, use the PMD
> > > > > > > > > max
> > > > > > > >
> > > > > > > > May cause a problem if the mbuf size is not enough for the
> > > > > > > > PMD
> > > > > maximum.
> > > > > > >
> > > > > > > Another question, what will happen if PMD will ignore that
> > > > > > > value and will generate packets bigger then requested?
> > > > > >
> > > > > > PMD should use this value and not ignore it.
> > > > >
> > > > > Hmm, ok but this patch updates mxl driver only...
> > > > > I suppose you expect other PMD maintainers to do the job for
> > > > > their PMDs, right?
> > > > > If so, are they aware (and agree) for this new hard requirement
> > > > > and changes required?
> > > > > Again what PMD should do if it can't support exact value?
> > > > > Let say user asked max_lro_size=20KB but PMD can do only 16KB or
> > > 24KB?
> > > > > Should it fail, or round to smallest, or ...?
> > > > >
> > > > > Actually I wonder, should it really be a hard requirement or
> > > > > more like a guidance to PMD?
> > > > > Why app needs and *exact* value for LRO size?
> > > >
> > > > The exact value should be configured to HW as LRO session limit.
> > >
> > > But if the HW can't support this exact value, see the example above?
> > > In fact, shouldn't we allow PMD to forbid user to configure max LRO size?
> > > Let say if in dev_info max_lro_size==0, then PMD doesn't support LRO
> > > size configuration at all.
> > > That way PMDs who do support LRO, but don't want to (can't to)
> > > support configurable LRO size will stay untouched.
> >
> > Each HW should support packet size limitation no matter if it is LRO packet
> or not:
> > How does the PMD limit the packet size for max rx packet len conf?
> > How does the PMD limit the packet size for the mbuf size?
> 
> Not sure I understand your statement and questions above...
> For sure PMD has to support max_rx_pktlen., but how does it relate to
> max_lro?

You said that HW may not support LRO max size configuration.
I answered that, just as the HW can limit packet sizes according to the max_rx_pkt_len configuration, it can limit LRO packet sizes here too.
To simplify:
Rx queues which are not configured to do LRO offload should limit their packets to the max_rx_pkt_len field.
Rx queues which are configured to do LRO offload should limit their packets to the new max_lro_pkt_len field.

In addition, both should limit the packet size to the mbuf size of the Rx mempool configured for the Rx queue (if scatter offload is not enabled).
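To make the rule concrete, here is a minimal PMD-side sketch (the helper name is hypothetical; it assumes the new rxmode.max_lro_pkt_size field from this series):

	/* Hypothetical helper: pick the Rx packet size limit for a queue
	 * following the rules above, then cap it by the mbuf data room
	 * when scatter offload is not enabled.
	 */
	static uint32_t
	rxq_max_pkt_len(const struct rte_eth_rxmode *rxmode,
			uint32_t mbuf_data_room)
	{
		uint32_t limit;

		if (rxmode->offloads & DEV_RX_OFFLOAD_TCP_LRO)
			limit = rxmode->max_lro_pkt_size;
		else
			limit = rxmode->max_rx_pkt_len;
		if (!(rxmode->offloads & DEV_RX_OFFLOAD_SCATTER) &&
		    limit > mbuf_data_room)
			limit = mbuf_data_room;
		return limit;
	}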
  
Matan Azrad Nov. 11, 2019, 8:01 a.m. UTC | #20
From: Ananyev, Konstantin
> >
> > From: Ferruh Yigit
> > > On 11/8/2019 11:56 AM, Matan Azrad wrote:
> > > >
> > > >
> > > > From: Ferruh Yigit
> > > >> On 11/8/2019 10:10 AM, Matan Azrad wrote:
> > > >>>
> > > >>>
> > > >>> From: Ferruh Yigit
> > > >>>> On 11/8/2019 6:54 AM, Matan Azrad wrote:
> > > >>>>> Hi
> > > >>>>>
> > > >>>>> From: Ferruh Yigit
> > > >>>>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> > > >>>>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> > > >>>>>>>
> > > >>>>>> 	RTE_ETHER_MAX_LEN;
> > > >>>>>>>  	}
> > > >>>>>>>
> > > >>>>>>> +	/*
> > > >>>>>>> +	 * If LRO is enabled, check that the maximum aggregated
> > > >> packet
> > > >>>>>>> +	 * size is supported by the configured device.
> > > >>>>>>> +	 */
> > > >>>>>>> +	if (dev_conf->rxmode.offloads &
> > > >> DEV_RX_OFFLOAD_TCP_LRO) {
> > > >>>>>>> +		ret = check_lro_pkt_size(
> > > >>>>>>> +				port_id, dev_conf-
> > > >>>>>>> rxmode.max_lro_pkt_size,
> > > >>>>>>> +				dev_info.max_lro_pkt_size);
> > > >>>>>>> +		if (ret != 0)
> > > >>>>>>> +			goto rollback;
> > > >>>>>>> +	}
> > > >>>>>>> +
> > > >>>>>>
> > > >>>>>> This check forces applications that enable LRO to provide
> > > >>>> 'max_lro_pkt_size'
> > > >>>>>> config value.
> > > >>>>>
> > > >>>>> Yes.(we can break an API, we noticed it)
> > > >>>>
> > > >>>> I am not talking about API/ABI breakage, that part is OK.
> > > >>>> With this check, if the application requested LRO offload but
> > > >>>> not provided 'max_lro_pkt_size' value, device configuration will fail.
> > > >>>>
> > > >>> Yes
> > > >>>> Can there be a case application is good with whatever the PMD
> > > >>>> can support as max?
> > > >>> Yes can be - you know, we can do everything we want but it is
> > > >>> better to be
> > > >> consistent:
> > > >>> Due to the fact of Max rx pkt len field is mandatory for JUMBO
> > > >>> offload, max
> > > >> lro pkt len should be mandatory for LRO offload.
> > > >>>
> > > >>> So your question is actually why both, non-lro packets and LRO
> > > >>> packets max
> > > >> size are mandatory...
> > > >>>
> > > >>>
> > > >>> I think it should be important values for net applications
> management.
> > > >>> Also good for mbuf size managements.
> > > >>>
> > > >>>>>
> > > >>>>>> - Why it is mandatory now, how it was working before if it is
> > > >>>>>> mandatory value?
> > > >>>>>
> > > >>>>> It is the same as max_rx_pkt_len which is mandatory for jumbo
> > > >>>>> frame
> > > >>>> offload.
> > > >>>>> So now, when the user configures a LRO offload he must to set
> > > >>>>> max lro pkt
> > > >>>> len.
> > > >>>>> We don't want to confuse the user here with the max rx pkt len
> > > >>>> configurations and behaviors, they should be with same logic.
> > > >>>>>
> > > >>>>> This parameter defines well the LRO behavior.
> > > >>>>> Before this, each PMD took its own interpretation to what
> > > >>>>> should be the
> > > >>>> maximum size for LRO aggregated packets.
> > > >>>>> Now, the user must say what is his intension, and the ethdev
> > > >>>>> can limit it
> > > >>>> according to the device capability.
> > > >>>>> By this way, also, the PMD can organize\optimize its data-path
> more.
> > > >>>>> Also, the application can create different mempools for LRO
> > > >>>>> queues to
> > > >>>> allow bigger packet receiving for LRO traffic.
> > > >>>>>
> > > >>>>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so
> > > >>>>>> it is
> > > '0'?
> > > >>>>> Yes, you can see the feature description Dekel added.
> > > >>>>> This patch also updates all the PMDs support an LRO for non-0
> value.
> > > >>>>
> > > >>>> Of course I can see the updates Matan, my point is "What
> > > >>>> happens if PMD doesn't provide 'max_lro_pkt_size'",
> > > >>>> 1) There is no check for it right, so it is acceptable?
> > > >>>
> > > >>> There is check.
> > > >>> If the capability is 0, any non-zero configuration will fail.
> > > >>>
> > > >>>> 2) Are we making this filed mandatory to provide for PMDs, it
> > > >>>> is easy to make new fields mandatory for PMDs but is this
> > > >>>> really
> > > necessary?
> > > >>>
> > > >>> Yes, for consistence.
> > > >>>
> > > >>>>>
> > > >>>>> as same as max rx pkt len, no?
> > > >>>>>
> > > >>>>>> - What do you think setting 'max_lro_pkt_size' config value
> > > >>>>>> to what PMD provided if application doesn't provide it?
> > > >>>>> Same answers as above.
> > > >>>>>
> > > >>>>
> > > >>>> If application doesn't care the value, as it has been till now,
> > > >>>> and not provided explicit 'max_lro_pkt_size', why not ethdev
> > > >>>> level use the value provided by PMD instead of failing?
> > > >>>
> > > >>> Again, same question we can ask on max rx pkt len.
> > > >>>
> > > >>> Looks like the packet size is very important value which should
> > > >>> be set by
> > > >> the application.
> > > >>>
> > > >>> Previous applications have no option to configure it, so they
> > > >>> haven't
> > > >> configure it, (probably cover it somehow) I think it is our miss
> > > >> to supply this info.
> > > >>>
> > > >>> Let's do it in same way as we do max rx pkt len (as this patch main
> idea).
> > > >>> Later, we can change both to other meaning.
> > > >>>
> > > >>
> > > >> I think it is not a good reason to introduce a new mandatory
> > > >> config option for application because of 'max_rx_pkt_len' does it.
> > > >
> > > > It is mandatory only if LRO offload is configured.
> > > >
> > > >> Will it work, if:
> > > >> - If application doesn't provide this value, use the PMD max
> > > >
> > > > May cause a problem if the mbuf size is not enough for the PMD
> maximum.
> > >
> > > OK, this is what I was missing, for this case I was thinking
> > > max_rx_pkt_len will be used but you already explained that
> > > application may want to use different mempools for LRO queues.
> > >
> > So , are you agree with the idea?
> >
> > > For this case shouldn't PMDs take the 'rxmode.max_lro_pkt_size' into
> > > account and program the device accordingly (of course in LRO enabled
> > > case) ?
> > > This part seems missing and should be highlighted to other PMD
> maintainers.
> >
> >
> > Yes, you are right.
> > PMDs must limit the LRO aggregated packet according to the new field,
> > And it probably very hard for the patch introducer to understand how to do
> it for each PMD.
> >
> > I think each new configuration requires other maintainers\developers
> > to adjust their own PMD code to the new configuration and it should be
> done in limited time.
> >
> > My suggestion here:
> > 1. To reserve the info field and the configuration field for rc2.(if
> > it is critical not to break ABI for rc3) 2. To merge the ethdev patch in the
> start of rc3.
> > 3. Request each relevant PMD to adjust its PMD to the new configuration
> for the end of rc3.
> > 	Note: this should be small change and only for ~5 PMDs:
> > 		a. Introduce the info field according to the device ability.
> > 		b. For each LRO queue:
> > 			Use the LRO max size configuration instead of the
> current max rx pkt len configuration(looks like small condition).
> 
> That's definitely looks like a significant behavior change for existing apps and
> PMDs, and I wonder what for?

There was a gap in the configuration:

It doesn't make sense to limit non-LRO queues to the same packet length as LRO queues:
	Naturally, LRO packets are significantly bigger (because of the HW aggregation), hence
	the user may use bigger mbufs for the LRO packets. So potentially it is better to separate the mempools: one for the LRO queues with big mbufs, and a second for the non-LRO queues with smaller mbufs (to optimize the memory usage).
	Since the user may want tail-room in the LRO mbuf, they may limit the LRO packet size to a number smaller than the mbuf size (minus headroom), and for this reason, as with the regular field (max_rx_pkt_len), a new field should be set for LRO queues.
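As a rough application-side sketch of such a split (the pool sizes, queue layout and 16KB LRO limit are made-up examples; port_id is assumed to be a configured port):

	/* Small mbufs for non-LRO traffic, big mbufs for the LRO queue. */
	struct rte_mempool *mp_small = rte_pktmbuf_pool_create("rx_small",
			8192, 256, 0, RTE_MBUF_DEFAULT_BUF_SIZE,
			rte_socket_id());
	struct rte_mempool *mp_lro = rte_pktmbuf_pool_create("rx_lro",
			4096, 256, 0,
			16 * 1024 + RTE_PKTMBUF_HEADROOM, /* one aggregated packet */
			rte_socket_id());

	/* Queue 0 receives regular traffic, queue 1 is the LRO queue. */
	rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(), NULL, mp_small);
	rte_eth_rx_queue_setup(port_id, 1, 512, rte_socket_id(), NULL, mp_lro);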

> Why we can't keep max_rx_pkt_len semantics as it is right now, and just add
> an optional ability to limit max size of LRO aggregations?

What is the semantics of max_rx_pkt_len regarding LRO packets? It is not clear from the documentation.
 
So this patch defines it well:
Non-LRO queues should be limited to max_rx_pkt_len.
LRO queues should be limited to max_lro_pkt_len.

The ways of configuring the Rx packet length should be consistent:
max_rx_pkt_len is mandatory for the JUMBO offload => max_lro_pkt_len is mandatory for the LRO offload.


Current applications using LRO just need to configure the new field to the same value as max_rx_pkt_len if they want to keep the same behavior - really not a big change.
If an application wants to improve its memory usage as described above, the new field allows it as well.
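For an existing application the migration could be as small as this sketch (port_id, nb_rxq and nb_txq are assumed to be set up already):

	struct rte_eth_conf conf = { 0 };

	conf.rxmode.offloads |= DEV_RX_OFFLOAD_TCP_LRO;
	conf.rxmode.max_rx_pkt_len = RTE_ETHER_MAX_LEN;
	/* Keep today's behavior: same limit for LRO packets. */
	conf.rxmode.max_lro_pkt_size = conf.rxmode.max_rx_pkt_len;

	rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);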

> > What do you think?
> >
> >
> >
  
Ferruh Yigit Nov. 11, 2019, 11:15 a.m. UTC | #21
On 11/9/2019 6:20 PM, Matan Azrad wrote:
> Hi
> 
> From: Ferruh Yigit
>> On 11/8/2019 11:56 AM, Matan Azrad wrote:
>>>
>>>
>>> From: Ferruh Yigit
>>>> On 11/8/2019 10:10 AM, Matan Azrad wrote:
>>>>>
>>>>>
>>>>> From: Ferruh Yigit
>>>>>> On 11/8/2019 6:54 AM, Matan Azrad wrote:
>>>>>>> Hi
>>>>>>>
>>>>>>> From: Ferruh Yigit
>>>>>>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
>>>>>>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
>>>>>>>>>
>>>>>>>> 	RTE_ETHER_MAX_LEN;
>>>>>>>>>  	}
>>>>>>>>>
>>>>>>>>> +	/*
>>>>>>>>> +	 * If LRO is enabled, check that the maximum aggregated
>>>> packet
>>>>>>>>> +	 * size is supported by the configured device.
>>>>>>>>> +	 */
>>>>>>>>> +	if (dev_conf->rxmode.offloads &
>>>> DEV_RX_OFFLOAD_TCP_LRO) {
>>>>>>>>> +		ret = check_lro_pkt_size(
>>>>>>>>> +				port_id, dev_conf-
>>>>>>>>> rxmode.max_lro_pkt_size,
>>>>>>>>> +				dev_info.max_lro_pkt_size);
>>>>>>>>> +		if (ret != 0)
>>>>>>>>> +			goto rollback;
>>>>>>>>> +	}
>>>>>>>>> +
>>>>>>>>
>>>>>>>> This check forces applications that enable LRO to provide
>>>>>> 'max_lro_pkt_size'
>>>>>>>> config value.
>>>>>>>
>>>>>>> Yes.(we can break an API, we noticed it)
>>>>>>
>>>>>> I am not talking about API/ABI breakage, that part is OK.
>>>>>> With this check, if the application requested LRO offload but not
>>>>>> provided 'max_lro_pkt_size' value, device configuration will fail.
>>>>>>
>>>>> Yes
>>>>>> Can there be a case application is good with whatever the PMD can
>>>>>> support as max?
>>>>> Yes can be - you know, we can do everything we want but it is better
>>>>> to be
>>>> consistent:
>>>>> Due to the fact of Max rx pkt len field is mandatory for JUMBO
>>>>> offload, max
>>>> lro pkt len should be mandatory for LRO offload.
>>>>>
>>>>> So your question is actually why both, non-lro packets and LRO
>>>>> packets max
>>>> size are mandatory...
>>>>>
>>>>>
>>>>> I think it should be important values for net applications management.
>>>>> Also good for mbuf size managements.
>>>>>
>>>>>>>
>>>>>>>> - Why it is mandatory now, how it was working before if it is
>>>>>>>> mandatory value?
>>>>>>>
>>>>>>> It is the same as max_rx_pkt_len which is mandatory for jumbo
>>>>>>> frame
>>>>>> offload.
>>>>>>> So now, when the user configures a LRO offload he must to set max
>>>>>>> lro pkt
>>>>>> len.
>>>>>>> We don't want to confuse the user here with the max rx pkt len
>>>>>> configurations and behaviors, they should be with same logic.
>>>>>>>
>>>>>>> This parameter defines well the LRO behavior.
>>>>>>> Before this, each PMD took its own interpretation to what should
>>>>>>> be the
>>>>>> maximum size for LRO aggregated packets.
>>>>>>> Now, the user must say what is his intension, and the ethdev can
>>>>>>> limit it
>>>>>> according to the device capability.
>>>>>>> By this way, also, the PMD can organize\optimize its data-path more.
>>>>>>> Also, the application can create different mempools for LRO queues
>>>>>>> to
>>>>>> allow bigger packet receiving for LRO traffic.
>>>>>>>
>>>>>>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it is
>> '0'?
>>>>>>> Yes, you can see the feature description Dekel added.
>>>>>>> This patch also updates all the PMDs support an LRO for non-0 value.
>>>>>>
>>>>>> Of course I can see the updates Matan, my point is "What happens if
>>>>>> PMD doesn't provide 'max_lro_pkt_size'",
>>>>>> 1) There is no check for it right, so it is acceptable?
>>>>>
>>>>> There is check.
>>>>> If the capability is 0, any non-zero configuration will fail.
>>>>>
>>>>>> 2) Are we making this filed mandatory to provide for PMDs, it is
>>>>>> easy to make new fields mandatory for PMDs but is this really
>> necessary?
>>>>>
>>>>> Yes, for consistence.
>>>>>
>>>>>>>
>>>>>>> as same as max rx pkt len, no?
>>>>>>>
>>>>>>>> - What do you think setting 'max_lro_pkt_size' config value to
>>>>>>>> what PMD provided if application doesn't provide it?
>>>>>>> Same answers as above.
>>>>>>>
>>>>>>
>>>>>> If application doesn't care the value, as it has been till now, and
>>>>>> not provided explicit 'max_lro_pkt_size', why not ethdev level use
>>>>>> the value provided by PMD instead of failing?
>>>>>
>>>>> Again, same question we can ask on max rx pkt len.
>>>>>
>>>>> Looks like the packet size is very important value which should be
>>>>> set by
>>>> the application.
>>>>>
>>>>> Previous applications have no option to configure it, so they
>>>>> haven't
>>>> configure it, (probably cover it somehow) I think it is our miss to
>>>> supply this info.
>>>>>
>>>>> Let's do it in same way as we do max rx pkt len (as this patch main idea).
>>>>> Later, we can change both to other meaning.
>>>>>
>>>>
>>>> I think it is not a good reason to introduce a new mandatory config
>>>> option for application because of 'max_rx_pkt_len' does it.
>>>
>>> It is mandatory only if LRO offload is configured.
>>>
>>>> Will it work, if:
>>>> - If application doesn't provide this value, use the PMD max
>>>
>>> May cause a problem if the mbuf size is not enough for the PMD maximum.
>>
>> OK, this is what I was missing, for this case I was thinking max_rx_pkt_len will
>> be used but you already explained that application may want to use different
>> mempools for LRO queues.
>>
> So , are you agree with the idea?
> 
>> For this case shouldn't PMDs take the 'rxmode.max_lro_pkt_size' into
>> account and program the device accordingly (of course in LRO enabled case)
>> ?
>> This part seems missing and should be highlighted to other PMD maintainers.
> 
> 
> Yes, you are right.
> PMDs must limit the LRO aggregated packet according to the new field,
> And it probably very hard for the patch introducer to understand how to do it for each PMD. 
> 
> I think each new configuration requires other maintainers\developers to adjust their own PMD code to the new configuration and it should be done in limited time.

Agree.
But experience has shown that this synchronization is not as easy as it sounds:
whoever changes the interface/library says other PMDs should reflect the change,
but most of the time the other PMD maintainers are not aware of it, or if they
are, they have other priorities for the release. So the changes should be made
in a way that gives PMDs more time to adapt, and during this time the library
change shouldn't break other PMDs.

> 
> My suggestion here:
> 1. To reserve the info field and the configuration field for rc2.(if it is critical not to break ABI for rc3)
> 2. To merge the ethdev patch in the start of rc3.
> 3. Request each relevant PMD to adjust its PMD to the new configuration for the end of rc3.
> 	Note: this should be small change and only for ~5 PMDs:
> 		a. Introduce the info field according to the device ability.
> 		b. For each LRO queue:
> 			Use the LRO max size configuration instead of the current max rx pkt len configuration(looks like small condition). 
> 
> What do you think?

There is already a v6 which only updates dev_info fields to have the
'max_lro_pktlen' field; the PMD updates there also look safe, so I think we can
go with it for rc2.

For the configuration part, I suggest deferring it to the next release, which
gives more time for discussion and enough time for other PMDs to implement it.


And related to the configuration: right now devices are already configured to
limit the packet size to 'max_rx_pkt_len'; it can be an optimization to increase
it to 'max_lro_pkt_len' for the queues where LRO is supported. Why not make this
configuration more explicit with a specific API, as Konstantin suggested [1]?
This way it only affects the applications that are interested in it and the PMDs
that want to support it.
The current implementation is under 'rte_eth_dev_configure()', which is used by
all DPDK applications, so the impact of changing it is much larger; it also makes
it mandatory for applications to provide this config option when LRO is enabled,
while an explicit API gives the same result without adding a mandatory config option.

[1]
int rte_eth_dev_set_max_lro(uint16_t port_id, uint32_t lro);
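For illustration, a usage sketch of that suggested call (it is only the proposal above, not an existing ethdev symbol; the limit value is an example):

	/* Opt-in: only applications that care about the LRO aggregation
	 * limit would call this, after rte_eth_dev_configure() and before
	 * starting the port.
	 */
	uint32_t lro_max = 64 * 1024;

	if (rte_eth_dev_set_max_lro(port_id, lro_max) != 0)
		printf("port %u: LRO size limit not supported\n", port_id);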
  
Matan Azrad Nov. 11, 2019, 11:33 a.m. UTC | #22
From: Ferruh Yigit
> On 11/9/2019 6:20 PM, Matan Azrad wrote:
> > Hi
> >
> > From: Ferruh Yigit
> >> On 11/8/2019 11:56 AM, Matan Azrad wrote:
> >>>
> >>>
> >>> From: Ferruh Yigit
> >>>> On 11/8/2019 10:10 AM, Matan Azrad wrote:
> >>>>>
> >>>>>
> >>>>> From: Ferruh Yigit
> >>>>>> On 11/8/2019 6:54 AM, Matan Azrad wrote:
> >>>>>>> Hi
> >>>>>>>
> >>>>>>> From: Ferruh Yigit
> >>>>>>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> >>>>>>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> >>>>>>>>>
> >>>>>>>> 	RTE_ETHER_MAX_LEN;
> >>>>>>>>>  	}
> >>>>>>>>>
> >>>>>>>>> +	/*
> >>>>>>>>> +	 * If LRO is enabled, check that the maximum aggregated
> >>>> packet
> >>>>>>>>> +	 * size is supported by the configured device.
> >>>>>>>>> +	 */
> >>>>>>>>> +	if (dev_conf->rxmode.offloads &
> >>>> DEV_RX_OFFLOAD_TCP_LRO) {
> >>>>>>>>> +		ret = check_lro_pkt_size(
> >>>>>>>>> +				port_id, dev_conf-
> >>>>>>>>> rxmode.max_lro_pkt_size,
> >>>>>>>>> +				dev_info.max_lro_pkt_size);
> >>>>>>>>> +		if (ret != 0)
> >>>>>>>>> +			goto rollback;
> >>>>>>>>> +	}
> >>>>>>>>> +
> >>>>>>>>
> >>>>>>>> This check forces applications that enable LRO to provide
> >>>>>> 'max_lro_pkt_size'
> >>>>>>>> config value.
> >>>>>>>
> >>>>>>> Yes.(we can break an API, we noticed it)
> >>>>>>
> >>>>>> I am not talking about API/ABI breakage, that part is OK.
> >>>>>> With this check, if the application requested LRO offload but not
> >>>>>> provided 'max_lro_pkt_size' value, device configuration will fail.
> >>>>>>
> >>>>> Yes
> >>>>>> Can there be a case application is good with whatever the PMD can
> >>>>>> support as max?
> >>>>> Yes can be - you know, we can do everything we want but it is
> >>>>> better to be
> >>>> consistent:
> >>>>> Due to the fact of Max rx pkt len field is mandatory for JUMBO
> >>>>> offload, max
> >>>> lro pkt len should be mandatory for LRO offload.
> >>>>>
> >>>>> So your question is actually why both, non-lro packets and LRO
> >>>>> packets max
> >>>> size are mandatory...
> >>>>>
> >>>>>
> >>>>> I think it should be important values for net applications management.
> >>>>> Also good for mbuf size managements.
> >>>>>
> >>>>>>>
> >>>>>>>> - Why it is mandatory now, how it was working before if it is
> >>>>>>>> mandatory value?
> >>>>>>>
> >>>>>>> It is the same as max_rx_pkt_len which is mandatory for jumbo
> >>>>>>> frame
> >>>>>> offload.
> >>>>>>> So now, when the user configures a LRO offload he must to set
> >>>>>>> max lro pkt
> >>>>>> len.
> >>>>>>> We don't want to confuse the user here with the max rx pkt len
> >>>>>> configurations and behaviors, they should be with same logic.
> >>>>>>>
> >>>>>>> This parameter defines well the LRO behavior.
> >>>>>>> Before this, each PMD took its own interpretation to what should
> >>>>>>> be the
> >>>>>> maximum size for LRO aggregated packets.
> >>>>>>> Now, the user must say what is his intension, and the ethdev can
> >>>>>>> limit it
> >>>>>> according to the device capability.
> >>>>>>> By this way, also, the PMD can organize\optimize its data-path
> more.
> >>>>>>> Also, the application can create different mempools for LRO
> >>>>>>> queues to
> >>>>>> allow bigger packet receiving for LRO traffic.
> >>>>>>>
> >>>>>>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it
> >>>>>>>> is
> >> '0'?
> >>>>>>> Yes, you can see the feature description Dekel added.
> >>>>>>> This patch also updates all the PMDs support an LRO for non-0
> value.
> >>>>>>
> >>>>>> Of course I can see the updates Matan, my point is "What happens
> >>>>>> if PMD doesn't provide 'max_lro_pkt_size'",
> >>>>>> 1) There is no check for it right, so it is acceptable?
> >>>>>
> >>>>> There is check.
> >>>>> If the capability is 0, any non-zero configuration will fail.
> >>>>>
> >>>>>> 2) Are we making this filed mandatory to provide for PMDs, it is
> >>>>>> easy to make new fields mandatory for PMDs but is this really
> >> necessary?
> >>>>>
> >>>>> Yes, for consistence.
> >>>>>
> >>>>>>>
> >>>>>>> as same as max rx pkt len, no?
> >>>>>>>
> >>>>>>>> - What do you think setting 'max_lro_pkt_size' config value to
> >>>>>>>> what PMD provided if application doesn't provide it?
> >>>>>>> Same answers as above.
> >>>>>>>
> >>>>>>
> >>>>>> If application doesn't care the value, as it has been till now,
> >>>>>> and not provided explicit 'max_lro_pkt_size', why not ethdev
> >>>>>> level use the value provided by PMD instead of failing?
> >>>>>
> >>>>> Again, same question we can ask on max rx pkt len.
> >>>>>
> >>>>> Looks like the packet size is very important value which should be
> >>>>> set by
> >>>> the application.
> >>>>>
> >>>>> Previous applications have no option to configure it, so they
> >>>>> haven't
> >>>> configure it, (probably cover it somehow) I think it is our miss to
> >>>> supply this info.
> >>>>>
> >>>>> Let's do it in same way as we do max rx pkt len (as this patch main
> idea).
> >>>>> Later, we can change both to other meaning.
> >>>>>
> >>>>
> >>>> I think it is not a good reason to introduce a new mandatory config
> >>>> option for application because of 'max_rx_pkt_len' does it.
> >>>
> >>> It is mandatory only if LRO offload is configured.
> >>>
> >>>> Will it work, if:
> >>>> - If application doesn't provide this value, use the PMD max
> >>>
> >>> May cause a problem if the mbuf size is not enough for the PMD
> maximum.
> >>
> >> OK, this is what I was missing, for this case I was thinking
> >> max_rx_pkt_len will be used but you already explained that
> >> application may want to use different mempools for LRO queues.
> >>
> > So , are you agree with the idea?
> >
> >> For this case shouldn't PMDs take the 'rxmode.max_lro_pkt_size' into
> >> account and program the device accordingly (of course in LRO enabled
> >> case) ?
> >> This part seems missing and should be highlighted to other PMD
> maintainers.
> >
> >
> > Yes, you are right.
> > PMDs must limit the LRO aggregated packet according to the new field,
> > And it probably very hard for the patch introducer to understand how to do
> it for each PMD.
> >
> > I think each new configuration requires other maintainers\developers to
> adjust their own PMD code to the new configuration and it should be done in
> limited time.
> 
> Agree.
> But experience showed that this synchronization is not as easy as it sounds,
> whoever changing the interface/library says other PMDs should reflect the
> change but most of the times other PMD maintainers not aware of it or if
> they do they have other priorities for the release, so the changes should be
> in a way to give more time to PMDs to adapt it and during this time library
> change shouldn't break other PMDs.
> 

Yes.

> > My suggestion here:
> > 1. To reserve the info field and the configuration field for rc2.(if
> > it is critical not to break ABI for rc3) 2. To merge the ethdev patch in the
> start of rc3.
> > 3. Request each relevant PMD to adjust its PMD to the new configuration
> for the end of rc3.
> > 	Note: this should be small change and only for ~5 PMDs:
> > 		a. Introduce the info field according to the device ability.
> > 		b. For each LRO queue:
> > 			Use the LRO max size configuration instead of the
> current max rx pkt len configuration(looks like small condition).
> >
> > What do you think?
> 
> There is already a v6 which only updates dev_info fields to have the
> 'max_lro_pktlen' field, the PMD updates there also looks safe, so I think we
> can go with it for rc2.
> 

Doesn’t make sense to expose the info field without the configuration.


> For the configuration part, I suggest deferring it next release, which gives
> more time for discussion and enough time for other PMDs to implement it.
> 
> 
> And related configuration, right now devices already configured to limit the
> packet size to 'max_rx_pkt_len', it can be an optimization to increase it to
> 'max_lro_pkt_len' for the queues LRO is supported, why not make this
> configuration more explicitly with specific API as Konstantin suggested [1],
> this way it only affects the applications that are interested in and the PMDs
> that want to support this.
> Current implementation is under 'rte_eth_dev_configure()' which is used by
> all DPDK applications and impact of changing it is much larger, also it makes
> mandatory for applications to provide this config option when LRO enabled,
> explicit API gives same result without making a mandatory config option.
> 
> [1]
> int rte_eth_dev_set_max_lro(uint16_t port_id, uint32_t lro);

Please see my answers to Konstantin regarding this topic.



One more option, in order not to break PMDs because of this feature:
0 in the capability field means the PMD doesn't support a special LRO limitation, so if the application configuration is not the same as max_rx_pkt_len, the validation will fail.
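In ethdev terms, a minimal sketch of that rule (a hypothetical check, just to make the option concrete):

	/* "Capability == 0" option: the PMD reports no special LRO limit,
	 * so only the legacy behavior (LRO limit == max_rx_pkt_len) passes.
	 */
	if ((dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) &&
	    dev_info.max_lro_pkt_size == 0 &&
	    dev_conf->rxmode.max_lro_pkt_size !=
	    dev_conf->rxmode.max_rx_pkt_len) {
		ret = -EINVAL;
		goto rollback;
	}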
  
Ferruh Yigit Nov. 11, 2019, 12:21 p.m. UTC | #23
On 11/11/2019 11:33 AM, Matan Azrad wrote:
> 
> 
> From: Ferruh Yigit
>> On 11/9/2019 6:20 PM, Matan Azrad wrote:
>>> Hi
>>>
>>> From: Ferruh Yigit
>>>> On 11/8/2019 11:56 AM, Matan Azrad wrote:
>>>>>
>>>>>
>>>>> From: Ferruh Yigit
>>>>>> On 11/8/2019 10:10 AM, Matan Azrad wrote:
>>>>>>>
>>>>>>>
>>>>>>> From: Ferruh Yigit
>>>>>>>> On 11/8/2019 6:54 AM, Matan Azrad wrote:
>>>>>>>>> Hi
>>>>>>>>>
>>>>>>>>> From: Ferruh Yigit
>>>>>>>>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
>>>>>>>>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
>>>>>>>>>>>
>>>>>>>>>> 	RTE_ETHER_MAX_LEN;
>>>>>>>>>>>  	}
>>>>>>>>>>>
>>>>>>>>>>> +	/*
>>>>>>>>>>> +	 * If LRO is enabled, check that the maximum aggregated
>>>>>> packet
>>>>>>>>>>> +	 * size is supported by the configured device.
>>>>>>>>>>> +	 */
>>>>>>>>>>> +	if (dev_conf->rxmode.offloads &
>>>>>> DEV_RX_OFFLOAD_TCP_LRO) {
>>>>>>>>>>> +		ret = check_lro_pkt_size(
>>>>>>>>>>> +				port_id, dev_conf-
>>>>>>>>>>> rxmode.max_lro_pkt_size,
>>>>>>>>>>> +				dev_info.max_lro_pkt_size);
>>>>>>>>>>> +		if (ret != 0)
>>>>>>>>>>> +			goto rollback;
>>>>>>>>>>> +	}
>>>>>>>>>>> +
>>>>>>>>>>
>>>>>>>>>> This check forces applications that enable LRO to provide
>>>>>>>> 'max_lro_pkt_size'
>>>>>>>>>> config value.
>>>>>>>>>
>>>>>>>>> Yes.(we can break an API, we noticed it)
>>>>>>>>
>>>>>>>> I am not talking about API/ABI breakage, that part is OK.
>>>>>>>> With this check, if the application requested LRO offload but not
>>>>>>>> provided 'max_lro_pkt_size' value, device configuration will fail.
>>>>>>>>
>>>>>>> Yes
>>>>>>>> Can there be a case application is good with whatever the PMD can
>>>>>>>> support as max?
>>>>>>> Yes can be - you know, we can do everything we want but it is
>>>>>>> better to be
>>>>>> consistent:
>>>>>>> Due to the fact of Max rx pkt len field is mandatory for JUMBO
>>>>>>> offload, max
>>>>>> lro pkt len should be mandatory for LRO offload.
>>>>>>>
>>>>>>> So your question is actually why both, non-lro packets and LRO
>>>>>>> packets max
>>>>>> size are mandatory...
>>>>>>>
>>>>>>>
>>>>>>> I think it should be important values for net applications management.
>>>>>>> Also good for mbuf size managements.
>>>>>>>
>>>>>>>>>
>>>>>>>>>> - Why it is mandatory now, how it was working before if it is
>>>>>>>>>> mandatory value?
>>>>>>>>>
>>>>>>>>> It is the same as max_rx_pkt_len which is mandatory for jumbo
>>>>>>>>> frame
>>>>>>>> offload.
>>>>>>>>> So now, when the user configures a LRO offload he must to set
>>>>>>>>> max lro pkt
>>>>>>>> len.
>>>>>>>>> We don't want to confuse the user here with the max rx pkt len
>>>>>>>> configurations and behaviors, they should be with same logic.
>>>>>>>>>
>>>>>>>>> This parameter defines well the LRO behavior.
>>>>>>>>> Before this, each PMD took its own interpretation to what should
>>>>>>>>> be the
>>>>>>>> maximum size for LRO aggregated packets.
>>>>>>>>> Now, the user must say what is his intension, and the ethdev can
>>>>>>>>> limit it
>>>>>>>> according to the device capability.
>>>>>>>>> By this way, also, the PMD can organize\optimize its data-path
>> more.
>>>>>>>>> Also, the application can create different mempools for LRO
>>>>>>>>> queues to
>>>>>>>> allow bigger packet receiving for LRO traffic.
>>>>>>>>>
>>>>>>>>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so it
>>>>>>>>>> is
>>>> '0'?
>>>>>>>>> Yes, you can see the feature description Dekel added.
>>>>>>>>> This patch also updates all the PMDs support an LRO for non-0
>> value.
>>>>>>>>
>>>>>>>> Of course I can see the updates Matan, my point is "What happens
>>>>>>>> if PMD doesn't provide 'max_lro_pkt_size'",
>>>>>>>> 1) There is no check for it right, so it is acceptable?
>>>>>>>
>>>>>>> There is check.
>>>>>>> If the capability is 0, any non-zero configuration will fail.
>>>>>>>
>>>>>>>> 2) Are we making this filed mandatory to provide for PMDs, it is
>>>>>>>> easy to make new fields mandatory for PMDs but is this really
>>>> necessary?
>>>>>>>
>>>>>>> Yes, for consistence.
>>>>>>>
>>>>>>>>>
>>>>>>>>> as same as max rx pkt len, no?
>>>>>>>>>
>>>>>>>>>> - What do you think setting 'max_lro_pkt_size' config value to
>>>>>>>>>> what PMD provided if application doesn't provide it?
>>>>>>>>> Same answers as above.
>>>>>>>>>
>>>>>>>>
>>>>>>>> If application doesn't care the value, as it has been till now,
>>>>>>>> and not provided explicit 'max_lro_pkt_size', why not ethdev
>>>>>>>> level use the value provided by PMD instead of failing?
>>>>>>>
>>>>>>> Again, same question we can ask on max rx pkt len.
>>>>>>>
>>>>>>> Looks like the packet size is very important value which should be
>>>>>>> set by
>>>>>> the application.
>>>>>>>
>>>>>>> Previous applications have no option to configure it, so they
>>>>>>> haven't
>>>>>> configure it, (probably cover it somehow) I think it is our miss to
>>>>>> supply this info.
>>>>>>>
>>>>>>> Let's do it in same way as we do max rx pkt len (as this patch main
>> idea).
>>>>>>> Later, we can change both to other meaning.
>>>>>>>
>>>>>>
>>>>>> I think it is not a good reason to introduce a new mandatory config
>>>>>> option for application because of 'max_rx_pkt_len' does it.
>>>>>
>>>>> It is mandatory only if LRO offload is configured.
>>>>>
>>>>>> Will it work, if:
>>>>>> - If application doesn't provide this value, use the PMD max
>>>>>
>>>>> May cause a problem if the mbuf size is not enough for the PMD
>> maximum.
>>>>
>>>> OK, this is what I was missing, for this case I was thinking
>>>> max_rx_pkt_len will be used but you already explained that
>>>> application may want to use different mempools for LRO queues.
>>>>
>>> So , are you agree with the idea?
>>>
>>>> For this case shouldn't PMDs take the 'rxmode.max_lro_pkt_size' into
>>>> account and program the device accordingly (of course in LRO enabled
>>>> case) ?
>>>> This part seems missing and should be highlighted to other PMD
>> maintainers.
>>>
>>>
>>> Yes, you are right.
>>> PMDs must limit the LRO aggregated packet according to the new field,
>>> And it probably very hard for the patch introducer to understand how to do
>> it for each PMD.
>>>
>>> I think each new configuration requires other maintainers\developers to
>> adjust their own PMD code to the new configuration and it should be done in
>> limited time.
>>
>> Agree.
>> But experience showed that this synchronization is not as easy as it sounds,
>> whoever changing the interface/library says other PMDs should reflect the
>> change but most of the times other PMD maintainers not aware of it or if
>> they do they have other priorities for the release, so the changes should be
>> in a way to give more time to PMDs to adapt it and during this time library
>> change shouldn't break other PMDs.
>>
> 
> Yes.
> 
>>> My suggestion here:
>>> 1. To reserve the info field and the configuration field for rc2.(if
>>> it is critical not to break ABI for rc3) 2. To merge the ethdev patch in the
>> start of rc3.
>>> 3. Request each relevant PMD to adjust its PMD to the new configuration
>> for the end of rc3.
>>> 	Note: this should be small change and only for ~5 PMDs:
>>> 		a. Introduce the info field according to the device ability.
>>> 		b. For each LRO queue:
>>> 			Use the LRO max size configuration instead of the
>> current max rx pkt len configuration(looks like small condition).
>>>
>>> What do you think?
>>
>> There is already a v6 which only updates dev_info fields to have the
>> 'max_lro_pktlen' field, the PMD updates there also looks safe, so I think we
>> can go with it for rc2.
>>
> 
> Doesn’t make sense to expose the info field without the configuration.
> 
> 
>> For the configuration part, I suggest deferring it next release, which gives
>> more time for discussion and enough time for other PMDs to implement it.
>>
>>
>> And related configuration, right now devices already configured to limit the
>> packet size to 'max_rx_pkt_len', it can be an optimization to increase it to
>> 'max_lro_pkt_len' for the queues LRO is supported, why not make this
>> configuration more explicitly with specific API as Konstantin suggested [1],
>> this way it only affects the applications that are interested in and the PMDs
>> that want to support this.
>> Current implementation is under 'rte_eth_dev_configure()' which is used by
>> all DPDK applications and impact of changing it is much larger, also it makes
>> mandatory for applications to provide this config option when LRO enabled,
>> explicit API gives same result without making a mandatory config option.
>>
>> [1]
>> int rte_eth_dev_set_max_lro(uint16_t port_id, uint32_t lro);
> 
> Please see my answers to Konstantin regarding this topic.
> 
> 
> 
> One more option:
> In order to not break PMDs because of this feature:
> 0 in the capability field means, The PMD doesn't support LRO special limitation so if the application configuration is not the same like max_rx_pkt_len the validation will fail.
> 

I don't see how this avoids it being a mandatory field when LRO is enabled, am I
missing something? And the current implementation enforces it by failing
configure(); the effect on the applications is my first concern.
My second concern is when the application supplies the proper values but the PMD
doesn't do anything with them, without letting the application know.

That is why I think an explicit API makes this clear, and it is only required by
applications that want to use it.

Something similar can be done with the following; this also doesn't require both
application and PMD changes, wdyt?

ethdev, configure():
if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
  if (dev_conf->rxmode.max_lro_pktlen) {
    if (dev_info.max_lro_pktlen) {
      validate(rxmode.max_lro_pktlen, dev_info.max_lro_pktlen)
    } else if (dev_info.max_rx_pktlen) {
      validate(rxmode.max_lro_pktlen, dev_info.max_rx_pktlen)
    }
  }
}


in PMD:
if (LRO) {
	queue.max_pktlen = rxmode.max_lro_pktlen ?
		rxmode.max_lro_pktlen :
		rxmode.max_rx_pktlen;
}
  
Matan Azrad Nov. 11, 2019, 1:32 p.m. UTC | #24
From: Ferruh Yigit
> On 11/11/2019 11:33 AM, Matan Azrad wrote:
> >
> >
> > From: Ferruh Yigit
> >> On 11/9/2019 6:20 PM, Matan Azrad wrote:
> >>> Hi
> >>>
> >>> From: Ferruh Yigit
> >>>> On 11/8/2019 11:56 AM, Matan Azrad wrote:
> >>>>>
> >>>>>
> >>>>> From: Ferruh Yigit
> >>>>>> On 11/8/2019 10:10 AM, Matan Azrad wrote:
> >>>>>>>
> >>>>>>>
> >>>>>>> From: Ferruh Yigit
> >>>>>>>> On 11/8/2019 6:54 AM, Matan Azrad wrote:
> >>>>>>>>> Hi
> >>>>>>>>>
> >>>>>>>>> From: Ferruh Yigit
> >>>>>>>>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> >>>>>>>>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> >>>>>>>>>>>
> >>>>>>>>>> 	RTE_ETHER_MAX_LEN;
> >>>>>>>>>>>  	}
> >>>>>>>>>>>
> >>>>>>>>>>> +	/*
> >>>>>>>>>>> +	 * If LRO is enabled, check that the maximum
> aggregated
> >>>>>> packet
> >>>>>>>>>>> +	 * size is supported by the configured device.
> >>>>>>>>>>> +	 */
> >>>>>>>>>>> +	if (dev_conf->rxmode.offloads &
> >>>>>> DEV_RX_OFFLOAD_TCP_LRO) {
> >>>>>>>>>>> +		ret = check_lro_pkt_size(
> >>>>>>>>>>> +				port_id, dev_conf-
> >>>>>>>>>>> rxmode.max_lro_pkt_size,
> >>>>>>>>>>> +				dev_info.max_lro_pkt_size);
> >>>>>>>>>>> +		if (ret != 0)
> >>>>>>>>>>> +			goto rollback;
> >>>>>>>>>>> +	}
> >>>>>>>>>>> +
> >>>>>>>>>>
> >>>>>>>>>> This check forces applications that enable LRO to provide
> >>>>>>>> 'max_lro_pkt_size'
> >>>>>>>>>> config value.
> >>>>>>>>>
> >>>>>>>>> Yes.(we can break an API, we noticed it)
> >>>>>>>>
> >>>>>>>> I am not talking about API/ABI breakage, that part is OK.
> >>>>>>>> With this check, if the application requested LRO offload but
> >>>>>>>> not provided 'max_lro_pkt_size' value, device configuration will
> fail.
> >>>>>>>>
> >>>>>>> Yes
> >>>>>>>> Can there be a case application is good with whatever the PMD
> >>>>>>>> can support as max?
> >>>>>>> Yes can be - you know, we can do everything we want but it is
> >>>>>>> better to be
> >>>>>> consistent:
> >>>>>>> Due to the fact of Max rx pkt len field is mandatory for JUMBO
> >>>>>>> offload, max
> >>>>>> lro pkt len should be mandatory for LRO offload.
> >>>>>>>
> >>>>>>> So your question is actually why both, non-lro packets and LRO
> >>>>>>> packets max
> >>>>>> size are mandatory...
> >>>>>>>
> >>>>>>>
> >>>>>>> I think it should be important values for net applications
> management.
> >>>>>>> Also good for mbuf size managements.
> >>>>>>>
> >>>>>>>>>
> >>>>>>>>>> - Why it is mandatory now, how it was working before if it is
> >>>>>>>>>> mandatory value?
> >>>>>>>>>
> >>>>>>>>> It is the same as max_rx_pkt_len which is mandatory for jumbo
> >>>>>>>>> frame
> >>>>>>>> offload.
> >>>>>>>>> So now, when the user configures a LRO offload he must to set
> >>>>>>>>> max lro pkt
> >>>>>>>> len.
> >>>>>>>>> We don't want to confuse the user here with the max rx pkt len
> >>>>>>>> configurations and behaviors, they should be with same logic.
> >>>>>>>>>
> >>>>>>>>> This parameter defines well the LRO behavior.
> >>>>>>>>> Before this, each PMD took its own interpretation to what
> >>>>>>>>> should be the
> >>>>>>>> maximum size for LRO aggregated packets.
> >>>>>>>>> Now, the user must say what is his intension, and the ethdev
> >>>>>>>>> can limit it
> >>>>>>>> according to the device capability.
> >>>>>>>>> By this way, also, the PMD can organize\optimize its data-path
> >> more.
> >>>>>>>>> Also, the application can create different mempools for LRO
> >>>>>>>>> queues to
> >>>>>>>> allow bigger packet receiving for LRO traffic.
> >>>>>>>>>
> >>>>>>>>>> - What happens if PMD doesn't provide 'max_lro_pkt_size', so
> >>>>>>>>>> it is
> >>>> '0'?
> >>>>>>>>> Yes, you can see the feature description Dekel added.
> >>>>>>>>> This patch also updates all the PMDs support an LRO for non-0
> >> value.
> >>>>>>>>
> >>>>>>>> Of course I can see the updates Matan, my point is "What
> >>>>>>>> happens if PMD doesn't provide 'max_lro_pkt_size'",
> >>>>>>>> 1) There is no check for it right, so it is acceptable?
> >>>>>>>
> >>>>>>> There is check.
> >>>>>>> If the capability is 0, any non-zero configuration will fail.
> >>>>>>>
> >>>>>>>> 2) Are we making this filed mandatory to provide for PMDs, it
> >>>>>>>> is easy to make new fields mandatory for PMDs but is this
> >>>>>>>> really
> >>>> necessary?
> >>>>>>>
> >>>>>>> Yes, for consistence.
> >>>>>>>
> >>>>>>>>>
> >>>>>>>>> as same as max rx pkt len, no?
> >>>>>>>>>
> >>>>>>>>>> - What do you think setting 'max_lro_pkt_size' config value
> >>>>>>>>>> to what PMD provided if application doesn't provide it?
> >>>>>>>>> Same answers as above.
> >>>>>>>>>
> >>>>>>>>
> >>>>>>>> If application doesn't care the value, as it has been till now,
> >>>>>>>> and not provided explicit 'max_lro_pkt_size', why not ethdev
> >>>>>>>> level use the value provided by PMD instead of failing?
> >>>>>>>
> >>>>>>> Again, same question we can ask on max rx pkt len.
> >>>>>>>
> >>>>>>> Looks like the packet size is very important value which should
> >>>>>>> be set by
> >>>>>> the application.
> >>>>>>>
> >>>>>>> Previous applications have no option to configure it, so they
> >>>>>>> haven't
> >>>>>> configure it, (probably cover it somehow) I think it is our miss
> >>>>>> to supply this info.
> >>>>>>>
> >>>>>>> Let's do it in same way as we do max rx pkt len (as this patch
> >>>>>>> main
> >> idea).
> >>>>>>> Later, we can change both to other meaning.
> >>>>>>>
> >>>>>>
> >>>>>> I think it is not a good reason to introduce a new mandatory
> >>>>>> config option for application because of 'max_rx_pkt_len' does it.
> >>>>>
> >>>>> It is mandatory only if LRO offload is configured.
> >>>>>
> >>>>>> Will it work, if:
> >>>>>> - If application doesn't provide this value, use the PMD max
> >>>>>
> >>>>> May cause a problem if the mbuf size is not enough for the PMD
> >> maximum.
> >>>>
> >>>> OK, this is what I was missing, for this case I was thinking
> >>>> max_rx_pkt_len will be used but you already explained that
> >>>> application may want to use different mempools for LRO queues.
> >>>>
> >>> So , are you agree with the idea?
> >>>
> >>>> For this case shouldn't PMDs take the 'rxmode.max_lro_pkt_size'
> >>>> into account and program the device accordingly (of course in LRO
> >>>> enabled
> >>>> case) ?
> >>>> This part seems missing and should be highlighted to other PMD
> >> maintainers.
> >>>
> >>>
> >>> Yes, you are right.
> >>> PMDs must limit the LRO aggregated packet according to the new
> >>> field, And it probably very hard for the patch introducer to
> >>> understand how to do
> >> it for each PMD.
> >>>
> >>> I think each new configuration requires other maintainers\developers
> >>> to
> >> adjust their own PMD code to the new configuration and it should be
> >> done in limited time.
> >>
> >> Agree.
> >> But experience showed that this synchronization is not as easy as it
> >> sounds, whoever changing the interface/library says other PMDs should
> >> reflect the change but most of the times other PMD maintainers not
> >> aware of it or if they do they have other priorities for the release,
> >> so the changes should be in a way to give more time to PMDs to adapt
> >> it and during this time library change shouldn't break other PMDs.
> >>
> >
> > Yes.
> >
> >>> My suggestion here:
> >>> 1. To reserve the info field and the configuration field for rc2.(if
> >>> it is critical not to break ABI for rc3) 2. To merge the ethdev
> >>> patch in the
> >> start of rc3.
> >>> 3. Request each relevant PMD to adjust its PMD to the new
> >>> configuration
> >> for the end of rc3.
> >>> 	Note: this should be small change and only for ~5 PMDs:
> >>> 		a. Introduce the info field according to the device ability.
> >>> 		b. For each LRO queue:
> >>> 			Use the LRO max size configuration instead of the
> >> current max rx pkt len configuration(looks like small condition).
> >>>
> >>> What do you think?
> >>
> >> There is already a v6 which only updates dev_info fields to have the
> >> 'max_lro_pktlen' field, the PMD updates there also looks safe, so I
> >> think we can go with it for rc2.
> >>
> >
> > Doesn’t make sense to expose the info field without the configuration.
> >
> >
> >> For the configuration part, I suggest deferring it next release,
> >> which gives more time for discussion and enough time for other PMDs to
> implement it.
> >>
> >>
> >> And related configuration, right now devices already configured to
> >> limit the packet size to 'max_rx_pkt_len', it can be an optimization
> >> to increase it to 'max_lro_pkt_len' for the queues LRO is supported,
> >> why not make this configuration more explicitly with specific API as
> >> Konstantin suggested [1], this way it only affects the applications
> >> that are interested in and the PMDs that want to support this.
> >> Current implementation is under 'rte_eth_dev_configure()' which is
> >> used by all DPDK applications and impact of changing it is much
> >> larger, also it makes mandatory for applications to provide this
> >> config option when LRO enabled, explicit API gives same result without
> making a mandatory config option.
> >>
> >> [1]
> >> int rte_eth_dev_set_max_lro(uint16_t port_id, uint32_t lro);
> >
> > Please see my answers to Konstantin regarding this topic.
> >
> >
> >
> > One more option:
> > In order to not break PMDs because of this feature:
> > 0 in the capability field means, The PMD doesn't support LRO special
> limitation so if the application configuration is not the same like
> max_rx_pkt_len the validation will fail.
> >
> 
> I don't see how this avoids it being a mandatory field when LRO is enabled, am
> I missing something?

From the application side, this is mandatory, you're right, exactly like max_rx_pkt_len.

> And the current implementation enforces it by failing configure(); the effect
> on the applications is my first concern.

This is a small effect, as with any API change.
If an existing application wants to keep its current LRO behavior, it just needs to set max_lro_pkt_len = max_rx_pkt_len.
Do you think this is a big change? Why?


> My second concern is when the application supplies the proper values but the
> PMD doesn't do anything with them, without letting the application know.
> 

A PMD which doesn't change its info field to a value != 0, as I said, means that it doesn't support a special limited size for LRO queues.
When the PMD maintainers have time to support the feature, they just need to change the info value to != 0 and take max_lro_pkt_len into account in the configuration.
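Roughly, the per-PMD adjustment could look like this sketch (the 64KB capability is a made-up example):

	/* In the driver's dev_infos_get() callback: */
	dev_info->max_lro_pkt_size = 64 * 1024; /* hypothetical HW limit */

	/* In Rx queue setup, for LRO queues use the new field instead of
	 * max_rx_pkt_len when programming the HW:
	 */
	if (offloads & DEV_RX_OFFLOAD_TCP_LRO)
		max_len = dev->data->dev_conf.rxmode.max_lro_pkt_size;
	else
		max_len = dev->data->dev_conf.rxmode.max_rx_pkt_len;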

> That is why I think an explicit API makes this clear, and it is only required
> by applications that want to use it.


I think that exposing a new function API is not good because it introduces a different way to limit the Rx packet size:

For regular packets - configure it in the configuration struct.
For LRO packets - use a function to do it.

That is confusing and not intuitive from the user side.

Why not keep the convention consistent, so that
max_rx_pkt_len is mandatory (for the JUMBO offload) and in the config structure,
and also the new LRO conf is mandatory (for the LRO offload) and in the configuration structure.

So all the configurations limiting the Rx packet size are in the same place and done the same way.

> Something similar can be done with the following; this also doesn't require both
> application and PMD changes, wdyt?
> 
> ethdev, configure():
> if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
>   if (dev_conf->rxmode.max_lro_pktlen) {
>     if (dev_info.max_lro_pktlen) {
>       validate(rxmode.max_lro_pktlen, dev_info.max_lro_pktlen)
>     } else if (dev_info.max_rx_pktlen) {
>       validate(rxmode.max_lro_pktlen, dev_info.max_rx_pktlen)
>     }
>   }
> }
> 
> 
> in PMD:
> if (LRO) {
> 	queue.max_pktlen = rxmode.max_lro_pktlen ?
> 		rxmode.max_lro_pktlen :
> 		rxmode.max_rx_pktlen;
> }

Again, my only concern here is keeping the Rx packet size limitation consistent: mandatory for both LRO and non-LRO.
  
Ananyev, Konstantin Nov. 12, 2019, 6:31 p.m. UTC | #25
> -----Original Message-----
> From: Matan Azrad <matan@mellanox.com>
> Sent: Monday, November 11, 2019 8:01 AM
> Cc: dev@dpdk.org
> Subject: RE: [dpdk-dev] [PATCH v4 1/3] ethdev: support API to set max LRO packet size
> 
> 
> 
> From: Ananyev, Konstantin
> > >
> > > From: Ferruh Yigit
> > > > On 11/8/2019 11:56 AM, Matan Azrad wrote:
> > > > >
> > > > >
> > > > > From: Ferruh Yigit
> > > > >> On 11/8/2019 10:10 AM, Matan Azrad wrote:
> > > > >>>
> > > > >>>
> > > > >>> From: Ferruh Yigit
> > > > >>>> On 11/8/2019 6:54 AM, Matan Azrad wrote:
> > > > >>>>> Hi
> > > > >>>>>
> > > > >>>>> From: Ferruh Yigit
> > > > >>>>>> On 11/7/2019 12:35 PM, Dekel Peled wrote:
> > > > >>>>>>> @@ -1266,6 +1286,18 @@ struct rte_eth_dev *
> > > > >>>>>>>
> > > > >>>>>> 	RTE_ETHER_MAX_LEN;
> > > > >>>>>>>  	}
> > > > >>>>>>>
> > > > >>>>>>> +	/*
> > > > >>>>>>> +	 * If LRO is enabled, check that the maximum aggregated
> > > > >> packet
> > > > >>>>>>> +	 * size is supported by the configured device.
> > > > >>>>>>> +	 */
> > > > >>>>>>> +	if (dev_conf->rxmode.offloads &
> > > > >> DEV_RX_OFFLOAD_TCP_LRO) {
> > > > >>>>>>> +		ret = check_lro_pkt_size(
> > > > >>>>>>> +				port_id, dev_conf-
> > > > >>>>>>> rxmode.max_lro_pkt_size,
> > > > >>>>>>> +				dev_info.max_lro_pkt_size);
> > > > >>>>>>> +		if (ret != 0)
> > > > >>>>>>> +			goto rollback;
> > > > >>>>>>> +	}
> > > > >>>>>>> +
> > > > >>>>>>
> > > > >>>>>> This check forces applications that enable LRO to provide
> > > > >>>> 'max_lro_pkt_size'
> > > > >>>>>> config value.
> > > > >>>>>
> > > > >>>>> Yes.(we can break an API, we noticed it)
> > > > >>>>
> > > > >>>> I am not talking about API/ABI breakage, that part is OK.
> > > > >>>> With this check, if the application requested LRO offload but
> > > > >>>> not provided 'max_lro_pkt_size' value, device configuration will fail.
> > > > >>>>
> > > > >>> Yes
> > > > >>>> Can there be a case application is good with whatever the PMD
> > > > >>>> can support as max?
> > > > >>> Yes can be - you know, we can do everything we want but it is
> > > > >>> better to be
> > > > >> consistent:
> > > > >>> Due to the fact of Max rx pkt len field is mandatory for JUMBO
> > > > >>> offload, max
> > > > >> lro pkt len should be mandatory for LRO offload.
> > > > >>>
> > > > >>> So your question is actually why both, non-lro packets and LRO
> > > > >>> packets max
> > > > >> size are mandatory...
> > > > >>>
> > > > >>>
> > > > >>> I think it should be important values for net applications
> > management.
> > > > >>> Also good for mbuf size managements.
> > > > >>>
> > > > >>>>>
> > > > >>>>>> - Why it is mandatory now, how it was working before if it is
> > > > >>>>>> mandatory value?
> > > > >>>>>
> > > > >>>>> It is the same as max_rx_pkt_len which is mandatory for jumbo
> > > > >>>>> frame
> > > > >>>> offload.
> > > > >>>>> So now, when the user configures a LRO offload he must to set
> > > > >>>>> max lro pkt
> > > > >>>> len.
> > > > >>>>> We don't want to confuse the user here with the max rx pkt len
> > > > >>>> configurations and behaviors, they should be with same logic.
> > > > >>>>>
> > > > >>>>> This parameter defines the LRO behavior well.
> > > > >>>>> Before this, each PMD had its own interpretation of what the
> > > > >>>>> maximum size for LRO aggregated packets should be.
> > > > >>>>> Now, the user must state his intention, and the ethdev layer
> > > > >>>>> can limit it according to the device capability.
> > > > >>>>> This way, the PMD can also organize/optimize its data path better.
> > > > >>>>> Also, the application can create separate mempools for LRO
> > > > >>>>> queues, to allow receiving bigger packets for LRO traffic.
> > > > >>>>>
> > > > >>>>>> - What happens if a PMD doesn't provide 'max_lro_pkt_size', so
> > > > >>>>>> it is '0'?
> > > > >>>>> Yes, you can see the feature description Dekel added.
> > > > >>>>> This patch also updates all the PMDs that support LRO to report
> > > > >>>>> a non-zero value.
> > > > >>>>
> > > > >>>> Of course I can see the updates, Matan; my point is "What
> > > > >>>> happens if a PMD doesn't provide 'max_lro_pkt_size'?"
> > > > >>>> 1) There is no check for it, right, so it is acceptable?
> > > > >>>
> > > > >>> There is a check.
> > > > >>> If the capability is 0, any non-zero configuration will fail.
> > > > >>>
> > > > >>>> 2) Are we making this field mandatory for PMDs to provide? It
> > > > >>>> is easy to make new fields mandatory for PMDs, but is this
> > > > >>>> really necessary?
> > > > >>>
> > > > >>> Yes, for consistency.
> > > > >>>
> > > > >>>>>
> > > > >>>>> the same as max_rx_pkt_len, no?
> > > > >>>>>
> > > > >>>>>> - What do you think about setting the 'max_lro_pkt_size' config
> > > > >>>>>> value to what the PMD provided, if the application doesn't provide it?
> > > > >>>>> Same answers as above.
> > > > >>>>>
> > > > >>>>
> > > > >>>> If the application doesn't care about the value, as has been the
> > > > >>>> case till now, and has not provided an explicit 'max_lro_pkt_size',
> > > > >>>> why shouldn't the ethdev level use the value provided by the PMD
> > > > >>>> instead of failing?
> > > > >>>
> > > > >>> Again, the same question can be asked about max_rx_pkt_len.
> > > > >>>
> > > > >>> It looks like the packet size is a very important value which
> > > > >>> should be set by the application.
> > > > >>>
> > > > >>> Previous applications had no option to configure it, so they
> > > > >>> didn't configure it (and probably worked around it somehow); I
> > > > >>> think it was our miss not to supply this info.
> > > > >>>
> > > > >>> Let's do it the same way we do max_rx_pkt_len (this patch's main
> > > > >>> idea).
> > > > >>> Later, we can change the meaning of both.
> > > > >>>
> > > > >>
> > > > >> I think "'max_rx_pkt_len' does it" is not a good reason to
> > > > >> introduce a new mandatory config option for the application.
> > > > >
> > > > > It is mandatory only if LRO offload is configured.
> > > > >
> > > > >> Will it work if:
> > > > >> - when the application doesn't provide this value, the PMD max is used?
> > > > >
> > > > > It may cause a problem if the mbuf size is not enough for the PMD
> > > > > maximum.
> > > >
> > > > OK, this is what I was missing; for this case I was thinking
> > > > max_rx_pkt_len would be used, but you already explained that an
> > > > application may want to use different mempools for LRO queues.
> > > >
> > > So, do you agree with the idea?
> > >
> > > > For this case, shouldn't PMDs take 'rxmode.max_lro_pkt_size' into
> > > > account and program the device accordingly (in the LRO-enabled
> > > > case, of course)?
> > > > This part seems to be missing and should be highlighted to the
> > > > other PMD maintainers.
> > >
> > >
> > > Yes, you are right.
> > > PMDs must limit the LRO aggregated packet according to the new field,
> > > and it is probably very hard for the patch introducer to understand
> > > how to do it for each PMD.
> > >
> > > I think each new configuration requires the other maintainers/developers
> > > to adjust their own PMD code to the new configuration, and it should be
> > > done within a limited time.
> > >
> > > My suggestion here:
> > > 1. Reserve the info field and the configuration field for rc2 (if
> > > it is critical not to break the ABI in rc3).
> > > 2. Merge the ethdev patch at the start of rc3.
> > > 3. Request each relevant PMD maintainer to adjust the PMD to the new
> > > configuration by the end of rc3.
> > > 	Note: this should be a small change, and only for ~5 PMDs:
> > > 		a. Introduce the info field according to the device capability.
> > > 		b. For each LRO queue: use the LRO max size configuration
> > > 		   instead of the current max_rx_pkt_len configuration (a
> > > 		   small condition; see the sketch below).
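A minimal sketch of step (b), assuming a hypothetical PMD helper (the name my_pmd_rxq_max_size and the queue_offloads parameter are illustrative, not from any in-tree driver):

	/* Pick the per-queue max packet size during Rx queue setup:
	 * LRO queues are bounded by the new rxmode.max_lro_pkt_size field,
	 * non-LRO queues keep the existing rxmode.max_rx_pkt_len bound.
	 */
	static uint32_t
	my_pmd_rxq_max_size(struct rte_eth_dev *dev, uint64_t queue_offloads)
	{
		struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;

		if (queue_offloads & DEV_RX_OFFLOAD_TCP_LRO)
			return rxmode->max_lro_pkt_size; /* aggregated packet bound */
		return rxmode->max_rx_pkt_len; /* regular Rx packet bound */
	}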
> >
> > That definitely looks like a significant behavior change for existing apps
> > and PMDs, and I wonder what for?
> 
> There was a gap in the configuration:
> 
> It doesn't make sense to limit non-LRO queues to the same packet length as LRO queues:
> 	Naturally, LRO packets are significantly bigger (because of the HW aggregation); hence, the user may use bigger mbufs for the LRO
> packets, so potentially it is better to separate the mempools: one for the LRO queues with big mbufs and a second for the non-LRO queues
> with smaller mbufs, to optimize the memory usage (see the mempool sketch below).
> 	Since the user may want tail-room in the LRO mbuf, he may limit the LRO packet size to a smaller number than the mbuf size (minus
> HEADROOM), and for this reason, just as with the regular field (max_rx_pkt_len), a new field should be set for LRO queues.
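A minimal sketch of that mempool split, with purely illustrative element counts and data-room sizes (the pool names and the create_rx_pools helper are hypothetical):

	/* Two pools: small mbufs for regular Rx queues, big mbufs sized
	 * for aggregated packets on the LRO queues.
	 */
	static int
	create_rx_pools(struct rte_mempool **small, struct rte_mempool **lro)
	{
		*small = rte_pktmbuf_pool_create("rx_small", 8192, 256, 0,
				2048 + RTE_PKTMBUF_HEADROOM, rte_socket_id());
		*lro = rte_pktmbuf_pool_create("rx_lro", 4096, 256, 0,
				16384 + RTE_PKTMBUF_HEADROOM, rte_socket_id());
		return (*small != NULL && *lro != NULL) ? 0 : -ENOMEM;
	}

The application would then pass the big-mbuf pool to rte_eth_rx_queue_setup() only for the queues that carry LRO traffic.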
> 
> > Why can't we keep the max_rx_pkt_len semantics as they are right now, and
> > just add an optional ability to limit the max size of LRO aggregations?
> 
> What is the semantics of max_rx_pkt_len with regard to LRO packets? It is not clear from the documentation.

That's probably where the misunderstanding starts.
For me:
max_rx_pkt_len is the maximum size of the ethernet packet the NIC
will accept (whether LRO is enabled or not).
Now, if LRO is enabled, the NIC can accumulate multiple 'physical' packets
into one big 'virtual' one.
So when LRO is enabled, max_lro_size limits how big these
accumulated 'virtual' packets can be,
while max_rx_pkt_len still limits the max ethernet packet size the NIC will accept.

In what you suggest, it is not clear to me what the max_lro_size semantics would be.
Would it be the maximum size of the ethernet packet the NIC will accept (the equivalent of max_rx_pkt_len),
or would it be the accumulated 'virtual' packet size limit?
Or might it be something else?
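Restating that reading with purely illustrative values:

	/* Under this reading, with example values:
	 *   rxmode.max_rx_pkt_len   = 1518;   largest single ethernet frame
	 *                                     the NIC accepts from the wire
	 *   rxmode.max_lro_pkt_size = 65535;  largest 'virtual' packet the
	 *                                     NIC may build by LRO aggregation
	 * A 9000-byte frame is rejected by the first limit even though it is
	 * far below the second one.
	 */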

> 
> So this patch defines it well:
> Non-LRO queues should be limited to max_rx_pkt_len.
> LRO queues should be limited to max_lro_pkt_len.
> 
> The configuration of the Rx packet length should be handled consistently:
> max_rx_pkt_len is mandatory for the JUMBO offload => max_lro_pkt_len is mandatory for the LRO offload.
> 
> 
> Current applications that use LRO just need to configure the new field to the same value as the current max_rx_pkt_len if they want to keep
> the same behavior - really not a big change.
> If applications want to improve their memory usage, as I said above, the new fields allow that as well.
> 
> > > What do you think?
  

Patch

diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst
index 7a31cf7..2138ce3 100644
--- a/doc/guides/nics/features.rst
+++ b/doc/guides/nics/features.rst
@@ -193,10 +193,12 @@  LRO
 Supports Large Receive Offload.
 
 * **[uses]       rte_eth_rxconf,rte_eth_rxmode**: ``offloads:DEV_RX_OFFLOAD_TCP_LRO``.
+  ``dev_conf.rxmode.max_lro_pkt_size``.
 * **[implements] datapath**: ``LRO functionality``.
 * **[implements] rte_eth_dev_data**: ``lro``.
 * **[provides]   mbuf**: ``mbuf.ol_flags:PKT_RX_LRO``, ``mbuf.tso_segsz``.
 * **[provides]   rte_eth_dev_info**: ``rx_offload_capa,rx_queue_offload_capa:DEV_RX_OFFLOAD_TCP_LRO``.
+* **[provides]   rte_eth_dev_info**: ``max_lro_pkt_size``.
 
 
 .. _nic_features_tso:
diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst
index c10dc30..fdec33d 100644
--- a/doc/guides/rel_notes/deprecation.rst
+++ b/doc/guides/rel_notes/deprecation.rst
@@ -87,10 +87,6 @@  Deprecation Notices
   This scheme will allow PMDs to avoid lookup to internal ptype table on Rx and
   thereby improve Rx performance if application wishes do so.
 
-* ethdev: New 32-bit fields may be added for maximum LRO session size, in
-  struct ``rte_eth_dev_info`` for the port capability and in struct
-  ``rte_eth_rxmode`` for the port configuration.
-
 * cryptodev: support for using IV with all sizes is added, J0 still can
   be used but only when IV length in following structs ``rte_crypto_auth_xform``,
   ``rte_crypto_aead_xform`` is set to zero. When IV length is greater or equal
diff --git a/doc/guides/rel_notes/release_19_11.rst b/doc/guides/rel_notes/release_19_11.rst
index 23182d1..b2b788c 100644
--- a/doc/guides/rel_notes/release_19_11.rst
+++ b/doc/guides/rel_notes/release_19_11.rst
@@ -406,6 +406,14 @@  ABI Changes
   align the Ethernet header on receive and all known encapsulations
   preserve the alignment of the header.
 
+* ethdev: Added 32-bit fields for maximum LRO aggregated packet size, in
+  struct ``rte_eth_dev_info`` for the port capability and in struct
+  ``rte_eth_rxmode`` for the port configuration.
+  Application should use the new field in struct ``rte_eth_rxmode`` to configure
+  the requested size.
+  PMD should use the new field in struct ``rte_eth_dev_info`` to report the
+  supported port capability.
+
 
 Shared Library Versions
 -----------------------
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index b9b055e..741b897 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -519,6 +519,7 @@  static int bnxt_dev_info_get_op(struct rte_eth_dev *eth_dev,
 	/* Fast path specifics */
 	dev_info->min_rx_bufsize = 1;
 	dev_info->max_rx_pktlen = BNXT_MAX_PKT_LEN;
+	dev_info->max_lro_pkt_size = BNXT_MAX_PKT_LEN;
 
 	dev_info->rx_offload_capa = BNXT_DEV_RX_OFFLOAD_SUPPORT;
 	if (bp->flags & BNXT_FLAG_PTP_SUPPORTED)
diff --git a/drivers/net/hinic/hinic_pmd_ethdev.c b/drivers/net/hinic/hinic_pmd_ethdev.c
index 9f37a40..b33b2cf 100644
--- a/drivers/net/hinic/hinic_pmd_ethdev.c
+++ b/drivers/net/hinic/hinic_pmd_ethdev.c
@@ -727,6 +727,7 @@  static void hinic_get_speed_capa(struct rte_eth_dev *dev, uint32_t *speed_capa)
 	info->max_tx_queues  = nic_dev->nic_cap.max_sqs;
 	info->min_rx_bufsize = HINIC_MIN_RX_BUF_SIZE;
 	info->max_rx_pktlen  = HINIC_MAX_JUMBO_FRAME_SIZE;
+	info->max_lro_pkt_size = HINIC_MAX_JUMBO_FRAME_SIZE;
 	info->max_mac_addrs  = HINIC_MAX_UC_MAC_ADDRS;
 	info->min_mtu = HINIC_MIN_MTU_SIZE;
 	info->max_mtu = HINIC_MAX_MTU_SIZE;
diff --git a/drivers/net/ixgbe/ixgbe_ethdev.c b/drivers/net/ixgbe/ixgbe_ethdev.c
index 30c0379..c391f51 100644
--- a/drivers/net/ixgbe/ixgbe_ethdev.c
+++ b/drivers/net/ixgbe/ixgbe_ethdev.c
@@ -3814,6 +3814,7 @@  static int ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 	}
 	dev_info->min_rx_bufsize = 1024; /* cf BSIZEPACKET in SRRCTL register */
 	dev_info->max_rx_pktlen = 15872; /* includes CRC, cf MAXFRS register */
+	dev_info->max_lro_pkt_size = 15872;
 	dev_info->max_mac_addrs = hw->mac.num_rar_entries;
 	dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
 	dev_info->max_vfs = pci_dev->max_vfs;
@@ -3937,6 +3938,7 @@  static int ixgbevf_dev_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 	dev_info->max_tx_queues = (uint16_t)hw->mac.max_tx_queues;
 	dev_info->min_rx_bufsize = 1024; /* cf BSIZEPACKET in SRRCTL reg */
 	dev_info->max_rx_pktlen = 9728; /* includes CRC, cf MAXFRS reg */
+	dev_info->max_lro_pkt_size = 9728;
 	dev_info->max_mtu = dev_info->max_rx_pktlen - IXGBE_ETH_OVERHEAD;
 	dev_info->max_mac_addrs = hw->mac.num_rar_entries;
 	dev_info->max_hash_mac_addrs = IXGBE_VMDQ_NUM_UC_MAC;
diff --git a/drivers/net/ixgbe/ixgbe_vf_representor.c b/drivers/net/ixgbe/ixgbe_vf_representor.c
index dbbef29..28dfa3a 100644
--- a/drivers/net/ixgbe/ixgbe_vf_representor.c
+++ b/drivers/net/ixgbe/ixgbe_vf_representor.c
@@ -48,6 +48,7 @@ 
 	dev_info->min_rx_bufsize = 1024;
 	/**< Minimum size of RX buffer. */
 	dev_info->max_rx_pktlen = 9728;
+	dev_info->max_lro_pkt_size = 9728;
 	/**< Maximum configurable length of RX pkt. */
 	dev_info->max_rx_queues = IXGBE_VF_MAX_RX_QUEUES;
 	/**< Maximum number of RX queues. */
diff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h
index f644998..fdfc99b 100644
--- a/drivers/net/mlx5/mlx5.h
+++ b/drivers/net/mlx5/mlx5.h
@@ -203,6 +203,9 @@  struct mlx5_hca_attr {
 #define MLX5_LRO_SUPPORTED(dev) \
 	(((struct mlx5_priv *)((dev)->data->dev_private))->config.lro.supported)
 
+/* Maximal size of aggregated LRO packet. */
+#define MLX5_MAX_LRO_SIZE (UINT8_MAX * 256u)
+
 /* LRO configurations structure. */
 struct mlx5_lro_config {
 	uint32_t supported:1; /* Whether LRO is supported. */
diff --git a/drivers/net/mlx5/mlx5_ethdev.c b/drivers/net/mlx5/mlx5_ethdev.c
index c2bed2f..1443faa 100644
--- a/drivers/net/mlx5/mlx5_ethdev.c
+++ b/drivers/net/mlx5/mlx5_ethdev.c
@@ -606,6 +606,7 @@  struct ethtool_link_settings {
 	/* FIXME: we should ask the device for these values. */
 	info->min_rx_bufsize = 32;
 	info->max_rx_pktlen = 65536;
+	info->max_lro_pkt_size = MLX5_MAX_LRO_SIZE;
 	/*
 	 * Since we need one CQ per QP, the limit is the minimum number
 	 * between the two values.
diff --git a/drivers/net/mlx5/mlx5_rxq.c b/drivers/net/mlx5/mlx5_rxq.c
index 24d0eaa..9423e7b 100644
--- a/drivers/net/mlx5/mlx5_rxq.c
+++ b/drivers/net/mlx5/mlx5_rxq.c
@@ -1701,7 +1701,6 @@  struct mlx5_rxq_obj *
 	return 0;
 }
 
-#define MLX5_MAX_LRO_SIZE (UINT8_MAX * 256u)
 #define MLX5_MAX_TCP_HDR_OFFSET ((unsigned int)(sizeof(struct rte_ether_hdr) + \
 					sizeof(struct rte_vlan_hdr) * 2 + \
 					sizeof(struct rte_ipv6_hdr)))
diff --git a/drivers/net/qede/qede_ethdev.c b/drivers/net/qede/qede_ethdev.c
index 575982f..ccbb8a4 100644
--- a/drivers/net/qede/qede_ethdev.c
+++ b/drivers/net/qede/qede_ethdev.c
@@ -1277,6 +1277,7 @@  static int qede_dev_configure(struct rte_eth_dev *eth_dev)
 
 	dev_info->min_rx_bufsize = (uint32_t)QEDE_MIN_RX_BUFF_SIZE;
 	dev_info->max_rx_pktlen = (uint32_t)ETH_TX_MAX_NON_LSO_PKT_LEN;
+	dev_info->max_lro_pkt_size = (uint32_t)0x7FFF;
 	dev_info->rx_desc_lim = qede_rx_desc_lim;
 	dev_info->tx_desc_lim = qede_tx_desc_lim;
 
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 646de99..fa33c45 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -2435,6 +2435,7 @@  static void virtio_dev_free_mbufs(struct rte_eth_dev *dev)
 		RTE_MIN(hw->max_queue_pairs, VIRTIO_MAX_TX_QUEUES);
 	dev_info->min_rx_bufsize = VIRTIO_MIN_RX_BUFSIZE;
 	dev_info->max_rx_pktlen = VIRTIO_MAX_RX_PKTLEN;
+	dev_info->max_lro_pkt_size = VIRTIO_MAX_RX_PKTLEN;
 	dev_info->max_mac_addrs = VIRTIO_MAX_MAC_ADDRS;
 
 	host_features = VTPCI_OPS(hw)->get_features(hw);
diff --git a/drivers/net/vmxnet3/vmxnet3_ethdev.c b/drivers/net/vmxnet3/vmxnet3_ethdev.c
index d1faeaa..d18e8bc 100644
--- a/drivers/net/vmxnet3/vmxnet3_ethdev.c
+++ b/drivers/net/vmxnet3/vmxnet3_ethdev.c
@@ -1161,6 +1161,7 @@  static int eth_vmxnet3_pci_remove(struct rte_pci_device *pci_dev)
 	dev_info->max_tx_queues = VMXNET3_MAX_TX_QUEUES;
 	dev_info->min_rx_bufsize = 1518 + RTE_PKTMBUF_HEADROOM;
 	dev_info->max_rx_pktlen = 16384; /* includes CRC, cf MAXFRS register */
+	dev_info->max_lro_pkt_size = 16384;
 	dev_info->speed_capa = ETH_LINK_SPEED_10G;
 	dev_info->max_mac_addrs = VMXNET3_MAX_MAC_ADDRS;
 
diff --git a/lib/librte_ethdev/rte_ethdev.c b/lib/librte_ethdev/rte_ethdev.c
index 652c369..c642ba5 100644
--- a/lib/librte_ethdev/rte_ethdev.c
+++ b/lib/librte_ethdev/rte_ethdev.c
@@ -1136,6 +1136,26 @@  struct rte_eth_dev *
 	return name;
 }
 
+static inline int
+check_lro_pkt_size(uint16_t port_id, uint32_t config_size,
+		   uint32_t dev_info_size)
+{
+	int ret = 0;
+
+	if (config_size > dev_info_size) {
+		RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u "
+			       "> max allowed value %u\n", port_id, config_size,
+			       dev_info_size);
+		ret = -EINVAL;
+	} else if (config_size < RTE_ETHER_MIN_LEN) {
+		RTE_ETHDEV_LOG(ERR, "Ethdev port_id=%d max_lro_pkt_size %u "
+			       "< min allowed value %u\n", port_id, config_size,
+			       (unsigned int)RTE_ETHER_MIN_LEN);
+		ret = -EINVAL;
+	}
+	return ret;
+}
+
 int
 rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_q, uint16_t nb_tx_q,
 		      const struct rte_eth_conf *dev_conf)
@@ -1266,6 +1286,18 @@  struct rte_eth_dev *
 							RTE_ETHER_MAX_LEN;
 	}
 
+	/*
+	 * If LRO is enabled, check that the maximum aggregated packet
+	 * size is supported by the configured device.
+	 */
+	if (dev_conf->rxmode.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+		ret = check_lro_pkt_size(
+				port_id, dev_conf->rxmode.max_lro_pkt_size,
+				dev_info.max_lro_pkt_size);
+		if (ret != 0)
+			goto rollback;
+	}
+
 	/* Any requested offloading must be within its device capabilities */
 	if ((dev_conf->rxmode.offloads & dev_info.rx_offload_capa) !=
 	     dev_conf->rxmode.offloads) {
@@ -1770,6 +1802,18 @@  struct rte_eth_dev *
 		return -EINVAL;
 	}
 
+	/*
+	 * If LRO is enabled, check that the maximum aggregated packet
+	 * size is supported by the configured device.
+	 */
+	if (local_conf.offloads & DEV_RX_OFFLOAD_TCP_LRO) {
+		int ret = check_lro_pkt_size(port_id,
+				dev->data->dev_conf.rxmode.max_lro_pkt_size,
+				dev_info.max_lro_pkt_size);
+		if (ret != 0)
+			return ret;
+	}
+
 	ret = (*dev->dev_ops->rx_queue_setup)(dev, rx_queue_id, nb_rx_desc,
 					      socket_id, &local_conf, mp);
 	if (!ret) {
diff --git a/lib/librte_ethdev/rte_ethdev.h b/lib/librte_ethdev/rte_ethdev.h
index 44d77b3..1b76df5 100644
--- a/lib/librte_ethdev/rte_ethdev.h
+++ b/lib/librte_ethdev/rte_ethdev.h
@@ -395,6 +395,8 @@  struct rte_eth_rxmode {
 	/** The multi-queue packet distribution mode to be used, e.g. RSS. */
 	enum rte_eth_rx_mq_mode mq_mode;
 	uint32_t max_rx_pkt_len;  /**< Only used if JUMBO_FRAME enabled. */
+	/** Maximum allowed size of LRO aggregated packet. */
+	uint32_t max_lro_pkt_size;
 	uint16_t split_hdr_size;  /**< hdr buf size (header_split enabled).*/
 	/**
 	 * Per-port Rx offloads to be set using DEV_RX_OFFLOAD_* flags.
@@ -1218,6 +1220,8 @@  struct rte_eth_dev_info {
 	const uint32_t *dev_flags; /**< Device flags */
 	uint32_t min_rx_bufsize; /**< Minimum size of RX buffer. */
 	uint32_t max_rx_pktlen; /**< Maximum configurable length of RX pkt. */
+	/** Maximum configurable size of LRO aggregated packet. */
+	uint32_t max_lro_pkt_size;
 	uint16_t max_rx_queues; /**< Maximum number of RX queues. */
 	uint16_t max_tx_queues; /**< Maximum number of TX queues. */
 	uint32_t max_mac_addrs; /**< Maximum number of MAC addresses. */
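For completeness, a minimal sketch of how an application could consume the new fields once this patch is applied - it queries the reported capability and uses it as the configured limit; the queue counts are illustrative, error handling is trimmed, and the configure_port_with_lro name is hypothetical:

	#include <errno.h>
	#include <string.h>
	#include <rte_ethdev.h>

	static int
	configure_port_with_lro(uint16_t port_id)
	{
		struct rte_eth_dev_info dev_info;
		struct rte_eth_conf conf;
		int ret;

		memset(&conf, 0, sizeof(conf));
		ret = rte_eth_dev_info_get(port_id, &dev_info);
		if (ret != 0)
			return ret;
		if ((dev_info.rx_offload_capa & DEV_RX_OFFLOAD_TCP_LRO) == 0)
			return -ENOTSUP; /* device cannot aggregate */

		conf.rxmode.offloads = DEV_RX_OFFLOAD_TCP_LRO;
		/* Mandatory when LRO is requested; a value above the reported
		 * capability or below RTE_ETHER_MIN_LEN fails with -EINVAL. */
		conf.rxmode.max_lro_pkt_size = dev_info.max_lro_pkt_size;

		return rte_eth_dev_configure(port_id, 1, 1, &conf);
	}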