[5/5] net/hns3: select Tx prepare based on Tx offload

Message ID 1619594455-56787-6-git-send-email-humin29@huawei.com (mailing list archive)
State Accepted, archived
Delegated to: Ferruh Yigit
Headers
Series Features and bugfix for hns3 PMD

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation fail Compilation issues
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/github-robot success github build: passed
ci/intel-Testing success Testing PASS
ci/iol-abi-testing success Testing PASS
ci/iol-testing success Testing PASS
ci/iol-mellanox-Performance success Performance Testing PASS

Commit Message

humin (Q) April 28, 2021, 7:20 a.m. UTC
  From: Chengchang Tang <tangchengchang@huawei.com>

Tx prepare should be called only when necessary, to reduce its impact on
performance.

For some Tx offloads, users need to call rte_eth_tx_prepare() to invoke
the tx_prepare callback of the PMD. In this callback, the PMD adjusts the
packet based on the offloads requested by the user (e.g. for some PMDs,
the pseudo-header checksum must be calculated when Tx checksum offload is
enabled).

However, users cannot be expected to know the characteristics of every
PMD and NIC, so they cannot decide on their own when tx_prepare actually
needs to be called. Therefore, we should assume that the user calls
rte_eth_tx_prepare() whenever any Tx offload is used, to ensure that the
related features work properly. Whether packets need to be adjusted
should be determined by the PMD, which can make that judgment in the
dev_configure or queue_setup phase. When no adjustment is needed, the
tx_prepare pointer should be set to NULL to avoid the performance loss of
invoking rte_eth_tx_prepare().

In this patch, if tx_prepare is not required for the offloads used by
the user, the tx_prepare pointer is set to NULL.

Fixes: bba636698316 ("net/hns3: support Rx/Tx and related operations")
Cc: stable@dpdk.org

Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
---
 drivers/net/hns3/hns3_rxtx.c | 36 +++++++++++++++++++++++++++++++++---
 1 file changed, 33 insertions(+), 3 deletions(-)
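
The decision described above can be sketched as a small standalone C model
(this is an illustration, not the driver code: the `TX_OFFLOAD_*` flag
values and names below are hypothetical stand-ins for DPDK's
`DEV_TX_OFFLOAD_*` constants from rte_ethdev.h):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-ins for DPDK's DEV_TX_OFFLOAD_* bit flags. */
#define TX_OFFLOAD_MBUF_FAST_FREE (1ULL << 0)
#define TX_OFFLOAD_IPV4_CKSUM     (1ULL << 1)
#define TX_OFFLOAD_TCP_CKSUM      (1ULL << 2)
#define TX_OFFLOAD_TCP_TSO        (1ULL << 3)

/* Offloads that require per-packet fixups in tx_prepare
 * (e.g. pseudo-header checksum for checksum/TSO offload). */
#define TX_PREP_NEEDED_MASK \
	(TX_OFFLOAD_IPV4_CKSUM | TX_OFFLOAD_TCP_CKSUM | TX_OFFLOAD_TCP_TSO)

/* Mirrors the patch's decision: a prep callback is installed only when
 * one of the checksum/TSO offloads is configured on the port. */
bool tx_prep_needed(uint64_t tx_offloads)
{
	return (tx_offloads & TX_PREP_NEEDED_MASK) != 0;
}
```

With only DEV_TX_OFFLOAD_MBUF_FAST_FREE configured, for example, the
function returns false and the driver would leave tx_prepare as NULL.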
  

Comments

David Marchand May 7, 2021, 9:26 a.m. UTC | #1
On Wed, Apr 28, 2021 at 9:21 AM Min Hu (Connor) <humin29@huawei.com> wrote:
> [...]
> diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
> index 3881a72..7ac3a48 100644
> --- a/drivers/net/hns3/hns3_rxtx.c
> +++ b/drivers/net/hns3/hns3_rxtx.c
> @@ -4203,17 +4203,45 @@ hns3_tx_check_simple_support(struct rte_eth_dev *dev)
>         return (offloads == (offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE));
>  }
>
> +static bool
> +hns3_get_tx_prep_needed(struct rte_eth_dev *dev)
> +{
> +#ifdef RTE_LIBRTE_ETHDEV_DEBUG
> +       /* always perform tx_prepare when debug */
> +       return true;

dev is unused in this case.
http://mails.dpdk.org/archives/test-report/2021-May/193391.html
  
Ferruh Yigit May 7, 2021, 10:23 a.m. UTC | #2
On 5/7/2021 10:26 AM, David Marchand wrote:
> On Wed, Apr 28, 2021 at 9:21 AM Min Hu (Connor) <humin29@huawei.com> wrote:
>> [...]
>> +static bool
>> +hns3_get_tx_prep_needed(struct rte_eth_dev *dev)
>> +{
>> +#ifdef RTE_LIBRTE_ETHDEV_DEBUG
>> +       /* always perform tx_prepare when debug */
>> +       return true;
> 
> dev is unused in this case.
> http://mails.dpdk.org/archives/test-report/2021-May/193391.html
> 

Thanks David,

@Connor, can you please send a quick fix for it?
'RTE_SET_USED(dev);' should fix it.
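
For illustration, DPDK's RTE_SET_USED (from rte_common.h) is essentially a
void cast that silences unused-parameter warnings. A sketch of the
debug-build shape of the helper after the suggested fix, with the macro and
struct stubbed out so the snippet stands alone (names other than
RTE_SET_USED are stand-ins):

```c
#include <stdbool.h>

/* Stub of DPDK's RTE_SET_USED: marks a variable as deliberately unused. */
#define RTE_SET_USED(x) ((void)(x))

struct eth_dev; /* stand-in for struct rte_eth_dev */

/* Debug-build branch of the helper after the fix: dev is explicitly
 * marked used before the unconditional return, so -Werror builds pass. */
bool get_tx_prep_needed_debug(struct eth_dev *dev)
{
	RTE_SET_USED(dev);
	/* always perform tx_prepare when debug */
	return true;
}
```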
  

Patch

diff --git a/drivers/net/hns3/hns3_rxtx.c b/drivers/net/hns3/hns3_rxtx.c
index 3881a72..7ac3a48 100644
--- a/drivers/net/hns3/hns3_rxtx.c
+++ b/drivers/net/hns3/hns3_rxtx.c
@@ -4203,17 +4203,45 @@  hns3_tx_check_simple_support(struct rte_eth_dev *dev)
 	return (offloads == (offloads & DEV_TX_OFFLOAD_MBUF_FAST_FREE));
 }
 
+static bool
+hns3_get_tx_prep_needed(struct rte_eth_dev *dev)
+{
+#ifdef RTE_LIBRTE_ETHDEV_DEBUG
+	/* always perform tx_prepare when debug */
+	return true;
+#else
+#define HNS3_DEV_TX_CSKUM_TSO_OFFLOAD_MASK (\
+		DEV_TX_OFFLOAD_IPV4_CKSUM | \
+		DEV_TX_OFFLOAD_TCP_CKSUM | \
+		DEV_TX_OFFLOAD_UDP_CKSUM | \
+		DEV_TX_OFFLOAD_SCTP_CKSUM | \
+		DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM | \
+		DEV_TX_OFFLOAD_OUTER_UDP_CKSUM | \
+		DEV_TX_OFFLOAD_TCP_TSO | \
+		DEV_TX_OFFLOAD_VXLAN_TNL_TSO | \
+		DEV_TX_OFFLOAD_GRE_TNL_TSO | \
+		DEV_TX_OFFLOAD_GENEVE_TNL_TSO)
+
+	uint64_t tx_offload = dev->data->dev_conf.txmode.offloads;
+	if (tx_offload & HNS3_DEV_TX_CSKUM_TSO_OFFLOAD_MASK)
+		return true;
+
+	return false;
+#endif
+}
+
 static eth_tx_burst_t
 hns3_get_tx_function(struct rte_eth_dev *dev, eth_tx_prep_t *prep)
 {
 	struct hns3_adapter *hns = dev->data->dev_private;
 	bool vec_allowed, sve_allowed, simple_allowed;
-	bool vec_support;
+	bool vec_support, tx_prepare_needed;
 
 	vec_support = hns3_tx_check_vec_support(dev) == 0;
 	vec_allowed = vec_support && hns3_get_default_vec_support();
 	sve_allowed = vec_support && hns3_get_sve_support();
 	simple_allowed = hns3_tx_check_simple_support(dev);
+	tx_prepare_needed = hns3_get_tx_prep_needed(dev);
 
 	*prep = NULL;
 
@@ -4224,7 +4252,8 @@  hns3_get_tx_function(struct rte_eth_dev *dev, eth_tx_prep_t *prep)
 	if (hns->tx_func_hint == HNS3_IO_FUNC_HINT_SIMPLE && simple_allowed)
 		return hns3_xmit_pkts_simple;
 	if (hns->tx_func_hint == HNS3_IO_FUNC_HINT_COMMON) {
-		*prep = hns3_prep_pkts;
+		if (tx_prepare_needed)
+			*prep = hns3_prep_pkts;
 		return hns3_xmit_pkts;
 	}
 
@@ -4233,7 +4262,8 @@  hns3_get_tx_function(struct rte_eth_dev *dev, eth_tx_prep_t *prep)
 	if (simple_allowed)
 		return hns3_xmit_pkts_simple;
 
-	*prep = hns3_prep_pkts;
+	if (tx_prepare_needed)
+		*prep = hns3_prep_pkts;
 	return hns3_xmit_pkts;
 }
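
On the application side, the benefit comes from the ethdev fast path: when
the driver leaves its prep callback NULL, rte_eth_tx_prepare() returns
immediately without touching any packet. A simplified standalone model of
that dispatch (not the real rte_ethdev.h inline; struct and function names
here are illustrative):

```c
#include <stddef.h>
#include <stdint.h>

struct mbuf; /* stand-in for struct rte_mbuf */

/* Driver-provided per-packet fixup callback, as in eth_tx_prep_t. */
typedef uint16_t (*tx_prep_t)(void *txq, struct mbuf **pkts, uint16_t n);

struct txq_model {
	tx_prep_t prep; /* left NULL by the driver when no fixups needed */
};

/* Model of the rte_eth_tx_prepare() dispatch: NULL callback means all
 * packets are already valid and no per-packet work is done. */
uint16_t model_tx_prepare(struct txq_model *q, struct mbuf **pkts, uint16_t n)
{
	if (q->prep == NULL)
		return n;            /* fast path: nothing to adjust */
	return q->prep(q, pkts, n);  /* slow path: per-packet fixups */
}
```

This is why the patch sets *prep to NULL instead of installing
hns3_prep_pkts unconditionally: applications that call
rte_eth_tx_prepare() on every burst pay only a pointer check.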