[v3] net/bonding: another fix to LACP mempool size

Message ID 20220305070922.308-1-gaoxiangliu0@163.com (mailing list archive)
State Superseded, archived
Delegated to: Ferruh Yigit
Series: [v3] net/bonding: another fix to LACP mempool size

Checks

Context Check Description
ci/checkpatch warning coding style issues
ci/Intel-compilation success Compilation OK
ci/iol-broadcom-Functional success Functional Testing PASS
ci/intel-Testing success Testing PASS
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-broadcom-Performance success Performance Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/github-robot: build success github build: passed
ci/iol-aarch64-unit-testing success Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-x86_64-compile-testing success Testing PASS
ci/iol-x86_64-unit-testing success Testing PASS
ci/iol-aarch64-compile-testing success Testing PASS
ci/iol-abi-testing success Testing PASS

Commit Message

Gaoxiang Liu March 5, 2022, 7:09 a.m. UTC
The following log message may appear after a slave is idle (or nearly
idle) for a few minutes:
"PMD: Failed to allocate LACP packet from pool".
And bond mode 4 negotiation may fail.

Problem: all mbufs from a slave's private pool (used exclusively for
transmitting LACPDUs) have been allocated and are still sitting in the
device's tx descriptor ring and in other cores' mempool caches.

Solution: ensure that each slave's tx (LACPDU) mempool owns more than
n-tx-queues * n-tx-descriptors + fwd_core_num *
per-core-mempool-flush-threshold mbufs.

Note that the LACP tx machine function is the only code that allocates
from a slave's private pool. It runs in the context of the interrupt
thread, and thus it has no mempool cache of its own.
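
For illustration only, a minimal sketch of the sizing described above;
the helper name is made up, a uniform per-queue descriptor count is
assumed, and the 1.5 factor mirrors the mempool library's internal
per-lcore cache flush-threshold multiplier:

#include <stdint.h>

#include <rte_common.h>
#include <rte_lcore.h>
#include <rte_mempool.h>

/* Cover every mbuf that can sit in the tx descriptor rings, plus the
 * mbufs each forwarding lcore's mempool cache may hold before it
 * flushes back to the common pool (about 1.5 * cache_size). The LACP
 * tx machine itself runs in the interrupt thread and has no cache. */
static uint32_t
lacpdu_pool_size(uint16_t nb_tx_queues, uint16_t nb_tx_desc)
{
	uint32_t cache_size = RTE_MIN(32, RTE_MEMPOOL_CACHE_MAX_SIZE);
	uint32_t size = (uint32_t)nb_tx_queues * nb_tx_desc;

	size += (uint32_t)(rte_lcore_count() * cache_size * 1.5);
	return size;
}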

Signed-off-by: Gaoxiang Liu <liugaoxiang@huawei.com>

---
v2:
* Fixed compile issues.

v3:
* Deleted duplicate code.
---
 drivers/net/bonding/rte_eth_bond_8023ad.c | 8 +++++---
 1 file changed, 5 insertions(+), 3 deletions(-)
  

Comments

humin (Q) March 8, 2022, 1:17 a.m. UTC | #1
Hi,

On 2022/3/5 15:09, Gaoxiang Liu wrote:
> The following log message may appear after a slave is idle (or nearly
> idle) for a few minutes:
> "PMD: Failed to allocate LACP packet from pool".
> And bond mode 4 negotiation may fail.
> 
> Problem: all mbufs from a slave's private pool (used exclusively for
> transmitting LACPDUs) have been allocated and are still sitting in the
> device's tx descriptor ring and in other cores' mempool caches.
> 
> Solution: ensure that each slave's tx (LACPDU) mempool owns more than
> n-tx-queues * n-tx-descriptors + fwd_core_num *
> per-core-mempool-flush-threshold mbufs.
How did you arrive at this formula for calculating 'total_tx_desc'?
I want to understand the reasoning behind it.
> 
> Note that the LACP tx machine function is the only code that allocates
> from a slave's private pool. It runs in the context of the interrupt
> thread, and thus it has no mempool cache of its own.
> 
> Signed-off-by: Gaoxiang Liu <liugaoxiang@huawei.com>
> 
> ---
> v2:
> * Fixed compile issues.
> 
> v3:
> * delete duplicate code.
> ---
>   drivers/net/bonding/rte_eth_bond_8023ad.c | 8 +++++---
>   1 file changed, 5 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
> index ca50583d62..d73038b018 100644
> --- a/drivers/net/bonding/rte_eth_bond_8023ad.c
> +++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
> @@ -1050,6 +1050,7 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
>   	uint32_t total_tx_desc;
>   	struct bond_tx_queue *bd_tx_q;
>   	uint16_t q_id;
> +	uint32_t cache_size;
>   
>   	/* Given slave mus not be in active list */
>   	RTE_ASSERT(find_slave_by_id(internals->active_slaves,
> @@ -1100,11 +1101,12 @@ bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
>   		total_tx_desc += bd_tx_q->nb_tx_desc;
>   	}
>   
> +	cache_size = RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
> +		32 : RTE_MEMPOOL_CACHE_MAX_SIZE;
> +	total_tx_desc = rte_lcore_count() * cache_size * 1.5;
According to your commit message, is this assignment wrong?
Should it be: 'total_tx_desc += rte_lcore_count() * cache_size * 1.5;'?
(See the worked example after the quoted diff.)

>   	snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_pool", slave_id);
>   	port->mbuf_pool = rte_pktmbuf_pool_create(mem_name, total_tx_desc,
> -		RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
> -			32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
> -		0, element_size, socket_id);
> +		cache_size, 0, element_size, socket_id);
>   
>   	/* Any memory allocation failure in initialization is critical because
>   	 * resources can't be free, so reinitialization is impossible. */
>
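
To illustrate the concern above with made-up numbers (not taken from the
patch): with 4 tx queues of 512 descriptors each, 8 forwarding lcores,
and cache_size = 32, the formula in the commit message gives
4 * 512 + 8 * 32 * 1.5 = 2048 + 384 = 2432 mbufs. With the '='
assignment as submitted, the pool would hold only 384 mbufs, and the
mbufs still sitting in the tx descriptor rings would no longer be
covered.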
  

Patch

diff --git a/drivers/net/bonding/rte_eth_bond_8023ad.c b/drivers/net/bonding/rte_eth_bond_8023ad.c
index ca50583d62..d73038b018 100644
--- a/drivers/net/bonding/rte_eth_bond_8023ad.c
+++ b/drivers/net/bonding/rte_eth_bond_8023ad.c
@@ -1050,6 +1050,7 @@  bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
 	uint32_t total_tx_desc;
 	struct bond_tx_queue *bd_tx_q;
 	uint16_t q_id;
+	uint32_t cache_size;
 
 	/* Given slave mus not be in active list */
 	RTE_ASSERT(find_slave_by_id(internals->active_slaves,
@@ -1100,11 +1101,12 @@  bond_mode_8023ad_activate_slave(struct rte_eth_dev *bond_dev,
 		total_tx_desc += bd_tx_q->nb_tx_desc;
 	}
 
+	cache_size = RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
+		32 : RTE_MEMPOOL_CACHE_MAX_SIZE;
+	total_tx_desc = rte_lcore_count() * cache_size * 1.5;
 	snprintf(mem_name, RTE_DIM(mem_name), "slave_port%u_pool", slave_id);
 	port->mbuf_pool = rte_pktmbuf_pool_create(mem_name, total_tx_desc,
-		RTE_MEMPOOL_CACHE_MAX_SIZE >= 32 ?
-			32 : RTE_MEMPOOL_CACHE_MAX_SIZE,
-		0, element_size, socket_id);
+		cache_size, 0, element_size, socket_id);
 
 	/* Any memory allocation failure in initialization is critical because
 	 * resources can't be free, so reinitialization is impossible. */