[dpdk-dev] net/bonding: improve non-ip packets RSS

Message ID 1479460122-18780-1-git-send-email-haifeng.lin@huawei.com (mailing list archive)
State Rejected, archived
Delegated to: Ferruh Yigit

Checks

Context                Check    Description
checkpatch/checkpatch  success  coding style OK

Commit Message

Linhaifeng Nov. 18, 2016, 9:08 a.m. UTC
Most Ethernet NICs do not support RSS for non-IP packets, so only
the first queue can be used to receive them. In this scenario an
LACP bond can use only one queue even if multiple queues are
configured.

We use the formula below to change the mapping between bond_qid
and slave_qid so that at least slave_num queues receive packets:

	slave_qid = (bond_qid + slave_id) % queue_num
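
For illustration, assuming two slaves with port ids 0 and 1 and two
RX queues per port (hypothetical values, just to make the arithmetic
concrete), this standalone sketch prints the resulting mapping. Each
bond RX queue drains queue 0 of a different slave, so non-IP traffic
that the NIC can only deliver to queue 0 is spread over both bond
queues:

#include <stdio.h>

int main(void)
{
	const int queue_num = 2;		/* RX queues per port */
	const int slave_ids[2] = { 0, 1 };	/* hypothetical slave port ids */
	int bond_qid, i;

	for (bond_qid = 0; bond_qid < queue_num; bond_qid++)
		for (i = 0; i < 2; i++)
			printf("bond rxq %d reads slave %d rxq %d\n",
			       bond_qid, slave_ids[i],
			       (bond_qid + slave_ids[i]) % queue_num);
	return 0;
}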

Signed-off-by: Haifeng Lin <haifeng.lin@huawei.com>
---
 drivers/net/bonding/rte_eth_bond_pmd.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)
  

Comments

Ferruh Yigit Dec. 8, 2016, 5:13 p.m. UTC | #1
On 11/18/2016 9:08 AM, Haifeng Lin wrote:
> Most Ethernet NICs do not support RSS for non-IP packets, so only
> the first queue can be used to receive them. In this scenario an
> LACP bond can use only one queue even if multiple queues are
> configured.
> 
> We use the formula below to change the mapping between bond_qid
> and slave_qid so that at least slave_num queues receive packets:
> 
> 	slave_qid = (bond_qid + slave_id) % queue_num
> 
> Signed-off-by: Haifeng Lin <haifeng.lin@huawei.com>
> ---

Reminder for the patch ...

<..>
  
Doherty, Declan Feb. 10, 2017, 4:30 p.m. UTC | #2
On 18/11/16 09:08, haifeng.lin@huawei.com (Haifeng Lin) wrote:
> Most Ethernet NICs do not support RSS for non-IP packets, so only
> the first queue can be used to receive them. In this scenario an
> LACP bond can use only one queue even if multiple queues are
> configured.
>
> We use the formula below to change the mapping between bond_qid
> and slave_qid so that at least slave_num queues receive packets:
>
> 	slave_qid = (bond_qid + slave_id) % queue_num
>
> Signed-off-by: Haifeng Lin <haifeng.lin@huawei.com>
> ---
>  drivers/net/bonding/rte_eth_bond_pmd.c | 6 +++++-
>  1 file changed, 5 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
> index 09ce7bf..8ad843a 100644
> --- a/drivers/net/bonding/rte_eth_bond_pmd.c
> +++ b/drivers/net/bonding/rte_eth_bond_pmd.c
> @@ -141,6 +141,8 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
>  	uint8_t collecting;  /* current slave collecting status */
>  	const uint8_t promisc = internals->promiscuous_en;
>  	uint8_t i, j, k;
> +	int slave_qid, bond_qid = bd_rx_q->queue_id;
> +	int queue_num = internals->nb_rx_queues;
>
>  	rte_eth_macaddr_get(internals->port_id, &bond_mac);
>  	/* Copy slave list to protect against slave up/down changes during tx
> @@ -154,7 +156,9 @@ bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
>  		collecting = ACTOR_STATE(&mode_8023ad_ports[slaves[i]], COLLECTING);
>
>  		/* Read packets from this slave */
> -		num_rx_total += rte_eth_rx_burst(slaves[i], bd_rx_q->queue_id,
> +		slave_qid = queue_num ? (bond_qid + slaves[i]) % queue_num :
> +				bond_qid;
> +		num_rx_total += rte_eth_rx_burst(slaves[i], slave_qid,
>  				&bufs[num_rx_total], nb_pkts - num_rx_total);
>
>  		for (k = j; k < 2 && k < num_rx_total; k++)
>

Nack. I think this could introduce unexpected behaviour, as packets
could then be read from a different slave queue than the queue id
specified by the calling function, whereas the expected behaviour is
a 1:1 queue mapping from bond queues to slave queues. If RSS is
needed for ethdevs which don't support it natively, I think the
appropriate solution is a software RSS implementation which can be
enabled at the slave ethdev level itself. I don't think the bonding
layer should be implementing this functionality.
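
As a rough sketch of that direction (illustration only, not code from
this thread): non-IP packets read from the slave's single usable
hardware queue could be hashed in software and sprayed across
per-worker rings, keeping the bond's 1:1 queue mapping intact.
rte_hash_crc() is an existing DPDK helper; spray_non_ip() and
NB_WORKER_QUEUES are hypothetical names, and struct ether_hdr is the
naming used by DPDK releases of this era.

#include <rte_mbuf.h>
#include <rte_ether.h>
#include <rte_hash_crc.h>

#define NB_WORKER_QUEUES 4	/* hypothetical: number of software rings */

/* Pick a worker queue for a packet the NIC could not RSS-hash (non-IP).
 * Hashing the L2 addresses keeps each src/dst MAC pair in order. */
static inline uint16_t
spray_non_ip(struct rte_mbuf *m)
{
	const struct ether_hdr *eth =
		rte_pktmbuf_mtod(m, const struct ether_hdr *);
	uint32_t hash;

	hash = rte_hash_crc(&eth->s_addr, sizeof(eth->s_addr), 0);
	hash = rte_hash_crc(&eth->d_addr, sizeof(eth->d_addr), hash);

	return (uint16_t)(hash % NB_WORKER_QUEUES);
}

The polling core would then enqueue each mbuf to the rte_ring selected
by the returned index, instead of the bonding PMD remapping queue ids.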
  

Patch

diff --git a/drivers/net/bonding/rte_eth_bond_pmd.c b/drivers/net/bonding/rte_eth_bond_pmd.c
index 09ce7bf..8ad843a 100644
--- a/drivers/net/bonding/rte_eth_bond_pmd.c
+++ b/drivers/net/bonding/rte_eth_bond_pmd.c
@@ -141,6 +141,8 @@  bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 	uint8_t collecting;  /* current slave collecting status */
 	const uint8_t promisc = internals->promiscuous_en;
 	uint8_t i, j, k;
+	int slave_qid, bond_qid = bd_rx_q->queue_id;
+	int queue_num = internals->nb_rx_queues;
 
 	rte_eth_macaddr_get(internals->port_id, &bond_mac);
 	/* Copy slave list to protect against slave up/down changes during tx
@@ -154,7 +156,9 @@  bond_ethdev_rx_burst_8023ad(void *queue, struct rte_mbuf **bufs,
 		collecting = ACTOR_STATE(&mode_8023ad_ports[slaves[i]], COLLECTING);
 
 		/* Read packets from this slave */
-		num_rx_total += rte_eth_rx_burst(slaves[i], bd_rx_q->queue_id,
+		slave_qid = queue_num ? (bond_qid + slaves[i]) % queue_num :
+				bond_qid;
+		num_rx_total += rte_eth_rx_burst(slaves[i], slave_qid,
 				&bufs[num_rx_total], nb_pkts - num_rx_total);
 
 		for (k = j; k < 2 && k < num_rx_total; k++)