[dpdk-dev] i40e: fix the issue of cannot using more than 1 poor for VMDq

Message ID 1447232205-2339-1-git-send-email-helin.zhang@intel.com (mailing list archive)
State Accepted, archived

Commit Message

Zhang, Helin Nov. 11, 2015, 8:56 a.m. UTC
  It fixes the issue of cannot using more than 1 poor for VMDq,
according to the queues left.

Fixes: 705b57f82054 ("i40e: enlarge the number of supported queues")

Signed-off-by: Helin Zhang <helin.zhang@intel.com>
---
 drivers/net/i40e/i40e_ethdev.c | 36 ++++++++++++++++++++++++++----------
 1 file changed, 26 insertions(+), 10 deletions(-)
  

Comments

Thomas Monjalon Nov. 11, 2015, 10:13 a.m. UTC | #1
Comments on the git message:

After the word "fix" in the title, the word "issue" is useless.
It's better to have a short title, easy to parse in the commit list.

What is a poor? Do you mean pool?

2015-11-11 16:56, Helin Zhang:
> It fixes the issue of cannot using more than 1 poor for VMDq,
> according to the queues left.

You forgot to describe the bug.
What happens when we use more than 1?

> Fixes: 705b57f82054 ("i40e: enlarge the number of supported queues")

Thanks for providing the Fixes line.

> Signed-off-by: Helin Zhang <helin.zhang@intel.com>
  
Thomas Monjalon Nov. 11, 2015, 6:37 p.m. UTC | #2
2015-11-11 16:56, Helin Zhang:
> It fixes the issue of cannot using more than 1 poor for VMDq,
> according to the queues left.
> 
> Fixes: 705b57f82054 ("i40e: enlarge the number of supported queues")
> 
> Signed-off-by: Helin Zhang <helin.zhang@intel.com>

Applied, thanks
  

Patch

diff --git a/drivers/net/i40e/i40e_ethdev.c b/drivers/net/i40e/i40e_ethdev.c
index ddf3d38..c5cd06f 100644
--- a/drivers/net/i40e/i40e_ethdev.c
+++ b/drivers/net/i40e/i40e_ethdev.c
@@ -3120,17 +3120,33 @@  i40e_pf_parameter_init(struct rte_eth_dev *dev)
 
 	/* VMDq queue/VSI allocation */
 	pf->vmdq_qp_offset = pf->vf_qp_offset + pf->vf_nb_qps * pf->vf_num;
+	pf->vmdq_nb_qps = 0;
+	pf->max_nb_vmdq_vsi = 0;
 	if (hw->func_caps.vmdq) {
-		pf->flags |= I40E_FLAG_VMDQ;
-		pf->vmdq_nb_qps = pf->vmdq_nb_qp_max;
-		pf->max_nb_vmdq_vsi = 1;
-		PMD_DRV_LOG(DEBUG, "%u VMDQ VSIs, %u queues per VMDQ VSI, "
-			    "in total %u queues", pf->max_nb_vmdq_vsi,
-			    pf->vmdq_nb_qps,
-			    pf->vmdq_nb_qps * pf->max_nb_vmdq_vsi);
-	} else {
-		pf->vmdq_nb_qps = 0;
-		pf->max_nb_vmdq_vsi = 0;
+		if (qp_count < hw->func_caps.num_tx_qp) {
+			pf->max_nb_vmdq_vsi = (hw->func_caps.num_tx_qp -
+				qp_count) / pf->vmdq_nb_qp_max;
+
+			/* Limit the maximum number of VMDq vsi to the maximum
+			 * ethdev can support
+			 */
+			pf->max_nb_vmdq_vsi = RTE_MIN(pf->max_nb_vmdq_vsi,
+				ETH_64_POOLS);
+			if (pf->max_nb_vmdq_vsi) {
+				pf->flags |= I40E_FLAG_VMDQ;
+				pf->vmdq_nb_qps = pf->vmdq_nb_qp_max;
+				PMD_DRV_LOG(DEBUG, "%u VMDQ VSIs, %u queues "
+					    "per VMDQ VSI, in total %u queues",
+					    pf->max_nb_vmdq_vsi,
+					    pf->vmdq_nb_qps, pf->vmdq_nb_qps *
+					    pf->max_nb_vmdq_vsi);
+			} else {
+			} else {
+				PMD_DRV_LOG(INFO, "Not enough queues left "
+					    "for VMDq");
+			}
+		} else {
+			PMD_DRV_LOG(INFO, "No queue left for VMDq");
+		}
 	}
 	qp_count += pf->vmdq_nb_qps * pf->max_nb_vmdq_vsi;
 	vsi_count += pf->max_nb_vmdq_vsi;