[2/2] net/bnxt: fix TQM ring context memory sizing formulas

Message ID 20200506062710.22093-3-kalesh-anakkur.purayil@broadcom.com (mailing list archive)
State Accepted, archived
Delegated to: Ajit Khaparde
Series: bnxt bug fixes

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/travis-robot warning Travis build: failed
ci/Intel-compilation fail Compilation issues

Commit Message

Kalesh A P May 6, 2020, 6:27 a.m. UTC
From: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>

The current formulas to calculate the TQM slow path and fast path ring
context memory sizes are not quite correct. The TQM slow path entry is
array index 0 of ctx->tqm_mem[]; the other array entries are for the
fast path rings. Fix these sizes according to the firmware spec for
57500 and newer chips.

Fixes: cc5e26b8ef98 ("net/bnxt: increase TQM entry allocation")
Cc: stable@dpdk.org

Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Reviewed-by: Somnath Kotur <somnath.kotur@broadcom.com>
Reviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>
---
 drivers/net/bnxt/bnxt_ethdev.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)
  

Patch

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index d877ff6..dab291c 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -4672,6 +4672,7 @@  int bnxt_alloc_ctx_mem(struct bnxt *bp)
 	struct bnxt_ctx_pg_info *ctx_pg;
 	struct bnxt_ctx_mem_info *ctx;
 	uint32_t mem_size, ena, entries;
+	uint32_t entries_sp, min;
 	int i, rc;
 
 	rc = bnxt_hwrm_func_backing_store_qcaps(bp);
@@ -4719,15 +4720,20 @@  int bnxt_alloc_ctx_mem(struct bnxt *bp)
 	if (rc)
 		return rc;
 
-	entries = ctx->qp_max_l2_entries +
-		  ctx->vnic_max_vnic_entries +
-		  ctx->tqm_min_entries_per_ring;
+	min = ctx->tqm_min_entries_per_ring;
+
+	entries_sp = ctx->qp_max_l2_entries +
+		     ctx->vnic_max_vnic_entries +
+		     2 * ctx->qp_min_qp1_entries + min;
+	entries_sp = bnxt_roundup(entries_sp, ctx->tqm_entries_multiple);
+
+	entries = ctx->qp_max_l2_entries + ctx->qp_min_qp1_entries;
 	entries = bnxt_roundup(entries, ctx->tqm_entries_multiple);
-	entries = clamp_t(uint32_t, entries, ctx->tqm_min_entries_per_ring,
+	entries = clamp_t(uint32_t, entries, min,
 			  ctx->tqm_max_entries_per_ring);
 	for (i = 0, ena = 0; i < ctx->tqm_fp_rings_count + 1; i++) {
 		ctx_pg = ctx->tqm_mem[i];
-		ctx_pg->entries = entries;
+		ctx_pg->entries = i ? entries : entries_sp;
 		mem_size = ctx->tqm_entry_size * ctx_pg->entries;
 		rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, "tqm_mem", i);
 		if (rc)
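
For reference, the corrected sizing arithmetic can be exercised outside the
driver with a small standalone sketch. The qcaps-style values below are
hypothetical placeholders rather than real firmware output, and
roundup_u32()/clamp_u32() are local stand-ins for the driver's bnxt_roundup()
and clamp_t() helpers; only the formulas mirror the patch.

/*
 * Standalone sketch of the TQM ring sizing math introduced by this patch.
 * All qcaps values are made-up placeholders for illustration only.
 */
#include <stdint.h>
#include <inttypes.h>
#include <stdio.h>

/* Round 'v' up to the next multiple of 'mult' (mult > 0). */
static uint32_t roundup_u32(uint32_t v, uint32_t mult)
{
	return ((v + mult - 1) / mult) * mult;
}

/* Clamp 'v' into the inclusive range [lo, hi]. */
static uint32_t clamp_u32(uint32_t v, uint32_t lo, uint32_t hi)
{
	if (v < lo)
		return lo;
	if (v > hi)
		return hi;
	return v;
}

int main(void)
{
	/* Hypothetical backing-store qcaps values. */
	uint32_t qp_max_l2_entries = 4096;
	uint32_t qp_min_qp1_entries = 128;
	uint32_t vnic_max_vnic_entries = 1024;
	uint32_t tqm_min_entries_per_ring = 256;
	uint32_t tqm_max_entries_per_ring = 65536;
	uint32_t tqm_entries_multiple = 32;
	uint32_t tqm_entry_size = 16;		/* bytes per TQM context entry */
	uint32_t tqm_fp_rings_count = 8;

	/* Slow-path ring (tqm_mem[0]): L2 QPs + VNICs + 2 * QP1 QPs + minimum. */
	uint32_t entries_sp = qp_max_l2_entries + vnic_max_vnic_entries +
			      2 * qp_min_qp1_entries + tqm_min_entries_per_ring;
	entries_sp = roundup_u32(entries_sp, tqm_entries_multiple);

	/* Fast-path rings (tqm_mem[1..n]): L2 QPs + QP1 QPs, rounded and clamped. */
	uint32_t entries = qp_max_l2_entries + qp_min_qp1_entries;
	entries = roundup_u32(entries, tqm_entries_multiple);
	entries = clamp_u32(entries, tqm_min_entries_per_ring,
			    tqm_max_entries_per_ring);

	/* Index 0 gets the slow-path size, all other rings the fast-path size. */
	for (uint32_t i = 0; i < tqm_fp_rings_count + 1; i++) {
		uint32_t ring_entries = i ? entries : entries_sp;

		printf("tqm_mem[%" PRIu32 "]: %" PRIu32 " entries, %" PRIu32 " bytes\n",
		       i, ring_entries, ring_entries * tqm_entry_size);
	}
	return 0;
}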