[v1,1/1] mempool/octeontx2: fix mempool creation failure

Message ID 20190708044731.16843-1-vattunuru@marvell.com (mailing list archive)
State Accepted, archived
Delegated to: Thomas Monjalon
Series [v1,1/1] mempool/octeontx2: fix mempool creation failure

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/mellanox-Performance-Testing success Performance Testing PASS
ci/intel-Performance-Testing success Performance Testing PASS
ci/Intel-compilation fail apply issues

Commit Message

Vamsi Krishna Attunuru July 8, 2019, 4:47 a.m. UTC
  From: Vamsi Attunuru <vattunuru@marvell.com>

Fix NPA pool range errors observed while creating a mempool; the issue
occurs when the mempool objects span multiple memory segments.

During mempool creation, the octeontx2 mempool driver populates the pool
range fields before enqueuing the buffers. If any enqueue or dequeue
operation reaches the NPA hardware before the range fields' HW context
update completes, those operations result in NPA range errors. This patch
adds a routine that reads back the HW context and verifies that the range
fields have been updated.

Fixes: e5271c507aeb ("mempool/octeontx2: add remaining slow path ops")

Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
---
 drivers/mempool/octeontx2/otx2_mempool_ops.c | 37 ++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)
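
For context, a minimal application-side sketch (hypothetical pool name and sizes, not part of this patch) of creating a mempool backed by the octeontx2 NPA ops; rte_mempool_populate_default() may place objects in several memory segments, which is the scenario the new range check guards against:

#include <rte_mempool.h>

/* Hypothetical illustration only: create an empty mempool, attach the
 * octeontx2 NPA ops (driver ops name assumed here) and populate it.
 * Population may draw objects from multiple memory segments, the case
 * that triggered the NPA range errors this patch fixes.
 */
static struct rte_mempool *
create_npa_pool(void)
{
	struct rte_mempool *mp;

	mp = rte_mempool_create_empty("sketch_pool", 8192, 2048,
				      256, 0, SOCKET_ID_ANY, 0);
	if (mp == NULL)
		return NULL;

	if (rte_mempool_set_ops_byname(mp, "octeontx2_npa", NULL) != 0 ||
	    rte_mempool_populate_default(mp) < 0) {
		rte_mempool_free(mp);
		return NULL;
	}

	return mp;
}

With the patch applied, the driver's populate callback returns -EBUSY when the read-back pool context does not yet reflect the programmed range, rather than letting later enqueue/dequeue operations hit NPA range errors.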
  

Comments

Jerin Jacob Kollanukkaran July 8, 2019, 5:25 a.m. UTC | #1
> -----Original Message-----
> From: vattunuru@marvell.com <vattunuru@marvell.com>
> Sent: Monday, July 8, 2019 10:18 AM
> To: dev@dpdk.org
> Cc: thomas@monjalon.net; Jerin Jacob Kollanukkaran <jerinj@marvell.com>;
> Vamsi Krishna Attunuru <vattunuru@marvell.com>
> Subject: [PATCH v1 1/1] mempool/octeontx2: fix mempool creation failure

Actually it is v2.

v2..v1:
# Fixed git-check-log.sh issues
# Updated git comments for "when this issue happens?"
# Change the name of the patch
# Add Fixes tag
 
> From: Vamsi Attunuru <vattunuru@marvell.com>
> 
> Fix npa pool range errors observed while creating mempool, this issue
> happens when mempool objects are from different mem segments.
> 
> During mempool creation, octeontx2 mempool driver populates pool range
> fields before enqueuing the buffers. If any enqueue or dequeue operation
> reaches npa hardware prior to the range field's HW context update, those
> ops result in npa range errors. Patch adds a routine to read back HW context
> and verify if range fields are updated or not.
> 
> Fixes: e5271c507aeb ("mempool/octeontx2: add remaining slow path ops")
> 
> Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>

Acked-by: Jerin Jacob <jerinj@marvell.com>
  
Thomas Monjalon July 8, 2019, 9:54 a.m. UTC | #2
08/07/2019 07:25, Jerin Jacob Kollanukkaran:
> From: vattunuru@marvell.com <vattunuru@marvell.com>
> Subject: [PATCH v1 1/1] mempool/octeontx2: fix mempool creation failure
> 
> Actually it is v2.

At least, it is well threaded with the previous v1 :)

> v2..v1:
> # Fixed git-check-log.sh issues
> # Updated git comments for "when this issue happens?"
> # Change the name of the patch
> # Add Fixes tag
>  
> > From: Vamsi Attunuru <vattunuru@marvell.com>
> > 
> > Fix npa pool range errors observed while creating mempool, this issue
> > happens when mempool objects are from different mem segments.
> > 
> > During mempool creation, octeontx2 mempool driver populates pool range
> > fields before enqueuing the buffers. If any enqueue or dequeue operation
> > reaches npa hardware prior to the range field's HW context update, those
> > ops result in npa range errors. Patch adds a routine to read back HW context
> > and verify if range fields are updated or not.
> > 
> > Fixes: e5271c507aeb ("mempool/octeontx2: add remaining slow path ops")
> > 
> > Signed-off-by: Vamsi Attunuru <vattunuru@marvell.com>
> 
> Acked-by: Jerin Jacob <jerinj@marvell.com>

Applied, thanks
  

Patch

diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
index e1764b0..a60a77a 100644
--- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
+++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
@@ -600,6 +600,40 @@ npa_lf_aura_pool_pair_free(struct otx2_npa_lf *lf, uint64_t aura_handle)
 }
 
 static int
+npa_lf_aura_range_update_check(uint64_t aura_handle)
+{
+	uint64_t aura_id = npa_lf_aura_handle_to_aura(aura_handle);
+	struct otx2_npa_lf *lf = otx2_npa_lf_obj_get();
+	struct npa_aura_lim *lim = lf->aura_lim;
+	struct npa_aq_enq_req *req;
+	struct npa_aq_enq_rsp *rsp;
+	struct npa_pool_s *pool;
+	int rc;
+
+	req  = otx2_mbox_alloc_msg_npa_aq_enq(lf->mbox);
+
+	req->aura_id = aura_id;
+	req->ctype = NPA_AQ_CTYPE_POOL;
+	req->op = NPA_AQ_INSTOP_READ;
+
+	rc = otx2_mbox_process_msg(lf->mbox, (void *)&rsp);
+	if (rc) {
+		otx2_err("Failed to get pool(0x%"PRIx64") context", aura_id);
+		return rc;
+	}
+
+	pool = &rsp->pool;
+
+	if (lim[aura_id].ptr_start != pool->ptr_start ||
+		lim[aura_id].ptr_end != pool->ptr_end) {
+		otx2_err("Range update failed on pool(0x%"PRIx64")", aura_id);
+		return -ERANGE;
+	}
+
+	return 0;
+}
+
+static int
 otx2_npa_alloc(struct rte_mempool *mp)
 {
 	uint32_t block_size, block_count;
@@ -724,6 +758,9 @@ otx2_npa_populate(struct rte_mempool *mp, unsigned int max_objs, void *vaddr,
 
 	npa_lf_aura_op_range_set(mp->pool_id, iova, iova + len);
 
+	if (npa_lf_aura_range_update_check(mp->pool_id) < 0)
+		return -EBUSY;
+
 	return rte_mempool_op_populate_default(mp, max_objs, vaddr, iova, len,
 					       obj_cb, obj_cb_arg);
 }