From patchwork Mon Jul 8 04:47:31 2019
X-Patchwork-Submitter: Vamsi Krishna Attunuru
X-Patchwork-Id: 56205
X-Patchwork-Delegate: thomas@monjalon.net
Date: Mon, 8 Jul 2019 10:17:31 +0530
Message-ID: <20190708044731.16843-1-vattunuru@marvell.com>
In-Reply-To: <20190705103341.30219-1-vattunuru@marvell.com>
References: <20190705103341.30219-1-vattunuru@marvell.com>
Subject: [dpdk-dev] [PATCH v1 1/1] mempool/octeontx2: fix mempool creation failure
List-Id: DPDK patches and discussions

From: Vamsi Attunuru

Fix NPA pool range errors observed during mempool creation when the
mempool objects come from different memory segments. While populating a
mempool, the octeontx2 mempool driver writes the pool range fields
before enqueuing the buffers. If an enqueue or dequeue operation reaches
the NPA hardware before the range fields are updated in the hardware
context, that operation fails with a range error.

Add a routine that reads back the hardware context and verifies that the
range fields have been updated before buffers are enqueued.
Fixes: e5271c507aeb ("mempool/octeontx2: add remaining slow path ops")

Signed-off-by: Vamsi Attunuru
Acked-by: Jerin Jacob
---
 drivers/mempool/octeontx2/otx2_mempool_ops.c | 37 ++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)

diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
index e1764b0..a60a77a 100644
--- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
+++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
@@ -600,6 +600,40 @@ npa_lf_aura_pool_pair_free(struct otx2_npa_lf *lf, uint64_t aura_handle)
 }
 
 static int
+npa_lf_aura_range_update_check(uint64_t aura_handle)
+{
+	uint64_t aura_id = npa_lf_aura_handle_to_aura(aura_handle);
+	struct otx2_npa_lf *lf = otx2_npa_lf_obj_get();
+	struct npa_aura_lim *lim = lf->aura_lim;
+	struct npa_aq_enq_req *req;
+	struct npa_aq_enq_rsp *rsp;
+	struct npa_pool_s *pool;
+	int rc;
+
+	req = otx2_mbox_alloc_msg_npa_aq_enq(lf->mbox);
+
+	req->aura_id = aura_id;
+	req->ctype = NPA_AQ_CTYPE_POOL;
+	req->op = NPA_AQ_INSTOP_READ;
+
+	rc = otx2_mbox_process_msg(lf->mbox, (void *)&rsp);
+	if (rc) {
+		otx2_err("Failed to get pool(0x%"PRIx64") context", aura_id);
+		return rc;
+	}
+
+	pool = &rsp->pool;
+
+	if (lim[aura_id].ptr_start != pool->ptr_start ||
+	    lim[aura_id].ptr_end != pool->ptr_end) {
+		otx2_err("Range update failed on pool(0x%"PRIx64")", aura_id);
+		return -ERANGE;
+	}
+
+	return 0;
+}
+
+static int
 otx2_npa_alloc(struct rte_mempool *mp)
 {
 	uint32_t block_size, block_count;
@@ -724,6 +758,9 @@ otx2_npa_populate(struct rte_mempool *mp, unsigned int max_objs, void *vaddr,
 
 	npa_lf_aura_op_range_set(mp->pool_id, iova, iova + len);
 
+	if (npa_lf_aura_range_update_check(mp->pool_id) < 0)
+		return -EBUSY;
+
 	return rte_mempool_op_populate_default(mp, max_objs, vaddr, iova,
 					       len, obj_cb, obj_cb_arg);
 }
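
For reference, a minimal sketch of the application-side path that ends up in
otx2_npa_populate() and therefore exercises the new read-back check. It
assumes the driver registers its mempool ops under the name "octeontx2_npa"
and that the process has already gone through EAL initialization on an
octeontx2 platform; NUM_OBJS and OBJ_SIZE are illustrative values only, not
part of the patch.

#include <rte_common.h>
#include <rte_mempool.h>

#define NUM_OBJS 8192	/* illustrative object count */
#define OBJ_SIZE 2048	/* illustrative object size in bytes */

/*
 * Create an empty mempool, bind it to the octeontx2 NPA ops and populate it.
 * rte_mempool_populate_default() ends up calling the driver's populate
 * callback (otx2_npa_populate), which now verifies the range update in the
 * hardware context before the buffers are enqueued.
 */
static struct rte_mempool *
create_npa_pool(const char *name)
{
	struct rte_mempool *mp;
	int ret;

	mp = rte_mempool_create_empty(name, NUM_OBJS, OBJ_SIZE,
				      0, 0, SOCKET_ID_ANY, 0);
	if (mp == NULL)
		return NULL;

	/*
	 * "octeontx2_npa" is assumed to be the ops name registered by this
	 * driver; adjust if the platform registers a different one.
	 */
	ret = rte_mempool_set_ops_byname(mp, "octeontx2_npa", NULL);
	if (ret < 0)
		goto fail;

	/*
	 * A -EBUSY return here would indicate that the range read-back
	 * check in otx2_npa_populate() failed.
	 */
	ret = rte_mempool_populate_default(mp);
	if (ret < 0)
		goto fail;

	return mp;

fail:
	rte_mempool_free(mp);
	return NULL;
}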