From patchwork Fri Jun 28 07:49:45 2019
From: Pavan Nikhilesh Bhagavatula <pbhagavatula@marvell.com>
Date: Fri, 28 Jun 2019 13:19:45 +0530
Message-ID: <20190628075024.404-7-pbhagavatula@marvell.com>
In-Reply-To: <20190628075024.404-1-pbhagavatula@marvell.com>
References: <20190628075024.404-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v2 06/44] event/octeontx2: allocate event inflight buffers

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

Allocate buffers in DRAM that hold inflight events.
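The enqueue-side check that these buffers enable lands later in this series; below is a minimal sketch of the intended add-work gating, using only the fc_mem and xaq_lmt fields introduced by this patch (the exact comparison in the final worker code may differ):

	/* Sketch: before issuing ADD_WORK for a new event, compare the
	 * NPA-maintained in-flight count (hardware writes it to the
	 * fc_addr/fc_mem location set up below) against the limit
	 * computed in sso_xaq_allocate().
	 */
	if (dev->xaq_lmt <= *dev->fc_mem)
		return 0;	/* XAQ exhausted: reject the new event */
	/* Otherwise there is headroom; proceed with ADD_WORK. */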
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 drivers/event/octeontx2/Makefile     |   2 +-
 drivers/event/octeontx2/otx2_evdev.c | 116 ++++++++++++++++++++++++++-
 drivers/event/octeontx2/otx2_evdev.h |   8 ++
 3 files changed, 124 insertions(+), 2 deletions(-)

diff --git a/drivers/event/octeontx2/Makefile b/drivers/event/octeontx2/Makefile
index 36f0b2b12..b3c3beccb 100644
--- a/drivers/event/octeontx2/Makefile
+++ b/drivers/event/octeontx2/Makefile
@@ -33,7 +33,7 @@ LIBABIVER := 1
 SRCS-$(CONFIG_RTE_LIBRTE_PMD_OCTEONTX2_EVENTDEV) += otx2_evdev.c

 LDLIBS += -lrte_eal -lrte_bus_pci -lrte_pci
-LDLIBS += -lrte_eventdev
+LDLIBS += -lrte_mempool -lrte_eventdev -lrte_mbuf
 LDLIBS += -lrte_common_octeontx2 -lrte_mempool_octeontx2

 include $(RTE_SDK)/mk/rte.lib.mk

diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index 2290598d0..fc4dbda0a 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -8,6 +8,7 @@
 #include <rte_common.h>
 #include <rte_eal.h>
 #include <rte_eventdev_pmd_pci.h>
+#include <rte_mbuf_pool_ops.h>
 #include <rte_pci.h>

 #include "otx2_evdev.h"
@@ -203,6 +204,107 @@ sso_configure_queues(const struct rte_eventdev *event_dev)
 	return rc;
 }

+static int
+sso_xaq_allocate(struct otx2_sso_evdev *dev)
+{
+	const struct rte_memzone *mz;
+	struct npa_aura_s *aura;
+	static int reconfig_cnt;
+	char pool_name[RTE_MEMZONE_NAMESIZE];
+	uint32_t xaq_cnt;
+	int rc;
+
+	if (dev->xaq_pool)
+		rte_mempool_free(dev->xaq_pool);
+
+	/* Allocate memory for the add-work backpressure counter. */
+	mz = rte_memzone_lookup(OTX2_SSO_FC_NAME);
+	if (mz == NULL)
+		mz = rte_memzone_reserve_aligned(OTX2_SSO_FC_NAME,
+						 OTX2_ALIGN +
+						 sizeof(struct npa_aura_s),
+						 rte_socket_id(),
+						 RTE_MEMZONE_IOVA_CONTIG,
+						 OTX2_ALIGN);
+	if (mz == NULL) {
+		otx2_err("Failed to allocate mem for fcmem");
+		return -ENOMEM;
+	}
+
+	dev->fc_iova = mz->iova;
+	dev->fc_mem = mz->addr;
+
+	aura = (struct npa_aura_s *)((uintptr_t)dev->fc_mem + OTX2_ALIGN);
+	memset(aura, 0, sizeof(struct npa_aura_s));
+
+	aura->fc_ena = 1;
+	aura->fc_addr = dev->fc_iova;
+	aura->fc_hyst_bits = 0; /* Store count on all updates */
+
+	/* Taken from HRM 14.3.3(4) */
+	xaq_cnt = dev->nb_event_queues * OTX2_SSO_XAQ_CACHE_CNT;
+	xaq_cnt += (dev->iue / dev->xae_waes) +
+		   (OTX2_SSO_XAQ_SLACK * dev->nb_event_queues);
+
+	otx2_sso_dbg("Configuring %d xaq buffers", xaq_cnt);
+	/* Set up the XAQ pool based on the number of event queues. */
+	snprintf(pool_name, sizeof(pool_name), "otx2_xaq_buf_pool_%d",
+		 reconfig_cnt);
+	dev->xaq_pool = (void *)rte_mempool_create_empty(pool_name,
+			xaq_cnt, dev->xaq_buf_size, 0, 0,
+			rte_socket_id(), 0);
+
+	if (dev->xaq_pool == NULL) {
+		otx2_err("Unable to create empty mempool.");
+		rte_memzone_free(mz);
+		return -ENOMEM;
+	}
+
+	rc = rte_mempool_set_ops_byname(dev->xaq_pool,
+					rte_mbuf_platform_mempool_ops(), aura);
+	if (rc != 0) {
+		otx2_err("Unable to set xaqpool ops.");
+		goto alloc_fail;
+	}
+
+	rc = rte_mempool_populate_default(dev->xaq_pool);
+	if (rc < 0) {
+		otx2_err("Unable to populate xaqpool.");
+		goto alloc_fail;
+	}
+	reconfig_cnt++;
+	/* When SW does addwork (enqueue), check whether there is space in
+	 * the XAQ by comparing fc_addr above against the xaq_lmt calculated
+	 * below. Keep a minimum headroom of (OTX2_SSO_XAQ_SLACK / 2) per
+	 * queue so that SSO can prefetch XAQ buffers into its cache even
+	 * before enqueue is called.
+	 */
+	dev->xaq_lmt = xaq_cnt - (OTX2_SSO_XAQ_SLACK / 2 *
+				  dev->nb_event_queues);
+	dev->nb_xaq_cfg = xaq_cnt;
+
+	return 0;
+alloc_fail:
+	rte_mempool_free(dev->xaq_pool);
+	rte_memzone_free(mz);
+
+	return rc;
+}
+
+static int
+sso_ggrp_alloc_xaq(struct otx2_sso_evdev *dev)
+{
+	struct otx2_mbox *mbox = dev->mbox;
+	struct sso_hw_setconfig *req;
+
+	otx2_sso_dbg("Configuring XAQ for GGRPs");
+	req = otx2_mbox_alloc_msg_sso_hw_setconfig(mbox);
+	req->npa_pf_func = otx2_npa_pf_func_get();
+	req->npa_aura_id = npa_lf_aura_handle_to_aura(dev->xaq_pool->pool_id);
+	req->hwgrps = dev->nb_event_queues;
+
+	return otx2_mbox_process(mbox);
+}
+
 static void
 sso_lf_teardown(struct otx2_sso_evdev *dev,
 		enum otx2_sso_lf_type lf_type)
@@ -288,11 +390,23 @@ otx2_sso_configure(const struct rte_eventdev *event_dev)
 		goto teardown_hws;
 	}

+	if (sso_xaq_allocate(dev) < 0) {
+		rc = -ENOMEM;
+		goto teardown_hwggrp;
+	}
+
+	rc = sso_ggrp_alloc_xaq(dev);
+	if (rc < 0) {
+		otx2_err("Failed to alloc xaq to ggrp %d", rc);
+		goto teardown_hwggrp;
+	}
+
 	dev->configured = 1;
 	rte_mb();

 	return 0;
-
+teardown_hwggrp:
+	sso_lf_teardown(dev, SSO_LF_GGRP);
 teardown_hws:
 	sso_lf_teardown(dev, SSO_LF_GWS);
 	dev->nb_event_queues = 0;

diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index b46402771..375640bca 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -17,6 +17,9 @@
 #define OTX2_SSO_MAX_VHGRP	RTE_EVENT_MAX_QUEUES_PER_DEV
 #define OTX2_SSO_MAX_VHWS	(UINT8_MAX)
+#define OTX2_SSO_FC_NAME	"otx2_evdev_xaq_fc"
+#define OTX2_SSO_XAQ_SLACK	(8)
+#define OTX2_SSO_XAQ_CACHE_CNT	(0x7)

 /* SSO LF register offsets (BAR2) */
 #define SSO_LF_GGRP_OP_ADD_WORK0	(0x0ull)
@@ -54,6 +57,11 @@ struct otx2_sso_evdev {
 	uint32_t min_dequeue_timeout_ns;
 	uint32_t max_dequeue_timeout_ns;
 	int32_t max_num_events;
+	uint64_t *fc_mem;
+	uint64_t xaq_lmt;
+	uint64_t nb_xaq_cfg;
+	rte_iova_t fc_iova;
+	struct rte_mempool *xaq_pool;
 	/* HW const */
 	uint32_t xae_waes;
 	uint32_t xaq_buf_size;
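As a worked example of the sizing in sso_xaq_allocate() (illustrative numbers only; dev->iue and dev->xae_waes are hardware constants from the "HW const" section of the struct): with 2 event queues, iue = 4096 and xae_waes = 16,

	xaq_cnt = 2 * 0x7 + (4096 / 16) + (8 * 2) = 14 + 256 + 16 = 286
	xaq_lmt = 286 - (8 / 2) * 2 = 278

so enqueue would be throttled once 278 of the 286 XAQ buffers are in flight, leaving the slack headroom for SSO's internal XAQ caching.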