From patchwork Fri Jun 28 18:23:41 2019
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 55635
X-Patchwork-Delegate: jerinj@marvell.com
From: Pavan Nikhilesh <pbhagavatula@marvell.com>
To: Pavan Nikhilesh, John McNamara, Marko Kovacevic
Cc: dev@dpdk.org
Date: Fri, 28 Jun 2019 23:53:41 +0530
Message-ID: <20190628182354.228-31-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20190628182354.228-1-pbhagavatula@marvell.com>
References: <20190628182354.228-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v3 30/42] event/octeontx2: add devargs to disable NPA

From: Pavan Nikhilesh <pbhagavatula@marvell.com>

If the chunks are allocated from the NPA, TIM can automatically free
them while traversing the list of chunks. Add a devargs option to
disable the NPA and manage chunks through a software mempool instead.
Example:

	--dev "0002:0e:00.0,tim_disable_npa=1"

Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
---
 doc/guides/eventdevs/octeontx2.rst       |  9 +++
 drivers/event/octeontx2/otx2_tim_evdev.c | 82 +++++++++++++++++-------
 drivers/event/octeontx2/otx2_tim_evdev.h |  3 +
 3 files changed, 71 insertions(+), 23 deletions(-)

diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index 98d0dfb6f..d24f81629 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -94,6 +94,15 @@ Runtime Config Options

    --dev "0002:0e:00.0,selftest=1"

+- ``TIM disable NPA``
+
+  By default, TIM chunks are allocated from the NPA, which lets TIM
+  free them automatically while traversing the list of chunks. The
+  ``tim_disable_npa`` devargs parameter disables the NPA and uses a
+  software mempool to manage chunks instead. For example::
+
+    --dev "0002:0e:00.0,tim_disable_npa=1"
+
 Debugging Options
 ~~~~~~~~~~~~~~~~~

diff --git a/drivers/event/octeontx2/otx2_tim_evdev.c b/drivers/event/octeontx2/otx2_tim_evdev.c
index a0953bb49..4b9816676 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.c
+++ b/drivers/event/octeontx2/otx2_tim_evdev.c
@@ -2,6 +2,7 @@
  * Copyright(C) 2019 Marvell International Ltd.
  */

+#include <rte_kvargs.h>
 #include <rte_malloc.h>
 #include <rte_mbuf_pool_ops.h>

@@ -77,33 +78,45 @@ tim_chnk_pool_create(struct otx2_tim_ring *tim_ring,
 	if (cache_sz > RTE_MEMPOOL_CACHE_MAX_SIZE)
 		cache_sz = RTE_MEMPOOL_CACHE_MAX_SIZE;

-	/* NPA need not have cache as free is not visible to SW */
-	tim_ring->chunk_pool = rte_mempool_create_empty(pool_name,
-							tim_ring->nb_chunks,
-							tim_ring->chunk_sz,
-							0, 0, rte_socket_id(),
-							mp_flags);
+	if (!tim_ring->disable_npa) {
+		/* NPA need not have cache as free is not visible to SW */
+		tim_ring->chunk_pool = rte_mempool_create_empty(pool_name,
+				tim_ring->nb_chunks, tim_ring->chunk_sz,
+				0, 0, rte_socket_id(), mp_flags);

-	if (tim_ring->chunk_pool == NULL) {
-		otx2_err("Unable to create chunkpool.");
-		return -ENOMEM;
-	}
+		if (tim_ring->chunk_pool == NULL) {
+			otx2_err("Unable to create chunkpool.");
+			return -ENOMEM;
+		}

-	rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool,
-					rte_mbuf_platform_mempool_ops(), NULL);
-	if (rc < 0) {
-		otx2_err("Unable to set chunkpool ops");
-		goto free;
-	}
+		rc = rte_mempool_set_ops_byname(tim_ring->chunk_pool,
+						rte_mbuf_platform_mempool_ops(),
+						NULL);
+		if (rc < 0) {
+			otx2_err("Unable to set chunkpool ops");
+			goto free;
+		}

-	rc = rte_mempool_populate_default(tim_ring->chunk_pool);
-	if (rc < 0) {
-		otx2_err("Unable to set populate chunkpool.");
-		goto free;
+		rc = rte_mempool_populate_default(tim_ring->chunk_pool);
+		if (rc < 0) {
+			otx2_err("Unable to populate chunkpool.");
+			goto free;
+		}
+		tim_ring->aura = npa_lf_aura_handle_to_aura(
+				tim_ring->chunk_pool->pool_id);
+		tim_ring->ena_dfb = 0;
+	} else {
+		tim_ring->chunk_pool = rte_mempool_create(pool_name,
+				tim_ring->nb_chunks, tim_ring->chunk_sz,
+				cache_sz, 0, NULL, NULL, NULL, NULL,
+				rte_socket_id(),
+				mp_flags);
+		if (tim_ring->chunk_pool == NULL) {
+			otx2_err("Unable to create chunkpool.");
+			return -ENOMEM;
+		}
+		tim_ring->ena_dfb = 1;
 	}
-	tim_ring->aura = npa_lf_aura_handle_to_aura(
-			tim_ring->chunk_pool->pool_id);
-	tim_ring->ena_dfb = 0;

 	return 0;

@@ -229,6 +242,8 @@ otx2_tim_ring_create(struct rte_event_timer_adapter *adptr)
 	tim_ring->nb_bkts = (tim_ring->max_tout / tim_ring->tck_nsec);
 	tim_ring->chunk_sz = OTX2_TIM_RING_DEF_CHUNK_SZ;
 	nb_timers = rcfg->nb_timers;
+	tim_ring->disable_npa = dev->disable_npa;
+
 	tim_ring->nb_chunks = nb_timers / OTX2_TIM_NB_CHUNK_SLOTS(
 							tim_ring->chunk_sz);
 	tim_ring->nb_chunk_slots = OTX2_TIM_NB_CHUNK_SLOTS(tim_ring->chunk_sz);

@@ -339,6 +354,25 @@ otx2_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags,
 	return 0;
 }

+#define OTX2_TIM_DISABLE_NPA	"tim_disable_npa"
+
+static void
+tim_parse_devargs(struct rte_devargs *devargs, struct otx2_tim_evdev *dev)
+{
+	struct rte_kvargs *kvlist;
+
+	if (devargs == NULL)
+		return;
+
+	kvlist = rte_kvargs_parse(devargs->args, NULL);
+	if (kvlist == NULL)
+		return;
+
+	rte_kvargs_process(kvlist, OTX2_TIM_DISABLE_NPA,
+			   &parse_kvargs_flag, &dev->disable_npa);
+	rte_kvargs_free(kvlist);
+}
+
 void
 otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev)
 {
@@ -364,6 +398,8 @@ otx2_tim_init(struct rte_pci_device *pci_dev, struct otx2_dev *cmn_dev)
 	dev->mbox = cmn_dev->mbox;
 	dev->bar2 = cmn_dev->bar2;

+	tim_parse_devargs(pci_dev->device.devargs, dev);
+
 	otx2_mbox_alloc_msg_free_rsrc_cnt(dev->mbox);
 	rc = otx2_mbox_process_msg(dev->mbox, (void *)&rsrc_cnt);
 	if (rc < 0) {
diff --git a/drivers/event/octeontx2/otx2_tim_evdev.h b/drivers/event/octeontx2/otx2_tim_evdev.h
index fdd076ebd..0a0a0b4d8 100644
--- a/drivers/event/octeontx2/otx2_tim_evdev.h
+++ b/drivers/event/octeontx2/otx2_tim_evdev.h
@@ -55,6 +55,8 @@ struct otx2_tim_evdev {
 	struct otx2_mbox *mbox;
 	uint16_t nb_rings;
 	uintptr_t bar2;
+	/* Dev args */
+	uint8_t disable_npa;
 };

 struct otx2_tim_ring {
@@ -65,6 +67,7 @@ struct otx2_tim_ring {
 	struct rte_mempool *chunk_pool;
 	uint64_t tck_int;
 	uint8_t prod_type_sp;
+	uint8_t disable_npa;
 	uint8_t optimized;
 	uint8_t ena_dfb;
 	uint16_t ring_id;
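
For readers following the tim_chnk_pool_create() change above, the two
mempool creation paths it now selects between can be summarized as
below. This is a minimal sketch, not the driver code: the function
name chunk_pool_sketch(), the pool name "tim_chunks" and the object
counts/sizes are placeholders, and error handling is reduced to early
returns.

#include <rte_lcore.h>
#include <rte_mbuf_pool_ops.h>
#include <rte_mempool.h>

static struct rte_mempool *
chunk_pool_sketch(int use_npa)
{
	struct rte_mempool *mp;

	if (use_npa) {
		/* HW-backed path: create an empty pool, bind the platform
		 * (NPA) mempool ops, then populate it. No per-lcore cache,
		 * as frees done by HW are not visible to a SW cache.
		 */
		mp = rte_mempool_create_empty("tim_chunks", 1024, 256,
					      0, 0, rte_socket_id(), 0);
		if (mp == NULL)
			return NULL;
		if (rte_mempool_set_ops_byname(mp,
				rte_mbuf_platform_mempool_ops(), NULL) < 0 ||
		    rte_mempool_populate_default(mp) < 0) {
			rte_mempool_free(mp);
			return NULL;
		}
		return mp;
	}

	/* SW path: one-shot create with the default ops and a per-lcore
	 * cache; TIM must then free chunks itself (ena_dfb = 1).
	 */
	return rte_mempool_create("tim_chunks", 1024, 256, 32, 0,
				  NULL, NULL, NULL, NULL,
				  rte_socket_id(), 0);
}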
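
Note that tim_parse_devargs() relies on a parse_kvargs_flag handler
that is not added by this patch, so it must already exist in the
driver. A minimal handler matching the rte_kvargs_process() callback
signature would plausibly look like this; the body is an assumption
based on the callback contract, not the driver's actual definition.

#include <stdint.h>
#include <stdlib.h>

#include <rte_common.h>

/* Hypothetical flag handler, shown for illustration only: it maps any
 * non-zero devargs value (e.g. tim_disable_npa=1) to a set flag.
 */
static int
parse_kvargs_flag(const char *key, const char *value, void *opaque)
{
	RTE_SET_USED(key);

	*(uint8_t *)opaque = !!atoi(value);

	return 0;
}

With such a handler in place, passing "0002:0e:00.0,tim_disable_npa=1"
via --dev sets dev->disable_npa during otx2_tim_init(), and
otx2_tim_ring_create() copies it into each ring before the chunk pool
is created.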