From patchwork Mon Apr 26 17:44:14 2021
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 92193
X-Patchwork-Delegate: jerinj@marvell.com
Subject: [dpdk-dev] [PATCH v2 07/33] event/cnxk: allocate event inflight buffers
Date: Mon, 26 Apr 2021 23:14:14 +0530
Message-ID: <20210426174441.2302-8-pbhagavatula@marvell.com>
In-Reply-To: <20210426174441.2302-1-pbhagavatula@marvell.com>
References: <20210306162942.6845-1-pbhagavatula@marvell.com>
 <20210426174441.2302-1-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.17.1
List-Id: DPDK patches and discussions

From: Pavan Nikhilesh

Allocate buffers in DRAM that hold inflight events.
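
As a quick illustration (not part of the patch), the XAQ pool sizing done by
cnxk_sso_xaq_allocate() below boils down to the following standalone sketch.
The CNXK_SSO_XAQ_* constants are the ones added by this patch; nb_event_queues,
iue and xae_waes are hypothetical stand-ins for the values the driver reads
from hardware:

    #include <stdint.h>
    #include <stdio.h>

    #define CNXK_SSO_XAQ_CACHE_CNT (0x7)
    #define CNXK_SSO_XAQ_SLACK     (8)

    int
    main(void)
    {
            uint32_t nb_event_queues = 2; /* hypothetical queue count */
            uint32_t iue = 4096;          /* hypothetical: dev->sso.iue */
            uint32_t xae_waes = 23;       /* hypothetical: dev->sso.xae_waes */
            uint32_t xaq_cnt, xaq_lmt;

            /* Per-queue cache, enough XAQs to back all in-unit entries,
             * plus per-queue slack.
             */
            xaq_cnt = nb_event_queues * CNXK_SSO_XAQ_CACHE_CNT;
            xaq_cnt += (iue / xae_waes) +
                       (CNXK_SSO_XAQ_SLACK * nb_event_queues);

            /* Enqueue limit keeps half the slack as headroom for the SSO. */
            xaq_lmt = xaq_cnt - (CNXK_SSO_XAQ_SLACK / 2 * nb_event_queues);

            printf("xaq_cnt = %u, xaq_lmt = %u\n", xaq_cnt, xaq_lmt);
            return 0;
    }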
Signed-off-by: Shijith Thotton
Signed-off-by: Pavan Nikhilesh
---
 drivers/event/cnxk/cn10k_eventdev.c |   7 ++
 drivers/event/cnxk/cn9k_eventdev.c  |   7 ++
 drivers/event/cnxk/cnxk_eventdev.c  | 105 ++++++++++++++++++++++++++++
 drivers/event/cnxk/cnxk_eventdev.h  |  14 +++-
 4 files changed, 132 insertions(+), 1 deletion(-)

diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 92687c23e..7e3fa20c5 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -55,6 +55,13 @@ cn10k_sso_dev_configure(const struct rte_eventdev *event_dev)
 		return -ENODEV;
 	}
 
+	rc = cnxk_sso_xaq_allocate(dev);
+	if (rc < 0)
+		goto cnxk_rsrc_fini;
+
+	return 0;
+cnxk_rsrc_fini:
+	roc_sso_rsrc_fini(&dev->sso);
 	return rc;
 }
 
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 1bd2b3343..71245b660 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -63,6 +63,13 @@ cn9k_sso_dev_configure(const struct rte_eventdev *event_dev)
 		return -ENODEV;
 	}
 
+	rc = cnxk_sso_xaq_allocate(dev);
+	if (rc < 0)
+		goto cnxk_rsrc_fini;
+
+	return 0;
+cnxk_rsrc_fini:
+	roc_sso_rsrc_fini(&dev->sso);
 	return rc;
 }
 
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 59cc570fe..927f99117 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -28,12 +28,107 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
 			RTE_EVENT_DEV_CAP_CARRY_FLOW_ID;
 }
 
+int
+cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev)
+{
+	char pool_name[RTE_MEMZONE_NAMESIZE];
+	uint32_t xaq_cnt, npa_aura_id;
+	const struct rte_memzone *mz;
+	struct npa_aura_s *aura;
+	static int reconfig_cnt;
+	int rc;
+
+	if (dev->xaq_pool) {
+		rc = roc_sso_hwgrp_release_xaq(&dev->sso, dev->nb_event_queues);
+		if (rc < 0) {
+			plt_err("Failed to release XAQ %d", rc);
+			return rc;
+		}
+		rte_mempool_free(dev->xaq_pool);
+		dev->xaq_pool = NULL;
+	}
+
+	/*
+	 * Allocate memory for Add work backpressure.
+	 */
+	mz = rte_memzone_lookup(CNXK_SSO_FC_NAME);
+	if (mz == NULL)
+		mz = rte_memzone_reserve_aligned(CNXK_SSO_FC_NAME,
+						 sizeof(struct npa_aura_s) +
+							 RTE_CACHE_LINE_SIZE,
+						 0, 0, RTE_CACHE_LINE_SIZE);
+	if (mz == NULL) {
+		plt_err("Failed to allocate mem for fcmem");
+		return -ENOMEM;
+	}
+
+	dev->fc_iova = mz->iova;
+	dev->fc_mem = mz->addr;
+
+	aura = (struct npa_aura_s *)((uintptr_t)dev->fc_mem +
+				     RTE_CACHE_LINE_SIZE);
+	memset(aura, 0, sizeof(struct npa_aura_s));
+
+	aura->fc_ena = 1;
+	aura->fc_addr = dev->fc_iova;
+	aura->fc_hyst_bits = 0; /* Store count on all updates */
+
+	/* Taken from HRM 14.3.3(4) */
+	xaq_cnt = dev->nb_event_queues * CNXK_SSO_XAQ_CACHE_CNT;
+	xaq_cnt += (dev->sso.iue / dev->sso.xae_waes) +
+		   (CNXK_SSO_XAQ_SLACK * dev->nb_event_queues);
+
+	plt_sso_dbg("Configuring %d xaq buffers", xaq_cnt);
+	/* Setup XAQ based on number of nb queues. */
+	snprintf(pool_name, 30, "cnxk_xaq_buf_pool_%d", reconfig_cnt);
+	dev->xaq_pool = (void *)rte_mempool_create_empty(
+		pool_name, xaq_cnt, dev->sso.xaq_buf_size, 0, 0,
+		rte_socket_id(), 0);
+
+	if (dev->xaq_pool == NULL) {
+		plt_err("Unable to create empty mempool.");
+		rte_memzone_free(mz);
+		return -ENOMEM;
+	}
+
+	rc = rte_mempool_set_ops_byname(dev->xaq_pool,
+					rte_mbuf_platform_mempool_ops(), aura);
+	if (rc != 0) {
+		plt_err("Unable to set xaqpool ops.");
+		goto alloc_fail;
+	}
+
+	rc = rte_mempool_populate_default(dev->xaq_pool);
+	if (rc < 0) {
+		plt_err("Unable to set populate xaqpool.");
+		goto alloc_fail;
+	}
+	reconfig_cnt++;
+	/* When SW does addwork (enqueue) check if there is space in XAQ by
+	 * comparing fc_addr above against the xaq_lmt calculated below.
+	 * There should be a minimum headroom (CNXK_SSO_XAQ_SLACK / 2) for SSO
+	 * to request XAQ to cache them even before enqueue is called.
+	 */
+	dev->xaq_lmt =
+		xaq_cnt - (CNXK_SSO_XAQ_SLACK / 2 * dev->nb_event_queues);
+	dev->nb_xaq_cfg = xaq_cnt;
+
+	npa_aura_id = roc_npa_aura_handle_to_aura(dev->xaq_pool->pool_id);
+	return roc_sso_hwgrp_alloc_xaq(&dev->sso, npa_aura_id,
+				       dev->nb_event_queues);
+alloc_fail:
+	rte_mempool_free(dev->xaq_pool);
+	rte_memzone_free(mz);
+	return rc;
+}
+
 int
 cnxk_sso_dev_validate(const struct rte_eventdev *event_dev)
 {
 	struct rte_event_dev_config *conf = &event_dev->data->dev_conf;
 	struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
 	uint32_t deq_tmo_ns;
+	int rc;
 
 	deq_tmo_ns = conf->dequeue_timeout_ns;
 
@@ -67,6 +162,16 @@ cnxk_sso_dev_validate(const struct rte_eventdev *event_dev)
 		return -EINVAL;
 	}
 
+	if (dev->xaq_pool) {
+		rc = roc_sso_hwgrp_release_xaq(&dev->sso, dev->nb_event_queues);
+		if (rc < 0) {
+			plt_err("Failed to release XAQ %d", rc);
+			return rc;
+		}
+		rte_mempool_free(dev->xaq_pool);
+		dev->xaq_pool = NULL;
+	}
+
 	dev->nb_event_queues = conf->nb_event_queues;
 	dev->nb_event_ports = conf->nb_event_ports;
 
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 974c618bc..8478120c0 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -5,6 +5,7 @@
 #ifndef __CNXK_EVENTDEV_H__
 #define __CNXK_EVENTDEV_H__
 
+#include
 #include
 #include
 
@@ -13,7 +14,10 @@
 
 #define USEC2NSEC(__us) ((__us)*1E3)
 
-#define CNXK_SSO_MZ_NAME "cnxk_evdev_mz"
+#define CNXK_SSO_FC_NAME       "cnxk_evdev_xaq_fc"
+#define CNXK_SSO_MZ_NAME       "cnxk_evdev_mz"
+#define CNXK_SSO_XAQ_CACHE_CNT (0x7)
+#define CNXK_SSO_XAQ_SLACK     (8)
 
 struct cnxk_sso_evdev {
 	struct roc_sso sso;
@@ -26,6 +30,11 @@ struct cnxk_sso_evdev {
 	uint32_t min_dequeue_timeout_ns;
 	uint32_t max_dequeue_timeout_ns;
 	int32_t max_num_events;
+	uint64_t *fc_mem;
+	uint64_t xaq_lmt;
+	uint64_t nb_xaq_cfg;
+	rte_iova_t fc_iova;
+	struct rte_mempool *xaq_pool;
 	/* CN9K */
 	uint8_t dual_ws;
 } __rte_cache_aligned;
@@ -36,6 +45,9 @@ cnxk_sso_pmd_priv(const struct rte_eventdev *event_dev)
 	return event_dev->data->dev_private;
 }
 
+/* Configuration functions */
+int cnxk_sso_xaq_allocate(struct cnxk_sso_evdev *dev);
+
 /* Common ops API. */
 int cnxk_sso_init(struct rte_eventdev *event_dev);
 int cnxk_sso_fini(struct rte_eventdev *event_dev);
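
As an aside (not part of the patch), the fc_mem/xaq_lmt comment in
cnxk_sso_xaq_allocate() describes an enqueue-side backpressure check: new work
is admitted only while the counter the hardware writes to fc_mem stays below
xaq_lmt. A rough sketch of that idea, using hypothetical names and assuming
the counter semantics described in that comment (the actual check belongs to
the enqueue fast path, presumably added by later patches in this series):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical reduced view of the state this patch adds to
     * struct cnxk_sso_evdev.
     */
    struct sso_fc_view {
            uint64_t *fc_mem; /* aura flow-control count written by hardware */
            uint64_t xaq_lmt; /* xaq_cnt minus the CNXK_SSO_XAQ_SLACK / 2 headroom */
    };

    /* Addwork is allowed only while the in-use XAQ count is below the limit;
     * the headroom lets the SSO cache XAQs before software enqueues.
     */
    static bool
    sso_xaq_has_space(const struct sso_fc_view *v)
    {
            return __atomic_load_n(v->fc_mem, __ATOMIC_RELAXED) < v->xaq_lmt;
    }

    int
    main(void)
    {
            uint64_t in_use = 190; /* hypothetical current count */
            struct sso_fc_view v = { .fc_mem = &in_use, .xaq_lmt = 200 };

            printf("space for addwork: %s\n",
                   sso_xaq_has_space(&v) ? "yes" : "no");
            return 0;
    }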