From patchwork Wed Dec 2 10:12:01 2020
X-Patchwork-Submitter: Liron Himi
X-Patchwork-Id: 84700
X-Patchwork-Delegate: jerinj@marvell.com
Date: Wed, 2 Dec 2020 12:12:01 +0200
Message-ID: <20201202101212.4717-28-lironh@marvell.com>
In-Reply-To: <20201202101212.4717-1-lironh@marvell.com>
References: <20201202101212.4717-1-lironh@marvell.com>
X-Mailer: git-send-email 2.28.0
Subject: [dpdk-dev] [PATCH v1 27/38] net/mvpp2: dummy pool creation

From: Liron Himi

Currently the HW is configured with only one pool, whose buffer size may
be larger than the RX FIFO size. In that case, frames larger than the
FIFO size get dropped due to a FIFO overrun. This happens because the HW
works in cut-through mode: it waits until the FIFO holds at least as many
bytes as defined by the smallest pool's buffer size.

This patch adds a dummy pool whose buffer size is very small (smaller
than a 64B frame). This tricks the HW so that frames of any size are
passed from the FIFO to the PP2.
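For illustration only, here is a minimal standalone sketch of the dummy-pool
creation flow, distilled from the mrvl_init_pp2() changes in the diff below.
It assumes the driver globals (used_bpools, dummy_pool, dummy_pool_id,
mrvl_reserve_bit) and the MUSDK bpool API exactly as they appear in this
patch; the helper name create_dummy_pools() and the simplified error
handling are hypothetical:

/*
 * Sketch: create one tiny "dummy" bpool per packet processor so the
 * cut-through logic never waits for more bytes than a minimal frame.
 */
static int create_dummy_pools(void)
{
	struct pp2_bpool_params bpool_params;
	char name[15];
	int i, err;

	for (i = 0; i < pp2_get_num_inst(); i++) {
		/* Reserve a free bpool id from this instance's bitmap. */
		dummy_pool_id[i] = mrvl_reserve_bit(&used_bpools[i],
						    PP2_BPOOL_NUM_POOLS);
		if (dummy_pool_id[i] < 0)
			return -1;

		snprintf(name, sizeof(name), "pool-%d:%d",
			 i, dummy_pool_id[i]);
		memset(&bpool_params, 0, sizeof(bpool_params));
		bpool_params.match = name;
		/* Buffer length smaller than a 64B frame (see commit
		 * message), marked as a dummy short pool. */
		bpool_params.buff_len = MRVL_PKT_OFFS;
		bpool_params.dummy_short_pool = 1;
		err = pp2_bpool_init(&bpool_params, &dummy_pool[i]);
		if (err != 0 || dummy_pool[i] == NULL)
			return -1;
	}
	return 0;
}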
Signed-off-by: Liron Himi
Reviewed-by: Michael Shamis
Signed-off-by: Liron Himi
---
 drivers/net/mvpp2/mrvl_ethdev.c | 71 ++++++++++++++++++++++++++-------
 drivers/net/mvpp2/mrvl_ethdev.h |  2 +
 drivers/net/mvpp2/mrvl_qos.c    |  1 +
 3 files changed, 60 insertions(+), 14 deletions(-)

diff --git a/drivers/net/mvpp2/mrvl_ethdev.c b/drivers/net/mvpp2/mrvl_ethdev.c
index 127861a82..1f9489d77 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.c
+++ b/drivers/net/mvpp2/mrvl_ethdev.c
@@ -90,6 +90,8 @@ static int used_bpools[PP2_NUM_PKT_PROC] = {
 static struct pp2_bpool *mrvl_port_to_bpool_lookup[RTE_MAX_ETHPORTS];
 static int mrvl_port_bpool_size[PP2_NUM_PKT_PROC][PP2_BPOOL_NUM_POOLS][RTE_MAX_LCORE];
 static uint64_t cookie_addr_high = MRVL_COOKIE_ADDR_INVALID;
+static int dummy_pool_id[PP2_NUM_PKT_PROC];
+struct pp2_bpool *dummy_pool[PP2_NUM_PKT_PROC] = {0};
 
 struct mrvl_ifnames {
 	const char *names[PP2_NUM_ETH_PPIO * PP2_NUM_PKT_PROC];
@@ -189,6 +191,19 @@ static struct {
 	MRVL_XSTATS_TBL_ENTRY(tx_errors)
 };
 
+static inline int
+mrvl_reserve_bit(int *bitmap, int max)
+{
+	int n = sizeof(*bitmap) * 8 - __builtin_clz(*bitmap);
+
+	if (n >= max)
+		return -1;
+
+	*bitmap |= 1 << n;
+
+	return n;
+}
+
 /**
  * Initialize packet processor.
  *
@@ -199,6 +214,9 @@ static int
 mrvl_init_pp2(void)
 {
 	struct pp2_init_params init_params;
+	struct pp2_bpool_params bpool_params;
+	char name[15];
+	int err, i;
 
 	memset(&init_params, 0, sizeof(init_params));
 	init_params.hif_reserved_map = MRVL_MUSDK_HIFS_RESERVED;
@@ -207,7 +225,36 @@ mrvl_init_pp2(void)
 	if (mrvl_cfg && mrvl_cfg->pp2_cfg.prs_udfs.num_udfs)
 		memcpy(&init_params.prs_udfs, &mrvl_cfg->pp2_cfg.prs_udfs,
 		       sizeof(struct pp2_parse_udfs));
-	return pp2_init(&init_params);
+	err = pp2_init(&init_params);
+	if (err != 0) {
+		MRVL_LOG(ERR, "PP2 init failed");
+		return -1;
+	}
+
+	memset(dummy_pool, 0, sizeof(dummy_pool));
+	for (i = 0; i < pp2_get_num_inst(); i++) {
+		dummy_pool_id[i] = mrvl_reserve_bit(&used_bpools[i],
+						    PP2_BPOOL_NUM_POOLS);
+		if (dummy_pool_id[i] < 0) {
+			MRVL_LOG(ERR, "Can't find free pool\n");
+			return -1;
+		}
+
+		memset(name, 0, sizeof(name));
+		snprintf(name, sizeof(name), "pool-%d:%d", i, dummy_pool_id[i]);
+		memset(&bpool_params, 0, sizeof(bpool_params));
+		bpool_params.match = name;
+		bpool_params.buff_len = MRVL_PKT_OFFS;
+		bpool_params.dummy_short_pool = 1;
+		err = pp2_bpool_init(&bpool_params, &dummy_pool[i]);
+		if (err != 0 || !dummy_pool[i]) {
+			MRVL_LOG(ERR, "BPool init failed!\n");
+			used_bpools[i] &= ~(1 << dummy_pool_id[i]);
+			return -1;
+		}
+	}
+
+	return 0;
 }
 
 /**
@@ -219,6 +266,15 @@ mrvl_init_pp2(void)
 static void
 mrvl_deinit_pp2(void)
 {
+	int i;
+
+	for (i = 0; i < PP2_NUM_PKT_PROC; i++) {
+		if (!dummy_pool[i])
+			continue;
+		pp2_bpool_deinit(dummy_pool[i]);
+		used_bpools[i] &= ~(1 << dummy_pool_id[i]);
+	}
+
 	pp2_deinit();
 }
 
@@ -259,19 +315,6 @@ mrvl_get_bpool_size(int pp2_id, int pool_id)
 	return size;
 }
 
-static inline int
-mrvl_reserve_bit(int *bitmap, int max)
-{
-	int n = sizeof(*bitmap) * 8 - __builtin_clz(*bitmap);
-
-	if (n >= max)
-		return -1;
-
-	*bitmap |= 1 << n;
-
-	return n;
-}
-
 static int
 mrvl_init_hif(int core_id)
 {
diff --git a/drivers/net/mvpp2/mrvl_ethdev.h b/drivers/net/mvpp2/mrvl_ethdev.h
index 5dbd8b46c..24dbe20d7 100644
--- a/drivers/net/mvpp2/mrvl_ethdev.h
+++ b/drivers/net/mvpp2/mrvl_ethdev.h
@@ -197,6 +197,8 @@ extern int mrvl_logtype;
 	rte_log(RTE_LOG_ ## level, mrvl_logtype, "%s(): " fmt "\n", \
 		__func__, ##args)
 
+extern struct pp2_bpool *dummy_pool[PP2_NUM_PKT_PROC];
+
 /**
  * Convert string to uint32_t with extra checks for result correctness.
  *
diff --git a/drivers/net/mvpp2/mrvl_qos.c b/drivers/net/mvpp2/mrvl_qos.c
index f5275efc7..23a014ade 100644
--- a/drivers/net/mvpp2/mrvl_qos.c
+++ b/drivers/net/mvpp2/mrvl_qos.c
@@ -881,6 +881,7 @@ setup_tc(struct pp2_ppio_tc_params *param, uint8_t inqs,
 
 	param->pkt_offset = MRVL_PKT_OFFS;
 	param->pools[0][0] = bpool;
+	param->pools[0][1] = dummy_pool[bpool->pp2_id];
 	param->default_color = color;
 
 	inq_params = rte_zmalloc_socket("inq_params",