From patchwork Mon Jun 14 19:24:24 2021
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 94181
X-Patchwork-Delegate: jerinj@marvell.com
To: Nithin Dabilpuram
CC: Pavan Nikhilesh
Date: Tue, 15 Jun 2021 00:54:24 +0530
Message-ID: <20210614192426.2978-1-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.17.1
Subject: [dpdk-dev] [PATCH 1/2] mempool/octeontx2: fix shift calculation
List-Id: DPDK patches and discussions

From: Pavan Nikhilesh

The shift is used to generate an 8-bit saturated value from the current
aura used count. The shift value should be derived from the log2 of the
block count when the block count is greater than 256; otherwise the
shift should be 0.
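[Editor's note] A minimal standalone sketch of the shift derivation described
above, for illustration only; it is not part of the patch. The ceil_log2_u32()
helper stands in for DPDK's rte_log2_u32(), and the block counts in main() are
hypothetical examples.

#include <stdint.h>
#include <stdio.h>

/* ceil(log2(v)) for v >= 1; stand-in for DPDK's rte_log2_u32(). */
static uint32_t
ceil_log2_u32(uint32_t v)
{
	uint32_t n = 0;

	while ((1u << n) < v)
		n++;
	return n;
}

/*
 * The aura used count is reported as an 8-bit saturated value of
 * (count >> shift), so block counts up to 256 need shift 0 and larger
 * counts use log2(block_count) - 8.
 */
static uint32_t
aura_shift(uint32_t block_count)
{
	uint32_t shift = ceil_log2_u32(block_count);

	return shift < 8 ? 0 : shift - 8;
}

int
main(void)
{
	/* The old __builtin_clz(block_count) - 8 formula yields 16 and 13
	 * for these counts instead of the intended 0 and 2.
	 */
	printf("shift(128)  = %u\n", aura_shift(128));
	printf("shift(1024) = %u\n", aura_shift(1024));
	return 0;
}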
Fixes: 7bcc47cbe2fa ("mempool/octeontx2: add mempool alloc op")

Signed-off-by: Pavan Nikhilesh
---
 drivers/mempool/octeontx2/otx2_mempool_ops.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/mempool/octeontx2/otx2_mempool_ops.c b/drivers/mempool/octeontx2/otx2_mempool_ops.c
index 9ff71bcf6b..d827fd8c7b 100644
--- a/drivers/mempool/octeontx2/otx2_mempool_ops.c
+++ b/drivers/mempool/octeontx2/otx2_mempool_ops.c
@@ -611,7 +611,8 @@ npa_lf_aura_pool_pair_alloc(struct otx2_npa_lf *lf, const uint32_t block_size,
 	/* Update aura fields */
 	aura->pool_addr = pool_id;/* AF will translate to associated poolctx */
 	aura->ena = 1;
-	aura->shift = __builtin_clz(block_count) - 8;
+	aura->shift = rte_log2_u32(block_count);
+	aura->shift = aura->shift < 8 ? 0 : aura->shift - 8;
 	aura->limit = block_count;
 	aura->pool_caching = 1;
 	aura->err_int_ena = BIT(NPA_AURA_ERR_INT_AURA_ADD_OVER);
@@ -626,7 +627,8 @@ npa_lf_aura_pool_pair_alloc(struct otx2_npa_lf *lf, const uint32_t block_size,
 	pool->ena = 1;
 	pool->buf_size = block_size / OTX2_ALIGN;
 	pool->stack_max_pages = stack_size;
-	pool->shift = __builtin_clz(block_count) - 8;
+	pool->shift = rte_log2_u32(block_count);
+	pool->shift = pool->shift < 8 ? 0 : pool->shift - 8;
 	pool->ptr_start = 0;
 	pool->ptr_end = ~0;
 	pool->stack_caching = 1;

From patchwork Mon Jun 14 19:24:25 2021
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 94182
X-Patchwork-Delegate: jerinj@marvell.com
To: Pavan Nikhilesh
Date: Tue, 15 Jun 2021 00:54:25 +0530
Message-ID: <20210614192426.2978-2-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210614192426.2978-1-pbhagavatula@marvell.com>
References: <20210614192426.2978-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH 2/2] event/octeontx2: configure aura backpressure
List-Id: DPDK patches and discussions

From: Pavan Nikhilesh

In the octeontx2 poll mode driver, the RQ is connected to a CQ, which is
responsible for asserting backpressure to the CGX channel. When the event
eth Rx adapter is configured, the RQ is instead connected to an event
queue; to enable backpressure, the AURA assigned to a given RQ has to be
configured to backpressure the CGX channel.

The event device expects a unique AURA to be configured per ethernet
device. If multiple RQs from different ethernet devices use the same AURA,
backpressure will be disabled; the application can override this using the
devargs:

    -a 0002:0e:00.0,force_rx_bp=1

Signed-off-by: Pavan Nikhilesh
---
 doc/guides/eventdevs/octeontx2.rst         |  24 +++
 drivers/event/octeontx2/otx2_evdev.c       |   4 +
 drivers/event/octeontx2/otx2_evdev.h       |   1 +
 drivers/event/octeontx2/otx2_evdev_adptr.c | 105 +++++++++++++++++++--
 4 files changed, 127 insertions(+), 7 deletions(-)

diff --git a/doc/guides/eventdevs/octeontx2.rst b/doc/guides/eventdevs/octeontx2.rst
index ce733198c2..11fbebfcd2 100644
--- a/doc/guides/eventdevs/octeontx2.rst
+++ b/doc/guides/eventdevs/octeontx2.rst
@@ -138,6 +138,15 @@ Runtime Config Options
 
     -a 0002:0e:00.0,npa_lock_mask=0xf
 
+- ``Force Rx Back pressure``
+
+  Force Rx back pressure when the same mempool is used across ethernet
+  devices connected to the event device.
+
+  For example::
+
+    -a 0002:0e:00.0,force_rx_bp=1
+
 Debugging Options
 -----------------
 
@@ -152,3 +161,18 @@ Debugging Options
 +---+------------+-------------------------------------------------------+
 | 2 | TIM        | --log-level='pmd\.event\.octeontx2\.timer,8'          |
 +---+------------+-------------------------------------------------------+
+
+Limitations
+-----------
+
+Rx adapter support
+~~~~~~~~~~~~~~~~~~
+
+Using the same mempool for all the ethernet device ports connected to the
+event device causes back pressure to be asserted only on the first
+ethernet device.
+Back pressure is automatically disabled when the same mempool is used for
+all the ethernet devices connected to the event device; applications can
+override this with the `force_rx_bp=1` device argument.
+Using a unique mempool per ethernet device is recommended when they are
+connected to the event device.
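[Editor's note] A hypothetical launch sketch, not part of this patch, showing
how an application could pass the force_rx_bp devarg introduced above; the
PCI addresses and the surrounding main() are illustrative assumptions only.

#include <rte_eal.h>

int
main(void)
{
	char *argv[] = {
		"app",
		"-a", "0002:0e:00.0,force_rx_bp=1",	/* SSO event device */
		"-a", "0002:02:00.0",			/* ethernet device */
	};
	int argc = (int)(sizeof(argv) / sizeof(argv[0]));

	/* rte_eal_init() parses the devargs; the octeontx2 event PMD picks
	 * up force_rx_bp in sso_parse_devargs() (see the otx2_evdev.c hunk
	 * below) and keeps aura backpressure enabled even when several
	 * ethernet devices share one mempool.
	 */
	if (rte_eal_init(argc, argv) < 0)
		return -1;

	/* ... eventdev, ethdev and Rx adapter setup would follow here ... */
	return 0;
}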
diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c
index ee7a6ad514..38a6b651d9 100644
--- a/drivers/event/octeontx2/otx2_evdev.c
+++ b/drivers/event/octeontx2/otx2_evdev.c
@@ -1639,6 +1639,7 @@ static struct rte_eventdev_ops otx2_sso_ops = {
 #define OTX2_SSO_XAE_CNT	"xae_cnt"
 #define OTX2_SSO_SINGLE_WS	"single_ws"
 #define OTX2_SSO_GGRP_QOS	"qos"
+#define OTX2_SSO_FORCE_BP	"force_rx_bp"
 
 static void
 parse_queue_param(char *value, void *opaque)
@@ -1734,6 +1735,8 @@ sso_parse_devargs(struct otx2_sso_evdev *dev, struct rte_devargs *devargs)
 			   &single_ws);
 	rte_kvargs_process(kvlist, OTX2_SSO_GGRP_QOS, &parse_sso_kvargs_dict,
 			   dev);
+	rte_kvargs_process(kvlist, OTX2_SSO_FORCE_BP, &parse_kvargs_flag,
+			   &dev->force_rx_bp);
 	otx2_parse_common_devargs(kvlist);
 	dev->dual_ws = !single_ws;
 	rte_kvargs_free(kvlist);
@@ -1892,4 +1895,5 @@ RTE_PMD_REGISTER_KMOD_DEP(event_octeontx2, "vfio-pci");
 RTE_PMD_REGISTER_PARAM_STRING(event_octeontx2, OTX2_SSO_XAE_CNT "=<int>"
 			      OTX2_SSO_SINGLE_WS "=1"
 			      OTX2_SSO_GGRP_QOS "=<string>"
+			      OTX2_SSO_FORCE_BP "=1"
 			      OTX2_NPA_LOCK_MASK "=<1-65535>");
diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h
index 96e5799be1..a5d34b7df7 100644
--- a/drivers/event/octeontx2/otx2_evdev.h
+++ b/drivers/event/octeontx2/otx2_evdev.h
@@ -151,6 +151,7 @@ struct otx2_sso_evdev {
 	uint8_t dual_ws;
 	uint32_t xae_cnt;
 	uint8_t qos_queue_cnt;
+	uint8_t force_rx_bp;
 	struct otx2_sso_qos *qos_parse_data;
 	/* HW const */
 	uint32_t xae_waes;
diff --git a/drivers/event/octeontx2/otx2_evdev_adptr.c b/drivers/event/octeontx2/otx2_evdev_adptr.c
index d85c3665ca..a91f784b1e 100644
--- a/drivers/event/octeontx2/otx2_evdev_adptr.c
+++ b/drivers/event/octeontx2/otx2_evdev_adptr.c
@@ -4,6 +4,8 @@
 
 #include "otx2_evdev.h"
 
+#define NIX_RQ_AURA_THRESH(x) (((x)*95) / 100)
+
 int
 otx2_sso_rx_adapter_caps_get(const struct rte_eventdev *event_dev,
 			     const struct rte_eth_dev *eth_dev, uint32_t *caps)
@@ -306,6 +308,87 @@ sso_updt_lookup_mem(const struct rte_eventdev *event_dev, void *lookup_mem)
 	}
 }
 
+static inline void
+sso_cfg_nix_mp_bpid(struct otx2_sso_evdev *dev,
+		    struct otx2_eth_dev *otx2_eth_dev, struct otx2_eth_rxq *rxq,
+		    uint8_t ena)
+{
+	struct otx2_fc_info *fc = &otx2_eth_dev->fc_info;
+	struct npa_aq_enq_req *req;
+	struct npa_aq_enq_rsp *rsp;
+	struct otx2_npa_lf *lf;
+	struct otx2_mbox *mbox;
+	uint32_t limit;
+	int rc;
+
+	if (otx2_dev_is_sdp(otx2_eth_dev))
+		return;
+
+	lf = otx2_npa_lf_obj_get();
+	if (!lf)
+		return;
+	mbox = lf->mbox;
+
+	req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+	if (req == NULL)
+		return;
+
+	req->aura_id = npa_lf_aura_handle_to_aura(rxq->pool->pool_id);
+	req->ctype = NPA_AQ_CTYPE_AURA;
+	req->op = NPA_AQ_INSTOP_READ;
+
+	rc = otx2_mbox_process_msg(mbox, (void *)&rsp);
+	if (rc)
+		return;
+
+	limit = rsp->aura.limit;
+	/* BP is already enabled. */
+	if (rsp->aura.bp_ena) {
+		/* If BP ids don't match, disable BP. */
+		if ((rsp->aura.nix0_bpid != fc->bpid[0]) && !dev->force_rx_bp) {
+			req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+			if (req == NULL)
+				return;
+
+			req->aura_id =
+				npa_lf_aura_handle_to_aura(rxq->pool->pool_id);
+			req->ctype = NPA_AQ_CTYPE_AURA;
+			req->op = NPA_AQ_INSTOP_WRITE;
+
+			req->aura.bp_ena = 0;
+			req->aura_mask.bp_ena = ~(req->aura_mask.bp_ena);
+
+			otx2_mbox_process(mbox);
+		}
+		return;
+	}
+
+	/* BP was previously enabled but is now disabled, skip. */
+	if (rsp->aura.bp)
+		return;
+
+	req = otx2_mbox_alloc_msg_npa_aq_enq(mbox);
+	if (req == NULL)
+		return;
+
+	req->aura_id = npa_lf_aura_handle_to_aura(rxq->pool->pool_id);
+	req->ctype = NPA_AQ_CTYPE_AURA;
+	req->op = NPA_AQ_INSTOP_WRITE;
+
+	if (ena) {
+		req->aura.nix0_bpid = fc->bpid[0];
+		req->aura_mask.nix0_bpid = ~(req->aura_mask.nix0_bpid);
+		req->aura.bp = NIX_RQ_AURA_THRESH(
+			limit > 128 ? 256 : limit); /* 95% of size */
+		req->aura_mask.bp = ~(req->aura_mask.bp);
+	}
+
+	req->aura.bp_ena = !!ena;
+	req->aura_mask.bp_ena = ~(req->aura_mask.bp_ena);
+
+	otx2_mbox_process(mbox);
+}
+
 int
 otx2_sso_rx_adapter_queue_add(const struct rte_eventdev *event_dev,
 			      const struct rte_eth_dev *eth_dev,
@@ -326,8 +409,9 @@ otx2_sso_rx_adapter_queue_add(const struct rte_eventdev *event_dev,
 		for (i = 0 ; i < eth_dev->data->nb_rx_queues; i++) {
 			rxq = eth_dev->data->rx_queues[i];
 			sso_updt_xae_cnt(dev, rxq, RTE_EVENT_TYPE_ETHDEV);
-			rc = sso_xae_reconfigure((struct rte_eventdev *)
-						 (uintptr_t)event_dev);
+			sso_cfg_nix_mp_bpid(dev, otx2_eth_dev, rxq, true);
+			rc = sso_xae_reconfigure(
+				(struct rte_eventdev *)(uintptr_t)event_dev);
 			rc |= sso_rxq_enable(otx2_eth_dev, i,
 					     queue_conf->ev.sched_type,
 					     queue_conf->ev.queue_id, port);
@@ -337,6 +421,7 @@
 	} else {
 		rxq = eth_dev->data->rx_queues[rx_queue_id];
 		sso_updt_xae_cnt(dev, rxq, RTE_EVENT_TYPE_ETHDEV);
+		sso_cfg_nix_mp_bpid(dev, otx2_eth_dev, rxq, true);
 		rc = sso_xae_reconfigure((struct rte_eventdev *)
 					 (uintptr_t)event_dev);
 		rc |= sso_rxq_enable(otx2_eth_dev, (uint16_t)rx_queue_id,
@@ -363,19 +448,25 @@ otx2_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev,
 			      const struct rte_eth_dev *eth_dev,
 			      int32_t rx_queue_id)
 {
-	struct otx2_eth_dev *dev = eth_dev->data->dev_private;
+	struct otx2_eth_dev *otx2_eth_dev = eth_dev->data->dev_private;
+	struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev);
 	int i, rc;
 
-	RTE_SET_USED(event_dev);
 	rc = strncmp(eth_dev->device->driver->name, "net_octeontx2", 13);
 	if (rc)
 		return -EINVAL;
 
 	if (rx_queue_id < 0) {
-		for (i = 0 ; i < eth_dev->data->nb_rx_queues; i++)
-			rc = sso_rxq_disable(dev, i);
+		for (i = 0; i < eth_dev->data->nb_rx_queues; i++) {
+			rc = sso_rxq_disable(otx2_eth_dev, i);
+			sso_cfg_nix_mp_bpid(dev, otx2_eth_dev,
+					    eth_dev->data->rx_queues[i], false);
+		}
 	} else {
-		rc = sso_rxq_disable(dev, (uint16_t)rx_queue_id);
+		rc = sso_rxq_disable(otx2_eth_dev, (uint16_t)rx_queue_id);
+		sso_cfg_nix_mp_bpid(dev, otx2_eth_dev,
+				    eth_dev->data->rx_queues[rx_queue_id],
+				    false);
 	}
 
 	if (rc < 0)