From patchwork Sat Feb 19 12:13:37 2022
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 107844
X-Patchwork-Delegate: jerinj@marvell.com
To: Pavan Nikhilesh, Shijith Thotton
Subject: [PATCH 1/2] event/cnxk: remove deschedule usage in CN9K
Date: Sat, 19 Feb 2022 17:43:37 +0530
Message-ID: <20220219121338.2438-1-pbhagavatula@marvell.com>
List-Id: DPDK patches and discussions

From: Pavan Nikhilesh

Using the deschedule command might incorrectly ignore updates to the WQE and
GGRP on CN9K. Use add_work to pipeline work instead.
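
The diff below swaps the SWTAG_DESCHED based group switch for a plain ADDWORK.
Conceptually the worker has to (1) publish its prior stores, (2) wait until it
is at the head of its tag chain, (3) wait for a free XAQ entry, and only then
(4) push the event to the destination group. The following is a simplified,
self-contained sketch of that ordering; the struct and helper names
(worker_ctx, wait_until_head, sso_add_work) are illustrative stand-ins, not the
driver's real API:

#include <stdatomic.h>
#include <stdint.h>

/* Illustrative stand-in for the per-worker state; not the driver's type. */
struct worker_ctx {
        uint64_t xaq_lmt;                /* max XAQ entries allowed in flight */
        _Atomic uint64_t *xaq_in_flight; /* shared in-flight counter (fc_mem) */
};

/* Placeholder helpers standing in for roc_sso_hws_head_wait() and
 * cnxk_sso_hws_add_work(); the real ones program hardware registers. */
static void wait_until_head(void) { }
static void sso_add_work(uint64_t ev, uint32_t tag, uint8_t tt, uint16_t grp)
{
        (void)ev; (void)tag; (void)tt; (void)grp;
}

static void
forward_to_new_group(struct worker_ctx *ws, uint64_t ev, uint32_t tag,
                     uint8_t tt, uint16_t grp)
{
        /* 1. Publish prior stores before handing the event to another core. */
        atomic_thread_fence(memory_order_release);

        /* 2. Only the head of the tag chain may forward, preserving order. */
        wait_until_head();

        /* 3. Back-pressure: ADDWORK needs a free XAQ entry, so spin until the
         *    in-flight count drops below the limit. */
        while (ws->xaq_lmt <= atomic_load_explicit(ws->xaq_in_flight,
                                                   memory_order_relaxed))
                ;

        /* 4. Queue the event into the destination group. */
        sso_add_work(ev, tag, tt, grp);
}

The XAQ spin is new relative to the deschedule path, presumably mirroring the
check the driver already performs when injecting brand-new events.
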
Signed-off-by: Pavan Nikhilesh
---
 drivers/event/cnxk/cn9k_worker.h | 41 +++++++++++++++++++++++++-------
 1 file changed, 32 insertions(+), 9 deletions(-)

diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index 79374b8d95..0905d744cc 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -63,15 +63,18 @@ cn9k_sso_hws_fwd_swtag(uint64_t base, const struct rte_event *ev)
 }
 
 static __rte_always_inline void
-cn9k_sso_hws_fwd_group(uint64_t base, const struct rte_event *ev,
-                       const uint16_t grp)
+cn9k_sso_hws_new_event_wait(struct cn9k_sso_hws *ws, const struct rte_event *ev)
 {
         const uint32_t tag = (uint32_t)ev->event;
         const uint8_t new_tt = ev->sched_type;
+        const uint64_t event_ptr = ev->u64;
+        const uint16_t grp = ev->queue_id;
 
-        plt_write64(ev->u64, base + SSOW_LF_GWS_OP_UPD_WQP_GRP1);
-        cnxk_sso_hws_swtag_desched(tag, new_tt, grp,
-                                   base + SSOW_LF_GWS_OP_SWTAG_DESCHED);
+        while (ws->xaq_lmt <= __atomic_load_n(ws->fc_mem, __ATOMIC_RELAXED))
+                ;
+
+        cnxk_sso_hws_add_work(event_ptr, tag, new_tt,
+                              ws->grp_base + (grp << 12));
 }
 
 static __rte_always_inline void
@@ -86,10 +89,12 @@ cn9k_sso_hws_forward_event(struct cn9k_sso_hws *ws, const struct rte_event *ev)
         } else {
                 /*
                  * Group has been changed for group based work pipelining,
-                 * Use deschedule/add_work operation to transfer the event to
+                 * Use add_work operation to transfer the event to
                  * new group/core
                  */
-                cn9k_sso_hws_fwd_group(ws->base, ev, grp);
+                rte_atomic_thread_fence(__ATOMIC_RELEASE);
+                roc_sso_hws_head_wait(ws->base);
+                cn9k_sso_hws_new_event_wait(ws, ev);
         }
 }
 
@@ -113,6 +118,22 @@ cn9k_sso_hws_dual_new_event(struct cn9k_sso_hws_dual *dws,
         return 1;
 }
 
+static __rte_always_inline void
+cn9k_sso_hws_dual_new_event_wait(struct cn9k_sso_hws_dual *dws,
+                                 const struct rte_event *ev)
+{
+        const uint32_t tag = (uint32_t)ev->event;
+        const uint8_t new_tt = ev->sched_type;
+        const uint64_t event_ptr = ev->u64;
+        const uint16_t grp = ev->queue_id;
+
+        while (dws->xaq_lmt <= __atomic_load_n(dws->fc_mem, __ATOMIC_RELAXED))
+                ;
+
+        cnxk_sso_hws_add_work(event_ptr, tag, new_tt,
+                              dws->grp_base + (grp << 12));
+}
+
 static __rte_always_inline void
 cn9k_sso_hws_dual_forward_event(struct cn9k_sso_hws_dual *dws, uint64_t base,
                                 const struct rte_event *ev)
@@ -126,10 +147,12 @@ cn9k_sso_hws_dual_forward_event(struct cn9k_sso_hws_dual *dws, uint64_t base,
         } else {
                 /*
                  * Group has been changed for group based work pipelining,
-                 * Use deschedule/add_work operation to transfer the event to
+                 * Use add_work operation to transfer the event to
                  * new group/core
                  */
-                cn9k_sso_hws_fwd_group(base, ev, grp);
+                rte_atomic_thread_fence(__ATOMIC_RELEASE);
+                roc_sso_hws_head_wait(base);
+                cn9k_sso_hws_dual_new_event_wait(dws, ev);
         }
 }

From patchwork Sat Feb 19 12:13:38 2022
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 107843
X-Patchwork-Delegate: jerinj@marvell.com
To: Pavan Nikhilesh, Shijith Thotton
Subject: [PATCH 2/2] event/cnxk: update SQB fc check for Tx adapter
Date: Sat, 19 Feb 2022 17:43:38 +0530
Message-ID: <20220219121338.2438-2-pbhagavatula@marvell.com>
In-Reply-To: <20220219121338.2438-1-pbhagavatula@marvell.com>
References: <20220219121338.2438-1-pbhagavatula@marvell.com>
List-Id: DPDK patches and discussions

From: Pavan Nikhilesh

Update the SQB limit to include the CPT queue size when security offload is
enabled.
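
The txq_fc_update() changes below derive the flow-control threshold from the
full SQB count of the send queue (instead of the old CNXK_SSO_SQB_LIMIT cap),
reserve room for the CPT outbound descriptors when RTE_ETH_TX_OFFLOAD_SECURITY
is set, and keep 70% of the result as headroom. A standalone rendering of that
arithmetic, with made-up queue sizes (the input values and the div_ceil helper
are illustrative, not taken from the driver):

#include <stdint.h>
#include <stdio.h>

/* Round-up division, mirroring RTE_ALIGN_MUL_CEIL(x, y) / y. */
static uint32_t div_ceil(uint32_t x, uint32_t y) { return (x + y - 1) / y; }

int main(void)
{
        /* Example inputs (illustrative, not real hardware values). */
        uint32_t nb_sqb_bufs = 512;     /* SQBs backing the send queue      */
        uint32_t sqes_per_sqb_log2 = 5; /* 32 SQEs per SQB                  */
        uint32_t outb_nb_desc = 256;    /* CPT outbound descriptors         */
        int security_offload = 1;       /* RTE_ETH_TX_OFFLOAD_SECURITY set  */

        uint32_t sqes_per_sqb = 1U << sqes_per_sqb_log2;

        /* Roughly one SQE per SQB is lost to chaining, so drop about
         * nb_sqb_bufs / sqes_per_sqb buffers from the usable count. */
        uint32_t adj = nb_sqb_bufs - div_ceil(nb_sqb_bufs, sqes_per_sqb);

        /* Reserve SQBs for packets queued to CPT when security is enabled. */
        if (security_offload)
                adj -= outb_nb_desc / (sqes_per_sqb - 1);

        /* Keep 70% of what is left as the flow-control threshold. */
        adj = (70 * adj) / 100;

        printf("nb_sqb_bufs_adj = %u\n", adj); /* 341 for these inputs */
        return 0;
}

For these inputs: 512 - 16 = 496 usable SQBs, minus 256 / 31 = 8 reserved for
CPT gives 488, and 70% of that is 341, which becomes the threshold the Tx
flow-control wait compares against.
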
Signed-off-by: Pavan Nikhilesh
---
 drivers/event/cnxk/cn10k_eventdev.c      | 30 +++++++++++++++++++
 drivers/event/cnxk/cn10k_worker.h        | 18 +++++------
 drivers/event/cnxk/cn9k_eventdev.c       | 28 ++++++++---------
 drivers/event/cnxk/cn9k_worker.h         |  5 ++--
 drivers/event/cnxk/cnxk_eventdev.h       |  1 -
 drivers/event/cnxk/cnxk_eventdev_adptr.c | 38 +++++++++++++++++++-----
 6 files changed, 86 insertions(+), 34 deletions(-)

diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 26d65e3568..24f3a5908c 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -717,6 +717,35 @@ cn10k_sso_tx_adapter_caps_get(const struct rte_eventdev *dev,
         return 0;
 }
 
+static void
+cn10k_sso_txq_fc_update(const struct rte_eth_dev *eth_dev, int32_t tx_queue_id)
+{
+        struct cnxk_eth_dev *cnxk_eth_dev = eth_dev->data->dev_private;
+        struct cn10k_eth_txq *txq;
+        struct roc_nix_sq *sq;
+        int i;
+
+        if (tx_queue_id < 0) {
+                for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
+                        cn10k_sso_txq_fc_update(eth_dev, i);
+        } else {
+                uint16_t sqes_per_sqb;
+
+                sq = &cnxk_eth_dev->sqs[tx_queue_id];
+                txq = eth_dev->data->tx_queues[tx_queue_id];
+                sqes_per_sqb = 1U << txq->sqes_per_sqb_log2;
+                sq->nb_sqb_bufs_adj =
+                        sq->nb_sqb_bufs -
+                        RTE_ALIGN_MUL_CEIL(sq->nb_sqb_bufs, sqes_per_sqb) /
+                                sqes_per_sqb;
+                if (cnxk_eth_dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY)
+                        sq->nb_sqb_bufs_adj -= (cnxk_eth_dev->outb.nb_desc /
+                                                (sqes_per_sqb - 1));
+                txq->nb_sqb_bufs_adj = sq->nb_sqb_bufs_adj;
+                txq->nb_sqb_bufs_adj = (70 * txq->nb_sqb_bufs_adj) / 100;
+        }
+}
+
 static int
 cn10k_sso_tx_adapter_queue_add(uint8_t id, const struct rte_eventdev *event_dev,
                                const struct rte_eth_dev *eth_dev,
@@ -746,6 +775,7 @@ cn10k_sso_tx_adapter_queue_add(uint8_t id, const struct rte_eventdev *event_dev,
         }
 
         dev->tx_offloads |= tx_offloads;
+        cn10k_sso_txq_fc_update(eth_dev, tx_queue_id);
         rc = cn10k_sso_updt_tx_adptr_data(event_dev);
         if (rc < 0)
                 return rc;
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index cfe729cef9..bb32ef75ef 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -470,6 +470,14 @@ cn10k_sso_hws_xtract_meta(struct rte_mbuf *m, const uint64_t *txq_data)
                     (BIT_ULL(48) - 1));
 }
 
+static __rte_always_inline void
+cn10k_sso_txq_fc_wait(const struct cn10k_eth_txq *txq)
+{
+        while ((uint64_t)txq->nb_sqb_bufs_adj <=
+               __atomic_load_n(txq->fc_mem, __ATOMIC_RELAXED))
+                ;
+}
+
 static __rte_always_inline void
 cn10k_sso_tx_one(struct cn10k_sso_hws *ws, struct rte_mbuf *m, uint64_t *cmd,
                  uint16_t lmt_id, uintptr_t lmt_addr, uint8_t sched_type,
@@ -517,6 +525,7 @@ cn10k_sso_tx_one(struct cn10k_sso_hws *ws, struct rte_mbuf *m, uint64_t *cmd,
         if (!CNXK_TAG_IS_HEAD(ws->gw_rdata) && !sched_type)
                 ws->gw_rdata = roc_sso_hws_head_wait(ws->base);
 
+        cn10k_sso_txq_fc_wait(txq);
         roc_lmt_submit_steorl(lmt_id, pa);
 }
 
@@ -577,7 +586,6 @@ cn10k_sso_hws_event_tx(struct cn10k_sso_hws *ws, struct rte_event *ev,
         struct cn10k_eth_txq *txq;
         struct rte_mbuf *m;
         uintptr_t lmt_addr;
-        uint16_t ref_cnt;
         uint16_t lmt_id;
 
         lmt_addr = ws->lmt_base;
@@ -607,17 +615,9 @@ cn10k_sso_hws_event_tx(struct cn10k_sso_hws *ws, struct rte_event *ev,
         }
 
         m = ev->mbuf;
-        ref_cnt = m->refcnt;
         cn10k_sso_tx_one(ws, m, cmd, lmt_id, lmt_addr, ev->sched_type, txq_data,
                          flags);
 
-        if (flags & NIX_TX_OFFLOAD_MBUF_NOFF_F) {
-                if (ref_cnt > 1)
-                        return 1;
-        }
-
-        cnxk_sso_hws_swtag_flush(ws->base + SSOW_LF_GWS_TAG,
-                                 ws->base + SSOW_LF_GWS_OP_SWTAG_FLUSH);
         return 1;
 }
 
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 6d3d03c97c..8e55961ddc 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -955,8 +955,7 @@ cn9k_sso_tx_adapter_caps_get(const struct rte_eventdev *dev,
 }
 
 static void
-cn9k_sso_txq_fc_update(const struct rte_eth_dev *eth_dev, int32_t tx_queue_id,
-                       bool ena)
+cn9k_sso_txq_fc_update(const struct rte_eth_dev *eth_dev, int32_t tx_queue_id)
 {
         struct cnxk_eth_dev *cnxk_eth_dev = eth_dev->data->dev_private;
         struct cn9k_eth_txq *txq;
@@ -965,20 +964,21 @@ cn9k_sso_txq_fc_update(const struct rte_eth_dev *eth_dev, int32_t tx_queue_id,
 
         if (tx_queue_id < 0) {
                 for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
-                        cn9k_sso_txq_fc_update(eth_dev, i, ena);
+                        cn9k_sso_txq_fc_update(eth_dev, i);
         } else {
-                uint16_t sq_limit;
+                uint16_t sqes_per_sqb;
 
                 sq = &cnxk_eth_dev->sqs[tx_queue_id];
                 txq = eth_dev->data->tx_queues[tx_queue_id];
-                sq_limit =
-                        ena ? RTE_MIN(CNXK_SSO_SQB_LIMIT, sq->aura_sqb_bufs) :
-                              sq->nb_sqb_bufs;
-                txq->nb_sqb_bufs_adj =
-                        sq_limit -
-                        RTE_ALIGN_MUL_CEIL(sq_limit,
-                                           (1ULL << txq->sqes_per_sqb_log2)) /
-                                (1ULL << txq->sqes_per_sqb_log2);
+                sqes_per_sqb = 1U << txq->sqes_per_sqb_log2;
+                sq->nb_sqb_bufs_adj =
+                        sq->nb_sqb_bufs -
+                        RTE_ALIGN_MUL_CEIL(sq->nb_sqb_bufs, sqes_per_sqb) /
+                                sqes_per_sqb;
+                if (cnxk_eth_dev->tx_offloads & RTE_ETH_TX_OFFLOAD_SECURITY)
+                        sq->nb_sqb_bufs_adj -= (cnxk_eth_dev->outb.nb_desc /
+                                                (sqes_per_sqb - 1));
+                txq->nb_sqb_bufs_adj = sq->nb_sqb_bufs_adj;
                 txq->nb_sqb_bufs_adj = (70 * txq->nb_sqb_bufs_adj) / 100;
         }
 }
@@ -1012,7 +1012,7 @@ cn9k_sso_tx_adapter_queue_add(uint8_t id, const struct rte_eventdev *event_dev,
         }
 
         dev->tx_offloads |= tx_offloads;
-        cn9k_sso_txq_fc_update(eth_dev, tx_queue_id, true);
+        cn9k_sso_txq_fc_update(eth_dev, tx_queue_id);
         rc = cn9k_sso_updt_tx_adptr_data(event_dev);
         if (rc < 0)
                 return rc;
@@ -1033,7 +1033,7 @@ cn9k_sso_tx_adapter_queue_del(uint8_t id, const struct rte_eventdev *event_dev,
         rc = cnxk_sso_tx_adapter_queue_del(event_dev, eth_dev, tx_queue_id);
         if (rc < 0)
                 return rc;
-        cn9k_sso_txq_fc_update(eth_dev, tx_queue_id, false);
+        cn9k_sso_txq_fc_update(eth_dev, tx_queue_id);
 
         return cn9k_sso_updt_tx_adptr_data(event_dev);
 }
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index 0905d744cc..79b2b3809f 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -613,9 +613,8 @@ NIX_RX_FASTPATH_MODES
 static __rte_always_inline void
 cn9k_sso_txq_fc_wait(const struct cn9k_eth_txq *txq)
 {
-        while (!((txq->nb_sqb_bufs_adj -
-                  __atomic_load_n(txq->fc_mem, __ATOMIC_RELAXED))
-                 << (txq)->sqes_per_sqb_log2))
+        while ((uint64_t)txq->nb_sqb_bufs_adj <=
+               __atomic_load_n(txq->fc_mem, __ATOMIC_RELAXED))
                 ;
 }
 
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index e3b5ffa7eb..b157fef096 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -38,7 +38,6 @@
 #define CNXK_SSO_XAQ_CACHE_CNT (0x7)
 #define CNXK_SSO_XAQ_SLACK     (8)
 #define CNXK_SSO_WQE_SG_PTR    (9)
-#define CNXK_SSO_SQB_LIMIT     (0x180)
 
 #define CNXK_TT_FROM_TAG(x)   (((x) >> 32) & SSO_TT_EMPTY)
 #define CNXK_TT_FROM_EVENT(x) (((x) >> 38) & SSO_TT_EMPTY)
diff --git a/drivers/event/cnxk/cnxk_eventdev_adptr.c b/drivers/event/cnxk/cnxk_eventdev_adptr.c
index 5ebd3340e7..7b580ca98f 100644
--- a/drivers/event/cnxk/cnxk_eventdev_adptr.c
+++ b/drivers/event/cnxk/cnxk_eventdev_adptr.c
@@ -335,8 +335,18 @@ cnxk_sso_rx_adapter_stop(const struct rte_eventdev *event_dev,
 static int
 cnxk_sso_sqb_aura_limit_edit(struct roc_nix_sq *sq, uint16_t nb_sqb_bufs)
 {
-        return roc_npa_aura_limit_modify(
-                sq->aura_handle, RTE_MIN(nb_sqb_bufs, sq->aura_sqb_bufs));
+        int rc;
+
+        if (sq->nb_sqb_bufs != nb_sqb_bufs) {
+                rc = roc_npa_aura_limit_modify(
+                        sq->aura_handle,
+                        RTE_MIN(nb_sqb_bufs, sq->aura_sqb_bufs));
+                if (rc < 0)
+                        return rc;
+
+                sq->nb_sqb_bufs = RTE_MIN(nb_sqb_bufs, sq->aura_sqb_bufs);
+        }
+        return 0;
 }
 
 static void
@@ -522,22 +532,29 @@ cnxk_sso_tx_adapter_queue_add(const struct rte_eventdev *event_dev,
 {
         struct cnxk_eth_dev *cnxk_eth_dev = eth_dev->data->dev_private;
         struct roc_nix_sq *sq;
-        int i, ret;
+        int i, ret = 0;
         void *txq;
 
         if (tx_queue_id < 0) {
                 for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
-                        cnxk_sso_tx_adapter_queue_add(event_dev, eth_dev, i);
+                        ret |= cnxk_sso_tx_adapter_queue_add(event_dev, eth_dev,
+                                                             i);
         } else {
                 txq = eth_dev->data->tx_queues[tx_queue_id];
                 sq = &cnxk_eth_dev->sqs[tx_queue_id];
-                cnxk_sso_sqb_aura_limit_edit(sq, CNXK_SSO_SQB_LIMIT);
+                cnxk_sso_sqb_aura_limit_edit(sq, sq->nb_sqb_bufs);
                 ret = cnxk_sso_updt_tx_queue_data(
                         event_dev, eth_dev->data->port_id, tx_queue_id, txq);
                 if (ret < 0)
                         return ret;
         }
 
+        if (ret < 0) {
+                plt_err("Failed to configure Tx adapter port=%d, q=%d",
+                        eth_dev->data->port_id, tx_queue_id);
+                return ret;
+        }
+
         return 0;
 }
 
@@ -548,12 +565,13 @@ cnxk_sso_tx_adapter_queue_del(const struct rte_eventdev *event_dev,
 {
         struct cnxk_eth_dev *cnxk_eth_dev = eth_dev->data->dev_private;
         struct roc_nix_sq *sq;
-        int i, ret;
+        int i, ret = 0;
 
         RTE_SET_USED(event_dev);
         if (tx_queue_id < 0) {
                 for (i = 0; i < eth_dev->data->nb_tx_queues; i++)
-                        cnxk_sso_tx_adapter_queue_del(event_dev, eth_dev, i);
+                        ret |= cnxk_sso_tx_adapter_queue_del(event_dev, eth_dev,
+                                                             i);
         } else {
                 sq = &cnxk_eth_dev->sqs[tx_queue_id];
                 cnxk_sso_sqb_aura_limit_edit(sq, sq->nb_sqb_bufs);
@@ -563,5 +581,11 @@ cnxk_sso_tx_adapter_queue_del(const struct rte_eventdev *event_dev,
                 return ret;
         }
 
+        if (ret < 0) {
+                plt_err("Failed to clear Tx adapter config port=%d, q=%d",
+                        eth_dev->data->port_id, tx_queue_id);
+                return ret;
+        }
+
         return 0;
 }
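
For reference, with the adjusted budget above the Tx flow-control wait reduces
to a direct comparison of the hardware-maintained in-use count against
nb_sqb_bufs_adj (the previous cn9k check instead spun while the difference,
shifted by sqes_per_sqb_log2, evaluated to zero). A minimal sketch of the new
wait, using an illustrative txq_fc struct rather than the real
cn9k_eth_txq/cn10k_eth_txq layout:

#include <stdatomic.h>
#include <stdint.h>

/* Illustrative Tx-queue flow-control view; not the driver's txq layout. */
struct txq_fc {
        uint16_t nb_sqb_bufs_adj;  /* adjusted SQB budget (70% headroom)   */
        _Atomic uint64_t *fc_mem;  /* SQBs currently in use, updated by HW */
};

/* Spin until the queue has SQB headroom before submitting more descriptors. */
static inline void
txq_fc_wait(const struct txq_fc *txq)
{
        while ((uint64_t)txq->nb_sqb_bufs_adj <=
               atomic_load_explicit(txq->fc_mem, memory_order_relaxed))
                ;
}

In the patch this wait is invoked from cn10k_sso_tx_one() just before
roc_lmt_submit_steorl(), so event-device Tx respects the same SQB budget as the
regular ethdev Tx path.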