From patchwork Sat Jul 3 22:00:18 2021
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 95247
X-Patchwork-Delegate: jerinj@marvell.com
To: Pavan Nikhilesh, Shijith Thotton, Nithin Dabilpuram, Kiran Kumar K,
 Sunil Kumar Kori, Satha Rao
Date: Sun, 4 Jul 2021 03:30:18 +0530
Message-ID: <20210703220022.1387-3-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210703220022.1387-1-pbhagavatula@marvell.com>
References: <20210702211408.777-1-pbhagavatula@marvell.com>
 <20210703220022.1387-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [PATCH v7 3/7] event/cnxk: add Tx adapter support
List-Id: DPDK patches and discussions

From: Pavan Nikhilesh

Add support for event eth Tx adapter.
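For reference, a minimal application-side sketch of how the Tx adapter added
here is typically driven. This is not part of the patch: the helper names
(setup_tx_adapter, worker_tx), the adapter id, the use of Tx queue 0, and the
assumption that the event device, event port and ethdev are already configured
are illustrative only.

#include <errno.h>
#include <rte_event_eth_tx_adapter.h>
#include <rte_eventdev.h>
#include <rte_mbuf.h>

static int
setup_tx_adapter(uint8_t adptr_id, uint8_t evdev_id, uint16_t eth_port,
		 struct rte_event_port_conf *port_conf)
{
	uint32_t caps = 0;
	int rc;

	/* Create the adapter and bind every Tx queue of the port (-1). */
	rc = rte_event_eth_tx_adapter_create(adptr_id, evdev_id, port_conf);
	if (rc)
		return rc;
	rc = rte_event_eth_tx_adapter_queue_add(adptr_id, eth_port, -1);
	if (rc)
		return rc;

	/* event/cnxk reports INTERNAL_PORT, so no service core is needed and
	 * workers can enqueue Tx events directly.
	 */
	rc = rte_event_eth_tx_adapter_caps_get(evdev_id, eth_port, &caps);
	if (rc || !(caps & RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT))
		return rc ? rc : -ENOTSUP;

	return rte_event_eth_tx_adapter_start(adptr_id);
}

/* Worker fast path: tag the mbuf with its Tx queue and hand the event to the
 * adapter; retried until the single event is accepted.
 */
static inline void
worker_tx(uint8_t evdev_id, uint8_t ev_port, struct rte_event *ev)
{
	rte_event_eth_tx_adapter_txq_set(ev->mbuf, 0 /* Tx queue */);
	while (rte_event_eth_tx_adapter_enqueue(evdev_id, ev_port, ev, 1, 0) !=
	       1)
		;
}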
Signed-off-by: Pavan Nikhilesh Acked-by: Nithin Dabilpuram --- doc/guides/eventdevs/cnxk.rst | 4 +- doc/guides/rel_notes/release_21_08.rst | 6 +- drivers/common/cnxk/roc_nix.h | 1 + drivers/common/cnxk/roc_nix_queue.c | 8 +- drivers/event/cnxk/cn10k_eventdev.c | 91 ++++++++++++++ drivers/event/cnxk/cn9k_eventdev.c | 148 +++++++++++++++++++++++ drivers/event/cnxk/cnxk_eventdev.h | 22 +++- drivers/event/cnxk/cnxk_eventdev_adptr.c | 88 ++++++++++++++ 8 files changed, 359 insertions(+), 9 deletions(-) -- 2.17.1 diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst index b7e82c127..6fdccc2ab 100644 --- a/doc/guides/eventdevs/cnxk.rst +++ b/doc/guides/eventdevs/cnxk.rst @@ -42,7 +42,9 @@ Features of the OCTEON cnxk SSO PMD are: - HW managed packets enqueued from ethdev to eventdev exposed through event eth RX adapter. - N:1 ethernet device Rx queue to Event queue mapping. -- Full Rx offload support defined through ethdev queue configuration. +- Lockfree Tx from event eth Tx adapter using ``DEV_TX_OFFLOAD_MT_LOCKFREE`` + capability while maintaining receive packet order. +- Full Rx/Tx offload support defined through ethdev queue configuration. Prerequisites and Compilation procedure --------------------------------------- diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst index 3892c8017..80ff93269 100644 --- a/doc/guides/rel_notes/release_21_08.rst +++ b/doc/guides/rel_notes/release_21_08.rst @@ -60,10 +60,10 @@ New Features * Added net/cnxk driver which provides the support for the integrated ethernet device. -* **Added support for Marvell CN10K, CN9K, event Rx adapter.** +* **Added support for Marvell CN10K, CN9K, event Rx/Tx adapter.** - * Added Rx adapter support for event/cnxk when the ethernet device requested is - net/cnxk. + * Added Rx/Tx adapter support for event/cnxk when the ethernet device requested + is net/cnxk. 
Removed Items diff --git a/drivers/common/cnxk/roc_nix.h b/drivers/common/cnxk/roc_nix.h index 76613fe84..822c1900e 100644 --- a/drivers/common/cnxk/roc_nix.h +++ b/drivers/common/cnxk/roc_nix.h @@ -200,6 +200,7 @@ struct roc_nix_sq { uint64_t aura_handle; int16_t nb_sqb_bufs_adj; uint16_t nb_sqb_bufs; + uint16_t aura_sqb_bufs; plt_iova_t io_addr; void *lmt_addr; void *sqe_mem; diff --git a/drivers/common/cnxk/roc_nix_queue.c b/drivers/common/cnxk/roc_nix_queue.c index 0604e7a18..7e2f86eca 100644 --- a/drivers/common/cnxk/roc_nix_queue.c +++ b/drivers/common/cnxk/roc_nix_queue.c @@ -587,12 +587,12 @@ sqb_pool_populate(struct roc_nix *roc_nix, struct roc_nix_sq *sq) aura.fc_ena = 1; aura.fc_addr = (uint64_t)sq->fc; aura.fc_hyst_bits = 0; /* Store count on all updates */ - rc = roc_npa_pool_create(&sq->aura_handle, blk_sz, nb_sqb_bufs, &aura, + rc = roc_npa_pool_create(&sq->aura_handle, blk_sz, NIX_MAX_SQB, &aura, &pool); if (rc) goto fail; - sq->sqe_mem = plt_zmalloc(blk_sz * nb_sqb_bufs, blk_sz); + sq->sqe_mem = plt_zmalloc(blk_sz * NIX_MAX_SQB, blk_sz); if (sq->sqe_mem == NULL) { rc = NIX_ERR_NO_MEM; goto nomem; @@ -600,11 +600,13 @@ sqb_pool_populate(struct roc_nix *roc_nix, struct roc_nix_sq *sq) /* Fill the initial buffers */ iova = (uint64_t)sq->sqe_mem; - for (count = 0; count < nb_sqb_bufs; count++) { + for (count = 0; count < NIX_MAX_SQB; count++) { roc_npa_aura_op_free(sq->aura_handle, 0, iova); iova += blk_sz; } roc_npa_aura_op_range_set(sq->aura_handle, (uint64_t)sq->sqe_mem, iova); + roc_npa_aura_limit_modify(sq->aura_handle, sq->nb_sqb_bufs); + sq->aura_sqb_bufs = NIX_MAX_SQB; return rc; nomem: diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c index ba7d95fff..8a9b04a3d 100644 --- a/drivers/event/cnxk/cn10k_eventdev.c +++ b/drivers/event/cnxk/cn10k_eventdev.c @@ -44,6 +44,7 @@ cn10k_sso_init_hws_mem(void *arg, uint8_t port_id) /* First cache line is reserved for cookie */ ws = (struct cn10k_sso_hws *)((uint8_t *)ws + RTE_CACHE_LINE_SIZE); ws->base = roc_sso_hws_base_get(&dev->sso, port_id); + ws->tx_base = ws->base; ws->hws_id = port_id; ws->swtag_req = 0; ws->gw_wdata = cn10k_sso_gw_mode_wdata(dev); @@ -233,6 +234,39 @@ cn10k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp) return roc_sso_rsrc_init(&dev->sso, hws, hwgrp); } +static int +cn10k_sso_updt_tx_adptr_data(const struct rte_eventdev *event_dev) +{ + struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev); + int i; + + if (dev->tx_adptr_data == NULL) + return 0; + + for (i = 0; i < dev->nb_event_ports; i++) { + struct cn10k_sso_hws *ws = event_dev->data->ports[i]; + void *ws_cookie; + + ws_cookie = cnxk_sso_hws_get_cookie(ws); + ws_cookie = rte_realloc_socket( + ws_cookie, + sizeof(struct cnxk_sso_hws_cookie) + + sizeof(struct cn10k_sso_hws) + + (sizeof(uint64_t) * (dev->max_port_id + 1) * + RTE_MAX_QUEUES_PER_PORT), + RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY); + if (ws_cookie == NULL) + return -ENOMEM; + ws = RTE_PTR_ADD(ws_cookie, sizeof(struct cnxk_sso_hws_cookie)); + memcpy(&ws->tx_adptr_data, dev->tx_adptr_data, + sizeof(uint64_t) * (dev->max_port_id + 1) * + RTE_MAX_QUEUES_PER_PORT); + event_dev->data->ports[i] = ws; + } + + return 0; +} + static void cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev) { @@ -493,6 +527,10 @@ cn10k_sso_start(struct rte_eventdev *event_dev) { int rc; + rc = cn10k_sso_updt_tx_adptr_data(event_dev); + if (rc < 0) + return rc; + rc = cnxk_sso_start(event_dev, cn10k_sso_hws_reset, cn10k_sso_hws_flush_events); if (rc < 0) @@ -595,6 +633,55 @@ 
cn10k_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev, return cnxk_sso_rx_adapter_queue_del(event_dev, eth_dev, rx_queue_id); } +static int +cn10k_sso_tx_adapter_caps_get(const struct rte_eventdev *dev, + const struct rte_eth_dev *eth_dev, uint32_t *caps) +{ + int ret; + + RTE_SET_USED(dev); + ret = strncmp(eth_dev->device->driver->name, "net_cn10k", 8); + if (ret) + *caps = 0; + else + *caps = RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT; + + return 0; +} + +static int +cn10k_sso_tx_adapter_queue_add(uint8_t id, const struct rte_eventdev *event_dev, + const struct rte_eth_dev *eth_dev, + int32_t tx_queue_id) +{ + int rc; + + RTE_SET_USED(id); + rc = cnxk_sso_tx_adapter_queue_add(event_dev, eth_dev, tx_queue_id); + if (rc < 0) + return rc; + rc = cn10k_sso_updt_tx_adptr_data(event_dev); + if (rc < 0) + return rc; + cn10k_sso_fp_fns_set((struct rte_eventdev *)(uintptr_t)event_dev); + + return 0; +} + +static int +cn10k_sso_tx_adapter_queue_del(uint8_t id, const struct rte_eventdev *event_dev, + const struct rte_eth_dev *eth_dev, + int32_t tx_queue_id) +{ + int rc; + + RTE_SET_USED(id); + rc = cnxk_sso_tx_adapter_queue_del(event_dev, eth_dev, tx_queue_id); + if (rc < 0) + return rc; + return cn10k_sso_updt_tx_adptr_data(event_dev); +} + static struct rte_eventdev_ops cn10k_sso_dev_ops = { .dev_infos_get = cn10k_sso_info_get, .dev_configure = cn10k_sso_dev_configure, @@ -614,6 +701,10 @@ static struct rte_eventdev_ops cn10k_sso_dev_ops = { .eth_rx_adapter_start = cnxk_sso_rx_adapter_start, .eth_rx_adapter_stop = cnxk_sso_rx_adapter_stop, + .eth_tx_adapter_caps_get = cn10k_sso_tx_adapter_caps_get, + .eth_tx_adapter_queue_add = cn10k_sso_tx_adapter_queue_add, + .eth_tx_adapter_queue_del = cn10k_sso_tx_adapter_queue_del, + .timer_adapter_caps_get = cnxk_tim_caps_get, .dump = cnxk_sso_dump, diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c index e386cb784..21f80323d 100644 --- a/drivers/event/cnxk/cn9k_eventdev.c +++ b/drivers/event/cnxk/cn9k_eventdev.c @@ -248,6 +248,66 @@ cn9k_sso_rsrc_init(void *arg, uint8_t hws, uint8_t hwgrp) return roc_sso_rsrc_init(&dev->sso, hws, hwgrp); } +static int +cn9k_sso_updt_tx_adptr_data(const struct rte_eventdev *event_dev) +{ + struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev); + int i; + + if (dev->tx_adptr_data == NULL) + return 0; + + for (i = 0; i < dev->nb_event_ports; i++) { + if (dev->dual_ws) { + struct cn9k_sso_hws_dual *dws = + event_dev->data->ports[i]; + void *ws_cookie; + + ws_cookie = cnxk_sso_hws_get_cookie(dws); + ws_cookie = rte_realloc_socket( + ws_cookie, + sizeof(struct cnxk_sso_hws_cookie) + + sizeof(struct cn9k_sso_hws_dual) + + (sizeof(uint64_t) * + (dev->max_port_id + 1) * + RTE_MAX_QUEUES_PER_PORT), + RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY); + if (ws_cookie == NULL) + return -ENOMEM; + dws = RTE_PTR_ADD(ws_cookie, + sizeof(struct cnxk_sso_hws_cookie)); + memcpy(&dws->tx_adptr_data, dev->tx_adptr_data, + sizeof(uint64_t) * (dev->max_port_id + 1) * + RTE_MAX_QUEUES_PER_PORT); + event_dev->data->ports[i] = dws; + } else { + struct cn9k_sso_hws *ws = event_dev->data->ports[i]; + void *ws_cookie; + + ws_cookie = cnxk_sso_hws_get_cookie(ws); + ws_cookie = rte_realloc_socket( + ws_cookie, + sizeof(struct cnxk_sso_hws_cookie) + + sizeof(struct cn9k_sso_hws_dual) + + (sizeof(uint64_t) * + (dev->max_port_id + 1) * + RTE_MAX_QUEUES_PER_PORT), + RTE_CACHE_LINE_SIZE, SOCKET_ID_ANY); + if (ws_cookie == NULL) + return -ENOMEM; + ws = RTE_PTR_ADD(ws_cookie, + sizeof(struct cnxk_sso_hws_cookie)); + 
memcpy(&ws->tx_adptr_data, dev->tx_adptr_data, + sizeof(uint64_t) * (dev->max_port_id + 1) * + RTE_MAX_QUEUES_PER_PORT); + event_dev->data->ports[i] = ws; + } + } + rte_mb(); + + return 0; +} + static void cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) { @@ -734,6 +794,10 @@ cn9k_sso_start(struct rte_eventdev *event_dev) { int rc; + rc = cn9k_sso_updt_tx_adptr_data(event_dev); + if (rc < 0) + return rc; + rc = cnxk_sso_start(event_dev, cn9k_sso_hws_reset, cn9k_sso_hws_flush_events); if (rc < 0) @@ -844,6 +908,86 @@ cn9k_sso_rx_adapter_queue_del(const struct rte_eventdev *event_dev, return cnxk_sso_rx_adapter_queue_del(event_dev, eth_dev, rx_queue_id); } +static int +cn9k_sso_tx_adapter_caps_get(const struct rte_eventdev *dev, + const struct rte_eth_dev *eth_dev, uint32_t *caps) +{ + int ret; + + RTE_SET_USED(dev); + ret = strncmp(eth_dev->device->driver->name, "net_cn9k", 8); + if (ret) + *caps = 0; + else + *caps = RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT; + + return 0; +} + +static void +cn9k_sso_txq_fc_update(const struct rte_eth_dev *eth_dev, int32_t tx_queue_id, + bool ena) +{ + struct cnxk_eth_dev *cnxk_eth_dev = eth_dev->data->dev_private; + struct cn9k_eth_txq *txq; + struct roc_nix_sq *sq; + int i; + + if (tx_queue_id < 0) { + for (i = 0; i < eth_dev->data->nb_tx_queues; i++) + cn9k_sso_txq_fc_update(eth_dev, i, ena); + } else { + uint16_t sq_limit; + + sq = &cnxk_eth_dev->sqs[tx_queue_id]; + txq = eth_dev->data->tx_queues[tx_queue_id]; + sq_limit = + ena ? RTE_MIN(CNXK_SSO_SQB_LIMIT, sq->aura_sqb_bufs) : + sq->nb_sqb_bufs; + txq->nb_sqb_bufs_adj = + sq_limit - + RTE_ALIGN_MUL_CEIL(sq_limit, + (1ULL << txq->sqes_per_sqb_log2)) / + (1ULL << txq->sqes_per_sqb_log2); + txq->nb_sqb_bufs_adj = (70 * txq->nb_sqb_bufs_adj) / 100; + } +} + +static int +cn9k_sso_tx_adapter_queue_add(uint8_t id, const struct rte_eventdev *event_dev, + const struct rte_eth_dev *eth_dev, + int32_t tx_queue_id) +{ + int rc; + + RTE_SET_USED(id); + rc = cnxk_sso_tx_adapter_queue_add(event_dev, eth_dev, tx_queue_id); + if (rc < 0) + return rc; + cn9k_sso_txq_fc_update(eth_dev, tx_queue_id, true); + rc = cn9k_sso_updt_tx_adptr_data(event_dev); + if (rc < 0) + return rc; + cn9k_sso_fp_fns_set((struct rte_eventdev *)(uintptr_t)event_dev); + + return 0; +} + +static int +cn9k_sso_tx_adapter_queue_del(uint8_t id, const struct rte_eventdev *event_dev, + const struct rte_eth_dev *eth_dev, + int32_t tx_queue_id) +{ + int rc; + + RTE_SET_USED(id); + rc = cnxk_sso_tx_adapter_queue_del(event_dev, eth_dev, tx_queue_id); + if (rc < 0) + return rc; + cn9k_sso_txq_fc_update(eth_dev, tx_queue_id, false); + return cn9k_sso_updt_tx_adptr_data(event_dev); +} + static struct rte_eventdev_ops cn9k_sso_dev_ops = { .dev_infos_get = cn9k_sso_info_get, .dev_configure = cn9k_sso_dev_configure, @@ -863,6 +1007,10 @@ static struct rte_eventdev_ops cn9k_sso_dev_ops = { .eth_rx_adapter_start = cnxk_sso_rx_adapter_start, .eth_rx_adapter_stop = cnxk_sso_rx_adapter_stop, + .eth_tx_adapter_caps_get = cn9k_sso_tx_adapter_caps_get, + .eth_tx_adapter_queue_add = cn9k_sso_tx_adapter_queue_add, + .eth_tx_adapter_queue_del = cn9k_sso_tx_adapter_queue_del, + .timer_adapter_caps_get = cnxk_tim_caps_get, .dump = cnxk_sso_dump, diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h index 9d5d2d033..24e1be6a9 100644 --- a/drivers/event/cnxk/cnxk_eventdev.h +++ b/drivers/event/cnxk/cnxk_eventdev.h @@ -8,6 +8,7 @@ #include #include #include +#include #include #include #include @@ -34,6 +35,7 @@ #define 
CNXK_SSO_XAQ_CACHE_CNT (0x7) #define CNXK_SSO_XAQ_SLACK (8) #define CNXK_SSO_WQE_SG_PTR (9) +#define CNXK_SSO_SQB_LIMIT (0x180) #define CNXK_TT_FROM_TAG(x) (((x) >> 32) & SSO_TT_EMPTY) #define CNXK_TT_FROM_EVENT(x) (((x) >> 38) & SSO_TT_EMPTY) @@ -86,9 +88,12 @@ struct cnxk_sso_evdev { rte_iova_t fc_iova; struct rte_mempool *xaq_pool; uint64_t rx_offloads; + uint64_t tx_offloads; uint64_t adptr_xae_cnt; uint16_t rx_adptr_pool_cnt; uint64_t *rx_adptr_pools; + uint64_t *tx_adptr_data; + uint16_t max_port_id; uint16_t tim_adptr_ring_cnt; uint16_t *timer_adptr_rings; uint64_t *timer_adptr_sz; @@ -115,7 +120,10 @@ struct cn10k_sso_hws { uint64_t xaq_lmt __rte_cache_aligned; uint64_t *fc_mem; uintptr_t grps_base[CNXK_SSO_MAX_HWGRP]; + /* Tx Fastpath data */ + uint64_t tx_base __rte_cache_aligned; uintptr_t lmt_base; + uint8_t tx_adptr_data[]; } __rte_cache_aligned; /* CN9K HWS ops */ @@ -140,7 +148,9 @@ struct cn9k_sso_hws { uint64_t xaq_lmt __rte_cache_aligned; uint64_t *fc_mem; uintptr_t grps_base[CNXK_SSO_MAX_HWGRP]; - uint64_t base; + /* Tx Fastpath data */ + uint64_t base __rte_cache_aligned; + uint8_t tx_adptr_data[]; } __rte_cache_aligned; struct cn9k_sso_hws_state { @@ -160,7 +170,9 @@ struct cn9k_sso_hws_dual { uint64_t xaq_lmt __rte_cache_aligned; uint64_t *fc_mem; uintptr_t grps_base[CNXK_SSO_MAX_HWGRP]; - uint64_t base[2]; + /* Tx Fastpath data */ + uint64_t base[2] __rte_cache_aligned; + uint8_t tx_adptr_data[]; } __rte_cache_aligned; struct cnxk_sso_hws_cookie { @@ -267,5 +279,11 @@ int cnxk_sso_rx_adapter_start(const struct rte_eventdev *event_dev, const struct rte_eth_dev *eth_dev); int cnxk_sso_rx_adapter_stop(const struct rte_eventdev *event_dev, const struct rte_eth_dev *eth_dev); +int cnxk_sso_tx_adapter_queue_add(const struct rte_eventdev *event_dev, + const struct rte_eth_dev *eth_dev, + int32_t tx_queue_id); +int cnxk_sso_tx_adapter_queue_del(const struct rte_eventdev *event_dev, + const struct rte_eth_dev *eth_dev, + int32_t tx_queue_id); #endif /* __CNXK_EVENTDEV_H__ */ diff --git a/drivers/event/cnxk/cnxk_eventdev_adptr.c b/drivers/event/cnxk/cnxk_eventdev_adptr.c index 3b7ecb375..502da272d 100644 --- a/drivers/event/cnxk/cnxk_eventdev_adptr.c +++ b/drivers/event/cnxk/cnxk_eventdev_adptr.c @@ -223,3 +223,91 @@ cnxk_sso_rx_adapter_stop(const struct rte_eventdev *event_dev, return 0; } + +static int +cnxk_sso_sqb_aura_limit_edit(struct roc_nix_sq *sq, uint16_t nb_sqb_bufs) +{ + return roc_npa_aura_limit_modify( + sq->aura_handle, RTE_MIN(nb_sqb_bufs, sq->aura_sqb_bufs)); +} + +static int +cnxk_sso_updt_tx_queue_data(const struct rte_eventdev *event_dev, + uint16_t eth_port_id, uint16_t tx_queue_id, + void *txq) +{ + struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev); + uint16_t max_port_id = dev->max_port_id; + uint64_t *txq_data = dev->tx_adptr_data; + + if (txq_data == NULL || eth_port_id > max_port_id) { + max_port_id = RTE_MAX(max_port_id, eth_port_id); + txq_data = rte_realloc_socket( + txq_data, + (sizeof(uint64_t) * (max_port_id + 1) * + RTE_MAX_QUEUES_PER_PORT), + RTE_CACHE_LINE_SIZE, event_dev->data->socket_id); + if (txq_data == NULL) + return -ENOMEM; + } + + ((uint64_t(*)[RTE_MAX_QUEUES_PER_PORT]) + txq_data)[eth_port_id][tx_queue_id] = (uint64_t)txq; + dev->max_port_id = max_port_id; + dev->tx_adptr_data = txq_data; + return 0; +} + +int +cnxk_sso_tx_adapter_queue_add(const struct rte_eventdev *event_dev, + const struct rte_eth_dev *eth_dev, + int32_t tx_queue_id) +{ + struct cnxk_eth_dev *cnxk_eth_dev = eth_dev->data->dev_private; + struct 
cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev); + struct roc_nix_sq *sq; + int i, ret; + void *txq; + + if (tx_queue_id < 0) { + for (i = 0; i < eth_dev->data->nb_tx_queues; i++) + cnxk_sso_tx_adapter_queue_add(event_dev, eth_dev, i); + } else { + txq = eth_dev->data->tx_queues[tx_queue_id]; + sq = &cnxk_eth_dev->sqs[tx_queue_id]; + cnxk_sso_sqb_aura_limit_edit(sq, CNXK_SSO_SQB_LIMIT); + ret = cnxk_sso_updt_tx_queue_data( + event_dev, eth_dev->data->port_id, tx_queue_id, txq); + if (ret < 0) + return ret; + + dev->tx_offloads |= cnxk_eth_dev->tx_offload_flags; + } + + return 0; +} + +int +cnxk_sso_tx_adapter_queue_del(const struct rte_eventdev *event_dev, + const struct rte_eth_dev *eth_dev, + int32_t tx_queue_id) +{ + struct cnxk_eth_dev *cnxk_eth_dev = eth_dev->data->dev_private; + struct roc_nix_sq *sq; + int i, ret; + + RTE_SET_USED(event_dev); + if (tx_queue_id < 0) { + for (i = 0; i < eth_dev->data->nb_tx_queues; i++) + cnxk_sso_tx_adapter_queue_del(event_dev, eth_dev, i); + } else { + sq = &cnxk_eth_dev->sqs[tx_queue_id]; + cnxk_sso_sqb_aura_limit_edit(sq, sq->nb_sqb_bufs); + ret = cnxk_sso_updt_tx_queue_data( + event_dev, eth_dev->data->port_id, tx_queue_id, NULL); + if (ret < 0) + return ret; + } + + return 0; +}
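The lookup table maintained by cnxk_sso_updt_tx_queue_data() above is a flat
uint64_t array of (max_port_id + 1) * RTE_MAX_QUEUES_PER_PORT slots, grown with
rte_realloc_socket() whenever a higher ethdev port id is added; each HWS then
gets its own copy appended behind the hws structure (the tx_adptr_data[]
flexible member) so the Tx fast path can turn a (port, queue) pair into a txq
pointer from memory local to that HWS. A standalone sketch of the addressing,
with QUEUES_PER_PORT standing in for RTE_MAX_QUEUES_PER_PORT; the value is
illustrative, not taken from the driver:

#include <stddef.h>
#include <stdint.h>

#define QUEUES_PER_PORT 1024 /* stands in for RTE_MAX_QUEUES_PER_PORT */

/* Equivalent to the cast used in cnxk_sso_updt_tx_queue_data():
 * ((uint64_t (*)[QUEUES_PER_PORT])table)[port][queue]
 */
static inline uint64_t *
txq_slot(uint64_t *table, uint16_t port, uint16_t queue)
{
	return &table[(size_t)port * QUEUES_PER_PORT + queue];
}

Keeping a per-HWS copy trades a little memory for avoiding a shared-pointer
dereference on every Tx burst, which appears to be why both cn9k and cn10k
re-realloc the port cookies in their *_updt_tx_adptr_data() helpers whenever
the table grows.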
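cn9k_sso_txq_fc_update() above derives the Tx queue's flow-control threshold
(nb_sqb_bufs_adj) from the SQB limit. A standalone restatement of that
arithmetic; the figures used here (sq_limit = 384, i.e. CNXK_SSO_SQB_LIMIT,
and an assumed 32 SQEs per SQB) are for illustration only:
384 - ceil(384 / 32) = 372, and 70% of that leaves 260 SQBs before the queue
is treated as full.

#include <stdint.h>

static inline uint16_t
sqb_bufs_adj(uint16_t sq_limit, uint8_t sqes_per_sqb_log2)
{
	uint32_t sqes_per_sqb = 1u << sqes_per_sqb_log2;
	/* ceil(sq_limit / sqes_per_sqb), i.e. what
	 * RTE_ALIGN_MUL_CEIL(sq_limit, sqes_per_sqb) / sqes_per_sqb computes;
	 * assumed to account for per-SQB chaining overhead.
	 */
	uint32_t reserved = (sq_limit + sqes_per_sqb - 1) / sqes_per_sqb;

	/* Keep 70% of the remainder as the flow-control threshold. */
	return (uint16_t)((70u * (sq_limit - reserved)) / 100u);
}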