From patchwork Mon Aug 23 19:40:11 2021
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 97246
X-Patchwork-Delegate: jerinj@marvell.com
To: Pavan Nikhilesh, Shijith Thotton, Timothy McDaniel, Hemant Agrawal,
    Nipun Gupta, Mattias Rönnblom, Liang Ma, Peter Mccarthy,
    Harry van Haaren
Date: Tue, 24 Aug 2021 01:10:11 +0530
Message-ID: <20210823194020.1229-7-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210823194020.1229-1-pbhagavatula@marvell.com>
References: <20210823194020.1229-1-pbhagavatula@marvell.com>
Subject: [dpdk-dev] [RFC 07/15] eventdev: make drivers use new API
List-Id: DPDK patches and discussions

From: Pavan Nikhilesh

Make drivers use the new API for all enqueue and dequeue paths.
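The change is mechanical but touches every driver, so a small self-contained
sketch of the calling convention the drivers are converted to may help before
reading the diff. Everything in the sketch is an illustrative stand-in modelled
on identifiers that appear in the diff (the per-device rte_eventdev_api[]
table, the (dev_id, port_id) fast-path arguments, _rte_event_dev_prolog());
it is not the real eventdev library code, and the real struct layouts differ.
The point it demonstrates: fast-path callbacks now take a device id and a port
id and live in a per-device API table, and each PMD recovers its private port
object through a prolog helper instead of receiving a void *port directly.

/* Stand-alone, compilable sketch only -- stand-ins, not the real API. */
#include <stdint.h>
#include <stdio.h>

struct rte_event { uint64_t u64; };     /* stand-in; real layout differs */

/* New-style fast-path prototype: (dev_id, port_id) instead of void *port. */
typedef uint16_t (*enqueue_t)(uint8_t dev_id, uint8_t port_id,
                              const struct rte_event *ev);

/* Per-device table of fast-path pointers, indexed by dev_id
 * (stand-in for struct rte_eventdev_api / rte_eventdev_api[]). */
struct eventdev_api {
        enqueue_t enqueue;
        void *ports[8];                 /* driver port objects */
};
static struct eventdev_api eventdev_api[2];

/* Stand-in for _rte_event_dev_prolog(): map ids back to the PMD's port. */
static inline void *prolog(uint8_t dev_id, uint8_t port_id)
{
        return eventdev_api[dev_id].ports[port_id];
}

/* Reworked PMD enqueue: recover the port object first, then proceed. */
static uint16_t pmd_enq(uint8_t dev_id, uint8_t port_id,
                        const struct rte_event *ev)
{
        void *port = prolog(dev_id, port_id);   /* was the old argument */

        (void)ev;
        printf("enqueue one event on port %p\n", port);
        return 1;
}

int main(void)
{
        static int dummy_port;
        struct rte_event ev = { .u64 = 0xcafe };

        /* What a PMD's fp_fns_set() now does: fill the per-device table. */
        eventdev_api[0].ports[0] = &dummy_port;
        eventdev_api[0].enqueue = pmd_enq;

        /* What the application-facing inline wrapper now does. */
        return eventdev_api[0].enqueue(0, 0, &ev) == 1 ? 0 : 1;
}

With that shape in mind, the per-driver changes below reduce to widening every
fast-path prototype to take (dev_id, port_id) and pointing the API-table
entries at the reworked functions.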
Signed-off-by: Pavan Nikhilesh Acked-by: Hemant Agrawal --- drivers/event/cnxk/cn10k_eventdev.c | 63 ++++--- drivers/event/cnxk/cn10k_worker.c | 22 +-- drivers/event/cnxk/cn10k_worker.h | 49 ++--- drivers/event/cnxk/cn10k_worker_deq.c | 8 +- drivers/event/cnxk/cn10k_worker_deq_burst.c | 14 +- drivers/event/cnxk/cn10k_worker_deq_tmo.c | 21 ++- drivers/event/cnxk/cn10k_worker_tx_enq.c | 4 +- drivers/event/cnxk/cn10k_worker_tx_enq_seg.c | 4 +- drivers/event/cnxk/cn9k_eventdev.c | 168 +++++++++--------- drivers/event/cnxk/cn9k_worker.c | 45 ++--- drivers/event/cnxk/cn9k_worker.h | 87 +++++---- drivers/event/cnxk/cn9k_worker_deq.c | 8 +- drivers/event/cnxk/cn9k_worker_deq_burst.c | 14 +- drivers/event/cnxk/cn9k_worker_deq_tmo.c | 21 ++- drivers/event/cnxk/cn9k_worker_dual_deq.c | 8 +- .../event/cnxk/cn9k_worker_dual_deq_burst.c | 13 +- drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c | 22 ++- drivers/event/cnxk/cn9k_worker_dual_tx_enq.c | 4 +- .../event/cnxk/cn9k_worker_dual_tx_enq_seg.c | 4 +- drivers/event/cnxk/cn9k_worker_tx_enq.c | 4 +- drivers/event/cnxk/cn9k_worker_tx_enq_seg.c | 4 +- drivers/event/dlb2/dlb2.c | 77 ++++++-- drivers/event/dpaa/dpaa_eventdev.c | 45 ++++- drivers/event/dpaa2/dpaa2_eventdev.c | 47 ++++- drivers/event/dsw/dsw_evdev.c | 28 ++- drivers/event/octeontx/ssovf_evdev.h | 14 +- drivers/event/octeontx/ssovf_worker.c | 110 +++++++----- drivers/event/octeontx2/otx2_evdev.c | 111 ++++++------ drivers/event/octeontx2/otx2_evdev.h | 151 ++++++++-------- .../octeontx2/otx2_evdev_crypto_adptr_tx.h | 10 +- drivers/event/octeontx2/otx2_worker.c | 88 +++++---- drivers/event/octeontx2/otx2_worker_dual.c | 92 ++++++---- drivers/event/opdl/opdl_evdev.c | 28 ++- drivers/event/skeleton/skeleton_eventdev.c | 37 +++- drivers/event/sw/sw_evdev.c | 29 ++- 35 files changed, 885 insertions(+), 569 deletions(-) diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c index 697b134041..5dfebc5e54 100644 --- a/drivers/event/cnxk/cn10k_eventdev.c +++ b/drivers/event/cnxk/cn10k_eventdev.c @@ -271,56 +271,61 @@ static void cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev) { struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev); - const event_dequeue_t sso_hws_deq[2][2][2][2][2][2] = { + struct rte_eventdev_api *api; + + api = &rte_eventdev_api[event_dev->data->dev_id]; + const rte_event_dequeue_t sso_hws_deq[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_##name, NIX_RX_FASTPATH_MODES #undef R }; - const event_dequeue_burst_t sso_hws_deq_burst[2][2][2][2][2][2] = { + const rte_event_dequeue_burst_t sso_hws_deq_burst[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_burst_##name, NIX_RX_FASTPATH_MODES #undef R }; - const event_dequeue_t sso_hws_tmo_deq[2][2][2][2][2][2] = { + const rte_event_dequeue_t sso_hws_tmo_deq[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_##name, NIX_RX_FASTPATH_MODES #undef R }; - const event_dequeue_burst_t sso_hws_tmo_deq_burst[2][2][2][2][2][2] = { + const rte_event_dequeue_burst_t + sso_hws_tmo_deq_burst[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_burst_##name, - NIX_RX_FASTPATH_MODES + NIX_RX_FASTPATH_MODES #undef R - }; + }; - const event_dequeue_t sso_hws_deq_seg[2][2][2][2][2][2] = { + const rte_event_dequeue_t sso_hws_deq_seg[2][2][2][2][2][2] = { 
#define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_seg_##name, NIX_RX_FASTPATH_MODES #undef R }; - const event_dequeue_burst_t sso_hws_deq_seg_burst[2][2][2][2][2][2] = { + const rte_event_dequeue_burst_t + sso_hws_deq_seg_burst[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_seg_burst_##name, - NIX_RX_FASTPATH_MODES + NIX_RX_FASTPATH_MODES #undef R - }; + }; - const event_dequeue_t sso_hws_tmo_deq_seg[2][2][2][2][2][2] = { + const rte_event_dequeue_t sso_hws_tmo_deq_seg[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_seg_##name, NIX_RX_FASTPATH_MODES #undef R }; - const event_dequeue_burst_t + const rte_event_dequeue_burst_t sso_hws_tmo_deq_seg_burst[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_deq_tmo_seg_burst_##name, @@ -329,7 +334,7 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev) }; /* Tx modes */ - const event_tx_adapter_enqueue + const rte_event_tx_adapter_enqueue_t sso_hws_tx_adptr_enq[2][2][2][2][2][2] = { #define T(name, f5, f4, f3, f2, f1, f0, sz, flags) \ [f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_tx_adptr_enq_##name, @@ -337,7 +342,7 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev) #undef T }; - const event_tx_adapter_enqueue + const rte_event_tx_adapter_enqueue_t sso_hws_tx_adptr_enq_seg[2][2][2][2][2][2] = { #define T(name, f5, f4, f3, f2, f1, f0, sz, flags) \ [f5][f4][f3][f2][f1][f0] = cn10k_sso_hws_tx_adptr_enq_seg_##name, @@ -345,19 +350,19 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev) #undef T }; - event_dev->enqueue = cn10k_sso_hws_enq; - event_dev->enqueue_burst = cn10k_sso_hws_enq_burst; - event_dev->enqueue_new_burst = cn10k_sso_hws_enq_new_burst; - event_dev->enqueue_forward_burst = cn10k_sso_hws_enq_fwd_burst; + api->enqueue = cn10k_sso_hws_enq; + api->enqueue_burst = cn10k_sso_hws_enq_burst; + api->enqueue_new_burst = cn10k_sso_hws_enq_new_burst; + api->enqueue_forward_burst = cn10k_sso_hws_enq_fwd_burst; if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) { - event_dev->dequeue = sso_hws_deq_seg + api->dequeue = sso_hws_deq_seg [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = sso_hws_deq_seg_burst + api->dequeue_burst = sso_hws_deq_seg_burst [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] @@ -365,7 +370,7 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev) [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; if (dev->is_timeout_deq) { - event_dev->dequeue = sso_hws_tmo_deq_seg + api->dequeue = sso_hws_tmo_deq_seg [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] @@ -375,7 +380,7 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev) NIX_RX_OFFLOAD_CHECKSUM_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = sso_hws_tmo_deq_seg_burst + api->dequeue_burst = sso_hws_tmo_deq_seg_burst [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] 
[!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] @@ -387,14 +392,14 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev) [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; } } else { - event_dev->dequeue = sso_hws_deq + api->dequeue = sso_hws_deq [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = sso_hws_deq_burst + api->dequeue_burst = sso_hws_deq_burst [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] @@ -402,7 +407,7 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev) [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; if (dev->is_timeout_deq) { - event_dev->dequeue = sso_hws_tmo_deq + api->dequeue = sso_hws_tmo_deq [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] @@ -412,7 +417,7 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev) NIX_RX_OFFLOAD_CHECKSUM_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = sso_hws_tmo_deq_burst + api->dequeue_burst = sso_hws_tmo_deq_burst [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] @@ -427,7 +432,7 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev) if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) { /* [SEC] [TSMP] [MBUF_NOFF] [VLAN] [OL3_L4_CSUM] [L3_L4_CSUM] */ - event_dev->txa_enqueue = sso_hws_tx_adptr_enq_seg + api->txa_enqueue = sso_hws_tx_adptr_enq_seg [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)] @@ -435,7 +440,7 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev) [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; } else { - event_dev->txa_enqueue = sso_hws_tx_adptr_enq + api->txa_enqueue = sso_hws_tx_adptr_enq [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)] @@ -444,7 +449,7 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev) [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; } - event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue; + api->txa_enqueue_same_dest = api->txa_enqueue; } static void diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c index c71aa37327..a43ca9f524 100644 --- a/drivers/event/cnxk/cn10k_worker.c +++ b/drivers/event/cnxk/cn10k_worker.c @@ -7,9 +7,9 @@ #include "cnxk_worker.h" uint16_t __rte_hot -cn10k_sso_hws_enq(void *port, const struct rte_event *ev) +cn10k_sso_hws_enq(uint8_t dev_id, uint8_t port_id, const struct rte_event *ev) { - struct cn10k_sso_hws *ws = port; + struct cn10k_sso_hws *ws = _rte_event_dev_prolog(dev_id, port_id); switch (ev->op) { case RTE_EVENT_OP_NEW: @@ -29,18 +29,18 @@ cn10k_sso_hws_enq(void *port, const struct rte_event *ev) } uint16_t __rte_hot -cn10k_sso_hws_enq_burst(void *port, const struct rte_event ev[], - uint16_t nb_events) +cn10k_sso_hws_enq_burst(uint8_t dev_id, uint8_t port_id, + const struct rte_event ev[], uint16_t nb_events) { RTE_SET_USED(nb_events); - return 
cn10k_sso_hws_enq(port, ev); + return cn10k_sso_hws_enq(dev_id, port_id, ev); } uint16_t __rte_hot -cn10k_sso_hws_enq_new_burst(void *port, const struct rte_event ev[], - uint16_t nb_events) +cn10k_sso_hws_enq_new_burst(uint8_t dev_id, uint8_t port_id, + const struct rte_event ev[], uint16_t nb_events) { - struct cn10k_sso_hws *ws = port; + struct cn10k_sso_hws *ws = _rte_event_dev_prolog(dev_id, port_id); uint16_t i, rc = 1; for (i = 0; i < nb_events && rc; i++) @@ -50,10 +50,10 @@ cn10k_sso_hws_enq_new_burst(void *port, const struct rte_event ev[], } uint16_t __rte_hot -cn10k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[], - uint16_t nb_events) +cn10k_sso_hws_enq_fwd_burst(uint8_t dev_id, uint8_t port_id, + const struct rte_event ev[], uint16_t nb_events) { - struct cn10k_sso_hws *ws = port; + struct cn10k_sso_hws *ws = _rte_event_dev_prolog(dev_id, port_id); RTE_SET_USED(nb_events); cn10k_sso_hws_forward_event(ws, ev); diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h index 9cc0992063..f3725ff48f 100644 --- a/drivers/event/cnxk/cn10k_worker.h +++ b/drivers/event/cnxk/cn10k_worker.h @@ -272,38 +272,43 @@ cn10k_sso_hws_get_work_empty(struct cn10k_sso_hws *ws, struct rte_event *ev) } /* CN10K Fastpath functions. */ -uint16_t __rte_hot cn10k_sso_hws_enq(void *port, const struct rte_event *ev); -uint16_t __rte_hot cn10k_sso_hws_enq_burst(void *port, +uint16_t __rte_hot cn10k_sso_hws_enq(uint8_t dev_id, uint8_t port_id, + const struct rte_event *ev); +uint16_t __rte_hot cn10k_sso_hws_enq_burst(uint8_t dev_id, uint8_t port_id, const struct rte_event ev[], uint16_t nb_events); -uint16_t __rte_hot cn10k_sso_hws_enq_new_burst(void *port, +uint16_t __rte_hot cn10k_sso_hws_enq_new_burst(uint8_t dev_id, uint8_t port_id, const struct rte_event ev[], uint16_t nb_events); -uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port, +uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(uint8_t dev_id, uint8_t port_id, const struct rte_event ev[], uint16_t nb_events); #define R(name, f5, f4, f3, f2, f1, f0, flags) \ uint16_t __rte_hot cn10k_sso_hws_deq_##name( \ - void *port, struct rte_event *ev, uint64_t timeout_ticks); \ - uint16_t __rte_hot cn10k_sso_hws_deq_burst_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events, \ + uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ uint64_t timeout_ticks); \ + uint16_t __rte_hot cn10k_sso_hws_deq_burst_##name( \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks); \ uint16_t __rte_hot cn10k_sso_hws_deq_tmo_##name( \ - void *port, struct rte_event *ev, uint64_t timeout_ticks); \ - uint16_t __rte_hot cn10k_sso_hws_deq_tmo_burst_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events, \ + uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ uint64_t timeout_ticks); \ + uint16_t __rte_hot cn10k_sso_hws_deq_tmo_burst_##name( \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks); \ uint16_t __rte_hot cn10k_sso_hws_deq_seg_##name( \ - void *port, struct rte_event *ev, uint64_t timeout_ticks); \ - uint16_t __rte_hot cn10k_sso_hws_deq_seg_burst_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events, \ + uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ uint64_t timeout_ticks); \ + uint16_t __rte_hot cn10k_sso_hws_deq_seg_burst_##name( \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks); \ uint16_t __rte_hot 
cn10k_sso_hws_deq_tmo_seg_##name( \ - void *port, struct rte_event *ev, uint64_t timeout_ticks); \ + uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ + uint64_t timeout_ticks); \ uint16_t __rte_hot cn10k_sso_hws_deq_tmo_seg_burst_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events, \ - uint64_t timeout_ticks); + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks); NIX_RX_FASTPATH_MODES #undef R @@ -453,13 +458,17 @@ cn10k_sso_hws_event_tx(struct cn10k_sso_hws *ws, struct rte_event *ev, #define T(name, f5, f4, f3, f2, f1, f0, sz, flags) \ uint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events); \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events); \ uint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_seg_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events); \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events); \ uint16_t __rte_hot cn10k_sso_hws_dual_tx_adptr_enq_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events); \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events); \ uint16_t __rte_hot cn10k_sso_hws_dual_tx_adptr_enq_seg_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events); + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events); NIX_TX_FASTPATH_MODES #undef T diff --git a/drivers/event/cnxk/cn10k_worker_deq.c b/drivers/event/cnxk/cn10k_worker_deq.c index 36ec454ccc..72aa97c114 100644 --- a/drivers/event/cnxk/cn10k_worker_deq.c +++ b/drivers/event/cnxk/cn10k_worker_deq.c @@ -8,8 +8,10 @@ #define R(name, f5, f4, f3, f2, f1, f0, flags) \ uint16_t __rte_hot cn10k_sso_hws_deq_##name( \ - void *port, struct rte_event *ev, uint64_t timeout_ticks) \ + uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ + uint64_t timeout_ticks) \ { \ + void *port = _rte_event_dev_prolog(dev_id, port_id); \ struct cn10k_sso_hws *ws = port; \ \ RTE_SET_USED(timeout_ticks); \ @@ -24,8 +26,10 @@ } \ \ uint16_t __rte_hot cn10k_sso_hws_deq_seg_##name( \ - void *port, struct rte_event *ev, uint64_t timeout_ticks) \ + uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ + uint64_t timeout_ticks) \ { \ + void *port = _rte_event_dev_prolog(dev_id, port_id); \ struct cn10k_sso_hws *ws = port; \ \ RTE_SET_USED(timeout_ticks); \ diff --git a/drivers/event/cnxk/cn10k_worker_deq_burst.c b/drivers/event/cnxk/cn10k_worker_deq_burst.c index 29ecc551cf..15b8a49412 100644 --- a/drivers/event/cnxk/cn10k_worker_deq_burst.c +++ b/drivers/event/cnxk/cn10k_worker_deq_burst.c @@ -8,21 +8,23 @@ #define R(name, f5, f4, f3, f2, f1, f0, flags) \ uint16_t __rte_hot cn10k_sso_hws_deq_burst_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events, \ - uint64_t timeout_ticks) \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks) \ { \ RTE_SET_USED(nb_events); \ \ - return cn10k_sso_hws_deq_##name(port, ev, timeout_ticks); \ + return cn10k_sso_hws_deq_##name(dev_id, port_id, ev, \ + timeout_ticks); \ } \ \ uint16_t __rte_hot cn10k_sso_hws_deq_seg_burst_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events, \ - uint64_t timeout_ticks) \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks) \ { \ RTE_SET_USED(nb_events); \ \ - return cn10k_sso_hws_deq_seg_##name(port, ev, timeout_ticks); \ + return 
cn10k_sso_hws_deq_seg_##name(dev_id, port_id, ev, \ + timeout_ticks); \ } NIX_RX_FASTPATH_MODES diff --git a/drivers/event/cnxk/cn10k_worker_deq_tmo.c b/drivers/event/cnxk/cn10k_worker_deq_tmo.c index c8524a27bd..4e6c3c7cb5 100644 --- a/drivers/event/cnxk/cn10k_worker_deq_tmo.c +++ b/drivers/event/cnxk/cn10k_worker_deq_tmo.c @@ -8,8 +8,10 @@ #define R(name, f5, f4, f3, f2, f1, f0, flags) \ uint16_t __rte_hot cn10k_sso_hws_deq_tmo_##name( \ - void *port, struct rte_event *ev, uint64_t timeout_ticks) \ + uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ + uint64_t timeout_ticks) \ { \ + void *port = _rte_event_dev_prolog(dev_id, port_id); \ struct cn10k_sso_hws *ws = port; \ uint16_t ret = 1; \ uint64_t iter; \ @@ -29,17 +31,20 @@ } \ \ uint16_t __rte_hot cn10k_sso_hws_deq_tmo_burst_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events, \ - uint64_t timeout_ticks) \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks) \ { \ RTE_SET_USED(nb_events); \ \ - return cn10k_sso_hws_deq_tmo_##name(port, ev, timeout_ticks); \ + return cn10k_sso_hws_deq_tmo_##name(dev_id, port_id, ev, \ + timeout_ticks); \ } \ \ uint16_t __rte_hot cn10k_sso_hws_deq_tmo_seg_##name( \ - void *port, struct rte_event *ev, uint64_t timeout_ticks) \ + uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ + uint64_t timeout_ticks) \ { \ + void *port = _rte_event_dev_prolog(dev_id, port_id); \ struct cn10k_sso_hws *ws = port; \ uint16_t ret = 1; \ uint64_t iter; \ @@ -59,12 +64,12 @@ } \ \ uint16_t __rte_hot cn10k_sso_hws_deq_tmo_seg_burst_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events, \ - uint64_t timeout_ticks) \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks) \ { \ RTE_SET_USED(nb_events); \ \ - return cn10k_sso_hws_deq_tmo_seg_##name(port, ev, \ + return cn10k_sso_hws_deq_tmo_seg_##name(dev_id, port_id, ev, \ timeout_ticks); \ } diff --git a/drivers/event/cnxk/cn10k_worker_tx_enq.c b/drivers/event/cnxk/cn10k_worker_tx_enq.c index f9968ac0d0..bfb657c1de 100644 --- a/drivers/event/cnxk/cn10k_worker_tx_enq.c +++ b/drivers/event/cnxk/cn10k_worker_tx_enq.c @@ -6,8 +6,10 @@ #define T(name, f5, f4, f3, f2, f1, f0, sz, flags) \ uint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events) \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events) \ { \ + void *port = _rte_event_dev_prolog(dev_id, port_id); \ struct cn10k_sso_hws *ws = port; \ uint64_t cmd[sz]; \ \ diff --git a/drivers/event/cnxk/cn10k_worker_tx_enq_seg.c b/drivers/event/cnxk/cn10k_worker_tx_enq_seg.c index a24fc42e5a..6fbccd7fd4 100644 --- a/drivers/event/cnxk/cn10k_worker_tx_enq_seg.c +++ b/drivers/event/cnxk/cn10k_worker_tx_enq_seg.c @@ -6,8 +6,10 @@ #define T(name, f5, f4, f3, f2, f1, f0, sz, flags) \ uint16_t __rte_hot cn10k_sso_hws_tx_adptr_enq_seg_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events) \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events) \ { \ + void *port = _rte_event_dev_prolog(dev_id, port_id); \ uint64_t cmd[(sz) + CNXK_NIX_TX_MSEG_SG_DWORDS - 2]; \ struct cn10k_sso_hws *ws = port; \ \ diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c index 9b439947e5..48c8114c6e 100644 --- a/drivers/event/cnxk/cn9k_eventdev.c +++ b/drivers/event/cnxk/cn9k_eventdev.c @@ -312,57 +312,62 @@ static void cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) { 
struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev); + struct rte_eventdev_api *api; + + api = &rte_eventdev_api[event_dev->data->dev_id]; /* Single WS modes */ - const event_dequeue_t sso_hws_deq[2][2][2][2][2][2] = { + const rte_event_dequeue_t sso_hws_deq[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_##name, NIX_RX_FASTPATH_MODES #undef R }; - const event_dequeue_burst_t sso_hws_deq_burst[2][2][2][2][2][2] = { + const rte_event_dequeue_burst_t sso_hws_deq_burst[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_burst_##name, NIX_RX_FASTPATH_MODES #undef R }; - const event_dequeue_t sso_hws_deq_tmo[2][2][2][2][2][2] = { + const rte_event_dequeue_t sso_hws_deq_tmo[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_##name, NIX_RX_FASTPATH_MODES #undef R }; - const event_dequeue_burst_t sso_hws_deq_tmo_burst[2][2][2][2][2][2] = { + const rte_event_dequeue_burst_t + sso_hws_deq_tmo_burst[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_burst_##name, - NIX_RX_FASTPATH_MODES + NIX_RX_FASTPATH_MODES #undef R - }; + }; - const event_dequeue_t sso_hws_deq_seg[2][2][2][2][2][2] = { + const rte_event_dequeue_t sso_hws_deq_seg[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_seg_##name, NIX_RX_FASTPATH_MODES #undef R }; - const event_dequeue_burst_t sso_hws_deq_seg_burst[2][2][2][2][2][2] = { + const rte_event_dequeue_burst_t + sso_hws_deq_seg_burst[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_seg_burst_##name, - NIX_RX_FASTPATH_MODES + NIX_RX_FASTPATH_MODES #undef R - }; + }; - const event_dequeue_t sso_hws_deq_tmo_seg[2][2][2][2][2][2] = { + const rte_event_dequeue_t sso_hws_deq_tmo_seg[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_seg_##name, NIX_RX_FASTPATH_MODES #undef R }; - const event_dequeue_burst_t + const rte_event_dequeue_burst_t sso_hws_deq_tmo_seg_burst[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_deq_tmo_seg_burst_##name, @@ -371,28 +376,29 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) }; /* Dual WS modes */ - const event_dequeue_t sso_hws_dual_deq[2][2][2][2][2][2] = { + const rte_event_dequeue_t sso_hws_dual_deq[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_##name, NIX_RX_FASTPATH_MODES #undef R }; - const event_dequeue_burst_t sso_hws_dual_deq_burst[2][2][2][2][2][2] = { + const rte_event_dequeue_burst_t + sso_hws_dual_deq_burst[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_burst_##name, - NIX_RX_FASTPATH_MODES + NIX_RX_FASTPATH_MODES #undef R - }; + }; - const event_dequeue_t sso_hws_dual_deq_tmo[2][2][2][2][2][2] = { + const rte_event_dequeue_t sso_hws_dual_deq_tmo[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_tmo_##name, NIX_RX_FASTPATH_MODES #undef R }; - const event_dequeue_burst_t + const rte_event_dequeue_burst_t sso_hws_dual_deq_tmo_burst[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) 
\ [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_tmo_burst_##name, @@ -400,14 +406,14 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) #undef R }; - const event_dequeue_t sso_hws_dual_deq_seg[2][2][2][2][2][2] = { + const rte_event_dequeue_t sso_hws_dual_deq_seg[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_seg_##name, NIX_RX_FASTPATH_MODES #undef R }; - const event_dequeue_burst_t + const rte_event_dequeue_burst_t sso_hws_dual_deq_seg_burst[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_seg_burst_##name, @@ -415,14 +421,14 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) #undef R }; - const event_dequeue_t sso_hws_dual_deq_tmo_seg[2][2][2][2][2][2] = { + const rte_event_dequeue_t sso_hws_dual_deq_tmo_seg[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_tmo_seg_##name, NIX_RX_FASTPATH_MODES #undef R }; - const event_dequeue_burst_t + const rte_event_dequeue_burst_t sso_hws_dual_deq_tmo_seg_burst[2][2][2][2][2][2] = { #define R(name, f5, f4, f3, f2, f1, f0, flags) \ [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_deq_tmo_seg_burst_##name, @@ -431,7 +437,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) }; /* Tx modes */ - const event_tx_adapter_enqueue + const rte_event_tx_adapter_enqueue_t sso_hws_tx_adptr_enq[2][2][2][2][2][2] = { #define T(name, f5, f4, f3, f2, f1, f0, sz, flags) \ [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_tx_adptr_enq_##name, @@ -439,7 +445,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) #undef T }; - const event_tx_adapter_enqueue + const rte_event_tx_adapter_enqueue_t sso_hws_tx_adptr_enq_seg[2][2][2][2][2][2] = { #define T(name, f5, f4, f3, f2, f1, f0, sz, flags) \ [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_tx_adptr_enq_seg_##name, @@ -447,7 +453,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) #undef T }; - const event_tx_adapter_enqueue + const rte_event_tx_adapter_enqueue_t sso_hws_dual_tx_adptr_enq[2][2][2][2][2][2] = { #define T(name, f5, f4, f3, f2, f1, f0, sz, flags) \ [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_tx_adptr_enq_##name, @@ -455,7 +461,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) #undef T }; - const event_tx_adapter_enqueue + const rte_event_tx_adapter_enqueue_t sso_hws_dual_tx_adptr_enq_seg[2][2][2][2][2][2] = { #define T(name, f5, f4, f3, f2, f1, f0, sz, flags) \ [f5][f4][f3][f2][f1][f0] = cn9k_sso_hws_dual_tx_adptr_enq_seg_##name, @@ -463,19 +469,19 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) #undef T }; - event_dev->enqueue = cn9k_sso_hws_enq; - event_dev->enqueue_burst = cn9k_sso_hws_enq_burst; - event_dev->enqueue_new_burst = cn9k_sso_hws_enq_new_burst; - event_dev->enqueue_forward_burst = cn9k_sso_hws_enq_fwd_burst; + api->enqueue = cn9k_sso_hws_enq; + api->enqueue_burst = cn9k_sso_hws_enq_burst; + api->enqueue_new_burst = cn9k_sso_hws_enq_new_burst; + api->enqueue_forward_burst = cn9k_sso_hws_enq_fwd_burst; if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) { - event_dev->dequeue = sso_hws_deq_seg + api->dequeue = sso_hws_deq_seg [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = sso_hws_deq_seg_burst + 
api->dequeue_burst = sso_hws_deq_seg_burst [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] @@ -483,7 +489,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; if (dev->is_timeout_deq) { - event_dev->dequeue = sso_hws_deq_tmo_seg + api->dequeue = sso_hws_deq_tmo_seg [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] @@ -493,7 +499,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) NIX_RX_OFFLOAD_CHECKSUM_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = sso_hws_deq_tmo_seg_burst + api->dequeue_burst = sso_hws_deq_tmo_seg_burst [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] @@ -505,14 +511,14 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; } } else { - event_dev->dequeue = sso_hws_deq + api->dequeue = sso_hws_deq [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = sso_hws_deq_burst + api->dequeue_burst = sso_hws_deq_burst [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] @@ -520,7 +526,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; if (dev->is_timeout_deq) { - event_dev->dequeue = sso_hws_deq_tmo + api->dequeue = sso_hws_deq_tmo [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] @@ -530,7 +536,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) NIX_RX_OFFLOAD_CHECKSUM_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = sso_hws_deq_tmo_burst + api->dequeue_burst = sso_hws_deq_tmo_burst [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] @@ -545,7 +551,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) { /* [SEC] [TSMP] [MBUF_NOFF] [VLAN] [OL3_L4_CSUM] [L3_L4_CSUM] */ - event_dev->txa_enqueue = sso_hws_tx_adptr_enq_seg + api->txa_enqueue = sso_hws_tx_adptr_enq_seg [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)] @@ -553,7 +559,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; } else { - event_dev->txa_enqueue = sso_hws_tx_adptr_enq + api->txa_enqueue = sso_hws_tx_adptr_enq [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_MBUF_NOFF_F)] @@ -563,14 +569,13 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) } if (dev->dual_ws) { - event_dev->enqueue = cn9k_sso_hws_dual_enq; - event_dev->enqueue_burst = cn9k_sso_hws_dual_enq_burst; - 
event_dev->enqueue_new_burst = cn9k_sso_hws_dual_enq_new_burst; - event_dev->enqueue_forward_burst = - cn9k_sso_hws_dual_enq_fwd_burst; + api->enqueue = cn9k_sso_hws_dual_enq; + api->enqueue_burst = cn9k_sso_hws_dual_enq_burst; + api->enqueue_new_burst = cn9k_sso_hws_dual_enq_new_burst; + api->enqueue_forward_burst = cn9k_sso_hws_dual_enq_fwd_burst; if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) { - event_dev->dequeue = sso_hws_dual_deq_seg + api->dequeue = sso_hws_dual_deq_seg [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] @@ -580,7 +585,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) NIX_RX_OFFLOAD_CHECKSUM_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = sso_hws_dual_deq_seg_burst + api->dequeue_burst = sso_hws_dual_deq_seg_burst [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] @@ -591,7 +596,21 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; if (dev->is_timeout_deq) { - event_dev->dequeue = sso_hws_dual_deq_tmo_seg + api->dequeue = sso_hws_dual_deq_tmo_seg + [!!(dev->rx_offloads & + NIX_RX_OFFLOAD_VLAN_STRIP_F)] + [!!(dev->rx_offloads & + NIX_RX_OFFLOAD_TSTAMP_F)] + [!!(dev->rx_offloads & + NIX_RX_OFFLOAD_MARK_UPDATE_F)] + [!!(dev->rx_offloads & + NIX_RX_OFFLOAD_CHECKSUM_F)] + [!!(dev->rx_offloads & + NIX_RX_OFFLOAD_PTYPE_F)] + [!!(dev->rx_offloads & + NIX_RX_OFFLOAD_RSS_F)]; + api->dequeue_burst = + sso_hws_dual_deq_tmo_seg_burst [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] [!!(dev->rx_offloads & @@ -604,23 +623,9 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = - sso_hws_dual_deq_tmo_seg_burst - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_VLAN_STRIP_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_RSS_F)]; } } else { - event_dev->dequeue = sso_hws_dual_deq + api->dequeue = sso_hws_dual_deq [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] @@ -630,7 +635,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) NIX_RX_OFFLOAD_CHECKSUM_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = sso_hws_dual_deq_burst + api->dequeue_burst = sso_hws_dual_deq_burst [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] @@ -641,7 +646,20 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; if (dev->is_timeout_deq) { - event_dev->dequeue = sso_hws_dual_deq_tmo + api->dequeue = sso_hws_dual_deq_tmo + [!!(dev->rx_offloads & + NIX_RX_OFFLOAD_VLAN_STRIP_F)] + [!!(dev->rx_offloads & + NIX_RX_OFFLOAD_TSTAMP_F)] + [!!(dev->rx_offloads & + NIX_RX_OFFLOAD_MARK_UPDATE_F)] + [!!(dev->rx_offloads & + NIX_RX_OFFLOAD_CHECKSUM_F)] + [!!(dev->rx_offloads & + NIX_RX_OFFLOAD_PTYPE_F)] + [!!(dev->rx_offloads & + NIX_RX_OFFLOAD_RSS_F)]; + api->dequeue_burst = sso_hws_dual_deq_tmo_burst [!!(dev->rx_offloads & NIX_RX_OFFLOAD_VLAN_STRIP_F)] 
[!!(dev->rx_offloads & @@ -654,27 +672,13 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = - sso_hws_dual_deq_tmo_burst - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_VLAN_STRIP_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_TSTAMP_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_MARK_UPDATE_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_CHECKSUM_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_PTYPE_F)] - [!!(dev->rx_offloads & - NIX_RX_OFFLOAD_RSS_F)]; } } if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) { /* [TSMP] [MBUF_NOFF] [VLAN] [OL3_L4_CSUM] [L3_L4_CSUM] */ - event_dev->txa_enqueue = sso_hws_dual_tx_adptr_enq_seg + api->txa_enqueue = sso_hws_dual_tx_adptr_enq_seg [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)] [!!(dev->tx_offloads & @@ -686,7 +690,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; } else { - event_dev->txa_enqueue = sso_hws_dual_tx_adptr_enq + api->txa_enqueue = sso_hws_dual_tx_adptr_enq [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)] [!!(dev->tx_offloads & @@ -700,7 +704,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) } } - event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue; + api->txa_enqueue_same_dest = api->txa_enqueue; rte_mb(); } diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c index 538bc4b0b3..d0a3b684dd 100644 --- a/drivers/event/cnxk/cn9k_worker.c +++ b/drivers/event/cnxk/cn9k_worker.c @@ -7,9 +7,9 @@ #include "cn9k_worker.h" uint16_t __rte_hot -cn9k_sso_hws_enq(void *port, const struct rte_event *ev) +cn9k_sso_hws_enq(uint8_t dev_id, uint8_t port_id, const struct rte_event *ev) { - struct cn9k_sso_hws *ws = port; + struct cn9k_sso_hws *ws = _rte_event_dev_prolog(dev_id, port_id); switch (ev->op) { case RTE_EVENT_OP_NEW: @@ -28,18 +28,18 @@ cn9k_sso_hws_enq(void *port, const struct rte_event *ev) } uint16_t __rte_hot -cn9k_sso_hws_enq_burst(void *port, const struct rte_event ev[], - uint16_t nb_events) +cn9k_sso_hws_enq_burst(uint8_t dev_id, uint8_t port_id, + const struct rte_event ev[], uint16_t nb_events) { RTE_SET_USED(nb_events); - return cn9k_sso_hws_enq(port, ev); + return cn9k_sso_hws_enq(dev_id, port_id, ev); } uint16_t __rte_hot -cn9k_sso_hws_enq_new_burst(void *port, const struct rte_event ev[], - uint16_t nb_events) +cn9k_sso_hws_enq_new_burst(uint8_t dev_id, uint8_t port_id, + const struct rte_event ev[], uint16_t nb_events) { - struct cn9k_sso_hws *ws = port; + struct cn9k_sso_hws *ws = _rte_event_dev_prolog(dev_id, port_id); uint16_t i, rc = 1; for (i = 0; i < nb_events && rc; i++) @@ -49,10 +49,10 @@ cn9k_sso_hws_enq_new_burst(void *port, const struct rte_event ev[], } uint16_t __rte_hot -cn9k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[], - uint16_t nb_events) +cn9k_sso_hws_enq_fwd_burst(uint8_t dev_id, uint8_t port_id, + const struct rte_event ev[], uint16_t nb_events) { - struct cn9k_sso_hws *ws = port; + struct cn9k_sso_hws *ws = _rte_event_dev_prolog(dev_id, port_id); RTE_SET_USED(nb_events); cn9k_sso_hws_forward_event(ws, ev); @@ -63,9 +63,10 @@ cn9k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[], /* Dual ws ops. 
*/ uint16_t __rte_hot -cn9k_sso_hws_dual_enq(void *port, const struct rte_event *ev) +cn9k_sso_hws_dual_enq(uint8_t dev_id, uint8_t port_id, + const struct rte_event *ev) { - struct cn9k_sso_hws_dual *dws = port; + struct cn9k_sso_hws_dual *dws = _rte_event_dev_prolog(dev_id, port_id); struct cn9k_sso_hws_state *vws; vws = &dws->ws_state[!dws->vws]; @@ -86,18 +87,18 @@ cn9k_sso_hws_dual_enq(void *port, const struct rte_event *ev) } uint16_t __rte_hot -cn9k_sso_hws_dual_enq_burst(void *port, const struct rte_event ev[], - uint16_t nb_events) +cn9k_sso_hws_dual_enq_burst(uint8_t dev_id, uint8_t port_id, + const struct rte_event ev[], uint16_t nb_events) { RTE_SET_USED(nb_events); - return cn9k_sso_hws_dual_enq(port, ev); + return cn9k_sso_hws_dual_enq(dev_id, port_id, ev); } uint16_t __rte_hot -cn9k_sso_hws_dual_enq_new_burst(void *port, const struct rte_event ev[], - uint16_t nb_events) +cn9k_sso_hws_dual_enq_new_burst(uint8_t dev_id, uint8_t port_id, + const struct rte_event ev[], uint16_t nb_events) { - struct cn9k_sso_hws_dual *dws = port; + struct cn9k_sso_hws_dual *dws = _rte_event_dev_prolog(dev_id, port_id); uint16_t i, rc = 1; for (i = 0; i < nb_events && rc; i++) @@ -107,10 +108,10 @@ cn9k_sso_hws_dual_enq_new_burst(void *port, const struct rte_event ev[], } uint16_t __rte_hot -cn9k_sso_hws_dual_enq_fwd_burst(void *port, const struct rte_event ev[], - uint16_t nb_events) +cn9k_sso_hws_dual_enq_fwd_burst(uint8_t dev_id, uint8_t port_id, + const struct rte_event ev[], uint16_t nb_events) { - struct cn9k_sso_hws_dual *dws = port; + struct cn9k_sso_hws_dual *dws = _rte_event_dev_prolog(dev_id, port_id); RTE_SET_USED(nb_events); cn9k_sso_hws_dual_forward_event(dws, &dws->ws_state[!dws->vws], ev); diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h index 9b2a0bf882..be9ae2a1e2 100644 --- a/drivers/event/cnxk/cn9k_worker.h +++ b/drivers/event/cnxk/cn9k_worker.h @@ -344,75 +344,86 @@ cn9k_sso_hws_get_work_empty(struct cn9k_sso_hws_state *ws, struct rte_event *ev) } /* CN9K Fastpath functions. 
*/ -uint16_t __rte_hot cn9k_sso_hws_enq(void *port, const struct rte_event *ev); -uint16_t __rte_hot cn9k_sso_hws_enq_burst(void *port, +uint16_t __rte_hot cn9k_sso_hws_enq(uint8_t dev_id, uint8_t port_id, + const struct rte_event *ev); +uint16_t __rte_hot cn9k_sso_hws_enq_burst(uint8_t dev_id, uint8_t port_id, const struct rte_event ev[], uint16_t nb_events); -uint16_t __rte_hot cn9k_sso_hws_enq_new_burst(void *port, +uint16_t __rte_hot cn9k_sso_hws_enq_new_burst(uint8_t dev_id, uint8_t port_id, const struct rte_event ev[], uint16_t nb_events); -uint16_t __rte_hot cn9k_sso_hws_enq_fwd_burst(void *port, +uint16_t __rte_hot cn9k_sso_hws_enq_fwd_burst(uint8_t dev_id, uint8_t port_id, const struct rte_event ev[], uint16_t nb_events); -uint16_t __rte_hot cn9k_sso_hws_dual_enq(void *port, +uint16_t __rte_hot cn9k_sso_hws_dual_enq(uint8_t dev_id, uint8_t port_id, const struct rte_event *ev); -uint16_t __rte_hot cn9k_sso_hws_dual_enq_burst(void *port, +uint16_t __rte_hot cn9k_sso_hws_dual_enq_burst(uint8_t dev_id, uint8_t port_id, const struct rte_event ev[], uint16_t nb_events); -uint16_t __rte_hot cn9k_sso_hws_dual_enq_new_burst(void *port, +uint16_t __rte_hot cn9k_sso_hws_dual_enq_new_burst(uint8_t dev_id, + uint8_t port_id, const struct rte_event ev[], uint16_t nb_events); -uint16_t __rte_hot cn9k_sso_hws_dual_enq_fwd_burst(void *port, +uint16_t __rte_hot cn9k_sso_hws_dual_enq_fwd_burst(uint8_t dev_id, + uint8_t port_id, const struct rte_event ev[], uint16_t nb_events); #define R(name, f5, f4, f3, f2, f1, f0, flags) \ uint16_t __rte_hot cn9k_sso_hws_deq_##name( \ - void *port, struct rte_event *ev, uint64_t timeout_ticks); \ - uint16_t __rte_hot cn9k_sso_hws_deq_burst_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events, \ + uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ uint64_t timeout_ticks); \ + uint16_t __rte_hot cn9k_sso_hws_deq_burst_##name( \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks); \ uint16_t __rte_hot cn9k_sso_hws_deq_tmo_##name( \ - void *port, struct rte_event *ev, uint64_t timeout_ticks); \ - uint16_t __rte_hot cn9k_sso_hws_deq_tmo_burst_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events, \ + uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ uint64_t timeout_ticks); \ + uint16_t __rte_hot cn9k_sso_hws_deq_tmo_burst_##name( \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks); \ uint16_t __rte_hot cn9k_sso_hws_deq_seg_##name( \ - void *port, struct rte_event *ev, uint64_t timeout_ticks); \ - uint16_t __rte_hot cn9k_sso_hws_deq_seg_burst_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events, \ + uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ uint64_t timeout_ticks); \ + uint16_t __rte_hot cn9k_sso_hws_deq_seg_burst_##name( \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks); \ uint16_t __rte_hot cn9k_sso_hws_deq_tmo_seg_##name( \ - void *port, struct rte_event *ev, uint64_t timeout_ticks); \ + uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ + uint64_t timeout_ticks); \ uint16_t __rte_hot cn9k_sso_hws_deq_tmo_seg_burst_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events, \ - uint64_t timeout_ticks); + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks); NIX_RX_FASTPATH_MODES #undef R #define R(name, f5, f4, f3, f2, f1, f0, flags) \ uint16_t __rte_hot 
cn9k_sso_hws_dual_deq_##name( \ - void *port, struct rte_event *ev, uint64_t timeout_ticks); \ - uint16_t __rte_hot cn9k_sso_hws_dual_deq_burst_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events, \ + uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ uint64_t timeout_ticks); \ + uint16_t __rte_hot cn9k_sso_hws_dual_deq_burst_##name( \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks); \ uint16_t __rte_hot cn9k_sso_hws_dual_deq_tmo_##name( \ - void *port, struct rte_event *ev, uint64_t timeout_ticks); \ - uint16_t __rte_hot cn9k_sso_hws_dual_deq_tmo_burst_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events, \ + uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ uint64_t timeout_ticks); \ + uint16_t __rte_hot cn9k_sso_hws_dual_deq_tmo_burst_##name( \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks); \ uint16_t __rte_hot cn9k_sso_hws_dual_deq_seg_##name( \ - void *port, struct rte_event *ev, uint64_t timeout_ticks); \ - uint16_t __rte_hot cn9k_sso_hws_dual_deq_seg_burst_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events, \ + uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ uint64_t timeout_ticks); \ + uint16_t __rte_hot cn9k_sso_hws_dual_deq_seg_burst_##name( \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks); \ uint16_t __rte_hot cn9k_sso_hws_dual_deq_tmo_seg_##name( \ - void *port, struct rte_event *ev, uint64_t timeout_ticks); \ + uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ + uint64_t timeout_ticks); \ uint16_t __rte_hot cn9k_sso_hws_dual_deq_tmo_seg_burst_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events, \ - uint64_t timeout_ticks); + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks); NIX_RX_FASTPATH_MODES #undef R @@ -503,13 +514,17 @@ cn9k_sso_hws_event_tx(uint64_t base, struct rte_event *ev, uint64_t *cmd, #define T(name, f5, f4, f3, f2, f1, f0, sz, flags) \ uint16_t __rte_hot cn9k_sso_hws_tx_adptr_enq_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events); \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events); \ uint16_t __rte_hot cn9k_sso_hws_tx_adptr_enq_seg_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events); \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events); \ uint16_t __rte_hot cn9k_sso_hws_dual_tx_adptr_enq_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events); \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events); \ uint16_t __rte_hot cn9k_sso_hws_dual_tx_adptr_enq_seg_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events); + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events); NIX_TX_FASTPATH_MODES #undef T diff --git a/drivers/event/cnxk/cn9k_worker_deq.c b/drivers/event/cnxk/cn9k_worker_deq.c index 51ccaf4ec4..b60740ea71 100644 --- a/drivers/event/cnxk/cn9k_worker_deq.c +++ b/drivers/event/cnxk/cn9k_worker_deq.c @@ -8,8 +8,10 @@ #define R(name, f5, f4, f3, f2, f1, f0, flags) \ uint16_t __rte_hot cn9k_sso_hws_deq_##name( \ - void *port, struct rte_event *ev, uint64_t timeout_ticks) \ + uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ + uint64_t timeout_ticks) \ { \ + void *port = _rte_event_dev_prolog(dev_id, port_id); \ struct cn9k_sso_hws *ws = port; \ \ RTE_SET_USED(timeout_ticks); 
\ @@ -24,8 +26,10 @@ } \ \ uint16_t __rte_hot cn9k_sso_hws_deq_seg_##name( \ - void *port, struct rte_event *ev, uint64_t timeout_ticks) \ + uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ + uint64_t timeout_ticks) \ { \ + void *port = _rte_event_dev_prolog(dev_id, port_id); \ struct cn9k_sso_hws *ws = port; \ \ RTE_SET_USED(timeout_ticks); \ diff --git a/drivers/event/cnxk/cn9k_worker_deq_burst.c b/drivers/event/cnxk/cn9k_worker_deq_burst.c index 4e2801459b..2e84683499 100644 --- a/drivers/event/cnxk/cn9k_worker_deq_burst.c +++ b/drivers/event/cnxk/cn9k_worker_deq_burst.c @@ -8,21 +8,23 @@ #define R(name, f5, f4, f3, f2, f1, f0, flags) \ uint16_t __rte_hot cn9k_sso_hws_deq_burst_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events, \ - uint64_t timeout_ticks) \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks) \ { \ RTE_SET_USED(nb_events); \ \ - return cn9k_sso_hws_deq_##name(port, ev, timeout_ticks); \ + return cn9k_sso_hws_deq_##name(dev_id, port_id, ev, \ + timeout_ticks); \ } \ \ uint16_t __rte_hot cn9k_sso_hws_deq_seg_burst_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events, \ - uint64_t timeout_ticks) \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks) \ { \ RTE_SET_USED(nb_events); \ \ - return cn9k_sso_hws_deq_seg_##name(port, ev, timeout_ticks); \ + return cn9k_sso_hws_deq_seg_##name(dev_id, port_id, ev, \ + timeout_ticks); \ } NIX_RX_FASTPATH_MODES diff --git a/drivers/event/cnxk/cn9k_worker_deq_tmo.c b/drivers/event/cnxk/cn9k_worker_deq_tmo.c index 9713d1ef00..7c6ff30dd4 100644 --- a/drivers/event/cnxk/cn9k_worker_deq_tmo.c +++ b/drivers/event/cnxk/cn9k_worker_deq_tmo.c @@ -8,8 +8,10 @@ #define R(name, f5, f4, f3, f2, f1, f0, flags) \ uint16_t __rte_hot cn9k_sso_hws_deq_tmo_##name( \ - void *port, struct rte_event *ev, uint64_t timeout_ticks) \ + uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ + uint64_t timeout_ticks) \ { \ + void *port = _rte_event_dev_prolog(dev_id, port_id); \ struct cn9k_sso_hws *ws = port; \ uint16_t ret = 1; \ uint64_t iter; \ @@ -29,17 +31,20 @@ } \ \ uint16_t __rte_hot cn9k_sso_hws_deq_tmo_burst_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events, \ - uint64_t timeout_ticks) \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks) \ { \ RTE_SET_USED(nb_events); \ \ - return cn9k_sso_hws_deq_tmo_##name(port, ev, timeout_ticks); \ + return cn9k_sso_hws_deq_tmo_##name(dev_id, port_id, ev, \ + timeout_ticks); \ } \ \ uint16_t __rte_hot cn9k_sso_hws_deq_tmo_seg_##name( \ - void *port, struct rte_event *ev, uint64_t timeout_ticks) \ + uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ + uint64_t timeout_ticks) \ { \ + void *port = _rte_event_dev_prolog(dev_id, port_id); \ struct cn9k_sso_hws *ws = port; \ uint16_t ret = 1; \ uint64_t iter; \ @@ -59,12 +64,12 @@ } \ \ uint16_t __rte_hot cn9k_sso_hws_deq_tmo_seg_burst_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events, \ - uint64_t timeout_ticks) \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks) \ { \ RTE_SET_USED(nb_events); \ \ - return cn9k_sso_hws_deq_tmo_seg_##name(port, ev, \ + return cn9k_sso_hws_deq_tmo_seg_##name(dev_id, port_id, ev, \ timeout_ticks); \ } diff --git a/drivers/event/cnxk/cn9k_worker_dual_deq.c b/drivers/event/cnxk/cn9k_worker_dual_deq.c index 709fa2d9ef..14b27ea0a3 100644 --- 
a/drivers/event/cnxk/cn9k_worker_dual_deq.c +++ b/drivers/event/cnxk/cn9k_worker_dual_deq.c @@ -8,8 +8,10 @@ #define R(name, f5, f4, f3, f2, f1, f0, flags) \ uint16_t __rte_hot cn9k_sso_hws_dual_deq_##name( \ - void *port, struct rte_event *ev, uint64_t timeout_ticks) \ + uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ + uint64_t timeout_ticks) \ { \ + void *port = _rte_event_dev_prolog(dev_id, port_id); \ struct cn9k_sso_hws_dual *dws = port; \ uint16_t gw; \ \ @@ -29,8 +31,10 @@ } \ \ uint16_t __rte_hot cn9k_sso_hws_dual_deq_seg_##name( \ - void *port, struct rte_event *ev, uint64_t timeout_ticks) \ + uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ + uint64_t timeout_ticks) \ { \ + void *port = _rte_event_dev_prolog(dev_id, port_id); \ struct cn9k_sso_hws_dual *dws = port; \ uint16_t gw; \ \ diff --git a/drivers/event/cnxk/cn9k_worker_dual_deq_burst.c b/drivers/event/cnxk/cn9k_worker_dual_deq_burst.c index d50e1cf83f..e746deae36 100644 --- a/drivers/event/cnxk/cn9k_worker_dual_deq_burst.c +++ b/drivers/event/cnxk/cn9k_worker_dual_deq_burst.c @@ -8,21 +8,22 @@ #define R(name, f5, f4, f3, f2, f1, f0, flags) \ uint16_t __rte_hot cn9k_sso_hws_dual_deq_burst_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events, \ - uint64_t timeout_ticks) \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks) \ { \ RTE_SET_USED(nb_events); \ \ - return cn9k_sso_hws_dual_deq_##name(port, ev, timeout_ticks); \ + return cn9k_sso_hws_dual_deq_##name(dev_id, port_id, ev, \ + timeout_ticks); \ } \ \ uint16_t __rte_hot cn9k_sso_hws_dual_deq_seg_burst_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events, \ - uint64_t timeout_ticks) \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks) \ { \ RTE_SET_USED(nb_events); \ \ - return cn9k_sso_hws_dual_deq_seg_##name(port, ev, \ + return cn9k_sso_hws_dual_deq_seg_##name(dev_id, port_id, ev, \ timeout_ticks); \ } diff --git a/drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c b/drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c index a0508fdf0d..1db7a8dc86 100644 --- a/drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c +++ b/drivers/event/cnxk/cn9k_worker_dual_deq_tmo.c @@ -8,8 +8,10 @@ #define R(name, f5, f4, f3, f2, f1, f0, flags) \ uint16_t __rte_hot cn9k_sso_hws_dual_deq_tmo_##name( \ - void *port, struct rte_event *ev, uint64_t timeout_ticks) \ + uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ + uint64_t timeout_ticks) \ { \ + void *port = _rte_event_dev_prolog(dev_id, port_id); \ struct cn9k_sso_hws_dual *dws = port; \ uint16_t ret = 1; \ uint64_t iter; \ @@ -37,18 +39,20 @@ } \ \ uint16_t __rte_hot cn9k_sso_hws_dual_deq_tmo_burst_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events, \ - uint64_t timeout_ticks) \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks) \ { \ RTE_SET_USED(nb_events); \ \ - return cn9k_sso_hws_dual_deq_tmo_##name(port, ev, \ + return cn9k_sso_hws_dual_deq_tmo_##name(dev_id, port_id, ev, \ timeout_ticks); \ } \ \ uint16_t __rte_hot cn9k_sso_hws_dual_deq_tmo_seg_##name( \ - void *port, struct rte_event *ev, uint64_t timeout_ticks) \ + uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ + uint64_t timeout_ticks) \ { \ + void *port = _rte_event_dev_prolog(dev_id, port_id); \ struct cn9k_sso_hws_dual *dws = port; \ uint16_t ret = 1; \ uint64_t iter; \ @@ -76,13 +80,13 @@ } \ \ uint16_t __rte_hot 
cn9k_sso_hws_dual_deq_tmo_seg_burst_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events, \ - uint64_t timeout_ticks) \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks) \ { \ RTE_SET_USED(nb_events); \ \ - return cn9k_sso_hws_dual_deq_tmo_seg_##name(port, ev, \ - timeout_ticks); \ + return cn9k_sso_hws_dual_deq_tmo_seg_##name( \ + dev_id, port_id, ev, timeout_ticks); \ } NIX_RX_FASTPATH_MODES diff --git a/drivers/event/cnxk/cn9k_worker_dual_tx_enq.c b/drivers/event/cnxk/cn9k_worker_dual_tx_enq.c index 92e2981f02..87cc3a40d4 100644 --- a/drivers/event/cnxk/cn9k_worker_dual_tx_enq.c +++ b/drivers/event/cnxk/cn9k_worker_dual_tx_enq.c @@ -6,8 +6,10 @@ #define T(name, f5, f4, f3, f2, f1, f0, sz, flags) \ uint16_t __rte_hot cn9k_sso_hws_dual_tx_adptr_enq_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events) \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events) \ { \ + void *port = _rte_event_dev_prolog(dev_id, port_id); \ struct cn9k_sso_hws_dual *ws = port; \ uint64_t cmd[sz]; \ \ diff --git a/drivers/event/cnxk/cn9k_worker_dual_tx_enq_seg.c b/drivers/event/cnxk/cn9k_worker_dual_tx_enq_seg.c index dfb574cf95..f7662431d0 100644 --- a/drivers/event/cnxk/cn9k_worker_dual_tx_enq_seg.c +++ b/drivers/event/cnxk/cn9k_worker_dual_tx_enq_seg.c @@ -6,8 +6,10 @@ #define T(name, f5, f4, f3, f2, f1, f0, sz, flags) \ uint16_t __rte_hot cn9k_sso_hws_dual_tx_adptr_enq_seg_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events) \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events) \ { \ + void *port = _rte_event_dev_prolog(dev_id, port_id); \ uint64_t cmd[(sz) + CNXK_NIX_TX_MSEG_SG_DWORDS - 2]; \ struct cn9k_sso_hws_dual *ws = port; \ \ diff --git a/drivers/event/cnxk/cn9k_worker_tx_enq.c b/drivers/event/cnxk/cn9k_worker_tx_enq.c index 3df649c0c8..ca82edd3c3 100644 --- a/drivers/event/cnxk/cn9k_worker_tx_enq.c +++ b/drivers/event/cnxk/cn9k_worker_tx_enq.c @@ -6,8 +6,10 @@ #define T(name, f5, f4, f3, f2, f1, f0, sz, flags) \ uint16_t __rte_hot cn9k_sso_hws_tx_adptr_enq_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events) \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events) \ { \ + void *port = _rte_event_dev_prolog(dev_id, port_id); \ struct cn9k_sso_hws *ws = port; \ uint64_t cmd[sz]; \ \ diff --git a/drivers/event/cnxk/cn9k_worker_tx_enq_seg.c b/drivers/event/cnxk/cn9k_worker_tx_enq_seg.c index 0efe29113e..f9024ba20a 100644 --- a/drivers/event/cnxk/cn9k_worker_tx_enq_seg.c +++ b/drivers/event/cnxk/cn9k_worker_tx_enq_seg.c @@ -6,9 +6,11 @@ #define T(name, f5, f4, f3, f2, f1, f0, sz, flags) \ uint16_t __rte_hot cn9k_sso_hws_tx_adptr_enq_seg_##name( \ - void *port, struct rte_event ev[], uint16_t nb_events) \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events) \ { \ uint64_t cmd[(sz) + CNXK_NIX_TX_MSEG_SG_DWORDS - 2]; \ + void *port = _rte_event_dev_prolog(dev_id, port_id); \ struct cn9k_sso_hws *ws = port; \ \ RTE_SET_USED(nb_events); \ diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c index c8742ddb2c..c69c36c5da 100644 --- a/drivers/event/dlb2/dlb2.c +++ b/drivers/event/dlb2/dlb2.c @@ -1245,21 +1245,29 @@ static inline uint16_t dlb2_event_enqueue_delayed(void *event_port, const struct rte_event events[]); +static _RTE_EVENT_ENQ_PROTO(dlb2_event_enqueue_delayed); + static inline uint16_t dlb2_event_enqueue_burst_delayed(void *event_port, const struct rte_event events[], 
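/*
 * Editorial note, not part of the patch: the cnxk worker hunks above all
 * apply the same mechanical change -- the fast-path handlers gain
 * (dev_id, port_id) parameters and recover the old void *port handle via
 * _rte_event_dev_prolog(). A minimal sketch of that shape follows; the
 * names example_event_ports/example_port_lookup/example_deq are
 * hypothetical stand-ins, and the real lookup lives inside the eventdev
 * library introduced earlier in this series.
 */
#include <stdint.h>
#include <rte_eventdev.h>

/* Hypothetical per-process handle table, filled at port setup time. */
static void *example_event_ports[16][64];

static inline void *
example_port_lookup(uint8_t dev_id, uint8_t port_id)
{
	return example_event_ports[dev_id][port_id];
}

static uint16_t
example_deq(uint8_t dev_id, uint8_t port_id, struct rte_event *ev,
	    uint64_t timeout_ticks)
{
	void *port = example_port_lookup(dev_id, port_id);

	/* Driver-specific dequeue on 'port', as in the hunks above. */
	RTE_SET_USED(port);
	RTE_SET_USED(ev);
	RTE_SET_USED(timeout_ticks);
	return 0;
}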
uint16_t num); +static _RTE_EVENT_ENQ_BURST_PROTO(dlb2_event_enqueue_burst_delayed); + static inline uint16_t dlb2_event_enqueue_new_burst_delayed(void *event_port, const struct rte_event events[], uint16_t num); +static _RTE_EVENT_ENQ_BURST_PROTO(dlb2_event_enqueue_new_burst_delayed); + static inline uint16_t dlb2_event_enqueue_forward_burst_delayed(void *event_port, const struct rte_event events[], uint16_t num); +static _RTE_EVENT_ENQ_BURST_PROTO(dlb2_event_enqueue_forward_burst_delayed); + /* Generate the required bitmask for rotate-style expected QE gen bits. * This requires a pattern of 1's and zeros, starting with expected as * 1 bits, so when hardware writes 0's they're "new". This requires the @@ -1422,13 +1430,21 @@ dlb2_hw_create_ldb_port(struct dlb2_eventdev *dlb2, * performance reasons. */ if (qm_port->token_pop_mode == DELAYED_POP) { - dlb2->event_dev->enqueue = dlb2_event_enqueue_delayed; - dlb2->event_dev->enqueue_burst = - dlb2_event_enqueue_burst_delayed; - dlb2->event_dev->enqueue_new_burst = - dlb2_event_enqueue_new_burst_delayed; - dlb2->event_dev->enqueue_forward_burst = - dlb2_event_enqueue_forward_burst_delayed; + rte_event_set_enq_fn( + dlb2->event_dev->data->dev_id, + _RTE_EVENT_ENQ_FUNC(dlb2_event_enqueue_delayed)); + rte_event_set_enq_burst_fn( + dlb2->event_dev->data->dev_id, + _RTE_EVENT_ENQ_BURST_FUNC( + dlb2_event_enqueue_burst_delayed)); + rte_event_set_enq_new_burst_fn( + dlb2->event_dev->data->dev_id, + _RTE_EVENT_ENQ_BURST_FUNC( + dlb2_event_enqueue_new_burst_delayed)); + rte_event_set_enq_fwd_burst_fn( + dlb2->event_dev->data->dev_id, + _RTE_EVENT_ENQ_BURST_FUNC( + dlb2_event_enqueue_forward_burst_delayed)); } qm_port->owed_tokens = 0; @@ -2976,6 +2992,8 @@ dlb2_event_enqueue_burst(void *event_port, return __dlb2_event_enqueue_burst(event_port, events, num, false); } +static _RTE_EVENT_ENQ_BURST_DEF(dlb2_event_enqueue_burst); + static uint16_t dlb2_event_enqueue_burst_delayed(void *event_port, const struct rte_event events[], @@ -2984,6 +3002,8 @@ dlb2_event_enqueue_burst_delayed(void *event_port, return __dlb2_event_enqueue_burst(event_port, events, num, true); } +static _RTE_EVENT_ENQ_BURST_DEF(dlb2_event_enqueue_burst_delayed); + static inline uint16_t dlb2_event_enqueue(void *event_port, const struct rte_event events[]) @@ -2991,6 +3011,8 @@ dlb2_event_enqueue(void *event_port, return __dlb2_event_enqueue_burst(event_port, events, 1, false); } +static _RTE_EVENT_ENQ_DEF(dlb2_event_enqueue); + static inline uint16_t dlb2_event_enqueue_delayed(void *event_port, const struct rte_event events[]) @@ -2998,6 +3020,8 @@ dlb2_event_enqueue_delayed(void *event_port, return __dlb2_event_enqueue_burst(event_port, events, 1, true); } +static _RTE_EVENT_ENQ_DEF(dlb2_event_enqueue_delayed); + static uint16_t dlb2_event_enqueue_new_burst(void *event_port, const struct rte_event events[], @@ -3006,6 +3030,8 @@ dlb2_event_enqueue_new_burst(void *event_port, return __dlb2_event_enqueue_burst(event_port, events, num, false); } +static _RTE_EVENT_ENQ_BURST_DEF(dlb2_event_enqueue_new_burst); + static uint16_t dlb2_event_enqueue_new_burst_delayed(void *event_port, const struct rte_event events[], @@ -3014,6 +3040,8 @@ dlb2_event_enqueue_new_burst_delayed(void *event_port, return __dlb2_event_enqueue_burst(event_port, events, num, true); } +static _RTE_EVENT_ENQ_BURST_DEF(dlb2_event_enqueue_new_burst_delayed); + static uint16_t dlb2_event_enqueue_forward_burst(void *event_port, const struct rte_event events[], @@ -3022,6 +3050,8 @@ dlb2_event_enqueue_forward_burst(void 
*event_port, return __dlb2_event_enqueue_burst(event_port, events, num, false); } +static _RTE_EVENT_ENQ_BURST_DEF(dlb2_event_enqueue_forward_burst); + static uint16_t dlb2_event_enqueue_forward_burst_delayed(void *event_port, const struct rte_event events[], @@ -3030,6 +3060,8 @@ dlb2_event_enqueue_forward_burst_delayed(void *event_port, return __dlb2_event_enqueue_burst(event_port, events, num, true); } +static _RTE_EVENT_ENQ_BURST_DEF(dlb2_event_enqueue_forward_burst_delayed); + static void dlb2_event_release(struct dlb2_eventdev *dlb2, uint8_t port_id, @@ -4062,12 +4094,16 @@ dlb2_event_dequeue_burst(void *event_port, struct rte_event *ev, uint16_t num, return cnt; } +static _RTE_EVENT_DEQ_BURST_DEF(dlb2_event_dequeue_burst); + static uint16_t dlb2_event_dequeue(void *event_port, struct rte_event *ev, uint64_t wait) { return dlb2_event_dequeue_burst(event_port, ev, 1, wait); } +static _RTE_EVENT_DEQ_DEF(dlb2_event_dequeue); + static uint16_t dlb2_event_dequeue_burst_sparse(void *event_port, struct rte_event *ev, uint16_t num, uint64_t wait) @@ -4098,6 +4134,8 @@ dlb2_event_dequeue_burst_sparse(void *event_port, struct rte_event *ev, return cnt; } +static _RTE_EVENT_DEQ_BURST_DEF(dlb2_event_dequeue_burst_sparse); + static uint16_t dlb2_event_dequeue_sparse(void *event_port, struct rte_event *ev, uint64_t wait) @@ -4105,6 +4143,8 @@ dlb2_event_dequeue_sparse(void *event_port, struct rte_event *ev, return dlb2_event_dequeue_burst_sparse(event_port, ev, 1, wait); } +static _RTE_EVENT_DEQ_DEF(dlb2_event_dequeue_sparse); + static void dlb2_flush_port(struct rte_eventdev *dev, int port_id) { @@ -4381,6 +4421,7 @@ dlb2_eventdev_timeout_ticks(struct rte_eventdev *dev, uint64_t ns, static void dlb2_entry_points_init(struct rte_eventdev *dev) { + struct rte_eventdev_api *api; struct dlb2_eventdev *dlb2; /* Expose PMD's eventdev interface */ @@ -4409,21 +4450,27 @@ dlb2_entry_points_init(struct rte_eventdev *dev) .dev_selftest = test_dlb2_eventdev, }; + api = &rte_eventdev_api[dev->data->dev_id]; /* Expose PMD's eventdev interface */ dev->dev_ops = &dlb2_eventdev_entry_ops; - dev->enqueue = dlb2_event_enqueue; - dev->enqueue_burst = dlb2_event_enqueue_burst; - dev->enqueue_new_burst = dlb2_event_enqueue_new_burst; - dev->enqueue_forward_burst = dlb2_event_enqueue_forward_burst; + api->enqueue = _RTE_EVENT_ENQ_FUNC(dlb2_event_enqueue); + api->enqueue_burst = + _RTE_EVENT_ENQ_BURST_FUNC(dlb2_event_enqueue_burst); + api->enqueue_new_burst = + _RTE_EVENT_ENQ_BURST_FUNC(dlb2_event_enqueue_new_burst); + api->enqueue_forward_burst = + _RTE_EVENT_ENQ_BURST_FUNC(dlb2_event_enqueue_forward_burst); dlb2 = dev->data->dev_private; if (dlb2->poll_mode == DLB2_CQ_POLL_MODE_SPARSE) { - dev->dequeue = dlb2_event_dequeue_sparse; - dev->dequeue_burst = dlb2_event_dequeue_burst_sparse; + api->dequeue = _RTE_EVENT_DEQ_FUNC(dlb2_event_dequeue_sparse); + api->dequeue_burst = _RTE_EVENT_DEQ_BURST_FUNC( + dlb2_event_dequeue_burst_sparse); } else { - dev->dequeue = dlb2_event_dequeue; - dev->dequeue_burst = dlb2_event_dequeue_burst; + api->dequeue = _RTE_EVENT_DEQ_FUNC(dlb2_event_dequeue); + api->dequeue_burst = + _RTE_EVENT_DEQ_BURST_FUNC(dlb2_event_dequeue_burst); } } diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c index 9f14390d28..08e7f59db4 100644 --- a/drivers/event/dpaa/dpaa_eventdev.c +++ b/drivers/event/dpaa/dpaa_eventdev.c @@ -111,12 +111,16 @@ dpaa_event_enqueue_burst(void *port, const struct rte_event ev[], return nb_events; } +static 
_RTE_EVENT_ENQ_BURST_DEF(dpaa_event_enqueue_burst); + static uint16_t dpaa_event_enqueue(void *port, const struct rte_event *ev) { return dpaa_event_enqueue_burst(port, ev, 1); } +static _RTE_EVENT_ENQ_DEF(dpaa_event_enqueue); + static void drain_4_bytes(int fd, fd_set *fdset) { if (FD_ISSET(fd, fdset)) { @@ -231,12 +235,16 @@ dpaa_event_dequeue_burst(void *port, struct rte_event ev[], return num_frames; } +static _RTE_EVENT_DEQ_BURST_DEF(dpaa_event_dequeue_burst); + static uint16_t dpaa_event_dequeue(void *port, struct rte_event *ev, uint64_t timeout_ticks) { return dpaa_event_dequeue_burst(port, ev, 1, timeout_ticks); } +static _RTE_EVENT_DEQ_DEF(dpaa_event_dequeue); + static uint16_t dpaa_event_dequeue_burst_intr(void *port, struct rte_event ev[], uint16_t nb_events, uint64_t timeout_ticks) @@ -309,6 +317,8 @@ dpaa_event_dequeue_burst_intr(void *port, struct rte_event ev[], return num_frames; } +static _RTE_EVENT_DEQ_BURST_DEF(dpaa_event_dequeue_burst_intr); + static uint16_t dpaa_event_dequeue_intr(void *port, struct rte_event *ev, @@ -317,6 +327,8 @@ dpaa_event_dequeue_intr(void *port, return dpaa_event_dequeue_burst_intr(port, ev, 1, timeout_ticks); } +static _RTE_EVENT_DEQ_DEF(dpaa_event_dequeue_intr); + static void dpaa_event_dev_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *dev_info) @@ -907,6 +919,8 @@ dpaa_eventdev_txa_enqueue_same_dest(void *port, return rte_eth_tx_burst(m0->port, qid, m, nb_events); } +static _RTE_EVENT_TXA_ENQ_BURST_DEF(dpaa_eventdev_txa_enqueue_same_dest); + static uint16_t dpaa_eventdev_txa_enqueue(void *port, struct rte_event ev[], @@ -925,6 +939,8 @@ dpaa_eventdev_txa_enqueue(void *port, return nb_events; } +static _RTE_EVENT_TXA_ENQ_BURST_DEF(dpaa_eventdev_txa_enqueue); + static struct eventdev_ops dpaa_eventdev_ops = { .dev_infos_get = dpaa_event_dev_info_get, .dev_configure = dpaa_event_dev_configure, @@ -995,6 +1011,7 @@ dpaa_event_dev_create(const char *name, const char *params) { struct rte_eventdev *eventdev; struct dpaa_eventdev *priv; + uint8_t dev_id; eventdev = rte_event_pmd_vdev_init(name, sizeof(struct dpaa_eventdev), @@ -1004,23 +1021,35 @@ dpaa_event_dev_create(const char *name, const char *params) goto fail; } priv = eventdev->data->dev_private; + dev_id = eventdev->data->dev_id; eventdev->dev_ops = &dpaa_eventdev_ops; - eventdev->enqueue = dpaa_event_enqueue; - eventdev->enqueue_burst = dpaa_event_enqueue_burst; + rte_event_set_enq_fn(dev_id, _RTE_EVENT_ENQ_FUNC(dpaa_event_enqueue)); + rte_event_set_enq_burst_fn( + dev_id, _RTE_EVENT_ENQ_BURST_FUNC(dpaa_event_enqueue_burst)); if (dpaa_event_check_flags(params)) { - eventdev->dequeue = dpaa_event_dequeue; - eventdev->dequeue_burst = dpaa_event_dequeue_burst; + rte_event_set_deq_fn(dev_id, + _RTE_EVENT_DEQ_FUNC(dpaa_event_dequeue)); + rte_event_set_deq_burst_fn( + dev_id, + _RTE_EVENT_DEQ_BURST_FUNC(dpaa_event_dequeue_burst)); } else { priv->intr_mode = 1; eventdev->dev_ops->timeout_ticks = dpaa_event_dequeue_timeout_ticks_intr; - eventdev->dequeue = dpaa_event_dequeue_intr; - eventdev->dequeue_burst = dpaa_event_dequeue_burst_intr; + rte_event_set_deq_fn( + dev_id, _RTE_EVENT_DEQ_FUNC(dpaa_event_dequeue_intr)); + rte_event_set_deq_burst_fn( + dev_id, _RTE_EVENT_DEQ_BURST_FUNC( + dpaa_event_dequeue_burst_intr)); } - eventdev->txa_enqueue = dpaa_eventdev_txa_enqueue; - eventdev->txa_enqueue_same_dest = dpaa_eventdev_txa_enqueue_same_dest; + rte_event_set_tx_adapter_enq_fn( + dev_id, + _RTE_EVENT_TXA_ENQ_BURST_FUNC(dpaa_eventdev_txa_enqueue)); + 
rte_event_set_tx_adapter_enq_same_dest_fn( + dev_id, _RTE_EVENT_TXA_ENQ_BURST_FUNC( + dpaa_eventdev_txa_enqueue_same_dest)); RTE_LOG(INFO, PMD, "%s eventdev added", name); diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c index d577f64824..1060a9dfcf 100644 --- a/drivers/event/dpaa2/dpaa2_eventdev.c +++ b/drivers/event/dpaa2/dpaa2_eventdev.c @@ -201,12 +201,16 @@ dpaa2_eventdev_enqueue_burst(void *port, const struct rte_event ev[], } +static _RTE_EVENT_ENQ_BURST_DEF(dpaa2_eventdev_enqueue_burst); + static uint16_t dpaa2_eventdev_enqueue(void *port, const struct rte_event *ev) { return dpaa2_eventdev_enqueue_burst(port, ev, 1); } +static _RTE_EVENT_ENQ_DEF(dpaa2_eventdev_enqueue); + static void dpaa2_eventdev_dequeue_wait(uint64_t timeout_ticks) { struct epoll_event epoll_ev; @@ -362,6 +366,8 @@ dpaa2_eventdev_dequeue_burst(void *port, struct rte_event ev[], return 0; } +static _RTE_EVENT_DEQ_BURST_DEF(dpaa2_eventdev_dequeue_burst); + static uint16_t dpaa2_eventdev_dequeue(void *port, struct rte_event *ev, uint64_t timeout_ticks) @@ -369,6 +375,8 @@ dpaa2_eventdev_dequeue(void *port, struct rte_event *ev, return dpaa2_eventdev_dequeue_burst(port, ev, 1, timeout_ticks); } +static _RTE_EVENT_DEQ_DEF(dpaa2_eventdev_dequeue); + static void dpaa2_eventdev_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *dev_info) @@ -997,6 +1005,8 @@ dpaa2_eventdev_txa_enqueue_same_dest(void *port, return rte_eth_tx_burst(m0->port, qid, m, nb_events); } +static _RTE_EVENT_TXA_ENQ_BURST_DEF(dpaa2_eventdev_txa_enqueue_same_dest); + static uint16_t dpaa2_eventdev_txa_enqueue(void *port, struct rte_event ev[], @@ -1015,6 +1025,8 @@ dpaa2_eventdev_txa_enqueue(void *port, return nb_events; } +static _RTE_EVENT_TXA_ENQ_BURST_DEF(dpaa2_eventdev_txa_enqueue); + static struct eventdev_ops dpaa2_eventdev_ops = { .dev_infos_get = dpaa2_eventdev_info_get, .dev_configure = dpaa2_eventdev_configure, @@ -1088,6 +1100,7 @@ dpaa2_eventdev_create(const char *name) struct dpaa2_eventdev *priv; struct dpaa2_dpcon_dev *dpcon_dev = NULL; struct dpaa2_dpci_dev *dpci_dev = NULL; + uint8_t dev_id; int ret; eventdev = rte_event_pmd_vdev_init(name, @@ -1099,14 +1112,32 @@ dpaa2_eventdev_create(const char *name) } eventdev->dev_ops = &dpaa2_eventdev_ops; - eventdev->enqueue = dpaa2_eventdev_enqueue; - eventdev->enqueue_burst = dpaa2_eventdev_enqueue_burst; - eventdev->enqueue_new_burst = dpaa2_eventdev_enqueue_burst; - eventdev->enqueue_forward_burst = dpaa2_eventdev_enqueue_burst; - eventdev->dequeue = dpaa2_eventdev_dequeue; - eventdev->dequeue_burst = dpaa2_eventdev_dequeue_burst; - eventdev->txa_enqueue = dpaa2_eventdev_txa_enqueue; - eventdev->txa_enqueue_same_dest = dpaa2_eventdev_txa_enqueue_same_dest; + dev_id = eventdev->data->dev_id; + + rte_event_set_enq_fn(dev_id, + _RTE_EVENT_ENQ_FUNC(dpaa2_eventdev_enqueue)); + rte_event_set_enq_burst_fn( + dev_id, + _RTE_EVENT_ENQ_BURST_FUNC(dpaa2_eventdev_enqueue_burst)); + rte_event_set_enq_new_burst_fn( + dev_id, + _RTE_EVENT_ENQ_BURST_FUNC(dpaa2_eventdev_enqueue_burst)); + rte_event_set_enq_fwd_burst_fn( + dev_id, + _RTE_EVENT_ENQ_BURST_FUNC(dpaa2_eventdev_enqueue_burst)); + + rte_event_set_deq_fn(dev_id, + _RTE_EVENT_DEQ_FUNC(dpaa2_eventdev_dequeue)); + rte_event_set_deq_burst_fn( + dev_id, + _RTE_EVENT_DEQ_BURST_FUNC(dpaa2_eventdev_dequeue_burst)); + + rte_event_set_tx_adapter_enq_fn( + dev_id, + _RTE_EVENT_TXA_ENQ_BURST_FUNC(dpaa2_eventdev_txa_enqueue)); + rte_event_set_tx_adapter_enq_same_dest_fn( + dev_id, 
_RTE_EVENT_TXA_ENQ_BURST_FUNC( + dpaa2_eventdev_txa_enqueue_same_dest)); /* For secondary processes, the primary has done all the work */ if (rte_eal_process_type() != RTE_PROC_PRIMARY) diff --git a/drivers/event/dsw/dsw_evdev.c b/drivers/event/dsw/dsw_evdev.c index 01f060fff3..8e9e29e363 100644 --- a/drivers/event/dsw/dsw_evdev.c +++ b/drivers/event/dsw/dsw_evdev.c @@ -420,12 +420,20 @@ static struct eventdev_ops dsw_evdev_ops = { .xstats_get_by_name = dsw_xstats_get_by_name }; +static _RTE_EVENT_ENQ_DEF(dsw_event_enqueue); +static _RTE_EVENT_ENQ_BURST_DEF(dsw_event_enqueue_burst); +static _RTE_EVENT_ENQ_BURST_DEF(dsw_event_enqueue_new_burst); +static _RTE_EVENT_ENQ_BURST_DEF(dsw_event_enqueue_forward_burst); +static _RTE_EVENT_DEQ_DEF(dsw_event_dequeue); +static _RTE_EVENT_DEQ_BURST_DEF(dsw_event_dequeue_burst); + static int dsw_probe(struct rte_vdev_device *vdev) { const char *name; struct rte_eventdev *dev; struct dsw_evdev *dsw; + uint8_t dev_id; name = rte_vdev_device_name(vdev); @@ -435,12 +443,20 @@ dsw_probe(struct rte_vdev_device *vdev) return -EFAULT; dev->dev_ops = &dsw_evdev_ops; - dev->enqueue = dsw_event_enqueue; - dev->enqueue_burst = dsw_event_enqueue_burst; - dev->enqueue_new_burst = dsw_event_enqueue_new_burst; - dev->enqueue_forward_burst = dsw_event_enqueue_forward_burst; - dev->dequeue = dsw_event_dequeue; - dev->dequeue_burst = dsw_event_dequeue_burst; + dev_id = dev->data->dev_id; + + rte_event_set_enq_fn(dev_id, _RTE_EVENT_ENQ_FUNC(dsw_event_enqueue)); + rte_event_set_enq_burst_fn( + dev_id, _RTE_EVENT_ENQ_BURST_FUNC(dsw_event_enqueue_burst)); + rte_event_set_enq_new_burst_fn( + dev_id, _RTE_EVENT_ENQ_BURST_FUNC(dsw_event_enqueue_new_burst)); + rte_event_set_enq_fwd_burst_fn( + dev_id, + _RTE_EVENT_ENQ_BURST_FUNC(dsw_event_enqueue_forward_burst)); + + rte_event_set_deq_fn(dev_id, _RTE_EVENT_DEQ_FUNC(dsw_event_dequeue)); + rte_event_set_deq_burst_fn( + dev_id, _RTE_EVENT_DEQ_BURST_FUNC(dsw_event_dequeue_burst)); if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; diff --git a/drivers/event/octeontx/ssovf_evdev.h b/drivers/event/octeontx/ssovf_evdev.h index bb1056a955..9950ac9919 100644 --- a/drivers/event/octeontx/ssovf_evdev.h +++ b/drivers/event/octeontx/ssovf_evdev.h @@ -172,13 +172,13 @@ ssovf_pmd_priv(const struct rte_eventdev *eventdev) extern int otx_logtype_ssovf; -uint16_t ssows_enq(void *port, const struct rte_event *ev); -uint16_t ssows_enq_burst(void *port, - const struct rte_event ev[], uint16_t nb_events); -uint16_t ssows_enq_new_burst(void *port, - const struct rte_event ev[], uint16_t nb_events); -uint16_t ssows_enq_fwd_burst(void *port, - const struct rte_event ev[], uint16_t nb_events); +uint16_t ssows_enq(uint8_t dev_id, uint8_t port_id, const struct rte_event *ev); +uint16_t ssows_enq_burst(uint8_t dev_id, uint8_t port_id, + const struct rte_event ev[], uint16_t nb_events); +uint16_t ssows_enq_new_burst(uint8_t dev_id, uint8_t port_id, + const struct rte_event ev[], uint16_t nb_events); +uint16_t ssows_enq_fwd_burst(uint8_t dev_id, uint8_t port_id, + const struct rte_event ev[], uint16_t nb_events); typedef void (*ssows_handle_event_t)(void *arg, struct rte_event ev); void ssows_flush_events(struct ssows *ws, uint8_t queue_id, ssows_handle_event_t fn, void *arg); diff --git a/drivers/event/octeontx/ssovf_worker.c b/drivers/event/octeontx/ssovf_worker.c index 8b056ddc5a..0d463521c6 100644 --- a/drivers/event/octeontx/ssovf_worker.c +++ b/drivers/event/octeontx/ssovf_worker.c @@ -93,9 +93,10 @@ ssows_release_event(struct ssows *ws) #define 
R(name, f2, f1, f0, flags) \ static uint16_t __rte_noinline __rte_hot \ -ssows_deq_ ##name(void *port, struct rte_event *ev, uint64_t timeout_ticks) \ +ssows_deq_ ##name(uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ + uint64_t timeout_ticks) \ { \ - struct ssows *ws = port; \ + struct ssows *ws = _rte_event_dev_prolog(dev_id, port_id); \ \ RTE_SET_USED(timeout_ticks); \ \ @@ -109,19 +110,21 @@ ssows_deq_ ##name(void *port, struct rte_event *ev, uint64_t timeout_ticks) \ } \ \ static uint16_t __rte_hot \ -ssows_deq_burst_ ##name(void *port, struct rte_event ev[], \ +ssows_deq_burst_ ##name(uint8_t dev_id, uint8_t port_id, \ + struct rte_event ev[], \ uint16_t nb_events, uint64_t timeout_ticks) \ { \ RTE_SET_USED(nb_events); \ \ - return ssows_deq_ ##name(port, ev, timeout_ticks); \ + return ssows_deq_ ##name(dev_id, port_id, ev, timeout_ticks); \ } \ \ static uint16_t __rte_hot \ -ssows_deq_timeout_ ##name(void *port, struct rte_event *ev, \ +ssows_deq_timeout_ ##name(uint8_t dev_id, uint8_t port_id, \ + struct rte_event *ev, \ uint64_t timeout_ticks) \ { \ - struct ssows *ws = port; \ + struct ssows *ws = _rte_event_dev_prolog(dev_id, port_id); \ uint64_t iter; \ uint16_t ret = 1; \ \ @@ -137,21 +140,23 @@ ssows_deq_timeout_ ##name(void *port, struct rte_event *ev, \ } \ \ static uint16_t __rte_hot \ -ssows_deq_timeout_burst_ ##name(void *port, struct rte_event ev[], \ +ssows_deq_timeout_burst_ ##name(uint8_t dev_id, uint8_t port_id, \ + struct rte_event ev[], \ uint16_t nb_events, uint64_t timeout_ticks) \ { \ RTE_SET_USED(nb_events); \ \ - return ssows_deq_timeout_ ##name(port, ev, timeout_ticks); \ + return ssows_deq_timeout_ ##name(dev_id, port_id, ev, \ + timeout_ticks); \ } SSO_RX_ADPTR_ENQ_FASTPATH_FUNC #undef R __rte_always_inline uint16_t __rte_hot -ssows_enq(void *port, const struct rte_event *ev) +ssows_enq(uint8_t dev_id, uint8_t port_id, const struct rte_event *ev) { - struct ssows *ws = port; + struct ssows *ws = _rte_event_dev_prolog(dev_id, port_id); uint16_t ret = 1; switch (ev->op) { @@ -172,17 +177,19 @@ ssows_enq(void *port, const struct rte_event *ev) } uint16_t __rte_hot -ssows_enq_burst(void *port, const struct rte_event ev[], uint16_t nb_events) +ssows_enq_burst(uint8_t dev_id, uint8_t port_id, + const struct rte_event ev[], uint16_t nb_events) { RTE_SET_USED(nb_events); - return ssows_enq(port, ev); + return ssows_enq(dev_id, port_id, ev); } uint16_t __rte_hot -ssows_enq_new_burst(void *port, const struct rte_event ev[], uint16_t nb_events) +ssows_enq_new_burst(uint8_t dev_id, uint8_t port_id, + const struct rte_event ev[], uint16_t nb_events) { uint16_t i; - struct ssows *ws = port; + struct ssows *ws = _rte_event_dev_prolog(dev_id, port_id); rte_smp_wmb(); for (i = 0; i < nb_events; i++) @@ -192,9 +199,10 @@ ssows_enq_new_burst(void *port, const struct rte_event ev[], uint16_t nb_events) } uint16_t __rte_hot -ssows_enq_fwd_burst(void *port, const struct rte_event ev[], uint16_t nb_events) +ssows_enq_fwd_burst(uint8_t dev_id, uint8_t port_id, + const struct rte_event ev[], uint16_t nb_events) { - struct ssows *ws = port; + struct ssows *ws = _rte_event_dev_prolog(dev_id, port_id); RTE_SET_USED(nb_events); ssows_forward_event(ws, ev); @@ -311,10 +319,13 @@ __sso_event_tx_adapter_enqueue(void *port, struct rte_event ev[], #define T(name, f3, f2, f1, f0, sz, flags) \ static uint16_t __rte_noinline __rte_hot \ -sso_event_tx_adapter_enqueue_ ## name(void *port, struct rte_event ev[], \ - uint16_t nb_events) \ +sso_event_tx_adapter_enqueue_ ## name(uint8_t 
dev_id, uint8_t port_id, \ + struct rte_event ev[], \ + uint16_t nb_events) \ { \ + void *port = _rte_event_dev_prolog(dev_id, port_id); \ uint64_t cmd[sz]; \ + \ return __sso_event_tx_adapter_enqueue(port, ev, nb_events, cmd, \ flags); \ } @@ -323,11 +334,12 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC #undef T static uint16_t __rte_hot -ssow_crypto_adapter_enqueue(void *port, struct rte_event ev[], - uint16_t nb_events) +ssow_crypto_adapter_enqueue(uint8_t dev_id, uint8_t port_id, + struct rte_event ev[], uint16_t nb_events) { - RTE_SET_USED(nb_events); + void *port = _rte_event_dev_prolog(dev_id, port_id); + RTE_SET_USED(nb_events); return otx_crypto_adapter_enqueue(port, ev->event_ptr); } @@ -335,15 +347,18 @@ void ssovf_fastpath_fns_set(struct rte_eventdev *dev) { struct ssovf_evdev *edev = ssovf_pmd_priv(dev); + struct rte_eventdev_api *api; + + api = &rte_eventdev_api[dev->data->dev_id]; - dev->enqueue = ssows_enq; - dev->enqueue_burst = ssows_enq_burst; - dev->enqueue_new_burst = ssows_enq_new_burst; - dev->enqueue_forward_burst = ssows_enq_fwd_burst; + api->enqueue = ssows_enq; + api->enqueue_burst = ssows_enq_burst; + api->enqueue_new_burst = ssows_enq_new_burst; + api->enqueue_forward_burst = ssows_enq_fwd_burst; - dev->ca_enqueue = ssow_crypto_adapter_enqueue; + api->ca_enqueue = ssow_crypto_adapter_enqueue; - const event_tx_adapter_enqueue ssow_txa_enqueue[2][2][2][2] = { + const rte_event_tx_adapter_enqueue_t ssow_txa_enqueue[2][2][2][2] = { #define T(name, f3, f2, f1, f0, sz, flags) \ [f3][f2][f1][f0] = sso_event_tx_adapter_enqueue_ ##name, @@ -351,16 +366,16 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC #undef T }; - dev->txa_enqueue = ssow_txa_enqueue + api->txa_enqueue = ssow_txa_enqueue [!!(edev->tx_offload_flags & OCCTX_TX_OFFLOAD_MBUF_NOFF_F)] [!!(edev->tx_offload_flags & OCCTX_TX_OFFLOAD_OL3_OL4_CSUM_F)] [!!(edev->tx_offload_flags & OCCTX_TX_OFFLOAD_L3_L4_CSUM_F)] [!!(edev->tx_offload_flags & OCCTX_TX_MULTI_SEG_F)]; - dev->txa_enqueue_same_dest = dev->txa_enqueue; + api->txa_enqueue_same_dest = api->txa_enqueue; /* Assigning dequeue func pointers */ - const event_dequeue_t ssow_deq[2][2][2] = { + const rte_event_dequeue_t ssow_deq[2][2][2] = { #define R(name, f2, f1, f0, flags) \ [f2][f1][f0] = ssows_deq_ ##name, @@ -368,12 +383,12 @@ SSO_RX_ADPTR_ENQ_FASTPATH_FUNC #undef R }; - dev->dequeue = ssow_deq - [!!(edev->rx_offload_flags & OCCTX_RX_VLAN_FLTR_F)] - [!!(edev->rx_offload_flags & OCCTX_RX_OFFLOAD_CSUM_F)] - [!!(edev->rx_offload_flags & OCCTX_RX_MULTI_SEG_F)]; + api->dequeue = + ssow_deq[!!(edev->rx_offload_flags & OCCTX_RX_VLAN_FLTR_F)] + [!!(edev->rx_offload_flags & OCCTX_RX_OFFLOAD_CSUM_F)] + [!!(edev->rx_offload_flags & OCCTX_RX_MULTI_SEG_F)]; - const event_dequeue_burst_t ssow_deq_burst[2][2][2] = { + const rte_event_dequeue_burst_t ssow_deq_burst[2][2][2] = { #define R(name, f2, f1, f0, flags) \ [f2][f1][f0] = ssows_deq_burst_ ##name, @@ -381,13 +396,13 @@ SSO_RX_ADPTR_ENQ_FASTPATH_FUNC #undef R }; - dev->dequeue_burst = ssow_deq_burst + api->dequeue_burst = ssow_deq_burst [!!(edev->rx_offload_flags & OCCTX_RX_VLAN_FLTR_F)] [!!(edev->rx_offload_flags & OCCTX_RX_OFFLOAD_CSUM_F)] [!!(edev->rx_offload_flags & OCCTX_RX_MULTI_SEG_F)]; if (edev->is_timeout_deq) { - const event_dequeue_t ssow_deq_timeout[2][2][2] = { + const rte_event_dequeue_t ssow_deq_timeout[2][2][2] = { #define R(name, f2, f1, f0, flags) \ [f2][f1][f0] = ssows_deq_timeout_ ##name, @@ -395,23 +410,24 @@ SSO_RX_ADPTR_ENQ_FASTPATH_FUNC #undef R }; - dev->dequeue = ssow_deq_timeout - [!!(edev->rx_offload_flags & 
OCCTX_RX_VLAN_FLTR_F)] - [!!(edev->rx_offload_flags & OCCTX_RX_OFFLOAD_CSUM_F)] - [!!(edev->rx_offload_flags & OCCTX_RX_MULTI_SEG_F)]; + api->dequeue = ssow_deq_timeout + [!!(edev->rx_offload_flags & OCCTX_RX_VLAN_FLTR_F)] + [!!(edev->rx_offload_flags & OCCTX_RX_OFFLOAD_CSUM_F)] + [!!(edev->rx_offload_flags & OCCTX_RX_MULTI_SEG_F)]; - const event_dequeue_burst_t ssow_deq_timeout_burst[2][2][2] = { + const rte_event_dequeue_burst_t + ssow_deq_timeout_burst[2][2][2] = { #define R(name, f2, f1, f0, flags) \ [f2][f1][f0] = ssows_deq_timeout_burst_ ##name, SSO_RX_ADPTR_ENQ_FASTPATH_FUNC #undef R - }; + }; - dev->dequeue_burst = ssow_deq_timeout_burst - [!!(edev->rx_offload_flags & OCCTX_RX_VLAN_FLTR_F)] - [!!(edev->rx_offload_flags & OCCTX_RX_OFFLOAD_CSUM_F)] - [!!(edev->rx_offload_flags & OCCTX_RX_MULTI_SEG_F)]; + api->dequeue_burst = ssow_deq_timeout_burst + [!!(edev->rx_offload_flags & OCCTX_RX_VLAN_FLTR_F)] + [!!(edev->rx_offload_flags & OCCTX_RX_OFFLOAD_CSUM_F)] + [!!(edev->rx_offload_flags & OCCTX_RX_MULTI_SEG_F)]; } } diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c index 00902ebf53..41b9409d66 100644 --- a/drivers/event/octeontx2/otx2_evdev.c +++ b/drivers/event/octeontx2/otx2_evdev.c @@ -44,29 +44,32 @@ void sso_fastpath_fns_set(struct rte_eventdev *event_dev) { struct otx2_sso_evdev *dev = sso_pmd_priv(event_dev); + struct rte_eventdev_api *api; + /* Single WS modes */ - const event_dequeue_t ssogws_deq[2][2][2][2][2][2][2] = { + const rte_event_dequeue_t ssogws_deq[2][2][2][2][2][2][2] = { #define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_ ##name, SSO_RX_ADPTR_ENQ_FASTPATH_FUNC #undef R }; - const event_dequeue_burst_t ssogws_deq_burst[2][2][2][2][2][2][2] = { + const rte_event_dequeue_burst_t + ssogws_deq_burst[2][2][2][2][2][2][2] = { #define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_burst_ ##name, SSO_RX_ADPTR_ENQ_FASTPATH_FUNC #undef R }; - const event_dequeue_t ssogws_deq_timeout[2][2][2][2][2][2][2] = { + const rte_event_dequeue_t ssogws_deq_timeout[2][2][2][2][2][2][2] = { #define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_timeout_ ##name, SSO_RX_ADPTR_ENQ_FASTPATH_FUNC #undef R }; - const event_dequeue_burst_t + const rte_event_dequeue_burst_t ssogws_deq_timeout_burst[2][2][2][2][2][2][2] = { #define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ [f6][f5][f4][f3][f2][f1][f0] = \ @@ -75,14 +78,14 @@ SSO_RX_ADPTR_ENQ_FASTPATH_FUNC #undef R }; - const event_dequeue_t ssogws_deq_seg[2][2][2][2][2][2][2] = { + const rte_event_dequeue_t ssogws_deq_seg[2][2][2][2][2][2][2] = { #define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_deq_seg_ ##name, SSO_RX_ADPTR_ENQ_FASTPATH_FUNC #undef R }; - const event_dequeue_burst_t + const rte_event_dequeue_burst_t ssogws_deq_seg_burst[2][2][2][2][2][2][2] = { #define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ [f6][f5][f4][f3][f2][f1][f0] = \ @@ -91,7 +94,8 @@ SSO_RX_ADPTR_ENQ_FASTPATH_FUNC #undef R }; - const event_dequeue_t ssogws_deq_seg_timeout[2][2][2][2][2][2][2] = { + const rte_event_dequeue_t + ssogws_deq_seg_timeout[2][2][2][2][2][2][2] = { #define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ [f6][f5][f4][f3][f2][f1][f0] = \ otx2_ssogws_deq_seg_timeout_ ##name, @@ -99,7 +103,7 @@ SSO_RX_ADPTR_ENQ_FASTPATH_FUNC #undef R }; - const event_dequeue_burst_t + const rte_event_dequeue_burst_t 
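/*
 * Editorial note, not part of the patch: ssovf_fastpath_fns_set() above no
 * longer writes its function pointers into struct rte_eventdev; it fills a
 * per-device slot of the library-owned rte_eventdev_api[] array, indexed by
 * dev_id. The sketch below only illustrates that layout with hypothetical
 * example_* names; the real struct rte_eventdev_api is defined earlier in
 * this series and carries the full set of enqueue/dequeue/adapter pointers.
 */
#include <stdint.h>

struct rte_event;	/* full definition lives in rte_eventdev.h */

typedef uint16_t (*example_enqueue_t)(uint8_t dev_id, uint8_t port_id,
				      const struct rte_event *ev);
typedef uint16_t (*example_dequeue_t)(uint8_t dev_id, uint8_t port_id,
				      struct rte_event *ev,
				      uint64_t timeout_ticks);

/* Cut-down stand-in for struct rte_eventdev_api: one slot per eventdev. */
struct example_eventdev_api {
	example_enqueue_t enqueue;
	example_dequeue_t dequeue;
	/* ... enqueue_burst, dequeue_burst, txa_enqueue, ca_enqueue ... */
};

struct example_eventdev_api example_eventdev_api[16];

/* What a PMD's fastpath_fns_set() boils down to under the new scheme. */
static void
example_fp_fns_set(uint8_t dev_id, example_enqueue_t enq, example_dequeue_t deq)
{
	example_eventdev_api[dev_id].enqueue = enq;
	example_eventdev_api[dev_id].dequeue = deq;
}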
ssogws_deq_seg_timeout_burst[2][2][2][2][2][2][2] = { #define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ [f6][f5][f4][f3][f2][f1][f0] = \ @@ -110,14 +114,14 @@ SSO_RX_ADPTR_ENQ_FASTPATH_FUNC /* Dual WS modes */ - const event_dequeue_t ssogws_dual_deq[2][2][2][2][2][2][2] = { + const rte_event_dequeue_t ssogws_dual_deq[2][2][2][2][2][2][2] = { #define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_dual_deq_ ##name, SSO_RX_ADPTR_ENQ_FASTPATH_FUNC #undef R }; - const event_dequeue_burst_t + const rte_event_dequeue_burst_t ssogws_dual_deq_burst[2][2][2][2][2][2][2] = { #define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ [f6][f5][f4][f3][f2][f1][f0] = \ @@ -126,7 +130,8 @@ SSO_RX_ADPTR_ENQ_FASTPATH_FUNC #undef R }; - const event_dequeue_t ssogws_dual_deq_timeout[2][2][2][2][2][2][2] = { + const rte_event_dequeue_t + ssogws_dual_deq_timeout[2][2][2][2][2][2][2] = { #define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ [f6][f5][f4][f3][f2][f1][f0] = \ otx2_ssogws_dual_deq_timeout_ ##name, @@ -134,7 +139,7 @@ SSO_RX_ADPTR_ENQ_FASTPATH_FUNC #undef R }; - const event_dequeue_burst_t + const rte_event_dequeue_burst_t ssogws_dual_deq_timeout_burst[2][2][2][2][2][2][2] = { #define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ [f6][f5][f4][f3][f2][f1][f0] = \ @@ -143,14 +148,14 @@ SSO_RX_ADPTR_ENQ_FASTPATH_FUNC #undef R }; - const event_dequeue_t ssogws_dual_deq_seg[2][2][2][2][2][2][2] = { + const rte_event_dequeue_t ssogws_dual_deq_seg[2][2][2][2][2][2][2] = { #define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ [f6][f5][f4][f3][f2][f1][f0] = otx2_ssogws_dual_deq_seg_ ##name, SSO_RX_ADPTR_ENQ_FASTPATH_FUNC #undef R }; - const event_dequeue_burst_t + const rte_event_dequeue_burst_t ssogws_dual_deq_seg_burst[2][2][2][2][2][2][2] = { #define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ [f6][f5][f4][f3][f2][f1][f0] = \ @@ -159,7 +164,7 @@ SSO_RX_ADPTR_ENQ_FASTPATH_FUNC #undef R }; - const event_dequeue_t + const rte_event_dequeue_t ssogws_dual_deq_seg_timeout[2][2][2][2][2][2][2] = { #define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ [f6][f5][f4][f3][f2][f1][f0] = \ @@ -168,7 +173,7 @@ SSO_RX_ADPTR_ENQ_FASTPATH_FUNC #undef R }; - const event_dequeue_burst_t + const rte_event_dequeue_burst_t ssogws_dual_deq_seg_timeout_burst[2][2][2][2][2][2][2] = { #define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ [f6][f5][f4][f3][f2][f1][f0] = \ @@ -178,7 +183,7 @@ SSO_RX_ADPTR_ENQ_FASTPATH_FUNC }; /* Tx modes */ - const event_tx_adapter_enqueue + const rte_event_tx_adapter_enqueue_t ssogws_tx_adptr_enq[2][2][2][2][2][2][2] = { #define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \ [f6][f5][f4][f3][f2][f1][f0] = \ @@ -187,7 +192,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC #undef T }; - const event_tx_adapter_enqueue + const rte_event_tx_adapter_enqueue_t ssogws_tx_adptr_enq_seg[2][2][2][2][2][2][2] = { #define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \ [f6][f5][f4][f3][f2][f1][f0] = \ @@ -196,7 +201,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC #undef T }; - const event_tx_adapter_enqueue + const rte_event_tx_adapter_enqueue_t ssogws_dual_tx_adptr_enq[2][2][2][2][2][2][2] = { #define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \ [f6][f5][f4][f3][f2][f1][f0] = \ @@ -205,7 +210,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC #undef T }; - const event_tx_adapter_enqueue + const rte_event_tx_adapter_enqueue_t ssogws_dual_tx_adptr_enq_seg[2][2][2][2][2][2][2] = { #define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \ [f6][f5][f4][f3][f2][f1][f0] = \ @@ -214,12 +219,14 @@ 
SSO_TX_ADPTR_ENQ_FASTPATH_FUNC #undef T }; - event_dev->enqueue = otx2_ssogws_enq; - event_dev->enqueue_burst = otx2_ssogws_enq_burst; - event_dev->enqueue_new_burst = otx2_ssogws_enq_new_burst; - event_dev->enqueue_forward_burst = otx2_ssogws_enq_fwd_burst; + api = &rte_eventdev_api[event_dev->data->dev_id]; + + api->enqueue = otx2_ssogws_enq; + api->enqueue_burst = otx2_ssogws_enq_burst; + api->enqueue_new_burst = otx2_ssogws_enq_new_burst; + api->enqueue_forward_burst = otx2_ssogws_enq_fwd_burst; if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) { - event_dev->dequeue = ssogws_deq_seg + api->dequeue = ssogws_deq_seg [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] @@ -227,7 +234,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = ssogws_deq_seg_burst + api->dequeue_burst = ssogws_deq_seg_burst [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] @@ -236,7 +243,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; if (dev->is_timeout_deq) { - event_dev->dequeue = ssogws_deq_seg_timeout + api->dequeue = ssogws_deq_seg_timeout [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] @@ -244,7 +251,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = + api->dequeue_burst = ssogws_deq_seg_timeout_burst [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] @@ -255,7 +262,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; } } else { - event_dev->dequeue = ssogws_deq + api->dequeue = ssogws_deq [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] @@ -263,7 +270,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = ssogws_deq_burst + api->dequeue_burst = ssogws_deq_burst [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] @@ -272,7 +279,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; if (dev->is_timeout_deq) { - event_dev->dequeue = ssogws_deq_timeout + api->dequeue = ssogws_deq_timeout [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_MARK_UPDATE_F)] @@ -280,7 +287,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->rx_offloads & NIX_RX_OFFLOAD_CHECKSUM_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = + api->dequeue_burst = ssogws_deq_timeout_burst [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] @@ 
-294,7 +301,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) { /* [SEC] [TSMP] [MBUF_NOFF] [VLAN] [OL3_L4_CSUM] [L3_L4_CSUM] */ - event_dev->txa_enqueue = ssogws_tx_adptr_enq_seg + api->txa_enqueue = ssogws_tx_adptr_enq_seg [!!(dev->tx_offloads & NIX_TX_OFFLOAD_SECURITY_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)] @@ -303,7 +310,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; } else { - event_dev->txa_enqueue = ssogws_tx_adptr_enq + api->txa_enqueue = ssogws_tx_adptr_enq [!!(dev->tx_offloads & NIX_TX_OFFLOAD_SECURITY_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSTAMP_F)] @@ -312,18 +319,16 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; } - event_dev->ca_enqueue = otx2_ssogws_ca_enq; + api->ca_enqueue = otx2_ssogws_ca_enq; if (dev->dual_ws) { - event_dev->enqueue = otx2_ssogws_dual_enq; - event_dev->enqueue_burst = otx2_ssogws_dual_enq_burst; - event_dev->enqueue_new_burst = - otx2_ssogws_dual_enq_new_burst; - event_dev->enqueue_forward_burst = - otx2_ssogws_dual_enq_fwd_burst; + api->enqueue = otx2_ssogws_dual_enq; + api->enqueue_burst = otx2_ssogws_dual_enq_burst; + api->enqueue_new_burst = otx2_ssogws_dual_enq_new_burst; + api->enqueue_forward_burst = otx2_ssogws_dual_enq_fwd_burst; if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) { - event_dev->dequeue = ssogws_dual_deq_seg + api->dequeue = ssogws_dual_deq_seg [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)] [!!(dev->rx_offloads & @@ -336,7 +341,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC NIX_RX_OFFLOAD_CHECKSUM_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = ssogws_dual_deq_seg_burst + api->dequeue_burst = ssogws_dual_deq_seg_burst [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_TSTAMP_F)] @@ -349,7 +354,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; if (dev->is_timeout_deq) { - event_dev->dequeue = + api->dequeue = ssogws_dual_deq_seg_timeout [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)] @@ -365,7 +370,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = + api->dequeue_burst = ssogws_dual_deq_seg_timeout_burst [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)] @@ -383,7 +388,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC NIX_RX_OFFLOAD_RSS_F)]; } } else { - event_dev->dequeue = ssogws_dual_deq + api->dequeue = ssogws_dual_deq [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)] [!!(dev->rx_offloads & @@ -396,7 +401,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC NIX_RX_OFFLOAD_CHECKSUM_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = ssogws_dual_deq_burst + api->dequeue_burst = ssogws_dual_deq_burst [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)] [!!(dev->rx_offloads & @@ -410,7 +415,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->rx_offloads & NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; if (dev->is_timeout_deq) { - event_dev->dequeue = + api->dequeue = ssogws_dual_deq_timeout [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)] @@ -426,7 +431,7 @@ 
SSO_TX_ADPTR_ENQ_FASTPATH_FUNC NIX_RX_OFFLOAD_PTYPE_F)] [!!(dev->rx_offloads & NIX_RX_OFFLOAD_RSS_F)]; - event_dev->dequeue_burst = + api->dequeue_burst = ssogws_dual_deq_timeout_burst [!!(dev->rx_offloads & NIX_RX_OFFLOAD_SECURITY_F)] @@ -447,7 +452,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) { /* [SEC] [TSMP] [MBUF_NOFF] [VLAN] [OL3_L4_CSUM] [L3_L4_CSUM] */ - event_dev->txa_enqueue = ssogws_dual_tx_adptr_enq_seg + api->txa_enqueue = ssogws_dual_tx_adptr_enq_seg [!!(dev->tx_offloads & NIX_TX_OFFLOAD_SECURITY_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)] @@ -461,7 +466,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; } else { - event_dev->txa_enqueue = ssogws_dual_tx_adptr_enq + api->txa_enqueue = ssogws_dual_tx_adptr_enq [!!(dev->tx_offloads & NIX_TX_OFFLOAD_SECURITY_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_TSO_F)] @@ -475,10 +480,10 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; } - event_dev->ca_enqueue = otx2_ssogws_dual_ca_enq; + api->ca_enqueue = otx2_ssogws_dual_ca_enq; } - event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue; + api->txa_enqueue_same_dest = api->txa_enqueue; rte_mb(); } diff --git a/drivers/event/octeontx2/otx2_evdev.h b/drivers/event/octeontx2/otx2_evdev.h index a5d34b7df7..64ce165ac1 100644 --- a/drivers/event/octeontx2/otx2_evdev.h +++ b/drivers/event/octeontx2/otx2_evdev.h @@ -279,93 +279,98 @@ parse_kvargs_value(const char *key, const char *value, void *opaque) #define SSO_TX_ADPTR_ENQ_FASTPATH_FUNC NIX_TX_FASTPATH_MODES /* Single WS API's */ -uint16_t otx2_ssogws_enq(void *port, const struct rte_event *ev); -uint16_t otx2_ssogws_enq_burst(void *port, const struct rte_event ev[], - uint16_t nb_events); -uint16_t otx2_ssogws_enq_new_burst(void *port, const struct rte_event ev[], +uint16_t otx2_ssogws_enq(uint8_t dev_id, uint8_t port_id, + const struct rte_event *ev); +uint16_t otx2_ssogws_enq_burst(uint8_t dev_id, uint8_t port_id, + const struct rte_event ev[], uint16_t nb_events); +uint16_t otx2_ssogws_enq_new_burst(uint8_t dev_id, uint8_t port_id, + const struct rte_event ev[], uint16_t nb_events); -uint16_t otx2_ssogws_enq_fwd_burst(void *port, const struct rte_event ev[], +uint16_t otx2_ssogws_enq_fwd_burst(uint8_t dev_id, uint8_t port_id, + const struct rte_event ev[], uint16_t nb_events); /* Dual WS API's */ -uint16_t otx2_ssogws_dual_enq(void *port, const struct rte_event *ev); -uint16_t otx2_ssogws_dual_enq_burst(void *port, const struct rte_event ev[], +uint16_t otx2_ssogws_dual_enq(uint8_t dev_id, uint8_t port_id, + const struct rte_event *ev); +uint16_t otx2_ssogws_dual_enq_burst(uint8_t dev_id, uint8_t port_id, + const struct rte_event ev[], uint16_t nb_events); -uint16_t otx2_ssogws_dual_enq_new_burst(void *port, const struct rte_event ev[], +uint16_t otx2_ssogws_dual_enq_new_burst(uint8_t dev_id, uint8_t port_id, + const struct rte_event ev[], uint16_t nb_events); -uint16_t otx2_ssogws_dual_enq_fwd_burst(void *port, const struct rte_event ev[], +uint16_t otx2_ssogws_dual_enq_fwd_burst(uint8_t dev_id, uint8_t port_id, + const struct rte_event ev[], uint16_t nb_events); /* Auto generated API's */ -#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ -uint16_t otx2_ssogws_deq_ ##name(void *port, struct rte_event *ev, \ - uint64_t timeout_ticks); \ -uint16_t otx2_ssogws_deq_burst_ ##name(void *port, struct rte_event ev[], \ - uint16_t nb_events, \ - uint64_t timeout_ticks); \ -uint16_t otx2_ssogws_deq_timeout_ ##name(void 
*port, \ - struct rte_event *ev, \ - uint64_t timeout_ticks); \ -uint16_t otx2_ssogws_deq_timeout_burst_ ##name(void *port, \ - struct rte_event ev[], \ - uint16_t nb_events, \ - uint64_t timeout_ticks); \ -uint16_t otx2_ssogws_deq_seg_ ##name(void *port, struct rte_event *ev, \ - uint64_t timeout_ticks); \ -uint16_t otx2_ssogws_deq_seg_burst_ ##name(void *port, \ - struct rte_event ev[], \ - uint16_t nb_events, \ - uint64_t timeout_ticks); \ -uint16_t otx2_ssogws_deq_seg_timeout_ ##name(void *port, \ - struct rte_event *ev, \ - uint64_t timeout_ticks); \ -uint16_t otx2_ssogws_deq_seg_timeout_burst_ ##name(void *port, \ - struct rte_event ev[], \ - uint16_t nb_events, \ - uint64_t timeout_ticks); \ +#define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ + uint16_t otx2_ssogws_deq_##name(uint8_t dev_id, uint8_t port_id, \ + struct rte_event *ev, \ + uint64_t timeout_ticks); \ + uint16_t otx2_ssogws_deq_burst_##name( \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks); \ + uint16_t otx2_ssogws_deq_timeout_##name( \ + uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ + uint64_t timeout_ticks); \ + uint16_t otx2_ssogws_deq_timeout_burst_##name( \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks); \ + uint16_t otx2_ssogws_deq_seg_##name(uint8_t dev_id, uint8_t port_id, \ + struct rte_event *ev, \ + uint64_t timeout_ticks); \ + uint16_t otx2_ssogws_deq_seg_burst_##name( \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks); \ + uint16_t otx2_ssogws_deq_seg_timeout_##name( \ + uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ + uint64_t timeout_ticks); \ + uint16_t otx2_ssogws_deq_seg_timeout_burst_##name( \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks); \ \ -uint16_t otx2_ssogws_dual_deq_ ##name(void *port, struct rte_event *ev, \ - uint64_t timeout_ticks); \ -uint16_t otx2_ssogws_dual_deq_burst_ ##name(void *port, \ - struct rte_event ev[], \ - uint16_t nb_events, \ - uint64_t timeout_ticks); \ -uint16_t otx2_ssogws_dual_deq_timeout_ ##name(void *port, \ - struct rte_event *ev, \ - uint64_t timeout_ticks); \ -uint16_t otx2_ssogws_dual_deq_timeout_burst_ ##name(void *port, \ - struct rte_event ev[], \ - uint16_t nb_events, \ - uint64_t timeout_ticks); \ -uint16_t otx2_ssogws_dual_deq_seg_ ##name(void *port, struct rte_event *ev, \ - uint64_t timeout_ticks); \ -uint16_t otx2_ssogws_dual_deq_seg_burst_ ##name(void *port, \ - struct rte_event ev[], \ - uint16_t nb_events, \ - uint64_t timeout_ticks); \ -uint16_t otx2_ssogws_dual_deq_seg_timeout_ ##name(void *port, \ - struct rte_event *ev, \ - uint64_t timeout_ticks); \ -uint16_t otx2_ssogws_dual_deq_seg_timeout_burst_ ##name(void *port, \ - struct rte_event ev[], \ - uint16_t nb_events, \ - uint64_t timeout_ticks);\ + uint16_t otx2_ssogws_dual_deq_##name(uint8_t dev_id, uint8_t port_id, \ + struct rte_event *ev, \ + uint64_t timeout_ticks); \ + uint16_t otx2_ssogws_dual_deq_burst_##name( \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks); \ + uint16_t otx2_ssogws_dual_deq_timeout_##name( \ + uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ + uint64_t timeout_ticks); \ + uint16_t otx2_ssogws_dual_deq_timeout_burst_##name( \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks); \ + uint16_t 
otx2_ssogws_dual_deq_seg_##name( \ + uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ + uint64_t timeout_ticks); \ + uint16_t otx2_ssogws_dual_deq_seg_burst_##name( \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks); \ + uint16_t otx2_ssogws_dual_deq_seg_timeout_##name( \ + uint8_t dev_id, uint8_t port_id, struct rte_event *ev, \ + uint64_t timeout_ticks); \ + uint16_t otx2_ssogws_dual_deq_seg_timeout_burst_##name( \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events, uint64_t timeout_ticks); SSO_RX_ADPTR_ENQ_FASTPATH_FUNC #undef R -#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \ -uint16_t otx2_ssogws_tx_adptr_enq_ ## name(void *port, struct rte_event ev[],\ - uint16_t nb_events); \ -uint16_t otx2_ssogws_tx_adptr_enq_seg_ ## name(void *port, \ - struct rte_event ev[], \ - uint16_t nb_events); \ -uint16_t otx2_ssogws_dual_tx_adptr_enq_ ## name(void *port, \ - struct rte_event ev[], \ - uint16_t nb_events); \ -uint16_t otx2_ssogws_dual_tx_adptr_enq_seg_ ## name(void *port, \ - struct rte_event ev[], \ - uint16_t nb_events); \ +#define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \ + uint16_t otx2_ssogws_tx_adptr_enq_##name( \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events); \ + uint16_t otx2_ssogws_tx_adptr_enq_seg_##name( \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events); \ + uint16_t otx2_ssogws_dual_tx_adptr_enq_##name( \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events); \ + uint16_t otx2_ssogws_dual_tx_adptr_enq_seg_##name( \ + uint8_t dev_id, uint8_t port_id, struct rte_event ev[], \ + uint16_t nb_events); SSO_TX_ADPTR_ENQ_FASTPATH_FUNC #undef T diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h index ecf7eb9f56..b9b60a9667 100644 --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h @@ -62,9 +62,10 @@ otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev) } static uint16_t __rte_hot -otx2_ssogws_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events) +otx2_ssogws_ca_enq(uint8_t dev_id, uint8_t port_id, struct rte_event ev[], + uint16_t nb_events) { - struct otx2_ssogws *ws = port; + struct otx2_ssogws *ws = _rte_event_dev_prolog(dev_id, port_id); RTE_SET_USED(nb_events); @@ -72,9 +73,10 @@ otx2_ssogws_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events) } static uint16_t __rte_hot -otx2_ssogws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events) +otx2_ssogws_dual_ca_enq(uint8_t dev_id, uint8_t port_id, struct rte_event ev[], + uint16_t nb_events) { - struct otx2_ssogws_dual *ws = port; + struct otx2_ssogws_dual *ws = _rte_event_dev_prolog(dev_id, port_id); RTE_SET_USED(nb_events); diff --git a/drivers/event/octeontx2/otx2_worker.c b/drivers/event/octeontx2/otx2_worker.c index 95139d27a3..8ea41368e7 100644 --- a/drivers/event/octeontx2/otx2_worker.c +++ b/drivers/event/octeontx2/otx2_worker.c @@ -76,11 +76,12 @@ otx2_ssogws_forward_event(struct otx2_ssogws *ws, const struct rte_event *ev) } #define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ -uint16_t __rte_hot \ -otx2_ssogws_deq_ ##name(void *port, struct rte_event *ev, \ +uint16_t __rte_hot \ +otx2_ssogws_deq_ ##name(uint8_t dev_id, uint8_t port_id, \ + struct rte_event *ev, \ uint64_t timeout_ticks) \ { \ - struct otx2_ssogws *ws = port; \ + struct otx2_ssogws *ws = 
_rte_event_dev_prolog(dev_id, port_id);\ \ RTE_SET_USED(timeout_ticks); \ \ @@ -93,21 +94,24 @@ otx2_ssogws_deq_ ##name(void *port, struct rte_event *ev, \ return otx2_ssogws_get_work(ws, ev, flags, ws->lookup_mem); \ } \ \ -uint16_t __rte_hot \ -otx2_ssogws_deq_burst_ ##name(void *port, struct rte_event ev[], \ +uint16_t __rte_hot \ +otx2_ssogws_deq_burst_ ##name(uint8_t dev_id, uint8_t port_id, \ + struct rte_event ev[], \ uint16_t nb_events, \ uint64_t timeout_ticks) \ { \ RTE_SET_USED(nb_events); \ \ - return otx2_ssogws_deq_ ##name(port, ev, timeout_ticks); \ + return otx2_ssogws_deq_ ##name(dev_id, port_id, ev, \ + timeout_ticks); \ } \ \ -uint16_t __rte_hot \ -otx2_ssogws_deq_timeout_ ##name(void *port, struct rte_event *ev, \ +uint16_t __rte_hot \ +otx2_ssogws_deq_timeout_ ##name(uint8_t dev_id, uint8_t port_id, \ + struct rte_event *ev, \ uint64_t timeout_ticks) \ { \ - struct otx2_ssogws *ws = port; \ + struct otx2_ssogws *ws = _rte_event_dev_prolog(dev_id, port_id);\ uint16_t ret = 1; \ uint64_t iter; \ \ @@ -125,21 +129,24 @@ otx2_ssogws_deq_timeout_ ##name(void *port, struct rte_event *ev, \ return ret; \ } \ \ -uint16_t __rte_hot \ -otx2_ssogws_deq_timeout_burst_ ##name(void *port, struct rte_event ev[],\ +uint16_t __rte_hot \ +otx2_ssogws_deq_timeout_burst_ ##name(uint8_t dev_id, uint8_t port_id, \ + struct rte_event ev[], \ uint16_t nb_events, \ uint64_t timeout_ticks) \ { \ RTE_SET_USED(nb_events); \ \ - return otx2_ssogws_deq_timeout_ ##name(port, ev, timeout_ticks);\ + return otx2_ssogws_deq_timeout_ ##name(dev_id, port_id, \ + ev, timeout_ticks); \ } \ \ -uint16_t __rte_hot \ -otx2_ssogws_deq_seg_ ##name(void *port, struct rte_event *ev, \ +uint16_t __rte_hot \ +otx2_ssogws_deq_seg_ ##name(uint8_t dev_id, uint8_t port_id, \ + struct rte_event *ev, \ uint64_t timeout_ticks) \ { \ - struct otx2_ssogws *ws = port; \ + struct otx2_ssogws *ws = _rte_event_dev_prolog(dev_id, port_id);\ \ RTE_SET_USED(timeout_ticks); \ \ @@ -153,21 +160,24 @@ otx2_ssogws_deq_seg_ ##name(void *port, struct rte_event *ev, \ ws->lookup_mem); \ } \ \ -uint16_t __rte_hot \ -otx2_ssogws_deq_seg_burst_ ##name(void *port, struct rte_event ev[], \ +uint16_t __rte_hot \ +otx2_ssogws_deq_seg_burst_ ##name(uint8_t dev_id, uint8_t port_id, \ + struct rte_event ev[], \ uint16_t nb_events, \ uint64_t timeout_ticks) \ { \ RTE_SET_USED(nb_events); \ \ - return otx2_ssogws_deq_seg_ ##name(port, ev, timeout_ticks); \ + return otx2_ssogws_deq_seg_ ##name(dev_id, port_id, ev, \ + timeout_ticks); \ } \ \ -uint16_t __rte_hot \ -otx2_ssogws_deq_seg_timeout_ ##name(void *port, struct rte_event *ev, \ +uint16_t __rte_hot \ +otx2_ssogws_deq_seg_timeout_ ##name(uint8_t dev_id, uint8_t port_id, \ + struct rte_event *ev, \ uint64_t timeout_ticks) \ { \ - struct otx2_ssogws *ws = port; \ + struct otx2_ssogws *ws = _rte_event_dev_prolog(dev_id, port_id);\ uint16_t ret = 1; \ uint64_t iter; \ \ @@ -187,15 +197,16 @@ otx2_ssogws_deq_seg_timeout_ ##name(void *port, struct rte_event *ev, \ return ret; \ } \ \ -uint16_t __rte_hot \ -otx2_ssogws_deq_seg_timeout_burst_ ##name(void *port, \ +uint16_t __rte_hot \ +otx2_ssogws_deq_seg_timeout_burst_ ##name(uint8_t dev_id, \ + uint8_t port_id, \ struct rte_event ev[], \ uint16_t nb_events, \ uint64_t timeout_ticks) \ { \ RTE_SET_USED(nb_events); \ \ - return otx2_ssogws_deq_seg_timeout_ ##name(port, ev, \ + return otx2_ssogws_deq_seg_timeout_ ##name(dev_id, port_id, ev, \ timeout_ticks); \ } @@ -203,9 +214,9 @@ SSO_RX_ADPTR_ENQ_FASTPATH_FUNC #undef R uint16_t __rte_hot 
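/*
 * Editorial note, not part of the patch: unlike the cnxk/octeontx drivers,
 * the dlb2/dpaa/dpaa2/dsw/opdl hunks in this patch keep their legacy
 * void *port handlers and wrap them with _RTE_EVENT_*_PROTO()/_DEF(),
 * registering the result via _RTE_EVENT_*_FUNC() or the rte_event_set_*_fn()
 * setters. The macro bodies are not part of this patch; the sketch below is
 * only a plausible shape for such a wrapper, assuming it resolves the port
 * with _rte_event_dev_prolog() and forwards to the legacy handler.
 */
#define EXAMPLE_EVENT_DEQ_DEF(fn)					\
uint16_t fn##_thunk(uint8_t dev_id, uint8_t port_id,			\
		    struct rte_event *ev, uint64_t timeout_ticks)	\
{									\
	void *port = _rte_event_dev_prolog(dev_id, port_id);		\
									\
	return fn(port, ev, timeout_ticks);				\
}
#define EXAMPLE_EVENT_DEQ_FUNC(fn) fn##_thunk

/* Usage, mirroring the dlb2/dpaa hunks:
 *   static EXAMPLE_EVENT_DEQ_DEF(dlb2_event_dequeue)
 *   api->dequeue = EXAMPLE_EVENT_DEQ_FUNC(dlb2_event_dequeue);
 */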
-otx2_ssogws_enq(void *port, const struct rte_event *ev) +otx2_ssogws_enq(uint8_t dev_id, uint8_t port_id, const struct rte_event *ev) { - struct otx2_ssogws *ws = port; + struct otx2_ssogws *ws = _rte_event_dev_prolog(dev_id, port_id); switch (ev->op) { case RTE_EVENT_OP_NEW: @@ -225,18 +236,20 @@ otx2_ssogws_enq(void *port, const struct rte_event *ev) } uint16_t __rte_hot -otx2_ssogws_enq_burst(void *port, const struct rte_event ev[], +otx2_ssogws_enq_burst(uint8_t dev_id, uint8_t port_id, + const struct rte_event ev[], uint16_t nb_events) { RTE_SET_USED(nb_events); - return otx2_ssogws_enq(port, ev); + return otx2_ssogws_enq(dev_id, port_id, ev); } uint16_t __rte_hot -otx2_ssogws_enq_new_burst(void *port, const struct rte_event ev[], +otx2_ssogws_enq_new_burst(uint8_t dev_id, uint8_t port_id, + const struct rte_event ev[], uint16_t nb_events) { - struct otx2_ssogws *ws = port; + struct otx2_ssogws *ws = _rte_event_dev_prolog(dev_id, port_id); uint16_t i, rc = 1; rte_smp_mb(); @@ -250,10 +263,11 @@ otx2_ssogws_enq_new_burst(void *port, const struct rte_event ev[], } uint16_t __rte_hot -otx2_ssogws_enq_fwd_burst(void *port, const struct rte_event ev[], +otx2_ssogws_enq_fwd_burst(uint8_t dev_id, uint8_t port_id, + const struct rte_event ev[], uint16_t nb_events) { - struct otx2_ssogws *ws = port; + struct otx2_ssogws *ws = _rte_event_dev_prolog(dev_id, port_id); RTE_SET_USED(nb_events); otx2_ssogws_forward_event(ws, ev); @@ -263,10 +277,11 @@ otx2_ssogws_enq_fwd_burst(void *port, const struct rte_event ev[], #define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \ uint16_t __rte_hot \ -otx2_ssogws_tx_adptr_enq_ ## name(void *port, struct rte_event ev[], \ +otx2_ssogws_tx_adptr_enq_ ## name(uint8_t dev_id, uint8_t port_id, \ + struct rte_event ev[], \ uint16_t nb_events) \ { \ - struct otx2_ssogws *ws = port; \ + struct otx2_ssogws *ws = _rte_event_dev_prolog(dev_id, port_id);\ uint64_t cmd[sz]; \ \ RTE_SET_USED(nb_events); \ @@ -281,11 +296,12 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC #define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \ uint16_t __rte_hot \ -otx2_ssogws_tx_adptr_enq_seg_ ## name(void *port, struct rte_event ev[],\ +otx2_ssogws_tx_adptr_enq_seg_ ## name(uint8_t dev_id, uint8_t port_id, \ + struct rte_event ev[], \ uint16_t nb_events) \ { \ uint64_t cmd[(sz) + NIX_TX_MSEG_SG_DWORDS - 2]; \ - struct otx2_ssogws *ws = port; \ + struct otx2_ssogws *ws = _rte_event_dev_prolog(dev_id, port_id);\ \ RTE_SET_USED(nb_events); \ return otx2_ssogws_event_tx(ws->base, &ev[0], cmd, \ diff --git a/drivers/event/octeontx2/otx2_worker_dual.c b/drivers/event/octeontx2/otx2_worker_dual.c index 81af4ca904..b34160a265 100644 --- a/drivers/event/octeontx2/otx2_worker_dual.c +++ b/drivers/event/octeontx2/otx2_worker_dual.c @@ -80,9 +80,10 @@ otx2_ssogws_dual_forward_event(struct otx2_ssogws_dual *ws, } uint16_t __rte_hot -otx2_ssogws_dual_enq(void *port, const struct rte_event *ev) +otx2_ssogws_dual_enq(uint8_t dev_id, uint8_t port_id, + const struct rte_event *ev) { - struct otx2_ssogws_dual *ws = port; + struct otx2_ssogws_dual *ws = _rte_event_dev_prolog(dev_id, port_id); struct otx2_ssogws_state *vws = &ws->ws_state[!ws->vws]; switch (ev->op) { @@ -103,18 +104,20 @@ otx2_ssogws_dual_enq(void *port, const struct rte_event *ev) } uint16_t __rte_hot -otx2_ssogws_dual_enq_burst(void *port, const struct rte_event ev[], +otx2_ssogws_dual_enq_burst(uint8_t dev_id, uint8_t port_id, + const struct rte_event ev[], uint16_t nb_events) { RTE_SET_USED(nb_events); - return otx2_ssogws_dual_enq(port, ev); + 
return otx2_ssogws_dual_enq(dev_id, port_id, ev); } uint16_t __rte_hot -otx2_ssogws_dual_enq_new_burst(void *port, const struct rte_event ev[], +otx2_ssogws_dual_enq_new_burst(uint8_t dev_id, uint8_t port_id, + const struct rte_event ev[], uint16_t nb_events) { - struct otx2_ssogws_dual *ws = port; + struct otx2_ssogws_dual *ws = _rte_event_dev_prolog(dev_id, port_id); uint16_t i, rc = 1; rte_smp_mb(); @@ -128,10 +131,11 @@ otx2_ssogws_dual_enq_new_burst(void *port, const struct rte_event ev[], } uint16_t __rte_hot -otx2_ssogws_dual_enq_fwd_burst(void *port, const struct rte_event ev[], +otx2_ssogws_dual_enq_fwd_burst(uint8_t dev_id, uint8_t port_id, + const struct rte_event ev[], uint16_t nb_events) { - struct otx2_ssogws_dual *ws = port; + struct otx2_ssogws_dual *ws = _rte_event_dev_prolog(dev_id, port_id); struct otx2_ssogws_state *vws = &ws->ws_state[!ws->vws]; RTE_SET_USED(nb_events); @@ -141,11 +145,13 @@ otx2_ssogws_dual_enq_fwd_burst(void *port, const struct rte_event ev[], } #define R(name, f6, f5, f4, f3, f2, f1, f0, flags) \ -uint16_t __rte_hot \ -otx2_ssogws_dual_deq_ ##name(void *port, struct rte_event *ev, \ +uint16_t __rte_hot \ +otx2_ssogws_dual_deq_ ##name(uint8_t dev_id, uint8_t port_id, \ + struct rte_event *ev, \ uint64_t timeout_ticks) \ { \ - struct otx2_ssogws_dual *ws = port; \ + struct otx2_ssogws_dual *ws = _rte_event_dev_prolog(dev_id, \ + port_id); \ uint8_t gw; \ \ rte_prefetch_non_temporal(ws); \ @@ -166,21 +172,25 @@ otx2_ssogws_dual_deq_ ##name(void *port, struct rte_event *ev, \ return gw; \ } \ \ -uint16_t __rte_hot \ -otx2_ssogws_dual_deq_burst_ ##name(void *port, struct rte_event ev[], \ +uint16_t __rte_hot \ +otx2_ssogws_dual_deq_burst_ ##name(uint8_t dev_id, uint8_t port_id, \ + struct rte_event ev[], \ uint16_t nb_events, \ uint64_t timeout_ticks) \ { \ RTE_SET_USED(nb_events); \ \ - return otx2_ssogws_dual_deq_ ##name(port, ev, timeout_ticks); \ + return otx2_ssogws_dual_deq_ ##name(dev_id, port_id, ev, \ + timeout_ticks); \ } \ \ -uint16_t __rte_hot \ -otx2_ssogws_dual_deq_timeout_ ##name(void *port, struct rte_event *ev, \ +uint16_t __rte_hot \ +otx2_ssogws_dual_deq_timeout_ ##name(uint8_t dev_id, uint8_t port_id, \ + struct rte_event *ev, \ uint64_t timeout_ticks) \ { \ - struct otx2_ssogws_dual *ws = port; \ + struct otx2_ssogws_dual *ws = _rte_event_dev_prolog(dev_id, \ + port_id); \ uint64_t iter; \ uint8_t gw; \ \ @@ -208,23 +218,26 @@ otx2_ssogws_dual_deq_timeout_ ##name(void *port, struct rte_event *ev, \ return gw; \ } \ \ -uint16_t __rte_hot \ -otx2_ssogws_dual_deq_timeout_burst_ ##name(void *port, \ +uint16_t __rte_hot \ +otx2_ssogws_dual_deq_timeout_burst_ ##name(uint8_t dev_id, \ + uint8_t port_id, \ struct rte_event ev[], \ uint16_t nb_events, \ uint64_t timeout_ticks) \ { \ RTE_SET_USED(nb_events); \ \ - return otx2_ssogws_dual_deq_timeout_ ##name(port, ev, \ + return otx2_ssogws_dual_deq_timeout_ ##name(dev_id, port_id, ev,\ timeout_ticks); \ } \ \ -uint16_t __rte_hot \ -otx2_ssogws_dual_deq_seg_ ##name(void *port, struct rte_event *ev, \ +uint16_t __rte_hot \ +otx2_ssogws_dual_deq_seg_ ##name(uint8_t dev_id, uint8_t port_id, \ + struct rte_event *ev, \ uint64_t timeout_ticks) \ { \ - struct otx2_ssogws_dual *ws = port; \ + struct otx2_ssogws_dual *ws = _rte_event_dev_prolog(dev_id, \ + port_id); \ uint8_t gw; \ \ RTE_SET_USED(timeout_ticks); \ @@ -245,24 +258,26 @@ otx2_ssogws_dual_deq_seg_ ##name(void *port, struct rte_event *ev, \ return gw; \ } \ \ -uint16_t __rte_hot \ -otx2_ssogws_dual_deq_seg_burst_ ##name(void *port, \ 
+uint16_t __rte_hot \ +otx2_ssogws_dual_deq_seg_burst_ ##name(uint8_t dev_id, uint8_t port_id, \ struct rte_event ev[], \ uint16_t nb_events, \ uint64_t timeout_ticks) \ { \ RTE_SET_USED(nb_events); \ \ - return otx2_ssogws_dual_deq_seg_ ##name(port, ev, \ + return otx2_ssogws_dual_deq_seg_ ##name(dev_id, port_id, ev, \ timeout_ticks); \ } \ \ -uint16_t __rte_hot \ -otx2_ssogws_dual_deq_seg_timeout_ ##name(void *port, \ +uint16_t __rte_hot \ +otx2_ssogws_dual_deq_seg_timeout_ ##name(uint8_t dev_id, \ + uint8_t port_id, \ struct rte_event *ev, \ uint64_t timeout_ticks) \ { \ - struct otx2_ssogws_dual *ws = port; \ + struct otx2_ssogws_dual *ws = _rte_event_dev_prolog(dev_id, \ + port_id); \ uint64_t iter; \ uint8_t gw; \ \ @@ -292,15 +307,17 @@ otx2_ssogws_dual_deq_seg_timeout_ ##name(void *port, \ return gw; \ } \ \ -uint16_t __rte_hot \ -otx2_ssogws_dual_deq_seg_timeout_burst_ ##name(void *port, \ +uint16_t __rte_hot \ +otx2_ssogws_dual_deq_seg_timeout_burst_ ##name(uint8_t dev_id, \ + uint8_t port_id, \ struct rte_event ev[], \ uint16_t nb_events, \ uint64_t timeout_ticks) \ { \ RTE_SET_USED(nb_events); \ \ - return otx2_ssogws_dual_deq_seg_timeout_ ##name(port, ev, \ + return otx2_ssogws_dual_deq_seg_timeout_ ##name(dev_id, port_id,\ + ev, \ timeout_ticks); \ } @@ -309,11 +326,12 @@ SSO_RX_ADPTR_ENQ_FASTPATH_FUNC #define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \ uint16_t __rte_hot \ -otx2_ssogws_dual_tx_adptr_enq_ ## name(void *port, \ +otx2_ssogws_dual_tx_adptr_enq_ ## name(uint8_t dev_id, uint8_t port_id, \ struct rte_event ev[], \ uint16_t nb_events) \ { \ - struct otx2_ssogws_dual *ws = port; \ + struct otx2_ssogws_dual *ws = _rte_event_dev_prolog(dev_id, \ + port_id); \ uint64_t cmd[sz]; \ \ RTE_SET_USED(nb_events); \ @@ -327,12 +345,14 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC #define T(name, f6, f5, f4, f3, f2, f1, f0, sz, flags) \ uint16_t __rte_hot \ -otx2_ssogws_dual_tx_adptr_enq_seg_ ## name(void *port, \ +otx2_ssogws_dual_tx_adptr_enq_seg_ ## name(uint8_t dev_id, \ + uint8_t port_id, \ struct rte_event ev[], \ uint16_t nb_events) \ { \ uint64_t cmd[(sz) + NIX_TX_MSEG_SG_DWORDS - 2]; \ - struct otx2_ssogws_dual *ws = port; \ + struct otx2_ssogws_dual *ws = _rte_event_dev_prolog(dev_id, \ + port_id); \ \ RTE_SET_USED(nb_events); \ return otx2_ssogws_event_tx(ws->base[!ws->vws], &ev[0], \ diff --git a/drivers/event/opdl/opdl_evdev.c b/drivers/event/opdl/opdl_evdev.c index 739dc64c82..c3d293ea4b 100644 --- a/drivers/event/opdl/opdl_evdev.c +++ b/drivers/event/opdl/opdl_evdev.c @@ -606,6 +606,11 @@ set_do_test(const char *key __rte_unused, const char *value, void *opaque) return 0; } +static _RTE_EVENT_ENQ_BURST_DEF(opdl_event_enqueue_burst); +static _RTE_EVENT_ENQ_DEF(opdl_event_enqueue); +static _RTE_EVENT_DEQ_BURST_DEF(opdl_event_dequeue_burst); +static _RTE_EVENT_DEQ_DEF(opdl_event_dequeue); + static int opdl_probe(struct rte_vdev_device *vdev) { @@ -712,12 +717,23 @@ opdl_probe(struct rte_vdev_device *vdev) dev->dev_ops = &evdev_opdl_ops; - dev->enqueue = opdl_event_enqueue; - dev->enqueue_burst = opdl_event_enqueue_burst; - dev->enqueue_new_burst = opdl_event_enqueue_burst; - dev->enqueue_forward_burst = opdl_event_enqueue_burst; - dev->dequeue = opdl_event_dequeue; - dev->dequeue_burst = opdl_event_dequeue_burst; + rte_event_set_enq_fn(dev->data->dev_id, + _RTE_EVENT_ENQ_FUNC(opdl_event_enqueue)); + rte_event_set_enq_burst_fn( + dev->data->dev_id, + _RTE_EVENT_ENQ_BURST_FUNC(opdl_event_enqueue_burst)); + rte_event_set_enq_new_burst_fn( + dev->data->dev_id, + 
_RTE_EVENT_ENQ_BURST_FUNC(opdl_event_enqueue_burst)); + rte_event_set_enq_fwd_burst_fn( + dev->data->dev_id, + _RTE_EVENT_ENQ_BURST_FUNC(opdl_event_enqueue_burst)); + + rte_event_set_deq_fn(dev->data->dev_id, + _RTE_EVENT_DEQ_FUNC(opdl_event_dequeue)); + rte_event_set_deq_burst_fn( + dev->data->dev_id, + _RTE_EVENT_DEQ_BURST_FUNC(opdl_event_dequeue_burst)); if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c index c9e17e7cb1..a781bdb0f9 100644 --- a/drivers/event/skeleton/skeleton_eventdev.c +++ b/drivers/event/skeleton/skeleton_eventdev.c @@ -338,6 +338,11 @@ static struct eventdev_ops skeleton_eventdev_ops = { .dump = skeleton_eventdev_dump }; +static _RTE_EVENT_ENQ_DEF(skeleton_eventdev_enqueue); +static _RTE_EVENT_ENQ_BURST_DEF(skeleton_eventdev_enqueue_burst); +static _RTE_EVENT_DEQ_DEF(skeleton_eventdev_dequeue); +static _RTE_EVENT_DEQ_BURST_DEF(skeleton_eventdev_dequeue_burst); + static int skeleton_eventdev_init(struct rte_eventdev *eventdev) { @@ -347,11 +352,17 @@ skeleton_eventdev_init(struct rte_eventdev *eventdev) PMD_DRV_FUNC_TRACE(); - eventdev->dev_ops = &skeleton_eventdev_ops; - eventdev->enqueue = skeleton_eventdev_enqueue; - eventdev->enqueue_burst = skeleton_eventdev_enqueue_burst; - eventdev->dequeue = skeleton_eventdev_dequeue; - eventdev->dequeue_burst = skeleton_eventdev_dequeue_burst; + rte_event_set_enq_fn(eventdev->data->dev_id, + _RTE_EVENT_ENQ_FUNC(skeleton_eventdev_enqueue)); + rte_event_set_enq_burst_fn( + eventdev->data->dev_id, + _RTE_EVENT_ENQ_BURST_FUNC(skeleton_eventdev_enqueue_burst)); + + rte_event_set_deq_fn(eventdev->data->dev_id, + _RTE_EVENT_DEQ_FUNC(skeleton_eventdev_dequeue)); + rte_event_set_deq_burst_fn( + eventdev->data->dev_id, + _RTE_EVENT_DEQ_BURST_FUNC(skeleton_eventdev_dequeue_burst)); /* For secondary processes, the primary has done all the work */ if (rte_eal_process_type() != RTE_PROC_PRIMARY) @@ -438,10 +449,18 @@ skeleton_eventdev_create(const char *name, int socket_id) } eventdev->dev_ops = &skeleton_eventdev_ops; - eventdev->enqueue = skeleton_eventdev_enqueue; - eventdev->enqueue_burst = skeleton_eventdev_enqueue_burst; - eventdev->dequeue = skeleton_eventdev_dequeue; - eventdev->dequeue_burst = skeleton_eventdev_dequeue_burst; + + rte_event_set_enq_fn(eventdev->data->dev_id, + _RTE_EVENT_ENQ_FUNC(skeleton_eventdev_enqueue)); + rte_event_set_enq_burst_fn( + eventdev->data->dev_id, + _RTE_EVENT_ENQ_BURST_FUNC(skeleton_eventdev_enqueue_burst)); + + rte_event_set_deq_fn(eventdev->data->dev_id, + _RTE_EVENT_DEQ_FUNC(skeleton_eventdev_dequeue)); + rte_event_set_deq_burst_fn( + eventdev->data->dev_id, + _RTE_EVENT_DEQ_BURST_FUNC(skeleton_eventdev_dequeue_burst)); return 0; fail: diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c index 9b72073322..494769fd06 100644 --- a/drivers/event/sw/sw_evdev.c +++ b/drivers/event/sw/sw_evdev.c @@ -942,6 +942,11 @@ static int32_t sw_sched_service_func(void *args) return 0; } +static _RTE_EVENT_ENQ_BURST_DEF(sw_event_enqueue_burst); +static _RTE_EVENT_ENQ_DEF(sw_event_enqueue); +static _RTE_EVENT_DEQ_BURST_DEF(sw_event_dequeue_burst); +static _RTE_EVENT_DEQ_DEF(sw_event_dequeue); + static int sw_probe(struct rte_vdev_device *vdev) { @@ -1085,12 +1090,24 @@ sw_probe(struct rte_vdev_device *vdev) return -EFAULT; } dev->dev_ops = &evdev_sw_ops; - dev->enqueue = sw_event_enqueue; - dev->enqueue_burst = sw_event_enqueue_burst; - dev->enqueue_new_burst = sw_event_enqueue_burst; - 
dev->enqueue_forward_burst = sw_event_enqueue_burst; - dev->dequeue = sw_event_dequeue; - dev->dequeue_burst = sw_event_dequeue_burst; + + rte_event_set_enq_fn(dev->data->dev_id, + _RTE_EVENT_ENQ_FUNC(sw_event_enqueue)); + rte_event_set_enq_burst_fn( + dev->data->dev_id, + _RTE_EVENT_ENQ_BURST_FUNC(sw_event_enqueue_burst)); + rte_event_set_enq_new_burst_fn( + dev->data->dev_id, + _RTE_EVENT_ENQ_BURST_FUNC(sw_event_enqueue_burst)); + rte_event_set_enq_fwd_burst_fn( + dev->data->dev_id, + _RTE_EVENT_ENQ_BURST_FUNC(sw_event_enqueue_burst)); + + rte_event_set_deq_fn(dev->data->dev_id, + _RTE_EVENT_DEQ_FUNC(sw_event_dequeue)); + rte_event_set_deq_burst_fn( + dev->data->dev_id, + _RTE_EVENT_DEQ_BURST_FUNC(sw_event_dequeue_burst)); if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0;
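
For reference, the per-driver changes above all follow the same shape: a fast-path handler either takes (dev_id, port_id) directly and resolves its port object through _rte_event_dev_prolog(), as in the otx2/cnxk hunks, or keeps its legacy "void *port" signature and is wrapped by the _RTE_EVENT_*_DEF() helpers, as in the skeleton/opdl/sw hunks; the probe/init path then registers the resulting functions with the rte_event_set_*_fn() setters instead of assigning dev->enqueue/dequeue. The sketch below is only illustrative: the mydrv_* names are made up, and it assumes the _RTE_EVENT_*_DEF()/_RTE_EVENT_*_FUNC() macros and the rte_event_set_*_fn() setters behave as introduced earlier in this series (a *_DEF() emits a (dev_id, port_id, ...) wrapper around the legacy handler and *_FUNC() names that wrapper).

/*
 * Hypothetical PMD following the registration pattern of the skeleton/sw
 * hunks above. Not part of this patch; mydrv_* identifiers are invented
 * for illustration only.
 */
#include <rte_common.h>
#include <rte_eventdev.h>

/* Legacy-signature burst handlers kept as-is by the driver. */
static uint16_t
mydrv_enqueue_burst(void *port, const struct rte_event ev[],
		    uint16_t nb_events)
{
	RTE_SET_USED(port);
	RTE_SET_USED(ev);
	/* Pretend every event was accepted. */
	return nb_events;
}

static uint16_t
mydrv_dequeue_burst(void *port, struct rte_event ev[],
		    uint16_t nb_events, uint64_t timeout_ticks)
{
	RTE_SET_USED(port);
	RTE_SET_USED(ev);
	RTE_SET_USED(nb_events);
	RTE_SET_USED(timeout_ticks);
	/* Nothing dequeued in this stub. */
	return 0;
}

/* Generate the (dev_id, port_id, ...) wrappers used by the new API table. */
static _RTE_EVENT_ENQ_BURST_DEF(mydrv_enqueue_burst);
static _RTE_EVENT_DEQ_BURST_DEF(mydrv_dequeue_burst);

static void
mydrv_fp_fns_set(struct rte_eventdev *dev)
{
	uint8_t dev_id = dev->data->dev_id;

	/* Register wrappers instead of writing dev->enqueue_burst etc. */
	rte_event_set_enq_burst_fn(dev_id,
		_RTE_EVENT_ENQ_BURST_FUNC(mydrv_enqueue_burst));
	rte_event_set_enq_new_burst_fn(dev_id,
		_RTE_EVENT_ENQ_BURST_FUNC(mydrv_enqueue_burst));
	rte_event_set_enq_fwd_burst_fn(dev_id,
		_RTE_EVENT_ENQ_BURST_FUNC(mydrv_enqueue_burst));
	rte_event_set_deq_burst_fn(dev_id,
		_RTE_EVENT_DEQ_BURST_FUNC(mydrv_dequeue_burst));
}

As in the sw and opdl hunks, a driver that has a single burst implementation can register the same wrapper for the plain, new and forward burst slots.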