From patchwork Mon Oct 16 20:57:12 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sivaprasad Tummala
X-Patchwork-Id: 132658
X-Patchwork-Delegate: jerinj@marvell.com
From: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
Subject: [PATCH v1 3/6] eventdev: support optional dequeue callbacks
Date: Mon, 16 Oct 2023 13:57:12 -0700
Message-ID: <20231016205715.970999-3-sivaprasad.tummala@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20231016205715.970999-1-sivaprasad.tummala@amd.com>
References: <20230419095427.563185-1-sivaprasad.tummala@amd.com>
 <20231016205715.970999-1-sivaprasad.tummala@amd.com>
List-Id: DPDK patches and discussions

Add optional support for inline event processing within the PMD dequeue
call. When a dequeue callback is configured on an event port, events
dequeued from that port are passed to the callback function to allow
additional processing, e.g. unpacking a batch of packets from each event
on dequeue, before the events are returned to the application.
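
As an illustration, an application could register a callback on an event
port to post-process or tally each dequeued burst. The sketch below is
illustrative only; the callback name, its counting logic and the
setup/teardown helper are hypothetical and not part of this patch:

  #include <rte_common.h>
  #include <rte_errno.h>
  #include <rte_eventdev.h>
  #include <rte_malloc.h>

  static uint64_t nb_pkt_events;

  /* Count how many dequeued events carry ethdev packets; a real
   * callback might unpack the mbuf batch carried in each event.
   */
  static uint16_t
  app_dequeue_cb(uint8_t dev_id, uint8_t port_id, struct rte_event *ev,
  		uint16_t nb_events, void *user_param)
  {
  	uint64_t *counter = user_param;
  	uint16_t i;

  	RTE_SET_USED(dev_id);
  	RTE_SET_USED(port_id);
  	for (i = 0; i < nb_events; i++)
  		if (ev[i].event_type == RTE_EVENT_TYPE_ETHDEV)
  			(*counter)++;
  	/* Return the number of events still valid for the application. */
  	return nb_events;
  }

  static int
  app_port_callback_demo(uint8_t dev_id, uint8_t port_id)
  {
  	const struct rte_event_dequeue_callback *cb;

  	cb = rte_event_add_dequeue_callback(dev_id, port_id,
  			app_dequeue_cb, &nb_pkt_events);
  	if (cb == NULL)
  		return -rte_errno;

  	/* ... dequeue traffic; later, at teardown: */
  	rte_event_remove_dequeue_callback(dev_id, port_id, cb);
  	/* Free only once no dequeue thread can still be executing the
  	 * callback, e.g. after the port has been quiesced.
  	 */
  	rte_free((void *)(uintptr_t)cb);
  	return 0;
  }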
Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
---
 lib/eventdev/eventdev_pmd.h      |  38 +++++++++++
 lib/eventdev/eventdev_private.c  |   2 +
 lib/eventdev/rte_eventdev.c      | 107 +++++++++++++++++++++++++++++++
 lib/eventdev/rte_eventdev.h      |  95 +++++++++++++++++++++++++++
 lib/eventdev/rte_eventdev_core.h |  12 +++-
 lib/eventdev/version.map         |   3 +
 6 files changed, 256 insertions(+), 1 deletion(-)

diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index a0ee768ce7..ce067b1d5d 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -97,6 +97,19 @@ struct rte_eventdev_global {
 	uint8_t nb_devs;	/**< Number of devices found */
 };
 
+/**
+ * @internal
+ * Structure used to hold information about the callbacks to be called for a
+ * port on dequeue.
+ */
+struct rte_event_dequeue_callback {
+	struct rte_event_dequeue_callback *next;
+	union {
+		rte_dequeue_callback_fn dequeue;
+	} fn;
+	void *param;
+};
+
 /**
  * @internal
  * The data part, with no function pointers, associated with each device.
@@ -171,6 +184,10 @@ struct rte_eventdev {
 	/**< Pointer to PMD dequeue burst function. */
 	event_maintain_t maintain;
 	/**< Pointer to PMD port maintenance function. */
+	struct rte_event_dequeue_callback *post_dequeue_burst_cbs[RTE_EVENT_MAX_PORTS_PER_DEV];
+	/**< User-supplied functions called from dequeue_burst to post-process
+	 * dequeued events before returning them to the application.
+	 */
 	event_tx_adapter_enqueue_t txa_enqueue_same_dest;
 	/**< Pointer to PMD eth Tx adapter burst enqueue function with
 	 * events destined to same Eth port & Tx queue.
@@ -245,6 +262,27 @@ rte_event_pmd_is_valid_dev(uint8_t dev_id)
 		return 1;
 }
 
+/**
+ * Executes all the callbacks registered by the application for the given
+ * event device port.
+ *
+ * @param dev_id
+ *   Event device index.
+ * @param port_id
+ *   Event port index.
+ * @param ev
+ *   Points to an array of *nb_events* objects of type *rte_event* structure
+ *   populated with the dequeued event objects.
+ * @param nb_events
+ *   Number of event objects dequeued by the PMD.
+ *
+ * @return
+ *   The number of event objects remaining after callback processing.
+ */
+__rte_internal
+uint16_t rte_eventdev_pmd_dequeue_callback_process(uint8_t dev_id,
+		uint8_t port_id, struct rte_event ev[], uint16_t nb_events);
+
 /**
  * Definitions of all functions exported by a driver through the
  * generic structure of type *event_dev_ops* supplied in the
diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c
index 017f97ccab..052c526ce0 100644
--- a/lib/eventdev/eventdev_private.c
+++ b/lib/eventdev/eventdev_private.c
@@ -137,4 +137,6 @@ event_dev_fp_ops_set(struct rte_event_fp_ops *fp_op,
 	fp_op->dma_enqueue = dev->dma_enqueue;
 	fp_op->profile_switch = dev->profile_switch;
 	fp_op->data = dev->data->ports;
+	fp_op->ev_port.clbk = (void **)(uintptr_t)dev->post_dequeue_burst_cbs;
+	fp_op->ev_port.data = dev->data->ports;
 }
diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c
index 5feb4326a2..f2540a6aa8 100644
--- a/lib/eventdev/rte_eventdev.c
+++ b/lib/eventdev/rte_eventdev.c
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include <rte_spinlock.h>
 #include
 #include
 #include
@@ -39,6 +40,9 @@ static struct rte_eventdev_global eventdev_globals = {
 /* Public fastpath APIs. */
 struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
 
+/* Spinlock for add/remove of dequeue callbacks. */
+static rte_spinlock_t event_dev_dequeue_cb_lock = RTE_SPINLOCK_INITIALIZER;
+
 /* Event dev north bound API implementation */
 
 uint8_t
@@ -884,6 +888,109 @@ rte_event_port_attr_get(uint8_t dev_id, uint8_t port_id, uint32_t attr_id,
 	return 0;
 }
 
+const struct rte_event_dequeue_callback *
+rte_event_add_dequeue_callback(uint8_t dev_id, uint8_t port_id,
+		rte_dequeue_callback_fn fn, void *user_param)
+{
+	struct rte_eventdev *dev;
+	struct rte_event_dequeue_callback *cb;
+	struct rte_event_dequeue_callback *tail;
+
+	/* Check input parameters. */
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, NULL);
+	dev = &rte_eventdevs[dev_id];
+	if (!is_valid_port(dev, port_id)) {
+		RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id);
+		return NULL;
+	}
+
+	cb = rte_zmalloc(NULL, sizeof(*cb), 0);
+	if (cb == NULL) {
+		rte_errno = ENOMEM;
+		return NULL;
+	}
+	cb->fn.dequeue = fn;
+	cb->param = user_param;
+
+	rte_spinlock_lock(&event_dev_dequeue_cb_lock);
+	/* Add the callbacks in FIFO order. */
+	tail = rte_eventdevs[dev_id].post_dequeue_burst_cbs[port_id];
+	if (!tail) {
+		/* Stores to cb->fn and cb->param should complete before
+		 * cb is visible to the data plane.
+		 */
+		rte_atomic_store_explicit(
+			&rte_eventdevs[dev_id].post_dequeue_burst_cbs[port_id],
+			cb, __ATOMIC_RELEASE);
+	} else {
+		while (tail->next)
+			tail = tail->next;
+		/* Stores to cb->fn and cb->param should complete before
+		 * cb is visible to the data plane.
+		 */
+		rte_atomic_store_explicit(&tail->next, cb, __ATOMIC_RELEASE);
+	}
+	rte_spinlock_unlock(&event_dev_dequeue_cb_lock);
+
+	return cb;
+}
+
+int
+rte_event_remove_dequeue_callback(uint8_t dev_id, uint8_t port_id,
+		const struct rte_event_dequeue_callback *user_cb)
+{
+	struct rte_eventdev *dev;
+	struct rte_event_dequeue_callback *cb;
+	struct rte_event_dequeue_callback **prev_cb;
+	int ret = -EINVAL;
+
+	/* Check input parameters. */
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+	dev = &rte_eventdevs[dev_id];
+	if (user_cb == NULL || !is_valid_port(dev, port_id))
+		return -EINVAL;
+
+	rte_spinlock_lock(&event_dev_dequeue_cb_lock);
+	prev_cb = &dev->post_dequeue_burst_cbs[port_id];
+	for (; *prev_cb != NULL; prev_cb = &cb->next) {
+		cb = *prev_cb;
+		if (cb == user_cb) {
+			/* Remove the user cb from the callback list. */
+			rte_atomic_store_explicit(prev_cb, cb->next,
+					__ATOMIC_RELAXED);
+			ret = 0;
+			break;
+		}
+	}
+	rte_spinlock_unlock(&event_dev_dequeue_cb_lock);
+
+	return ret;
+}
+
+uint16_t rte_eventdev_pmd_dequeue_callback_process(uint8_t dev_id,
+		uint8_t port_id, struct rte_event ev[], uint16_t nb_events)
+{
+	struct rte_event_dequeue_callback *cb;
+	const struct rte_event_fp_ops *fp_ops;
+
+	fp_ops = &rte_event_fp_ops[dev_id];
+
+	/* __ATOMIC_RELEASE memory order was used when the
+	 * callback was inserted into the list.
+	 * Since there is a clear dependency between loading
+	 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
+	 * not required.
+	 */
+	cb = rte_atomic_load_explicit((void **)&fp_ops->ev_port.clbk[port_id],
+			__ATOMIC_RELAXED);
+	if (unlikely(cb != NULL)) {
+		do {
+			nb_events = cb->fn.dequeue(dev_id, port_id, ev,
+					nb_events, cb->param);
+			cb = cb->next;
+		} while (cb != NULL);
+	}
+
+	return nb_events;
+}
+
 int
 rte_event_port_get_monitor_addr(uint8_t dev_id, uint8_t port_id,
 		struct rte_power_monitor_cond *pmc)
diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h
index 38dbbc2617..c0097c0a23 100644
--- a/lib/eventdev/rte_eventdev.h
+++ b/lib/eventdev/rte_eventdev.h
@@ -954,6 +954,101 @@ void
 rte_event_port_quiesce(uint8_t dev_id, uint8_t port_id,
 		rte_eventdev_port_flush_t release_cb, void *args);
 
+struct rte_event_dequeue_callback;
+
+/**
+ * Function type used for dequeue event processing callbacks.
+ *
+ * The callback function is called on dequeue with a burst of events that have
+ * been received on the given event port.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param port_id
+ *   The identifier of the event port.
+ * @param[out] ev
+ *   Points to an array of *nb_events* objects of type *rte_event* structure
+ *   populated with the dequeued event objects.
+ * @param nb_events
+ *   The number of event objects dequeued, typically up to the value of
+ *   rte_event_port_dequeue_depth() for this port.
+ * @param user_param
+ *   Opaque pointer of event port callback related data.
+ *
+ * @return
+ *   The number of event objects returned to the user.
+ */
+typedef uint16_t (*rte_dequeue_callback_fn)(uint8_t dev_id, uint8_t port_id,
+		struct rte_event *ev, uint16_t nb_events, void *user_param);
+
+/**
+ * Add a callback to be called on event dequeue on a given event device port.
+ *
+ * This API configures a function to be called for each burst of events
+ * dequeued on a given event device port. The return value is a pointer that
+ * can later be used to remove the callback using
+ * rte_event_remove_dequeue_callback().
+ *
+ * Multiple functions are called in the order that they are added.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param port_id
+ *   The identifier of the event port.
+ * @param fn
+ *   The callback function.
+ * @param user_param
+ *   A generic pointer parameter which will be passed to each invocation of
+ *   the callback function on this event device port. Inter-thread
+ *   synchronization of any user data changes is the responsibility of the
+ *   user.
+ *
+ * @return
+ *   NULL on error.
+ *   On success, a pointer value which can later be used to remove the
+ *   callback.
+ */
+__rte_experimental
+const struct rte_event_dequeue_callback *
+rte_event_add_dequeue_callback(uint8_t dev_id, uint8_t port_id,
+		rte_dequeue_callback_fn fn, void *user_param);
+
+/**
+ * Remove a dequeue event callback from a given event device port.
+ *
+ * This API is used to remove callbacks that were added to an event device
+ * port using rte_event_add_dequeue_callback().
+ *
+ * Note: the callback is removed from the callback list but it isn't freed,
+ * since it may still be in use. The memory for the callback can subsequently
+ * be freed by the application by calling rte_free():
+ *
+ * - Immediately - if the device is stopped, or the user knows that no
+ *   callbacks are in flight, e.g. if called from the thread doing dequeue
+ *   on that port.
+ *
+ * - After a short delay - where the delay is sufficient to allow any
+ *   in-flight callbacks to complete. Alternately, the RCU mechanism can be
+ *   used to detect when data plane threads have ceased referencing the
+ *   callback memory.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param port_id
+ *   The identifier of the event port.
+ * @param user_cb
+ *   The callback to remove, as returned by rte_event_add_dequeue_callback().
+ *
+ * @return
+ *   - 0: Success. Callback was removed.
+ *   - -ENODEV: If *dev_id* is invalid.
+ *   - -EINVAL: The port_id is out of range, the callback is NULL, or the
+ *     callback is not registered on this port.
+ */
+__rte_experimental
+int
+rte_event_remove_dequeue_callback(uint8_t dev_id, uint8_t port_id,
+		const struct rte_event_dequeue_callback *user_cb);
+
 /**
  * The queue depth of the port on the enqueue side
  */
diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h
index 5b405518d1..5ce93c4b6f 100644
--- a/lib/eventdev/rte_eventdev_core.h
+++ b/lib/eventdev/rte_eventdev_core.h
@@ -49,6 +49,14 @@ typedef uint16_t (*event_dma_adapter_enqueue_t)(void *port, struct rte_event ev[
 typedef int (*event_profile_switch_t)(void *port, uint8_t profile);
 /**< @internal Switch active link profile on the event port. */
 
+struct rte_eventdev_port_data {
+	void **data;
+	/**< points to array of internal port data pointers */
+	void **clbk;
+	/**< points to array of port callback data pointers */
+};
+/**< @internal Structure used to hold opaque eventdev port data. */
+
 struct rte_event_fp_ops {
 	void **data;
 	/**< points to array of internal port data pointers */
@@ -76,7 +84,9 @@ struct rte_event_fp_ops {
 	/**< PMD DMA adapter enqueue function. */
 	event_profile_switch_t profile_switch;
 	/**< PMD Event switch profile function. */
-	uintptr_t reserved[4];
+	struct rte_eventdev_port_data ev_port;
+	/**< Eventdev port data. */
+	uintptr_t reserved[1];
 } __rte_cache_aligned;
 
 extern struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index fa9eb069ff..a0c7aa5bbd 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -155,6 +155,8 @@ EXPERIMENTAL {
 	rte_event_port_profile_unlink;
 	rte_event_port_profile_links_get;
 	rte_event_port_get_monitor_addr;
+	rte_event_add_dequeue_callback;
+	rte_event_remove_dequeue_callback;
 	__rte_eventdev_trace_port_profile_switch;
 };
 
@@ -165,6 +167,7 @@ INTERNAL {
 	event_dev_fp_ops_set;
 	event_dev_probing_finish;
 	rte_event_pmd_allocate;
+	rte_eventdev_pmd_dequeue_callback_process;
 	rte_event_pmd_get_named_dev;
 	rte_event_pmd_is_valid_dev;
 	rte_event_pmd_pci_probe;
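
For context, a PMD consumes the new internal helper at the tail of its
dequeue burst path. The sketch below is a rough illustration under stated
assumptions: the driver, its port structure and its hardware-dequeue
helper are hypothetical, and the actual PMD integration is outside this
patch:

  #include <rte_eventdev.h>
  #include <eventdev_pmd.h>

  /* Hypothetical per-port private data of an imaginary PMD. */
  struct my_pmd_port {
  	uint8_t dev_id;
  	uint8_t port_id;
  };

  /* Hypothetical hardware dequeue routine, body not shown. */
  static uint16_t my_pmd_hw_dequeue(struct my_pmd_port *p,
  		struct rte_event ev[], uint16_t nb_events,
  		uint64_t timeout_ticks);

  /* Fetch events from hardware, then run any application callbacks
   * registered on this port over the burst before handing it back.
   */
  static uint16_t
  my_pmd_event_dequeue_burst(void *port, struct rte_event ev[],
  		uint16_t nb_events, uint64_t timeout_ticks)
  {
  	struct my_pmd_port *p = port;
  	uint16_t nb_deq;

  	nb_deq = my_pmd_hw_dequeue(p, ev, nb_events, timeout_ticks);
  	nb_deq = rte_eventdev_pmd_dequeue_callback_process(p->dev_id,
  			p->port_id, ev, nb_deq);
  	return nb_deq;
  }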