From patchwork Thu Aug 31 20:44:22 2023
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 131001
X-Patchwork-Delegate: jerinj@marvell.com
Subject: [PATCH v2 1/3] eventdev: introduce link profiles
Date: Fri, 1 Sep 2023 02:14:22 +0530
Message-ID: <20230831204424.13367-2-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230831204424.13367-1-pbhagavatula@marvell.com>
References: <20230825184435.2986-1-pbhagavatula@marvell.com>
 <20230831204424.13367-1-pbhagavatula@marvell.com>
List-Id: DPDK patches and discussions

From: Pavan Nikhilesh

A collection of event queues linked to an event port can be associated
with a unique identifier called a profile. Multiple such profiles can be
created, based on the event device capability, using the function
`rte_event_port_profile_links_set`, which takes arguments similar to
`rte_event_port_link` in addition to the profile identifier.

The maximum number of link profiles supported by an event device is
advertised through the structure member
`rte_event_dev_info::max_profiles_per_port`. By default, event ports are
configured to use link profile 0 on initialization.

Once multiple link profiles are set up and the event device is started,
the application can use the function `rte_event_port_profile_switch` to
change the currently active profile on an event port. This affects the
next `rte_event_dequeue_burst` call, where only the event queues
associated with the newly active link profile participate in scheduling.

An unlink function, `rte_event_port_profile_unlink`, is provided to
modify the links associated with a profile, and
`rte_event_port_profile_links_get` can be used to retrieve them.

Using link profiles can reduce the overhead of linking/unlinking and
waiting for in-progress unlinks in the fast path, and gives applications
the ability to switch between preset profiles on the fly.
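As a quick illustration of the flow above, a minimal sketch of the
config-path side (device, port, and queue IDs are placeholders and the
helper name is hypothetical; error handling is trimmed):

    #include <errno.h>
    #include <rte_eventdev.h>

    static int
    setup_two_profiles(uint8_t dev_id, uint8_t port_id)
    {
            const uint8_t hq[2] = {0, 1}; /* queues linked under profile 0 */
            const uint8_t lq[2] = {2, 3}; /* queues linked under profile 1 */
            struct rte_event_dev_info info;

            if (rte_event_dev_info_get(dev_id, &info) != 0)
                    return -EINVAL;
            if (info.max_profiles_per_port < 2)
                    return -ENOTSUP;

            /* Populate both profiles; profile 0 is the one active by default. */
            if (rte_event_port_profile_links_set(dev_id, port_id, hq, NULL, 2, 0) != 2)
                    return -EINVAL;
            if (rte_event_port_profile_links_set(dev_id, port_id, lq, NULL, 2, 1) != 2)
                    return -EINVAL;

            return 0;
    }

After rte_event_dev_start(), a worker can call
rte_event_port_profile_switch(dev_id, port_id, 1) to make the second queue
set take effect from the next dequeue onwards.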
Signed-off-by: Pavan Nikhilesh
---
 config/rte_config.h                        |   1 +
 doc/guides/eventdevs/features/default.ini  |   1 +
 doc/guides/prog_guide/eventdev.rst         |  40 ++++
 doc/guides/rel_notes/release_23_11.rst     |  17 ++
 drivers/event/cnxk/cnxk_eventdev.c         |   3 +-
 drivers/event/dlb2/dlb2.c                  |   1 +
 drivers/event/dpaa/dpaa_eventdev.c         |   1 +
 drivers/event/dpaa2/dpaa2_eventdev.c       |   2 +-
 drivers/event/dsw/dsw_evdev.c              |   1 +
 drivers/event/octeontx/ssovf_evdev.c       |   2 +-
 drivers/event/opdl/opdl_evdev.c            |   1 +
 drivers/event/skeleton/skeleton_eventdev.c |   1 +
 drivers/event/sw/sw_evdev.c                |   1 +
 lib/eventdev/eventdev_pmd.h                |  59 +++++-
 lib/eventdev/eventdev_private.c            |   9 +
 lib/eventdev/eventdev_trace.h              |  32 +++
 lib/eventdev/eventdev_trace_points.c       |  12 ++
 lib/eventdev/rte_eventdev.c                | 146 ++++++++++---
 lib/eventdev/rte_eventdev.h                | 231 +++++++++++++++++++++
 lib/eventdev/rte_eventdev_core.h           |   4 +
 lib/eventdev/rte_eventdev_trace_fp.h       |   8 +
 lib/eventdev/version.map                   |   6 +
 22 files changed, 549 insertions(+), 30 deletions(-)

diff --git a/config/rte_config.h b/config/rte_config.h
index 400e44e3cf..d43b3eecb8 100644
--- a/config/rte_config.h
+++ b/config/rte_config.h
@@ -73,6 +73,7 @@
 #define RTE_EVENT_MAX_DEVS 16
 #define RTE_EVENT_MAX_PORTS_PER_DEV 255
 #define RTE_EVENT_MAX_QUEUES_PER_DEV 255
+#define RTE_EVENT_MAX_PROFILES_PER_PORT 8
 #define RTE_EVENT_TIMER_ADAPTER_NUM_MAX 32
 #define RTE_EVENT_ETH_INTR_RING_SIZE 1024
 #define RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE 32
diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini
index 00360f60c6..1c0082352b 100644
--- a/doc/guides/eventdevs/features/default.ini
+++ b/doc/guides/eventdevs/features/default.ini
@@ -18,6 +18,7 @@ multiple_queue_port =
 carry_flow_id       =
 maintenance_free    =
 runtime_queue_attr  =
+profile_links       =

 ;
 ; Features of a default Ethernet Rx adapter.
diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst
index 2c83176846..9c07870a79 100644
--- a/doc/guides/prog_guide/eventdev.rst
+++ b/doc/guides/prog_guide/eventdev.rst
@@ -317,6 +317,46 @@ can be achieved like this:
         }
         int links_made = rte_event_port_link(dev_id, tx_port_id, &single_link_q, &priority, 1);

+Linking Queues to Ports with profiles
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+An application can use link profiles, if supported by the underlying event device, to set up
+multiple link profiles per port and change them at run time depending upon heuristic data.
+Using link profiles can reduce the overhead of linking/unlinking and waiting for in-progress
+unlinks in the fast path, and gives applications the ability to switch between preset
+profiles on the fly.
+
+An example use case could be as follows.
+
+Config path:
+
+.. code-block:: c
+
+   uint8_t lq[4] = {4, 5, 6, 7};
+   uint8_t hq[4] = {0, 1, 2, 3};
+
+   if (rte_event_dev_info.max_profiles_per_port < 2)
+       return -ENOTSUP;
+
+   rte_event_port_profile_links_set(0, 0, hq, NULL, 4, 0);
+   rte_event_port_profile_links_set(0, 0, lq, NULL, 4, 1);
+
+Worker path:
+
+.. code-block:: c
+
+   uint8_t profile_id_to_switch;
+
+   while (1) {
+       deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0);
+       if (deq == 0) {
+           profile_id_to_switch = app_find_profile_id_to_switch();
+           rte_event_port_profile_switch(0, 0, profile_id_to_switch);
+           continue;
+       }
+
+       // Process the event received.
+   }
+
 Starting the EventDev
 ~~~~~~~~~~~~~~~~~~~~~
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst
index 333e1d95a2..e19a0ed3c3 100644
--- a/doc/guides/rel_notes/release_23_11.rst
+++ b/doc/guides/rel_notes/release_23_11.rst
@@ -78,6 +78,23 @@ New Features
   * build: Optional libraries can now be selected with the new ``enable_libs``
     build option similarly to the existing ``enable_drivers`` build option.

+* **Added eventdev support to link queues to port with profile.**
+
+  Introduced event link profiles that can be used to associate links between
+  event queues and an event port with a unique identifier termed a profile.
+  The profile can be used to switch between the associated links in the fast
+  path without the additional overhead of linking/unlinking and waiting for
+  unlinking.
+
+  * Added ``rte_event_port_profile_links_set`` to link event queues to an event
+    port with a unique profile identifier.
+
+  * Added ``rte_event_port_profile_unlink`` to unlink event queues from an event
+    port associated with a profile.
+
+  * Added ``rte_event_port_profile_links_get`` to retrieve links associated with
+    a profile.
+
+  * Added ``rte_event_port_profile_switch`` to switch between profiles as needed.
Removed Items ------------- diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c index 27883a3619..529622cac6 100644 --- a/drivers/event/cnxk/cnxk_eventdev.c +++ b/drivers/event/cnxk/cnxk_eventdev.c @@ -31,6 +31,7 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev, RTE_EVENT_DEV_CAP_CARRY_FLOW_ID | RTE_EVENT_DEV_CAP_MAINTENANCE_FREE | RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR; + dev_info->max_profiles_per_port = 1; } int @@ -133,7 +134,7 @@ cnxk_sso_restore_links(const struct rte_eventdev *event_dev, for (i = 0; i < dev->nb_event_ports; i++) { uint16_t nb_hwgrp = 0; - links_map = event_dev->data->links_map; + links_map = event_dev->data->links_map[0]; /* Point links_map to this port specific area */ links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV); diff --git a/drivers/event/dlb2/dlb2.c b/drivers/event/dlb2/dlb2.c index 60c5cd4804..580057870f 100644 --- a/drivers/event/dlb2/dlb2.c +++ b/drivers/event/dlb2/dlb2.c @@ -79,6 +79,7 @@ static struct rte_event_dev_info evdev_dlb2_default_info = { RTE_EVENT_DEV_CAP_RUNTIME_PORT_LINK | RTE_EVENT_DEV_CAP_MULTIPLE_QUEUE_PORT | RTE_EVENT_DEV_CAP_MAINTENANCE_FREE), + .max_profiles_per_port = 1, }; struct process_local_port_data diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c index 4b3d16735b..f615da3813 100644 --- a/drivers/event/dpaa/dpaa_eventdev.c +++ b/drivers/event/dpaa/dpaa_eventdev.c @@ -359,6 +359,7 @@ dpaa_event_dev_info_get(struct rte_eventdev *dev, RTE_EVENT_DEV_CAP_NONSEQ_MODE | RTE_EVENT_DEV_CAP_CARRY_FLOW_ID | RTE_EVENT_DEV_CAP_MAINTENANCE_FREE; + dev_info->max_profiles_per_port = 1; } static int diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c index fa1a1ade80..ffc5550f85 100644 --- a/drivers/event/dpaa2/dpaa2_eventdev.c +++ b/drivers/event/dpaa2/dpaa2_eventdev.c @@ -411,7 +411,7 @@ dpaa2_eventdev_info_get(struct rte_eventdev *dev, RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES | RTE_EVENT_DEV_CAP_CARRY_FLOW_ID | RTE_EVENT_DEV_CAP_MAINTENANCE_FREE; - + dev_info->max_profiles_per_port = 1; } static int diff --git a/drivers/event/dsw/dsw_evdev.c b/drivers/event/dsw/dsw_evdev.c index 6c5cde2468..785c12f61f 100644 --- a/drivers/event/dsw/dsw_evdev.c +++ b/drivers/event/dsw/dsw_evdev.c @@ -218,6 +218,7 @@ dsw_info_get(struct rte_eventdev *dev __rte_unused, .max_event_port_dequeue_depth = DSW_MAX_PORT_DEQUEUE_DEPTH, .max_event_port_enqueue_depth = DSW_MAX_PORT_ENQUEUE_DEPTH, .max_num_events = DSW_MAX_EVENTS, + .max_profiles_per_port = 1, .event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE| RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED| RTE_EVENT_DEV_CAP_NONSEQ_MODE| diff --git a/drivers/event/octeontx/ssovf_evdev.c b/drivers/event/octeontx/ssovf_evdev.c index 650266b996..0eb9358981 100644 --- a/drivers/event/octeontx/ssovf_evdev.c +++ b/drivers/event/octeontx/ssovf_evdev.c @@ -158,7 +158,7 @@ ssovf_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *dev_info) RTE_EVENT_DEV_CAP_NONSEQ_MODE | RTE_EVENT_DEV_CAP_CARRY_FLOW_ID | RTE_EVENT_DEV_CAP_MAINTENANCE_FREE; - + dev_info->max_profiles_per_port = 1; } static int diff --git a/drivers/event/opdl/opdl_evdev.c b/drivers/event/opdl/opdl_evdev.c index 9ce8b39b60..dd25749654 100644 --- a/drivers/event/opdl/opdl_evdev.c +++ b/drivers/event/opdl/opdl_evdev.c @@ -378,6 +378,7 @@ opdl_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info) .event_dev_cap = RTE_EVENT_DEV_CAP_BURST_MODE | RTE_EVENT_DEV_CAP_CARRY_FLOW_ID | RTE_EVENT_DEV_CAP_MAINTENANCE_FREE, + .max_profiles_per_port = 1, }; *info = evdev_opdl_info; 
diff --git a/drivers/event/skeleton/skeleton_eventdev.c b/drivers/event/skeleton/skeleton_eventdev.c index 8513b9a013..dc9b131641 100644 --- a/drivers/event/skeleton/skeleton_eventdev.c +++ b/drivers/event/skeleton/skeleton_eventdev.c @@ -104,6 +104,7 @@ skeleton_eventdev_info_get(struct rte_eventdev *dev, RTE_EVENT_DEV_CAP_EVENT_QOS | RTE_EVENT_DEV_CAP_CARRY_FLOW_ID | RTE_EVENT_DEV_CAP_MAINTENANCE_FREE; + dev_info->max_profiles_per_port = 1; } static int diff --git a/drivers/event/sw/sw_evdev.c b/drivers/event/sw/sw_evdev.c index cfd659d774..6d1816b76d 100644 --- a/drivers/event/sw/sw_evdev.c +++ b/drivers/event/sw/sw_evdev.c @@ -609,6 +609,7 @@ sw_info_get(struct rte_eventdev *dev, struct rte_event_dev_info *info) RTE_EVENT_DEV_CAP_NONSEQ_MODE | RTE_EVENT_DEV_CAP_CARRY_FLOW_ID | RTE_EVENT_DEV_CAP_MAINTENANCE_FREE), + .max_profiles_per_port = 1, }; *info = evdev_sw_info; diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h index f62f42e140..66fdad71f3 100644 --- a/lib/eventdev/eventdev_pmd.h +++ b/lib/eventdev/eventdev_pmd.h @@ -119,8 +119,8 @@ struct rte_eventdev_data { /**< Array of port configuration structures. */ struct rte_event_queue_conf queues_cfg[RTE_EVENT_MAX_QUEUES_PER_DEV]; /**< Array of queue configuration structures. */ - uint16_t links_map[RTE_EVENT_MAX_PORTS_PER_DEV * - RTE_EVENT_MAX_QUEUES_PER_DEV]; + uint16_t links_map[RTE_EVENT_MAX_PROFILES_PER_PORT] + [RTE_EVENT_MAX_PORTS_PER_DEV * RTE_EVENT_MAX_QUEUES_PER_DEV]; /**< Memory to store queues to port connections. */ void *dev_private; /**< PMD-specific private data */ @@ -178,6 +178,9 @@ struct rte_eventdev { event_tx_adapter_enqueue_t txa_enqueue; /**< Pointer to PMD eth Tx adapter enqueue function. */ event_crypto_adapter_enqueue_t ca_enqueue; + /**< PMD Crypto adapter enqueue function. */ + event_profile_switch_t profile_switch; + /**< PMD Event switch profile function. */ uint64_t reserved_64s[4]; /**< Reserved for future fields */ void *reserved_ptrs[3]; /**< Reserved for future fields */ @@ -437,6 +440,32 @@ typedef int (*eventdev_port_link_t)(struct rte_eventdev *dev, void *port, const uint8_t queues[], const uint8_t priorities[], uint16_t nb_links); +/** + * Link multiple source event queues associated with a profile to a destination + * event port. + * + * @param dev + * Event device pointer + * @param port + * Event port pointer + * @param queues + * Points to an array of *nb_links* event queues to be linked + * to the event port. + * @param priorities + * Points to an array of *nb_links* service priorities associated with each + * event queue link to event port. + * @param nb_links + * The number of links to establish. + * @param profile + * The profile ID to associate the links. + * + * @return + * Returns 0 on success. + */ +typedef int (*eventdev_port_link_profile_t)(struct rte_eventdev *dev, void *port, + const uint8_t queues[], const uint8_t priorities[], + uint16_t nb_links, uint8_t profile); + /** * Unlink multiple source event queues from destination event port. * @@ -455,6 +484,28 @@ typedef int (*eventdev_port_link_t)(struct rte_eventdev *dev, void *port, typedef int (*eventdev_port_unlink_t)(struct rte_eventdev *dev, void *port, uint8_t queues[], uint16_t nb_unlinks); +/** + * Unlink multiple source event queues associated with a profile from destination + * event port. + * + * @param dev + * Event device pointer + * @param port + * Event port pointer + * @param queues + * An array of *nb_unlinks* event queues to be unlinked from the event port. 
+ * @param nb_unlinks + * The number of unlinks to establish + * @param profile + * The profile ID of the associated links. + * + * @return + * Returns 0 on success. + */ +typedef int (*eventdev_port_unlink_profile_t)(struct rte_eventdev *dev, void *port, + uint8_t queues[], uint16_t nb_unlinks, + uint8_t profile); + /** * Unlinks in progress. Returns number of unlinks that the PMD is currently * performing, but have not yet been completed. @@ -1348,8 +1399,12 @@ struct eventdev_ops { eventdev_port_link_t port_link; /**< Link event queues to an event port. */ + eventdev_port_link_profile_t port_link_profile; + /**< Link event queues associated with a profile to an event port. */ eventdev_port_unlink_t port_unlink; /**< Unlink event queues from an event port. */ + eventdev_port_unlink_profile_t port_unlink_profile; + /**< Unlink event queues associated with a profile from an event port. */ eventdev_port_unlinks_in_progress_t port_unlinks_in_progress; /**< Unlinks in progress on an event port. */ eventdev_dequeue_timeout_ticks_t timeout_ticks; diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c index 1d3d9d357e..a5e0bd3de0 100644 --- a/lib/eventdev/eventdev_private.c +++ b/lib/eventdev/eventdev_private.c @@ -81,6 +81,13 @@ dummy_event_crypto_adapter_enqueue(__rte_unused void *port, return 0; } +static int +dummy_event_port_profile_switch(__rte_unused void *port, __rte_unused uint8_t profile) +{ + RTE_EDEV_LOG_ERR("change profile requested for unconfigured event device"); + return -EINVAL; +} + void event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op) { @@ -97,6 +104,7 @@ event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op) .txa_enqueue_same_dest = dummy_event_tx_adapter_enqueue_same_dest, .ca_enqueue = dummy_event_crypto_adapter_enqueue, + .profile_switch = dummy_event_port_profile_switch, .data = dummy_data, }; @@ -117,5 +125,6 @@ event_dev_fp_ops_set(struct rte_event_fp_ops *fp_op, fp_op->txa_enqueue = dev->txa_enqueue; fp_op->txa_enqueue_same_dest = dev->txa_enqueue_same_dest; fp_op->ca_enqueue = dev->ca_enqueue; + fp_op->profile_switch = dev->profile_switch; fp_op->data = dev->data->ports; } diff --git a/lib/eventdev/eventdev_trace.h b/lib/eventdev/eventdev_trace.h index f008ef0091..5fc9bebd13 100644 --- a/lib/eventdev/eventdev_trace.h +++ b/lib/eventdev/eventdev_trace.h @@ -76,6 +76,17 @@ RTE_TRACE_POINT( rte_trace_point_emit_int(rc); ) +RTE_TRACE_POINT( + rte_eventdev_trace_port_profile_links_set, + RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, + uint16_t nb_links, uint8_t profile, int rc), + rte_trace_point_emit_u8(dev_id); + rte_trace_point_emit_u8(port_id); + rte_trace_point_emit_u16(nb_links); + rte_trace_point_emit_u8(profile); + rte_trace_point_emit_int(rc); +) + RTE_TRACE_POINT( rte_eventdev_trace_port_unlink, RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, @@ -86,6 +97,17 @@ RTE_TRACE_POINT( rte_trace_point_emit_int(rc); ) +RTE_TRACE_POINT( + rte_eventdev_trace_port_profile_unlink, + RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, + uint16_t nb_unlinks, uint8_t profile, int rc), + rte_trace_point_emit_u8(dev_id); + rte_trace_point_emit_u8(port_id); + rte_trace_point_emit_u16(nb_unlinks); + rte_trace_point_emit_u8(profile); + rte_trace_point_emit_int(rc); +) + RTE_TRACE_POINT( rte_eventdev_trace_start, RTE_TRACE_POINT_ARGS(uint8_t dev_id, int rc), @@ -487,6 +509,16 @@ RTE_TRACE_POINT( rte_trace_point_emit_int(count); ) +RTE_TRACE_POINT( + rte_eventdev_trace_port_profile_links_get, + RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t 
port_id, uint8_t profile, + int count), + rte_trace_point_emit_u8(dev_id); + rte_trace_point_emit_u8(port_id); + rte_trace_point_emit_u8(profile); + rte_trace_point_emit_int(count); +) + RTE_TRACE_POINT( rte_eventdev_trace_port_unlinks_in_progress, RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id), diff --git a/lib/eventdev/eventdev_trace_points.c b/lib/eventdev/eventdev_trace_points.c index 76144cfe75..8024e07531 100644 --- a/lib/eventdev/eventdev_trace_points.c +++ b/lib/eventdev/eventdev_trace_points.c @@ -19,9 +19,15 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_setup, RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_link, lib.eventdev.port.link) +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_links_set, + lib.eventdev.port.profile.links.set) + RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_unlink, lib.eventdev.port.unlink) +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_unlink, + lib.eventdev.port.profile.unlink) + RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_start, lib.eventdev.start) @@ -40,6 +46,9 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_deq_burst, RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_maintain, lib.eventdev.maintain) +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_switch, + lib.eventdev.port.profile.switch) + /* Eventdev Rx adapter trace points */ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_eth_rx_adapter_create, lib.eventdev.rx.adapter.create) @@ -206,6 +215,9 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_default_conf_get, RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_links_get, lib.eventdev.port.links.get) +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_links_get, + lib.eventdev.port.profile.links.get) + RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_unlinks_in_progress, lib.eventdev.port.unlinks.in.progress) diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c index 6ab4524332..30df0572d2 100644 --- a/lib/eventdev/rte_eventdev.c +++ b/lib/eventdev/rte_eventdev.c @@ -270,7 +270,7 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports) void **ports; uint16_t *links_map; struct rte_event_port_conf *ports_cfg; - unsigned int i; + unsigned int i, j; RTE_EDEV_LOG_DEBUG("Setup %d ports on device %u", nb_ports, dev->data->dev_id); @@ -281,7 +281,6 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports) ports = dev->data->ports; ports_cfg = dev->data->ports_cfg; - links_map = dev->data->links_map; for (i = nb_ports; i < old_nb_ports; i++) (*dev->dev_ops->port_release)(ports[i]); @@ -297,9 +296,11 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports) sizeof(ports[0]) * new_ps); memset(ports_cfg + old_nb_ports, 0, sizeof(ports_cfg[0]) * new_ps); - for (i = old_links_map_end; i < links_map_end; i++) - links_map[i] = - EVENT_QUEUE_SERVICE_PRIORITY_INVALID; + for (i = 0; i < RTE_EVENT_MAX_PROFILES_PER_PORT; i++) { + links_map = dev->data->links_map[i]; + for (j = old_links_map_end; j < links_map_end; j++) + links_map[j] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID; + } } } else { if (*dev->dev_ops->port_release == NULL) @@ -953,21 +954,44 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id, const uint8_t queues[], const uint8_t priorities[], uint16_t nb_links) { - struct rte_eventdev *dev; - uint8_t queues_list[RTE_EVENT_MAX_QUEUES_PER_DEV]; + return rte_event_port_profile_links_set(dev_id, port_id, queues, priorities, nb_links, 0); +} + +int +rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t queues[], + const uint8_t priorities[], 
uint16_t nb_links, uint8_t profile) +{ uint8_t priorities_list[RTE_EVENT_MAX_QUEUES_PER_DEV]; + uint8_t queues_list[RTE_EVENT_MAX_QUEUES_PER_DEV]; + struct rte_event_dev_info info; + struct rte_eventdev *dev; uint16_t *links_map; int i, diag; RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, EINVAL, 0); dev = &rte_eventdevs[dev_id]; + if (*dev->dev_ops->dev_infos_get == NULL) + return -ENOTSUP; + + (*dev->dev_ops->dev_infos_get)(dev, &info); + if (profile >= RTE_EVENT_MAX_PROFILES_PER_PORT || profile >= info.max_profiles_per_port) { + RTE_EDEV_LOG_ERR("Invalid profile=%" PRIu8, profile); + return -EINVAL; + } + if (*dev->dev_ops->port_link == NULL) { RTE_EDEV_LOG_ERR("Function not supported\n"); rte_errno = ENOTSUP; return 0; } + if (profile && *dev->dev_ops->port_link_profile == NULL) { + RTE_EDEV_LOG_ERR("Function not supported\n"); + rte_errno = ENOTSUP; + return 0; + } + if (!is_valid_port(dev, port_id)) { RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id); rte_errno = EINVAL; @@ -995,18 +1019,22 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id, return 0; } - diag = (*dev->dev_ops->port_link)(dev, dev->data->ports[port_id], - queues, priorities, nb_links); + if (profile) + diag = (*dev->dev_ops->port_link_profile)(dev, dev->data->ports[port_id], queues, + priorities, nb_links, profile); + else + diag = (*dev->dev_ops->port_link)(dev, dev->data->ports[port_id], queues, + priorities, nb_links); if (diag < 0) return diag; - links_map = dev->data->links_map; + links_map = dev->data->links_map[profile]; /* Point links_map to this port specific area */ links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV); for (i = 0; i < diag; i++) links_map[queues[i]] = (uint8_t)priorities[i]; - rte_eventdev_trace_port_link(dev_id, port_id, nb_links, diag); + rte_eventdev_trace_port_profile_links_set(dev_id, port_id, nb_links, profile, diag); return diag; } @@ -1014,27 +1042,50 @@ int rte_event_port_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[], uint16_t nb_unlinks) { - struct rte_eventdev *dev; + return rte_event_port_profile_unlink(dev_id, port_id, queues, nb_unlinks, 0); +} + +int +rte_event_port_profile_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[], + uint16_t nb_unlinks, uint8_t profile) +{ uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV]; - int i, diag, j; + struct rte_event_dev_info info; + struct rte_eventdev *dev; uint16_t *links_map; + int i, diag, j; RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, EINVAL, 0); dev = &rte_eventdevs[dev_id]; + if (*dev->dev_ops->dev_infos_get == NULL) + return -ENOTSUP; + + (*dev->dev_ops->dev_infos_get)(dev, &info); + if (profile >= RTE_EVENT_MAX_PROFILES_PER_PORT || profile >= info.max_profiles_per_port) { + RTE_EDEV_LOG_ERR("Invalid profile=%" PRIu8, profile); + return -EINVAL; + } + if (*dev->dev_ops->port_unlink == NULL) { RTE_EDEV_LOG_ERR("Function not supported"); rte_errno = ENOTSUP; return 0; } + if (profile && *dev->dev_ops->port_unlink_profile == NULL) { + RTE_EDEV_LOG_ERR("Function not supported"); + rte_errno = ENOTSUP; + return 0; + } + if (!is_valid_port(dev, port_id)) { RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id); rte_errno = EINVAL; return 0; } - links_map = dev->data->links_map; + links_map = dev->data->links_map[profile]; /* Point links_map to this port specific area */ links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV); @@ -1063,16 +1114,19 @@ rte_event_port_unlink(uint8_t dev_id, uint8_t port_id, return 0; } - diag = (*dev->dev_ops->port_unlink)(dev, dev->data->ports[port_id], - queues, nb_unlinks); - + if 
(profile) + diag = (*dev->dev_ops->port_unlink_profile)(dev, dev->data->ports[port_id], queues, + nb_unlinks, profile); + else + diag = (*dev->dev_ops->port_unlink)(dev, dev->data->ports[port_id], queues, + nb_unlinks); if (diag < 0) return diag; for (i = 0; i < diag; i++) links_map[queues[i]] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID; - rte_eventdev_trace_port_unlink(dev_id, port_id, nb_unlinks, diag); + rte_eventdev_trace_port_profile_unlink(dev_id, port_id, nb_unlinks, profile, diag); return diag; } @@ -1116,7 +1170,8 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id, return -EINVAL; } - links_map = dev->data->links_map; + /* Use the default profile. */ + links_map = dev->data->links_map[0]; /* Point links_map to this port specific area */ links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV); for (i = 0; i < dev->data->nb_queues; i++) { @@ -1132,6 +1187,48 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id, return count; } +int +rte_event_port_profile_links_get(uint8_t dev_id, uint8_t port_id, uint8_t queues[], + uint8_t priorities[], uint8_t profile) +{ + struct rte_event_dev_info info; + struct rte_eventdev *dev; + uint16_t *links_map; + int i, count = 0; + + RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + + dev = &rte_eventdevs[dev_id]; + if (*dev->dev_ops->dev_infos_get == NULL) + return -ENOTSUP; + + (*dev->dev_ops->dev_infos_get)(dev, &info); + if (profile >= RTE_EVENT_MAX_PROFILES_PER_PORT || profile >= info.max_profiles_per_port) { + RTE_EDEV_LOG_ERR("Invalid profile=%" PRIu8, profile); + return -EINVAL; + } + + if (!is_valid_port(dev, port_id)) { + RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id); + return -EINVAL; + } + + links_map = dev->data->links_map[profile]; + /* Point links_map to this port specific area */ + links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV); + for (i = 0; i < dev->data->nb_queues; i++) { + if (links_map[i] != EVENT_QUEUE_SERVICE_PRIORITY_INVALID) { + queues[count] = i; + priorities[count] = (uint8_t)links_map[i]; + ++count; + } + } + + rte_eventdev_trace_port_profile_links_get(dev_id, port_id, profile, count); + + return count; +} + int rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns, uint64_t *timeout_ticks) @@ -1440,7 +1537,7 @@ eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data, { char mz_name[RTE_EVENTDEV_NAME_MAX_LEN]; const struct rte_memzone *mz; - int n; + int i, n; /* Generate memzone name */ n = snprintf(mz_name, sizeof(mz_name), "rte_eventdev_data_%u", dev_id); @@ -1460,11 +1557,10 @@ eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data, *data = mz->addr; if (rte_eal_process_type() == RTE_PROC_PRIMARY) { memset(*data, 0, sizeof(struct rte_eventdev_data)); - for (n = 0; n < RTE_EVENT_MAX_PORTS_PER_DEV * - RTE_EVENT_MAX_QUEUES_PER_DEV; - n++) - (*data)->links_map[n] = - EVENT_QUEUE_SERVICE_PRIORITY_INVALID; + for (i = 0; i < RTE_EVENT_MAX_PROFILES_PER_PORT; i++) + for (n = 0; n < RTE_EVENT_MAX_PORTS_PER_DEV * RTE_EVENT_MAX_QUEUES_PER_DEV; + n++) + (*data)->links_map[i][n] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID; } return 0; diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h index 2ba8a7b090..f6ce45d160 100644 --- a/lib/eventdev/rte_eventdev.h +++ b/lib/eventdev/rte_eventdev.h @@ -320,6 +320,12 @@ struct rte_event; * rte_event_queue_setup(). 
 */

+#define RTE_EVENT_DEV_CAP_PROFILE_LINK (1ULL << 12)
+/** Event device is capable of supporting multiple link profiles per event port,
+ * i.e., the value of `rte_event_dev_info::max_profiles_per_port` is greater
+ * than one.
+ */
+
 /* Event device priority levels */
 #define RTE_EVENT_DEV_PRIORITY_HIGHEST 0
 /**< Highest priority expressed across eventdev subsystem
@@ -446,6 +452,10 @@ struct rte_event_dev_info {
 	 * device. These ports and queues are not accounted for in
 	 * max_event_ports or max_event_queues.
 	 */
+	uint8_t max_profiles_per_port;
+	/**< Maximum number of event queue profiles per event port.
+	 * A device that doesn't support multiple profiles will set this as 1.
+	 */
 };

 /**
@@ -1536,6 +1546,10 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns,
  * latency of critical work by establishing the link with more event ports
  * at runtime.
  *
+ * When the value of ``rte_event_dev_info::max_profiles_per_port`` is greater
+ * than or equal to one, this function links the event queues to the default
+ * profile, i.e., profile 0, of the event port.
+ *
  * @param dev_id
  *   The identifier of the device.
  *
@@ -1593,6 +1607,10 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id,
  * Event queue(s) to event port unlink establishment can be changed at runtime
  * without re-configuring the device.
  *
+ * When the value of ``rte_event_dev_info::max_profiles_per_port`` is greater
+ * than or equal to one, this function unlinks the event queues from the default
+ * profile, i.e., profile 0, of the event port.
+ *
  * @see rte_event_port_unlinks_in_progress() to poll for completed unlinks.
  *
  * @param dev_id
@@ -1626,6 +1644,136 @@ int
 rte_event_port_unlink(uint8_t dev_id, uint8_t port_id,
 		      uint8_t queues[], uint16_t nb_unlinks);

+/**
+ * Link multiple source event queues supplied in *queues* to the destination
+ * event port designated by its *port_id*, with the associated profile
+ * identifier supplied in *profile* and service priorities supplied in
+ * *priorities*, on the event device designated by its *dev_id*.
+ *
+ * If *profile* is set to 0, the links created by the call `rte_event_port_link`
+ * will be overwritten.
+ *
+ * Event ports by default use profile 0 unless it is changed using the
+ * call ``rte_event_port_profile_switch()``.
+ *
+ * The link establishment shall enable the event port *port_id* to start
+ * receiving events from the specified event queue(s) supplied in *queues*.
+ *
+ * An event queue may link to one or more event ports.
+ * The number of links that can be established from an event queue to an event
+ * port is implementation defined.
+ *
+ * Event queue(s) to event port link establishment can be changed at runtime
+ * without re-configuring the device to support scaling and to reduce the
+ * latency of critical work by establishing the link with more event ports
+ * at runtime.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param port_id
+ *   Event port identifier to select the destination port to link.
+ *
+ * @param queues
+ *   Points to an array of *nb_links* event queues to be linked
+ *   to the event port.
+ *   NULL value is allowed, in which case this function links all the configured
+ *   event queues *nb_event_queues* which were previously supplied to
+ *   rte_event_dev_configure() to the event port *port_id*.
+ *
+ * @param priorities
+ *   Points to an array of *nb_links* service priorities associated with each
+ *   event queue linked to the event port.
+ *   The priority defines the event port's servicing priority for the
+ *   event queue, which may be ignored by an implementation.
+ *   The requested priority should be in the range of
+ *   [RTE_EVENT_DEV_PRIORITY_HIGHEST, RTE_EVENT_DEV_PRIORITY_LOWEST].
+ *   The implementation shall normalize the requested priority to an
+ *   implementation-supported priority value.
+ *   NULL value is allowed, in which case this function links the event queues
+ *   with RTE_EVENT_DEV_PRIORITY_NORMAL servicing priority.
+ *
+ * @param nb_links
+ *   The number of links to establish. This parameter is ignored if queues is
+ *   NULL.
+ *
+ * @param profile
+ *   The profile identifier associated with the links between event queues and
+ *   event port. Should be less than the max capability reported by
+ *   ``rte_event_dev_info::max_profiles_per_port``.
+ *
+ * @return
+ *   The number of links actually established. The return value can be less than
+ *   the value of the *nb_links* parameter when the implementation has the
+ *   limitation on specific queue to port link establishment or if invalid
+ *   parameters are specified in *queues*.
+ *   If the return value is less than *nb_links*, the remaining links at the end
+ *   of link[] are not established, and the caller has to take care of them.
+ *   If the return value is less than *nb_links*, the implementation shall
+ *   update rte_errno accordingly. Possible rte_errno values are:
+ *   (EDQUOT) Quota exceeded (Application tried to link the queue configured
+ *   with RTE_EVENT_QUEUE_CFG_SINGLE_LINK to more than one event port)
+ *   (EINVAL) Invalid parameter
+ *
+ */
+__rte_experimental
+int
+rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t queues[],
+				 const uint8_t priorities[], uint16_t nb_links, uint8_t profile);
+
+/**
+ * Unlink multiple source event queues supplied in *queues*, that belong to the
+ * profile designated by *profile*, from the destination event port designated
+ * by its *port_id* on the event device designated by its *dev_id*.
+ *
+ * If *profile* is set to 0, i.e., the default profile, this function acts as
+ * ``rte_event_port_unlink``.
+ *
+ * The unlink call issues an async request to disable the event port *port_id*
+ * from receiving events from the specified event queue *queue_id*.
+ * Event queue(s) to event port unlink establishment can be changed at runtime
+ * without re-configuring the device.
+ *
+ * @see rte_event_port_unlinks_in_progress() to poll for completed unlinks.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param port_id
+ *   Event port identifier to select the destination port to unlink.
+ *
+ * @param queues
+ *   Points to an array of *nb_unlinks* event queues to be unlinked
+ *   from the event port.
+ *   NULL value is allowed, in which case this function unlinks all the
+ *   event queue(s) from the event port *port_id*.
+ *
+ * @param nb_unlinks
+ *   The number of unlinks to establish. This parameter is ignored if queues is
+ *   NULL.
+ *
+ * @param profile
+ *   The profile identifier associated with the links between event queues and
+ *   event port. Should be less than the max capability reported by
+ *   ``rte_event_dev_info::max_profiles_per_port``.
+ *
+ * @return
+ *   The number of unlinks successfully requested. The return value can be less
+ *   than the value of the *nb_unlinks* parameter when the implementation has
+ *   the limitation on specific queue to port unlink establishment or
+ *   if invalid parameters are specified.
+ *   If the return value is less than *nb_unlinks*, the remaining queues at the
+ *   end of queues[] are not unlinked, and the caller has to take care of them.
+ *   If the return value is less than *nb_unlinks*, the implementation shall
+ *   update rte_errno accordingly. Possible rte_errno values are:
+ *   (EINVAL) Invalid parameter
+ *
+ */
+__rte_experimental
+int
+rte_event_port_profile_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+			      uint16_t nb_unlinks, uint8_t profile);
+
 /**
  * Returns the number of unlinks in progress.
  *
@@ -1680,6 +1828,42 @@ int
 rte_event_port_links_get(uint8_t dev_id, uint8_t port_id,
 			 uint8_t queues[], uint8_t priorities[]);

+/**
+ * Retrieve the list of source event queues and their service priorities
+ * associated with a profile and linked to the destination event port
+ * designated by its *port_id* on the event device designated by its *dev_id*.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ *
+ * @param port_id
+ *   Event port identifier.
+ *
+ * @param[out] queues
+ *   Points to an array of *queues* for output.
+ *   The caller has to allocate *RTE_EVENT_MAX_QUEUES_PER_DEV* bytes to
+ *   store the event queue(s) linked with event port *port_id*.
+ *
+ * @param[out] priorities
+ *   Points to an array of *priorities* for output.
+ *   The caller has to allocate *RTE_EVENT_MAX_QUEUES_PER_DEV* bytes to
+ *   store the service priority associated with each event queue linked.
+ *
+ * @param profile
+ *   The profile identifier associated with the links between event queues and
+ *   event port. Should be less than the max capability reported by
+ *   ``rte_event_dev_info::max_profiles_per_port``.
+ *
+ * @return
+ *   The number of links established on the event port designated by its
+ *   *port_id*.
+ *   - <0 on failure.
+ */
+__rte_experimental
+int
+rte_event_port_profile_links_get(uint8_t dev_id, uint8_t port_id, uint8_t queues[],
+				 uint8_t priorities[], uint8_t profile);
+
 /**
  * Retrieve the service ID of the event dev. If the adapter doesn't use
  * a rte_service function, this function returns -ESRCH.
@@ -2265,6 +2449,53 @@ rte_event_maintain(uint8_t dev_id, uint8_t port_id, int op)
 	return 0;
 }

+/**
+ * Change the active profile on an event port.
+ *
+ * This function is used to change the currently active profile on an event
+ * port when multiple link profiles are configured on it through the function
+ * call ``rte_event_port_profile_links_set``.
+ *
+ * On the subsequent ``rte_event_dequeue_burst`` call, only the event queues
+ * that were associated with the newly active profile will participate in
+ * scheduling.
+ *
+ * @param dev_id
+ *   The identifier of the device.
+ * @param port_id
+ *   The identifier of the event port.
+ * @param profile
+ *   The identifier of the profile.
+ * @return
+ *  - 0 on success.
+ *  - -EINVAL if *dev_id*, *port_id*, or *profile* is invalid.
+ */
+__rte_experimental
+static inline int
+rte_event_port_profile_switch(uint8_t dev_id, uint8_t port_id, uint8_t profile)
+{
+	const struct rte_event_fp_ops *fp_ops;
+	void *port;
+
+	fp_ops = &rte_event_fp_ops[dev_id];
+	port = fp_ops->data[port_id];
+
+#ifdef RTE_LIBRTE_EVENTDEV_DEBUG
+	if (dev_id >= RTE_EVENT_MAX_DEVS ||
+	    port_id >= RTE_EVENT_MAX_PORTS_PER_DEV)
+		return -EINVAL;
+
+	if (port == NULL)
+		return -EINVAL;
+
+	if (profile >= RTE_EVENT_MAX_PROFILES_PER_PORT)
+		return -EINVAL;
+#endif
+	rte_eventdev_trace_port_profile_switch(dev_id, port_id, profile);
+
+	return fp_ops->profile_switch(port, profile);
+}
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h
index c328bdbc82..dfde8500fc 100644
--- a/lib/eventdev/rte_eventdev_core.h
+++ b/lib/eventdev/rte_eventdev_core.h
@@ -42,6 +42,8 @@ typedef uint16_t (*event_crypto_adapter_enqueue_t)(void *port,
 						   uint16_t nb_events);
 /**< @internal Enqueue burst of events on crypto adapter */

+typedef int (*event_profile_switch_t)(void *port, uint8_t profile);
+
 struct rte_event_fp_ops {
 	void **data;
 	/**< points to array of internal port data pointers */
@@ -65,6 +67,8 @@ struct rte_event_fp_ops {
 	/**< PMD Tx adapter enqueue same destination function. */
 	event_crypto_adapter_enqueue_t ca_enqueue;
 	/**< PMD Crypto adapter enqueue function. */
+	event_profile_switch_t profile_switch;
+	/**< PMD Event switch profile function. */
 	uintptr_t reserved[6];
 } __rte_cache_aligned;
diff --git a/lib/eventdev/rte_eventdev_trace_fp.h b/lib/eventdev/rte_eventdev_trace_fp.h
index af2172d2a5..04d510ad00 100644
--- a/lib/eventdev/rte_eventdev_trace_fp.h
+++ b/lib/eventdev/rte_eventdev_trace_fp.h
@@ -46,6 +46,14 @@ RTE_TRACE_POINT_FP(
 	rte_trace_point_emit_int(op);
 )

+RTE_TRACE_POINT_FP(
+	rte_eventdev_trace_port_profile_switch,
+	RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, uint8_t profile),
+	rte_trace_point_emit_u8(dev_id);
+	rte_trace_point_emit_u8(port_id);
+	rte_trace_point_emit_u8(profile);
+)
+
 RTE_TRACE_POINT_FP(
 	rte_eventdev_trace_eth_tx_adapter_enqueue,
 	RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table,
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index b03c10d99f..22e88185b7 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -131,6 +131,12 @@ EXPERIMENTAL {
 	rte_event_eth_tx_adapter_runtime_params_init;
 	rte_event_eth_tx_adapter_runtime_params_set;
 	rte_event_timer_remaining_ticks_get;
+
+	# added in 23.11
+	rte_event_port_profile_links_set;
+	rte_event_port_profile_unlink;
+	rte_event_port_profile_links_get;
+	__rte_eventdev_trace_port_profile_switch;
 };

 INTERNAL {
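To make the driver-side contract concrete before the cnxk implementation
below, here is a minimal sketch of a callback matching the new
event_profile_switch_t fast-path op; the port structure, profile limit, and
function names are hypothetical, not taken from any PMD:

    #include <errno.h>
    #include <stdint.h>

    /* Hypothetical per-port state of a software PMD. */
    struct my_pmd_port {
            uint8_t active_profile;
    };

    #define MY_PMD_MAX_PROFILES 2 /* hypothetical; must match dev_infos_get */

    /* Dispatched from rte_event_port_profile_switch(); a software PMD can
     * simply record the profile and consult it on the next dequeue to pick
     * the queue set to schedule from. */
    static int
    my_pmd_profile_switch(void *port, uint8_t profile)
    {
            struct my_pmd_port *p = port;

            if (profile >= MY_PMD_MAX_PROFILES)
                    return -EINVAL;

            p->active_profile = profile;
            return 0;
    }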
From patchwork Thu Aug 31 20:44:23 2023
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 131002
X-Patchwork-Delegate: jerinj@marvell.com
Subject: [PATCH v2 2/3] event/cnxk: implement event link profiles
Date: Fri, 1 Sep 2023 02:14:23 +0530
Message-ID: <20230831204424.13367-3-pbhagavatula@marvell.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20230831204424.13367-1-pbhagavatula@marvell.com>
References: <20230825184435.2986-1-pbhagavatula@marvell.com>
 <20230831204424.13367-1-pbhagavatula@marvell.com>
List-Id: DPDK patches and discussions

From: Pavan Nikhilesh

Implement event link profiles support on CN10K and CN9K. Both platforms
support up to two link profiles.
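As a hedged illustration, an application would typically probe for this
capability before relying on two profiles (the device ID and helper name
are placeholders):

    #include <stdbool.h>
    #include <rte_eventdev.h>

    static bool
    dev_supports_link_profiles(uint8_t dev_id)
    {
            struct rte_event_dev_info info;

            if (rte_event_dev_info_get(dev_id, &info) != 0)
                    return false;

            /* With this series, CN9K/CN10K report max_profiles_per_port == 2. */
            return (info.event_dev_cap & RTE_EVENT_DEV_CAP_PROFILE_LINK) &&
                   info.max_profiles_per_port >= 2;
    }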
Signed-off-by: Pavan Nikhilesh --- doc/guides/eventdevs/cnxk.rst | 1 + doc/guides/eventdevs/features/cnxk.ini | 3 +- doc/guides/rel_notes/release_23_11.rst | 5 ++ drivers/common/cnxk/roc_nix_inl_dev.c | 4 +- drivers/common/cnxk/roc_sso.c | 18 +++---- drivers/common/cnxk/roc_sso.h | 8 +-- drivers/common/cnxk/roc_sso_priv.h | 4 +- drivers/event/cnxk/cn10k_eventdev.c | 45 +++++++++++----- drivers/event/cnxk/cn10k_worker.c | 11 ++++ drivers/event/cnxk/cn10k_worker.h | 1 + drivers/event/cnxk/cn9k_eventdev.c | 74 ++++++++++++++++---------- drivers/event/cnxk/cn9k_worker.c | 22 ++++++++ drivers/event/cnxk/cn9k_worker.h | 2 + drivers/event/cnxk/cnxk_eventdev.c | 38 +++++++------ drivers/event/cnxk/cnxk_eventdev.h | 10 ++-- 15 files changed, 164 insertions(+), 82 deletions(-) diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst index 1a59233282..cccb8a0304 100644 --- a/doc/guides/eventdevs/cnxk.rst +++ b/doc/guides/eventdevs/cnxk.rst @@ -48,6 +48,7 @@ Features of the OCTEON cnxk SSO PMD are: - HW managed event vectorization on CN10K for packets enqueued from ethdev to eventdev configurable per each Rx queue in Rx adapter. - Event vector transmission via Tx adapter. +- Up to 2 event link profiles. Prerequisites and Compilation procedure --------------------------------------- diff --git a/doc/guides/eventdevs/features/cnxk.ini b/doc/guides/eventdevs/features/cnxk.ini index bee69bf8f4..5d353e3670 100644 --- a/doc/guides/eventdevs/features/cnxk.ini +++ b/doc/guides/eventdevs/features/cnxk.ini @@ -12,7 +12,8 @@ runtime_port_link = Y multiple_queue_port = Y carry_flow_id = Y maintenance_free = Y -runtime_queue_attr = y +runtime_queue_attr = Y +profile_links = Y [Eth Rx adapter Features] internal_port = Y diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst index e19a0ed3c3..a6362ed7f8 100644 --- a/doc/guides/rel_notes/release_23_11.rst +++ b/doc/guides/rel_notes/release_23_11.rst @@ -96,6 +96,11 @@ New Features * Added ``rte_event_port_profile_switch`` to switch between profiles as needed. +* **Added support for link profiles for Marvell CNXK event device driver.** + + Marvell CNXK event device driver supports up to two link profiles per event + port. Added support to advertise link profile capabilities and supporting APIs. 
+ Removed Items ------------- diff --git a/drivers/common/cnxk/roc_nix_inl_dev.c b/drivers/common/cnxk/roc_nix_inl_dev.c index d76158e30d..690d47c045 100644 --- a/drivers/common/cnxk/roc_nix_inl_dev.c +++ b/drivers/common/cnxk/roc_nix_inl_dev.c @@ -285,7 +285,7 @@ nix_inl_sso_setup(struct nix_inl_dev *inl_dev) } /* Setup hwgrp->hws link */ - sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, true); + sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, 0, true); /* Enable HWGRP */ plt_write64(0x1, inl_dev->sso_base + SSO_LF_GGRP_QCTL); @@ -315,7 +315,7 @@ nix_inl_sso_release(struct nix_inl_dev *inl_dev) nix_inl_sso_unregister_irqs(inl_dev); /* Unlink hws */ - sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, false); + sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, 0, false); /* Release XAQ aura */ sso_hwgrp_release_xaq(&inl_dev->dev, 1); diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c index a5f48d5bbc..f063184565 100644 --- a/drivers/common/cnxk/roc_sso.c +++ b/drivers/common/cnxk/roc_sso.c @@ -185,8 +185,8 @@ sso_rsrc_get(struct roc_sso *roc_sso) } void -sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp, - uint16_t hwgrp[], uint16_t n, uint16_t enable) +sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp, uint16_t hwgrp[], + uint16_t n, uint8_t set, uint16_t enable) { uint64_t reg; int i, j, k; @@ -203,7 +203,7 @@ sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp, k = n % 4; k = k ? k : 4; for (j = 0; j < k; j++) { - mask[j] = hwgrp[i + j] | enable << 14; + mask[j] = hwgrp[i + j] | (uint32_t)set << 12 | enable << 14; if (bmp) { enable ? plt_bitmap_set(bmp, hwgrp[i + j]) : plt_bitmap_clear(bmp, hwgrp[i + j]); @@ -289,8 +289,8 @@ roc_sso_ns_to_gw(struct roc_sso *roc_sso, uint64_t ns) } int -roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], - uint16_t nb_hwgrp) +roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], uint16_t nb_hwgrp, + uint8_t set) { struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; struct sso *sso; @@ -298,14 +298,14 @@ roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], sso = roc_sso_to_sso_priv(roc_sso); base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | hws << 12); - sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, 1); + sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, set, 1); return nb_hwgrp; } int -roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], - uint16_t nb_hwgrp) +roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], uint16_t nb_hwgrp, + uint8_t set) { struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; struct sso *sso; @@ -313,7 +313,7 @@ roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], sso = roc_sso_to_sso_priv(roc_sso); base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | hws << 12); - sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, 0); + sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, set, 0); return nb_hwgrp; } diff --git a/drivers/common/cnxk/roc_sso.h b/drivers/common/cnxk/roc_sso.h index a2bb6fcb22..55a8894050 100644 --- a/drivers/common/cnxk/roc_sso.h +++ b/drivers/common/cnxk/roc_sso.h @@ -84,10 +84,10 @@ int __roc_api roc_sso_hwgrp_set_priority(struct roc_sso *roc_sso, uint16_t hwgrp, uint8_t weight, uint8_t affinity, uint8_t priority); uint64_t __roc_api roc_sso_ns_to_gw(struct roc_sso *roc_sso, uint64_t ns); -int 
__roc_api roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws,
-                               uint16_t hwgrp[], uint16_t nb_hwgrp);
-int __roc_api roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws,
-                                uint16_t hwgrp[], uint16_t nb_hwgrp);
+int __roc_api roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
+                               uint16_t nb_hwgrp, uint8_t set);
+int __roc_api roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[],
+                                uint16_t nb_hwgrp, uint8_t set);
 int __roc_api roc_sso_hwgrp_hws_link_status(struct roc_sso *roc_sso, uint8_t hws,
                                             uint16_t hwgrp);
 uintptr_t __roc_api roc_sso_hws_base_get(struct roc_sso *roc_sso, uint8_t hws);
diff --git a/drivers/common/cnxk/roc_sso_priv.h b/drivers/common/cnxk/roc_sso_priv.h
index 09729d4f62..21c59c57e6 100644
--- a/drivers/common/cnxk/roc_sso_priv.h
+++ b/drivers/common/cnxk/roc_sso_priv.h
@@ -44,8 +44,8 @@ roc_sso_to_sso_priv(struct roc_sso *roc_sso)
 int sso_lf_alloc(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf, void **rsp);
 int sso_lf_free(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf);
-void sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp,
-                         uint16_t hwgrp[], uint16_t n, uint16_t enable);
+void sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp, uint16_t hwgrp[],
+                         uint16_t n, uint8_t set, uint16_t enable);
 int sso_hwgrp_alloc_xaq(struct dev *dev, uint32_t npa_aura_id, uint16_t hwgrps);
 int sso_hwgrp_release_xaq(struct dev *dev, uint16_t hwgrps);
 int sso_hwgrp_init_xaq_aura(struct dev *dev, struct roc_sso_xaq_data *xaq,
diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c
index 499a3aace7..69d970ac30 100644
--- a/drivers/event/cnxk/cn10k_eventdev.c
+++ b/drivers/event/cnxk/cn10k_eventdev.c
@@ -66,21 +66,21 @@ cn10k_sso_init_hws_mem(void *arg, uint8_t port_id)
 }
 
 static int
-cn10k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn10k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
 {
     struct cnxk_sso_evdev *dev = arg;
     struct cn10k_sso_hws *ws = port;
 
-    return roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link);
+    return roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link, profile);
 }
 
 static int
-cn10k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn10k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
 {
     struct cnxk_sso_evdev *dev = arg;
     struct cn10k_sso_hws *ws = port;
 
-    return roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link);
+    return roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link, profile);
 }
 
 static void
@@ -107,10 +107,11 @@ cn10k_sso_hws_release(void *arg, void *hws)
 {
     struct cnxk_sso_evdev *dev = arg;
     struct cn10k_sso_hws *ws = hws;
-    uint16_t i;
+    uint16_t i, j;
 
-    for (i = 0; i < dev->nb_event_queues; i++)
-        roc_sso_hws_unlink(&dev->sso, ws->hws_id, &i, 1);
+    for (i = 0; i < CNXK_SSO_MAX_PROFILES; i++)
+        for (j = 0; j < dev->nb_event_queues; j++)
+            roc_sso_hws_unlink(&dev->sso, ws->hws_id, &j, 1, i);
     memset(ws, 0, sizeof(*ws));
 }
 
@@ -475,6 +476,7 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev)
     CN10K_SET_EVDEV_ENQ_OP(dev, event_dev->txa_enqueue, sso_hws_tx_adptr_enq);
 
     event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue;
+    event_dev->profile_switch = cn10k_sso_hws_profile_switch;
 #else
     RTE_SET_USED(event_dev);
 #endif
@@ -618,9 +620,8 @@ cn10k_sso_port_quiesce(struct rte_eventdev *event_dev, void *port,
 }
 
 static int
-cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port,
-                    const uint8_t queues[], const uint8_t priorities[],
-                    uint16_t nb_links)
+cn10k_sso_port_link_profile(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+                            const uint8_t priorities[], uint16_t nb_links, uint8_t profile)
 {
     struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
     uint16_t hwgrp_ids[nb_links];
@@ -629,14 +630,14 @@ cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port,
     RTE_SET_USED(priorities);
     for (link = 0; link < nb_links; link++)
         hwgrp_ids[link] = queues[link];
-    nb_links = cn10k_sso_hws_link(dev, port, hwgrp_ids, nb_links);
+    nb_links = cn10k_sso_hws_link(dev, port, hwgrp_ids, nb_links, profile);
 
     return (int)nb_links;
 }
 
 static int
-cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
-                      uint8_t queues[], uint16_t nb_unlinks)
+cn10k_sso_port_unlink_profile(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+                              uint16_t nb_unlinks, uint8_t profile)
 {
     struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
     uint16_t hwgrp_ids[nb_unlinks];
@@ -644,11 +645,25 @@ cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
     for (unlink = 0; unlink < nb_unlinks; unlink++)
         hwgrp_ids[unlink] = queues[unlink];
-    nb_unlinks = cn10k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks);
+    nb_unlinks = cn10k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks, profile);
 
     return (int)nb_unlinks;
 }
 
+static int
+cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+                    const uint8_t priorities[], uint16_t nb_links)
+{
+    return cn10k_sso_port_link_profile(event_dev, port, queues, priorities, nb_links, 0);
+}
+
+static int
+cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+                      uint16_t nb_unlinks)
+{
+    return cn10k_sso_port_unlink_profile(event_dev, port, queues, nb_unlinks, 0);
+}
+
 static void
 cn10k_sso_configure_queue_stash(struct rte_eventdev *event_dev)
 {
@@ -993,6 +1008,8 @@ static struct eventdev_ops cn10k_sso_dev_ops = {
     .port_quiesce = cn10k_sso_port_quiesce,
     .port_link = cn10k_sso_port_link,
     .port_unlink = cn10k_sso_port_unlink,
+    .port_link_profile = cn10k_sso_port_link_profile,
+    .port_unlink_profile = cn10k_sso_port_unlink_profile,
 
     .timeout_ticks = cnxk_sso_timeout_ticks,
     .eth_rx_adapter_caps_get = cn10k_sso_rx_adapter_caps_get,
diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c
index 9b5bf90159..d59769717e 100644
--- a/drivers/event/cnxk/cn10k_worker.c
+++ b/drivers/event/cnxk/cn10k_worker.c
@@ -431,3 +431,14 @@ cn10k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
 
     return 1;
 }
+
+int __rte_hot
+cn10k_sso_hws_profile_switch(void *port, uint8_t profile)
+{
+    struct cn10k_sso_hws *ws = port;
+
+    ws->gw_wdata &= ~(0xFFUL);
+    ws->gw_wdata |= (profile + 1);
+
+    return 0;
+}
diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h
index b4ee023723..7aa49d7b3b 100644
--- a/drivers/event/cnxk/cn10k_worker.h
+++ b/drivers/event/cnxk/cn10k_worker.h
@@ -316,6 +316,7 @@ uint16_t __rte_hot cn10k_sso_hws_enq_new_burst(void *port,
 uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port,
                                                const struct rte_event ev[],
                                                uint16_t nb_events);
+int __rte_hot cn10k_sso_hws_profile_switch(void *port, uint8_t profile);
 
 #define R(name, flags)                                                        \
     uint16_t __rte_hot cn10k_sso_hws_deq_##name(                              \
diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c
index 6cce5477f0..10a8c4dfbc 100644
--- a/drivers/event/cnxk/cn9k_eventdev.c
+++ b/drivers/event/cnxk/cn9k_eventdev.c
@@ -15,7 +15,7 @@
         enq_op = enq_ops[dev->tx_offloads & (NIX_TX_OFFLOAD_MAX - 1)]
 
 static int
-cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
 {
     struct cnxk_sso_evdev *dev = arg;
     struct cn9k_sso_hws_dual *dws;
@@ -24,22 +24,20 @@ cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link)
 
     if (dev->dual_ws) {
         dws = port;
-        rc = roc_sso_hws_link(&dev->sso,
-                              CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map,
-                              nb_link);
-        rc |= roc_sso_hws_link(&dev->sso,
-                               CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
-                               map, nb_link);
+        rc = roc_sso_hws_link(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map, nb_link,
+                              profile);
+        rc |= roc_sso_hws_link(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), map,
+                               nb_link, profile);
     } else {
         ws = port;
-        rc = roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link);
+        rc = roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link, profile);
     }
 
     return rc;
 }
 
 static int
-cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
+cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile)
 {
     struct cnxk_sso_evdev *dev = arg;
     struct cn9k_sso_hws_dual *dws;
@@ -48,15 +46,13 @@ cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link)
 
     if (dev->dual_ws) {
         dws = port;
-        rc = roc_sso_hws_unlink(&dev->sso,
-                                CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0),
-                                map, nb_link);
-        rc |= roc_sso_hws_unlink(&dev->sso,
-                                 CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
-                                 map, nb_link);
+        rc = roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map,
+                                nb_link, profile);
+        rc |= roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), map,
+                                 nb_link, profile);
     } else {
         ws = port;
-        rc = roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link);
+        rc = roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link, profile);
     }
 
     return rc;
@@ -97,21 +93,24 @@ cn9k_sso_hws_release(void *arg, void *hws)
     struct cnxk_sso_evdev *dev = arg;
     struct cn9k_sso_hws_dual *dws;
     struct cn9k_sso_hws *ws;
-    uint16_t i;
+    uint16_t i, k;
 
     if (dev->dual_ws) {
         dws = hws;
         for (i = 0; i < dev->nb_event_queues; i++) {
-            roc_sso_hws_unlink(&dev->sso,
-                               CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), &i, 1);
-            roc_sso_hws_unlink(&dev->sso,
-                               CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), &i, 1);
+            for (k = 0; k < CNXK_SSO_MAX_PROFILES; k++) {
+                roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0),
+                                   &i, 1, k);
+                roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1),
+                                   &i, 1, k);
+            }
         }
         memset(dws, 0, sizeof(*dws));
     } else {
         ws = hws;
         for (i = 0; i < dev->nb_event_queues; i++)
-            roc_sso_hws_unlink(&dev->sso, ws->hws_id, &i, 1);
+            for (k = 0; k < CNXK_SSO_MAX_PROFILES; k++)
+                roc_sso_hws_unlink(&dev->sso, ws->hws_id, &i, 1, k);
         memset(ws, 0, sizeof(*ws));
     }
 }
@@ -438,6 +437,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
     event_dev->enqueue_burst = cn9k_sso_hws_enq_burst;
     event_dev->enqueue_new_burst = cn9k_sso_hws_enq_new_burst;
     event_dev->enqueue_forward_burst = cn9k_sso_hws_enq_fwd_burst;
+    event_dev->profile_switch = cn9k_sso_hws_profile_switch;
     if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
         CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, sso_hws_deq_seg);
         CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst,
@@ -475,6 +475,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev)
         event_dev->enqueue_forward_burst =
             cn9k_sso_hws_dual_enq_fwd_burst;
         event_dev->ca_enqueue = cn9k_sso_hws_dual_ca_enq;
+        event_dev->profile_switch = cn9k_sso_hws_dual_profile_switch;
 
         if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) {
             CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue,
@@ -695,9 +696,8 @@ cn9k_sso_port_quiesce(struct rte_eventdev *event_dev, void *port,
 }
 
 static int
-cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port,
-                   const uint8_t queues[], const uint8_t priorities[],
-                   uint16_t nb_links)
+cn9k_sso_port_link_profile(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+                           const uint8_t priorities[], uint16_t nb_links, uint8_t profile)
 {
     struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
     uint16_t hwgrp_ids[nb_links];
@@ -706,14 +706,14 @@ cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port,
     RTE_SET_USED(priorities);
     for (link = 0; link < nb_links; link++)
         hwgrp_ids[link] = queues[link];
-    nb_links = cn9k_sso_hws_link(dev, port, hwgrp_ids, nb_links);
+    nb_links = cn9k_sso_hws_link(dev, port, hwgrp_ids, nb_links, profile);
 
     return (int)nb_links;
 }
 
 static int
-cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
-                     uint8_t queues[], uint16_t nb_unlinks)
+cn9k_sso_port_unlink_profile(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+                             uint16_t nb_unlinks, uint8_t profile)
 {
     struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
     uint16_t hwgrp_ids[nb_unlinks];
@@ -721,11 +721,25 @@ cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port,
     for (unlink = 0; unlink < nb_unlinks; unlink++)
         hwgrp_ids[unlink] = queues[unlink];
-    nb_unlinks = cn9k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks);
+    nb_unlinks = cn9k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks, profile);
 
     return (int)nb_unlinks;
 }
 
+static int
+cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port, const uint8_t queues[],
+                   const uint8_t priorities[], uint16_t nb_links)
+{
+    return cn9k_sso_port_link_profile(event_dev, port, queues, priorities, nb_links, 0);
+}
+
+static int
+cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port, uint8_t queues[],
+                     uint16_t nb_unlinks)
+{
+    return cn9k_sso_port_unlink_profile(event_dev, port, queues, nb_unlinks, 0);
+}
+
 static int
 cn9k_sso_start(struct rte_eventdev *event_dev)
 {
@@ -1006,6 +1020,8 @@ static struct eventdev_ops cn9k_sso_dev_ops = {
     .port_quiesce = cn9k_sso_port_quiesce,
     .port_link = cn9k_sso_port_link,
     .port_unlink = cn9k_sso_port_unlink,
+    .port_link_profile = cn9k_sso_port_link_profile,
+    .port_unlink_profile = cn9k_sso_port_unlink_profile,
 
     .timeout_ticks = cnxk_sso_timeout_ticks,
     .eth_rx_adapter_caps_get = cn9k_sso_rx_adapter_caps_get,
diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c
index abbbfffd85..a9ac49a5a7 100644
--- a/drivers/event/cnxk/cn9k_worker.c
+++ b/drivers/event/cnxk/cn9k_worker.c
@@ -66,6 +66,17 @@ cn9k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[],
     return 1;
 }
 
+int __rte_hot
+cn9k_sso_hws_profile_switch(void *port, uint8_t profile)
+{
+    struct cn9k_sso_hws *ws = port;
+
+    ws->gw_wdata &= ~(0xFFUL);
+    ws->gw_wdata |= (profile + 1);
+
+    return 0;
+}
+
 /* Dual ws ops. */
 
 uint16_t __rte_hot
@@ -149,3 +160,14 @@ cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events)
 
     return cn9k_cpt_crypto_adapter_enqueue(dws->base[!dws->vws], ev->event_ptr);
 }
+
+int __rte_hot
+cn9k_sso_hws_dual_profile_switch(void *port, uint8_t profile)
+{
+    struct cn9k_sso_hws_dual *dws = port;
+
+    dws->gw_wdata &= ~(0xFFUL);
+    dws->gw_wdata |= (profile + 1);
+
+    return 0;
+}
diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h
index 9ddab095ac..bb062a2eaf 100644
--- a/drivers/event/cnxk/cn9k_worker.h
+++ b/drivers/event/cnxk/cn9k_worker.h
@@ -375,6 +375,7 @@ uint16_t __rte_hot cn9k_sso_hws_enq_new_burst(void *port,
 uint16_t __rte_hot cn9k_sso_hws_enq_fwd_burst(void *port,
                                               const struct rte_event ev[],
                                               uint16_t nb_events);
+int __rte_hot cn9k_sso_hws_profile_switch(void *port, uint8_t profile);
 
 uint16_t __rte_hot cn9k_sso_hws_dual_enq(void *port,
                                          const struct rte_event *ev);
@@ -391,6 +392,7 @@ uint16_t __rte_hot cn9k_sso_hws_ca_enq(void *port, struct rte_event ev[],
                                        uint16_t nb_events);
 uint16_t __rte_hot cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[],
                                             uint16_t nb_events);
+int __rte_hot cn9k_sso_hws_dual_profile_switch(void *port, uint8_t profile);
 
 #define R(name, flags)                                                        \
     uint16_t __rte_hot cn9k_sso_hws_deq_##name(                               \
diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c
index 529622cac6..f48d6d91b6 100644
--- a/drivers/event/cnxk/cnxk_eventdev.c
+++ b/drivers/event/cnxk/cnxk_eventdev.c
@@ -30,8 +30,9 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev,
                   RTE_EVENT_DEV_CAP_NONSEQ_MODE |
                   RTE_EVENT_DEV_CAP_CARRY_FLOW_ID |
                   RTE_EVENT_DEV_CAP_MAINTENANCE_FREE |
-                  RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR;
-    dev_info->max_profiles_per_port = 1;
+                  RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR |
+                  RTE_EVENT_DEV_CAP_PROFILE_LINK;
+    dev_info->max_profiles_per_port = CNXK_SSO_MAX_PROFILES;
 }
 
 int
@@ -129,23 +130,25 @@ cnxk_sso_restore_links(const struct rte_eventdev *event_dev,
 {
     struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
     uint16_t *links_map, hwgrp[CNXK_SSO_MAX_HWGRP];
-    int i, j;
+    int i, j, k;
 
     for (i = 0; i < dev->nb_event_ports; i++) {
-        uint16_t nb_hwgrp = 0;
-
-        links_map = event_dev->data->links_map[0];
-        /* Point links_map to this port specific area */
-        links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
+        for (k = 0; k < CNXK_SSO_MAX_PROFILES; k++) {
+            uint16_t nb_hwgrp = 0;
+
+            links_map = event_dev->data->links_map[k];
+            /* Point links_map to this port specific area */
+            links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV);
+
+            for (j = 0; j < dev->nb_event_queues; j++) {
+                if (links_map[j] == 0xdead)
+                    continue;
+                hwgrp[nb_hwgrp] = j;
+                nb_hwgrp++;
+            }
 
-        for (j = 0; j < dev->nb_event_queues; j++) {
-            if (links_map[j] == 0xdead)
-                continue;
-            hwgrp[nb_hwgrp] = j;
-            nb_hwgrp++;
+            link_fn(dev, event_dev->data->ports[i], hwgrp, nb_hwgrp, k);
         }
-
-        link_fn(dev, event_dev->data->ports[i], hwgrp, nb_hwgrp);
     }
 }
 
@@ -436,7 +439,7 @@ cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn)
 {
     struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev);
     uint16_t all_queues[CNXK_SSO_MAX_HWGRP];
-    uint16_t i;
+    uint16_t i, j;
     void *ws;
 
     if (!dev->configured)
@@ -447,7 +450,8 @@ cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn)
 
     for (i = 0; i < dev->nb_event_ports; i++) {
         ws = event_dev->data->ports[i];
-        unlink_fn(dev, ws, all_queues, dev->nb_event_queues);
+        for (j = 0; j < CNXK_SSO_MAX_PROFILES; j++)
+            unlink_fn(dev, ws, all_queues, dev->nb_event_queues, j);
         rte_free(cnxk_sso_hws_get_cookie(ws));
         event_dev->data->ports[i] = NULL;
     }
diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h
index 962e630256..d351314200 100644
--- a/drivers/event/cnxk/cnxk_eventdev.h
+++ b/drivers/event/cnxk/cnxk_eventdev.h
@@ -33,6 +33,8 @@
 #define CN10K_SSO_GW_MODE "gw_mode"
 #define CN10K_SSO_STASH   "stash"
 
+#define CNXK_SSO_MAX_PROFILES 2
+
 #define NSEC2USEC(__ns)         ((__ns) / 1E3)
 #define USEC2NSEC(__us)         ((__us)*1E3)
 #define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9)
@@ -57,10 +59,10 @@ typedef void *(*cnxk_sso_init_hws_mem_t)(void *dev, uint8_t port_id);
 typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t grp_base);
 typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws);
-typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map,
-                               uint16_t nb_link);
-typedef int (*cnxk_sso_unlink_t)(void *dev, void *ws, uint16_t *map,
-                                 uint16_t nb_link);
+typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map, uint16_t nb_link,
+                               uint8_t profile);
+typedef int (*cnxk_sso_unlink_t)(void *dev, void *ws, uint16_t *map, uint16_t nb_link,
+                                 uint8_t profile);
 typedef void (*cnxk_handle_event_t)(void *arg, struct rte_event ev);
 typedef void (*cnxk_sso_hws_reset_t)(void *arg, void *ws);
 typedef int (*cnxk_sso_hws_flush_t)(void *ws, uint8_t queue_id, uintptr_t base,
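For context, the per-profile link/unlink ops wired up above back the public
profile API on cnxk devices. A minimal application-side sequence using that
API might look as follows. This is only a sketch: it assumes an event device
that advertises RTE_EVENT_DEV_CAP_PROFILE_LINK, with port 0 and queues 0 and 1
already configured, and all error handling omitted; queue, port, and profile
numbers are arbitrary.

    #include <rte_eventdev.h>

    static void
    profile_usage_sketch(uint8_t dev_id)
    {
        uint8_t q0 = 0, q1 = 1;

        /* Associate queue 0 with profile 0 and queue 1 with profile 1 on
         * port 0; passing NULL priorities requests the default priority,
         * as with rte_event_port_link().
         */
        rte_event_port_profile_links_set(dev_id, 0, &q0, NULL, 1, 0);
        rte_event_port_profile_links_set(dev_id, 0, &q1, NULL, 1, 1);

        rte_event_dev_start(dev_id);

        /* Make profile 1 active on port 0; subsequent dequeues on this
         * port schedule only from the queues linked to profile 1.
         */
        rte_event_port_profile_switch(dev_id, 0, 1);
    }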
From patchwork Thu Aug 31 20:44:24 2023
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 131003
X-Patchwork-Delegate: jerinj@marvell.com
Subject: [PATCH v2 3/3] test/event: add event link profile test
Date: Fri, 1 Sep 2023 02:14:24 +0530
Message-ID: <20230831204424.13367-4-pbhagavatula@marvell.com>
In-Reply-To: <20230831204424.13367-1-pbhagavatula@marvell.com>
References: <20230825184435.2986-1-pbhagavatula@marvell.com> <20230831204424.13367-1-pbhagavatula@marvell.com>
List-Id: DPDK patches and discussions

From: Pavan Nikhilesh

Add a test case to verify event link profiles.

Signed-off-by: Pavan Nikhilesh
---
 app/test/test_eventdev.c | 117 +++++++++++++++++++++++++++++++++++++++
 1 file changed, 117 insertions(+)
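One note on the structure of the test that follows: after each profile switch
it retries the dequeue a bounded number of times, since a just-enqueued event
may not be immediately available to the scheduler. Condensed into a helper
(hypothetical, for illustration only; the test itself open-codes the loop):

    /* Poll for one event, giving the scheduler a few attempts. */
    static uint16_t
    dequeue_with_retries(uint8_t dev_id, uint8_t port, struct rte_event *ev,
                         int retries)
    {
        uint16_t nb = 0;

        while (retries-- > 0) {
            nb = rte_event_dequeue_burst(dev_id, port, ev, 1, 0);
            if (nb)
                break;
        }
        return nb;
    }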
diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c
index 29354a24c9..b333fec634 100644
--- a/app/test/test_eventdev.c
+++ b/app/test/test_eventdev.c
@@ -1129,6 +1129,121 @@ test_eventdev_link_get(void)
     return TEST_SUCCESS;
 }
 
+static int
+test_eventdev_change_profile(void)
+{
+#define MAX_RETRIES 4
+    uint8_t priorities[RTE_EVENT_MAX_QUEUES_PER_DEV];
+    uint8_t queues[RTE_EVENT_MAX_QUEUES_PER_DEV];
+    struct rte_event_queue_conf qcfg;
+    struct rte_event_port_conf pcfg;
+    struct rte_event_dev_info info;
+    struct rte_event ev;
+    uint8_t q, re;
+    int rc;
+
+    rte_event_dev_info_get(TEST_DEV_ID, &info);
+
+    if (info.max_profiles_per_port <= 1)
+        return TEST_SKIPPED;
+
+    if (info.max_event_queues <= 1)
+        return TEST_SKIPPED;
+
+    rc = rte_event_port_default_conf_get(TEST_DEV_ID, 0, &pcfg);
+    TEST_ASSERT_SUCCESS(rc, "Failed to get port0 default config");
+    rc = rte_event_port_setup(TEST_DEV_ID, 0, &pcfg);
+    TEST_ASSERT_SUCCESS(rc, "Failed to setup port0");
+
+    rc = rte_event_queue_default_conf_get(TEST_DEV_ID, 0, &qcfg);
+    TEST_ASSERT_SUCCESS(rc, "Failed to get queue0 default config");
+    rc = rte_event_queue_setup(TEST_DEV_ID, 0, &qcfg);
+    TEST_ASSERT_SUCCESS(rc, "Failed to setup queue0");
+
+    q = 0;
+    rc = rte_event_port_profile_links_set(TEST_DEV_ID, 0, &q, NULL, 1, 0);
+    TEST_ASSERT(rc == 1, "Failed to link queue 0 to port 0 with profile 0");
+    q = 1;
+    rc = rte_event_port_profile_links_set(TEST_DEV_ID, 0, &q, NULL, 1, 1);
+    TEST_ASSERT(rc == 1, "Failed to link queue 1 to port 0 with profile 1");
+
+    rc = rte_event_port_profile_links_get(TEST_DEV_ID, 0, queues, priorities, 0);
+    TEST_ASSERT(rc == 1, "Failed to get links for profile 0");
+    TEST_ASSERT(queues[0] == 0, "Invalid queue found in link");
+
+    rc = rte_event_port_profile_links_get(TEST_DEV_ID, 0, queues, priorities, 1);
+    TEST_ASSERT(rc == 1, "Failed to get links for profile 1");
+    TEST_ASSERT(queues[0] == 1, "Invalid queue found in link");
+
+    rc = rte_event_dev_start(TEST_DEV_ID);
+    TEST_ASSERT_SUCCESS(rc, "Failed to start event device");
+
+    ev.event_type = RTE_EVENT_TYPE_CPU;
+    ev.queue_id = 0;
+    ev.op = RTE_EVENT_OP_NEW;
+    ev.flow_id = 0;
+    ev.u64 = 0xBADF00D0;
+    rc = rte_event_enqueue_burst(TEST_DEV_ID, 0, &ev, 1);
+    TEST_ASSERT(rc == 1, "Failed to enqueue event");
+    ev.queue_id = 1;
+    ev.flow_id = 1;
+    rc = rte_event_enqueue_burst(TEST_DEV_ID, 0, &ev, 1);
+    TEST_ASSERT(rc == 1, "Failed to enqueue event");
+
+    ev.event = 0;
+    ev.u64 = 0;
+
+    rc = rte_event_port_profile_switch(TEST_DEV_ID, 0, 1);
+    TEST_ASSERT_SUCCESS(rc, "Failed to change profile");
+
+    re = MAX_RETRIES;
+    while (re--) {
+        rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+        if (rc)
+            break;
+    }
+
+    TEST_ASSERT(rc == 1, "Failed to dequeue event from profile 1");
+    TEST_ASSERT(ev.flow_id == 1, "Incorrect flow identifier from profile 1");
+    TEST_ASSERT(ev.queue_id == 1, "Incorrect queue identifier from profile 1");
+
+    re = MAX_RETRIES;
+    while (re--) {
+        rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+        TEST_ASSERT(rc == 0, "Unexpected event dequeued from active profile");
+    }
+
+    rc = rte_event_port_profile_switch(TEST_DEV_ID, 0, 0);
+    TEST_ASSERT_SUCCESS(rc, "Failed to change profile");
+
+    re = MAX_RETRIES;
+    while (re--) {
+        rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+        if (rc)
+            break;
+    }
+
+    TEST_ASSERT(rc == 1, "Failed to dequeue event from profile 0");
+    TEST_ASSERT(ev.flow_id == 0, "Incorrect flow identifier from profile 0");
+    TEST_ASSERT(ev.queue_id == 0, "Incorrect queue identifier from profile 0");
+
+    re = MAX_RETRIES;
+    while (re--) {
+        rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0);
+        TEST_ASSERT(rc == 0, "Unexpected event dequeued from active profile");
+    }
+
+    q = 0;
+    rc = rte_event_port_profile_unlink(TEST_DEV_ID, 0, &q, 1, 0);
+    TEST_ASSERT(rc == 1, "Failed to unlink queue 0 from port 0 with profile 0");
+    q = 1;
+    rc = rte_event_port_profile_unlink(TEST_DEV_ID, 0, &q, 1, 1);
+    TEST_ASSERT(rc == 1, "Failed to unlink queue 1 from port 0 with profile 1");
+
+    return TEST_SUCCESS;
+}
+
 static int
 test_eventdev_close(void)
 {
@@ -1187,6 +1302,8 @@ static struct unit_test_suite eventdev_common_testsuite = {
             test_eventdev_timeout_ticks),
         TEST_CASE_ST(NULL, NULL,
             test_eventdev_start_stop),
+        TEST_CASE_ST(eventdev_configure_setup, eventdev_stop_device,
+            test_eventdev_change_profile),
         TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device,
            test_eventdev_link),
         TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device,