From patchwork Tue Oct 3 09:47:19 2023
X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula
X-Patchwork-Id: 132273
X-Patchwork-Delegate: jerinj@marvell.com
Subject: [PATCH v6 1/3] eventdev: introduce link profiles
Date: Tue, 3 Oct 2023 15:17:19 +0530
Message-ID: <20231003094721.5115-2-pbhagavatula@marvell.com>
In-Reply-To: <20231003094721.5115-1-pbhagavatula@marvell.com>
References: <20231003075109.4309-1-pbhagavatula@marvell.com>
 <20231003094721.5115-1-pbhagavatula@marvell.com>

From: Pavan Nikhilesh

A collection of event queues linked to an event port can be associated
with a unique identifier called a link profile. Multiple such profiles
can be created, based on the event device capability, using the function
`rte_event_port_profile_links_set`, which takes arguments similar to
`rte_event_port_link` in addition to the profile identifier.

The maximum number of link profiles supported by an event device is
advertised through the structure member
`rte_event_dev_info::max_profiles_per_port`. By default, event ports are
configured to use link profile 0 on initialization.

Once multiple link profiles are set up and the event device is started,
the application can use the function `rte_event_port_profile_switch` to
change the currently active profile on an event port. This affects the
next `rte_event_dequeue_burst` call, where the event queues associated
with the newly active link profile will participate in scheduling.

An unlink function, `rte_event_port_profile_unlink`, is provided to modify
the links associated with a profile, and `rte_event_port_profile_links_get`
can be used to retrieve the links associated with a profile.

Using link profiles can reduce the overhead of linking/unlinking and
waiting for unlinks in progress in the fast path, and gives applications
the ability to switch between preset profiles on the fly.
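As a rough sketch of the intended usage (the device ID, port ID, queue
numbering and the two-profile split below are only illustrative
assumptions, not values mandated by the API), an application could drive
the new calls like this:

    #include <errno.h>
    #include <rte_eventdev.h>

    /* Control path: program two link profiles on one event port.
     * dev_id/port_id and the queue split are illustrative only.
     */
    static int
    app_setup_profiles(uint8_t dev_id, uint8_t port_id)
    {
            struct rte_event_dev_info info;
            uint8_t hi_queues[] = {0, 1, 2, 3};
            uint8_t lo_queues[] = {4, 5, 6, 7};

            rte_event_dev_info_get(dev_id, &info);
            if (info.max_profiles_per_port < 2)
                    return -ENOTSUP;

            /* Profile 0 serves the high priority queues, profile 1 the rest. */
            rte_event_port_profile_links_set(dev_id, port_id, hi_queues, NULL, 4, 0);
            rte_event_port_profile_links_set(dev_id, port_id, lo_queues, NULL, 4, 1);

            return 0;
    }

    /* Fast path: flip the active profile when the current one has no work. */
    static void
    app_worker(uint8_t dev_id, uint8_t port_id)
    {
            uint8_t active_profile = 0;
            struct rte_event ev;

            while (1) {
                    if (rte_event_dequeue_burst(dev_id, port_id, &ev, 1, 0) == 0) {
                            active_profile = !active_profile;
                            rte_event_port_profile_switch(dev_id, port_id, active_profile);
                            continue;
                    }
                    /* Process the dequeued event. */
            }
    }

A fuller example along these lines is also added to the programmer's guide
below.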
Signed-off-by: Pavan Nikhilesh
Acked-by: Jerin Jacob
---
 config/rte_config.h | 1 +
 doc/guides/eventdevs/features/default.ini | 1 +
 doc/guides/prog_guide/eventdev.rst | 40 ++++
 doc/guides/rel_notes/release_23_11.rst | 11 ++
 drivers/event/cnxk/cnxk_eventdev.c | 2 +-
 lib/eventdev/eventdev_pmd.h | 59 +++++-
 lib/eventdev/eventdev_private.c | 9 +
 lib/eventdev/eventdev_trace.h | 32 +++
 lib/eventdev/eventdev_trace_points.c | 12 ++
 lib/eventdev/rte_eventdev.c | 150 +++++++++++---
 lib/eventdev/rte_eventdev.h | 231 ++++++++++++++++++++++
 lib/eventdev/rte_eventdev_core.h | 5 +
 lib/eventdev/rte_eventdev_trace_fp.h | 8 +
 lib/eventdev/version.map | 4 +
 14 files changed, 536 insertions(+), 29 deletions(-)

diff --git a/config/rte_config.h b/config/rte_config.h index 401727703f..a06189d0b5 100644 --- a/config/rte_config.h +++ b/config/rte_config.h @@ -73,6 +73,7 @@ #define RTE_EVENT_MAX_DEVS 16 #define RTE_EVENT_MAX_PORTS_PER_DEV 255 #define RTE_EVENT_MAX_QUEUES_PER_DEV 255 +#define RTE_EVENT_MAX_PROFILES_PER_PORT 8 #define RTE_EVENT_TIMER_ADAPTER_NUM_MAX 32 #define RTE_EVENT_ETH_INTR_RING_SIZE 1024 #define RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE 32
diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini index 73a52d915b..e980ae134a 100644 --- a/doc/guides/eventdevs/features/default.ini +++ b/doc/guides/eventdevs/features/default.ini @@ -18,6 +18,7 @@ multiple_queue_port = carry_flow_id = maintenance_free = runtime_queue_attr = +profile_links = ; ; Features of a default Ethernet Rx adapter.
diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst index ff55115d0d..8c15c678bf 100644 --- a/doc/guides/prog_guide/eventdev.rst +++ b/doc/guides/prog_guide/eventdev.rst @@ -317,6 +317,46 @@ can be achieved like this: } int links_made = rte_event_port_link(dev_id, tx_port_id, &single_link_q, &priority, 1); +Linking Queues to Ports with link profiles +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +An application can use link profiles, if supported by the underlying event device, to set up +multiple link profiles per port and change them at run time depending upon heuristic data. +Using link profiles can reduce the overhead of linking/unlinking and waiting for unlinks in progress +in the fast path, and gives applications the ability to switch between preset profiles on the fly. + +An example use case could be as follows. + +Config path: + +..
code-block:: c + + uint8_t lq[4] = {4, 5, 6, 7}; + uint8_t hq[4] = {0, 1, 2, 3}; + + if (rte_event_dev_info.max_profiles_per_port < 2) + return -ENOTSUP; + + rte_event_port_profile_links_set(0, 0, hq, NULL, 4, 0); + rte_event_port_profile_links_set(0, 0, lq, NULL, 4, 1); + +Worker path: + +.. code-block:: c + + uint8_t profile_id_to_switch; + + while (1) { + deq = rte_event_dequeue_burst(0, 0, &ev, 1, 0); + if (deq == 0) { + profile_id_to_switch = app_find_profile_id_to_switch(); + rte_event_port_profile_switch(0, 0, profile_id_to_switch); + continue; + } + + // Process the event received. + } + Starting the EventDev ~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst index b66c364e21..fe6656bed2 100644 --- a/doc/guides/rel_notes/release_23_11.rst +++ b/doc/guides/rel_notes/release_23_11.rst @@ -90,6 +90,17 @@ New Features model by introducing APIs that allow applications to enqueue/dequeue DMA operations to/from dmadev as events scheduled by an event device. +* **Added eventdev support to link queues to port with link profile.** + + Introduced event link profiles that can be used to associated links between + event queues and an event port with a unique identifier termed as link profile. + The profile can be used to switch between the associated links in fast-path + without the additional overhead of linking/unlinking and waiting for unlinking. + + * Added ``rte_event_port_profile_links_set``, ``rte_event_port_profile_unlink`` + ``rte_event_port_profile_links_get`` and ``rte_event_port_profile_switch`` + APIs to enable this feature. + * **Updated Marvell cnxk eventdev driver.** * Added support for ``remaining_ticks_get`` timer adapter PMD callback diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c index 9c9192bd40..e8ea7e0efb 100644 --- a/drivers/event/cnxk/cnxk_eventdev.c +++ b/drivers/event/cnxk/cnxk_eventdev.c @@ -133,7 +133,7 @@ cnxk_sso_restore_links(const struct rte_eventdev *event_dev, for (i = 0; i < dev->nb_event_ports; i++) { uint16_t nb_hwgrp = 0; - links_map = event_dev->data->links_map; + links_map = event_dev->data->links_map[0]; /* Point links_map to this port specific area */ links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV); diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h index f7227c0bfd..30bd90085c 100644 --- a/lib/eventdev/eventdev_pmd.h +++ b/lib/eventdev/eventdev_pmd.h @@ -119,8 +119,8 @@ struct rte_eventdev_data { /**< Array of port configuration structures. */ struct rte_event_queue_conf queues_cfg[RTE_EVENT_MAX_QUEUES_PER_DEV]; /**< Array of queue configuration structures. */ - uint16_t links_map[RTE_EVENT_MAX_PORTS_PER_DEV * - RTE_EVENT_MAX_QUEUES_PER_DEV]; + uint16_t links_map[RTE_EVENT_MAX_PROFILES_PER_PORT] + [RTE_EVENT_MAX_PORTS_PER_DEV * RTE_EVENT_MAX_QUEUES_PER_DEV]; /**< Memory to store queues to port connections. */ void *dev_private; /**< PMD-specific private data */ @@ -179,9 +179,10 @@ struct rte_eventdev { /**< Pointer to PMD eth Tx adapter enqueue function. */ event_crypto_adapter_enqueue_t ca_enqueue; /**< Pointer to PMD crypto adapter enqueue function. */ - event_dma_adapter_enqueue_t dma_enqueue; /**< Pointer to PMD DMA adapter enqueue function. */ + event_profile_switch_t profile_switch; + /**< Pointer to PMD Event switch profile function. 
*/ uint64_t reserved_64s[3]; /**< Reserved for future fields */ void *reserved_ptrs[3]; /**< Reserved for future fields */ @@ -441,6 +442,32 @@ typedef int (*eventdev_port_link_t)(struct rte_eventdev *dev, void *port, const uint8_t queues[], const uint8_t priorities[], uint16_t nb_links); +/** + * Link multiple source event queues associated with a link profile to a + * destination event port. + * + * @param dev + * Event device pointer + * @param port + * Event port pointer + * @param queues + * Points to an array of *nb_links* event queues to be linked + * to the event port. + * @param priorities + * Points to an array of *nb_links* service priorities associated with each + * event queue link to event port. + * @param nb_links + * The number of links to establish. + * @param profile_id + * The profile ID to associate the links. + * + * @return + * Returns 0 on success. + */ +typedef int (*eventdev_port_link_profile_t)(struct rte_eventdev *dev, void *port, + const uint8_t queues[], const uint8_t priorities[], + uint16_t nb_links, uint8_t profile_id); + /** * Unlink multiple source event queues from destination event port. * @@ -459,6 +486,28 @@ typedef int (*eventdev_port_link_t)(struct rte_eventdev *dev, void *port, typedef int (*eventdev_port_unlink_t)(struct rte_eventdev *dev, void *port, uint8_t queues[], uint16_t nb_unlinks); +/** + * Unlink multiple source event queues associated with a link profile from + * destination event port. + * + * @param dev + * Event device pointer + * @param port + * Event port pointer + * @param queues + * An array of *nb_unlinks* event queues to be unlinked from the event port. + * @param nb_unlinks + * The number of unlinks to establish + * @param profile_id + * The profile ID of the associated links. + * + * @return + * Returns 0 on success. + */ +typedef int (*eventdev_port_unlink_profile_t)(struct rte_eventdev *dev, void *port, + uint8_t queues[], uint16_t nb_unlinks, + uint8_t profile_id); + /** * Unlinks in progress. Returns number of unlinks that the PMD is currently * performing, but have not yet been completed. @@ -1502,8 +1551,12 @@ struct eventdev_ops { eventdev_port_link_t port_link; /**< Link event queues to an event port. */ + eventdev_port_link_profile_t port_link_profile; + /**< Link event queues associated with a profile to an event port. */ eventdev_port_unlink_t port_unlink; /**< Unlink event queues from an event port. */ + eventdev_port_unlink_profile_t port_unlink_profile; + /**< Unlink event queues associated with a profile from an event port. */ eventdev_port_unlinks_in_progress_t port_unlinks_in_progress; /**< Unlinks in progress on an event port. 
*/ eventdev_dequeue_timeout_ticks_t timeout_ticks; diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c index 18ed8bf3c8..017f97ccab 100644 --- a/lib/eventdev/eventdev_private.c +++ b/lib/eventdev/eventdev_private.c @@ -89,6 +89,13 @@ dummy_event_dma_adapter_enqueue(__rte_unused void *port, __rte_unused struct rte return 0; } +static int +dummy_event_port_profile_switch(__rte_unused void *port, __rte_unused uint8_t profile_id) +{ + RTE_EDEV_LOG_ERR("change profile requested for unconfigured event device"); + return -EINVAL; +} + void event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op) { @@ -106,6 +113,7 @@ event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op) dummy_event_tx_adapter_enqueue_same_dest, .ca_enqueue = dummy_event_crypto_adapter_enqueue, .dma_enqueue = dummy_event_dma_adapter_enqueue, + .profile_switch = dummy_event_port_profile_switch, .data = dummy_data, }; @@ -127,5 +135,6 @@ event_dev_fp_ops_set(struct rte_event_fp_ops *fp_op, fp_op->txa_enqueue_same_dest = dev->txa_enqueue_same_dest; fp_op->ca_enqueue = dev->ca_enqueue; fp_op->dma_enqueue = dev->dma_enqueue; + fp_op->profile_switch = dev->profile_switch; fp_op->data = dev->data->ports; } diff --git a/lib/eventdev/eventdev_trace.h b/lib/eventdev/eventdev_trace.h index f008ef0091..9c2b261c06 100644 --- a/lib/eventdev/eventdev_trace.h +++ b/lib/eventdev/eventdev_trace.h @@ -76,6 +76,17 @@ RTE_TRACE_POINT( rte_trace_point_emit_int(rc); ) +RTE_TRACE_POINT( + rte_eventdev_trace_port_profile_links_set, + RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, + uint16_t nb_links, uint8_t profile_id, int rc), + rte_trace_point_emit_u8(dev_id); + rte_trace_point_emit_u8(port_id); + rte_trace_point_emit_u16(nb_links); + rte_trace_point_emit_u8(profile_id); + rte_trace_point_emit_int(rc); +) + RTE_TRACE_POINT( rte_eventdev_trace_port_unlink, RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, @@ -86,6 +97,17 @@ RTE_TRACE_POINT( rte_trace_point_emit_int(rc); ) +RTE_TRACE_POINT( + rte_eventdev_trace_port_profile_unlink, + RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, + uint16_t nb_unlinks, uint8_t profile_id, int rc), + rte_trace_point_emit_u8(dev_id); + rte_trace_point_emit_u8(port_id); + rte_trace_point_emit_u16(nb_unlinks); + rte_trace_point_emit_u8(profile_id); + rte_trace_point_emit_int(rc); +) + RTE_TRACE_POINT( rte_eventdev_trace_start, RTE_TRACE_POINT_ARGS(uint8_t dev_id, int rc), @@ -487,6 +509,16 @@ RTE_TRACE_POINT( rte_trace_point_emit_int(count); ) +RTE_TRACE_POINT( + rte_eventdev_trace_port_profile_links_get, + RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, uint8_t profile_id, + int count), + rte_trace_point_emit_u8(dev_id); + rte_trace_point_emit_u8(port_id); + rte_trace_point_emit_u8(profile_id); + rte_trace_point_emit_int(count); +) + RTE_TRACE_POINT( rte_eventdev_trace_port_unlinks_in_progress, RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id), diff --git a/lib/eventdev/eventdev_trace_points.c b/lib/eventdev/eventdev_trace_points.c index 76144cfe75..8024e07531 100644 --- a/lib/eventdev/eventdev_trace_points.c +++ b/lib/eventdev/eventdev_trace_points.c @@ -19,9 +19,15 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_setup, RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_link, lib.eventdev.port.link) +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_links_set, + lib.eventdev.port.profile.links.set) + RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_unlink, lib.eventdev.port.unlink) +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_unlink, + 
lib.eventdev.port.profile.unlink) + RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_start, lib.eventdev.start) @@ -40,6 +46,9 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_deq_burst, RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_maintain, lib.eventdev.maintain) +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_switch, + lib.eventdev.port.profile.switch) + /* Eventdev Rx adapter trace points */ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_eth_rx_adapter_create, lib.eventdev.rx.adapter.create) @@ -206,6 +215,9 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_default_conf_get, RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_links_get, lib.eventdev.port.links.get) +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_links_get, + lib.eventdev.port.profile.links.get) + RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_unlinks_in_progress, lib.eventdev.port.unlinks.in.progress) diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c index 60509c6efb..5ee8bd665b 100644 --- a/lib/eventdev/rte_eventdev.c +++ b/lib/eventdev/rte_eventdev.c @@ -96,6 +96,7 @@ rte_event_dev_info_get(uint8_t dev_id, struct rte_event_dev_info *dev_info) return -EINVAL; memset(dev_info, 0, sizeof(struct rte_event_dev_info)); + dev_info->max_profiles_per_port = 1; if (*dev->dev_ops->dev_infos_get == NULL) return -ENOTSUP; @@ -293,7 +294,7 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports) void **ports; uint16_t *links_map; struct rte_event_port_conf *ports_cfg; - unsigned int i; + unsigned int i, j; RTE_EDEV_LOG_DEBUG("Setup %d ports on device %u", nb_ports, dev->data->dev_id); @@ -304,7 +305,6 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports) ports = dev->data->ports; ports_cfg = dev->data->ports_cfg; - links_map = dev->data->links_map; for (i = nb_ports; i < old_nb_ports; i++) (*dev->dev_ops->port_release)(ports[i]); @@ -320,9 +320,11 @@ event_dev_port_config(struct rte_eventdev *dev, uint8_t nb_ports) sizeof(ports[0]) * new_ps); memset(ports_cfg + old_nb_ports, 0, sizeof(ports_cfg[0]) * new_ps); - for (i = old_links_map_end; i < links_map_end; i++) - links_map[i] = - EVENT_QUEUE_SERVICE_PRIORITY_INVALID; + for (i = 0; i < RTE_EVENT_MAX_PROFILES_PER_PORT; i++) { + links_map = dev->data->links_map[i]; + for (j = old_links_map_end; j < links_map_end; j++) + links_map[j] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID; + } } } else { if (*dev->dev_ops->port_release == NULL) @@ -976,21 +978,45 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id, const uint8_t queues[], const uint8_t priorities[], uint16_t nb_links) { - struct rte_eventdev *dev; - uint8_t queues_list[RTE_EVENT_MAX_QUEUES_PER_DEV]; + return rte_event_port_profile_links_set(dev_id, port_id, queues, priorities, nb_links, 0); +} + +int +rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t queues[], + const uint8_t priorities[], uint16_t nb_links, uint8_t profile_id) +{ uint8_t priorities_list[RTE_EVENT_MAX_QUEUES_PER_DEV]; + uint8_t queues_list[RTE_EVENT_MAX_QUEUES_PER_DEV]; + struct rte_event_dev_info info; + struct rte_eventdev *dev; uint16_t *links_map; int i, diag; RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, EINVAL, 0); dev = &rte_eventdevs[dev_id]; + if (*dev->dev_ops->dev_infos_get == NULL) + return -ENOTSUP; + + (*dev->dev_ops->dev_infos_get)(dev, &info); + if (profile_id >= RTE_EVENT_MAX_PROFILES_PER_PORT || + profile_id >= info.max_profiles_per_port) { + RTE_EDEV_LOG_ERR("Invalid profile_id=%" PRIu8, profile_id); + return -EINVAL; + } + if (*dev->dev_ops->port_link == 
NULL) { RTE_EDEV_LOG_ERR("Function not supported\n"); rte_errno = ENOTSUP; return 0; } + if (profile_id && *dev->dev_ops->port_link_profile == NULL) { + RTE_EDEV_LOG_ERR("Function not supported\n"); + rte_errno = ENOTSUP; + return 0; + } + if (!is_valid_port(dev, port_id)) { RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id); rte_errno = EINVAL; @@ -1018,18 +1044,22 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id, return 0; } - diag = (*dev->dev_ops->port_link)(dev, dev->data->ports[port_id], - queues, priorities, nb_links); + if (profile_id) + diag = (*dev->dev_ops->port_link_profile)(dev, dev->data->ports[port_id], queues, + priorities, nb_links, profile_id); + else + diag = (*dev->dev_ops->port_link)(dev, dev->data->ports[port_id], queues, + priorities, nb_links); if (diag < 0) return diag; - links_map = dev->data->links_map; + links_map = dev->data->links_map[profile_id]; /* Point links_map to this port specific area */ links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV); for (i = 0; i < diag; i++) links_map[queues[i]] = (uint8_t)priorities[i]; - rte_eventdev_trace_port_link(dev_id, port_id, nb_links, diag); + rte_eventdev_trace_port_profile_links_set(dev_id, port_id, nb_links, profile_id, diag); return diag; } @@ -1037,27 +1067,51 @@ int rte_event_port_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[], uint16_t nb_unlinks) { - struct rte_eventdev *dev; + return rte_event_port_profile_unlink(dev_id, port_id, queues, nb_unlinks, 0); +} + +int +rte_event_port_profile_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[], + uint16_t nb_unlinks, uint8_t profile_id) +{ uint8_t all_queues[RTE_EVENT_MAX_QUEUES_PER_DEV]; - int i, diag, j; + struct rte_event_dev_info info; + struct rte_eventdev *dev; uint16_t *links_map; + int i, diag, j; RTE_EVENTDEV_VALID_DEVID_OR_ERRNO_RET(dev_id, EINVAL, 0); dev = &rte_eventdevs[dev_id]; + if (*dev->dev_ops->dev_infos_get == NULL) + return -ENOTSUP; + + (*dev->dev_ops->dev_infos_get)(dev, &info); + if (profile_id >= RTE_EVENT_MAX_PROFILES_PER_PORT || + profile_id >= info.max_profiles_per_port) { + RTE_EDEV_LOG_ERR("Invalid profile_id=%" PRIu8, profile_id); + return -EINVAL; + } + if (*dev->dev_ops->port_unlink == NULL) { RTE_EDEV_LOG_ERR("Function not supported"); rte_errno = ENOTSUP; return 0; } + if (profile_id && *dev->dev_ops->port_unlink_profile == NULL) { + RTE_EDEV_LOG_ERR("Function not supported"); + rte_errno = ENOTSUP; + return 0; + } + if (!is_valid_port(dev, port_id)) { RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id); rte_errno = EINVAL; return 0; } - links_map = dev->data->links_map; + links_map = dev->data->links_map[profile_id]; /* Point links_map to this port specific area */ links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV); @@ -1086,16 +1140,19 @@ rte_event_port_unlink(uint8_t dev_id, uint8_t port_id, return 0; } - diag = (*dev->dev_ops->port_unlink)(dev, dev->data->ports[port_id], - queues, nb_unlinks); - + if (profile_id) + diag = (*dev->dev_ops->port_unlink_profile)(dev, dev->data->ports[port_id], queues, + nb_unlinks, profile_id); + else + diag = (*dev->dev_ops->port_unlink)(dev, dev->data->ports[port_id], queues, + nb_unlinks); if (diag < 0) return diag; for (i = 0; i < diag; i++) links_map[queues[i]] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID; - rte_eventdev_trace_port_unlink(dev_id, port_id, nb_unlinks, diag); + rte_eventdev_trace_port_profile_unlink(dev_id, port_id, nb_unlinks, profile_id, diag); return diag; } @@ -1139,7 +1196,8 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id, return -EINVAL; 
} - links_map = dev->data->links_map; + /* Use the default profile_id. */ + links_map = dev->data->links_map[0]; /* Point links_map to this port specific area */ links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV); for (i = 0; i < dev->data->nb_queues; i++) { @@ -1155,6 +1213,49 @@ rte_event_port_links_get(uint8_t dev_id, uint8_t port_id, return count; } +int +rte_event_port_profile_links_get(uint8_t dev_id, uint8_t port_id, uint8_t queues[], + uint8_t priorities[], uint8_t profile_id) +{ + struct rte_event_dev_info info; + struct rte_eventdev *dev; + uint16_t *links_map; + int i, count = 0; + + RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + + dev = &rte_eventdevs[dev_id]; + if (*dev->dev_ops->dev_infos_get == NULL) + return -ENOTSUP; + + (*dev->dev_ops->dev_infos_get)(dev, &info); + if (profile_id >= RTE_EVENT_MAX_PROFILES_PER_PORT || + profile_id >= info.max_profiles_per_port) { + RTE_EDEV_LOG_ERR("Invalid profile_id=%" PRIu8, profile_id); + return -EINVAL; + } + + if (!is_valid_port(dev, port_id)) { + RTE_EDEV_LOG_ERR("Invalid port_id=%" PRIu8, port_id); + return -EINVAL; + } + + links_map = dev->data->links_map[profile_id]; + /* Point links_map to this port specific area */ + links_map += (port_id * RTE_EVENT_MAX_QUEUES_PER_DEV); + for (i = 0; i < dev->data->nb_queues; i++) { + if (links_map[i] != EVENT_QUEUE_SERVICE_PRIORITY_INVALID) { + queues[count] = i; + priorities[count] = (uint8_t)links_map[i]; + ++count; + } + } + + rte_eventdev_trace_port_profile_links_get(dev_id, port_id, profile_id, count); + + return count; +} + int rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns, uint64_t *timeout_ticks) @@ -1463,7 +1564,7 @@ eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data, { char mz_name[RTE_EVENTDEV_NAME_MAX_LEN]; const struct rte_memzone *mz; - int n; + int i, n; /* Generate memzone name */ n = snprintf(mz_name, sizeof(mz_name), "rte_eventdev_data_%u", dev_id); @@ -1483,11 +1584,10 @@ eventdev_data_alloc(uint8_t dev_id, struct rte_eventdev_data **data, *data = mz->addr; if (rte_eal_process_type() == RTE_PROC_PRIMARY) { memset(*data, 0, sizeof(struct rte_eventdev_data)); - for (n = 0; n < RTE_EVENT_MAX_PORTS_PER_DEV * - RTE_EVENT_MAX_QUEUES_PER_DEV; - n++) - (*data)->links_map[n] = - EVENT_QUEUE_SERVICE_PRIORITY_INVALID; + for (i = 0; i < RTE_EVENT_MAX_PROFILES_PER_PORT; i++) + for (n = 0; n < RTE_EVENT_MAX_PORTS_PER_DEV * RTE_EVENT_MAX_QUEUES_PER_DEV; + n++) + (*data)->links_map[i][n] = EVENT_QUEUE_SERVICE_PRIORITY_INVALID; } return 0; diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h index 41743f91b1..2ea98302b8 100644 --- a/lib/eventdev/rte_eventdev.h +++ b/lib/eventdev/rte_eventdev.h @@ -320,6 +320,12 @@ struct rte_event; * rte_event_queue_setup(). */ +#define RTE_EVENT_DEV_CAP_PROFILE_LINK (1ULL << 12) +/**< Event device is capable of supporting multiple link profiles per event port + * i.e., the value of `rte_event_dev_info::max_profiles_per_port` is greater + * than one. + */ + /* Event device priority levels */ #define RTE_EVENT_DEV_PRIORITY_HIGHEST 0 /**< Highest priority expressed across eventdev subsystem @@ -446,6 +452,10 @@ struct rte_event_dev_info { * device. These ports and queues are not accounted for in * max_event_ports or max_event_queues. */ + uint8_t max_profiles_per_port; + /**< Maximum number of event queue profiles per event port. + * A device that doesn't support multiple profiles will set this as 1. 
+ */ }; /** @@ -1580,6 +1590,10 @@ rte_event_dequeue_timeout_ticks(uint8_t dev_id, uint64_t ns, * latency of critical work by establishing the link with more event ports * at runtime. * + * When the value of ``rte_event_dev_info::max_profiles_per_port`` is greater + * than or equal to one, this function links the event queues to the default + * profile_id i.e. profile_id 0 of the event port. + * * @param dev_id * The identifier of the device. * @@ -1637,6 +1651,10 @@ rte_event_port_link(uint8_t dev_id, uint8_t port_id, * Event queue(s) to event port unlink establishment can be changed at runtime * without re-configuring the device. * + * When the value of ``rte_event_dev_info::max_profiles_per_port`` is greater + * than or equal to one, this function unlinks the event queues from the default + * profile identifier i.e. profile 0 of the event port. + * * @see rte_event_port_unlinks_in_progress() to poll for completed unlinks. * * @param dev_id @@ -1670,6 +1688,136 @@ int rte_event_port_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[], uint16_t nb_unlinks); +/** + * Link multiple source event queues supplied in *queues* to the destination + * event port designated by its *port_id* with associated profile identifier + * supplied in *profile_id* with service priorities supplied in *priorities* + * on the event device designated by its *dev_id*. + * + * If *profile_id* is set to 0 then, the links created by the call `rte_event_port_link` + * will be overwritten. + * + * Event ports by default use profile_id 0 unless it is changed using the + * call ``rte_event_port_profile_switch()``. + * + * The link establishment shall enable the event port *port_id* from + * receiving events from the specified event queue(s) supplied in *queues* + * + * An event queue may link to one or more event ports. + * The number of links can be established from an event queue to event port is + * implementation defined. + * + * Event queue(s) to event port link establishment can be changed at runtime + * without re-configuring the device to support scaling and to reduce the + * latency of critical work by establishing the link with more event ports + * at runtime. + * + * @param dev_id + * The identifier of the device. + * + * @param port_id + * Event port identifier to select the destination port to link. + * + * @param queues + * Points to an array of *nb_links* event queues to be linked + * to the event port. + * NULL value is allowed, in which case this function links all the configured + * event queues *nb_event_queues* which previously supplied to + * rte_event_dev_configure() to the event port *port_id* + * + * @param priorities + * Points to an array of *nb_links* service priorities associated with each + * event queue link to event port. + * The priority defines the event port's servicing priority for + * event queue, which may be ignored by an implementation. + * The requested priority should in the range of + * [RTE_EVENT_DEV_PRIORITY_HIGHEST, RTE_EVENT_DEV_PRIORITY_LOWEST]. + * The implementation shall normalize the requested priority to + * implementation supported priority value. + * NULL value is allowed, in which case this function links the event queues + * with RTE_EVENT_DEV_PRIORITY_NORMAL servicing priority + * + * @param nb_links + * The number of links to establish. This parameter is ignored if queues is + * NULL. + * + * @param profile_id + * The profile identifier associated with the links between event queues and + * event port. 
Should be less than the max capability reported by + * ``rte_event_dev_info::max_profiles_per_port`` + * + * @return + * The number of links actually established. The return value can be less than + * the value of the *nb_links* parameter when the implementation has the + * limitation on specific queue to port link establishment or if invalid + * parameters are specified in *queues* + * If the return value is less than *nb_links*, the remaining links at the end + * of link[] are not established, and the caller has to take care of them. + * If return value is less than *nb_links* then implementation shall update the + * rte_errno accordingly, Possible rte_errno values are + * (EDQUOT) Quota exceeded(Application tried to link the queue configured with + * RTE_EVENT_QUEUE_CFG_SINGLE_LINK to more than one event ports) + * (EINVAL) Invalid parameter + * + */ +__rte_experimental +int +rte_event_port_profile_links_set(uint8_t dev_id, uint8_t port_id, const uint8_t queues[], + const uint8_t priorities[], uint16_t nb_links, uint8_t profile_id); + +/** + * Unlink multiple source event queues supplied in *queues* that belong to profile + * designated by *profile_id* from the destination event port designated by its + * *port_id* on the event device designated by its *dev_id*. + * + * If *profile_id* is set to 0 i.e., the default profile then, then this function + * will act as ``rte_event_port_unlink``. + * + * The unlink call issues an async request to disable the event port *port_id* + * from receiving events from the specified event queue *queue_id*. + * Event queue(s) to event port unlink establishment can be changed at runtime + * without re-configuring the device. + * + * @see rte_event_port_unlinks_in_progress() to poll for completed unlinks. + * + * @param dev_id + * The identifier of the device. + * + * @param port_id + * Event port identifier to select the destination port to unlink. + * + * @param queues + * Points to an array of *nb_unlinks* event queues to be unlinked + * from the event port. + * NULL value is allowed, in which case this function unlinks all the + * event queue(s) from the event port *port_id*. + * + * @param nb_unlinks + * The number of unlinks to establish. This parameter is ignored if queues is + * NULL. + * + * @param profile_id + * The profile identifier associated with the links between event queues and + * event port. Should be less than the max capability reported by + * ``rte_event_dev_info::max_profiles_per_port`` + * + * @return + * The number of unlinks successfully requested. The return value can be less + * than the value of the *nb_unlinks* parameter when the implementation has the + * limitation on specific queue to port unlink establishment or + * if invalid parameters are specified. + * If the return value is less than *nb_unlinks*, the remaining queues at the + * end of queues[] are not unlinked, and the caller has to take care of them. + * If return value is less than *nb_unlinks* then implementation shall update + * the rte_errno accordingly, Possible rte_errno values are + * (EINVAL) Invalid parameter + * + */ +__rte_experimental +int +rte_event_port_profile_unlink(uint8_t dev_id, uint8_t port_id, uint8_t queues[], + uint16_t nb_unlinks, uint8_t profile_id); + /** * Returns the number of unlinks in progress. 
* @@ -1724,6 +1872,42 @@ int rte_event_port_links_get(uint8_t dev_id, uint8_t port_id, uint8_t queues[], uint8_t priorities[]); +/** + * Retrieve the list of source event queues and its service priority + * associated to a *profile_id* and linked to the destination event port + * designated by its *port_id* on the event device designated by its *dev_id*. + * + * @param dev_id + * The identifier of the device. + * + * @param port_id + * Event port identifier. + * + * @param[out] queues + * Points to an array of *queues* for output. + * The caller has to allocate *RTE_EVENT_MAX_QUEUES_PER_DEV* bytes to + * store the event queue(s) linked with event port *port_id* + * + * @param[out] priorities + * Points to an array of *priorities* for output. + * The caller has to allocate *RTE_EVENT_MAX_QUEUES_PER_DEV* bytes to + * store the service priority associated with each event queue linked + * + * @param profile_id + * The profile identifier associated with the links between event queues and + * event port. Should be less than the max capability reported by + * ``rte_event_dev_info::max_profiles_per_port`` + * + * @return + * The number of links established on the event port designated by its + * *port_id*. + * - <0 on failure. + */ +__rte_experimental +int +rte_event_port_profile_links_get(uint8_t dev_id, uint8_t port_id, uint8_t queues[], + uint8_t priorities[], uint8_t profile_id); + /** * Retrieve the service ID of the event dev. If the adapter doesn't use * a rte_service function, this function returns -ESRCH. @@ -2309,6 +2493,53 @@ rte_event_maintain(uint8_t dev_id, uint8_t port_id, int op) return 0; } +/** + * Change the active profile on an event port. + * + * This function is used to change the current active profile on an event port + * when multiple link profiles are configured on an event port through the + * function call ``rte_event_port_profile_links_set``. + * + * On the subsequent ``rte_event_dequeue_burst`` call, only the event queues + * that were associated with the newly active profile will participate in + * scheduling. + * + * @param dev_id + * The identifier of the device. + * @param port_id + * The identifier of the event port. + * @param profile_id + * The identifier of the profile. + * @return + * - 0 on success. + * - -EINVAL if *dev_id*, *port_id*, or *profile_id* is invalid. + */ +__rte_experimental +static inline uint8_t +rte_event_port_profile_switch(uint8_t dev_id, uint8_t port_id, uint8_t profile_id) +{ + const struct rte_event_fp_ops *fp_ops; + void *port; + + fp_ops = &rte_event_fp_ops[dev_id]; + port = fp_ops->data[port_id]; + +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG + if (dev_id >= RTE_EVENT_MAX_DEVS || + port_id >= RTE_EVENT_MAX_PORTS_PER_DEV) + return -EINVAL; + + if (port == NULL) + return -EINVAL; + + if (profile_id >= RTE_EVENT_MAX_PROFILES_PER_PORT) + return -EINVAL; +#endif + rte_eventdev_trace_port_profile_switch(dev_id, port_id, profile_id); + + return fp_ops->profile_switch(port, profile_id); +} + #ifdef __cplusplus } #endif diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h index 83e8736c71..5b405518d1 100644 --- a/lib/eventdev/rte_eventdev_core.h +++ b/lib/eventdev/rte_eventdev_core.h @@ -46,6 +46,9 @@ typedef uint16_t (*event_dma_adapter_enqueue_t)(void *port, struct rte_event ev[ uint16_t nb_events); /**< @internal Enqueue burst of events on DMA adapter */ +typedef int (*event_profile_switch_t)(void *port, uint8_t profile); +/**< @internal Switch active link profile on the event port. 
*/ + struct rte_event_fp_ops { void **data; /**< points to array of internal port data pointers */ @@ -71,6 +74,8 @@ struct rte_event_fp_ops { /**< PMD Crypto adapter enqueue function. */ event_dma_adapter_enqueue_t dma_enqueue; /**< PMD DMA adapter enqueue function. */ + event_profile_switch_t profile_switch; + /**< PMD Event switch profile function. */ uintptr_t reserved[4]; } __rte_cache_aligned; diff --git a/lib/eventdev/rte_eventdev_trace_fp.h b/lib/eventdev/rte_eventdev_trace_fp.h index af2172d2a5..04d510ad00 100644 --- a/lib/eventdev/rte_eventdev_trace_fp.h +++ b/lib/eventdev/rte_eventdev_trace_fp.h @@ -46,6 +46,14 @@ RTE_TRACE_POINT_FP( rte_trace_point_emit_int(op); ) +RTE_TRACE_POINT_FP( + rte_eventdev_trace_port_profile_switch, + RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, uint8_t profile), + rte_trace_point_emit_u8(dev_id); + rte_trace_point_emit_u8(port_id); + rte_trace_point_emit_u8(profile); +) + RTE_TRACE_POINT_FP( rte_eventdev_trace_eth_tx_adapter_enqueue, RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table, diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map index b81eb2919c..59ee8b86cf 100644 --- a/lib/eventdev/version.map +++ b/lib/eventdev/version.map @@ -150,6 +150,10 @@ EXPERIMENTAL { rte_event_dma_adapter_vchan_add; rte_event_dma_adapter_vchan_del; rte_event_eth_rx_adapter_create_ext_with_params; + rte_event_port_profile_links_set; + rte_event_port_profile_unlink; + rte_event_port_profile_links_get; + __rte_eventdev_trace_port_profile_switch; }; INTERNAL { From patchwork Tue Oct 3 09:47:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 132274 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id B46FC426AE; Tue, 3 Oct 2023 11:47:50 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id AA86F402DA; Tue, 3 Oct 2023 11:47:45 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id A0368402DA for ; Tue, 3 Oct 2023 11:47:43 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 3932NHwJ028374; Tue, 3 Oct 2023 02:47:43 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=URfCZPYCiMLrkg7iQre2pD9erYUeqZul5Vf3nECQHUY=; b=c1WWamdDEXIZDdMrXN7mPGz9ssiSzeLKNkOBVwdZigs/gPah1yyGp5jcDKVb2sFuma3c WIaTfdj8FVaCDKjclFa25cc26bxqYYMj0yku7AhI5q3aBiq/Wo9zge4cTrbcv5b3dr06 VgCt517N2ka3+dS71iS5oF4fs6qVlCs3GFDppZyICaeUZG2IqfB/YO7XRVEkDnQtVkc8 69xkXEQjb03zY1T3bm/UHyseB7a49gvLI37RYFIDpIRhfKv6JXCtT0AOcpNzmG7G3HEG jp/SWPpKVTPE/Mn9TlWG7cc7cFa33cYPa8Ohk0wtpWQuo4d6sInU0VbNytlRZrfCmvFn FA== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3tek6mypqq-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Tue, 03 Oct 2023 02:47:42 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Tue, 3 Oct 2023 02:47:40 -0700 
Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Tue, 3 Oct 2023 02:47:40 -0700 Received: from MININT-80QBFE8.corp.innovium.com (MININT-80QBFE8.marvell.com [10.28.164.106]) by maili.marvell.com (Postfix) with ESMTP id DDA8E3F7043; Tue, 3 Oct 2023 02:47:33 -0700 (PDT) From: To: , , , , , , , , , , , , , CC: Subject: [PATCH v6 2/3] event/cnxk: implement event link profiles Date: Tue, 3 Oct 2023 15:17:20 +0530 Message-ID: <20231003094721.5115-3-pbhagavatula@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20231003094721.5115-1-pbhagavatula@marvell.com> References: <20231003075109.4309-1-pbhagavatula@marvell.com> <20231003094721.5115-1-pbhagavatula@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: bMqmNkVJWTRipHfSq_ufheN9iajhKtGM X-Proofpoint-GUID: bMqmNkVJWTRipHfSq_ufheN9iajhKtGM X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.980,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-10-03_06,2023-10-02_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Pavan Nikhilesh Implement event link profiles support on CN10K and CN9K. Both the platforms support up to 2 link profiles. Signed-off-by: Pavan Nikhilesh --- doc/guides/eventdevs/cnxk.rst | 1 + doc/guides/eventdevs/features/cnxk.ini | 3 +- doc/guides/rel_notes/release_23_11.rst | 2 + drivers/common/cnxk/roc_nix_inl_dev.c | 4 +- drivers/common/cnxk/roc_sso.c | 18 +++---- drivers/common/cnxk/roc_sso.h | 8 +-- drivers/common/cnxk/roc_sso_priv.h | 4 +- drivers/event/cnxk/cn10k_eventdev.c | 45 +++++++++++----- drivers/event/cnxk/cn10k_worker.c | 11 ++++ drivers/event/cnxk/cn10k_worker.h | 1 + drivers/event/cnxk/cn9k_eventdev.c | 74 ++++++++++++++++---------- drivers/event/cnxk/cn9k_worker.c | 22 ++++++++ drivers/event/cnxk/cn9k_worker.h | 2 + drivers/event/cnxk/cnxk_eventdev.c | 37 +++++++------ drivers/event/cnxk/cnxk_eventdev.h | 10 ++-- 15 files changed, 161 insertions(+), 81 deletions(-) diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst index 1a59233282..cccb8a0304 100644 --- a/doc/guides/eventdevs/cnxk.rst +++ b/doc/guides/eventdevs/cnxk.rst @@ -48,6 +48,7 @@ Features of the OCTEON cnxk SSO PMD are: - HW managed event vectorization on CN10K for packets enqueued from ethdev to eventdev configurable per each Rx queue in Rx adapter. - Event vector transmission via Tx adapter. +- Up to 2 event link profiles. Prerequisites and Compilation procedure --------------------------------------- diff --git a/doc/guides/eventdevs/features/cnxk.ini b/doc/guides/eventdevs/features/cnxk.ini index bee69bf8f4..5d353e3670 100644 --- a/doc/guides/eventdevs/features/cnxk.ini +++ b/doc/guides/eventdevs/features/cnxk.ini @@ -12,7 +12,8 @@ runtime_port_link = Y multiple_queue_port = Y carry_flow_id = Y maintenance_free = Y -runtime_queue_attr = y +runtime_queue_attr = Y +profile_links = Y [Eth Rx adapter Features] internal_port = Y diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst index fe6656bed2..66c4ddf37c 100644 --- a/doc/guides/rel_notes/release_23_11.rst +++ b/doc/guides/rel_notes/release_23_11.rst @@ -105,6 +105,8 @@ New Features * Added support for ``remaining_ticks_get`` timer adapter PMD callback to get the remaining ticks to expire for a given event timer. 
+ * Added link profiles support for Marvell CNXK event device driver, + up to two link profiles are supported per event port. Removed Items diff --git a/drivers/common/cnxk/roc_nix_inl_dev.c b/drivers/common/cnxk/roc_nix_inl_dev.c index d76158e30d..690d47c045 100644 --- a/drivers/common/cnxk/roc_nix_inl_dev.c +++ b/drivers/common/cnxk/roc_nix_inl_dev.c @@ -285,7 +285,7 @@ nix_inl_sso_setup(struct nix_inl_dev *inl_dev) } /* Setup hwgrp->hws link */ - sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, true); + sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, 0, true); /* Enable HWGRP */ plt_write64(0x1, inl_dev->sso_base + SSO_LF_GGRP_QCTL); @@ -315,7 +315,7 @@ nix_inl_sso_release(struct nix_inl_dev *inl_dev) nix_inl_sso_unregister_irqs(inl_dev); /* Unlink hws */ - sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, false); + sso_hws_link_modify(0, inl_dev->ssow_base, NULL, hwgrp, 1, 0, false); /* Release XAQ aura */ sso_hwgrp_release_xaq(&inl_dev->dev, 1); diff --git a/drivers/common/cnxk/roc_sso.c b/drivers/common/cnxk/roc_sso.c index c37da685da..748d287bad 100644 --- a/drivers/common/cnxk/roc_sso.c +++ b/drivers/common/cnxk/roc_sso.c @@ -186,8 +186,8 @@ sso_rsrc_get(struct roc_sso *roc_sso) } void -sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp, - uint16_t hwgrp[], uint16_t n, uint16_t enable) +sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp, uint16_t hwgrp[], + uint16_t n, uint8_t set, uint16_t enable) { uint64_t reg; int i, j, k; @@ -204,7 +204,7 @@ sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp, k = n % 4; k = k ? k : 4; for (j = 0; j < k; j++) { - mask[j] = hwgrp[i + j] | enable << 14; + mask[j] = hwgrp[i + j] | (uint32_t)set << 12 | enable << 14; if (bmp) { enable ? 
plt_bitmap_set(bmp, hwgrp[i + j]) : plt_bitmap_clear(bmp, hwgrp[i + j]); @@ -290,8 +290,8 @@ roc_sso_ns_to_gw(struct roc_sso *roc_sso, uint64_t ns) } int -roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], - uint16_t nb_hwgrp) +roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], uint16_t nb_hwgrp, + uint8_t set) { struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; struct sso *sso; @@ -299,14 +299,14 @@ roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], sso = roc_sso_to_sso_priv(roc_sso); base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | hws << 12); - sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, 1); + sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, set, 1); return nb_hwgrp; } int -roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], - uint16_t nb_hwgrp) +roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], uint16_t nb_hwgrp, + uint8_t set) { struct dev *dev = &roc_sso_to_sso_priv(roc_sso)->dev; struct sso *sso; @@ -314,7 +314,7 @@ roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], sso = roc_sso_to_sso_priv(roc_sso); base = dev->bar2 + (RVU_BLOCK_ADDR_SSOW << 20 | hws << 12); - sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, 0); + sso_hws_link_modify(hws, base, sso->link_map[hws], hwgrp, nb_hwgrp, set, 0); return nb_hwgrp; } diff --git a/drivers/common/cnxk/roc_sso.h b/drivers/common/cnxk/roc_sso.h index 8ee62afb9a..64f14b8119 100644 --- a/drivers/common/cnxk/roc_sso.h +++ b/drivers/common/cnxk/roc_sso.h @@ -84,10 +84,10 @@ int __roc_api roc_sso_hwgrp_set_priority(struct roc_sso *roc_sso, uint16_t hwgrp, uint8_t weight, uint8_t affinity, uint8_t priority); uint64_t __roc_api roc_sso_ns_to_gw(struct roc_sso *roc_sso, uint64_t ns); -int __roc_api roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, - uint16_t hwgrp[], uint16_t nb_hwgrp); -int __roc_api roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, - uint16_t hwgrp[], uint16_t nb_hwgrp); +int __roc_api roc_sso_hws_link(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], + uint16_t nb_hwgrp, uint8_t set); +int __roc_api roc_sso_hws_unlink(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp[], + uint16_t nb_hwgrp, uint8_t set); int __roc_api roc_sso_hwgrp_hws_link_status(struct roc_sso *roc_sso, uint8_t hws, uint16_t hwgrp); uintptr_t __roc_api roc_sso_hws_base_get(struct roc_sso *roc_sso, uint8_t hws); diff --git a/drivers/common/cnxk/roc_sso_priv.h b/drivers/common/cnxk/roc_sso_priv.h index 09729d4f62..21c59c57e6 100644 --- a/drivers/common/cnxk/roc_sso_priv.h +++ b/drivers/common/cnxk/roc_sso_priv.h @@ -44,8 +44,8 @@ roc_sso_to_sso_priv(struct roc_sso *roc_sso) int sso_lf_alloc(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf, void **rsp); int sso_lf_free(struct dev *dev, enum sso_lf_type lf_type, uint16_t nb_lf); -void sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp, - uint16_t hwgrp[], uint16_t n, uint16_t enable); +void sso_hws_link_modify(uint8_t hws, uintptr_t base, struct plt_bitmap *bmp, uint16_t hwgrp[], + uint16_t n, uint8_t set, uint16_t enable); int sso_hwgrp_alloc_xaq(struct dev *dev, uint32_t npa_aura_id, uint16_t hwgrps); int sso_hwgrp_release_xaq(struct dev *dev, uint16_t hwgrps); int sso_hwgrp_init_xaq_aura(struct dev *dev, struct roc_sso_xaq_data *xaq, diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c index cf186b9af4..bb0c910553 100644 --- 
a/drivers/event/cnxk/cn10k_eventdev.c +++ b/drivers/event/cnxk/cn10k_eventdev.c @@ -66,21 +66,21 @@ cn10k_sso_init_hws_mem(void *arg, uint8_t port_id) } static int -cn10k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link) +cn10k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile) { struct cnxk_sso_evdev *dev = arg; struct cn10k_sso_hws *ws = port; - return roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link); + return roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link, profile); } static int -cn10k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link) +cn10k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile) { struct cnxk_sso_evdev *dev = arg; struct cn10k_sso_hws *ws = port; - return roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link); + return roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link, profile); } static void @@ -107,10 +107,11 @@ cn10k_sso_hws_release(void *arg, void *hws) { struct cnxk_sso_evdev *dev = arg; struct cn10k_sso_hws *ws = hws; - uint16_t i; + uint16_t i, j; - for (i = 0; i < dev->nb_event_queues; i++) - roc_sso_hws_unlink(&dev->sso, ws->hws_id, &i, 1); + for (i = 0; i < CNXK_SSO_MAX_PROFILES; i++) + for (j = 0; j < dev->nb_event_queues; j++) + roc_sso_hws_unlink(&dev->sso, ws->hws_id, &j, 1, i); memset(ws, 0, sizeof(*ws)); } @@ -482,6 +483,7 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev) CN10K_SET_EVDEV_ENQ_OP(dev, event_dev->txa_enqueue, sso_hws_tx_adptr_enq); event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue; + event_dev->profile_switch = cn10k_sso_hws_profile_switch; #else RTE_SET_USED(event_dev); #endif @@ -633,9 +635,8 @@ cn10k_sso_port_quiesce(struct rte_eventdev *event_dev, void *port, } static int -cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port, - const uint8_t queues[], const uint8_t priorities[], - uint16_t nb_links) +cn10k_sso_port_link_profile(struct rte_eventdev *event_dev, void *port, const uint8_t queues[], + const uint8_t priorities[], uint16_t nb_links, uint8_t profile) { struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev); uint16_t hwgrp_ids[nb_links]; @@ -644,14 +645,14 @@ cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port, RTE_SET_USED(priorities); for (link = 0; link < nb_links; link++) hwgrp_ids[link] = queues[link]; - nb_links = cn10k_sso_hws_link(dev, port, hwgrp_ids, nb_links); + nb_links = cn10k_sso_hws_link(dev, port, hwgrp_ids, nb_links, profile); return (int)nb_links; } static int -cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port, - uint8_t queues[], uint16_t nb_unlinks) +cn10k_sso_port_unlink_profile(struct rte_eventdev *event_dev, void *port, uint8_t queues[], + uint16_t nb_unlinks, uint8_t profile) { struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev); uint16_t hwgrp_ids[nb_unlinks]; @@ -659,11 +660,25 @@ cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port, for (unlink = 0; unlink < nb_unlinks; unlink++) hwgrp_ids[unlink] = queues[unlink]; - nb_unlinks = cn10k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks); + nb_unlinks = cn10k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks, profile); return (int)nb_unlinks; } +static int +cn10k_sso_port_link(struct rte_eventdev *event_dev, void *port, const uint8_t queues[], + const uint8_t priorities[], uint16_t nb_links) +{ + return cn10k_sso_port_link_profile(event_dev, port, queues, priorities, nb_links, 0); +} + +static int +cn10k_sso_port_unlink(struct rte_eventdev *event_dev, void *port, 
uint8_t queues[], + uint16_t nb_unlinks) +{ + return cn10k_sso_port_unlink_profile(event_dev, port, queues, nb_unlinks, 0); +} + static void cn10k_sso_configure_queue_stash(struct rte_eventdev *event_dev) { @@ -1020,6 +1035,8 @@ static struct eventdev_ops cn10k_sso_dev_ops = { .port_quiesce = cn10k_sso_port_quiesce, .port_link = cn10k_sso_port_link, .port_unlink = cn10k_sso_port_unlink, + .port_link_profile = cn10k_sso_port_link_profile, + .port_unlink_profile = cn10k_sso_port_unlink_profile, .timeout_ticks = cnxk_sso_timeout_ticks, .eth_rx_adapter_caps_get = cn10k_sso_rx_adapter_caps_get, diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c index 9b5bf90159..d59769717e 100644 --- a/drivers/event/cnxk/cn10k_worker.c +++ b/drivers/event/cnxk/cn10k_worker.c @@ -431,3 +431,14 @@ cn10k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[], return 1; } + +int __rte_hot +cn10k_sso_hws_profile_switch(void *port, uint8_t profile) +{ + struct cn10k_sso_hws *ws = port; + + ws->gw_wdata &= ~(0xFFUL); + ws->gw_wdata |= (profile + 1); + + return 0; +} diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h index e71ab3c523..26fecf21fb 100644 --- a/drivers/event/cnxk/cn10k_worker.h +++ b/drivers/event/cnxk/cn10k_worker.h @@ -329,6 +329,7 @@ uint16_t __rte_hot cn10k_sso_hws_enq_new_burst(void *port, uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[], uint16_t nb_events); +int __rte_hot cn10k_sso_hws_profile_switch(void *port, uint8_t profile); #define R(name, flags) \ uint16_t __rte_hot cn10k_sso_hws_deq_##name( \ diff --git a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c index fe6f5d9f86..9fb9ca0d63 100644 --- a/drivers/event/cnxk/cn9k_eventdev.c +++ b/drivers/event/cnxk/cn9k_eventdev.c @@ -15,7 +15,7 @@ enq_op = enq_ops[dev->tx_offloads & (NIX_TX_OFFLOAD_MAX - 1)] static int -cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link) +cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile) { struct cnxk_sso_evdev *dev = arg; struct cn9k_sso_hws_dual *dws; @@ -24,22 +24,20 @@ cn9k_sso_hws_link(void *arg, void *port, uint16_t *map, uint16_t nb_link) if (dev->dual_ws) { dws = port; - rc = roc_sso_hws_link(&dev->sso, - CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map, - nb_link); - rc |= roc_sso_hws_link(&dev->sso, - CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), - map, nb_link); + rc = roc_sso_hws_link(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map, nb_link, + profile); + rc |= roc_sso_hws_link(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), map, + nb_link, profile); } else { ws = port; - rc = roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link); + rc = roc_sso_hws_link(&dev->sso, ws->hws_id, map, nb_link, profile); } return rc; } static int -cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link) +cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link, uint8_t profile) { struct cnxk_sso_evdev *dev = arg; struct cn9k_sso_hws_dual *dws; @@ -48,15 +46,13 @@ cn9k_sso_hws_unlink(void *arg, void *port, uint16_t *map, uint16_t nb_link) if (dev->dual_ws) { dws = port; - rc = roc_sso_hws_unlink(&dev->sso, - CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), - map, nb_link); - rc |= roc_sso_hws_unlink(&dev->sso, - CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), - map, nb_link); + rc = roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), map, + nb_link, profile); + rc |= roc_sso_hws_unlink(&dev->sso, 
CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), map, + nb_link, profile); } else { ws = port; - rc = roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link); + rc = roc_sso_hws_unlink(&dev->sso, ws->hws_id, map, nb_link, profile); } return rc; @@ -97,21 +93,24 @@ cn9k_sso_hws_release(void *arg, void *hws) struct cnxk_sso_evdev *dev = arg; struct cn9k_sso_hws_dual *dws; struct cn9k_sso_hws *ws; - uint16_t i; + uint16_t i, k; if (dev->dual_ws) { dws = hws; for (i = 0; i < dev->nb_event_queues; i++) { - roc_sso_hws_unlink(&dev->sso, - CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), &i, 1); - roc_sso_hws_unlink(&dev->sso, - CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), &i, 1); + for (k = 0; k < CNXK_SSO_MAX_PROFILES; k++) { + roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 0), + &i, 1, k); + roc_sso_hws_unlink(&dev->sso, CN9K_DUAL_WS_PAIR_ID(dws->hws_id, 1), + &i, 1, k); + } } memset(dws, 0, sizeof(*dws)); } else { ws = hws; for (i = 0; i < dev->nb_event_queues; i++) - roc_sso_hws_unlink(&dev->sso, ws->hws_id, &i, 1); + for (k = 0; k < CNXK_SSO_MAX_PROFILES; k++) + roc_sso_hws_unlink(&dev->sso, ws->hws_id, &i, 1, k); memset(ws, 0, sizeof(*ws)); } } @@ -438,6 +437,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) event_dev->enqueue_burst = cn9k_sso_hws_enq_burst; event_dev->enqueue_new_burst = cn9k_sso_hws_enq_new_burst; event_dev->enqueue_forward_burst = cn9k_sso_hws_enq_fwd_burst; + event_dev->profile_switch = cn9k_sso_hws_profile_switch; if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) { CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, sso_hws_deq_seg); CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue_burst, @@ -475,6 +475,7 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) event_dev->enqueue_forward_burst = cn9k_sso_hws_dual_enq_fwd_burst; event_dev->ca_enqueue = cn9k_sso_hws_dual_ca_enq; + event_dev->profile_switch = cn9k_sso_hws_dual_profile_switch; if (dev->rx_offloads & NIX_RX_MULTI_SEG_F) { CN9K_SET_EVDEV_DEQ_OP(dev, event_dev->dequeue, @@ -708,9 +709,8 @@ cn9k_sso_port_quiesce(struct rte_eventdev *event_dev, void *port, } static int -cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port, - const uint8_t queues[], const uint8_t priorities[], - uint16_t nb_links) +cn9k_sso_port_link_profile(struct rte_eventdev *event_dev, void *port, const uint8_t queues[], + const uint8_t priorities[], uint16_t nb_links, uint8_t profile) { struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev); uint16_t hwgrp_ids[nb_links]; @@ -719,14 +719,14 @@ cn9k_sso_port_link(struct rte_eventdev *event_dev, void *port, RTE_SET_USED(priorities); for (link = 0; link < nb_links; link++) hwgrp_ids[link] = queues[link]; - nb_links = cn9k_sso_hws_link(dev, port, hwgrp_ids, nb_links); + nb_links = cn9k_sso_hws_link(dev, port, hwgrp_ids, nb_links, profile); return (int)nb_links; } static int -cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port, - uint8_t queues[], uint16_t nb_unlinks) +cn9k_sso_port_unlink_profile(struct rte_eventdev *event_dev, void *port, uint8_t queues[], + uint16_t nb_unlinks, uint8_t profile) { struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev); uint16_t hwgrp_ids[nb_unlinks]; @@ -734,11 +734,25 @@ cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port, for (unlink = 0; unlink < nb_unlinks; unlink++) hwgrp_ids[unlink] = queues[unlink]; - nb_unlinks = cn9k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks); + nb_unlinks = cn9k_sso_hws_unlink(dev, port, hwgrp_ids, nb_unlinks, profile); return (int)nb_unlinks; } +static int +cn9k_sso_port_link(struct rte_eventdev *event_dev, 
void *port, const uint8_t queues[], + const uint8_t priorities[], uint16_t nb_links) +{ + return cn9k_sso_port_link_profile(event_dev, port, queues, priorities, nb_links, 0); +} + +static int +cn9k_sso_port_unlink(struct rte_eventdev *event_dev, void *port, uint8_t queues[], + uint16_t nb_unlinks) +{ + return cn9k_sso_port_unlink_profile(event_dev, port, queues, nb_unlinks, 0); +} + static int cn9k_sso_start(struct rte_eventdev *event_dev) { @@ -1019,6 +1033,8 @@ static struct eventdev_ops cn9k_sso_dev_ops = { .port_quiesce = cn9k_sso_port_quiesce, .port_link = cn9k_sso_port_link, .port_unlink = cn9k_sso_port_unlink, + .port_link_profile = cn9k_sso_port_link_profile, + .port_unlink_profile = cn9k_sso_port_unlink_profile, .timeout_ticks = cnxk_sso_timeout_ticks, .eth_rx_adapter_caps_get = cn9k_sso_rx_adapter_caps_get, diff --git a/drivers/event/cnxk/cn9k_worker.c b/drivers/event/cnxk/cn9k_worker.c index abbbfffd85..a9ac49a5a7 100644 --- a/drivers/event/cnxk/cn9k_worker.c +++ b/drivers/event/cnxk/cn9k_worker.c @@ -66,6 +66,17 @@ cn9k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[], return 1; } +int __rte_hot +cn9k_sso_hws_profile_switch(void *port, uint8_t profile) +{ + struct cn9k_sso_hws *ws = port; + + ws->gw_wdata &= ~(0xFFUL); + ws->gw_wdata |= (profile + 1); + + return 0; +} + /* Dual ws ops. */ uint16_t __rte_hot @@ -149,3 +160,14 @@ cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events) return cn9k_cpt_crypto_adapter_enqueue(dws->base[!dws->vws], ev->event_ptr); } + +int __rte_hot +cn9k_sso_hws_dual_profile_switch(void *port, uint8_t profile) +{ + struct cn9k_sso_hws_dual *dws = port; + + dws->gw_wdata &= ~(0xFFUL); + dws->gw_wdata |= (profile + 1); + + return 0; +} diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h index ee659e80d6..6936b7ad04 100644 --- a/drivers/event/cnxk/cn9k_worker.h +++ b/drivers/event/cnxk/cn9k_worker.h @@ -366,6 +366,7 @@ uint16_t __rte_hot cn9k_sso_hws_enq_new_burst(void *port, uint16_t __rte_hot cn9k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[], uint16_t nb_events); +int __rte_hot cn9k_sso_hws_profile_switch(void *port, uint8_t profile); uint16_t __rte_hot cn9k_sso_hws_dual_enq(void *port, const struct rte_event *ev); @@ -382,6 +383,7 @@ uint16_t __rte_hot cn9k_sso_hws_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events); uint16_t __rte_hot cn9k_sso_hws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events); +int __rte_hot cn9k_sso_hws_dual_profile_switch(void *port, uint8_t profile); #define R(name, flags) \ uint16_t __rte_hot cn9k_sso_hws_deq_##name( \ diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c index e8ea7e0efb..0c61f4c20e 100644 --- a/drivers/event/cnxk/cnxk_eventdev.c +++ b/drivers/event/cnxk/cnxk_eventdev.c @@ -30,7 +30,9 @@ cnxk_sso_info_get(struct cnxk_sso_evdev *dev, RTE_EVENT_DEV_CAP_NONSEQ_MODE | RTE_EVENT_DEV_CAP_CARRY_FLOW_ID | RTE_EVENT_DEV_CAP_MAINTENANCE_FREE | - RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR; + RTE_EVENT_DEV_CAP_RUNTIME_QUEUE_ATTR | + RTE_EVENT_DEV_CAP_PROFILE_LINK; + dev_info->max_profiles_per_port = CNXK_SSO_MAX_PROFILES; } int @@ -128,23 +130,25 @@ cnxk_sso_restore_links(const struct rte_eventdev *event_dev, { struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev); uint16_t *links_map, hwgrp[CNXK_SSO_MAX_HWGRP]; - int i, j; + int i, j, k; for (i = 0; i < dev->nb_event_ports; i++) { - uint16_t nb_hwgrp = 0; - - links_map = event_dev->data->links_map[0]; - /* Point links_map to this 
port specific area */ - links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV); + for (k = 0; k < CNXK_SSO_MAX_PROFILES; k++) { + uint16_t nb_hwgrp = 0; + + links_map = event_dev->data->links_map[k]; + /* Point links_map to this port specific area */ + links_map += (i * RTE_EVENT_MAX_QUEUES_PER_DEV); + + for (j = 0; j < dev->nb_event_queues; j++) { + if (links_map[j] == 0xdead) + continue; + hwgrp[nb_hwgrp] = j; + nb_hwgrp++; + } - for (j = 0; j < dev->nb_event_queues; j++) { - if (links_map[j] == 0xdead) - continue; - hwgrp[nb_hwgrp] = j; - nb_hwgrp++; + link_fn(dev, event_dev->data->ports[i], hwgrp, nb_hwgrp, k); } - - link_fn(dev, event_dev->data->ports[i], hwgrp, nb_hwgrp); } } @@ -435,7 +439,7 @@ cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn) { struct cnxk_sso_evdev *dev = cnxk_sso_pmd_priv(event_dev); uint16_t all_queues[CNXK_SSO_MAX_HWGRP]; - uint16_t i; + uint16_t i, j; void *ws; if (!dev->configured) @@ -446,7 +450,8 @@ cnxk_sso_close(struct rte_eventdev *event_dev, cnxk_sso_unlink_t unlink_fn) for (i = 0; i < dev->nb_event_ports; i++) { ws = event_dev->data->ports[i]; - unlink_fn(dev, ws, all_queues, dev->nb_event_queues); + for (j = 0; j < CNXK_SSO_MAX_PROFILES; j++) + unlink_fn(dev, ws, all_queues, dev->nb_event_queues, j); rte_free(cnxk_sso_hws_get_cookie(ws)); event_dev->data->ports[i] = NULL; } diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h index bd50de87c0..d42d1afa1a 100644 --- a/drivers/event/cnxk/cnxk_eventdev.h +++ b/drivers/event/cnxk/cnxk_eventdev.h @@ -33,6 +33,8 @@ #define CN10K_SSO_GW_MODE "gw_mode" #define CN10K_SSO_STASH "stash" +#define CNXK_SSO_MAX_PROFILES 2 + #define NSEC2USEC(__ns) ((__ns) / 1E3) #define USEC2NSEC(__us) ((__us)*1E3) #define NSEC2TICK(__ns, __freq) (((__ns) * (__freq)) / 1E9) @@ -57,10 +59,10 @@ typedef void *(*cnxk_sso_init_hws_mem_t)(void *dev, uint8_t port_id); typedef void (*cnxk_sso_hws_setup_t)(void *dev, void *ws, uintptr_t grp_base); typedef void (*cnxk_sso_hws_release_t)(void *dev, void *ws); -typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map, - uint16_t nb_link); -typedef int (*cnxk_sso_unlink_t)(void *dev, void *ws, uint16_t *map, - uint16_t nb_link); +typedef int (*cnxk_sso_link_t)(void *dev, void *ws, uint16_t *map, uint16_t nb_link, + uint8_t profile); +typedef int (*cnxk_sso_unlink_t)(void *dev, void *ws, uint16_t *map, uint16_t nb_link, + uint8_t profile); typedef void (*cnxk_handle_event_t)(void *arg, struct rte_event ev); typedef void (*cnxk_sso_hws_reset_t)(void *arg, void *ws); typedef int (*cnxk_sso_hws_flush_t)(void *ws, uint8_t queue_id, uintptr_t base, From patchwork Tue Oct 3 09:47:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 132275 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1C940426AE; Tue, 3 Oct 2023 11:47:58 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id DA11A40608; Tue, 3 Oct 2023 11:47:50 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 63AAD40608 for ; Tue, 3 Oct 2023 11:47:48 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by 
mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 3933jmVU023432; Tue, 3 Oct 2023 02:47:47 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=ZJomz5zTA8m69IZNHb808fZ1P2VqJ190A/q3QZbPVlA=; b=DTxH8SN9F+HU6ACwJa5c1O7p+YGei3+OM6KoBiHbwqBChXvy1L4bH1Z8NUx8PbJLUsTd 0aBIsitVl656uxPpy9QrH1UNQ3V86rJWToSZFeHaqQjL34qF0GLOhQ5Bo1KLVP/lOttM +tYO9MbS2Cx0A/qhN7EjUlQm5Mlfkd50AYfs5wuzM1t31/QCQKu0X2VFcMtnqYp27kaV 2a5x/8XuZT+Y0ILhGctg+fqhppvdL7MaPL9/cxN7Aiwm8/LomY1Eld+4wxDQLe+aYEox zgCYYDbIaJ6AANjz4WFO3+m1l6Z9+Bl1l21Su81o2RRCL0jaQzqyLUc/Luammz5H04sK Yw== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3tgbas92v3-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Tue, 03 Oct 2023 02:47:47 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Tue, 3 Oct 2023 02:47:45 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Tue, 3 Oct 2023 02:47:45 -0700 Received: from MININT-80QBFE8.corp.innovium.com (MININT-80QBFE8.marvell.com [10.28.164.106]) by maili.marvell.com (Postfix) with ESMTP id 2CFEB3F704B; Tue, 3 Oct 2023 02:47:38 -0700 (PDT) From: To: , , , , , , , , , , , , , CC: Subject: [PATCH v6 3/3] test/event: add event link profile test Date: Tue, 3 Oct 2023 15:17:21 +0530 Message-ID: <20231003094721.5115-4-pbhagavatula@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20231003094721.5115-1-pbhagavatula@marvell.com> References: <20231003075109.4309-1-pbhagavatula@marvell.com> <20231003094721.5115-1-pbhagavatula@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: 6tLo_Ag4Mfb0TrhUbc3b8TlI7r2h7vCp X-Proofpoint-GUID: 6tLo_Ag4Mfb0TrhUbc3b8TlI7r2h7vCp X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.980,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-10-03_06,2023-10-02_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Pavan Nikhilesh Add test case to verify event link profiles. 
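For reference, the application-level sequence exercised by this test can be
summarised by the sketch below. It is illustrative only: dev_id, port_id and
the two queue identifiers are assumptions, and device/queue/port configuration
and error handling are omitted. The calls shown are the profile APIs the test
itself uses (rte_event_port_profile_links_set, rte_event_port_profile_switch,
rte_event_port_profile_unlink).

  uint8_t dev_id = 0, port_id = 0;   /* assumed identifiers */
  uint8_t q0 = 0, q1 = 1;

  /* Link queue 0 under profile 0 and queue 1 under profile 1 on the same port. */
  rte_event_port_profile_links_set(dev_id, port_id, &q0, NULL, 1, 0);
  rte_event_port_profile_links_set(dev_id, port_id, &q1, NULL, 1, 1);

  rte_event_dev_start(dev_id);

  /* Activate profile 1: subsequent dequeues on this port see only queue 1. */
  rte_event_port_profile_switch(dev_id, port_id, 1);
  /* ... rte_event_dequeue_burst(dev_id, port_id, ...) ... */

  /* Switch back to profile 0: queue 0 participates in scheduling again. */
  rte_event_port_profile_switch(dev_id, port_id, 0);

  /* Remove the per-profile links once the port is no longer needed. */
  rte_event_port_profile_unlink(dev_id, port_id, &q0, 1, 0);
  rte_event_port_profile_unlink(dev_id, port_id, &q1, 1, 1);

The test below follows this pattern and additionally checks that events
enqueued to the queue of the inactive profile are not dequeued until that
profile is switched in.
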
Signed-off-by: Pavan Nikhilesh --- app/test/test_eventdev.c | 117 +++++++++++++++++++++++++++++++++++++++ 1 file changed, 117 insertions(+) diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c index c51c93bdbd..0ecfa7db02 100644 --- a/app/test/test_eventdev.c +++ b/app/test/test_eventdev.c @@ -1129,6 +1129,121 @@ test_eventdev_link_get(void) return TEST_SUCCESS; } +static int +test_eventdev_profile_switch(void) +{ +#define MAX_RETRIES 4 + uint8_t priorities[RTE_EVENT_MAX_QUEUES_PER_DEV]; + uint8_t queues[RTE_EVENT_MAX_QUEUES_PER_DEV]; + struct rte_event_queue_conf qcfg; + struct rte_event_port_conf pcfg; + struct rte_event_dev_info info; + struct rte_event ev; + uint8_t q, re; + int rc; + + rte_event_dev_info_get(TEST_DEV_ID, &info); + + if (info.max_profiles_per_port <= 1) + return TEST_SKIPPED; + + if (info.max_event_queues <= 1) + return TEST_SKIPPED; + + rc = rte_event_port_default_conf_get(TEST_DEV_ID, 0, &pcfg); + TEST_ASSERT_SUCCESS(rc, "Failed to get port0 default config"); + rc = rte_event_port_setup(TEST_DEV_ID, 0, &pcfg); + TEST_ASSERT_SUCCESS(rc, "Failed to setup port0"); + + rc = rte_event_queue_default_conf_get(TEST_DEV_ID, 0, &qcfg); + TEST_ASSERT_SUCCESS(rc, "Failed to get queue0 default config"); + rc = rte_event_queue_setup(TEST_DEV_ID, 0, &qcfg); + TEST_ASSERT_SUCCESS(rc, "Failed to setup queue0"); + + q = 0; + rc = rte_event_port_profile_links_set(TEST_DEV_ID, 0, &q, NULL, 1, 0); + TEST_ASSERT(rc == 1, "Failed to link queue 0 to port 0 with profile 0"); + q = 1; + rc = rte_event_port_profile_links_set(TEST_DEV_ID, 0, &q, NULL, 1, 1); + TEST_ASSERT(rc == 1, "Failed to link queue 1 to port 0 with profile 1"); + + rc = rte_event_port_profile_links_get(TEST_DEV_ID, 0, queues, priorities, 0); + TEST_ASSERT(rc == 1, "Failed to links"); + TEST_ASSERT(queues[0] == 0, "Invalid queue found in link"); + + rc = rte_event_port_profile_links_get(TEST_DEV_ID, 0, queues, priorities, 1); + TEST_ASSERT(rc == 1, "Failed to links"); + TEST_ASSERT(queues[0] == 1, "Invalid queue found in link"); + + rc = rte_event_dev_start(TEST_DEV_ID); + TEST_ASSERT_SUCCESS(rc, "Failed to start event device"); + + ev.event_type = RTE_EVENT_TYPE_CPU; + ev.queue_id = 0; + ev.op = RTE_EVENT_OP_NEW; + ev.flow_id = 0; + ev.u64 = 0xBADF00D0; + rc = rte_event_enqueue_burst(TEST_DEV_ID, 0, &ev, 1); + TEST_ASSERT(rc == 1, "Failed to enqueue event"); + ev.queue_id = 1; + ev.flow_id = 1; + rc = rte_event_enqueue_burst(TEST_DEV_ID, 0, &ev, 1); + TEST_ASSERT(rc == 1, "Failed to enqueue event"); + + ev.event = 0; + ev.u64 = 0; + + rc = rte_event_port_profile_switch(TEST_DEV_ID, 0, 1); + TEST_ASSERT_SUCCESS(rc, "Failed to change profile"); + + re = MAX_RETRIES; + while (re--) { + rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0); + printf("rc %d\n", rc); + if (rc) + break; + } + + TEST_ASSERT(rc == 1, "Failed to dequeue event from profile 1"); + TEST_ASSERT(ev.flow_id == 1, "Incorrect flow identifier from profile 1"); + TEST_ASSERT(ev.queue_id == 1, "Incorrect queue identifier from profile 1"); + + re = MAX_RETRIES; + while (re--) { + rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0); + TEST_ASSERT(rc == 0, "Unexpected event dequeued from active profile"); + } + + rc = rte_event_port_profile_switch(TEST_DEV_ID, 0, 0); + TEST_ASSERT_SUCCESS(rc, "Failed to change profile"); + + re = MAX_RETRIES; + while (re--) { + rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0); + if (rc) + break; + } + + TEST_ASSERT(rc == 1, "Failed to dequeue event from profile 1"); + TEST_ASSERT(ev.flow_id == 0, 
"Incorrect flow identifier from profile 0"); + TEST_ASSERT(ev.queue_id == 0, "Incorrect queue identifier from profile 0"); + + re = MAX_RETRIES; + while (re--) { + rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0); + TEST_ASSERT(rc == 0, "Unexpected event dequeued from active profile"); + } + + q = 0; + rc = rte_event_port_profile_unlink(TEST_DEV_ID, 0, &q, 1, 0); + TEST_ASSERT(rc == 1, "Failed to unlink queue 0 to port 0 with profile 0"); + q = 1; + rc = rte_event_port_profile_unlink(TEST_DEV_ID, 0, &q, 1, 1); + TEST_ASSERT(rc == 1, "Failed to unlink queue 1 to port 0 with profile 1"); + + return TEST_SUCCESS; +} + static int test_eventdev_close(void) { @@ -1187,6 +1302,8 @@ static struct unit_test_suite eventdev_common_testsuite = { test_eventdev_timeout_ticks), TEST_CASE_ST(NULL, NULL, test_eventdev_start_stop), + TEST_CASE_ST(eventdev_configure_setup, eventdev_stop_device, + test_eventdev_profile_switch), TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device, test_eventdev_link), TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device,