From patchwork Tue Oct 1 13:18:56 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 144857 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6E41A45A7A; Tue, 1 Oct 2024 15:19:19 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id D375A40664; Tue, 1 Oct 2024 15:19:17 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 7556A40673 for ; Tue, 1 Oct 2024 15:19:15 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 491BQYWs006945; Tue, 1 Oct 2024 06:19:14 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h= cc:content-transfer-encoding:content-type:date:from:in-reply-to :message-id:mime-version:references:subject:to; s=pfpt0220; bh=W kKpLlgw0vs3vS8rQBEE5JBK3R1LlZIkpv+ExhJVOIA=; b=BxLM1IHA3b1bg62xH dqJJqWyleBNca/74wPx6O3WExdYkGEjM1i06SVrJTJdydfYT4kMvYTvUPVT1j0nk ETVq3k/SKAosoPBVIpCEY4Btj+7bzFaJ5dDU3Dbu71WIUn8ZGaGDRMxUMY/NQYPn 4lvOXD0jixt+Wk1GURk0h/C5DaBb0u+x98v4qA1Mw6ImnlTEFqO/l8Hs0aYb2q/k IDjSuXVmg0uKNFPiapAkgZNoYABJuLOvVRyGOdyW8vN2kUI4VPIUJ6FW9PQL0V5H brIfuUbkiWT7YK0x4thZtPITggyNtsqjrZTw2xpPBI5PA1qWKM/kcjbb97/IlkjX uk15A== Received: from dc6wp-exch02.marvell.com ([4.21.29.225]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 420g6tresp-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 01 Oct 2024 06:19:14 -0700 (PDT) Received: from DC6WP-EXCH02.marvell.com (10.76.176.209) by DC6WP-EXCH02.marvell.com (10.76.176.209) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.4; Tue, 1 Oct 2024 06:19:13 -0700 Received: from maili.marvell.com (10.69.176.80) by DC6WP-EXCH02.marvell.com (10.76.176.209) with Microsoft SMTP Server id 15.2.1544.4 via Frontend Transport; Tue, 1 Oct 2024 06:19:13 -0700 Received: from MININT-80QBFE8.corp.innovium.com (MININT-80QBFE8.marvell.com [10.28.164.106]) by maili.marvell.com (Postfix) with ESMTP id E47B03F705D; Tue, 1 Oct 2024 06:19:08 -0700 (PDT) From: To: , , , , , , , , CC: , Pavan Nikhilesh Subject: [PATCH v4 1/6] eventdev: introduce event pre-scheduling Date: Tue, 1 Oct 2024 18:48:56 +0530 Message-ID: <20241001131901.7920-2-pbhagavatula@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20241001131901.7920-1-pbhagavatula@marvell.com> References: <20241001061411.2537-1-pbhagavatula@marvell.com> <20241001131901.7920-1-pbhagavatula@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: JEOFBuIMltRz4_sM6lNI2P3N2k2pjAP- X-Proofpoint-GUID: JEOFBuIMltRz4_sM6lNI2P3N2k2pjAP- X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.60.29 definitions=2024-09-06_09,2024-09-06_01,2024-09-02_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Pavan Nikhilesh Event pre-scheduling improves scheduling performance by assigning events to event ports in advance when dequeues are issued. 
The dequeue operation initiates the pre-schedule operation, which completes in parallel without affecting the dequeued event flow contexts and dequeue latency. Event devices can indicate pre-scheduling capabilities using `RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE` and `RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE_ADAPTIVE` via the event device info function `info.event_dev_cap`. Applications can select the pre-schedule type and configure it through `rte_event_dev_config.preschedule_type` during `rte_event_dev_configure`. The supported pre-schedule types are: * `RTE_EVENT_DEV_PRESCHEDULE_NONE` - No pre-scheduling. * `RTE_EVENT_DEV_PRESCHEDULE` - Always issue a pre-schedule on dequeue. * `RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE` - Delay issuing pre-schedule until there are no forward progress constraints with the held flow contexts. Signed-off-by: Pavan Nikhilesh --- app/test/test_eventdev.c | 108 ++++++++++++++++++++ doc/guides/eventdevs/features/default.ini | 1 + doc/guides/prog_guide/eventdev/eventdev.rst | 22 ++++ doc/guides/rel_notes/release_24_11.rst | 8 ++ lib/eventdev/rte_eventdev.h | 48 +++++++++ 5 files changed, 187 insertions(+) diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c index e4e234dc98..d75fc8fbbc 100644 --- a/app/test/test_eventdev.c +++ b/app/test/test_eventdev.c @@ -1250,6 +1250,112 @@ test_eventdev_profile_switch(void) return TEST_SUCCESS; } +static int +preschedule_test(rte_event_dev_preschedule_type_t preschedule_type, const char *preschedule_name) +{ +#define NB_EVENTS 1024 + uint64_t start, total; + struct rte_event ev; + int rc, cnt; + + ev.event_type = RTE_EVENT_TYPE_CPU; + ev.queue_id = 0; + ev.op = RTE_EVENT_OP_NEW; + ev.u64 = 0xBADF00D0; + + for (cnt = 0; cnt < NB_EVENTS; cnt++) { + ev.flow_id = cnt; + rc = rte_event_enqueue_burst(TEST_DEV_ID, 0, &ev, 1); + TEST_ASSERT(rc == 1, "Failed to enqueue event"); + } + + RTE_SET_USED(preschedule_type); + total = 0; + while (cnt) { + start = rte_rdtsc_precise(); + rc = rte_event_dequeue_burst(TEST_DEV_ID, 0, &ev, 1, 0); + if (rc) { + total += rte_rdtsc_precise() - start; + cnt--; + } + } + printf("Preschedule type : %s, avg cycles %" PRIu64 "\n", preschedule_name, + total / NB_EVENTS); + + return TEST_SUCCESS; +} + +static int +preschedule_configure(rte_event_dev_preschedule_type_t type, struct rte_event_dev_info *info) +{ + struct rte_event_dev_config dev_conf; + struct rte_event_queue_conf qcfg; + struct rte_event_port_conf pcfg; + int rc; + + devconf_set_default_sane_values(&dev_conf, info); + dev_conf.nb_event_ports = 1; + dev_conf.nb_event_queues = 1; + dev_conf.preschedule_type = type; + + rc = rte_event_dev_configure(TEST_DEV_ID, &dev_conf); + TEST_ASSERT_SUCCESS(rc, "Failed to configure eventdev"); + + rc = rte_event_port_default_conf_get(TEST_DEV_ID, 0, &pcfg); + TEST_ASSERT_SUCCESS(rc, "Failed to get port0 default config"); + rc = rte_event_port_setup(TEST_DEV_ID, 0, &pcfg); + TEST_ASSERT_SUCCESS(rc, "Failed to setup port0"); + + rc = rte_event_queue_default_conf_get(TEST_DEV_ID, 0, &qcfg); + TEST_ASSERT_SUCCESS(rc, "Failed to get queue0 default config"); + rc = rte_event_queue_setup(TEST_DEV_ID, 0, &qcfg); + TEST_ASSERT_SUCCESS(rc, "Failed to setup queue0"); + + rc = rte_event_port_link(TEST_DEV_ID, 0, NULL, NULL, 0); + TEST_ASSERT(rc == (int)dev_conf.nb_event_queues, "Failed to link port, device %d", + TEST_DEV_ID); + + rc = rte_event_dev_start(TEST_DEV_ID); + TEST_ASSERT_SUCCESS(rc, "Failed to start event device"); + + return 0; +} + +static int +test_eventdev_preschedule_configure(void) +{ + struct 
rte_event_dev_info info; + int rc; + + rte_event_dev_info_get(TEST_DEV_ID, &info); + + if ((info.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE) == 0) + return TEST_SKIPPED; + + rc = preschedule_configure(RTE_EVENT_DEV_PRESCHEDULE_NONE, &info); + TEST_ASSERT_SUCCESS(rc, "Failed to configure eventdev"); + rc = preschedule_test(RTE_EVENT_DEV_PRESCHEDULE_NONE, "RTE_EVENT_DEV_PRESCHEDULE_NONE"); + TEST_ASSERT_SUCCESS(rc, "Failed to test preschedule RTE_EVENT_DEV_PRESCHEDULE_NONE"); + + rte_event_dev_stop(TEST_DEV_ID); + rc = preschedule_configure(RTE_EVENT_DEV_PRESCHEDULE, &info); + TEST_ASSERT_SUCCESS(rc, "Failed to configure eventdev"); + rc = preschedule_test(RTE_EVENT_DEV_PRESCHEDULE, "RTE_EVENT_DEV_PRESCHEDULE"); + TEST_ASSERT_SUCCESS(rc, "Failed to test preschedule RTE_EVENT_DEV_PRESCHEDULE"); + + if (info.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE_ADAPTIVE) { + rte_event_dev_stop(TEST_DEV_ID); + rc = preschedule_configure(RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE, &info); + TEST_ASSERT_SUCCESS(rc, "Failed to configure eventdev"); + rc = preschedule_test(RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE, + "RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE"); + TEST_ASSERT_SUCCESS( + rc, "Failed to test preschedule RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE"); + } + + return TEST_SUCCESS; +} + static int test_eventdev_close(void) { @@ -1310,6 +1416,8 @@ static struct unit_test_suite eventdev_common_testsuite = { test_eventdev_start_stop), TEST_CASE_ST(eventdev_configure_setup, eventdev_stop_device, test_eventdev_profile_switch), + TEST_CASE_ST(eventdev_configure_setup, NULL, + test_eventdev_preschedule_configure), TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device, test_eventdev_link), TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device, diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini index 1cc4303fe5..c8d5ed2d74 100644 --- a/doc/guides/eventdevs/features/default.ini +++ b/doc/guides/eventdevs/features/default.ini @@ -22,6 +22,7 @@ carry_flow_id = maintenance_free = runtime_queue_attr = profile_links = +preschedule = ; ; Features of a default Ethernet Rx adapter. diff --git a/doc/guides/prog_guide/eventdev/eventdev.rst b/doc/guides/prog_guide/eventdev/eventdev.rst index fb6dfce102..341b9bb2c6 100644 --- a/doc/guides/prog_guide/eventdev/eventdev.rst +++ b/doc/guides/prog_guide/eventdev/eventdev.rst @@ -357,6 +357,28 @@ Worker path: // Process the event received. } +Event Pre-scheduling +~~~~~~~~~~~~~~~~~~~~ + +Event pre-scheduling improves scheduling performance by assigning events to event ports in advance +when dequeues are issued. +The `rte_event_dequeue_burst` operation initiates the pre-schedule operation, which completes +in parallel without affecting the dequeued event flow contexts and dequeue latency. +On the next dequeue operation, the pre-scheduled events are dequeued and pre-schedule is initiated +again. + +An application can use event pre-scheduling if the event device supports it at either the device +level or at an individual port level. +The application can check pre-schedule capability by checking if ``rte_event_dev_info.event_dev_cap`` +has the bit ``RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE`` set; if present, pre-scheduling can be enabled at device +configuration time by setting the appropriate pre-schedule type in ``rte_event_dev_config.preschedule_type``. + +Currently, the following pre-schedule types are supported: + * ``RTE_EVENT_DEV_PRESCHEDULE_NONE`` - No pre-scheduling. + * ``RTE_EVENT_DEV_PRESCHEDULE`` - Always issue a pre-schedule when dequeue is issued.
+ * ``RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE`` - Issue pre-schedule when dequeue is issued and there are + no forward progress constraints. + Starting the EventDev ~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst index 0ff70d9057..eae5cc326b 100644 --- a/doc/guides/rel_notes/release_24_11.rst +++ b/doc/guides/rel_notes/release_24_11.rst @@ -55,6 +55,14 @@ New Features Also, make sure to start the actual text at the margin. ======================================================= +* **Added event device pre-scheduling support.** + + Added support for pre-scheduling of events to event ports to improve + scheduling performance and latency. + + * Added ``rte_event_dev_config::preschedule_type`` to configure the device + level pre-scheduling type. + Removed Items ------------- diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h index 08e5f9320b..5ea7f5a07b 100644 --- a/lib/eventdev/rte_eventdev.h +++ b/lib/eventdev/rte_eventdev.h @@ -446,6 +446,30 @@ struct rte_event; * @see RTE_SCHED_TYPE_PARALLEL */ +#define RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE (1ULL << 16) +/**< Event device supports event pre-scheduling. + * + * When this capability is available, the application can enable event pre-scheduling on the event + * device to pre-schedule events to a event port when `rte_event_dequeue_burst()` + * is issued. + * The pre-schedule process starts with the `rte_event_dequeue_burst()` call and the + * pre-scheduled events are returned on the next `rte_event_dequeue_burst()` call. + * + * @see rte_event_dev_configure() + */ + +#define RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE_ADAPTIVE (1ULL << 17) +/**< Event device supports adaptive event pre-scheduling. + * + * When this capability is available, the application can enable adaptive pre-scheduling + * on the event device where the events are pre-scheduled when there are no forward + * progress constraints with the currently held flow contexts. + * The pre-schedule process starts with the `rte_event_dequeue_burst()` call and the + * pre-scheduled events are returned on the next `rte_event_dequeue_burst()` call. + * + * @see rte_event_dev_configure() + */ + /* Event device priority levels */ #define RTE_EVENT_DEV_PRIORITY_HIGHEST 0 /**< Highest priority level for events and queues. @@ -680,6 +704,25 @@ rte_event_dev_attr_get(uint8_t dev_id, uint32_t attr_id, * @see rte_event_dequeue_timeout_ticks(), rte_event_dequeue_burst() */ +typedef enum { + RTE_EVENT_DEV_PRESCHEDULE_NONE = 0, + /* Disable pre-schedule across the event device or on a given event port. + * @ref rte_event_dev_config.preschedule_type + */ + RTE_EVENT_DEV_PRESCHEDULE, + /* Enable pre-schedule always across the event device or a given event port. + * @ref rte_event_dev_config.preschedule_type + * @see RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE + */ + RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE, + /* Enable adaptive pre-schedule across the event device or a given event port. + * Delay issuing pre-schedule until there are no forward progress constraints with + * the held flow contexts. + * @ref rte_event_dev_config.preschedule_type + * @see RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE_ADAPTIVE + */ +} rte_event_dev_preschedule_type_t; + /** Event device configuration structure */ struct rte_event_dev_config { uint32_t dequeue_timeout_ns; @@ -752,6 +795,11 @@ struct rte_event_dev_config { * optimized for single-link usage, this field is a hint for how many * to allocate; otherwise, regular event ports and queues will be used. 
*/ + rte_event_dev_preschedule_type_t preschedule_type; + /**< Event pre-schedule type to use across the event device, if supported. + * @see RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE + * @see RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE_ADAPTIVE + */ }; /** From patchwork Tue Oct 1 13:18:57 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 144858 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8BEFD45A7A; Tue, 1 Oct 2024 15:19:26 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 3BB0040678; Tue, 1 Oct 2024 15:19:22 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 4F42E4067E for ; Tue, 1 Oct 2024 15:19:21 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 491BBxRM022491; Tue, 1 Oct 2024 06:19:20 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h= cc:content-transfer-encoding:content-type:date:from:in-reply-to :message-id:mime-version:references:subject:to; s=pfpt0220; bh=Q SNwk7qimr+txPBgsHGhC7v9cOsN6E0mIc4NLv+ciJ8=; b=Y4f25ygg4XXaS73tj iYJC7E6iHkSVt3qA4qrcrv/0GEB17XOGSicNpJwK4/IHn5G/eChynqb1AJnTYFvN 8Fsdvaf9tsLVr6cvBRJkLhQBWBZt/jHivJfEPPhmvSlMRvoIc07i8KQRronT6MoI sqlCtIW0t8oLbmLMDWMtqBu5W7IbTIPSEAUZJpxniHZuzQFeimGyQBbZAnpLKtUZ /66Y72dsdM6EHO4ZnvCjyydNc9pDYJYp9UKZgNhmDQF8ZjySYzmIbB7RN5IGNlBc K7tORS5RXlPHGtqDc0hZuLOdC0VuPIN4RFrNLiRdTMfBe8fee3Ldbx41gin86w9O Ip5aA== Received: from dc5-exch05.marvell.com ([199.233.59.128]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 41yt6gdrr7-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 01 Oct 2024 06:19:20 -0700 (PDT) Received: from DC5-EXCH05.marvell.com (10.69.176.209) by DC5-EXCH05.marvell.com (10.69.176.209) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.4; Tue, 1 Oct 2024 06:19:18 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH05.marvell.com (10.69.176.209) with Microsoft SMTP Server id 15.2.1544.4 via Frontend Transport; Tue, 1 Oct 2024 06:19:18 -0700 Received: from MININT-80QBFE8.corp.innovium.com (MININT-80QBFE8.marvell.com [10.28.164.106]) by maili.marvell.com (Postfix) with ESMTP id B692D3F705F; Tue, 1 Oct 2024 06:19:13 -0700 (PDT) From: To: , , , , , , , , CC: , Pavan Nikhilesh Subject: [PATCH v4 2/6] eventdev: add event port pre-schedule modify Date: Tue, 1 Oct 2024 18:48:57 +0530 Message-ID: <20241001131901.7920-3-pbhagavatula@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20241001131901.7920-1-pbhagavatula@marvell.com> References: <20241001061411.2537-1-pbhagavatula@marvell.com> <20241001131901.7920-1-pbhagavatula@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: aItgksBFDmMgCYP-Bt2s5so1vg6G3lH5 X-Proofpoint-ORIG-GUID: aItgksBFDmMgCYP-Bt2s5so1vg6G3lH5 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.60.29 definitions=2024-09-06_09,2024-09-06_01,2024-09-02_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: 
List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Pavan Nikhilesh Some event devices allow pre-schedule types to be modified at runtime on an event port. Add `RTE_EVENT_DEV_CAP_EVENT_PER_PORT_PRESCHEDULE` capability to indicate that the event device supports this feature. Add `rte_event_port_preschedule_modify()` API to modify the pre-schedule type at runtime. Signed-off-by: Pavan Nikhilesh --- app/test/test_eventdev.c | 45 +++++++++++++++-- doc/guides/prog_guide/eventdev/eventdev.rst | 12 +++++ doc/guides/rel_notes/release_24_11.rst | 2 + lib/eventdev/eventdev_pmd.h | 2 + lib/eventdev/eventdev_private.c | 20 ++++++++ lib/eventdev/eventdev_trace_points.c | 3 ++ lib/eventdev/rte_eventdev.h | 55 +++++++++++++++++++++ lib/eventdev/rte_eventdev_core.h | 6 +++ lib/eventdev/rte_eventdev_trace_fp.h | 11 ++++- lib/eventdev/version.map | 4 ++ 10 files changed, 154 insertions(+), 6 deletions(-) diff --git a/app/test/test_eventdev.c b/app/test/test_eventdev.c index d75fc8fbbc..af3ab689b5 100644 --- a/app/test/test_eventdev.c +++ b/app/test/test_eventdev.c @@ -1251,7 +1251,8 @@ test_eventdev_profile_switch(void) } static int -preschedule_test(rte_event_dev_preschedule_type_t preschedule_type, const char *preschedule_name) +preschedule_test(rte_event_dev_preschedule_type_t preschedule_type, const char *preschedule_name, + uint8_t modify) { #define NB_EVENTS 1024 uint64_t start, total; @@ -1269,7 +1270,11 @@ preschedule_test(rte_event_dev_preschedule_type_t preschedule_type, const char * TEST_ASSERT(rc == 1, "Failed to enqueue event"); } - RTE_SET_USED(preschedule_type); + if (modify) { + rc = rte_event_port_preschedule_modify(TEST_DEV_ID, 0, preschedule_type); + TEST_ASSERT_SUCCESS(rc, "Failed to modify preschedule type"); + } + total = 0; while (cnt) { start = rte_rdtsc_precise(); @@ -1334,13 +1339,13 @@ test_eventdev_preschedule_configure(void) rc = preschedule_configure(RTE_EVENT_DEV_PRESCHEDULE_NONE, &info); TEST_ASSERT_SUCCESS(rc, "Failed to configure eventdev"); - rc = preschedule_test(RTE_EVENT_DEV_PRESCHEDULE_NONE, "RTE_EVENT_DEV_PRESCHEDULE_NONE"); + rc = preschedule_test(RTE_EVENT_DEV_PRESCHEDULE_NONE, "RTE_EVENT_DEV_PRESCHEDULE_NONE", 0); TEST_ASSERT_SUCCESS(rc, "Failed to test preschedule RTE_EVENT_DEV_PRESCHEDULE_NONE"); rte_event_dev_stop(TEST_DEV_ID); rc = preschedule_configure(RTE_EVENT_DEV_PRESCHEDULE, &info); TEST_ASSERT_SUCCESS(rc, "Failed to configure eventdev"); - rc = preschedule_test(RTE_EVENT_DEV_PRESCHEDULE, "RTE_EVENT_DEV_PRESCHEDULE"); + rc = preschedule_test(RTE_EVENT_DEV_PRESCHEDULE, "RTE_EVENT_DEV_PRESCHEDULE", 0); TEST_ASSERT_SUCCESS(rc, "Failed to test preschedule RTE_EVENT_DEV_PRESCHEDULE"); if (info.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE_ADAPTIVE) { @@ -1348,7 +1353,7 @@ test_eventdev_preschedule_configure(void) rc = preschedule_configure(RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE, &info); TEST_ASSERT_SUCCESS(rc, "Failed to configure eventdev"); rc = preschedule_test(RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE, - "RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE"); + "RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE", 0); TEST_ASSERT_SUCCESS( rc, "Failed to test preschedule RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE"); } @@ -1356,6 +1361,34 @@ test_eventdev_preschedule_configure(void) return TEST_SUCCESS; } +static int +test_eventdev_preschedule_modify(void) +{ + struct rte_event_dev_info info; + int rc; + + rte_event_dev_info_get(TEST_DEV_ID, &info); + if ((info.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_PER_PORT_PRESCHEDULE) == 0) + return TEST_SKIPPED; + + rc = 
preschedule_configure(RTE_EVENT_DEV_PRESCHEDULE_NONE, &info); + TEST_ASSERT_SUCCESS(rc, "Failed to configure eventdev"); + rc = preschedule_test(RTE_EVENT_DEV_PRESCHEDULE_NONE, "RTE_EVENT_DEV_PRESCHEDULE_NONE", 1); + TEST_ASSERT_SUCCESS(rc, "Failed to test per port preschedule RTE_EVENT_DEV_PRESCHEDULE_NONE"); + + rc = preschedule_test(RTE_EVENT_DEV_PRESCHEDULE, "RTE_EVENT_DEV_PRESCHEDULE", 1); + TEST_ASSERT_SUCCESS(rc, "Failed to test per port preschedule RTE_EVENT_DEV_PRESCHEDULE"); + + if (info.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE_ADAPTIVE) { + rc = preschedule_test(RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE, + "RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE", 1); + TEST_ASSERT_SUCCESS( + rc, "Failed to test per port preschedule RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE"); + } + + return TEST_SUCCESS; +} + static int test_eventdev_close(void) { @@ -1418,6 +1451,8 @@ static struct unit_test_suite eventdev_common_testsuite = { test_eventdev_profile_switch), TEST_CASE_ST(eventdev_configure_setup, NULL, test_eventdev_preschedule_configure), + TEST_CASE_ST(eventdev_configure_setup, eventdev_stop_device, + test_eventdev_preschedule_modify), TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device, test_eventdev_link), TEST_CASE_ST(eventdev_setup_device, eventdev_stop_device, diff --git a/doc/guides/prog_guide/eventdev/eventdev.rst b/doc/guides/prog_guide/eventdev/eventdev.rst index 341b9bb2c6..2deab0333e 100644 --- a/doc/guides/prog_guide/eventdev/eventdev.rst +++ b/doc/guides/prog_guide/eventdev/eventdev.rst @@ -379,6 +379,18 @@ Currently, the following pre-schedule types are supported: * ``RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE`` - Issue pre-schedule when dequeue is issued and there are no forward progress constraints. +To enable or disable event pre-scheduling at a given event port, the application can use +``rte_event_port_preschedule_modify()`` API. + +.. code-block:: c + + rte_event_port_preschedule_modify(dev_id, port_id, RTE_EVENT_DEV_PRESCHEDULE); + // Dequeue events from the event port with normal dequeue() function. + rte_event_port_preschedule_modify(dev_id, port_id, RTE_EVENT_DEV_PRESCHEDULE_NONE); + // Disable pre-scheduling if thread is about to be scheduled out and issue dequeue() to drain + // pending events. + + Starting the EventDev ~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst index eae5cc326b..6e36ac7b7e 100644 --- a/doc/guides/rel_notes/release_24_11.rst +++ b/doc/guides/rel_notes/release_24_11.rst @@ -62,6 +62,8 @@ New Features * Added ``rte_event_dev_config::preschedule_type`` to configure the device level pre-scheduling type. + * Added ``rte_event_port_preschedule_modify`` to modify pre-scheduling type + on a given event port. Removed Items diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h index 7a5699f14b..9ea23aa6cd 100644 --- a/lib/eventdev/eventdev_pmd.h +++ b/lib/eventdev/eventdev_pmd.h @@ -184,6 +184,8 @@ struct __rte_cache_aligned rte_eventdev { /**< Pointer to PMD DMA adapter enqueue function. */ event_profile_switch_t profile_switch; /**< Pointer to PMD Event switch profile function. */ + event_preschedule_modify_t preschedule_modify; + /**< Pointer to PMD Event port pre-schedule type modify function. 
*/ uint64_t reserved_64s[3]; /**< Reserved for future fields */ void *reserved_ptrs[3]; /**< Reserved for future fields */ diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c index 017f97ccab..dc37f736f8 100644 --- a/lib/eventdev/eventdev_private.c +++ b/lib/eventdev/eventdev_private.c @@ -96,6 +96,21 @@ dummy_event_port_profile_switch(__rte_unused void *port, __rte_unused uint8_t pr return -EINVAL; } +static int +dummy_event_port_preschedule_modify(__rte_unused void *port, + __rte_unused rte_event_dev_preschedule_type_t preschedule) +{ + RTE_EDEV_LOG_ERR("modify pre-schedule requested for unconfigured event device"); + return -EINVAL; +} + +static int +dummy_event_port_preschedule_modify_hint(__rte_unused void *port, + __rte_unused rte_event_dev_preschedule_type_t preschedule) +{ + return -ENOTSUP; +} + void event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op) { @@ -114,6 +129,7 @@ event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op) .ca_enqueue = dummy_event_crypto_adapter_enqueue, .dma_enqueue = dummy_event_dma_adapter_enqueue, .profile_switch = dummy_event_port_profile_switch, + .preschedule_modify = dummy_event_port_preschedule_modify, .data = dummy_data, }; @@ -136,5 +152,9 @@ event_dev_fp_ops_set(struct rte_event_fp_ops *fp_op, fp_op->ca_enqueue = dev->ca_enqueue; fp_op->dma_enqueue = dev->dma_enqueue; fp_op->profile_switch = dev->profile_switch; + fp_op->preschedule_modify = dev->preschedule_modify; fp_op->data = dev->data->ports; + + if (fp_op->preschedule_modify == NULL) + fp_op->preschedule_modify = dummy_event_port_preschedule_modify_hint; } diff --git a/lib/eventdev/eventdev_trace_points.c b/lib/eventdev/eventdev_trace_points.c index 8024e07531..e41674123c 100644 --- a/lib/eventdev/eventdev_trace_points.c +++ b/lib/eventdev/eventdev_trace_points.c @@ -49,6 +49,9 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_maintain, RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_switch, lib.eventdev.port.profile.switch) +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_preschedule_modify, + lib.eventdev.port.preschedule.modify) + /* Eventdev Rx adapter trace points */ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_eth_rx_adapter_create, lib.eventdev.rx.adapter.create) diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h index 5ea7f5a07b..0add0093ac 100644 --- a/lib/eventdev/rte_eventdev.h +++ b/lib/eventdev/rte_eventdev.h @@ -470,6 +470,16 @@ struct rte_event; * @see rte_event_dev_configure() */ +#define RTE_EVENT_DEV_CAP_EVENT_PER_PORT_PRESCHEDULE (1ULL << 18) +/**< Event device supports event pre-scheduling per event port. + * + * When this flag is set, the event device allows controlling the event + * pre-scheduling at a event port granularity. + * + * @see rte_event_dev_configure() + * @see rte_event_port_preschedule_modify() + */ + /* Event device priority levels */ #define RTE_EVENT_DEV_PRIORITY_HIGHEST 0 /**< Highest priority level for events and queues. @@ -708,18 +718,23 @@ typedef enum { RTE_EVENT_DEV_PRESCHEDULE_NONE = 0, /* Disable pre-schedule across the event device or on a given event port. * @ref rte_event_dev_config.preschedule_type + * @ref rte_event_port_preschedule_modify() */ RTE_EVENT_DEV_PRESCHEDULE, /* Enable pre-schedule always across the event device or a given event port. 
* @ref rte_event_dev_config.preschedule_type + * @ref rte_event_port_preschedule_modify() * @see RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE + * @see RTE_EVENT_DEV_CAP_EVENT_PER_PORT_PRESCHEDULE */ RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE, /* Enable adaptive pre-schedule across the event device or a given event port. * Delay issuing pre-schedule until there are no forward progress constraints with * the held flow contexts. * @ref rte_event_dev_config.preschedule_type + * @ref rte_event_port_preschedule_modify() * @see RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE_ADAPTIVE + * @see RTE_EVENT_DEV_CAP_EVENT_PER_PORT_PRESCHEDULE */ } rte_event_dev_preschedule_type_t; @@ -2922,6 +2937,46 @@ rte_event_port_profile_switch(uint8_t dev_id, uint8_t port_id, uint8_t profile_i return fp_ops->profile_switch(port, profile_id); } +/** + * Change the pre-schedule type to use on an event port. + * + * This function is used to change the current pre-schedule type configured + * on an event port; the pre-schedule type can be set to none to disable pre-scheduling. + * This affects the subsequent ``rte_event_dequeue_burst`` call. + * The event device should support the RTE_EVENT_DEV_CAP_EVENT_PER_PORT_PRESCHEDULE capability. + * + * @param dev_id + * The identifier of the device. + * @param port_id + * The identifier of the event port. + * @param type + * The pre-schedule type to use on the event port. + * @return + * - 0 on success. + * - -EINVAL if *dev_id*, *port_id*, or *type* is invalid. + */ +static inline int +rte_event_port_preschedule_modify(uint8_t dev_id, uint8_t port_id, + rte_event_dev_preschedule_type_t type) +{ + const struct rte_event_fp_ops *fp_ops; + void *port; + + fp_ops = &rte_event_fp_ops[dev_id]; + port = fp_ops->data[port_id]; + +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG + if (dev_id >= RTE_EVENT_MAX_DEVS || port_id >= RTE_EVENT_MAX_PORTS_PER_DEV) + return -EINVAL; + + if (port == NULL) + return -EINVAL; +#endif + rte_eventdev_trace_port_preschedule_modify(dev_id, port_id, type); + + return fp_ops->preschedule_modify(port, type); +} + #ifdef __cplusplus } #endif diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h index fc8e1556ab..2275888a6b 100644 --- a/lib/eventdev/rte_eventdev_core.h +++ b/lib/eventdev/rte_eventdev_core.h @@ -49,6 +49,10 @@ typedef uint16_t (*event_dma_adapter_enqueue_t)(void *port, struct rte_event ev[ typedef int (*event_profile_switch_t)(void *port, uint8_t profile); /**< @internal Switch active link profile on the event port. */ +typedef int (*event_preschedule_modify_t)(void *port, + rte_event_dev_preschedule_type_t preschedule_type); +/**< @internal Modify pre-schedule type on the event port. */ + struct __rte_cache_aligned rte_event_fp_ops { void **data; /**< points to array of internal port data pointers */ @@ -76,6 +80,8 @@ struct __rte_cache_aligned rte_event_fp_ops { /**< PMD DMA adapter enqueue function. */ event_profile_switch_t profile_switch; /**< PMD Event switch profile function. */ + event_preschedule_modify_t preschedule_modify; + /**< PMD Event port pre-schedule switch. 
*/ uintptr_t reserved[4]; }; diff --git a/lib/eventdev/rte_eventdev_trace_fp.h b/lib/eventdev/rte_eventdev_trace_fp.h index 04d510ad00..78baed94de 100644 --- a/lib/eventdev/rte_eventdev_trace_fp.h +++ b/lib/eventdev/rte_eventdev_trace_fp.h @@ -8,7 +8,7 @@ /** * @file * - * API for ethdev trace support + * API for eventdev trace support */ #ifdef __cplusplus @@ -54,6 +54,15 @@ RTE_TRACE_POINT_FP( rte_trace_point_emit_u8(profile); ) +RTE_TRACE_POINT_FP( + rte_eventdev_trace_port_preschedule_modify, + RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, + int type), + rte_trace_point_emit_u8(dev_id); + rte_trace_point_emit_u8(port_id); + rte_trace_point_emit_int(type); +) + RTE_TRACE_POINT_FP( rte_eventdev_trace_eth_tx_adapter_enqueue, RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table, diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map index 4947bb4ec6..b6d63ba576 100644 --- a/lib/eventdev/version.map +++ b/lib/eventdev/version.map @@ -147,6 +147,10 @@ EXPERIMENTAL { rte_event_port_profile_unlink; rte_event_port_profile_links_get; __rte_eventdev_trace_port_profile_switch; + + # added in 24.11 + rte_event_port_preschedule_modify; + __rte_eventdev_trace_port_preschedule_modify; }; INTERNAL { From patchwork Tue Oct 1 13:18:58 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 144859 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1AB3D45A7A; Tue, 1 Oct 2024 15:19:36 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 234D64066D; Tue, 1 Oct 2024 15:19:27 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 3422640289 for ; Tue, 1 Oct 2024 15:19:25 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 491BQYX7006945; Tue, 1 Oct 2024 06:19:24 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h= cc:content-transfer-encoding:content-type:date:from:in-reply-to :message-id:mime-version:references:subject:to; s=pfpt0220; bh=u GPdjG6k31iFyLMCl9jhrBI6ioS94P+dMCiKvlw0VjM=; b=T7q/P/F6MJ4/BFtfz bl00o7Iuo8bvc1Ak64jroq2B0RGTiUOIISJZiYT5PegCMN/WiwFIe0dLY2NiIXoz CXiFAVRhvTdzwFDjEhbOqUqxQn8nyIVZ2k2+s8Dn+QIOutCFKQPgiSPvKGZYples iJGuKjyK8I0Jl1aMk5oGHOYeeyznByFeI852C/+dv7KLRZUc9VcHCoMIUBfr7fEs 7CFRxpD3GaqmcAJQpTBPxMCKhrM3R9meq3eJ6vlE0oZluRsAFD01E0gWVEc7AecH iwkQyQm99WbN13KdHJ0p1A9fvBKxWGDLtgqLrbpey3RhQarbTKXjoHaFeO3miWg7 C3+Xw== Received: from dc6wp-exch02.marvell.com ([4.21.29.225]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 420g6trevf-2 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 01 Oct 2024 06:19:24 -0700 (PDT) Received: from DC6WP-EXCH02.marvell.com (10.76.176.209) by DC6WP-EXCH02.marvell.com (10.76.176.209) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.4; Tue, 1 Oct 2024 06:19:23 -0700 Received: from maili.marvell.com (10.69.176.80) by DC6WP-EXCH02.marvell.com (10.76.176.209) with Microsoft SMTP Server id 15.2.1544.4 via Frontend Transport; Tue, 1 Oct 2024 06:19:23 -0700 Received: from MININT-80QBFE8.corp.innovium.com 
(MININT-80QBFE8.marvell.com [10.28.164.106]) by maili.marvell.com (Postfix) with ESMTP id 026B53F705D; Tue, 1 Oct 2024 06:19:18 -0700 (PDT) From: To: , , , , , , , , CC: , Pavan Nikhilesh Subject: [PATCH v4 3/6] eventdev: add SW event preschedule hint Date: Tue, 1 Oct 2024 18:48:58 +0530 Message-ID: <20241001131901.7920-4-pbhagavatula@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20241001131901.7920-1-pbhagavatula@marvell.com> References: <20241001061411.2537-1-pbhagavatula@marvell.com> <20241001131901.7920-1-pbhagavatula@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: ao3mxBooLbA6cUNXU1OtN4Pqng3Xd9TM X-Proofpoint-GUID: ao3mxBooLbA6cUNXU1OtN4Pqng3Xd9TM X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.60.29 definitions=2024-09-06_09,2024-09-06_01,2024-09-02_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Pavan Nikhilesh Add a new eventdev API to provide a hint to the eventdev PMD to pre-schedule the next event into the event port, without releasing the current flow context. Event devices that support this feature advertise the capability using the RTE_EVENT_DEV_CAP_SW_PRESCHEDULE capability flag. Applications can invoke `rte_event_port_preschedule` to hint the PMD. Signed-off-by: Pavan Nikhilesh --- doc/guides/prog_guide/eventdev/eventdev.rst | 8 ++++ doc/guides/rel_notes/release_24_11.rst | 3 +- lib/eventdev/eventdev_pmd.h | 2 + lib/eventdev/eventdev_private.c | 21 ++++++++- lib/eventdev/eventdev_trace_points.c | 3 ++ lib/eventdev/rte_eventdev.h | 49 +++++++++++++++++++++ lib/eventdev/rte_eventdev_core.h | 5 +++ lib/eventdev/rte_eventdev_trace_fp.h | 8 ++++ lib/eventdev/version.map | 2 + 9 files changed, 98 insertions(+), 3 deletions(-) diff --git a/doc/guides/prog_guide/eventdev/eventdev.rst b/doc/guides/prog_guide/eventdev/eventdev.rst index 2deab0333e..1d8b86ab66 100644 --- a/doc/guides/prog_guide/eventdev/eventdev.rst +++ b/doc/guides/prog_guide/eventdev/eventdev.rst @@ -390,6 +390,14 @@ To enable or disable event pre-scheduling at a given event port, the application // Disable pre-scheduling if thread is about to be scheduled out and issue dequeue() to drain // pending events. +Event Pre-schedule Hint can be used to provide a hint to the eventdev PMD to pre-schedule the next +event without releasing the current flow context. Event devices that support this feature advertise +the capability using the ``RTE_EVENT_DEV_CAP_SW_PRESCHEDULE`` capability flag. +If pre-scheduling is already enabled at the event device or event port level, the hint is ignored. + +.. code-block:: c + + rte_event_port_preschedule(dev_id, port_id, RTE_EVENT_DEV_PRESCHEDULE); Starting the EventDev ~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/guides/rel_notes/release_24_11.rst b/doc/guides/rel_notes/release_24_11.rst index 6e36ac7b7e..3ada21c084 100644 --- a/doc/guides/rel_notes/release_24_11.rst +++ b/doc/guides/rel_notes/release_24_11.rst @@ -64,7 +64,8 @@ New Features level pre-scheduling type. * Added ``rte_event_port_preschedule_modify`` to modify pre-scheduling type on a given event port. - + * Added ``rte_event_port_preschedule`` to allow applications to decide when + to pre-schedule events on an event port. 
Removed Items ------------- diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h index 9ea23aa6cd..0bee2347ef 100644 --- a/lib/eventdev/eventdev_pmd.h +++ b/lib/eventdev/eventdev_pmd.h @@ -186,6 +186,8 @@ struct __rte_cache_aligned rte_eventdev { /**< Pointer to PMD Event switch profile function. */ event_preschedule_modify_t preschedule_modify; /**< Pointer to PMD Event port pre-schedule type modify function. */ + event_preschedule_t preschedule; + /**< Pointer to PMD Event port pre-schedule function. */ uint64_t reserved_64s[3]; /**< Reserved for future fields */ void *reserved_ptrs[3]; /**< Reserved for future fields */ diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c index dc37f736f8..6aed1cba9a 100644 --- a/lib/eventdev/eventdev_private.c +++ b/lib/eventdev/eventdev_private.c @@ -111,6 +111,19 @@ dummy_event_port_preschedule_modify_hint(__rte_unused void *port, return -ENOTSUP; } +static void +dummy_event_port_preschedule(__rte_unused void *port, + __rte_unused rte_event_dev_preschedule_type_t preschedule) +{ + RTE_EDEV_LOG_ERR("pre-schedule requested for unconfigured event device"); +} + +static void +dummy_event_port_preschedule_hint(__rte_unused void *port, + __rte_unused rte_event_dev_preschedule_type_t preschedule) +{ +} + void event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op) { @@ -124,12 +137,12 @@ event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op) .dequeue_burst = dummy_event_dequeue_burst, .maintain = dummy_event_maintain, .txa_enqueue = dummy_event_tx_adapter_enqueue, - .txa_enqueue_same_dest = - dummy_event_tx_adapter_enqueue_same_dest, + .txa_enqueue_same_dest = dummy_event_tx_adapter_enqueue_same_dest, .ca_enqueue = dummy_event_crypto_adapter_enqueue, .dma_enqueue = dummy_event_dma_adapter_enqueue, .profile_switch = dummy_event_port_profile_switch, .preschedule_modify = dummy_event_port_preschedule_modify, + .preschedule = dummy_event_port_preschedule, .data = dummy_data, }; @@ -153,8 +166,12 @@ event_dev_fp_ops_set(struct rte_event_fp_ops *fp_op, fp_op->dma_enqueue = dev->dma_enqueue; fp_op->profile_switch = dev->profile_switch; fp_op->preschedule_modify = dev->preschedule_modify; + fp_op->preschedule = dev->preschedule; fp_op->data = dev->data->ports; if (fp_op->preschedule_modify == NULL) fp_op->preschedule_modify = dummy_event_port_preschedule_modify_hint; + + if (fp_op->preschedule == NULL) + fp_op->preschedule = dummy_event_port_preschedule_hint; } diff --git a/lib/eventdev/eventdev_trace_points.c b/lib/eventdev/eventdev_trace_points.c index e41674123c..e7af1591f7 100644 --- a/lib/eventdev/eventdev_trace_points.c +++ b/lib/eventdev/eventdev_trace_points.c @@ -52,6 +52,9 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_profile_switch, RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_preschedule_modify, lib.eventdev.port.preschedule.modify) +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_port_preschedule, + lib.eventdev.port.preschedule) + /* Eventdev Rx adapter trace points */ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_eth_rx_adapter_create, lib.eventdev.rx.adapter.create) diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h index 0add0093ac..8df6a8bee1 100644 --- a/lib/eventdev/rte_eventdev.h +++ b/lib/eventdev/rte_eventdev.h @@ -480,6 +480,15 @@ struct rte_event; * @see rte_event_port_preschedule_modify() */ +#define RTE_EVENT_DEV_CAP_SW_PRESCHEDULE (1ULL << 19) +/**< Event device supports software prescheduling. 
+ * + * When this flag is set, the application can issue a preschedule request on + * an event port. + * + * @see rte_event_port_preschedule() + */ + /* Event device priority levels */ #define RTE_EVENT_DEV_PRIORITY_HIGHEST 0 /**< Highest priority level for events and queues. @@ -2977,6 +2986,46 @@ rte_event_port_preschedule_modify(uint8_t dev_id, uint8_t port_id, return fp_ops->preschedule_modify(port, type); } +/** + * Provide a hint to the event device to pre-schedule events to an event port. + * + * Hint the event device to pre-schedule events to the event port. + * The call does not guarantee that the events will be pre-scheduled. + * The call does not release the flow context currently held by the event port. + * The event device should support the RTE_EVENT_DEV_CAP_SW_PRESCHEDULE capability. + * + * When pre-scheduling is enabled at an event device or event port level, the + * hint is ignored. + * + * Subsequent calls to rte_event_dequeue_burst() will dequeue the pre-scheduled + * events, but the pre-schedule operation is not issued again. + * + * @param dev_id + * The identifier of the device. + * @param port_id + * The identifier of the event port. + * @param type + * The pre-schedule type to use on the event port. + */ +static inline void +rte_event_port_preschedule(uint8_t dev_id, uint8_t port_id, rte_event_dev_preschedule_type_t type) +{ + const struct rte_event_fp_ops *fp_ops; + void *port; + + fp_ops = &rte_event_fp_ops[dev_id]; + port = fp_ops->data[port_id]; + +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG + if (dev_id >= RTE_EVENT_MAX_DEVS || port_id >= RTE_EVENT_MAX_PORTS_PER_DEV) + return; + if (port == NULL) + return; +#endif + rte_eventdev_trace_port_preschedule(dev_id, port_id, type); + + fp_ops->preschedule(port, type); +} #ifdef __cplusplus } #endif diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h index 2275888a6b..21988abb4f 100644 --- a/lib/eventdev/rte_eventdev_core.h +++ b/lib/eventdev/rte_eventdev_core.h @@ -53,6 +53,9 @@ typedef int (*event_preschedule_modify_t)(void *port, rte_event_dev_preschedule_type_t preschedule_type); /**< @internal Modify pre-schedule type on the event port. */ +typedef void (*event_preschedule_t)(void *port, rte_event_dev_preschedule_type_t preschedule_type); +/**< @internal Issue pre-schedule on an event port. */ + struct __rte_cache_aligned rte_event_fp_ops { void **data; /**< points to array of internal port data pointers */ @@ -82,6 +85,8 @@ struct __rte_cache_aligned rte_event_fp_ops { /**< PMD Event switch profile function. */ event_preschedule_modify_t preschedule_modify; /**< PMD Event port pre-schedule switch. */ + event_preschedule_t preschedule; + /**< PMD Event port pre-schedule. 
*/ uintptr_t reserved[4]; }; diff --git a/lib/eventdev/rte_eventdev_trace_fp.h b/lib/eventdev/rte_eventdev_trace_fp.h index 78baed94de..8290f8a248 100644 --- a/lib/eventdev/rte_eventdev_trace_fp.h +++ b/lib/eventdev/rte_eventdev_trace_fp.h @@ -63,6 +63,14 @@ RTE_TRACE_POINT_FP( rte_trace_point_emit_int(type); ) +RTE_TRACE_POINT_FP( + rte_eventdev_trace_port_preschedule, + RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, int type), + rte_trace_point_emit_u8(dev_id); + rte_trace_point_emit_u8(port_id); + rte_trace_point_emit_int(type); +) + RTE_TRACE_POINT_FP( rte_eventdev_trace_eth_tx_adapter_enqueue, RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table, diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map index b6d63ba576..42a5867aba 100644 --- a/lib/eventdev/version.map +++ b/lib/eventdev/version.map @@ -151,6 +151,8 @@ EXPERIMENTAL { # added in 24.11 rte_event_port_preschedule_modify; __rte_eventdev_trace_port_preschedule_modify; + rte_event_port_preschedule; + __rte_eventdev_trace_port_preschedule; }; INTERNAL { From patchwork Tue Oct 1 13:18:59 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 144860 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D727F45A7A; Tue, 1 Oct 2024 15:19:42 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B018640663; Tue, 1 Oct 2024 15:19:31 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 53F8440656 for ; Tue, 1 Oct 2024 15:19:30 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 49192Ld0000750; Tue, 1 Oct 2024 06:19:29 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h= cc:content-transfer-encoding:content-type:date:from:in-reply-to :message-id:mime-version:references:subject:to; s=pfpt0220; bh=9 T1vSqW2j9uI2jCJPc6+pFgnMkmoTSk7BUgVI/2jaWk=; b=M2LEXk83WCTge8c8E cshLJ2dxym/8kJO6edonadlmZVlC8FRYqGjxisfPuCnITQKB7SDMPrrNnxfaDJfi g6xO3c3ysWuG4K8vqFA+Y1nCAA15xMv/Wex2KyJWDP1/ZUgmY7HiX37BYFNBjb1E tv8TCNqfdZkWtp335I3tozvaHWdwN9dSOMPFJP0fvD3IUXK+xFlbLcx2dFFp7Ho9 tPo2oaeP2FRSkDkrkhNJGHmAqk7xs2sq+xGGHpHRsvlyGC82Pt6MC+1g5gQQIO0i lii66U9Y5v4vKxm7k0Jt/lppub+VM3+fE95Xc1BfUAYOsms5QUxwAK/RQHfDj1xV uBX2Q== Received: from dc5-exch05.marvell.com ([199.233.59.128]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 41yt6gdrt0-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 01 Oct 2024 06:19:29 -0700 (PDT) Received: from DC5-EXCH05.marvell.com (10.69.176.209) by DC5-EXCH05.marvell.com (10.69.176.209) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.4; Tue, 1 Oct 2024 06:19:27 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH05.marvell.com (10.69.176.209) with Microsoft SMTP Server id 15.2.1544.4 via Frontend Transport; Tue, 1 Oct 2024 06:19:27 -0700 Received: from MININT-80QBFE8.corp.innovium.com (MININT-80QBFE8.marvell.com [10.28.164.106]) by maili.marvell.com (Postfix) with ESMTP id CAB5E3F705D; Tue, 1 Oct 2024 06:19:23 -0700 (PDT) From: To: , , , , , , , , , Pavan Nikhilesh CC: 
Subject: [PATCH v4 4/6] event/cnkx: add pre-schedule support Date: Tue, 1 Oct 2024 18:48:59 +0530 Message-ID: <20241001131901.7920-5-pbhagavatula@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20241001131901.7920-1-pbhagavatula@marvell.com> References: <20241001061411.2537-1-pbhagavatula@marvell.com> <20241001131901.7920-1-pbhagavatula@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: xrV9kL8hMjaDKydwlf_xnG-YSxcujxLP X-Proofpoint-ORIG-GUID: xrV9kL8hMjaDKydwlf_xnG-YSxcujxLP X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.60.29 definitions=2024-09-06_09,2024-09-06_01,2024-09-02_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Pavan Nikhilesh Add device level and port level pre-schedule support for cnxk eventdev. Signed-off-by: Pavan Nikhilesh --- doc/guides/eventdevs/cnxk.rst | 10 ---------- doc/guides/eventdevs/features/cnxk.ini | 1 + drivers/event/cnxk/cn10k_eventdev.c | 19 +++++++++++++++++-- drivers/event/cnxk/cn10k_worker.c | 21 +++++++++++++++++++++ drivers/event/cnxk/cn10k_worker.h | 1 + drivers/event/cnxk/cnxk_eventdev.c | 2 -- drivers/event/cnxk/cnxk_eventdev.h | 1 - 7 files changed, 40 insertions(+), 15 deletions(-) diff --git a/doc/guides/eventdevs/cnxk.rst b/doc/guides/eventdevs/cnxk.rst index d038930594..e21846f4e0 100644 --- a/doc/guides/eventdevs/cnxk.rst +++ b/doc/guides/eventdevs/cnxk.rst @@ -78,16 +78,6 @@ Runtime Config Options -a 0002:0e:00.0,single_ws=1 -- ``CN10K Getwork mode`` - - CN10K supports three getwork prefetch modes no prefetch[0], prefetch - immediately[1] and delayed prefetch on forward progress event[2]. - The default getwork mode is 2. - - For example:: - - -a 0002:0e:00.0,gw_mode=1 - - ``Event Group QoS support`` SSO GGRPs i.e. 
queue uses DRAM & SRAM buffers to hold in-flight diff --git a/doc/guides/eventdevs/features/cnxk.ini b/doc/guides/eventdevs/features/cnxk.ini index d1516372fa..5ba528f086 100644 --- a/doc/guides/eventdevs/features/cnxk.ini +++ b/doc/guides/eventdevs/features/cnxk.ini @@ -17,6 +17,7 @@ carry_flow_id = Y maintenance_free = Y runtime_queue_attr = Y profile_links = Y +preschedule = Y [Eth Rx adapter Features] internal_port = Y diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c index 2d7b169974..5c50e72152 100644 --- a/drivers/event/cnxk/cn10k_eventdev.c +++ b/drivers/event/cnxk/cn10k_eventdev.c @@ -527,6 +527,7 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev) event_dev->dma_enqueue = cn10k_dma_adapter_enqueue; event_dev->profile_switch = cn10k_sso_hws_profile_switch; + event_dev->preschedule_modify = cn10k_sso_hws_preschedule_modify; #else RTE_SET_USED(event_dev); #endif @@ -541,6 +542,9 @@ cn10k_sso_info_get(struct rte_eventdev *event_dev, dev_info->driver_name = RTE_STR(EVENTDEV_NAME_CN10K_PMD); cnxk_sso_info_get(dev, dev_info); dev_info->max_event_port_enqueue_depth = UINT32_MAX; + dev_info->event_dev_cap |= RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE | + RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE_ADAPTIVE | + RTE_EVENT_DEV_CAP_EVENT_PER_PORT_PRESCHEDULE; } static int @@ -566,6 +570,19 @@ cn10k_sso_dev_configure(const struct rte_eventdev *event_dev) if (rc < 0) goto cnxk_rsrc_fini; + switch (event_dev->data->dev_conf.preschedule_type) { + default: + case RTE_EVENT_DEV_PRESCHEDULE_NONE: + dev->gw_mode = CN10K_GW_MODE_NONE; + break; + case RTE_EVENT_DEV_PRESCHEDULE: + dev->gw_mode = CN10K_GW_MODE_PREF; + break; + case RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE: + dev->gw_mode = CN10K_GW_MODE_PREF_WFE; + break; + } + rc = cnxk_setup_event_ports(event_dev, cn10k_sso_init_hws_mem, cn10k_sso_hws_setup); if (rc < 0) @@ -1199,7 +1216,6 @@ cn10k_sso_init(struct rte_eventdev *event_dev) return 0; } - dev->gw_mode = CN10K_GW_MODE_PREF_WFE; rc = cnxk_sso_init(event_dev); if (rc < 0) return rc; @@ -1256,7 +1272,6 @@ RTE_PMD_REGISTER_KMOD_DEP(event_cn10k, "vfio-pci"); RTE_PMD_REGISTER_PARAM_STRING(event_cn10k, CNXK_SSO_XAE_CNT "=" CNXK_SSO_GGRP_QOS "=" CNXK_SSO_FORCE_BP "=1" - CN10K_SSO_GW_MODE "=" CN10K_SSO_STASH "=" CNXK_TIM_DISABLE_NPA "=1" CNXK_TIM_CHNK_SLOTS "=" diff --git a/drivers/event/cnxk/cn10k_worker.c b/drivers/event/cnxk/cn10k_worker.c index d59769717e..ffe06843fa 100644 --- a/drivers/event/cnxk/cn10k_worker.c +++ b/drivers/event/cnxk/cn10k_worker.c @@ -442,3 +442,24 @@ cn10k_sso_hws_profile_switch(void *port, uint8_t profile) return 0; } + +int __rte_hot +cn10k_sso_hws_preschedule_modify(void *port, rte_event_dev_preschedule_type_t type) +{ + struct cn10k_sso_hws *ws = port; + + ws->gw_wdata &= (BIT(19) | BIT(20)); + switch (type) { + default: + case RTE_EVENT_DEV_PRESCHEDULE_NONE: + break; + case RTE_EVENT_DEV_PRESCHEDULE: + ws->gw_wdata |= BIT(19); + break; + case RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE: + ws->gw_wdata |= BIT(19) | BIT(20); + break; + } + + return 0; +} diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h index c5026409d7..b5898395e6 100644 --- a/drivers/event/cnxk/cn10k_worker.h +++ b/drivers/event/cnxk/cn10k_worker.h @@ -377,6 +377,7 @@ uint16_t __rte_hot cn10k_sso_hws_enq_fwd_burst(void *port, const struct rte_event ev[], uint16_t nb_events); int __rte_hot cn10k_sso_hws_profile_switch(void *port, uint8_t profile); +int __rte_hot cn10k_sso_hws_preschedule_modify(void *port, rte_event_dev_preschedule_type_t type); #define 
R(name, flags) \ uint16_t __rte_hot cn10k_sso_hws_deq_##name( \ diff --git a/drivers/event/cnxk/cnxk_eventdev.c b/drivers/event/cnxk/cnxk_eventdev.c index 4b2d6bffa6..c1df481827 100644 --- a/drivers/event/cnxk/cnxk_eventdev.c +++ b/drivers/event/cnxk/cnxk_eventdev.c @@ -624,8 +624,6 @@ cnxk_sso_parse_devargs(struct cnxk_sso_evdev *dev, struct rte_devargs *devargs) &dev->force_ena_bp); rte_kvargs_process(kvlist, CN9K_SSO_SINGLE_WS, &parse_kvargs_flag, &single_ws); - rte_kvargs_process(kvlist, CN10K_SSO_GW_MODE, &parse_kvargs_value, - &dev->gw_mode); rte_kvargs_process(kvlist, CN10K_SSO_STASH, &parse_sso_kvargs_stash_dict, dev); dev->dual_ws = !single_ws; diff --git a/drivers/event/cnxk/cnxk_eventdev.h b/drivers/event/cnxk/cnxk_eventdev.h index ece49394e7..f147ef3c78 100644 --- a/drivers/event/cnxk/cnxk_eventdev.h +++ b/drivers/event/cnxk/cnxk_eventdev.h @@ -30,7 +30,6 @@ #define CNXK_SSO_GGRP_QOS "qos" #define CNXK_SSO_FORCE_BP "force_rx_bp" #define CN9K_SSO_SINGLE_WS "single_ws" -#define CN10K_SSO_GW_MODE "gw_mode" #define CN10K_SSO_STASH "stash" #define CNXK_SSO_MAX_PROFILES 2 From patchwork Tue Oct 1 13:19:00 2024 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 144861 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 7288145A7A; Tue, 1 Oct 2024 15:19:49 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 1AD4440695; Tue, 1 Oct 2024 15:19:38 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id EF4D8402BD for ; Tue, 1 Oct 2024 15:19:35 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.18.1.2/8.18.1.2) with ESMTP id 491B5Zbb023408; Tue, 1 Oct 2024 06:19:35 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h= cc:content-transfer-encoding:content-type:date:from:in-reply-to :message-id:mime-version:references:subject:to; s=pfpt0220; bh=Q KD3QANp6NcgivKs8a22B23c7X6tw1HVo/eavaUiWCA=; b=g4rWQ/e6P7Z1W8weJ W+OYiipzU2LVtec5u8M/feWUn70llHGq3tx4aBg3Uf+NDiZsta190sZikUd+gUOk fSBw2F+3hvS1jl2Vth/ItoW6O+AWMl4UzKKk7CgkFOPLx2YxAyeWEqGpgr3TFrFC QvZhFjuT8ENTNrqINJfViSa9USNwE2QHMRQzkbzDYGP0C50BAiDNcKtxM2Osy5RH 9RxIy6gW9cYOSlHvoO6UV4cN9w2Fe+CMVj588ciAFtEI4/dH/OgJaVhxcg1F2Zbz e6uT2wN0f7ZCPABEhAETb8mNOkL5dUPdD2hQ3mfn1CZXMRMRGn4nb9Bi9dLt46b+ +61cQ== Received: from dc5-exch05.marvell.com ([199.233.59.128]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 41yt6gdrun-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT); Tue, 01 Oct 2024 06:19:34 -0700 (PDT) Received: from DC5-EXCH05.marvell.com (10.69.176.209) by DC5-EXCH05.marvell.com (10.69.176.209) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.2.1544.4; Tue, 1 Oct 2024 06:19:32 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH05.marvell.com (10.69.176.209) with Microsoft SMTP Server id 15.2.1544.4 via Frontend Transport; Tue, 1 Oct 2024 06:19:32 -0700 Received: from MININT-80QBFE8.corp.innovium.com (MININT-80QBFE8.marvell.com [10.28.164.106]) by maili.marvell.com (Postfix) with ESMTP id 98B043F705D; Tue, 1 Oct 2024 06:19:28 -0700 (PDT) From: To: , , , , , , , 
, CC: , Pavan Nikhilesh Subject: [PATCH v4 5/6] app/test-eventdev: add pre-scheduling support Date: Tue, 1 Oct 2024 18:49:00 +0530 Message-ID: <20241001131901.7920-6-pbhagavatula@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20241001131901.7920-1-pbhagavatula@marvell.com> References: <20241001061411.2537-1-pbhagavatula@marvell.com> <20241001131901.7920-1-pbhagavatula@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: BDQhSBEP98DVJVnXzZhtc5Lloe29trcN X-Proofpoint-ORIG-GUID: BDQhSBEP98DVJVnXzZhtc5Lloe29trcN X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.293,Aquarius:18.0.1039,Hydra:6.0.680,FMLib:17.12.60.29 definitions=2024-09-06_09,2024-09-06_01,2024-09-02_01 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Pavan Nikhilesh Add support to configure pre-scheduling for eventdev test application. Option `--preschedule` 0 - Disable pre-scheduling. 1 - Enable pre-scheduling. 2 - Enable pre-schedule with adaptive mode (Default). Signed-off-by: Pavan Nikhilesh --- app/test-eventdev/evt_common.h | 45 ++++++++++++++++++++++++------- app/test-eventdev/evt_options.c | 17 ++++++++++++ app/test-eventdev/evt_options.h | 1 + doc/guides/tools/testeventdev.rst | 6 +++++ 4 files changed, 59 insertions(+), 10 deletions(-) diff --git a/app/test-eventdev/evt_common.h b/app/test-eventdev/evt_common.h index dbe1e5c0c4..176c077e51 100644 --- a/app/test-eventdev/evt_common.h +++ b/app/test-eventdev/evt_common.h @@ -64,6 +64,8 @@ struct evt_options { uint8_t nb_timer_adptrs; uint8_t timdev_use_burst; uint8_t per_port_pool; + uint8_t preschedule; + uint8_t preschedule_opted; uint8_t sched_type_list[EVT_MAX_STAGES]; uint16_t mbuf_sz; uint16_t wkr_deq_dep; @@ -184,6 +186,30 @@ evt_configure_eventdev(struct evt_options *opt, uint8_t nb_queues, return ret; } + if (opt->preschedule_opted && opt->preschedule) { + switch (opt->preschedule) { + case RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE: + if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE_ADAPTIVE)) { + evt_err("Preschedule type %d not supported", opt->preschedule); + return -EINVAL; + } + break; + case RTE_EVENT_DEV_PRESCHEDULE: + if (!(info.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE)) { + evt_err("Preschedule type %d not supported", opt->preschedule); + return -EINVAL; + } + break; + default: + break; + } + } + + if (!opt->preschedule_opted) { + if (info.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE_ADAPTIVE) + opt->preschedule = RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE; + } + if (opt->deq_tmo_nsec) { if (opt->deq_tmo_nsec < info.min_dequeue_timeout_ns) { opt->deq_tmo_nsec = info.min_dequeue_timeout_ns; @@ -198,16 +224,15 @@ evt_configure_eventdev(struct evt_options *opt, uint8_t nb_queues, } const struct rte_event_dev_config config = { - .dequeue_timeout_ns = opt->deq_tmo_nsec, - .nb_event_queues = nb_queues, - .nb_event_ports = nb_ports, - .nb_single_link_event_port_queues = 0, - .nb_events_limit = info.max_num_events, - .nb_event_queue_flows = opt->nb_flows, - .nb_event_port_dequeue_depth = - info.max_event_port_dequeue_depth, - .nb_event_port_enqueue_depth = - info.max_event_port_enqueue_depth, + .dequeue_timeout_ns = opt->deq_tmo_nsec, + .nb_event_queues = nb_queues, + .nb_event_ports = nb_ports, + .nb_single_link_event_port_queues = 0, + .nb_events_limit = info.max_num_events, + .nb_event_queue_flows = opt->nb_flows, + .nb_event_port_dequeue_depth = 
info.max_event_port_dequeue_depth, + .nb_event_port_enqueue_depth = info.max_event_port_enqueue_depth, + .preschedule_type = opt->preschedule, }; return rte_event_dev_configure(opt->dev_id, &config); diff --git a/app/test-eventdev/evt_options.c b/app/test-eventdev/evt_options.c index fb5a0a255f..323d1e724d 100644 --- a/app/test-eventdev/evt_options.c +++ b/app/test-eventdev/evt_options.c @@ -130,6 +130,17 @@ evt_parse_tx_pkt_sz(struct evt_options *opt, const char *arg __rte_unused) return ret; } +static int +evt_parse_preschedule(struct evt_options *opt, const char *arg __rte_unused) +{ + int ret; + + ret = parser_read_uint8(&(opt->preschedule), arg); + opt->preschedule_opted = 1; + + return ret; +} + static int evt_parse_timer_prod_type(struct evt_options *opt, const char *arg __rte_unused) { @@ -510,6 +521,10 @@ usage(char *program) " across all the ethernet devices before\n" " event workers start.\n" "\t--tx_pkt_sz : Packet size to use with Tx first." + "\t--preschedule : Pre-schedule type to use.\n" + " 0 - disable pre-schedule\n" + " 1 - pre-schedule\n" + " 2 - pre-schedule adaptive (Default)\n" ); printf("available tests:\n"); evt_test_dump_names(); @@ -598,6 +613,7 @@ static struct option lgopts[] = { { EVT_HELP, 0, 0, 0 }, { EVT_TX_FIRST, 1, 0, 0 }, { EVT_TX_PKT_SZ, 1, 0, 0 }, + { EVT_PRESCHEDULE, 1, 0, 0 }, { NULL, 0, 0, 0 } }; @@ -647,6 +663,7 @@ evt_opts_parse_long(int opt_idx, struct evt_options *opt) { EVT_PER_PORT_POOL, evt_parse_per_port_pool}, { EVT_TX_FIRST, evt_parse_tx_first}, { EVT_TX_PKT_SZ, evt_parse_tx_pkt_sz}, + { EVT_PRESCHEDULE, evt_parse_preschedule}, }; for (i = 0; i < RTE_DIM(parsermap); i++) { diff --git a/app/test-eventdev/evt_options.h b/app/test-eventdev/evt_options.h index 646060c7c6..18a893b704 100644 --- a/app/test-eventdev/evt_options.h +++ b/app/test-eventdev/evt_options.h @@ -59,6 +59,7 @@ #define EVT_PER_PORT_POOL ("per_port_pool") #define EVT_TX_FIRST ("tx_first") #define EVT_TX_PKT_SZ ("tx_pkt_sz") +#define EVT_PRESCHEDULE ("preschedule") #define EVT_HELP ("help") void evt_options_default(struct evt_options *opt); diff --git a/doc/guides/tools/testeventdev.rst b/doc/guides/tools/testeventdev.rst index 00eb702571..38e2ec0c36 100644 --- a/doc/guides/tools/testeventdev.rst +++ b/doc/guides/tools/testeventdev.rst @@ -236,6 +236,12 @@ The following are the application command-line options: Packet size to use for `--tx_first`. Only applicable for `pipeline_atq` and `pipeline_queue` tests. +* ``--preschedule`` + + Enable pre-scheduling of events. + 0 - Disable pre-scheduling. + 1 - Enable pre-scheduling. + 2 - Enable pre-schedule with adaptive mode (Default). 
Eventdev Tests -------------- From patchwork Tue Oct 1 13:19:01 2024 X-Patchwork-Submitter: Pavan Nikhilesh Bhagavatula X-Patchwork-Id: 144862 X-Patchwork-Delegate: jerinj@marvell.com From: To: , , , , , , , , , Radu Nicolau , "Akhil Goyal" , Sunil Kumar Kori , "Pavan Nikhilesh" CC: Subject: [PATCH v4 6/6] examples: use eventdev pre-scheduling Date: Tue, 1 Oct 2024 18:49:01 +0530 Message-ID: <20241001131901.7920-7-pbhagavatula@marvell.com> In-Reply-To: <20241001131901.7920-1-pbhagavatula@marvell.com> References: <20241001061411.2537-1-pbhagavatula@marvell.com> <20241001131901.7920-1-pbhagavatula@marvell.com> From: Pavan Nikhilesh Enable event pre-scheduling if supported by the event device.
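Each hunk below applies the same capability check before rte_event_dev_configure(): prefer adaptive pre-scheduling when the device reports it, otherwise fall back to plain pre-scheduling. A minimal standalone sketch of that pattern follows; the helper name and the assumption that dev_id and dev_conf are already set up are illustrative and not part of the series.

#include <rte_eventdev.h>

/* Sketch only: pick the strongest pre-schedule type supported by dev_id
 * and record it in dev_conf before rte_event_dev_configure() is called.
 */
static void
pick_preschedule_type(uint8_t dev_id, struct rte_event_dev_config *dev_conf)
{
	struct rte_event_dev_info dev_info;

	rte_event_dev_info_get(dev_id, &dev_info);

	/* Plain pre-schedule: a pre-schedule is issued on every dequeue. */
	if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE)
		dev_conf->preschedule_type = RTE_EVENT_DEV_PRESCHEDULE;

	/* Adaptive mode is preferred whenever the device supports it. */
	if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE_ADAPTIVE)
		dev_conf->preschedule_type = RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE;
}
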
Signed-off-by: Pavan Nikhilesh --- examples/eventdev_pipeline/pipeline_worker_generic.c | 6 ++++++ examples/eventdev_pipeline/pipeline_worker_tx.c | 6 ++++++ examples/ipsec-secgw/event_helper.c | 6 ++++++ examples/l2fwd-event/l2fwd_event_generic.c | 6 ++++++ examples/l2fwd-event/l2fwd_event_internal_port.c | 6 ++++++ examples/l3fwd/l3fwd_event_generic.c | 6 ++++++ examples/l3fwd/l3fwd_event_internal_port.c | 6 ++++++ 7 files changed, 42 insertions(+) diff --git a/examples/eventdev_pipeline/pipeline_worker_generic.c b/examples/eventdev_pipeline/pipeline_worker_generic.c index 783f68c91e..8052e2df86 100644 --- a/examples/eventdev_pipeline/pipeline_worker_generic.c +++ b/examples/eventdev_pipeline/pipeline_worker_generic.c @@ -188,6 +188,12 @@ setup_eventdev_generic(struct worker_data *worker_data) config.nb_event_port_enqueue_depth = dev_info.max_event_port_enqueue_depth; + if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE) + config.preschedule_type = RTE_EVENT_DEV_PRESCHEDULE; + + if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE_ADAPTIVE) + config.preschedule_type = RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE; + ret = rte_event_dev_configure(dev_id, &config); if (ret < 0) { printf("%d: Error configuring device\n", __LINE__); diff --git a/examples/eventdev_pipeline/pipeline_worker_tx.c b/examples/eventdev_pipeline/pipeline_worker_tx.c index 98a52f3892..077b902bdb 100644 --- a/examples/eventdev_pipeline/pipeline_worker_tx.c +++ b/examples/eventdev_pipeline/pipeline_worker_tx.c @@ -505,6 +505,12 @@ setup_eventdev_worker_tx_enq(struct worker_data *worker_data) config.nb_event_port_enqueue_depth = dev_info.max_event_port_enqueue_depth; + if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE) + config.preschedule_type = RTE_EVENT_DEV_PRESCHEDULE; + + if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE_ADAPTIVE) + config.preschedule_type = RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE; + ret = rte_event_dev_configure(dev_id, &config); if (ret < 0) { printf("%d: Error configuring device\n", __LINE__); diff --git a/examples/ipsec-secgw/event_helper.c b/examples/ipsec-secgw/event_helper.c index 89fb7e62a5..61133607d6 100644 --- a/examples/ipsec-secgw/event_helper.c +++ b/examples/ipsec-secgw/event_helper.c @@ -669,6 +669,12 @@ eh_initialize_eventdev(struct eventmode_conf *em_conf) eventdev_conf.nb_event_port_enqueue_depth = evdev_default_conf.max_event_port_enqueue_depth; + if (evdev_default_conf.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE) + eventdev_conf.preschedule_type = RTE_EVENT_DEV_PRESCHEDULE; + + if (evdev_default_conf.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE_ADAPTIVE) + eventdev_conf.preschedule_type = RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE; + /* Configure event device */ ret = rte_event_dev_configure(eventdev_id, &eventdev_conf); if (ret < 0) { diff --git a/examples/l2fwd-event/l2fwd_event_generic.c b/examples/l2fwd-event/l2fwd_event_generic.c index 1977e23261..d5a3cd9984 100644 --- a/examples/l2fwd-event/l2fwd_event_generic.c +++ b/examples/l2fwd-event/l2fwd_event_generic.c @@ -86,6 +86,12 @@ l2fwd_event_device_setup_generic(struct l2fwd_resources *rsrc) evt_rsrc->has_burst = !!(dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_BURST_MODE); + if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE) + event_d_conf.preschedule_type = RTE_EVENT_DEV_PRESCHEDULE; + + if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE_ADAPTIVE) + event_d_conf.preschedule_type = RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE; + ret = 
rte_event_dev_configure(event_d_id, &event_d_conf); if (ret < 0) rte_panic("Error in configuring event device\n"); diff --git a/examples/l2fwd-event/l2fwd_event_internal_port.c b/examples/l2fwd-event/l2fwd_event_internal_port.c index 717a7bceb8..0b619afe91 100644 --- a/examples/l2fwd-event/l2fwd_event_internal_port.c +++ b/examples/l2fwd-event/l2fwd_event_internal_port.c @@ -82,6 +82,12 @@ l2fwd_event_device_setup_internal_port(struct l2fwd_resources *rsrc) evt_rsrc->has_burst = !!(dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_BURST_MODE); + if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE) + event_d_conf.preschedule_type = RTE_EVENT_DEV_PRESCHEDULE; + + if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE_ADAPTIVE) + event_d_conf.preschedule_type = RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE; + ret = rte_event_dev_configure(event_d_id, &event_d_conf); if (ret < 0) rte_panic("Error in configuring event device\n"); diff --git a/examples/l3fwd/l3fwd_event_generic.c b/examples/l3fwd/l3fwd_event_generic.c index ddb6e5c38d..333c87b01c 100644 --- a/examples/l3fwd/l3fwd_event_generic.c +++ b/examples/l3fwd/l3fwd_event_generic.c @@ -74,6 +74,12 @@ l3fwd_event_device_setup_generic(void) evt_rsrc->has_burst = !!(dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_BURST_MODE); + if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE) + event_d_conf.preschedule_type = RTE_EVENT_DEV_PRESCHEDULE; + + if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE_ADAPTIVE) + event_d_conf.preschedule_type = RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE; + ret = rte_event_dev_configure(event_d_id, &event_d_conf); if (ret < 0) rte_panic("Error in configuring event device\n"); diff --git a/examples/l3fwd/l3fwd_event_internal_port.c b/examples/l3fwd/l3fwd_event_internal_port.c index cb49a8b9fa..32354fc09f 100644 --- a/examples/l3fwd/l3fwd_event_internal_port.c +++ b/examples/l3fwd/l3fwd_event_internal_port.c @@ -73,6 +73,12 @@ l3fwd_event_device_setup_internal_port(void) evt_rsrc->has_burst = !!(dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_BURST_MODE); + if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE) + event_d_conf.preschedule_type = RTE_EVENT_DEV_PRESCHEDULE; + + if (dev_info.event_dev_cap & RTE_EVENT_DEV_CAP_EVENT_PRESCHEDULE_ADAPTIVE) + event_d_conf.preschedule_type = RTE_EVENT_DEV_PRESCHEDULE_ADAPTIVE; + ret = rte_event_dev_configure(event_d_id, &event_d_conf); if (ret < 0) rte_panic("Error in configuring event device\n");
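
With the series applied, the `--preschedule` option added in patch 5/6 and the example changes above exercise the same configure-time knob. A hypothetical test-eventdev invocation requesting adaptive pre-scheduling could look like the following; the core list, PCI address placeholder and test selection are illustrative, not taken from the series:

dpdk-test-eventdev -l 0-3 -a <event device BDF> -- --test=perf_queue --plcores=1 --wlcores=2,3 --stlist=a --nb_pkts=1000000 --preschedule=2

If the device reports neither pre-schedule capability, the capability check added to evt_configure_eventdev() rejects --preschedule values 1 and 2 and fails with -EINVAL; when the option is not given, the application defaults to adaptive pre-scheduling on devices that advertise it.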