From patchwork Mon Oct 16 20:57:14 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sivaprasad Tummala
X-Patchwork-Id: 132659
X-Patchwork-Delegate: jerinj@marvell.com
From: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
Subject: [PATCH v1 5/6] power: add eventdev support for power management
Date: Mon, 16 Oct 2023 13:57:14 -0700
Message-ID: <20231016205715.970999-5-sivaprasad.tummala@amd.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20231016205715.970999-1-sivaprasad.tummala@amd.com>
References: <20230419095427.563185-1-sivaprasad.tummala@amd.com>
 <20231016205715.970999-1-sivaprasad.tummala@amd.com>
MIME-Version: 1.0
List-Id: DPDK patches and discussions

Add eventdev support to enable power saving when no events are arriving.
It is based on counting the number of empty polls and, when the number
reaches a certain threshold, entering an architecture-defined optimized
power state that will either wait until a TSC timestamp expires, or until
events arrive.

This API mandates a core-to-single-port mapping (i.e. one core polling
multiple ports of an event device is not supported). This should be fine,
as the general use case will have one CPU core using one port to
enqueue/dequeue events from an eventdev.
This design uses eventdev PMD dequeue callbacks:

1. MWAITX/MONITORX: when a certain threshold of empty polls is reached, the
   core goes into a power-optimized sleep while waiting on the address of
   the next RX descriptor to be written to.
2. Pause instruction: this method uses the pause instruction to avoid busy
   polling.

Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
---
 lib/power/meson.build          |   2 +-
 lib/power/rte_power_pmd_mgmt.c | 226 +++++++++++++++++++++++++++++++++
 lib/power/rte_power_pmd_mgmt.h |  55 ++++++++
 lib/power/version.map          |   4 +
 4 files changed, 286 insertions(+), 1 deletion(-)

diff --git a/lib/power/meson.build b/lib/power/meson.build
index 056d0043d8..86e178bbb4 100644
--- a/lib/power/meson.build
+++ b/lib/power/meson.build
@@ -32,4 +32,4 @@ headers = files(
 if cc.has_argument('-Wno-cast-qual')
     cflags += '-Wno-cast-qual'
 endif
-deps += ['timer', 'ethdev']
+deps += ['timer', 'ethdev', 'eventdev']
diff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c
index 38f8384085..df3ac2d221 100644
--- a/lib/power/rte_power_pmd_mgmt.c
+++ b/lib/power/rte_power_pmd_mgmt.c
@@ -9,8 +9,10 @@
 #include
 #include
 #include
+#include
 #include
+#include

 #include "rte_power_pmd_mgmt.h"
 #include "power_common.h"
@@ -53,6 +55,7 @@ struct queue_list_entry {
 	uint64_t n_empty_polls;
 	uint64_t n_sleeps;
 	const struct rte_eth_rxtx_callback *cb;
+	const struct rte_event_dequeue_callback *evt_cb;
 };

 struct pmd_core_cfg {
@@ -414,6 +417,64 @@ cfg_queues_stopped(struct pmd_core_cfg *queue_cfg)
 	return 1;
 }

+static uint16_t
+evt_clb_umwait(uint8_t dev_id, uint8_t port_id, struct rte_event *ev __rte_unused,
+		uint16_t nb_events, void *arg)
+{
+	struct queue_list_entry *queue_conf = arg;
+
+	/* this callback can't do more than one queue, omit multiqueue logic */
+	if (unlikely(nb_events == 0)) {
+		queue_conf->n_empty_polls++;
+		if (unlikely(queue_conf->n_empty_polls > emptypoll_max)) {
+			struct rte_power_monitor_cond pmc;
+			int ret;
+
+			/* use monitoring condition to sleep */
+			ret = rte_event_port_get_monitor_addr(dev_id, port_id,
+					&pmc);
+			if (ret == 0)
+				rte_power_monitor(&pmc, UINT64_MAX);
+		}
+	} else
+		queue_conf->n_empty_polls = 0;
+
+	return nb_events;
+}
+
+static uint16_t
+evt_clb_pause(uint8_t dev_id __rte_unused, uint8_t port_id __rte_unused,
+		struct rte_event *ev __rte_unused,
+		uint16_t nb_events, void *arg)
+{
+	const unsigned int lcore = rte_lcore_id();
+	struct queue_list_entry *queue_conf = arg;
+	struct pmd_core_cfg *lcore_conf;
+	const bool empty = nb_events == 0;
+	uint32_t pause_duration = rte_power_pmd_mgmt_get_pause_duration();
+
+	lcore_conf = &lcore_cfgs[lcore];
+
+	if (likely(!empty))
+		/* early exit */
+		queue_reset(lcore_conf, queue_conf);
+	else {
+		/* can this queue sleep? */
+		if (!queue_can_sleep(lcore_conf, queue_conf))
+			return nb_events;
+
+		/* can this lcore sleep? */
+		if (!lcore_can_sleep(lcore_conf))
+			return nb_events;
+
+		uint64_t i;
+		for (i = 0; i < global_data.pause_per_us * pause_duration; i++)
+			rte_pause();
+	}
+
+	return nb_events;
+}
+
 static int
 check_scale(unsigned int lcore)
 {
@@ -481,6 +542,171 @@ get_monitor_callback(void)
 		clb_multiwait : clb_umwait;
 }

+static int
+check_evt_monitor(struct pmd_core_cfg *cfg __rte_unused,
+		const union queue *qdata)
+{
+	struct rte_power_monitor_cond dummy;
+
+	/* check if rte_power_monitor is supported */
+	if (!global_data.intrinsics_support.power_monitor) {
+		RTE_LOG(DEBUG, POWER, "Monitoring intrinsics are not supported\n");
+		return -ENOTSUP;
+	}
+
+	/* check if the device supports the necessary PMD API */
+	if (rte_event_port_get_monitor_addr((uint8_t)qdata->portid, (uint8_t)qdata->qid,
+			&dummy) == -ENOTSUP) {
+		RTE_LOG(DEBUG, POWER, "event port does not support rte_event_get_monitor_addr\n");
+		return -ENOTSUP;
+	}
+
+	/* we're done */
+	return 0;
+}
+
+int
+rte_power_eventdev_pmgmt_port_enable(unsigned int lcore_id, uint8_t dev_id,
+		uint8_t port_id, enum rte_power_pmd_mgmt_type mode)
+{
+	const union queue qdata = {.portid = dev_id, .qid = port_id};
+	struct pmd_core_cfg *lcore_cfg;
+	struct queue_list_entry *queue_cfg;
+	struct rte_event_dev_info info;
+	rte_dequeue_callback_fn clb;
+	int ret;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+
+	if (lcore_id >= RTE_MAX_LCORE) {
+		ret = -EINVAL;
+		goto end;
+	}
+
+	if (rte_event_dev_info_get(dev_id, &info) < 0) {
+		ret = -EINVAL;
+		goto end;
+	}
+
+	/* check if port id is valid */
+	if (port_id >= info.max_event_ports) {
+		ret = -EINVAL;
+		goto end;
+	}
+
+	lcore_cfg = &lcore_cfgs[lcore_id];
+
+	/* if callback was already enabled, check current callback type */
+	if (lcore_cfg->pwr_mgmt_state != PMD_MGMT_DISABLED &&
+			lcore_cfg->cb_mode != mode) {
+		ret = -EINVAL;
+		goto end;
+	}
+
+	/* we need this in various places */
+	rte_cpu_get_intrinsics_support(&global_data.intrinsics_support);
+
+	switch (mode) {
+	case RTE_POWER_MGMT_TYPE_MONITOR:
+		/* check if we can add a new port */
+		ret = check_evt_monitor(lcore_cfg, &qdata);
+		if (ret < 0)
+			goto end;
+
+		clb = evt_clb_umwait;
+		break;
+	case RTE_POWER_MGMT_TYPE_PAUSE:
+		/* figure out various time-to-tsc conversions */
+		if (global_data.tsc_per_us == 0)
+			calc_tsc();
+
+		clb = evt_clb_pause;
+		break;
+	default:
+		RTE_LOG(DEBUG, POWER, "Invalid power management type\n");
+		ret = -EINVAL;
+		goto end;
+	}
+	/* add this queue to the list */
+	ret = queue_list_add(lcore_cfg, &qdata);
+	if (ret < 0) {
+		RTE_LOG(DEBUG, POWER, "Failed to add queue to list: %s\n",
+				strerror(-ret));
+		goto end;
+	}
+	/* new queue is always added last */
+	queue_cfg = TAILQ_LAST(&lcore_cfg->head, queue_list_head);
+
+	/* when enabling first queue, ensure sleep target is not 0 */
+	if (lcore_cfg->n_queues == 1 && lcore_cfg->sleep_target == 0)
+		lcore_cfg->sleep_target = 1;
+
+	/* initialize data before enabling the callback */
+	if (lcore_cfg->n_queues == 1) {
+		lcore_cfg->cb_mode = mode;
+		lcore_cfg->pwr_mgmt_state = PMD_MGMT_ENABLED;
+	}
+	queue_cfg->evt_cb = rte_event_add_dequeue_callback(dev_id, port_id,
+			clb, queue_cfg);
+
+	ret = 0;
+end:
+	return ret;
+}
+
+int
+rte_power_eventdev_pmgmt_port_disable(unsigned int lcore_id,
+		uint8_t dev_id, uint8_t port_id)
+{
+	const union queue qdata = {.portid = dev_id, .qid = port_id};
+	struct pmd_core_cfg *lcore_cfg;
+	struct queue_list_entry *queue_cfg;
+
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL);
+
+	if (lcore_id >= RTE_MAX_LCORE)
+		return -EINVAL;
+
+	/* no need to check queue id as wrong queue id would not be enabled */
+	lcore_cfg = &lcore_cfgs[lcore_id];
+
+	if (lcore_cfg->pwr_mgmt_state != PMD_MGMT_ENABLED)
+		return -EINVAL;
+
+	/*
+	 * There is no good/easy way to do this without race conditions, so we
+	 * are just going to throw our hands in the air and hope that the user
+	 * has read the documentation and has ensured that ports are stopped at
+	 * the time we enter the API functions.
+	 */
+	queue_cfg = queue_list_take(lcore_cfg, &qdata);
+	if (queue_cfg == NULL)
+		return -ENOENT;
+
+	/* if we've removed all queues from the lists, set state to disabled */
+	if (lcore_cfg->n_queues == 0)
+		lcore_cfg->pwr_mgmt_state = PMD_MGMT_DISABLED;
+
+	switch (lcore_cfg->cb_mode) {
+	case RTE_POWER_MGMT_TYPE_MONITOR: /* fall-through */
+	case RTE_POWER_MGMT_TYPE_SCALE:
+	case RTE_POWER_MGMT_TYPE_PAUSE:
+		rte_event_remove_dequeue_callback(dev_id, port_id,
+				queue_cfg->evt_cb);
+		break;
+	}
+	/*
+	 * the API doc mandates that the user stops all processing on affected
+	 * ports before calling any of these APIs, so we can assume that the
+	 * callbacks can be freed. we're intentionally casting away const-ness.
+	 */
+	rte_free((void *)queue_cfg->evt_cb);
+	free(queue_cfg);
+
+	return 0;
+}
+
+
 int
 rte_power_ethdev_pmgmt_queue_enable(unsigned int lcore_id, uint16_t port_id,
 		uint16_t queue_id, enum rte_power_pmd_mgmt_type mode)
diff --git a/lib/power/rte_power_pmd_mgmt.h b/lib/power/rte_power_pmd_mgmt.h
index 0f1a2eb22e..e1966b9777 100644
--- a/lib/power/rte_power_pmd_mgmt.h
+++ b/lib/power/rte_power_pmd_mgmt.h
@@ -87,6 +87,61 @@ int
 rte_power_ethdev_pmgmt_queue_disable(unsigned int lcore_id,
 		uint16_t port_id, uint16_t queue_id);

+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice.
+ *
+ * Enable power management on a specified Event device port and lcore.
+ *
+ * @note This function is not thread-safe.
+ *
+ * @warning This function must be called when the event device is stopped and
+ *   no enqueue/dequeue is in progress!
+ *
+ * @param lcore_id
+ *   The lcore the event port will be polled from.
+ * @param dev_id
+ *   The identifier of the device.
+ * @param port_id
+ *   Event port identifier of the Event device.
+ * @param mode
+ *   The power management scheme to use for specified event port.
+ * @return
+ *   0 on success
+ *   <0 on error
+ */
+__rte_experimental
+int
+rte_power_eventdev_pmgmt_port_enable(unsigned int lcore_id,
+		uint8_t dev_id, uint8_t port_id,
+		enum rte_power_pmd_mgmt_type mode);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice.
+ *
+ * Disable power management on a specified Event device port and lcore.
+ *
+ * @note This function is not thread-safe.
+ *
+ * @warning This function must be called when the event device is stopped and
+ *   no enqueue/dequeue is in progress!
+ *
+ * @param lcore_id
+ *   The lcore the event port is polled from.
+ * @param dev_id
+ *   The identifier of the device.
+ * @param port_id
+ *   Event port identifier of the Event device.
+ * @return
+ *   0 on success
+ *   <0 on error
+ */
+__rte_experimental
+int
+rte_power_eventdev_pmgmt_port_disable(unsigned int lcore_id,
+		uint8_t dev_id, uint8_t port_id);
+
 /**
  * @warning
  * @b EXPERIMENTAL: this API may change, or be removed, without prior notice.
diff --git a/lib/power/version.map b/lib/power/version.map
index b8b54f768e..4ab762e072 100644
--- a/lib/power/version.map
+++ b/lib/power/version.map
@@ -52,4 +52,8 @@ EXPERIMENTAL {
 	rte_power_uncore_get_num_freqs;
 	rte_power_uncore_get_num_pkgs;
 	rte_power_uncore_init;
+
+	# added in 23.11
+	rte_power_eventdev_pmgmt_port_enable;
+	rte_power_eventdev_pmgmt_port_disable;
 };