From patchwork Thu Jan 19 15:06:52 2023
X-Patchwork-Submitter: Robin Jarry
X-Patchwork-Id: 122356
X-Patchwork-Delegate: david.marchand@redhat.com
From: Robin Jarry
To: dev@dpdk.org
Cc: Tyler Retzlaff, Kevin Laatz, Robin Jarry, Morten Brørup
Subject: [PATCH v6 1/5] eal: add lcore info in telemetry
Date: Thu, 19 Jan 2023 16:06:52 +0100
Message-Id: <20230119150656.418404-2-rjarry@redhat.com>
In-Reply-To: <20230119150656.418404-1-rjarry@redhat.com>
References: <20221123102612.1688865-1-rjarry@redhat.com> <20230119150656.418404-1-rjarry@redhat.com>

Report the same information as rte_lcore_dump() via the telemetry API,
under /eal/lcore/list and /eal/lcore/info,ID.
Example:

--> /eal/lcore/info,3
{
  "/eal/lcore/info": {
    "lcore_id": 3,
    "socket": 0,
    "role": "RTE",
    "cpuset": [
      3
    ]
  }
}

Signed-off-by: Robin Jarry
Acked-by: Morten Brørup
Reviewed-by: Kevin Laatz
---
Notes:
    v5 -> v6: No change

 lib/eal/common/eal_common_lcore.c | 96 +++++++++++++++++++++++++++++++
 1 file changed, 96 insertions(+)

diff --git a/lib/eal/common/eal_common_lcore.c b/lib/eal/common/eal_common_lcore.c
index 06c594b0224f..16548977dce8 100644
--- a/lib/eal/common/eal_common_lcore.c
+++ b/lib/eal/common/eal_common_lcore.c
@@ -10,6 +10,9 @@
 #include
 #include
 #include
+#ifndef RTE_EXEC_ENV_WINDOWS
+#include
+#endif

 #include "eal_private.h"
 #include "eal_thread.h"
@@ -456,3 +459,96 @@ rte_lcore_dump(FILE *f)
 {
 	rte_lcore_iterate(lcore_dump_cb, f);
 }
+
+#ifndef RTE_EXEC_ENV_WINDOWS
+static int
+lcore_telemetry_id_cb(unsigned int lcore_id, void *arg)
+{
+	struct rte_tel_data *d = arg;
+	return rte_tel_data_add_array_int(d, lcore_id);
+}
+
+static int
+handle_lcore_list(const char *cmd __rte_unused,
+	const char *params __rte_unused,
+	struct rte_tel_data *d)
+{
+	int ret = rte_tel_data_start_array(d, RTE_TEL_INT_VAL);
+	if (ret)
+		return ret;
+	return rte_lcore_iterate(lcore_telemetry_id_cb, d);
+}
+
+struct lcore_telemetry_info {
+	unsigned int lcore_id;
+	struct rte_tel_data *d;
+};
+
+static int
+lcore_telemetry_info_cb(unsigned int lcore_id, void *arg)
+{
+	struct lcore_telemetry_info *info = arg;
+	struct rte_config *cfg = rte_eal_get_configuration();
+	struct rte_tel_data *cpuset;
+	const char *role;
+	unsigned int cpu;
+
+	if (info->lcore_id != lcore_id)
+		return 0;
+
+	switch (cfg->lcore_role[lcore_id]) {
+	case ROLE_RTE:
+		role = "RTE";
+		break;
+	case ROLE_SERVICE:
+		role = "SERVICE";
+		break;
+	case ROLE_NON_EAL:
+		role = "NON_EAL";
+		break;
+	default:
+		role = "UNKNOWN";
+		break;
+	}
+	rte_tel_data_start_dict(info->d);
+	rte_tel_data_add_dict_int(info->d, "lcore_id", lcore_id);
+	rte_tel_data_add_dict_int(info->d, "socket",
+		rte_lcore_to_socket_id(lcore_id));
+	rte_tel_data_add_dict_string(info->d, "role", role);
+	cpuset = rte_tel_data_alloc();
+	if (!cpuset)
+		return -ENOMEM;
+	rte_tel_data_start_array(cpuset, RTE_TEL_INT_VAL);
+	for (cpu = 0; cpu < CPU_SETSIZE; cpu++)
+		if (CPU_ISSET(cpu, &lcore_config[lcore_id].cpuset))
+			rte_tel_data_add_array_int(cpuset, cpu);
+	rte_tel_data_add_dict_container(info->d, "cpuset", cpuset, 0);
+
+	return 0;
+}
+
+static int
+handle_lcore_info(const char *cmd __rte_unused, const char *params, struct rte_tel_data *d)
+{
+	struct lcore_telemetry_info info = { .d = d };
+	char *endptr = NULL;
+	if (params == NULL || strlen(params) == 0)
+		return -EINVAL;
+	errno = 0;
+	info.lcore_id = strtoul(params, &endptr, 10);
+	if (errno)
+		return -errno;
+	if (endptr == params)
+		return -EINVAL;
+	return rte_lcore_iterate(lcore_telemetry_info_cb, &info);
+}
+
+RTE_INIT(lcore_telemetry)
+{
+	rte_telemetry_register_cmd(
+		"/eal/lcore/list", handle_lcore_list,
+		"List of lcore ids. Takes no parameters");
+	rte_telemetry_register_cmd(
+		"/eal/lcore/info", handle_lcore_info,
+		"Returns lcore info.
Parameters: int lcore_id");
+}
+#endif /* !RTE_EXEC_ENV_WINDOWS */

From patchwork Thu Jan 19 15:06:53 2023
X-Patchwork-Submitter: Robin Jarry
X-Patchwork-Id: 122357
X-Patchwork-Delegate: david.marchand@redhat.com
From: Robin Jarry
To: dev@dpdk.org
Cc: Tyler Retzlaff, Kevin Laatz, Robin Jarry, Morten Brørup
Subject: [PATCH v6 2/5] eal: allow applications to report their cpu usage
Date: Thu, 19 Jan 2023 16:06:53 +0100
Message-Id: <20230119150656.418404-3-rjarry@redhat.com>
In-Reply-To: <20230119150656.418404-1-rjarry@redhat.com>
References: <20221123102612.1688865-1-rjarry@redhat.com> <20230119150656.418404-1-rjarry@redhat.com>

Allow applications to register a callback that will be invoked in
rte_lcore_dump() and when requesting lcore info in the telemetry API.
The callback is expected to return the number of TSC cycles that have
passed since application start and the number of these cycles that were
spent doing busy work.

Signed-off-by: Robin Jarry
Acked-by: Morten Brørup
Reviewed-by: Kevin Laatz
---
Notes:
    v5 -> v6: Added/rephrased some inline comments.
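As an illustration of how an application might implement such a callback, here is a self-contained sketch. The rte_lcore_usage struct, callback typedef, and registration function mirror what this patch adds to rte_lcore.h, but are redefined locally (together with a hypothetical MAX_LCORES counter table) so the snippet builds without DPDK:

```c
#include <stdint.h>
#include <stddef.h>

/* Local mirror of struct rte_lcore_usage added by this patch to rte_lcore.h. */
struct rte_lcore_usage {
	uint64_t total_cycles; /* TSC cycles since application start */
	uint64_t busy_cycles;  /* subset of total_cycles spent doing busy work */
};

typedef int (*rte_lcore_usage_cb)(unsigned int lcore_id, struct rte_lcore_usage *usage);

/* Local stand-in for the EAL-side registration function added by this patch. */
static rte_lcore_usage_cb lcore_usage_cb;
static void rte_lcore_register_usage_cb(rte_lcore_usage_cb cb)
{
	lcore_usage_cb = cb;
}

/* Hypothetical per-lcore counters that a real application would maintain
 * from its own datapath loop (names are illustrative, not DPDK API). */
#define MAX_LCORES 8
static uint64_t app_total[MAX_LCORES];
static uint64_t app_busy[MAX_LCORES];

static int
app_usage_cb(unsigned int lcore_id, struct rte_lcore_usage *usage)
{
	/* Return a negative value when no data is available: the EAL then
	 * simply omits the cycle counters for this lcore. */
	if (lcore_id >= MAX_LCORES || app_total[lcore_id] == 0)
		return -1;
	usage->total_cycles = app_total[lcore_id];
	usage->busy_cycles = app_busy[lcore_id];
	return 0;
}
```

A real application would call rte_lcore_register_usage_cb(app_usage_cb) once at init time, as the comments in the patch recommend.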
 lib/eal/common/eal_common_lcore.c | 45 ++++++++++++++++++++++++++++---
 lib/eal/include/rte_lcore.h       | 35 ++++++++++++++++++++++++
 lib/eal/version.map               |  1 +
 3 files changed, 78 insertions(+), 3 deletions(-)

diff --git a/lib/eal/common/eal_common_lcore.c b/lib/eal/common/eal_common_lcore.c
index 16548977dce8..80513cfe3725 100644
--- a/lib/eal/common/eal_common_lcore.c
+++ b/lib/eal/common/eal_common_lcore.c
@@ -2,6 +2,7 @@
  * Copyright(c) 2010-2014 Intel Corporation
  */

+#include
 #include
 #include

@@ -422,11 +423,21 @@ rte_lcore_iterate(rte_lcore_iterate_cb cb, void *arg)
 	return ret;
 }

+static rte_lcore_usage_cb lcore_usage_cb;
+
+void
+rte_lcore_register_usage_cb(rte_lcore_usage_cb cb)
+{
+	lcore_usage_cb = cb;
+}
+
 static int
 lcore_dump_cb(unsigned int lcore_id, void *arg)
 {
 	struct rte_config *cfg = rte_eal_get_configuration();
-	char cpuset[RTE_CPU_AFFINITY_STR_LEN];
+	char cpuset[RTE_CPU_AFFINITY_STR_LEN], usage_str[256];
+	struct rte_lcore_usage usage;
+	rte_lcore_usage_cb usage_cb;
 	const char *role;
 	FILE *f = arg;
 	int ret;
@@ -446,11 +457,25 @@ lcore_dump_cb(unsigned int lcore_id, void *arg)
 		break;
 	}

+	/* The callback may not set all the fields in the structure, so clear it here. */
+	memset(&usage, 0, sizeof(usage));
+	usage_str[0] = '\0';
+	/*
+	 * Guard against concurrent modification of lcore_usage_cb.
+	 * rte_lcore_register_usage_cb() should only be called once at application init
+	 * but nothing prevents an application from resetting the callback to NULL.
+	 */
+	usage_cb = lcore_usage_cb;
+	if (usage_cb != NULL && usage_cb(lcore_id, &usage) == 0) {
+		snprintf(usage_str, sizeof(usage_str), ", busy cycles %"PRIu64"/%"PRIu64,
+			usage.busy_cycles, usage.total_cycles);
+	}
 	ret = eal_thread_dump_affinity(&lcore_config[lcore_id].cpuset, cpuset,
 			sizeof(cpuset));
-	fprintf(f, "lcore %u, socket %u, role %s, cpuset %s%s\n", lcore_id,
+	fprintf(f, "lcore %u, socket %u, role %s, cpuset %s%s%s\n", lcore_id,
 		rte_lcore_to_socket_id(lcore_id), role, cpuset,
-		ret == 0 ?
 "" : "...");
+		ret == 0 ? "" : "...", usage_str);
+
 	return 0;
 }

@@ -489,7 +514,9 @@ lcore_telemetry_info_cb(unsigned int lcore_id, void *arg)
 {
 	struct lcore_telemetry_info *info = arg;
 	struct rte_config *cfg = rte_eal_get_configuration();
+	struct rte_lcore_usage usage;
 	struct rte_tel_data *cpuset;
+	rte_lcore_usage_cb usage_cb;
 	const char *role;
 	unsigned int cpu;

@@ -522,6 +549,18 @@ lcore_telemetry_info_cb(unsigned int lcore_id, void *arg)
 		if (CPU_ISSET(cpu, &lcore_config[lcore_id].cpuset))
 			rte_tel_data_add_array_int(cpuset, cpu);
 	rte_tel_data_add_dict_container(info->d, "cpuset", cpuset, 0);
+	/* The callback may not set all the fields in the structure, so clear it here. */
+	memset(&usage, 0, sizeof(usage));
+	/*
+	 * Guard against concurrent modification of lcore_usage_cb.
+	 * rte_lcore_register_usage_cb() should only be called once at application init
+	 * but nothing prevents an application from resetting the callback to NULL.
+	 */
+	usage_cb = lcore_usage_cb;
+	if (usage_cb != NULL && usage_cb(lcore_id, &usage) == 0) {
+		rte_tel_data_add_dict_u64(info->d, "total_cycles", usage.total_cycles);
+		rte_tel_data_add_dict_u64(info->d, "busy_cycles", usage.busy_cycles);
+	}

 	return 0;
 }

diff --git a/lib/eal/include/rte_lcore.h b/lib/eal/include/rte_lcore.h
index 6938c3fd7b81..52468e7120dd 100644
--- a/lib/eal/include/rte_lcore.h
+++ b/lib/eal/include/rte_lcore.h
@@ -328,6 +328,41 @@ typedef int (*rte_lcore_iterate_cb)(unsigned int lcore_id, void *arg);

 int rte_lcore_iterate(rte_lcore_iterate_cb cb, void *arg);

+/**
+ * CPU usage statistics.
+ */
+struct rte_lcore_usage {
+	uint64_t total_cycles;
+	/**< The total amount of time since application start, in TSC cycles. */
+	uint64_t busy_cycles;
+	/**< The amount of busy time since application start, in TSC cycles. */
+};
+
+/**
+ * Callback to allow applications to report CPU usage.
+ *
+ * @param [in] lcore_id
+ *   The lcore to consider.
+ * @param [out] usage
+ *   Counters representing this lcore usage.
+ *   This can never be NULL.
+ * @return
+ *   - 0 if fields in usage were updated successfully. The fields that the
+ *     application does not support must not be modified.
+ *   - a negative value if the information is not available or if any error
+ *     occurred.
+ */
+typedef int (*rte_lcore_usage_cb)(unsigned int lcore_id, struct rte_lcore_usage *usage);
+
+/**
+ * Register a callback from an application to be called in rte_lcore_dump() and
+ * the /eal/lcore/info telemetry endpoint handler. Applications are expected to
+ * report CPU usage statistics via this callback.
+ *
+ * @param cb
+ *   The callback function.
+ */
+__rte_experimental
+void rte_lcore_register_usage_cb(rte_lcore_usage_cb cb);
+
 /**
  * List all lcores.
  *

diff --git a/lib/eal/version.map b/lib/eal/version.map
index 7ad12a7dc985..30fd216a12ea 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -440,6 +440,7 @@ EXPERIMENTAL {
 	rte_thread_detach;
 	rte_thread_equal;
 	rte_thread_join;
+	rte_lcore_register_usage_cb;
 };

 INTERNAL {

From patchwork Thu Jan 19 15:06:54 2023
X-Patchwork-Submitter: Robin Jarry
X-Patchwork-Id: 122358
X-Patchwork-Delegate: david.marchand@redhat.com
From: Robin Jarry
To: dev@dpdk.org
Cc: Tyler Retzlaff, Kevin Laatz, Robin Jarry, Morten Brørup, Konstantin Ananyev
Subject: [PATCH v6 3/5] testpmd: add dump_lcores command
Date: Thu, 19 Jan 2023 16:06:54 +0100
Message-Id: <20230119150656.418404-4-rjarry@redhat.com>
In-Reply-To: <20230119150656.418404-1-rjarry@redhat.com>
References: <20221123102612.1688865-1-rjarry@redhat.com> <20230119150656.418404-1-rjarry@redhat.com>

Add a simple
command that calls rte_lcore_dump().

Signed-off-by: Robin Jarry
Acked-by: Morten Brørup
Acked-by: Konstantin Ananyev
Reviewed-by: Kevin Laatz
---
Notes:
    v5 -> v6: No change

 app/test-pmd/cmdline.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index b32dc8bfd445..96474d2ae458 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -8345,6 +8345,8 @@ static void cmd_dump_parsed(void *parsed_result,
 		rte_mempool_list_dump(stdout);
 	else if (!strcmp(res->dump, "dump_devargs"))
 		rte_devargs_dump(stdout);
+	else if (!strcmp(res->dump, "dump_lcores"))
+		rte_lcore_dump(stdout);
 	else if (!strcmp(res->dump, "dump_log_types"))
 		rte_log_dump(stdout);
 }
@@ -8358,6 +8360,7 @@ static cmdline_parse_token_string_t cmd_dump_dump =
 		"dump_ring#"
 		"dump_mempool#"
 		"dump_devargs#"
+		"dump_lcores#"
 		"dump_log_types");

 static cmdline_parse_inst_t cmd_dump = {

From patchwork Thu Jan 19 15:06:55 2023
X-Patchwork-Submitter: Robin Jarry
X-Patchwork-Id: 122360
X-Patchwork-Delegate: david.marchand@redhat.com
From: Robin Jarry
To: dev@dpdk.org
Cc: Tyler Retzlaff, Kevin Laatz, Robin Jarry, Morten Brørup, Konstantin Ananyev
Subject: [PATCH v6 4/5] testpmd: report lcore usage
Date: Thu, 19 Jan 2023 16:06:55 +0100
Message-Id: <20230119150656.418404-5-rjarry@redhat.com>
In-Reply-To: <20230119150656.418404-1-rjarry@redhat.com>
References: <20221123102612.1688865-1-rjarry@redhat.com> <20230119150656.418404-1-rjarry@redhat.com>

Reuse the --record-core-cycles option to account for busy cycles.
One turn of packet_fwd_t is considered "busy" if there was at least one
received or transmitted packet.

Add a new busy_cycles field in struct fwd_stream. Update get_end_cycles()
to accept an additional argument for the number of processed packets.
Update fwd_stream.busy_cycles when the number of packets is greater than
zero.

When --record-core-cycles is specified, register a callback with
rte_lcore_register_usage_cb(). In the callback, use the new lcore_id
field in struct fwd_lcore to identify the correct index in fwd_lcores
and return the sum of busy/total cycles of all fwd_streams.

This makes the cycle counters available in rte_lcore_dump() and the
lcore telemetry API:

testpmd> dump_lcores
lcore 3, socket 0, role RTE, cpuset 3
lcore 4, socket 0, role RTE, cpuset 4, busy cycles 1228584096/9239923140
lcore 5, socket 0, role RTE, cpuset 5, busy cycles 1255661768/9218141538

--> /eal/lcore/info,4
{
  "/eal/lcore/info": {
    "lcore_id": 4,
    "socket": 0,
    "role": "RTE",
    "cpuset": [
      4
    ],
    "busy_cycles": 10623340318,
    "total_cycles": 55331167354
  }
}

Signed-off-by: Robin Jarry
Acked-by: Morten Brørup
Acked-by: Konstantin Ananyev
Reviewed-by: Kevin Laatz
---
Notes:
    v5 -> v6: No change.
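The accounting rule described above — every forwarding turn adds to the total, but only turns that moved at least one packet count as busy — can be sketched in isolation. The struct and function names below are illustrative stand-ins, not the actual testpmd symbols (the real logic lives in get_end_cycles() in testpmd.h):

```c
#include <stdint.h>

/* Simplified stand-in for the two cycle counters this patch keeps
 * in struct fwd_stream. */
struct fwd_stream_counters {
	uint64_t busy_cycles; /* cycles of turns that moved at least one packet */
	uint64_t core_cycles; /* cycles of all turns, busy or idle */
};

/* Mirrors the accounting of get_end_cycles() after this patch:
 * core_cycles always accumulates, busy_cycles only when packets flowed. */
static void
account_cycles(struct fwd_stream_counters *fs, uint64_t cycles, uint64_t nb_packets)
{
	fs->core_cycles += cycles;
	if (nb_packets > 0)
		fs->busy_cycles += cycles;
}
```

With this split, busy_cycles / core_cycles directly gives the fraction of time the polling loop did useful work rather than empty polls.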
 app/test-pmd/5tswap.c         |  5 +++--
 app/test-pmd/csumonly.c       |  6 +++---
 app/test-pmd/flowgen.c        |  2 +-
 app/test-pmd/icmpecho.c       |  6 +++---
 app/test-pmd/iofwd.c          |  5 +++--
 app/test-pmd/macfwd.c         |  5 +++--
 app/test-pmd/macswap.c        |  5 +++--
 app/test-pmd/noisy_vnf.c      |  4 ++++
 app/test-pmd/rxonly.c         |  5 +++--
 app/test-pmd/shared_rxq_fwd.c |  5 +++--
 app/test-pmd/testpmd.c        | 39 ++++++++++++++++++++++++++++++++++-
 app/test-pmd/testpmd.h        | 14 +++++++++----
 app/test-pmd/txonly.c         |  7 ++++---
 13 files changed, 81 insertions(+), 27 deletions(-)

diff --git a/app/test-pmd/5tswap.c b/app/test-pmd/5tswap.c
index f041a5e1d530..03225075716c 100644
--- a/app/test-pmd/5tswap.c
+++ b/app/test-pmd/5tswap.c
@@ -116,7 +116,7 @@ pkt_burst_5tuple_swap(struct fwd_stream *fs)
 			nb_pkt_per_burst);
 	inc_rx_burst_stats(fs, nb_rx);
 	if (unlikely(nb_rx == 0))
-		return;
+		goto end;

 	fs->rx_packets += nb_rx;
 	txp = &ports[fs->tx_port];
@@ -182,7 +182,8 @@ pkt_burst_5tuple_swap(struct fwd_stream *fs)
 			rte_pktmbuf_free(pkts_burst[nb_tx]);
 		} while (++nb_tx < nb_rx);
 	}
-	get_end_cycles(fs, start_tsc);
+end:
+	get_end_cycles(fs, start_tsc, nb_rx);
 }

 static void
diff --git a/app/test-pmd/csumonly.c b/app/test-pmd/csumonly.c
index 1c2459851522..03e141221a56 100644
--- a/app/test-pmd/csumonly.c
+++ b/app/test-pmd/csumonly.c
@@ -868,7 +868,7 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 				nb_pkt_per_burst);
 	inc_rx_burst_stats(fs, nb_rx);
 	if (unlikely(nb_rx == 0))
-		return;
+		goto end;

 	fs->rx_packets += nb_rx;
 	rx_bad_ip_csum = 0;
@@ -1200,8 +1200,8 @@ pkt_burst_checksum_forward(struct fwd_stream *fs)
 			rte_pktmbuf_free(tx_pkts_burst[nb_tx]);
 		} while (++nb_tx < nb_rx);
 	}
-
-	get_end_cycles(fs, start_tsc);
+end:
+	get_end_cycles(fs, start_tsc, nb_rx);
 }

 static void
diff --git a/app/test-pmd/flowgen.c b/app/test-pmd/flowgen.c
index fd6abc0f4124..7b2f0ffdf0f5 100644
--- a/app/test-pmd/flowgen.c
+++ b/app/test-pmd/flowgen.c
@@ -196,7 +196,7 @@ pkt_burst_flow_gen(struct fwd_stream *fs)

 	RTE_PER_LCORE(_next_flow) = next_flow;

-	get_end_cycles(fs, start_tsc);
+	get_end_cycles(fs, start_tsc, nb_tx);
 }

 static int
diff --git a/app/test-pmd/icmpecho.c b/app/test-pmd/icmpecho.c
index 066f2a3ab79b..2fc9f96dc95f 100644
--- a/app/test-pmd/icmpecho.c
+++ b/app/test-pmd/icmpecho.c
@@ -303,7 +303,7 @@ reply_to_icmp_echo_rqsts(struct fwd_stream *fs)
 			nb_pkt_per_burst);
 	inc_rx_burst_stats(fs, nb_rx);
 	if (unlikely(nb_rx == 0))
-		return;
+		goto end;

 	fs->rx_packets += nb_rx;
 	nb_replies = 0;
@@ -508,8 +508,8 @@ reply_to_icmp_echo_rqsts(struct fwd_stream *fs)
 			} while (++nb_tx < nb_replies);
 		}
 	}
-
-	get_end_cycles(fs, start_tsc);
+end:
+	get_end_cycles(fs, start_tsc, nb_rx);
 }

 static void
diff --git a/app/test-pmd/iofwd.c b/app/test-pmd/iofwd.c
index 8fafdec548ad..e5a2dbe20c69 100644
--- a/app/test-pmd/iofwd.c
+++ b/app/test-pmd/iofwd.c
@@ -59,7 +59,7 @@ pkt_burst_io_forward(struct fwd_stream *fs)
 			pkts_burst, nb_pkt_per_burst);
 	inc_rx_burst_stats(fs, nb_rx);
 	if (unlikely(nb_rx == 0))
-		return;
+		goto end;
 	fs->rx_packets += nb_rx;

 	nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue,
@@ -84,7 +84,8 @@ pkt_burst_io_forward(struct fwd_stream *fs)
 		} while (++nb_tx < nb_rx);
 	}

-	get_end_cycles(fs, start_tsc);
+end:
+	get_end_cycles(fs, start_tsc, nb_rx);
 }

 static void
diff --git a/app/test-pmd/macfwd.c b/app/test-pmd/macfwd.c
index beb220fbb462..9db623999970 100644
--- a/app/test-pmd/macfwd.c
+++ b/app/test-pmd/macfwd.c
@@ -65,7 +65,7 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
 			nb_pkt_per_burst);
 	inc_rx_burst_stats(fs, nb_rx);
 	if (unlikely(nb_rx == 0))
-		return;
+		goto end;

 	fs->rx_packets += nb_rx;
 	txp = &ports[fs->tx_port];
@@ -115,7 +115,8 @@ pkt_burst_mac_forward(struct fwd_stream *fs)
 		} while (++nb_tx < nb_rx);
 	}

-	get_end_cycles(fs, start_tsc);
+end:
+	get_end_cycles(fs, start_tsc, nb_rx);
 }

 static void
diff --git a/app/test-pmd/macswap.c b/app/test-pmd/macswap.c
index 4f8deb338296..4db134ac1d91 100644
--- a/app/test-pmd/macswap.c
+++ b/app/test-pmd/macswap.c
@@ -66,7 +66,7 @@ pkt_burst_mac_swap(struct fwd_stream *fs)
 			nb_pkt_per_burst);
 	inc_rx_burst_stats(fs, nb_rx);
 	if (unlikely(nb_rx == 0))
-		return;
+		goto end;

 	fs->rx_packets += nb_rx;
 	txp = &ports[fs->tx_port];
@@ -93,7 +93,8 @@ pkt_burst_mac_swap(struct fwd_stream *fs)
 			rte_pktmbuf_free(pkts_burst[nb_tx]);
 		} while (++nb_tx < nb_rx);
 	}
-	get_end_cycles(fs, start_tsc);
+end:
+	get_end_cycles(fs, start_tsc, nb_rx);
 }

 static void
diff --git a/app/test-pmd/noisy_vnf.c b/app/test-pmd/noisy_vnf.c
index c65ec6f06a5c..290bdcda45f0 100644
--- a/app/test-pmd/noisy_vnf.c
+++ b/app/test-pmd/noisy_vnf.c
@@ -152,6 +152,9 @@ pkt_burst_noisy_vnf(struct fwd_stream *fs)
 	uint64_t delta_ms;
 	bool needs_flush = false;
 	uint64_t now;
+	uint64_t start_tsc = 0;
+
+	get_start_cycles(&start_tsc);

 	nb_rx = rte_eth_rx_burst(fs->rx_port, fs->rx_queue,
 			pkts_burst, nb_pkt_per_burst);
@@ -219,6 +222,7 @@ pkt_burst_noisy_vnf(struct fwd_stream *fs)
 		fs->fwd_dropped += drop_pkts(tmp_pkts, nb_deqd, sent);
 		ncf->prev_time = rte_get_timer_cycles();
 	}
+	get_end_cycles(fs, start_tsc, nb_rx + nb_tx);
 }

 #define NOISY_STRSIZE 256
diff --git a/app/test-pmd/rxonly.c b/app/test-pmd/rxonly.c
index d528d4f34e60..519202339e16 100644
--- a/app/test-pmd/rxonly.c
+++ b/app/test-pmd/rxonly.c
@@ -58,13 +58,14 @@ pkt_burst_receive(struct fwd_stream *fs)
 			nb_pkt_per_burst);
 	inc_rx_burst_stats(fs, nb_rx);
 	if (unlikely(nb_rx == 0))
-		return;
+		goto end;

 	fs->rx_packets += nb_rx;
 	for (i = 0; i < nb_rx; i++)
 		rte_pktmbuf_free(pkts_burst[i]);

-	get_end_cycles(fs, start_tsc);
+end:
+	get_end_cycles(fs, start_tsc, nb_rx);
 }

 static void
diff --git a/app/test-pmd/shared_rxq_fwd.c b/app/test-pmd/shared_rxq_fwd.c
index 2e9047804b5b..395b73bfe52e 100644
--- a/app/test-pmd/shared_rxq_fwd.c
+++ b/app/test-pmd/shared_rxq_fwd.c
@@ -102,9 +102,10 @@ shared_rxq_fwd(struct fwd_stream *fs)
 			nb_pkt_per_burst);
 	inc_rx_burst_stats(fs, nb_rx);
 	if (unlikely(nb_rx == 0))
-		return;
+		goto end;
 	forward_shared_rxq(fs, nb_rx, pkts_burst);
-	get_end_cycles(fs, start_tsc);
+end:
+	get_end_cycles(fs, start_tsc, nb_rx);
 }

 static void
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 134d79a55547..d80867b91b3d 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -2053,7 +2053,7 @@ fwd_stats_display(void)
 				fs->rx_bad_outer_ip_csum;

 		if (record_core_cycles)
-			fwd_cycles += fs->core_cycles;
+			fwd_cycles += fs->busy_cycles;
 	}
 	for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) {
 		pt_id = fwd_ports_ids[i];
@@ -2184,6 +2184,7 @@ fwd_stats_reset(void)

 		memset(&fs->rx_burst_stats, 0, sizeof(fs->rx_burst_stats));
 		memset(&fs->tx_burst_stats, 0, sizeof(fs->tx_burst_stats));
+		fs->busy_cycles = 0;
 		fs->core_cycles = 0;
 	}
 }
@@ -2260,6 +2261,7 @@ run_pkt_fwd_on_lcore(struct fwd_lcore *fc, packet_fwd_t pkt_fwd)
 	tics_datum = rte_rdtsc();
 	tics_per_1sec = rte_get_timer_hz();
 #endif
+	fc->lcore_id = rte_lcore_id();
 	fsm = &fwd_streams[fc->stream_idx];
 	nb_fs = fc->stream_nb;
 	do {
@@ -2288,6 +2290,37 @@ run_pkt_fwd_on_lcore(struct fwd_lcore *fc, packet_fwd_t pkt_fwd)
 	} while (! fc->stopped);
 }

+static int
+lcore_usage_callback(unsigned int lcore_id, struct rte_lcore_usage *usage)
+{
+	struct fwd_stream **fsm;
+	struct fwd_lcore *fc;
+	streamid_t nb_fs;
+	streamid_t sm_id;
+	int c;
+
+	for (c = 0; c < nb_lcores; c++) {
+		fc = fwd_lcores[c];
+		if (fc->lcore_id != lcore_id)
+			continue;
+
+		fsm = &fwd_streams[fc->stream_idx];
+		nb_fs = fc->stream_nb;
+		usage->busy_cycles = 0;
+		usage->total_cycles = 0;
+
+		for (sm_id = 0; sm_id < nb_fs; sm_id++)
+			if (!fsm[sm_id]->disabled) {
+				usage->busy_cycles += fsm[sm_id]->busy_cycles;
+				usage->total_cycles += fsm[sm_id]->core_cycles;
+			}
+
+		return 0;
+	}
+
+	return -1;
+}
+
 static int
 start_pkt_forward_on_core(void *fwd_arg)
 {
@@ -4522,6 +4555,10 @@ main(int argc, char** argv)
 		rte_stats_bitrate_reg(bitrate_data);
 	}
 #endif
+
+	if (record_core_cycles)
+		rte_lcore_register_usage_cb(lcore_usage_callback);
+
 #ifdef RTE_LIB_CMDLINE
 	if (init_cmdline() != 0)
 		rte_exit(EXIT_FAILURE,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 7d24d25970d2..5dbf5d1c465c 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -174,7 +174,8 @@ struct fwd_stream {
 #ifdef RTE_LIB_GRO
 	unsigned int gro_times;	/**< GRO operation times */
 #endif
-	uint64_t core_cycles; /**< used for RX and TX processing */
+	uint64_t busy_cycles; /**< used with --record-core-cycles */
+	uint64_t core_cycles; /**< used with --record-core-cycles */
 	struct pkt_burst_stats rx_burst_stats;
 	struct pkt_burst_stats tx_burst_stats;
 	struct fwd_lcore *lcore; /**< Lcore being scheduled. */
@@ -360,6 +361,7 @@ struct fwd_lcore {
 	streamid_t stream_nb;    /**< number of streams in "fwd_streams" */
 	lcoreid_t  cpuid_idx;    /**< index of logical core in CPU id table */
 	volatile char stopped;   /**< stop forwarding when set */
+	unsigned int lcore_id;   /**< return value of rte_lcore_id() */
 };

 /*
@@ -836,10 +838,14 @@ get_start_cycles(uint64_t *start_tsc)
 }

 static inline void
-get_end_cycles(struct fwd_stream *fs, uint64_t start_tsc)
+get_end_cycles(struct fwd_stream *fs, uint64_t start_tsc, uint64_t nb_packets)
 {
-	if (record_core_cycles)
-		fs->core_cycles += rte_rdtsc() - start_tsc;
+	if (record_core_cycles) {
+		uint64_t cycles = rte_rdtsc() - start_tsc;
+		fs->core_cycles += cycles;
+		if (nb_packets > 0)
+			fs->busy_cycles += cycles;
+	}
 }

 static inline void
diff --git a/app/test-pmd/txonly.c b/app/test-pmd/txonly.c
index 021624952daa..ad37626ff63c 100644
--- a/app/test-pmd/txonly.c
+++ b/app/test-pmd/txonly.c
@@ -331,7 +331,7 @@ pkt_burst_transmit(struct fwd_stream *fs)
 	struct rte_mbuf *pkt;
 	struct rte_mempool *mbp;
 	struct rte_ether_hdr eth_hdr;
-	uint16_t nb_tx;
+	uint16_t nb_tx = 0;
 	uint16_t nb_pkt;
 	uint16_t vlan_tci, vlan_tci_outer;
 	uint32_t retry;
@@ -392,7 +392,7 @@ pkt_burst_transmit(struct fwd_stream *fs)
 	}

 	if (nb_pkt == 0)
-		return;
+		goto end;

 	nb_tx = rte_eth_tx_burst(fs->tx_port, fs->tx_queue, pkts_burst, nb_pkt);
@@ -426,7 +426,8 @@ pkt_burst_transmit(struct fwd_stream *fs)
 		} while (++nb_tx < nb_pkt);
 }
 
-	get_end_cycles(fs, start_tsc);
+end:
+	get_end_cycles(fs, start_tsc, nb_tx);
 }
 
 static int

From patchwork Thu Jan 19 15:06:56 2023
X-Patchwork-Submitter: Robin Jarry
X-Patchwork-Id: 122359
X-Patchwork-Delegate: david.marchand@redhat.com
From: Robin Jarry
To: dev@dpdk.org
Cc: Tyler Retzlaff, Kevin Laatz, Robin Jarry, Morten Brørup
Subject: [PATCH v6 5/5] telemetry: add /eal/lcore/usage endpoint
Date: Thu, 19 Jan 2023 16:06:56 +0100
Message-Id: <20230119150656.418404-6-rjarry@redhat.com>
In-Reply-To: <20230119150656.418404-1-rjarry@redhat.com>
References: <20221123102612.1688865-1-rjarry@redhat.com>
 <20230119150656.418404-1-rjarry@redhat.com>

Allow fetching CPU cycles usage for all lcores with a single request. This
endpoint is intended for repeated and frequent invocations by external
monitoring systems and therefore returns condensed data. It consists of a
single dictionary with three keys: "lcore_ids", "total_cycles" and
"busy_cycles", each mapped to an array of integer values. All three arrays
have the same number of values, one per lcore, in the same order.
Example:

  --> /eal/lcore/usage
  {"/eal/lcore/usage": {
    "lcore_ids": [4, 5],
    "total_cycles": [23846845590, 23900558914],
    "busy_cycles": [21043446682, 21448837316]
  }}

Cc: Kevin Laatz
Cc: Morten Brørup

Signed-off-by: Robin Jarry
Acked-by: Morten Brørup
Reviewed-by: Kevin Laatz
---

Notes:
    v6: new patch

 lib/eal/common/eal_common_lcore.c | 64 +++++++++++++++++++++++++++++++
 1 file changed, 64 insertions(+)

diff --git a/lib/eal/common/eal_common_lcore.c b/lib/eal/common/eal_common_lcore.c
index 80513cfe3725..2f310ad19672 100644
--- a/lib/eal/common/eal_common_lcore.c
+++ b/lib/eal/common/eal_common_lcore.c
@@ -581,6 +581,67 @@ handle_lcore_info(const char *cmd __rte_unused, const char *params, struct rte_t
 	return rte_lcore_iterate(lcore_telemetry_info_cb, &info);
 }
 
+struct lcore_telemetry_usage {
+	struct rte_tel_data *lcore_ids;
+	struct rte_tel_data *total_cycles;
+	struct rte_tel_data *busy_cycles;
+};
+
+static int
+lcore_telemetry_usage_cb(unsigned int lcore_id, void *arg)
+{
+	struct lcore_telemetry_usage *u = arg;
+	struct rte_lcore_usage usage;
+	rte_lcore_usage_cb usage_cb;
+
+	/* The callback may not set all the fields in the structure, so clear it here. */
+	memset(&usage, 0, sizeof(usage));
+	/*
+	 * Guard against concurrent modification of lcore_usage_cb.
+	 * rte_lcore_register_usage_cb() should only be called once at application init
+	 * but nothing prevents an application from resetting the callback to NULL.
+	 */
+	usage_cb = lcore_usage_cb;
+	if (usage_cb != NULL && usage_cb(lcore_id, &usage) == 0) {
+		rte_tel_data_add_array_int(u->lcore_ids, lcore_id);
+		rte_tel_data_add_array_u64(u->total_cycles, usage.total_cycles);
+		rte_tel_data_add_array_u64(u->busy_cycles, usage.busy_cycles);
+	}
+
+	return 0;
+}
+
+static int
+handle_lcore_usage(const char *cmd __rte_unused,
+		const char *params __rte_unused,
+		struct rte_tel_data *d)
+{
+	struct lcore_telemetry_usage usage;
+	struct rte_tel_data *lcore_ids = rte_tel_data_alloc();
+	struct rte_tel_data *total_cycles = rte_tel_data_alloc();
+	struct rte_tel_data *busy_cycles = rte_tel_data_alloc();
+
+	if (!lcore_ids || !total_cycles || !busy_cycles) {
+		rte_tel_data_free(lcore_ids);
+		rte_tel_data_free(total_cycles);
+		rte_tel_data_free(busy_cycles);
+		return -ENOMEM;
+	}
+
+	rte_tel_data_start_dict(d);
+	rte_tel_data_start_array(lcore_ids, RTE_TEL_INT_VAL);
+	rte_tel_data_start_array(total_cycles, RTE_TEL_U64_VAL);
+	rte_tel_data_start_array(busy_cycles, RTE_TEL_U64_VAL);
+	rte_tel_data_add_dict_container(d, "lcore_ids", lcore_ids, 0);
+	rte_tel_data_add_dict_container(d, "total_cycles", total_cycles, 0);
+	rte_tel_data_add_dict_container(d, "busy_cycles", busy_cycles, 0);
+	usage.lcore_ids = lcore_ids;
+	usage.total_cycles = total_cycles;
+	usage.busy_cycles = busy_cycles;
+
+	return rte_lcore_iterate(lcore_telemetry_usage_cb, &usage);
+}
+
 RTE_INIT(lcore_telemetry)
 {
 	rte_telemetry_register_cmd(
@@ -589,5 +650,8 @@ RTE_INIT(lcore_telemetry)
 	rte_telemetry_register_cmd(
 		"/eal/lcore/info", handle_lcore_info,
 		"Returns lcore info. Parameters: int lcore_id");
+	rte_telemetry_register_cmd(
+		"/eal/lcore/usage", handle_lcore_usage,
+		"Returns lcore cycles usage. Takes no parameters");
 }
 
 #endif /* !RTE_EXEC_ENV_WINDOWS */
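[Editor's note] The condensed reply format above is designed for cheap client-side processing: three parallel arrays indexed the same way. The sketch below is not part of the patch; it shows how a monitoring client might turn one /eal/lcore/usage reply into per-lcore busy ratios. The `busy_ratios` helper name and the embedded sample payload (taken from the example in the commit message) are illustrative; a real client would read the JSON over the DPDK telemetry unix socket instead of hardcoding it.

```python
import json

# Sample reply, copied from the commit message example above. In practice
# this JSON would be read from the DPDK telemetry socket.
SAMPLE_REPLY = json.loads("""
{"/eal/lcore/usage": {
    "lcore_ids": [4, 5],
    "total_cycles": [23846845590, 23900558914],
    "busy_cycles": [21043446682, 21448837316]
}}
""")

def busy_ratios(reply):
    """Map each lcore id to its busy/total cycle ratio.

    The three arrays are parallel: index i of each array refers to the
    same lcore, so zip() pairs them up correctly.
    """
    usage = reply["/eal/lcore/usage"]
    return {
        lcore: busy / total
        for lcore, total, busy in zip(
            usage["lcore_ids"], usage["total_cycles"], usage["busy_cycles"]
        )
        if total > 0  # guard against a zeroed sample
    }

if __name__ == "__main__":
    for lcore, ratio in sorted(busy_ratios(SAMPLE_REPLY).items()):
        print(f"lcore {lcore}: {ratio:.1%} busy")
```

Since total_cycles and busy_cycles are monotonically increasing counters, a monitoring system would normally sample this endpoint twice and compute the ratio of deltas to get utilization over an interval, rather than the lifetime ratio shown here.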