From patchwork Thu Nov 26 11:15:42 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Wisam Monther
X-Patchwork-Id: 84573
X-Patchwork-Delegate: thomas@monjalon.net
From: Wisam Jaddo
Date: Thu, 26 Nov 2020 13:15:42 +0200
Message-ID: <20201126111543.16928-4-wisamm@nvidia.com>
In-Reply-To: <20201126111543.16928-1-wisamm@nvidia.com>
References: <20201126111543.16928-1-wisamm@nvidia.com>
Subject: [dpdk-dev] [PATCH 3/4] app/flow-perf: change clock measurement functions
List-Id: DPDK patches and discussions

The clock() function is a poor choice for timing work spread across
multiple cores/threads: it measures the CPU time consumed by the whole
process, not wall-clock time, and a process running on several cores
simultaneously burns through CPU time correspondingly faster, inflating
the measurements. This commit therefore switches the measurement to
rte_rdtsc() and divides the resulting cycle counts by the TSC frequency
(rte_get_tsc_hz()).

Signed-off-by: Wisam Jaddo
Reviewed-by: Alexander Kozyrev
Reviewed-by: Suanming Mou
---
 app/test-flow-perf/main.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/app/test-flow-perf/main.c b/app/test-flow-perf/main.c
index 663b2e9bae..3a0e4c1951 100644
--- a/app/test-flow-perf/main.c
+++ b/app/test-flow-perf/main.c
@@ -889,7 +889,7 @@ destroy_flows(int port_id, uint8_t core_id, struct rte_flow **flows_list)
 	rules_count_per_core = rules_count / mc_pool.cores_count;
 
-	start_batch = clock();
+	start_batch = rte_rdtsc();
 	for (i = 0; i < (uint32_t) rules_count_per_core; i++) {
 		if (flows_list[i] == 0)
 			break;
@@ -907,12 +907,12 @@ destroy_flows(int port_id, uint8_t core_id, struct rte_flow **flows_list)
 		 * for this batch.
 		 */
 		if (!((i + 1) % rules_batch)) {
-			end_batch = clock();
+			end_batch = rte_rdtsc();
 			delta = (double) (end_batch - start_batch);
 			rules_batch_idx = ((i + 1) / rules_batch) - 1;
-			cpu_time_per_batch[rules_batch_idx] = delta / CLOCKS_PER_SEC;
+			cpu_time_per_batch[rules_batch_idx] = delta / rte_get_tsc_hz();
 			cpu_time_used += cpu_time_per_batch[rules_batch_idx];
-			start_batch = clock();
+			start_batch = rte_rdtsc();
 		}
 	}
@@ -985,7 +985,7 @@ insert_flows(int port_id, uint8_t core_id)
 		flows_list[flow_index++] = flow;
 	}
 
-	start_batch = clock();
+	start_batch = rte_rdtsc();
 	for (counter = start_counter; counter < end_counter; counter++) {
 		flow = generate_flow(port_id, flow_group,
 				flow_attrs, flow_items, flow_actions,
@@ -1011,12 +1011,12 @@ insert_flows(int port_id, uint8_t core_id)
 		 * for this batch.
 		 */
 		if (!((counter + 1) % rules_batch)) {
-			end_batch = clock();
+			end_batch = rte_rdtsc();
 			delta = (double) (end_batch - start_batch);
 			rules_batch_idx = ((counter + 1) / rules_batch) - 1;
-			cpu_time_per_batch[rules_batch_idx] = delta / CLOCKS_PER_SEC;
+			cpu_time_per_batch[rules_batch_idx] = delta / rte_get_tsc_hz();
 			cpu_time_used += cpu_time_per_batch[rules_batch_idx];
-			start_batch = rte_rdtsc();
 		}
 	}