From patchwork Wed Nov 4 17:04:25 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Olivier Matz
X-Patchwork-Id: 83695
X-Patchwork-Delegate: david.marchand@redhat.com
From: Olivier Matz
To: dev@dpdk.org
Cc: Honnappa Nagarahalli, Gavin Hu, Phil Yang
Date: Wed, 4 Nov 2020 18:04:25 +0100
Message-Id: <20201104170425.8882-1-olivier.matz@6wind.com>
X-Mailer: git-send-email 2.25.1
Subject: [dpdk-dev] [PATCH] test/mcslock: remove unneeded per-lcore copy

Each core already comes with its own local storage for the mcslock
node (on its stack), so there is no need to define an additional
per-lcore mcslock.

Fixes: 32dcb9fd2a22 ("test/mcslock: add MCS queued lock unit test")

Signed-off-by: Olivier Matz
Reviewed-by: Honnappa Nagarahalli
---
 app/test/test_mcslock.c | 16 ++++++----------
 1 file changed, 6 insertions(+), 10 deletions(-)

diff --git a/app/test/test_mcslock.c b/app/test/test_mcslock.c
index fbca78707d..80eaecc90a 100644
--- a/app/test/test_mcslock.c
+++ b/app/test/test_mcslock.c
@@ -37,10 +37,6 @@
  * lock multiple times.
  */
 
-RTE_DEFINE_PER_LCORE(rte_mcslock_t, _ml_me);
-RTE_DEFINE_PER_LCORE(rte_mcslock_t, _ml_try_me);
-RTE_DEFINE_PER_LCORE(rte_mcslock_t, _ml_perf_me);
-
 rte_mcslock_t *p_ml;
 rte_mcslock_t *p_ml_try;
 rte_mcslock_t *p_ml_perf;
@@ -53,7 +49,7 @@ static int
 test_mcslock_per_core(__rte_unused void *arg)
 {
 	/* Per core me node. */
-	rte_mcslock_t ml_me = RTE_PER_LCORE(_ml_me);
+	rte_mcslock_t ml_me;
 
 	rte_mcslock_lock(&p_ml, &ml_me);
 	printf("MCS lock taken on core %u\n", rte_lcore_id());
@@ -77,7 +73,7 @@ load_loop_fn(void *func_param)
 	const unsigned int lcore = rte_lcore_id();
 
 	/**< Per core me node. */
-	rte_mcslock_t ml_perf_me = RTE_PER_LCORE(_ml_perf_me);
+	rte_mcslock_t ml_perf_me;
 
 	/* wait synchro */
 	while (rte_atomic32_read(&synchro) == 0)
@@ -151,8 +147,8 @@ static int
 test_mcslock_try(__rte_unused void *arg)
 {
 	/**< Per core me node. */
-	rte_mcslock_t ml_me = RTE_PER_LCORE(_ml_me);
-	rte_mcslock_t ml_try_me = RTE_PER_LCORE(_ml_try_me);
+	rte_mcslock_t ml_me;
+	rte_mcslock_t ml_try_me;
 
 	/* Locked ml_try in the main lcore, so it should fail
 	 * when trying to lock it in the worker lcore.
@@ -178,8 +174,8 @@ test_mcslock(void)
 	int i;
 
 	/* Define per core me node. */
-	rte_mcslock_t ml_me = RTE_PER_LCORE(_ml_me);
-	rte_mcslock_t ml_try_me = RTE_PER_LCORE(_ml_try_me);
+	rte_mcslock_t ml_me;
+	rte_mcslock_t ml_try_me;
 
 	/*
 	 * Test mcs lock & unlock on each core
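
For reference, this is the usage pattern the patch relies on: the MCS
queue node lives on the caller's stack, so each lcore already gets
private storage for it. A minimal standalone sketch, not part of the
patch; it reuses the rte_mcslock calls visible in the diff above, and
worker() is a hypothetical function name:

	#include <stdio.h>
	#include <rte_common.h>
	#include <rte_lcore.h>
	#include <rte_mcslock.h>

	/* Shared lock: tail pointer of the MCS queue, NULL when free. */
	static rte_mcslock_t *lock;

	static int
	worker(__rte_unused void *arg)
	{
		/* Queue node on this lcore's stack: no per-lcore
		 * variable is needed, since the stack is already
		 * private to the core running this function.
		 */
		rte_mcslock_t node;

		rte_mcslock_lock(&lock, &node);
		printf("lock held on lcore %u\n", rte_lcore_id());
		rte_mcslock_unlock(&lock, &node);
		return 0;
	}

The only constraint is that the node must remain valid until unlock
returns, which a stack variable in the same function trivially
satisfies.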