diff mbox series

test/hash: use compiler atomics for sync

Message ID 20210922215205.2638916-1-dharmik.thakkar@arm.com (mailing list archive)
State Accepted
Delegated to: David Marchand
Headers show
Series test/hash: use compiler atomics for sync | expand

Checks

Context Check Description
ci/github-robot: build success github build: passed
ci/iol-x86_64-compile-testing warning Testing issues
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-aarch64-compile-testing success Testing PASS
ci/iol-mellanox-Performance success Performance Testing PASS
ci/intel-Testing success Testing PASS
ci/Intel-compilation success Compilation OK
ci/checkpatch success coding style OK

Commit Message

Dharmik Thakkar Sept. 22, 2021, 9:52 p.m. UTC
Convert rte_atomic usages to compiler atomic built-ins
for stats sync

Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Reviewed-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
---
 app/test/test_hash_multiwriter.c | 19 ++++----
 app/test/test_hash_readwrite.c   | 80 +++++++++++++++-----------------
 2 files changed, 45 insertions(+), 54 deletions(-)

Comments

David Marchand Oct. 2, 2021, 3:15 p.m. UTC | #1
Hello guys,

On Wed, Sep 22, 2021 at 11:52 PM Dharmik Thakkar
<dharmik.thakkar@arm.com> wrote:
>
> Convert rte_atomic usages to compiler atomic built-ins
> for stats sync
>
> Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
> Reviewed-by: Joyce Kong <joyce.kong@arm.com>
> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>

Review please.
Wang, Yipeng1 Oct. 4, 2021, 4:37 p.m. UTC | #2
> -----Original Message-----
> From: David Marchand <david.marchand@redhat.com>
> Sent: Saturday, October 2, 2021 8:16 AM
> To: Wang, Yipeng1 <yipeng1.wang@intel.com>; Gobriel, Sameh
> <sameh.gobriel@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>; Medvedkin, Vladimir
> <vladimir.medvedkin@intel.com>
> Cc: dev <dev@dpdk.org>; nd <nd@arm.com>; Dharmik Thakkar
> <dharmik.thakkar@arm.com>; Joyce Kong <joyce.kong@arm.com>; Ruifeng
> Wang <ruifeng.wang@arm.com>
> Subject: Re: [dpdk-dev] [PATCH] test/hash: use compiler atomics for sync
> 
> Hello guys,
> 
> On Wed, Sep 22, 2021 at 11:52 PM Dharmik Thakkar
> <dharmik.thakkar@arm.com> wrote:
> >
> > Convert rte_atomic usages to compiler atomic built-ins for stats sync
> >
> > Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
> > Reviewed-by: Joyce Kong <joyce.kong@arm.com>
> > Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> 
> Review please.
> 
> 
> --
> David Marchand

[Wang, Yipeng] 
New failure with MinGW
+---------------------+--------------------+----------------------+-------------------+
|     Environment     | dpdk_meson_compile | dpdk_mingw64_compile | dpdk_compile_spdk |
+=====================+====================+======================+===================+
| Windows Server 2019 | PASS               | FAIL                 | SKIPPED           |
+---------------------+--------------------+----------------------+-------------------+

Any guidance on this failure, David?

Otherwise the patch looks good to me.
Thanks!
David Marchand Oct. 17, 2021, 2:09 p.m. UTC | #3
On Mon, Oct 4, 2021 at 8:15 PM Wang, Yipeng1 <yipeng1.wang@intel.com> wrote:
> New failure with MinGW
> +---------------------+--------------------+----------------------+-------------------+
> |     Environment     | dpdk_meson_compile | dpdk_mingw64_compile | dpdk_compile_spdk |
> +=====================+====================+======================+===================+
> | Windows Server 2019 | PASS               | FAIL                 | SKIPPED           |
> +---------------------+--------------------+----------------------+-------------------+
>
> Any guideline on this failure David?

Afaiu, it was unrelated to this patch.
David Christensen Oct. 18, 2021, 6:04 p.m. UTC | #4
On 9/22/21 2:52 PM, Dharmik Thakkar wrote:
> Convert rte_atomic usages to compiler atomic built-ins
> for stats sync
> 
> Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
> Reviewed-by: Joyce Kong <joyce.kong@arm.com>
> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
> ---

Tested-by: David Christensen <drc@linux.vnet.ibm.com>
David Marchand Oct. 19, 2021, 2:29 p.m. UTC | #5
On Wed, Sep 22, 2021 at 11:52 PM Dharmik Thakkar
<dharmik.thakkar@arm.com> wrote:
>
> Convert rte_atomic usages to compiler atomic built-ins
> for stats sync
>
> Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
> Reviewed-by: Joyce Kong <joyce.kong@arm.com>
> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Yipeng Wang <yipeng1.wang@intel.com>
Tested-by: David Christensen <drc@linux.vnet.ibm.com>

Applied, thanks.

Patch

diff --git a/app/test/test_hash_multiwriter.c b/app/test/test_hash_multiwriter.c
index afa3c7b93d85..0c5a8ca18607 100644
--- a/app/test/test_hash_multiwriter.c
+++ b/app/test/test_hash_multiwriter.c
@@ -43,8 +43,8 @@  const uint32_t nb_entries = 5*1024*1024;
 const uint32_t nb_total_tsx_insertion = 4.5*1024*1024;
 uint32_t rounded_nb_total_tsx_insertion;
 
-static rte_atomic64_t gcycles;
-static rte_atomic64_t ginsertions;
+static uint64_t gcycles;
+static uint64_t ginsertions;
 
 static int use_htm;
 
@@ -84,8 +84,8 @@  test_hash_multiwriter_worker(void *arg)
 	}
 
 	cycles = rte_rdtsc_precise() - begin;
-	rte_atomic64_add(&gcycles, cycles);
-	rte_atomic64_add(&ginsertions, i - offset);
+	__atomic_fetch_add(&gcycles, cycles, __ATOMIC_RELAXED);
+	__atomic_fetch_add(&ginsertions, i - offset, __ATOMIC_RELAXED);
 
 	for (; i < offset + tbl_multiwriter_test_params.nb_tsx_insertion; i++)
 		tbl_multiwriter_test_params.keys[i]
@@ -168,11 +168,8 @@  test_hash_multiwriter(void)
 
 	tbl_multiwriter_test_params.found = found;
 
-	rte_atomic64_init(&gcycles);
-	rte_atomic64_clear(&gcycles);
-
-	rte_atomic64_init(&ginsertions);
-	rte_atomic64_clear(&ginsertions);
+	__atomic_store_n(&gcycles, 0, __ATOMIC_RELAXED);
+	__atomic_store_n(&ginsertions, 0, __ATOMIC_RELAXED);
 
 	/* Get list of enabled cores */
 	i = 0;
@@ -238,8 +235,8 @@  test_hash_multiwriter(void)
 	printf("No key corrupted during multiwriter insertion.\n");
 
 	unsigned long long int cycles_per_insertion =
-		rte_atomic64_read(&gcycles)/
-		rte_atomic64_read(&ginsertions);
+		__atomic_load_n(&gcycles, __ATOMIC_RELAXED)/
+		__atomic_load_n(&ginsertions, __ATOMIC_RELAXED);
 
 	printf(" cycles per insertion: %llu\n", cycles_per_insertion);
 
diff --git a/app/test/test_hash_readwrite.c b/app/test/test_hash_readwrite.c
index 4860768a6491..9b192f2b5e7c 100644
--- a/app/test/test_hash_readwrite.c
+++ b/app/test/test_hash_readwrite.c
@@ -45,14 +45,14 @@  struct {
 	struct rte_hash *h;
 } tbl_rw_test_param;
 
-static rte_atomic64_t gcycles;
-static rte_atomic64_t ginsertions;
+static uint64_t gcycles;
+static uint64_t ginsertions;
 
-static rte_atomic64_t gread_cycles;
-static rte_atomic64_t gwrite_cycles;
+static uint64_t gread_cycles;
+static uint64_t gwrite_cycles;
 
-static rte_atomic64_t greads;
-static rte_atomic64_t gwrites;
+static uint64_t greads;
+static uint64_t gwrites;
 
 static int
 test_hash_readwrite_worker(__rte_unused void *arg)
@@ -110,8 +110,8 @@  test_hash_readwrite_worker(__rte_unused void *arg)
 	}
 
 	cycles = rte_rdtsc_precise() - begin;
-	rte_atomic64_add(&gcycles, cycles);
-	rte_atomic64_add(&ginsertions, i - offset);
+	__atomic_fetch_add(&gcycles, cycles, __ATOMIC_RELAXED);
+	__atomic_fetch_add(&ginsertions, i - offset, __ATOMIC_RELAXED);
 
 	for (; i < offset + tbl_rw_test_param.num_insert; i++)
 		tbl_rw_test_param.keys[i] = RTE_RWTEST_FAIL;
@@ -209,11 +209,8 @@  test_hash_readwrite_functional(int use_htm, int use_rw_lf, int use_ext)
 	int worker_cnt = rte_lcore_count() - 1;
 	uint32_t tot_insert = 0;
 
-	rte_atomic64_init(&gcycles);
-	rte_atomic64_clear(&gcycles);
-
-	rte_atomic64_init(&ginsertions);
-	rte_atomic64_clear(&ginsertions);
+	__atomic_store_n(&gcycles, 0, __ATOMIC_RELAXED);
+	__atomic_store_n(&ginsertions, 0, __ATOMIC_RELAXED);
 
 	if (init_params(use_ext, use_htm, use_rw_lf, use_jhash) != 0)
 		goto err;
@@ -272,8 +269,8 @@  test_hash_readwrite_functional(int use_htm, int use_rw_lf, int use_ext)
 	printf("No key corrupted during read-write test.\n");
 
 	unsigned long long int cycles_per_insertion =
-		rte_atomic64_read(&gcycles) /
-		rte_atomic64_read(&ginsertions);
+		__atomic_load_n(&gcycles, __ATOMIC_RELAXED) /
+		__atomic_load_n(&ginsertions, __ATOMIC_RELAXED);
 
 	printf("cycles per insertion and lookup: %llu\n", cycles_per_insertion);
 
@@ -313,8 +310,8 @@  test_rw_reader(void *arg)
 	}
 
 	cycles = rte_rdtsc_precise() - begin;
-	rte_atomic64_add(&gread_cycles, cycles);
-	rte_atomic64_add(&greads, i);
+	__atomic_fetch_add(&gread_cycles, cycles, __ATOMIC_RELAXED);
+	__atomic_fetch_add(&greads, i, __ATOMIC_RELAXED);
 	return 0;
 }
 
@@ -347,8 +344,9 @@  test_rw_writer(void *arg)
 	}
 
 	cycles = rte_rdtsc_precise() - begin;
-	rte_atomic64_add(&gwrite_cycles, cycles);
-	rte_atomic64_add(&gwrites, tbl_rw_test_param.num_insert);
+	__atomic_fetch_add(&gwrite_cycles, cycles, __ATOMIC_RELAXED);
+	__atomic_fetch_add(&gwrites, tbl_rw_test_param.num_insert,
+							__ATOMIC_RELAXED);
 	return 0;
 }
 
@@ -371,15 +369,11 @@  test_hash_readwrite_perf(struct perf *perf_results, int use_htm,
 
 	uint64_t start = 0, end = 0;
 
-	rte_atomic64_init(&greads);
-	rte_atomic64_init(&gwrites);
-	rte_atomic64_clear(&gwrites);
-	rte_atomic64_clear(&greads);
+	__atomic_store_n(&gwrites, 0, __ATOMIC_RELAXED);
+	__atomic_store_n(&greads, 0, __ATOMIC_RELAXED);
 
-	rte_atomic64_init(&gread_cycles);
-	rte_atomic64_clear(&gread_cycles);
-	rte_atomic64_init(&gwrite_cycles);
-	rte_atomic64_clear(&gwrite_cycles);
+	__atomic_store_n(&gread_cycles, 0, __ATOMIC_RELAXED);
+	__atomic_store_n(&gwrite_cycles, 0, __ATOMIC_RELAXED);
 
 	if (init_params(0, use_htm, 0, use_jhash) != 0)
 		goto err;
@@ -436,10 +430,10 @@  test_hash_readwrite_perf(struct perf *perf_results, int use_htm,
 		if (tot_worker_lcore < core_cnt[n] * 2)
 			goto finish;
 
-		rte_atomic64_clear(&greads);
-		rte_atomic64_clear(&gread_cycles);
-		rte_atomic64_clear(&gwrites);
-		rte_atomic64_clear(&gwrite_cycles);
+		__atomic_store_n(&greads, 0, __ATOMIC_RELAXED);
+		__atomic_store_n(&gread_cycles, 0, __ATOMIC_RELAXED);
+		__atomic_store_n(&gwrites, 0, __ATOMIC_RELAXED);
+		__atomic_store_n(&gwrite_cycles, 0, __ATOMIC_RELAXED);
 
 		rte_hash_reset(tbl_rw_test_param.h);
 
@@ -481,8 +475,8 @@  test_hash_readwrite_perf(struct perf *perf_results, int use_htm,
 
 		if (reader_faster) {
 			unsigned long long int cycles_per_insertion =
-				rte_atomic64_read(&gread_cycles) /
-				rte_atomic64_read(&greads);
+				__atomic_load_n(&gread_cycles, __ATOMIC_RELAXED) /
+				__atomic_load_n(&greads, __ATOMIC_RELAXED);
 			perf_results->read_only[n] = cycles_per_insertion;
 			printf("Reader only: cycles per lookup: %llu\n",
 							cycles_per_insertion);
@@ -490,17 +484,17 @@  test_hash_readwrite_perf(struct perf *perf_results, int use_htm,
 
 		else {
 			unsigned long long int cycles_per_insertion =
-				rte_atomic64_read(&gwrite_cycles) /
-				rte_atomic64_read(&gwrites);
+				__atomic_load_n(&gwrite_cycles, __ATOMIC_RELAXED) /
+				__atomic_load_n(&gwrites, __ATOMIC_RELAXED);
 			perf_results->write_only[n] = cycles_per_insertion;
 			printf("Writer only: cycles per writes: %llu\n",
 							cycles_per_insertion);
 		}
 
-		rte_atomic64_clear(&greads);
-		rte_atomic64_clear(&gread_cycles);
-		rte_atomic64_clear(&gwrites);
-		rte_atomic64_clear(&gwrite_cycles);
+		__atomic_store_n(&greads, 0, __ATOMIC_RELAXED);
+		__atomic_store_n(&gread_cycles, 0, __ATOMIC_RELAXED);
+		__atomic_store_n(&gwrites, 0, __ATOMIC_RELAXED);
+		__atomic_store_n(&gwrite_cycles, 0, __ATOMIC_RELAXED);
 
 		rte_hash_reset(tbl_rw_test_param.h);
 
@@ -575,8 +569,8 @@  test_hash_readwrite_perf(struct perf *perf_results, int use_htm,
 
 		if (reader_faster) {
 			unsigned long long int cycles_per_insertion =
-				rte_atomic64_read(&gread_cycles) /
-				rte_atomic64_read(&greads);
+				__atomic_load_n(&gread_cycles, __ATOMIC_RELAXED) /
+				__atomic_load_n(&greads, __ATOMIC_RELAXED);
 			perf_results->read_write_r[n] = cycles_per_insertion;
 			printf("Read-write cycles per lookup: %llu\n",
 							cycles_per_insertion);
@@ -584,8 +578,8 @@  test_hash_readwrite_perf(struct perf *perf_results, int use_htm,
 
 		else {
 			unsigned long long int cycles_per_insertion =
-				rte_atomic64_read(&gwrite_cycles) /
-				rte_atomic64_read(&gwrites);
+				__atomic_load_n(&gwrite_cycles, __ATOMIC_RELAXED) /
+				__atomic_load_n(&gwrites, __ATOMIC_RELAXED);
 			perf_results->read_write_w[n] = cycles_per_insertion;
 			printf("Read-write cycles per writes: %llu\n",
 							cycles_per_insertion);