[v6,3/3] app/test: add allocator performance autotest

Message ID 20211011085644.2716490-4-dkozlyuk@nvidia.com (mailing list archive)
State Not Applicable, archived
Delegated to: David Marchand
Headers
Series: eal: add memory pre-allocation from existing files

Checks

Context Check Description
ci/checkpatch warning coding style issues
ci/github-robot: build success github build: passed
ci/iol-spell-check-testing warning Testing issues
ci/Intel-compilation success Compilation OK
ci/intel-Testing success Testing PASS
ci/iol-broadcom-Functional success Functional Testing PASS
ci/iol-broadcom-Performance success Performance Testing PASS
ci/iol-x86_64-compile-testing success Testing PASS
ci/iol-x86_64-unit-testing success Testing PASS
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-aarch64-unit-testing success Testing PASS
ci/iol-aarch64-compile-testing success Testing PASS

Commit Message

Dmitry Kozlyuk Oct. 11, 2021, 8:56 a.m. UTC
  Memory allocator performance is crucial to applications that deal
with large amounts of memory or allocate frequently. DPDK allocator
performance is affected by EAL options, the API used and, not least,
the allocation size. The new autotest is intended to be run with
different EAL options. It measures performance with a range of sizes
for different APIs: rte_malloc, rte_zmalloc, and rte_memzone_reserve.

Work distribution between allocation and deallocation depends on EAL
options. The test prints both times and the total time to ease comparison.

Memory can be filled with zeroes at different points of the allocation
path, but it always takes a considerable fraction of the overall time.
This is why the test measures the filling speed and prints how long
clearing would take for each size as a hint.

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
---
 app/test/meson.build        |   2 +
 app/test/test_malloc_perf.c | 161 ++++++++++++++++++++++++++++++++++++
 2 files changed, 163 insertions(+)
 create mode 100644 app/test/test_malloc_perf.c
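
As a side note on the "hint" described in the commit message: the per-size
clearing estimate is a straight linear extrapolation of the measured 1 GiB
fill time. A minimal standalone sketch of that arithmetic (the 100000 us/GiB
figure is made up for the example):

#include <stdio.h>

int main(void)
{
	const double memset_gb_us = 100000.0;	/* assumed: measured us per GiB */
	const double gb = 1 << 30;
	const double size = 1 << 20;		/* 1 MiB allocation */

	/* Same formula the test uses: memset_gb_us * size / GB. */
	printf("estimated clearing time: %.2f us\n", memset_gb_us * size / gb);
	return 0;	/* prints roughly 97.66 us for these numbers */
}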
  

Comments

Aaron Conole Oct. 12, 2021, 1:53 p.m. UTC | #1
Dmitry Kozlyuk <dkozlyuk@oss.nvidia.com> writes:

> Memory allocator performance is crucial to applications that deal
> with large amount of memory or allocate frequently. DPDK allocator
> performance is affected by EAL options, API used and, at least,
> allocation size. New autotest is intended to be run with different
> EAL options. It measures performance with a range of sizes
> for different APIs: rte_malloc, rte_zmalloc, and rte_memzone_reserve.
>
> Work distribution between allocation and deallocation depends on EAL
> options. The test prints both times and total time to ease comparison.
>
> Memory can be filled with zeroes at different points of allocation path,
> but it always takes considerable fraction of overall timing. This is why
> the test measures filling speed and prints how long clearing would take
> for each size as a hint.
>
> Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
> Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
> ---

This isn't really a test, imho.  There are no assert()s.  How does a
developer who tries to fix a bug in this area know what is acceptable?

Please switch the printf()s to RTE_LOG calls, and add some
RTE_TEST_ASSERT calls to enforce some time range at the least.
Otherwise this test will not really be checking the performance - just
giving a report somewhere.
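
As an illustration of the kind of check being suggested here (a sketch only:
check_alloc_time() and MAX_ALLOC_US are hypothetical, not part of the patch,
and choosing a bound that holds across platforms and EAL options is exactly
the difficulty):

#include <stddef.h>

#include "test.h"	/* RTE_TEST_ASSERT, TEST_SUCCESS */

/* Hypothetical upper bound; a real value would need per-platform tuning. */
static const double MAX_ALLOC_US = 1000.0;

static int
check_alloc_time(size_t size, double alloc_us)
{
	RTE_TEST_ASSERT(alloc_us < MAX_ALLOC_US,
			"alloc of %zu B took %.2f us, expected < %.2f us",
			size, alloc_us, MAX_ALLOC_US);
	return TEST_SUCCESS;
}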

Also, I don't understand the way the memset test works here.  You do one
large memset at the very beginning and then extrapolate the time it
would take.  Does that hold any value or should we do a memset in each
iteration and enforce a scaled time?
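
For reference, a rough sketch of the per-iteration alternative being asked
about (not the submitted code; it assumes the ptrs[]/runs_done/size variables
and the tsc_to_us() helper from test_alloc_perf(), and would run between the
allocation and free loops):

#include <stddef.h>
#include <stdint.h>
#include <string.h>

#include <rte_cycles.h>

/* Sketch only: time memset() on each block that was actually allocated
 * instead of extrapolating from a single 1 GiB run. */
static double
measure_memset_us(void **ptrs, size_t runs_done, size_t size)
{
	uint64_t tsc_start;
	size_t j;

	tsc_start = rte_rdtsc_precise();
	for (j = 0; j < runs_done; j++)
		memset(ptrs[j], 0, size);
	return tsc_to_us(rte_rdtsc_precise() - tsc_start, runs_done);
}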

Dmitry Kozlyuk Oct. 12, 2021, 2:48 p.m. UTC | #2
> This isn't really a test, imho.  There are no assert()s.  How does a developer who
> tries to fix a bug in this area know what is acceptable?
> 
> Please switch the printf()s to RTE_LOG calls, and add some RTE_TEST_ASSERT
> calls to enforce some time range at the least.
> Otherwise this test will not really be checking the performance - just giving a
> report somewhere.

I just followed the DPDK naming convention of test_xxx_perf.c / xxx_perf_autotest.
They all should really be called benchmarks.
They help developers to see how code changes affect performance.
I don't understand how this "perf test" is not in line with the existing ones
and where it should properly reside.

I'm not totally opposed to replacing printf() with RTE_LOG(), but all other tests use printf().
The drawback of the change is inconsistency; what is the benefit?

> Also, I don't understand the way the memset test works here.  You do one large
> memset at the very beginning and then extrapolate the time it would take.  Does
> that hold any value or should we do a memset in each iteration and enforce a
> scaled time?

As explained above, we don't need to enforce anything; we want a report.
I've never seen a case with one NUMA node where memset() time would not scale linearly,
but benchmarks should be precise, so I'll change it to memset()'ing the allocated area, thanks.
  
Aaron Conole Oct. 15, 2021, 1:47 p.m. UTC | #3
Dmitry Kozlyuk <dkozlyuk@nvidia.com> writes:

>> This isn't really a test, imho.  There are no assert()s.  How does a developer who
>> tries to fix a bug in this area know what is acceptable?
>> 
>> Please switch the printf()s to RTE_LOG calls, and add some RTE_TEST_ASSERT
>> calls to enforce some time range at the least.
>> Otherwise this test will not really be checking the performance - just giving a
>> report somewhere.
>
> I just followed DPDK naming convention of test_xxx_perf.c / xxx_perf_autotest.
> They all should really be called benchmarks.

Agreed - they are not really tests and it makes me wonder why we label
them as such.  It will be confusing.  A developer who runs the perf test
suite will just see "OK" everywhere and assume that all the tests are
working - even if they introduce a performance regression.

Maybe it would make sense to relabel them (perf-benchmark or something),
so that there isn't an expectation that we have PASS / FAIL.  That's a
larger scope than this patch, though.

> They help developers to see how the code changes affect performance.
> I don't understand how this "perf test" is not in line with existing ones
> and where it should properly reside.
>
> I'm not totally opposed to replacing printf() with RTE_LOG(), but all other tests use printf().
> The drawback of the change is inconsistency, what is the benefit?

RTE_LOG is captured in other places as well; printf(), depending on how
the test app is run, might not go anywhere.  Also, the ipsec perf test,
at least, has started introducing RTE_LOG() calls - although even there,
printf() is used for reports.
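
For illustration, what a report line could look like if routed through
RTE_LOG() (a sketch only: report_line() is hypothetical and USER1 is just
an example logtype):

#include <stddef.h>

#include <rte_log.h>

static void
report_line(size_t size, size_t runs, double alloc_us, double free_us)
{
	/* Unlike printf(), this goes wherever EAL logging is directed. */
	RTE_LOG(INFO, USER1,
		"size=%zu runs=%zu alloc=%.2fus free=%.2fus total=%.2fus\n",
		size, runs, alloc_us, free_us, alloc_us + free_us);
}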

I guess it's very confusing to call all of these 'tests' since they
aren't.

But that's an aside, and I guess this is consistent with existing
_perf.c files.

>> Also, I don't understand the way the memset test works here.  You do one large
>> memset at the very beginning and then extrapolate the time it would take.  Does
>> that hold any value or should we do a memset in each iteration and enforce a
>> scaled time?
>
> As explained above, we don't need to enforce anything, we want a report.
> I've never seen a case with one NUMA node where memset() time would not scale linearly,
> but benchmarks should be precise so I'll change it to memset()'ing the allocated area, thanks.
  

Patch

diff --git a/app/test/meson.build b/app/test/meson.build
index f144d8b8ed..47d1d60ded 100644
--- a/app/test/meson.build
+++ b/app/test/meson.build
@@ -85,6 +85,7 @@  test_sources = files(
         'test_lpm6_perf.c',
         'test_lpm_perf.c',
         'test_malloc.c',
+        'test_malloc_perf.c',
         'test_mbuf.c',
         'test_member.c',
         'test_member_perf.c',
@@ -282,6 +283,7 @@  fast_tests = [
 
 perf_test_names = [
         'ring_perf_autotest',
+        'malloc_perf_autotest',
         'mempool_perf_autotest',
         'memcpy_perf_autotest',
         'hash_perf_autotest',
diff --git a/app/test/test_malloc_perf.c b/app/test/test_malloc_perf.c
new file mode 100644
index 0000000000..fa7357f540
--- /dev/null
+++ b/app/test/test_malloc_perf.c
@@ -0,0 +1,161 @@ 
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright (c) 2021 NVIDIA Corporation & Affiliates
+ */
+
+#include <inttypes.h>
+#include <string.h>
+#include <rte_cycles.h>
+#include <rte_errno.h>
+#include <rte_malloc.h>
+#include <rte_memzone.h>
+
+#include "test.h"
+
+typedef void * (alloc_t)(const char *name, size_t size, unsigned int align);
+typedef void (free_t)(void *addr);
+
+static const uint64_t KB = 1 << 10;
+static const uint64_t GB = 1 << 30;
+
+static double
+tsc_to_us(uint64_t tsc, size_t runs)
+{
+	return (double)tsc / rte_get_tsc_hz() * US_PER_S / runs;
+}
+
+static int
+test_memset_perf(double *us_per_gb)
+{
+	static const size_t RUNS = 20;
+
+	void *ptr;
+	size_t i;
+	uint64_t tsc;
+
+	puts("Performance: memset");
+
+	ptr = rte_malloc(NULL, GB, 0);
+	if (ptr == NULL) {
+		printf("rte_malloc(size=%"PRIx64") failed\n", GB);
+		return -1;
+	}
+
+	tsc = rte_rdtsc_precise();
+	for (i = 0; i < RUNS; i++)
+		memset(ptr, 0, GB);
+	tsc = rte_rdtsc_precise() - tsc;
+
+	*us_per_gb = tsc_to_us(tsc, RUNS);
+	printf("Result: %f.3 GiB/s <=> %.2f us/MiB\n",
+			US_PER_S / *us_per_gb, *us_per_gb / KB);
+
+	rte_free(ptr);
+	putchar('\n');
+	return 0;
+}
+
+static int
+test_alloc_perf(const char *name, alloc_t *alloc_fn, free_t free_fn,
+		size_t max_runs, double memset_gb_us)
+{
+	static const size_t SIZES[] = {
+			1 << 6, 1 << 7, 1 << 10, 1 << 12, 1 << 16, 1 << 20,
+			1 << 21, 1 << 22, 1 << 24, 1 << 30 };
+
+	size_t i, j;
+	void **ptrs;
+
+	printf("Performance: %s\n", name);
+
+	ptrs = calloc(max_runs, sizeof(ptrs[0]));
+	if (ptrs == NULL) {
+		puts("Cannot allocate memory for pointers");
+		return -1;
+	}
+
+	printf("%12s%8s%12s%12s%12s%12s\n",
+			"Size (B)", "Runs", "Alloc (us)", "Free (us)",
+			"Total (us)", "memset (us)");
+	for (i = 0; i < RTE_DIM(SIZES); i++) {
+		size_t size = SIZES[i];
+		size_t runs_done;
+		uint64_t tsc_start, tsc_alloc, tsc_free;
+		double alloc_time, free_time, memset_time;
+
+		tsc_start = rte_rdtsc_precise();
+		for (j = 0; j < max_runs; j++) {
+			ptrs[j] = alloc_fn(NULL, size, 0);
+			if (ptrs[j] == NULL)
+				break;
+		}
+		tsc_alloc = rte_rdtsc_precise() - tsc_start;
+
+		if (j == 0) {
+			printf("%12zu Interrupted: out of memory.\n", size);
+			break;
+		}
+		runs_done = j;
+
+		tsc_start = rte_rdtsc_precise();
+		for (j = 0; j < runs_done && ptrs[j] != NULL; j++)
+			free_fn(ptrs[j]);
+		tsc_free = rte_rdtsc_precise() - tsc_start;
+
+		alloc_time = tsc_to_us(tsc_alloc, runs_done);
+		free_time = tsc_to_us(tsc_free, runs_done);
+		memset_time = memset_gb_us * size / GB;
+		printf("%12zu%8zu%12.2f%12.2f%12.2f%12.2f\n",
+				size, runs_done, alloc_time, free_time,
+				alloc_time + free_time, memset_time);
+
+		memset(ptrs, 0, max_runs * sizeof(ptrs[0]));
+	}
+
+	free(ptrs);
+	putchar('\n');
+	return 0;
+}
+
+static void *
+memzone_alloc(const char *name __rte_unused, size_t size, unsigned int align)
+{
+	const struct rte_memzone *mz;
+	char gen_name[RTE_MEMZONE_NAMESIZE];
+
+	snprintf(gen_name, sizeof(gen_name), "test-mz-%"PRIx64, rte_rdtsc());
+	mz = rte_memzone_reserve_aligned(gen_name, size, SOCKET_ID_ANY,
+			RTE_MEMZONE_1GB | RTE_MEMZONE_SIZE_HINT_ONLY, align);
+	return (void *)(uintptr_t)mz;
+}
+
+static void
+memzone_free(void *addr)
+{
+	rte_memzone_free((struct rte_memzone *)addr);
+}
+
+static int
+test_malloc_perf(void)
+{
+	static const size_t MAX_RUNS = 10000;
+
+	double memset_gb_us;
+
+	if (test_memset_perf(&memset_gb_us) < 0)
+		return -1;
+
+	if (test_alloc_perf("rte_malloc", rte_malloc, rte_free,
+			MAX_RUNS, memset_gb_us) < 0)
+		return -1;
+	if (test_alloc_perf("rte_zmalloc", rte_zmalloc, rte_free,
+			MAX_RUNS, memset_gb_us) < 0)
+		return -1;
+
+	if (test_alloc_perf("rte_memzone_reserve", memzone_alloc, memzone_free,
+			RTE_MAX_MEMZONE - 1, memset_gb_us) < 0)
+		return -1;
+
+	return 0;
+}
+
+REGISTER_TEST_COMMAND(malloc_perf_autotest, test_malloc_perf);
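
A note on the design: memzone_alloc()/memzone_free() exist only to fit
rte_memzone_reserve_aligned() into the alloc_t/free_t signatures the harness
expects, so further allocators can be benchmarked by adding similar wrappers.
A hypothetical example (not part of the patch) for rte_calloc():

#include <stddef.h>

#include <rte_malloc.h>

/* Hypothetical adapter matching the patch's alloc_t signature. */
static void *
calloc_alloc(const char *name, size_t size, unsigned int align)
{
	return rte_calloc(name, 1, size, align);
}

/* It would then be passed to the harness like the other APIs:
 *   test_alloc_perf("rte_calloc", calloc_alloc, rte_free,
 *                   MAX_RUNS, memset_gb_us);
 */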