[v5] mempool: test performance with larger bursts

Message ID 20240124112134.85549-1-mb@smartsharesystems.com (mailing list archive)
State Superseded, archived
Delegated to: Thomas Monjalon
Series: [v5] mempool: test performance with larger bursts

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/loongarch-compilation success Compilation OK
ci/loongarch-unit-testing success Unit Testing PASS
ci/Intel-compilation success Compilation OK
ci/intel-Testing success Testing PASS
ci/github-robot: build success github build: passed
ci/intel-Functional success Functional PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-compile-amd64-testing success Testing PASS
ci/iol-unit-amd64-testing success Testing PASS
ci/iol-broadcom-Performance success Performance Testing PASS
ci/iol-unit-arm64-testing success Testing PASS
ci/iol-broadcom-Functional success Functional Testing PASS
ci/iol-sample-apps-testing success Testing PASS
ci/iol-compile-arm64-testing success Testing PASS
ci/iol-mellanox-Performance success Performance Testing PASS

Commit Message

Morten Brørup Jan. 24, 2024, 11:21 a.m. UTC
Bursts of up to 64 or 128 packets are not uncommon, so increase the
maximum tested get and put burst sizes from 32 to 128.
For convenience, also test get and put burst sizes of
RTE_MEMPOOL_CACHE_MAX_SIZE.
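
Concretely, the tested burst-size tables become (excerpt from the patch
below):

  unsigned int bulk_tab_get[] = { 1, 4, CACHE_LINE_BURST, 32, 64, 128,
          RTE_MEMPOOL_CACHE_MAX_SIZE, 0 };
  unsigned int bulk_tab_put[] = { 1, 4, CACHE_LINE_BURST, 32, 64, 128,
          RTE_MEMPOOL_CACHE_MAX_SIZE, 0 };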

Some applications keep more than 512 objects, so increase the maximum
number of kept objects from 512 to 32768, still increasing by a factor of
four per step.
This exceeds the typical mempool cache size of 512 objects, so the test
also exercises the mempool driver.
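
For reference, the resulting keep parameters of the test are (excerpt from
the patch below):

  #define MAX_KEEP 32768
  #define N (128 * MAX_KEEP)

  unsigned int keep_tab[] = { 32, 128, 512, 2048, 8192, 32768, 0 };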

Increased the precision of the rate_persec calculation by timing the actual
duration of the test instead of assuming it took exactly 5 seconds.
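
Simplified, each lcore now records the cycle count of its own test loop, and
the aggregated rate is derived from it (sketch of the logic in the patch
below):

  /* per lcore: record how long the test loop actually ran */
  stats[lcore_id].duration_cycles = time_diff;

  /* aggregation: convert each lcore's count and cycles into ops/sec */
  double hz = rte_get_timer_hz();
  rate += (double)stats[lcore_id].enq_count * hz /
          (double)stats[lcore_id].duration_cycles;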

Added a cache guard to the per-lcore stats structure.
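
The guarded per-lcore structure (excerpt from the patch below):

  struct mempool_test_stats {
          uint64_t enq_count;
          uint64_t duration_cycles;
          RTE_CACHE_GUARD; /* avoid false sharing with the adjacent lcore's entry */
  } __rte_cache_aligned;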

Signed-off-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
---

v5:
* Increased N, to reduce measurement overhead with large numbers of kept
  objects.
* Increased precision of rate_persec calculation.
* Added missing cache guard to per-lcore stats structure.
v4:
* v3 failed to apply; I had messed up something with git.
* Added ACK from Chengwen Feng.
v3:
* Increased max number of kept objects to 32768.
* Added get and put burst sizes of RTE_MEMPOOL_CACHE_MAX_SIZE objects.
* Print error if unable to allocate mempool.
* Initialize use_external_cache with each test.
  A previous version of this patch had a bug, where all test runs
  following the first would use external cache. (Chengwen Feng)
v2: Addressed feedback by Chengwen Feng
* Added get and put burst sizes of 64 objects, which is probably also not
  an uncommon packet burst size.
* Fixed the list of numbers of kept objects so the list keeps increasing
  by a factor of four.
* Added three derivative test cases, for faster testing.
---
 app/test/test_mempool_perf.c | 137 ++++++++++++++++++++++-------------
 1 file changed, 86 insertions(+), 51 deletions(-)
  

Comments

Thomas Monjalon Feb. 18, 2024, 6:03 p.m. UTC | #1
24/01/2024 12:21, Morten Brørup:
> --- a/app/test/test_mempool_perf.c
> +++ b/app/test/test_mempool_perf.c
> @@ -1,6 +1,6 @@
>  /* SPDX-License-Identifier: BSD-3-Clause
>   * Copyright(c) 2010-2014 Intel Corporation
> - * Copyright(c) 2022 SmartShare Systems
> + * Copyright(c) 2022-2024 SmartShare Systems

You don't need to update copyright year.
The first year is the only important one.

reading: https://matija.suklje.name/how-and-why-to-properly-write-copyright-statements-in-your-code#why-not-bump-the-year-on-change

[...]
>  REGISTER_PERF_TEST(mempool_perf_autotest, test_mempool_perf);
> +REGISTER_PERF_TEST(mempool_perf_autotest_1core, test_mempool_perf_1core);
> +REGISTER_PERF_TEST(mempool_perf_autotest_2cores, test_mempool_perf_2cores);

How do we make sure the test is skipped if we have only 1 core?

> +REGISTER_PERF_TEST(mempool_perf_autotest_allcores, test_mempool_perf_allcores);

How is the test duration changed after this patch?
  
Morten Brørup Feb. 20, 2024, 1:49 p.m. UTC | #2
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Sunday, 18 February 2024 19.04
> 
> 24/01/2024 12:21, Morten Brørup:
> > --- a/app/test/test_mempool_perf.c
> > +++ b/app/test/test_mempool_perf.c
> > @@ -1,6 +1,6 @@
> >  /* SPDX-License-Identifier: BSD-3-Clause
> >   * Copyright(c) 2010-2014 Intel Corporation
> > - * Copyright(c) 2022 SmartShare Systems
> > + * Copyright(c) 2022-2024 SmartShare Systems
> 
> You don't need to update copyright year.
> The first year is the only important one.
> 
> reading: https://matija.suklje.name/how-and-why-to-properly-write-
> copyright-statements-in-your-code#why-not-bump-the-year-on-change

Thank you, Thomas. Very informative.
Will fix in next version.

> 
> [...]
> >  REGISTER_PERF_TEST(mempool_perf_autotest, test_mempool_perf);
> > +REGISTER_PERF_TEST(mempool_perf_autotest_1core,
> test_mempool_perf_1core);
> > +REGISTER_PERF_TEST(mempool_perf_autotest_2cores,
> test_mempool_perf_2cores);
> 
> How do we make sure the test is skipped if we have only 1 core?

Good point. Will fix in next version.
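
For illustration, a guard along these lines could be added in a later
revision (hypothetical sketch, not part of this v5 patch):

  static int
  test_mempool_perf_2cores(void)
  {
          if (rte_lcore_count() < 2) {
                  printf("not enough lcores\n");
                  return TEST_SKIPPED;
          }
          return do_all_mempool_perf_tests(2);
  }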

> 
> > +REGISTER_PERF_TEST(mempool_perf_autotest_allcores,
> test_mempool_perf_allcores);
> 
> How is the test duration changed after this patch?
> 

On my test machine, the expanded test parameter set increased the duration of one test run from 20 minutes to 100 minutes.
Before the patch, all three test runs were always executed, i.e. a total duration of 60 minutes.

In other words:
The expanded test parameter set increased the test run duration by factor five.
Introducing the ability to optionally only test with a specific number of lcores reduced the total test duration to a third.
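
(Putting the stated numbers together: before the patch, one invocation ran
the 1-, 2- and all-core variants back to back, i.e. 3 x 20 min = 60 min.
With the expanded parameter set that would grow to 3 x 100 min = 300 min,
whereas each of the new single-variant autotests takes roughly 100 min,
i.e. one third.)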
  
Thomas Monjalon Feb. 21, 2024, 10:22 a.m. UTC | #3
20/02/2024 14:49, Morten Brørup:
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > 24/01/2024 12:21, Morten Brørup:
> > >  REGISTER_PERF_TEST(mempool_perf_autotest, test_mempool_perf);
> > > +REGISTER_PERF_TEST(mempool_perf_autotest_1core,
> > test_mempool_perf_1core);
> > > +REGISTER_PERF_TEST(mempool_perf_autotest_2cores,
> > test_mempool_perf_2cores);
> > 
> > How do we make sure the test is skipped if we have only 1 core?
> 
> Good point. Will fix in next version.
> 
> > 
> > > +REGISTER_PERF_TEST(mempool_perf_autotest_allcores,
> > test_mempool_perf_allcores);
> > 
> > How is the test duration changed after this patch?
> 
> On my test machine, the expanded test parameter set increased the duration of one test run from 20 minutes to 100 minutes.
> Before the patch, all three test runs were always executed, i.e. a total duration of 60 minutes.
> 
> In other words:
> The expanded test parameter set increased the test run duration by factor five.
> Introducing the ability to optionally only test with a specific number of lcores reduced the total test duration to a third.

That's a very long test.
It would be interesting to find a way to make it shorter.
  
Morten Brørup Feb. 21, 2024, 10:38 a.m. UTC | #4
> From: Thomas Monjalon [mailto:thomas@monjalon.net]
> Sent: Wednesday, 21 February 2024 11.23
> 
> 20/02/2024 14:49, Morten Brørup:
> > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > >
> > > How is the test duration changed after this patch?
> >
> > On my test machine, the expanded test parameter set increased the
> duration of one test run from 20 minutes to 100 minutes.
> > Before the patch, all three test runs were always executed, i.e. a
> total duration of 60 minutes.
> >
> > In other words:
> > The expanded test parameter set increased the test run duration by
> factor five.
> > Introducing the ability to optionally only test with a specific
> number of lcores reduced the total test duration to a third.
> 
> That's a very long test.
> It would be interesting to find a way to make it shorter.

I tried looking into this, but I couldn't figure out how to pass parameters to a test, so I added the three variants with shorter tests, as suggested by Chengwen Feng.
  
Bruce Richardson Feb. 21, 2024, 10:40 a.m. UTC | #5
On Wed, Feb 21, 2024 at 11:38:34AM +0100, Morten Brørup wrote:
> > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > Sent: Wednesday, 21 February 2024 11.23
> > 
> > 20/02/2024 14:49, Morten Brørup:
> > > From: Thomas Monjalon [mailto:thomas@monjalon.net]
> > > >
> > > > How is the test duration changed after this patch?
> > >
> > > On my test machine, the expanded test parameter set increased the
> > duration of one test run from 20 minutes to 100 minutes.
> > > Before the patch, all three test runs were always executed, i.e. a
> > total duration of 60 minutes.
> > >
> > > In other words:
> > > The expanded test parameter set increased the test run duration by
> > factor five.
> > > Introducing the ability to optionally only test with a specific
> > number of lcores reduced the total test duration to a third.
> > 
> > That's a very long test.
> > It would be interesting to find a way to make it shorter.
> 
> I tried looking into this, but I couldn't figure out how to pass parameters to a test, so I added the three variants with shorter tests, as suggested by Chengwen Feng.
> 

It's not currently possible, but:
https://patches.dpdk.org/project/dpdk/patch/20231215130656.247582-1-bruce.richardson@intel.com/

/Bruce
  

Patch

diff --git a/app/test/test_mempool_perf.c b/app/test/test_mempool_perf.c
index 96de347f04..dcdfb52020 100644
--- a/app/test/test_mempool_perf.c
+++ b/app/test/test_mempool_perf.c
@@ -1,6 +1,6 @@ 
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright(c) 2010-2014 Intel Corporation
- * Copyright(c) 2022 SmartShare Systems
+ * Copyright(c) 2022-2024 SmartShare Systems
  */
 
 #include <string.h>
@@ -54,22 +54,25 @@ 
  *
  *    - Bulk size (*n_get_bulk*, *n_put_bulk*)
  *
- *      - Bulk get from 1 to 32
- *      - Bulk put from 1 to 32
- *      - Bulk get and put from 1 to 32, compile time constant
+ *      - Bulk get from 1 to 128, and RTE_MEMPOOL_CACHE_MAX_SIZE
+ *      - Bulk put from 1 to 128, and RTE_MEMPOOL_CACHE_MAX_SIZE
+ *      - Bulk get and put from 1 to 128, and RTE_MEMPOOL_CACHE_MAX_SIZE, compile time constant
  *
  *    - Number of kept objects (*n_keep*)
  *
  *      - 32
  *      - 128
  *      - 512
+ *      - 2048
+ *      - 8192
+ *      - 32768
  */
 
-#define N 65536
 #define TIME_S 5
 #define MEMPOOL_ELT_SIZE 2048
-#define MAX_KEEP 512
-#define MEMPOOL_SIZE ((rte_lcore_count()*(MAX_KEEP+RTE_MEMPOOL_CACHE_MAX_SIZE))-1)
+#define MAX_KEEP 32768
+#define N (128 * MAX_KEEP)
+#define MEMPOOL_SIZE ((rte_lcore_count()*(MAX_KEEP+RTE_MEMPOOL_CACHE_MAX_SIZE*2))-1)
 
 /* Number of pointers fitting into one cache line. */
 #define CACHE_LINE_BURST (RTE_CACHE_LINE_SIZE / sizeof(uintptr_t))
@@ -100,9 +103,11 @@  static unsigned n_keep;
 /* true if we want to test with constant n_get_bulk and n_put_bulk */
 static int use_constant_values;
 
-/* number of enqueues / dequeues */
+/* number of enqueues / dequeues, and time used */
 struct mempool_test_stats {
 	uint64_t enq_count;
+	uint64_t duration_cycles;
+	RTE_CACHE_GUARD;
 } __rte_cache_aligned;
 
 static struct mempool_test_stats stats[RTE_MAX_LCORE];
@@ -185,6 +190,7 @@  per_lcore_mempool_test(void *arg)
 		GOTO_ERR(ret, out);
 
 	stats[lcore_id].enq_count = 0;
+	stats[lcore_id].duration_cycles = 0;
 
 	/* wait synchro for workers */
 	if (lcore_id != rte_get_main_lcore())
@@ -204,6 +210,13 @@  per_lcore_mempool_test(void *arg)
 					CACHE_LINE_BURST, CACHE_LINE_BURST);
 		else if (n_get_bulk == 32)
 			ret = test_loop(mp, cache, n_keep, 32, 32);
+		else if (n_get_bulk == 64)
+			ret = test_loop(mp, cache, n_keep, 64, 64);
+		else if (n_get_bulk == 128)
+			ret = test_loop(mp, cache, n_keep, 128, 128);
+		else if (n_get_bulk == RTE_MEMPOOL_CACHE_MAX_SIZE)
+			ret = test_loop(mp, cache, n_keep,
+					RTE_MEMPOOL_CACHE_MAX_SIZE, RTE_MEMPOOL_CACHE_MAX_SIZE);
 		else
 			ret = -1;
 
@@ -215,6 +228,8 @@  per_lcore_mempool_test(void *arg)
 		stats[lcore_id].enq_count += N;
 	}
 
+	stats[lcore_id].duration_cycles = time_diff;
+
 out:
 	if (use_external_cache) {
 		rte_mempool_cache_flush(cache, mp);
@@ -232,6 +247,7 @@  launch_cores(struct rte_mempool *mp, unsigned int cores)
 	uint64_t rate;
 	int ret;
 	unsigned cores_save = cores;
+	double hz = rte_get_timer_hz();
 
 	__atomic_store_n(&synchro, 0, __ATOMIC_RELAXED);
 
@@ -278,7 +294,9 @@  launch_cores(struct rte_mempool *mp, unsigned int cores)
 
 	rate = 0;
 	for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++)
-		rate += (stats[lcore_id].enq_count / TIME_S);
+		if (stats[lcore_id].duration_cycles != 0)
+			rate += (double)stats[lcore_id].enq_count * hz /
+					(double)stats[lcore_id].duration_cycles;
 
 	printf("rate_persec=%" PRIu64 "\n", rate);
 
@@ -287,11 +305,13 @@  launch_cores(struct rte_mempool *mp, unsigned int cores)
 
 /* for a given number of core, launch all test cases */
 static int
-do_one_mempool_test(struct rte_mempool *mp, unsigned int cores)
+do_one_mempool_test(struct rte_mempool *mp, unsigned int cores, int external_cache)
 {
-	unsigned int bulk_tab_get[] = { 1, 4, CACHE_LINE_BURST, 32, 0 };
-	unsigned int bulk_tab_put[] = { 1, 4, CACHE_LINE_BURST, 32, 0 };
-	unsigned int keep_tab[] = { 32, 128, 512, 0 };
+	unsigned int bulk_tab_get[] = { 1, 4, CACHE_LINE_BURST, 32, 64, 128,
+			RTE_MEMPOOL_CACHE_MAX_SIZE, 0 };
+	unsigned int bulk_tab_put[] = { 1, 4, CACHE_LINE_BURST, 32, 64, 128,
+			RTE_MEMPOOL_CACHE_MAX_SIZE, 0 };
+	unsigned int keep_tab[] = { 32, 128, 512, 2048, 8192, 32768, 0 };
 	unsigned *get_bulk_ptr;
 	unsigned *put_bulk_ptr;
 	unsigned *keep_ptr;
@@ -301,6 +321,10 @@  do_one_mempool_test(struct rte_mempool *mp, unsigned int cores)
 		for (put_bulk_ptr = bulk_tab_put; *put_bulk_ptr; put_bulk_ptr++) {
 			for (keep_ptr = keep_tab; *keep_ptr; keep_ptr++) {
 
+				if (*keep_ptr < *get_bulk_ptr || *keep_ptr < *put_bulk_ptr)
+					continue;
+
+				use_external_cache = external_cache;
 				use_constant_values = 0;
 				n_get_bulk = *get_bulk_ptr;
 				n_put_bulk = *put_bulk_ptr;
@@ -323,7 +347,7 @@  do_one_mempool_test(struct rte_mempool *mp, unsigned int cores)
 }
 
 static int
-test_mempool_perf(void)
+do_all_mempool_perf_tests(unsigned int cores)
 {
 	struct rte_mempool *mp_cache = NULL;
 	struct rte_mempool *mp_nocache = NULL;
@@ -337,8 +361,10 @@  test_mempool_perf(void)
 					NULL, NULL,
 					my_obj_init, NULL,
 					SOCKET_ID_ANY, 0);
-	if (mp_nocache == NULL)
+	if (mp_nocache == NULL) {
+		printf("cannot allocate mempool (without cache)\n");
 		goto err;
+	}
 
 	/* create a mempool (with cache) */
 	mp_cache = rte_mempool_create("perf_test_cache", MEMPOOL_SIZE,
@@ -347,8 +373,10 @@  test_mempool_perf(void)
 				      NULL, NULL,
 				      my_obj_init, NULL,
 				      SOCKET_ID_ANY, 0);
-	if (mp_cache == NULL)
+	if (mp_cache == NULL) {
+		printf("cannot allocate mempool (with cache)\n");
 		goto err;
+	}
 
 	default_pool_ops = rte_mbuf_best_mempool_ops();
 	/* Create a mempool based on Default handler */
@@ -376,65 +404,72 @@  test_mempool_perf(void)
 
 	rte_mempool_obj_iter(default_pool, my_obj_init, NULL);
 
-	/* performance test with 1, 2 and max cores */
 	printf("start performance test (without cache)\n");
-
-	if (do_one_mempool_test(mp_nocache, 1) < 0)
-		goto err;
-
-	if (do_one_mempool_test(mp_nocache, 2) < 0)
+	if (do_one_mempool_test(mp_nocache, cores, 0) < 0)
 		goto err;
 
-	if (do_one_mempool_test(mp_nocache, rte_lcore_count()) < 0)
-		goto err;
-
-	/* performance test with 1, 2 and max cores */
 	printf("start performance test for %s (without cache)\n",
 	       default_pool_ops);
-
-	if (do_one_mempool_test(default_pool, 1) < 0)
+	if (do_one_mempool_test(default_pool, cores, 0) < 0)
 		goto err;
 
-	if (do_one_mempool_test(default_pool, 2) < 0)
+	printf("start performance test (with cache)\n");
+	if (do_one_mempool_test(mp_cache, cores, 0) < 0)
 		goto err;
 
-	if (do_one_mempool_test(default_pool, rte_lcore_count()) < 0)
+	printf("start performance test (with user-owned cache)\n");
+	if (do_one_mempool_test(mp_nocache, cores, 1) < 0)
 		goto err;
 
-	/* performance test with 1, 2 and max cores */
-	printf("start performance test (with cache)\n");
+	rte_mempool_list_dump(stdout);
 
-	if (do_one_mempool_test(mp_cache, 1) < 0)
-		goto err;
+	ret = 0;
 
-	if (do_one_mempool_test(mp_cache, 2) < 0)
-		goto err;
+err:
+	rte_mempool_free(mp_cache);
+	rte_mempool_free(mp_nocache);
+	rte_mempool_free(default_pool);
+	return ret;
+}
 
-	if (do_one_mempool_test(mp_cache, rte_lcore_count()) < 0)
-		goto err;
+static int
+test_mempool_perf_1core(void)
+{
+	return do_all_mempool_perf_tests(1);
+}
 
-	/* performance test with 1, 2 and max cores */
-	printf("start performance test (with user-owned cache)\n");
-	use_external_cache = 1;
+static int
+test_mempool_perf_2cores(void)
+{
+	return do_all_mempool_perf_tests(2);
+}
 
-	if (do_one_mempool_test(mp_nocache, 1) < 0)
-		goto err;
+static int
+test_mempool_perf_allcores(void)
+{
+	return do_all_mempool_perf_tests(rte_lcore_count());
+}
 
-	if (do_one_mempool_test(mp_nocache, 2) < 0)
-		goto err;
+static int
+test_mempool_perf(void)
+{
+	int ret = -1;
 
-	if (do_one_mempool_test(mp_nocache, rte_lcore_count()) < 0)
+	/* performance test with 1, 2 and max cores */
+	if (do_all_mempool_perf_tests(1) < 0)
+		goto err;
+	if (do_all_mempool_perf_tests(2) < 0)
+		goto err;
+	if (do_all_mempool_perf_tests(rte_lcore_count()) < 0)
 		goto err;
-
-	rte_mempool_list_dump(stdout);
 
 	ret = 0;
 
 err:
-	rte_mempool_free(mp_cache);
-	rte_mempool_free(mp_nocache);
-	rte_mempool_free(default_pool);
 	return ret;
 }
 
 REGISTER_PERF_TEST(mempool_perf_autotest, test_mempool_perf);
+REGISTER_PERF_TEST(mempool_perf_autotest_1core, test_mempool_perf_1core);
+REGISTER_PERF_TEST(mempool_perf_autotest_2cores, test_mempool_perf_2cores);
+REGISTER_PERF_TEST(mempool_perf_autotest_allcores, test_mempool_perf_allcores);