[v2] test/lcores: reduce cpu consumption
Commit Message
Busy looping on RTE_MAX_LCORES threads is too heavy in some CI or build
systems running the fast-test testsuite.
Ask for a reschedule at the threads synchronisation points.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Luca Boccassi <bluca@debian.org>
---
Changes since v1:
- fix build with mingw,
---
app/test/test_lcores.c | 20 +++++++++++++++-----
1 file changed, 15 insertions(+), 5 deletions(-)
Comments
On Thu, 7 Mar 2024 15:16:06 +0100
David Marchand <david.marchand@redhat.com> wrote:
> Busy looping on RTE_MAX_LCORES threads is too heavy in some CI or build
> systems running the fast-test testsuite.
> Ask for a reschedule at the threads synchronisation points.
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> Acked-by: Luca Boccassi <bluca@debian.org>
> ---
That test was always failing on my little desktop machine; now it works.
Tested-by: Stephen Hemminger <stephen@networkplumber.org>
On Thu, 7 Mar 2024 15:16:06 +0100
David Marchand <david.marchand@redhat.com> wrote:
> +#ifndef _POSIX_PRIORITY_SCHEDULING
> +/* sched_yield(2):
> + * POSIX systems on which sched_yield() is available define _POSIX_PRIOR‐
> + * ITY_SCHEDULING in <unistd.h>.
> + */
> +#define sched_yield()
> +#endif
Could you fix the awkward line break in that comment before merging :-)
On Thu, Mar 7, 2024 at 3:16 PM David Marchand <david.marchand@redhat.com> wrote:
>
> Busy looping on RTE_MAX_LCORES threads is too heavy in some CI or build
> systems running the fast-test testsuite.
> Ask for a reschedule at the threads synchronisation points.
>
> Signed-off-by: David Marchand <david.marchand@redhat.com>
> Acked-by: Luca Boccassi <bluca@debian.org>
Ideally, this test should be rewritten with some kind of OS-agnostic
synchronisation/scheduling API (mutex?).
But I think it will be enough for now.
I updated the code comment as requested by Stephen.
Applied, thanks.
On Thu, Mar 07, 2024 at 07:06:26PM +0100, David Marchand wrote:
> On Thu, Mar 7, 2024 at 3:16 PM David Marchand <david.marchand@redhat.com> wrote:
> >
> > Busy looping on RTE_MAX_LCORES threads is too heavy in some CI or build
> > systems running the fast-test testsuite.
> > Ask for a reschedule at the threads synchronisation points.
> >
> > Signed-off-by: David Marchand <david.marchand@redhat.com>
> > Acked-by: Luca Boccassi <bluca@debian.org>
>
> Ideally, this test should be rewritten with some kind of OS-agnostic
> synchronisation/scheduling API (mutex?).
> But I think it will be enough for now.
It's okay, I'll eventually get to this :)
>
> I updated the code comment as requested by Stephen.
>
> Applied, thanks.
>
> --
> David Marchand
> From: Tyler Retzlaff [mailto:roretzla@linux.microsoft.com]
> Sent: Thursday, 7 March 2024 19.37
>
> On Thu, Mar 07, 2024 at 07:06:26PM +0100, David Marchand wrote:
> > On Thu, Mar 7, 2024 at 3:16 PM David Marchand
> <david.marchand@redhat.com> wrote:
> > >
> > > Busy looping on RTE_MAX_LCORES threads is too heavy in some CI or
> build
> > > systems running the fast-test testsuite.
> > > Ask for a reschedule at the threads synchronisation points.
> > >
> > > Signed-off-by: David Marchand <david.marchand@redhat.com>
> > > Acked-by: Luca Boccassi <bluca@debian.org>
> >
> > Ideally, this test should be rewritten with some kind of OS-agnostic
> > synchronisation/scheduling API (mutex?).
> > But I think it will be enough for now.
>
> It's okay, I'll eventually get to this :)
>
For future reference, it seems SwitchToThread() [1] resembles sched_yield() [2].
[1]: https://learn.microsoft.com/en-us/windows/win32/api/processthreadsapi/nf-processthreadsapi-switchtothread
[2]: https://linux.die.net/man/2/sched_yield
> >
> > I updated the code comment as requested by Stephen.
> >
> > Applied, thanks.
> >
> > --
> > David Marchand
@@ -2,7 +2,9 @@
* Copyright (c) 2020 Red Hat, Inc.
*/
+#include <sched.h>
#include <string.h>
+#include <unistd.h>
#include <rte_common.h>
#include <rte_errno.h>
@@ -11,6 +13,14 @@
#include "test.h"
+#ifndef _POSIX_PRIORITY_SCHEDULING
+/* sched_yield(2):
+ * POSIX systems on which sched_yield() is available define _POSIX_PRIOR‐
+ * ITY_SCHEDULING in <unistd.h>.
+ */
+#define sched_yield()
+#endif
+
struct thread_context {
enum { Thread_INIT, Thread_ERROR, Thread_DONE } state;
bool lcore_id_any;
@@ -43,7 +53,7 @@ static uint32_t thread_loop(void *arg)
/* Wait for release from the control thread. */
while (__atomic_load_n(t->registered_count, __ATOMIC_ACQUIRE) != 0)
- ;
+ sched_yield();
rte_thread_unregister();
lcore_id = rte_lcore_id();
if (lcore_id != LCORE_ID_ANY) {
@@ -85,7 +95,7 @@ test_non_eal_lcores(unsigned int eal_threads_count)
/* Wait all non-EAL threads to register. */
while (__atomic_load_n(&registered_count, __ATOMIC_ACQUIRE) !=
non_eal_threads_count)
- ;
+ sched_yield();
/* We managed to create the max number of threads, let's try to create
* one more. This will allow one more check.
@@ -101,7 +111,7 @@ test_non_eal_lcores(unsigned int eal_threads_count)
printf("non-EAL threads count: %u\n", non_eal_threads_count);
while (__atomic_load_n(&registered_count, __ATOMIC_ACQUIRE) !=
non_eal_threads_count)
- ;
+ sched_yield();
}
skip_lcore_any:
@@ -267,7 +277,7 @@ test_non_eal_lcores_callback(unsigned int eal_threads_count)
non_eal_threads_count++;
while (__atomic_load_n(&registered_count, __ATOMIC_ACQUIRE) !=
non_eal_threads_count)
- ;
+ sched_yield();
if (l[0].init != eal_threads_count + 1 ||
l[1].init != eal_threads_count + 1) {
printf("Error: incorrect init calls, expected %u, %u, got %u, %u\n",
@@ -290,7 +300,7 @@ test_non_eal_lcores_callback(unsigned int eal_threads_count)
non_eal_threads_count++;
while (__atomic_load_n(&registered_count, __ATOMIC_ACQUIRE) !=
non_eal_threads_count)
- ;
+ sched_yield();
if (l[0].init != eal_threads_count + 2 ||
l[1].init != eal_threads_count + 2) {
printf("Error: incorrect init calls, expected %u, %u, got %u, %u\n",