examples/multi_process: fix RX packets distribution

Message ID 20211026095037.17557-1-getelson@nvidia.com (mailing list archive)
State Superseded, archived
Delegated to: Thomas Monjalon
Series examples/multi_process: fix RX packets distribution

Checks

Context Check Description
ci/checkpatch warning coding style issues
ci/github-robot: build success github build: passed
ci/Intel-compilation success Compilation OK
ci/intel-Testing success Testing PASS
ci/iol-broadcom-Functional fail Functional Testing issues
ci/iol-broadcom-Performance success Performance Testing PASS
ci/iol-x86_64-unit-testing success Testing PASS
ci/iol-x86_64-compile-testing success Testing PASS
ci/iol-mellanox-Performance fail Performance Testing issues
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-aarch64-compile-testing success Testing PASS
ci/iol-aarch64-unit-testing success Testing PASS

Commit Message

Gregory Etelson Oct. 26, 2021, 9:50 a.m. UTC
  The MP server distributes RX packets between clients according to a
round-robin scheme.

The current implementation always started packet distribution from
the first client. That produced a uniform distribution only when
the number of RX packets was a multiple of the number of clients.
However, if an RX burst repeatedly returned a single
packet, the round-robin scheme did not work because all packets
were assigned to the first client only.

With this patch, packet distribution is not restarted from
the first client; it always continues with the next client.

Fixes: af75078fece3 ("first public release")

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Reviewed-by: Dmitry Kozlyuk <dkozlyuk@oss.nvidia.com>
---
 examples/multi_process/client_server_mp/mp_server/main.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
  

Comments

Burakov, Anatoly Oct. 28, 2021, 2:29 p.m. UTC | #1
On 26-Oct-21 10:50 AM, Gregory Etelson wrote:
> The MP server distributes RX packets between clients according to a
> round-robin scheme.
> 
> The current implementation always started packet distribution from
> the first client. That produced a uniform distribution only when
> the number of RX packets was a multiple of the number of clients.
> However, if an RX burst repeatedly returned a single
> packet, the round-robin scheme did not work because all packets
> were assigned to the first client only.
> 
> With this patch, packet distribution is not restarted from
> the first client; it always continues with the next client.
> 
> Fixes: af75078fece3 ("first public release")
> 
> Signed-off-by: Gregory Etelson <getelson@nvidia.com>
> Reviewed-by: Dmitry Kozlyuk <dkozlyuk@oss.nvidia.com>
> ---
>   examples/multi_process/client_server_mp/mp_server/main.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/examples/multi_process/client_server_mp/mp_server/main.c b/examples/multi_process/client_server_mp/mp_server/main.c
> index b4761ebc7b..fb441cbbf0 100644
> --- a/examples/multi_process/client_server_mp/mp_server/main.c
> +++ b/examples/multi_process/client_server_mp/mp_server/main.c
> @@ -234,7 +234,7 @@ process_packets(uint32_t port_num __rte_unused,
>   		struct rte_mbuf *pkts[], uint16_t rx_count)
>   {
>   	uint16_t i;
> -	uint8_t client = 0;
> +	static uint8_t client = 0;
>   
>   	for (i = 0; i < rx_count; i++) {
>   		enqueue_rx_packet(client, pkts[i]);
> 

Wouldn't that make it global? I don't recall off the top of my head if 
the multiprocess app is intended to have multiple Rx threads, but if you 
did have two forwarding threads, they would effectively both use the 
same `client` value, stepping on top of each other. This should probably 
be per-thread?
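
If the server ever did run multiple Rx threads, one hypothetical way to keep
the round-robin position private to each thread would be a per-lcore variable.
The following is only a sketch; the helper name next_client and the explicit
num_clients parameter are illustrative and not part of the example app:

	#include <stdint.h>
	#include <rte_per_lcore.h>

	/* Hypothetical per-thread round-robin index: each Rx lcore keeps its
	 * own position, so two forwarding threads would not step on each
	 * other's counter. */
	static RTE_DEFINE_PER_LCORE(uint8_t, rr_client);

	static inline uint8_t
	next_client(uint8_t num_clients)
	{
		uint8_t c = RTE_PER_LCORE(rr_client);

		RTE_PER_LCORE(rr_client) = (uint8_t)((c + 1) % num_clients);
		return c;
	}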
  
Gregory Etelson Oct. 28, 2021, 3:14 p.m. UTC | #2
Hello Anatoly,

..snip..

> > b/examples/multi_process/client_server_mp/mp_server/main.c
> > @@ -234,7 +234,7 @@ process_packets(uint32_t port_num __rte_unused,
> >               struct rte_mbuf *pkts[], uint16_t rx_count)
> >   {
> >       uint16_t i;
> > -     uint8_t client = 0;
> > +     static uint8_t client = 0;
> >
> >       for (i = 0; i < rx_count; i++) {
> >               enqueue_rx_packet(client, pkts[i]);
> >
> 
> Wouldn't that make it global? I don't recall off the top of my head if
> the multiprocess app is intended to have multiple Rx threads, but if you
> did have two forwarding threads, they would effectively both use the
> same `client` value, stepping on top of each other. This should probably
> be per-thread?
> 
> 

The MP client-server example was not designed as a multi-threaded app.
The server and the clients run in different processes, and the model allows only one server process.
The server allocates a dedicated ring to each client and distributes Rx packets
between the rings in round-robin sequence.
Each ring is configured for a single producer and a single consumer.
Consider an example where the server's rte_eth_rx_burst() returns a single packet
on each call.
Without the patch, the server will ignore all clients with id > 0 and
assign all Rx packets to rx_ring 0.
Making the `client` variable in process_packets() static allows uniform round-robin
distribution of packets between the rings.
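
For illustration, a simplified sketch of the patched distribution path follows
(the function name distribute_packets and the explicit num_clients parameter
are illustrative, not the exact code in mp_server/main.c; the wrap-around is
assumed to sit right below the changed line):

	/* Simplified sketch, not the verbatim upstream code. */
	static void
	distribute_packets(struct rte_mbuf *pkts[], uint16_t rx_count,
			uint8_t num_clients)
	{
		uint16_t i;
		static uint8_t client;	/* persists across calls, so round-robin
					 * resumes where the previous burst stopped */

		for (i = 0; i < rx_count; i++) {
			enqueue_rx_packet(client, pkts[i]);	/* per-client ring enqueue */
			if (++client == num_clients)
				client = 0;	/* wrap around to the first client */
		}
	}

Even when rte_eth_rx_burst() keeps returning a single packet, successive calls
now enqueue to client 0, 1, ..., num_clients - 1, 0, ... instead of always client 0.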

Regards,
Gregory
  
Burakov, Anatoly Oct. 28, 2021, 3:35 p.m. UTC | #3
On 28-Oct-21 4:14 PM, Gregory Etelson wrote:
> Hello Anatoly,
> 
> ..snip..
> 
>> b/examples/multi_process/client_server_mp/mp_server/main.c
>>> @@ -234,7 +234,7 @@ process_packets(uint32_t port_num __rte_unused,
>>>                struct rte_mbuf *pkts[], uint16_t rx_count)
>>>    {
>>>        uint16_t i;
>>> -     uint8_t client = 0;
>>> +     static uint8_t client = 0;
>>>
>>>        for (i = 0; i < rx_count; i++) {
>>>                enqueue_rx_packet(client, pkts[i]);
>>>
>>
>> Wouldn't that make it global? I don't recall off the top of my head if
>> the multiprocess app is intended to have multiple Rx threads, but if you
>> did have two forwarding threads, they would effectively both use the
>> same `client` value, stepping on top of each other. This should probably
>> be per-thread?
>>
> 
> The MP client-server example was not designed as a multi-threaded app.
> The server and the clients run in different processes, and the model allows only one server process.
> The server allocates a dedicated ring to each client and distributes Rx packets
> between the rings in round-robin sequence.
> Each ring is configured for a single producer and a single consumer.
> Consider an example where the server's rte_eth_rx_burst() returns a single packet
> on each call.
> Without the patch, the server will ignore all clients with id > 0 and
> assign all Rx packets to rx_ring 0.
> Making the `client` variable in process_packets() static allows uniform round-robin
> distribution of packets between the rings.
> 
> Regards,
> Gregory
> 

Right, I just checked the code, and the app indeed allows only one
forwarding thread on the server.

Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
  
Thomas Monjalon Nov. 8, 2021, 9:27 p.m. UTC | #4
28/10/2021 17:35, Burakov, Anatoly:
> On 28-Oct-21 4:14 PM, Gregory Etelson wrote:
> >>> -     uint8_t client = 0;
> >>> +     static uint8_t client = 0;
> 
> Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>

checkpatch has a message for you:
ERROR:INITIALISED_STATIC: do not initialise statics to 0
  
Gregory Etelson Nov. 9, 2021, 6:42 a.m. UTC | #5
Hello Thomas,

> 
> 28/10/2021 17:35, Burakov, Anatoly:
> > On 28-Oct-21 4:14 PM, Gregory Etelson wrote:
> > >>> -     uint8_t client = 0;
> > >>> +     static uint8_t client = 0;
> >
> > Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
> 
> checkpatch has a message for you:
> ERROR:INITIALISED_STATIC: do not initialise statics to 0
> 

Making the `client` variable static ensures that the next time
the function is called it continues iterating over the clients instead of
restarting the loop from the beginning - that is the main idea of the patch.
The variable must be initialized to 0 because the application model
requires at least a single client with index 0.
ANSI C allows a static variable to be initialized to any valid value.
Do you know why the checkpatch utility rejected such an initialization?

Regards,
Gregory
  
Thomas Monjalon Nov. 9, 2021, 7:30 a.m. UTC | #6
09/11/2021 07:42, Gregory Etelson:
> Hello Thomas,
> 
> > 
> > 28/10/2021 17:35, Burakov, Anatoly:
> > > On 28-Oct-21 4:14 PM, Gregory Etelson wrote:
> > > >>> -     uint8_t client = 0;
> > > >>> +     static uint8_t client = 0;
> > >
> > > Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
> > 
> > checkpatch has a message for you:
> > ERROR:INITIALISED_STATIC: do not initialise statics to 0
> > 
> 
> Making the `client` variable static ensures that the next time
> the function is called it continues iterating over the clients instead of
> restarting the loop from the beginning - that is the main idea of the patch.
> The variable must be initialized to 0 because the application model
> requires at least a single client with index 0.
> ANSI C allows a static variable to be initialized to any valid value.
> Do you know why the checkpatch utility rejected such an initialization?

ANSI C initializes static variables to 0 by default.
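
In other words, the checkpatch-friendly form simply drops the explicit
initializer; an object with static storage duration is zero-initialized by the
C standard, so the behaviour is unchanged. A sketch of the equivalent
declaration (not necessarily the exact follow-up patch):

	static uint8_t client;	/* implicitly zero-initialized:
				 * round-robin still starts at client 0 */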
  
Gregory Etelson Nov. 9, 2021, 9:35 a.m. UTC | #7
Hello Thomas,


> > > 28/10/2021 17:35, Burakov, Anatoly:
> > > > On 28-Oct-21 4:14 PM, Gregory Etelson wrote:
> > > > >>> -     uint8_t client = 0;
> > > > >>> +     static uint8_t client = 0;
> > > >
> > > > Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
> > >
> > > checkpatch has a message for you:
> > > ERROR:INITIALISED_STATIC: do not initialise statics to 0
> > >
> >
> > Making the `client` variable static ensures that the next time
> > the function is called it continues iterating over the clients instead of
> > restarting the loop from the beginning - that is the main idea of the patch.
> > The variable must be initialized to 0 because the application model
> > requires at least a single client with index 0.
> > ANSI C allows a static variable to be initialized to any valid value.
> > Do you know why the checkpatch utility rejected such an initialization?
> 
> ANSI C initializes static variables to 0 by default.
> 


I'll post an updated patch.

Regards,
Gregory
  

Patch

diff --git a/examples/multi_process/client_server_mp/mp_server/main.c b/examples/multi_process/client_server_mp/mp_server/main.c
index b4761ebc7b..fb441cbbf0 100644
--- a/examples/multi_process/client_server_mp/mp_server/main.c
+++ b/examples/multi_process/client_server_mp/mp_server/main.c
@@ -234,7 +234,7 @@  process_packets(uint32_t port_num __rte_unused,
 		struct rte_mbuf *pkts[], uint16_t rx_count)
 {
 	uint16_t i;
-	uint8_t client = 0;
+	static uint8_t client = 0;
 
 	for (i = 0; i < rx_count; i++) {
 		enqueue_rx_packet(client, pkts[i]);