app/pdump: exit once primary app has exited
Commit Message
Once the primary app has exited, it is meaningless for pdump to keep running.
Signed-off-by: Suanming.Mou <mousuanming@huawei.com>
---
app/pdump/main.c | 4 ++++
1 file changed, 4 insertions(+)
Comments
Hi,
snipped
> @@ -847,6 +847,10 @@ struct parse_val {
> pdump_rxtx(pt->rx_ring, pt->rx_vdev_id, &pt->stats);
> if (pt->dir & RTE_PDUMP_FLAG_TX)
> pdump_rxtx(pt->tx_ring, pt->tx_vdev_id, &pt->stats);
> +
> + /* Once primary exits, so will I. */
> + if (!rte_eal_primary_proc_alive(NULL))
> + quit_signal = 1;
> }
As per the currently suggested code flow, the check is added to the while loop in the function `dump_packets'.
Questions:
1. What is the impact on performance with and without the patch?
2. For various packet sizes and port speeds, what is the delta in drops for packet capture?
Note: If the pdump application is still alive when the primary is not running, the primary cannot be started. Is this a cue that pdump is still alive and has to be terminated?
>
> static int
> --
> 1.7.12.4
On 2019/4/25 23:51, Varghese, Vipin wrote:
> Hi,
>
> snipped
>> @@ -847,6 +847,10 @@ struct parse_val {
>> pdump_rxtx(pt->rx_ring, pt->rx_vdev_id, &pt->stats);
>> if (pt->dir & RTE_PDUMP_FLAG_TX)
>> pdump_rxtx(pt->tx_ring, pt->tx_vdev_id, &pt->stats);
>> +
>> + /* Once primary exits, so will I. */
>> + if (!rte_eal_primary_proc_alive(NULL))
>> + quit_signal = 1;
>> }
> As per the current suggested code flow check is added to while loop in function `dump_packets'.
Thanks for the reply. Since I wanted to keep it clean, the code was placed here.
However, it seems the performance impact needs to be taken care of first.
> Questions:
> 1. What is impact in performance with and without patch?
A1. I did a little trick, as in the patch below, to test the impact in single-core mode on an Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz with no packets.
diff --git a/app/pdump/main.c b/app/pdump/main.c
index 3d208548fa13..804011b187c4 100644
--- a/app/pdump/main.c
+++ b/app/pdump/main.c
@@ -141,7 +141,7 @@ struct parse_val {
static int num_tuples;
static struct rte_eth_conf port_conf_default;
-static volatile uint8_t quit_signal;
+static volatile uint32_t quit_signal;
static uint8_t multiple_core_capture;
/**< display usage */
@@ -868,6 +868,7 @@ struct parse_val {
dump_packets(void)
{
int i;
+ uint64_t start, end;
uint32_t lcore_id = 0;
if (!multiple_core_capture) {
@@ -880,10 +881,20 @@ struct parse_val {
pdump_t[i].device_id,
pdump_t[i].queue);
- while (!quit_signal) {
+ /* make it hot */
+ rte_eal_primary_proc_alive(NULL);
+ rte_eal_primary_proc_alive(NULL);
+
+ start = rte_rdtsc();
+ while (quit_signal < 50000) {
+ /* Just testing with and w/o the 'if' line below */
+ if (rte_eal_primary_proc_alive(NULL))
+ quit_signal++;
for (i = 0; i < num_tuples; i++)
pdump_packets(&pdump_t[i]);
}
+ end = rte_rdtsc();
+ printf("Totally count:%u, cost tsc:%lu\n", quit_signal, end - start);
return;
}
The total tsc cost is about 338809671 with rte_eal_primary_proc_alive(),
while it is only about 513573 without rte_eal_primary_proc_alive().
The dpdk-pdump was also bound to a dedicated isolated core with taskset.
So it seems the patch has a significant performance impact.
Maybe another async method should be introduced to monitor the primary status.
> 2. For various packet sizes and port speed what are delta in drops for packet capture?
A2. Given A1, this test is not needed anymore.
>
> Note: If pdump application is still alive when primary is not running, primary cannot be started. Is this a cue that pdump is still alive and has to be terminated?
Yes, some users complained that a residual dpdk-pdump affects the restart of the primary app, and they refused to add other mechanisms, e.g. killing the dpdk-pdump from the app, to avoid that case.
So the patch was created.
Is there any other way to avoid that?
>> static int
>> --
>> 1.7.12.4
>
>
Hi,
Looks like something in the email format settings is affecting the style. Please find my replies below
snipped
As per the currently suggested code flow, the check is added to the while loop in the function `dump_packets'.
Thanks for the reply. Since I wanted to keep it clean, the code was placed here.
However, it seems the performance impact needs to be taken care of first.
Response> thanks for acknowledging the same.
Questions:
1. What is the impact on performance with and without the patch?
A1. I did a little trick, as in the patch below, to test the impact in single-core mode on an Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz with no packets.
diff --git a/app/pdump/main.c b/app/pdump/main.c
index 3d208548fa13..804011b187c4 100644
--- a/app/pdump/main.c
+++ b/app/pdump/main.c
@@ -141,7 +141,7 @@ struct parse_val {
static int num_tuples;
static struct rte_eth_conf port_conf_default;
-static volatile uint8_t quit_signal;
+static volatile uint32_t quit_signal;
static uint8_t multiple_core_capture;
/**< display usage */
@@ -868,6 +868,7 @@ struct parse_val {
dump_packets(void)
{
int i;
+ uint64_t start, end;
uint32_t lcore_id = 0;
if (!multiple_core_capture) {
@@ -880,10 +881,20 @@ struct parse_val {
pdump_t[i].device_id,
pdump_t[i].queue);
- while (!quit_signal) {
+ /* make it hot */
+ rte_eal_primary_proc_alive(NULL);
+ rte_eal_primary_proc_alive(NULL);
+
+ start = rte_rdtsc();
+ while (quit_signal < 50000) {
+ /* Just testing with and w/o the 'if' line below */
+ if (rte_eal_primary_proc_alive(NULL))
+ quit_signal++;
for (i = 0; i < num_tuples; i++)
pdump_packets(&pdump_t[i]);
}
+ end = rte_rdtsc();
+ printf("Totally count:%u, cost tsc:%lu\n", quit_signal, end - start);
return;
}
The total tsc cost is about 338809671 with rte_eal_primary_proc_alive(),
while it is only about 513573 without rte_eal_primary_proc_alive().
The dpdk-pdump was also bound to a dedicated isolated core with taskset.
So it seems the patch has a significant performance impact.
Response> thanks for confirming the suspicion.
Maybe another async method should be introduced to monitor the primary status.
Response> yes, without affecting the capture thread.
2. For various packet sizes and port speeds, what is the delta in drops for packet capture?
A2. Given A1, this test is not needed anymore.
Response> Per A1, there is a performance impact.
Note: If the pdump application is still alive when the primary is not running, the primary cannot be started. Is this a cue that pdump is still alive and has to be terminated?
Yes, some users complained that a residual dpdk-pdump affects the restart of the primary app, and they refused to add other mechanisms, e.g. killing the dpdk-pdump from the app, to avoid that case.
So the patch was created.
Is there any other way to avoid that?
Response> In my humble opinion, the best way around this is to add a user option like ‘--exit’, which then arms a periodic rte_timer for a user-chosen interval in seconds (‘0.1, 0.5, 1.0, 5.0’). The timer callback can run on the master core, which sets ‘quit_signal’ once the primary is no longer alive. In the multi-threaded capture case the master thread is not involved in dump_packets, thus avoiding any packet drops or performance issues.
I will leave this suggestion open for comments from the maintainer.
snipped
On 2019/4/26 18:56, Varghese, Vipin wrote:
>
> I will leave this suggestion open for comments from the maintainer.
>
Hi,
Thanks for your suggestion. I have also tried adding a slave core to
monitor the primary status this afternoon. It works.
I am not sure whether it can be added as a new option as you suggested,
since that would also require the people who complained about the exit
behavior to dedicate an extra slave core for it.
Please wait for the new patch in one or two days.
> snipped
>
> Hi,
>
> Looks like something in email format setting is affecting the style.
> Please find my replies below
>
> snipped
>
> As per the current suggested code flow check is added to while loop in function `dump_packets'.
>
> Thanks for the reply. Since want to make it clean, the code was here.
> However, it seems need to take care of the performance impact first.
> Response> thanks for acknowledging the same.
>
> Questions:
>
> 1. What is impact in performance with and without patch?
>
> A1. Do a little trick as the patch below to tested the impact in the single core mode on Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz with no pkts.
> diff --git a/app/pdump/main.c b/app/pdump/main.c
> index 3d208548fa13..804011b187c4 100644
> --- a/app/pdump/main.c
> +++ b/app/pdump/main.c
> @@ -141,7 +141,7 @@ struct parse_val {
>
> static int num_tuples;
> static struct rte_eth_conf port_conf_default;
> -static volatile uint8_t quit_signal;
> +static volatile uint32_t quit_signal;
> static uint8_t multiple_core_capture;
>
> /**< display usage */
> @@ -868,6 +868,7 @@ struct parse_val {
> dump_packets(void)
> {
> int i;
> + uint64_t start, end;
> uint32_t lcore_id = 0;
>
> if (!multiple_core_capture) {
> @@ -880,10 +881,20 @@ struct parse_val {
> pdump_t[i].device_id,
> pdump_t[i].queue);
>
> - while (!quit_signal) {
> + /* make it hot */
> + rte_eal_primary_proc_alive(NULL);
> + rte_eal_primary_proc_alive(NULL)
> +
> + start = rte_rdtsc();
> + while (quit_signal < 50000) {
> + /* Just testing with and w/o the 'if' line below */
> + if (rte_eal_primary_proc_alive(NULL))
> + quit_signal++;
> for (i = 0; i < num_tuples; i++)
> pdump_packets(&pdump_t[i]);
> }
> + end = rte_rdtsc();
> + printf("Totally count:%u, cost tsc:%lu\n", quit_signal, end - start);
>
> return;
> }
> The total tsc cost is about 338809671 with rte_eal_primary_proc_alive().
> And the tsc cost is just about 513573 without rte_eal_primary_proc_alive().
> The dpdk-pdump had also used taskset to bind to specify isolate core.
> So it seems the patch do a great performance impact.
> Response> thanks for confirming the suspicion.
> Maybe another async method should be introduced to monitor the primary status.
> Response> yes, without affecting the capture thread.
>
> 2. For various packet sizes and port speed what are delta in drops for packet capture?
>
> A2. Refer to A1, it's not needed anymore.
> Response> A1 there is performance impact.
> Note: If pdump application is still alive when primary is not running, primary cannot be started. Is this a cue that pdump is still alive and has to be terminated?
> Yes, some guys complained that the residual dpdk-pdump impact the restart of the primary app and refuse to add other mechanisms e.g. to kill the dpdk-pdump in the app to avoid that case.
> So the patch was created.
> Is there any other ways to avoid that.
> Response> in my humble opinion, best way around is add user option
> like ‘--exit’; which then will add periodic rte_timer for user desired
> seconds ‘0.1, 0.5, 1.0, 5.0’. The timer callback can run on master
> core which sets ‘quit_signal’ once primary is no longer alive. In case
> of ‘multi thread’ capture master thread is not involved in
> dump_packets thus avoiding any packet drops or performance issue..
On 26-Apr-19 1:08 PM, Suanming.Mou wrote:
>
> On 2019/4/26 18:56, Varghese, Vipin wrote:
>>
>> I will leave this suggestion open for comments from the maintainer.
>>
> Hi,
>
> Thanks for your suggestion. I have also tried to add an slave core to
> monitor the primary status this afternoon. It works.
>
> I doubt if it can be add an new option as you suggested, but which will
> also require people who complain the exiting to add an extra slave core
> for that.
>
> Please waiting for the new patch in one or two days.
>
You can use the alarm API to check for this regularly. It's not like the
interrupt thread is doing much anyway. Just set an alarm to fire every N
seconds, and that's it.
On 2019/4/26 21:46, Burakov, Anatoly wrote:
> On 26-Apr-19 1:08 PM, Suanming.Mou wrote:
>>
>> On 2019/4/26 18:56, Varghese, Vipin wrote:
>>>
>>> I will leave this suggestion open for comments from the maintainer.
>>>
>> Hi,
>>
>> Thanks for your suggestion. I have also tried to add an slave core to
>> monitor the primary status this afternoon. It works.
>>
>> I doubt if it can be add an new option as you suggested, but which
>> will also require people who complain the exiting to add an extra
>> slave core for that.
>>
>> Please waiting for the new patch in one or two days.
>>
>
> You can use alarm API to check for this regularly. It's not like the
> interrupt thread is doing much anyway. Just set alarm to fire every N
> seconds, and that's it.
Hi,
Thank you very much for the suggestion. Yes, that seems to be the best
solution. I just tested it roughly with the code below:
+static void monitor_primary(void *arg __rte_unused)
+{
+ if (quit_signal)
+ return;
+
+ if (rte_eal_primary_proc_alive(NULL))
+ rte_eal_alarm_set(MONITOR_INTERVEL, monitor_primary, NULL);
+ else
+ quit_signal = 1;
+
+ return;
+}
+
static inline void
dump_packets(void)
{
int i;
uint32_t lcore_id = 0;
+ if (exit_with_primary)
+ rte_eal_alarm_set(MONITOR_INTERVEL, monitor_primary, NULL);
+
I will prepare the patch with option exit_with_primary.
Br,
Mou
On 26-Apr-19 3:32 PM, Suanming.Mou wrote:
>
> On 2019/4/26 21:46, Burakov, Anatoly wrote:
>> On 26-Apr-19 1:08 PM, Suanming.Mou wrote:
>>>
>>> On 2019/4/26 18:56, Varghese, Vipin wrote:
>>>>
>>>> I will leave this suggestion open for comments from the maintainer.
>>>>
>>> Hi,
>>>
>>> Thanks for your suggestion. I have also tried to add an slave core to
>>> monitor the primary status this afternoon. It works.
>>>
>>> I doubt if it can be add an new option as you suggested, but which
>>> will also require people who complain the exiting to add an extra
>>> slave core for that.
>>>
>>> Please waiting for the new patch in one or two days.
>>>
>>
>> You can use alarm API to check for this regularly. It's not like the
>> interrupt thread is doing much anyway. Just set alarm to fire every N
>> seconds, and that's it.
>
> Hi,
>
> Thank you very much for the suggestion. Yes, that seems the best
> solution. Just tested it roughly as the code below:
>
> +static void monitor_primary(void *arg __rte_unused)
> +{
> + if (quit_signal)
> + return;
> +
> + if (rte_eal_primary_proc_alive(NULL))
> + rte_eal_alarm_set(MONITOR_INTERVEL, monitor_primary, NULL);
> + else
> + quit_signal = 1;
> +
> + return;
> +}
> +
> static inline void
> dump_packets(void)
> {
> int i;
> uint32_t lcore_id = 0;
>
> + if (exit_with_primary)
> + rte_eal_alarm_set(MONITOR_INTERVEL, monitor_primary, NULL);
> +
>
>
> I will prepare the patch with option exit_with_primary.
>
Actually, I'm curious if this really does work. Unless my knowledge is
out of date, the interrupt thread doesn't work in secondary processes, and
by extension neither should the alarm API...
On 2019/4/26 22:39, Burakov, Anatoly wrote:
> On 26-Apr-19 3:32 PM, Suanming.Mou wrote:
>>
>> On 2019/4/26 21:46, Burakov, Anatoly wrote:
>>> On 26-Apr-19 1:08 PM, Suanming.Mou wrote:
>>>>
>>>> On 2019/4/26 18:56, Varghese, Vipin wrote:
>>>>>
>>>>> I will leave this suggestion open for comments from the maintainer.
>>>>>
>>>> Hi,
>>>>
>>>> Thanks for your suggestion. I have also tried to add an slave core
>>>> to monitor the primary status this afternoon. It works.
>>>>
>>>> I doubt if it can be add an new option as you suggested, but which
>>>> will also require people who complain the exiting to add an extra
>>>> slave core for that.
>>>>
>>>> Please waiting for the new patch in one or two days.
>>>>
>>>
>>> You can use alarm API to check for this regularly. It's not like the
>>> interrupt thread is doing much anyway. Just set alarm to fire every
>>> N seconds, and that's it.
>>
>> Hi,
>>
>> Thank you very much for the suggestion. Yes, that seems the best
>> solution. Just tested it roughly as the code below:
>>
>> +static void monitor_primary(void *arg __rte_unused)
>> +{
>> + if (quit_signal)
>> + return;
>> +
>> + if (rte_eal_primary_proc_alive(NULL))
>> + rte_eal_alarm_set(MONITOR_INTERVEL, monitor_primary, NULL);
>> + else
>> + quit_signal = 1;
>> +
>> + return;
>> +}
>> +
>> static inline void
>> dump_packets(void)
>> {
>> int i;
>> uint32_t lcore_id = 0;
>>
>> + if (exit_with_primary)
>> + rte_eal_alarm_set(MONITOR_INTERVEL, monitor_primary, NULL);
>> +
>>
>>
>> I will prepare the patch with option exit_with_primary.
>>
>
> Actually, i'm curious if this really does work. Unless my knowledge is
> out of date, interrupt thread doesn't work in secondary processes, and
> by extension neither should the alarm API...
Uh... If I understand correctly, the alarm API has been used in the
secondary process before.
Refer to handle_primary_request()....
On 26-Apr-19 3:49 PM, Suanming.Mou wrote:
>
> On 2019/4/26 22:39, Burakov, Anatoly wrote:
>> On 26-Apr-19 3:32 PM, Suanming.Mou wrote:
>>>
>>> On 2019/4/26 21:46, Burakov, Anatoly wrote:
>>>> On 26-Apr-19 1:08 PM, Suanming.Mou wrote:
>>>>>
>>>>> On 2019/4/26 18:56, Varghese, Vipin wrote:
>>>>>>
>>>>>> I will leave this suggestion open for comments from the maintainer.
>>>>>>
>>>>> Hi,
>>>>>
>>>>> Thanks for your suggestion. I have also tried to add an slave core
>>>>> to monitor the primary status this afternoon. It works.
>>>>>
>>>>> I doubt if it can be add an new option as you suggested, but which
>>>>> will also require people who complain the exiting to add an extra
>>>>> slave core for that.
>>>>>
>>>>> Please waiting for the new patch in one or two days.
>>>>>
>>>>
>>>> You can use alarm API to check for this regularly. It's not like the
>>>> interrupt thread is doing much anyway. Just set alarm to fire every
>>>> N seconds, and that's it.
>>>
>>> Hi,
>>>
>>> Thank you very much for the suggestion. Yes, that seems the best
>>> solution. Just tested it roughly as the code below:
>>>
>>> +static void monitor_primary(void *arg __rte_unused)
>>> +{
>>> + if (quit_signal)
>>> + return;
>>> +
>>> + if (rte_eal_primary_proc_alive(NULL))
>>> + rte_eal_alarm_set(MONITOR_INTERVEL, monitor_primary, NULL);
>>> + else
>>> + quit_signal = 1;
>>> +
>>> + return;
>>> +}
>>> +
>>> static inline void
>>> dump_packets(void)
>>> {
>>> int i;
>>> uint32_t lcore_id = 0;
>>>
>>> + if (exit_with_primary)
>>> + rte_eal_alarm_set(MONITOR_INTERVEL, monitor_primary, NULL);
>>> +
>>>
>>>
>>> I will prepare the patch with option exit_with_primary.
>>>
>>
>> Actually, i'm curious if this really does work. Unless my knowledge is
>> out of date, interrupt thread doesn't work in secondary processes, and
>> by extension neither should the alarm API...
>
> Uh... If I understand correctly, the alarm API has used in the secondary
> before.
>
> Refer to handle_primary_request()....
>
Then my knowledge really is out of date :)