[RFC,V2,1/2] app/testpmd: fix queue stats mapping configuration

Message ID 1603182389-10087-2-git-send-email-humin29@huawei.com
State RFC
Delegated to: Ferruh Yigit
Series
  • fix queue stats mapping

Checks

Context Check Description
ci/checkpatch success coding style OK

Commit Message

Min Hu (Connor) Oct. 20, 2020, 8:26 a.m. UTC
From: Huisong Li <lihuisong@huawei.com>

Currently, the queue stats mapping has the following problems:
1) Many PMD drivers don't support queue stats mapping, but there is no
failure message after executing the command "set stat_qmap rx 0 2 2".
2) Once a queue mapping is set, unrelated and unmapped queues are also
displayed.
3) There is no need to keep cache line alignment for
'struct queue_stats_mappings'.
4) The mapping arrays, 'tx_queue_stats_mappings_array' and
'rx_queue_stats_mappings_array', are global and their sizes are based on
fixed maximum port and queue count assumptions.
5) The configuration result does not take effect or cannot be queried
in real time.

Therefore, this patch makes the following adjustments:
1) If the PMD supports queue stats mapping, configure it in the driver
in real time after executing the command "set stat_qmap rx/tx ...".
If not, the command is rejected.
2) Only display queues whose mapping is done, by adding a new 'active'
field to the queue_stats_mappings struct.
3) Remove cache line alignment from 'struct queue_stats_mappings'.
4) Add a new port_stats_mappings struct in rte_port.
The struct contains the number of Rx/Tx queue stats mappings, the Rx/Tx
queue_stats_mapping_enabled flags, and the Rx/Tx queue_stats_mapping
arrays. The size of each queue_stats_mapping array is set to
RTE_ETHDEV_QUEUE_STAT_CNTRS so that the same number of queues can be
mapped for each port.
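The per-port record described in adjustment 4) can be sketched as below. This is only an illustrative reconstruction from the description (the field names follow the patch further down this page, but the stand-in value for RTE_ETHDEV_QUEUE_STAT_CNTRS is an assumption; the real constant comes from rte_ethdev):

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for the ethdev constant; the real value comes from rte_ethdev. */
#define RTE_ETHDEV_QUEUE_STAT_CNTRS 16

/* Per-queue entry; cache line alignment removed (adjustment 3). */
struct queue_stats_mappings {
	uint16_t queue_id;
	uint8_t stats_counter_id;
	bool active;	/* only active entries are displayed (adjustment 2) */
};

/* Per-port record embedded in struct rte_port (adjustment 4). */
struct port_stats_mappings {
	uint16_t nb_rxq_stats_mappings;
	uint16_t nb_txq_stats_mappings;
	bool rx_queue_stats_mapping_enabled;
	bool tx_queue_stats_mapping_enabled;
	struct queue_stats_mappings rxq_map_array[RTE_ETHDEV_QUEUE_STAT_CNTRS];
	struct queue_stats_mappings txq_map_array[RTE_ETHDEV_QUEUE_STAT_CNTRS];
};
```

Because the arrays are sized by RTE_ETHDEV_QUEUE_STAT_CNTRS rather than by a global max-port/max-queue product, every port gets the same fixed-size mapping table and the old global arrays become unnecessary.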

Fixes: 4dccdc789bf4b ("app/testpmd: simplify handling of stats mappings error")
Fixes: 013af9b6b64f6 ("app/testpmd: various updates")
Fixes: ed30d9b691b21 ("app/testpmd: add stats per queue")

Signed-off-by: Huisong Li <lihuisong@huawei.com>
---
 app/test-pmd/config.c     | 180 +++++++++++++++++++++++++++++-----------------
 app/test-pmd/parameters.c |  63 ++++++++--------
 app/test-pmd/testpmd.c    | 180 ++++++++++++++++++++++++++++++++--------------
 app/test-pmd/testpmd.h    |  41 ++++++-----
 4 files changed, 292 insertions(+), 172 deletions(-)

Comments

Ferruh Yigit Oct. 30, 2020, 8:54 p.m. UTC | #1
On 10/20/2020 9:26 AM, Min Hu (Connor) wrote:
> From: Huisong Li <lihuisong@huawei.com>
> 
<...>

Hi Connor,

I think the above adjustments are good, but after the decision to use xstats
for the queue stats, what do you think about further simplification?

1)
What testpmd does is record the queue stats mapping commands and register
them later on port start & forwarding start.
What happens if the recording and registering are completely removed?
When "set stat_qmap .." is issued, it would just call the ethdev APIs to do
the mapping in the device.
This lets us remove the record structure "struct port_stats_mappings
p_stats_map", and also remove 'map_port_queue_stats_mapping_registers()' and
its sub-functions.

2)
Let's also remove the "tx-queue-stats-mapping" & "rx-queue-stats-mapping"
parameters, which enables removing the 'parse_queue_stats_mapping_config()'
function too.

3)
Another problem is displaying the queue stats: in 'fwd_stats_display()' &
'nic_stats_display()' there is a check whether queue stats mapping is enabled
('rx_queue_stats_mapping_enabled' & 'tx_queue_stats_mapping_enabled').
I think displaying queue stats and queue stats mapping should be separate, so
why not drop the checks for queue stats mapping and display queue stats for
the 'nb_rxq' & 'nb_txq' queues?

Does the above make sense?


The majority of the drivers don't require queue stats mapping to get the
queue stats; let's not pollute the main usage with this requirement.
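Under simplification 1), the "set stat_qmap rx ..." handler would shrink to a direct forward to the device, roughly as below. This is a self-contained sketch, not the actual testpmd code: the stub stands in for the real rte_eth_dev_set_rx_queue_stats_mapping() ethdev call (which, per this thread, fails for PMDs that do not support the operation), and the error message wording is invented for illustration.

```c
#include <stdint.h>
#include <stdio.h>

/* Stub standing in for rte_eth_dev_set_rx_queue_stats_mapping(); the real
 * ethdev call returns a negative value when the PMD lacks the operation. */
static int
set_rx_queue_stats_mapping(uint16_t port_id, uint16_t queue_id,
			   uint8_t stat_idx)
{
	(void)port_id;
	(void)queue_id;
	(void)stat_idx;
	return 0; /* pretend the PMD supports the mapping */
}

/* Simplified "set stat_qmap rx ..." handler: no local recording, no deferred
 * registration on port start -- just forward the request and report errors. */
static int
set_qmap_rx(uint16_t port_id, uint16_t queue_id, uint8_t map_value)
{
	int ret = set_rx_queue_stats_mapping(port_id, queue_id, map_value);

	if (ret != 0)
		fprintf(stderr,
			"failed to set rx queue stats mapping: %d\n", ret);
	return ret;
}
```

With no record kept in testpmd, the mapping state lives only in the device, which is exactly the trade-off discussed below (the current configuration can no longer be queried back from the application).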


> Fixes: 4dccdc789bf4b ("app/testpmd: simplify handling of stats mappings error")
> Fixes: 013af9b6b64f6 ("app/testpmd: various updates")
> Fixes: ed30d9b691b21 ("app/testpmd: add stats per queue")
> 
> Signed-off-by: Huisong Li <lihuisong@huawei.com>

<...>
Min Hu (Connor) Nov. 3, 2020, 6:30 a.m. UTC | #2
Hi Ferruh,

I agree with your proposal. But if we remove the record structures, we will
no longer be able to query the current queue stats mapping configuration.
Alternatively, we could provide a query API for PMD drivers that implement
the set_queue_stats_mapping API, with the driver recording the mapping
information from the user.

What do you think?


On 2020/10/31 4:54, Ferruh Yigit wrote:
<...>
Min Hu (Connor) Nov. 12, 2020, 2:28 a.m. UTC | #3
Hi Ferruh, any suggestions?


On 2020/11/3 14:30, Min Hu (Connor) wrote:
<...>
Ferruh Yigit Nov. 12, 2020, 9:52 a.m. UTC | #4
On 11/12/2020 2:28 AM, Min Hu (Connor) wrote:
> Hi Ferruh, any suggestions?
> 
> 
> On 2020/11/3 14:30, Min Hu (Connor) wrote:
>> Hi Ferruh,
>>
>> I agree with your proposal. But if we remove record structures, we will
>> not be able to query the current queue stats mapping configuration. Or
>> we can provide a query API for the PMD driver that uses the
>> set_queue_stats_mapping API, and driver records these mapping
>> information from user.
>>
>> What do you think?
>>

Sorry for the delay.

Yes, that information will be lost, but since queue stats mapping is not
commonly used, I think it is OK to lose it.

As you said, another option is to add a new ethdev API to get the queue stats
mapping, but for the same reason I am not sure about adding it. We can add it
later if there is a request for it.

>>
>> On 2020/10/31 4:54, Ferruh Yigit wrote:
<...>
Min Hu (Connor) Nov. 18, 2020, 3:39 a.m. UTC | #5
Hi Ferruh,

OK, I will send V3 patches. Thanks.

On 2020/11/12 17:52, Ferruh Yigit wrote:
<...>

Patch

diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c
index ba17f3b..325c42f 100644
--- a/app/test-pmd/config.c
+++ b/app/test-pmd/config.c
@@ -177,13 +177,13 @@  nic_stats_display(portid_t port_id)
 	static uint64_t prev_bytes_rx[RTE_MAX_ETHPORTS];
 	static uint64_t prev_bytes_tx[RTE_MAX_ETHPORTS];
 	static uint64_t prev_ns[RTE_MAX_ETHPORTS];
+	struct port_stats_mappings *p_stats_map;
 	struct timespec cur_time;
 	uint64_t diff_pkts_rx, diff_pkts_tx, diff_bytes_rx, diff_bytes_tx,
 								diff_ns;
 	uint64_t mpps_rx, mpps_tx, mbps_rx, mbps_tx;
 	struct rte_eth_stats stats;
-	struct rte_port *port = &ports[port_id];
-	uint8_t i;
+	struct rte_port *port;
 
 	static const char *nic_stats_border = "########################";
 
@@ -195,7 +195,10 @@  nic_stats_display(portid_t port_id)
 	printf("\n  %s NIC statistics for port %-2d %s\n",
 	       nic_stats_border, port_id, nic_stats_border);
 
-	if ((!port->rx_queue_stats_mapping_enabled) && (!port->tx_queue_stats_mapping_enabled)) {
+	port = &ports[port_id];
+	p_stats_map = &port->p_stats_map;
+	if ((!p_stats_map->rx_queue_stats_mapping_enabled) &&
+		(!p_stats_map->tx_queue_stats_mapping_enabled)) {
 		printf("  RX-packets: %-10"PRIu64" RX-missed: %-10"PRIu64" RX-bytes:  "
 		       "%-"PRIu64"\n",
 		       stats.ipackets, stats.imissed, stats.ibytes);
@@ -205,36 +208,20 @@  nic_stats_display(portid_t port_id)
 		printf("  TX-packets: %-10"PRIu64" TX-errors: %-10"PRIu64" TX-bytes:  "
 		       "%-"PRIu64"\n",
 		       stats.opackets, stats.oerrors, stats.obytes);
-	}
-	else {
-		printf("  RX-packets:              %10"PRIu64"    RX-errors: %10"PRIu64
-		       "    RX-bytes: %10"PRIu64"\n",
-		       stats.ipackets, stats.ierrors, stats.ibytes);
-		printf("  RX-errors:  %10"PRIu64"\n", stats.ierrors);
-		printf("  RX-nombuf:               %10"PRIu64"\n",
+	} else {
+		printf("  RX-packets:             %14"PRIu64"    RX-missed: "
+		       "%14"PRIu64"    RX-bytes: %14"PRIu64"\n",
+		       stats.ipackets, stats.imissed, stats.ibytes);
+		printf("  RX-errors:              %14"PRIu64"\n",
+		       stats.ierrors);
+		printf("  RX-nombuf:              %14"PRIu64"\n",
 		       stats.rx_nombuf);
-		printf("  TX-packets:              %10"PRIu64"    TX-errors: %10"PRIu64
-		       "    TX-bytes: %10"PRIu64"\n",
+		printf("  TX-packets:             %14"PRIu64"    TX-errors: "
+		       "%14"PRIu64"    TX-bytes: %14"PRIu64"\n",
 		       stats.opackets, stats.oerrors, stats.obytes);
 	}
 
-	if (port->rx_queue_stats_mapping_enabled) {
-		printf("\n");
-		for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) {
-			printf("  Stats reg %2d RX-packets: %10"PRIu64
-			       "    RX-errors: %10"PRIu64
-			       "    RX-bytes: %10"PRIu64"\n",
-			       i, stats.q_ipackets[i], stats.q_errors[i], stats.q_ibytes[i]);
-		}
-	}
-	if (port->tx_queue_stats_mapping_enabled) {
-		printf("\n");
-		for (i = 0; i < RTE_ETHDEV_QUEUE_STAT_CNTRS; i++) {
-			printf("  Stats reg %2d TX-packets: %10"PRIu64
-			       "                             TX-bytes: %10"PRIu64"\n",
-			       i, stats.q_opackets[i], stats.q_obytes[i]);
-		}
-	}
+	port_stats_mapping_display(port_id, &stats);
 
 	diff_ns = 0;
 	if (clock_gettime(CLOCK_TYPE_ID, &cur_time) == 0) {
@@ -400,7 +387,9 @@  nic_xstats_clear(portid_t port_id)
 void
 nic_stats_mapping_display(portid_t port_id)
 {
-	struct rte_port *port = &ports[port_id];
+	struct port_stats_mappings *p_stats_map;
+	struct queue_stats_mappings *q_stats_map;
+	struct rte_port *port;
 	uint16_t i;
 
 	static const char *nic_stats_mapping_border = "########################";
@@ -410,7 +399,10 @@  nic_stats_mapping_display(portid_t port_id)
 		return;
 	}
 
-	if ((!port->rx_queue_stats_mapping_enabled) && (!port->tx_queue_stats_mapping_enabled)) {
+	port = &ports[port_id];
+	p_stats_map = &port->p_stats_map;
+	if ((!p_stats_map->rx_queue_stats_mapping_enabled) &&
+		(!p_stats_map->tx_queue_stats_mapping_enabled)) {
 		printf("Port id %d - either does not support queue statistic mapping or"
 		       " no queue statistic mapping set\n", port_id);
 		return;
@@ -419,24 +411,26 @@  nic_stats_mapping_display(portid_t port_id)
 	printf("\n  %s NIC statistics mapping for port %-2d %s\n",
 	       nic_stats_mapping_border, port_id, nic_stats_mapping_border);
 
-	if (port->rx_queue_stats_mapping_enabled) {
-		for (i = 0; i < nb_rx_queue_stats_mappings; i++) {
-			if (rx_queue_stats_mappings[i].port_id == port_id) {
+	if (p_stats_map->rx_queue_stats_mapping_enabled) {
+		for (i = 0; i < p_stats_map->nb_rxq_stats_mappings; i++) {
+			q_stats_map = &p_stats_map->rxq_map_array[i];
+			if (q_stats_map->active) {
 				printf("  RX-queue %2d mapped to Stats Reg %2d\n",
-				       rx_queue_stats_mappings[i].queue_id,
-				       rx_queue_stats_mappings[i].stats_counter_id);
+				       q_stats_map->queue_id,
+				       q_stats_map->stats_counter_id);
 			}
 		}
 		printf("\n");
 	}
 
 
-	if (port->tx_queue_stats_mapping_enabled) {
-		for (i = 0; i < nb_tx_queue_stats_mappings; i++) {
-			if (tx_queue_stats_mappings[i].port_id == port_id) {
+	if (p_stats_map->tx_queue_stats_mapping_enabled) {
+		for (i = 0; i < p_stats_map->nb_txq_stats_mappings; i++) {
+			q_stats_map = &p_stats_map->txq_map_array[i];
+			if (q_stats_map->active) {
 				printf("  TX-queue %2d mapped to Stats Reg %2d\n",
-				       tx_queue_stats_mappings[i].queue_id,
-				       tx_queue_stats_mappings[i].stats_counter_id);
+				       q_stats_map->queue_id,
+				       q_stats_map->stats_counter_id);
 			}
 		}
 	}
@@ -4546,8 +4540,13 @@  tx_vlan_pvid_set(portid_t port_id, uint16_t vlan_id, int on)
 void
 set_qmap(portid_t port_id, uint8_t is_rx, uint16_t queue_id, uint8_t map_value)
 {
+	struct port_stats_mappings *p_stats_map;
+	struct queue_stats_mappings *q_stats_map;
+	bool existing_mapping_found = false;
+	struct rte_port *port;
+	uint16_t cur_map_idx;
 	uint16_t i;
-	uint8_t existing_mapping_found = 0;
+	int ret;
 
 	if (port_id_is_invalid(port_id, ENABLED_WARN))
 		return;
@@ -4561,37 +4560,88 @@  set_qmap(portid_t port_id, uint8_t is_rx, uint16_t queue_id, uint8_t map_value)
 		return;
 	}
 
-	if (!is_rx) { /*then tx*/
-		for (i = 0; i < nb_tx_queue_stats_mappings; i++) {
-			if ((tx_queue_stats_mappings[i].port_id == port_id) &&
-			    (tx_queue_stats_mappings[i].queue_id == queue_id)) {
-				tx_queue_stats_mappings[i].stats_counter_id = map_value;
-				existing_mapping_found = 1;
+	port = &ports[port_id];
+	p_stats_map = &port->p_stats_map;
+	if (!is_rx) { /* tx */
+		for (i = 0; i < p_stats_map->nb_txq_stats_mappings; i++) {
+			q_stats_map = &p_stats_map->txq_map_array[i];
+			if (q_stats_map->queue_id == queue_id) {
+				ret =
+				rte_eth_dev_set_tx_queue_stats_mapping(port_id,
+							queue_id, map_value);
+				if (ret) {
+					printf("failed to set tx queue stats "
+						"mapping.\n");
+					return;
+				}
+
+				q_stats_map->stats_counter_id = map_value;
+				q_stats_map->active = true;
+				existing_mapping_found = true;
 				break;
 			}
 		}
-		if (!existing_mapping_found) { /* A new additional mapping... */
-			tx_queue_stats_mappings[nb_tx_queue_stats_mappings].port_id = port_id;
-			tx_queue_stats_mappings[nb_tx_queue_stats_mappings].queue_id = queue_id;
-			tx_queue_stats_mappings[nb_tx_queue_stats_mappings].stats_counter_id = map_value;
-			nb_tx_queue_stats_mappings++;
+
+		/* A new additional mapping... */
+		if (!existing_mapping_found) {
+			ret = rte_eth_dev_set_tx_queue_stats_mapping(port_id,
+								     queue_id,
+								     map_value);
+			if (ret) {
+				printf("failed to set tx queue stats "
+					"mapping.\n");
+				return;
+			}
+
+			cur_map_idx = p_stats_map->nb_txq_stats_mappings;
+			q_stats_map = &p_stats_map->txq_map_array[cur_map_idx];
+			q_stats_map->queue_id = queue_id;
+			q_stats_map->stats_counter_id = map_value;
+			q_stats_map->active = true;
+			p_stats_map->nb_txq_stats_mappings++;
 		}
-	}
-	else { /*rx*/
-		for (i = 0; i < nb_rx_queue_stats_mappings; i++) {
-			if ((rx_queue_stats_mappings[i].port_id == port_id) &&
-			    (rx_queue_stats_mappings[i].queue_id == queue_id)) {
-				rx_queue_stats_mappings[i].stats_counter_id = map_value;
-				existing_mapping_found = 1;
+
+		p_stats_map->tx_queue_stats_mapping_enabled = true;
+	} else { /* rx */
+		for (i = 0; i < p_stats_map->nb_rxq_stats_mappings; i++) {
+			q_stats_map = &p_stats_map->rxq_map_array[i];
+			if (q_stats_map->queue_id == queue_id) {
+				ret =
+				rte_eth_dev_set_rx_queue_stats_mapping(port_id,
+							queue_id, map_value);
+				if (ret) {
+					printf("failed to set rx queue stats "
+						"mapping.\n");
+					return;
+				}
+
+				q_stats_map->stats_counter_id = map_value;
+				q_stats_map->active = true;
+				existing_mapping_found = true;
 				break;
 			}
 		}
-		if (!existing_mapping_found) { /* A new additional mapping... */
-			rx_queue_stats_mappings[nb_rx_queue_stats_mappings].port_id = port_id;
-			rx_queue_stats_mappings[nb_rx_queue_stats_mappings].queue_id = queue_id;
-			rx_queue_stats_mappings[nb_rx_queue_stats_mappings].stats_counter_id = map_value;
-			nb_rx_queue_stats_mappings++;
+
+		/* A new additional mapping... */
+		if (!existing_mapping_found) {
+			ret = rte_eth_dev_set_rx_queue_stats_mapping(port_id,
+								     queue_id,
+								     map_value);
+			if (ret) {
+				printf("failed to set rx queue stats "
+					"mapping.\n");
+				return;
+			}
+
+			cur_map_idx = p_stats_map->nb_rxq_stats_mappings;
+			q_stats_map = &p_stats_map->rxq_map_array[cur_map_idx];
+			q_stats_map->queue_id = queue_id;
+			q_stats_map->stats_counter_id = map_value;
+			q_stats_map->active = true;
+			p_stats_map->nb_rxq_stats_mappings++;
 		}
+
+		p_stats_map->rx_queue_stats_mapping_enabled = true;
 	}
 }
 
diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c
index 5ae0cb6..ee2501b 100644
--- a/app/test-pmd/parameters.c
+++ b/app/test-pmd/parameters.c
@@ -300,7 +300,6 @@  parse_fwd_portmask(const char *portmask)
 		set_fwd_ports_mask((uint64_t) pm);
 }
 
-
 static int
 parse_queue_stats_mapping_config(const char *q_arg, int is_rx)
 {
@@ -315,11 +314,13 @@  parse_queue_stats_mapping_config(const char *q_arg, int is_rx)
 	};
 	unsigned long int_fld[_NUM_FLD];
 	char *str_fld[_NUM_FLD];
+	struct rte_port *port;
 	int i;
 	unsigned size;
-
-	/* reset from value set at definition */
-	is_rx ? (nb_rx_queue_stats_mappings = 0) : (nb_tx_queue_stats_mappings = 0);
+	int port_id;
+	struct queue_stats_mappings *q_stats_map;
+	struct port_stats_mappings *p_stats_map;
+	uint16_t q_map_idx;
 
 	while ((p = strchr(p0,'(')) != NULL) {
 		++p;
@@ -346,44 +347,40 @@  parse_queue_stats_mapping_config(const char *q_arg, int is_rx)
 			return -1;
 		}
 
+		port_id = (uint8_t)int_fld[FLD_PORT];
+		if (port_id >= RTE_MAX_ETHPORTS) {
+			printf("invalid port id %d\n", port_id);
+			return -1;
+		}
+		port = &ports[port_id];
+		p_stats_map = &port->p_stats_map;
 		if (!is_rx) {
-			if ((nb_tx_queue_stats_mappings >=
-						MAX_TX_QUEUE_STATS_MAPPINGS)) {
+			q_map_idx = p_stats_map->nb_txq_stats_mappings;
+			if (q_map_idx >= RTE_ETHDEV_QUEUE_STAT_CNTRS) {
 				printf("exceeded max number of TX queue "
-						"statistics mappings: %hu\n",
-						nb_tx_queue_stats_mappings);
+					"statistics mappings: %hu\n",
+					p_stats_map->nb_txq_stats_mappings);
 				return -1;
 			}
-			tx_queue_stats_mappings_array[nb_tx_queue_stats_mappings].port_id =
-				(uint8_t)int_fld[FLD_PORT];
-			tx_queue_stats_mappings_array[nb_tx_queue_stats_mappings].queue_id =
-				(uint8_t)int_fld[FLD_QUEUE];
-			tx_queue_stats_mappings_array[nb_tx_queue_stats_mappings].stats_counter_id =
-				(uint8_t)int_fld[FLD_STATS_COUNTER];
-			++nb_tx_queue_stats_mappings;
-		}
-		else {
-			if ((nb_rx_queue_stats_mappings >=
-						MAX_RX_QUEUE_STATS_MAPPINGS)) {
+			q_stats_map =
+				&p_stats_map->txq_map_array[q_map_idx];
+			q_stats_map->queue_id = int_fld[FLD_QUEUE];
+			q_stats_map->stats_counter_id =
+						int_fld[FLD_STATS_COUNTER];
+			++p_stats_map->nb_txq_stats_mappings;
+		} else {
+			q_map_idx = p_stats_map->nb_rxq_stats_mappings;
+			if (q_map_idx >= RTE_ETHDEV_QUEUE_STAT_CNTRS) {
 				printf("exceeded max number of RX queue "
-						"statistics mappings: %hu\n",
-						nb_rx_queue_stats_mappings);
+					"statistics mappings: %hu\n",
+					p_stats_map->nb_rxq_stats_mappings);
 				return -1;
 			}
-			rx_queue_stats_mappings_array[nb_rx_queue_stats_mappings].port_id =
-				(uint8_t)int_fld[FLD_PORT];
-			rx_queue_stats_mappings_array[nb_rx_queue_stats_mappings].queue_id =
-				(uint8_t)int_fld[FLD_QUEUE];
-			rx_queue_stats_mappings_array[nb_rx_queue_stats_mappings].stats_counter_id =
-				(uint8_t)int_fld[FLD_STATS_COUNTER];
-			++nb_rx_queue_stats_mappings;
+			q_stats_map =
+				&p_stats_map->rxq_map_array[q_map_idx];
+			q_stats_map->queue_id = int_fld[FLD_QUEUE];
+			q_stats_map->stats_counter_id =
+						int_fld[FLD_STATS_COUNTER];
+			++p_stats_map->nb_rxq_stats_mappings;
 		}
-
 	}
-/* Reassign the rx/tx_queue_stats_mappings pointer to point to this newly populated array rather */
-/* than to the default array (that was set at its definition) */
-	is_rx ? (rx_queue_stats_mappings = rx_queue_stats_mappings_array) :
-		(tx_queue_stats_mappings = tx_queue_stats_mappings_array);
+
 	return 0;
 }
 
diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c
index 94e3688..86e3271 100644
--- a/app/test-pmd/testpmd.c
+++ b/app/test-pmd/testpmd.c
@@ -476,15 +476,6 @@  struct rte_fdir_conf fdir_conf = {
 
 volatile int test_done = 1; /* stop packet forwarding when set to 1. */
 
-struct queue_stats_mappings tx_queue_stats_mappings_array[MAX_TX_QUEUE_STATS_MAPPINGS];
-struct queue_stats_mappings rx_queue_stats_mappings_array[MAX_RX_QUEUE_STATS_MAPPINGS];
-
-struct queue_stats_mappings *tx_queue_stats_mappings = tx_queue_stats_mappings_array;
-struct queue_stats_mappings *rx_queue_stats_mappings = rx_queue_stats_mappings_array;
-
-uint16_t nb_tx_queue_stats_mappings = 0;
-uint16_t nb_rx_queue_stats_mappings = 0;
-
 /*
  * Display zero values by default for xstats
  */
@@ -1809,10 +1800,84 @@  fwd_stream_stats_display(streamid_t stream_id)
 }
 
 void
+port_stats_mapping_display(portid_t pt_id, struct rte_eth_stats *stats)
+{
+	struct port_stats_mappings *p_stats_map;
+	struct queue_stats_mappings *q_stats_map;
+	bool txq_stats_map_found = false;
+	bool rxq_stats_map_found = false;
+	uint16_t nb_txq_stats_map;
+	uint16_t nb_rxq_stats_map;
+	struct rte_port *port;
+	uint16_t i, j;
+
+	if (stats == NULL) {
+		printf("input stats pointer is NULL\n");
+		return;
+	}
+
+	if (port_id_is_invalid(pt_id, ENABLED_WARN)) {
+		print_valid_ports();
+		return;
+	}
+
+	port = &ports[pt_id];
+	p_stats_map = &port->p_stats_map;
+	if (p_stats_map->rx_queue_stats_mapping_enabled) {
+		printf("\n");
+		nb_rxq_stats_map = p_stats_map->nb_rxq_stats_mappings;
+		for (j = 0; j < RTE_ETHDEV_QUEUE_STAT_CNTRS; j++) {
+			for (i = 0; i < nb_rxq_stats_map; i++) {
+				q_stats_map = &p_stats_map->rxq_map_array[i];
+				if (q_stats_map->stats_counter_id == j &&
+					q_stats_map->active) {
+					rxq_stats_map_found = true;
+					break;
+				}
+			}
+
+			if (rxq_stats_map_found) {
+				printf("  Stats reg %2d RX-packets:%14"PRIu64
+				       "    RX-errors: %14"PRIu64
+				       "    RX-bytes:%14"PRIu64"\n",
+				       j, stats->q_ipackets[j],
+				       stats->q_errors[j],
+				       stats->q_ibytes[j]);
+				rxq_stats_map_found = false;
+			}
+		}
+	}
+	if (p_stats_map->tx_queue_stats_mapping_enabled) {
+		printf("\n");
+		nb_txq_stats_map = p_stats_map->nb_txq_stats_mappings;
+		for (j = 0; j < RTE_ETHDEV_QUEUE_STAT_CNTRS; j++) {
+			for (i = 0; i < nb_txq_stats_map; i++) {
+				q_stats_map = &p_stats_map->txq_map_array[i];
+				if (q_stats_map->stats_counter_id == j &&
+					q_stats_map->active) {
+					txq_stats_map_found = true;
+					break;
+				}
+			}
+
+			if (txq_stats_map_found) {
+				printf("  Stats reg %2d TX-packets:%14"PRIu64
+				       "                                 TX-bytes:%14"
+				       PRIu64"\n",
+				       j, stats->q_opackets[j],
+				       stats->q_obytes[j]);
+				txq_stats_map_found = false;
+			}
+		}
+	}
+}
+
+void
 fwd_stats_display(void)
 {
 	static const char *fwd_stats_border = "----------------------";
 	static const char *acc_stats_border = "+++++++++++++++";
+	struct port_stats_mappings *p_stats_map;
 	struct {
 		struct fwd_stream *rx_stream;
 		struct fwd_stream *tx_stream;
@@ -1857,8 +1922,6 @@  fwd_stats_display(void)
 			fwd_cycles += fs->core_cycles;
 	}
 	for (i = 0; i < cur_fwd_config.nb_fwd_ports; i++) {
-		uint8_t j;
-
 		pt_id = fwd_ports_ids[i];
 		port = &ports[pt_id];
 
@@ -1881,8 +1944,9 @@  fwd_stats_display(void)
 		printf("\n  %s Forward statistics for port %-2d %s\n",
 		       fwd_stats_border, pt_id, fwd_stats_border);
 
-		if (!port->rx_queue_stats_mapping_enabled &&
-		    !port->tx_queue_stats_mapping_enabled) {
+		p_stats_map = &port->p_stats_map;
+		if (!p_stats_map->rx_queue_stats_mapping_enabled &&
+		    !p_stats_map->tx_queue_stats_mapping_enabled) {
 			printf("  RX-packets: %-14"PRIu64
 			       " RX-dropped: %-14"PRIu64
 			       "RX-total: %-"PRIu64"\n",
@@ -1944,26 +2008,7 @@  fwd_stats_display(void)
 					&ports_stats[pt_id].tx_stream->tx_burst_stats);
 		}
 
-		if (port->rx_queue_stats_mapping_enabled) {
-			printf("\n");
-			for (j = 0; j < RTE_ETHDEV_QUEUE_STAT_CNTRS; j++) {
-				printf("  Stats reg %2d RX-packets:%14"PRIu64
-				       "     RX-errors:%14"PRIu64
-				       "    RX-bytes:%14"PRIu64"\n",
-				       j, stats.q_ipackets[j],
-				       stats.q_errors[j], stats.q_ibytes[j]);
-			}
-			printf("\n");
-		}
-		if (port->tx_queue_stats_mapping_enabled) {
-			for (j = 0; j < RTE_ETHDEV_QUEUE_STAT_CNTRS; j++) {
-				printf("  Stats reg %2d TX-packets:%14"PRIu64
-				       "                                 TX-bytes:%14"
-				       PRIu64"\n",
-				       j, stats.q_opackets[j],
-				       stats.q_obytes[j]);
-			}
-		}
+		port_stats_mapping_display(pt_id, &stats);
 
 		printf("  %s--------------------------------%s\n",
 		       fwd_stats_border, fwd_stats_border);
@@ -3355,59 +3400,84 @@  dev_event_callback(const char *device_name, enum rte_dev_event_type type,
 static int
 set_tx_queue_stats_mapping_registers(portid_t port_id, struct rte_port *port)
 {
+	struct port_stats_mappings *p_stats_map;
+	struct queue_stats_mappings *q_stats_map;
+	bool mapping_found = false;
 	uint16_t i;
 	int diag;
-	uint8_t mapping_found = 0;
 
-	for (i = 0; i < nb_tx_queue_stats_mappings; i++) {
-		if ((tx_queue_stats_mappings[i].port_id == port_id) &&
-				(tx_queue_stats_mappings[i].queue_id < nb_txq )) {
+	p_stats_map = &port->p_stats_map;
+	for (i = 0; i < p_stats_map->nb_txq_stats_mappings; i++) {
+		q_stats_map = &p_stats_map->txq_map_array[i];
+		if (q_stats_map->active) {
+			mapping_found = true;
+			continue;
+		}
+
+		if (q_stats_map->queue_id < nb_txq) {
 			diag = rte_eth_dev_set_tx_queue_stats_mapping(port_id,
-					tx_queue_stats_mappings[i].queue_id,
-					tx_queue_stats_mappings[i].stats_counter_id);
+					q_stats_map->queue_id,
+					q_stats_map->stats_counter_id);
 			if (diag != 0)
 				return diag;
-			mapping_found = 1;
+			q_stats_map->active = true;
+			mapping_found = true;
 		}
 	}
 	if (mapping_found)
-		port->tx_queue_stats_mapping_enabled = 1;
+		p_stats_map->tx_queue_stats_mapping_enabled = true;
+
 	return 0;
 }
 
 static int
 set_rx_queue_stats_mapping_registers(portid_t port_id, struct rte_port *port)
 {
+	struct port_stats_mappings *p_stats_map;
+	struct queue_stats_mappings *q_stats_map;
+	bool mapping_found = false;
 	uint16_t i;
 	int diag;
-	uint8_t mapping_found = 0;
 
-	for (i = 0; i < nb_rx_queue_stats_mappings; i++) {
-		if ((rx_queue_stats_mappings[i].port_id == port_id) &&
-				(rx_queue_stats_mappings[i].queue_id < nb_rxq )) {
+	p_stats_map = &port->p_stats_map;
+	for (i = 0; i < p_stats_map->nb_rxq_stats_mappings; i++) {
+		q_stats_map = &p_stats_map->rxq_map_array[i];
+		if (q_stats_map->active) {
+			mapping_found = true;
+			continue;
+		}
+
+		if (q_stats_map->queue_id < nb_rxq) {
 			diag = rte_eth_dev_set_rx_queue_stats_mapping(port_id,
-					rx_queue_stats_mappings[i].queue_id,
-					rx_queue_stats_mappings[i].stats_counter_id);
+					q_stats_map->queue_id,
+					q_stats_map->stats_counter_id);
 			if (diag != 0)
 				return diag;
-			mapping_found = 1;
+			q_stats_map->active = true;
+			mapping_found = true;
 		}
 	}
 	if (mapping_found)
-		port->rx_queue_stats_mapping_enabled = 1;
+		p_stats_map->rx_queue_stats_mapping_enabled = true;
+
 	return 0;
 }
 
 static void
 map_port_queue_stats_mapping_registers(portid_t pi, struct rte_port *port)
 {
+	struct port_stats_mappings *p_stats_map = &port->p_stats_map;
 	int diag = 0;
 
 	diag = set_tx_queue_stats_mapping_registers(pi, port);
 	if (diag != 0) {
 		if (diag == -ENOTSUP) {
-			port->tx_queue_stats_mapping_enabled = 0;
-			printf("TX queue stats mapping not supported port id=%d\n", pi);
+			memset(p_stats_map->txq_map_array, 0,
+				sizeof(p_stats_map->txq_map_array));
+			p_stats_map->nb_txq_stats_mappings = 0;
+			p_stats_map->tx_queue_stats_mapping_enabled = false;
+			printf("TX queue stats mapping not supported "
+				"port id=%d\n", pi);
 		}
 		else
 			rte_exit(EXIT_FAILURE,
@@ -3419,8 +3489,12 @@  map_port_queue_stats_mapping_registers(portid_t pi, struct rte_port *port)
 	diag = set_rx_queue_stats_mapping_registers(pi, port);
 	if (diag != 0) {
 		if (diag == -ENOTSUP) {
-			port->rx_queue_stats_mapping_enabled = 0;
-			printf("RX queue stats mapping not supported port id=%d\n", pi);
+			memset(p_stats_map->rxq_map_array, 0,
+				sizeof(p_stats_map->rxq_map_array));
+			p_stats_map->nb_rxq_stats_mappings = 0;
+			p_stats_map->rx_queue_stats_mapping_enabled = false;
+			printf("RX queue stats mapping not supported "
+				"port id=%d\n", pi);
 		}
 		else
 			rte_exit(EXIT_FAILURE,
diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h
index 833ca14..0397d6d 100644
--- a/app/test-pmd/testpmd.h
+++ b/app/test-pmd/testpmd.h
@@ -181,6 +181,24 @@  struct tunnel_ops {
 	uint32_t items:1;
 };
 
+struct queue_stats_mappings {
+	uint16_t queue_id;
+	uint8_t stats_counter_id;
+	bool active;
+};
+
+/**
+ * The data of queue stats mapping on this port.
+ */
+struct port_stats_mappings {
+	struct queue_stats_mappings rxq_map_array[RTE_ETHDEV_QUEUE_STAT_CNTRS];
+	struct queue_stats_mappings txq_map_array[RTE_ETHDEV_QUEUE_STAT_CNTRS];
+	uint16_t nb_rxq_stats_mappings;
+	uint16_t nb_txq_stats_mappings;
+	bool rx_queue_stats_mapping_enabled;
+	bool tx_queue_stats_mapping_enabled;
+};
+
 /**
  * The data structure associated with each port.
  */
@@ -195,8 +213,7 @@  struct rte_port {
 	uint16_t                tunnel_tso_segsz; /**< Segmentation offload MSS for tunneled pkts. */
 	uint16_t                tx_vlan_id;/**< The tag ID */
 	uint16_t                tx_vlan_id_outer;/**< The outer tag ID */
-	uint8_t                 tx_queue_stats_mapping_enabled;
-	uint8_t                 rx_queue_stats_mapping_enabled;
+	struct port_stats_mappings p_stats_map;
 	volatile uint16_t        port_status;    /**< port started or not */
 	uint8_t                 need_setup;     /**< port just attached */
 	uint8_t                 need_reconfig;  /**< need reconfiguring port or not */
@@ -315,25 +332,6 @@  enum dcb_mode_enable
 	DCB_ENABLED
 };
 
-#define MAX_TX_QUEUE_STATS_MAPPINGS 1024 /* MAX_PORT of 32 @ 32 tx_queues/port */
-#define MAX_RX_QUEUE_STATS_MAPPINGS 4096 /* MAX_PORT of 32 @ 128 rx_queues/port */
-
-struct queue_stats_mappings {
-	portid_t port_id;
-	uint16_t queue_id;
-	uint8_t stats_counter_id;
-} __rte_cache_aligned;
-
-extern struct queue_stats_mappings tx_queue_stats_mappings_array[];
-extern struct queue_stats_mappings rx_queue_stats_mappings_array[];
-
-/* Assign both tx and rx queue stats mappings to the same default values */
-extern struct queue_stats_mappings *tx_queue_stats_mappings;
-extern struct queue_stats_mappings *rx_queue_stats_mappings;
-
-extern uint16_t nb_tx_queue_stats_mappings;
-extern uint16_t nb_rx_queue_stats_mappings;
-
 extern uint8_t xstats_hide_zero; /**< Hide zero values for xstats display */
 
 /* globals used for configuration */
@@ -780,6 +778,7 @@  void nic_stats_clear(portid_t port_id);
 void nic_xstats_display(portid_t port_id);
 void nic_xstats_clear(portid_t port_id);
 void nic_stats_mapping_display(portid_t port_id);
+void port_stats_mapping_display(portid_t pt_id, struct rte_eth_stats *stats);
 void device_infos_display(const char *identifier);
 void port_infos_display(portid_t port_id);
 void port_summary_display(portid_t port_id);