[v4,1/7] ethdev: allocate max space for internal queue array

Message ID 20211004135603.20593-2-konstantin.ananyev@intel.com (mailing list archive)
State Superseded, archived
Delegated to: Ferruh Yigit
Series: hide eth dev related structures

Checks

Context        Check     Description
ci/checkpatch  success   coding style OK

Commit Message

Ananyev, Konstantin Oct. 4, 2021, 1:55 p.m. UTC
At the queue configure stage, always allocate space for the maximum
possible number (RTE_MAX_QUEUES_PER_PORT) of queue pointers.
That will allow 'fast' inline functions (eth_rx_burst, etc.) to refer
to internal queue data pointers without extra checks on the current
number of configured queues.
That will help in the future to hide rte_eth_dev and related structures.

Signed-off-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
---
 lib/ethdev/rte_ethdev.c | 36 +++++++++---------------------------
 1 file changed, 9 insertions(+), 27 deletions(-)

Comments

Thomas Monjalon Oct. 5, 2021, 12:09 p.m. UTC | #1
04/10/2021 15:55, Konstantin Ananyev:
> At the queue configure stage, always allocate space for the maximum
> possible number (RTE_MAX_QUEUES_PER_PORT) of queue pointers.
> That will allow 'fast' inline functions (eth_rx_burst, etc.) to refer
> to internal queue data pointers without extra checks on the current
> number of configured queues.

What is the memory usage overhead per port?
We should consider cases with thousands of virtual ports.
Thomas Monjalon Oct. 5, 2021, 12:21 p.m. UTC | #2
04/10/2021 15:55, Konstantin Ananyev:
> At the queue configure stage, always allocate space for the maximum
> possible number (RTE_MAX_QUEUES_PER_PORT) of queue pointers.

The problem with this max is that it cannot be changed dynamically.
That could be another patch, but I would like to see this number
become a default max that can be changed with an init function called
before any other configuration.
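Such an init function does not exist in DPDK; the sketch below is purely hypothetical, illustrating the semantics suggested above (the function name, variable names, and error codes are all assumptions): the compile-time maximum becomes only a default, adjustable at runtime until the first port is configured.

```c
#include <errno.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical illustration, not an actual DPDK API: RTE_MAX_QUEUES_PER_PORT
 * would become a default that can be lowered or raised by an init call made
 * before any port configuration. */
#define DEFAULT_MAX_QUEUES_PER_PORT 1024

static uint16_t max_queues_per_port = DEFAULT_MAX_QUEUES_PER_PORT;
static bool any_port_configured;	/* set once a queue array is allocated */

/* Must be called before the first port is configured. */
int set_max_queues_per_port(uint16_t max)
{
	if (any_port_configured)
		return -EBUSY;	/* queue arrays are already sized, too late */
	if (max == 0)
		return -EINVAL;
	max_queues_per_port = max;
	return 0;
}
```

With this shape, eth_dev_rx_queue_config()/eth_dev_tx_queue_config() would size their arrays from max_queues_per_port instead of the macro, and reject the call once any array has been allocated.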
Ananyev, Konstantin Oct. 5, 2021, 4:45 p.m. UTC | #3
> > At the queue configure stage, always allocate space for the maximum
> > possible number (RTE_MAX_QUEUES_PER_PORT) of queue pointers.
> > That will allow 'fast' inline functions (eth_rx_burst, etc.) to refer
> > to internal queue data pointers without extra checks on the current
> > number of configured queues.
> 
> What is the memory usage overhead per port?

(2 * sizeof(uintptr_t)) * RTE_MAX_QUEUES_PER_PORT
With RTE_MAX_QUEUES_PER_PORT == 1024 (default value) it is 16KB per port.

> We should consider cases with thousands of virtual ports.

For 1K ports (with 1K queues each) it will be 16MB.
Thomas Monjalon Oct. 5, 2021, 4:49 p.m. UTC | #4
05/10/2021 18:45, Ananyev, Konstantin:
> > > At the queue configure stage, always allocate space for the maximum
> > > possible number (RTE_MAX_QUEUES_PER_PORT) of queue pointers.
> > > That will allow 'fast' inline functions (eth_rx_burst, etc.) to refer
> > > to internal queue data pointers without extra checks on the current
> > > number of configured queues.
> > 
> > What is the memory usage overhead per port?
> 
> (2 * sizeof(uintptr_t)) * RTE_MAX_QUEUES_PER_PORT
> With RTE_MAX_QUEUES_PER_PORT == 1024 (default value) it is 16KB per port.

Please add it in the commit log.

> > We should consider cases with thousands of virtual ports.
> 
> For 1K ports (with 1K queues each) it will be 16MB.

OK, it looks reasonable.

Patch

diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c
index daf5ca9242..424bc260fa 100644
--- a/lib/ethdev/rte_ethdev.c
+++ b/lib/ethdev/rte_ethdev.c
@@ -898,7 +898,8 @@  eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 	if (dev->data->rx_queues == NULL && nb_queues != 0) { /* first time configuration */
 		dev->data->rx_queues = rte_zmalloc("ethdev->rx_queues",
-				sizeof(dev->data->rx_queues[0]) * nb_queues,
+				sizeof(dev->data->rx_queues[0]) *
+				RTE_MAX_QUEUES_PER_PORT,
 				RTE_CACHE_LINE_SIZE);
 		if (dev->data->rx_queues == NULL) {
 			dev->data->nb_rx_queues = 0;
@@ -909,21 +910,11 @@  eth_dev_rx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 		rxq = dev->data->rx_queues;
 
-		for (i = nb_queues; i < old_nb_queues; i++)
+		for (i = nb_queues; i < old_nb_queues; i++) {
 			(*dev->dev_ops->rx_queue_release)(rxq[i]);
-		rxq = rte_realloc(rxq, sizeof(rxq[0]) * nb_queues,
-				RTE_CACHE_LINE_SIZE);
-		if (rxq == NULL)
-			return -(ENOMEM);
-		if (nb_queues > old_nb_queues) {
-			uint16_t new_qs = nb_queues - old_nb_queues;
-
-			memset(rxq + old_nb_queues, 0,
-				sizeof(rxq[0]) * new_qs);
+			rxq[i] = NULL;
 		}
 
-		dev->data->rx_queues = rxq;
-
 	} else if (dev->data->rx_queues != NULL && nb_queues == 0) {
 		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->rx_queue_release, -ENOTSUP);
 
@@ -1138,8 +1129,9 @@  eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 	if (dev->data->tx_queues == NULL && nb_queues != 0) { /* first time configuration */
 		dev->data->tx_queues = rte_zmalloc("ethdev->tx_queues",
-						   sizeof(dev->data->tx_queues[0]) * nb_queues,
-						   RTE_CACHE_LINE_SIZE);
+				sizeof(dev->data->tx_queues[0]) *
+				RTE_MAX_QUEUES_PER_PORT,
+				RTE_CACHE_LINE_SIZE);
 		if (dev->data->tx_queues == NULL) {
 			dev->data->nb_tx_queues = 0;
 			return -(ENOMEM);
@@ -1149,21 +1141,11 @@  eth_dev_tx_queue_config(struct rte_eth_dev *dev, uint16_t nb_queues)
 
 		txq = dev->data->tx_queues;
 
-		for (i = nb_queues; i < old_nb_queues; i++)
+		for (i = nb_queues; i < old_nb_queues; i++) {
 			(*dev->dev_ops->tx_queue_release)(txq[i]);
-		txq = rte_realloc(txq, sizeof(txq[0]) * nb_queues,
-				  RTE_CACHE_LINE_SIZE);
-		if (txq == NULL)
-			return -ENOMEM;
-		if (nb_queues > old_nb_queues) {
-			uint16_t new_qs = nb_queues - old_nb_queues;
-
-			memset(txq + old_nb_queues, 0,
-			       sizeof(txq[0]) * new_qs);
+			txq[i] = NULL;
 		}
 
-		dev->data->tx_queues = txq;
-
 	} else if (dev->data->tx_queues != NULL && nb_queues == 0) {
 		RTE_FUNC_PTR_OR_ERR_RET(*dev->dev_ops->tx_queue_release, -ENOTSUP);