[v2,05/16] net/dpaa2: fix the prefetch Rx to honor nb pkts

Message ID 1530697431-1244-5-git-send-email-hemant.agrawal@nxp.com (mailing list archive)
State Superseded, archived
Delegated to: Thomas Monjalon
Series [v2,01/16] bus/dpaa: fix phandle support for kernel 4.16

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK

Commit Message

Hemant Agrawal July 4, 2018, 9:43 a.m. UTC
  This patch fixes the prefetch Rx routine to
set the next prefetch request to the size of nb_pkts.
It assumes that the next request will ideally be
of the same size.

Fixes: 4bc5ab88dbd6 ("net/dpaa2: fix Tx only mode")
Cc: stable@dpdk.org

Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
---
 drivers/net/dpaa2/dpaa2_rxtx.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)
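
For clarity, here is a minimal standalone sketch of the pull-size clamping this
patch introduces. The DPAA2_DQRR_RING_SIZE value of 16 is only an assumption
for illustration; the real driver takes the macro from its portal headers.

#include <stdint.h>
#include <stdio.h>

#define DPAA2_DQRR_RING_SIZE 16	/* assumed DQRR depth, illustration only */

/* Clamp the caller's burst size to what one volatile dequeue pull can carry.
 * The driver reuses this clamped value when issuing the *next* prefetch pull,
 * which is why nb_pkts is expected to stay constant across calls per queue.
 */
static inline uint16_t
pull_size_for(uint16_t nb_pkts)
{
	return (nb_pkts > DPAA2_DQRR_RING_SIZE) ?
			DPAA2_DQRR_RING_SIZE : nb_pkts;
}

int
main(void)
{
	printf("nb_pkts=8  -> pull_size=%u\n", pull_size_for(8));	/* 8 */
	printf("nb_pkts=64 -> pull_size=%u\n", pull_size_for(64));	/* 16 */
	return 0;
}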
  

Comments

Shreyansh Jain July 6, 2018, 5:01 a.m. UTC | #1
On Wednesday 04 July 2018 03:13 PM, Hemant Agrawal wrote:
> This patch fix the prefetch rx routine to
             ^^^^^
             fixes
> set the next prefetch request to the size of nb_pkts.
> This will assume that next request will ideally will be
> of same size.

Incorrect wording.
Maybe:
"It assumes that next request would ideally be of same size"

> 
> Fixes: 4bc5ab88dbd6 ("net/dpaa2: fix Tx only mode")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>
> ---
>   drivers/net/dpaa2/dpaa2_rxtx.c | 16 +++++++++++-----
>   1 file changed, 11 insertions(+), 5 deletions(-)
> 
> diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
> index dac086d..311861c 100644
> --- a/drivers/net/dpaa2/dpaa2_rxtx.c
> +++ b/drivers/net/dpaa2/dpaa2_rxtx.c
> @@ -447,6 +447,12 @@ eth_copy_mbuf_to_fd(struct rte_mbuf *mbuf,
>   return 0;
>   }
>   
> +/* This function assumes that you will be keeping the same value for nb_pkts
                             ^^^^^^^^^^
Ideally commit messages shouldn't have personifications 'you/your' etc
But, it is a trivial thing and I leave at your discretion.

> + * across calls per queue, if that is not the case, better use non-prefetch
> + * version of rx call.
> + * It will return the packets as request in the previous call without honoring
                                    ^^^^^^^^^
                                    requested

[...]

Being very trivial comments, if you send the next version, please use:

Acked-by: Shreyansh Jain <shreyansh.jain@nxp.com>
  
Hemant Agrawal July 6, 2018, 8:13 a.m. UTC | #2
On Wednesday 04 July 2018 03:13 PM, Hemant Agrawal wrote:
> This patch fix the prefetch rx routine to
             ^^^^^
             fixes
> set the next prefetch request to the size of nb_pkts.
> This will assume that next request will ideally will be of same size.

Incorrect wording.
Maybe:
"It assumes that next request would ideally be of same size"

[Hemant] I have taken care of it in v3.
  

Patch

diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index dac086d..311861c 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -447,6 +447,12 @@  eth_copy_mbuf_to_fd(struct rte_mbuf *mbuf,
 return 0;
 }
 
+/* This function assumes that the same value of nb_pkts is used across calls
+ * per queue. If that is not the case, it is better to use the non-prefetch
+ * version of the Rx call.
+ * It will return the packets as requested in the previous call, without
+ * honoring the current nb_pkts or bufs space.
+ */
 uint16_t
 dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 {
@@ -454,7 +460,7 @@  dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	struct dpaa2_queue *dpaa2_q = (struct dpaa2_queue *)queue;
 	struct qbman_result *dq_storage, *dq_storage1 = NULL;
 	uint32_t fqid = dpaa2_q->fqid;
-	int ret, num_rx = 0;
+	int ret, num_rx = 0, pull_size;
 	uint8_t pending, status;
 	struct qbman_swp *swp;
 	const struct qbman_fd *fd, *next_fd;
@@ -470,12 +476,12 @@  dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 		}
 	}
 	swp = DPAA2_PER_LCORE_ETHRX_PORTAL;
-
+	pull_size = (nb_pkts > DPAA2_DQRR_RING_SIZE) ?
+					       DPAA2_DQRR_RING_SIZE : nb_pkts;
 	if (unlikely(!q_storage->active_dqs)) {
 		q_storage->toggle = 0;
 		dq_storage = q_storage->dq_storage[q_storage->toggle];
-		q_storage->last_num_pkts = (nb_pkts > DPAA2_DQRR_RING_SIZE) ?
-					       DPAA2_DQRR_RING_SIZE : nb_pkts;
+		q_storage->last_num_pkts = pull_size;
 		qbman_pull_desc_clear(&pulldesc);
 		qbman_pull_desc_set_numframes(&pulldesc,
 					      q_storage->last_num_pkts);
@@ -514,7 +520,7 @@  dpaa2_dev_prefetch_rx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
 	q_storage->toggle ^= 1;
 	dq_storage1 = q_storage->dq_storage[q_storage->toggle];
 	qbman_pull_desc_clear(&pulldesc);
-	qbman_pull_desc_set_numframes(&pulldesc, DPAA2_DQRR_RING_SIZE);
+	qbman_pull_desc_set_numframes(&pulldesc, pull_size);
 	qbman_pull_desc_set_fq(&pulldesc, fqid);
 	qbman_pull_desc_set_storage(&pulldesc, dq_storage1,
 		(uint64_t)(DPAA2_VADDR_TO_IOVA(dq_storage1)), 1);
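
Given the comment added above, callers of the prefetch Rx path should keep
nb_pkts constant per queue. A minimal caller-side sketch follows; poll_queue
and RX_BURST_SIZE are illustrative names, and the port and queue are assumed
to be configured elsewhere.

#include <rte_ethdev.h>
#include <rte_mbuf.h>

#define RX_BURST_SIZE 16	/* keep constant per queue; <= DPAA2_DQRR_RING_SIZE */

/* Poll one Rx queue with a fixed burst size, as the prefetch Rx path expects.
 * The dpaa2 prefetch Rx routine sizes the next hardware pull from nb_pkts and
 * may return up to the count requested in the previous call.
 */
static uint16_t
poll_queue(uint16_t port_id, uint16_t queue_id, struct rte_mbuf **pkts)
{
	return rte_eth_rx_burst(port_id, queue_id, pkts, RX_BURST_SIZE);
}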