From patchwork Mon Sep 27 12:26:43 2021
X-Patchwork-Submitter: Nipun Gupta <nipun.gupta@nxp.com>
X-Patchwork-Id: 99778
From: nipun.gupta@nxp.com
To: dev@dpdk.org
Cc: thomas@monjalon.net, ferruh.yigit@intel.com, hemant.agrawal@nxp.com,
 sachin.saxena@nxp.com, Jun Yang <jun.yang@nxp.com>
Date: Mon, 27 Sep 2021 17:56:43 +0530
Message-Id: <20210927122650.30881-5-nipun.gupta@nxp.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210927122650.30881-1-nipun.gupta@nxp.com>
References: <20210927122650.30881-1-nipun.gupta@nxp.com>
Subject: [dpdk-dev] [PATCH 04/11] net/dpaa2: support multiple Tx queues
 enqueue for ordered

From: Jun Yang <jun.yang@nxp.com>

Support Tx enqueue in ordered queue mode, where the Tx queue ID of each
event may differ. Instead of transmitting one mbuf at a time, the Tx
adapter enqueue callback now collects the destination queue of every
event and sends the whole burst with a single multi-queue enqueue.
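As an illustration (not part of this patch), a minimal application-side
sketch follows, assuming a Tx adapter with internal-port capability; the
helper name and the 4-queue spread are hypothetical. Per-mbuf queue IDs
are set with rte_event_eth_tx_adapter_txq_set(), which is what the new
burst path honours per event:

#include <rte_eventdev.h>
#include <rte_event_eth_tx_adapter.h>

/* Hypothetical helper: each event's mbuf may target a different
 * Tx queue within one enqueue burst.
 */
static uint16_t
tx_burst_multi_queue(uint8_t evdev_id, uint8_t ev_port,
		     struct rte_event ev[], uint16_t nb_events)
{
	uint16_t i;

	for (i = 0; i < nb_events; i++) {
		/* Spread the burst across Tx queues 0..3; the queue ID
		 * is honoured per event rather than per burst.
		 */
		rte_event_eth_tx_adapter_txq_set(ev[i].mbuf, i % 4);
	}
	return rte_event_eth_tx_adapter_enqueue(evdev_id, ev_port,
						ev, nb_events, 0);
}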
Signed-off-by: Jun Yang <jun.yang@nxp.com>
---
 drivers/event/dpaa2/dpaa2_eventdev.c |  12 ++-
 drivers/net/dpaa2/dpaa2_ethdev.h     |   3 +
 drivers/net/dpaa2/dpaa2_rxtx.c       | 142 +++++++++++++++++++++++++++
 drivers/net/dpaa2/version.map        |   1 +
 4 files changed, 154 insertions(+), 4 deletions(-)

diff --git a/drivers/event/dpaa2/dpaa2_eventdev.c b/drivers/event/dpaa2/dpaa2_eventdev.c
index 5ccf22f77f..28f3bbca9a 100644
--- a/drivers/event/dpaa2/dpaa2_eventdev.c
+++ b/drivers/event/dpaa2/dpaa2_eventdev.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright 2017,2019 NXP
+ * Copyright 2017,2019-2021 NXP
  */
 
 #include <assert.h>
@@ -1002,16 +1002,20 @@ dpaa2_eventdev_txa_enqueue(void *port,
 			   struct rte_event ev[],
 			   uint16_t nb_events)
 {
-	struct rte_mbuf *m = (struct rte_mbuf *)ev[0].mbuf;
+	void *txq[32];
+	struct rte_mbuf *m[32];
 	uint8_t qid, i;
 
 	RTE_SET_USED(port);
 
 	for (i = 0; i < nb_events; i++) {
-		qid = rte_event_eth_tx_adapter_txq_get(m);
-		rte_eth_tx_burst(m->port, qid, &m, 1);
+		m[i] = (struct rte_mbuf *)ev[i].mbuf;
+		qid = rte_event_eth_tx_adapter_txq_get(m[i]);
+		txq[i] = rte_eth_devices[m[i]->port].data->tx_queues[qid];
 	}
 
+	dpaa2_dev_tx_multi_txq_ordered(txq, m, nb_events);
+
 	return nb_events;
 }
 
diff --git a/drivers/net/dpaa2/dpaa2_ethdev.h b/drivers/net/dpaa2/dpaa2_ethdev.h
index 3f34d7ecff..07a6811dd2 100644
--- a/drivers/net/dpaa2/dpaa2_ethdev.h
+++ b/drivers/net/dpaa2/dpaa2_ethdev.h
@@ -236,6 +236,9 @@ void dpaa2_dev_process_ordered_event(struct qbman_swp *swp,
 uint16_t dpaa2_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts);
 uint16_t dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs,
 			      uint16_t nb_pkts);
+uint16_t dpaa2_dev_tx_multi_txq_ordered(void **queue,
+		struct rte_mbuf **bufs, uint16_t nb_pkts);
+
 uint16_t dummy_dev_tx(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts);
 void dpaa2_dev_free_eqresp_buf(uint16_t eqresp_ci);
 void dpaa2_flow_clean(struct rte_eth_dev *dev);
diff --git a/drivers/net/dpaa2/dpaa2_rxtx.c b/drivers/net/dpaa2/dpaa2_rxtx.c
index f40369e2c3..447063b3c3 100644
--- a/drivers/net/dpaa2/dpaa2_rxtx.c
+++ b/drivers/net/dpaa2/dpaa2_rxtx.c
@@ -1445,6 +1445,148 @@ dpaa2_set_enqueue_descriptor(struct dpaa2_queue *dpaa2_q,
 	*dpaa2_seqn(m) = DPAA2_INVALID_MBUF_SEQN;
 }
 
+__rte_internal uint16_t
+dpaa2_dev_tx_multi_txq_ordered(void **queue,
+		struct rte_mbuf **bufs, uint16_t nb_pkts)
+{
+	/* Function to transmit the frames to multiple queues respectively.*/
+	uint32_t loop, retry_count;
+	int32_t ret;
+	struct qbman_fd fd_arr[MAX_TX_RING_SLOTS];
+	uint32_t frames_to_send;
+	struct rte_mempool *mp;
+	struct qbman_eq_desc eqdesc[MAX_TX_RING_SLOTS];
+	struct dpaa2_queue *dpaa2_q[MAX_TX_RING_SLOTS];
+	struct qbman_swp *swp;
+	uint16_t bpid;
+	struct rte_mbuf *mi;
+	struct rte_eth_dev_data *eth_data;
+	struct dpaa2_dev_priv *priv;
+	struct dpaa2_queue *order_sendq;
+
+	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
+		ret = dpaa2_affine_qbman_swp();
+		if (ret) {
+			DPAA2_PMD_ERR(
+				"Failed to allocate IO portal, tid: %d\n",
+				rte_gettid());
+			return 0;
+		}
+	}
+	swp = DPAA2_PER_LCORE_PORTAL;
+
+	for (loop = 0; loop < nb_pkts; loop++) {
+		dpaa2_q[loop] = (struct dpaa2_queue *)queue[loop];
+		eth_data = dpaa2_q[loop]->eth_data;
+		priv = eth_data->dev_private;
+		qbman_eq_desc_clear(&eqdesc[loop]);
+		if (*dpaa2_seqn(*bufs) && priv->en_ordered) {
+			order_sendq = (struct dpaa2_queue *)priv->tx_vq[0];
+			dpaa2_set_enqueue_descriptor(order_sendq,
+						     (*bufs),
+						     &eqdesc[loop]);
+		} else {
+			qbman_eq_desc_set_no_orp(&eqdesc[loop],
+						 DPAA2_EQ_RESP_ERR_FQ);
+			qbman_eq_desc_set_fq(&eqdesc[loop],
+					     dpaa2_q[loop]->fqid);
+		}
+
+		retry_count = 0;
+		while (qbman_result_SCN_state(dpaa2_q[loop]->cscn)) {
+			retry_count++;
+			/* Retry for some time before giving up */
+			if (retry_count > CONG_RETRY_COUNT)
+				goto send_frames;
+		}
+
+		if (likely(RTE_MBUF_DIRECT(*bufs))) {
+			mp = (*bufs)->pool;
+			/* Check the basic scenario and set
+			 * the FD appropriately here itself.
+			 */
+			if (likely(mp && mp->ops_index ==
+				priv->bp_list->dpaa2_ops_index &&
+				(*bufs)->nb_segs == 1 &&
+				rte_mbuf_refcnt_read((*bufs)) == 1)) {
+				if (unlikely((*bufs)->ol_flags
+					& PKT_TX_VLAN_PKT)) {
+					ret = rte_vlan_insert(bufs);
+					if (ret)
+						goto send_frames;
+				}
+				DPAA2_MBUF_TO_CONTIG_FD((*bufs),
+					&fd_arr[loop],
+					mempool_to_bpid(mp));
+				bufs++;
+				dpaa2_q[loop]++;
+				continue;
+			}
+		} else {
+			mi = rte_mbuf_from_indirect(*bufs);
+			mp = mi->pool;
+		}
+		/* Not a hw_pkt pool allocated frame */
+		if (unlikely(!mp || !priv->bp_list)) {
+			DPAA2_PMD_ERR("Err: No buffer pool attached");
+			goto send_frames;
+		}
+
+		if (mp->ops_index != priv->bp_list->dpaa2_ops_index) {
+			DPAA2_PMD_WARN("Non DPAA2 buffer pool");
+			/* alloc should be from the default buffer pool
+			 * attached to this interface
+			 */
+			bpid = priv->bp_list->buf_pool.bpid;
+
+			if (unlikely((*bufs)->nb_segs > 1)) {
+				DPAA2_PMD_ERR(
+					"S/G not supp for non hw offload buffer");
+				goto send_frames;
+			}
+			if (eth_copy_mbuf_to_fd(*bufs,
+				&fd_arr[loop], bpid)) {
+				goto send_frames;
+			}
+			/* free the original packet */
+			rte_pktmbuf_free(*bufs);
+		} else {
+			bpid = mempool_to_bpid(mp);
+			if (unlikely((*bufs)->nb_segs > 1)) {
+				if (eth_mbuf_to_sg_fd(*bufs,
+						      &fd_arr[loop],
+						      mp,
+						      bpid))
+					goto send_frames;
+			} else {
+				eth_mbuf_to_fd(*bufs,
+					       &fd_arr[loop], bpid);
+			}
+		}
+
+		bufs++;
+		dpaa2_q[loop]++;
+	}
+
+send_frames:
+	frames_to_send = loop;
+	loop = 0;
+	while (loop < frames_to_send) {
+		ret = qbman_swp_enqueue_multiple_desc(swp, &eqdesc[loop],
+				&fd_arr[loop],
+				frames_to_send - loop);
+		if (likely(ret > 0)) {
+			loop += ret;
+		} else {
+			retry_count++;
+			if (retry_count > DPAA2_MAX_TX_RETRY_COUNT)
+				break;
+		}
+	}
+
+	return loop;
+}
+
 /* Callback to handle sending ordered packets through WRIOP based interface */
 uint16_t
 dpaa2_dev_tx_ordered(void *queue, struct rte_mbuf **bufs, uint16_t nb_pkts)
diff --git a/drivers/net/dpaa2/version.map b/drivers/net/dpaa2/version.map
index 3ab96344c4..f9786af7e4 100644
--- a/drivers/net/dpaa2/version.map
+++ b/drivers/net/dpaa2/version.map
@@ -19,6 +19,7 @@ EXPERIMENTAL {
 INTERNAL {
 	global:
 
+	dpaa2_dev_tx_multi_txq_ordered;
 	dpaa2_eth_eventq_attach;
 	dpaa2_eth_eventq_detach;
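
For reviewers, a minimal and hypothetical setup sketch (not part of the
patch) that exercises the dpaa2_eventdev_txa_enqueue() path above through
the generic Tx adapter API; the adapter ID 0 and the helper name are
illustrative, and error handling is abbreviated:

#include <rte_eventdev.h>
#include <rte_event_eth_tx_adapter.h>

/* Hypothetical setup: attach all Tx queues of one ethdev to a Tx adapter
 * backed by the DPAA2 eventdev. With internal-port capability, bursts
 * enqueued through the adapter reach dpaa2_eventdev_txa_enqueue().
 */
static int
setup_tx_adapter(uint8_t evdev_id, uint16_t eth_port_id)
{
	struct rte_event_port_conf pconf = {0};
	struct rte_event_dev_info info;
	int ret;

	ret = rte_event_dev_info_get(evdev_id, &info);
	if (ret < 0)
		return ret;
	pconf.new_event_threshold = info.max_num_events;
	pconf.dequeue_depth = info.max_event_port_dequeue_depth;
	pconf.enqueue_depth = info.max_event_port_enqueue_depth;

	ret = rte_event_eth_tx_adapter_create(0, evdev_id, &pconf);
	if (ret < 0)
		return ret;
	/* queue id -1 adds every Tx queue of the port */
	ret = rte_event_eth_tx_adapter_queue_add(0, eth_port_id, -1);
	if (ret < 0)
		return ret;
	return rte_event_eth_tx_adapter_start(0);
}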