From patchwork Mon Jul 22 16:39:02 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Gagandeep Singh
X-Patchwork-Id: 142621
X-Patchwork-Delegate: thomas@monjalon.net
From: Gagandeep Singh
To: dev@dpdk.org, Hemant Agrawal
Cc: Jun Yang
Subject: [v3 02/30] dma/dpaa2: support multiple HW queues
Date: Mon, 22 Jul 2024 22:09:02 +0530
Message-Id: <20240722163930.2171568-3-g.singh@nxp.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240722163930.2171568-1-g.singh@nxp.com>
References: <20240722115843.1830105-1-g.singh@nxp.com>
 <20240722163930.2171568-1-g.singh@nxp.com>
List-Id: DPDK patches and discussions <dev.dpdk.org>
From: Jun Yang

Initialize and configure the queues of the DMA device according to the
HW queues supported by the MC bus. Because multiple queues per device
are now supported, the virtual queue implementation is dropped.
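Background, sketched for review context (illustration only; the helper
name is hypothetical): the per-vq status ring existed because every
virtual queue was multiplexed onto HW queue 0, so one hardware pull
could surface completions owned by any vq, and each job had to be
parked on its owner's rte_ring before being handed to the right caller.
With one HW queue per vchan, a pull on rx_queue[vq_id] only ever
returns that vq's own jobs, and the dequeue path collapses to a direct
call:

/* Hypothetical condensation of the new dequeue path: vq->vq_id selects
 * the vq's own Rx queue, so no software demultiplexing ring sits
 * between the HW queue and the caller.
 */
static uint16_t
vq_dequeue_direct(struct qdma_virt_queue *vq,
	struct rte_dpaa2_qdma_job **jobs, uint16_t nb_jobs)
{
	int ret;

	/* Pull straight from this vq's dedicated HW queue. */
	ret = vq->dequeue_job(vq, NULL, jobs, nb_jobs);
	if (ret < 0)
		return 0;

	vq->num_dequeues += ret;
	return ret;
}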
Signed-off-by: Jun Yang
---
 drivers/dma/dpaa2/dpaa2_qdma.c | 312 +++++++++++++++------------------
 drivers/dma/dpaa2/dpaa2_qdma.h |   6 +-
 2 files changed, 140 insertions(+), 178 deletions(-)

diff --git a/drivers/dma/dpaa2/dpaa2_qdma.c b/drivers/dma/dpaa2/dpaa2_qdma.c
index 5954b552b5..945ba71e4a 100644
--- a/drivers/dma/dpaa2/dpaa2_qdma.c
+++ b/drivers/dma/dpaa2/dpaa2_qdma.c
@@ -478,9 +478,9 @@ dpdmai_dev_get_job_us(struct qdma_virt_queue *qdma_vq __rte_unused,
 
 static inline uint16_t
 dpdmai_dev_get_single_job_lf(struct qdma_virt_queue *qdma_vq,
-			const struct qbman_fd *fd,
-			struct rte_dpaa2_qdma_job **job,
-			uint16_t *nb_jobs)
+	const struct qbman_fd *fd,
+	struct rte_dpaa2_qdma_job **job,
+	uint16_t *nb_jobs)
 {
 	struct qbman_fle *fle;
 	struct rte_dpaa2_qdma_job **ppjob = NULL;
@@ -512,9 +512,9 @@ dpdmai_dev_get_single_job_lf(struct qdma_virt_queue *qdma_vq,
 
 static inline uint16_t
 dpdmai_dev_get_sg_job_lf(struct qdma_virt_queue *qdma_vq,
-			const struct qbman_fd *fd,
-			struct rte_dpaa2_qdma_job **job,
-			uint16_t *nb_jobs)
+	const struct qbman_fd *fd,
+	struct rte_dpaa2_qdma_job **job,
+	uint16_t *nb_jobs)
 {
 	struct qbman_fle *fle;
 	struct rte_dpaa2_qdma_job **ppjob = NULL;
@@ -548,12 +548,12 @@ dpdmai_dev_get_sg_job_lf(struct qdma_virt_queue *qdma_vq,
 /* Function to receive a QDMA job for a given device and queue*/
 static int
 dpdmai_dev_dequeue_multijob_prefetch(struct qdma_virt_queue *qdma_vq,
-			uint16_t *vq_id,
-			struct rte_dpaa2_qdma_job **job,
-			uint16_t nb_jobs)
+	uint16_t *vq_id,
+	struct rte_dpaa2_qdma_job **job,
+	uint16_t nb_jobs)
 {
 	struct dpaa2_dpdmai_dev *dpdmai_dev = qdma_vq->dpdmai_dev;
-	struct dpaa2_queue *rxq = &(dpdmai_dev->rx_queue[0]);
+	struct dpaa2_queue *rxq;
 	struct qbman_result *dq_storage, *dq_storage1 = NULL;
 	struct qbman_pull_desc pulldesc;
 	struct qbman_swp *swp;
@@ -562,7 +562,7 @@ dpdmai_dev_dequeue_multijob_prefetch(struct qdma_virt_queue *qdma_vq,
 	uint8_t num_rx = 0;
 	const struct qbman_fd *fd;
 	uint16_t vqid, num_rx_ret;
-	uint16_t rx_fqid = rxq->fqid;
+	uint16_t rx_fqid;
 	int ret, pull_size;
 
 	if (qdma_vq->flags & DPAA2_QDMA_VQ_FD_SG_FORMAT) {
@@ -575,15 +575,17 @@ dpdmai_dev_dequeue_multijob_prefetch(struct qdma_virt_queue *qdma_vq,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_QDMA_ERR(
-				"Failed to allocate IO portal, tid: %d\n",
+			DPAA2_QDMA_ERR("Failed to allocate IO portal, tid(%d)",
 				rte_gettid());
 			return 0;
 		}
 	}
 	swp = DPAA2_PER_LCORE_PORTAL;
+	rxq = &dpdmai_dev->rx_queue[qdma_vq->vq_id];
+	rx_fqid = rxq->fqid;
 
-	pull_size = (nb_jobs > dpaa2_dqrr_size) ? dpaa2_dqrr_size : nb_jobs;
+	pull_size = (nb_jobs > dpaa2_dqrr_size) ?
+		dpaa2_dqrr_size : nb_jobs;
 	q_storage = rxq->q_storage;
 
 	if (unlikely(!q_storage->active_dqs)) {
@@ -697,12 +699,12 @@ dpdmai_dev_dequeue_multijob_prefetch(struct qdma_virt_queue *qdma_vq,
 
 static int
 dpdmai_dev_dequeue_multijob_no_prefetch(struct qdma_virt_queue *qdma_vq,
-			uint16_t *vq_id,
-			struct rte_dpaa2_qdma_job **job,
-			uint16_t nb_jobs)
+	uint16_t *vq_id,
+	struct rte_dpaa2_qdma_job **job,
+	uint16_t nb_jobs)
 {
 	struct dpaa2_dpdmai_dev *dpdmai_dev = qdma_vq->dpdmai_dev;
-	struct dpaa2_queue *rxq = &(dpdmai_dev->rx_queue[0]);
+	struct dpaa2_queue *rxq;
 	struct qbman_result *dq_storage;
 	struct qbman_pull_desc pulldesc;
 	struct qbman_swp *swp;
@@ -710,7 +712,7 @@ dpdmai_dev_dequeue_multijob_no_prefetch(struct qdma_virt_queue *qdma_vq,
 	uint8_t num_rx = 0;
 	const struct qbman_fd *fd;
 	uint16_t vqid, num_rx_ret;
-	uint16_t rx_fqid = rxq->fqid;
+	uint16_t rx_fqid;
 	int ret, next_pull, num_pulled = 0;
 
 	if (qdma_vq->flags & DPAA2_QDMA_VQ_FD_SG_FORMAT) {
@@ -725,15 +727,15 @@ dpdmai_dev_dequeue_multijob_no_prefetch(struct qdma_virt_queue *qdma_vq,
 	if (unlikely(!DPAA2_PER_LCORE_DPIO)) {
 		ret = dpaa2_affine_qbman_swp();
 		if (ret) {
-			DPAA2_QDMA_ERR(
-				"Failed to allocate IO portal, tid: %d\n",
+			DPAA2_QDMA_ERR("Failed to allocate IO portal, tid(%d)",
 				rte_gettid());
 			return 0;
 		}
 	}
 	swp = DPAA2_PER_LCORE_PORTAL;
 
-	rxq = &(dpdmai_dev->rx_queue[0]);
+	rxq = &dpdmai_dev->rx_queue[qdma_vq->vq_id];
+	rx_fqid = rxq->fqid;
 
 	do {
 		dq_storage = rxq->q_storage->dq_storage[0];
@@ -810,7 +812,7 @@ dpdmai_dev_submit_multi(struct qdma_virt_queue *qdma_vq,
 			uint16_t nb_jobs)
 {
 	struct dpaa2_dpdmai_dev *dpdmai_dev = qdma_vq->dpdmai_dev;
-	uint16_t txq_id = dpdmai_dev->tx_queue[0].fqid;
+	uint16_t txq_id = dpdmai_dev->tx_queue[qdma_vq->vq_id].fqid;
 	struct qbman_fd fd[DPAA2_QDMA_MAX_DESC];
 	struct qbman_eq_desc eqdesc;
 	struct qbman_swp *swp;
@@ -931,8 +933,8 @@ dpaa2_qdma_submit(void *dev_private, uint16_t vchan)
 
 static int
 dpaa2_qdma_enqueue(void *dev_private, uint16_t vchan,
-		rte_iova_t src, rte_iova_t dst,
-		uint32_t length, uint64_t flags)
+	rte_iova_t src, rte_iova_t dst,
+	uint32_t length, uint64_t flags)
 {
 	struct dpaa2_dpdmai_dev *dpdmai_dev = dev_private;
 	struct qdma_device *qdma_dev = dpdmai_dev->qdma_dev;
@@ -966,8 +968,8 @@ dpaa2_qdma_enqueue(void *dev_private, uint16_t vchan,
 
 int
 rte_dpaa2_qdma_copy_multi(int16_t dev_id, uint16_t vchan,
-		struct rte_dpaa2_qdma_job **jobs,
-		uint16_t nb_cpls)
+	struct rte_dpaa2_qdma_job **jobs,
+	uint16_t nb_cpls)
 {
 	struct rte_dma_fp_object *obj = &rte_dma_fp_objs[dev_id];
 	struct dpaa2_dpdmai_dev *dpdmai_dev = obj->dev_private;
@@ -978,14 +980,11 @@ rte_dpaa2_qdma_copy_multi(int16_t dev_id, uint16_t vchan,
 }
 
 static uint16_t
-dpaa2_qdma_dequeue_multi(struct qdma_device *qdma_dev,
-		struct qdma_virt_queue *qdma_vq,
-		struct rte_dpaa2_qdma_job **jobs,
-		uint16_t nb_jobs)
+dpaa2_qdma_dequeue_multi(struct qdma_virt_queue *qdma_vq,
+	struct rte_dpaa2_qdma_job **jobs,
+	uint16_t nb_jobs)
 {
-	struct qdma_virt_queue *temp_qdma_vq;
-	int ring_count;
-	int ret = 0, i;
+	int ret;
 
 	if (qdma_vq->flags & DPAA2_QDMA_VQ_FD_SG_FORMAT) {
 		/** Make sure there are enough space to get jobs.*/
@@ -1002,42 +1001,12 @@ dpaa2_qdma_dequeue_multi(struct qdma_device *qdma_dev,
 	nb_jobs = RTE_MIN((qdma_vq->num_enqueues -
 			qdma_vq->num_dequeues), nb_jobs);
 
-	if (qdma_vq->exclusive_hw_queue) {
-		/* In case of exclusive queue directly fetch from HW queue */
-		ret = qdma_vq->dequeue_job(qdma_vq, NULL, jobs, nb_jobs);
-		if (ret < 0) {
-			DPAA2_QDMA_ERR(
-				"Dequeue from DPDMAI device failed: %d", ret);
-			return ret;
-		}
-	} else {
-		uint16_t temp_vq_id[DPAA2_QDMA_MAX_DESC];
-
-		/* Get the QDMA completed jobs from the software ring.
-		 * In case they are not available on the ring poke the HW
-		 * to fetch completed jobs from corresponding HW queues
-		 */
-		ring_count = rte_ring_count(qdma_vq->status_ring);
-		if (ring_count < nb_jobs) {
-			ret = qdma_vq->dequeue_job(qdma_vq,
-					temp_vq_id, jobs, nb_jobs);
-			for (i = 0; i < ret; i++) {
-				temp_qdma_vq = &qdma_dev->vqs[temp_vq_id[i]];
-				rte_ring_enqueue(temp_qdma_vq->status_ring,
-					(void *)(jobs[i]));
-			}
-			ring_count = rte_ring_count(
-					qdma_vq->status_ring);
-		}
-
-		if (ring_count) {
-			/* Dequeue job from the software ring
-			 * to provide to the user
-			 */
-			ret = rte_ring_dequeue_bulk(qdma_vq->status_ring,
-					(void **)jobs,
-					ring_count, NULL);
-		}
+	ret = qdma_vq->dequeue_job(qdma_vq, NULL, jobs, nb_jobs);
+	if (ret < 0) {
+		DPAA2_QDMA_ERR("Dequeue from DMA%d-q%d failed(%d)",
+			qdma_vq->dpdmai_dev->dpdmai_id,
+			qdma_vq->vq_id, ret);
+		return ret;
 	}
 
 	qdma_vq->num_dequeues += ret;
@@ -1046,9 +1015,9 @@
 
 static uint16_t
 dpaa2_qdma_dequeue_status(void *dev_private, uint16_t vchan,
-		const uint16_t nb_cpls,
-		uint16_t *last_idx,
-		enum rte_dma_status_code *st)
+	const uint16_t nb_cpls,
+	uint16_t *last_idx,
+	enum rte_dma_status_code *st)
 {
 	struct dpaa2_dpdmai_dev *dpdmai_dev = dev_private;
 	struct qdma_device *qdma_dev = dpdmai_dev->qdma_dev;
@@ -1056,7 +1025,7 @@ dpaa2_qdma_dequeue_status(void *dev_private, uint16_t vchan,
 	struct rte_dpaa2_qdma_job *jobs[DPAA2_QDMA_MAX_DESC];
 	int ret, i;
 
-	ret = dpaa2_qdma_dequeue_multi(qdma_dev, qdma_vq, jobs, nb_cpls);
+	ret = dpaa2_qdma_dequeue_multi(qdma_vq, jobs, nb_cpls);
 
 	for (i = 0; i < ret; i++)
 		st[i] = jobs[i]->status;
@@ -1071,8 +1040,8 @@ dpaa2_qdma_dequeue_status(void *dev_private, uint16_t vchan,
 
 static uint16_t
 dpaa2_qdma_dequeue(void *dev_private,
-		uint16_t vchan, const uint16_t nb_cpls,
-		uint16_t *last_idx, bool *has_error)
+	uint16_t vchan, const uint16_t nb_cpls,
+	uint16_t *last_idx, bool *has_error)
 {
 	struct dpaa2_dpdmai_dev *dpdmai_dev = dev_private;
 	struct qdma_device *qdma_dev = dpdmai_dev->qdma_dev;
@@ -1082,7 +1051,7 @@ dpaa2_qdma_dequeue(void *dev_private,
 
 	RTE_SET_USED(has_error);
 
-	ret = dpaa2_qdma_dequeue_multi(qdma_dev, qdma_vq,
+	ret = dpaa2_qdma_dequeue_multi(qdma_vq,
 				jobs, nb_cpls);
 
 	rte_mempool_put_bulk(qdma_vq->job_pool, (void **)jobs, ret);
@@ -1103,16 +1072,15 @@ rte_dpaa2_qdma_completed_multi(int16_t dev_id, uint16_t vchan,
 	struct qdma_device *qdma_dev = dpdmai_dev->qdma_dev;
 	struct qdma_virt_queue *qdma_vq = &qdma_dev->vqs[vchan];
 
-	return dpaa2_qdma_dequeue_multi(qdma_dev, qdma_vq, jobs, nb_cpls);
+	return dpaa2_qdma_dequeue_multi(qdma_vq, jobs, nb_cpls);
 }
 
 static int
 dpaa2_qdma_info_get(const struct rte_dma_dev *dev,
-		struct rte_dma_info *dev_info,
-		uint32_t info_sz)
+	struct rte_dma_info *dev_info,
+	uint32_t info_sz __rte_unused)
 {
-	RTE_SET_USED(dev);
-	RTE_SET_USED(info_sz);
+	struct dpaa2_dpdmai_dev *dpdmai_dev = dev->data->dev_private;
 
 	dev_info->dev_capa = RTE_DMA_CAPA_MEM_TO_MEM |
 			     RTE_DMA_CAPA_MEM_TO_DEV |
@@ -1120,7 +1088,7 @@ dpaa2_qdma_info_get(const struct rte_dma_dev *dev,
 			     RTE_DMA_CAPA_DEV_TO_MEM |
 			     RTE_DMA_CAPA_SILENT |
 			     RTE_DMA_CAPA_OPS_COPY;
-	dev_info->max_vchans = DPAA2_QDMA_MAX_VHANS;
+	dev_info->max_vchans = dpdmai_dev->num_queues;
 	dev_info->max_desc = DPAA2_QDMA_MAX_DESC;
 	dev_info->min_desc = DPAA2_QDMA_MIN_DESC;
 
@@ -1129,12 +1097,13 @@
 
 static int
 dpaa2_qdma_configure(struct rte_dma_dev *dev,
-		const struct rte_dma_conf *dev_conf,
-		uint32_t conf_sz)
+	const struct rte_dma_conf *dev_conf,
+	uint32_t conf_sz)
 {
 	char name[32]; /* RTE_MEMZONE_NAMESIZE = 32 */
 	struct dpaa2_dpdmai_dev *dpdmai_dev = dev->data->dev_private;
 	struct qdma_device *qdma_dev = dpdmai_dev->qdma_dev;
+	uint16_t i;
 
 	DPAA2_QDMA_FUNC_TRACE();
 
@@ -1142,9 +1111,9 @@ dpaa2_qdma_configure(struct rte_dma_dev *dev,
 
 	/* In case QDMA device is not in stopped state, return -EBUSY */
 	if (qdma_dev->state == 1) {
-		DPAA2_QDMA_ERR(
-			"Device is in running state. Stop before config.");
-		return -1;
+		DPAA2_QDMA_ERR("%s Not stopped, configure failed.",
+			dev->data->dev_name);
+		return -EBUSY;
 	}
 
 	/* Allocate Virtual Queues */
@@ -1156,6 +1125,9 @@ dpaa2_qdma_configure(struct rte_dma_dev *dev,
 		DPAA2_QDMA_ERR("qdma_virtual_queues allocation failed");
 		return -ENOMEM;
 	}
+	for (i = 0; i < dev_conf->nb_vchans; i++)
+		qdma_dev->vqs[i].vq_id = i;
+
 	qdma_dev->num_vqs = dev_conf->nb_vchans;
 
 	return 0;
@@ -1257,13 +1229,12 @@ dpaa2_qdma_vchan_rbp_set(struct qdma_virt_queue *vq,
 
 static int
 dpaa2_qdma_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan,
-		const struct rte_dma_vchan_conf *conf,
-		uint32_t conf_sz)
+	const struct rte_dma_vchan_conf *conf,
+	uint32_t conf_sz)
 {
 	struct dpaa2_dpdmai_dev *dpdmai_dev = dev->data->dev_private;
 	struct qdma_device *qdma_dev = dpdmai_dev->qdma_dev;
 	uint32_t pool_size;
-	char ring_name[32];
 	char pool_name[64];
 	int fd_long_format = 1;
 	int sg_enable = 0, ret;
@@ -1301,20 +1272,6 @@ dpaa2_qdma_vchan_setup(struct rte_dma_dev *dev, uint16_t vchan,
 		pool_size = QDMA_FLE_SINGLE_POOL_SIZE;
 	}
 
-	if (qdma_dev->num_vqs == 1)
-		qdma_dev->vqs[vchan].exclusive_hw_queue = 1;
-	else {
-		/* Allocate a Ring for Virtual Queue in VQ mode */
-		snprintf(ring_name, sizeof(ring_name), "status ring %d %d",
-			dev->data->dev_id, vchan);
-		qdma_dev->vqs[vchan].status_ring = rte_ring_create(ring_name,
-			conf->nb_desc, rte_socket_id(), 0);
-		if (!qdma_dev->vqs[vchan].status_ring) {
-			DPAA2_QDMA_ERR("Status ring creation failed for vq");
-			return rte_errno;
-		}
-	}
-
 	snprintf(pool_name, sizeof(pool_name), "qdma_fle_pool_dev%d_qid%d",
 		dpdmai_dev->dpdmai_id, vchan);
 	qdma_dev->vqs[vchan].fle_pool = rte_mempool_create(pool_name,
@@ -1410,8 +1367,8 @@ dpaa2_qdma_reset(struct rte_dma_dev *dev)
 
 	/* In case QDMA device is not in stopped state, return -EBUSY */
 	if (qdma_dev->state == 1) {
-		DPAA2_QDMA_ERR(
-			"Device is in running state. Stop before reset.");
+		DPAA2_QDMA_ERR("%s Not stopped, reset failed.",
+			dev->data->dev_name);
 		return -EBUSY;
 	}
 
@@ -1424,10 +1381,6 @@ dpaa2_qdma_reset(struct rte_dma_dev *dev)
 		}
 	}
 
-	/* Reset and free virtual queues */
-	for (i = 0; i < qdma_dev->num_vqs; i++) {
-		rte_ring_free(qdma_dev->vqs[i].status_ring);
-	}
 	rte_free(qdma_dev->vqs);
 	qdma_dev->vqs = NULL;
 
@@ -1504,29 +1457,35 @@ static int
 dpaa2_dpdmai_dev_uninit(struct rte_dma_dev *dev)
 {
 	struct dpaa2_dpdmai_dev *dpdmai_dev = dev->data->dev_private;
-	int ret;
+	struct dpaa2_queue *rxq;
+	int ret, i;
 
 	DPAA2_QDMA_FUNC_TRACE();
 
 	ret = dpdmai_disable(&dpdmai_dev->dpdmai, CMD_PRI_LOW,
-			dpdmai_dev->token);
-	if (ret)
-		DPAA2_QDMA_ERR("dmdmai disable failed");
+			dpdmai_dev->token);
+	if (ret) {
+		DPAA2_QDMA_ERR("dpdmai(%d) disable failed",
+			dpdmai_dev->dpdmai_id);
+	}
 
 	/* Set up the DQRR storage for Rx */
-	struct dpaa2_queue *rxq = &(dpdmai_dev->rx_queue[0]);
-
-	if (rxq->q_storage) {
-		dpaa2_free_dq_storage(rxq->q_storage);
-		rte_free(rxq->q_storage);
+	for (i = 0; i < dpdmai_dev->num_queues; i++) {
+		rxq = &dpdmai_dev->rx_queue[i];
+		if (rxq->q_storage) {
+			dpaa2_free_dq_storage(rxq->q_storage);
+			rte_free(rxq->q_storage);
+		}
 	}
 
 	/* Close the device at underlying layer*/
 	ret = dpdmai_close(&dpdmai_dev->dpdmai, CMD_PRI_LOW, dpdmai_dev->token);
-	if (ret)
-		DPAA2_QDMA_ERR("Failure closing dpdmai device");
+	if (ret) {
+		DPAA2_QDMA_ERR("dpdmai(%d) close failed",
+			dpdmai_dev->dpdmai_id);
+	}
 
-	return 0;
+	return ret;
 }
 
 static int
@@ -1538,80 +1497,87 @@ dpaa2_dpdmai_dev_init(struct rte_dma_dev *dev, int dpdmai_id)
 	struct dpdmai_rx_queue_attr rx_attr;
 	struct dpdmai_tx_queue_attr tx_attr;
 	struct dpaa2_queue *rxq;
-	int ret;
+	int ret, i;
 
 	DPAA2_QDMA_FUNC_TRACE();
 
 	/* Open DPDMAI device */
 	dpdmai_dev->dpdmai_id = dpdmai_id;
 	dpdmai_dev->dpdmai.regs = dpaa2_get_mcp_ptr(MC_PORTAL_INDEX);
-	dpdmai_dev->qdma_dev = rte_malloc(NULL, sizeof(struct qdma_device),
-			RTE_CACHE_LINE_SIZE);
+	dpdmai_dev->qdma_dev = rte_malloc(NULL,
+		sizeof(struct qdma_device), RTE_CACHE_LINE_SIZE);
 	ret = dpdmai_open(&dpdmai_dev->dpdmai, CMD_PRI_LOW,
-			dpdmai_dev->dpdmai_id, &dpdmai_dev->token);
+		dpdmai_dev->dpdmai_id, &dpdmai_dev->token);
 	if (ret) {
-		DPAA2_QDMA_ERR("dpdmai_open() failed with err: %d", ret);
+		DPAA2_QDMA_ERR("%s: dma(%d) open failed(%d)",
+			__func__, dpdmai_dev->dpdmai_id, ret);
 		return ret;
 	}
 
 	/* Get DPDMAI attributes */
 	ret = dpdmai_get_attributes(&dpdmai_dev->dpdmai, CMD_PRI_LOW,
-			dpdmai_dev->token, &attr);
+		dpdmai_dev->token, &attr);
 	if (ret) {
-		DPAA2_QDMA_ERR("dpdmai get attributes failed with err: %d",
-			ret);
+		DPAA2_QDMA_ERR("%s: dma(%d) get attributes failed(%d)",
+			__func__, dpdmai_dev->dpdmai_id, ret);
 		goto init_err;
 	}
 	dpdmai_dev->num_queues = attr.num_of_queues;
 
-	/* Set up Rx Queue */
-	memset(&rx_queue_cfg, 0, sizeof(struct dpdmai_rx_queue_cfg));
-	ret = dpdmai_set_rx_queue(&dpdmai_dev->dpdmai,
-			CMD_PRI_LOW,
-			dpdmai_dev->token,
-			0, 0, &rx_queue_cfg);
-	if (ret) {
-		DPAA2_QDMA_ERR("Setting Rx queue failed with err: %d",
-			ret);
-		goto init_err;
-	}
+	/* Set up Rx Queues */
+	for (i = 0; i < dpdmai_dev->num_queues; i++) {
+		memset(&rx_queue_cfg, 0, sizeof(struct dpdmai_rx_queue_cfg));
+		ret = dpdmai_set_rx_queue(&dpdmai_dev->dpdmai,
+				CMD_PRI_LOW,
+				dpdmai_dev->token,
+				i, 0, &rx_queue_cfg);
+		if (ret) {
+			DPAA2_QDMA_ERR("%s Q%d set failed(%d)",
+				dev->data->dev_name, i, ret);
+			goto init_err;
+		}
 
-	/* Allocate DQ storage for the DPDMAI Rx queues */
-	rxq = &(dpdmai_dev->rx_queue[0]);
-	rxq->q_storage = rte_malloc("dq_storage",
-			sizeof(struct queue_storage_info_t),
-			RTE_CACHE_LINE_SIZE);
-	if (!rxq->q_storage) {
-		DPAA2_QDMA_ERR("q_storage allocation failed");
-		ret = -ENOMEM;
-		goto init_err;
-	}
+		/* Allocate DQ storage for the DPDMAI Rx queues */
+		rxq = &dpdmai_dev->rx_queue[i];
+		rxq->q_storage = rte_malloc("dq_storage",
+			sizeof(struct queue_storage_info_t),
+			RTE_CACHE_LINE_SIZE);
+		if (!rxq->q_storage) {
+			DPAA2_QDMA_ERR("%s DQ info(Q%d) alloc failed",
+				dev->data->dev_name, i);
+			ret = -ENOMEM;
+			goto init_err;
+		}
 
-	memset(rxq->q_storage, 0, sizeof(struct queue_storage_info_t));
-	ret = dpaa2_alloc_dq_storage(rxq->q_storage);
-	if (ret) {
-		DPAA2_QDMA_ERR("dpaa2_alloc_dq_storage failed");
-		goto init_err;
+		memset(rxq->q_storage, 0, sizeof(struct queue_storage_info_t));
+		ret = dpaa2_alloc_dq_storage(rxq->q_storage);
+		if (ret) {
+			DPAA2_QDMA_ERR("%s DQ storage(Q%d) alloc failed(%d)",
+				dev->data->dev_name, i, ret);
+			goto init_err;
+		}
 	}
 
-	/* Get Rx and Tx queues FQID */
-	ret = dpdmai_get_rx_queue(&dpdmai_dev->dpdmai, CMD_PRI_LOW,
-			dpdmai_dev->token, 0, 0, &rx_attr);
-	if (ret) {
-		DPAA2_QDMA_ERR("Reading device failed with err: %d",
-			ret);
-		goto init_err;
-	}
-	dpdmai_dev->rx_queue[0].fqid = rx_attr.fqid;
+	/* Get Rx and Tx queues FQID's */
+	for (i = 0; i < dpdmai_dev->num_queues; i++) {
+		ret = dpdmai_get_rx_queue(&dpdmai_dev->dpdmai, CMD_PRI_LOW,
+				dpdmai_dev->token, i, 0, &rx_attr);
+		if (ret) {
+			DPAA2_QDMA_ERR("Get DPDMAI%d-RXQ%d failed(%d)",
+				dpdmai_dev->dpdmai_id, i, ret);
+			goto init_err;
+		}
+		dpdmai_dev->rx_queue[i].fqid = rx_attr.fqid;
 
-	ret = dpdmai_get_tx_queue(&dpdmai_dev->dpdmai, CMD_PRI_LOW,
-			dpdmai_dev->token, 0, 0, &tx_attr);
-	if (ret) {
-		DPAA2_QDMA_ERR("Reading device failed with err: %d",
-			ret);
-		goto init_err;
+		ret = dpdmai_get_tx_queue(&dpdmai_dev->dpdmai, CMD_PRI_LOW,
+				dpdmai_dev->token, i, 0, &tx_attr);
+		if (ret) {
+			DPAA2_QDMA_ERR("Get DPDMAI%d-TXQ%d failed(%d)",
+				dpdmai_dev->dpdmai_id, i, ret);
+			goto init_err;
+		}
+		dpdmai_dev->tx_queue[i].fqid = tx_attr.fqid;
 	}
-	dpdmai_dev->tx_queue[0].fqid = tx_attr.fqid;
 
 	/* Enable the device */
 	ret = dpdmai_enable(&dpdmai_dev->dpdmai, CMD_PRI_LOW,
diff --git a/drivers/dma/dpaa2/dpaa2_qdma.h b/drivers/dma/dpaa2/dpaa2_qdma.h
index 811906fcbc..786dcb9308 100644
--- a/drivers/dma/dpaa2/dpaa2_qdma.h
+++ b/drivers/dma/dpaa2/dpaa2_qdma.h
@@ -18,7 +18,7 @@
 
 #define DPAA2_QDMA_MAX_SG_NB 64
 
-#define DPAA2_DPDMAI_MAX_QUEUES	1
+#define DPAA2_DPDMAI_MAX_QUEUES	16
 
 /** FLE single job pool size: job pointer(uint64_t) +
  * 3 Frame list + 2 source/destination descriptor.
@@ -245,8 +245,6 @@ typedef int (qdma_enqueue_multijob_t)(
 
 /** Represents a QDMA virtual queue */
 struct qdma_virt_queue {
-	/** Status ring of the virtual queue */
-	struct rte_ring *status_ring;
 	/** Associated hw queue */
 	struct dpaa2_dpdmai_dev *dpdmai_dev;
 	/** FLE pool for the queue */
@@ -255,8 +253,6 @@ struct qdma_virt_queue {
 	struct dpaa2_qdma_rbp rbp;
 	/** States if this vq is in use or not */
 	uint8_t in_use;
-	/** States if this vq has exclusively associated hw queue */
-	uint8_t exclusive_hw_queue;
 	/** Number of descriptor for the virtual DMA channel */
 	uint16_t nb_desc;
 	/* Total number of enqueues on this VQ */
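
For application-level context (not part of this patch; the helper name
is hypothetical), with max_vchans now reporting the DPDMAI HW queue
count, an application can open one vchan per HW queue through the
standard rte_dma API:

#include <rte_dmadev.h>

static int
setup_all_hw_queues(int16_t dev_id)
{
	struct rte_dma_info info;
	struct rte_dma_conf dev_conf = { 0 };
	struct rte_dma_vchan_conf vconf = { 0 };
	uint16_t i;
	int ret;

	ret = rte_dma_info_get(dev_id, &info);
	if (ret)
		return ret;

	/* One vchan per HW queue; vchan i is served by DPDMAI queue i. */
	dev_conf.nb_vchans = info.max_vchans;
	ret = rte_dma_configure(dev_id, &dev_conf);
	if (ret)
		return ret;

	vconf.direction = RTE_DMA_DIR_MEM_TO_MEM;
	vconf.nb_desc = info.max_desc;
	for (i = 0; i < dev_conf.nb_vchans; i++) {
		ret = rte_dma_vchan_setup(dev_id, i, &vconf);
		if (ret)
			return ret;
	}

	return rte_dma_start(dev_id);
}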