From patchwork Thu Sep 9 11:14:56 2021
X-Patchwork-Submitter: Gagandeep Singh
X-Patchwork-Id: 98420
X-Patchwork-Delegate: thomas@monjalon.net
From: Gagandeep Singh
To: dev@dpdk.org
Cc: nipun.gupta@nxp.com, thomas@monjalon.net, Gagandeep Singh
Date: Thu, 9 Sep 2021 16:44:56 +0530
Message-Id: <20210909111500.3901706-3-g.singh@nxp.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210909111500.3901706-1-g.singh@nxp.com>
References: <20210909111500.3901706-1-g.singh@nxp.com>
MIME-Version: 1.0
Subject: [dpdk-dev] [PATCH 2/6] dma/dpaa: add device probe and remove functionality

This patch adds device initialisation functionality.

Signed-off-by: Gagandeep Singh
---
 drivers/dma/dpaa/dpaa_qdma.c | 440 ++++++++++++++++++++++++++++++++++-
 drivers/dma/dpaa/dpaa_qdma.h | 247 ++++++++++++++++++++
 2 files changed, 685 insertions(+), 2 deletions(-)
 create mode 100644 drivers/dma/dpaa/dpaa_qdma.h

diff --git a/drivers/dma/dpaa/dpaa_qdma.c b/drivers/dma/dpaa/dpaa_qdma.c
index 2ef3ee0c35..aea09edc9e 100644
--- a/drivers/dma/dpaa/dpaa_qdma.c
+++ b/drivers/dma/dpaa/dpaa_qdma.c
@@ -3,17 +3,453 @@
  */
 
 #include
+#include
+
+#include "dpaa_qdma.h"
+
+static inline int ilog2(int x)
+{
+	int log = 0;
+
+	x >>= 1;
+
+	while (x) {
+		log++;
+		x >>= 1;
+	}
+	return log;
+}
+
+static u32 qdma_readl(void *addr)
+{
+	return QDMA_IN(addr);
+}
+
+static void qdma_writel(u32 val, void *addr)
+{
+	QDMA_OUT(addr, val);
+}
+
+static void *dma_pool_alloc(int size, int aligned, dma_addr_t *phy_addr)
+{
+	void *virt_addr;
+
+	virt_addr = rte_malloc("dma pool alloc", size, aligned);
+	if (!virt_addr)
+		return NULL;
+
+	*phy_addr = rte_mem_virt2iova(virt_addr);
+
+	return virt_addr;
+}
+
+static void dma_pool_free(void *addr)
+{
+	rte_free(addr);
+}
+
+static void fsl_qdma_free_chan_resources(struct fsl_qdma_chan *fsl_chan)
+{
+	struct fsl_qdma_queue *fsl_queue = fsl_chan->queue;
+	struct fsl_qdma_engine *fsl_qdma = fsl_chan->qdma;
+	struct fsl_qdma_comp *comp_temp, *_comp_temp;
+	int id;
+
+	if (--fsl_queue->count)
+		goto finally;
+
+	id = (fsl_qdma->block_base - fsl_queue->block_base) /
+		fsl_qdma->block_offset;
+
+	while (rte_atomic32_read(&wait_task[id]) == 1)
+		rte_delay_us(QDMA_DELAY);
+
+	list_for_each_entry_safe(comp_temp, _comp_temp,
+				 &fsl_queue->comp_used, list) {
+		list_del(&comp_temp->list);
+		dma_pool_free(comp_temp->virt_addr);
+		dma_pool_free(comp_temp->desc_virt_addr);
+		rte_free(comp_temp);
+	}
+
+	list_for_each_entry_safe(comp_temp, _comp_temp,
+				 &fsl_queue->comp_free, list) {
+		list_del(&comp_temp->list);
+		dma_pool_free(comp_temp->virt_addr);
+		dma_pool_free(comp_temp->desc_virt_addr);
+		rte_free(comp_temp);
+	}
+
+finally:
+	fsl_qdma->desc_allocated--;
+}
+static struct fsl_qdma_queue
+*fsl_qdma_alloc_queue_resources(struct fsl_qdma_engine *fsl_qdma)
+{
+	struct fsl_qdma_queue *queue_head, *queue_temp;
+	int len, i, j;
+	int queue_num;
+	int blocks;
+	unsigned int queue_size[FSL_QDMA_QUEUE_MAX];
+
+	queue_num = fsl_qdma->n_queues;
+	blocks = fsl_qdma->num_blocks;
+
+	len = sizeof(*queue_head) * queue_num * blocks;
+	queue_head = rte_zmalloc("qdma: queue head", len, 0);
+	if (!queue_head)
+		return NULL;
+
+	for (i = 0; i < FSL_QDMA_QUEUE_MAX; i++)
+		queue_size[i] = QDMA_QUEUE_SIZE;
+
+	for (j = 0; j < blocks; j++) {
+		for (i = 0; i < queue_num; i++) {
+			if (queue_size[i] > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
+			    queue_size[i] < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
+				return NULL;
+			}
+			queue_temp = queue_head + i + (j * queue_num);
+
+			queue_temp->cq =
+				dma_pool_alloc(sizeof(struct fsl_qdma_format) *
+				queue_size[i],
+				sizeof(struct fsl_qdma_format) *
+				queue_size[i], &queue_temp->bus_addr);
+
+			if (!queue_temp->cq)
+				return NULL;
+
+			memset(queue_temp->cq, 0x0, queue_size[i] *
+			       sizeof(struct fsl_qdma_format));
+
+			queue_temp->block_base = fsl_qdma->block_base +
+				FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
+			queue_temp->n_cq = queue_size[i];
+			queue_temp->id = i;
+			queue_temp->count = 0;
+			queue_temp->virt_head = queue_temp->cq;
+
+		}
+	}
+	return queue_head;
+}
+
+static struct fsl_qdma_queue *fsl_qdma_prep_status_queue(void)
+{
+	struct fsl_qdma_queue *status_head;
+	unsigned int status_size;
+
+	status_size = QDMA_STATUS_SIZE;
+	if (status_size > FSL_QDMA_CIRCULAR_DESC_SIZE_MAX ||
+	    status_size < FSL_QDMA_CIRCULAR_DESC_SIZE_MIN) {
+		return NULL;
+	}
+
+	status_head = rte_zmalloc("qdma: status head", sizeof(*status_head), 0);
+	if (!status_head)
+		return NULL;
+
+	/*
+	 * Buffer for queue command
+	 */
+	status_head->cq = dma_pool_alloc(sizeof(struct fsl_qdma_format) *
+					 status_size,
+					 sizeof(struct fsl_qdma_format) *
+					 status_size,
+					 &status_head->bus_addr);
+
+	if (!status_head->cq)
+		return NULL;
+
+	memset(status_head->cq, 0x0, status_size *
+	       sizeof(struct fsl_qdma_format));
+
+	status_head->n_cq = status_size;
+	status_head->virt_head = status_head->cq;
+
+	return status_head;
+}
+
+static int fsl_qdma_halt(struct fsl_qdma_engine *fsl_qdma)
+{
+	void *ctrl = fsl_qdma->ctrl_base;
+	void *block;
+	int i, count = RETRIES;
+	unsigned int j;
+	u32 reg;
+
+	/* Disable the command queue and wait for idle state. */
+	reg = qdma_readl(ctrl + FSL_QDMA_DMR);
+	reg |= FSL_QDMA_DMR_DQD;
+	qdma_writel(reg, ctrl + FSL_QDMA_DMR);
+	for (j = 0; j < fsl_qdma->num_blocks; j++) {
+		block = fsl_qdma->block_base +
+			FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
+		for (i = 0; i < FSL_QDMA_QUEUE_NUM_MAX; i++)
+			qdma_writel(0, block + FSL_QDMA_BCQMR(i));
+	}
+	while (true) {
+		reg = qdma_readl(ctrl + FSL_QDMA_DSR);
+		if (!(reg & FSL_QDMA_DSR_DB))
+			break;
+		if (count-- < 0)
+			return -EBUSY;
+		rte_delay_us(100);
+	}
+
+	for (j = 0; j < fsl_qdma->num_blocks; j++) {
+		block = fsl_qdma->block_base +
+			FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
+
+		/* Disable status queue. */
+		qdma_writel(0, block + FSL_QDMA_BSQMR);
+
+		/*
+		 * clear the command queue interrupt detect register for
+		 * all queues.
+		 */
+		qdma_writel(0xffffffff, block + FSL_QDMA_BCQIDR(0));
+	}
+
+	return 0;
+}
+
+static int fsl_qdma_reg_init(struct fsl_qdma_engine *fsl_qdma)
+{
+	struct fsl_qdma_queue *fsl_queue = fsl_qdma->queue;
+	struct fsl_qdma_queue *temp;
+	void *ctrl = fsl_qdma->ctrl_base;
+	void *block;
+	u32 i, j;
+	u32 reg;
+	int ret, val;
+
+	/* Try to halt the qDMA engine first. */
+	ret = fsl_qdma_halt(fsl_qdma);
+	if (ret) {
+		return ret;
+	}
+
+	for (j = 0; j < fsl_qdma->num_blocks; j++) {
+		block = fsl_qdma->block_base +
+			FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma, j);
+		for (i = 0; i < fsl_qdma->n_queues; i++) {
+			temp = fsl_queue + i + (j * fsl_qdma->n_queues);
+			/*
+			 * Initialize Command Queue registers to
+			 * point to the first
+			 * command descriptor in memory.
+			 * Dequeue Pointer Address Registers
+			 * Enqueue Pointer Address Registers
+			 */
+
+			qdma_writel(lower_32_bits(temp->bus_addr),
+				    block + FSL_QDMA_BCQDPA_SADDR(i));
+			qdma_writel(upper_32_bits(temp->bus_addr),
+				    block + FSL_QDMA_BCQEDPA_SADDR(i));
+			qdma_writel(lower_32_bits(temp->bus_addr),
+				    block + FSL_QDMA_BCQEPA_SADDR(i));
+			qdma_writel(upper_32_bits(temp->bus_addr),
+				    block + FSL_QDMA_BCQEEPA_SADDR(i));
+
+			/* Initialize the queue mode. */
+			reg = FSL_QDMA_BCQMR_EN;
+			reg |= FSL_QDMA_BCQMR_CD_THLD(ilog2(temp->n_cq) - 4);
+			reg |= FSL_QDMA_BCQMR_CQ_SIZE(ilog2(temp->n_cq) - 6);
+			qdma_writel(reg, block + FSL_QDMA_BCQMR(i));
+		}
+
+		/*
+		 * Workaround for erratum: ERR010812.
+		 * We must enable XOFF to avoid enqueue rejection.
+		 * Setting SQCCMR ENTER_WM to 0x20.
+		 */
+
+		qdma_writel(FSL_QDMA_SQCCMR_ENTER_WM,
+			    block + FSL_QDMA_SQCCMR);
+
+		/*
+		 * Initialize status queue registers to point to the first
+		 * command descriptor in memory.
+		 * Dequeue Pointer Address Registers
+		 * Enqueue Pointer Address Registers
+		 */
+
+		qdma_writel(
+			    upper_32_bits(fsl_qdma->status[j]->bus_addr),
+			    block + FSL_QDMA_SQEEPAR);
+		qdma_writel(
+			    lower_32_bits(fsl_qdma->status[j]->bus_addr),
+			    block + FSL_QDMA_SQEPAR);
+		qdma_writel(
+			    upper_32_bits(fsl_qdma->status[j]->bus_addr),
+			    block + FSL_QDMA_SQEDPAR);
+		qdma_writel(
+			    lower_32_bits(fsl_qdma->status[j]->bus_addr),
+			    block + FSL_QDMA_SQDPAR);
+		/* Disable status queue interrupt. */
+
+		qdma_writel(0x0, block + FSL_QDMA_BCQIER(0));
+		qdma_writel(0x0, block + FSL_QDMA_BSQICR);
+		qdma_writel(0x0, block + FSL_QDMA_CQIER);
+
+		/* Initialize the status queue mode. */
+		reg = FSL_QDMA_BSQMR_EN;
+		val = ilog2(fsl_qdma->status[j]->n_cq) - 6;
+		reg |= FSL_QDMA_BSQMR_CQ_SIZE(val);
+		qdma_writel(reg, block + FSL_QDMA_BSQMR);
+	}
+
+	reg = qdma_readl(ctrl + FSL_QDMA_DMR);
+	reg &= ~FSL_QDMA_DMR_DQD;
+	qdma_writel(reg, ctrl + FSL_QDMA_DMR);
+
+	return 0;
+}
+
+static void
+dma_release(void *fsl_chan)
+{
+	((struct fsl_qdma_chan *)fsl_chan)->free = true;
+	fsl_qdma_free_chan_resources((struct fsl_qdma_chan *)fsl_chan);
+}
+
+static int
+dpaa_qdma_init(struct rte_dmadev *dmadev)
+{
+	struct fsl_qdma_engine *fsl_qdma = dmadev->dev_private;
+	struct fsl_qdma_chan *fsl_chan;
+	uint64_t phys_addr;
+	unsigned int len;
+	int ccsr_qdma_fd;
+	int regs_size;
+	int ret;
+	u32 i;
+
+	fsl_qdma->desc_allocated = 0;
+	fsl_qdma->n_chans = VIRT_CHANNELS;
+	fsl_qdma->n_queues = QDMA_QUEUES;
+	fsl_qdma->num_blocks = QDMA_BLOCKS;
+	fsl_qdma->block_offset = QDMA_BLOCK_OFFSET;
+
+	len = sizeof(*fsl_chan) * fsl_qdma->n_chans;
+	fsl_qdma->chans = rte_zmalloc("qdma: fsl chans", len, 0);
+	if (!fsl_qdma->chans)
+		return -1;
+
+	len = sizeof(struct fsl_qdma_queue *) * fsl_qdma->num_blocks;
+	fsl_qdma->status = rte_zmalloc("qdma: fsl status", len, 0);
+	if (!fsl_qdma->status) {
+		rte_free(fsl_qdma->chans);
+		return -1;
+	}
+
+	for (i = 0; i < fsl_qdma->num_blocks; i++) {
+		rte_atomic32_init(&wait_task[i]);
+		fsl_qdma->status[i] = fsl_qdma_prep_status_queue();
+		if (!fsl_qdma->status[i])
+			goto err;
+	}
+
+	ccsr_qdma_fd = open("/dev/mem", O_RDWR);
+	if (unlikely(ccsr_qdma_fd < 0)) {
+		goto err;
+	}
+
+	regs_size = fsl_qdma->block_offset * (fsl_qdma->num_blocks + 2);
+	phys_addr = QDMA_CCSR_BASE;
+	fsl_qdma->ctrl_base = mmap(NULL, regs_size, PROT_READ |
+				   PROT_WRITE, MAP_SHARED,
+				   ccsr_qdma_fd, phys_addr);
+
+	close(ccsr_qdma_fd);
+	if (fsl_qdma->ctrl_base == MAP_FAILED) {
+		goto err;
+	}
+
+	fsl_qdma->status_base = fsl_qdma->ctrl_base + QDMA_BLOCK_OFFSET;
+	fsl_qdma->block_base = fsl_qdma->status_base + QDMA_BLOCK_OFFSET;
+
+	fsl_qdma->queue = fsl_qdma_alloc_queue_resources(fsl_qdma);
+	if (!fsl_qdma->queue) {
+		munmap(fsl_qdma->ctrl_base, regs_size);
+		goto err;
+	}
+
+	for (i = 0; i < fsl_qdma->n_chans; i++) {
+		struct fsl_qdma_chan *fsl_chan = &fsl_qdma->chans[i];
+
+		fsl_chan->qdma = fsl_qdma;
+		fsl_chan->queue = fsl_qdma->queue + i % (fsl_qdma->n_queues *
+							fsl_qdma->num_blocks);
+		fsl_chan->free = true;
+	}
+
+	ret = fsl_qdma_reg_init(fsl_qdma);
+	if (ret) {
+		munmap(fsl_qdma->ctrl_base, regs_size);
+		goto err;
+	}
+
+	return 0;
+
+err:
+	rte_free(fsl_qdma->chans);
+	rte_free(fsl_qdma->status);
+
+	return -1;
+}
 
 static int
 dpaa_qdma_probe(__rte_unused struct rte_dpaa_driver *dpaa_drv,
-		__rte_unused struct rte_dpaa_device *dpaa_dev)
+		struct rte_dpaa_device *dpaa_dev)
 {
+	struct rte_dmadev *dmadev;
+	int ret;
+
+	dmadev = rte_dmadev_pmd_allocate(dpaa_dev->device.name);
+	if (!dmadev) {
+		return -EINVAL;
+	}
+
+	dmadev->dev_private = rte_zmalloc("struct fsl_qdma_engine *",
+					  sizeof(struct fsl_qdma_engine),
+					  RTE_CACHE_LINE_SIZE);
+	if (!dmadev->dev_private) {
+		(void)rte_dmadev_pmd_release(dmadev);
+		return -ENOMEM;
+	}
+
+	dpaa_dev->dmadev = dmadev;
+
+	/* Invoke PMD device initialization function */
+	ret = dpaa_qdma_init(dmadev);
+	if (ret) {
+		rte_free(dmadev->dev_private);
+		(void)rte_dmadev_pmd_release(dmadev);
+		return ret;
+	}
+
 	return 0;
 }
 
 static int
-dpaa_qdma_remove(__rte_unused struct rte_dpaa_device *dpaa_dev)
+dpaa_qdma_remove(struct rte_dpaa_device *dpaa_dev)
 {
+	struct rte_dmadev *dmadev = dpaa_dev->dmadev;
+	struct fsl_qdma_engine *fsl_qdma = dmadev->dev_private;
+	int i = 0, max = QDMA_QUEUES * QDMA_BLOCKS;
+
+	for (i = 0; i < max; i++) {
+		struct fsl_qdma_chan *fsl_chan = &fsl_qdma->chans[i];
+
+		if (fsl_chan->free == false)
+			dma_release(fsl_chan);
+	}
+
+	rte_free(fsl_qdma->status);
+	rte_free(fsl_qdma->chans);
+
 	return 0;
 }
diff --git a/drivers/dma/dpaa/dpaa_qdma.h b/drivers/dma/dpaa/dpaa_qdma.h
new file mode 100644
index 0000000000..cc0d1f114e
--- /dev/null
+++ b/drivers/dma/dpaa/dpaa_qdma.h
@@ -0,0 +1,247 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2021 NXP
+ */
+
+#ifndef _DPAA_QDMA_H_
+#define _DPAA_QDMA_H_
+
+#define CORE_NUMBER 4
+#define RETRIES 5
+
+#define FSL_QDMA_DMR 0x0
+#define FSL_QDMA_DSR 0x4
+#define FSL_QDMA_DEIER 0xe00
+#define FSL_QDMA_DEDR 0xe04
+#define FSL_QDMA_DECFDW0R 0xe10
+#define FSL_QDMA_DECFDW1R 0xe14
+#define FSL_QDMA_DECFDW2R 0xe18
+#define FSL_QDMA_DECFDW3R 0xe1c
+#define FSL_QDMA_DECFQIDR 0xe30
+#define FSL_QDMA_DECBR 0xe34
+
+#define FSL_QDMA_BCQMR(x) (0xc0 + 0x100 * (x))
+#define FSL_QDMA_BCQSR(x) (0xc4 + 0x100 * (x))
+#define FSL_QDMA_BCQEDPA_SADDR(x) (0xc8 + 0x100 * (x))
+#define FSL_QDMA_BCQDPA_SADDR(x) (0xcc + 0x100 * (x))
+#define FSL_QDMA_BCQEEPA_SADDR(x) (0xd0 + 0x100 * (x))
+#define FSL_QDMA_BCQEPA_SADDR(x) (0xd4 + 0x100 * (x))
+#define FSL_QDMA_BCQIER(x) (0xe0 + 0x100 * (x))
+#define FSL_QDMA_BCQIDR(x) (0xe4 + 0x100 * (x))
+
+#define FSL_QDMA_SQEDPAR 0x808
+#define FSL_QDMA_SQDPAR 0x80c
+#define FSL_QDMA_SQEEPAR 0x810
+#define FSL_QDMA_SQEPAR 0x814
+#define FSL_QDMA_BSQMR 0x800
+#define FSL_QDMA_BSQSR 0x804
+#define FSL_QDMA_BSQICR 0x828
+#define FSL_QDMA_CQMR 0xa00
+#define FSL_QDMA_CQDSCR1 0xa08
+#define FSL_QDMA_CQDSCR2 0xa0c
+#define FSL_QDMA_CQIER 0xa10
+#define FSL_QDMA_CQEDR 0xa14
+#define FSL_QDMA_SQCCMR 0xa20
+
+#define FSL_QDMA_SQICR_ICEN
+
+#define FSL_QDMA_CQIDR_CQT 0xff000000
+#define FSL_QDMA_CQIDR_SQPE 0x800000
+#define FSL_QDMA_CQIDR_SQT 0x8000
+
+#define FSL_QDMA_BCQIER_CQTIE 0x8000
+#define FSL_QDMA_BCQIER_CQPEIE 0x800000
+#define FSL_QDMA_BSQICR_ICEN 0x80000000
+#define FSL_QDMA_BSQICR_ICST(x) ((x) << 16)
+#define FSL_QDMA_CQIER_MEIE 0x80000000
+#define FSL_QDMA_CQIER_TEIE 0x1
+#define FSL_QDMA_SQCCMR_ENTER_WM 0x200000
+
+#define FSL_QDMA_QUEUE_MAX 8
+
+#define FSL_QDMA_BCQMR_EN 0x80000000
+#define FSL_QDMA_BCQMR_EI 0x40000000
+#define FSL_QDMA_BCQMR_EI_BE 0x40
+#define FSL_QDMA_BCQMR_CD_THLD(x) ((x) << 20)
+#define FSL_QDMA_BCQMR_CQ_SIZE(x) ((x) << 16)
+
+#define FSL_QDMA_BCQSR_QF 0x10000
+#define FSL_QDMA_BCQSR_XOFF 0x1
+#define FSL_QDMA_BCQSR_QF_XOFF_BE 0x1000100
+
+#define FSL_QDMA_BSQMR_EN 0x80000000
+#define FSL_QDMA_BSQMR_DI 0x40000000
+#define FSL_QDMA_BSQMR_DI_BE 0x40
+#define FSL_QDMA_BSQMR_CQ_SIZE(x) ((x) << 16)
+
+#define FSL_QDMA_BSQSR_QE 0x20000
+#define FSL_QDMA_BSQSR_QE_BE 0x200
+#define FSL_QDMA_BSQSR_QF 0x10000
+
+#define FSL_QDMA_DMR_DQD 0x40000000
+#define FSL_QDMA_DSR_DB 0x80000000
+
+#define FSL_QDMA_COMMAND_BUFFER_SIZE 64
+#define FSL_QDMA_DESCRIPTOR_BUFFER_SIZE 32
+#define FSL_QDMA_CIRCULAR_DESC_SIZE_MIN 64
+#define FSL_QDMA_CIRCULAR_DESC_SIZE_MAX 16384
+#define FSL_QDMA_QUEUE_NUM_MAX 8
+
+#define FSL_QDMA_CMD_RWTTYPE 0x4
+#define FSL_QDMA_CMD_LWC 0x2
+
+#define FSL_QDMA_CMD_RWTTYPE_OFFSET 28
+#define FSL_QDMA_CMD_NS_OFFSET 27
+#define FSL_QDMA_CMD_DQOS_OFFSET 24
+#define FSL_QDMA_CMD_WTHROTL_OFFSET 20
+#define FSL_QDMA_CMD_DSEN_OFFSET 19
+#define FSL_QDMA_CMD_LWC_OFFSET 16
+
+#define QDMA_CCDF_STATUS 20
+#define QDMA_CCDF_OFFSET 20
+#define QDMA_CCDF_MASK GENMASK(28, 20)
+#define QDMA_CCDF_FOTMAT BIT(29)
+#define QDMA_CCDF_SER BIT(30)
+
+#define QDMA_SG_FIN BIT(30)
+#define QDMA_SG_EXT BIT(31)
+#define QDMA_SG_LEN_MASK GENMASK(29, 0)
+
+#define QDMA_BIG_ENDIAN 1
+#define COMP_TIMEOUT 100000
+#define COMMAND_QUEUE_OVERFLLOW 10
+
+/* qdma engine attribute */
+#define QDMA_QUEUE_SIZE 64
+#define QDMA_STATUS_SIZE 64
+#define QDMA_CCSR_BASE 0x8380000
+#define VIRT_CHANNELS 32
+#define QDMA_BLOCK_OFFSET 0x10000
+#define QDMA_BLOCKS 4
+#define QDMA_QUEUES 8
+#define QDMA_DELAY 1000
+
+#define __arch_getq(a) (*(volatile u64 *)(a))
+#define __arch_putq(v, a) (*(volatile u64 *)(a) = (v))
+#define __arch_getq32(a) (*(volatile u32 *)(a))
+#define __arch_putq32(v, a) (*(volatile u32 *)(a) = (v))
+#define readq32(c) \
+	({ u32 __v = __arch_getq32(c); rte_io_rmb(); __v; })
+#define writeq32(v, c) \
+	({ u32 __v = v; __arch_putq32(__v, c); __v; })
+#define ioread32(_p) readq32(_p)
+#define iowrite32(_v, _p) writeq32(_v, _p)
+
+#define ioread32be(_p) be32_to_cpu(readq32(_p))
+#define iowrite32be(_v, _p) writeq32(be32_to_cpu(_v), _p)
+
+#ifdef QDMA_BIG_ENDIAN
+#define QDMA_IN(addr) ioread32be(addr)
+#define QDMA_OUT(addr, val) iowrite32be(val, addr)
+#define QDMA_IN_BE(addr) ioread32(addr)
+#define QDMA_OUT_BE(addr, val) iowrite32(val, addr)
+#else
+#define QDMA_IN(addr) ioread32(addr)
+#define QDMA_OUT(addr, val) iowrite32(val, addr)
+#define QDMA_IN_BE(addr) ioread32be(addr)
+#define QDMA_OUT_BE(addr, val) iowrite32be(val, addr)
+#endif
+
+#define FSL_QDMA_BLOCK_BASE_OFFSET(fsl_qdma_engine, x) \
+	(((fsl_qdma_engine)->block_offset) * (x))
+
+typedef void (*dma_call_back)(void *params);
+
+/* qDMA Command Descriptor Formats */
+struct fsl_qdma_format {
+	__le32 status; /* ser, status */
+	__le32 cfg; /* format, offset */
+	union {
+		struct {
+			__le32 addr_lo; /* low 32-bits of 40-bit address */
+			u8 addr_hi; /* high 8-bits of 40-bit address */
+			u8 __reserved1[2];
+			u8 cfg8b_w1; /* dd, queue */
+		};
+		__le64 data;
+	};
+};
+
+/* qDMA Source Descriptor Format */
+struct fsl_qdma_sdf {
+	__le32 rev3;
+	__le32 cfg; /* rev4, bit[0-11] - ssd, bit[12-23] sss */
+	__le32 rev5;
+	__le32 cmd;
+};
+
+/* qDMA Destination Descriptor Format */
+struct fsl_qdma_ddf {
+	__le32 rev1;
+	__le32 cfg; /* rev2, bit[0-11] - dsd, bit[12-23] - dss */
+	__le32 rev3;
+	__le32 cmd;
+};
+
+enum dma_status {
+	DMA_COMPLETE,
+	DMA_IN_PROGRESS,
+	DMA_IN_PREPAR,
+	DMA_PAUSED,
+	DMA_ERROR,
+};
+
+struct fsl_qdma_chan {
+	struct fsl_qdma_engine *qdma;
+	struct fsl_qdma_queue *queue;
+	bool free;
+	struct list_head list;
+};
+
+struct fsl_qdma_list {
+	struct list_head dma_list;
+};
+
+struct fsl_qdma_queue {
+	struct fsl_qdma_format *virt_head;
+	struct list_head comp_used;
+	struct list_head comp_free;
+	dma_addr_t bus_addr;
+	u32 n_cq;
+	u32 id;
+	u32 count;
+	struct fsl_qdma_format *cq;
+	void *block_base;
+};
+
+struct fsl_qdma_comp {
+	dma_addr_t bus_addr;
+	dma_addr_t desc_bus_addr;
+	void *virt_addr;
+	int index;
+	void *desc_virt_addr;
+	struct fsl_qdma_chan *qchan;
+	dma_call_back call_back_func;
+	void *params;
+	struct list_head list;
+};
+
+struct fsl_qdma_engine {
+	int desc_allocated;
+	void *ctrl_base;
+	void *status_base;
+	void *block_base;
+	u32 n_chans;
+	u32 n_queues;
+	int error_irq;
+	struct fsl_qdma_queue *queue;
+	struct fsl_qdma_queue **status;
+	struct fsl_qdma_chan *chans;
+	u32 num_blocks;
+	u8 free_block_id;
+	u32 vchan_map[4];
+	int block_offset;
+};
+
+static rte_atomic32_t wait_task[CORE_NUMBER];
+
+#endif /* _DPAA_QDMA_H_ */
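
For reference, the command-queue mode programming in fsl_qdma_reg_init() packs the ring
size into BCQMR as a power-of-two code: with the default QDMA_QUEUE_SIZE of 64
descriptors, ilog2(64) = 6, so the CQ_SIZE field gets code 0 and the CD_THLD field gets
code 2. The following standalone sketch is not part of the patch; it only reuses the
macro values from dpaa_qdma.h above to reproduce that encoding.

/*
 * Standalone sketch (not part of the patch): reproduces the BCQMR
 * encoding done in fsl_qdma_reg_init() for the default 64-entry
 * command queue, using the macro values from dpaa_qdma.h above.
 */
#include <stdio.h>
#include <stdint.h>

#define FSL_QDMA_BCQMR_EN 0x80000000u
#define FSL_QDMA_BCQMR_CD_THLD(x) ((x) << 20)
#define FSL_QDMA_BCQMR_CQ_SIZE(x) ((x) << 16)
#define QDMA_QUEUE_SIZE 64

/* Same integer log2 helper as the driver. */
static inline int ilog2(int x)
{
	int log = 0;

	x >>= 1;
	while (x) {
		log++;
		x >>= 1;
	}
	return log;
}

int main(void)
{
	uint32_t reg = FSL_QDMA_BCQMR_EN;

	/* n_cq = 64: ilog2(64) = 6, so CD_THLD code = 2, CQ_SIZE code = 0. */
	reg |= FSL_QDMA_BCQMR_CD_THLD(ilog2(QDMA_QUEUE_SIZE) - 4);
	reg |= FSL_QDMA_BCQMR_CQ_SIZE(ilog2(QDMA_QUEUE_SIZE) - 6);

	printf("BCQMR = 0x%08x\n", (unsigned int)reg); /* prints 0x80200000 */
	return 0;
}

Built with a plain C compiler, the sketch prints BCQMR = 0x80200000, the value the
driver writes for a 64-entry command queue with the enable bit set.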