get:
Show a patch.

patch:
Update a patch. Partial update: only the fields supplied in the request body are changed.

put:
Update a patch. Full update: the request body replaces the writable fields of the patch.
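
A quick way to exercise these verbs is a small client script. The sketch below is illustrative only, assuming Python with the requests library; the token value is a placeholder, and it assumes write access (PUT/PATCH) requires a project maintainer's API token sent as "Authorization: Token <token>".

    # Hedged sketch: read this patch, then change two of its writable fields.
    import requests

    URL = "https://patches.dpdk.org/api/patches/26393/"
    TOKEN = "0123456789abcdef"  # hypothetical token from a Patchwork user profile

    # GET -- show a patch (no authentication needed for public projects)
    patch = requests.get(URL).json()
    print(patch["name"], patch["state"])

    # PATCH -- partial update: only the fields sent in the body are changed
    resp = requests.patch(
        URL,
        headers={"Authorization": f"Token {TOKEN}"},
        json={"state": "accepted", "archived": False},
    )
    resp.raise_for_status()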

GET /api/patches/26393/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 26393,
    "url": "https://patches.dpdk.org/api/patches/26393/?format=api",
    "web_url": "https://patches.dpdk.org/project/dpdk/patch/1499179471-19145-12-git-send-email-shreyansh.jain@nxp.com/",
    "project": {
        "id": 1,
        "url": "https://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<1499179471-19145-12-git-send-email-shreyansh.jain@nxp.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/1499179471-19145-12-git-send-email-shreyansh.jain@nxp.com",
    "date": "2017-07-04T14:44:02",
    "name": "[dpdk-dev,v2,11/40] bus/dpaa: add QMan driver core routines",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "46d3538c73c48e6a1859f902d6c9253e975ebecd",
    "submitter": {
        "id": 497,
        "url": "https://patches.dpdk.org/api/people/497/?format=api",
        "name": "Shreyansh Jain",
        "email": "shreyansh.jain@nxp.com"
    },
    "delegate": null,
    "mbox": "https://patches.dpdk.org/project/dpdk/patch/1499179471-19145-12-git-send-email-shreyansh.jain@nxp.com/mbox/",
    "series": [],
    "comments": "https://patches.dpdk.org/api/patches/26393/comments/",
    "check": "warning",
    "checks": "https://patches.dpdk.org/api/patches/26393/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@dpdk.org",
        "Delivered-To": "patchwork@dpdk.org",
        "Received": [
            "from [92.243.14.124] (localhost [IPv6:::1])\n\tby dpdk.org (Postfix) with ESMTP id 262B07D05;\n\tTue,  4 Jul 2017 16:35:59 +0200 (CEST)",
            "from NAM02-SN1-obe.outbound.protection.outlook.com\n\t(mail-sn1nam02on0058.outbound.protection.outlook.com [104.47.36.58])\n\tby dpdk.org (Postfix) with ESMTP id 6FA7E7CF1\n\tfor <dev@dpdk.org>; Tue,  4 Jul 2017 16:35:55 +0200 (CEST)",
            "from CY4PR03CA0019.namprd03.prod.outlook.com (10.168.162.29) by\n\tBLUPR03MB472.namprd03.prod.outlook.com (10.141.78.153) with Microsoft\n\tSMTP Server (version=TLS1_2,\n\tcipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id\n\t15.1.1220.11; Tue, 4 Jul 2017 14:35:51 +0000",
            "from BL2FFO11FD032.protection.gbl (2a01:111:f400:7c09::159) by\n\tCY4PR03CA0019.outlook.office365.com (2603:10b6:903:33::29) with\n\tMicrosoft SMTP Server (version=TLS1_2,\n\tcipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.1220.11\n\tvia Frontend Transport; Tue, 4 Jul 2017 14:35:51 +0000",
            "from az84smr01.freescale.net (192.88.158.2) by\n\tBL2FFO11FD032.mail.protection.outlook.com (10.173.160.73) with\n\tMicrosoft SMTP Server (version=TLS1_0,\n\tcipher=TLS_RSA_WITH_AES_256_CBC_SHA) id 15.1.1199.9\n\tvia Frontend Transport; Tue, 4 Jul 2017 14:35:50 +0000",
            "from Tophie.ap.freescale.net ([10.232.14.39])\n\tby az84smr01.freescale.net (8.14.3/8.14.0) with ESMTP id\n\tv64EZM6t016426; Tue, 4 Jul 2017 07:35:46 -0700"
        ],
        "Authentication-Results": "spf=fail (sender IP is 192.88.158.2)\n\tsmtp.mailfrom=nxp.com; nxp.com; dkim=none (message not signed)\n\theader.d=none;nxp.com; dmarc=fail action=none header.from=nxp.com;",
        "Received-SPF": "Fail (protection.outlook.com: domain of nxp.com does not\n\tdesignate 192.88.158.2 as permitted sender)\n\treceiver=protection.outlook.com; \n\tclient-ip=192.88.158.2; helo=az84smr01.freescale.net;",
        "From": "Shreyansh Jain <shreyansh.jain@nxp.com>",
        "To": "<dev@dpdk.org>",
        "CC": "<ferruh.yigit@intel.com>, <hemant.agrawal@nxp.com>",
        "Date": "Tue, 4 Jul 2017 20:14:02 +0530",
        "Message-ID": "<1499179471-19145-12-git-send-email-shreyansh.jain@nxp.com>",
        "X-Mailer": "git-send-email 2.7.4",
        "In-Reply-To": "<1499179471-19145-1-git-send-email-shreyansh.jain@nxp.com>",
        "References": "<1497591668-3320-1-git-send-email-shreyansh.jain@nxp.com>\n\t<1499179471-19145-1-git-send-email-shreyansh.jain@nxp.com>",
        "X-EOPAttributedMessage": "0",
        "X-Matching-Connectors": "131436525509129095;\n\t(91ab9b29-cfa4-454e-5278-08d120cd25b8); ()",
        "X-Forefront-Antispam-Report": "CIP:192.88.158.2; IPV:NLI; CTRY:US; EFV:NLI;\n\tSFV:NSPM;\n\tSFS:(10009020)(6009001)(336005)(39860400002)(39450400003)(39840400002)(39850400002)(39380400002)(39400400002)(39410400002)(2980300002)(1110001)(1109001)(339900001)(189002)(51234002)(199003)(9170700003)(53936002)(53946003)(104016004)(50986999)(76176999)(356003)(85426001)(68736007)(47776003)(16200700003)(54906002)(48376002)(69596002)(50226002)(498600001)(2906002)(8676002)(189998001)(8936002)(81166006)(305945005)(2950100002)(110136004)(6666003)(38730400002)(77096006)(105606002)(2351001)(4326008)(626005)(106466001)(6916009)(33646002)(36756003)(5660300001)(8656002)(50466002)(5003940100001)(575784001)(86362001)(2004002)(569005);\n\tDIR:OUT; SFP:1101; SCL:1; SRVR:BLUPR03MB472; H:az84smr01.freescale.net;\n\tFPR:; \n\tSPF:Fail; MLV:ovrnspm; MX:1; A:1; PTR:InfoDomainNonexistent; LANG:en; ",
        "X-Microsoft-Exchange-Diagnostics": [
            "=?us-ascii?Q?1; BL2FFO11FD032;\n\t1:aV5KpCTI8ukAeQbeLuHJ7pzDBzES0VnC5j4eRmF8Wc?=\n\tlJrTqkvfMeGleBNPjOrHVKc2Gkh4rQIkYoOZr0KhAeUsEhp3gsOdD/owlcyBpV3XA7NK86SR5fXeBI9IeONLwOTsTpV3dPH3Gsq6EyF7/WmKt1lPsdRA8VpAcruwJiuacgnyLq+U4MMBlhzXw+dMhbPa0Yc2C/RMyFZ5B5V8uFYEe5XtNLmBRxgqOhs1hQ56l/7AdwhMDRytLc80JuKJdnig4HvRB9cEc+Z5xm4NInFAxwHkeyAQ6oWfp45NPLrrg+Qf+Kb2uDakJIDlUdV3wcuy1NjsCsPac4uKsH3LTFq21SUuZ93PHYPtZ68kKSGUQdv8HSpx74QCtrTN6MDyeBcQmzytjWRINtDO2JrhWVgG7Mtj5FMmiFAyL/3H6HZGQGLSf1Qlf5K7v+0lTQ6K3KhJWzdgYjMTXSUdTs6rFDDFiGtp3JOz5yzsZnKRL6SDP6ryG5LBiWgk6XtH9UZe5GLgTPNmwRRBMopmVoCXnduyqkTvQFxS8mcAZYwqKvxD0nV9ZKi1U9+5LXuIV5zzr5TTjf42u/Jz2OU2Dak8vEMF00ximXSErK2pnWmsZJtEfbneCavYPLh3vx/0HkhHzb3wX7p5L5LXfq7dgTITQmvThE3XZVpXLUmFPGCqOj+qyV6oQtV8rtZ7SF68THfaRL2zTdg/f6smpC9pyAEXNOULcq3RkZP+5XOEJRq2+0q5tJ5wfTcq3IfggoMNgDFm86u7qsKbNXm2j+s15y8fl10e4Tg4odW/ZU+XzSMF7rd6lkaEDZ62p27dgGghy7Q90VFSsDwjTE9aa/EI58HI3hyn9nynT6kb7ZOFh/wAzpFqEjiQ7aMJkMOGY5Nu7J5tGREpyHb4c3b8cnjzpiXrM1QVzOLH6Jww/qdJsoW8Z/Su6oCnWfr6/CE8/GBxAj+S4AOwL4MrNoVWOkIXf+sTTj1OLpo28c9jgLYpdcnCHeaWmc4xuEQL3EtDjalFmt8u7MAhICdA1SJq7rUp2X5TQJcmBIp0UPy/CQ9h0FU7PRKajEGF6EafrzZmIJml+VnoAGuRFquXwBaHJr2cyW3w6Mpw==",
            "1; BLUPR03MB472;\n\t3:rGJOo060CLSa670QdEgpOn3s+P3+m3yACCAz1HOIX7wrAidWnfU0ScCsiJJFb7QI+wFb6xTZkM3y6pkPI7qBS0U3+5OtJrwRgT6OO0obedwOsowDewVvK/HsHt53KCzlWGpMe/e/xyyZESpTDCBP+z8RsAAN4OAcYVZdgvzawMR/C3T88efFMebohTuPMaJRHCfoiXqsowjDJoa1E0xD2N2lOvd+Ex0unoAmoih+ym/Krsdw+7oR9Iji1OTiX9Jn6T1LbHnlC94ZFZozYAQEIX3J33r0MgWcPygvYRZSIzL9U/aqABDZ46QX1x7ckITJENJRiWwQUfCmipnd1EPwgGbbEAmfKy/09pwV8+S8qkyzDp+hgGph0tJqaxeDH1Tuz0QDvymITlIbFYFKevdoAjUU/v6nBR2szHhd7ewcolb7yVP+cj1I8VA5hbJyMQFEfE/deI30+moVk8IqFg7ThiUM/kUOGBmagL3zn1WuMvMsn1ITo3s98bNCISefIyps7Pc+MRWXratAP32FsdyncOflG1GDIa+5lJQ49PS5MNWQINt9f9NMJ74HhqeqpZnNoqKbb4SYsYLNTdWyOOg2JrsXsMovKGYQIlXqEh0WhqeaAYeVHAGjuRwoSLjRf51vSAP5MOGnfSaEE2C+1tuyz5Y+FZtdPQN92VG7wE9O4eWraeFbZIxgJs75bjHiPM35EjbnFg8iFVB+Rb2MmmX6VgUpc5fCl/4KNDBdXtIxFtLSu/WYygoSWay1yUhatKoRKWp3s7CWhfJF5E6/BkyM+ZEq6VqlUX0VRruFmHtJbs7gq7MUNsGXYeCkvnNcFbW05s2iGu/ezoeivYJAaaM8a/7QtHxv9LLD4CM6jZMVkTrElhQoX+5yogyA+sFB5nTa",
            "1; BLUPR03MB472;\n\t25:EUAzm3noYmE8Rd/wXMTZW3UWdwUCdasFT7obCudtalKbGjKmWzJN8RhgzXYL5XZ7TLhb27jc6GlNFRGt5XGFVJEXJ7m1IAEL4RMYZIdcSgH1vfbxgt+T6pQybVT9XDnxo9v2XWxZIs2egUTayiy4x8+4sck+LYeI7HyXzBIDg5XYy+rTHyz1ytMsZjFL0aUBMcngxloWmeJ93YYScogK3qntLMyGZumMzxLUqE9cjsMYiEW8PtBw9demkuthNYlwDY32gbR7xq15nbibdOAjlCpmE4tMaVXysvD0UeHu2qKlr6HiC8Hq//ISsN6JpRqmjEy9yKB7rILM6ud3C8uBu8dzQEAvXnvj0UDNiIFSFIkJahskPUGRTLA3uv240K7+t2OuuxZVhyUJOBB/KJtqH6VxjYAXB+pi0bW1R6+YIAY3v79Xqcyt2JjsPNQsp5Pmpjd48vTKD9m69l0KRTPlsKY1kG4ETTlpbHIbDv97S+wyRJ9Rdh4CuLaLmvMD2OA+OxR4Ch1OLEOHnbZ+6uH8WK0x5hD5TqwVkK+dLhxEBMM0ve2E8jqhXf+8ldEkM+bJS9TgcngJhor5HbnhrgI7lrCwFeVSY4PMqdZdQQUb3rhZ2S3kDV36a3S2++NiU+O6ODKqL4hy66LyNIGG7DgVRp1hUc6molhWHaO73YUHnQ+zrR41NZUwEShVWNfA/8E4BBNOyMvi/De7llcHwvdYe+jVW/+B4zLVCA7Cbj368FtLJ1pcDBOWNyDGpJ+wqoA+Vs3mfz8PZFHGQpbTOCorsnPteu8Ub1n1csc71+utqxA+lmTdYZnAB9dwEqaeLKaDm1wdf/FwyOudqur4XI/bjJWprYhjYk2eceH7VZ85jfXvg4+q+XXB+O0A5Wv7RiEhicdQcuN6ezNQZDvAEXmLtexIm8EJ5cEENBNRncGAZQc=",
            "1; BLUPR03MB472;\n\t31:oqxDrm5ADAPl6t/Wm+TLwptRBE0qpbxpb+g6MimG7l7wqmEOMAHGpdqlTkKayKGE78jWhsqUP1846cK38XMBDUVZn8Cj+pkZJ2+oXmEC8J3y6ZKvthOGj4wRz8HIwzCHcOurXTaVRao10mKMOp1cbm7gJ2y50VceKWSNkAmlHAn3Y7hbXkFB18XjfeAWASZOP3YPqr2UjbmzAM1MvO7JtnRYInh6CBlgEbTv+o+876ppHaMmZjVaE4r4A+k2mKCZvapttfLKR6/sI3+SS9jlsZkO1HplR4VuHW6uK/TeGJwsiwJa+SF0TMasypJUhf75ZJW6zCb6Njm+vFuYej3p5NSA7dEpNGid+0eMGtJVTAQ8NuKBpMOY90KJvXAdQ6plRWqSRgC0w+MqcovDLItYHu1aQ5SgG4IP+7yol8Ft8m2VxH1CY25cHjoOPum41/Gzt+1lwyFZtLo4lWfT/BsJVdfUReTR7atfYtIJQfkvnC4tTK0pKxZn567kXBq4NTtKG8FqOkrD20ItaK1CV208DniilUoCqSIqKLtcSn4Q3LXx4BX+wUNlPoA5DECAIkZVfESEy4TzSlE+9UCOa36n5WvERvgmqg+ZrjAvVZ8IiZpoY+Xq2UUjczmaAGUIm2S3PEjVWL6pFAsqS5EnchMTEL+2MxY4e8ye30nulVIQz4y4LpNdV6dpbRlYDtjxayQvyZspF7RfH5Ui9Y1oznA0ww==",
            "=?us-ascii?Q?1; BLUPR03MB472;\n\t4:gaMRfLIucMSD8PmNhq3WZA6qYq78SnyKjl9h42lFW5Q?=\n\t4+uxYxhlwda3/VIDjLGBJ9ih6g/Z9mTYmXoWZUQhcCwKtO31cTqXg+dHXo5GrIb6OuVJH/LDoIX9Ua17CwA6YTlmg8ffVC4rzAzCjltMlN8SLzAbry+qSzTRbkJEh9WZ4aL5Gt6i5o00VtIj9p7mF8qEJZ1hvGtZRSq4GxNmFrTkfRrxRd2krFQdyN/n00MdXkyuIG8GK6UM9vBpfI52Q36i0a/+XPGmYlqHg6WbIZAG4mi1v7xzGV4NCvjjxRrbPg5JCx7+Cy4l9hxJt5mVksVX0kinw09Hckgo0u1aN2Owg/fkEo1Ff0BA/Vw5zFkKZdBbNPZGPkXn0BqK322kMChd+5S7oblQ1uQhmPY+fce9eHSioSlYE/bnz/zWvElJcY+vkYVpqnd4XbjLcjT672HGhZUF8R6Uak4MKI3WEsNInKrnBCCCDRAz3ZXvylXFcEZffQ6vVy8C1N4RpcXalH4PdEHgaPnUV0zP8TucajLTbg+kHKLjUcbmloqrnByNBuipaYXx5EEiblqtG0o9aeKArWpY+H7KUgHSj65BnEH80C6UiTJ2ZpmarB8OrGpxILAEoEkBTDZwExFEeMYoP5WLlZ1Vd5ozK7cVB/Rmq0yreBw4PPs/+93vckCIZ+CLPvsA59h+eQoDjpcPFKWBKoziNwI1HwiTHs9zFBZJVQN6MiI7w3q/PImQ07EUxTQkUb3AMFv5afpo3LmwqOj0xG+l9iclPJH6EEMoeJL+35p4GJtBDSpVnAlYmsMPD+w39iq/fLu/GKX5CoUGgzdZi2IetSavj5tdvES1fuMYEYvwYrG9x/6I8M0QUlp38NfokQofaO44YHtPtdcjkWASWojxfRRJiDKPbGOX2CoYOhlFkPYc1FOUeI6ZtQUo07C97FwG7gmyRJsNQ3e6GAtr15VrQTrvsc6GZMjZZHcU7eQlGIUaeuOb0kbITchW9Ph1lkhYcqypr1FN/qyEOcYIk6UWYhoidhZXQqFUahUEfcZN8TcBxdDTYJoCvQKZJGgKziyi8nJRWDgeGKJCpWPrEsztb7pwfxcrnZ9ZzPSnZ7WgJwS+IBRVly9wWK3pYPbpXtib5hVS3RWFeOnHlRTS3ghTveE9qS13FMX3x/Yc3IrCEGZRE51JfLsWUvoxHv9rYvB9c6zWGAG0HH2woNFv+LmOIYKjl/8mTbsxmLjNy1TVfvGSUzvkDFiq1+kuyJRgj2n0WAS7DDCiBLDHev9Fhq9aynZz4//PZLKqdDW2TuGFmtHyNSzg1ZuxBlQy3EjBucQ2qHNbvXBddBqalWogFTevMZhofcL6IoNHtIhyAXeROvi8XowI7WtX5kUQqT+m8jD8qyOxKYs/PK7YSPQT9hFHOVc7jhNU6mMchRV1ykVqCeY6c8c/Oe3lvj3Itnh8kzCtVDNf0ppKzXFXWwJpDNXCy4n1Za2D1tzdkTdNeYJPlI2S5VDf5bxrw50xnGIdEeHB7XXw0UTh3gzzp8vrBTOprAyvg/poLBjv3AZHt1m1LrCpRKRq54j3by6pAWBc=",
            "=?us-ascii?Q?1; BLUPR03MB472;\n\t23:yxeucBuO7Lag/LV6tUSeRq7/ByULDdycGBbwLq9pXw?=\n\tVfLcP/aV/+Jd8XnNf3gepB78Ks09u+oWt2cpCNB4eyOoTuipqs2Ctls6RLxHd2Is3w89LeKB2YNh1kao2CsVHxHVi+pFMNkU0ByzhgH8beFdN/HXwwEc+o1BFM44n0tM+BBQZSZyNk7GlqtwOhf2inpQgIkiGhI1h2OPQ0FR1hWpoctM/uqNCB5Byn77/kvwQ4O7nKDoaHZXoPO7v1Vpr+7dMh/Gzc6C0n5FWSgIImbrbFmERDo9JW5imMn+zRz576VU+B7Mq0N2FYGGuMGd4f0+p9i7cMs7I+zB2+h/I+3eJZKqXtC/1V0fphY9aQ1oBN68oY2Fa6W+kEC91GgcqW9FgLa/yT+SwjTP/jwng2f0GSeOP+ZqCYxcn48qmB8O+gWw7EHrIAwRO2rSwjpmpQPfsHCnDx53FGvTk7cUElYhadv0OLmeQaVgqZB0l/QiacRal+6et3UM8rjeCEpZ4EcJyQ01Ru9Br6R8fTcm9YhYi7ZKUR/Y7y0rOx//M5ndJRzhm89L1+Gg47PySQQmLW9fs+y+mTh6ydM+axsm6z1PDmLP4pJd6tyfb1y8kEYnePMUvv+ae22sSdtoZLPZjXxBNbDj8fk4M9XN6mLTvGp9SKQ1kfaQMFKi4zrGrGlkjjXNyW08BXhZ22m9nlqI6LxxvjsguNFZWMQr2kttUqy3Hie+rXgpHmKZZDtEXpSyMHRRbX9WQJoSPwWYA8Ui2NPL5Mc8WUI1N1xS4fQmqaVYTcWsKBe01Igle4bj5WpzKaRwaANX3tfkg/12tkFMyHq08i/kKwrBgz7IMAZTGO34cH7Hd/a7dyO3NKcYimyO7yAJ2MKrPeFH1ti7FyRn/NPu4QLGUV2QG1DyUA7+zKJRfuqqix2VPIN8kfj1LlIRcpiBiOsF3x3IGfjEl5FonlRzhg3VTvJz4V//iIDLiL4bUW3wxGVw4UrInh1OyIvopBkp+Bnz+PidQDG520mpZqBX7FQ6je/ytZGNXLOHNE/WK34yPh35025HhXgI/swfC1AIwzFy8q9eIkzkatYq6MV07CXwvfP4S3MGCC2N83zYdaFNOzLXSjN2aPtzmJSWTH1u9KIf3R9czYGerhHESGxMMAZZ81yj2kDDnmPd3mz/MPOKiTnuDJZbjbSL/MGzFrVpfpXGFXaVHgc5qHbGgOuESCqqZ9zEszJAGu/WQg1dcB45WAQNETSTbp/H6yQrckxlRL9zVnh7qs9w3HtebN6luWUdKozxlmLfgjBWsBhKW3PfF3Fz79FQmW0vOWAl6kNcVZUKCLb3lEvbezQ5xuFjmpMfHCEw6tuo/4QE9bbqskE8Jgc/AbRFiZ1BK+V7GOCV/Bf6tnfDAMX1RoMRKvYxrBJICqhyHEdCZ8/On9xw==",
            "=?us-ascii?Q?1; BLUPR03MB472;\n\t6:bRGXhRe9qcAqT8ftHpALRsnCjoGC+7H149uQWIoHCZA?=\n\tkINDQd+GMdD6Nc8LgkVxPI+cFWkW8Zj/+DwgbvIXqmDnMRvVNACKhg5NtZ4GyKkx1Cwtjg51PVVKSHpHtunFXpIDF/hAcS70PlV9gg6YJhNttq1WjI0Butr2FVP2JAhAQ6R+aKy1WpjddCm1bqsv798nbOtiJTM39q/GUZyLsXMLw/09RxWw1hxkpz5/eRChATDRlOHnOpENgH6pbKTKX/qEIo/6rUUHLmzUOQlrQpYnrlelWvJ92FzcrWaXRnqbuKlX1G3DI3iDS/qzLvjZ+Nzye1W/4ey8rn5JzkW708TvWa9Tdw/uoFTXvSEC0iGSWoQ4dDe2QjjPL0MRuUh8e+eRJn7dskQptWs0xxXiZtN7thEn8UeZlhq4uPV8QjR3EdMNEiMbQ3KYJUJ1KX3JEOzgsrJ8bHpH2NBYSlW/Gc5VfNNKRtXgP0prac4vVPL5T/vXcOji+iZgtPckg1e9uSU8D7tAuO1Q4YYcLK+VSyBu6wG5Y+bCOzia+GPRgu4lkOPW3Ag0NfIION5eejmW9N76lHIUPQq6seAkiLA+x1D4AB4hv/r9brBHVKAYhwj5xH7eBt8SNrkawE20tI851CgI9CQGHAqxE6b3fMyaXTPUcGrm0JSgIPC1eVtuoyp+HRS/PX4N7dG87IN5arVaBxsa/bqS1ifya4x1ZOpLUmdMhZk9yKUC+fJLJZTeP0vtcqJpGnLowcUgsN7KN0vwJ89L8Ry+a6CANrp9YYS6OnzkAQvMYV1wvf/VCP0HoziyG6+hfXB1LN7upi4F4u3tnNB5XZGUyp6fnR/Rqx4qe13fRMixSEtvwaLAmkHx2rhQDbJmbEodmvyk/fS9cDMF/UY2hGy2qhsJy2HtlxzyS9ShvMU8OQzEsd/G6uKwQHVjYTuwEAJZFHsO/6b7zdWsJ6OENFiCx40T5+F006OWLO63PfceCCwTYASFGdGNJw1U=",
            "1; BLUPR03MB472;\n\t5:+g2314qWsQ0pwhaBFUK67dyvRFqAJMDRo6sdPYZdlMCCuvSvSzTzN1aMtxb5BBGcH/jiGy8A5T/unQlmcX2S6UcGirmlQkZLkepFAAL9ZPkmdYD8ji0MeihMuzexwrNzoqs7CPPXaoq2QO+yVOVEwkxUhKNe18/msV5RFHNztQRSG15GG1ii/3besunwifTgoX6r1ARyDnqmgBKGlKc+O4p/DFB+031mJ0eXfu/ZZnGqgcTqmI7bRC/ic3HO894UfZ2WQHsPIbfvAho3XWC2MSZZVQXUV9IuzKKZbsLlTObFWx0DjMP9/Q5xfRv6JUMsutB+nCQOZFjQXC0MFVmU7UeTkX1lkPfB0do4Gx06EkxFlsVNyCP6H+52HJF2LT1nWgLJCTxCCBg+tGWommf2E8NrcpyCzaFOub+QrO6Q1Jz+ywlCtmzv+U4LtTFO138RnC+v4vvnowTUFVKPD9XyK32Tzi8EaZ1aLipobe7FWXi1b/Buy0kJ39mSLuTeVDgTAzd3f0Ml2gTVjdFU5JQoYg==;\n\t24:18gTPhq67X4DL/zPPvgEN5HS+NfdvHTqT+L847QyniCaIUbaq0baZgL7OuC2i2007ol7jh4TdQjFNWkEWBMeilU/rAoEA3G2a6Bw5WAJEQY=",
            "1; BLUPR03MB472;\n\t7:nWPJTRsrYJSVFYBjKxbAVOEbI4a8vUk4XDtZYFMPto52jvVA21flUsUO2njsjqprZFOm0svSW414N6NXcrtLR/V40P+TLF5ODOJBGmyJgF3g27c72XVfJXtjmZoLqa15+pi4vhzizrVrkf7HyQVZDKd46xAkGhMn6PSDqArSaaQjq/OqSnQyJEHPPNW1cuzl7NjSt5aasjclg/Wp9VGnnJc0qydWzaIMlKWMu77f1erEq1hziiBSe+Y9A3p1S2d8VL+o8JoI77AjLu8j5Trs2BeO5F4D2Dq8l8ASJnrXlnfEw+wSInZwQ2wsw5EQ5jVS2A04pYq0m3ZR043DC9MYf5yi6S/XA7UXyn7jb7aZXWunrhgkadG1urpq/+av4stKJzYdiSrk/NHflgHzd9qfNb7gDAN5M3iaHPuS7nuK7aPrmGqRzdR+JrrdRSp3BfwYcdOP5HyCBZLMfG1RSkPggTU00WOpnxx5dF9D8xUf0P7Vtavis3kwJgWXm2Gqzq4CzwZC4x5EGTslGnNMX5pd81RCClkl2DXcHvtfizGRg/clLcE/dMg9G+Y8KLLj2m9hBskWqkMgIOIe2jU8DIZD9M//EFlRzpHagR1VjVysLvIsVBiLHEIy6mOhEAI7NZLUhsIk8Zd8gKDWy4/mGXiWW4rhnfRZ5HYeZGlXK+y5fqgRfCQlRztqjL1nuyP+727q5qaeTFtnx4qQ5lgKGl1uiCqAPjICtDV+NytDLG1f6Z3Sy2plW1idrH56zLF2+X3sIZQOxF0O71Mg5YdKo82zBBjtscDFNAIs6VTR7grh9fQ="
        ],
        "MIME-Version": "1.0",
        "Content-Type": "text/plain",
        "X-MS-PublicTrafficType": "Email",
        "X-MS-Office365-Filtering-Correlation-Id": "9abcf07c-700b-4996-a87f-08d4c2e9f7df",
        "X-Microsoft-Antispam": "UriScan:; BCL:0; PCL:0;\n\tRULEID:(300000500095)(300135000095)(300000501095)(300135300095)(22001)(300000502095)(300135100095)(300000503095)(300135400095)(2017052603031)(201703131430075)(201703131517081)(300000504095)(300135200095)(300000505095)(300135600095)(300000506095)(300135500095);\n\tSRVR:BLUPR03MB472; ",
        "X-MS-TrafficTypeDiagnostic": "BLUPR03MB472:",
        "X-Microsoft-Antispam-PRVS": "<BLUPR03MB472916B60F4152647A54EC090D70@BLUPR03MB472.namprd03.prod.outlook.com>",
        "X-Exchange-Antispam-Report-Test": "UriScan:(60795455431006)(133145235818549)(236129657087228)(185117386973197)(227817650892897)(48057245064654)(148574349560750)(275809806118684)(167848164394848)(158140799945019);",
        "X-Exchange-Antispam-Report-CFA-Test": "BCL:0; PCL:0;\n\tRULEID:(100000700101)(100105000095)(100000701101)(100105300095)(100000702101)(100105100095)(6095135)(601004)(2401047)(8121501046)(5005006)(13016025)(13018025)(93006095)(93001095)(100000703101)(100105400095)(10201501046)(3002001)(6055026)(6096035)(20161123565025)(20161123559100)(20161123556025)(20161123563025)(20161123561025)(201703131430075)(201703131433075)(201703131448075)(201703161259150)(201703151042153)(100000704101)(100105200095)(100000705101)(100105500095);\n\tSRVR:BLUPR03MB472; BCL:0; PCL:0;\n\tRULEID:(100000800101)(100110000095)(100000801101)(100110300095)(100000802101)(100110100095)(100000803101)(100110400095)(400006)(100000804101)(100110200095)(100000805101)(100110500095);\n\tSRVR:BLUPR03MB472; ",
        "X-Forefront-PRVS": "0358535363",
        "SpamDiagnosticOutput": "1:99",
        "SpamDiagnosticMetadata": "NSPM",
        "X-MS-Exchange-CrossTenant-OriginalArrivalTime": "04 Jul 2017 14:35:50.5541\n\t(UTC)",
        "X-MS-Exchange-CrossTenant-Id": "5afe0b00-7697-4969-b663-5eab37d5f47e",
        "X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp": "TenantId=5afe0b00-7697-4969-b663-5eab37d5f47e;\n\tIp=[192.88.158.2]; \n\tHelo=[az84smr01.freescale.net]",
        "X-MS-Exchange-CrossTenant-FromEntityHeader": "HybridOnPrem",
        "X-MS-Exchange-Transport-CrossTenantHeadersStamped": "BLUPR03MB472",
        "Subject": "[dpdk-dev] [PATCH v2 11/40] bus/dpaa: add QMan driver core routines",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.15",
        "Precedence": "list",
        "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
        "List-Unsubscribe": "<http://dpdk.org/ml/options/dev>,\n\t<mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://dpdk.org/ml/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<http://dpdk.org/ml/listinfo/dev>,\n\t<mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "Signed-off-by: Geoff Thorpe <geoff.thorpe@nxp.com>\nSigned-off-by: Roy Pledge <roy.pledge@nxp.com>\nSigned-off-by: Hemant Agrawal <hemant.agrawal@nxp.com>\nSigned-off-by: Shreyansh Jain <shreyansh.jain@nxp.com>\n---\n drivers/bus/dpaa/Makefile                 |    2 +\n drivers/bus/dpaa/base/qbman/dpaa_alloc.c  |   88 ++\n drivers/bus/dpaa/base/qbman/qman.c        | 2402 +++++++++++++++++++++++++++++\n drivers/bus/dpaa/base/qbman/qman.h        |  888 +++++++++++\n drivers/bus/dpaa/base/qbman/qman_driver.c |   12 +\n drivers/bus/dpaa/base/qbman/qman_priv.h   |   11 -\n drivers/bus/dpaa/include/fsl_qman.h       |  767 ++++++++-\n drivers/bus/dpaa/include/fsl_usd.h        |    1 +\n 8 files changed, 4148 insertions(+), 23 deletions(-)\n create mode 100644 drivers/bus/dpaa/base/qbman/dpaa_alloc.c\n create mode 100644 drivers/bus/dpaa/base/qbman/qman.c\n create mode 100644 drivers/bus/dpaa/base/qbman/qman.h",
    "diff": "diff --git a/drivers/bus/dpaa/Makefile b/drivers/bus/dpaa/Makefile\nindex f1120bd..ad68828 100644\n--- a/drivers/bus/dpaa/Makefile\n+++ b/drivers/bus/dpaa/Makefile\n@@ -71,7 +71,9 @@ SRCS-$(CONFIG_RTE_LIBRTE_DPAA_BUS) += \\\n \tbase/fman/of.c \\\n \tbase/fman/netcfg_layer.c \\\n \tbase/qbman/process.c \\\n+\tbase/qbman/qman.c \\\n \tbase/qbman/qman_driver.c \\\n+\tbase/qbman/dpaa_alloc.c \\\n \tbase/qbman/dpaa_sys.c\n \n # Link Pthread\ndiff --git a/drivers/bus/dpaa/base/qbman/dpaa_alloc.c b/drivers/bus/dpaa/base/qbman/dpaa_alloc.c\nnew file mode 100644\nindex 0000000..690576a\n--- /dev/null\n+++ b/drivers/bus/dpaa/base/qbman/dpaa_alloc.c\n@@ -0,0 +1,88 @@\n+/*-\n+ * This file is provided under a dual BSD/GPLv2 license. When using or\n+ * redistributing this file, you may do so under either license.\n+ *\n+ *   BSD LICENSE\n+ *\n+ * Copyright 2009-2016 Freescale Semiconductor Inc.\n+ * Copyright 2017 NXP.\n+ *\n+ * Redistribution and use in source and binary forms, with or without\n+ * modification, are permitted provided that the following conditions are met:\n+ * * Redistributions of source code must retain the above copyright\n+ * notice, this list of conditions and the following disclaimer.\n+ * * Redistributions in binary form must reproduce the above copyright\n+ * notice, this list of conditions and the following disclaimer in the\n+ * documentation and/or other materials provided with the distribution.\n+ * * Neither the name of the above-listed copyright holders nor the\n+ * names of any contributors may be used to endorse or promote products\n+ * derived from this software without specific prior written permission.\n+ *\n+ *   GPL LICENSE SUMMARY\n+ *\n+ * ALTERNATIVELY, this software may be distributed under the terms of the\n+ * GNU General Public License (\"GPL\") as published by the Free Software\n+ * Foundation, either version 2 of that License or (at your option) any\n+ * later version.\n+ *\n+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n+ * ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE\n+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n+ * POSSIBILITY OF SUCH DAMAGE.\n+ */\n+\n+#include \"dpaa_sys.h\"\n+#include <process.h>\n+#include <fsl_qman.h>\n+\n+int qman_alloc_fqid_range(u32 *result, u32 count, u32 align, int partial)\n+{\n+\treturn process_alloc(dpaa_id_fqid, result, count, align, partial);\n+}\n+\n+void qman_release_fqid_range(u32 fqid, u32 count)\n+{\n+\tprocess_release(dpaa_id_fqid, fqid, count);\n+}\n+\n+int qman_reserve_fqid_range(u32 fqid, unsigned int count)\n+{\n+\treturn process_reserve(dpaa_id_fqid, fqid, count);\n+}\n+\n+int qman_alloc_pool_range(u32 *result, u32 count, u32 align, int partial)\n+{\n+\treturn process_alloc(dpaa_id_qpool, result, count, align, partial);\n+}\n+\n+void qman_release_pool_range(u32 pool, u32 count)\n+{\n+\tprocess_release(dpaa_id_qpool, pool, count);\n+}\n+\n+int qman_reserve_pool_range(u32 pool, u32 count)\n+{\n+\treturn process_reserve(dpaa_id_qpool, pool, count);\n+}\n+\n+int qman_alloc_cgrid_range(u32 *result, u32 count, u32 align, int partial)\n+{\n+\treturn process_alloc(dpaa_id_cgrid, result, count, align, partial);\n+}\n+\n+void qman_release_cgrid_range(u32 cgrid, u32 count)\n+{\n+\tprocess_release(dpaa_id_cgrid, cgrid, count);\n+}\n+\n+int qman_reserve_cgrid_range(u32 cgrid, u32 count)\n+{\n+\treturn process_reserve(dpaa_id_cgrid, cgrid, count);\n+}\ndiff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c\nnew file mode 100644\nindex 0000000..829e671\n--- /dev/null\n+++ b/drivers/bus/dpaa/base/qbman/qman.c\n@@ -0,0 +1,2402 @@\n+/*-\n+ * This file is provided under a dual BSD/GPLv2 license. 
When using or\n+ * redistributing this file, you may do so under either license.\n+ *\n+ *   BSD LICENSE\n+ *\n+ * Copyright 2008-2016 Freescale Semiconductor Inc.\n+ * Copyright 2017 NXP.\n+ *\n+ * Redistribution and use in source and binary forms, with or without\n+ * modification, are permitted provided that the following conditions are met:\n+ * * Redistributions of source code must retain the above copyright\n+ * notice, this list of conditions and the following disclaimer.\n+ * * Redistributions in binary form must reproduce the above copyright\n+ * notice, this list of conditions and the following disclaimer in the\n+ * documentation and/or other materials provided with the distribution.\n+ * * Neither the name of the above-listed copyright holders nor the\n+ * names of any contributors may be used to endorse or promote products\n+ * derived from this software without specific prior written permission.\n+ *\n+ *   GPL LICENSE SUMMARY\n+ *\n+ * ALTERNATIVELY, this software may be distributed under the terms of the\n+ * GNU General Public License (\"GPL\") as published by the Free Software\n+ * Foundation, either version 2 of that License or (at your option) any\n+ * later version.\n+ *\n+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE\n+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n+ * POSSIBILITY OF SUCH DAMAGE.\n+ */\n+\n+#include \"qman.h\"\n+#include <rte_branch_prediction.h>\n+\n+/* Compilation constants */\n+#define DQRR_MAXFILL\t15\n+#define EQCR_ITHRESH\t4\t/* if EQCR congests, interrupt threshold */\n+#define IRQNAME\t\t\"QMan portal %d\"\n+#define MAX_IRQNAME\t16\t/* big enough for \"QMan portal %d\" */\n+/* maximum number of DQRR entries to process in qman_poll() */\n+#define FSL_QMAN_POLL_LIMIT 8\n+\n+/* Lock/unlock frame queues, subject to the \"LOCKED\" flag. This is about\n+ * inter-processor locking only. 
Note, FQLOCK() is always called either under a\n+ * local_irq_save() or from interrupt context - hence there's no need for irq\n+ * protection (and indeed, attempting to nest irq-protection doesn't work, as\n+ * the \"irq en/disable\" machinery isn't recursive...).\n+ */\n+#define FQLOCK(fq) \\\n+\tdo { \\\n+\t\tstruct qman_fq *__fq478 = (fq); \\\n+\t\tif (fq_isset(__fq478, QMAN_FQ_FLAG_LOCKED)) \\\n+\t\t\tspin_lock(&__fq478->fqlock); \\\n+\t} while (0)\n+#define FQUNLOCK(fq) \\\n+\tdo { \\\n+\t\tstruct qman_fq *__fq478 = (fq); \\\n+\t\tif (fq_isset(__fq478, QMAN_FQ_FLAG_LOCKED)) \\\n+\t\t\tspin_unlock(&__fq478->fqlock); \\\n+\t} while (0)\n+\n+static inline void fq_set(struct qman_fq *fq, u32 mask)\n+{\n+\tdpaa_set_bits(mask, &fq->flags);\n+}\n+\n+static inline void fq_clear(struct qman_fq *fq, u32 mask)\n+{\n+\tdpaa_clear_bits(mask, &fq->flags);\n+}\n+\n+static inline int fq_isset(struct qman_fq *fq, u32 mask)\n+{\n+\treturn fq->flags & mask;\n+}\n+\n+static inline int fq_isclear(struct qman_fq *fq, u32 mask)\n+{\n+\treturn !(fq->flags & mask);\n+}\n+\n+struct qman_portal {\n+\tstruct qm_portal p;\n+\t/* PORTAL_BITS_*** - dynamic, strictly internal */\n+\tunsigned long bits;\n+\t/* interrupt sources processed by portal_isr(), configurable */\n+\tunsigned long irq_sources;\n+\tu32 use_eqcr_ci_stashing;\n+\tu32 slowpoll;\t/* only used when interrupts are off */\n+\t/* only 1 volatile dequeue at a time */\n+\tstruct qman_fq *vdqcr_owned;\n+\tu32 sdqcr;\n+\tint dqrr_disable_ref;\n+\t/* A portal-specific handler for DCP ERNs. If this is NULL, the global\n+\t * handler is called instead.\n+\t */\n+\tqman_cb_dc_ern cb_dc_ern;\n+\t/* When the cpu-affine portal is activated, this is non-NULL */\n+\tconst struct qm_portal_config *config;\n+\tstruct dpa_rbtree retire_table;\n+\tchar irqname[MAX_IRQNAME];\n+\t/* 2-element array. cgrs[0] is mask, cgrs[1] is snapshot. */\n+\tstruct qman_cgrs *cgrs;\n+\t/* linked-list of CSCN handlers. */\n+\tstruct list_head cgr_cbs;\n+\t/* list lock */\n+\tspinlock_t cgr_lock;\n+\t/* track if memory was allocated by the driver */\n+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__\n+\t/* Keep a shadow copy of the DQRR on LE systems as the SW needs to\n+\t * do byte swaps of DQRR read only memory.  First entry must be aligned\n+\t * to 2 ** 10 to ensure DQRR index calculations based shadow copy\n+\t * address (6 bits for address shift + 4 bits for the DQRR size).\n+\t */\n+\tstruct qm_dqrr_entry shadow_dqrr[QM_DQRR_SIZE]\n+\t\t    __attribute__((aligned(1024)));\n+#endif\n+};\n+\n+/* Global handler for DCP ERNs. Used when the portal receiving the message does\n+ * not have a portal-specific handler.\n+ */\n+static qman_cb_dc_ern cb_dc_ern;\n+\n+static cpumask_t affine_mask;\n+static DEFINE_SPINLOCK(affine_mask_lock);\n+static u16 affine_channels[NR_CPUS];\n+static DEFINE_PER_CPU(struct qman_portal, qman_affine_portal);\n+\n+static inline struct qman_portal *get_affine_portal(void)\n+{\n+\treturn &get_cpu_var(qman_affine_portal);\n+}\n+\n+/* This gives a FQID->FQ lookup to cover the fact that we can't directly demux\n+ * retirement notifications (the fact they are sometimes h/w-consumed means that\n+ * contextB isn't always a s/w demux - and as we can't know which case it is\n+ * when looking at the notification, we have to use the slow lookup for all of\n+ * them). 
NB, it's possible to have multiple FQ objects refer to the same FQID\n+ * (though at most one of them should be the consumer), so this table isn't for\n+ * all FQs - FQs are added when retirement commands are issued, and removed when\n+ * they complete, which also massively reduces the size of this table.\n+ */\n+IMPLEMENT_DPAA_RBTREE(fqtree, struct qman_fq, node, fqid);\n+/*\n+ * This is what everything can wait on, even if it migrates to a different cpu\n+ * to the one whose affine portal it is waiting on.\n+ */\n+static DECLARE_WAIT_QUEUE_HEAD(affine_queue);\n+\n+static inline int table_push_fq(struct qman_portal *p, struct qman_fq *fq)\n+{\n+\tint ret = fqtree_push(&p->retire_table, fq);\n+\n+\tif (ret)\n+\t\tpr_err(\"ERROR: double FQ-retirement %d\\n\", fq->fqid);\n+\treturn ret;\n+}\n+\n+static inline void table_del_fq(struct qman_portal *p, struct qman_fq *fq)\n+{\n+\tfqtree_del(&p->retire_table, fq);\n+}\n+\n+static inline struct qman_fq *table_find_fq(struct qman_portal *p, u32 fqid)\n+{\n+\treturn fqtree_find(&p->retire_table, fqid);\n+}\n+\n+static inline void cpu_to_hw_fqd(struct qm_fqd *fqd)\n+{\n+\t/* Byteswap the FQD to HW format */\n+\tfqd->fq_ctrl = cpu_to_be16(fqd->fq_ctrl);\n+\tfqd->dest_wq = cpu_to_be16(fqd->dest_wq);\n+\tfqd->ics_cred = cpu_to_be16(fqd->ics_cred);\n+\tfqd->context_b = cpu_to_be32(fqd->context_b);\n+\tfqd->context_a.opaque = cpu_to_be64(fqd->context_a.opaque);\n+\tfqd->opaque_td = cpu_to_be16(fqd->opaque_td);\n+}\n+\n+static inline void hw_fqd_to_cpu(struct qm_fqd *fqd)\n+{\n+\t/* Byteswap the FQD to CPU format */\n+\tfqd->fq_ctrl = be16_to_cpu(fqd->fq_ctrl);\n+\tfqd->dest_wq = be16_to_cpu(fqd->dest_wq);\n+\tfqd->ics_cred = be16_to_cpu(fqd->ics_cred);\n+\tfqd->context_b = be32_to_cpu(fqd->context_b);\n+\tfqd->context_a.opaque = be64_to_cpu(fqd->context_a.opaque);\n+}\n+\n+static inline void cpu_to_hw_fd(struct qm_fd *fd)\n+{\n+\tfd->addr = cpu_to_be40(fd->addr);\n+\tfd->status = cpu_to_be32(fd->status);\n+\tfd->opaque = cpu_to_be32(fd->opaque);\n+}\n+\n+static inline void hw_fd_to_cpu(struct qm_fd *fd)\n+{\n+\tfd->addr = be40_to_cpu(fd->addr);\n+\tfd->status = be32_to_cpu(fd->status);\n+\tfd->opaque = be32_to_cpu(fd->opaque);\n+}\n+\n+/* In the case that slow- and fast-path handling are both done by qman_poll()\n+ * (ie. because there is no interrupt handling), we ought to balance how often\n+ * we do the fast-path poll versus the slow-path poll. We'll use two decrementer\n+ * sources, so we call the fast poll 'n' times before calling the slow poll\n+ * once. The idle decrementer constant is used when the last slow-poll detected\n+ * no work to do, and the busy decrementer constant when the last slow-poll had\n+ * work to do.\n+ */\n+#define SLOW_POLL_IDLE   1000\n+#define SLOW_POLL_BUSY   10\n+static u32 __poll_portal_slow(struct qman_portal *p, u32 is);\n+static inline unsigned int __poll_portal_fast(struct qman_portal *p,\n+\t\t\t\t\t      unsigned int poll_limit);\n+\n+/* Portal interrupt handler */\n+static irqreturn_t portal_isr(__always_unused int irq, void *ptr)\n+{\n+\tstruct qman_portal *p = ptr;\n+\t/*\n+\t * The CSCI/CCSCI source is cleared inside __poll_portal_slow(), because\n+\t * it could race against a Query Congestion State command also given\n+\t * as part of the handling of this interrupt source. 
We mustn't\n+\t * clear it a second time in this top-level function.\n+\t */\n+\tu32 clear = QM_DQAVAIL_MASK | (p->irq_sources &\n+\t\t~(QM_PIRQ_CSCI | QM_PIRQ_CCSCI));\n+\tu32 is = qm_isr_status_read(&p->p) & p->irq_sources;\n+\t/* DQRR-handling if it's interrupt-driven */\n+\tif (is & QM_PIRQ_DQRI)\n+\t\t__poll_portal_fast(p, FSL_QMAN_POLL_LIMIT);\n+\t/* Handling of anything else that's interrupt-driven */\n+\tclear |= __poll_portal_slow(p, is);\n+\tqm_isr_status_clear(&p->p, clear);\n+\treturn IRQ_HANDLED;\n+}\n+\n+/* This inner version is used privately by qman_create_affine_portal(), as well\n+ * as by the exported qman_stop_dequeues().\n+ */\n+static inline void qman_stop_dequeues_ex(struct qman_portal *p)\n+{\n+\tif (!(p->dqrr_disable_ref++))\n+\t\tqm_dqrr_set_maxfill(&p->p, 0);\n+}\n+\n+static int drain_mr_fqrni(struct qm_portal *p)\n+{\n+\tconst struct qm_mr_entry *msg;\n+loop:\n+\tmsg = qm_mr_current(p);\n+\tif (!msg) {\n+\t\t/*\n+\t\t * if MR was full and h/w had other FQRNI entries to produce, we\n+\t\t * need to allow it time to produce those entries once the\n+\t\t * existing entries are consumed. A worst-case situation\n+\t\t * (fully-loaded system) means h/w sequencers may have to do 3-4\n+\t\t * other things before servicing the portal's MR pump, each of\n+\t\t * which (if slow) may take ~50 qman cycles (which is ~200\n+\t\t * processor cycles). So rounding up and then multiplying this\n+\t\t * worst-case estimate by a factor of 10, just to be\n+\t\t * ultra-paranoid, goes as high as 10,000 cycles. NB, we consume\n+\t\t * one entry at a time, so h/w has an opportunity to produce new\n+\t\t * entries well before the ring has been fully consumed, so\n+\t\t * we're being *really* paranoid here.\n+\t\t */\n+\t\tu64 now, then = mfatb();\n+\n+\t\tdo {\n+\t\t\tnow = mfatb();\n+\t\t} while ((then + 10000) > now);\n+\t\tmsg = qm_mr_current(p);\n+\t\tif (!msg)\n+\t\t\treturn 0;\n+\t}\n+\tif ((msg->verb & QM_MR_VERB_TYPE_MASK) != QM_MR_VERB_FQRNI) {\n+\t\t/* We aren't draining anything but FQRNIs */\n+\t\tpr_err(\"Found verb 0x%x in MR\\n\", msg->verb);\n+\t\treturn -1;\n+\t}\n+\tqm_mr_next(p);\n+\tqm_mr_cci_consume(p, 1);\n+\tgoto loop;\n+}\n+\n+static inline int qm_eqcr_init(struct qm_portal *portal,\n+\t\t\t       enum qm_eqcr_pmode pmode,\n+\t\t\t       unsigned int eq_stash_thresh,\n+\t\t\t       int eq_stash_prio)\n+{\n+\t/* This use of 'register', as well as all other occurrences, is because\n+\t * it has been observed to generate much faster code with gcc than is\n+\t * otherwise the case.\n+\t */\n+\tregister struct qm_eqcr *eqcr = &portal->eqcr;\n+\tu32 cfg;\n+\tu8 pi;\n+\n+\teqcr->ring = portal->addr.ce + QM_CL_EQCR;\n+\teqcr->ci = qm_in(EQCR_CI_CINH) & (QM_EQCR_SIZE - 1);\n+\tqm_cl_invalidate(EQCR_CI);\n+\tpi = qm_in(EQCR_PI_CINH) & (QM_EQCR_SIZE - 1);\n+\teqcr->cursor = eqcr->ring + pi;\n+\teqcr->vbit = (qm_in(EQCR_PI_CINH) & QM_EQCR_SIZE) ?\n+\t\t\tQM_EQCR_VERB_VBIT : 0;\n+\teqcr->available = QM_EQCR_SIZE - 1 -\n+\t\t\tqm_cyc_diff(QM_EQCR_SIZE, eqcr->ci, pi);\n+\teqcr->ithresh = qm_in(EQCR_ITR);\n+#ifdef RTE_LIBRTE_DPAA_CHECKING\n+\teqcr->busy = 0;\n+\teqcr->pmode = pmode;\n+#endif\n+\tcfg = (qm_in(CFG) & 0x00ffffff) |\n+\t\t(eq_stash_thresh << 28) | /* QCSP_CFG: EST */\n+\t\t(eq_stash_prio << 26)\t| /* QCSP_CFG: EP */\n+\t\t((pmode & 0x3) << 24);\t/* QCSP_CFG::EPM */\n+\tqm_out(CFG, cfg);\n+\treturn 0;\n+}\n+\n+static inline void qm_eqcr_finish(struct qm_portal *portal)\n+{\n+\tregister struct qm_eqcr *eqcr = &portal->eqcr;\n+\tu8 pi, ci;\n+\tu32 cfg;\n+\n+\t/*\n+\t * 
Disable EQCI stashing because the QMan only\n+\t * presents the value it previously stashed to\n+\t * maintain coherency.  Setting the stash threshold\n+\t * to 1 then 0 ensures that QMan has resyncronized\n+\t * its internal copy so that the portal is clean\n+\t * when it is reinitialized in the future\n+\t */\n+\tcfg = (qm_in(CFG) & 0x0fffffff) |\n+\t\t(1 << 28); /* QCSP_CFG: EST */\n+\tqm_out(CFG, cfg);\n+\tcfg &= 0x0fffffff; /* stash threshold = 0 */\n+\tqm_out(CFG, cfg);\n+\n+\tpi = qm_in(EQCR_PI_CINH) & (QM_EQCR_SIZE - 1);\n+\tci = qm_in(EQCR_CI_CINH) & (QM_EQCR_SIZE - 1);\n+\n+\t/* Refresh EQCR CI cache value */\n+\tqm_cl_invalidate(EQCR_CI);\n+\teqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);\n+\n+\tDPAA_ASSERT(!eqcr->busy);\n+\tif (pi != EQCR_PTR2IDX(eqcr->cursor))\n+\t\tpr_crit(\"loosing uncommitted EQCR entries\\n\");\n+\tif (ci != eqcr->ci)\n+\t\tpr_crit(\"missing existing EQCR completions\\n\");\n+\tif (eqcr->ci != EQCR_PTR2IDX(eqcr->cursor))\n+\t\tpr_crit(\"EQCR destroyed unquiesced\\n\");\n+}\n+\n+static inline int qm_dqrr_init(struct qm_portal *portal,\n+\t\t\t__maybe_unused const struct qm_portal_config *config,\n+\t\t\tenum qm_dqrr_dmode dmode,\n+\t\t\t__maybe_unused enum qm_dqrr_pmode pmode,\n+\t\t\tenum qm_dqrr_cmode cmode, u8 max_fill)\n+{\n+\tregister struct qm_dqrr *dqrr = &portal->dqrr;\n+\tu32 cfg;\n+\n+\t/* Make sure the DQRR will be idle when we enable */\n+\tqm_out(DQRR_SDQCR, 0);\n+\tqm_out(DQRR_VDQCR, 0);\n+\tqm_out(DQRR_PDQCR, 0);\n+\tdqrr->ring = portal->addr.ce + QM_CL_DQRR;\n+\tdqrr->pi = qm_in(DQRR_PI_CINH) & (QM_DQRR_SIZE - 1);\n+\tdqrr->ci = qm_in(DQRR_CI_CINH) & (QM_DQRR_SIZE - 1);\n+\tdqrr->cursor = dqrr->ring + dqrr->ci;\n+\tdqrr->fill = qm_cyc_diff(QM_DQRR_SIZE, dqrr->ci, dqrr->pi);\n+\tdqrr->vbit = (qm_in(DQRR_PI_CINH) & QM_DQRR_SIZE) ?\n+\t\t\tQM_DQRR_VERB_VBIT : 0;\n+\tdqrr->ithresh = qm_in(DQRR_ITR);\n+#ifdef RTE_LIBRTE_DPAA_CHECKING\n+\tdqrr->dmode = dmode;\n+\tdqrr->pmode = pmode;\n+\tdqrr->cmode = cmode;\n+#endif\n+\t/* Invalidate every ring entry before beginning */\n+\tfor (cfg = 0; cfg < QM_DQRR_SIZE; cfg++)\n+\t\tdccivac(qm_cl(dqrr->ring, cfg));\n+\tcfg = (qm_in(CFG) & 0xff000f00) |\n+\t\t((max_fill & (QM_DQRR_SIZE - 1)) << 20) | /* DQRR_MF */\n+\t\t((dmode & 1) << 18) |\t\t\t/* DP */\n+\t\t((cmode & 3) << 16) |\t\t\t/* DCM */\n+\t\t0xa0 |\t\t\t\t\t/* RE+SE */\n+\t\t(0 ? 0x40 : 0) |\t\t\t/* Ignore RP */\n+\t\t(0 ? 0x10 : 0);\t\t\t\t/* Ignore SP */\n+\tqm_out(CFG, cfg);\n+\tqm_dqrr_set_maxfill(portal, max_fill);\n+\treturn 0;\n+}\n+\n+static inline void qm_dqrr_finish(struct qm_portal *portal)\n+{\n+\t__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;\n+#ifdef RTE_LIBRTE_DPAA_CHECKING\n+\tif ((dqrr->cmode != qm_dqrr_cdc) &&\n+\t    (dqrr->ci != DQRR_PTR2IDX(dqrr->cursor)))\n+\t\tpr_crit(\"Ignoring completed DQRR entries\\n\");\n+#endif\n+}\n+\n+static inline int qm_mr_init(struct qm_portal *portal,\n+\t\t\t     __maybe_unused enum qm_mr_pmode pmode,\n+\t\t\t     enum qm_mr_cmode cmode)\n+{\n+\tregister struct qm_mr *mr = &portal->mr;\n+\tu32 cfg;\n+\n+\tmr->ring = portal->addr.ce + QM_CL_MR;\n+\tmr->pi = qm_in(MR_PI_CINH) & (QM_MR_SIZE - 1);\n+\tmr->ci = qm_in(MR_CI_CINH) & (QM_MR_SIZE - 1);\n+\tmr->cursor = mr->ring + mr->ci;\n+\tmr->fill = qm_cyc_diff(QM_MR_SIZE, mr->ci, mr->pi);\n+\tmr->vbit = (qm_in(MR_PI_CINH) & QM_MR_SIZE) ? 
QM_MR_VERB_VBIT : 0;\n+\tmr->ithresh = qm_in(MR_ITR);\n+#ifdef RTE_LIBRTE_DPAA_CHECKING\n+\tmr->pmode = pmode;\n+\tmr->cmode = cmode;\n+#endif\n+\tcfg = (qm_in(CFG) & 0xfffff0ff) |\n+\t\t((cmode & 1) << 8);\t\t/* QCSP_CFG:MM */\n+\tqm_out(CFG, cfg);\n+\treturn 0;\n+}\n+\n+static inline void qm_mr_pvb_update(struct qm_portal *portal)\n+{\n+\tregister struct qm_mr *mr = &portal->mr;\n+\tconst struct qm_mr_entry *res = qm_cl(mr->ring, mr->pi);\n+\n+\tDPAA_ASSERT(mr->pmode == qm_mr_pvb);\n+\t/* when accessing 'verb', use __raw_readb() to ensure that compiler\n+\t * inlining doesn't try to optimise out \"excess reads\".\n+\t */\n+\tif ((__raw_readb(&res->verb) & QM_MR_VERB_VBIT) == mr->vbit) {\n+\t\tmr->pi = (mr->pi + 1) & (QM_MR_SIZE - 1);\n+\t\tif (!mr->pi)\n+\t\t\tmr->vbit ^= QM_MR_VERB_VBIT;\n+\t\tmr->fill++;\n+\t\tres = MR_INC(res);\n+\t}\n+\tdcbit_ro(res);\n+}\n+\n+static inline\n+struct qman_portal *qman_create_portal(\n+\t\t\tstruct qman_portal *portal,\n+\t\t\t      const struct qm_portal_config *c,\n+\t\t\t      const struct qman_cgrs *cgrs)\n+{\n+\tstruct qm_portal *p;\n+\tchar buf[16];\n+\tint ret;\n+\tu32 isdr;\n+\n+\tp = &portal->p;\n+\n+\tportal->use_eqcr_ci_stashing = ((qman_ip_rev >= QMAN_REV30) ? 1 : 0);\n+\t/*\n+\t * prep the low-level portal struct with the mapped addresses from the\n+\t * config, everything that follows depends on it and \"config\" is more\n+\t * for (de)reference\n+\t */\n+\tp->addr.ce = c->addr_virt[DPAA_PORTAL_CE];\n+\tp->addr.ci = c->addr_virt[DPAA_PORTAL_CI];\n+\t/*\n+\t * If CI-stashing is used, the current defaults use a threshold of 3,\n+\t * and stash with high-than-DQRR priority.\n+\t */\n+\tif (qm_eqcr_init(p, qm_eqcr_pvb,\n+\t\t\t portal->use_eqcr_ci_stashing ? 3 : 0, 1)) {\n+\t\tpr_err(\"Qman EQCR initialisation failed\\n\");\n+\t\tgoto fail_eqcr;\n+\t}\n+\tif (qm_dqrr_init(p, c, qm_dqrr_dpush, qm_dqrr_pvb,\n+\t\t\t qm_dqrr_cdc, DQRR_MAXFILL)) {\n+\t\tpr_err(\"Qman DQRR initialisation failed\\n\");\n+\t\tgoto fail_dqrr;\n+\t}\n+\tif (qm_mr_init(p, qm_mr_pvb, qm_mr_cci)) {\n+\t\tpr_err(\"Qman MR initialisation failed\\n\");\n+\t\tgoto fail_mr;\n+\t}\n+\tif (qm_mc_init(p)) {\n+\t\tpr_err(\"Qman MC initialisation failed\\n\");\n+\t\tgoto fail_mc;\n+\t}\n+\n+\t/* static interrupt-gating controls */\n+\tqm_dqrr_set_ithresh(p, 0);\n+\tqm_mr_set_ithresh(p, 0);\n+\tqm_isr_set_iperiod(p, 0);\n+\tportal->cgrs = kmalloc(2 * sizeof(*cgrs), GFP_KERNEL);\n+\tif (!portal->cgrs)\n+\t\tgoto fail_cgrs;\n+\t/* initial snapshot is no-depletion */\n+\tqman_cgrs_init(&portal->cgrs[1]);\n+\tif (cgrs)\n+\t\tportal->cgrs[0] = *cgrs;\n+\telse\n+\t\t/* if the given mask is NULL, assume all CGRs can be seen */\n+\t\tqman_cgrs_fill(&portal->cgrs[0]);\n+\tINIT_LIST_HEAD(&portal->cgr_cbs);\n+\tspin_lock_init(&portal->cgr_lock);\n+\tportal->bits = 0;\n+\tportal->slowpoll = 0;\n+\tportal->sdqcr = QM_SDQCR_SOURCE_CHANNELS | QM_SDQCR_COUNT_UPTO3 |\n+\t\t\tQM_SDQCR_DEDICATED_PRECEDENCE | QM_SDQCR_TYPE_PRIO_QOS |\n+\t\t\tQM_SDQCR_TOKEN_SET(0xab) | QM_SDQCR_CHANNELS_DEDICATED;\n+\tportal->dqrr_disable_ref = 0;\n+\tportal->cb_dc_ern = NULL;\n+\tsprintf(buf, \"qportal-%d\", c->channel);\n+\tdpa_rbtree_init(&portal->retire_table);\n+\tisdr = 0xffffffff;\n+\tqm_isr_disable_write(p, isdr);\n+\tportal->irq_sources = 0;\n+\tqm_isr_enable_write(p, portal->irq_sources);\n+\tqm_isr_status_clear(p, 0xffffffff);\n+\tsnprintf(portal->irqname, MAX_IRQNAME, IRQNAME, c->cpu);\n+\tif (request_irq(c->irq, portal_isr, 0, portal->irqname,\n+\t\t\tportal)) {\n+\t\tpr_err(\"request_irq() 
failed\\n\");\n+\t\tgoto fail_irq;\n+\t}\n+\n+\t/* Need EQCR to be empty before continuing */\n+\tisdr &= ~QM_PIRQ_EQCI;\n+\tqm_isr_disable_write(p, isdr);\n+\tret = qm_eqcr_get_fill(p);\n+\tif (ret) {\n+\t\tpr_err(\"Qman EQCR unclean\\n\");\n+\t\tgoto fail_eqcr_empty;\n+\t}\n+\tisdr &= ~(QM_PIRQ_DQRI | QM_PIRQ_MRI);\n+\tqm_isr_disable_write(p, isdr);\n+\tif (qm_dqrr_current(p)) {\n+\t\tpr_err(\"Qman DQRR unclean\\n\");\n+\t\tqm_dqrr_cdc_consume_n(p, 0xffff);\n+\t}\n+\tif (qm_mr_current(p) && drain_mr_fqrni(p)) {\n+\t\t/* special handling, drain just in case it's a few FQRNIs */\n+\t\tif (drain_mr_fqrni(p))\n+\t\t\tgoto fail_dqrr_mr_empty;\n+\t}\n+\t/* Success */\n+\tportal->config = c;\n+\tqm_isr_disable_write(p, 0);\n+\tqm_isr_uninhibit(p);\n+\t/* Write a sane SDQCR */\n+\tqm_dqrr_sdqcr_set(p, portal->sdqcr);\n+\treturn portal;\n+fail_dqrr_mr_empty:\n+fail_eqcr_empty:\n+\tfree_irq(c->irq, portal);\n+fail_irq:\n+\tkfree(portal->cgrs);\n+\tspin_lock_destroy(&portal->cgr_lock);\n+fail_cgrs:\n+\tqm_mc_finish(p);\n+fail_mc:\n+\tqm_mr_finish(p);\n+fail_mr:\n+\tqm_dqrr_finish(p);\n+fail_dqrr:\n+\tqm_eqcr_finish(p);\n+fail_eqcr:\n+\treturn NULL;\n+}\n+\n+struct qman_portal *qman_create_affine_portal(const struct qm_portal_config *c,\n+\t\t\t\t\t      const struct qman_cgrs *cgrs)\n+{\n+\tstruct qman_portal *res;\n+\tstruct qman_portal *portal = get_affine_portal();\n+\t/* A criteria for calling this function (from qman_driver.c) is that\n+\t * we're already affine to the cpu and won't schedule onto another cpu.\n+\t */\n+\n+\tres = qman_create_portal(portal, c, cgrs);\n+\tif (res) {\n+\t\tspin_lock(&affine_mask_lock);\n+\t\tCPU_SET(c->cpu, &affine_mask);\n+\t\taffine_channels[c->cpu] =\n+\t\t\tc->channel;\n+\t\tspin_unlock(&affine_mask_lock);\n+\t}\n+\treturn res;\n+}\n+\n+static inline\n+void qman_destroy_portal(struct qman_portal *qm)\n+{\n+\tconst struct qm_portal_config *pcfg;\n+\n+\t/* Stop dequeues on the portal */\n+\tqm_dqrr_sdqcr_set(&qm->p, 0);\n+\n+\t/*\n+\t * NB we do this to \"quiesce\" EQCR. 
If we add enqueue-completions or\n+\t * something related to QM_PIRQ_EQCI, this may need fixing.\n+\t * Also, due to the prefetching model used for CI updates in the enqueue\n+\t * path, this update will only invalidate the CI cacheline *after*\n+\t * working on it, so we need to call this twice to ensure a full update\n+\t * irrespective of where the enqueue processing was at when the teardown\n+\t * began.\n+\t */\n+\tqm_eqcr_cce_update(&qm->p);\n+\tqm_eqcr_cce_update(&qm->p);\n+\tpcfg = qm->config;\n+\n+\tfree_irq(pcfg->irq, qm);\n+\n+\tkfree(qm->cgrs);\n+\tqm_mc_finish(&qm->p);\n+\tqm_mr_finish(&qm->p);\n+\tqm_dqrr_finish(&qm->p);\n+\tqm_eqcr_finish(&qm->p);\n+\n+\tqm->config = NULL;\n+\n+\tspin_lock_destroy(&qm->cgr_lock);\n+}\n+\n+const struct qm_portal_config *qman_destroy_affine_portal(void)\n+{\n+\t/* We don't want to redirect if we're a slave, use \"raw\" */\n+\tstruct qman_portal *qm = get_affine_portal();\n+\tconst struct qm_portal_config *pcfg;\n+\tint cpu;\n+\n+\tpcfg = qm->config;\n+\tcpu = pcfg->cpu;\n+\n+\tqman_destroy_portal(qm);\n+\n+\tspin_lock(&affine_mask_lock);\n+\tCPU_CLR(cpu, &affine_mask);\n+\tspin_unlock(&affine_mask_lock);\n+\treturn pcfg;\n+}\n+\n+int qman_get_portal_index(void)\n+{\n+\tstruct qman_portal *p = get_affine_portal();\n+\treturn p->config->index;\n+}\n+\n+/* Inline helper to reduce nesting in __poll_portal_slow() */\n+static inline void fq_state_change(struct qman_portal *p, struct qman_fq *fq,\n+\t\t\t\t   const struct qm_mr_entry *msg, u8 verb)\n+{\n+\tFQLOCK(fq);\n+\tswitch (verb) {\n+\tcase QM_MR_VERB_FQRL:\n+\t\tDPAA_ASSERT(fq_isset(fq, QMAN_FQ_STATE_ORL));\n+\t\tfq_clear(fq, QMAN_FQ_STATE_ORL);\n+\t\ttable_del_fq(p, fq);\n+\t\tbreak;\n+\tcase QM_MR_VERB_FQRN:\n+\t\tDPAA_ASSERT((fq->state == qman_fq_state_parked) ||\n+\t\t\t    (fq->state == qman_fq_state_sched));\n+\t\tDPAA_ASSERT(fq_isset(fq, QMAN_FQ_STATE_CHANGING));\n+\t\tfq_clear(fq, QMAN_FQ_STATE_CHANGING);\n+\t\tif (msg->fq.fqs & QM_MR_FQS_NOTEMPTY)\n+\t\t\tfq_set(fq, QMAN_FQ_STATE_NE);\n+\t\tif (msg->fq.fqs & QM_MR_FQS_ORLPRESENT)\n+\t\t\tfq_set(fq, QMAN_FQ_STATE_ORL);\n+\t\telse\n+\t\t\ttable_del_fq(p, fq);\n+\t\tfq->state = qman_fq_state_retired;\n+\t\tbreak;\n+\tcase QM_MR_VERB_FQPN:\n+\t\tDPAA_ASSERT(fq->state == qman_fq_state_sched);\n+\t\tDPAA_ASSERT(fq_isclear(fq, QMAN_FQ_STATE_CHANGING));\n+\t\tfq->state = qman_fq_state_parked;\n+\t}\n+\tFQUNLOCK(fq);\n+}\n+\n+static u32 __poll_portal_slow(struct qman_portal *p, u32 is)\n+{\n+\tconst struct qm_mr_entry *msg;\n+\tstruct qm_mr_entry swapped_msg;\n+\n+\tif (is & QM_PIRQ_CSCI) {\n+\t\tstruct qman_cgrs rr, c;\n+\t\tstruct qm_mc_result *mcr;\n+\t\tstruct qman_cgr *cgr;\n+\n+\t\tspin_lock(&p->cgr_lock);\n+\t\t/*\n+\t\t * The CSCI bit must be cleared _before_ issuing the\n+\t\t * Query Congestion State command, to ensure that a long\n+\t\t * CGR State Change callback cannot miss an intervening\n+\t\t * state change.\n+\t\t */\n+\t\tqm_isr_status_clear(&p->p, QM_PIRQ_CSCI);\n+\t\tqm_mc_start(&p->p);\n+\t\tqm_mc_commit(&p->p, QM_MCC_VERB_QUERYCONGESTION);\n+\t\twhile (!(mcr = qm_mc_result(&p->p)))\n+\t\t\tcpu_relax();\n+\t\t/* mask out the ones I'm not interested in */\n+\t\tqman_cgrs_and(&rr, (const struct qman_cgrs *)\n+\t\t\t&mcr->querycongestion.state, &p->cgrs[0]);\n+\t\t/* check previous snapshot for delta, enter/exit congestion */\n+\t\tqman_cgrs_xor(&c, &rr, &p->cgrs[1]);\n+\t\t/* update snapshot */\n+\t\tqman_cgrs_cp(&p->cgrs[1], &rr);\n+\t\t/* Invoke callback */\n+\t\tlist_for_each_entry(cgr, &p->cgr_cbs, node)\n+\t\t\tif 
(cgr->cb && qman_cgrs_get(&c, cgr->cgrid))\n+\t\t\t\tcgr->cb(p, cgr, qman_cgrs_get(&rr, cgr->cgrid));\n+\t\tspin_unlock(&p->cgr_lock);\n+\t}\n+\n+\tif (is & QM_PIRQ_EQRI) {\n+\t\tqm_eqcr_cce_update(&p->p);\n+\t\tqm_eqcr_set_ithresh(&p->p, 0);\n+\t\twake_up(&affine_queue);\n+\t}\n+\n+\tif (is & QM_PIRQ_MRI) {\n+\t\tstruct qman_fq *fq;\n+\t\tu8 verb, num = 0;\n+mr_loop:\n+\t\tqm_mr_pvb_update(&p->p);\n+\t\tmsg = qm_mr_current(&p->p);\n+\t\tif (!msg)\n+\t\t\tgoto mr_done;\n+\t\tswapped_msg = *msg;\n+\t\thw_fd_to_cpu(&swapped_msg.ern.fd);\n+\t\tverb = msg->verb & QM_MR_VERB_TYPE_MASK;\n+\t\t/* The message is a software ERN iff the 0x20 bit is set */\n+\t\tif (verb & 0x20) {\n+\t\t\tswitch (verb) {\n+\t\t\tcase QM_MR_VERB_FQRNI:\n+\t\t\t\t/* nada, we drop FQRNIs on the floor */\n+\t\t\t\tbreak;\n+\t\t\tcase QM_MR_VERB_FQRN:\n+\t\t\tcase QM_MR_VERB_FQRL:\n+\t\t\t\t/* Lookup in the retirement table */\n+\t\t\t\tfq = table_find_fq(p,\n+\t\t\t\t\t\t   be32_to_cpu(msg->fq.fqid));\n+\t\t\t\tBUG_ON(!fq);\n+\t\t\t\tfq_state_change(p, fq, &swapped_msg, verb);\n+\t\t\t\tif (fq->cb.fqs)\n+\t\t\t\t\tfq->cb.fqs(p, fq, &swapped_msg);\n+\t\t\t\tbreak;\n+\t\t\tcase QM_MR_VERB_FQPN:\n+\t\t\t\t/* Parked */\n+\t\t\t\tfq = (void *)(uintptr_t)\n+\t\t\t\t\tbe32_to_cpu(msg->fq.contextB);\n+\t\t\t\tfq_state_change(p, fq, msg, verb);\n+\t\t\t\tif (fq->cb.fqs)\n+\t\t\t\t\tfq->cb.fqs(p, fq, &swapped_msg);\n+\t\t\t\tbreak;\n+\t\t\tcase QM_MR_VERB_DC_ERN:\n+\t\t\t\t/* DCP ERN */\n+\t\t\t\tif (p->cb_dc_ern)\n+\t\t\t\t\tp->cb_dc_ern(p, msg);\n+\t\t\t\telse if (cb_dc_ern)\n+\t\t\t\t\tcb_dc_ern(p, msg);\n+\t\t\t\telse {\n+\t\t\t\t\tstatic int warn_once;\n+\n+\t\t\t\t\tif (!warn_once) {\n+\t\t\t\t\t\tpr_crit(\"Leaking DCP ERNs!\\n\");\n+\t\t\t\t\t\twarn_once = 1;\n+\t\t\t\t\t}\n+\t\t\t\t}\n+\t\t\t\tbreak;\n+\t\t\tdefault:\n+\t\t\t\tpr_crit(\"Invalid MR verb 0x%02x\\n\", verb);\n+\t\t\t}\n+\t\t} else {\n+\t\t\t/* Its a software ERN */\n+\t\t\tfq = (void *)(uintptr_t)be32_to_cpu(msg->ern.tag);\n+\t\t\tfq->cb.ern(p, fq, &swapped_msg);\n+\t\t}\n+\t\tnum++;\n+\t\tqm_mr_next(&p->p);\n+\t\tgoto mr_loop;\n+mr_done:\n+\t\tqm_mr_cci_consume(&p->p, num);\n+\t}\n+\t/*\n+\t * QM_PIRQ_CSCI/CCSCI has already been cleared, as part of its specific\n+\t * processing. If that interrupt source has meanwhile been re-asserted,\n+\t * we mustn't clear it here (or in the top-level interrupt handler).\n+\t */\n+\treturn is & (QM_PIRQ_EQCI | QM_PIRQ_EQRI | QM_PIRQ_MRI);\n+}\n+\n+/*\n+ * remove some slowish-path stuff from the \"fast path\" and make sure it isn't\n+ * inlined.\n+ */\n+static noinline void clear_vdqcr(struct qman_portal *p, struct qman_fq *fq)\n+{\n+\tp->vdqcr_owned = NULL;\n+\tFQLOCK(fq);\n+\tfq_clear(fq, QMAN_FQ_STATE_VDQCR);\n+\tFQUNLOCK(fq);\n+\twake_up(&affine_queue);\n+}\n+\n+/*\n+ * The only states that would conflict with other things if they ran at the\n+ * same time on the same cpu are:\n+ *\n+ *   (i) setting/clearing vdqcr_owned, and\n+ *  (ii) clearing the NE (Not Empty) flag.\n+ *\n+ * Both are safe. Because;\n+ *\n+ *   (i) this clearing can only occur after qman_set_vdq() has set the\n+ *\t vdqcr_owned field (which it does before setting VDQCR), and\n+ *\t qman_volatile_dequeue() blocks interrupts and preemption while this is\n+ *\t done so that we can't interfere.\n+ *  (ii) the NE flag is only cleared after qman_retire_fq() has set it, and as\n+ *\t with (i) that API prevents us from interfering until it's safe.\n+ *\n+ * The good thing is that qman_set_vdq() and qman_retire_fq() run far\n+ * less frequently (ie. 
per-FQ) than __poll_portal_fast() does, so the nett\n+ * advantage comes from this function not having to \"lock\" anything at all.\n+ *\n+ * Note also that the callbacks are invoked at points which are safe against the\n+ * above potential conflicts, but that this function itself is not re-entrant\n+ * (this is because the function tracks one end of each FIFO in the portal and\n+ * we do *not* want to lock that). So the consequence is that it is safe for\n+ * user callbacks to call into any QMan API.\n+ */\n+static inline unsigned int __poll_portal_fast(struct qman_portal *p,\n+\t\t\t\t\t      unsigned int poll_limit)\n+{\n+\tconst struct qm_dqrr_entry *dq;\n+\tstruct qman_fq *fq;\n+\tenum qman_cb_dqrr_result res;\n+\tunsigned int limit = 0;\n+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__\n+\tstruct qm_dqrr_entry *shadow;\n+#endif\n+\tdo {\n+\t\tqm_dqrr_pvb_update(&p->p);\n+\t\tdq = qm_dqrr_current(&p->p);\n+\t\tif (!dq)\n+\t\t\tbreak;\n+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__\n+\t/* If running on an LE system the fields of the\n+\t * dequeue entry must be swapper.  Because the\n+\t * QMan HW will ignore writes the DQRR entry is\n+\t * copied and the index stored within the copy\n+\t */\n+\t\tshadow = &p->shadow_dqrr[DQRR_PTR2IDX(dq)];\n+\t\t*shadow = *dq;\n+\t\tdq = shadow;\n+\t\tshadow->fqid = be32_to_cpu(shadow->fqid);\n+\t\tshadow->contextB = be32_to_cpu(shadow->contextB);\n+\t\tshadow->seqnum = be16_to_cpu(shadow->seqnum);\n+\t\thw_fd_to_cpu(&shadow->fd);\n+#endif\n+\n+\t\tif (dq->stat & QM_DQRR_STAT_UNSCHEDULED) {\n+\t\t\t/*\n+\t\t\t * VDQCR: don't trust context_b as the FQ may have\n+\t\t\t * been configured for h/w consumption and we're\n+\t\t\t * draining it post-retirement.\n+\t\t\t */\n+\t\t\tfq = p->vdqcr_owned;\n+\t\t\t/*\n+\t\t\t * We only set QMAN_FQ_STATE_NE when retiring, so we\n+\t\t\t * only need to check for clearing it when doing\n+\t\t\t * volatile dequeues.  It's one less thing to check\n+\t\t\t * in the critical path (SDQCR).\n+\t\t\t */\n+\t\t\tif (dq->stat & QM_DQRR_STAT_FQ_EMPTY)\n+\t\t\t\tfq_clear(fq, QMAN_FQ_STATE_NE);\n+\t\t\t/*\n+\t\t\t * This is duplicated from the SDQCR code, but we\n+\t\t\t * have stuff to do before *and* after this callback,\n+\t\t\t * and we don't want multiple if()s in the critical\n+\t\t\t * path (SDQCR).\n+\t\t\t */\n+\t\t\tres = fq->cb.dqrr(p, fq, dq);\n+\t\t\tif (res == qman_cb_dqrr_stop)\n+\t\t\t\tbreak;\n+\t\t\t/* Check for VDQCR completion */\n+\t\t\tif (dq->stat & QM_DQRR_STAT_DQCR_EXPIRED)\n+\t\t\t\tclear_vdqcr(p, fq);\n+\t\t} else {\n+\t\t\t/* SDQCR: context_b points to the FQ */\n+\t\t\tfq = (void *)(uintptr_t)dq->contextB;\n+\t\t\t/* Now let the callback do its stuff */\n+\t\t\tres = fq->cb.dqrr(p, fq, dq);\n+\t\t\t/*\n+\t\t\t * The callback can request that we exit without\n+\t\t\t * consuming this entry nor advancing;\n+\t\t\t */\n+\t\t\tif (res == qman_cb_dqrr_stop)\n+\t\t\t\tbreak;\n+\t\t}\n+\t\t/* Interpret 'dq' from a driver perspective. */\n+\t\t/*\n+\t\t * Parking isn't possible unless HELDACTIVE was set. 
NB,\n+\t\t * FORCEELIGIBLE implies HELDACTIVE, so we only need to\n+\t\t * check for HELDACTIVE to cover both.\n+\t\t */\n+\t\tDPAA_ASSERT((dq->stat & QM_DQRR_STAT_FQ_HELDACTIVE) ||\n+\t\t\t    (res != qman_cb_dqrr_park));\n+\t\t/* just means \"skip it, I'll consume it myself later on\" */\n+\t\tif (res != qman_cb_dqrr_defer)\n+\t\t\tqm_dqrr_cdc_consume_1ptr(&p->p, dq,\n+\t\t\t\t\t\t res == qman_cb_dqrr_park);\n+\t\t/* Move forward */\n+\t\tqm_dqrr_next(&p->p);\n+\t\t/*\n+\t\t * Entry processed and consumed, increment our counter.  The\n+\t\t * callback can request that we exit after consuming the\n+\t\t * entry, and we also exit if we reach our processing limit,\n+\t\t * so loop back only if neither of these conditions is met.\n+\t\t */\n+\t} while (++limit < poll_limit && res != qman_cb_dqrr_consume_stop);\n+\n+\treturn limit;\n+}\n+\n+u16 qman_affine_channel(int cpu)\n+{\n+\tif (cpu < 0) {\n+\t\tstruct qman_portal *portal = get_affine_portal();\n+\n+\t\tcpu = portal->config->cpu;\n+\t}\n+\tBUG_ON(!CPU_ISSET(cpu, &affine_mask));\n+\treturn affine_channels[cpu];\n+}\n+\n+struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq)\n+{\n+\tstruct qman_portal *p = get_affine_portal();\n+\tconst struct qm_dqrr_entry *dq;\n+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__\n+\tstruct qm_dqrr_entry *shadow;\n+#endif\n+\n+\tqm_dqrr_pvb_update(&p->p);\n+\tdq = qm_dqrr_current(&p->p);\n+\tif (!dq)\n+\t\treturn NULL;\n+\n+\tif (!(dq->stat & QM_DQRR_STAT_FD_VALID)) {\n+\t\t/* Invalid DQRR - put the portal and consume the DQRR.\n+\t\t * Return NULL to user as no packet is seen.\n+\t\t */\n+\t\tqman_dqrr_consume(fq, (struct qm_dqrr_entry *)dq);\n+\t\treturn NULL;\n+\t}\n+\n+#if __BYTE_ORDER__ == __ORDER_LITTLE_ENDIAN__\n+\tshadow = &p->shadow_dqrr[DQRR_PTR2IDX(dq)];\n+\t*shadow = *dq;\n+\tdq = shadow;\n+\tshadow->fqid = be32_to_cpu(shadow->fqid);\n+\tshadow->contextB = be32_to_cpu(shadow->contextB);\n+\tshadow->seqnum = be16_to_cpu(shadow->seqnum);\n+\thw_fd_to_cpu(&shadow->fd);\n+#endif\n+\n+\tif (dq->stat & QM_DQRR_STAT_FQ_EMPTY)\n+\t\tfq_clear(fq, QMAN_FQ_STATE_NE);\n+\n+\treturn (struct qm_dqrr_entry *)dq;\n+}\n+\n+void qman_dqrr_consume(struct qman_fq *fq,\n+\t\t       struct qm_dqrr_entry *dq)\n+{\n+\tstruct qman_portal *p = get_affine_portal();\n+\n+\tif (dq->stat & QM_DQRR_STAT_DQCR_EXPIRED)\n+\t\tclear_vdqcr(p, fq);\n+\n+\tqm_dqrr_cdc_consume_1ptr(&p->p, dq, 0);\n+\tqm_dqrr_next(&p->p);\n+}\n+\n+int qman_poll_dqrr(unsigned int limit)\n+{\n+\tstruct qman_portal *p = get_affine_portal();\n+\tint ret;\n+\n+\tret = __poll_portal_fast(p, limit);\n+\treturn ret;\n+}\n+\n+void qman_poll(void)\n+{\n+\tstruct qman_portal *p = get_affine_portal();\n+\n+\tif ((~p->irq_sources) & QM_PIRQ_SLOW) {\n+\t\tif (!(p->slowpoll--)) {\n+\t\t\tu32 is = qm_isr_status_read(&p->p) & ~p->irq_sources;\n+\t\t\tu32 active = __poll_portal_slow(p, is);\n+\n+\t\t\tif (active) {\n+\t\t\t\tqm_isr_status_clear(&p->p, active);\n+\t\t\t\tp->slowpoll = SLOW_POLL_BUSY;\n+\t\t\t} else\n+\t\t\t\tp->slowpoll = SLOW_POLL_IDLE;\n+\t\t}\n+\t}\n+\tif ((~p->irq_sources) & QM_PIRQ_DQRI)\n+\t\t__poll_portal_fast(p, FSL_QMAN_POLL_LIMIT);\n+}\n+\n+void qman_stop_dequeues(void)\n+{\n+\tstruct qman_portal *p = get_affine_portal();\n+\n+\tqman_stop_dequeues_ex(p);\n+}\n+\n+void qman_start_dequeues(void)\n+{\n+\tstruct qman_portal *p = get_affine_portal();\n+\n+\tDPAA_ASSERT(p->dqrr_disable_ref > 0);\n+\tif (!(--p->dqrr_disable_ref))\n+\t\tqm_dqrr_set_maxfill(&p->p, DQRR_MAXFILL);\n+}\n+\n+void qman_static_dequeue_add(u32 pools)\n+{\n+\tstruct qman_portal 
*p = get_affine_portal();\n+\n+\tpools &= p->config->pools;\n+\tp->sdqcr |= pools;\n+\tqm_dqrr_sdqcr_set(&p->p, p->sdqcr);\n+}\n+\n+void qman_static_dequeue_del(u32 pools)\n+{\n+\tstruct qman_portal *p = get_affine_portal();\n+\n+\tpools &= p->config->pools;\n+\tp->sdqcr &= ~pools;\n+\tqm_dqrr_sdqcr_set(&p->p, p->sdqcr);\n+}\n+\n+u32 qman_static_dequeue_get(void)\n+{\n+\tstruct qman_portal *p = get_affine_portal();\n+\treturn p->sdqcr;\n+}\n+\n+void qman_dca(struct qm_dqrr_entry *dq, int park_request)\n+{\n+\tstruct qman_portal *p = get_affine_portal();\n+\n+\tqm_dqrr_cdc_consume_1ptr(&p->p, dq, park_request);\n+}\n+\n+/* Frame queue API */\n+static const char *mcr_result_str(u8 result)\n+{\n+\tswitch (result) {\n+\tcase QM_MCR_RESULT_NULL:\n+\t\treturn \"QM_MCR_RESULT_NULL\";\n+\tcase QM_MCR_RESULT_OK:\n+\t\treturn \"QM_MCR_RESULT_OK\";\n+\tcase QM_MCR_RESULT_ERR_FQID:\n+\t\treturn \"QM_MCR_RESULT_ERR_FQID\";\n+\tcase QM_MCR_RESULT_ERR_FQSTATE:\n+\t\treturn \"QM_MCR_RESULT_ERR_FQSTATE\";\n+\tcase QM_MCR_RESULT_ERR_NOTEMPTY:\n+\t\treturn \"QM_MCR_RESULT_ERR_NOTEMPTY\";\n+\tcase QM_MCR_RESULT_PENDING:\n+\t\treturn \"QM_MCR_RESULT_PENDING\";\n+\tcase QM_MCR_RESULT_ERR_BADCOMMAND:\n+\t\treturn \"QM_MCR_RESULT_ERR_BADCOMMAND\";\n+\t}\n+\treturn \"<unknown MCR result>\";\n+}\n+\n+int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq)\n+{\n+\tstruct qm_fqd fqd;\n+\tstruct qm_mcr_queryfq_np np;\n+\tstruct qm_mc_command *mcc;\n+\tstruct qm_mc_result *mcr;\n+\tstruct qman_portal *p;\n+\n+\tif (flags & QMAN_FQ_FLAG_DYNAMIC_FQID) {\n+\t\tint ret = qman_alloc_fqid(&fqid);\n+\n+\t\tif (ret)\n+\t\t\treturn ret;\n+\t}\n+\tspin_lock_init(&fq->fqlock);\n+\tfq->fqid = fqid;\n+\tfq->flags = flags;\n+\tfq->state = qman_fq_state_oos;\n+\tfq->cgr_groupid = 0;\n+\n+\tif (!(flags & QMAN_FQ_FLAG_AS_IS) || (flags & QMAN_FQ_FLAG_NO_MODIFY))\n+\t\treturn 0;\n+\t/* Everything else is AS_IS support */\n+\tp = get_affine_portal();\n+\tmcc = qm_mc_start(&p->p);\n+\tmcc->queryfq.fqid = cpu_to_be32(fqid);\n+\tqm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ);\n+\twhile (!(mcr = qm_mc_result(&p->p)))\n+\t\tcpu_relax();\n+\tDPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_QUERYFQ);\n+\tif (mcr->result != QM_MCR_RESULT_OK) {\n+\t\tpr_err(\"QUERYFQ failed: %s\\n\", mcr_result_str(mcr->result));\n+\t\tgoto err;\n+\t}\n+\tfqd = mcr->queryfq.fqd;\n+\thw_fqd_to_cpu(&fqd);\n+\tmcc = qm_mc_start(&p->p);\n+\tmcc->queryfq_np.fqid = cpu_to_be32(fqid);\n+\tqm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ_NP);\n+\twhile (!(mcr = qm_mc_result(&p->p)))\n+\t\tcpu_relax();\n+\tDPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_QUERYFQ_NP);\n+\tif (mcr->result != QM_MCR_RESULT_OK) {\n+\t\tpr_err(\"QUERYFQ_NP failed: %s\\n\", mcr_result_str(mcr->result));\n+\t\tgoto err;\n+\t}\n+\tnp = mcr->queryfq_np;\n+\t/* Phew, have queryfq and queryfq_np results, stitch together\n+\t * the FQ object from those.\n+\t */\n+\tfq->cgr_groupid = fqd.cgid;\n+\tswitch (np.state & QM_MCR_NP_STATE_MASK) {\n+\tcase QM_MCR_NP_STATE_OOS:\n+\t\tbreak;\n+\tcase QM_MCR_NP_STATE_RETIRED:\n+\t\tfq->state = qman_fq_state_retired;\n+\t\tif (np.frm_cnt)\n+\t\t\tfq_set(fq, QMAN_FQ_STATE_NE);\n+\t\tbreak;\n+\tcase QM_MCR_NP_STATE_TEN_SCHED:\n+\tcase QM_MCR_NP_STATE_TRU_SCHED:\n+\tcase QM_MCR_NP_STATE_ACTIVE:\n+\t\tfq->state = qman_fq_state_sched;\n+\t\tif (np.state & QM_MCR_NP_STATE_R)\n+\t\t\tfq_set(fq, QMAN_FQ_STATE_CHANGING);\n+\t\tbreak;\n+\tcase QM_MCR_NP_STATE_PARKED:\n+\t\tfq->state = qman_fq_state_parked;\n+\t\tbreak;\n+\tdefault:\n+\t\tDPAA_ASSERT(NULL == \"invalid 
FQ state\");\n+\t}\n+\tif (fqd.fq_ctrl & QM_FQCTRL_CGE)\n+\t\tfq->state |= QMAN_FQ_STATE_CGR_EN;\n+\treturn 0;\n+err:\n+\tif (flags & QMAN_FQ_FLAG_DYNAMIC_FQID)\n+\t\tqman_release_fqid(fqid);\n+\treturn -EIO;\n+}\n+\n+void qman_destroy_fq(struct qman_fq *fq, u32 flags __maybe_unused)\n+{\n+\t/*\n+\t * We don't need to lock the FQ as it is a pre-condition that the FQ be\n+\t * quiesced. Instead, run some checks.\n+\t */\n+\tswitch (fq->state) {\n+\tcase qman_fq_state_parked:\n+\t\tDPAA_ASSERT(flags & QMAN_FQ_DESTROY_PARKED);\n+\tcase qman_fq_state_oos:\n+\t\tif (fq_isset(fq, QMAN_FQ_FLAG_DYNAMIC_FQID))\n+\t\t\tqman_release_fqid(fq->fqid);\n+\n+\t\treturn;\n+\tdefault:\n+\t\tbreak;\n+\t}\n+\tDPAA_ASSERT(NULL == \"qman_free_fq() on unquiesced FQ!\");\n+}\n+\n+u32 qman_fq_fqid(struct qman_fq *fq)\n+{\n+\treturn fq->fqid;\n+}\n+\n+void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags)\n+{\n+\tif (state)\n+\t\t*state = fq->state;\n+\tif (flags)\n+\t\t*flags = fq->flags;\n+}\n+\n+int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts)\n+{\n+\tstruct qm_mc_command *mcc;\n+\tstruct qm_mc_result *mcr;\n+\tstruct qman_portal *p;\n+\n+\tu8 res, myverb = (flags & QMAN_INITFQ_FLAG_SCHED) ?\n+\t\tQM_MCC_VERB_INITFQ_SCHED : QM_MCC_VERB_INITFQ_PARKED;\n+\n+\tif ((fq->state != qman_fq_state_oos) &&\n+\t    (fq->state != qman_fq_state_parked))\n+\t\treturn -EINVAL;\n+#ifdef RTE_LIBRTE_DPAA_CHECKING\n+\tif (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))\n+\t\treturn -EINVAL;\n+#endif\n+\tif (opts && (opts->we_mask & QM_INITFQ_WE_OAC)) {\n+\t\t/* And can't be set at the same time as TDTHRESH */\n+\t\tif (opts->we_mask & QM_INITFQ_WE_TDTHRESH)\n+\t\t\treturn -EINVAL;\n+\t}\n+\t/* Issue an INITFQ_[PARKED|SCHED] management command */\n+\tp = get_affine_portal();\n+\tFQLOCK(fq);\n+\tif (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||\n+\t\t     ((fq->state != qman_fq_state_oos) &&\n+\t\t\t\t(fq->state != qman_fq_state_parked)))) {\n+\t\tFQUNLOCK(fq);\n+\t\treturn -EBUSY;\n+\t}\n+\tmcc = qm_mc_start(&p->p);\n+\tif (opts)\n+\t\tmcc->initfq = *opts;\n+\tmcc->initfq.fqid = cpu_to_be32(fq->fqid);\n+\tmcc->initfq.count = 0;\n+\t/*\n+\t * If the FQ does *not* have the TO_DCPORTAL flag, context_b is set as a\n+\t * demux pointer. 
Otherwise, the caller-provided value is allowed to\n+\t * stand, don't overwrite it.\n+\t */\n+\tif (fq_isclear(fq, QMAN_FQ_FLAG_TO_DCPORTAL)) {\n+\t\tdma_addr_t phys_fq;\n+\n+\t\tmcc->initfq.we_mask |= QM_INITFQ_WE_CONTEXTB;\n+\t\tmcc->initfq.fqd.context_b = (u32)(uintptr_t)fq;\n+\t\t/*\n+\t\t *  and the physical address - NB, if the user wasn't trying to\n+\t\t * set CONTEXTA, clear the stashing settings.\n+\t\t */\n+\t\tif (!(mcc->initfq.we_mask & QM_INITFQ_WE_CONTEXTA)) {\n+\t\t\tmcc->initfq.we_mask |= QM_INITFQ_WE_CONTEXTA;\n+\t\t\tmemset(&mcc->initfq.fqd.context_a, 0,\n+\t\t\t       sizeof(mcc->initfq.fqd.context_a));\n+\t\t} else {\n+\t\t\tphys_fq = rte_mem_virt2phy(fq);\n+\t\t\tqm_fqd_stashing_set64(&mcc->initfq.fqd, phys_fq);\n+\t\t}\n+\t}\n+\tif (flags & QMAN_INITFQ_FLAG_LOCAL) {\n+\t\tmcc->initfq.fqd.dest.channel = p->config->channel;\n+\t\tif (!(mcc->initfq.we_mask & QM_INITFQ_WE_DESTWQ)) {\n+\t\t\tmcc->initfq.we_mask |= QM_INITFQ_WE_DESTWQ;\n+\t\t\tmcc->initfq.fqd.dest.wq = 4;\n+\t\t}\n+\t}\n+\tmcc->initfq.we_mask = cpu_to_be16(mcc->initfq.we_mask);\n+\tcpu_to_hw_fqd(&mcc->initfq.fqd);\n+\tqm_mc_commit(&p->p, myverb);\n+\twhile (!(mcr = qm_mc_result(&p->p)))\n+\t\tcpu_relax();\n+\tDPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == myverb);\n+\tres = mcr->result;\n+\tif (res != QM_MCR_RESULT_OK) {\n+\t\tFQUNLOCK(fq);\n+\t\treturn -EIO;\n+\t}\n+\tif (opts) {\n+\t\tif (opts->we_mask & QM_INITFQ_WE_FQCTRL) {\n+\t\t\tif (opts->fqd.fq_ctrl & QM_FQCTRL_CGE)\n+\t\t\t\tfq_set(fq, QMAN_FQ_STATE_CGR_EN);\n+\t\t\telse\n+\t\t\t\tfq_clear(fq, QMAN_FQ_STATE_CGR_EN);\n+\t\t}\n+\t\tif (opts->we_mask & QM_INITFQ_WE_CGID)\n+\t\t\tfq->cgr_groupid = opts->fqd.cgid;\n+\t}\n+\tfq->state = (flags & QMAN_INITFQ_FLAG_SCHED) ?\n+\t\tqman_fq_state_sched : qman_fq_state_parked;\n+\tFQUNLOCK(fq);\n+\treturn 0;\n+}\n+\n+int qman_schedule_fq(struct qman_fq *fq)\n+{\n+\tstruct qm_mc_command *mcc;\n+\tstruct qm_mc_result *mcr;\n+\tstruct qman_portal *p;\n+\n+\tint ret = 0;\n+\tu8 res;\n+\n+\tif (fq->state != qman_fq_state_parked)\n+\t\treturn -EINVAL;\n+#ifdef RTE_LIBRTE_DPAA_CHECKING\n+\tif (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))\n+\t\treturn -EINVAL;\n+#endif\n+\t/* Issue a ALTERFQ_SCHED management command */\n+\tp = get_affine_portal();\n+\n+\tFQLOCK(fq);\n+\tif (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||\n+\t\t     (fq->state != qman_fq_state_parked))) {\n+\t\tret = -EBUSY;\n+\t\tgoto out;\n+\t}\n+\tmcc = qm_mc_start(&p->p);\n+\tmcc->alterfq.fqid = cpu_to_be32(fq->fqid);\n+\tqm_mc_commit(&p->p, QM_MCC_VERB_ALTER_SCHED);\n+\twhile (!(mcr = qm_mc_result(&p->p)))\n+\t\tcpu_relax();\n+\tDPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_ALTER_SCHED);\n+\tres = mcr->result;\n+\tif (res != QM_MCR_RESULT_OK) {\n+\t\tret = -EIO;\n+\t\tgoto out;\n+\t}\n+\tfq->state = qman_fq_state_sched;\n+out:\n+\tFQUNLOCK(fq);\n+\n+\treturn ret;\n+}\n+\n+int qman_retire_fq(struct qman_fq *fq, u32 *flags)\n+{\n+\tstruct qm_mc_command *mcc;\n+\tstruct qm_mc_result *mcr;\n+\tstruct qman_portal *p;\n+\n+\tint rval;\n+\tu8 res;\n+\n+\tif ((fq->state != qman_fq_state_parked) &&\n+\t    (fq->state != qman_fq_state_sched))\n+\t\treturn -EINVAL;\n+#ifdef RTE_LIBRTE_DPAA_CHECKING\n+\tif (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))\n+\t\treturn -EINVAL;\n+#endif\n+\tp = get_affine_portal();\n+\n+\tFQLOCK(fq);\n+\tif (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||\n+\t\t     (fq->state == qman_fq_state_retired) ||\n+\t\t\t\t(fq->state == qman_fq_state_oos))) {\n+\t\trval = -EBUSY;\n+\t\tgoto out;\n+\t}\n+\trval = 
table_push_fq(p, fq);\n+\tif (rval)\n+\t\tgoto out;\n+\tmcc = qm_mc_start(&p->p);\n+\tmcc->alterfq.fqid = cpu_to_be32(fq->fqid);\n+\tqm_mc_commit(&p->p, QM_MCC_VERB_ALTER_RETIRE);\n+\twhile (!(mcr = qm_mc_result(&p->p)))\n+\t\tcpu_relax();\n+\tDPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_ALTER_RETIRE);\n+\tres = mcr->result;\n+\t/*\n+\t * \"Elegant\" would be to treat OK/PENDING the same way; set CHANGING,\n+\t * and defer the flags until FQRNI or FQRN (respectively) show up. But\n+\t * \"Friendly\" is to process OK immediately, and not set CHANGING. We do\n+\t * friendly, otherwise the caller doesn't necessarily have a fully\n+\t * \"retired\" FQ on return even if the retirement was immediate. However\n+\t * this does mean some code duplication between here and\n+\t * fq_state_change().\n+\t */\n+\tif (likely(res == QM_MCR_RESULT_OK)) {\n+\t\trval = 0;\n+\t\t/* Process 'fq' right away, we'll ignore FQRNI */\n+\t\tif (mcr->alterfq.fqs & QM_MCR_FQS_NOTEMPTY)\n+\t\t\tfq_set(fq, QMAN_FQ_STATE_NE);\n+\t\tif (mcr->alterfq.fqs & QM_MCR_FQS_ORLPRESENT)\n+\t\t\tfq_set(fq, QMAN_FQ_STATE_ORL);\n+\t\telse\n+\t\t\ttable_del_fq(p, fq);\n+\t\tif (flags)\n+\t\t\t*flags = fq->flags;\n+\t\tfq->state = qman_fq_state_retired;\n+\t\tif (fq->cb.fqs) {\n+\t\t\t/*\n+\t\t\t * Another issue with supporting \"immediate\" retirement\n+\t\t\t * is that we're forced to drop FQRNIs, because by the\n+\t\t\t * time they're seen it may already be \"too late\" (the\n+\t\t\t * fq may have been OOS'd and free()'d already). But if\n+\t\t\t * the upper layer wants a callback whether it's\n+\t\t\t * immediate or not, we have to fake a \"MR\" entry to\n+\t\t\t * look like an FQRNI...\n+\t\t\t */\n+\t\t\tstruct qm_mr_entry msg;\n+\n+\t\t\tmsg.verb = QM_MR_VERB_FQRNI;\n+\t\t\tmsg.fq.fqs = mcr->alterfq.fqs;\n+\t\t\tmsg.fq.fqid = fq->fqid;\n+\t\t\tmsg.fq.contextB = (u32)(uintptr_t)fq;\n+\t\t\tfq->cb.fqs(p, fq, &msg);\n+\t\t}\n+\t} else if (res == QM_MCR_RESULT_PENDING) {\n+\t\trval = 1;\n+\t\tfq_set(fq, QMAN_FQ_STATE_CHANGING);\n+\t} else {\n+\t\trval = -EIO;\n+\t\ttable_del_fq(p, fq);\n+\t}\n+out:\n+\tFQUNLOCK(fq);\n+\treturn rval;\n+}\n+\n+int qman_oos_fq(struct qman_fq *fq)\n+{\n+\tstruct qm_mc_command *mcc;\n+\tstruct qm_mc_result *mcr;\n+\tstruct qman_portal *p;\n+\n+\tint ret = 0;\n+\tu8 res;\n+\n+\tif (fq->state != qman_fq_state_retired)\n+\t\treturn -EINVAL;\n+#ifdef RTE_LIBRTE_DPAA_CHECKING\n+\tif (unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))\n+\t\treturn -EINVAL;\n+#endif\n+\tp = get_affine_portal();\n+\tFQLOCK(fq);\n+\tif (unlikely((fq_isset(fq, QMAN_FQ_STATE_BLOCKOOS)) ||\n+\t\t     (fq->state != qman_fq_state_retired))) {\n+\t\tret = -EBUSY;\n+\t\tgoto out;\n+\t}\n+\tmcc = qm_mc_start(&p->p);\n+\tmcc->alterfq.fqid = cpu_to_be32(fq->fqid);\n+\tqm_mc_commit(&p->p, QM_MCC_VERB_ALTER_OOS);\n+\twhile (!(mcr = qm_mc_result(&p->p)))\n+\t\tcpu_relax();\n+\tDPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_ALTER_OOS);\n+\tres = mcr->result;\n+\tif (res != QM_MCR_RESULT_OK) {\n+\t\tret = -EIO;\n+\t\tgoto out;\n+\t}\n+\tfq->state = qman_fq_state_oos;\n+out:\n+\tFQUNLOCK(fq);\n+\treturn ret;\n+}\n+\n+int qman_fq_flow_control(struct qman_fq *fq, int xon)\n+{\n+\tstruct qm_mc_command *mcc;\n+\tstruct qm_mc_result *mcr;\n+\tstruct qman_portal *p;\n+\n+\tint ret = 0;\n+\tu8 res;\n+\tu8 myverb;\n+\n+\tif ((fq->state == qman_fq_state_oos) ||\n+\t    (fq->state == qman_fq_state_retired) ||\n+\t\t(fq->state == qman_fq_state_parked))\n+\t\treturn -EINVAL;\n+\n+#ifdef RTE_LIBRTE_DPAA_CHECKING\n+\tif 
(unlikely(fq_isset(fq, QMAN_FQ_FLAG_NO_MODIFY)))\n+\t\treturn -EINVAL;\n+#endif\n+\t/* Issue a ALTER_FQXON or ALTER_FQXOFF management command */\n+\tp = get_affine_portal();\n+\tFQLOCK(fq);\n+\tif (unlikely((fq_isset(fq, QMAN_FQ_STATE_CHANGING)) ||\n+\t\t     (fq->state == qman_fq_state_parked) ||\n+\t\t\t(fq->state == qman_fq_state_oos) ||\n+\t\t\t(fq->state == qman_fq_state_retired))) {\n+\t\tret = -EBUSY;\n+\t\tgoto out;\n+\t}\n+\tmcc = qm_mc_start(&p->p);\n+\tmcc->alterfq.fqid = fq->fqid;\n+\tmcc->alterfq.count = 0;\n+\tmyverb = xon ? QM_MCC_VERB_ALTER_FQXON : QM_MCC_VERB_ALTER_FQXOFF;\n+\n+\tqm_mc_commit(&p->p, myverb);\n+\twhile (!(mcr = qm_mc_result(&p->p)))\n+\t\tcpu_relax();\n+\tDPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == myverb);\n+\n+\tres = mcr->result;\n+\tif (res != QM_MCR_RESULT_OK) {\n+\t\tret = -EIO;\n+\t\tgoto out;\n+\t}\n+out:\n+\tFQUNLOCK(fq);\n+\treturn ret;\n+}\n+\n+int qman_query_fq(struct qman_fq *fq, struct qm_fqd *fqd)\n+{\n+\tstruct qm_mc_command *mcc;\n+\tstruct qm_mc_result *mcr;\n+\tstruct qman_portal *p = get_affine_portal();\n+\n+\tu8 res;\n+\n+\tmcc = qm_mc_start(&p->p);\n+\tmcc->queryfq.fqid = cpu_to_be32(fq->fqid);\n+\tqm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ);\n+\twhile (!(mcr = qm_mc_result(&p->p)))\n+\t\tcpu_relax();\n+\tDPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ);\n+\tres = mcr->result;\n+\tif (res == QM_MCR_RESULT_OK)\n+\t\t*fqd = mcr->queryfq.fqd;\n+\thw_fqd_to_cpu(fqd);\n+\tif (res != QM_MCR_RESULT_OK)\n+\t\treturn -EIO;\n+\treturn 0;\n+}\n+\n+int qman_query_fq_has_pkts(struct qman_fq *fq)\n+{\n+\tstruct qm_mc_command *mcc;\n+\tstruct qm_mc_result *mcr;\n+\tstruct qman_portal *p = get_affine_portal();\n+\n+\tint ret = 0;\n+\tu8 res;\n+\n+\tmcc = qm_mc_start(&p->p);\n+\tmcc->queryfq.fqid = cpu_to_be32(fq->fqid);\n+\tqm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ_NP);\n+\twhile (!(mcr = qm_mc_result(&p->p)))\n+\t\tcpu_relax();\n+\tres = mcr->result;\n+\tif (res == QM_MCR_RESULT_OK)\n+\t\tret = !!mcr->queryfq_np.frm_cnt;\n+\treturn ret;\n+}\n+\n+int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np)\n+{\n+\tstruct qm_mc_command *mcc;\n+\tstruct qm_mc_result *mcr;\n+\tstruct qman_portal *p = get_affine_portal();\n+\n+\tu8 res;\n+\n+\tmcc = qm_mc_start(&p->p);\n+\tmcc->queryfq.fqid = cpu_to_be32(fq->fqid);\n+\tqm_mc_commit(&p->p, QM_MCC_VERB_QUERYFQ_NP);\n+\twhile (!(mcr = qm_mc_result(&p->p)))\n+\t\tcpu_relax();\n+\tDPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ_NP);\n+\tres = mcr->result;\n+\tif (res == QM_MCR_RESULT_OK) {\n+\t\t*np = mcr->queryfq_np;\n+\t\tnp->fqd_link = be24_to_cpu(np->fqd_link);\n+\t\tnp->odp_seq = be16_to_cpu(np->odp_seq);\n+\t\tnp->orp_nesn = be16_to_cpu(np->orp_nesn);\n+\t\tnp->orp_ea_hseq  = be16_to_cpu(np->orp_ea_hseq);\n+\t\tnp->orp_ea_tseq  = be16_to_cpu(np->orp_ea_tseq);\n+\t\tnp->orp_ea_hptr = be24_to_cpu(np->orp_ea_hptr);\n+\t\tnp->orp_ea_tptr = be24_to_cpu(np->orp_ea_tptr);\n+\t\tnp->pfdr_hptr = be24_to_cpu(np->pfdr_hptr);\n+\t\tnp->pfdr_tptr = be24_to_cpu(np->pfdr_tptr);\n+\t\tnp->ics_surp = be16_to_cpu(np->ics_surp);\n+\t\tnp->byte_cnt = be32_to_cpu(np->byte_cnt);\n+\t\tnp->frm_cnt = be24_to_cpu(np->frm_cnt);\n+\t\tnp->ra1_sfdr = be16_to_cpu(np->ra1_sfdr);\n+\t\tnp->ra2_sfdr = be16_to_cpu(np->ra2_sfdr);\n+\t\tnp->od1_sfdr = be16_to_cpu(np->od1_sfdr);\n+\t\tnp->od2_sfdr = be16_to_cpu(np->od2_sfdr);\n+\t\tnp->od3_sfdr = be16_to_cpu(np->od3_sfdr);\n+\t}\n+\tif (res == QM_MCR_RESULT_ERR_FQID)\n+\t\treturn -ERANGE;\n+\telse if (res != QM_MCR_RESULT_OK)\n+\t\treturn 
-EIO;\n+\treturn 0;\n+}\n+\n+int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq)\n+{\n+\tstruct qm_mc_command *mcc;\n+\tstruct qm_mc_result *mcr;\n+\tstruct qman_portal *p = get_affine_portal();\n+\n+\tu8 res, myverb;\n+\n+\tmyverb = (query_dedicated) ? QM_MCR_VERB_QUERYWQ_DEDICATED :\n+\t\t\t\t QM_MCR_VERB_QUERYWQ;\n+\tmcc = qm_mc_start(&p->p);\n+\tmcc->querywq.channel.id = cpu_to_be16(wq->channel.id);\n+\tqm_mc_commit(&p->p, myverb);\n+\twhile (!(mcr = qm_mc_result(&p->p)))\n+\t\tcpu_relax();\n+\tDPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == myverb);\n+\tres = mcr->result;\n+\tif (res == QM_MCR_RESULT_OK) {\n+\t\tint i, array_len;\n+\n+\t\twq->channel.id = be16_to_cpu(mcr->querywq.channel.id);\n+\t\tarray_len = ARRAY_SIZE(mcr->querywq.wq_len);\n+\t\tfor (i = 0; i < array_len; i++)\n+\t\t\twq->wq_len[i] = be32_to_cpu(mcr->querywq.wq_len[i]);\n+\t}\n+\tif (res != QM_MCR_RESULT_OK) {\n+\t\tpr_err(\"QUERYWQ failed: %s\\n\", mcr_result_str(res));\n+\t\treturn -EIO;\n+\t}\n+\treturn 0;\n+}\n+\n+int qman_testwrite_cgr(struct qman_cgr *cgr, u64 i_bcnt,\n+\t\t       struct qm_mcr_cgrtestwrite *result)\n+{\n+\tstruct qm_mc_command *mcc;\n+\tstruct qm_mc_result *mcr;\n+\tstruct qman_portal *p = get_affine_portal();\n+\n+\tu8 res;\n+\n+\tmcc = qm_mc_start(&p->p);\n+\tmcc->cgrtestwrite.cgid = cgr->cgrid;\n+\tmcc->cgrtestwrite.i_bcnt_hi = (u8)(i_bcnt >> 32);\n+\tmcc->cgrtestwrite.i_bcnt_lo = (u32)i_bcnt;\n+\tqm_mc_commit(&p->p, QM_MCC_VERB_CGRTESTWRITE);\n+\twhile (!(mcr = qm_mc_result(&p->p)))\n+\t\tcpu_relax();\n+\tDPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_CGRTESTWRITE);\n+\tres = mcr->result;\n+\tif (res == QM_MCR_RESULT_OK)\n+\t\t*result = mcr->cgrtestwrite;\n+\tif (res != QM_MCR_RESULT_OK) {\n+\t\tpr_err(\"CGR TEST WRITE failed: %s\\n\", mcr_result_str(res));\n+\t\treturn -EIO;\n+\t}\n+\treturn 0;\n+}\n+\n+int qman_query_cgr(struct qman_cgr *cgr, struct qm_mcr_querycgr *cgrd)\n+{\n+\tstruct qm_mc_command *mcc;\n+\tstruct qm_mc_result *mcr;\n+\tstruct qman_portal *p = get_affine_portal();\n+\tu8 res;\n+\tunsigned int i;\n+\n+\tmcc = qm_mc_start(&p->p);\n+\tmcc->querycgr.cgid = cgr->cgrid;\n+\tqm_mc_commit(&p->p, QM_MCC_VERB_QUERYCGR);\n+\twhile (!(mcr = qm_mc_result(&p->p)))\n+\t\tcpu_relax();\n+\tDPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCC_VERB_QUERYCGR);\n+\tres = mcr->result;\n+\tif (res == QM_MCR_RESULT_OK)\n+\t\t*cgrd = mcr->querycgr;\n+\tif (res != QM_MCR_RESULT_OK) {\n+\t\tpr_err(\"QUERY_CGR failed: %s\\n\", mcr_result_str(res));\n+\t\treturn -EIO;\n+\t}\n+\tcgrd->cgr.wr_parm_g.word =\n+\t\tbe32_to_cpu(cgrd->cgr.wr_parm_g.word);\n+\tcgrd->cgr.wr_parm_y.word =\n+\t\tbe32_to_cpu(cgrd->cgr.wr_parm_y.word);\n+\tcgrd->cgr.wr_parm_r.word =\n+\t\tbe32_to_cpu(cgrd->cgr.wr_parm_r.word);\n+\tcgrd->cgr.cscn_targ =  be32_to_cpu(cgrd->cgr.cscn_targ);\n+\tcgrd->cgr.__cs_thres = be16_to_cpu(cgrd->cgr.__cs_thres);\n+\tfor (i = 0; i < ARRAY_SIZE(cgrd->cscn_targ_swp); i++)\n+\t\tcgrd->cscn_targ_swp[i] =\n+\t\t\tbe32_to_cpu(cgrd->cscn_targ_swp[i]);\n+\treturn 0;\n+}\n+\n+int qman_query_congestion(struct qm_mcr_querycongestion *congestion)\n+{\n+\tstruct qm_mc_result *mcr;\n+\tstruct qman_portal *p = get_affine_portal();\n+\tu8 res;\n+\tunsigned int i;\n+\n+\tqm_mc_start(&p->p);\n+\tqm_mc_commit(&p->p, QM_MCC_VERB_QUERYCONGESTION);\n+\twhile (!(mcr = qm_mc_result(&p->p)))\n+\t\tcpu_relax();\n+\tDPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==\n+\t\t\tQM_MCC_VERB_QUERYCONGESTION);\n+\tres = mcr->result;\n+\tif (res == QM_MCR_RESULT_OK)\n+\t\t*congestion = 
mcr->querycongestion;\n+\tif (res != QM_MCR_RESULT_OK) {\n+\t\tpr_err(\"QUERY_CONGESTION failed: %s\\n\", mcr_result_str(res));\n+\t\treturn -EIO;\n+\t}\n+\tfor (i = 0; i < ARRAY_SIZE(congestion->state.state); i++)\n+\t\tcongestion->state.state[i] =\n+\t\t\tbe32_to_cpu(congestion->state.state[i]);\n+\treturn 0;\n+}\n+\n+int qman_set_vdq(struct qman_fq *fq, u16 num)\n+{\n+\tstruct qman_portal *p = get_affine_portal();\n+\tuint32_t vdqcr;\n+\tint ret = -EBUSY;\n+\n+\tvdqcr = QM_VDQCR_EXACT;\n+\tvdqcr |= QM_VDQCR_NUMFRAMES_SET(num);\n+\n+\tif ((fq->state != qman_fq_state_parked) &&\n+\t    (fq->state != qman_fq_state_retired)) {\n+\t\tret = -EINVAL;\n+\t\tgoto out;\n+\t}\n+\tif (fq_isset(fq, QMAN_FQ_STATE_VDQCR)) {\n+\t\tret = -EBUSY;\n+\t\tgoto out;\n+\t}\n+\tvdqcr = (vdqcr & ~QM_VDQCR_FQID_MASK) | fq->fqid;\n+\n+\tif (!p->vdqcr_owned) {\n+\t\tFQLOCK(fq);\n+\t\tif (fq_isset(fq, QMAN_FQ_STATE_VDQCR))\n+\t\t\tgoto escape;\n+\t\tfq_set(fq, QMAN_FQ_STATE_VDQCR);\n+\t\tFQUNLOCK(fq);\n+\t\tp->vdqcr_owned = fq;\n+\t\tret = 0;\n+\t}\n+escape:\n+\tif (!ret)\n+\t\tqm_dqrr_vdqcr_set(&p->p, vdqcr);\n+\n+out:\n+\treturn ret;\n+}\n+\n+int qman_volatile_dequeue(struct qman_fq *fq, u32 flags __maybe_unused,\n+\t\t\t  u32 vdqcr)\n+{\n+\tstruct qman_portal *p;\n+\tint ret = -EBUSY;\n+\n+\tif ((fq->state != qman_fq_state_parked) &&\n+\t    (fq->state != qman_fq_state_retired))\n+\t\treturn -EINVAL;\n+\tif (vdqcr & QM_VDQCR_FQID_MASK)\n+\t\treturn -EINVAL;\n+\tif (fq_isset(fq, QMAN_FQ_STATE_VDQCR))\n+\t\treturn -EBUSY;\n+\tvdqcr = (vdqcr & ~QM_VDQCR_FQID_MASK) | fq->fqid;\n+\n+\tp = get_affine_portal();\n+\n+\tif (!p->vdqcr_owned) {\n+\t\tFQLOCK(fq);\n+\t\tif (fq_isset(fq, QMAN_FQ_STATE_VDQCR))\n+\t\t\tgoto escape;\n+\t\tfq_set(fq, QMAN_FQ_STATE_VDQCR);\n+\t\tFQUNLOCK(fq);\n+\t\tp->vdqcr_owned = fq;\n+\t\tret = 0;\n+\t}\n+escape:\n+\tif (ret)\n+\t\treturn ret;\n+\n+\t/* VDQCR is set */\n+\tqm_dqrr_vdqcr_set(&p->p, vdqcr);\n+\treturn 0;\n+}\n+\n+static noinline void update_eqcr_ci(struct qman_portal *p, u8 avail)\n+{\n+\tif (avail)\n+\t\tqm_eqcr_cce_prefetch(&p->p);\n+\telse\n+\t\tqm_eqcr_cce_update(&p->p);\n+}\n+\n+int qman_eqcr_is_empty(void)\n+{\n+\tstruct qman_portal *p = get_affine_portal();\n+\tu8 avail;\n+\n+\tupdate_eqcr_ci(p, 0);\n+\tavail = qm_eqcr_get_fill(&p->p);\n+\treturn (avail == 0);\n+}\n+\n+void qman_set_dc_ern(qman_cb_dc_ern handler, int affine)\n+{\n+\tif (affine) {\n+\t\tstruct qman_portal *p = get_affine_portal();\n+\n+\t\tp->cb_dc_ern = handler;\n+\t} else\n+\t\tcb_dc_ern = handler;\n+}\n+\n+static inline struct qm_eqcr_entry *try_p_eq_start(struct qman_portal *p,\n+\t\t\t\t\tstruct qman_fq *fq,\n+\t\t\t\t\tconst struct qm_fd *fd,\n+\t\t\t\t\tu32 flags)\n+{\n+\tstruct qm_eqcr_entry *eq;\n+\tu8 avail;\n+\n+\tif (p->use_eqcr_ci_stashing) {\n+\t\t/*\n+\t\t * The stashing case is easy, only update if we need to in\n+\t\t * order to try and liberate ring entries.\n+\t\t */\n+\t\teq = qm_eqcr_start_stash(&p->p);\n+\t} else {\n+\t\t/*\n+\t\t * The non-stashing case is harder, need to prefetch ahead of\n+\t\t * time.\n+\t\t */\n+\t\tavail = qm_eqcr_get_avail(&p->p);\n+\t\tif (avail < 2)\n+\t\t\tupdate_eqcr_ci(p, avail);\n+\t\teq = qm_eqcr_start_no_stash(&p->p);\n+\t}\n+\n+\tif (unlikely(!eq))\n+\t\treturn NULL;\n+\n+\tif (flags & QMAN_ENQUEUE_FLAG_DCA)\n+\t\teq->dca = QM_EQCR_DCA_ENABLE |\n+\t\t\t((flags & QMAN_ENQUEUE_FLAG_DCA_PARK) ?\n+\t\t\t\t\tQM_EQCR_DCA_PARK : 0) |\n+\t\t\t((flags >> 8) & QM_EQCR_DCA_IDXMASK);\n+\teq->fqid = cpu_to_be32(fq->fqid);\n+\teq->tag = 
cpu_to_be32((u32)(uintptr_t)fq);\n+\teq->fd = *fd;\n+\tcpu_to_hw_fd(&eq->fd);\n+\treturn eq;\n+}\n+\n+int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags)\n+{\n+\tstruct qman_portal *p = get_affine_portal();\n+\tstruct qm_eqcr_entry *eq;\n+\n+\teq = try_p_eq_start(p, fq, fd, flags);\n+\tif (!eq)\n+\t\treturn -EBUSY;\n+\t/* Note: QM_EQCR_VERB_INTERRUPT == QMAN_ENQUEUE_FLAG_WAIT_SYNC */\n+\tqm_eqcr_pvb_commit(&p->p, QM_EQCR_VERB_CMD_ENQUEUE |\n+\t\t(flags & (QM_EQCR_VERB_COLOUR_MASK | QM_EQCR_VERB_INTERRUPT)));\n+\t/* Factor the below out, it's used from qman_enqueue_orp() too */\n+\treturn 0;\n+}\n+\n+int qman_enqueue_multi(struct qman_fq *fq,\n+\t\t       const struct qm_fd *fd,\n+\t\tint frames_to_send)\n+{\n+\tstruct qman_portal *p = get_affine_portal();\n+\tstruct qm_portal *portal = &p->p;\n+\n+\tregister struct qm_eqcr *eqcr = &portal->eqcr;\n+\tstruct qm_eqcr_entry *eq = eqcr->cursor, *prev_eq;\n+\n+\tu8 i, diff, old_ci, sent = 0;\n+\n+\t/* Update the available entries if no entry is free */\n+\tif (!eqcr->available) {\n+\t\told_ci = eqcr->ci;\n+\t\teqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);\n+\t\tdiff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);\n+\t\teqcr->available += diff;\n+\t\tif (!diff)\n+\t\t\treturn 0;\n+\t}\n+\n+\t/* try to send as many frames as possible */\n+\twhile (eqcr->available && frames_to_send--) {\n+\t\teq->fqid = cpu_to_be32(fq->fqid);\n+\t\teq->tag = cpu_to_be32((u32)(uintptr_t)fq);\n+\t\teq->fd.opaque_addr = fd->opaque_addr;\n+\t\teq->fd.addr = cpu_to_be40(fd->addr);\n+\t\teq->fd.status = cpu_to_be32(fd->status);\n+\t\teq->fd.opaque = cpu_to_be32(fd->opaque);\n+\n+\t\teq = (void *)((unsigned long)(eq + 1) &\n+\t\t\t(~(unsigned long)(QM_EQCR_SIZE << 6)));\n+\t\teqcr->available--;\n+\t\tsent++;\n+\t\tfd++;\n+\t}\n+\tlwsync();\n+\n+\t/* In order for flushes to complete faster, all lines are recorded in\n+\t * 32 bit word.\n+\t */\n+\teq = eqcr->cursor;\n+\tfor (i = 0; i < sent; i++) {\n+\t\teq->__dont_write_directly__verb =\n+\t\t\tQM_EQCR_VERB_CMD_ENQUEUE | eqcr->vbit;\n+\t\tprev_eq = eq;\n+\t\teq = (void *)((unsigned long)(eq + 1) &\n+\t\t\t(~(unsigned long)(QM_EQCR_SIZE << 6)));\n+\t\tif (unlikely((prev_eq + 1) != eq))\n+\t\t\teqcr->vbit ^= QM_EQCR_VERB_VBIT;\n+\t}\n+\n+\t/* We need  to flush all the lines but without load/store operations\n+\t * between them\n+\t */\n+\teq = eqcr->cursor;\n+\tfor (i = 0; i < sent; i++) {\n+\t\tdcbf(eq);\n+\t\teq = (void *)((unsigned long)(eq + 1) &\n+\t\t\t(~(unsigned long)(QM_EQCR_SIZE << 6)));\n+\t}\n+\t/* Update cursor for the next call */\n+\teqcr->cursor = eq;\n+\treturn sent;\n+}\n+\n+int qman_enqueue_orp(struct qman_fq *fq, const struct qm_fd *fd, u32 flags,\n+\t\t     struct qman_fq *orp, u16 orp_seqnum)\n+{\n+\tstruct qman_portal *p  = get_affine_portal();\n+\tstruct qm_eqcr_entry *eq;\n+\n+\teq = try_p_eq_start(p, fq, fd, flags);\n+\tif (!eq)\n+\t\treturn -EBUSY;\n+\t/* Process ORP-specifics here */\n+\tif (flags & QMAN_ENQUEUE_FLAG_NLIS)\n+\t\torp_seqnum |= QM_EQCR_SEQNUM_NLIS;\n+\telse {\n+\t\torp_seqnum &= ~QM_EQCR_SEQNUM_NLIS;\n+\t\tif (flags & QMAN_ENQUEUE_FLAG_NESN)\n+\t\t\torp_seqnum |= QM_EQCR_SEQNUM_NESN;\n+\t\telse\n+\t\t\t/* No need to check 4 QMAN_ENQUEUE_FLAG_HOLE */\n+\t\t\torp_seqnum &= ~QM_EQCR_SEQNUM_NESN;\n+\t}\n+\teq->seqnum = cpu_to_be16(orp_seqnum);\n+\teq->orp = cpu_to_be32(orp->fqid);\n+\t/* Note: QM_EQCR_VERB_INTERRUPT == QMAN_ENQUEUE_FLAG_WAIT_SYNC */\n+\tqm_eqcr_pvb_commit(&p->p, QM_EQCR_VERB_ORP |\n+\t\t((flags & (QMAN_ENQUEUE_FLAG_HOLE | 
QMAN_ENQUEUE_FLAG_NESN)) ?\n+\t\t\t\t0 : QM_EQCR_VERB_CMD_ENQUEUE) |\n+\t\t(flags & (QM_EQCR_VERB_COLOUR_MASK | QM_EQCR_VERB_INTERRUPT)));\n+\n+\treturn 0;\n+}\n+\n+int qman_modify_cgr(struct qman_cgr *cgr, u32 flags,\n+\t\t    struct qm_mcc_initcgr *opts)\n+{\n+\tstruct qm_mc_command *mcc;\n+\tstruct qm_mc_result *mcr;\n+\tstruct qman_portal *p = get_affine_portal();\n+\n+\tu8 res;\n+\tu8 verb = QM_MCC_VERB_MODIFYCGR;\n+\n+\tmcc = qm_mc_start(&p->p);\n+\tif (opts)\n+\t\tmcc->initcgr = *opts;\n+\tmcc->initcgr.we_mask = cpu_to_be16(mcc->initcgr.we_mask);\n+\tmcc->initcgr.cgr.wr_parm_g.word =\n+\t\tcpu_to_be32(mcc->initcgr.cgr.wr_parm_g.word);\n+\tmcc->initcgr.cgr.wr_parm_y.word =\n+\t\tcpu_to_be32(mcc->initcgr.cgr.wr_parm_y.word);\n+\tmcc->initcgr.cgr.wr_parm_r.word =\n+\t\tcpu_to_be32(mcc->initcgr.cgr.wr_parm_r.word);\n+\tmcc->initcgr.cgr.cscn_targ =  cpu_to_be32(mcc->initcgr.cgr.cscn_targ);\n+\tmcc->initcgr.cgr.__cs_thres = cpu_to_be16(mcc->initcgr.cgr.__cs_thres);\n+\n+\tmcc->initcgr.cgid = cgr->cgrid;\n+\tif (flags & QMAN_CGR_FLAG_USE_INIT)\n+\t\tverb = QM_MCC_VERB_INITCGR;\n+\tqm_mc_commit(&p->p, verb);\n+\twhile (!(mcr = qm_mc_result(&p->p)))\n+\t\tcpu_relax();\n+\n+\tDPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == verb);\n+\tres = mcr->result;\n+\treturn (res == QM_MCR_RESULT_OK) ? 0 : -EIO;\n+}\n+\n+#define TARG_MASK(n) (0x80000000 >> (n->config->channel - \\\n+\t\t\t\t\tQM_CHANNEL_SWPORTAL0))\n+#define TARG_DCP_MASK(n) (0x80000000 >> (10 + n))\n+#define PORTAL_IDX(n) (n->config->channel - QM_CHANNEL_SWPORTAL0)\n+\n+int qman_create_cgr(struct qman_cgr *cgr, u32 flags,\n+\t\t    struct qm_mcc_initcgr *opts)\n+{\n+\tstruct qm_mcr_querycgr cgr_state;\n+\tstruct qm_mcc_initcgr local_opts;\n+\tint ret;\n+\tstruct qman_portal *p;\n+\n+\t/* We have to check that the provided CGRID is within the limits of the\n+\t * data-structures, for obvious reasons. 
However we'll let h/w take\n+\t * care of determining whether it's within the limits of what exists on\n+\t * the SoC.\n+\t */\n+\tif (cgr->cgrid >= __CGR_NUM)\n+\t\treturn -EINVAL;\n+\n+\tp = get_affine_portal();\n+\n+\tmemset(&local_opts, 0, sizeof(struct qm_mcc_initcgr));\n+\tcgr->chan = p->config->channel;\n+\tspin_lock(&p->cgr_lock);\n+\n+\t/* if no opts specified, just add it to the list */\n+\tif (!opts)\n+\t\tgoto add_list;\n+\n+\tret = qman_query_cgr(cgr, &cgr_state);\n+\tif (ret)\n+\t\tgoto release_lock;\n+\tif (opts)\n+\t\tlocal_opts = *opts;\n+\tif ((qman_ip_rev & 0xFF00) >= QMAN_REV30)\n+\t\tlocal_opts.cgr.cscn_targ_upd_ctrl =\n+\t\t\tQM_CGR_TARG_UDP_CTRL_WRITE_BIT | PORTAL_IDX(p);\n+\telse\n+\t\t/* Overwrite TARG */\n+\t\tlocal_opts.cgr.cscn_targ = cgr_state.cgr.cscn_targ |\n+\t\t\t\t\t\t\tTARG_MASK(p);\n+\tlocal_opts.we_mask |= QM_CGR_WE_CSCN_TARG;\n+\n+\t/* send init if flags indicate so */\n+\tif (opts && (flags & QMAN_CGR_FLAG_USE_INIT))\n+\t\tret = qman_modify_cgr(cgr, QMAN_CGR_FLAG_USE_INIT, &local_opts);\n+\telse\n+\t\tret = qman_modify_cgr(cgr, 0, &local_opts);\n+\tif (ret)\n+\t\tgoto release_lock;\n+add_list:\n+\tlist_add(&cgr->node, &p->cgr_cbs);\n+\n+\t/* Determine if newly added object requires its callback to be called */\n+\tret = qman_query_cgr(cgr, &cgr_state);\n+\tif (ret) {\n+\t\t/* we can't go back, so proceed and return success, but screen\n+\t\t * and wail to the log file.\n+\t\t */\n+\t\tpr_crit(\"CGR HW state partially modified\\n\");\n+\t\tret = 0;\n+\t\tgoto release_lock;\n+\t}\n+\tif (cgr->cb && cgr_state.cgr.cscn_en && qman_cgrs_get(&p->cgrs[1],\n+\t\t\t\t\t\t\t      cgr->cgrid))\n+\t\tcgr->cb(p, cgr, 1);\n+release_lock:\n+\tspin_unlock(&p->cgr_lock);\n+\treturn ret;\n+}\n+\n+int qman_create_cgr_to_dcp(struct qman_cgr *cgr, u32 flags, u16 dcp_portal,\n+\t\t\t   struct qm_mcc_initcgr *opts)\n+{\n+\tstruct qm_mcc_initcgr local_opts;\n+\tstruct qm_mcr_querycgr cgr_state;\n+\tint ret;\n+\n+\tif ((qman_ip_rev & 0xFF00) < QMAN_REV30) {\n+\t\tpr_warn(\"QMan version doesn't support CSCN => DCP portal\\n\");\n+\t\treturn -EINVAL;\n+\t}\n+\t/* We have to check that the provided CGRID is within the limits of the\n+\t * data-structures, for obvious reasons. 
However we'll let h/w take\n+\t * care of determining whether it's within the limits of what exists on\n+\t * the SoC.\n+\t */\n+\tif (cgr->cgrid >= __CGR_NUM)\n+\t\treturn -EINVAL;\n+\n+\tret = qman_query_cgr(cgr, &cgr_state);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\tmemset(&local_opts, 0, sizeof(struct qm_mcc_initcgr));\n+\tif (opts)\n+\t\tlocal_opts = *opts;\n+\n+\tif ((qman_ip_rev & 0xFF00) >= QMAN_REV30)\n+\t\tlocal_opts.cgr.cscn_targ_upd_ctrl =\n+\t\t\t\tQM_CGR_TARG_UDP_CTRL_WRITE_BIT |\n+\t\t\t\tQM_CGR_TARG_UDP_CTRL_DCP | dcp_portal;\n+\telse\n+\t\tlocal_opts.cgr.cscn_targ = cgr_state.cgr.cscn_targ |\n+\t\t\t\t\tTARG_DCP_MASK(dcp_portal);\n+\tlocal_opts.we_mask |= QM_CGR_WE_CSCN_TARG;\n+\n+\t/* send init if flags indicate so */\n+\tif (opts && (flags & QMAN_CGR_FLAG_USE_INIT))\n+\t\tret = qman_modify_cgr(cgr, QMAN_CGR_FLAG_USE_INIT,\n+\t\t\t\t      &local_opts);\n+\telse\n+\t\tret = qman_modify_cgr(cgr, 0, &local_opts);\n+\n+\treturn ret;\n+}\n+\n+int qman_delete_cgr(struct qman_cgr *cgr)\n+{\n+\tstruct qm_mcr_querycgr cgr_state;\n+\tstruct qm_mcc_initcgr local_opts;\n+\tint ret = 0;\n+\tstruct qman_cgr *i;\n+\tstruct qman_portal *p = get_affine_portal();\n+\n+\tif (cgr->chan != p->config->channel) {\n+\t\tpr_crit(\"Attempting to delete cgr from different portal than\"\n+\t\t\t\" it was create: create 0x%x, delete 0x%x\\n\",\n+\t\t\tcgr->chan, p->config->channel);\n+\t\tret = -EINVAL;\n+\t\tgoto put_portal;\n+\t}\n+\tmemset(&local_opts, 0, sizeof(struct qm_mcc_initcgr));\n+\tspin_lock(&p->cgr_lock);\n+\tlist_del(&cgr->node);\n+\t/*\n+\t * If there are no other CGR objects for this CGRID in the list,\n+\t * update CSCN_TARG accordingly\n+\t */\n+\tlist_for_each_entry(i, &p->cgr_cbs, node)\n+\t\tif ((i->cgrid == cgr->cgrid) && i->cb)\n+\t\t\tgoto release_lock;\n+\tret = qman_query_cgr(cgr, &cgr_state);\n+\tif (ret)  {\n+\t\t/* add back to the list */\n+\t\tlist_add(&cgr->node, &p->cgr_cbs);\n+\t\tgoto release_lock;\n+\t}\n+\t/* Overwrite TARG */\n+\tlocal_opts.we_mask = QM_CGR_WE_CSCN_TARG;\n+\tif ((qman_ip_rev & 0xFF00) >= QMAN_REV30)\n+\t\tlocal_opts.cgr.cscn_targ_upd_ctrl = PORTAL_IDX(p);\n+\telse\n+\t\tlocal_opts.cgr.cscn_targ = cgr_state.cgr.cscn_targ &\n+\t\t\t\t\t\t\t ~(TARG_MASK(p));\n+\tret = qman_modify_cgr(cgr, 0, &local_opts);\n+\tif (ret)\n+\t\t/* add back to the list */\n+\t\tlist_add(&cgr->node, &p->cgr_cbs);\n+release_lock:\n+\tspin_unlock(&p->cgr_lock);\n+put_portal:\n+\treturn ret;\n+}\n+\n+int qman_shutdown_fq(u32 fqid)\n+{\n+\tstruct qman_portal *p;\n+\tstruct qm_portal *low_p;\n+\tstruct qm_mc_command *mcc;\n+\tstruct qm_mc_result *mcr;\n+\tu8 state;\n+\tint orl_empty, fq_empty, drain = 0;\n+\tu32 result;\n+\tu32 channel, wq;\n+\tu16 dest_wq;\n+\n+\tp = get_affine_portal();\n+\tlow_p = &p->p;\n+\n+\t/* Determine the state of the FQID */\n+\tmcc = qm_mc_start(low_p);\n+\tmcc->queryfq_np.fqid = cpu_to_be32(fqid);\n+\tqm_mc_commit(low_p, QM_MCC_VERB_QUERYFQ_NP);\n+\twhile (!(mcr = qm_mc_result(low_p)))\n+\t\tcpu_relax();\n+\tDPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ_NP);\n+\tstate = mcr->queryfq_np.state & QM_MCR_NP_STATE_MASK;\n+\tif (state == QM_MCR_NP_STATE_OOS)\n+\t\treturn 0; /* Already OOS, no need to do anymore checks */\n+\n+\t/* Query which channel the FQ is using */\n+\tmcc = qm_mc_start(low_p);\n+\tmcc->queryfq.fqid = cpu_to_be32(fqid);\n+\tqm_mc_commit(low_p, QM_MCC_VERB_QUERYFQ);\n+\twhile (!(mcr = qm_mc_result(low_p)))\n+\t\tcpu_relax();\n+\tDPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) == QM_MCR_VERB_QUERYFQ);\n+\n+\t/* Need to store 
these since the MCR gets reused */\n+\tdest_wq = be16_to_cpu(mcr->queryfq.fqd.dest_wq);\n+\tchannel = dest_wq & 0x7;\n+\twq = dest_wq >> 3;\n+\n+\tswitch (state) {\n+\tcase QM_MCR_NP_STATE_TEN_SCHED:\n+\tcase QM_MCR_NP_STATE_TRU_SCHED:\n+\tcase QM_MCR_NP_STATE_ACTIVE:\n+\tcase QM_MCR_NP_STATE_PARKED:\n+\t\torl_empty = 0;\n+\t\tmcc = qm_mc_start(low_p);\n+\t\tmcc->alterfq.fqid = cpu_to_be32(fqid);\n+\t\tqm_mc_commit(low_p, QM_MCC_VERB_ALTER_RETIRE);\n+\t\twhile (!(mcr = qm_mc_result(low_p)))\n+\t\t\tcpu_relax();\n+\t\tDPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==\n+\t\t\t   QM_MCR_VERB_ALTER_RETIRE);\n+\t\tresult = mcr->result; /* Make a copy as we reuse MCR below */\n+\n+\t\tif (result == QM_MCR_RESULT_PENDING) {\n+\t\t\t/* Need to wait for the FQRN in the message ring, which\n+\t\t\t * will only occur once the FQ has been drained.  In\n+\t\t\t * order for the FQ to drain the portal needs to be set\n+\t\t\t * to dequeue from the channel the FQ is scheduled on\n+\t\t\t */\n+\t\t\tconst struct qm_mr_entry *msg;\n+\t\t\tconst struct qm_dqrr_entry *dqrr = NULL;\n+\t\t\tint found_fqrn = 0;\n+\t\t\t__maybe_unused u16 dequeue_wq = 0;\n+\n+\t\t\t/* Flag that we need to drain FQ */\n+\t\t\tdrain = 1;\n+\n+\t\t\tif (channel >= qm_channel_pool1 &&\n+\t\t\t    channel < (u16)(qm_channel_pool1 + 15)) {\n+\t\t\t\t/* Pool channel, enable the bit in the portal */\n+\t\t\t\tdequeue_wq = (channel -\n+\t\t\t\t\t      qm_channel_pool1 + 1) << 4 | wq;\n+\t\t\t} else if (channel < qm_channel_pool1) {\n+\t\t\t\t/* Dedicated channel */\n+\t\t\t\tdequeue_wq = wq;\n+\t\t\t} else {\n+\t\t\t\tpr_info(\"Cannot recover FQ 0x%x,\"\n+\t\t\t\t\t\" it is scheduled on channel 0x%x\",\n+\t\t\t\t\tfqid, channel);\n+\t\t\t\treturn -EBUSY;\n+\t\t\t}\n+\t\t\t/* Set the sdqcr to drain this channel */\n+\t\t\tif (channel < qm_channel_pool1)\n+\t\t\t\tqm_dqrr_sdqcr_set(low_p,\n+\t\t\t\t\t\t  QM_SDQCR_TYPE_ACTIVE |\n+\t\t\t\t\t  QM_SDQCR_CHANNELS_DEDICATED);\n+\t\t\telse\n+\t\t\t\tqm_dqrr_sdqcr_set(low_p,\n+\t\t\t\t\t\t  QM_SDQCR_TYPE_ACTIVE |\n+\t\t\t\t\t\t  QM_SDQCR_CHANNELS_POOL_CONV\n+\t\t\t\t\t\t  (channel));\n+\t\t\twhile (!found_fqrn) {\n+\t\t\t\t/* Keep draining DQRR while checking the MR*/\n+\t\t\t\tqm_dqrr_pvb_update(low_p);\n+\t\t\t\tdqrr = qm_dqrr_current(low_p);\n+\t\t\t\twhile (dqrr) {\n+\t\t\t\t\tqm_dqrr_cdc_consume_1ptr(\n+\t\t\t\t\t\tlow_p, dqrr, 0);\n+\t\t\t\t\tqm_dqrr_pvb_update(low_p);\n+\t\t\t\t\tqm_dqrr_next(low_p);\n+\t\t\t\t\tdqrr = qm_dqrr_current(low_p);\n+\t\t\t\t}\n+\t\t\t\t/* Process message ring too */\n+\t\t\t\tqm_mr_pvb_update(low_p);\n+\t\t\t\tmsg = qm_mr_current(low_p);\n+\t\t\t\twhile (msg) {\n+\t\t\t\t\tif ((msg->verb &\n+\t\t\t\t\t     QM_MR_VERB_TYPE_MASK)\n+\t\t\t\t\t    == QM_MR_VERB_FQRN)\n+\t\t\t\t\t\tfound_fqrn = 1;\n+\t\t\t\t\tqm_mr_next(low_p);\n+\t\t\t\t\tqm_mr_cci_consume_to_current(low_p);\n+\t\t\t\t\tqm_mr_pvb_update(low_p);\n+\t\t\t\t\tmsg = qm_mr_current(low_p);\n+\t\t\t\t}\n+\t\t\t\tcpu_relax();\n+\t\t\t}\n+\t\t}\n+\t\tif (result != QM_MCR_RESULT_OK &&\n+\t\t    result !=  QM_MCR_RESULT_PENDING) {\n+\t\t\t/* error */\n+\t\t\tpr_err(\"qman_retire_fq failed on FQ 0x%x,\"\n+\t\t\t       \" result=0x%x\\n\", fqid, result);\n+\t\t\treturn -1;\n+\t\t}\n+\t\tif (!(mcr->alterfq.fqs & QM_MCR_FQS_ORLPRESENT)) {\n+\t\t\t/* ORL had no entries, no need to wait until the\n+\t\t\t * ERNs come in.\n+\t\t\t */\n+\t\t\torl_empty = 1;\n+\t\t}\n+\t\t/* Retirement succeeded, check to see if FQ needs\n+\t\t * to be drained.\n+\t\t */\n+\t\tif (drain || mcr->alterfq.fqs & QM_MCR_FQS_NOTEMPTY) 
{\n+\t\t\t/* FQ is Not Empty, drain using volatile DQ commands */\n+\t\t\tfq_empty = 0;\n+\t\t\tdo {\n+\t\t\t\tconst struct qm_dqrr_entry *dqrr = NULL;\n+\t\t\t\tu32 vdqcr = fqid | QM_VDQCR_NUMFRAMES_SET(3);\n+\n+\t\t\t\tqm_dqrr_vdqcr_set(low_p, vdqcr);\n+\n+\t\t\t\t/* Wait for a dequeue to occur */\n+\t\t\t\twhile (dqrr == NULL) {\n+\t\t\t\t\tqm_dqrr_pvb_update(low_p);\n+\t\t\t\t\tdqrr = qm_dqrr_current(low_p);\n+\t\t\t\t\tif (!dqrr)\n+\t\t\t\t\t\tcpu_relax();\n+\t\t\t\t}\n+\t\t\t\t/* Process the dequeues, making sure to\n+\t\t\t\t * empty the ring completely.\n+\t\t\t\t */\n+\t\t\t\twhile (dqrr) {\n+\t\t\t\t\tif (dqrr->fqid == fqid &&\n+\t\t\t\t\t    dqrr->stat & QM_DQRR_STAT_FQ_EMPTY)\n+\t\t\t\t\t\tfq_empty = 1;\n+\t\t\t\t\tqm_dqrr_cdc_consume_1ptr(low_p,\n+\t\t\t\t\t\t\t\t dqrr, 0);\n+\t\t\t\t\tqm_dqrr_pvb_update(low_p);\n+\t\t\t\t\tqm_dqrr_next(low_p);\n+\t\t\t\t\tdqrr = qm_dqrr_current(low_p);\n+\t\t\t\t}\n+\t\t\t} while (fq_empty == 0);\n+\t\t}\n+\t\tqm_dqrr_sdqcr_set(low_p, 0);\n+\n+\t\t/* Wait for the ORL to have been completely drained */\n+\t\twhile (orl_empty == 0) {\n+\t\t\tconst struct qm_mr_entry *msg;\n+\n+\t\t\tqm_mr_pvb_update(low_p);\n+\t\t\tmsg = qm_mr_current(low_p);\n+\t\t\twhile (msg) {\n+\t\t\t\tif ((msg->verb & QM_MR_VERB_TYPE_MASK) ==\n+\t\t\t\t    QM_MR_VERB_FQRL)\n+\t\t\t\t\torl_empty = 1;\n+\t\t\t\tqm_mr_next(low_p);\n+\t\t\t\tqm_mr_cci_consume_to_current(low_p);\n+\t\t\t\tqm_mr_pvb_update(low_p);\n+\t\t\t\tmsg = qm_mr_current(low_p);\n+\t\t\t}\n+\t\t\tcpu_relax();\n+\t\t}\n+\t\tmcc = qm_mc_start(low_p);\n+\t\tmcc->alterfq.fqid = cpu_to_be32(fqid);\n+\t\tqm_mc_commit(low_p, QM_MCC_VERB_ALTER_OOS);\n+\t\twhile (!(mcr = qm_mc_result(low_p)))\n+\t\t\tcpu_relax();\n+\t\tDPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==\n+\t\t\t   QM_MCR_VERB_ALTER_OOS);\n+\t\tif (mcr->result != QM_MCR_RESULT_OK) {\n+\t\t\tpr_err(\n+\t\t\t\"OOS after drain Failed on FQID 0x%x, result 0x%x\\n\",\n+\t\t\t       fqid, mcr->result);\n+\t\t\treturn -1;\n+\t\t}\n+\t\treturn 0;\n+\n+\tcase QM_MCR_NP_STATE_RETIRED:\n+\t\t/* Send OOS Command */\n+\t\tmcc = qm_mc_start(low_p);\n+\t\tmcc->alterfq.fqid = cpu_to_be32(fqid);\n+\t\tqm_mc_commit(low_p, QM_MCC_VERB_ALTER_OOS);\n+\t\twhile (!(mcr = qm_mc_result(low_p)))\n+\t\t\tcpu_relax();\n+\t\tDPAA_ASSERT((mcr->verb & QM_MCR_VERB_MASK) ==\n+\t\t\t   QM_MCR_VERB_ALTER_OOS);\n+\t\tif (mcr->result) {\n+\t\t\tpr_err(\"OOS Failed on FQID 0x%x\\n\", fqid);\n+\t\t\treturn -1;\n+\t\t}\n+\t\treturn 0;\n+\n+\t}\n+\treturn -1;\n+}\ndiff --git a/drivers/bus/dpaa/base/qbman/qman.h b/drivers/bus/dpaa/base/qbman/qman.h\nnew file mode 100644\nindex 0000000..ee78d31\n--- /dev/null\n+++ b/drivers/bus/dpaa/base/qbman/qman.h\n@@ -0,0 +1,888 @@\n+/*-\n+ * This file is provided under a dual BSD/GPLv2 license. 
When using or\n+ * redistributing this file, you may do so under either license.\n+ *\n+ *   BSD LICENSE\n+ *\n+ * Copyright 2008-2016 Freescale Semiconductor Inc.\n+ * Copyright 2017 NXP.\n+ *\n+ * Redistribution and use in source and binary forms, with or without\n+ * modification, are permitted provided that the following conditions are met:\n+ * * Redistributions of source code must retain the above copyright\n+ * notice, this list of conditions and the following disclaimer.\n+ * * Redistributions in binary form must reproduce the above copyright\n+ * notice, this list of conditions and the following disclaimer in the\n+ * documentation and/or other materials provided with the distribution.\n+ * * Neither the name of the above-listed copyright holders nor the\n+ * names of any contributors may be used to endorse or promote products\n+ * derived from this software without specific prior written permission.\n+ *\n+ *   GPL LICENSE SUMMARY\n+ *\n+ * ALTERNATIVELY, this software may be distributed under the terms of the\n+ * GNU General Public License (\"GPL\") as published by the Free Software\n+ * Foundation, either version 2 of that License or (at your option) any\n+ * later version.\n+ *\n+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n+ * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n+ * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n+ * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE\n+ * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n+ * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n+ * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n+ * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n+ * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n+ * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n+ * POSSIBILITY OF SUCH DAMAGE.\n+ */\n+\n+#include \"qman_priv.h\"\n+\n+/***************************/\n+/* Portal register assists */\n+/***************************/\n+#define QM_REG_EQCR_PI_CINH\t0x3000\n+#define QM_REG_EQCR_CI_CINH\t0x3040\n+#define QM_REG_EQCR_ITR\t\t0x3080\n+#define QM_REG_DQRR_PI_CINH\t0x3100\n+#define QM_REG_DQRR_CI_CINH\t0x3140\n+#define QM_REG_DQRR_ITR\t\t0x3180\n+#define QM_REG_DQRR_DCAP\t0x31C0\n+#define QM_REG_DQRR_SDQCR\t0x3200\n+#define QM_REG_DQRR_VDQCR\t0x3240\n+#define QM_REG_DQRR_PDQCR\t0x3280\n+#define QM_REG_MR_PI_CINH\t0x3300\n+#define QM_REG_MR_CI_CINH\t0x3340\n+#define QM_REG_MR_ITR\t\t0x3380\n+#define QM_REG_CFG\t\t0x3500\n+#define QM_REG_ISR\t\t0x3600\n+#define QM_REG_IIR              0x36C0\n+#define QM_REG_ITPR\t\t0x3740\n+\n+/* Cache-enabled register offsets */\n+#define QM_CL_EQCR\t\t0x0000\n+#define QM_CL_DQRR\t\t0x1000\n+#define QM_CL_MR\t\t0x2000\n+#define QM_CL_EQCR_PI_CENA\t0x3000\n+#define QM_CL_EQCR_CI_CENA\t0x3040\n+#define QM_CL_DQRR_PI_CENA\t0x3100\n+#define QM_CL_DQRR_CI_CENA\t0x3140\n+#define QM_CL_MR_PI_CENA\t0x3300\n+#define QM_CL_MR_CI_CENA\t0x3340\n+#define QM_CL_CR\t\t0x3800\n+#define QM_CL_RR0\t\t0x3900\n+#define QM_CL_RR1\t\t0x3940\n+\n+/* BTW, the drivers (and h/w programming model) already obtain the required\n+ * synchronisation for portal accesses via lwsync(), hwsync(), and\n+ * data-dependencies. Use of barrier()s or other order-preserving primitives\n+ * simply degrade performance. 
Hence the use of the __raw_*() interfaces, which\n+ * simply ensure that the compiler treats the portal registers as volatile (ie.\n+ * non-coherent).\n+ */\n+\n+/* Cache-inhibited register access. */\n+#define __qm_in(qm, o)\t\tbe32_to_cpu(__raw_readl((qm)->ci  + (o)))\n+#define __qm_out(qm, o, val)\t__raw_writel((cpu_to_be32(val)), \\\n+\t\t\t\t\t     (qm)->ci + (o))\n+#define qm_in(reg)\t\t__qm_in(&portal->addr, QM_REG_##reg)\n+#define qm_out(reg, val)\t__qm_out(&portal->addr, QM_REG_##reg, val)\n+\n+/* Cache-enabled (index) register access */\n+#define __qm_cl_touch_ro(qm, o) dcbt_ro((qm)->ce + (o))\n+#define __qm_cl_touch_rw(qm, o) dcbt_rw((qm)->ce + (o))\n+#define __qm_cl_in(qm, o)\tbe32_to_cpu(__raw_readl((qm)->ce + (o)))\n+#define __qm_cl_out(qm, o, val) \\\n+\tdo { \\\n+\t\tu32 *__tmpclout = (qm)->ce + (o); \\\n+\t\t__raw_writel(cpu_to_be32(val), __tmpclout); \\\n+\t\tdcbf(__tmpclout); \\\n+\t} while (0)\n+#define __qm_cl_invalidate(qm, o) dccivac((qm)->ce + (o))\n+#define qm_cl_touch_ro(reg) __qm_cl_touch_ro(&portal->addr, QM_CL_##reg##_CENA)\n+#define qm_cl_touch_rw(reg) __qm_cl_touch_rw(&portal->addr, QM_CL_##reg##_CENA)\n+#define qm_cl_in(reg)\t    __qm_cl_in(&portal->addr, QM_CL_##reg##_CENA)\n+#define qm_cl_out(reg, val) __qm_cl_out(&portal->addr, QM_CL_##reg##_CENA, val)\n+#define qm_cl_invalidate(reg)\\\n+\t__qm_cl_invalidate(&portal->addr, QM_CL_##reg##_CENA)\n+\n+/* Cache-enabled ring access */\n+#define qm_cl(base, idx)\t((void *)base + ((idx) << 6))\n+\n+/* Cyclic helper for rings. FIXME: once we are able to do fine-grain perf\n+ * analysis, look at using the \"extra\" bit in the ring index registers to avoid\n+ * cyclic issues.\n+ */\n+static inline u8 qm_cyc_diff(u8 ringsize, u8 first, u8 last)\n+{\n+\t/* 'first' is included, 'last' is excluded */\n+\tif (first <= last)\n+\t\treturn last - first;\n+\treturn ringsize + last - first;\n+}\n+\n+/* Portal modes.\n+ *   Enum types;\n+ *     pmode == production mode\n+ *     cmode == consumption mode,\n+ *     dmode == h/w dequeue mode.\n+ *   Enum values use 3 letter codes. 
First letter matches the portal mode,\n+ *   remaining two letters indicate;\n+ *     ci == cache-inhibited portal register\n+ *     ce == cache-enabled portal register\n+ *     vb == in-band valid-bit (cache-enabled)\n+ *     dc == DCA (Discrete Consumption Acknowledgment), DQRR-only\n+ *   As for \"enum qm_dqrr_dmode\", it should be self-explanatory.\n+ */\n+enum qm_eqcr_pmode {\t\t/* matches QCSP_CFG::EPM */\n+\tqm_eqcr_pci = 0,\t/* PI index, cache-inhibited */\n+\tqm_eqcr_pce = 1,\t/* PI index, cache-enabled */\n+\tqm_eqcr_pvb = 2\t\t/* valid-bit */\n+};\n+\n+enum qm_dqrr_dmode {\t\t/* matches QCSP_CFG::DP */\n+\tqm_dqrr_dpush = 0,\t/* SDQCR  + VDQCR */\n+\tqm_dqrr_dpull = 1\t/* PDQCR */\n+};\n+\n+enum qm_dqrr_pmode {\t\t/* s/w-only */\n+\tqm_dqrr_pci,\t\t/* reads DQRR_PI_CINH */\n+\tqm_dqrr_pce,\t\t/* reads DQRR_PI_CENA */\n+\tqm_dqrr_pvb\t\t/* reads valid-bit */\n+};\n+\n+enum qm_dqrr_cmode {\t\t/* matches QCSP_CFG::DCM */\n+\tqm_dqrr_cci = 0,\t/* CI index, cache-inhibited */\n+\tqm_dqrr_cce = 1,\t/* CI index, cache-enabled */\n+\tqm_dqrr_cdc = 2\t\t/* Discrete Consumption Acknowledgment */\n+};\n+\n+enum qm_mr_pmode {\t\t/* s/w-only */\n+\tqm_mr_pci,\t\t/* reads MR_PI_CINH */\n+\tqm_mr_pce,\t\t/* reads MR_PI_CENA */\n+\tqm_mr_pvb\t\t/* reads valid-bit */\n+};\n+\n+enum qm_mr_cmode {\t\t/* matches QCSP_CFG::MM */\n+\tqm_mr_cci = 0,\t\t/* CI index, cache-inhibited */\n+\tqm_mr_cce = 1\t\t/* CI index, cache-enabled */\n+};\n+\n+/* ------------------------- */\n+/* --- Portal structures --- */\n+\n+#define QM_EQCR_SIZE\t\t8\n+#define QM_DQRR_SIZE\t\t16\n+#define QM_MR_SIZE\t\t8\n+\n+struct qm_eqcr {\n+\tstruct qm_eqcr_entry *ring, *cursor;\n+\tu8 ci, available, ithresh, vbit;\n+#ifdef RTE_LIBRTE_DPAA_CHECKING\n+\tu32 busy;\n+\tenum qm_eqcr_pmode pmode;\n+#endif\n+};\n+\n+struct qm_dqrr {\n+\tconst struct qm_dqrr_entry *ring, *cursor;\n+\tu8 pi, ci, fill, ithresh, vbit;\n+#ifdef RTE_LIBRTE_DPAA_CHECKING\n+\tenum qm_dqrr_dmode dmode;\n+\tenum qm_dqrr_pmode pmode;\n+\tenum qm_dqrr_cmode cmode;\n+#endif\n+};\n+\n+struct qm_mr {\n+\tconst struct qm_mr_entry *ring, *cursor;\n+\tu8 pi, ci, fill, ithresh, vbit;\n+#ifdef RTE_LIBRTE_DPAA_CHECKING\n+\tenum qm_mr_pmode pmode;\n+\tenum qm_mr_cmode cmode;\n+#endif\n+};\n+\n+struct qm_mc {\n+\tstruct qm_mc_command *cr;\n+\tstruct qm_mc_result *rr;\n+\tu8 rridx, vbit;\n+#ifdef RTE_LIBRTE_DPAA_CHECKING\n+\tenum {\n+\t\t/* Can be _mc_start()ed */\n+\t\tqman_mc_idle,\n+\t\t/* Can be _mc_commit()ed or _mc_abort()ed */\n+\t\tqman_mc_user,\n+\t\t/* Can only be _mc_retry()ed */\n+\t\tqman_mc_hw\n+\t} state;\n+#endif\n+};\n+\n+#define QM_PORTAL_ALIGNMENT ____cacheline_aligned\n+\n+struct qm_addr {\n+\tvoid __iomem *ce;\t/* cache-enabled */\n+\tvoid __iomem *ci;\t/* cache-inhibited */\n+};\n+\n+struct qm_portal {\n+\tstruct qm_addr addr;\n+\tstruct qm_eqcr eqcr;\n+\tstruct qm_dqrr dqrr;\n+\tstruct qm_mr mr;\n+\tstruct qm_mc mc;\n+} QM_PORTAL_ALIGNMENT;\n+\n+/* Bit-wise logic to wrap a ring pointer by clearing the \"carry bit\" */\n+#define EQCR_CARRYCLEAR(p) \\\n+\t(void *)((unsigned long)(p) & (~(unsigned long)(QM_EQCR_SIZE << 6)))\n+\n+extern dma_addr_t rte_mem_virt2phy(const void *addr);\n+\n+/* Bit-wise logic to convert a ring pointer to a ring index */\n+static inline u8 EQCR_PTR2IDX(struct qm_eqcr_entry *e)\n+{\n+\treturn ((uintptr_t)e >> 6) & (QM_EQCR_SIZE - 1);\n+}\n+\n+/* Increment the 'cursor' ring pointer, taking 'vbit' into account */\n+static inline void EQCR_INC(struct qm_eqcr *eqcr)\n+{\n+\t/* NB: this is odd-looking, but experiments show that it 
generates fast\n+\t * code with essentially no branching overheads. We increment to the\n+\t * next EQCR pointer and handle overflow and 'vbit'.\n+\t */\n+\tstruct qm_eqcr_entry *partial = eqcr->cursor + 1;\n+\n+\teqcr->cursor = EQCR_CARRYCLEAR(partial);\n+\tif (partial != eqcr->cursor)\n+\t\teqcr->vbit ^= QM_EQCR_VERB_VBIT;\n+}\n+\n+static inline struct qm_eqcr_entry *qm_eqcr_start_no_stash(struct qm_portal\n+\t\t\t\t\t\t\t\t *portal)\n+{\n+\tregister struct qm_eqcr *eqcr = &portal->eqcr;\n+\n+\tDPAA_ASSERT(!eqcr->busy);\n+\tif (!eqcr->available)\n+\t\treturn NULL;\n+\n+#ifdef RTE_LIBRTE_DPAA_CHECKING\n+\teqcr->busy = 1;\n+#endif\n+\n+\treturn eqcr->cursor;\n+}\n+\n+static inline struct qm_eqcr_entry *qm_eqcr_start_stash(struct qm_portal\n+\t\t\t\t\t\t\t\t*portal)\n+{\n+\tregister struct qm_eqcr *eqcr = &portal->eqcr;\n+\tu8 diff, old_ci;\n+\n+\tDPAA_ASSERT(!eqcr->busy);\n+\tif (!eqcr->available) {\n+\t\told_ci = eqcr->ci;\n+\t\teqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);\n+\t\tdiff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);\n+\t\teqcr->available += diff;\n+\t\tif (!diff)\n+\t\t\treturn NULL;\n+\t}\n+#ifdef RTE_LIBRTE_DPAA_CHECKING\n+\teqcr->busy = 1;\n+#endif\n+\treturn eqcr->cursor;\n+}\n+\n+static inline void qm_eqcr_abort(struct qm_portal *portal)\n+{\n+\t__maybe_unused register struct qm_eqcr *eqcr = &portal->eqcr;\n+\n+\tDPAA_ASSERT(eqcr->busy);\n+#ifdef RTE_LIBRTE_DPAA_CHECKING\n+\teqcr->busy = 0;\n+#endif\n+}\n+\n+static inline struct qm_eqcr_entry *qm_eqcr_pend_and_next(\n+\t\t\t\t\tstruct qm_portal *portal, u8 myverb)\n+{\n+\tregister struct qm_eqcr *eqcr = &portal->eqcr;\n+\n+\tDPAA_ASSERT(eqcr->busy);\n+\tDPAA_ASSERT(eqcr->pmode != qm_eqcr_pvb);\n+\tif (eqcr->available == 1)\n+\t\treturn NULL;\n+\teqcr->cursor->__dont_write_directly__verb = myverb | eqcr->vbit;\n+\tdcbf(eqcr->cursor);\n+\tEQCR_INC(eqcr);\n+\teqcr->available--;\n+\treturn eqcr->cursor;\n+}\n+\n+#define EQCR_COMMIT_CHECKS(eqcr) \\\n+do { \\\n+\tDPAA_ASSERT(eqcr->busy); \\\n+\tDPAA_ASSERT(eqcr->cursor->orp == (eqcr->cursor->orp & 0x00ffffff)); \\\n+\tDPAA_ASSERT(eqcr->cursor->fqid == (eqcr->cursor->fqid & 0x00ffffff)); \\\n+} while (0)\n+\n+static inline void qm_eqcr_pci_commit(struct qm_portal *portal, u8 myverb)\n+{\n+\tregister struct qm_eqcr *eqcr = &portal->eqcr;\n+\n+\tEQCR_COMMIT_CHECKS(eqcr);\n+\tDPAA_ASSERT(eqcr->pmode == qm_eqcr_pci);\n+\teqcr->cursor->__dont_write_directly__verb = myverb | eqcr->vbit;\n+\tEQCR_INC(eqcr);\n+\teqcr->available--;\n+\tdcbf(eqcr->cursor);\n+\thwsync();\n+\tqm_out(EQCR_PI_CINH, EQCR_PTR2IDX(eqcr->cursor));\n+#ifdef RTE_LIBRTE_DPAA_CHECKING\n+\teqcr->busy = 0;\n+#endif\n+}\n+\n+static inline void qm_eqcr_pce_prefetch(struct qm_portal *portal)\n+{\n+\t__maybe_unused register struct qm_eqcr *eqcr = &portal->eqcr;\n+\n+\tDPAA_ASSERT(eqcr->pmode == qm_eqcr_pce);\n+\tqm_cl_invalidate(EQCR_PI);\n+\tqm_cl_touch_rw(EQCR_PI);\n+}\n+\n+static inline void qm_eqcr_pce_commit(struct qm_portal *portal, u8 myverb)\n+{\n+\tregister struct qm_eqcr *eqcr = &portal->eqcr;\n+\n+\tEQCR_COMMIT_CHECKS(eqcr);\n+\tDPAA_ASSERT(eqcr->pmode == qm_eqcr_pce);\n+\teqcr->cursor->__dont_write_directly__verb = myverb | eqcr->vbit;\n+\tEQCR_INC(eqcr);\n+\teqcr->available--;\n+\tdcbf(eqcr->cursor);\n+\tlwsync();\n+\tqm_cl_out(EQCR_PI, EQCR_PTR2IDX(eqcr->cursor));\n+#ifdef RTE_LIBRTE_DPAA_CHECKING\n+\teqcr->busy = 0;\n+#endif\n+}\n+\n+static inline void qm_eqcr_pvb_commit(struct qm_portal *portal, u8 myverb)\n+{\n+\tregister struct qm_eqcr *eqcr = &portal->eqcr;\n+\tstruct qm_eqcr_entry 
*eqcursor;\n+\n+\tEQCR_COMMIT_CHECKS(eqcr);\n+\tDPAA_ASSERT(eqcr->pmode == qm_eqcr_pvb);\n+\tlwsync();\n+\teqcursor = eqcr->cursor;\n+\teqcursor->__dont_write_directly__verb = myverb | eqcr->vbit;\n+\tdcbf(eqcursor);\n+\tEQCR_INC(eqcr);\n+\teqcr->available--;\n+#ifdef RTE_LIBRTE_DPAA_CHECKING\n+\teqcr->busy = 0;\n+#endif\n+}\n+\n+static inline u8 qm_eqcr_cci_update(struct qm_portal *portal)\n+{\n+\tregister struct qm_eqcr *eqcr = &portal->eqcr;\n+\tu8 diff, old_ci = eqcr->ci;\n+\n+\teqcr->ci = qm_in(EQCR_CI_CINH) & (QM_EQCR_SIZE - 1);\n+\tdiff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);\n+\teqcr->available += diff;\n+\treturn diff;\n+}\n+\n+static inline void qm_eqcr_cce_prefetch(struct qm_portal *portal)\n+{\n+\t__maybe_unused register struct qm_eqcr *eqcr = &portal->eqcr;\n+\n+\tqm_cl_touch_ro(EQCR_CI);\n+}\n+\n+static inline u8 qm_eqcr_cce_update(struct qm_portal *portal)\n+{\n+\tregister struct qm_eqcr *eqcr = &portal->eqcr;\n+\tu8 diff, old_ci = eqcr->ci;\n+\n+\teqcr->ci = qm_cl_in(EQCR_CI) & (QM_EQCR_SIZE - 1);\n+\tqm_cl_invalidate(EQCR_CI);\n+\tdiff = qm_cyc_diff(QM_EQCR_SIZE, old_ci, eqcr->ci);\n+\teqcr->available += diff;\n+\treturn diff;\n+}\n+\n+static inline u8 qm_eqcr_get_ithresh(struct qm_portal *portal)\n+{\n+\tregister struct qm_eqcr *eqcr = &portal->eqcr;\n+\n+\treturn eqcr->ithresh;\n+}\n+\n+static inline void qm_eqcr_set_ithresh(struct qm_portal *portal, u8 ithresh)\n+{\n+\tregister struct qm_eqcr *eqcr = &portal->eqcr;\n+\n+\teqcr->ithresh = ithresh;\n+\tqm_out(EQCR_ITR, ithresh);\n+}\n+\n+static inline u8 qm_eqcr_get_avail(struct qm_portal *portal)\n+{\n+\tregister struct qm_eqcr *eqcr = &portal->eqcr;\n+\n+\treturn eqcr->available;\n+}\n+\n+static inline u8 qm_eqcr_get_fill(struct qm_portal *portal)\n+{\n+\tregister struct qm_eqcr *eqcr = &portal->eqcr;\n+\n+\treturn QM_EQCR_SIZE - 1 - eqcr->available;\n+}\n+\n+#define DQRR_CARRYCLEAR(p) \\\n+\t(void *)((unsigned long)(p) & (~(unsigned long)(QM_DQRR_SIZE << 6)))\n+\n+static inline u8 DQRR_PTR2IDX(const struct qm_dqrr_entry *e)\n+{\n+\treturn ((uintptr_t)e >> 6) & (QM_DQRR_SIZE - 1);\n+}\n+\n+static inline const struct qm_dqrr_entry *DQRR_INC(\n+\t\t\t\t\t\tconst struct qm_dqrr_entry *e)\n+{\n+\treturn DQRR_CARRYCLEAR(e + 1);\n+}\n+\n+static inline void qm_dqrr_set_maxfill(struct qm_portal *portal, u8 mf)\n+{\n+\tqm_out(CFG, (qm_in(CFG) & 0xff0fffff) |\n+\t\t((mf & (QM_DQRR_SIZE - 1)) << 20));\n+}\n+\n+static inline const struct qm_dqrr_entry *qm_dqrr_current(\n+\t\t\t\t\t\tstruct qm_portal *portal)\n+{\n+\tregister struct qm_dqrr *dqrr = &portal->dqrr;\n+\n+\tif (!dqrr->fill)\n+\t\treturn NULL;\n+\treturn dqrr->cursor;\n+}\n+\n+static inline u8 qm_dqrr_cursor(struct qm_portal *portal)\n+{\n+\tregister struct qm_dqrr *dqrr = &portal->dqrr;\n+\n+\treturn DQRR_PTR2IDX(dqrr->cursor);\n+}\n+\n+static inline u8 qm_dqrr_next(struct qm_portal *portal)\n+{\n+\tregister struct qm_dqrr *dqrr = &portal->dqrr;\n+\n+\tDPAA_ASSERT(dqrr->fill);\n+\tdqrr->cursor = DQRR_INC(dqrr->cursor);\n+\treturn --dqrr->fill;\n+}\n+\n+static inline u8 qm_dqrr_pci_update(struct qm_portal *portal)\n+{\n+\tregister struct qm_dqrr *dqrr = &portal->dqrr;\n+\tu8 diff, old_pi = dqrr->pi;\n+\n+\tDPAA_ASSERT(dqrr->pmode == qm_dqrr_pci);\n+\tdqrr->pi = qm_in(DQRR_PI_CINH) & (QM_DQRR_SIZE - 1);\n+\tdiff = qm_cyc_diff(QM_DQRR_SIZE, old_pi, dqrr->pi);\n+\tdqrr->fill += diff;\n+\treturn diff;\n+}\n+\n+static inline void qm_dqrr_pce_prefetch(struct qm_portal *portal)\n+{\n+\t__maybe_unused register struct qm_dqrr *dqrr = 
&portal->dqrr;\n+\n+\tDPAA_ASSERT(dqrr->pmode == qm_dqrr_pce);\n+\tqm_cl_invalidate(DQRR_PI);\n+\tqm_cl_touch_ro(DQRR_PI);\n+}\n+\n+static inline u8 qm_dqrr_pce_update(struct qm_portal *portal)\n+{\n+\tregister struct qm_dqrr *dqrr = &portal->dqrr;\n+\tu8 diff, old_pi = dqrr->pi;\n+\n+\tDPAA_ASSERT(dqrr->pmode == qm_dqrr_pce);\n+\tdqrr->pi = qm_cl_in(DQRR_PI) & (QM_DQRR_SIZE - 1);\n+\tdiff = qm_cyc_diff(QM_DQRR_SIZE, old_pi, dqrr->pi);\n+\tdqrr->fill += diff;\n+\treturn diff;\n+}\n+\n+static inline void qm_dqrr_pvb_update(struct qm_portal *portal)\n+{\n+\tregister struct qm_dqrr *dqrr = &portal->dqrr;\n+\tconst struct qm_dqrr_entry *res = qm_cl(dqrr->ring, dqrr->pi);\n+\n+\tDPAA_ASSERT(dqrr->pmode == qm_dqrr_pvb);\n+\t/* when accessing 'verb', use __raw_readb() to ensure that compiler\n+\t * inlining doesn't try to optimise out \"excess reads\".\n+\t */\n+\tif ((__raw_readb(&res->verb) & QM_DQRR_VERB_VBIT) == dqrr->vbit) {\n+\t\tdqrr->pi = (dqrr->pi + 1) & (QM_DQRR_SIZE - 1);\n+\t\tif (!dqrr->pi)\n+\t\t\tdqrr->vbit ^= QM_DQRR_VERB_VBIT;\n+\t\tdqrr->fill++;\n+\t}\n+}\n+\n+static inline void qm_dqrr_cci_consume(struct qm_portal *portal, u8 num)\n+{\n+\tregister struct qm_dqrr *dqrr = &portal->dqrr;\n+\n+\tDPAA_ASSERT(dqrr->cmode == qm_dqrr_cci);\n+\tdqrr->ci = (dqrr->ci + num) & (QM_DQRR_SIZE - 1);\n+\tqm_out(DQRR_CI_CINH, dqrr->ci);\n+}\n+\n+static inline void qm_dqrr_cci_consume_to_current(struct qm_portal *portal)\n+{\n+\tregister struct qm_dqrr *dqrr = &portal->dqrr;\n+\n+\tDPAA_ASSERT(dqrr->cmode == qm_dqrr_cci);\n+\tdqrr->ci = DQRR_PTR2IDX(dqrr->cursor);\n+\tqm_out(DQRR_CI_CINH, dqrr->ci);\n+}\n+\n+static inline void qm_dqrr_cce_prefetch(struct qm_portal *portal)\n+{\n+\t__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;\n+\n+\tDPAA_ASSERT(dqrr->cmode == qm_dqrr_cce);\n+\tqm_cl_invalidate(DQRR_CI);\n+\tqm_cl_touch_rw(DQRR_CI);\n+}\n+\n+static inline void qm_dqrr_cce_consume(struct qm_portal *portal, u8 num)\n+{\n+\tregister struct qm_dqrr *dqrr = &portal->dqrr;\n+\n+\tDPAA_ASSERT(dqrr->cmode == qm_dqrr_cce);\n+\tdqrr->ci = (dqrr->ci + num) & (QM_DQRR_SIZE - 1);\n+\tqm_cl_out(DQRR_CI, dqrr->ci);\n+}\n+\n+static inline void qm_dqrr_cce_consume_to_current(struct qm_portal *portal)\n+{\n+\tregister struct qm_dqrr *dqrr = &portal->dqrr;\n+\n+\tDPAA_ASSERT(dqrr->cmode == qm_dqrr_cce);\n+\tdqrr->ci = DQRR_PTR2IDX(dqrr->cursor);\n+\tqm_cl_out(DQRR_CI, dqrr->ci);\n+}\n+\n+static inline void qm_dqrr_cdc_consume_1(struct qm_portal *portal, u8 idx,\n+\t\t\t\t\t int park)\n+{\n+\t__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;\n+\n+\tDPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);\n+\tDPAA_ASSERT(idx < QM_DQRR_SIZE);\n+\tqm_out(DQRR_DCAP, (0 << 8) |\t/* S */\n+\t\t((park ? 1 : 0) << 6) |\t/* PK */\n+\t\tidx);\t\t\t/* DCAP_CI */\n+}\n+\n+static inline void qm_dqrr_cdc_consume_1ptr(struct qm_portal *portal,\n+\t\t\t\t\t    const struct qm_dqrr_entry *dq,\n+\t\t\t\t\tint park)\n+{\n+\t__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;\n+\tu8 idx = DQRR_PTR2IDX(dq);\n+\n+\tDPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);\n+\tDPAA_ASSERT(idx < QM_DQRR_SIZE);\n+\tqm_out(DQRR_DCAP, (0 << 8) |\t\t/* DQRR_DCAP::S */\n+\t\t((park ? 
1 : 0) << 6) |\t\t/* DQRR_DCAP::PK */\n+\t\tidx);\t\t\t\t/* DQRR_DCAP::DCAP_CI */\n+}\n+\n+static inline void qm_dqrr_cdc_consume_n(struct qm_portal *portal, u16 bitmask)\n+{\n+\t__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;\n+\n+\tDPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);\n+\tqm_out(DQRR_DCAP, (1 << 8) |\t\t/* DQRR_DCAP::S */\n+\t\t((u32)bitmask << 16));\t\t/* DQRR_DCAP::DCAP_CI */\n+\tdqrr->ci = qm_in(DQRR_CI_CINH) & (QM_DQRR_SIZE - 1);\n+\tdqrr->fill = qm_cyc_diff(QM_DQRR_SIZE, dqrr->ci, dqrr->pi);\n+}\n+\n+static inline u8 qm_dqrr_cdc_cci(struct qm_portal *portal)\n+{\n+\t__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;\n+\n+\tDPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);\n+\treturn qm_in(DQRR_CI_CINH) & (QM_DQRR_SIZE - 1);\n+}\n+\n+static inline void qm_dqrr_cdc_cce_prefetch(struct qm_portal *portal)\n+{\n+\t__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;\n+\n+\tDPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);\n+\tqm_cl_invalidate(DQRR_CI);\n+\tqm_cl_touch_ro(DQRR_CI);\n+}\n+\n+static inline u8 qm_dqrr_cdc_cce(struct qm_portal *portal)\n+{\n+\t__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;\n+\n+\tDPAA_ASSERT(dqrr->cmode == qm_dqrr_cdc);\n+\treturn qm_cl_in(DQRR_CI) & (QM_DQRR_SIZE - 1);\n+}\n+\n+static inline u8 qm_dqrr_get_ci(struct qm_portal *portal)\n+{\n+\tregister struct qm_dqrr *dqrr = &portal->dqrr;\n+\n+\tDPAA_ASSERT(dqrr->cmode != qm_dqrr_cdc);\n+\treturn dqrr->ci;\n+}\n+\n+static inline void qm_dqrr_park(struct qm_portal *portal, u8 idx)\n+{\n+\t__maybe_unused register struct qm_dqrr *dqrr = &portal->dqrr;\n+\n+\tDPAA_ASSERT(dqrr->cmode != qm_dqrr_cdc);\n+\tqm_out(DQRR_DCAP, (0 << 8) |\t\t/* S */\n+\t\t(1 << 6) |\t\t\t/* PK */\n+\t\t(idx & (QM_DQRR_SIZE - 1)));\t/* DCAP_CI */\n+}\n+\n+static inline void qm_dqrr_park_current(struct qm_portal *portal)\n+{\n+\tregister struct qm_dqrr *dqrr = &portal->dqrr;\n+\n+\tDPAA_ASSERT(dqrr->cmode != qm_dqrr_cdc);\n+\tqm_out(DQRR_DCAP, (0 << 8) |\t\t/* S */\n+\t\t(1 << 6) |\t\t\t/* PK */\n+\t\tDQRR_PTR2IDX(dqrr->cursor));\t/* DCAP_CI */\n+}\n+\n+static inline void qm_dqrr_sdqcr_set(struct qm_portal *portal, u32 sdqcr)\n+{\n+\tqm_out(DQRR_SDQCR, sdqcr);\n+}\n+\n+static inline u32 qm_dqrr_sdqcr_get(struct qm_portal *portal)\n+{\n+\treturn qm_in(DQRR_SDQCR);\n+}\n+\n+static inline void qm_dqrr_vdqcr_set(struct qm_portal *portal, u32 vdqcr)\n+{\n+\tqm_out(DQRR_VDQCR, vdqcr);\n+}\n+\n+static inline u32 qm_dqrr_vdqcr_get(struct qm_portal *portal)\n+{\n+\treturn qm_in(DQRR_VDQCR);\n+}\n+\n+static inline u8 qm_dqrr_get_ithresh(struct qm_portal *portal)\n+{\n+\tregister struct qm_dqrr *dqrr = &portal->dqrr;\n+\n+\treturn dqrr->ithresh;\n+}\n+\n+static inline void qm_dqrr_set_ithresh(struct qm_portal *portal, u8 ithresh)\n+{\n+\tqm_out(DQRR_ITR, ithresh);\n+}\n+\n+static inline u8 qm_dqrr_get_maxfill(struct qm_portal *portal)\n+{\n+\treturn (qm_in(CFG) & 0x00f00000) >> 20;\n+}\n+\n+/* -------------- */\n+/* --- MR API --- */\n+\n+#define MR_CARRYCLEAR(p) \\\n+\t(void *)((unsigned long)(p) & (~(unsigned long)(QM_MR_SIZE << 6)))\n+\n+static inline u8 MR_PTR2IDX(const struct qm_mr_entry *e)\n+{\n+\treturn ((uintptr_t)e >> 6) & (QM_MR_SIZE - 1);\n+}\n+\n+static inline const struct qm_mr_entry *MR_INC(const struct qm_mr_entry *e)\n+{\n+\treturn MR_CARRYCLEAR(e + 1);\n+}\n+\n+static inline void qm_mr_finish(struct qm_portal *portal)\n+{\n+\tregister struct qm_mr *mr = &portal->mr;\n+\n+\tif (mr->ci != MR_PTR2IDX(mr->cursor))\n+\t\tpr_crit(\"Ignoring completed MR entries\\n\");\n+}\n+\n+static inline const 
struct qm_mr_entry *qm_mr_current(struct qm_portal *portal)\n+{\n+\tregister struct qm_mr *mr = &portal->mr;\n+\n+\tif (!mr->fill)\n+\t\treturn NULL;\n+\treturn mr->cursor;\n+}\n+\n+static inline u8 qm_mr_next(struct qm_portal *portal)\n+{\n+\tregister struct qm_mr *mr = &portal->mr;\n+\n+\tDPAA_ASSERT(mr->fill);\n+\tmr->cursor = MR_INC(mr->cursor);\n+\treturn --mr->fill;\n+}\n+\n+static inline void qm_mr_cci_consume(struct qm_portal *portal, u8 num)\n+{\n+\tregister struct qm_mr *mr = &portal->mr;\n+\n+\tDPAA_ASSERT(mr->cmode == qm_mr_cci);\n+\tmr->ci = (mr->ci + num) & (QM_MR_SIZE - 1);\n+\tqm_out(MR_CI_CINH, mr->ci);\n+}\n+\n+static inline void qm_mr_cci_consume_to_current(struct qm_portal *portal)\n+{\n+\tregister struct qm_mr *mr = &portal->mr;\n+\n+\tDPAA_ASSERT(mr->cmode == qm_mr_cci);\n+\tmr->ci = MR_PTR2IDX(mr->cursor);\n+\tqm_out(MR_CI_CINH, mr->ci);\n+}\n+\n+static inline void qm_mr_set_ithresh(struct qm_portal *portal, u8 ithresh)\n+{\n+\tqm_out(MR_ITR, ithresh);\n+}\n+\n+/* ------------------------------ */\n+/* --- Management command API --- */\n+static inline int qm_mc_init(struct qm_portal *portal)\n+{\n+\tregister struct qm_mc *mc = &portal->mc;\n+\n+\tmc->cr = portal->addr.ce + QM_CL_CR;\n+\tmc->rr = portal->addr.ce + QM_CL_RR0;\n+\tmc->rridx = (__raw_readb(&mc->cr->__dont_write_directly__verb) &\n+\t\t\tQM_MCC_VERB_VBIT) ?  0 : 1;\n+\tmc->vbit = mc->rridx ? QM_MCC_VERB_VBIT : 0;\n+#ifdef RTE_LIBRTE_DPAA_CHECKING\n+\tmc->state = qman_mc_idle;\n+#endif\n+\treturn 0;\n+}\n+\n+static inline void qm_mc_finish(struct qm_portal *portal)\n+{\n+\t__maybe_unused register struct qm_mc *mc = &portal->mc;\n+\n+\tDPAA_ASSERT(mc->state == qman_mc_idle);\n+#ifdef RTE_LIBRTE_DPAA_CHECKING\n+\tif (mc->state != qman_mc_idle)\n+\t\tpr_crit(\"Losing incomplete MC command\\n\");\n+#endif\n+}\n+\n+static inline struct qm_mc_command *qm_mc_start(struct qm_portal *portal)\n+{\n+\tregister struct qm_mc *mc = &portal->mc;\n+\n+\tDPAA_ASSERT(mc->state == qman_mc_idle);\n+#ifdef RTE_LIBRTE_DPAA_CHECKING\n+\tmc->state = qman_mc_user;\n+#endif\n+\tdcbz_64(mc->cr);\n+\treturn mc->cr;\n+}\n+\n+static inline void qm_mc_commit(struct qm_portal *portal, u8 myverb)\n+{\n+\tregister struct qm_mc *mc = &portal->mc;\n+\tstruct qm_mc_result *rr = mc->rr + mc->rridx;\n+\n+\tDPAA_ASSERT(mc->state == qman_mc_user);\n+\tlwsync();\n+\tmc->cr->__dont_write_directly__verb = myverb | mc->vbit;\n+\tdcbf(mc->cr);\n+\tdcbit_ro(rr);\n+#ifdef RTE_LIBRTE_DPAA_CHECKING\n+\tmc->state = qman_mc_hw;\n+#endif\n+}\n+\n+static inline struct qm_mc_result *qm_mc_result(struct qm_portal *portal)\n+{\n+\tregister struct qm_mc *mc = &portal->mc;\n+\tstruct qm_mc_result *rr = mc->rr + mc->rridx;\n+\n+\tDPAA_ASSERT(mc->state == qman_mc_hw);\n+\t/* The inactive response register's verb byte always returns zero until\n+\t * its command is submitted and completed. 
This includes the valid-bit,\n+\t * in case you were wondering.\n+\t */\n+\tif (!__raw_readb(&rr->verb)) {\n+\t\tdcbit_ro(rr);\n+\t\treturn NULL;\n+\t}\n+\tmc->rridx ^= 1;\n+\tmc->vbit ^= QM_MCC_VERB_VBIT;\n+#ifdef RTE_LIBRTE_DPAA_CHECKING\n+\tmc->state = qman_mc_idle;\n+#endif\n+\treturn rr;\n+}\n+\n+/* Portal interrupt register API */\n+static inline void qm_isr_set_iperiod(struct qm_portal *portal, u16 iperiod)\n+{\n+\tqm_out(ITPR, iperiod);\n+}\n+\n+static inline u32 __qm_isr_read(struct qm_portal *portal, enum qm_isr_reg n)\n+{\n+#if defined(RTE_ARCH_ARM64)\n+\treturn __qm_in(&portal->addr, QM_REG_ISR + (n << 6));\n+#else\n+\treturn __qm_in(&portal->addr, QM_REG_ISR + (n << 2));\n+#endif\n+}\n+\n+static inline void __qm_isr_write(struct qm_portal *portal, enum qm_isr_reg n,\n+\t\t\t\t  u32 val)\n+{\n+#if defined(RTE_ARCH_ARM64)\n+\t__qm_out(&portal->addr, QM_REG_ISR + (n << 6), val);\n+#else\n+\t__qm_out(&portal->addr, QM_REG_ISR + (n << 2), val);\n+#endif\n+}\ndiff --git a/drivers/bus/dpaa/base/qbman/qman_driver.c b/drivers/bus/dpaa/base/qbman/qman_driver.c\nindex 80dde20..a7faf17 100644\n--- a/drivers/bus/dpaa/base/qbman/qman_driver.c\n+++ b/drivers/bus/dpaa/base/qbman/qman_driver.c\n@@ -66,6 +66,7 @@ static __thread struct dpaa_ioctl_portal_map map = {\n static int fsl_qman_portal_init(uint32_t index, int is_shared)\n {\n \tcpu_set_t cpuset;\n+\tstruct qman_portal *portal;\n \tint loop, ret;\n \tstruct dpaa_ioctl_irq_map irq_map;\n \n@@ -116,6 +117,14 @@ static int fsl_qman_portal_init(uint32_t index, int is_shared)\n \tpcfg.node = NULL;\n \tpcfg.irq = fd;\n \n+\tportal = qman_create_affine_portal(&pcfg, NULL);\n+\tif (!portal) {\n+\t\tpr_err(\"Qman portal initialisation failed (%d)\\n\",\n+\t\t       pcfg.cpu);\n+\t\tprocess_portal_unmap(&map.addr);\n+\t\treturn -EBUSY;\n+\t}\n+\n \tirq_map.type = dpaa_portal_qman;\n \tirq_map.portal_cinh = map.addr.cinh;\n \tprocess_portal_irq_map(fd, &irq_map);\n@@ -124,10 +133,13 @@ static int fsl_qman_portal_init(uint32_t index, int is_shared)\n \n static int fsl_qman_portal_finish(void)\n {\n+\t__maybe_unused const struct qm_portal_config *cfg;\n \tint ret;\n \n \tprocess_portal_irq_unmap(fd);\n \n+\tcfg = qman_destroy_affine_portal();\n+\tBUG_ON(cfg != &pcfg);\n \tret = process_portal_unmap(&map.addr);\n \tif (ret)\n \t\terror(0, ret, \"process_portal_unmap()\");\ndiff --git a/drivers/bus/dpaa/base/qbman/qman_priv.h b/drivers/bus/dpaa/base/qbman/qman_priv.h\nindex e9826c2..4ae2ea5 100644\n--- a/drivers/bus/dpaa/base/qbman/qman_priv.h\n+++ b/drivers/bus/dpaa/base/qbman/qman_priv.h\n@@ -44,10 +44,6 @@\n #include \"dpaa_sys.h\"\n #include <fsl_qman.h>\n \n-#if !defined(CONFIG_FSL_QMAN_FQ_LOOKUP) && defined(RTE_ARCH_ARM64)\n-#error \"_ARM64 requires _FSL_QMAN_FQ_LOOKUP\"\n-#endif\n-\n /* Congestion Groups */\n /*\n  * This wrapper represents a bit-array for the state of the 256 QMan congestion\n@@ -201,13 +197,6 @@ void qm_set_liodns(struct qm_portal_config *pcfg);\n int qman_testwrite_cgr(struct qman_cgr *cgr, u64 i_bcnt,\n \t\t       struct qm_mcr_cgrtestwrite *result);\n \n-#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP\n-/* If the fq object pointer is greater than the size of context_b field,\n- * than a lookup table is required.\n- */\n-int qman_setup_fq_lookup_table(size_t num_entries);\n-#endif\n-\n /*   QMan s/w corenet portal, low-level i/face\t */\n \n /*\ndiff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h\nindex 740ee25..9735e1d 100644\n--- a/drivers/bus/dpaa/include/fsl_qman.h\n+++ 
b/drivers/bus/dpaa/include/fsl_qman.h\n@@ -46,15 +46,6 @@ extern \"C\" {\n \n #include <dpaa_rbtree.h>\n \n-/* FQ lookups (turn this on for 64bit user-space) */\n-#if (__WORDSIZE == 64)\n-#define CONFIG_FSL_QMAN_FQ_LOOKUP\n-/* if FQ lookups are supported, this controls the number of initialised,\n- * s/w-consumed FQs that can be supported at any one time.\n- */\n-#define CONFIG_FSL_QMAN_FQ_LOOKUP_MAX (32 * 1024)\n-#endif\n-\n /* Last updated for v00.800 of the BG */\n \n /* Hardware constants */\n@@ -1254,9 +1245,6 @@ struct qman_fq {\n \tenum qman_fq_state state;\n \tint cgr_groupid;\n \tstruct rb_node node;\n-#ifdef CONFIG_FSL_QMAN_FQ_LOOKUP\n-\tu32 key;\n-#endif\n };\n \n /*\n@@ -1275,6 +1263,761 @@ struct qman_cgr {\n \tstruct list_head node;\n };\n \n+/* Flags to qman_create_fq() */\n+#define QMAN_FQ_FLAG_NO_ENQUEUE      0x00000001 /* can't enqueue */\n+#define QMAN_FQ_FLAG_NO_MODIFY       0x00000002 /* can only enqueue */\n+#define QMAN_FQ_FLAG_TO_DCPORTAL     0x00000004 /* consumed by CAAM/PME/Fman */\n+#define QMAN_FQ_FLAG_LOCKED          0x00000008 /* multi-core locking */\n+#define QMAN_FQ_FLAG_AS_IS           0x00000010 /* query h/w state */\n+#define QMAN_FQ_FLAG_DYNAMIC_FQID    0x00000020 /* (de)allocate fqid */\n+\n+/* Flags to qman_destroy_fq() */\n+#define QMAN_FQ_DESTROY_PARKED       0x00000001 /* FQ can be parked or OOS */\n+\n+/* Flags from qman_fq_state() */\n+#define QMAN_FQ_STATE_CHANGING       0x80000000 /* 'state' is changing */\n+#define QMAN_FQ_STATE_NE             0x40000000 /* retired FQ isn't empty */\n+#define QMAN_FQ_STATE_ORL            0x20000000 /* retired FQ has ORL */\n+#define QMAN_FQ_STATE_BLOCKOOS       0xe0000000 /* if any are set, no OOS */\n+#define QMAN_FQ_STATE_CGR_EN         0x10000000 /* CGR enabled */\n+#define QMAN_FQ_STATE_VDQCR          0x08000000 /* being volatile dequeued */\n+\n+/* Flags to qman_init_fq() */\n+#define QMAN_INITFQ_FLAG_SCHED       0x00000001 /* schedule rather than park */\n+#define QMAN_INITFQ_FLAG_LOCAL       0x00000004 /* set dest portal */\n+\n+/* Flags to qman_enqueue(). NB, the strange numbering is to align with hardware,\n+ * bit-wise. (NB: the PME API is sensitive to these precise numberings too, so\n+ * any change here should be audited in PME.)\n+ */\n+#define QMAN_ENQUEUE_FLAG_WATCH_CGR  0x00080000 /* watch congestion state */\n+#define QMAN_ENQUEUE_FLAG_DCA        0x00008000 /* perform enqueue-DCA */\n+#define QMAN_ENQUEUE_FLAG_DCA_PARK   0x00004000 /* If DCA, requests park */\n+#define QMAN_ENQUEUE_FLAG_DCA_PTR(p)\t\t/* If DCA, p is DQRR entry */ \\\n+\t\t(((u32)(p) << 2) & 0x00000f00)\n+#define QMAN_ENQUEUE_FLAG_C_GREEN    0x00000000 /* choose one C_*** flag */\n+#define QMAN_ENQUEUE_FLAG_C_YELLOW   0x00000008\n+#define QMAN_ENQUEUE_FLAG_C_RED      0x00000010\n+#define QMAN_ENQUEUE_FLAG_C_OVERRIDE 0x00000018\n+/* For the ORP-specific qman_enqueue_orp() variant;\n+ * - this flag indicates \"Not Last In Sequence\", ie. all but the final fragment\n+ *   of a frame.\n+ */\n+#define QMAN_ENQUEUE_FLAG_NLIS       0x01000000\n+/* - this flag performs no enqueue but fills in an ORP sequence number that\n+ *   would otherwise block it (eg. 
if a frame has been dropped).\n+ */\n+#define QMAN_ENQUEUE_FLAG_HOLE       0x02000000\n+/* - this flag performs no enqueue but advances NESN to the given sequence\n+ *   number.\n+ */\n+#define QMAN_ENQUEUE_FLAG_NESN       0x04000000\n+\n+/* Flags to qman_modify_cgr() */\n+#define QMAN_CGR_FLAG_USE_INIT       0x00000001\n+#define QMAN_CGR_MODE_FRAME          0x00000001\n+\n+/**\n+ * qman_get_portal_index - get portal configuration index\n+ */\n+int qman_get_portal_index(void);\n+\n+/**\n+ * qman_affine_channel - return the channel ID of an portal\n+ * @cpu: the cpu whose affine portal is the subject of the query\n+ *\n+ * If @cpu is -1, the affine portal for the current CPU will be used. It is a\n+ * bug to call this function for any value of @cpu (other than -1) that is not a\n+ * member of the cpu mask.\n+ */\n+u16 qman_affine_channel(int cpu);\n+\n+/**\n+ * qman_set_vdq - Issue a volatile dequeue command\n+ * @fq: Frame Queue on which the volatile dequeue command is issued\n+ * @num: Number of Frames requested for volatile dequeue\n+ *\n+ * This function will issue a volatile dequeue command to the QMAN.\n+ */\n+int qman_set_vdq(struct qman_fq *fq, u16 num);\n+\n+/**\n+ * qman_dequeue - Get the DQRR entry after volatile dequeue command\n+ * @fq: Frame Queue on which the volatile dequeue command is issued\n+ *\n+ * This function will return the DQRR entry after a volatile dequeue command\n+ * is issued. It will keep returning NULL until there is no packet available on\n+ * the DQRR.\n+ */\n+struct qm_dqrr_entry *qman_dequeue(struct qman_fq *fq);\n+\n+/**\n+ * qman_dqrr_consume - Consume the DQRR entriy after volatile dequeue\n+ * @fq: Frame Queue on which the volatile dequeue command is issued\n+ * @dq: DQRR entry to consume. This is the one which is provided by the\n+ *    'qbman_dequeue' command.\n+ *\n+ * This will consume the DQRR enrey and make it available for next volatile\n+ * dequeue.\n+ */\n+void qman_dqrr_consume(struct qman_fq *fq,\n+\t\t       struct qm_dqrr_entry *dq);\n+\n+/**\n+ * qman_poll_dqrr - process DQRR (fast-path) entries\n+ * @limit: the maximum number of DQRR entries to process\n+ *\n+ * Use of this function requires that DQRR processing not be interrupt-driven.\n+ * Ie. the value returned by qman_irqsource_get() should not include\n+ * QM_PIRQ_DQRI. If the current CPU is sharing a portal hosted on another CPU,\n+ * this function will return -EINVAL, otherwise the return value is >=0 and\n+ * represents the number of DQRR entries processed.\n+ */\n+int qman_poll_dqrr(unsigned int limit);\n+\n+/**\n+ * qman_poll\n+ *\n+ * Dispatcher logic on a cpu can use this to trigger any maintenance of the\n+ * affine portal. There are two classes of portal processing in question;\n+ * fast-path (which involves demuxing dequeue ring (DQRR) entries and tracking\n+ * enqueue ring (EQCR) consumption), and slow-path (which involves EQCR\n+ * thresholds, congestion state changes, etc). 
This function does whatever\n+ * processing is not triggered by interrupts.\n+ *\n+ * Note, if DQRR and some slow-path processing are poll-driven (rather than\n+ * interrupt-driven) then this function uses a heuristic to determine how often\n+ * to run slow-path processing - as slow-path processing introduces at least a\n+ * minimum latency each time it is run, whereas fast-path (DQRR) processing is\n+ * close to zero-cost if there is no work to be done.\n+ */\n+void qman_poll(void);\n+\n+/**\n+ * qman_stop_dequeues - Stop h/w dequeuing to the s/w portal\n+ *\n+ * Disables DQRR processing of the portal. This is reference-counted, so\n+ * qman_start_dequeues() must be called as many times as qman_stop_dequeues() to\n+ * truly re-enable dequeuing.\n+ */\n+void qman_stop_dequeues(void);\n+\n+/**\n+ * qman_start_dequeues - (Re)start h/w dequeuing to the s/w portal\n+ *\n+ * Enables DQRR processing of the portal. This is reference-counted, so\n+ * qman_start_dequeues() must be called as many times as qman_stop_dequeues() to\n+ * truly re-enable dequeuing.\n+ */\n+void qman_start_dequeues(void);\n+\n+/**\n+ * qman_static_dequeue_add - Add pool channels to the portal SDQCR\n+ * @pools: bit-mask of pool channels, using QM_SDQCR_CHANNELS_POOL(n)\n+ *\n+ * Adds a set of pool channels to the portal's static dequeue command register\n+ * (SDQCR). The requested pools are limited to those the portal has dequeue\n+ * access to.\n+ */\n+void qman_static_dequeue_add(u32 pools);\n+\n+/**\n+ * qman_static_dequeue_del - Remove pool channels from the portal SDQCR\n+ * @pools: bit-mask of pool channels, using QM_SDQCR_CHANNELS_POOL(n)\n+ *\n+ * Removes a set of pool channels from the portal's static dequeue command\n+ * register (SDQCR). The requested pools are limited to those the portal has\n+ * dequeue access to.\n+ */\n+void qman_static_dequeue_del(u32 pools);\n+\n+/**\n+ * qman_static_dequeue_get - return the portal's current SDQCR\n+ *\n+ * Returns the portal's current static dequeue command register (SDQCR). The\n+ * entire register is returned, so if only the currently-enabled pool channels\n+ * are desired, mask the return value with QM_SDQCR_CHANNELS_POOL_MASK.\n+ */\n+u32 qman_static_dequeue_get(void);\n+\n+/**\n+ * qman_dca - Perform a Discrete Consumption Acknowledgment\n+ * @dq: the DQRR entry to be consumed\n+ * @park_request: indicates whether the held-active @fq should be parked\n+ *\n+ * Only allowed in DCA-mode portals, for DQRR entries whose handler callback had\n+ * previously returned 'qman_cb_dqrr_defer'. NB, as with the other APIs, this\n+ * does not take a 'portal' argument but implies the core affine portal from the\n+ * cpu that is currently executing the function. 
For reasons of locking, this\n+ * function must be called from the same CPU as that which processed the DQRR\n+ * entry in the first place.\n+ */\n+void qman_dca(struct qm_dqrr_entry *dq, int park_request);\n+\n+/**\n+ * qman_eqcr_is_empty - Determine if portal's EQCR is empty\n+ *\n+ * For use in situations where a cpu-affine caller needs to determine when all\n+ * enqueues for the local portal have been processed by Qman but can't use the\n+ * QMAN_ENQUEUE_FLAG_WAIT_SYNC flag to do this from the final qman_enqueue().\n+ * The function forces tracking of EQCR consumption (which normally doesn't\n+ * happen until enqueue processing needs to find space to put new enqueue\n+ * commands), and returns zero if the ring still has unprocessed entries,\n+ * non-zero if it is empty.\n+ */\n+int qman_eqcr_is_empty(void);\n+\n+/**\n+ * qman_set_dc_ern - Set the handler for DCP enqueue rejection notifications\n+ * @handler: callback for processing DCP ERNs\n+ * @affine: whether this handler is specific to the locally affine portal\n+ *\n+ * If a hardware block's interface to Qman (ie. its direct-connect portal, or\n+ * DCP) is configured not to receive enqueue rejections, then any enqueues\n+ * through that DCP that are rejected will be sent to a given software portal.\n+ * If @affine is non-zero, then this handler will only be used for DCP ERNs\n+ * received on the portal affine to the current CPU. If multiple CPUs share a\n+ * portal and they all call this function, they will be setting the handler for\n+ * the same portal! If @affine is zero, then this handler will be global to all\n+ * portals handled by this instance of the driver. Only those portals that do\n+ * not have their own affine handler will use the global handler.\n+ */\n+void qman_set_dc_ern(qman_cb_dc_ern handler, int affine);\n+\n+\t/* FQ management */\n+\t/* ------------- */\n+/**\n+ * qman_create_fq - Allocates a FQ\n+ * @fqid: the index of the FQD to encapsulate, must be \"Out of Service\"\n+ * @flags: bit-mask of QMAN_FQ_FLAG_*** options\n+ * @fq: memory for storing the 'fq', with callbacks filled in\n+ *\n+ * Creates a frame queue object for the given @fqid, unless the\n+ * QMAN_FQ_FLAG_DYNAMIC_FQID flag is set in @flags, in which case a FQID is\n+ * dynamically allocated (or the function fails if none are available). Once\n+ * created, the caller should not touch the memory at 'fq' except as extended to\n+ * adjacent memory for user-defined fields (see the definition of \"struct\n+ * qman_fq\" for more info). NO_MODIFY is only intended for enqueuing to\n+ * pre-existing frame-queues that aren't to be otherwise interfered with, it\n+ * prevents all other modifications to the frame queue. The TO_DCPORTAL flag\n+ * causes the driver to honour any contextB modifications requested in the\n+ * qm_init_fq() API, as this indicates the frame queue will be consumed by a\n+ * direct-connect portal (PME, CAAM, or Fman). When frame queues are consumed by\n+ * software portals, the contextB field is controlled by the driver and can't be\n+ * modified by the caller. 
If the AS_IS flag is specified, management commands\n+ * will be used on portal @p to query state for frame queue @fqid and construct\n+ * a frame queue object based on that, rather than assuming/requiring that it be\n+ * Out of Service.\n+ */\n+int qman_create_fq(u32 fqid, u32 flags, struct qman_fq *fq);\n+\n+/**\n+ * qman_destroy_fq - Deallocates a FQ\n+ * @fq: the frame queue object to release\n+ * @flags: bit-mask of QMAN_FQ_FREE_*** options\n+ *\n+ * The memory for this frame queue object ('fq' provided in qman_create_fq()) is\n+ * not deallocated but the caller regains ownership, to do with as desired. The\n+ * FQ must be in the 'out-of-service' state unless the QMAN_FQ_FREE_PARKED flag\n+ * is specified, in which case it may also be in the 'parked' state.\n+ */\n+void qman_destroy_fq(struct qman_fq *fq, u32 flags);\n+\n+/**\n+ * qman_fq_fqid - Queries the frame queue ID of a FQ object\n+ * @fq: the frame queue object to query\n+ */\n+u32 qman_fq_fqid(struct qman_fq *fq);\n+\n+/**\n+ * qman_fq_state - Queries the state of a FQ object\n+ * @fq: the frame queue object to query\n+ * @state: pointer to state enum to return the FQ scheduling state\n+ * @flags: pointer to state flags to receive QMAN_FQ_STATE_*** bitmask\n+ *\n+ * Queries the state of the FQ object, without performing any h/w commands.\n+ * This captures the state, as seen by the driver, at the time the function\n+ * executes.\n+ */\n+void qman_fq_state(struct qman_fq *fq, enum qman_fq_state *state, u32 *flags);\n+\n+/**\n+ * qman_init_fq - Initialises FQ fields, leaves the FQ \"parked\" or \"scheduled\"\n+ * @fq: the frame queue object to modify, must be 'parked' or new.\n+ * @flags: bit-mask of QMAN_INITFQ_FLAG_*** options\n+ * @opts: the FQ-modification settings, as defined in the low-level API\n+ *\n+ * The @opts parameter comes from the low-level portal API. Select\n+ * QMAN_INITFQ_FLAG_SCHED in @flags to cause the frame queue to be scheduled\n+ * rather than parked. NB, @opts can be NULL.\n+ *\n+ * Note that some fields and options within @opts may be ignored or overwritten\n+ * by the driver;\n+ * 1. the 'count' and 'fqid' fields are always ignored (this operation only\n+ * affects one frame queue: @fq).\n+ * 2. the QM_INITFQ_WE_CONTEXTB option of the 'we_mask' field and the associated\n+ * 'fqd' structure's 'context_b' field are sometimes overwritten;\n+ *   - if @fq was not created with QMAN_FQ_FLAG_TO_DCPORTAL, then context_b is\n+ *     initialised to a value used by the driver for demux.\n+ *   - if context_b is initialised for demux, so is context_a in case stashing\n+ *     is requested (see item 4).\n+ * (So caller control of context_b is only possible for TO_DCPORTAL frame queue\n+ * objects.)\n+ * 3. if @flags contains QMAN_INITFQ_FLAG_LOCAL, the 'fqd' structure's\n+ * 'dest::channel' field will be overwritten to match the portal used to issue\n+ * the command. If the WE_DESTWQ write-enable bit had already been set by the\n+ * caller, the channel workqueue will be left as-is, otherwise the write-enable\n+ * bit is set and the workqueue is set to a default of 4. If the \"LOCAL\" flag\n+ * isn't set, the destination channel/workqueue fields and the write-enable bit\n+ * are left as-is.\n+ * 4. 
if the driver overwrites context_a/b for demux, then if\n+ * QM_INITFQ_WE_CONTEXTA is set, the driver will only overwrite\n+ * context_a.address fields and will leave the stashing fields provided by the\n+ * user alone, otherwise it will zero out the context_a.stashing fields.\n+ */\n+int qman_init_fq(struct qman_fq *fq, u32 flags, struct qm_mcc_initfq *opts);\n+\n+/**\n+ * qman_schedule_fq - Schedules a FQ\n+ * @fq: the frame queue object to schedule, must be 'parked'\n+ *\n+ * Schedules the frame queue, which must be Parked, which takes it to\n+ * Tentatively-Scheduled or Truly-Scheduled depending on its fill-level.\n+ */\n+int qman_schedule_fq(struct qman_fq *fq);\n+\n+/**\n+ * qman_retire_fq - Retires a FQ\n+ * @fq: the frame queue object to retire\n+ * @flags: FQ flags (as per qman_fq_state) if retirement completes immediately\n+ *\n+ * Retires the frame queue. This returns zero if it succeeds immediately, +1 if\n+ * the retirement was started asynchronously, otherwise it returns negative for\n+ * failure. When this function returns zero, @flags is set to indicate whether\n+ * the retired FQ is empty and/or whether it has any ORL fragments (to show up\n+ * as ERNs). Otherwise the corresponding flags will be known when a subsequent\n+ * FQRN message shows up on the portal's message ring.\n+ *\n+ * NB, if the retirement is asynchronous (the FQ was in the Truly Scheduled or\n+ * Active state), the completion will be via the message ring as a FQRN - but\n+ * the corresponding callback may occur before this function returns!! Ie. the\n+ * caller should be prepared to accept the callback as the function is called,\n+ * not only once it has returned.\n+ */\n+int qman_retire_fq(struct qman_fq *fq, u32 *flags);\n+\n+/**\n+ * qman_oos_fq - Puts a FQ \"out of service\"\n+ * @fq: the frame queue object to be put out-of-service, must be 'retired'\n+ *\n+ * The frame queue must be retired and empty, and if any order restoration list\n+ * was released as ERNs at the time of retirement, they must all be consumed.\n+ */\n+int qman_oos_fq(struct qman_fq *fq);\n+\n+/**\n+ * qman_fq_flow_control - Set the XON/XOFF state of a FQ\n+ * @fq: the frame queue object to be set to XON/XOFF state, must not be 'oos',\n+ * or 'retired' or 'parked' state\n+ * @xon: boolean to set fq in XON or XOFF state\n+ *\n+ * The frame should be in Tentatively Scheduled state or Truly Schedule sate,\n+ * otherwise the IFSI interrupt will be asserted.\n+ */\n+int qman_fq_flow_control(struct qman_fq *fq, int xon);\n+\n+/**\n+ * qman_query_fq - Queries FQD fields (via h/w query command)\n+ * @fq: the frame queue object to be queried\n+ * @fqd: storage for the queried FQD fields\n+ */\n+int qman_query_fq(struct qman_fq *fq, struct qm_fqd *fqd);\n+\n+/**\n+ * qman_query_fq_has_pkts - Queries non-programmable FQD fields and returns '1'\n+ * if packets are in the frame queue. If there are no packets on frame\n+ * queue '0' is returned.\n+ * @fq: the frame queue object to be queried\n+ */\n+int qman_query_fq_has_pkts(struct qman_fq *fq);\n+\n+/**\n+ * qman_query_fq_np - Queries non-programmable FQD fields\n+ * @fq: the frame queue object to be queried\n+ * @np: storage for the queried FQD fields\n+ */\n+int qman_query_fq_np(struct qman_fq *fq, struct qm_mcr_queryfq_np *np);\n+\n+/**\n+ * qman_query_wq - Queries work queue lengths\n+ * @query_dedicated: If non-zero, query length of WQs in the channel dedicated\n+ *\t\tto this software portal. 
Otherwise, query length of WQs in a\n+ *\t\tchannel  specified in wq.\n+ * @wq: storage for the queried WQs lengths. Also specified the channel to\n+ *\tto query if query_dedicated is zero.\n+ */\n+int qman_query_wq(u8 query_dedicated, struct qm_mcr_querywq *wq);\n+\n+/**\n+ * qman_volatile_dequeue - Issue a volatile dequeue command\n+ * @fq: the frame queue object to dequeue from\n+ * @flags: a bit-mask of QMAN_VOLATILE_FLAG_*** options\n+ * @vdqcr: bit mask of QM_VDQCR_*** options, as per qm_dqrr_vdqcr_set()\n+ *\n+ * Attempts to lock access to the portal's VDQCR volatile dequeue functionality.\n+ * The function will block and sleep if QMAN_VOLATILE_FLAG_WAIT is specified and\n+ * the VDQCR is already in use, otherwise returns non-zero for failure. If\n+ * QMAN_VOLATILE_FLAG_FINISH is specified, the function will only return once\n+ * the VDQCR command has finished executing (ie. once the callback for the last\n+ * DQRR entry resulting from the VDQCR command has been called). If not using\n+ * the FINISH flag, completion can be determined either by detecting the\n+ * presence of the QM_DQRR_STAT_UNSCHEDULED and QM_DQRR_STAT_DQCR_EXPIRED bits\n+ * in the \"stat\" field of the \"struct qm_dqrr_entry\" passed to the FQ's dequeue\n+ * callback, or by waiting for the QMAN_FQ_STATE_VDQCR bit to disappear from the\n+ * \"flags\" retrieved from qman_fq_state().\n+ */\n+int qman_volatile_dequeue(struct qman_fq *fq, u32 flags, u32 vdqcr);\n+\n+/**\n+ * qman_enqueue - Enqueue a frame to a frame queue\n+ * @fq: the frame queue object to enqueue to\n+ * @fd: a descriptor of the frame to be enqueued\n+ * @flags: bit-mask of QMAN_ENQUEUE_FLAG_*** options\n+ *\n+ * Fills an entry in the EQCR of portal @qm to enqueue the frame described by\n+ * @fd. The descriptor details are copied from @fd to the EQCR entry, the 'pid'\n+ * field is ignored. The return value is non-zero on error, such as ring full\n+ * (and FLAG_WAIT not specified), congestion avoidance (FLAG_WATCH_CGR\n+ * specified), etc. If the ring is full and FLAG_WAIT is specified, this\n+ * function will block. If FLAG_INTERRUPT is set, the EQCI bit of the portal\n+ * interrupt will assert when Qman consumes the EQCR entry (subject to \"status\n+ * disable\", \"enable\", and \"inhibit\" registers). If FLAG_DCA is set, Qman will\n+ * perform an implied \"discrete consumption acknowledgment\" on the dequeue\n+ * ring's (DQRR) entry, at the ring index specified by the FLAG_DCA_IDX(x)\n+ * macro. (As an alternative to issuing explicit DCA actions on DQRR entries,\n+ * this implicit DCA can delay the release of a \"held active\" frame queue\n+ * corresponding to a DQRR entry until Qman consumes the EQCR entry - providing\n+ * order-preservation semantics in packet-forwarding scenarios.) If FLAG_DCA is\n+ * set, then FLAG_DCA_PARK can also be set to imply that the DQRR consumption\n+ * acknowledgment should \"park request\" the \"held active\" frame queue. Ie.\n+ * when the portal eventually releases that frame queue, it will be left in the\n+ * Parked state rather than Tentatively Scheduled or Truly Scheduled. 
If the\n+ * portal is watching congestion groups, the QMAN_ENQUEUE_FLAG_WATCH_CGR flag\n+ * is requested, and the FQ is a member of a congestion group, then this\n+ * function returns -EAGAIN if the congestion group is currently congested.\n+ * Note, this does not eliminate ERNs, as the async interface means we can be\n+ * sending enqueue commands to an un-congested FQ that becomes congested before\n+ * the enqueue commands are processed, but it does minimise needless thrashing\n+ * of an already busy hardware resource by throttling many of the to-be-dropped\n+ * enqueues \"at the source\".\n+ */\n+int qman_enqueue(struct qman_fq *fq, const struct qm_fd *fd, u32 flags);\n+\n+int qman_enqueue_multi(struct qman_fq *fq,\n+\t\t       const struct qm_fd *fd,\n+\t\tint frames_to_send);\n+\n+typedef int (*qman_cb_precommit) (void *arg);\n+\n+/**\n+ * qman_enqueue_orp - Enqueue a frame to a frame queue using an ORP\n+ * @fq: the frame queue object to enqueue to\n+ * @fd: a descriptor of the frame to be enqueued\n+ * @flags: bit-mask of QMAN_ENQUEUE_FLAG_*** options\n+ * @orp: the frame queue object used as an order restoration point.\n+ * @orp_seqnum: the sequence number of this frame in the order restoration path\n+ *\n+ * Similar to qman_enqueue(), but with the addition of an Order Restoration\n+ * Point (@orp) and corresponding sequence number (@orp_seqnum) for this\n+ * enqueue operation to employ order restoration. Each frame queue object acts\n+ * as an Order Definition Point (ODP) by providing each frame dequeued from it\n+ * with an incrementing sequence number, this value is generally ignored unless\n+ * that sequence of dequeued frames will need order restoration later. Each\n+ * frame queue object also encapsulates an Order Restoration Point (ORP), which\n+ * is a re-assembly context for re-ordering frames relative to their sequence\n+ * numbers as they are enqueued. The ORP does not have to be within the frame\n+ * queue that receives the enqueued frame, in fact it is usually the frame\n+ * queue from which the frames were originally dequeued. For the purposes of\n+ * order restoration, multiple frames (or \"fragments\") can be enqueued for a\n+ * single sequence number by setting the QMAN_ENQUEUE_FLAG_NLIS flag for all\n+ * enqueues except the final fragment of a given sequence number. Ordering\n+ * between sequence numbers is guaranteed, even if fragments of different\n+ * sequence numbers are interlaced with one another. Fragments of the same\n+ * sequence number will retain the order in which they are enqueued. If no\n+ * enqueue is to performed, QMAN_ENQUEUE_FLAG_HOLE indicates that the given\n+ * sequence number is to be \"skipped\" by the ORP logic (eg. if a frame has been\n+ * dropped from a sequence), or QMAN_ENQUEUE_FLAG_NESN indicates that the given\n+ * sequence number should become the ORP's \"Next Expected Sequence Number\".\n+ *\n+ * Side note: a frame queue object can be used purely as an ORP, without\n+ * carrying any frames at all. 
Care should be taken not to deallocate a frame\n+ * queue object that is being actively used as an ORP, as a future allocation\n+ * of the frame queue object may start using the internal ORP before the\n+ * previous use has finished.\n+ */\n+int qman_enqueue_orp(struct qman_fq *fq, const struct qm_fd *fd, u32 flags,\n+\t\t     struct qman_fq *orp, u16 orp_seqnum);\n+\n+/**\n+ * qman_alloc_fqid_range - Allocate a contiguous range of FQIDs\n+ * @result: is set by the API to the base FQID of the allocated range\n+ * @count: the number of FQIDs required\n+ * @align: required alignment of the allocated range\n+ * @partial: non-zero if the API can return fewer than @count FQIDs\n+ *\n+ * Returns the number of frame queues allocated, or a negative error code. If\n+ * @partial is non zero, the allocation request may return a smaller range of\n+ * FQs than requested (though alignment will be as requested). If @partial is\n+ * zero, the return value will either be 'count' or negative.\n+ */\n+int qman_alloc_fqid_range(u32 *result, u32 count, u32 align, int partial);\n+static inline int qman_alloc_fqid(u32 *result)\n+{\n+\tint ret = qman_alloc_fqid_range(result, 1, 0, 0);\n+\n+\treturn (ret > 0) ? 0 : ret;\n+}\n+\n+/**\n+ * qman_release_fqid_range - Release the specified range of frame queue IDs\n+ * @fqid: the base FQID of the range to deallocate\n+ * @count: the number of FQIDs in the range\n+ *\n+ * This function can also be used to seed the allocator with ranges of FQIDs\n+ * that it can subsequently allocate from.\n+ */\n+void qman_release_fqid_range(u32 fqid, unsigned int count);\n+static inline void qman_release_fqid(u32 fqid)\n+{\n+\tqman_release_fqid_range(fqid, 1);\n+}\n+\n+void qman_seed_fqid_range(u32 fqid, unsigned int count);\n+\n+int qman_shutdown_fq(u32 fqid);\n+\n+/**\n+ * qman_reserve_fqid_range - Reserve the specified range of frame queue IDs\n+ * @fqid: the base FQID of the range to deallocate\n+ * @count: the number of FQIDs in the range\n+ */\n+int qman_reserve_fqid_range(u32 fqid, unsigned int count);\n+static inline int qman_reserve_fqid(u32 fqid)\n+{\n+\treturn qman_reserve_fqid_range(fqid, 1);\n+}\n+\n+/* Pool-channel management */\n+/**\n+ * qman_alloc_pool_range - Allocate a contiguous range of pool-channel IDs\n+ * @result: is set by the API to the base pool-channel ID of the allocated range\n+ * @count: the number of pool-channel IDs required\n+ * @align: required alignment of the allocated range\n+ * @partial: non-zero if the API can return fewer than @count\n+ *\n+ * Returns the number of pool-channel IDs allocated, or a negative error code.\n+ * If @partial is non zero, the allocation request may return a smaller range of\n+ * than requested (though alignment will be as requested). If @partial is zero,\n+ * the return value will either be 'count' or negative.\n+ */\n+int qman_alloc_pool_range(u32 *result, u32 count, u32 align, int partial);\n+static inline int qman_alloc_pool(u32 *result)\n+{\n+\tint ret = qman_alloc_pool_range(result, 1, 0, 0);\n+\n+\treturn (ret > 0) ? 
0 : ret;\n+}\n+\n+/**\n+ * qman_release_pool_range - Release the specified range of pool-channel IDs\n+ * @id: the base pool-channel ID of the range to deallocate\n+ * @count: the number of pool-channel IDs in the range\n+ */\n+void qman_release_pool_range(u32 id, unsigned int count);\n+static inline void qman_release_pool(u32 id)\n+{\n+\tqman_release_pool_range(id, 1);\n+}\n+\n+/**\n+ * qman_reserve_pool_range - Reserve the specified range of pool-channel IDs\n+ * @id: the base pool-channel ID of the range to reserve\n+ * @count: the number of pool-channel IDs in the range\n+ */\n+int qman_reserve_pool_range(u32 id, unsigned int count);\n+static inline int qman_reserve_pool(u32 id)\n+{\n+\treturn qman_reserve_pool_range(id, 1);\n+}\n+\n+void qman_seed_pool_range(u32 id, unsigned int count);\n+\n+\t/* CGR management */\n+\t/* -------------- */\n+/**\n+ * qman_create_cgr - Register a congestion group object\n+ * @cgr: the 'cgr' object, with fields filled in\n+ * @flags: QMAN_CGR_FLAG_* values\n+ * @opts: optional state of CGR settings\n+ *\n+ * Registers this object to receiving congestion entry/exit callbacks on the\n+ * portal affine to the cpu portal on which this API is executed. If opts is\n+ * NULL then only the callback (cgr->cb) function is registered. If @flags\n+ * contains QMAN_CGR_FLAG_USE_INIT, then an init hw command (which will reset\n+ * any unspecified parameters) will be used rather than a modify hw hardware\n+ * (which only modifies the specified parameters).\n+ */\n+int qman_create_cgr(struct qman_cgr *cgr, u32 flags,\n+\t\t    struct qm_mcc_initcgr *opts);\n+\n+/**\n+ * qman_create_cgr_to_dcp - Register a congestion group object to DCP portal\n+ * @cgr: the 'cgr' object, with fields filled in\n+ * @flags: QMAN_CGR_FLAG_* values\n+ * @dcp_portal: the DCP portal to which the cgr object is registered.\n+ * @opts: optional state of CGR settings\n+ *\n+ */\n+int qman_create_cgr_to_dcp(struct qman_cgr *cgr, u32 flags, u16 dcp_portal,\n+\t\t\t   struct qm_mcc_initcgr *opts);\n+\n+/**\n+ * qman_delete_cgr - Deregisters a congestion group object\n+ * @cgr: the 'cgr' object to deregister\n+ *\n+ * \"Unplugs\" this CGR object from the portal affine to the cpu on which this API\n+ * is executed. This must be excuted on the same affine portal on which it was\n+ * created.\n+ */\n+int qman_delete_cgr(struct qman_cgr *cgr);\n+\n+/**\n+ * qman_modify_cgr - Modify CGR fields\n+ * @cgr: the 'cgr' object to modify\n+ * @flags: QMAN_CGR_FLAG_* values\n+ * @opts: the CGR-modification settings\n+ *\n+ * The @opts parameter comes from the low-level portal API, and can be NULL.\n+ * Note that some fields and options within @opts may be ignored or overwritten\n+ * by the driver, in particular the 'cgrid' field is ignored (this operation\n+ * only affects the given CGR object). 
If @flags contains\n+ * QMAN_CGR_FLAG_USE_INIT, then an init hw command (which will reset any\n+ * unspecified parameters) will be used rather than a modify hw hardware (which\n+ * only modifies the specified parameters).\n+ */\n+int qman_modify_cgr(struct qman_cgr *cgr, u32 flags,\n+\t\t    struct qm_mcc_initcgr *opts);\n+\n+/**\n+ * qman_query_cgr - Queries CGR fields\n+ * @cgr: the 'cgr' object to query\n+ * @result: storage for the queried congestion group record\n+ */\n+int qman_query_cgr(struct qman_cgr *cgr, struct qm_mcr_querycgr *result);\n+\n+/**\n+ * qman_query_congestion - Queries the state of all congestion groups\n+ * @congestion: storage for the queried state of all congestion groups\n+ */\n+int qman_query_congestion(struct qm_mcr_querycongestion *congestion);\n+\n+/**\n+ * qman_alloc_cgrid_range - Allocate a contiguous range of CGR IDs\n+ * @result: is set by the API to the base CGR ID of the allocated range\n+ * @count: the number of CGR IDs required\n+ * @align: required alignment of the allocated range\n+ * @partial: non-zero if the API can return fewer than @count\n+ *\n+ * Returns the number of CGR IDs allocated, or a negative error code.\n+ * If @partial is non zero, the allocation request may return a smaller range of\n+ * than requested (though alignment will be as requested). If @partial is zero,\n+ * the return value will either be 'count' or negative.\n+ */\n+int qman_alloc_cgrid_range(u32 *result, u32 count, u32 align, int partial);\n+static inline int qman_alloc_cgrid(u32 *result)\n+{\n+\tint ret = qman_alloc_cgrid_range(result, 1, 0, 0);\n+\n+\treturn (ret > 0) ? 0 : ret;\n+}\n+\n+/**\n+ * qman_release_cgrid_range - Release the specified range of CGR IDs\n+ * @id: the base CGR ID of the range to deallocate\n+ * @count: the number of CGR IDs in the range\n+ */\n+void qman_release_cgrid_range(u32 id, unsigned int count);\n+static inline void qman_release_cgrid(u32 id)\n+{\n+\tqman_release_cgrid_range(id, 1);\n+}\n+\n+/**\n+ * qman_reserve_cgrid_range - Reserve the specified range of CGR ID\n+ * @id: the base CGR ID of the range to reserve\n+ * @count: the number of CGR IDs in the range\n+ */\n+int qman_reserve_cgrid_range(u32 id, unsigned int count);\n+static inline int qman_reserve_cgrid(u32 id)\n+{\n+\treturn qman_reserve_cgrid_range(id, 1);\n+}\n+\n+void qman_seed_cgrid_range(u32 id, unsigned int count);\n+\n+\t/* Helpers */\n+\t/* ------- */\n+/**\n+ * qman_poll_fq_for_init - Check if an FQ has been initialised from OOS\n+ * @fqid: the FQID that will be initialised by other s/w\n+ *\n+ * In many situations, a FQID is provided for communication between s/w\n+ * entities, and whilst the consumer is responsible for initialising and\n+ * scheduling the FQ, the producer(s) generally create a wrapper FQ object using\n+ * and only call qman_enqueue() (no FQ initialisation, scheduling, etc). Ie;\n+ *     qman_create_fq(..., QMAN_FQ_FLAG_NO_MODIFY, ...);\n+ * However, data can not be enqueued to the FQ until it is initialised out of\n+ * the OOS state - this function polls for that condition. It is particularly\n+ * useful for users of IPC functions - each endpoint's Rx FQ is the other\n+ * endpoint's Tx FQ, so each side can initialise and schedule their Rx FQ object\n+ * and then use this API on the (NO_MODIFY) Tx FQ object in order to\n+ * synchronise. 
The function returns zero for success, +1 if the FQ is still in\n+ * the OOS state, or negative if there was an error.\n+ */\n+static inline int qman_poll_fq_for_init(struct qman_fq *fq)\n+{\n+\tstruct qm_mcr_queryfq_np np;\n+\tint err;\n+\n+\terr = qman_query_fq_np(fq, &np);\n+\tif (err)\n+\t\treturn err;\n+\tif ((np.state & QM_MCR_NP_STATE_MASK) == QM_MCR_NP_STATE_OOS)\n+\t\treturn 1;\n+\treturn 0;\n+}\n+\n+#if __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__\n+#define cpu_to_hw_sg(x) (x)\n+#define hw_sg_to_cpu(x) (x)\n+#else\n+#define cpu_to_hw_sg(x)  __cpu_to_hw_sg(x)\n+#define hw_sg_to_cpu(x)  __hw_sg_to_cpu(x)\n+\n+static inline void __cpu_to_hw_sg(struct qm_sg_entry *sgentry)\n+{\n+\tsgentry->opaque = cpu_to_be64(sgentry->opaque);\n+\tsgentry->val = cpu_to_be32(sgentry->val);\n+\tsgentry->val_off = cpu_to_be16(sgentry->val_off);\n+}\n+\n+static inline void __hw_sg_to_cpu(struct qm_sg_entry *sgentry)\n+{\n+\tsgentry->opaque = be64_to_cpu(sgentry->opaque);\n+\tsgentry->val = be32_to_cpu(sgentry->val);\n+\tsgentry->val_off = be16_to_cpu(sgentry->val_off);\n+}\n+#endif\n \n #ifdef __cplusplus\n }\ndiff --git a/drivers/bus/dpaa/include/fsl_usd.h b/drivers/bus/dpaa/include/fsl_usd.h\nindex b0d953f..a4897b0 100644\n--- a/drivers/bus/dpaa/include/fsl_usd.h\n+++ b/drivers/bus/dpaa/include/fsl_usd.h\n@@ -42,6 +42,7 @@\n #define __FSL_USD_H\n \n #include <compat.h>\n+#include <fsl_qman.h>\n \n #ifdef __cplusplus\n extern \"C\" {\n",
    "prefixes": [
        "dpdk-dev",
        "v2",
        "11/40"
    ]
}
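
A minimal, hypothetical producer-side sketch based on the qman_poll_fq_for_init() usage described in the patch content above (wrap an externally owned FQID with QMAN_FQ_FLAG_NO_MODIFY, poll until the consumer has initialised it out of OOS, then enqueue). It is not part of the patch or of the API response; example_producer, peer_fqid and the frame descriptor are illustrative assumptions.

/* Hypothetical usage sketch against the qman API declared in fsl_qman.h. */
#include <fsl_qman.h>

static struct qman_fq tx_fq;

static int example_producer(u32 peer_fqid, const struct qm_fd *fd)
{
	int ret;

	/* Enqueue-only wrapper: the consumer owns initialisation/scheduling. */
	ret = qman_create_fq(peer_fqid, QMAN_FQ_FLAG_NO_MODIFY, &tx_fq);
	if (ret)
		return ret;

	/* Poll until the peer has taken the FQ out of the OOS state:
	 * qman_poll_fq_for_init() returns +1 while still OOS, 0 on success,
	 * negative on error.
	 */
	do {
		ret = qman_poll_fq_for_init(&tx_fq);
	} while (ret == 1);
	if (ret < 0)
		return ret;

	/* Plain enqueue, no WAIT/DCA/CGR-watch flags for this simple case. */
	return qman_enqueue(&tx_fq, fd, 0);
}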