get:
Show a patch.

patch:
Partially update a patch.

put:
Update a patch (full replacement).
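
As an illustration (not part of the recorded exchange below), the patch resource can be fetched with any HTTP client. The following is a minimal sketch that assumes Python's requests library is available; the URL and the field names it prints come straight from the response recorded below, which was served to a plain, unauthenticated GET.

import requests

# Fetch the patch resource shown in the recorded response below.
resp = requests.get("http://patches.dpdk.org/api/patches/125887/")
resp.raise_for_status()
patch = resp.json()

# A few of the fields present in the recorded response:
print(patch["name"])   # "[11/13] net/nfp: move NFDk logic to own source file"
print(patch["state"])  # "accepted"
print(patch["mbox"])   # URL of the raw mbox, suitable for applying with `git am`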

GET /api/patches/125887/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 125887,
    "url": "http://patches.dpdk.org/api/patches/125887/?format=api",
    "web_url": "http://patches.dpdk.org/project/dpdk/patch/20230410110015.2973660-12-chaoyong.he@corigine.com/",
    "project": {
        "id": 1,
        "url": "http://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<20230410110015.2973660-12-chaoyong.he@corigine.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/20230410110015.2973660-12-chaoyong.he@corigine.com",
    "date": "2023-04-10T11:00:13",
    "name": "[11/13] net/nfp: move NFDk logic to own source file",
    "commit_ref": null,
    "pull_url": null,
    "state": "accepted",
    "archived": true,
    "hash": "df291769af72647605502a4dce125b2ee5f84c1c",
    "submitter": {
        "id": 2554,
        "url": "http://patches.dpdk.org/api/people/2554/?format=api",
        "name": "Chaoyong He",
        "email": "chaoyong.he@corigine.com"
    },
    "delegate": {
        "id": 319,
        "url": "http://patches.dpdk.org/api/users/319/?format=api",
        "username": "fyigit",
        "first_name": "Ferruh",
        "last_name": "Yigit",
        "email": "ferruh.yigit@amd.com"
    },
    "mbox": "http://patches.dpdk.org/project/dpdk/patch/20230410110015.2973660-12-chaoyong.he@corigine.com/mbox/",
    "series": [
        {
            "id": 27651,
            "url": "http://patches.dpdk.org/api/series/27651/?format=api",
            "web_url": "http://patches.dpdk.org/project/dpdk/list/?series=27651",
            "date": "2023-04-10T11:00:02",
            "name": "Sync the kernel driver logic",
            "version": 1,
            "mbox": "http://patches.dpdk.org/series/27651/mbox/"
        }
    ],
    "comments": "http://patches.dpdk.org/api/patches/125887/comments/",
    "check": "warning",
    "checks": "http://patches.dpdk.org/api/patches/125887/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@inbox.dpdk.org",
        "Delivered-To": "patchwork@inbox.dpdk.org",
        "Received": [
            "from mails.dpdk.org (mails.dpdk.org [217.70.189.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id DEE2842910;\n\tMon, 10 Apr 2023 13:02:27 +0200 (CEST)",
            "from mails.dpdk.org (localhost [127.0.0.1])\n\tby mails.dpdk.org (Postfix) with ESMTP id CDE9042D67;\n\tMon, 10 Apr 2023 13:01:15 +0200 (CEST)",
            "from NAM12-DM6-obe.outbound.protection.outlook.com\n (mail-dm6nam12on2132.outbound.protection.outlook.com [40.107.243.132])\n by mails.dpdk.org (Postfix) with ESMTP id 9D65E42D5A\n for <dev@dpdk.org>; Mon, 10 Apr 2023 13:01:13 +0200 (CEST)",
            "from SJ0PR13MB5545.namprd13.prod.outlook.com (2603:10b6:a03:424::5)\n by DM6PR13MB3882.namprd13.prod.outlook.com (2603:10b6:5:22a::22) with\n Microsoft SMTP Server (version=TLS1_2,\n cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6277.31; Mon, 10 Apr\n 2023 11:01:11 +0000",
            "from SJ0PR13MB5545.namprd13.prod.outlook.com\n ([fe80::baa6:b49:d2f4:c60e]) by SJ0PR13MB5545.namprd13.prod.outlook.com\n ([fe80::baa6:b49:d2f4:c60e%4]) with mapi id 15.20.6277.036; Mon, 10 Apr 2023\n 11:01:11 +0000"
        ],
        "ARC-Seal": "i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;\n b=gXl9KaTDSUfhPJvItnR3hW8lp0BW4NZq18ovwt9DgMy65L6poo0gu0o1os53Z/+ys+K1SpIg0x/6Ov2qAZiLVnzEBhrHKwCP6tiIp4YU02rlM6NHUqYV1zgE0RwSb8/1TrKNqFCWgd2HqYTg/yTDU8MTo/AnKjgekOKqPiUIGHCneueS/MaU9PvIEwliNRkBSS83fJC6LrO2xAWxaGK5EAbhauFRtZYFJqOGXK6LgG/wQah0CPN51Xdlq4RfBtqMVlGGXpHvo26wsOdA/eqM6odRqhVPIaRO41440h8ooxm3WpGIXekVtCDaNtxq9yZxa2W7m1Qm46JOp7UBaIylYA==",
        "ARC-Message-Signature": "i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;\n s=arcselector9901;\n h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;\n bh=AdA6YLZ6Z2hAEVZH+VF2RA41dPYHTG+CTrOWcuMUAVA=;\n b=Ats8IyBZg57JPQLLWqMbH+fwdA9QYzb7UUkJ0Jebnc618GjZ4HnQuFyiszKN02QGe8B2F9dxXKbxxFpJupNTezE4h4dQwXH5p41NvnlEK4FzmGQwOx3DuarEzLB9c7F+c7GwCnJJKNpjEPL4CUZlPsQO/yTUWCrWMYI/PTqoRrH2CEezRcHm7GYUdFg6OUlHwD3lZutr+2USa1NeE1tnSb+3NoW1njZJSSHerIJEFWXJBvixpagrUqTa2U39gqIlD6yQtw5zyAKTtGU0Vl4h1Urfy4QYgpFSUCGWgqva/3R+YsRPLOYzOC5CrJUpFPjvkhTiziN9dIAmt+HKLcnBNw==",
        "ARC-Authentication-Results": "i=1; mx.microsoft.com 1; spf=pass\n smtp.mailfrom=corigine.com; dmarc=pass action=none header.from=corigine.com;\n dkim=pass header.d=corigine.com; arc=none",
        "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed;\n d=corigine.onmicrosoft.com; s=selector2-corigine-onmicrosoft-com;\n h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;\n bh=AdA6YLZ6Z2hAEVZH+VF2RA41dPYHTG+CTrOWcuMUAVA=;\n b=MTf9VxtLOfzNfuPkf9Tt3MDgMNa8URbT7WEyxDL47Kj7er+N8EJSBM+bu5Doy05erlilL/pAEJP5Jni/EYBkFnvICqr0a1sy0QRx9is31e6eeN3JPtuRV+wxsgBcv99eHd4MQ8DVfyhy43NVo1/sukkaeqN3fixIFLnPgPuXGk4=",
        "Authentication-Results": "dkim=none (message not signed)\n header.d=none;dmarc=none action=none header.from=corigine.com;",
        "From": "Chaoyong He <chaoyong.he@corigine.com>",
        "To": "dev@dpdk.org",
        "Cc": "oss-drivers@corigine.com, niklas.soderlund@corigine.com,\n Chaoyong He <chaoyong.he@corigine.com>",
        "Subject": "[PATCH 11/13] net/nfp: move NFDk logic to own source file",
        "Date": "Mon, 10 Apr 2023 19:00:13 +0800",
        "Message-Id": "<20230410110015.2973660-12-chaoyong.he@corigine.com>",
        "X-Mailer": "git-send-email 2.39.1",
        "In-Reply-To": "<20230410110015.2973660-1-chaoyong.he@corigine.com>",
        "References": "<20230410110015.2973660-1-chaoyong.he@corigine.com>",
        "Content-Type": "text/plain; charset=UTF-8",
        "Content-Transfer-Encoding": "8bit",
        "X-ClientProxiedBy": "SI2PR01CA0040.apcprd01.prod.exchangelabs.com\n (2603:1096:4:193::14) To SJ0PR13MB5545.namprd13.prod.outlook.com\n (2603:10b6:a03:424::5)",
        "MIME-Version": "1.0",
        "X-MS-PublicTrafficType": "Email",
        "X-MS-TrafficTypeDiagnostic": "SJ0PR13MB5545:EE_|DM6PR13MB3882:EE_",
        "X-MS-Office365-Filtering-Correlation-Id": "529738b2-2a2d-4b47-eed0-08db39b2e4cd",
        "X-MS-Exchange-SenderADCheck": "1",
        "X-MS-Exchange-AntiSpam-Relay": "0",
        "X-Microsoft-Antispam": "BCL:0;",
        "X-Microsoft-Antispam-Message-Info": "\n b06bNbtMMcFMiLBDAjamWoWh1NMICLukgpbgcanbziZxazIgVevoQyNJR1lfxz65Fa2+DeUnBHlO+oz6Hq9C1i7E3ey2f5YmA4XtjchFYnUB8lVUUNNugOKmy1wdIut0E3KdCImTaoBRI8aYLmPTwKsbYco0xwBsNg3eBrayrXbfl0GrEoTVzk9ssl9LBQzJWZ6s36Q5pv7zm0tv3uraw6ypsnRnor+M9l007ZVembjC2Opbs+4LnQEzU+XGmOvL0TRPBxthVVTXVGPPDcsb/nkBE2O6ZdkGRzWHVJp++fupytScmDfrXWcu4sBmNArVbaz7NEHL97ZHE9+XoYR7IVw0OOVPys/9q8A2rl2K7I5qiMTmeC++i1DqoOTgC2l2L1QjNc/jM4aTDj8LC1sUr2a2FFKwqjemdf8YtrZ/Tqyf0FgLxZ6o20KNoyNeOylRBgVyyQTrAfFt8+luYiQc6xCB8GXeBSKEUaCPyx/NNdemoimeUGd5ZmtWhXObH1iwqgtkTf64VaEAB2dL0m2ybacYcq/t3XAnCnUtm2uZ+PT2uchVt0DA9BDQ7imvg1zDnJRcI8PyCF3xjGBt8Tp0x4399H6tiztbUImpfnTrQUNyYrbfI2//ZDNUYspoWc6z",
        "X-Forefront-Antispam-Report": "CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;\n IPV:NLI; SFV:NSPM; H:SJ0PR13MB5545.namprd13.prod.outlook.com; PTR:; CAT:NONE;\n SFS:(13230028)(4636009)(396003)(39840400004)(366004)(376002)(136003)(346002)(451199021)(316002)(86362001)(66556008)(66574015)(66476007)(478600001)(83380400001)(66946007)(38100700002)(38350700002)(36756003)(186003)(6512007)(4326008)(8676002)(2616005)(6916009)(41300700001)(2906002)(1076003)(6506007)(26005)(107886003)(8936002)(52116002)(30864003)(5660300002)(6486002)(44832011);\n DIR:OUT; SFP:1102;",
        "X-MS-Exchange-AntiSpam-MessageData-ChunkCount": "1",
        "X-MS-Exchange-AntiSpam-MessageData-0": "=?utf-8?q?YSO1EyWf+ekcF4S3S8yPJgwXciXo?=\n\t=?utf-8?q?kk0l+3ipnci31psoKYrcPIBbffx4XEyGN6yB+bWYEpToVMDWEcZO0P4sVn6EMMlHs?=\n\t=?utf-8?q?zabhoSvx5FAHLmUgXtazWGKQJ/54hg/8DglRIqbLtRn1d9XddgoDUGw/AWNKBC+uc?=\n\t=?utf-8?q?x5BPTq+an2Bq7N5pBTbN1uTtw74soDj3ti/h6SNzD4CVCMnLSSiwju340noFNZJL+?=\n\t=?utf-8?q?67u+AKYp06PNYJVHGc+6ovv59RTb2dv7NlxatayHMAbrywRYWMFkFW1OIvqOLLMeB?=\n\t=?utf-8?q?Z9G6x/vR2RfWeXNMobnWkOfCM+szibhIHyPv+3IyG/wxsPG/IaV8Xh+6vWXUO+pUo?=\n\t=?utf-8?q?LkbGo7l82DVkGOXmJq2PezhHdtJqbjqiSe2JP8NwgBkBuVw+6NG853xcGiqUtxQi6?=\n\t=?utf-8?q?aAXwhY1ULFMgKTrjB6wuNSnMgUJcbZ6sfglQnERWLzl8DK7b10zVvX56R43Bv8T98?=\n\t=?utf-8?q?iNCsoP1zysPhKhK/trN70YUA0nH6+WdE988g3t5d2q+0kRLuSBkrpwtNm2RP1dej7?=\n\t=?utf-8?q?hcaQer5QqfkY1XqOXUp3xaKX6gpIEbGZJYlXrYL7knDD2CtDdaeYJ7VUchVVaazdg?=\n\t=?utf-8?q?3BeD7P+s10f4tN9dCJFqMgBo8elEbLO+MDhHyN6CNuJyLEY4ewjGf+M91WjUfrcGq?=\n\t=?utf-8?q?p1lHSU2xeEfVBfW5A1q9FtWfjJi9ycvN/TouGNVJteTA65mUUONBPuLpxEhLX6s+P?=\n\t=?utf-8?q?fMGgbz+SEgrBjea0NJweidt+1OujUK4q6QatX//oXOFHSAcUrbf8e0AAyX0s2yGH0?=\n\t=?utf-8?q?CFAezIORwxXWIzwa1k50XKawDGMW04EaQUm2+ieA79HoYwldRbCgYUSlvBeO2J6FL?=\n\t=?utf-8?q?7xM1IZhVry71FMqZx0myzTCoKOAeU7tdE+JpbG/kZWvxhlnwDhMhMPZ2Zq/Ah3WhW?=\n\t=?utf-8?q?k5bcXP57jzbPPS2heeD/aLearQY9dRTcz/IjzTxWOvYFXCzvBLzAOhNr1ZiA+AN6b?=\n\t=?utf-8?q?K8cuaulhhwcPJ6aBbQoxbm/NvupFuNt9JHdzzjcq4A/QM/0gVi8K1wyGAy3TQEUQ+?=\n\t=?utf-8?q?vsLDncs46EJ0/Xblf3AwZbtlQ23rPKPwTmhhMRBEfju92h3UTRQkdwCdBeoCYSxsf?=\n\t=?utf-8?q?ZuEeIIAnbewHWjhO2nxm6pvVvsEeENqf4skvXaXRKu/dr4Pu0GULn+wCEZx3Xoqj0?=\n\t=?utf-8?q?1bEfon01O7p948N1rnTmduFcEWxBxcbf4lmHHVU1PY0rJWw29eXtnQ8yrzAiTKMXC?=\n\t=?utf-8?q?cAeCCe3ktJn317thgPTBeyNO+mds1KHH5tKFbX0FsTcUqtBQ6cFPOy4sgfDat31pd?=\n\t=?utf-8?q?oUUP/lGm8DIX48iSBQaqBGjOe3QF85pGdF5YMMdy+FrWgPAc9ip1mxApm4WacJhj5?=\n\t=?utf-8?q?7oNruVrsGnjoX/WGt/ZbXVQGiCZfva6+ySG2kimceHxHdYuF7cnF/cVA2a7iYKbjf?=\n\t=?utf-8?q?7WINUKCsI+htj00q47ga/i3jIjmLMqfew/PZ5YVthr2sFV71an7S8YRkr+GpEqSXS?=\n\t=?utf-8?q?pxlGBV79FEml9HEZnGFEq6/qCJwlccnGNOBqEJKrh9C2usAQ4zApslXaS/1PHZN5e?=\n\t=?utf-8?q?bwdk+frj3qMLhSCo/RKk7PjzI0F2a0lmQQ=3D=3D?=",
        "X-OriginatorOrg": "corigine.com",
        "X-MS-Exchange-CrossTenant-Network-Message-Id": "\n 529738b2-2a2d-4b47-eed0-08db39b2e4cd",
        "X-MS-Exchange-CrossTenant-AuthSource": "SJ0PR13MB5545.namprd13.prod.outlook.com",
        "X-MS-Exchange-CrossTenant-AuthAs": "Internal",
        "X-MS-Exchange-CrossTenant-OriginalArrivalTime": "10 Apr 2023 11:01:11.1719 (UTC)",
        "X-MS-Exchange-CrossTenant-FromEntityHeader": "Hosted",
        "X-MS-Exchange-CrossTenant-Id": "fe128f2c-073b-4c20-818e-7246a585940c",
        "X-MS-Exchange-CrossTenant-MailboxType": "HOSTED",
        "X-MS-Exchange-CrossTenant-UserPrincipalName": "\n uBvnaoZIxy0sAYk/RDDks9sTQDp36nA70Hf9RQHihaUfR71ie8sjH6IyYHGMeVtlTk0V5yPmHEfx4yH1oRSt6KspmnIdMdU6D8rhRmPrT0Q=",
        "X-MS-Exchange-Transport-CrossTenantHeadersStamped": "DM6PR13MB3882",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.29",
        "Precedence": "list",
        "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
        "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://mails.dpdk.org/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org"
    },
    "content": "Split out the data structure and logics of NFDk into new file. The code\nis moved verbatim, no functional change.\n\nSigned-off-by: Chaoyong He <chaoyong.he@corigine.com>\nReviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>\n---\n drivers/net/nfp/meson.build        |   1 +\n drivers/net/nfp/nfdk/nfp_nfdk.h    | 179 ++++++++++\n drivers/net/nfp/nfdk/nfp_nfdk_dp.c | 421 ++++++++++++++++++++++++\n drivers/net/nfp/nfp_common.c       |   1 +\n drivers/net/nfp/nfp_ethdev.c       |   1 +\n drivers/net/nfp/nfp_ethdev_vf.c    |   1 +\n drivers/net/nfp/nfp_rxtx.c         | 507 +----------------------------\n drivers/net/nfp/nfp_rxtx.h         |  55 ----\n 8 files changed, 605 insertions(+), 561 deletions(-)\n create mode 100644 drivers/net/nfp/nfdk/nfp_nfdk.h\n create mode 100644 drivers/net/nfp/nfdk/nfp_nfdk_dp.c",
    "diff": "diff --git a/drivers/net/nfp/meson.build b/drivers/net/nfp/meson.build\nindex 697a1479c8..93c708959c 100644\n--- a/drivers/net/nfp/meson.build\n+++ b/drivers/net/nfp/meson.build\n@@ -11,6 +11,7 @@ sources = files(\n         'flower/nfp_flower_ctrl.c',\n         'flower/nfp_flower_representor.c',\n         'nfd3/nfp_nfd3_dp.c',\n+        'nfdk/nfp_nfdk_dp.c',\n         'nfpcore/nfp_cpp_pcie_ops.c',\n         'nfpcore/nfp_nsp.c',\n         'nfpcore/nfp_cppcore.c',\ndiff --git a/drivers/net/nfp/nfdk/nfp_nfdk.h b/drivers/net/nfp/nfdk/nfp_nfdk.h\nnew file mode 100644\nindex 0000000000..43e4d75432\n--- /dev/null\n+++ b/drivers/net/nfp/nfdk/nfp_nfdk.h\n@@ -0,0 +1,179 @@\n+/* SPDX-License-Identifier: BSD-3-Clause\n+ * Copyright (c) 2023 Corigine, Inc.\n+ * All rights reserved.\n+ */\n+\n+#ifndef _NFP_NFDK_H_\n+#define _NFP_NFDK_H_\n+\n+#define NFDK_TX_DESC_PER_SIMPLE_PKT     2\n+#define NFDK_TX_DESC_GATHER_MAX         17\n+\n+#define NFDK_TX_MAX_DATA_PER_HEAD       0x00001000\n+#define NFDK_TX_MAX_DATA_PER_DESC       0x00004000\n+#define NFDK_TX_MAX_DATA_PER_BLOCK      0x00010000\n+\n+#define NFDK_DESC_TX_DMA_LEN_HEAD       0x0FFF        /* [0,11] */\n+#define NFDK_DESC_TX_DMA_LEN            0x3FFF        /* [0,13] */\n+#define NFDK_DESC_TX_TYPE_HEAD          0xF000        /* [12,15] */\n+\n+#define NFDK_DESC_TX_TYPE_GATHER        1\n+#define NFDK_DESC_TX_TYPE_TSO           2\n+#define NFDK_DESC_TX_TYPE_SIMPLE        8\n+\n+/* TX descriptor format */\n+#define NFDK_DESC_TX_EOP                RTE_BIT32(14)\n+\n+/* Flags in the host TX descriptor */\n+#define NFDK_DESC_TX_CHAIN_META         RTE_BIT32(3)\n+#define NFDK_DESC_TX_ENCAP              RTE_BIT32(2)\n+#define NFDK_DESC_TX_L4_CSUM            RTE_BIT32(1)\n+#define NFDK_DESC_TX_L3_CSUM            RTE_BIT32(0)\n+\n+#define NFDK_TX_DESC_BLOCK_SZ           256\n+#define NFDK_TX_DESC_BLOCK_CNT          (NFDK_TX_DESC_BLOCK_SZ /         \\\n+\t\t\t\t\tsizeof(struct nfp_net_nfdk_tx_desc))\n+#define NFDK_TX_DESC_STOP_CNT           (NFDK_TX_DESC_BLOCK_CNT *        \\\n+\t\t\t\t\tNFDK_TX_DESC_PER_SIMPLE_PKT)\n+#define D_BLOCK_CPL(idx)               (NFDK_TX_DESC_BLOCK_CNT -        \\\n+\t\t\t\t\t(idx) % NFDK_TX_DESC_BLOCK_CNT)\n+/* Convenience macro for wrapping descriptor index on ring size */\n+#define D_IDX(ring, idx)               ((idx) & ((ring)->tx_count - 1))\n+\n+struct nfp_net_nfdk_tx_desc {\n+\tunion {\n+\t\tstruct {\n+\t\t\t__le16 dma_addr_hi;  /* High bits of host buf address */\n+\t\t\t__le16 dma_len_type; /* Length to DMA for this desc */\n+\t\t\t__le32 dma_addr_lo;  /* Low 32bit of host buf addr */\n+\t\t};\n+\n+\t\tstruct {\n+\t\t\t__le16 mss;\t/* MSS to be used for LSO */\n+\t\t\tuint8_t lso_hdrlen;  /* LSO, TCP payload offset */\n+\t\t\tuint8_t lso_totsegs; /* LSO, total segments */\n+\t\t\tuint8_t l3_offset;   /* L3 header offset */\n+\t\t\tuint8_t l4_offset;   /* L4 header offset */\n+\t\t\t__le16 lso_meta_res; /* Rsvd bits in TSO metadata */\n+\t\t};\n+\n+\t\tstruct {\n+\t\t\tuint8_t flags;\t/* TX Flags, see @NFDK_DESC_TX_* */\n+\t\t\tuint8_t reserved[7];\t/* meta byte placeholder */\n+\t\t};\n+\n+\t\t__le32 vals[2];\n+\t\t__le64 raw;\n+\t};\n+};\n+\n+static inline uint32_t\n+nfp_net_nfdk_free_tx_desc(struct nfp_net_txq *txq)\n+{\n+\tuint32_t free_desc;\n+\n+\tif (txq->wr_p >= txq->rd_p)\n+\t\tfree_desc = txq->tx_count - (txq->wr_p - txq->rd_p);\n+\telse\n+\t\tfree_desc = txq->rd_p - txq->wr_p;\n+\n+\treturn (free_desc > NFDK_TX_DESC_STOP_CNT) ?\n+\t\t(free_desc - NFDK_TX_DESC_STOP_CNT) : 0;\n+}\n+\n+/*\n+ * 
nfp_net_nfdk_txq_full() - Check if the TX queue free descriptors\n+ * is below tx_free_threshold for firmware of nfdk\n+ *\n+ * @txq: TX queue to check\n+ *\n+ * This function uses the host copy* of read/write pointers.\n+ */\n+static inline uint32_t\n+nfp_net_nfdk_txq_full(struct nfp_net_txq *txq)\n+{\n+\treturn (nfp_net_nfdk_free_tx_desc(txq) < txq->tx_free_thresh);\n+}\n+\n+/* nfp_net_nfdk_tx_cksum() - Set TX CSUM offload flags in TX descriptor of nfdk */\n+static inline uint64_t\n+nfp_net_nfdk_tx_cksum(struct nfp_net_txq *txq, struct rte_mbuf *mb,\n+\t\tuint64_t flags)\n+{\n+\tuint64_t ol_flags;\n+\tstruct nfp_net_hw *hw = txq->hw;\n+\n+\tif ((hw->cap & NFP_NET_CFG_CTRL_TXCSUM) == 0)\n+\t\treturn flags;\n+\n+\tol_flags = mb->ol_flags;\n+\n+\t/* Set TCP csum offload if TSO enabled. */\n+\tif (ol_flags & RTE_MBUF_F_TX_TCP_SEG)\n+\t\tflags |= NFDK_DESC_TX_L4_CSUM;\n+\n+\tif (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)\n+\t\tflags |= NFDK_DESC_TX_ENCAP;\n+\n+\t/* IPv6 does not need checksum */\n+\tif (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)\n+\t\tflags |= NFDK_DESC_TX_L3_CSUM;\n+\n+\tif (ol_flags & RTE_MBUF_F_TX_L4_MASK)\n+\t\tflags |= NFDK_DESC_TX_L4_CSUM;\n+\n+\treturn flags;\n+}\n+\n+/* nfp_net_nfdk_tx_tso() - Set TX descriptor for TSO of nfdk */\n+static inline uint64_t\n+nfp_net_nfdk_tx_tso(struct nfp_net_txq *txq, struct rte_mbuf *mb)\n+{\n+\tuint64_t ol_flags;\n+\tstruct nfp_net_nfdk_tx_desc txd;\n+\tstruct nfp_net_hw *hw = txq->hw;\n+\n+\tif ((hw->cap & NFP_NET_CFG_CTRL_LSO_ANY) == 0)\n+\t\tgoto clean_txd;\n+\n+\tol_flags = mb->ol_flags;\n+\n+\tif ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) == 0)\n+\t\tgoto clean_txd;\n+\n+\ttxd.l3_offset = mb->l2_len;\n+\ttxd.l4_offset = mb->l2_len + mb->l3_len;\n+\ttxd.lso_meta_res = 0;\n+\ttxd.mss = rte_cpu_to_le_16(mb->tso_segsz);\n+\ttxd.lso_hdrlen = mb->l2_len + mb->l3_len + mb->l4_len;\n+\ttxd.lso_totsegs = (mb->pkt_len + mb->tso_segsz) / mb->tso_segsz;\n+\n+\tif (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {\n+\t\ttxd.l3_offset += mb->outer_l2_len + mb->outer_l3_len;\n+\t\ttxd.l4_offset += mb->outer_l2_len + mb->outer_l3_len;\n+\t\ttxd.lso_hdrlen += mb->outer_l2_len + mb->outer_l3_len;\n+\t}\n+\n+\treturn txd.raw;\n+\n+clean_txd:\n+\ttxd.l3_offset = 0;\n+\ttxd.l4_offset = 0;\n+\ttxd.lso_hdrlen = 0;\n+\ttxd.mss = 0;\n+\ttxd.lso_totsegs = 0;\n+\ttxd.lso_meta_res = 0;\n+\n+\treturn txd.raw;\n+}\n+\n+uint16_t nfp_net_nfdk_xmit_pkts(void *tx_queue,\n+\t\tstruct rte_mbuf **tx_pkts,\n+\t\tuint16_t nb_pkts);\n+int nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev,\n+\t\tuint16_t queue_idx,\n+\t\tuint16_t nb_desc,\n+\t\tunsigned int socket_id,\n+\t\tconst struct rte_eth_txconf *tx_conf);\n+\n+#endif /* _NFP_NFDK_H_ */\ndiff --git a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c\nnew file mode 100644\nindex 0000000000..ec937c1f50\n--- /dev/null\n+++ b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c\n@@ -0,0 +1,421 @@\n+/* SPDX-License-Identifier: BSD-3-Clause\n+ * Copyright (c) 2023 Corigine, Inc.\n+ * All rights reserved.\n+ */\n+\n+#include <ethdev_driver.h>\n+#include <bus_pci_driver.h>\n+#include <rte_malloc.h>\n+\n+#include \"../nfp_logs.h\"\n+#include \"../nfp_common.h\"\n+#include \"../nfp_rxtx.h\"\n+#include \"../nfpcore/nfp_mip.h\"\n+#include \"../nfpcore/nfp_rtsym.h\"\n+#include \"nfp_nfdk.h\"\n+\n+static inline int\n+nfp_net_nfdk_headlen_to_segs(unsigned int headlen)\n+{\n+\treturn DIV_ROUND_UP(headlen +\n+\t\t\tNFDK_TX_MAX_DATA_PER_DESC -\n+\t\t\tNFDK_TX_MAX_DATA_PER_HEAD,\n+\t\t\tNFDK_TX_MAX_DATA_PER_DESC);\n+}\n+\n+static 
int\n+nfp_net_nfdk_tx_maybe_close_block(struct nfp_net_txq *txq, struct rte_mbuf *pkt)\n+{\n+\tunsigned int n_descs, wr_p, i, nop_slots;\n+\tstruct rte_mbuf *pkt_temp;\n+\n+\tpkt_temp = pkt;\n+\tn_descs = nfp_net_nfdk_headlen_to_segs(pkt_temp->data_len);\n+\twhile (pkt_temp->next) {\n+\t\tpkt_temp = pkt_temp->next;\n+\t\tn_descs += DIV_ROUND_UP(pkt_temp->data_len, NFDK_TX_MAX_DATA_PER_DESC);\n+\t}\n+\n+\tif (unlikely(n_descs > NFDK_TX_DESC_GATHER_MAX))\n+\t\treturn -EINVAL;\n+\n+\t/* Under count by 1 (don't count meta) for the round down to work out */\n+\tn_descs += !!(pkt->ol_flags & RTE_MBUF_F_TX_TCP_SEG);\n+\n+\tif (round_down(txq->wr_p, NFDK_TX_DESC_BLOCK_CNT) !=\n+\t\t\tround_down(txq->wr_p + n_descs, NFDK_TX_DESC_BLOCK_CNT))\n+\t\tgoto close_block;\n+\n+\tif ((uint32_t)txq->data_pending + pkt->pkt_len > NFDK_TX_MAX_DATA_PER_BLOCK)\n+\t\tgoto close_block;\n+\n+\treturn 0;\n+\n+close_block:\n+\twr_p = txq->wr_p;\n+\tnop_slots = D_BLOCK_CPL(wr_p);\n+\n+\tmemset(&txq->ktxds[wr_p], 0, nop_slots * sizeof(struct nfp_net_nfdk_tx_desc));\n+\tfor (i = wr_p; i < nop_slots + wr_p; i++) {\n+\t\tif (txq->txbufs[i].mbuf) {\n+\t\t\trte_pktmbuf_free_seg(txq->txbufs[i].mbuf);\n+\t\t\ttxq->txbufs[i].mbuf = NULL;\n+\t\t}\n+\t}\n+\ttxq->data_pending = 0;\n+\ttxq->wr_p = D_IDX(txq, txq->wr_p + nop_slots);\n+\n+\treturn nop_slots;\n+}\n+\n+static void\n+nfp_net_nfdk_set_meta_data(struct rte_mbuf *pkt,\n+\t\tstruct nfp_net_txq *txq,\n+\t\tuint64_t *metadata)\n+{\n+\tchar *meta;\n+\tuint8_t layer = 0;\n+\tuint32_t meta_type;\n+\tstruct nfp_net_hw *hw;\n+\tuint32_t header_offset;\n+\tuint8_t vlan_layer = 0;\n+\tstruct nfp_net_meta_raw meta_data;\n+\n+\tmemset(&meta_data, 0, sizeof(meta_data));\n+\thw = txq->hw;\n+\n+\tif ((pkt->ol_flags & RTE_MBUF_F_TX_VLAN) != 0 &&\n+\t\t\t(hw->ctrl & NFP_NET_CFG_CTRL_TXVLAN_V2) != 0) {\n+\t\tif (meta_data.length == 0)\n+\t\t\tmeta_data.length = NFP_NET_META_HEADER_SIZE;\n+\t\tmeta_data.length += NFP_NET_META_FIELD_SIZE;\n+\t\tmeta_data.header |= NFP_NET_META_VLAN;\n+\t}\n+\n+\tif (meta_data.length == 0)\n+\t\treturn;\n+\n+\tmeta_type = meta_data.header;\n+\theader_offset = meta_type << NFP_NET_META_NFDK_LENGTH;\n+\tmeta_data.header = header_offset | meta_data.length;\n+\tmeta_data.header = rte_cpu_to_be_32(meta_data.header);\n+\tmeta = rte_pktmbuf_prepend(pkt, meta_data.length);\n+\tmemcpy(meta, &meta_data.header, sizeof(meta_data.header));\n+\tmeta += NFP_NET_META_HEADER_SIZE;\n+\n+\tfor (; meta_type != 0; meta_type >>= NFP_NET_META_FIELD_SIZE, layer++,\n+\t\t\tmeta += NFP_NET_META_FIELD_SIZE) {\n+\t\tswitch (meta_type & NFP_NET_META_FIELD_MASK) {\n+\t\tcase NFP_NET_META_VLAN:\n+\t\t\tif (vlan_layer > 0) {\n+\t\t\t\tPMD_DRV_LOG(ERR, \"At most 1 layers of vlan is supported\");\n+\t\t\t\treturn;\n+\t\t\t}\n+\t\t\tnfp_net_set_meta_vlan(&meta_data, pkt, layer);\n+\t\t\tvlan_layer++;\n+\t\t\tbreak;\n+\t\tdefault:\n+\t\t\tPMD_DRV_LOG(ERR, \"The metadata type not supported\");\n+\t\t\treturn;\n+\t\t}\n+\n+\t\tmemcpy(meta, &meta_data.data[layer], sizeof(meta_data.data[layer]));\n+\t}\n+\n+\t*metadata = NFDK_DESC_TX_CHAIN_META;\n+}\n+\n+uint16_t\n+nfp_net_nfdk_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)\n+{\n+\tuint32_t buf_idx;\n+\tuint64_t dma_addr;\n+\tuint16_t free_descs;\n+\tuint32_t npkts = 0;\n+\tuint64_t metadata = 0;\n+\tuint16_t issued_descs = 0;\n+\tstruct nfp_net_txq *txq;\n+\tstruct nfp_net_hw *hw;\n+\tstruct nfp_net_nfdk_tx_desc *ktxds;\n+\tstruct rte_mbuf *pkt, *temp_pkt;\n+\tstruct rte_mbuf **lmbuf;\n+\n+\ttxq = tx_queue;\n+\thw = 
txq->hw;\n+\n+\tPMD_TX_LOG(DEBUG, \"working for queue %u at pos %d and %u packets\",\n+\t\ttxq->qidx, txq->wr_p, nb_pkts);\n+\n+\tif ((nfp_net_nfdk_free_tx_desc(txq) < NFDK_TX_DESC_PER_SIMPLE_PKT *\n+\t\t\tnb_pkts) || (nfp_net_nfdk_txq_full(txq)))\n+\t\tnfp_net_tx_free_bufs(txq);\n+\n+\tfree_descs = (uint16_t)nfp_net_nfdk_free_tx_desc(txq);\n+\tif (unlikely(free_descs == 0))\n+\t\treturn 0;\n+\n+\tPMD_TX_LOG(DEBUG, \"queue: %u. Sending %u packets\", txq->qidx, nb_pkts);\n+\t/* Sending packets */\n+\twhile ((npkts < nb_pkts) && free_descs) {\n+\t\tuint32_t type, dma_len, dlen_type, tmp_dlen;\n+\t\tint nop_descs, used_descs;\n+\n+\t\tpkt = *(tx_pkts + npkts);\n+\t\tnop_descs = nfp_net_nfdk_tx_maybe_close_block(txq, pkt);\n+\t\tif (nop_descs < 0)\n+\t\t\tgoto xmit_end;\n+\n+\t\tissued_descs += nop_descs;\n+\t\tktxds = &txq->ktxds[txq->wr_p];\n+\t\t/* Grabbing the mbuf linked to the current descriptor */\n+\t\tbuf_idx = txq->wr_p;\n+\t\tlmbuf = &txq->txbufs[buf_idx++].mbuf;\n+\t\t/* Warming the cache for releasing the mbuf later on */\n+\t\tRTE_MBUF_PREFETCH_TO_FREE(*lmbuf);\n+\n+\t\ttemp_pkt = pkt;\n+\t\tnfp_net_nfdk_set_meta_data(pkt, txq, &metadata);\n+\n+\t\tif (unlikely(pkt->nb_segs > 1 &&\n+\t\t\t\t!(hw->cap & NFP_NET_CFG_CTRL_GATHER))) {\n+\t\t\tPMD_INIT_LOG(ERR, \"Multisegment packet not supported\");\n+\t\t\tgoto xmit_end;\n+\t\t}\n+\n+\t\t/*\n+\t\t * Checksum and VLAN flags just in the first descriptor for a\n+\t\t * multisegment packet, but TSO info needs to be in all of them.\n+\t\t */\n+\n+\t\tdma_len = pkt->data_len;\n+\t\tif ((hw->cap & NFP_NET_CFG_CTRL_LSO_ANY) &&\n+\t\t\t\t(pkt->ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {\n+\t\t\ttype = NFDK_DESC_TX_TYPE_TSO;\n+\t\t} else if (pkt->next == NULL && dma_len <= NFDK_TX_MAX_DATA_PER_HEAD) {\n+\t\t\ttype = NFDK_DESC_TX_TYPE_SIMPLE;\n+\t\t} else {\n+\t\t\ttype = NFDK_DESC_TX_TYPE_GATHER;\n+\t\t}\n+\n+\t\t/* Implicitly truncates to chunk in below logic */\n+\t\tdma_len -= 1;\n+\n+\t\t/*\n+\t\t * We will do our best to pass as much data as we can in descriptor\n+\t\t * and we need to make sure the first descriptor includes whole\n+\t\t * head since there is limitation in firmware side. 
Sometimes the\n+\t\t * value of 'dma_len & NFDK_DESC_TX_DMA_LEN_HEAD' will be less\n+\t\t * than packet head len.\n+\t\t */\n+\t\tdlen_type = (dma_len > NFDK_DESC_TX_DMA_LEN_HEAD ?\n+\t\t\t\tNFDK_DESC_TX_DMA_LEN_HEAD : dma_len) |\n+\t\t\t(NFDK_DESC_TX_TYPE_HEAD & (type << 12));\n+\t\tktxds->dma_len_type = rte_cpu_to_le_16(dlen_type);\n+\t\tdma_addr = rte_mbuf_data_iova(pkt);\n+\t\tPMD_TX_LOG(DEBUG, \"Working with mbuf at dma address:\"\n+\t\t\t\t\"%\" PRIx64 \"\", dma_addr);\n+\t\tktxds->dma_addr_hi = rte_cpu_to_le_16(dma_addr >> 32);\n+\t\tktxds->dma_addr_lo = rte_cpu_to_le_32(dma_addr & 0xffffffff);\n+\t\tktxds++;\n+\n+\t\t/*\n+\t\t * Preserve the original dlen_type, this way below the EOP logic\n+\t\t * can use dlen_type.\n+\t\t */\n+\t\ttmp_dlen = dlen_type & NFDK_DESC_TX_DMA_LEN_HEAD;\n+\t\tdma_len -= tmp_dlen;\n+\t\tdma_addr += tmp_dlen + 1;\n+\n+\t\t/*\n+\t\t * The rest of the data (if any) will be in larger DMA descriptors\n+\t\t * and is handled with the dma_len loop.\n+\t\t */\n+\t\twhile (pkt) {\n+\t\t\tif (*lmbuf)\n+\t\t\t\trte_pktmbuf_free_seg(*lmbuf);\n+\t\t\t*lmbuf = pkt;\n+\t\t\twhile (dma_len > 0) {\n+\t\t\t\tdma_len -= 1;\n+\t\t\t\tdlen_type = NFDK_DESC_TX_DMA_LEN & dma_len;\n+\n+\t\t\t\tktxds->dma_len_type = rte_cpu_to_le_16(dlen_type);\n+\t\t\t\tktxds->dma_addr_hi = rte_cpu_to_le_16(dma_addr >> 32);\n+\t\t\t\tktxds->dma_addr_lo = rte_cpu_to_le_32(dma_addr & 0xffffffff);\n+\t\t\t\tktxds++;\n+\n+\t\t\t\tdma_len -= dlen_type;\n+\t\t\t\tdma_addr += dlen_type + 1;\n+\t\t\t}\n+\n+\t\t\tif (pkt->next == NULL)\n+\t\t\t\tbreak;\n+\n+\t\t\tpkt = pkt->next;\n+\t\t\tdma_len = pkt->data_len;\n+\t\t\tdma_addr = rte_mbuf_data_iova(pkt);\n+\t\t\tPMD_TX_LOG(DEBUG, \"Working with mbuf at dma address:\"\n+\t\t\t\t\"%\" PRIx64 \"\", dma_addr);\n+\n+\t\t\tlmbuf = &txq->txbufs[buf_idx++].mbuf;\n+\t\t}\n+\n+\t\t(ktxds - 1)->dma_len_type = rte_cpu_to_le_16(dlen_type | NFDK_DESC_TX_EOP);\n+\n+\t\tktxds->raw = rte_cpu_to_le_64(nfp_net_nfdk_tx_cksum(txq, temp_pkt, metadata));\n+\t\tktxds++;\n+\n+\t\tif ((hw->cap & NFP_NET_CFG_CTRL_LSO_ANY) &&\n+\t\t\t\t(temp_pkt->ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {\n+\t\t\tktxds->raw = rte_cpu_to_le_64(nfp_net_nfdk_tx_tso(txq, temp_pkt));\n+\t\t\tktxds++;\n+\t\t}\n+\n+\t\tused_descs = ktxds - txq->ktxds - txq->wr_p;\n+\t\tif (round_down(txq->wr_p, NFDK_TX_DESC_BLOCK_CNT) !=\n+\t\t\tround_down(txq->wr_p + used_descs - 1, NFDK_TX_DESC_BLOCK_CNT)) {\n+\t\t\tPMD_INIT_LOG(INFO, \"Used descs cross block boundary\");\n+\t\t\tgoto xmit_end;\n+\t\t}\n+\n+\t\ttxq->wr_p = D_IDX(txq, txq->wr_p + used_descs);\n+\t\tif (txq->wr_p % NFDK_TX_DESC_BLOCK_CNT)\n+\t\t\ttxq->data_pending += temp_pkt->pkt_len;\n+\t\telse\n+\t\t\ttxq->data_pending = 0;\n+\n+\t\tissued_descs += used_descs;\n+\t\tnpkts++;\n+\t\tfree_descs = (uint16_t)nfp_net_nfdk_free_tx_desc(txq);\n+\t}\n+\n+xmit_end:\n+\t/* Increment write pointers. 
Force memory write before we let HW know */\n+\trte_wmb();\n+\tnfp_qcp_ptr_add(txq->qcp_q, NFP_QCP_WRITE_PTR, issued_descs);\n+\n+\treturn npkts;\n+}\n+\n+int\n+nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev,\n+\t\tuint16_t queue_idx,\n+\t\tuint16_t nb_desc,\n+\t\tunsigned int socket_id,\n+\t\tconst struct rte_eth_txconf *tx_conf)\n+{\n+\tint ret;\n+\tuint16_t min_tx_desc;\n+\tuint16_t max_tx_desc;\n+\tconst struct rte_memzone *tz;\n+\tstruct nfp_net_txq *txq;\n+\tuint16_t tx_free_thresh;\n+\tstruct nfp_net_hw *hw;\n+\tuint32_t tx_desc_sz;\n+\n+\thw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);\n+\n+\tPMD_INIT_FUNC_TRACE();\n+\n+\tret = nfp_net_tx_desc_limits(hw, &min_tx_desc, &max_tx_desc);\n+\tif (ret != 0)\n+\t\treturn ret;\n+\n+\t/* Validating number of descriptors */\n+\ttx_desc_sz = nb_desc * sizeof(struct nfp_net_nfdk_tx_desc);\n+\tif ((NFDK_TX_DESC_PER_SIMPLE_PKT * tx_desc_sz) % NFP_ALIGN_RING_DESC != 0 ||\n+\t    (NFDK_TX_DESC_PER_SIMPLE_PKT * nb_desc) % NFDK_TX_DESC_BLOCK_CNT != 0 ||\n+\t     nb_desc > max_tx_desc || nb_desc < min_tx_desc) {\n+\t\tPMD_DRV_LOG(ERR, \"Wrong nb_desc value\");\n+\t\treturn -EINVAL;\n+\t}\n+\n+\ttx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?\n+\t\t\t\ttx_conf->tx_free_thresh :\n+\t\t\t\tDEFAULT_TX_FREE_THRESH);\n+\n+\tif (tx_free_thresh > (nb_desc)) {\n+\t\tPMD_DRV_LOG(ERR,\n+\t\t\t\"tx_free_thresh must be less than the number of TX \"\n+\t\t\t\"descriptors. (tx_free_thresh=%u port=%d \"\n+\t\t\t\"queue=%d)\", (unsigned int)tx_free_thresh,\n+\t\t\tdev->data->port_id, (int)queue_idx);\n+\t\treturn -(EINVAL);\n+\t}\n+\n+\t/*\n+\t * Free memory prior to re-allocation if needed. This is the case after\n+\t * calling nfp_net_stop\n+\t */\n+\tif (dev->data->tx_queues[queue_idx]) {\n+\t\tPMD_TX_LOG(DEBUG, \"Freeing memory prior to re-allocation %d\",\n+\t\t\t\tqueue_idx);\n+\t\tnfp_net_tx_queue_release(dev, queue_idx);\n+\t\tdev->data->tx_queues[queue_idx] = NULL;\n+\t}\n+\n+\t/* Allocating tx queue data structure */\n+\ttxq = rte_zmalloc_socket(\"ethdev TX queue\", sizeof(struct nfp_net_txq),\n+\t\t\tRTE_CACHE_LINE_SIZE, socket_id);\n+\tif (txq == NULL) {\n+\t\tPMD_DRV_LOG(ERR, \"Error allocating tx dma\");\n+\t\treturn -ENOMEM;\n+\t}\n+\n+\t/*\n+\t * Allocate TX ring hardware descriptors. 
A memzone large enough to\n+\t * handle the maximum ring size is allocated in order to allow for\n+\t * resizing in later calls to the queue setup function.\n+\t */\n+\ttz = rte_eth_dma_zone_reserve(dev, \"tx_ring\", queue_idx,\n+\t\t\t\tsizeof(struct nfp_net_nfdk_tx_desc) *\n+\t\t\t\tNFDK_TX_DESC_PER_SIMPLE_PKT *\n+\t\t\t\tmax_tx_desc, NFP_MEMZONE_ALIGN,\n+\t\t\t\tsocket_id);\n+\tif (tz == NULL) {\n+\t\tPMD_DRV_LOG(ERR, \"Error allocating tx dma\");\n+\t\tnfp_net_tx_queue_release(dev, queue_idx);\n+\t\treturn -ENOMEM;\n+\t}\n+\n+\ttxq->tx_count = nb_desc * NFDK_TX_DESC_PER_SIMPLE_PKT;\n+\ttxq->tx_free_thresh = tx_free_thresh;\n+\ttxq->tx_pthresh = tx_conf->tx_thresh.pthresh;\n+\ttxq->tx_hthresh = tx_conf->tx_thresh.hthresh;\n+\ttxq->tx_wthresh = tx_conf->tx_thresh.wthresh;\n+\n+\t/* queue mapping based on firmware configuration */\n+\ttxq->qidx = queue_idx;\n+\ttxq->tx_qcidx = queue_idx * hw->stride_tx;\n+\ttxq->qcp_q = hw->tx_bar + NFP_QCP_QUEUE_OFF(txq->tx_qcidx);\n+\n+\ttxq->port_id = dev->data->port_id;\n+\n+\t/* Saving physical and virtual addresses for the TX ring */\n+\ttxq->dma = (uint64_t)tz->iova;\n+\ttxq->ktxds = (struct nfp_net_nfdk_tx_desc *)tz->addr;\n+\n+\t/* mbuf pointers array for referencing mbufs linked to TX descriptors */\n+\ttxq->txbufs = rte_zmalloc_socket(\"txq->txbufs\",\n+\t\t\t\tsizeof(*txq->txbufs) * txq->tx_count,\n+\t\t\t\tRTE_CACHE_LINE_SIZE, socket_id);\n+\n+\tif (txq->txbufs == NULL) {\n+\t\tnfp_net_tx_queue_release(dev, queue_idx);\n+\t\treturn -ENOMEM;\n+\t}\n+\tPMD_TX_LOG(DEBUG, \"txbufs=%p hw_ring=%p dma_addr=0x%\" PRIx64,\n+\t\ttxq->txbufs, txq->ktxds, (unsigned long)txq->dma);\n+\n+\tnfp_net_reset_tx_queue(txq);\n+\n+\tdev->data->tx_queues[queue_idx] = txq;\n+\ttxq->hw = hw;\n+\t/*\n+\t * Telling the HW about the physical address of the TX ring and number\n+\t * of descriptors in log2 format\n+\t */\n+\tnn_cfg_writeq(hw, NFP_NET_CFG_TXR_ADDR(queue_idx), txq->dma);\n+\tnn_cfg_writeb(hw, NFP_NET_CFG_TXR_SZ(queue_idx), rte_log2_u32(txq->tx_count));\n+\n+\treturn 0;\n+}\ndiff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c\nindex ca334d56ab..f17632a364 100644\n--- a/drivers/net/nfp/nfp_common.c\n+++ b/drivers/net/nfp/nfp_common.c\n@@ -45,6 +45,7 @@\n #include \"nfp_cpp_bridge.h\"\n \n #include \"nfd3/nfp_nfd3.h\"\n+#include \"nfdk/nfp_nfdk.h\"\n \n #include <sys/types.h>\n #include <sys/socket.h>\ndiff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c\nindex f212a4a10e..c2684ec268 100644\n--- a/drivers/net/nfp/nfp_ethdev.c\n+++ b/drivers/net/nfp/nfp_ethdev.c\n@@ -39,6 +39,7 @@\n #include \"nfp_cpp_bridge.h\"\n \n #include \"nfd3/nfp_nfd3.h\"\n+#include \"nfdk/nfp_nfdk.h\"\n #include \"flower/nfp_flower.h\"\n \n static int\ndiff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c\nindex 80a8983deb..5fd2dc11a3 100644\n--- a/drivers/net/nfp/nfp_ethdev_vf.c\n+++ b/drivers/net/nfp/nfp_ethdev_vf.c\n@@ -23,6 +23,7 @@\n #include \"nfp_rxtx.h\"\n #include \"nfp_logs.h\"\n #include \"nfd3/nfp_nfd3.h\"\n+#include \"nfdk/nfp_nfdk.h\"\n \n static void\n nfp_netvf_read_mac(struct nfp_net_hw *hw)\ndiff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c\nindex 76021b64ee..9eaa0b89c1 100644\n--- a/drivers/net/nfp/nfp_rxtx.c\n+++ b/drivers/net/nfp/nfp_rxtx.c\n@@ -21,6 +21,7 @@\n #include \"nfp_rxtx.h\"\n #include \"nfp_logs.h\"\n #include \"nfd3/nfp_nfd3.h\"\n+#include \"nfdk/nfp_nfdk.h\"\n #include \"nfpcore/nfp_mip.h\"\n #include \"nfpcore/nfp_rtsym.h\"\n \n@@ -764,187 +765,6 @@ 
nfp_net_set_meta_vlan(struct nfp_net_meta_raw *meta_data,\n \tmeta_data->data[layer] = rte_cpu_to_be_32(tpid << 16 | vlan_tci);\n }\n \n-static void\n-nfp_net_nfdk_set_meta_data(struct rte_mbuf *pkt,\n-\t\tstruct nfp_net_txq *txq,\n-\t\tuint64_t *metadata)\n-{\n-\tchar *meta;\n-\tuint8_t layer = 0;\n-\tuint32_t meta_type;\n-\tstruct nfp_net_hw *hw;\n-\tuint32_t header_offset;\n-\tuint8_t vlan_layer = 0;\n-\tstruct nfp_net_meta_raw meta_data;\n-\n-\tmemset(&meta_data, 0, sizeof(meta_data));\n-\thw = txq->hw;\n-\n-\tif ((pkt->ol_flags & RTE_MBUF_F_TX_VLAN) != 0 &&\n-\t\t\t(hw->ctrl & NFP_NET_CFG_CTRL_TXVLAN_V2) != 0) {\n-\t\tif (meta_data.length == 0)\n-\t\t\tmeta_data.length = NFP_NET_META_HEADER_SIZE;\n-\t\tmeta_data.length += NFP_NET_META_FIELD_SIZE;\n-\t\tmeta_data.header |= NFP_NET_META_VLAN;\n-\t}\n-\n-\tif (meta_data.length == 0)\n-\t\treturn;\n-\n-\tmeta_type = meta_data.header;\n-\theader_offset = meta_type << NFP_NET_META_NFDK_LENGTH;\n-\tmeta_data.header = header_offset | meta_data.length;\n-\tmeta_data.header = rte_cpu_to_be_32(meta_data.header);\n-\tmeta = rte_pktmbuf_prepend(pkt, meta_data.length);\n-\tmemcpy(meta, &meta_data.header, sizeof(meta_data.header));\n-\tmeta += NFP_NET_META_HEADER_SIZE;\n-\n-\tfor (; meta_type != 0; meta_type >>= NFP_NET_META_FIELD_SIZE, layer++,\n-\t\t\tmeta += NFP_NET_META_FIELD_SIZE) {\n-\t\tswitch (meta_type & NFP_NET_META_FIELD_MASK) {\n-\t\tcase NFP_NET_META_VLAN:\n-\t\t\tif (vlan_layer > 0) {\n-\t\t\t\tPMD_DRV_LOG(ERR, \"At most 1 layers of vlan is supported\");\n-\t\t\t\treturn;\n-\t\t\t}\n-\t\t\tnfp_net_set_meta_vlan(&meta_data, pkt, layer);\n-\t\t\tvlan_layer++;\n-\t\t\tbreak;\n-\t\tdefault:\n-\t\t\tPMD_DRV_LOG(ERR, \"The metadata type not supported\");\n-\t\t\treturn;\n-\t\t}\n-\n-\t\tmemcpy(meta, &meta_data.data[layer], sizeof(meta_data.data[layer]));\n-\t}\n-\n-\t*metadata = NFDK_DESC_TX_CHAIN_META;\n-}\n-\n-static int\n-nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev,\n-\t\tuint16_t queue_idx,\n-\t\tuint16_t nb_desc,\n-\t\tunsigned int socket_id,\n-\t\tconst struct rte_eth_txconf *tx_conf)\n-{\n-\tint ret;\n-\tuint16_t min_tx_desc;\n-\tuint16_t max_tx_desc;\n-\tconst struct rte_memzone *tz;\n-\tstruct nfp_net_txq *txq;\n-\tuint16_t tx_free_thresh;\n-\tstruct nfp_net_hw *hw;\n-\tuint32_t tx_desc_sz;\n-\n-\thw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);\n-\n-\tPMD_INIT_FUNC_TRACE();\n-\n-\tret = nfp_net_tx_desc_limits(hw, &min_tx_desc, &max_tx_desc);\n-\tif (ret != 0)\n-\t\treturn ret;\n-\n-\t/* Validating number of descriptors */\n-\ttx_desc_sz = nb_desc * sizeof(struct nfp_net_nfdk_tx_desc);\n-\tif ((NFDK_TX_DESC_PER_SIMPLE_PKT * tx_desc_sz) % NFP_ALIGN_RING_DESC != 0 ||\n-\t    (NFDK_TX_DESC_PER_SIMPLE_PKT * nb_desc) % NFDK_TX_DESC_BLOCK_CNT != 0 ||\n-\t     nb_desc > max_tx_desc || nb_desc < min_tx_desc) {\n-\t\tPMD_DRV_LOG(ERR, \"Wrong nb_desc value\");\n-\t\treturn -EINVAL;\n-\t}\n-\n-\ttx_free_thresh = (uint16_t)((tx_conf->tx_free_thresh) ?\n-\t\t\t\ttx_conf->tx_free_thresh :\n-\t\t\t\tDEFAULT_TX_FREE_THRESH);\n-\n-\tif (tx_free_thresh > (nb_desc)) {\n-\t\tPMD_DRV_LOG(ERR,\n-\t\t\t\"tx_free_thresh must be less than the number of TX \"\n-\t\t\t\"descriptors. (tx_free_thresh=%u port=%d \"\n-\t\t\t\"queue=%d)\", (unsigned int)tx_free_thresh,\n-\t\t\tdev->data->port_id, (int)queue_idx);\n-\t\treturn -(EINVAL);\n-\t}\n-\n-\t/*\n-\t * Free memory prior to re-allocation if needed. 
This is the case after\n-\t * calling nfp_net_stop\n-\t */\n-\tif (dev->data->tx_queues[queue_idx]) {\n-\t\tPMD_TX_LOG(DEBUG, \"Freeing memory prior to re-allocation %d\",\n-\t\t\t\tqueue_idx);\n-\t\tnfp_net_tx_queue_release(dev, queue_idx);\n-\t\tdev->data->tx_queues[queue_idx] = NULL;\n-\t}\n-\n-\t/* Allocating tx queue data structure */\n-\ttxq = rte_zmalloc_socket(\"ethdev TX queue\", sizeof(struct nfp_net_txq),\n-\t\t\tRTE_CACHE_LINE_SIZE, socket_id);\n-\tif (txq == NULL) {\n-\t\tPMD_DRV_LOG(ERR, \"Error allocating tx dma\");\n-\t\treturn -ENOMEM;\n-\t}\n-\n-\t/*\n-\t * Allocate TX ring hardware descriptors. A memzone large enough to\n-\t * handle the maximum ring size is allocated in order to allow for\n-\t * resizing in later calls to the queue setup function.\n-\t */\n-\ttz = rte_eth_dma_zone_reserve(dev, \"tx_ring\", queue_idx,\n-\t\t\t\tsizeof(struct nfp_net_nfdk_tx_desc) *\n-\t\t\t\tNFDK_TX_DESC_PER_SIMPLE_PKT *\n-\t\t\t\tmax_tx_desc, NFP_MEMZONE_ALIGN,\n-\t\t\t\tsocket_id);\n-\tif (tz == NULL) {\n-\t\tPMD_DRV_LOG(ERR, \"Error allocating tx dma\");\n-\t\tnfp_net_tx_queue_release(dev, queue_idx);\n-\t\treturn -ENOMEM;\n-\t}\n-\n-\ttxq->tx_count = nb_desc * NFDK_TX_DESC_PER_SIMPLE_PKT;\n-\ttxq->tx_free_thresh = tx_free_thresh;\n-\ttxq->tx_pthresh = tx_conf->tx_thresh.pthresh;\n-\ttxq->tx_hthresh = tx_conf->tx_thresh.hthresh;\n-\ttxq->tx_wthresh = tx_conf->tx_thresh.wthresh;\n-\n-\t/* queue mapping based on firmware configuration */\n-\ttxq->qidx = queue_idx;\n-\ttxq->tx_qcidx = queue_idx * hw->stride_tx;\n-\ttxq->qcp_q = hw->tx_bar + NFP_QCP_QUEUE_OFF(txq->tx_qcidx);\n-\n-\ttxq->port_id = dev->data->port_id;\n-\n-\t/* Saving physical and virtual addresses for the TX ring */\n-\ttxq->dma = (uint64_t)tz->iova;\n-\ttxq->ktxds = (struct nfp_net_nfdk_tx_desc *)tz->addr;\n-\n-\t/* mbuf pointers array for referencing mbufs linked to TX descriptors */\n-\ttxq->txbufs = rte_zmalloc_socket(\"txq->txbufs\",\n-\t\t\t\tsizeof(*txq->txbufs) * txq->tx_count,\n-\t\t\t\tRTE_CACHE_LINE_SIZE, socket_id);\n-\n-\tif (txq->txbufs == NULL) {\n-\t\tnfp_net_tx_queue_release(dev, queue_idx);\n-\t\treturn -ENOMEM;\n-\t}\n-\tPMD_TX_LOG(DEBUG, \"txbufs=%p hw_ring=%p dma_addr=0x%\" PRIx64,\n-\t\ttxq->txbufs, txq->ktxds, (unsigned long)txq->dma);\n-\n-\tnfp_net_reset_tx_queue(txq);\n-\n-\tdev->data->tx_queues[queue_idx] = txq;\n-\ttxq->hw = hw;\n-\t/*\n-\t * Telling the HW about the physical address of the TX ring and number\n-\t * of descriptors in log2 format\n-\t */\n-\tnn_cfg_writeq(hw, NFP_NET_CFG_TXR_ADDR(queue_idx), txq->dma);\n-\tnn_cfg_writeb(hw, NFP_NET_CFG_TXR_SZ(queue_idx), rte_log2_u32(txq->tx_count));\n-\n-\treturn 0;\n-}\n-\n int\n nfp_net_tx_queue_setup(struct rte_eth_dev *dev,\n \t\tuint16_t queue_idx,\n@@ -973,328 +793,3 @@ nfp_net_tx_queue_setup(struct rte_eth_dev *dev,\n \t\treturn -EINVAL;\n \t}\n }\n-\n-static inline uint32_t\n-nfp_net_nfdk_free_tx_desc(struct nfp_net_txq *txq)\n-{\n-\tuint32_t free_desc;\n-\n-\tif (txq->wr_p >= txq->rd_p)\n-\t\tfree_desc = txq->tx_count - (txq->wr_p - txq->rd_p);\n-\telse\n-\t\tfree_desc = txq->rd_p - txq->wr_p;\n-\n-\treturn (free_desc > NFDK_TX_DESC_STOP_CNT) ?\n-\t\t(free_desc - NFDK_TX_DESC_STOP_CNT) : 0;\n-}\n-\n-/*\n- * nfp_net_nfdk_txq_full() - Check if the TX queue free descriptors\n- * is below tx_free_threshold for firmware of nfdk\n- *\n- * @txq: TX queue to check\n- *\n- * This function uses the host copy* of read/write pointers.\n- */\n-static inline uint32_t\n-nfp_net_nfdk_txq_full(struct nfp_net_txq *txq)\n-{\n-\treturn 
(nfp_net_nfdk_free_tx_desc(txq) < txq->tx_free_thresh);\n-}\n-\n-static inline int\n-nfp_net_nfdk_headlen_to_segs(unsigned int headlen)\n-{\n-\treturn DIV_ROUND_UP(headlen +\n-\t\t\tNFDK_TX_MAX_DATA_PER_DESC -\n-\t\t\tNFDK_TX_MAX_DATA_PER_HEAD,\n-\t\t\tNFDK_TX_MAX_DATA_PER_DESC);\n-}\n-\n-static int\n-nfp_net_nfdk_tx_maybe_close_block(struct nfp_net_txq *txq, struct rte_mbuf *pkt)\n-{\n-\tunsigned int n_descs, wr_p, i, nop_slots;\n-\tstruct rte_mbuf *pkt_temp;\n-\n-\tpkt_temp = pkt;\n-\tn_descs = nfp_net_nfdk_headlen_to_segs(pkt_temp->data_len);\n-\twhile (pkt_temp->next) {\n-\t\tpkt_temp = pkt_temp->next;\n-\t\tn_descs += DIV_ROUND_UP(pkt_temp->data_len, NFDK_TX_MAX_DATA_PER_DESC);\n-\t}\n-\n-\tif (unlikely(n_descs > NFDK_TX_DESC_GATHER_MAX))\n-\t\treturn -EINVAL;\n-\n-\t/* Under count by 1 (don't count meta) for the round down to work out */\n-\tn_descs += !!(pkt->ol_flags & RTE_MBUF_F_TX_TCP_SEG);\n-\n-\tif (round_down(txq->wr_p, NFDK_TX_DESC_BLOCK_CNT) !=\n-\t\t\tround_down(txq->wr_p + n_descs, NFDK_TX_DESC_BLOCK_CNT))\n-\t\tgoto close_block;\n-\n-\tif ((uint32_t)txq->data_pending + pkt->pkt_len > NFDK_TX_MAX_DATA_PER_BLOCK)\n-\t\tgoto close_block;\n-\n-\treturn 0;\n-\n-close_block:\n-\twr_p = txq->wr_p;\n-\tnop_slots = D_BLOCK_CPL(wr_p);\n-\n-\tmemset(&txq->ktxds[wr_p], 0, nop_slots * sizeof(struct nfp_net_nfdk_tx_desc));\n-\tfor (i = wr_p; i < nop_slots + wr_p; i++) {\n-\t\tif (txq->txbufs[i].mbuf) {\n-\t\t\trte_pktmbuf_free_seg(txq->txbufs[i].mbuf);\n-\t\t\ttxq->txbufs[i].mbuf = NULL;\n-\t\t}\n-\t}\n-\ttxq->data_pending = 0;\n-\ttxq->wr_p = D_IDX(txq, txq->wr_p + nop_slots);\n-\n-\treturn nop_slots;\n-}\n-\n-/* nfp_net_nfdk_tx_cksum() - Set TX CSUM offload flags in TX descriptor of nfdk */\n-static inline uint64_t\n-nfp_net_nfdk_tx_cksum(struct nfp_net_txq *txq, struct rte_mbuf *mb,\n-\t\tuint64_t flags)\n-{\n-\tuint64_t ol_flags;\n-\tstruct nfp_net_hw *hw = txq->hw;\n-\n-\tif ((hw->cap & NFP_NET_CFG_CTRL_TXCSUM) == 0)\n-\t\treturn flags;\n-\n-\tol_flags = mb->ol_flags;\n-\n-\t/* Set TCP csum offload if TSO enabled. 
*/\n-\tif (ol_flags & RTE_MBUF_F_TX_TCP_SEG)\n-\t\tflags |= NFDK_DESC_TX_L4_CSUM;\n-\n-\tif (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)\n-\t\tflags |= NFDK_DESC_TX_ENCAP;\n-\n-\t/* IPv6 does not need checksum */\n-\tif (ol_flags & RTE_MBUF_F_TX_IP_CKSUM)\n-\t\tflags |= NFDK_DESC_TX_L3_CSUM;\n-\n-\tif (ol_flags & RTE_MBUF_F_TX_L4_MASK)\n-\t\tflags |= NFDK_DESC_TX_L4_CSUM;\n-\n-\treturn flags;\n-}\n-\n-/* nfp_net_nfdk_tx_tso() - Set TX descriptor for TSO of nfdk */\n-static inline uint64_t\n-nfp_net_nfdk_tx_tso(struct nfp_net_txq *txq, struct rte_mbuf *mb)\n-{\n-\tuint64_t ol_flags;\n-\tstruct nfp_net_nfdk_tx_desc txd;\n-\tstruct nfp_net_hw *hw = txq->hw;\n-\n-\tif ((hw->cap & NFP_NET_CFG_CTRL_LSO_ANY) == 0)\n-\t\tgoto clean_txd;\n-\n-\tol_flags = mb->ol_flags;\n-\n-\tif ((ol_flags & RTE_MBUF_F_TX_TCP_SEG) == 0)\n-\t\tgoto clean_txd;\n-\n-\ttxd.l3_offset = mb->l2_len;\n-\ttxd.l4_offset = mb->l2_len + mb->l3_len;\n-\ttxd.lso_meta_res = 0;\n-\ttxd.mss = rte_cpu_to_le_16(mb->tso_segsz);\n-\ttxd.lso_hdrlen = mb->l2_len + mb->l3_len + mb->l4_len;\n-\ttxd.lso_totsegs = (mb->pkt_len + mb->tso_segsz) / mb->tso_segsz;\n-\n-\tif (ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {\n-\t\ttxd.l3_offset += mb->outer_l2_len + mb->outer_l3_len;\n-\t\ttxd.l4_offset += mb->outer_l2_len + mb->outer_l3_len;\n-\t\ttxd.lso_hdrlen += mb->outer_l2_len + mb->outer_l3_len;\n-\t}\n-\n-\treturn txd.raw;\n-\n-clean_txd:\n-\ttxd.l3_offset = 0;\n-\ttxd.l4_offset = 0;\n-\ttxd.lso_hdrlen = 0;\n-\ttxd.mss = 0;\n-\ttxd.lso_totsegs = 0;\n-\ttxd.lso_meta_res = 0;\n-\n-\treturn txd.raw;\n-}\n-\n-uint16_t\n-nfp_net_nfdk_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)\n-{\n-\tuint32_t buf_idx;\n-\tuint64_t dma_addr;\n-\tuint16_t free_descs;\n-\tuint32_t npkts = 0;\n-\tuint64_t metadata = 0;\n-\tuint16_t issued_descs = 0;\n-\tstruct nfp_net_txq *txq;\n-\tstruct nfp_net_hw *hw;\n-\tstruct nfp_net_nfdk_tx_desc *ktxds;\n-\tstruct rte_mbuf *pkt, *temp_pkt;\n-\tstruct rte_mbuf **lmbuf;\n-\n-\ttxq = tx_queue;\n-\thw = txq->hw;\n-\n-\tPMD_TX_LOG(DEBUG, \"working for queue %u at pos %d and %u packets\",\n-\t\ttxq->qidx, txq->wr_p, nb_pkts);\n-\n-\tif ((nfp_net_nfdk_free_tx_desc(txq) < NFDK_TX_DESC_PER_SIMPLE_PKT *\n-\t\t\tnb_pkts) || (nfp_net_nfdk_txq_full(txq)))\n-\t\tnfp_net_tx_free_bufs(txq);\n-\n-\tfree_descs = (uint16_t)nfp_net_nfdk_free_tx_desc(txq);\n-\tif (unlikely(free_descs == 0))\n-\t\treturn 0;\n-\n-\tPMD_TX_LOG(DEBUG, \"queue: %u. 
Sending %u packets\", txq->qidx, nb_pkts);\n-\t/* Sending packets */\n-\twhile ((npkts < nb_pkts) && free_descs) {\n-\t\tuint32_t type, dma_len, dlen_type, tmp_dlen;\n-\t\tint nop_descs, used_descs;\n-\n-\t\tpkt = *(tx_pkts + npkts);\n-\t\tnop_descs = nfp_net_nfdk_tx_maybe_close_block(txq, pkt);\n-\t\tif (nop_descs < 0)\n-\t\t\tgoto xmit_end;\n-\n-\t\tissued_descs += nop_descs;\n-\t\tktxds = &txq->ktxds[txq->wr_p];\n-\t\t/* Grabbing the mbuf linked to the current descriptor */\n-\t\tbuf_idx = txq->wr_p;\n-\t\tlmbuf = &txq->txbufs[buf_idx++].mbuf;\n-\t\t/* Warming the cache for releasing the mbuf later on */\n-\t\tRTE_MBUF_PREFETCH_TO_FREE(*lmbuf);\n-\n-\t\ttemp_pkt = pkt;\n-\t\tnfp_net_nfdk_set_meta_data(pkt, txq, &metadata);\n-\n-\t\tif (unlikely(pkt->nb_segs > 1 &&\n-\t\t\t\t!(hw->cap & NFP_NET_CFG_CTRL_GATHER))) {\n-\t\t\tPMD_INIT_LOG(ERR, \"Multisegment packet not supported\");\n-\t\t\tgoto xmit_end;\n-\t\t}\n-\n-\t\t/*\n-\t\t * Checksum and VLAN flags just in the first descriptor for a\n-\t\t * multisegment packet, but TSO info needs to be in all of them.\n-\t\t */\n-\n-\t\tdma_len = pkt->data_len;\n-\t\tif ((hw->cap & NFP_NET_CFG_CTRL_LSO_ANY) &&\n-\t\t\t\t(pkt->ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {\n-\t\t\ttype = NFDK_DESC_TX_TYPE_TSO;\n-\t\t} else if (pkt->next == NULL && dma_len <= NFDK_TX_MAX_DATA_PER_HEAD) {\n-\t\t\ttype = NFDK_DESC_TX_TYPE_SIMPLE;\n-\t\t} else {\n-\t\t\ttype = NFDK_DESC_TX_TYPE_GATHER;\n-\t\t}\n-\n-\t\t/* Implicitly truncates to chunk in below logic */\n-\t\tdma_len -= 1;\n-\n-\t\t/*\n-\t\t * We will do our best to pass as much data as we can in descriptor\n-\t\t * and we need to make sure the first descriptor includes whole\n-\t\t * head since there is limitation in firmware side. Sometimes the\n-\t\t * value of 'dma_len & NFDK_DESC_TX_DMA_LEN_HEAD' will be less\n-\t\t * than packet head len.\n-\t\t */\n-\t\tdlen_type = (dma_len > NFDK_DESC_TX_DMA_LEN_HEAD ?\n-\t\t\t\tNFDK_DESC_TX_DMA_LEN_HEAD : dma_len) |\n-\t\t\t(NFDK_DESC_TX_TYPE_HEAD & (type << 12));\n-\t\tktxds->dma_len_type = rte_cpu_to_le_16(dlen_type);\n-\t\tdma_addr = rte_mbuf_data_iova(pkt);\n-\t\tPMD_TX_LOG(DEBUG, \"Working with mbuf at dma address:\"\n-\t\t\t\t\"%\" PRIx64 \"\", dma_addr);\n-\t\tktxds->dma_addr_hi = rte_cpu_to_le_16(dma_addr >> 32);\n-\t\tktxds->dma_addr_lo = rte_cpu_to_le_32(dma_addr & 0xffffffff);\n-\t\tktxds++;\n-\n-\t\t/*\n-\t\t * Preserve the original dlen_type, this way below the EOP logic\n-\t\t * can use dlen_type.\n-\t\t */\n-\t\ttmp_dlen = dlen_type & NFDK_DESC_TX_DMA_LEN_HEAD;\n-\t\tdma_len -= tmp_dlen;\n-\t\tdma_addr += tmp_dlen + 1;\n-\n-\t\t/*\n-\t\t * The rest of the data (if any) will be in larger DMA descriptors\n-\t\t * and is handled with the dma_len loop.\n-\t\t */\n-\t\twhile (pkt) {\n-\t\t\tif (*lmbuf)\n-\t\t\t\trte_pktmbuf_free_seg(*lmbuf);\n-\t\t\t*lmbuf = pkt;\n-\t\t\twhile (dma_len > 0) {\n-\t\t\t\tdma_len -= 1;\n-\t\t\t\tdlen_type = NFDK_DESC_TX_DMA_LEN & dma_len;\n-\n-\t\t\t\tktxds->dma_len_type = rte_cpu_to_le_16(dlen_type);\n-\t\t\t\tktxds->dma_addr_hi = rte_cpu_to_le_16(dma_addr >> 32);\n-\t\t\t\tktxds->dma_addr_lo = rte_cpu_to_le_32(dma_addr & 0xffffffff);\n-\t\t\t\tktxds++;\n-\n-\t\t\t\tdma_len -= dlen_type;\n-\t\t\t\tdma_addr += dlen_type + 1;\n-\t\t\t}\n-\n-\t\t\tif (pkt->next == NULL)\n-\t\t\t\tbreak;\n-\n-\t\t\tpkt = pkt->next;\n-\t\t\tdma_len = pkt->data_len;\n-\t\t\tdma_addr = rte_mbuf_data_iova(pkt);\n-\t\t\tPMD_TX_LOG(DEBUG, \"Working with mbuf at dma address:\"\n-\t\t\t\t\"%\" PRIx64 \"\", dma_addr);\n-\n-\t\t\tlmbuf = 
&txq->txbufs[buf_idx++].mbuf;\n-\t\t}\n-\n-\t\t(ktxds - 1)->dma_len_type = rte_cpu_to_le_16(dlen_type | NFDK_DESC_TX_EOP);\n-\n-\t\tktxds->raw = rte_cpu_to_le_64(nfp_net_nfdk_tx_cksum(txq, temp_pkt, metadata));\n-\t\tktxds++;\n-\n-\t\tif ((hw->cap & NFP_NET_CFG_CTRL_LSO_ANY) &&\n-\t\t\t\t(temp_pkt->ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {\n-\t\t\tktxds->raw = rte_cpu_to_le_64(nfp_net_nfdk_tx_tso(txq, temp_pkt));\n-\t\t\tktxds++;\n-\t\t}\n-\n-\t\tused_descs = ktxds - txq->ktxds - txq->wr_p;\n-\t\tif (round_down(txq->wr_p, NFDK_TX_DESC_BLOCK_CNT) !=\n-\t\t\tround_down(txq->wr_p + used_descs - 1, NFDK_TX_DESC_BLOCK_CNT)) {\n-\t\t\tPMD_INIT_LOG(INFO, \"Used descs cross block boundary\");\n-\t\t\tgoto xmit_end;\n-\t\t}\n-\n-\t\ttxq->wr_p = D_IDX(txq, txq->wr_p + used_descs);\n-\t\tif (txq->wr_p % NFDK_TX_DESC_BLOCK_CNT)\n-\t\t\ttxq->data_pending += temp_pkt->pkt_len;\n-\t\telse\n-\t\t\ttxq->data_pending = 0;\n-\n-\t\tissued_descs += used_descs;\n-\t\tnpkts++;\n-\t\tfree_descs = (uint16_t)nfp_net_nfdk_free_tx_desc(txq);\n-\t}\n-\n-xmit_end:\n-\t/* Increment write pointers. Force memory write before we let HW know */\n-\trte_wmb();\n-\tnfp_qcp_ptr_add(txq->qcp_q, NFP_QCP_WRITE_PTR, issued_descs);\n-\n-\treturn npkts;\n-}\ndiff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h\nindex 6c81a98ae0..4d0c88529b 100644\n--- a/drivers/net/nfp/nfp_rxtx.h\n+++ b/drivers/net/nfp/nfp_rxtx.h\n@@ -96,59 +96,7 @@ struct nfp_meta_parsed {\n /* Descriptor alignment */\n #define NFP_ALIGN_RING_DESC 128\n \n-#define NFDK_TX_MAX_DATA_PER_HEAD       0x00001000\n-#define NFDK_DESC_TX_DMA_LEN_HEAD       0x0fff\n-#define NFDK_DESC_TX_TYPE_HEAD          0xf000\n-#define NFDK_DESC_TX_DMA_LEN            0x3fff\n-#define NFDK_TX_DESC_PER_SIMPLE_PKT     2\n-#define NFDK_DESC_TX_TYPE_TSO           2\n-#define NFDK_DESC_TX_TYPE_SIMPLE        8\n-#define NFDK_DESC_TX_TYPE_GATHER        1\n-#define NFDK_DESC_TX_EOP                RTE_BIT32(14)\n-#define NFDK_DESC_TX_CHAIN_META         RTE_BIT32(3)\n-#define NFDK_DESC_TX_ENCAP              RTE_BIT32(2)\n-#define NFDK_DESC_TX_L4_CSUM            RTE_BIT32(1)\n-#define NFDK_DESC_TX_L3_CSUM            RTE_BIT32(0)\n-\n-#define NFDK_TX_MAX_DATA_PER_DESC      0x00004000\n-#define NFDK_TX_DESC_GATHER_MAX        17\n #define DIV_ROUND_UP(n, d)             (((n) + (d) - 1) / (d))\n-#define NFDK_TX_DESC_BLOCK_SZ          256\n-#define NFDK_TX_DESC_BLOCK_CNT         (NFDK_TX_DESC_BLOCK_SZ /         \\\n-\t\t\t\t\tsizeof(struct nfp_net_nfdk_tx_desc))\n-#define NFDK_TX_DESC_STOP_CNT          (NFDK_TX_DESC_BLOCK_CNT *        \\\n-\t\t\t\t\tNFDK_TX_DESC_PER_SIMPLE_PKT)\n-#define NFDK_TX_MAX_DATA_PER_BLOCK     0x00010000\n-#define D_BLOCK_CPL(idx)               (NFDK_TX_DESC_BLOCK_CNT -        \\\n-\t\t\t\t\t(idx) % NFDK_TX_DESC_BLOCK_CNT)\n-#define D_IDX(ring, idx)               ((idx) & ((ring)->tx_count - 1))\n-\n-struct nfp_net_nfdk_tx_desc {\n-\tunion {\n-\t\tstruct {\n-\t\t\t__le16 dma_addr_hi;  /* High bits of host buf address */\n-\t\t\t__le16 dma_len_type; /* Length to DMA for this desc */\n-\t\t\t__le32 dma_addr_lo;  /* Low 32bit of host buf addr */\n-\t\t};\n-\n-\t\tstruct {\n-\t\t\t__le16 mss;\t/* MSS to be used for LSO */\n-\t\t\tuint8_t lso_hdrlen;  /* LSO, TCP payload offset */\n-\t\t\tuint8_t lso_totsegs; /* LSO, total segments */\n-\t\t\tuint8_t l3_offset;   /* L3 header offset */\n-\t\t\tuint8_t l4_offset;   /* L4 header offset */\n-\t\t\t__le16 lso_meta_res; /* Rsvd bits in TSO metadata */\n-\t\t};\n-\n-\t\tstruct {\n-\t\t\tuint8_t flags;\t/* TX Flags, see 
@NFDK_DESC_TX_* */\n-\t\t\tuint8_t reserved[7];\t/* meta byte placeholder */\n-\t\t};\n-\n-\t\t__le32 vals[2];\n-\t\t__le64 raw;\n-\t};\n-};\n \n struct nfp_net_txq {\n \tstruct nfp_net_hw *hw; /* Backpointer to nfp_net structure */\n@@ -396,9 +344,6 @@ int nfp_net_tx_queue_setup(struct rte_eth_dev *dev,\n \t\tuint16_t nb_desc,\n \t\tunsigned int socket_id,\n \t\tconst struct rte_eth_txconf *tx_conf);\n-uint16_t nfp_net_nfdk_xmit_pkts(void *tx_queue,\n-\t\tstruct rte_mbuf **tx_pkts,\n-\t\tuint16_t nb_pkts);\n int nfp_net_tx_free_bufs(struct nfp_net_txq *txq);\n void nfp_net_set_meta_vlan(struct nfp_net_meta_raw *meta_data,\n \t\tstruct rte_mbuf *pkt,\n",
    "prefixes": [
        "11/13"
    ]
}
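
The Allow header above lists PUT and PATCH alongside GET, so writable fields such as "state" or "delegate" can be changed through the same URL. The sketch below is an assumption-laden illustration only: it again uses Python's requests library, and it assumes the server accepts a Patchwork API token via an "Authorization: Token ..." header, which is not shown in the recorded exchange. Replace <your-api-token> with a real token for an account with maintainer rights on the project.

import requests

url = "http://patches.dpdk.org/api/patches/125887/"
# Assumption: token-based authentication; the placeholder below is not a real token.
headers = {"Authorization": "Token <your-api-token>"}

# PATCH performs a partial update (only the fields sent are changed);
# PUT, also listed in the Allow header, would replace the whole resource.
resp = requests.patch(url, headers=headers, json={"state": "accepted"})
resp.raise_for_status()
print(resp.json()["state"])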