get:
Show a patch.

patch:
Update a patch (HTTP PATCH: partial update; only the supplied fields are changed).

put:
Update a patch (HTTP PUT: full update of the writable fields).
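
The JSON document below is what this endpoint returns for patch 132372. As a
quick reference, here is a minimal read-only sketch (assuming Python with the
third-party "requests" package, which is not part of Patchwork itself) that
fetches the same resource and reads a few of its fields:

    import requests

    # Fetch the patch shown below as JSON (no authentication is needed for
    # read-only access; the URL matches the "url" field in the response).
    resp = requests.get("http://patches.dpdk.org/api/patches/132372/",
                        headers={"Accept": "application/json"})
    resp.raise_for_status()
    patch = resp.json()

    print(patch["name"])                # "[02/11] net/nfp: unify the indent coding style"
    print(patch["state"])               # "superseded"
    print(patch["submitter"]["email"])  # "chaoyong.he@corigine.com"

    # The raw patch in mbox format is available at the URL in the "mbox" field.
    mbox_text = requests.get(patch["mbox"]).text

Updating a patch through PUT or PATCH requires authentication (an API token or
session with sufficient rights on the project), so only the GET path is shown
here.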

GET /api/patches/132372/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 132372,
    "url": "http://patches.dpdk.org/api/patches/132372/?format=api",
    "web_url": "http://patches.dpdk.org/project/dpdk/patch/20231007023339.1546659-3-chaoyong.he@corigine.com/",
    "project": {
        "id": 1,
        "url": "http://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<20231007023339.1546659-3-chaoyong.he@corigine.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/20231007023339.1546659-3-chaoyong.he@corigine.com",
    "date": "2023-10-07T02:33:30",
    "name": "[02/11] net/nfp: unify the indent coding style",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "002023226ad0d1a720af1b318accaeaaf4923758",
    "submitter": {
        "id": 2554,
        "url": "http://patches.dpdk.org/api/people/2554/?format=api",
        "name": "Chaoyong He",
        "email": "chaoyong.he@corigine.com"
    },
    "delegate": {
        "id": 319,
        "url": "http://patches.dpdk.org/api/users/319/?format=api",
        "username": "fyigit",
        "first_name": "Ferruh",
        "last_name": "Yigit",
        "email": "ferruh.yigit@amd.com"
    },
    "mbox": "http://patches.dpdk.org/project/dpdk/patch/20231007023339.1546659-3-chaoyong.he@corigine.com/mbox/",
    "series": [
        {
            "id": 29758,
            "url": "http://patches.dpdk.org/api/series/29758/?format=api",
            "web_url": "http://patches.dpdk.org/project/dpdk/list/?series=29758",
            "date": "2023-10-07T02:33:28",
            "name": "Unify the PMD coding style",
            "version": 1,
            "mbox": "http://patches.dpdk.org/series/29758/mbox/"
        }
    ],
    "comments": "http://patches.dpdk.org/api/patches/132372/comments/",
    "check": "warning",
    "checks": "http://patches.dpdk.org/api/patches/132372/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@inbox.dpdk.org",
        "Delivered-To": "patchwork@inbox.dpdk.org",
        "Received": [
            "from mails.dpdk.org (mails.dpdk.org [217.70.189.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id 045EE426D6;\n\tSat,  7 Oct 2023 04:34:20 +0200 (CEST)",
            "from mails.dpdk.org (localhost [127.0.0.1])\n\tby mails.dpdk.org (Postfix) with ESMTP id 6EC3A40A72;\n\tSat,  7 Oct 2023 04:34:07 +0200 (CEST)",
            "from NAM10-DM6-obe.outbound.protection.outlook.com\n (mail-dm6nam10on2108.outbound.protection.outlook.com [40.107.93.108])\n by mails.dpdk.org (Postfix) with ESMTP id 81C79406B4\n for <dev@dpdk.org>; Sat,  7 Oct 2023 04:34:05 +0200 (CEST)",
            "from SJ0PR13MB5545.namprd13.prod.outlook.com (2603:10b6:a03:424::5)\n by SA0PR13MB3936.namprd13.prod.outlook.com (2603:10b6:806:97::24)\n with Microsoft SMTP Server (version=TLS1_2,\n cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.6838.38; Sat, 7 Oct\n 2023 02:34:02 +0000",
            "from SJ0PR13MB5545.namprd13.prod.outlook.com\n ([fe80::28c0:63e2:ecd1:9314]) by SJ0PR13MB5545.namprd13.prod.outlook.com\n ([fe80::28c0:63e2:ecd1:9314%4]) with mapi id 15.20.6813.027; Sat, 7 Oct 2023\n 02:34:02 +0000"
        ],
        "ARC-Seal": "i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;\n b=bacIIAz1DD1lBgnSuuFfSX7GLjoPzuzidIthLq5JOsegQAhx2+uS2UDTqdC8VPypY9+8fWW5AGnK4KMseQ8A401suZCyLf0eUHrGoKkZ9hopZCTDWgC+wErfVlGM1zc84nXBWnqAGt3/zQnk6f6nOL39GT0MIw9krcvRrS59UV0zj+/ttg+eX2afDLAf+O1KRrOqXrUa0mKu3gz5PbvaxocgazaXnoh4N0kjcQMR48dn0wshfQTWpjLnEnm31RPuqA3lsNsOkPKy/Lr1R0onVFJZ5I6M1BD3+2nHoOaTn/Tfk4NR6o/EFDiuWMVvhnMfhw/mD8L+hIcpnDAcMWPmrg==",
        "ARC-Message-Signature": "i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;\n s=arcselector9901;\n h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1;\n bh=uRzCAmfiKFUvP0WoxaqxnTBTXZBb6Kok8mJEygOP8ng=;\n b=ausOenL7td2FI0U6s0JKKfCooYn6XrPorWLm5JpRANzsOwN0JdfZJnAlF/UfBGEYzYwyKqvA6Zuq5Kn+a2/j/xtFJCsMwDxJ+wx+uuTHFcxepg5xuakk3iz8nTeJ6GU+y2N93LNrjvRHmDISl8qATj2SBbRQ852oAsEuHmYMVX0elZTaBMJH4lhrDKVLdGkmeLPW6WWI9yWlrhKfoBEI3EdTkc7vsE/hfCxZtBAGQZavAp6LSj5AVa1mutmMCAVIEZZCgGL583H0S1LtH8lRKiWMzmDg1htqERVpeNtESapf9YbP6NtodXfSCUx4bznP9dm47OQEm9Io7n7mphqX2A==",
        "ARC-Authentication-Results": "i=1; mx.microsoft.com 1; spf=pass\n smtp.mailfrom=corigine.com; dmarc=pass action=none header.from=corigine.com;\n dkim=pass header.d=corigine.com; arc=none",
        "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed;\n d=corigine.onmicrosoft.com; s=selector2-corigine-onmicrosoft-com;\n h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;\n bh=uRzCAmfiKFUvP0WoxaqxnTBTXZBb6Kok8mJEygOP8ng=;\n b=Jbcs0Aym2KkkD8MafAe6wtfBzWYQWI53FIf25Cy1di4I/1EdqN1XcqPJ33/lZ5mcyFaGbn+tpQ0pRhl95WK12wjSUNWtfR6j7Q6o2KUpAwwUo067IKoSZr2g/TM9Kom5WWd3PfecpKImKDCupp0msp74sEEL21vvzC2FUZh0TN0=",
        "Authentication-Results": "dkim=none (message not signed)\n header.d=none;dmarc=none action=none header.from=corigine.com;",
        "From": "Chaoyong He <chaoyong.he@corigine.com>",
        "To": "dev@dpdk.org",
        "Cc": "oss-drivers@corigine.com, Chaoyong He <chaoyong.he@corigine.com>,\n Long Wu <long.wu@corigine.com>, Peng Zhang <peng.zhang@corigine.com>",
        "Subject": "[PATCH 02/11] net/nfp: unify the indent coding style",
        "Date": "Sat,  7 Oct 2023 10:33:30 +0800",
        "Message-Id": "<20231007023339.1546659-3-chaoyong.he@corigine.com>",
        "X-Mailer": "git-send-email 2.39.1",
        "In-Reply-To": "<20231007023339.1546659-1-chaoyong.he@corigine.com>",
        "References": "<20231007023339.1546659-1-chaoyong.he@corigine.com>",
        "Content-Transfer-Encoding": "8bit",
        "Content-Type": "text/plain",
        "X-ClientProxiedBy": "SJ0PR03CA0212.namprd03.prod.outlook.com\n (2603:10b6:a03:39f::7) To SJ0PR13MB5545.namprd13.prod.outlook.com\n (2603:10b6:a03:424::5)",
        "MIME-Version": "1.0",
        "X-MS-PublicTrafficType": "Email",
        "X-MS-TrafficTypeDiagnostic": "SJ0PR13MB5545:EE_|SA0PR13MB3936:EE_",
        "X-MS-Office365-Filtering-Correlation-Id": "18bcec56-99eb-403b-d110-08dbc6dddde6",
        "X-MS-Exchange-SenderADCheck": "1",
        "X-MS-Exchange-AntiSpam-Relay": "0",
        "X-Microsoft-Antispam": "BCL:0;",
        "X-Microsoft-Antispam-Message-Info": "\n kK2VXRtFrbKXc190SggrEPdz7JQgDMVae66HCYhcBUxdIaKlg/B/vVCeaDOaqdNatINISE7FQBJVIoUxIw9Law+kRHUgzhnEsX76vEI2gmgjUBxtMO3vv/y55it5hT9M3cLISQjMIHO83I8i54kYBWbIFC2w1SDAdampy5ZLXzSkabiPUYqWt6pywtBOULiTqg09QGy8T5on5Y56jrFSEcSnh8AfR+1O+ZHkmn1EbPVNVTGd6TQdqK6mt8M/5af2/CiNwrhtYKZGFf9dLwui3Ay9qOaROXkpLdBonf0st7l+D2tifOmA8HpUr9vqZYxI7x+I0L6iSydu19FgVMbHelOdHEuTpP1iXLzw8BAPo33/qhYbGgtiQ3GnC8VG7BqXH5wzQyy+2Pa1+4KFMtEnJalaWNzeM1kPRGk3jLTweI9LT+ZUMBlAi6ZQSx1LIGrSDOY/WddFIuSGSu7Lm1GYJm0nUMnG6p/vdwoKBJX7Mdy9htTQnsg1ScO++u+qbpx+lQrrEHb7aZhxENTdCkihz6jjzWEBsanEJ13WOwi0MYsAOb4bOfxOErUuulRaExBYJE0BL7zqcMPkKJ+Vln9oEbEBtILTEn9ZJa0FKmoau+7pAcmVVGBZMkPg60Sz73A1Q2RqPRmWPaGJt6jtHTbQeZOodtwvcIdVfCwq/Cq+EVo=",
        "X-Forefront-Antispam-Report": "CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;\n IPV:NLI; SFV:NSPM; H:SJ0PR13MB5545.namprd13.prod.outlook.com; PTR:; CAT:NONE;\n SFS:(13230031)(366004)(376002)(346002)(396003)(39830400003)(136003)(230922051799003)(186009)(1800799009)(64100799003)(451199024)(6666004)(6506007)(52116002)(6512007)(478600001)(6486002)(26005)(107886003)(1076003)(6916009)(316002)(30864003)(41300700001)(2906002)(44832011)(66476007)(8936002)(5660300002)(4326008)(54906003)(66556008)(66946007)(36756003)(8676002)(86362001)(38350700002)(38100700002)(2616005)(83380400001)(579004)(559001);\n DIR:OUT; SFP:1102;",
        "X-MS-Exchange-AntiSpam-MessageData-ChunkCount": "1",
        "X-MS-Exchange-AntiSpam-MessageData-0": "\n DRGFEhVxLB7IUMbY8mm6OMfMaeYGLHLpS7lkKAEFbCatxTpmTEKRtLt5ewkRJ6+6XQOT0MaXv4fesncZRqL/eWNmWeO+6qbXY1Kf1Sk22m4LmW9ULLUfWZH1l1dc4RS4UkLU/TALp7ojG/Y93DyogNN8Lxv+0VF9qbBk3Nuvqg2BH32vP4uruBT51q1R+SJxCUvs2y8QS/bIPOw9Pp4FhxX1RllKb2+T0lNiuzP3v7dpfPxTv4lUM5duX/gQjG7v8rRFeJlYgurja3F9ofEkRRCBU3UbP6rcswSCblLXEq6wuwQYkxAnGPor9iHf8QEj/vu4tiJcaR3asfnE3eRe/4T1of0upNxUWUm6KQ1fa9tIPe1JPxFhkYakRLMycjRpCXClzRUSPUn5TSOzgg9Psv7+zo54cA58bQo3OvEthiM/24sNI6u4M7irvA3dHLLx/cAJMqXIAOzWzl1PvnmjCmB4MHiUUrnru2m6EzKfxORBUJhT2/hGJg4R6LHJ6EbFml9mRX2fBASx2ZBffXzBDsTFoK8jrDtJguHQhgNIokizCmhBGlSdQrGwKcSKyhRrkBdg58edv9XdmjozBxwfRL+PwWt9+GWByv1/FR3gim6uRifEZqK3yU62j/NKsZSnSMUP1gaMduYNR8KVO6VZGRjgv9Tcrxm8LQtET6YTrch8ly1HD+6J4Ably7wDH97B+cYgmXJs41/1GcMWNVUhU+B2836lmtkQsEjhZ/JZfG4fy9bGYImbQwgaYLFRiYLM81/wPLNQldSTxLC8VCGz1LrLHsZivMMqGLvYMCjZtw8iJFucISvovPI88Zcc3pKb5i+1WkvAA3EpNrwgZIL3Xu3gS2eWRfIZtcJTfCveEgiRxS427/nR6fAV4wqJoTr7XRg1zcQsH6ptXQ0RwDuX0/MYzPRZ1v4rKqTmsA01eR64Sbda0tHY4N9BWgFlUlPCWEq8Ip03mnKktCZIyhWHsFZ141tVCP4ESmVWGefb24eigpHubqs4GkEEPhM7Bg/nlyyHckccaqFpcNkNiqPxSPGlxU+/p0krnbsuNMf6InZsfZqrEZ70Lt93okj62vlXKMZqgXBxe/Phrtb+y48+PQoTI+y515uVZ9bKlmJrXe4eOjQVmBHg6TxhOwF43LvB8ERrQVFd4Lk7iy13TYBuAUXPT/c4AY41zOQam8qF67o0rL29ZKVlJyPGBtKnSLFey0hSxfnch5BSMzOSfmZXpSLVOyi6RgET1tMvjQ04WcNPkDzs7hRCWl6sZ5z05hs7RTZ9MuneC9kp4uzkqtZ1quoGTMQeSn+7X35zdvc7wwfdjKO0fItlmawsiwSMWJEaIP3WUbDggcl8/YBVrvHaew2vszzRY5aDx/oztVEPxnSvsN81CKd/n5na9V29I4+jIZsWe3eMUHb3ePJh34bI4dfmQtOWMNgHSPe3YcK3qbyfUjq0TuV71FSi0LRUnB7qV2d47PybCo3CgeCSVJLfVoCFjK10qVMdMi9jkfLYa32HfnxwXVHxC1BAkD5y6/YZfTIRNWxchMak7C0w7eV7GmjMh4s5d48npwDfqiec4FLBappVwFvHhgITBxXsCX9Qo5Qmi3uQI/Lk4Jd/2gGWIQ==",
        "X-OriginatorOrg": "corigine.com",
        "X-MS-Exchange-CrossTenant-Network-Message-Id": "\n 18bcec56-99eb-403b-d110-08dbc6dddde6",
        "X-MS-Exchange-CrossTenant-AuthSource": "SJ0PR13MB5545.namprd13.prod.outlook.com",
        "X-MS-Exchange-CrossTenant-AuthAs": "Internal",
        "X-MS-Exchange-CrossTenant-OriginalArrivalTime": "07 Oct 2023 02:34:02.1147 (UTC)",
        "X-MS-Exchange-CrossTenant-FromEntityHeader": "Hosted",
        "X-MS-Exchange-CrossTenant-Id": "fe128f2c-073b-4c20-818e-7246a585940c",
        "X-MS-Exchange-CrossTenant-MailboxType": "HOSTED",
        "X-MS-Exchange-CrossTenant-UserPrincipalName": "\n TqIrTMjOsWN9hTucWgRmV8N1Rl+8IQibM80aNphix3dRe1tIfa6UQGYKH6CynnkSWgouSuV38wmVzfrkykYmu+QI9W6ZXhOEcG5WZlcASig=",
        "X-MS-Exchange-Transport-CrossTenantHeadersStamped": "SA0PR13MB3936",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.29",
        "Precedence": "list",
        "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
        "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://mails.dpdk.org/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org"
    },
    "content": "Each parameter of function should occupy one line, and indent two TAB\ncharacter.\nAll the statement which span multi line should indent two TAB character.\n\nSigned-off-by: Chaoyong He <chaoyong.he@corigine.com>\nReviewed-by: Long Wu <long.wu@corigine.com>\nReviewed-by: Peng Zhang <peng.zhang@corigine.com>\n---\n drivers/net/nfp/flower/nfp_flower.c           |   3 +-\n drivers/net/nfp/flower/nfp_flower_ctrl.c      |   7 +-\n .../net/nfp/flower/nfp_flower_representor.c   |   2 +-\n drivers/net/nfp/nfdk/nfp_nfdk.h               |   2 +-\n drivers/net/nfp/nfdk/nfp_nfdk_dp.c            |   4 +-\n drivers/net/nfp/nfp_common.c                  | 250 +++++++++---------\n drivers/net/nfp/nfp_common.h                  |  81 ++++--\n drivers/net/nfp/nfp_cpp_bridge.c              |  56 ++--\n drivers/net/nfp/nfp_ethdev.c                  |  82 +++---\n drivers/net/nfp/nfp_ethdev_vf.c               |  66 +++--\n drivers/net/nfp/nfp_flow.c                    |  36 +--\n drivers/net/nfp/nfp_rxtx.c                    |  86 +++---\n drivers/net/nfp/nfp_rxtx.h                    |  10 +-\n 13 files changed, 357 insertions(+), 328 deletions(-)",
    "diff": "diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c\nindex 3ddaf0f28d..59717fa6b1 100644\n--- a/drivers/net/nfp/flower/nfp_flower.c\n+++ b/drivers/net/nfp/flower/nfp_flower.c\n@@ -330,7 +330,8 @@ nfp_flower_pf_xmit_pkts(void *tx_queue,\n }\n \n static int\n-nfp_flower_init_vnic_common(struct nfp_net_hw *hw, const char *vnic_type)\n+nfp_flower_init_vnic_common(struct nfp_net_hw *hw,\n+\t\tconst char *vnic_type)\n {\n \tint err;\n \tuint32_t start_q;\ndiff --git a/drivers/net/nfp/flower/nfp_flower_ctrl.c b/drivers/net/nfp/flower/nfp_flower_ctrl.c\nindex b564e7cd73..4967cc2375 100644\n--- a/drivers/net/nfp/flower/nfp_flower_ctrl.c\n+++ b/drivers/net/nfp/flower/nfp_flower_ctrl.c\n@@ -64,9 +64,8 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue,\n \t\t */\n \t\tnew_mb = rte_pktmbuf_alloc(rxq->mem_pool);\n \t\tif (unlikely(new_mb == NULL)) {\n-\t\t\tPMD_RX_LOG(ERR,\n-\t\t\t\t\"RX mbuf alloc failed port_id=%u queue_id=%hu\",\n-\t\t\t\trxq->port_id, rxq->qidx);\n+\t\t\tPMD_RX_LOG(ERR, \"RX mbuf alloc failed port_id=%u queue_id=%hu\",\n+\t\t\t\t\trxq->port_id, rxq->qidx);\n \t\t\tnfp_net_mbuf_alloc_failed(rxq);\n \t\t\tbreak;\n \t\t}\n@@ -141,7 +140,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue,\n \trte_wmb();\n \tif (nb_hold >= rxq->rx_free_thresh) {\n \t\tPMD_RX_LOG(DEBUG, \"port=%hu queue=%hu nb_hold=%hu avail=%hu\",\n-\t\t\trxq->port_id, rxq->qidx, nb_hold, avail);\n+\t\t\t\trxq->port_id, rxq->qidx, nb_hold, avail);\n \t\tnfp_qcp_ptr_add(rxq->qcp_fl, NFP_QCP_WRITE_PTR, nb_hold);\n \t\tnb_hold = 0;\n \t}\ndiff --git a/drivers/net/nfp/flower/nfp_flower_representor.c b/drivers/net/nfp/flower/nfp_flower_representor.c\nindex 55ca3e6db0..01c2c5a517 100644\n--- a/drivers/net/nfp/flower/nfp_flower_representor.c\n+++ b/drivers/net/nfp/flower/nfp_flower_representor.c\n@@ -826,7 +826,7 @@ nfp_flower_repr_alloc(struct nfp_app_fw_flower *app_fw_flower)\n \t\tsnprintf(flower_repr.name, sizeof(flower_repr.name),\n \t\t\t\t\"%s_repr_vf%d\", pci_name, i);\n \n-\t\t /* This will also allocate private memory for the device*/\n+\t\t/* This will also allocate private memory for the device*/\n \t\tret = rte_eth_dev_create(eth_dev->device, flower_repr.name,\n \t\t\t\tsizeof(struct nfp_flower_representor),\n \t\t\t\tNULL, NULL, nfp_flower_repr_init, &flower_repr);\ndiff --git a/drivers/net/nfp/nfdk/nfp_nfdk.h b/drivers/net/nfp/nfdk/nfp_nfdk.h\nindex 75ecb361ee..99675b6bd7 100644\n--- a/drivers/net/nfp/nfdk/nfp_nfdk.h\n+++ b/drivers/net/nfp/nfdk/nfp_nfdk.h\n@@ -143,7 +143,7 @@ nfp_net_nfdk_free_tx_desc(struct nfp_net_txq *txq)\n \t\tfree_desc = txq->rd_p - txq->wr_p;\n \n \treturn (free_desc > NFDK_TX_DESC_STOP_CNT) ?\n-\t\t(free_desc - NFDK_TX_DESC_STOP_CNT) : 0;\n+\t\t\t(free_desc - NFDK_TX_DESC_STOP_CNT) : 0;\n }\n \n /*\ndiff --git a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c\nindex d4bd5edb0a..2426ffb261 100644\n--- a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c\n+++ b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c\n@@ -101,9 +101,7 @@ static inline uint16_t\n nfp_net_nfdk_headlen_to_segs(uint16_t headlen)\n {\n \t/* First descriptor fits less data, so adjust for that */\n-\treturn DIV_ROUND_UP(headlen +\n-\t\t\tNFDK_TX_MAX_DATA_PER_DESC -\n-\t\t\tNFDK_TX_MAX_DATA_PER_HEAD,\n+\treturn DIV_ROUND_UP(headlen + NFDK_TX_MAX_DATA_PER_DESC - NFDK_TX_MAX_DATA_PER_HEAD,\n \t\t\tNFDK_TX_MAX_DATA_PER_DESC);\n }\n \ndiff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c\nindex 36752583dd..9719a9212b 100644\n--- a/drivers/net/nfp/nfp_common.c\n+++ 
b/drivers/net/nfp/nfp_common.c\n@@ -172,7 +172,8 @@ nfp_net_link_speed_rte2nfp(uint16_t speed)\n }\n \n static void\n-nfp_net_notify_port_speed(struct nfp_net_hw *hw, struct rte_eth_link *link)\n+nfp_net_notify_port_speed(struct nfp_net_hw *hw,\n+\t\tstruct rte_eth_link *link)\n {\n \t/**\n \t * Read the link status from NFP_NET_CFG_STS. If the link is down\n@@ -188,21 +189,22 @@ nfp_net_notify_port_speed(struct nfp_net_hw *hw, struct rte_eth_link *link)\n \t * NFP_NET_CFG_STS_NSP_LINK_RATE.\n \t */\n \tnn_cfg_writew(hw, NFP_NET_CFG_STS_NSP_LINK_RATE,\n-\t\t      nfp_net_link_speed_rte2nfp(link->link_speed));\n+\t\t\tnfp_net_link_speed_rte2nfp(link->link_speed));\n }\n \n /* The length of firmware version string */\n #define FW_VER_LEN        32\n \n static int\n-__nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t update)\n+__nfp_net_reconfig(struct nfp_net_hw *hw,\n+\t\tuint32_t update)\n {\n \tint cnt;\n \tuint32_t new;\n \tstruct timespec wait;\n \n \tPMD_DRV_LOG(DEBUG, \"Writing to the configuration queue (%p)...\",\n-\t\t    hw->qcp_cfg);\n+\t\t\thw->qcp_cfg);\n \n \tif (hw->qcp_cfg == NULL) {\n \t\tPMD_INIT_LOG(ERR, \"Bad configuration queue pointer\");\n@@ -227,7 +229,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t update)\n \t\t}\n \t\tif (cnt >= NFP_NET_POLL_TIMEOUT) {\n \t\t\tPMD_INIT_LOG(ERR, \"Reconfig timeout for 0x%08x after\"\n-\t\t\t\t\t  \" %dms\", update, cnt);\n+\t\t\t\t\t\" %dms\", update, cnt);\n \t\t\treturn -EIO;\n \t\t}\n \t\tnanosleep(&wait, 0); /* waiting for a 1ms */\n@@ -254,7 +256,9 @@ __nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t update)\n  *   - (EIO) if I/O err and fail to reconfigure the device.\n  */\n int\n-nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t ctrl, uint32_t update)\n+nfp_net_reconfig(struct nfp_net_hw *hw,\n+\t\tuint32_t ctrl,\n+\t\tuint32_t update)\n {\n \tint ret;\n \n@@ -296,7 +300,9 @@ nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t ctrl, uint32_t update)\n  *   - (EIO) if I/O err and fail to reconfigure the device.\n  */\n int\n-nfp_net_ext_reconfig(struct nfp_net_hw *hw, uint32_t ctrl_ext, uint32_t update)\n+nfp_net_ext_reconfig(struct nfp_net_hw *hw,\n+\t\tuint32_t ctrl_ext,\n+\t\tuint32_t update)\n {\n \tint ret;\n \n@@ -401,7 +407,7 @@ nfp_net_configure(struct rte_eth_dev *dev)\n \n \t/* Checking RX mode */\n \tif ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) != 0 &&\n-\t    (hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) == 0) {\n+\t\t\t(hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) == 0) {\n \t\tPMD_INIT_LOG(INFO, \"RSS not supported\");\n \t\treturn -EINVAL;\n \t}\n@@ -409,7 +415,7 @@ nfp_net_configure(struct rte_eth_dev *dev)\n \t/* Checking MTU set */\n \tif (rxmode->mtu > NFP_FRAME_SIZE_MAX) {\n \t\tPMD_INIT_LOG(ERR, \"MTU (%u) larger than NFP_FRAME_SIZE_MAX (%u) not supported\",\n-\t\t\t\t    rxmode->mtu, NFP_FRAME_SIZE_MAX);\n+\t\t\t\trxmode->mtu, NFP_FRAME_SIZE_MAX);\n \t\treturn -ERANGE;\n \t}\n \n@@ -446,7 +452,8 @@ nfp_net_log_device_information(const struct nfp_net_hw *hw)\n }\n \n static inline void\n-nfp_net_enbable_rxvlan_cap(struct nfp_net_hw *hw, uint32_t *ctrl)\n+nfp_net_enbable_rxvlan_cap(struct nfp_net_hw *hw,\n+\t\tuint32_t *ctrl)\n {\n \tif ((hw->cap & NFP_NET_CFG_CTRL_RXVLAN_V2) != 0)\n \t\t*ctrl |= NFP_NET_CFG_CTRL_RXVLAN_V2;\n@@ -490,8 +497,9 @@ nfp_net_disable_queues(struct rte_eth_dev *dev)\n \tnn_cfg_writeq(hw, NFP_NET_CFG_RXRS_ENABLE, 0);\n \n \tnew_ctrl = hw->ctrl & ~NFP_NET_CFG_CTRL_ENABLE;\n-\tupdate = NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING |\n-\t\t NFP_NET_CFG_UPDATE_MSIX;\n+\tupdate = 
NFP_NET_CFG_UPDATE_GEN |\n+\t\t\tNFP_NET_CFG_UPDATE_RING |\n+\t\t\tNFP_NET_CFG_UPDATE_MSIX;\n \n \tif ((hw->cap & NFP_NET_CFG_CTRL_RINGCFG) != 0)\n \t\tnew_ctrl &= ~NFP_NET_CFG_CTRL_RINGCFG;\n@@ -517,7 +525,8 @@ nfp_net_cfg_queue_setup(struct nfp_net_hw *hw)\n }\n \n void\n-nfp_net_write_mac(struct nfp_net_hw *hw, uint8_t *mac)\n+nfp_net_write_mac(struct nfp_net_hw *hw,\n+\t\tuint8_t *mac)\n {\n \tuint32_t mac0 = *(uint32_t *)mac;\n \tuint16_t mac1;\n@@ -527,20 +536,21 @@ nfp_net_write_mac(struct nfp_net_hw *hw, uint8_t *mac)\n \tmac += 4;\n \tmac1 = *(uint16_t *)mac;\n \tnn_writew(rte_cpu_to_be_16(mac1),\n-\t\t  hw->ctrl_bar + NFP_NET_CFG_MACADDR + 6);\n+\t\t\thw->ctrl_bar + NFP_NET_CFG_MACADDR + 6);\n }\n \n int\n-nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr)\n+nfp_net_set_mac_addr(struct rte_eth_dev *dev,\n+\t\tstruct rte_ether_addr *mac_addr)\n {\n \tstruct nfp_net_hw *hw;\n \tuint32_t update, ctrl;\n \n \thw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);\n \tif ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&\n-\t    (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) == 0) {\n+\t\t\t(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) == 0) {\n \t\tPMD_INIT_LOG(INFO, \"MAC address unable to change when\"\n-\t\t\t\t  \" port enabled\");\n+\t\t\t\t\" port enabled\");\n \t\treturn -EBUSY;\n \t}\n \n@@ -551,7 +561,7 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr)\n \tupdate = NFP_NET_CFG_UPDATE_MACADDR;\n \tctrl = hw->ctrl;\n \tif ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&\n-\t    (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0)\n+\t\t\t(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0)\n \t\tctrl |= NFP_NET_CFG_CTRL_LIVE_ADDR;\n \tif (nfp_net_reconfig(hw, ctrl, update) != 0) {\n \t\tPMD_INIT_LOG(INFO, \"MAC address update failed\");\n@@ -562,15 +572,15 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr)\n \n int\n nfp_configure_rx_interrupt(struct rte_eth_dev *dev,\n-\t\t\t   struct rte_intr_handle *intr_handle)\n+\t\tstruct rte_intr_handle *intr_handle)\n {\n \tstruct nfp_net_hw *hw;\n \tint i;\n \n \tif (rte_intr_vec_list_alloc(intr_handle, \"intr_vec\",\n-\t\t\t\t    dev->data->nb_rx_queues) != 0) {\n+\t\t\t\tdev->data->nb_rx_queues) != 0) {\n \t\tPMD_INIT_LOG(ERR, \"Failed to allocate %d rx_queues\"\n-\t\t\t     \" intr_vec\", dev->data->nb_rx_queues);\n+\t\t\t\t\" intr_vec\", dev->data->nb_rx_queues);\n \t\treturn -ENOMEM;\n \t}\n \n@@ -590,12 +600,10 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,\n \t\t\t * efd interrupts\n \t\t\t*/\n \t\t\tnn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(i), i + 1);\n-\t\t\tif (rte_intr_vec_list_index_set(intr_handle, i,\n-\t\t\t\t\t\t\t       i + 1) != 0)\n+\t\t\tif (rte_intr_vec_list_index_set(intr_handle, i, i + 1) != 0)\n \t\t\t\treturn -1;\n \t\t\tPMD_INIT_LOG(DEBUG, \"intr_vec[%d]= %d\", i,\n-\t\t\t\trte_intr_vec_list_index_get(intr_handle,\n-\t\t\t\t\t\t\t\t   i));\n+\t\t\t\t\trte_intr_vec_list_index_get(intr_handle, i));\n \t\t}\n \t}\n \n@@ -651,13 +659,13 @@ nfp_check_offloads(struct rte_eth_dev *dev)\n \n \t/* TX checksum offload */\n \tif ((txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0 ||\n-\t    (txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0 ||\n-\t    (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)\n+\t\t\t(txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0 ||\n+\t\t\t(txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)\n \t\tctrl |= NFP_NET_CFG_CTRL_TXCSUM;\n \n \t/* LSO offload */\n \tif ((txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) != 
0 ||\n-\t    (txmode->offloads & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) != 0) {\n+\t\t\t(txmode->offloads & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) != 0) {\n \t\tif ((hw->cap & NFP_NET_CFG_CTRL_LSO) != 0)\n \t\t\tctrl |= NFP_NET_CFG_CTRL_LSO;\n \t\telse\n@@ -751,7 +759,8 @@ nfp_net_promisc_disable(struct rte_eth_dev *dev)\n  * status.\n  */\n int\n-nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)\n+nfp_net_link_update(struct rte_eth_dev *dev,\n+\t\t__rte_unused int wait_to_complete)\n {\n \tint ret;\n \tuint32_t i;\n@@ -820,7 +829,8 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)\n }\n \n int\n-nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)\n+nfp_net_stats_get(struct rte_eth_dev *dev,\n+\t\tstruct rte_eth_stats *stats)\n {\n \tint i;\n \tstruct nfp_net_hw *hw;\n@@ -838,16 +848,16 @@ nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)\n \t\t\tbreak;\n \n \t\tnfp_dev_stats.q_ipackets[i] =\n-\t\t\tnn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i));\n+\t\t\t\tnn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i));\n \n \t\tnfp_dev_stats.q_ipackets[i] -=\n-\t\t\thw->eth_stats_base.q_ipackets[i];\n+\t\t\t\thw->eth_stats_base.q_ipackets[i];\n \n \t\tnfp_dev_stats.q_ibytes[i] =\n-\t\t\tnn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8);\n+\t\t\t\tnn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8);\n \n \t\tnfp_dev_stats.q_ibytes[i] -=\n-\t\t\thw->eth_stats_base.q_ibytes[i];\n+\t\t\t\thw->eth_stats_base.q_ibytes[i];\n \t}\n \n \t/* reading per TX ring stats */\n@@ -856,46 +866,42 @@ nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)\n \t\t\tbreak;\n \n \t\tnfp_dev_stats.q_opackets[i] =\n-\t\t\tnn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i));\n+\t\t\t\tnn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i));\n \n-\t\tnfp_dev_stats.q_opackets[i] -=\n-\t\t\thw->eth_stats_base.q_opackets[i];\n+\t\tnfp_dev_stats.q_opackets[i] -= hw->eth_stats_base.q_opackets[i];\n \n \t\tnfp_dev_stats.q_obytes[i] =\n-\t\t\tnn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8);\n+\t\t\t\tnn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8);\n \n-\t\tnfp_dev_stats.q_obytes[i] -=\n-\t\t\thw->eth_stats_base.q_obytes[i];\n+\t\tnfp_dev_stats.q_obytes[i] -= hw->eth_stats_base.q_obytes[i];\n \t}\n \n-\tnfp_dev_stats.ipackets =\n-\t\tnn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES);\n+\tnfp_dev_stats.ipackets = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES);\n \n \tnfp_dev_stats.ipackets -= hw->eth_stats_base.ipackets;\n \n-\tnfp_dev_stats.ibytes =\n-\t\tnn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS);\n+\tnfp_dev_stats.ibytes = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS);\n \n \tnfp_dev_stats.ibytes -= hw->eth_stats_base.ibytes;\n \n \tnfp_dev_stats.opackets =\n-\t\tnn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES);\n+\t\t\tnn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES);\n \n \tnfp_dev_stats.opackets -= hw->eth_stats_base.opackets;\n \n \tnfp_dev_stats.obytes =\n-\t\tnn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS);\n+\t\t\tnn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS);\n \n \tnfp_dev_stats.obytes -= hw->eth_stats_base.obytes;\n \n \t/* reading general device stats */\n \tnfp_dev_stats.ierrors =\n-\t\tnn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);\n+\t\t\tnn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);\n \n \tnfp_dev_stats.ierrors -= hw->eth_stats_base.ierrors;\n \n \tnfp_dev_stats.oerrors =\n-\t\tnn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS);\n+\t\t\tnn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS);\n \n \tnfp_dev_stats.oerrors -= hw->eth_stats_base.oerrors;\n \n@@ -903,7 
+909,7 @@ nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)\n \tnfp_dev_stats.rx_nombuf = dev->data->rx_mbuf_alloc_failed;\n \n \tnfp_dev_stats.imissed =\n-\t\tnn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS);\n+\t\t\tnn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS);\n \n \tnfp_dev_stats.imissed -= hw->eth_stats_base.imissed;\n \n@@ -933,10 +939,10 @@ nfp_net_stats_reset(struct rte_eth_dev *dev)\n \t\t\tbreak;\n \n \t\thw->eth_stats_base.q_ipackets[i] =\n-\t\t\tnn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i));\n+\t\t\t\tnn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i));\n \n \t\thw->eth_stats_base.q_ibytes[i] =\n-\t\t\tnn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8);\n+\t\t\t\tnn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8);\n \t}\n \n \t/* reading per TX ring stats */\n@@ -945,36 +951,36 @@ nfp_net_stats_reset(struct rte_eth_dev *dev)\n \t\t\tbreak;\n \n \t\thw->eth_stats_base.q_opackets[i] =\n-\t\t\tnn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i));\n+\t\t\t\tnn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i));\n \n \t\thw->eth_stats_base.q_obytes[i] =\n-\t\t\tnn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8);\n+\t\t\t\tnn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8);\n \t}\n \n \thw->eth_stats_base.ipackets =\n-\t\tnn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES);\n+\t\t\tnn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES);\n \n \thw->eth_stats_base.ibytes =\n-\t\tnn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS);\n+\t\t\tnn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS);\n \n \thw->eth_stats_base.opackets =\n-\t\tnn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES);\n+\t\t\tnn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES);\n \n \thw->eth_stats_base.obytes =\n-\t\tnn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS);\n+\t\t\tnn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS);\n \n \t/* reading general device stats */\n \thw->eth_stats_base.ierrors =\n-\t\tnn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);\n+\t\t\tnn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);\n \n \thw->eth_stats_base.oerrors =\n-\t\tnn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS);\n+\t\t\tnn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS);\n \n \t/* RX ring mbuf allocation failures */\n \tdev->data->rx_mbuf_alloc_failed = 0;\n \n \thw->eth_stats_base.imissed =\n-\t\tnn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS);\n+\t\t\tnn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS);\n \n \treturn 0;\n }\n@@ -1237,16 +1243,16 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)\n \n \tif ((hw->cap & NFP_NET_CFG_CTRL_RXCSUM) != 0)\n \t\tdev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |\n-\t\t\t\t\t     RTE_ETH_RX_OFFLOAD_UDP_CKSUM |\n-\t\t\t\t\t     RTE_ETH_RX_OFFLOAD_TCP_CKSUM;\n+\t\t\t\tRTE_ETH_RX_OFFLOAD_UDP_CKSUM |\n+\t\t\t\tRTE_ETH_RX_OFFLOAD_TCP_CKSUM;\n \n \tif ((hw->cap & (NFP_NET_CFG_CTRL_TXVLAN | NFP_NET_CFG_CTRL_TXVLAN_V2)) != 0)\n \t\tdev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;\n \n \tif ((hw->cap & NFP_NET_CFG_CTRL_TXCSUM) != 0)\n \t\tdev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |\n-\t\t\t\t\t     RTE_ETH_TX_OFFLOAD_UDP_CKSUM |\n-\t\t\t\t\t     RTE_ETH_TX_OFFLOAD_TCP_CKSUM;\n+\t\t\t\tRTE_ETH_TX_OFFLOAD_UDP_CKSUM |\n+\t\t\t\tRTE_ETH_TX_OFFLOAD_TCP_CKSUM;\n \n \tif ((hw->cap & NFP_NET_CFG_CTRL_LSO_ANY) != 0) {\n \t\tdev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;\n@@ -1301,21 +1307,24 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)\n \t\tdev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;\n \n \t\tdev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 
|\n-\t\t\t\t\t\t   RTE_ETH_RSS_NONFRAG_IPV4_TCP |\n-\t\t\t\t\t\t   RTE_ETH_RSS_NONFRAG_IPV4_UDP |\n-\t\t\t\t\t\t   RTE_ETH_RSS_NONFRAG_IPV4_SCTP |\n-\t\t\t\t\t\t   RTE_ETH_RSS_IPV6 |\n-\t\t\t\t\t\t   RTE_ETH_RSS_NONFRAG_IPV6_TCP |\n-\t\t\t\t\t\t   RTE_ETH_RSS_NONFRAG_IPV6_UDP |\n-\t\t\t\t\t\t   RTE_ETH_RSS_NONFRAG_IPV6_SCTP;\n+\t\t\t\tRTE_ETH_RSS_NONFRAG_IPV4_TCP |\n+\t\t\t\tRTE_ETH_RSS_NONFRAG_IPV4_UDP |\n+\t\t\t\tRTE_ETH_RSS_NONFRAG_IPV4_SCTP |\n+\t\t\t\tRTE_ETH_RSS_IPV6 |\n+\t\t\t\tRTE_ETH_RSS_NONFRAG_IPV6_TCP |\n+\t\t\t\tRTE_ETH_RSS_NONFRAG_IPV6_UDP |\n+\t\t\t\tRTE_ETH_RSS_NONFRAG_IPV6_SCTP;\n \n \t\tdev_info->reta_size = NFP_NET_CFG_RSS_ITBL_SZ;\n \t\tdev_info->hash_key_size = NFP_NET_CFG_RSS_KEY_SZ;\n \t}\n \n-\tdev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |\n-\t\t\t       RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G |\n-\t\t\t       RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;\n+\tdev_info->speed_capa = RTE_ETH_LINK_SPEED_1G |\n+\t\t\tRTE_ETH_LINK_SPEED_10G |\n+\t\t\tRTE_ETH_LINK_SPEED_25G |\n+\t\t\tRTE_ETH_LINK_SPEED_40G |\n+\t\t\tRTE_ETH_LINK_SPEED_50G |\n+\t\t\tRTE_ETH_LINK_SPEED_100G;\n \n \treturn 0;\n }\n@@ -1384,7 +1393,8 @@ nfp_net_supported_ptypes_get(struct rte_eth_dev *dev)\n }\n \n int\n-nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)\n+nfp_rx_queue_intr_enable(struct rte_eth_dev *dev,\n+\t\tuint16_t queue_id)\n {\n \tstruct rte_pci_device *pci_dev;\n \tstruct nfp_net_hw *hw;\n@@ -1393,19 +1403,19 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)\n \thw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);\n \tpci_dev = RTE_ETH_DEV_TO_PCI(dev);\n \n-\tif (rte_intr_type_get(pci_dev->intr_handle) !=\n-\t\t\t\t\t\t\tRTE_INTR_HANDLE_UIO)\n+\tif (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO)\n \t\tbase = 1;\n \n \t/* Make sure all updates are written before un-masking */\n \trte_wmb();\n \tnn_cfg_writeb(hw, NFP_NET_CFG_ICR(base + queue_id),\n-\t\t      NFP_NET_CFG_ICR_UNMASKED);\n+\t\t\tNFP_NET_CFG_ICR_UNMASKED);\n \treturn 0;\n }\n \n int\n-nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)\n+nfp_rx_queue_intr_disable(struct rte_eth_dev *dev,\n+\t\tuint16_t queue_id)\n {\n \tstruct rte_pci_device *pci_dev;\n \tstruct nfp_net_hw *hw;\n@@ -1414,8 +1424,7 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)\n \thw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);\n \tpci_dev = RTE_ETH_DEV_TO_PCI(dev);\n \n-\tif (rte_intr_type_get(pci_dev->intr_handle) !=\n-\t\t\t\t\t\t\tRTE_INTR_HANDLE_UIO)\n+\tif (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO)\n \t\tbase = 1;\n \n \t/* Make sure all updates are written before un-masking */\n@@ -1433,16 +1442,15 @@ nfp_net_dev_link_status_print(struct rte_eth_dev *dev)\n \trte_eth_linkstatus_get(dev, &link);\n \tif (link.link_status != 0)\n \t\tPMD_DRV_LOG(INFO, \"Port %d: Link Up - speed %u Mbps - %s\",\n-\t\t\t    dev->data->port_id, link.link_speed,\n-\t\t\t    link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX\n-\t\t\t    ? 
\"full-duplex\" : \"half-duplex\");\n+\t\t\t\tdev->data->port_id, link.link_speed,\n+\t\t\t\tlink.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?\n+\t\t\t\t\"full-duplex\" : \"half-duplex\");\n \telse\n-\t\tPMD_DRV_LOG(INFO, \" Port %d: Link Down\",\n-\t\t\t    dev->data->port_id);\n+\t\tPMD_DRV_LOG(INFO, \" Port %d: Link Down\", dev->data->port_id);\n \n \tPMD_DRV_LOG(INFO, \"PCI Address: \" PCI_PRI_FMT,\n-\t\t    pci_dev->addr.domain, pci_dev->addr.bus,\n-\t\t    pci_dev->addr.devid, pci_dev->addr.function);\n+\t\t\tpci_dev->addr.domain, pci_dev->addr.bus,\n+\t\t\tpci_dev->addr.devid, pci_dev->addr.function);\n }\n \n /* Interrupt configuration and handling */\n@@ -1470,7 +1478,7 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev)\n \t\t/* Make sure all updates are written before un-masking */\n \t\trte_wmb();\n \t\tnn_cfg_writeb(hw, NFP_NET_CFG_ICR(NFP_NET_IRQ_LSC_IDX),\n-\t\t\t      NFP_NET_CFG_ICR_UNMASKED);\n+\t\t\t\tNFP_NET_CFG_ICR_UNMASKED);\n \t}\n }\n \n@@ -1523,8 +1531,8 @@ nfp_net_dev_interrupt_handler(void *param)\n \t}\n \n \tif (rte_eal_alarm_set(timeout * 1000,\n-\t\t\t      nfp_net_dev_interrupt_delayed_handler,\n-\t\t\t      (void *)dev) != 0) {\n+\t\t\tnfp_net_dev_interrupt_delayed_handler,\n+\t\t\t(void *)dev) != 0) {\n \t\tPMD_INIT_LOG(ERR, \"Error setting alarm\");\n \t\t/* Unmasking */\n \t\tnfp_net_irq_unmask(dev);\n@@ -1532,7 +1540,8 @@ nfp_net_dev_interrupt_handler(void *param)\n }\n \n int\n-nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)\n+nfp_net_dev_mtu_set(struct rte_eth_dev *dev,\n+\t\tuint16_t mtu)\n {\n \tstruct nfp_net_hw *hw;\n \n@@ -1541,14 +1550,14 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)\n \t/* mtu setting is forbidden if port is started */\n \tif (dev->data->dev_started) {\n \t\tPMD_DRV_LOG(ERR, \"port %d must be stopped before configuration\",\n-\t\t\t    dev->data->port_id);\n+\t\t\t\tdev->data->port_id);\n \t\treturn -EBUSY;\n \t}\n \n \t/* MTU larger than current mbufsize not supported */\n \tif (mtu > hw->flbufsz) {\n \t\tPMD_DRV_LOG(ERR, \"MTU (%u) larger than current mbufsize (%u) not supported\",\n-\t\t\t    mtu, hw->flbufsz);\n+\t\t\t\tmtu, hw->flbufsz);\n \t\treturn -ERANGE;\n \t}\n \n@@ -1561,7 +1570,8 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)\n }\n \n int\n-nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask)\n+nfp_net_vlan_offload_set(struct rte_eth_dev *dev,\n+\t\tint mask)\n {\n \tuint32_t new_ctrl, update;\n \tstruct nfp_net_hw *hw;\n@@ -1606,8 +1616,8 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask)\n \n static int\n nfp_net_rss_reta_write(struct rte_eth_dev *dev,\n-\t\t    struct rte_eth_rss_reta_entry64 *reta_conf,\n-\t\t    uint16_t reta_size)\n+\t\tstruct rte_eth_rss_reta_entry64 *reta_conf,\n+\t\tuint16_t reta_size)\n {\n \tuint32_t reta, mask;\n \tint i, j;\n@@ -1617,8 +1627,8 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,\n \n \tif (reta_size != NFP_NET_CFG_RSS_ITBL_SZ) {\n \t\tPMD_DRV_LOG(ERR, \"The size of hash lookup table configured \"\n-\t\t\t\"(%d) doesn't match the number hardware can supported \"\n-\t\t\t\"(%d)\", reta_size, NFP_NET_CFG_RSS_ITBL_SZ);\n+\t\t\t\t\"(%d) doesn't match the number hardware can supported \"\n+\t\t\t\t\"(%d)\", reta_size, NFP_NET_CFG_RSS_ITBL_SZ);\n \t\treturn -EINVAL;\n \t}\n \n@@ -1648,8 +1658,7 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,\n \t\t\t\treta &= ~(0xFF << (8 * j));\n \t\t\treta |= reta_conf[idx].reta[shift + j] << (8 * j);\n \t\t}\n-\t\tnn_cfg_writel(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) + shift,\n-\t\t\t     
 reta);\n+\t\tnn_cfg_writel(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) + shift, reta);\n \t}\n \treturn 0;\n }\n@@ -1657,8 +1666,8 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,\n /* Update Redirection Table(RETA) of Receive Side Scaling of Ethernet device */\n int\n nfp_net_reta_update(struct rte_eth_dev *dev,\n-\t\t    struct rte_eth_rss_reta_entry64 *reta_conf,\n-\t\t    uint16_t reta_size)\n+\t\tstruct rte_eth_rss_reta_entry64 *reta_conf,\n+\t\tuint16_t reta_size)\n {\n \tstruct nfp_net_hw *hw =\n \t\tNFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);\n@@ -1683,8 +1692,8 @@ nfp_net_reta_update(struct rte_eth_dev *dev,\n  /* Query Redirection Table(RETA) of Receive Side Scaling of Ethernet device. */\n int\n nfp_net_reta_query(struct rte_eth_dev *dev,\n-\t\t   struct rte_eth_rss_reta_entry64 *reta_conf,\n-\t\t   uint16_t reta_size)\n+\t\tstruct rte_eth_rss_reta_entry64 *reta_conf,\n+\t\tuint16_t reta_size)\n {\n \tuint8_t i, j, mask;\n \tint idx, shift;\n@@ -1698,8 +1707,8 @@ nfp_net_reta_query(struct rte_eth_dev *dev,\n \n \tif (reta_size != NFP_NET_CFG_RSS_ITBL_SZ) {\n \t\tPMD_DRV_LOG(ERR, \"The size of hash lookup table configured \"\n-\t\t\t\"(%d) doesn't match the number hardware can supported \"\n-\t\t\t\"(%d)\", reta_size, NFP_NET_CFG_RSS_ITBL_SZ);\n+\t\t\t\t\"(%d) doesn't match the number hardware can supported \"\n+\t\t\t\t\"(%d)\", reta_size, NFP_NET_CFG_RSS_ITBL_SZ);\n \t\treturn -EINVAL;\n \t}\n \n@@ -1716,13 +1725,12 @@ nfp_net_reta_query(struct rte_eth_dev *dev,\n \t\tif (mask == 0)\n \t\t\tcontinue;\n \n-\t\treta = nn_cfg_readl(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) +\n-\t\t\t\t    shift);\n+\t\treta = nn_cfg_readl(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) + shift);\n \t\tfor (j = 0; j < 4; j++) {\n \t\t\tif ((mask & (0x1 << j)) == 0)\n \t\t\t\tcontinue;\n \t\t\treta_conf[idx].reta[shift + j] =\n-\t\t\t\t(uint8_t)((reta >> (8 * j)) & 0xF);\n+\t\t\t\t\t(uint8_t)((reta >> (8 * j)) & 0xF);\n \t\t}\n \t}\n \treturn 0;\n@@ -1730,7 +1738,7 @@ nfp_net_reta_query(struct rte_eth_dev *dev,\n \n static int\n nfp_net_rss_hash_write(struct rte_eth_dev *dev,\n-\t\t\tstruct rte_eth_rss_conf *rss_conf)\n+\t\tstruct rte_eth_rss_conf *rss_conf)\n {\n \tstruct nfp_net_hw *hw;\n \tuint64_t rss_hf;\n@@ -1786,7 +1794,7 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,\n \n int\n nfp_net_rss_hash_update(struct rte_eth_dev *dev,\n-\t\t\tstruct rte_eth_rss_conf *rss_conf)\n+\t\tstruct rte_eth_rss_conf *rss_conf)\n {\n \tuint32_t update;\n \tuint64_t rss_hf;\n@@ -1822,7 +1830,7 @@ nfp_net_rss_hash_update(struct rte_eth_dev *dev,\n \n int\n nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,\n-\t\t\t  struct rte_eth_rss_conf *rss_conf)\n+\t\tstruct rte_eth_rss_conf *rss_conf)\n {\n \tuint64_t rss_hf;\n \tuint32_t cfg_rss_ctrl;\n@@ -1888,7 +1896,7 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev)\n \tint i, j, ret;\n \n \tPMD_DRV_LOG(INFO, \"setting default RSS conf for %u queues\",\n-\t\trx_queues);\n+\t\t\trx_queues);\n \n \tnfp_reta_conf[0].mask = ~0x0;\n \tnfp_reta_conf[1].mask = ~0x0;\n@@ -1984,7 +1992,7 @@ nfp_net_set_vxlan_port(struct nfp_net_hw *hw,\n \n \tfor (i = 0; i < NFP_NET_N_VXLAN_PORTS; i += 2) {\n \t\tnn_cfg_writel(hw, NFP_NET_CFG_VXLAN_PORT + i * sizeof(port),\n-\t\t\t(hw->vxlan_ports[i + 1] << 16) | hw->vxlan_ports[i]);\n+\t\t\t\t(hw->vxlan_ports[i + 1] << 16) | hw->vxlan_ports[i]);\n \t}\n \n \trte_spinlock_lock(&hw->reconfig_lock);\n@@ -2004,7 +2012,8 @@ nfp_net_set_vxlan_port(struct nfp_net_hw *hw,\n  * than 40 bits\n  */\n int\n-nfp_net_check_dma_mask(struct nfp_net_hw *hw, 
char *name)\n+nfp_net_check_dma_mask(struct nfp_net_hw *hw,\n+\t\tchar *name)\n {\n \tif (hw->ver.extend == NFP_NET_CFG_VERSION_DP_NFD3 &&\n \t\t\trte_mem_check_dma_mask(40) != 0) {\n@@ -2052,7 +2061,8 @@ nfp_net_cfg_read_version(struct nfp_net_hw *hw)\n }\n \n static void\n-nfp_net_get_nsp_info(struct nfp_net_hw *hw, char *nsp_version)\n+nfp_net_get_nsp_info(struct nfp_net_hw *hw,\n+\t\tchar *nsp_version)\n {\n \tstruct nfp_nsp *nsp;\n \n@@ -2068,7 +2078,8 @@ nfp_net_get_nsp_info(struct nfp_net_hw *hw, char *nsp_version)\n }\n \n static void\n-nfp_net_get_mip_name(struct nfp_net_hw *hw, char *mip_name)\n+nfp_net_get_mip_name(struct nfp_net_hw *hw,\n+\t\tchar *mip_name)\n {\n \tstruct nfp_mip *mip;\n \n@@ -2082,7 +2093,8 @@ nfp_net_get_mip_name(struct nfp_net_hw *hw, char *mip_name)\n }\n \n static void\n-nfp_net_get_app_name(struct nfp_net_hw *hw, char *app_name)\n+nfp_net_get_app_name(struct nfp_net_hw *hw,\n+\t\tchar *app_name)\n {\n \tswitch (hw->pf_dev->app_fw_id) {\n \tcase NFP_APP_FW_CORE_NIC:\ndiff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h\nindex bc3a948231..e4fd394868 100644\n--- a/drivers/net/nfp/nfp_common.h\n+++ b/drivers/net/nfp/nfp_common.h\n@@ -180,37 +180,47 @@ struct nfp_net_adapter {\n \tstruct nfp_net_hw hw;\n };\n \n-static inline uint8_t nn_readb(volatile const void *addr)\n+static inline uint8_t\n+nn_readb(volatile const void *addr)\n {\n \treturn rte_read8(addr);\n }\n \n-static inline void nn_writeb(uint8_t val, volatile void *addr)\n+static inline void\n+nn_writeb(uint8_t val,\n+\t\tvolatile void *addr)\n {\n \trte_write8(val, addr);\n }\n \n-static inline uint32_t nn_readl(volatile const void *addr)\n+static inline uint32_t\n+nn_readl(volatile const void *addr)\n {\n \treturn rte_read32(addr);\n }\n \n-static inline void nn_writel(uint32_t val, volatile void *addr)\n+static inline void\n+nn_writel(uint32_t val,\n+\t\tvolatile void *addr)\n {\n \trte_write32(val, addr);\n }\n \n-static inline uint16_t nn_readw(volatile const void *addr)\n+static inline uint16_t\n+nn_readw(volatile const void *addr)\n {\n \treturn rte_read16(addr);\n }\n \n-static inline void nn_writew(uint16_t val, volatile void *addr)\n+static inline void\n+nn_writew(uint16_t val,\n+\t\tvolatile void *addr)\n {\n \trte_write16(val, addr);\n }\n \n-static inline uint64_t nn_readq(volatile void *addr)\n+static inline uint64_t\n+nn_readq(volatile void *addr)\n {\n \tconst volatile uint32_t *p = addr;\n \tuint32_t low, high;\n@@ -221,7 +231,9 @@ static inline uint64_t nn_readq(volatile void *addr)\n \treturn low + ((uint64_t)high << 32);\n }\n \n-static inline void nn_writeq(uint64_t val, volatile void *addr)\n+static inline void\n+nn_writeq(uint64_t val,\n+\t\tvolatile void *addr)\n {\n \tnn_writel(val >> 32, (volatile char *)addr + 4);\n \tnn_writel(val, addr);\n@@ -232,49 +244,61 @@ static inline void nn_writeq(uint64_t val, volatile void *addr)\n  * Performs any endian conversion necessary.\n  */\n static inline uint8_t\n-nn_cfg_readb(struct nfp_net_hw *hw, int off)\n+nn_cfg_readb(struct nfp_net_hw *hw,\n+\t\tint off)\n {\n \treturn nn_readb(hw->ctrl_bar + off);\n }\n \n static inline void\n-nn_cfg_writeb(struct nfp_net_hw *hw, int off, uint8_t val)\n+nn_cfg_writeb(struct nfp_net_hw *hw,\n+\t\tint off,\n+\t\tuint8_t val)\n {\n \tnn_writeb(val, hw->ctrl_bar + off);\n }\n \n static inline uint16_t\n-nn_cfg_readw(struct nfp_net_hw *hw, int off)\n+nn_cfg_readw(struct nfp_net_hw *hw,\n+\t\tint off)\n {\n \treturn rte_le_to_cpu_16(nn_readw(hw->ctrl_bar + off));\n }\n \n static 
inline void\n-nn_cfg_writew(struct nfp_net_hw *hw, int off, uint16_t val)\n+nn_cfg_writew(struct nfp_net_hw *hw,\n+\t\tint off,\n+\t\tuint16_t val)\n {\n \tnn_writew(rte_cpu_to_le_16(val), hw->ctrl_bar + off);\n }\n \n static inline uint32_t\n-nn_cfg_readl(struct nfp_net_hw *hw, int off)\n+nn_cfg_readl(struct nfp_net_hw *hw,\n+\t\tint off)\n {\n \treturn rte_le_to_cpu_32(nn_readl(hw->ctrl_bar + off));\n }\n \n static inline void\n-nn_cfg_writel(struct nfp_net_hw *hw, int off, uint32_t val)\n+nn_cfg_writel(struct nfp_net_hw *hw,\n+\t\tint off,\n+\t\tuint32_t val)\n {\n \tnn_writel(rte_cpu_to_le_32(val), hw->ctrl_bar + off);\n }\n \n static inline uint64_t\n-nn_cfg_readq(struct nfp_net_hw *hw, int off)\n+nn_cfg_readq(struct nfp_net_hw *hw,\n+\t\tint off)\n {\n \treturn rte_le_to_cpu_64(nn_readq(hw->ctrl_bar + off));\n }\n \n static inline void\n-nn_cfg_writeq(struct nfp_net_hw *hw, int off, uint64_t val)\n+nn_cfg_writeq(struct nfp_net_hw *hw,\n+\t\tint off,\n+\t\tuint64_t val)\n {\n \tnn_writeq(rte_cpu_to_le_64(val), hw->ctrl_bar + off);\n }\n@@ -286,7 +310,9 @@ nn_cfg_writeq(struct nfp_net_hw *hw, int off, uint64_t val)\n  * @val: Value to add to the queue pointer\n  */\n static inline void\n-nfp_qcp_ptr_add(uint8_t *q, enum nfp_qcp_ptr ptr, uint32_t val)\n+nfp_qcp_ptr_add(uint8_t *q,\n+\t\tenum nfp_qcp_ptr ptr,\n+\t\tuint32_t val)\n {\n \tuint32_t off;\n \n@@ -304,7 +330,8 @@ nfp_qcp_ptr_add(uint8_t *q, enum nfp_qcp_ptr ptr, uint32_t val)\n  * @ptr: Read or Write pointer\n  */\n static inline uint32_t\n-nfp_qcp_read(uint8_t *q, enum nfp_qcp_ptr ptr)\n+nfp_qcp_read(uint8_t *q,\n+\t\tenum nfp_qcp_ptr ptr)\n {\n \tuint32_t off;\n \tuint32_t val;\n@@ -343,12 +370,12 @@ void nfp_net_params_setup(struct nfp_net_hw *hw);\n void nfp_net_write_mac(struct nfp_net_hw *hw, uint8_t *mac);\n int nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr);\n int nfp_configure_rx_interrupt(struct rte_eth_dev *dev,\n-\t\t\t       struct rte_intr_handle *intr_handle);\n+\t\tstruct rte_intr_handle *intr_handle);\n uint32_t nfp_check_offloads(struct rte_eth_dev *dev);\n int nfp_net_promisc_enable(struct rte_eth_dev *dev);\n int nfp_net_promisc_disable(struct rte_eth_dev *dev);\n int nfp_net_link_update(struct rte_eth_dev *dev,\n-\t\t\t__rte_unused int wait_to_complete);\n+\t\t__rte_unused int wait_to_complete);\n int nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats);\n int nfp_net_stats_reset(struct rte_eth_dev *dev);\n uint32_t nfp_net_xstats_size(const struct rte_eth_dev *dev);\n@@ -368,7 +395,7 @@ int nfp_net_xstats_get_by_id(struct rte_eth_dev *dev,\n \t\tunsigned int n);\n int nfp_net_xstats_reset(struct rte_eth_dev *dev);\n int nfp_net_infos_get(struct rte_eth_dev *dev,\n-\t\t      struct rte_eth_dev_info *dev_info);\n+\t\tstruct rte_eth_dev_info *dev_info);\n const uint32_t *nfp_net_supported_ptypes_get(struct rte_eth_dev *dev);\n int nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id);\n int nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id);\n@@ -379,15 +406,15 @@ void nfp_net_dev_interrupt_delayed_handler(void *param);\n int nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);\n int nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask);\n int nfp_net_reta_update(struct rte_eth_dev *dev,\n-\t\t\tstruct rte_eth_rss_reta_entry64 *reta_conf,\n-\t\t\tuint16_t reta_size);\n+\t\tstruct rte_eth_rss_reta_entry64 *reta_conf,\n+\t\tuint16_t reta_size);\n int nfp_net_reta_query(struct rte_eth_dev *dev,\n-\t\t       struct 
rte_eth_rss_reta_entry64 *reta_conf,\n-\t\t       uint16_t reta_size);\n+\t\tstruct rte_eth_rss_reta_entry64 *reta_conf,\n+\t\tuint16_t reta_size);\n int nfp_net_rss_hash_update(struct rte_eth_dev *dev,\n-\t\t\t    struct rte_eth_rss_conf *rss_conf);\n+\t\tstruct rte_eth_rss_conf *rss_conf);\n int nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,\n-\t\t\t      struct rte_eth_rss_conf *rss_conf);\n+\t\tstruct rte_eth_rss_conf *rss_conf);\n int nfp_net_rss_config_default(struct rte_eth_dev *dev);\n void nfp_net_stop_rx_queue(struct rte_eth_dev *dev);\n void nfp_net_close_rx_queue(struct rte_eth_dev *dev);\ndiff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c\nindex 34764a8a32..85a8bf9235 100644\n--- a/drivers/net/nfp/nfp_cpp_bridge.c\n+++ b/drivers/net/nfp/nfp_cpp_bridge.c\n@@ -116,7 +116,8 @@ nfp_enable_cpp_service(struct nfp_pf_dev *pf_dev)\n  * of CPP interface handler configured by the PMD setup.\n  */\n static int\n-nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)\n+nfp_cpp_bridge_serve_write(int sockfd,\n+\t\tstruct nfp_cpp *cpp)\n {\n \tstruct nfp_cpp_area *area;\n \toff_t offset, nfp_offset;\n@@ -126,7 +127,7 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)\n \tint err = 0;\n \n \tPMD_CPP_LOG(DEBUG, \"%s: offset size %zu, count_size: %zu\\n\", __func__,\n-\t\tsizeof(off_t), sizeof(size_t));\n+\t\t\tsizeof(off_t), sizeof(size_t));\n \n \t/* Reading the count param */\n \terr = recv(sockfd, &count, sizeof(off_t), 0);\n@@ -145,21 +146,21 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)\n \tnfp_offset = offset & ((1ull << 40) - 1);\n \n \tPMD_CPP_LOG(DEBUG, \"%s: count %zu and offset %jd\\n\", __func__, count,\n-\t\toffset);\n+\t\t\toffset);\n \tPMD_CPP_LOG(DEBUG, \"%s: cpp_id %08x and nfp_offset %jd\\n\", __func__,\n-\t\tcpp_id, nfp_offset);\n+\t\t\tcpp_id, nfp_offset);\n \n \t/* Adjust length if not aligned */\n \tif (((nfp_offset + (off_t)count - 1) & ~(NFP_CPP_MEMIO_BOUNDARY - 1)) !=\n-\t    (nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) {\n+\t\t\t(nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) {\n \t\tcurlen = NFP_CPP_MEMIO_BOUNDARY -\n-\t\t\t(nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1));\n+\t\t\t\t(nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1));\n \t}\n \n \twhile (count > 0) {\n \t\t/* configure a CPP PCIe2CPP BAR for mapping the CPP target */\n \t\tarea = nfp_cpp_area_alloc_with_name(cpp, cpp_id, \"nfp.cdev\",\n-\t\t\t\t\t\t    nfp_offset, curlen);\n+\t\t\t\tnfp_offset, curlen);\n \t\tif (area == NULL) {\n \t\t\tPMD_CPP_LOG(ERR, \"area alloc fail\");\n \t\t\treturn -EIO;\n@@ -179,12 +180,11 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)\n \t\t\t\tlen = sizeof(tmpbuf);\n \n \t\t\tPMD_CPP_LOG(DEBUG, \"%s: Receive %u of %zu\\n\", __func__,\n-\t\t\t\t\t   len, count);\n+\t\t\t\t\tlen, count);\n \t\t\terr = recv(sockfd, tmpbuf, len, MSG_WAITALL);\n \t\t\tif (err != (int)len) {\n-\t\t\t\tPMD_CPP_LOG(ERR,\n-\t\t\t\t\t\"error when receiving, %d of %zu\",\n-\t\t\t\t\terr, count);\n+\t\t\t\tPMD_CPP_LOG(ERR, \"error when receiving, %d of %zu\",\n+\t\t\t\t\t\terr, count);\n \t\t\t\tnfp_cpp_area_release(area);\n \t\t\t\tnfp_cpp_area_free(area);\n \t\t\t\treturn -EIO;\n@@ -204,7 +204,7 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)\n \n \t\tcount -= pos;\n \t\tcurlen = (count > NFP_CPP_MEMIO_BOUNDARY) ?\n-\t\t\t NFP_CPP_MEMIO_BOUNDARY : count;\n+\t\t\t\tNFP_CPP_MEMIO_BOUNDARY : count;\n \t}\n \n \treturn 0;\n@@ -217,7 +217,8 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)\n  * data is sent 
to the requester using the same socket.\n  */\n static int\n-nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)\n+nfp_cpp_bridge_serve_read(int sockfd,\n+\t\tstruct nfp_cpp *cpp)\n {\n \tstruct nfp_cpp_area *area;\n \toff_t offset, nfp_offset;\n@@ -227,7 +228,7 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)\n \tint err = 0;\n \n \tPMD_CPP_LOG(DEBUG, \"%s: offset size %zu, count_size: %zu\\n\", __func__,\n-\t\tsizeof(off_t), sizeof(size_t));\n+\t\t\tsizeof(off_t), sizeof(size_t));\n \n \t/* Reading the count param */\n \terr = recv(sockfd, &count, sizeof(off_t), 0);\n@@ -246,20 +247,20 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)\n \tnfp_offset = offset & ((1ull << 40) - 1);\n \n \tPMD_CPP_LOG(DEBUG, \"%s: count %zu and offset %jd\\n\", __func__, count,\n-\t\t\t   offset);\n+\t\t\toffset);\n \tPMD_CPP_LOG(DEBUG, \"%s: cpp_id %08x and nfp_offset %jd\\n\", __func__,\n-\t\t\t   cpp_id, nfp_offset);\n+\t\t\tcpp_id, nfp_offset);\n \n \t/* Adjust length if not aligned */\n \tif (((nfp_offset + (off_t)count - 1) & ~(NFP_CPP_MEMIO_BOUNDARY - 1)) !=\n-\t    (nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) {\n+\t\t\t(nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) {\n \t\tcurlen = NFP_CPP_MEMIO_BOUNDARY -\n-\t\t\t(nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1));\n+\t\t\t\t(nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1));\n \t}\n \n \twhile (count > 0) {\n \t\tarea = nfp_cpp_area_alloc_with_name(cpp, cpp_id, \"nfp.cdev\",\n-\t\t\t\t\t\t    nfp_offset, curlen);\n+\t\t\t\tnfp_offset, curlen);\n \t\tif (area == NULL) {\n \t\t\tPMD_CPP_LOG(ERR, \"area alloc failed\");\n \t\t\treturn -EIO;\n@@ -285,13 +286,12 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)\n \t\t\t\treturn -EIO;\n \t\t\t}\n \t\t\tPMD_CPP_LOG(DEBUG, \"%s: sending %u of %zu\\n\", __func__,\n-\t\t\t\t\t   len, count);\n+\t\t\t\t\tlen, count);\n \n \t\t\terr = send(sockfd, tmpbuf, len, 0);\n \t\t\tif (err != (int)len) {\n-\t\t\t\tPMD_CPP_LOG(ERR,\n-\t\t\t\t\t\"error when sending: %d of %zu\",\n-\t\t\t\t\terr, count);\n+\t\t\t\tPMD_CPP_LOG(ERR, \"error when sending: %d of %zu\",\n+\t\t\t\t\t\terr, count);\n \t\t\t\tnfp_cpp_area_release(area);\n \t\t\t\tnfp_cpp_area_free(area);\n \t\t\t\treturn -EIO;\n@@ -304,7 +304,7 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)\n \n \t\tcount -= pos;\n \t\tcurlen = (count > NFP_CPP_MEMIO_BOUNDARY) ?\n-\t\t\tNFP_CPP_MEMIO_BOUNDARY : count;\n+\t\t\t\tNFP_CPP_MEMIO_BOUNDARY : count;\n \t}\n \treturn 0;\n }\n@@ -316,7 +316,8 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)\n  * does not require any CPP access at all.\n  */\n static int\n-nfp_cpp_bridge_serve_ioctl(int sockfd, struct nfp_cpp *cpp)\n+nfp_cpp_bridge_serve_ioctl(int sockfd,\n+\t\tstruct nfp_cpp *cpp)\n {\n \tuint32_t cmd, ident_size, tmp;\n \tint err;\n@@ -395,7 +396,7 @@ nfp_cpp_bridge_service_func(void *args)\n \tstrcpy(address.sa_data, \"/tmp/nfp_cpp\");\n \n \tret = bind(sockfd, (const struct sockaddr *)&address,\n-\t\t   sizeof(struct sockaddr));\n+\t\t\tsizeof(struct sockaddr));\n \tif (ret < 0) {\n \t\tPMD_CPP_LOG(ERR, \"bind error (%d). 
Service failed\", errno);\n \t\tclose(sockfd);\n@@ -426,8 +427,7 @@ nfp_cpp_bridge_service_func(void *args)\n \t\twhile (1) {\n \t\t\tret = recv(datafd, &op, 4, 0);\n \t\t\tif (ret <= 0) {\n-\t\t\t\tPMD_CPP_LOG(DEBUG, \"%s: socket close\\n\",\n-\t\t\t\t\t\t   __func__);\n+\t\t\t\tPMD_CPP_LOG(DEBUG, \"%s: socket close\\n\", __func__);\n \t\t\t\tbreak;\n \t\t\t}\n \ndiff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c\nindex 12feec8eb4..65473d87e8 100644\n--- a/drivers/net/nfp/nfp_ethdev.c\n+++ b/drivers/net/nfp/nfp_ethdev.c\n@@ -22,7 +22,8 @@\n #include \"nfp_logs.h\"\n \n static int\n-nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic, int port)\n+nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic,\n+\t\tint port)\n {\n \tstruct nfp_eth_table *nfp_eth_table;\n \tstruct nfp_net_hw *hw = NULL;\n@@ -70,21 +71,20 @@ nfp_net_start(struct rte_eth_dev *dev)\n \tif (dev->data->dev_conf.intr_conf.rxq != 0) {\n \t\tif (app_fw_nic->multiport) {\n \t\t\tPMD_INIT_LOG(ERR, \"PMD rx interrupt is not supported \"\n-\t\t\t\t\t  \"with NFP multiport PF\");\n+\t\t\t\t\t\"with NFP multiport PF\");\n \t\t\t\treturn -EINVAL;\n \t\t}\n-\t\tif (rte_intr_type_get(intr_handle) ==\n-\t\t\t\t\t\tRTE_INTR_HANDLE_UIO) {\n+\t\tif (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {\n \t\t\t/*\n \t\t\t * Better not to share LSC with RX interrupts.\n \t\t\t * Unregistering LSC interrupt handler\n \t\t\t */\n \t\t\trte_intr_callback_unregister(pci_dev->intr_handle,\n-\t\t\t\tnfp_net_dev_interrupt_handler, (void *)dev);\n+\t\t\t\t\tnfp_net_dev_interrupt_handler, (void *)dev);\n \n \t\t\tif (dev->data->nb_rx_queues > 1) {\n \t\t\t\tPMD_INIT_LOG(ERR, \"PMD rx interrupt only \"\n-\t\t\t\t\t     \"supports 1 queue with UIO\");\n+\t\t\t\t\t\t\"supports 1 queue with UIO\");\n \t\t\t\treturn -EIO;\n \t\t\t}\n \t\t}\n@@ -162,8 +162,7 @@ nfp_net_start(struct rte_eth_dev *dev)\n \t\t/* Configure the physical port up */\n \t\tnfp_eth_set_configured(hw->cpp, hw->nfp_idx, 1);\n \telse\n-\t\tnfp_eth_set_configured(dev->process_private,\n-\t\t\t\t       hw->nfp_idx, 1);\n+\t\tnfp_eth_set_configured(dev->process_private, hw->nfp_idx, 1);\n \n \thw->ctrl = new_ctrl;\n \n@@ -209,8 +208,7 @@ nfp_net_stop(struct rte_eth_dev *dev)\n \t\t/* Configure the physical port down */\n \t\tnfp_eth_set_configured(hw->cpp, hw->nfp_idx, 0);\n \telse\n-\t\tnfp_eth_set_configured(dev->process_private,\n-\t\t\t\t       hw->nfp_idx, 0);\n+\t\tnfp_eth_set_configured(dev->process_private, hw->nfp_idx, 0);\n \n \treturn 0;\n }\n@@ -229,8 +227,7 @@ nfp_net_set_link_up(struct rte_eth_dev *dev)\n \t\t/* Configure the physical port down */\n \t\treturn nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 1);\n \telse\n-\t\treturn nfp_eth_set_configured(dev->process_private,\n-\t\t\t\t\t      hw->nfp_idx, 1);\n+\t\treturn nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 1);\n }\n \n /* Set the link down. */\n@@ -247,8 +244,7 @@ nfp_net_set_link_down(struct rte_eth_dev *dev)\n \t\t/* Configure the physical port down */\n \t\treturn nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 0);\n \telse\n-\t\treturn nfp_eth_set_configured(dev->process_private,\n-\t\t\t\t\t      hw->nfp_idx, 0);\n+\t\treturn nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 0);\n }\n \n /* Reset and stop device. The device can not be restarted. 
*/\n@@ -287,8 +283,7 @@ nfp_net_close(struct rte_eth_dev *dev)\n \tnfp_ipsec_uninit(dev);\n \n \t/* Cancel possible impending LSC work here before releasing the port*/\n-\trte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler,\n-\t\t\t     (void *)dev);\n+\trte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler, (void *)dev);\n \n \t/* Only free PF resources after all physical ports have been closed */\n \t/* Mark this port as unused and free device priv resources*/\n@@ -525,8 +520,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)\n \n \thw->ctrl_bar = pci_dev->mem_resource[0].addr;\n \tif (hw->ctrl_bar == NULL) {\n-\t\tPMD_DRV_LOG(ERR,\n-\t\t\t\"hw->ctrl_bar is NULL. BAR0 not configured\");\n+\t\tPMD_DRV_LOG(ERR, \"hw->ctrl_bar is NULL. BAR0 not configured\");\n \t\treturn -ENODEV;\n \t}\n \n@@ -592,7 +586,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)\n \teth_dev->data->dev_private = hw;\n \n \tPMD_INIT_LOG(DEBUG, \"ctrl_bar: %p, tx_bar: %p, rx_bar: %p\",\n-\t\t     hw->ctrl_bar, hw->tx_bar, hw->rx_bar);\n+\t\t\thw->ctrl_bar, hw->tx_bar, hw->rx_bar);\n \n \tnfp_net_cfg_queue_setup(hw);\n \thw->mtu = RTE_ETHER_MTU;\n@@ -607,8 +601,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)\n \trte_spinlock_init(&hw->reconfig_lock);\n \n \t/* Allocating memory for mac addr */\n-\teth_dev->data->mac_addrs = rte_zmalloc(\"mac_addr\",\n-\t\t\t\t\t       RTE_ETHER_ADDR_LEN, 0);\n+\teth_dev->data->mac_addrs = rte_zmalloc(\"mac_addr\", RTE_ETHER_ADDR_LEN, 0);\n \tif (eth_dev->data->mac_addrs == NULL) {\n \t\tPMD_INIT_LOG(ERR, \"Failed to space for MAC address\");\n \t\treturn -ENOMEM;\n@@ -634,10 +627,10 @@ nfp_net_init(struct rte_eth_dev *eth_dev)\n \teth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;\n \n \tPMD_INIT_LOG(INFO, \"port %d VendorID=0x%x DeviceID=0x%x \"\n-\t\t     \"mac=\" RTE_ETHER_ADDR_PRT_FMT,\n-\t\t     eth_dev->data->port_id, pci_dev->id.vendor_id,\n-\t\t     pci_dev->id.device_id,\n-\t\t     RTE_ETHER_ADDR_BYTES(&hw->mac_addr));\n+\t\t\t\"mac=\" RTE_ETHER_ADDR_PRT_FMT,\n+\t\t\teth_dev->data->port_id, pci_dev->id.vendor_id,\n+\t\t\tpci_dev->id.device_id,\n+\t\t\tRTE_ETHER_ADDR_BYTES(&hw->mac_addr));\n \n \t/* Registering LSC interrupt handler */\n \trte_intr_callback_register(pci_dev->intr_handle,\n@@ -653,7 +646,9 @@ nfp_net_init(struct rte_eth_dev *eth_dev)\n #define DEFAULT_FW_PATH       \"/lib/firmware/netronome\"\n \n static int\n-nfp_fw_upload(struct rte_pci_device *dev, struct nfp_nsp *nsp, char *card)\n+nfp_fw_upload(struct rte_pci_device *dev,\n+\t\tstruct nfp_nsp *nsp,\n+\t\tchar *card)\n {\n \tstruct nfp_cpp *cpp = nfp_nsp_cpp(nsp);\n \tvoid *fw_buf;\n@@ -675,11 +670,10 @@ nfp_fw_upload(struct rte_pci_device *dev, struct nfp_nsp *nsp, char *card)\n \t/* First try to find a firmware image specific for this device */\n \tsnprintf(serial, sizeof(serial),\n \t\t\t\"serial-%02x-%02x-%02x-%02x-%02x-%02x-%02x-%02x\",\n-\t\tcpp_serial[0], cpp_serial[1], cpp_serial[2], cpp_serial[3],\n-\t\tcpp_serial[4], cpp_serial[5], interface >> 8, interface & 0xff);\n+\t\t\tcpp_serial[0], cpp_serial[1], cpp_serial[2], cpp_serial[3],\n+\t\t\tcpp_serial[4], cpp_serial[5], interface >> 8, interface & 0xff);\n \n-\tsnprintf(fw_name, sizeof(fw_name), \"%s/%s.nffw\", DEFAULT_FW_PATH,\n-\t\t\tserial);\n+\tsnprintf(fw_name, sizeof(fw_name), \"%s/%s.nffw\", DEFAULT_FW_PATH, serial);\n \n \tPMD_DRV_LOG(DEBUG, \"Trying with fw file: %s\", fw_name);\n \tif (rte_firmware_read(fw_name, &fw_buf, &fsize) == 0)\n@@ -703,7 +697,7 @@ nfp_fw_upload(struct rte_pci_device *dev, struct nfp_nsp *nsp, char 
*card)\n \n load_fw:\n \tPMD_DRV_LOG(INFO, \"Firmware file found at %s with size: %zu\",\n-\t\tfw_name, fsize);\n+\t\t\tfw_name, fsize);\n \tPMD_DRV_LOG(INFO, \"Uploading the firmware ...\");\n \tnfp_nsp_load_fw(nsp, fw_buf, fsize);\n \tPMD_DRV_LOG(INFO, \"Done\");\n@@ -737,7 +731,7 @@ nfp_fw_setup(struct rte_pci_device *dev,\n \n \tif (nfp_eth_table->count == 0 || nfp_eth_table->count > 8) {\n \t\tPMD_DRV_LOG(ERR, \"NFP ethernet table reports wrong ports: %u\",\n-\t\t\tnfp_eth_table->count);\n+\t\t\t\tnfp_eth_table->count);\n \t\treturn -EIO;\n \t}\n \n@@ -829,7 +823,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,\n \tnuma_node = rte_socket_id();\n \tfor (i = 0; i < app_fw_nic->total_phyports; i++) {\n \t\tsnprintf(port_name, sizeof(port_name), \"%s_port%d\",\n-\t\t\t pf_dev->pci_dev->device.name, i);\n+\t\t\t\tpf_dev->pci_dev->device.name, i);\n \n \t\t/* Allocate a eth_dev for this phyport */\n \t\teth_dev = rte_eth_dev_allocate(port_name);\n@@ -839,8 +833,8 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,\n \t\t}\n \n \t\t/* Allocate memory for this phyport */\n-\t\teth_dev->data->dev_private =\n-\t\t\trte_zmalloc_socket(port_name, sizeof(struct nfp_net_hw),\n+\t\teth_dev->data->dev_private = rte_zmalloc_socket(port_name,\n+\t\t\t\tsizeof(struct nfp_net_hw),\n \t\t\t\tRTE_CACHE_LINE_SIZE, numa_node);\n \t\tif (eth_dev->data->dev_private == NULL) {\n \t\t\tret = -ENOMEM;\n@@ -961,8 +955,7 @@ nfp_pf_init(struct rte_pci_device *pci_dev)\n \t/* Now the symbol table should be there */\n \tsym_tbl = nfp_rtsym_table_read(cpp);\n \tif (sym_tbl == NULL) {\n-\t\tPMD_INIT_LOG(ERR, \"Something is wrong with the firmware\"\n-\t\t\t\t\" symbol table\");\n+\t\tPMD_INIT_LOG(ERR, \"Something is wrong with the firmware symbol table\");\n \t\tret = -EIO;\n \t\tgoto eth_table_cleanup;\n \t}\n@@ -1144,8 +1137,7 @@ nfp_pf_secondary_init(struct rte_pci_device *pci_dev)\n \t */\n \tsym_tbl = nfp_rtsym_table_read(cpp);\n \tif (sym_tbl == NULL) {\n-\t\tPMD_INIT_LOG(ERR, \"Something is wrong with the firmware\"\n-\t\t\t\t\" symbol table\");\n+\t\tPMD_INIT_LOG(ERR, \"Something is wrong with the firmware symbol table\");\n \t\treturn -EIO;\n \t}\n \n@@ -1198,27 +1190,27 @@ nfp_pf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,\n static const struct rte_pci_id pci_id_nfp_pf_net_map[] = {\n \t{\n \t\tRTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME,\n-\t\t\t       PCI_DEVICE_ID_NFP3800_PF_NIC)\n+\t\t\t\tPCI_DEVICE_ID_NFP3800_PF_NIC)\n \t},\n \t{\n \t\tRTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME,\n-\t\t\t       PCI_DEVICE_ID_NFP4000_PF_NIC)\n+\t\t\t\tPCI_DEVICE_ID_NFP4000_PF_NIC)\n \t},\n \t{\n \t\tRTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME,\n-\t\t\t       PCI_DEVICE_ID_NFP6000_PF_NIC)\n+\t\t\t\tPCI_DEVICE_ID_NFP6000_PF_NIC)\n \t},\n \t{\n \t\tRTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE,\n-\t\t\t       PCI_DEVICE_ID_NFP3800_PF_NIC)\n+\t\t\t\tPCI_DEVICE_ID_NFP3800_PF_NIC)\n \t},\n \t{\n \t\tRTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE,\n-\t\t\t       PCI_DEVICE_ID_NFP4000_PF_NIC)\n+\t\t\t\tPCI_DEVICE_ID_NFP4000_PF_NIC)\n \t},\n \t{\n \t\tRTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE,\n-\t\t\t       PCI_DEVICE_ID_NFP6000_PF_NIC)\n+\t\t\t\tPCI_DEVICE_ID_NFP6000_PF_NIC)\n \t},\n \t{\n \t\t.vendor_id = 0,\ndiff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c\nindex c8d6b0461b..ac6a10685d 100644\n--- a/drivers/net/nfp/nfp_ethdev_vf.c\n+++ b/drivers/net/nfp/nfp_ethdev_vf.c\n@@ -50,18 +50,17 @@ nfp_netvf_start(struct rte_eth_dev *dev)\n \n \t/* check and configure queue intr-vector mapping */\n \tif 
(dev->data->dev_conf.intr_conf.rxq != 0) {\n-\t\tif (rte_intr_type_get(intr_handle) ==\n-\t\t\t\t\t\tRTE_INTR_HANDLE_UIO) {\n+\t\tif (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {\n \t\t\t/*\n \t\t\t * Better not to share LSC with RX interrupts.\n \t\t\t * Unregistering LSC interrupt handler\n \t\t\t */\n \t\t\trte_intr_callback_unregister(pci_dev->intr_handle,\n-\t\t\t\tnfp_net_dev_interrupt_handler, (void *)dev);\n+\t\t\t\t\tnfp_net_dev_interrupt_handler, (void *)dev);\n \n \t\t\tif (dev->data->nb_rx_queues > 1) {\n \t\t\t\tPMD_INIT_LOG(ERR, \"PMD rx interrupt only \"\n-\t\t\t\t\t     \"supports 1 queue with UIO\");\n+\t\t\t\t\t\t\"supports 1 queue with UIO\");\n \t\t\t\treturn -EIO;\n \t\t\t}\n \t\t}\n@@ -190,12 +189,10 @@ nfp_netvf_close(struct rte_eth_dev *dev)\n \n \t/* unregister callback func from eal lib */\n \trte_intr_callback_unregister(pci_dev->intr_handle,\n-\t\t\t\t     nfp_net_dev_interrupt_handler,\n-\t\t\t\t     (void *)dev);\n+\t\t\tnfp_net_dev_interrupt_handler, (void *)dev);\n \n \t/* Cancel possible impending LSC work here before releasing the port*/\n-\trte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler,\n-\t\t\t     (void *)dev);\n+\trte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler, (void *)dev);\n \n \t/*\n \t * The ixgbe PMD disables the pcie master on the\n@@ -282,8 +279,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)\n \n \thw->ctrl_bar = pci_dev->mem_resource[0].addr;\n \tif (hw->ctrl_bar == NULL) {\n-\t\tPMD_DRV_LOG(ERR,\n-\t\t\t\"hw->ctrl_bar is NULL. BAR0 not configured\");\n+\t\tPMD_DRV_LOG(ERR, \"hw->ctrl_bar is NULL. BAR0 not configured\");\n \t\treturn -ENODEV;\n \t}\n \n@@ -301,8 +297,8 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)\n \n \trte_eth_copy_pci_info(eth_dev, pci_dev);\n \n-\thw->eth_xstats_base = rte_malloc(\"rte_eth_xstat\", sizeof(struct rte_eth_xstat) *\n-\t\t\tnfp_net_xstats_size(eth_dev), 0);\n+\thw->eth_xstats_base = rte_malloc(\"rte_eth_xstat\",\n+\t\t\tsizeof(struct rte_eth_xstat) * nfp_net_xstats_size(eth_dev), 0);\n \tif (hw->eth_xstats_base == NULL) {\n \t\tPMD_INIT_LOG(ERR, \"no memory for xstats base values on device %s!\",\n \t\t\t\tpci_dev->device.name);\n@@ -318,13 +314,11 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)\n \tPMD_INIT_LOG(DEBUG, \"tx_bar_off: 0x%\" PRIx64 \"\", tx_bar_off);\n \tPMD_INIT_LOG(DEBUG, \"rx_bar_off: 0x%\" PRIx64 \"\", rx_bar_off);\n \n-\thw->tx_bar = (uint8_t *)pci_dev->mem_resource[2].addr +\n-\t\t     tx_bar_off;\n-\thw->rx_bar = (uint8_t *)pci_dev->mem_resource[2].addr +\n-\t\t     rx_bar_off;\n+\thw->tx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + tx_bar_off;\n+\thw->rx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + rx_bar_off;\n \n \tPMD_INIT_LOG(DEBUG, \"ctrl_bar: %p, tx_bar: %p, rx_bar: %p\",\n-\t\t     hw->ctrl_bar, hw->tx_bar, hw->rx_bar);\n+\t\t\thw->ctrl_bar, hw->tx_bar, hw->rx_bar);\n \n \tnfp_net_cfg_queue_setup(hw);\n \thw->mtu = RTE_ETHER_MTU;\n@@ -339,8 +333,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)\n \trte_spinlock_init(&hw->reconfig_lock);\n \n \t/* Allocating memory for mac addr */\n-\teth_dev->data->mac_addrs = rte_zmalloc(\"mac_addr\",\n-\t\t\t\t\t       RTE_ETHER_ADDR_LEN, 0);\n+\teth_dev->data->mac_addrs = rte_zmalloc(\"mac_addr\", RTE_ETHER_ADDR_LEN, 0);\n \tif (eth_dev->data->mac_addrs == NULL) {\n \t\tPMD_INIT_LOG(ERR, \"Failed to space for MAC address\");\n \t\terr = -ENOMEM;\n@@ -351,8 +344,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)\n \n \ttmp_ether_addr = &hw->mac_addr;\n \tif 
(rte_is_valid_assigned_ether_addr(tmp_ether_addr) == 0) {\n-\t\tPMD_INIT_LOG(INFO, \"Using random mac address for port %d\",\n-\t\t\t\t   port);\n+\t\tPMD_INIT_LOG(INFO, \"Using random mac address for port %d\", port);\n \t\t/* Using random mac addresses for VFs */\n \t\trte_eth_random_addr(&hw->mac_addr.addr_bytes[0]);\n \t\tnfp_net_write_mac(hw, &hw->mac_addr.addr_bytes[0]);\n@@ -367,16 +359,15 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)\n \teth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;\n \n \tPMD_INIT_LOG(INFO, \"port %d VendorID=0x%x DeviceID=0x%x \"\n-\t\t     \"mac=\" RTE_ETHER_ADDR_PRT_FMT,\n-\t\t     eth_dev->data->port_id, pci_dev->id.vendor_id,\n-\t\t     pci_dev->id.device_id,\n-\t\t     RTE_ETHER_ADDR_BYTES(&hw->mac_addr));\n+\t\t\t\"mac=\" RTE_ETHER_ADDR_PRT_FMT,\n+\t\t\teth_dev->data->port_id, pci_dev->id.vendor_id,\n+\t\t\tpci_dev->id.device_id,\n+\t\t\tRTE_ETHER_ADDR_BYTES(&hw->mac_addr));\n \n \tif (rte_eal_process_type() == RTE_PROC_PRIMARY) {\n \t\t/* Registering LSC interrupt handler */\n \t\trte_intr_callback_register(pci_dev->intr_handle,\n-\t\t\t\t\t   nfp_net_dev_interrupt_handler,\n-\t\t\t\t\t   (void *)eth_dev);\n+\t\t\t\tnfp_net_dev_interrupt_handler, (void *)eth_dev);\n \t\t/* Telling the firmware about the LSC interrupt entry */\n \t\tnn_cfg_writeb(hw, NFP_NET_CFG_LSC, NFP_NET_IRQ_LSC_IDX);\n \t\t/* Recording current stats counters values */\n@@ -394,39 +385,42 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)\n static const struct rte_pci_id pci_id_nfp_vf_net_map[] = {\n \t{\n \t\tRTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME,\n-\t\t\t       PCI_DEVICE_ID_NFP3800_VF_NIC)\n+\t\t\t\tPCI_DEVICE_ID_NFP3800_VF_NIC)\n \t},\n \t{\n \t\tRTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME,\n-\t\t\t       PCI_DEVICE_ID_NFP6000_VF_NIC)\n+\t\t\t\tPCI_DEVICE_ID_NFP6000_VF_NIC)\n \t},\n \t{\n \t\tRTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE,\n-\t\t\t       PCI_DEVICE_ID_NFP3800_VF_NIC)\n+\t\t\t\tPCI_DEVICE_ID_NFP3800_VF_NIC)\n \t},\n \t{\n \t\tRTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE,\n-\t\t\t       PCI_DEVICE_ID_NFP6000_VF_NIC)\n+\t\t\t\tPCI_DEVICE_ID_NFP6000_VF_NIC)\n \t},\n \t{\n \t\t.vendor_id = 0,\n \t},\n };\n \n-static int nfp_vf_pci_uninit(struct rte_eth_dev *eth_dev)\n+static int\n+nfp_vf_pci_uninit(struct rte_eth_dev *eth_dev)\n {\n \t/* VF cleanup, just free private port data */\n \treturn nfp_netvf_close(eth_dev);\n }\n \n-static int eth_nfp_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,\n-\tstruct rte_pci_device *pci_dev)\n+static int\n+eth_nfp_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,\n+\t\tstruct rte_pci_device *pci_dev)\n {\n \treturn rte_eth_dev_pci_generic_probe(pci_dev,\n-\t\tsizeof(struct nfp_net_adapter), nfp_netvf_init);\n+\t\t\tsizeof(struct nfp_net_adapter), nfp_netvf_init);\n }\n \n-static int eth_nfp_vf_pci_remove(struct rte_pci_device *pci_dev)\n+static int\n+eth_nfp_vf_pci_remove(struct rte_pci_device *pci_dev)\n {\n \treturn rte_eth_dev_pci_generic_remove(pci_dev, nfp_vf_pci_uninit);\n }\ndiff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c\nindex bdbc92180d..156b9599db 100644\n--- a/drivers/net/nfp/nfp_flow.c\n+++ b/drivers/net/nfp/nfp_flow.c\n@@ -166,7 +166,8 @@ nfp_flow_dev_to_priv(struct rte_eth_dev *dev)\n }\n \n static int\n-nfp_mask_id_alloc(struct nfp_flow_priv *priv, uint8_t *mask_id)\n+nfp_mask_id_alloc(struct nfp_flow_priv *priv,\n+\t\tuint8_t *mask_id)\n {\n \tuint8_t temp_id;\n \tuint8_t freed_id;\n@@ -198,7 +199,8 @@ nfp_mask_id_alloc(struct nfp_flow_priv *priv, uint8_t *mask_id)\n }\n \n static 
int\n-nfp_mask_id_free(struct nfp_flow_priv *priv, uint8_t mask_id)\n+nfp_mask_id_free(struct nfp_flow_priv *priv,\n+\t\tuint8_t mask_id)\n {\n \tstruct circ_buf *ring;\n \n@@ -671,7 +673,8 @@ nfp_tun_check_ip_off_del(struct nfp_flower_representor *repr,\n }\n \n static void\n-nfp_flower_compile_meta_tci(char *mbuf_off, struct nfp_fl_key_ls *key_layer)\n+nfp_flower_compile_meta_tci(char *mbuf_off,\n+\t\tstruct nfp_fl_key_ls *key_layer)\n {\n \tstruct nfp_flower_meta_tci *tci_meta;\n \n@@ -682,7 +685,8 @@ nfp_flower_compile_meta_tci(char *mbuf_off, struct nfp_fl_key_ls *key_layer)\n }\n \n static void\n-nfp_flower_update_meta_tci(char *exact, uint8_t mask_id)\n+nfp_flower_update_meta_tci(char *exact,\n+\t\tuint8_t mask_id)\n {\n \tstruct nfp_flower_meta_tci *meta_tci;\n \n@@ -691,7 +695,8 @@ nfp_flower_update_meta_tci(char *exact, uint8_t mask_id)\n }\n \n static void\n-nfp_flower_compile_ext_meta(char *mbuf_off, struct nfp_fl_key_ls *key_layer)\n+nfp_flower_compile_ext_meta(char *mbuf_off,\n+\t\tstruct nfp_fl_key_ls *key_layer)\n {\n \tstruct nfp_flower_ext_meta *ext_meta;\n \n@@ -1400,14 +1405,14 @@ nfp_flow_merge_tcp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,\n \tmeta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;\n \tif ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) {\n \t\tipv4  = (struct nfp_flower_ipv4 *)\n-\t\t\t(*mbuf_off - sizeof(struct nfp_flower_ipv4));\n+\t\t\t\t(*mbuf_off - sizeof(struct nfp_flower_ipv4));\n \t\tports = (struct nfp_flower_tp_ports *)\n-\t\t\t((char *)ipv4 - sizeof(struct nfp_flower_tp_ports));\n+\t\t\t\t((char *)ipv4 - sizeof(struct nfp_flower_tp_ports));\n \t} else { /* IPv6 */\n \t\tipv6  = (struct nfp_flower_ipv6 *)\n-\t\t\t(*mbuf_off - sizeof(struct nfp_flower_ipv6));\n+\t\t\t\t(*mbuf_off - sizeof(struct nfp_flower_ipv6));\n \t\tports = (struct nfp_flower_tp_ports *)\n-\t\t\t((char *)ipv6 - sizeof(struct nfp_flower_tp_ports));\n+\t\t\t\t((char *)ipv6 - sizeof(struct nfp_flower_tp_ports));\n \t}\n \n \tmask = item->mask ? 
item->mask : proc->mask_default;\n@@ -1478,10 +1483,10 @@ nfp_flow_merge_udp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,\n \tmeta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;\n \tif ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) {\n \t\tports_off = *mbuf_off - sizeof(struct nfp_flower_ipv4) -\n-\t\t\tsizeof(struct nfp_flower_tp_ports);\n+\t\t\t\tsizeof(struct nfp_flower_tp_ports);\n \t} else {/* IPv6 */\n \t\tports_off = *mbuf_off - sizeof(struct nfp_flower_ipv6) -\n-\t\t\tsizeof(struct nfp_flower_tp_ports);\n+\t\t\t\tsizeof(struct nfp_flower_tp_ports);\n \t}\n \tports = (struct nfp_flower_tp_ports *)ports_off;\n \n@@ -1521,10 +1526,10 @@ nfp_flow_merge_sctp(__rte_unused struct nfp_app_fw_flower *app_fw_flower,\n \tmeta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;\n \tif ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) {\n \t\tports_off = *mbuf_off - sizeof(struct nfp_flower_ipv4) -\n-\t\t\tsizeof(struct nfp_flower_tp_ports);\n+\t\t\t\tsizeof(struct nfp_flower_tp_ports);\n \t} else { /* IPv6 */\n \t\tports_off = *mbuf_off - sizeof(struct nfp_flower_ipv6) -\n-\t\t\tsizeof(struct nfp_flower_tp_ports);\n+\t\t\t\tsizeof(struct nfp_flower_tp_ports);\n \t}\n \tports = (struct nfp_flower_tp_ports *)ports_off;\n \n@@ -1915,9 +1920,8 @@ nfp_flow_item_check(const struct rte_flow_item *item,\n \t\treturn 0;\n \t}\n \n-\tmask = item->mask ?\n-\t\t(const uint8_t *)item->mask :\n-\t\t(const uint8_t *)proc->mask_default;\n+\tmask = item->mask ? (const uint8_t *)item->mask :\n+\t\t\t(const uint8_t *)proc->mask_default;\n \n \t/*\n \t * Single-pass check to make sure that:\ndiff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c\nindex 4528417559..7885166753 100644\n--- a/drivers/net/nfp/nfp_rxtx.c\n+++ b/drivers/net/nfp/nfp_rxtx.c\n@@ -158,8 +158,9 @@ struct nfp_ptype_parsed {\n \n /* set mbuf checksum flags based on RX descriptor flags */\n void\n-nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd,\n-\t\t struct rte_mbuf *mb)\n+nfp_net_rx_cksum(struct nfp_net_rxq *rxq,\n+\t\tstruct nfp_net_rx_desc *rxd,\n+\t\tstruct rte_mbuf *mb)\n {\n \tstruct nfp_net_hw *hw = rxq->hw;\n \n@@ -192,7 +193,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)\n \tunsigned int i;\n \n \tPMD_RX_LOG(DEBUG, \"Fill Rx Freelist for %u descriptors\",\n-\t\t   rxq->rx_count);\n+\t\t\trxq->rx_count);\n \n \tfor (i = 0; i < rxq->rx_count; i++) {\n \t\tstruct nfp_net_rx_desc *rxd;\n@@ -218,8 +219,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)\n \trte_wmb();\n \n \t/* Not advertising the whole ring as the firmware gets confused if so */\n-\tPMD_RX_LOG(DEBUG, \"Increment FL write pointer in %u\",\n-\t\t   rxq->rx_count - 1);\n+\tPMD_RX_LOG(DEBUG, \"Increment FL write pointer in %u\", rxq->rx_count - 1);\n \n \tnfp_qcp_ptr_add(rxq->qcp_fl, NFP_QCP_WRITE_PTR, rxq->rx_count - 1);\n \n@@ -521,7 +521,8 @@ nfp_net_parse_meta(struct nfp_net_rx_desc *rxds,\n  *   Mbuf to set the packet type.\n  */\n static void\n-nfp_net_set_ptype(const struct nfp_ptype_parsed *nfp_ptype, struct rte_mbuf *mb)\n+nfp_net_set_ptype(const struct nfp_ptype_parsed *nfp_ptype,\n+\t\tstruct rte_mbuf *mb)\n {\n \tuint32_t mbuf_ptype = RTE_PTYPE_L2_ETHER;\n \tuint8_t nfp_tunnel_ptype = nfp_ptype->tunnel_ptype;\n@@ -678,7 +679,9 @@ nfp_net_parse_ptype(struct nfp_net_rx_desc *rxds,\n  */\n \n uint16_t\n-nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)\n+nfp_net_recv_pkts(void *rx_queue,\n+\t\tstruct rte_mbuf 
**rx_pkts,\n+\t\tuint16_t nb_pkts)\n {\n \tstruct nfp_net_rxq *rxq;\n \tstruct nfp_net_rx_desc *rxds;\n@@ -728,8 +731,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)\n \t\t */\n \t\tnew_mb = rte_pktmbuf_alloc(rxq->mem_pool);\n \t\tif (unlikely(new_mb == NULL)) {\n-\t\t\tPMD_RX_LOG(DEBUG,\n-\t\t\t\"RX mbuf alloc failed port_id=%u queue_id=%hu\",\n+\t\t\tPMD_RX_LOG(DEBUG, \"RX mbuf alloc failed port_id=%u queue_id=%hu\",\n \t\t\t\t\trxq->port_id, rxq->qidx);\n \t\t\tnfp_net_mbuf_alloc_failed(rxq);\n \t\t\tbreak;\n@@ -743,29 +745,28 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)\n \t\trxb->mbuf = new_mb;\n \n \t\tPMD_RX_LOG(DEBUG, \"Packet len: %u, mbuf_size: %u\",\n-\t\t\t   rxds->rxd.data_len, rxq->mbuf_size);\n+\t\t\t\trxds->rxd.data_len, rxq->mbuf_size);\n \n \t\t/* Size of this segment */\n \t\tmb->data_len = rxds->rxd.data_len - NFP_DESC_META_LEN(rxds);\n \t\t/* Size of the whole packet. We just support 1 segment */\n \t\tmb->pkt_len = rxds->rxd.data_len - NFP_DESC_META_LEN(rxds);\n \n-\t\tif (unlikely((mb->data_len + hw->rx_offset) >\n-\t\t\t     rxq->mbuf_size)) {\n+\t\tif (unlikely((mb->data_len + hw->rx_offset) > rxq->mbuf_size)) {\n \t\t\t/*\n \t\t\t * This should not happen and the user has the\n \t\t\t * responsibility of avoiding it. But we have\n \t\t\t * to give some info about the error\n \t\t\t */\n \t\t\tPMD_RX_LOG(ERR,\n-\t\t\t\t\"mbuf overflow likely due to the RX offset.\\n\"\n-\t\t\t\t\"\\t\\tYour mbuf size should have extra space for\"\n-\t\t\t\t\" RX offset=%u bytes.\\n\"\n-\t\t\t\t\"\\t\\tCurrently you just have %u bytes available\"\n-\t\t\t\t\" but the received packet is %u bytes long\",\n-\t\t\t\thw->rx_offset,\n-\t\t\t\trxq->mbuf_size - hw->rx_offset,\n-\t\t\t\tmb->data_len);\n+\t\t\t\t\t\"mbuf overflow likely due to the RX offset.\\n\"\n+\t\t\t\t\t\"\\t\\tYour mbuf size should have extra space for\"\n+\t\t\t\t\t\" RX offset=%u bytes.\\n\"\n+\t\t\t\t\t\"\\t\\tCurrently you just have %u bytes available\"\n+\t\t\t\t\t\" but the received packet is %u bytes long\",\n+\t\t\t\t\thw->rx_offset,\n+\t\t\t\t\trxq->mbuf_size - hw->rx_offset,\n+\t\t\t\t\tmb->data_len);\n \t\t\trte_pktmbuf_free(mb);\n \t\t\tbreak;\n \t\t}\n@@ -774,8 +775,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)\n \t\tif (hw->rx_offset != 0)\n \t\t\tmb->data_off = RTE_PKTMBUF_HEADROOM + hw->rx_offset;\n \t\telse\n-\t\t\tmb->data_off = RTE_PKTMBUF_HEADROOM +\n-\t\t\t\t       NFP_DESC_META_LEN(rxds);\n+\t\t\tmb->data_off = RTE_PKTMBUF_HEADROOM + NFP_DESC_META_LEN(rxds);\n \n \t\t/* No scatter mode supported */\n \t\tmb->nb_segs = 1;\n@@ -817,7 +817,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)\n \t\treturn nb_hold;\n \n \tPMD_RX_LOG(DEBUG, \"RX  port_id=%hu queue_id=%hu, %hu packets received\",\n-\t\t   rxq->port_id, rxq->qidx, avail);\n+\t\t\trxq->port_id, rxq->qidx, avail);\n \n \tnb_hold += rxq->nb_rx_hold;\n \n@@ -828,7 +828,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)\n \trte_wmb();\n \tif (nb_hold > rxq->rx_free_thresh) {\n \t\tPMD_RX_LOG(DEBUG, \"port=%hu queue=%hu nb_hold=%hu avail=%hu\",\n-\t\t\t   rxq->port_id, rxq->qidx, nb_hold, avail);\n+\t\t\t\trxq->port_id, rxq->qidx, nb_hold, avail);\n \t\tnfp_qcp_ptr_add(rxq->qcp_fl, NFP_QCP_WRITE_PTR, nb_hold);\n \t\tnb_hold = 0;\n \t}\n@@ -854,7 +854,8 @@ nfp_net_rx_queue_release_mbufs(struct nfp_net_rxq *rxq)\n }\n \n void\n-nfp_net_rx_queue_release(struct rte_eth_dev 
*dev, uint16_t queue_idx)\n+nfp_net_rx_queue_release(struct rte_eth_dev *dev,\n+\t\tuint16_t queue_idx)\n {\n \tstruct nfp_net_rxq *rxq = dev->data->rx_queues[queue_idx];\n \n@@ -876,10 +877,11 @@ nfp_net_reset_rx_queue(struct nfp_net_rxq *rxq)\n \n int\n nfp_net_rx_queue_setup(struct rte_eth_dev *dev,\n-\t\t       uint16_t queue_idx, uint16_t nb_desc,\n-\t\t       unsigned int socket_id,\n-\t\t       const struct rte_eth_rxconf *rx_conf,\n-\t\t       struct rte_mempool *mp)\n+\t\tuint16_t queue_idx,\n+\t\tuint16_t nb_desc,\n+\t\tunsigned int socket_id,\n+\t\tconst struct rte_eth_rxconf *rx_conf,\n+\t\tstruct rte_mempool *mp)\n {\n \tuint16_t min_rx_desc;\n \tuint16_t max_rx_desc;\n@@ -897,7 +899,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,\n \t/* Validating number of descriptors */\n \trx_desc_sz = nb_desc * sizeof(struct nfp_net_rx_desc);\n \tif (rx_desc_sz % NFP_ALIGN_RING_DESC != 0 ||\n-\t    nb_desc > max_rx_desc || nb_desc < min_rx_desc) {\n+\t\t\tnb_desc > max_rx_desc || nb_desc < min_rx_desc) {\n \t\tPMD_DRV_LOG(ERR, \"Wrong nb_desc value\");\n \t\treturn -EINVAL;\n \t}\n@@ -913,7 +915,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,\n \n \t/* Allocating rx queue data structure */\n \trxq = rte_zmalloc_socket(\"ethdev RX queue\", sizeof(struct nfp_net_rxq),\n-\t\t\t\t RTE_CACHE_LINE_SIZE, socket_id);\n+\t\t\tRTE_CACHE_LINE_SIZE, socket_id);\n \tif (rxq == NULL)\n \t\treturn -ENOMEM;\n \n@@ -943,9 +945,8 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,\n \t * resizing in later calls to the queue setup function.\n \t */\n \ttz = rte_eth_dma_zone_reserve(dev, \"rx_ring\", queue_idx,\n-\t\t\t\t   sizeof(struct nfp_net_rx_desc) *\n-\t\t\t\t   max_rx_desc, NFP_MEMZONE_ALIGN,\n-\t\t\t\t   socket_id);\n+\t\t\tsizeof(struct nfp_net_rx_desc) * max_rx_desc,\n+\t\t\tNFP_MEMZONE_ALIGN, socket_id);\n \n \tif (tz == NULL) {\n \t\tPMD_DRV_LOG(ERR, \"Error allocating rx dma\");\n@@ -960,8 +961,8 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,\n \n \t/* mbuf pointers array for referencing mbufs linked to RX descriptors */\n \trxq->rxbufs = rte_zmalloc_socket(\"rxq->rxbufs\",\n-\t\t\t\t\t sizeof(*rxq->rxbufs) * nb_desc,\n-\t\t\t\t\t RTE_CACHE_LINE_SIZE, socket_id);\n+\t\t\tsizeof(*rxq->rxbufs) * nb_desc, RTE_CACHE_LINE_SIZE,\n+\t\t\tsocket_id);\n \tif (rxq->rxbufs == NULL) {\n \t\tnfp_net_rx_queue_release(dev, queue_idx);\n \t\tdev->data->rx_queues[queue_idx] = NULL;\n@@ -969,7 +970,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,\n \t}\n \n \tPMD_RX_LOG(DEBUG, \"rxbufs=%p hw_ring=%p dma_addr=0x%\" PRIx64,\n-\t\t   rxq->rxbufs, rxq->rxds, (unsigned long)rxq->dma);\n+\t\t\trxq->rxbufs, rxq->rxds, (unsigned long)rxq->dma);\n \n \tnfp_net_reset_rx_queue(rxq);\n \n@@ -998,15 +999,15 @@ nfp_net_tx_free_bufs(struct nfp_net_txq *txq)\n \tint todo;\n \n \tPMD_TX_LOG(DEBUG, \"queue %hu. 
Check for descriptor with a complete\"\n-\t\t   \" status\", txq->qidx);\n+\t\t\t\" status\", txq->qidx);\n \n \t/* Work out how many packets have been sent */\n \tqcp_rd_p = nfp_qcp_read(txq->qcp_q, NFP_QCP_READ_PTR);\n \n \tif (qcp_rd_p == txq->rd_p) {\n \t\tPMD_TX_LOG(DEBUG, \"queue %hu: It seems harrier is not sending \"\n-\t\t\t   \"packets (%u, %u)\", txq->qidx,\n-\t\t\t   qcp_rd_p, txq->rd_p);\n+\t\t\t\t\"packets (%u, %u)\", txq->qidx,\n+\t\t\t\tqcp_rd_p, txq->rd_p);\n \t\treturn 0;\n \t}\n \n@@ -1016,7 +1017,7 @@ nfp_net_tx_free_bufs(struct nfp_net_txq *txq)\n \t\ttodo = qcp_rd_p + txq->tx_count - txq->rd_p;\n \n \tPMD_TX_LOG(DEBUG, \"qcp_rd_p %u, txq->rd_p: %u, qcp->rd_p: %u\",\n-\t\t   qcp_rd_p, txq->rd_p, txq->rd_p);\n+\t\t\tqcp_rd_p, txq->rd_p, txq->rd_p);\n \n \tif (todo == 0)\n \t\treturn todo;\n@@ -1045,7 +1046,8 @@ nfp_net_tx_queue_release_mbufs(struct nfp_net_txq *txq)\n }\n \n void\n-nfp_net_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx)\n+nfp_net_tx_queue_release(struct rte_eth_dev *dev,\n+\t\tuint16_t queue_idx)\n {\n \tstruct nfp_net_txq *txq = dev->data->tx_queues[queue_idx];\n \ndiff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h\nindex 3c7138f7d6..9a30ebd89e 100644\n--- a/drivers/net/nfp/nfp_rxtx.h\n+++ b/drivers/net/nfp/nfp_rxtx.h\n@@ -234,17 +234,17 @@ nfp_net_mbuf_alloc_failed(struct nfp_net_rxq *rxq)\n }\n \n void nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd,\n-\t\t struct rte_mbuf *mb);\n+\t\tstruct rte_mbuf *mb);\n int nfp_net_rx_freelist_setup(struct rte_eth_dev *dev);\n uint32_t nfp_net_rx_queue_count(void *rx_queue);\n uint16_t nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,\n-\t\t\t\t  uint16_t nb_pkts);\n+\t\tuint16_t nb_pkts);\n void nfp_net_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx);\n void nfp_net_reset_rx_queue(struct nfp_net_rxq *rxq);\n int nfp_net_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,\n-\t\t\t\t  uint16_t nb_desc, unsigned int socket_id,\n-\t\t\t\t  const struct rte_eth_rxconf *rx_conf,\n-\t\t\t\t  struct rte_mempool *mp);\n+\t\tuint16_t nb_desc, unsigned int socket_id,\n+\t\tconst struct rte_eth_rxconf *rx_conf,\n+\t\tstruct rte_mempool *mp);\n void nfp_net_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx);\n void nfp_net_reset_tx_queue(struct nfp_net_txq *txq);\n \n",
    "prefixes": [
        "02/11"
    ]
}