From patchwork Sat Oct 7 02:33:29 2023
X-Patchwork-Submitter: Chaoyong He
X-Patchwork-Id: 132371
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Chaoyong He
To: dev@dpdk.org
Cc: oss-drivers@corigine.com, Chaoyong He, Long Wu, Peng Zhang
Subject: [PATCH 01/11] net/nfp: explicitly compare to null and 0
Date: Sat, 7 Oct 2023 10:33:29 +0800
Message-Id: <20231007023339.1546659-2-chaoyong.he@corigine.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20231007023339.1546659-1-chaoyong.he@corigine.com>
References: <20231007023339.1546659-1-chaoyong.he@corigine.com>

To comply with the coding standard, explicitly compare pointer variables to 'NULL' and integer variables to '0'.

Signed-off-by: Chaoyong He Reviewed-by: Long Wu Reviewed-by: Peng Zhang --- drivers/net/nfp/flower/nfp_flower.c | 6 +- drivers/net/nfp/flower/nfp_flower_ctrl.c | 6 +- drivers/net/nfp/nfp_common.c | 144 +++++++++++------------ drivers/net/nfp/nfp_cpp_bridge.c | 2 +- drivers/net/nfp/nfp_ethdev.c | 38 +++--- drivers/net/nfp/nfp_ethdev_vf.c | 14 +-- drivers/net/nfp/nfp_flow.c | 90 +++++++------- drivers/net/nfp/nfp_rxtx.c | 28 ++--- 8 files changed, 165 insertions(+), 163 deletions(-) diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c index 98e6f7f927..3ddaf0f28d 100644 --- a/drivers/net/nfp/flower/nfp_flower.c +++ b/drivers/net/nfp/flower/nfp_flower.c @@ -69,7 +69,7 @@ nfp_pf_repr_disable_queues(struct rte_eth_dev *dev) new_ctrl &= ~NFP_NET_CFG_CTRL_RINGCFG; /* If an error when reconfig we avoid to change hw state */ - if (nfp_net_reconfig(hw, new_ctrl, update) < 0) + if (nfp_net_reconfig(hw, new_ctrl, update) != 0) return; hw->ctrl = new_ctrl; @@ -100,7 +100,7 @@ nfp_flower_pf_start(struct rte_eth_dev *dev) update |= NFP_NET_CFG_UPDATE_RSS; - if (hw->cap & NFP_NET_CFG_CTRL_RSS2) + if ((hw->cap & NFP_NET_CFG_CTRL_RSS2) != 0) new_ctrl |= NFP_NET_CFG_CTRL_RSS2; else new_ctrl |= NFP_NET_CFG_CTRL_RSS; @@ -110,7 +110,7 @@ nfp_flower_pf_start(struct rte_eth_dev *dev) update |= NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING; - if (hw->cap & NFP_NET_CFG_CTRL_RINGCFG) + if ((hw->cap & NFP_NET_CFG_CTRL_RINGCFG) != 0) new_ctrl |= NFP_NET_CFG_CTRL_RINGCFG; nn_cfg_writel(hw, NFP_NET_CFG_CTRL, new_ctrl); diff --git a/drivers/net/nfp/flower/nfp_flower_ctrl.c b/drivers/net/nfp/flower/nfp_flower_ctrl.c index c5282053cf..b564e7cd73 100644 --- a/drivers/net/nfp/flower/nfp_flower_ctrl.c +++ b/drivers/net/nfp/flower/nfp_flower_ctrl.c @@ -103,7 +103,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue, } /* Filling the received mbuf with packet info */ - if (hw->rx_offset) + if (hw->rx_offset != 0) mb->data_off = RTE_PKTMBUF_HEADROOM + hw->rx_offset; else mb->data_off = RTE_PKTMBUF_HEADROOM + NFP_DESC_META_LEN(rxds); @@ -195,7 +195,7 @@ nfp_flower_ctrl_vnic_nfd3_xmit(struct nfp_app_fw_flower *app_fw_flower, lmbuf = &txq->txbufs[txq->wr_p].mbuf; RTE_MBUF_PREFETCH_TO_FREE(*lmbuf); - if (*lmbuf) + if (*lmbuf != NULL) rte_pktmbuf_free_seg(*lmbuf); *lmbuf = mbuf; @@ -337,7 +337,7 @@ nfp_flower_ctrl_vnic_nfdk_xmit(struct nfp_app_fw_flower *app_fw_flower, } txq->wr_p = D_IDX(txq, txq->wr_p + used_descs); - if (txq->wr_p % NFDK_TX_DESC_BLOCK_CNT) + if (txq->wr_p %
NFDK_TX_DESC_BLOCK_CNT != 0) txq->data_pending += mbuf->pkt_len; else txq->data_pending = 0; diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c index 5683afc40a..36752583dd 100644 --- a/drivers/net/nfp/nfp_common.c +++ b/drivers/net/nfp/nfp_common.c @@ -221,7 +221,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t update) new = nn_cfg_readl(hw, NFP_NET_CFG_UPDATE); if (new == 0) break; - if (new & NFP_NET_CFG_UPDATE_ERR) { + if ((new & NFP_NET_CFG_UPDATE_ERR) != 0) { PMD_INIT_LOG(ERR, "Reconfig error: 0x%08x", new); return -1; } @@ -390,18 +390,18 @@ nfp_net_configure(struct rte_eth_dev *dev) rxmode = &dev_conf->rxmode; txmode = &dev_conf->txmode; - if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) + if ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) != 0) rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH; /* Checking TX mode */ - if (txmode->mq_mode) { + if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) { PMD_INIT_LOG(INFO, "TX mq_mode DCB and VMDq not supported"); return -EINVAL; } /* Checking RX mode */ - if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG && - !(hw->cap & NFP_NET_CFG_CTRL_RSS_ANY)) { + if ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) != 0 && + (hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) == 0) { PMD_INIT_LOG(INFO, "RSS not supported"); return -EINVAL; } @@ -493,11 +493,11 @@ nfp_net_disable_queues(struct rte_eth_dev *dev) update = NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING | NFP_NET_CFG_UPDATE_MSIX; - if (hw->cap & NFP_NET_CFG_CTRL_RINGCFG) + if ((hw->cap & NFP_NET_CFG_CTRL_RINGCFG) != 0) new_ctrl &= ~NFP_NET_CFG_CTRL_RINGCFG; /* If an error when reconfig we avoid to change hw state */ - if (nfp_net_reconfig(hw, new_ctrl, update) < 0) + if (nfp_net_reconfig(hw, new_ctrl, update) != 0) return; hw->ctrl = new_ctrl; @@ -537,8 +537,8 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr) uint32_t update, ctrl; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) && - !(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR)) { + if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 && + (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) == 0) { PMD_INIT_LOG(INFO, "MAC address unable to change when" " port enabled"); return -EBUSY; @@ -550,10 +550,10 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr) /* Signal the NIC about the change */ update = NFP_NET_CFG_UPDATE_MACADDR; ctrl = hw->ctrl; - if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) && - (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR)) + if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 && + (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0) ctrl |= NFP_NET_CFG_CTRL_LIVE_ADDR; - if (nfp_net_reconfig(hw, ctrl, update) < 0) { + if (nfp_net_reconfig(hw, ctrl, update) != 0) { PMD_INIT_LOG(INFO, "MAC address update failed"); return -EIO; } @@ -568,7 +568,7 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev, int i; if (rte_intr_vec_list_alloc(intr_handle, "intr_vec", - dev->data->nb_rx_queues)) { + dev->data->nb_rx_queues) != 0) { PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues" " intr_vec", dev->data->nb_rx_queues); return -ENOMEM; @@ -580,7 +580,7 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev, PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with UIO"); /* UIO just supports one queue and no LSC*/ nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(0), 0); - if (rte_intr_vec_list_index_set(intr_handle, 0, 0)) + if (rte_intr_vec_list_index_set(intr_handle, 0, 0) != 0) return -1; } else { PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with VFIO"); @@ -591,7 +591,7 @@ 
nfp_configure_rx_interrupt(struct rte_eth_dev *dev, */ nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(i), i + 1); if (rte_intr_vec_list_index_set(intr_handle, i, - i + 1)) + i + 1) != 0) return -1; PMD_INIT_LOG(DEBUG, "intr_vec[%d]= %d", i, rte_intr_vec_list_index_get(intr_handle, @@ -619,53 +619,53 @@ nfp_check_offloads(struct rte_eth_dev *dev) rxmode = &dev_conf->rxmode; txmode = &dev_conf->txmode; - if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) { - if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM) + if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0) { + if ((hw->cap & NFP_NET_CFG_CTRL_RXCSUM) != 0) ctrl |= NFP_NET_CFG_CTRL_RXCSUM; } - if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) + if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0) nfp_net_enbable_rxvlan_cap(hw, &ctrl); - if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) { - if (hw->cap & NFP_NET_CFG_CTRL_RXQINQ) + if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0) { + if ((hw->cap & NFP_NET_CFG_CTRL_RXQINQ) != 0) ctrl |= NFP_NET_CFG_CTRL_RXQINQ; } hw->mtu = dev->data->mtu; - if (txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) { - if (hw->cap & NFP_NET_CFG_CTRL_TXVLAN_V2) + if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) != 0) { + if ((hw->cap & NFP_NET_CFG_CTRL_TXVLAN_V2) != 0) ctrl |= NFP_NET_CFG_CTRL_TXVLAN_V2; - else if (hw->cap & NFP_NET_CFG_CTRL_TXVLAN) + else if ((hw->cap & NFP_NET_CFG_CTRL_TXVLAN) != 0) ctrl |= NFP_NET_CFG_CTRL_TXVLAN; } /* L2 broadcast */ - if (hw->cap & NFP_NET_CFG_CTRL_L2BC) + if ((hw->cap & NFP_NET_CFG_CTRL_L2BC) != 0) ctrl |= NFP_NET_CFG_CTRL_L2BC; /* L2 multicast */ - if (hw->cap & NFP_NET_CFG_CTRL_L2MC) + if ((hw->cap & NFP_NET_CFG_CTRL_L2MC) != 0) ctrl |= NFP_NET_CFG_CTRL_L2MC; /* TX checksum offload */ - if (txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM || - txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM || - txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) + if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0 || + (txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0 || + (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0) ctrl |= NFP_NET_CFG_CTRL_TXCSUM; /* LSO offload */ - if (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO || - txmode->offloads & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) { - if (hw->cap & NFP_NET_CFG_CTRL_LSO) + if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) != 0 || + (txmode->offloads & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) != 0) { + if ((hw->cap & NFP_NET_CFG_CTRL_LSO) != 0) ctrl |= NFP_NET_CFG_CTRL_LSO; else ctrl |= NFP_NET_CFG_CTRL_LSO2; } /* RX gather */ - if (txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) + if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0) ctrl |= NFP_NET_CFG_CTRL_GATHER; return ctrl; @@ -693,7 +693,7 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev) return -ENOTSUP; } - if (hw->ctrl & NFP_NET_CFG_CTRL_PROMISC) { + if ((hw->ctrl & NFP_NET_CFG_CTRL_PROMISC) != 0) { PMD_DRV_LOG(INFO, "Promiscuous mode already enabled"); return 0; } @@ -706,7 +706,7 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev) * it can not fail ... */ ret = nfp_net_reconfig(hw, new_ctrl, update); - if (ret < 0) + if (ret != 0) return ret; hw->ctrl = new_ctrl; @@ -736,7 +736,7 @@ nfp_net_promisc_disable(struct rte_eth_dev *dev) * assuming it can not fail ... 
*/ ret = nfp_net_reconfig(hw, new_ctrl, update); - if (ret < 0) + if (ret != 0) return ret; hw->ctrl = new_ctrl; @@ -770,7 +770,7 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete) memset(&link, 0, sizeof(struct rte_eth_link)); - if (nn_link_status & NFP_NET_CFG_STS_LINK) + if ((nn_link_status & NFP_NET_CFG_STS_LINK) != 0) link.link_status = RTE_ETH_LINK_UP; link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX; @@ -802,7 +802,7 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete) ret = rte_eth_linkstatus_set(dev, &link); if (ret == 0) { - if (link.link_status) + if (link.link_status != 0) PMD_DRV_LOG(INFO, "NIC Link is Up"); else PMD_DRV_LOG(INFO, "NIC Link is Down"); @@ -907,7 +907,7 @@ nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) nfp_dev_stats.imissed -= hw->eth_stats_base.imissed; - if (stats) { + if (stats != NULL) { memcpy(stats, &nfp_dev_stats, sizeof(*stats)); return 0; } @@ -1229,32 +1229,32 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) /* Next should change when PF support is implemented */ dev_info->max_mac_addrs = 1; - if (hw->cap & (NFP_NET_CFG_CTRL_RXVLAN | NFP_NET_CFG_CTRL_RXVLAN_V2)) + if ((hw->cap & (NFP_NET_CFG_CTRL_RXVLAN | NFP_NET_CFG_CTRL_RXVLAN_V2)) != 0) dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP; - if (hw->cap & NFP_NET_CFG_CTRL_RXQINQ) + if ((hw->cap & NFP_NET_CFG_CTRL_RXQINQ) != 0) dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP; - if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM) + if ((hw->cap & NFP_NET_CFG_CTRL_RXCSUM) != 0) dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_UDP_CKSUM | RTE_ETH_RX_OFFLOAD_TCP_CKSUM; - if (hw->cap & (NFP_NET_CFG_CTRL_TXVLAN | NFP_NET_CFG_CTRL_TXVLAN_V2)) + if ((hw->cap & (NFP_NET_CFG_CTRL_TXVLAN | NFP_NET_CFG_CTRL_TXVLAN_V2)) != 0) dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT; - if (hw->cap & NFP_NET_CFG_CTRL_TXCSUM) + if ((hw->cap & NFP_NET_CFG_CTRL_TXCSUM) != 0) dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | RTE_ETH_TX_OFFLOAD_UDP_CKSUM | RTE_ETH_TX_OFFLOAD_TCP_CKSUM; - if (hw->cap & NFP_NET_CFG_CTRL_LSO_ANY) { + if ((hw->cap & NFP_NET_CFG_CTRL_LSO_ANY) != 0) { dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO; - if (hw->cap & NFP_NET_CFG_CTRL_VXLAN) + if ((hw->cap & NFP_NET_CFG_CTRL_VXLAN) != 0) dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO; } - if (hw->cap & NFP_NET_CFG_CTRL_GATHER) + if ((hw->cap & NFP_NET_CFG_CTRL_GATHER) != 0) dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS; cap_extend = nn_cfg_readl(hw, NFP_NET_CFG_CAP_WORD1); @@ -1297,7 +1297,7 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) .nb_mtu_seg_max = NFP_TX_MAX_MTU_SEG, }; - if (hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) { + if ((hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) != 0) { dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH; dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 | @@ -1431,7 +1431,7 @@ nfp_net_dev_link_status_print(struct rte_eth_dev *dev) struct rte_eth_link link; rte_eth_linkstatus_get(dev, &link); - if (link.link_status) + if (link.link_status != 0) PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s", dev->data->port_id, link.link_speed, link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX @@ -1462,7 +1462,7 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev) hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); pci_dev = RTE_ETH_DEV_TO_PCI(dev); - if (hw->ctrl & NFP_NET_CFG_CTRL_MSIXAUTO) { + if 
((hw->ctrl & NFP_NET_CFG_CTRL_MSIXAUTO) != 0) { /* If MSI-X auto-masking is used, clear the entry */ rte_wmb(); rte_intr_ack(pci_dev->intr_handle); @@ -1524,7 +1524,7 @@ nfp_net_dev_interrupt_handler(void *param) if (rte_eal_alarm_set(timeout * 1000, nfp_net_dev_interrupt_delayed_handler, - (void *)dev) < 0) { + (void *)dev) != 0) { PMD_INIT_LOG(ERR, "Error setting alarm"); /* Unmasking */ nfp_net_irq_unmask(dev); @@ -1577,16 +1577,16 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask) nfp_net_enbable_rxvlan_cap(hw, &rxvlan_ctrl); /* VLAN stripping setting */ - if (mask & RTE_ETH_VLAN_STRIP_MASK) { - if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) + if ((mask & RTE_ETH_VLAN_STRIP_MASK) != 0) { + if ((dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0) new_ctrl |= rxvlan_ctrl; else new_ctrl &= ~rxvlan_ctrl; } /* QinQ stripping setting */ - if (mask & RTE_ETH_QINQ_STRIP_MASK) { - if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) + if ((mask & RTE_ETH_QINQ_STRIP_MASK) != 0) { + if ((dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0) new_ctrl |= NFP_NET_CFG_CTRL_RXQINQ; else new_ctrl &= ~NFP_NET_CFG_CTRL_RXQINQ; @@ -1674,7 +1674,7 @@ nfp_net_reta_update(struct rte_eth_dev *dev, update = NFP_NET_CFG_UPDATE_RSS; - if (nfp_net_reconfig(hw, hw->ctrl, update) < 0) + if (nfp_net_reconfig(hw, hw->ctrl, update) != 0) return -EIO; return 0; @@ -1748,28 +1748,28 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev, rss_hf = rss_conf->rss_hf; - if (rss_hf & RTE_ETH_RSS_IPV4) + if ((rss_hf & RTE_ETH_RSS_IPV4) != 0) cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4; - if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) + if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) != 0) cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_TCP; - if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) + if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) != 0) cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_UDP; - if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP) + if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP) != 0) cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_SCTP; - if (rss_hf & RTE_ETH_RSS_IPV6) + if ((rss_hf & RTE_ETH_RSS_IPV6) != 0) cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6; - if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) + if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) != 0) cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_TCP; - if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) + if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) != 0) cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_UDP; - if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP) + if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP) != 0) cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_SCTP; cfg_rss_ctrl |= NFP_NET_CFG_RSS_MASK; @@ -1814,7 +1814,7 @@ nfp_net_rss_hash_update(struct rte_eth_dev *dev, update = NFP_NET_CFG_UPDATE_RSS; - if (nfp_net_reconfig(hw, hw->ctrl, update) < 0) + if (nfp_net_reconfig(hw, hw->ctrl, update) != 0) return -EIO; return 0; @@ -1838,28 +1838,28 @@ nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev, rss_hf = rss_conf->rss_hf; cfg_rss_ctrl = nn_cfg_readl(hw, NFP_NET_CFG_RSS_CTRL); - if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4) + if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4) != 0) rss_hf |= RTE_ETH_RSS_IPV4; - if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_TCP) + if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_TCP) != 0) rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP; - if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_TCP) + if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_TCP) != 0) rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP; - if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_UDP) + if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_UDP) != 0) rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP; - if 
(cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_UDP) + if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_UDP) != 0) rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP; - if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6) + if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6) != 0) rss_hf |= RTE_ETH_RSS_IPV6; - if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_SCTP) + if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_SCTP) != 0) rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_SCTP; - if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_SCTP) + if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_SCTP) != 0) rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_SCTP; /* Propagate current RSS hash functions to caller */ diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c index ed9a946b0c..34764a8a32 100644 --- a/drivers/net/nfp/nfp_cpp_bridge.c +++ b/drivers/net/nfp/nfp_cpp_bridge.c @@ -70,7 +70,7 @@ nfp_map_service(uint32_t service_id) rte_service_runstate_set(service_id, 1); rte_service_component_runstate_set(service_id, 1); rte_service_lcore_start(slcore); - if (rte_service_may_be_active(slcore)) + if (rte_service_may_be_active(slcore) != 0) PMD_INIT_LOG(INFO, "The service %s is running", service_name); else PMD_INIT_LOG(ERR, "The service %s is not running", service_name); diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c index ebc5538291..12feec8eb4 100644 --- a/drivers/net/nfp/nfp_ethdev.c +++ b/drivers/net/nfp/nfp_ethdev.c @@ -89,7 +89,7 @@ nfp_net_start(struct rte_eth_dev *dev) } } intr_vector = dev->data->nb_rx_queues; - if (rte_intr_efd_enable(intr_handle, intr_vector)) + if (rte_intr_efd_enable(intr_handle, intr_vector) != 0) return -1; nfp_configure_rx_interrupt(dev, intr_handle); @@ -113,7 +113,7 @@ nfp_net_start(struct rte_eth_dev *dev) dev_conf = &dev->data->dev_conf; rxmode = &dev_conf->rxmode; - if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) { + if ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) != 0) { nfp_net_rss_config_default(dev); update |= NFP_NET_CFG_UPDATE_RSS; new_ctrl |= nfp_net_cfg_ctrl_rss(hw->cap); @@ -125,15 +125,15 @@ nfp_net_start(struct rte_eth_dev *dev) update |= NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING; /* Enable vxlan */ - if (hw->cap & NFP_NET_CFG_CTRL_VXLAN) { + if ((hw->cap & NFP_NET_CFG_CTRL_VXLAN) != 0) { new_ctrl |= NFP_NET_CFG_CTRL_VXLAN; update |= NFP_NET_CFG_UPDATE_VXLAN; } - if (hw->cap & NFP_NET_CFG_CTRL_RINGCFG) + if ((hw->cap & NFP_NET_CFG_CTRL_RINGCFG) != 0) new_ctrl |= NFP_NET_CFG_CTRL_RINGCFG; - if (nfp_net_reconfig(hw, new_ctrl, update) < 0) + if (nfp_net_reconfig(hw, new_ctrl, update) != 0) return -EIO; /* Enable packet type offload by extend ctrl word1. */ @@ -146,14 +146,14 @@ nfp_net_start(struct rte_eth_dev *dev) | NFP_NET_CFG_CTRL_IPSEC_LM_LOOKUP; update = NFP_NET_CFG_UPDATE_GEN; - if (nfp_net_ext_reconfig(hw, ctrl_extend, update) < 0) + if (nfp_net_ext_reconfig(hw, ctrl_extend, update) != 0) return -EIO; /* * Allocating rte mbufs for configured rx queues. 
* This requires queues being enabled before */ - if (nfp_net_rx_freelist_setup(dev) < 0) { + if (nfp_net_rx_freelist_setup(dev) != 0) { ret = -ENOMEM; goto error; } @@ -298,7 +298,7 @@ nfp_net_close(struct rte_eth_dev *dev) for (i = 0; i < app_fw_nic->total_phyports; i++) { /* Check to see if ports are still in use */ - if (app_fw_nic->ports[i]) + if (app_fw_nic->ports[i] != NULL) return 0; } @@ -598,7 +598,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev) hw->mtu = RTE_ETHER_MTU; /* VLAN insertion is incompatible with LSOv2 */ - if (hw->cap & NFP_NET_CFG_CTRL_LSO2) + if ((hw->cap & NFP_NET_CFG_CTRL_LSO2) != 0) hw->cap &= ~NFP_NET_CFG_CTRL_TXVLAN; nfp_net_log_device_information(hw); @@ -618,7 +618,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev) nfp_net_write_mac(hw, &hw->mac_addr.addr_bytes[0]); tmp_ether_addr = &hw->mac_addr; - if (!rte_is_valid_assigned_ether_addr(tmp_ether_addr)) { + if (rte_is_valid_assigned_ether_addr(tmp_ether_addr) == 0) { PMD_INIT_LOG(INFO, "Using random mac address for port %d", port); /* Using random mac addresses for VFs */ rte_eth_random_addr(&hw->mac_addr.addr_bytes[0]); @@ -695,10 +695,11 @@ nfp_fw_upload(struct rte_pci_device *dev, struct nfp_nsp *nsp, char *card) /* Finally try the card type and media */ snprintf(fw_name, sizeof(fw_name), "%s/%s", DEFAULT_FW_PATH, card); PMD_DRV_LOG(DEBUG, "Trying with fw file: %s", fw_name); - if (rte_firmware_read(fw_name, &fw_buf, &fsize) < 0) { - PMD_DRV_LOG(INFO, "Firmware file %s not found.", fw_name); - return -ENOENT; - } + if (rte_firmware_read(fw_name, &fw_buf, &fsize) == 0) + goto load_fw; + + PMD_DRV_LOG(ERR, "Can't find suitable firmware."); + return -ENOENT; load_fw: PMD_DRV_LOG(INFO, "Firmware file found at %s with size: %zu", @@ -727,7 +728,7 @@ nfp_fw_setup(struct rte_pci_device *dev, if (nfp_fw_model == NULL) nfp_fw_model = nfp_hwinfo_lookup(hwinfo, "assembly.partno"); - if (nfp_fw_model) { + if (nfp_fw_model != NULL) { PMD_DRV_LOG(INFO, "firmware model found: %s", nfp_fw_model); } else { PMD_DRV_LOG(ERR, "firmware model NOT found"); @@ -865,7 +866,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev, * nfp_net_init */ ret = nfp_net_init(eth_dev); - if (ret) { + if (ret != 0) { ret = -ENODEV; goto port_cleanup; } @@ -878,7 +879,8 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev, port_cleanup: for (i = 0; i < app_fw_nic->total_phyports; i++) { - if (app_fw_nic->ports[i] && app_fw_nic->ports[i]->eth_dev) { + if (app_fw_nic->ports[i] != NULL && + app_fw_nic->ports[i]->eth_dev != NULL) { struct rte_eth_dev *tmp_dev; tmp_dev = app_fw_nic->ports[i]->eth_dev; nfp_ipsec_uninit(tmp_dev); @@ -950,7 +952,7 @@ nfp_pf_init(struct rte_pci_device *pci_dev) goto hwinfo_cleanup; } - if (nfp_fw_setup(pci_dev, cpp, nfp_eth_table, hwinfo)) { + if (nfp_fw_setup(pci_dev, cpp, nfp_eth_table, hwinfo) != 0) { PMD_INIT_LOG(ERR, "Error when uploading firmware"); ret = -EIO; goto eth_table_cleanup; diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c index 0c94fc51ad..c8d6b0461b 100644 --- a/drivers/net/nfp/nfp_ethdev_vf.c +++ b/drivers/net/nfp/nfp_ethdev_vf.c @@ -66,7 +66,7 @@ nfp_netvf_start(struct rte_eth_dev *dev) } } intr_vector = dev->data->nb_rx_queues; - if (rte_intr_efd_enable(intr_handle, intr_vector)) + if (rte_intr_efd_enable(intr_handle, intr_vector) != 0) return -1; nfp_configure_rx_interrupt(dev, intr_handle); @@ -83,7 +83,7 @@ nfp_netvf_start(struct rte_eth_dev *dev) dev_conf = &dev->data->dev_conf; rxmode = &dev_conf->rxmode; - if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) { + if ((rxmode->mq_mode & 
RTE_ETH_MQ_RX_RSS) != 0) { nfp_net_rss_config_default(dev); update |= NFP_NET_CFG_UPDATE_RSS; new_ctrl |= nfp_net_cfg_ctrl_rss(hw->cap); @@ -94,18 +94,18 @@ nfp_netvf_start(struct rte_eth_dev *dev) update |= NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING; - if (hw->cap & NFP_NET_CFG_CTRL_RINGCFG) + if ((hw->cap & NFP_NET_CFG_CTRL_RINGCFG) != 0) new_ctrl |= NFP_NET_CFG_CTRL_RINGCFG; nn_cfg_writel(hw, NFP_NET_CFG_CTRL, new_ctrl); - if (nfp_net_reconfig(hw, new_ctrl, update) < 0) + if (nfp_net_reconfig(hw, new_ctrl, update) != 0) return -EIO; /* * Allocating rte mbufs for configured rx queues. * This requires queues being enabled before */ - if (nfp_net_rx_freelist_setup(dev) < 0) { + if (nfp_net_rx_freelist_setup(dev) != 0) { ret = -ENOMEM; goto error; } @@ -330,7 +330,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) hw->mtu = RTE_ETHER_MTU; /* VLAN insertion is incompatible with LSOv2 */ - if (hw->cap & NFP_NET_CFG_CTRL_LSO2) + if ((hw->cap & NFP_NET_CFG_CTRL_LSO2) != 0) hw->cap &= ~NFP_NET_CFG_CTRL_TXVLAN; nfp_net_log_device_information(hw); @@ -350,7 +350,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) nfp_netvf_read_mac(hw); tmp_ether_addr = &hw->mac_addr; - if (!rte_is_valid_assigned_ether_addr(tmp_ether_addr)) { + if (rte_is_valid_assigned_ether_addr(tmp_ether_addr) == 0) { PMD_INIT_LOG(INFO, "Using random mac address for port %d", port); /* Using random mac addresses for VFs */ diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c index aa286535f7..bdbc92180d 100644 --- a/drivers/net/nfp/nfp_flow.c +++ b/drivers/net/nfp/nfp_flow.c @@ -489,8 +489,8 @@ nfp_stats_id_free(struct nfp_flow_priv *priv, uint32_t ctx) /* Check if buffer is full */ ring = &priv->stats_ids.free_list; - if (!CIRC_SPACE(ring->head, ring->tail, priv->stats_ring_size * - NFP_FL_STATS_ELEM_RS - NFP_FL_STATS_ELEM_RS + 1)) + if (CIRC_SPACE(ring->head, ring->tail, priv->stats_ring_size * + NFP_FL_STATS_ELEM_RS - NFP_FL_STATS_ELEM_RS + 1) == 0) return -ENOBUFS; memcpy(&ring->buf[ring->head], &ctx, NFP_FL_STATS_ELEM_RS); @@ -575,7 +575,7 @@ nfp_tun_add_ipv6_off(struct nfp_app_fw_flower *app_fw_flower, rte_spinlock_lock(&priv->ipv6_off_lock); LIST_FOREACH(entry, &priv->ipv6_off_list, next) { - if (!memcmp(entry->ipv6_addr, ipv6, sizeof(entry->ipv6_addr))) { + if (memcmp(entry->ipv6_addr, ipv6, sizeof(entry->ipv6_addr)) == 0) { entry->ref_count++; rte_spinlock_unlock(&priv->ipv6_off_lock); return 0; @@ -609,7 +609,7 @@ nfp_tun_del_ipv6_off(struct nfp_app_fw_flower *app_fw_flower, rte_spinlock_lock(&priv->ipv6_off_lock); LIST_FOREACH(entry, &priv->ipv6_off_list, next) { - if (!memcmp(entry->ipv6_addr, ipv6, sizeof(entry->ipv6_addr))) { + if (memcmp(entry->ipv6_addr, ipv6, sizeof(entry->ipv6_addr)) == 0) { entry->ref_count--; if (entry->ref_count == 0) { LIST_REMOVE(entry, next); @@ -639,14 +639,14 @@ nfp_tun_check_ip_off_del(struct nfp_flower_representor *repr, struct nfp_flower_ext_meta *ext_meta = NULL; meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data; - if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) + if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) != 0) ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1); if (ext_meta != NULL) key_layer2 = rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2); - if (key_layer2 & NFP_FLOWER_LAYER2_TUN_IPV6) { - if (key_layer2 & NFP_FLOWER_LAYER2_GRE) { + if ((key_layer2 & NFP_FLOWER_LAYER2_TUN_IPV6) != 0) { + if ((key_layer2 & NFP_FLOWER_LAYER2_GRE) != 0) { gre6 = (struct nfp_flower_ipv6_gre_tun 
*)(nfp_flow->payload.mask_data - sizeof(struct nfp_flower_ipv6_gre_tun)); ret = nfp_tun_del_ipv6_off(repr->app_fw_flower, gre6->ipv6.ipv6_dst); @@ -656,7 +656,7 @@ nfp_tun_check_ip_off_del(struct nfp_flower_representor *repr, ret = nfp_tun_del_ipv6_off(repr->app_fw_flower, udp6->ipv6.ipv6_dst); } } else { - if (key_layer2 & NFP_FLOWER_LAYER2_GRE) { + if ((key_layer2 & NFP_FLOWER_LAYER2_GRE) != 0) { gre4 = (struct nfp_flower_ipv4_gre_tun *)(nfp_flow->payload.mask_data - sizeof(struct nfp_flower_ipv4_gre_tun)); ret = nfp_tun_del_ipv4_off(repr->app_fw_flower, gre4->ipv4.dst); @@ -750,7 +750,7 @@ nfp_flow_compile_metadata(struct nfp_flow_priv *priv, mbuf_off_mask += sizeof(struct nfp_flower_meta_tci); /* Populate Extended Metadata if required */ - if (key_layer->key_layer & NFP_FLOWER_LAYER_EXT_META) { + if ((key_layer->key_layer & NFP_FLOWER_LAYER_EXT_META) != 0) { nfp_flower_compile_ext_meta(mbuf_off_exact, key_layer); nfp_flower_compile_ext_meta(mbuf_off_mask, key_layer); mbuf_off_exact += sizeof(struct nfp_flower_ext_meta); @@ -1035,7 +1035,7 @@ nfp_flow_key_layers_calculate_actions(const struct rte_flow_action actions[], break; case RTE_FLOW_ACTION_TYPE_SET_TTL: PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_SET_TTL detected"); - if (key_ls->key_layer & NFP_FLOWER_LAYER_IPV4) { + if ((key_ls->key_layer & NFP_FLOWER_LAYER_IPV4) != 0) { if (!ttl_tos_flag) { key_ls->act_size += sizeof(struct nfp_fl_act_set_ip4_ttl_tos); @@ -1130,15 +1130,15 @@ nfp_flow_is_tunnel(struct rte_flow *nfp_flow) struct nfp_flower_meta_tci *meta_tci; meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data; - if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_VXLAN) + if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_VXLAN) != 0) return true; - if (!(meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)) + if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) == 0) return false; ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1); key_layer2 = rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2); - if (key_layer2 & (NFP_FLOWER_LAYER2_GENEVE | NFP_FLOWER_LAYER2_GRE)) + if ((key_layer2 & (NFP_FLOWER_LAYER2_GENEVE | NFP_FLOWER_LAYER2_GRE)) != 0) return true; return false; @@ -1234,7 +1234,7 @@ nfp_flow_merge_ipv4(__rte_unused struct nfp_app_fw_flower *app_fw_flower, spec = item->spec; mask = item->mask ? item->mask : proc->mask_default; meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data; - if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) + if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) != 0) ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1); if (is_outer_layer && nfp_flow_is_tunnel(nfp_flow)) { @@ -1245,8 +1245,8 @@ nfp_flow_merge_ipv4(__rte_unused struct nfp_app_fw_flower *app_fw_flower, hdr = is_mask ? &mask->hdr : &spec->hdr; - if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & - NFP_FLOWER_LAYER2_GRE)) { + if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & + NFP_FLOWER_LAYER2_GRE) != 0) { ipv4_gre_tun = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off; ipv4_gre_tun->ip_ext.tos = hdr->type_of_service; @@ -1271,7 +1271,7 @@ nfp_flow_merge_ipv4(__rte_unused struct nfp_app_fw_flower *app_fw_flower, * reserve space for L4 info. * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv4 */ - if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) + if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0) *mbuf_off += sizeof(struct nfp_flower_tp_ports); hdr = is_mask ? 
&mask->hdr : &spec->hdr; @@ -1312,7 +1312,7 @@ nfp_flow_merge_ipv6(__rte_unused struct nfp_app_fw_flower *app_fw_flower, spec = item->spec; mask = item->mask ? item->mask : proc->mask_default; meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data; - if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) + if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) != 0) ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1); if (is_outer_layer && nfp_flow_is_tunnel(nfp_flow)) { @@ -1324,8 +1324,8 @@ nfp_flow_merge_ipv6(__rte_unused struct nfp_app_fw_flower *app_fw_flower, hdr = is_mask ? &mask->hdr : &spec->hdr; vtc_flow = rte_be_to_cpu_32(hdr->vtc_flow); - if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & - NFP_FLOWER_LAYER2_GRE)) { + if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & + NFP_FLOWER_LAYER2_GRE) != 0) { ipv6_gre_tun = (struct nfp_flower_ipv6_gre_tun *)*mbuf_off; ipv6_gre_tun->ip_ext.tos = vtc_flow >> RTE_IPV6_HDR_TC_SHIFT; @@ -1354,7 +1354,7 @@ nfp_flow_merge_ipv6(__rte_unused struct nfp_app_fw_flower *app_fw_flower, * reserve space for L4 info. * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv6 */ - if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) + if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0) *mbuf_off += sizeof(struct nfp_flower_tp_ports); hdr = is_mask ? &mask->hdr : &spec->hdr; @@ -1398,7 +1398,7 @@ nfp_flow_merge_tcp(__rte_unused struct nfp_app_fw_flower *app_fw_flower, } meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data; - if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) { + if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) { ipv4 = (struct nfp_flower_ipv4 *) (*mbuf_off - sizeof(struct nfp_flower_ipv4)); ports = (struct nfp_flower_tp_ports *) @@ -1421,7 +1421,7 @@ nfp_flow_merge_tcp(__rte_unused struct nfp_app_fw_flower *app_fw_flower, tcp_flags = spec->hdr.tcp_flags; } - if (ipv4) { + if (ipv4 != NULL) { if (tcp_flags & RTE_TCP_FIN_FLAG) ipv4->ip_ext.flags |= NFP_FL_TCP_FLAG_FIN; if (tcp_flags & RTE_TCP_SYN_FLAG) @@ -1476,7 +1476,7 @@ nfp_flow_merge_udp(__rte_unused struct nfp_app_fw_flower *app_fw_flower, } meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data; - if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) { + if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) { ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv4) - sizeof(struct nfp_flower_tp_ports); } else {/* IPv6 */ @@ -1519,7 +1519,7 @@ nfp_flow_merge_sctp(__rte_unused struct nfp_app_fw_flower *app_fw_flower, } meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data; - if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) { + if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) { ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv4) - sizeof(struct nfp_flower_tp_ports); } else { /* IPv6 */ @@ -1559,7 +1559,7 @@ nfp_flow_merge_vxlan(struct nfp_app_fw_flower *app_fw_flower, struct nfp_flower_ext_meta *ext_meta = NULL; meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data; - if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) + if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) != 0) ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1); spec = item->spec; @@ -1571,8 +1571,8 @@ nfp_flow_merge_vxlan(struct nfp_app_fw_flower *app_fw_flower, mask = item->mask ? item->mask : proc->mask_default; hdr = is_mask ? 
&mask->hdr : &spec->hdr; - if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & - NFP_FLOWER_LAYER2_TUN_IPV6)) { + if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & + NFP_FLOWER_LAYER2_TUN_IPV6) != 0) { tun6 = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off; tun6->tun_id = hdr->vx_vni; if (!is_mask) @@ -1585,8 +1585,8 @@ nfp_flow_merge_vxlan(struct nfp_app_fw_flower *app_fw_flower, } vxlan_end: - if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & - NFP_FLOWER_LAYER2_TUN_IPV6)) + if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & + NFP_FLOWER_LAYER2_TUN_IPV6) != 0) *mbuf_off += sizeof(struct nfp_flower_ipv6_udp_tun); else *mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun); @@ -1613,7 +1613,7 @@ nfp_flow_merge_geneve(struct nfp_app_fw_flower *app_fw_flower, struct nfp_flower_ext_meta *ext_meta = NULL; meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data; - if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) + if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) != 0) ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1); spec = item->spec; @@ -1625,8 +1625,8 @@ nfp_flow_merge_geneve(struct nfp_app_fw_flower *app_fw_flower, mask = item->mask ? item->mask : proc->mask_default; geneve = is_mask ? mask : spec; - if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & - NFP_FLOWER_LAYER2_TUN_IPV6)) { + if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & + NFP_FLOWER_LAYER2_TUN_IPV6) != 0) { tun6 = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off; tun6->tun_id = rte_cpu_to_be_32((geneve->vni[0] << 16) | (geneve->vni[1] << 8) | (geneve->vni[2])); @@ -1641,8 +1641,8 @@ nfp_flow_merge_geneve(struct nfp_app_fw_flower *app_fw_flower, } geneve_end: - if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & - NFP_FLOWER_LAYER2_TUN_IPV6)) { + if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & + NFP_FLOWER_LAYER2_TUN_IPV6) != 0) { *mbuf_off += sizeof(struct nfp_flower_ipv6_udp_tun); } else { *mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun); @@ -1669,8 +1669,8 @@ nfp_flow_merge_gre(__rte_unused struct nfp_app_fw_flower *app_fw_flower, ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1); /* NVGRE is the only supported GRE tunnel type */ - if (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & - NFP_FLOWER_LAYER2_TUN_IPV6) { + if ((rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & + NFP_FLOWER_LAYER2_TUN_IPV6) != 0) { tun6 = (struct nfp_flower_ipv6_gre_tun *)*mbuf_off; if (is_mask) tun6->ethertype = rte_cpu_to_be_16(~0); @@ -1717,8 +1717,8 @@ nfp_flow_merge_gre_key(struct nfp_app_fw_flower *app_fw_flower, mask = item->mask ? item->mask : proc->mask_default; tun_key = is_mask ? 
*mask : *spec; - if (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & - NFP_FLOWER_LAYER2_TUN_IPV6) { + if ((rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & + NFP_FLOWER_LAYER2_TUN_IPV6) != 0) { tun6 = (struct nfp_flower_ipv6_gre_tun *)*mbuf_off; tun6->tun_key = tun_key; tun6->tun_flags = rte_cpu_to_be_16(NFP_FL_GRE_FLAG_KEY); @@ -1733,8 +1733,8 @@ nfp_flow_merge_gre_key(struct nfp_app_fw_flower *app_fw_flower, } gre_key_end: - if (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & - NFP_FLOWER_LAYER2_TUN_IPV6) + if ((rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & + NFP_FLOWER_LAYER2_TUN_IPV6) != 0) *mbuf_off += sizeof(struct nfp_flower_ipv6_gre_tun); else *mbuf_off += sizeof(struct nfp_flower_ipv4_gre_tun); @@ -2079,7 +2079,7 @@ nfp_flow_compile_items(struct nfp_flower_representor *representor, sizeof(struct nfp_flower_in_port); meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data; - if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) { + if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) != 0) { mbuf_off_exact += sizeof(struct nfp_flower_ext_meta); mbuf_off_mask += sizeof(struct nfp_flower_ext_meta); } @@ -2522,7 +2522,7 @@ nfp_flower_add_tun_neigh_v4_decap(struct nfp_app_fw_flower *app_fw_flower, port = (struct nfp_flower_in_port *)(meta_tci + 1); eth = (struct nfp_flower_mac_mpls *)(port + 1); - if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) + if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0) ipv4 = (struct nfp_flower_ipv4 *)((char *)eth + sizeof(struct nfp_flower_mac_mpls) + sizeof(struct nfp_flower_tp_ports)); @@ -2649,7 +2649,7 @@ nfp_flower_add_tun_neigh_v6_decap(struct nfp_app_fw_flower *app_fw_flower, port = (struct nfp_flower_in_port *)(meta_tci + 1); eth = (struct nfp_flower_mac_mpls *)(port + 1); - if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) + if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0) ipv6 = (struct nfp_flower_ipv6 *)((char *)eth + sizeof(struct nfp_flower_mac_mpls) + sizeof(struct nfp_flower_tp_ports)); @@ -3145,7 +3145,7 @@ nfp_flow_action_tunnel_decap(struct nfp_flower_representor *repr, } meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data; - if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) + if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) return nfp_flower_add_tun_neigh_v4_decap(app_fw_flower, nfp_flow_meta, nfp_flow); else return nfp_flower_add_tun_neigh_v6_decap(app_fw_flower, nfp_flow_meta, nfp_flow); diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c index 66a5d6cb3a..4528417559 100644 --- a/drivers/net/nfp/nfp_rxtx.c +++ b/drivers/net/nfp/nfp_rxtx.c @@ -163,22 +163,22 @@ nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd, { struct nfp_net_hw *hw = rxq->hw; - if (!(hw->ctrl & NFP_NET_CFG_CTRL_RXCSUM)) + if ((hw->ctrl & NFP_NET_CFG_CTRL_RXCSUM) == 0) return; /* If IPv4 and IP checksum error, fail */ - if (unlikely((rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM) && - !(rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM_OK))) + if (unlikely((rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM) != 0 && + (rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM_OK) == 0)) mb->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD; else mb->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD; /* If neither UDP nor TCP return */ - if (!(rxd->rxd.flags & PCIE_DESC_RX_TCP_CSUM) && - !(rxd->rxd.flags & PCIE_DESC_RX_UDP_CSUM)) + if ((rxd->rxd.flags & PCIE_DESC_RX_TCP_CSUM) == 0 && + (rxd->rxd.flags & PCIE_DESC_RX_UDP_CSUM) == 0) return; - if (likely(rxd->rxd.flags & 
PCIE_DESC_RX_L4_CSUM_OK)) + if (likely(rxd->rxd.flags & PCIE_DESC_RX_L4_CSUM_OK) != 0) mb->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD; else mb->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD; @@ -232,7 +232,7 @@ nfp_net_rx_freelist_setup(struct rte_eth_dev *dev) int i; for (i = 0; i < dev->data->nb_rx_queues; i++) { - if (nfp_net_rx_fill_freelist(dev->data->rx_queues[i]) < 0) + if (nfp_net_rx_fill_freelist(dev->data->rx_queues[i]) != 0) return -1; } return 0; @@ -387,7 +387,7 @@ nfp_net_parse_meta_vlan(const struct nfp_meta_parsed *meta, * to do anything. */ if ((hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN_V2) != 0) { - if (meta->vlan_layer >= 1 && meta->vlan[0].offload != 0) { + if (meta->vlan_layer > 0 && meta->vlan[0].offload != 0) { mb->vlan_tci = rte_cpu_to_le_32(meta->vlan[0].tci); mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED; } @@ -771,7 +771,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) } /* Filling the received mbuf with packet info */ - if (hw->rx_offset) + if (hw->rx_offset != 0) mb->data_off = RTE_PKTMBUF_HEADROOM + hw->rx_offset; else mb->data_off = RTE_PKTMBUF_HEADROOM + @@ -846,7 +846,7 @@ nfp_net_rx_queue_release_mbufs(struct nfp_net_rxq *rxq) return; for (i = 0; i < rxq->rx_count; i++) { - if (rxq->rxbufs[i].mbuf) { + if (rxq->rxbufs[i].mbuf != NULL) { rte_pktmbuf_free_seg(rxq->rxbufs[i].mbuf); rxq->rxbufs[i].mbuf = NULL; } @@ -858,7 +858,7 @@ nfp_net_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx) { struct nfp_net_rxq *rxq = dev->data->rx_queues[queue_idx]; - if (rxq) { + if (rxq != NULL) { nfp_net_rx_queue_release_mbufs(rxq); rte_eth_dma_zone_free(dev, "rx_ring", queue_idx); rte_free(rxq->rxbufs); @@ -906,7 +906,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, * Free memory prior to re-allocation if needed. 
This is the case after * calling nfp_net_stop */ - if (dev->data->rx_queues[queue_idx]) { + if (dev->data->rx_queues[queue_idx] != NULL) { nfp_net_rx_queue_release(dev, queue_idx); dev->data->rx_queues[queue_idx] = NULL; } @@ -1037,7 +1037,7 @@ nfp_net_tx_queue_release_mbufs(struct nfp_net_txq *txq) return; for (i = 0; i < txq->tx_count; i++) { - if (txq->txbufs[i].mbuf) { + if (txq->txbufs[i].mbuf != NULL) { rte_pktmbuf_free_seg(txq->txbufs[i].mbuf); txq->txbufs[i].mbuf = NULL; } @@ -1049,7 +1049,7 @@ nfp_net_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx) { struct nfp_net_txq *txq = dev->data->tx_queues[queue_idx]; - if (txq) { + if (txq != NULL) { nfp_net_tx_queue_release_mbufs(txq); rte_eth_dma_zone_free(dev, "tx_ring", queue_idx); rte_free(txq->txbufs);
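For reference, the rule applied throughout the patch above is that boolean context is never left implicit: pointers are compared against NULL, flag tests against 0, and function return values are checked with "!= 0" rather than "< 0" where only success or failure matters. Below is a minimal standalone sketch of the converted style; the names (example_check, EXAMPLE_FLAG) are made up for illustration and are not part of the driver.

#include <stddef.h>
#include <stdint.h>

#define EXAMPLE_FLAG 0x1    /* hypothetical capability bit */

static int
example_check(const void *ptr, uint32_t cap)
{
	/*
	 * Instead of the implicit forms "if (ptr)" and
	 * "if (cap & EXAMPLE_FLAG)", the comparisons are spelled out.
	 */
	if (ptr != NULL && (cap & EXAMPLE_FLAG) != 0)
		return 1;

	return 0;
}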
From patchwork Sat Oct 7 02:33:30 2023
X-Patchwork-Submitter: Chaoyong He
X-Patchwork-Id: 132372
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Chaoyong He
To: dev@dpdk.org
Cc: oss-drivers@corigine.com, Chaoyong He, Long Wu, Peng Zhang
Subject: [PATCH 02/11] net/nfp: unify the indent coding style
Date: Sat, 7 Oct 2023 10:33:30 +0800
Message-Id: <20231007023339.1546659-3-chaoyong.he@corigine.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20231007023339.1546659-1-chaoyong.he@corigine.com>
References: <20231007023339.1546659-1-chaoyong.he@corigine.com>

Each parameter of a function should occupy its own line, indented by two TAB characters. Any statement that spans multiple lines should likewise be indented by two TAB characters.
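As a rough sketch of the layout described above (the names example_configure, example_reconfig, EXAMPLE_CTRL_ENABLE and EXAMPLE_UPDATE_GEN are invented for illustration and do not come from the driver): parameters after the first each sit on their own line indented by two tabs, and continuation lines of a wrapped statement use the same two-tab indent.

#include <stdint.h>

#define EXAMPLE_CTRL_ENABLE 0x1    /* hypothetical control bit */
#define EXAMPLE_UPDATE_GEN  0x2    /* hypothetical update bit */

static int
example_reconfig(void *dev,
		uint32_t new_ctrl,
		uint32_t update)
{
	/* Each parameter after the first occupies one line, two tabs in. */
	(void)dev;
	(void)new_ctrl;
	(void)update;
	return 0;
}

static int
example_configure(void *dev,
		uint32_t new_ctrl,
		uint32_t update)
{
	/* A condition wrapped across lines also indents two tabs. */
	if ((new_ctrl & EXAMPLE_CTRL_ENABLE) != 0 &&
			(update & EXAMPLE_UPDATE_GEN) != 0)
		return example_reconfig(dev, new_ctrl,
				update);

	return 0;
}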
Signed-off-by: Chaoyong He Reviewed-by: Long Wu Reviewed-by: Peng Zhang --- drivers/net/nfp/flower/nfp_flower.c | 3 +- drivers/net/nfp/flower/nfp_flower_ctrl.c | 7 +- .../net/nfp/flower/nfp_flower_representor.c | 2 +- drivers/net/nfp/nfdk/nfp_nfdk.h | 2 +- drivers/net/nfp/nfdk/nfp_nfdk_dp.c | 4 +- drivers/net/nfp/nfp_common.c | 250 +++++++++--------- drivers/net/nfp/nfp_common.h | 81 ++++-- drivers/net/nfp/nfp_cpp_bridge.c | 56 ++-- drivers/net/nfp/nfp_ethdev.c | 82 +++--- drivers/net/nfp/nfp_ethdev_vf.c | 66 +++-- drivers/net/nfp/nfp_flow.c | 36 +-- drivers/net/nfp/nfp_rxtx.c | 86 +++--- drivers/net/nfp/nfp_rxtx.h | 10 +- 13 files changed, 357 insertions(+), 328 deletions(-) diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c index 3ddaf0f28d..59717fa6b1 100644 --- a/drivers/net/nfp/flower/nfp_flower.c +++ b/drivers/net/nfp/flower/nfp_flower.c @@ -330,7 +330,8 @@ nfp_flower_pf_xmit_pkts(void *tx_queue, } static int -nfp_flower_init_vnic_common(struct nfp_net_hw *hw, const char *vnic_type) +nfp_flower_init_vnic_common(struct nfp_net_hw *hw, + const char *vnic_type) { int err; uint32_t start_q; diff --git a/drivers/net/nfp/flower/nfp_flower_ctrl.c b/drivers/net/nfp/flower/nfp_flower_ctrl.c index b564e7cd73..4967cc2375 100644 --- a/drivers/net/nfp/flower/nfp_flower_ctrl.c +++ b/drivers/net/nfp/flower/nfp_flower_ctrl.c @@ -64,9 +64,8 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue, */ new_mb = rte_pktmbuf_alloc(rxq->mem_pool); if (unlikely(new_mb == NULL)) { - PMD_RX_LOG(ERR, - "RX mbuf alloc failed port_id=%u queue_id=%hu", - rxq->port_id, rxq->qidx); + PMD_RX_LOG(ERR, "RX mbuf alloc failed port_id=%u queue_id=%hu", + rxq->port_id, rxq->qidx); nfp_net_mbuf_alloc_failed(rxq); break; } @@ -141,7 +140,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue, rte_wmb(); if (nb_hold >= rxq->rx_free_thresh) { PMD_RX_LOG(DEBUG, "port=%hu queue=%hu nb_hold=%hu avail=%hu", - rxq->port_id, rxq->qidx, nb_hold, avail); + rxq->port_id, rxq->qidx, nb_hold, avail); nfp_qcp_ptr_add(rxq->qcp_fl, NFP_QCP_WRITE_PTR, nb_hold); nb_hold = 0; } diff --git a/drivers/net/nfp/flower/nfp_flower_representor.c b/drivers/net/nfp/flower/nfp_flower_representor.c index 55ca3e6db0..01c2c5a517 100644 --- a/drivers/net/nfp/flower/nfp_flower_representor.c +++ b/drivers/net/nfp/flower/nfp_flower_representor.c @@ -826,7 +826,7 @@ nfp_flower_repr_alloc(struct nfp_app_fw_flower *app_fw_flower) snprintf(flower_repr.name, sizeof(flower_repr.name), "%s_repr_vf%d", pci_name, i); - /* This will also allocate private memory for the device*/ + /* This will also allocate private memory for the device*/ ret = rte_eth_dev_create(eth_dev->device, flower_repr.name, sizeof(struct nfp_flower_representor), NULL, NULL, nfp_flower_repr_init, &flower_repr); diff --git a/drivers/net/nfp/nfdk/nfp_nfdk.h b/drivers/net/nfp/nfdk/nfp_nfdk.h index 75ecb361ee..99675b6bd7 100644 --- a/drivers/net/nfp/nfdk/nfp_nfdk.h +++ b/drivers/net/nfp/nfdk/nfp_nfdk.h @@ -143,7 +143,7 @@ nfp_net_nfdk_free_tx_desc(struct nfp_net_txq *txq) free_desc = txq->rd_p - txq->wr_p; return (free_desc > NFDK_TX_DESC_STOP_CNT) ? 
- (free_desc - NFDK_TX_DESC_STOP_CNT) : 0; + (free_desc - NFDK_TX_DESC_STOP_CNT) : 0; } /* diff --git a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c index d4bd5edb0a..2426ffb261 100644 --- a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c +++ b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c @@ -101,9 +101,7 @@ static inline uint16_t nfp_net_nfdk_headlen_to_segs(uint16_t headlen) { /* First descriptor fits less data, so adjust for that */ - return DIV_ROUND_UP(headlen + - NFDK_TX_MAX_DATA_PER_DESC - - NFDK_TX_MAX_DATA_PER_HEAD, + return DIV_ROUND_UP(headlen + NFDK_TX_MAX_DATA_PER_DESC - NFDK_TX_MAX_DATA_PER_HEAD, NFDK_TX_MAX_DATA_PER_DESC); } diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c index 36752583dd..9719a9212b 100644 --- a/drivers/net/nfp/nfp_common.c +++ b/drivers/net/nfp/nfp_common.c @@ -172,7 +172,8 @@ nfp_net_link_speed_rte2nfp(uint16_t speed) } static void -nfp_net_notify_port_speed(struct nfp_net_hw *hw, struct rte_eth_link *link) +nfp_net_notify_port_speed(struct nfp_net_hw *hw, + struct rte_eth_link *link) { /** * Read the link status from NFP_NET_CFG_STS. If the link is down @@ -188,21 +189,22 @@ nfp_net_notify_port_speed(struct nfp_net_hw *hw, struct rte_eth_link *link) * NFP_NET_CFG_STS_NSP_LINK_RATE. */ nn_cfg_writew(hw, NFP_NET_CFG_STS_NSP_LINK_RATE, - nfp_net_link_speed_rte2nfp(link->link_speed)); + nfp_net_link_speed_rte2nfp(link->link_speed)); } /* The length of firmware version string */ #define FW_VER_LEN 32 static int -__nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t update) +__nfp_net_reconfig(struct nfp_net_hw *hw, + uint32_t update) { int cnt; uint32_t new; struct timespec wait; PMD_DRV_LOG(DEBUG, "Writing to the configuration queue (%p)...", - hw->qcp_cfg); + hw->qcp_cfg); if (hw->qcp_cfg == NULL) { PMD_INIT_LOG(ERR, "Bad configuration queue pointer"); @@ -227,7 +229,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t update) } if (cnt >= NFP_NET_POLL_TIMEOUT) { PMD_INIT_LOG(ERR, "Reconfig timeout for 0x%08x after" - " %dms", update, cnt); + " %dms", update, cnt); return -EIO; } nanosleep(&wait, 0); /* waiting for a 1ms */ @@ -254,7 +256,9 @@ __nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t update) * - (EIO) if I/O err and fail to reconfigure the device. */ int -nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t ctrl, uint32_t update) +nfp_net_reconfig(struct nfp_net_hw *hw, + uint32_t ctrl, + uint32_t update) { int ret; @@ -296,7 +300,9 @@ nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t ctrl, uint32_t update) * - (EIO) if I/O err and fail to reconfigure the device. 
*/ int -nfp_net_ext_reconfig(struct nfp_net_hw *hw, uint32_t ctrl_ext, uint32_t update) +nfp_net_ext_reconfig(struct nfp_net_hw *hw, + uint32_t ctrl_ext, + uint32_t update) { int ret; @@ -401,7 +407,7 @@ nfp_net_configure(struct rte_eth_dev *dev) /* Checking RX mode */ if ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) != 0 && - (hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) == 0) { + (hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) == 0) { PMD_INIT_LOG(INFO, "RSS not supported"); return -EINVAL; } @@ -409,7 +415,7 @@ nfp_net_configure(struct rte_eth_dev *dev) /* Checking MTU set */ if (rxmode->mtu > NFP_FRAME_SIZE_MAX) { PMD_INIT_LOG(ERR, "MTU (%u) larger than NFP_FRAME_SIZE_MAX (%u) not supported", - rxmode->mtu, NFP_FRAME_SIZE_MAX); + rxmode->mtu, NFP_FRAME_SIZE_MAX); return -ERANGE; } @@ -446,7 +452,8 @@ nfp_net_log_device_information(const struct nfp_net_hw *hw) } static inline void -nfp_net_enbable_rxvlan_cap(struct nfp_net_hw *hw, uint32_t *ctrl) +nfp_net_enbable_rxvlan_cap(struct nfp_net_hw *hw, + uint32_t *ctrl) { if ((hw->cap & NFP_NET_CFG_CTRL_RXVLAN_V2) != 0) *ctrl |= NFP_NET_CFG_CTRL_RXVLAN_V2; @@ -490,8 +497,9 @@ nfp_net_disable_queues(struct rte_eth_dev *dev) nn_cfg_writeq(hw, NFP_NET_CFG_RXRS_ENABLE, 0); new_ctrl = hw->ctrl & ~NFP_NET_CFG_CTRL_ENABLE; - update = NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING | - NFP_NET_CFG_UPDATE_MSIX; + update = NFP_NET_CFG_UPDATE_GEN | + NFP_NET_CFG_UPDATE_RING | + NFP_NET_CFG_UPDATE_MSIX; if ((hw->cap & NFP_NET_CFG_CTRL_RINGCFG) != 0) new_ctrl &= ~NFP_NET_CFG_CTRL_RINGCFG; @@ -517,7 +525,8 @@ nfp_net_cfg_queue_setup(struct nfp_net_hw *hw) } void -nfp_net_write_mac(struct nfp_net_hw *hw, uint8_t *mac) +nfp_net_write_mac(struct nfp_net_hw *hw, + uint8_t *mac) { uint32_t mac0 = *(uint32_t *)mac; uint16_t mac1; @@ -527,20 +536,21 @@ nfp_net_write_mac(struct nfp_net_hw *hw, uint8_t *mac) mac += 4; mac1 = *(uint16_t *)mac; nn_writew(rte_cpu_to_be_16(mac1), - hw->ctrl_bar + NFP_NET_CFG_MACADDR + 6); + hw->ctrl_bar + NFP_NET_CFG_MACADDR + 6); } int -nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr) +nfp_net_set_mac_addr(struct rte_eth_dev *dev, + struct rte_ether_addr *mac_addr) { struct nfp_net_hw *hw; uint32_t update, ctrl; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 && - (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) == 0) { + (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) == 0) { PMD_INIT_LOG(INFO, "MAC address unable to change when" - " port enabled"); + " port enabled"); return -EBUSY; } @@ -551,7 +561,7 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr) update = NFP_NET_CFG_UPDATE_MACADDR; ctrl = hw->ctrl; if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 && - (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0) + (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0) ctrl |= NFP_NET_CFG_CTRL_LIVE_ADDR; if (nfp_net_reconfig(hw, ctrl, update) != 0) { PMD_INIT_LOG(INFO, "MAC address update failed"); @@ -562,15 +572,15 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr) int nfp_configure_rx_interrupt(struct rte_eth_dev *dev, - struct rte_intr_handle *intr_handle) + struct rte_intr_handle *intr_handle) { struct nfp_net_hw *hw; int i; if (rte_intr_vec_list_alloc(intr_handle, "intr_vec", - dev->data->nb_rx_queues) != 0) { + dev->data->nb_rx_queues) != 0) { PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues" - " intr_vec", dev->data->nb_rx_queues); + " intr_vec", dev->data->nb_rx_queues); return -ENOMEM; } @@ -590,12 +600,10 @@ 
nfp_configure_rx_interrupt(struct rte_eth_dev *dev, * efd interrupts */ nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(i), i + 1); - if (rte_intr_vec_list_index_set(intr_handle, i, - i + 1) != 0) + if (rte_intr_vec_list_index_set(intr_handle, i, i + 1) != 0) return -1; PMD_INIT_LOG(DEBUG, "intr_vec[%d]= %d", i, - rte_intr_vec_list_index_get(intr_handle, - i)); + rte_intr_vec_list_index_get(intr_handle, i)); } } @@ -651,13 +659,13 @@ nfp_check_offloads(struct rte_eth_dev *dev) /* TX checksum offload */ if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0 || - (txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0 || - (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0) + (txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0 || + (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0) ctrl |= NFP_NET_CFG_CTRL_TXCSUM; /* LSO offload */ if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) != 0 || - (txmode->offloads & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) != 0) { + (txmode->offloads & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) != 0) { if ((hw->cap & NFP_NET_CFG_CTRL_LSO) != 0) ctrl |= NFP_NET_CFG_CTRL_LSO; else @@ -751,7 +759,8 @@ nfp_net_promisc_disable(struct rte_eth_dev *dev) * status. */ int -nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete) +nfp_net_link_update(struct rte_eth_dev *dev, + __rte_unused int wait_to_complete) { int ret; uint32_t i; @@ -820,7 +829,8 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete) } int -nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) +nfp_net_stats_get(struct rte_eth_dev *dev, + struct rte_eth_stats *stats) { int i; struct nfp_net_hw *hw; @@ -838,16 +848,16 @@ nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) break; nfp_dev_stats.q_ipackets[i] = - nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i)); + nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i)); nfp_dev_stats.q_ipackets[i] -= - hw->eth_stats_base.q_ipackets[i]; + hw->eth_stats_base.q_ipackets[i]; nfp_dev_stats.q_ibytes[i] = - nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8); + nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8); nfp_dev_stats.q_ibytes[i] -= - hw->eth_stats_base.q_ibytes[i]; + hw->eth_stats_base.q_ibytes[i]; } /* reading per TX ring stats */ @@ -856,46 +866,42 @@ nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) break; nfp_dev_stats.q_opackets[i] = - nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i)); + nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i)); - nfp_dev_stats.q_opackets[i] -= - hw->eth_stats_base.q_opackets[i]; + nfp_dev_stats.q_opackets[i] -= hw->eth_stats_base.q_opackets[i]; nfp_dev_stats.q_obytes[i] = - nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8); + nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8); - nfp_dev_stats.q_obytes[i] -= - hw->eth_stats_base.q_obytes[i]; + nfp_dev_stats.q_obytes[i] -= hw->eth_stats_base.q_obytes[i]; } - nfp_dev_stats.ipackets = - nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES); + nfp_dev_stats.ipackets = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES); nfp_dev_stats.ipackets -= hw->eth_stats_base.ipackets; - nfp_dev_stats.ibytes = - nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS); + nfp_dev_stats.ibytes = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS); nfp_dev_stats.ibytes -= hw->eth_stats_base.ibytes; nfp_dev_stats.opackets = - nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES); + nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES); nfp_dev_stats.opackets -= hw->eth_stats_base.opackets; nfp_dev_stats.obytes = - nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS); + nn_cfg_readq(hw, 
NFP_NET_CFG_STATS_TX_OCTETS); nfp_dev_stats.obytes -= hw->eth_stats_base.obytes; /* reading general device stats */ nfp_dev_stats.ierrors = - nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS); + nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS); nfp_dev_stats.ierrors -= hw->eth_stats_base.ierrors; nfp_dev_stats.oerrors = - nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS); + nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS); nfp_dev_stats.oerrors -= hw->eth_stats_base.oerrors; @@ -903,7 +909,7 @@ nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) nfp_dev_stats.rx_nombuf = dev->data->rx_mbuf_alloc_failed; nfp_dev_stats.imissed = - nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS); + nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS); nfp_dev_stats.imissed -= hw->eth_stats_base.imissed; @@ -933,10 +939,10 @@ nfp_net_stats_reset(struct rte_eth_dev *dev) break; hw->eth_stats_base.q_ipackets[i] = - nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i)); + nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i)); hw->eth_stats_base.q_ibytes[i] = - nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8); + nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8); } /* reading per TX ring stats */ @@ -945,36 +951,36 @@ nfp_net_stats_reset(struct rte_eth_dev *dev) break; hw->eth_stats_base.q_opackets[i] = - nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i)); + nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i)); hw->eth_stats_base.q_obytes[i] = - nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8); + nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8); } hw->eth_stats_base.ipackets = - nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES); + nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES); hw->eth_stats_base.ibytes = - nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS); + nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS); hw->eth_stats_base.opackets = - nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES); + nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES); hw->eth_stats_base.obytes = - nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS); + nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS); /* reading general device stats */ hw->eth_stats_base.ierrors = - nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS); + nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS); hw->eth_stats_base.oerrors = - nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS); + nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS); /* RX ring mbuf allocation failures */ dev->data->rx_mbuf_alloc_failed = 0; hw->eth_stats_base.imissed = - nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS); + nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS); return 0; } @@ -1237,16 +1243,16 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) if ((hw->cap & NFP_NET_CFG_CTRL_RXCSUM) != 0) dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | - RTE_ETH_RX_OFFLOAD_UDP_CKSUM | - RTE_ETH_RX_OFFLOAD_TCP_CKSUM; + RTE_ETH_RX_OFFLOAD_UDP_CKSUM | + RTE_ETH_RX_OFFLOAD_TCP_CKSUM; if ((hw->cap & (NFP_NET_CFG_CTRL_TXVLAN | NFP_NET_CFG_CTRL_TXVLAN_V2)) != 0) dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT; if ((hw->cap & NFP_NET_CFG_CTRL_TXCSUM) != 0) dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | - RTE_ETH_TX_OFFLOAD_UDP_CKSUM | - RTE_ETH_TX_OFFLOAD_TCP_CKSUM; + RTE_ETH_TX_OFFLOAD_UDP_CKSUM | + RTE_ETH_TX_OFFLOAD_TCP_CKSUM; if ((hw->cap & NFP_NET_CFG_CTRL_LSO_ANY) != 0) { dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO; @@ -1301,21 +1307,24 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH; dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 
| - RTE_ETH_RSS_NONFRAG_IPV4_TCP | - RTE_ETH_RSS_NONFRAG_IPV4_UDP | - RTE_ETH_RSS_NONFRAG_IPV4_SCTP | - RTE_ETH_RSS_IPV6 | - RTE_ETH_RSS_NONFRAG_IPV6_TCP | - RTE_ETH_RSS_NONFRAG_IPV6_UDP | - RTE_ETH_RSS_NONFRAG_IPV6_SCTP; + RTE_ETH_RSS_NONFRAG_IPV4_TCP | + RTE_ETH_RSS_NONFRAG_IPV4_UDP | + RTE_ETH_RSS_NONFRAG_IPV4_SCTP | + RTE_ETH_RSS_IPV6 | + RTE_ETH_RSS_NONFRAG_IPV6_TCP | + RTE_ETH_RSS_NONFRAG_IPV6_UDP | + RTE_ETH_RSS_NONFRAG_IPV6_SCTP; dev_info->reta_size = NFP_NET_CFG_RSS_ITBL_SZ; dev_info->hash_key_size = NFP_NET_CFG_RSS_KEY_SZ; } - dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G | - RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G | - RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G; + dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | + RTE_ETH_LINK_SPEED_10G | + RTE_ETH_LINK_SPEED_25G | + RTE_ETH_LINK_SPEED_40G | + RTE_ETH_LINK_SPEED_50G | + RTE_ETH_LINK_SPEED_100G; return 0; } @@ -1384,7 +1393,8 @@ nfp_net_supported_ptypes_get(struct rte_eth_dev *dev) } int -nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) +nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, + uint16_t queue_id) { struct rte_pci_device *pci_dev; struct nfp_net_hw *hw; @@ -1393,19 +1403,19 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); pci_dev = RTE_ETH_DEV_TO_PCI(dev); - if (rte_intr_type_get(pci_dev->intr_handle) != - RTE_INTR_HANDLE_UIO) + if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO) base = 1; /* Make sure all updates are written before un-masking */ rte_wmb(); nn_cfg_writeb(hw, NFP_NET_CFG_ICR(base + queue_id), - NFP_NET_CFG_ICR_UNMASKED); + NFP_NET_CFG_ICR_UNMASKED); return 0; } int -nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id) +nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, + uint16_t queue_id) { struct rte_pci_device *pci_dev; struct nfp_net_hw *hw; @@ -1414,8 +1424,7 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id) hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); pci_dev = RTE_ETH_DEV_TO_PCI(dev); - if (rte_intr_type_get(pci_dev->intr_handle) != - RTE_INTR_HANDLE_UIO) + if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO) base = 1; /* Make sure all updates are written before un-masking */ @@ -1433,16 +1442,15 @@ nfp_net_dev_link_status_print(struct rte_eth_dev *dev) rte_eth_linkstatus_get(dev, &link); if (link.link_status != 0) PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s", - dev->data->port_id, link.link_speed, - link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX - ? "full-duplex" : "half-duplex"); + dev->data->port_id, link.link_speed, + link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ? 
+ "full-duplex" : "half-duplex"); else - PMD_DRV_LOG(INFO, " Port %d: Link Down", - dev->data->port_id); + PMD_DRV_LOG(INFO, " Port %d: Link Down", dev->data->port_id); PMD_DRV_LOG(INFO, "PCI Address: " PCI_PRI_FMT, - pci_dev->addr.domain, pci_dev->addr.bus, - pci_dev->addr.devid, pci_dev->addr.function); + pci_dev->addr.domain, pci_dev->addr.bus, + pci_dev->addr.devid, pci_dev->addr.function); } /* Interrupt configuration and handling */ @@ -1470,7 +1478,7 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev) /* Make sure all updates are written before un-masking */ rte_wmb(); nn_cfg_writeb(hw, NFP_NET_CFG_ICR(NFP_NET_IRQ_LSC_IDX), - NFP_NET_CFG_ICR_UNMASKED); + NFP_NET_CFG_ICR_UNMASKED); } } @@ -1523,8 +1531,8 @@ nfp_net_dev_interrupt_handler(void *param) } if (rte_eal_alarm_set(timeout * 1000, - nfp_net_dev_interrupt_delayed_handler, - (void *)dev) != 0) { + nfp_net_dev_interrupt_delayed_handler, + (void *)dev) != 0) { PMD_INIT_LOG(ERR, "Error setting alarm"); /* Unmasking */ nfp_net_irq_unmask(dev); @@ -1532,7 +1540,8 @@ nfp_net_dev_interrupt_handler(void *param) } int -nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) +nfp_net_dev_mtu_set(struct rte_eth_dev *dev, + uint16_t mtu) { struct nfp_net_hw *hw; @@ -1541,14 +1550,14 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) /* mtu setting is forbidden if port is started */ if (dev->data->dev_started) { PMD_DRV_LOG(ERR, "port %d must be stopped before configuration", - dev->data->port_id); + dev->data->port_id); return -EBUSY; } /* MTU larger than current mbufsize not supported */ if (mtu > hw->flbufsz) { PMD_DRV_LOG(ERR, "MTU (%u) larger than current mbufsize (%u) not supported", - mtu, hw->flbufsz); + mtu, hw->flbufsz); return -ERANGE; } @@ -1561,7 +1570,8 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) } int -nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask) +nfp_net_vlan_offload_set(struct rte_eth_dev *dev, + int mask) { uint32_t new_ctrl, update; struct nfp_net_hw *hw; @@ -1606,8 +1616,8 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask) static int nfp_net_rss_reta_write(struct rte_eth_dev *dev, - struct rte_eth_rss_reta_entry64 *reta_conf, - uint16_t reta_size) + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) { uint32_t reta, mask; int i, j; @@ -1617,8 +1627,8 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev, if (reta_size != NFP_NET_CFG_RSS_ITBL_SZ) { PMD_DRV_LOG(ERR, "The size of hash lookup table configured " - "(%d) doesn't match the number hardware can supported " - "(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ); + "(%d) doesn't match the number hardware can supported " + "(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ); return -EINVAL; } @@ -1648,8 +1658,7 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev, reta &= ~(0xFF << (8 * j)); reta |= reta_conf[idx].reta[shift + j] << (8 * j); } - nn_cfg_writel(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) + shift, - reta); + nn_cfg_writel(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) + shift, reta); } return 0; } @@ -1657,8 +1666,8 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev, /* Update Redirection Table(RETA) of Receive Side Scaling of Ethernet device */ int nfp_net_reta_update(struct rte_eth_dev *dev, - struct rte_eth_rss_reta_entry64 *reta_conf, - uint16_t reta_size) + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) { struct nfp_net_hw *hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); @@ -1683,8 +1692,8 @@ nfp_net_reta_update(struct rte_eth_dev *dev, /* Query Redirection Table(RETA) of Receive Side Scaling 
of Ethernet device. */ int nfp_net_reta_query(struct rte_eth_dev *dev, - struct rte_eth_rss_reta_entry64 *reta_conf, - uint16_t reta_size) + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) { uint8_t i, j, mask; int idx, shift; @@ -1698,8 +1707,8 @@ nfp_net_reta_query(struct rte_eth_dev *dev, if (reta_size != NFP_NET_CFG_RSS_ITBL_SZ) { PMD_DRV_LOG(ERR, "The size of hash lookup table configured " - "(%d) doesn't match the number hardware can supported " - "(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ); + "(%d) doesn't match the number hardware can supported " + "(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ); return -EINVAL; } @@ -1716,13 +1725,12 @@ nfp_net_reta_query(struct rte_eth_dev *dev, if (mask == 0) continue; - reta = nn_cfg_readl(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) + - shift); + reta = nn_cfg_readl(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) + shift); for (j = 0; j < 4; j++) { if ((mask & (0x1 << j)) == 0) continue; reta_conf[idx].reta[shift + j] = - (uint8_t)((reta >> (8 * j)) & 0xF); + (uint8_t)((reta >> (8 * j)) & 0xF); } } return 0; @@ -1730,7 +1738,7 @@ nfp_net_reta_query(struct rte_eth_dev *dev, static int nfp_net_rss_hash_write(struct rte_eth_dev *dev, - struct rte_eth_rss_conf *rss_conf) + struct rte_eth_rss_conf *rss_conf) { struct nfp_net_hw *hw; uint64_t rss_hf; @@ -1786,7 +1794,7 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev, int nfp_net_rss_hash_update(struct rte_eth_dev *dev, - struct rte_eth_rss_conf *rss_conf) + struct rte_eth_rss_conf *rss_conf) { uint32_t update; uint64_t rss_hf; @@ -1822,7 +1830,7 @@ nfp_net_rss_hash_update(struct rte_eth_dev *dev, int nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev, - struct rte_eth_rss_conf *rss_conf) + struct rte_eth_rss_conf *rss_conf) { uint64_t rss_hf; uint32_t cfg_rss_ctrl; @@ -1888,7 +1896,7 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev) int i, j, ret; PMD_DRV_LOG(INFO, "setting default RSS conf for %u queues", - rx_queues); + rx_queues); nfp_reta_conf[0].mask = ~0x0; nfp_reta_conf[1].mask = ~0x0; @@ -1984,7 +1992,7 @@ nfp_net_set_vxlan_port(struct nfp_net_hw *hw, for (i = 0; i < NFP_NET_N_VXLAN_PORTS; i += 2) { nn_cfg_writel(hw, NFP_NET_CFG_VXLAN_PORT + i * sizeof(port), - (hw->vxlan_ports[i + 1] << 16) | hw->vxlan_ports[i]); + (hw->vxlan_ports[i + 1] << 16) | hw->vxlan_ports[i]); } rte_spinlock_lock(&hw->reconfig_lock); @@ -2004,7 +2012,8 @@ nfp_net_set_vxlan_port(struct nfp_net_hw *hw, * than 40 bits */ int -nfp_net_check_dma_mask(struct nfp_net_hw *hw, char *name) +nfp_net_check_dma_mask(struct nfp_net_hw *hw, + char *name) { if (hw->ver.extend == NFP_NET_CFG_VERSION_DP_NFD3 && rte_mem_check_dma_mask(40) != 0) { @@ -2052,7 +2061,8 @@ nfp_net_cfg_read_version(struct nfp_net_hw *hw) } static void -nfp_net_get_nsp_info(struct nfp_net_hw *hw, char *nsp_version) +nfp_net_get_nsp_info(struct nfp_net_hw *hw, + char *nsp_version) { struct nfp_nsp *nsp; @@ -2068,7 +2078,8 @@ nfp_net_get_nsp_info(struct nfp_net_hw *hw, char *nsp_version) } static void -nfp_net_get_mip_name(struct nfp_net_hw *hw, char *mip_name) +nfp_net_get_mip_name(struct nfp_net_hw *hw, + char *mip_name) { struct nfp_mip *mip; @@ -2082,7 +2093,8 @@ nfp_net_get_mip_name(struct nfp_net_hw *hw, char *mip_name) } static void -nfp_net_get_app_name(struct nfp_net_hw *hw, char *app_name) +nfp_net_get_app_name(struct nfp_net_hw *hw, + char *app_name) { switch (hw->pf_dev->app_fw_id) { case NFP_APP_FW_CORE_NIC: diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h index bc3a948231..e4fd394868 100644 --- 
a/drivers/net/nfp/nfp_common.h +++ b/drivers/net/nfp/nfp_common.h @@ -180,37 +180,47 @@ struct nfp_net_adapter { struct nfp_net_hw hw; }; -static inline uint8_t nn_readb(volatile const void *addr) +static inline uint8_t +nn_readb(volatile const void *addr) { return rte_read8(addr); } -static inline void nn_writeb(uint8_t val, volatile void *addr) +static inline void +nn_writeb(uint8_t val, + volatile void *addr) { rte_write8(val, addr); } -static inline uint32_t nn_readl(volatile const void *addr) +static inline uint32_t +nn_readl(volatile const void *addr) { return rte_read32(addr); } -static inline void nn_writel(uint32_t val, volatile void *addr) +static inline void +nn_writel(uint32_t val, + volatile void *addr) { rte_write32(val, addr); } -static inline uint16_t nn_readw(volatile const void *addr) +static inline uint16_t +nn_readw(volatile const void *addr) { return rte_read16(addr); } -static inline void nn_writew(uint16_t val, volatile void *addr) +static inline void +nn_writew(uint16_t val, + volatile void *addr) { rte_write16(val, addr); } -static inline uint64_t nn_readq(volatile void *addr) +static inline uint64_t +nn_readq(volatile void *addr) { const volatile uint32_t *p = addr; uint32_t low, high; @@ -221,7 +231,9 @@ static inline uint64_t nn_readq(volatile void *addr) return low + ((uint64_t)high << 32); } -static inline void nn_writeq(uint64_t val, volatile void *addr) +static inline void +nn_writeq(uint64_t val, + volatile void *addr) { nn_writel(val >> 32, (volatile char *)addr + 4); nn_writel(val, addr); @@ -232,49 +244,61 @@ static inline void nn_writeq(uint64_t val, volatile void *addr) * Performs any endian conversion necessary. */ static inline uint8_t -nn_cfg_readb(struct nfp_net_hw *hw, int off) +nn_cfg_readb(struct nfp_net_hw *hw, + int off) { return nn_readb(hw->ctrl_bar + off); } static inline void -nn_cfg_writeb(struct nfp_net_hw *hw, int off, uint8_t val) +nn_cfg_writeb(struct nfp_net_hw *hw, + int off, + uint8_t val) { nn_writeb(val, hw->ctrl_bar + off); } static inline uint16_t -nn_cfg_readw(struct nfp_net_hw *hw, int off) +nn_cfg_readw(struct nfp_net_hw *hw, + int off) { return rte_le_to_cpu_16(nn_readw(hw->ctrl_bar + off)); } static inline void -nn_cfg_writew(struct nfp_net_hw *hw, int off, uint16_t val) +nn_cfg_writew(struct nfp_net_hw *hw, + int off, + uint16_t val) { nn_writew(rte_cpu_to_le_16(val), hw->ctrl_bar + off); } static inline uint32_t -nn_cfg_readl(struct nfp_net_hw *hw, int off) +nn_cfg_readl(struct nfp_net_hw *hw, + int off) { return rte_le_to_cpu_32(nn_readl(hw->ctrl_bar + off)); } static inline void -nn_cfg_writel(struct nfp_net_hw *hw, int off, uint32_t val) +nn_cfg_writel(struct nfp_net_hw *hw, + int off, + uint32_t val) { nn_writel(rte_cpu_to_le_32(val), hw->ctrl_bar + off); } static inline uint64_t -nn_cfg_readq(struct nfp_net_hw *hw, int off) +nn_cfg_readq(struct nfp_net_hw *hw, + int off) { return rte_le_to_cpu_64(nn_readq(hw->ctrl_bar + off)); } static inline void -nn_cfg_writeq(struct nfp_net_hw *hw, int off, uint64_t val) +nn_cfg_writeq(struct nfp_net_hw *hw, + int off, + uint64_t val) { nn_writeq(rte_cpu_to_le_64(val), hw->ctrl_bar + off); } @@ -286,7 +310,9 @@ nn_cfg_writeq(struct nfp_net_hw *hw, int off, uint64_t val) * @val: Value to add to the queue pointer */ static inline void -nfp_qcp_ptr_add(uint8_t *q, enum nfp_qcp_ptr ptr, uint32_t val) +nfp_qcp_ptr_add(uint8_t *q, + enum nfp_qcp_ptr ptr, + uint32_t val) { uint32_t off; @@ -304,7 +330,8 @@ nfp_qcp_ptr_add(uint8_t *q, enum nfp_qcp_ptr ptr, uint32_t val) * @ptr: Read or 
Write pointer */ static inline uint32_t -nfp_qcp_read(uint8_t *q, enum nfp_qcp_ptr ptr) +nfp_qcp_read(uint8_t *q, + enum nfp_qcp_ptr ptr) { uint32_t off; uint32_t val; @@ -343,12 +370,12 @@ void nfp_net_params_setup(struct nfp_net_hw *hw); void nfp_net_write_mac(struct nfp_net_hw *hw, uint8_t *mac); int nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr); int nfp_configure_rx_interrupt(struct rte_eth_dev *dev, - struct rte_intr_handle *intr_handle); + struct rte_intr_handle *intr_handle); uint32_t nfp_check_offloads(struct rte_eth_dev *dev); int nfp_net_promisc_enable(struct rte_eth_dev *dev); int nfp_net_promisc_disable(struct rte_eth_dev *dev); int nfp_net_link_update(struct rte_eth_dev *dev, - __rte_unused int wait_to_complete); + __rte_unused int wait_to_complete); int nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats); int nfp_net_stats_reset(struct rte_eth_dev *dev); uint32_t nfp_net_xstats_size(const struct rte_eth_dev *dev); @@ -368,7 +395,7 @@ int nfp_net_xstats_get_by_id(struct rte_eth_dev *dev, unsigned int n); int nfp_net_xstats_reset(struct rte_eth_dev *dev); int nfp_net_infos_get(struct rte_eth_dev *dev, - struct rte_eth_dev_info *dev_info); + struct rte_eth_dev_info *dev_info); const uint32_t *nfp_net_supported_ptypes_get(struct rte_eth_dev *dev); int nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id); int nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id); @@ -379,15 +406,15 @@ void nfp_net_dev_interrupt_delayed_handler(void *param); int nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu); int nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask); int nfp_net_reta_update(struct rte_eth_dev *dev, - struct rte_eth_rss_reta_entry64 *reta_conf, - uint16_t reta_size); + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); int nfp_net_reta_query(struct rte_eth_dev *dev, - struct rte_eth_rss_reta_entry64 *reta_conf, - uint16_t reta_size); + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); int nfp_net_rss_hash_update(struct rte_eth_dev *dev, - struct rte_eth_rss_conf *rss_conf); + struct rte_eth_rss_conf *rss_conf); int nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev, - struct rte_eth_rss_conf *rss_conf); + struct rte_eth_rss_conf *rss_conf); int nfp_net_rss_config_default(struct rte_eth_dev *dev); void nfp_net_stop_rx_queue(struct rte_eth_dev *dev); void nfp_net_close_rx_queue(struct rte_eth_dev *dev); diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c index 34764a8a32..85a8bf9235 100644 --- a/drivers/net/nfp/nfp_cpp_bridge.c +++ b/drivers/net/nfp/nfp_cpp_bridge.c @@ -116,7 +116,8 @@ nfp_enable_cpp_service(struct nfp_pf_dev *pf_dev) * of CPP interface handler configured by the PMD setup. 
*/ static int -nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp) +nfp_cpp_bridge_serve_write(int sockfd, + struct nfp_cpp *cpp) { struct nfp_cpp_area *area; off_t offset, nfp_offset; @@ -126,7 +127,7 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp) int err = 0; PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__, - sizeof(off_t), sizeof(size_t)); + sizeof(off_t), sizeof(size_t)); /* Reading the count param */ err = recv(sockfd, &count, sizeof(off_t), 0); @@ -145,21 +146,21 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp) nfp_offset = offset & ((1ull << 40) - 1); PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd\n", __func__, count, - offset); + offset); PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd\n", __func__, - cpp_id, nfp_offset); + cpp_id, nfp_offset); /* Adjust length if not aligned */ if (((nfp_offset + (off_t)count - 1) & ~(NFP_CPP_MEMIO_BOUNDARY - 1)) != - (nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) { + (nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) { curlen = NFP_CPP_MEMIO_BOUNDARY - - (nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1)); + (nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1)); } while (count > 0) { /* configure a CPP PCIe2CPP BAR for mapping the CPP target */ area = nfp_cpp_area_alloc_with_name(cpp, cpp_id, "nfp.cdev", - nfp_offset, curlen); + nfp_offset, curlen); if (area == NULL) { PMD_CPP_LOG(ERR, "area alloc fail"); return -EIO; @@ -179,12 +180,11 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp) len = sizeof(tmpbuf); PMD_CPP_LOG(DEBUG, "%s: Receive %u of %zu\n", __func__, - len, count); + len, count); err = recv(sockfd, tmpbuf, len, MSG_WAITALL); if (err != (int)len) { - PMD_CPP_LOG(ERR, - "error when receiving, %d of %zu", - err, count); + PMD_CPP_LOG(ERR, "error when receiving, %d of %zu", + err, count); nfp_cpp_area_release(area); nfp_cpp_area_free(area); return -EIO; @@ -204,7 +204,7 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp) count -= pos; curlen = (count > NFP_CPP_MEMIO_BOUNDARY) ? - NFP_CPP_MEMIO_BOUNDARY : count; + NFP_CPP_MEMIO_BOUNDARY : count; } return 0; @@ -217,7 +217,8 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp) * data is sent to the requester using the same socket. 
*/ static int -nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp) +nfp_cpp_bridge_serve_read(int sockfd, + struct nfp_cpp *cpp) { struct nfp_cpp_area *area; off_t offset, nfp_offset; @@ -227,7 +228,7 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp) int err = 0; PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__, - sizeof(off_t), sizeof(size_t)); + sizeof(off_t), sizeof(size_t)); /* Reading the count param */ err = recv(sockfd, &count, sizeof(off_t), 0); @@ -246,20 +247,20 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp) nfp_offset = offset & ((1ull << 40) - 1); PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd\n", __func__, count, - offset); + offset); PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd\n", __func__, - cpp_id, nfp_offset); + cpp_id, nfp_offset); /* Adjust length if not aligned */ if (((nfp_offset + (off_t)count - 1) & ~(NFP_CPP_MEMIO_BOUNDARY - 1)) != - (nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) { + (nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) { curlen = NFP_CPP_MEMIO_BOUNDARY - - (nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1)); + (nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1)); } while (count > 0) { area = nfp_cpp_area_alloc_with_name(cpp, cpp_id, "nfp.cdev", - nfp_offset, curlen); + nfp_offset, curlen); if (area == NULL) { PMD_CPP_LOG(ERR, "area alloc failed"); return -EIO; @@ -285,13 +286,12 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp) return -EIO; } PMD_CPP_LOG(DEBUG, "%s: sending %u of %zu\n", __func__, - len, count); + len, count); err = send(sockfd, tmpbuf, len, 0); if (err != (int)len) { - PMD_CPP_LOG(ERR, - "error when sending: %d of %zu", - err, count); + PMD_CPP_LOG(ERR, "error when sending: %d of %zu", + err, count); nfp_cpp_area_release(area); nfp_cpp_area_free(area); return -EIO; @@ -304,7 +304,7 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp) count -= pos; curlen = (count > NFP_CPP_MEMIO_BOUNDARY) ? - NFP_CPP_MEMIO_BOUNDARY : count; + NFP_CPP_MEMIO_BOUNDARY : count; } return 0; } @@ -316,7 +316,8 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp) * does not require any CPP access at all. */ static int -nfp_cpp_bridge_serve_ioctl(int sockfd, struct nfp_cpp *cpp) +nfp_cpp_bridge_serve_ioctl(int sockfd, + struct nfp_cpp *cpp) { uint32_t cmd, ident_size, tmp; int err; @@ -395,7 +396,7 @@ nfp_cpp_bridge_service_func(void *args) strcpy(address.sa_data, "/tmp/nfp_cpp"); ret = bind(sockfd, (const struct sockaddr *)&address, - sizeof(struct sockaddr)); + sizeof(struct sockaddr)); if (ret < 0) { PMD_CPP_LOG(ERR, "bind error (%d). 
Service failed", errno); close(sockfd); @@ -426,8 +427,7 @@ nfp_cpp_bridge_service_func(void *args) while (1) { ret = recv(datafd, &op, 4, 0); if (ret <= 0) { - PMD_CPP_LOG(DEBUG, "%s: socket close\n", - __func__); + PMD_CPP_LOG(DEBUG, "%s: socket close\n", __func__); break; } diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c index 12feec8eb4..65473d87e8 100644 --- a/drivers/net/nfp/nfp_ethdev.c +++ b/drivers/net/nfp/nfp_ethdev.c @@ -22,7 +22,8 @@ #include "nfp_logs.h" static int -nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic, int port) +nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic, + int port) { struct nfp_eth_table *nfp_eth_table; struct nfp_net_hw *hw = NULL; @@ -70,21 +71,20 @@ nfp_net_start(struct rte_eth_dev *dev) if (dev->data->dev_conf.intr_conf.rxq != 0) { if (app_fw_nic->multiport) { PMD_INIT_LOG(ERR, "PMD rx interrupt is not supported " - "with NFP multiport PF"); + "with NFP multiport PF"); return -EINVAL; } - if (rte_intr_type_get(intr_handle) == - RTE_INTR_HANDLE_UIO) { + if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) { /* * Better not to share LSC with RX interrupts. * Unregistering LSC interrupt handler */ rte_intr_callback_unregister(pci_dev->intr_handle, - nfp_net_dev_interrupt_handler, (void *)dev); + nfp_net_dev_interrupt_handler, (void *)dev); if (dev->data->nb_rx_queues > 1) { PMD_INIT_LOG(ERR, "PMD rx interrupt only " - "supports 1 queue with UIO"); + "supports 1 queue with UIO"); return -EIO; } } @@ -162,8 +162,7 @@ nfp_net_start(struct rte_eth_dev *dev) /* Configure the physical port up */ nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 1); else - nfp_eth_set_configured(dev->process_private, - hw->nfp_idx, 1); + nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 1); hw->ctrl = new_ctrl; @@ -209,8 +208,7 @@ nfp_net_stop(struct rte_eth_dev *dev) /* Configure the physical port down */ nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 0); else - nfp_eth_set_configured(dev->process_private, - hw->nfp_idx, 0); + nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 0); return 0; } @@ -229,8 +227,7 @@ nfp_net_set_link_up(struct rte_eth_dev *dev) /* Configure the physical port down */ return nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 1); else - return nfp_eth_set_configured(dev->process_private, - hw->nfp_idx, 1); + return nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 1); } /* Set the link down. */ @@ -247,8 +244,7 @@ nfp_net_set_link_down(struct rte_eth_dev *dev) /* Configure the physical port down */ return nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 0); else - return nfp_eth_set_configured(dev->process_private, - hw->nfp_idx, 0); + return nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 0); } /* Reset and stop device. The device can not be restarted. */ @@ -287,8 +283,7 @@ nfp_net_close(struct rte_eth_dev *dev) nfp_ipsec_uninit(dev); /* Cancel possible impending LSC work here before releasing the port*/ - rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler, - (void *)dev); + rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler, (void *)dev); /* Only free PF resources after all physical ports have been closed */ /* Mark this port as unused and free device priv resources*/ @@ -525,8 +520,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev) hw->ctrl_bar = pci_dev->mem_resource[0].addr; if (hw->ctrl_bar == NULL) { - PMD_DRV_LOG(ERR, - "hw->ctrl_bar is NULL. BAR0 not configured"); + PMD_DRV_LOG(ERR, "hw->ctrl_bar is NULL. 
BAR0 not configured"); return -ENODEV; } @@ -592,7 +586,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev) eth_dev->data->dev_private = hw; PMD_INIT_LOG(DEBUG, "ctrl_bar: %p, tx_bar: %p, rx_bar: %p", - hw->ctrl_bar, hw->tx_bar, hw->rx_bar); + hw->ctrl_bar, hw->tx_bar, hw->rx_bar); nfp_net_cfg_queue_setup(hw); hw->mtu = RTE_ETHER_MTU; @@ -607,8 +601,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev) rte_spinlock_init(&hw->reconfig_lock); /* Allocating memory for mac addr */ - eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", - RTE_ETHER_ADDR_LEN, 0); + eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", RTE_ETHER_ADDR_LEN, 0); if (eth_dev->data->mac_addrs == NULL) { PMD_INIT_LOG(ERR, "Failed to space for MAC address"); return -ENOMEM; @@ -634,10 +627,10 @@ nfp_net_init(struct rte_eth_dev *eth_dev) eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS; PMD_INIT_LOG(INFO, "port %d VendorID=0x%x DeviceID=0x%x " - "mac=" RTE_ETHER_ADDR_PRT_FMT, - eth_dev->data->port_id, pci_dev->id.vendor_id, - pci_dev->id.device_id, - RTE_ETHER_ADDR_BYTES(&hw->mac_addr)); + "mac=" RTE_ETHER_ADDR_PRT_FMT, + eth_dev->data->port_id, pci_dev->id.vendor_id, + pci_dev->id.device_id, + RTE_ETHER_ADDR_BYTES(&hw->mac_addr)); /* Registering LSC interrupt handler */ rte_intr_callback_register(pci_dev->intr_handle, @@ -653,7 +646,9 @@ nfp_net_init(struct rte_eth_dev *eth_dev) #define DEFAULT_FW_PATH "/lib/firmware/netronome" static int -nfp_fw_upload(struct rte_pci_device *dev, struct nfp_nsp *nsp, char *card) +nfp_fw_upload(struct rte_pci_device *dev, + struct nfp_nsp *nsp, + char *card) { struct nfp_cpp *cpp = nfp_nsp_cpp(nsp); void *fw_buf; @@ -675,11 +670,10 @@ nfp_fw_upload(struct rte_pci_device *dev, struct nfp_nsp *nsp, char *card) /* First try to find a firmware image specific for this device */ snprintf(serial, sizeof(serial), "serial-%02x-%02x-%02x-%02x-%02x-%02x-%02x-%02x", - cpp_serial[0], cpp_serial[1], cpp_serial[2], cpp_serial[3], - cpp_serial[4], cpp_serial[5], interface >> 8, interface & 0xff); + cpp_serial[0], cpp_serial[1], cpp_serial[2], cpp_serial[3], + cpp_serial[4], cpp_serial[5], interface >> 8, interface & 0xff); - snprintf(fw_name, sizeof(fw_name), "%s/%s.nffw", DEFAULT_FW_PATH, - serial); + snprintf(fw_name, sizeof(fw_name), "%s/%s.nffw", DEFAULT_FW_PATH, serial); PMD_DRV_LOG(DEBUG, "Trying with fw file: %s", fw_name); if (rte_firmware_read(fw_name, &fw_buf, &fsize) == 0) @@ -703,7 +697,7 @@ nfp_fw_upload(struct rte_pci_device *dev, struct nfp_nsp *nsp, char *card) load_fw: PMD_DRV_LOG(INFO, "Firmware file found at %s with size: %zu", - fw_name, fsize); + fw_name, fsize); PMD_DRV_LOG(INFO, "Uploading the firmware ..."); nfp_nsp_load_fw(nsp, fw_buf, fsize); PMD_DRV_LOG(INFO, "Done"); @@ -737,7 +731,7 @@ nfp_fw_setup(struct rte_pci_device *dev, if (nfp_eth_table->count == 0 || nfp_eth_table->count > 8) { PMD_DRV_LOG(ERR, "NFP ethernet table reports wrong ports: %u", - nfp_eth_table->count); + nfp_eth_table->count); return -EIO; } @@ -829,7 +823,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev, numa_node = rte_socket_id(); for (i = 0; i < app_fw_nic->total_phyports; i++) { snprintf(port_name, sizeof(port_name), "%s_port%d", - pf_dev->pci_dev->device.name, i); + pf_dev->pci_dev->device.name, i); /* Allocate a eth_dev for this phyport */ eth_dev = rte_eth_dev_allocate(port_name); @@ -839,8 +833,8 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev, } /* Allocate memory for this phyport */ - eth_dev->data->dev_private = - rte_zmalloc_socket(port_name, sizeof(struct nfp_net_hw), + 
eth_dev->data->dev_private = rte_zmalloc_socket(port_name, + sizeof(struct nfp_net_hw), RTE_CACHE_LINE_SIZE, numa_node); if (eth_dev->data->dev_private == NULL) { ret = -ENOMEM; @@ -961,8 +955,7 @@ nfp_pf_init(struct rte_pci_device *pci_dev) /* Now the symbol table should be there */ sym_tbl = nfp_rtsym_table_read(cpp); if (sym_tbl == NULL) { - PMD_INIT_LOG(ERR, "Something is wrong with the firmware" - " symbol table"); + PMD_INIT_LOG(ERR, "Something is wrong with the firmware symbol table"); ret = -EIO; goto eth_table_cleanup; } @@ -1144,8 +1137,7 @@ nfp_pf_secondary_init(struct rte_pci_device *pci_dev) */ sym_tbl = nfp_rtsym_table_read(cpp); if (sym_tbl == NULL) { - PMD_INIT_LOG(ERR, "Something is wrong with the firmware" - " symbol table"); + PMD_INIT_LOG(ERR, "Something is wrong with the firmware symbol table"); return -EIO; } @@ -1198,27 +1190,27 @@ nfp_pf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, static const struct rte_pci_id pci_id_nfp_pf_net_map[] = { { RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME, - PCI_DEVICE_ID_NFP3800_PF_NIC) + PCI_DEVICE_ID_NFP3800_PF_NIC) }, { RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME, - PCI_DEVICE_ID_NFP4000_PF_NIC) + PCI_DEVICE_ID_NFP4000_PF_NIC) }, { RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME, - PCI_DEVICE_ID_NFP6000_PF_NIC) + PCI_DEVICE_ID_NFP6000_PF_NIC) }, { RTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE, - PCI_DEVICE_ID_NFP3800_PF_NIC) + PCI_DEVICE_ID_NFP3800_PF_NIC) }, { RTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE, - PCI_DEVICE_ID_NFP4000_PF_NIC) + PCI_DEVICE_ID_NFP4000_PF_NIC) }, { RTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE, - PCI_DEVICE_ID_NFP6000_PF_NIC) + PCI_DEVICE_ID_NFP6000_PF_NIC) }, { .vendor_id = 0, diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c index c8d6b0461b..ac6a10685d 100644 --- a/drivers/net/nfp/nfp_ethdev_vf.c +++ b/drivers/net/nfp/nfp_ethdev_vf.c @@ -50,18 +50,17 @@ nfp_netvf_start(struct rte_eth_dev *dev) /* check and configure queue intr-vector mapping */ if (dev->data->dev_conf.intr_conf.rxq != 0) { - if (rte_intr_type_get(intr_handle) == - RTE_INTR_HANDLE_UIO) { + if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) { /* * Better not to share LSC with RX interrupts. * Unregistering LSC interrupt handler */ rte_intr_callback_unregister(pci_dev->intr_handle, - nfp_net_dev_interrupt_handler, (void *)dev); + nfp_net_dev_interrupt_handler, (void *)dev); if (dev->data->nb_rx_queues > 1) { PMD_INIT_LOG(ERR, "PMD rx interrupt only " - "supports 1 queue with UIO"); + "supports 1 queue with UIO"); return -EIO; } } @@ -190,12 +189,10 @@ nfp_netvf_close(struct rte_eth_dev *dev) /* unregister callback func from eal lib */ rte_intr_callback_unregister(pci_dev->intr_handle, - nfp_net_dev_interrupt_handler, - (void *)dev); + nfp_net_dev_interrupt_handler, (void *)dev); /* Cancel possible impending LSC work here before releasing the port*/ - rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler, - (void *)dev); + rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler, (void *)dev); /* * The ixgbe PMD disables the pcie master on the @@ -282,8 +279,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) hw->ctrl_bar = pci_dev->mem_resource[0].addr; if (hw->ctrl_bar == NULL) { - PMD_DRV_LOG(ERR, - "hw->ctrl_bar is NULL. BAR0 not configured"); + PMD_DRV_LOG(ERR, "hw->ctrl_bar is NULL. 
BAR0 not configured"); return -ENODEV; } @@ -301,8 +297,8 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) rte_eth_copy_pci_info(eth_dev, pci_dev); - hw->eth_xstats_base = rte_malloc("rte_eth_xstat", sizeof(struct rte_eth_xstat) * - nfp_net_xstats_size(eth_dev), 0); + hw->eth_xstats_base = rte_malloc("rte_eth_xstat", + sizeof(struct rte_eth_xstat) * nfp_net_xstats_size(eth_dev), 0); if (hw->eth_xstats_base == NULL) { PMD_INIT_LOG(ERR, "no memory for xstats base values on device %s!", pci_dev->device.name); @@ -318,13 +314,11 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) PMD_INIT_LOG(DEBUG, "tx_bar_off: 0x%" PRIx64 "", tx_bar_off); PMD_INIT_LOG(DEBUG, "rx_bar_off: 0x%" PRIx64 "", rx_bar_off); - hw->tx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + - tx_bar_off; - hw->rx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + - rx_bar_off; + hw->tx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + tx_bar_off; + hw->rx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + rx_bar_off; PMD_INIT_LOG(DEBUG, "ctrl_bar: %p, tx_bar: %p, rx_bar: %p", - hw->ctrl_bar, hw->tx_bar, hw->rx_bar); + hw->ctrl_bar, hw->tx_bar, hw->rx_bar); nfp_net_cfg_queue_setup(hw); hw->mtu = RTE_ETHER_MTU; @@ -339,8 +333,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) rte_spinlock_init(&hw->reconfig_lock); /* Allocating memory for mac addr */ - eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", - RTE_ETHER_ADDR_LEN, 0); + eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", RTE_ETHER_ADDR_LEN, 0); if (eth_dev->data->mac_addrs == NULL) { PMD_INIT_LOG(ERR, "Failed to space for MAC address"); err = -ENOMEM; @@ -351,8 +344,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) tmp_ether_addr = &hw->mac_addr; if (rte_is_valid_assigned_ether_addr(tmp_ether_addr) == 0) { - PMD_INIT_LOG(INFO, "Using random mac address for port %d", - port); + PMD_INIT_LOG(INFO, "Using random mac address for port %d", port); /* Using random mac addresses for VFs */ rte_eth_random_addr(&hw->mac_addr.addr_bytes[0]); nfp_net_write_mac(hw, &hw->mac_addr.addr_bytes[0]); @@ -367,16 +359,15 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS; PMD_INIT_LOG(INFO, "port %d VendorID=0x%x DeviceID=0x%x " - "mac=" RTE_ETHER_ADDR_PRT_FMT, - eth_dev->data->port_id, pci_dev->id.vendor_id, - pci_dev->id.device_id, - RTE_ETHER_ADDR_BYTES(&hw->mac_addr)); + "mac=" RTE_ETHER_ADDR_PRT_FMT, + eth_dev->data->port_id, pci_dev->id.vendor_id, + pci_dev->id.device_id, + RTE_ETHER_ADDR_BYTES(&hw->mac_addr)); if (rte_eal_process_type() == RTE_PROC_PRIMARY) { /* Registering LSC interrupt handler */ rte_intr_callback_register(pci_dev->intr_handle, - nfp_net_dev_interrupt_handler, - (void *)eth_dev); + nfp_net_dev_interrupt_handler, (void *)eth_dev); /* Telling the firmware about the LSC interrupt entry */ nn_cfg_writeb(hw, NFP_NET_CFG_LSC, NFP_NET_IRQ_LSC_IDX); /* Recording current stats counters values */ @@ -394,39 +385,42 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) static const struct rte_pci_id pci_id_nfp_vf_net_map[] = { { RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME, - PCI_DEVICE_ID_NFP3800_VF_NIC) + PCI_DEVICE_ID_NFP3800_VF_NIC) }, { RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME, - PCI_DEVICE_ID_NFP6000_VF_NIC) + PCI_DEVICE_ID_NFP6000_VF_NIC) }, { RTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE, - PCI_DEVICE_ID_NFP3800_VF_NIC) + PCI_DEVICE_ID_NFP3800_VF_NIC) }, { RTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE, - PCI_DEVICE_ID_NFP6000_VF_NIC) + PCI_DEVICE_ID_NFP6000_VF_NIC) }, { .vendor_id = 0, }, }; -static int nfp_vf_pci_uninit(struct rte_eth_dev *eth_dev) 
+static int +nfp_vf_pci_uninit(struct rte_eth_dev *eth_dev) { /* VF cleanup, just free private port data */ return nfp_netvf_close(eth_dev); } -static int eth_nfp_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, - struct rte_pci_device *pci_dev) +static int +eth_nfp_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, + struct rte_pci_device *pci_dev) { return rte_eth_dev_pci_generic_probe(pci_dev, - sizeof(struct nfp_net_adapter), nfp_netvf_init); + sizeof(struct nfp_net_adapter), nfp_netvf_init); } -static int eth_nfp_vf_pci_remove(struct rte_pci_device *pci_dev) +static int +eth_nfp_vf_pci_remove(struct rte_pci_device *pci_dev) { return rte_eth_dev_pci_generic_remove(pci_dev, nfp_vf_pci_uninit); } diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c index bdbc92180d..156b9599db 100644 --- a/drivers/net/nfp/nfp_flow.c +++ b/drivers/net/nfp/nfp_flow.c @@ -166,7 +166,8 @@ nfp_flow_dev_to_priv(struct rte_eth_dev *dev) } static int -nfp_mask_id_alloc(struct nfp_flow_priv *priv, uint8_t *mask_id) +nfp_mask_id_alloc(struct nfp_flow_priv *priv, + uint8_t *mask_id) { uint8_t temp_id; uint8_t freed_id; @@ -198,7 +199,8 @@ nfp_mask_id_alloc(struct nfp_flow_priv *priv, uint8_t *mask_id) } static int -nfp_mask_id_free(struct nfp_flow_priv *priv, uint8_t mask_id) +nfp_mask_id_free(struct nfp_flow_priv *priv, + uint8_t mask_id) { struct circ_buf *ring; @@ -671,7 +673,8 @@ nfp_tun_check_ip_off_del(struct nfp_flower_representor *repr, } static void -nfp_flower_compile_meta_tci(char *mbuf_off, struct nfp_fl_key_ls *key_layer) +nfp_flower_compile_meta_tci(char *mbuf_off, + struct nfp_fl_key_ls *key_layer) { struct nfp_flower_meta_tci *tci_meta; @@ -682,7 +685,8 @@ nfp_flower_compile_meta_tci(char *mbuf_off, struct nfp_fl_key_ls *key_layer) } static void -nfp_flower_update_meta_tci(char *exact, uint8_t mask_id) +nfp_flower_update_meta_tci(char *exact, + uint8_t mask_id) { struct nfp_flower_meta_tci *meta_tci; @@ -691,7 +695,8 @@ nfp_flower_update_meta_tci(char *exact, uint8_t mask_id) } static void -nfp_flower_compile_ext_meta(char *mbuf_off, struct nfp_fl_key_ls *key_layer) +nfp_flower_compile_ext_meta(char *mbuf_off, + struct nfp_fl_key_ls *key_layer) { struct nfp_flower_ext_meta *ext_meta; @@ -1400,14 +1405,14 @@ nfp_flow_merge_tcp(__rte_unused struct nfp_app_fw_flower *app_fw_flower, meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data; if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) { ipv4 = (struct nfp_flower_ipv4 *) - (*mbuf_off - sizeof(struct nfp_flower_ipv4)); + (*mbuf_off - sizeof(struct nfp_flower_ipv4)); ports = (struct nfp_flower_tp_ports *) - ((char *)ipv4 - sizeof(struct nfp_flower_tp_ports)); + ((char *)ipv4 - sizeof(struct nfp_flower_tp_ports)); } else { /* IPv6 */ ipv6 = (struct nfp_flower_ipv6 *) - (*mbuf_off - sizeof(struct nfp_flower_ipv6)); + (*mbuf_off - sizeof(struct nfp_flower_ipv6)); ports = (struct nfp_flower_tp_ports *) - ((char *)ipv6 - sizeof(struct nfp_flower_tp_ports)); + ((char *)ipv6 - sizeof(struct nfp_flower_tp_ports)); } mask = item->mask ? 
item->mask : proc->mask_default; @@ -1478,10 +1483,10 @@ nfp_flow_merge_udp(__rte_unused struct nfp_app_fw_flower *app_fw_flower, meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data; if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) { ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv4) - - sizeof(struct nfp_flower_tp_ports); + sizeof(struct nfp_flower_tp_ports); } else {/* IPv6 */ ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv6) - - sizeof(struct nfp_flower_tp_ports); + sizeof(struct nfp_flower_tp_ports); } ports = (struct nfp_flower_tp_ports *)ports_off; @@ -1521,10 +1526,10 @@ nfp_flow_merge_sctp(__rte_unused struct nfp_app_fw_flower *app_fw_flower, meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data; if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) { ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv4) - - sizeof(struct nfp_flower_tp_ports); + sizeof(struct nfp_flower_tp_ports); } else { /* IPv6 */ ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv6) - - sizeof(struct nfp_flower_tp_ports); + sizeof(struct nfp_flower_tp_ports); } ports = (struct nfp_flower_tp_ports *)ports_off; @@ -1915,9 +1920,8 @@ nfp_flow_item_check(const struct rte_flow_item *item, return 0; } - mask = item->mask ? - (const uint8_t *)item->mask : - (const uint8_t *)proc->mask_default; + mask = item->mask ? (const uint8_t *)item->mask : + (const uint8_t *)proc->mask_default; /* * Single-pass check to make sure that: diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c index 4528417559..7885166753 100644 --- a/drivers/net/nfp/nfp_rxtx.c +++ b/drivers/net/nfp/nfp_rxtx.c @@ -158,8 +158,9 @@ struct nfp_ptype_parsed { /* set mbuf checksum flags based on RX descriptor flags */ void -nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd, - struct rte_mbuf *mb) +nfp_net_rx_cksum(struct nfp_net_rxq *rxq, + struct nfp_net_rx_desc *rxd, + struct rte_mbuf *mb) { struct nfp_net_hw *hw = rxq->hw; @@ -192,7 +193,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq) unsigned int i; PMD_RX_LOG(DEBUG, "Fill Rx Freelist for %u descriptors", - rxq->rx_count); + rxq->rx_count); for (i = 0; i < rxq->rx_count; i++) { struct nfp_net_rx_desc *rxd; @@ -218,8 +219,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq) rte_wmb(); /* Not advertising the whole ring as the firmware gets confused if so */ - PMD_RX_LOG(DEBUG, "Increment FL write pointer in %u", - rxq->rx_count - 1); + PMD_RX_LOG(DEBUG, "Increment FL write pointer in %u", rxq->rx_count - 1); nfp_qcp_ptr_add(rxq->qcp_fl, NFP_QCP_WRITE_PTR, rxq->rx_count - 1); @@ -521,7 +521,8 @@ nfp_net_parse_meta(struct nfp_net_rx_desc *rxds, * Mbuf to set the packet type. 
*/ static void -nfp_net_set_ptype(const struct nfp_ptype_parsed *nfp_ptype, struct rte_mbuf *mb) +nfp_net_set_ptype(const struct nfp_ptype_parsed *nfp_ptype, + struct rte_mbuf *mb) { uint32_t mbuf_ptype = RTE_PTYPE_L2_ETHER; uint8_t nfp_tunnel_ptype = nfp_ptype->tunnel_ptype; @@ -678,7 +679,9 @@ nfp_net_parse_ptype(struct nfp_net_rx_desc *rxds, */ uint16_t -nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) +nfp_net_recv_pkts(void *rx_queue, + struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) { struct nfp_net_rxq *rxq; struct nfp_net_rx_desc *rxds; @@ -728,8 +731,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) */ new_mb = rte_pktmbuf_alloc(rxq->mem_pool); if (unlikely(new_mb == NULL)) { - PMD_RX_LOG(DEBUG, - "RX mbuf alloc failed port_id=%u queue_id=%hu", + PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%hu", rxq->port_id, rxq->qidx); nfp_net_mbuf_alloc_failed(rxq); break; @@ -743,29 +745,28 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) rxb->mbuf = new_mb; PMD_RX_LOG(DEBUG, "Packet len: %u, mbuf_size: %u", - rxds->rxd.data_len, rxq->mbuf_size); + rxds->rxd.data_len, rxq->mbuf_size); /* Size of this segment */ mb->data_len = rxds->rxd.data_len - NFP_DESC_META_LEN(rxds); /* Size of the whole packet. We just support 1 segment */ mb->pkt_len = rxds->rxd.data_len - NFP_DESC_META_LEN(rxds); - if (unlikely((mb->data_len + hw->rx_offset) > - rxq->mbuf_size)) { + if (unlikely((mb->data_len + hw->rx_offset) > rxq->mbuf_size)) { /* * This should not happen and the user has the * responsibility of avoiding it. But we have * to give some info about the error */ PMD_RX_LOG(ERR, - "mbuf overflow likely due to the RX offset.\n" - "\t\tYour mbuf size should have extra space for" - " RX offset=%u bytes.\n" - "\t\tCurrently you just have %u bytes available" - " but the received packet is %u bytes long", - hw->rx_offset, - rxq->mbuf_size - hw->rx_offset, - mb->data_len); + "mbuf overflow likely due to the RX offset.\n" + "\t\tYour mbuf size should have extra space for" + " RX offset=%u bytes.\n" + "\t\tCurrently you just have %u bytes available" + " but the received packet is %u bytes long", + hw->rx_offset, + rxq->mbuf_size - hw->rx_offset, + mb->data_len); rte_pktmbuf_free(mb); break; } @@ -774,8 +775,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) if (hw->rx_offset != 0) mb->data_off = RTE_PKTMBUF_HEADROOM + hw->rx_offset; else - mb->data_off = RTE_PKTMBUF_HEADROOM + - NFP_DESC_META_LEN(rxds); + mb->data_off = RTE_PKTMBUF_HEADROOM + NFP_DESC_META_LEN(rxds); /* No scatter mode supported */ mb->nb_segs = 1; @@ -817,7 +817,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) return nb_hold; PMD_RX_LOG(DEBUG, "RX port_id=%hu queue_id=%hu, %hu packets received", - rxq->port_id, rxq->qidx, avail); + rxq->port_id, rxq->qidx, avail); nb_hold += rxq->nb_rx_hold; @@ -828,7 +828,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) rte_wmb(); if (nb_hold > rxq->rx_free_thresh) { PMD_RX_LOG(DEBUG, "port=%hu queue=%hu nb_hold=%hu avail=%hu", - rxq->port_id, rxq->qidx, nb_hold, avail); + rxq->port_id, rxq->qidx, nb_hold, avail); nfp_qcp_ptr_add(rxq->qcp_fl, NFP_QCP_WRITE_PTR, nb_hold); nb_hold = 0; } @@ -854,7 +854,8 @@ nfp_net_rx_queue_release_mbufs(struct nfp_net_rxq *rxq) } void -nfp_net_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx) +nfp_net_rx_queue_release(struct rte_eth_dev *dev, + uint16_t 
queue_idx) { struct nfp_net_rxq *rxq = dev->data->rx_queues[queue_idx]; @@ -876,10 +877,11 @@ nfp_net_reset_rx_queue(struct nfp_net_rxq *rxq) int nfp_net_rx_queue_setup(struct rte_eth_dev *dev, - uint16_t queue_idx, uint16_t nb_desc, - unsigned int socket_id, - const struct rte_eth_rxconf *rx_conf, - struct rte_mempool *mp) + uint16_t queue_idx, + uint16_t nb_desc, + unsigned int socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp) { uint16_t min_rx_desc; uint16_t max_rx_desc; @@ -897,7 +899,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, /* Validating number of descriptors */ rx_desc_sz = nb_desc * sizeof(struct nfp_net_rx_desc); if (rx_desc_sz % NFP_ALIGN_RING_DESC != 0 || - nb_desc > max_rx_desc || nb_desc < min_rx_desc) { + nb_desc > max_rx_desc || nb_desc < min_rx_desc) { PMD_DRV_LOG(ERR, "Wrong nb_desc value"); return -EINVAL; } @@ -913,7 +915,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, /* Allocating rx queue data structure */ rxq = rte_zmalloc_socket("ethdev RX queue", sizeof(struct nfp_net_rxq), - RTE_CACHE_LINE_SIZE, socket_id); + RTE_CACHE_LINE_SIZE, socket_id); if (rxq == NULL) return -ENOMEM; @@ -943,9 +945,8 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, * resizing in later calls to the queue setup function. */ tz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx, - sizeof(struct nfp_net_rx_desc) * - max_rx_desc, NFP_MEMZONE_ALIGN, - socket_id); + sizeof(struct nfp_net_rx_desc) * max_rx_desc, + NFP_MEMZONE_ALIGN, socket_id); if (tz == NULL) { PMD_DRV_LOG(ERR, "Error allocating rx dma"); @@ -960,8 +961,8 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, /* mbuf pointers array for referencing mbufs linked to RX descriptors */ rxq->rxbufs = rte_zmalloc_socket("rxq->rxbufs", - sizeof(*rxq->rxbufs) * nb_desc, - RTE_CACHE_LINE_SIZE, socket_id); + sizeof(*rxq->rxbufs) * nb_desc, RTE_CACHE_LINE_SIZE, + socket_id); if (rxq->rxbufs == NULL) { nfp_net_rx_queue_release(dev, queue_idx); dev->data->rx_queues[queue_idx] = NULL; @@ -969,7 +970,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, } PMD_RX_LOG(DEBUG, "rxbufs=%p hw_ring=%p dma_addr=0x%" PRIx64, - rxq->rxbufs, rxq->rxds, (unsigned long)rxq->dma); + rxq->rxbufs, rxq->rxds, (unsigned long)rxq->dma); nfp_net_reset_rx_queue(rxq); @@ -998,15 +999,15 @@ nfp_net_tx_free_bufs(struct nfp_net_txq *txq) int todo; PMD_TX_LOG(DEBUG, "queue %hu. 
Check for descriptor with a complete" - " status", txq->qidx); + " status", txq->qidx); /* Work out how many packets have been sent */ qcp_rd_p = nfp_qcp_read(txq->qcp_q, NFP_QCP_READ_PTR); if (qcp_rd_p == txq->rd_p) { PMD_TX_LOG(DEBUG, "queue %hu: It seems harrier is not sending " - "packets (%u, %u)", txq->qidx, - qcp_rd_p, txq->rd_p); + "packets (%u, %u)", txq->qidx, + qcp_rd_p, txq->rd_p); return 0; } @@ -1016,7 +1017,7 @@ nfp_net_tx_free_bufs(struct nfp_net_txq *txq) todo = qcp_rd_p + txq->tx_count - txq->rd_p; PMD_TX_LOG(DEBUG, "qcp_rd_p %u, txq->rd_p: %u, qcp->rd_p: %u", - qcp_rd_p, txq->rd_p, txq->rd_p); + qcp_rd_p, txq->rd_p, txq->rd_p); if (todo == 0) return todo; @@ -1045,7 +1046,8 @@ nfp_net_tx_queue_release_mbufs(struct nfp_net_txq *txq) } void -nfp_net_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx) +nfp_net_tx_queue_release(struct rte_eth_dev *dev, + uint16_t queue_idx) { struct nfp_net_txq *txq = dev->data->tx_queues[queue_idx]; diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h index 3c7138f7d6..9a30ebd89e 100644 --- a/drivers/net/nfp/nfp_rxtx.h +++ b/drivers/net/nfp/nfp_rxtx.h @@ -234,17 +234,17 @@ nfp_net_mbuf_alloc_failed(struct nfp_net_rxq *rxq) } void nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd, - struct rte_mbuf *mb); + struct rte_mbuf *mb); int nfp_net_rx_freelist_setup(struct rte_eth_dev *dev); uint32_t nfp_net_rx_queue_count(void *rx_queue); uint16_t nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, - uint16_t nb_pkts); + uint16_t nb_pkts); void nfp_net_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx); void nfp_net_reset_rx_queue(struct nfp_net_rxq *rxq); int nfp_net_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, - uint16_t nb_desc, unsigned int socket_id, - const struct rte_eth_rxconf *rx_conf, - struct rte_mempool *mp); + uint16_t nb_desc, unsigned int socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp); void nfp_net_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx); void nfp_net_reset_tx_queue(struct nfp_net_txq *txq); From patchwork Sat Oct 7 02:33:31 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chaoyong He X-Patchwork-Id: 132373 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8F301426D6; Sat, 7 Oct 2023 04:34:30 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id BA93C40A7D; Sat, 7 Oct 2023 04:34:08 +0200 (CEST) Received: from NAM10-DM6-obe.outbound.protection.outlook.com (mail-dm6nam10on2108.outbound.protection.outlook.com [40.107.93.108]) by mails.dpdk.org (Postfix) with ESMTP id 0AAA340A6C for ; Sat, 7 Oct 2023 04:34:06 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=NlgZ1MZ2SAR46GmdegP82T7+FB13zYcDjt0rh3ue1RMO8cRYmQ3ktI62TxhGO9CTlb/uc6Jbk/NUEwGTOqh8M5Ddr6AX24eFPbtlJE5KmhPD2AVAZr5+6rAawjHz5ioNH5xlSupKTZ9vDVNRCKCb2k8q+hQC9XEML0nAnK5OqllYQj7NaLQ/9wYR/P8ZBQ6MmZJZvQFg78NSQnQgJlJMfFTNAfp/nRwwcDETKbeSQgW3OWafMVlAZdobZ4BpIcqhIt0buA2i/B4bqboJSqYEwaIltqX+vs9DbNr+rIGnnA0MIAl5hfJf280rZANuU5gmcm4cpqQAE4sGPL48LI/AJQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; 
From: Chaoyong He
To: dev@dpdk.org
Cc: oss-drivers@corigine.com, Chaoyong He, Long Wu, Peng Zhang
Subject: [PATCH 03/11] net/nfp: unify the type of integer variable
Date: Sat, 7 Oct 2023 10:33:31 +0800
Message-Id: <20231007023339.1546659-4-chaoyong.he@corigine.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20231007023339.1546659-1-chaoyong.he@corigine.com>
References: <20231007023339.1546659-1-chaoyong.he@corigine.com>
MIME-Version: 1.0
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org

Unify the types of integer variables to the DPDK preferred style.
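As an illustration of the change (a standalone sketch, not code from this
patch; the struct and function names are invented), a loop counter that is
compared against a 16-bit descriptor count carries the matching fixed-width
type instead of a plain int:

    #include <stdint.h>

    struct example_rxq {
            uint16_t rx_count;      /* descriptor count, naturally 16-bit */
    };

    /* Before: a plain 'int' counter is compared against a uint16_t field. */
    static uint32_t
    count_used_old(const struct example_rxq *rxq)
    {
            int i;
            uint32_t used = 0;

            for (i = 0; i < rxq->rx_count; i++)
                    used++;

            return used;
    }

    /*
     * After: the counter uses the same fixed-width type as rx_count, so the
     * comparison involves no implicit sign or width conversion.
     */
    static uint32_t
    count_used_new(const struct example_rxq *rxq)
    {
            uint16_t i;
            uint32_t used = 0;

            for (i = 0; i < rxq->rx_count; i++)
                    used++;

            return used;
    }

The hunks below apply the same idea to loop counters, offsets and port
indexes across the driver.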
Signed-off-by: Chaoyong He Reviewed-by: Long Wu Reviewed-by: Peng Zhang --- drivers/net/nfp/flower/nfp_flower.c | 2 +- drivers/net/nfp/flower/nfp_flower_cmsg.c | 16 +++++----- drivers/net/nfp/nfd3/nfp_nfd3_dp.c | 6 ++-- drivers/net/nfp/nfp_common.c | 37 +++++++++++++----------- drivers/net/nfp/nfp_common.h | 16 +++++----- drivers/net/nfp/nfp_ethdev.c | 24 +++++++-------- drivers/net/nfp/nfp_ethdev_vf.c | 2 +- drivers/net/nfp/nfp_flow.c | 4 +-- drivers/net/nfp/nfp_rxtx.c | 12 ++++---- drivers/net/nfp/nfp_rxtx.h | 2 +- 10 files changed, 62 insertions(+), 59 deletions(-) diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c index 59717fa6b1..bd961043b2 100644 --- a/drivers/net/nfp/flower/nfp_flower.c +++ b/drivers/net/nfp/flower/nfp_flower.c @@ -26,7 +26,7 @@ nfp_pf_repr_enable_queues(struct rte_eth_dev *dev) { struct nfp_net_hw *hw; uint64_t enabled_queues = 0; - int i; + uint16_t i; struct nfp_flower_representor *repr; repr = dev->data->dev_private; diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.c b/drivers/net/nfp/flower/nfp_flower_cmsg.c index 6b9532f5b6..5d6912b079 100644 --- a/drivers/net/nfp/flower/nfp_flower_cmsg.c +++ b/drivers/net/nfp/flower/nfp_flower_cmsg.c @@ -64,10 +64,10 @@ nfp_flower_cmsg_mac_repr_init(struct rte_mbuf *mbuf, static void nfp_flower_cmsg_mac_repr_fill(struct rte_mbuf *m, - unsigned int idx, - unsigned int nbi, - unsigned int nbi_port, - unsigned int phys_port) + uint8_t idx, + uint32_t nbi, + uint32_t nbi_port, + uint32_t phys_port) { struct nfp_flower_cmsg_mac_repr *msg; @@ -81,11 +81,11 @@ nfp_flower_cmsg_mac_repr_fill(struct rte_mbuf *m, int nfp_flower_cmsg_mac_repr(struct nfp_app_fw_flower *app_fw_flower) { - int i; + uint8_t i; uint16_t cnt; - unsigned int nbi; - unsigned int nbi_port; - unsigned int phys_port; + uint32_t nbi; + uint32_t nbi_port; + uint32_t phys_port; struct rte_mbuf *mbuf; struct nfp_eth_table *nfp_eth_table; diff --git a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c index 64928254d8..5a84629ed7 100644 --- a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c +++ b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c @@ -227,9 +227,9 @@ nfp_net_nfd3_xmit_pkts_common(void *tx_queue, uint16_t nb_pkts, bool repr_flag) { - int i; - int pkt_size; - int dma_size; + uint16_t i; + uint32_t pkt_size; + uint16_t dma_size; uint8_t offset; uint64_t dma_addr; uint16_t free_descs; diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c index 9719a9212b..cb2c2afbd7 100644 --- a/drivers/net/nfp/nfp_common.c +++ b/drivers/net/nfp/nfp_common.c @@ -199,7 +199,7 @@ static int __nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t update) { - int cnt; + uint32_t cnt; uint32_t new; struct timespec wait; @@ -229,7 +229,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw, } if (cnt >= NFP_NET_POLL_TIMEOUT) { PMD_INIT_LOG(ERR, "Reconfig timeout for 0x%08x after" - " %dms", update, cnt); + " %ums", update, cnt); return -EIO; } nanosleep(&wait, 0); /* waiting for a 1ms */ @@ -466,7 +466,7 @@ nfp_net_enable_queues(struct rte_eth_dev *dev) { struct nfp_net_hw *hw; uint64_t enabled_queues = 0; - int i; + uint16_t i; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); @@ -575,7 +575,7 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev, struct rte_intr_handle *intr_handle) { struct nfp_net_hw *hw; - int i; + uint16_t i; if (rte_intr_vec_list_alloc(intr_handle, "intr_vec", dev->data->nb_rx_queues) != 0) { @@ -832,7 +832,7 @@ int nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats) { - int i; + uint16_t i; 
struct nfp_net_hw *hw; struct rte_eth_stats nfp_dev_stats; @@ -923,7 +923,7 @@ nfp_net_stats_get(struct rte_eth_dev *dev, int nfp_net_stats_reset(struct rte_eth_dev *dev) { - int i; + uint16_t i; struct nfp_net_hw *hw; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); @@ -1398,7 +1398,7 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, { struct rte_pci_device *pci_dev; struct nfp_net_hw *hw; - int base = 0; + uint16_t base = 0; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); pci_dev = RTE_ETH_DEV_TO_PCI(dev); @@ -1419,7 +1419,7 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, { struct rte_pci_device *pci_dev; struct nfp_net_hw *hw; - int base = 0; + uint16_t base = 0; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); pci_dev = RTE_ETH_DEV_TO_PCI(dev); @@ -1619,9 +1619,10 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size) { - uint32_t reta, mask; - int i, j; - int idx, shift; + uint8_t mask; + uint32_t reta; + uint16_t i, j; + uint16_t idx, shift; struct nfp_net_hw *hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); @@ -1695,8 +1696,9 @@ nfp_net_reta_query(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size) { - uint8_t i, j, mask; - int idx, shift; + uint16_t i, j; + uint8_t mask; + uint16_t idx, shift; uint32_t reta; struct nfp_net_hw *hw; @@ -1720,7 +1722,7 @@ nfp_net_reta_query(struct rte_eth_dev *dev, /* Handling 4 RSS entries per loop */ idx = i / RTE_ETH_RETA_GROUP_SIZE; shift = i % RTE_ETH_RETA_GROUP_SIZE; - mask = (uint8_t)((reta_conf[idx].mask >> shift) & 0xF); + mask = (reta_conf[idx].mask >> shift) & 0xF; if (mask == 0) continue; @@ -1744,7 +1746,7 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev, uint64_t rss_hf; uint32_t cfg_rss_ctrl = 0; uint8_t key; - int i; + uint8_t i; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); @@ -1835,7 +1837,7 @@ nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev, uint64_t rss_hf; uint32_t cfg_rss_ctrl; uint8_t key; - int i; + uint8_t i; struct nfp_net_hw *hw; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); @@ -1893,7 +1895,8 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev) struct rte_eth_rss_reta_entry64 nfp_reta_conf[2]; uint16_t rx_queues = dev->data->nb_rx_queues; uint16_t queue; - int i, j, ret; + uint8_t i, j; + int ret; PMD_DRV_LOG(INFO, "setting default RSS conf for %u queues", rx_queues); diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h index e4fd394868..71153ea25b 100644 --- a/drivers/net/nfp/nfp_common.h +++ b/drivers/net/nfp/nfp_common.h @@ -245,14 +245,14 @@ nn_writeq(uint64_t val, */ static inline uint8_t nn_cfg_readb(struct nfp_net_hw *hw, - int off) + uint32_t off) { return nn_readb(hw->ctrl_bar + off); } static inline void nn_cfg_writeb(struct nfp_net_hw *hw, - int off, + uint32_t off, uint8_t val) { nn_writeb(val, hw->ctrl_bar + off); @@ -260,14 +260,14 @@ nn_cfg_writeb(struct nfp_net_hw *hw, static inline uint16_t nn_cfg_readw(struct nfp_net_hw *hw, - int off) + uint32_t off) { return rte_le_to_cpu_16(nn_readw(hw->ctrl_bar + off)); } static inline void nn_cfg_writew(struct nfp_net_hw *hw, - int off, + uint32_t off, uint16_t val) { nn_writew(rte_cpu_to_le_16(val), hw->ctrl_bar + off); @@ -275,14 +275,14 @@ nn_cfg_writew(struct nfp_net_hw *hw, static inline uint32_t nn_cfg_readl(struct nfp_net_hw *hw, - int off) + uint32_t off) { return rte_le_to_cpu_32(nn_readl(hw->ctrl_bar + off)); } static inline void nn_cfg_writel(struct nfp_net_hw *hw, - int off, + 
uint32_t off, uint32_t val) { nn_writel(rte_cpu_to_le_32(val), hw->ctrl_bar + off); @@ -290,14 +290,14 @@ nn_cfg_writel(struct nfp_net_hw *hw, static inline uint64_t nn_cfg_readq(struct nfp_net_hw *hw, - int off) + uint32_t off) { return rte_le_to_cpu_64(nn_readq(hw->ctrl_bar + off)); } static inline void nn_cfg_writeq(struct nfp_net_hw *hw, - int off, + uint32_t off, uint64_t val) { nn_writeq(rte_cpu_to_le_64(val), hw->ctrl_bar + off); diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c index 65473d87e8..140d20dcf7 100644 --- a/drivers/net/nfp/nfp_ethdev.c +++ b/drivers/net/nfp/nfp_ethdev.c @@ -23,7 +23,7 @@ static int nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic, - int port) + uint16_t port) { struct nfp_eth_table *nfp_eth_table; struct nfp_net_hw *hw = NULL; @@ -255,7 +255,7 @@ nfp_net_close(struct rte_eth_dev *dev) struct rte_pci_device *pci_dev; struct nfp_pf_dev *pf_dev; struct nfp_app_fw_nic *app_fw_nic; - int i; + uint8_t i; if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; @@ -487,7 +487,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev) struct rte_ether_addr *tmp_ether_addr; uint64_t rx_base; uint64_t tx_base; - int port = 0; + uint16_t port = 0; int err; PMD_INIT_FUNC_TRACE(); @@ -501,7 +501,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev) app_fw_nic = NFP_PRIV_TO_APP_FW_NIC(pf_dev->app_fw_priv); port = ((struct nfp_net_hw *)eth_dev->data->dev_private)->idx; - if (port < 0 || port > 7) { + if (port > 7) { PMD_DRV_LOG(ERR, "Port value is wrong"); return -ENODEV; } @@ -761,10 +761,10 @@ static int nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev, const struct nfp_dev_info *dev_info) { - int i; + uint8_t i; int ret; int err = 0; - int total_vnics; + uint32_t total_vnics; struct nfp_net_hw *hw; unsigned int numa_node; struct rte_eth_dev *eth_dev; @@ -785,7 +785,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev, /* Read the number of vNIC's created for the PF */ total_vnics = nfp_rtsym_read_le(pf_dev->sym_tbl, "nfd_cfg_pf0_num_ports", &err); - if (err != 0 || total_vnics <= 0 || total_vnics > 8) { + if (err != 0 || total_vnics == 0 || total_vnics > 8) { PMD_INIT_LOG(ERR, "nfd_cfg_pf0_num_ports symbol with wrong value"); ret = -ENODEV; goto app_cleanup; @@ -795,7 +795,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev, * For coreNIC the number of vNICs exposed should be the same as the * number of physical ports */ - if (total_vnics != (int)nfp_eth_table->count) { + if (total_vnics != nfp_eth_table->count) { PMD_INIT_LOG(ERR, "Total physical ports do not match number of vNICs"); ret = -ENODEV; goto app_cleanup; @@ -1053,15 +1053,15 @@ nfp_secondary_init_app_fw_nic(struct rte_pci_device *pci_dev, struct nfp_rtsym_table *sym_tbl, struct nfp_cpp *cpp) { - int i; + uint32_t i; int err = 0; int ret = 0; - int total_vnics; + uint32_t total_vnics; struct nfp_net_hw *hw; /* Read the number of vNIC's created for the PF */ total_vnics = nfp_rtsym_read_le(sym_tbl, "nfd_cfg_pf0_num_ports", &err); - if (err != 0 || total_vnics <= 0 || total_vnics > 8) { + if (err != 0 || total_vnics == 0 || total_vnics > 8) { PMD_INIT_LOG(ERR, "nfd_cfg_pf0_num_ports symbol with wrong value"); return -ENODEV; } @@ -1069,7 +1069,7 @@ nfp_secondary_init_app_fw_nic(struct rte_pci_device *pci_dev, for (i = 0; i < total_vnics; i++) { struct rte_eth_dev *eth_dev; char port_name[RTE_ETH_NAME_MAX_LEN]; - snprintf(port_name, sizeof(port_name), "%s_port%d", + snprintf(port_name, sizeof(port_name), "%s_port%u", pci_dev->device.name, i); PMD_INIT_LOG(DEBUG, "Secondary attaching to port %s", 
port_name); diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c index ac6a10685d..892300a909 100644 --- a/drivers/net/nfp/nfp_ethdev_vf.c +++ b/drivers/net/nfp/nfp_ethdev_vf.c @@ -260,7 +260,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) uint64_t tx_bar_off = 0, rx_bar_off = 0; uint32_t start_q; - int port = 0; + uint16_t port = 0; int err; const struct nfp_dev_info *dev_info; diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c index 156b9599db..a254d839ff 100644 --- a/drivers/net/nfp/nfp_flow.c +++ b/drivers/net/nfp/nfp_flow.c @@ -2001,7 +2001,7 @@ nfp_flow_compile_item_proc(struct nfp_flower_representor *repr, char **mbuf_off_mask, bool is_outer_layer) { - int i; + uint32_t i; int ret = 0; bool continue_flag = true; const struct rte_flow_item *item; @@ -2235,7 +2235,7 @@ nfp_flow_action_set_ipv6(char *act_data, const struct rte_flow_action *action, bool ip_src_flag) { - int i; + uint32_t i; rte_be32_t tmp; size_t act_size; struct nfp_fl_act_set_ipv6_addr *set_ip; diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c index 7885166753..8cbb9b74a2 100644 --- a/drivers/net/nfp/nfp_rxtx.c +++ b/drivers/net/nfp/nfp_rxtx.c @@ -190,7 +190,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq) { struct nfp_net_dp_buf *rxe = rxq->rxbufs; uint64_t dma_addr; - unsigned int i; + uint16_t i; PMD_RX_LOG(DEBUG, "Fill Rx Freelist for %u descriptors", rxq->rx_count); @@ -229,7 +229,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq) int nfp_net_rx_freelist_setup(struct rte_eth_dev *dev) { - int i; + uint16_t i; for (i = 0; i < dev->data->nb_rx_queues; i++) { if (nfp_net_rx_fill_freelist(dev->data->rx_queues[i]) != 0) @@ -840,7 +840,7 @@ nfp_net_recv_pkts(void *rx_queue, static void nfp_net_rx_queue_release_mbufs(struct nfp_net_rxq *rxq) { - unsigned int i; + uint16_t i; if (rxq->rxbufs == NULL) return; @@ -992,11 +992,11 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, * @txq: TX queue to work with * Returns number of descriptors freed */ -int +uint32_t nfp_net_tx_free_bufs(struct nfp_net_txq *txq) { uint32_t qcp_rd_p; - int todo; + uint32_t todo; PMD_TX_LOG(DEBUG, "queue %hu. 
Check for descriptor with a complete" " status", txq->qidx); @@ -1032,7 +1032,7 @@ nfp_net_tx_free_bufs(struct nfp_net_txq *txq) static void nfp_net_tx_queue_release_mbufs(struct nfp_net_txq *txq) { - unsigned int i; + uint32_t i; if (txq->txbufs == NULL) return; diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h index 9a30ebd89e..98ef6c3d93 100644 --- a/drivers/net/nfp/nfp_rxtx.h +++ b/drivers/net/nfp/nfp_rxtx.h @@ -253,7 +253,7 @@ int nfp_net_tx_queue_setup(struct rte_eth_dev *dev, uint16_t nb_desc, unsigned int socket_id, const struct rte_eth_txconf *tx_conf); -int nfp_net_tx_free_bufs(struct nfp_net_txq *txq); +uint32_t nfp_net_tx_free_bufs(struct nfp_net_txq *txq); void nfp_net_set_meta_vlan(struct nfp_net_meta_raw *meta_data, struct rte_mbuf *pkt, uint8_t layer); From patchwork Sat Oct 7 02:33:32 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chaoyong He X-Patchwork-Id: 132374 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8F150426D6; Sat, 7 Oct 2023 04:34:42 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 315E640A87; Sat, 7 Oct 2023 04:34:11 +0200 (CEST) Received: from NAM10-MW2-obe.outbound.protection.outlook.com (mail-mw2nam10on2099.outbound.protection.outlook.com [40.107.94.99]) by mails.dpdk.org (Postfix) with ESMTP id C938440A87 for ; Sat, 7 Oct 2023 04:34:09 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=MaLNc/fcEO/WdVSZm2iEsY/ptOwmW1+nYsbhNYZD5naYRMafuVGjQQXMXyXbfQG9rYS9Qyrr+kMTKjToUx9twycsppOd2SIgK09ecWPL/0u2yfYa5p94bTgWV9drMYMp3ZSfGaS5MtGY2m5qZz9IGA4t+hMzIy80CilgU1S/Zyf87eLCDeMjmp2l6F60qMMHKOWNtJMY3CuklTTzXdJyQ8vD4w/Ci6uQ9b69Lj+bo/LdOV2xx5LaAmzTtMEWbJTHdzxKh5u0f936SaHEaC5ggwvm6mzHW0jIGp2DQK3fwIxEBrgVOw80sOFAuLoSiC8I1SSf2HW8CoBxoPeRNT6vPA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=SaZOVPLjeD7T4ZHv0MbeiVRtUgosL5rI6BBG5wUCxqI=; b=FRv+zGwDBaqbBOyP2I9WBdBGaO2b/+5K85Np/kB82jgKxfndSEwMTJvlBNDsdnMeLaa++J3rjXoE0EPo04LHx6Qva5cT9tKi5z5BexTcdW9azDOKgWdad2d4AyFM6pi1I5g526HCRMqd+PeEREOanvQXGKpldP1FHlyb2Af11HliZy7Wi5yhwnM+aZ2X71AkSMl/WztRx8rMLQefuHQhHgI6hYPaYGmZNDUR7o8lGU8Qvl4drCFLKGUE1Y3ulzE+iliVBK8D4X116JvboVx4c8YMWQIXv+UPKmcLc0yIoEbRSe7FZu8mqHgZXV0Sy7aBihXWD819ZNsps/01cVO+KA== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass smtp.mailfrom=corigine.com; dmarc=pass action=none header.from=corigine.com; dkim=pass header.d=corigine.com; arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=corigine.onmicrosoft.com; s=selector2-corigine-onmicrosoft-com; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=SaZOVPLjeD7T4ZHv0MbeiVRtUgosL5rI6BBG5wUCxqI=; b=vybTk8GN8pmsYlWRmXmyZLPUSwwCZDZsy5DWkXcdj6FlDezs4vYhbeO1dX3mBoyzzLCWj03lScG8qGjlCX9+dvMK00FturpGYqfcb2shaksmOlkwgAGXhHPiRBV5KTx8pcmrncziTJYYqPw4/Hi34pynh+jz0qHx7mC5GiXHBbA= Authentication-Results: dkim=none (message not signed) header.d=none;dmarc=none action=none header.from=corigine.com; Received: from SJ0PR13MB5545.namprd13.prod.outlook.com (2603:10b6:a03:424::5) by 
SA0PR13MB3936.namprd13.prod.outlook.com; Sat, 7 Oct 2023 02:34:06 +0000
From: Chaoyong He
To: dev@dpdk.org
Cc: oss-drivers@corigine.com, Chaoyong He, Long Wu, Peng Zhang
Subject: [PATCH 04/11] net/nfp: standard the local variable coding style
Date: Sat, 7 Oct 2023 10:33:32 +0800
Message-Id: <20231007023339.1546659-5-chaoyong.he@corigine.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20231007023339.1546659-1-chaoyong.he@corigine.com>
References: <20231007023339.1546659-1-chaoyong.he@corigine.com>
MIME-Version: 1.0
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org

Declare only one local variable per line, and keep the declarations in a
unified order.
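As a rough before/after sketch of this rule (the names are invented, and the
ordering is inferred from the diffs: plain scalars first, struct pointers
last, one declaration per line):

    #include <stdint.h>

    struct example_hw {
            uint32_t ctrl;
    };

    /* Before: several declarations share a line and the order is ad hoc. */
    static uint32_t
    build_ctrl_old(struct example_hw *hw)
    {
            uint32_t new_ctrl, update = 0;
            int ret = 0;

            new_ctrl = hw->ctrl | update | (uint32_t)ret;
            return new_ctrl;
    }

    /* After: one variable per line, simple scalars before struct pointers. */
    static uint32_t
    build_ctrl_new(struct example_hw *hw_arg)
    {
            int ret = 0;
            uint32_t update = 0;
            uint32_t new_ctrl;
            struct example_hw *hw = hw_arg;

            new_ctrl = hw->ctrl | update | (uint32_t)ret;
            return new_ctrl;
    }

The hunks below only reorder and split existing declarations in this way; no
behaviour changes.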
Signed-off-by: Chaoyong He Reviewed-by: Long Wu Reviewed-by: Peng Zhang --- drivers/net/nfp/flower/nfp_flower.c | 6 +- drivers/net/nfp/nfd3/nfp_nfd3_dp.c | 4 +- drivers/net/nfp/nfp_common.c | 97 ++++++++++++++++------------- drivers/net/nfp/nfp_common.h | 3 +- drivers/net/nfp/nfp_cpp_bridge.c | 39 ++++++++---- drivers/net/nfp/nfp_ethdev.c | 47 +++++++------- drivers/net/nfp/nfp_ethdev_vf.c | 23 +++---- drivers/net/nfp/nfp_flow.c | 28 ++++----- drivers/net/nfp/nfp_rxtx.c | 38 +++++------ 9 files changed, 154 insertions(+), 131 deletions(-) diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c index bd961043b2..9000ee191c 100644 --- a/drivers/net/nfp/flower/nfp_flower.c +++ b/drivers/net/nfp/flower/nfp_flower.c @@ -24,9 +24,9 @@ static void nfp_pf_repr_enable_queues(struct rte_eth_dev *dev) { + uint16_t i; struct nfp_net_hw *hw; uint64_t enabled_queues = 0; - uint16_t i; struct nfp_flower_representor *repr; repr = dev->data->dev_private; @@ -50,9 +50,9 @@ nfp_pf_repr_enable_queues(struct rte_eth_dev *dev) static void nfp_pf_repr_disable_queues(struct rte_eth_dev *dev) { - struct nfp_net_hw *hw; + uint32_t update; uint32_t new_ctrl; - uint32_t update = 0; + struct nfp_net_hw *hw; struct nfp_flower_representor *repr; repr = dev->data->dev_private; diff --git a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c index 5a84629ed7..699f65ebef 100644 --- a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c +++ b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c @@ -228,13 +228,13 @@ nfp_net_nfd3_xmit_pkts_common(void *tx_queue, bool repr_flag) { uint16_t i; + uint8_t offset; uint32_t pkt_size; uint16_t dma_size; - uint8_t offset; uint64_t dma_addr; uint16_t free_descs; - uint16_t issued_descs; struct rte_mbuf *pkt; + uint16_t issued_descs; struct nfp_net_hw *hw; struct rte_mbuf **lmbuf; struct nfp_net_txq *txq; diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c index cb2c2afbd7..18291a1cde 100644 --- a/drivers/net/nfp/nfp_common.c +++ b/drivers/net/nfp/nfp_common.c @@ -375,10 +375,10 @@ nfp_net_mbox_reconfig(struct nfp_net_hw *hw, int nfp_net_configure(struct rte_eth_dev *dev) { + struct nfp_net_hw *hw; struct rte_eth_conf *dev_conf; struct rte_eth_rxmode *rxmode; struct rte_eth_txmode *txmode; - struct nfp_net_hw *hw; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); @@ -464,9 +464,9 @@ nfp_net_enbable_rxvlan_cap(struct nfp_net_hw *hw, void nfp_net_enable_queues(struct rte_eth_dev *dev) { + uint16_t i; struct nfp_net_hw *hw; uint64_t enabled_queues = 0; - uint16_t i; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); @@ -488,8 +488,9 @@ nfp_net_enable_queues(struct rte_eth_dev *dev) void nfp_net_disable_queues(struct rte_eth_dev *dev) { + uint32_t update; + uint32_t new_ctrl; struct nfp_net_hw *hw; - uint32_t new_ctrl, update = 0; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); @@ -528,9 +529,10 @@ void nfp_net_write_mac(struct nfp_net_hw *hw, uint8_t *mac) { - uint32_t mac0 = *(uint32_t *)mac; + uint32_t mac0; uint16_t mac1; + mac0 = *(uint32_t *)mac; nn_writel(rte_cpu_to_be_32(mac0), hw->ctrl_bar + NFP_NET_CFG_MACADDR); mac += 4; @@ -543,8 +545,9 @@ int nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr) { + uint32_t ctrl; + uint32_t update; struct nfp_net_hw *hw; - uint32_t update, ctrl; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 && @@ -574,8 +577,8 @@ int nfp_configure_rx_interrupt(struct rte_eth_dev *dev, struct rte_intr_handle *intr_handle) { - 
struct nfp_net_hw *hw; uint16_t i; + struct nfp_net_hw *hw; if (rte_intr_vec_list_alloc(intr_handle, "intr_vec", dev->data->nb_rx_queues) != 0) { @@ -615,11 +618,11 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev, uint32_t nfp_check_offloads(struct rte_eth_dev *dev) { + uint32_t ctrl = 0; struct nfp_net_hw *hw; struct rte_eth_conf *dev_conf; struct rte_eth_rxmode *rxmode; struct rte_eth_txmode *txmode; - uint32_t ctrl = 0; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); @@ -682,9 +685,10 @@ nfp_check_offloads(struct rte_eth_dev *dev) int nfp_net_promisc_enable(struct rte_eth_dev *dev) { - uint32_t new_ctrl, update = 0; - struct nfp_net_hw *hw; int ret; + uint32_t new_ctrl; + uint32_t update = 0; + struct nfp_net_hw *hw; struct nfp_flower_representor *repr; PMD_DRV_LOG(DEBUG, "Promiscuous mode enable"); @@ -725,9 +729,10 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev) int nfp_net_promisc_disable(struct rte_eth_dev *dev) { - uint32_t new_ctrl, update = 0; - struct nfp_net_hw *hw; int ret; + uint32_t new_ctrl; + uint32_t update = 0; + struct nfp_net_hw *hw; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); @@ -764,8 +769,8 @@ nfp_net_link_update(struct rte_eth_dev *dev, { int ret; uint32_t i; - uint32_t nn_link_status; struct nfp_net_hw *hw; + uint32_t nn_link_status; struct rte_eth_link link; struct nfp_eth_table *nfp_eth_table; @@ -988,12 +993,13 @@ nfp_net_stats_reset(struct rte_eth_dev *dev) uint32_t nfp_net_xstats_size(const struct rte_eth_dev *dev) { - /* If the device is a VF, then there will be no MAC stats */ - struct nfp_net_hw *hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint32_t count; + struct nfp_net_hw *hw; const uint32_t size = RTE_DIM(nfp_net_xstats); + /* If the device is a VF, then there will be no MAC stats */ + hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); if (hw->mac_stats == NULL) { - uint32_t count; for (count = 0; count < size; count++) { if (nfp_net_xstats[count].group == NFP_XSTAT_GROUP_MAC) break; @@ -1396,9 +1402,9 @@ int nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id) { - struct rte_pci_device *pci_dev; - struct nfp_net_hw *hw; uint16_t base = 0; + struct nfp_net_hw *hw; + struct rte_pci_device *pci_dev; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); pci_dev = RTE_ETH_DEV_TO_PCI(dev); @@ -1417,9 +1423,9 @@ int nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id) { - struct rte_pci_device *pci_dev; - struct nfp_net_hw *hw; uint16_t base = 0; + struct nfp_net_hw *hw; + struct rte_pci_device *pci_dev; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); pci_dev = RTE_ETH_DEV_TO_PCI(dev); @@ -1436,8 +1442,8 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, static void nfp_net_dev_link_status_print(struct rte_eth_dev *dev) { - struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); struct rte_eth_link link; + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); rte_eth_linkstatus_get(dev, &link); if (link.link_status != 0) @@ -1573,16 +1579,16 @@ int nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask) { - uint32_t new_ctrl, update; + int ret; + uint32_t update; + uint32_t new_ctrl; struct nfp_net_hw *hw; + uint32_t rxvlan_ctrl = 0; struct rte_eth_conf *dev_conf; - uint32_t rxvlan_ctrl; - int ret; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); dev_conf = &dev->data->dev_conf; new_ctrl = hw->ctrl; - rxvlan_ctrl = 0; nfp_net_enbable_rxvlan_cap(hw, &rxvlan_ctrl); @@ -1619,12 +1625,15 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev, struct 
rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size) { + uint16_t i; + uint16_t j; + uint16_t idx; uint8_t mask; uint32_t reta; - uint16_t i, j; - uint16_t idx, shift; - struct nfp_net_hw *hw = - NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); + uint16_t shift; + struct nfp_net_hw *hw; + + hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); if (reta_size != NFP_NET_CFG_RSS_ITBL_SZ) { PMD_DRV_LOG(ERR, "The size of hash lookup table configured " @@ -1670,11 +1679,11 @@ nfp_net_reta_update(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size) { - struct nfp_net_hw *hw = - NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - uint32_t update; int ret; + uint32_t update; + struct nfp_net_hw *hw; + hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); if ((hw->ctrl & NFP_NET_CFG_CTRL_RSS_ANY) == 0) return -EINVAL; @@ -1696,10 +1705,12 @@ nfp_net_reta_query(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, uint16_t reta_size) { - uint16_t i, j; + uint16_t i; + uint16_t j; + uint16_t idx; uint8_t mask; - uint16_t idx, shift; uint32_t reta; + uint16_t shift; struct nfp_net_hw *hw; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); @@ -1742,11 +1753,11 @@ static int nfp_net_rss_hash_write(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) { - struct nfp_net_hw *hw; + uint8_t i; + uint8_t key; uint64_t rss_hf; + struct nfp_net_hw *hw; uint32_t cfg_rss_ctrl = 0; - uint8_t key; - uint8_t i; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); @@ -1834,10 +1845,10 @@ int nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev, struct rte_eth_rss_conf *rss_conf) { + uint8_t i; + uint8_t key; uint64_t rss_hf; uint32_t cfg_rss_ctrl; - uint8_t key; - uint8_t i; struct nfp_net_hw *hw; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); @@ -1890,13 +1901,14 @@ nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev, int nfp_net_rss_config_default(struct rte_eth_dev *dev) { + int ret; + uint8_t i; + uint8_t j; + uint16_t queue = 0; struct rte_eth_conf *dev_conf; struct rte_eth_rss_conf rss_conf; - struct rte_eth_rss_reta_entry64 nfp_reta_conf[2]; uint16_t rx_queues = dev->data->nb_rx_queues; - uint16_t queue; - uint8_t i, j; - int ret; + struct rte_eth_rss_reta_entry64 nfp_reta_conf[2]; PMD_DRV_LOG(INFO, "setting default RSS conf for %u queues", rx_queues); @@ -1904,7 +1916,6 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev) nfp_reta_conf[0].mask = ~0x0; nfp_reta_conf[1].mask = ~0x0; - queue = 0; for (i = 0; i < 0x40; i += 8) { for (j = i; j < (i + 8); j++) { nfp_reta_conf[0].reta[j] = queue; diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h index 71153ea25b..9cb889c4a6 100644 --- a/drivers/net/nfp/nfp_common.h +++ b/drivers/net/nfp/nfp_common.h @@ -222,8 +222,9 @@ nn_writew(uint16_t val, static inline uint64_t nn_readq(volatile void *addr) { + uint32_t low; + uint32_t high; const volatile uint32_t *p = addr; - uint32_t low, high; high = nn_readl((volatile const void *)(p + 1)); low = nn_readl((volatile const void *)p); diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c index 85a8bf9235..727ec7a7b2 100644 --- a/drivers/net/nfp/nfp_cpp_bridge.c +++ b/drivers/net/nfp/nfp_cpp_bridge.c @@ -119,12 +119,16 @@ static int nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp) { - struct nfp_cpp_area *area; - off_t offset, nfp_offset; - uint32_t cpp_id, pos, len; + int err; + off_t offset; + uint32_t pos; + uint32_t len; + size_t count; + size_t curlen; + uint32_t cpp_id; + off_t 
nfp_offset; uint32_t tmpbuf[16]; - size_t count, curlen; - int err = 0; + struct nfp_cpp_area *area; PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__, sizeof(off_t), sizeof(size_t)); @@ -220,12 +224,16 @@ static int nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp) { - struct nfp_cpp_area *area; - off_t offset, nfp_offset; - uint32_t cpp_id, pos, len; + int err; + off_t offset; + uint32_t pos; + uint32_t len; + size_t count; + size_t curlen; + uint32_t cpp_id; + off_t nfp_offset; uint32_t tmpbuf[16]; - size_t count, curlen; - int err = 0; + struct nfp_cpp_area *area; PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__, sizeof(off_t), sizeof(size_t)); @@ -319,8 +327,10 @@ static int nfp_cpp_bridge_serve_ioctl(int sockfd, struct nfp_cpp *cpp) { - uint32_t cmd, ident_size, tmp; int err; + uint32_t cmd; + uint32_t tmp; + uint32_t ident_size; /* Reading now the IOCTL command */ err = recv(sockfd, &cmd, 4, 0); @@ -375,10 +385,13 @@ nfp_cpp_bridge_serve_ioctl(int sockfd, static int nfp_cpp_bridge_service_func(void *args) { - struct sockaddr address; + int op; + int ret; + int sockfd; + int datafd; struct nfp_cpp *cpp; + struct sockaddr address; struct nfp_pf_dev *pf_dev; - int sockfd, datafd, op, ret; struct timeval timeout = {1, 0}; unlink("/tmp/nfp_cpp"); diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c index 140d20dcf7..7d149decfb 100644 --- a/drivers/net/nfp/nfp_ethdev.c +++ b/drivers/net/nfp/nfp_ethdev.c @@ -25,8 +25,8 @@ static int nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic, uint16_t port) { + struct nfp_net_hw *hw; struct nfp_eth_table *nfp_eth_table; - struct nfp_net_hw *hw = NULL; /* Grab a pointer to the correct physical port */ hw = app_fw_nic->ports[port]; @@ -42,18 +42,19 @@ nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic, static int nfp_net_start(struct rte_eth_dev *dev) { - struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = pci_dev->intr_handle; - uint32_t new_ctrl, update = 0; + int ret; + uint32_t new_ctrl; + uint32_t update = 0; uint32_t cap_extend; - uint32_t ctrl_extend = 0; + uint32_t intr_vector; struct nfp_net_hw *hw; + uint32_t ctrl_extend = 0; struct nfp_pf_dev *pf_dev; - struct nfp_app_fw_nic *app_fw_nic; struct rte_eth_conf *dev_conf; struct rte_eth_rxmode *rxmode; - uint32_t intr_vector; - int ret; + struct nfp_app_fw_nic *app_fw_nic; + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); pf_dev = NFP_NET_DEV_PRIVATE_TO_PF(dev->data->dev_private); @@ -251,11 +252,11 @@ nfp_net_set_link_down(struct rte_eth_dev *dev) static int nfp_net_close(struct rte_eth_dev *dev) { + uint8_t i; struct nfp_net_hw *hw; - struct rte_pci_device *pci_dev; struct nfp_pf_dev *pf_dev; + struct rte_pci_device *pci_dev; struct nfp_app_fw_nic *app_fw_nic; - uint8_t i; if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; @@ -480,15 +481,15 @@ nfp_net_ethdev_ops_mount(struct nfp_net_hw *hw, static int nfp_net_init(struct rte_eth_dev *eth_dev) { - struct rte_pci_device *pci_dev; + int err; + uint16_t port; + uint64_t rx_base; + uint64_t tx_base; + struct nfp_net_hw *hw; struct nfp_pf_dev *pf_dev; + struct rte_pci_device *pci_dev; struct nfp_app_fw_nic *app_fw_nic; - struct nfp_net_hw *hw; struct rte_ether_addr *tmp_ether_addr; - uint64_t rx_base; - uint64_t tx_base; - uint16_t port = 0; - int err; PMD_INIT_FUNC_TRACE(); @@ -650,14 +651,14 
@@ nfp_fw_upload(struct rte_pci_device *dev, struct nfp_nsp *nsp, char *card) { - struct nfp_cpp *cpp = nfp_nsp_cpp(nsp); void *fw_buf; - char fw_name[125]; - char serial[40]; size_t fsize; + char serial[40]; + char fw_name[125]; uint16_t interface; uint32_t cpp_serial_len; const uint8_t *cpp_serial; + struct nfp_cpp *cpp = nfp_nsp_cpp(nsp); cpp_serial_len = nfp_cpp_serial(cpp, &cpp_serial); if (cpp_serial_len != NFP_SERIAL_LEN) @@ -713,10 +714,10 @@ nfp_fw_setup(struct rte_pci_device *dev, struct nfp_eth_table *nfp_eth_table, struct nfp_hwinfo *hwinfo) { + int err; + char card_desc[100]; struct nfp_nsp *nsp; const char *nfp_fw_model; - char card_desc[100]; - int err = 0; nfp_fw_model = nfp_hwinfo_lookup(hwinfo, "nffw.partno"); if (nfp_fw_model == NULL) @@ -897,9 +898,9 @@ nfp_pf_init(struct rte_pci_device *pci_dev) uint64_t addr; uint32_t cpp_id; struct nfp_cpp *cpp; - enum nfp_app_fw_id app_fw_id; struct nfp_pf_dev *pf_dev; struct nfp_hwinfo *hwinfo; + enum nfp_app_fw_id app_fw_id; char name[RTE_ETH_NAME_MAX_LEN]; struct nfp_rtsym_table *sym_tbl; struct nfp_eth_table *nfp_eth_table; @@ -1220,8 +1221,8 @@ static const struct rte_pci_id pci_id_nfp_pf_net_map[] = { static int nfp_pci_uninit(struct rte_eth_dev *eth_dev) { - struct rte_pci_device *pci_dev; uint16_t port_id; + struct rte_pci_device *pci_dev; pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c index 892300a909..aaef6ea91a 100644 --- a/drivers/net/nfp/nfp_ethdev_vf.c +++ b/drivers/net/nfp/nfp_ethdev_vf.c @@ -29,14 +29,15 @@ nfp_netvf_read_mac(struct nfp_net_hw *hw) static int nfp_netvf_start(struct rte_eth_dev *dev) { - struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); - struct rte_intr_handle *intr_handle = pci_dev->intr_handle; - uint32_t new_ctrl, update = 0; + int ret; + uint32_t new_ctrl; + uint32_t update = 0; + uint32_t intr_vector; struct nfp_net_hw *hw; struct rte_eth_conf *dev_conf; struct rte_eth_rxmode *rxmode; - uint32_t intr_vector; - int ret; + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); @@ -254,15 +255,15 @@ nfp_netvf_ethdev_ops_mount(struct nfp_net_hw *hw, static int nfp_netvf_init(struct rte_eth_dev *eth_dev) { - struct rte_pci_device *pci_dev; - struct nfp_net_hw *hw; - struct rte_ether_addr *tmp_ether_addr; - - uint64_t tx_bar_off = 0, rx_bar_off = 0; + int err; uint32_t start_q; uint16_t port = 0; - int err; + struct nfp_net_hw *hw; + uint64_t tx_bar_off = 0; + uint64_t rx_bar_off = 0; + struct rte_pci_device *pci_dev; const struct nfp_dev_info *dev_info; + struct rte_ether_addr *tmp_ether_addr; PMD_INIT_FUNC_TRACE(); diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c index a254d839ff..476eb0c7f8 100644 --- a/drivers/net/nfp/nfp_flow.c +++ b/drivers/net/nfp/nfp_flow.c @@ -728,9 +728,9 @@ nfp_flow_compile_metadata(struct nfp_flow_priv *priv, struct nfp_fl_key_ls *key_layer, uint32_t stats_ctx) { - struct nfp_fl_rule_metadata *nfp_flow_meta; - char *mbuf_off_exact; char *mbuf_off_mask; + char *mbuf_off_exact; + struct nfp_fl_rule_metadata *nfp_flow_meta; /* * Convert to long words as firmware expects @@ -941,9 +941,9 @@ nfp_flow_key_layers_calculate_actions(const struct rte_flow_action actions[], int ret = 0; bool meter_flag = false; bool tc_hl_flag = false; - bool mac_set_flag = false; bool ip_set_flag = false; bool tp_set_flag = false; + bool mac_set_flag = false; bool ttl_tos_flag = 
false; const struct rte_flow_action *action; @@ -3165,11 +3165,11 @@ nfp_flow_action_geneve_encap_v4(struct nfp_app_fw_flower *app_fw_flower, { uint64_t tun_id; const struct rte_ether_hdr *eth; + struct nfp_fl_act_pre_tun *pre_tun; + struct nfp_fl_act_set_tun *set_tun; const struct rte_flow_item_udp *udp; const struct rte_flow_item_ipv4 *ipv4; const struct rte_flow_item_geneve *geneve; - struct nfp_fl_act_pre_tun *pre_tun; - struct nfp_fl_act_set_tun *set_tun; size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun); size_t act_set_size = sizeof(struct nfp_fl_act_set_tun); @@ -3205,11 +3205,11 @@ nfp_flow_action_geneve_encap_v6(struct nfp_app_fw_flower *app_fw_flower, uint8_t tos; uint64_t tun_id; const struct rte_ether_hdr *eth; + struct nfp_fl_act_pre_tun *pre_tun; + struct nfp_fl_act_set_tun *set_tun; const struct rte_flow_item_udp *udp; const struct rte_flow_item_ipv6 *ipv6; const struct rte_flow_item_geneve *geneve; - struct nfp_fl_act_pre_tun *pre_tun; - struct nfp_fl_act_set_tun *set_tun; size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun); size_t act_set_size = sizeof(struct nfp_fl_act_set_tun); @@ -3245,10 +3245,10 @@ nfp_flow_action_nvgre_encap_v4(struct nfp_app_fw_flower *app_fw_flower, { uint64_t tun_id; const struct rte_ether_hdr *eth; - const struct rte_flow_item_ipv4 *ipv4; - const struct rte_flow_item_gre *gre; struct nfp_fl_act_pre_tun *pre_tun; struct nfp_fl_act_set_tun *set_tun; + const struct rte_flow_item_gre *gre; + const struct rte_flow_item_ipv4 *ipv4; size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun); size_t act_set_size = sizeof(struct nfp_fl_act_set_tun); @@ -3283,10 +3283,10 @@ nfp_flow_action_nvgre_encap_v6(struct nfp_app_fw_flower *app_fw_flower, uint8_t tos; uint64_t tun_id; const struct rte_ether_hdr *eth; - const struct rte_flow_item_ipv6 *ipv6; - const struct rte_flow_item_gre *gre; struct nfp_fl_act_pre_tun *pre_tun; struct nfp_fl_act_set_tun *set_tun; + const struct rte_flow_item_gre *gre; + const struct rte_flow_item_ipv6 *ipv6; size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun); size_t act_set_size = sizeof(struct nfp_fl_act_set_tun); @@ -3431,12 +3431,12 @@ nfp_flow_compile_action(struct nfp_flower_representor *representor, uint32_t count; char *position; char *action_data; - bool ttl_tos_flag = false; - bool tc_hl_flag = false; bool drop_flag = false; + bool tc_hl_flag = false; bool ip_set_flag = false; bool tp_set_flag = false; bool mac_set_flag = false; + bool ttl_tos_flag = false; uint32_t total_actions = 0; const struct rte_flow_action *action; struct nfp_flower_meta_tci *meta_tci; @@ -4206,10 +4206,10 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev) size_t stats_size; uint64_t ctx_count; uint64_t ctx_split; + struct nfp_flow_priv *priv; char mask_name[RTE_HASH_NAMESIZE]; char flow_name[RTE_HASH_NAMESIZE]; char pretun_name[RTE_HASH_NAMESIZE]; - struct nfp_flow_priv *priv; struct nfp_app_fw_flower *app_fw_flower; const char *pci_name = strchr(pf_dev->pci_dev->name, ':') + 1; diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c index 8cbb9b74a2..db6122eac3 100644 --- a/drivers/net/nfp/nfp_rxtx.c +++ b/drivers/net/nfp/nfp_rxtx.c @@ -188,9 +188,9 @@ nfp_net_rx_cksum(struct nfp_net_rxq *rxq, static int nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq) { - struct nfp_net_dp_buf *rxe = rxq->rxbufs; - uint64_t dma_addr; uint16_t i; + uint64_t dma_addr; + struct nfp_net_dp_buf *rxe = rxq->rxbufs; PMD_RX_LOG(DEBUG, "Fill Rx Freelist for %u descriptors", rxq->rx_count); @@ -241,17 +241,15 @@ nfp_net_rx_freelist_setup(struct 
rte_eth_dev *dev) uint32_t nfp_net_rx_queue_count(void *rx_queue) { + uint32_t idx; + uint32_t count = 0; struct nfp_net_rxq *rxq; struct nfp_net_rx_desc *rxds; - uint32_t idx; - uint32_t count; rxq = rx_queue; idx = rxq->rd_p; - count = 0; - /* * Other PMDs are just checking the DD bit in intervals of 4 * descriptors and counting all four if the first has the DD @@ -282,9 +280,9 @@ nfp_net_parse_chained_meta(uint8_t *meta_base, rte_be32_t meta_header, struct nfp_meta_parsed *meta) { - uint8_t *meta_offset; uint32_t meta_info; uint32_t vlan_info; + uint8_t *meta_offset; meta_info = rte_be_to_cpu_32(meta_header); meta_offset = meta_base + 4; @@ -683,15 +681,15 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) { - struct nfp_net_rxq *rxq; - struct nfp_net_rx_desc *rxds; - struct nfp_net_dp_buf *rxb; - struct nfp_net_hw *hw; + uint64_t dma_addr; + uint16_t avail = 0; struct rte_mbuf *mb; + uint16_t nb_hold = 0; + struct nfp_net_hw *hw; struct rte_mbuf *new_mb; - uint16_t nb_hold; - uint64_t dma_addr; - uint16_t avail; + struct nfp_net_rxq *rxq; + struct nfp_net_dp_buf *rxb; + struct nfp_net_rx_desc *rxds; uint16_t avail_multiplexed = 0; rxq = rx_queue; @@ -706,8 +704,6 @@ nfp_net_recv_pkts(void *rx_queue, hw = rxq->hw; - avail = 0; - nb_hold = 0; while (avail + avail_multiplexed < nb_pkts) { rxb = &rxq->rxbufs[rxq->rd_p]; if (unlikely(rxb == NULL)) { @@ -883,12 +879,12 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mp) { + uint32_t rx_desc_sz; uint16_t min_rx_desc; uint16_t max_rx_desc; - const struct rte_memzone *tz; - struct nfp_net_rxq *rxq; struct nfp_net_hw *hw; - uint32_t rx_desc_sz; + struct nfp_net_rxq *rxq; + const struct rte_memzone *tz; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); @@ -995,8 +991,8 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, uint32_t nfp_net_tx_free_bufs(struct nfp_net_txq *txq) { - uint32_t qcp_rd_p; uint32_t todo; + uint32_t qcp_rd_p; PMD_TX_LOG(DEBUG, "queue %hu. 
Check for descriptor with a complete" " status", txq->qidx); @@ -1072,8 +1068,8 @@ nfp_net_set_meta_vlan(struct nfp_net_meta_raw *meta_data, struct rte_mbuf *pkt, uint8_t layer) { - uint16_t vlan_tci; uint16_t tpid; + uint16_t vlan_tci; tpid = RTE_ETHER_TYPE_VLAN; vlan_tci = pkt->vlan_tci; From patchwork Sat Oct 7 02:33:33 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chaoyong He X-Patchwork-Id: 132375 X-Patchwork-Delegate: ferruh.yigit@amd.com From: Chaoyong He To: dev@dpdk.org Cc: oss-drivers@corigine.com, Chaoyong He , Long Wu
, Peng Zhang Subject: [PATCH 05/11] net/nfp: adjust the log statement Date: Sat, 7 Oct 2023 10:33:33 +0800 Message-Id: <20231007023339.1546659-6-chaoyong.he@corigine.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20231007023339.1546659-1-chaoyong.he@corigine.com> References: <20231007023339.1546659-1-chaoyong.he@corigine.com> MIME-Version: 1.0
X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions Errors-To: dev-bounces@dpdk.org Add log statements to the important control logic, and remove verbose info log statements. Signed-off-by: Chaoyong He Reviewed-by: Long Wu Reviewed-by: Peng Zhang --- drivers/net/nfp/flower/nfp_flower_ctrl.c | 17 +++--- .../net/nfp/flower/nfp_flower_representor.c | 4 +- drivers/net/nfp/nfd3/nfp_nfd3_dp.c | 2 - drivers/net/nfp/nfdk/nfp_nfdk_dp.c | 2 - drivers/net/nfp/nfp_common.c | 59 ++++++++----------- drivers/net/nfp/nfp_cpp_bridge.c | 28 ++++----- drivers/net/nfp/nfp_ethdev.c | 21 +------ drivers/net/nfp/nfp_ethdev_vf.c | 17 +----- drivers/net/nfp/nfp_logs.h | 1 - drivers/net/nfp/nfp_rxtx.c | 17 ++---- 10 files changed, 58 insertions(+), 110 deletions(-) diff --git a/drivers/net/nfp/flower/nfp_flower_ctrl.c b/drivers/net/nfp/flower/nfp_flower_ctrl.c index 4967cc2375..1f4c5fd7f9 100644 --- a/drivers/net/nfp/flower/nfp_flower_ctrl.c +++ b/drivers/net/nfp/flower/nfp_flower_ctrl.c @@ -88,15 +88,14 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue, * responsibility of avoiding it.
But we have * to give some info about the error */ - PMD_RX_LOG(ERR, - "mbuf overflow likely due to the RX offset.\n" - "\t\tYour mbuf size should have extra space for" - " RX offset=%u bytes.\n" - "\t\tCurrently you just have %u bytes available" - " but the received packet is %u bytes long", - hw->rx_offset, - rxq->mbuf_size - hw->rx_offset, - mb->data_len); + PMD_RX_LOG(ERR, "mbuf overflow likely due to the RX offset.\n" + "\t\tYour mbuf size should have extra space for" + " RX offset=%u bytes.\n" + "\t\tCurrently you just have %u bytes available" + " but the received packet is %u bytes long", + hw->rx_offset, + rxq->mbuf_size - hw->rx_offset, + mb->data_len); rte_pktmbuf_free(mb); break; } diff --git a/drivers/net/nfp/flower/nfp_flower_representor.c b/drivers/net/nfp/flower/nfp_flower_representor.c index 01c2c5a517..be0dfb2890 100644 --- a/drivers/net/nfp/flower/nfp_flower_representor.c +++ b/drivers/net/nfp/flower/nfp_flower_representor.c @@ -464,7 +464,7 @@ nfp_flower_repr_rx_burst(void *rx_queue, total_dequeue = rte_ring_dequeue_burst(repr->ring, (void *)rx_pkts, nb_pkts, &available); if (total_dequeue != 0) { - PMD_RX_LOG(DEBUG, "Representor Rx burst for %s, port_id: 0x%x, " + PMD_RX_LOG(DEBUG, "Representor Rx burst for %s, port_id: %#x, " "received: %u, available: %u", repr->name, repr->port_id, total_dequeue, available); @@ -510,7 +510,7 @@ nfp_flower_repr_tx_burst(void *tx_queue, pf_tx_queue = dev->data->tx_queues[0]; sent = nfp_flower_pf_xmit_pkts(pf_tx_queue, tx_pkts, nb_pkts); if (sent != 0) { - PMD_TX_LOG(DEBUG, "Representor Tx burst for %s, port_id: 0x%x transmitted: %u", + PMD_TX_LOG(DEBUG, "Representor Tx burst for %s, port_id: %#x transmitted: %hu", repr->name, repr->port_id, sent); repr->repr_stats.opackets += sent; } diff --git a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c index 699f65ebef..51755f4324 100644 --- a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c +++ b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c @@ -381,8 +381,6 @@ nfp_net_nfd3_tx_queue_setup(struct rte_eth_dev *dev, hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - PMD_INIT_FUNC_TRACE(); - nfp_net_tx_desc_limits(hw, &min_tx_desc, &max_tx_desc); /* Validating number of descriptors */ diff --git a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c index 2426ffb261..dae87ac6df 100644 --- a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c +++ b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c @@ -455,8 +455,6 @@ nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev, hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - PMD_INIT_FUNC_TRACE(); - nfp_net_tx_desc_limits(hw, &min_tx_desc, &max_tx_desc); /* Validating number of descriptors */ diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c index 18291a1cde..f48e1930dc 100644 --- a/drivers/net/nfp/nfp_common.c +++ b/drivers/net/nfp/nfp_common.c @@ -207,7 +207,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw, hw->qcp_cfg); if (hw->qcp_cfg == NULL) { - PMD_INIT_LOG(ERR, "Bad configuration queue pointer"); + PMD_DRV_LOG(ERR, "Bad configuration queue pointer"); return -ENXIO; } @@ -224,15 +224,15 @@ __nfp_net_reconfig(struct nfp_net_hw *hw, if (new == 0) break; if ((new & NFP_NET_CFG_UPDATE_ERR) != 0) { - PMD_INIT_LOG(ERR, "Reconfig error: 0x%08x", new); + PMD_DRV_LOG(ERR, "Reconfig error: %#08x", new); return -1; } if (cnt >= NFP_NET_POLL_TIMEOUT) { - PMD_INIT_LOG(ERR, "Reconfig timeout for 0x%08x after" - " %ums", update, cnt); + PMD_DRV_LOG(ERR, "Reconfig timeout for %#08x after %u ms", + update, cnt); return -EIO; } - nanosleep(&wait, 
0); /* waiting for a 1ms */ + nanosleep(&wait, 0); /* Waiting for a 1ms */ } PMD_DRV_LOG(DEBUG, "Ack DONE"); return 0; @@ -390,8 +390,6 @@ nfp_net_configure(struct rte_eth_dev *dev) * called after that internal process */ - PMD_INIT_LOG(DEBUG, "Configure"); - dev_conf = &dev->data->dev_conf; rxmode = &dev_conf->rxmode; txmode = &dev_conf->txmode; @@ -401,20 +399,20 @@ nfp_net_configure(struct rte_eth_dev *dev) /* Checking TX mode */ if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) { - PMD_INIT_LOG(INFO, "TX mq_mode DCB and VMDq not supported"); + PMD_DRV_LOG(ERR, "TX mq_mode DCB and VMDq not supported"); return -EINVAL; } /* Checking RX mode */ if ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) != 0 && (hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) == 0) { - PMD_INIT_LOG(INFO, "RSS not supported"); + PMD_DRV_LOG(ERR, "RSS not supported"); return -EINVAL; } /* Checking MTU set */ if (rxmode->mtu > NFP_FRAME_SIZE_MAX) { - PMD_INIT_LOG(ERR, "MTU (%u) larger than NFP_FRAME_SIZE_MAX (%u) not supported", + PMD_DRV_LOG(ERR, "MTU (%u) larger than NFP_FRAME_SIZE_MAX (%u)", rxmode->mtu, NFP_FRAME_SIZE_MAX); return -ERANGE; } @@ -552,8 +550,7 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 && (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) == 0) { - PMD_INIT_LOG(INFO, "MAC address unable to change when" - " port enabled"); + PMD_DRV_LOG(ERR, "MAC address unable to change when port enabled"); return -EBUSY; } @@ -567,7 +564,7 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0) ctrl |= NFP_NET_CFG_CTRL_LIVE_ADDR; if (nfp_net_reconfig(hw, ctrl, update) != 0) { - PMD_INIT_LOG(INFO, "MAC address update failed"); + PMD_DRV_LOG(ERR, "MAC address update failed"); return -EIO; } return 0; @@ -582,21 +579,21 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev, if (rte_intr_vec_list_alloc(intr_handle, "intr_vec", dev->data->nb_rx_queues) != 0) { - PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues" - " intr_vec", dev->data->nb_rx_queues); + PMD_DRV_LOG(ERR, "Failed to allocate %d rx_queues intr_vec", + dev->data->nb_rx_queues); return -ENOMEM; } hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) { - PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with UIO"); + PMD_DRV_LOG(INFO, "VF: enabling RX interrupt with UIO"); /* UIO just supports one queue and no LSC*/ nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(0), 0); if (rte_intr_vec_list_index_set(intr_handle, 0, 0) != 0) return -1; } else { - PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with VFIO"); + PMD_DRV_LOG(INFO, "VF: enabling RX interrupt with VFIO"); for (i = 0; i < dev->data->nb_rx_queues; i++) { /* * The first msix vector is reserved for non @@ -605,8 +602,6 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev, nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(i), i + 1); if (rte_intr_vec_list_index_set(intr_handle, i, i + 1) != 0) return -1; - PMD_INIT_LOG(DEBUG, "intr_vec[%d]= %d", i, - rte_intr_vec_list_index_get(intr_handle, i)); } } @@ -691,8 +686,6 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev) struct nfp_net_hw *hw; struct nfp_flower_representor *repr; - PMD_DRV_LOG(DEBUG, "Promiscuous mode enable"); - if ((dev->data->dev_flags & RTE_ETH_DEV_REPRESENTOR) != 0) { repr = dev->data->dev_private; hw = repr->app_fw_flower->pf_hw; @@ -701,7 +694,7 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev) } if ((hw->cap & NFP_NET_CFG_CTRL_PROMISC) == 0) { - PMD_INIT_LOG(INFO, "Promiscuous mode not 
supported"); + PMD_DRV_LOG(ERR, "Promiscuous mode not supported"); return -ENOTSUP; } @@ -774,9 +767,6 @@ nfp_net_link_update(struct rte_eth_dev *dev, struct rte_eth_link link; struct nfp_eth_table *nfp_eth_table; - - PMD_DRV_LOG(DEBUG, "Link update"); - hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); /* Read link status */ @@ -1636,9 +1626,9 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev, hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); if (reta_size != NFP_NET_CFG_RSS_ITBL_SZ) { - PMD_DRV_LOG(ERR, "The size of hash lookup table configured " - "(%d) doesn't match the number hardware can supported " - "(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ); + PMD_DRV_LOG(ERR, "The size of hash lookup table configured (%hu)" + " doesn't match hardware can supported (%d)", + reta_size, NFP_NET_CFG_RSS_ITBL_SZ); return -EINVAL; } @@ -1719,9 +1709,9 @@ nfp_net_reta_query(struct rte_eth_dev *dev, return -EINVAL; if (reta_size != NFP_NET_CFG_RSS_ITBL_SZ) { - PMD_DRV_LOG(ERR, "The size of hash lookup table configured " - "(%d) doesn't match the number hardware can supported " - "(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ); + PMD_DRV_LOG(ERR, "The size of hash lookup table configured (%d)" + " doesn't match hardware can supported (%d)", + reta_size, NFP_NET_CFG_RSS_ITBL_SZ); return -EINVAL; } @@ -1827,7 +1817,7 @@ nfp_net_rss_hash_update(struct rte_eth_dev *dev, } if (rss_conf->rss_key_len > NFP_NET_CFG_RSS_KEY_SZ) { - PMD_DRV_LOG(ERR, "hash key too long"); + PMD_DRV_LOG(ERR, "RSS hash key too long"); return -EINVAL; } @@ -1910,9 +1900,6 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev) uint16_t rx_queues = dev->data->nb_rx_queues; struct rte_eth_rss_reta_entry64 nfp_reta_conf[2]; - PMD_DRV_LOG(INFO, "setting default RSS conf for %u queues", - rx_queues); - nfp_reta_conf[0].mask = ~0x0; nfp_reta_conf[1].mask = ~0x0; @@ -1929,7 +1916,7 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev) dev_conf = &dev->data->dev_conf; if (dev_conf == NULL) { - PMD_DRV_LOG(INFO, "wrong rss conf"); + PMD_DRV_LOG(ERR, "Wrong rss conf"); return -EINVAL; } rss_conf = dev_conf->rx_adv_conf.rss_conf; diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c index 727ec7a7b2..222cfdcbc3 100644 --- a/drivers/net/nfp/nfp_cpp_bridge.c +++ b/drivers/net/nfp/nfp_cpp_bridge.c @@ -130,7 +130,7 @@ nfp_cpp_bridge_serve_write(int sockfd, uint32_t tmpbuf[16]; struct nfp_cpp_area *area; - PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__, + PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu", __func__, sizeof(off_t), sizeof(size_t)); /* Reading the count param */ @@ -149,9 +149,9 @@ nfp_cpp_bridge_serve_write(int sockfd, cpp_id = (offset >> 40) << 8; nfp_offset = offset & ((1ull << 40) - 1); - PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd\n", __func__, count, + PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd", __func__, count, offset); - PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd\n", __func__, + PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd", __func__, cpp_id, nfp_offset); /* Adjust length if not aligned */ @@ -162,7 +162,7 @@ nfp_cpp_bridge_serve_write(int sockfd, } while (count > 0) { - /* configure a CPP PCIe2CPP BAR for mapping the CPP target */ + /* Configure a CPP PCIe2CPP BAR for mapping the CPP target */ area = nfp_cpp_area_alloc_with_name(cpp, cpp_id, "nfp.cdev", nfp_offset, curlen); if (area == NULL) { @@ -170,7 +170,7 @@ nfp_cpp_bridge_serve_write(int sockfd, return -EIO; } - /* mapping the target */ + /* Mapping the target */ err = 
nfp_cpp_area_acquire(area); if (err < 0) { PMD_CPP_LOG(ERR, "area acquire failed"); @@ -183,7 +183,7 @@ nfp_cpp_bridge_serve_write(int sockfd, if (len > sizeof(tmpbuf)) len = sizeof(tmpbuf); - PMD_CPP_LOG(DEBUG, "%s: Receive %u of %zu\n", __func__, + PMD_CPP_LOG(DEBUG, "%s: Receive %u of %zu", __func__, len, count); err = recv(sockfd, tmpbuf, len, MSG_WAITALL); if (err != (int)len) { @@ -235,7 +235,7 @@ nfp_cpp_bridge_serve_read(int sockfd, uint32_t tmpbuf[16]; struct nfp_cpp_area *area; - PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__, + PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu", __func__, sizeof(off_t), sizeof(size_t)); /* Reading the count param */ @@ -254,9 +254,9 @@ nfp_cpp_bridge_serve_read(int sockfd, cpp_id = (offset >> 40) << 8; nfp_offset = offset & ((1ull << 40) - 1); - PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd\n", __func__, count, + PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd", __func__, count, offset); - PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd\n", __func__, + PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd", __func__, cpp_id, nfp_offset); /* Adjust length if not aligned */ @@ -293,7 +293,7 @@ nfp_cpp_bridge_serve_read(int sockfd, nfp_cpp_area_free(area); return -EIO; } - PMD_CPP_LOG(DEBUG, "%s: sending %u of %zu\n", __func__, + PMD_CPP_LOG(DEBUG, "%s: sending %u of %zu", __func__, len, count); err = send(sockfd, tmpbuf, len, 0); @@ -353,7 +353,7 @@ nfp_cpp_bridge_serve_ioctl(int sockfd, tmp = nfp_cpp_model(cpp); - PMD_CPP_LOG(DEBUG, "%s: sending NFP model %08x\n", __func__, tmp); + PMD_CPP_LOG(DEBUG, "%s: sending NFP model %08x", __func__, tmp); err = send(sockfd, &tmp, 4, 0); if (err != 4) { @@ -363,7 +363,7 @@ nfp_cpp_bridge_serve_ioctl(int sockfd, tmp = nfp_cpp_interface(cpp); - PMD_CPP_LOG(DEBUG, "%s: sending NFP interface %08x\n", __func__, tmp); + PMD_CPP_LOG(DEBUG, "%s: sending NFP interface %08x", __func__, tmp); err = send(sockfd, &tmp, 4, 0); if (err != 4) { @@ -440,11 +440,11 @@ nfp_cpp_bridge_service_func(void *args) while (1) { ret = recv(datafd, &op, 4, 0); if (ret <= 0) { - PMD_CPP_LOG(DEBUG, "%s: socket close\n", __func__); + PMD_CPP_LOG(DEBUG, "%s: socket close", __func__); break; } - PMD_CPP_LOG(DEBUG, "%s: getting op %u\n", __func__, op); + PMD_CPP_LOG(DEBUG, "%s: getting op %u", __func__, op); if (op == NFP_BRIDGE_OP_READ) nfp_cpp_bridge_serve_read(datafd, cpp); diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c index 7d149decfb..72abc4c16e 100644 --- a/drivers/net/nfp/nfp_ethdev.c +++ b/drivers/net/nfp/nfp_ethdev.c @@ -60,8 +60,6 @@ nfp_net_start(struct rte_eth_dev *dev) pf_dev = NFP_NET_DEV_PRIVATE_TO_PF(dev->data->dev_private); app_fw_nic = NFP_PRIV_TO_APP_FW_NIC(pf_dev->app_fw_priv); - PMD_INIT_LOG(DEBUG, "Start"); - /* Disabling queues just in case... 
*/ nfp_net_disable_queues(dev); @@ -194,8 +192,6 @@ nfp_net_stop(struct rte_eth_dev *dev) { struct nfp_net_hw *hw; - PMD_INIT_LOG(DEBUG, "Stop"); - hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); nfp_net_disable_queues(dev); @@ -220,8 +216,6 @@ nfp_net_set_link_up(struct rte_eth_dev *dev) { struct nfp_net_hw *hw; - PMD_DRV_LOG(DEBUG, "Set link up"); - hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); if (rte_eal_process_type() == RTE_PROC_PRIMARY) @@ -237,8 +231,6 @@ nfp_net_set_link_down(struct rte_eth_dev *dev) { struct nfp_net_hw *hw; - PMD_DRV_LOG(DEBUG, "Set link down"); - hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); if (rte_eal_process_type() == RTE_PROC_PRIMARY) @@ -261,8 +253,6 @@ nfp_net_close(struct rte_eth_dev *dev) if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; - PMD_INIT_LOG(DEBUG, "Close"); - pf_dev = NFP_NET_DEV_PRIVATE_TO_PF(dev->data->dev_private); hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); pci_dev = RTE_ETH_DEV_TO_PCI(dev); @@ -491,8 +481,6 @@ nfp_net_init(struct rte_eth_dev *eth_dev) struct nfp_app_fw_nic *app_fw_nic; struct rte_ether_addr *tmp_ether_addr; - PMD_INIT_FUNC_TRACE(); - pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); /* Use backpointer here to the PF of this eth_dev */ @@ -513,7 +501,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev) */ hw = app_fw_nic->ports[port]; - PMD_INIT_LOG(DEBUG, "Working with physical port number: %d, " + PMD_INIT_LOG(DEBUG, "Working with physical port number: %hu, " "NFP internal port number: %d", port, hw->nfp_idx); rte_eth_copy_pci_info(eth_dev, pci_dev); @@ -579,9 +567,6 @@ nfp_net_init(struct rte_eth_dev *eth_dev) tx_base = nn_cfg_readl(hw, NFP_NET_CFG_START_TXQ); rx_base = nn_cfg_readl(hw, NFP_NET_CFG_START_RXQ); - PMD_INIT_LOG(DEBUG, "tx_base: 0x%" PRIx64 "", tx_base); - PMD_INIT_LOG(DEBUG, "rx_base: 0x%" PRIx64 "", rx_base); - hw->tx_bar = pf_dev->qc_bar + tx_base * NFP_QCP_QUEUE_ADDR_SZ; hw->rx_bar = pf_dev->qc_bar + rx_base * NFP_QCP_QUEUE_ADDR_SZ; eth_dev->data->dev_private = hw; @@ -627,7 +612,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev) eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS; - PMD_INIT_LOG(INFO, "port %d VendorID=0x%x DeviceID=0x%x " + PMD_INIT_LOG(INFO, "port %d VendorID=%#x DeviceID=%#x " "mac=" RTE_ETHER_ADDR_PRT_FMT, eth_dev->data->port_id, pci_dev->id.vendor_id, pci_dev->id.device_id, @@ -997,7 +982,7 @@ nfp_pf_init(struct rte_pci_device *pci_dev) goto pf_cleanup; } - PMD_INIT_LOG(DEBUG, "qc_bar address: 0x%p", pf_dev->qc_bar); + PMD_INIT_LOG(DEBUG, "qc_bar address: %p", pf_dev->qc_bar); /* * PF initialization has been done at this point. Call app specific diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c index aaef6ea91a..d3c3c9e953 100644 --- a/drivers/net/nfp/nfp_ethdev_vf.c +++ b/drivers/net/nfp/nfp_ethdev_vf.c @@ -41,8 +41,6 @@ nfp_netvf_start(struct rte_eth_dev *dev) hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - PMD_INIT_LOG(DEBUG, "Start"); - /* Disabling queues just in case... 
*/ nfp_net_disable_queues(dev); @@ -136,8 +134,6 @@ nfp_netvf_start(struct rte_eth_dev *dev) static int nfp_netvf_stop(struct rte_eth_dev *dev) { - PMD_INIT_LOG(DEBUG, "Stop"); - nfp_net_disable_queues(dev); /* Clear queues */ @@ -170,8 +166,6 @@ nfp_netvf_close(struct rte_eth_dev *dev) if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; - PMD_INIT_LOG(DEBUG, "Close"); - pci_dev = RTE_ETH_DEV_TO_PCI(dev); /* @@ -265,8 +259,6 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) const struct nfp_dev_info *dev_info; struct rte_ether_addr *tmp_ether_addr; - PMD_INIT_FUNC_TRACE(); - pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); dev_info = nfp_dev_info_get(pci_dev->id.device_id); @@ -301,7 +293,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) hw->eth_xstats_base = rte_malloc("rte_eth_xstat", sizeof(struct rte_eth_xstat) * nfp_net_xstats_size(eth_dev), 0); if (hw->eth_xstats_base == NULL) { - PMD_INIT_LOG(ERR, "no memory for xstats base values on device %s!", + PMD_INIT_LOG(ERR, "No memory for xstats base values on device %s!", pci_dev->device.name); return -ENOMEM; } @@ -312,9 +304,6 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) start_q = nn_cfg_readl(hw, NFP_NET_CFG_START_RXQ); rx_bar_off = nfp_qcp_queue_offset(dev_info, start_q); - PMD_INIT_LOG(DEBUG, "tx_bar_off: 0x%" PRIx64 "", tx_bar_off); - PMD_INIT_LOG(DEBUG, "rx_bar_off: 0x%" PRIx64 "", rx_bar_off); - hw->tx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + tx_bar_off; hw->rx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + rx_bar_off; @@ -345,7 +334,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) tmp_ether_addr = &hw->mac_addr; if (rte_is_valid_assigned_ether_addr(tmp_ether_addr) == 0) { - PMD_INIT_LOG(INFO, "Using random mac address for port %d", port); + PMD_INIT_LOG(INFO, "Using random mac address for port %hu", port); /* Using random mac addresses for VFs */ rte_eth_random_addr(&hw->mac_addr.addr_bytes[0]); nfp_net_write_mac(hw, &hw->mac_addr.addr_bytes[0]); @@ -359,7 +348,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS; - PMD_INIT_LOG(INFO, "port %d VendorID=0x%x DeviceID=0x%x " + PMD_INIT_LOG(INFO, "port %hu VendorID=%#x DeviceID=%#x " "mac=" RTE_ETHER_ADDR_PRT_FMT, eth_dev->data->port_id, pci_dev->id.vendor_id, pci_dev->id.device_id, diff --git a/drivers/net/nfp/nfp_logs.h b/drivers/net/nfp/nfp_logs.h index 315a57811c..16ff61700b 100644 --- a/drivers/net/nfp/nfp_logs.h +++ b/drivers/net/nfp/nfp_logs.h @@ -12,7 +12,6 @@ extern int nfp_logtype_init; #define PMD_INIT_LOG(level, fmt, args...) 
\ rte_log(RTE_LOG_ ## level, nfp_logtype_init, \ "%s(): " fmt "\n", __func__, ## args) -#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>") #ifdef RTE_ETHDEV_DEBUG_RX extern int nfp_logtype_rx; diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c index db6122eac3..5bfdfd28b3 100644 --- a/drivers/net/nfp/nfp_rxtx.c +++ b/drivers/net/nfp/nfp_rxtx.c @@ -192,7 +192,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq) uint64_t dma_addr; struct nfp_net_dp_buf *rxe = rxq->rxbufs; - PMD_RX_LOG(DEBUG, "Fill Rx Freelist for %u descriptors", + PMD_RX_LOG(DEBUG, "Fill Rx Freelist for %hu descriptors", rxq->rx_count); for (i = 0; i < rxq->rx_count; i++) { @@ -212,14 +212,13 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq) rxd->fld.dma_addr_hi = (dma_addr >> 32) & 0xffff; rxd->fld.dma_addr_lo = dma_addr & 0xffffffff; rxe[i].mbuf = mbuf; - PMD_RX_LOG(DEBUG, "[%d]: %" PRIx64, i, dma_addr); } /* Make sure all writes are flushed before telling the hardware */ rte_wmb(); /* Not advertising the whole ring as the firmware gets confused if so */ - PMD_RX_LOG(DEBUG, "Increment FL write pointer in %u", rxq->rx_count - 1); + PMD_RX_LOG(DEBUG, "Increment FL write pointer in %hu", rxq->rx_count - 1); nfp_qcp_ptr_add(rxq->qcp_fl, NFP_QCP_WRITE_PTR, rxq->rx_count - 1); @@ -432,7 +431,7 @@ nfp_net_parse_meta_qinq(const struct nfp_meta_parsed *meta, if (meta->vlan[0].offload == 0) mb->vlan_tci = rte_cpu_to_le_16(meta->vlan[0].tci); mb->vlan_tci_outer = rte_cpu_to_le_16(meta->vlan[1].tci); - PMD_RX_LOG(DEBUG, "Received outer vlan is %u inter vlan is %u", + PMD_RX_LOG(DEBUG, "Received outer vlan TCI is %u inner vlan TCI is %u", mb->vlan_tci_outer, mb->vlan_tci); mb->ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED; } @@ -754,12 +753,11 @@ nfp_net_recv_pkts(void *rx_queue, * responsibility of avoiding it. 
But we have * to give some info about the error */ - PMD_RX_LOG(ERR, - "mbuf overflow likely due to the RX offset.\n" + PMD_RX_LOG(ERR, "mbuf overflow likely due to the RX offset.\n" "\t\tYour mbuf size should have extra space for" " RX offset=%u bytes.\n" "\t\tCurrently you just have %u bytes available" - " but the received packet is %u bytes long", + " but the received packet is %hu bytes long", hw->rx_offset, rxq->mbuf_size - hw->rx_offset, mb->data_len); @@ -888,8 +886,6 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - PMD_INIT_FUNC_TRACE(); - nfp_net_rx_desc_limits(hw, &min_rx_desc, &max_rx_desc); /* Validating number of descriptors */ @@ -965,9 +961,6 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, return -ENOMEM; } - PMD_RX_LOG(DEBUG, "rxbufs=%p hw_ring=%p dma_addr=0x%" PRIx64, - rxq->rxbufs, rxq->rxds, (unsigned long)rxq->dma); - nfp_net_reset_rx_queue(rxq); rxq->hw = hw; From patchwork Sat Oct 7 02:33:34 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chaoyong He X-Patchwork-Id: 132376 X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Chaoyong He To: dev@dpdk.org Cc: oss-drivers@corigine.com, Chaoyong He , Long Wu , Peng Zhang Subject: [PATCH 06/11] net/nfp: standard the comment style Date: Sat, 7 Oct 2023 10:33:34 +0800 Message-Id: <20231007023339.1546659-7-chaoyong.he@corigine.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20231007023339.1546659-1-chaoyong.he@corigine.com> References: <20231007023339.1546659-1-chaoyong.he@corigine.com> MIME-Version: 1.0
X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions Errors-To: dev-bounces@dpdk.org Follow the DPDK coding style, use the kdoc comment style. Also delete some comments which are not valid anymore and add some comments to help understand the logic.
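As a rough sketch of the kdoc style referred to above (the names example_ring and example_ring_free are made up for this illustration and are not part of the driver), DPDK documentation comments use /** ... */ blocks with @param/@return tags for functions, and trailing /**< ... */ comments for struct fields:

#include <stdint.h>

/** A descriptor ring, with each field documented by a trailing kdoc comment. */
struct example_ring {
    uint32_t rd_p;   /**< Host copy of the ring read pointer */
    uint32_t wr_p;   /**< Host copy of the ring write pointer */
    uint32_t count;  /**< Total number of descriptors in the ring */
};

/**
 * Get the number of free descriptors in the ring.
 *
 * Only the host copies of the read/write pointers are used.
 *
 * @param ring
 *   Ring to check.
 *
 * @return
 *   Number of descriptors still available to the driver.
 */
static inline uint32_t
example_ring_free(const struct example_ring *ring)
{
    uint32_t used = ring->wr_p - ring->rd_p;

    return ring->count - used;
}

The hunks below convert the driver's existing /* ... */ comments to this form.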
Signed-off-by: Chaoyong He Reviewed-by: Long Wu Reviewed-by: Peng Zhang --- drivers/net/nfp/flower/nfp_flower.h | 28 ++-- drivers/net/nfp/flower/nfp_flower_cmsg.c | 2 +- drivers/net/nfp/flower/nfp_flower_cmsg.h | 56 +++---- drivers/net/nfp/flower/nfp_flower_ctrl.c | 6 +- .../net/nfp/flower/nfp_flower_representor.c | 32 ++-- .../net/nfp/flower/nfp_flower_representor.h | 2 +- drivers/net/nfp/nfd3/nfp_nfd3.h | 33 ++-- drivers/net/nfp/nfd3/nfp_nfd3_dp.c | 16 +- drivers/net/nfp/nfdk/nfp_nfdk.h | 41 ++--- drivers/net/nfp/nfdk/nfp_nfdk_dp.c | 6 +- drivers/net/nfp/nfp_common.c | 142 ++++++++---------- drivers/net/nfp/nfp_common.h | 59 ++++---- drivers/net/nfp/nfp_cpp_bridge.c | 2 - drivers/net/nfp/nfp_ctrl.h | 22 +-- drivers/net/nfp/nfp_ethdev.c | 22 ++- drivers/net/nfp/nfp_ethdev_vf.c | 11 +- drivers/net/nfp/nfp_flow.c | 44 +++--- drivers/net/nfp/nfp_flow.h | 10 +- drivers/net/nfp/nfp_rxtx.c | 109 +++++--------- drivers/net/nfp/nfp_rxtx.h | 18 +-- 20 files changed, 284 insertions(+), 377 deletions(-) diff --git a/drivers/net/nfp/flower/nfp_flower.h b/drivers/net/nfp/flower/nfp_flower.h index 244b6daa37..0b4e38cedd 100644 --- a/drivers/net/nfp/flower/nfp_flower.h +++ b/drivers/net/nfp/flower/nfp_flower.h @@ -53,49 +53,49 @@ struct nfp_flower_nfd_func { /* The flower application's private structure */ struct nfp_app_fw_flower { - /* switch domain for this app */ + /** Switch domain for this app */ uint16_t switch_domain_id; - /* Number of VF representors */ + /** Number of VF representors */ uint8_t num_vf_reprs; - /* Number of phyport representors */ + /** Number of phyport representors */ uint8_t num_phyport_reprs; - /* Pointer to the PF vNIC */ + /** Pointer to the PF vNIC */ struct nfp_net_hw *pf_hw; - /* Pointer to a mempool for the ctrlvNIC */ + /** Pointer to a mempool for the Ctrl vNIC */ struct rte_mempool *ctrl_pktmbuf_pool; - /* Pointer to the ctrl vNIC */ + /** Pointer to the ctrl vNIC */ struct nfp_net_hw *ctrl_hw; - /* Ctrl vNIC Rx counter */ + /** Ctrl vNIC Rx counter */ uint64_t ctrl_vnic_rx_count; - /* Ctrl vNIC Tx counter */ + /** Ctrl vNIC Tx counter */ uint64_t ctrl_vnic_tx_count; - /* Array of phyport representors */ + /** Array of phyport representors */ struct nfp_flower_representor *phy_reprs[MAX_FLOWER_PHYPORTS]; - /* Array of VF representors */ + /** Array of VF representors */ struct nfp_flower_representor *vf_reprs[MAX_FLOWER_VFS]; - /* PF representor */ + /** PF representor */ struct nfp_flower_representor *pf_repr; - /* service id of ctrl vnic service */ + /** Service id of Ctrl vNIC service */ uint32_t ctrl_vnic_id; - /* Flower extra features */ + /** Flower extra features */ uint64_t ext_features; struct nfp_flow_priv *flow_priv; struct nfp_mtr_priv *mtr_priv; - /* Function pointers for different NFD version */ + /** Function pointers for different NFD version */ struct nfp_flower_nfd_func nfd_func; }; diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.c b/drivers/net/nfp/flower/nfp_flower_cmsg.c index 5d6912b079..2ec9498d22 100644 --- a/drivers/net/nfp/flower/nfp_flower_cmsg.c +++ b/drivers/net/nfp/flower/nfp_flower_cmsg.c @@ -230,7 +230,7 @@ nfp_flower_cmsg_flow_add(struct nfp_app_fw_flower *app_fw_flower, return -ENOMEM; } - /* copy the flow to mbuf */ + /* Copy the flow to mbuf */ nfp_flow_meta = flow->payload.meta; msg_len = (nfp_flow_meta->key_len + nfp_flow_meta->mask_len + nfp_flow_meta->act_len) << NFP_FL_LW_SIZ; diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h index 9449760145..cb019171b6 100644 --- 
a/drivers/net/nfp/flower/nfp_flower_cmsg.h +++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h @@ -348,7 +348,7 @@ struct nfp_flower_stats_frame { rte_be64_t stats_cookie; }; -/** +/* * See RFC 2698 for more details. * Word[0](Flag options): * [15] p(pps) 1 for pps, 0 for bps @@ -378,40 +378,24 @@ struct nfp_cfg_head { rte_be32_t profile_id; }; -/** - * Struct nfp_profile_conf - profile config, offload to NIC - * @head: config head information - * @bkt_tkn_p: token bucket peak - * @bkt_tkn_c: token bucket committed - * @pbs: peak burst size - * @cbs: committed burst size - * @pir: peak information rate - * @cir: committed information rate - */ +/* Profile config, offload to NIC */ struct nfp_profile_conf { - struct nfp_cfg_head head; - rte_be32_t bkt_tkn_p; - rte_be32_t bkt_tkn_c; - rte_be32_t pbs; - rte_be32_t cbs; - rte_be32_t pir; - rte_be32_t cir; -}; - -/** - * Struct nfp_mtr_stats_reply - meter stats, read from firmware - * @head: config head information - * @pass_bytes: count of passed bytes - * @pass_pkts: count of passed packets - * @drop_bytes: count of dropped bytes - * @drop_pkts: count of dropped packets - */ + struct nfp_cfg_head head; /**< Config head information */ + rte_be32_t bkt_tkn_p; /**< Token bucket peak */ + rte_be32_t bkt_tkn_c; /**< Token bucket committed */ + rte_be32_t pbs; /**< Peak burst size */ + rte_be32_t cbs; /**< Committed burst size */ + rte_be32_t pir; /**< Peak information rate */ + rte_be32_t cir; /**< Committed information rate */ +}; + +/* Meter stats, read from firmware */ struct nfp_mtr_stats_reply { - struct nfp_cfg_head head; - rte_be64_t pass_bytes; - rte_be64_t pass_pkts; - rte_be64_t drop_bytes; - rte_be64_t drop_pkts; + struct nfp_cfg_head head; /**< Config head information */ + rte_be64_t pass_bytes; /**< Count of passed bytes */ + rte_be64_t pass_pkts; /**< Count of passed packets */ + rte_be64_t drop_bytes; /**< Count of dropped bytes */ + rte_be64_t drop_pkts; /**< Count of dropped packets */ }; enum nfp_flower_cmsg_port_type { @@ -851,7 +835,7 @@ struct nfp_fl_act_set_ipv6_addr { }; /* - * ipv6 tc hl fl + * Ipv6 tc hl fl * 3 2 1 * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 * +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ @@ -954,9 +938,9 @@ struct nfp_fl_act_set_tun { uint8_t tos; rte_be16_t outer_vlan_tpid; rte_be16_t outer_vlan_tci; - uint8_t tun_len; /* Only valid for NFP_FL_TUNNEL_GENEVE */ + uint8_t tun_len; /**< Only valid for NFP_FL_TUNNEL_GENEVE */ uint8_t reserved2; - rte_be16_t tun_proto; /* Only valid for NFP_FL_TUNNEL_GENEVE */ + rte_be16_t tun_proto; /**< Only valid for NFP_FL_TUNNEL_GENEVE */ } __rte_packed; /* diff --git a/drivers/net/nfp/flower/nfp_flower_ctrl.c b/drivers/net/nfp/flower/nfp_flower_ctrl.c index 1f4c5fd7f9..d27579d2d8 100644 --- a/drivers/net/nfp/flower/nfp_flower_ctrl.c +++ b/drivers/net/nfp/flower/nfp_flower_ctrl.c @@ -123,7 +123,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue, nb_hold++; rxq->rd_p++; - if (unlikely(rxq->rd_p == rxq->rx_count)) /* wrapping?*/ + if (unlikely(rxq->rd_p == rxq->rx_count)) /* Wrapping */ rxq->rd_p = 0; } @@ -206,7 +206,7 @@ nfp_flower_ctrl_vnic_nfd3_xmit(struct nfp_app_fw_flower *app_fw_flower, txds->offset_eop = FLOWER_PKT_DATA_OFFSET | NFD3_DESC_TX_EOP; txq->wr_p++; - if (unlikely(txq->wr_p == txq->tx_count)) /* wrapping?*/ + if (unlikely(txq->wr_p == txq->tx_count)) /* Wrapping */ txq->wr_p = 0; cnt++; @@ -520,7 +520,7 @@ nfp_flower_ctrl_vnic_poll(struct nfp_app_fw_flower *app_fw_flower) ctrl_hw = app_fw_flower->ctrl_hw; ctrl_eth_dev = 
ctrl_hw->eth_dev; - /* ctrl vNIC only has a single Rx queue */ + /* Ctrl vNIC only has a single Rx queue */ rxq = ctrl_eth_dev->data->rx_queues[0]; while (rte_service_runstate_get(app_fw_flower->ctrl_vnic_id) != 0) { diff --git a/drivers/net/nfp/flower/nfp_flower_representor.c b/drivers/net/nfp/flower/nfp_flower_representor.c index be0dfb2890..9a9a66e4b0 100644 --- a/drivers/net/nfp/flower/nfp_flower_representor.c +++ b/drivers/net/nfp/flower/nfp_flower_representor.c @@ -10,18 +10,12 @@ #include "../nfp_logs.h" #include "../nfp_mtr.h" -/* - * enum nfp_repr_type - type of representor - * @NFP_REPR_TYPE_PHYS_PORT: external NIC port - * @NFP_REPR_TYPE_PF: physical function - * @NFP_REPR_TYPE_VF: virtual function - * @NFP_REPR_TYPE_MAX: number of representor types - */ +/* Type of representor */ enum nfp_repr_type { - NFP_REPR_TYPE_PHYS_PORT, - NFP_REPR_TYPE_PF, - NFP_REPR_TYPE_VF, - NFP_REPR_TYPE_MAX, + NFP_REPR_TYPE_PHYS_PORT, /*<< External NIC port */ + NFP_REPR_TYPE_PF, /*<< Physical function */ + NFP_REPR_TYPE_VF, /*<< Virtual function */ + NFP_REPR_TYPE_MAX, /*<< Number of representor types */ }; static int @@ -86,7 +80,7 @@ nfp_pf_repr_rx_queue_setup(struct rte_eth_dev *dev, rxq->dma = (uint64_t)tz->iova; rxq->rxds = tz->addr; - /* mbuf pointers array for referencing mbufs linked to RX descriptors */ + /* Mbuf pointers array for referencing mbufs linked to RX descriptors */ rxq->rxbufs = rte_zmalloc_socket("rxq->rxbufs", sizeof(*rxq->rxbufs) * nb_desc, RTE_CACHE_LINE_SIZE, socket_id); @@ -159,7 +153,7 @@ nfp_pf_repr_tx_queue_setup(struct rte_eth_dev *dev, txq->tx_count = nb_desc; txq->tx_free_thresh = tx_free_thresh; - /* queue mapping based on firmware configuration */ + /* Queue mapping based on firmware configuration */ txq->qidx = queue_idx; txq->tx_qcidx = queue_idx * hw->stride_tx; txq->qcp_q = hw->tx_bar + NFP_QCP_QUEUE_OFF(txq->tx_qcidx); @@ -170,7 +164,7 @@ nfp_pf_repr_tx_queue_setup(struct rte_eth_dev *dev, txq->dma = (uint64_t)tz->iova; txq->txds = tz->addr; - /* mbuf pointers array for referencing mbufs linked to TX descriptors */ + /* Mbuf pointers array for referencing mbufs linked to TX descriptors */ txq->txbufs = rte_zmalloc_socket("txq->txbufs", sizeof(*txq->txbufs) * nb_desc, RTE_CACHE_LINE_SIZE, socket_id); @@ -185,7 +179,7 @@ nfp_pf_repr_tx_queue_setup(struct rte_eth_dev *dev, /* * Telling the HW about the physical address of the TX ring and number - * of descriptors in log2 format + * of descriptors in log2 format. 
*/ nn_cfg_writeq(hw, NFP_NET_CFG_TXR_ADDR(queue_idx), txq->dma); nn_cfg_writeb(hw, NFP_NET_CFG_TXR_SZ(queue_idx), rte_log2_u32(nb_desc)); @@ -603,7 +597,7 @@ nfp_flower_pf_repr_init(struct rte_eth_dev *eth_dev, /* Memory has been allocated in the eth_dev_create() function */ repr = eth_dev->data->dev_private; - /* Copy data here from the input representor template*/ + /* Copy data here from the input representor template */ repr->vf_id = init_repr_data->vf_id; repr->switch_domain_id = init_repr_data->switch_domain_id; repr->repr_type = init_repr_data->repr_type; @@ -672,7 +666,7 @@ nfp_flower_repr_init(struct rte_eth_dev *eth_dev, return -ENOMEM; } - /* Copy data here from the input representor template*/ + /* Copy data here from the input representor template */ repr->vf_id = init_repr_data->vf_id; repr->switch_domain_id = init_repr_data->switch_domain_id; repr->port_id = init_repr_data->port_id; @@ -752,7 +746,7 @@ nfp_flower_repr_alloc(struct nfp_app_fw_flower *app_fw_flower) nfp_eth_table = app_fw_flower->pf_hw->pf_dev->nfp_eth_table; eth_dev = app_fw_flower->ctrl_hw->eth_dev; - /* Send a NFP_FLOWER_CMSG_TYPE_MAC_REPR cmsg to hardware*/ + /* Send a NFP_FLOWER_CMSG_TYPE_MAC_REPR cmsg to hardware */ ret = nfp_flower_cmsg_mac_repr(app_fw_flower); if (ret != 0) { PMD_INIT_LOG(ERR, "Cloud not send mac repr cmsgs"); @@ -826,7 +820,7 @@ nfp_flower_repr_alloc(struct nfp_app_fw_flower *app_fw_flower) snprintf(flower_repr.name, sizeof(flower_repr.name), "%s_repr_vf%d", pci_name, i); - /* This will also allocate private memory for the device*/ + /* This will also allocate private memory for the device */ ret = rte_eth_dev_create(eth_dev->device, flower_repr.name, sizeof(struct nfp_flower_representor), NULL, NULL, nfp_flower_repr_init, &flower_repr); diff --git a/drivers/net/nfp/flower/nfp_flower_representor.h b/drivers/net/nfp/flower/nfp_flower_representor.h index 5ac5e38186..eda19cbb16 100644 --- a/drivers/net/nfp/flower/nfp_flower_representor.h +++ b/drivers/net/nfp/flower/nfp_flower_representor.h @@ -13,7 +13,7 @@ struct nfp_flower_representor { uint16_t switch_domain_id; uint32_t repr_type; uint32_t port_id; - uint32_t nfp_idx; /* only valid for the repr of physical port */ + uint32_t nfp_idx; /**< Only valid for the repr of physical port */ char name[RTE_ETH_NAME_MAX_LEN]; struct rte_ether_addr mac_addr; struct nfp_app_fw_flower *app_fw_flower; diff --git a/drivers/net/nfp/nfd3/nfp_nfd3.h b/drivers/net/nfp/nfd3/nfp_nfd3.h index 7c56ca4908..0b0ca361f4 100644 --- a/drivers/net/nfp/nfd3/nfp_nfd3.h +++ b/drivers/net/nfp/nfd3/nfp_nfd3.h @@ -17,24 +17,24 @@ struct nfp_net_nfd3_tx_desc { union { struct { - uint8_t dma_addr_hi; /* High bits of host buf address */ - uint16_t dma_len; /* Length to DMA for this desc */ - /* Offset in buf where pkt starts + highest bit is eop flag */ + uint8_t dma_addr_hi; /**< High bits of host buf address */ + uint16_t dma_len; /**< Length to DMA for this desc */ + /** Offset in buf where pkt starts + highest bit is eop flag */ uint8_t offset_eop; - uint32_t dma_addr_lo; /* Low 32bit of host buf addr */ + uint32_t dma_addr_lo; /**< Low 32bit of host buf addr */ - uint16_t mss; /* MSS to be used for LSO */ - uint8_t lso_hdrlen; /* LSO, where the data starts */ - uint8_t flags; /* TX Flags, see @NFD3_DESC_TX_* */ + uint16_t mss; /**< MSS to be used for LSO */ + uint8_t lso_hdrlen; /**< LSO, where the data starts */ + uint8_t flags; /**< TX Flags, see @NFD3_DESC_TX_* */ union { struct { - uint8_t l3_offset; /* L3 header offset */ - uint8_t l4_offset; /* L4 header offset 
*/ + uint8_t l3_offset; /**< L3 header offset */ + uint8_t l4_offset; /**< L4 header offset */ }; - uint16_t vlan; /* VLAN tag to add if indicated */ + uint16_t vlan; /**< VLAN tag to add if indicated */ }; - uint16_t data_len; /* Length of frame + meta data */ + uint16_t data_len; /**< Length of frame + meta data */ } __rte_packed; uint32_t vals[4]; }; @@ -54,13 +54,14 @@ nfp_net_nfd3_free_tx_desc(struct nfp_net_txq *txq) return (free_desc > 8) ? (free_desc - 8) : 0; } -/* - * nfp_net_nfd3_txq_full() - Check if the TX queue free descriptors - * is below tx_free_threshold for firmware of nfd3 - * - * @txq: TX queue to check +/** + * Check if the TX queue free descriptors is below tx_free_threshold + * for firmware with nfd3 * * This function uses the host copy* of read/write pointers. + * + * @param txq + * TX queue to check */ static inline bool nfp_net_nfd3_txq_full(struct nfp_net_txq *txq) diff --git a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c index 51755f4324..a26d4bf4c8 100644 --- a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c +++ b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c @@ -113,14 +113,12 @@ nfp_flower_nfd3_pkt_add_metadata(struct rte_mbuf *mbuf, } /* - * nfp_net_nfd3_tx_vlan() - Set vlan info in the nfd3 tx desc + * Set vlan info in the nfd3 tx desc * * If enable NFP_NET_CFG_CTRL_TXVLAN_V2 - * Vlan_info is stored in the meta and - * is handled in the nfp_net_nfd3_set_meta_vlan() + * Vlan_info is stored in the meta and is handled in the @nfp_net_nfd3_set_meta_vlan() * else if enable NFP_NET_CFG_CTRL_TXVLAN - * Vlan_info is stored in the tx_desc and - * is handled in the nfp_net_nfd3_tx_vlan() + * Vlan_info is stored in the tx_desc and is handled in the @nfp_net_nfd3_tx_vlan() */ static inline void nfp_net_nfd3_tx_vlan(struct nfp_net_txq *txq, @@ -299,7 +297,7 @@ nfp_net_nfd3_xmit_pkts_common(void *tx_queue, nfp_net_nfd3_tx_vlan(txq, &txd, pkt); /* - * mbuf data_len is the data in one segment and pkt_len data + * Mbuf data_len is the data in one segment and pkt_len data * in the whole packet. 
When the packet is just one segment, * then data_len = pkt_len */ @@ -330,7 +328,7 @@ nfp_net_nfd3_xmit_pkts_common(void *tx_queue, free_descs--; txq->wr_p++; - if (unlikely(txq->wr_p == txq->tx_count)) /* wrapping */ + if (unlikely(txq->wr_p == txq->tx_count)) /* Wrapping */ txq->wr_p = 0; pkt_size -= dma_size; @@ -439,7 +437,7 @@ nfp_net_nfd3_tx_queue_setup(struct rte_eth_dev *dev, txq->tx_count = nb_desc * NFD3_TX_DESC_PER_PKT; txq->tx_free_thresh = tx_free_thresh; - /* queue mapping based on firmware configuration */ + /* Queue mapping based on firmware configuration */ txq->qidx = queue_idx; txq->tx_qcidx = queue_idx * hw->stride_tx; txq->qcp_q = hw->tx_bar + NFP_QCP_QUEUE_OFF(txq->tx_qcidx); @@ -449,7 +447,7 @@ nfp_net_nfd3_tx_queue_setup(struct rte_eth_dev *dev, txq->dma = tz->iova; txq->txds = tz->addr; - /* mbuf pointers array for referencing mbufs linked to TX descriptors */ + /* Mbuf pointers array for referencing mbufs linked to TX descriptors */ txq->txbufs = rte_zmalloc_socket("txq->txbufs", sizeof(*txq->txbufs) * txq->tx_count, RTE_CACHE_LINE_SIZE, socket_id); diff --git a/drivers/net/nfp/nfdk/nfp_nfdk.h b/drivers/net/nfp/nfdk/nfp_nfdk.h index 99675b6bd7..04bd3c7600 100644 --- a/drivers/net/nfp/nfdk/nfp_nfdk.h +++ b/drivers/net/nfp/nfdk/nfp_nfdk.h @@ -75,7 +75,7 @@ * dma_addr_hi - bits [47:32] of host memory address * dma_addr_lo - bits [31:0] of host memory address * - * --> metadata descriptor + * --> Metadata descriptor * Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 * -----\ 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 * Word +-------+-----------------------+---------------------+---+-----+ @@ -104,27 +104,27 @@ */ struct nfp_net_nfdk_tx_desc { union { - /* Address descriptor */ + /** Address descriptor */ struct { - uint16_t dma_addr_hi; /* High bits of host buf address */ - uint16_t dma_len_type; /* Length to DMA for this desc */ - uint32_t dma_addr_lo; /* Low 32bit of host buf addr */ + uint16_t dma_addr_hi; /**< High bits of host buf address */ + uint16_t dma_len_type; /**< Length to DMA for this desc */ + uint32_t dma_addr_lo; /**< Low 32bit of host buf addr */ }; - /* TSO descriptor */ + /** TSO descriptor */ struct { - uint16_t mss; /* MSS to be used for LSO */ - uint8_t lso_hdrlen; /* LSO, TCP payload offset */ - uint8_t lso_totsegs; /* LSO, total segments */ - uint8_t l3_offset; /* L3 header offset */ - uint8_t l4_offset; /* L4 header offset */ - uint16_t lso_meta_res; /* Rsvd bits in TSO metadata */ + uint16_t mss; /**< MSS to be used for LSO */ + uint8_t lso_hdrlen; /**< LSO, TCP payload offset */ + uint8_t lso_totsegs; /**< LSO, total segments */ + uint8_t l3_offset; /**< L3 header offset */ + uint8_t l4_offset; /**< L4 header offset */ + uint16_t lso_meta_res; /**< Rsvd bits in TSO metadata */ }; - /* Metadata descriptor */ + /** Metadata descriptor */ struct { - uint8_t flags; /* TX Flags, see @NFDK_DESC_TX_* */ - uint8_t reserved[7]; /* meta byte placeholder */ + uint8_t flags; /**< TX Flags, see @NFDK_DESC_TX_* */ + uint8_t reserved[7]; /**< Meta byte place holder */ }; uint32_t vals[2]; @@ -146,13 +146,14 @@ nfp_net_nfdk_free_tx_desc(struct nfp_net_txq *txq) (free_desc - NFDK_TX_DESC_STOP_CNT) : 0; } -/* - * nfp_net_nfdk_txq_full() - Check if the TX queue free descriptors - * is below tx_free_threshold for firmware of nfdk - * - * @txq: TX queue to check +/** + * Check if the TX queue free descriptors is below tx_free_threshold + * for firmware of nfdk * * This function uses the host copy* of read/write pointers. 
+ * + * @param txq + * TX queue to check */ static inline bool nfp_net_nfdk_txq_full(struct nfp_net_txq *txq) diff --git a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c index dae87ac6df..0e1f72cee8 100644 --- a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c +++ b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c @@ -478,7 +478,7 @@ nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev, /* * Free memory prior to re-allocation if needed. This is the case after - * calling nfp_net_stop + * calling nfp_net_stop() */ if (dev->data->tx_queues[queue_idx] != NULL) { PMD_TX_LOG(DEBUG, "Freeing memory prior to re-allocation %d", @@ -513,7 +513,7 @@ nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev, txq->tx_count = nb_desc * NFDK_TX_DESC_PER_SIMPLE_PKT; txq->tx_free_thresh = tx_free_thresh; - /* queue mapping based on firmware configuration */ + /* Queue mapping based on firmware configuration */ txq->qidx = queue_idx; txq->tx_qcidx = queue_idx * hw->stride_tx; txq->qcp_q = hw->tx_bar + NFP_QCP_QUEUE_OFF(txq->tx_qcidx); @@ -523,7 +523,7 @@ nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev, txq->dma = tz->iova; txq->ktxds = tz->addr; - /* mbuf pointers array for referencing mbufs linked to TX descriptors */ + /* Mbuf pointers array for referencing mbufs linked to TX descriptors */ txq->txbufs = rte_zmalloc_socket("txq->txbufs", sizeof(*txq->txbufs) * txq->tx_count, RTE_CACHE_LINE_SIZE, socket_id); diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c index f48e1930dc..ed3c5c15d2 100644 --- a/drivers/net/nfp/nfp_common.c +++ b/drivers/net/nfp/nfp_common.c @@ -55,7 +55,7 @@ struct nfp_xstat { } static const struct nfp_xstat nfp_net_xstats[] = { - /** + /* * Basic xstats available on both VF and PF. * Note that in case new statistics of group NFP_XSTAT_GROUP_NET * are added to this array, they must appear before any statistics @@ -80,7 +80,7 @@ static const struct nfp_xstat nfp_net_xstats[] = { NFP_XSTAT_NET("bpf_app2_bytes", APP2_BYTES), NFP_XSTAT_NET("bpf_app3_pkts", APP3_FRAMES), NFP_XSTAT_NET("bpf_app3_bytes", APP3_BYTES), - /** + /* * MAC xstats available only on PF. These statistics are not available for VFs as the * PF is not initialized when the VF is initialized as it is still bound to the kernel * driver. As such, the PMD cannot obtain a CPP handle and access the rtsym_table in order @@ -175,7 +175,7 @@ static void nfp_net_notify_port_speed(struct nfp_net_hw *hw, struct rte_eth_link *link) { - /** + /* * Read the link status from NFP_NET_CFG_STS. If the link is down * then write the link speed NFP_NET_CFG_STS_LINK_RATE_UNKNOWN to * NFP_NET_CFG_STS_NSP_LINK_RATE. @@ -184,7 +184,7 @@ nfp_net_notify_port_speed(struct nfp_net_hw *hw, nn_cfg_writew(hw, NFP_NET_CFG_STS_NSP_LINK_RATE, NFP_NET_CFG_STS_LINK_RATE_UNKNOWN); return; } - /** + /* * Link is up so write the link speed from the eth_table to * NFP_NET_CFG_STS_NSP_LINK_RATE. */ @@ -214,7 +214,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw, nfp_qcp_ptr_add(hw->qcp_cfg, NFP_QCP_WRITE_PTR, 1); wait.tv_sec = 0; - wait.tv_nsec = 1000000; + wait.tv_nsec = 1000000; /* 1ms */ PMD_DRV_LOG(DEBUG, "Polling for update ack..."); @@ -253,7 +253,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw, * * @return * - (0) if OK to reconfigure the device. - * - (EIO) if I/O err and fail to reconfigure the device. + * - (-EIO) if I/O err and fail to reconfigure the device. */ int nfp_net_reconfig(struct nfp_net_hw *hw, @@ -297,7 +297,7 @@ nfp_net_reconfig(struct nfp_net_hw *hw, * * @return * - (0) if OK to reconfigure the device. 
- * - (EIO) if I/O err and fail to reconfigure the device. + * - (-EIO) if I/O err and fail to reconfigure the device. */ int nfp_net_ext_reconfig(struct nfp_net_hw *hw, @@ -368,9 +368,15 @@ nfp_net_mbox_reconfig(struct nfp_net_hw *hw, } /* - * Configure an Ethernet device. This function must be invoked first - * before any other function in the Ethernet API. This function can - * also be re-invoked when a device is in the stopped state. + * Configure an Ethernet device. + * + * This function must be invoked first before any other function in the Ethernet API. + * This function can also be re-invoked when a device is in the stopped state. + * + * A DPDK app sends info about how many queues to use and how those queues + * need to be configured. This is used by the DPDK core and it makes sure no + * more queues than those advertised by the driver are requested. + * This function is called after that internal process. */ int nfp_net_configure(struct rte_eth_dev *dev) @@ -382,14 +388,6 @@ nfp_net_configure(struct rte_eth_dev *dev) hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - /* - * A DPDK app sends info about how many queues to use and how - * those queues need to be configured. This is used by the - * DPDK core and it makes sure no more queues than those - * advertised by the driver are requested. This function is - * called after that internal process - */ - dev_conf = &dev->data->dev_conf; rxmode = &dev_conf->rxmode; txmode = &dev_conf->txmode; @@ -557,12 +555,12 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, /* Writing new MAC to the specific port BAR address */ nfp_net_write_mac(hw, (uint8_t *)mac_addr); - /* Signal the NIC about the change */ update = NFP_NET_CFG_UPDATE_MACADDR; ctrl = hw->ctrl; if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 && (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0) ctrl |= NFP_NET_CFG_CTRL_LIVE_ADDR; + /* Signal the NIC about the change */ if (nfp_net_reconfig(hw, ctrl, update) != 0) { PMD_DRV_LOG(ERR, "MAC address update failed"); return -EIO; @@ -706,10 +704,6 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev) new_ctrl = hw->ctrl | NFP_NET_CFG_CTRL_PROMISC; update = NFP_NET_CFG_UPDATE_GEN; - /* - * DPDK sets promiscuous mode on just after this call assuming - * it can not fail ... - */ ret = nfp_net_reconfig(hw, new_ctrl, update); if (ret != 0) return ret; @@ -737,10 +731,6 @@ nfp_net_promisc_disable(struct rte_eth_dev *dev) new_ctrl = hw->ctrl & ~NFP_NET_CFG_CTRL_PROMISC; update = NFP_NET_CFG_UPDATE_GEN; - /* - * DPDK sets promiscuous mode off just before this call - * assuming it can not fail ... - */ ret = nfp_net_reconfig(hw, new_ctrl, update); if (ret != 0) return ret; @@ -751,7 +741,7 @@ nfp_net_promisc_disable(struct rte_eth_dev *dev) } /* - * return 0 means link status changed, -1 means not changed + * Return 0 means link status changed, -1 means not changed * * Wait to complete is needed as it can take up to 9 seconds to get the Link * status. @@ -793,7 +783,7 @@ nfp_net_link_update(struct rte_eth_dev *dev, } } } else { - /** + /* * Shift and mask nn_link_status so that it is effectively the value * at offset NFP_NET_CFG_STS_NSP_LINK_RATE. */ @@ -812,7 +802,7 @@ nfp_net_link_update(struct rte_eth_dev *dev, PMD_DRV_LOG(INFO, "NIC Link is Down"); } - /** + /* * Notify the port to update the speed value in the CTRL BAR from NSP. * Not applicable for VFs as the associated PF is still attached to the * kernel driver. 
@@ -833,11 +823,9 @@ nfp_net_stats_get(struct rte_eth_dev *dev, hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - /* RTE_ETHDEV_QUEUE_STAT_CNTRS default value is 16 */ - memset(&nfp_dev_stats, 0, sizeof(nfp_dev_stats)); - /* reading per RX ring stats */ + /* Reading per RX ring stats */ for (i = 0; i < dev->data->nb_rx_queues; i++) { if (i == RTE_ETHDEV_QUEUE_STAT_CNTRS) break; @@ -855,7 +843,7 @@ nfp_net_stats_get(struct rte_eth_dev *dev, hw->eth_stats_base.q_ibytes[i]; } - /* reading per TX ring stats */ + /* Reading per TX ring stats */ for (i = 0; i < dev->data->nb_tx_queues; i++) { if (i == RTE_ETHDEV_QUEUE_STAT_CNTRS) break; @@ -889,7 +877,7 @@ nfp_net_stats_get(struct rte_eth_dev *dev, nfp_dev_stats.obytes -= hw->eth_stats_base.obytes; - /* reading general device stats */ + /* Reading general device stats */ nfp_dev_stats.ierrors = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS); @@ -915,6 +903,10 @@ nfp_net_stats_get(struct rte_eth_dev *dev, return -EINVAL; } +/* + * hw->eth_stats_base records the per counter starting point. + * Lets update it now + */ int nfp_net_stats_reset(struct rte_eth_dev *dev) { @@ -923,12 +915,7 @@ nfp_net_stats_reset(struct rte_eth_dev *dev) hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - /* - * hw->eth_stats_base records the per counter starting point. - * Lets update it now - */ - - /* reading per RX ring stats */ + /* Reading per RX ring stats */ for (i = 0; i < dev->data->nb_rx_queues; i++) { if (i == RTE_ETHDEV_QUEUE_STAT_CNTRS) break; @@ -940,7 +927,7 @@ nfp_net_stats_reset(struct rte_eth_dev *dev) nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8); } - /* reading per TX ring stats */ + /* Reading per TX ring stats */ for (i = 0; i < dev->data->nb_tx_queues; i++) { if (i == RTE_ETHDEV_QUEUE_STAT_CNTRS) break; @@ -964,7 +951,7 @@ nfp_net_stats_reset(struct rte_eth_dev *dev) hw->eth_stats_base.obytes = nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS); - /* reading general device stats */ + /* Reading general device stats */ hw->eth_stats_base.ierrors = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS); @@ -1032,7 +1019,7 @@ nfp_net_xstats_value(const struct rte_eth_dev *dev, if (raw) return value; - /** + /* * A baseline value of each statistic counter is recorded when stats are "reset". * Thus, the value returned by this function need to be decremented by this * baseline value. The result is the count of this statistic since the last time @@ -1041,12 +1028,12 @@ nfp_net_xstats_value(const struct rte_eth_dev *dev, return value - hw->eth_xstats_base[index].value; } +/* NOTE: All callers ensure dev is always set. */ int nfp_net_xstats_get_names(struct rte_eth_dev *dev, struct rte_eth_xstat_name *xstats_names, unsigned int size) { - /* NOTE: All callers ensure dev is always set. */ uint32_t id; uint32_t nfp_size; uint32_t read_size; @@ -1066,12 +1053,12 @@ nfp_net_xstats_get_names(struct rte_eth_dev *dev, return read_size; } +/* NOTE: All callers ensure dev is always set. */ int nfp_net_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, unsigned int n) { - /* NOTE: All callers ensure dev is always set. */ uint32_t id; uint32_t nfp_size; uint32_t read_size; @@ -1092,16 +1079,16 @@ nfp_net_xstats_get(struct rte_eth_dev *dev, return read_size; } +/* + * NOTE: The only caller rte_eth_xstats_get_names_by_id() ensures dev, + * ids, xstats_names and size are valid, and non-NULL. 
+ */ int nfp_net_xstats_get_names_by_id(struct rte_eth_dev *dev, const uint64_t *ids, struct rte_eth_xstat_name *xstats_names, unsigned int size) { - /** - * NOTE: The only caller rte_eth_xstats_get_names_by_id() ensures dev, - * ids, xstats_names and size are valid, and non-NULL. - */ uint32_t i; uint32_t read_size; @@ -1123,16 +1110,16 @@ nfp_net_xstats_get_names_by_id(struct rte_eth_dev *dev, return read_size; } +/* + * NOTE: The only caller rte_eth_xstats_get_by_id() ensures dev, + * ids, values and n are valid, and non-NULL. + */ int nfp_net_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids, uint64_t *values, unsigned int n) { - /** - * NOTE: The only caller rte_eth_xstats_get_by_id() ensures dev, - * ids, values and n are valid, and non-NULL. - */ uint32_t i; uint32_t read_size; @@ -1167,10 +1154,7 @@ nfp_net_xstats_reset(struct rte_eth_dev *dev) hw->eth_xstats_base[id].id = id; hw->eth_xstats_base[id].value = nfp_net_xstats_value(dev, id, true); } - /** - * Successfully reset xstats, now call function to reset basic stats - * return value is then based on the success of that function - */ + /* Successfully reset xstats, now call function to reset basic stats. */ return nfp_net_stats_reset(dev); } @@ -1217,7 +1201,7 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->max_rx_queues = (uint16_t)hw->max_rx_queues; dev_info->max_tx_queues = (uint16_t)hw->max_tx_queues; dev_info->min_rx_bufsize = RTE_ETHER_MIN_MTU; - /* + /** * The maximum rx packet length (max_rx_pktlen) is set to the * maximum supported frame size that the NFP can handle. This * includes layer 2 headers, CRC and other metadata that can @@ -1358,7 +1342,7 @@ nfp_net_common_init(struct rte_pci_device *pci_dev, nfp_net_init_metadata_format(hw); - /* read the Rx offset configured from firmware */ + /* Read the Rx offset configured from firmware */ if (hw->ver.major < 2) hw->rx_offset = NFP_NET_RX_OFFSET; else @@ -1375,7 +1359,6 @@ const uint32_t * nfp_net_supported_ptypes_get(struct rte_eth_dev *dev) { static const uint32_t ptypes[] = { - /* refers to nfp_net_set_hash() */ RTE_PTYPE_INNER_L3_IPV4, RTE_PTYPE_INNER_L3_IPV6, RTE_PTYPE_INNER_L3_IPV6_EXT, @@ -1449,10 +1432,8 @@ nfp_net_dev_link_status_print(struct rte_eth_dev *dev) pci_dev->addr.devid, pci_dev->addr.function); } -/* Interrupt configuration and handling */ - /* - * nfp_net_irq_unmask - Unmask an interrupt + * Unmask an interrupt * * If MSI-X auto-masking is enabled clear the mask bit, otherwise * clear the ICR for the entry. @@ -1478,16 +1459,14 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev) } } -/* +/** * Interrupt handler which shall be registered for alarm callback for delayed * handling specific interrupt to wait for the stable nic state. As the NIC * interrupt state is not stable for nfp after link is just down, it needs * to wait 4 seconds to get the stable status. * - * @param handle Pointer to interrupt handle. 
- * @param param The address of parameter (struct rte_eth_dev *) - * - * @return void + * @param param + * The address of parameter (struct rte_eth_dev *) */ void nfp_net_dev_interrupt_delayed_handler(void *param) @@ -1516,13 +1495,12 @@ nfp_net_dev_interrupt_handler(void *param) nfp_net_link_update(dev, 0); - /* likely to up */ + /* Likely to up */ if (link.link_status == 0) { - /* handle it 1 sec later, wait it being stable */ + /* Handle it 1 sec later, wait it being stable */ timeout = NFP_NET_LINK_UP_CHECK_TIMEOUT; - /* likely to down */ - } else { - /* handle it 4 sec later, wait it being stable */ + } else { /* Likely to down */ + /* Handle it 4 sec later, wait it being stable */ timeout = NFP_NET_LINK_DOWN_CHECK_TIMEOUT; } @@ -1543,7 +1521,7 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - /* mtu setting is forbidden if port is started */ + /* MTU setting is forbidden if port is started */ if (dev->data->dev_started) { PMD_DRV_LOG(ERR, "port %d must be stopped before configuration", dev->data->port_id); @@ -1557,7 +1535,7 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, return -ERANGE; } - /* writing to configuration space */ + /* Writing to configuration space */ nn_cfg_writel(hw, NFP_NET_CFG_MTU, mtu); hw->mtu = mtu; @@ -1653,8 +1631,8 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev, for (j = 0; j < 4; j++) { if ((mask & (0x1 << j)) == 0) continue; + /* Clearing the entry bits */ if (mask != 0xF) - /* Clearing the entry bits */ reta &= ~(0xFF << (8 * j)); reta |= reta_conf[idx].reta[shift + j] << (8 * j); } @@ -1689,7 +1667,7 @@ nfp_net_reta_update(struct rte_eth_dev *dev, return 0; } - /* Query Redirection Table(RETA) of Receive Side Scaling of Ethernet device. */ +/* Query Redirection Table(RETA) of Receive Side Scaling of Ethernet device. */ int nfp_net_reta_query(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, @@ -1717,7 +1695,7 @@ nfp_net_reta_query(struct rte_eth_dev *dev, /* * Reading Redirection Table. There are 128 8bit-entries which can be - * manage as 32 32bit-entries + * manage as 32 32bit-entries. */ for (i = 0; i < reta_size; i += 4) { /* Handling 4 RSS entries per loop */ @@ -1751,7 +1729,7 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev, hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - /* Writing the key byte a byte */ + /* Writing the key byte by byte */ for (i = 0; i < rss_conf->rss_key_len; i++) { memcpy(&key, &rss_conf->rss_key[i], 1); nn_cfg_writeb(hw, NFP_NET_CFG_RSS_KEY + i, key); @@ -1786,7 +1764,7 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev, cfg_rss_ctrl |= NFP_NET_CFG_RSS_MASK; cfg_rss_ctrl |= NFP_NET_CFG_RSS_TOEPLITZ; - /* configuring where to apply the RSS hash */ + /* Configuring where to apply the RSS hash */ nn_cfg_writel(hw, NFP_NET_CFG_RSS_CTRL, cfg_rss_ctrl); /* Writing the key size */ @@ -1809,7 +1787,7 @@ nfp_net_rss_hash_update(struct rte_eth_dev *dev, /* Checking if RSS is enabled */ if ((hw->ctrl & NFP_NET_CFG_CTRL_RSS_ANY) == 0) { - if (rss_hf != 0) { /* Enable RSS? 
*/ + if (rss_hf != 0) { PMD_DRV_LOG(ERR, "RSS unsupported"); return -EINVAL; } diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h index 9cb889c4a6..b41d834165 100644 --- a/drivers/net/nfp/nfp_common.h +++ b/drivers/net/nfp/nfp_common.h @@ -53,7 +53,7 @@ enum nfp_app_fw_id { NFP_APP_FW_FLOWER_NIC = 0x3, }; -/* nfp_qcp_ptr - Read or Write Pointer of a queue */ +/* Read or Write Pointer of a queue */ enum nfp_qcp_ptr { NFP_QCP_READ_PTR = 0, NFP_QCP_WRITE_PTR @@ -72,15 +72,15 @@ struct nfp_net_tlv_caps { }; struct nfp_pf_dev { - /* Backpointer to associated pci device */ + /** Backpointer to associated pci device */ struct rte_pci_device *pci_dev; enum nfp_app_fw_id app_fw_id; - /* Pointer to the app running on the PF */ + /** Pointer to the app running on the PF */ void *app_fw_priv; - /* The eth table reported by firmware */ + /** The eth table reported by firmware */ struct nfp_eth_table *nfp_eth_table; uint8_t *ctrl_bar; @@ -94,17 +94,17 @@ struct nfp_pf_dev { struct nfp_hwinfo *hwinfo; struct nfp_rtsym_table *sym_tbl; - /* service id of cpp bridge service */ + /** Service id of cpp bridge service */ uint32_t cpp_bridge_id; }; struct nfp_app_fw_nic { - /* Backpointer to the PF device */ + /** Backpointer to the PF device */ struct nfp_pf_dev *pf_dev; /* - * Array of physical ports belonging to the this CoreNIC app - * This is really a list of vNIC's. One for each physical port + * Array of physical ports belonging to this CoreNIC app. + * This is really a list of vNIC's, one for each physical port. */ struct nfp_net_hw *ports[NFP_MAX_PHYPORTS]; @@ -113,13 +113,13 @@ struct nfp_app_fw_nic { }; struct nfp_net_hw { - /* Backpointer to the PF this port belongs to */ + /** Backpointer to the PF this port belongs to */ struct nfp_pf_dev *pf_dev; - /* Backpointer to the eth_dev of this port*/ + /** Backpointer to the eth_dev of this port*/ struct rte_eth_dev *eth_dev; - /* Info from the firmware */ + /** Info from the firmware */ struct nfp_net_fw_ver ver; uint32_t cap; uint32_t max_mtu; @@ -130,7 +130,7 @@ struct nfp_net_hw { /** NFP ASIC params */ const struct nfp_dev_info *dev_info; - /* Current values for control */ + /** Current values for control */ uint32_t ctrl; uint8_t *ctrl_bar; @@ -156,7 +156,7 @@ struct nfp_net_hw { struct rte_ether_addr mac_addr; - /* Records starting point for counters */ + /** Records starting point for counters */ struct rte_eth_stats eth_stats_base; struct rte_eth_xstat *eth_xstats_base; @@ -166,9 +166,9 @@ struct nfp_net_hw { uint8_t *mac_stats_bar; uint8_t *mac_stats; - /* Sequential physical port number, only valid for CoreNIC firmware */ + /** Sequential physical port number, only valid for CoreNIC firmware */ uint8_t idx; - /* Internal port number as seen from NFP */ + /** Internal port number as seen from NFP */ uint8_t nfp_idx; struct nfp_net_tlv_caps tlv_caps; @@ -240,10 +240,6 @@ nn_writeq(uint64_t val, nn_writel(val, addr); } -/* - * Functions to read/write from/to Config BAR - * Performs any endian conversion necessary. - */ static inline uint8_t nn_cfg_readb(struct nfp_net_hw *hw, uint32_t off) @@ -304,11 +300,15 @@ nn_cfg_writeq(struct nfp_net_hw *hw, nn_writeq(rte_cpu_to_le_64(val), hw->ctrl_bar + off); } -/* - * nfp_qcp_ptr_add - Add the value to the selected pointer of a queue - * @q: Base address for queue structure - * @ptr: Add to the Read or Write pointer - * @val: Value to add to the queue pointer +/** + * Add the value to the selected pointer of a queue. 
+ * + * @param q + * Base address for queue structure + * @param ptr + * Add to the read or write pointer + * @param val + * Value to add to the queue pointer */ static inline void nfp_qcp_ptr_add(uint8_t *q, @@ -325,10 +325,13 @@ nfp_qcp_ptr_add(uint8_t *q, nn_writel(rte_cpu_to_le_32(val), q + off); } -/* - * nfp_qcp_read - Read the current Read/Write pointer value for a queue - * @q: Base address for queue structure - * @ptr: Read or Write pointer +/** + * Read the current read/write pointer value for a queue. + * + * @param q + * Base address for queue structure + * @param ptr + * Read or Write pointer */ static inline uint32_t nfp_qcp_read(uint8_t *q, diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c index 222cfdcbc3..b5bfe17d0e 100644 --- a/drivers/net/nfp/nfp_cpp_bridge.c +++ b/drivers/net/nfp/nfp_cpp_bridge.c @@ -1,8 +1,6 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2014-2021 Netronome Systems, Inc. * All rights reserved. - * - * Small portions derived from code Copyright(c) 2010-2015 Intel Corporation. */ #include "nfp_cpp_bridge.h" diff --git a/drivers/net/nfp/nfp_ctrl.h b/drivers/net/nfp/nfp_ctrl.h index 55073c3cea..a13f95894a 100644 --- a/drivers/net/nfp/nfp_ctrl.h +++ b/drivers/net/nfp/nfp_ctrl.h @@ -20,7 +20,7 @@ /* Offset in Freelist buffer where packet starts on RX */ #define NFP_NET_RX_OFFSET 32 -/* working with metadata api (NFD version > 3.0) */ +/* Working with metadata api (NFD version > 3.0) */ #define NFP_NET_META_FIELD_SIZE 4 #define NFP_NET_META_FIELD_MASK ((1 << NFP_NET_META_FIELD_SIZE) - 1) #define NFP_NET_META_HEADER_SIZE 4 @@ -36,7 +36,7 @@ NFP_NET_META_VLAN_TPID_MASK) /* Prepend field types */ -#define NFP_NET_META_HASH 1 /* next field carries hash type */ +#define NFP_NET_META_HASH 1 /* Next field carries hash type */ #define NFP_NET_META_VLAN 4 #define NFP_NET_META_PORTID 5 #define NFP_NET_META_IPSEC 9 @@ -205,7 +205,7 @@ struct nfp_net_fw_ver { * @NFP_NET_CFG_SPARE_ADDR: DMA address for ME code to use (e.g. YDS-155 fix) */ #define NFP_NET_CFG_SPARE_ADDR 0x0050 -/** +/* * NFP6000/NFP4000 - Prepend configuration */ #define NFP_NET_CFG_RX_OFFSET 0x0050 @@ -330,7 +330,7 @@ struct nfp_net_fw_ver { /* * General device stats (0x0d00 - 0x0d90) - * all counters are 64bit. + * All counters are 64bit. */ #define NFP_NET_CFG_STATS_BASE 0x0d00 #define NFP_NET_CFG_STATS_RX_DISCARDS (NFP_NET_CFG_STATS_BASE + 0x00) @@ -364,7 +364,7 @@ struct nfp_net_fw_ver { /* * Per ring stats (0x1000 - 0x1800) - * options, 64bit per entry + * Options, 64bit per entry * @NFP_NET_CFG_TXR_STATS: TX ring statistics (Packet and Byte count) * @NFP_NET_CFG_RXR_STATS: RX ring statistics (Packet and Byte count) */ @@ -375,9 +375,9 @@ struct nfp_net_fw_ver { #define NFP_NET_CFG_RXR_STATS(_x) (NFP_NET_CFG_RXR_STATS_BASE + \ ((_x) * 0x10)) -/** +/* * Mac stats (0x0000 - 0x0200) - * all counters are 64bit. + * All counters are 64bit. 
*/ #define NFP_MAC_STATS_BASE 0x0000 #define NFP_MAC_STATS_SIZE 0x0200 @@ -558,9 +558,11 @@ struct nfp_net_fw_ver { int nfp_net_tlv_caps_parse(struct rte_eth_dev *dev); -/* - * nfp_net_cfg_ctrl_rss() - Get RSS flag based on firmware's capability - * @hw_cap: The firmware's capabilities +/** + * Get RSS flag based on firmware's capability + * + * @param hw_cap + * The firmware's capabilities */ static inline uint32_t nfp_net_cfg_ctrl_rss(uint32_t hw_cap) diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c index 72abc4c16e..dece821e4a 100644 --- a/drivers/net/nfp/nfp_ethdev.c +++ b/drivers/net/nfp/nfp_ethdev.c @@ -66,7 +66,7 @@ nfp_net_start(struct rte_eth_dev *dev) /* Enabling the required queues in the device */ nfp_net_enable_queues(dev); - /* check and configure queue intr-vector mapping */ + /* Check and configure queue intr-vector mapping */ if (dev->data->dev_conf.intr_conf.rxq != 0) { if (app_fw_nic->multiport) { PMD_INIT_LOG(ERR, "PMD rx interrupt is not supported " @@ -273,11 +273,11 @@ nfp_net_close(struct rte_eth_dev *dev) /* Clear ipsec */ nfp_ipsec_uninit(dev); - /* Cancel possible impending LSC work here before releasing the port*/ + /* Cancel possible impending LSC work here before releasing the port */ rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler, (void *)dev); /* Only free PF resources after all physical ports have been closed */ - /* Mark this port as unused and free device priv resources*/ + /* Mark this port as unused and free device priv resources */ nn_cfg_writeb(hw, NFP_NET_CFG_LSC, 0xff); app_fw_nic->ports[hw->idx] = NULL; rte_eth_dev_release_port(dev); @@ -300,15 +300,10 @@ nfp_net_close(struct rte_eth_dev *dev) rte_intr_disable(pci_dev->intr_handle); - /* unregister callback func from eal lib */ + /* Unregister callback func from eal lib */ rte_intr_callback_unregister(pci_dev->intr_handle, nfp_net_dev_interrupt_handler, (void *)dev); - /* - * The ixgbe PMD disables the pcie master on the - * device. The i40e does not... 
- */ - return 0; } @@ -842,8 +837,9 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev, eth_dev->device = &pf_dev->pci_dev->device; - /* ctrl/tx/rx BAR mappings and remaining init happens in - * nfp_net_init + /* + * Ctrl/tx/rx BAR mappings and remaining init happens in + * @nfp_net_init() */ ret = nfp_net_init(eth_dev); if (ret != 0) { @@ -970,7 +966,7 @@ nfp_pf_init(struct rte_pci_device *pci_dev) pf_dev->pci_dev = pci_dev; pf_dev->nfp_eth_table = nfp_eth_table; - /* configure access to tx/rx vNIC BARs */ + /* Configure access to tx/rx vNIC BARs */ addr = nfp_qcp_queue_offset(dev_info, 0); cpp_id = NFP_CPP_ISLAND_ID(0, NFP_CPP_ACTION_RW, 0, 0); @@ -1011,7 +1007,7 @@ nfp_pf_init(struct rte_pci_device *pci_dev) goto hwqueues_cleanup; } - /* register the CPP bridge service here for primary use */ + /* Register the CPP bridge service here for primary use */ ret = nfp_enable_cpp_service(pf_dev); if (ret != 0) PMD_INIT_LOG(INFO, "Enable cpp service failed."); diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c index d3c3c9e953..0a1eb04294 100644 --- a/drivers/net/nfp/nfp_ethdev_vf.c +++ b/drivers/net/nfp/nfp_ethdev_vf.c @@ -47,7 +47,7 @@ nfp_netvf_start(struct rte_eth_dev *dev) /* Enabling the required queues in the device */ nfp_net_enable_queues(dev); - /* check and configure queue intr-vector mapping */ + /* Check and configure queue intr-vector mapping */ if (dev->data->dev_conf.intr_conf.rxq != 0) { if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) { /* @@ -182,18 +182,13 @@ nfp_netvf_close(struct rte_eth_dev *dev) rte_intr_disable(pci_dev->intr_handle); - /* unregister callback func from eal lib */ + /* Unregister callback func from eal lib */ rte_intr_callback_unregister(pci_dev->intr_handle, nfp_net_dev_interrupt_handler, (void *)dev); - /* Cancel possible impending LSC work here before releasing the port*/ + /* Cancel possible impending LSC work here before releasing the port */ rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler, (void *)dev); - /* - * The ixgbe PMD disables the pcie master on the - * device. The i40e does not... - */ - return 0; } diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c index 476eb0c7f8..7b1abe926e 100644 --- a/drivers/net/nfp/nfp_flow.c +++ b/drivers/net/nfp/nfp_flow.c @@ -118,21 +118,21 @@ struct vxlan_data { #define NVGRE_V4_LEN (sizeof(struct rte_ether_hdr) + \ sizeof(struct rte_ipv4_hdr) + \ sizeof(struct rte_flow_item_gre) + \ - sizeof(rte_be32_t)) /* gre key */ + sizeof(rte_be32_t)) /* Gre key */ #define NVGRE_V6_LEN (sizeof(struct rte_ether_hdr) + \ sizeof(struct rte_ipv6_hdr) + \ sizeof(struct rte_flow_item_gre) + \ - sizeof(rte_be32_t)) /* gre key */ + sizeof(rte_be32_t)) /* Gre key */ /* Process structure associated with a flow item */ struct nfp_flow_item_proc { - /* Bit-mask for fields supported by this PMD. */ + /** Bit-mask for fields supported by this PMD. */ const void *mask_support; - /* Bit-mask to use when @p item->mask is not provided. */ + /** Bit-mask to use when @p item->mask is not provided. */ const void *mask_default; - /* Size in bytes for @p mask_support and @p mask_default. */ + /** Size in bytes for @p mask_support and @p mask_default. */ const unsigned int mask_sz; - /* Merge a pattern item into a flow rule handle. */ + /** Merge a pattern item into a flow rule handle. 
*/ int (*merge)(struct nfp_app_fw_flower *app_fw_flower, struct rte_flow *nfp_flow, char **mbuf_off, @@ -140,7 +140,7 @@ struct nfp_flow_item_proc { const struct nfp_flow_item_proc *proc, bool is_mask, bool is_outer_layer); - /* List of possible subsequent items. */ + /** List of possible subsequent items. */ const enum rte_flow_item_type *const next_item; }; @@ -318,14 +318,14 @@ nfp_check_mask_add(struct nfp_flow_priv *priv, mask_entry = nfp_mask_table_search(priv, mask_data, mask_len); if (mask_entry == NULL) { - /* mask entry does not exist, let's create one */ + /* Mask entry does not exist, let's create one */ ret = nfp_mask_table_add(priv, mask_data, mask_len, mask_id); if (ret != 0) return false; *meta_flags |= NFP_FL_META_FLAG_MANAGE_MASK; } else { - /* mask entry already exist */ + /* Mask entry already exist */ mask_entry->ref_cnt++; *mask_id = mask_entry->mask_id; } @@ -785,7 +785,7 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[], case RTE_FLOW_ITEM_TYPE_ETH: PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_ETH detected"); /* - * eth is set with no specific params. + * Eth is set with no specific params. * NFP does not need this. */ if (item->spec == NULL) @@ -1273,7 +1273,7 @@ nfp_flow_merge_ipv4(__rte_unused struct nfp_app_fw_flower *app_fw_flower, } /* - * reserve space for L4 info. + * Reserve space for L4 info. * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv4 */ if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0) @@ -1356,7 +1356,7 @@ nfp_flow_merge_ipv6(__rte_unused struct nfp_app_fw_flower *app_fw_flower, } /* - * reserve space for L4 info. + * Reserve space for L4 info. * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv6 */ if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0) @@ -3330,9 +3330,9 @@ nfp_flow_action_raw_encap(struct nfp_app_fw_flower *app_fw_flower, return -EINVAL; } - /* Pre_tunnel action must be the first on action list. - * If other actions already exist, they need to be - * pushed forward. + /** + * Pre_tunnel action must be the first on action list. + * If other actions already exist, they need to be pushed forward. 
*/ act_len = act_data - actions; if (act_len != 0) { @@ -4290,7 +4290,7 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev) goto free_mask_id; } - /* flow stats */ + /* Flow stats */ rte_spinlock_init(&priv->stats_lock); stats_size = (ctx_count & NFP_FL_STAT_ID_STAT) | ((ctx_split - 1) & NFP_FL_STAT_ID_MU_NUM); @@ -4304,7 +4304,7 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev) goto free_stats_id; } - /* mask table */ + /* Mask table */ mask_hash_params.hash_func_init_val = priv->hash_seed; priv->mask_table = rte_hash_create(&mask_hash_params); if (priv->mask_table == NULL) { @@ -4313,7 +4313,7 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev) goto free_stats; } - /* flow table */ + /* Flow table */ flow_hash_params.hash_func_init_val = priv->hash_seed; flow_hash_params.entries = ctx_count; priv->flow_table = rte_hash_create(&flow_hash_params); @@ -4323,7 +4323,7 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev) goto free_mask_table; } - /* pre tunnel table */ + /* Pre tunnel table */ priv->pre_tun_cnt = 1; pre_tun_hash_params.hash_func_init_val = priv->hash_seed; priv->pre_tun_table = rte_hash_create(&pre_tun_hash_params); @@ -4333,15 +4333,15 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev) goto free_flow_table; } - /* ipv4 off list */ + /* IPv4 off list */ rte_spinlock_init(&priv->ipv4_off_lock); LIST_INIT(&priv->ipv4_off_list); - /* ipv6 off list */ + /* IPv6 off list */ rte_spinlock_init(&priv->ipv6_off_lock); LIST_INIT(&priv->ipv6_off_list); - /* neighbor next list */ + /* Neighbor next list */ LIST_INIT(&priv->nn_list); return 0; diff --git a/drivers/net/nfp/nfp_flow.h b/drivers/net/nfp/nfp_flow.h index 7ce7f62453..68b6fb6abe 100644 --- a/drivers/net/nfp/nfp_flow.h +++ b/drivers/net/nfp/nfp_flow.h @@ -115,19 +115,19 @@ struct nfp_ipv6_addr_entry { struct nfp_flow_priv { uint32_t hash_seed; /**< Hash seed for hash tables in this structure. */ uint64_t flower_version; /**< Flow version, always increase. */ - /* mask hash table */ + /* Mask hash table */ struct nfp_fl_mask_id mask_ids; /**< Entry for mask hash table */ struct rte_hash *mask_table; /**< Hash table to store mask ids. */ - /* flow hash table */ + /* Flow hash table */ struct rte_hash *flow_table; /**< Hash table to store flow rules. */ - /* flow stats */ + /* Flow stats */ uint32_t active_mem_unit; /**< The size of active mem units. */ uint32_t total_mem_units; /**< The size of total mem units. */ uint32_t stats_ring_size; /**< The size of stats id ring. */ struct nfp_fl_stats_id stats_ids; /**< The stats id ring. */ struct nfp_fl_stats *stats; /**< Store stats of flow. */ rte_spinlock_t stats_lock; /** < Lock the update of 'stats' field. */ - /* pre tunnel rule */ + /* Pre tunnel rule */ uint16_t pre_tun_cnt; /**< The size of pre tunnel rule */ uint8_t pre_tun_bitmap[NFP_TUN_PRE_TUN_RULE_LIMIT]; /**< Bitmap of pre tunnel rule */ struct rte_hash *pre_tun_table; /**< Hash table to store pre tunnel rule */ @@ -137,7 +137,7 @@ struct nfp_flow_priv { /* IPv6 off */ LIST_HEAD(, nfp_ipv6_addr_entry) ipv6_off_list; /**< Store ipv6 off */ rte_spinlock_t ipv6_off_lock; /**< Lock the ipv6 off list */ - /* neighbor next */ + /* Neighbor next */ LIST_HEAD(, nfp_fl_tun)nn_list; /**< Store nn entry */ }; diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c index 5bfdfd28b3..7b77351f1c 100644 --- a/drivers/net/nfp/nfp_rxtx.c +++ b/drivers/net/nfp/nfp_rxtx.c @@ -20,43 +20,22 @@ /* Maximum number of supported VLANs in parsed form packet metadata. 
*/ #define NFP_META_MAX_VLANS 2 -/* - * struct nfp_meta_parsed - Record metadata parsed from packet - * - * Parsed NFP packet metadata are recorded in this struct. The content is - * read-only after it have been recorded during parsing by nfp_net_parse_meta(). - * - * @port_id: Port id value - * @sa_idx: IPsec SA index - * @hash: RSS hash value - * @hash_type: RSS hash type - * @ipsec_type: IPsec type - * @vlan_layer: The layers of VLAN info which are passed from nic. - * Only this number of entries of the @vlan array are valid. - * - * @vlan: Holds information parses from NFP_NET_META_VLAN. The inner most vlan - * starts at position 0 and only @vlan_layer entries contain valid - * information. - * - * Currently only 2 layers of vlan are supported, - * vlan[0] - vlan strip info - * vlan[1] - qinq strip info - * - * @vlan.offload: Flag indicates whether VLAN is offloaded - * @vlan.tpid: Vlan TPID - * @vlan.tci: Vlan TCI including PCP + Priority + VID - */ +/* Record metadata parsed from packet */ struct nfp_meta_parsed { - uint32_t port_id; - uint32_t sa_idx; - uint32_t hash; - uint8_t hash_type; - uint8_t ipsec_type; - uint8_t vlan_layer; + uint32_t port_id; /**< Port id value */ + uint32_t sa_idx; /**< IPsec SA index */ + uint32_t hash; /**< RSS hash value */ + uint8_t hash_type; /**< RSS hash type */ + uint8_t ipsec_type; /**< IPsec type */ + uint8_t vlan_layer; /**< The valid number of value in @vlan[] */ + /** + * Holds information parses from NFP_NET_META_VLAN. + * The inner most vlan starts at position 0 + */ struct { - uint8_t offload; - uint8_t tpid; - uint16_t tci; + uint8_t offload; /**< Flag indicates whether VLAN is offloaded */ + uint8_t tpid; /**< Vlan TPID */ + uint16_t tci; /**< Vlan TCI (PCP + Priority + VID) */ } vlan[NFP_META_MAX_VLANS]; }; @@ -156,7 +135,7 @@ struct nfp_ptype_parsed { uint8_t outer_l3_ptype; /**< Packet type of outer layer 3. */ }; -/* set mbuf checksum flags based on RX descriptor flags */ +/* Set mbuf checksum flags based on RX descriptor flags */ void nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd, @@ -254,7 +233,7 @@ nfp_net_rx_queue_count(void *rx_queue) * descriptors and counting all four if the first has the DD * bit on. Of course, this is not accurate but can be good for * performance. But ideally that should be done in descriptors - * chunks belonging to the same cache line + * chunks belonging to the same cache line. */ while (count < rxq->rx_count) { @@ -265,7 +244,7 @@ nfp_net_rx_queue_count(void *rx_queue) count++; idx++; - /* Wrapping? */ + /* Wrapping */ if ((idx) == rxq->rx_count) idx = 0; } @@ -273,7 +252,7 @@ nfp_net_rx_queue_count(void *rx_queue) return count; } -/* nfp_net_parse_chained_meta() - Parse the chained metadata from packet */ +/* Parse the chained metadata from packet */ static bool nfp_net_parse_chained_meta(uint8_t *meta_base, rte_be32_t meta_header, @@ -320,12 +299,7 @@ nfp_net_parse_chained_meta(uint8_t *meta_base, return true; } -/* - * nfp_net_parse_meta_hash() - Set mbuf hash data based on the metadata info - * - * The RSS hash and hash-type are prepended to the packet data. - * Extract and decode it and set the mbuf fields. 
- */ +/* Set mbuf hash data based on the metadata info */ static void nfp_net_parse_meta_hash(const struct nfp_meta_parsed *meta, struct nfp_net_rxq *rxq, @@ -341,7 +315,7 @@ nfp_net_parse_meta_hash(const struct nfp_meta_parsed *meta, } /* - * nfp_net_parse_single_meta() - Parse the single metadata + * Parse the single metadata * * The RSS hash and hash-type are prepended to the packet data. * Get it from metadata area. @@ -355,12 +329,7 @@ nfp_net_parse_single_meta(uint8_t *meta_base, meta->hash = rte_be_to_cpu_32(*(rte_be32_t *)(meta_base + 4)); } -/* - * nfp_net_parse_meta_vlan() - Set mbuf vlan_strip data based on metadata info - * - * The VLAN info TPID and TCI are prepended to the packet data. - * Extract and decode it and set the mbuf fields. - */ +/* Set mbuf vlan_strip data based on metadata info */ static void nfp_net_parse_meta_vlan(const struct nfp_meta_parsed *meta, struct nfp_net_rx_desc *rxd, @@ -369,19 +338,14 @@ nfp_net_parse_meta_vlan(const struct nfp_meta_parsed *meta, { struct nfp_net_hw *hw = rxq->hw; - /* Skip if hardware don't support setting vlan. */ + /* Skip if firmware don't support setting vlan. */ if ((hw->ctrl & (NFP_NET_CFG_CTRL_RXVLAN | NFP_NET_CFG_CTRL_RXVLAN_V2)) == 0) return; /* - * The nic support the two way to send the VLAN info, - * 1. According the metadata to send the VLAN info when NFP_NET_CFG_CTRL_RXVLAN_V2 - * is set - * 2. According the descriptor to sned the VLAN info when NFP_NET_CFG_CTRL_RXVLAN - * is set - * - * If the nic doesn't send the VLAN info, it is not necessary - * to do anything. + * The firmware support two ways to send the VLAN info (with priority) : + * 1. Using the metadata when NFP_NET_CFG_CTRL_RXVLAN_V2 is set, + * 2. Using the descriptor when NFP_NET_CFG_CTRL_RXVLAN is set. */ if ((hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN_V2) != 0) { if (meta->vlan_layer > 0 && meta->vlan[0].offload != 0) { @@ -397,7 +361,7 @@ nfp_net_parse_meta_vlan(const struct nfp_meta_parsed *meta, } /* - * nfp_net_parse_meta_qinq() - Set mbuf qinq_strip data based on metadata info + * Set mbuf qinq_strip data based on metadata info * * The out VLAN tci are prepended to the packet data. * Extract and decode it and set the mbuf fields. @@ -469,7 +433,7 @@ nfp_net_parse_meta_ipsec(struct nfp_meta_parsed *meta, } } -/* nfp_net_parse_meta() - Parse the metadata from packet */ +/* Parse the metadata from packet */ static void nfp_net_parse_meta(struct nfp_net_rx_desc *rxds, struct nfp_net_rxq *rxq, @@ -672,7 +636,7 @@ nfp_net_parse_ptype(struct nfp_net_rx_desc *rxds, * doing now have any benefit at all. Again, tests with this change have not * shown any improvement. Also, rte_mempool_get_bulk returns all or nothing * so looking at the implications of this type of allocation should be studied - * deeply + * deeply. 
*/ uint16_t @@ -803,7 +767,7 @@ nfp_net_recv_pkts(void *rx_queue, nb_hold++; rxq->rd_p++; - if (unlikely(rxq->rd_p == rxq->rx_count)) /* wrapping?*/ + if (unlikely(rxq->rd_p == rxq->rx_count)) /* Wrapping */ rxq->rd_p = 0; } @@ -951,7 +915,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, rxq->dma = (uint64_t)tz->iova; rxq->rxds = tz->addr; - /* mbuf pointers array for referencing mbufs linked to RX descriptors */ + /* Mbuf pointers array for referencing mbufs linked to RX descriptors */ rxq->rxbufs = rte_zmalloc_socket("rxq->rxbufs", sizeof(*rxq->rxbufs) * nb_desc, RTE_CACHE_LINE_SIZE, socket_id); @@ -975,11 +939,14 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, return 0; } -/* - * nfp_net_tx_free_bufs - Check for descriptors with a complete - * status - * @txq: TX queue to work with - * Returns number of descriptors freed +/** + * Check for descriptors with a complete status + * + * @param txq + * TX queue to work with + * + * @return + * Number of descriptors freed */ uint32_t nfp_net_tx_free_bufs(struct nfp_net_txq *txq) diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h index 98ef6c3d93..899cc42c97 100644 --- a/drivers/net/nfp/nfp_rxtx.h +++ b/drivers/net/nfp/nfp_rxtx.h @@ -19,21 +19,11 @@ /* Maximum number of NFP packet metadata fields. */ #define NFP_META_MAX_FIELDS 8 -/* - * struct nfp_net_meta_raw - Raw memory representation of packet metadata - * - * Describe the raw metadata format, useful when preparing metadata for a - * transmission mbuf. - * - * @header: NFD3 or NFDk field type header (see format in nfp.rst) - * @data: Array of each fields data member - * @length: Keep track of number of valid fields in @header and data. Not part - * of the raw metadata. - */ +/* Describe the raw metadata format. */ struct nfp_net_meta_raw { - uint32_t header; - uint32_t data[NFP_META_MAX_FIELDS]; - uint8_t length; + uint32_t header; /**< Field type header (see format in nfp.rst) */ + uint32_t data[NFP_META_MAX_FIELDS]; /**< Array of each fields data member */ + uint8_t length; /**< Number of valid fields in @header */ }; /* Descriptor alignment */
From patchwork Sat Oct 7 02:33:35 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chaoyong He X-Patchwork-Id: 132377 X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Chaoyong He To: dev@dpdk.org Cc: oss-drivers@corigine.com, Chaoyong He , Long Wu , Peng Zhang Subject: [PATCH 07/11] net/nfp: standard the blank character Date: Sat, 7 Oct 2023 10:33:35 +0800 Message-Id: <20231007023339.1546659-8-chaoyong.he@corigine.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20231007023339.1546659-1-chaoyong.he@corigine.com> References: <20231007023339.1546659-1-chaoyong.he@corigine.com> MIME-Version: 1.0
Use space characters to align instead of TAB characters. There should be one blank line to split the blocks of logic, no more, no less.
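For illustration only, a minimal C sketch of the convention described above (the macro and function names are hypothetical, not taken from the driver): values are aligned with space characters rather than TABs, and exactly one blank line separates each logical block.

#define EXAMPLE_RX_FREE_THRESH    32
#define EXAMPLE_TX_FREE_THRESH    32
#define EXAMPLE_MEMZONE_ALIGN    128

static int
example_check_free_descs(int free_descs, int threshold)
{
        /* First block: validate the input. */
        if (free_descs < 0)
                return -1;

        /* Second block: compare against the free descriptor threshold. */
        if (free_descs < threshold)
                return 1;

        return 0;
}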
Signed-off-by: Chaoyong He Reviewed-by: Long Wu Reviewed-by: Peng Zhang --- drivers/net/nfp/nfp_common.c | 39 +++++++++++---------- drivers/net/nfp/nfp_common.h | 6 ++-- drivers/net/nfp/nfp_cpp_bridge.c | 5 +++ drivers/net/nfp/nfp_ctrl.h | 6 ++-- drivers/net/nfp/nfp_ethdev.c | 58 ++++++++++++++++---------------- drivers/net/nfp/nfp_ethdev_vf.c | 49 +++++++++++++-------------- drivers/net/nfp/nfp_flow.c | 27 +++++++++------ drivers/net/nfp/nfp_flow.h | 7 ++++ drivers/net/nfp/nfp_rxtx.c | 7 ++-- 9 files changed, 113 insertions(+), 91 deletions(-) diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c index ed3c5c15d2..3409ee8cb8 100644 --- a/drivers/net/nfp/nfp_common.c +++ b/drivers/net/nfp/nfp_common.c @@ -36,6 +36,7 @@ enum nfp_xstat_group { NFP_XSTAT_GROUP_NET, NFP_XSTAT_GROUP_MAC }; + struct nfp_xstat { char name[RTE_ETH_XSTATS_NAME_SIZE]; int offset; @@ -184,6 +185,7 @@ nfp_net_notify_port_speed(struct nfp_net_hw *hw, nn_cfg_writew(hw, NFP_NET_CFG_STS_NSP_LINK_RATE, NFP_NET_CFG_STS_LINK_RATE_UNKNOWN); return; } + /* * Link is up so write the link speed from the eth_table to * NFP_NET_CFG_STS_NSP_LINK_RATE. @@ -223,17 +225,21 @@ __nfp_net_reconfig(struct nfp_net_hw *hw, new = nn_cfg_readl(hw, NFP_NET_CFG_UPDATE); if (new == 0) break; + if ((new & NFP_NET_CFG_UPDATE_ERR) != 0) { PMD_DRV_LOG(ERR, "Reconfig error: %#08x", new); return -1; } + if (cnt >= NFP_NET_POLL_TIMEOUT) { PMD_DRV_LOG(ERR, "Reconfig timeout for %#08x after %u ms", update, cnt); return -EIO; } + nanosleep(&wait, 0); /* Waiting for a 1ms */ } + PMD_DRV_LOG(DEBUG, "Ack DONE"); return 0; } @@ -387,7 +393,6 @@ nfp_net_configure(struct rte_eth_dev *dev) struct rte_eth_txmode *txmode; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - dev_conf = &dev->data->dev_conf; rxmode = &dev_conf->rxmode; txmode = &dev_conf->txmode; @@ -560,11 +565,13 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 && (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0) ctrl |= NFP_NET_CFG_CTRL_LIVE_ADDR; + /* Signal the NIC about the change */ if (nfp_net_reconfig(hw, ctrl, update) != 0) { PMD_DRV_LOG(ERR, "MAC address update failed"); return -EIO; } + return 0; } @@ -832,13 +839,11 @@ nfp_net_stats_get(struct rte_eth_dev *dev, nfp_dev_stats.q_ipackets[i] = nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i)); - nfp_dev_stats.q_ipackets[i] -= hw->eth_stats_base.q_ipackets[i]; nfp_dev_stats.q_ibytes[i] = nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8); - nfp_dev_stats.q_ibytes[i] -= hw->eth_stats_base.q_ibytes[i]; } @@ -850,42 +855,34 @@ nfp_net_stats_get(struct rte_eth_dev *dev, nfp_dev_stats.q_opackets[i] = nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i)); - nfp_dev_stats.q_opackets[i] -= hw->eth_stats_base.q_opackets[i]; nfp_dev_stats.q_obytes[i] = nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8); - nfp_dev_stats.q_obytes[i] -= hw->eth_stats_base.q_obytes[i]; } nfp_dev_stats.ipackets = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES); - nfp_dev_stats.ipackets -= hw->eth_stats_base.ipackets; nfp_dev_stats.ibytes = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS); - nfp_dev_stats.ibytes -= hw->eth_stats_base.ibytes; nfp_dev_stats.opackets = nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES); - nfp_dev_stats.opackets -= hw->eth_stats_base.opackets; nfp_dev_stats.obytes = nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS); - nfp_dev_stats.obytes -= hw->eth_stats_base.obytes; /* Reading general device stats */ nfp_dev_stats.ierrors = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS); - nfp_dev_stats.ierrors -= 
hw->eth_stats_base.ierrors; nfp_dev_stats.oerrors = nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS); - nfp_dev_stats.oerrors -= hw->eth_stats_base.oerrors; /* RX ring mbuf allocation failures */ @@ -893,7 +890,6 @@ nfp_net_stats_get(struct rte_eth_dev *dev, nfp_dev_stats.imissed = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS); - nfp_dev_stats.imissed -= hw->eth_stats_base.imissed; if (stats != NULL) { @@ -981,6 +977,7 @@ nfp_net_xstats_size(const struct rte_eth_dev *dev) if (nfp_net_xstats[count].group == NFP_XSTAT_GROUP_MAC) break; } + return count; } @@ -1154,6 +1151,7 @@ nfp_net_xstats_reset(struct rte_eth_dev *dev) hw->eth_xstats_base[id].id = id; hw->eth_xstats_base[id].value = nfp_net_xstats_value(dev, id, true); } + /* Successfully reset xstats, now call function to reset basic stats. */ return nfp_net_stats_reset(dev); } @@ -1201,6 +1199,7 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->max_rx_queues = (uint16_t)hw->max_rx_queues; dev_info->max_tx_queues = (uint16_t)hw->max_tx_queues; dev_info->min_rx_bufsize = RTE_ETHER_MIN_MTU; + /** * The maximum rx packet length (max_rx_pktlen) is set to the * maximum supported frame size that the NFP can handle. This @@ -1368,6 +1367,7 @@ nfp_net_supported_ptypes_get(struct rte_eth_dev *dev) if (dev->rx_pkt_burst == nfp_net_recv_pkts) return ptypes; + return NULL; } @@ -1381,7 +1381,6 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); pci_dev = RTE_ETH_DEV_TO_PCI(dev); - if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO) base = 1; @@ -1402,7 +1401,6 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); pci_dev = RTE_ETH_DEV_TO_PCI(dev); - if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO) base = 1; @@ -1619,11 +1617,11 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev, idx = i / RTE_ETH_RETA_GROUP_SIZE; shift = i % RTE_ETH_RETA_GROUP_SIZE; mask = (uint8_t)((reta_conf[idx].mask >> shift) & 0xF); - if (mask == 0) continue; reta = 0; + /* If all 4 entries were set, don't need read RETA register */ if (mask != 0xF) reta = nn_cfg_readl(hw, NFP_NET_CFG_RSS_ITBL + i); @@ -1631,13 +1629,17 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev, for (j = 0; j < 4; j++) { if ((mask & (0x1 << j)) == 0) continue; + /* Clearing the entry bits */ if (mask != 0xF) reta &= ~(0xFF << (8 * j)); + reta |= reta_conf[idx].reta[shift + j] << (8 * j); } + nn_cfg_writel(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) + shift, reta); } + return 0; } @@ -1682,7 +1684,6 @@ nfp_net_reta_query(struct rte_eth_dev *dev, struct nfp_net_hw *hw; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - if ((hw->ctrl & NFP_NET_CFG_CTRL_RSS_ANY) == 0) return -EINVAL; @@ -1710,10 +1711,12 @@ nfp_net_reta_query(struct rte_eth_dev *dev, for (j = 0; j < 4; j++) { if ((mask & (0x1 << j)) == 0) continue; + reta_conf[idx].reta[shift + j] = (uint8_t)((reta >> (8 * j)) & 0xF); } } + return 0; } @@ -1791,6 +1794,7 @@ nfp_net_rss_hash_update(struct rte_eth_dev *dev, PMD_DRV_LOG(ERR, "RSS unsupported"); return -EINVAL; } + return 0; /* Nothing to do */ } @@ -1888,6 +1892,7 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev) queue %= rx_queues; } } + ret = nfp_net_rss_reta_write(dev, nfp_reta_conf, 0x80); if (ret != 0) return ret; @@ -1897,8 +1902,8 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev) PMD_DRV_LOG(ERR, "Wrong rss conf"); return -EINVAL; } - rss_conf = dev_conf->rx_adv_conf.rss_conf; + rss_conf = 
dev_conf->rx_adv_conf.rss_conf; ret = nfp_net_rss_hash_write(dev, &rss_conf); return ret; diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h index b41d834165..27dc2175e3 100644 --- a/drivers/net/nfp/nfp_common.h +++ b/drivers/net/nfp/nfp_common.h @@ -32,7 +32,7 @@ #define DEFAULT_RX_HTHRESH 8 #define DEFAULT_RX_WTHRESH 0 -#define DEFAULT_TX_RS_THRESH 32 +#define DEFAULT_TX_RS_THRESH 32 #define DEFAULT_TX_FREE_THRESH 32 #define DEFAULT_TX_PTHRESH 32 #define DEFAULT_TX_HTHRESH 0 @@ -40,12 +40,12 @@ #define DEFAULT_TX_RSBIT_THRESH 32 /* Alignment for dma zones */ -#define NFP_MEMZONE_ALIGN 128 +#define NFP_MEMZONE_ALIGN 128 #define NFP_QCP_QUEUE_ADDR_SZ (0x800) /* Number of supported physical ports */ -#define NFP_MAX_PHYPORTS 12 +#define NFP_MAX_PHYPORTS 12 /* Firmware application ID's */ enum nfp_app_fw_id { diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c index b5bfe17d0e..080070f58b 100644 --- a/drivers/net/nfp/nfp_cpp_bridge.c +++ b/drivers/net/nfp/nfp_cpp_bridge.c @@ -191,6 +191,7 @@ nfp_cpp_bridge_serve_write(int sockfd, nfp_cpp_area_free(area); return -EIO; } + err = nfp_cpp_area_write(area, pos, tmpbuf, len); if (err < 0) { PMD_CPP_LOG(ERR, "nfp_cpp_area_write error"); @@ -312,6 +313,7 @@ nfp_cpp_bridge_serve_read(int sockfd, curlen = (count > NFP_CPP_MEMIO_BOUNDARY) ? NFP_CPP_MEMIO_BOUNDARY : count; } + return 0; } @@ -393,6 +395,7 @@ nfp_cpp_bridge_service_func(void *args) struct timeval timeout = {1, 0}; unlink("/tmp/nfp_cpp"); + sockfd = socket(AF_UNIX, SOCK_STREAM, 0); if (sockfd < 0) { PMD_CPP_LOG(ERR, "socket creation error. Service failed"); @@ -456,8 +459,10 @@ nfp_cpp_bridge_service_func(void *args) if (op == 0) break; } + close(datafd); } + close(sockfd); return 0; diff --git a/drivers/net/nfp/nfp_ctrl.h b/drivers/net/nfp/nfp_ctrl.h index a13f95894a..ef8bf486cb 100644 --- a/drivers/net/nfp/nfp_ctrl.h +++ b/drivers/net/nfp/nfp_ctrl.h @@ -208,8 +208,8 @@ struct nfp_net_fw_ver { /* * NFP6000/NFP4000 - Prepend configuration */ -#define NFP_NET_CFG_RX_OFFSET 0x0050 -#define NFP_NET_CFG_RX_OFFSET_DYNAMIC 0 /* Prepend mode */ +#define NFP_NET_CFG_RX_OFFSET 0x0050 +#define NFP_NET_CFG_RX_OFFSET_DYNAMIC 0 /* Prepend mode */ /* Start anchor of the TLV area */ #define NFP_NET_CFG_TLV_BASE 0x0058 @@ -442,7 +442,7 @@ struct nfp_net_fw_ver { #define NFP_MAC_STATS_TX_PAUSE_FRAMES_CLASS6 (NFP_MAC_STATS_BASE + 0x1f0) #define NFP_MAC_STATS_TX_PAUSE_FRAMES_CLASS7 (NFP_MAC_STATS_BASE + 0x1f8) -#define NFP_PF_CSR_SLICE_SIZE (32 * 1024) +#define NFP_PF_CSR_SLICE_SIZE (32 * 1024) /* * General use mailbox area (0x1800 - 0x19ff) diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c index dece821e4a..0493548c81 100644 --- a/drivers/net/nfp/nfp_ethdev.c +++ b/drivers/net/nfp/nfp_ethdev.c @@ -36,6 +36,7 @@ nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic, rte_ether_addr_copy(&nfp_eth_table->ports[port].mac_addr, &hw->mac_addr); free(nfp_eth_table); + return 0; } @@ -73,6 +74,7 @@ nfp_net_start(struct rte_eth_dev *dev) "with NFP multiport PF"); return -EINVAL; } + if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) { /* * Better not to share LSC with RX interrupts. 
@@ -87,6 +89,7 @@ nfp_net_start(struct rte_eth_dev *dev) return -EIO; } } + intr_vector = dev->data->nb_rx_queues; if (rte_intr_efd_enable(intr_handle, intr_vector) != 0) return -1; @@ -198,7 +201,6 @@ nfp_net_stop(struct rte_eth_dev *dev) /* Clear queues */ nfp_net_stop_tx_queue(dev); - nfp_net_stop_rx_queue(dev); if (rte_eal_process_type() == RTE_PROC_PRIMARY) @@ -262,12 +264,10 @@ nfp_net_close(struct rte_eth_dev *dev) * We assume that the DPDK application is stopping all the * threads/queues before calling the device close function. */ - nfp_net_disable_queues(dev); /* Clear queues */ nfp_net_close_tx_queue(dev); - nfp_net_close_rx_queue(dev); /* Clear ipsec */ @@ -413,35 +413,35 @@ nfp_udp_tunnel_port_del(struct rte_eth_dev *dev, /* Initialise and register driver with DPDK Application */ static const struct eth_dev_ops nfp_net_eth_dev_ops = { - .dev_configure = nfp_net_configure, - .dev_start = nfp_net_start, - .dev_stop = nfp_net_stop, - .dev_set_link_up = nfp_net_set_link_up, - .dev_set_link_down = nfp_net_set_link_down, - .dev_close = nfp_net_close, - .promiscuous_enable = nfp_net_promisc_enable, - .promiscuous_disable = nfp_net_promisc_disable, - .link_update = nfp_net_link_update, - .stats_get = nfp_net_stats_get, - .stats_reset = nfp_net_stats_reset, + .dev_configure = nfp_net_configure, + .dev_start = nfp_net_start, + .dev_stop = nfp_net_stop, + .dev_set_link_up = nfp_net_set_link_up, + .dev_set_link_down = nfp_net_set_link_down, + .dev_close = nfp_net_close, + .promiscuous_enable = nfp_net_promisc_enable, + .promiscuous_disable = nfp_net_promisc_disable, + .link_update = nfp_net_link_update, + .stats_get = nfp_net_stats_get, + .stats_reset = nfp_net_stats_reset, .xstats_get = nfp_net_xstats_get, .xstats_reset = nfp_net_xstats_reset, .xstats_get_names = nfp_net_xstats_get_names, .xstats_get_by_id = nfp_net_xstats_get_by_id, .xstats_get_names_by_id = nfp_net_xstats_get_names_by_id, - .dev_infos_get = nfp_net_infos_get, + .dev_infos_get = nfp_net_infos_get, .dev_supported_ptypes_get = nfp_net_supported_ptypes_get, - .mtu_set = nfp_net_dev_mtu_set, - .mac_addr_set = nfp_net_set_mac_addr, - .vlan_offload_set = nfp_net_vlan_offload_set, - .reta_update = nfp_net_reta_update, - .reta_query = nfp_net_reta_query, - .rss_hash_update = nfp_net_rss_hash_update, - .rss_hash_conf_get = nfp_net_rss_hash_conf_get, - .rx_queue_setup = nfp_net_rx_queue_setup, - .rx_queue_release = nfp_net_rx_queue_release, - .tx_queue_setup = nfp_net_tx_queue_setup, - .tx_queue_release = nfp_net_tx_queue_release, + .mtu_set = nfp_net_dev_mtu_set, + .mac_addr_set = nfp_net_set_mac_addr, + .vlan_offload_set = nfp_net_vlan_offload_set, + .reta_update = nfp_net_reta_update, + .reta_query = nfp_net_reta_query, + .rss_hash_update = nfp_net_rss_hash_update, + .rss_hash_conf_get = nfp_net_rss_hash_conf_get, + .rx_queue_setup = nfp_net_rx_queue_setup, + .rx_queue_release = nfp_net_rx_queue_release, + .tx_queue_setup = nfp_net_tx_queue_setup, + .tx_queue_release = nfp_net_tx_queue_release, .rx_queue_intr_enable = nfp_rx_queue_intr_enable, .rx_queue_intr_disable = nfp_rx_queue_intr_disable, .udp_tunnel_port_add = nfp_udp_tunnel_port_add, @@ -501,7 +501,6 @@ nfp_net_init(struct rte_eth_dev *eth_dev) rte_eth_copy_pci_info(eth_dev, pci_dev); - hw->ctrl_bar = pci_dev->mem_resource[0].addr; if (hw->ctrl_bar == NULL) { PMD_DRV_LOG(ERR, "hw->ctrl_bar is NULL. 
BAR0 not configured"); @@ -519,10 +518,12 @@ nfp_net_init(struct rte_eth_dev *eth_dev) PMD_INIT_LOG(ERR, "nfp_rtsym_map fails for _mac_stats_bar"); return -EIO; } + hw->mac_stats = hw->mac_stats_bar; } else { if (pf_dev->ctrl_bar == NULL) return -ENODEV; + /* Use port offset in pf ctrl_bar for this ports control bar */ hw->ctrl_bar = pf_dev->ctrl_bar + (port * NFP_PF_CSR_SLICE_SIZE); hw->mac_stats = app_fw_nic->ports[0]->mac_stats_bar + (port * NFP_MAC_STATS_SIZE); @@ -557,7 +558,6 @@ nfp_net_init(struct rte_eth_dev *eth_dev) return -ENOMEM; } - /* Work out where in the BAR the queues start. */ tx_base = nn_cfg_readl(hw, NFP_NET_CFG_START_TXQ); rx_base = nn_cfg_readl(hw, NFP_NET_CFG_START_RXQ); @@ -653,12 +653,12 @@ nfp_fw_upload(struct rte_pci_device *dev, "serial-%02x-%02x-%02x-%02x-%02x-%02x-%02x-%02x", cpp_serial[0], cpp_serial[1], cpp_serial[2], cpp_serial[3], cpp_serial[4], cpp_serial[5], interface >> 8, interface & 0xff); - snprintf(fw_name, sizeof(fw_name), "%s/%s.nffw", DEFAULT_FW_PATH, serial); PMD_DRV_LOG(DEBUG, "Trying with fw file: %s", fw_name); if (rte_firmware_read(fw_name, &fw_buf, &fsize) == 0) goto load_fw; + /* Then try the PCI name */ snprintf(fw_name, sizeof(fw_name), "%s/pci-%s.nffw", DEFAULT_FW_PATH, dev->name); diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c index 0a1eb04294..8053808b02 100644 --- a/drivers/net/nfp/nfp_ethdev_vf.c +++ b/drivers/net/nfp/nfp_ethdev_vf.c @@ -63,6 +63,7 @@ nfp_netvf_start(struct rte_eth_dev *dev) return -EIO; } } + intr_vector = dev->data->nb_rx_queues; if (rte_intr_efd_enable(intr_handle, intr_vector) != 0) return -1; @@ -172,12 +173,10 @@ nfp_netvf_close(struct rte_eth_dev *dev) * We assume that the DPDK application is stopping all the * threads/queues before calling the device close function. 
*/ - nfp_net_disable_queues(dev); /* Clear queues */ nfp_net_close_tx_queue(dev); - nfp_net_close_rx_queue(dev); rte_intr_disable(pci_dev->intr_handle); @@ -194,35 +193,35 @@ nfp_netvf_close(struct rte_eth_dev *dev) /* Initialise and register VF driver with DPDK Application */ static const struct eth_dev_ops nfp_netvf_eth_dev_ops = { - .dev_configure = nfp_net_configure, - .dev_start = nfp_netvf_start, - .dev_stop = nfp_netvf_stop, - .dev_set_link_up = nfp_netvf_set_link_up, - .dev_set_link_down = nfp_netvf_set_link_down, - .dev_close = nfp_netvf_close, - .promiscuous_enable = nfp_net_promisc_enable, - .promiscuous_disable = nfp_net_promisc_disable, - .link_update = nfp_net_link_update, - .stats_get = nfp_net_stats_get, - .stats_reset = nfp_net_stats_reset, + .dev_configure = nfp_net_configure, + .dev_start = nfp_netvf_start, + .dev_stop = nfp_netvf_stop, + .dev_set_link_up = nfp_netvf_set_link_up, + .dev_set_link_down = nfp_netvf_set_link_down, + .dev_close = nfp_netvf_close, + .promiscuous_enable = nfp_net_promisc_enable, + .promiscuous_disable = nfp_net_promisc_disable, + .link_update = nfp_net_link_update, + .stats_get = nfp_net_stats_get, + .stats_reset = nfp_net_stats_reset, .xstats_get = nfp_net_xstats_get, .xstats_reset = nfp_net_xstats_reset, .xstats_get_names = nfp_net_xstats_get_names, .xstats_get_by_id = nfp_net_xstats_get_by_id, .xstats_get_names_by_id = nfp_net_xstats_get_names_by_id, - .dev_infos_get = nfp_net_infos_get, + .dev_infos_get = nfp_net_infos_get, .dev_supported_ptypes_get = nfp_net_supported_ptypes_get, - .mtu_set = nfp_net_dev_mtu_set, - .mac_addr_set = nfp_net_set_mac_addr, - .vlan_offload_set = nfp_net_vlan_offload_set, - .reta_update = nfp_net_reta_update, - .reta_query = nfp_net_reta_query, - .rss_hash_update = nfp_net_rss_hash_update, - .rss_hash_conf_get = nfp_net_rss_hash_conf_get, - .rx_queue_setup = nfp_net_rx_queue_setup, - .rx_queue_release = nfp_net_rx_queue_release, - .tx_queue_setup = nfp_net_tx_queue_setup, - .tx_queue_release = nfp_net_tx_queue_release, + .mtu_set = nfp_net_dev_mtu_set, + .mac_addr_set = nfp_net_set_mac_addr, + .vlan_offload_set = nfp_net_vlan_offload_set, + .reta_update = nfp_net_reta_update, + .reta_query = nfp_net_reta_query, + .rss_hash_update = nfp_net_rss_hash_update, + .rss_hash_conf_get = nfp_net_rss_hash_conf_get, + .rx_queue_setup = nfp_net_rx_queue_setup, + .rx_queue_release = nfp_net_rx_queue_release, + .tx_queue_setup = nfp_net_tx_queue_setup, + .tx_queue_release = nfp_net_tx_queue_release, .rx_queue_intr_enable = nfp_rx_queue_intr_enable, .rx_queue_intr_disable = nfp_rx_queue_intr_disable, }; diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c index 7b1abe926e..a806cbfbeb 100644 --- a/drivers/net/nfp/nfp_flow.c +++ b/drivers/net/nfp/nfp_flow.c @@ -464,6 +464,7 @@ nfp_stats_id_alloc(struct nfp_flow_priv *priv, uint32_t *ctx) priv->stats_ids.init_unallocated--; priv->active_mem_unit = 0; } + return 0; } @@ -590,6 +591,7 @@ nfp_tun_add_ipv6_off(struct nfp_app_fw_flower *app_fw_flower, PMD_DRV_LOG(ERR, "Mem error when offloading IP6 address."); return -ENOMEM; } + memcpy(tmp_entry->ipv6_addr, ipv6, sizeof(tmp_entry->ipv6_addr)); tmp_entry->ref_count = 1; @@ -1760,7 +1762,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = { .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_IPV6), - .mask_support = &(const struct rte_flow_item_eth){ + .mask_support = &(const struct rte_flow_item_eth) { .hdr = { .dst_addr.addr_bytes = 
"\xff\xff\xff\xff\xff\xff", .src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff", @@ -1775,7 +1777,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = { [RTE_FLOW_ITEM_TYPE_VLAN] = { .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_IPV6), - .mask_support = &(const struct rte_flow_item_vlan){ + .mask_support = &(const struct rte_flow_item_vlan) { .hdr = { .vlan_tci = RTE_BE16(0xefff), .eth_proto = RTE_BE16(0xffff), @@ -1791,7 +1793,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = { RTE_FLOW_ITEM_TYPE_UDP, RTE_FLOW_ITEM_TYPE_SCTP, RTE_FLOW_ITEM_TYPE_GRE), - .mask_support = &(const struct rte_flow_item_ipv4){ + .mask_support = &(const struct rte_flow_item_ipv4) { .hdr = { .type_of_service = 0xff, .fragment_offset = RTE_BE16(0xffff), @@ -1810,7 +1812,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = { RTE_FLOW_ITEM_TYPE_UDP, RTE_FLOW_ITEM_TYPE_SCTP, RTE_FLOW_ITEM_TYPE_GRE), - .mask_support = &(const struct rte_flow_item_ipv6){ + .mask_support = &(const struct rte_flow_item_ipv6) { .hdr = { .vtc_flow = RTE_BE32(0x0ff00000), .proto = 0xff, @@ -1827,7 +1829,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = { .merge = nfp_flow_merge_ipv6, }, [RTE_FLOW_ITEM_TYPE_TCP] = { - .mask_support = &(const struct rte_flow_item_tcp){ + .mask_support = &(const struct rte_flow_item_tcp) { .hdr = { .tcp_flags = 0xff, .src_port = RTE_BE16(0xffff), @@ -1841,7 +1843,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = { [RTE_FLOW_ITEM_TYPE_UDP] = { .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_VXLAN, RTE_FLOW_ITEM_TYPE_GENEVE), - .mask_support = &(const struct rte_flow_item_udp){ + .mask_support = &(const struct rte_flow_item_udp) { .hdr = { .src_port = RTE_BE16(0xffff), .dst_port = RTE_BE16(0xffff), @@ -1852,7 +1854,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = { .merge = nfp_flow_merge_udp, }, [RTE_FLOW_ITEM_TYPE_SCTP] = { - .mask_support = &(const struct rte_flow_item_sctp){ + .mask_support = &(const struct rte_flow_item_sctp) { .hdr = { .src_port = RTE_BE16(0xffff), .dst_port = RTE_BE16(0xffff), @@ -1864,7 +1866,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = { }, [RTE_FLOW_ITEM_TYPE_VXLAN] = { .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH), - .mask_support = &(const struct rte_flow_item_vxlan){ + .mask_support = &(const struct rte_flow_item_vxlan) { .hdr = { .vx_vni = RTE_BE32(0xffffff00), }, @@ -1875,7 +1877,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = { }, [RTE_FLOW_ITEM_TYPE_GENEVE] = { .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH), - .mask_support = &(const struct rte_flow_item_geneve){ + .mask_support = &(const struct rte_flow_item_geneve) { .vni = "\xff\xff\xff", }, .mask_default = &rte_flow_item_geneve_mask, @@ -1884,7 +1886,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = { }, [RTE_FLOW_ITEM_TYPE_GRE] = { .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_GRE_KEY), - .mask_support = &(const struct rte_flow_item_gre){ + .mask_support = &(const struct rte_flow_item_gre) { .c_rsvd0_ver = RTE_BE16(0xa000), .protocol = RTE_BE16(0xffff), }, @@ -1916,6 +1918,7 @@ nfp_flow_item_check(const struct rte_flow_item *item, " without a corresponding 'spec'."); return -EINVAL; } + /* No spec, no mask, no problem. 
*/ return 0; } @@ -2995,6 +2998,7 @@ nfp_pre_tun_table_check_add(struct nfp_flower_representor *repr, for (i = 1; i < NFP_TUN_PRE_TUN_RULE_LIMIT; i++) { if (priv->pre_tun_bitmap[i] == 0) continue; + entry->mac_index = i; find_entry = nfp_pre_tun_table_search(priv, (char *)entry, entry_size); if (find_entry != NULL) { @@ -3021,6 +3025,7 @@ nfp_pre_tun_table_check_add(struct nfp_flower_representor *repr, *index = entry->mac_index; priv->pre_tun_cnt++; + return 0; } @@ -3055,12 +3060,14 @@ nfp_pre_tun_table_check_del(struct nfp_flower_representor *repr, for (i = 1; i < NFP_TUN_PRE_TUN_RULE_LIMIT; i++) { if (priv->pre_tun_bitmap[i] == 0) continue; + entry->mac_index = i; find_entry = nfp_pre_tun_table_search(priv, (char *)entry, entry_size); if (find_entry != NULL) { find_entry->ref_cnt--; if (find_entry->ref_cnt != 0) goto free_entry; + priv->pre_tun_bitmap[i] = 0; break; } diff --git a/drivers/net/nfp/nfp_flow.h b/drivers/net/nfp/nfp_flow.h index 68b6fb6abe..5a5b6a7d19 100644 --- a/drivers/net/nfp/nfp_flow.h +++ b/drivers/net/nfp/nfp_flow.h @@ -115,11 +115,14 @@ struct nfp_ipv6_addr_entry { struct nfp_flow_priv { uint32_t hash_seed; /**< Hash seed for hash tables in this structure. */ uint64_t flower_version; /**< Flow version, always increase. */ + /* Mask hash table */ struct nfp_fl_mask_id mask_ids; /**< Entry for mask hash table */ struct rte_hash *mask_table; /**< Hash table to store mask ids. */ + /* Flow hash table */ struct rte_hash *flow_table; /**< Hash table to store flow rules. */ + /* Flow stats */ uint32_t active_mem_unit; /**< The size of active mem units. */ uint32_t total_mem_units; /**< The size of total mem units. */ @@ -127,16 +130,20 @@ struct nfp_flow_priv { struct nfp_fl_stats_id stats_ids; /**< The stats id ring. */ struct nfp_fl_stats *stats; /**< Store stats of flow. */ rte_spinlock_t stats_lock; /** < Lock the update of 'stats' field. */ + /* Pre tunnel rule */ uint16_t pre_tun_cnt; /**< The size of pre tunnel rule */ uint8_t pre_tun_bitmap[NFP_TUN_PRE_TUN_RULE_LIMIT]; /**< Bitmap of pre tunnel rule */ struct rte_hash *pre_tun_table; /**< Hash table to store pre tunnel rule */ + /* IPv4 off */ LIST_HEAD(, nfp_ipv4_addr_entry) ipv4_off_list; /**< Store ipv4 off */ rte_spinlock_t ipv4_off_lock; /**< Lock the ipv4 off list */ + /* IPv6 off */ LIST_HEAD(, nfp_ipv6_addr_entry) ipv6_off_list; /**< Store ipv6 off */ rte_spinlock_t ipv6_off_lock; /**< Lock the ipv6 off list */ + /* Neighbor next */ LIST_HEAD(, nfp_fl_tun)nn_list; /**< Store nn entry */ }; diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c index 7b77351f1c..4632837c0e 100644 --- a/drivers/net/nfp/nfp_rxtx.c +++ b/drivers/net/nfp/nfp_rxtx.c @@ -190,6 +190,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq) rxd->fld.dd = 0; rxd->fld.dma_addr_hi = (dma_addr >> 32) & 0xffff; rxd->fld.dma_addr_lo = dma_addr & 0xffffffff; + rxe[i].mbuf = mbuf; } @@ -213,6 +214,7 @@ nfp_net_rx_freelist_setup(struct rte_eth_dev *dev) if (nfp_net_rx_fill_freelist(dev->data->rx_queues[i]) != 0) return -1; } + return 0; } @@ -225,7 +227,6 @@ nfp_net_rx_queue_count(void *rx_queue) struct nfp_net_rx_desc *rxds; rxq = rx_queue; - idx = rxq->rd_p; /* @@ -235,7 +236,6 @@ nfp_net_rx_queue_count(void *rx_queue) * performance. But ideally that should be done in descriptors * chunks belonging to the same cache line. 
*/ - while (count < rxq->rx_count) { rxds = &rxq->rxds[idx]; if ((rxds->rxd.meta_len_dd & PCIE_DESC_RX_DD) == 0) @@ -394,6 +394,7 @@ nfp_net_parse_meta_qinq(const struct nfp_meta_parsed *meta, if (meta->vlan[0].offload == 0) mb->vlan_tci = rte_cpu_to_le_16(meta->vlan[0].tci); + mb->vlan_tci_outer = rte_cpu_to_le_16(meta->vlan[1].tci); PMD_RX_LOG(DEBUG, "Received outer vlan TCI is %u inner vlan TCI is %u", mb->vlan_tci_outer, mb->vlan_tci); @@ -638,7 +639,6 @@ nfp_net_parse_ptype(struct nfp_net_rx_desc *rxds, * so looking at the implications of this type of allocation should be studied * deeply. */ - uint16_t nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, @@ -903,7 +903,6 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, tz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx, sizeof(struct nfp_net_rx_desc) * max_rx_desc, NFP_MEMZONE_ALIGN, socket_id); - if (tz == NULL) { PMD_DRV_LOG(ERR, "Error allocating rx dma"); nfp_net_rx_queue_release(dev, queue_idx);
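The hunks above keep every test written as an explicit comparison against NULL or 0, for example "if (tz == NULL)" and "if ((rxds->rxd.meta_len_dd & PCIE_DESC_RX_DD) == 0)". A minimal, self-contained sketch of the same convention follows; the struct and function names are hypothetical and are not part of the nfp driver:

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical descriptor ring, used only to illustrate the style. */
    struct example_ring {
            uint32_t count;   /* number of descriptors */
            void *descs;      /* descriptor memory, NULL until allocated */
    };

    static int
    example_ring_check(const struct example_ring *ring)
    {
            if (ring == NULL)              /* explicit, rather than if (!ring) */
                    return -1;

            if (ring->descs == NULL)       /* explicit NULL comparison */
                    return -1;

            if ((ring->count & 0x1) != 0)  /* explicit comparison against 0 */
                    return -1;

            return 0;
    }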
From patchwork Sat Oct 7 02:33:36 2023
X-Patchwork-Submitter: Chaoyong He
X-Patchwork-Id: 132378
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Chaoyong He
To: dev@dpdk.org
Cc: oss-drivers@corigine.com, Chaoyong He, Long Wu, Peng Zhang
Subject: [PATCH 08/11] net/nfp: unify the guide line of header file
Date: Sat, 7 Oct 2023 10:33:36 +0800
Message-Id: <20231007023339.1546659-9-chaoyong.he@corigine.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20231007023339.1546659-1-chaoyong.he@corigine.com>
References: <20231007023339.1546659-1-chaoyong.he@corigine.com>
Unify the header guards across the header files; we choose the '__FOO_BAR_H__' style. (A short sketch of this guard style follows the diff below.)

Signed-off-by: Chaoyong He
Reviewed-by: Long Wu
Reviewed-by: Peng Zhang
---
 drivers/net/nfp/flower/nfp_flower.h | 6 +++---
 drivers/net/nfp/flower/nfp_flower_cmsg.h | 6 +++---
 drivers/net/nfp/flower/nfp_flower_ctrl.h | 6 +++---
 drivers/net/nfp/flower/nfp_flower_representor.h | 6 +++---
 drivers/net/nfp/nfd3/nfp_nfd3.h | 6 +++---
 drivers/net/nfp/nfp_common.h | 6 +++---
 drivers/net/nfp/nfp_cpp_bridge.h | 8 +++-----
 drivers/net/nfp/nfp_ctrl.h | 6 +++---
 drivers/net/nfp/nfp_flow.h | 6 +++---
 drivers/net/nfp/nfp_logs.h | 6 +++---
 drivers/net/nfp/nfp_rxtx.h | 6 +++---
 11 files changed, 33 insertions(+), 35 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_flower.h b/drivers/net/nfp/flower/nfp_flower.h index 0b4e38cedd..b7ea830209 100644 --- a/drivers/net/nfp/flower/nfp_flower.h +++ b/drivers/net/nfp/flower/nfp_flower.h @@ -3,8 +3,8 @@ * All rights reserved.
*/ -#ifndef _NFP_FLOWER_H_ -#define _NFP_FLOWER_H_ +#ifndef __NFP_FLOWER_H__ +#define __NFP_FLOWER_H__ #include "../nfp_common.h" @@ -118,4 +118,4 @@ int nfp_flower_pf_stop(struct rte_eth_dev *dev); uint32_t nfp_flower_pkt_add_metadata(struct nfp_app_fw_flower *app_fw_flower, struct rte_mbuf *mbuf, uint32_t port_id); -#endif /* _NFP_FLOWER_H_ */ +#endif /* __NFP_FLOWER_H__ */ diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h index cb019171b6..c2938fb6f6 100644 --- a/drivers/net/nfp/flower/nfp_flower_cmsg.h +++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h @@ -3,8 +3,8 @@ * All rights reserved. */ -#ifndef _NFP_CMSG_H_ -#define _NFP_CMSG_H_ +#ifndef __NFP_CMSG_H__ +#define __NFP_CMSG_H__ #include "../nfp_flow.h" #include "nfp_flower.h" @@ -989,4 +989,4 @@ int nfp_flower_cmsg_qos_delete(struct nfp_app_fw_flower *app_fw_flower, int nfp_flower_cmsg_qos_stats(struct nfp_app_fw_flower *app_fw_flower, struct nfp_cfg_head *head); -#endif /* _NFP_CMSG_H_ */ +#endif /* __NFP_CMSG_H__ */ diff --git a/drivers/net/nfp/flower/nfp_flower_ctrl.h b/drivers/net/nfp/flower/nfp_flower_ctrl.h index f73a024266..4c94d36847 100644 --- a/drivers/net/nfp/flower/nfp_flower_ctrl.h +++ b/drivers/net/nfp/flower/nfp_flower_ctrl.h @@ -3,8 +3,8 @@ * All rights reserved. */ -#ifndef _NFP_FLOWER_CTRL_H_ -#define _NFP_FLOWER_CTRL_H_ +#ifndef __NFP_FLOWER_CTRL_H__ +#define __NFP_FLOWER_CTRL_H__ #include "nfp_flower.h" @@ -13,4 +13,4 @@ uint16_t nfp_flower_ctrl_vnic_xmit(struct nfp_app_fw_flower *app_fw_flower, struct rte_mbuf *mbuf); void nfp_flower_ctrl_vnic_xmit_register(struct nfp_app_fw_flower *app_fw_flower); -#endif /* _NFP_FLOWER_CTRL_H_ */ +#endif /* __NFP_FLOWER_CTRL_H__ */ diff --git a/drivers/net/nfp/flower/nfp_flower_representor.h b/drivers/net/nfp/flower/nfp_flower_representor.h index eda19cbb16..bcb4c3cdb5 100644 --- a/drivers/net/nfp/flower/nfp_flower_representor.h +++ b/drivers/net/nfp/flower/nfp_flower_representor.h @@ -3,8 +3,8 @@ * All rights reserved. */ -#ifndef _NFP_FLOWER_REPRESENTOR_H_ -#define _NFP_FLOWER_REPRESENTOR_H_ +#ifndef __NFP_FLOWER_REPRESENTOR_H__ +#define __NFP_FLOWER_REPRESENTOR_H__ #include "nfp_flower.h" @@ -24,4 +24,4 @@ struct nfp_flower_representor { int nfp_flower_repr_create(struct nfp_app_fw_flower *app_fw_flower); -#endif /* _NFP_FLOWER_REPRESENTOR_H_ */ +#endif /* __NFP_FLOWER_REPRESENTOR_H__ */ diff --git a/drivers/net/nfp/nfd3/nfp_nfd3.h b/drivers/net/nfp/nfd3/nfp_nfd3.h index 0b0ca361f4..3ba562cc3f 100644 --- a/drivers/net/nfp/nfd3/nfp_nfd3.h +++ b/drivers/net/nfp/nfd3/nfp_nfd3.h @@ -3,8 +3,8 @@ * All rights reserved. */ -#ifndef _NFP_NFD3_H_ -#define _NFP_NFD3_H_ +#ifndef __NFP_NFD3_H__ +#define __NFP_NFD3_H__ #include "../nfp_rxtx.h" @@ -84,4 +84,4 @@ int nfp_net_nfd3_tx_queue_setup(struct rte_eth_dev *dev, unsigned int socket_id, const struct rte_eth_txconf *tx_conf); -#endif /* _NFP_NFD3_H_ */ +#endif /* __NFP_NFD3_H__ */ diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h index 27dc2175e3..11eda70f1a 100644 --- a/drivers/net/nfp/nfp_common.h +++ b/drivers/net/nfp/nfp_common.h @@ -3,8 +3,8 @@ * All rights reserved. 
*/ -#ifndef _NFP_COMMON_H_ -#define _NFP_COMMON_H_ +#ifndef __NFP_COMMON_H__ +#define __NFP_COMMON_H__ #include #include @@ -450,4 +450,4 @@ bool nfp_net_is_valid_nfd_version(struct nfp_net_fw_ver version); #define NFP_PRIV_TO_APP_FW_FLOWER(app_fw_priv)\ ((struct nfp_app_fw_flower *)app_fw_priv) -#endif /* _NFP_COMMON_H_ */ +#endif /* __NFP_COMMON_H__ */ diff --git a/drivers/net/nfp/nfp_cpp_bridge.h b/drivers/net/nfp/nfp_cpp_bridge.h index e6a957a090..a1103e85e4 100644 --- a/drivers/net/nfp/nfp_cpp_bridge.h +++ b/drivers/net/nfp/nfp_cpp_bridge.h @@ -1,16 +1,14 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2014-2021 Netronome Systems, Inc. * All rights reserved. - * - * Small portions derived from code Copyright(c) 2010-2015 Intel Corporation. */ -#ifndef _NFP_CPP_BRIDGE_H_ -#define _NFP_CPP_BRIDGE_H_ +#ifndef __NFP_CPP_BRIDGE_H__ +#define __NFP_CPP_BRIDGE_H__ #include "nfp_common.h" int nfp_enable_cpp_service(struct nfp_pf_dev *pf_dev); int nfp_map_service(uint32_t service_id); -#endif /* _NFP_CPP_BRIDGE_H_ */ +#endif /* __NFP_CPP_BRIDGE_H__ */ diff --git a/drivers/net/nfp/nfp_ctrl.h b/drivers/net/nfp/nfp_ctrl.h index ef8bf486cb..71fe125420 100644 --- a/drivers/net/nfp/nfp_ctrl.h +++ b/drivers/net/nfp/nfp_ctrl.h @@ -3,8 +3,8 @@ * All rights reserved. */ -#ifndef _NFP_CTRL_H_ -#define _NFP_CTRL_H_ +#ifndef __NFP_CTRL_H__ +#define __NFP_CTRL_H__ #include @@ -573,4 +573,4 @@ nfp_net_cfg_ctrl_rss(uint32_t hw_cap) return NFP_NET_CFG_CTRL_RSS; } -#endif /* _NFP_CTRL_H_ */ +#endif /* __NFP_CTRL_H__ */ diff --git a/drivers/net/nfp/nfp_flow.h b/drivers/net/nfp/nfp_flow.h index 5a5b6a7d19..d4bde0a294 100644 --- a/drivers/net/nfp/nfp_flow.h +++ b/drivers/net/nfp/nfp_flow.h @@ -3,8 +3,8 @@ * All rights reserved. */ -#ifndef _NFP_FLOW_H_ -#define _NFP_FLOW_H_ +#ifndef __NFP_FLOW_H__ +#define __NFP_FLOW_H__ #include "nfp_common.h" @@ -164,4 +164,4 @@ int nfp_flow_priv_init(struct nfp_pf_dev *pf_dev); void nfp_flow_priv_uninit(struct nfp_pf_dev *pf_dev); int nfp_net_flow_ops_get(struct rte_eth_dev *dev, const struct rte_flow_ops **ops); -#endif /* _NFP_FLOW_H_ */ +#endif /* __NFP_FLOW_H__ */ diff --git a/drivers/net/nfp/nfp_logs.h b/drivers/net/nfp/nfp_logs.h index 16ff61700b..690adabffd 100644 --- a/drivers/net/nfp/nfp_logs.h +++ b/drivers/net/nfp/nfp_logs.h @@ -3,8 +3,8 @@ * All rights reserved. */ -#ifndef _NFP_LOGS_H_ -#define _NFP_LOGS_H_ +#ifndef __NFP_LOGS_H__ +#define __NFP_LOGS_H__ #include @@ -41,4 +41,4 @@ extern int nfp_logtype_driver; rte_log(RTE_LOG_ ## level, nfp_logtype_driver, \ "%s(): " fmt "\n", __func__, ## args) -#endif /* _NFP_LOGS_H_ */ +#endif /* __NFP_LOGS_H__ */ diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h index 899cc42c97..956cc7a0d2 100644 --- a/drivers/net/nfp/nfp_rxtx.h +++ b/drivers/net/nfp/nfp_rxtx.h @@ -3,8 +3,8 @@ * All rights reserved. 
*/ -#ifndef _NFP_RXTX_H_ -#define _NFP_RXTX_H_ +#ifndef __NFP_RXTX_H__ +#define __NFP_RXTX_H__ #include @@ -253,4 +253,4 @@ void nfp_net_set_meta_ipsec(struct nfp_net_meta_raw *meta_data, uint8_t layer, uint8_t ipsec_layer); -#endif /* _NFP_RXTX_H_ */ +#endif /* __NFP_RXTX_H__ */
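As referenced in the commit message above, every header touched by this patch moves from the '_NFP_FOO_H_' guard spelling to '__NFP_FOO_H__'. A minimal sketch of the resulting layout, using a hypothetical header name rather than one of the driver's files:

    /* SPDX-License-Identifier: BSD-3-Clause */

    #ifndef __NFP_EXAMPLE_H__
    #define __NFP_EXAMPLE_H__

    #include <stdint.h>

    /* Declarations exported by a hypothetical nfp_example.c unit. */
    uint32_t nfp_example_count(void);

    #endif /* __NFP_EXAMPLE_H__ */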
From patchwork Sat Oct 7 02:33:37 2023
X-Patchwork-Submitter: Chaoyong He
X-Patchwork-Id: 132379
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Chaoyong He
To: dev@dpdk.org
Cc: oss-drivers@corigine.com, Chaoyong He, Long Wu, Peng Zhang
Subject: [PATCH 09/11] net/nfp: rename some parameter and variable
Date: Sat, 7 Oct 2023 10:33:37 +0800
Message-Id: <20231007023339.1546659-10-chaoyong.he@corigine.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20231007023339.1546659-1-chaoyong.he@corigine.com>
References: <20231007023339.1546659-1-chaoyong.he@corigine.com>
Rename some parameters and variables to make the logic easier to understand. Also avoid mixing lowercase and uppercase in macro names.
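A short sketch of the two conventions this patch applies, based on the mask macros and the 'q' to 'queue' rename in the diff below; the helpers here are illustrative only and are not the driver's nfp_qcp_read():

    #include <stdint.h>

    /* Macro names are uppercase throughout, including the _MASK suffix. */
    #define NFP_QCP_QUEUE_STS_LO_READPTR_MASK  (0x3ffff)
    #define NFP_QCP_QUEUE_STS_HI_WRITEPTR_MASK (0x3ffff)

    /* The parameter is spelled out as 'queue' instead of the terse 'q'. */
    static inline uint32_t
    example_read_ptr(const uint32_t *queue)
    {
            /* queue[0] stands in for the low status word of the queue. */
            return queue[0] & NFP_QCP_QUEUE_STS_LO_READPTR_MASK;
    }

    static inline uint32_t
    example_write_ptr(const uint32_t *queue)
    {
            /* queue[1] stands in for the high status word of the queue. */
            return queue[1] & NFP_QCP_QUEUE_STS_HI_WRITEPTR_MASK;
    }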
Signed-off-by: Chaoyong He Reviewed-by: Long Wu Reviewed-by: Peng Zhang --- drivers/net/nfp/nfp_common.h | 20 ++++++++++---------- drivers/net/nfp/nfp_ethdev_vf.c | 8 ++++---- 2 files changed, 14 insertions(+), 14 deletions(-) diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h index 11eda70f1a..a5e20bc4a7 100644 --- a/drivers/net/nfp/nfp_common.h +++ b/drivers/net/nfp/nfp_common.h @@ -19,9 +19,9 @@ #define NFP_QCP_QUEUE_ADD_RPTR 0x0000 #define NFP_QCP_QUEUE_ADD_WPTR 0x0004 #define NFP_QCP_QUEUE_STS_LO 0x0008 -#define NFP_QCP_QUEUE_STS_LO_READPTR_mask (0x3ffff) +#define NFP_QCP_QUEUE_STS_LO_READPTR_MASK (0x3ffff) #define NFP_QCP_QUEUE_STS_HI 0x000c -#define NFP_QCP_QUEUE_STS_HI_WRITEPTR_mask (0x3ffff) +#define NFP_QCP_QUEUE_STS_HI_WRITEPTR_MASK (0x3ffff) /* Interrupt definitions */ #define NFP_NET_IRQ_LSC_IDX 0 @@ -303,7 +303,7 @@ nn_cfg_writeq(struct nfp_net_hw *hw, /** * Add the value to the selected pointer of a queue. * - * @param q + * @param queue * Base address for queue structure * @param ptr * Add to the read or write pointer @@ -311,7 +311,7 @@ nn_cfg_writeq(struct nfp_net_hw *hw, * Value to add to the queue pointer */ static inline void -nfp_qcp_ptr_add(uint8_t *q, +nfp_qcp_ptr_add(uint8_t *queue, enum nfp_qcp_ptr ptr, uint32_t val) { @@ -322,19 +322,19 @@ nfp_qcp_ptr_add(uint8_t *q, else off = NFP_QCP_QUEUE_ADD_WPTR; - nn_writel(rte_cpu_to_le_32(val), q + off); + nn_writel(rte_cpu_to_le_32(val), queue + off); } /** * Read the current read/write pointer value for a queue. * - * @param q + * @param queue * Base address for queue structure * @param ptr * Read or Write pointer */ static inline uint32_t -nfp_qcp_read(uint8_t *q, +nfp_qcp_read(uint8_t *queue, enum nfp_qcp_ptr ptr) { uint32_t off; @@ -345,12 +345,12 @@ nfp_qcp_read(uint8_t *q, else off = NFP_QCP_QUEUE_STS_HI; - val = rte_cpu_to_le_32(nn_readl(q + off)); + val = rte_cpu_to_le_32(nn_readl(queue + off)); if (ptr == NFP_QCP_READ_PTR) - return val & NFP_QCP_QUEUE_STS_LO_READPTR_mask; + return val & NFP_QCP_QUEUE_STS_LO_READPTR_MASK; else - return val & NFP_QCP_QUEUE_STS_HI_WRITEPTR_mask; + return val & NFP_QCP_QUEUE_STS_HI_WRITEPTR_MASK; } static inline uint32_t diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c index 8053808b02..af0689832a 100644 --- a/drivers/net/nfp/nfp_ethdev_vf.c +++ b/drivers/net/nfp/nfp_ethdev_vf.c @@ -396,7 +396,7 @@ nfp_vf_pci_uninit(struct rte_eth_dev *eth_dev) } static int -eth_nfp_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, +nfp_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_device *pci_dev) { return rte_eth_dev_pci_generic_probe(pci_dev, @@ -404,7 +404,7 @@ eth_nfp_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, } static int -eth_nfp_vf_pci_remove(struct rte_pci_device *pci_dev) +nfp_vf_pci_remove(struct rte_pci_device *pci_dev) { return rte_eth_dev_pci_generic_remove(pci_dev, nfp_vf_pci_uninit); } @@ -412,8 +412,8 @@ eth_nfp_vf_pci_remove(struct rte_pci_device *pci_dev) static struct rte_pci_driver rte_nfp_net_vf_pmd = { .id_table = pci_id_nfp_vf_net_map, .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC, - .probe = eth_nfp_vf_pci_probe, - .remove = eth_nfp_vf_pci_remove, + .probe = nfp_vf_pci_probe, + .remove = nfp_vf_pci_remove, }; RTE_PMD_REGISTER_PCI(net_nfp_vf, rte_nfp_net_vf_pmd); From patchwork Sat Oct 7 02:33:38 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chaoyong He X-Patchwork-Id: 132380 
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Chaoyong He
To: dev@dpdk.org
Cc: oss-drivers@corigine.com, Chaoyong He, Long Wu, Peng Zhang
Subject: [PATCH 10/11] net/nfp: adjust logic to make it more readable
Date: Sat, 7 Oct 2023 10:33:38 +0800
Message-Id: <20231007023339.1546659-11-chaoyong.he@corigine.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20231007023339.1546659-1-chaoyong.he@corigine.com>
References: <20231007023339.1546659-1-chaoyong.he@corigine.com>
X-MS-Exchange-CrossTenant-AuthAs: Internal X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 Oct 2023 02:34:17.5803 (UTC) X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted X-MS-Exchange-CrossTenant-Id: fe128f2c-073b-4c20-818e-7246a585940c X-MS-Exchange-CrossTenant-MailboxType: HOSTED X-MS-Exchange-CrossTenant-UserPrincipalName: tOH7eiNVYQc/GtMwcbDIGx3bnssrS4/FuFeYkUGpaxrzCEz7JoCRzpM8QtEXJLXJ6jWKg9XdegJeDQ75uMzWcdS1e8FRcuHa+oW1kY709NA= X-MS-Exchange-Transport-CrossTenantHeadersStamped: SA0PR13MB3936 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Adjust some logic to make it easier to understand. Signed-off-by: Chaoyong He Reviewed-by: Long Wu Reviewed-by: Peng Zhang --- drivers/net/nfp/nfp_common.c | 83 +++++++++++++++++--------------- drivers/net/nfp/nfp_cpp_bridge.c | 5 +- drivers/net/nfp/nfp_ctrl.h | 2 - drivers/net/nfp/nfp_ethdev.c | 23 ++++----- drivers/net/nfp/nfp_ethdev_vf.c | 15 +++--- drivers/net/nfp/nfp_rxtx.c | 2 +- 6 files changed, 61 insertions(+), 69 deletions(-) diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c index 3409ee8cb8..f6cd506dd6 100644 --- a/drivers/net/nfp/nfp_common.c +++ b/drivers/net/nfp/nfp_common.c @@ -467,19 +467,19 @@ nfp_net_enable_queues(struct rte_eth_dev *dev) { uint16_t i; struct nfp_net_hw *hw; - uint64_t enabled_queues = 0; + uint64_t enabled_queues; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); /* Enabling the required TX queues in the device */ + enabled_queues = 0; for (i = 0; i < dev->data->nb_tx_queues; i++) enabled_queues |= (1 << i); nn_cfg_writeq(hw, NFP_NET_CFG_TXRS_ENABLE, enabled_queues); - enabled_queues = 0; - /* Enabling the required RX queues in the device */ + enabled_queues = 0; for (i = 0; i < dev->data->nb_rx_queues; i++) enabled_queues |= (1 << i); @@ -619,33 +619,33 @@ uint32_t nfp_check_offloads(struct rte_eth_dev *dev) { uint32_t ctrl = 0; + uint64_t rx_offload; + uint64_t tx_offload; struct nfp_net_hw *hw; struct rte_eth_conf *dev_conf; - struct rte_eth_rxmode *rxmode; - struct rte_eth_txmode *txmode; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); dev_conf = &dev->data->dev_conf; - rxmode = &dev_conf->rxmode; - txmode = &dev_conf->txmode; + rx_offload = dev_conf->rxmode.offloads; + tx_offload = dev_conf->txmode.offloads; - if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0) { + if ((rx_offload & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0) { if ((hw->cap & NFP_NET_CFG_CTRL_RXCSUM) != 0) ctrl |= NFP_NET_CFG_CTRL_RXCSUM; } - if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0) + if ((rx_offload & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0) nfp_net_enbable_rxvlan_cap(hw, &ctrl); - if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0) { + if ((rx_offload & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0) { if ((hw->cap & NFP_NET_CFG_CTRL_RXQINQ) != 0) ctrl |= NFP_NET_CFG_CTRL_RXQINQ; } hw->mtu = dev->data->mtu; - if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) != 0) { + if ((tx_offload & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) != 0) { if ((hw->cap & NFP_NET_CFG_CTRL_TXVLAN_V2) != 0) ctrl |= NFP_NET_CFG_CTRL_TXVLAN_V2; else if ((hw->cap & NFP_NET_CFG_CTRL_TXVLAN) != 0) @@ -661,14 +661,14 @@ nfp_check_offloads(struct rte_eth_dev *dev) ctrl |= NFP_NET_CFG_CTRL_L2MC; /* TX checksum offload */ - if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0 || - (txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0 || - 
(txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0) + if ((tx_offload & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0 || + (tx_offload & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0 || + (tx_offload & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0) ctrl |= NFP_NET_CFG_CTRL_TXCSUM; /* LSO offload */ - if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) != 0 || - (txmode->offloads & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) != 0) { + if ((tx_offload & RTE_ETH_TX_OFFLOAD_TCP_TSO) != 0 || + (tx_offload & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) != 0) { if ((hw->cap & NFP_NET_CFG_CTRL_LSO) != 0) ctrl |= NFP_NET_CFG_CTRL_LSO; else @@ -676,7 +676,7 @@ nfp_check_offloads(struct rte_eth_dev *dev) } /* RX gather */ - if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0) + if ((tx_offload & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0) ctrl |= NFP_NET_CFG_CTRL_GATHER; return ctrl; @@ -766,11 +766,10 @@ nfp_net_link_update(struct rte_eth_dev *dev, hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - /* Read link status */ - nn_link_status = nn_cfg_readw(hw, NFP_NET_CFG_STS); - memset(&link, 0, sizeof(struct rte_eth_link)); + /* Read link status */ + nn_link_status = nn_cfg_readw(hw, NFP_NET_CFG_STS); if ((nn_link_status & NFP_NET_CFG_STS_LINK) != 0) link.link_status = RTE_ETH_LINK_UP; @@ -828,6 +827,9 @@ nfp_net_stats_get(struct rte_eth_dev *dev, struct nfp_net_hw *hw; struct rte_eth_stats nfp_dev_stats; + if (stats == NULL) + return -EINVAL; + hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); memset(&nfp_dev_stats, 0, sizeof(nfp_dev_stats)); @@ -892,11 +894,8 @@ nfp_net_stats_get(struct rte_eth_dev *dev, nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS); nfp_dev_stats.imissed -= hw->eth_stats_base.imissed; - if (stats != NULL) { - memcpy(stats, &nfp_dev_stats, sizeof(*stats)); - return 0; - } - return -EINVAL; + memcpy(stats, &nfp_dev_stats, sizeof(*stats)); + return 0; } /* @@ -1379,13 +1378,14 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, struct nfp_net_hw *hw; struct rte_pci_device *pci_dev; - hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); pci_dev = RTE_ETH_DEV_TO_PCI(dev); if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO) base = 1; /* Make sure all updates are written before un-masking */ rte_wmb(); + + hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); nn_cfg_writeb(hw, NFP_NET_CFG_ICR(base + queue_id), NFP_NET_CFG_ICR_UNMASKED); return 0; @@ -1399,14 +1399,16 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, struct nfp_net_hw *hw; struct rte_pci_device *pci_dev; - hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); pci_dev = RTE_ETH_DEV_TO_PCI(dev); if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO) base = 1; /* Make sure all updates are written before un-masking */ rte_wmb(); - nn_cfg_writeb(hw, NFP_NET_CFG_ICR(base + queue_id), 0x1); + + hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); + nn_cfg_writeb(hw, NFP_NET_CFG_ICR(base + queue_id), NFP_NET_CFG_ICR_RXTX); + return 0; } @@ -1445,13 +1447,13 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev) hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); pci_dev = RTE_ETH_DEV_TO_PCI(dev); + /* Make sure all updates are written before un-masking */ + rte_wmb(); + if ((hw->ctrl & NFP_NET_CFG_CTRL_MSIXAUTO) != 0) { /* If MSI-X auto-masking is used, clear the entry */ - rte_wmb(); rte_intr_ack(pci_dev->intr_handle); } else { - /* Make sure all updates are written before un-masking */ - rte_wmb(); nn_cfg_writeb(hw, NFP_NET_CFG_ICR(NFP_NET_IRQ_LSC_IDX), NFP_NET_CFG_ICR_UNMASKED); } @@ -1548,19 +1550,18 @@ 
nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int ret; uint32_t update; uint32_t new_ctrl; + uint64_t rx_offload; struct nfp_net_hw *hw; uint32_t rxvlan_ctrl = 0; - struct rte_eth_conf *dev_conf; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - dev_conf = &dev->data->dev_conf; + rx_offload = dev->data->dev_conf.rxmode.offloads; new_ctrl = hw->ctrl; - nfp_net_enbable_rxvlan_cap(hw, &rxvlan_ctrl); - /* VLAN stripping setting */ if ((mask & RTE_ETH_VLAN_STRIP_MASK) != 0) { - if ((dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0) + nfp_net_enbable_rxvlan_cap(hw, &rxvlan_ctrl); + if ((rx_offload & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0) new_ctrl |= rxvlan_ctrl; else new_ctrl &= ~rxvlan_ctrl; @@ -1568,7 +1569,7 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev, /* QinQ stripping setting */ if ((mask & RTE_ETH_QINQ_STRIP_MASK) != 0) { - if ((dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0) + if ((rx_offload & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0) new_ctrl |= NFP_NET_CFG_CTRL_RXQINQ; else new_ctrl &= ~NFP_NET_CFG_CTRL_RXQINQ; @@ -1580,10 +1581,12 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev, update = NFP_NET_CFG_UPDATE_GEN; ret = nfp_net_reconfig(hw, new_ctrl, update); - if (ret == 0) - hw->ctrl = new_ctrl; + if (ret != 0) + return ret; - return ret; + hw->ctrl = new_ctrl; + + return 0; } static int diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c index 080070f58b..f37de7060a 100644 --- a/drivers/net/nfp/nfp_cpp_bridge.c +++ b/drivers/net/nfp/nfp_cpp_bridge.c @@ -22,9 +22,6 @@ #define NFP_IOCTL_CPP_IDENTIFICATION _IOW(NFP_IOCTL, 0x8f, uint32_t) /* Prototypes */ -static int nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp); -static int nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp); -static int nfp_cpp_bridge_serve_ioctl(int sockfd, struct nfp_cpp *cpp); static int nfp_cpp_bridge_service_func(void *args); int @@ -438,7 +435,7 @@ nfp_cpp_bridge_service_func(void *args) return -EIO; } - while (1) { + for (;;) { ret = recv(datafd, &op, 4, 0); if (ret <= 0) { PMD_CPP_LOG(DEBUG, "%s: socket close", __func__); diff --git a/drivers/net/nfp/nfp_ctrl.h b/drivers/net/nfp/nfp_ctrl.h index 71fe125420..1012b37b1f 100644 --- a/drivers/net/nfp/nfp_ctrl.h +++ b/drivers/net/nfp/nfp_ctrl.h @@ -442,8 +442,6 @@ struct nfp_net_fw_ver { #define NFP_MAC_STATS_TX_PAUSE_FRAMES_CLASS6 (NFP_MAC_STATS_BASE + 0x1f0) #define NFP_MAC_STATS_TX_PAUSE_FRAMES_CLASS7 (NFP_MAC_STATS_BASE + 0x1f8) -#define NFP_PF_CSR_SLICE_SIZE (32 * 1024) - /* * General use mailbox area (0x1800 - 0x19ff) * 4B used for update command and 4B return code followed by diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c index 0493548c81..362fd2b601 100644 --- a/drivers/net/nfp/nfp_ethdev.c +++ b/drivers/net/nfp/nfp_ethdev.c @@ -80,7 +80,7 @@ nfp_net_start(struct rte_eth_dev *dev) * Better not to share LSC with RX interrupts. 
* Unregistering LSC interrupt handler */ - rte_intr_callback_unregister(pci_dev->intr_handle, + rte_intr_callback_unregister(intr_handle, nfp_net_dev_interrupt_handler, (void *)dev); if (dev->data->nb_rx_queues > 1) { @@ -525,7 +525,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev) return -ENODEV; /* Use port offset in pf ctrl_bar for this ports control bar */ - hw->ctrl_bar = pf_dev->ctrl_bar + (port * NFP_PF_CSR_SLICE_SIZE); + hw->ctrl_bar = pf_dev->ctrl_bar + (port * NFP_NET_CFG_BAR_SZ); hw->mac_stats = app_fw_nic->ports[0]->mac_stats_bar + (port * NFP_MAC_STATS_SIZE); } @@ -743,8 +743,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev, const struct nfp_dev_info *dev_info) { uint8_t i; - int ret; - int err = 0; + int ret = 0; uint32_t total_vnics; struct nfp_net_hw *hw; unsigned int numa_node; @@ -765,8 +764,8 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev, pf_dev->app_fw_priv = app_fw_nic; /* Read the number of vNIC's created for the PF */ - total_vnics = nfp_rtsym_read_le(pf_dev->sym_tbl, "nfd_cfg_pf0_num_ports", &err); - if (err != 0 || total_vnics == 0 || total_vnics > 8) { + total_vnics = nfp_rtsym_read_le(pf_dev->sym_tbl, "nfd_cfg_pf0_num_ports", &ret); + if (ret != 0 || total_vnics == 0 || total_vnics > 8) { PMD_INIT_LOG(ERR, "nfd_cfg_pf0_num_ports symbol with wrong value"); ret = -ENODEV; goto app_cleanup; @@ -874,8 +873,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev, static int nfp_pf_init(struct rte_pci_device *pci_dev) { - int ret; - int err = 0; + int ret = 0; uint64_t addr; uint32_t cpp_id; struct nfp_cpp *cpp; @@ -943,8 +941,8 @@ nfp_pf_init(struct rte_pci_device *pci_dev) } /* Read the app ID of the firmware loaded */ - app_fw_id = nfp_rtsym_read_le(sym_tbl, "_pf0_net_app_id", &err); - if (err != 0) { + app_fw_id = nfp_rtsym_read_le(sym_tbl, "_pf0_net_app_id", &ret); + if (ret != 0) { PMD_INIT_LOG(ERR, "Couldn't read app_fw_id from fw"); ret = -EIO; goto sym_tbl_cleanup; @@ -1080,7 +1078,6 @@ nfp_secondary_init_app_fw_nic(struct rte_pci_device *pci_dev, static int nfp_pf_secondary_init(struct rte_pci_device *pci_dev) { - int err = 0; int ret = 0; struct nfp_cpp *cpp; enum nfp_app_fw_id app_fw_id; @@ -1124,8 +1121,8 @@ nfp_pf_secondary_init(struct rte_pci_device *pci_dev) } /* Read the app ID of the firmware loaded */ - app_fw_id = nfp_rtsym_read_le(sym_tbl, "_pf0_net_app_id", &err); - if (err != 0) { + app_fw_id = nfp_rtsym_read_le(sym_tbl, "_pf0_net_app_id", &ret); + if (ret != 0) { PMD_INIT_LOG(ERR, "Couldn't read app_fw_id from fw"); goto sym_tbl_cleanup; } diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c index af0689832a..b6ebbc1ea5 100644 --- a/drivers/net/nfp/nfp_ethdev_vf.c +++ b/drivers/net/nfp/nfp_ethdev_vf.c @@ -39,8 +39,6 @@ nfp_netvf_start(struct rte_eth_dev *dev) struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); struct rte_intr_handle *intr_handle = pci_dev->intr_handle; - hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - /* Disabling queues just in case... */ nfp_net_disable_queues(dev); @@ -54,7 +52,7 @@ nfp_netvf_start(struct rte_eth_dev *dev) * Better not to share LSC with RX interrupts. 
* Unregistering LSC interrupt handler */ - rte_intr_callback_unregister(pci_dev->intr_handle, + rte_intr_callback_unregister(intr_handle, nfp_net_dev_interrupt_handler, (void *)dev); if (dev->data->nb_rx_queues > 1) { @@ -77,6 +75,7 @@ nfp_netvf_start(struct rte_eth_dev *dev) new_ctrl = nfp_check_offloads(dev); /* Writing configuration parameters in the device */ + hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); nfp_net_params_setup(hw); dev_conf = &dev->data->dev_conf; @@ -244,15 +243,15 @@ static int nfp_netvf_init(struct rte_eth_dev *eth_dev) { int err; + uint16_t port; uint32_t start_q; - uint16_t port = 0; struct nfp_net_hw *hw; uint64_t tx_bar_off = 0; uint64_t rx_bar_off = 0; struct rte_pci_device *pci_dev; const struct nfp_dev_info *dev_info; - struct rte_ether_addr *tmp_ether_addr; + port = eth_dev->data->port_id; pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); dev_info = nfp_dev_info_get(pci_dev->id.device_id); @@ -325,9 +324,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) } nfp_netvf_read_mac(hw); - - tmp_ether_addr = &hw->mac_addr; - if (rte_is_valid_assigned_ether_addr(tmp_ether_addr) == 0) { + if (rte_is_valid_assigned_ether_addr(&hw->mac_addr) == 0) { PMD_INIT_LOG(INFO, "Using random mac address for port %hu", port); /* Using random mac addresses for VFs */ rte_eth_random_addr(&hw->mac_addr.addr_bytes[0]); @@ -344,7 +341,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) PMD_INIT_LOG(INFO, "port %hu VendorID=%#x DeviceID=%#x " "mac=" RTE_ETHER_ADDR_PRT_FMT, - eth_dev->data->port_id, pci_dev->id.vendor_id, + port, pci_dev->id.vendor_id, pci_dev->id.device_id, RTE_ETHER_ADDR_BYTES(&hw->mac_addr)); diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c index 4632837c0e..e11f617f9a 100644 --- a/drivers/net/nfp/nfp_rxtx.c +++ b/drivers/net/nfp/nfp_rxtx.c @@ -284,7 +284,7 @@ nfp_net_parse_chained_meta(uint8_t *meta_base, meta->vlan[meta->vlan_layer].tci = vlan_info & NFP_NET_META_VLAN_MASK; meta->vlan[meta->vlan_layer].tpid = NFP_NET_META_TPID(vlan_info); - ++meta->vlan_layer; + meta->vlan_layer++; break; case NFP_NET_META_IPSEC: meta->sa_idx = rte_be_to_cpu_32(*(rte_be32_t *)meta_offset); From patchwork Sat Oct 7 02:33:39 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chaoyong He X-Patchwork-Id: 132381 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 8F079426D6; Sat, 7 Oct 2023 04:35:52 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id DD20741060; Sat, 7 Oct 2023 04:34:22 +0200 (CEST) Received: from NAM10-DM6-obe.outbound.protection.outlook.com (mail-dm6nam10on2090.outbound.protection.outlook.com [40.107.93.90]) by mails.dpdk.org (Postfix) with ESMTP id BB17340EE2 for ; Sat, 7 Oct 2023 04:34:20 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=WNXYkzgxy4aKPIllk35PxqopjlI+f+U9zyHGWcD0k0OXDjJ+2pqSkPuVcoDvOm+MmrCIqMTmjNzac6aGKvV2QVfzLSgZquC6aNWeoJSg7jBzh8LaqKx1iMXCccPRnLJjuAR5XtMb0v0ZUwR0Ya/Flrkh96YJQJWJruMUaBUwAxK++mtgsZfsoGNn09/3bF5KWRJwaC37kjF+lM8RCBGjeB7PUUKRsxDUC/ZIG4hwJUK3FYg3pr/KmbTGIpHzpEWJLl0LpBlFHMcR1telUyFZbOJEaAWAS30XbAaAWN4HLeN7F0I2M9iiSMKYXkZpM6K9dXcq9YyGmaTb/qA7hEPFqA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; 
From: Chaoyong He
To: dev@dpdk.org
Cc: oss-drivers@corigine.com, Chaoyong He, Long Wu, Peng Zhang
Subject: [PATCH 11/11] net/nfp: refactor the meson build file
Date: Sat, 7 Oct 2023 10:33:39 +0800
Message-Id: <20231007023339.1546659-12-chaoyong.he@corigine.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20231007023339.1546659-1-chaoyong.he@corigine.com>
References: <20231007023339.1546659-1-chaoyong.he@corigine.com>
MIME-Version: 1.0

Make the source files follow alphabetical order.
Also update the copyright header line.
Signed-off-by: Chaoyong He Reviewed-by: Long Wu Reviewed-by: Peng Zhang --- drivers/net/nfp/meson.build | 23 ++++++++++++----------- 1 file changed, 12 insertions(+), 11 deletions(-) diff --git a/drivers/net/nfp/meson.build b/drivers/net/nfp/meson.build index 3912566134..fd3e88a207 100644 --- a/drivers/net/nfp/meson.build +++ b/drivers/net/nfp/meson.build @@ -1,10 +1,11 @@ # SPDX-License-Identifier: BSD-3-Clause -# Copyright(c) 2018 Intel Corporation +# Copyright(c) 2018 Corigine, Inc. if not is_linux or not dpdk_conf.get('RTE_ARCH_64') build = false reason = 'only supported on 64-bit Linux' endif + sources = files( 'flower/nfp_flower.c', 'flower/nfp_flower_cmsg.c', @@ -12,30 +13,30 @@ sources = files( 'flower/nfp_flower_representor.c', 'nfd3/nfp_nfd3_dp.c', 'nfdk/nfp_nfdk_dp.c', - 'nfpcore/nfp_nsp.c', 'nfpcore/nfp_cppcore.c', - 'nfpcore/nfp_resource.c', - 'nfpcore/nfp_mip.c', - 'nfpcore/nfp_nffw.c', - 'nfpcore/nfp_rtsym.c', - 'nfpcore/nfp_nsp_cmds.c', 'nfpcore/nfp_crc.c', 'nfpcore/nfp_dev.c', + 'nfpcore/nfp_hwinfo.c', + 'nfpcore/nfp_mip.c', 'nfpcore/nfp_mutex.c', + 'nfpcore/nfp_nffw.c', + 'nfpcore/nfp_nsp.c', + 'nfpcore/nfp_nsp_cmds.c', 'nfpcore/nfp_nsp_eth.c', - 'nfpcore/nfp_hwinfo.c', + 'nfpcore/nfp_resource.c', + 'nfpcore/nfp_rtsym.c', 'nfpcore/nfp_target.c', 'nfpcore/nfp6000_pcie.c', 'nfp_common.c', - 'nfp_ctrl.c', - 'nfp_rxtx.c', 'nfp_cpp_bridge.c', - 'nfp_ethdev_vf.c', + 'nfp_ctrl.c', 'nfp_ethdev.c', + 'nfp_ethdev_vf.c', 'nfp_flow.c', 'nfp_ipsec.c', 'nfp_logs.c', 'nfp_mtr.c', + 'nfp_rxtx.c', ) deps += ['hash', 'security']
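
The diffs in this series apply a few recurring conventions across the driver: comparisons against NULL and 0 are written out explicitly, parameter validation moves to the top of the function, and the success path falls through to a single plain return. The short C sketch below condenses those conventions; it is not taken from the patches, and nfp_example_stats_get(), struct example_stats and their fields are hypothetical names used only for illustration.

#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <errno.h>

struct example_stats {
	uint64_t ipackets;
	uint64_t ibytes;
};

/*
 * Hypothetical helper mirroring the reworked nfp_net_stats_get() flow:
 * validate the output pointer first, keep bit tests explicit, and let
 * the success path end in a single return of 0.
 */
static int
nfp_example_stats_get(const struct example_stats *hw_stats, uint32_t cap,
		uint32_t wanted_flag, struct example_stats *stats)
{
	/* Reject a missing output buffer up front instead of at the end. */
	if (stats == NULL)
		return -EINVAL;

	/* Compare the masked capability bit against 0 explicitly. */
	if ((cap & wanted_flag) == 0)
		return -ENOTSUP;

	memcpy(stats, hw_stats, sizeof(*stats));

	return 0;
}

Checking the failure condition first and returning immediately is the same shape the series gives nfp_net_vlan_offload_set(): when nfp_net_reconfig() fails the function returns at once, so hw->ctrl is only updated on the success path.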