From patchwork Thu Oct 12 01:26:54 2023
X-Patchwork-Submitter: Chaoyong He
X-Patchwork-Id: 132559
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Chaoyong He
To: dev@dpdk.org
Cc: oss-drivers@corigine.com, Chaoyong He, Long Wu, Peng Zhang
Subject: [PATCH v2 01/11] net/nfp: explicitly compare to null and 0
Date: Thu, 12 Oct 2023 09:26:54 +0800
Message-Id: <20231012012704.483828-2-chaoyong.he@corigine.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20231012012704.483828-1-chaoyong.he@corigine.com>
References: <20231007023339.1546659-1-chaoyong.he@corigine.com> <20231012012704.483828-1-chaoyong.he@corigine.com>
MIME-Version: 1.0
To comply with the coding standard, explicitly compare pointer
variables to 'NULL' and integer variables to '0'.

Signed-off-by: Chaoyong He
Reviewed-by: Long Wu
Reviewed-by: Peng Zhang
---
 drivers/net/nfp/flower/nfp_flower.c      |   6 +-
 drivers/net/nfp/flower/nfp_flower_ctrl.c |   6 +-
 drivers/net/nfp/nfp_common.c             | 144 +++++++++++------------
 drivers/net/nfp/nfp_cpp_bridge.c         |   2 +-
 drivers/net/nfp/nfp_ethdev.c             |  38 +++---
 drivers/net/nfp/nfp_ethdev_vf.c          |  14 +--
 drivers/net/nfp/nfp_flow.c               |  90 +++++++-------
 drivers/net/nfp/nfp_rxtx.c               |  28 ++---
 8 files changed, 165 insertions(+), 163 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c
index 98e6f7f927..3ddaf0f28d 100644
--- a/drivers/net/nfp/flower/nfp_flower.c
+++ b/drivers/net/nfp/flower/nfp_flower.c
@@ -69,7 +69,7 @@ nfp_pf_repr_disable_queues(struct rte_eth_dev *dev)
 		new_ctrl &= ~NFP_NET_CFG_CTRL_RINGCFG;
 
 	/* If an error when reconfig we avoid to change hw state */
-	if (nfp_net_reconfig(hw, new_ctrl, update) < 0)
+	if (nfp_net_reconfig(hw, new_ctrl, update) != 0)
 		return;
 
 	hw->ctrl = new_ctrl;
@@ -100,7 +100,7 @@ nfp_flower_pf_start(struct rte_eth_dev *dev)
 
 	update |= NFP_NET_CFG_UPDATE_RSS;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RSS2)
+	if ((hw->cap & NFP_NET_CFG_CTRL_RSS2) != 0)
 		new_ctrl |= NFP_NET_CFG_CTRL_RSS2;
 	else
 		new_ctrl |= NFP_NET_CFG_CTRL_RSS;
@@ -110,7 +110,7 @@ nfp_flower_pf_start(struct
rte_eth_dev *dev)
 
 	update |= NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RINGCFG)
+	if ((hw->cap & NFP_NET_CFG_CTRL_RINGCFG) != 0)
 		new_ctrl |= NFP_NET_CFG_CTRL_RINGCFG;
 
 	nn_cfg_writel(hw, NFP_NET_CFG_CTRL, new_ctrl);
diff --git a/drivers/net/nfp/flower/nfp_flower_ctrl.c b/drivers/net/nfp/flower/nfp_flower_ctrl.c
index c5282053cf..b564e7cd73 100644
--- a/drivers/net/nfp/flower/nfp_flower_ctrl.c
+++ b/drivers/net/nfp/flower/nfp_flower_ctrl.c
@@ -103,7 +103,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue,
 	}
 
 	/* Filling the received mbuf with packet info */
-	if (hw->rx_offset)
+	if (hw->rx_offset != 0)
 		mb->data_off = RTE_PKTMBUF_HEADROOM + hw->rx_offset;
 	else
 		mb->data_off = RTE_PKTMBUF_HEADROOM + NFP_DESC_META_LEN(rxds);
@@ -195,7 +195,7 @@ nfp_flower_ctrl_vnic_nfd3_xmit(struct nfp_app_fw_flower *app_fw_flower,
 	lmbuf = &txq->txbufs[txq->wr_p].mbuf;
 	RTE_MBUF_PREFETCH_TO_FREE(*lmbuf);
 
-	if (*lmbuf)
+	if (*lmbuf != NULL)
 		rte_pktmbuf_free_seg(*lmbuf);
 
 	*lmbuf = mbuf;
@@ -337,7 +337,7 @@ nfp_flower_ctrl_vnic_nfdk_xmit(struct nfp_app_fw_flower *app_fw_flower,
 	}
 
 	txq->wr_p = D_IDX(txq, txq->wr_p + used_descs);
-	if (txq->wr_p % NFDK_TX_DESC_BLOCK_CNT)
+	if (txq->wr_p % NFDK_TX_DESC_BLOCK_CNT != 0)
 		txq->data_pending += mbuf->pkt_len;
 	else
 		txq->data_pending = 0;
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 5683afc40a..36752583dd 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -221,7 +221,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t update)
 		new = nn_cfg_readl(hw, NFP_NET_CFG_UPDATE);
 		if (new == 0)
 			break;
-		if (new & NFP_NET_CFG_UPDATE_ERR) {
+		if ((new & NFP_NET_CFG_UPDATE_ERR) != 0) {
 			PMD_INIT_LOG(ERR, "Reconfig error: 0x%08x", new);
 			return -1;
 		}
@@ -390,18 +390,18 @@ nfp_net_configure(struct rte_eth_dev *dev)
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
 
-	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG)
+	if ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) != 0)
 		rxmode->offloads |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 	/* Checking TX mode */
-	if (txmode->mq_mode) {
+	if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) {
 		PMD_INIT_LOG(INFO, "TX mq_mode DCB and VMDq not supported");
 		return -EINVAL;
 	}
 
 	/* Checking RX mode */
-	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG &&
-	    !(hw->cap & NFP_NET_CFG_CTRL_RSS_ANY)) {
+	if ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) != 0 &&
+	    (hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) == 0) {
 		PMD_INIT_LOG(INFO, "RSS not supported");
 		return -EINVAL;
 	}
@@ -493,11 +493,11 @@ nfp_net_disable_queues(struct rte_eth_dev *dev)
 	update = NFP_NET_CFG_UPDATE_GEN |
 			NFP_NET_CFG_UPDATE_RING |
 			NFP_NET_CFG_UPDATE_MSIX;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RINGCFG)
+	if ((hw->cap & NFP_NET_CFG_CTRL_RINGCFG) != 0)
 		new_ctrl &= ~NFP_NET_CFG_CTRL_RINGCFG;
 
 	/* If an error when reconfig we avoid to change hw state */
-	if (nfp_net_reconfig(hw, new_ctrl, update) < 0)
+	if (nfp_net_reconfig(hw, new_ctrl, update) != 0)
 		return;
 
 	hw->ctrl = new_ctrl;
@@ -537,8 +537,8 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr)
 	uint32_t update, ctrl;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) &&
-	    !(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR)) {
+	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
+	    (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) == 0) {
 		PMD_INIT_LOG(INFO, "MAC address unable to change when"
 				" port enabled");
 		return -EBUSY;
@@ -550,10 +550,10 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr)
 	/* Signal the NIC about the change */
 	update = NFP_NET_CFG_UPDATE_MACADDR;
 	ctrl = hw->ctrl;
-	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) &&
-	    (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR))
+	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
+	    (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_LIVE_ADDR;
-	if (nfp_net_reconfig(hw, ctrl, update) < 0) {
+	if (nfp_net_reconfig(hw, ctrl, update) != 0) {
 		PMD_INIT_LOG(INFO, "MAC address update failed");
 		return -EIO;
 	}
@@ -568,7 +568,7 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 	int i;
 
 	if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
-				dev->data->nb_rx_queues)) {
+				dev->data->nb_rx_queues) != 0) {
 		PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
 				" intr_vec", dev->data->nb_rx_queues);
 		return -ENOMEM;
@@ -580,7 +580,7 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 		PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with UIO");
 		/* UIO just supports one queue and no LSC*/
 		nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(0), 0);
-		if (rte_intr_vec_list_index_set(intr_handle, 0, 0))
+		if (rte_intr_vec_list_index_set(intr_handle, 0, 0) != 0)
 			return -1;
 	} else {
 		PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with VFIO");
@@ -591,7 +591,7 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 			 */
 			nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(i), i + 1);
 			if (rte_intr_vec_list_index_set(intr_handle, i,
-					i + 1))
+					i + 1) != 0)
 				return -1;
 			PMD_INIT_LOG(DEBUG, "intr_vec[%d]= %d", i,
 					rte_intr_vec_list_index_get(intr_handle,
@@ -619,53 +619,53 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
 
-	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) {
-		if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
+	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0) {
+		if ((hw->cap & NFP_NET_CFG_CTRL_RXCSUM) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_RXCSUM;
 	}
 
-	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0)
 		nfp_net_enbable_rxvlan_cap(hw, &ctrl);
 
-	if (rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) {
-		if (hw->cap & NFP_NET_CFG_CTRL_RXQINQ)
+	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0) {
+		if ((hw->cap & NFP_NET_CFG_CTRL_RXQINQ) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_RXQINQ;
 	}
 
 	hw->mtu = dev->data->mtu;
 
-	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) {
-		if (hw->cap & NFP_NET_CFG_CTRL_TXVLAN_V2)
+	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) != 0) {
+		if ((hw->cap & NFP_NET_CFG_CTRL_TXVLAN_V2) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_TXVLAN_V2;
-		else if (hw->cap & NFP_NET_CFG_CTRL_TXVLAN)
+		else if ((hw->cap & NFP_NET_CFG_CTRL_TXVLAN) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_TXVLAN;
 	}
 
 	/* L2 broadcast */
-	if (hw->cap & NFP_NET_CFG_CTRL_L2BC)
+	if ((hw->cap & NFP_NET_CFG_CTRL_L2BC) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_L2BC;
 
 	/* L2 multicast */
-	if (hw->cap & NFP_NET_CFG_CTRL_L2MC)
+	if ((hw->cap & NFP_NET_CFG_CTRL_L2MC) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_L2MC;
 
 	/* TX checksum offload */
-	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM ||
-	    txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM ||
-	    txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM)
+	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0 ||
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0 ||
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_TXCSUM;
 
 	/* LSO offload */
-	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO ||
-	    txmode->offloads & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) {
-		if (hw->cap & NFP_NET_CFG_CTRL_LSO)
+	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) != 0 ||
+	    (txmode->offloads & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) != 0) {
+		if ((hw->cap & NFP_NET_CFG_CTRL_LSO) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_LSO;
 		else
 			ctrl |= NFP_NET_CFG_CTRL_LSO2;
 	}
 
 	/* RX gather */
-	if (txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS)
+	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_GATHER;
 
 	return ctrl;
@@ -693,7 +693,7 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
-	if (hw->ctrl & NFP_NET_CFG_CTRL_PROMISC) {
+	if ((hw->ctrl & NFP_NET_CFG_CTRL_PROMISC) != 0) {
 		PMD_DRV_LOG(INFO, "Promiscuous mode already enabled");
 		return 0;
 	}
@@ -706,7 +706,7 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev)
 	 * it can not fail ...
 	 */
 	ret = nfp_net_reconfig(hw, new_ctrl, update);
-	if (ret < 0)
+	if (ret != 0)
 		return ret;
 
 	hw->ctrl = new_ctrl;
@@ -736,7 +736,7 @@ nfp_net_promisc_disable(struct rte_eth_dev *dev)
 	 * assuming it can not fail ...
 	 */
 	ret = nfp_net_reconfig(hw, new_ctrl, update);
-	if (ret < 0)
+	if (ret != 0)
 		return ret;
 
 	hw->ctrl = new_ctrl;
@@ -770,7 +770,7 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
 
 	memset(&link, 0, sizeof(struct rte_eth_link));
 
-	if (nn_link_status & NFP_NET_CFG_STS_LINK)
+	if ((nn_link_status & NFP_NET_CFG_STS_LINK) != 0)
 		link.link_status = RTE_ETH_LINK_UP;
 
 	link.link_duplex = RTE_ETH_LINK_FULL_DUPLEX;
@@ -802,7 +802,7 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
 
 	ret = rte_eth_linkstatus_set(dev, &link);
 	if (ret == 0) {
-		if (link.link_status)
+		if (link.link_status != 0)
 			PMD_DRV_LOG(INFO, "NIC Link is Up");
 		else
 			PMD_DRV_LOG(INFO, "NIC Link is Down");
@@ -907,7 +907,7 @@ nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 	nfp_dev_stats.imissed -= hw->eth_stats_base.imissed;
 
-	if (stats) {
+	if (stats != NULL) {
 		memcpy(stats, &nfp_dev_stats, sizeof(*stats));
 		return 0;
 	}
@@ -1229,32 +1229,32 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	/* Next should change when PF support is implemented */
 	dev_info->max_mac_addrs = 1;
 
-	if (hw->cap & (NFP_NET_CFG_CTRL_RXVLAN | NFP_NET_CFG_CTRL_RXVLAN_V2))
+	if ((hw->cap & (NFP_NET_CFG_CTRL_RXVLAN | NFP_NET_CFG_CTRL_RXVLAN_V2)) != 0)
 		dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RXQINQ)
+	if ((hw->cap & NFP_NET_CFG_CTRL_RXQINQ) != 0)
 		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_QINQ_STRIP;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RXCSUM)
+	if ((hw->cap & NFP_NET_CFG_CTRL_RXCSUM) != 0)
 		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
 				RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
 				RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
-	if (hw->cap & (NFP_NET_CFG_CTRL_TXVLAN |
-	    NFP_NET_CFG_CTRL_TXVLAN_V2))
+	if ((hw->cap & (NFP_NET_CFG_CTRL_TXVLAN | NFP_NET_CFG_CTRL_TXVLAN_V2)) != 0)
 		dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_TXCSUM)
+	if ((hw->cap & NFP_NET_CFG_CTRL_TXCSUM) != 0)
 		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
 				RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
 				RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_LSO_ANY) {
+	if ((hw->cap & NFP_NET_CFG_CTRL_LSO_ANY) != 0) {
 		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
-		if (hw->cap & NFP_NET_CFG_CTRL_VXLAN)
+		if ((hw->cap & NFP_NET_CFG_CTRL_VXLAN) != 0)
 			dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO;
 	}
 
-	if (hw->cap & NFP_NET_CFG_CTRL_GATHER)
+	if ((hw->cap & NFP_NET_CFG_CTRL_GATHER) != 0)
 		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;
 
 	cap_extend = nn_cfg_readl(hw, NFP_NET_CFG_CAP_WORD1);
@@ -1297,7 +1297,7 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		.nb_mtu_seg_max = NFP_TX_MAX_MTU_SEG,
 	};
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) {
+	if ((hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) != 0) {
 		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 		dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
@@ -1431,7 +1431,7 @@ nfp_net_dev_link_status_print(struct rte_eth_dev *dev)
 	struct rte_eth_link link;
 
 	rte_eth_linkstatus_get(dev, &link);
-	if (link.link_status)
+	if (link.link_status != 0)
 		PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
 				dev->data->port_id, link.link_speed,
 				link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX
@@ -1462,7 +1462,7 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev)
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 
-	if (hw->ctrl & NFP_NET_CFG_CTRL_MSIXAUTO) {
+	if ((hw->ctrl & NFP_NET_CFG_CTRL_MSIXAUTO) != 0) {
 		/* If MSI-X auto-masking is used, clear the entry */
 		rte_wmb();
 		rte_intr_ack(pci_dev->intr_handle);
@@ -1524,7 +1524,7 @@ nfp_net_dev_interrupt_handler(void *param)
 	if (rte_eal_alarm_set(timeout * 1000,
 			nfp_net_dev_interrupt_delayed_handler,
-			(void *)dev) < 0) {
+			(void *)dev) != 0) {
 		PMD_INIT_LOG(ERR, "Error setting alarm");
 		/* Unmasking */
 		nfp_net_irq_unmask(dev);
@@ -1577,16 +1577,16 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 	nfp_net_enbable_rxvlan_cap(hw, &rxvlan_ctrl);
 
 	/* VLAN stripping setting */
-	if (mask & RTE_ETH_VLAN_STRIP_MASK) {
-		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
+	if ((mask & RTE_ETH_VLAN_STRIP_MASK) != 0) {
+		if ((dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0)
 			new_ctrl |= rxvlan_ctrl;
 		else
 			new_ctrl &= ~rxvlan_ctrl;
 	}
 
 	/* QinQ stripping setting */
-	if (mask & RTE_ETH_QINQ_STRIP_MASK) {
-		if (dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP)
+	if ((mask & RTE_ETH_QINQ_STRIP_MASK) != 0) {
+		if ((dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0)
 			new_ctrl |= NFP_NET_CFG_CTRL_RXQINQ;
 		else
 			new_ctrl &= ~NFP_NET_CFG_CTRL_RXQINQ;
@@ -1674,7 +1674,7 @@ nfp_net_reta_update(struct rte_eth_dev *dev,
 
 	update = NFP_NET_CFG_UPDATE_RSS;
 
-	if (nfp_net_reconfig(hw, hw->ctrl, update) < 0)
+	if (nfp_net_reconfig(hw, hw->ctrl, update) != 0)
 		return -EIO;
 
 	return 0;
@@ -1748,28 +1748,28 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,
 
 	rss_hf = rss_conf->rss_hf;
 
-	if (rss_hf & RTE_ETH_RSS_IPV4)
+	if ((rss_hf & RTE_ETH_RSS_IPV4) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4;
 
-	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_TCP) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_TCP;
 
-	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_UDP) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_UDP;
 
-	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV4_SCTP) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV4_SCTP;
 
-	if (rss_hf & RTE_ETH_RSS_IPV6)
+	if ((rss_hf & RTE_ETH_RSS_IPV6) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6;
 
-	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_TCP) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_TCP;
 
-	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_UDP) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_UDP;
 
-	if (rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
+	if ((rss_hf & RTE_ETH_RSS_NONFRAG_IPV6_SCTP) != 0)
 		cfg_rss_ctrl |= NFP_NET_CFG_RSS_IPV6_SCTP;
 
 	cfg_rss_ctrl |= NFP_NET_CFG_RSS_MASK;
@@ -1814,7 +1814,7 @@ nfp_net_rss_hash_update(struct rte_eth_dev *dev,
 
 	update = NFP_NET_CFG_UPDATE_RSS;
 
-	if (nfp_net_reconfig(hw, hw->ctrl, update) < 0)
+	if (nfp_net_reconfig(hw, hw->ctrl, update) != 0)
 		return -EIO;
 
 	return 0;
@@ -1838,28 +1838,28 @@ nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
 	rss_hf = rss_conf->rss_hf;
 	cfg_rss_ctrl = nn_cfg_readl(hw, NFP_NET_CFG_RSS_CTRL);
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4) != 0)
 		rss_hf |= RTE_ETH_RSS_IPV4;
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_TCP)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_TCP) != 0)
 		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_TCP;
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_TCP)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_TCP) != 0)
 		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_TCP;
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_UDP)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_UDP) != 0)
 		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_UDP;
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_UDP)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_UDP) != 0)
 		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_UDP;
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6) != 0)
 		rss_hf |= RTE_ETH_RSS_IPV6;
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_SCTP)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV4_SCTP) != 0)
 		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV4_SCTP;
 
-	if (cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_SCTP)
+	if ((cfg_rss_ctrl & NFP_NET_CFG_RSS_IPV6_SCTP) != 0)
 		rss_hf |= RTE_ETH_RSS_NONFRAG_IPV6_SCTP;
 
 	/* Propagate current RSS hash functions to caller */
diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c
index ed9a946b0c..34764a8a32 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.c
+++ b/drivers/net/nfp/nfp_cpp_bridge.c
@@ -70,7 +70,7 @@ nfp_map_service(uint32_t service_id)
 	rte_service_runstate_set(service_id, 1);
 	rte_service_component_runstate_set(service_id, 1);
 	rte_service_lcore_start(slcore);
-	if (rte_service_may_be_active(slcore))
+	if (rte_service_may_be_active(slcore) != 0)
 		PMD_INIT_LOG(INFO, "The service %s is running", service_name);
 	else
 		PMD_INIT_LOG(ERR, "The service %s is not running", service_name);
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index ebc5538291..12feec8eb4 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -89,7 +89,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 		}
 	}
 	intr_vector = dev->data->nb_rx_queues;
-	if (rte_intr_efd_enable(intr_handle, intr_vector))
+	if (rte_intr_efd_enable(intr_handle, intr_vector) != 0)
 		return -1;
 
 	nfp_configure_rx_interrupt(dev, intr_handle);
@@ -113,7 +113,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 
-	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
+	if ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) != 0) {
 		nfp_net_rss_config_default(dev);
 		update |= NFP_NET_CFG_UPDATE_RSS;
 		new_ctrl |= nfp_net_cfg_ctrl_rss(hw->cap);
@@ -125,15 +125,15 @@ nfp_net_start(struct rte_eth_dev *dev)
 	update |= NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING;
 
 	/* Enable vxlan */
-	if (hw->cap & NFP_NET_CFG_CTRL_VXLAN) {
+	if ((hw->cap & NFP_NET_CFG_CTRL_VXLAN) != 0) {
 		new_ctrl |= NFP_NET_CFG_CTRL_VXLAN;
 		update |= NFP_NET_CFG_UPDATE_VXLAN;
 	}
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RINGCFG)
+	if ((hw->cap & NFP_NET_CFG_CTRL_RINGCFG) != 0)
 		new_ctrl |= NFP_NET_CFG_CTRL_RINGCFG;
 
-	if (nfp_net_reconfig(hw, new_ctrl, update) < 0)
+	if (nfp_net_reconfig(hw, new_ctrl, update) != 0)
 		return -EIO;
 
 	/* Enable packet type offload by extend ctrl word1.
 	 */
@@ -146,14 +146,14 @@ nfp_net_start(struct rte_eth_dev *dev)
 			| NFP_NET_CFG_CTRL_IPSEC_LM_LOOKUP;
 
 	update = NFP_NET_CFG_UPDATE_GEN;
-	if (nfp_net_ext_reconfig(hw, ctrl_extend, update) < 0)
+	if (nfp_net_ext_reconfig(hw, ctrl_extend, update) != 0)
 		return -EIO;
 
 	/*
 	 * Allocating rte mbufs for configured rx queues.
 	 * This requires queues being enabled before
 	 */
-	if (nfp_net_rx_freelist_setup(dev) < 0) {
+	if (nfp_net_rx_freelist_setup(dev) != 0) {
 		ret = -ENOMEM;
 		goto error;
 	}
@@ -298,7 +298,7 @@ nfp_net_close(struct rte_eth_dev *dev)
 
 	for (i = 0; i < app_fw_nic->total_phyports; i++) {
 		/* Check to see if ports are still in use */
-		if (app_fw_nic->ports[i])
+		if (app_fw_nic->ports[i] != NULL)
 			return 0;
 	}
 
@@ -598,7 +598,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	hw->mtu = RTE_ETHER_MTU;
 
 	/* VLAN insertion is incompatible with LSOv2 */
-	if (hw->cap & NFP_NET_CFG_CTRL_LSO2)
+	if ((hw->cap & NFP_NET_CFG_CTRL_LSO2) != 0)
 		hw->cap &= ~NFP_NET_CFG_CTRL_TXVLAN;
 
 	nfp_net_log_device_information(hw);
@@ -618,7 +618,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 		nfp_net_write_mac(hw, &hw->mac_addr.addr_bytes[0]);
 
 	tmp_ether_addr = &hw->mac_addr;
-	if (!rte_is_valid_assigned_ether_addr(tmp_ether_addr)) {
+	if (rte_is_valid_assigned_ether_addr(tmp_ether_addr) == 0) {
 		PMD_INIT_LOG(INFO, "Using random mac address for port %d", port);
 		/* Using random mac addresses for VFs */
 		rte_eth_random_addr(&hw->mac_addr.addr_bytes[0]);
@@ -695,10 +695,11 @@ nfp_fw_upload(struct rte_pci_device *dev, struct nfp_nsp *nsp, char *card)
 	/* Finally try the card type and media */
 	snprintf(fw_name, sizeof(fw_name), "%s/%s", DEFAULT_FW_PATH, card);
 	PMD_DRV_LOG(DEBUG, "Trying with fw file: %s", fw_name);
-	if (rte_firmware_read(fw_name, &fw_buf, &fsize) < 0) {
-		PMD_DRV_LOG(INFO, "Firmware file %s not found.", fw_name);
-		return -ENOENT;
-	}
+	if (rte_firmware_read(fw_name, &fw_buf, &fsize) == 0)
+		goto load_fw;
+
+	PMD_DRV_LOG(ERR, "Can't find suitable firmware.");
+	return -ENOENT;
 
 load_fw:
 	PMD_DRV_LOG(INFO, "Firmware file found at %s with size: %zu",
@@ -727,7 +728,7 @@ nfp_fw_setup(struct rte_pci_device *dev,
 	if (nfp_fw_model == NULL)
 		nfp_fw_model = nfp_hwinfo_lookup(hwinfo, "assembly.partno");
 
-	if (nfp_fw_model) {
+	if (nfp_fw_model != NULL) {
 		PMD_DRV_LOG(INFO, "firmware model found: %s", nfp_fw_model);
 	} else {
 		PMD_DRV_LOG(ERR, "firmware model NOT found");
@@ -865,7 +866,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 		 * nfp_net_init
 		 */
 		ret = nfp_net_init(eth_dev);
-		if (ret) {
+		if (ret != 0) {
 			ret = -ENODEV;
 			goto port_cleanup;
 		}
@@ -878,7 +879,8 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 
 port_cleanup:
 	for (i = 0; i < app_fw_nic->total_phyports; i++) {
-		if (app_fw_nic->ports[i] && app_fw_nic->ports[i]->eth_dev) {
+		if (app_fw_nic->ports[i] != NULL &&
+				app_fw_nic->ports[i]->eth_dev != NULL) {
 			struct rte_eth_dev *tmp_dev;
 			tmp_dev = app_fw_nic->ports[i]->eth_dev;
 			nfp_ipsec_uninit(tmp_dev);
@@ -950,7 +952,7 @@ nfp_pf_init(struct rte_pci_device *pci_dev)
 		goto hwinfo_cleanup;
 	}
 
-	if (nfp_fw_setup(pci_dev, cpp, nfp_eth_table, hwinfo)) {
+	if (nfp_fw_setup(pci_dev, cpp, nfp_eth_table, hwinfo) != 0) {
 		PMD_INIT_LOG(ERR, "Error when uploading firmware");
 		ret = -EIO;
 		goto eth_table_cleanup;
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 0c94fc51ad..c8d6b0461b 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -66,7 +66,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 		}
 	}
 	intr_vector = dev->data->nb_rx_queues;
-	if (rte_intr_efd_enable(intr_handle, intr_vector))
+	if (rte_intr_efd_enable(intr_handle, intr_vector) != 0)
 		return -1;
 
 	nfp_configure_rx_interrupt(dev, intr_handle);
@@ -83,7 +83,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 
-	if (rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) {
+	if ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS) != 0) {
 		nfp_net_rss_config_default(dev);
 		update |= NFP_NET_CFG_UPDATE_RSS;
 		new_ctrl |= nfp_net_cfg_ctrl_rss(hw->cap);
@@ -94,18 +94,18 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 
 	update |= NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING;
 
-	if (hw->cap & NFP_NET_CFG_CTRL_RINGCFG)
+	if ((hw->cap & NFP_NET_CFG_CTRL_RINGCFG) != 0)
 		new_ctrl |= NFP_NET_CFG_CTRL_RINGCFG;
 
 	nn_cfg_writel(hw, NFP_NET_CFG_CTRL, new_ctrl);
-	if (nfp_net_reconfig(hw, new_ctrl, update) < 0)
+	if (nfp_net_reconfig(hw, new_ctrl, update) != 0)
 		return -EIO;
 
 	/*
 	 * Allocating rte mbufs for configured rx queues.
 	 * This requires queues being enabled before
 	 */
-	if (nfp_net_rx_freelist_setup(dev) < 0) {
+	if (nfp_net_rx_freelist_setup(dev) != 0) {
 		ret = -ENOMEM;
 		goto error;
 	}
@@ -330,7 +330,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	hw->mtu = RTE_ETHER_MTU;
 
 	/* VLAN insertion is incompatible with LSOv2 */
-	if (hw->cap & NFP_NET_CFG_CTRL_LSO2)
+	if ((hw->cap & NFP_NET_CFG_CTRL_LSO2) != 0)
 		hw->cap &= ~NFP_NET_CFG_CTRL_TXVLAN;
 
 	nfp_net_log_device_information(hw);
@@ -350,7 +350,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	nfp_netvf_read_mac(hw);
 
 	tmp_ether_addr = &hw->mac_addr;
-	if (!rte_is_valid_assigned_ether_addr(tmp_ether_addr)) {
+	if (rte_is_valid_assigned_ether_addr(tmp_ether_addr) == 0) {
 		PMD_INIT_LOG(INFO, "Using random mac address for port %d", port);
 		/* Using random mac addresses for VFs */
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 020e31e9de..3ea6813d9a 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -521,8 +521,8 @@ nfp_stats_id_free(struct nfp_flow_priv *priv, uint32_t ctx)
 
 	/* Check if buffer is full */
 	ring = &priv->stats_ids.free_list;
-	if (!CIRC_SPACE(ring->head, ring->tail, priv->stats_ring_size *
-			NFP_FL_STATS_ELEM_RS - NFP_FL_STATS_ELEM_RS + 1))
+	if (CIRC_SPACE(ring->head, ring->tail, priv->stats_ring_size *
+			NFP_FL_STATS_ELEM_RS - NFP_FL_STATS_ELEM_RS + 1) == 0)
 		return -ENOBUFS;
 
 	memcpy(&ring->buf[ring->head], &ctx, NFP_FL_STATS_ELEM_RS);
@@ -607,7 +607,7 @@ nfp_tun_add_ipv6_off(struct nfp_app_fw_flower *app_fw_flower,
 	rte_spinlock_lock(&priv->ipv6_off_lock);
 	LIST_FOREACH(entry, &priv->ipv6_off_list, next) {
-		if (!memcmp(entry->ipv6_addr, ipv6, sizeof(entry->ipv6_addr))) {
+		if (memcmp(entry->ipv6_addr, ipv6, sizeof(entry->ipv6_addr)) == 0) {
 			entry->ref_count++;
 			rte_spinlock_unlock(&priv->ipv6_off_lock);
 			return 0;
@@ -641,7 +641,7 @@ nfp_tun_del_ipv6_off(struct nfp_app_fw_flower *app_fw_flower,
 	rte_spinlock_lock(&priv->ipv6_off_lock);
 	LIST_FOREACH(entry, &priv->ipv6_off_list, next) {
-		if (!memcmp(entry->ipv6_addr, ipv6, sizeof(entry->ipv6_addr))) {
+		if (memcmp(entry->ipv6_addr, ipv6, sizeof(entry->ipv6_addr)) == 0) {
 			entry->ref_count--;
 			if (entry->ref_count == 0) {
 				LIST_REMOVE(entry, next);
@@ -671,14 +671,14 @@ nfp_tun_check_ip_off_del(struct nfp_flower_representor *repr,
 	struct nfp_flower_ext_meta *ext_meta = NULL;
 
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META)
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) != 0)
 		ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
 
 	if (ext_meta != NULL)
 		key_layer2 = rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2);
 
-	if (key_layer2 & NFP_FLOWER_LAYER2_TUN_IPV6) {
-		if (key_layer2 & NFP_FLOWER_LAYER2_GRE) {
+	if ((key_layer2 & NFP_FLOWER_LAYER2_TUN_IPV6) != 0) {
+		if ((key_layer2 & NFP_FLOWER_LAYER2_GRE) != 0) {
 			gre6 = (struct nfp_flower_ipv6_gre_tun *)(nfp_flow->payload.mask_data -
 					sizeof(struct nfp_flower_ipv6_gre_tun));
 			ret = nfp_tun_del_ipv6_off(repr->app_fw_flower, gre6->ipv6.ipv6_dst);
@@ -688,7 +688,7 @@ nfp_tun_check_ip_off_del(struct nfp_flower_representor *repr,
 			ret = nfp_tun_del_ipv6_off(repr->app_fw_flower, udp6->ipv6.ipv6_dst);
 		}
 	} else {
-		if (key_layer2 & NFP_FLOWER_LAYER2_GRE) {
+		if ((key_layer2 & NFP_FLOWER_LAYER2_GRE) != 0) {
 			gre4 = (struct nfp_flower_ipv4_gre_tun *)(nfp_flow->payload.mask_data -
 					sizeof(struct nfp_flower_ipv4_gre_tun));
 			ret = nfp_tun_del_ipv4_off(repr->app_fw_flower, gre4->ipv4.dst);
@@ -783,7 +783,7 @@ nfp_flow_compile_metadata(struct nfp_flow_priv *priv,
 	mbuf_off_mask += sizeof(struct nfp_flower_meta_tci);
 
 	/* Populate Extended Metadata if required */
-	if (key_layer->key_layer & NFP_FLOWER_LAYER_EXT_META) {
+	if ((key_layer->key_layer & NFP_FLOWER_LAYER_EXT_META) != 0) {
 		nfp_flower_compile_ext_meta(mbuf_off_exact, key_layer);
 		nfp_flower_compile_ext_meta(mbuf_off_mask, key_layer);
 		mbuf_off_exact += sizeof(struct nfp_flower_ext_meta);
@@ -1068,7 +1068,7 @@ nfp_flow_key_layers_calculate_actions(const struct rte_flow_action actions[],
 			break;
 		case RTE_FLOW_ACTION_TYPE_SET_TTL:
 			PMD_DRV_LOG(DEBUG, "RTE_FLOW_ACTION_TYPE_SET_TTL detected");
-			if (key_ls->key_layer & NFP_FLOWER_LAYER_IPV4) {
+			if ((key_ls->key_layer & NFP_FLOWER_LAYER_IPV4) != 0) {
 				if (!ttl_tos_flag) {
 					key_ls->act_size +=
 						sizeof(struct nfp_fl_act_set_ip4_ttl_tos);
@@ -1166,15 +1166,15 @@ nfp_flow_is_tunnel(struct rte_flow *nfp_flow)
 	struct nfp_flower_meta_tci *meta_tci;
 
 	meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data;
-	if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_VXLAN)
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_VXLAN) != 0)
 		return true;
 
-	if (!(meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META))
+	if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) == 0)
 		return false;
 
 	ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1);
 	key_layer2 = rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2);
-	if (key_layer2 & (NFP_FLOWER_LAYER2_GENEVE | NFP_FLOWER_LAYER2_GRE))
+	if ((key_layer2 & (NFP_FLOWER_LAYER2_GENEVE | NFP_FLOWER_LAYER2_GRE)) != 0)
 		return true;
 
 	return false;
@@ -1270,7 +1270,7 @@ nfp_flow_merge_ipv4(__rte_unused struct nfp_app_fw_flower *app_fw_flower,
 	spec = item->spec;
 	mask = item->mask ?
item->mask : proc->mask_default; meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data; - if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) + if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) != 0) ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1); if (is_outer_layer && nfp_flow_is_tunnel(nfp_flow)) { @@ -1281,8 +1281,8 @@ nfp_flow_merge_ipv4(__rte_unused struct nfp_app_fw_flower *app_fw_flower, hdr = is_mask ? &mask->hdr : &spec->hdr; - if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & - NFP_FLOWER_LAYER2_GRE)) { + if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & + NFP_FLOWER_LAYER2_GRE) != 0) { ipv4_gre_tun = (struct nfp_flower_ipv4_gre_tun *)*mbuf_off; ipv4_gre_tun->ip_ext.tos = hdr->type_of_service; @@ -1307,7 +1307,7 @@ nfp_flow_merge_ipv4(__rte_unused struct nfp_app_fw_flower *app_fw_flower, * reserve space for L4 info. * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv4 */ - if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) + if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0) *mbuf_off += sizeof(struct nfp_flower_tp_ports); hdr = is_mask ? &mask->hdr : &spec->hdr; @@ -1348,7 +1348,7 @@ nfp_flow_merge_ipv6(__rte_unused struct nfp_app_fw_flower *app_fw_flower, spec = item->spec; mask = item->mask ? item->mask : proc->mask_default; meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data; - if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) + if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) != 0) ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1); if (is_outer_layer && nfp_flow_is_tunnel(nfp_flow)) { @@ -1360,8 +1360,8 @@ nfp_flow_merge_ipv6(__rte_unused struct nfp_app_fw_flower *app_fw_flower, hdr = is_mask ? 
&mask->hdr : &spec->hdr; vtc_flow = rte_be_to_cpu_32(hdr->vtc_flow); - if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & - NFP_FLOWER_LAYER2_GRE)) { + if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & + NFP_FLOWER_LAYER2_GRE) != 0) { ipv6_gre_tun = (struct nfp_flower_ipv6_gre_tun *)*mbuf_off; ipv6_gre_tun->ip_ext.tos = vtc_flow >> RTE_IPV6_HDR_TC_SHIFT; @@ -1390,7 +1390,7 @@ nfp_flow_merge_ipv6(__rte_unused struct nfp_app_fw_flower *app_fw_flower, * reserve space for L4 info. * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv6 */ - if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) + if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0) *mbuf_off += sizeof(struct nfp_flower_tp_ports); hdr = is_mask ? &mask->hdr : &spec->hdr; @@ -1434,7 +1434,7 @@ nfp_flow_merge_tcp(__rte_unused struct nfp_app_fw_flower *app_fw_flower, } meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data; - if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) { + if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) { ipv4 = (struct nfp_flower_ipv4 *) (*mbuf_off - sizeof(struct nfp_flower_ipv4)); ports = (struct nfp_flower_tp_ports *) @@ -1457,7 +1457,7 @@ nfp_flow_merge_tcp(__rte_unused struct nfp_app_fw_flower *app_fw_flower, tcp_flags = spec->hdr.tcp_flags; } - if (ipv4) { + if (ipv4 != NULL) { if (tcp_flags & RTE_TCP_FIN_FLAG) ipv4->ip_ext.flags |= NFP_FL_TCP_FLAG_FIN; if (tcp_flags & RTE_TCP_SYN_FLAG) @@ -1512,7 +1512,7 @@ nfp_flow_merge_udp(__rte_unused struct nfp_app_fw_flower *app_fw_flower, } meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data; - if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) { + if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) { ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv4) - sizeof(struct nfp_flower_tp_ports); } else {/* IPv6 */ @@ -1555,7 +1555,7 @@ nfp_flow_merge_sctp(__rte_unused struct 
nfp_app_fw_flower *app_fw_flower, } meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data; - if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) { + if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) { ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv4) - sizeof(struct nfp_flower_tp_ports); } else { /* IPv6 */ @@ -1595,7 +1595,7 @@ nfp_flow_merge_vxlan(struct nfp_app_fw_flower *app_fw_flower, struct nfp_flower_ext_meta *ext_meta = NULL; meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data; - if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) + if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) != 0) ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1); spec = item->spec; @@ -1607,8 +1607,8 @@ nfp_flow_merge_vxlan(struct nfp_app_fw_flower *app_fw_flower, mask = item->mask ? item->mask : proc->mask_default; hdr = is_mask ? &mask->hdr : &spec->hdr; - if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & - NFP_FLOWER_LAYER2_TUN_IPV6)) { + if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & + NFP_FLOWER_LAYER2_TUN_IPV6) != 0) { tun6 = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off; tun6->tun_id = hdr->vx_vni; if (!is_mask) @@ -1621,8 +1621,8 @@ nfp_flow_merge_vxlan(struct nfp_app_fw_flower *app_fw_flower, } vxlan_end: - if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & - NFP_FLOWER_LAYER2_TUN_IPV6)) + if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & + NFP_FLOWER_LAYER2_TUN_IPV6) != 0) *mbuf_off += sizeof(struct nfp_flower_ipv6_udp_tun); else *mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun); @@ -1649,7 +1649,7 @@ nfp_flow_merge_geneve(struct nfp_app_fw_flower *app_fw_flower, struct nfp_flower_ext_meta *ext_meta = NULL; meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data; - if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) + if ((meta_tci->nfp_flow_key_layer & 
NFP_FLOWER_LAYER_EXT_META) != 0) ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1); spec = item->spec; @@ -1661,8 +1661,8 @@ nfp_flow_merge_geneve(struct nfp_app_fw_flower *app_fw_flower, mask = item->mask ? item->mask : proc->mask_default; geneve = is_mask ? mask : spec; - if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & - NFP_FLOWER_LAYER2_TUN_IPV6)) { + if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & + NFP_FLOWER_LAYER2_TUN_IPV6) != 0) { tun6 = (struct nfp_flower_ipv6_udp_tun *)*mbuf_off; tun6->tun_id = rte_cpu_to_be_32((geneve->vni[0] << 16) | (geneve->vni[1] << 8) | (geneve->vni[2])); @@ -1677,8 +1677,8 @@ nfp_flow_merge_geneve(struct nfp_app_fw_flower *app_fw_flower, } geneve_end: - if (ext_meta && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & - NFP_FLOWER_LAYER2_TUN_IPV6)) { + if (ext_meta != NULL && (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & + NFP_FLOWER_LAYER2_TUN_IPV6) != 0) { *mbuf_off += sizeof(struct nfp_flower_ipv6_udp_tun); } else { *mbuf_off += sizeof(struct nfp_flower_ipv4_udp_tun); @@ -1705,8 +1705,8 @@ nfp_flow_merge_gre(__rte_unused struct nfp_app_fw_flower *app_fw_flower, ext_meta = (struct nfp_flower_ext_meta *)(meta_tci + 1); /* NVGRE is the only supported GRE tunnel type */ - if (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & - NFP_FLOWER_LAYER2_TUN_IPV6) { + if ((rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & + NFP_FLOWER_LAYER2_TUN_IPV6) != 0) { tun6 = (struct nfp_flower_ipv6_gre_tun *)*mbuf_off; if (is_mask) tun6->ethertype = rte_cpu_to_be_16(~0); @@ -1753,8 +1753,8 @@ nfp_flow_merge_gre_key(struct nfp_app_fw_flower *app_fw_flower, mask = item->mask ? item->mask : proc->mask_default; tun_key = is_mask ? 
*mask : *spec; - if (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & - NFP_FLOWER_LAYER2_TUN_IPV6) { + if ((rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & + NFP_FLOWER_LAYER2_TUN_IPV6) != 0) { tun6 = (struct nfp_flower_ipv6_gre_tun *)*mbuf_off; tun6->tun_key = tun_key; tun6->tun_flags = rte_cpu_to_be_16(NFP_FL_GRE_FLAG_KEY); @@ -1769,8 +1769,8 @@ nfp_flow_merge_gre_key(struct nfp_app_fw_flower *app_fw_flower, } gre_key_end: - if (rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & - NFP_FLOWER_LAYER2_TUN_IPV6) + if ((rte_be_to_cpu_32(ext_meta->nfp_flow_key_layer2) & + NFP_FLOWER_LAYER2_TUN_IPV6) != 0) *mbuf_off += sizeof(struct nfp_flower_ipv6_gre_tun); else *mbuf_off += sizeof(struct nfp_flower_ipv4_gre_tun); @@ -2115,7 +2115,7 @@ nfp_flow_compile_items(struct nfp_flower_representor *representor, sizeof(struct nfp_flower_in_port); meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data; - if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) { + if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_EXT_META) != 0) { mbuf_off_exact += sizeof(struct nfp_flower_ext_meta); mbuf_off_mask += sizeof(struct nfp_flower_ext_meta); } @@ -2558,7 +2558,7 @@ nfp_flower_add_tun_neigh_v4_decap(struct nfp_app_fw_flower *app_fw_flower, port = (struct nfp_flower_in_port *)(meta_tci + 1); eth = (struct nfp_flower_mac_mpls *)(port + 1); - if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) + if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0) ipv4 = (struct nfp_flower_ipv4 *)((char *)eth + sizeof(struct nfp_flower_mac_mpls) + sizeof(struct nfp_flower_tp_ports)); @@ -2685,7 +2685,7 @@ nfp_flower_add_tun_neigh_v6_decap(struct nfp_app_fw_flower *app_fw_flower, port = (struct nfp_flower_in_port *)(meta_tci + 1); eth = (struct nfp_flower_mac_mpls *)(port + 1); - if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) + if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0) ipv6 = (struct nfp_flower_ipv6 *)((char *)eth + sizeof(struct 
nfp_flower_mac_mpls) + sizeof(struct nfp_flower_tp_ports)); @@ -3181,7 +3181,7 @@ nfp_flow_action_tunnel_decap(struct nfp_flower_representor *repr, } meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data; - if (meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) + if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) return nfp_flower_add_tun_neigh_v4_decap(app_fw_flower, nfp_flow_meta, nfp_flow); else return nfp_flower_add_tun_neigh_v6_decap(app_fw_flower, nfp_flow_meta, nfp_flow); diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c index 66a5d6cb3a..4528417559 100644 --- a/drivers/net/nfp/nfp_rxtx.c +++ b/drivers/net/nfp/nfp_rxtx.c @@ -163,22 +163,22 @@ nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd, { struct nfp_net_hw *hw = rxq->hw; - if (!(hw->ctrl & NFP_NET_CFG_CTRL_RXCSUM)) + if ((hw->ctrl & NFP_NET_CFG_CTRL_RXCSUM) == 0) return; /* If IPv4 and IP checksum error, fail */ - if (unlikely((rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM) && - !(rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM_OK))) + if (unlikely((rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM) != 0 && + (rxd->rxd.flags & PCIE_DESC_RX_IP4_CSUM_OK) == 0)) mb->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD; else mb->ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD; /* If neither UDP nor TCP return */ - if (!(rxd->rxd.flags & PCIE_DESC_RX_TCP_CSUM) && - !(rxd->rxd.flags & PCIE_DESC_RX_UDP_CSUM)) + if ((rxd->rxd.flags & PCIE_DESC_RX_TCP_CSUM) == 0 && + (rxd->rxd.flags & PCIE_DESC_RX_UDP_CSUM) == 0) return; - if (likely(rxd->rxd.flags & PCIE_DESC_RX_L4_CSUM_OK)) + if (likely(rxd->rxd.flags & PCIE_DESC_RX_L4_CSUM_OK) != 0) mb->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD; else mb->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD; @@ -232,7 +232,7 @@ nfp_net_rx_freelist_setup(struct rte_eth_dev *dev) int i; for (i = 0; i < dev->data->nb_rx_queues; i++) { - if (nfp_net_rx_fill_freelist(dev->data->rx_queues[i]) < 0) + if (nfp_net_rx_fill_freelist(dev->data->rx_queues[i]) != 0) return -1; } 
return 0; @@ -387,7 +387,7 @@ nfp_net_parse_meta_vlan(const struct nfp_meta_parsed *meta, * to do anything. */ if ((hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN_V2) != 0) { - if (meta->vlan_layer >= 1 && meta->vlan[0].offload != 0) { + if (meta->vlan_layer > 0 && meta->vlan[0].offload != 0) { mb->vlan_tci = rte_cpu_to_le_32(meta->vlan[0].tci); mb->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED; } @@ -771,7 +771,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) } /* Filling the received mbuf with packet info */ - if (hw->rx_offset) + if (hw->rx_offset != 0) mb->data_off = RTE_PKTMBUF_HEADROOM + hw->rx_offset; else mb->data_off = RTE_PKTMBUF_HEADROOM + @@ -846,7 +846,7 @@ nfp_net_rx_queue_release_mbufs(struct nfp_net_rxq *rxq) return; for (i = 0; i < rxq->rx_count; i++) { - if (rxq->rxbufs[i].mbuf) { + if (rxq->rxbufs[i].mbuf != NULL) { rte_pktmbuf_free_seg(rxq->rxbufs[i].mbuf); rxq->rxbufs[i].mbuf = NULL; } @@ -858,7 +858,7 @@ nfp_net_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx) { struct nfp_net_rxq *rxq = dev->data->rx_queues[queue_idx]; - if (rxq) { + if (rxq != NULL) { nfp_net_rx_queue_release_mbufs(rxq); rte_eth_dma_zone_free(dev, "rx_ring", queue_idx); rte_free(rxq->rxbufs); @@ -906,7 +906,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, * Free memory prior to re-allocation if needed. 
This is the case after * calling nfp_net_stop */ - if (dev->data->rx_queues[queue_idx]) { + if (dev->data->rx_queues[queue_idx] != NULL) { nfp_net_rx_queue_release(dev, queue_idx); dev->data->rx_queues[queue_idx] = NULL; } @@ -1037,7 +1037,7 @@ nfp_net_tx_queue_release_mbufs(struct nfp_net_txq *txq) return; for (i = 0; i < txq->tx_count; i++) { - if (txq->txbufs[i].mbuf) { + if (txq->txbufs[i].mbuf != NULL) { rte_pktmbuf_free_seg(txq->txbufs[i].mbuf); txq->txbufs[i].mbuf = NULL; } @@ -1049,7 +1049,7 @@ nfp_net_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx) { struct nfp_net_txq *txq = dev->data->tx_queues[queue_idx]; - if (txq != NULL) { nfp_net_tx_queue_release_mbufs(txq); rte_eth_dma_zone_free(dev, "tx_ring", queue_idx); rte_free(txq->txbufs); From patchwork Thu Oct 12 01:26:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chaoyong He X-Patchwork-Id: 132560 X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Chaoyong He To: dev@dpdk.org Cc: oss-drivers@corigine.com, Chaoyong He , Long Wu , Peng Zhang Subject: [PATCH v2 02/11] net/nfp: unify the indent coding style Date: Thu, 12 Oct 2023 09:26:55 +0800 Message-Id: <20231012012704.483828-3-chaoyong.he@corigine.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20231012012704.483828-1-chaoyong.he@corigine.com> References: <20231007023339.1546659-1-chaoyong.he@corigine.com> <20231012012704.483828-1-chaoyong.he@corigine.com> MIME-Version: 1.0
Each parameter of a function should occupy its own line, indented by two TAB characters. Every statement that spans multiple lines should likewise indent its continuation lines by two TAB characters.
Signed-off-by: Chaoyong He Reviewed-by: Long Wu Reviewed-by: Peng Zhang --- drivers/net/nfp/flower/nfp_flower.c | 5 +- drivers/net/nfp/flower/nfp_flower_ctrl.c | 7 +- .../net/nfp/flower/nfp_flower_representor.c | 2 +- drivers/net/nfp/nfdk/nfp_nfdk.h | 2 +- drivers/net/nfp/nfdk/nfp_nfdk_dp.c | 4 +- drivers/net/nfp/nfp_common.c | 250 +++++++++--------- drivers/net/nfp/nfp_common.h | 81 ++++-- drivers/net/nfp/nfp_cpp_bridge.c | 56 ++-- drivers/net/nfp/nfp_ethdev.c | 82 +++--- drivers/net/nfp/nfp_ethdev_vf.c | 66 +++-- drivers/net/nfp/nfp_flow.c | 36 +-- drivers/net/nfp/nfp_rxtx.c | 86 +++--- drivers/net/nfp/nfp_rxtx.h | 10 +- 13 files changed, 358 insertions(+), 329 deletions(-) diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c index 3ddaf0f28d..3352693d71 100644 --- a/drivers/net/nfp/flower/nfp_flower.c +++ b/drivers/net/nfp/flower/nfp_flower.c @@ -63,7 +63,7 @@ nfp_pf_repr_disable_queues(struct rte_eth_dev *dev) new_ctrl = hw->ctrl & ~NFP_NET_CFG_CTRL_ENABLE; update = NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING | - NFP_NET_CFG_UPDATE_MSIX; + NFP_NET_CFG_UPDATE_MSIX; if (hw->cap & NFP_NET_CFG_CTRL_RINGCFG) new_ctrl &= ~NFP_NET_CFG_CTRL_RINGCFG; @@ -330,7 +330,8 @@ nfp_flower_pf_xmit_pkts(void *tx_queue, } static int -nfp_flower_init_vnic_common(struct nfp_net_hw *hw, const char *vnic_type) +nfp_flower_init_vnic_common(struct nfp_net_hw *hw, + const char *vnic_type) { int err; uint32_t start_q; diff --git a/drivers/net/nfp/flower/nfp_flower_ctrl.c b/drivers/net/nfp/flower/nfp_flower_ctrl.c index b564e7cd73..4967cc2375 100644 --- a/drivers/net/nfp/flower/nfp_flower_ctrl.c +++ b/drivers/net/nfp/flower/nfp_flower_ctrl.c @@ -64,9 +64,8 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue, */ new_mb = rte_pktmbuf_alloc(rxq->mem_pool); if (unlikely(new_mb == NULL)) { - PMD_RX_LOG(ERR, - "RX mbuf alloc failed port_id=%u queue_id=%hu", - rxq->port_id, rxq->qidx); + PMD_RX_LOG(ERR, "RX mbuf alloc failed port_id=%u queue_id=%hu", + rxq->port_id, 
rxq->qidx); nfp_net_mbuf_alloc_failed(rxq); break; } @@ -141,7 +140,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue, rte_wmb(); if (nb_hold >= rxq->rx_free_thresh) { PMD_RX_LOG(DEBUG, "port=%hu queue=%hu nb_hold=%hu avail=%hu", - rxq->port_id, rxq->qidx, nb_hold, avail); + rxq->port_id, rxq->qidx, nb_hold, avail); nfp_qcp_ptr_add(rxq->qcp_fl, NFP_QCP_WRITE_PTR, nb_hold); nb_hold = 0; } diff --git a/drivers/net/nfp/flower/nfp_flower_representor.c b/drivers/net/nfp/flower/nfp_flower_representor.c index 55ca3e6db0..01c2c5a517 100644 --- a/drivers/net/nfp/flower/nfp_flower_representor.c +++ b/drivers/net/nfp/flower/nfp_flower_representor.c @@ -826,7 +826,7 @@ nfp_flower_repr_alloc(struct nfp_app_fw_flower *app_fw_flower) snprintf(flower_repr.name, sizeof(flower_repr.name), "%s_repr_vf%d", pci_name, i); - /* This will also allocate private memory for the device*/ + /* This will also allocate private memory for the device*/ ret = rte_eth_dev_create(eth_dev->device, flower_repr.name, sizeof(struct nfp_flower_representor), NULL, NULL, nfp_flower_repr_init, &flower_repr); diff --git a/drivers/net/nfp/nfdk/nfp_nfdk.h b/drivers/net/nfp/nfdk/nfp_nfdk.h index 75ecb361ee..99675b6bd7 100644 --- a/drivers/net/nfp/nfdk/nfp_nfdk.h +++ b/drivers/net/nfp/nfdk/nfp_nfdk.h @@ -143,7 +143,7 @@ nfp_net_nfdk_free_tx_desc(struct nfp_net_txq *txq) free_desc = txq->rd_p - txq->wr_p; return (free_desc > NFDK_TX_DESC_STOP_CNT) ? 
- (free_desc - NFDK_TX_DESC_STOP_CNT) : 0; + (free_desc - NFDK_TX_DESC_STOP_CNT) : 0; } /* diff --git a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c index d4bd5edb0a..2426ffb261 100644 --- a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c +++ b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c @@ -101,9 +101,7 @@ static inline uint16_t nfp_net_nfdk_headlen_to_segs(uint16_t headlen) { /* First descriptor fits less data, so adjust for that */ - return DIV_ROUND_UP(headlen + - NFDK_TX_MAX_DATA_PER_DESC - - NFDK_TX_MAX_DATA_PER_HEAD, + return DIV_ROUND_UP(headlen + NFDK_TX_MAX_DATA_PER_DESC - NFDK_TX_MAX_DATA_PER_HEAD, NFDK_TX_MAX_DATA_PER_DESC); } diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c index 36752583dd..9719a9212b 100644 --- a/drivers/net/nfp/nfp_common.c +++ b/drivers/net/nfp/nfp_common.c @@ -172,7 +172,8 @@ nfp_net_link_speed_rte2nfp(uint16_t speed) } static void -nfp_net_notify_port_speed(struct nfp_net_hw *hw, struct rte_eth_link *link) +nfp_net_notify_port_speed(struct nfp_net_hw *hw, + struct rte_eth_link *link) { /** * Read the link status from NFP_NET_CFG_STS. If the link is down @@ -188,21 +189,22 @@ nfp_net_notify_port_speed(struct nfp_net_hw *hw, struct rte_eth_link *link) * NFP_NET_CFG_STS_NSP_LINK_RATE. 
*/ nn_cfg_writew(hw, NFP_NET_CFG_STS_NSP_LINK_RATE, - nfp_net_link_speed_rte2nfp(link->link_speed)); + nfp_net_link_speed_rte2nfp(link->link_speed)); } /* The length of firmware version string */ #define FW_VER_LEN 32 static int -__nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t update) +__nfp_net_reconfig(struct nfp_net_hw *hw, + uint32_t update) { int cnt; uint32_t new; struct timespec wait; PMD_DRV_LOG(DEBUG, "Writing to the configuration queue (%p)...", - hw->qcp_cfg); + hw->qcp_cfg); if (hw->qcp_cfg == NULL) { PMD_INIT_LOG(ERR, "Bad configuration queue pointer"); @@ -227,7 +229,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t update) } if (cnt >= NFP_NET_POLL_TIMEOUT) { PMD_INIT_LOG(ERR, "Reconfig timeout for 0x%08x after" - " %dms", update, cnt); + " %dms", update, cnt); return -EIO; } nanosleep(&wait, 0); /* waiting for a 1ms */ @@ -254,7 +256,9 @@ __nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t update) * - (EIO) if I/O err and fail to reconfigure the device. */ int -nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t ctrl, uint32_t update) +nfp_net_reconfig(struct nfp_net_hw *hw, + uint32_t ctrl, + uint32_t update) { int ret; @@ -296,7 +300,9 @@ nfp_net_reconfig(struct nfp_net_hw *hw, uint32_t ctrl, uint32_t update) * - (EIO) if I/O err and fail to reconfigure the device. 
  */
 int
-nfp_net_ext_reconfig(struct nfp_net_hw *hw, uint32_t ctrl_ext, uint32_t update)
+nfp_net_ext_reconfig(struct nfp_net_hw *hw,
+		uint32_t ctrl_ext,
+		uint32_t update)
 {
 	int ret;
 
@@ -401,7 +407,7 @@ nfp_net_configure(struct rte_eth_dev *dev)
 
 	/* Checking RX mode */
 	if ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) != 0 &&
-		(hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) == 0) {
+			(hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) == 0) {
 		PMD_INIT_LOG(INFO, "RSS not supported");
 		return -EINVAL;
 	}
@@ -409,7 +415,7 @@ nfp_net_configure(struct rte_eth_dev *dev)
 	/* Checking MTU set */
 	if (rxmode->mtu > NFP_FRAME_SIZE_MAX) {
 		PMD_INIT_LOG(ERR, "MTU (%u) larger than NFP_FRAME_SIZE_MAX (%u) not supported",
-			rxmode->mtu, NFP_FRAME_SIZE_MAX);
+				rxmode->mtu, NFP_FRAME_SIZE_MAX);
 		return -ERANGE;
 	}
 
@@ -446,7 +452,8 @@ nfp_net_log_device_information(const struct nfp_net_hw *hw)
 }
 
 static inline void
-nfp_net_enbable_rxvlan_cap(struct nfp_net_hw *hw, uint32_t *ctrl)
+nfp_net_enbable_rxvlan_cap(struct nfp_net_hw *hw,
+		uint32_t *ctrl)
 {
 	if ((hw->cap & NFP_NET_CFG_CTRL_RXVLAN_V2) != 0)
 		*ctrl |= NFP_NET_CFG_CTRL_RXVLAN_V2;
@@ -490,8 +497,9 @@ nfp_net_disable_queues(struct rte_eth_dev *dev)
 	nn_cfg_writeq(hw, NFP_NET_CFG_RXRS_ENABLE, 0);
 
 	new_ctrl = hw->ctrl & ~NFP_NET_CFG_CTRL_ENABLE;
-	update = NFP_NET_CFG_UPDATE_GEN | NFP_NET_CFG_UPDATE_RING |
-			NFP_NET_CFG_UPDATE_MSIX;
+	update = NFP_NET_CFG_UPDATE_GEN |
+			NFP_NET_CFG_UPDATE_RING |
+			NFP_NET_CFG_UPDATE_MSIX;
 
 	if ((hw->cap & NFP_NET_CFG_CTRL_RINGCFG) != 0)
 		new_ctrl &= ~NFP_NET_CFG_CTRL_RINGCFG;
@@ -517,7 +525,8 @@ nfp_net_cfg_queue_setup(struct nfp_net_hw *hw)
 }
 
 void
-nfp_net_write_mac(struct nfp_net_hw *hw, uint8_t *mac)
+nfp_net_write_mac(struct nfp_net_hw *hw,
+		uint8_t *mac)
 {
 	uint32_t mac0 = *(uint32_t *)mac;
 	uint16_t mac1;
@@ -527,20 +536,21 @@ nfp_net_write_mac(struct nfp_net_hw *hw, uint8_t *mac)
 	mac += 4;
 	mac1 = *(uint16_t *)mac;
 	nn_writew(rte_cpu_to_be_16(mac1),
-		hw->ctrl_bar + NFP_NET_CFG_MACADDR + 6);
+			hw->ctrl_bar + NFP_NET_CFG_MACADDR + 6);
 }
 
 int
-nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr)
+nfp_net_set_mac_addr(struct rte_eth_dev *dev,
+		struct rte_ether_addr *mac_addr)
 {
 	struct nfp_net_hw *hw;
 	uint32_t update, ctrl;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
-		(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) == 0) {
+			(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) == 0) {
 		PMD_INIT_LOG(INFO, "MAC address unable to change when"
-			" port enabled");
+				" port enabled");
 		return -EBUSY;
 	}
 
@@ -551,7 +561,7 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr)
 	update = NFP_NET_CFG_UPDATE_MACADDR;
 	ctrl = hw->ctrl;
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
-		(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0)
+			(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_LIVE_ADDR;
 
 	if (nfp_net_reconfig(hw, ctrl, update) != 0) {
 		PMD_INIT_LOG(INFO, "MAC address update failed");
@@ -562,15 +572,15 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr)
 
 int
 nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
-		struct rte_intr_handle *intr_handle)
+		struct rte_intr_handle *intr_handle)
 {
 	struct nfp_net_hw *hw;
 	int i;
 
 	if (rte_intr_vec_list_alloc(intr_handle, "intr_vec",
-			dev->data->nb_rx_queues) != 0) {
+			dev->data->nb_rx_queues) != 0) {
 		PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues"
-			" intr_vec", dev->data->nb_rx_queues);
+				" intr_vec", dev->data->nb_rx_queues);
 		return -ENOMEM;
 	}
 
@@ -590,12 +600,10 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 		 * efd interrupts
 		 */
 		nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(i), i + 1);
-		if (rte_intr_vec_list_index_set(intr_handle, i,
-				i + 1) != 0)
+		if (rte_intr_vec_list_index_set(intr_handle, i, i + 1) != 0)
 			return -1;
 		PMD_INIT_LOG(DEBUG, "intr_vec[%d]= %d", i,
-			rte_intr_vec_list_index_get(intr_handle,
-					i));
+				rte_intr_vec_list_index_get(intr_handle, i));
 	}
 
@@ -651,13 +659,13 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 	/* TX checksum offload */
 	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0 ||
-		(txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0 ||
-		(txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
+			(txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0 ||
+			(txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_TXCSUM;
 
 	/* LSO offload */
 	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) != 0 ||
-		(txmode->offloads & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) != 0) {
+			(txmode->offloads & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) != 0) {
 		if ((hw->cap & NFP_NET_CFG_CTRL_LSO) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_LSO;
 		else
@@ -751,7 +759,8 @@ nfp_net_promisc_disable(struct rte_eth_dev *dev)
  * status.
  */
 int
-nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
+nfp_net_link_update(struct rte_eth_dev *dev,
+		__rte_unused int wait_to_complete)
 {
 	int ret;
 	uint32_t i;
@@ -820,7 +829,8 @@ nfp_net_link_update(struct rte_eth_dev *dev, __rte_unused int wait_to_complete)
 }
 
 int
-nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
+nfp_net_stats_get(struct rte_eth_dev *dev,
+		struct rte_eth_stats *stats)
 {
 	int i;
 	struct nfp_net_hw *hw;
@@ -838,16 +848,16 @@ nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 			break;
 
 		nfp_dev_stats.q_ipackets[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i));
+				nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i));
 
 		nfp_dev_stats.q_ipackets[i] -=
-			hw->eth_stats_base.q_ipackets[i];
+				hw->eth_stats_base.q_ipackets[i];
 
 		nfp_dev_stats.q_ibytes[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8);
+				nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8);
 
 		nfp_dev_stats.q_ibytes[i] -=
-			hw->eth_stats_base.q_ibytes[i];
+				hw->eth_stats_base.q_ibytes[i];
 	}
 
 	/* reading per TX ring stats */
@@ -856,46 +866,42 @@ nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 			break;
 
 		nfp_dev_stats.q_opackets[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i));
+				nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i));
 
-		nfp_dev_stats.q_opackets[i] -=
-			hw->eth_stats_base.q_opackets[i];
+		nfp_dev_stats.q_opackets[i] -= hw->eth_stats_base.q_opackets[i];
 
 		nfp_dev_stats.q_obytes[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8);
+				nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8);
 
-		nfp_dev_stats.q_obytes[i] -=
-			hw->eth_stats_base.q_obytes[i];
+		nfp_dev_stats.q_obytes[i] -= hw->eth_stats_base.q_obytes[i];
 	}
 
-	nfp_dev_stats.ipackets =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES);
+	nfp_dev_stats.ipackets = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES);
 
 	nfp_dev_stats.ipackets -= hw->eth_stats_base.ipackets;
 
-	nfp_dev_stats.ibytes =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS);
+	nfp_dev_stats.ibytes = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS);
 
 	nfp_dev_stats.ibytes -= hw->eth_stats_base.ibytes;
 
 	nfp_dev_stats.opackets =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES);
 
 	nfp_dev_stats.opackets -= hw->eth_stats_base.opackets;
 
 	nfp_dev_stats.obytes =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS);
 
 	nfp_dev_stats.obytes -= hw->eth_stats_base.obytes;
 
 	/* reading general device stats */
 	nfp_dev_stats.ierrors =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);
 
 	nfp_dev_stats.ierrors -= hw->eth_stats_base.ierrors;
 
 	nfp_dev_stats.oerrors =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS);
 
 	nfp_dev_stats.oerrors -= hw->eth_stats_base.oerrors;
 
@@ -903,7 +909,7 @@ nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats)
 	nfp_dev_stats.rx_nombuf = dev->data->rx_mbuf_alloc_failed;
 
 	nfp_dev_stats.imissed =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS);
 
 	nfp_dev_stats.imissed -= hw->eth_stats_base.imissed;
 
@@ -933,10 +939,10 @@ nfp_net_stats_reset(struct rte_eth_dev *dev)
 			break;
 
 		hw->eth_stats_base.q_ipackets[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i));
+				nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i));
 
 		hw->eth_stats_base.q_ibytes[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8);
+				nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8);
 	}
 
 	/* reading per TX ring stats */
@@ -945,36 +951,36 @@ nfp_net_stats_reset(struct rte_eth_dev *dev)
 			break;
 
 		hw->eth_stats_base.q_opackets[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i));
+				nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i));
 
 		hw->eth_stats_base.q_obytes[i] =
-			nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8);
+				nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8);
 	}
 
 	hw->eth_stats_base.ipackets =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES);
 
 	hw->eth_stats_base.ibytes =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS);
 
 	hw->eth_stats_base.opackets =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES);
 
 	hw->eth_stats_base.obytes =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS);
 
 	/* reading general device stats */
 	hw->eth_stats_base.ierrors =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);
 
 	hw->eth_stats_base.oerrors =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS);
 
 	/* RX ring mbuf allocation failures */
 	dev->data->rx_mbuf_alloc_failed = 0;
 
 	hw->eth_stats_base.imissed =
-		nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS);
+			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS);
 
 	return 0;
 }
@@ -1237,16 +1243,16 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 
 	if ((hw->cap & NFP_NET_CFG_CTRL_RXCSUM) != 0)
 		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM |
-			RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
-			RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
+				RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+				RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
 
 	if ((hw->cap & (NFP_NET_CFG_CTRL_TXVLAN | NFP_NET_CFG_CTRL_TXVLAN_V2)) != 0)
 		dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 
 	if ((hw->cap & NFP_NET_CFG_CTRL_TXCSUM) != 0)
 		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_IPV4_CKSUM |
-			RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
-			RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
+				RTE_ETH_TX_OFFLOAD_UDP_CKSUM |
+				RTE_ETH_TX_OFFLOAD_TCP_CKSUM;
 
 	if ((hw->cap & NFP_NET_CFG_CTRL_LSO_ANY) != 0) {
 		dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_TCP_TSO;
@@ -1301,21 +1307,24 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_RSS_HASH;
 
 		dev_info->flow_type_rss_offloads = RTE_ETH_RSS_IPV4 |
-			RTE_ETH_RSS_NONFRAG_IPV4_TCP |
-			RTE_ETH_RSS_NONFRAG_IPV4_UDP |
-			RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
-			RTE_ETH_RSS_IPV6 |
-			RTE_ETH_RSS_NONFRAG_IPV6_TCP |
-			RTE_ETH_RSS_NONFRAG_IPV6_UDP |
-			RTE_ETH_RSS_NONFRAG_IPV6_SCTP;
+				RTE_ETH_RSS_NONFRAG_IPV4_TCP |
+				RTE_ETH_RSS_NONFRAG_IPV4_UDP |
+				RTE_ETH_RSS_NONFRAG_IPV4_SCTP |
+				RTE_ETH_RSS_IPV6 |
+				RTE_ETH_RSS_NONFRAG_IPV6_TCP |
+				RTE_ETH_RSS_NONFRAG_IPV6_UDP |
+				RTE_ETH_RSS_NONFRAG_IPV6_SCTP;
 
 		dev_info->reta_size = NFP_NET_CFG_RSS_ITBL_SZ;
 		dev_info->hash_key_size = NFP_NET_CFG_RSS_KEY_SZ;
 	}
 
-	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G | RTE_ETH_LINK_SPEED_10G |
-			RTE_ETH_LINK_SPEED_25G | RTE_ETH_LINK_SPEED_40G |
-			RTE_ETH_LINK_SPEED_50G | RTE_ETH_LINK_SPEED_100G;
+	dev_info->speed_capa = RTE_ETH_LINK_SPEED_1G |
+			RTE_ETH_LINK_SPEED_10G |
+			RTE_ETH_LINK_SPEED_25G |
+			RTE_ETH_LINK_SPEED_40G |
+			RTE_ETH_LINK_SPEED_50G |
+			RTE_ETH_LINK_SPEED_100G;
 
 	return 0;
 }
@@ -1384,7 +1393,8 @@ nfp_net_supported_ptypes_get(struct rte_eth_dev *dev)
 }
 
 int
-nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
+nfp_rx_queue_intr_enable(struct rte_eth_dev *dev,
+		uint16_t queue_id)
 {
 	struct rte_pci_device *pci_dev;
 	struct nfp_net_hw *hw;
@@ -1393,19 +1403,19 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id)
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 
-	if (rte_intr_type_get(pci_dev->intr_handle) !=
-			RTE_INTR_HANDLE_UIO)
+	if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO)
 		base = 1;
 
 	/* Make sure all updates are written before un-masking */
 	rte_wmb();
 	nn_cfg_writeb(hw, NFP_NET_CFG_ICR(base + queue_id),
-		NFP_NET_CFG_ICR_UNMASKED);
+			NFP_NET_CFG_ICR_UNMASKED);
 	return 0;
 }
 
 int
-nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
+nfp_rx_queue_intr_disable(struct rte_eth_dev *dev,
+		uint16_t queue_id)
 {
 	struct rte_pci_device *pci_dev;
 	struct nfp_net_hw *hw;
@@ -1414,8 +1424,7 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id)
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 
-	if (rte_intr_type_get(pci_dev->intr_handle) !=
-			RTE_INTR_HANDLE_UIO)
+	if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO)
 		base = 1;
 
 	/* Make sure all updates are written before un-masking */
@@ -1433,16 +1442,15 @@ nfp_net_dev_link_status_print(struct rte_eth_dev *dev)
 	rte_eth_linkstatus_get(dev, &link);
 	if (link.link_status != 0)
 		PMD_DRV_LOG(INFO, "Port %d: Link Up - speed %u Mbps - %s",
-			dev->data->port_id, link.link_speed,
-			link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX
-			? "full-duplex" : "half-duplex");
+				dev->data->port_id, link.link_speed,
+				link.link_duplex == RTE_ETH_LINK_FULL_DUPLEX ?
+				"full-duplex" : "half-duplex");
 	else
-		PMD_DRV_LOG(INFO, " Port %d: Link Down",
-			dev->data->port_id);
+		PMD_DRV_LOG(INFO, " Port %d: Link Down", dev->data->port_id);
 
 	PMD_DRV_LOG(INFO, "PCI Address: " PCI_PRI_FMT,
-		pci_dev->addr.domain, pci_dev->addr.bus,
-		pci_dev->addr.devid, pci_dev->addr.function);
+			pci_dev->addr.domain, pci_dev->addr.bus,
+			pci_dev->addr.devid, pci_dev->addr.function);
 }
 
 /* Interrupt configuration and handling */
@@ -1470,7 +1478,7 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev)
 		/* Make sure all updates are written before un-masking */
 		rte_wmb();
 		nn_cfg_writeb(hw, NFP_NET_CFG_ICR(NFP_NET_IRQ_LSC_IDX),
-			NFP_NET_CFG_ICR_UNMASKED);
+				NFP_NET_CFG_ICR_UNMASKED);
 	}
 }
 
@@ -1523,8 +1531,8 @@ nfp_net_dev_interrupt_handler(void *param)
 	}
 
 	if (rte_eal_alarm_set(timeout * 1000,
-			nfp_net_dev_interrupt_delayed_handler,
-			(void *)dev) != 0) {
+			nfp_net_dev_interrupt_delayed_handler,
+			(void *)dev) != 0) {
 		PMD_INIT_LOG(ERR, "Error setting alarm");
 		/* Unmasking */
 		nfp_net_irq_unmask(dev);
@@ -1532,7 +1540,8 @@ nfp_net_dev_interrupt_handler(void *param)
 }
 
 int
-nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
+nfp_net_dev_mtu_set(struct rte_eth_dev *dev,
+		uint16_t mtu)
 {
 	struct nfp_net_hw *hw;
 
@@ -1541,14 +1550,14 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 	/* mtu setting is forbidden if port is started */
 	if (dev->data->dev_started) {
 		PMD_DRV_LOG(ERR, "port %d must be stopped before configuration",
-			dev->data->port_id);
+				dev->data->port_id);
 		return -EBUSY;
 	}
 
 	/* MTU larger than current mbufsize not supported */
 	if (mtu > hw->flbufsz) {
 		PMD_DRV_LOG(ERR, "MTU (%u) larger than current mbufsize (%u) not supported",
-			mtu, hw->flbufsz);
+				mtu, hw->flbufsz);
 		return -ERANGE;
 	}
 
@@ -1561,7 +1570,8 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu)
 }
 
 int
-nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask)
+nfp_net_vlan_offload_set(struct rte_eth_dev *dev,
+		int mask)
 {
 	uint32_t new_ctrl, update;
 	struct nfp_net_hw *hw;
@@ -1606,8 +1616,8 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask)
 
 static int
 nfp_net_rss_reta_write(struct rte_eth_dev *dev,
-		struct rte_eth_rss_reta_entry64 *reta_conf,
-		uint16_t reta_size)
+		struct rte_eth_rss_reta_entry64 *reta_conf,
+		uint16_t reta_size)
 {
 	uint32_t reta, mask;
 	int i, j;
@@ -1617,8 +1627,8 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 
 	if (reta_size != NFP_NET_CFG_RSS_ITBL_SZ) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
-			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ);
+				"(%d) doesn't match the number hardware can supported "
+				"(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ);
 		return -EINVAL;
 	}
 
@@ -1648,8 +1658,7 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 			reta &= ~(0xFF << (8 * j));
 			reta |= reta_conf[idx].reta[shift + j] << (8 * j);
 		}
-		nn_cfg_writel(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) + shift,
-				reta);
+		nn_cfg_writel(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) + shift, reta);
 	}
 	return 0;
 }
@@ -1657,8 +1666,8 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 /* Update Redirection Table(RETA) of Receive Side Scaling of Ethernet device */
 int
 nfp_net_reta_update(struct rte_eth_dev *dev,
-		struct rte_eth_rss_reta_entry64 *reta_conf,
-		uint16_t reta_size)
+		struct rte_eth_rss_reta_entry64 *reta_conf,
+		uint16_t reta_size)
 {
 	struct nfp_net_hw *hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -1683,8 +1692,8 @@ nfp_net_reta_update(struct rte_eth_dev *dev,
 
 /* Query Redirection Table(RETA) of Receive Side Scaling of Ethernet device.
  */
 int
 nfp_net_reta_query(struct rte_eth_dev *dev,
-		struct rte_eth_rss_reta_entry64 *reta_conf,
-		uint16_t reta_size)
+		struct rte_eth_rss_reta_entry64 *reta_conf,
+		uint16_t reta_size)
 {
 	uint8_t i, j, mask;
 	int idx, shift;
@@ -1698,8 +1707,8 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 
 	if (reta_size != NFP_NET_CFG_RSS_ITBL_SZ) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
-			"(%d) doesn't match the number hardware can supported "
-			"(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ);
+				"(%d) doesn't match the number hardware can supported "
+				"(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ);
 		return -EINVAL;
	}
 
@@ -1716,13 +1725,12 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 		if (mask == 0)
 			continue;
 
-		reta = nn_cfg_readl(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) +
-				shift);
+		reta = nn_cfg_readl(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) + shift);
 		for (j = 0; j < 4; j++) {
 			if ((mask & (0x1 << j)) == 0)
 				continue;
 			reta_conf[idx].reta[shift + j] =
-				(uint8_t)((reta >> (8 * j)) & 0xF);
+					(uint8_t)((reta >> (8 * j)) & 0xF);
 		}
 	}
 	return 0;
@@ -1730,7 +1738,7 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 
 static int
 nfp_net_rss_hash_write(struct rte_eth_dev *dev,
-		struct rte_eth_rss_conf *rss_conf)
+		struct rte_eth_rss_conf *rss_conf)
 {
 	struct nfp_net_hw *hw;
 	uint64_t rss_hf;
@@ -1786,7 +1794,7 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,
 
 int
 nfp_net_rss_hash_update(struct rte_eth_dev *dev,
-		struct rte_eth_rss_conf *rss_conf)
+		struct rte_eth_rss_conf *rss_conf)
 {
 	uint32_t update;
 	uint64_t rss_hf;
@@ -1822,7 +1830,7 @@ nfp_net_rss_hash_update(struct rte_eth_dev *dev,
 
 int
 nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
-		struct rte_eth_rss_conf *rss_conf)
+		struct rte_eth_rss_conf *rss_conf)
 {
 	uint64_t rss_hf;
 	uint32_t cfg_rss_ctrl;
@@ -1888,7 +1896,7 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev)
 	int i, j, ret;
 
 	PMD_DRV_LOG(INFO, "setting default RSS conf for %u queues",
-		rx_queues);
+			rx_queues);
 
 	nfp_reta_conf[0].mask = ~0x0;
 	nfp_reta_conf[1].mask = ~0x0;
@@ -1984,7 +1992,7 @@ nfp_net_set_vxlan_port(struct nfp_net_hw *hw,
 
 	for (i = 0; i < NFP_NET_N_VXLAN_PORTS; i += 2) {
 		nn_cfg_writel(hw, NFP_NET_CFG_VXLAN_PORT + i * sizeof(port),
-			(hw->vxlan_ports[i + 1] << 16) | hw->vxlan_ports[i]);
+				(hw->vxlan_ports[i + 1] << 16) | hw->vxlan_ports[i]);
 	}
 
 	rte_spinlock_lock(&hw->reconfig_lock);
@@ -2004,7 +2012,8 @@ nfp_net_set_vxlan_port(struct nfp_net_hw *hw,
 * than 40 bits
 */
 int
-nfp_net_check_dma_mask(struct nfp_net_hw *hw, char *name)
+nfp_net_check_dma_mask(struct nfp_net_hw *hw,
+		char *name)
 {
 	if (hw->ver.extend == NFP_NET_CFG_VERSION_DP_NFD3 &&
 			rte_mem_check_dma_mask(40) != 0) {
@@ -2052,7 +2061,8 @@ nfp_net_cfg_read_version(struct nfp_net_hw *hw)
 }
 
 static void
-nfp_net_get_nsp_info(struct nfp_net_hw *hw, char *nsp_version)
+nfp_net_get_nsp_info(struct nfp_net_hw *hw,
+		char *nsp_version)
 {
 	struct nfp_nsp *nsp;
 
@@ -2068,7 +2078,8 @@ nfp_net_get_nsp_info(struct nfp_net_hw *hw, char *nsp_version)
 }
 
 static void
-nfp_net_get_mip_name(struct nfp_net_hw *hw, char *mip_name)
+nfp_net_get_mip_name(struct nfp_net_hw *hw,
+		char *mip_name)
 {
 	struct nfp_mip *mip;
 
@@ -2082,7 +2093,8 @@ nfp_net_get_mip_name(struct nfp_net_hw *hw, char *mip_name)
 }
 
 static void
-nfp_net_get_app_name(struct nfp_net_hw *hw, char *app_name)
+nfp_net_get_app_name(struct nfp_net_hw *hw,
+		char *app_name)
 {
 	switch (hw->pf_dev->app_fw_id) {
 	case NFP_APP_FW_CORE_NIC:
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index bc3a948231..e4fd394868 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -180,37 +180,47 @@ struct nfp_net_adapter {
 	struct nfp_net_hw hw;
 };
 
-static inline uint8_t nn_readb(volatile const void *addr)
+static inline uint8_t
+nn_readb(volatile const void *addr)
 {
 	return rte_read8(addr);
 }
 
-static inline void nn_writeb(uint8_t val, volatile void *addr)
+static inline void
+nn_writeb(uint8_t val,
+		volatile void *addr)
 {
 	rte_write8(val, addr);
 }
 
-static inline uint32_t nn_readl(volatile const void *addr)
+static inline uint32_t
+nn_readl(volatile const void *addr)
 {
 	return rte_read32(addr);
 }
 
-static inline void nn_writel(uint32_t val, volatile void *addr)
+static inline void
+nn_writel(uint32_t val,
+		volatile void *addr)
 {
 	rte_write32(val, addr);
 }
 
-static inline uint16_t nn_readw(volatile const void *addr)
+static inline uint16_t
+nn_readw(volatile const void *addr)
 {
 	return rte_read16(addr);
 }
 
-static inline void nn_writew(uint16_t val, volatile void *addr)
+static inline void
+nn_writew(uint16_t val,
+		volatile void *addr)
 {
 	rte_write16(val, addr);
 }
 
-static inline uint64_t nn_readq(volatile void *addr)
+static inline uint64_t
+nn_readq(volatile void *addr)
 {
 	const volatile uint32_t *p = addr;
 	uint32_t low, high;
@@ -221,7 +231,9 @@ static inline uint64_t nn_readq(volatile void *addr)
 	return low + ((uint64_t)high << 32);
 }
 
-static inline void nn_writeq(uint64_t val, volatile void *addr)
+static inline void
+nn_writeq(uint64_t val,
+		volatile void *addr)
 {
 	nn_writel(val >> 32, (volatile char *)addr + 4);
 	nn_writel(val, addr);
@@ -232,49 +244,61 @@ static inline void nn_writeq(uint64_t val, volatile void *addr)
 * Performs any endian conversion necessary.
 */
 static inline uint8_t
-nn_cfg_readb(struct nfp_net_hw *hw, int off)
+nn_cfg_readb(struct nfp_net_hw *hw,
+		int off)
 {
 	return nn_readb(hw->ctrl_bar + off);
 }
 
 static inline void
-nn_cfg_writeb(struct nfp_net_hw *hw, int off, uint8_t val)
+nn_cfg_writeb(struct nfp_net_hw *hw,
+		int off,
+		uint8_t val)
 {
 	nn_writeb(val, hw->ctrl_bar + off);
 }
 
 static inline uint16_t
-nn_cfg_readw(struct nfp_net_hw *hw, int off)
+nn_cfg_readw(struct nfp_net_hw *hw,
+		int off)
 {
 	return rte_le_to_cpu_16(nn_readw(hw->ctrl_bar + off));
 }
 
 static inline void
-nn_cfg_writew(struct nfp_net_hw *hw, int off, uint16_t val)
+nn_cfg_writew(struct nfp_net_hw *hw,
+		int off,
+		uint16_t val)
 {
 	nn_writew(rte_cpu_to_le_16(val), hw->ctrl_bar + off);
 }
 
 static inline uint32_t
-nn_cfg_readl(struct nfp_net_hw *hw, int off)
+nn_cfg_readl(struct nfp_net_hw *hw,
+		int off)
 {
 	return rte_le_to_cpu_32(nn_readl(hw->ctrl_bar + off));
 }
 
 static inline void
-nn_cfg_writel(struct nfp_net_hw *hw, int off, uint32_t val)
+nn_cfg_writel(struct nfp_net_hw *hw,
+		int off,
+		uint32_t val)
 {
 	nn_writel(rte_cpu_to_le_32(val), hw->ctrl_bar + off);
 }
 
 static inline uint64_t
-nn_cfg_readq(struct nfp_net_hw *hw, int off)
+nn_cfg_readq(struct nfp_net_hw *hw,
+		int off)
 {
 	return rte_le_to_cpu_64(nn_readq(hw->ctrl_bar + off));
 }
 
 static inline void
-nn_cfg_writeq(struct nfp_net_hw *hw, int off, uint64_t val)
+nn_cfg_writeq(struct nfp_net_hw *hw,
+		int off,
+		uint64_t val)
 {
 	nn_writeq(rte_cpu_to_le_64(val), hw->ctrl_bar + off);
 }
 
@@ -286,7 +310,9 @@ nn_cfg_writeq(struct nfp_net_hw *hw, int off, uint64_t val)
 * @val: Value to add to the queue pointer
 */
 static inline void
-nfp_qcp_ptr_add(uint8_t *q, enum nfp_qcp_ptr ptr, uint32_t val)
+nfp_qcp_ptr_add(uint8_t *q,
+		enum nfp_qcp_ptr ptr,
+		uint32_t val)
 {
 	uint32_t off;
 
@@ -304,7 +330,8 @@ nfp_qcp_ptr_add(uint8_t *q, enum nfp_qcp_ptr ptr, uint32_t val)
 * @ptr: Read or Write pointer
 */
 static inline uint32_t
-nfp_qcp_read(uint8_t *q, enum nfp_qcp_ptr ptr)
+nfp_qcp_read(uint8_t *q,
+		enum nfp_qcp_ptr ptr)
 {
 	uint32_t off;
 	uint32_t val;
@@ -343,12 +370,12 @@ void nfp_net_params_setup(struct nfp_net_hw *hw);
 void nfp_net_write_mac(struct nfp_net_hw *hw, uint8_t *mac);
 int nfp_net_set_mac_addr(struct rte_eth_dev *dev, struct rte_ether_addr *mac_addr);
 int nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
-		struct rte_intr_handle *intr_handle);
+		struct rte_intr_handle *intr_handle);
 uint32_t nfp_check_offloads(struct rte_eth_dev *dev);
 int nfp_net_promisc_enable(struct rte_eth_dev *dev);
 int nfp_net_promisc_disable(struct rte_eth_dev *dev);
 int nfp_net_link_update(struct rte_eth_dev *dev,
-		__rte_unused int wait_to_complete);
+		__rte_unused int wait_to_complete);
 int nfp_net_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats);
 int nfp_net_stats_reset(struct rte_eth_dev *dev);
 uint32_t nfp_net_xstats_size(const struct rte_eth_dev *dev);
@@ -368,7 +395,7 @@ int nfp_net_xstats_get_by_id(struct rte_eth_dev *dev,
 		unsigned int n);
 int nfp_net_xstats_reset(struct rte_eth_dev *dev);
 int nfp_net_infos_get(struct rte_eth_dev *dev,
-		struct rte_eth_dev_info *dev_info);
+		struct rte_eth_dev_info *dev_info);
 const uint32_t *nfp_net_supported_ptypes_get(struct rte_eth_dev *dev);
 int nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, uint16_t queue_id);
 int nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, uint16_t queue_id);
@@ -379,15 +406,15 @@ void nfp_net_dev_interrupt_delayed_handler(void *param);
 int nfp_net_dev_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 int nfp_net_vlan_offload_set(struct rte_eth_dev *dev, int mask);
 int nfp_net_reta_update(struct rte_eth_dev *dev,
-		struct rte_eth_rss_reta_entry64 *reta_conf,
-		uint16_t reta_size);
+		struct rte_eth_rss_reta_entry64 *reta_conf,
+		uint16_t reta_size);
 int nfp_net_reta_query(struct rte_eth_dev *dev,
-		struct rte_eth_rss_reta_entry64 *reta_conf,
-		uint16_t reta_size);
+		struct rte_eth_rss_reta_entry64 *reta_conf,
+		uint16_t reta_size);
 int nfp_net_rss_hash_update(struct rte_eth_dev *dev,
-		struct rte_eth_rss_conf *rss_conf);
+		struct rte_eth_rss_conf *rss_conf);
 int nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
-		struct rte_eth_rss_conf *rss_conf);
+		struct rte_eth_rss_conf *rss_conf);
 int nfp_net_rss_config_default(struct rte_eth_dev *dev);
 void nfp_net_stop_rx_queue(struct rte_eth_dev *dev);
 void nfp_net_close_rx_queue(struct rte_eth_dev *dev);
diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c
index 34764a8a32..85a8bf9235 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.c
+++ b/drivers/net/nfp/nfp_cpp_bridge.c
@@ -116,7 +116,8 @@ nfp_enable_cpp_service(struct nfp_pf_dev *pf_dev)
 * of CPP interface handler configured by the PMD setup.
 */
 static int
-nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)
+nfp_cpp_bridge_serve_write(int sockfd,
+		struct nfp_cpp *cpp)
 {
 	struct nfp_cpp_area *area;
 	off_t offset, nfp_offset;
@@ -126,7 +127,7 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)
 	int err = 0;
 
 	PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__,
-		sizeof(off_t), sizeof(size_t));
+			sizeof(off_t), sizeof(size_t));
 
 	/* Reading the count param */
 	err = recv(sockfd, &count, sizeof(off_t), 0);
@@ -145,21 +146,21 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)
 	nfp_offset = offset & ((1ull << 40) - 1);
 
 	PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd\n", __func__, count,
-		offset);
+			offset);
 	PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd\n", __func__,
-		cpp_id, nfp_offset);
+			cpp_id, nfp_offset);
 
 	/* Adjust length if not aligned */
 	if (((nfp_offset + (off_t)count - 1) & ~(NFP_CPP_MEMIO_BOUNDARY - 1)) !=
-		(nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) {
+			(nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) {
 		curlen = NFP_CPP_MEMIO_BOUNDARY -
-			(nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1));
+				(nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1));
 	}
 
 	while (count > 0) {
 		/* configure a CPP PCIe2CPP BAR for mapping the CPP target */
 		area = nfp_cpp_area_alloc_with_name(cpp, cpp_id, "nfp.cdev",
-			nfp_offset, curlen);
+				nfp_offset, curlen);
 		if (area == NULL) {
 			PMD_CPP_LOG(ERR, "area alloc fail");
 			return -EIO;
@@ -179,12 +180,11 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)
 			len = sizeof(tmpbuf);
 
 			PMD_CPP_LOG(DEBUG, "%s: Receive %u of %zu\n", __func__,
-				len, count);
+					len, count);
 			err = recv(sockfd, tmpbuf, len, MSG_WAITALL);
 			if (err != (int)len) {
-				PMD_CPP_LOG(ERR,
-					"error when receiving, %d of %zu",
-					err, count);
+				PMD_CPP_LOG(ERR, "error when receiving, %d of %zu",
+						err, count);
 				nfp_cpp_area_release(area);
 				nfp_cpp_area_free(area);
 				return -EIO;
@@ -204,7 +204,7 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)
 		count -= pos;
 
 		curlen = (count > NFP_CPP_MEMIO_BOUNDARY) ?
-			NFP_CPP_MEMIO_BOUNDARY : count;
+				NFP_CPP_MEMIO_BOUNDARY : count;
 	}
 
 	return 0;
@@ -217,7 +217,8 @@ nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp)
 * data is sent to the requester using the same socket.
 */
 static int
-nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)
+nfp_cpp_bridge_serve_read(int sockfd,
+		struct nfp_cpp *cpp)
 {
 	struct nfp_cpp_area *area;
 	off_t offset, nfp_offset;
@@ -227,7 +228,7 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)
 	int err = 0;
 
 	PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__,
-		sizeof(off_t), sizeof(size_t));
+			sizeof(off_t), sizeof(size_t));
 
 	/* Reading the count param */
 	err = recv(sockfd, &count, sizeof(off_t), 0);
@@ -246,20 +247,20 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)
 	nfp_offset = offset & ((1ull << 40) - 1);
 
 	PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd\n", __func__, count,
-		offset);
+			offset);
 	PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd\n", __func__,
-		cpp_id, nfp_offset);
+			cpp_id, nfp_offset);
 
 	/* Adjust length if not aligned */
 	if (((nfp_offset + (off_t)count - 1) & ~(NFP_CPP_MEMIO_BOUNDARY - 1)) !=
-		(nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) {
+			(nfp_offset & ~(NFP_CPP_MEMIO_BOUNDARY - 1))) {
 		curlen = NFP_CPP_MEMIO_BOUNDARY -
-			(nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1));
+				(nfp_offset & (NFP_CPP_MEMIO_BOUNDARY - 1));
 	}
 
 	while (count > 0) {
 		area = nfp_cpp_area_alloc_with_name(cpp, cpp_id, "nfp.cdev",
-			nfp_offset, curlen);
+				nfp_offset, curlen);
 		if (area == NULL) {
 			PMD_CPP_LOG(ERR, "area alloc failed");
 			return -EIO;
@@ -285,13 +286,12 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)
 				return -EIO;
 			}
 			PMD_CPP_LOG(DEBUG, "%s: sending %u of %zu\n", __func__,
-				len, count);
+					len, count);
 
 			err = send(sockfd, tmpbuf, len, 0);
 			if (err != (int)len) {
-				PMD_CPP_LOG(ERR,
-					"error when sending: %d of %zu",
-					err, count);
+				PMD_CPP_LOG(ERR, "error when sending: %d of %zu",
+						err, count);
 				nfp_cpp_area_release(area);
 				nfp_cpp_area_free(area);
 				return -EIO;
@@ -304,7 +304,7 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)
 		count -= pos;
 
 		curlen = (count > NFP_CPP_MEMIO_BOUNDARY) ?
-			NFP_CPP_MEMIO_BOUNDARY : count;
+				NFP_CPP_MEMIO_BOUNDARY : count;
 	}
 	return 0;
 }
@@ -316,7 +316,8 @@ nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp)
 * does not require any CPP access at all.
 */
 static int
-nfp_cpp_bridge_serve_ioctl(int sockfd, struct nfp_cpp *cpp)
+nfp_cpp_bridge_serve_ioctl(int sockfd,
+		struct nfp_cpp *cpp)
 {
 	uint32_t cmd, ident_size, tmp;
 	int err;
@@ -395,7 +396,7 @@ nfp_cpp_bridge_service_func(void *args)
 	strcpy(address.sa_data, "/tmp/nfp_cpp");
 
 	ret = bind(sockfd, (const struct sockaddr *)&address,
-		sizeof(struct sockaddr));
+			sizeof(struct sockaddr));
 	if (ret < 0) {
 		PMD_CPP_LOG(ERR, "bind error (%d). Service failed", errno);
 		close(sockfd);
@@ -426,8 +427,7 @@ nfp_cpp_bridge_service_func(void *args)
 		while (1) {
 			ret = recv(datafd, &op, 4, 0);
 			if (ret <= 0) {
-				PMD_CPP_LOG(DEBUG, "%s: socket close\n",
-					__func__);
+				PMD_CPP_LOG(DEBUG, "%s: socket close\n", __func__);
 				break;
 			}
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 12feec8eb4..65473d87e8 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -22,7 +22,8 @@
 #include "nfp_logs.h"
 
 static int
-nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic, int port)
+nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic,
+		int port)
 {
 	struct nfp_eth_table *nfp_eth_table;
 	struct nfp_net_hw *hw = NULL;
@@ -70,21 +71,20 @@ nfp_net_start(struct rte_eth_dev *dev)
 	if (dev->data->dev_conf.intr_conf.rxq != 0) {
 		if (app_fw_nic->multiport) {
 			PMD_INIT_LOG(ERR, "PMD rx interrupt is not supported "
-				"with NFP multiport PF");
+					"with NFP multiport PF");
 			return -EINVAL;
 		}
 
-		if (rte_intr_type_get(intr_handle) ==
-				RTE_INTR_HANDLE_UIO) {
+		if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
 			/*
 			 * Better not to share LSC with RX interrupts.
 			 * Unregistering LSC interrupt handler
 			 */
 			rte_intr_callback_unregister(pci_dev->intr_handle,
-				nfp_net_dev_interrupt_handler, (void *)dev);
+					nfp_net_dev_interrupt_handler, (void *)dev);
 
 			if (dev->data->nb_rx_queues > 1) {
 				PMD_INIT_LOG(ERR, "PMD rx interrupt only "
-					"supports 1 queue with UIO");
+						"supports 1 queue with UIO");
 				return -EIO;
 			}
 		}
@@ -162,8 +162,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 		/* Configure the physical port up */
 		nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 1);
 	else
-		nfp_eth_set_configured(dev->process_private,
-			hw->nfp_idx, 1);
+		nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 1);
 
 	hw->ctrl = new_ctrl;
 
@@ -209,8 +208,7 @@ nfp_net_stop(struct rte_eth_dev *dev)
 		/* Configure the physical port down */
 		nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 0);
 	else
-		nfp_eth_set_configured(dev->process_private,
-			hw->nfp_idx, 0);
+		nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 0);
 
 	return 0;
 }
@@ -229,8 +227,7 @@ nfp_net_set_link_up(struct rte_eth_dev *dev)
 		/* Configure the physical port down */
 		return nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 1);
 	else
-		return nfp_eth_set_configured(dev->process_private,
-			hw->nfp_idx, 1);
+		return nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 1);
 }
 
 /* Set the link down. */
@@ -247,8 +244,7 @@ nfp_net_set_link_down(struct rte_eth_dev *dev)
 		/* Configure the physical port down */
 		return nfp_eth_set_configured(hw->cpp, hw->nfp_idx, 0);
 	else
-		return nfp_eth_set_configured(dev->process_private,
-			hw->nfp_idx, 0);
+		return nfp_eth_set_configured(dev->process_private, hw->nfp_idx, 0);
 }
 
 /* Reset and stop device. The device can not be restarted.
 */
@@ -287,8 +283,7 @@ nfp_net_close(struct rte_eth_dev *dev)
 	nfp_ipsec_uninit(dev);
 
 	/* Cancel possible impending LSC work here before releasing the port*/
-	rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler,
-		(void *)dev);
+	rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler, (void *)dev);
 
 	/* Only free PF resources after all physical ports have been closed */
 	/* Mark this port as unused and free device priv resources*/
@@ -525,8 +520,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 
 	hw->ctrl_bar = pci_dev->mem_resource[0].addr;
 	if (hw->ctrl_bar == NULL) {
-		PMD_DRV_LOG(ERR,
-			"hw->ctrl_bar is NULL. BAR0 not configured");
+		PMD_DRV_LOG(ERR, "hw->ctrl_bar is NULL. BAR0 not configured");
 		return -ENODEV;
 	}
 
@@ -592,7 +586,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	eth_dev->data->dev_private = hw;
 
 	PMD_INIT_LOG(DEBUG, "ctrl_bar: %p, tx_bar: %p, rx_bar: %p",
-		hw->ctrl_bar, hw->tx_bar, hw->rx_bar);
+			hw->ctrl_bar, hw->tx_bar, hw->rx_bar);
 
 	nfp_net_cfg_queue_setup(hw);
 	hw->mtu = RTE_ETHER_MTU;
@@ -607,8 +601,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	rte_spinlock_init(&hw->reconfig_lock);
 
 	/* Allocating memory for mac addr */
-	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr",
-		RTE_ETHER_ADDR_LEN, 0);
+	eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", RTE_ETHER_ADDR_LEN, 0);
 	if (eth_dev->data->mac_addrs == NULL) {
 		PMD_INIT_LOG(ERR, "Failed to space for MAC address");
 		return -ENOMEM;
@@ -634,10 +627,10 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
 
 	PMD_INIT_LOG(INFO, "port %d VendorID=0x%x DeviceID=0x%x "
-		"mac=" RTE_ETHER_ADDR_PRT_FMT,
-		eth_dev->data->port_id, pci_dev->id.vendor_id,
-		pci_dev->id.device_id,
-		RTE_ETHER_ADDR_BYTES(&hw->mac_addr));
+			"mac=" RTE_ETHER_ADDR_PRT_FMT,
+			eth_dev->data->port_id, pci_dev->id.vendor_id,
+			pci_dev->id.device_id,
+			RTE_ETHER_ADDR_BYTES(&hw->mac_addr));
 
 	/* Registering LSC interrupt handler */
 	rte_intr_callback_register(pci_dev->intr_handle,
@@ -653,7 +646,9 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 
 #define DEFAULT_FW_PATH       "/lib/firmware/netronome"
 
 static int
-nfp_fw_upload(struct rte_pci_device *dev, struct nfp_nsp *nsp, char *card)
+nfp_fw_upload(struct rte_pci_device *dev,
+		struct nfp_nsp *nsp,
+		char *card)
 {
 	struct nfp_cpp *cpp = nfp_nsp_cpp(nsp);
 	void *fw_buf;
@@ -675,11 +670,10 @@ nfp_fw_upload(struct rte_pci_device *dev, struct nfp_nsp *nsp, char *card)
 
 	/* First try to find a firmware image specific for this device */
 	snprintf(serial, sizeof(serial),
 			"serial-%02x-%02x-%02x-%02x-%02x-%02x-%02x-%02x",
-		cpp_serial[0], cpp_serial[1], cpp_serial[2], cpp_serial[3],
-		cpp_serial[4], cpp_serial[5], interface >> 8, interface & 0xff);
+			cpp_serial[0], cpp_serial[1], cpp_serial[2], cpp_serial[3],
+			cpp_serial[4], cpp_serial[5], interface >> 8, interface & 0xff);
 
-	snprintf(fw_name, sizeof(fw_name), "%s/%s.nffw", DEFAULT_FW_PATH,
-		serial);
+	snprintf(fw_name, sizeof(fw_name), "%s/%s.nffw", DEFAULT_FW_PATH, serial);
 	PMD_DRV_LOG(DEBUG, "Trying with fw file: %s", fw_name);
 	if (rte_firmware_read(fw_name, &fw_buf, &fsize) == 0)
@@ -703,7 +697,7 @@ nfp_fw_upload(struct rte_pci_device *dev, struct nfp_nsp *nsp, char *card)
 
 load_fw:
 	PMD_DRV_LOG(INFO, "Firmware file found at %s with size: %zu",
-		fw_name, fsize);
+			fw_name, fsize);
 	PMD_DRV_LOG(INFO, "Uploading the firmware ...");
 	nfp_nsp_load_fw(nsp, fw_buf, fsize);
 	PMD_DRV_LOG(INFO, "Done");
@@ -737,7 +731,7 @@ nfp_fw_setup(struct rte_pci_device *dev,
 
 	if (nfp_eth_table->count == 0 || nfp_eth_table->count > 8) {
 		PMD_DRV_LOG(ERR, "NFP ethernet table reports wrong ports: %u",
-			nfp_eth_table->count);
+				nfp_eth_table->count);
 		return -EIO;
 	}
 
@@ -829,7 +823,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 	numa_node = rte_socket_id();
 	for (i = 0; i < app_fw_nic->total_phyports; i++) {
 		snprintf(port_name, sizeof(port_name), "%s_port%d",
-			pf_dev->pci_dev->device.name, i);
+				pf_dev->pci_dev->device.name, i);
 
 		/* Allocate a eth_dev for this phyport */
 		eth_dev =
rte_eth_dev_allocate(port_name); @@ -839,8 +833,8 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev, } /* Allocate memory for this phyport */ - eth_dev->data->dev_private = - rte_zmalloc_socket(port_name, sizeof(struct nfp_net_hw), + eth_dev->data->dev_private = rte_zmalloc_socket(port_name, + sizeof(struct nfp_net_hw), RTE_CACHE_LINE_SIZE, numa_node); if (eth_dev->data->dev_private == NULL) { ret = -ENOMEM; @@ -961,8 +955,7 @@ nfp_pf_init(struct rte_pci_device *pci_dev) /* Now the symbol table should be there */ sym_tbl = nfp_rtsym_table_read(cpp); if (sym_tbl == NULL) { - PMD_INIT_LOG(ERR, "Something is wrong with the firmware" - " symbol table"); + PMD_INIT_LOG(ERR, "Something is wrong with the firmware symbol table"); ret = -EIO; goto eth_table_cleanup; } @@ -1144,8 +1137,7 @@ nfp_pf_secondary_init(struct rte_pci_device *pci_dev) */ sym_tbl = nfp_rtsym_table_read(cpp); if (sym_tbl == NULL) { - PMD_INIT_LOG(ERR, "Something is wrong with the firmware" - " symbol table"); + PMD_INIT_LOG(ERR, "Something is wrong with the firmware symbol table"); return -EIO; } @@ -1198,27 +1190,27 @@ nfp_pf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, static const struct rte_pci_id pci_id_nfp_pf_net_map[] = { { RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME, - PCI_DEVICE_ID_NFP3800_PF_NIC) + PCI_DEVICE_ID_NFP3800_PF_NIC) }, { RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME, - PCI_DEVICE_ID_NFP4000_PF_NIC) + PCI_DEVICE_ID_NFP4000_PF_NIC) }, { RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME, - PCI_DEVICE_ID_NFP6000_PF_NIC) + PCI_DEVICE_ID_NFP6000_PF_NIC) }, { RTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE, - PCI_DEVICE_ID_NFP3800_PF_NIC) + PCI_DEVICE_ID_NFP3800_PF_NIC) }, { RTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE, - PCI_DEVICE_ID_NFP4000_PF_NIC) + PCI_DEVICE_ID_NFP4000_PF_NIC) }, { RTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE, - PCI_DEVICE_ID_NFP6000_PF_NIC) + PCI_DEVICE_ID_NFP6000_PF_NIC) }, { .vendor_id = 0, diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c index c8d6b0461b..ac6a10685d 
100644 --- a/drivers/net/nfp/nfp_ethdev_vf.c +++ b/drivers/net/nfp/nfp_ethdev_vf.c @@ -50,18 +50,17 @@ nfp_netvf_start(struct rte_eth_dev *dev) /* check and configure queue intr-vector mapping */ if (dev->data->dev_conf.intr_conf.rxq != 0) { - if (rte_intr_type_get(intr_handle) == - RTE_INTR_HANDLE_UIO) { + if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) { /* * Better not to share LSC with RX interrupts. * Unregistering LSC interrupt handler */ rte_intr_callback_unregister(pci_dev->intr_handle, - nfp_net_dev_interrupt_handler, (void *)dev); + nfp_net_dev_interrupt_handler, (void *)dev); if (dev->data->nb_rx_queues > 1) { PMD_INIT_LOG(ERR, "PMD rx interrupt only " - "supports 1 queue with UIO"); + "supports 1 queue with UIO"); return -EIO; } } @@ -190,12 +189,10 @@ nfp_netvf_close(struct rte_eth_dev *dev) /* unregister callback func from eal lib */ rte_intr_callback_unregister(pci_dev->intr_handle, - nfp_net_dev_interrupt_handler, - (void *)dev); + nfp_net_dev_interrupt_handler, (void *)dev); /* Cancel possible impending LSC work here before releasing the port*/ - rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler, - (void *)dev); + rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler, (void *)dev); /* * The ixgbe PMD disables the pcie master on the @@ -282,8 +279,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) hw->ctrl_bar = pci_dev->mem_resource[0].addr; if (hw->ctrl_bar == NULL) { - PMD_DRV_LOG(ERR, - "hw->ctrl_bar is NULL. BAR0 not configured"); + PMD_DRV_LOG(ERR, "hw->ctrl_bar is NULL. 
BAR0 not configured"); return -ENODEV; } @@ -301,8 +297,8 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) rte_eth_copy_pci_info(eth_dev, pci_dev); - hw->eth_xstats_base = rte_malloc("rte_eth_xstat", sizeof(struct rte_eth_xstat) * - nfp_net_xstats_size(eth_dev), 0); + hw->eth_xstats_base = rte_malloc("rte_eth_xstat", + sizeof(struct rte_eth_xstat) * nfp_net_xstats_size(eth_dev), 0); if (hw->eth_xstats_base == NULL) { PMD_INIT_LOG(ERR, "no memory for xstats base values on device %s!", pci_dev->device.name); @@ -318,13 +314,11 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) PMD_INIT_LOG(DEBUG, "tx_bar_off: 0x%" PRIx64 "", tx_bar_off); PMD_INIT_LOG(DEBUG, "rx_bar_off: 0x%" PRIx64 "", rx_bar_off); - hw->tx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + - tx_bar_off; - hw->rx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + - rx_bar_off; + hw->tx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + tx_bar_off; + hw->rx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + rx_bar_off; PMD_INIT_LOG(DEBUG, "ctrl_bar: %p, tx_bar: %p, rx_bar: %p", - hw->ctrl_bar, hw->tx_bar, hw->rx_bar); + hw->ctrl_bar, hw->tx_bar, hw->rx_bar); nfp_net_cfg_queue_setup(hw); hw->mtu = RTE_ETHER_MTU; @@ -339,8 +333,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) rte_spinlock_init(&hw->reconfig_lock); /* Allocating memory for mac addr */ - eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", - RTE_ETHER_ADDR_LEN, 0); + eth_dev->data->mac_addrs = rte_zmalloc("mac_addr", RTE_ETHER_ADDR_LEN, 0); if (eth_dev->data->mac_addrs == NULL) { PMD_INIT_LOG(ERR, "Failed to space for MAC address"); err = -ENOMEM; @@ -351,8 +344,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) tmp_ether_addr = &hw->mac_addr; if (rte_is_valid_assigned_ether_addr(tmp_ether_addr) == 0) { - PMD_INIT_LOG(INFO, "Using random mac address for port %d", - port); + PMD_INIT_LOG(INFO, "Using random mac address for port %d", port); /* Using random mac addresses for VFs */ rte_eth_random_addr(&hw->mac_addr.addr_bytes[0]); nfp_net_write_mac(hw, 
&hw->mac_addr.addr_bytes[0]); @@ -367,16 +359,15 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS; PMD_INIT_LOG(INFO, "port %d VendorID=0x%x DeviceID=0x%x " - "mac=" RTE_ETHER_ADDR_PRT_FMT, - eth_dev->data->port_id, pci_dev->id.vendor_id, - pci_dev->id.device_id, - RTE_ETHER_ADDR_BYTES(&hw->mac_addr)); + "mac=" RTE_ETHER_ADDR_PRT_FMT, + eth_dev->data->port_id, pci_dev->id.vendor_id, + pci_dev->id.device_id, + RTE_ETHER_ADDR_BYTES(&hw->mac_addr)); if (rte_eal_process_type() == RTE_PROC_PRIMARY) { /* Registering LSC interrupt handler */ rte_intr_callback_register(pci_dev->intr_handle, - nfp_net_dev_interrupt_handler, - (void *)eth_dev); + nfp_net_dev_interrupt_handler, (void *)eth_dev); /* Telling the firmware about the LSC interrupt entry */ nn_cfg_writeb(hw, NFP_NET_CFG_LSC, NFP_NET_IRQ_LSC_IDX); /* Recording current stats counters values */ @@ -394,39 +385,42 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) static const struct rte_pci_id pci_id_nfp_vf_net_map[] = { { RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME, - PCI_DEVICE_ID_NFP3800_VF_NIC) + PCI_DEVICE_ID_NFP3800_VF_NIC) }, { RTE_PCI_DEVICE(PCI_VENDOR_ID_NETRONOME, - PCI_DEVICE_ID_NFP6000_VF_NIC) + PCI_DEVICE_ID_NFP6000_VF_NIC) }, { RTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE, - PCI_DEVICE_ID_NFP3800_VF_NIC) + PCI_DEVICE_ID_NFP3800_VF_NIC) }, { RTE_PCI_DEVICE(PCI_VENDOR_ID_CORIGINE, - PCI_DEVICE_ID_NFP6000_VF_NIC) + PCI_DEVICE_ID_NFP6000_VF_NIC) }, { .vendor_id = 0, }, }; -static int nfp_vf_pci_uninit(struct rte_eth_dev *eth_dev) +static int +nfp_vf_pci_uninit(struct rte_eth_dev *eth_dev) { /* VF cleanup, just free private port data */ return nfp_netvf_close(eth_dev); } -static int eth_nfp_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, - struct rte_pci_device *pci_dev) +static int +eth_nfp_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, + struct rte_pci_device *pci_dev) { return rte_eth_dev_pci_generic_probe(pci_dev, - sizeof(struct 
nfp_net_adapter), nfp_netvf_init); + sizeof(struct nfp_net_adapter), nfp_netvf_init); } -static int eth_nfp_vf_pci_remove(struct rte_pci_device *pci_dev) +static int +eth_nfp_vf_pci_remove(struct rte_pci_device *pci_dev) { return rte_eth_dev_pci_generic_remove(pci_dev, nfp_vf_pci_uninit); } diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c index 3ea6813d9a..6d9a1c249f 100644 --- a/drivers/net/nfp/nfp_flow.c +++ b/drivers/net/nfp/nfp_flow.c @@ -156,7 +156,8 @@ nfp_flow_dev_to_priv(struct rte_eth_dev *dev) } static int -nfp_mask_id_alloc(struct nfp_flow_priv *priv, uint8_t *mask_id) +nfp_mask_id_alloc(struct nfp_flow_priv *priv, + uint8_t *mask_id) { uint8_t temp_id; uint8_t freed_id; @@ -188,7 +189,8 @@ nfp_mask_id_alloc(struct nfp_flow_priv *priv, uint8_t *mask_id) } static int -nfp_mask_id_free(struct nfp_flow_priv *priv, uint8_t mask_id) +nfp_mask_id_free(struct nfp_flow_priv *priv, + uint8_t mask_id) { struct circ_buf *ring; @@ -703,7 +705,8 @@ nfp_tun_check_ip_off_del(struct nfp_flower_representor *repr, } static void -nfp_flower_compile_meta_tci(char *mbuf_off, struct nfp_fl_key_ls *key_layer) +nfp_flower_compile_meta_tci(char *mbuf_off, + struct nfp_fl_key_ls *key_layer) { struct nfp_flower_meta_tci *tci_meta; @@ -714,7 +717,8 @@ nfp_flower_compile_meta_tci(char *mbuf_off, struct nfp_fl_key_ls *key_layer) } static void -nfp_flower_update_meta_tci(char *exact, uint8_t mask_id) +nfp_flower_update_meta_tci(char *exact, + uint8_t mask_id) { struct nfp_flower_meta_tci *meta_tci; @@ -723,7 +727,8 @@ nfp_flower_update_meta_tci(char *exact, uint8_t mask_id) } static void -nfp_flower_compile_ext_meta(char *mbuf_off, struct nfp_fl_key_ls *key_layer) +nfp_flower_compile_ext_meta(char *mbuf_off, + struct nfp_fl_key_ls *key_layer) { struct nfp_flower_ext_meta *ext_meta; @@ -1436,14 +1441,14 @@ nfp_flow_merge_tcp(__rte_unused struct nfp_app_fw_flower *app_fw_flower, meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data; if 
((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) { ipv4 = (struct nfp_flower_ipv4 *) - (*mbuf_off - sizeof(struct nfp_flower_ipv4)); + (*mbuf_off - sizeof(struct nfp_flower_ipv4)); ports = (struct nfp_flower_tp_ports *) - ((char *)ipv4 - sizeof(struct nfp_flower_tp_ports)); + ((char *)ipv4 - sizeof(struct nfp_flower_tp_ports)); } else { /* IPv6 */ ipv6 = (struct nfp_flower_ipv6 *) - (*mbuf_off - sizeof(struct nfp_flower_ipv6)); + (*mbuf_off - sizeof(struct nfp_flower_ipv6)); ports = (struct nfp_flower_tp_ports *) - ((char *)ipv6 - sizeof(struct nfp_flower_tp_ports)); + ((char *)ipv6 - sizeof(struct nfp_flower_tp_ports)); } mask = item->mask ? item->mask : proc->mask_default; @@ -1514,10 +1519,10 @@ nfp_flow_merge_udp(__rte_unused struct nfp_app_fw_flower *app_fw_flower, meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data; if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) { ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv4) - - sizeof(struct nfp_flower_tp_ports); + sizeof(struct nfp_flower_tp_ports); } else {/* IPv6 */ ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv6) - - sizeof(struct nfp_flower_tp_ports); + sizeof(struct nfp_flower_tp_ports); } ports = (struct nfp_flower_tp_ports *)ports_off; @@ -1557,10 +1562,10 @@ nfp_flow_merge_sctp(__rte_unused struct nfp_app_fw_flower *app_fw_flower, meta_tci = (struct nfp_flower_meta_tci *)nfp_flow->payload.unmasked_data; if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_IPV4) != 0) { ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv4) - - sizeof(struct nfp_flower_tp_ports); + sizeof(struct nfp_flower_tp_ports); } else { /* IPv6 */ ports_off = *mbuf_off - sizeof(struct nfp_flower_ipv6) - - sizeof(struct nfp_flower_tp_ports); + sizeof(struct nfp_flower_tp_ports); } ports = (struct nfp_flower_tp_ports *)ports_off; @@ -1951,9 +1956,8 @@ nfp_flow_item_check(const struct rte_flow_item *item, return 0; } - mask = item->mask ? 
- (const uint8_t *)item->mask : - (const uint8_t *)proc->mask_default; + mask = item->mask ? (const uint8_t *)item->mask : + (const uint8_t *)proc->mask_default; /* * Single-pass check to make sure that: diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c index 4528417559..7885166753 100644 --- a/drivers/net/nfp/nfp_rxtx.c +++ b/drivers/net/nfp/nfp_rxtx.c @@ -158,8 +158,9 @@ struct nfp_ptype_parsed { /* set mbuf checksum flags based on RX descriptor flags */ void -nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd, - struct rte_mbuf *mb) +nfp_net_rx_cksum(struct nfp_net_rxq *rxq, + struct nfp_net_rx_desc *rxd, + struct rte_mbuf *mb) { struct nfp_net_hw *hw = rxq->hw; @@ -192,7 +193,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq) unsigned int i; PMD_RX_LOG(DEBUG, "Fill Rx Freelist for %u descriptors", - rxq->rx_count); + rxq->rx_count); for (i = 0; i < rxq->rx_count; i++) { struct nfp_net_rx_desc *rxd; @@ -218,8 +219,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq) rte_wmb(); /* Not advertising the whole ring as the firmware gets confused if so */ - PMD_RX_LOG(DEBUG, "Increment FL write pointer in %u", - rxq->rx_count - 1); + PMD_RX_LOG(DEBUG, "Increment FL write pointer in %u", rxq->rx_count - 1); nfp_qcp_ptr_add(rxq->qcp_fl, NFP_QCP_WRITE_PTR, rxq->rx_count - 1); @@ -521,7 +521,8 @@ nfp_net_parse_meta(struct nfp_net_rx_desc *rxds, * Mbuf to set the packet type. 
*/ static void -nfp_net_set_ptype(const struct nfp_ptype_parsed *nfp_ptype, struct rte_mbuf *mb) +nfp_net_set_ptype(const struct nfp_ptype_parsed *nfp_ptype, + struct rte_mbuf *mb) { uint32_t mbuf_ptype = RTE_PTYPE_L2_ETHER; uint8_t nfp_tunnel_ptype = nfp_ptype->tunnel_ptype; @@ -678,7 +679,9 @@ nfp_net_parse_ptype(struct nfp_net_rx_desc *rxds, */ uint16_t -nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) +nfp_net_recv_pkts(void *rx_queue, + struct rte_mbuf **rx_pkts, + uint16_t nb_pkts) { struct nfp_net_rxq *rxq; struct nfp_net_rx_desc *rxds; @@ -728,8 +731,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) */ new_mb = rte_pktmbuf_alloc(rxq->mem_pool); if (unlikely(new_mb == NULL)) { - PMD_RX_LOG(DEBUG, - "RX mbuf alloc failed port_id=%u queue_id=%hu", + PMD_RX_LOG(DEBUG, "RX mbuf alloc failed port_id=%u queue_id=%hu", rxq->port_id, rxq->qidx); nfp_net_mbuf_alloc_failed(rxq); break; @@ -743,29 +745,28 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) rxb->mbuf = new_mb; PMD_RX_LOG(DEBUG, "Packet len: %u, mbuf_size: %u", - rxds->rxd.data_len, rxq->mbuf_size); + rxds->rxd.data_len, rxq->mbuf_size); /* Size of this segment */ mb->data_len = rxds->rxd.data_len - NFP_DESC_META_LEN(rxds); /* Size of the whole packet. We just support 1 segment */ mb->pkt_len = rxds->rxd.data_len - NFP_DESC_META_LEN(rxds); - if (unlikely((mb->data_len + hw->rx_offset) > - rxq->mbuf_size)) { + if (unlikely((mb->data_len + hw->rx_offset) > rxq->mbuf_size)) { /* * This should not happen and the user has the * responsibility of avoiding it. 
But we have * to give some info about the error */ PMD_RX_LOG(ERR, - "mbuf overflow likely due to the RX offset.\n" - "\t\tYour mbuf size should have extra space for" - " RX offset=%u bytes.\n" - "\t\tCurrently you just have %u bytes available" - " but the received packet is %u bytes long", - hw->rx_offset, - rxq->mbuf_size - hw->rx_offset, - mb->data_len); + "mbuf overflow likely due to the RX offset.\n" + "\t\tYour mbuf size should have extra space for" + " RX offset=%u bytes.\n" + "\t\tCurrently you just have %u bytes available" + " but the received packet is %u bytes long", + hw->rx_offset, + rxq->mbuf_size - hw->rx_offset, + mb->data_len); rte_pktmbuf_free(mb); break; } @@ -774,8 +775,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) if (hw->rx_offset != 0) mb->data_off = RTE_PKTMBUF_HEADROOM + hw->rx_offset; else - mb->data_off = RTE_PKTMBUF_HEADROOM + - NFP_DESC_META_LEN(rxds); + mb->data_off = RTE_PKTMBUF_HEADROOM + NFP_DESC_META_LEN(rxds); /* No scatter mode supported */ mb->nb_segs = 1; @@ -817,7 +817,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) return nb_hold; PMD_RX_LOG(DEBUG, "RX port_id=%hu queue_id=%hu, %hu packets received", - rxq->port_id, rxq->qidx, avail); + rxq->port_id, rxq->qidx, avail); nb_hold += rxq->nb_rx_hold; @@ -828,7 +828,7 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) rte_wmb(); if (nb_hold > rxq->rx_free_thresh) { PMD_RX_LOG(DEBUG, "port=%hu queue=%hu nb_hold=%hu avail=%hu", - rxq->port_id, rxq->qidx, nb_hold, avail); + rxq->port_id, rxq->qidx, nb_hold, avail); nfp_qcp_ptr_add(rxq->qcp_fl, NFP_QCP_WRITE_PTR, nb_hold); nb_hold = 0; } @@ -854,7 +854,8 @@ nfp_net_rx_queue_release_mbufs(struct nfp_net_rxq *rxq) } void -nfp_net_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx) +nfp_net_rx_queue_release(struct rte_eth_dev *dev, + uint16_t queue_idx) { struct nfp_net_rxq *rxq = dev->data->rx_queues[queue_idx]; @@ -876,10 
+877,11 @@ nfp_net_reset_rx_queue(struct nfp_net_rxq *rxq) int nfp_net_rx_queue_setup(struct rte_eth_dev *dev, - uint16_t queue_idx, uint16_t nb_desc, - unsigned int socket_id, - const struct rte_eth_rxconf *rx_conf, - struct rte_mempool *mp) + uint16_t queue_idx, + uint16_t nb_desc, + unsigned int socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp) { uint16_t min_rx_desc; uint16_t max_rx_desc; @@ -897,7 +899,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, /* Validating number of descriptors */ rx_desc_sz = nb_desc * sizeof(struct nfp_net_rx_desc); if (rx_desc_sz % NFP_ALIGN_RING_DESC != 0 || - nb_desc > max_rx_desc || nb_desc < min_rx_desc) { + nb_desc > max_rx_desc || nb_desc < min_rx_desc) { PMD_DRV_LOG(ERR, "Wrong nb_desc value"); return -EINVAL; } @@ -913,7 +915,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, /* Allocating rx queue data structure */ rxq = rte_zmalloc_socket("ethdev RX queue", sizeof(struct nfp_net_rxq), - RTE_CACHE_LINE_SIZE, socket_id); + RTE_CACHE_LINE_SIZE, socket_id); if (rxq == NULL) return -ENOMEM; @@ -943,9 +945,8 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, * resizing in later calls to the queue setup function. 
*/ tz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx, - sizeof(struct nfp_net_rx_desc) * - max_rx_desc, NFP_MEMZONE_ALIGN, - socket_id); + sizeof(struct nfp_net_rx_desc) * max_rx_desc, + NFP_MEMZONE_ALIGN, socket_id); if (tz == NULL) { PMD_DRV_LOG(ERR, "Error allocating rx dma"); @@ -960,8 +961,8 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, /* mbuf pointers array for referencing mbufs linked to RX descriptors */ rxq->rxbufs = rte_zmalloc_socket("rxq->rxbufs", - sizeof(*rxq->rxbufs) * nb_desc, - RTE_CACHE_LINE_SIZE, socket_id); + sizeof(*rxq->rxbufs) * nb_desc, RTE_CACHE_LINE_SIZE, + socket_id); if (rxq->rxbufs == NULL) { nfp_net_rx_queue_release(dev, queue_idx); dev->data->rx_queues[queue_idx] = NULL; @@ -969,7 +970,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, } PMD_RX_LOG(DEBUG, "rxbufs=%p hw_ring=%p dma_addr=0x%" PRIx64, - rxq->rxbufs, rxq->rxds, (unsigned long)rxq->dma); + rxq->rxbufs, rxq->rxds, (unsigned long)rxq->dma); nfp_net_reset_rx_queue(rxq); @@ -998,15 +999,15 @@ nfp_net_tx_free_bufs(struct nfp_net_txq *txq) int todo; PMD_TX_LOG(DEBUG, "queue %hu. 
Check for descriptor with a complete" - " status", txq->qidx); + " status", txq->qidx); /* Work out how many packets have been sent */ qcp_rd_p = nfp_qcp_read(txq->qcp_q, NFP_QCP_READ_PTR); if (qcp_rd_p == txq->rd_p) { PMD_TX_LOG(DEBUG, "queue %hu: It seems harrier is not sending " - "packets (%u, %u)", txq->qidx, - qcp_rd_p, txq->rd_p); + "packets (%u, %u)", txq->qidx, + qcp_rd_p, txq->rd_p); return 0; } @@ -1016,7 +1017,7 @@ nfp_net_tx_free_bufs(struct nfp_net_txq *txq) todo = qcp_rd_p + txq->tx_count - txq->rd_p; PMD_TX_LOG(DEBUG, "qcp_rd_p %u, txq->rd_p: %u, qcp->rd_p: %u", - qcp_rd_p, txq->rd_p, txq->rd_p); + qcp_rd_p, txq->rd_p, txq->rd_p); if (todo == 0) return todo; @@ -1045,7 +1046,8 @@ nfp_net_tx_queue_release_mbufs(struct nfp_net_txq *txq) } void -nfp_net_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx) +nfp_net_tx_queue_release(struct rte_eth_dev *dev, + uint16_t queue_idx) { struct nfp_net_txq *txq = dev->data->tx_queues[queue_idx]; diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h index 3c7138f7d6..9a30ebd89e 100644 --- a/drivers/net/nfp/nfp_rxtx.h +++ b/drivers/net/nfp/nfp_rxtx.h @@ -234,17 +234,17 @@ nfp_net_mbuf_alloc_failed(struct nfp_net_rxq *rxq) } void nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd, - struct rte_mbuf *mb); + struct rte_mbuf *mb); int nfp_net_rx_freelist_setup(struct rte_eth_dev *dev); uint32_t nfp_net_rx_queue_count(void *rx_queue); uint16_t nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, - uint16_t nb_pkts); + uint16_t nb_pkts); void nfp_net_rx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx); void nfp_net_reset_rx_queue(struct nfp_net_rxq *rxq); int nfp_net_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, - uint16_t nb_desc, unsigned int socket_id, - const struct rte_eth_rxconf *rx_conf, - struct rte_mempool *mp); + uint16_t nb_desc, unsigned int socket_id, + const struct rte_eth_rxconf *rx_conf, + struct rte_mempool *mp); void 
nfp_net_tx_queue_release(struct rte_eth_dev *dev, uint16_t queue_idx); void nfp_net_reset_tx_queue(struct nfp_net_txq *txq);

From patchwork Thu Oct 12 01:26:56 2023
X-Patchwork-Submitter: Chaoyong He
X-Patchwork-Id: 132561
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Chaoyong He
To: dev@dpdk.org
Cc: oss-drivers@corigine.com, Chaoyong He, Long Wu, Peng Zhang
Subject: [PATCH v2 03/11] net/nfp: unify the type of integer variable
Date: Thu, 12 Oct 2023 09:26:56 +0800
Message-Id: <20231012012704.483828-4-chaoyong.he@corigine.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20231012012704.483828-1-chaoyong.he@corigine.com>
References:
<20231007023339.1546659-1-chaoyong.he@corigine.com> <20231012012704.483828-1-chaoyong.he@corigine.com>
Unify the type of integer variables to the DPDK preferred style.

Signed-off-by: Chaoyong He
Reviewed-by: Long Wu
Reviewed-by: Peng Zhang
---
 drivers/net/nfp/flower/nfp_flower.c      |  2 +-
 drivers/net/nfp/flower/nfp_flower_cmsg.c | 16 +++++-----
 drivers/net/nfp/nfd3/nfp_nfd3_dp.c       |  6 ++--
 drivers/net/nfp/nfp_common.c             | 37 +++++++++++++-----------
 drivers/net/nfp/nfp_common.h             | 16 +++++-----
 drivers/net/nfp/nfp_ethdev.c             | 24 +++++++--------
 drivers/net/nfp/nfp_ethdev_vf.c          |  2 +-
 drivers/net/nfp/nfp_flow.c               |  8 ++---
 drivers/net/nfp/nfp_rxtx.c               | 12 ++++----
 drivers/net/nfp/nfp_rxtx.h               |  2 +-
 10 files changed, 64 insertions(+), 61 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c
index 3352693d71..7dd1423aaf 100644
--- a/drivers/net/nfp/flower/nfp_flower.c
+++ b/drivers/net/nfp/flower/nfp_flower.c
@@ -26,7 +26,7 @@ nfp_pf_repr_enable_queues(struct rte_eth_dev *dev)
 {
 	struct nfp_net_hw *hw;
 	uint64_t enabled_queues = 0;
-	int i;
+	uint16_t i;
 	struct nfp_flower_representor *repr;
 
 	repr = dev->data->dev_private;
diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.c b/drivers/net/nfp/flower/nfp_flower_cmsg.c
index 6b9532f5b6..5d6912b079 100644
--- a/drivers/net/nfp/flower/nfp_flower_cmsg.c
+++ b/drivers/net/nfp/flower/nfp_flower_cmsg.c
@@ -64,10 +64,10 @@ nfp_flower_cmsg_mac_repr_init(struct rte_mbuf *mbuf,
 
 static void
 nfp_flower_cmsg_mac_repr_fill(struct rte_mbuf *m,
-		unsigned int idx,
-		unsigned int nbi,
-		unsigned int nbi_port,
-		unsigned int phys_port)
+		uint8_t idx,
+		uint32_t nbi,
+		uint32_t nbi_port,
+		uint32_t phys_port)
 {
 	struct nfp_flower_cmsg_mac_repr *msg;
 
@@ -81,11 +81,11 @@ nfp_flower_cmsg_mac_repr_fill(struct rte_mbuf *m,
 int
 nfp_flower_cmsg_mac_repr(struct nfp_app_fw_flower *app_fw_flower)
 {
-	int i;
+	uint8_t i;
 	uint16_t cnt;
-	unsigned int nbi;
-	unsigned int nbi_port;
-	unsigned int phys_port;
+	uint32_t nbi;
+	uint32_t nbi_port;
+	uint32_t phys_port;
 	struct rte_mbuf *mbuf;
 	struct nfp_eth_table *nfp_eth_table;
 
diff --git a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
index 64928254d8..5a84629ed7 100644
--- a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
+++ b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
@@ -227,9 +227,9 @@ nfp_net_nfd3_xmit_pkts_common(void *tx_queue,
 		uint16_t nb_pkts,
 		bool repr_flag)
 {
-	int i;
-	int pkt_size;
-	int dma_size;
+	uint16_t i;
+	uint32_t pkt_size;
+	uint16_t dma_size;
 	uint8_t offset;
 	uint64_t dma_addr;
 	uint16_t free_descs;
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index 9719a9212b..cb2c2afbd7 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -199,7 +199,7 @@ static int
 __nfp_net_reconfig(struct nfp_net_hw *hw,
 		uint32_t update)
 {
-	int cnt;
+	uint32_t cnt;
 	uint32_t new;
 	struct timespec wait;
 
@@ -229,7 +229,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw,
 		}
 		if (cnt >= NFP_NET_POLL_TIMEOUT) {
 			PMD_INIT_LOG(ERR, "Reconfig timeout for 0x%08x after"
-					" %dms", update, cnt);
+					" %ums", update, cnt);
 			return -EIO;
 		}
 		nanosleep(&wait, 0); /* waiting for a 1ms */
@@ -466,7 +466,7 @@ nfp_net_enable_queues(struct rte_eth_dev *dev)
 {
 	struct nfp_net_hw *hw;
 	uint64_t enabled_queues = 0;
-	int i;
+	uint16_t i;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -575,7 +575,7 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 		struct rte_intr_handle *intr_handle)
 {
 	struct nfp_net_hw *hw;
-	int i;
+	uint16_t i;
 
 	if (rte_intr_vec_list_alloc(intr_handle,
 			"intr_vec", dev->data->nb_rx_queues) != 0) {
@@ -832,7 +832,7 @@ int
 nfp_net_stats_get(struct rte_eth_dev *dev,
 		struct rte_eth_stats *stats)
 {
-	int i;
+	uint16_t i;
 	struct nfp_net_hw *hw;
 	struct rte_eth_stats nfp_dev_stats;
 
@@ -923,7 +923,7 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 int
 nfp_net_stats_reset(struct rte_eth_dev *dev)
 {
-	int i;
+	uint16_t i;
 	struct nfp_net_hw *hw;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -1398,7 +1398,7 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev,
 {
 	struct rte_pci_device *pci_dev;
 	struct nfp_net_hw *hw;
-	int base = 0;
+	uint16_t base = 0;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
@@ -1419,7 +1419,7 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev,
 {
 	struct rte_pci_device *pci_dev;
 	struct nfp_net_hw *hw;
-	int base = 0;
+	uint16_t base = 0;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
@@ -1619,9 +1619,10 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 		struct rte_eth_rss_reta_entry64 *reta_conf,
 		uint16_t reta_size)
 {
-	uint32_t reta, mask;
-	int i, j;
-	int idx, shift;
+	uint8_t mask;
+	uint32_t reta;
+	uint16_t i, j;
+	uint16_t idx, shift;
 	struct nfp_net_hw *hw =
 		NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -1695,8 +1696,9 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 		struct rte_eth_rss_reta_entry64 *reta_conf,
 		uint16_t reta_size)
 {
-	uint8_t i, j, mask;
-	int idx, shift;
+	uint16_t i, j;
+	uint8_t mask;
+	uint16_t idx, shift;
 	uint32_t reta;
 	struct nfp_net_hw *hw;
 
@@ -1720,7 +1722,7 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 		/* Handling 4 RSS entries per loop */
 		idx = i / RTE_ETH_RETA_GROUP_SIZE;
 		shift = i % RTE_ETH_RETA_GROUP_SIZE;
-		mask = (uint8_t)((reta_conf[idx].mask >> shift) & 0xF);
+		mask = (reta_conf[idx].mask >> shift) & 0xF;
 		if (mask == 0)
 			continue;
 
@@ -1744,7 +1746,7 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev,
 	uint64_t rss_hf;
 	uint32_t cfg_rss_ctrl = 0;
 	uint8_t key;
-	int i;
+	uint8_t i;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -1835,7 +1837,7 @@ nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
 	uint64_t rss_hf;
 	uint32_t cfg_rss_ctrl;
 	uint8_t key;
-	int i;
+	uint8_t i;
 	struct nfp_net_hw *hw;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -1893,7 +1895,8 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev)
 	struct rte_eth_rss_reta_entry64 nfp_reta_conf[2];
 	uint16_t rx_queues = dev->data->nb_rx_queues;
 	uint16_t queue;
-	int i, j, ret;
+	uint8_t i, j;
+	int ret;
 
 	PMD_DRV_LOG(INFO, "setting default RSS conf for %u queues",
 			rx_queues);
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index e4fd394868..71153ea25b 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -245,14 +245,14 @@ nn_writeq(uint64_t val,
  */
 static inline uint8_t
 nn_cfg_readb(struct nfp_net_hw *hw,
-		int off)
+		uint32_t off)
 {
 	return nn_readb(hw->ctrl_bar + off);
 }
 
 static inline void
 nn_cfg_writeb(struct nfp_net_hw *hw,
-		int off,
+		uint32_t off,
 		uint8_t val)
 {
 	nn_writeb(val, hw->ctrl_bar + off);
@@ -260,14 +260,14 @@ nn_cfg_writeb(struct nfp_net_hw *hw,
 
 static inline uint16_t
 nn_cfg_readw(struct nfp_net_hw *hw,
-		int off)
+		uint32_t off)
 {
 	return rte_le_to_cpu_16(nn_readw(hw->ctrl_bar + off));
 }
 
 static inline void
 nn_cfg_writew(struct nfp_net_hw *hw,
-		int off,
+		uint32_t off,
 		uint16_t val)
 {
 	nn_writew(rte_cpu_to_le_16(val), hw->ctrl_bar + off);
@@ -275,14 +275,14 @@ nn_cfg_writew(struct nfp_net_hw *hw,
 
 static inline uint32_t
 nn_cfg_readl(struct nfp_net_hw *hw,
-		int off)
+		uint32_t off)
 {
 	return rte_le_to_cpu_32(nn_readl(hw->ctrl_bar + off));
 }
 
 static inline void
 nn_cfg_writel(struct nfp_net_hw *hw,
-		int off,
+		uint32_t off,
 		uint32_t val)
 {
 	nn_writel(rte_cpu_to_le_32(val), hw->ctrl_bar + off);
@@ -290,14 +290,14 @@ nn_cfg_writel(struct nfp_net_hw *hw,
 
 static inline uint64_t
 nn_cfg_readq(struct nfp_net_hw *hw,
-		int off)
+		uint32_t off)
 {
 	return rte_le_to_cpu_64(nn_readq(hw->ctrl_bar + off));
 }
 
 static inline void
 nn_cfg_writeq(struct nfp_net_hw *hw,
-		int off,
+		uint32_t off,
 		uint64_t val)
 {
 	nn_writeq(rte_cpu_to_le_64(val), hw->ctrl_bar + off);
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 65473d87e8..140d20dcf7 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -23,7 +23,7 @@ static int
 nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic,
-		int port)
+		uint16_t port)
 {
 	struct nfp_eth_table *nfp_eth_table;
 	struct nfp_net_hw *hw = NULL;
 
 	/* Grab a pointer to the correct physical port */
@@ -255,7 +255,7 @@ nfp_net_close(struct rte_eth_dev *dev)
 	struct rte_pci_device *pci_dev;
 	struct nfp_pf_dev *pf_dev;
 	struct nfp_app_fw_nic *app_fw_nic;
-	int i;
+	uint8_t i;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -487,7 +487,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	struct rte_ether_addr *tmp_ether_addr;
 	uint64_t rx_base;
 	uint64_t tx_base;
-	int port = 0;
+	uint16_t port = 0;
 	int err;
 
 	PMD_INIT_FUNC_TRACE();
@@ -501,7 +501,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 	app_fw_nic = NFP_PRIV_TO_APP_FW_NIC(pf_dev->app_fw_priv);
 
 	port = ((struct nfp_net_hw *)eth_dev->data->dev_private)->idx;
-	if (port < 0 || port > 7) {
+	if (port > 7) {
 		PMD_DRV_LOG(ERR, "Port value is wrong");
 		return -ENODEV;
 	}
@@ -761,10 +761,10 @@ static int
 nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 		const struct nfp_dev_info *dev_info)
 {
-	int i;
+	uint8_t i;
 	int ret;
 	int err = 0;
-	int total_vnics;
+	uint32_t total_vnics;
 	struct nfp_net_hw *hw;
 	unsigned int numa_node;
 	struct rte_eth_dev *eth_dev;
@@ -785,7 +785,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 	/* Read the number of vNIC's created for the PF */
 	total_vnics = nfp_rtsym_read_le(pf_dev->sym_tbl,
 			"nfd_cfg_pf0_num_ports", &err);
-	if (err != 0 || total_vnics <= 0 || total_vnics > 8) {
+	if (err != 0 || total_vnics == 0 || total_vnics > 8) {
 		PMD_INIT_LOG(ERR, "nfd_cfg_pf0_num_ports symbol with wrong value");
 		ret = -ENODEV;
 		goto app_cleanup;
@@ -795,7 +795,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 	 * For coreNIC the number of vNICs exposed should be the same as the
 	 * number of physical ports
 	 */
-	if (total_vnics != (int)nfp_eth_table->count) {
+	if (total_vnics != nfp_eth_table->count) {
 		PMD_INIT_LOG(ERR, "Total physical ports do not match number of vNICs");
 		ret = -ENODEV;
 		goto app_cleanup;
@@ -1053,15 +1053,15 @@ nfp_secondary_init_app_fw_nic(struct rte_pci_device *pci_dev,
 		struct nfp_rtsym_table *sym_tbl,
 		struct nfp_cpp *cpp)
 {
-	int i;
+	uint32_t i;
 	int err = 0;
 	int ret = 0;
-	int total_vnics;
+	uint32_t total_vnics;
 	struct nfp_net_hw *hw;
 
 	/* Read the number of vNIC's created for the PF */
 	total_vnics = nfp_rtsym_read_le(sym_tbl, "nfd_cfg_pf0_num_ports", &err);
-	if (err != 0 || total_vnics <= 0 || total_vnics > 8) {
+	if (err != 0 || total_vnics == 0 || total_vnics > 8) {
 		PMD_INIT_LOG(ERR, "nfd_cfg_pf0_num_ports symbol with wrong value");
 		return -ENODEV;
 	}
@@ -1069,7 +1069,7 @@ nfp_secondary_init_app_fw_nic(struct rte_pci_device *pci_dev,
 	for (i = 0; i < total_vnics; i++) {
 		struct rte_eth_dev *eth_dev;
 		char port_name[RTE_ETH_NAME_MAX_LEN];
-		snprintf(port_name, sizeof(port_name), "%s_port%d",
+		snprintf(port_name, sizeof(port_name), "%s_port%u",
 				pci_dev->device.name, i);
 		PMD_INIT_LOG(DEBUG, "Secondary attaching to port %s", port_name);
 
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index ac6a10685d..892300a909 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -260,7 +260,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	uint64_t tx_bar_off = 0, rx_bar_off = 0;
 	uint32_t start_q;
-	int port = 0;
+	uint16_t port = 0;
 	int err;
 	const struct nfp_dev_info *dev_info;
 
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 6d9a1c249f..4c9904e36c 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -121,7 +121,7 @@ struct nfp_flow_item_proc {
 	/* Bit-mask to use when @p item->mask is not provided. */
 	const void *mask_default;
 	/* Size in bytes for @p mask_support and @p mask_default. */
-	const unsigned int mask_sz;
+	const size_t mask_sz;
 	/* Merge a pattern item into a flow rule handle. */
 	int (*merge)(struct nfp_app_fw_flower *app_fw_flower,
 			struct rte_flow *nfp_flow,
@@ -1941,8 +1941,8 @@ static int
 nfp_flow_item_check(const struct rte_flow_item *item,
 		const struct nfp_flow_item_proc *proc)
 {
+	size_t i;
 	int ret = 0;
-	unsigned int i;
 	const uint8_t *mask;
 
 	/* item->last and item->mask cannot exist without item->spec. */
@@ -2037,7 +2037,7 @@ nfp_flow_compile_item_proc(struct nfp_flower_representor *repr,
 		char **mbuf_off_mask,
 		bool is_outer_layer)
 {
-	int i;
+	uint32_t i;
 	int ret = 0;
 	bool continue_flag = true;
 	const struct rte_flow_item *item;
@@ -2271,7 +2271,7 @@ nfp_flow_action_set_ipv6(char *act_data,
 		const struct rte_flow_action *action,
 		bool ip_src_flag)
 {
-	int i;
+	uint32_t i;
 	rte_be32_t tmp;
 	size_t act_size;
 	struct nfp_fl_act_set_ipv6_addr *set_ip;
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 7885166753..8cbb9b74a2 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -190,7 +190,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
 {
 	struct nfp_net_dp_buf *rxe = rxq->rxbufs;
 	uint64_t dma_addr;
-	unsigned int i;
+	uint16_t i;
 
 	PMD_RX_LOG(DEBUG, "Fill Rx Freelist for %u descriptors",
 			rxq->rx_count);
@@ -229,7 +229,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
 int
 nfp_net_rx_freelist_setup(struct rte_eth_dev *dev)
 {
-	int i;
+	uint16_t i;
 
 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
 		if (nfp_net_rx_fill_freelist(dev->data->rx_queues[i]) != 0)
@@ -840,7 +840,7 @@ nfp_net_recv_pkts(void *rx_queue,
 static void
 nfp_net_rx_queue_release_mbufs(struct nfp_net_rxq *rxq)
 {
-	unsigned int i;
+	uint16_t i;
 
 	if (rxq->rxbufs == NULL)
 		return;
@@ -992,11 +992,11 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
  * @txq: TX queue to work with
  * Returns number of descriptors freed
  */
-int
+uint32_t
 nfp_net_tx_free_bufs(struct nfp_net_txq *txq)
 {
 	uint32_t qcp_rd_p;
-	int todo;
+	uint32_t todo;
 
 	PMD_TX_LOG(DEBUG, "queue %hu. Check for descriptor with a complete"
 			" status", txq->qidx);
@@ -1032,7 +1032,7 @@ nfp_net_tx_free_bufs(struct nfp_net_txq *txq)
 static void
 nfp_net_tx_queue_release_mbufs(struct nfp_net_txq *txq)
 {
-	unsigned int i;
+	uint32_t i;
 
 	if (txq->txbufs == NULL)
 		return;
diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h
index 9a30ebd89e..98ef6c3d93 100644
--- a/drivers/net/nfp/nfp_rxtx.h
+++ b/drivers/net/nfp/nfp_rxtx.h
@@ -253,7 +253,7 @@ int nfp_net_tx_queue_setup(struct rte_eth_dev *dev,
 		uint16_t nb_desc, unsigned int socket_id,
 		const struct rte_eth_txconf *tx_conf);
-int nfp_net_tx_free_bufs(struct nfp_net_txq *txq);
+uint32_t nfp_net_tx_free_bufs(struct nfp_net_txq *txq);
 void nfp_net_set_meta_vlan(struct nfp_net_meta_raw *meta_data,
 		struct rte_mbuf *pkt, uint8_t layer);
 

From patchwork Thu Oct 12 01:26:57 2023
X-Patchwork-Submitter: Chaoyong He
X-Patchwork-Id: 132562
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Chaoyong He
To: dev@dpdk.org
Cc: oss-drivers@corigine.com, Chaoyong He, Long Wu, Peng Zhang
Subject: [PATCH v2 04/11] net/nfp: standard the local variable coding style
Date: Thu, 12 Oct 2023 09:26:57 +0800
Message-Id: <20231012012704.483828-5-chaoyong.he@corigine.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20231012012704.483828-1-chaoyong.he@corigine.com>
References: <20231007023339.1546659-1-chaoyong.he@corigine.com>
 <20231012012704.483828-1-chaoyong.he@corigine.com>

Each line should declare only one local variable, and the local
variables should follow a unified ordering.
Signed-off-by: Chaoyong He
Reviewed-by: Long Wu
Reviewed-by: Peng Zhang
---
 drivers/net/nfp/flower/nfp_flower.c |  6 +-
 drivers/net/nfp/nfd3/nfp_nfd3_dp.c  |  4 +-
 drivers/net/nfp/nfp_common.c        | 97 ++++++++++++++++-------------
 drivers/net/nfp/nfp_common.h        |  3 +-
 drivers/net/nfp/nfp_cpp_bridge.c    | 39 ++++++++----
 drivers/net/nfp/nfp_ethdev.c        | 47 +++++++-------
 drivers/net/nfp/nfp_ethdev_vf.c     | 23 +++----
 drivers/net/nfp/nfp_flow.c          | 28 ++++-----
 drivers/net/nfp/nfp_rxtx.c          | 38 +++++------
 9 files changed, 154 insertions(+), 131 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c
index 7dd1423aaf..7a4e671178 100644
--- a/drivers/net/nfp/flower/nfp_flower.c
+++ b/drivers/net/nfp/flower/nfp_flower.c
@@ -24,9 +24,9 @@ static void
 nfp_pf_repr_enable_queues(struct rte_eth_dev *dev)
 {
+	uint16_t i;
 	struct nfp_net_hw *hw;
 	uint64_t enabled_queues = 0;
-	uint16_t i;
 	struct nfp_flower_representor *repr;
 
 	repr = dev->data->dev_private;
@@ -50,9 +50,9 @@ nfp_pf_repr_enable_queues(struct rte_eth_dev *dev)
 static void
 nfp_pf_repr_disable_queues(struct rte_eth_dev *dev)
 {
-	struct nfp_net_hw *hw;
+	uint32_t update;
 	uint32_t new_ctrl;
-	uint32_t update = 0;
+	struct nfp_net_hw *hw;
 	struct nfp_flower_representor *repr;
 
 	repr = dev->data->dev_private;
diff --git a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
index 5a84629ed7..699f65ebef 100644
--- a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
+++ b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c
@@ -228,13 +228,13 @@ nfp_net_nfd3_xmit_pkts_common(void *tx_queue,
 		bool repr_flag)
 {
 	uint16_t i;
+	uint8_t offset;
 	uint32_t pkt_size;
 	uint16_t dma_size;
-	uint8_t offset;
 	uint64_t dma_addr;
 	uint16_t free_descs;
-	uint16_t issued_descs;
 	struct rte_mbuf *pkt;
+	uint16_t issued_descs;
 	struct nfp_net_hw *hw;
 	struct rte_mbuf **lmbuf;
 	struct nfp_net_txq *txq;
diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index cb2c2afbd7..18291a1cde 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -375,10 +375,10 @@ nfp_net_mbox_reconfig(struct nfp_net_hw *hw,
 int
 nfp_net_configure(struct rte_eth_dev *dev)
 {
+	struct nfp_net_hw *hw;
 	struct rte_eth_conf *dev_conf;
 	struct rte_eth_rxmode *rxmode;
 	struct rte_eth_txmode *txmode;
-	struct nfp_net_hw *hw;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -464,9 +464,9 @@ nfp_net_enbable_rxvlan_cap(struct nfp_net_hw *hw,
 void
 nfp_net_enable_queues(struct rte_eth_dev *dev)
 {
+	uint16_t i;
 	struct nfp_net_hw *hw;
 	uint64_t enabled_queues = 0;
-	uint16_t i;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -488,8 +488,9 @@ nfp_net_enable_queues(struct rte_eth_dev *dev)
 void
 nfp_net_disable_queues(struct rte_eth_dev *dev)
 {
+	uint32_t update;
+	uint32_t new_ctrl;
 	struct nfp_net_hw *hw;
-	uint32_t new_ctrl, update = 0;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -528,9 +529,10 @@ void
 nfp_net_write_mac(struct nfp_net_hw *hw,
 		uint8_t *mac)
 {
-	uint32_t mac0 = *(uint32_t *)mac;
+	uint32_t mac0;
 	uint16_t mac1;
 
+	mac0 = *(uint32_t *)mac;
 	nn_writel(rte_cpu_to_be_32(mac0), hw->ctrl_bar + NFP_NET_CFG_MACADDR);
 
 	mac += 4;
@@ -543,8 +545,9 @@ int
 nfp_net_set_mac_addr(struct rte_eth_dev *dev,
 		struct rte_ether_addr *mac_addr)
 {
+	uint32_t ctrl;
+	uint32_t update;
 	struct nfp_net_hw *hw;
-	uint32_t update, ctrl;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
@@ -574,8 +577,8 @@ int
 nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 		struct rte_intr_handle *intr_handle)
 {
-	struct nfp_net_hw *hw;
 	uint16_t i;
+	struct nfp_net_hw *hw;
 
 	if (rte_intr_vec_list_alloc(intr_handle,
 			"intr_vec", dev->data->nb_rx_queues) != 0) {
@@ -615,11 +618,11 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev,
 uint32_t
 nfp_check_offloads(struct rte_eth_dev *dev)
 {
+	uint32_t ctrl = 0;
 	struct nfp_net_hw *hw;
 	struct rte_eth_conf *dev_conf;
 	struct rte_eth_rxmode *rxmode;
 	struct rte_eth_txmode *txmode;
-	uint32_t ctrl = 0;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -682,9 +685,10 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 int
 nfp_net_promisc_enable(struct rte_eth_dev *dev)
 {
-	uint32_t new_ctrl, update = 0;
-	struct nfp_net_hw *hw;
 	int ret;
+	uint32_t new_ctrl;
+	uint32_t update = 0;
+	struct nfp_net_hw *hw;
 	struct nfp_flower_representor *repr;
 
 	PMD_DRV_LOG(DEBUG, "Promiscuous mode enable");
@@ -725,9 +729,10 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev)
 int
 nfp_net_promisc_disable(struct rte_eth_dev *dev)
 {
-	uint32_t new_ctrl, update = 0;
-	struct nfp_net_hw *hw;
 	int ret;
+	uint32_t new_ctrl;
+	uint32_t update = 0;
+	struct nfp_net_hw *hw;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -764,8 +769,8 @@ nfp_net_link_update(struct rte_eth_dev *dev,
 {
 	int ret;
 	uint32_t i;
-	uint32_t nn_link_status;
 	struct nfp_net_hw *hw;
+	uint32_t nn_link_status;
 	struct rte_eth_link link;
 	struct nfp_eth_table *nfp_eth_table;
 
@@ -988,12 +993,13 @@ nfp_net_stats_reset(struct rte_eth_dev *dev)
 uint32_t
 nfp_net_xstats_size(const struct rte_eth_dev *dev)
 {
-	/* If the device is a VF, then there will be no MAC stats */
-	struct nfp_net_hw *hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint32_t count;
+	struct nfp_net_hw *hw;
 	const uint32_t size = RTE_DIM(nfp_net_xstats);
 
+	/* If the device is a VF, then there will be no MAC stats */
+	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	if (hw->mac_stats == NULL) {
-		uint32_t count;
 		for (count = 0; count < size; count++) {
 			if (nfp_net_xstats[count].group == NFP_XSTAT_GROUP_MAC)
 				break;
@@ -1396,9 +1402,9 @@ int
 nfp_rx_queue_intr_enable(struct rte_eth_dev *dev,
 		uint16_t queue_id)
 {
-	struct rte_pci_device *pci_dev;
-	struct nfp_net_hw *hw;
 	uint16_t base = 0;
+	struct nfp_net_hw *hw;
+	struct rte_pci_device *pci_dev;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
@@ -1417,9 +1423,9 @@ int
 nfp_rx_queue_intr_disable(struct rte_eth_dev *dev,
 		uint16_t queue_id)
 {
-	struct rte_pci_device *pci_dev;
-	struct nfp_net_hw *hw;
 	uint16_t base = 0;
+	struct nfp_net_hw *hw;
+	struct rte_pci_device *pci_dev;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
@@ -1436,8 +1442,8 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev,
 static void
 nfp_net_dev_link_status_print(struct rte_eth_dev *dev)
 {
-	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_eth_link link;
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 
 	rte_eth_linkstatus_get(dev, &link);
 	if (link.link_status != 0)
@@ -1573,16 +1579,16 @@ int
 nfp_net_vlan_offload_set(struct rte_eth_dev *dev,
 		int mask)
 {
-	uint32_t new_ctrl, update;
+	int ret;
+	uint32_t update;
+	uint32_t new_ctrl;
 	struct nfp_net_hw *hw;
+	uint32_t rxvlan_ctrl = 0;
 	struct rte_eth_conf *dev_conf;
-	uint32_t rxvlan_ctrl;
-	int ret;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	dev_conf = &dev->data->dev_conf;
 	new_ctrl = hw->ctrl;
-	rxvlan_ctrl = 0;
 
 	nfp_net_enbable_rxvlan_cap(hw, &rxvlan_ctrl);
 
@@ -1619,12 +1625,15 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 		struct rte_eth_rss_reta_entry64 *reta_conf,
 		uint16_t reta_size)
 {
+	uint16_t i;
+	uint16_t j;
+	uint16_t idx;
 	uint8_t mask;
 	uint32_t reta;
-	uint16_t i, j;
-	uint16_t idx, shift;
-	struct nfp_net_hw *hw =
-		NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	uint16_t shift;
+	struct nfp_net_hw *hw;
+
+	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
 	if (reta_size != NFP_NET_CFG_RSS_ITBL_SZ) {
 		PMD_DRV_LOG(ERR, "The size of hash lookup table configured "
@@ -1670,11 +1679,11 @@ nfp_net_reta_update(struct rte_eth_dev *dev,
 		struct rte_eth_rss_reta_entry64 *reta_conf,
 		uint16_t reta_size)
 {
-	struct nfp_net_hw *hw =
-		NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	uint32_t update;
 	int ret;
+	uint32_t update;
+	struct nfp_net_hw *hw;
 
+	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_RSS_ANY) == 0)
 		return -EINVAL;
 
@@ -1696,10 +1705,12 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 		struct rte_eth_rss_reta_entry64 *reta_conf,
 		uint16_t reta_size)
 {
-	uint16_t i, j;
+	uint16_t i;
+	uint16_t j;
+	uint16_t idx;
 	uint8_t mask;
-	uint16_t idx, shift;
 	uint32_t reta;
+	uint16_t shift;
 	struct nfp_net_hw *hw;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -1742,11 +1753,11 @@ static int
 nfp_net_rss_hash_write(struct rte_eth_dev *dev,
 		struct rte_eth_rss_conf *rss_conf)
 {
-	struct nfp_net_hw *hw;
+	uint8_t i;
+	uint8_t key;
 	uint64_t rss_hf;
+	struct nfp_net_hw *hw;
 	uint32_t cfg_rss_ctrl = 0;
-	uint8_t key;
-	uint8_t i;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 
@@ -1834,10 +1845,10 @@ int
 nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
 		struct rte_eth_rss_conf *rss_conf)
 {
+	uint8_t i;
+	uint8_t key;
 	uint64_t rss_hf;
 	uint32_t cfg_rss_ctrl;
-	uint8_t key;
-	uint8_t i;
 	struct nfp_net_hw *hw;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
@@ -1890,13 +1901,14 @@ nfp_net_rss_hash_conf_get(struct rte_eth_dev *dev,
 int
 nfp_net_rss_config_default(struct rte_eth_dev *dev)
 {
+	int ret;
+	uint8_t i;
+	uint8_t j;
+	uint16_t queue = 0;
 	struct rte_eth_conf *dev_conf;
 	struct rte_eth_rss_conf rss_conf;
-	struct rte_eth_rss_reta_entry64 nfp_reta_conf[2];
 	uint16_t rx_queues = dev->data->nb_rx_queues;
-	uint16_t queue;
-	uint8_t i, j;
-	int ret;
+	struct rte_eth_rss_reta_entry64 nfp_reta_conf[2];
 
 	PMD_DRV_LOG(INFO, "setting default RSS conf for %u queues",
 			rx_queues);
@@ -1904,7 +1916,6 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev)
 	nfp_reta_conf[0].mask = ~0x0;
 	nfp_reta_conf[1].mask = ~0x0;
 
-	queue = 0;
 	for (i = 0; i < 0x40; i += 8) {
 		for (j = i; j < (i + 8); j++) {
 			nfp_reta_conf[0].reta[j] = queue;
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index 71153ea25b..9cb889c4a6 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -222,8 +222,9 @@ nn_writew(uint16_t val,
 static inline uint64_t
 nn_readq(volatile void *addr)
 {
+	uint32_t low;
+	uint32_t high;
 	const volatile uint32_t *p = addr;
-	uint32_t low, high;
 
 	high = nn_readl((volatile const void *)(p + 1));
 	low = nn_readl((volatile const void *)p);
diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c
index 85a8bf9235..727ec7a7b2 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.c
+++ b/drivers/net/nfp/nfp_cpp_bridge.c
@@ -119,12 +119,16 @@ static int
 nfp_cpp_bridge_serve_write(int sockfd,
 		struct nfp_cpp *cpp)
 {
-	struct nfp_cpp_area *area;
-	off_t offset, nfp_offset;
-	uint32_t cpp_id, pos, len;
+	int err;
+	off_t offset;
+	uint32_t pos;
+	uint32_t len;
+	size_t count;
+	size_t curlen;
+	uint32_t cpp_id;
+	off_t nfp_offset;
 	uint32_t tmpbuf[16];
-	size_t count, curlen;
-	int err = 0;
+	struct nfp_cpp_area *area;
 
 	PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n",
 			__func__, sizeof(off_t), sizeof(size_t));
@@ -220,12 +224,16 @@ static int
 nfp_cpp_bridge_serve_read(int sockfd,
 		struct nfp_cpp *cpp)
 {
-	struct nfp_cpp_area *area;
-	off_t offset, nfp_offset;
-	uint32_t cpp_id, pos, len;
+	int err;
+	off_t offset;
+	uint32_t pos;
+	uint32_t len;
+	size_t count;
+	size_t curlen;
+	uint32_t cpp_id;
+	off_t nfp_offset;
 	uint32_t tmpbuf[16];
-	size_t count, curlen;
-	int err = 0;
+	struct nfp_cpp_area *area;
 
 	PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n",
 			__func__, sizeof(off_t), sizeof(size_t));
@@ -319,8 +327,10 @@ static int
 nfp_cpp_bridge_serve_ioctl(int sockfd,
 		struct nfp_cpp *cpp)
 {
-	uint32_t cmd, ident_size, tmp;
 	int err;
+	uint32_t cmd;
+	uint32_t tmp;
+	uint32_t ident_size;
 
 	/* Reading now the IOCTL command */
 	err = recv(sockfd, &cmd, 4, 0);
@@ -375,10 +385,13 @@ nfp_cpp_bridge_serve_ioctl(int sockfd,
 static int
 nfp_cpp_bridge_service_func(void *args)
 {
-	struct sockaddr address;
+	int op;
+	int ret;
+	int sockfd;
+	int datafd;
 	struct nfp_cpp *cpp;
+	struct sockaddr address;
 	struct nfp_pf_dev *pf_dev;
-	int sockfd, datafd, op, ret;
 	struct timeval timeout = {1, 0};
 
 	unlink("/tmp/nfp_cpp");
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index 140d20dcf7..7d149decfb 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -25,8 +25,8 @@ static int
 nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic,
 		uint16_t port)
 {
+	struct nfp_net_hw *hw;
 	struct nfp_eth_table *nfp_eth_table;
-	struct nfp_net_hw *hw = NULL;
 
 	/* Grab a pointer to the correct physical port */
 	hw = app_fw_nic->ports[port];
@@ -42,18 +42,19 @@ nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic,
 static int
 nfp_net_start(struct rte_eth_dev *dev)
 {
-	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
-	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
-	uint32_t new_ctrl, update = 0;
+	int ret;
+	uint32_t new_ctrl;
+	uint32_t update = 0;
 	uint32_t cap_extend;
-	uint32_t ctrl_extend = 0;
+	uint32_t intr_vector;
 	struct nfp_net_hw *hw;
+	uint32_t ctrl_extend = 0;
 	struct nfp_pf_dev *pf_dev;
-	struct nfp_app_fw_nic *app_fw_nic;
 	struct rte_eth_conf *dev_conf;
 	struct rte_eth_rxmode *rxmode;
-	uint32_t intr_vector;
-	int ret;
+	struct nfp_app_fw_nic *app_fw_nic;
+	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
+	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pf_dev = NFP_NET_DEV_PRIVATE_TO_PF(dev->data->dev_private);
@@ -251,11 +252,11 @@ nfp_net_set_link_down(struct rte_eth_dev *dev)
 static int
 nfp_net_close(struct rte_eth_dev *dev)
 {
+	uint8_t i;
 	struct nfp_net_hw *hw;
-	struct rte_pci_device *pci_dev;
 	struct nfp_pf_dev *pf_dev;
+	struct rte_pci_device *pci_dev;
 	struct nfp_app_fw_nic *app_fw_nic;
-	uint8_t i;
 
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
@@ -480,15 +481,15 @@ nfp_net_ethdev_ops_mount(struct nfp_net_hw *hw,
 static int
 nfp_net_init(struct rte_eth_dev *eth_dev)
 {
-	struct rte_pci_device *pci_dev;
+	int err;
+	uint16_t port;
+	uint64_t rx_base;
+	uint64_t tx_base;
+	struct nfp_net_hw *hw;
 	struct 
nfp_pf_dev *pf_dev; + struct rte_pci_device *pci_dev; struct nfp_app_fw_nic *app_fw_nic; - struct nfp_net_hw *hw; struct rte_ether_addr *tmp_ether_addr; - uint64_t rx_base; - uint64_t tx_base; - uint16_t port = 0; - int err; PMD_INIT_FUNC_TRACE(); @@ -650,14 +651,14 @@ nfp_fw_upload(struct rte_pci_device *dev, struct nfp_nsp *nsp, char *card) { - struct nfp_cpp *cpp = nfp_nsp_cpp(nsp); void *fw_buf; - char fw_name[125]; - char serial[40]; size_t fsize; + char serial[40]; + char fw_name[125]; uint16_t interface; uint32_t cpp_serial_len; const uint8_t *cpp_serial; + struct nfp_cpp *cpp = nfp_nsp_cpp(nsp); cpp_serial_len = nfp_cpp_serial(cpp, &cpp_serial); if (cpp_serial_len != NFP_SERIAL_LEN) @@ -713,10 +714,10 @@ nfp_fw_setup(struct rte_pci_device *dev, struct nfp_eth_table *nfp_eth_table, struct nfp_hwinfo *hwinfo) { + int err; + char card_desc[100]; struct nfp_nsp *nsp; const char *nfp_fw_model; - char card_desc[100]; - int err = 0; nfp_fw_model = nfp_hwinfo_lookup(hwinfo, "nffw.partno"); if (nfp_fw_model == NULL) @@ -897,9 +898,9 @@ nfp_pf_init(struct rte_pci_device *pci_dev) uint64_t addr; uint32_t cpp_id; struct nfp_cpp *cpp; - enum nfp_app_fw_id app_fw_id; struct nfp_pf_dev *pf_dev; struct nfp_hwinfo *hwinfo; + enum nfp_app_fw_id app_fw_id; char name[RTE_ETH_NAME_MAX_LEN]; struct nfp_rtsym_table *sym_tbl; struct nfp_eth_table *nfp_eth_table; @@ -1220,8 +1221,8 @@ static const struct rte_pci_id pci_id_nfp_pf_net_map[] = { static int nfp_pci_uninit(struct rte_eth_dev *eth_dev) { - struct rte_pci_device *pci_dev; uint16_t port_id; + struct rte_pci_device *pci_dev; pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c index 892300a909..aaef6ea91a 100644 --- a/drivers/net/nfp/nfp_ethdev_vf.c +++ b/drivers/net/nfp/nfp_ethdev_vf.c @@ -29,14 +29,15 @@ nfp_netvf_read_mac(struct nfp_net_hw *hw) static int nfp_netvf_start(struct rte_eth_dev *dev) { - struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); 
- struct rte_intr_handle *intr_handle = pci_dev->intr_handle; - uint32_t new_ctrl, update = 0; + int ret; + uint32_t new_ctrl; + uint32_t update = 0; + uint32_t intr_vector; struct nfp_net_hw *hw; struct rte_eth_conf *dev_conf; struct rte_eth_rxmode *rxmode; - uint32_t intr_vector; - int ret; + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); + struct rte_intr_handle *intr_handle = pci_dev->intr_handle; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); @@ -254,15 +255,15 @@ nfp_netvf_ethdev_ops_mount(struct nfp_net_hw *hw, static int nfp_netvf_init(struct rte_eth_dev *eth_dev) { - struct rte_pci_device *pci_dev; - struct nfp_net_hw *hw; - struct rte_ether_addr *tmp_ether_addr; - - uint64_t tx_bar_off = 0, rx_bar_off = 0; + int err; uint32_t start_q; uint16_t port = 0; - int err; + struct nfp_net_hw *hw; + uint64_t tx_bar_off = 0; + uint64_t rx_bar_off = 0; + struct rte_pci_device *pci_dev; const struct nfp_dev_info *dev_info; + struct rte_ether_addr *tmp_ether_addr; PMD_INIT_FUNC_TRACE(); diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c index 4c9904e36c..84b48daf85 100644 --- a/drivers/net/nfp/nfp_flow.c +++ b/drivers/net/nfp/nfp_flow.c @@ -761,9 +761,9 @@ nfp_flow_compile_metadata(struct nfp_flow_priv *priv, uint32_t stats_ctx, uint64_t cookie) { - struct nfp_fl_rule_metadata *nfp_flow_meta; - char *mbuf_off_exact; char *mbuf_off_mask; + char *mbuf_off_exact; + struct nfp_fl_rule_metadata *nfp_flow_meta; /* * Convert to long words as firmware expects @@ -974,9 +974,9 @@ nfp_flow_key_layers_calculate_actions(const struct rte_flow_action actions[], int ret = 0; bool meter_flag = false; bool tc_hl_flag = false; - bool mac_set_flag = false; bool ip_set_flag = false; bool tp_set_flag = false; + bool mac_set_flag = false; bool ttl_tos_flag = false; const struct rte_flow_action *action; @@ -3201,11 +3201,11 @@ nfp_flow_action_geneve_encap_v4(struct nfp_app_fw_flower *app_fw_flower, { uint64_t tun_id; const struct rte_ether_hdr *eth; + 
struct nfp_fl_act_pre_tun *pre_tun; + struct nfp_fl_act_set_tun *set_tun; const struct rte_flow_item_udp *udp; const struct rte_flow_item_ipv4 *ipv4; const struct rte_flow_item_geneve *geneve; - struct nfp_fl_act_pre_tun *pre_tun; - struct nfp_fl_act_set_tun *set_tun; size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun); size_t act_set_size = sizeof(struct nfp_fl_act_set_tun); @@ -3241,11 +3241,11 @@ nfp_flow_action_geneve_encap_v6(struct nfp_app_fw_flower *app_fw_flower, uint8_t tos; uint64_t tun_id; const struct rte_ether_hdr *eth; + struct nfp_fl_act_pre_tun *pre_tun; + struct nfp_fl_act_set_tun *set_tun; const struct rte_flow_item_udp *udp; const struct rte_flow_item_ipv6 *ipv6; const struct rte_flow_item_geneve *geneve; - struct nfp_fl_act_pre_tun *pre_tun; - struct nfp_fl_act_set_tun *set_tun; size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun); size_t act_set_size = sizeof(struct nfp_fl_act_set_tun); @@ -3281,10 +3281,10 @@ nfp_flow_action_nvgre_encap_v4(struct nfp_app_fw_flower *app_fw_flower, { uint64_t tun_id; const struct rte_ether_hdr *eth; - const struct rte_flow_item_ipv4 *ipv4; - const struct rte_flow_item_gre *gre; struct nfp_fl_act_pre_tun *pre_tun; struct nfp_fl_act_set_tun *set_tun; + const struct rte_flow_item_gre *gre; + const struct rte_flow_item_ipv4 *ipv4; size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun); size_t act_set_size = sizeof(struct nfp_fl_act_set_tun); @@ -3319,10 +3319,10 @@ nfp_flow_action_nvgre_encap_v6(struct nfp_app_fw_flower *app_fw_flower, uint8_t tos; uint64_t tun_id; const struct rte_ether_hdr *eth; - const struct rte_flow_item_ipv6 *ipv6; - const struct rte_flow_item_gre *gre; struct nfp_fl_act_pre_tun *pre_tun; struct nfp_fl_act_set_tun *set_tun; + const struct rte_flow_item_gre *gre; + const struct rte_flow_item_ipv6 *ipv6; size_t act_pre_size = sizeof(struct nfp_fl_act_pre_tun); size_t act_set_size = sizeof(struct nfp_fl_act_set_tun); @@ -3467,12 +3467,12 @@ nfp_flow_compile_action(struct 
nfp_flower_representor *representor, uint32_t count; char *position; char *action_data; - bool ttl_tos_flag = false; - bool tc_hl_flag = false; bool drop_flag = false; + bool tc_hl_flag = false; bool ip_set_flag = false; bool tp_set_flag = false; bool mac_set_flag = false; + bool ttl_tos_flag = false; uint32_t total_actions = 0; const struct rte_flow_action *action; struct nfp_flower_meta_tci *meta_tci; @@ -4283,10 +4283,10 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev) size_t stats_size; uint64_t ctx_count; uint64_t ctx_split; + struct nfp_flow_priv *priv; char mask_name[RTE_HASH_NAMESIZE]; char flow_name[RTE_HASH_NAMESIZE]; char pretun_name[RTE_HASH_NAMESIZE]; - struct nfp_flow_priv *priv; struct nfp_app_fw_flower *app_fw_flower; const char *pci_name = strchr(pf_dev->pci_dev->name, ':') + 1; diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c index 8cbb9b74a2..db6122eac3 100644 --- a/drivers/net/nfp/nfp_rxtx.c +++ b/drivers/net/nfp/nfp_rxtx.c @@ -188,9 +188,9 @@ nfp_net_rx_cksum(struct nfp_net_rxq *rxq, static int nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq) { - struct nfp_net_dp_buf *rxe = rxq->rxbufs; - uint64_t dma_addr; uint16_t i; + uint64_t dma_addr; + struct nfp_net_dp_buf *rxe = rxq->rxbufs; PMD_RX_LOG(DEBUG, "Fill Rx Freelist for %u descriptors", rxq->rx_count); @@ -241,17 +241,15 @@ nfp_net_rx_freelist_setup(struct rte_eth_dev *dev) uint32_t nfp_net_rx_queue_count(void *rx_queue) { + uint32_t idx; + uint32_t count = 0; struct nfp_net_rxq *rxq; struct nfp_net_rx_desc *rxds; - uint32_t idx; - uint32_t count; rxq = rx_queue; idx = rxq->rd_p; - count = 0; - /* * Other PMDs are just checking the DD bit in intervals of 4 * descriptors and counting all four if the first has the DD @@ -282,9 +280,9 @@ nfp_net_parse_chained_meta(uint8_t *meta_base, rte_be32_t meta_header, struct nfp_meta_parsed *meta) { - uint8_t *meta_offset; uint32_t meta_info; uint32_t vlan_info; + uint8_t *meta_offset; meta_info = rte_be_to_cpu_32(meta_header); 
meta_offset = meta_base + 4; @@ -683,15 +681,15 @@ nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) { - struct nfp_net_rxq *rxq; - struct nfp_net_rx_desc *rxds; - struct nfp_net_dp_buf *rxb; - struct nfp_net_hw *hw; + uint64_t dma_addr; + uint16_t avail = 0; struct rte_mbuf *mb; + uint16_t nb_hold = 0; + struct nfp_net_hw *hw; struct rte_mbuf *new_mb; - uint16_t nb_hold; - uint64_t dma_addr; - uint16_t avail; + struct nfp_net_rxq *rxq; + struct nfp_net_dp_buf *rxb; + struct nfp_net_rx_desc *rxds; uint16_t avail_multiplexed = 0; rxq = rx_queue; @@ -706,8 +704,6 @@ nfp_net_recv_pkts(void *rx_queue, hw = rxq->hw; - avail = 0; - nb_hold = 0; while (avail + avail_multiplexed < nb_pkts) { rxb = &rxq->rxbufs[rxq->rd_p]; if (unlikely(rxb == NULL)) { @@ -883,12 +879,12 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mp) { + uint32_t rx_desc_sz; uint16_t min_rx_desc; uint16_t max_rx_desc; - const struct rte_memzone *tz; - struct nfp_net_rxq *rxq; struct nfp_net_hw *hw; - uint32_t rx_desc_sz; + struct nfp_net_rxq *rxq; + const struct rte_memzone *tz; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); @@ -995,8 +991,8 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, uint32_t nfp_net_tx_free_bufs(struct nfp_net_txq *txq) { - uint32_t qcp_rd_p; uint32_t todo; + uint32_t qcp_rd_p; PMD_TX_LOG(DEBUG, "queue %hu. 
Check for descriptor with a complete" " status", txq->qidx); @@ -1072,8 +1068,8 @@ nfp_net_set_meta_vlan(struct nfp_net_meta_raw *meta_data, struct rte_mbuf *pkt, uint8_t layer) { - uint16_t vlan_tci; uint16_t tpid; + uint16_t vlan_tci; tpid = RTE_ETHER_TYPE_VLAN; vlan_tci = pkt->vlan_tci; From patchwork Thu Oct 12 01:26:58 2023 X-Patchwork-Submitter: Chaoyong He X-Patchwork-Id: 132563 X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Chaoyong He To: dev@dpdk.org Cc: oss-drivers@corigine.com, Chaoyong He, Long Wu, Peng Zhang Subject: [PATCH v2 05/11] net/nfp: adjust the log statement Date: Thu, 12 Oct 2023 09:26:58 +0800 Message-Id: <20231012012704.483828-6-chaoyong.he@corigine.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20231012012704.483828-1-chaoyong.he@corigine.com> References: <20231007023339.1546659-1-chaoyong.he@corigine.com>
<20231012012704.483828-1-chaoyong.he@corigine.com>
X-BeenThere: dev@dpdk.org List-Id: DPDK patches and discussions Add log statements to the important control logic, and remove verbose info log statements. Signed-off-by: Chaoyong He Reviewed-by: Long Wu Reviewed-by: Peng Zhang --- drivers/net/nfp/flower/nfp_flower_ctrl.c | 17 +++--- .../net/nfp/flower/nfp_flower_representor.c | 4 +- drivers/net/nfp/nfd3/nfp_nfd3_dp.c | 2 - drivers/net/nfp/nfdk/nfp_nfdk_dp.c | 2 - drivers/net/nfp/nfp_common.c | 59 ++++++++----------- drivers/net/nfp/nfp_cpp_bridge.c | 28 ++++----- drivers/net/nfp/nfp_ethdev.c | 21 +------ drivers/net/nfp/nfp_ethdev_vf.c | 17 +----- drivers/net/nfp/nfp_logs.h | 1 - drivers/net/nfp/nfp_rxtx.c | 17 ++---- 10 files changed, 58 insertions(+), 110 deletions(-) diff --git a/drivers/net/nfp/flower/nfp_flower_ctrl.c b/drivers/net/nfp/flower/nfp_flower_ctrl.c index 4967cc2375..1f4c5fd7f9 100644 --- a/drivers/net/nfp/flower/nfp_flower_ctrl.c +++ b/drivers/net/nfp/flower/nfp_flower_ctrl.c @@ -88,15 +88,14 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue, * responsibility of avoiding it.
But we have * to give some info about the error */ - PMD_RX_LOG(ERR, - "mbuf overflow likely due to the RX offset.\n" - "\t\tYour mbuf size should have extra space for" - " RX offset=%u bytes.\n" - "\t\tCurrently you just have %u bytes available" - " but the received packet is %u bytes long", - hw->rx_offset, - rxq->mbuf_size - hw->rx_offset, - mb->data_len); + PMD_RX_LOG(ERR, "mbuf overflow likely due to the RX offset.\n" + "\t\tYour mbuf size should have extra space for" + " RX offset=%u bytes.\n" + "\t\tCurrently you just have %u bytes available" + " but the received packet is %u bytes long", + hw->rx_offset, + rxq->mbuf_size - hw->rx_offset, + mb->data_len); rte_pktmbuf_free(mb); break; } diff --git a/drivers/net/nfp/flower/nfp_flower_representor.c b/drivers/net/nfp/flower/nfp_flower_representor.c index 01c2c5a517..be0dfb2890 100644 --- a/drivers/net/nfp/flower/nfp_flower_representor.c +++ b/drivers/net/nfp/flower/nfp_flower_representor.c @@ -464,7 +464,7 @@ nfp_flower_repr_rx_burst(void *rx_queue, total_dequeue = rte_ring_dequeue_burst(repr->ring, (void *)rx_pkts, nb_pkts, &available); if (total_dequeue != 0) { - PMD_RX_LOG(DEBUG, "Representor Rx burst for %s, port_id: 0x%x, " + PMD_RX_LOG(DEBUG, "Representor Rx burst for %s, port_id: %#x, " "received: %u, available: %u", repr->name, repr->port_id, total_dequeue, available); @@ -510,7 +510,7 @@ nfp_flower_repr_tx_burst(void *tx_queue, pf_tx_queue = dev->data->tx_queues[0]; sent = nfp_flower_pf_xmit_pkts(pf_tx_queue, tx_pkts, nb_pkts); if (sent != 0) { - PMD_TX_LOG(DEBUG, "Representor Tx burst for %s, port_id: 0x%x transmitted: %u", + PMD_TX_LOG(DEBUG, "Representor Tx burst for %s, port_id: %#x transmitted: %hu", repr->name, repr->port_id, sent); repr->repr_stats.opackets += sent; } diff --git a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c index 699f65ebef..51755f4324 100644 --- a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c +++ b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c @@ -381,8 +381,6 @@ 
nfp_net_nfd3_tx_queue_setup(struct rte_eth_dev *dev, hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - PMD_INIT_FUNC_TRACE(); - nfp_net_tx_desc_limits(hw, &min_tx_desc, &max_tx_desc); /* Validating number of descriptors */ diff --git a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c index 2426ffb261..dae87ac6df 100644 --- a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c +++ b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c @@ -455,8 +455,6 @@ nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev, hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - PMD_INIT_FUNC_TRACE(); - nfp_net_tx_desc_limits(hw, &min_tx_desc, &max_tx_desc); /* Validating number of descriptors */ diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c index 18291a1cde..f48e1930dc 100644 --- a/drivers/net/nfp/nfp_common.c +++ b/drivers/net/nfp/nfp_common.c @@ -207,7 +207,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw, hw->qcp_cfg); if (hw->qcp_cfg == NULL) { - PMD_INIT_LOG(ERR, "Bad configuration queue pointer"); + PMD_DRV_LOG(ERR, "Bad configuration queue pointer"); return -ENXIO; } @@ -224,15 +224,15 @@ __nfp_net_reconfig(struct nfp_net_hw *hw, if (new == 0) break; if ((new & NFP_NET_CFG_UPDATE_ERR) != 0) { - PMD_INIT_LOG(ERR, "Reconfig error: 0x%08x", new); + PMD_DRV_LOG(ERR, "Reconfig error: %#08x", new); return -1; } if (cnt >= NFP_NET_POLL_TIMEOUT) { - PMD_INIT_LOG(ERR, "Reconfig timeout for 0x%08x after" - " %ums", update, cnt); + PMD_DRV_LOG(ERR, "Reconfig timeout for %#08x after %u ms", + update, cnt); return -EIO; } - nanosleep(&wait, 0); /* waiting for a 1ms */ + nanosleep(&wait, 0); /* Waiting for a 1ms */ } PMD_DRV_LOG(DEBUG, "Ack DONE"); return 0; @@ -390,8 +390,6 @@ nfp_net_configure(struct rte_eth_dev *dev) * called after that internal process */ - PMD_INIT_LOG(DEBUG, "Configure"); - dev_conf = &dev->data->dev_conf; rxmode = &dev_conf->rxmode; txmode = &dev_conf->txmode; @@ -401,20 +399,20 @@ nfp_net_configure(struct rte_eth_dev *dev) /* Checking TX mode 
*/ if (txmode->mq_mode != RTE_ETH_MQ_TX_NONE) { - PMD_INIT_LOG(INFO, "TX mq_mode DCB and VMDq not supported"); + PMD_DRV_LOG(ERR, "TX mq_mode DCB and VMDq not supported"); return -EINVAL; } /* Checking RX mode */ if ((rxmode->mq_mode & RTE_ETH_MQ_RX_RSS_FLAG) != 0 && (hw->cap & NFP_NET_CFG_CTRL_RSS_ANY) == 0) { - PMD_INIT_LOG(INFO, "RSS not supported"); + PMD_DRV_LOG(ERR, "RSS not supported"); return -EINVAL; } /* Checking MTU set */ if (rxmode->mtu > NFP_FRAME_SIZE_MAX) { - PMD_INIT_LOG(ERR, "MTU (%u) larger than NFP_FRAME_SIZE_MAX (%u) not supported", + PMD_DRV_LOG(ERR, "MTU (%u) larger than NFP_FRAME_SIZE_MAX (%u)", rxmode->mtu, NFP_FRAME_SIZE_MAX); return -ERANGE; } @@ -552,8 +550,7 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 && (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) == 0) { - PMD_INIT_LOG(INFO, "MAC address unable to change when" - " port enabled"); + PMD_DRV_LOG(ERR, "MAC address unable to change when port enabled"); return -EBUSY; } @@ -567,7 +564,7 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0) ctrl |= NFP_NET_CFG_CTRL_LIVE_ADDR; if (nfp_net_reconfig(hw, ctrl, update) != 0) { - PMD_INIT_LOG(INFO, "MAC address update failed"); + PMD_DRV_LOG(ERR, "MAC address update failed"); return -EIO; } return 0; @@ -582,21 +579,21 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev, if (rte_intr_vec_list_alloc(intr_handle, "intr_vec", dev->data->nb_rx_queues) != 0) { - PMD_INIT_LOG(ERR, "Failed to allocate %d rx_queues" - " intr_vec", dev->data->nb_rx_queues); + PMD_DRV_LOG(ERR, "Failed to allocate %d rx_queues intr_vec", + dev->data->nb_rx_queues); return -ENOMEM; } hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) { - PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with UIO"); + PMD_DRV_LOG(INFO, "VF: enabling RX interrupt with UIO"); /* UIO just 
supports one queue and no LSC*/ nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(0), 0); if (rte_intr_vec_list_index_set(intr_handle, 0, 0) != 0) return -1; } else { - PMD_INIT_LOG(INFO, "VF: enabling RX interrupt with VFIO"); + PMD_DRV_LOG(INFO, "VF: enabling RX interrupt with VFIO"); for (i = 0; i < dev->data->nb_rx_queues; i++) { /* * The first msix vector is reserved for non @@ -605,8 +602,6 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev, nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(i), i + 1); if (rte_intr_vec_list_index_set(intr_handle, i, i + 1) != 0) return -1; - PMD_INIT_LOG(DEBUG, "intr_vec[%d]= %d", i, - rte_intr_vec_list_index_get(intr_handle, i)); } } @@ -691,8 +686,6 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev) struct nfp_net_hw *hw; struct nfp_flower_representor *repr; - PMD_DRV_LOG(DEBUG, "Promiscuous mode enable"); - if ((dev->data->dev_flags & RTE_ETH_DEV_REPRESENTOR) != 0) { repr = dev->data->dev_private; hw = repr->app_fw_flower->pf_hw; @@ -701,7 +694,7 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev) } if ((hw->cap & NFP_NET_CFG_CTRL_PROMISC) == 0) { - PMD_INIT_LOG(INFO, "Promiscuous mode not supported"); + PMD_DRV_LOG(ERR, "Promiscuous mode not supported"); return -ENOTSUP; } @@ -774,9 +767,6 @@ nfp_net_link_update(struct rte_eth_dev *dev, struct rte_eth_link link; struct nfp_eth_table *nfp_eth_table; - - PMD_DRV_LOG(DEBUG, "Link update"); - hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); /* Read link status */ @@ -1636,9 +1626,9 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev, hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); if (reta_size != NFP_NET_CFG_RSS_ITBL_SZ) { - PMD_DRV_LOG(ERR, "The size of hash lookup table configured " - "(%d) doesn't match the number hardware can supported " - "(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ); + PMD_DRV_LOG(ERR, "The size of hash lookup table configured (%hu)" + " doesn't match hardware can supported (%d)", + reta_size, NFP_NET_CFG_RSS_ITBL_SZ); return -EINVAL; } @@ -1719,9 +1709,9 @@ 
nfp_net_reta_query(struct rte_eth_dev *dev, return -EINVAL; if (reta_size != NFP_NET_CFG_RSS_ITBL_SZ) { - PMD_DRV_LOG(ERR, "The size of hash lookup table configured " - "(%d) doesn't match the number hardware can supported " - "(%d)", reta_size, NFP_NET_CFG_RSS_ITBL_SZ); + PMD_DRV_LOG(ERR, "The size of hash lookup table configured (%d)" + " doesn't match hardware can supported (%d)", + reta_size, NFP_NET_CFG_RSS_ITBL_SZ); return -EINVAL; } @@ -1827,7 +1817,7 @@ nfp_net_rss_hash_update(struct rte_eth_dev *dev, } if (rss_conf->rss_key_len > NFP_NET_CFG_RSS_KEY_SZ) { - PMD_DRV_LOG(ERR, "hash key too long"); + PMD_DRV_LOG(ERR, "RSS hash key too long"); return -EINVAL; } @@ -1910,9 +1900,6 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev) uint16_t rx_queues = dev->data->nb_rx_queues; struct rte_eth_rss_reta_entry64 nfp_reta_conf[2]; - PMD_DRV_LOG(INFO, "setting default RSS conf for %u queues", - rx_queues); - nfp_reta_conf[0].mask = ~0x0; nfp_reta_conf[1].mask = ~0x0; @@ -1929,7 +1916,7 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev) dev_conf = &dev->data->dev_conf; if (dev_conf == NULL) { - PMD_DRV_LOG(INFO, "wrong rss conf"); + PMD_DRV_LOG(ERR, "Wrong rss conf"); return -EINVAL; } rss_conf = dev_conf->rx_adv_conf.rss_conf; diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c index 727ec7a7b2..222cfdcbc3 100644 --- a/drivers/net/nfp/nfp_cpp_bridge.c +++ b/drivers/net/nfp/nfp_cpp_bridge.c @@ -130,7 +130,7 @@ nfp_cpp_bridge_serve_write(int sockfd, uint32_t tmpbuf[16]; struct nfp_cpp_area *area; - PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__, + PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu", __func__, sizeof(off_t), sizeof(size_t)); /* Reading the count param */ @@ -149,9 +149,9 @@ nfp_cpp_bridge_serve_write(int sockfd, cpp_id = (offset >> 40) << 8; nfp_offset = offset & ((1ull << 40) - 1); - PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd\n", __func__, count, + PMD_CPP_LOG(DEBUG, "%s: count %zu 
and offset %jd", __func__, count, offset); - PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd\n", __func__, + PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd", __func__, cpp_id, nfp_offset); /* Adjust length if not aligned */ @@ -162,7 +162,7 @@ nfp_cpp_bridge_serve_write(int sockfd, } while (count > 0) { - /* configure a CPP PCIe2CPP BAR for mapping the CPP target */ + /* Configure a CPP PCIe2CPP BAR for mapping the CPP target */ area = nfp_cpp_area_alloc_with_name(cpp, cpp_id, "nfp.cdev", nfp_offset, curlen); if (area == NULL) { @@ -170,7 +170,7 @@ nfp_cpp_bridge_serve_write(int sockfd, return -EIO; } - /* mapping the target */ + /* Mapping the target */ err = nfp_cpp_area_acquire(area); if (err < 0) { PMD_CPP_LOG(ERR, "area acquire failed"); @@ -183,7 +183,7 @@ nfp_cpp_bridge_serve_write(int sockfd, if (len > sizeof(tmpbuf)) len = sizeof(tmpbuf); - PMD_CPP_LOG(DEBUG, "%s: Receive %u of %zu\n", __func__, + PMD_CPP_LOG(DEBUG, "%s: Receive %u of %zu", __func__, len, count); err = recv(sockfd, tmpbuf, len, MSG_WAITALL); if (err != (int)len) { @@ -235,7 +235,7 @@ nfp_cpp_bridge_serve_read(int sockfd, uint32_t tmpbuf[16]; struct nfp_cpp_area *area; - PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu\n", __func__, + PMD_CPP_LOG(DEBUG, "%s: offset size %zu, count_size: %zu", __func__, sizeof(off_t), sizeof(size_t)); /* Reading the count param */ @@ -254,9 +254,9 @@ nfp_cpp_bridge_serve_read(int sockfd, cpp_id = (offset >> 40) << 8; nfp_offset = offset & ((1ull << 40) - 1); - PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd\n", __func__, count, + PMD_CPP_LOG(DEBUG, "%s: count %zu and offset %jd", __func__, count, offset); - PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd\n", __func__, + PMD_CPP_LOG(DEBUG, "%s: cpp_id %08x and nfp_offset %jd", __func__, cpp_id, nfp_offset); /* Adjust length if not aligned */ @@ -293,7 +293,7 @@ nfp_cpp_bridge_serve_read(int sockfd, nfp_cpp_area_free(area); return -EIO; } - PMD_CPP_LOG(DEBUG, "%s: sending %u of 
%zu\n", __func__, + PMD_CPP_LOG(DEBUG, "%s: sending %u of %zu", __func__, len, count); err = send(sockfd, tmpbuf, len, 0); @@ -353,7 +353,7 @@ nfp_cpp_bridge_serve_ioctl(int sockfd, tmp = nfp_cpp_model(cpp); - PMD_CPP_LOG(DEBUG, "%s: sending NFP model %08x\n", __func__, tmp); + PMD_CPP_LOG(DEBUG, "%s: sending NFP model %08x", __func__, tmp); err = send(sockfd, &tmp, 4, 0); if (err != 4) { @@ -363,7 +363,7 @@ nfp_cpp_bridge_serve_ioctl(int sockfd, tmp = nfp_cpp_interface(cpp); - PMD_CPP_LOG(DEBUG, "%s: sending NFP interface %08x\n", __func__, tmp); + PMD_CPP_LOG(DEBUG, "%s: sending NFP interface %08x", __func__, tmp); err = send(sockfd, &tmp, 4, 0); if (err != 4) { @@ -440,11 +440,11 @@ nfp_cpp_bridge_service_func(void *args) while (1) { ret = recv(datafd, &op, 4, 0); if (ret <= 0) { - PMD_CPP_LOG(DEBUG, "%s: socket close\n", __func__); + PMD_CPP_LOG(DEBUG, "%s: socket close", __func__); break; } - PMD_CPP_LOG(DEBUG, "%s: getting op %u\n", __func__, op); + PMD_CPP_LOG(DEBUG, "%s: getting op %u", __func__, op); if (op == NFP_BRIDGE_OP_READ) nfp_cpp_bridge_serve_read(datafd, cpp); diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c index 7d149decfb..72abc4c16e 100644 --- a/drivers/net/nfp/nfp_ethdev.c +++ b/drivers/net/nfp/nfp_ethdev.c @@ -60,8 +60,6 @@ nfp_net_start(struct rte_eth_dev *dev) pf_dev = NFP_NET_DEV_PRIVATE_TO_PF(dev->data->dev_private); app_fw_nic = NFP_PRIV_TO_APP_FW_NIC(pf_dev->app_fw_priv); - PMD_INIT_LOG(DEBUG, "Start"); - /* Disabling queues just in case... 
*/ nfp_net_disable_queues(dev); @@ -194,8 +192,6 @@ nfp_net_stop(struct rte_eth_dev *dev) { struct nfp_net_hw *hw; - PMD_INIT_LOG(DEBUG, "Stop"); - hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); nfp_net_disable_queues(dev); @@ -220,8 +216,6 @@ nfp_net_set_link_up(struct rte_eth_dev *dev) { struct nfp_net_hw *hw; - PMD_DRV_LOG(DEBUG, "Set link up"); - hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); if (rte_eal_process_type() == RTE_PROC_PRIMARY) @@ -237,8 +231,6 @@ nfp_net_set_link_down(struct rte_eth_dev *dev) { struct nfp_net_hw *hw; - PMD_DRV_LOG(DEBUG, "Set link down"); - hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); if (rte_eal_process_type() == RTE_PROC_PRIMARY) @@ -261,8 +253,6 @@ nfp_net_close(struct rte_eth_dev *dev) if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; - PMD_INIT_LOG(DEBUG, "Close"); - pf_dev = NFP_NET_DEV_PRIVATE_TO_PF(dev->data->dev_private); hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); pci_dev = RTE_ETH_DEV_TO_PCI(dev); @@ -491,8 +481,6 @@ nfp_net_init(struct rte_eth_dev *eth_dev) struct nfp_app_fw_nic *app_fw_nic; struct rte_ether_addr *tmp_ether_addr; - PMD_INIT_FUNC_TRACE(); - pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); /* Use backpointer here to the PF of this eth_dev */ @@ -513,7 +501,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev) */ hw = app_fw_nic->ports[port]; - PMD_INIT_LOG(DEBUG, "Working with physical port number: %d, " + PMD_INIT_LOG(DEBUG, "Working with physical port number: %hu, " "NFP internal port number: %d", port, hw->nfp_idx); rte_eth_copy_pci_info(eth_dev, pci_dev); @@ -579,9 +567,6 @@ nfp_net_init(struct rte_eth_dev *eth_dev) tx_base = nn_cfg_readl(hw, NFP_NET_CFG_START_TXQ); rx_base = nn_cfg_readl(hw, NFP_NET_CFG_START_RXQ); - PMD_INIT_LOG(DEBUG, "tx_base: 0x%" PRIx64 "", tx_base); - PMD_INIT_LOG(DEBUG, "rx_base: 0x%" PRIx64 "", rx_base); - hw->tx_bar = pf_dev->qc_bar + tx_base * NFP_QCP_QUEUE_ADDR_SZ; hw->rx_bar = pf_dev->qc_bar + rx_base * NFP_QCP_QUEUE_ADDR_SZ; 
eth_dev->data->dev_private = hw; @@ -627,7 +612,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev) eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS; - PMD_INIT_LOG(INFO, "port %d VendorID=0x%x DeviceID=0x%x " + PMD_INIT_LOG(INFO, "port %d VendorID=%#x DeviceID=%#x " "mac=" RTE_ETHER_ADDR_PRT_FMT, eth_dev->data->port_id, pci_dev->id.vendor_id, pci_dev->id.device_id, @@ -997,7 +982,7 @@ nfp_pf_init(struct rte_pci_device *pci_dev) goto pf_cleanup; } - PMD_INIT_LOG(DEBUG, "qc_bar address: 0x%p", pf_dev->qc_bar); + PMD_INIT_LOG(DEBUG, "qc_bar address: %p", pf_dev->qc_bar); /* * PF initialization has been done at this point. Call app specific diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c index aaef6ea91a..d3c3c9e953 100644 --- a/drivers/net/nfp/nfp_ethdev_vf.c +++ b/drivers/net/nfp/nfp_ethdev_vf.c @@ -41,8 +41,6 @@ nfp_netvf_start(struct rte_eth_dev *dev) hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - PMD_INIT_LOG(DEBUG, "Start"); - /* Disabling queues just in case... 
*/ nfp_net_disable_queues(dev); @@ -136,8 +134,6 @@ nfp_netvf_start(struct rte_eth_dev *dev) static int nfp_netvf_stop(struct rte_eth_dev *dev) { - PMD_INIT_LOG(DEBUG, "Stop"); - nfp_net_disable_queues(dev); /* Clear queues */ @@ -170,8 +166,6 @@ nfp_netvf_close(struct rte_eth_dev *dev) if (rte_eal_process_type() != RTE_PROC_PRIMARY) return 0; - PMD_INIT_LOG(DEBUG, "Close"); - pci_dev = RTE_ETH_DEV_TO_PCI(dev); /* @@ -265,8 +259,6 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) const struct nfp_dev_info *dev_info; struct rte_ether_addr *tmp_ether_addr; - PMD_INIT_FUNC_TRACE(); - pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); dev_info = nfp_dev_info_get(pci_dev->id.device_id); @@ -301,7 +293,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) hw->eth_xstats_base = rte_malloc("rte_eth_xstat", sizeof(struct rte_eth_xstat) * nfp_net_xstats_size(eth_dev), 0); if (hw->eth_xstats_base == NULL) { - PMD_INIT_LOG(ERR, "no memory for xstats base values on device %s!", + PMD_INIT_LOG(ERR, "No memory for xstats base values on device %s!", pci_dev->device.name); return -ENOMEM; } @@ -312,9 +304,6 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) start_q = nn_cfg_readl(hw, NFP_NET_CFG_START_RXQ); rx_bar_off = nfp_qcp_queue_offset(dev_info, start_q); - PMD_INIT_LOG(DEBUG, "tx_bar_off: 0x%" PRIx64 "", tx_bar_off); - PMD_INIT_LOG(DEBUG, "rx_bar_off: 0x%" PRIx64 "", rx_bar_off); - hw->tx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + tx_bar_off; hw->rx_bar = (uint8_t *)pci_dev->mem_resource[2].addr + rx_bar_off; @@ -345,7 +334,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev) tmp_ether_addr = &hw->mac_addr; if (rte_is_valid_assigned_ether_addr(tmp_ether_addr) == 0) { - PMD_INIT_LOG(INFO, "Using random mac address for port %d", port); + PMD_INIT_LOG(INFO, "Using random mac address for port %hu", port); /* Using random mac addresses for VFs */ rte_eth_random_addr(&hw->mac_addr.addr_bytes[0]); nfp_net_write_mac(hw, &hw->mac_addr.addr_bytes[0]); @@ -359,7 +348,7 @@ nfp_netvf_init(struct rte_eth_dev 
*eth_dev) eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS; - PMD_INIT_LOG(INFO, "port %d VendorID=0x%x DeviceID=0x%x " + PMD_INIT_LOG(INFO, "port %hu VendorID=%#x DeviceID=%#x " "mac=" RTE_ETHER_ADDR_PRT_FMT, eth_dev->data->port_id, pci_dev->id.vendor_id, pci_dev->id.device_id, diff --git a/drivers/net/nfp/nfp_logs.h b/drivers/net/nfp/nfp_logs.h index 315a57811c..16ff61700b 100644 --- a/drivers/net/nfp/nfp_logs.h +++ b/drivers/net/nfp/nfp_logs.h @@ -12,7 +12,6 @@ extern int nfp_logtype_init; #define PMD_INIT_LOG(level, fmt, args...) \ rte_log(RTE_LOG_ ## level, nfp_logtype_init, \ "%s(): " fmt "\n", __func__, ## args) -#define PMD_INIT_FUNC_TRACE() PMD_INIT_LOG(DEBUG, " >>") #ifdef RTE_ETHDEV_DEBUG_RX extern int nfp_logtype_rx; diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c index db6122eac3..5bfdfd28b3 100644 --- a/drivers/net/nfp/nfp_rxtx.c +++ b/drivers/net/nfp/nfp_rxtx.c @@ -192,7 +192,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq) uint64_t dma_addr; struct nfp_net_dp_buf *rxe = rxq->rxbufs; - PMD_RX_LOG(DEBUG, "Fill Rx Freelist for %u descriptors", + PMD_RX_LOG(DEBUG, "Fill Rx Freelist for %hu descriptors", rxq->rx_count); for (i = 0; i < rxq->rx_count; i++) { @@ -212,14 +212,13 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq) rxd->fld.dma_addr_hi = (dma_addr >> 32) & 0xffff; rxd->fld.dma_addr_lo = dma_addr & 0xffffffff; rxe[i].mbuf = mbuf; - PMD_RX_LOG(DEBUG, "[%d]: %" PRIx64, i, dma_addr); } /* Make sure all writes are flushed before telling the hardware */ rte_wmb(); /* Not advertising the whole ring as the firmware gets confused if so */ - PMD_RX_LOG(DEBUG, "Increment FL write pointer in %u", rxq->rx_count - 1); + PMD_RX_LOG(DEBUG, "Increment FL write pointer in %hu", rxq->rx_count - 1); nfp_qcp_ptr_add(rxq->qcp_fl, NFP_QCP_WRITE_PTR, rxq->rx_count - 1); @@ -432,7 +431,7 @@ nfp_net_parse_meta_qinq(const struct nfp_meta_parsed *meta, if (meta->vlan[0].offload == 0) mb->vlan_tci = 
rte_cpu_to_le_16(meta->vlan[0].tci); mb->vlan_tci_outer = rte_cpu_to_le_16(meta->vlan[1].tci); - PMD_RX_LOG(DEBUG, "Received outer vlan is %u inter vlan is %u", + PMD_RX_LOG(DEBUG, "Received outer vlan TCI is %u inner vlan TCI is %u", mb->vlan_tci_outer, mb->vlan_tci); mb->ol_flags |= RTE_MBUF_F_RX_QINQ | RTE_MBUF_F_RX_QINQ_STRIPPED; } @@ -754,12 +753,11 @@ nfp_net_recv_pkts(void *rx_queue, * responsibility of avoiding it. But we have * to give some info about the error */ - PMD_RX_LOG(ERR, - "mbuf overflow likely due to the RX offset.\n" + PMD_RX_LOG(ERR, "mbuf overflow likely due to the RX offset.\n" "\t\tYour mbuf size should have extra space for" " RX offset=%u bytes.\n" "\t\tCurrently you just have %u bytes available" - " but the received packet is %u bytes long", + " but the received packet is %hu bytes long", hw->rx_offset, rxq->mbuf_size - hw->rx_offset, mb->data_len); @@ -888,8 +886,6 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - PMD_INIT_FUNC_TRACE(); - nfp_net_rx_desc_limits(hw, &min_rx_desc, &max_rx_desc); /* Validating number of descriptors */ @@ -965,9 +961,6 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, return -ENOMEM; } - PMD_RX_LOG(DEBUG, "rxbufs=%p hw_ring=%p dma_addr=0x%" PRIx64, - rxq->rxbufs, rxq->rxds, (unsigned long)rxq->dma); - nfp_net_reset_rx_queue(rxq); rxq->hw = hw; From patchwork Thu Oct 12 01:26:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chaoyong He X-Patchwork-Id: 132565 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 926584236A; Thu, 12 Oct 2023 03:28:48 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id D6E3F40A8B; Thu, 12 Oct 2023 03:28:03 
+0200 (CEST)
From: Chaoyong He 
To: dev@dpdk.org
Cc: oss-drivers@corigine.com, Chaoyong He , Long Wu , Peng Zhang
Subject: [PATCH v2 06/11] net/nfp: standard the comment style
Date: Thu, 12 Oct 2023 09:26:59 +0800
Message-Id: <20231012012704.483828-7-chaoyong.he@corigine.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20231012012704.483828-1-chaoyong.he@corigine.com>
References: <20231007023339.1546659-1-chaoyong.he@corigine.com> <20231012012704.483828-1-chaoyong.he@corigine.com>
MIME-Version: 1.0
Follow the DPDK coding style, use the kdoc comment style. Also delete some comments which are not valid anymore and add some comments to help understand the logic.

Signed-off-by: Chaoyong He 
Reviewed-by: Long Wu 
Reviewed-by: Peng Zhang 
---
 drivers/net/nfp/flower/nfp_conntrack.c | 4 +-
 drivers/net/nfp/flower/nfp_flower.c | 10 +-
 drivers/net/nfp/flower/nfp_flower.h | 28 ++--
 drivers/net/nfp/flower/nfp_flower_cmsg.c | 2 +-
 drivers/net/nfp/flower/nfp_flower_cmsg.h | 56 +++----
 drivers/net/nfp/flower/nfp_flower_ctrl.c | 16 +-
 .../net/nfp/flower/nfp_flower_representor.c | 42 +++--
 .../net/nfp/flower/nfp_flower_representor.h | 2 +-
 drivers/net/nfp/nfd3/nfp_nfd3.h | 33 ++--
 drivers/net/nfp/nfd3/nfp_nfd3_dp.c | 24 ++-
 drivers/net/nfp/nfdk/nfp_nfdk.h | 41 ++---
 drivers/net/nfp/nfdk/nfp_nfdk_dp.c | 8 +-
 drivers/net/nfp/nfp_common.c | 152 ++++++++----------
 drivers/net/nfp/nfp_common.h | 61 +++----
 drivers/net/nfp/nfp_cpp_bridge.c | 6 +-
 drivers/net/nfp/nfp_ctrl.h | 34 ++--
 drivers/net/nfp/nfp_ethdev.c | 40 +++--
 drivers/net/nfp/nfp_ethdev_vf.c | 15 +-
 drivers/net/nfp/nfp_flow.c | 62 +++----
 drivers/net/nfp/nfp_flow.h | 10 +-
 drivers/net/nfp/nfp_ipsec.h | 12 +-
 drivers/net/nfp/nfp_rxtx.c | 125 ++++++--------
 drivers/net/nfp/nfp_rxtx.h | 18 +--
 23 files changed, 354 insertions(+), 447 deletions(-)

diff --git a/drivers/net/nfp/flower/nfp_conntrack.c b/drivers/net/nfp/flower/nfp_conntrack.c
index 7b84b12546..f89003be8b 100644
---
a/drivers/net/nfp/flower/nfp_conntrack.c +++ b/drivers/net/nfp/flower/nfp_conntrack.c @@ -667,8 +667,8 @@ nfp_ct_flow_entry_get(struct nfp_ct_zone_entry *ze, { bool ret; uint8_t loop; - uint8_t item_cnt = 1; /* the RTE_FLOW_ITEM_TYPE_END */ - uint8_t action_cnt = 1; /* the RTE_FLOW_ACTION_TYPE_END */ + uint8_t item_cnt = 1; /* The RTE_FLOW_ITEM_TYPE_END */ + uint8_t action_cnt = 1; /* The RTE_FLOW_ACTION_TYPE_END */ struct nfp_flow_priv *priv; struct nfp_ct_map_entry *me; struct nfp_ct_flow_entry *fe; diff --git a/drivers/net/nfp/flower/nfp_flower.c b/drivers/net/nfp/flower/nfp_flower.c index 7a4e671178..4453ae7b5e 100644 --- a/drivers/net/nfp/flower/nfp_flower.c +++ b/drivers/net/nfp/flower/nfp_flower.c @@ -208,7 +208,7 @@ nfp_flower_pf_close(struct rte_eth_dev *dev) nfp_net_reset_rx_queue(this_rx_q); } - /* Cancel possible impending LSC work here before releasing the port*/ + /* Cancel possible impending LSC work here before releasing the port */ rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler, (void *)dev); nn_cfg_writeb(hw, NFP_NET_CFG_LSC, 0xff); @@ -488,7 +488,7 @@ nfp_flower_init_ctrl_vnic(struct nfp_net_hw *hw) /* * Tracking mbuf size for detecting a potential mbuf overflow due to - * RX offset + * RX offset. */ rxq->mem_pool = mp; rxq->mbuf_size = rxq->mem_pool->elt_size; @@ -535,7 +535,7 @@ nfp_flower_init_ctrl_vnic(struct nfp_net_hw *hw) /* * Telling the HW about the physical address of the RX ring and number - * of descriptors in log2 format + * of descriptors in log2 format. */ nn_cfg_writeq(hw, NFP_NET_CFG_RXR_ADDR(i), rxq->dma); nn_cfg_writeb(hw, NFP_NET_CFG_RXR_SZ(i), rte_log2_u32(CTRL_VNIC_NB_DESC)); @@ -600,7 +600,7 @@ nfp_flower_init_ctrl_vnic(struct nfp_net_hw *hw) /* * Telling the HW about the physical address of the TX ring and number - * of descriptors in log2 format + * of descriptors in log2 format. 
*/ nn_cfg_writeq(hw, NFP_NET_CFG_TXR_ADDR(i), txq->dma); nn_cfg_writeb(hw, NFP_NET_CFG_TXR_SZ(i), rte_log2_u32(CTRL_VNIC_NB_DESC)); @@ -758,7 +758,7 @@ nfp_flower_enable_services(struct nfp_app_fw_flower *app_fw_flower) app_fw_flower->ctrl_vnic_id = service_id; PMD_INIT_LOG(INFO, "%s registered", flower_service.name); - /* Map them to available service cores*/ + /* Map them to available service cores */ ret = nfp_map_service(service_id); if (ret != 0) { PMD_INIT_LOG(ERR, "Could not map %s", flower_service.name); diff --git a/drivers/net/nfp/flower/nfp_flower.h b/drivers/net/nfp/flower/nfp_flower.h index 244b6daa37..0b4e38cedd 100644 --- a/drivers/net/nfp/flower/nfp_flower.h +++ b/drivers/net/nfp/flower/nfp_flower.h @@ -53,49 +53,49 @@ struct nfp_flower_nfd_func { /* The flower application's private structure */ struct nfp_app_fw_flower { - /* switch domain for this app */ + /** Switch domain for this app */ uint16_t switch_domain_id; - /* Number of VF representors */ + /** Number of VF representors */ uint8_t num_vf_reprs; - /* Number of phyport representors */ + /** Number of phyport representors */ uint8_t num_phyport_reprs; - /* Pointer to the PF vNIC */ + /** Pointer to the PF vNIC */ struct nfp_net_hw *pf_hw; - /* Pointer to a mempool for the ctrlvNIC */ + /** Pointer to a mempool for the Ctrl vNIC */ struct rte_mempool *ctrl_pktmbuf_pool; - /* Pointer to the ctrl vNIC */ + /** Pointer to the ctrl vNIC */ struct nfp_net_hw *ctrl_hw; - /* Ctrl vNIC Rx counter */ + /** Ctrl vNIC Rx counter */ uint64_t ctrl_vnic_rx_count; - /* Ctrl vNIC Tx counter */ + /** Ctrl vNIC Tx counter */ uint64_t ctrl_vnic_tx_count; - /* Array of phyport representors */ + /** Array of phyport representors */ struct nfp_flower_representor *phy_reprs[MAX_FLOWER_PHYPORTS]; - /* Array of VF representors */ + /** Array of VF representors */ struct nfp_flower_representor *vf_reprs[MAX_FLOWER_VFS]; - /* PF representor */ + /** PF representor */ struct nfp_flower_representor *pf_repr; - /* 
service id of ctrl vnic service */ + /** Service id of Ctrl vNIC service */ uint32_t ctrl_vnic_id; - /* Flower extra features */ + /** Flower extra features */ uint64_t ext_features; struct nfp_flow_priv *flow_priv; struct nfp_mtr_priv *mtr_priv; - /* Function pointers for different NFD version */ + /** Function pointers for different NFD version */ struct nfp_flower_nfd_func nfd_func; }; diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.c b/drivers/net/nfp/flower/nfp_flower_cmsg.c index 5d6912b079..2ec9498d22 100644 --- a/drivers/net/nfp/flower/nfp_flower_cmsg.c +++ b/drivers/net/nfp/flower/nfp_flower_cmsg.c @@ -230,7 +230,7 @@ nfp_flower_cmsg_flow_add(struct nfp_app_fw_flower *app_fw_flower, return -ENOMEM; } - /* copy the flow to mbuf */ + /* Copy the flow to mbuf */ nfp_flow_meta = flow->payload.meta; msg_len = (nfp_flow_meta->key_len + nfp_flow_meta->mask_len + nfp_flow_meta->act_len) << NFP_FL_LW_SIZ; diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h index 9449760145..cb019171b6 100644 --- a/drivers/net/nfp/flower/nfp_flower_cmsg.h +++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h @@ -348,7 +348,7 @@ struct nfp_flower_stats_frame { rte_be64_t stats_cookie; }; -/** +/* * See RFC 2698 for more details. 
* Word[0](Flag options): * [15] p(pps) 1 for pps, 0 for bps @@ -378,40 +378,24 @@ struct nfp_cfg_head { rte_be32_t profile_id; }; -/** - * Struct nfp_profile_conf - profile config, offload to NIC - * @head: config head information - * @bkt_tkn_p: token bucket peak - * @bkt_tkn_c: token bucket committed - * @pbs: peak burst size - * @cbs: committed burst size - * @pir: peak information rate - * @cir: committed information rate - */ +/* Profile config, offload to NIC */ struct nfp_profile_conf { - struct nfp_cfg_head head; - rte_be32_t bkt_tkn_p; - rte_be32_t bkt_tkn_c; - rte_be32_t pbs; - rte_be32_t cbs; - rte_be32_t pir; - rte_be32_t cir; -}; - -/** - * Struct nfp_mtr_stats_reply - meter stats, read from firmware - * @head: config head information - * @pass_bytes: count of passed bytes - * @pass_pkts: count of passed packets - * @drop_bytes: count of dropped bytes - * @drop_pkts: count of dropped packets - */ + struct nfp_cfg_head head; /**< Config head information */ + rte_be32_t bkt_tkn_p; /**< Token bucket peak */ + rte_be32_t bkt_tkn_c; /**< Token bucket committed */ + rte_be32_t pbs; /**< Peak burst size */ + rte_be32_t cbs; /**< Committed burst size */ + rte_be32_t pir; /**< Peak information rate */ + rte_be32_t cir; /**< Committed information rate */ +}; + +/* Meter stats, read from firmware */ struct nfp_mtr_stats_reply { - struct nfp_cfg_head head; - rte_be64_t pass_bytes; - rte_be64_t pass_pkts; - rte_be64_t drop_bytes; - rte_be64_t drop_pkts; + struct nfp_cfg_head head; /**< Config head information */ + rte_be64_t pass_bytes; /**< Count of passed bytes */ + rte_be64_t pass_pkts; /**< Count of passed packets */ + rte_be64_t drop_bytes; /**< Count of dropped bytes */ + rte_be64_t drop_pkts; /**< Count of dropped packets */ }; enum nfp_flower_cmsg_port_type { @@ -851,7 +835,7 @@ struct nfp_fl_act_set_ipv6_addr { }; /* - * ipv6 tc hl fl + * Ipv6 tc hl fl * 3 2 1 * 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 * 
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ @@ -954,9 +938,9 @@ struct nfp_fl_act_set_tun { uint8_t tos; rte_be16_t outer_vlan_tpid; rte_be16_t outer_vlan_tci; - uint8_t tun_len; /* Only valid for NFP_FL_TUNNEL_GENEVE */ + uint8_t tun_len; /**< Only valid for NFP_FL_TUNNEL_GENEVE */ uint8_t reserved2; - rte_be16_t tun_proto; /* Only valid for NFP_FL_TUNNEL_GENEVE */ + rte_be16_t tun_proto; /**< Only valid for NFP_FL_TUNNEL_GENEVE */ } __rte_packed; /* diff --git a/drivers/net/nfp/flower/nfp_flower_ctrl.c b/drivers/net/nfp/flower/nfp_flower_ctrl.c index 1f4c5fd7f9..15d27f2ac7 100644 --- a/drivers/net/nfp/flower/nfp_flower_ctrl.c +++ b/drivers/net/nfp/flower/nfp_flower_ctrl.c @@ -34,7 +34,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue, if (unlikely(rxq == NULL)) { /* * DPDK just checks the queue is lower than max queues - * enabled. But the queue needs to be configured + * enabled. But the queue needs to be configured. */ PMD_RX_LOG(ERR, "RX Bad queue"); return 0; @@ -60,7 +60,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue, /* * We got a packet. Let's alloc a new mbuf for refilling the - * free descriptor ring as soon as possible + * free descriptor ring as soon as possible. */ new_mb = rte_pktmbuf_alloc(rxq->mem_pool); if (unlikely(new_mb == NULL)) { @@ -72,7 +72,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue, /* * Grab the mbuf and refill the descriptor with the - * previously allocated mbuf + * previously allocated mbuf. */ mb = rxb->mbuf; rxb->mbuf = new_mb; @@ -86,7 +86,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue, /* * This should not happen and the user has the * responsibility of avoiding it. But we have - * to give some info about the error + * to give some info about the error. 
*/ PMD_RX_LOG(ERR, "mbuf overflow likely due to the RX offset.\n" "\t\tYour mbuf size should have extra space for" @@ -123,7 +123,7 @@ nfp_flower_ctrl_vnic_recv(void *rx_queue, nb_hold++; rxq->rd_p++; - if (unlikely(rxq->rd_p == rxq->rx_count)) /* wrapping?*/ + if (unlikely(rxq->rd_p == rxq->rx_count)) /* Wrapping */ rxq->rd_p = 0; } @@ -170,7 +170,7 @@ nfp_flower_ctrl_vnic_nfd3_xmit(struct nfp_app_fw_flower *app_fw_flower, if (unlikely(txq == NULL)) { /* * DPDK just checks the queue is lower than max queues - * enabled. But the queue needs to be configured + * enabled. But the queue needs to be configured. */ PMD_TX_LOG(ERR, "ctrl dev TX Bad queue"); goto xmit_end; @@ -206,7 +206,7 @@ nfp_flower_ctrl_vnic_nfd3_xmit(struct nfp_app_fw_flower *app_fw_flower, txds->offset_eop = FLOWER_PKT_DATA_OFFSET | NFD3_DESC_TX_EOP; txq->wr_p++; - if (unlikely(txq->wr_p == txq->tx_count)) /* wrapping?*/ + if (unlikely(txq->wr_p == txq->tx_count)) /* Wrapping */ txq->wr_p = 0; cnt++; @@ -520,7 +520,7 @@ nfp_flower_ctrl_vnic_poll(struct nfp_app_fw_flower *app_fw_flower) ctrl_hw = app_fw_flower->ctrl_hw; ctrl_eth_dev = ctrl_hw->eth_dev; - /* ctrl vNIC only has a single Rx queue */ + /* Ctrl vNIC only has a single Rx queue */ rxq = ctrl_eth_dev->data->rx_queues[0]; while (rte_service_runstate_get(app_fw_flower->ctrl_vnic_id) != 0) { diff --git a/drivers/net/nfp/flower/nfp_flower_representor.c b/drivers/net/nfp/flower/nfp_flower_representor.c index be0dfb2890..e023a7d8dc 100644 --- a/drivers/net/nfp/flower/nfp_flower_representor.c +++ b/drivers/net/nfp/flower/nfp_flower_representor.c @@ -10,18 +10,12 @@ #include "../nfp_logs.h" #include "../nfp_mtr.h" -/* - * enum nfp_repr_type - type of representor - * @NFP_REPR_TYPE_PHYS_PORT: external NIC port - * @NFP_REPR_TYPE_PF: physical function - * @NFP_REPR_TYPE_VF: virtual function - * @NFP_REPR_TYPE_MAX: number of representor types - */ +/* Type of representor */ enum nfp_repr_type { - NFP_REPR_TYPE_PHYS_PORT, - NFP_REPR_TYPE_PF, - 
NFP_REPR_TYPE_VF, - NFP_REPR_TYPE_MAX, + NFP_REPR_TYPE_PHYS_PORT, /**< External NIC port */ + NFP_REPR_TYPE_PF, /**< Physical function */ + NFP_REPR_TYPE_VF, /**< Virtual function */ + NFP_REPR_TYPE_MAX, /**< Number of representor types */ }; static int @@ -55,7 +49,7 @@ nfp_pf_repr_rx_queue_setup(struct rte_eth_dev *dev, /* * Tracking mbuf size for detecting a potential mbuf overflow due to - * RX offset + * RX offset. */ rxq->mem_pool = mp; rxq->mbuf_size = rxq->mem_pool->elt_size; @@ -86,7 +80,7 @@ nfp_pf_repr_rx_queue_setup(struct rte_eth_dev *dev, rxq->dma = (uint64_t)tz->iova; rxq->rxds = tz->addr; - /* mbuf pointers array for referencing mbufs linked to RX descriptors */ + /* Mbuf pointers array for referencing mbufs linked to RX descriptors */ rxq->rxbufs = rte_zmalloc_socket("rxq->rxbufs", sizeof(*rxq->rxbufs) * nb_desc, RTE_CACHE_LINE_SIZE, socket_id); @@ -101,7 +95,7 @@ nfp_pf_repr_rx_queue_setup(struct rte_eth_dev *dev, /* * Telling the HW about the physical address of the RX ring and number - * of descriptors in log2 format + * of descriptors in log2 format.
*/ nn_cfg_writeq(hw, NFP_NET_CFG_RXR_ADDR(queue_idx), rxq->dma); nn_cfg_writeb(hw, NFP_NET_CFG_RXR_SZ(queue_idx), rte_log2_u32(nb_desc)); @@ -159,7 +153,7 @@ nfp_pf_repr_tx_queue_setup(struct rte_eth_dev *dev, txq->tx_count = nb_desc; txq->tx_free_thresh = tx_free_thresh; - /* queue mapping based on firmware configuration */ + /* Queue mapping based on firmware configuration */ txq->qidx = queue_idx; txq->tx_qcidx = queue_idx * hw->stride_tx; txq->qcp_q = hw->tx_bar + NFP_QCP_QUEUE_OFF(txq->tx_qcidx); @@ -170,7 +164,7 @@ nfp_pf_repr_tx_queue_setup(struct rte_eth_dev *dev, txq->dma = (uint64_t)tz->iova; txq->txds = tz->addr; - /* mbuf pointers array for referencing mbufs linked to TX descriptors */ + /* Mbuf pointers array for referencing mbufs linked to TX descriptors */ txq->txbufs = rte_zmalloc_socket("txq->txbufs", sizeof(*txq->txbufs) * nb_desc, RTE_CACHE_LINE_SIZE, socket_id); @@ -185,7 +179,7 @@ nfp_pf_repr_tx_queue_setup(struct rte_eth_dev *dev, /* * Telling the HW about the physical address of the TX ring and number - * of descriptors in log2 format + * of descriptors in log2 format. 
*/ nn_cfg_writeq(hw, NFP_NET_CFG_TXR_ADDR(queue_idx), txq->dma); nn_cfg_writeb(hw, NFP_NET_CFG_TXR_SZ(queue_idx), rte_log2_u32(nb_desc)); @@ -603,7 +597,7 @@ nfp_flower_pf_repr_init(struct rte_eth_dev *eth_dev, /* Memory has been allocated in the eth_dev_create() function */ repr = eth_dev->data->dev_private; - /* Copy data here from the input representor template*/ + /* Copy data here from the input representor template */ repr->vf_id = init_repr_data->vf_id; repr->switch_domain_id = init_repr_data->switch_domain_id; repr->repr_type = init_repr_data->repr_type; @@ -672,7 +666,7 @@ nfp_flower_repr_init(struct rte_eth_dev *eth_dev, return -ENOMEM; } - /* Copy data here from the input representor template*/ + /* Copy data here from the input representor template */ repr->vf_id = init_repr_data->vf_id; repr->switch_domain_id = init_repr_data->switch_domain_id; repr->port_id = init_repr_data->port_id; @@ -752,7 +746,7 @@ nfp_flower_repr_alloc(struct nfp_app_fw_flower *app_fw_flower) nfp_eth_table = app_fw_flower->pf_hw->pf_dev->nfp_eth_table; eth_dev = app_fw_flower->ctrl_hw->eth_dev; - /* Send a NFP_FLOWER_CMSG_TYPE_MAC_REPR cmsg to hardware*/ + /* Send a NFP_FLOWER_CMSG_TYPE_MAC_REPR cmsg to hardware */ ret = nfp_flower_cmsg_mac_repr(app_fw_flower); if (ret != 0) { PMD_INIT_LOG(ERR, "Cloud not send mac repr cmsgs"); @@ -795,8 +789,8 @@ nfp_flower_repr_alloc(struct nfp_app_fw_flower *app_fw_flower) "%s_repr_p%d", pci_name, i); /* - * Create a eth_dev for this representor - * This will also allocate private memory for the device + * Create an eth_dev for this representor. + * This will also allocate private memory for the device. */ ret = rte_eth_dev_create(eth_dev->device, flower_repr.name, sizeof(struct nfp_flower_representor), @@ -812,7 +806,7 @@ nfp_flower_repr_alloc(struct nfp_app_fw_flower *app_fw_flower) /* * Now allocate eth_dev's for VF representors. - * Also send reify messages + * Also send reify messages.
*/ for (i = 0; i < app_fw_flower->num_vf_reprs; i++) { flower_repr.repr_type = NFP_REPR_TYPE_VF; @@ -826,7 +820,7 @@ nfp_flower_repr_alloc(struct nfp_app_fw_flower *app_fw_flower) snprintf(flower_repr.name, sizeof(flower_repr.name), "%s_repr_vf%d", pci_name, i); - /* This will also allocate private memory for the device*/ + /* This will also allocate private memory for the device */ ret = rte_eth_dev_create(eth_dev->device, flower_repr.name, sizeof(struct nfp_flower_representor), NULL, NULL, nfp_flower_repr_init, &flower_repr); diff --git a/drivers/net/nfp/flower/nfp_flower_representor.h b/drivers/net/nfp/flower/nfp_flower_representor.h index 5ac5e38186..eda19cbb16 100644 --- a/drivers/net/nfp/flower/nfp_flower_representor.h +++ b/drivers/net/nfp/flower/nfp_flower_representor.h @@ -13,7 +13,7 @@ struct nfp_flower_representor { uint16_t switch_domain_id; uint32_t repr_type; uint32_t port_id; - uint32_t nfp_idx; /* only valid for the repr of physical port */ + uint32_t nfp_idx; /**< Only valid for the repr of physical port */ char name[RTE_ETH_NAME_MAX_LEN]; struct rte_ether_addr mac_addr; struct nfp_app_fw_flower *app_fw_flower; diff --git a/drivers/net/nfp/nfd3/nfp_nfd3.h b/drivers/net/nfp/nfd3/nfp_nfd3.h index 7c56ca4908..0b0ca361f4 100644 --- a/drivers/net/nfp/nfd3/nfp_nfd3.h +++ b/drivers/net/nfp/nfd3/nfp_nfd3.h @@ -17,24 +17,24 @@ struct nfp_net_nfd3_tx_desc { union { struct { - uint8_t dma_addr_hi; /* High bits of host buf address */ - uint16_t dma_len; /* Length to DMA for this desc */ - /* Offset in buf where pkt starts + highest bit is eop flag */ + uint8_t dma_addr_hi; /**< High bits of host buf address */ + uint16_t dma_len; /**< Length to DMA for this desc */ + /** Offset in buf where pkt starts + highest bit is eop flag */ uint8_t offset_eop; - uint32_t dma_addr_lo; /* Low 32bit of host buf addr */ + uint32_t dma_addr_lo; /**< Low 32bit of host buf addr */ - uint16_t mss; /* MSS to be used for LSO */ - uint8_t lso_hdrlen; /* LSO, where the data starts 
*/ - uint8_t flags; /* TX Flags, see @NFD3_DESC_TX_* */ + uint16_t mss; /**< MSS to be used for LSO */ + uint8_t lso_hdrlen; /**< LSO, where the data starts */ + uint8_t flags; /**< TX Flags, see @NFD3_DESC_TX_* */ union { struct { - uint8_t l3_offset; /* L3 header offset */ - uint8_t l4_offset; /* L4 header offset */ + uint8_t l3_offset; /**< L3 header offset */ + uint8_t l4_offset; /**< L4 header offset */ }; - uint16_t vlan; /* VLAN tag to add if indicated */ + uint16_t vlan; /**< VLAN tag to add if indicated */ }; - uint16_t data_len; /* Length of frame + meta data */ + uint16_t data_len; /**< Length of frame + meta data */ } __rte_packed; uint32_t vals[4]; }; @@ -54,13 +54,14 @@ nfp_net_nfd3_free_tx_desc(struct nfp_net_txq *txq) return (free_desc > 8) ? (free_desc - 8) : 0; } -/* - * nfp_net_nfd3_txq_full() - Check if the TX queue free descriptors - * is below tx_free_threshold for firmware of nfd3 - * - * @txq: TX queue to check +/** + * Check if the TX queue free descriptors is below tx_free_threshold + * for firmware with nfd3 * * This function uses the host copy* of read/write pointers. 
+ * + * @param txq + * TX queue to check */ static inline bool nfp_net_nfd3_txq_full(struct nfp_net_txq *txq) diff --git a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c index 51755f4324..4df2c5d4d2 100644 --- a/drivers/net/nfp/nfd3/nfp_nfd3_dp.c +++ b/drivers/net/nfp/nfd3/nfp_nfd3_dp.c @@ -113,14 +113,12 @@ nfp_flower_nfd3_pkt_add_metadata(struct rte_mbuf *mbuf, } /* - * nfp_net_nfd3_tx_vlan() - Set vlan info in the nfd3 tx desc + * Set vlan info in the nfd3 tx desc * * If enable NFP_NET_CFG_CTRL_TXVLAN_V2 - * Vlan_info is stored in the meta and - * is handled in the nfp_net_nfd3_set_meta_vlan() + * Vlan_info is stored in the meta and is handled in the @nfp_net_nfd3_set_meta_vlan() * else if enable NFP_NET_CFG_CTRL_TXVLAN - * Vlan_info is stored in the tx_desc and - * is handled in the nfp_net_nfd3_tx_vlan() + * Vlan_info is stored in the tx_desc and is handled in the @nfp_net_nfd3_tx_vlan() */ static inline void nfp_net_nfd3_tx_vlan(struct nfp_net_txq *txq, @@ -299,9 +297,9 @@ nfp_net_nfd3_xmit_pkts_common(void *tx_queue, nfp_net_nfd3_tx_vlan(txq, &txd, pkt); /* - * mbuf data_len is the data in one segment and pkt_len data + * Mbuf data_len is the data in one segment and pkt_len data * in the whole packet. When the packet is just one segment, - * then data_len = pkt_len + * then data_len = pkt_len. */ pkt_size = pkt->pkt_len; @@ -315,7 +313,7 @@ nfp_net_nfd3_xmit_pkts_common(void *tx_queue, /* * Linking mbuf with descriptor for being released - * next time descriptor is used + * next time descriptor is used. */ *lmbuf = pkt; @@ -330,14 +328,14 @@ nfp_net_nfd3_xmit_pkts_common(void *tx_queue, free_descs--; txq->wr_p++; - if (unlikely(txq->wr_p == txq->tx_count)) /* wrapping */ + if (unlikely(txq->wr_p == txq->tx_count)) /* Wrapping */ txq->wr_p = 0; pkt_size -= dma_size; /* * Making the EOP, packets with just one segment - * the priority + * the priority. 
*/ if (likely(pkt_size == 0)) txds->offset_eop = NFD3_DESC_TX_EOP; @@ -439,7 +437,7 @@ nfp_net_nfd3_tx_queue_setup(struct rte_eth_dev *dev, txq->tx_count = nb_desc * NFD3_TX_DESC_PER_PKT; txq->tx_free_thresh = tx_free_thresh; - /* queue mapping based on firmware configuration */ + /* Queue mapping based on firmware configuration */ txq->qidx = queue_idx; txq->tx_qcidx = queue_idx * hw->stride_tx; txq->qcp_q = hw->tx_bar + NFP_QCP_QUEUE_OFF(txq->tx_qcidx); @@ -449,7 +447,7 @@ nfp_net_nfd3_tx_queue_setup(struct rte_eth_dev *dev, txq->dma = tz->iova; txq->txds = tz->addr; - /* mbuf pointers array for referencing mbufs linked to TX descriptors */ + /* Mbuf pointers array for referencing mbufs linked to TX descriptors */ txq->txbufs = rte_zmalloc_socket("txq->txbufs", sizeof(*txq->txbufs) * txq->tx_count, RTE_CACHE_LINE_SIZE, socket_id); @@ -465,7 +463,7 @@ nfp_net_nfd3_tx_queue_setup(struct rte_eth_dev *dev, /* * Telling the HW about the physical address of the TX ring and number - * of descriptors in log2 format + * of descriptors in log2 format. 
*/ nn_cfg_writeq(hw, NFP_NET_CFG_TXR_ADDR(queue_idx), txq->dma); nn_cfg_writeb(hw, NFP_NET_CFG_TXR_SZ(queue_idx), rte_log2_u32(txq->tx_count)); diff --git a/drivers/net/nfp/nfdk/nfp_nfdk.h b/drivers/net/nfp/nfdk/nfp_nfdk.h index 99675b6bd7..04bd3c7600 100644 --- a/drivers/net/nfp/nfdk/nfp_nfdk.h +++ b/drivers/net/nfp/nfdk/nfp_nfdk.h @@ -75,7 +75,7 @@ * dma_addr_hi - bits [47:32] of host memory address * dma_addr_lo - bits [31:0] of host memory address * - * --> metadata descriptor + * --> Metadata descriptor * Bit 3 3 2 2 2 2 2 2 2 2 2 2 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 0 * -----\ 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 9 8 7 6 5 4 3 2 1 0 * Word +-------+-----------------------+---------------------+---+-----+ @@ -104,27 +104,27 @@ */ struct nfp_net_nfdk_tx_desc { union { - /* Address descriptor */ + /** Address descriptor */ struct { - uint16_t dma_addr_hi; /* High bits of host buf address */ - uint16_t dma_len_type; /* Length to DMA for this desc */ - uint32_t dma_addr_lo; /* Low 32bit of host buf addr */ + uint16_t dma_addr_hi; /**< High bits of host buf address */ + uint16_t dma_len_type; /**< Length to DMA for this desc */ + uint32_t dma_addr_lo; /**< Low 32bit of host buf addr */ }; - /* TSO descriptor */ + /** TSO descriptor */ struct { - uint16_t mss; /* MSS to be used for LSO */ - uint8_t lso_hdrlen; /* LSO, TCP payload offset */ - uint8_t lso_totsegs; /* LSO, total segments */ - uint8_t l3_offset; /* L3 header offset */ - uint8_t l4_offset; /* L4 header offset */ - uint16_t lso_meta_res; /* Rsvd bits in TSO metadata */ + uint16_t mss; /**< MSS to be used for LSO */ + uint8_t lso_hdrlen; /**< LSO, TCP payload offset */ + uint8_t lso_totsegs; /**< LSO, total segments */ + uint8_t l3_offset; /**< L3 header offset */ + uint8_t l4_offset; /**< L4 header offset */ + uint16_t lso_meta_res; /**< Rsvd bits in TSO metadata */ }; - /* Metadata descriptor */ + /** Metadata descriptor */ struct { - uint8_t flags; /* TX Flags, see @NFDK_DESC_TX_* */ - uint8_t 
reserved[7]; /* meta byte placeholder */ + uint8_t flags; /**< TX Flags, see @NFDK_DESC_TX_* */ + uint8_t reserved[7]; /**< Meta byte placeholder */ }; uint32_t vals[2]; @@ -146,13 +146,14 @@ nfp_net_nfdk_free_tx_desc(struct nfp_net_txq *txq) (free_desc - NFDK_TX_DESC_STOP_CNT) : 0; } -/* - * nfp_net_nfdk_txq_full() - Check if the TX queue free descriptors - * is below tx_free_threshold for firmware of nfdk - * - * @txq: TX queue to check +/** + * Check if the TX queue free descriptors is below tx_free_threshold + * for firmware of nfdk * * This function uses the host copy* of read/write pointers. + * + * @param txq + * TX queue to check */ static inline bool nfp_net_nfdk_txq_full(struct nfp_net_txq *txq) diff --git a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c index dae87ac6df..1289ba1d65 100644 --- a/drivers/net/nfp/nfdk/nfp_nfdk_dp.c +++ b/drivers/net/nfp/nfdk/nfp_nfdk_dp.c @@ -478,7 +478,7 @@ nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev, /* * Free memory prior to re-allocation if needed. This is the case after - * calling nfp_net_stop + * calling nfp_net_stop().
*/ if (dev->data->tx_queues[queue_idx] != NULL) { PMD_TX_LOG(DEBUG, "Freeing memory prior to re-allocation %d", @@ -513,7 +513,7 @@ nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev, txq->tx_count = nb_desc * NFDK_TX_DESC_PER_SIMPLE_PKT; txq->tx_free_thresh = tx_free_thresh; - /* queue mapping based on firmware configuration */ + /* Queue mapping based on firmware configuration */ txq->qidx = queue_idx; txq->tx_qcidx = queue_idx * hw->stride_tx; txq->qcp_q = hw->tx_bar + NFP_QCP_QUEUE_OFF(txq->tx_qcidx); @@ -523,7 +523,7 @@ nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev, txq->dma = tz->iova; txq->ktxds = tz->addr; - /* mbuf pointers array for referencing mbufs linked to TX descriptors */ + /* Mbuf pointers array for referencing mbufs linked to TX descriptors */ txq->txbufs = rte_zmalloc_socket("txq->txbufs", sizeof(*txq->txbufs) * txq->tx_count, RTE_CACHE_LINE_SIZE, socket_id); @@ -539,7 +539,7 @@ nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev, /* * Telling the HW about the physical address of the TX ring and number - * of descriptors in log2 format + * of descriptors in log2 format. */ nn_cfg_writeq(hw, NFP_NET_CFG_TXR_ADDR(queue_idx), txq->dma); nn_cfg_writeb(hw, NFP_NET_CFG_TXR_SZ(queue_idx), rte_log2_u32(txq->tx_count)); diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c index f48e1930dc..130f004b4d 100644 --- a/drivers/net/nfp/nfp_common.c +++ b/drivers/net/nfp/nfp_common.c @@ -55,7 +55,7 @@ struct nfp_xstat { } static const struct nfp_xstat nfp_net_xstats[] = { - /** + /* * Basic xstats available on both VF and PF. * Note that in case new statistics of group NFP_XSTAT_GROUP_NET * are added to this array, they must appear before any statistics @@ -80,7 +80,7 @@ static const struct nfp_xstat nfp_net_xstats[] = { NFP_XSTAT_NET("bpf_app2_bytes", APP2_BYTES), NFP_XSTAT_NET("bpf_app3_pkts", APP3_FRAMES), NFP_XSTAT_NET("bpf_app3_bytes", APP3_BYTES), - /** + /* * MAC xstats available only on PF. 
These statistics are not available for VFs as the * PF is not initialized when the VF is initialized as it is still bound to the kernel * driver. As such, the PMD cannot obtain a CPP handle and access the rtsym_table in order @@ -175,7 +175,7 @@ static void nfp_net_notify_port_speed(struct nfp_net_hw *hw, struct rte_eth_link *link) { - /** + /* * Read the link status from NFP_NET_CFG_STS. If the link is down * then write the link speed NFP_NET_CFG_STS_LINK_RATE_UNKNOWN to * NFP_NET_CFG_STS_NSP_LINK_RATE. @@ -184,7 +184,7 @@ nfp_net_notify_port_speed(struct nfp_net_hw *hw, nn_cfg_writew(hw, NFP_NET_CFG_STS_NSP_LINK_RATE, NFP_NET_CFG_STS_LINK_RATE_UNKNOWN); return; } - /** + /* * Link is up so write the link speed from the eth_table to * NFP_NET_CFG_STS_NSP_LINK_RATE. */ @@ -214,7 +214,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw, nfp_qcp_ptr_add(hw->qcp_cfg, NFP_QCP_WRITE_PTR, 1); wait.tv_sec = 0; - wait.tv_nsec = 1000000; + wait.tv_nsec = 1000000; /* 1ms */ PMD_DRV_LOG(DEBUG, "Polling for update ack..."); @@ -253,7 +253,7 @@ __nfp_net_reconfig(struct nfp_net_hw *hw, * * @return * - (0) if OK to reconfigure the device. - * - (EIO) if I/O err and fail to reconfigure the device. + * - (-EIO) if I/O err and fail to reconfigure the device. */ int nfp_net_reconfig(struct nfp_net_hw *hw, @@ -297,7 +297,7 @@ nfp_net_reconfig(struct nfp_net_hw *hw, * * @return * - (0) if OK to reconfigure the device. - * - (EIO) if I/O err and fail to reconfigure the device. + * - (-EIO) if I/O err and fail to reconfigure the device. */ int nfp_net_ext_reconfig(struct nfp_net_hw *hw, @@ -368,9 +368,15 @@ nfp_net_mbox_reconfig(struct nfp_net_hw *hw, } /* - * Configure an Ethernet device. This function must be invoked first - * before any other function in the Ethernet API. This function can - * also be re-invoked when a device is in the stopped state. + * Configure an Ethernet device. + * + * This function must be invoked first before any other function in the Ethernet API. 
+ * This function can also be re-invoked when a device is in the stopped state. + * + * A DPDK app sends info about how many queues to use and how those queues + * need to be configured. This is used by the DPDK core and it makes sure no + * more queues than those advertised by the driver are requested. + * This function is called after that internal process. */ int nfp_net_configure(struct rte_eth_dev *dev) @@ -382,14 +388,6 @@ nfp_net_configure(struct rte_eth_dev *dev) hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - /* - * A DPDK app sends info about how many queues to use and how - * those queues need to be configured. This is used by the - * DPDK core and it makes sure no more queues than those - * advertised by the driver are requested. This function is - * called after that internal process - */ - dev_conf = &dev->data->dev_conf; rxmode = &dev_conf->rxmode; txmode = &dev_conf->txmode; @@ -557,12 +555,12 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, /* Writing new MAC to the specific port BAR address */ nfp_net_write_mac(hw, (uint8_t *)mac_addr); - /* Signal the NIC about the change */ update = NFP_NET_CFG_UPDATE_MACADDR; ctrl = hw->ctrl; if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 && (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0) ctrl |= NFP_NET_CFG_CTRL_LIVE_ADDR; + /* Signal the NIC about the change */ if (nfp_net_reconfig(hw, ctrl, update) != 0) { PMD_DRV_LOG(ERR, "MAC address update failed"); return -EIO; @@ -588,7 +586,7 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev, if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) { PMD_DRV_LOG(INFO, "VF: enabling RX interrupt with UIO"); - /* UIO just supports one queue and no LSC*/ + /* UIO just supports one queue and no LSC */ nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(0), 0); if (rte_intr_vec_list_index_set(intr_handle, 0, 0) != 0) return -1; @@ -597,8 +595,8 @@ nfp_configure_rx_interrupt(struct rte_eth_dev *dev, for (i = 0; i < dev->data->nb_rx_queues; i++) { /* * The first msix vector is 
reserved for non - * efd interrupts - */ + * efd interrupts. + */ nn_cfg_writeb(hw, NFP_NET_CFG_RXR_VEC(i), i + 1); if (rte_intr_vec_list_index_set(intr_handle, i, i + 1) != 0) return -1; @@ -706,10 +704,6 @@ nfp_net_promisc_enable(struct rte_eth_dev *dev) new_ctrl = hw->ctrl | NFP_NET_CFG_CTRL_PROMISC; update = NFP_NET_CFG_UPDATE_GEN; - /* - * DPDK sets promiscuous mode on just after this call assuming - * it can not fail ... - */ ret = nfp_net_reconfig(hw, new_ctrl, update); if (ret != 0) return ret; @@ -737,10 +731,6 @@ nfp_net_promisc_disable(struct rte_eth_dev *dev) new_ctrl = hw->ctrl & ~NFP_NET_CFG_CTRL_PROMISC; update = NFP_NET_CFG_UPDATE_GEN; - /* - * DPDK sets promiscuous mode off just before this call - * assuming it can not fail ... - */ ret = nfp_net_reconfig(hw, new_ctrl, update); if (ret != 0) return ret; @@ -751,7 +741,7 @@ nfp_net_promisc_disable(struct rte_eth_dev *dev) } /* - * return 0 means link status changed, -1 means not changed + * Return 0 means link status changed, -1 means not changed * * Wait to complete is needed as it can take up to 9 seconds to get the Link * status. @@ -793,7 +783,7 @@ nfp_net_link_update(struct rte_eth_dev *dev, } } } else { - /** + /* * Shift and mask nn_link_status so that it is effectively the value * at offset NFP_NET_CFG_STS_NSP_LINK_RATE. */ @@ -812,7 +802,7 @@ nfp_net_link_update(struct rte_eth_dev *dev, PMD_DRV_LOG(INFO, "NIC Link is Down"); } - /** + /* * Notify the port to update the speed value in the CTRL BAR from NSP. * Not applicable for VFs as the associated PF is still attached to the * kernel driver. 
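As an aside on the "shift and mask nn_link_status" comment in the hunk above, that step can be sketched as a standalone helper. The shift and mask values below are illustrative assumptions standing in for the real NFP_NET_CFG_STS field layout, which this patch does not show:

```c
#include <stdint.h>

/* Assumed (hypothetical) layout: link-rate field in bits [4:1]. */
#define STS_LINK_RATE_SHIFT 1
#define STS_LINK_RATE_MASK  0xFu

/*
 * Shift and mask the raw status word so that only the link-rate
 * field remains, mirroring the nn_link_status handling above.
 */
static inline uint32_t
link_rate_from_status(uint32_t nn_link_status)
{
	return (nn_link_status >> STS_LINK_RATE_SHIFT) & STS_LINK_RATE_MASK;
}
```

With this assumed layout, a status word of 0x0F (link-up bit plus rate bits) yields a rate field of 0x7.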
@@ -833,11 +823,9 @@ nfp_net_stats_get(struct rte_eth_dev *dev, hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - /* RTE_ETHDEV_QUEUE_STAT_CNTRS default value is 16 */ - memset(&nfp_dev_stats, 0, sizeof(nfp_dev_stats)); - /* reading per RX ring stats */ + /* Reading per RX ring stats */ for (i = 0; i < dev->data->nb_rx_queues; i++) { if (i == RTE_ETHDEV_QUEUE_STAT_CNTRS) break; @@ -855,7 +843,7 @@ nfp_net_stats_get(struct rte_eth_dev *dev, hw->eth_stats_base.q_ibytes[i]; } - /* reading per TX ring stats */ + /* Reading per TX ring stats */ for (i = 0; i < dev->data->nb_tx_queues; i++) { if (i == RTE_ETHDEV_QUEUE_STAT_CNTRS) break; @@ -889,7 +877,7 @@ nfp_net_stats_get(struct rte_eth_dev *dev, nfp_dev_stats.obytes -= hw->eth_stats_base.obytes; - /* reading general device stats */ + /* Reading general device stats */ nfp_dev_stats.ierrors = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS); @@ -915,6 +903,10 @@ nfp_net_stats_get(struct rte_eth_dev *dev, return -EINVAL; } +/* + * hw->eth_stats_base records the per counter starting point. + * Let's update it now. + */ int nfp_net_stats_reset(struct rte_eth_dev *dev) { @@ -923,12 +915,7 @@ nfp_net_stats_reset(struct rte_eth_dev *dev) hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - /* - * hw->eth_stats_base records the per counter starting point.
- * Lets update it now - */ - - /* reading per RX ring stats */ + /* Reading per RX ring stats */ for (i = 0; i < dev->data->nb_rx_queues; i++) { if (i == RTE_ETHDEV_QUEUE_STAT_CNTRS) break; @@ -940,7 +927,7 @@ nfp_net_stats_reset(struct rte_eth_dev *dev) nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8); } - /* reading per TX ring stats */ + /* Reading per TX ring stats */ for (i = 0; i < dev->data->nb_tx_queues; i++) { if (i == RTE_ETHDEV_QUEUE_STAT_CNTRS) break; @@ -964,7 +951,7 @@ nfp_net_stats_reset(struct rte_eth_dev *dev) hw->eth_stats_base.obytes = nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS); - /* reading general device stats */ + /* Reading general device stats */ hw->eth_stats_base.ierrors = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS); @@ -1032,7 +1019,7 @@ nfp_net_xstats_value(const struct rte_eth_dev *dev, if (raw) return value; - /** + /* * A baseline value of each statistic counter is recorded when stats are "reset". * Thus, the value returned by this function need to be decremented by this * baseline value. The result is the count of this statistic since the last time @@ -1041,12 +1028,12 @@ nfp_net_xstats_value(const struct rte_eth_dev *dev, return value - hw->eth_xstats_base[index].value; } +/* NOTE: All callers ensure dev is always set. */ int nfp_net_xstats_get_names(struct rte_eth_dev *dev, struct rte_eth_xstat_name *xstats_names, unsigned int size) { - /* NOTE: All callers ensure dev is always set. */ uint32_t id; uint32_t nfp_size; uint32_t read_size; @@ -1066,12 +1053,12 @@ nfp_net_xstats_get_names(struct rte_eth_dev *dev, return read_size; } +/* NOTE: All callers ensure dev is always set. */ int nfp_net_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats, unsigned int n) { - /* NOTE: All callers ensure dev is always set. 
*/ uint32_t id; uint32_t nfp_size; uint32_t read_size; @@ -1092,16 +1079,16 @@ nfp_net_xstats_get(struct rte_eth_dev *dev, return read_size; } +/* + * NOTE: The only caller rte_eth_xstats_get_names_by_id() ensures dev, + * ids, xstats_names and size are valid, and non-NULL. + */ int nfp_net_xstats_get_names_by_id(struct rte_eth_dev *dev, const uint64_t *ids, struct rte_eth_xstat_name *xstats_names, unsigned int size) { - /** - * NOTE: The only caller rte_eth_xstats_get_names_by_id() ensures dev, - * ids, xstats_names and size are valid, and non-NULL. - */ uint32_t i; uint32_t read_size; @@ -1123,16 +1110,16 @@ nfp_net_xstats_get_names_by_id(struct rte_eth_dev *dev, return read_size; } +/* + * NOTE: The only caller rte_eth_xstats_get_by_id() ensures dev, + * ids, values and n are valid, and non-NULL. + */ int nfp_net_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids, uint64_t *values, unsigned int n) { - /** - * NOTE: The only caller rte_eth_xstats_get_by_id() ensures dev, - * ids, values and n are valid, and non-NULL. - */ uint32_t i; uint32_t read_size; @@ -1167,10 +1154,7 @@ nfp_net_xstats_reset(struct rte_eth_dev *dev) hw->eth_xstats_base[id].id = id; hw->eth_xstats_base[id].value = nfp_net_xstats_value(dev, id, true); } - /** - * Successfully reset xstats, now call function to reset basic stats - * return value is then based on the success of that function - */ + /* Successfully reset xstats, now call function to reset basic stats. */ return nfp_net_stats_reset(dev); } @@ -1217,7 +1201,7 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->max_rx_queues = (uint16_t)hw->max_rx_queues; dev_info->max_tx_queues = (uint16_t)hw->max_tx_queues; dev_info->min_rx_bufsize = RTE_ETHER_MIN_MTU; - /* + /** * The maximum rx packet length (max_rx_pktlen) is set to the * maximum supported frame size that the NFP can handle. 
This * includes layer 2 headers, CRC and other metadata that can @@ -1358,7 +1342,7 @@ nfp_net_common_init(struct rte_pci_device *pci_dev, nfp_net_init_metadata_format(hw); - /* read the Rx offset configured from firmware */ + /* Read the Rx offset configured from firmware */ if (hw->ver.major < 2) hw->rx_offset = NFP_NET_RX_OFFSET; else @@ -1375,7 +1359,6 @@ const uint32_t * nfp_net_supported_ptypes_get(struct rte_eth_dev *dev) { static const uint32_t ptypes[] = { - /* refers to nfp_net_set_hash() */ RTE_PTYPE_INNER_L3_IPV4, RTE_PTYPE_INNER_L3_IPV6, RTE_PTYPE_INNER_L3_IPV6_EXT, @@ -1449,10 +1432,8 @@ nfp_net_dev_link_status_print(struct rte_eth_dev *dev) pci_dev->addr.devid, pci_dev->addr.function); } -/* Interrupt configuration and handling */ - /* - * nfp_net_irq_unmask - Unmask an interrupt + * Unmask an interrupt * * If MSI-X auto-masking is enabled clear the mask bit, otherwise * clear the ICR for the entry. @@ -1478,16 +1459,14 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev) } } -/* +/** * Interrupt handler which shall be registered for alarm callback for delayed * handling specific interrupt to wait for the stable nic state. As the NIC * interrupt state is not stable for nfp after link is just down, it needs * to wait 4 seconds to get the stable status. * - * @param handle Pointer to interrupt handle. 
- * @param param The address of parameter (struct rte_eth_dev *) - * - * @return void + * @param param + * The address of parameter (struct rte_eth_dev *) */ void nfp_net_dev_interrupt_delayed_handler(void *param) @@ -1516,13 +1495,12 @@ nfp_net_dev_interrupt_handler(void *param) nfp_net_link_update(dev, 0); - /* likely to up */ + /* Likely to up */ if (link.link_status == 0) { - /* handle it 1 sec later, wait it being stable */ + /* Handle it 1 sec later, wait it being stable */ timeout = NFP_NET_LINK_UP_CHECK_TIMEOUT; - /* likely to down */ - } else { - /* handle it 4 sec later, wait it being stable */ + } else { /* Likely to down */ + /* Handle it 4 sec later, wait it being stable */ timeout = NFP_NET_LINK_DOWN_CHECK_TIMEOUT; } @@ -1543,7 +1521,7 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - /* mtu setting is forbidden if port is started */ + /* MTU setting is forbidden if port is started */ if (dev->data->dev_started) { PMD_DRV_LOG(ERR, "port %d must be stopped before configuration", dev->data->port_id); @@ -1557,7 +1535,7 @@ nfp_net_dev_mtu_set(struct rte_eth_dev *dev, return -ERANGE; } - /* writing to configuration space */ + /* Writing to configuration space */ nn_cfg_writel(hw, NFP_NET_CFG_MTU, mtu); hw->mtu = mtu; @@ -1634,7 +1612,7 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev, /* * Update Redirection Table. There are 128 8bit-entries which can be - * manage as 32 32bit-entries + * manage as 32 32bit-entries. 
*/ for (i = 0; i < reta_size; i += 4) { /* Handling 4 RSS entries per loop */ @@ -1653,8 +1631,8 @@ for (j = 0; j < 4; j++) { if ((mask & (0x1 << j)) == 0) continue; + /* Clearing the entry bits */ if (mask != 0xF) - /* Clearing the entry bits */ reta &= ~(0xFF << (8 * j)); reta |= reta_conf[idx].reta[shift + j] << (8 * j); } @@ -1689,7 +1667,7 @@ nfp_net_reta_update(struct rte_eth_dev *dev, return 0; } - /* Query Redirection Table(RETA) of Receive Side Scaling of Ethernet device. */ +/* Query Redirection Table(RETA) of Receive Side Scaling of Ethernet device. */ int nfp_net_reta_query(struct rte_eth_dev *dev, struct rte_eth_rss_reta_entry64 *reta_conf, @@ -1717,7 +1695,7 @@ nfp_net_reta_query(struct rte_eth_dev *dev, /* * Reading Redirection Table. There are 128 8bit-entries which can be - * manage as 32 32bit-entries + * managed as 32 32bit-entries. */ for (i = 0; i < reta_size; i += 4) { /* Handling 4 RSS entries per loop */ @@ -1751,7 +1729,7 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev, hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - /* Writing the key byte a byte */ + /* Writing the key byte by byte */ for (i = 0; i < rss_conf->rss_key_len; i++) { memcpy(&key, &rss_conf->rss_key[i], 1); nn_cfg_writeb(hw, NFP_NET_CFG_RSS_KEY + i, key); @@ -1786,7 +1764,7 @@ nfp_net_rss_hash_write(struct rte_eth_dev *dev, cfg_rss_ctrl |= NFP_NET_CFG_RSS_MASK; cfg_rss_ctrl |= NFP_NET_CFG_RSS_TOEPLITZ; - /* configuring where to apply the RSS hash */ + /* Configuring where to apply the RSS hash */ nn_cfg_writel(hw, NFP_NET_CFG_RSS_CTRL, cfg_rss_ctrl); /* Writing the key size */ @@ -1809,7 +1787,7 @@ nfp_net_rss_hash_update(struct rte_eth_dev *dev, /* Checking if RSS is enabled */ if ((hw->ctrl & NFP_NET_CFG_CTRL_RSS_ANY) == 0) { - if (rss_hf != 0) { /* Enable RSS? 
*/ + if (rss_hf != 0) { PMD_DRV_LOG(ERR, "RSS unsupported"); return -EINVAL; } @@ -2010,7 +1988,7 @@ nfp_net_set_vxlan_port(struct nfp_net_hw *hw, /* * The firmware with NFD3 can not handle DMA address requiring more - * than 40 bits + * than 40 bits. */ int nfp_net_check_dma_mask(struct nfp_net_hw *hw, diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h index 9cb889c4a6..6a36e2b04c 100644 --- a/drivers/net/nfp/nfp_common.h +++ b/drivers/net/nfp/nfp_common.h @@ -53,7 +53,7 @@ enum nfp_app_fw_id { NFP_APP_FW_FLOWER_NIC = 0x3, }; -/* nfp_qcp_ptr - Read or Write Pointer of a queue */ +/* Read or Write Pointer of a queue */ enum nfp_qcp_ptr { NFP_QCP_READ_PTR = 0, NFP_QCP_WRITE_PTR @@ -72,15 +72,15 @@ struct nfp_net_tlv_caps { }; struct nfp_pf_dev { - /* Backpointer to associated pci device */ + /** Backpointer to associated pci device */ struct rte_pci_device *pci_dev; enum nfp_app_fw_id app_fw_id; - /* Pointer to the app running on the PF */ + /** Pointer to the app running on the PF */ void *app_fw_priv; - /* The eth table reported by firmware */ + /** The eth table reported by firmware */ struct nfp_eth_table *nfp_eth_table; uint8_t *ctrl_bar; @@ -94,17 +94,17 @@ struct nfp_pf_dev { struct nfp_hwinfo *hwinfo; struct nfp_rtsym_table *sym_tbl; - /* service id of cpp bridge service */ + /** Service id of cpp bridge service */ uint32_t cpp_bridge_id; }; struct nfp_app_fw_nic { - /* Backpointer to the PF device */ + /** Backpointer to the PF device */ struct nfp_pf_dev *pf_dev; - /* - * Array of physical ports belonging to the this CoreNIC app - * This is really a list of vNIC's. One for each physical port + /** + * Array of physical ports belonging to this CoreNIC app. + * This is really a list of vNIC's, one for each physical port. 
*/ struct nfp_net_hw *ports[NFP_MAX_PHYPORTS]; @@ -113,13 +113,13 @@ struct nfp_app_fw_nic { }; struct nfp_net_hw { - /* Backpointer to the PF this port belongs to */ + /** Backpointer to the PF this port belongs to */ struct nfp_pf_dev *pf_dev; - /* Backpointer to the eth_dev of this port*/ + /** Backpointer to the eth_dev of this port */ struct rte_eth_dev *eth_dev; - /* Info from the firmware */ + /** Info from the firmware */ struct nfp_net_fw_ver ver; uint32_t cap; uint32_t max_mtu; @@ -130,7 +130,7 @@ struct nfp_net_hw { /** NFP ASIC params */ const struct nfp_dev_info *dev_info; - /* Current values for control */ + /** Current values for control */ uint32_t ctrl; uint8_t *ctrl_bar; @@ -156,7 +156,7 @@ struct nfp_net_hw { struct rte_ether_addr mac_addr; - /* Records starting point for counters */ + /** Records starting point for counters */ struct rte_eth_stats eth_stats_base; struct rte_eth_xstat *eth_xstats_base; @@ -166,9 +166,9 @@ struct nfp_net_hw { uint8_t *mac_stats_bar; uint8_t *mac_stats; - /* Sequential physical port number, only valid for CoreNIC firmware */ + /** Sequential physical port number, only valid for CoreNIC firmware */ uint8_t idx; - /* Internal port number as seen from NFP */ + /** Internal port number as seen from NFP */ uint8_t nfp_idx; struct nfp_net_tlv_caps tlv_caps; @@ -240,10 +240,6 @@ nn_writeq(uint64_t val, nn_writel(val, addr); } -/* - * Functions to read/write from/to Config BAR - * Performs any endian conversion necessary. - */ static inline uint8_t nn_cfg_readb(struct nfp_net_hw *hw, uint32_t off) @@ -304,11 +300,15 @@ nn_cfg_writeq(struct nfp_net_hw *hw, nn_writeq(rte_cpu_to_le_64(val), hw->ctrl_bar + off); } -/* - * nfp_qcp_ptr_add - Add the value to the selected pointer of a queue - * @q: Base address for queue structure - * @ptr: Add to the Read or Write pointer - * @val: Value to add to the queue pointer +/** + * Add the value to the selected pointer of a queue. 
+ * + * @param q + * Base address for queue structure + * @param ptr + * Add to the read or write pointer + * @param val + * Value to add to the queue pointer */ static inline void nfp_qcp_ptr_add(uint8_t *q, @@ -325,10 +325,13 @@ nfp_qcp_ptr_add(uint8_t *q, nn_writel(rte_cpu_to_le_32(val), q + off); } -/* - * nfp_qcp_read - Read the current Read/Write pointer value for a queue - * @q: Base address for queue structure - * @ptr: Read or Write pointer +/** + * Read the current read/write pointer value for a queue. + * + * @param q + * Base address for queue structure + * @param ptr + * Read or Write pointer */ static inline uint32_t nfp_qcp_read(uint8_t *q, diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c index 222cfdcbc3..8f5271cde9 100644 --- a/drivers/net/nfp/nfp_cpp_bridge.c +++ b/drivers/net/nfp/nfp_cpp_bridge.c @@ -1,8 +1,6 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2014-2021 Netronome Systems, Inc. * All rights reserved. - * - * Small portions derived from code Copyright(c) 2010-2015 Intel Corporation. */ #include "nfp_cpp_bridge.h" @@ -48,7 +46,7 @@ nfp_map_service(uint32_t service_id) /* * Find a service core with the least number of services already - * registered to it + * registered to it. 
*/ while (slcore_count--) { service_count = rte_service_lcore_count_services(slcore_array[slcore_count]); @@ -100,7 +98,7 @@ nfp_enable_cpp_service(struct nfp_pf_dev *pf_dev) pf_dev->cpp_bridge_id = service_id; PMD_INIT_LOG(INFO, "NFP cpp service registered"); - /* Map it to available service core*/ + /* Map it to available service core */ ret = nfp_map_service(service_id); if (ret != 0) { PMD_INIT_LOG(DEBUG, "Could not map nfp cpp service"); diff --git a/drivers/net/nfp/nfp_ctrl.h b/drivers/net/nfp/nfp_ctrl.h index 55073c3cea..cd0a2f92a8 100644 --- a/drivers/net/nfp/nfp_ctrl.h +++ b/drivers/net/nfp/nfp_ctrl.h @@ -20,7 +20,7 @@ /* Offset in Freelist buffer where packet starts on RX */ #define NFP_NET_RX_OFFSET 32 -/* working with metadata api (NFD version > 3.0) */ +/* Working with metadata api (NFD version > 3.0) */ #define NFP_NET_META_FIELD_SIZE 4 #define NFP_NET_META_FIELD_MASK ((1 << NFP_NET_META_FIELD_SIZE) - 1) #define NFP_NET_META_HEADER_SIZE 4 @@ -36,14 +36,14 @@ NFP_NET_META_VLAN_TPID_MASK) /* Prepend field types */ -#define NFP_NET_META_HASH 1 /* next field carries hash type */ +#define NFP_NET_META_HASH 1 /* Next field carries hash type */ #define NFP_NET_META_VLAN 4 #define NFP_NET_META_PORTID 5 #define NFP_NET_META_IPSEC 9 #define NFP_META_PORT_ID_CTRL ~0U -/* Hash type pre-pended when a RSS hash was computed */ +/* Hash type prepended when a RSS hash was computed */ #define NFP_NET_RSS_NONE 0 #define NFP_NET_RSS_IPV4 1 #define NFP_NET_RSS_IPV6 2 @@ -102,7 +102,7 @@ #define NFP_NET_CFG_CTRL_IRQMOD (0x1 << 18) /* Interrupt moderation */ #define NFP_NET_CFG_CTRL_RINGPRIO (0x1 << 19) /* Ring priorities */ #define NFP_NET_CFG_CTRL_MSIXAUTO (0x1 << 20) /* MSI-X auto-masking */ -#define NFP_NET_CFG_CTRL_TXRWB (0x1 << 21) /* Write-back of TX ring*/ +#define NFP_NET_CFG_CTRL_TXRWB (0x1 << 21) /* Write-back of TX ring */ #define NFP_NET_CFG_CTRL_L2SWITCH (0x1 << 22) /* L2 Switch */ #define NFP_NET_CFG_CTRL_TXVLAN_V2 (0x1 << 23) /* Enable VLAN insert with 
metadata */ #define NFP_NET_CFG_CTRL_VXLAN (0x1 << 24) /* Enable VXLAN */ @@ -111,7 +111,7 @@ #define NFP_NET_CFG_CTRL_LSO2 (0x1 << 28) /* LSO/TSO (version 2) */ #define NFP_NET_CFG_CTRL_RSS2 (0x1 << 29) /* RSS (version 2) */ #define NFP_NET_CFG_CTRL_CSUM_COMPLETE (0x1 << 30) /* Checksum complete */ -#define NFP_NET_CFG_CTRL_LIVE_ADDR (0x1U << 31)/* live MAC addr change */ +#define NFP_NET_CFG_CTRL_LIVE_ADDR (0x1U << 31) /* Live MAC addr change */ #define NFP_NET_CFG_UPDATE 0x0004 #define NFP_NET_CFG_UPDATE_GEN (0x1 << 0) /* General update */ #define NFP_NET_CFG_UPDATE_RING (0x1 << 1) /* Ring config change */ @@ -124,7 +124,7 @@ #define NFP_NET_CFG_UPDATE_IRQMOD (0x1 << 8) /* IRQ mod change */ #define NFP_NET_CFG_UPDATE_VXLAN (0x1 << 9) /* VXLAN port change */ #define NFP_NET_CFG_UPDATE_MACADDR (0x1 << 11) /* MAC address change */ -#define NFP_NET_CFG_UPDATE_MBOX (0x1 << 12) /**< Mailbox update */ +#define NFP_NET_CFG_UPDATE_MBOX (0x1 << 12) /* Mailbox update */ #define NFP_NET_CFG_UPDATE_ERR (0x1U << 31) /* A error occurred */ #define NFP_NET_CFG_TXRS_ENABLE 0x0008 #define NFP_NET_CFG_RXRS_ENABLE 0x0010 @@ -205,7 +205,7 @@ struct nfp_net_fw_ver { * @NFP_NET_CFG_SPARE_ADDR: DMA address for ME code to use (e.g. 
YDS-155 fix) */ #define NFP_NET_CFG_SPARE_ADDR 0x0050 -/** +/* * NFP6000/NFP4000 - Prepend configuration */ #define NFP_NET_CFG_RX_OFFSET 0x0050 @@ -280,7 +280,7 @@ struct nfp_net_fw_ver { * @NFP_NET_CFG_TXR_BASE: Base offset for TX ring configuration * @NFP_NET_CFG_TXR_ADDR: Per TX ring DMA address (8B entries) * @NFP_NET_CFG_TXR_WB_ADDR: Per TX ring write back DMA address (8B entries) - * @NFP_NET_CFG_TXR_SZ: Per TX ring ring size (1B entries) + * @NFP_NET_CFG_TXR_SZ: Per TX ring size (1B entries) * @NFP_NET_CFG_TXR_VEC: Per TX ring MSI-X table entry (1B entries) * @NFP_NET_CFG_TXR_PRIO: Per TX ring priority (1B entries) * @NFP_NET_CFG_TXR_IRQ_MOD: Per TX ring interrupt moderation (4B entries) @@ -299,7 +299,7 @@ struct nfp_net_fw_ver { * RX ring configuration (0x0800 - 0x0c00) * @NFP_NET_CFG_RXR_BASE: Base offset for RX ring configuration * @NFP_NET_CFG_RXR_ADDR: Per TX ring DMA address (8B entries) - * @NFP_NET_CFG_RXR_SZ: Per TX ring ring size (1B entries) + * @NFP_NET_CFG_RXR_SZ: Per TX ring size (1B entries) * @NFP_NET_CFG_RXR_VEC: Per TX ring MSI-X table entry (1B entries) * @NFP_NET_CFG_RXR_PRIO: Per TX ring priority (1B entries) * @NFP_NET_CFG_RXR_IRQ_MOD: Per TX ring interrupt moderation (4B entries) @@ -330,7 +330,7 @@ struct nfp_net_fw_ver { /* * General device stats (0x0d00 - 0x0d90) - * all counters are 64bit. + * All counters are 64bit. */ #define NFP_NET_CFG_STATS_BASE 0x0d00 #define NFP_NET_CFG_STATS_RX_DISCARDS (NFP_NET_CFG_STATS_BASE + 0x00) @@ -364,7 +364,7 @@ struct nfp_net_fw_ver { /* * Per ring stats (0x1000 - 0x1800) - * options, 64bit per entry + * Options, 64bit per entry * @NFP_NET_CFG_TXR_STATS: TX ring statistics (Packet and Byte count) * @NFP_NET_CFG_RXR_STATS: RX ring statistics (Packet and Byte count) */ @@ -375,9 +375,9 @@ struct nfp_net_fw_ver { #define NFP_NET_CFG_RXR_STATS(_x) (NFP_NET_CFG_RXR_STATS_BASE + \ ((_x) * 0x10)) -/** +/* * Mac stats (0x0000 - 0x0200) - * all counters are 64bit. + * All counters are 64bit. 
*/ #define NFP_MAC_STATS_BASE 0x0000 #define NFP_MAC_STATS_SIZE 0x0200 @@ -558,9 +558,11 @@ struct nfp_net_fw_ver { int nfp_net_tlv_caps_parse(struct rte_eth_dev *dev); -/* - * nfp_net_cfg_ctrl_rss() - Get RSS flag based on firmware's capability - * @hw_cap: The firmware's capabilities +/** + * Get RSS flag based on firmware's capability + * + * @param hw_cap + * The firmware's capabilities */ static inline uint32_t nfp_net_cfg_ctrl_rss(uint32_t hw_cap) diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c index 72abc4c16e..1651ac2455 100644 --- a/drivers/net/nfp/nfp_ethdev.c +++ b/drivers/net/nfp/nfp_ethdev.c @@ -66,7 +66,7 @@ nfp_net_start(struct rte_eth_dev *dev) /* Enabling the required queues in the device */ nfp_net_enable_queues(dev); - /* check and configure queue intr-vector mapping */ + /* Check and configure queue intr-vector mapping */ if (dev->data->dev_conf.intr_conf.rxq != 0) { if (app_fw_nic->multiport) { PMD_INIT_LOG(ERR, "PMD rx interrupt is not supported " @@ -76,7 +76,7 @@ nfp_net_start(struct rte_eth_dev *dev) if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) { /* * Better not to share LSC with RX interrupts. - * Unregistering LSC interrupt handler + * Unregistering LSC interrupt handler. */ rte_intr_callback_unregister(pci_dev->intr_handle, nfp_net_dev_interrupt_handler, (void *)dev); @@ -150,7 +150,7 @@ nfp_net_start(struct rte_eth_dev *dev) /* * Allocating rte mbufs for configured rx queues. - * This requires queues being enabled before + * This requires queues being enabled before. 
*/ if (nfp_net_rx_freelist_setup(dev) != 0) { ret = -ENOMEM; @@ -273,11 +273,11 @@ nfp_net_close(struct rte_eth_dev *dev) /* Clear ipsec */ nfp_ipsec_uninit(dev); - /* Cancel possible impending LSC work here before releasing the port*/ + /* Cancel possible impending LSC work here before releasing the port */ rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler, (void *)dev); /* Only free PF resources after all physical ports have been closed */ - /* Mark this port as unused and free device priv resources*/ + /* Mark this port as unused and free device priv resources */ nn_cfg_writeb(hw, NFP_NET_CFG_LSC, 0xff); app_fw_nic->ports[hw->idx] = NULL; rte_eth_dev_release_port(dev); @@ -300,15 +300,10 @@ nfp_net_close(struct rte_eth_dev *dev) rte_intr_disable(pci_dev->intr_handle); - /* unregister callback func from eal lib */ + /* Unregister callback func from eal lib */ rte_intr_callback_unregister(pci_dev->intr_handle, nfp_net_dev_interrupt_handler, (void *)dev); - /* - * The ixgbe PMD disables the pcie master on the - * device. The i40e does not... - */ - return 0; } @@ -497,7 +492,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev) /* * Use PF array of physical ports to get pointer to - * this specific port + * this specific port. */ hw = app_fw_nic->ports[port]; @@ -779,7 +774,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev, /* * For coreNIC the number of vNICs exposed should be the same as the - * number of physical ports + * number of physical ports. 
*/ if (total_vnics != nfp_eth_table->count) { PMD_INIT_LOG(ERR, "Total physical ports do not match number of vNICs"); @@ -787,7 +782,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev, goto app_cleanup; } - /* Populate coreNIC app properties*/ + /* Populate coreNIC app properties */ app_fw_nic->total_phyports = total_vnics; app_fw_nic->pf_dev = pf_dev; if (total_vnics > 1) @@ -842,8 +837,9 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev, eth_dev->device = &pf_dev->pci_dev->device; - /* ctrl/tx/rx BAR mappings and remaining init happens in - * nfp_net_init + /* + * Ctrl/tx/rx BAR mappings and remaining init happens in + * @nfp_net_init() */ ret = nfp_net_init(eth_dev); if (ret != 0) { @@ -970,7 +966,7 @@ nfp_pf_init(struct rte_pci_device *pci_dev) pf_dev->pci_dev = pci_dev; pf_dev->nfp_eth_table = nfp_eth_table; - /* configure access to tx/rx vNIC BARs */ + /* Configure access to tx/rx vNIC BARs */ addr = nfp_qcp_queue_offset(dev_info, 0); cpp_id = NFP_CPP_ISLAND_ID(0, NFP_CPP_ACTION_RW, 0, 0); @@ -986,7 +982,7 @@ nfp_pf_init(struct rte_pci_device *pci_dev) /* * PF initialization has been done at this point. Call app specific - * init code now + * init code now. */ switch (pf_dev->app_fw_id) { case NFP_APP_FW_CORE_NIC: @@ -1011,7 +1007,7 @@ nfp_pf_init(struct rte_pci_device *pci_dev) goto hwqueues_cleanup; } - /* register the CPP bridge service here for primary use */ + /* Register the CPP bridge service here for primary use */ ret = nfp_enable_cpp_service(pf_dev); if (ret != 0) PMD_INIT_LOG(INFO, "Enable cpp service failed."); @@ -1079,7 +1075,7 @@ nfp_secondary_init_app_fw_nic(struct rte_pci_device *pci_dev, /* * When attaching to the NFP4000/6000 PF on a secondary process there * is no need to initialise the PF again. Only minimal work is required - * here + * here. 
*/ static int nfp_pf_secondary_init(struct rte_pci_device *pci_dev) @@ -1119,7 +1115,7 @@ nfp_pf_secondary_init(struct rte_pci_device *pci_dev) /* * We don't have access to the PF created in the primary process - * here so we have to read the number of ports from firmware + * here so we have to read the number of ports from firmware. */ sym_tbl = nfp_rtsym_table_read(cpp); if (sym_tbl == NULL) { @@ -1216,7 +1212,7 @@ nfp_pci_uninit(struct rte_eth_dev *eth_dev) rte_eth_dev_close(port_id); /* * Ports can be closed and freed but hotplugging is not - * currently supported + * currently supported. */ return -ENOTSUP; } diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c index d3c3c9e953..c9e72dd953 100644 --- a/drivers/net/nfp/nfp_ethdev_vf.c +++ b/drivers/net/nfp/nfp_ethdev_vf.c @@ -47,12 +47,12 @@ nfp_netvf_start(struct rte_eth_dev *dev) /* Enabling the required queues in the device */ nfp_net_enable_queues(dev); - /* check and configure queue intr-vector mapping */ + /* Check and configure queue intr-vector mapping */ if (dev->data->dev_conf.intr_conf.rxq != 0) { if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) { /* * Better not to share LSC with RX interrupts. - * Unregistering LSC interrupt handler + * Unregistering LSC interrupt handler. */ rte_intr_callback_unregister(pci_dev->intr_handle, nfp_net_dev_interrupt_handler, (void *)dev); @@ -101,7 +101,7 @@ nfp_netvf_start(struct rte_eth_dev *dev) /* * Allocating rte mbufs for configured rx queues. - * This requires queues being enabled before + * This requires queues being enabled before. 
*/ if (nfp_net_rx_freelist_setup(dev) != 0) { ret = -ENOMEM; @@ -182,18 +182,13 @@ nfp_netvf_close(struct rte_eth_dev *dev) rte_intr_disable(pci_dev->intr_handle); - /* unregister callback func from eal lib */ + /* Unregister callback func from eal lib */ rte_intr_callback_unregister(pci_dev->intr_handle, nfp_net_dev_interrupt_handler, (void *)dev); - /* Cancel possible impending LSC work here before releasing the port*/ + /* Cancel possible impending LSC work here before releasing the port */ rte_eal_alarm_cancel(nfp_net_dev_interrupt_delayed_handler, (void *)dev); - /* - * The ixgbe PMD disables the pcie master on the - * device. The i40e does not... - */ - return 0; } diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c index 84b48daf85..fbcdb3d19e 100644 --- a/drivers/net/nfp/nfp_flow.c +++ b/drivers/net/nfp/nfp_flow.c @@ -108,21 +108,21 @@ #define NVGRE_V4_LEN (sizeof(struct rte_ether_hdr) + \ sizeof(struct rte_ipv4_hdr) + \ sizeof(struct rte_flow_item_gre) + \ - sizeof(rte_be32_t)) /* gre key */ + sizeof(rte_be32_t)) /* GRE key */ #define NVGRE_V6_LEN (sizeof(struct rte_ether_hdr) + \ sizeof(struct rte_ipv6_hdr) + \ sizeof(struct rte_flow_item_gre) + \ - sizeof(rte_be32_t)) /* gre key */ + sizeof(rte_be32_t)) /* GRE key */ /* Process structure associated with a flow item */ struct nfp_flow_item_proc { - /* Bit-mask for fields supported by this PMD. */ + /** Bit-mask for fields supported by this PMD. */ const void *mask_support; - /* Bit-mask to use when @p item->mask is not provided. */ + /** Bit-mask to use when @p item->mask is not provided. */ const void *mask_default; - /* Size in bytes for @p mask_support and @p mask_default. */ + /** Size in bytes for @p mask_support and @p mask_default. */ const size_t mask_sz; - /* Merge a pattern item into a flow rule handle. */ + /** Merge a pattern item into a flow rule handle. 
*/ int (*merge)(struct nfp_app_fw_flower *app_fw_flower, struct rte_flow *nfp_flow, char **mbuf_off, @@ -130,7 +130,7 @@ struct nfp_flow_item_proc { const struct nfp_flow_item_proc *proc, bool is_mask, bool is_outer_layer); - /* List of possible subsequent items. */ + /** List of possible subsequent items. */ const enum rte_flow_item_type *const next_item; }; @@ -308,12 +308,12 @@ nfp_check_mask_add(struct nfp_flow_priv *priv, mask_entry = nfp_mask_table_search(priv, mask_data, mask_len); if (mask_entry == NULL) { - /* mask entry does not exist, let's create one */ + /* Mask entry does not exist, let's create one */ ret = nfp_mask_table_add(priv, mask_data, mask_len, mask_id); if (ret != 0) return false; } else { - /* mask entry already exist */ + /* Mask entry already exist */ mask_entry->ref_cnt++; *mask_id = mask_entry->mask_id; } @@ -818,7 +818,7 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[], case RTE_FLOW_ITEM_TYPE_ETH: PMD_DRV_LOG(DEBUG, "RTE_FLOW_ITEM_TYPE_ETH detected"); /* - * eth is set with no specific params. + * Eth is set with no specific params. * NFP does not need this. */ if (item->spec == NULL) @@ -879,7 +879,7 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[], key_ls->key_size += sizeof(struct nfp_flower_ipv4_udp_tun); /* * The outer l3 layer information is - * in `struct nfp_flower_ipv4_udp_tun` + * in `struct nfp_flower_ipv4_udp_tun`. */ key_ls->key_size -= sizeof(struct nfp_flower_ipv4); } else if (outer_ip6_flag) { @@ -889,7 +889,7 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[], key_ls->key_size += sizeof(struct nfp_flower_ipv6_udp_tun); /* * The outer l3 layer information is - * in `struct nfp_flower_ipv6_udp_tun` + * in `struct nfp_flower_ipv6_udp_tun`. 
*/ key_ls->key_size -= sizeof(struct nfp_flower_ipv6); } else { @@ -910,7 +910,7 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[], key_ls->key_size += sizeof(struct nfp_flower_ipv4_udp_tun); /* * The outer l3 layer information is - * in `struct nfp_flower_ipv4_udp_tun` + * in `struct nfp_flower_ipv4_udp_tun`. */ key_ls->key_size -= sizeof(struct nfp_flower_ipv4); } else if (outer_ip6_flag) { @@ -918,7 +918,7 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[], key_ls->key_size += sizeof(struct nfp_flower_ipv6_udp_tun); /* * The outer l3 layer information is - * in `struct nfp_flower_ipv6_udp_tun` + * in `struct nfp_flower_ipv6_udp_tun`. */ key_ls->key_size -= sizeof(struct nfp_flower_ipv6); } else { @@ -939,7 +939,7 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[], key_ls->key_size += sizeof(struct nfp_flower_ipv4_gre_tun); /* * The outer l3 layer information is - * in `struct nfp_flower_ipv4_gre_tun` + * in `struct nfp_flower_ipv4_gre_tun`. */ key_ls->key_size -= sizeof(struct nfp_flower_ipv4); } else if (outer_ip6_flag) { @@ -947,7 +947,7 @@ nfp_flow_key_layers_calculate_items(const struct rte_flow_item items[], key_ls->key_size += sizeof(struct nfp_flower_ipv6_gre_tun); /* * The outer l3 layer information is - * in `struct nfp_flower_ipv6_gre_tun` + * in `struct nfp_flower_ipv6_gre_tun`. */ key_ls->key_size -= sizeof(struct nfp_flower_ipv6); } else { @@ -1309,8 +1309,8 @@ nfp_flow_merge_ipv4(__rte_unused struct nfp_app_fw_flower *app_fw_flower, } /* - * reserve space for L4 info. - * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv4 + * Reserve space for L4 info. + * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv4. 
*/ if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0) *mbuf_off += sizeof(struct nfp_flower_tp_ports); @@ -1392,8 +1392,8 @@ nfp_flow_merge_ipv6(__rte_unused struct nfp_app_fw_flower *app_fw_flower, } /* - * reserve space for L4 info. - * rte_flow has ipv4 before L4 but NFP flower fw requires L4 before ipv6 + * Reserve space for L4 info. + * rte_flow has ipv6 before L4 but NFP flower fw requires L4 before ipv6. */ if ((meta_tci->nfp_flow_key_layer & NFP_FLOWER_LAYER_TP) != 0) *mbuf_off += sizeof(struct nfp_flower_tp_ports); @@ -2127,7 +2127,7 @@ nfp_flow_compile_items(struct nfp_flower_representor *representor, if (nfp_flow_tcp_flag_check(items)) nfp_flow->tcp_flag = true; - /* Check if this is a tunnel flow and get the inner item*/ + /* Check if this is a tunnel flow and get the inner item */ is_tun_flow = nfp_flow_inner_item_get(items, &loop_item); if (is_tun_flow) is_outer_layer = false; @@ -3366,9 +3366,9 @@ nfp_flow_action_raw_encap(struct nfp_app_fw_flower *app_fw_flower, return -EINVAL; } - /* Pre_tunnel action must be the first on action list. - * If other actions already exist, they need to be - * pushed forward. + /* + * Pre_tunnel action must be the first on action list. + * If other actions already exist, they need to be pushed forward. 
*/ act_len = act_data - actions; if (act_len != 0) { @@ -4384,7 +4384,7 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev) goto free_mask_id; } - /* flow stats */ + /* Flow stats */ rte_spinlock_init(&priv->stats_lock); stats_size = (ctx_count & NFP_FL_STAT_ID_STAT) | ((ctx_split - 1) & NFP_FL_STAT_ID_MU_NUM); @@ -4398,7 +4398,7 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev) goto free_stats_id; } - /* mask table */ + /* Mask table */ mask_hash_params.hash_func_init_val = priv->hash_seed; priv->mask_table = rte_hash_create(&mask_hash_params); if (priv->mask_table == NULL) { @@ -4407,7 +4407,7 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev) goto free_stats; } - /* flow table */ + /* Flow table */ flow_hash_params.hash_func_init_val = priv->hash_seed; flow_hash_params.entries = ctx_count; priv->flow_table = rte_hash_create(&flow_hash_params); @@ -4417,7 +4417,7 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev) goto free_mask_table; } - /* pre tunnel table */ + /* Pre tunnel table */ priv->pre_tun_cnt = 1; pre_tun_hash_params.hash_func_init_val = priv->hash_seed; priv->pre_tun_table = rte_hash_create(&pre_tun_hash_params); @@ -4446,15 +4446,15 @@ nfp_flow_priv_init(struct nfp_pf_dev *pf_dev) goto free_ct_zone_table; } - /* ipv4 off list */ + /* IPv4 off list */ rte_spinlock_init(&priv->ipv4_off_lock); LIST_INIT(&priv->ipv4_off_list); - /* ipv6 off list */ + /* IPv6 off list */ rte_spinlock_init(&priv->ipv6_off_lock); LIST_INIT(&priv->ipv6_off_list); - /* neighbor next list */ + /* Neighbor next list */ LIST_INIT(&priv->nn_list); return 0; diff --git a/drivers/net/nfp/nfp_flow.h b/drivers/net/nfp/nfp_flow.h index ed06eca371..ab38dbe1f4 100644 --- a/drivers/net/nfp/nfp_flow.h +++ b/drivers/net/nfp/nfp_flow.h @@ -126,19 +126,19 @@ struct nfp_ipv6_addr_entry { struct nfp_flow_priv { uint32_t hash_seed; /**< Hash seed for hash tables in this structure. */ uint64_t flower_version; /**< Flow version, always increase. 
*/ - /* mask hash table */ + /* Mask hash table */ struct nfp_fl_mask_id mask_ids; /**< Entry for mask hash table */ struct rte_hash *mask_table; /**< Hash table to store mask ids. */ - /* flow hash table */ + /* Flow hash table */ struct rte_hash *flow_table; /**< Hash table to store flow rules. */ - /* flow stats */ + /* Flow stats */ uint32_t active_mem_unit; /**< The size of active mem units. */ uint32_t total_mem_units; /**< The size of total mem units. */ uint32_t stats_ring_size; /**< The size of stats id ring. */ struct nfp_fl_stats_id stats_ids; /**< The stats id ring. */ struct nfp_fl_stats *stats; /**< Store stats of flow. */ rte_spinlock_t stats_lock; /** < Lock the update of 'stats' field. */ - /* pre tunnel rule */ + /* Pre tunnel rule */ uint16_t pre_tun_cnt; /**< The size of pre tunnel rule */ uint8_t pre_tun_bitmap[NFP_TUN_PRE_TUN_RULE_LIMIT]; /**< Bitmap of pre tunnel rule */ struct rte_hash *pre_tun_table; /**< Hash table to store pre tunnel rule */ @@ -148,7 +148,7 @@ struct nfp_flow_priv { /* IPv6 off */ LIST_HEAD(, nfp_ipv6_addr_entry) ipv6_off_list; /**< Store ipv6 off */ rte_spinlock_t ipv6_off_lock; /**< Lock the ipv6 off list */ - /* neighbor next */ + /* Neighbor next */ LIST_HEAD(, nfp_fl_tun)nn_list; /**< Store nn entry */ /* Conntrack */ struct rte_hash *ct_zone_table; /**< Hash table to store ct zone entry */ diff --git a/drivers/net/nfp/nfp_ipsec.h b/drivers/net/nfp/nfp_ipsec.h index aaebb80fe1..d7a729398a 100644 --- a/drivers/net/nfp/nfp_ipsec.h +++ b/drivers/net/nfp/nfp_ipsec.h @@ -82,7 +82,7 @@ struct ipsec_discard_stats { uint32_t discards_alignment; /**< Alignment error */ uint32_t discards_hard_bytelimit; /**< Hard byte Count limit */ uint32_t discards_seq_num_wrap; /**< Sequ Number wrap */ - uint32_t discards_pmtu_exceeded; /**< PMTU Limit exceeded*/ + uint32_t discards_pmtu_exceeded; /**< PMTU Limit exceeded */ uint32_t discards_arw_old_seq; /**< Anti-Replay seq small */ uint32_t discards_arw_replay; /**< Anti-Replay seq rcvd 
*/ uint32_t discards_ctrl_word; /**< Bad SA Control word */ @@ -99,16 +99,16 @@ struct ipsec_discard_stats { struct ipsec_get_sa_stats { uint32_t seq_lo; /**< Sequence Number (low 32bits) */ - uint32_t seq_high; /**< Sequence Number (high 32bits)*/ + uint32_t seq_high; /**< Sequence Number (high 32bits) */ uint32_t arw_counter_lo; /**< Anti-replay wndw cntr */ uint32_t arw_counter_high; /**< Anti-replay wndw cntr */ uint32_t arw_bitmap_lo; /**< Anti-replay wndw bitmap */ uint32_t arw_bitmap_high; /**< Anti-replay wndw bitmap */ uint32_t spare:1; - uint32_t soft_byte_exceeded :1; /**< Soft lifetime byte cnt exceeded*/ - uint32_t hard_byte_exceeded :1; /**< Hard lifetime byte cnt exceeded*/ - uint32_t soft_time_exceeded :1; /**< Soft lifetime time limit exceeded*/ - uint32_t hard_time_exceeded :1; /**< Hard lifetime time limit exceeded*/ + uint32_t soft_byte_exceeded :1; /**< Soft lifetime byte cnt exceeded */ + uint32_t hard_byte_exceeded :1; /**< Hard lifetime byte cnt exceeded */ + uint32_t soft_time_exceeded :1; /**< Soft lifetime time limit exceeded */ + uint32_t hard_time_exceeded :1; /**< Hard lifetime time limit exceeded */ uint32_t spare1:27; uint32_t lifetime_byte_count; uint32_t pkt_count; diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c index 5bfdfd28b3..d506682b56 100644 --- a/drivers/net/nfp/nfp_rxtx.c +++ b/drivers/net/nfp/nfp_rxtx.c @@ -20,43 +20,22 @@ /* Maximum number of supported VLANs in parsed form packet metadata. */ #define NFP_META_MAX_VLANS 2 -/* - * struct nfp_meta_parsed - Record metadata parsed from packet - * - * Parsed NFP packet metadata are recorded in this struct. The content is - * read-only after it have been recorded during parsing by nfp_net_parse_meta(). - * - * @port_id: Port id value - * @sa_idx: IPsec SA index - * @hash: RSS hash value - * @hash_type: RSS hash type - * @ipsec_type: IPsec type - * @vlan_layer: The layers of VLAN info which are passed from nic. 
- * Only this number of entries of the @vlan array are valid. - * - * @vlan: Holds information parses from NFP_NET_META_VLAN. The inner most vlan - * starts at position 0 and only @vlan_layer entries contain valid - * information. - * - * Currently only 2 layers of vlan are supported, - * vlan[0] - vlan strip info - * vlan[1] - qinq strip info - * - * @vlan.offload: Flag indicates whether VLAN is offloaded - * @vlan.tpid: Vlan TPID - * @vlan.tci: Vlan TCI including PCP + Priority + VID - */ +/* Record metadata parsed from packet */ struct nfp_meta_parsed { - uint32_t port_id; - uint32_t sa_idx; - uint32_t hash; - uint8_t hash_type; - uint8_t ipsec_type; - uint8_t vlan_layer; + uint32_t port_id; /**< Port id value */ + uint32_t sa_idx; /**< IPsec SA index */ + uint32_t hash; /**< RSS hash value */ + uint8_t hash_type; /**< RSS hash type */ + uint8_t ipsec_type; /**< IPsec type */ + uint8_t vlan_layer; /**< The valid number of entries in @vlan[] */ + /** + * Holds information parsed from NFP_NET_META_VLAN. + * The innermost vlan starts at position 0. + */ struct { + uint8_t offload; /**< Flag indicates whether VLAN is offloaded */ + uint8_t tpid; /**< Vlan TPID */ + uint16_t tci; /**< Vlan TCI (PCP + Priority + VID) */ } vlan[NFP_META_MAX_VLANS]; }; @@ -156,7 +135,7 @@ struct nfp_ptype_parsed { uint8_t outer_l3_ptype; /**< Packet type of outer layer 3. */ }; -/* set mbuf checksum flags based on RX descriptor flags */ +/* Set mbuf checksum flags based on RX descriptor flags */ void nfp_net_rx_cksum(struct nfp_net_rxq *rxq, struct nfp_net_rx_desc *rxd, @@ -254,7 +233,7 @@ nfp_net_rx_queue_count(void *rx_queue) * descriptors and counting all four if the first has the DD * bit on. Of course, this is not accurate but can be good for * performance. But ideally that should be done in descriptors - * chunks belonging to the same cache line. 
*/ while (count < rxq->rx_count) { @@ -265,7 +244,7 @@ nfp_net_rx_queue_count(void *rx_queue) count++; idx++; - /* Wrapping? */ + /* Wrapping */ if ((idx) == rxq->rx_count) idx = 0; } @@ -273,7 +252,7 @@ return count; } -/* nfp_net_parse_chained_meta() - Parse the chained metadata from packet */ +/* Parse the chained metadata from packet */ static bool nfp_net_parse_chained_meta(uint8_t *meta_base, rte_be32_t meta_header, @@ -320,12 +299,7 @@ nfp_net_parse_chained_meta(uint8_t *meta_base, return true; } -/* - * nfp_net_parse_meta_hash() - Set mbuf hash data based on the metadata info - * - * The RSS hash and hash-type are prepended to the packet data. - * Extract and decode it and set the mbuf fields. - */ +/* Set mbuf hash data based on the metadata info */ static void nfp_net_parse_meta_hash(const struct nfp_meta_parsed *meta, struct nfp_net_rxq *rxq, @@ -341,7 +315,7 @@ nfp_net_parse_meta_hash(const struct nfp_meta_parsed *meta, } /* - * nfp_net_parse_single_meta() - Parse the single metadata + * Parse the single metadata * * The RSS hash and hash-type are prepended to the packet data. * Get it from metadata area. @@ -355,12 +329,7 @@ nfp_net_parse_single_meta(uint8_t *meta_base, meta->hash = rte_be_to_cpu_32(*(rte_be32_t *)(meta_base + 4)); } -/* - * nfp_net_parse_meta_vlan() - Set mbuf vlan_strip data based on metadata info - * - * The VLAN info TPID and TCI are prepended to the packet data. - * Extract and decode it and set the mbuf fields. - */ +/* Set mbuf vlan_strip data based on metadata info */ static void nfp_net_parse_meta_vlan(const struct nfp_meta_parsed *meta, struct nfp_net_rx_desc *rxd, @@ -369,19 +338,14 @@ nfp_net_parse_meta_vlan(const struct nfp_meta_parsed *meta, { struct nfp_net_hw *hw = rxq->hw; - /* Skip if hardware don't support setting vlan. */ + /* Skip if firmware doesn't support setting vlan. 
*/ if ((hw->ctrl & (NFP_NET_CFG_CTRL_RXVLAN | NFP_NET_CFG_CTRL_RXVLAN_V2)) == 0) return; /* - * The nic support the two way to send the VLAN info, - * 1. According the metadata to send the VLAN info when NFP_NET_CFG_CTRL_RXVLAN_V2 - * is set - * 2. According the descriptor to sned the VLAN info when NFP_NET_CFG_CTRL_RXVLAN - * is set - * - * If the nic doesn't send the VLAN info, it is not necessary - * to do anything. + * The firmware supports two ways to send the VLAN info (with priority): + * 1. Using the metadata when NFP_NET_CFG_CTRL_RXVLAN_V2 is set, + * 2. Using the descriptor when NFP_NET_CFG_CTRL_RXVLAN is set. */ if ((hw->ctrl & NFP_NET_CFG_CTRL_RXVLAN_V2) != 0) { if (meta->vlan_layer > 0 && meta->vlan[0].offload != 0) { @@ -397,7 +361,7 @@ nfp_net_parse_meta_vlan(const struct nfp_meta_parsed *meta, } /* - * nfp_net_parse_meta_qinq() - Set mbuf qinq_strip data based on metadata info + * Set mbuf qinq_strip data based on metadata info * * The out VLAN tci are prepended to the packet data. * Extract and decode it and set the mbuf fields. @@ -469,7 +433,7 @@ nfp_net_parse_meta_ipsec(struct nfp_meta_parsed *meta, } } -/* nfp_net_parse_meta() - Parse the metadata from packet */ +/* Parse the metadata from packet */ static void nfp_net_parse_meta(struct nfp_net_rx_desc *rxds, struct nfp_net_rxq *rxq, @@ -672,7 +636,7 @@ nfp_net_parse_ptype(struct nfp_net_rx_desc *rxds, * doing now have any benefit at all. Again, tests with this change have not * shown any improvement. Also, rte_mempool_get_bulk returns all or nothing * so looking at the implications of this type of allocation should be studied - * deeply + * deeply. 
*/ PMD_RX_LOG(ERR, "RX Bad queue"); return 0; @@ -722,7 +686,7 @@ nfp_net_recv_pkts(void *rx_queue, /* * We got a packet. Let's alloc a new mbuf for refilling the - * free descriptor ring as soon as possible + * free descriptor ring as soon as possible. */ new_mb = rte_pktmbuf_alloc(rxq->mem_pool); if (unlikely(new_mb == NULL)) { @@ -734,7 +698,7 @@ nfp_net_recv_pkts(void *rx_queue, /* * Grab the mbuf and refill the descriptor with the - * previously allocated mbuf + * previously allocated mbuf. */ mb = rxb->mbuf; rxb->mbuf = new_mb; @@ -751,7 +715,7 @@ nfp_net_recv_pkts(void *rx_queue, /* * This should not happen and the user has the * responsibility of avoiding it. But we have - * to give some info about the error + * to give some info about the error. */ PMD_RX_LOG(ERR, "mbuf overflow likely due to the RX offset.\n" "\t\tYour mbuf size should have extra space for" @@ -803,7 +767,7 @@ nfp_net_recv_pkts(void *rx_queue, nb_hold++; rxq->rd_p++; - if (unlikely(rxq->rd_p == rxq->rx_count)) /* wrapping?*/ + if (unlikely(rxq->rd_p == rxq->rx_count)) /* Wrapping */ rxq->rd_p = 0; } @@ -817,7 +781,7 @@ nfp_net_recv_pkts(void *rx_queue, /* * FL descriptors needs to be written before incrementing the - * FL queue WR pointer + * FL queue WR pointer. */ rte_wmb(); if (nb_hold > rxq->rx_free_thresh) { @@ -898,7 +862,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, /* * Free memory prior to re-allocation if needed. This is the case after - * calling nfp_net_stop + * calling @nfp_net_stop(). */ if (dev->data->rx_queues[queue_idx] != NULL) { nfp_net_rx_queue_release(dev, queue_idx); @@ -920,7 +884,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, /* * Tracking mbuf size for detecting a potential mbuf overflow due to - * RX offset + * RX offset. 
*/ rxq->mem_pool = mp; rxq->mbuf_size = rxq->mem_pool->elt_size; @@ -951,7 +915,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, rxq->dma = (uint64_t)tz->iova; rxq->rxds = tz->addr; - /* mbuf pointers array for referencing mbufs linked to RX descriptors */ + /* Mbuf pointers array for referencing mbufs linked to RX descriptors */ rxq->rxbufs = rte_zmalloc_socket("rxq->rxbufs", sizeof(*rxq->rxbufs) * nb_desc, RTE_CACHE_LINE_SIZE, socket_id); @@ -967,7 +931,7 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, /* * Telling the HW about the physical address of the RX ring and number - * of descriptors in log2 format + * of descriptors in log2 format. */ nn_cfg_writeq(hw, NFP_NET_CFG_RXR_ADDR(queue_idx), rxq->dma); nn_cfg_writeb(hw, NFP_NET_CFG_RXR_SZ(queue_idx), rte_log2_u32(nb_desc)); @@ -975,11 +939,14 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, return 0; } -/* - * nfp_net_tx_free_bufs - Check for descriptors with a complete - * status - * @txq: TX queue to work with - * Returns number of descriptors freed +/** + * Check for descriptors with a complete status + * + * @param txq + * TX queue to work with + * + * @return + * Number of descriptors freed */ uint32_t nfp_net_tx_free_bufs(struct nfp_net_txq *txq) diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h index 98ef6c3d93..899cc42c97 100644 --- a/drivers/net/nfp/nfp_rxtx.h +++ b/drivers/net/nfp/nfp_rxtx.h @@ -19,21 +19,11 @@ /* Maximum number of NFP packet metadata fields. */ #define NFP_META_MAX_FIELDS 8 -/* - * struct nfp_net_meta_raw - Raw memory representation of packet metadata - * - * Describe the raw metadata format, useful when preparing metadata for a - * transmission mbuf. - * - * @header: NFD3 or NFDk field type header (see format in nfp.rst) - * @data: Array of each fields data member - * @length: Keep track of number of valid fields in @header and data. Not part - * of the raw metadata. - */ +/* Describe the raw metadata format. 
*/ struct nfp_net_meta_raw { - uint32_t header; - uint32_t data[NFP_META_MAX_FIELDS]; - uint8_t length; + uint32_t header; /**< Field type header (see format in nfp.rst) */ + uint32_t data[NFP_META_MAX_FIELDS]; /**< Array of each fields data member */ + uint8_t length; /**< Number of valid fields in @header */ }; /* Descriptor alignment */

From patchwork Thu Oct 12 01:27:00 2023
X-Patchwork-Submitter: Chaoyong He
X-Patchwork-Id: 132564
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Chaoyong He
To: dev@dpdk.org
Cc: oss-drivers@corigine.com, Chaoyong He, Long Wu, Peng Zhang
Subject: [PATCH v2 07/11] net/nfp: standard the blank character
Date: Thu, 12 Oct 2023 09:27:00 +0800
Message-Id: <20231012012704.483828-8-chaoyong.he@corigine.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20231012012704.483828-1-chaoyong.he@corigine.com>
References: <20231007023339.1546659-1-chaoyong.he@corigine.com> <20231012012704.483828-1-chaoyong.he@corigine.com>
Use the space character to align instead of the TAB character. There
should be one blank line to split the blocks of logic, no more, no less.

Signed-off-by: Chaoyong He
Reviewed-by: Long Wu
Reviewed-by: Peng Zhang
---
 drivers/net/nfp/nfp_common.c | 39 +++++++++--------
 drivers/net/nfp/nfp_common.h | 6 +--
 drivers/net/nfp/nfp_cpp_bridge.c | 5 +++
 drivers/net/nfp/nfp_ctrl.h | 6 +--
 drivers/net/nfp/nfp_ethdev.c | 58 +++++++++++++-------------
 drivers/net/nfp/nfp_ethdev_vf.c | 49 +++++++++++-----------
 drivers/net/nfp/nfp_flow.c | 27 +++++++-----
 drivers/net/nfp/nfp_flow.h | 7 ++++
 drivers/net/nfp/nfp_rxtx.c | 7 ++--
 drivers/net/nfp/nfpcore/nfp_resource.h | 2 +-
 10 files changed, 114 insertions(+), 92 deletions(-)

diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c index 130f004b4d..a102c6f272 100644 --- a/drivers/net/nfp/nfp_common.c +++ b/drivers/net/nfp/nfp_common.c @@ -36,6 +36,7 @@ enum nfp_xstat_group { NFP_XSTAT_GROUP_NET, NFP_XSTAT_GROUP_MAC }; + struct nfp_xstat { char name[RTE_ETH_XSTATS_NAME_SIZE]; int offset; @@ -184,6 +185,7 @@ nfp_net_notify_port_speed(struct nfp_net_hw *hw, nn_cfg_writew(hw, NFP_NET_CFG_STS_NSP_LINK_RATE, NFP_NET_CFG_STS_LINK_RATE_UNKNOWN); return; } + /* * Link is up so write the link speed from the eth_table to * NFP_NET_CFG_STS_NSP_LINK_RATE.
@@ -223,17 +225,21 @@ __nfp_net_reconfig(struct nfp_net_hw *hw, new = nn_cfg_readl(hw, NFP_NET_CFG_UPDATE); if (new == 0) break; + if ((new & NFP_NET_CFG_UPDATE_ERR) != 0) { PMD_DRV_LOG(ERR, "Reconfig error: %#08x", new); return -1; } + if (cnt >= NFP_NET_POLL_TIMEOUT) { PMD_DRV_LOG(ERR, "Reconfig timeout for %#08x after %u ms", update, cnt); return -EIO; } + nanosleep(&wait, 0); /* Waiting for a 1ms */ } + PMD_DRV_LOG(DEBUG, "Ack DONE"); return 0; } @@ -387,7 +393,6 @@ nfp_net_configure(struct rte_eth_dev *dev) struct rte_eth_txmode *txmode; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - dev_conf = &dev->data->dev_conf; rxmode = &dev_conf->rxmode; txmode = &dev_conf->txmode; @@ -560,11 +565,13 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev, if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 && (hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0) ctrl |= NFP_NET_CFG_CTRL_LIVE_ADDR; + /* Signal the NIC about the change */ if (nfp_net_reconfig(hw, ctrl, update) != 0) { PMD_DRV_LOG(ERR, "MAC address update failed"); return -EIO; } + return 0; } @@ -832,13 +839,11 @@ nfp_net_stats_get(struct rte_eth_dev *dev, nfp_dev_stats.q_ipackets[i] = nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i)); - nfp_dev_stats.q_ipackets[i] -= hw->eth_stats_base.q_ipackets[i]; nfp_dev_stats.q_ibytes[i] = nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8); - nfp_dev_stats.q_ibytes[i] -= hw->eth_stats_base.q_ibytes[i]; } @@ -850,42 +855,34 @@ nfp_net_stats_get(struct rte_eth_dev *dev, nfp_dev_stats.q_opackets[i] = nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i)); - nfp_dev_stats.q_opackets[i] -= hw->eth_stats_base.q_opackets[i]; nfp_dev_stats.q_obytes[i] = nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8); - nfp_dev_stats.q_obytes[i] -= hw->eth_stats_base.q_obytes[i]; } nfp_dev_stats.ipackets = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES); - nfp_dev_stats.ipackets -= hw->eth_stats_base.ipackets; nfp_dev_stats.ibytes = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS); - nfp_dev_stats.ibytes -= 
hw->eth_stats_base.ibytes; nfp_dev_stats.opackets = nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES); - nfp_dev_stats.opackets -= hw->eth_stats_base.opackets; nfp_dev_stats.obytes = nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS); - nfp_dev_stats.obytes -= hw->eth_stats_base.obytes; /* Reading general device stats */ nfp_dev_stats.ierrors = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS); - nfp_dev_stats.ierrors -= hw->eth_stats_base.ierrors; nfp_dev_stats.oerrors = nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS); - nfp_dev_stats.oerrors -= hw->eth_stats_base.oerrors; /* RX ring mbuf allocation failures */ @@ -893,7 +890,6 @@ nfp_net_stats_get(struct rte_eth_dev *dev, nfp_dev_stats.imissed = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS); - nfp_dev_stats.imissed -= hw->eth_stats_base.imissed; if (stats != NULL) { @@ -981,6 +977,7 @@ nfp_net_xstats_size(const struct rte_eth_dev *dev) if (nfp_net_xstats[count].group == NFP_XSTAT_GROUP_MAC) break; } + return count; } @@ -1154,6 +1151,7 @@ nfp_net_xstats_reset(struct rte_eth_dev *dev) hw->eth_xstats_base[id].id = id; hw->eth_xstats_base[id].value = nfp_net_xstats_value(dev, id, true); } + /* Successfully reset xstats, now call function to reset basic stats. */ return nfp_net_stats_reset(dev); } @@ -1201,6 +1199,7 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->max_rx_queues = (uint16_t)hw->max_rx_queues; dev_info->max_tx_queues = (uint16_t)hw->max_tx_queues; dev_info->min_rx_bufsize = RTE_ETHER_MIN_MTU; + /** * The maximum rx packet length (max_rx_pktlen) is set to the * maximum supported frame size that the NFP can handle. 
This @@ -1368,6 +1367,7 @@ nfp_net_supported_ptypes_get(struct rte_eth_dev *dev) if (dev->rx_pkt_burst == nfp_net_recv_pkts) return ptypes; + return NULL; } @@ -1381,7 +1381,6 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev, hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); pci_dev = RTE_ETH_DEV_TO_PCI(dev); - if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO) base = 1; @@ -1402,7 +1401,6 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev, hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); pci_dev = RTE_ETH_DEV_TO_PCI(dev); - if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO) base = 1; @@ -1619,11 +1617,11 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev, idx = i / RTE_ETH_RETA_GROUP_SIZE; shift = i % RTE_ETH_RETA_GROUP_SIZE; mask = (uint8_t)((reta_conf[idx].mask >> shift) & 0xF); - if (mask == 0) continue; reta = 0; + /* If all 4 entries were set, don't need read RETA register */ if (mask != 0xF) reta = nn_cfg_readl(hw, NFP_NET_CFG_RSS_ITBL + i); @@ -1631,13 +1629,17 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev, for (j = 0; j < 4; j++) { if ((mask & (0x1 << j)) == 0) continue; + /* Clearing the entry bits */ if (mask != 0xF) reta &= ~(0xFF << (8 * j)); + reta |= reta_conf[idx].reta[shift + j] << (8 * j); } + nn_cfg_writel(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) + shift, reta); } + return 0; } @@ -1682,7 +1684,6 @@ nfp_net_reta_query(struct rte_eth_dev *dev, struct nfp_net_hw *hw; hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private); - if ((hw->ctrl & NFP_NET_CFG_CTRL_RSS_ANY) == 0) return -EINVAL; @@ -1710,10 +1711,12 @@ nfp_net_reta_query(struct rte_eth_dev *dev, for (j = 0; j < 4; j++) { if ((mask & (0x1 << j)) == 0) continue; + reta_conf[idx].reta[shift + j] = (uint8_t)((reta >> (8 * j)) & 0xF); } } + return 0; } @@ -1791,6 +1794,7 @@ nfp_net_rss_hash_update(struct rte_eth_dev *dev, PMD_DRV_LOG(ERR, "RSS unsupported"); return -EINVAL; } + return 0; /* Nothing to do */ } @@ -1888,6 +1892,7 @@ 
nfp_net_rss_config_default(struct rte_eth_dev *dev) queue %= rx_queues; } } + ret = nfp_net_rss_reta_write(dev, nfp_reta_conf, 0x80); if (ret != 0) return ret; @@ -1897,8 +1902,8 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev) PMD_DRV_LOG(ERR, "Wrong rss conf"); return -EINVAL; } - rss_conf = dev_conf->rx_adv_conf.rss_conf; + rss_conf = dev_conf->rx_adv_conf.rss_conf; ret = nfp_net_rss_hash_write(dev, &rss_conf); return ret; diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h index 6a36e2b04c..5439865c5e 100644 --- a/drivers/net/nfp/nfp_common.h +++ b/drivers/net/nfp/nfp_common.h @@ -32,7 +32,7 @@ #define DEFAULT_RX_HTHRESH 8 #define DEFAULT_RX_WTHRESH 0 -#define DEFAULT_TX_RS_THRESH 32 +#define DEFAULT_TX_RS_THRESH 32 #define DEFAULT_TX_FREE_THRESH 32 #define DEFAULT_TX_PTHRESH 32 #define DEFAULT_TX_HTHRESH 0 @@ -40,12 +40,12 @@ #define DEFAULT_TX_RSBIT_THRESH 32 /* Alignment for dma zones */ -#define NFP_MEMZONE_ALIGN 128 +#define NFP_MEMZONE_ALIGN 128 #define NFP_QCP_QUEUE_ADDR_SZ (0x800) /* Number of supported physical ports */ -#define NFP_MAX_PHYPORTS 12 +#define NFP_MAX_PHYPORTS 12 /* Firmware application ID's */ enum nfp_app_fw_id { diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c index 8f5271cde9..bb2a6fdcda 100644 --- a/drivers/net/nfp/nfp_cpp_bridge.c +++ b/drivers/net/nfp/nfp_cpp_bridge.c @@ -191,6 +191,7 @@ nfp_cpp_bridge_serve_write(int sockfd, nfp_cpp_area_free(area); return -EIO; } + err = nfp_cpp_area_write(area, pos, tmpbuf, len); if (err < 0) { PMD_CPP_LOG(ERR, "nfp_cpp_area_write error"); @@ -312,6 +313,7 @@ nfp_cpp_bridge_serve_read(int sockfd, curlen = (count > NFP_CPP_MEMIO_BOUNDARY) ? NFP_CPP_MEMIO_BOUNDARY : count; } + return 0; } @@ -393,6 +395,7 @@ nfp_cpp_bridge_service_func(void *args) struct timeval timeout = {1, 0}; unlink("/tmp/nfp_cpp"); + sockfd = socket(AF_UNIX, SOCK_STREAM, 0); if (sockfd < 0) { PMD_CPP_LOG(ERR, "socket creation error. 
Service failed"); @@ -456,8 +459,10 @@ nfp_cpp_bridge_service_func(void *args) if (op == 0) break; } + close(datafd); } + close(sockfd); return 0; diff --git a/drivers/net/nfp/nfp_ctrl.h b/drivers/net/nfp/nfp_ctrl.h index cd0a2f92a8..5cc83ff3e6 100644 --- a/drivers/net/nfp/nfp_ctrl.h +++ b/drivers/net/nfp/nfp_ctrl.h @@ -208,8 +208,8 @@ struct nfp_net_fw_ver { /* * NFP6000/NFP4000 - Prepend configuration */ -#define NFP_NET_CFG_RX_OFFSET 0x0050 -#define NFP_NET_CFG_RX_OFFSET_DYNAMIC 0 /* Prepend mode */ +#define NFP_NET_CFG_RX_OFFSET 0x0050 +#define NFP_NET_CFG_RX_OFFSET_DYNAMIC 0 /* Prepend mode */ /* Start anchor of the TLV area */ #define NFP_NET_CFG_TLV_BASE 0x0058 @@ -442,7 +442,7 @@ struct nfp_net_fw_ver { #define NFP_MAC_STATS_TX_PAUSE_FRAMES_CLASS6 (NFP_MAC_STATS_BASE + 0x1f0) #define NFP_MAC_STATS_TX_PAUSE_FRAMES_CLASS7 (NFP_MAC_STATS_BASE + 0x1f8) -#define NFP_PF_CSR_SLICE_SIZE (32 * 1024) +#define NFP_PF_CSR_SLICE_SIZE (32 * 1024) /* * General use mailbox area (0x1800 - 0x19ff) diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c index 1651ac2455..b65c2c1fe0 100644 --- a/drivers/net/nfp/nfp_ethdev.c +++ b/drivers/net/nfp/nfp_ethdev.c @@ -36,6 +36,7 @@ nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic, rte_ether_addr_copy(&nfp_eth_table->ports[port].mac_addr, &hw->mac_addr); free(nfp_eth_table); + return 0; } @@ -73,6 +74,7 @@ nfp_net_start(struct rte_eth_dev *dev) "with NFP multiport PF"); return -EINVAL; } + if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) { /* * Better not to share LSC with RX interrupts. 
@@ -87,6 +89,7 @@ nfp_net_start(struct rte_eth_dev *dev) return -EIO; } } + intr_vector = dev->data->nb_rx_queues; if (rte_intr_efd_enable(intr_handle, intr_vector) != 0) return -1; @@ -198,7 +201,6 @@ nfp_net_stop(struct rte_eth_dev *dev) /* Clear queues */ nfp_net_stop_tx_queue(dev); - nfp_net_stop_rx_queue(dev); if (rte_eal_process_type() == RTE_PROC_PRIMARY) @@ -262,12 +264,10 @@ nfp_net_close(struct rte_eth_dev *dev) * We assume that the DPDK application is stopping all the * threads/queues before calling the device close function. */ - nfp_net_disable_queues(dev); /* Clear queues */ nfp_net_close_tx_queue(dev); - nfp_net_close_rx_queue(dev); /* Clear ipsec */ @@ -413,35 +413,35 @@ nfp_udp_tunnel_port_del(struct rte_eth_dev *dev, /* Initialise and register driver with DPDK Application */ static const struct eth_dev_ops nfp_net_eth_dev_ops = { - .dev_configure = nfp_net_configure, - .dev_start = nfp_net_start, - .dev_stop = nfp_net_stop, - .dev_set_link_up = nfp_net_set_link_up, - .dev_set_link_down = nfp_net_set_link_down, - .dev_close = nfp_net_close, - .promiscuous_enable = nfp_net_promisc_enable, - .promiscuous_disable = nfp_net_promisc_disable, - .link_update = nfp_net_link_update, - .stats_get = nfp_net_stats_get, - .stats_reset = nfp_net_stats_reset, + .dev_configure = nfp_net_configure, + .dev_start = nfp_net_start, + .dev_stop = nfp_net_stop, + .dev_set_link_up = nfp_net_set_link_up, + .dev_set_link_down = nfp_net_set_link_down, + .dev_close = nfp_net_close, + .promiscuous_enable = nfp_net_promisc_enable, + .promiscuous_disable = nfp_net_promisc_disable, + .link_update = nfp_net_link_update, + .stats_get = nfp_net_stats_get, + .stats_reset = nfp_net_stats_reset, .xstats_get = nfp_net_xstats_get, .xstats_reset = nfp_net_xstats_reset, .xstats_get_names = nfp_net_xstats_get_names, .xstats_get_by_id = nfp_net_xstats_get_by_id, .xstats_get_names_by_id = nfp_net_xstats_get_names_by_id, - .dev_infos_get = nfp_net_infos_get, + .dev_infos_get = 
nfp_net_infos_get, .dev_supported_ptypes_get = nfp_net_supported_ptypes_get, - .mtu_set = nfp_net_dev_mtu_set, - .mac_addr_set = nfp_net_set_mac_addr, - .vlan_offload_set = nfp_net_vlan_offload_set, - .reta_update = nfp_net_reta_update, - .reta_query = nfp_net_reta_query, - .rss_hash_update = nfp_net_rss_hash_update, - .rss_hash_conf_get = nfp_net_rss_hash_conf_get, - .rx_queue_setup = nfp_net_rx_queue_setup, - .rx_queue_release = nfp_net_rx_queue_release, - .tx_queue_setup = nfp_net_tx_queue_setup, - .tx_queue_release = nfp_net_tx_queue_release, + .mtu_set = nfp_net_dev_mtu_set, + .mac_addr_set = nfp_net_set_mac_addr, + .vlan_offload_set = nfp_net_vlan_offload_set, + .reta_update = nfp_net_reta_update, + .reta_query = nfp_net_reta_query, + .rss_hash_update = nfp_net_rss_hash_update, + .rss_hash_conf_get = nfp_net_rss_hash_conf_get, + .rx_queue_setup = nfp_net_rx_queue_setup, + .rx_queue_release = nfp_net_rx_queue_release, + .tx_queue_setup = nfp_net_tx_queue_setup, + .tx_queue_release = nfp_net_tx_queue_release, .rx_queue_intr_enable = nfp_rx_queue_intr_enable, .rx_queue_intr_disable = nfp_rx_queue_intr_disable, .udp_tunnel_port_add = nfp_udp_tunnel_port_add, @@ -501,7 +501,6 @@ nfp_net_init(struct rte_eth_dev *eth_dev) rte_eth_copy_pci_info(eth_dev, pci_dev); - hw->ctrl_bar = pci_dev->mem_resource[0].addr; if (hw->ctrl_bar == NULL) { PMD_DRV_LOG(ERR, "hw->ctrl_bar is NULL. BAR0 not configured"); @@ -519,10 +518,12 @@ nfp_net_init(struct rte_eth_dev *eth_dev) PMD_INIT_LOG(ERR, "nfp_rtsym_map fails for _mac_stats_bar"); return -EIO; } + hw->mac_stats = hw->mac_stats_bar; } else { if (pf_dev->ctrl_bar == NULL) return -ENODEV; + /* Use port offset in pf ctrl_bar for this ports control bar */ hw->ctrl_bar = pf_dev->ctrl_bar + (port * NFP_PF_CSR_SLICE_SIZE); hw->mac_stats = app_fw_nic->ports[0]->mac_stats_bar + (port * NFP_MAC_STATS_SIZE); @@ -557,7 +558,6 @@ nfp_net_init(struct rte_eth_dev *eth_dev) return -ENOMEM; } - /* Work out where in the BAR the queues start. 
*/ tx_base = nn_cfg_readl(hw, NFP_NET_CFG_START_TXQ); rx_base = nn_cfg_readl(hw, NFP_NET_CFG_START_RXQ); @@ -653,12 +653,12 @@ nfp_fw_upload(struct rte_pci_device *dev, "serial-%02x-%02x-%02x-%02x-%02x-%02x-%02x-%02x", cpp_serial[0], cpp_serial[1], cpp_serial[2], cpp_serial[3], cpp_serial[4], cpp_serial[5], interface >> 8, interface & 0xff); - snprintf(fw_name, sizeof(fw_name), "%s/%s.nffw", DEFAULT_FW_PATH, serial); PMD_DRV_LOG(DEBUG, "Trying with fw file: %s", fw_name); if (rte_firmware_read(fw_name, &fw_buf, &fsize) == 0) goto load_fw; + /* Then try the PCI name */ snprintf(fw_name, sizeof(fw_name), "%s/pci-%s.nffw", DEFAULT_FW_PATH, dev->name); diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c index c9e72dd953..7096695de6 100644 --- a/drivers/net/nfp/nfp_ethdev_vf.c +++ b/drivers/net/nfp/nfp_ethdev_vf.c @@ -63,6 +63,7 @@ nfp_netvf_start(struct rte_eth_dev *dev) return -EIO; } } + intr_vector = dev->data->nb_rx_queues; if (rte_intr_efd_enable(intr_handle, intr_vector) != 0) return -1; @@ -172,12 +173,10 @@ nfp_netvf_close(struct rte_eth_dev *dev) * We assume that the DPDK application is stopping all the * threads/queues before calling the device close function. 
*/ - nfp_net_disable_queues(dev); /* Clear queues */ nfp_net_close_tx_queue(dev); - nfp_net_close_rx_queue(dev); rte_intr_disable(pci_dev->intr_handle); @@ -194,35 +193,35 @@ nfp_netvf_close(struct rte_eth_dev *dev) /* Initialise and register VF driver with DPDK Application */ static const struct eth_dev_ops nfp_netvf_eth_dev_ops = { - .dev_configure = nfp_net_configure, - .dev_start = nfp_netvf_start, - .dev_stop = nfp_netvf_stop, - .dev_set_link_up = nfp_netvf_set_link_up, - .dev_set_link_down = nfp_netvf_set_link_down, - .dev_close = nfp_netvf_close, - .promiscuous_enable = nfp_net_promisc_enable, - .promiscuous_disable = nfp_net_promisc_disable, - .link_update = nfp_net_link_update, - .stats_get = nfp_net_stats_get, - .stats_reset = nfp_net_stats_reset, + .dev_configure = nfp_net_configure, + .dev_start = nfp_netvf_start, + .dev_stop = nfp_netvf_stop, + .dev_set_link_up = nfp_netvf_set_link_up, + .dev_set_link_down = nfp_netvf_set_link_down, + .dev_close = nfp_netvf_close, + .promiscuous_enable = nfp_net_promisc_enable, + .promiscuous_disable = nfp_net_promisc_disable, + .link_update = nfp_net_link_update, + .stats_get = nfp_net_stats_get, + .stats_reset = nfp_net_stats_reset, .xstats_get = nfp_net_xstats_get, .xstats_reset = nfp_net_xstats_reset, .xstats_get_names = nfp_net_xstats_get_names, .xstats_get_by_id = nfp_net_xstats_get_by_id, .xstats_get_names_by_id = nfp_net_xstats_get_names_by_id, - .dev_infos_get = nfp_net_infos_get, + .dev_infos_get = nfp_net_infos_get, .dev_supported_ptypes_get = nfp_net_supported_ptypes_get, - .mtu_set = nfp_net_dev_mtu_set, - .mac_addr_set = nfp_net_set_mac_addr, - .vlan_offload_set = nfp_net_vlan_offload_set, - .reta_update = nfp_net_reta_update, - .reta_query = nfp_net_reta_query, - .rss_hash_update = nfp_net_rss_hash_update, - .rss_hash_conf_get = nfp_net_rss_hash_conf_get, - .rx_queue_setup = nfp_net_rx_queue_setup, - .rx_queue_release = nfp_net_rx_queue_release, - .tx_queue_setup = nfp_net_tx_queue_setup, - 
.tx_queue_release = nfp_net_tx_queue_release, + .mtu_set = nfp_net_dev_mtu_set, + .mac_addr_set = nfp_net_set_mac_addr, + .vlan_offload_set = nfp_net_vlan_offload_set, + .reta_update = nfp_net_reta_update, + .reta_query = nfp_net_reta_query, + .rss_hash_update = nfp_net_rss_hash_update, + .rss_hash_conf_get = nfp_net_rss_hash_conf_get, + .rx_queue_setup = nfp_net_rx_queue_setup, + .rx_queue_release = nfp_net_rx_queue_release, + .tx_queue_setup = nfp_net_tx_queue_setup, + .tx_queue_release = nfp_net_tx_queue_release, .rx_queue_intr_enable = nfp_rx_queue_intr_enable, .rx_queue_intr_disable = nfp_rx_queue_intr_disable, }; diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c index fbcdb3d19e..1bf31146fc 100644 --- a/drivers/net/nfp/nfp_flow.c +++ b/drivers/net/nfp/nfp_flow.c @@ -496,6 +496,7 @@ nfp_stats_id_alloc(struct nfp_flow_priv *priv, uint32_t *ctx) priv->stats_ids.init_unallocated--; priv->active_mem_unit = 0; } + return 0; } @@ -622,6 +623,7 @@ nfp_tun_add_ipv6_off(struct nfp_app_fw_flower *app_fw_flower, PMD_DRV_LOG(ERR, "Mem error when offloading IP6 address."); return -ENOMEM; } + memcpy(tmp_entry->ipv6_addr, ipv6, sizeof(tmp_entry->ipv6_addr)); tmp_entry->ref_count = 1; @@ -1796,7 +1798,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = { .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_VLAN, RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_IPV6), - .mask_support = &(const struct rte_flow_item_eth){ + .mask_support = &(const struct rte_flow_item_eth) { .hdr = { .dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff", .src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff", @@ -1811,7 +1813,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = { [RTE_FLOW_ITEM_TYPE_VLAN] = { .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_IPV4, RTE_FLOW_ITEM_TYPE_IPV6), - .mask_support = &(const struct rte_flow_item_vlan){ + .mask_support = &(const struct rte_flow_item_vlan) { .hdr = { .vlan_tci = RTE_BE16(0xefff), .eth_proto = RTE_BE16(0xffff), @@ 
-1827,7 +1829,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = { RTE_FLOW_ITEM_TYPE_UDP, RTE_FLOW_ITEM_TYPE_SCTP, RTE_FLOW_ITEM_TYPE_GRE), - .mask_support = &(const struct rte_flow_item_ipv4){ + .mask_support = &(const struct rte_flow_item_ipv4) { .hdr = { .type_of_service = 0xff, .fragment_offset = RTE_BE16(0xffff), @@ -1846,7 +1848,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = { RTE_FLOW_ITEM_TYPE_UDP, RTE_FLOW_ITEM_TYPE_SCTP, RTE_FLOW_ITEM_TYPE_GRE), - .mask_support = &(const struct rte_flow_item_ipv6){ + .mask_support = &(const struct rte_flow_item_ipv6) { .hdr = { .vtc_flow = RTE_BE32(0x0ff00000), .proto = 0xff, @@ -1863,7 +1865,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = { .merge = nfp_flow_merge_ipv6, }, [RTE_FLOW_ITEM_TYPE_TCP] = { - .mask_support = &(const struct rte_flow_item_tcp){ + .mask_support = &(const struct rte_flow_item_tcp) { .hdr = { .tcp_flags = 0xff, .src_port = RTE_BE16(0xffff), @@ -1877,7 +1879,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = { [RTE_FLOW_ITEM_TYPE_UDP] = { .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_VXLAN, RTE_FLOW_ITEM_TYPE_GENEVE), - .mask_support = &(const struct rte_flow_item_udp){ + .mask_support = &(const struct rte_flow_item_udp) { .hdr = { .src_port = RTE_BE16(0xffff), .dst_port = RTE_BE16(0xffff), @@ -1888,7 +1890,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = { .merge = nfp_flow_merge_udp, }, [RTE_FLOW_ITEM_TYPE_SCTP] = { - .mask_support = &(const struct rte_flow_item_sctp){ + .mask_support = &(const struct rte_flow_item_sctp) { .hdr = { .src_port = RTE_BE16(0xffff), .dst_port = RTE_BE16(0xffff), @@ -1900,7 +1902,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = { }, [RTE_FLOW_ITEM_TYPE_VXLAN] = { .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH), - .mask_support = &(const struct rte_flow_item_vxlan){ + .mask_support = &(const struct rte_flow_item_vxlan) { .hdr = { .vx_vni = 
RTE_BE32(0xffffff00), }, @@ -1911,7 +1913,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = { }, [RTE_FLOW_ITEM_TYPE_GENEVE] = { .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH), - .mask_support = &(const struct rte_flow_item_geneve){ + .mask_support = &(const struct rte_flow_item_geneve) { .vni = "\xff\xff\xff", }, .mask_default = &rte_flow_item_geneve_mask, @@ -1920,7 +1922,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = { }, [RTE_FLOW_ITEM_TYPE_GRE] = { .next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_GRE_KEY), - .mask_support = &(const struct rte_flow_item_gre){ + .mask_support = &(const struct rte_flow_item_gre) { .c_rsvd0_ver = RTE_BE16(0xa000), .protocol = RTE_BE16(0xffff), }, @@ -1952,6 +1954,7 @@ nfp_flow_item_check(const struct rte_flow_item *item, " without a corresponding 'spec'."); return -EINVAL; } + /* No spec, no mask, no problem. */ return 0; } @@ -3031,6 +3034,7 @@ nfp_pre_tun_table_check_add(struct nfp_flower_representor *repr, for (i = 1; i < NFP_TUN_PRE_TUN_RULE_LIMIT; i++) { if (priv->pre_tun_bitmap[i] == 0) continue; + entry->mac_index = i; find_entry = nfp_pre_tun_table_search(priv, (char *)entry, entry_size); if (find_entry != NULL) { @@ -3057,6 +3061,7 @@ nfp_pre_tun_table_check_add(struct nfp_flower_representor *repr, *index = entry->mac_index; priv->pre_tun_cnt++; + return 0; } @@ -3091,12 +3096,14 @@ nfp_pre_tun_table_check_del(struct nfp_flower_representor *repr, for (i = 1; i < NFP_TUN_PRE_TUN_RULE_LIMIT; i++) { if (priv->pre_tun_bitmap[i] == 0) continue; + entry->mac_index = i; find_entry = nfp_pre_tun_table_search(priv, (char *)entry, entry_size); if (find_entry != NULL) { find_entry->ref_cnt--; if (find_entry->ref_cnt != 0) goto free_entry; + priv->pre_tun_bitmap[i] = 0; break; } diff --git a/drivers/net/nfp/nfp_flow.h b/drivers/net/nfp/nfp_flow.h index ab38dbe1f4..991629e6ed 100644 --- a/drivers/net/nfp/nfp_flow.h +++ b/drivers/net/nfp/nfp_flow.h @@ -126,11 +126,14 @@ struct nfp_ipv6_addr_entry 
{ struct nfp_flow_priv { uint32_t hash_seed; /**< Hash seed for hash tables in this structure. */ uint64_t flower_version; /**< Flow version, always increase. */ + /* Mask hash table */ struct nfp_fl_mask_id mask_ids; /**< Entry for mask hash table */ struct rte_hash *mask_table; /**< Hash table to store mask ids. */ + /* Flow hash table */ struct rte_hash *flow_table; /**< Hash table to store flow rules. */ + /* Flow stats */ uint32_t active_mem_unit; /**< The size of active mem units. */ uint32_t total_mem_units; /**< The size of total mem units. */ @@ -138,16 +141,20 @@ struct nfp_flow_priv { struct nfp_fl_stats_id stats_ids; /**< The stats id ring. */ struct nfp_fl_stats *stats; /**< Store stats of flow. */ rte_spinlock_t stats_lock; /** < Lock the update of 'stats' field. */ + /* Pre tunnel rule */ uint16_t pre_tun_cnt; /**< The size of pre tunnel rule */ uint8_t pre_tun_bitmap[NFP_TUN_PRE_TUN_RULE_LIMIT]; /**< Bitmap of pre tunnel rule */ struct rte_hash *pre_tun_table; /**< Hash table to store pre tunnel rule */ + /* IPv4 off */ LIST_HEAD(, nfp_ipv4_addr_entry) ipv4_off_list; /**< Store ipv4 off */ rte_spinlock_t ipv4_off_lock; /**< Lock the ipv4 off list */ + /* IPv6 off */ LIST_HEAD(, nfp_ipv6_addr_entry) ipv6_off_list; /**< Store ipv6 off */ rte_spinlock_t ipv6_off_lock; /**< Lock the ipv6 off list */ + /* Neighbor next */ LIST_HEAD(, nfp_fl_tun)nn_list; /**< Store nn entry */ /* Conntrack */ diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c index d506682b56..e284a67d7c 100644 --- a/drivers/net/nfp/nfp_rxtx.c +++ b/drivers/net/nfp/nfp_rxtx.c @@ -190,6 +190,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq) rxd->fld.dd = 0; rxd->fld.dma_addr_hi = (dma_addr >> 32) & 0xffff; rxd->fld.dma_addr_lo = dma_addr & 0xffffffff; + rxe[i].mbuf = mbuf; } @@ -213,6 +214,7 @@ nfp_net_rx_freelist_setup(struct rte_eth_dev *dev) if (nfp_net_rx_fill_freelist(dev->data->rx_queues[i]) != 0) return -1; } + return 0; } @@ -225,7 +227,6 @@ 
nfp_net_rx_queue_count(void *rx_queue) struct nfp_net_rx_desc *rxds; rxq = rx_queue; - idx = rxq->rd_p; /* @@ -235,7 +236,6 @@ nfp_net_rx_queue_count(void *rx_queue) * performance. But ideally that should be done in descriptors * chunks belonging to the same cache line. */ - while (count < rxq->rx_count) { rxds = &rxq->rxds[idx]; if ((rxds->rxd.meta_len_dd & PCIE_DESC_RX_DD) == 0) @@ -394,6 +394,7 @@ nfp_net_parse_meta_qinq(const struct nfp_meta_parsed *meta, if (meta->vlan[0].offload == 0) mb->vlan_tci = rte_cpu_to_le_16(meta->vlan[0].tci); + mb->vlan_tci_outer = rte_cpu_to_le_16(meta->vlan[1].tci); PMD_RX_LOG(DEBUG, "Received outer vlan TCI is %u inner vlan TCI is %u", mb->vlan_tci_outer, mb->vlan_tci); @@ -638,7 +639,6 @@ nfp_net_parse_ptype(struct nfp_net_rx_desc *rxds, * so looking at the implications of this type of allocation should be studied * deeply. */ - uint16_t nfp_net_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, @@ -903,7 +903,6 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev, tz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx, sizeof(struct nfp_net_rx_desc) * max_rx_desc, NFP_MEMZONE_ALIGN, socket_id); - if (tz == NULL) { PMD_DRV_LOG(ERR, "Error allocating rx dma"); nfp_net_rx_queue_release(dev, queue_idx); diff --git a/drivers/net/nfp/nfpcore/nfp_resource.h b/drivers/net/nfp/nfpcore/nfp_resource.h index 18196d273c..f49c99e462 100644 --- a/drivers/net/nfp/nfpcore/nfp_resource.h +++ b/drivers/net/nfp/nfpcore/nfp_resource.h @@ -15,7 +15,7 @@ #define NFP_RESOURCE_NFP_HWINFO "nfp.info" /* Service Processor */ -#define NFP_RESOURCE_NSP "nfp.sp" +#define NFP_RESOURCE_NSP "nfp.sp" /* Opaque handle to a NFP Resource */ struct nfp_resource; From patchwork Thu Oct 12 01:27:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chaoyong He X-Patchwork-Id: 132566 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: 
patchwork@inbox.dpdk.org From: Chaoyong He To: dev@dpdk.org Cc: oss-drivers@corigine.com, Chaoyong He, Long Wu, Peng Zhang Subject: [PATCH v2 08/11] net/nfp: unify the guide line of header file Date: Thu, 12 Oct 2023 09:27:01 +0800 Message-Id: <20231012012704.483828-9-chaoyong.he@corigine.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20231012012704.483828-1-chaoyong.he@corigine.com> References: <20231007023339.1546659-1-chaoyong.he@corigine.com> <20231012012704.483828-1-chaoyong.he@corigine.com> MIME-Version: 1.0
X-BeenThere: dev@dpdk.org Precedence: list List-Id: DPDK patches and discussions Errors-To: dev-bounces@dpdk.org Unify the include-guard line of the header files; we choose the '__FOO_BAR_H__' style. Signed-off-by: Chaoyong He Reviewed-by: Long Wu Reviewed-by: Peng Zhang --- drivers/net/nfp/flower/nfp_flower.h | 6 +++--- drivers/net/nfp/flower/nfp_flower_cmsg.h | 6 +++--- drivers/net/nfp/flower/nfp_flower_ctrl.h | 6 +++--- drivers/net/nfp/flower/nfp_flower_representor.h | 6 +++--- drivers/net/nfp/nfd3/nfp_nfd3.h | 6 +++--- drivers/net/nfp/nfdk/nfp_nfdk.h | 6 +++--- drivers/net/nfp/nfp_common.h | 6 +++--- drivers/net/nfp/nfp_cpp_bridge.h | 8 +++----- drivers/net/nfp/nfp_ctrl.h | 6 +++--- drivers/net/nfp/nfp_flow.h | 6 +++--- drivers/net/nfp/nfp_logs.h | 6 +++--- drivers/net/nfp/nfp_rxtx.h | 6 +++--- 12 files changed, 36 insertions(+), 38 deletions(-) diff --git a/drivers/net/nfp/flower/nfp_flower.h b/drivers/net/nfp/flower/nfp_flower.h index 0b4e38cedd..b7ea830209 100644 --- a/drivers/net/nfp/flower/nfp_flower.h +++ b/drivers/net/nfp/flower/nfp_flower.h @@ -3,8 +3,8 @@ * All rights reserved.
*/ -#ifndef _NFP_FLOWER_H_ -#define _NFP_FLOWER_H_ +#ifndef __NFP_FLOWER_H__ +#define __NFP_FLOWER_H__ #include "../nfp_common.h" @@ -118,4 +118,4 @@ int nfp_flower_pf_stop(struct rte_eth_dev *dev); uint32_t nfp_flower_pkt_add_metadata(struct nfp_app_fw_flower *app_fw_flower, struct rte_mbuf *mbuf, uint32_t port_id); -#endif /* _NFP_FLOWER_H_ */ +#endif /* __NFP_FLOWER_H__ */ diff --git a/drivers/net/nfp/flower/nfp_flower_cmsg.h b/drivers/net/nfp/flower/nfp_flower_cmsg.h index cb019171b6..c2938fb6f6 100644 --- a/drivers/net/nfp/flower/nfp_flower_cmsg.h +++ b/drivers/net/nfp/flower/nfp_flower_cmsg.h @@ -3,8 +3,8 @@ * All rights reserved. */ -#ifndef _NFP_CMSG_H_ -#define _NFP_CMSG_H_ +#ifndef __NFP_CMSG_H__ +#define __NFP_CMSG_H__ #include "../nfp_flow.h" #include "nfp_flower.h" @@ -989,4 +989,4 @@ int nfp_flower_cmsg_qos_delete(struct nfp_app_fw_flower *app_fw_flower, int nfp_flower_cmsg_qos_stats(struct nfp_app_fw_flower *app_fw_flower, struct nfp_cfg_head *head); -#endif /* _NFP_CMSG_H_ */ +#endif /* __NFP_CMSG_H__ */ diff --git a/drivers/net/nfp/flower/nfp_flower_ctrl.h b/drivers/net/nfp/flower/nfp_flower_ctrl.h index f73a024266..4c94d36847 100644 --- a/drivers/net/nfp/flower/nfp_flower_ctrl.h +++ b/drivers/net/nfp/flower/nfp_flower_ctrl.h @@ -3,8 +3,8 @@ * All rights reserved. 
*/ -#ifndef _NFP_FLOWER_CTRL_H_ -#define _NFP_FLOWER_CTRL_H_ +#ifndef __NFP_FLOWER_CTRL_H__ +#define __NFP_FLOWER_CTRL_H__ #include "nfp_flower.h" @@ -13,4 +13,4 @@ uint16_t nfp_flower_ctrl_vnic_xmit(struct nfp_app_fw_flower *app_fw_flower, struct rte_mbuf *mbuf); void nfp_flower_ctrl_vnic_xmit_register(struct nfp_app_fw_flower *app_fw_flower); -#endif /* _NFP_FLOWER_CTRL_H_ */ +#endif /* __NFP_FLOWER_CTRL_H__ */ diff --git a/drivers/net/nfp/flower/nfp_flower_representor.h b/drivers/net/nfp/flower/nfp_flower_representor.h index eda19cbb16..bcb4c3cdb5 100644 --- a/drivers/net/nfp/flower/nfp_flower_representor.h +++ b/drivers/net/nfp/flower/nfp_flower_representor.h @@ -3,8 +3,8 @@ * All rights reserved. */ -#ifndef _NFP_FLOWER_REPRESENTOR_H_ -#define _NFP_FLOWER_REPRESENTOR_H_ +#ifndef __NFP_FLOWER_REPRESENTOR_H__ +#define __NFP_FLOWER_REPRESENTOR_H__ #include "nfp_flower.h" @@ -24,4 +24,4 @@ struct nfp_flower_representor { int nfp_flower_repr_create(struct nfp_app_fw_flower *app_fw_flower); -#endif /* _NFP_FLOWER_REPRESENTOR_H_ */ +#endif /* __NFP_FLOWER_REPRESENTOR_H__ */ diff --git a/drivers/net/nfp/nfd3/nfp_nfd3.h b/drivers/net/nfp/nfd3/nfp_nfd3.h index 0b0ca361f4..3ba562cc3f 100644 --- a/drivers/net/nfp/nfd3/nfp_nfd3.h +++ b/drivers/net/nfp/nfd3/nfp_nfd3.h @@ -3,8 +3,8 @@ * All rights reserved. */ -#ifndef _NFP_NFD3_H_ -#define _NFP_NFD3_H_ +#ifndef __NFP_NFD3_H__ +#define __NFP_NFD3_H__ #include "../nfp_rxtx.h" @@ -84,4 +84,4 @@ int nfp_net_nfd3_tx_queue_setup(struct rte_eth_dev *dev, unsigned int socket_id, const struct rte_eth_txconf *tx_conf); -#endif /* _NFP_NFD3_H_ */ +#endif /* __NFP_NFD3_H__ */ diff --git a/drivers/net/nfp/nfdk/nfp_nfdk.h b/drivers/net/nfp/nfdk/nfp_nfdk.h index 04bd3c7600..2767fd51cd 100644 --- a/drivers/net/nfp/nfdk/nfp_nfdk.h +++ b/drivers/net/nfp/nfdk/nfp_nfdk.h @@ -3,8 +3,8 @@ * All rights reserved. 
*/ -#ifndef _NFP_NFDK_H_ -#define _NFP_NFDK_H_ +#ifndef __NFP_NFDK_H__ +#define __NFP_NFDK_H__ #include "../nfp_rxtx.h" @@ -178,4 +178,4 @@ int nfp_net_nfdk_tx_queue_setup(struct rte_eth_dev *dev, int nfp_net_nfdk_tx_maybe_close_block(struct nfp_net_txq *txq, struct rte_mbuf *pkt); -#endif /* _NFP_NFDK_H_ */ +#endif /* __NFP_NFDK_H__ */ diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h index 5439865c5e..cd0ca50c6b 100644 --- a/drivers/net/nfp/nfp_common.h +++ b/drivers/net/nfp/nfp_common.h @@ -3,8 +3,8 @@ * All rights reserved. */ -#ifndef _NFP_COMMON_H_ -#define _NFP_COMMON_H_ +#ifndef __NFP_COMMON_H__ +#define __NFP_COMMON_H__ #include #include @@ -450,4 +450,4 @@ bool nfp_net_is_valid_nfd_version(struct nfp_net_fw_ver version); #define NFP_PRIV_TO_APP_FW_FLOWER(app_fw_priv)\ ((struct nfp_app_fw_flower *)app_fw_priv) -#endif /* _NFP_COMMON_H_ */ +#endif /* __NFP_COMMON_H__ */ diff --git a/drivers/net/nfp/nfp_cpp_bridge.h b/drivers/net/nfp/nfp_cpp_bridge.h index e6a957a090..a1103e85e4 100644 --- a/drivers/net/nfp/nfp_cpp_bridge.h +++ b/drivers/net/nfp/nfp_cpp_bridge.h @@ -1,16 +1,14 @@ /* SPDX-License-Identifier: BSD-3-Clause * Copyright (c) 2014-2021 Netronome Systems, Inc. * All rights reserved. - * - * Small portions derived from code Copyright(c) 2010-2015 Intel Corporation. */ -#ifndef _NFP_CPP_BRIDGE_H_ -#define _NFP_CPP_BRIDGE_H_ +#ifndef __NFP_CPP_BRIDGE_H__ +#define __NFP_CPP_BRIDGE_H__ #include "nfp_common.h" int nfp_enable_cpp_service(struct nfp_pf_dev *pf_dev); int nfp_map_service(uint32_t service_id); -#endif /* _NFP_CPP_BRIDGE_H_ */ +#endif /* __NFP_CPP_BRIDGE_H__ */ diff --git a/drivers/net/nfp/nfp_ctrl.h b/drivers/net/nfp/nfp_ctrl.h index 5cc83ff3e6..5c2065a537 100644 --- a/drivers/net/nfp/nfp_ctrl.h +++ b/drivers/net/nfp/nfp_ctrl.h @@ -3,8 +3,8 @@ * All rights reserved. 
*/ -#ifndef _NFP_CTRL_H_ -#define _NFP_CTRL_H_ +#ifndef __NFP_CTRL_H__ +#define __NFP_CTRL_H__ #include @@ -573,4 +573,4 @@ nfp_net_cfg_ctrl_rss(uint32_t hw_cap) return NFP_NET_CFG_CTRL_RSS; } -#endif /* _NFP_CTRL_H_ */ +#endif /* __NFP_CTRL_H__ */ diff --git a/drivers/net/nfp/nfp_flow.h b/drivers/net/nfp/nfp_flow.h index 991629e6ed..aeb24458f3 100644 --- a/drivers/net/nfp/nfp_flow.h +++ b/drivers/net/nfp/nfp_flow.h @@ -3,8 +3,8 @@ * All rights reserved. */ -#ifndef _NFP_FLOW_H_ -#define _NFP_FLOW_H_ +#ifndef __NFP_FLOW_H__ +#define __NFP_FLOW_H__ #include "nfp_common.h" @@ -202,4 +202,4 @@ int nfp_flow_destroy(struct rte_eth_dev *dev, struct rte_flow *nfp_flow, struct rte_flow_error *error); -#endif /* _NFP_FLOW_H_ */ +#endif /* __NFP_FLOW_H__ */ diff --git a/drivers/net/nfp/nfp_logs.h b/drivers/net/nfp/nfp_logs.h index 16ff61700b..690adabffd 100644 --- a/drivers/net/nfp/nfp_logs.h +++ b/drivers/net/nfp/nfp_logs.h @@ -3,8 +3,8 @@ * All rights reserved. */ -#ifndef _NFP_LOGS_H_ -#define _NFP_LOGS_H_ +#ifndef __NFP_LOGS_H__ +#define __NFP_LOGS_H__ #include @@ -41,4 +41,4 @@ extern int nfp_logtype_driver; rte_log(RTE_LOG_ ## level, nfp_logtype_driver, \ "%s(): " fmt "\n", __func__, ## args) -#endif /* _NFP_LOGS_H_ */ +#endif /* __NFP_LOGS_H__ */ diff --git a/drivers/net/nfp/nfp_rxtx.h b/drivers/net/nfp/nfp_rxtx.h index 899cc42c97..956cc7a0d2 100644 --- a/drivers/net/nfp/nfp_rxtx.h +++ b/drivers/net/nfp/nfp_rxtx.h @@ -3,8 +3,8 @@ * All rights reserved. 
*/ -#ifndef _NFP_RXTX_H_ -#define _NFP_RXTX_H_ +#ifndef __NFP_RXTX_H__ +#define __NFP_RXTX_H__ #include @@ -253,4 +253,4 @@ void nfp_net_set_meta_ipsec(struct nfp_net_meta_raw *meta_data, uint8_t layer, uint8_t ipsec_layer); -#endif /* _NFP_RXTX_H_ */ +#endif /* __NFP_RXTX_H__ */ From patchwork Thu Oct 12 01:27:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chaoyong He X-Patchwork-Id: 132567 X-Patchwork-Delegate: ferruh.yigit@amd.com From: Chaoyong He To: dev@dpdk.org Cc: oss-drivers@corigine.com, Chaoyong He, Long Wu, Peng Zhang Subject: [PATCH v2 09/11] net/nfp: rename some parameter and variable Date: Thu, 12 Oct 2023 09:27:02 +0800 Message-Id: <20231012012704.483828-10-chaoyong.he@corigine.com> X-Mailer: git-send-email 2.39.1 In-Reply-To: <20231012012704.483828-1-chaoyong.he@corigine.com> References: <20231007023339.1546659-1-chaoyong.he@corigine.com> <20231012012704.483828-1-chaoyong.he@corigine.com> MIME-Version: 1.0
Rename some parameters and variables to make the logic easier to understand. Also avoid the mixed use of lowercase and uppercase in macro names. Signed-off-by: Chaoyong He Reviewed-by: Long Wu Reviewed-by: Peng Zhang --- drivers/net/nfp/nfp_common.h | 20 ++++++++++---------- drivers/net/nfp/nfp_ethdev_vf.c | 8 ++++---- 2 files changed, 14 insertions(+), 14 deletions(-) diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h index cd0ca50c6b..aad3c29ba8 100644 --- a/drivers/net/nfp/nfp_common.h +++ b/drivers/net/nfp/nfp_common.h @@ -19,9 +19,9 @@ #define NFP_QCP_QUEUE_ADD_RPTR 0x0000 #define NFP_QCP_QUEUE_ADD_WPTR 0x0004 #define NFP_QCP_QUEUE_STS_LO 0x0008 -#define NFP_QCP_QUEUE_STS_LO_READPTR_mask (0x3ffff) +#define NFP_QCP_QUEUE_STS_LO_READPTR_MASK (0x3ffff) #define NFP_QCP_QUEUE_STS_HI 0x000c -#define NFP_QCP_QUEUE_STS_HI_WRITEPTR_mask (0x3ffff) +#define NFP_QCP_QUEUE_STS_HI_WRITEPTR_MASK (0x3ffff) /* Interrupt definitions */ #define NFP_NET_IRQ_LSC_IDX 0 @@ -303,7 +303,7 @@ nn_cfg_writeq(struct nfp_net_hw *hw, /** * Add the value to the selected pointer of a queue.
* - * @param q + * @param queue * Base address for queue structure * @param ptr * Add to the read or write pointer @@ -311,7 +311,7 @@ nn_cfg_writeq(struct nfp_net_hw *hw, * Value to add to the queue pointer */ static inline void -nfp_qcp_ptr_add(uint8_t *q, +nfp_qcp_ptr_add(uint8_t *queue, enum nfp_qcp_ptr ptr, uint32_t val) { @@ -322,19 +322,19 @@ nfp_qcp_ptr_add(uint8_t *q, else off = NFP_QCP_QUEUE_ADD_WPTR; - nn_writel(rte_cpu_to_le_32(val), q + off); + nn_writel(rte_cpu_to_le_32(val), queue + off); } /** * Read the current read/write pointer value for a queue. * - * @param q + * @param queue * Base address for queue structure * @param ptr * Read or Write pointer */ static inline uint32_t -nfp_qcp_read(uint8_t *q, +nfp_qcp_read(uint8_t *queue, enum nfp_qcp_ptr ptr) { uint32_t off; @@ -345,12 +345,12 @@ nfp_qcp_read(uint8_t *q, else off = NFP_QCP_QUEUE_STS_HI; - val = rte_cpu_to_le_32(nn_readl(q + off)); + val = rte_cpu_to_le_32(nn_readl(queue + off)); if (ptr == NFP_QCP_READ_PTR) - return val & NFP_QCP_QUEUE_STS_LO_READPTR_mask; + return val & NFP_QCP_QUEUE_STS_LO_READPTR_MASK; else - return val & NFP_QCP_QUEUE_STS_HI_WRITEPTR_mask; + return val & NFP_QCP_QUEUE_STS_HI_WRITEPTR_MASK; } static inline uint32_t diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c index 7096695de6..7fb7b3efc5 100644 --- a/drivers/net/nfp/nfp_ethdev_vf.c +++ b/drivers/net/nfp/nfp_ethdev_vf.c @@ -396,7 +396,7 @@ nfp_vf_pci_uninit(struct rte_eth_dev *eth_dev) } static int -eth_nfp_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, +nfp_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_device *pci_dev) { return rte_eth_dev_pci_generic_probe(pci_dev, @@ -404,7 +404,7 @@ eth_nfp_vf_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, } static int -eth_nfp_vf_pci_remove(struct rte_pci_device *pci_dev) +nfp_vf_pci_remove(struct rte_pci_device *pci_dev) { return rte_eth_dev_pci_generic_remove(pci_dev, nfp_vf_pci_uninit); } @@ -412,8 
+412,8 @@ eth_nfp_vf_pci_remove(struct rte_pci_device *pci_dev) static struct rte_pci_driver rte_nfp_net_vf_pmd = { .id_table = pci_id_nfp_vf_net_map, .drv_flags = RTE_PCI_DRV_NEED_MAPPING | RTE_PCI_DRV_INTR_LSC, - .probe = eth_nfp_vf_pci_probe, - .remove = eth_nfp_vf_pci_remove, + .probe = nfp_vf_pci_probe, + .remove = nfp_vf_pci_remove, }; RTE_PMD_REGISTER_PCI(net_nfp_vf, rte_nfp_net_vf_pmd); From patchwork Thu Oct 12 01:27:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Chaoyong He X-Patchwork-Id: 132568 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id CBE644236A; Thu, 12 Oct 2023 03:29:11 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id AA9B440DFD; Thu, 12 Oct 2023 03:28:10 +0200 (CEST) Received: from NAM02-DM3-obe.outbound.protection.outlook.com (mail-dm3nam02on2096.outbound.protection.outlook.com [40.107.95.96]) by mails.dpdk.org (Postfix) with ESMTP id 8987F40DF5 for ; Thu, 12 Oct 2023 03:28:07 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=XwgDTBELAaLf4XILo4FGACbuz01nG27z2I6ltIwjOUuvJ4sQgFJL77weF5NZsYGdvHDpkf/dANVD9HsQ0aDoWS/ZXOfGOyJ+NnqmFtMvjcxblGwu19zSUmiag2XBNUufydWSlB1rSjFflDkKyQKULAnuvLZyTen1acCzDr0Y0QQrd7D7j7AxhYV+Mf7+dgjljv+wRZk64EGsd8f1F8CcPzsJZeOrJw73iNDBSGEaExY8yHseq4G9fAHjEIhjf4XHOCYjNLk5qlZt44WWkTK4MBD8d6+fYPZwnuDJPkc0UrVD+kMEWtjVy0PqMjGGjBFr+/DLZMYkB5NvNohbLG5jzQ== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=SSy8ulcHXvmcO3rcGfiR99rbNEFj21os2/+tSHOQYgA=; 
From: Chaoyong He
To: dev@dpdk.org
Cc: oss-drivers@corigine.com, Chaoyong He, Long Wu, Peng Zhang
Subject: [PATCH v2 10/11] net/nfp: adjust logic to make it more readable
Date: Thu, 12 Oct 2023 09:27:03 +0800
Message-Id: <20231012012704.483828-11-chaoyong.he@corigine.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20231012012704.483828-1-chaoyong.he@corigine.com>
References:
 <20231007023339.1546659-1-chaoyong.he@corigine.com>
 <20231012012704.483828-1-chaoyong.he@corigine.com>
Adjust some logic to make it easier to understand.

Signed-off-by: Chaoyong He
Reviewed-by: Long Wu
Reviewed-by: Peng Zhang
---
 drivers/net/nfp/nfp_common.c     | 87 +++++++++++++++++---------------
 drivers/net/nfp/nfp_cpp_bridge.c |  5 +-
 drivers/net/nfp/nfp_ctrl.h       |  2 -
 drivers/net/nfp/nfp_ethdev.c     | 23 ++++-----
 drivers/net/nfp/nfp_ethdev_vf.c  | 15 +++---
 drivers/net/nfp/nfp_rxtx.c       |  2 +-
 6 files changed, 63 insertions(+), 71 deletions(-)

diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index a102c6f272..2d834b29d9 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -453,7 +453,7 @@ nfp_net_log_device_information(const struct nfp_net_hw *hw)
 }

 static inline void
-nfp_net_enbable_rxvlan_cap(struct nfp_net_hw *hw,
+nfp_net_enable_rxvlan_cap(struct nfp_net_hw *hw,
 		uint32_t *ctrl)
 {
 	if ((hw->cap & NFP_NET_CFG_CTRL_RXVLAN_V2) != 0)
@@ -467,19 +467,19 @@ nfp_net_enable_queues(struct rte_eth_dev *dev)
 {
 	uint16_t i;
 	struct nfp_net_hw *hw;
-	uint64_t enabled_queues = 0;
+	uint64_t enabled_queues;

 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);

 	/* Enabling the required TX queues in the device */
+	enabled_queues = 0;
 	for (i = 0; i < dev->data->nb_tx_queues; i++)
 		enabled_queues |= (1 << i);

 	nn_cfg_writeq(hw, NFP_NET_CFG_TXRS_ENABLE, enabled_queues);

-	enabled_queues = 0;
-
 	/* Enabling the required RX queues in the device */
+	enabled_queues = 0;
 	for (i = 0; i < dev->data->nb_rx_queues; i++)
 		enabled_queues |= (1 << i);
@@ -619,33 +619,33 @@ uint32_t
 nfp_check_offloads(struct rte_eth_dev *dev)
 {
 	uint32_t ctrl = 0;
+	uint64_t rx_offload;
+	uint64_t tx_offload;
 	struct nfp_net_hw *hw;
 	struct rte_eth_conf *dev_conf;
-	struct rte_eth_rxmode *rxmode;
-	struct rte_eth_txmode *txmode;

 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);

 	dev_conf = &dev->data->dev_conf;
-	rxmode = &dev_conf->rxmode;
-	txmode = &dev_conf->txmode;
+	rx_offload = dev_conf->rxmode.offloads;
+	tx_offload = dev_conf->txmode.offloads;

-	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0) {
+	if ((rx_offload & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM) != 0) {
 		if ((hw->cap & NFP_NET_CFG_CTRL_RXCSUM) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_RXCSUM;
 	}

-	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0)
-		nfp_net_enbable_rxvlan_cap(hw, &ctrl);
+	if ((rx_offload & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0)
+		nfp_net_enable_rxvlan_cap(hw, &ctrl);

-	if ((rxmode->offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0) {
+	if ((rx_offload & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0) {
 		if ((hw->cap & NFP_NET_CFG_CTRL_RXQINQ) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_RXQINQ;
 	}

 	hw->mtu = dev->data->mtu;

-	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) != 0) {
+	if ((tx_offload & RTE_ETH_TX_OFFLOAD_VLAN_INSERT) != 0) {
 		if ((hw->cap & NFP_NET_CFG_CTRL_TXVLAN_V2) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_TXVLAN_V2;
 		else if ((hw->cap & NFP_NET_CFG_CTRL_TXVLAN) != 0)
@@ -661,14 +661,14 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 		ctrl |= NFP_NET_CFG_CTRL_L2MC;

 	/* TX checksum offload */
-	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0 ||
-			(txmode->offloads & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0 ||
-			(txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
+	if ((tx_offload & RTE_ETH_TX_OFFLOAD_IPV4_CKSUM) != 0 ||
+			(tx_offload & RTE_ETH_TX_OFFLOAD_UDP_CKSUM) != 0 ||
+			(tx_offload & RTE_ETH_TX_OFFLOAD_TCP_CKSUM) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_TXCSUM;

 	/* LSO offload */
-	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_TCP_TSO) != 0 ||
-			(txmode->offloads & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) != 0) {
+	if ((tx_offload & RTE_ETH_TX_OFFLOAD_TCP_TSO) != 0 ||
+			(tx_offload & RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO) != 0) {
 		if ((hw->cap & NFP_NET_CFG_CTRL_LSO) != 0)
 			ctrl |= NFP_NET_CFG_CTRL_LSO;
 		else
@@ -676,7 +676,7 @@ nfp_check_offloads(struct rte_eth_dev *dev)
 	}

 	/* RX gather */
-	if ((txmode->offloads & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0)
+	if ((tx_offload & RTE_ETH_TX_OFFLOAD_MULTI_SEGS) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_GATHER;

 	return ctrl;
@@ -766,11 +766,10 @@ nfp_net_link_update(struct rte_eth_dev *dev,

 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);

-	/* Read link status */
-	nn_link_status = nn_cfg_readw(hw, NFP_NET_CFG_STS);
-
 	memset(&link, 0, sizeof(struct rte_eth_link));

+	/* Read link status */
+	nn_link_status = nn_cfg_readw(hw, NFP_NET_CFG_STS);
 	if ((nn_link_status & NFP_NET_CFG_STS_LINK) != 0)
 		link.link_status = RTE_ETH_LINK_UP;
@@ -828,6 +827,9 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 	struct nfp_net_hw *hw;
 	struct rte_eth_stats nfp_dev_stats;

+	if (stats == NULL)
+		return -EINVAL;
+
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);

 	memset(&nfp_dev_stats, 0, sizeof(nfp_dev_stats));
@@ -892,11 +894,8 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS);
 	nfp_dev_stats.imissed -= hw->eth_stats_base.imissed;

-	if (stats != NULL) {
-		memcpy(stats, &nfp_dev_stats, sizeof(*stats));
-		return 0;
-	}
-	return -EINVAL;
+	memcpy(stats, &nfp_dev_stats, sizeof(*stats));
+	return 0;
 }

 /*
@@ -1379,13 +1378,14 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev,
 	struct nfp_net_hw *hw;
 	struct rte_pci_device *pci_dev;

-	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO)
 		base = 1;

 	/* Make sure all updates are written before un-masking */
 	rte_wmb();
+
+	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	nn_cfg_writeb(hw, NFP_NET_CFG_ICR(base + queue_id), NFP_NET_CFG_ICR_UNMASKED);

 	return 0;
@@ -1399,14 +1399,16 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev,
 	struct nfp_net_hw *hw;
 	struct rte_pci_device *pci_dev;

-	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO)
 		base = 1;

 	/* Make sure all updates are written before un-masking */
 	rte_wmb();
-	nn_cfg_writeb(hw, NFP_NET_CFG_ICR(base + queue_id), 0x1);
+
+	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	nn_cfg_writeb(hw, NFP_NET_CFG_ICR(base + queue_id), NFP_NET_CFG_ICR_RXTX);
+
 	return 0;
 }
@@ -1445,13 +1447,13 @@ nfp_net_irq_unmask(struct rte_eth_dev *dev)
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);

+	/* Make sure all updates are written before un-masking */
+	rte_wmb();
+
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_MSIXAUTO) != 0) {
 		/* If MSI-X auto-masking is used, clear the entry */
-		rte_wmb();
 		rte_intr_ack(pci_dev->intr_handle);
 	} else {
-		/* Make sure all updates are written before un-masking */
-		rte_wmb();
 		nn_cfg_writeb(hw, NFP_NET_CFG_ICR(NFP_NET_IRQ_LSC_IDX), NFP_NET_CFG_ICR_UNMASKED);
 	}
@@ -1548,19 +1550,18 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev,
 	int ret;
 	uint32_t update;
 	uint32_t new_ctrl;
+	uint64_t rx_offload;
 	struct nfp_net_hw *hw;
 	uint32_t rxvlan_ctrl = 0;
-	struct rte_eth_conf *dev_conf;

 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-	dev_conf = &dev->data->dev_conf;
+	rx_offload = dev->data->dev_conf.rxmode.offloads;
 	new_ctrl = hw->ctrl;

-	nfp_net_enbable_rxvlan_cap(hw, &rxvlan_ctrl);
-
 	/* VLAN stripping setting */
 	if ((mask & RTE_ETH_VLAN_STRIP_MASK) != 0) {
-		if ((dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0)
+		nfp_net_enable_rxvlan_cap(hw, &rxvlan_ctrl);
+		if ((rx_offload & RTE_ETH_RX_OFFLOAD_VLAN_STRIP) != 0)
 			new_ctrl |= rxvlan_ctrl;
 		else
 			new_ctrl &= ~rxvlan_ctrl;
@@ -1568,7 +1569,7 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev,
 	/* QinQ stripping setting */
 	if ((mask & RTE_ETH_QINQ_STRIP_MASK) != 0) {
-		if ((dev_conf->rxmode.offloads & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0)
+		if ((rx_offload & RTE_ETH_RX_OFFLOAD_QINQ_STRIP) != 0)
 			new_ctrl |= NFP_NET_CFG_CTRL_RXQINQ;
 		else
 			new_ctrl &= ~NFP_NET_CFG_CTRL_RXQINQ;
@@ -1580,10 +1581,12 @@ nfp_net_vlan_offload_set(struct rte_eth_dev *dev,
 	update = NFP_NET_CFG_UPDATE_GEN;

 	ret = nfp_net_reconfig(hw, new_ctrl, update);
-	if (ret == 0)
-		hw->ctrl = new_ctrl;
+	if (ret != 0)
+		return ret;

-	return ret;
+	hw->ctrl = new_ctrl;
+
+	return 0;
 }

 static int
diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c
index bb2a6fdcda..36dcdca9de 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.c
+++ b/drivers/net/nfp/nfp_cpp_bridge.c
@@ -22,9 +22,6 @@
 #define NFP_IOCTL_CPP_IDENTIFICATION _IOW(NFP_IOCTL, 0x8f, uint32_t)

 /* Prototypes */
-static int nfp_cpp_bridge_serve_write(int sockfd, struct nfp_cpp *cpp);
-static int nfp_cpp_bridge_serve_read(int sockfd, struct nfp_cpp *cpp);
-static int nfp_cpp_bridge_serve_ioctl(int sockfd, struct nfp_cpp *cpp);
 static int nfp_cpp_bridge_service_func(void *args);

 int
@@ -438,7 +435,7 @@ nfp_cpp_bridge_service_func(void *args)
 			return -EIO;
 		}

-		while (1) {
+		for (;;) {
 			ret = recv(datafd, &op, 4, 0);
 			if (ret <= 0) {
 				PMD_CPP_LOG(DEBUG, "%s: socket close", __func__);
diff --git a/drivers/net/nfp/nfp_ctrl.h b/drivers/net/nfp/nfp_ctrl.h
index 5c2065a537..9ec51e0a25 100644
--- a/drivers/net/nfp/nfp_ctrl.h
+++ b/drivers/net/nfp/nfp_ctrl.h
@@ -442,8 +442,6 @@ struct nfp_net_fw_ver {
 #define NFP_MAC_STATS_TX_PAUSE_FRAMES_CLASS6 (NFP_MAC_STATS_BASE + 0x1f0)
 #define NFP_MAC_STATS_TX_PAUSE_FRAMES_CLASS7 (NFP_MAC_STATS_BASE + 0x1f8)

-#define NFP_PF_CSR_SLICE_SIZE (32 * 1024)
-
 /*
  * General use mailbox area (0x1800 - 0x19ff)
  * 4B used for update command and 4B return code followed by
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index b65c2c1fe0..c550c12e01 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -80,7 +80,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 		 * Better not to share LSC with RX interrupts.
 		 * Unregistering LSC interrupt handler.
 		 */
-		rte_intr_callback_unregister(pci_dev->intr_handle,
+		rte_intr_callback_unregister(intr_handle,
 				nfp_net_dev_interrupt_handler, (void *)dev);
 		if (dev->data->nb_rx_queues > 1) {
@@ -525,7 +525,7 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 			return -ENODEV;

 		/* Use port offset in pf ctrl_bar for this ports control bar */
-		hw->ctrl_bar = pf_dev->ctrl_bar + (port * NFP_PF_CSR_SLICE_SIZE);
+		hw->ctrl_bar = pf_dev->ctrl_bar + (port * NFP_NET_CFG_BAR_SZ);
 		hw->mac_stats = app_fw_nic->ports[0]->mac_stats_bar +
 				(port * NFP_MAC_STATS_SIZE);
 	}
@@ -743,8 +743,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 		const struct nfp_dev_info *dev_info)
 {
 	uint8_t i;
-	int ret;
-	int err = 0;
+	int ret = 0;
 	uint32_t total_vnics;
 	struct nfp_net_hw *hw;
 	unsigned int numa_node;
@@ -765,8 +764,8 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 	pf_dev->app_fw_priv = app_fw_nic;

 	/* Read the number of vNIC's created for the PF */
-	total_vnics = nfp_rtsym_read_le(pf_dev->sym_tbl, "nfd_cfg_pf0_num_ports", &err);
-	if (err != 0 || total_vnics == 0 || total_vnics > 8) {
+	total_vnics = nfp_rtsym_read_le(pf_dev->sym_tbl, "nfd_cfg_pf0_num_ports", &ret);
+	if (ret != 0 || total_vnics == 0 || total_vnics > 8) {
 		PMD_INIT_LOG(ERR, "nfd_cfg_pf0_num_ports symbol with wrong value");
 		ret = -ENODEV;
 		goto app_cleanup;
@@ -874,8 +873,7 @@ nfp_init_app_fw_nic(struct nfp_pf_dev *pf_dev,
 static int
 nfp_pf_init(struct rte_pci_device *pci_dev)
 {
-	int ret;
-	int err = 0;
+	int ret = 0;
 	uint64_t addr;
 	uint32_t cpp_id;
 	struct nfp_cpp *cpp;
@@ -943,8 +941,8 @@ nfp_pf_init(struct rte_pci_device *pci_dev)
 	}

 	/* Read the app ID of the firmware loaded */
-	app_fw_id = nfp_rtsym_read_le(sym_tbl, "_pf0_net_app_id", &err);
-	if (err != 0) {
+	app_fw_id = nfp_rtsym_read_le(sym_tbl, "_pf0_net_app_id", &ret);
+	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Couldn't read app_fw_id from fw");
 		ret = -EIO;
 		goto sym_tbl_cleanup;
@@ -1080,7 +1078,6 @@ nfp_secondary_init_app_fw_nic(struct rte_pci_device *pci_dev,
 static int
 nfp_pf_secondary_init(struct rte_pci_device *pci_dev)
 {
-	int err = 0;
 	int ret = 0;
 	struct nfp_cpp *cpp;
 	enum nfp_app_fw_id app_fw_id;
@@ -1124,8 +1121,8 @@ nfp_pf_secondary_init(struct rte_pci_device *pci_dev)
 	}

 	/* Read the app ID of the firmware loaded */
-	app_fw_id = nfp_rtsym_read_le(sym_tbl, "_pf0_net_app_id", &err);
-	if (err != 0) {
+	app_fw_id = nfp_rtsym_read_le(sym_tbl, "_pf0_net_app_id", &ret);
+	if (ret != 0) {
 		PMD_INIT_LOG(ERR, "Couldn't read app_fw_id from fw");
 		goto sym_tbl_cleanup;
 	}
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 7fb7b3efc5..ac6e67efc6 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -39,8 +39,6 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 	struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev);
 	struct rte_intr_handle *intr_handle = pci_dev->intr_handle;

-	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-
 	/* Disabling queues just in case... */
 	nfp_net_disable_queues(dev);
@@ -54,7 +52,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 		 * Better not to share LSC with RX interrupts.
 		 * Unregistering LSC interrupt handler.
 		 */
-		rte_intr_callback_unregister(pci_dev->intr_handle,
+		rte_intr_callback_unregister(intr_handle,
 				nfp_net_dev_interrupt_handler, (void *)dev);
 		if (dev->data->nb_rx_queues > 1) {
@@ -77,6 +75,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 	new_ctrl = nfp_check_offloads(dev);

 	/* Writing configuration parameters in the device */
+	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	nfp_net_params_setup(hw);

 	dev_conf = &dev->data->dev_conf;
@@ -244,15 +243,15 @@ static int
 nfp_netvf_init(struct rte_eth_dev *eth_dev)
 {
 	int err;
+	uint16_t port;
 	uint32_t start_q;
-	uint16_t port = 0;
 	struct nfp_net_hw *hw;
 	uint64_t tx_bar_off = 0;
 	uint64_t rx_bar_off = 0;
 	struct rte_pci_device *pci_dev;
 	const struct nfp_dev_info *dev_info;
-	struct rte_ether_addr *tmp_ether_addr;

+	port = eth_dev->data->port_id;
 	pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
 	dev_info = nfp_dev_info_get(pci_dev->id.device_id);
@@ -325,9 +324,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	}

 	nfp_netvf_read_mac(hw);
-
-	tmp_ether_addr = &hw->mac_addr;
-	if (rte_is_valid_assigned_ether_addr(tmp_ether_addr) == 0) {
+	if (rte_is_valid_assigned_ether_addr(&hw->mac_addr) == 0) {
 		PMD_INIT_LOG(INFO, "Using random mac address for port %hu", port);
 		/* Using random mac addresses for VFs */
 		rte_eth_random_addr(&hw->mac_addr.addr_bytes[0]);
@@ -344,7 +341,7 @@ nfp_netvf_init(struct rte_eth_dev *eth_dev)
 	PMD_INIT_LOG(INFO, "port %hu VendorID=%#x DeviceID=%#x "
 			"mac=" RTE_ETHER_ADDR_PRT_FMT,
-			eth_dev->data->port_id, pci_dev->id.vendor_id,
+			port, pci_dev->id.vendor_id,
 			pci_dev->id.device_id,
 			RTE_ETHER_ADDR_BYTES(&hw->mac_addr));
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index e284a67d7c..6fcdcb0be7 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -284,7 +284,7 @@ nfp_net_parse_chained_meta(uint8_t *meta_base,
 			meta->vlan[meta->vlan_layer].tci = vlan_info & NFP_NET_META_VLAN_MASK;
 			meta->vlan[meta->vlan_layer].tpid = NFP_NET_META_TPID(vlan_info);
-			++meta->vlan_layer;
+			meta->vlan_layer++;
 			break;
 		case NFP_NET_META_IPSEC:
 			meta->sa_idx = rte_be_to_cpu_32(*(rte_be32_t *)meta_offset);

From patchwork Thu Oct 12 01:27:04 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Chaoyong He
X-Patchwork-Id: 132569
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Chaoyong He
To: dev@dpdk.org
Cc: oss-drivers@corigine.com, Chaoyong He, Long Wu, Peng Zhang
Subject: [PATCH v2 11/11] net/nfp: refactor the meson build file
Date: Thu, 12 Oct 2023 09:27:04 +0800
Message-Id: <20231012012704.483828-12-chaoyong.he@corigine.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20231012012704.483828-1-chaoyong.he@corigine.com>
References:
 <20231007023339.1546659-1-chaoyong.he@corigine.com>
 <20231012012704.483828-1-chaoyong.he@corigine.com>
Make the source files follow alphabetical order.
Also update the copyright header line.

Signed-off-by: Chaoyong He
Reviewed-by: Long Wu
Reviewed-by: Peng Zhang
---
 drivers/net/nfp/meson.build | 23 ++++++++++++-----------
 1 file changed, 12 insertions(+), 11 deletions(-)

diff --git a/drivers/net/nfp/meson.build b/drivers/net/nfp/meson.build
index 7627c3e3f1..40e9ef8524 100644
--- a/drivers/net/nfp/meson.build
+++ b/drivers/net/nfp/meson.build
@@ -1,10 +1,11 @@
 # SPDX-License-Identifier: BSD-3-Clause
-# Copyright(c) 2018 Intel Corporation
+# Copyright(c) 2018 Corigine, Inc.

 if not is_linux or not dpdk_conf.get('RTE_ARCH_64')
     build = false
     reason = 'only supported on 64-bit Linux'
 endif
+
 sources = files(
         'flower/nfp_conntrack.c',
         'flower/nfp_flower.c',
@@ -13,30 +14,30 @@ sources = files(
         'flower/nfp_flower_representor.c',
         'nfd3/nfp_nfd3_dp.c',
         'nfdk/nfp_nfdk_dp.c',
-        'nfpcore/nfp_nsp.c',
         'nfpcore/nfp_cppcore.c',
-        'nfpcore/nfp_resource.c',
-        'nfpcore/nfp_mip.c',
-        'nfpcore/nfp_nffw.c',
-        'nfpcore/nfp_rtsym.c',
-        'nfpcore/nfp_nsp_cmds.c',
         'nfpcore/nfp_crc.c',
         'nfpcore/nfp_dev.c',
+        'nfpcore/nfp_hwinfo.c',
+        'nfpcore/nfp_mip.c',
         'nfpcore/nfp_mutex.c',
+        'nfpcore/nfp_nffw.c',
+        'nfpcore/nfp_nsp.c',
+        'nfpcore/nfp_nsp_cmds.c',
         'nfpcore/nfp_nsp_eth.c',
-        'nfpcore/nfp_hwinfo.c',
+        'nfpcore/nfp_resource.c',
+        'nfpcore/nfp_rtsym.c',
         'nfpcore/nfp_target.c',
         'nfpcore/nfp6000_pcie.c',
         'nfp_common.c',
-        'nfp_ctrl.c',
-        'nfp_rxtx.c',
         'nfp_cpp_bridge.c',
-        'nfp_ethdev_vf.c',
+        'nfp_ctrl.c',
         'nfp_ethdev.c',
+        'nfp_ethdev_vf.c',
         'nfp_flow.c',
         'nfp_ipsec.c',
         'nfp_logs.c',
         'nfp_mtr.c',
+        'nfp_rxtx.c',
 )

 deps += ['hash', 'security']