From patchwork Sat Oct 7 02:33:35 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Chaoyong He
X-Patchwork-Id: 132377
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Chaoyong He
To: dev@dpdk.org
Cc: oss-drivers@corigine.com, Chaoyong He, Long Wu, Peng Zhang
Subject: [PATCH 07/11] net/nfp: standard the blank character
Date: Sat, 7 Oct 2023 10:33:35 +0800
Message-Id: <20231007023339.1546659-8-chaoyong.he@corigine.com>
X-Mailer: git-send-email 2.39.1
In-Reply-To: <20231007023339.1546659-1-chaoyong.he@corigine.com>
References: <20231007023339.1546659-1-chaoyong.he@corigine.com>
List-Id: DPDK patches and discussions
X-BeenThere: dev@dpdk.org

Use space characters to align instead of TAB characters. There should
be one blank line separating each block of logic, no more, no less.

Signed-off-by: Chaoyong He
Reviewed-by: Long Wu
Reviewed-by: Peng Zhang
---
 drivers/net/nfp/nfp_common.c     | 39 +++++++++++----------
 drivers/net/nfp/nfp_common.h     |  6 ++--
 drivers/net/nfp/nfp_cpp_bridge.c |  5 +++
 drivers/net/nfp/nfp_ctrl.h       |  6 ++--
 drivers/net/nfp/nfp_ethdev.c     | 58 ++++++++++++++++----------------
 drivers/net/nfp/nfp_ethdev_vf.c  | 49 +++++++++++++--------------
 drivers/net/nfp/nfp_flow.c       | 27 +++++++++------
 drivers/net/nfp/nfp_flow.h       |  7 ++++
 drivers/net/nfp/nfp_rxtx.c       |  7 ++--
 9 files changed, 113 insertions(+), 91 deletions(-)

diff --git a/drivers/net/nfp/nfp_common.c b/drivers/net/nfp/nfp_common.c
index ed3c5c15d2..3409ee8cb8 100644
--- a/drivers/net/nfp/nfp_common.c
+++ b/drivers/net/nfp/nfp_common.c
@@ -36,6 +36,7 @@ enum nfp_xstat_group {
 	NFP_XSTAT_GROUP_NET,
 	NFP_XSTAT_GROUP_MAC
 };
+
 struct nfp_xstat {
 	char name[RTE_ETH_XSTATS_NAME_SIZE];
 	int offset;
@@ -184,6 +185,7 @@ nfp_net_notify_port_speed(struct nfp_net_hw *hw,
 		nn_cfg_writew(hw, NFP_NET_CFG_STS_NSP_LINK_RATE, NFP_NET_CFG_STS_LINK_RATE_UNKNOWN);
 		return;
 	}
+
 	/*
 	 * Link is up so write the link speed from the eth_table to
 	 * NFP_NET_CFG_STS_NSP_LINK_RATE.
@@ -223,17 +225,21 @@ __nfp_net_reconfig(struct nfp_net_hw *hw,
 		new = nn_cfg_readl(hw, NFP_NET_CFG_UPDATE);
 		if (new == 0)
 			break;
+
 		if ((new & NFP_NET_CFG_UPDATE_ERR) != 0) {
 			PMD_DRV_LOG(ERR, "Reconfig error: %#08x", new);
 			return -1;
 		}
+
 		if (cnt >= NFP_NET_POLL_TIMEOUT) {
 			PMD_DRV_LOG(ERR, "Reconfig timeout for %#08x after %u ms",
 					update, cnt);
 			return -EIO;
 		}
+
 		nanosleep(&wait, 0); /* Waiting for a 1ms */
 	}
+
 	PMD_DRV_LOG(DEBUG, "Ack DONE");
 	return 0;
 }
@@ -387,7 +393,6 @@ nfp_net_configure(struct rte_eth_dev *dev)
 	struct rte_eth_txmode *txmode;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-
 	dev_conf = &dev->data->dev_conf;
 	rxmode = &dev_conf->rxmode;
 	txmode = &dev_conf->txmode;
@@ -560,11 +565,13 @@ nfp_net_set_mac_addr(struct rte_eth_dev *dev,
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_ENABLE) != 0 &&
 			(hw->cap & NFP_NET_CFG_CTRL_LIVE_ADDR) != 0)
 		ctrl |= NFP_NET_CFG_CTRL_LIVE_ADDR;
+
 	/* Signal the NIC about the change */
 	if (nfp_net_reconfig(hw, ctrl, update) != 0) {
 		PMD_DRV_LOG(ERR, "MAC address update failed");
 		return -EIO;
 	}
+
 	return 0;
 }
@@ -832,13 +839,11 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 		nfp_dev_stats.q_ipackets[i] =
 				nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i));
-
 		nfp_dev_stats.q_ipackets[i] -=
 				hw->eth_stats_base.q_ipackets[i];
 
 		nfp_dev_stats.q_ibytes[i] =
 				nn_cfg_readq(hw, NFP_NET_CFG_RXR_STATS(i) + 0x8);
-
 		nfp_dev_stats.q_ibytes[i] -=
 				hw->eth_stats_base.q_ibytes[i];
 	}
@@ -850,42 +855,34 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 		nfp_dev_stats.q_opackets[i] =
 				nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i));
-
 		nfp_dev_stats.q_opackets[i] -= hw->eth_stats_base.q_opackets[i];
 
 		nfp_dev_stats.q_obytes[i] =
 				nn_cfg_readq(hw, NFP_NET_CFG_TXR_STATS(i) + 0x8);
-
 		nfp_dev_stats.q_obytes[i] -= hw->eth_stats_base.q_obytes[i];
 	}
 
 	nfp_dev_stats.ipackets = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_FRAMES);
-
 	nfp_dev_stats.ipackets -= hw->eth_stats_base.ipackets;
 
 	nfp_dev_stats.ibytes = nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_OCTETS);
-
 	nfp_dev_stats.ibytes -= hw->eth_stats_base.ibytes;
 
 	nfp_dev_stats.opackets =
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_FRAMES);
-
 	nfp_dev_stats.opackets -= hw->eth_stats_base.opackets;
 
 	nfp_dev_stats.obytes =
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_OCTETS);
-
 	nfp_dev_stats.obytes -= hw->eth_stats_base.obytes;
 
 	/* Reading general device stats */
 	nfp_dev_stats.ierrors =
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_ERRORS);
-
 	nfp_dev_stats.ierrors -= hw->eth_stats_base.ierrors;
 
 	nfp_dev_stats.oerrors =
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_TX_ERRORS);
-
 	nfp_dev_stats.oerrors -= hw->eth_stats_base.oerrors;
 
 	/* RX ring mbuf allocation failures */
@@ -893,7 +890,6 @@ nfp_net_stats_get(struct rte_eth_dev *dev,
 	nfp_dev_stats.imissed =
 			nn_cfg_readq(hw, NFP_NET_CFG_STATS_RX_DISCARDS);
-
 	nfp_dev_stats.imissed -= hw->eth_stats_base.imissed;
 
 	if (stats != NULL) {
@@ -981,6 +977,7 @@ nfp_net_xstats_size(const struct rte_eth_dev *dev)
 			if (nfp_net_xstats[count].group == NFP_XSTAT_GROUP_MAC)
 				break;
 		}
+
 		return count;
 	}
@@ -1154,6 +1151,7 @@ nfp_net_xstats_reset(struct rte_eth_dev *dev)
 		hw->eth_xstats_base[id].id = id;
 		hw->eth_xstats_base[id].value = nfp_net_xstats_value(dev, id, true);
 	}
+
 	/* Successfully reset xstats, now call function to reset basic stats. */
 	return nfp_net_stats_reset(dev);
 }
@@ -1201,6 +1199,7 @@ nfp_net_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->max_rx_queues = (uint16_t)hw->max_rx_queues;
 	dev_info->max_tx_queues = (uint16_t)hw->max_tx_queues;
 	dev_info->min_rx_bufsize = RTE_ETHER_MIN_MTU;
+
 	/**
 	 * The maximum rx packet length (max_rx_pktlen) is set to the
 	 * maximum supported frame size that the NFP can handle. This
@@ -1368,6 +1367,7 @@ nfp_net_supported_ptypes_get(struct rte_eth_dev *dev)
 	if (dev->rx_pkt_burst == nfp_net_recv_pkts)
 		return ptypes;
+
 	return NULL;
 }
@@ -1381,7 +1381,6 @@ nfp_rx_queue_intr_enable(struct rte_eth_dev *dev,
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
-
 	if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO)
 		base = 1;
@@ -1402,7 +1401,6 @@ nfp_rx_queue_intr_disable(struct rte_eth_dev *dev,
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
 	pci_dev = RTE_ETH_DEV_TO_PCI(dev);
-
 	if (rte_intr_type_get(pci_dev->intr_handle) != RTE_INTR_HANDLE_UIO)
 		base = 1;
@@ -1619,11 +1617,11 @@ nfp_net_rss_reta_write(struct rte_eth_dev *dev,
 		idx = i / RTE_ETH_RETA_GROUP_SIZE;
 		shift = i % RTE_ETH_RETA_GROUP_SIZE;
 		mask = (uint8_t)((reta_conf[idx].mask >> shift) & 0xF);
-
 		if (mask == 0)
 			continue;
 
 		reta = 0;
+
 		/* If all 4 entries were set, don't need read RETA register */
 		if (mask != 0xF)
 			reta = nn_cfg_readl(hw, NFP_NET_CFG_RSS_ITBL + i);
@@ -1631,13 +1629,17 @@
 		for (j = 0; j < 4; j++) {
 			if ((mask & (0x1 << j)) == 0)
 				continue;
+
 			/* Clearing the entry bits */
 			if (mask != 0xF)
 				reta &= ~(0xFF << (8 * j));
+
 			reta |= reta_conf[idx].reta[shift + j] << (8 * j);
 		}
+
 		nn_cfg_writel(hw, NFP_NET_CFG_RSS_ITBL + (idx * 64) + shift, reta);
 	}
+
 	return 0;
 }
@@ -1682,7 +1684,6 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 	struct nfp_net_hw *hw;
 
 	hw = NFP_NET_DEV_PRIVATE_TO_HW(dev->data->dev_private);
-
 	if ((hw->ctrl & NFP_NET_CFG_CTRL_RSS_ANY) == 0)
 		return -EINVAL;
@@ -1710,10 +1711,12 @@ nfp_net_reta_query(struct rte_eth_dev *dev,
 		for (j = 0; j < 4; j++) {
 			if ((mask & (0x1 << j)) == 0)
 				continue;
+
 			reta_conf[idx].reta[shift + j] =
 					(uint8_t)((reta >> (8 * j)) & 0xF);
 		}
 	}
+
 	return 0;
 }
@@ -1791,6 +1794,7 @@ nfp_net_rss_hash_update(struct rte_eth_dev *dev,
 		PMD_DRV_LOG(ERR, "RSS unsupported");
 		return -EINVAL;
 	}
+
 	return 0; /* Nothing to do */
 }
@@ -1888,6 +1892,7 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev)
 			queue %= rx_queues;
 		}
 	}
+
 	ret = nfp_net_rss_reta_write(dev, nfp_reta_conf, 0x80);
 	if (ret != 0)
 		return ret;
@@ -1897,8 +1902,8 @@ nfp_net_rss_config_default(struct rte_eth_dev *dev)
 		PMD_DRV_LOG(ERR, "Wrong rss conf");
 		return -EINVAL;
 	}
-	rss_conf = dev_conf->rx_adv_conf.rss_conf;
 
+	rss_conf = dev_conf->rx_adv_conf.rss_conf;
 	ret = nfp_net_rss_hash_write(dev, &rss_conf);
 
 	return ret;
diff --git a/drivers/net/nfp/nfp_common.h b/drivers/net/nfp/nfp_common.h
index b41d834165..27dc2175e3 100644
--- a/drivers/net/nfp/nfp_common.h
+++ b/drivers/net/nfp/nfp_common.h
@@ -32,7 +32,7 @@
 #define DEFAULT_RX_HTHRESH      8
 #define DEFAULT_RX_WTHRESH      0
 
-#define DEFAULT_TX_RS_THRESH	32
+#define DEFAULT_TX_RS_THRESH    32
 #define DEFAULT_TX_FREE_THRESH  32
 #define DEFAULT_TX_PTHRESH      32
 #define DEFAULT_TX_HTHRESH      0
@@ -40,12 +40,12 @@
 #define DEFAULT_TX_RSBIT_THRESH 32
 
 /* Alignment for dma zones */
-#define NFP_MEMZONE_ALIGN	128
+#define NFP_MEMZONE_ALIGN       128
 
 #define NFP_QCP_QUEUE_ADDR_SZ   (0x800)
 
 /* Number of supported physical ports */
-#define NFP_MAX_PHYPORTS	12
+#define NFP_MAX_PHYPORTS        12
 
 /* Firmware application ID's */
 enum nfp_app_fw_id {
diff --git a/drivers/net/nfp/nfp_cpp_bridge.c b/drivers/net/nfp/nfp_cpp_bridge.c
index b5bfe17d0e..080070f58b 100644
--- a/drivers/net/nfp/nfp_cpp_bridge.c
+++ b/drivers/net/nfp/nfp_cpp_bridge.c
@@ -191,6 +191,7 @@ nfp_cpp_bridge_serve_write(int sockfd,
 			nfp_cpp_area_free(area);
 			return -EIO;
 		}
+
 		err = nfp_cpp_area_write(area, pos, tmpbuf, len);
 		if (err < 0) {
 			PMD_CPP_LOG(ERR, "nfp_cpp_area_write error");
@@ -312,6 +313,7 @@ nfp_cpp_bridge_serve_read(int sockfd,
 		curlen = (count > NFP_CPP_MEMIO_BOUNDARY) ?
 				NFP_CPP_MEMIO_BOUNDARY : count;
 	}
+
 	return 0;
 }
@@ -393,6 +395,7 @@ nfp_cpp_bridge_service_func(void *args)
 	struct timeval timeout = {1, 0};
 
 	unlink("/tmp/nfp_cpp");
+
 	sockfd = socket(AF_UNIX, SOCK_STREAM, 0);
 	if (sockfd < 0) {
 		PMD_CPP_LOG(ERR, "socket creation error. Service failed");
@@ -456,8 +459,10 @@ nfp_cpp_bridge_service_func(void *args)
 			if (op == 0)
 				break;
 		}
+
 		close(datafd);
 	}
+
 	close(sockfd);
 
 	return 0;
diff --git a/drivers/net/nfp/nfp_ctrl.h b/drivers/net/nfp/nfp_ctrl.h
index a13f95894a..ef8bf486cb 100644
--- a/drivers/net/nfp/nfp_ctrl.h
+++ b/drivers/net/nfp/nfp_ctrl.h
@@ -208,8 +208,8 @@ struct nfp_net_fw_ver {
 /*
  * NFP6000/NFP4000 - Prepend configuration
  */
-#define NFP_NET_CFG_RX_OFFSET		0x0050
-#define NFP_NET_CFG_RX_OFFSET_DYNAMIC	0	/* Prepend mode */
+#define NFP_NET_CFG_RX_OFFSET           0x0050
+#define NFP_NET_CFG_RX_OFFSET_DYNAMIC   0    /* Prepend mode */
 
 /* Start anchor of the TLV area */
 #define NFP_NET_CFG_TLV_BASE            0x0058
@@ -442,7 +442,7 @@ struct nfp_net_fw_ver {
 #define NFP_MAC_STATS_TX_PAUSE_FRAMES_CLASS6 (NFP_MAC_STATS_BASE + 0x1f0)
 #define NFP_MAC_STATS_TX_PAUSE_FRAMES_CLASS7 (NFP_MAC_STATS_BASE + 0x1f8)
 
-#define NFP_PF_CSR_SLICE_SIZE	(32 * 1024)
+#define NFP_PF_CSR_SLICE_SIZE   (32 * 1024)
 
 /*
  * General use mailbox area (0x1800 - 0x19ff)
diff --git a/drivers/net/nfp/nfp_ethdev.c b/drivers/net/nfp/nfp_ethdev.c
index dece821e4a..0493548c81 100644
--- a/drivers/net/nfp/nfp_ethdev.c
+++ b/drivers/net/nfp/nfp_ethdev.c
@@ -36,6 +36,7 @@ nfp_net_pf_read_mac(struct nfp_app_fw_nic *app_fw_nic,
 	rte_ether_addr_copy(&nfp_eth_table->ports[port].mac_addr, &hw->mac_addr);
 
 	free(nfp_eth_table);
+
 	return 0;
 }
@@ -73,6 +74,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 					"with NFP multiport PF");
 			return -EINVAL;
 		}
+
 		if (rte_intr_type_get(intr_handle) == RTE_INTR_HANDLE_UIO) {
 			/*
 			 * Better not to share LSC with RX interrupts.
@@ -87,6 +89,7 @@ nfp_net_start(struct rte_eth_dev *dev)
 				return -EIO;
 			}
 		}
+
 		intr_vector = dev->data->nb_rx_queues;
 		if (rte_intr_efd_enable(intr_handle, intr_vector) != 0)
 			return -1;
@@ -198,7 +201,6 @@ nfp_net_stop(struct rte_eth_dev *dev)
 
 	/* Clear queues */
 	nfp_net_stop_tx_queue(dev);
-
 	nfp_net_stop_rx_queue(dev);
 
 	if (rte_eal_process_type() == RTE_PROC_PRIMARY)
@@ -262,12 +264,10 @@ nfp_net_close(struct rte_eth_dev *dev)
 	 * We assume that the DPDK application is stopping all the
 	 * threads/queues before calling the device close function.
 	 */
-
 	nfp_net_disable_queues(dev);
 
 	/* Clear queues */
 	nfp_net_close_tx_queue(dev);
-
 	nfp_net_close_rx_queue(dev);
 
 	/* Clear ipsec */
@@ -413,35 +413,35 @@ nfp_udp_tunnel_port_del(struct rte_eth_dev *dev,
 
 /* Initialise and register driver with DPDK Application */
 static const struct eth_dev_ops nfp_net_eth_dev_ops = {
-	.dev_configure		= nfp_net_configure,
-	.dev_start		= nfp_net_start,
-	.dev_stop		= nfp_net_stop,
-	.dev_set_link_up	= nfp_net_set_link_up,
-	.dev_set_link_down	= nfp_net_set_link_down,
-	.dev_close		= nfp_net_close,
-	.promiscuous_enable	= nfp_net_promisc_enable,
-	.promiscuous_disable	= nfp_net_promisc_disable,
-	.link_update		= nfp_net_link_update,
-	.stats_get		= nfp_net_stats_get,
-	.stats_reset		= nfp_net_stats_reset,
+	.dev_configure          = nfp_net_configure,
+	.dev_start              = nfp_net_start,
+	.dev_stop               = nfp_net_stop,
+	.dev_set_link_up        = nfp_net_set_link_up,
+	.dev_set_link_down      = nfp_net_set_link_down,
+	.dev_close              = nfp_net_close,
+	.promiscuous_enable     = nfp_net_promisc_enable,
+	.promiscuous_disable    = nfp_net_promisc_disable,
+	.link_update            = nfp_net_link_update,
+	.stats_get              = nfp_net_stats_get,
+	.stats_reset            = nfp_net_stats_reset,
 	.xstats_get             = nfp_net_xstats_get,
 	.xstats_reset           = nfp_net_xstats_reset,
 	.xstats_get_names       = nfp_net_xstats_get_names,
 	.xstats_get_by_id       = nfp_net_xstats_get_by_id,
 	.xstats_get_names_by_id = nfp_net_xstats_get_names_by_id,
-	.dev_infos_get		= nfp_net_infos_get,
+	.dev_infos_get          = nfp_net_infos_get,
 	.dev_supported_ptypes_get = nfp_net_supported_ptypes_get,
-	.mtu_set		= nfp_net_dev_mtu_set,
-	.mac_addr_set		= nfp_net_set_mac_addr,
-	.vlan_offload_set	= nfp_net_vlan_offload_set,
-	.reta_update		= nfp_net_reta_update,
-	.reta_query		= nfp_net_reta_query,
-	.rss_hash_update	= nfp_net_rss_hash_update,
-	.rss_hash_conf_get	= nfp_net_rss_hash_conf_get,
-	.rx_queue_setup		= nfp_net_rx_queue_setup,
-	.rx_queue_release	= nfp_net_rx_queue_release,
-	.tx_queue_setup		= nfp_net_tx_queue_setup,
-	.tx_queue_release	= nfp_net_tx_queue_release,
+	.mtu_set                = nfp_net_dev_mtu_set,
+	.mac_addr_set           = nfp_net_set_mac_addr,
+	.vlan_offload_set       = nfp_net_vlan_offload_set,
+	.reta_update            = nfp_net_reta_update,
+	.reta_query             = nfp_net_reta_query,
+	.rss_hash_update        = nfp_net_rss_hash_update,
+	.rss_hash_conf_get      = nfp_net_rss_hash_conf_get,
+	.rx_queue_setup         = nfp_net_rx_queue_setup,
+	.rx_queue_release       = nfp_net_rx_queue_release,
+	.tx_queue_setup         = nfp_net_tx_queue_setup,
+	.tx_queue_release       = nfp_net_tx_queue_release,
 	.rx_queue_intr_enable   = nfp_rx_queue_intr_enable,
 	.rx_queue_intr_disable  = nfp_rx_queue_intr_disable,
 	.udp_tunnel_port_add    = nfp_udp_tunnel_port_add,
@@ -501,7 +501,6 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 
 	rte_eth_copy_pci_info(eth_dev, pci_dev);
-
 	hw->ctrl_bar = pci_dev->mem_resource[0].addr;
 	if (hw->ctrl_bar == NULL) {
 		PMD_DRV_LOG(ERR, "hw->ctrl_bar is NULL. BAR0 not configured");
@@ -519,10 +518,12 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 			PMD_INIT_LOG(ERR, "nfp_rtsym_map fails for _mac_stats_bar");
 			return -EIO;
 		}
+
 		hw->mac_stats = hw->mac_stats_bar;
 	} else {
 		if (pf_dev->ctrl_bar == NULL)
 			return -ENODEV;
+
 		/* Use port offset in pf ctrl_bar for this ports control bar */
 		hw->ctrl_bar = pf_dev->ctrl_bar + (port * NFP_PF_CSR_SLICE_SIZE);
 		hw->mac_stats = app_fw_nic->ports[0]->mac_stats_bar + (port * NFP_MAC_STATS_SIZE);
@@ -557,7 +558,6 @@ nfp_net_init(struct rte_eth_dev *eth_dev)
 		return -ENOMEM;
 	}
 
-
 	/* Work out where in the BAR the queues start. */
 	tx_base = nn_cfg_readl(hw, NFP_NET_CFG_START_TXQ);
 	rx_base = nn_cfg_readl(hw, NFP_NET_CFG_START_RXQ);
@@ -653,12 +653,12 @@ nfp_fw_upload(struct rte_pci_device *dev,
 			"serial-%02x-%02x-%02x-%02x-%02x-%02x-%02x-%02x",
 			cpp_serial[0], cpp_serial[1], cpp_serial[2], cpp_serial[3],
 			cpp_serial[4], cpp_serial[5], interface >> 8, interface & 0xff);
-
 	snprintf(fw_name, sizeof(fw_name), "%s/%s.nffw", DEFAULT_FW_PATH, serial);
 	PMD_DRV_LOG(DEBUG, "Trying with fw file: %s", fw_name);
 	if (rte_firmware_read(fw_name, &fw_buf, &fsize) == 0)
 		goto load_fw;
+
 	/* Then try the PCI name */
 	snprintf(fw_name, sizeof(fw_name), "%s/pci-%s.nffw", DEFAULT_FW_PATH,
 			dev->name);
diff --git a/drivers/net/nfp/nfp_ethdev_vf.c b/drivers/net/nfp/nfp_ethdev_vf.c
index 0a1eb04294..8053808b02 100644
--- a/drivers/net/nfp/nfp_ethdev_vf.c
+++ b/drivers/net/nfp/nfp_ethdev_vf.c
@@ -63,6 +63,7 @@ nfp_netvf_start(struct rte_eth_dev *dev)
 			return -EIO;
 		}
 	}
+
 	intr_vector = dev->data->nb_rx_queues;
 	if (rte_intr_efd_enable(intr_handle, intr_vector) != 0)
 		return -1;
@@ -172,12 +173,10 @@ nfp_netvf_close(struct rte_eth_dev *dev)
 	 * We assume that the DPDK application is stopping all the
 	 * threads/queues before calling the device close function.
 	 */
-
 	nfp_net_disable_queues(dev);
 
 	/* Clear queues */
 	nfp_net_close_tx_queue(dev);
-
 	nfp_net_close_rx_queue(dev);
 
 	rte_intr_disable(pci_dev->intr_handle);
@@ -194,35 +193,35 @@ nfp_netvf_close(struct rte_eth_dev *dev)
 
 /* Initialise and register VF driver with DPDK Application */
 static const struct eth_dev_ops nfp_netvf_eth_dev_ops = {
-	.dev_configure		= nfp_net_configure,
-	.dev_start		= nfp_netvf_start,
-	.dev_stop		= nfp_netvf_stop,
-	.dev_set_link_up	= nfp_netvf_set_link_up,
-	.dev_set_link_down	= nfp_netvf_set_link_down,
-	.dev_close		= nfp_netvf_close,
-	.promiscuous_enable	= nfp_net_promisc_enable,
-	.promiscuous_disable	= nfp_net_promisc_disable,
-	.link_update		= nfp_net_link_update,
-	.stats_get		= nfp_net_stats_get,
-	.stats_reset		= nfp_net_stats_reset,
+	.dev_configure          = nfp_net_configure,
+	.dev_start              = nfp_netvf_start,
+	.dev_stop               = nfp_netvf_stop,
+	.dev_set_link_up        = nfp_netvf_set_link_up,
+	.dev_set_link_down      = nfp_netvf_set_link_down,
+	.dev_close              = nfp_netvf_close,
+	.promiscuous_enable     = nfp_net_promisc_enable,
+	.promiscuous_disable    = nfp_net_promisc_disable,
+	.link_update            = nfp_net_link_update,
+	.stats_get              = nfp_net_stats_get,
+	.stats_reset            = nfp_net_stats_reset,
 	.xstats_get             = nfp_net_xstats_get,
 	.xstats_reset           = nfp_net_xstats_reset,
 	.xstats_get_names       = nfp_net_xstats_get_names,
 	.xstats_get_by_id       = nfp_net_xstats_get_by_id,
 	.xstats_get_names_by_id = nfp_net_xstats_get_names_by_id,
-	.dev_infos_get		= nfp_net_infos_get,
+	.dev_infos_get          = nfp_net_infos_get,
 	.dev_supported_ptypes_get = nfp_net_supported_ptypes_get,
-	.mtu_set		= nfp_net_dev_mtu_set,
-	.mac_addr_set		= nfp_net_set_mac_addr,
-	.vlan_offload_set	= nfp_net_vlan_offload_set,
-	.reta_update		= nfp_net_reta_update,
-	.reta_query		= nfp_net_reta_query,
-	.rss_hash_update	= nfp_net_rss_hash_update,
-	.rss_hash_conf_get	= nfp_net_rss_hash_conf_get,
-	.rx_queue_setup		= nfp_net_rx_queue_setup,
-	.rx_queue_release	= nfp_net_rx_queue_release,
-	.tx_queue_setup		= nfp_net_tx_queue_setup,
-	.tx_queue_release	= nfp_net_tx_queue_release,
+	.mtu_set                = nfp_net_dev_mtu_set,
+	.mac_addr_set           = nfp_net_set_mac_addr,
+	.vlan_offload_set       = nfp_net_vlan_offload_set,
+	.reta_update            = nfp_net_reta_update,
+	.reta_query             = nfp_net_reta_query,
+	.rss_hash_update        = nfp_net_rss_hash_update,
+	.rss_hash_conf_get      = nfp_net_rss_hash_conf_get,
+	.rx_queue_setup         = nfp_net_rx_queue_setup,
+	.rx_queue_release       = nfp_net_rx_queue_release,
+	.tx_queue_setup         = nfp_net_tx_queue_setup,
+	.tx_queue_release       = nfp_net_tx_queue_release,
 	.rx_queue_intr_enable   = nfp_rx_queue_intr_enable,
 	.rx_queue_intr_disable  = nfp_rx_queue_intr_disable,
 };
diff --git a/drivers/net/nfp/nfp_flow.c b/drivers/net/nfp/nfp_flow.c
index 7b1abe926e..a806cbfbeb 100644
--- a/drivers/net/nfp/nfp_flow.c
+++ b/drivers/net/nfp/nfp_flow.c
@@ -464,6 +464,7 @@ nfp_stats_id_alloc(struct nfp_flow_priv *priv, uint32_t *ctx)
 		priv->stats_ids.init_unallocated--;
 		priv->active_mem_unit = 0;
 	}
+
 	return 0;
 }
@@ -590,6 +591,7 @@ nfp_tun_add_ipv6_off(struct nfp_app_fw_flower *app_fw_flower,
 		PMD_DRV_LOG(ERR, "Mem error when offloading IP6 address.");
 		return -ENOMEM;
 	}
+
 	memcpy(tmp_entry->ipv6_addr, ipv6, sizeof(tmp_entry->ipv6_addr));
 	tmp_entry->ref_count = 1;
@@ -1760,7 +1762,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_VLAN,
 				RTE_FLOW_ITEM_TYPE_IPV4,
 				RTE_FLOW_ITEM_TYPE_IPV6),
-		.mask_support = &(const struct rte_flow_item_eth){
+		.mask_support = &(const struct rte_flow_item_eth) {
 			.hdr = {
 				.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
 				.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
@@ -1775,7 +1777,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 	[RTE_FLOW_ITEM_TYPE_VLAN] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_IPV4,
 				RTE_FLOW_ITEM_TYPE_IPV6),
-		.mask_support = &(const struct rte_flow_item_vlan){
+		.mask_support = &(const struct rte_flow_item_vlan) {
 			.hdr = {
 				.vlan_tci = RTE_BE16(0xefff),
 				.eth_proto = RTE_BE16(0xffff),
@@ -1791,7 +1793,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 				RTE_FLOW_ITEM_TYPE_UDP,
 				RTE_FLOW_ITEM_TYPE_SCTP,
 				RTE_FLOW_ITEM_TYPE_GRE),
-		.mask_support = &(const struct rte_flow_item_ipv4){
+		.mask_support = &(const struct rte_flow_item_ipv4) {
 			.hdr = {
 				.type_of_service = 0xff,
 				.fragment_offset = RTE_BE16(0xffff),
@@ -1810,7 +1812,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 				RTE_FLOW_ITEM_TYPE_UDP,
 				RTE_FLOW_ITEM_TYPE_SCTP,
 				RTE_FLOW_ITEM_TYPE_GRE),
-		.mask_support = &(const struct rte_flow_item_ipv6){
+		.mask_support = &(const struct rte_flow_item_ipv6) {
 			.hdr = {
 				.vtc_flow = RTE_BE32(0x0ff00000),
 				.proto = 0xff,
@@ -1827,7 +1829,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 		.merge = nfp_flow_merge_ipv6,
 	},
 	[RTE_FLOW_ITEM_TYPE_TCP] = {
-		.mask_support = &(const struct rte_flow_item_tcp){
+		.mask_support = &(const struct rte_flow_item_tcp) {
 			.hdr = {
 				.tcp_flags = 0xff,
 				.src_port = RTE_BE16(0xffff),
@@ -1841,7 +1843,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 	[RTE_FLOW_ITEM_TYPE_UDP] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_VXLAN,
 				RTE_FLOW_ITEM_TYPE_GENEVE),
-		.mask_support = &(const struct rte_flow_item_udp){
+		.mask_support = &(const struct rte_flow_item_udp) {
 			.hdr = {
 				.src_port = RTE_BE16(0xffff),
 				.dst_port = RTE_BE16(0xffff),
@@ -1852,7 +1854,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 		.merge = nfp_flow_merge_udp,
 	},
 	[RTE_FLOW_ITEM_TYPE_SCTP] = {
-		.mask_support = &(const struct rte_flow_item_sctp){
+		.mask_support = &(const struct rte_flow_item_sctp) {
 			.hdr = {
 				.src_port = RTE_BE16(0xffff),
 				.dst_port = RTE_BE16(0xffff),
@@ -1864,7 +1866,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 	},
 	[RTE_FLOW_ITEM_TYPE_VXLAN] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH),
-		.mask_support = &(const struct rte_flow_item_vxlan){
+		.mask_support = &(const struct rte_flow_item_vxlan) {
 			.hdr = {
 				.vx_vni = RTE_BE32(0xffffff00),
 			},
@@ -1875,7 +1877,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 	},
 	[RTE_FLOW_ITEM_TYPE_GENEVE] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_ETH),
-		.mask_support = &(const struct rte_flow_item_geneve){
+		.mask_support = &(const struct rte_flow_item_geneve) {
 			.vni = "\xff\xff\xff",
 		},
 		.mask_default = &rte_flow_item_geneve_mask,
@@ -1884,7 +1886,7 @@ static const struct nfp_flow_item_proc nfp_flow_item_proc_list[] = {
 	},
 	[RTE_FLOW_ITEM_TYPE_GRE] = {
 		.next_item = NEXT_ITEM(RTE_FLOW_ITEM_TYPE_GRE_KEY),
-		.mask_support = &(const struct rte_flow_item_gre){
+		.mask_support = &(const struct rte_flow_item_gre) {
 			.c_rsvd0_ver = RTE_BE16(0xa000),
 			.protocol = RTE_BE16(0xffff),
 		},
@@ -1916,6 +1918,7 @@ nfp_flow_item_check(const struct rte_flow_item *item,
 				" without a corresponding 'spec'.");
 		return -EINVAL;
 	}
+
 	/* No spec, no mask, no problem. */
 	return 0;
 }
@@ -2995,6 +2998,7 @@ nfp_pre_tun_table_check_add(struct nfp_flower_representor *repr,
 	for (i = 1; i < NFP_TUN_PRE_TUN_RULE_LIMIT; i++) {
 		if (priv->pre_tun_bitmap[i] == 0)
 			continue;
+
 		entry->mac_index = i;
 		find_entry = nfp_pre_tun_table_search(priv, (char *)entry, entry_size);
 		if (find_entry != NULL) {
@@ -3021,6 +3025,7 @@ nfp_pre_tun_table_check_add(struct nfp_flower_representor *repr,
 
 	*index = entry->mac_index;
 	priv->pre_tun_cnt++;
+
 	return 0;
 }
@@ -3055,12 +3060,14 @@ nfp_pre_tun_table_check_del(struct nfp_flower_representor *repr,
 	for (i = 1; i < NFP_TUN_PRE_TUN_RULE_LIMIT; i++) {
 		if (priv->pre_tun_bitmap[i] == 0)
 			continue;
+
 		entry->mac_index = i;
 		find_entry = nfp_pre_tun_table_search(priv, (char *)entry, entry_size);
 		if (find_entry != NULL) {
 			find_entry->ref_cnt--;
 			if (find_entry->ref_cnt != 0)
 				goto free_entry;
+
 			priv->pre_tun_bitmap[i] = 0;
 			break;
 		}
diff --git a/drivers/net/nfp/nfp_flow.h b/drivers/net/nfp/nfp_flow.h
index 68b6fb6abe..5a5b6a7d19 100644
--- a/drivers/net/nfp/nfp_flow.h
+++ b/drivers/net/nfp/nfp_flow.h
@@ -115,11 +115,14 @@ struct nfp_ipv6_addr_entry {
 struct nfp_flow_priv {
 	uint32_t hash_seed; /**< Hash seed for hash tables in this structure. */
 	uint64_t flower_version; /**< Flow version, always increase. */
+
 	/* Mask hash table */
 	struct nfp_fl_mask_id mask_ids; /**< Entry for mask hash table */
 	struct rte_hash *mask_table; /**< Hash table to store mask ids. */
+
 	/* Flow hash table */
 	struct rte_hash *flow_table; /**< Hash table to store flow rules. */
+
 	/* Flow stats */
 	uint32_t active_mem_unit; /**< The size of active mem units. */
 	uint32_t total_mem_units; /**< The size of total mem units. */
@@ -127,16 +130,20 @@ struct nfp_flow_priv {
 	struct nfp_fl_stats_id stats_ids; /**< The stats id ring. */
 	struct nfp_fl_stats *stats; /**< Store stats of flow. */
 	rte_spinlock_t stats_lock; /** < Lock the update of 'stats' field. */
+
 	/* Pre tunnel rule */
 	uint16_t pre_tun_cnt; /**< The size of pre tunnel rule */
 	uint8_t pre_tun_bitmap[NFP_TUN_PRE_TUN_RULE_LIMIT]; /**< Bitmap of pre tunnel rule */
 	struct rte_hash *pre_tun_table; /**< Hash table to store pre tunnel rule */
+
 	/* IPv4 off */
 	LIST_HEAD(, nfp_ipv4_addr_entry) ipv4_off_list; /**< Store ipv4 off */
 	rte_spinlock_t ipv4_off_lock; /**< Lock the ipv4 off list */
+
 	/* IPv6 off */
 	LIST_HEAD(, nfp_ipv6_addr_entry) ipv6_off_list; /**< Store ipv6 off */
 	rte_spinlock_t ipv6_off_lock; /**< Lock the ipv6 off list */
+
 	/* Neighbor next */
 	LIST_HEAD(, nfp_fl_tun)nn_list; /**< Store nn entry */
 };
diff --git a/drivers/net/nfp/nfp_rxtx.c b/drivers/net/nfp/nfp_rxtx.c
index 7b77351f1c..4632837c0e 100644
--- a/drivers/net/nfp/nfp_rxtx.c
+++ b/drivers/net/nfp/nfp_rxtx.c
@@ -190,6 +190,7 @@ nfp_net_rx_fill_freelist(struct nfp_net_rxq *rxq)
 		rxd->fld.dd = 0;
 		rxd->fld.dma_addr_hi = (dma_addr >> 32) & 0xffff;
 		rxd->fld.dma_addr_lo = dma_addr & 0xffffffff;
+
 		rxe[i].mbuf = mbuf;
 	}
@@ -213,6 +214,7 @@ nfp_net_rx_freelist_setup(struct rte_eth_dev *dev)
 		if (nfp_net_rx_fill_freelist(dev->data->rx_queues[i]) != 0)
 			return -1;
 	}
+
 	return 0;
 }
@@ -225,7 +227,6 @@ nfp_net_rx_queue_count(void *rx_queue)
 	struct nfp_net_rx_desc *rxds;
 
 	rxq = rx_queue;
-
 	idx = rxq->rd_p;
 
 	/*
@@ -235,7 +236,6 @@ nfp_net_rx_queue_count(void *rx_queue)
 	 * performance. But ideally that should be done in descriptors
 	 * chunks belonging to the same cache line.
 	 */
-
 	while (count < rxq->rx_count) {
 		rxds = &rxq->rxds[idx];
 		if ((rxds->rxd.meta_len_dd & PCIE_DESC_RX_DD) == 0)
@@ -394,6 +394,7 @@ nfp_net_parse_meta_qinq(const struct nfp_meta_parsed *meta,
 	if (meta->vlan[0].offload == 0)
 		mb->vlan_tci = rte_cpu_to_le_16(meta->vlan[0].tci);
+
 	mb->vlan_tci_outer = rte_cpu_to_le_16(meta->vlan[1].tci);
 	PMD_RX_LOG(DEBUG, "Received outer vlan TCI is %u inner vlan TCI is %u",
 			mb->vlan_tci_outer, mb->vlan_tci);
@@ -638,7 +639,6 @@ nfp_net_parse_ptype(struct nfp_net_rx_desc *rxds,
  * so looking at the implications of this type of allocation should be studied
  * deeply.
  */
-
 uint16_t
 nfp_net_recv_pkts(void *rx_queue,
 		struct rte_mbuf **rx_pkts,
@@ -903,7 +903,6 @@ nfp_net_rx_queue_setup(struct rte_eth_dev *dev,
 	tz = rte_eth_dma_zone_reserve(dev, "rx_ring", queue_idx,
 			sizeof(struct nfp_net_rx_desc) * max_rx_desc,
 			NFP_MEMZONE_ALIGN, socket_id);
-
 	if (tz == NULL) {
 		PMD_DRV_LOG(ERR, "Error allocating rx dma");
 		nfp_net_rx_queue_release(dev, queue_idx);