From patchwork Mon Mar 11 16:39:12 2024
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 138166
X-Patchwork-Delegate: jerinj@marvell.com
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger, Julien Aube
Subject: [PATCH] net/bnx2x: fix indentation
Date: Mon, 11 Mar 2024 09:39:12 -0700
Message-ID: <20240311163912.11229-1-stephen@networkplumber.org>
List-Id: DPDK patches and discussions

The DPDK style of indentation uses tabs, not spaces. This file had a
mix of both; convert it to use tabs throughout.
Signed-off-by: Stephen Hemminger
---
 drivers/net/bnx2x/bnx2x_stats.c | 174 ++++++++++++++++----------------
 1 file changed, 87 insertions(+), 87 deletions(-)

diff --git a/drivers/net/bnx2x/bnx2x_stats.c b/drivers/net/bnx2x/bnx2x_stats.c
index 69132c7c806e..895113c1e175 100644
--- a/drivers/net/bnx2x/bnx2x_stats.c
+++ b/drivers/net/bnx2x/bnx2x_stats.c
@@ -487,32 +487,32 @@ bnx2x_func_stats_init(struct bnx2x_softc *sc)
 static void
 bnx2x_stats_start(struct bnx2x_softc *sc)
 {
-    /*
-     * VFs travel through here as part of the statistics FSM, but no action
-     * is required
-     */
-    if (IS_VF(sc)) {
-        return;
-    }
+	/*
+	 * VFs travel through here as part of the statistics FSM, but no action
+	 * is required
+	 */
+	if (IS_VF(sc)) {
+		return;
+	}
 
-    if (sc->port.pmf) {
-        bnx2x_port_stats_init(sc);
-    }
+	if (sc->port.pmf) {
+		bnx2x_port_stats_init(sc);
+	}
 
-    else if (sc->func_stx) {
-        bnx2x_func_stats_init(sc);
-    }
+	else if (sc->func_stx) {
+		bnx2x_func_stats_init(sc);
+	}
 
-    bnx2x_hw_stats_post(sc);
-    bnx2x_storm_stats_post(sc);
+	bnx2x_hw_stats_post(sc);
+	bnx2x_storm_stats_post(sc);
 }
 
 static void
 bnx2x_stats_pmf_start(struct bnx2x_softc *sc)
 {
-    bnx2x_stats_comp(sc);
-    bnx2x_stats_pmf_update(sc);
-    bnx2x_stats_start(sc);
+	bnx2x_stats_comp(sc);
+	bnx2x_stats_pmf_update(sc);
+	bnx2x_stats_start(sc);
 }
 
 static void
@@ -1334,84 +1334,84 @@ bnx2x_port_stats_base_init(struct bnx2x_softc *sc)
 static void
 bnx2x_prep_fw_stats_req(struct bnx2x_softc *sc)
 {
-    int i;
-    int first_queue_query_index;
-    struct stats_query_header *stats_hdr = &sc->fw_stats_req->hdr;
-    rte_iova_t cur_data_offset;
-    struct stats_query_entry *cur_query_entry;
+	int i;
+	int first_queue_query_index;
+	struct stats_query_header *stats_hdr = &sc->fw_stats_req->hdr;
+	rte_iova_t cur_data_offset;
+	struct stats_query_entry *cur_query_entry;
 
-    stats_hdr->cmd_num = sc->fw_stats_num;
-    stats_hdr->drv_stats_counter = 0;
+	stats_hdr->cmd_num = sc->fw_stats_num;
+	stats_hdr->drv_stats_counter = 0;
 
-    /*
-     * The storm_counters struct contains the counters of completed
-     * statistics requests per storm which are incremented by FW
-     * each time it completes hadning a statistics ramrod. We will
-     * check these counters in the timer handler and discard a
-     * (statistics) ramrod completion.
-     */
-    cur_data_offset = (sc->fw_stats_data_mapping +
-                       offsetof(struct bnx2x_fw_stats_data, storm_counters));
+	/*
+	 * The storm_counters struct contains the counters of completed
+	 * statistics requests per storm which are incremented by FW
+	 * each time it completes hadning a statistics ramrod. We will
+	 * check these counters in the timer handler and discard a
+	 * (statistics) ramrod completion.
+	 */
+	cur_data_offset = (sc->fw_stats_data_mapping +
+			   offsetof(struct bnx2x_fw_stats_data, storm_counters));
 
-    stats_hdr->stats_counters_addrs.hi = htole32(U64_HI(cur_data_offset));
-    stats_hdr->stats_counters_addrs.lo = htole32(U64_LO(cur_data_offset));
+	stats_hdr->stats_counters_addrs.hi = htole32(U64_HI(cur_data_offset));
+	stats_hdr->stats_counters_addrs.lo = htole32(U64_LO(cur_data_offset));
 
-    /*
-     * Prepare the first stats ramrod (will be completed with
-     * the counters equal to zero) - init counters to something different.
-     */
-    memset(&sc->fw_stats_data->storm_counters, 0xff,
-           sizeof(struct stats_counter));
+	/*
+	 * Prepare the first stats ramrod (will be completed with
+	 * the counters equal to zero) - init counters to something different.
+	 */
+	memset(&sc->fw_stats_data->storm_counters, 0xff,
+	       sizeof(struct stats_counter));
 
-    /**** Port FW statistics data ****/
-    cur_data_offset = (sc->fw_stats_data_mapping +
-                       offsetof(struct bnx2x_fw_stats_data, port));
+	/**** Port FW statistics data ****/
+	cur_data_offset = (sc->fw_stats_data_mapping +
+			   offsetof(struct bnx2x_fw_stats_data, port));
 
-    cur_query_entry = &sc->fw_stats_req->query[BNX2X_PORT_QUERY_IDX];
+	cur_query_entry = &sc->fw_stats_req->query[BNX2X_PORT_QUERY_IDX];
 
-    cur_query_entry->kind = STATS_TYPE_PORT;
-    /* For port query index is a DON'T CARE */
-    cur_query_entry->index = SC_PORT(sc);
-    /* For port query funcID is a DON'T CARE */
-    cur_query_entry->funcID = htole16(SC_FUNC(sc));
-    cur_query_entry->address.hi = htole32(U64_HI(cur_data_offset));
-    cur_query_entry->address.lo = htole32(U64_LO(cur_data_offset));
+	cur_query_entry->kind = STATS_TYPE_PORT;
+	/* For port query index is a DON'T CARE */
+	cur_query_entry->index = SC_PORT(sc);
+	/* For port query funcID is a DON'T CARE */
+	cur_query_entry->funcID = htole16(SC_FUNC(sc));
+	cur_query_entry->address.hi = htole32(U64_HI(cur_data_offset));
+	cur_query_entry->address.lo = htole32(U64_LO(cur_data_offset));
 
-    /**** PF FW statistics data ****/
-    cur_data_offset = (sc->fw_stats_data_mapping +
-                       offsetof(struct bnx2x_fw_stats_data, pf));
+	/**** PF FW statistics data ****/
+	cur_data_offset = (sc->fw_stats_data_mapping +
+			   offsetof(struct bnx2x_fw_stats_data, pf));
 
-    cur_query_entry = &sc->fw_stats_req->query[BNX2X_PF_QUERY_IDX];
+	cur_query_entry = &sc->fw_stats_req->query[BNX2X_PF_QUERY_IDX];
 
-    cur_query_entry->kind = STATS_TYPE_PF;
-    /* For PF query index is a DON'T CARE */
-    cur_query_entry->index = SC_PORT(sc);
-    cur_query_entry->funcID = htole16(SC_FUNC(sc));
-    cur_query_entry->address.hi = htole32(U64_HI(cur_data_offset));
-    cur_query_entry->address.lo = htole32(U64_LO(cur_data_offset));
+	cur_query_entry->kind = STATS_TYPE_PF;
+	/* For PF query index is a DON'T CARE */
+	cur_query_entry->index = SC_PORT(sc);
+	cur_query_entry->funcID = htole16(SC_FUNC(sc));
+	cur_query_entry->address.hi = htole32(U64_HI(cur_data_offset));
+	cur_query_entry->address.lo = htole32(U64_LO(cur_data_offset));
 
-    /**** Clients' queries ****/
-    cur_data_offset = (sc->fw_stats_data_mapping +
-                       offsetof(struct bnx2x_fw_stats_data, queue_stats));
+	/**** Clients' queries ****/
+	cur_data_offset = (sc->fw_stats_data_mapping +
+			   offsetof(struct bnx2x_fw_stats_data, queue_stats));
 
-    /*
-     * First queue query index depends whether FCoE offloaded request will
-     * be included in the ramrod
-     */
+	/*
+	 * First queue query index depends whether FCoE offloaded request will
+	 * be included in the ramrod
+	 */
 	first_queue_query_index = (BNX2X_FIRST_QUEUE_QUERY_IDX - 1);
 
-    for (i = 0; i < sc->num_queues; i++) {
-        cur_query_entry =
-            &sc->fw_stats_req->query[first_queue_query_index + i];
+	for (i = 0; i < sc->num_queues; i++) {
+		cur_query_entry =
+		    &sc->fw_stats_req->query[first_queue_query_index + i];
 
-        cur_query_entry->kind = STATS_TYPE_QUEUE;
-        cur_query_entry->index = bnx2x_stats_id(&sc->fp[i]);
-        cur_query_entry->funcID = htole16(SC_FUNC(sc));
-        cur_query_entry->address.hi = htole32(U64_HI(cur_data_offset));
-        cur_query_entry->address.lo = htole32(U64_LO(cur_data_offset));
+		cur_query_entry->kind = STATS_TYPE_QUEUE;
+		cur_query_entry->index = bnx2x_stats_id(&sc->fp[i]);
+		cur_query_entry->funcID = htole16(SC_FUNC(sc));
+		cur_query_entry->address.hi = htole32(U64_HI(cur_data_offset));
+		cur_query_entry->address.lo = htole32(U64_LO(cur_data_offset));
 
-        cur_data_offset += sizeof(struct per_queue_stats);
-    }
+		cur_data_offset += sizeof(struct per_queue_stats);
+	}
 }
 
 void bnx2x_memset_stats(struct bnx2x_softc *sc)
@@ -1476,7 +1476,7 @@ bnx2x_stats_init(struct bnx2x_softc *sc)
 	}
 
 	PMD_DRV_LOG(DEBUG, sc, "port_stx 0x%x func_stx 0x%x",
-		sc->port.port_stx, sc->func_stx);
+		    sc->port.port_stx, sc->func_stx);
 
 	/* pmf should retrieve port statistics from SP on a non-init*/
 	if (!sc->stats_init && sc->port.pmf && sc->port.port_stx) {
@@ -1492,11 +1492,11 @@ bnx2x_stats_init(struct bnx2x_softc *sc)
 		REG_RD(sc, NIG_REG_STAT0_BRB_TRUNCATE + port*0x38);
 		if (!CHIP_IS_E3(sc)) {
 			REG_RD_DMAE(sc, NIG_REG_STAT0_EGRESS_MAC_PKT0 + port*0x50,
-				RTE_PTR_ADD(&sc->port.old_nig_stats,
-				offsetof(struct nig_stats, egress_mac_pkt0_lo)), 2);
+				    RTE_PTR_ADD(&sc->port.old_nig_stats,
+						offsetof(struct nig_stats, egress_mac_pkt0_lo)), 2);
 			REG_RD_DMAE(sc, NIG_REG_STAT0_EGRESS_MAC_PKT1 + port*0x50,
-				RTE_PTR_ADD(&sc->port.old_nig_stats,
-				offsetof(struct nig_stats, egress_mac_pkt1_lo)), 2);
+				    RTE_PTR_ADD(&sc->port.old_nig_stats,
+						offsetof(struct nig_stats, egress_mac_pkt1_lo)), 2);
 		}
 
 		/* function stats */
@@ -1506,9 +1506,9 @@ bnx2x_stats_init(struct bnx2x_softc *sc)
 		memset(&sc->fp[i].old_xclient, 0, sizeof(sc->fp[i].old_xclient));
 		if (sc->stats_init) {
 			memset(&sc->fp[i].eth_q_stats, 0,
-				sizeof(sc->fp[i].eth_q_stats));
+			       sizeof(sc->fp[i].eth_q_stats));
 			memset(&sc->fp[i].eth_q_stats_old, 0,
-				sizeof(sc->fp[i].eth_q_stats_old));
+			       sizeof(sc->fp[i].eth_q_stats_old));
 		}
 	}
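---
Editor's note, not part of the patch: a whitespace conversion like the one above
can be found and reproduced mechanically. The sketch below is a hedged example
using GNU grep and coreutils `unexpand` (8-column tab stops, matching DPDK
style); the sample file and path are illustrative, not taken from the patch.

```shell
# Create an illustrative sample with one space-indented line and one
# tab-indented line (stand-in for a file like bnx2x_stats.c):
printf '        bnx2x_hw_stats_post(sc);\n\tbnx2x_storm_stats_post(sc);\n' > /tmp/sample.c

# Flag lines whose indentation still begins with a space:
grep -n '^ ' /tmp/sample.c

# Convert leading runs of blanks to tabs at 8-column stops.
# --first-only restricts unexpand to leading whitespace, so spaces
# inside strings and comments are left alone:
unexpand --first-only -t 8 /tmp/sample.c
```

After conversion, `grep -c '^ '` on the output should report zero
space-indented lines, which is one quick way to review a patch like this.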