From patchwork Mon Apr 27 07:58:54 2020
X-Patchwork-Submitter: Joyce Kong
X-Patchwork-Id: 69369
X-Patchwork-Delegate: thomas@monjalon.net
From: Joyce Kong <joyce.kong@arm.com>
To: thomas@monjalon.net, stephen@networkplumber.org, david.marchand@redhat.com,
 mb@smartsharesystems.com, jerinj@marvell.com, bruce.richardson@intel.com,
 ravi1.kumar@amd.com, rmody@marvell.com, shshaikh@marvell.com,
 xuanziyang2@huawei.com, cloud.wangxiaoyun@huawei.com, zhouguoyang@huawei.com,
 honnappa.nagarahalli@arm.com, gavin.hu@arm.com, phil.yang@arm.com
Cc: nd@arm.com, dev@dpdk.org
Date: Mon, 27 Apr 2020 15:58:54 +0800
Message-Id: <20200427075856.12098-5-joyce.kong@arm.com>
In-Reply-To: <20200427075856.12098-1-joyce.kong@arm.com>
References: <20200427075856.12098-1-joyce.kong@arm.com>
In-Reply-To: <1571125801-45773-1-git-send-email-joyce.kong@arm.com>
References: <1571125801-45773-1-git-send-email-joyce.kong@arm.com>
Subject: [dpdk-dev] [PATCH v10 4/6] net/bnx2x: use common rte bit operation APIs instead

Remove the driver's own bit operation APIs and use the common rte bit
operation APIs instead; this largely reduces code duplication.
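The conversion is essentially a one-to-one mapping from the driver-local
helpers removed below onto the relaxed 32-bit helpers in <rte_bitops.h>,
with the flag words shrinking from unsigned long to uint32_t. A minimal
sketch of the correspondence follows; EXAMPLE_FLAG_BIT is a made-up bit
index used purely for illustration, not a driver constant:

#include <stdint.h>
#include <rte_bitops.h>

/* Illustration only: a made-up bit index, not a driver enum value. */
#define EXAMPLE_FLAG_BIT 3

static void bit_flags_sketch(void)
{
	uint32_t flags = 0;	/* was: unsigned long */

	/* was: bnx2x_set_bit(EXAMPLE_FLAG_BIT, &flags); */
	rte_bit_relaxed_set32(EXAMPLE_FLAG_BIT, &flags);

	/*
	 * was: bnx2x_test_bit(EXAMPLE_FLAG_BIT, &flags); note the old
	 * helper wrapped the load in mb(), while the relaxed variant
	 * carries no memory barriers.
	 */
	if (rte_bit_relaxed_get32(EXAMPLE_FLAG_BIT, &flags))
		/* was: bnx2x_clear_bit(EXAMPLE_FLAG_BIT, &flags); */
		rte_bit_relaxed_clear32(EXAMPLE_FLAG_BIT, &flags);

	/* was: bnx2x_test_and_clear_bit(EXAMPLE_FLAG_BIT, &flags); */
	uint32_t was_set =
		rte_bit_relaxed_test_and_clear32(EXAMPLE_FLAG_BIT, &flags);
	(void)was_set;	/* non-zero if the bit had been set */
}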
Signed-off-by: Joyce Kong Reviewed-by: Gavin Hu --- drivers/net/bnx2x/bnx2x.c | 271 +++++++++++++++++------------------ drivers/net/bnx2x/bnx2x.h | 10 +- drivers/net/bnx2x/ecore_sp.c | 68 ++++----- drivers/net/bnx2x/ecore_sp.h | 106 +++++++------- 4 files changed, 221 insertions(+), 234 deletions(-) diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c index ff7646b25..8eb6d609b 100644 --- a/drivers/net/bnx2x/bnx2x.c +++ b/drivers/net/bnx2x/bnx2x.c @@ -25,6 +25,7 @@ #include #include #include +#include #include #define BNX2X_PMD_VER_PREFIX "BNX2X PMD" @@ -129,32 +130,6 @@ static void bnx2x_ack_sb(struct bnx2x_softc *sc, uint8_t igu_sb_id, uint8_t storm, uint16_t index, uint8_t op, uint8_t update); -int bnx2x_test_bit(int nr, volatile unsigned long *addr) -{ - int res; - - mb(); - res = ((*addr) & (1UL << nr)) != 0; - mb(); - return res; -} - -void bnx2x_set_bit(unsigned int nr, volatile unsigned long *addr) -{ - __sync_fetch_and_or(addr, (1UL << nr)); -} - -void bnx2x_clear_bit(int nr, volatile unsigned long *addr) -{ - __sync_fetch_and_and(addr, ~(1UL << nr)); -} - -int bnx2x_test_and_clear_bit(int nr, volatile unsigned long *addr) -{ - unsigned long mask = (1UL << nr); - return __sync_fetch_and_and(addr, ~mask) & mask; -} - int bnx2x_cmpxchg(volatile int *addr, int old, int new) { return __sync_val_compare_and_swap(addr, old, new); @@ -1434,16 +1409,16 @@ static int bnx2x_del_all_macs(struct bnx2x_softc *sc, struct ecore_vlan_mac_obj *mac_obj, int mac_type, uint8_t wait_for_comp) { - unsigned long ramrod_flags = 0, vlan_mac_flags = 0; + uint32_t ramrod_flags = 0, vlan_mac_flags = 0; int rc; /* wait for completion of requested */ if (wait_for_comp) { - bnx2x_set_bit(RAMROD_COMP_WAIT, &ramrod_flags); + rte_bit_relaxed_set32(RAMROD_COMP_WAIT, &ramrod_flags); } /* Set the mac type of addresses we want to clear */ - bnx2x_set_bit(mac_type, &vlan_mac_flags); + rte_bit_relaxed_set32(mac_type, &vlan_mac_flags); rc = mac_obj->delete_all(sc, mac_obj, &vlan_mac_flags, &ramrod_flags); if (rc < 0) @@ -1454,8 +1429,7 @@ bnx2x_del_all_macs(struct bnx2x_softc *sc, struct ecore_vlan_mac_obj *mac_obj, static int bnx2x_fill_accept_flags(struct bnx2x_softc *sc, uint32_t rx_mode, - unsigned long *rx_accept_flags, - unsigned long *tx_accept_flags) + uint32_t *rx_accept_flags, uint32_t *tx_accept_flags) { /* Clear the flags first */ *rx_accept_flags = 0; @@ -1470,26 +1444,28 @@ bnx2x_fill_accept_flags(struct bnx2x_softc *sc, uint32_t rx_mode, break; case BNX2X_RX_MODE_NORMAL: - bnx2x_set_bit(ECORE_ACCEPT_UNICAST, rx_accept_flags); - bnx2x_set_bit(ECORE_ACCEPT_MULTICAST, rx_accept_flags); - bnx2x_set_bit(ECORE_ACCEPT_BROADCAST, rx_accept_flags); + rte_bit_relaxed_set32(ECORE_ACCEPT_UNICAST, rx_accept_flags); + rte_bit_relaxed_set32(ECORE_ACCEPT_MULTICAST, rx_accept_flags); + rte_bit_relaxed_set32(ECORE_ACCEPT_BROADCAST, rx_accept_flags); /* internal switching mode */ - bnx2x_set_bit(ECORE_ACCEPT_UNICAST, tx_accept_flags); - bnx2x_set_bit(ECORE_ACCEPT_MULTICAST, tx_accept_flags); - bnx2x_set_bit(ECORE_ACCEPT_BROADCAST, tx_accept_flags); + rte_bit_relaxed_set32(ECORE_ACCEPT_UNICAST, tx_accept_flags); + rte_bit_relaxed_set32(ECORE_ACCEPT_MULTICAST, tx_accept_flags); + rte_bit_relaxed_set32(ECORE_ACCEPT_BROADCAST, tx_accept_flags); break; case BNX2X_RX_MODE_ALLMULTI: - bnx2x_set_bit(ECORE_ACCEPT_UNICAST, rx_accept_flags); - bnx2x_set_bit(ECORE_ACCEPT_ALL_MULTICAST, rx_accept_flags); - bnx2x_set_bit(ECORE_ACCEPT_BROADCAST, rx_accept_flags); + rte_bit_relaxed_set32(ECORE_ACCEPT_UNICAST, rx_accept_flags); 
+ rte_bit_relaxed_set32(ECORE_ACCEPT_ALL_MULTICAST, + rx_accept_flags); + rte_bit_relaxed_set32(ECORE_ACCEPT_BROADCAST, rx_accept_flags); /* internal switching mode */ - bnx2x_set_bit(ECORE_ACCEPT_UNICAST, tx_accept_flags); - bnx2x_set_bit(ECORE_ACCEPT_ALL_MULTICAST, tx_accept_flags); - bnx2x_set_bit(ECORE_ACCEPT_BROADCAST, tx_accept_flags); + rte_bit_relaxed_set32(ECORE_ACCEPT_UNICAST, tx_accept_flags); + rte_bit_relaxed_set32(ECORE_ACCEPT_ALL_MULTICAST, + tx_accept_flags); + rte_bit_relaxed_set32(ECORE_ACCEPT_BROADCAST, tx_accept_flags); break; @@ -1500,19 +1476,23 @@ bnx2x_fill_accept_flags(struct bnx2x_softc *sc, uint32_t rx_mode, * should receive matched and unmatched (in resolution of port) * unicast packets. */ - bnx2x_set_bit(ECORE_ACCEPT_UNMATCHED, rx_accept_flags); - bnx2x_set_bit(ECORE_ACCEPT_UNICAST, rx_accept_flags); - bnx2x_set_bit(ECORE_ACCEPT_ALL_MULTICAST, rx_accept_flags); - bnx2x_set_bit(ECORE_ACCEPT_BROADCAST, rx_accept_flags); + rte_bit_relaxed_set32(ECORE_ACCEPT_UNMATCHED, rx_accept_flags); + rte_bit_relaxed_set32(ECORE_ACCEPT_UNICAST, rx_accept_flags); + rte_bit_relaxed_set32(ECORE_ACCEPT_ALL_MULTICAST, + rx_accept_flags); + rte_bit_relaxed_set32(ECORE_ACCEPT_BROADCAST, rx_accept_flags); /* internal switching mode */ - bnx2x_set_bit(ECORE_ACCEPT_ALL_MULTICAST, tx_accept_flags); - bnx2x_set_bit(ECORE_ACCEPT_BROADCAST, tx_accept_flags); + rte_bit_relaxed_set32(ECORE_ACCEPT_ALL_MULTICAST, + tx_accept_flags); + rte_bit_relaxed_set32(ECORE_ACCEPT_BROADCAST, tx_accept_flags); if (IS_MF_SI(sc)) { - bnx2x_set_bit(ECORE_ACCEPT_ALL_UNICAST, tx_accept_flags); + rte_bit_relaxed_set32(ECORE_ACCEPT_ALL_UNICAST, + tx_accept_flags); } else { - bnx2x_set_bit(ECORE_ACCEPT_UNICAST, tx_accept_flags); + rte_bit_relaxed_set32(ECORE_ACCEPT_UNICAST, + tx_accept_flags); } break; @@ -1524,8 +1504,8 @@ bnx2x_fill_accept_flags(struct bnx2x_softc *sc, uint32_t rx_mode, /* Set ACCEPT_ANY_VLAN as we do not enable filtering by VLAN */ if (rx_mode != BNX2X_RX_MODE_NONE) { - bnx2x_set_bit(ECORE_ACCEPT_ANY_VLAN, rx_accept_flags); - bnx2x_set_bit(ECORE_ACCEPT_ANY_VLAN, tx_accept_flags); + rte_bit_relaxed_set32(ECORE_ACCEPT_ANY_VLAN, rx_accept_flags); + rte_bit_relaxed_set32(ECORE_ACCEPT_ANY_VLAN, tx_accept_flags); } return 0; @@ -1554,7 +1534,7 @@ bnx2x_set_q_rx_mode(struct bnx2x_softc *sc, uint8_t cl_id, ramrod_param.rdata = BNX2X_SP(sc, rx_mode_rdata); ramrod_param.rdata_mapping = (rte_iova_t)BNX2X_SP_MAPPING(sc, rx_mode_rdata), - bnx2x_set_bit(ECORE_FILTER_RX_MODE_PENDING, &sc->sp_state); + rte_bit_relaxed_set32(ECORE_FILTER_RX_MODE_PENDING, &sc->sp_state); ramrod_param.ramrod_flags = ramrod_flags; ramrod_param.rx_mode_flags = rx_mode_flags; @@ -1573,8 +1553,8 @@ bnx2x_set_q_rx_mode(struct bnx2x_softc *sc, uint8_t cl_id, int bnx2x_set_storm_rx_mode(struct bnx2x_softc *sc) { - unsigned long rx_mode_flags = 0, ramrod_flags = 0; - unsigned long rx_accept_flags = 0, tx_accept_flags = 0; + uint32_t rx_mode_flags = 0, ramrod_flags = 0; + uint32_t rx_accept_flags = 0, tx_accept_flags = 0; int rc; rc = bnx2x_fill_accept_flags(sc, sc->rx_mode, &rx_accept_flags, @@ -1583,9 +1563,9 @@ int bnx2x_set_storm_rx_mode(struct bnx2x_softc *sc) return rc; } - bnx2x_set_bit(RAMROD_RX, &ramrod_flags); - bnx2x_set_bit(RAMROD_TX, &ramrod_flags); - bnx2x_set_bit(RAMROD_COMP_WAIT, &ramrod_flags); + rte_bit_relaxed_set32(RAMROD_RX, &ramrod_flags); + rte_bit_relaxed_set32(RAMROD_TX, &ramrod_flags); + rte_bit_relaxed_set32(RAMROD_COMP_WAIT, &ramrod_flags); return bnx2x_set_q_rx_mode(sc, sc->fp[0].cl_id, rx_mode_flags, 
rx_accept_flags, tx_accept_flags, @@ -1710,7 +1690,8 @@ static int bnx2x_func_wait_started(struct bnx2x_softc *sc) "Forcing STARTED-->TX_STOPPED-->STARTED"); func_params.f_obj = &sc->func_obj; - bnx2x_set_bit(RAMROD_DRV_CLR_ONLY, &func_params.ramrod_flags); + rte_bit_relaxed_set32(RAMROD_DRV_CLR_ONLY, + &func_params.ramrod_flags); /* STARTED-->TX_STOPPED */ func_params.cmd = ECORE_F_CMD_TX_STOP; @@ -1734,7 +1715,7 @@ static int bnx2x_stop_queue(struct bnx2x_softc *sc, int index) q_params.q_obj = &sc->sp_objs[fp->index].q_obj; /* We want to wait for completion in this context */ - bnx2x_set_bit(RAMROD_COMP_WAIT, &q_params.ramrod_flags); + rte_bit_relaxed_set32(RAMROD_COMP_WAIT, &q_params.ramrod_flags); /* Stop the primary connection: */ @@ -1763,26 +1744,25 @@ static int bnx2x_stop_queue(struct bnx2x_softc *sc, int index) } /* wait for the outstanding SP commands */ -static uint8_t bnx2x_wait_sp_comp(struct bnx2x_softc *sc, unsigned long mask) +static uint8_t bnx2x_wait_sp_comp(struct bnx2x_softc *sc, uint32_t mask) { - unsigned long tmp; + uint32_t tmp; int tout = 5000; /* wait for 5 secs tops */ while (tout--) { mb(); - if (!(atomic_load_acq_long(&sc->sp_state) & mask)) { + if (!(atomic_load_acq_int(&sc->sp_state) & mask)) return TRUE; - } DELAY(1000); } mb(); - tmp = atomic_load_acq_long(&sc->sp_state); + tmp = atomic_load_acq_int(&sc->sp_state); if (tmp & mask) { PMD_DRV_LOG(INFO, sc, "Filtering completion timed out: " - "sp_state 0x%lx, mask 0x%lx", tmp, mask); + "sp_state 0x%x, mask 0x%x", tmp, mask); return FALSE; } @@ -1795,7 +1775,7 @@ static int bnx2x_func_stop(struct bnx2x_softc *sc) int rc; /* prepare parameters for function state transitions */ - bnx2x_set_bit(RAMROD_COMP_WAIT, &func_params.ramrod_flags); + rte_bit_relaxed_set32(RAMROD_COMP_WAIT, &func_params.ramrod_flags); func_params.f_obj = &sc->func_obj; func_params.cmd = ECORE_F_CMD_STOP; @@ -1809,7 +1789,8 @@ static int bnx2x_func_stop(struct bnx2x_softc *sc) if (rc) { PMD_DRV_LOG(NOTICE, sc, "FUNC_STOP ramrod failed. " "Running a dry transaction"); - bnx2x_set_bit(RAMROD_DRV_CLR_ONLY, &func_params.ramrod_flags); + rte_bit_relaxed_set32(RAMROD_DRV_CLR_ONLY, + &func_params.ramrod_flags); return ecore_func_state_change(sc, &func_params); } @@ -1821,7 +1802,7 @@ static int bnx2x_reset_hw(struct bnx2x_softc *sc, uint32_t load_code) struct ecore_func_state_params func_params = { NULL }; /* Prepare parameters for function state transitions */ - bnx2x_set_bit(RAMROD_COMP_WAIT, &func_params.ramrod_flags); + rte_bit_relaxed_set32(RAMROD_COMP_WAIT, &func_params.ramrod_flags); func_params.f_obj = &sc->func_obj; func_params.cmd = ECORE_F_CMD_HW_RESET; @@ -1878,11 +1859,11 @@ bnx2x_chip_cleanup(struct bnx2x_softc *sc, uint32_t unload_mode, uint8_t keep_li * a race between the completion code and this code. */ - if (bnx2x_test_bit(ECORE_FILTER_RX_MODE_PENDING, &sc->sp_state)) { - bnx2x_set_bit(ECORE_FILTER_RX_MODE_SCHED, &sc->sp_state); - } else { + if (rte_bit_relaxed_get32(ECORE_FILTER_RX_MODE_PENDING, &sc->sp_state)) + rte_bit_relaxed_set32(ECORE_FILTER_RX_MODE_SCHED, + &sc->sp_state); + else bnx2x_set_storm_rx_mode(sc); - } /* Clean up multicast configuration */ rparam.mcast_obj = &sc->mcast_obj; @@ -1922,9 +1903,8 @@ bnx2x_chip_cleanup(struct bnx2x_softc *sc, uint32_t unload_mode, uint8_t keep_li * If SP settings didn't get completed so far - something * very wrong has happen. 
*/ - if (!bnx2x_wait_sp_comp(sc, ~0x0UL)) { + if (!bnx2x_wait_sp_comp(sc, ~0x0U)) PMD_DRV_LOG(NOTICE, sc, "Common slow path ramrods got stuck!"); - } unload_error: @@ -1964,7 +1944,7 @@ static void bnx2x_disable_close_the_gate(struct bnx2x_softc *sc) */ static void bnx2x_squeeze_objects(struct bnx2x_softc *sc) { - unsigned long ramrod_flags = 0, vlan_mac_flags = 0; + uint32_t ramrod_flags = 0, vlan_mac_flags = 0; struct ecore_mcast_ramrod_params rparam = { NULL }; struct ecore_vlan_mac_obj *mac_obj = &sc->sp_objs->mac_obj; int rc; @@ -1972,12 +1952,12 @@ static void bnx2x_squeeze_objects(struct bnx2x_softc *sc) /* Cleanup MACs' object first... */ /* Wait for completion of requested */ - bnx2x_set_bit(RAMROD_COMP_WAIT, &ramrod_flags); + rte_bit_relaxed_set32(RAMROD_COMP_WAIT, &ramrod_flags); /* Perform a dry cleanup */ - bnx2x_set_bit(RAMROD_DRV_CLR_ONLY, &ramrod_flags); + rte_bit_relaxed_set32(RAMROD_DRV_CLR_ONLY, &ramrod_flags); /* Clean ETH primary MAC */ - bnx2x_set_bit(ECORE_ETH_MAC, &vlan_mac_flags); + rte_bit_relaxed_set32(ECORE_ETH_MAC, &vlan_mac_flags); rc = mac_obj->delete_all(sc, &sc->sp_objs->mac_obj, &vlan_mac_flags, &ramrod_flags); if (rc != 0) { @@ -1986,7 +1966,7 @@ static void bnx2x_squeeze_objects(struct bnx2x_softc *sc) /* Cleanup UC list */ vlan_mac_flags = 0; - bnx2x_set_bit(ECORE_UC_LIST_MAC, &vlan_mac_flags); + rte_bit_relaxed_set32(ECORE_UC_LIST_MAC, &vlan_mac_flags); rc = mac_obj->delete_all(sc, mac_obj, &vlan_mac_flags, &ramrod_flags); if (rc != 0) { PMD_DRV_LOG(NOTICE, sc, @@ -1996,7 +1976,7 @@ static void bnx2x_squeeze_objects(struct bnx2x_softc *sc) /* Now clean mcast object... */ rparam.mcast_obj = &sc->mcast_obj; - bnx2x_set_bit(RAMROD_DRV_CLR_ONLY, &rparam.ramrod_flags); + rte_bit_relaxed_set32(RAMROD_DRV_CLR_ONLY, &rparam.ramrod_flags); /* Add a DEL command... 
*/ rc = ecore_config_mcast(sc, &rparam, ECORE_MCAST_CMD_DEL); @@ -4310,13 +4290,13 @@ static void bnx2x_handle_mcast_eqe(struct bnx2x_softc *sc) static void bnx2x_handle_classification_eqe(struct bnx2x_softc *sc, union event_ring_elem *elem) { - unsigned long ramrod_flags = 0; + uint32_t ramrod_flags = 0; int rc = 0; uint32_t cid = elem->message.data.eth_event.echo & BNX2X_SWCID_MASK; struct ecore_vlan_mac_obj *vlan_mac_obj; /* always push next commands out, don't wait here */ - bnx2x_set_bit(RAMROD_CONT, &ramrod_flags); + rte_bit_relaxed_set32(RAMROD_CONT, &ramrod_flags); switch (le32toh(elem->message.data.eth_event.echo) >> BNX2X_SWCID_SHIFT) { case ECORE_FILTER_MAC_PENDING: @@ -4347,12 +4327,12 @@ bnx2x_handle_classification_eqe(struct bnx2x_softc *sc, union event_ring_elem *e static void bnx2x_handle_rx_mode_eqe(struct bnx2x_softc *sc) { - bnx2x_clear_bit(ECORE_FILTER_RX_MODE_PENDING, &sc->sp_state); + rte_bit_relaxed_clear32(ECORE_FILTER_RX_MODE_PENDING, &sc->sp_state); /* send rx_mode command again if was requested */ - if (bnx2x_test_and_clear_bit(ECORE_FILTER_RX_MODE_SCHED, &sc->sp_state)) { + if (rte_bit_relaxed_test_and_clear32(ECORE_FILTER_RX_MODE_SCHED, + &sc->sp_state)) bnx2x_set_storm_rx_mode(sc); - } } static void bnx2x_update_eq_prod(struct bnx2x_softc *sc, uint16_t prod) @@ -4721,7 +4701,7 @@ static int bnx2x_init_hw(struct bnx2x_softc *sc, uint32_t load_code) PMD_INIT_FUNC_TRACE(sc); /* prepare the parameters for function state transitions */ - bnx2x_set_bit(RAMROD_COMP_WAIT, &func_params.ramrod_flags); + rte_bit_relaxed_set32(RAMROD_COMP_WAIT, &func_params.ramrod_flags); func_params.f_obj = &sc->func_obj; func_params.cmd = ECORE_F_CMD_HW_INIT; @@ -4969,7 +4949,7 @@ static void bnx2x_init_eth_fp(struct bnx2x_softc *sc, int idx) { struct bnx2x_fastpath *fp = &sc->fp[idx]; uint32_t cids[ECORE_MULTI_TX_COS] = { 0 }; - unsigned long q_type = 0; + uint32_t q_type = 0; int cos; fp->sc = sc; @@ -5016,8 +4996,8 @@ static void bnx2x_init_eth_fp(struct bnx2x_softc *sc, int idx) bnx2x_update_fp_sb_idx(fp); /* Configure Queue State object */ - bnx2x_set_bit(ECORE_Q_TYPE_HAS_RX, &q_type); - bnx2x_set_bit(ECORE_Q_TYPE_HAS_TX, &q_type); + rte_bit_relaxed_set32(ECORE_Q_TYPE_HAS_RX, &q_type); + rte_bit_relaxed_set32(ECORE_Q_TYPE_HAS_TX, &q_type); ecore_init_queue_obj(sc, &sc->sp_objs[idx].q_obj, @@ -5831,7 +5811,7 @@ static int bnx2x_func_start(struct bnx2x_softc *sc) &func_params.params.start; /* Prepare parameters for function state transitions */ - bnx2x_set_bit(RAMROD_COMP_WAIT, &func_params.ramrod_flags); + rte_bit_relaxed_set32(RAMROD_COMP_WAIT, &func_params.ramrod_flags); func_params.f_obj = &sc->func_obj; func_params.cmd = ECORE_F_CMD_START; @@ -6407,11 +6387,11 @@ bnx2x_pf_q_prep_init(struct bnx2x_softc *sc, struct bnx2x_fastpath *fp, uint8_t cos; int cxt_index, cxt_offset; - bnx2x_set_bit(ECORE_Q_FLG_HC, &init_params->rx.flags); - bnx2x_set_bit(ECORE_Q_FLG_HC, &init_params->tx.flags); + rte_bit_relaxed_set32(ECORE_Q_FLG_HC, &init_params->rx.flags); + rte_bit_relaxed_set32(ECORE_Q_FLG_HC, &init_params->tx.flags); - bnx2x_set_bit(ECORE_Q_FLG_HC_EN, &init_params->rx.flags); - bnx2x_set_bit(ECORE_Q_FLG_HC_EN, &init_params->tx.flags); + rte_bit_relaxed_set32(ECORE_Q_FLG_HC_EN, &init_params->rx.flags); + rte_bit_relaxed_set32(ECORE_Q_FLG_HC_EN, &init_params->tx.flags); /* HC rate */ init_params->rx.hc_rate = @@ -6442,10 +6422,10 @@ bnx2x_pf_q_prep_init(struct bnx2x_softc *sc, struct bnx2x_fastpath *fp, static unsigned long bnx2x_get_common_flags(struct bnx2x_softc *sc, uint8_t 
zero_stats) { - unsigned long flags = 0; + uint32_t flags = 0; /* PF driver will always initialize the Queue to an ACTIVE state */ - bnx2x_set_bit(ECORE_Q_FLG_ACTIVE, &flags); + rte_bit_relaxed_set32(ECORE_Q_FLG_ACTIVE, &flags); /* * tx only connections collect statistics (on the same index as the @@ -6453,9 +6433,9 @@ bnx2x_get_common_flags(struct bnx2x_softc *sc, uint8_t zero_stats) * connection is initialized. */ - bnx2x_set_bit(ECORE_Q_FLG_STATS, &flags); + rte_bit_relaxed_set32(ECORE_Q_FLG_STATS, &flags); if (zero_stats) { - bnx2x_set_bit(ECORE_Q_FLG_ZERO_STATS, &flags); + rte_bit_relaxed_set32(ECORE_Q_FLG_ZERO_STATS, &flags); } /* @@ -6463,28 +6443,28 @@ bnx2x_get_common_flags(struct bnx2x_softc *sc, uint8_t zero_stats) * CoS-ness doesn't survive the loopback */ if (sc->flags & BNX2X_TX_SWITCHING) { - bnx2x_set_bit(ECORE_Q_FLG_TX_SWITCH, &flags); + rte_bit_relaxed_set32(ECORE_Q_FLG_TX_SWITCH, &flags); } - bnx2x_set_bit(ECORE_Q_FLG_PCSUM_ON_PKT, &flags); + rte_bit_relaxed_set32(ECORE_Q_FLG_PCSUM_ON_PKT, &flags); return flags; } static unsigned long bnx2x_get_q_flags(struct bnx2x_softc *sc, uint8_t leading) { - unsigned long flags = 0; + uint32_t flags = 0; if (IS_MF_SD(sc)) { - bnx2x_set_bit(ECORE_Q_FLG_OV, &flags); + rte_bit_relaxed_set32(ECORE_Q_FLG_OV, &flags); } if (leading) { - bnx2x_set_bit(ECORE_Q_FLG_LEADING_RSS, &flags); - bnx2x_set_bit(ECORE_Q_FLG_MCAST, &flags); + rte_bit_relaxed_set32(ECORE_Q_FLG_LEADING_RSS, &flags); + rte_bit_relaxed_set32(ECORE_Q_FLG_MCAST, &flags); } - bnx2x_set_bit(ECORE_Q_FLG_VLAN, &flags); + rte_bit_relaxed_set32(ECORE_Q_FLG_VLAN, &flags); /* merge with common flags */ return flags | bnx2x_get_common_flags(sc, TRUE); @@ -6605,7 +6585,7 @@ bnx2x_setup_queue(struct bnx2x_softc *sc, struct bnx2x_fastpath *fp, uint8_t lea q_params.q_obj = &BNX2X_SP_OBJ(sc, fp).q_obj; /* we want to wait for completion in this context */ - bnx2x_set_bit(RAMROD_COMP_WAIT, &q_params.ramrod_flags); + rte_bit_relaxed_set32(RAMROD_COMP_WAIT, &q_params.ramrod_flags); /* prepare the INIT parameters */ bnx2x_pf_q_prep_init(sc, fp, &q_params.params.init); @@ -6673,20 +6653,20 @@ bnx2x_config_rss_pf(struct bnx2x_softc *sc, struct ecore_rss_config_obj *rss_obj params.rss_obj = rss_obj; - bnx2x_set_bit(RAMROD_COMP_WAIT, ¶ms.ramrod_flags); + rte_bit_relaxed_set32(RAMROD_COMP_WAIT, ¶ms.ramrod_flags); - bnx2x_set_bit(ECORE_RSS_MODE_REGULAR, ¶ms.rss_flags); + rte_bit_relaxed_set32(ECORE_RSS_MODE_REGULAR, ¶ms.rss_flags); /* RSS configuration */ - bnx2x_set_bit(ECORE_RSS_IPV4, ¶ms.rss_flags); - bnx2x_set_bit(ECORE_RSS_IPV4_TCP, ¶ms.rss_flags); - bnx2x_set_bit(ECORE_RSS_IPV6, ¶ms.rss_flags); - bnx2x_set_bit(ECORE_RSS_IPV6_TCP, ¶ms.rss_flags); + rte_bit_relaxed_set32(ECORE_RSS_IPV4, ¶ms.rss_flags); + rte_bit_relaxed_set32(ECORE_RSS_IPV4_TCP, ¶ms.rss_flags); + rte_bit_relaxed_set32(ECORE_RSS_IPV6, ¶ms.rss_flags); + rte_bit_relaxed_set32(ECORE_RSS_IPV6_TCP, ¶ms.rss_flags); if (rss_obj->udp_rss_v4) { - bnx2x_set_bit(ECORE_RSS_IPV4_UDP, ¶ms.rss_flags); + rte_bit_relaxed_set32(ECORE_RSS_IPV4_UDP, ¶ms.rss_flags); } if (rss_obj->udp_rss_v6) { - bnx2x_set_bit(ECORE_RSS_IPV6_UDP, ¶ms.rss_flags); + rte_bit_relaxed_set32(ECORE_RSS_IPV6_UDP, ¶ms.rss_flags); } /* Hash bits */ @@ -6701,7 +6681,7 @@ bnx2x_config_rss_pf(struct bnx2x_softc *sc, struct ecore_rss_config_obj *rss_obj params.rss_key[i] = (uint32_t) rte_rand(); } - bnx2x_set_bit(ECORE_RSS_SET_SRCH, ¶ms.rss_flags); + rte_bit_relaxed_set32(ECORE_RSS_SET_SRCH, ¶ms.rss_flags); } if (IS_PF(sc)) @@ -6746,7 +6726,7 @@ static int bnx2x_init_rss_pf(struct 
bnx2x_softc *sc) static int bnx2x_set_mac_one(struct bnx2x_softc *sc, uint8_t * mac, struct ecore_vlan_mac_obj *obj, uint8_t set, int mac_type, - unsigned long *ramrod_flags) + uint32_t *ramrod_flags) { struct ecore_vlan_mac_ramrod_params ramrod_param; int rc; @@ -6758,11 +6738,12 @@ bnx2x_set_mac_one(struct bnx2x_softc *sc, uint8_t * mac, ramrod_param.ramrod_flags = *ramrod_flags; /* fill a user request section if needed */ - if (!bnx2x_test_bit(RAMROD_CONT, ramrod_flags)) { + if (!rte_bit_relaxed_get32(RAMROD_CONT, ramrod_flags)) { rte_memcpy(ramrod_param.user_req.u.mac.mac, mac, ETH_ALEN); - bnx2x_set_bit(mac_type, &ramrod_param.user_req.vlan_mac_flags); + rte_bit_relaxed_set32(mac_type, + &ramrod_param.user_req.vlan_mac_flags); /* Set the command: ADD or DEL */ ramrod_param.user_req.cmd = (set) ? ECORE_VLAN_MAC_ADD : @@ -6785,11 +6766,11 @@ bnx2x_set_mac_one(struct bnx2x_softc *sc, uint8_t * mac, static int bnx2x_set_eth_mac(struct bnx2x_softc *sc, uint8_t set) { - unsigned long ramrod_flags = 0; + uint32_t ramrod_flags = 0; PMD_DRV_LOG(DEBUG, sc, "Adding Ethernet MAC"); - bnx2x_set_bit(RAMROD_COMP_WAIT, &ramrod_flags); + rte_bit_relaxed_set32(RAMROD_COMP_WAIT, &ramrod_flags); /* Eth MAC is set on RSS leading client (fp[0]) */ return bnx2x_set_mac_one(sc, sc->link_params.mac_addr, @@ -6921,24 +6902,26 @@ bnx2x_fill_report_data(struct bnx2x_softc *sc, struct bnx2x_link_report_data *da /* Link is down */ if (!sc->link_vars.link_up || (sc->flags & BNX2X_MF_FUNC_DIS)) { - bnx2x_set_bit(BNX2X_LINK_REPORT_LINK_DOWN, + rte_bit_relaxed_set32(BNX2X_LINK_REPORT_LINK_DOWN, &data->link_report_flags); } /* Full DUPLEX */ if (sc->link_vars.duplex == DUPLEX_FULL) { - bnx2x_set_bit(BNX2X_LINK_REPORT_FULL_DUPLEX, + rte_bit_relaxed_set32(BNX2X_LINK_REPORT_FULL_DUPLEX, &data->link_report_flags); } /* Rx Flow Control is ON */ if (sc->link_vars.flow_ctrl & ELINK_FLOW_CTRL_RX) { - bnx2x_set_bit(BNX2X_LINK_REPORT_RX_FC_ON, &data->link_report_flags); + rte_bit_relaxed_set32(BNX2X_LINK_REPORT_RX_FC_ON, + &data->link_report_flags); } /* Tx Flow Control is ON */ if (sc->link_vars.flow_ctrl & ELINK_FLOW_CTRL_TX) { - bnx2x_set_bit(BNX2X_LINK_REPORT_TX_FC_ON, &data->link_report_flags); + rte_bit_relaxed_set32(BNX2X_LINK_REPORT_TX_FC_ON, + &data->link_report_flags); } } @@ -6957,14 +6940,14 @@ static void bnx2x_link_report_locked(struct bnx2x_softc *sc) /* Don't report link down or exactly the same link status twice */ if (!memcmp(&cur_data, &sc->last_reported_link, sizeof(cur_data)) || - (bnx2x_test_bit(BNX2X_LINK_REPORT_LINK_DOWN, + (rte_bit_relaxed_get32(BNX2X_LINK_REPORT_LINK_DOWN, &sc->last_reported_link.link_report_flags) && - bnx2x_test_bit(BNX2X_LINK_REPORT_LINK_DOWN, + rte_bit_relaxed_get32(BNX2X_LINK_REPORT_LINK_DOWN, &cur_data.link_report_flags))) { return; } - ELINK_DEBUG_P2(sc, "Change in link status : cur_data = %lx, last_reported_link = %lx", + ELINK_DEBUG_P2(sc, "Change in link status : cur_data = %x, last_reported_link = %x", cur_data.link_report_flags, sc->last_reported_link.link_report_flags); @@ -6974,15 +6957,16 @@ static void bnx2x_link_report_locked(struct bnx2x_softc *sc) /* report new link params and remember the state for the next time */ rte_memcpy(&sc->last_reported_link, &cur_data, sizeof(cur_data)); - if (bnx2x_test_bit(BNX2X_LINK_REPORT_LINK_DOWN, + if (rte_bit_relaxed_get32(BNX2X_LINK_REPORT_LINK_DOWN, &cur_data.link_report_flags)) { ELINK_DEBUG_P0(sc, "NIC Link is Down"); } else { __rte_unused const char *duplex; __rte_unused const char *flow; - if 
(bnx2x_test_and_clear_bit(BNX2X_LINK_REPORT_FULL_DUPLEX, - &cur_data.link_report_flags)) { + if (rte_bit_relaxed_test_and_clear32 + (BNX2X_LINK_REPORT_FULL_DUPLEX, + &cur_data.link_report_flags)) { duplex = "full"; ELINK_DEBUG_P0(sc, "link set to full duplex"); } else { @@ -6996,20 +6980,25 @@ static void bnx2x_link_report_locked(struct bnx2x_softc *sc) * enabled. */ if (cur_data.link_report_flags) { - if (bnx2x_test_bit(BNX2X_LINK_REPORT_RX_FC_ON, + if (rte_bit_relaxed_get32 + (BNX2X_LINK_REPORT_RX_FC_ON, &cur_data.link_report_flags) && - bnx2x_test_bit(BNX2X_LINK_REPORT_TX_FC_ON, + rte_bit_relaxed_get32(BNX2X_LINK_REPORT_TX_FC_ON, &cur_data.link_report_flags)) { flow = "ON - receive & transmit"; - } else if (bnx2x_test_bit(BNX2X_LINK_REPORT_RX_FC_ON, - &cur_data.link_report_flags) && - !bnx2x_test_bit(BNX2X_LINK_REPORT_TX_FC_ON, + } else if (rte_bit_relaxed_get32 + (BNX2X_LINK_REPORT_RX_FC_ON, + &cur_data.link_report_flags) && + !rte_bit_relaxed_get32 + (BNX2X_LINK_REPORT_TX_FC_ON, &cur_data.link_report_flags)) { flow = "ON - receive"; - } else if (!bnx2x_test_bit(BNX2X_LINK_REPORT_RX_FC_ON, + } else if (!rte_bit_relaxed_get32 + (BNX2X_LINK_REPORT_RX_FC_ON, &cur_data.link_report_flags) && - bnx2x_test_bit(BNX2X_LINK_REPORT_TX_FC_ON, - &cur_data.link_report_flags)) { + rte_bit_relaxed_get32 + (BNX2X_LINK_REPORT_TX_FC_ON, + &cur_data.link_report_flags)) { flow = "ON - transmit"; } else { flow = "none"; /* possible? */ @@ -7429,7 +7418,7 @@ int bnx2x_nic_load(struct bnx2x_softc *sc) bnx2x_set_rx_mode(sc); /* wait for all pending SP commands to complete */ - if (IS_PF(sc) && !bnx2x_wait_sp_comp(sc, ~0x0UL)) { + if (IS_PF(sc) && !bnx2x_wait_sp_comp(sc, ~0x0U)) { PMD_DRV_LOG(NOTICE, sc, "Timeout waiting for all SPs to complete!"); bnx2x_periodic_stop(sc); bnx2x_nic_unload(sc, UNLOAD_CLOSE, FALSE); diff --git a/drivers/net/bnx2x/bnx2x.h b/drivers/net/bnx2x/bnx2x.h index 3cadb5d82..69cc1430a 100644 --- a/drivers/net/bnx2x/bnx2x.h +++ b/drivers/net/bnx2x/bnx2x.h @@ -1000,8 +1000,8 @@ struct bnx2x_sp_objs { * link parameters twice. 
*/ struct bnx2x_link_report_data { - uint16_t line_speed; /* Effective line speed */ - unsigned long link_report_flags; /* BNX2X_LINK_REPORT_XXX flags */ + uint16_t line_speed; /* Effective line speed */ + uint32_t link_report_flags; /* BNX2X_LINK_REPORT_XXX flags */ }; enum { @@ -1232,7 +1232,7 @@ struct bnx2x_softc { /* slow path */ struct bnx2x_dma sp_dma; struct bnx2x_slowpath *sp; - unsigned long sp_state; + uint32_t sp_state; /* slow path queue */ struct bnx2x_dma spq_dma; @@ -1816,10 +1816,6 @@ static const uint32_t dmae_reg_go_c[] = { #define PCI_PM_D0 1 #define PCI_PM_D3hot 2 -int bnx2x_test_bit(int nr, volatile unsigned long * addr); -void bnx2x_set_bit(unsigned int nr, volatile unsigned long * addr); -void bnx2x_clear_bit(int nr, volatile unsigned long * addr); -int bnx2x_test_and_clear_bit(int nr, volatile unsigned long * addr); int bnx2x_cmpxchg(volatile int *addr, int old, int new); int bnx2x_dma_alloc(struct bnx2x_softc *sc, size_t size, diff --git a/drivers/net/bnx2x/ecore_sp.c b/drivers/net/bnx2x/ecore_sp.c index 00c33a317..61f99c640 100644 --- a/drivers/net/bnx2x/ecore_sp.c +++ b/drivers/net/bnx2x/ecore_sp.c @@ -161,7 +161,7 @@ static inline void ecore_exe_queue_reset_pending(struct bnx2x_softc *sc, */ static int ecore_exe_queue_step(struct bnx2x_softc *sc, struct ecore_exe_queue_obj *o, - unsigned long *ramrod_flags) + uint32_t *ramrod_flags) { struct ecore_exeq_elem *elem, spacer; int cur_len = 0, rc; @@ -282,7 +282,7 @@ static void ecore_raw_set_pending(struct ecore_raw_obj *o) * */ static int ecore_state_wait(struct bnx2x_softc *sc, int state, - unsigned long *pstate) + uint32_t *pstate) { /* can take a while if any port is running */ int cnt = 5000; @@ -396,9 +396,9 @@ static void __ecore_vlan_mac_h_exec_pending(struct bnx2x_softc *sc, struct ecore_vlan_mac_obj *o) { int rc; - unsigned long ramrod_flags = o->saved_ramrod_flags; + uint32_t ramrod_flags = o->saved_ramrod_flags; - ECORE_MSG(sc, "vlan_mac_lock execute pending command with ramrod flags %lu", + ECORE_MSG(sc, "vlan_mac_lock execute pending command with ramrod flags %u", ramrod_flags); o->head_exe_request = FALSE; o->saved_ramrod_flags = 0; @@ -425,11 +425,11 @@ static void __ecore_vlan_mac_h_exec_pending(struct bnx2x_softc *sc, */ static void __ecore_vlan_mac_h_pend(struct bnx2x_softc *sc __rte_unused, struct ecore_vlan_mac_obj *o, - unsigned long ramrod_flags) + uint32_t ramrod_flags) { o->head_exe_request = TRUE; o->saved_ramrod_flags = ramrod_flags; - ECORE_MSG(sc, "Placing pending execution with ramrod flags %lu", + ECORE_MSG(sc, "Placing pending execution with ramrod flags %u", ramrod_flags); } @@ -804,7 +804,7 @@ static void ecore_set_one_mac_e2(struct bnx2x_softc *sc, int rule_cnt = rule_idx + 1, cmd = elem->cmd_data.vlan_mac.cmd; union eth_classify_rule_cmd *rule_entry = &data->rules[rule_idx]; bool add = (cmd == ECORE_VLAN_MAC_ADD) ? 
TRUE : FALSE; - unsigned long *vlan_mac_flags = &elem->cmd_data.vlan_mac.vlan_mac_flags; + uint32_t *vlan_mac_flags = &elem->cmd_data.vlan_mac.vlan_mac_flags; uint8_t *mac = elem->cmd_data.vlan_mac.u.mac.mac; /* Set LLH CAM entry: currently only iSCSI and ETH macs are @@ -1326,7 +1326,7 @@ static int ecore_wait_vlan_mac(struct bnx2x_softc *sc, static int __ecore_vlan_mac_execute_step(struct bnx2x_softc *sc, struct ecore_vlan_mac_obj *o, - unsigned long *ramrod_flags) + uint32_t *ramrod_flags) { int rc = ECORE_SUCCESS; @@ -1362,7 +1362,7 @@ static int __ecore_vlan_mac_execute_step(struct bnx2x_softc *sc, static int ecore_complete_vlan_mac(struct bnx2x_softc *sc, struct ecore_vlan_mac_obj *o, union event_ring_elem *cqe, - unsigned long *ramrod_flags) + uint32_t *ramrod_flags) { struct ecore_raw_obj *r = &o->raw; int rc; @@ -1518,7 +1518,7 @@ static int ecore_vlan_mac_get_registry_elem(struct bnx2x_softc *sc, static int ecore_execute_vlan_mac(struct bnx2x_softc *sc, union ecore_qable_obj *qo, ecore_list_t * exe_chunk, - unsigned long *ramrod_flags) + uint32_t *ramrod_flags) { struct ecore_exeq_elem *elem; struct ecore_vlan_mac_obj *o = &qo->vlan_mac, *cam_obj; @@ -1678,7 +1678,7 @@ int ecore_config_vlan_mac(struct bnx2x_softc *sc, { int rc = ECORE_SUCCESS; struct ecore_vlan_mac_obj *o = p->vlan_mac_obj; - unsigned long *ramrod_flags = &p->ramrod_flags; + uint32_t *ramrod_flags = &p->ramrod_flags; int cont = ECORE_TEST_BIT(RAMROD_CONT, ramrod_flags); struct ecore_raw_obj *raw = &o->raw; @@ -1758,8 +1758,8 @@ int ecore_config_vlan_mac(struct bnx2x_softc *sc, */ static int ecore_vlan_mac_del_all(struct bnx2x_softc *sc, struct ecore_vlan_mac_obj *o, - unsigned long *vlan_mac_flags, - unsigned long *ramrod_flags) + uint32_t *vlan_mac_flags, + uint32_t *ramrod_flags) { struct ecore_vlan_mac_registry_elem *pos = NULL; int rc = 0, read_lock; @@ -1836,7 +1836,7 @@ static void ecore_init_raw_obj(struct ecore_raw_obj *raw, uint8_t cl_id, uint32_t cid, uint8_t func_id, void *rdata, ecore_dma_addr_t rdata_mapping, int state, - unsigned long *pstate, ecore_obj_type type) + uint32_t *pstate, ecore_obj_type type) { raw->func_id = func_id; raw->cid = cid; @@ -1856,7 +1856,7 @@ static void ecore_init_vlan_mac_common(struct ecore_vlan_mac_obj *o, uint8_t cl_id, uint32_t cid, uint8_t func_id, void *rdata, ecore_dma_addr_t rdata_mapping, - int state, unsigned long *pstate, + int state, uint32_t *pstate, ecore_obj_type type, struct ecore_credit_pool_obj *macs_pool, struct ecore_credit_pool_obj @@ -1883,7 +1883,7 @@ void ecore_init_mac_obj(struct bnx2x_softc *sc, struct ecore_vlan_mac_obj *mac_obj, uint8_t cl_id, uint32_t cid, uint8_t func_id, void *rdata, ecore_dma_addr_t rdata_mapping, int state, - unsigned long *pstate, ecore_obj_type type, + uint32_t *pstate, ecore_obj_type type, struct ecore_credit_pool_obj *macs_pool) { union ecore_qable_obj *qable_obj = (union ecore_qable_obj *)mac_obj; @@ -2034,8 +2034,8 @@ static void ecore_rx_mode_set_rdata_hdr_e2(uint32_t cid, struct eth_classify_hea hdr->rule_cnt = rule_cnt; } -static void ecore_rx_mode_set_cmd_state_e2(unsigned long *accept_flags, struct eth_filter_rules_cmd - *cmd, int clear_accept_all) +static void ecore_rx_mode_set_cmd_state_e2(uint32_t *accept_flags, + struct eth_filter_rules_cmd *cmd, int clear_accept_all) { uint16_t state; @@ -2157,7 +2157,7 @@ static int ecore_set_rx_mode_e2(struct bnx2x_softc *sc, ecore_rx_mode_set_rdata_hdr_e2(p->cid, &data->header, rule_idx); ECORE_MSG - (sc, "About to configure %d rules, rx_accept_flags 0x%lx, 
tx_accept_flags 0x%lx", + (sc, "About to configure %d rules, rx_accept_flags 0x%x, tx_accept_flags 0x%x", data->header.rule_cnt, p->rx_accept_flags, p->tx_accept_flags); /* No need for an explicit memory barrier here as long we would @@ -3132,7 +3132,7 @@ void ecore_init_mcast_obj(struct bnx2x_softc *sc, uint8_t mcast_cl_id, uint32_t mcast_cid, uint8_t func_id, uint8_t engine_id, void *rdata, ecore_dma_addr_t rdata_mapping, int state, - unsigned long *pstate, ecore_obj_type type) + uint32_t *pstate, ecore_obj_type type) { ECORE_MEMSET(mcast_obj, 0, sizeof(*mcast_obj)); @@ -3598,7 +3598,7 @@ void ecore_init_rss_config_obj(struct bnx2x_softc *sc __rte_unused, uint8_t cl_id, uint32_t cid, uint8_t func_id, uint8_t engine_id, void *rdata, ecore_dma_addr_t rdata_mapping, - int state, unsigned long *pstate, + int state, uint32_t *pstate, ecore_obj_type type) { ecore_init_raw_obj(&rss_obj->raw, cl_id, cid, func_id, rdata, @@ -3627,7 +3627,7 @@ int ecore_queue_state_change(struct bnx2x_softc *sc, { struct ecore_queue_sp_obj *o = params->q_obj; int rc, pending_bit; - unsigned long *pending = &o->pending; + uint32_t *pending = &o->pending; /* Check that the requested transition is legal */ rc = o->check_transition(sc, o, params); @@ -3638,9 +3638,9 @@ int ecore_queue_state_change(struct bnx2x_softc *sc, } /* Set "pending" bit */ - ECORE_MSG(sc, "pending bit was=%lx", o->pending); + ECORE_MSG(sc, "pending bit was=%x", o->pending); pending_bit = o->set_pending(o, params); - ECORE_MSG(sc, "pending bit now=%lx", o->pending); + ECORE_MSG(sc, "pending bit now=%x", o->pending); /* Don't send a command if only driver cleanup was requested */ if (ECORE_TEST_BIT(RAMROD_DRV_CLR_ONLY, ¶ms->ramrod_flags)) @@ -3704,11 +3704,11 @@ static int ecore_queue_comp_cmd(struct bnx2x_softc *sc __rte_unused, struct ecore_queue_sp_obj *o, enum ecore_queue_cmd cmd) { - unsigned long cur_pending = o->pending; + uint32_t cur_pending = o->pending; if (!ECORE_TEST_AND_CLEAR_BIT(cmd, &cur_pending)) { PMD_DRV_LOG(ERR, sc, - "Bad MC reply %d for queue %d in state %d pending 0x%lx, next_state %d", + "Bad MC reply %d for queue %d in state %d pending 0x%x, next_state %d", cmd, o->cids[ECORE_PRIMARY_CID_INDEX], o->state, cur_pending, o->next_state); return ECORE_INVAL; @@ -3762,7 +3762,7 @@ static void ecore_q_fill_init_general_data(struct bnx2x_softc *sc __rte_unused, struct ecore_queue_sp_obj *o, struct ecore_general_setup_params *params, struct client_init_general_data - *gen_data, unsigned long *flags) + *gen_data, uint32_t *flags) { gen_data->client_id = o->cl_id; @@ -3794,7 +3794,7 @@ static void ecore_q_fill_init_general_data(struct bnx2x_softc *sc __rte_unused, static void ecore_q_fill_init_tx_data(struct ecore_txq_setup_params *params, struct client_init_tx_data *tx_data, - unsigned long *flags) + uint32_t *flags) { tx_data->enforce_security_flg = ECORE_TEST_BIT(ECORE_Q_FLG_TX_SEC, flags); @@ -3840,7 +3840,7 @@ static void ecore_q_fill_init_pause_data(struct rxq_pause_params *params, static void ecore_q_fill_init_rx_data(struct ecore_rxq_setup_params *params, struct client_init_rx_data *rx_data, - unsigned long *flags) + uint32_t *flags) { rx_data->tpa_en = ECORE_TEST_BIT(ECORE_Q_FLG_TPA, flags) * CLIENT_INIT_RX_DATA_TPA_EN_IPV4; @@ -4421,7 +4421,7 @@ static int ecore_queue_chk_transition(struct bnx2x_softc *sc __rte_unused, * the previous one. 
*/ if (o->pending) { - PMD_DRV_LOG(ERR, sc, "Blocking transition since pending was %lx", + PMD_DRV_LOG(ERR, sc, "Blocking transition since pending was %x", o->pending); return ECORE_BUSY; } @@ -4630,7 +4630,7 @@ void ecore_init_queue_obj(struct bnx2x_softc *sc, struct ecore_queue_sp_obj *obj, uint8_t cl_id, uint32_t * cids, uint8_t cid_cnt, uint8_t func_id, void *rdata, - ecore_dma_addr_t rdata_mapping, unsigned long type) + ecore_dma_addr_t rdata_mapping, uint32_t type) { ECORE_MEMSET(obj, 0, sizeof(*obj)); @@ -4699,11 +4699,11 @@ ecore_func_state_change_comp(struct bnx2x_softc *sc __rte_unused, struct ecore_func_sp_obj *o, enum ecore_func_cmd cmd) { - unsigned long cur_pending = o->pending; + uint32_t cur_pending = o->pending; if (!ECORE_TEST_AND_CLEAR_BIT(cmd, &cur_pending)) { PMD_DRV_LOG(ERR, sc, - "Bad MC reply %d for func %d in state %d pending 0x%lx, next_state %d", + "Bad MC reply %d for func %d in state %d pending 0x%x, next_state %d", cmd, ECORE_FUNC_ID(sc), o->state, cur_pending, o->next_state); return ECORE_INVAL; @@ -5311,7 +5311,7 @@ int ecore_func_state_change(struct bnx2x_softc *sc, struct ecore_func_sp_obj *o = params->f_obj; int rc, cnt = 300; enum ecore_func_cmd cmd = params->cmd; - unsigned long *pending = &o->pending; + uint32_t *pending = &o->pending; ECORE_MUTEX_LOCK(&o->one_pending_mutex); diff --git a/drivers/net/bnx2x/ecore_sp.h b/drivers/net/bnx2x/ecore_sp.h index cc1db377a..d58072dac 100644 --- a/drivers/net/bnx2x/ecore_sp.h +++ b/drivers/net/bnx2x/ecore_sp.h @@ -14,6 +14,7 @@ #ifndef ECORE_SP_H #define ECORE_SP_H +#include #include #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN @@ -73,10 +74,11 @@ typedef rte_spinlock_t ECORE_MUTEX_SPIN; #define ECORE_SET_BIT_NA(bit, var) (*var |= (1 << bit)) #define ECORE_CLEAR_BIT_NA(bit, var) (*var &= ~(1 << bit)) -#define ECORE_TEST_BIT(bit, var) bnx2x_test_bit(bit, var) -#define ECORE_SET_BIT(bit, var) bnx2x_set_bit(bit, var) -#define ECORE_CLEAR_BIT(bit, var) bnx2x_clear_bit(bit, var) -#define ECORE_TEST_AND_CLEAR_BIT(bit, var) bnx2x_test_and_clear_bit(bit, var) +#define ECORE_TEST_BIT(bit, var) rte_bit_relaxed_get32(bit, var) +#define ECORE_SET_BIT(bit, var) rte_bit_relaxed_set32(bit, var) +#define ECORE_CLEAR_BIT(bit, var) rte_bit_relaxed_clear32(bit, var) +#define ECORE_TEST_AND_CLEAR_BIT(bit, var) \ + rte_bit_relaxed_test_and_clear32(bit, var) #define atomic_load_acq_int (int)* #define atomic_store_rel_int(a, v) (*a = v) @@ -485,7 +487,7 @@ struct ecore_raw_obj { /* Ramrod state params */ int state; /* "ramrod is pending" state bit */ - unsigned long *pstate; /* pointer to state buffer */ + uint32_t *pstate; /* pointer to state buffer */ ecore_obj_type obj_type; @@ -538,7 +540,7 @@ struct ecore_vlan_mac_data { /* used to contain the data related vlan_mac_flags bits from * ramrod parameters. 
*/ - unsigned long vlan_mac_flags; + uint32_t vlan_mac_flags; /* Needed for MOVE command */ struct ecore_vlan_mac_obj *target_obj; @@ -589,7 +591,7 @@ typedef int (*exe_q_optimize)(struct bnx2x_softc *sc, typedef int (*exe_q_execute)(struct bnx2x_softc *sc, union ecore_qable_obj *o, ecore_list_t *exe_chunk, - unsigned long *ramrod_flags); + uint32_t *ramrod_flags); typedef struct ecore_exeq_elem * (*exe_q_get)(struct ecore_exe_queue_obj *o, struct ecore_exeq_elem *elem); @@ -659,7 +661,7 @@ struct ecore_vlan_mac_registry_elem { int cam_offset; /* Needed for DEL and RESTORE flows */ - unsigned long vlan_mac_flags; + uint32_t vlan_mac_flags; union ecore_classification_ramrod_data u; }; @@ -688,7 +690,7 @@ struct ecore_vlan_mac_ramrod_params { struct ecore_vlan_mac_obj *vlan_mac_obj; /* General command flags: COMP_WAIT, etc. */ - unsigned long ramrod_flags; + uint32_t ramrod_flags; /* Command specific configuration request */ struct ecore_vlan_mac_data user_req; @@ -706,7 +708,7 @@ struct ecore_vlan_mac_obj { */ uint8_t head_reader; /* Num. of readers accessing head list */ bool head_exe_request; /* Pending execution request. */ - unsigned long saved_ramrod_flags; /* Ramrods of pending execution */ + uint32_t saved_ramrod_flags; /* Ramrods of pending execution */ /* Execution queue interface instance */ struct ecore_exe_queue_obj exe_queue; @@ -801,8 +803,8 @@ struct ecore_vlan_mac_obj { */ int (*delete_all)(struct bnx2x_softc *sc, struct ecore_vlan_mac_obj *o, - unsigned long *vlan_mac_flags, - unsigned long *ramrod_flags); + uint32_t *vlan_mac_flags, + uint32_t *ramrod_flags); /** * Reconfigures the next MAC/VLAN/VLAN-MAC element from the previously @@ -842,7 +844,7 @@ struct ecore_vlan_mac_obj { */ int (*complete)(struct bnx2x_softc *sc, struct ecore_vlan_mac_obj *o, union event_ring_elem *cqe, - unsigned long *ramrod_flags); + uint32_t *ramrod_flags); /** * Wait for completion of all commands. Don't schedule new ones, @@ -883,13 +885,13 @@ enum { struct ecore_rx_mode_ramrod_params { struct ecore_rx_mode_obj *rx_mode_obj; - unsigned long *pstate; + uint32_t *pstate; int state; uint8_t cl_id; uint32_t cid; uint8_t func_id; - unsigned long ramrod_flags; - unsigned long rx_mode_flags; + uint32_t ramrod_flags; + uint32_t rx_mode_flags; /* rdata is either a pointer to eth_filter_rules_ramrod_data(e2) or to * a tstorm_eth_mac_filter_config (e1x). 
@@ -898,10 +900,10 @@ struct ecore_rx_mode_ramrod_params { ecore_dma_addr_t rdata_mapping; /* Rx mode settings */ - unsigned long rx_accept_flags; + uint32_t rx_accept_flags; /* internal switching settings */ - unsigned long tx_accept_flags; + uint32_t tx_accept_flags; }; struct ecore_rx_mode_obj { @@ -928,7 +930,7 @@ struct ecore_mcast_ramrod_params { struct ecore_mcast_obj *mcast_obj; /* Relevant options are RAMROD_COMP_WAIT and RAMROD_DRV_CLR_ONLY */ - unsigned long ramrod_flags; + uint32_t ramrod_flags; ecore_list_t mcast_list; /* list of struct ecore_mcast_list_elem */ /** TODO: @@ -1144,22 +1146,22 @@ struct ecore_config_rss_params { struct ecore_rss_config_obj *rss_obj; /* may have RAMROD_COMP_WAIT set only */ - unsigned long ramrod_flags; + uint32_t ramrod_flags; /* ECORE_RSS_X bits */ - unsigned long rss_flags; + uint32_t rss_flags; /* Number hash bits to take into an account */ - uint8_t rss_result_mask; + uint8_t rss_result_mask; /* Indirection table */ - uint8_t ind_table[T_ETH_INDIRECTION_TABLE_SIZE]; + uint8_t ind_table[T_ETH_INDIRECTION_TABLE_SIZE]; /* RSS hash values */ - uint32_t rss_key[10]; + uint32_t rss_key[10]; /* valid only if ECORE_RSS_UPDATE_TOE is set */ - uint16_t toe_rss_bitmap; + uint16_t toe_rss_bitmap; }; struct ecore_rss_config_obj { @@ -1290,17 +1292,17 @@ enum ecore_q_type { struct ecore_queue_init_params { struct { - unsigned long flags; - uint16_t hc_rate; - uint8_t fw_sb_id; - uint8_t sb_cq_index; + uint32_t flags; + uint16_t hc_rate; + uint8_t fw_sb_id; + uint8_t sb_cq_index; } tx; struct { - unsigned long flags; - uint16_t hc_rate; - uint8_t fw_sb_id; - uint8_t sb_cq_index; + uint32_t flags; + uint16_t hc_rate; + uint8_t fw_sb_id; + uint8_t sb_cq_index; } rx; /* CID context in the host memory */ @@ -1321,10 +1323,10 @@ struct ecore_queue_cfc_del_params { }; struct ecore_queue_update_params { - unsigned long update_flags; /* ECORE_Q_UPDATE_XX bits */ - uint16_t def_vlan; - uint16_t silent_removal_value; - uint16_t silent_removal_mask; + uint32_t update_flags; /* ECORE_Q_UPDATE_XX bits */ + uint16_t def_vlan; + uint16_t silent_removal_value; + uint16_t silent_removal_mask; /* index within the tx_only cids of this queue object */ uint8_t cid_index; }; @@ -1422,13 +1424,13 @@ struct ecore_queue_setup_params { struct ecore_txq_setup_params txq_params; struct ecore_rxq_setup_params rxq_params; struct rxq_pause_params pause_params; - unsigned long flags; + uint32_t flags; }; struct ecore_queue_setup_tx_only_params { struct ecore_general_setup_params gen_params; struct ecore_txq_setup_params txq_params; - unsigned long flags; + uint32_t flags; /* index within the tx_only cids of this queue object */ uint8_t cid_index; }; @@ -1440,7 +1442,7 @@ struct ecore_queue_state_params { enum ecore_queue_cmd cmd; /* may have RAMROD_COMP_WAIT set only */ - unsigned long ramrod_flags; + uint32_t ramrod_flags; /* Params according to the current command */ union { @@ -1478,14 +1480,14 @@ struct ecore_queue_sp_obj { enum ecore_q_state state, next_state; /* bits from enum ecore_q_type */ - unsigned long type; + uint32_t type; /* ECORE_Q_CMD_XX bits. This object implements "one * pending" paradigm but for debug and tracing purposes it's * more convenient to have different bits for different * commands. 
*/ - unsigned long pending; + uint32_t pending; /* Buffer to use as a ramrod data and its mapping */ void *rdata; @@ -1653,7 +1655,7 @@ struct ecore_func_start_params { }; struct ecore_func_switch_update_params { - unsigned long changes; /* ECORE_F_UPDATE_XX bits */ + uint32_t changes; /* ECORE_F_UPDATE_XX bits */ uint16_t vlan; uint16_t vlan_eth_type; uint8_t vlan_force_prio; @@ -1704,7 +1706,7 @@ struct ecore_func_state_params { enum ecore_func_cmd cmd; /* may have RAMROD_COMP_WAIT set only */ - unsigned long ramrod_flags; + uint32_t ramrod_flags; /* Params according to the current command */ union { @@ -1753,7 +1755,7 @@ struct ecore_func_sp_obj { * more convenient to have different bits for different * commands. */ - unsigned long pending; + uint32_t pending; /* Buffer to use as a ramrod data and its mapping */ void *rdata; @@ -1821,7 +1823,7 @@ enum ecore_func_state ecore_func_get_state(struct bnx2x_softc *sc, void ecore_init_queue_obj(struct bnx2x_softc *sc, struct ecore_queue_sp_obj *obj, uint8_t cl_id, uint32_t *cids, uint8_t cid_cnt, uint8_t func_id, void *rdata, - ecore_dma_addr_t rdata_mapping, unsigned long type); + ecore_dma_addr_t rdata_mapping, uint32_t type); int ecore_queue_state_change(struct bnx2x_softc *sc, struct ecore_queue_state_params *params); @@ -1834,7 +1836,7 @@ void ecore_init_mac_obj(struct bnx2x_softc *sc, struct ecore_vlan_mac_obj *mac_obj, uint8_t cl_id, uint32_t cid, uint8_t func_id, void *rdata, ecore_dma_addr_t rdata_mapping, int state, - unsigned long *pstate, ecore_obj_type type, + uint32_t *pstate, ecore_obj_type type, struct ecore_credit_pool_obj *macs_pool); void ecore_init_vlan_obj(struct bnx2x_softc *sc, @@ -1842,7 +1844,7 @@ void ecore_init_vlan_obj(struct bnx2x_softc *sc, uint8_t cl_id, uint32_t cid, uint8_t func_id, void *rdata, ecore_dma_addr_t rdata_mapping, int state, - unsigned long *pstate, ecore_obj_type type, + uint32_t *pstate, ecore_obj_type type, struct ecore_credit_pool_obj *vlans_pool); void ecore_init_vlan_mac_obj(struct bnx2x_softc *sc, @@ -1850,7 +1852,7 @@ void ecore_init_vlan_mac_obj(struct bnx2x_softc *sc, uint8_t cl_id, uint32_t cid, uint8_t func_id, void *rdata, ecore_dma_addr_t rdata_mapping, int state, - unsigned long *pstate, ecore_obj_type type, + uint32_t *pstate, ecore_obj_type type, struct ecore_credit_pool_obj *macs_pool, struct ecore_credit_pool_obj *vlans_pool); @@ -1859,7 +1861,7 @@ void ecore_init_vxlan_fltr_obj(struct bnx2x_softc *sc, uint8_t cl_id, uint32_t cid, uint8_t func_id, void *rdata, ecore_dma_addr_t rdata_mapping, int state, - unsigned long *pstate, ecore_obj_type type, + uint32_t *pstate, ecore_obj_type type, struct ecore_credit_pool_obj *macs_pool, struct ecore_credit_pool_obj *vlans_pool); @@ -1901,7 +1903,7 @@ void ecore_init_mcast_obj(struct bnx2x_softc *sc, struct ecore_mcast_obj *mcast_obj, uint8_t mcast_cl_id, uint32_t mcast_cid, uint8_t func_id, uint8_t engine_id, void *rdata, ecore_dma_addr_t rdata_mapping, - int state, unsigned long *pstate, + int state, uint32_t *pstate, ecore_obj_type type); /** @@ -1943,7 +1945,7 @@ void ecore_init_rss_config_obj(struct bnx2x_softc *sc, struct ecore_rss_config_obj *rss_obj, uint8_t cl_id, uint32_t cid, uint8_t func_id, uint8_t engine_id, void *rdata, ecore_dma_addr_t rdata_mapping, - int state, unsigned long *pstate, + int state, uint32_t *pstate, ecore_obj_type type); /**
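As a usage note, several of the converted call sites (bnx2x_set_q_rx_mode(),
bnx2x_chip_cleanup() and bnx2x_handle_rx_mode_eqe() in the hunks above)
implement a pending/scheduled handshake on sc->sp_state. A minimal sketch of
that pattern with the common API, using hypothetical EX_* bit indices in
place of the driver's ECORE_FILTER_RX_MODE_* enum values:

#include <stdint.h>
#include <rte_bitops.h>

/* Hypothetical bit indices standing in for ECORE_FILTER_RX_MODE_PENDING
 * and ECORE_FILTER_RX_MODE_SCHED.
 */
#define EX_RX_MODE_PENDING 0
#define EX_RX_MODE_SCHED   1

/* Request side: if a command is already pending, only mark it as
 * scheduled so the completion handler re-issues it later.
 */
static int request_rx_mode(volatile uint32_t *sp_state)
{
	if (rte_bit_relaxed_get32(EX_RX_MODE_PENDING, sp_state)) {
		rte_bit_relaxed_set32(EX_RX_MODE_SCHED, sp_state);
		return 0;	/* will be re-run on completion */
	}
	rte_bit_relaxed_set32(EX_RX_MODE_PENDING, sp_state);
	return 1;	/* issue the command now */
}

/* Completion side: clear PENDING and report whether a request was
 * queued in the meantime and must be issued again.
 */
static int complete_rx_mode(volatile uint32_t *sp_state)
{
	rte_bit_relaxed_clear32(EX_RX_MODE_PENDING, sp_state);
	return rte_bit_relaxed_test_and_clear32(EX_RX_MODE_SCHED,
						sp_state) != 0;
}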