From patchwork Wed Jul 3 23:43:11 2019
X-Patchwork-Submitter: Rasesh Mody <rmody@marvell.com>
X-Patchwork-Id: 56050
X-Patchwork-Delegate: jerinj@marvell.com
From: Rasesh Mody <rmody@marvell.com>
Date: Wed, 3 Jul 2019 16:43:11 -0700
Message-ID: <20190703234313.10782-1-rmody@marvell.com>
List-Id: DPDK patches and discussions
Subject: [dpdk-dev] [PATCH 1/3] net/bnx2x: fix read VF id

The logic to read the vf_id used by the ACQUIRE/TEARDOWN_Q/RELEASE TLVs
multiplexed the return value: a single int conveyed both the vf_id and the
status of the read-vf_id API. This leads to a segfault at dev_start(), as
resources are not properly cleaned up and re-allocated. Fix the read-vf_id
API to differentiate between the vf_id value and the return status, and
adjust the status checks accordingly. Add a bnx2x_vf_teardown_queue() API
and move the relevant code from bnx2x_vf_unload() into it.
Fixes: 540a211084a7 ("bnx2x: driver core")
Cc: stable@dpdk.org

Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/bnx2x/bnx2x.c      |   2 +
 drivers/net/bnx2x/bnx2x_vfpf.c | 140 ++++++++++++++++++++-------------
 drivers/net/bnx2x/bnx2x_vfpf.h |   1 +
 3 files changed, 87 insertions(+), 56 deletions(-)

diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
index d523f4f2c..877f5b73d 100644
--- a/drivers/net/bnx2x/bnx2x.c
+++ b/drivers/net/bnx2x/bnx2x.c
@@ -2015,6 +2015,8 @@ bnx2x_nic_unload(struct bnx2x_softc *sc, uint32_t unload_mode, uint8_t keep_link
 	uint8_t global = FALSE;
 	uint32_t val;
 
+	PMD_INIT_FUNC_TRACE(sc);
+
 	PMD_DRV_LOG(DEBUG, sc, "Starting NIC unload...");
 
 	/* mark driver as unloaded in shmem2 */
diff --git a/drivers/net/bnx2x/bnx2x_vfpf.c b/drivers/net/bnx2x/bnx2x_vfpf.c
index 7cf738a82..8f7559c67 100644
--- a/drivers/net/bnx2x/bnx2x_vfpf.c
+++ b/drivers/net/bnx2x/bnx2x_vfpf.c
@@ -162,20 +162,26 @@ static inline uint16_t bnx2x_check_me_flags(uint32_t val)
 #define BNX2X_ME_ANSWER_DELAY 100
 #define BNX2X_ME_ANSWER_TRIES 10
 
-static inline int bnx2x_read_vf_id(struct bnx2x_softc *sc)
+static inline int bnx2x_read_vf_id(struct bnx2x_softc *sc, uint32_t *vf_id)
 {
 	uint32_t val;
 	uint8_t i = 0;
 
 	while (i <= BNX2X_ME_ANSWER_TRIES) {
 		val = BNX2X_DB_READ(DOORBELL_ADDR(sc, 0));
-		if (bnx2x_check_me_flags(val))
-			return VF_ID(val);
+		if (bnx2x_check_me_flags(val)) {
+			PMD_DRV_LOG(DEBUG, sc,
+				    "valid register value: 0x%08x", val);
+			*vf_id = VF_ID(val);
+			return 0;
+		}
 
 		DELAY_MS(BNX2X_ME_ANSWER_DELAY);
 		i++;
 	}
 
+	PMD_DRV_LOG(ERR, sc, "Invalid register value: 0x%08x", val);
+
 	return -EINVAL;
 }
@@ -240,14 +246,13 @@ int bnx2x_loop_obtain_resources(struct bnx2x_softc *sc)
 int bnx2x_vf_get_resources(struct bnx2x_softc *sc, uint8_t tx_count,
 			   uint8_t rx_count)
 {
 	struct vf_acquire_tlv *acq = &sc->vf2pf_mbox->query[0].acquire;
-	int vf_id;
+	uint32_t vf_id;
 	int rc;
 
 	bnx2x_vf_close(sc);
 	bnx2x_vf_prep(sc, &acq->first_tlv, BNX2X_VF_TLV_ACQUIRE, sizeof(*acq));
 
-	vf_id = bnx2x_read_vf_id(sc);
-	if (vf_id < 0) {
+	if (bnx2x_read_vf_id(sc, &vf_id)) {
 		rc = -EAGAIN;
 		goto out;
 	}
@@ -318,25 +323,30 @@ bnx2x_vf_close(struct bnx2x_softc *sc)
 {
 	struct vf_release_tlv *query;
 	struct vf_common_reply_tlv *reply = &sc->vf2pf_mbox->resp.common_reply;
-	int vf_id = bnx2x_read_vf_id(sc);
+	uint32_t vf_id;
 	int rc;
 
-	if (vf_id >= 0) {
-		query = &sc->vf2pf_mbox->query[0].release;
-		bnx2x_vf_prep(sc, &query->first_tlv, BNX2X_VF_TLV_RELEASE,
-			      sizeof(*query));
+	query = &sc->vf2pf_mbox->query[0].release;
+	bnx2x_vf_prep(sc, &query->first_tlv, BNX2X_VF_TLV_RELEASE,
+		      sizeof(*query));
 
-		query->vf_id = vf_id;
-		bnx2x_add_tlv(sc, query, query->first_tlv.tl.length,
-			      BNX2X_VF_TLV_LIST_END,
-			      sizeof(struct channel_list_end_tlv));
+	if (bnx2x_read_vf_id(sc, &vf_id)) {
+		rc = -EAGAIN;
+		goto out;
+	}
 
-		rc = bnx2x_do_req4pf(sc, sc->vf2pf_mbox_mapping.paddr);
-		if (rc || reply->status != BNX2X_VF_STATUS_SUCCESS)
-			PMD_DRV_LOG(ERR, sc, "Failed to release VF");
+	query->vf_id = vf_id;
 
-		bnx2x_vf_finalize(sc, &query->first_tlv);
-	}
+	bnx2x_add_tlv(sc, query, query->first_tlv.tl.length,
+		      BNX2X_VF_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	rc = bnx2x_do_req4pf(sc, sc->vf2pf_mbox_mapping.paddr);
+	if (rc || reply->status != BNX2X_VF_STATUS_SUCCESS)
+		PMD_DRV_LOG(ERR, sc, "Failed to release VF");
+
+out:
+	bnx2x_vf_finalize(sc, &query->first_tlv);
 }
 
 /* Let PF know the VF status blocks phys_addrs */
@@ -347,6 +357,8 @@ bnx2x_vf_init(struct bnx2x_softc *sc)
 	struct vf_common_reply_tlv *reply = &sc->vf2pf_mbox->resp.common_reply;
 	int i, rc;
 
+	PMD_INIT_FUNC_TRACE(sc);
+
 	query = &sc->vf2pf_mbox->query[0].init;
 	bnx2x_vf_prep(sc, &query->first_tlv, BNX2X_VF_TLV_INIT,
 		      sizeof(*query));
@@ -383,51 +395,38 @@ bnx2x_vf_unload(struct bnx2x_softc *sc)
 {
 	struct vf_close_tlv *query;
 	struct vf_common_reply_tlv *reply = &sc->vf2pf_mbox->resp.common_reply;
-	struct vf_q_op_tlv *query_op;
-	int i, vf_id, rc;
-
-	vf_id = bnx2x_read_vf_id(sc);
-	if (vf_id > 0) {
-		FOR_EACH_QUEUE(sc, i) {
-			query_op = &sc->vf2pf_mbox->query[0].q_op;
-			bnx2x_vf_prep(sc, &query_op->first_tlv,
-				      BNX2X_VF_TLV_TEARDOWN_Q,
-				      sizeof(*query_op));
-
-			query_op->vf_qid = i;
+	uint32_t vf_id;
+	int i, rc;
 
-			bnx2x_add_tlv(sc, query_op,
-				      query_op->first_tlv.tl.length,
-				      BNX2X_VF_TLV_LIST_END,
-				      sizeof(struct channel_list_end_tlv));
+	PMD_INIT_FUNC_TRACE(sc);
 
-			rc = bnx2x_do_req4pf(sc, sc->vf2pf_mbox_mapping.paddr);
-			if (rc || reply->status != BNX2X_VF_STATUS_SUCCESS)
-				PMD_DRV_LOG(ERR, sc,
-					    "Bad reply for vf_q %d teardown", i);
+	FOR_EACH_QUEUE(sc, i)
+		bnx2x_vf_teardown_queue(sc, i);
 
-			bnx2x_vf_finalize(sc, &query_op->first_tlv);
-		}
+	bnx2x_vf_set_mac(sc, false);
 
-		bnx2x_vf_set_mac(sc, false);
+	query = &sc->vf2pf_mbox->query[0].close;
+	bnx2x_vf_prep(sc, &query->first_tlv, BNX2X_VF_TLV_CLOSE,
+		      sizeof(*query));
 
-		query = &sc->vf2pf_mbox->query[0].close;
-		bnx2x_vf_prep(sc, &query->first_tlv, BNX2X_VF_TLV_CLOSE,
-			      sizeof(*query));
+	if (bnx2x_read_vf_id(sc, &vf_id)) {
+		rc = -EAGAIN;
+		goto out;
+	}
 
-		query->vf_id = vf_id;
+	query->vf_id = vf_id;
 
-		bnx2x_add_tlv(sc, query, query->first_tlv.tl.length,
-			      BNX2X_VF_TLV_LIST_END,
-			      sizeof(struct channel_list_end_tlv));
+	bnx2x_add_tlv(sc, query, query->first_tlv.tl.length,
+		      BNX2X_VF_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
 
-		rc = bnx2x_do_req4pf(sc, sc->vf2pf_mbox_mapping.paddr);
-		if (rc || reply->status != BNX2X_VF_STATUS_SUCCESS)
-			PMD_DRV_LOG(ERR, sc,
-				    "Bad reply from PF for close message");
+	rc = bnx2x_do_req4pf(sc, sc->vf2pf_mbox_mapping.paddr);
+	if (rc || reply->status != BNX2X_VF_STATUS_SUCCESS)
+		PMD_DRV_LOG(ERR, sc,
+			    "Bad reply from PF for close message");
 
-		bnx2x_vf_finalize(sc, &query->first_tlv);
-	}
+out:
+	bnx2x_vf_finalize(sc, &query->first_tlv);
 }
 
 static inline uint16_t
@@ -521,6 +520,35 @@ bnx2x_vf_setup_queue(struct bnx2x_softc *sc, struct bnx2x_fastpath *fp, int lead
 	return rc;
 }
 
+int
+bnx2x_vf_teardown_queue(struct bnx2x_softc *sc, int qid)
+{
+	struct vf_q_op_tlv *query_op;
+	struct vf_common_reply_tlv *reply = &sc->vf2pf_mbox->resp.common_reply;
+	int rc;
+
+	query_op = &sc->vf2pf_mbox->query[0].q_op;
+	bnx2x_vf_prep(sc, &query_op->first_tlv,
+		      BNX2X_VF_TLV_TEARDOWN_Q,
+		      sizeof(*query_op));
+
+	query_op->vf_qid = qid;
+
+	bnx2x_add_tlv(sc, query_op,
+		      query_op->first_tlv.tl.length,
+		      BNX2X_VF_TLV_LIST_END,
+		      sizeof(struct channel_list_end_tlv));
+
+	rc = bnx2x_do_req4pf(sc, sc->vf2pf_mbox_mapping.paddr);
+	if (rc || reply->status != BNX2X_VF_STATUS_SUCCESS)
+		PMD_DRV_LOG(ERR, sc,
+			    "Bad reply for vf_q %d teardown", qid);
+
+	bnx2x_vf_finalize(sc, &query_op->first_tlv);
+
+	return rc;
+}
+
 int bnx2x_vf_set_mac(struct bnx2x_softc *sc, int set)
 {
diff --git a/drivers/net/bnx2x/bnx2x_vfpf.h b/drivers/net/bnx2x/bnx2x_vfpf.h
index 6964c9d98..ce0259adf 100644
--- a/drivers/net/bnx2x/bnx2x_vfpf.h
+++ b/drivers/net/bnx2x/bnx2x_vfpf.h
@@ -328,6 +328,7 @@ struct bnx2x_vf_mbx_msg {
 	union resp_tlvs resp;
 };
 
+int bnx2x_vf_teardown_queue(struct bnx2x_softc *sc, int qid);
 int bnx2x_vf_set_mac(struct bnx2x_softc *sc, int set);
 int bnx2x_vf_config_rss(struct bnx2x_softc *sc,
 			struct ecore_config_rss_params *params);

From patchwork Wed Jul 3 23:43:12 2019
X-Patchwork-Submitter: Rasesh Mody <rmody@marvell.com>
X-Patchwork-Id: 56051
X-Patchwork-Delegate: jerinj@marvell.com
From: Rasesh Mody <rmody@marvell.com>
Date: Wed, 3 Jul 2019 16:43:12 -0700
Message-ID: <20190703234313.10782-2-rmody@marvell.com>
In-Reply-To: <20190703234313.10782-1-rmody@marvell.com>
References: <20190703234313.10782-1-rmody@marvell.com>
Subject: [dpdk-dev] [PATCH 2/3] net/bnx2x: fix link events polling for SRIOV
We do not need to schedule the periodic poll for slowpath link events
for SRIOV; the link events are handled by the PF driver.

Fixes: 6041aa619f9a ("net/bnx2x: fix poll link status")
Cc: stable@dpdk.org

Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/bnx2x/bnx2x_ethdev.c | 19 ++++++++++++-------
 1 file changed, 12 insertions(+), 7 deletions(-)

diff --git a/drivers/net/bnx2x/bnx2x_ethdev.c b/drivers/net/bnx2x/bnx2x_ethdev.c
index 10b4fdb8e..127811b92 100644
--- a/drivers/net/bnx2x/bnx2x_ethdev.c
+++ b/drivers/net/bnx2x/bnx2x_ethdev.c
@@ -218,9 +218,12 @@ bnx2x_dev_start(struct rte_eth_dev *dev)
 	PMD_INIT_FUNC_TRACE(sc);
 
 	/* start the periodic callout */
-	if (atomic_load_acq_long(&sc->periodic_flags) == PERIODIC_STOP) {
-		bnx2x_periodic_start(dev);
-		PMD_DRV_LOG(DEBUG, sc, "Periodic poll re-started");
+	if (IS_PF(sc)) {
+		if (atomic_load_acq_long(&sc->periodic_flags) ==
+		    PERIODIC_STOP) {
+			bnx2x_periodic_start(dev);
+			PMD_DRV_LOG(DEBUG, sc, "Periodic poll re-started");
+		}
 	}
 
 	ret = bnx2x_init(sc);
@@ -258,10 +261,10 @@ bnx2x_dev_stop(struct rte_eth_dev *dev)
 		rte_intr_disable(&sc->pci_dev->intr_handle);
 		rte_intr_callback_unregister(&sc->pci_dev->intr_handle,
 				bnx2x_interrupt_handler, (void *)dev);
-	}
 
-	/* stop the periodic callout */
-	bnx2x_periodic_stop(dev);
+		/* stop the periodic callout */
+		bnx2x_periodic_stop(dev);
+	}
 
 	ret = bnx2x_nic_unload(sc, UNLOAD_NORMAL, FALSE);
 	if (ret) {
@@ -680,7 +683,9 @@ bnx2x_common_dev_init(struct rte_eth_dev *eth_dev, int is_vf)
 	return 0;
 
 out:
-	bnx2x_periodic_stop(eth_dev);
+	if (IS_PF(sc))
+		bnx2x_periodic_stop(eth_dev);
+
 	return ret;
 }

From patchwork Wed Jul 3 23:43:13 2019
X-Patchwork-Submitter: Rasesh Mody <rmody@marvell.com>
X-Patchwork-Id: 56052
X-Patchwork-Delegate: jerinj@marvell.com
From: Rasesh Mody <rmody@marvell.com>
Date: Wed, 3 Jul 2019 16:43:13 -0700
Message-ID: <20190703234313.10782-3-rmody@marvell.com>
In-Reply-To: <20190703234313.10782-1-rmody@marvell.com>
References: <20190703234313.10782-1-rmody@marvell.com>
Subject: [dpdk-dev] [PATCH 3/3] net/bnx2x: fix fastpath SB allocation for SRIOV

For SRIOV, fastpath status blocks are not allocated, resulting in a
segfault. Separate the fastpath DMA allocation/free from the rest of the
memory allocation/free; it is now done as part of NIC load/unload. Also
fix the comment indentation in the bnx2x_alloc_hsi_mem() and
bnx2x_free_hsi_mem() APIs.
Fixes: f0219d98defd ("net/bnx2x: fix interrupt flood")
Cc: stable@dpdk.org

Signed-off-by: Rasesh Mody <rmody@marvell.com>
---
 drivers/net/bnx2x/bnx2x.c | 117 +++++++++++++++++++-------------------
 1 file changed, 60 insertions(+), 57 deletions(-)

diff --git a/drivers/net/bnx2x/bnx2x.c b/drivers/net/bnx2x/bnx2x.c
index 877f5b73d..1a088269f 100644
--- a/drivers/net/bnx2x/bnx2x.c
+++ b/drivers/net/bnx2x/bnx2x.c
@@ -2120,6 +2120,9 @@ bnx2x_nic_unload(struct bnx2x_softc *sc, uint32_t unload_mode, uint8_t keep_link
 		bnx2x_free_mem(sc);
 	}
 
+	/* free the host hardware/software hsi structures */
+	bnx2x_free_hsi_mem(sc);
+
 	bnx2x_free_fw_stats_mem(sc);
 
 	sc->state = BNX2X_STATE_CLOSED;
@@ -2403,9 +2406,6 @@ static void bnx2x_free_mem(struct bnx2x_softc *sc)
 
 	ecore_ilt_mem_op(sc, ILT_MEMOP_FREE);
 	bnx2x_free_ilt_lines_mem(sc);
-
-	/* free the host hardware/software hsi structures */
-	bnx2x_free_hsi_mem(sc);
 }
 
 static int bnx2x_alloc_mem(struct bnx2x_softc *sc)
@@ -2456,13 +2456,6 @@ static int bnx2x_alloc_mem(struct bnx2x_softc *sc)
 		return -1;
 	}
 
-	/* allocate the host hardware/software hsi structures */
-	if (bnx2x_alloc_hsi_mem(sc) != 0) {
-		PMD_DRV_LOG(ERR, sc, "bnx2x_alloc_hsi_mem was failed");
-		bnx2x_free_mem(sc);
-		return -ENXIO;
-	}
-
 	return 0;
 }
@@ -7242,6 +7235,14 @@ int bnx2x_nic_load(struct bnx2x_softc *sc)
 		}
 	}
 
+	/* allocate the host hardware/software hsi structures */
+	if (bnx2x_alloc_hsi_mem(sc) != 0) {
+		PMD_DRV_LOG(ERR, sc, "bnx2x_alloc_hsi_mem was failed");
+		sc->state = BNX2X_STATE_CLOSED;
+		rc = -ENOMEM;
+		goto bnx2x_nic_load_error0;
+	}
+
 	if (bnx2x_alloc_fw_stats_mem(sc) != 0) {
 		sc->state = BNX2X_STATE_CLOSED;
 		rc = -ENOMEM;
@@ -7457,6 +7458,7 @@ int bnx2x_nic_load(struct bnx2x_softc *sc)
 
 bnx2x_nic_load_error0:
 	bnx2x_free_fw_stats_mem(sc);
+	bnx2x_free_hsi_mem(sc);
 	bnx2x_free_mem(sc);
 
 	return rc;
@@ -8902,9 +8904,9 @@ int bnx2x_alloc_hsi_mem(struct bnx2x_softc *sc)
 	uint32_t i;
 
 	if (IS_PF(sc)) {
-/************************/
-/* DEFAULT STATUS BLOCK */
-/************************/
+		/************************/
+		/* DEFAULT STATUS BLOCK */
+		/************************/
 		if (bnx2x_dma_alloc(sc, sizeof(struct host_sp_status_block),
 				    &sc->def_sb_dma, "def_sb",
@@ -8914,9 +8916,9 @@ int bnx2x_alloc_hsi_mem(struct bnx2x_softc *sc)
 		sc->def_sb =
 		    (struct host_sp_status_block *)sc->def_sb_dma.vaddr;
 
-/***************/
-/* EVENT QUEUE */
-/***************/
+		/***************/
+		/* EVENT QUEUE */
+		/***************/
 		if (bnx2x_dma_alloc(sc, BNX2X_PAGE_SIZE,
 				    &sc->eq_dma, "ev_queue",
@@ -8927,9 +8929,9 @@ int bnx2x_alloc_hsi_mem(struct bnx2x_softc *sc)
 
 		sc->eq = (union event_ring_elem *)sc->eq_dma.vaddr;
 
-/*************/
-/* SLOW PATH */
-/*************/
+		/*************/
+		/* SLOW PATH */
+		/*************/
 		if (bnx2x_dma_alloc(sc, sizeof(struct bnx2x_slowpath),
 				    &sc->sp_dma, "sp",
@@ -8941,9 +8943,9 @@ int bnx2x_alloc_hsi_mem(struct bnx2x_softc *sc)
 
 		sc->sp = (struct bnx2x_slowpath *)sc->sp_dma.vaddr;
 
-/*******************/
-/* SLOW PATH QUEUE */
-/*******************/
+		/*******************/
+		/* SLOW PATH QUEUE */
+		/*******************/
 		if (bnx2x_dma_alloc(sc, BNX2X_PAGE_SIZE,
 				    &sc->spq_dma, "sp_queue",
@@ -8956,9 +8958,9 @@ int bnx2x_alloc_hsi_mem(struct bnx2x_softc *sc)
 
 		sc->spq = (struct eth_spe *)sc->spq_dma.vaddr;
 
-/***************************/
-/* FW DECOMPRESSION BUFFER */
-/***************************/
+		/***************************/
+		/* FW DECOMPRESSION BUFFER */
+		/***************************/
 		if (bnx2x_dma_alloc(sc, FW_BUF_SIZE, &sc->gz_buf_dma,
 				    "fw_buf", RTE_CACHE_LINE_SIZE) != 0) {
@@ -8982,9 +8984,9 @@ int bnx2x_alloc_hsi_mem(struct bnx2x_softc *sc)
 		fp->sc = sc;
 		fp->index = i;
 
-/*******************/
-/* FP STATUS BLOCK */
-/*******************/
+		/*******************/
+		/* FP STATUS BLOCK */
+		/*******************/
 		snprintf(buf, sizeof(buf), "fp_%d_sb", i);
 		if (bnx2x_dma_alloc(sc,
 				    sizeof(union bnx2x_host_hc_status_block),
@@ -9015,49 +9017,50 @@ void bnx2x_free_hsi_mem(struct bnx2x_softc *sc)
 	for (i = 0; i < sc->num_queues; i++) {
 		fp = &sc->fp[i];
 
-/*******************/
-/* FP STATUS BLOCK */
-/*******************/
+		/*******************/
+		/* FP STATUS BLOCK */
+		/*******************/
 		memset(&fp->status_block, 0, sizeof(fp->status_block));
 		bnx2x_dma_free(&fp->sb_dma);
 	}
 
-	/***************************/
-	/* FW DECOMPRESSION BUFFER */
-	/***************************/
-
-	bnx2x_dma_free(&sc->gz_buf_dma);
-	sc->gz_buf = NULL;
+	if (IS_PF(sc)) {
+		/***************************/
+		/* FW DECOMPRESSION BUFFER */
+		/***************************/
 
-	/*******************/
-	/* SLOW PATH QUEUE */
-	/*******************/
+		bnx2x_dma_free(&sc->gz_buf_dma);
+		sc->gz_buf = NULL;
 
-	bnx2x_dma_free(&sc->spq_dma);
-	sc->spq = NULL;
+		/*******************/
+		/* SLOW PATH QUEUE */
+		/*******************/
 
-	/*************/
-	/* SLOW PATH */
-	/*************/
+		bnx2x_dma_free(&sc->spq_dma);
+		sc->spq = NULL;
 
-	bnx2x_dma_free(&sc->sp_dma);
-	sc->sp = NULL;
+		/*************/
+		/* SLOW PATH */
+		/*************/
 
-	/***************/
-	/* EVENT QUEUE */
-	/***************/
+		bnx2x_dma_free(&sc->sp_dma);
+		sc->sp = NULL;
 
-	bnx2x_dma_free(&sc->eq_dma);
-	sc->eq = NULL;
+		/***************/
+		/* EVENT QUEUE */
+		/***************/
 
-	/************************/
-	/* DEFAULT STATUS BLOCK */
-	/************************/
+		bnx2x_dma_free(&sc->eq_dma);
+		sc->eq = NULL;
 
-	bnx2x_dma_free(&sc->def_sb_dma);
-	sc->def_sb = NULL;
+		/************************/
+		/* DEFAULT STATUS BLOCK */
+		/************************/
+		bnx2x_dma_free(&sc->def_sb_dma);
+		sc->def_sb = NULL;
+	}
 }
 
 /*