From patchwork Thu Jul 18 03:36:03 2019
From: Ajit Khaparde
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Lance Richardson, Somnath Kotur
Date: Thu, 18 Jul 2019 09:06:03 +0530
Message-Id: <20190718033616.37605-10-ajit.khaparde@broadcom.com>
In-Reply-To: <20190718033616.37605-1-ajit.khaparde@broadcom.com>
References: <20190718033616.37605-1-ajit.khaparde@broadcom.com>
Subject: [dpdk-dev] [PATCH 09/22] net/bnxt: use dedicated cpr for async events

From: Lance Richardson

This commit enables the creation of a dedicated completion ring for
asynchronous event handling instead of handling these events on a
receive completion ring.

For the stingray platform and other platforms needing tighter control
of resource utilization, we retain the ability to process async events
on a receive completion ring. This behavior is controlled by a
compile-time configuration variable.

For Thor-based adapters, we use a dedicated NQ (notification queue)
ring for async events (async events can't currently be received on a
completion ring due to a firmware limitation).

Rename "def_cp_ring" to "async_cp_ring" to better reflect its purpose
(async event notifications) and to avoid confusion with VNIC default
receive completion rings.
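For illustration only (not part of this patch): the compile-time selection
described above reduces to the BNXT_NUM_ASYNC_CPR constant added in bnxt.h
below. The standalone C sketch here models the two build configurations;
the helper async_ring_kind() is hypothetical and exists only for this
example.

/*
 * Illustrative model only -- not driver code. Mirrors the
 * BNXT_NUM_ASYNC_CPR definition this patch adds to bnxt.h.
 */
#include <stdbool.h>
#include <stdio.h>

#ifdef RTE_LIBRTE_BNXT_SHARED_ASYNC_RING
/* Conserve resources: async events share rx queue 0's completion ring. */
#define BNXT_NUM_ASYNC_CPR 0
#else
/* Async events get a dedicated completion ring (an NQ on Thor). */
#define BNXT_NUM_ASYNC_CPR 1
#endif

static const char *async_ring_kind(bool has_nq)
{
        if (BNXT_NUM_ASYNC_CPR == 0)
                return "rx queue 0 completion ring (shared)";
        return has_nq ? "dedicated NQ ring" : "dedicated completion ring";
}

int main(void)
{
        printf("non-Thor: async events on %s\n", async_ring_kind(false));
        printf("Thor:     async events on %s\n", async_ring_kind(true));
        return 0;
}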
Signed-off-by: Lance Richardson
Reviewed-by: Somnath Kotur
Reviewed-by: Ajit Khaparde
---
 config/common_base                           |   1 +
 config/defconfig_arm64-stingray-linuxapp-gcc |   3 +
 drivers/net/bnxt/bnxt.h                      |  10 +-
 drivers/net/bnxt/bnxt_ethdev.c               |  13 +-
 drivers/net/bnxt/bnxt_hwrm.c                 |  16 +-
 drivers/net/bnxt/bnxt_hwrm.h                 |   2 +
 drivers/net/bnxt/bnxt_irq.c                  |  47 +++---
 drivers/net/bnxt/bnxt_ring.c                 | 145 ++++++++++++++++---
 drivers/net/bnxt/bnxt_ring.h                 |   3 +
 drivers/net/bnxt/bnxt_rxr.c                  |   2 +-
 drivers/net/bnxt/bnxt_rxtx_vec_sse.c         |   2 +-
 11 files changed, 195 insertions(+), 49 deletions(-)

diff --git a/config/common_base b/config/common_base
index 8ef75c203..487a9b811 100644
--- a/config/common_base
+++ b/config/common_base
@@ -212,6 +212,7 @@ CONFIG_RTE_LIBRTE_BNX2X_DEBUG_PERIODIC=n
 # Compile burst-oriented Broadcom BNXT PMD driver
 #
 CONFIG_RTE_LIBRTE_BNXT_PMD=y
+CONFIG_RTE_LIBRTE_BNXT_SHARED_ASYNC_RING=n
 
 #
 # Compile burst-oriented Chelsio Terminator (CXGBE) PMD
diff --git a/config/defconfig_arm64-stingray-linuxapp-gcc b/config/defconfig_arm64-stingray-linuxapp-gcc
index 7b33aa7af..acfb1c207 100644
--- a/config/defconfig_arm64-stingray-linuxapp-gcc
+++ b/config/defconfig_arm64-stingray-linuxapp-gcc
@@ -12,5 +12,8 @@ CONFIG_RTE_ARCH_ARM_TUNE="cortex-a72"
 CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=n
 CONFIG_RTE_LIBRTE_VHOST_NUMA=n
 
+# Conserve cpr resources by using rx cpr for async events.
+CONFIG_RTE_LIBRTE_BNXT_SHARED_ASYNC_RING=y
+
 CONFIG_RTE_EAL_IGB_UIO=y
 CONFIG_RTE_KNI_KMOD=n
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 3ccf784e5..8bd8f536c 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -33,6 +33,14 @@
 #define BNXT_MAX_RX_RING_DESC   8192
 #define BNXT_DB_SIZE            0x80
 
+#ifdef RTE_LIBRTE_BNXT_SHARED_ASYNC_RING
+/* Async events are handled on rx queue 0 completion ring. */
+#define BNXT_NUM_ASYNC_CPR 0
+#else
+/* Async events are handled on a dedicated completion ring. */
+#define BNXT_NUM_ASYNC_CPR 1
+#endif
+
 /* Chimp Communication Channel */
 #define GRCPF_REG_CHIMP_CHANNEL_OFFSET          0x0
 #define GRCPF_REG_CHIMP_COMM_TRIGGER            0x100
@@ -387,7 +395,7 @@ struct bnxt {
        uint16_t                fw_tx_port_stats_ext_size;
 
        /* Default completion ring */
-       struct bnxt_cp_ring_info        *def_cp_ring;
+       struct bnxt_cp_ring_info        *async_cp_ring;
        uint32_t                max_ring_grps;
        struct bnxt_ring_grp_info       *grp_info;
 
diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index 429ebe555..fe7837df2 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -198,12 +198,17 @@ static void bnxt_free_mem(struct bnxt *bp)
        bnxt_free_stats(bp);
        bnxt_free_tx_rings(bp);
        bnxt_free_rx_rings(bp);
+       bnxt_free_async_cp_ring(bp);
 }
 
 static int bnxt_alloc_mem(struct bnxt *bp)
 {
        int rc;
 
+       rc = bnxt_alloc_async_ring_struct(bp);
+       if (rc)
+               goto alloc_mem_err;
+
        rc = bnxt_alloc_vnic_mem(bp);
        if (rc)
                goto alloc_mem_err;
@@ -216,6 +221,10 @@ static int bnxt_alloc_mem(struct bnxt *bp)
        if (rc)
                goto alloc_mem_err;
 
+       rc = bnxt_alloc_async_cp_ring(bp);
+       if (rc)
+               goto alloc_mem_err;
+
        return 0;
 
 alloc_mem_err:
@@ -617,8 +626,8 @@ static int bnxt_dev_configure_op(struct rte_eth_dev *eth_dev)
        /* Inherit new configurations */
        if (eth_dev->data->nb_rx_queues > bp->max_rx_rings ||
            eth_dev->data->nb_tx_queues > bp->max_tx_rings ||
-           eth_dev->data->nb_rx_queues + eth_dev->data->nb_tx_queues >
-           bp->max_cp_rings ||
+           eth_dev->data->nb_rx_queues + eth_dev->data->nb_tx_queues +
+           BNXT_NUM_ASYNC_CPR > bp->max_cp_rings ||
            eth_dev->data->nb_rx_queues + eth_dev->data->nb_tx_queues >
            bp->max_stat_ctx)
                goto resource_error;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index bd4250a3a..52b2119a5 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -740,9 +740,12 @@ int bnxt_hwrm_func_reserve_vf_resc(struct bnxt *bp, bool test)
        req.num_tx_rings = rte_cpu_to_le_16(bp->tx_nr_rings);
        req.num_rx_rings = rte_cpu_to_le_16(bp->rx_nr_rings *
                                            AGG_RING_MULTIPLIER);
-       req.num_stat_ctxs = rte_cpu_to_le_16(bp->rx_nr_rings + bp->tx_nr_rings);
+       req.num_stat_ctxs = rte_cpu_to_le_16(bp->rx_nr_rings +
+                                            bp->tx_nr_rings +
+                                            BNXT_NUM_ASYNC_CPR);
        req.num_cmpl_rings = rte_cpu_to_le_16(bp->rx_nr_rings +
-                                             bp->tx_nr_rings);
+                                             bp->tx_nr_rings +
+                                             BNXT_NUM_ASYNC_CPR);
        req.num_vnics = rte_cpu_to_le_16(bp->rx_nr_rings);
        if (bp->vf_resv_strategy ==
            HWRM_FUNC_RESOURCE_QCAPS_OUTPUT_VF_RESV_STRATEGY_MINIMAL_STATIC) {
@@ -2079,7 +2082,7 @@ int bnxt_free_all_hwrm_ring_grps(struct bnxt *bp)
        return rc;
 }
 
-static void bnxt_free_nq_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr)
+void bnxt_free_nq_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr)
 {
        struct bnxt_ring *cp_ring = cpr->cp_ring_struct;
 
@@ -2089,9 +2092,10 @@ static void bnxt_free_nq_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr)
        memset(cpr->cp_desc_ring, 0,
               cpr->cp_ring_struct->ring_size * sizeof(*cpr->cp_desc_ring));
        cpr->cp_raw_cons = 0;
+       cpr->valid = 0;
 }
 
-static void bnxt_free_cp_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr)
+void bnxt_free_cp_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr)
 {
        struct bnxt_ring *cp_ring = cpr->cp_ring_struct;
 
@@ -3225,7 +3229,7 @@ int bnxt_hwrm_func_cfg_def_cp(struct bnxt *bp)
        req.enables = rte_cpu_to_le_32(
                        HWRM_FUNC_CFG_INPUT_ENABLES_ASYNC_EVENT_CR);
        req.async_event_cr = rte_cpu_to_le_16(
-                       bp->def_cp_ring->cp_ring_struct->fw_ring_id);
+                       bp->async_cp_ring->cp_ring_struct->fw_ring_id);
        rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
        HWRM_CHECK_RESULT();
@@ -3245,7 +3249,7 @@ int bnxt_hwrm_vf_func_cfg_def_cp(struct bnxt *bp)
        req.enables = rte_cpu_to_le_32(
                        HWRM_FUNC_VF_CFG_INPUT_ENABLES_ASYNC_EVENT_CR);
        req.async_event_cr = rte_cpu_to_le_16(
-                       bp->def_cp_ring->cp_ring_struct->fw_ring_id);
+                       bp->async_cp_ring->cp_ring_struct->fw_ring_id);
        rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB);
 
        HWRM_CHECK_RESULT();
diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h
index 37aaa1a9e..c882fc2a1 100644
--- a/drivers/net/bnxt/bnxt_hwrm.h
+++ b/drivers/net/bnxt/bnxt_hwrm.h
@@ -119,6 +119,8 @@ int bnxt_free_all_hwrm_stat_ctxs(struct bnxt *bp);
 int bnxt_free_all_hwrm_rings(struct bnxt *bp);
 int bnxt_free_all_hwrm_ring_grps(struct bnxt *bp);
 int bnxt_alloc_all_hwrm_ring_grps(struct bnxt *bp);
+void bnxt_free_cp_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr);
+void bnxt_free_nq_ring(struct bnxt *bp, struct bnxt_cp_ring_info *cpr);
 int bnxt_set_hwrm_vnic_filters(struct bnxt *bp, struct bnxt_vnic_info *vnic);
 int bnxt_clear_hwrm_vnic_filters(struct bnxt *bp, struct bnxt_vnic_info *vnic);
 void bnxt_free_all_hwrm_resources(struct bnxt *bp);
diff --git a/drivers/net/bnxt/bnxt_irq.c b/drivers/net/bnxt/bnxt_irq.c
index 6c4dce401..9ff16ddd8 100644
--- a/drivers/net/bnxt/bnxt_irq.c
+++ b/drivers/net/bnxt/bnxt_irq.c
@@ -21,7 +21,7 @@ static void bnxt_int_handler(void *param)
 {
        struct rte_eth_dev *eth_dev = (struct rte_eth_dev *)param;
        struct bnxt *bp = eth_dev->data->dev_private;
-       struct bnxt_cp_ring_info *cpr = bp->def_cp_ring;
+       struct bnxt_cp_ring_info *cpr = bp->async_cp_ring;
        struct cmpl_base *cmp;
        uint32_t raw_cons;
        uint32_t cons;
@@ -42,19 +42,19 @@ static void bnxt_int_handler(void *param)
 
                bnxt_event_hwrm_resp_handler(bp, cmp);
                raw_cons = NEXT_RAW_CMP(raw_cons);
-       };
+       }
 
        cpr->cp_raw_cons = raw_cons;
-       B_CP_DB_REARM(cpr, cpr->cp_raw_cons);
+       if (BNXT_HAS_NQ(bp))
+               bnxt_db_nq_arm(cpr);
+       else
+               B_CP_DB_REARM(cpr, cpr->cp_raw_cons);
 }
 
 void bnxt_free_int(struct bnxt *bp)
 {
        struct bnxt_irq *irq;
 
-       if (bp->irq_tbl == NULL)
-               return;
-
        irq = bp->irq_tbl;
        if (irq) {
                if (irq->requested) {
@@ -70,19 +70,35 @@ void bnxt_free_int(struct bnxt *bp)
 
 void bnxt_disable_int(struct bnxt *bp)
 {
-       struct bnxt_cp_ring_info *cpr = bp->def_cp_ring;
+       struct bnxt_cp_ring_info *cpr = bp->async_cp_ring;
+
+       if (BNXT_NUM_ASYNC_CPR == 0)
+               return;
+
+       if (!cpr || !cpr->cp_db.doorbell)
+               return;
 
        /* Only the default completion ring */
-       if (cpr != NULL && cpr->cp_db.doorbell != NULL)
+       if (BNXT_HAS_NQ(bp))
+               bnxt_db_nq(cpr);
+       else
                B_CP_DB_DISARM(cpr);
 }
 
 void bnxt_enable_int(struct bnxt *bp)
 {
-       struct bnxt_cp_ring_info *cpr = bp->def_cp_ring;
+       struct bnxt_cp_ring_info *cpr = bp->async_cp_ring;
+
+       if (BNXT_NUM_ASYNC_CPR == 0)
+               return;
+
+       if (!cpr || !cpr->cp_db.doorbell)
+               return;
 
        /* Only the default completion ring */
-       if (cpr != NULL && cpr->cp_db.doorbell != NULL)
+       if (BNXT_HAS_NQ(bp))
+               bnxt_db_nq_arm(cpr);
+       else
                B_CP_DB_ARM(cpr);
 }
 
@@ -90,7 +106,7 @@ int bnxt_setup_int(struct bnxt *bp)
 {
        uint16_t total_vecs;
        const int len = sizeof(bp->irq_tbl[0].name);
-       int i, rc = 0;
+       int i;
 
        /* DPDK host only supports 1 MSI-X vector */
        total_vecs = 1;
@@ -104,14 +120,11 @@ int bnxt_setup_int(struct bnxt *bp)
                        bp->irq_tbl[i].handler = bnxt_int_handler;
                }
        } else {
-               rc = -ENOMEM;
-               goto setup_exit;
+               PMD_DRV_LOG(ERR, "bnxt_irq_tbl setup failed\n");
+               return -ENOMEM;
        }
-       return 0;
 
-setup_exit:
-       PMD_DRV_LOG(ERR, "bnxt_irq_tbl setup failed\n");
-       return rc;
+       return 0;
 }
 
 int bnxt_request_int(struct bnxt *bp)
diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index a9952e02c..05a9a200c 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -5,6 +5,7 @@
 
 #include <rte_bitmap.h>
 #include <rte_memzone.h>
+#include <rte_malloc.h>
 #include <unistd.h>
 
 #include "bnxt.h"
@@ -369,6 +370,7 @@ static int bnxt_alloc_cmpl_ring(struct bnxt *bp, int queue_index,
 {
        struct bnxt_ring *cp_ring = cpr->cp_ring_struct;
        uint32_t nq_ring_id = HWRM_NA_SIGNATURE;
+       int cp_ring_index = queue_index + BNXT_NUM_ASYNC_CPR;
        uint8_t ring_type;
        int rc = 0;
 
@@ -383,13 +385,13 @@ static int bnxt_alloc_cmpl_ring(struct bnxt *bp, int queue_index,
                }
        }
 
-       rc = bnxt_hwrm_ring_alloc(bp, cp_ring, ring_type, queue_index,
+       rc = bnxt_hwrm_ring_alloc(bp, cp_ring, ring_type, cp_ring_index,
                                  HWRM_NA_SIGNATURE, nq_ring_id);
        if (rc)
                return rc;
 
        cpr->cp_cons = 0;
-       bnxt_set_db(bp, &cpr->cp_db, ring_type, queue_index,
+       bnxt_set_db(bp, &cpr->cp_db, ring_type, cp_ring_index,
                    cp_ring->fw_ring_id);
        bnxt_db_cq(cpr);
 
@@ -400,6 +402,7 @@ static int bnxt_alloc_nq_ring(struct bnxt *bp, int queue_index,
                              struct bnxt_cp_ring_info *nqr)
 {
        struct bnxt_ring *nq_ring = nqr->cp_ring_struct;
+       int nq_ring_index = queue_index + BNXT_NUM_ASYNC_CPR;
        uint8_t ring_type;
        int rc = 0;
 
@@ -408,12 +411,12 @@ static int bnxt_alloc_nq_ring(struct bnxt *bp, int queue_index,
 
        ring_type = HWRM_RING_ALLOC_INPUT_RING_TYPE_NQ;
 
-       rc = bnxt_hwrm_ring_alloc(bp, nq_ring, ring_type, queue_index,
+       rc = bnxt_hwrm_ring_alloc(bp, nq_ring, ring_type, nq_ring_index,
                                  HWRM_NA_SIGNATURE, HWRM_NA_SIGNATURE);
        if (rc)
                return rc;
 
-       bnxt_set_db(bp, &nqr->cp_db, ring_type, queue_index,
+       bnxt_set_db(bp, &nqr->cp_db, ring_type, nq_ring_index,
                    nq_ring->fw_ring_id);
        bnxt_db_nq(nqr);
 
@@ -490,14 +493,16 @@ int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index)
        struct bnxt_ring *cp_ring = cpr->cp_ring_struct;
        struct bnxt_cp_ring_info *nqr = rxq->nq_ring;
        struct bnxt_rx_ring_info *rxr = rxq->rx_ring;
-       int rc = 0;
+       int rc;
 
        if (BNXT_HAS_NQ(bp)) {
-               if (bnxt_alloc_nq_ring(bp, queue_index, nqr))
+               rc = bnxt_alloc_nq_ring(bp, queue_index, nqr);
+               if (rc)
                        goto err_out;
        }
 
-       if (bnxt_alloc_cmpl_ring(bp, queue_index, cpr, nqr))
+       rc = bnxt_alloc_cmpl_ring(bp, queue_index, cpr, nqr);
+       if (rc)
                goto err_out;
 
        if (BNXT_HAS_RING_GRPS(bp)) {
@@ -505,22 +510,24 @@ int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index)
                bp->grp_info[queue_index].cp_fw_ring_id = cp_ring->fw_ring_id;
        }
 
-       if (!queue_index) {
+       if (!BNXT_NUM_ASYNC_CPR && !queue_index) {
                /*
-                * In order to save completion resources, use the first
-                * completion ring from PF or VF as the default completion ring
-                * for async event and HWRM forward response handling.
+                * If a dedicated async event completion ring is not enabled,
+                * use the first completion ring from PF or VF as the default
+                * completion ring for async event handling.
                 */
-               bp->def_cp_ring = cpr;
+               bp->async_cp_ring = cpr;
                rc = bnxt_hwrm_set_async_event_cr(bp);
                if (rc)
                        goto err_out;
        }
 
-       if (bnxt_alloc_rx_ring(bp, queue_index))
+       rc = bnxt_alloc_rx_ring(bp, queue_index);
+       if (rc)
                goto err_out;
 
-       if (bnxt_alloc_rx_agg_ring(bp, queue_index))
+       rc = bnxt_alloc_rx_agg_ring(bp, queue_index);
+       if (rc)
                goto err_out;
 
        rxq->rx_buf_use_size = BNXT_MAX_MTU + RTE_ETHER_HDR_LEN +
@@ -545,6 +552,9 @@ int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index)
                bp->eth_dev->data->rx_queue_state[queue_index]);
 
 err_out:
+       PMD_DRV_LOG(ERR,
+                   "Failed to allocate receive queue %d, rc %d.\n",
+                   queue_index, rc);
        return rc;
 }
 
@@ -583,15 +593,13 @@ int bnxt_alloc_hwrm_rings(struct bnxt *bp)
                }
 
                bnxt_hwrm_set_ring_coal(bp, &coal, cp_ring->fw_ring_id);
-
-               if (!i) {
+               if (!BNXT_NUM_ASYNC_CPR && !i) {
                        /*
-                        * In order to save completion resource, use the first
-                        * completion ring from PF or VF as the default
-                        * completion ring for async event & HWRM
-                        * forward response handling.
+                        * If a dedicated async event completion ring is not
+                        * enabled, use the first completion ring as the default
+                        * completion ring for async event handling.
                         */
-                       bp->def_cp_ring = cpr;
+                       bp->async_cp_ring = cpr;
                        rc = bnxt_hwrm_set_async_event_cr(bp);
                        if (rc)
                                goto err_out;
@@ -652,3 +660,98 @@ int bnxt_alloc_hwrm_rings(struct bnxt *bp)
 err_out:
        return rc;
 }
+
+/* Allocate dedicated async completion ring. */
+int bnxt_alloc_async_cp_ring(struct bnxt *bp)
+{
+       struct bnxt_cp_ring_info *cpr = bp->async_cp_ring;
+       struct bnxt_ring *cp_ring = cpr->cp_ring_struct;
+       uint8_t ring_type;
+       int rc;
+
+       if (BNXT_NUM_ASYNC_CPR == 0)
+               return 0;
+
+       if (BNXT_HAS_NQ(bp))
+               ring_type = HWRM_RING_ALLOC_INPUT_RING_TYPE_NQ;
+       else
+               ring_type = HWRM_RING_ALLOC_INPUT_RING_TYPE_L2_CMPL;
+
+       rc = bnxt_hwrm_ring_alloc(bp, cp_ring, ring_type, 0,
+                                 HWRM_NA_SIGNATURE, HWRM_NA_SIGNATURE);
+
+       if (rc)
+               return rc;
+
+       cpr->cp_cons = 0;
+       cpr->valid = 0;
+       bnxt_set_db(bp, &cpr->cp_db, ring_type, 0,
+                   cp_ring->fw_ring_id);
+
+       if (BNXT_HAS_NQ(bp))
+               bnxt_db_nq(cpr);
+       else
+               bnxt_db_cq(cpr);
+
+       return bnxt_hwrm_set_async_event_cr(bp);
+}
+
+/* Free dedicated async completion ring. */
+void bnxt_free_async_cp_ring(struct bnxt *bp)
+{
+       struct bnxt_cp_ring_info *cpr = bp->async_cp_ring;
+
+       if (BNXT_NUM_ASYNC_CPR == 0 || cpr == NULL)
+               return;
+
+       if (BNXT_HAS_NQ(bp))
+               bnxt_free_nq_ring(bp, cpr);
+       else
+               bnxt_free_cp_ring(bp, cpr);
+
+       bnxt_free_ring(cpr->cp_ring_struct);
+       rte_free(cpr->cp_ring_struct);
+       cpr->cp_ring_struct = NULL;
+       rte_free(cpr);
+       bp->async_cp_ring = NULL;
+}
+
+int bnxt_alloc_async_ring_struct(struct bnxt *bp)
+{
+       struct bnxt_cp_ring_info *cpr = NULL;
+       struct bnxt_ring *ring = NULL;
+       unsigned int socket_id;
+
+       if (BNXT_NUM_ASYNC_CPR == 0)
+               return 0;
+
+       socket_id = rte_lcore_to_socket_id(rte_get_master_lcore());
+
+       cpr = rte_zmalloc_socket("cpr",
+                                sizeof(struct bnxt_cp_ring_info),
+                                RTE_CACHE_LINE_SIZE, socket_id);
+       if (cpr == NULL)
+               return -ENOMEM;
+
+       ring = rte_zmalloc_socket("bnxt_cp_ring_struct",
+                                 sizeof(struct bnxt_ring),
+                                 RTE_CACHE_LINE_SIZE, socket_id);
+       if (ring == NULL) {
+               rte_free(cpr);
+               return -ENOMEM;
+       }
+
+       ring->bd = (void *)cpr->cp_desc_ring;
+       ring->bd_dma = cpr->cp_desc_mapping;
+       ring->ring_size = rte_align32pow2(DEFAULT_CP_RING_SIZE);
+       ring->ring_mask = ring->ring_size - 1;
+       ring->vmem_size = 0;
+       ring->vmem = NULL;
+
+       bp->async_cp_ring = cpr;
+       cpr->cp_ring_struct = ring;
+
+       return bnxt_alloc_rings(bp, 0, NULL, NULL,
+                               bp->async_cp_ring, NULL,
+                               "def_cp");
+}
diff --git a/drivers/net/bnxt/bnxt_ring.h b/drivers/net/bnxt/bnxt_ring.h
index e5cef3a1d..04c7b04b8 100644
--- a/drivers/net/bnxt/bnxt_ring.h
+++ b/drivers/net/bnxt/bnxt_ring.h
@@ -75,6 +75,9 @@ int bnxt_alloc_rings(struct bnxt *bp, uint16_t qidx,
                     const char *suffix);
 int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index);
 int bnxt_alloc_hwrm_rings(struct bnxt *bp);
+int bnxt_alloc_async_cp_ring(struct bnxt *bp);
+void bnxt_free_async_cp_ring(struct bnxt *bp);
+int bnxt_alloc_async_ring_struct(struct bnxt *bp);
 
 static inline void bnxt_db_write(struct bnxt_db_info *db, uint32_t idx)
 {
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 54a2cf5fd..1e068f817 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -564,7 +564,7 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
                                nb_rx_pkts++;
                        if (rc == -EBUSY)       /* partial completion */
                                break;
-               } else {
+               } else if (!BNXT_NUM_ASYNC_CPR) {
                        evt =
                        bnxt_event_hwrm_resp_handler(rxq->bp,
                                                     (struct cmpl_base *)rxcmp);
diff --git a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
index c358506f8..3ef016073 100644
--- a/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
+++ b/drivers/net/bnxt/bnxt_rxtx_vec_sse.c
@@ -257,7 +257,7 @@ bnxt_recv_pkts_vec(void *rx_queue, struct rte_mbuf **rx_pkts,
                        mbuf->packet_type = bnxt_parse_pkt_type(rxcmp, rxcmp1);
 
                        rx_pkts[nb_rx_pkts++] = mbuf;
-               } else {
+               } else if (!BNXT_NUM_ASYNC_CPR) {
                        evt =
                        bnxt_event_hwrm_resp_handler(rxq->bp,
                                                     (struct cmpl_base *)rxcmp);