From patchwork Tue Oct 12 21:14:34 2021
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 101273
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Ajit Khaparde
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Lance Richardson, Somnath Kotur
Date: Tue, 12 Oct 2021 14:14:34 -0700
Message-Id: <20211012211436.70846-2-ajit.khaparde@broadcom.com>
In-Reply-To: <20211012211436.70846-1-ajit.khaparde@broadcom.com>
References: <20211012211436.70846-1-ajit.khaparde@broadcom.com>
Subject: [dpdk-dev] [PATCH v3 1/3] net/bnxt: create aggregation rings when needed

Aggregation rings are needed when the PMD has to support jumbo frames
or LRO. Currently we create the aggregation rings regardless of whether
jumbo frames or LRO have been enabled. This causes unnecessary mbuf
allocation, requiring a larger mbuf pool that is not used at all. This
patch modifies the code to create the aggregation rings only when
needed.
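To make the trigger concrete: an aggregation ring is only required when
a received frame may span more than one mbuf. A minimal sketch of that
check in plain C (illustrative only; the helper name and parameters are
hypothetical, the in-tree equivalent is the bnxt_need_agg_ring() helper
added below):

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * Hypothetical sketch: an Rx aggregation ring is needed only when a
     * frame can span multiple mbufs, i.e. scatter offload is enabled,
     * LRO is enabled, or the maximum frame size exceeds a single mbuf
     * data area.
     */
    static bool need_agg_ring(bool scatter_offload, bool lro_offload,
                              uint32_t max_pkt_len, uint32_t mbuf_data_size)
    {
        return scatter_offload || lro_offload ||
               max_pkt_len > mbuf_data_size;
    }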
Signed-off-by: Ajit Khaparde
Reviewed-by: Lance Richardson
Reviewed-by: Somnath Kotur
---
 drivers/net/bnxt/bnxt_hwrm.c |   9 +++
 drivers/net/bnxt/bnxt_ring.c | 148 ++++++++++++++++++++++-------------
 drivers/net/bnxt/bnxt_rxq.c  |  84 ++++++++++++--------
 drivers/net/bnxt/bnxt_rxq.h  |   2 +
 drivers/net/bnxt/bnxt_rxr.c  | 111 +++++++++++++++-----------
 5 files changed, 222 insertions(+), 132 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 503add42fd..181e607d7b 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -2741,6 +2741,14 @@ void bnxt_free_hwrm_rx_ring(struct bnxt *bp, int queue_index)
 	if (BNXT_HAS_RING_GRPS(bp))
 		bp->grp_info[queue_index].rx_fw_ring_id = INVALID_HW_RING_ID;
 
+	/* Check agg ring struct explicitly.
+	 * bnxt_need_agg_ring() returns the current state of offload flags,
+	 * but we may have to deal with agg ring struct before the offload
+	 * flags are updated.
+	 */
+	if (!bnxt_need_agg_ring(bp->eth_dev) || rxr->ag_ring_struct == NULL)
+		goto no_agg;
+
 	ring = rxr->ag_ring_struct;
 	bnxt_hwrm_ring_free(bp, ring,
 			    BNXT_CHIP_P5(bp) ?
@@ -2750,6 +2758,7 @@ void bnxt_free_hwrm_rx_ring(struct bnxt *bp, int queue_index)
 	if (BNXT_HAS_RING_GRPS(bp))
 		bp->grp_info[queue_index].ag_fw_ring_id = INVALID_HW_RING_ID;
 
+no_agg:
 	bnxt_hwrm_stat_ctx_free(bp, cpr);
 
 	bnxt_free_cp_ring(bp, cpr);
diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index aaad08e5e5..08cefa1baa 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -104,13 +104,19 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
 	struct bnxt_ring *cp_ring = cp_ring_info->cp_ring_struct;
 	struct bnxt_rx_ring_info *rx_ring_info = rxq ? rxq->rx_ring : NULL;
 	struct bnxt_tx_ring_info *tx_ring_info = txq ? txq->tx_ring : NULL;
-	struct bnxt_ring *tx_ring;
-	struct bnxt_ring *rx_ring;
-	struct rte_pci_device *pdev = bp->pdev;
 	uint64_t rx_offloads = bp->eth_dev->data->dev_conf.rxmode.offloads;
+	int ag_ring_start, ag_bitmap_start, tpa_info_start;
+	int ag_vmem_start, cp_ring_start, nq_ring_start;
+	int total_alloc_len, rx_ring_start, rx_ring_len;
+	struct rte_pci_device *pdev = bp->pdev;
+	struct bnxt_ring *tx_ring, *rx_ring;
 	const struct rte_memzone *mz = NULL;
 	char mz_name[RTE_MEMZONE_NAMESIZE];
 	rte_iova_t mz_phys_addr;
+	int ag_bitmap_len = 0;
+	int tpa_info_len = 0;
+	int ag_vmem_len = 0;
+	int ag_ring_len = 0;
 
 	int stats_len = (tx_ring_info || rx_ring_info) ?
 	    RTE_CACHE_LINE_ROUNDUP(sizeof(struct hwrm_stat_ctx_query_output) -
@@ -138,14 +144,12 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
 	    RTE_CACHE_LINE_ROUNDUP(rx_ring_info->
 				   rx_ring_struct->vmem_size) : 0;
 	rx_vmem_len = RTE_ALIGN(rx_vmem_len, 128);
-	int ag_vmem_start = 0;
-	int ag_vmem_len = 0;
-	int cp_ring_start = 0;
-	int nq_ring_start = 0;
 
 	ag_vmem_start = rx_vmem_start + rx_vmem_len;
-	ag_vmem_len = rx_ring_info ? RTE_CACHE_LINE_ROUNDUP(
-				rx_ring_info->ag_ring_struct->vmem_size) : 0;
+	if (bnxt_need_agg_ring(bp->eth_dev))
+		ag_vmem_len = rx_ring_info && rx_ring_info->ag_ring_struct ?
+			RTE_CACHE_LINE_ROUNDUP(rx_ring_info->ag_ring_struct->vmem_size) : 0;
+
 	cp_ring_start = ag_vmem_start + ag_vmem_len;
 	cp_ring_start = RTE_ALIGN(cp_ring_start, 4096);
 
@@ -164,36 +168,36 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
 			       sizeof(struct tx_bd_long)) : 0;
 	tx_ring_len = RTE_ALIGN(tx_ring_len, 4096);
 
-	int rx_ring_start = tx_ring_start + tx_ring_len;
+	rx_ring_start = tx_ring_start + tx_ring_len;
 	rx_ring_start = RTE_ALIGN(rx_ring_start, 4096);
-	int rx_ring_len = rx_ring_info ?
+	rx_ring_len = rx_ring_info ?
 		RTE_CACHE_LINE_ROUNDUP(rx_ring_info->rx_ring_struct->ring_size *
 		sizeof(struct rx_prod_pkt_bd)) : 0;
 	rx_ring_len = RTE_ALIGN(rx_ring_len, 4096);
 
-	int ag_ring_start = rx_ring_start + rx_ring_len;
+	ag_ring_start = rx_ring_start + rx_ring_len;
 	ag_ring_start = RTE_ALIGN(ag_ring_start, 4096);
-	int ag_ring_len = rx_ring_len * AGG_RING_SIZE_FACTOR;
-	ag_ring_len = RTE_ALIGN(ag_ring_len, 4096);
 
-	int ag_bitmap_start = ag_ring_start + ag_ring_len;
-	int ag_bitmap_len = rx_ring_info ?
+	if (bnxt_need_agg_ring(bp->eth_dev)) {
+		ag_ring_len = rx_ring_len * AGG_RING_SIZE_FACTOR;
+		ag_ring_len = RTE_ALIGN(ag_ring_len, 4096);
+
+		ag_bitmap_len = rx_ring_info ?
 		RTE_CACHE_LINE_ROUNDUP(rte_bitmap_get_memory_footprint(
 			rx_ring_info->rx_ring_struct->ring_size *
 			AGG_RING_SIZE_FACTOR)) : 0;
 
-	int tpa_info_start = ag_bitmap_start + ag_bitmap_len;
-	int tpa_info_len = 0;
-
-	if (rx_ring_info && (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
-		int tpa_max = BNXT_TPA_MAX_AGGS(bp);
+		if (rx_ring_info && (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)) {
+			int tpa_max = BNXT_TPA_MAX_AGGS(bp);
 
-		tpa_info_len = tpa_max * sizeof(struct bnxt_tpa_info);
-		tpa_info_len = RTE_CACHE_LINE_ROUNDUP(tpa_info_len);
+			tpa_info_len = tpa_max * sizeof(struct bnxt_tpa_info);
+			tpa_info_len = RTE_CACHE_LINE_ROUNDUP(tpa_info_len);
+		}
 	}
 
-	int total_alloc_len = tpa_info_start;
-	total_alloc_len += tpa_info_len;
+	ag_bitmap_start = ag_ring_start + ag_ring_len;
+	tpa_info_start = ag_bitmap_start + ag_bitmap_len;
+	total_alloc_len = tpa_info_start + tpa_info_len;
 
 	snprintf(mz_name, RTE_MEMZONE_NAMESIZE,
 		 "bnxt_" PCI_PRI_FMT "-%04x_%s", pdev->addr.domain,
@@ -254,34 +258,36 @@ int bnxt_alloc_rings(struct bnxt *bp, unsigned int socket_id, uint16_t qidx,
 			    (struct rte_mbuf **)rx_ring->vmem;
 		}
 
-		rx_ring = rx_ring_info->ag_ring_struct;
-
-		rx_ring->bd = ((char *)mz->addr + ag_ring_start);
-		rx_ring_info->ag_desc_ring =
-		    (struct rx_prod_pkt_bd *)rx_ring->bd;
-		rx_ring->bd_dma = mz->iova + ag_ring_start;
-		rx_ring_info->ag_desc_mapping = rx_ring->bd_dma;
-		rx_ring->mem_zone = (const void *)mz;
-
-		if (!rx_ring->bd)
-			return -ENOMEM;
-		if (rx_ring->vmem_size) {
-			rx_ring->vmem =
-			    (void **)((char *)mz->addr + ag_vmem_start);
-			rx_ring_info->ag_buf_ring =
-			    (struct rte_mbuf **)rx_ring->vmem;
+		if (bnxt_need_agg_ring(bp->eth_dev)) {
+			rx_ring = rx_ring_info->ag_ring_struct;
+
+			rx_ring->bd = ((char *)mz->addr + ag_ring_start);
+			rx_ring_info->ag_desc_ring =
+			    (struct rx_prod_pkt_bd *)rx_ring->bd;
+			rx_ring->bd_dma = mz->iova + ag_ring_start;
+			rx_ring_info->ag_desc_mapping = rx_ring->bd_dma;
+			rx_ring->mem_zone = (const void *)mz;
+
+			if (!rx_ring->bd)
+				return -ENOMEM;
+			if (rx_ring->vmem_size) {
+				rx_ring->vmem =
+				    (void **)((char *)mz->addr + ag_vmem_start);
+				rx_ring_info->ag_buf_ring =
+				    (struct rte_mbuf **)rx_ring->vmem;
+			}
+
+			rx_ring_info->ag_bitmap =
+			    rte_bitmap_init(rx_ring_info->rx_ring_struct->ring_size *
+					    AGG_RING_SIZE_FACTOR, (uint8_t *)mz->addr +
+					    ag_bitmap_start, ag_bitmap_len);
+
+			/* TPA info */
+			if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
+				rx_ring_info->tpa_info =
+					((struct bnxt_tpa_info *)
+					 ((char *)mz->addr + tpa_info_start));
 		}
-
-		rx_ring_info->ag_bitmap =
-		    rte_bitmap_init(rx_ring_info->rx_ring_struct->ring_size *
-				    AGG_RING_SIZE_FACTOR, (uint8_t *)mz->addr +
-				    ag_bitmap_start, ag_bitmap_len);
-
-		/* TPA info */
-		if (rx_offloads & DEV_RX_OFFLOAD_TCP_LRO)
-			rx_ring_info->tpa_info =
-				((struct bnxt_tpa_info *)((char *)mz->addr +
-							  tpa_info_start));
 	}
 
 	cp_ring->bd = ((char *)mz->addr + cp_ring_start);
@@ -550,6 +556,9 @@ static int bnxt_alloc_rx_agg_ring(struct bnxt *bp, int queue_index)
 	uint8_t ring_type;
 	int rc = 0;
 
+	if (!bnxt_need_agg_ring(bp->eth_dev))
+		return 0;
+
 	ring->fw_rx_ring_id = rxr->rx_ring_struct->fw_ring_id;
 
 	if (BNXT_CHIP_P5(bp)) {
@@ -590,7 +599,7 @@ int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index)
 	 */
 	cp_ring->ring_size = rxr->rx_ring_struct->ring_size * 2;
 
-	if (bp->eth_dev->data->scattered_rx)
+	if (bnxt_need_agg_ring(bp->eth_dev))
 		cp_ring->ring_size *= AGG_RING_SIZE_FACTOR;
 
 	cp_ring->ring_mask = cp_ring->ring_size - 1;
@@ -645,7 +654,8 @@ int bnxt_alloc_hwrm_rx_ring(struct bnxt *bp, int queue_index)
 			goto err_out;
 		}
 		bnxt_db_write(&rxr->rx_db, rxr->rx_raw_prod);
-		bnxt_db_write(&rxr->ag_db, rxr->ag_raw_prod);
+		if (bnxt_need_agg_ring(bp->eth_dev))
+			bnxt_db_write(&rxr->ag_db, rxr->ag_raw_prod);
 	}
 	rxq->index = queue_index;
 #if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64)
@@ -683,8 +693,11 @@ static void bnxt_init_all_rings(struct bnxt *bp)
 		ring = rxr->rx_ring_struct;
 		ring->fw_ring_id = INVALID_HW_RING_ID;
 		/* Rx-AGG */
-		ring = rxr->ag_ring_struct;
-		ring->fw_ring_id = INVALID_HW_RING_ID;
+		if (bnxt_need_agg_ring(bp->eth_dev)) {
+			ring = rxr->ag_ring_struct;
+			if (ring != NULL)
+				ring->fw_ring_id = INVALID_HW_RING_ID;
+		}
 	}
 	for (i = 0; i < bp->tx_cp_nr_rings; i++) {
 		txq = bp->tx_queues[i];
@@ -712,6 +725,29 @@ int bnxt_alloc_hwrm_rings(struct bnxt *bp)
 	bnxt_init_all_rings(bp);
 
 	for (i = 0; i < bp->rx_cp_nr_rings; i++) {
+		unsigned int soc_id = bp->eth_dev->device->numa_node;
+		struct bnxt_rx_queue *rxq = bp->rx_queues[i];
+		struct bnxt_rx_ring_info *rxr = rxq->rx_ring;
+		struct bnxt_ring *ring;
+
+		if (bnxt_need_agg_ring(bp->eth_dev)) {
+			ring = rxr->ag_ring_struct;
+			if (ring == NULL) {
+				bnxt_free_rxq_mem(rxq);
+
+				rc = bnxt_init_rx_ring_struct(rxq, soc_id);
+				if (rc)
+					goto err_out;
+
+				rc = bnxt_alloc_rings(bp, soc_id,
+						      i, NULL, rxq,
+						      rxq->cp_ring, NULL,
+						      "rxr");
+				if (rc)
+					goto err_out;
+			}
+		}
+
 		rc = bnxt_alloc_hwrm_rx_ring(bp, i);
 		if (rc)
 			goto err_out;
diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c
index 2eb7a3cb29..3cc7bfc3bd 100644
--- a/drivers/net/bnxt/bnxt_rxq.c
+++ b/drivers/net/bnxt/bnxt_rxq.c
@@ -20,6 +20,17 @@
  * RX Queues
  */
 
+/* Determine whether the current configuration needs aggregation ring in HW. */
+int bnxt_need_agg_ring(struct rte_eth_dev *eth_dev)
+{
+	/* scattered_rx will be true if OFFLOAD_SCATTER is enabled,
+	 * if LRO is enabled, or if the max packet len is greater than the
+	 * mbuf data size. So AGG ring will be needed whenever scattered_rx
+	 * is set.
+	 */
+	return eth_dev->data->scattered_rx ? 1 : 0;
+}
+
 void bnxt_free_rxq_stats(struct bnxt_rx_queue *rxq)
 {
 	if (rxq && rxq->cp_ring && rxq->cp_ring->hw_stats)
@@ -203,6 +214,9 @@ void bnxt_rx_queue_release_mbufs(struct bnxt_rx_queue *rxq)
 		}
 	}
 	/* Free up mbufs in Agg ring */
+	if (!bnxt_need_agg_ring(rxq->bp->eth_dev))
+		return;
+
 	sw_ring = rxq->rx_ring->ag_buf_ring;
 	if (sw_ring) {
 		for (i = 0;
@@ -240,41 +254,49 @@ void bnxt_free_rx_mbufs(struct bnxt *bp)
 	}
 }
 
-void bnxt_rx_queue_release_op(struct rte_eth_dev *dev, uint16_t queue_idx)
+void bnxt_free_rxq_mem(struct bnxt_rx_queue *rxq)
 {
-	struct bnxt_rx_queue *rxq = dev->data->rx_queues[queue_idx];
-
-	if (rxq) {
-		if (is_bnxt_in_error(rxq->bp))
-			return;
-
-		bnxt_free_hwrm_rx_ring(rxq->bp, rxq->queue_id);
-		bnxt_rx_queue_release_mbufs(rxq);
-
-		/* Free RX ring hardware descriptors */
-		if (rxq->rx_ring) {
-			bnxt_free_ring(rxq->rx_ring->rx_ring_struct);
-			rte_free(rxq->rx_ring->rx_ring_struct);
-			/* Free RX Agg ring hardware descriptors */
-			bnxt_free_ring(rxq->rx_ring->ag_ring_struct);
-			rte_free(rxq->rx_ring->ag_ring_struct);
+	bnxt_rx_queue_release_mbufs(rxq);
+
+	/* Free RX, AGG ring hardware descriptors */
+	if (rxq->rx_ring) {
+		bnxt_free_ring(rxq->rx_ring->rx_ring_struct);
+		rte_free(rxq->rx_ring->rx_ring_struct);
+		rxq->rx_ring->rx_ring_struct = NULL;
+		/* Free RX Agg ring hardware descriptors */
+		bnxt_free_ring(rxq->rx_ring->ag_ring_struct);
+		rte_free(rxq->rx_ring->ag_ring_struct);
+		rxq->rx_ring->ag_ring_struct = NULL;
+
+		rte_free(rxq->rx_ring);
+		rxq->rx_ring = NULL;
+	}
+	/* Free RX completion ring hardware descriptors */
+	if (rxq->cp_ring) {
+		bnxt_free_ring(rxq->cp_ring->cp_ring_struct);
+		rte_free(rxq->cp_ring->cp_ring_struct);
+		rxq->cp_ring->cp_ring_struct = NULL;
+		rte_free(rxq->cp_ring);
+		rxq->cp_ring = NULL;
+	}
 
-			rte_free(rxq->rx_ring);
-		}
-		/* Free RX completion ring hardware descriptors */
-		if (rxq->cp_ring) {
-			bnxt_free_ring(rxq->cp_ring->cp_ring_struct);
-			rte_free(rxq->cp_ring->cp_ring_struct);
-			rte_free(rxq->cp_ring);
-		}
+	bnxt_free_rxq_stats(rxq);
+	rte_memzone_free(rxq->mz);
+	rxq->mz = NULL;
+}
 
-		bnxt_free_rxq_stats(rxq);
-		rte_memzone_free(rxq->mz);
-		rxq->mz = NULL;
+void bnxt_rx_queue_release_op(struct rte_eth_dev *dev, uint16_t queue_idx)
+{
+	struct bnxt_rx_queue *rxq = dev->data->rx_queues[queue_idx];
 
-		rte_free(rxq);
-		dev->data->rx_queues[queue_idx] = NULL;
-	}
+	if (rxq != NULL) {
+		if (is_bnxt_in_error(rxq->bp))
+			return;
+
+		bnxt_free_hwrm_rx_ring(rxq->bp, rxq->queue_id);
+		bnxt_free_rxq_mem(rxq);
+		rte_free(rxq);
+	}
 }
 
 int bnxt_rx_queue_setup_op(struct rte_eth_dev *eth_dev,
diff --git a/drivers/net/bnxt/bnxt_rxq.h b/drivers/net/bnxt/bnxt_rxq.h
index 9bb9352feb..0331c23810 100644
--- a/drivers/net/bnxt/bnxt_rxq.h
+++ b/drivers/net/bnxt/bnxt_rxq.h
@@ -63,4 +63,6 @@ int bnxt_rx_queue_start(struct rte_eth_dev *dev,
 int bnxt_rx_queue_stop(struct rte_eth_dev *dev,
 		       uint16_t rx_queue_id);
 void bnxt_rx_queue_release_mbufs(struct bnxt_rx_queue *rxq);
+int bnxt_need_agg_ring(struct rte_eth_dev *eth_dev);
+void bnxt_free_rxq_mem(struct bnxt_rx_queue *rxq);
 #endif
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index 4c1ee4294e..aeacc60a01 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -1223,57 +1223,75 @@ int bnxt_init_rx_ring_struct(struct bnxt_rx_queue *rxq, unsigned int socket_id)
 
 	rxq->rx_buf_size = BNXT_MAX_PKT_LEN + sizeof(struct rte_mbuf);
 
-	rxr = rte_zmalloc_socket("bnxt_rx_ring",
-				 sizeof(struct bnxt_rx_ring_info),
-				 RTE_CACHE_LINE_SIZE, socket_id);
-	if (rxr == NULL)
-		return -ENOMEM;
-	rxq->rx_ring = rxr;
-
-	ring = rte_zmalloc_socket("bnxt_rx_ring_struct",
-				  sizeof(struct bnxt_ring),
-				  RTE_CACHE_LINE_SIZE, socket_id);
-	if (ring == NULL)
-		return -ENOMEM;
-	rxr->rx_ring_struct = ring;
-	ring->ring_size = rte_align32pow2(rxq->nb_rx_desc);
-	ring->ring_mask = ring->ring_size - 1;
-	ring->bd = (void *)rxr->rx_desc_ring;
-	ring->bd_dma = rxr->rx_desc_mapping;
-
-	/* Allocate extra rx ring entries for vector rx. */
-	ring->vmem_size = sizeof(struct rte_mbuf *) *
-			  (ring->ring_size + BNXT_RX_EXTRA_MBUF_ENTRIES);
+	if (rxq->rx_ring != NULL) {
+		rxr = rxq->rx_ring;
+	} else {
 
-	ring->vmem = (void **)&rxr->rx_buf_ring;
-	ring->fw_ring_id = INVALID_HW_RING_ID;
+		rxr = rte_zmalloc_socket("bnxt_rx_ring",
+					 sizeof(struct bnxt_rx_ring_info),
+					 RTE_CACHE_LINE_SIZE, socket_id);
+		if (rxr == NULL)
+			return -ENOMEM;
+		rxq->rx_ring = rxr;
+	}
 
-	cpr = rte_zmalloc_socket("bnxt_rx_ring",
-				 sizeof(struct bnxt_cp_ring_info),
-				 RTE_CACHE_LINE_SIZE, socket_id);
-	if (cpr == NULL)
-		return -ENOMEM;
-	rxq->cp_ring = cpr;
+	if (rxr->rx_ring_struct == NULL) {
+		ring = rte_zmalloc_socket("bnxt_rx_ring_struct",
+					  sizeof(struct bnxt_ring),
+					  RTE_CACHE_LINE_SIZE, socket_id);
+		if (ring == NULL)
+			return -ENOMEM;
+		rxr->rx_ring_struct = ring;
+		ring->ring_size = rte_align32pow2(rxq->nb_rx_desc);
+		ring->ring_mask = ring->ring_size - 1;
+		ring->bd = (void *)rxr->rx_desc_ring;
+		ring->bd_dma = rxr->rx_desc_mapping;
+
+		/* Allocate extra rx ring entries for vector rx. */
+		ring->vmem_size = sizeof(struct rte_mbuf *) *
+				  (ring->ring_size + BNXT_RX_EXTRA_MBUF_ENTRIES);
+
+		ring->vmem = (void **)&rxr->rx_buf_ring;
+		ring->fw_ring_id = INVALID_HW_RING_ID;
+	}
 
-	ring = rte_zmalloc_socket("bnxt_rx_ring_struct",
-				  sizeof(struct bnxt_ring),
-				  RTE_CACHE_LINE_SIZE, socket_id);
-	if (ring == NULL)
-		return -ENOMEM;
-	cpr->cp_ring_struct = ring;
+	if (rxq->cp_ring != NULL) {
+		cpr = rxq->cp_ring;
+	} else {
+		cpr = rte_zmalloc_socket("bnxt_rx_ring",
+					 sizeof(struct bnxt_cp_ring_info),
+					 RTE_CACHE_LINE_SIZE, socket_id);
+		if (cpr == NULL)
+			return -ENOMEM;
+		rxq->cp_ring = cpr;
+	}
 
-	/* Allocate two completion slots per entry in desc ring. */
-	ring->ring_size = rxr->rx_ring_struct->ring_size * 2;
-	ring->ring_size *= AGG_RING_SIZE_FACTOR;
+	if (cpr->cp_ring_struct == NULL) {
+		ring = rte_zmalloc_socket("bnxt_rx_ring_struct",
+					  sizeof(struct bnxt_ring),
+					  RTE_CACHE_LINE_SIZE, socket_id);
+		if (ring == NULL)
+			return -ENOMEM;
+		cpr->cp_ring_struct = ring;
+
+		/* Allocate two completion slots per entry in desc ring. */
+		ring->ring_size = rxr->rx_ring_struct->ring_size * 2;
+		if (bnxt_need_agg_ring(rxq->bp->eth_dev))
+			ring->ring_size *= AGG_RING_SIZE_FACTOR;
+
+		ring->ring_size = rte_align32pow2(ring->ring_size);
+		ring->ring_mask = ring->ring_size - 1;
+		ring->bd = (void *)cpr->cp_desc_ring;
+		ring->bd_dma = cpr->cp_desc_mapping;
+		ring->vmem_size = 0;
+		ring->vmem = NULL;
+		ring->fw_ring_id = INVALID_HW_RING_ID;
+	}
 
-	ring->ring_size = rte_align32pow2(ring->ring_size);
-	ring->ring_mask = ring->ring_size - 1;
-	ring->bd = (void *)cpr->cp_desc_ring;
-	ring->bd_dma = cpr->cp_desc_mapping;
-	ring->vmem_size = 0;
-	ring->vmem = NULL;
-	ring->fw_ring_id = INVALID_HW_RING_ID;
+	if (!bnxt_need_agg_ring(rxq->bp->eth_dev))
+		return 0;
 
+	rxr = rxq->rx_ring;
 	/* Allocate Aggregator rings */
 	ring = rte_zmalloc_socket("bnxt_rx_ring_struct",
 				  sizeof(struct bnxt_ring),
@@ -1351,6 +1369,9 @@ int bnxt_init_one_rx_ring(struct bnxt_rx_queue *rxq)
 		rxr->rx_buf_ring[i] = &rxq->fake_mbuf;
 	}
 
+	if (!bnxt_need_agg_ring(rxq->bp->eth_dev))
+		return 0;
+
 	ring = rxr->ag_ring_struct;
 	type = RX_PROD_AGG_BD_TYPE_RX_PROD_AGG;
 	bnxt_init_rxbds(ring, type, size);
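For completeness, whether the PMD now creates aggregation rings follows
directly from the application's ethdev configuration. A minimal sketch
of a setup that avoids them (illustrative only; queue counts are
assumptions, error handling elided):

    #include <rte_ethdev.h>

    /*
     * Sketch: with neither DEV_RX_OFFLOAD_SCATTER nor
     * DEV_RX_OFFLOAD_TCP_LRO requested, and the default max Rx frame
     * fitting a single mbuf, ethdev leaves scattered_rx clear, so the
     * bnxt PMD can now skip aggregation ring (and extra mbuf) allocation.
     */
    static int configure_port_without_agg_rings(uint16_t port_id)
    {
        struct rte_eth_conf port_conf = {0};

        return rte_eth_dev_configure(port_id, 1 /* Rx queues */,
                                     1 /* Tx queues */, &port_conf);
    }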
From patchwork Tue Oct 12 21:14:35 2021
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 101274
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Ajit Khaparde
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, stable@dpdk.org, Lance Richardson
Date: Tue, 12 Oct 2021 14:14:35 -0700
Message-Id: <20211012211436.70846-3-ajit.khaparde@broadcom.com>
In-Reply-To: <20211012211436.70846-1-ajit.khaparde@broadcom.com>
References: <20211012211436.70846-1-ajit.khaparde@broadcom.com>
Subject: [dpdk-dev] [PATCH v3 2/3] net/bnxt: fix Rx queue state on start

Fix the Rx queue state on device start. The state of the Rx queues
could be incorrect in some cases because, instead of updating the state
for all Rx queues, we were updating it only for the queues belonging to
a VNIC.

Fixes: 0105ea1296c9 ("net/bnxt: support runtime queue setup")
Cc: stable@dpdk.org

Signed-off-by: Ajit Khaparde
Reviewed-by: Lance Richardson
---
 drivers/net/bnxt/bnxt_ethdev.c | 16 ++++++++++------
 1 file changed, 10 insertions(+), 6 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c
index aa7e7fdc85..a98f93ab29 100644
--- a/drivers/net/bnxt/bnxt_ethdev.c
+++ b/drivers/net/bnxt/bnxt_ethdev.c
@@ -482,12 +482,6 @@ static int bnxt_setup_one_vnic(struct bnxt *bp, uint16_t vnic_id)
 			rxq->vnic->fw_grp_ids[j] = INVALID_HW_RING_ID;
 		else
 			vnic->rx_queue_cnt++;
-
-		if (!rxq->rx_deferred_start) {
-			bp->eth_dev->data->rx_queue_state[j] =
-				RTE_ETH_QUEUE_STATE_STARTED;
-			rxq->rx_started = true;
-		}
 	}
 
 	PMD_DRV_LOG(DEBUG, "vnic->rx_queue_cnt = %d\n", vnic->rx_queue_cnt);
@@ -824,6 +818,16 @@ static int bnxt_start_nic(struct bnxt *bp)
 		}
 	}
 
+	for (j = 0; j < bp->rx_nr_rings; j++) {
+		struct bnxt_rx_queue *rxq = bp->rx_queues[j];
+
+		if (!rxq->rx_deferred_start) {
+			bp->eth_dev->data->rx_queue_state[j] =
+				RTE_ETH_QUEUE_STATE_STARTED;
+			rxq->rx_started = true;
+		}
+	}
+
 	rc = bnxt_hwrm_cfa_l2_set_rx_mask(bp, &bp->vnic_info[0], 0, NULL);
 	if (rc) {
 		PMD_DRV_LOG(ERR,
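For reference, the deferred-start flow that this bookkeeping must honor
looks roughly like this from the application side (a sketch; descriptor
count and mempool are assumptions, error handling abbreviated):

    #include <rte_ethdev.h>

    /*
     * Sketch: a queue configured with rx_deferred_start stays in
     * RTE_ETH_QUEUE_STATE_STOPPED across rte_eth_dev_start() and is
     * started explicitly afterwards; every other queue must be marked
     * STARTED on start, which is what this fix makes unconditional.
     */
    static int start_with_deferred_queue(uint16_t port_id, uint16_t queue_id,
                                         struct rte_mempool *mp)
    {
        struct rte_eth_rxconf rxconf = { .rx_deferred_start = 1 };
        int rc;

        rc = rte_eth_rx_queue_setup(port_id, queue_id, 512,
                                    rte_eth_dev_socket_id(port_id),
                                    &rxconf, mp);
        if (rc != 0)
            return rc;

        rc = rte_eth_dev_start(port_id);    /* queue stays stopped */
        if (rc != 0)
            return rc;

        return rte_eth_dev_rx_queue_start(port_id, queue_id);
    }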
From patchwork Tue Oct 12 21:14:36 2021
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 101275
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Ajit Khaparde
To: dev@dpdk.org
Cc: ferruh.yigit@intel.com, Lance Richardson, Kalesh AP
Date: Tue, 12 Oct 2021 14:14:36 -0700
Message-Id: <20211012211436.70846-4-ajit.khaparde@broadcom.com>
In-Reply-To: <20211012211436.70846-1-ajit.khaparde@broadcom.com>
References: <20211012211436.70846-1-ajit.khaparde@broadcom.com>
Subject: [dpdk-dev] [PATCH v3 3/3] net/bnxt: enhance support for RSS action

Enhance support for the RSS action in the non-TruFlow path. This allows
the user or application to update the RSS settings using the rte_flow
API.

Signed-off-by: Ajit Khaparde
Reviewed-by: Lance Richardson
Reviewed-by: Kalesh AP
---
 drivers/net/bnxt/bnxt_filter.h |   1 +
 drivers/net/bnxt/bnxt_flow.c   | 196 ++++++++++++++++++++++++++++++++-
 2 files changed, 196 insertions(+), 1 deletion(-)

diff --git a/drivers/net/bnxt/bnxt_filter.h b/drivers/net/bnxt/bnxt_filter.h
index 8bae0c4c72..587932c96f 100644
--- a/drivers/net/bnxt/bnxt_filter.h
+++ b/drivers/net/bnxt/bnxt_filter.h
@@ -43,6 +43,7 @@ struct bnxt_filter_info {
 #define HWRM_CFA_EM_FILTER			1
 #define HWRM_CFA_NTUPLE_FILTER			2
 #define HWRM_CFA_TUNNEL_REDIRECT_FILTER		3
+#define HWRM_CFA_CONFIG				4
 	uint8_t		filter_type;
 	uint32_t	dst_id;
diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c
index 59489b591a..b2ebb5634e 100644
--- a/drivers/net/bnxt/bnxt_flow.c
+++ b/drivers/net/bnxt/bnxt_flow.c
@@ -738,6 +738,10 @@ bnxt_validate_and_parse_flow_type(struct bnxt *bp,
 	filter->enables = en;
 	filter->valid_flags = valid_flags;
 
+	/* Items parsed but no filter to create in HW. */
+	if (filter->enables == 0 && filter->valid_flags == 0)
+		filter->filter_type = HWRM_CFA_CONFIG;
+
 	return 0;
 }
 
@@ -1070,6 +1074,167 @@ bnxt_update_filter_flags_en(struct bnxt_filter_info *filter,
 		    filter1, filter->fw_l2_filter_id, filter->l2_ref_cnt);
 }
 
+/* Valid actions supported along with RSS are count and mark. */
+static int
+bnxt_validate_rss_action(const struct rte_flow_action actions[])
+{
+	for (; actions->type != RTE_FLOW_ACTION_TYPE_END; actions++) {
+		switch (actions->type) {
+		case RTE_FLOW_ACTION_TYPE_VOID:
+			break;
+		case RTE_FLOW_ACTION_TYPE_RSS:
+			break;
+		case RTE_FLOW_ACTION_TYPE_MARK:
+			break;
+		case RTE_FLOW_ACTION_TYPE_COUNT:
+			break;
+		default:
+			return -ENOTSUP;
+		}
+	}
+
+	return 0;
+}
+
+static int
+bnxt_get_vnic(struct bnxt *bp, uint32_t group)
+{
+	int vnic_id = 0;
+
+	/* For legacy NS3 based implementations,
+	 * group_id will be mapped to a VNIC ID.
+	 */
+	if (BNXT_STINGRAY(bp))
+		vnic_id = group;
+
+	/* Non NS3 cases, group_id will be ignored.
+	 * Setting will be configured on default VNIC.
+	 */
+	return vnic_id;
+}
+
+static int
+bnxt_vnic_rss_cfg_update(struct bnxt *bp,
+			 struct bnxt_vnic_info *vnic,
+			 const struct rte_flow_action *act,
+			 struct rte_flow_error *error)
+{
+	const struct rte_flow_action_rss *rss;
+	unsigned int rss_idx, i;
+	uint16_t hash_type;
+	uint64_t types;
+	int rc;
+
+	rss = (const struct rte_flow_action_rss *)act->conf;
+
+	/* Currently only Toeplitz hash is supported. */
+	if (rss->func != RTE_ETH_HASH_FUNCTION_DEFAULT &&
+	    rss->func != RTE_ETH_HASH_FUNCTION_TOEPLITZ) {
+		rte_flow_error_set(error,
+				   ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_ACTION,
+				   act,
+				   "Unsupported RSS hash function");
+		rc = -rte_errno;
+		goto ret;
+	}
+
+	/* key_len should match the hash key supported by hardware */
+	if (rss->key_len != 0 && rss->key_len != HW_HASH_KEY_SIZE) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION,
+				   act,
+				   "Incorrect hash key parameters");
+		rc = -rte_errno;
+		goto ret;
+	}
+
+	/* Currently RSS hash on inner and outer headers are supported.
+	 * 0 => Default setting
+	 * 1 => Inner
+	 * 2 => Outer
+	 */
+	if (rss->level > 2) {
+		rte_flow_error_set(error,
+				   ENOTSUP,
+				   RTE_FLOW_ERROR_TYPE_ACTION,
+				   act,
+				   "Unsupported hash level");
+		rc = -rte_errno;
+		goto ret;
+	}
+
+	if ((rss->queue_num == 0 && rss->queue != NULL) ||
+	    (rss->queue_num != 0 && rss->queue == NULL)) {
+		rte_flow_error_set(error,
+				   EINVAL,
+				   RTE_FLOW_ERROR_TYPE_ACTION,
+				   act,
+				   "Invalid queue config specified");
+		rc = -rte_errno;
+		goto ret;
+	}
+
+	/* If RSS types is 0, use a best effort configuration */
+	types = rss->types ? rss->types : ETH_RSS_IPV4;
+
+	hash_type = bnxt_rte_to_hwrm_hash_types(types);
+
+	/* If requested types can't be supported, leave existing settings */
+	if (hash_type)
+		vnic->hash_type = hash_type;
+
+	vnic->hash_mode =
+		bnxt_rte_to_hwrm_hash_level(bp, rss->types, rss->level);
+
+	/* Update RSS key only if key_len != 0 */
+	if (rss->key_len != 0)
+		memcpy(vnic->rss_hash_key, rss->key, rss->key_len);
+
+	if (rss->queue_num == 0)
+		goto skip_rss_table;
+
+	/* Validate Rx queues */
+	for (i = 0; i < rss->queue_num; i++) {
+		PMD_DRV_LOG(DEBUG, "RSS action Queue %d\n", rss->queue[i]);
+
+		if (rss->queue[i] >= bp->rx_nr_rings ||
+		    !bp->rx_queues[rss->queue[i]]) {
+			rte_flow_error_set(error,
+					   EINVAL,
+					   RTE_FLOW_ERROR_TYPE_ACTION,
+					   act,
+					   "Invalid queue ID for RSS");
+			rc = -rte_errno;
+			goto ret;
+		}
+	}
+
+	/* Prepare the indirection table */
+	for (rss_idx = 0; rss_idx < HW_HASH_INDEX_SIZE; rss_idx++) {
+		struct bnxt_rx_queue *rxq;
+		uint32_t idx;
+
+		idx = rss->queue[rss_idx % rss->queue_num];
+
+		if (BNXT_CHIP_P5(bp)) {
+			rxq = bp->rx_queues[idx];
+			vnic->rss_table[rss_idx * 2] =
+				rxq->rx_ring->rx_ring_struct->fw_ring_id;
+			vnic->rss_table[rss_idx * 2 + 1] =
+				rxq->cp_ring->cp_ring_struct->fw_ring_id;
+		} else {
+			vnic->rss_table[rss_idx] = vnic->fw_grp_ids[idx];
+		}
+	}
+
+skip_rss_table:
+	rc = bnxt_hwrm_vnic_rss_cfg(bp, vnic);
+ret:
+	return rc;
+}
+
 static int
 bnxt_validate_and_parse_flow(struct rte_eth_dev *dev,
 			     const struct rte_flow_item pattern[],
@@ -1329,13 +1494,38 @@ bnxt_validate_and_parse_flow(struct rte_eth_dev *dev,
 			filter->flow_id = filter1->flow_id;
 			break;
 		case RTE_FLOW_ACTION_TYPE_RSS:
+			rc = bnxt_validate_rss_action(actions);
+			if (rc != 0) {
+				rte_flow_error_set(error,
+						   EINVAL,
+						   RTE_FLOW_ERROR_TYPE_ACTION,
+						   act,
+						   "Invalid actions specified with RSS");
+				rc = -rte_errno;
+				goto ret;
+			}
+
 			rss = (const struct rte_flow_action_rss *)act->conf;
-			vnic_id = attr->group;
+			vnic_id = bnxt_get_vnic(bp, attr->group);
 
 			BNXT_VALID_VNIC_OR_RET(bp, vnic_id);
 			vnic = &bp->vnic_info[vnic_id];
 
+			/*
+			 * For non NS3 cases, rte_flow_items will not be considered
+			 * for RSS updates.
+			 */
+			if (filter->filter_type == HWRM_CFA_CONFIG) {
+				/* RSS config update requested */
+				rc = bnxt_vnic_rss_cfg_update(bp, vnic, act, error);
+				if (rc != 0)
+					return -rte_errno;
+
+				filter->dst_id = vnic->fw_vnic_id;
+				break;
+			}
+
 			/* Check if requested RSS config matches RSS config of VNIC
 			 * only if it is not a fresh VNIC configuration.
 			 * Otherwise the existing VNIC configuration can be used.
@@ -2006,6 +2196,10 @@ _bnxt_flow_destroy(struct bnxt *bp,
 			return ret;
 	}
 
+	/* For config type, there is no filter in HW. Finish cleanup here */
+	if (filter->filter_type == HWRM_CFA_CONFIG)
+		goto done;
+
 	ret = bnxt_match_filter(bp, filter);
 	if (ret == 0)
 		PMD_DRV_LOG(ERR, "Could not find matching flow\n");
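As a usage note for the series (an illustrative sketch, not part of the
patch): with the HWRM_CFA_CONFIG path above, an application can push
updated RSS settings to the default VNIC through a plain rte_flow rule
that carries a single RSS action and no match items, along these lines
(hash types and queue layout here are assumptions, error handling
elided):

    #include <rte_flow.h>

    /*
     * Sketch: update the RSS hash types/queues on a port via rte_flow.
     * With no match items, the bnxt PMD now treats this as an RSS
     * config update (HWRM_CFA_CONFIG) instead of creating a HW filter.
     */
    static struct rte_flow *
    update_rss_config(uint16_t port_id, const uint16_t *queues,
                      uint32_t nb_queues)
    {
        struct rte_flow_attr attr = { .ingress = 1 };
        struct rte_flow_item pattern[] = {
            { .type = RTE_FLOW_ITEM_TYPE_END },
        };
        struct rte_flow_action_rss rss = {
            .func = RTE_ETH_HASH_FUNCTION_TOEPLITZ,
            .types = ETH_RSS_IPV4 | ETH_RSS_NONFRAG_IPV4_TCP,
            .queue_num = nb_queues,
            .queue = queues,
        };
        struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_RSS, .conf = &rss },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };
        struct rte_flow_error error;

        return rte_flow_create(port_id, &attr, pattern, actions, &error);
    }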