From patchwork Fri Dec 22 21:56:55 2023
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 135544
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Ajit Khaparde
To: dev@dpdk.org
Cc: Damodharam Ammepalli
Subject: [PATCH v2 14/18] net/bnxt: add tunnel TPA support
Date: Fri, 22 Dec 2023 13:56:55 -0800
Message-Id: <20231222215659.64993-15-ajit.khaparde@broadcom.com>
In-Reply-To: <20231222215659.64993-1-ajit.khaparde@broadcom.com>
References: <20231222215659.64993-1-ajit.khaparde@broadcom.com>
List-Id: DPDK patches and discussions

From: Damodharam Ammepalli

This patch adds support for the tunnel TPA type.
Tunnel TPA support is enabled through the tnl_tpa_en bit (bit 4) in
hwrm_vnic_tpa_cfg_input->enables; the firmware advertises whether the
underlying hardware supports it through the VNIC capability flags. This
patch fills in the HWRM_VNIC_TPA_CFG request with the VXLAN, Geneve and
default tunnel-type bit fields. It also updates the Rx path to accept
the V3 TPA completion, which the P7 devices support.

Signed-off-by: Damodharam Ammepalli
Reviewed-by: Ajit Khaparde
---
 drivers/net/bnxt/bnxt.h      |  4 ++
 drivers/net/bnxt/bnxt_hwrm.c | 74 ++++++++++++++++++++++++++++++++++++
 drivers/net/bnxt/bnxt_rxr.c  |  9 +++--
 drivers/net/bnxt/bnxt_vnic.c | 16 ++++++++
 4 files changed, 100 insertions(+), 3 deletions(-)
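Reviewer note (illustrative, not part of the patch): the core of the
change is the tunnel bitmap that bnxt_vnic_update_tunl_tpa_bmap() writes
into the TPA config request. The standalone sketch below mirrors that
logic so it can be compiled and inspected in isolation. The TNL_BMAP_*
bit positions are placeholders for this example only, not the real
HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_* values from the HSI header.

    #include <stdint.h>
    #include <stdio.h>

    #define TNL_BMAP_GRE        (1u << 0)  /* placeholder bit positions */
    #define TNL_BMAP_IPV4       (1u << 1)
    #define TNL_BMAP_IPV6       (1u << 2)
    #define TNL_BMAP_VXLAN      (1u << 3)
    #define TNL_BMAP_VXLAN_GPE  (1u << 4)
    #define TNL_BMAP_GENEVE     (1u << 5)

    /* Default map: GRE and plain IPv4/IPv6 aggregation is always requested. */
    #define DFLT_TUNL_TPA_BMAP  (TNL_BMAP_GRE | TNL_BMAP_IPV4 | TNL_BMAP_IPV6)

    static uint32_t build_tunl_tpa_bmap(int hw_tunnel_tpa_cap,
                                        int vxlan_port_cnt, int geneve_port_cnt)
    {
            uint32_t bmap = DFLT_TUNL_TPA_BMAP;

            /* Without the firmware capability the driver never sets the
             * enables bit; model that here as "no bitmap sent".
             */
            if (!hw_tunnel_tpa_cap)
                    return 0;

            /* UDP tunnel bits are added only once a port is configured. */
            if (vxlan_port_cnt)
                    bmap |= TNL_BMAP_VXLAN | TNL_BMAP_VXLAN_GPE;
            if (geneve_port_cnt)
                    bmap |= TNL_BMAP_GENEVE;

            return bmap;
    }

    int main(void)
    {
            printf("bmap (vxlan only): 0x%x\n", build_tunl_tpa_bmap(1, 1, 0));
            printf("bmap (no cap):     0x%x\n", build_tunl_tpa_bmap(0, 1, 1));
            return 0;
    }

With the capability present and a VXLAN port programmed this prints
0x1f; without the capability the bitmap is simply not sent, which is
why the real function also re-runs after tunnel ports are added or
freed (see bnxt_hwrm_set_tpa() in the diff below).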
diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h
index 576688bbff..2357e9f747 100644
--- a/drivers/net/bnxt/bnxt.h
+++ b/drivers/net/bnxt/bnxt.h
@@ -18,6 +18,7 @@
 #include
 #include
 #include
+#include

 #include "bnxt_cpr.h"
 #include "bnxt_util.h"
@@ -119,6 +120,8 @@
 	(BNXT_CHIP_P5_P7(bp) ? TPA_MAX_SEGS_TH : \
 	 TPA_MAX_SEGS)

+#define BNXT_TPA_MAX_PAGES	65536
+
 /*
  * Define the number of async completion rings to be used. Set to zero for
  * configurations in which the maximum number of packet completion rings
@@ -815,6 +818,7 @@ struct bnxt {
 #define BNXT_VNIC_CAP_ESP_SPI6_CAP	BIT(12)
 #define BNXT_VNIC_CAP_AH_SPI_CAP	(BNXT_VNIC_CAP_AH_SPI4_CAP | BNXT_VNIC_CAP_AH_SPI6_CAP)
 #define BNXT_VNIC_CAP_ESP_SPI_CAP	(BNXT_VNIC_CAP_ESP_SPI4_CAP | BNXT_VNIC_CAP_ESP_SPI6_CAP)
+#define BNXT_VNIC_CAP_VNIC_TUNNEL_TPA	BIT(13)

 	unsigned int		rx_nr_rings;
 	unsigned int		rx_cp_nr_rings;
diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 3c16abea69..f896a41653 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -1046,6 +1046,9 @@ int bnxt_hwrm_vnic_qcaps(struct bnxt *bp)
 	if (flags & HWRM_VNIC_QCAPS_OUTPUT_FLAGS_RSS_IPSEC_ESP_SPI_IPV6_CAP)
 		bp->vnic_cap_flags |= BNXT_VNIC_CAP_ESP_SPI6_CAP;

+	if (flags & HWRM_VNIC_QCAPS_OUTPUT_FLAGS_HW_TUNNEL_TPA_CAP)
+		bp->vnic_cap_flags |= BNXT_VNIC_CAP_VNIC_TUNNEL_TPA;
+
 	bp->max_tpa_v2 = rte_le_to_cpu_16(resp->max_aggs_supported);

 	HWRM_UNLOCK();
@@ -2666,6 +2669,30 @@ int bnxt_hwrm_vnic_plcmode_cfg(struct bnxt *bp,
 	return rc;
 }

+#define BNXT_DFLT_TUNL_TPA_BMAP					\
+	(HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_GRE |	\
+	 HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_IPV4 |	\
+	 HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_IPV6)
+
+static void bnxt_vnic_update_tunl_tpa_bmap(struct bnxt *bp,
+					   struct hwrm_vnic_tpa_cfg_input *req)
+{
+	uint32_t tunl_tpa_bmap = BNXT_DFLT_TUNL_TPA_BMAP;
+
+	if (!(bp->vnic_cap_flags & BNXT_VNIC_CAP_VNIC_TUNNEL_TPA))
+		return;
+
+	if (bp->vxlan_port_cnt)
+		tunl_tpa_bmap |= HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_VXLAN |
+			HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_VXLAN_GPE;
+
+	if (bp->geneve_port_cnt)
+		tunl_tpa_bmap |= HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_GENEVE;
+
+	req->enables |= rte_cpu_to_le_32(HWRM_VNIC_TPA_CFG_INPUT_ENABLES_TNL_TPA_EN);
+	req->tnl_tpa_en_bitmap = rte_cpu_to_le_32(tunl_tpa_bmap);
+}
+
 int bnxt_hwrm_vnic_tpa_cfg(struct bnxt *bp,
 			   struct bnxt_vnic_info *vnic, bool enable)
 {
@@ -2714,6 +2741,29 @@ int bnxt_hwrm_vnic_tpa_cfg(struct bnxt *bp,

 		if (BNXT_CHIP_P5_P7(bp))
 			req.max_aggs = rte_cpu_to_le_16(bp->max_tpa_v2);
+
+		/* For tpa v2 handle as per spec mss and log2 units */
+		if (BNXT_CHIP_P7(bp)) {
+			uint32_t nsegs, n, segs = 0;
+			uint16_t mss = bp->eth_dev->data->mtu - 40;
+			size_t page_size = rte_mem_page_size();
+			uint32_t max_mbuf_frags =
+				BNXT_TPA_MAX_PAGES / (rte_mem_page_size() + 1);
+
+			/* Calculate the number of segs based on mss */
+			if (mss <= page_size) {
+				n = page_size / mss;
+				nsegs = (max_mbuf_frags - 1) * n;
+			} else {
+				n = mss / page_size;
+				if (mss & (page_size - 1))
+					n++;
+				nsegs = (max_mbuf_frags - n) / n;
+			}
+			segs = rte_log2_u32(nsegs);
+			req.max_agg_segs = rte_cpu_to_le_16(segs);
+		}
+		bnxt_vnic_update_tunl_tpa_bmap(bp, &req);
 	}

 	req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id);
@@ -4242,6 +4292,27 @@ int bnxt_hwrm_pf_evb_mode(struct bnxt *bp)
 	return rc;
 }

+static int bnxt_hwrm_set_tpa(struct bnxt *bp)
+{
+	struct rte_eth_conf *dev_conf = &bp->eth_dev->data->dev_conf;
+	uint64_t rx_offloads = dev_conf->rxmode.offloads;
+	bool tpa_flags = 0;
+	int rc, i;
+
+	tpa_flags = (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) ? true : false;
+	for (i = 0; i < bp->max_vnics; i++) {
+		struct bnxt_vnic_info *vnic = &bp->vnic_info[i];
+
+		if (vnic->fw_vnic_id == INVALID_HW_RING_ID)
+			continue;
+
+		rc = bnxt_hwrm_vnic_tpa_cfg(bp, vnic, tpa_flags);
+		if (rc)
+			return rc;
+	}
+	return 0;
+}
+
 int bnxt_hwrm_tunnel_dst_port_alloc(struct bnxt *bp, uint16_t port,
 				    uint8_t tunnel_type)
 {
@@ -4278,6 +4349,8 @@ int bnxt_hwrm_tunnel_dst_port_alloc(struct bnxt *bp, uint16_t port,

 	HWRM_UNLOCK();

+	bnxt_hwrm_set_tpa(bp);
+
 	return rc;
 }

@@ -4346,6 +4419,7 @@ int bnxt_hwrm_tunnel_dst_port_free(struct bnxt *bp, uint16_t port,
 		bp->ecpri_port_cnt = 0;
 	}

+	bnxt_hwrm_set_tpa(bp);
 	return rc;
 }

diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c
index d0706874a6..3542975600 100644
--- a/drivers/net/bnxt/bnxt_rxr.c
+++ b/drivers/net/bnxt/bnxt_rxr.c
@@ -153,7 +153,8 @@ static void bnxt_rx_ring_reset(void *arg)
 		rxr = rxq->rx_ring;
 		/* Disable and flush TPA before resetting the RX ring */
 		if (rxr->tpa_info)
-			bnxt_hwrm_vnic_tpa_cfg(bp, rxq->vnic, false);
+			bnxt_vnic_tpa_cfg(bp, rxq->queue_id, false);
+
 		rc = bnxt_hwrm_rx_ring_reset(bp, i);
 		if (rc) {
 			PMD_DRV_LOG(ERR, "Rx ring%d reset failed\n", i);
@@ -163,12 +164,13 @@ static void bnxt_rx_ring_reset(void *arg)
 		bnxt_rx_queue_release_mbufs(rxq);
 		rxr->rx_raw_prod = 0;
 		rxr->ag_raw_prod = 0;
+		rxr->ag_cons = 0;
 		rxr->rx_next_cons = 0;
 		bnxt_init_one_rx_ring(rxq);
 		bnxt_db_write(&rxr->rx_db, rxr->rx_raw_prod);
 		bnxt_db_write(&rxr->ag_db, rxr->ag_raw_prod);
 		if (rxr->tpa_info)
-			bnxt_hwrm_vnic_tpa_cfg(bp, rxq->vnic, true);
+			bnxt_vnic_tpa_cfg(bp, rxq->queue_id, true);

 		rxq->in_reset = 0;
 	}
@@ -1151,7 +1153,8 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 		return -EBUSY;

 	if (cmp_type == RX_TPA_START_CMPL_TYPE_RX_TPA_START ||
-	    cmp_type == RX_TPA_START_V2_CMPL_TYPE_RX_TPA_START_V2) {
+	    cmp_type == RX_TPA_START_V2_CMPL_TYPE_RX_TPA_START_V2 ||
+	    cmp_type == RX_TPA_START_V3_CMPL_TYPE_RX_TPA_START_V3) {
 		bnxt_tpa_start(rxq, (struct rx_tpa_start_cmpl *)rxcmp,
 			       (struct rx_tpa_start_cmpl_hi *)rxcmp1);
 		rc = -EINVAL; /* Continue w/o new mbuf */
diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c
index 5ea34f7cb6..5092a7d774 100644
--- a/drivers/net/bnxt/bnxt_vnic.c
+++ b/drivers/net/bnxt/bnxt_vnic.c
@@ -464,7 +464,9 @@ bnxt_vnic_queue_delete(struct bnxt *bp, uint16_t vnic_idx)
 static struct bnxt_vnic_info*
 bnxt_vnic_queue_create(struct bnxt *bp, int32_t vnic_id, uint16_t q_index)
 {
+	struct rte_eth_conf *dev_conf = &bp->eth_dev->data->dev_conf;
 	uint8_t *rx_queue_state = bp->eth_dev->data->rx_queue_state;
+	uint64_t rx_offloads = dev_conf->rxmode.offloads;
 	struct bnxt_vnic_info *vnic;
 	struct bnxt_rx_queue *rxq = NULL;
 	int32_t rc = -EINVAL;
@@ -523,6 +525,12 @@ bnxt_vnic_queue_create(struct bnxt *bp, int32_t vnic_id, uint16_t q_index)
 		goto cleanup;
 	}

+	rc = bnxt_hwrm_vnic_tpa_cfg(bp, vnic,
+				    (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) ?
+				    true : false);
+	if (rc)
+		PMD_DRV_LOG(DEBUG, "Failed to configure TPA on this vnic %d\n", q_index);
+
 	rc = bnxt_hwrm_vnic_plcmode_cfg(bp, vnic);
 	if (rc) {
 		PMD_DRV_LOG(DEBUG, "Failed to configure vnic plcmode %d\n",
@@ -658,7 +666,9 @@ bnxt_vnic_rss_create(struct bnxt *bp,
 		     struct bnxt_vnic_rss_info *rss_info,
 		     uint16_t vnic_id)
 {
+	struct rte_eth_conf *dev_conf = &bp->eth_dev->data->dev_conf;
 	uint8_t *rx_queue_state = bp->eth_dev->data->rx_queue_state;
+	uint64_t rx_offloads = dev_conf->rxmode.offloads;
 	struct bnxt_vnic_info *vnic;
 	struct bnxt_rx_queue *rxq = NULL;
 	uint32_t idx, nr_ctxs, config_rss = 0;
@@ -741,6 +751,12 @@ bnxt_vnic_rss_create(struct bnxt *bp,
 		goto fail_cleanup;
 	}

+	rc = bnxt_hwrm_vnic_tpa_cfg(bp, vnic,
+				    (rx_offloads & RTE_ETH_RX_OFFLOAD_TCP_LRO) ?
+				    true : false);
+	if (rc)
+		PMD_DRV_LOG(DEBUG, "Failed to configure TPA on this vnic %d\n", idx);
+
 	rc = bnxt_hwrm_vnic_plcmode_cfg(bp, vnic);
 	if (rc) {
 		PMD_DRV_LOG(ERR, "Failed to configure vnic plcmode %d\n",
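Reviewer note (illustrative, not part of the patch): the log2-units
computation in the bnxt_hwrm_vnic_tpa_cfg() hunk above is easy to
sanity-check offline. The standalone program below re-runs the same
arithmetic with an assumed mtu of 1500 and a 4 KiB page size;
log2_roundup() stands in for DPDK's rte_log2_u32(), which rounds its
argument up to the next power of two before taking the log.

    #include <stdint.h>
    #include <stdio.h>

    #define TPA_MAX_PAGES 65536     /* mirrors BNXT_TPA_MAX_PAGES */

    /* Smallest r such that 2^r >= v, matching rte_log2_u32()'s
     * round-up-then-log behavior for v > 0.
     */
    static uint32_t log2_roundup(uint32_t v)
    {
            uint32_t r = 0;

            while ((1u << r) < v)
                    r++;
            return r;
    }

    int main(void)
    {
            uint32_t mtu = 1500, page_size = 4096;  /* assumed inputs */
            uint32_t mss = mtu - 40;                /* minus IPv4 + TCP headers */
            uint32_t max_mbuf_frags = TPA_MAX_PAGES / (page_size + 1);
            uint32_t n, nsegs;

            if (mss <= page_size) {         /* several MSS segs fit per page */
                    n = page_size / mss;
                    nsegs = (max_mbuf_frags - 1) * n;
            } else {                        /* one seg spans multiple pages */
                    n = mss / page_size;
                    if (mss & (page_size - 1))
                            n++;
                    nsegs = (max_mbuf_frags - n) / n;
            }

            printf("max_agg_segs (log2 units) = %u\n", log2_roundup(nsegs));
            return 0;
    }

For these inputs: mss = 1460, max_mbuf_frags = 65536 / 4097 = 15,
n = 4096 / 1460 = 2, nsegs = (15 - 1) * 2 = 28, and the value sent to
the firmware is log2 rounded up, i.e. 5 (2^5 = 32 >= 28).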