From patchwork Mon Dec 4 18:36:57 2023
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 134815
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Ajit Khaparde
To: dev@dpdk.org
Cc: Damodharam Ammepalli
Subject: [PATCH 01/14] net/bnxt: refactor epoch setting
Date: Mon, 4 Dec 2023 10:36:57 -0800
Message-Id: <20231204183710.86921-2-ajit.khaparde@broadcom.com>
In-Reply-To: <20231204183710.86921-1-ajit.khaparde@broadcom.com>
References: <20231204183710.86921-1-ajit.khaparde@broadcom.com>
List-Id: DPDK patches and discussions

Fix the epoch bit setting when we ring the doorbell. The epoch bit
needs to toggle alternately between 0 and 1 every time the ring
indices wrap. Currently its value is anything but an alternating
0 and 1. Also remove the now-unneeded db_epoch_shift field from the
bnxt_db_info structure.
Signed-off-by: Ajit Khaparde
Reviewed-by: Damodharam Ammepalli
---
 drivers/net/bnxt/bnxt_cpr.h  | 5 ++---
 drivers/net/bnxt/bnxt_ring.c | 9 ++-------
 2 files changed, 4 insertions(+), 10 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_cpr.h b/drivers/net/bnxt/bnxt_cpr.h
index 2de154322d..26e81a6a7e 100644
--- a/drivers/net/bnxt/bnxt_cpr.h
+++ b/drivers/net/bnxt/bnxt_cpr.h
@@ -53,11 +53,10 @@ struct bnxt_db_info {
 	bool		db_64;
 	uint32_t	db_ring_mask;
 	uint32_t	db_epoch_mask;
-	uint32_t	db_epoch_shift;
 };
 
-#define DB_EPOCH(db, idx)	(((idx) & (db)->db_epoch_mask) << \
-				 ((db)->db_epoch_shift))
+#define DB_EPOCH(db, idx)	(!!((idx) & (db)->db_epoch_mask) << \
+				 DBR_EPOCH_SFT)
 #define DB_RING_IDX(db, idx)	(((idx) & (db)->db_ring_mask) | \
 				 DB_EPOCH(db, idx))
 
diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index 34b2510d54..6dacb1b37f 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -371,9 +371,10 @@ static void bnxt_set_db(struct bnxt *bp,
 		db->db_key64 = DBR_PATH_L2;
 		break;
 	}
-	if (BNXT_CHIP_SR2(bp)) {
+	if (BNXT_CHIP_P7(bp)) {
 		db->db_key64 |= DBR_VALID;
 		db_offset = bp->legacy_db_size;
+		db->db_epoch_mask = ring_mask + 1;
 	} else if (BNXT_VF(bp)) {
 		db_offset = DB_VF_OFFSET;
 	}
@@ -397,12 +398,6 @@ static void bnxt_set_db(struct bnxt *bp,
 		db->db_64 = false;
 	}
 	db->db_ring_mask = ring_mask;
-
-	if (BNXT_CHIP_SR2(bp)) {
-		db->db_epoch_mask = db->db_ring_mask + 1;
-		db->db_epoch_shift = DBR_EPOCH_SFT -
-					rte_log2_u32(db->db_epoch_mask);
-	}
 }
 
 static int bnxt_alloc_cmpl_ring(struct bnxt *bp, int queue_index,

From patchwork Mon Dec 4 18:36:58 2023
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 134817
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Ajit Khaparde
To: dev@dpdk.org
Subject: [PATCH 02/14] net/bnxt: update HWRM API
Date: Mon, 4 Dec 2023 10:36:58 -0800
Message-Id: <20231204183710.86921-3-ajit.khaparde@broadcom.com>
In-Reply-To: <20231204183710.86921-1-ajit.khaparde@broadcom.com>
References: <20231204183710.86921-1-ajit.khaparde@broadcom.com>

Update the HWRM API to version 1.10.2.158.

Signed-off-by: Ajit Khaparde
---
 drivers/net/bnxt/bnxt_hwrm.c           |    3 -
 drivers/net/bnxt/hsi_struct_def_dpdk.h | 1531 ++++++++++++++++++++++--
 2 files changed, 1429 insertions(+), 105 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 06f196760f..0a31b984e6 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -5175,9 +5175,6 @@ int bnxt_hwrm_set_ntuple_filter(struct bnxt *bp,
 	if (enables & HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_DST_PORT_MASK)
 		req.dst_port_mask = rte_cpu_to_le_16(filter->dst_port_mask);
 
-	if (enables &
-	    HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_MIRROR_VNIC_ID)
-		req.mirror_vnic_id = filter->mirror_vnic_id;
 
 	req.enables = rte_cpu_to_le_32(enables);
 
diff --git a/drivers/net/bnxt/hsi_struct_def_dpdk.h b/drivers/net/bnxt/hsi_struct_def_dpdk.h
index 9afdd056ce..65f3f0576b 100644
--- a/drivers/net/bnxt/hsi_struct_def_dpdk.h
+++ b/drivers/net/bnxt/hsi_struct_def_dpdk.h
@@ -1154,8 +1154,8 @@ struct hwrm_err_output {
 #define HWRM_VERSION_MINOR 10
 #define HWRM_VERSION_UPDATE 2
 /* non-zero means beta version */
-#define HWRM_VERSION_RSVD 138
-#define HWRM_VERSION_STR "1.10.2.138"
+#define HWRM_VERSION_RSVD 158
+#define HWRM_VERSION_STR "1.10.2.158"
 
 /****************
  * hwrm_ver_get *
@@ -6329,19 +6329,14 @@ struct rx_pkt_v3_cmpl_hi {
 	#define RX_PKT_V3_CMPL_HI_ERRORS_T_PKT_ERROR_T_L3_BAD_TTL \
 		(UINT32_C(0x5) << 9)
 	/*
-	 * Indicates that the IP checksum failed its check in the tunnel
+	 * Indicates that the physical packet is shorter than that claimed
+	 * by the tunnel header length. Valid for GTPv1-U packets.
 	 * header.
 	 */
-	#define RX_PKT_V3_CMPL_HI_ERRORS_T_PKT_ERROR_T_IP_CS_ERROR \
+	#define RX_PKT_V3_CMPL_HI_ERRORS_T_PKT_ERROR_T_TOTAL_ERROR \
 		(UINT32_C(0x6) << 9)
-	/*
-	 * Indicates that the L4 checksum failed its check in the tunnel
-	 * header.
-	 */
-	#define RX_PKT_V3_CMPL_HI_ERRORS_T_PKT_ERROR_T_L4_CS_ERROR \
-		(UINT32_C(0x7) << 9)
 	#define RX_PKT_V3_CMPL_HI_ERRORS_T_PKT_ERROR_LAST \
-		RX_PKT_V3_CMPL_HI_ERRORS_T_PKT_ERROR_T_L4_CS_ERROR
+		RX_PKT_V3_CMPL_HI_ERRORS_T_PKT_ERROR_T_TOTAL_ERROR
 	/*
 	 * This indicates that there was an error in the inner
 	 * portion of the packet when this
@@ -6406,20 +6401,8 @@ struct rx_pkt_v3_cmpl_hi {
 	 */
 	#define RX_PKT_V3_CMPL_HI_ERRORS_PKT_ERROR_L4_BAD_OPT_LEN \
 		(UINT32_C(0x8) << 12)
-	/*
-	 * Indicates that the IP checksum failed its check in the
-	 * inner header.
-	 */
-	#define RX_PKT_V3_CMPL_HI_ERRORS_PKT_ERROR_IP_CS_ERROR \
-		(UINT32_C(0x9) << 12)
-	/*
-	 * Indicates that the L4 checksum failed its check in the
-	 * inner header.
-	 */
-	#define RX_PKT_V3_CMPL_HI_ERRORS_PKT_ERROR_L4_CS_ERROR \
-		(UINT32_C(0xa) << 12)
 	#define RX_PKT_V3_CMPL_HI_ERRORS_PKT_ERROR_LAST \
-		RX_PKT_V3_CMPL_HI_ERRORS_PKT_ERROR_L4_CS_ERROR
+		RX_PKT_V3_CMPL_HI_ERRORS_PKT_ERROR_L4_BAD_OPT_LEN
 	/*
 	 * This is data from the CFA block as indicated by the meta_format
 	 * field.
@@ -14157,7 +14140,7 @@ struct hwrm_func_qcaps_input { uint8_t unused_0[6]; } __rte_packed; -/* hwrm_func_qcaps_output (size:896b/112B) */ +/* hwrm_func_qcaps_output (size:1088b/136B) */ struct hwrm_func_qcaps_output { /* The specific error status for the command. */ uint16_t error_code; @@ -14840,9 +14823,85 @@ struct hwrm_func_qcaps_output { /* * When this bit is '1', it indicates that the hardware based * link aggregation group (L2 and RoCE) feature is supported. + * This LAG feature is only supported on the THOR2 or newer NIC + * with multiple ports. */ #define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_HW_LAG_SUPPORTED \ UINT32_C(0x400) + /* + * When this bit is '1', it indicates all contexts can be stored + * on chip instead of using host based backing store memory. + */ + #define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_ON_CHIP_CTX_SUPPORTED \ + UINT32_C(0x800) + /* + * When this bit is '1', it indicates that the HW supports + * using a steering tag in the memory transactions targeting + * L2 or RoCE ring resources. + * Steering Tags are system-specific values that must follow the + * encoding requirements of the hardware platform. On devices that + * support steering to multiple address domains, a value of 0 in + * bit 0 of the steering tag specifies the address is associated + * with the SOC address space, and a value of 1 indicates the + * address is associated with the host address space. + */ + #define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_STEERING_TAG_SUPPORTED \ + UINT32_C(0x1000) + /* + * When this bit is '1', it indicates that driver can enable + * support for an enhanced VF scale. + */ + #define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_ENHANCED_VF_SCALE_SUPPORTED \ + UINT32_C(0x2000) + /* + * When this bit is '1', it indicates that FW is capable of + * supporting partition based XID management for KTLS/QUIC + * Tx/Rx Key Context types. 
+ */ + #define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_KEY_XID_PARTITION_SUPPORTED \ + UINT32_C(0x4000) + /* + * This bit is only valid on the condition that both + * “ktls_supported” and “quic_supported” flags are set. When this + * bit is valid, it conveys information below: + * 1. If it is set to ‘1’, it indicates that the firmware allows the + * driver to run KTLS and QUIC concurrently; + * 2. If it is cleared to ‘0’, it indicates that the driver has to + * make sure all crypto connections on all functions are of the + * same type, i.e., either KTLS or QUIC. + */ + #define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_CONCURRENT_KTLS_QUIC_SUPPORTED \ + UINT32_C(0x8000) + /* + * When this bit is '1', it indicates that the device supports + * setting a cross TC cap on a scheduler queue. + */ + #define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_SCHQ_CROSS_TC_CAP_SUPPORTED \ + UINT32_C(0x10000) + /* + * When this bit is '1', it indicates that the device supports + * setting a per TC cap on a scheduler queue. + */ + #define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_SCHQ_PER_TC_CAP_SUPPORTED \ + UINT32_C(0x20000) + /* + * When this bit is '1', it indicates that the device supports + * setting a per TC reservation on a scheduler queues. + */ + #define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_SCHQ_PER_TC_RESERVATION_SUPPORTED \ + UINT32_C(0x40000) + /* + * When this bit is '1', it indicates that firmware supports query + * for statistics related to invalid doorbell errors and drops. + */ + #define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_DB_ERROR_STATS_SUPPORTED \ + UINT32_C(0x80000) + /* + * When this bit is '1', it indicates that the device supports + * VF RoCE resource management. 
+ */ + #define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_ROCE_VF_RESOURCE_MGMT_SUPPORTED \ + UINT32_C(0x100000) uint16_t tunnel_disable_flag; /* * When this bit is '1', it indicates that the VXLAN parsing @@ -14892,7 +14951,35 @@ struct hwrm_func_qcaps_output { */ #define HWRM_FUNC_QCAPS_OUTPUT_TUNNEL_DISABLE_FLAG_DISABLE_PPPOE \ UINT32_C(0x80) - uint8_t unused_1[2]; + uint16_t xid_partition_cap; + /* + * When this bit is '1', it indicates that FW is capable of + * supporting partition based XID management for KTLS TX + * key contexts. + */ + #define HWRM_FUNC_QCAPS_OUTPUT_XID_PARTITION_CAP_KTLS_TKC \ + UINT32_C(0x1) + /* + * When this bit is '1', it indicates that FW is capable of + * supporting partition based XID management for KTLS RX + * key contexts. + */ + #define HWRM_FUNC_QCAPS_OUTPUT_XID_PARTITION_CAP_KTLS_RKC \ + UINT32_C(0x2) + /* + * When this bit is '1', it indicates that FW is capable of + * supporting partition based XID management for QUIC TX + * key contexts. + */ + #define HWRM_FUNC_QCAPS_OUTPUT_XID_PARTITION_CAP_QUIC_TKC \ + UINT32_C(0x4) + /* + * When this bit is '1', it indicates that FW is capable of + * supporting partition based XID management for QUIC RX + * key contexts. + */ + #define HWRM_FUNC_QCAPS_OUTPUT_XID_PARTITION_CAP_QUIC_RKC \ + UINT32_C(0x8) /* * This value uniquely identifies the hardware NIC used by the * function. The value returned will be the same for all functions. @@ -14901,7 +14988,55 @@ struct hwrm_func_qcaps_output { * PCIe Capability Device Serial Number. */ uint8_t device_serial_number[8]; - uint8_t unused_2[7]; + /* + * This field is only valid in the XID partition mode. It indicates + * the number contexts per partition. + */ + uint16_t ctxs_per_partition; + uint8_t unused_2[2]; + /* + * The maximum number of address vectors that may be allocated across + * all VFs for the function. This is valid only on the PF with VF RoCE + * (SR-IOV) enabled. 
Returns zero if this command is called on a PF + * with VF RoCE (SR-IOV) disabled or on a VF. + */ + uint32_t roce_vf_max_av; + /* + * The maximum number of completion queues that may be allocated across + * all VFs for the function. This is valid only on the PF with VF RoCE + * (SR-IOV) enabled. Returns zero if this command is called on a PF + * with VF RoCE (SR-IOV) disabled or on a VF. + */ + uint32_t roce_vf_max_cq; + /* + * The maximum number of memory regions plus memory windows that may be + * allocated across all VFs for the function. This is valid only on the + * PF with VF RoCE (SR-IOV) enabled. Returns zero if this command is + * called on a PF with VF RoCE (SR-IOV) disabled or on a VF. + */ + uint32_t roce_vf_max_mrw; + /* + * The maximum number of queue pairs that may be allocated across + * all VFs for the function. This is valid only on the PF with VF RoCE + * (SR-IOV) enabled. Returns zero if this command is called on a PF + * with VF RoCE (SR-IOV) disabled or on a VF. + */ + uint32_t roce_vf_max_qp; + /* + * The maximum number of shared receive queues that may be allocated + * across all VFs for the function. This is valid only on the PF with + * VF RoCE (SR-IOV) enabled. Returns zero if this command is called on + * a PF with VF RoCE (SR-IOV) disabled or on a VF. + */ + uint32_t roce_vf_max_srq; + /* + * The maximum number of GIDs that may be allocated across all VFs for + * the function. This is valid only on the PF with VF RoCE (SR-IOV) + * enabled. Returns zero if this command is called on a PF with VF RoCE + * (SR-IOV) disabled or on a VF. + */ + uint32_t roce_vf_max_gid; + uint8_t unused_3[3]; /* * This field is used in Output records to indicate that the output * is completely written to RAM. 
This field should be read as '1' @@ -14959,7 +15094,7 @@ struct hwrm_func_qcfg_input { uint8_t unused_0[6]; } __rte_packed; -/* hwrm_func_qcfg_output (size:1024b/128B) */ +/* hwrm_func_qcfg_output (size:1280b/160B) */ struct hwrm_func_qcfg_output { /* The specific error status for the command. */ uint16_t error_code; @@ -15604,11 +15739,68 @@ struct hwrm_func_qcfg_output { */ uint16_t port_kdnet_fid; uint8_t unused_5[2]; - /* Number of Tx Key Contexts allocated. */ - uint32_t alloc_tx_key_ctxs; - /* Number of Rx Key Contexts allocated. */ - uint32_t alloc_rx_key_ctxs; - uint8_t unused_6[7]; + /* Number of KTLS Tx Key Contexts allocated. */ + uint32_t num_ktls_tx_key_ctxs; + /* Number of KTLS Rx Key Contexts allocated. */ + uint32_t num_ktls_rx_key_ctxs; + /* + * The LAG idx of this function. The lag_id is per port and the + * valid lag_id is from 0 to 7, if there is no valid lag_id, + * 0xff will be returned. + * This HW lag id is used for Truflow programming only. + */ + uint8_t lag_id; + /* Partition interface for this function. */ + uint8_t parif; + /* + * The LAG ID of a hardware link aggregation group (LAG) whose + * member ports include the port of this function. The LAG was + * previously created using HWRM_FUNC_LAG_CREATE. If the port of this + * function is not a member of any LAG, the fw_lag_id will be 0xff. + */ + uint8_t fw_lag_id; + uint8_t unused_6; + /* Number of QUIC Tx Key Contexts allocated. */ + uint32_t num_quic_tx_key_ctxs; + /* Number of QUIC Rx Key Contexts allocated. */ + uint32_t num_quic_rx_key_ctxs; + /* + * Number of AVs per VF. Only valid for PF. This field is ignored + * when the flag, l2_vf_resource_mgmt, is not set in RoCE + * initialize_fw. + */ + uint32_t roce_max_av_per_vf; + /* + * Number of CQs per VF. Only valid for PF. This field is ignored when + * the flag, l2_vf_resource_mgmt, is not set in RoCE initialize_fw. + */ + uint32_t roce_max_cq_per_vf; + /* + * Number of MR/MWs per VF. Only valid for PF. 
This field is ignored + * when the flag, l2_vf_resource_mgmt, is not set in RoCE + * initialize_fw. + */ + uint32_t roce_max_mrw_per_vf; + /* + * Number of QPs per VF. Only valid for PF. This field is ignored when + * the flag, l2_vf_resource_mgmt, is not set in RoCE initialize_fw. + */ + uint32_t roce_max_qp_per_vf; + /* + * Number of SRQs per VF. Only valid for PF. This field is ignored + * when the flag, l2_vf_resource_mgmt, is not set in RoCE + * initialize_fw. + */ + uint32_t roce_max_srq_per_vf; + /* + * Number of GIDs per VF. Only valid for PF. This field is ignored + * when the flag, l2_vf_resource_mgmt, is not set in RoCE + * initialize_fw. + */ + uint32_t roce_max_gid_per_vf; + /* Bitmap of context types that have XID partition enabled. */ + uint16_t xid_partition_cfg; + uint8_t unused_7; /* * This field is used in Output records to indicate that the output * is completely written to RAM. This field should be read as '1' @@ -15624,7 +15816,7 @@ struct hwrm_func_qcfg_output { *****************/ -/* hwrm_func_cfg_input (size:1024b/128B) */ +/* hwrm_func_cfg_input (size:1280b/160B) */ struct hwrm_func_cfg_input { /* The HWRM command request type. */ uint16_t req_type; @@ -15888,15 +16080,6 @@ struct hwrm_func_cfg_input { */ #define HWRM_FUNC_CFG_INPUT_FLAGS_BD_METADATA_DISABLE \ UINT32_C(0x40000000) - /* - * If this bit is set to 1, the driver is requesting FW to see if - * all the assets requested in this command (i.e. number of KTLS/ - * QUIC key contexts) are available. The firmware will return an - * error if the requested assets are not available. The firmware - * will NOT reserve the assets if they are available. 
- */ - #define HWRM_FUNC_CFG_INPUT_FLAGS_KEY_CTX_ASSETS_TEST \ - UINT32_C(0x80000000) uint32_t enables; /* * This bit must be '1' for the admin_mtu field to be @@ -16080,16 +16263,16 @@ struct hwrm_func_cfg_input { #define HWRM_FUNC_CFG_INPUT_ENABLES_HOST_MTU \ UINT32_C(0x20000000) /* - * This bit must be '1' for the number of Tx Key Contexts - * field to be configured. + * This bit must be '1' for the num_ktls_tx_key_ctxs field to be + * configured. */ - #define HWRM_FUNC_CFG_INPUT_ENABLES_TX_KEY_CTXS \ + #define HWRM_FUNC_CFG_INPUT_ENABLES_KTLS_TX_KEY_CTXS \ UINT32_C(0x40000000) /* - * This bit must be '1' for the number of Rx Key Contexts - * field to be configured. + * This bit must be '1' for the num_ktls_rx_key_ctxs field to be + * configured. */ - #define HWRM_FUNC_CFG_INPUT_ENABLES_RX_KEY_CTXS \ + #define HWRM_FUNC_CFG_INPUT_ENABLES_KTLS_RX_KEY_CTXS \ UINT32_C(0x80000000) /* * This field can be used by the admin PF to configure @@ -16542,19 +16725,93 @@ struct hwrm_func_cfg_input { * ring that is assigned to a function has a valid mtu. */ uint16_t host_mtu; - uint8_t unused_0[4]; + uint32_t flags2; + /* + * If this bit is set to 1, the driver is requesting the firmware + * to see if the assets (i.e., the number of KTLS key contexts) + * requested in this command are available. The firmware will return + * an error if the requested assets are not available. The firmware + * will NOT reserve the assets if they are available. + */ + #define HWRM_FUNC_CFG_INPUT_FLAGS2_KTLS_KEY_CTX_ASSETS_TEST \ + UINT32_C(0x1) + /* + * If this bit is set to 1, the driver is requesting the firmware + * to see if the assets (i.e., the number of QUIC key contexts) + * requested in this command are available. The firmware will return + * an error if the requested assets are not available. The firmware + * will NOT reserve the assets if they are available. 
+ */ + #define HWRM_FUNC_CFG_INPUT_FLAGS2_QUIC_KEY_CTX_ASSETS_TEST \ + UINT32_C(0x2) uint32_t enables2; /* * This bit must be '1' for the kdnet_mode field to be * configured. */ - #define HWRM_FUNC_CFG_INPUT_ENABLES2_KDNET UINT32_C(0x1) + #define HWRM_FUNC_CFG_INPUT_ENABLES2_KDNET \ + UINT32_C(0x1) /* * This bit must be '1' for the db_page_size field to be * configured. Legacy controller core FW may silently ignore * the db_page_size programming request through this command. */ - #define HWRM_FUNC_CFG_INPUT_ENABLES2_DB_PAGE_SIZE UINT32_C(0x2) + #define HWRM_FUNC_CFG_INPUT_ENABLES2_DB_PAGE_SIZE \ + UINT32_C(0x2) + /* + * This bit must be '1' for the num_quic_tx_key_ctxs field to be + * configured. + */ + #define HWRM_FUNC_CFG_INPUT_ENABLES2_QUIC_TX_KEY_CTXS \ + UINT32_C(0x4) + /* + * This bit must be '1' for the num_quic_rx_key_ctxs field to be + * configured. + */ + #define HWRM_FUNC_CFG_INPUT_ENABLES2_QUIC_RX_KEY_CTXS \ + UINT32_C(0x8) + /* + * This bit must be '1' for the roce_max_av_per_vf field to be + * configured. + */ + #define HWRM_FUNC_CFG_INPUT_ENABLES2_ROCE_MAX_AV_PER_VF \ + UINT32_C(0x10) + /* + * This bit must be '1' for the roce_max_cq_per_vf field to be + * configured. Only valid for PF. + */ + #define HWRM_FUNC_CFG_INPUT_ENABLES2_ROCE_MAX_CQ_PER_VF \ + UINT32_C(0x20) + /* + * This bit must be '1' for the roce_max_mrw_per_vf field to be + * configured. Only valid for PF. + */ + #define HWRM_FUNC_CFG_INPUT_ENABLES2_ROCE_MAX_MRW_PER_VF \ + UINT32_C(0x40) + /* + * This bit must be '1' for the roce_max_qp_per_vf field to be + * configured. + */ + #define HWRM_FUNC_CFG_INPUT_ENABLES2_ROCE_MAX_QP_PER_VF \ + UINT32_C(0x80) + /* + * This bit must be '1' for the roce_max_srq_per_vf field to be + * configured. Only valid for PF. + */ + #define HWRM_FUNC_CFG_INPUT_ENABLES2_ROCE_MAX_SRQ_PER_VF \ + UINT32_C(0x100) + /* + * This bit must be '1' for the roce_max_gid_per_vf field to be + * configured. Only valid for PF. 
+ */ + #define HWRM_FUNC_CFG_INPUT_ENABLES2_ROCE_MAX_GID_PER_VF \ + UINT32_C(0x200) + /* + * This bit must be '1' for the xid_partition_cfg field to be + * configured. Only valid for PF. + */ + #define HWRM_FUNC_CFG_INPUT_ENABLES2_XID_PARTITION_CFG \ + UINT32_C(0x400) /* * KDNet mode for the port for this function. If NPAR is * also configured on this port, it takes precedence. KDNet @@ -16602,11 +16859,56 @@ struct hwrm_func_cfg_input { #define HWRM_FUNC_CFG_INPUT_DB_PAGE_SIZE_LAST \ HWRM_FUNC_CFG_INPUT_DB_PAGE_SIZE_4MB uint8_t unused_1[2]; - /* Number of Tx Key Contexts requested. */ - uint32_t num_tx_key_ctxs; - /* Number of Rx Key Contexts requested. */ - uint32_t num_rx_key_ctxs; - uint8_t unused_2[4]; + /* Number of KTLS Tx Key Contexts requested. */ + uint32_t num_ktls_tx_key_ctxs; + /* Number of KTLS Rx Key Contexts requested. */ + uint32_t num_ktls_rx_key_ctxs; + /* Number of QUIC Tx Key Contexts requested. */ + uint32_t num_quic_tx_key_ctxs; + /* Number of QUIC Rx Key Contexts requested. */ + uint32_t num_quic_rx_key_ctxs; + /* Number of AVs per VF. Only valid for PF. */ + uint32_t roce_max_av_per_vf; + /* Number of CQs per VF. Only valid for PF. */ + uint32_t roce_max_cq_per_vf; + /* Number of MR/MWs per VF. Only valid for PF. */ + uint32_t roce_max_mrw_per_vf; + /* Number of QPs per VF. Only valid for PF. */ + uint32_t roce_max_qp_per_vf; + /* Number of SRQs per VF. Only valid for PF. */ + uint32_t roce_max_srq_per_vf; + /* Number of GIDs per VF. Only valid for PF. */ + uint32_t roce_max_gid_per_vf; + /* + * Bitmap of context kinds that have XID partition enabled. + * Only valid for PF. + */ + uint16_t xid_partition_cfg; + /* + * When this bit is '1', it indicates that driver enables XID + * partition on KTLS TX key contexts. + */ + #define HWRM_FUNC_CFG_INPUT_XID_PARTITION_CFG_KTLS_TKC \ + UINT32_C(0x1) + /* + * When this bit is '1', it indicates that driver enables XID + * partition on KTLS RX key contexts. 
+ */ + #define HWRM_FUNC_CFG_INPUT_XID_PARTITION_CFG_KTLS_RKC \ + UINT32_C(0x2) + /* + * When this bit is '1', it indicates that driver enables XID + * partition on QUIC TX key contexts. + */ + #define HWRM_FUNC_CFG_INPUT_XID_PARTITION_CFG_QUIC_TKC \ + UINT32_C(0x4) + /* + * When this bit is '1', it indicates that driver enables XID + * partition on QUIC RX key contexts. + */ + #define HWRM_FUNC_CFG_INPUT_XID_PARTITION_CFG_QUIC_RKC \ + UINT32_C(0x8) + uint16_t unused_2; } __rte_packed; /* hwrm_func_cfg_output (size:128b/16B) */ @@ -22466,8 +22768,14 @@ struct hwrm_func_backing_store_cfg_v2_input { * which means "0" indicates the first instance. For backing * stores with single instance only, leave this field to 0. * 1. If the backing store type is MPC TQM ring, use the following - * instance value to MPC client mapping: + * instance value to map to MPC clients: * TCE (0), RCE (1), TE_CFA(2), RE_CFA (3), PRIMATE(4) + * 2. If the backing store type is TBL_SCOPE, use the following + * instance value to map to table scope regions: + * RE_CFA_LKUP (0), RE_CFA_ACT (1), TE_CFA_LKUP(2), TE_CFA_ACT (3) + * 3. If the backing store type is XID partition, use the following + * instance value to map to context types: + * KTLS_TKC (0), KTLS_RKC (1), QUIC_TKC (2), QUIC_RKC (3) */ uint16_t instance; /* Control flags. */ @@ -22578,7 +22886,8 @@ struct hwrm_func_backing_store_cfg_v2_input { * | SRQ | srq_split_entries | * | CQ | cq_split_entries | * | VINC | vnic_split_entries | - * | MRAV | marv_split_entries | + * | MRAV | mrav_split_entries | + * | TS | ts_split_entries | */ uint32_t split_entry_0; /* Split entry #1. */ @@ -22711,6 +23020,15 @@ struct hwrm_func_backing_store_qcfg_v2_input { * Instance of the backing store type. It is zero-based, * which means "0" indicates the first instance. For backing * stores with single instance only, leave this field to 0. + * 1. 
If the backing store type is MPC TQM ring, use the following + * instance value to map to MPC clients: + * TCE (0), RCE (1), TE_CFA(2), RE_CFA (3), PRIMATE(4) + * 2. If the backing store type is TBL_SCOPE, use the following + * instance value to map to table scope regions: + * RE_CFA_LKUP (0), RE_CFA_ACT (1), TE_CFA_LKUP(2), TE_CFA_ACT (3) + * 3. If the backing store type is XID partition, use the following + * instance value to map to context types: + * KTLS_TKC (0), KTLS_RKC (1), QUIC_TKC (2), QUIC_RKC (3) */ uint16_t instance; uint8_t rsvd[4]; @@ -22779,6 +23097,15 @@ struct hwrm_func_backing_store_qcfg_v2_output { * Instance of the backing store type. It is zero-based, * which means "0" indicates the first instance. For backing * stores with single instance only, leave this field to 0. + * 1. If the backing store type is MPC TQM ring, use the following + * instance value to map to MPC clients: + * TCE (0), RCE (1), TE_CFA(2), RE_CFA (3), PRIMATE(4) + * 2. If the backing store type is TBL_SCOPE, use the following + * instance value to map to table scope regions: + * RE_CFA_LKUP (0), RE_CFA_ACT (1), TE_CFA_LKUP(2), TE_CFA_ACT (3) + * 3. If the backing store type is XID partition, use the following + * instance value to map to context types: + * KTLS_TKC (0), KTLS_RKC (1), QUIC_TKC (2), QUIC_RKC (3) */ uint16_t instance; /* Control flags. */ @@ -22855,7 +23182,8 @@ struct hwrm_func_backing_store_qcfg_v2_output { * | SRQ | srq_split_entries | * | CQ | cq_split_entries | * | VINC | vnic_split_entries | - * | MRAV | marv_split_entries | + * | MRAV | mrav_split_entries | + * | TS | ts_split_entries | */ uint32_t split_entry_0; /* Split entry #1. */ @@ -22876,17 +23204,20 @@ struct hwrm_func_backing_store_qcfg_v2_output { uint8_t valid; } __rte_packed; -/* Common structure to cast QPC split entries. This casting is required in the following HWRM command inputs/outputs if the backing store type is QPC. 1. hwrm_func_backing_store_cfg_v2_input 2. 
hwrm_func_backing_store_qcfg_v2_output 3. hwrm_func_backing_store_qcaps_v2_output */ /* qpc_split_entries (size:128b/16B) */ struct qpc_split_entries { /* Number of L2 QP backing store entries. */ uint32_t qp_num_l2_entries; /* Number of QP1 entries. */ uint32_t qp_num_qp1_entries; - uint32_t rsvd[2]; + /* + * Number of RoCE QP context entries required for this + * function to support fast QP modify destroy feature. + */ + uint32_t qp_num_fast_qpmd_entries; + uint32_t rsvd; } __rte_packed; -/* Common structure to cast SRQ split entries. This casting is required in the following HWRM command inputs/outputs if the backing store type is SRQ. 1. hwrm_func_backing_store_cfg_v2_input 2. hwrm_func_backing_store_qcfg_v2_output 3. hwrm_func_backing_store_qcaps_v2_output */ /* srq_split_entries (size:128b/16B) */ struct srq_split_entries { /* Number of L2 SRQ backing store entries. */ @@ -22895,7 +23226,6 @@ struct srq_split_entries { uint32_t rsvd2[2]; } __rte_packed; -/* Common structure to cast CQ split entries. This casting is required in the following HWRM command inputs/outputs if the backing store type is CQ. 1. hwrm_func_backing_store_cfg_v2_input 2. hwrm_func_backing_store_qcfg_v2_output 3. hwrm_func_backing_store_qcaps_v2_output */ /* cq_split_entries (size:128b/16B) */ struct cq_split_entries { /* Number of L2 CQ backing store entries. */ @@ -22904,7 +23234,6 @@ struct cq_split_entries { uint32_t rsvd2[2]; } __rte_packed; -/* Common structure to cast VNIC split entries. This casting is required in the following HWRM command inputs/outputs if the backing store type is VNIC. 1. hwrm_func_backing_store_cfg_v2_input 2. hwrm_func_backing_store_qcfg_v2_output 3. hwrm_func_backing_store_qcaps_v2_output */ /* vnic_split_entries (size:128b/16B) */ struct vnic_split_entries { /* Number of VNIC backing store entries. */ @@ -22913,7 +23242,6 @@ struct vnic_split_entries { uint32_t rsvd2[2]; } __rte_packed; -/* Common structure to cast MRAV split entries. 
This casting is required in the following HWRM command inputs/outputs if the backing store type is MRAV. 1. hwrm_func_backing_store_cfg_v2_input 2. hwrm_func_backing_store_qcfg_v2_output 3. hwrm_func_backing_store_qcaps_v2_output */ /* mrav_split_entries (size:128b/16B) */ struct mrav_split_entries { /* Number of AV backing store entries. */ @@ -22922,6 +23250,21 @@ struct mrav_split_entries { uint32_t rsvd2[2]; } __rte_packed; +/* ts_split_entries (size:128b/16B) */ +struct ts_split_entries { + /* Max number of TBL_SCOPE region entries (QCAPS). */ + uint32_t region_num_entries; + /* tsid to configure (CFG). */ + uint8_t tsid; + /* + * Lkup static bucket count (power of 2). + * Array is indexed by enum cfa_dir. + */ + uint8_t lkup_static_bkt_cnt_exp[2]; + uint8_t rsvd; + uint32_t rsvd2[2]; +} __rte_packed; + /************************************ * hwrm_func_backing_store_qcaps_v2 * ************************************/ @@ -23112,12 +23455,36 @@ struct hwrm_func_backing_store_qcaps_v2_output { */ #define HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_FLAGS_DRIVER_MANAGED_MEMORY \ UINT32_C(0x4) + /* + * When set, it indicates the support of the following capability + * that is specific to the QP type: + * - For 2-port adapters, the ability to extend the RoCE QP + * entries configured on a PF, during some network events such as + * Link Down. The count of these additional entries is included in + * the advertised 'max_num_entries'. + * - The count of RoCE QP entries, derived from 'max_num_entries' + * (max_num_entries - qp_num_qp1_entries - qp_num_l2_entries - + * qp_num_fast_qpmd_entries, note qp_num_fast_qpmd_entries is + * always zero when QPs are pseudo-statically allocated), includes + * the count of QPs that can be migrated from the other PF (e.g., + * during network link down). Therefore, during normal operation + * when both PFs are active, the supported number of RoCE QPs for + * each PF is half of the advertised value.
+ */ + #define HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_FLAGS_ROCE_QP_PSEUDO_STATIC_ALLOC \ + UINT32_C(0x8) /* * Bit map of the valid instances associated with the * backing store type. * 1. If the backing store type is MPC TQM ring, use the following - * bit to MPC client mapping: + * bits to map to MPC clients: * TCE (0), RCE (1), TE_CFA(2), RE_CFA (3), PRIMATE(4) + * 2. If the backing store type is TBL_SCOPE, use the following + * bits to map to table scope regions: + * RE_CFA_LKUP (0), RE_CFA_ACT (1), TE_CFA_LKUP(2), TE_CFA_ACT (3) + * 3. If the backing store type is VF XID partition in-use table, use + * the following bits to map to context types: + * KTLS_TKC (0), KTLS_RKC (1), QUIC_TKC (2), QUIC_RKC (3) */ uint32_t instance_bit_map; /* @@ -23164,7 +23531,43 @@ struct hwrm_func_backing_store_qcaps_v2_output { * | 4 | All four split entries have valid data. | */ uint8_t subtype_valid_cnt; - uint8_t rsvd2; + /* + * Bitmap that indicates whether each 'split_entry' denotes an + * exact count (i.e., min = max). When the exact count bit is set, + * exactly the advertised number of entries has to be configured. + * Any 'split_entry' flagged as exact count by this bitmap must be + * a valid split entry specified by 'subtype_valid_cnt'. + */ + uint8_t exact_cnt_bit_map; + /* + * When this bit is '1', it indicates 'split_entry_0' contains + * an exact count. + */ + #define HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_EXACT_CNT_BIT_MAP_SPLIT_ENTRY_0_EXACT \ + UINT32_C(0x1) + /* + * When this bit is '1', it indicates 'split_entry_1' contains + * an exact count. + */ + #define HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_EXACT_CNT_BIT_MAP_SPLIT_ENTRY_1_EXACT \ + UINT32_C(0x2) + /* + * When this bit is '1', it indicates 'split_entry_2' contains + * an exact count.
+ */ + #define HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_EXACT_CNT_BIT_MAP_SPLIT_ENTRY_2_EXACT \ + UINT32_C(0x4) + /* + * When this bit is '1', it indicates 'split_entry_3' contains + * an exact count. + */ + #define HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_EXACT_CNT_BIT_MAP_SPLIT_ENTRY_3_EXACT \ + UINT32_C(0x8) + #define HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_EXACT_CNT_BIT_MAP_UNUSED_MASK \ + UINT32_C(0xf0) + #define HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_EXACT_CNT_BIT_MAP_UNUSED_SFT \ + 4 /* * Split entry #0. Note that the four split entries (as a group) * must be cast to a type-specific data structure first before @@ -23176,7 +23579,8 @@ struct hwrm_func_backing_store_qcaps_v2_output { * | SRQ | srq_split_entries | * | CQ | cq_split_entries | * | VINC | vnic_split_entries | - * | MRAV | marv_split_entries | + * | MRAV | mrav_split_entries | + * | TS | ts_split_entries | */ uint32_t split_entry_0; /* Split entry #1. */ @@ -23471,7 +23875,9 @@ struct hwrm_func_dbr_pacing_qcfg_output { * dbr_throttling_aeq_arm_reg register. */ uint8_t dbr_throttling_aeq_arm_reg_val; - uint8_t unused_3[7]; + uint8_t unused_3[3]; + /* This field indicates the maximum depth of the doorbell FIFO. */ + uint32_t dbr_stat_db_max_fifo_depth; /* * Specifies primary function’s NQ ID. * A value of 0xFFFF FFFF indicates NQ ID is invalid. @@ -25128,7 +25534,7 @@ struct hwrm_func_spd_qcfg_output { *********************/ -/* hwrm_port_phy_cfg_input (size:448b/56B) */ +/* hwrm_port_phy_cfg_input (size:512b/64B) */ struct hwrm_port_phy_cfg_input { /* The HWRM command request type. */ uint16_t req_type; @@ -25505,6 +25911,18 @@ struct hwrm_port_phy_cfg_input { */ #define HWRM_PORT_PHY_CFG_INPUT_ENABLES_AUTO_PAM4_LINK_SPEED_MASK \ UINT32_C(0x1000) + /* + * This bit must be '1' for the force_link_speeds2 field to be + * configured. 
+ */ + #define HWRM_PORT_PHY_CFG_INPUT_ENABLES_FORCE_LINK_SPEEDS2 \ + UINT32_C(0x2000) + /* + * This bit must be '1' for the auto_link_speeds2_mask field to + * be configured. + */ + #define HWRM_PORT_PHY_CFG_INPUT_ENABLES_AUTO_LINK_SPEEDS2_MASK \ + UINT32_C(0x4000) /* Port ID of port that is to be configured. */ uint16_t port_id; /* @@ -25808,7 +26226,99 @@ struct hwrm_port_phy_cfg_input { UINT32_C(0x2) #define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_PAM4_SPEED_MASK_200G \ UINT32_C(0x4) - uint8_t unused_2[2]; + /* + * This is the speed that will be used if the force_link_speeds2 + * bit is '1'. If an unsupported speed is selected, an error + * will be generated. + */ + uint16_t force_link_speeds2; + /* 1Gb link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_1GB \ + UINT32_C(0xa) + /* 10Gb link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_10GB \ + UINT32_C(0x64) + /* 25Gb link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_25GB \ + UINT32_C(0xfa) + /* 40Gb link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_40GB \ + UINT32_C(0x190) + /* 50Gb link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_50GB \ + UINT32_C(0x1f4) + /* 100Gb link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_100GB \ + UINT32_C(0x3e8) + /* 50Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_50GB_PAM4_56 \ + UINT32_C(0x1f5) + /* 100Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_100GB_PAM4_56 \ + UINT32_C(0x3e9) + /* 200Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_200GB_PAM4_56 \ + UINT32_C(0x7d1) + /* 400Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_400GB_PAM4_56 \ + UINT32_C(0xfa1) + /* 100Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_100GB_PAM4_112 \ + UINT32_C(0x3ea) + /* 200Gb
(PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_200GB_PAM4_112 \ + UINT32_C(0x7d2) + /* 400Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_400GB_PAM4_112 \ + UINT32_C(0xfa2) + #define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_LAST \ + HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_400GB_PAM4_112 + /* + * This is a mask of link speeds that will be used if the + * auto_link_speeds2_mask bit in the "enables" field is '1'. + * If an unsupported speed is enabled, an error will be generated. + */ + uint16_t auto_link_speeds2_mask; + /* 1Gb link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_1GB \ + UINT32_C(0x1) + /* 10Gb link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_10GB \ + UINT32_C(0x2) + /* 25Gb link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_25GB \ + UINT32_C(0x4) + /* 40Gb link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_40GB \ + UINT32_C(0x8) + /* 50Gb link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_50GB \ + UINT32_C(0x10) + /* 100Gb link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_100GB \ + UINT32_C(0x20) + /* 50Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_50GB_PAM4_56 \ + UINT32_C(0x40) + /* 100Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_100GB_PAM4_56 \ + UINT32_C(0x80) + /* 200Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_200GB_PAM4_56 \ + UINT32_C(0x100) + /* 400Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_400GB_PAM4_56 \ + UINT32_C(0x200) + /* 100Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_100GB_PAM4_112 \ + UINT32_C(0x400) + /* 200Gb (PAM4-112: 100G per lane) link speed */ + #define
HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_200GB_PAM4_112 \ + UINT32_C(0x800) + /* 400Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_400GB_PAM4_112 \ + UINT32_C(0x1000) + uint8_t unused_2[6]; } __rte_packed; /* hwrm_port_phy_cfg_output (size:128b/16B) */ @@ -25932,11 +26442,14 @@ struct hwrm_port_phy_qcfg_output { /* NRZ signaling */ #define HWRM_PORT_PHY_QCFG_OUTPUT_SIGNAL_MODE_NRZ \ UINT32_C(0x0) - /* PAM4 signaling */ + /* PAM4-56 signaling */ #define HWRM_PORT_PHY_QCFG_OUTPUT_SIGNAL_MODE_PAM4 \ UINT32_C(0x1) + /* PAM4-112 signaling */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SIGNAL_MODE_PAM4_112 \ + UINT32_C(0x2) #define HWRM_PORT_PHY_QCFG_OUTPUT_SIGNAL_MODE_LAST \ - HWRM_PORT_PHY_QCFG_OUTPUT_SIGNAL_MODE_PAM4 + HWRM_PORT_PHY_QCFG_OUTPUT_SIGNAL_MODE_PAM4_112 /* This value indicates the current active FEC mode. */ #define HWRM_PORT_PHY_QCFG_OUTPUT_ACTIVE_FEC_MASK \ UINT32_C(0xf0) @@ -25992,6 +26505,8 @@ struct hwrm_port_phy_qcfg_output { #define HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100GB UINT32_C(0x3e8) /* 200Gb link speed */ #define HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_200GB UINT32_C(0x7d0) + /* 400Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_400GB UINT32_C(0xfa0) /* 10Mb link speed */ #define HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_10MB UINT32_C(0xffff) #define HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_LAST \ @@ -26446,8 +26961,56 @@ struct hwrm_port_phy_qcfg_output { /* 100G_BASEER2 */ #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_100G_BASEER2 \ UINT32_C(0x27) + /* 100G_BASECR */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_100G_BASECR \ + UINT32_C(0x28) + /* 100G_BASESR */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_100G_BASESR \ + UINT32_C(0x29) + /* 100G_BASELR */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_100G_BASELR \ + UINT32_C(0x2a) + /* 100G_BASEER */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_100G_BASEER \ + UINT32_C(0x2b) + /* 200G_BASECR2 */ + #define
HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_200G_BASECR2 \ + UINT32_C(0x2c) + /* 200G_BASESR2 */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_200G_BASESR2 \ + UINT32_C(0x2d) + /* 200G_BASELR2 */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_200G_BASELR2 \ + UINT32_C(0x2e) + /* 200G_BASEER2 */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_200G_BASEER2 \ + UINT32_C(0x2f) + /* 400G_BASECR8 */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_400G_BASECR8 \ + UINT32_C(0x30) + /* 400G_BASESR8 */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_400G_BASESR8 \ + UINT32_C(0x31) + /* 400G_BASELR8 */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_400G_BASELR8 \ + UINT32_C(0x32) + /* 400G_BASEER8 */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_400G_BASEER8 \ + UINT32_C(0x33) + /* 400G_BASECR4 */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_400G_BASECR4 \ + UINT32_C(0x34) + /* 400G_BASESR4 */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_400G_BASESR4 \ + UINT32_C(0x35) + /* 400G_BASELR4 */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_400G_BASELR4 \ + UINT32_C(0x36) + /* 400G_BASEER4 */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_400G_BASEER4 \ + UINT32_C(0x37) #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_LAST \ - HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_100G_BASEER2 + HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_400G_BASEER4 /* This value represents a media type. */ uint8_t media_type; /* Unknown */ @@ -26855,6 +27418,12 @@ struct hwrm_port_phy_qcfg_output { */ #define HWRM_PORT_PHY_QCFG_OUTPUT_OPTION_FLAGS_SIGNAL_MODE_KNOWN \ UINT32_C(0x2) + /* + * When this bit is '1', speeds2 fields are used to get + * speed details. + */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_OPTION_FLAGS_SPEEDS2_SUPPORTED \ + UINT32_C(0x4) /* * Up to 16 bytes of null padded ASCII string representing * PHY vendor.
@@ -26933,7 +27502,162 @@ struct hwrm_port_phy_qcfg_output { uint8_t link_down_reason; /* Remote fault */ #define HWRM_PORT_PHY_QCFG_OUTPUT_LINK_DOWN_REASON_RF UINT32_C(0x1) - uint8_t unused_0[7]; + /* + * The supported speeds for the port. This is a bit mask. + * For each speed that is supported, the corresponding + * bit will be set to '1'. This is valid only if speeds2_supported + * is set in option_flags + */ + uint16_t support_speeds2; + /* 1Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_1GB \ + UINT32_C(0x1) + /* 10Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_10GB \ + UINT32_C(0x2) + /* 25Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_25GB \ + UINT32_C(0x4) + /* 40Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_40GB \ + UINT32_C(0x8) + /* 50Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_50GB \ + UINT32_C(0x10) + /* 100Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_100GB \ + UINT32_C(0x20) + /* 50Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_50GB_PAM4_56 \ + UINT32_C(0x40) + /* 100Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_100GB_PAM4_56 \ + UINT32_C(0x80) + /* 200Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_200GB_PAM4_56 \ + UINT32_C(0x100) + /* 400Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_400GB_PAM4_56 \ + UINT32_C(0x200) + /* 100Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_100GB_PAM4_112 \ + UINT32_C(0x400) + /* 200Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_200GB_PAM4_112 \ + UINT32_C(0x800) + /* 400Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_400GB_PAM4_112 \ + UINT32_C(0x1000) + /* 
800Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_800GB_PAM4_112 \ + UINT32_C(0x2000) + /* + * Current setting of forced link speed. When the link speed is not + * being forced, this value shall be set to 0. + * This field is valid only if speeds2_supported is set in option_flags. + */ + uint16_t force_link_speeds2; + /* 1Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_1GB \ + UINT32_C(0xa) + /* 10Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_10GB \ + UINT32_C(0x64) + /* 25Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_25GB \ + UINT32_C(0xfa) + /* 40Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_40GB \ + UINT32_C(0x190) + /* 50Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_50GB \ + UINT32_C(0x1f4) + /* 100Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_100GB \ + UINT32_C(0x3e8) + /* 50Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_50GB_PAM4_56 \ + UINT32_C(0x1f5) + /* 100Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_100GB_PAM4_56 \ + UINT32_C(0x3e9) + /* 200Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_200GB_PAM4_56 \ + UINT32_C(0x7d1) + /* 400Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_400GB_PAM4_56 \ + UINT32_C(0xfa1) + /* 100Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_100GB_PAM4_112 \ + UINT32_C(0x3ea) + /* 200Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_200GB_PAM4_112 \ + UINT32_C(0x7d2) + /* 400Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_400GB_PAM4_112 \ + UINT32_C(0xfa2) + /* 800Gb (PAM4-112: 100G per lane) 
link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_800GB_PAM4_112 \ + UINT32_C(0x1f42) + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_LAST \ + HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_800GB_PAM4_112 + /* + * Current setting of the auto_link_speeds2 mask that is used to + * advertise speeds during autonegotiation. + * This field is only valid when auto_mode is set to "mask" + * and speeds2_supported is set in option_flags. + * The speeds specified in this field shall be a subset of + * supported speeds on this port. + */ + uint16_t auto_link_speeds2; + /* 1Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_1GB \ + UINT32_C(0x1) + /* 10Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_10GB \ + UINT32_C(0x2) + /* 25Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_25GB \ + UINT32_C(0x4) + /* 40Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_40GB \ + UINT32_C(0x8) + /* 50Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_50GB \ + UINT32_C(0x10) + /* 100Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_100GB \ + UINT32_C(0x20) + /* 50Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_50GB_PAM4_56 \ + UINT32_C(0x40) + /* 100Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_100GB_PAM4_56 \ + UINT32_C(0x80) + /* 200Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_200GB_PAM4_56 \ + UINT32_C(0x100) + /* 400Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_400GB_PAM4_56 \ + UINT32_C(0x200) + /* 100Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_100GB_PAM4_112 \ + UINT32_C(0x400) + /* 200Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_200GB_PAM4_112 \ +
UINT32_C(0x800) + /* 400Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_400GB_PAM4_112 \ + UINT32_C(0x1000) + /* 800Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_800GB_PAM4_112 \ + UINT32_C(0x2000) + /* + * This field indicates the number of lanes used to transfer + * data. If the link is down, the value is zero. + * This is valid only if speeds2_supported is set in option_flags. + */ + uint8_t active_lanes; /* * This field is used in Output records to indicate that the output * is completely written to RAM. This field should be read as '1' @@ -28381,7 +29105,7 @@ struct tx_port_stats_ext { } __rte_packed; /* Port Rx Statistics extended Format */ -/* rx_port_stats_ext (size:3776b/472B) */ +/* rx_port_stats_ext (size:3904b/488B) */ struct rx_port_stats_ext { /* Number of times link state changed to down */ uint64_t link_down_events; @@ -28462,8 +29186,9 @@ struct rx_port_stats_ext { /* The number of events where the port receive buffer was over 85% full */ uint64_t rx_buffer_passed_threshold; /* - * The number of symbol errors that wasn't corrected by FEC correction - * algorithm + * This counter represents uncorrected symbol errors post-FEC and may not + * be populated in all cases. Each uncorrected FEC block may result in + * one or more symbol errors. */ uint64_t rx_pcs_symbol_err; /* The number of corrected bits on the port according to active FEC */ @@ -28507,6 +29232,21 @@ struct rx_port_stats_ext { * FEC function in the PHY */ uint64_t rx_fec_uncorrectable_blocks; + /* + * Total number of packets that are dropped due to not matching + * any RX filter rules. This value is zero on controllers that do + * not support this counter. This counter is per controller; firmware + * reports the same value on all active ports. This counter does not + * include packet discards due to no available buffers.
+ */ + uint64_t rx_filter_miss; + /* + * This field represents the number of FEC symbol errors by counting + * once for each 10-bit symbol corrected by the FEC block. + * rx_fec_corrected_blocks will be incremented if all symbol errors in a + * codeword get corrected. + */ + uint64_t rx_fec_symbol_err; } __rte_packed; /* @@ -29435,7 +30175,7 @@ struct hwrm_port_phy_qcaps_input { uint8_t unused_0[6]; } __rte_packed; -/* hwrm_port_phy_qcaps_output (size:256b/32B) */ +/* hwrm_port_phy_qcaps_output (size:320b/40B) */ struct hwrm_port_phy_qcaps_output { /* The specific error status for the command. */ uint16_t error_code; @@ -29725,6 +30465,13 @@ struct hwrm_port_phy_qcaps_output { */ #define HWRM_PORT_PHY_QCAPS_OUTPUT_FLAGS2_BANK_ADDR_SUPPORTED \ UINT32_C(0x4) + /* + * If set to 1, then this field indicates that the + * supported_speeds2 field is to be used in lieu of all + * supported_speeds variants. + */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_FLAGS2_SPEEDS2_SUPPORTED \ + UINT32_C(0x8) /* * Number of internal ports for this device. This field allows the FW * to advertise how many internal ports are present. Manufacturing * option "HPTN_MODE" is set to 1. */ uint8_t internal_port_cnt; + uint8_t unused_0; + /* + * This is a bit mask to indicate what speeds are supported + * as forced speeds on this link. + * For each speed that can be forced on this link, the + * corresponding mask bit shall be set to '1'.
+ * This field is valid only if speeds2_supported bit is set in flags2 + */ + uint16_t supported_speeds2_force_mode; + /* 1Gb link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_1GB \ + UINT32_C(0x1) + /* 10Gb link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_10GB \ + UINT32_C(0x2) + /* 25Gb link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_25GB \ + UINT32_C(0x4) + /* 40Gb link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_40GB \ + UINT32_C(0x8) + /* 50Gb link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_50GB \ + UINT32_C(0x10) + /* 100Gb link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_100GB \ + UINT32_C(0x20) + /* 50Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_50GB_PAM4_56 \ + UINT32_C(0x40) + /* 100Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_100GB_PAM4_56 \ + UINT32_C(0x80) + /* 200Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_200GB_PAM4_56 \ + UINT32_C(0x100) + /* 400Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_400GB_PAM4_56 \ + UINT32_C(0x200) + /* 100Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_100GB_PAM4_112 \ + UINT32_C(0x400) + /* 200Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_200GB_PAM4_112 \ + UINT32_C(0x800) + /* 400Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_400GB_PAM4_112 \ + UINT32_C(0x1000) + /* 800Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_800GB_PAM4_112 \ + UINT32_C(0x2000) 
+ /* + * This is a bit mask to indicate what speeds are supported + * for autonegotiation on this link. + * For each speed that can be autonegotiated on this link, the + * corresponding mask bit shall be set to '1'. + * This field is valid only if speeds2_supported bit is set in flags2 + */ + uint16_t supported_speeds2_auto_mode; + /* 1Gb link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_1GB \ + UINT32_C(0x1) + /* 10Gb link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_10GB \ + UINT32_C(0x2) + /* 25Gb link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_25GB \ + UINT32_C(0x4) + /* 40Gb link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_40GB \ + UINT32_C(0x8) + /* 50Gb link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_50GB \ + UINT32_C(0x10) + /* 100Gb link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_100GB \ + UINT32_C(0x20) + /* 50Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_50GB_PAM4_56 \ + UINT32_C(0x40) + /* 100Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_100GB_PAM4_56 \ + UINT32_C(0x80) + /* 200Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_200GB_PAM4_56 \ + UINT32_C(0x100) + /* 400Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_400GB_PAM4_56 \ + UINT32_C(0x200) + /* 100Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_100GB_PAM4_112 \ + UINT32_C(0x400) + /* 200Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_200GB_PAM4_112 \ + UINT32_C(0x800) + /* 400Gb (PAM4-112: 100G per lane) link speed */ + #define 
HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_400GB_PAM4_112 \ + UINT32_C(0x1000) + /* 800Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_800GB_PAM4_112 \ + UINT32_C(0x2000) + uint8_t unused_1[3]; /* * This field is used in Output records to indicate that the output * is completely written to RAM. This field should be read as '1' @@ -38132,6 +38981,9 @@ struct hwrm_vnic_qcaps_output { /* When this bit is '1' FW supports VNIC hash mode. */ #define HWRM_VNIC_QCAPS_OUTPUT_FLAGS_VNIC_RSS_HASH_MODE_CAP \ UINT32_C(0x10000000) + /* When this bit is set to '1', hardware supports tunnel TPA. */ + #define HWRM_VNIC_QCAPS_OUTPUT_FLAGS_HW_TUNNEL_TPA_CAP \ + UINT32_C(0x20000000) /* * This field advertises the maximum concurrent TPA aggregations * supported by the VNIC on new devices that support TPA v2 or v3. @@ -38154,7 +39006,7 @@ struct hwrm_vnic_qcaps_output { *********************/ -/* hwrm_vnic_tpa_cfg_input (size:320b/40B) */ +/* hwrm_vnic_tpa_cfg_input (size:384b/48B) */ struct hwrm_vnic_tpa_cfg_input { /* The HWRM command request type. */ uint16_t req_type; @@ -38276,6 +39128,12 @@ struct hwrm_vnic_tpa_cfg_input { #define HWRM_VNIC_TPA_CFG_INPUT_ENABLES_MAX_AGG_TIMER UINT32_C(0x4) /* deprecated bit. Do not use!!! */ #define HWRM_VNIC_TPA_CFG_INPUT_ENABLES_MIN_AGG_LEN UINT32_C(0x8) + /* + * This bit must be '1' for the tnl_tpa_en_bitmap field to be + * configured. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_ENABLES_TNL_TPA_EN \ + UINT32_C(0x10) /* Logical vnic ID */ uint16_t vnic_id; /* @@ -38332,6 +39190,117 @@ struct hwrm_vnic_tpa_cfg_input { * and can be queried using hwrm_vnic_tpa_qcfg. */ uint32_t min_agg_len; + /* + * If the device supports hardware tunnel TPA feature, as indicated by + * the HWRM_VNIC_QCAPS command, this field is used to configure the + * tunnel types to be enabled. Each bit corresponds to a specific + * tunnel type. 
If a bit is set to '1', then the associated tunnel + * type is enabled; otherwise, it is disabled. + */ + uint32_t tnl_tpa_en_bitmap; + /* + * When this bit is '1', enable VXLAN encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_VXLAN \ + UINT32_C(0x1) + /* + * When this bit is set to '1', enable GENEVE encapsulated packets + * for aggregation. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_GENEVE \ + UINT32_C(0x2) + /* + * When this bit is set to '1', enable NVGRE encapsulated packets + * for aggregation. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_NVGRE \ + UINT32_C(0x4) + /* + * When this bit is set to '1', enable GRE encapsulated packets + * for aggregation. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_GRE \ + UINT32_C(0x8) + /* + * When this bit is set to '1', enable IPV4 encapsulated packets + * for aggregation. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_IPV4 \ + UINT32_C(0x10) + /* + * When this bit is set to '1', enable IPV6 encapsulated packets + * for aggregation. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_IPV6 \ + UINT32_C(0x20) + /* + * When this bit is '1', enable VXLAN_GPE encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_VXLAN_GPE \ + UINT32_C(0x40) + /* + * When this bit is '1', enable VXLAN_CUSTOMER1 encapsulated packets + * for aggregation. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_VXLAN_CUST1 \ + UINT32_C(0x80) + /* + * When this bit is '1', enable GRE_CUSTOMER1 encapsulated packets + * for aggregation. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_GRE_CUST1 \ + UINT32_C(0x100) + /* + * When this bit is '1', enable UPAR1 encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_UPAR1 \ + UINT32_C(0x200) + /* + * When this bit is '1', enable UPAR2 encapsulated packets for + * aggregation.
+ */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_UPAR2 \ + UINT32_C(0x400) + /* + * When this bit is '1', enable UPAR3 encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_UPAR3 \ + UINT32_C(0x800) + /* + * When this bit is '1', enable UPAR4 encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_UPAR4 \ + UINT32_C(0x1000) + /* + * When this bit is '1', enable UPAR5 encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_UPAR5 \ + UINT32_C(0x2000) + /* + * When this bit is '1', enable UPAR6 encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_UPAR6 \ + UINT32_C(0x4000) + /* + * When this bit is '1', enable UPAR7 encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_UPAR7 \ + UINT32_C(0x8000) + /* + * When this bit is '1', enable UPAR8 encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_UPAR8 \ + UINT32_C(0x10000) + uint8_t unused_1[4]; } __rte_packed; /* hwrm_vnic_tpa_cfg_output (size:128b/16B) */ @@ -38355,6 +39324,288 @@ struct hwrm_vnic_tpa_cfg_output { uint8_t valid; } __rte_packed; +/********************** + * hwrm_vnic_tpa_qcfg * + **********************/ + + +/* hwrm_vnic_tpa_qcfg_input (size:192b/24B) */ +struct hwrm_vnic_tpa_qcfg_input { + /* The HWRM command request type. */ + uint16_t req_type; + /* + * The completion ring to send the completion event on. This should + * be the NQ ID returned from the `nq_alloc` HWRM command. + */ + uint16_t cmpl_ring; + /* + * The sequence ID is used by the driver for tracking multiple + * commands. This ID is treated as opaque data by the firmware and + * the value is returned in the `hwrm_resp_hdr` upon completion. 
+ */ + uint16_t seq_id; + /* + * The target ID of the command: + * * 0x0-0xFFF8 - The function ID + * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors + * * 0xFFFD - Reserved for user-space HWRM interface + * * 0xFFFF - HWRM + */ + uint16_t target_id; + /* + * A physical address pointer pointing to a host buffer that the + * command's response data will be written. This can be either a host + * physical address (HPA) or a guest physical address (GPA) and must + * point to a physically contiguous block of memory. + */ + uint64_t resp_addr; + /* Logical vnic ID */ + uint16_t vnic_id; + uint8_t unused_0[6]; +} __rte_packed; + +/* hwrm_vnic_tpa_qcfg_output (size:256b/32B) */ +struct hwrm_vnic_tpa_qcfg_output { + /* The specific error status for the command. */ + uint16_t error_code; + /* The HWRM command request type. */ + uint16_t req_type; + /* The sequence ID from the original command. */ + uint16_t seq_id; + /* The length of the response data in number of bytes. */ + uint16_t resp_len; + uint32_t flags; + /* + * When this bit is '1', the VNIC is configured to + * perform transparent packet aggregation (TPA) of + * non-tunneled TCP packets. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_FLAGS_TPA \ + UINT32_C(0x1) + /* + * When this bit is '1', the VNIC is configured to + * perform transparent packet aggregation (TPA) of + * tunneled TCP packets. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_FLAGS_ENCAP_TPA \ + UINT32_C(0x2) + /* + * When this bit is '1', the VNIC is configured to + * perform transparent packet aggregation (TPA) according + * to Windows Receive Segment Coalescing (RSC) rules. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_FLAGS_RSC_WND_UPDATE \ + UINT32_C(0x4) + /* + * When this bit is '1', the VNIC is configured to + * perform transparent packet aggregation (TPA) according + * to Linux Generic Receive Offload (GRO) rules. 
+ */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_FLAGS_GRO \ + UINT32_C(0x8) + /* + * When this bit is '1', the VNIC is configured to + * perform transparent packet aggregation (TPA) for TCP + * packets with IP ECN set to non-zero. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_FLAGS_AGG_WITH_ECN \ + UINT32_C(0x10) + /* + * When this bit is '1', the VNIC is configured to + * perform transparent packet aggregation (TPA) for + * GRE tunneled TCP packets only if all packets have the + * same GRE sequence. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_FLAGS_AGG_WITH_SAME_GRE_SEQ \ + UINT32_C(0x20) + /* + * When this bit is '1' and the GRO mode is enabled, + * the VNIC is configured to + * perform transparent packet aggregation (TPA) for + * TCP/IPv4 packets with consecutively increasing IPIDs. + * In other words, the last packet that is being + * aggregated to an already existing aggregation context + * shall have IPID 1 more than the IPID of the last packet + * that was aggregated in that aggregation context. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_FLAGS_GRO_IPID_CHECK \ + UINT32_C(0x40) + /* + * When this bit is '1' and the GRO mode is enabled, + * the VNIC is configured to + * perform transparent packet aggregation (TPA) for + * TCP packets with the same TTL (IPv4) or Hop limit (IPv6) + * value. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_FLAGS_GRO_TTL_CHECK \ + UINT32_C(0x80) + /* + * This is the maximum number of TCP segments that can + * be aggregated (unit is Log2). Max value is 31. 
+ */ + uint16_t max_agg_segs; + /* 1 segment */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGG_SEGS_1 UINT32_C(0x0) + /* 2 segments */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGG_SEGS_2 UINT32_C(0x1) + /* 4 segments */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGG_SEGS_4 UINT32_C(0x2) + /* 8 segments */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGG_SEGS_8 UINT32_C(0x3) + /* Any segment size larger than this is not valid */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGG_SEGS_MAX UINT32_C(0x1f) + #define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGG_SEGS_LAST \ + HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGG_SEGS_MAX + /* + * This is the maximum number of aggregations this VNIC is + * allowed (unit is Log2). Max value is 7 + */ + uint16_t max_aggs; + /* 1 aggregation */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGGS_1 UINT32_C(0x0) + /* 2 aggregations */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGGS_2 UINT32_C(0x1) + /* 4 aggregations */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGGS_4 UINT32_C(0x2) + /* 8 aggregations */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGGS_8 UINT32_C(0x3) + /* 16 aggregations */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGGS_16 UINT32_C(0x4) + /* Any aggregation size larger than this is not valid */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGGS_MAX UINT32_C(0x7) + #define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGGS_LAST \ + HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGGS_MAX + /* + * This is the maximum amount of time allowed for + * an aggregation context to complete after it was initiated. + */ + uint32_t max_agg_timer; + /* + * This is the minimum amount of payload length required to + * start an aggregation context. + */ + uint32_t min_agg_len; + /* + * If the device supports hardware tunnel TPA feature, as indicated by + * the HWRM_VNIC_QCAPS command, this field conveys the bitmap of the + * tunnel types that have been configured. Each bit corresponds to a + * specific tunnel type. If a bit is set to '1', then the associated + * tunnel type is enabled; otherwise, it is disabled. 
+ */ + uint32_t tnl_tpa_en_bitmap; + /* + * When this bit is '1', enable VXLAN encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_VXLAN \ + UINT32_C(0x1) + /* + * When this bit is set to '1', enable GENEVE encapsulated packets + * for aggregation. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_GENEVE \ + UINT32_C(0x2) + /* + * When this bit is set to '1', enable NVGRE encapsulated packets + * for aggregation. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_NVGRE \ + UINT32_C(0x4) + /* + * When this bit is set to '1', enable GRE encapsulated packets + * for aggregation. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_GRE \ + UINT32_C(0x8) + /* + * When this bit is set to '1', enable IPV4 encapsulated packets + * for aggregation. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_IPV4 \ + UINT32_C(0x10) + /* + * When this bit is set to '1', enable IPV6 encapsulated packets + * for aggregation. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_IPV6 \ + UINT32_C(0x20) + /* + * When this bit is '1', enable VXLAN_GPE encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_VXLAN_GPE \ + UINT32_C(0x40) + /* + * When this bit is '1', enable VXLAN_CUSTOMER1 encapsulated packets + * for aggregation. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_VXLAN_CUST1 \ + UINT32_C(0x80) + /* + * When this bit is '1', enable GRE_CUSTOMER1 encapsulated packets + * for aggregation. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_GRE_CUST1 \ + UINT32_C(0x100) + /* + * When this bit is '1', enable UPAR1 encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_UPAR1 \ + UINT32_C(0x200) + /* + * When this bit is '1', enable UPAR2 encapsulated packets for + * aggregation. 
+ */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_UPAR2 \ + UINT32_C(0x400) + /* + * When this bit is '1', enable UPAR3 encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_UPAR3 \ + UINT32_C(0x800) + /* + * When this bit is '1', enable UPAR4 encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_UPAR4 \ + UINT32_C(0x1000) + /* + * When this bit is '1', enable UPAR5 encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_UPAR5 \ + UINT32_C(0x2000) + /* + * When this bit is '1', enable UPAR6 encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_UPAR6 \ + UINT32_C(0x4000) + /* + * When this bit is '1', enable UPAR7 encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_UPAR7 \ + UINT32_C(0x8000) + /* + * When this bit is '1', enable UPAR8 encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_UPAR8 \ + UINT32_C(0x10000) + uint8_t unused_0[3]; + /* + * This field is used in Output records to indicate that the output + * is completely written to RAM. This field should be read as '1' + * to indicate that the output has been completely written. + * When writing a command completion or response to an internal processor, + * the order of writes has to be such that this field is written last. + */ + uint8_t valid; +} __rte_packed; + /********************* * hwrm_vnic_rss_cfg * *********************/ @@ -38572,6 +39823,12 @@ struct hwrm_vnic_rss_cfg_input { */ #define HWRM_VNIC_RSS_CFG_INPUT_FLAGS_HASH_TYPE_EXCLUDE \ UINT32_C(0x2) + /* + * When this bit is '1', it indicates that the support of setting + * ipsec hash_types by the host drivers. 
+ */ + #define HWRM_VNIC_RSS_CFG_INPUT_FLAGS_IPSEC_HASH_TYPE_CFG_SUPPORT \ + UINT32_C(0x4) uint8_t ring_select_mode; /* * In this mode, HW uses Toeplitz algorithm and provided Toeplitz @@ -39439,6 +40696,12 @@ struct hwrm_ring_alloc_input { */ #define HWRM_RING_ALLOC_INPUT_ENABLES_MPC_CHNLS_TYPE \ UINT32_C(0x400) + /* + * This bit must be '1' for the steering_tag field to be + * configured. + */ + #define HWRM_RING_ALLOC_INPUT_ENABLES_STEERING_TAG_VALID \ + UINT32_C(0x800) /* Ring Type. */ uint8_t ring_type; /* L2 Completion Ring (CR) */ @@ -39664,7 +40927,8 @@ struct hwrm_ring_alloc_input { #define HWRM_RING_ALLOC_INPUT_RING_ARB_CFG_ARB_POLICY_PARAM_MASK \ UINT32_C(0xff00) #define HWRM_RING_ALLOC_INPUT_RING_ARB_CFG_ARB_POLICY_PARAM_SFT 8 - uint16_t unused_3; + /* Steering tag to use for memory transactions. */ + uint16_t steering_tag; /* * This field is reserved for the future use. * It shall be set to 0. @@ -43871,7 +45135,10 @@ struct hwrm_cfa_ntuple_filter_alloc_input { * Setting of this flag indicates that the dst_id field contains RFS * ring table index. If this is not set it indicates dst_id is VNIC * or VPORT or function ID. Note dest_fid and dest_rfs_ring_idx - * can’t be set at the same time. + * can't be set at the same time. Updated drivers should pass ring + * idx in the rfs_ring_tbl_idx field if the firmware indicates + * support for the new field in the HWRM_CFA_ADV_FLOW_MGMT_QCAPS + * response. */ #define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_FLAGS_DEST_RFS_RING_IDX \ UINT32_C(0x20) @@ -43986,10 +45253,7 @@ struct hwrm_cfa_ntuple_filter_alloc_input { */ #define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_DST_ID \ UINT32_C(0x10000) - /* - * This bit must be '1' for the mirror_vnic_id field to be - * configured. - */ + /* This flag is deprecated. 
*/ #define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_MIRROR_VNIC_ID \ UINT32_C(0x20000) /* @@ -43998,7 +45262,10 @@ struct hwrm_cfa_ntuple_filter_alloc_input { */ #define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_DST_MACADDR \ UINT32_C(0x40000) - /* This flag is deprecated. */ + /* + * This bit must be '1' for the rfs_ring_tbl_idx field to + * be configured. + */ #define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_RFS_RING_TBL_IDX \ UINT32_C(0x80000) /* @@ -44069,10 +45336,12 @@ struct hwrm_cfa_ntuple_filter_alloc_input { */ uint16_t dst_id; /* - * Logical VNIC ID of the VNIC where traffic is - * mirrored. + * If set, this value shall represent the ring table + * index for receive flow steering. Note that this offset + * was formerly used for the mirror_vnic_id field, which + * is no longer supported. */ - uint16_t mirror_vnic_id; + uint16_t rfs_ring_tbl_idx; /* * This value indicates the tunnel type for this filter. * If this field is not specified, then the filter shall @@ -50258,6 +51527,13 @@ struct hwrm_cfa_adv_flow_mgnt_qcaps_output { */ #define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_NTUPLE_FLOW_RX_EXT_IP_PROTO_SUPPORTED \ UINT32_C(0x100000) + /* + * Value of 1 to indicate that firmware supports setting of + * rfs_ring_tbl_idx (new offset) in HWRM_CFA_NTUPLE_ALLOC command. + * Value of 0 indicates ring tbl idx should be passed using dst_id. 
+ */ + #define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RFS_RING_TBL_IDX_V3_SUPPORTED \ + UINT32_C(0x200000) uint8_t unused_0[3]; /* * This field is used in Output records to indicate that the output @@ -56744,9 +58020,17 @@ struct hwrm_tunnel_dst_port_query_input { /* Generic Protocol Extension for VXLAN (VXLAN-GPE) */ #define HWRM_TUNNEL_DST_PORT_QUERY_INPUT_TUNNEL_TYPE_VXLAN_GPE \ UINT32_C(0x10) + /* Generic Routing Encapsulation */ + #define HWRM_TUNNEL_DST_PORT_QUERY_INPUT_TUNNEL_TYPE_GRE \ + UINT32_C(0x11) #define HWRM_TUNNEL_DST_PORT_QUERY_INPUT_TUNNEL_TYPE_LAST \ - HWRM_TUNNEL_DST_PORT_QUERY_INPUT_TUNNEL_TYPE_VXLAN_GPE - uint8_t unused_0[7]; + HWRM_TUNNEL_DST_PORT_QUERY_INPUT_TUNNEL_TYPE_GRE + /* + * This field is used to specify the next protocol value defined in the + * corresponding RFC spec for the applicable tunnel type. + */ + uint8_t tunnel_next_proto; + uint8_t unused_0[6]; } __rte_packed; /* hwrm_tunnel_dst_port_query_output (size:128b/16B) */ @@ -56808,7 +58092,21 @@ struct hwrm_tunnel_dst_port_query_output { /* This bit will be '1' when UPAR7 is IN_USE */ #define HWRM_TUNNEL_DST_PORT_QUERY_OUTPUT_UPAR_IN_USE_UPAR7 \ UINT32_C(0x80) - uint8_t unused_0[2]; + /* + * This field is used to convey the status of non udp port based + * tunnel parsing at chip level and at function level. + */ + uint8_t status; + /* This bit will be '1' when tunnel parsing is enabled globally. */ + #define HWRM_TUNNEL_DST_PORT_QUERY_OUTPUT_STATUS_CHIP_LEVEL \ + UINT32_C(0x1) + /* + * This bit will be '1' when tunnel parsing is enabled + * on the corresponding function. + */ + #define HWRM_TUNNEL_DST_PORT_QUERY_OUTPUT_STATUS_FUNC_LEVEL \ + UINT32_C(0x2) + uint8_t unused_0; /* * This field is used in Output records to indicate that the output * is completely written to RAM. 
This field should be read as '1' @@ -56886,9 +58184,16 @@ struct hwrm_tunnel_dst_port_alloc_input { /* Generic Protocol Extension for VXLAN (VXLAN-GPE) */ #define HWRM_TUNNEL_DST_PORT_ALLOC_INPUT_TUNNEL_TYPE_VXLAN_GPE \ UINT32_C(0x10) + /* Generic Routing Encapsulation */ + #define HWRM_TUNNEL_DST_PORT_ALLOC_INPUT_TUNNEL_TYPE_GRE \ + UINT32_C(0x11) #define HWRM_TUNNEL_DST_PORT_ALLOC_INPUT_TUNNEL_TYPE_LAST \ - HWRM_TUNNEL_DST_PORT_ALLOC_INPUT_TUNNEL_TYPE_VXLAN_GPE - uint8_t unused_0; + HWRM_TUNNEL_DST_PORT_ALLOC_INPUT_TUNNEL_TYPE_GRE + /* + * This field is used to specify the next protocol value defined in the + * corresponding RFC spec for the applicable tunnel type. + */ + uint8_t tunnel_next_proto; /* * This field represents the value of L4 destination port used * for the given tunnel type. This field is valid for @@ -56900,7 +58205,7 @@ struct hwrm_tunnel_dst_port_alloc_input { * A value of 0 shall fail the command. */ uint16_t tunnel_dst_port_val; - uint8_t unused_1[4]; + uint8_t unused_0[4]; } __rte_packed; /* hwrm_tunnel_dst_port_alloc_output (size:128b/16B) */ @@ -56929,8 +58234,11 @@ struct hwrm_tunnel_dst_port_alloc_output { /* Out of resources error */ #define HWRM_TUNNEL_DST_PORT_ALLOC_OUTPUT_ERROR_INFO_ERR_NO_RESOURCE \ UINT32_C(0x2) + /* Tunnel type is already enabled */ + #define HWRM_TUNNEL_DST_PORT_ALLOC_OUTPUT_ERROR_INFO_ERR_ENABLED \ + UINT32_C(0x3) #define HWRM_TUNNEL_DST_PORT_ALLOC_OUTPUT_ERROR_INFO_LAST \ - HWRM_TUNNEL_DST_PORT_ALLOC_OUTPUT_ERROR_INFO_ERR_NO_RESOURCE + HWRM_TUNNEL_DST_PORT_ALLOC_OUTPUT_ERROR_INFO_ERR_ENABLED /* * This field represents the UPAR usage status. 
* Available UPARs on wh+ are UPAR0 and UPAR1 @@ -57040,15 +58348,22 @@ struct hwrm_tunnel_dst_port_free_input { /* Generic Protocol Extension for VXLAN (VXLAN-GPE) */ #define HWRM_TUNNEL_DST_PORT_FREE_INPUT_TUNNEL_TYPE_VXLAN_GPE \ UINT32_C(0x10) + /* Generic Routing Encapsulation */ + #define HWRM_TUNNEL_DST_PORT_FREE_INPUT_TUNNEL_TYPE_GRE \ + UINT32_C(0x11) #define HWRM_TUNNEL_DST_PORT_FREE_INPUT_TUNNEL_TYPE_LAST \ - HWRM_TUNNEL_DST_PORT_FREE_INPUT_TUNNEL_TYPE_VXLAN_GPE - uint8_t unused_0; + HWRM_TUNNEL_DST_PORT_FREE_INPUT_TUNNEL_TYPE_GRE + /* + * This field is used to specify the next protocol value defined in the + * corresponding RFC spec for the applicable tunnel type. + */ + uint8_t tunnel_next_proto; /* * Identifier of a tunnel L4 destination port value. Only applies to tunnel * types that has l4 destination port parameters. */ uint16_t tunnel_dst_port_id; - uint8_t unused_1[4]; + uint8_t unused_0[4]; } __rte_packed; /* hwrm_tunnel_dst_port_free_output (size:128b/16B) */ @@ -57234,7 +58549,7 @@ struct ctx_eng_stats { ***********************/ -/* hwrm_stat_ctx_alloc_input (size:256b/32B) */ +/* hwrm_stat_ctx_alloc_input (size:320b/40B) */ struct hwrm_stat_ctx_alloc_input { /* The HWRM command request type. */ uint16_t req_type; @@ -57305,6 +58620,18 @@ struct hwrm_stat_ctx_alloc_input { * for the periodic DMA updates. */ uint16_t stats_dma_length; + uint16_t flags; + /* This stats context uses the steering tag specified in the command. */ + #define HWRM_STAT_CTX_ALLOC_INPUT_FLAGS_STEERING_TAG_VALID \ + UINT32_C(0x1) + /* + * Steering tag to use for memory transactions from the periodic DMA + * updates. 'steering_tag_valid' should be set and 'steering_tag' + * should be specified, when the 'steering_tag_supported' bit is set + * under the 'flags_ext2' field of the hwrm_func_qcaps_output. 
+ */ + uint16_t steering_tag; + uint32_t unused_1; } __rte_packed; /* hwrm_stat_ctx_alloc_output (size:128b/16B) */ From patchwork Mon Dec 4 18:36:59 2023 From: Ajit Khaparde To: dev@dpdk.org Cc: Kalesh AP , Somnath Kotur Subject: [PATCH 03/14] net/bnxt: log a message when multicast promisc mode changes Date: Mon, 4 Dec 2023 10:36:59 -0800 Message-Id: <20231204183710.86921-4-ajit.khaparde@broadcom.com> In-Reply-To: <20231204183710.86921-1-ajit.khaparde@broadcom.com> References: <20231204183710.86921-1-ajit.khaparde@broadcom.com> List-Id: DPDK patches and discussions From: Kalesh AP When the user tries to add more Mcast MAC addresses than the port supports, the driver puts the port into Mcast promiscuous mode. It may be useful to the user to know that Mcast promiscuous mode has been turned on, so log a message when that happens. Similarly, log a message when Mcast promiscuous mode is turned off. 
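The fallback described above reduces to a small state transition on the VNIC flags. The sketch below is illustrative only: the names (VNIC_INFO_ALLMULTI, MAX_MC_ADDRS, mc_list_update) are hypothetical stand-ins for the driver's BNXT_VNIC_INFO_ALLMULTI flag and BNXT_MAX_MC_ADDRS limit, and the real driver logs via PMD_DRV_LOG rather than printf.

```c
#include <assert.h>
#include <stdio.h>

/* Hypothetical stand-ins for the driver's flag and limit. */
#define VNIC_INFO_ALLMULTI 0x1u
#define MAX_MC_ADDRS       16

/*
 * Return the updated VNIC flags: turn multicast promiscuous mode on
 * when the requested list exceeds the port limit, and turn it off
 * (logging the transition) when the list fits again.
 */
static unsigned int mc_list_update(int nb_mc_addr, unsigned int flags)
{
	if (nb_mc_addr > MAX_MC_ADDRS) {
		if (!(flags & VNIC_INFO_ALLMULTI))
			printf("Turning on Mcast promiscuous mode\n");
		return flags | VNIC_INFO_ALLMULTI;
	}
	if (flags & VNIC_INFO_ALLMULTI)
		printf("Turning off Mcast promiscuous mode\n");
	return flags & ~VNIC_INFO_ALLMULTI;
}
```

Logging only on the on/off transitions (not on every list update) keeps the log from being flooded when the application reprograms the same list repeatedly.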
Signed-off-by: Kalesh AP Reviewed-by: Somnath Kotur Reviewed-by: Ajit Khaparde --- drivers/net/bnxt/bnxt_ethdev.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index acf7e6e46e..999e4f1398 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -2931,12 +2931,18 @@ bnxt_dev_set_mc_addr_list_op(struct rte_eth_dev *eth_dev, bp->nb_mc_addr = nb_mc_addr; if (nb_mc_addr > BNXT_MAX_MC_ADDRS) { + PMD_DRV_LOG(INFO, "Number of Mcast MACs added (%d) exceeded Max supported (%d)\n", + nb_mc_addr, BNXT_MAX_MC_ADDRS); + PMD_DRV_LOG(INFO, "Turning on Mcast promiscuous mode\n"); vnic->flags |= BNXT_VNIC_INFO_ALLMULTI; goto allmulti; } /* TODO Check for Duplicate mcast addresses */ - vnic->flags &= ~BNXT_VNIC_INFO_ALLMULTI; + if (vnic->flags & BNXT_VNIC_INFO_ALLMULTI) { + PMD_DRV_LOG(INFO, "Turning off Mcast promiscuous mode\n"); + vnic->flags &= ~BNXT_VNIC_INFO_ALLMULTI; + } for (i = 0; i < nb_mc_addr; i++) rte_ether_addr_copy(&mc_addr_set[i], &bp->mcast_addr_list[i]); From patchwork Mon Dec 4 18:37:00 2023 From: Ajit Khaparde To: dev@dpdk.org Cc: Somnath Kotur Subject: [PATCH 04/14] 
net/bnxt: use the correct COS queue for Tx Date: Mon, 4 Dec 2023 10:37:00 -0800 Message-Id: <20231204183710.86921-5-ajit.khaparde@broadcom.com> In-Reply-To: <20231204183710.86921-1-ajit.khaparde@broadcom.com> References: <20231204183710.86921-1-ajit.khaparde@broadcom.com> List-Id: DPDK patches and discussions Earlier the firmware configured a single lossy COS profile for Tx, but now more than one profile is possible. Identify the profile a NIC driver should use based on the profile type hint provided in queue_cfg_info. If the firmware does not set the bit to use the profile type, fall back to the older method of picking the Tx COS queue. Signed-off-by: Ajit Khaparde Reviewed-by: Somnath Kotur --- drivers/net/bnxt/bnxt.h | 1 + drivers/net/bnxt/bnxt_hwrm.c | 56 ++++++++++++++++++++++++++++++++++-- drivers/net/bnxt/bnxt_hwrm.h | 7 +++++ 3 files changed, 62 insertions(+), 2 deletions(-) diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index 0e01b1d4ba..542ef13f7c 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -311,6 +311,7 @@ struct bnxt_link_info { struct bnxt_cos_queue_info { uint8_t id; uint8_t profile; + uint8_t profile_type; }; struct rte_flow { diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c index 0a31b984e6..fe9e629892 100644 --- a/drivers/net/bnxt/bnxt_hwrm.c +++ b/drivers/net/bnxt/bnxt_hwrm.c @@ -1544,7 +1544,7 @@ int bnxt_hwrm_port_phy_qcaps(struct bnxt *bp) return 0; } -static bool bnxt_find_lossy_profile(struct bnxt *bp) +static bool _bnxt_find_lossy_profile(struct bnxt *bp) { int i = 0; @@ -1558,6 +1558,41 @@ static bool bnxt_find_lossy_profile(struct bnxt *bp) return false; } +static bool _bnxt_find_lossy_nic_profile(struct bnxt *bp) +{ + int i = 0, j
= 0; + + for (i = 0; i < BNXT_COS_QUEUE_COUNT; i++) { + for (j = 0; j < BNXT_COS_QUEUE_COUNT; j++) { + if (bp->tx_cos_queue[i].profile == + HWRM_QUEUE_SERVICE_PROFILE_LOSSY && + bp->tx_cos_queue[j].profile_type == + HWRM_QUEUE_SERVICE_PROFILE_TYPE_NIC) { + bp->tx_cosq_id[0] = bp->tx_cos_queue[i].id; + return true; + } + } + } + return false; +} + +static bool bnxt_find_lossy_profile(struct bnxt *bp, bool use_prof_type) +{ + int i; + + for (i = 0; i < BNXT_COS_QUEUE_COUNT; i++) { + PMD_DRV_LOG(DEBUG, "profile %d, profile_id %d, type %d\n", + bp->tx_cos_queue[i].profile, + bp->tx_cos_queue[i].id, + bp->tx_cos_queue[i].profile_type); + } + + if (use_prof_type) + return _bnxt_find_lossy_nic_profile(bp); + else + return _bnxt_find_lossy_profile(bp); +} + static void bnxt_find_first_valid_profile(struct bnxt *bp) { int i = 0; @@ -1579,6 +1614,7 @@ int bnxt_hwrm_queue_qportcfg(struct bnxt *bp) struct hwrm_queue_qportcfg_input req = {.req_type = 0 }; struct hwrm_queue_qportcfg_output *resp = bp->hwrm_cmd_resp_addr; uint32_t dir = HWRM_QUEUE_QPORTCFG_INPUT_FLAGS_PATH_TX; + bool use_prof_type = false; int i; get_rx_info: @@ -1590,10 +1626,15 @@ int bnxt_hwrm_queue_qportcfg(struct bnxt *bp) !(bp->vnic_cap_flags & BNXT_VNIC_CAP_COS_CLASSIFY)) req.drv_qmap_cap = HWRM_QUEUE_QPORTCFG_INPUT_DRV_QMAP_CAP_ENABLED; + rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB); HWRM_CHECK_RESULT(); + if (resp->queue_cfg_info & + HWRM_QUEUE_QPORTCFG_OUTPUT_QUEUE_CFG_INFO_USE_PROFILE_TYPE) + use_prof_type = true; + if (dir == HWRM_QUEUE_QPORTCFG_INPUT_FLAGS_PATH_TX) { GET_TX_QUEUE_INFO(0); GET_TX_QUEUE_INFO(1); @@ -1603,6 +1644,16 @@ int bnxt_hwrm_queue_qportcfg(struct bnxt *bp) GET_TX_QUEUE_INFO(5); GET_TX_QUEUE_INFO(6); GET_TX_QUEUE_INFO(7); + if (use_prof_type) { + GET_TX_QUEUE_TYPE_INFO(0); + GET_TX_QUEUE_TYPE_INFO(1); + GET_TX_QUEUE_TYPE_INFO(2); + GET_TX_QUEUE_TYPE_INFO(3); + GET_TX_QUEUE_TYPE_INFO(4); + GET_TX_QUEUE_TYPE_INFO(5); + GET_TX_QUEUE_TYPE_INFO(6); + 
GET_TX_QUEUE_TYPE_INFO(7); + } } else { GET_RX_QUEUE_INFO(0); GET_RX_QUEUE_INFO(1); @@ -1636,11 +1687,12 @@ int bnxt_hwrm_queue_qportcfg(struct bnxt *bp) * operations, ideally we should look to use LOSSY. * If not found, fallback to the first valid profile */ - if (!bnxt_find_lossy_profile(bp)) + if (!bnxt_find_lossy_profile(bp, use_prof_type)) bnxt_find_first_valid_profile(bp); } } + PMD_DRV_LOG(DEBUG, "Tx COS Queue ID %d\n", bp->tx_cosq_id[0]); bp->max_tc = resp->max_configurable_queues; bp->max_lltc = resp->max_configurable_lossless_queues; diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h index 68384bc757..f9fa6cf73a 100644 --- a/drivers/net/bnxt/bnxt_hwrm.h +++ b/drivers/net/bnxt/bnxt_hwrm.h @@ -46,6 +46,9 @@ struct hwrm_func_qstats_output; #define HWRM_QUEUE_SERVICE_PROFILE_UNKNOWN \ HWRM_QUEUE_QPORTCFG_OUTPUT_QUEUE_ID0_SERVICE_PROFILE_UNKNOWN +#define HWRM_QUEUE_SERVICE_PROFILE_TYPE_NIC \ + HWRM_QUEUE_QPORTCFG_OUTPUT_QUEUE_ID0_SERVICE_PROFILE_TYPE_NIC + #define HWRM_FUNC_RESOURCE_QCAPS_OUTPUT_VF_RESV_STRATEGY_MINIMAL_STATIC \ HWRM_FUNC_RESOURCE_QCAPS_OUTPUT_VF_RESERVATION_STRATEGY_MINIMAL_STATIC #define HWRM_FUNC_RESOURCE_QCAPS_OUTPUT_VF_RESV_STRATEGY_MAXIMAL \ @@ -74,6 +77,10 @@ struct hwrm_func_qstats_output; bp->tx_cos_queue[x].profile = \ resp->queue_id##x##_service_profile +#define GET_TX_QUEUE_TYPE_INFO(x) \ + bp->tx_cos_queue[x].profile_type = \ + resp->queue_id##x##_service_profile_type + #define GET_RX_QUEUE_INFO(x) \ bp->rx_cos_queue[x].id = resp->queue_id##x; \ bp->rx_cos_queue[x].profile = \ From patchwork Mon Dec 4 18:37:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 134819 X-Patchwork-Delegate: ajit.khaparde@broadcom.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 
From: Ajit Khaparde To: dev@dpdk.org Cc: Somnath Kotur , Kalesh AP Subject: [PATCH 05/14] net/bnxt: refactor mem zone allocation Date: Mon, 4 Dec 2023 10:37:01 -0800 Message-Id: <20231204183710.86921-6-ajit.khaparde@broadcom.com> In-Reply-To: <20231204183710.86921-1-ajit.khaparde@broadcom.com> References: <20231204183710.86921-1-ajit.khaparde@broadcom.com>

Currently we allocate a memzone for VNIC attributes per VNIC. When the firmware supports a higher VNIC count, this can create more memzone segments than are supported. Allocate the memzone for VNIC attributes per function instead of per VNIC, and divide it among the VNICs as needed.
Signed-off-by: Ajit Khaparde Reviewed-by: Somnath Kotur Reviewed-by: Kalesh AP --- drivers/net/bnxt/bnxt.h | 1 + drivers/net/bnxt/bnxt_vnic.c | 52 +++++++++++++++++++----------------- drivers/net/bnxt/bnxt_vnic.h | 1 - 3 files changed, 28 insertions(+), 26 deletions(-) diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index 542ef13f7c..6af668e92f 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -772,6 +772,7 @@ struct bnxt { struct bnxt_vnic_info *vnic_info; STAILQ_HEAD(, bnxt_vnic_info) free_vnic_list; + const struct rte_memzone *vnic_rss_mz; struct bnxt_filter_info *filter_info; STAILQ_HEAD(, bnxt_filter_info) free_filter_list; diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c index f86d27fd79..d40daf631e 100644 --- a/drivers/net/bnxt/bnxt_vnic.c +++ b/drivers/net/bnxt/bnxt_vnic.c @@ -123,13 +123,11 @@ void bnxt_free_vnic_attributes(struct bnxt *bp) for (i = 0; i < bp->max_vnics; i++) { vnic = &bp->vnic_info[i]; - if (vnic->rss_mz != NULL) { - rte_memzone_free(vnic->rss_mz); - vnic->rss_mz = NULL; - vnic->rss_hash_key = NULL; - vnic->rss_table = NULL; - } + vnic->rss_hash_key = NULL; + vnic->rss_table = NULL; } + rte_memzone_free(bp->vnic_rss_mz); + bp->vnic_rss_mz = NULL; } int bnxt_alloc_vnic_attributes(struct bnxt *bp, bool reconfig) @@ -153,31 +151,35 @@ int bnxt_alloc_vnic_attributes(struct bnxt *bp, bool reconfig) entry_length = RTE_CACHE_LINE_ROUNDUP(entry_length + rss_table_size); - for (i = 0; i < bp->max_vnics; i++) { - vnic = &bp->vnic_info[i]; - - snprintf(mz_name, RTE_MEMZONE_NAMESIZE, - "bnxt_" PCI_PRI_FMT "_vnicattr_%d", pdev->addr.domain, - pdev->addr.bus, pdev->addr.devid, pdev->addr.function, i); - mz_name[RTE_MEMZONE_NAMESIZE - 1] = 0; - mz = rte_memzone_lookup(mz_name); - if (mz == NULL) { - mz = rte_memzone_reserve(mz_name, - entry_length, + snprintf(mz_name, RTE_MEMZONE_NAMESIZE, + "bnxt_" PCI_PRI_FMT "_vnicattr", pdev->addr.domain, + pdev->addr.bus, pdev->addr.devid, 
pdev->addr.function); + mz_name[RTE_MEMZONE_NAMESIZE - 1] = 0; + mz = rte_memzone_lookup(mz_name); + if (mz == NULL) { + mz = rte_memzone_reserve_aligned(mz_name, + entry_length * bp->max_vnics, bp->eth_dev->device->numa_node, RTE_MEMZONE_2MB | RTE_MEMZONE_SIZE_HINT_ONLY | - RTE_MEMZONE_IOVA_CONTIG); - if (mz == NULL) { - PMD_DRV_LOG(ERR, "Cannot allocate bnxt vnic_attributes memory\n"); - return -ENOMEM; - } + RTE_MEMZONE_IOVA_CONTIG, + BNXT_PAGE_SIZE); + if (mz == NULL) { + PMD_DRV_LOG(ERR, + "Cannot allocate vnic_attributes memory\n"); + return -ENOMEM; } - vnic->rss_mz = mz; - mz_phys_addr = mz->iova; + } + bp->vnic_rss_mz = mz; + for (i = 0; i < bp->max_vnics; i++) { + uint32_t offset = entry_length * i; + + vnic = &bp->vnic_info[i]; + + mz_phys_addr = mz->iova + offset; /* Allocate rss table and hash key */ - vnic->rss_table = (void *)((char *)mz->addr); + vnic->rss_table = (void *)((char *)mz->addr + offset); vnic->rss_table_dma_addr = mz_phys_addr; memset(vnic->rss_table, -1, entry_length); diff --git a/drivers/net/bnxt/bnxt_vnic.h b/drivers/net/bnxt/bnxt_vnic.h index 4396d95bda..7a6a0aa739 100644 --- a/drivers/net/bnxt/bnxt_vnic.h +++ b/drivers/net/bnxt/bnxt_vnic.h @@ -47,7 +47,6 @@ struct bnxt_vnic_info { uint16_t hash_type; uint8_t hash_mode; uint8_t prev_hash_mode; - const struct rte_memzone *rss_mz; rte_iova_t rss_table_dma_addr; uint16_t *rss_table; rte_iova_t rss_hash_key_dma_addr;

From patchwork Mon Dec 4 18:37:02 2023 X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 134820 X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Ajit Khaparde To: dev@dpdk.org Subject: [PATCH 06/14] net/bnxt: add support for p7 device family Date: Mon, 4 Dec 2023 10:37:02 -0800 Message-Id: <20231204183710.86921-7-ajit.khaparde@broadcom.com> In-Reply-To: <20231204183710.86921-1-ajit.khaparde@broadcom.com> References: <20231204183710.86921-1-ajit.khaparde@broadcom.com>

Add support for the P7 device family. Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt.h | 14 ++++++++++++-- drivers/net/bnxt/bnxt_ethdev.c | 25 +++++++++++++++++++++++++ 2 files changed, 37 insertions(+), 2 deletions(-) diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index 6af668e92f..3a1d8a6ff6 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -72,6 +72,11 @@ #define BROADCOM_DEV_ID_58814 0xd814 #define BROADCOM_DEV_ID_58818 0xd818 #define BROADCOM_DEV_ID_58818_VF 0xd82e +#define BROADCOM_DEV_ID_57608 0x1760 +#define BROADCOM_DEV_ID_57604 0x1761 +#define BROADCOM_DEV_ID_57602 0x1762 +#define BROADCOM_DEV_ID_57601 0x1763 +#define BROADCOM_DEV_ID_5760X_VF 0x1819 #define BROADCOM_DEV_957508_N2100 0x5208 #define BROADCOM_DEV_957414_N225 0x4145 @@ -685,6 +690,7 @@ struct bnxt { #define BNXT_FLAG_FLOW_XSTATS_EN BIT(25) #define BNXT_FLAG_DFLT_MAC_SET BIT(26) #define BNXT_FLAG_GFID_ENABLE BIT(27) +#define BNXT_FLAG_CHIP_P7 BIT(30) #define BNXT_PF(bp) (!((bp)->flags & BNXT_FLAG_VF)) #define BNXT_VF(bp) ((bp)->flags & BNXT_FLAG_VF) #define BNXT_NPAR(bp) ((bp)->flags &
BNXT_FLAG_NPAR_PF) @@ -694,12 +700,16 @@ struct bnxt { #define BNXT_USE_KONG(bp) ((bp)->flags & BNXT_FLAG_KONG_MB_EN) #define BNXT_VF_IS_TRUSTED(bp) ((bp)->flags & BNXT_FLAG_TRUSTED_VF_EN) #define BNXT_CHIP_P5(bp) ((bp)->flags & BNXT_FLAG_CHIP_P5) +#define BNXT_CHIP_P7(bp) ((bp)->flags & BNXT_FLAG_CHIP_P7) +#define BNXT_CHIP_P5_P7(bp) (BNXT_CHIP_P5(bp) || BNXT_CHIP_P7(bp)) #define BNXT_STINGRAY(bp) ((bp)->flags & BNXT_FLAG_STINGRAY) -#define BNXT_HAS_NQ(bp) BNXT_CHIP_P5(bp) -#define BNXT_HAS_RING_GRPS(bp) (!BNXT_CHIP_P5(bp)) +#define BNXT_HAS_NQ(bp) BNXT_CHIP_P5_P7(bp) +#define BNXT_HAS_RING_GRPS(bp) (!BNXT_CHIP_P5_P7(bp)) #define BNXT_FLOW_XSTATS_EN(bp) ((bp)->flags & BNXT_FLAG_FLOW_XSTATS_EN) #define BNXT_HAS_DFLT_MAC_SET(bp) ((bp)->flags & BNXT_FLAG_DFLT_MAC_SET) #define BNXT_GFID_ENABLED(bp) ((bp)->flags & BNXT_FLAG_GFID_ENABLE) +#define BNXT_P7_MAX_NQ_RING_CNT 512 +#define BNXT_P7_CQ_MAX_L2_ENT 8192 uint32_t flags2; #define BNXT_FLAGS2_PTP_TIMESYNC_ENABLED BIT(0) diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index 999e4f1398..1e4182071a 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -84,6 +84,11 @@ static const struct rte_pci_id bnxt_pci_id_map[] = { { RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_58814) }, { RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_58818) }, { RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_58818_VF) }, + { RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_57608) }, + { RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_57604) }, + { RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_57602) }, + { RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_57601) }, + { RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_5760X_VF) }, { .vendor_id = 0, /* sentinel */ }, }; @@ -4681,6 +4686,7 @@ static bool bnxt_vf_pciid(uint16_t device_id) case BROADCOM_DEV_ID_57500_VF1: case BROADCOM_DEV_ID_57500_VF2: case BROADCOM_DEV_ID_58818_VF: + 
case BROADCOM_DEV_ID_5760X_VF: /* FALLTHROUGH */ return true; default: @@ -4706,7 +4712,23 @@ static bool bnxt_p5_device(uint16_t device_id) case BROADCOM_DEV_ID_58812: case BROADCOM_DEV_ID_58814: case BROADCOM_DEV_ID_58818: + /* FALLTHROUGH */ + return true; + default: + return false; + } +} + +/* Phase 7 device */ +static bool bnxt_p7_device(uint16_t device_id) +{ + switch (device_id) { case BROADCOM_DEV_ID_58818_VF: + case BROADCOM_DEV_ID_57608: + case BROADCOM_DEV_ID_57604: + case BROADCOM_DEV_ID_57602: + case BROADCOM_DEV_ID_57601: + case BROADCOM_DEV_ID_5760X_VF: /* FALLTHROUGH */ return true; default: @@ -5874,6 +5896,9 @@ static int bnxt_drv_init(struct rte_eth_dev *eth_dev) if (bnxt_p5_device(pci_dev->id.device_id)) bp->flags |= BNXT_FLAG_CHIP_P5; + if (bnxt_p7_device(pci_dev->id.device_id)) + bp->flags |= BNXT_FLAG_CHIP_P7; + if (pci_dev->id.device_id == BROADCOM_DEV_ID_58802 || pci_dev->id.device_id == BROADCOM_DEV_ID_58804 || pci_dev->id.device_id == BROADCOM_DEV_ID_58808 ||

From patchwork Mon Dec 4 18:37:03 2023 X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 134821 X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Ajit Khaparde To: dev@dpdk.org Subject: [PATCH 07/14] net/bnxt: refactor code to support P7 devices Date: Mon, 4 Dec 2023 10:37:03 -0800 Message-Id:
<20231204183710.86921-8-ajit.khaparde@broadcom.com> In-Reply-To: <20231204183710.86921-1-ajit.khaparde@broadcom.com> References: <20231204183710.86921-1-ajit.khaparde@broadcom.com>

Refactor code to support the P7 device family. The changes include support for RSS, VNIC allocation, and TPA. Remove an unnecessary check that disabled vector mode support for some device families. Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt.h | 6 +++--- drivers/net/bnxt/bnxt_ethdev.c | 24 +++++++----------------- drivers/net/bnxt/bnxt_flow.c | 2 +- drivers/net/bnxt/bnxt_hwrm.c | 26 ++++++++++++++------------ drivers/net/bnxt/bnxt_ring.c | 6 +++--- drivers/net/bnxt/bnxt_rxq.c | 2 +- drivers/net/bnxt/bnxt_rxr.c | 6 +++--- drivers/net/bnxt/bnxt_vnic.c | 6 +++--- 8 files changed, 35 insertions(+), 43 deletions(-) diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index 3a1d8a6ff6..7439ecf4fa 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -107,11 +107,11 @@ #define TPA_MAX_SEGS 5 /* 32 segments in log2 units */ #define BNXT_TPA_MAX_AGGS(bp) \ - (BNXT_CHIP_P5(bp) ? TPA_MAX_AGGS_TH : \ + (BNXT_CHIP_P5_P7(bp) ? TPA_MAX_AGGS_TH : \ TPA_MAX_AGGS) #define BNXT_TPA_MAX_SEGS(bp) \ - (BNXT_CHIP_P5(bp) ? TPA_MAX_SEGS_TH : \ + (BNXT_CHIP_P5_P7(bp) ? TPA_MAX_SEGS_TH : \ TPA_MAX_SEGS) /* @@ -938,7 +938,7 @@ inline uint16_t bnxt_max_rings(struct bnxt *bp) * RSS table size in P5 is 512. * Cap max Rx rings to the same value for RSS.
*/ - if (BNXT_CHIP_P5(bp)) + if (BNXT_CHIP_P5_P7(bp)) max_rx_rings = RTE_MIN(max_rx_rings, BNXT_RSS_TBL_SIZE_P5); max_tx_rings = RTE_MIN(max_tx_rings, max_rx_rings); diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index 1e4182071a..cab2589cf3 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -212,7 +212,7 @@ uint16_t bnxt_rss_ctxts(const struct bnxt *bp) unsigned int num_rss_rings = RTE_MIN(bp->rx_nr_rings, BNXT_RSS_TBL_SIZE_P5); - if (!BNXT_CHIP_P5(bp)) + if (!BNXT_CHIP_P5_P7(bp)) return 1; return RTE_ALIGN_MUL_CEIL(num_rss_rings, @@ -222,7 +222,7 @@ uint16_t bnxt_rss_ctxts(const struct bnxt *bp) uint16_t bnxt_rss_hash_tbl_size(const struct bnxt *bp) { - if (!BNXT_CHIP_P5(bp)) + if (!BNXT_CHIP_P5_P7(bp)) return HW_HASH_INDEX_SIZE; return bnxt_rss_ctxts(bp) * BNXT_RSS_ENTRIES_PER_CTX_P5; @@ -765,7 +765,7 @@ static int bnxt_start_nic(struct bnxt *bp) /* P5 does not support ring groups. * But we will use the array to save RSS context IDs. */ - if (BNXT_CHIP_P5(bp)) + if (BNXT_CHIP_P5_P7(bp)) bp->max_ring_grps = BNXT_MAX_RSS_CTXTS_P5; rc = bnxt_vnic_queue_db_init(bp); @@ -1247,12 +1247,6 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev) { struct bnxt *bp = eth_dev->data->dev_private; - /* Disable vector mode RX for Stingray2 for now */ - if (BNXT_CHIP_SR2(bp)) { - bp->flags &= ~BNXT_FLAG_RX_VECTOR_PKT_MODE; - return bnxt_recv_pkts; - } - #if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64) /* Vector mode receive cannot be enabled if scattered rx is in use. 
*/ if (eth_dev->data->scattered_rx) @@ -1321,10 +1315,6 @@ bnxt_transmit_function(struct rte_eth_dev *eth_dev) { struct bnxt *bp = eth_dev->data->dev_private; - /* Disable vector mode TX for Stingray2 for now */ - if (BNXT_CHIP_SR2(bp)) - return bnxt_xmit_pkts; - #if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64) uint64_t offloads = eth_dev->data->dev_conf.txmode.offloads; @@ -2091,7 +2081,7 @@ static int bnxt_reta_update_op(struct rte_eth_dev *eth_dev, continue; rxq = bnxt_qid_to_rxq(bp, reta_conf[idx].reta[sft]); - if (BNXT_CHIP_P5(bp)) { + if (BNXT_CHIP_P5_P7(bp)) { vnic->rss_table[i * 2] = rxq->rx_ring->rx_ring_struct->fw_ring_id; vnic->rss_table[i * 2 + 1] = @@ -2138,7 +2128,7 @@ static int bnxt_reta_query_op(struct rte_eth_dev *eth_dev, if (reta_conf[idx].mask & (1ULL << sft)) { uint16_t qid; - if (BNXT_CHIP_P5(bp)) + if (BNXT_CHIP_P5_P7(bp)) qid = bnxt_rss_to_qid(bp, vnic->rss_table[i * 2]); else @@ -3224,7 +3214,7 @@ bnxt_rx_queue_count_op(void *rx_queue) break; case CMPL_BASE_TYPE_RX_TPA_END: - if (BNXT_CHIP_P5(rxq->bp)) { + if (BNXT_CHIP_P5_P7(rxq->bp)) { struct rx_tpa_v2_end_cmpl_hi *p5_tpa_end; p5_tpa_end = (void *)rxcmp; @@ -3335,7 +3325,7 @@ bnxt_rx_descriptor_status_op(void *rx_queue, uint16_t offset) if (desc == offset) return RTE_ETH_RX_DESC_DONE; - if (BNXT_CHIP_P5(rxq->bp)) { + if (BNXT_CHIP_P5_P7(rxq->bp)) { struct rx_tpa_v2_end_cmpl_hi *p5_tpa_end; p5_tpa_end = (void *)rxcmp; diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c index 28dd5ae6cb..15f0e1b308 100644 --- a/drivers/net/bnxt/bnxt_flow.c +++ b/drivers/net/bnxt/bnxt_flow.c @@ -1199,7 +1199,7 @@ bnxt_vnic_rss_cfg_update(struct bnxt *bp, if (i == bp->rx_cp_nr_rings) return 0; - if (BNXT_CHIP_P5(bp)) { + if (BNXT_CHIP_P5_P7(bp)) { rxq = bp->rx_queues[idx]; vnic->rss_table[rss_idx * 2] = rxq->rx_ring->rx_ring_struct->fw_ring_id; diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c index fe9e629892..2d0a7a2731 100644 --- 
a/drivers/net/bnxt/bnxt_hwrm.c +++ b/drivers/net/bnxt/bnxt_hwrm.c @@ -853,7 +853,7 @@ static int __bnxt_hwrm_func_qcaps(struct bnxt *bp) bp->first_vf_id = rte_le_to_cpu_16(resp->first_vf_id); bp->max_rx_em_flows = rte_le_to_cpu_16(resp->max_rx_em_flows); bp->max_l2_ctx = rte_le_to_cpu_16(resp->max_l2_ctxs); - if (!BNXT_CHIP_P5(bp) && !bp->pdev->max_vfs) + if (!BNXT_CHIP_P5_P7(bp) && !bp->pdev->max_vfs) bp->max_l2_ctx += bp->max_rx_em_flows; if (bp->vnic_cap_flags & BNXT_VNIC_CAP_COS_CLASSIFY) bp->max_vnics = rte_le_to_cpu_16(BNXT_MAX_VNICS_COS_CLASSIFY); @@ -1187,7 +1187,7 @@ int bnxt_hwrm_func_resc_qcaps(struct bnxt *bp) * So use the value provided by func_qcaps. */ bp->max_l2_ctx = rte_le_to_cpu_16(resp->max_l2_ctxs); - if (!BNXT_CHIP_P5(bp) && !bp->pdev->max_vfs) + if (!BNXT_CHIP_P5_P7(bp) && !bp->pdev->max_vfs) bp->max_l2_ctx += bp->max_rx_em_flows; if (bp->vnic_cap_flags & BNXT_VNIC_CAP_COS_CLASSIFY) bp->max_vnics = rte_le_to_cpu_16(BNXT_MAX_VNICS_COS_CLASSIFY); @@ -1744,7 +1744,7 @@ int bnxt_hwrm_ring_alloc(struct bnxt *bp, req.ring_type = ring_type; req.cmpl_ring_id = rte_cpu_to_le_16(cmpl_ring_id); req.stat_ctx_id = rte_cpu_to_le_32(stats_ctx_id); - if (BNXT_CHIP_P5(bp)) { + if (BNXT_CHIP_P5_P7(bp)) { mb_pool = bp->rx_queues[0]->mb_pool; rx_buf_size = rte_pktmbuf_data_room_size(mb_pool) - RTE_PKTMBUF_HEADROOM; @@ -2118,7 +2118,7 @@ int bnxt_hwrm_vnic_cfg(struct bnxt *bp, struct bnxt_vnic_info *vnic) HWRM_PREP(&req, HWRM_VNIC_CFG, BNXT_USE_CHIMP_MB); - if (BNXT_CHIP_P5(bp)) { + if (BNXT_CHIP_P5_P7(bp)) { int dflt_rxq = vnic->start_grp_id; struct bnxt_rx_ring_info *rxr; struct bnxt_cp_ring_info *cpr; @@ -2304,7 +2304,7 @@ int bnxt_hwrm_vnic_ctx_free(struct bnxt *bp, struct bnxt_vnic_info *vnic) { int rc = 0; - if (BNXT_CHIP_P5(bp)) { + if (BNXT_CHIP_P5_P7(bp)) { int j; for (j = 0; j < vnic->num_lb_ctxts; j++) { @@ -2556,7 +2556,7 @@ int bnxt_hwrm_vnic_tpa_cfg(struct bnxt *bp, struct hwrm_vnic_tpa_cfg_input req = {.req_type = 0 }; struct 
hwrm_vnic_tpa_cfg_output *resp = bp->hwrm_cmd_resp_addr; - if (BNXT_CHIP_P5(bp) && !bp->max_tpa_v2) { + if ((BNXT_CHIP_P5(bp) || BNXT_CHIP_P7(bp)) && !bp->max_tpa_v2) { if (enable) PMD_DRV_LOG(ERR, "No HW support for LRO\n"); return -ENOTSUP; @@ -2584,6 +2584,9 @@ int bnxt_hwrm_vnic_tpa_cfg(struct bnxt *bp, req.max_aggs = rte_cpu_to_le_16(BNXT_TPA_MAX_AGGS(bp)); req.max_agg_segs = rte_cpu_to_le_16(BNXT_TPA_MAX_SEGS(bp)); req.min_agg_len = rte_cpu_to_le_32(512); + + if (BNXT_CHIP_P5_P7(bp)) + req.max_aggs = rte_cpu_to_le_16(bp->max_tpa_v2); } req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id); @@ -2836,7 +2839,7 @@ void bnxt_free_hwrm_rx_ring(struct bnxt *bp, int queue_index) ring = rxr ? rxr->ag_ring_struct : NULL; if (ring != NULL && cpr != NULL) { bnxt_hwrm_ring_free(bp, ring, - BNXT_CHIP_P5(bp) ? + BNXT_CHIP_P5_P7(bp) ? HWRM_RING_FREE_INPUT_RING_TYPE_RX_AGG : HWRM_RING_FREE_INPUT_RING_TYPE_RX, cpr->cp_ring_struct->fw_ring_id); @@ -3356,8 +3359,7 @@ int bnxt_set_hwrm_link_config(struct bnxt *bp, bool link_up) /* Get user requested autoneg setting */ autoneg = bnxt_check_eth_link_autoneg(dev_conf->link_speeds); - - if (BNXT_CHIP_P5(bp) && + if (BNXT_CHIP_P5_P7(bp) && dev_conf->link_speeds & RTE_ETH_LINK_SPEED_40G) { /* 40G is not supported as part of media auto detect. 
* The speed should be forced and autoneg disabled @@ -5348,7 +5350,7 @@ int bnxt_vnic_rss_configure(struct bnxt *bp, struct bnxt_vnic_info *vnic) if (!(vnic->rss_table && vnic->hash_type)) return 0; - if (BNXT_CHIP_P5(bp)) + if (BNXT_CHIP_P5_P7(bp)) return bnxt_vnic_rss_configure_p5(bp, vnic); /* @@ -5440,7 +5442,7 @@ int bnxt_hwrm_set_ring_coal(struct bnxt *bp, int rc; /* Set ring coalesce parameters only for 100G NICs */ - if (BNXT_CHIP_P5(bp)) { + if (BNXT_CHIP_P5_P7(bp)) { if (bnxt_hwrm_set_coal_params_p5(bp, &req)) return -1; } else if (bnxt_stratus_device(bp)) { @@ -5470,7 +5472,7 @@ int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp) int total_alloc_len; int rc, i, tqm_rings; - if (!BNXT_CHIP_P5(bp) || + if (!BNXT_CHIP_P5_P7(bp) || bp->hwrm_spec_code < HWRM_VERSION_1_9_2 || BNXT_VF(bp) || bp->ctx) diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c index 6dacb1b37f..90cad6c9c6 100644 --- a/drivers/net/bnxt/bnxt_ring.c +++ b/drivers/net/bnxt/bnxt_ring.c @@ -57,7 +57,7 @@ int bnxt_alloc_ring_grps(struct bnxt *bp) /* P5 does not support ring groups. * But we will use the array to save RSS context IDs. 
*/ - if (BNXT_CHIP_P5(bp)) { + if (BNXT_CHIP_P5_P7(bp)) { bp->max_ring_grps = BNXT_MAX_RSS_CTXTS_P5; } else if (bp->max_ring_grps < bp->rx_cp_nr_rings) { /* 1 ring is for default completion ring */ @@ -354,7 +354,7 @@ static void bnxt_set_db(struct bnxt *bp, uint32_t fid, uint32_t ring_mask) { - if (BNXT_CHIP_P5(bp)) { + if (BNXT_CHIP_P5_P7(bp)) { int db_offset = DB_PF_OFFSET; switch (ring_type) { case HWRM_RING_ALLOC_INPUT_RING_TYPE_TX: @@ -559,7 +559,7 @@ static int bnxt_alloc_rx_agg_ring(struct bnxt *bp, int queue_index) ring->fw_rx_ring_id = rxr->rx_ring_struct->fw_ring_id; - if (BNXT_CHIP_P5(bp)) { + if (BNXT_CHIP_P5_P7(bp)) { ring_type = HWRM_RING_ALLOC_INPUT_RING_TYPE_RX_AGG; hw_stats_ctx_id = cpr->hw_stats_ctx_id; } else { diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c index 0d0b5e28e4..575e7f193f 100644 --- a/drivers/net/bnxt/bnxt_rxq.c +++ b/drivers/net/bnxt/bnxt_rxq.c @@ -600,7 +600,7 @@ int bnxt_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) if (bp->rx_queues[i]->rx_started) active_queue_cnt++; - if (BNXT_CHIP_P5(bp)) { + if (BNXT_CHIP_P5_P7(bp)) { /* * For P5, we need to ensure that the VNIC default * receive ring corresponds to an active receive queue. 
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c index 0cabfb583c..9d45065f28 100644 --- a/drivers/net/bnxt/bnxt_rxr.c +++ b/drivers/net/bnxt/bnxt_rxr.c @@ -334,7 +334,7 @@ static int bnxt_rx_pages(struct bnxt_rx_queue *rxq, uint16_t cp_cons, ag_cons; struct rx_pkt_cmpl *rxcmp; struct rte_mbuf *last = mbuf; - bool is_p5_tpa = tpa_info && BNXT_CHIP_P5(rxq->bp); + bool is_p5_tpa = tpa_info && BNXT_CHIP_P5_P7(rxq->bp); for (i = 0; i < agg_buf; i++) { struct rte_mbuf **ag_buf; @@ -395,7 +395,7 @@ static int bnxt_discard_rx(struct bnxt *bp, struct bnxt_cp_ring_info *cpr, } else if (cmp_type == RX_TPA_END_CMPL_TYPE_RX_TPA_END) { struct rx_tpa_end_cmpl *tpa_end = cmp; - if (BNXT_CHIP_P5(bp)) + if (BNXT_CHIP_P5_P7(bp)) return 0; agg_bufs = BNXT_TPA_END_AGG_BUFS(tpa_end); @@ -430,7 +430,7 @@ static inline struct rte_mbuf *bnxt_tpa_end( return NULL; } - if (BNXT_CHIP_P5(rxq->bp)) { + if (BNXT_CHIP_P5_P7(rxq->bp)) { struct rx_tpa_v2_end_cmpl *th_tpa_end; struct rx_tpa_v2_end_cmpl_hi *th_tpa_end1; diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c index d40daf631e..bf93120d28 100644 --- a/drivers/net/bnxt/bnxt_vnic.c +++ b/drivers/net/bnxt/bnxt_vnic.c @@ -143,7 +143,7 @@ int bnxt_alloc_vnic_attributes(struct bnxt *bp, bool reconfig) entry_length = HW_HASH_KEY_SIZE; - if (BNXT_CHIP_P5(bp)) + if (BNXT_CHIP_P5_P7(bp)) rss_table_size = BNXT_RSS_TBL_SIZE_P5 * 2 * sizeof(*vnic->rss_table); else @@ -418,8 +418,8 @@ static int32_t bnxt_vnic_populate_rss_table(struct bnxt *bp, struct bnxt_vnic_info *vnic) { - /* RSS table population is different for p4 and p5 platforms */ - if (BNXT_CHIP_P5(bp)) + /* RSS table population is different for p4 and p5, p7 platforms */ + if (BNXT_CHIP_P5_P7(bp)) return bnxt_vnic_populate_rss_table_p5(bp, vnic); return bnxt_vnic_populate_rss_table_p4(bp, vnic); From patchwork Mon Dec 4 18:37:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: 
Ajit Khaparde X-Patchwork-Id: 134822 X-Patchwork-Delegate: ajit.khaparde@broadcom.com
tVtcvDd40B5AH3aHt79n4sQIBtLGSVuxYeCdXSruyyxPyH6tsZCEI90rIFz21jxPLIuQdTrSAKM j1s2DuTUz3YYxUHHTMZcAND0dMnMN5GeKI8q+4IIo0T0YCUzUkpHeoS61qyGiGmMNssyb X-Google-Smtp-Source: AGHT+IEWGttHJdN5PacDwv0Dnifu720LWiIDu5tgfOI/vEVjPStqmzkXdNXkqKsInyzsSfUyNI8qjw== X-Received: by 2002:a05:6a20:e118:b0:18f:7481:da54 with SMTP id kr24-20020a056a20e11800b0018f7481da54mr972643pzb.20.1701715050577; Mon, 04 Dec 2023 10:37:30 -0800 (PST) Received: from C02GC2QQMD6T.wifi.broadcom.net ([192.19.223.252]) by smtp.gmail.com with ESMTPSA id s3-20020a056a00178300b006be5af77f06sm4236664pfg.2.2023.12.04.10.37.29 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 04 Dec 2023 10:37:29 -0800 (PST) From: Ajit Khaparde To: dev@dpdk.org Cc: Damodharam Ammepalli Subject: [PATCH 08/14] net/bnxt: fix array overflow Date: Mon, 4 Dec 2023 10:37:04 -0800 Message-Id: <20231204183710.86921-9-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.39.2 (Apple Git-143) In-Reply-To: <20231204183710.86921-1-ajit.khaparde@broadcom.com> References: <20231204183710.86921-1-ajit.khaparde@broadcom.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org In some cases the number of elements in the context memory array can exceed the MAX_CTX_PAGES and that can cause the static members ctx_pg_arr and ctx_dma_arr to overflow. Allocate them dynamically to prevent this overflow. 
Signed-off-by: Ajit Khaparde Reviewed-by: Damodharam Ammepalli --- drivers/net/bnxt/bnxt.h | 4 ++-- drivers/net/bnxt/bnxt_ethdev.c | 42 +++++++++++++++++++++++++++------- 2 files changed, 36 insertions(+), 10 deletions(-) diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index 7439ecf4fa..3fbdf1ddcc 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -455,8 +455,8 @@ struct bnxt_ring_mem_info { struct bnxt_ctx_pg_info { uint32_t entries; - void *ctx_pg_arr[MAX_CTX_PAGES]; - rte_iova_t ctx_dma_arr[MAX_CTX_PAGES]; + void **ctx_pg_arr; + rte_iova_t *ctx_dma_arr; struct bnxt_ring_mem_info ring_mem; }; diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index cab2589cf3..1eab8d5020 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -4768,7 +4768,7 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp, { struct bnxt_ring_mem_info *rmem = &ctx_pg->ring_mem; const struct rte_memzone *mz = NULL; - char mz_name[RTE_MEMZONE_NAMESIZE]; + char name[RTE_MEMZONE_NAMESIZE]; rte_iova_t mz_phys_addr; uint64_t valid_bits = 0; uint32_t sz; @@ -4780,6 +4780,19 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp, rmem->nr_pages = RTE_ALIGN_MUL_CEIL(mem_size, BNXT_PAGE_SIZE) / BNXT_PAGE_SIZE; rmem->page_size = BNXT_PAGE_SIZE; + + snprintf(name, RTE_MEMZONE_NAMESIZE, "bnxt_ctx_pg_arr%s_%x_%d", + suffix, idx, bp->eth_dev->data->port_id); + ctx_pg->ctx_pg_arr = rte_zmalloc(name, sizeof(void *) * rmem->nr_pages, 0); + if (ctx_pg->ctx_pg_arr == NULL) + return -ENOMEM; + + snprintf(name, RTE_MEMZONE_NAMESIZE, "bnxt_ctx_dma_arr%s_%x_%d", + suffix, idx, bp->eth_dev->data->port_id); + ctx_pg->ctx_dma_arr = rte_zmalloc(name, sizeof(rte_iova_t *) * rmem->nr_pages, 0); + if (ctx_pg->ctx_dma_arr == NULL) + return -ENOMEM; + rmem->pg_arr = ctx_pg->ctx_pg_arr; rmem->dma_arr = ctx_pg->ctx_dma_arr; rmem->flags = BNXT_RMEM_VALID_PTE_FLAG; @@ -4787,13 +4800,13 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp, 
valid_bits = PTU_PTE_VALID; if (rmem->nr_pages > 1) { - snprintf(mz_name, RTE_MEMZONE_NAMESIZE, + snprintf(name, RTE_MEMZONE_NAMESIZE, "bnxt_ctx_pg_tbl%s_%x_%d", suffix, idx, bp->eth_dev->data->port_id); - mz_name[RTE_MEMZONE_NAMESIZE - 1] = 0; - mz = rte_memzone_lookup(mz_name); + name[RTE_MEMZONE_NAMESIZE - 1] = 0; + mz = rte_memzone_lookup(name); if (!mz) { - mz = rte_memzone_reserve_aligned(mz_name, + mz = rte_memzone_reserve_aligned(name, rmem->nr_pages * 8, bp->eth_dev->device->numa_node, RTE_MEMZONE_2MB | @@ -4812,11 +4825,11 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp, rmem->pg_tbl_mz = mz; } - snprintf(mz_name, RTE_MEMZONE_NAMESIZE, "bnxt_ctx_%s_%x_%d", + snprintf(name, RTE_MEMZONE_NAMESIZE, "bnxt_ctx_%s_%x_%d", suffix, idx, bp->eth_dev->data->port_id); - mz = rte_memzone_lookup(mz_name); + mz = rte_memzone_lookup(name); if (!mz) { - mz = rte_memzone_reserve_aligned(mz_name, + mz = rte_memzone_reserve_aligned(name, mem_size, bp->eth_dev->device->numa_node, RTE_MEMZONE_1GB | @@ -4862,6 +4875,17 @@ static void bnxt_free_ctx_mem(struct bnxt *bp) return; bp->ctx->flags &= ~BNXT_CTX_FLAG_INITED; + rte_free(bp->ctx->qp_mem.ctx_pg_arr); + rte_free(bp->ctx->srq_mem.ctx_pg_arr); + rte_free(bp->ctx->cq_mem.ctx_pg_arr); + rte_free(bp->ctx->vnic_mem.ctx_pg_arr); + rte_free(bp->ctx->stat_mem.ctx_pg_arr); + rte_free(bp->ctx->qp_mem.ctx_dma_arr); + rte_free(bp->ctx->srq_mem.ctx_dma_arr); + rte_free(bp->ctx->cq_mem.ctx_dma_arr); + rte_free(bp->ctx->vnic_mem.ctx_dma_arr); + rte_free(bp->ctx->stat_mem.ctx_dma_arr); + rte_memzone_free(bp->ctx->qp_mem.ring_mem.mz); rte_memzone_free(bp->ctx->srq_mem.ring_mem.mz); rte_memzone_free(bp->ctx->cq_mem.ring_mem.mz); @@ -4874,6 +4898,8 @@ static void bnxt_free_ctx_mem(struct bnxt *bp) rte_memzone_free(bp->ctx->stat_mem.ring_mem.pg_tbl_mz); for (i = 0; i < bp->ctx->tqm_fp_rings_count + 1; i++) { + rte_free(bp->ctx->tqm_mem[i]->ctx_pg_arr); + rte_free(bp->ctx->tqm_mem[i]->ctx_dma_arr); if (bp->ctx->tqm_mem[i]) 
rte_memzone_free(bp->ctx->tqm_mem[i]->ring_mem.mz); }
From patchwork Mon Dec 4 18:37:05 2023
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 134823
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Ajit Khaparde
To: dev@dpdk.org
Subject: [PATCH 09/14] net/bnxt: add support for backing store v2
Date: Mon, 4 Dec 2023 10:37:05 -0800
Message-Id: <20231204183710.86921-10-ajit.khaparde@broadcom.com>
In-Reply-To: <20231204183710.86921-1-ajit.khaparde@broadcom.com>
References: <20231204183710.86921-1-ajit.khaparde@broadcom.com>

Add backing store v2 changes. The firmware supports the new backing store scheme for P7 and newer devices. To support this, the driver queries the different types of chip contexts the firmware supports and allocates the appropriate size of memory for the firmware and hardware to use. The code then goes ahead and frees up the memory during cleanup. Older P5 device family continues to support the version 1 of backing store.
While the P4 device family does not need any backing store memory. Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt.h | 69 +++++++- drivers/net/bnxt/bnxt_ethdev.c | 177 ++++++++++++++++++-- drivers/net/bnxt/bnxt_hwrm.c | 298 +++++++++++++++++++++++++++++++-- drivers/net/bnxt/bnxt_hwrm.h | 8 + drivers/net/bnxt/bnxt_util.c | 10 ++ drivers/net/bnxt/bnxt_util.h | 1 + 6 files changed, 524 insertions(+), 39 deletions(-) diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index 3fbdf1ddcc..68c4778dc3 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -81,6 +81,11 @@ #define BROADCOM_DEV_957508_N2100 0x5208 #define BROADCOM_DEV_957414_N225 0x4145 +#define HWRM_SPEC_CODE_1_8_3 0x10803 +#define HWRM_VERSION_1_9_1 0x10901 +#define HWRM_VERSION_1_9_2 0x10903 +#define HWRM_VERSION_1_10_2_13 0x10a020d + #define BNXT_MAX_MTU 9574 #define BNXT_NUM_VLANS 2 #define BNXT_MAX_PKT_LEN (BNXT_MAX_MTU + RTE_ETHER_HDR_LEN +\ @@ -430,16 +435,26 @@ struct bnxt_coal { #define BNXT_PAGE_SIZE (1 << BNXT_PAGE_SHFT) #define MAX_CTX_PAGES (BNXT_PAGE_SIZE / 8) +#define BNXT_RTE_MEMZONE_FLAG (RTE_MEMZONE_1GB | RTE_MEMZONE_IOVA_CONTIG) + #define PTU_PTE_VALID 0x1UL #define PTU_PTE_LAST 0x2UL #define PTU_PTE_NEXT_TO_LAST 0x4UL +#define BNXT_CTX_MIN 1 +#define BNXT_CTX_INV 0xffff + +#define BNXT_CTX_INIT_VALID(flags) \ + ((flags) & \ + HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_FLAGS_ENABLE_CTX_KIND_INIT) + struct bnxt_ring_mem_info { int nr_pages; int page_size; uint32_t flags; #define BNXT_RMEM_VALID_PTE_FLAG 1 #define BNXT_RMEM_RING_PTE_FLAG 2 +#define BNXT_RMEM_USE_FULL_PAGE_FLAG 4 void **pg_arr; rte_iova_t *dma_arr; @@ -460,7 +475,50 @@ struct bnxt_ctx_pg_info { struct bnxt_ring_mem_info ring_mem; }; +struct bnxt_ctx_mem { + uint16_t type; + uint16_t entry_size; + uint32_t flags; +#define BNXT_CTX_MEM_TYPE_VALID \ + HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_FLAGS_TYPE_VALID + uint32_t instance_bmap; + uint8_t init_value; + uint8_t entry_multiple; + uint16_t 
init_offset; +#define BNXT_CTX_INIT_INVALID_OFFSET 0xffff + uint32_t max_entries; + uint32_t min_entries; + uint8_t last:1; + uint8_t split_entry_cnt; +#define BNXT_MAX_SPLIT_ENTRY 4 + union { + struct { + uint32_t qp_l2_entries; + uint32_t qp_qp1_entries; + uint32_t qp_fast_qpmd_entries; + }; + uint32_t srq_l2_entries; + uint32_t cq_l2_entries; + uint32_t vnic_entries; + struct { + uint32_t mrav_av_entries; + uint32_t mrav_num_entries_units; + }; + uint32_t split[BNXT_MAX_SPLIT_ENTRY]; + }; + struct bnxt_ctx_pg_info *pg_info; +}; + +#define BNXT_CTX_FLAG_INITED 0x01 + struct bnxt_ctx_mem_info { + struct bnxt_ctx_mem *ctx_arr; + uint32_t supported_types; + uint32_t flags; + uint16_t types; + uint8_t tqm_fp_rings_count; + + /* The following are used for V1 */ uint32_t qp_max_entries; uint16_t qp_min_qp1_entries; uint16_t qp_max_l2_entries; @@ -484,10 +542,6 @@ struct bnxt_ctx_mem_info { uint16_t tim_entry_size; uint32_t tim_max_entries; uint8_t tqm_entries_multiple; - uint8_t tqm_fp_rings_count; - - uint32_t flags; -#define BNXT_CTX_FLAG_INITED 0x01 struct bnxt_ctx_pg_info qp_mem; struct bnxt_ctx_pg_info srq_mem; @@ -739,6 +793,13 @@ struct bnxt { #define BNXT_FW_CAP_TRUFLOW_EN BIT(8) #define BNXT_FW_CAP_VLAN_TX_INSERT BIT(9) #define BNXT_FW_CAP_RX_ALL_PKT_TS BIT(10) +#define BNXT_FW_CAP_BACKING_STORE_V2 BIT(12) +#define BNXT_FW_BACKING_STORE_V2_EN(bp) \ + ((bp)->fw_cap & BNXT_FW_CAP_BACKING_STORE_V2) +#define BNXT_FW_BACKING_STORE_V1_EN(bp) \ + (BNXT_CHIP_P5_P7((bp)) && \ + (bp)->hwrm_spec_code >= HWRM_VERSION_1_9_2 && \ + !BNXT_VF((bp))) #define BNXT_TRUFLOW_EN(bp) ((bp)->fw_cap & BNXT_FW_CAP_TRUFLOW_EN &&\ (bp)->app_id != 0xFF) diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index 1eab8d5020..4472268924 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -4760,8 +4760,26 @@ static int bnxt_map_pci_bars(struct rte_eth_dev *eth_dev) return 0; } +static void bnxt_init_ctxm_mem(struct bnxt_ctx_mem *ctxm, 
void *p, int len) +{ + uint8_t init_val = ctxm->init_value; + uint16_t offset = ctxm->init_offset; + uint8_t *p2 = p; + int i; + + if (!init_val) + return; + if (offset == BNXT_CTX_INIT_INVALID_OFFSET) { + memset(p, init_val, len); + return; + } + for (i = 0; i < len; i += ctxm->entry_size) + *(p2 + i + offset) = init_val; +} + static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp, struct bnxt_ctx_pg_info *ctx_pg, + struct bnxt_ctx_mem *ctxm, uint32_t mem_size, const char *suffix, uint16_t idx) @@ -4777,8 +4795,8 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp, if (!mem_size) return 0; - rmem->nr_pages = RTE_ALIGN_MUL_CEIL(mem_size, BNXT_PAGE_SIZE) / - BNXT_PAGE_SIZE; + rmem->nr_pages = + RTE_ALIGN_MUL_CEIL(mem_size, BNXT_PAGE_SIZE) / BNXT_PAGE_SIZE; rmem->page_size = BNXT_PAGE_SIZE; snprintf(name, RTE_MEMZONE_NAMESIZE, "bnxt_ctx_pg_arr%s_%x_%d", @@ -4795,13 +4813,13 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp, rmem->pg_arr = ctx_pg->ctx_pg_arr; rmem->dma_arr = ctx_pg->ctx_dma_arr; - rmem->flags = BNXT_RMEM_VALID_PTE_FLAG; + rmem->flags = BNXT_RMEM_VALID_PTE_FLAG | BNXT_RMEM_USE_FULL_PAGE_FLAG; valid_bits = PTU_PTE_VALID; if (rmem->nr_pages > 1) { snprintf(name, RTE_MEMZONE_NAMESIZE, - "bnxt_ctx_pg_tbl%s_%x_%d", + "bnxt_ctxpgtbl%s_%x_%d", suffix, idx, bp->eth_dev->data->port_id); name[RTE_MEMZONE_NAMESIZE - 1] = 0; mz = rte_memzone_lookup(name); @@ -4817,9 +4835,11 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp, return -ENOMEM; } - memset(mz->addr, 0, mz->len); + memset(mz->addr, 0xff, mz->len); mz_phys_addr = mz->iova; + if (ctxm != NULL) + bnxt_init_ctxm_mem(ctxm, mz->addr, mz->len); rmem->pg_tbl = mz->addr; rmem->pg_tbl_map = mz_phys_addr; rmem->pg_tbl_mz = mz; @@ -4840,9 +4860,11 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp, return -ENOMEM; } - memset(mz->addr, 0, mz->len); + memset(mz->addr, 0xff, mz->len); mz_phys_addr = mz->iova; + if (ctxm != NULL) + bnxt_init_ctxm_mem(ctxm, mz->addr, mz->len); for (sz = 0, i = 0; sz < mem_size; sz 
+= BNXT_PAGE_SIZE, i++) { rmem->pg_arr[i] = ((char *)mz->addr) + sz; rmem->dma_arr[i] = mz_phys_addr + sz; @@ -4867,6 +4889,34 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp, return 0; } +static void bnxt_free_ctx_mem_v2(struct bnxt *bp) +{ + uint16_t type; + + for (type = 0; type < bp->ctx->types; type++) { + struct bnxt_ctx_mem *ctxm = &bp->ctx->ctx_arr[type]; + struct bnxt_ctx_pg_info *ctx_pg = ctxm->pg_info; + int i, n = 1; + + if (!ctx_pg) + continue; + if (ctxm->instance_bmap) + n = hweight32(ctxm->instance_bmap); + + for (i = 0; i < n; i++) { + rte_free(ctx_pg[i].ctx_pg_arr); + rte_free(ctx_pg[i].ctx_dma_arr); + rte_memzone_free(ctx_pg[i].ring_mem.mz); + rte_memzone_free(ctx_pg[i].ring_mem.pg_tbl_mz); + } + + rte_free(ctx_pg); + ctxm->pg_info = NULL; + } + rte_free(bp->ctx->ctx_arr); + bp->ctx->ctx_arr = NULL; +} + static void bnxt_free_ctx_mem(struct bnxt *bp) { int i; @@ -4875,6 +4925,12 @@ static void bnxt_free_ctx_mem(struct bnxt *bp) return; bp->ctx->flags &= ~BNXT_CTX_FLAG_INITED; + + if (BNXT_FW_BACKING_STORE_V2_EN(bp)) { + bnxt_free_ctx_mem_v2(bp); + goto free_ctx; + } + rte_free(bp->ctx->qp_mem.ctx_pg_arr); rte_free(bp->ctx->srq_mem.ctx_pg_arr); rte_free(bp->ctx->cq_mem.ctx_pg_arr); @@ -4904,6 +4960,7 @@ static void bnxt_free_ctx_mem(struct bnxt *bp) rte_memzone_free(bp->ctx->tqm_mem[i]->ring_mem.mz); } +free_ctx: rte_free(bp->ctx); bp->ctx = NULL; } @@ -4922,28 +4979,113 @@ static void bnxt_free_ctx_mem(struct bnxt *bp) #define clamp_t(type, _x, min, max) min_t(type, max_t(type, _x, min), max) +int bnxt_alloc_ctx_pg_tbls(struct bnxt *bp) +{ + struct bnxt_ctx_mem_info *ctx = bp->ctx; + struct bnxt_ctx_mem *ctx2; + uint16_t type; + int rc = 0; + + ctx2 = &ctx->ctx_arr[0]; + for (type = 0; type < ctx->types && rc == 0; type++) { + struct bnxt_ctx_mem *ctxm = &ctx->ctx_arr[type]; + struct bnxt_ctx_pg_info *ctx_pg; + uint32_t entries, mem_size; + int w = 1; + int i; + + if (ctxm->entry_size == 0) + continue; + + ctx_pg = ctxm->pg_info; + + if 
(ctxm->instance_bmap) + w = hweight32(ctxm->instance_bmap); + + for (i = 0; i < w && rc == 0; i++) { + char name[RTE_MEMZONE_NAMESIZE] = {0}; + + sprintf(name, "_%d_%d", i, type); + + if (ctxm->entry_multiple) + entries = bnxt_roundup(ctxm->max_entries, + ctxm->entry_multiple); + else + entries = ctxm->max_entries; + + if (ctxm->type == HWRM_FUNC_BACKING_STORE_CFG_V2_INPUT_TYPE_CQ) + entries = ctxm->cq_l2_entries; + else if (ctxm->type == HWRM_FUNC_BACKING_STORE_CFG_V2_INPUT_TYPE_QP) + entries = ctxm->qp_l2_entries; + else if (ctxm->type == HWRM_FUNC_BACKING_STORE_CFG_V2_INPUT_TYPE_MRAV) + entries = ctxm->mrav_av_entries; + else if (ctxm->type == HWRM_FUNC_BACKING_STORE_CFG_V2_INPUT_TYPE_TIM) + entries = ctx2->qp_l2_entries; + entries = clamp_t(uint32_t, entries, ctxm->min_entries, + ctxm->max_entries); + ctx_pg[i].entries = entries; + mem_size = ctxm->entry_size * entries; + PMD_DRV_LOG(DEBUG, + "Type:%d instance:%d entries:%d size:%d\n", + ctxm->type, i, ctx_pg[i].entries, mem_size); + rc = bnxt_alloc_ctx_mem_blk(bp, &ctx_pg[i], + ctxm->init_value ? 
ctxm : NULL, + mem_size, name, i); + } + } + + return rc; +} + int bnxt_alloc_ctx_mem(struct bnxt *bp) { struct bnxt_ctx_pg_info *ctx_pg; struct bnxt_ctx_mem_info *ctx; uint32_t mem_size, ena, entries; + int types = BNXT_CTX_MIN; uint32_t entries_sp, min; - int i, rc; + int i, rc = 0; + + if (!BNXT_FW_BACKING_STORE_V1_EN(bp) && + !BNXT_FW_BACKING_STORE_V2_EN(bp)) + return rc; + + if (BNXT_FW_BACKING_STORE_V2_EN(bp)) { + types = bnxt_hwrm_func_backing_store_types_count(bp); + if (types <= 0) + return types; + } + + rc = bnxt_hwrm_func_backing_store_ctx_alloc(bp, types); + if (rc != 0) + return rc; + + if (bp->ctx->flags & BNXT_CTX_FLAG_INITED) + return 0; + + ctx = bp->ctx; + if (BNXT_FW_BACKING_STORE_V2_EN(bp)) { + rc = bnxt_hwrm_func_backing_store_qcaps_v2(bp); + + for (i = 0 ; i < bp->ctx->types && rc == 0; i++) { + struct bnxt_ctx_mem *ctxm = &ctx->ctx_arr[i]; + + rc = bnxt_hwrm_func_backing_store_cfg_v2(bp, ctxm); + } + goto done; + } rc = bnxt_hwrm_func_backing_store_qcaps(bp); if (rc) { PMD_DRV_LOG(ERR, "Query context mem capability failed\n"); return rc; } - ctx = bp->ctx; - if (!ctx || (ctx->flags & BNXT_CTX_FLAG_INITED)) - return 0; ctx_pg = &ctx->qp_mem; ctx_pg->entries = ctx->qp_min_qp1_entries + ctx->qp_max_l2_entries; if (ctx->qp_entry_size) { mem_size = ctx->qp_entry_size * ctx_pg->entries; - rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, "qp_mem", 0); + rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, NULL, mem_size, "qp_mem", 0); if (rc) return rc; } @@ -4952,7 +5094,7 @@ int bnxt_alloc_ctx_mem(struct bnxt *bp) ctx_pg->entries = ctx->srq_max_l2_entries; if (ctx->srq_entry_size) { mem_size = ctx->srq_entry_size * ctx_pg->entries; - rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, "srq_mem", 0); + rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, NULL, mem_size, "srq_mem", 0); if (rc) return rc; } @@ -4961,7 +5103,7 @@ int bnxt_alloc_ctx_mem(struct bnxt *bp) ctx_pg->entries = ctx->cq_max_l2_entries; if (ctx->cq_entry_size) { mem_size = ctx->cq_entry_size * 
ctx_pg->entries; - rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, "cq_mem", 0); + rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, NULL, mem_size, "cq_mem", 0); if (rc) return rc; } @@ -4971,7 +5113,7 @@ int bnxt_alloc_ctx_mem(struct bnxt *bp) ctx->vnic_max_ring_table_entries; if (ctx->vnic_entry_size) { mem_size = ctx->vnic_entry_size * ctx_pg->entries; - rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, "vnic_mem", 0); + rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, NULL, mem_size, "vnic_mem", 0); if (rc) return rc; } @@ -4980,7 +5122,7 @@ int bnxt_alloc_ctx_mem(struct bnxt *bp) ctx_pg->entries = ctx->stat_max_entries; if (ctx->stat_entry_size) { mem_size = ctx->stat_entry_size * ctx_pg->entries; - rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, "stat_mem", 0); + rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, NULL, mem_size, "stat_mem", 0); if (rc) return rc; } @@ -5004,8 +5146,8 @@ int bnxt_alloc_ctx_mem(struct bnxt *bp) ctx_pg->entries = i ? entries : entries_sp; if (ctx->tqm_entry_size) { mem_size = ctx->tqm_entry_size * ctx_pg->entries; - rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, - "tqm_mem", i); + rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, NULL, + mem_size, "tqm_mem", i); if (rc) return rc; } @@ -5017,6 +5159,7 @@ int bnxt_alloc_ctx_mem(struct bnxt *bp) ena |= FUNC_BACKING_STORE_CFG_INPUT_DFLT_ENABLES; rc = bnxt_hwrm_func_backing_store_cfg(bp, ena); +done: if (rc) PMD_DRV_LOG(ERR, "Failed to configure context mem: rc = %d\n", rc); diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c index 2d0a7a2731..67f8020e3c 100644 --- a/drivers/net/bnxt/bnxt_hwrm.c +++ b/drivers/net/bnxt/bnxt_hwrm.c @@ -24,10 +24,6 @@ #include "bnxt_vnic.h" #include "hsi_struct_def_dpdk.h" -#define HWRM_SPEC_CODE_1_8_3 0x10803 -#define HWRM_VERSION_1_9_1 0x10901 -#define HWRM_VERSION_1_9_2 0x10903 -#define HWRM_VERSION_1_10_2_13 0x10a020d struct bnxt_plcmodes_cfg { uint32_t flags; uint16_t jumbo_thresh; @@ -35,6 +31,28 @@ struct bnxt_plcmodes_cfg { uint16_t hds_threshold; }; 
+const char *bnxt_backing_store_types[] = { + "Queue pair", + "Shared receive queue", + "Completion queue", + "Virtual NIC", + "Statistic context", + "Slow-path TQM ring", + "Fast-path TQM ring", + "MR and MAV Context", + "TIM", + "Tx key context", + "Rx key context", + "Mid-path TQM ring", + "SQ Doorbell shadow region", + "RQ Doorbell shadow region", + "SRQ Doorbell shadow region", + "CQ Doorbell shadow region", + "QUIC Tx key context", + "QUIC Rx key context", + "Invalid type" +}; + static int page_getenum(size_t size) { if (size <= 1 << 4) @@ -894,6 +912,11 @@ static int __bnxt_hwrm_func_qcaps(struct bnxt *bp) if (flags & HWRM_FUNC_QCAPS_OUTPUT_FLAGS_LINK_ADMIN_STATUS_SUPPORTED) bp->fw_cap |= BNXT_FW_CAP_LINK_ADMIN; + if (flags & HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_BS_V2_SUPPORTED) { + PMD_DRV_LOG(DEBUG, "Backing store v2 supported\n"); + if (BNXT_CHIP_P7(bp)) + bp->fw_cap |= BNXT_FW_CAP_BACKING_STORE_V2; + } if (!(flags & HWRM_FUNC_QCAPS_OUTPUT_FLAGS_VLAN_ACCELERATION_TX_DISABLED)) { bp->fw_cap |= BNXT_FW_CAP_VLAN_TX_INSERT; PMD_DRV_LOG(DEBUG, "VLAN acceleration for TX is enabled\n"); @@ -5461,7 +5484,188 @@ int bnxt_hwrm_set_ring_coal(struct bnxt *bp, return 0; } -#define BNXT_RTE_MEMZONE_FLAG (RTE_MEMZONE_1GB | RTE_MEMZONE_IOVA_CONTIG) +static void bnxt_init_ctx_initializer(struct bnxt_ctx_mem *ctxm, + uint8_t init_val, + uint8_t init_offset, + bool init_mask_set) +{ + ctxm->init_value = init_val; + ctxm->init_offset = BNXT_CTX_INIT_INVALID_OFFSET; + if (init_mask_set) + ctxm->init_offset = init_offset * 4; + else + ctxm->init_value = 0; +} + +static int bnxt_alloc_all_ctx_pg_info(struct bnxt *bp) +{ + struct bnxt_ctx_mem_info *ctx = bp->ctx; + char name[RTE_MEMZONE_NAMESIZE]; + uint16_t type; + + for (type = 0; type < ctx->types; type++) { + struct bnxt_ctx_mem *ctxm = &ctx->ctx_arr[type]; + int n = 1; + + if (!ctxm->max_entries || ctxm->pg_info) + continue; + + if (ctxm->instance_bmap) + n = hweight32(ctxm->instance_bmap); + + sprintf(name, 
"bnxt_ctx_pgmem_%d_%d", + bp->eth_dev->data->port_id, type); + ctxm->pg_info = rte_malloc(name, sizeof(*ctxm->pg_info) * n, + RTE_CACHE_LINE_SIZE); + if (!ctxm->pg_info) + return -ENOMEM; + } + return 0; +} + +static void bnxt_init_ctx_v2_driver_managed(struct bnxt *bp __rte_unused, + struct bnxt_ctx_mem *ctxm) +{ + switch (ctxm->type) { + case HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_TYPE_SQ_DB_SHADOW: + case HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_TYPE_RQ_DB_SHADOW: + case HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_TYPE_SRQ_DB_SHADOW: + case HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_TYPE_CQ_DB_SHADOW: + /* FALLTHROUGH */ + ctxm->entry_size = 0; + ctxm->min_entries = 1; + ctxm->max_entries = 1; + break; + } +} + +int bnxt_hwrm_func_backing_store_qcaps_v2(struct bnxt *bp) +{ + struct hwrm_func_backing_store_qcaps_v2_input req = {0}; + struct hwrm_func_backing_store_qcaps_v2_output *resp = + bp->hwrm_cmd_resp_addr; + struct bnxt_ctx_mem_info *ctx = bp->ctx; + uint16_t last_valid_type = BNXT_CTX_INV; + uint16_t type = 0; + int rc; + + for (type = 0; type < bp->ctx->types; ) { + struct bnxt_ctx_mem *ctxm = &ctx->ctx_arr[type]; + uint8_t init_val, init_off, i; + uint32_t *p; + uint32_t flags; + + HWRM_PREP(&req, HWRM_FUNC_BACKING_STORE_QCAPS_V2, BNXT_USE_CHIMP_MB); + req.type = rte_cpu_to_le_16(type); + rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB); + HWRM_CHECK_RESULT(); + + flags = rte_le_to_cpu_32(resp->flags); + type = rte_le_to_cpu_16(resp->next_valid_type); + if (!(flags & HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_FLAGS_TYPE_VALID)) + goto next; + + ctxm->type = rte_le_to_cpu_16(resp->type); + + last_valid_type = ctxm->type; + ctxm->flags = flags; + if (flags & + HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_FLAGS_DRIVER_MANAGED_MEMORY) { + bnxt_init_ctx_v2_driver_managed(bp, ctxm); + goto next; + } + ctxm->entry_size = rte_le_to_cpu_16(resp->entry_size); + ctxm->instance_bmap = rte_le_to_cpu_32(resp->instance_bit_map); + ctxm->entry_multiple = 
resp->entry_multiple; + ctxm->max_entries = rte_le_to_cpu_32(resp->max_num_entries); + ctxm->min_entries = rte_le_to_cpu_32(resp->min_num_entries); + init_val = resp->ctx_init_value; + init_off = resp->ctx_init_offset; + bnxt_init_ctx_initializer(ctxm, init_val, init_off, + BNXT_CTX_INIT_VALID(flags)); + ctxm->split_entry_cnt = RTE_MIN(resp->subtype_valid_cnt, + BNXT_MAX_SPLIT_ENTRY); + for (i = 0, p = &resp->split_entry_0; i < ctxm->split_entry_cnt; + i++, p++) + ctxm->split[i] = rte_le_to_cpu_32(*p); + + PMD_DRV_LOG(DEBUG, + "type:%s size:%d multiple:%d max:%d min:%d split:%d init_val:%d init_off:%d init:%d bmap:0x%x\n", + bnxt_backing_store_types[ctxm->type], ctxm->entry_size, + ctxm->entry_multiple, ctxm->max_entries, ctxm->min_entries, + ctxm->split_entry_cnt, init_val, init_off, + BNXT_CTX_INIT_VALID(flags), ctxm->instance_bmap); + +next: + HWRM_UNLOCK(); + } + if (last_valid_type < bp->ctx->types) + ctx->ctx_arr[last_valid_type].last = true; + PMD_DRV_LOG(DEBUG, "Last valid type %d\n", last_valid_type); + rc = bnxt_alloc_all_ctx_pg_info(bp); + if (rc == 0) + rc = bnxt_alloc_ctx_pg_tbls(bp); + return rc; +} + +int bnxt_hwrm_func_backing_store_types_count(struct bnxt *bp) +{ + struct hwrm_func_backing_store_qcaps_v2_input req = {0}; + struct hwrm_func_backing_store_qcaps_v2_output *resp = + bp->hwrm_cmd_resp_addr; + uint16_t type = 0; + int types = 0; + int rc; + + /* Calculate number of valid context types */ + do { + uint32_t flags; + + HWRM_PREP(&req, HWRM_FUNC_BACKING_STORE_QCAPS_V2, BNXT_USE_CHIMP_MB); + req.type = rte_cpu_to_le_16(type); + rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB); + HWRM_CHECK_RESULT(); + if (rc != 0) + return rc; + + flags = rte_le_to_cpu_32(resp->flags); + type = rte_le_to_cpu_16(resp->next_valid_type); + HWRM_UNLOCK(); + + if (flags & HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_FLAGS_TYPE_VALID) + types++; + } while (type != HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_TYPE_INVALID); + PMD_DRV_LOG(DEBUG, "Number of 
valid types %d\n", types); + + return types; +} + +int bnxt_hwrm_func_backing_store_ctx_alloc(struct bnxt *bp, uint16_t types) +{ + int alloc_len = sizeof(struct bnxt_ctx_mem_info); + + if (!BNXT_CHIP_P5_P7(bp) || + bp->hwrm_spec_code < HWRM_VERSION_1_9_2 || + BNXT_VF(bp) || + bp->ctx) + return 0; + + bp->ctx = rte_zmalloc("bnxt_ctx_mem", alloc_len, + RTE_CACHE_LINE_SIZE); + if (bp->ctx == NULL) + return -ENOMEM; + + alloc_len = sizeof(struct bnxt_ctx_mem) * types; + bp->ctx->ctx_arr = rte_zmalloc("bnxt_ctx_mem_arr", + alloc_len, + RTE_CACHE_LINE_SIZE); + if (bp->ctx->ctx_arr == NULL) + return -ENOMEM; + + bp->ctx->types = types; + return 0; +} + int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp) { struct hwrm_func_backing_store_qcaps_input req = {0}; @@ -5469,27 +5673,19 @@ int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp) bp->hwrm_cmd_resp_addr; struct bnxt_ctx_pg_info *ctx_pg; struct bnxt_ctx_mem_info *ctx; - int total_alloc_len; int rc, i, tqm_rings; if (!BNXT_CHIP_P5_P7(bp) || bp->hwrm_spec_code < HWRM_VERSION_1_9_2 || BNXT_VF(bp) || - bp->ctx) + bp->ctx->flags & BNXT_CTX_FLAG_INITED) return 0; + ctx = bp->ctx; HWRM_PREP(&req, HWRM_FUNC_BACKING_STORE_QCAPS, BNXT_USE_CHIMP_MB); rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB); HWRM_CHECK_RESULT_SILENT(); - total_alloc_len = sizeof(*ctx); - ctx = rte_zmalloc("bnxt_ctx_mem", total_alloc_len, - RTE_CACHE_LINE_SIZE); - if (!ctx) { - rc = -ENOMEM; - goto ctx_err; - } - ctx->qp_max_entries = rte_le_to_cpu_32(resp->qp_max_entries); ctx->qp_min_qp1_entries = rte_le_to_cpu_16(resp->qp_min_qp1_entries); @@ -5500,8 +5696,13 @@ int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp) rte_le_to_cpu_16(resp->srq_max_l2_entries); ctx->srq_max_entries = rte_le_to_cpu_32(resp->srq_max_entries); ctx->srq_entry_size = rte_le_to_cpu_16(resp->srq_entry_size); - ctx->cq_max_l2_entries = - rte_le_to_cpu_16(resp->cq_max_l2_entries); + if (BNXT_CHIP_P7(bp)) + ctx->cq_max_l2_entries = + 
RTE_MIN(BNXT_P7_CQ_MAX_L2_ENT, + rte_le_to_cpu_16(resp->cq_max_l2_entries)); + else + ctx->cq_max_l2_entries = + rte_le_to_cpu_16(resp->cq_max_l2_entries); ctx->cq_max_entries = rte_le_to_cpu_32(resp->cq_max_entries); ctx->cq_entry_size = rte_le_to_cpu_16(resp->cq_entry_size); ctx->vnic_max_vnic_entries = @@ -5555,12 +5756,73 @@ int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp) for (i = 0; i < tqm_rings; i++, ctx_pg++) ctx->tqm_mem[i] = ctx_pg; - bp->ctx = ctx; ctx_err: HWRM_UNLOCK(); return rc; } +int bnxt_hwrm_func_backing_store_cfg_v2(struct bnxt *bp, + struct bnxt_ctx_mem *ctxm) +{ + struct hwrm_func_backing_store_cfg_v2_input req = {0}; + struct hwrm_func_backing_store_cfg_v2_output *resp = + bp->hwrm_cmd_resp_addr; + struct bnxt_ctx_pg_info *ctx_pg; + int i, j, k; + uint32_t *p; + int rc = 0; + int w = 1; + int b = 1; + + if (!BNXT_PF(bp)) { + PMD_DRV_LOG(INFO, + "Backing store config V2 can be issued on PF only\n"); + return 0; + } + + if (!(ctxm->flags & BNXT_CTX_MEM_TYPE_VALID) || !ctxm->pg_info) + return 0; + + if (ctxm->instance_bmap) + b = ctxm->instance_bmap; + + w = hweight32(b); + + for (i = 0, j = 0; i < w && rc == 0; i++) { + if (!(b & (1 << i))) + continue; + + HWRM_PREP(&req, HWRM_FUNC_BACKING_STORE_CFG_V2, BNXT_USE_CHIMP_MB); + req.type = rte_cpu_to_le_16(ctxm->type); + req.entry_size = rte_cpu_to_le_16(ctxm->entry_size); + req.subtype_valid_cnt = ctxm->split_entry_cnt; + for (k = 0, p = &req.split_entry_0; k < ctxm->split_entry_cnt; k++) + p[k] = rte_cpu_to_le_32(ctxm->split[k]); + + req.instance = rte_cpu_to_le_16(i); + ctx_pg = &ctxm->pg_info[j++]; + if (!ctx_pg->entries) + goto unlock; + + req.num_entries = rte_cpu_to_le_32(ctx_pg->entries); + bnxt_hwrm_set_pg_attr(&ctx_pg->ring_mem, + &req.page_size_pbl_level, + &req.page_dir); + PMD_DRV_LOG(DEBUG, + "Backing store config V2 type:%s last %d, instance %d, hw %d\n", + bnxt_backing_store_types[req.type], ctxm->last, j, w); + if (ctxm->last && i == (w - 1)) + req.flags = + 
rte_cpu_to_le_32(BACKING_STORE_CFG_V2_IN_FLG_CFG_ALL_DONE); + + rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB); + HWRM_CHECK_RESULT(); +unlock: + HWRM_UNLOCK(); + } + return rc; +} + int bnxt_hwrm_func_backing_store_cfg(struct bnxt *bp, uint32_t enables) { struct hwrm_func_backing_store_cfg_input req = {0}; diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h index f9fa6cf73a..3d5194257b 100644 --- a/drivers/net/bnxt/bnxt_hwrm.h +++ b/drivers/net/bnxt/bnxt_hwrm.h @@ -60,6 +60,8 @@ struct hwrm_func_qstats_output; HWRM_PORT_PHY_CFG_INPUT_ENABLES_AUTO_PAM4_LINK_SPEED_MASK #define HWRM_PORT_PHY_CFG_IN_EN_AUTO_LINK_SPEED_MASK \ HWRM_PORT_PHY_CFG_INPUT_ENABLES_AUTO_LINK_SPEED_MASK +#define BACKING_STORE_CFG_V2_IN_FLG_CFG_ALL_DONE \ + HWRM_FUNC_BACKING_STORE_CFG_V2_INPUT_FLAGS_BS_CFG_ALL_DONE #define HWRM_SPEC_CODE_1_8_4 0x10804 #define HWRM_SPEC_CODE_1_9_0 0x10900 @@ -355,4 +357,10 @@ void bnxt_free_hwrm_tx_ring(struct bnxt *bp, int queue_index); int bnxt_alloc_hwrm_tx_ring(struct bnxt *bp, int queue_index); int bnxt_hwrm_config_host_mtu(struct bnxt *bp); int bnxt_vnic_rss_clear_p5(struct bnxt *bp, struct bnxt_vnic_info *vnic); +int bnxt_hwrm_func_backing_store_qcaps_v2(struct bnxt *bp); +int bnxt_hwrm_func_backing_store_cfg_v2(struct bnxt *bp, + struct bnxt_ctx_mem *ctxm); +int bnxt_hwrm_func_backing_store_types_count(struct bnxt *bp); +int bnxt_hwrm_func_backing_store_ctx_alloc(struct bnxt *bp, uint16_t types); +int bnxt_alloc_ctx_pg_tbls(struct bnxt *bp); #endif diff --git a/drivers/net/bnxt/bnxt_util.c b/drivers/net/bnxt/bnxt_util.c index 47dd5fa6ff..aa184496c2 100644 --- a/drivers/net/bnxt/bnxt_util.c +++ b/drivers/net/bnxt/bnxt_util.c @@ -27,3 +27,13 @@ void bnxt_eth_hw_addr_random(uint8_t *mac_addr) mac_addr[1] = 0x0a; mac_addr[2] = 0xf7; } + +uint8_t hweight32(uint32_t word32) +{ + uint32_t res = word32 - ((word32 >> 1) & 0x55555555); + + res = (res & 0x33333333) + ((res >> 2) & 0x33333333); + res = (res + (res >> 4)) & 
0x0F0F0F0F; + res = res + (res >> 8); + return (res + (res >> 16)) & 0x000000FF; +} diff --git a/drivers/net/bnxt/bnxt_util.h b/drivers/net/bnxt/bnxt_util.h index 7f5b4c160e..b265f5841b 100644 --- a/drivers/net/bnxt/bnxt_util.h +++ b/drivers/net/bnxt/bnxt_util.h @@ -17,4 +17,5 @@ int bnxt_check_zero_bytes(const uint8_t *bytes, int len); void bnxt_eth_hw_addr_random(uint8_t *mac_addr); +uint8_t hweight32(uint32_t word32); #endif /* _BNXT_UTIL_H_ */

From patchwork Mon Dec 4 18:37:06 2023
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 134824
From: Ajit Khaparde
To: dev@dpdk.org
Cc: Kishore Padmanabha, Mike Baucom
Subject: [PATCH 10/14] net/bnxt: refactor the ulp initialization
Date: Mon, 4 Dec 2023 10:37:06 -0800
Message-Id: <20231204183710.86921-11-ajit.khaparde@broadcom.com>
In-Reply-To: <20231204183710.86921-1-ajit.khaparde@broadcom.com>

From: Kishore Padmanabha

Add new method to
check all the conditions that must be met before the ULP can be initialized. Signed-off-by: Kishore Padmanabha Reviewed-by: Ajit Khaparde Reviewed-by: Mike Baucom --- drivers/net/bnxt/bnxt_ethdev.c | 28 +++++++++++++++++++++++----- 1 file changed, 23 insertions(+), 5 deletions(-) diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index 4472268924..8f3bd858da 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -190,6 +190,7 @@ static void bnxt_dev_recover(void *arg); static void bnxt_free_error_recovery_info(struct bnxt *bp); static void bnxt_free_rep_info(struct bnxt *bp); static int bnxt_check_fw_ready(struct bnxt *bp); +static bool bnxt_enable_ulp(struct bnxt *bp); int is_bnxt_in_error(struct bnxt *bp) { @@ -1521,7 +1522,8 @@ static int bnxt_dev_stop(struct rte_eth_dev *eth_dev) return ret; /* delete the bnxt ULP port details */ - bnxt_ulp_port_deinit(bp); + if (bnxt_enable_ulp(bp)) + bnxt_ulp_port_deinit(bp); bnxt_cancel_fw_health_check(bp); @@ -1642,9 +1644,11 @@ int bnxt_dev_start_op(struct rte_eth_dev *eth_dev) goto error; /* Initialize bnxt ULP port details */ - rc = bnxt_ulp_port_init(bp); - if (rc) - goto error; + if (bnxt_enable_ulp(bp)) { + rc = bnxt_ulp_port_init(bp); + if (rc) + goto error; + } eth_dev->rx_pkt_burst = bnxt_receive_function(eth_dev); eth_dev->tx_pkt_burst = bnxt_transmit_function(eth_dev); @@ -3427,7 +3431,7 @@ bnxt_flow_ops_get_op(struct rte_eth_dev *dev, */ dev->data->dev_flags |= RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE; - if (BNXT_TRUFLOW_EN(bp)) + if (bnxt_enable_ulp(bp)) *ops = &bnxt_ulp_rte_flow_ops; else *ops = &bnxt_flow_ops; @@ -6667,6 +6671,20 @@ struct tf *bnxt_get_tfp_session(struct bnxt *bp, enum bnxt_session_type type) &bp->tfp[BNXT_SESSION_TYPE_REGULAR] : &bp->tfp[type]; } +/* check if ULP should be enabled or not */ +static bool bnxt_enable_ulp(struct bnxt *bp) +{ + /* truflow and MPC should be enabled */ + /* not enabling ulp for cli and no truflow apps */ + if
(BNXT_TRUFLOW_EN(bp) && bp->app_id != 254 && + bp->app_id != 255) { + if (BNXT_CHIP_P7(bp)) + return false; + return true; + } + return false; +} + RTE_LOG_REGISTER_SUFFIX(bnxt_logtype_driver, driver, NOTICE); RTE_PMD_REGISTER_PCI(net_bnxt, bnxt_rte_pmd); RTE_PMD_REGISTER_PCI_TABLE(net_bnxt, bnxt_pci_id_map);

From patchwork Mon Dec 4 18:37:07 2023
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 134825
From: Ajit Khaparde
To: dev@dpdk.org
Cc: Damodharam Ammepalli
Subject: [PATCH 11/14] net/bnxt: modify sending new HWRM commands to firmware
Date: Mon, 4 Dec 2023 10:37:07 -0800
Message-Id: <20231204183710.86921-12-ajit.khaparde@broadcom.com>
In-Reply-To: <20231204183710.86921-1-ajit.khaparde@broadcom.com>

If the firmware fails to respond to a HWRM command within a certain time, it may be because the firmware is in a bad state. Do not send any new HWRM commands in such a scenario.
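The change described above turns the HWRM channel into a sticky latch: once one command times out, later commands are dropped instead of being posted to firmware that may be wedged. A minimal C sketch of that pattern (the flag name mirrors BNXT_FLAG_FW_TIMEDOUT from the patch; the context struct, the cmds_sent counter, and the fw_responds knob are illustrative stand-ins, not driver API):

```c
#include <stdint.h>
#include <errno.h>

#define FLAG_FW_TIMEDOUT (1u << 31)

struct fw_ctx {
    uint32_t flags;
    int cmds_sent;      /* commands that actually reached "firmware" */
};

/* fw_responds stands in for the real poll-for-completion loop. */
int send_hwrm_cmd(struct fw_ctx *ctx, int fw_responds)
{
    /* A previous command timed out: drop new commands silently, the
     * way the patch returns early from bnxt_hwrm_send_message(). */
    if (ctx->flags & FLAG_FW_TIMEDOUT)
        return 0;

    ctx->cmds_sent++;
    if (!fw_responds) {
        /* Latch the failure so every later command bails out early. */
        ctx->flags |= FLAG_FW_TIMEDOUT;
        return -ETIMEDOUT;
    }
    return 0;
}
```

Note that patch 12 in this series later clears the flag (bp->flags &= ~BNXT_FLAG_FW_TIMEDOUT) before retrying HWRM_VER_GET, which is the one place the latch is intentionally reset.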
Signed-off-by: Ajit Khaparde Reviewed-by: Damodharam Ammepalli --- drivers/net/bnxt/bnxt.h | 1 + drivers/net/bnxt/bnxt_hwrm.c | 5 +++++ 2 files changed, 6 insertions(+) diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index 68c4778dc3..f7a60eb9a1 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -745,6 +745,7 @@ struct bnxt { #define BNXT_FLAG_DFLT_MAC_SET BIT(26) #define BNXT_FLAG_GFID_ENABLE BIT(27) #define BNXT_FLAG_CHIP_P7 BIT(30) +#define BNXT_FLAG_FW_TIMEDOUT BIT(31) #define BNXT_PF(bp) (!((bp)->flags & BNXT_FLAG_VF)) #define BNXT_VF(bp) ((bp)->flags & BNXT_FLAG_VF) #define BNXT_NPAR(bp) ((bp)->flags & BNXT_FLAG_NPAR_PF) diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c index 67f8020e3c..ccc5417af1 100644 --- a/drivers/net/bnxt/bnxt_hwrm.c +++ b/drivers/net/bnxt/bnxt_hwrm.c @@ -200,6 +200,10 @@ static int bnxt_hwrm_send_message(struct bnxt *bp, void *msg, if (bp->flags & BNXT_FLAG_FATAL_ERROR) return 0; + /* If a previous HWRM command timed out, do not send a new HWRM command */ + if (bp->flags & BNXT_FLAG_FW_TIMEDOUT) + return 0; + timeout = bp->hwrm_cmd_timeout; /* Update the message length for backing store config for new FW.
*/ @@ -300,6 +304,7 @@ static int bnxt_hwrm_send_message(struct bnxt *bp, void *msg, PMD_DRV_LOG(ERR, "Error(timeout) sending msg 0x%04x, seq_id %d\n", req->req_type, req->seq_id); + bp->flags |= BNXT_FLAG_FW_TIMEDOUT; return -ETIMEDOUT; } return 0;

From patchwork Mon Dec 4 18:37:08 2023
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 134826
From: Ajit Khaparde
To: dev@dpdk.org
Cc: Kalesh AP, Somnath Kotur
Subject: [PATCH 12/14] net/bnxt: retry HWRM ver get if the command fails
Date: Mon, 4 Dec 2023 10:37:08 -0800
Message-Id: <20231204183710.86921-13-ajit.khaparde@broadcom.com>
In-Reply-To: <20231204183710.86921-1-ajit.khaparde@broadcom.com>

Retry HWRM ver get if the command times out because of PCI FLR. When the PCI driver issues an FLR during device initialization, the firmware may have to block the PXP target traffic till the FLR is complete.
An HWRM_VER_GET command issued during that window may time out, so retry the command in such a scenario. Signed-off-by: Ajit Khaparde Reviewed-by: Kalesh AP Reviewed-by: Somnath Kotur --- drivers/net/bnxt/bnxt.h | 1 + drivers/net/bnxt/bnxt_ethdev.c | 12 +++++++++++- 2 files changed, 12 insertions(+), 1 deletion(-) diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index f7a60eb9a1..7aed4c3da3 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -879,6 +879,7 @@ struct bnxt { /* default command timeout value of 500ms */ #define DFLT_HWRM_CMD_TIMEOUT 500000 +#define PCI_FUNC_RESET_WAIT_TIMEOUT 1500000 /* short command timeout value of 50ms */ #define SHORT_HWRM_CMD_TIMEOUT 50000 /* default HWRM request timeout value */ diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index 8f3bd858da..0ae6697940 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -5442,6 +5442,7 @@ static int bnxt_map_hcomm_fw_status_reg(struct bnxt *bp) static int bnxt_get_config(struct bnxt *bp) { uint16_t mtu; + int timeout; int rc = 0; bp->fw_cap = 0; @@ -5450,8 +5451,17 @@ static int bnxt_get_config(struct bnxt *bp) if (rc) return rc; - rc = bnxt_hwrm_ver_get(bp, DFLT_HWRM_CMD_TIMEOUT); + timeout = BNXT_CHIP_P7(bp) ?
+ PCI_FUNC_RESET_WAIT_TIMEOUT : + DFLT_HWRM_CMD_TIMEOUT; +try_again: + rc = bnxt_hwrm_ver_get(bp, timeout); if (rc) { + if (rc == -ETIMEDOUT && timeout == PCI_FUNC_RESET_WAIT_TIMEOUT) { + bp->flags &= ~BNXT_FLAG_FW_TIMEDOUT; + timeout = DFLT_HWRM_CMD_TIMEOUT; + goto try_again; + } bnxt_check_fw_status(bp); return rc; }

From patchwork Mon Dec 4 18:37:09 2023
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 134827
From: Ajit Khaparde
To: dev@dpdk.org
Cc: Kalesh AP, Damodharam Ammepalli
Subject: [PATCH 13/14] net/bnxt: cap ring resources for P7 devices
Date: Mon, 4 Dec 2023 10:37:09 -0800
Message-Id: <20231204183710.86921-14-ajit.khaparde@broadcom.com>
In-Reply-To: <20231204183710.86921-1-ajit.khaparde@broadcom.com>

Cap the NQ count for P7 devices. The driver does not need a high NQ ring count anyway since it operates in poll mode.
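The capping logic amounts to overriding the firmware-advertised MSI-X count with a fixed ceiling on P7 chips. A small illustrative sketch (the cap value used here is a placeholder; the real BNXT_P7_MAX_NQ_RING_CNT is defined in the driver headers and its value is not shown in this patch):

```c
#include <stdint.h>

/* Placeholder cap standing in for BNXT_P7_MAX_NQ_RING_CNT. */
#define P7_MAX_NQ_RING_CNT 48

/* Mirrors the resc_qcaps change: on P7, ignore the (potentially large)
 * firmware-advertised MSI-X count and use a fixed cap, since a poll-mode
 * driver needs only a few notification queues; older chips keep the
 * firmware-reported value. */
uint16_t pick_max_nq_rings(int is_p7, uint16_t fw_max_msix)
{
    if (is_p7)
        return P7_MAX_NQ_RING_CNT;
    return fw_max_msix;
}
```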
Signed-off-by: Ajit Khaparde Reviewed-by: Kalesh AP Reviewed-by: Damodharam Ammepalli --- drivers/net/bnxt/bnxt_hwrm.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c index ccc5417af1..a747f6b6b8 100644 --- a/drivers/net/bnxt/bnxt_hwrm.c +++ b/drivers/net/bnxt/bnxt_hwrm.c @@ -1222,7 +1222,10 @@ int bnxt_hwrm_func_resc_qcaps(struct bnxt *bp) else bp->max_vnics = rte_le_to_cpu_16(resp->max_vnics); bp->max_stat_ctx = rte_le_to_cpu_16(resp->max_stat_ctx); - bp->max_nq_rings = rte_le_to_cpu_16(resp->max_msix); + if (BNXT_CHIP_P7(bp)) + bp->max_nq_rings = BNXT_P7_MAX_NQ_RING_CNT; + else + bp->max_nq_rings = rte_le_to_cpu_16(resp->max_msix); bp->vf_resv_strategy = rte_le_to_cpu_16(resp->vf_reservation_strategy); if (bp->vf_resv_strategy > HWRM_FUNC_RESOURCE_QCAPS_OUTPUT_VF_RESV_STRATEGY_MINIMAL_STATIC)

From patchwork Mon Dec 4 18:37:10 2023
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 134828
From: Ajit Khaparde
To: dev@dpdk.org
Subject: [PATCH 14/14] net/bnxt: add support for v3 Rx completion
Date: Mon, 4 Dec 2023 10:37:10 -0800
Message-Id: <20231204183710.86921-15-ajit.khaparde@broadcom.com>
X-Mailer: git-send-email 2.39.2
(Apple Git-143) In-Reply-To: <20231204183710.86921-1-ajit.khaparde@broadcom.com> References: <20231204183710.86921-1-ajit.khaparde@broadcom.com>

P7 devices support the newer Rx completion version. This Rx completion, though similar to the previous generation's, provides some extra information for flow offload scenarios apart from the normal information. Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt_rxr.c | 87 ++++++++++++++++++++++++++++++++++- drivers/net/bnxt/bnxt_rxr.h | 92 +++++++++++++++++++++++++++++++++++++ 2 files changed, 177 insertions(+), 2 deletions(-) diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c index 9d45065f28..59ea0121de 100644 --- a/drivers/net/bnxt/bnxt_rxr.c +++ b/drivers/net/bnxt/bnxt_rxr.c @@ -553,6 +553,41 @@ bnxt_parse_pkt_type(struct rx_pkt_cmpl *rxcmp, struct rx_pkt_cmpl_hi *rxcmp1) return bnxt_ptype_table[index]; } +static void +bnxt_parse_pkt_type_v3(struct rte_mbuf *mbuf, + struct rx_pkt_cmpl *rxcmp_v1, + struct rx_pkt_cmpl_hi *rxcmp1_v1) +{ + uint32_t flags_type, flags2, meta; + struct rx_pkt_v3_cmpl_hi *rxcmp1; + struct rx_pkt_v3_cmpl *rxcmp; + uint8_t index; + + rxcmp = (void *)rxcmp_v1; + rxcmp1 = (void *)rxcmp1_v1; + + flags_type = rte_le_to_cpu_16(rxcmp->flags_type); + flags2 = rte_le_to_cpu_32(rxcmp1->flags2); + meta = rte_le_to_cpu_32(rxcmp->metadata1_payload_offset); + + /* TODO */ + /* Validate ptype table indexing at build time. */ + /* bnxt_check_ptype_constants_v3(); */ + + /* + * Index format: + * bit 0: Set if IP tunnel encapsulated packet. + * bit 1: Set if IPv6 packet, clear if IPv4. + * bit 2: Set if VLAN tag present. + * bits 3-6: Four-bit hardware packet type field.
+ */
+	index = BNXT_CMPL_V3_ITYPE_TO_IDX(flags_type) |
+		BNXT_CMPL_V3_VLAN_TO_IDX(meta) |
+		BNXT_CMPL_V3_IP_VER_TO_IDX(flags2);
+
+	mbuf->packet_type = bnxt_ptype_table[index];
+}
+
 static void __rte_cold
 bnxt_init_ol_flags_tables(struct bnxt_rx_queue *rxq)
 {
@@ -716,6 +751,43 @@ bnxt_get_rx_ts_p5(struct bnxt *bp, uint32_t rx_ts_cmpl)
 	ptp->rx_timestamp = pkt_time;
 }
 
+static uint32_t
+bnxt_ulp_set_mark_in_mbuf_v3(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1,
+			     struct rte_mbuf *mbuf, uint32_t *vfr_flag)
+{
+	struct rx_pkt_v3_cmpl_hi *rxcmp1_v3 = (void *)rxcmp1;
+	uint32_t flags2, meta, mark_id = 0;
+	/* revisit the usage of gfid/lfid if mark action is supported.
+	 * for now, only VFR is using mark and the metadata is the SVIF
+	 * (a small number)
+	 */
+	bool gfid = false;
+	int rc = 0;
+
+	flags2 = rte_le_to_cpu_32(rxcmp1_v3->flags2);
+
+	switch (flags2 & RX_PKT_V3_CMPL_HI_FLAGS2_META_FORMAT_MASK) {
+	case RX_PKT_V3_CMPL_HI_FLAGS2_META_FORMAT_CHDR_DATA:
+		/* Only supporting Metadata for ulp now */
+		meta = rxcmp1_v3->metadata2;
+		break;
+	default:
+		goto skip_mark;
+	}
+
+	rc = ulp_mark_db_mark_get(bp->ulp_ctx, gfid, meta, vfr_flag, &mark_id);
+	if (!rc) {
+		/* Only supporting VFR for now, no Mark actions */
+		if (vfr_flag && *vfr_flag)
+			return mark_id;
+	}
+
+skip_mark:
+	mbuf->hash.fdir.hi = 0;
+
+	return 0;
+}
+
 static uint32_t
 bnxt_ulp_set_mark_in_mbuf(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1,
 			  struct rte_mbuf *mbuf, uint32_t *vfr_flag)
@@ -892,7 +964,8 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 		*rx_pkt = mbuf;
 		goto next_rx;
 	} else if ((cmp_type != CMPL_BASE_TYPE_RX_L2) &&
-		   (cmp_type != CMPL_BASE_TYPE_RX_L2_V2)) {
+		   (cmp_type != CMPL_BASE_TYPE_RX_L2_V2) &&
+		   (cmp_type != CMPL_BASE_TYPE_RX_L2_V3)) {
 		rc = -EINVAL;
 		goto next_rx;
 	}
@@ -929,6 +1002,16 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt,
 	    bp->ptp_all_rx_tstamp)
 		bnxt_get_rx_ts_p5(rxq->bp, rxcmp1->reorder);
 
+	if (cmp_type == CMPL_BASE_TYPE_RX_L2_V3) {
+		bnxt_parse_csum_v3(mbuf, rxcmp1);
+		bnxt_parse_pkt_type_v3(mbuf, rxcmp, rxcmp1);
+		bnxt_rx_vlan_v3(mbuf, rxcmp, rxcmp1);
+		if (BNXT_TRUFLOW_EN(bp))
+			mark_id = bnxt_ulp_set_mark_in_mbuf_v3(rxq->bp, rxcmp1,
+							       mbuf, &vfr_flag);
+		goto reuse_rx_mbuf;
+	}
+
 	if (cmp_type == CMPL_BASE_TYPE_RX_L2_V2) {
 		bnxt_parse_csum_v2(mbuf, rxcmp1);
 		bnxt_parse_pkt_type_v2(mbuf, rxcmp, rxcmp1);
@@ -1066,7 +1149,7 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			if (CMP_TYPE(rxcmp) == CMPL_BASE_TYPE_HWRM_DONE) {
 				PMD_DRV_LOG(ERR, "Rx flush done\n");
 			} else if ((CMP_TYPE(rxcmp) >= CMPL_BASE_TYPE_RX_TPA_START_V2) &&
-				   (CMP_TYPE(rxcmp) <= RX_TPA_V2_ABUF_CMPL_TYPE_RX_TPA_AGG)) {
+				   (CMP_TYPE(rxcmp) <= CMPL_BASE_TYPE_RX_TPA_START_V3)) {
 				rc = bnxt_rx_pkt(&rx_pkts[nb_rx_pkts], rxq, &raw_cons);
 				if (!rc)
 					nb_rx_pkts++;
diff --git a/drivers/net/bnxt/bnxt_rxr.h b/drivers/net/bnxt/bnxt_rxr.h
index af53bc0c25..439d29a07f 100644
--- a/drivers/net/bnxt/bnxt_rxr.h
+++ b/drivers/net/bnxt/bnxt_rxr.h
@@ -386,4 +386,96 @@ bnxt_parse_pkt_type_v2(struct rte_mbuf *mbuf,
 	mbuf->packet_type = pkt_type;
 }
 
+
+/* Thor2 specific code for RX completion parsing */
+#define RX_PKT_V3_CMPL_FLAGS2_IP_TYPE_SFT	8
+#define RX_PKT_V3_CMPL_METADATA1_VALID_SFT	15
+
+#define BNXT_CMPL_V3_ITYPE_TO_IDX(ft)	\
+	(((ft) & RX_PKT_V3_CMPL_FLAGS_ITYPE_MASK) >>	\
+	 (RX_PKT_V3_CMPL_FLAGS_ITYPE_SFT - BNXT_PTYPE_TBL_TYPE_SFT))
+
+#define BNXT_CMPL_V3_VLAN_TO_IDX(meta)	\
+	(((meta) & (1 << RX_PKT_V3_CMPL_METADATA1_VALID_SFT)) >>	\
+	 (RX_PKT_V3_CMPL_METADATA1_VALID_SFT - BNXT_PTYPE_TBL_VLAN_SFT))
+
+#define BNXT_CMPL_V3_IP_VER_TO_IDX(f2)	\
+	(((f2) & RX_PKT_V3_CMPL_HI_FLAGS2_IP_TYPE) >>	\
+	 (RX_PKT_V3_CMPL_FLAGS2_IP_TYPE_SFT - BNXT_PTYPE_TBL_IP_VER_SFT))
+
+#define RX_CMP_V3_VLAN_VALID(rxcmp)	\
+	(((struct rx_pkt_v3_cmpl *)rxcmp)->metadata1_payload_offset &	\
+	 RX_PKT_V3_CMPL_METADATA1_VALID)
+
+#define RX_CMP_V3_METADATA0_VID(rxcmp1)	\
+	((((struct rx_pkt_v3_cmpl_hi *)rxcmp1)->metadata0) &	\
+	 (RX_PKT_V3_CMPL_HI_METADATA0_VID_MASK |	\
+	  RX_PKT_V3_CMPL_HI_METADATA0_DE |	\
+	  RX_PKT_V3_CMPL_HI_METADATA0_PRI_MASK))
+
+static inline void bnxt_rx_vlan_v3(struct rte_mbuf *mbuf,
+				   struct rx_pkt_cmpl *rxcmp,
+				   struct rx_pkt_cmpl_hi *rxcmp1)
+{
+	if (RX_CMP_V3_VLAN_VALID(rxcmp)) {
+		mbuf->vlan_tci = RX_CMP_V3_METADATA0_VID(rxcmp1);
+		mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED;
+	}
+}
+
+#define RX_CMP_V3_L4_CS_ERR(err)	\
+	(((err) & RX_PKT_CMPL_ERRORS_MASK)	\
+	 & (RX_PKT_CMPL_ERRORS_L4_CS_ERROR))
+#define RX_CMP_V3_L3_CS_ERR(err)	\
+	(((err) & RX_PKT_CMPL_ERRORS_MASK)	\
+	 & (RX_PKT_CMPL_ERRORS_IP_CS_ERROR))
+#define RX_CMP_V3_T_IP_CS_ERR(err)	\
+	(((err) & RX_PKT_CMPL_ERRORS_MASK)	\
+	 & (RX_PKT_CMPL_ERRORS_T_IP_CS_ERROR))
+#define RX_CMP_V3_T_L4_CS_ERR(err)	\
+	(((err) & RX_PKT_CMPL_ERRORS_MASK)	\
+	 & (RX_PKT_CMPL_ERRORS_T_L4_CS_ERROR))
+#define RX_PKT_CMPL_CALC	\
+	(RX_PKT_CMPL_FLAGS2_IP_CS_CALC |	\
+	 RX_PKT_CMPL_FLAGS2_L4_CS_CALC |	\
+	 RX_PKT_CMPL_FLAGS2_T_IP_CS_CALC |	\
+	 RX_PKT_CMPL_FLAGS2_T_L4_CS_CALC)
+
+static inline uint64_t
+bnxt_parse_csum_fields_v3(uint32_t flags2, uint32_t error_v2)
+{
+	uint64_t ol_flags = 0;
+
+	if (flags2 & RX_PKT_CMPL_CALC) {
+		if (unlikely(RX_CMP_V3_L4_CS_ERR(error_v2)))
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD;
+		else
+			ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+		if (unlikely(RX_CMP_V3_L3_CS_ERR(error_v2)))
+			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD;
+		if (unlikely(RX_CMP_V3_T_L4_CS_ERR(error_v2)))
+			ol_flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD;
+		else
+			ol_flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD;
+		if (unlikely(RX_CMP_V3_T_IP_CS_ERR(error_v2)))
+			ol_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD;
+		if (!(ol_flags & (RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD)))
+			ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD;
+	} else {
+		/* Unknown is defined as 0 for all packets types hence using below for all */
+		ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN;
+	}
+	return ol_flags;
+}
+
+static inline void
+bnxt_parse_csum_v3(struct rte_mbuf *mbuf, struct rx_pkt_cmpl_hi *rxcmp1)
+{
+	struct rx_pkt_v3_cmpl_hi *v3_cmp =
+		(struct rx_pkt_v3_cmpl_hi *)(rxcmp1);
+	uint16_t error_v2 = rte_le_to_cpu_16(v3_cmp->errors_v2);
+	uint32_t flags2 = rte_le_to_cpu_32(v3_cmp->flags2);
+
+	mbuf->ol_flags |= bnxt_parse_csum_fields_v3(flags2, error_v2);
+}
 #endif /* _BNXT_RXR_H_ */