From patchwork Mon Dec 11 17:10:56 2023
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 135028
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Ajit Khaparde
To: dev@dpdk.org
Cc: Damodharam Ammepalli
Subject: [PATCH v3 01/14] net/bnxt: refactor epoch setting
Date: Mon, 11 Dec 2023 09:10:56 -0800
Message-Id: <20231211171109.89716-2-ajit.khaparde@broadcom.com>
In-Reply-To: <20231211171109.89716-1-ajit.khaparde@broadcom.com>
References: <20231211171109.89716-1-ajit.khaparde@broadcom.com>

Fix epoch bit setting when we ring the doorbell.
The epoch bit needs to toggle between 0 and 1 every time the ring
indices wrap. Currently its value is anything but an alternating
0 and 1. Also remove the now-unnecessary db_epoch_shift field from
the bnxt_db_info structure.

Signed-off-by: Ajit Khaparde
Reviewed-by: Damodharam Ammepalli
---
 drivers/net/bnxt/bnxt_cpr.h  | 5 ++---
 drivers/net/bnxt/bnxt_ring.c | 9 ++-------
 2 files changed, 4 insertions(+), 10 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_cpr.h b/drivers/net/bnxt/bnxt_cpr.h
index 2de154322d..26e81a6a7e 100644
--- a/drivers/net/bnxt/bnxt_cpr.h
+++ b/drivers/net/bnxt/bnxt_cpr.h
@@ -53,11 +53,10 @@ struct bnxt_db_info {
 	bool db_64;
 	uint32_t db_ring_mask;
 	uint32_t db_epoch_mask;
-	uint32_t db_epoch_shift;
 };
 
-#define DB_EPOCH(db, idx)	(((idx) & (db)->db_epoch_mask) << \
-				 ((db)->db_epoch_shift))
+#define DB_EPOCH(db, idx)	(!!((idx) & (db)->db_epoch_mask) << \
+				 DBR_EPOCH_SFT)
 #define DB_RING_IDX(db, idx)	(((idx) & (db)->db_ring_mask) | \
 				 DB_EPOCH(db, idx))
 
diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c
index 34b2510d54..6dacb1b37f 100644
--- a/drivers/net/bnxt/bnxt_ring.c
+++ b/drivers/net/bnxt/bnxt_ring.c
@@ -371,9 +371,10 @@ static void bnxt_set_db(struct bnxt *bp,
 		db->db_key64 = DBR_PATH_L2;
 		break;
 	}
-	if (BNXT_CHIP_SR2(bp)) {
+	if (BNXT_CHIP_P7(bp)) {
 		db->db_key64 |= DBR_VALID;
 		db_offset = bp->legacy_db_size;
+		db->db_epoch_mask = ring_mask + 1;
 	} else if (BNXT_VF(bp)) {
 		db_offset = DB_VF_OFFSET;
 	}
@@ -397,12 +398,6 @@ static void bnxt_set_db(struct bnxt *bp,
 		db->db_64 = false;
 	}
 	db->db_ring_mask = ring_mask;
-
-	if (BNXT_CHIP_SR2(bp)) {
-		db->db_epoch_mask = db->db_ring_mask + 1;
-		db->db_epoch_shift = DBR_EPOCH_SFT -
-					rte_log2_u32(db->db_epoch_mask);
-	}
 }
 
 static int bnxt_alloc_cmpl_ring(struct bnxt *bp, int queue_index,
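
Not part of the patch: a minimal standalone sketch of how the reworked
DB_EPOCH()/DB_RING_IDX() macros behave once bnxt_set_db() programs
db_epoch_mask to ring_mask + 1. RING_SIZE and DBR_EPOCH_SFT_DEMO below are
made-up illustrative values, not the driver's.

#include <stdint.h>
#include <stdio.h>

#define RING_SIZE          8u              /* illustrative power-of-2 ring size */
#define RING_MASK          (RING_SIZE - 1) /* plays the role of db_ring_mask */
#define EPOCH_MASK         RING_SIZE       /* db_epoch_mask = ring_mask + 1 */
#define DBR_EPOCH_SFT_DEMO 24              /* stand-in for the real DBR_EPOCH_SFT */

/* Same shape as the patched macros above. */
#define DEMO_DB_EPOCH(idx)    ((uint32_t)!!((idx) & EPOCH_MASK) << DBR_EPOCH_SFT_DEMO)
#define DEMO_DB_RING_IDX(idx) (((idx) & RING_MASK) | DEMO_DB_EPOCH(idx))

int main(void)
{
	/* The producer index is free running; the caller never masks it. */
	for (unsigned int idx = 0; idx < 4 * RING_SIZE; idx++)
		printf("idx=%2u ring_idx=%u epoch=%u\n", idx,
		       DEMO_DB_RING_IDX(idx) & RING_MASK,
		       DEMO_DB_RING_IDX(idx) >> DBR_EPOCH_SFT_DEMO);
	/* epoch prints 0 for idx 0..7, 1 for 8..15, 0 for 16..23, 1 for 24..31 */
	return 0;
}

Because bit log2(ring size) of the unmasked producer index flips exactly once
per ring wrap, normalizing it with '!!' and shifting it to the doorbell's
epoch position yields the alternating 0/1 value the hardware expects.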
From patchwork Mon Dec 11 17:10:57 2023
X-Patchwork-Submitter: Ajit Khaparde
X-Patchwork-Id: 135029
X-Patchwork-Delegate: ajit.khaparde@broadcom.com
From: Ajit Khaparde
To: dev@dpdk.org
Subject: [PATCH v3 02/14] net/bnxt: update HWRM API
Date: Mon, 11 Dec 2023 09:10:57 -0800
Message-Id: <20231211171109.89716-3-ajit.khaparde@broadcom.com>
In-Reply-To: <20231211171109.89716-1-ajit.khaparde@broadcom.com>
References: <20231211171109.89716-1-ajit.khaparde@broadcom.com>

Update HWRM API to version 1.10.2.158

Signed-off-by: Ajit Khaparde
---
 drivers/net/bnxt/bnxt_hwrm.c           |    3 -
 drivers/net/bnxt/hsi_struct_def_dpdk.h | 1531 ++++++++++++++++++++++--
 2 files changed, 1429 insertions(+), 105 deletions(-)

diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c
index 06f196760f..0a31b984e6 100644
--- a/drivers/net/bnxt/bnxt_hwrm.c
+++ b/drivers/net/bnxt/bnxt_hwrm.c
@@ -5175,9 +5175,6 @@ int bnxt_hwrm_set_ntuple_filter(struct bnxt *bp,
 	if (enables &
 	    HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_DST_PORT_MASK)
 		req.dst_port_mask = rte_cpu_to_le_16(filter->dst_port_mask);
-	if (enables &
-	    HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_MIRROR_VNIC_ID)
-		req.mirror_vnic_id = filter->mirror_vnic_id;
 
 	req.enables = rte_cpu_to_le_32(enables);
 
diff --git a/drivers/net/bnxt/hsi_struct_def_dpdk.h b/drivers/net/bnxt/hsi_struct_def_dpdk.h
index 9afdd056ce..65f3f0576b 100644
--- a/drivers/net/bnxt/hsi_struct_def_dpdk.h
+++ b/drivers/net/bnxt/hsi_struct_def_dpdk.h
@@ -1154,8 +1154,8 @@ struct hwrm_err_output {
 #define HWRM_VERSION_MINOR 10
 #define HWRM_VERSION_UPDATE 2
 /* non-zero means beta version */
-#define HWRM_VERSION_RSVD 138
-#define HWRM_VERSION_STR "1.10.2.138"
+#define HWRM_VERSION_RSVD 158
+#define HWRM_VERSION_STR "1.10.2.158"
 /****************
  * hwrm_ver_get *
  ****************/
@@ -6329,19 +6329,14 @@ struct rx_pkt_v3_cmpl_hi {
 	#define RX_PKT_V3_CMPL_HI_ERRORS_T_PKT_ERROR_T_L3_BAD_TTL \
 		(UINT32_C(0x5) << 9)
 	/*
-	 * Indicates that the IP checksum failed its check in the tunnel
+	 * Indicates that the physical packet is shorter than that claimed
+	 * by the tunnel header length. Valid for GTPv1-U packets.
 	 * header.
*/ - #define RX_PKT_V3_CMPL_HI_ERRORS_T_PKT_ERROR_T_IP_CS_ERROR \ + #define RX_PKT_V3_CMPL_HI_ERRORS_T_PKT_ERROR_T_TOTAL_ERROR \ (UINT32_C(0x6) << 9) - /* - * Indicates that the L4 checksum failed its check in the tunnel - * header. - */ - #define RX_PKT_V3_CMPL_HI_ERRORS_T_PKT_ERROR_T_L4_CS_ERROR \ - (UINT32_C(0x7) << 9) #define RX_PKT_V3_CMPL_HI_ERRORS_T_PKT_ERROR_LAST \ - RX_PKT_V3_CMPL_HI_ERRORS_T_PKT_ERROR_T_L4_CS_ERROR + RX_PKT_V3_CMPL_HI_ERRORS_T_PKT_ERROR_T_TOTAL_ERROR /* * This indicates that there was an error in the inner * portion of the packet when this @@ -6406,20 +6401,8 @@ struct rx_pkt_v3_cmpl_hi { */ #define RX_PKT_V3_CMPL_HI_ERRORS_PKT_ERROR_L4_BAD_OPT_LEN \ (UINT32_C(0x8) << 12) - /* - * Indicates that the IP checksum failed its check in the - * inner header. - */ - #define RX_PKT_V3_CMPL_HI_ERRORS_PKT_ERROR_IP_CS_ERROR \ - (UINT32_C(0x9) << 12) - /* - * Indicates that the L4 checksum failed its check in the - * inner header. - */ - #define RX_PKT_V3_CMPL_HI_ERRORS_PKT_ERROR_L4_CS_ERROR \ - (UINT32_C(0xa) << 12) #define RX_PKT_V3_CMPL_HI_ERRORS_PKT_ERROR_LAST \ - RX_PKT_V3_CMPL_HI_ERRORS_PKT_ERROR_L4_CS_ERROR + RX_PKT_V3_CMPL_HI_ERRORS_PKT_ERROR_L4_BAD_OPT_LEN /* * This is data from the CFA block as indicated by the meta_format * field. @@ -14157,7 +14140,7 @@ struct hwrm_func_qcaps_input { uint8_t unused_0[6]; } __rte_packed; -/* hwrm_func_qcaps_output (size:896b/112B) */ +/* hwrm_func_qcaps_output (size:1088b/136B) */ struct hwrm_func_qcaps_output { /* The specific error status for the command. */ uint16_t error_code; @@ -14840,9 +14823,85 @@ struct hwrm_func_qcaps_output { /* * When this bit is '1', it indicates that the hardware based * link aggregation group (L2 and RoCE) feature is supported. + * This LAG feature is only supported on the THOR2 or newer NIC + * with multiple ports. */ #define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_HW_LAG_SUPPORTED \ UINT32_C(0x400) + /* + * When this bit is '1', it indicates all contexts can be stored + * on chip instead of using host based backing store memory. + */ + #define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_ON_CHIP_CTX_SUPPORTED \ + UINT32_C(0x800) + /* + * When this bit is '1', it indicates that the HW supports + * using a steering tag in the memory transactions targeting + * L2 or RoCE ring resources. + * Steering Tags are system-specific values that must follow the + * encoding requirements of the hardware platform. On devices that + * support steering to multiple address domains, a value of 0 in + * bit 0 of the steering tag specifies the address is associated + * with the SOC address space, and a value of 1 indicates the + * address is associated with the host address space. + */ + #define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_STEERING_TAG_SUPPORTED \ + UINT32_C(0x1000) + /* + * When this bit is '1', it indicates that driver can enable + * support for an enhanced VF scale. + */ + #define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_ENHANCED_VF_SCALE_SUPPORTED \ + UINT32_C(0x2000) + /* + * When this bit is '1', it indicates that FW is capable of + * supporting partition based XID management for KTLS/QUIC + * Tx/Rx Key Context types. + */ + #define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_KEY_XID_PARTITION_SUPPORTED \ + UINT32_C(0x4000) + /* + * This bit is only valid on the condition that both + * “ktls_supported” and “quic_supported” flags are set. When this + * bit is valid, it conveys information below: + * 1. If it is set to ‘1’, it indicates that the firmware allows the + * driver to run KTLS and QUIC concurrently; + * 2. 
If it is cleared to ‘0’, it indicates that the driver has to + * make sure all crypto connections on all functions are of the + * same type, i.e., either KTLS or QUIC. + */ + #define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_CONCURRENT_KTLS_QUIC_SUPPORTED \ + UINT32_C(0x8000) + /* + * When this bit is '1', it indicates that the device supports + * setting a cross TC cap on a scheduler queue. + */ + #define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_SCHQ_CROSS_TC_CAP_SUPPORTED \ + UINT32_C(0x10000) + /* + * When this bit is '1', it indicates that the device supports + * setting a per TC cap on a scheduler queue. + */ + #define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_SCHQ_PER_TC_CAP_SUPPORTED \ + UINT32_C(0x20000) + /* + * When this bit is '1', it indicates that the device supports + * setting a per TC reservation on a scheduler queues. + */ + #define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_SCHQ_PER_TC_RESERVATION_SUPPORTED \ + UINT32_C(0x40000) + /* + * When this bit is '1', it indicates that firmware supports query + * for statistics related to invalid doorbell errors and drops. + */ + #define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_DB_ERROR_STATS_SUPPORTED \ + UINT32_C(0x80000) + /* + * When this bit is '1', it indicates that the device supports + * VF RoCE resource management. + */ + #define HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT2_ROCE_VF_RESOURCE_MGMT_SUPPORTED \ + UINT32_C(0x100000) uint16_t tunnel_disable_flag; /* * When this bit is '1', it indicates that the VXLAN parsing @@ -14892,7 +14951,35 @@ struct hwrm_func_qcaps_output { */ #define HWRM_FUNC_QCAPS_OUTPUT_TUNNEL_DISABLE_FLAG_DISABLE_PPPOE \ UINT32_C(0x80) - uint8_t unused_1[2]; + uint16_t xid_partition_cap; + /* + * When this bit is '1', it indicates that FW is capable of + * supporting partition based XID management for KTLS TX + * key contexts. + */ + #define HWRM_FUNC_QCAPS_OUTPUT_XID_PARTITION_CAP_KTLS_TKC \ + UINT32_C(0x1) + /* + * When this bit is '1', it indicates that FW is capable of + * supporting partition based XID management for KTLS RX + * key contexts. + */ + #define HWRM_FUNC_QCAPS_OUTPUT_XID_PARTITION_CAP_KTLS_RKC \ + UINT32_C(0x2) + /* + * When this bit is '1', it indicates that FW is capable of + * supporting partition based XID management for QUIC TX + * key contexts. + */ + #define HWRM_FUNC_QCAPS_OUTPUT_XID_PARTITION_CAP_QUIC_TKC \ + UINT32_C(0x4) + /* + * When this bit is '1', it indicates that FW is capable of + * supporting partition based XID management for QUIC RX + * key contexts. + */ + #define HWRM_FUNC_QCAPS_OUTPUT_XID_PARTITION_CAP_QUIC_RKC \ + UINT32_C(0x8) /* * This value uniquely identifies the hardware NIC used by the * function. The value returned will be the same for all functions. @@ -14901,7 +14988,55 @@ struct hwrm_func_qcaps_output { * PCIe Capability Device Serial Number. */ uint8_t device_serial_number[8]; - uint8_t unused_2[7]; + /* + * This field is only valid in the XID partition mode. It indicates + * the number contexts per partition. + */ + uint16_t ctxs_per_partition; + uint8_t unused_2[2]; + /* + * The maximum number of address vectors that may be allocated across + * all VFs for the function. This is valid only on the PF with VF RoCE + * (SR-IOV) enabled. Returns zero if this command is called on a PF + * with VF RoCE (SR-IOV) disabled or on a VF. + */ + uint32_t roce_vf_max_av; + /* + * The maximum number of completion queues that may be allocated across + * all VFs for the function. This is valid only on the PF with VF RoCE + * (SR-IOV) enabled. 
Returns zero if this command is called on a PF + * with VF RoCE (SR-IOV) disabled or on a VF. + */ + uint32_t roce_vf_max_cq; + /* + * The maximum number of memory regions plus memory windows that may be + * allocated across all VFs for the function. This is valid only on the + * PF with VF RoCE (SR-IOV) enabled. Returns zero if this command is + * called on a PF with VF RoCE (SR-IOV) disabled or on a VF. + */ + uint32_t roce_vf_max_mrw; + /* + * The maximum number of queue pairs that may be allocated across + * all VFs for the function. This is valid only on the PF with VF RoCE + * (SR-IOV) enabled. Returns zero if this command is called on a PF + * with VF RoCE (SR-IOV) disabled or on a VF. + */ + uint32_t roce_vf_max_qp; + /* + * The maximum number of shared receive queues that may be allocated + * across all VFs for the function. This is valid only on the PF with + * VF RoCE (SR-IOV) enabled. Returns zero if this command is called on + * a PF with VF RoCE (SR-IOV) disabled or on a VF. + */ + uint32_t roce_vf_max_srq; + /* + * The maximum number of GIDs that may be allocated across all VFs for + * the function. This is valid only on the PF with VF RoCE (SR-IOV) + * enabled. Returns zero if this command is called on a PF with VF RoCE + * (SR-IOV) disabled or on a VF. + */ + uint32_t roce_vf_max_gid; + uint8_t unused_3[3]; /* * This field is used in Output records to indicate that the output * is completely written to RAM. This field should be read as '1' @@ -14959,7 +15094,7 @@ struct hwrm_func_qcfg_input { uint8_t unused_0[6]; } __rte_packed; -/* hwrm_func_qcfg_output (size:1024b/128B) */ +/* hwrm_func_qcfg_output (size:1280b/160B) */ struct hwrm_func_qcfg_output { /* The specific error status for the command. */ uint16_t error_code; @@ -15604,11 +15739,68 @@ struct hwrm_func_qcfg_output { */ uint16_t port_kdnet_fid; uint8_t unused_5[2]; - /* Number of Tx Key Contexts allocated. */ - uint32_t alloc_tx_key_ctxs; - /* Number of Rx Key Contexts allocated. */ - uint32_t alloc_rx_key_ctxs; - uint8_t unused_6[7]; + /* Number of KTLS Tx Key Contexts allocated. */ + uint32_t num_ktls_tx_key_ctxs; + /* Number of KTLS Rx Key Contexts allocated. */ + uint32_t num_ktls_rx_key_ctxs; + /* + * The LAG idx of this function. The lag_id is per port and the + * valid lag_id is from 0 to 7, if there is no valid lag_id, + * 0xff will be returned. + * This HW lag id is used for Truflow programming only. + */ + uint8_t lag_id; + /* Partition interface for this function. */ + uint8_t parif; + /* + * The LAG ID of a hardware link aggregation group (LAG) whose + * member ports include the port of this function. The LAG was + * previously created using HWRM_FUNC_LAG_CREATE. If the port of this + * function is not a member of any LAG, the fw_lag_id will be 0xff. + */ + uint8_t fw_lag_id; + uint8_t unused_6; + /* Number of QUIC Tx Key Contexts allocated. */ + uint32_t num_quic_tx_key_ctxs; + /* Number of QUIC Rx Key Contexts allocated. */ + uint32_t num_quic_rx_key_ctxs; + /* + * Number of AVs per VF. Only valid for PF. This field is ignored + * when the flag, l2_vf_resource_mgmt, is not set in RoCE + * initialize_fw. + */ + uint32_t roce_max_av_per_vf; + /* + * Number of CQs per VF. Only valid for PF. This field is ignored when + * the flag, l2_vf_resource_mgmt, is not set in RoCE initialize_fw. + */ + uint32_t roce_max_cq_per_vf; + /* + * Number of MR/MWs per VF. Only valid for PF. This field is ignored + * when the flag, l2_vf_resource_mgmt, is not set in RoCE + * initialize_fw. 
+ */ + uint32_t roce_max_mrw_per_vf; + /* + * Number of QPs per VF. Only valid for PF. This field is ignored when + * the flag, l2_vf_resource_mgmt, is not set in RoCE initialize_fw. + */ + uint32_t roce_max_qp_per_vf; + /* + * Number of SRQs per VF. Only valid for PF. This field is ignored + * when the flag, l2_vf_resource_mgmt, is not set in RoCE + * initialize_fw. + */ + uint32_t roce_max_srq_per_vf; + /* + * Number of GIDs per VF. Only valid for PF. This field is ignored + * when the flag, l2_vf_resource_mgmt, is not set in RoCE + * initialize_fw. + */ + uint32_t roce_max_gid_per_vf; + /* Bitmap of context types that have XID partition enabled. */ + uint16_t xid_partition_cfg; + uint8_t unused_7; /* * This field is used in Output records to indicate that the output * is completely written to RAM. This field should be read as '1' @@ -15624,7 +15816,7 @@ struct hwrm_func_qcfg_output { *****************/ -/* hwrm_func_cfg_input (size:1024b/128B) */ +/* hwrm_func_cfg_input (size:1280b/160B) */ struct hwrm_func_cfg_input { /* The HWRM command request type. */ uint16_t req_type; @@ -15888,15 +16080,6 @@ struct hwrm_func_cfg_input { */ #define HWRM_FUNC_CFG_INPUT_FLAGS_BD_METADATA_DISABLE \ UINT32_C(0x40000000) - /* - * If this bit is set to 1, the driver is requesting FW to see if - * all the assets requested in this command (i.e. number of KTLS/ - * QUIC key contexts) are available. The firmware will return an - * error if the requested assets are not available. The firmware - * will NOT reserve the assets if they are available. - */ - #define HWRM_FUNC_CFG_INPUT_FLAGS_KEY_CTX_ASSETS_TEST \ - UINT32_C(0x80000000) uint32_t enables; /* * This bit must be '1' for the admin_mtu field to be @@ -16080,16 +16263,16 @@ struct hwrm_func_cfg_input { #define HWRM_FUNC_CFG_INPUT_ENABLES_HOST_MTU \ UINT32_C(0x20000000) /* - * This bit must be '1' for the number of Tx Key Contexts - * field to be configured. + * This bit must be '1' for the num_ktls_tx_key_ctxs field to be + * configured. */ - #define HWRM_FUNC_CFG_INPUT_ENABLES_TX_KEY_CTXS \ + #define HWRM_FUNC_CFG_INPUT_ENABLES_KTLS_TX_KEY_CTXS \ UINT32_C(0x40000000) /* - * This bit must be '1' for the number of Rx Key Contexts - * field to be configured. + * This bit must be '1' for the num_ktls_rx_key_ctxs field to be + * configured. */ - #define HWRM_FUNC_CFG_INPUT_ENABLES_RX_KEY_CTXS \ + #define HWRM_FUNC_CFG_INPUT_ENABLES_KTLS_RX_KEY_CTXS \ UINT32_C(0x80000000) /* * This field can be used by the admin PF to configure @@ -16542,19 +16725,93 @@ struct hwrm_func_cfg_input { * ring that is assigned to a function has a valid mtu. */ uint16_t host_mtu; - uint8_t unused_0[4]; + uint32_t flags2; + /* + * If this bit is set to 1, the driver is requesting the firmware + * to see if the assets (i.e., the number of KTLS key contexts) + * requested in this command are available. The firmware will return + * an error if the requested assets are not available. The firmware + * will NOT reserve the assets if they are available. + */ + #define HWRM_FUNC_CFG_INPUT_FLAGS2_KTLS_KEY_CTX_ASSETS_TEST \ + UINT32_C(0x1) + /* + * If this bit is set to 1, the driver is requesting the firmware + * to see if the assets (i.e., the number of QUIC key contexts) + * requested in this command are available. The firmware will return + * an error if the requested assets are not available. The firmware + * will NOT reserve the assets if they are available. 
+ */ + #define HWRM_FUNC_CFG_INPUT_FLAGS2_QUIC_KEY_CTX_ASSETS_TEST \ + UINT32_C(0x2) uint32_t enables2; /* * This bit must be '1' for the kdnet_mode field to be * configured. */ - #define HWRM_FUNC_CFG_INPUT_ENABLES2_KDNET UINT32_C(0x1) + #define HWRM_FUNC_CFG_INPUT_ENABLES2_KDNET \ + UINT32_C(0x1) /* * This bit must be '1' for the db_page_size field to be * configured. Legacy controller core FW may silently ignore * the db_page_size programming request through this command. */ - #define HWRM_FUNC_CFG_INPUT_ENABLES2_DB_PAGE_SIZE UINT32_C(0x2) + #define HWRM_FUNC_CFG_INPUT_ENABLES2_DB_PAGE_SIZE \ + UINT32_C(0x2) + /* + * This bit must be '1' for the num_quic_tx_key_ctxs field to be + * configured. + */ + #define HWRM_FUNC_CFG_INPUT_ENABLES2_QUIC_TX_KEY_CTXS \ + UINT32_C(0x4) + /* + * This bit must be '1' for the num_quic_rx_key_ctxs field to be + * configured. + */ + #define HWRM_FUNC_CFG_INPUT_ENABLES2_QUIC_RX_KEY_CTXS \ + UINT32_C(0x8) + /* + * This bit must be '1' for the roce_max_av_per_vf field to be + * configured. + */ + #define HWRM_FUNC_CFG_INPUT_ENABLES2_ROCE_MAX_AV_PER_VF \ + UINT32_C(0x10) + /* + * This bit must be '1' for the roce_max_cq_per_vf field to be + * configured. Only valid for PF. + */ + #define HWRM_FUNC_CFG_INPUT_ENABLES2_ROCE_MAX_CQ_PER_VF \ + UINT32_C(0x20) + /* + * This bit must be '1' for the roce_max_mrw_per_vf field to be + * configured. Only valid for PF. + */ + #define HWRM_FUNC_CFG_INPUT_ENABLES2_ROCE_MAX_MRW_PER_VF \ + UINT32_C(0x40) + /* + * This bit must be '1' for the roce_max_qp_per_vf field to be + * configured. + */ + #define HWRM_FUNC_CFG_INPUT_ENABLES2_ROCE_MAX_QP_PER_VF \ + UINT32_C(0x80) + /* + * This bit must be '1' for the roce_max_srq_per_vf field to be + * configured. Only valid for PF. + */ + #define HWRM_FUNC_CFG_INPUT_ENABLES2_ROCE_MAX_SRQ_PER_VF \ + UINT32_C(0x100) + /* + * This bit must be '1' for the roce_max_gid_per_vf field to be + * configured. Only valid for PF. + */ + #define HWRM_FUNC_CFG_INPUT_ENABLES2_ROCE_MAX_GID_PER_VF \ + UINT32_C(0x200) + /* + * This bit must be '1' for the xid_partition_cfg field to be + * configured. Only valid for PF. + */ + #define HWRM_FUNC_CFG_INPUT_ENABLES2_XID_PARTITION_CFG \ + UINT32_C(0x400) /* * KDNet mode for the port for this function. If NPAR is * also configured on this port, it takes precedence. KDNet @@ -16602,11 +16859,56 @@ struct hwrm_func_cfg_input { #define HWRM_FUNC_CFG_INPUT_DB_PAGE_SIZE_LAST \ HWRM_FUNC_CFG_INPUT_DB_PAGE_SIZE_4MB uint8_t unused_1[2]; - /* Number of Tx Key Contexts requested. */ - uint32_t num_tx_key_ctxs; - /* Number of Rx Key Contexts requested. */ - uint32_t num_rx_key_ctxs; - uint8_t unused_2[4]; + /* Number of KTLS Tx Key Contexts requested. */ + uint32_t num_ktls_tx_key_ctxs; + /* Number of KTLS Rx Key Contexts requested. */ + uint32_t num_ktls_rx_key_ctxs; + /* Number of QUIC Tx Key Contexts requested. */ + uint32_t num_quic_tx_key_ctxs; + /* Number of QUIC Rx Key Contexts requested. */ + uint32_t num_quic_rx_key_ctxs; + /* Number of AVs per VF. Only valid for PF. */ + uint32_t roce_max_av_per_vf; + /* Number of CQs per VF. Only valid for PF. */ + uint32_t roce_max_cq_per_vf; + /* Number of MR/MWs per VF. Only valid for PF. */ + uint32_t roce_max_mrw_per_vf; + /* Number of QPs per VF. Only valid for PF. */ + uint32_t roce_max_qp_per_vf; + /* Number of SRQs per VF. Only valid for PF. */ + uint32_t roce_max_srq_per_vf; + /* Number of GIDs per VF. Only valid for PF. 
*/ + uint32_t roce_max_gid_per_vf; + /* + * Bitmap of context kinds that have XID partition enabled. + * Only valid for PF. + */ + uint16_t xid_partition_cfg; + /* + * When this bit is '1', it indicates that driver enables XID + * partition on KTLS TX key contexts. + */ + #define HWRM_FUNC_CFG_INPUT_XID_PARTITION_CFG_KTLS_TKC \ + UINT32_C(0x1) + /* + * When this bit is '1', it indicates that driver enables XID + * partition on KTLS RX key contexts. + */ + #define HWRM_FUNC_CFG_INPUT_XID_PARTITION_CFG_KTLS_RKC \ + UINT32_C(0x2) + /* + * When this bit is '1', it indicates that driver enables XID + * partition on QUIC TX key contexts. + */ + #define HWRM_FUNC_CFG_INPUT_XID_PARTITION_CFG_QUIC_TKC \ + UINT32_C(0x4) + /* + * When this bit is '1', it indicates that driver enables XID + * partition on QUIC RX key contexts. + */ + #define HWRM_FUNC_CFG_INPUT_XID_PARTITION_CFG_QUIC_RKC \ + UINT32_C(0x8) + uint16_t unused_2; } __rte_packed; /* hwrm_func_cfg_output (size:128b/16B) */ @@ -22466,8 +22768,14 @@ struct hwrm_func_backing_store_cfg_v2_input { * which means "0" indicates the first instance. For backing * stores with single instance only, leave this field to 0. * 1. If the backing store type is MPC TQM ring, use the following - * instance value to MPC client mapping: + * instance value to map to MPC clients: * TCE (0), RCE (1), TE_CFA(2), RE_CFA (3), PRIMATE(4) + * 2. If the backing store type is TBL_SCOPE, use the following + * instance value to map to table scope regions: + * RE_CFA_LKUP (0), RE_CFA_ACT (1), TE_CFA_LKUP(2), TE_CFA_ACT (3) + * 3. If the backing store type is XID partition, use the following + * instance value to map to context types: + * KTLS_TKC (0), KTLS_RKC (1), QUIC_TKC (2), QUIC_RKC (3) */ uint16_t instance; /* Control flags. */ @@ -22578,7 +22886,8 @@ struct hwrm_func_backing_store_cfg_v2_input { * | SRQ | srq_split_entries | * | CQ | cq_split_entries | * | VINC | vnic_split_entries | - * | MRAV | marv_split_entries | + * | MRAV | mrav_split_entries | + * | TS | ts_split_entries | */ uint32_t split_entry_0; /* Split entry #1. */ @@ -22711,6 +23020,15 @@ struct hwrm_func_backing_store_qcfg_v2_input { * Instance of the backing store type. It is zero-based, * which means "0" indicates the first instance. For backing * stores with single instance only, leave this field to 0. + * 1. If the backing store type is MPC TQM ring, use the following + * instance value to map to MPC clients: + * TCE (0), RCE (1), TE_CFA(2), RE_CFA (3), PRIMATE(4) + * 2. If the backing store type is TBL_SCOPE, use the following + * instance value to map to table scope regions: + * RE_CFA_LKUP (0), RE_CFA_ACT (1), TE_CFA_LKUP(2), TE_CFA_ACT (3) + * 3. If the backing store type is XID partition, use the following + * instance value to map to context types: + * KTLS_TKC (0), KTLS_RKC (1), QUIC_TKC (2), QUIC_RKC (3) */ uint16_t instance; uint8_t rsvd[4]; @@ -22779,6 +23097,15 @@ struct hwrm_func_backing_store_qcfg_v2_output { * Instance of the backing store type. It is zero-based, * which means "0" indicates the first instance. For backing * stores with single instance only, leave this field to 0. + * 1. If the backing store type is MPC TQM ring, use the following + * instance value to map to MPC clients: + * TCE (0), RCE (1), TE_CFA(2), RE_CFA (3), PRIMATE(4) + * 2. If the backing store type is TBL_SCOPE, use the following + * instance value to map to table scope regions: + * RE_CFA_LKUP (0), RE_CFA_ACT (1), TE_CFA_LKUP(2), TE_CFA_ACT (3) + * 3. 
If the backing store type is XID partition, use the following + * instance value to map to context types: + * KTLS_TKC (0), KTLS_RKC (1), QUIC_TKC (2), QUIC_RKC (3) */ uint16_t instance; /* Control flags. */ @@ -22855,7 +23182,8 @@ struct hwrm_func_backing_store_qcfg_v2_output { * | SRQ | srq_split_entries | * | CQ | cq_split_entries | * | VINC | vnic_split_entries | - * | MRAV | marv_split_entries | + * | MRAV | mrav_split_entries | + * | TS | ts_split_entries | */ uint32_t split_entry_0; /* Split entry #1. */ @@ -22876,17 +23204,20 @@ struct hwrm_func_backing_store_qcfg_v2_output { uint8_t valid; } __rte_packed; -/* Common structure to cast QPC split entries. This casting is required in the following HWRM command inputs/outputs if the backing store type is QPC. 1. hwrm_func_backing_store_cfg_v2_input 2. hwrm_func_backing_store_qcfg_v2_output 3. hwrm_func_backing_store_qcaps_v2_output */ /* qpc_split_entries (size:128b/16B) */ struct qpc_split_entries { /* Number of L2 QP backing store entries. */ uint32_t qp_num_l2_entries; /* Number of QP1 entries. */ uint32_t qp_num_qp1_entries; - uint32_t rsvd[2]; + /* + * Number of RoCE QP context entries required for this + * function to support fast QP modify destroy feature. + */ + uint32_t qp_num_fast_qpmd_entries; + uint32_t rsvd; } __rte_packed; -/* Common structure to cast SRQ split entries. This casting is required in the following HWRM command inputs/outputs if the backing store type is SRQ. 1. hwrm_func_backing_store_cfg_v2_input 2. hwrm_func_backing_store_qcfg_v2_output 3. hwrm_func_backing_store_qcaps_v2_output */ /* srq_split_entries (size:128b/16B) */ struct srq_split_entries { /* Number of L2 SRQ backing store entries. */ @@ -22895,7 +23226,6 @@ struct srq_split_entries { uint32_t rsvd2[2]; } __rte_packed; -/* Common structure to cast CQ split entries. This casting is required in the following HWRM command inputs/outputs if the backing store type is CQ. 1. hwrm_func_backing_store_cfg_v2_input 2. hwrm_func_backing_store_qcfg_v2_output 3. hwrm_func_backing_store_qcaps_v2_output */ /* cq_split_entries (size:128b/16B) */ struct cq_split_entries { /* Number of L2 CQ backing store entries. */ @@ -22904,7 +23234,6 @@ struct cq_split_entries { uint32_t rsvd2[2]; } __rte_packed; -/* Common structure to cast VNIC split entries. This casting is required in the following HWRM command inputs/outputs if the backing store type is VNIC. 1. hwrm_func_backing_store_cfg_v2_input 2. hwrm_func_backing_store_qcfg_v2_output 3. hwrm_func_backing_store_qcaps_v2_output */ /* vnic_split_entries (size:128b/16B) */ struct vnic_split_entries { /* Number of VNIC backing store entries. */ @@ -22913,7 +23242,6 @@ struct vnic_split_entries { uint32_t rsvd2[2]; } __rte_packed; -/* Common structure to cast MRAV split entries. This casting is required in the following HWRM command inputs/outputs if the backing store type is MRAV. 1. hwrm_func_backing_store_cfg_v2_input 2. hwrm_func_backing_store_qcfg_v2_output 3. hwrm_func_backing_store_qcaps_v2_output */ /* mrav_split_entries (size:128b/16B) */ struct mrav_split_entries { /* Number of AV backing store entries. */ @@ -22922,6 +23250,21 @@ struct mrav_split_entries { uint32_t rsvd2[2]; } __rte_packed; +/* ts_split_entries (size:128b/16B) */ +struct ts_split_entries { + /* Max number of TBL_SCOPE region entries (QCAPS). */ + uint32_t region_num_entries; + /* tsid to configure (CFG). */ + uint8_t tsid; + /* + * Lkup static bucket count (power of 2). 
+ * Array is indexed by enum cfa_dir + */ + uint8_t lkup_static_bkt_cnt_exp[2]; + uint8_t rsvd; + uint32_t rsvd2[2]; +} __rte_packed; + /************************************ * hwrm_func_backing_store_qcaps_v2 * ************************************/ @@ -23112,12 +23455,36 @@ struct hwrm_func_backing_store_qcaps_v2_output { */ #define HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_FLAGS_DRIVER_MANAGED_MEMORY \ UINT32_C(0x4) + /* + * When set, it indicates the support of the following capability + * that is specific to the QP type: + * - For 2-port adapters, the ability to extend the RoCE QP + * entries configured on a PF, during some network events such as + * Link Down. These additional entries count is included in the + * advertised 'max_num_entries'. + * - The count of RoCE QP entries, derived from 'max_num_entries' + * (max_num_entries - qp_num_qp1_entries - qp_num_l2_entries - + * qp_num_fast_qpmd_entries, note qp_num_fast_qpmd_entries is + * always zero when QPs are pseudo-statically allocated), includes + * the count of QPs that can be migrated from the other PF (e.g., + * during network link down). Therefore, during normal operation + * when both PFs are active, the supported number of RoCE QPs for + * each of the PF is half of the advertised value. + */ + #define HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_FLAGS_ROCE_QP_PSEUDO_STATIC_ALLOC \ + UINT32_C(0x8) /* * Bit map of the valid instances associated with the * backing store type. * 1. If the backing store type is MPC TQM ring, use the following - * bit to MPC client mapping: + * bits to map to MPC clients: * TCE (0), RCE (1), TE_CFA(2), RE_CFA (3), PRIMATE(4) + * 2. If the backing store type is TBL_SCOPE, use the following + * bits to map to table scope regions: + * RE_CFA_LKUP (0), RE_CFA_ACT (1), TE_CFA_LKUP(2), TE_CFA_ACT (3) + * 3. If the backing store type is VF XID partition in-use table, use + * the following bits to map to context types: + * KTLS_TKC (0), KTLS_RKC (1), QUIC_TKC (2), QUIC_RKC (3) */ uint32_t instance_bit_map; /* @@ -23164,7 +23531,43 @@ struct hwrm_func_backing_store_qcaps_v2_output { * | 4 | All four split entries have valid data. | */ uint8_t subtype_valid_cnt; - uint8_t rsvd2; + /* + * Bitmap that indicates if each of the 'split_entry' denotes an + * exact count (i.e., min = max). When the exact count bit is set, + * it indicates the exact number of entries as advertised has to be + * configured. The 'split_entry' to be set to contain exact count by + * this bitmap needs to be a valid split entry specified by + * 'subtype_valid_cnt'. + */ + uint8_t exact_cnt_bit_map; + /* + * When this bit is '1', it indicates 'split_entry_0' contains + * an exact count. + */ + #define HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_EXACT_CNT_BIT_MAP_SPLIT_ENTRY_0_EXACT \ + UINT32_C(0x1) + /* + * When this bit is '1', it indicates 'split_entry_1' contains + * an exact count. + */ + #define HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_EXACT_CNT_BIT_MAP_SPLIT_ENTRY_1_EXACT \ + UINT32_C(0x2) + /* + * When this bit is '1', it indicates 'split_entry_2' contains + * an exact count. + */ + #define HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_EXACT_CNT_BIT_MAP_SPLIT_ENTRY_2_EXACT \ + UINT32_C(0x4) + /* + * When this bit is '1', it indicates 'split_entry_3' contains + * an exact count. 
+ */ + #define HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_EXACT_CNT_BIT_MAP_SPLIT_ENTRY_3_EXACT \ + UINT32_C(0x8) + #define HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_EXACT_CNT_BIT_MAP_UNUSED_MASK \ + UINT32_C(0xf0) + #define HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_EXACT_CNT_BIT_MAP_UNUSED_SFT \ + 4 /* * Split entry #0. Note that the four split entries (as a group) * must be cast to a type-specific data structure first before @@ -23176,7 +23579,8 @@ struct hwrm_func_backing_store_qcaps_v2_output { * | SRQ | srq_split_entries | * | CQ | cq_split_entries | * | VINC | vnic_split_entries | - * | MRAV | marv_split_entries | + * | MRAV | mrav_split_entries | + * | TS | ts_split_entries | */ uint32_t split_entry_0; /* Split entry #1. */ @@ -23471,7 +23875,9 @@ struct hwrm_func_dbr_pacing_qcfg_output { * dbr_throttling_aeq_arm_reg register. */ uint8_t dbr_throttling_aeq_arm_reg_val; - uint8_t unused_3[7]; + uint8_t unused_3[3]; + /* This field indicates the maximum depth of the doorbell FIFO. */ + uint32_t dbr_stat_db_max_fifo_depth; /* * Specifies primary function’s NQ ID. * A value of 0xFFFF FFFF indicates NQ ID is invalid. @@ -25128,7 +25534,7 @@ struct hwrm_func_spd_qcfg_output { *********************/ -/* hwrm_port_phy_cfg_input (size:448b/56B) */ +/* hwrm_port_phy_cfg_input (size:512b/64B) */ struct hwrm_port_phy_cfg_input { /* The HWRM command request type. */ uint16_t req_type; @@ -25505,6 +25911,18 @@ struct hwrm_port_phy_cfg_input { */ #define HWRM_PORT_PHY_CFG_INPUT_ENABLES_AUTO_PAM4_LINK_SPEED_MASK \ UINT32_C(0x1000) + /* + * This bit must be '1' for the force_link_speeds2 field to be + * configured. + */ + #define HWRM_PORT_PHY_CFG_INPUT_ENABLES_FORCE_LINK_SPEEDS2 \ + UINT32_C(0x2000) + /* + * This bit must be '1' for the auto_link_speeds2_mask field to + * be configured. + */ + #define HWRM_PORT_PHY_CFG_INPUT_ENABLES_AUTO_LINK_SPEEDS2_MASK \ + UINT32_C(0x4000) /* Port ID of port that is to be configured. */ uint16_t port_id; /* @@ -25808,7 +26226,99 @@ struct hwrm_port_phy_cfg_input { UINT32_C(0x2) #define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_PAM4_SPEED_MASK_200G \ UINT32_C(0x4) - uint8_t unused_2[2]; + /* + * This is the speed that will be used if the force_link_speeds2 + * bit is '1'. If unsupported speed is selected, an error + * will be generated. 
+ */ + uint16_t force_link_speeds2; + /* 1Gb link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_1GB \ + UINT32_C(0xa) + /* 10Gb link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_10GB \ + UINT32_C(0x64) + /* 25Gb link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_25GB \ + UINT32_C(0xfa) + /* 40Gb link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_40GB \ + UINT32_C(0x190) + /* 50Gb link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_50GB \ + UINT32_C(0x1f4) + /* 100Gb link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_100GB \ + UINT32_C(0x3e8) + /* 50Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_50GB_PAM4_56 \ + UINT32_C(0x1f5) + /* 100Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_100GB_PAM4_56 \ + UINT32_C(0x3e9) + /* 200Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_200GB_PAM4_56 \ + UINT32_C(0x7d1) + /* 400Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_400GB_PAM4_56 \ + UINT32_C(0xfa1) + /* 100Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_100GB_PAM4_112 \ + UINT32_C(0x3ea) + /* 200Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_200GB_PAM4_112 \ + UINT32_C(0x7d2) + /* 400Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_400GB_PAM4_112 \ + UINT32_C(0xfa2) + #define HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_LAST \ + HWRM_PORT_PHY_CFG_INPUT_FORCE_LINK_SPEEDS2_400GB_PAM4_112 + /* + * This is a mask of link speeds that will be used if + * auto_link_speeds2_mask bit in the "enables" field is 1. + * If unsupported speed is enabled an error will be generated. 
+ */ + uint16_t auto_link_speeds2_mask; + /* 1Gb link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_1GB \ + UINT32_C(0x1) + /* 10Gb link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_10GB \ + UINT32_C(0x2) + /* 25Gb link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_25GB \ + UINT32_C(0x4) + /* 40Gb link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_40GB \ + UINT32_C(0x8) + /* 50Gb link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_50GB \ + UINT32_C(0x10) + /* 100Gb link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_100GB \ + UINT32_C(0x20) + /* 50Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_50GB_PAM4_56 \ + UINT32_C(0x40) + /* 100Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_100GB_PAM4_56 \ + UINT32_C(0x80) + /* 200Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_200GB_PAM4_56 \ + UINT32_C(0x100) + /* 400Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_400GB_PAM4_56 \ + UINT32_C(0x200) + /* 100Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_100GB_PAM4_112 \ + UINT32_C(0x400) + /* 200Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_200GB_PAM4_112 \ + UINT32_C(0x800) + /* 400Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_CFG_INPUT_AUTO_LINK_SPEEDS2_MASK_400GB_PAM4_112 \ + UINT32_C(0x1000) + uint8_t unused_2[6]; } __rte_packed; /* hwrm_port_phy_cfg_output (size:128b/16B) */ @@ -25932,11 +26442,14 @@ struct hwrm_port_phy_qcfg_output { /* NRZ signaling */ #define HWRM_PORT_PHY_QCFG_OUTPUT_SIGNAL_MODE_NRZ \ UINT32_C(0x0) - /* PAM4 signaling */ + /* PAM4-56 signaling */ #define HWRM_PORT_PHY_QCFG_OUTPUT_SIGNAL_MODE_PAM4 \ UINT32_C(0x1) + /* PAM4-112 signaling */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SIGNAL_MODE_PAM4_112 \ + UINT32_C(0x2) #define HWRM_PORT_PHY_QCFG_OUTPUT_SIGNAL_MODE_LAST \ - HWRM_PORT_PHY_QCFG_OUTPUT_SIGNAL_MODE_PAM4 + HWRM_PORT_PHY_QCFG_OUTPUT_SIGNAL_MODE_PAM4_112 /* This value indicates the current active FEC mode. 
*/ #define HWRM_PORT_PHY_QCFG_OUTPUT_ACTIVE_FEC_MASK \ UINT32_C(0xf0) @@ -25992,6 +26505,8 @@ struct hwrm_port_phy_qcfg_output { #define HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_100GB UINT32_C(0x3e8) /* 200Gb link speed */ #define HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_200GB UINT32_C(0x7d0) + /* 400Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_400GB UINT32_C(0xfa0) /* 10Mb link speed */ #define HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_10MB UINT32_C(0xffff) #define HWRM_PORT_PHY_QCFG_OUTPUT_LINK_SPEED_LAST \ @@ -26446,8 +26961,56 @@ struct hwrm_port_phy_qcfg_output { /* 100G_BASEER2 */ #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_100G_BASEER2 \ UINT32_C(0x27) + /* 400G_BASECR */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_100G_BASECR \ + UINT32_C(0x28) + /* 100G_BASESR */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_100G_BASESR \ + UINT32_C(0x29) + /* 100G_BASELR */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_100G_BASELR \ + UINT32_C(0x2a) + /* 100G_BASEER */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_100G_BASEER \ + UINT32_C(0x2b) + /* 200G_BASECR2 */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_200G_BASECR2 \ + UINT32_C(0x2c) + /* 200G_BASESR2 */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_200G_BASESR2 \ + UINT32_C(0x2d) + /* 200G_BASELR2 */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_200G_BASELR2 \ + UINT32_C(0x2e) + /* 200G_BASEER2 */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_200G_BASEER2 \ + UINT32_C(0x2f) + /* 400G_BASECR8 */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_400G_BASECR8 \ + UINT32_C(0x30) + /* 200G_BASESR8 */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_400G_BASESR8 \ + UINT32_C(0x31) + /* 400G_BASELR8 */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_400G_BASELR8 \ + UINT32_C(0x32) + /* 400G_BASEER8 */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_400G_BASEER8 \ + UINT32_C(0x33) + /* 400G_BASECR4 */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_400G_BASECR4 \ + UINT32_C(0x34) + /* 400G_BASESR4 */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_400G_BASESR4 \ + UINT32_C(0x35) + /* 400G_BASELR4 */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_400G_BASELR4 \ + UINT32_C(0x36) + /* 400G_BASEER4 */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_400G_BASEER4 \ + UINT32_C(0x37) #define HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_LAST \ - HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_100G_BASEER2 + HWRM_PORT_PHY_QCFG_OUTPUT_PHY_TYPE_400G_BASEER4 /* This value represents a media type. */ uint8_t media_type; /* Unknown */ @@ -26855,6 +27418,12 @@ struct hwrm_port_phy_qcfg_output { */ #define HWRM_PORT_PHY_QCFG_OUTPUT_OPTION_FLAGS_SIGNAL_MODE_KNOWN \ UINT32_C(0x2) + /* + * When this bit is '1', speeds2 fields are used to get + * speed details. + */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_OPTION_FLAGS_SPEEDS2_SUPPORTED \ + UINT32_C(0x4) /* * Up to 16 bytes of null padded ASCII string representing * PHY vendor. @@ -26933,7 +27502,162 @@ struct hwrm_port_phy_qcfg_output { uint8_t link_down_reason; /* Remote fault */ #define HWRM_PORT_PHY_QCFG_OUTPUT_LINK_DOWN_REASON_RF UINT32_C(0x1) - uint8_t unused_0[7]; + /* + * The supported speeds for the port. This is a bit mask. + * For each speed that is supported, the corresponding + * bit will be set to '1'. 
This is valid only if speeds2_supported + * is set in option_flags + */ + uint16_t support_speeds2; + /* 1Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_1GB \ + UINT32_C(0x1) + /* 10Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_10GB \ + UINT32_C(0x2) + /* 25Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_25GB \ + UINT32_C(0x4) + /* 40Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_40GB \ + UINT32_C(0x8) + /* 50Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_50GB \ + UINT32_C(0x10) + /* 100Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_100GB \ + UINT32_C(0x20) + /* 50Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_50GB_PAM4_56 \ + UINT32_C(0x40) + /* 100Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_100GB_PAM4_56 \ + UINT32_C(0x80) + /* 200Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_200GB_PAM4_56 \ + UINT32_C(0x100) + /* 400Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_400GB_PAM4_56 \ + UINT32_C(0x200) + /* 100Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_100GB_PAM4_112 \ + UINT32_C(0x400) + /* 200Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_200GB_PAM4_112 \ + UINT32_C(0x800) + /* 400Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_400GB_PAM4_112 \ + UINT32_C(0x1000) + /* 800Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_SUPPORT_SPEEDS2_800GB_PAM4_112 \ + UINT32_C(0x2000) + /* + * Current setting of forced link speed. When the link speed is not + * being forced, this value shall be set to 0. + * This field is valid only if speeds2_supported is set in option_flags. 
+ */ + uint16_t force_link_speeds2; + /* 1Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_1GB \ + UINT32_C(0xa) + /* 10Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_10GB \ + UINT32_C(0x64) + /* 25Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_25GB \ + UINT32_C(0xfa) + /* 40Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_40GB \ + UINT32_C(0x190) + /* 50Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_50GB \ + UINT32_C(0x1f4) + /* 100Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_100GB \ + UINT32_C(0x3e8) + /* 50Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_50GB_PAM4_56 \ + UINT32_C(0x1f5) + /* 100Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_100GB_PAM4_56 \ + UINT32_C(0x3e9) + /* 200Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_200GB_PAM4_56 \ + UINT32_C(0x7d1) + /* 400Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_400GB_PAM4_56 \ + UINT32_C(0xfa1) + /* 100Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_100GB_PAM4_112 \ + UINT32_C(0x3ea) + /* 200Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_200GB_PAM4_112 \ + UINT32_C(0x7d2) + /* 400Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_400GB_PAM4_112 \ + UINT32_C(0xfa2) + /* 800Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_800GB_PAM4_112 \ + UINT32_C(0x1f42) + #define HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_LAST \ + HWRM_PORT_PHY_QCFG_OUTPUT_FORCE_LINK_SPEEDS2_800GB_PAM4_112 + /* + * Current setting of auto_link speed_mask that is used to advertise + * speeds during autonegotiation. + * This field is only valid when auto_mode is set to "mask". + * and if speeds2_supported is set in option_flags + * The speeds specified in this field shall be a subset of + * supported speeds on this port. 
+ */ + uint16_t auto_link_speeds2; + /* 1Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_1GB \ + UINT32_C(0x1) + /* 10Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_10GB \ + UINT32_C(0x2) + /* 25Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_25GB \ + UINT32_C(0x4) + /* 40Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_40GB \ + UINT32_C(0x8) + /* 50Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_50GB \ + UINT32_C(0x10) + /* 100Gb link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_100GB \ + UINT32_C(0x20) + /* 50Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_50GB_PAM4_56 \ + UINT32_C(0x40) + /* 100Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_100GB_PAM4_56 \ + UINT32_C(0x80) + /* 200Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_200GB_PAM4_56 \ + UINT32_C(0x100) + /* 400Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_400GB_PAM4_56 \ + UINT32_C(0x200) + /* 100Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_100GB_PAM4_112 \ + UINT32_C(0x400) + /* 200Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_200GB_PAM4_112 \ + UINT32_C(0x800) + /* 400Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_400GB_PAM4_112 \ + UINT32_C(0x1000) + /* 800Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCFG_OUTPUT_AUTO_LINK_SPEEDS2_800GB_PAM4_112 \ + UINT32_C(0x2000) + /* + * This field is indicate the number of lanes used to transfer + * data. If the link is down, the value is zero. + * This is valid only if speeds2_supported is set in option_flags. + */ + uint8_t active_lanes; /* * This field is used in Output records to indicate that the output * is completely written to RAM. This field should be read as '1' @@ -28381,7 +29105,7 @@ struct tx_port_stats_ext { } __rte_packed; /* Port Rx Statistics extended Format */ -/* rx_port_stats_ext (size:3776b/472B) */ +/* rx_port_stats_ext (size:3904b/488B) */ struct rx_port_stats_ext { /* Number of times link state changed to down */ uint64_t link_down_events; @@ -28462,8 +29186,9 @@ struct rx_port_stats_ext { /* The number of events where the port receive buffer was over 85% full */ uint64_t rx_buffer_passed_threshold; /* - * The number of symbol errors that wasn't corrected by FEC correction - * algorithm + * This counter represents uncorrected symbol errors post-FEC and may not + * be populated in all cases. Each uncorrected FEC block may result in + * one or more symbol errors. */ uint64_t rx_pcs_symbol_err; /* The number of corrected bits on the port according to active FEC */ @@ -28507,6 +29232,21 @@ struct rx_port_stats_ext { * FEC function in the PHY */ uint64_t rx_fec_uncorrectable_blocks; + /* + * Total number of packets that are dropped due to not matching + * any RX filter rules. This value is zero on the non supported + * controllers. This counter is per controller, Firmware reports the + * same value on active ports. This counter does not include the + * packet discards because of no available buffers. + */ + uint64_t rx_filter_miss; + /* + * This field represents the number of FEC symbol errors by counting + * once for each 10-bit symbol corrected by FEC block. 
+ * rx_fec_corrected_blocks will be incremented if all symbol errors in a + * codeword gets corrected. + */ + uint64_t rx_fec_symbol_err; } __rte_packed; /* @@ -29435,7 +30175,7 @@ struct hwrm_port_phy_qcaps_input { uint8_t unused_0[6]; } __rte_packed; -/* hwrm_port_phy_qcaps_output (size:256b/32B) */ +/* hwrm_port_phy_qcaps_output (size:320b/40B) */ struct hwrm_port_phy_qcaps_output { /* The specific error status for the command. */ uint16_t error_code; @@ -29725,6 +30465,13 @@ struct hwrm_port_phy_qcaps_output { */ #define HWRM_PORT_PHY_QCAPS_OUTPUT_FLAGS2_BANK_ADDR_SUPPORTED \ UINT32_C(0x4) + /* + * If set to 1, then this field indicates that + * supported_speed2 field is to be used in lieu of all + * supported_speed variants. + */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_FLAGS2_SPEEDS2_SUPPORTED \ + UINT32_C(0x8) /* * Number of internal ports for this device. This field allows the FW * to advertise how many internal ports are present. Manufacturing @@ -29733,6 +30480,108 @@ struct hwrm_port_phy_qcaps_output { * option "HPTN_MODE" is set to 1. */ uint8_t internal_port_cnt; + uint8_t unused_0; + /* + * This is a bit mask to indicate what speeds are supported + * as forced speeds on this link. + * For each speed that can be forced on this link, the + * corresponding mask bit shall be set to '1'. + * This field is valid only if speeds2_supported bit is set in flags2 + */ + uint16_t supported_speeds2_force_mode; + /* 1Gb link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_1GB \ + UINT32_C(0x1) + /* 10Gb link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_10GB \ + UINT32_C(0x2) + /* 25Gb link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_25GB \ + UINT32_C(0x4) + /* 40Gb link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_40GB \ + UINT32_C(0x8) + /* 50Gb link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_50GB \ + UINT32_C(0x10) + /* 100Gb link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_100GB \ + UINT32_C(0x20) + /* 50Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_50GB_PAM4_56 \ + UINT32_C(0x40) + /* 100Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_100GB_PAM4_56 \ + UINT32_C(0x80) + /* 200Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_200GB_PAM4_56 \ + UINT32_C(0x100) + /* 400Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_400GB_PAM4_56 \ + UINT32_C(0x200) + /* 100Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_100GB_PAM4_112 \ + UINT32_C(0x400) + /* 200Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_200GB_PAM4_112 \ + UINT32_C(0x800) + /* 400Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_400GB_PAM4_112 \ + UINT32_C(0x1000) + /* 800Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_FORCE_MODE_800GB_PAM4_112 \ + UINT32_C(0x2000) + /* + * This is a bit mask to indicate what speeds are supported + * for autonegotiation on this link. + * For each speed that can be autonegotiated on this link, the + * corresponding mask bit shall be set to '1'. 
+ * This field is valid only if speeds2_supported bit is set in flags2 + */ + uint16_t supported_speeds2_auto_mode; + /* 1Gb link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_1GB \ + UINT32_C(0x1) + /* 10Gb link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_10GB \ + UINT32_C(0x2) + /* 25Gb link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_25GB \ + UINT32_C(0x4) + /* 40Gb link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_40GB \ + UINT32_C(0x8) + /* 50Gb link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_50GB \ + UINT32_C(0x10) + /* 100Gb link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_100GB \ + UINT32_C(0x20) + /* 50Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_50GB_PAM4_56 \ + UINT32_C(0x40) + /* 100Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_100GB_PAM4_56 \ + UINT32_C(0x80) + /* 200Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_200GB_PAM4_56 \ + UINT32_C(0x100) + /* 400Gb (PAM4-56: 50G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_400GB_PAM4_56 \ + UINT32_C(0x200) + /* 100Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_100GB_PAM4_112 \ + UINT32_C(0x400) + /* 200Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_200GB_PAM4_112 \ + UINT32_C(0x800) + /* 400Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_400GB_PAM4_112 \ + UINT32_C(0x1000) + /* 800Gb (PAM4-112: 100G per lane) link speed */ + #define HWRM_PORT_PHY_QCAPS_OUTPUT_SUPPORTED_SPEEDS2_AUTO_MODE_800GB_PAM4_112 \ + UINT32_C(0x2000) + uint8_t unused_1[3]; /* * This field is used in Output records to indicate that the output * is completely written to RAM. This field should be read as '1' @@ -38132,6 +38981,9 @@ struct hwrm_vnic_qcaps_output { /* When this bit is '1' FW supports VNIC hash mode. */ #define HWRM_VNIC_QCAPS_OUTPUT_FLAGS_VNIC_RSS_HASH_MODE_CAP \ UINT32_C(0x10000000) + /* When this bit is set to '1', hardware supports tunnel TPA. */ + #define HWRM_VNIC_QCAPS_OUTPUT_FLAGS_HW_TUNNEL_TPA_CAP \ + UINT32_C(0x20000000) /* * This field advertises the maximum concurrent TPA aggregations * supported by the VNIC on new devices that support TPA v2 or v3. @@ -38154,7 +39006,7 @@ struct hwrm_vnic_qcaps_output { *********************/ -/* hwrm_vnic_tpa_cfg_input (size:320b/40B) */ +/* hwrm_vnic_tpa_cfg_input (size:384b/48B) */ struct hwrm_vnic_tpa_cfg_input { /* The HWRM command request type. */ uint16_t req_type; @@ -38276,6 +39128,12 @@ struct hwrm_vnic_tpa_cfg_input { #define HWRM_VNIC_TPA_CFG_INPUT_ENABLES_MAX_AGG_TIMER UINT32_C(0x4) /* deprecated bit. Do not use!!! */ #define HWRM_VNIC_TPA_CFG_INPUT_ENABLES_MIN_AGG_LEN UINT32_C(0x8) + /* + * This bit must be '1' for the tnl_tpa_en_bitmap field to be + * configured. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_ENABLES_TNL_TPA_EN \ + UINT32_C(0x10) /* Logical vnic ID */ uint16_t vnic_id; /* @@ -38332,6 +39190,117 @@ struct hwrm_vnic_tpa_cfg_input { * and can be queried using hwrm_vnic_tpa_qcfg. 
*/ uint32_t min_agg_len; + /* + * If the device supports hardware tunnel TPA feature, as indicated by + * the HWRM_VNIC_QCAPS command, this field is used to configure the + * tunnel types to be enabled. Each bit corresponds to a specific + * tunnel type. If a bit is set to '1', then the associated tunnel + * type is enabled; otherwise, it is disabled. + */ + uint32_t tnl_tpa_en_bitmap; + /* + * When this bit is '1', enable VXLAN encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_VXLAN \ + UINT32_C(0x1) + /* + * When this bit is set to ‘1’, enable GENEVE encapsulated packets + * for aggregation. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_GENEVE \ + UINT32_C(0x2) + /* + * When this bit is set to ‘1’, enable NVGRE encapsulated packets + * for aggregation.. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_NVGRE \ + UINT32_C(0x4) + /* + * When this bit is set to ‘1’, enable GRE encapsulated packets + * for aggregation.. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_GRE \ + UINT32_C(0x8) + /* + * When this bit is set to ‘1’, enable IPV4 encapsulated packets + * for aggregation.. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_IPV4 \ + UINT32_C(0x10) + /* + * When this bit is set to ‘1’, enable IPV6 encapsulated packets + * for aggregation.. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_IPV6 \ + UINT32_C(0x20) + /* + * When this bit is '1', enable VXLAN_GPE encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_VXLAN_GPE \ + UINT32_C(0x40) + /* + * When this bit is '1', enable VXLAN_CUSTOMER1 encapsulated packets + * for aggregation. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_VXLAN_CUST1 \ + UINT32_C(0x80) + /* + * When this bit is '1', enable GRE_CUSTOMER1 encapsulated packets + * for aggregation. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_GRE_CUST1 \ + UINT32_C(0x100) + /* + * When this bit is '1', enable UPAR1 encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_UPAR1 \ + UINT32_C(0x200) + /* + * When this bit is '1', enable UPAR2 encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_UPAR2 \ + UINT32_C(0x400) + /* + * When this bit is '1', enable UPAR3 encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_UPAR3 \ + UINT32_C(0x800) + /* + * When this bit is '1', enable UPAR4 encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_UPAR4 \ + UINT32_C(0x1000) + /* + * When this bit is '1', enable UPAR5 encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_UPAR5 \ + UINT32_C(0x2000) + /* + * When this bit is '1', enable UPAR6 encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_UPAR6 \ + UINT32_C(0x4000) + /* + * When this bit is '1', enable UPAR7 encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_UPAR7 \ + UINT32_C(0x8000) + /* + * When this bit is '1', enable UPAR8 encapsulated packets for + * aggregation. 
+ */ + #define HWRM_VNIC_TPA_CFG_INPUT_TNL_TPA_EN_BITMAP_UPAR8 \ + UINT32_C(0x10000) + uint8_t unused_1[4]; } __rte_packed; /* hwrm_vnic_tpa_cfg_output (size:128b/16B) */ @@ -38355,6 +39324,288 @@ struct hwrm_vnic_tpa_cfg_output { uint8_t valid; } __rte_packed; +/********************** + * hwrm_vnic_tpa_qcfg * + **********************/ + + +/* hwrm_vnic_tpa_qcfg_input (size:192b/24B) */ +struct hwrm_vnic_tpa_qcfg_input { + /* The HWRM command request type. */ + uint16_t req_type; + /* + * The completion ring to send the completion event on. This should + * be the NQ ID returned from the `nq_alloc` HWRM command. + */ + uint16_t cmpl_ring; + /* + * The sequence ID is used by the driver for tracking multiple + * commands. This ID is treated as opaque data by the firmware and + * the value is returned in the `hwrm_resp_hdr` upon completion. + */ + uint16_t seq_id; + /* + * The target ID of the command: + * * 0x0-0xFFF8 - The function ID + * * 0xFFF8-0xFFFC, 0xFFFE - Reserved for internal processors + * * 0xFFFD - Reserved for user-space HWRM interface + * * 0xFFFF - HWRM + */ + uint16_t target_id; + /* + * A physical address pointer pointing to a host buffer that the + * command's response data will be written. This can be either a host + * physical address (HPA) or a guest physical address (GPA) and must + * point to a physically contiguous block of memory. + */ + uint64_t resp_addr; + /* Logical vnic ID */ + uint16_t vnic_id; + uint8_t unused_0[6]; +} __rte_packed; + +/* hwrm_vnic_tpa_qcfg_output (size:256b/32B) */ +struct hwrm_vnic_tpa_qcfg_output { + /* The specific error status for the command. */ + uint16_t error_code; + /* The HWRM command request type. */ + uint16_t req_type; + /* The sequence ID from the original command. */ + uint16_t seq_id; + /* The length of the response data in number of bytes. */ + uint16_t resp_len; + uint32_t flags; + /* + * When this bit is '1', the VNIC is configured to + * perform transparent packet aggregation (TPA) of + * non-tunneled TCP packets. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_FLAGS_TPA \ + UINT32_C(0x1) + /* + * When this bit is '1', the VNIC is configured to + * perform transparent packet aggregation (TPA) of + * tunneled TCP packets. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_FLAGS_ENCAP_TPA \ + UINT32_C(0x2) + /* + * When this bit is '1', the VNIC is configured to + * perform transparent packet aggregation (TPA) according + * to Windows Receive Segment Coalescing (RSC) rules. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_FLAGS_RSC_WND_UPDATE \ + UINT32_C(0x4) + /* + * When this bit is '1', the VNIC is configured to + * perform transparent packet aggregation (TPA) according + * to Linux Generic Receive Offload (GRO) rules. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_FLAGS_GRO \ + UINT32_C(0x8) + /* + * When this bit is '1', the VNIC is configured to + * perform transparent packet aggregation (TPA) for TCP + * packets with IP ECN set to non-zero. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_FLAGS_AGG_WITH_ECN \ + UINT32_C(0x10) + /* + * When this bit is '1', the VNIC is configured to + * perform transparent packet aggregation (TPA) for + * GRE tunneled TCP packets only if all packets have the + * same GRE sequence. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_FLAGS_AGG_WITH_SAME_GRE_SEQ \ + UINT32_C(0x20) + /* + * When this bit is '1' and the GRO mode is enabled, + * the VNIC is configured to + * perform transparent packet aggregation (TPA) for + * TCP/IPv4 packets with consecutively increasing IPIDs. 
+ * In other words, the last packet that is being + * aggregated to an already existing aggregation context + * shall have IPID 1 more than the IPID of the last packet + * that was aggregated in that aggregation context. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_FLAGS_GRO_IPID_CHECK \ + UINT32_C(0x40) + /* + * When this bit is '1' and the GRO mode is enabled, + * the VNIC is configured to + * perform transparent packet aggregation (TPA) for + * TCP packets with the same TTL (IPv4) or Hop limit (IPv6) + * value. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_FLAGS_GRO_TTL_CHECK \ + UINT32_C(0x80) + /* + * This is the maximum number of TCP segments that can + * be aggregated (unit is Log2). Max value is 31. + */ + uint16_t max_agg_segs; + /* 1 segment */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGG_SEGS_1 UINT32_C(0x0) + /* 2 segments */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGG_SEGS_2 UINT32_C(0x1) + /* 4 segments */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGG_SEGS_4 UINT32_C(0x2) + /* 8 segments */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGG_SEGS_8 UINT32_C(0x3) + /* Any segment size larger than this is not valid */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGG_SEGS_MAX UINT32_C(0x1f) + #define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGG_SEGS_LAST \ + HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGG_SEGS_MAX + /* + * This is the maximum number of aggregations this VNIC is + * allowed (unit is Log2). Max value is 7 + */ + uint16_t max_aggs; + /* 1 aggregation */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGGS_1 UINT32_C(0x0) + /* 2 aggregations */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGGS_2 UINT32_C(0x1) + /* 4 aggregations */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGGS_4 UINT32_C(0x2) + /* 8 aggregations */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGGS_8 UINT32_C(0x3) + /* 16 aggregations */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGGS_16 UINT32_C(0x4) + /* Any aggregation size larger than this is not valid */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGGS_MAX UINT32_C(0x7) + #define HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGGS_LAST \ + HWRM_VNIC_TPA_QCFG_OUTPUT_MAX_AGGS_MAX + /* + * This is the maximum amount of time allowed for + * an aggregation context to complete after it was initiated. + */ + uint32_t max_agg_timer; + /* + * This is the minimum amount of payload length required to + * start an aggregation context. + */ + uint32_t min_agg_len; + /* + * If the device supports hardware tunnel TPA feature, as indicated by + * the HWRM_VNIC_QCAPS command, this field conveys the bitmap of the + * tunnel types that have been configured. Each bit corresponds to a + * specific tunnel type. If a bit is set to '1', then the associated + * tunnel type is enabled; otherwise, it is disabled. + */ + uint32_t tnl_tpa_en_bitmap; + /* + * When this bit is '1', enable VXLAN encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_VXLAN \ + UINT32_C(0x1) + /* + * When this bit is set to ‘1’, enable GENEVE encapsulated packets + * for aggregation. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_GENEVE \ + UINT32_C(0x2) + /* + * When this bit is set to ‘1’, enable NVGRE encapsulated packets + * for aggregation.. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_NVGRE \ + UINT32_C(0x4) + /* + * When this bit is set to ‘1’, enable GRE encapsulated packets + * for aggregation.. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_GRE \ + UINT32_C(0x8) + /* + * When this bit is set to ‘1’, enable IPV4 encapsulated packets + * for aggregation.. 
+ */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_IPV4 \ + UINT32_C(0x10) + /* + * When this bit is set to ‘1’, enable IPV6 encapsulated packets + * for aggregation.. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_IPV6 \ + UINT32_C(0x20) + /* + * When this bit is '1', enable VXLAN_GPE encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_VXLAN_GPE \ + UINT32_C(0x40) + /* + * When this bit is '1', enable VXLAN_CUSTOMER1 encapsulated packets + * for aggregation. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_VXLAN_CUST1 \ + UINT32_C(0x80) + /* + * When this bit is '1', enable GRE_CUSTOMER1 encapsulated packets + * for aggregation. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_GRE_CUST1 \ + UINT32_C(0x100) + /* + * When this bit is '1', enable UPAR1 encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_UPAR1 \ + UINT32_C(0x200) + /* + * When this bit is '1', enable UPAR2 encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_UPAR2 \ + UINT32_C(0x400) + /* + * When this bit is '1', enable UPAR3 encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_UPAR3 \ + UINT32_C(0x800) + /* + * When this bit is '1', enable UPAR4 encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_UPAR4 \ + UINT32_C(0x1000) + /* + * When this bit is '1', enable UPAR5 encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_UPAR5 \ + UINT32_C(0x2000) + /* + * When this bit is '1', enable UPAR6 encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_UPAR6 \ + UINT32_C(0x4000) + /* + * When this bit is '1', enable UPAR7 encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_UPAR7 \ + UINT32_C(0x8000) + /* + * When this bit is '1', enable UPAR8 encapsulated packets for + * aggregation. + */ + #define HWRM_VNIC_TPA_QCFG_OUTPUT_TNL_TPA_EN_BITMAP_UPAR8 \ + UINT32_C(0x10000) + uint8_t unused_0[3]; + /* + * This field is used in Output records to indicate that the output + * is completely written to RAM. This field should be read as '1' + * to indicate that the output has been completely written. + * When writing a command completion or response to an internal processor, + * the order of writes has to be such that this field is written last. + */ + uint8_t valid; +} __rte_packed; + /********************* * hwrm_vnic_rss_cfg * *********************/ @@ -38572,6 +39823,12 @@ struct hwrm_vnic_rss_cfg_input { */ #define HWRM_VNIC_RSS_CFG_INPUT_FLAGS_HASH_TYPE_EXCLUDE \ UINT32_C(0x2) + /* + * When this bit is '1', it indicates that the support of setting + * ipsec hash_types by the host drivers. + */ + #define HWRM_VNIC_RSS_CFG_INPUT_FLAGS_IPSEC_HASH_TYPE_CFG_SUPPORT \ + UINT32_C(0x4) uint8_t ring_select_mode; /* * In this mode, HW uses Toeplitz algorithm and provided Toeplitz @@ -39439,6 +40696,12 @@ struct hwrm_ring_alloc_input { */ #define HWRM_RING_ALLOC_INPUT_ENABLES_MPC_CHNLS_TYPE \ UINT32_C(0x400) + /* + * This bit must be '1' for the steering_tag field to be + * configured. + */ + #define HWRM_RING_ALLOC_INPUT_ENABLES_STEERING_TAG_VALID \ + UINT32_C(0x800) /* Ring Type. 
*/ uint8_t ring_type; /* L2 Completion Ring (CR) */ @@ -39664,7 +40927,8 @@ struct hwrm_ring_alloc_input { #define HWRM_RING_ALLOC_INPUT_RING_ARB_CFG_ARB_POLICY_PARAM_MASK \ UINT32_C(0xff00) #define HWRM_RING_ALLOC_INPUT_RING_ARB_CFG_ARB_POLICY_PARAM_SFT 8 - uint16_t unused_3; + /* Steering tag to use for memory transactions. */ + uint16_t steering_tag; /* * This field is reserved for the future use. * It shall be set to 0. @@ -43871,7 +45135,10 @@ struct hwrm_cfa_ntuple_filter_alloc_input { * Setting of this flag indicates that the dst_id field contains RFS * ring table index. If this is not set it indicates dst_id is VNIC * or VPORT or function ID. Note dest_fid and dest_rfs_ring_idx - * can’t be set at the same time. + * can't be set at the same time. Updated drivers should pass ring + * idx in the rfs_ring_tbl_idx field if the firmware indicates + * support for the new field in the HWRM_CFA_ADV_FLOW_MGMT_QCAPS + * response. */ #define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_FLAGS_DEST_RFS_RING_IDX \ UINT32_C(0x20) @@ -43986,10 +45253,7 @@ struct hwrm_cfa_ntuple_filter_alloc_input { */ #define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_DST_ID \ UINT32_C(0x10000) - /* - * This bit must be '1' for the mirror_vnic_id field to be - * configured. - */ + /* This flag is deprecated. */ #define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_MIRROR_VNIC_ID \ UINT32_C(0x20000) /* @@ -43998,7 +45262,10 @@ struct hwrm_cfa_ntuple_filter_alloc_input { */ #define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_DST_MACADDR \ UINT32_C(0x40000) - /* This flag is deprecated. */ + /* + * This bit must be '1' for the rfs_ring_tbl_idx field to + * be configured. + */ #define HWRM_CFA_NTUPLE_FILTER_ALLOC_INPUT_ENABLES_RFS_RING_TBL_IDX \ UINT32_C(0x80000) /* @@ -44069,10 +45336,12 @@ struct hwrm_cfa_ntuple_filter_alloc_input { */ uint16_t dst_id; /* - * Logical VNIC ID of the VNIC where traffic is - * mirrored. + * If set, this value shall represent the ring table + * index for receive flow steering. Note that this offset + * was formerly used for the mirror_vnic_id field, which + * is no longer supported. */ - uint16_t mirror_vnic_id; + uint16_t rfs_ring_tbl_idx; /* * This value indicates the tunnel type for this filter. * If this field is not specified, then the filter shall @@ -50258,6 +51527,13 @@ struct hwrm_cfa_adv_flow_mgnt_qcaps_output { */ #define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_NTUPLE_FLOW_RX_EXT_IP_PROTO_SUPPORTED \ UINT32_C(0x100000) + /* + * Value of 1 to indicate that firmware supports setting of + * rfs_ring_tbl_idx (new offset) in HWRM_CFA_NTUPLE_ALLOC command. + * Value of 0 indicates ring tbl idx should be passed using dst_id. + */ + #define HWRM_CFA_ADV_FLOW_MGNT_QCAPS_OUTPUT_FLAGS_RFS_RING_TBL_IDX_V3_SUPPORTED \ + UINT32_C(0x200000) uint8_t unused_0[3]; /* * This field is used in Output records to indicate that the output @@ -56744,9 +58020,17 @@ struct hwrm_tunnel_dst_port_query_input { /* Generic Protocol Extension for VXLAN (VXLAN-GPE) */ #define HWRM_TUNNEL_DST_PORT_QUERY_INPUT_TUNNEL_TYPE_VXLAN_GPE \ UINT32_C(0x10) + /* Generic Routing Encapsulation */ + #define HWRM_TUNNEL_DST_PORT_QUERY_INPUT_TUNNEL_TYPE_GRE \ + UINT32_C(0x11) #define HWRM_TUNNEL_DST_PORT_QUERY_INPUT_TUNNEL_TYPE_LAST \ - HWRM_TUNNEL_DST_PORT_QUERY_INPUT_TUNNEL_TYPE_VXLAN_GPE - uint8_t unused_0[7]; + HWRM_TUNNEL_DST_PORT_QUERY_INPUT_TUNNEL_TYPE_GRE + /* + * This field is used to specify the next protocol value defined in the + * corresponding RFC spec for the applicable tunnel type. 
+ */ + uint8_t tunnel_next_proto; + uint8_t unused_0[6]; } __rte_packed; /* hwrm_tunnel_dst_port_query_output (size:128b/16B) */ @@ -56808,7 +58092,21 @@ struct hwrm_tunnel_dst_port_query_output { /* This bit will be '1' when UPAR7 is IN_USE */ #define HWRM_TUNNEL_DST_PORT_QUERY_OUTPUT_UPAR_IN_USE_UPAR7 \ UINT32_C(0x80) - uint8_t unused_0[2]; + /* + * This field is used to convey the status of non udp port based + * tunnel parsing at chip level and at function level. + */ + uint8_t status; + /* This bit will be '1' when tunnel parsing is enabled globally. */ + #define HWRM_TUNNEL_DST_PORT_QUERY_OUTPUT_STATUS_CHIP_LEVEL \ + UINT32_C(0x1) + /* + * This bit will be '1' when tunnel parsing is enabled + * on the corresponding function. + */ + #define HWRM_TUNNEL_DST_PORT_QUERY_OUTPUT_STATUS_FUNC_LEVEL \ + UINT32_C(0x2) + uint8_t unused_0; /* * This field is used in Output records to indicate that the output * is completely written to RAM. This field should be read as '1' @@ -56886,9 +58184,16 @@ struct hwrm_tunnel_dst_port_alloc_input { /* Generic Protocol Extension for VXLAN (VXLAN-GPE) */ #define HWRM_TUNNEL_DST_PORT_ALLOC_INPUT_TUNNEL_TYPE_VXLAN_GPE \ UINT32_C(0x10) + /* Generic Routing Encapsulation */ + #define HWRM_TUNNEL_DST_PORT_ALLOC_INPUT_TUNNEL_TYPE_GRE \ + UINT32_C(0x11) #define HWRM_TUNNEL_DST_PORT_ALLOC_INPUT_TUNNEL_TYPE_LAST \ - HWRM_TUNNEL_DST_PORT_ALLOC_INPUT_TUNNEL_TYPE_VXLAN_GPE - uint8_t unused_0; + HWRM_TUNNEL_DST_PORT_ALLOC_INPUT_TUNNEL_TYPE_GRE + /* + * This field is used to specify the next protocol value defined in the + * corresponding RFC spec for the applicable tunnel type. + */ + uint8_t tunnel_next_proto; /* * This field represents the value of L4 destination port used * for the given tunnel type. This field is valid for @@ -56900,7 +58205,7 @@ struct hwrm_tunnel_dst_port_alloc_input { * A value of 0 shall fail the command. */ uint16_t tunnel_dst_port_val; - uint8_t unused_1[4]; + uint8_t unused_0[4]; } __rte_packed; /* hwrm_tunnel_dst_port_alloc_output (size:128b/16B) */ @@ -56929,8 +58234,11 @@ struct hwrm_tunnel_dst_port_alloc_output { /* Out of resources error */ #define HWRM_TUNNEL_DST_PORT_ALLOC_OUTPUT_ERROR_INFO_ERR_NO_RESOURCE \ UINT32_C(0x2) + /* Tunnel type is alread enabled */ + #define HWRM_TUNNEL_DST_PORT_ALLOC_OUTPUT_ERROR_INFO_ERR_ENABLED \ + UINT32_C(0x3) #define HWRM_TUNNEL_DST_PORT_ALLOC_OUTPUT_ERROR_INFO_LAST \ - HWRM_TUNNEL_DST_PORT_ALLOC_OUTPUT_ERROR_INFO_ERR_NO_RESOURCE + HWRM_TUNNEL_DST_PORT_ALLOC_OUTPUT_ERROR_INFO_ERR_ENABLED /* * This field represents the UPAR usage status. * Available UPARs on wh+ are UPAR0 and UPAR1 @@ -57040,15 +58348,22 @@ struct hwrm_tunnel_dst_port_free_input { /* Generic Protocol Extension for VXLAN (VXLAN-GPE) */ #define HWRM_TUNNEL_DST_PORT_FREE_INPUT_TUNNEL_TYPE_VXLAN_GPE \ UINT32_C(0x10) + /* Generic Routing Encapsulation */ + #define HWRM_TUNNEL_DST_PORT_FREE_INPUT_TUNNEL_TYPE_GRE \ + UINT32_C(0x11) #define HWRM_TUNNEL_DST_PORT_FREE_INPUT_TUNNEL_TYPE_LAST \ - HWRM_TUNNEL_DST_PORT_FREE_INPUT_TUNNEL_TYPE_VXLAN_GPE - uint8_t unused_0; + HWRM_TUNNEL_DST_PORT_FREE_INPUT_TUNNEL_TYPE_GRE + /* + * This field is used to specify the next protocol value defined in the + * corresponding RFC spec for the applicable tunnel type. + */ + uint8_t tunnel_next_proto; /* * Identifier of a tunnel L4 destination port value. Only applies to tunnel * types that has l4 destination port parameters. 
*/ uint16_t tunnel_dst_port_id; - uint8_t unused_1[4]; + uint8_t unused_0[4]; } __rte_packed; /* hwrm_tunnel_dst_port_free_output (size:128b/16B) */ @@ -57234,7 +58549,7 @@ struct ctx_eng_stats { ***********************/ -/* hwrm_stat_ctx_alloc_input (size:256b/32B) */ +/* hwrm_stat_ctx_alloc_input (size:320b/40B) */ struct hwrm_stat_ctx_alloc_input { /* The HWRM command request type. */ uint16_t req_type; @@ -57305,6 +58620,18 @@ struct hwrm_stat_ctx_alloc_input { * for the periodic DMA updates. */ uint16_t stats_dma_length; + uint16_t flags; + /* This stats context uses the steering tag specified in the command. */ + #define HWRM_STAT_CTX_ALLOC_INPUT_FLAGS_STEERING_TAG_VALID \ + UINT32_C(0x1) + /* + * Steering tag to use for memory transactions from the periodic DMA + * updates. 'steering_tag_valid' should be set and 'steering_tag' + * should be specified, when the 'steering_tag_supported' bit is set + * under the 'flags_ext2' field of the hwrm_func_qcaps_output. + */ + uint16_t steering_tag; + uint32_t unused_1; } __rte_packed; /* hwrm_stat_ctx_alloc_output (size:128b/16B) */ From patchwork Mon Dec 11 17:10:58 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 135030 X-Patchwork-Delegate: ajit.khaparde@broadcom.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 68450436C8; Mon, 11 Dec 2023 18:11:44 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 23F3C42DA6; Mon, 11 Dec 2023 18:11:28 +0100 (CET) Received: from mail-qk1-f173.google.com (mail-qk1-f173.google.com [209.85.222.173]) by mails.dpdk.org (Postfix) with ESMTP id 4C54842D87 for ; Mon, 11 Dec 2023 18:11:26 +0100 (CET) Received: by mail-qk1-f173.google.com with SMTP id af79cd13be357-77f59fcb204so154649985a.3 for ; Mon, 11 Dec 2023 09:11:26 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; t=1702314685; x=1702919485; darn=dpdk.org; h=mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:from:to:cc:subject:date:message-id:reply-to; bh=4VkVMu8vyWIaIU5dMczQxYYCHchNtSAO9q9tqoaTt/c=; b=IUHmdpdrtd80HruogN8InGy5p5ESN8joW/Ibx8dHfzK989HxJW9le9v7Vg0OE1EXdL WlDMx0zkYj+TmvH7MwOpU1onp8+ABHhGGJFEU7MS7dhe1wS5IOj/mpEbmqb+EnbOjTuc pw8Wcxwtme4xT/nyJ8Z7YgaeT4W3W827VHM7Q= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1702314685; x=1702919485; h=mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=4VkVMu8vyWIaIU5dMczQxYYCHchNtSAO9q9tqoaTt/c=; b=aakjfe55KLBfnJ4Vel7uEZiDzu3+8pp459oxI8p/iq9XgjLU9zV1SkvA33+YkmaJ4e K/RpIahoAdwPyZ5Jfpj4M3i8QUssCfnaQDHPz464Q8XHxm8adfranKwYwjaFbhhNhqjb LAp3UYKdwmusSH9/4IN5Ef5FWfwPCZZuNCvbEiuKOdpiCod5sCfXHTzk+Kmx+Bn9dSPn w8tunTUvlLKNWMQq5nrsQc4G0KwxEtseP3w/uwbYJd1InsOlBmdNQu06CWn9BvsFLMCe c7tlo/NrufcXJw7zKH/EwDpTkSN0F9Gpczs5BMJCjqg5p+fHkoJ15+nvpS2BAz1ln7Jq 1E6Q== X-Gm-Message-State: AOJu0Yx7INzEHQ/HB73w4UfGx+w3u2JmvPKdmvEpPD9BTf9l2SyUbyy/ 44CdzBCYB7oFN+FNPIl1r/HxvzoumdF7Y3mPxrxeD+AKfLBslqL5WCbjKU4wzxmrj2ER3T5XscO bkc/2JkVfDl1WMXA8LwpfxxnIHveTzv6kOVMy8Gm8BC1CeuJjEAQc/WHEhzd+WHfiteKg X-Google-Smtp-Source: AGHT+IFMsm0Dd00CS2FK++FPYVwvFF6CFuUy3GWbTrTsib5qs0f83AdcK44riv0Km8TEehZbJL0gnw== X-Received: by 2002:a05:620a:5311:b0:77f:34b1:66a3 with 
SMTP id oo17-20020a05620a531100b0077f34b166a3mr4924586qkn.107.1702314685256; Mon, 11 Dec 2023 09:11:25 -0800 (PST) Received: from C02GC2QQMD6T.wifi.broadcom.net ([192.19.223.252]) by smtp.gmail.com with ESMTPSA id qz16-20020a05620a8c1000b0077efdfbd730sm3094581qkn.34.2023.12.11.09.11.22 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 11 Dec 2023 09:11:22 -0800 (PST) From: Ajit Khaparde To: dev@dpdk.org Cc: Kalesh AP , Somnath Kotur Subject: [PATCH v3 03/14] net/bnxt: log a message when multicast promisc mode changes Date: Mon, 11 Dec 2023 09:10:58 -0800 Message-Id: <20231211171109.89716-4-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.39.2 (Apple Git-143) In-Reply-To: <20231211171109.89716-1-ajit.khaparde@broadcom.com> References: <20231211171109.89716-1-ajit.khaparde@broadcom.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Kalesh AP When the user tries to add more number of Mcast MAC addresses than supported by the port, driver puts port into Mcast promiscuous mode. It may be useful to the user to know that Mcast promiscuous mode is turned on. Similarly added a log when Mcast promiscuous mode is turned off. Signed-off-by: Kalesh AP Reviewed-by: Somnath Kotur Reviewed-by: Ajit Khaparde --- drivers/net/bnxt/bnxt_ethdev.c | 8 +++++++- 1 file changed, 7 insertions(+), 1 deletion(-) diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index acf7e6e46e..f398838ea8 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -2931,12 +2931,18 @@ bnxt_dev_set_mc_addr_list_op(struct rte_eth_dev *eth_dev, bp->nb_mc_addr = nb_mc_addr; if (nb_mc_addr > BNXT_MAX_MC_ADDRS) { + PMD_DRV_LOG(INFO, "Number of Mcast MACs added (%u) exceeded Max supported (%u)\n", + nb_mc_addr, BNXT_MAX_MC_ADDRS); + PMD_DRV_LOG(INFO, "Turning on Mcast promiscuous mode\n"); vnic->flags |= BNXT_VNIC_INFO_ALLMULTI; goto allmulti; } /* TODO Check for Duplicate mcast addresses */ - vnic->flags &= ~BNXT_VNIC_INFO_ALLMULTI; + if (vnic->flags & BNXT_VNIC_INFO_ALLMULTI) { + PMD_DRV_LOG(INFO, "Turning off Mcast promiscuous mode\n"); + vnic->flags &= ~BNXT_VNIC_INFO_ALLMULTI; + } for (i = 0; i < nb_mc_addr; i++) rte_ether_addr_copy(&mc_addr_set[i], &bp->mcast_addr_list[i]); From patchwork Mon Dec 11 17:10:59 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 135031 X-Patchwork-Delegate: ajit.khaparde@broadcom.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id F3F77436C8; Mon, 11 Dec 2023 18:11:51 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 64F0442D96; Mon, 11 Dec 2023 18:11:30 +0100 (CET) Received: from mail-qk1-f176.google.com (mail-qk1-f176.google.com [209.85.222.176]) by mails.dpdk.org (Postfix) with ESMTP id 0BD9142D96 for ; Mon, 11 Dec 2023 18:11:29 +0100 (CET) Received: by mail-qk1-f176.google.com with SMTP id af79cd13be357-77f3c4914e5so252681985a.3 for ; Mon, 11 Dec 2023 09:11:28 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; t=1702314688; x=1702919488; darn=dpdk.org; 
h=mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:from:to:cc:subject:date:message-id:reply-to; bh=TJQrFf1wqELfrzUcAHqojml3xW6Mxws8imFhGIFOEEw=; b=ZaFEDkFA5+l9IcoKw6p2m9uGSVuVa3WWcvb56UXm2oWW67By392CjhnqXESknbrFbG hY8VdYq9kzK696SJktsBcx56xkVf1sARNk6z8eZq2GgkwQQqmZWiMqicEhDL9jbu5mU+ PQ0lelHI4LUo17Nk2ZqduJWk0A+y97LIAMBl4= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1702314688; x=1702919488; h=mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=TJQrFf1wqELfrzUcAHqojml3xW6Mxws8imFhGIFOEEw=; b=ovWqr1GRhNF79EthNIPcPP+1SFlO/iPaFLMFAHGWhgbT5beIc2JlovzzPCg0v/bUFg qrS8CL9mVQ/LqV8c9CuY87D8gNGl/P3serDOPaY+VkknLjcMrLh1y61PHR5kyuf/vzEb U+hlJ5g/3+/Ag8L2Shrf0iYtvkx0kYxPxwvMHl9FLP+fk7QamkO5DRmpbyoyDBs39JSd MFlcxWkfzhtrGC6XZe7iGS/Zj88t3Bd7P53UuGB+YmlV59eZdXJ5R9i2B4YZRcUgORaw EjxsDNrxvuejmdgptX0HvAw4WyJq9ELr+b9dbUcR2qKD6/INHdExKEayjile15qk30Dv EkeQ== X-Gm-Message-State: AOJu0YzrKeKAy3/goaipDVx8KEwU31aGwkdNXoo6BBFPwwkiGhwHzcRm eX1nngogN4+v1iNbmKMEIU/HceRGFVpqr72sBpC2fAx+s1WShPN35i7k9xIB0ALhpgCvNIovL4p fPPdMfnYQ8rBdJqP3TSW6wBAW0BmhCaqse2tUm8BD3XN+rpg/x19q+sMfoqnpbKEuqlDm X-Google-Smtp-Source: AGHT+IF72AqYaORbrOe1DO8au6d9kg/LrWCDFckFsQ59fx9ezAlM4jjypEbGNXvHMjwsFXsh8jnYmw== X-Received: by 2002:a05:620a:63c4:b0:77e:fba3:4f40 with SMTP id pw4-20020a05620a63c400b0077efba34f40mr7026683qkn.150.1702314687893; Mon, 11 Dec 2023 09:11:27 -0800 (PST) Received: from C02GC2QQMD6T.wifi.broadcom.net ([192.19.223.252]) by smtp.gmail.com with ESMTPSA id qz16-20020a05620a8c1000b0077efdfbd730sm3094581qkn.34.2023.12.11.09.11.25 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 11 Dec 2023 09:11:26 -0800 (PST) From: Ajit Khaparde To: dev@dpdk.org Cc: Somnath Kotur Subject: [PATCH v3 04/14] net/bnxt: use the correct COS queue for Tx Date: Mon, 11 Dec 2023 09:10:59 -0800 Message-Id: <20231211171109.89716-5-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.39.2 (Apple Git-143) In-Reply-To: <20231211171109.89716-1-ajit.khaparde@broadcom.com> References: <20231211171109.89716-1-ajit.khaparde@broadcom.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Earlier the firmware was configuring single lossy COS profiles for Tx. But now more than one profiles is possible. Identify the profile a NIC driver should use based on the profile type hint provided in queue_cfg_info. If the firmware does not set the bit to use profile type, then we will use the older method to pick the COS queue for Tx. 
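To make the fallback order concrete, the selection logic added by this patch can be condensed into the sketch below. This is illustrative only: the wrapper function name is hypothetical, while bnxt_find_lossy_profile(), bnxt_find_first_valid_profile() and the USE_PROFILE_TYPE bit in queue_cfg_info are the identifiers used in the hunks that follow, where the real code lives inside bnxt_hwrm_queue_qportcfg().

    /* Illustrative sketch of the Tx CoS queue selection order (not the
     * literal driver code; see the diff below for the actual change). */
    static void bnxt_select_tx_cos_queue_sketch(struct bnxt *bp,
                struct hwrm_queue_qportcfg_output *resp)
    {
            /* New: honor the profile type hint when the firmware sets it. */
            bool use_prof_type = !!(resp->queue_cfg_info &
                    HWRM_QUEUE_QPORTCFG_OUTPUT_QUEUE_CFG_INFO_USE_PROFILE_TYPE);

            /*
             * Prefer a lossy profile for Tx: with the hint, only a lossy
             * profile whose type is NIC is accepted; without it, fall back
             * to the older lossy lookup. If neither matches, pick the first
             * valid profile, as before.
             */
            if (!bnxt_find_lossy_profile(bp, use_prof_type))
                    bnxt_find_first_valid_profile(bp);

            /* bp->tx_cosq_id[0] now holds the CoS queue ID used for Tx. */
    }

Keeping the legacy lookup as the default preserves existing behavior on firmware that does not advertise profile types.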
Signed-off-by: Ajit Khaparde Reviewed-by: Somnath Kotur --- drivers/net/bnxt/bnxt.h | 1 + drivers/net/bnxt/bnxt_hwrm.c | 56 ++++++++++++++++++++++++++++++++++-- drivers/net/bnxt/bnxt_hwrm.h | 7 +++++ 3 files changed, 62 insertions(+), 2 deletions(-) diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index 0e01b1d4ba..542ef13f7c 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -311,6 +311,7 @@ struct bnxt_link_info { struct bnxt_cos_queue_info { uint8_t id; uint8_t profile; + uint8_t profile_type; }; struct rte_flow { diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c index 0a31b984e6..fe9e629892 100644 --- a/drivers/net/bnxt/bnxt_hwrm.c +++ b/drivers/net/bnxt/bnxt_hwrm.c @@ -1544,7 +1544,7 @@ int bnxt_hwrm_port_phy_qcaps(struct bnxt *bp) return 0; } -static bool bnxt_find_lossy_profile(struct bnxt *bp) +static bool _bnxt_find_lossy_profile(struct bnxt *bp) { int i = 0; @@ -1558,6 +1558,41 @@ static bool bnxt_find_lossy_profile(struct bnxt *bp) return false; } +static bool _bnxt_find_lossy_nic_profile(struct bnxt *bp) +{ + int i = 0, j = 0; + + for (i = 0; i < BNXT_COS_QUEUE_COUNT; i++) { + for (j = 0; j < BNXT_COS_QUEUE_COUNT; j++) { + if (bp->tx_cos_queue[i].profile == + HWRM_QUEUE_SERVICE_PROFILE_LOSSY && + bp->tx_cos_queue[j].profile_type == + HWRM_QUEUE_SERVICE_PROFILE_TYPE_NIC) { + bp->tx_cosq_id[0] = bp->tx_cos_queue[i].id; + return true; + } + } + } + return false; +} + +static bool bnxt_find_lossy_profile(struct bnxt *bp, bool use_prof_type) +{ + int i; + + for (i = 0; i < BNXT_COS_QUEUE_COUNT; i++) { + PMD_DRV_LOG(DEBUG, "profile %d, profile_id %d, type %d\n", + bp->tx_cos_queue[i].profile, + bp->tx_cos_queue[i].id, + bp->tx_cos_queue[i].profile_type); + } + + if (use_prof_type) + return _bnxt_find_lossy_nic_profile(bp); + else + return _bnxt_find_lossy_profile(bp); +} + static void bnxt_find_first_valid_profile(struct bnxt *bp) { int i = 0; @@ -1579,6 +1614,7 @@ int bnxt_hwrm_queue_qportcfg(struct bnxt *bp) struct hwrm_queue_qportcfg_input req = {.req_type = 0 }; struct hwrm_queue_qportcfg_output *resp = bp->hwrm_cmd_resp_addr; uint32_t dir = HWRM_QUEUE_QPORTCFG_INPUT_FLAGS_PATH_TX; + bool use_prof_type = false; int i; get_rx_info: @@ -1590,10 +1626,15 @@ int bnxt_hwrm_queue_qportcfg(struct bnxt *bp) !(bp->vnic_cap_flags & BNXT_VNIC_CAP_COS_CLASSIFY)) req.drv_qmap_cap = HWRM_QUEUE_QPORTCFG_INPUT_DRV_QMAP_CAP_ENABLED; + rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB); HWRM_CHECK_RESULT(); + if (resp->queue_cfg_info & + HWRM_QUEUE_QPORTCFG_OUTPUT_QUEUE_CFG_INFO_USE_PROFILE_TYPE) + use_prof_type = true; + if (dir == HWRM_QUEUE_QPORTCFG_INPUT_FLAGS_PATH_TX) { GET_TX_QUEUE_INFO(0); GET_TX_QUEUE_INFO(1); @@ -1603,6 +1644,16 @@ int bnxt_hwrm_queue_qportcfg(struct bnxt *bp) GET_TX_QUEUE_INFO(5); GET_TX_QUEUE_INFO(6); GET_TX_QUEUE_INFO(7); + if (use_prof_type) { + GET_TX_QUEUE_TYPE_INFO(0); + GET_TX_QUEUE_TYPE_INFO(1); + GET_TX_QUEUE_TYPE_INFO(2); + GET_TX_QUEUE_TYPE_INFO(3); + GET_TX_QUEUE_TYPE_INFO(4); + GET_TX_QUEUE_TYPE_INFO(5); + GET_TX_QUEUE_TYPE_INFO(6); + GET_TX_QUEUE_TYPE_INFO(7); + } } else { GET_RX_QUEUE_INFO(0); GET_RX_QUEUE_INFO(1); @@ -1636,11 +1687,12 @@ int bnxt_hwrm_queue_qportcfg(struct bnxt *bp) * operations, ideally we should look to use LOSSY. 
* If not found, fallback to the first valid profile */ - if (!bnxt_find_lossy_profile(bp)) + if (!bnxt_find_lossy_profile(bp, use_prof_type)) bnxt_find_first_valid_profile(bp); } } + PMD_DRV_LOG(DEBUG, "Tx COS Queue ID %d\n", bp->tx_cosq_id[0]); bp->max_tc = resp->max_configurable_queues; bp->max_lltc = resp->max_configurable_lossless_queues; diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h index 68384bc757..f9fa6cf73a 100644 --- a/drivers/net/bnxt/bnxt_hwrm.h +++ b/drivers/net/bnxt/bnxt_hwrm.h @@ -46,6 +46,9 @@ struct hwrm_func_qstats_output; #define HWRM_QUEUE_SERVICE_PROFILE_UNKNOWN \ HWRM_QUEUE_QPORTCFG_OUTPUT_QUEUE_ID0_SERVICE_PROFILE_UNKNOWN +#define HWRM_QUEUE_SERVICE_PROFILE_TYPE_NIC \ + HWRM_QUEUE_QPORTCFG_OUTPUT_QUEUE_ID0_SERVICE_PROFILE_TYPE_NIC + #define HWRM_FUNC_RESOURCE_QCAPS_OUTPUT_VF_RESV_STRATEGY_MINIMAL_STATIC \ HWRM_FUNC_RESOURCE_QCAPS_OUTPUT_VF_RESERVATION_STRATEGY_MINIMAL_STATIC #define HWRM_FUNC_RESOURCE_QCAPS_OUTPUT_VF_RESV_STRATEGY_MAXIMAL \ @@ -74,6 +77,10 @@ struct hwrm_func_qstats_output; bp->tx_cos_queue[x].profile = \ resp->queue_id##x##_service_profile +#define GET_TX_QUEUE_TYPE_INFO(x) \ + bp->tx_cos_queue[x].profile_type = \ + resp->queue_id##x##_service_profile_type + #define GET_RX_QUEUE_INFO(x) \ bp->rx_cos_queue[x].id = resp->queue_id##x; \ bp->rx_cos_queue[x].profile = \ From patchwork Mon Dec 11 17:11:00 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 135032 X-Patchwork-Delegate: ajit.khaparde@broadcom.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id A8192436C8; Mon, 11 Dec 2023 18:11:59 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 9160C42D92; Mon, 11 Dec 2023 18:11:32 +0100 (CET) Received: from mail-oa1-f54.google.com (mail-oa1-f54.google.com [209.85.160.54]) by mails.dpdk.org (Postfix) with ESMTP id 4E63842D9A for ; Mon, 11 Dec 2023 18:11:31 +0100 (CET) Received: by mail-oa1-f54.google.com with SMTP id 586e51a60fabf-1ef36a04931so3258522fac.2 for ; Mon, 11 Dec 2023 09:11:31 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; t=1702314690; x=1702919490; darn=dpdk.org; h=mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:from:to:cc:subject:date:message-id:reply-to; bh=rfgcfB6eXZsMIYYir/LhKBAMLEyiq1xZs/RLGsT7s60=; b=WFe0M8p93Krhgtsv+ls2OFUg1HZg/fWArTFtnNaNtmx2QScT8fek5jLmH/9KLHrSCJ 6s7JPrLR2R20MJkgEJY/fD6nFRqxVkXzke2tvJgFBrI2mw8IS32tzzOGWzNQudGoGAyZ KGroTG7XaLjf0RkN9d7Z1N0s2bEPJ+3B0JL7A= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1702314690; x=1702919490; h=mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=rfgcfB6eXZsMIYYir/LhKBAMLEyiq1xZs/RLGsT7s60=; b=wDv3j3OhGMyTLrMZQ6eZs72TAaIZKix+btBLXGFaJ6/WfZiDlxLrBUeOx2poECq5V9 VdtkLi2ER7ceaTiCOpoT1DH8dV0DpYzYssFtxj2wDDQwpI17TZX7RoFHwHDOEtFGBDl4 vw5ojfGCP5yPS2i8HctMPvPIx8UeqmkLU1nRnhWi2mG9fFMXCuF85pk4KvBlDdwztLSz LGKcRfppjDq6K7RzKONDTeIAW8NZ2/OaXwDjnnvFo5S/lnXskMhteP89Hdc0iced7Znq /wb8+4xePtuf8h1JpGwcCNar1zeKjJaphVtuc/68flvzN0oqmyalIGOJEExXx7WVAz5Z YQfQ== X-Gm-Message-State: AOJu0YxiFsRUXB2LCzEjFU7lF0cDWw8M9bjIcYqRWVnzo6egesUexRXo 
uTh//Qcyhw3INFGugLx7egsQhd18xKOpcRUzG4N0UVD7KBelvhn+RFEXvqTo/ucdUQM+d6z/i2V aH7c8Sy004LE5T4gcTOt3XbkLjj5GnI42OSwZ5LrES5A+mWl3+SYBTPQDCOw3bj8r/2cD X-Google-Smtp-Source: AGHT+IEi/9eknhVyPxX/XTxea+CeWy3UTg+Gi+Px1k+X0P0jDgYNdcayaxvbT+v9KiGK3CVQlWtTWQ== X-Received: by 2002:a05:6870:418c:b0:1fa:406c:219 with SMTP id y12-20020a056870418c00b001fa406c0219mr4662796oac.28.1702314689800; Mon, 11 Dec 2023 09:11:29 -0800 (PST) Received: from C02GC2QQMD6T.wifi.broadcom.net ([192.19.223.252]) by smtp.gmail.com with ESMTPSA id qz16-20020a05620a8c1000b0077efdfbd730sm3094581qkn.34.2023.12.11.09.11.28 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 11 Dec 2023 09:11:29 -0800 (PST) From: Ajit Khaparde To: dev@dpdk.org Cc: Somnath Kotur , Kalesh AP Subject: [PATCH v3 05/14] net/bnxt: refactor mem zone allocation Date: Mon, 11 Dec 2023 09:11:00 -0800 Message-Id: <20231211171109.89716-6-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.39.2 (Apple Git-143) In-Reply-To: <20231211171109.89716-1-ajit.khaparde@broadcom.com> References: <20231211171109.89716-1-ajit.khaparde@broadcom.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Currently we are allocating memzone for VNIC attributes per VNIC. In cases where the firmware supports a higher VNIC count, this could lead to a higher number of memzone segments than supported. Move the memzone for VNIC attributes per function instead of per VNIC. Divide the memzone per VNIC as needed. Signed-off-by: Ajit Khaparde Reviewed-by: Somnath Kotur Reviewed-by: Kalesh AP --- drivers/net/bnxt/bnxt.h | 1 + drivers/net/bnxt/bnxt_vnic.c | 52 +++++++++++++++++++----------------- drivers/net/bnxt/bnxt_vnic.h | 1 - 3 files changed, 28 insertions(+), 26 deletions(-) diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index 542ef13f7c..6af668e92f 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -772,6 +772,7 @@ struct bnxt { struct bnxt_vnic_info *vnic_info; STAILQ_HEAD(, bnxt_vnic_info) free_vnic_list; + const struct rte_memzone *vnic_rss_mz; struct bnxt_filter_info *filter_info; STAILQ_HEAD(, bnxt_filter_info) free_filter_list; diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c index f86d27fd79..d40daf631e 100644 --- a/drivers/net/bnxt/bnxt_vnic.c +++ b/drivers/net/bnxt/bnxt_vnic.c @@ -123,13 +123,11 @@ void bnxt_free_vnic_attributes(struct bnxt *bp) for (i = 0; i < bp->max_vnics; i++) { vnic = &bp->vnic_info[i]; - if (vnic->rss_mz != NULL) { - rte_memzone_free(vnic->rss_mz); - vnic->rss_mz = NULL; - vnic->rss_hash_key = NULL; - vnic->rss_table = NULL; - } + vnic->rss_hash_key = NULL; + vnic->rss_table = NULL; } + rte_memzone_free(bp->vnic_rss_mz); + bp->vnic_rss_mz = NULL; } int bnxt_alloc_vnic_attributes(struct bnxt *bp, bool reconfig) @@ -153,31 +151,35 @@ int bnxt_alloc_vnic_attributes(struct bnxt *bp, bool reconfig) entry_length = RTE_CACHE_LINE_ROUNDUP(entry_length + rss_table_size); - for (i = 0; i < bp->max_vnics; i++) { - vnic = &bp->vnic_info[i]; - - snprintf(mz_name, RTE_MEMZONE_NAMESIZE, - "bnxt_" PCI_PRI_FMT "_vnicattr_%d", pdev->addr.domain, - pdev->addr.bus, pdev->addr.devid, pdev->addr.function, i); - mz_name[RTE_MEMZONE_NAMESIZE - 1] = 0; - mz = rte_memzone_lookup(mz_name); - if (mz == NULL) { - mz = rte_memzone_reserve(mz_name, - entry_length, + snprintf(mz_name, RTE_MEMZONE_NAMESIZE, + "bnxt_" PCI_PRI_FMT 
"_vnicattr", pdev->addr.domain, + pdev->addr.bus, pdev->addr.devid, pdev->addr.function); + mz_name[RTE_MEMZONE_NAMESIZE - 1] = 0; + mz = rte_memzone_lookup(mz_name); + if (mz == NULL) { + mz = rte_memzone_reserve_aligned(mz_name, + entry_length * bp->max_vnics, bp->eth_dev->device->numa_node, RTE_MEMZONE_2MB | RTE_MEMZONE_SIZE_HINT_ONLY | - RTE_MEMZONE_IOVA_CONTIG); - if (mz == NULL) { - PMD_DRV_LOG(ERR, "Cannot allocate bnxt vnic_attributes memory\n"); - return -ENOMEM; - } + RTE_MEMZONE_IOVA_CONTIG, + BNXT_PAGE_SIZE); + if (mz == NULL) { + PMD_DRV_LOG(ERR, + "Cannot allocate vnic_attributes memory\n"); + return -ENOMEM; } - vnic->rss_mz = mz; - mz_phys_addr = mz->iova; + } + bp->vnic_rss_mz = mz; + for (i = 0; i < bp->max_vnics; i++) { + uint32_t offset = entry_length * i; + + vnic = &bp->vnic_info[i]; + + mz_phys_addr = mz->iova + offset; /* Allocate rss table and hash key */ - vnic->rss_table = (void *)((char *)mz->addr); + vnic->rss_table = (void *)((char *)mz->addr + offset); vnic->rss_table_dma_addr = mz_phys_addr; memset(vnic->rss_table, -1, entry_length); diff --git a/drivers/net/bnxt/bnxt_vnic.h b/drivers/net/bnxt/bnxt_vnic.h index 4396d95bda..7a6a0aa739 100644 --- a/drivers/net/bnxt/bnxt_vnic.h +++ b/drivers/net/bnxt/bnxt_vnic.h @@ -47,7 +47,6 @@ struct bnxt_vnic_info { uint16_t hash_type; uint8_t hash_mode; uint8_t prev_hash_mode; - const struct rte_memzone *rss_mz; rte_iova_t rss_table_dma_addr; uint16_t *rss_table; rte_iova_t rss_hash_key_dma_addr; From patchwork Mon Dec 11 17:11:01 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 135033 X-Patchwork-Delegate: ajit.khaparde@broadcom.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1A2F5436C8; Mon, 11 Dec 2023 18:12:09 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id EA05342DC9; Mon, 11 Dec 2023 18:11:33 +0100 (CET) Received: from mail-qk1-f179.google.com (mail-qk1-f179.google.com [209.85.222.179]) by mails.dpdk.org (Postfix) with ESMTP id 606B742D97 for ; Mon, 11 Dec 2023 18:11:32 +0100 (CET) Received: by mail-qk1-f179.google.com with SMTP id af79cd13be357-77f2f492a43so262215885a.2 for ; Mon, 11 Dec 2023 09:11:32 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; t=1702314691; x=1702919491; darn=dpdk.org; h=mime-version:references:in-reply-to:message-id:date:subject:to:from :from:to:cc:subject:date:message-id:reply-to; bh=2Fl25Y0Srfy4dOuufz7Sb/WR4QH666xVKN23/PCnTAI=; b=E3BHoyLktryYPGGf8zqC/MmoJDmpD2zMDIz3jN1O7o9ya30B2i5dixOfYDHMLl13Ig XOKl8BOqtDmNgFO3wncQ+D0xMY4l37otoMen7DrTU8fkdLKQV7wjaUDPcnMa7DKPc2AT ko1saHMYLYMJLz55sHXxAB8LL7nbnT6t9HHsk= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1702314691; x=1702919491; h=mime-version:references:in-reply-to:message-id:date:subject:to:from :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=2Fl25Y0Srfy4dOuufz7Sb/WR4QH666xVKN23/PCnTAI=; b=bzzgQZeNym/74G6eQsMUoheDShnvMDrz5DI89gZ3brmI5dvnLmDGFLwvU/+otdHyex 5LB6DHRjmbxt9YQA+AheN+eNmVt1sf2phXwcMb0qGGUcDJKYzvx9hnb8LbFzIbbi+79t k6bWRCuAO2Ol0M+AusiHE1jbtY1toZkBmIijJt/dz+rlwXwOXNmbNvJIqa8KrUgWeYIA DK8TSXPXzztI8teAUNstGLRGWLcHUEOdGhNpVTuy9ahjeiRZBL0zXduIGM4YCairf24u CYyXw6UAta3uKX0XilzS/qmAy9xgk3psZijnHiHN6jzx+fgDmr4kH2rJC0GP1o4JrKze 
C/Ig== X-Gm-Message-State: AOJu0Yw/3pmb1QOTBBcT6gxSvjiMAejax1NCqVlJSfJ9ISNiBkHSPpeI F6iq4eBYwRMPE2JJ6rxhWfdyAXvN6sBIAahn0cYugvd7xuOYHj9tHT6dlFqFbYclFUqZahMm5SA t/4w6+ibzmJCySCUAP/JdSApm4RSUr3rSjkHgBXERK7iCW51u/L596vvnUUmb9adOIJE3 X-Google-Smtp-Source: AGHT+IFU/GV+GTLYvWyU+EtanrVljCkBMb7EQ3shCZf3Y1D/kZxhkUChzRsBbmbxTglnHLAq+QVldQ== X-Received: by 2002:a05:620a:29d6:b0:77f:51f:5e29 with SMTP id s22-20020a05620a29d600b0077f051f5e29mr6877880qkp.106.1702314691349; Mon, 11 Dec 2023 09:11:31 -0800 (PST) Received: from C02GC2QQMD6T.wifi.broadcom.net ([192.19.223.252]) by smtp.gmail.com with ESMTPSA id qz16-20020a05620a8c1000b0077efdfbd730sm3094581qkn.34.2023.12.11.09.11.30 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 11 Dec 2023 09:11:30 -0800 (PST) From: Ajit Khaparde To: dev@dpdk.org Subject: [PATCH v3 06/14] net/bnxt: add support for p7 device family Date: Mon, 11 Dec 2023 09:11:01 -0800 Message-Id: <20231211171109.89716-7-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.39.2 (Apple Git-143) In-Reply-To: <20231211171109.89716-1-ajit.khaparde@broadcom.com> References: <20231211171109.89716-1-ajit.khaparde@broadcom.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add support for the P7 device family. Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt.h | 14 ++++++++++++-- drivers/net/bnxt/bnxt_ethdev.c | 25 +++++++++++++++++++++++++ 2 files changed, 37 insertions(+), 2 deletions(-) diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index 6af668e92f..3a1d8a6ff6 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -72,6 +72,11 @@ #define BROADCOM_DEV_ID_58814 0xd814 #define BROADCOM_DEV_ID_58818 0xd818 #define BROADCOM_DEV_ID_58818_VF 0xd82e +#define BROADCOM_DEV_ID_57608 0x1760 +#define BROADCOM_DEV_ID_57604 0x1761 +#define BROADCOM_DEV_ID_57602 0x1762 +#define BROADCOM_DEV_ID_57601 0x1763 +#define BROADCOM_DEV_ID_5760X_VF 0x1819 #define BROADCOM_DEV_957508_N2100 0x5208 #define BROADCOM_DEV_957414_N225 0x4145 @@ -685,6 +690,7 @@ struct bnxt { #define BNXT_FLAG_FLOW_XSTATS_EN BIT(25) #define BNXT_FLAG_DFLT_MAC_SET BIT(26) #define BNXT_FLAG_GFID_ENABLE BIT(27) +#define BNXT_FLAG_CHIP_P7 BIT(30) #define BNXT_PF(bp) (!((bp)->flags & BNXT_FLAG_VF)) #define BNXT_VF(bp) ((bp)->flags & BNXT_FLAG_VF) #define BNXT_NPAR(bp) ((bp)->flags & BNXT_FLAG_NPAR_PF) @@ -694,12 +700,16 @@ struct bnxt { #define BNXT_USE_KONG(bp) ((bp)->flags & BNXT_FLAG_KONG_MB_EN) #define BNXT_VF_IS_TRUSTED(bp) ((bp)->flags & BNXT_FLAG_TRUSTED_VF_EN) #define BNXT_CHIP_P5(bp) ((bp)->flags & BNXT_FLAG_CHIP_P5) +#define BNXT_CHIP_P7(bp) ((bp)->flags & BNXT_FLAG_CHIP_P7) +#define BNXT_CHIP_P5_P7(bp) (BNXT_CHIP_P5(bp) || BNXT_CHIP_P7(bp)) #define BNXT_STINGRAY(bp) ((bp)->flags & BNXT_FLAG_STINGRAY) -#define BNXT_HAS_NQ(bp) BNXT_CHIP_P5(bp) -#define BNXT_HAS_RING_GRPS(bp) (!BNXT_CHIP_P5(bp)) +#define BNXT_HAS_NQ(bp) BNXT_CHIP_P5_P7(bp) +#define BNXT_HAS_RING_GRPS(bp) (!BNXT_CHIP_P5_P7(bp)) #define BNXT_FLOW_XSTATS_EN(bp) ((bp)->flags & BNXT_FLAG_FLOW_XSTATS_EN) #define BNXT_HAS_DFLT_MAC_SET(bp) ((bp)->flags & BNXT_FLAG_DFLT_MAC_SET) #define BNXT_GFID_ENABLED(bp) ((bp)->flags & BNXT_FLAG_GFID_ENABLE) +#define BNXT_P7_MAX_NQ_RING_CNT 512 +#define BNXT_P7_CQ_MAX_L2_ENT 8192 uint32_t flags2; #define BNXT_FLAGS2_PTP_TIMESYNC_ENABLED BIT(0) diff --git a/drivers/net/bnxt/bnxt_ethdev.c 
b/drivers/net/bnxt/bnxt_ethdev.c index f398838ea8..bd30e9fd3e 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -84,6 +84,11 @@ static const struct rte_pci_id bnxt_pci_id_map[] = { { RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_58814) }, { RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_58818) }, { RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_58818_VF) }, + { RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_57608) }, + { RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_57604) }, + { RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_57602) }, + { RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_57601) }, + { RTE_PCI_DEVICE(PCI_VENDOR_ID_BROADCOM, BROADCOM_DEV_ID_5760X_VF) }, { .vendor_id = 0, /* sentinel */ }, }; @@ -4681,6 +4686,7 @@ static bool bnxt_vf_pciid(uint16_t device_id) case BROADCOM_DEV_ID_57500_VF1: case BROADCOM_DEV_ID_57500_VF2: case BROADCOM_DEV_ID_58818_VF: + case BROADCOM_DEV_ID_5760X_VF: /* FALLTHROUGH */ return true; default: @@ -4706,7 +4712,23 @@ static bool bnxt_p5_device(uint16_t device_id) case BROADCOM_DEV_ID_58812: case BROADCOM_DEV_ID_58814: case BROADCOM_DEV_ID_58818: + /* FALLTHROUGH */ + return true; + default: + return false; + } +} + +/* Phase 7 device */ +static bool bnxt_p7_device(uint16_t device_id) +{ + switch (device_id) { case BROADCOM_DEV_ID_58818_VF: + case BROADCOM_DEV_ID_57608: + case BROADCOM_DEV_ID_57604: + case BROADCOM_DEV_ID_57602: + case BROADCOM_DEV_ID_57601: + case BROADCOM_DEV_ID_5760X_VF: /* FALLTHROUGH */ return true; default: @@ -5874,6 +5896,9 @@ static int bnxt_drv_init(struct rte_eth_dev *eth_dev) if (bnxt_p5_device(pci_dev->id.device_id)) bp->flags |= BNXT_FLAG_CHIP_P5; + if (bnxt_p7_device(pci_dev->id.device_id)) + bp->flags |= BNXT_FLAG_CHIP_P7; + if (pci_dev->id.device_id == BROADCOM_DEV_ID_58802 || pci_dev->id.device_id == BROADCOM_DEV_ID_58804 || pci_dev->id.device_id == BROADCOM_DEV_ID_58808 || From patchwork Mon Dec 11 17:11:02 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 135034 X-Patchwork-Delegate: ajit.khaparde@broadcom.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 56A23436C8; Mon, 11 Dec 2023 18:12:16 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 0480142DD0; Mon, 11 Dec 2023 18:11:35 +0100 (CET) Received: from mail-qk1-f173.google.com (mail-qk1-f173.google.com [209.85.222.173]) by mails.dpdk.org (Postfix) with ESMTP id B92AA42DC4 for ; Mon, 11 Dec 2023 18:11:33 +0100 (CET) Received: by mail-qk1-f173.google.com with SMTP id af79cd13be357-77f35b70944so293194785a.0 for ; Mon, 11 Dec 2023 09:11:33 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; t=1702314693; x=1702919493; darn=dpdk.org; h=mime-version:references:in-reply-to:message-id:date:subject:to:from :from:to:cc:subject:date:message-id:reply-to; bh=yxd9qYpKOf2wSABkXwCt/fuF65itsUpWo+boemx8E0g=; b=UCSMrgEBMUdDUzAONxnQaDnkNTjkW+uL0NtxAafKlnOHBj+/7OEuHIZPqzaP7K9uDu IGItwb4sgxo7vZ4SFt5gBgLhT1XfGpgmUv0XR7Rt8unTCpaGv9t7MAc/J4ZVmGXMmhYY 4QXH9Urqdy+PWLQzHkeC4pehoj3Z288e6zEUI= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1702314693; x=1702919493; 
h=mime-version:references:in-reply-to:message-id:date:subject:to:from :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=yxd9qYpKOf2wSABkXwCt/fuF65itsUpWo+boemx8E0g=; b=eUH57PRq45abfgin4gl9EkW+OU6zPvGSBJ4gAYgAR4bpA/D/0kl1dCsFibnVT4ET9Y AZqJ1iOkWMTZmRGDUCrCcxSUU4rwvKEs+FAwnYrtce3l5Wlfuqfg3HSr5fN2MSZuGMA8 v4I4WFEbgjM/JtJPC/xKp3Sbi7QZERJB4wgaIlr6Oi9MPIfcP3nqz3WhAtiux4MTjEpE OuXvTm/jx4EDPydo1tkmfkjzPb8Pvxju2Iqfq66hDU6wPyKJoI+jyelzo3TU6wYLKcxf U385HzAP+7vseh//pbw8e5A4U++WuISV13/daScY/EFGTpVmIQ3DwLJtl0mqUJk0YWck 0nWQ== X-Gm-Message-State: AOJu0YzMsOopttaZV0cKZpKKsKxQCwmo7VXPFMBP6IAzIbcI3sP4auY4 N3qWDHHbDaVpBnOU9hMbJVeoGxSufLYLia2AxNMCe0eLYZgd3xg7CRHUVyU2H03xNAkhOEKqAfO otWA3TYGl7ixNMRbYiE65bNDRCsn0koB5vEOPMNbIRU7nQth9trYGb5BgVK8Q6H9tUrqB X-Google-Smtp-Source: AGHT+IHdHRibInvIsKgegUMBic4lqBZiOtQ4faYVY/bIkbuN9GkY38+MksLTCYknXpEWxMb1MA7dKw== X-Received: by 2002:a05:620a:3:b0:77e:fba4:3a3f with SMTP id j3-20020a05620a000300b0077efba43a3fmr5992239qki.149.1702314692570; Mon, 11 Dec 2023 09:11:32 -0800 (PST) Received: from C02GC2QQMD6T.wifi.broadcom.net ([192.19.223.252]) by smtp.gmail.com with ESMTPSA id qz16-20020a05620a8c1000b0077efdfbd730sm3094581qkn.34.2023.12.11.09.11.31 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 11 Dec 2023 09:11:31 -0800 (PST) From: Ajit Khaparde To: dev@dpdk.org Subject: [PATCH v3 07/14] net/bnxt: refactor code to support P7 devices Date: Mon, 11 Dec 2023 09:11:02 -0800 Message-Id: <20231211171109.89716-8-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.39.2 (Apple Git-143) In-Reply-To: <20231211171109.89716-1-ajit.khaparde@broadcom.com> References: <20231211171109.89716-1-ajit.khaparde@broadcom.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Refactor code to support the P7 device family. The changes include support for RSS, VNIC allocation, TPA. Remove unnecessary check to disable vector mode support for some device families. Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt.h | 6 +++--- drivers/net/bnxt/bnxt_ethdev.c | 29 +++++++++-------------------- drivers/net/bnxt/bnxt_flow.c | 2 +- drivers/net/bnxt/bnxt_hwrm.c | 26 ++++++++++++++------------ drivers/net/bnxt/bnxt_ring.c | 6 +++--- drivers/net/bnxt/bnxt_rxq.c | 2 +- drivers/net/bnxt/bnxt_rxr.c | 6 +++--- drivers/net/bnxt/bnxt_vnic.c | 6 +++--- 8 files changed, 37 insertions(+), 46 deletions(-) diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index 3a1d8a6ff6..7439ecf4fa 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -107,11 +107,11 @@ #define TPA_MAX_SEGS 5 /* 32 segments in log2 units */ #define BNXT_TPA_MAX_AGGS(bp) \ - (BNXT_CHIP_P5(bp) ? TPA_MAX_AGGS_TH : \ + (BNXT_CHIP_P5_P7(bp) ? TPA_MAX_AGGS_TH : \ TPA_MAX_AGGS) #define BNXT_TPA_MAX_SEGS(bp) \ - (BNXT_CHIP_P5(bp) ? TPA_MAX_SEGS_TH : \ + (BNXT_CHIP_P5_P7(bp) ? TPA_MAX_SEGS_TH : \ TPA_MAX_SEGS) /* @@ -938,7 +938,7 @@ inline uint16_t bnxt_max_rings(struct bnxt *bp) * RSS table size in P5 is 512. * Cap max Rx rings to the same value for RSS. 
*/ - if (BNXT_CHIP_P5(bp)) + if (BNXT_CHIP_P5_P7(bp)) max_rx_rings = RTE_MIN(max_rx_rings, BNXT_RSS_TBL_SIZE_P5); max_tx_rings = RTE_MIN(max_tx_rings, max_rx_rings); diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index bd30e9fd3e..d79396b009 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -212,7 +212,7 @@ uint16_t bnxt_rss_ctxts(const struct bnxt *bp) unsigned int num_rss_rings = RTE_MIN(bp->rx_nr_rings, BNXT_RSS_TBL_SIZE_P5); - if (!BNXT_CHIP_P5(bp)) + if (!BNXT_CHIP_P5_P7(bp)) return 1; return RTE_ALIGN_MUL_CEIL(num_rss_rings, @@ -222,7 +222,7 @@ uint16_t bnxt_rss_ctxts(const struct bnxt *bp) uint16_t bnxt_rss_hash_tbl_size(const struct bnxt *bp) { - if (!BNXT_CHIP_P5(bp)) + if (!BNXT_CHIP_P5_P7(bp)) return HW_HASH_INDEX_SIZE; return bnxt_rss_ctxts(bp) * BNXT_RSS_ENTRIES_PER_CTX_P5; @@ -765,7 +765,7 @@ static int bnxt_start_nic(struct bnxt *bp) /* P5 does not support ring groups. * But we will use the array to save RSS context IDs. */ - if (BNXT_CHIP_P5(bp)) + if (BNXT_CHIP_P5_P7(bp)) bp->max_ring_grps = BNXT_MAX_RSS_CTXTS_P5; rc = bnxt_vnic_queue_db_init(bp); @@ -1247,12 +1247,6 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev) { struct bnxt *bp = eth_dev->data->dev_private; - /* Disable vector mode RX for Stingray2 for now */ - if (BNXT_CHIP_SR2(bp)) { - bp->flags &= ~BNXT_FLAG_RX_VECTOR_PKT_MODE; - return bnxt_recv_pkts; - } - #if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64) /* Vector mode receive cannot be enabled if scattered rx is in use. */ if (eth_dev->data->scattered_rx) @@ -1317,16 +1311,11 @@ bnxt_receive_function(struct rte_eth_dev *eth_dev) } static eth_tx_burst_t -bnxt_transmit_function(struct rte_eth_dev *eth_dev) +bnxt_transmit_function(__rte_unused struct rte_eth_dev *eth_dev) { - struct bnxt *bp = eth_dev->data->dev_private; - - /* Disable vector mode TX for Stingray2 for now */ - if (BNXT_CHIP_SR2(bp)) - return bnxt_xmit_pkts; - #if defined(RTE_ARCH_X86) || defined(RTE_ARCH_ARM64) uint64_t offloads = eth_dev->data->dev_conf.txmode.offloads; + struct bnxt *bp = eth_dev->data->dev_private; /* * Vector mode transmit can be enabled only if not using scatter rx @@ -2091,7 +2080,7 @@ static int bnxt_reta_update_op(struct rte_eth_dev *eth_dev, continue; rxq = bnxt_qid_to_rxq(bp, reta_conf[idx].reta[sft]); - if (BNXT_CHIP_P5(bp)) { + if (BNXT_CHIP_P5_P7(bp)) { vnic->rss_table[i * 2] = rxq->rx_ring->rx_ring_struct->fw_ring_id; vnic->rss_table[i * 2 + 1] = @@ -2138,7 +2127,7 @@ static int bnxt_reta_query_op(struct rte_eth_dev *eth_dev, if (reta_conf[idx].mask & (1ULL << sft)) { uint16_t qid; - if (BNXT_CHIP_P5(bp)) + if (BNXT_CHIP_P5_P7(bp)) qid = bnxt_rss_to_qid(bp, vnic->rss_table[i * 2]); else @@ -3224,7 +3213,7 @@ bnxt_rx_queue_count_op(void *rx_queue) break; case CMPL_BASE_TYPE_RX_TPA_END: - if (BNXT_CHIP_P5(rxq->bp)) { + if (BNXT_CHIP_P5_P7(rxq->bp)) { struct rx_tpa_v2_end_cmpl_hi *p5_tpa_end; p5_tpa_end = (void *)rxcmp; @@ -3335,7 +3324,7 @@ bnxt_rx_descriptor_status_op(void *rx_queue, uint16_t offset) if (desc == offset) return RTE_ETH_RX_DESC_DONE; - if (BNXT_CHIP_P5(rxq->bp)) { + if (BNXT_CHIP_P5_P7(rxq->bp)) { struct rx_tpa_v2_end_cmpl_hi *p5_tpa_end; p5_tpa_end = (void *)rxcmp; diff --git a/drivers/net/bnxt/bnxt_flow.c b/drivers/net/bnxt/bnxt_flow.c index 28dd5ae6cb..15f0e1b308 100644 --- a/drivers/net/bnxt/bnxt_flow.c +++ b/drivers/net/bnxt/bnxt_flow.c @@ -1199,7 +1199,7 @@ bnxt_vnic_rss_cfg_update(struct bnxt *bp, if (i == bp->rx_cp_nr_rings) return 0; - if (BNXT_CHIP_P5(bp)) { + if 
(BNXT_CHIP_P5_P7(bp)) { rxq = bp->rx_queues[idx]; vnic->rss_table[rss_idx * 2] = rxq->rx_ring->rx_ring_struct->fw_ring_id; diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c index fe9e629892..2d0a7a2731 100644 --- a/drivers/net/bnxt/bnxt_hwrm.c +++ b/drivers/net/bnxt/bnxt_hwrm.c @@ -853,7 +853,7 @@ static int __bnxt_hwrm_func_qcaps(struct bnxt *bp) bp->first_vf_id = rte_le_to_cpu_16(resp->first_vf_id); bp->max_rx_em_flows = rte_le_to_cpu_16(resp->max_rx_em_flows); bp->max_l2_ctx = rte_le_to_cpu_16(resp->max_l2_ctxs); - if (!BNXT_CHIP_P5(bp) && !bp->pdev->max_vfs) + if (!BNXT_CHIP_P5_P7(bp) && !bp->pdev->max_vfs) bp->max_l2_ctx += bp->max_rx_em_flows; if (bp->vnic_cap_flags & BNXT_VNIC_CAP_COS_CLASSIFY) bp->max_vnics = rte_le_to_cpu_16(BNXT_MAX_VNICS_COS_CLASSIFY); @@ -1187,7 +1187,7 @@ int bnxt_hwrm_func_resc_qcaps(struct bnxt *bp) * So use the value provided by func_qcaps. */ bp->max_l2_ctx = rte_le_to_cpu_16(resp->max_l2_ctxs); - if (!BNXT_CHIP_P5(bp) && !bp->pdev->max_vfs) + if (!BNXT_CHIP_P5_P7(bp) && !bp->pdev->max_vfs) bp->max_l2_ctx += bp->max_rx_em_flows; if (bp->vnic_cap_flags & BNXT_VNIC_CAP_COS_CLASSIFY) bp->max_vnics = rte_le_to_cpu_16(BNXT_MAX_VNICS_COS_CLASSIFY); @@ -1744,7 +1744,7 @@ int bnxt_hwrm_ring_alloc(struct bnxt *bp, req.ring_type = ring_type; req.cmpl_ring_id = rte_cpu_to_le_16(cmpl_ring_id); req.stat_ctx_id = rte_cpu_to_le_32(stats_ctx_id); - if (BNXT_CHIP_P5(bp)) { + if (BNXT_CHIP_P5_P7(bp)) { mb_pool = bp->rx_queues[0]->mb_pool; rx_buf_size = rte_pktmbuf_data_room_size(mb_pool) - RTE_PKTMBUF_HEADROOM; @@ -2118,7 +2118,7 @@ int bnxt_hwrm_vnic_cfg(struct bnxt *bp, struct bnxt_vnic_info *vnic) HWRM_PREP(&req, HWRM_VNIC_CFG, BNXT_USE_CHIMP_MB); - if (BNXT_CHIP_P5(bp)) { + if (BNXT_CHIP_P5_P7(bp)) { int dflt_rxq = vnic->start_grp_id; struct bnxt_rx_ring_info *rxr; struct bnxt_cp_ring_info *cpr; @@ -2304,7 +2304,7 @@ int bnxt_hwrm_vnic_ctx_free(struct bnxt *bp, struct bnxt_vnic_info *vnic) { int rc = 0; - if (BNXT_CHIP_P5(bp)) { + if (BNXT_CHIP_P5_P7(bp)) { int j; for (j = 0; j < vnic->num_lb_ctxts; j++) { @@ -2556,7 +2556,7 @@ int bnxt_hwrm_vnic_tpa_cfg(struct bnxt *bp, struct hwrm_vnic_tpa_cfg_input req = {.req_type = 0 }; struct hwrm_vnic_tpa_cfg_output *resp = bp->hwrm_cmd_resp_addr; - if (BNXT_CHIP_P5(bp) && !bp->max_tpa_v2) { + if ((BNXT_CHIP_P5(bp) || BNXT_CHIP_P7(bp)) && !bp->max_tpa_v2) { if (enable) PMD_DRV_LOG(ERR, "No HW support for LRO\n"); return -ENOTSUP; @@ -2584,6 +2584,9 @@ int bnxt_hwrm_vnic_tpa_cfg(struct bnxt *bp, req.max_aggs = rte_cpu_to_le_16(BNXT_TPA_MAX_AGGS(bp)); req.max_agg_segs = rte_cpu_to_le_16(BNXT_TPA_MAX_SEGS(bp)); req.min_agg_len = rte_cpu_to_le_32(512); + + if (BNXT_CHIP_P5_P7(bp)) + req.max_aggs = rte_cpu_to_le_16(bp->max_tpa_v2); } req.vnic_id = rte_cpu_to_le_16(vnic->fw_vnic_id); @@ -2836,7 +2839,7 @@ void bnxt_free_hwrm_rx_ring(struct bnxt *bp, int queue_index) ring = rxr ? rxr->ag_ring_struct : NULL; if (ring != NULL && cpr != NULL) { bnxt_hwrm_ring_free(bp, ring, - BNXT_CHIP_P5(bp) ? + BNXT_CHIP_P5_P7(bp) ? HWRM_RING_FREE_INPUT_RING_TYPE_RX_AGG : HWRM_RING_FREE_INPUT_RING_TYPE_RX, cpr->cp_ring_struct->fw_ring_id); @@ -3356,8 +3359,7 @@ int bnxt_set_hwrm_link_config(struct bnxt *bp, bool link_up) /* Get user requested autoneg setting */ autoneg = bnxt_check_eth_link_autoneg(dev_conf->link_speeds); - - if (BNXT_CHIP_P5(bp) && + if (BNXT_CHIP_P5_P7(bp) && dev_conf->link_speeds & RTE_ETH_LINK_SPEED_40G) { /* 40G is not supported as part of media auto detect. 
* The speed should be forced and autoneg disabled @@ -5348,7 +5350,7 @@ int bnxt_vnic_rss_configure(struct bnxt *bp, struct bnxt_vnic_info *vnic) if (!(vnic->rss_table && vnic->hash_type)) return 0; - if (BNXT_CHIP_P5(bp)) + if (BNXT_CHIP_P5_P7(bp)) return bnxt_vnic_rss_configure_p5(bp, vnic); /* @@ -5440,7 +5442,7 @@ int bnxt_hwrm_set_ring_coal(struct bnxt *bp, int rc; /* Set ring coalesce parameters only for 100G NICs */ - if (BNXT_CHIP_P5(bp)) { + if (BNXT_CHIP_P5_P7(bp)) { if (bnxt_hwrm_set_coal_params_p5(bp, &req)) return -1; } else if (bnxt_stratus_device(bp)) { @@ -5470,7 +5472,7 @@ int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp) int total_alloc_len; int rc, i, tqm_rings; - if (!BNXT_CHIP_P5(bp) || + if (!BNXT_CHIP_P5_P7(bp) || bp->hwrm_spec_code < HWRM_VERSION_1_9_2 || BNXT_VF(bp) || bp->ctx) diff --git a/drivers/net/bnxt/bnxt_ring.c b/drivers/net/bnxt/bnxt_ring.c index 6dacb1b37f..90cad6c9c6 100644 --- a/drivers/net/bnxt/bnxt_ring.c +++ b/drivers/net/bnxt/bnxt_ring.c @@ -57,7 +57,7 @@ int bnxt_alloc_ring_grps(struct bnxt *bp) /* P5 does not support ring groups. * But we will use the array to save RSS context IDs. */ - if (BNXT_CHIP_P5(bp)) { + if (BNXT_CHIP_P5_P7(bp)) { bp->max_ring_grps = BNXT_MAX_RSS_CTXTS_P5; } else if (bp->max_ring_grps < bp->rx_cp_nr_rings) { /* 1 ring is for default completion ring */ @@ -354,7 +354,7 @@ static void bnxt_set_db(struct bnxt *bp, uint32_t fid, uint32_t ring_mask) { - if (BNXT_CHIP_P5(bp)) { + if (BNXT_CHIP_P5_P7(bp)) { int db_offset = DB_PF_OFFSET; switch (ring_type) { case HWRM_RING_ALLOC_INPUT_RING_TYPE_TX: @@ -559,7 +559,7 @@ static int bnxt_alloc_rx_agg_ring(struct bnxt *bp, int queue_index) ring->fw_rx_ring_id = rxr->rx_ring_struct->fw_ring_id; - if (BNXT_CHIP_P5(bp)) { + if (BNXT_CHIP_P5_P7(bp)) { ring_type = HWRM_RING_ALLOC_INPUT_RING_TYPE_RX_AGG; hw_stats_ctx_id = cpr->hw_stats_ctx_id; } else { diff --git a/drivers/net/bnxt/bnxt_rxq.c b/drivers/net/bnxt/bnxt_rxq.c index 0d0b5e28e4..575e7f193f 100644 --- a/drivers/net/bnxt/bnxt_rxq.c +++ b/drivers/net/bnxt/bnxt_rxq.c @@ -600,7 +600,7 @@ int bnxt_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id) if (bp->rx_queues[i]->rx_started) active_queue_cnt++; - if (BNXT_CHIP_P5(bp)) { + if (BNXT_CHIP_P5_P7(bp)) { /* * For P5, we need to ensure that the VNIC default * receive ring corresponds to an active receive queue. 
diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c index 0cabfb583c..9d45065f28 100644 --- a/drivers/net/bnxt/bnxt_rxr.c +++ b/drivers/net/bnxt/bnxt_rxr.c @@ -334,7 +334,7 @@ static int bnxt_rx_pages(struct bnxt_rx_queue *rxq, uint16_t cp_cons, ag_cons; struct rx_pkt_cmpl *rxcmp; struct rte_mbuf *last = mbuf; - bool is_p5_tpa = tpa_info && BNXT_CHIP_P5(rxq->bp); + bool is_p5_tpa = tpa_info && BNXT_CHIP_P5_P7(rxq->bp); for (i = 0; i < agg_buf; i++) { struct rte_mbuf **ag_buf; @@ -395,7 +395,7 @@ static int bnxt_discard_rx(struct bnxt *bp, struct bnxt_cp_ring_info *cpr, } else if (cmp_type == RX_TPA_END_CMPL_TYPE_RX_TPA_END) { struct rx_tpa_end_cmpl *tpa_end = cmp; - if (BNXT_CHIP_P5(bp)) + if (BNXT_CHIP_P5_P7(bp)) return 0; agg_bufs = BNXT_TPA_END_AGG_BUFS(tpa_end); @@ -430,7 +430,7 @@ static inline struct rte_mbuf *bnxt_tpa_end( return NULL; } - if (BNXT_CHIP_P5(rxq->bp)) { + if (BNXT_CHIP_P5_P7(rxq->bp)) { struct rx_tpa_v2_end_cmpl *th_tpa_end; struct rx_tpa_v2_end_cmpl_hi *th_tpa_end1; diff --git a/drivers/net/bnxt/bnxt_vnic.c b/drivers/net/bnxt/bnxt_vnic.c index d40daf631e..bf93120d28 100644 --- a/drivers/net/bnxt/bnxt_vnic.c +++ b/drivers/net/bnxt/bnxt_vnic.c @@ -143,7 +143,7 @@ int bnxt_alloc_vnic_attributes(struct bnxt *bp, bool reconfig) entry_length = HW_HASH_KEY_SIZE; - if (BNXT_CHIP_P5(bp)) + if (BNXT_CHIP_P5_P7(bp)) rss_table_size = BNXT_RSS_TBL_SIZE_P5 * 2 * sizeof(*vnic->rss_table); else @@ -418,8 +418,8 @@ static int32_t bnxt_vnic_populate_rss_table(struct bnxt *bp, struct bnxt_vnic_info *vnic) { - /* RSS table population is different for p4 and p5 platforms */ - if (BNXT_CHIP_P5(bp)) + /* RSS table population is different for p4 and p5, p7 platforms */ + if (BNXT_CHIP_P5_P7(bp)) return bnxt_vnic_populate_rss_table_p5(bp, vnic); return bnxt_vnic_populate_rss_table_p4(bp, vnic); From patchwork Mon Dec 11 17:11:03 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 135035 X-Patchwork-Delegate: ajit.khaparde@broadcom.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 31EDD436C8; Mon, 11 Dec 2023 18:12:27 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 9BDB942DA3; Mon, 11 Dec 2023 18:11:37 +0100 (CET) Received: from mail-qk1-f182.google.com (mail-qk1-f182.google.com [209.85.222.182]) by mails.dpdk.org (Postfix) with ESMTP id 3A97C42DCB for ; Mon, 11 Dec 2023 18:11:35 +0100 (CET) Received: by mail-qk1-f182.google.com with SMTP id af79cd13be357-77f35b70944so293196385a.0 for ; Mon, 11 Dec 2023 09:11:35 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; t=1702314694; x=1702919494; darn=dpdk.org; h=mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:from:to:cc:subject:date:message-id:reply-to; bh=biU6MMwzVsAzS1y1+goBA5sqCzK1fluvQ8GYyKvGiaY=; b=DiqtHnYpjwb1CaGhqLcUUK5SmeF5nJ01OBuX631eIkv3zQwq65jNxck2nWsANH0qWz MbwFCsRymS9n+nvIZ+dz4A+0GIhDu5E1ZNd+QewAvvKYBzD3/4UnhFGZlY1GevN8Js+r lUfiHhuneUnihoxlU0SYc7EjplLt3UuovM6IU= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1702314694; x=1702919494; h=mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; 
bh=biU6MMwzVsAzS1y1+goBA5sqCzK1fluvQ8GYyKvGiaY=; b=DGOq7+vd/eWRSUBHA03paPW9UouH8qdp3UlQ9Uajv3Ne4H6aUNFvVdlTE5Sa49NO6b OaHlej0tqOeBVNpDZQ/qVrxkr957XlCf4JfCifpi5vF/aC/kRpj7sZGIu5AyU4+OnK7i 8neAuVUh22FXTCPDrPF/erASTWpnJZkUQrtQcaSK9z4Bh/UIcMXGJYHG7dzK5o+AV5vr SsH9gi6kWRSYYI8PxkLebPtMs2Jao22BTWcyndIEcxTXtWG6mXLKr2ylae5/WtoGpXQh XDm0xlflYpNUvQ4VXvyzHhCe+RL4EYk4c4AMOB5a2nKZgm68pXposJNiI974yJN/uBke afCQ== X-Gm-Message-State: AOJu0YxoSh4go3cfdgz+wO6ezddni6OTURH9Dksga2DSaB1UtZwtmQAZ FaQgiU76keUQf8IjRVDhhRpRYLn1wbxe/vQuhCJjOIuf1J3PT4tvBginKmYbmbIziAdmDy+gAXj 8yPcOqA869k7NXsevb5ZX9XOcPYY9pKfPCx6VZK2kYbHtP1cGFh/3Gd8vP+HYu78ztZIe X-Google-Smtp-Source: AGHT+IEOHxGDRJd7gb9mxHFpBs2/s+aq9q0CYfRZVBTkJYwJb3FP9DNBMOLDXUv5K8JEBhNeOjVCNQ== X-Received: by 2002:a05:620a:4506:b0:778:ba89:2fbd with SMTP id t6-20020a05620a450600b00778ba892fbdmr6209789qkp.36.1702314694236; Mon, 11 Dec 2023 09:11:34 -0800 (PST) Received: from C02GC2QQMD6T.wifi.broadcom.net ([192.19.223.252]) by smtp.gmail.com with ESMTPSA id qz16-20020a05620a8c1000b0077efdfbd730sm3094581qkn.34.2023.12.11.09.11.32 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 11 Dec 2023 09:11:33 -0800 (PST) From: Ajit Khaparde To: dev@dpdk.org Cc: stable@dpdk.org, Damodharam Ammepalli Subject: [PATCH v3 08/14] net/bnxt: fix array overflow Date: Mon, 11 Dec 2023 09:11:03 -0800 Message-Id: <20231211171109.89716-9-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.39.2 (Apple Git-143) In-Reply-To: <20231211171109.89716-1-ajit.khaparde@broadcom.com> References: <20231211171109.89716-1-ajit.khaparde@broadcom.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org In some cases the number of elements in the context memory array can exceed the MAX_CTX_PAGES and that can cause the static members ctx_pg_arr and ctx_dma_arr to overflow. Allocate them dynamically to prevent this overflow. 
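
As a rough illustration of the shape of the fix (a self-contained sketch, not the driver's exact code), the two page arrays become pointers sized from the page count computed at run time instead of fixed MAX_CTX_PAGES-sized members; plain calloc() stands in here for rte_zmalloc(), and the struct and function names are illustrative only:

    #include <stdint.h>
    #include <stdlib.h>

    struct ctx_pg_info {
            uint32_t entries;
            void **ctx_pg_arr;      /* was: void *ctx_pg_arr[MAX_CTX_PAGES]; */
            uint64_t *ctx_dma_arr;  /* was: rte_iova_t ctx_dma_arr[MAX_CTX_PAGES]; */
    };

    static int ctx_pg_alloc(struct ctx_pg_info *pg, uint32_t nr_pages)
    {
            /* Size both arrays from the number of pages actually needed. */
            pg->ctx_pg_arr = calloc(nr_pages, sizeof(*pg->ctx_pg_arr));
            pg->ctx_dma_arr = calloc(nr_pages, sizeof(*pg->ctx_dma_arr));
            if (pg->ctx_pg_arr == NULL || pg->ctx_dma_arr == NULL) {
                    free(pg->ctx_pg_arr);
                    free(pg->ctx_dma_arr);
                    return -1;      /* the driver returns -ENOMEM */
            }
            return 0;
    }
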
Cc: stable@dpdk.org Fixes: f8168ca0e690 ("net/bnxt: support thor controller") Signed-off-by: Ajit Khaparde Reviewed-by: Damodharam Ammepalli --- drivers/net/bnxt/bnxt.h | 4 ++-- drivers/net/bnxt/bnxt_ethdev.c | 42 +++++++++++++++++++++++++++------- 2 files changed, 36 insertions(+), 10 deletions(-) diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index 7439ecf4fa..3fbdf1ddcc 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -455,8 +455,8 @@ struct bnxt_ring_mem_info { struct bnxt_ctx_pg_info { uint32_t entries; - void *ctx_pg_arr[MAX_CTX_PAGES]; - rte_iova_t ctx_dma_arr[MAX_CTX_PAGES]; + void **ctx_pg_arr; + rte_iova_t *ctx_dma_arr; struct bnxt_ring_mem_info ring_mem; }; diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index d79396b009..95f9dd1aa1 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -4767,7 +4767,7 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp, { struct bnxt_ring_mem_info *rmem = &ctx_pg->ring_mem; const struct rte_memzone *mz = NULL; - char mz_name[RTE_MEMZONE_NAMESIZE]; + char name[RTE_MEMZONE_NAMESIZE]; rte_iova_t mz_phys_addr; uint64_t valid_bits = 0; uint32_t sz; @@ -4779,6 +4779,19 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp, rmem->nr_pages = RTE_ALIGN_MUL_CEIL(mem_size, BNXT_PAGE_SIZE) / BNXT_PAGE_SIZE; rmem->page_size = BNXT_PAGE_SIZE; + + snprintf(name, RTE_MEMZONE_NAMESIZE, "bnxt_ctx_pg_arr%s_%x_%d", + suffix, idx, bp->eth_dev->data->port_id); + ctx_pg->ctx_pg_arr = rte_zmalloc(name, sizeof(void *) * rmem->nr_pages, 0); + if (ctx_pg->ctx_pg_arr == NULL) + return -ENOMEM; + + snprintf(name, RTE_MEMZONE_NAMESIZE, "bnxt_ctx_dma_arr%s_%x_%d", + suffix, idx, bp->eth_dev->data->port_id); + ctx_pg->ctx_dma_arr = rte_zmalloc(name, sizeof(rte_iova_t *) * rmem->nr_pages, 0); + if (ctx_pg->ctx_dma_arr == NULL) + return -ENOMEM; + rmem->pg_arr = ctx_pg->ctx_pg_arr; rmem->dma_arr = ctx_pg->ctx_dma_arr; rmem->flags = BNXT_RMEM_VALID_PTE_FLAG; @@ -4786,13 +4799,13 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp, valid_bits = PTU_PTE_VALID; if (rmem->nr_pages > 1) { - snprintf(mz_name, RTE_MEMZONE_NAMESIZE, + snprintf(name, RTE_MEMZONE_NAMESIZE, "bnxt_ctx_pg_tbl%s_%x_%d", suffix, idx, bp->eth_dev->data->port_id); - mz_name[RTE_MEMZONE_NAMESIZE - 1] = 0; - mz = rte_memzone_lookup(mz_name); + name[RTE_MEMZONE_NAMESIZE - 1] = 0; + mz = rte_memzone_lookup(name); if (!mz) { - mz = rte_memzone_reserve_aligned(mz_name, + mz = rte_memzone_reserve_aligned(name, rmem->nr_pages * 8, bp->eth_dev->device->numa_node, RTE_MEMZONE_2MB | @@ -4811,11 +4824,11 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp, rmem->pg_tbl_mz = mz; } - snprintf(mz_name, RTE_MEMZONE_NAMESIZE, "bnxt_ctx_%s_%x_%d", + snprintf(name, RTE_MEMZONE_NAMESIZE, "bnxt_ctx_%s_%x_%d", suffix, idx, bp->eth_dev->data->port_id); - mz = rte_memzone_lookup(mz_name); + mz = rte_memzone_lookup(name); if (!mz) { - mz = rte_memzone_reserve_aligned(mz_name, + mz = rte_memzone_reserve_aligned(name, mem_size, bp->eth_dev->device->numa_node, RTE_MEMZONE_1GB | @@ -4861,6 +4874,17 @@ static void bnxt_free_ctx_mem(struct bnxt *bp) return; bp->ctx->flags &= ~BNXT_CTX_FLAG_INITED; + rte_free(bp->ctx->qp_mem.ctx_pg_arr); + rte_free(bp->ctx->srq_mem.ctx_pg_arr); + rte_free(bp->ctx->cq_mem.ctx_pg_arr); + rte_free(bp->ctx->vnic_mem.ctx_pg_arr); + rte_free(bp->ctx->stat_mem.ctx_pg_arr); + rte_free(bp->ctx->qp_mem.ctx_dma_arr); + rte_free(bp->ctx->srq_mem.ctx_dma_arr); + rte_free(bp->ctx->cq_mem.ctx_dma_arr); + 
rte_free(bp->ctx->vnic_mem.ctx_dma_arr); + rte_free(bp->ctx->stat_mem.ctx_dma_arr); + rte_memzone_free(bp->ctx->qp_mem.ring_mem.mz); rte_memzone_free(bp->ctx->srq_mem.ring_mem.mz); rte_memzone_free(bp->ctx->cq_mem.ring_mem.mz); @@ -4873,6 +4897,8 @@ static void bnxt_free_ctx_mem(struct bnxt *bp) rte_memzone_free(bp->ctx->stat_mem.ring_mem.pg_tbl_mz); for (i = 0; i < bp->ctx->tqm_fp_rings_count + 1; i++) { + rte_free(bp->ctx->tqm_mem[i]->ctx_pg_arr); + rte_free(bp->ctx->tqm_mem[i]->ctx_dma_arr); if (bp->ctx->tqm_mem[i]) rte_memzone_free(bp->ctx->tqm_mem[i]->ring_mem.mz); } From patchwork Mon Dec 11 17:11:04 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 135036 X-Patchwork-Delegate: ajit.khaparde@broadcom.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 885C6436C8; Mon, 11 Dec 2023 18:12:34 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id D3B5742DDD; Mon, 11 Dec 2023 18:11:38 +0100 (CET) Received: from mail-qk1-f169.google.com (mail-qk1-f169.google.com [209.85.222.169]) by mails.dpdk.org (Postfix) with ESMTP id 44F1042DA3 for ; Mon, 11 Dec 2023 18:11:37 +0100 (CET) Received: by mail-qk1-f169.google.com with SMTP id af79cd13be357-77f408d123bso146713485a.0 for ; Mon, 11 Dec 2023 09:11:37 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; t=1702314696; x=1702919496; darn=dpdk.org; h=mime-version:references:in-reply-to:message-id:date:subject:to:from :from:to:cc:subject:date:message-id:reply-to; bh=FL1grHKT+Xmx2EA/zQNDiok355sO1wixfqx0xp0u5C0=; b=fCki1XUcJ/ppPJerXCsFj9DiIVGCgEudE1ZboHVAKJoJpN45PU9VS35Tm4Faw2QCDb 8KpzSJPTg1htrPbZiaadTiXxVsDWBIy6JTF7KuLUvuq5w+UJBsVDx6djgueynqk0J6tW QvA9q9+qZNQPkwV1X6MgxqKI/JbQW8FrljXiM= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1702314696; x=1702919496; h=mime-version:references:in-reply-to:message-id:date:subject:to:from :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=FL1grHKT+Xmx2EA/zQNDiok355sO1wixfqx0xp0u5C0=; b=C4XDyoG8ZUZGwn7BuoYPMDuK6cvFU+mahM7i355FGY0kO4Xn3v6cKWzJ4Falm1Ogwl WTxWoNuoC/XHl6p4FM4E14ZlVAy0NyJ3Zshy/ZS0fmdSZ0rXPWzh4pQQj9499b98rsrG W0oxwTRPfQhpQ94+V7FpNZo6aP3NcusAnD8tPfVRh0ccHIQ/izwVpbRpYf4Ia+rYzBr5 JNTH3MEH4Dq7t2+KqZdGvkoVsnCOxGMP6z4p4E91XIKjwSrBk1vkKaN/vOo24abTF21u 4i4wyftROIoqfzVETRiDTqsPSiE9ChXzLiP8/SI6Pbv7gBXy2ol3da6bzZfo7gCx4OeN FCJw== X-Gm-Message-State: AOJu0YzSAVCUkjVgTOGDQvDlQnpwbKcmwAZ+vLiCKIvkvCFVQYpC93a4 GR0JHv0a5WBg3KDd15I5A1w3l7jN/0nplsdM2n46pJZsDbSobc+RaEgSJfuGuzjRSuWO4YW9pdc a4J1Ps/rUaHLdse4uG3mYz4a6hvRiCYQgUO1btW2ZSYbqTnBIlAGDwQ3Ro4BZNu2PguE+ X-Google-Smtp-Source: AGHT+IFUzC1dX8uoti9F9H3iCHFipWYaWAhYtxea26KPSCB5d0v7mWq4/0I3Nk2djpSzMEwOywbKsA== X-Received: by 2002:ae9:c00a:0:b0:77b:aa20:8dd with SMTP id u10-20020ae9c00a000000b0077baa2008ddmr2987023qkk.8.1702314695776; Mon, 11 Dec 2023 09:11:35 -0800 (PST) Received: from C02GC2QQMD6T.wifi.broadcom.net ([192.19.223.252]) by smtp.gmail.com with ESMTPSA id qz16-20020a05620a8c1000b0077efdfbd730sm3094581qkn.34.2023.12.11.09.11.34 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 11 Dec 2023 09:11:35 -0800 (PST) From: Ajit Khaparde To: dev@dpdk.org Subject: [PATCH v3 09/14] net/bnxt: add support for backing store v2 Date: Mon, 11 Dec 2023 09:11:04 
-0800 Message-Id: <20231211171109.89716-10-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.39.2 (Apple Git-143) In-Reply-To: <20231211171109.89716-1-ajit.khaparde@broadcom.com> References: <20231211171109.89716-1-ajit.khaparde@broadcom.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Add backing store v2 changes. The firmware supports the new backing store scheme for P7 and newer devices. To support this, the driver queries the different types of chip contexts the firmware supports and allocates the appropriate size of memory for the firmware and hardware to use. The code then goes ahead and frees up the memory during cleanup. Older P5 device family continues to support the version 1 of backing store. While the P4 device family does not need any backing store memory. Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt.h | 69 ++++++- drivers/net/bnxt/bnxt_ethdev.c | 177 ++++++++++++++++-- drivers/net/bnxt/bnxt_hwrm.c | 319 +++++++++++++++++++++++++++++++-- drivers/net/bnxt/bnxt_hwrm.h | 8 + drivers/net/bnxt/bnxt_util.c | 10 ++ drivers/net/bnxt/bnxt_util.h | 1 + 6 files changed, 545 insertions(+), 39 deletions(-) diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index 3fbdf1ddcc..68c4778dc3 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -81,6 +81,11 @@ #define BROADCOM_DEV_957508_N2100 0x5208 #define BROADCOM_DEV_957414_N225 0x4145 +#define HWRM_SPEC_CODE_1_8_3 0x10803 +#define HWRM_VERSION_1_9_1 0x10901 +#define HWRM_VERSION_1_9_2 0x10903 +#define HWRM_VERSION_1_10_2_13 0x10a020d + #define BNXT_MAX_MTU 9574 #define BNXT_NUM_VLANS 2 #define BNXT_MAX_PKT_LEN (BNXT_MAX_MTU + RTE_ETHER_HDR_LEN +\ @@ -430,16 +435,26 @@ struct bnxt_coal { #define BNXT_PAGE_SIZE (1 << BNXT_PAGE_SHFT) #define MAX_CTX_PAGES (BNXT_PAGE_SIZE / 8) +#define BNXT_RTE_MEMZONE_FLAG (RTE_MEMZONE_1GB | RTE_MEMZONE_IOVA_CONTIG) + #define PTU_PTE_VALID 0x1UL #define PTU_PTE_LAST 0x2UL #define PTU_PTE_NEXT_TO_LAST 0x4UL +#define BNXT_CTX_MIN 1 +#define BNXT_CTX_INV 0xffff + +#define BNXT_CTX_INIT_VALID(flags) \ + ((flags) & \ + HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_FLAGS_ENABLE_CTX_KIND_INIT) + struct bnxt_ring_mem_info { int nr_pages; int page_size; uint32_t flags; #define BNXT_RMEM_VALID_PTE_FLAG 1 #define BNXT_RMEM_RING_PTE_FLAG 2 +#define BNXT_RMEM_USE_FULL_PAGE_FLAG 4 void **pg_arr; rte_iova_t *dma_arr; @@ -460,7 +475,50 @@ struct bnxt_ctx_pg_info { struct bnxt_ring_mem_info ring_mem; }; +struct bnxt_ctx_mem { + uint16_t type; + uint16_t entry_size; + uint32_t flags; +#define BNXT_CTX_MEM_TYPE_VALID \ + HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_FLAGS_TYPE_VALID + uint32_t instance_bmap; + uint8_t init_value; + uint8_t entry_multiple; + uint16_t init_offset; +#define BNXT_CTX_INIT_INVALID_OFFSET 0xffff + uint32_t max_entries; + uint32_t min_entries; + uint8_t last:1; + uint8_t split_entry_cnt; +#define BNXT_MAX_SPLIT_ENTRY 4 + union { + struct { + uint32_t qp_l2_entries; + uint32_t qp_qp1_entries; + uint32_t qp_fast_qpmd_entries; + }; + uint32_t srq_l2_entries; + uint32_t cq_l2_entries; + uint32_t vnic_entries; + struct { + uint32_t mrav_av_entries; + uint32_t mrav_num_entries_units; + }; + uint32_t split[BNXT_MAX_SPLIT_ENTRY]; + }; + struct bnxt_ctx_pg_info *pg_info; +}; + +#define BNXT_CTX_FLAG_INITED 0x01 + struct bnxt_ctx_mem_info { + struct bnxt_ctx_mem *ctx_arr; + uint32_t supported_types; 
+ uint32_t flags; + uint16_t types; + uint8_t tqm_fp_rings_count; + + /* The following are used for V1 */ uint32_t qp_max_entries; uint16_t qp_min_qp1_entries; uint16_t qp_max_l2_entries; @@ -484,10 +542,6 @@ struct bnxt_ctx_mem_info { uint16_t tim_entry_size; uint32_t tim_max_entries; uint8_t tqm_entries_multiple; - uint8_t tqm_fp_rings_count; - - uint32_t flags; -#define BNXT_CTX_FLAG_INITED 0x01 struct bnxt_ctx_pg_info qp_mem; struct bnxt_ctx_pg_info srq_mem; @@ -739,6 +793,13 @@ struct bnxt { #define BNXT_FW_CAP_TRUFLOW_EN BIT(8) #define BNXT_FW_CAP_VLAN_TX_INSERT BIT(9) #define BNXT_FW_CAP_RX_ALL_PKT_TS BIT(10) +#define BNXT_FW_CAP_BACKING_STORE_V2 BIT(12) +#define BNXT_FW_BACKING_STORE_V2_EN(bp) \ + ((bp)->fw_cap & BNXT_FW_CAP_BACKING_STORE_V2) +#define BNXT_FW_BACKING_STORE_V1_EN(bp) \ + (BNXT_CHIP_P5_P7((bp)) && \ + (bp)->hwrm_spec_code >= HWRM_VERSION_1_9_2 && \ + !BNXT_VF((bp))) #define BNXT_TRUFLOW_EN(bp) ((bp)->fw_cap & BNXT_FW_CAP_TRUFLOW_EN &&\ (bp)->app_id != 0xFF) diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index 95f9dd1aa1..004b2df4f4 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -4759,8 +4759,26 @@ static int bnxt_map_pci_bars(struct rte_eth_dev *eth_dev) return 0; } +static void bnxt_init_ctxm_mem(struct bnxt_ctx_mem *ctxm, void *p, int len) +{ + uint8_t init_val = ctxm->init_value; + uint16_t offset = ctxm->init_offset; + uint8_t *p2 = p; + int i; + + if (!init_val) + return; + if (offset == BNXT_CTX_INIT_INVALID_OFFSET) { + memset(p, init_val, len); + return; + } + for (i = 0; i < len; i += ctxm->entry_size) + *(p2 + i + offset) = init_val; +} + static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp, struct bnxt_ctx_pg_info *ctx_pg, + struct bnxt_ctx_mem *ctxm, uint32_t mem_size, const char *suffix, uint16_t idx) @@ -4776,8 +4794,8 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp, if (!mem_size) return 0; - rmem->nr_pages = RTE_ALIGN_MUL_CEIL(mem_size, BNXT_PAGE_SIZE) / - BNXT_PAGE_SIZE; + rmem->nr_pages = + RTE_ALIGN_MUL_CEIL(mem_size, BNXT_PAGE_SIZE) / BNXT_PAGE_SIZE; rmem->page_size = BNXT_PAGE_SIZE; snprintf(name, RTE_MEMZONE_NAMESIZE, "bnxt_ctx_pg_arr%s_%x_%d", @@ -4794,13 +4812,13 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp, rmem->pg_arr = ctx_pg->ctx_pg_arr; rmem->dma_arr = ctx_pg->ctx_dma_arr; - rmem->flags = BNXT_RMEM_VALID_PTE_FLAG; + rmem->flags = BNXT_RMEM_VALID_PTE_FLAG | BNXT_RMEM_USE_FULL_PAGE_FLAG; valid_bits = PTU_PTE_VALID; if (rmem->nr_pages > 1) { snprintf(name, RTE_MEMZONE_NAMESIZE, - "bnxt_ctx_pg_tbl%s_%x_%d", + "bnxt_ctxpgtbl%s_%x_%d", suffix, idx, bp->eth_dev->data->port_id); name[RTE_MEMZONE_NAMESIZE - 1] = 0; mz = rte_memzone_lookup(name); @@ -4816,9 +4834,11 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp, return -ENOMEM; } - memset(mz->addr, 0, mz->len); + memset(mz->addr, 0xff, mz->len); mz_phys_addr = mz->iova; + if (ctxm != NULL) + bnxt_init_ctxm_mem(ctxm, mz->addr, mz->len); rmem->pg_tbl = mz->addr; rmem->pg_tbl_map = mz_phys_addr; rmem->pg_tbl_mz = mz; @@ -4839,9 +4859,11 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp, return -ENOMEM; } - memset(mz->addr, 0, mz->len); + memset(mz->addr, 0xff, mz->len); mz_phys_addr = mz->iova; + if (ctxm != NULL) + bnxt_init_ctxm_mem(ctxm, mz->addr, mz->len); for (sz = 0, i = 0; sz < mem_size; sz += BNXT_PAGE_SIZE, i++) { rmem->pg_arr[i] = ((char *)mz->addr) + sz; rmem->dma_arr[i] = mz_phys_addr + sz; @@ -4866,6 +4888,34 @@ static int bnxt_alloc_ctx_mem_blk(struct bnxt *bp, return 0; } +static void 
bnxt_free_ctx_mem_v2(struct bnxt *bp) +{ + uint16_t type; + + for (type = 0; type < bp->ctx->types; type++) { + struct bnxt_ctx_mem *ctxm = &bp->ctx->ctx_arr[type]; + struct bnxt_ctx_pg_info *ctx_pg = ctxm->pg_info; + int i, n = 1; + + if (!ctx_pg) + continue; + if (ctxm->instance_bmap) + n = hweight32(ctxm->instance_bmap); + + for (i = 0; i < n; i++) { + rte_free(ctx_pg[i].ctx_pg_arr); + rte_free(ctx_pg[i].ctx_dma_arr); + rte_memzone_free(ctx_pg[i].ring_mem.mz); + rte_memzone_free(ctx_pg[i].ring_mem.pg_tbl_mz); + } + + rte_free(ctx_pg); + ctxm->pg_info = NULL; + } + rte_free(bp->ctx->ctx_arr); + bp->ctx->ctx_arr = NULL; +} + static void bnxt_free_ctx_mem(struct bnxt *bp) { int i; @@ -4874,6 +4924,12 @@ static void bnxt_free_ctx_mem(struct bnxt *bp) return; bp->ctx->flags &= ~BNXT_CTX_FLAG_INITED; + + if (BNXT_FW_BACKING_STORE_V2_EN(bp)) { + bnxt_free_ctx_mem_v2(bp); + goto free_ctx; + } + rte_free(bp->ctx->qp_mem.ctx_pg_arr); rte_free(bp->ctx->srq_mem.ctx_pg_arr); rte_free(bp->ctx->cq_mem.ctx_pg_arr); @@ -4903,6 +4959,7 @@ static void bnxt_free_ctx_mem(struct bnxt *bp) rte_memzone_free(bp->ctx->tqm_mem[i]->ring_mem.mz); } +free_ctx: rte_free(bp->ctx); bp->ctx = NULL; } @@ -4921,28 +4978,113 @@ static void bnxt_free_ctx_mem(struct bnxt *bp) #define clamp_t(type, _x, min, max) min_t(type, max_t(type, _x, min), max) +int bnxt_alloc_ctx_pg_tbls(struct bnxt *bp) +{ + struct bnxt_ctx_mem_info *ctx = bp->ctx; + struct bnxt_ctx_mem *ctx2; + uint16_t type; + int rc = 0; + + ctx2 = &ctx->ctx_arr[0]; + for (type = 0; type < ctx->types && rc == 0; type++) { + struct bnxt_ctx_mem *ctxm = &ctx->ctx_arr[type]; + struct bnxt_ctx_pg_info *ctx_pg; + uint32_t entries, mem_size; + int w = 1; + int i; + + if (ctxm->entry_size == 0) + continue; + + ctx_pg = ctxm->pg_info; + + if (ctxm->instance_bmap) + w = hweight32(ctxm->instance_bmap); + + for (i = 0; i < w && rc == 0; i++) { + char name[RTE_MEMZONE_NAMESIZE] = {0}; + + sprintf(name, "_%d_%d", i, type); + + if (ctxm->entry_multiple) + entries = bnxt_roundup(ctxm->max_entries, + ctxm->entry_multiple); + else + entries = ctxm->max_entries; + + if (ctxm->type == HWRM_FUNC_BACKING_STORE_CFG_V2_INPUT_TYPE_CQ) + entries = ctxm->cq_l2_entries; + else if (ctxm->type == HWRM_FUNC_BACKING_STORE_CFG_V2_INPUT_TYPE_QP) + entries = ctxm->qp_l2_entries; + else if (ctxm->type == HWRM_FUNC_BACKING_STORE_CFG_V2_INPUT_TYPE_MRAV) + entries = ctxm->mrav_av_entries; + else if (ctxm->type == HWRM_FUNC_BACKING_STORE_CFG_V2_INPUT_TYPE_TIM) + entries = ctx2->qp_l2_entries; + entries = clamp_t(uint32_t, entries, ctxm->min_entries, + ctxm->max_entries); + ctx_pg[i].entries = entries; + mem_size = ctxm->entry_size * entries; + PMD_DRV_LOG(DEBUG, + "Type:0x%x instance:%d entries:%d size:%d\n", + ctxm->type, i, ctx_pg[i].entries, mem_size); + rc = bnxt_alloc_ctx_mem_blk(bp, &ctx_pg[i], + ctxm->init_value ? 
ctxm : NULL, + mem_size, name, i); + } + } + + return rc; +} + int bnxt_alloc_ctx_mem(struct bnxt *bp) { struct bnxt_ctx_pg_info *ctx_pg; struct bnxt_ctx_mem_info *ctx; uint32_t mem_size, ena, entries; + int types = BNXT_CTX_MIN; uint32_t entries_sp, min; - int i, rc; + int i, rc = 0; + + if (!BNXT_FW_BACKING_STORE_V1_EN(bp) && + !BNXT_FW_BACKING_STORE_V2_EN(bp)) + return rc; + + if (BNXT_FW_BACKING_STORE_V2_EN(bp)) { + types = bnxt_hwrm_func_backing_store_types_count(bp); + if (types <= 0) + return types; + } + + rc = bnxt_hwrm_func_backing_store_ctx_alloc(bp, types); + if (rc != 0) + return rc; + + if (bp->ctx->flags & BNXT_CTX_FLAG_INITED) + return 0; + + ctx = bp->ctx; + if (BNXT_FW_BACKING_STORE_V2_EN(bp)) { + rc = bnxt_hwrm_func_backing_store_qcaps_v2(bp); + + for (i = 0 ; i < bp->ctx->types && rc == 0; i++) { + struct bnxt_ctx_mem *ctxm = &ctx->ctx_arr[i]; + + rc = bnxt_hwrm_func_backing_store_cfg_v2(bp, ctxm); + } + goto done; + } rc = bnxt_hwrm_func_backing_store_qcaps(bp); if (rc) { PMD_DRV_LOG(ERR, "Query context mem capability failed\n"); return rc; } - ctx = bp->ctx; - if (!ctx || (ctx->flags & BNXT_CTX_FLAG_INITED)) - return 0; ctx_pg = &ctx->qp_mem; ctx_pg->entries = ctx->qp_min_qp1_entries + ctx->qp_max_l2_entries; if (ctx->qp_entry_size) { mem_size = ctx->qp_entry_size * ctx_pg->entries; - rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, "qp_mem", 0); + rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, NULL, mem_size, "qp_mem", 0); if (rc) return rc; } @@ -4951,7 +5093,7 @@ int bnxt_alloc_ctx_mem(struct bnxt *bp) ctx_pg->entries = ctx->srq_max_l2_entries; if (ctx->srq_entry_size) { mem_size = ctx->srq_entry_size * ctx_pg->entries; - rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, "srq_mem", 0); + rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, NULL, mem_size, "srq_mem", 0); if (rc) return rc; } @@ -4960,7 +5102,7 @@ int bnxt_alloc_ctx_mem(struct bnxt *bp) ctx_pg->entries = ctx->cq_max_l2_entries; if (ctx->cq_entry_size) { mem_size = ctx->cq_entry_size * ctx_pg->entries; - rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, "cq_mem", 0); + rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, NULL, mem_size, "cq_mem", 0); if (rc) return rc; } @@ -4970,7 +5112,7 @@ int bnxt_alloc_ctx_mem(struct bnxt *bp) ctx->vnic_max_ring_table_entries; if (ctx->vnic_entry_size) { mem_size = ctx->vnic_entry_size * ctx_pg->entries; - rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, "vnic_mem", 0); + rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, NULL, mem_size, "vnic_mem", 0); if (rc) return rc; } @@ -4979,7 +5121,7 @@ int bnxt_alloc_ctx_mem(struct bnxt *bp) ctx_pg->entries = ctx->stat_max_entries; if (ctx->stat_entry_size) { mem_size = ctx->stat_entry_size * ctx_pg->entries; - rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, "stat_mem", 0); + rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, NULL, mem_size, "stat_mem", 0); if (rc) return rc; } @@ -5003,8 +5145,8 @@ int bnxt_alloc_ctx_mem(struct bnxt *bp) ctx_pg->entries = i ? 
entries : entries_sp; if (ctx->tqm_entry_size) { mem_size = ctx->tqm_entry_size * ctx_pg->entries; - rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, mem_size, - "tqm_mem", i); + rc = bnxt_alloc_ctx_mem_blk(bp, ctx_pg, NULL, + mem_size, "tqm_mem", i); if (rc) return rc; } @@ -5016,6 +5158,7 @@ int bnxt_alloc_ctx_mem(struct bnxt *bp) ena |= FUNC_BACKING_STORE_CFG_INPUT_DFLT_ENABLES; rc = bnxt_hwrm_func_backing_store_cfg(bp, ena); +done: if (rc) PMD_DRV_LOG(ERR, "Failed to configure context mem: rc = %d\n", rc); diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c index 2d0a7a2731..a2182af036 100644 --- a/drivers/net/bnxt/bnxt_hwrm.c +++ b/drivers/net/bnxt/bnxt_hwrm.c @@ -24,10 +24,6 @@ #include "bnxt_vnic.h" #include "hsi_struct_def_dpdk.h" -#define HWRM_SPEC_CODE_1_8_3 0x10803 -#define HWRM_VERSION_1_9_1 0x10901 -#define HWRM_VERSION_1_9_2 0x10903 -#define HWRM_VERSION_1_10_2_13 0x10a020d struct bnxt_plcmodes_cfg { uint32_t flags; uint16_t jumbo_thresh; @@ -35,6 +31,43 @@ struct bnxt_plcmodes_cfg { uint16_t hds_threshold; }; +const char *bnxt_backing_store_types[] = { + "Queue pair", + "Shared receive queue", + "Completion queue", + "Virtual NIC", + "Statistic context", + "Slow-path TQM ring", + "Fast-path TQM ring", + "Unused", + "Unused", + "Unused", + "Unused", + "Unused", + "Unused", + "Unused", + "MR and MAV Context", + "TIM", + "Unused", + "Unused", + "Unused", + "Tx key context", + "Rx key context", + "Mid-path TQM ring", + "SQ Doorbell shadow region", + "RQ Doorbell shadow region", + "SRQ Doorbell shadow region", + "CQ Doorbell shadow region", + "QUIC Tx key context", + "QUIC Rx key context", + "Invalid type", + "Invalid type", + "Invalid type", + "Invalid type", + "Invalid type", + "Invalid type" +}; + static int page_getenum(size_t size) { if (size <= 1 << 4) @@ -894,6 +927,11 @@ static int __bnxt_hwrm_func_qcaps(struct bnxt *bp) if (flags & HWRM_FUNC_QCAPS_OUTPUT_FLAGS_LINK_ADMIN_STATUS_SUPPORTED) bp->fw_cap |= BNXT_FW_CAP_LINK_ADMIN; + if (flags & HWRM_FUNC_QCAPS_OUTPUT_FLAGS_EXT_BS_V2_SUPPORTED) { + PMD_DRV_LOG(DEBUG, "Backing store v2 supported\n"); + if (BNXT_CHIP_P7(bp)) + bp->fw_cap |= BNXT_FW_CAP_BACKING_STORE_V2; + } if (!(flags & HWRM_FUNC_QCAPS_OUTPUT_FLAGS_VLAN_ACCELERATION_TX_DISABLED)) { bp->fw_cap |= BNXT_FW_CAP_VLAN_TX_INSERT; PMD_DRV_LOG(DEBUG, "VLAN acceleration for TX is enabled\n"); @@ -5461,7 +5499,194 @@ int bnxt_hwrm_set_ring_coal(struct bnxt *bp, return 0; } -#define BNXT_RTE_MEMZONE_FLAG (RTE_MEMZONE_1GB | RTE_MEMZONE_IOVA_CONTIG) +static void bnxt_init_ctx_initializer(struct bnxt_ctx_mem *ctxm, + uint8_t init_val, + uint8_t init_offset, + bool init_mask_set) +{ + ctxm->init_value = init_val; + ctxm->init_offset = BNXT_CTX_INIT_INVALID_OFFSET; + if (init_mask_set) + ctxm->init_offset = init_offset * 4; + else + ctxm->init_value = 0; +} + +static int bnxt_alloc_all_ctx_pg_info(struct bnxt *bp) +{ + struct bnxt_ctx_mem_info *ctx = bp->ctx; + char name[RTE_MEMZONE_NAMESIZE]; + uint16_t type; + + for (type = 0; type < ctx->types; type++) { + struct bnxt_ctx_mem *ctxm = &ctx->ctx_arr[type]; + int n = 1; + + if (!ctxm->max_entries || ctxm->pg_info) + continue; + + if (ctxm->instance_bmap) + n = hweight32(ctxm->instance_bmap); + + sprintf(name, "bnxt_ctx_pgmem_%d_%d", + bp->eth_dev->data->port_id, type); + ctxm->pg_info = rte_malloc(name, sizeof(*ctxm->pg_info) * n, + RTE_CACHE_LINE_SIZE); + if (!ctxm->pg_info) + return -ENOMEM; + } + return 0; +} + +static void bnxt_init_ctx_v2_driver_managed(struct bnxt *bp __rte_unused, + struct bnxt_ctx_mem *ctxm) 
+{ + switch (ctxm->type) { + case HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_TYPE_SQ_DB_SHADOW: + case HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_TYPE_RQ_DB_SHADOW: + case HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_TYPE_SRQ_DB_SHADOW: + case HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_TYPE_CQ_DB_SHADOW: + /* FALLTHROUGH */ + ctxm->entry_size = 0; + ctxm->min_entries = 1; + ctxm->max_entries = 1; + break; + } +} + +int bnxt_hwrm_func_backing_store_qcaps_v2(struct bnxt *bp) +{ + struct hwrm_func_backing_store_qcaps_v2_input req = {0}; + struct hwrm_func_backing_store_qcaps_v2_output *resp = + bp->hwrm_cmd_resp_addr; + struct bnxt_ctx_mem_info *ctx = bp->ctx; + uint16_t last_valid_type = BNXT_CTX_INV; + uint16_t last_valid_idx = 0; + uint16_t types, type; + int rc; + + for (types = 0, type = 0; types < bp->ctx->types && type != BNXT_CTX_INV; types++) { + struct bnxt_ctx_mem *ctxm = &bp->ctx->ctx_arr[types]; + uint8_t init_val, init_off, i; + uint32_t *p; + uint32_t flags; + + HWRM_PREP(&req, HWRM_FUNC_BACKING_STORE_QCAPS_V2, BNXT_USE_CHIMP_MB); + req.type = rte_cpu_to_le_16(type); + rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB); + HWRM_CHECK_RESULT(); + + flags = rte_le_to_cpu_32(resp->flags); + type = rte_le_to_cpu_16(resp->next_valid_type); + if (!(flags & HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_FLAGS_TYPE_VALID)) + goto next; + + ctxm->type = rte_le_to_cpu_16(resp->type); + + ctxm->flags = flags; + if (flags & + HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_FLAGS_DRIVER_MANAGED_MEMORY) { + bnxt_init_ctx_v2_driver_managed(bp, ctxm); + goto next; + } + ctxm->entry_size = rte_le_to_cpu_16(resp->entry_size); + + if (ctxm->entry_size == 0) + goto next; + + ctxm->instance_bmap = rte_le_to_cpu_32(resp->instance_bit_map); + ctxm->entry_multiple = resp->entry_multiple; + ctxm->max_entries = rte_le_to_cpu_32(resp->max_num_entries); + ctxm->min_entries = rte_le_to_cpu_32(resp->min_num_entries); + init_val = resp->ctx_init_value; + init_off = resp->ctx_init_offset; + bnxt_init_ctx_initializer(ctxm, init_val, init_off, + BNXT_CTX_INIT_VALID(flags)); + ctxm->split_entry_cnt = RTE_MIN(resp->subtype_valid_cnt, + BNXT_MAX_SPLIT_ENTRY); + for (i = 0, p = &resp->split_entry_0; i < ctxm->split_entry_cnt; + i++, p++) + ctxm->split[i] = rte_le_to_cpu_32(*p); + + PMD_DRV_LOG(DEBUG, + "type:%s size:%d multiple:%d max:%d min:%d split:%d init_val:%d init_off:%d init:%d bmap:0x%x\n", + bnxt_backing_store_types[ctxm->type], ctxm->entry_size, + ctxm->entry_multiple, ctxm->max_entries, ctxm->min_entries, + ctxm->split_entry_cnt, init_val, init_off, + BNXT_CTX_INIT_VALID(flags), ctxm->instance_bmap); + last_valid_type = ctxm->type; + last_valid_idx = types; +next: + HWRM_UNLOCK(); + } + ctx->ctx_arr[last_valid_idx].last = true; + PMD_DRV_LOG(DEBUG, "Last valid type 0x%x\n", last_valid_type); + + rc = bnxt_alloc_all_ctx_pg_info(bp); + if (rc == 0) + rc = bnxt_alloc_ctx_pg_tbls(bp); + return rc; +} + +int bnxt_hwrm_func_backing_store_types_count(struct bnxt *bp) +{ + struct hwrm_func_backing_store_qcaps_v2_input req = {0}; + struct hwrm_func_backing_store_qcaps_v2_output *resp = + bp->hwrm_cmd_resp_addr; + uint16_t type = 0; + int types = 0; + int rc; + + /* Calculate number of valid context types */ + do { + uint32_t flags; + + HWRM_PREP(&req, HWRM_FUNC_BACKING_STORE_QCAPS_V2, BNXT_USE_CHIMP_MB); + req.type = rte_cpu_to_le_16(type); + rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB); + HWRM_CHECK_RESULT(); + + flags = rte_le_to_cpu_32(resp->flags); + type = 
rte_le_to_cpu_16(resp->next_valid_type); + HWRM_UNLOCK(); + + if (flags & HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_FLAGS_TYPE_VALID) { + PMD_DRV_LOG(DEBUG, "Valid types 0x%x - %s\n", + req.type, bnxt_backing_store_types[req.type]); + types++; + } + } while (type != HWRM_FUNC_BACKING_STORE_QCAPS_V2_OUTPUT_TYPE_INVALID); + PMD_DRV_LOG(DEBUG, "Number of valid types %d\n", types); + + return types; +} + +int bnxt_hwrm_func_backing_store_ctx_alloc(struct bnxt *bp, uint16_t types) +{ + int alloc_len = sizeof(struct bnxt_ctx_mem_info); + + if (!BNXT_CHIP_P5_P7(bp) || + bp->hwrm_spec_code < HWRM_VERSION_1_9_2 || + BNXT_VF(bp) || + bp->ctx) + return 0; + + bp->ctx = rte_zmalloc("bnxt_ctx_mem", alloc_len, + RTE_CACHE_LINE_SIZE); + if (bp->ctx == NULL) + return -ENOMEM; + + alloc_len = sizeof(struct bnxt_ctx_mem) * types; + bp->ctx->ctx_arr = rte_zmalloc("bnxt_ctx_mem_arr", + alloc_len, + RTE_CACHE_LINE_SIZE); + if (bp->ctx->ctx_arr == NULL) + return -ENOMEM; + + bp->ctx->types = types; + return 0; +} + int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp) { struct hwrm_func_backing_store_qcaps_input req = {0}; @@ -5469,27 +5694,19 @@ int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp) bp->hwrm_cmd_resp_addr; struct bnxt_ctx_pg_info *ctx_pg; struct bnxt_ctx_mem_info *ctx; - int total_alloc_len; int rc, i, tqm_rings; if (!BNXT_CHIP_P5_P7(bp) || bp->hwrm_spec_code < HWRM_VERSION_1_9_2 || BNXT_VF(bp) || - bp->ctx) + bp->ctx->flags & BNXT_CTX_FLAG_INITED) return 0; + ctx = bp->ctx; HWRM_PREP(&req, HWRM_FUNC_BACKING_STORE_QCAPS, BNXT_USE_CHIMP_MB); rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB); HWRM_CHECK_RESULT_SILENT(); - total_alloc_len = sizeof(*ctx); - ctx = rte_zmalloc("bnxt_ctx_mem", total_alloc_len, - RTE_CACHE_LINE_SIZE); - if (!ctx) { - rc = -ENOMEM; - goto ctx_err; - } - ctx->qp_max_entries = rte_le_to_cpu_32(resp->qp_max_entries); ctx->qp_min_qp1_entries = rte_le_to_cpu_16(resp->qp_min_qp1_entries); @@ -5500,8 +5717,13 @@ int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp) rte_le_to_cpu_16(resp->srq_max_l2_entries); ctx->srq_max_entries = rte_le_to_cpu_32(resp->srq_max_entries); ctx->srq_entry_size = rte_le_to_cpu_16(resp->srq_entry_size); - ctx->cq_max_l2_entries = - rte_le_to_cpu_16(resp->cq_max_l2_entries); + if (BNXT_CHIP_P7(bp)) + ctx->cq_max_l2_entries = + RTE_MIN(BNXT_P7_CQ_MAX_L2_ENT, + rte_le_to_cpu_16(resp->cq_max_l2_entries)); + else + ctx->cq_max_l2_entries = + rte_le_to_cpu_16(resp->cq_max_l2_entries); ctx->cq_max_entries = rte_le_to_cpu_32(resp->cq_max_entries); ctx->cq_entry_size = rte_le_to_cpu_16(resp->cq_entry_size); ctx->vnic_max_vnic_entries = @@ -5555,12 +5777,73 @@ int bnxt_hwrm_func_backing_store_qcaps(struct bnxt *bp) for (i = 0; i < tqm_rings; i++, ctx_pg++) ctx->tqm_mem[i] = ctx_pg; - bp->ctx = ctx; ctx_err: HWRM_UNLOCK(); return rc; } +int bnxt_hwrm_func_backing_store_cfg_v2(struct bnxt *bp, + struct bnxt_ctx_mem *ctxm) +{ + struct hwrm_func_backing_store_cfg_v2_input req = {0}; + struct hwrm_func_backing_store_cfg_v2_output *resp = + bp->hwrm_cmd_resp_addr; + struct bnxt_ctx_pg_info *ctx_pg; + int i, j, k; + uint32_t *p; + int rc = 0; + int w = 1; + int b = 1; + + if (!BNXT_PF(bp)) { + PMD_DRV_LOG(INFO, + "Backing store config V2 can be issued on PF only\n"); + return 0; + } + + if (!(ctxm->flags & BNXT_CTX_MEM_TYPE_VALID) || !ctxm->pg_info) + return 0; + + if (ctxm->instance_bmap) + b = ctxm->instance_bmap; + + w = hweight32(b); + + for (i = 0, j = 0; i < w && rc == 0; i++) { + if (!(b & (1 << i))) + continue; + + HWRM_PREP(&req, 
HWRM_FUNC_BACKING_STORE_CFG_V2, BNXT_USE_CHIMP_MB); + req.type = rte_cpu_to_le_16(ctxm->type); + req.entry_size = rte_cpu_to_le_16(ctxm->entry_size); + req.subtype_valid_cnt = ctxm->split_entry_cnt; + for (k = 0, p = &req.split_entry_0; k < ctxm->split_entry_cnt; k++) + p[k] = rte_cpu_to_le_32(ctxm->split[k]); + + req.instance = rte_cpu_to_le_16(i); + ctx_pg = &ctxm->pg_info[j++]; + if (!ctx_pg->entries) + goto unlock; + + req.num_entries = rte_cpu_to_le_32(ctx_pg->entries); + bnxt_hwrm_set_pg_attr(&ctx_pg->ring_mem, + &req.page_size_pbl_level, + &req.page_dir); + PMD_DRV_LOG(DEBUG, + "Backing store config V2 type:%s last %d, instance %d, hw %d\n", + bnxt_backing_store_types[req.type], ctxm->last, j, w); + if (ctxm->last && i == (w - 1)) + req.flags = + rte_cpu_to_le_32(BACKING_STORE_CFG_V2_IN_FLG_CFG_ALL_DONE); + + rc = bnxt_hwrm_send_message(bp, &req, sizeof(req), BNXT_USE_CHIMP_MB); + HWRM_CHECK_RESULT(); +unlock: + HWRM_UNLOCK(); + } + return rc; +} + int bnxt_hwrm_func_backing_store_cfg(struct bnxt *bp, uint32_t enables) { struct hwrm_func_backing_store_cfg_input req = {0}; diff --git a/drivers/net/bnxt/bnxt_hwrm.h b/drivers/net/bnxt/bnxt_hwrm.h index f9fa6cf73a..3d5194257b 100644 --- a/drivers/net/bnxt/bnxt_hwrm.h +++ b/drivers/net/bnxt/bnxt_hwrm.h @@ -60,6 +60,8 @@ struct hwrm_func_qstats_output; HWRM_PORT_PHY_CFG_INPUT_ENABLES_AUTO_PAM4_LINK_SPEED_MASK #define HWRM_PORT_PHY_CFG_IN_EN_AUTO_LINK_SPEED_MASK \ HWRM_PORT_PHY_CFG_INPUT_ENABLES_AUTO_LINK_SPEED_MASK +#define BACKING_STORE_CFG_V2_IN_FLG_CFG_ALL_DONE \ + HWRM_FUNC_BACKING_STORE_CFG_V2_INPUT_FLAGS_BS_CFG_ALL_DONE #define HWRM_SPEC_CODE_1_8_4 0x10804 #define HWRM_SPEC_CODE_1_9_0 0x10900 @@ -355,4 +357,10 @@ void bnxt_free_hwrm_tx_ring(struct bnxt *bp, int queue_index); int bnxt_alloc_hwrm_tx_ring(struct bnxt *bp, int queue_index); int bnxt_hwrm_config_host_mtu(struct bnxt *bp); int bnxt_vnic_rss_clear_p5(struct bnxt *bp, struct bnxt_vnic_info *vnic); +int bnxt_hwrm_func_backing_store_qcaps_v2(struct bnxt *bp); +int bnxt_hwrm_func_backing_store_cfg_v2(struct bnxt *bp, + struct bnxt_ctx_mem *ctxm); +int bnxt_hwrm_func_backing_store_types_count(struct bnxt *bp); +int bnxt_hwrm_func_backing_store_ctx_alloc(struct bnxt *bp, uint16_t types); +int bnxt_alloc_ctx_pg_tbls(struct bnxt *bp); #endif diff --git a/drivers/net/bnxt/bnxt_util.c b/drivers/net/bnxt/bnxt_util.c index 47dd5fa6ff..aa184496c2 100644 --- a/drivers/net/bnxt/bnxt_util.c +++ b/drivers/net/bnxt/bnxt_util.c @@ -27,3 +27,13 @@ void bnxt_eth_hw_addr_random(uint8_t *mac_addr) mac_addr[1] = 0x0a; mac_addr[2] = 0xf7; } + +uint8_t hweight32(uint32_t word32) +{ + uint32_t res = word32 - ((word32 >> 1) & 0x55555555); + + res = (res & 0x33333333) + ((res >> 2) & 0x33333333); + res = (res + (res >> 4)) & 0x0F0F0F0F; + res = res + (res >> 8); + return (res + (res >> 16)) & 0x000000FF; +} diff --git a/drivers/net/bnxt/bnxt_util.h b/drivers/net/bnxt/bnxt_util.h index 7f5b4c160e..b265f5841b 100644 --- a/drivers/net/bnxt/bnxt_util.h +++ b/drivers/net/bnxt/bnxt_util.h @@ -17,4 +17,5 @@ int bnxt_check_zero_bytes(const uint8_t *bytes, int len); void bnxt_eth_hw_addr_random(uint8_t *mac_addr); +uint8_t hweight32(uint32_t word32); #endif /* _BNXT_UTIL_H_ */ From patchwork Mon Dec 11 17:11:05 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 135037 X-Patchwork-Delegate: ajit.khaparde@broadcom.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org 
Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 3A95B436C8; Mon, 11 Dec 2023 18:12:44 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 67D6D42DE8; Mon, 11 Dec 2023 18:11:40 +0100 (CET) Received: from mail-qk1-f175.google.com (mail-qk1-f175.google.com [209.85.222.175]) by mails.dpdk.org (Postfix) with ESMTP id 320E342DDF for ; Mon, 11 Dec 2023 18:11:39 +0100 (CET) Received: by mail-qk1-f175.google.com with SMTP id af79cd13be357-77f320ca2d5so347027885a.1 for ; Mon, 11 Dec 2023 09:11:39 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; t=1702314698; x=1702919498; darn=dpdk.org; h=mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:from:to:cc:subject:date:message-id:reply-to; bh=5UG0mnkI/Xses58D65ioT17ltUPruy8WdWEocbeTKN8=; b=Yx26G1wgioUvt/kQdDpuUHrbMgtLGV5xNqa5VyZBa5WGzQ/Tg64qAk/YX0DQ50+3gg l1gRkMq3Oi9/j2CqAtNwKLUEF9zG0lF4laKpD/2dAk26iaH87GpXDiFusCn9VtNjlJ/Q 5ys7TgC2gAMaMET3nI055Qak5xC0w0YECWVfg= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1702314698; x=1702919498; h=mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=5UG0mnkI/Xses58D65ioT17ltUPruy8WdWEocbeTKN8=; b=GmE56EWL4sCnMYsP724tdb6pjIoJNzgKWYfkSM7TTMaH0zISoRqRll6dUAQjN405HG Z2imi85N0t4rqEaub21l5NMT3GGjb9yNii9bqoXsyUfJHveOnEcv+SQC3nDt99gtqxZw xJV4q0DxzFqD++7hJ4y02zLUBr+nY4FOVAGgRKJvFq/pfKgUym2a+bGLG9F8IEXa+FoU vyzGWCAkxO2vzAu/CaOjAPL2yGVX12GjWgGbc8nS9p7oo43w6sHwi3l40dEL7IiztP5N MH7rAZC4Gxk0ZkCw3WINCBWOD7PkyhhTCSnB5zDYopKFouv0jKhctecMsy7OYwiFSDhj RUXw== X-Gm-Message-State: AOJu0YxJaB+tWe323ZRDXVAOiw/d1+hLgoJrfsjMAD8n8xkLB9eVMmXg +P813b84h0IMoVcJgnCmL8NPWo7W5hDjuTVQCq4d4ros+ij9Yzp3ToJHupsGwZdQq7FLCGZwe5s iXKYsUqCxnCZcbwkhDQHwm/Clp1KUxJBY+oHNsFfIA9ptP+HmcRsBxpNCs/gnsy9XcvvL X-Google-Smtp-Source: AGHT+IHTwbyBhQmlxgUGuuWVtk/K54vp2R1jQ/rJdgqEralIoSYK45zC2cPxLyxWFwtY8AxIAqskPg== X-Received: by 2002:a05:620a:5591:b0:76c:ea67:38e4 with SMTP id vq17-20020a05620a559100b0076cea6738e4mr5603292qkn.12.1702314698199; Mon, 11 Dec 2023 09:11:38 -0800 (PST) Received: from C02GC2QQMD6T.wifi.broadcom.net ([192.19.223.252]) by smtp.gmail.com with ESMTPSA id qz16-20020a05620a8c1000b0077efdfbd730sm3094581qkn.34.2023.12.11.09.11.35 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 11 Dec 2023 09:11:36 -0800 (PST) From: Ajit Khaparde To: dev@dpdk.org Cc: Kishore Padmanabha , Mike Baucom Subject: [PATCH v3 10/14] net/bnxt: refactor the ulp initialization Date: Mon, 11 Dec 2023 09:11:05 -0800 Message-Id: <20231211171109.89716-11-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.39.2 (Apple Git-143) In-Reply-To: <20231211171109.89716-1-ajit.khaparde@broadcom.com> References: <20231211171109.89716-1-ajit.khaparde@broadcom.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org From: Kishore Padmanabha Add new method to consider all the conditions to check before the ulp could be initialized. 
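
As a rough sketch of the pattern (illustrative names only, not the driver's structures), the scattered checks are collected into a single predicate that every ULP entry point consults; the conditions mirror the ones added in the diff below:

    #include <stdbool.h>
    #include <stdint.h>

    struct port_caps {
            bool truflow_en;    /* device/firmware advertises TruFlow */
            uint8_t app_id;     /* 254/255 are the CLI / no-TruFlow app IDs */
            bool chip_p7;       /* ULP stays disabled on P7 for now */
    };

    /* Single gate consulted before ULP init/deinit and flow-ops selection. */
    static bool ulp_should_enable(const struct port_caps *caps)
    {
            if (!caps->truflow_en)
                    return false;
            if (caps->app_id == 254 || caps->app_id == 255)
                    return false;
            if (caps->chip_p7)
                    return false;
            return true;
    }
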
Signed-off-by: Kishore Padmanabha Reviewed-by: Ajit Khaparde Reviewed-by: Mike Baucom --- drivers/net/bnxt/bnxt_ethdev.c | 28 +++++++++++++++++++++++----- 1 file changed, 23 insertions(+), 5 deletions(-) diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index 004b2df4f4..81a30eb983 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -190,6 +190,7 @@ static void bnxt_dev_recover(void *arg); static void bnxt_free_error_recovery_info(struct bnxt *bp); static void bnxt_free_rep_info(struct bnxt *bp); static int bnxt_check_fw_ready(struct bnxt *bp); +static bool bnxt_enable_ulp(struct bnxt *bp); int is_bnxt_in_error(struct bnxt *bp) { @@ -1520,7 +1521,8 @@ static int bnxt_dev_stop(struct rte_eth_dev *eth_dev) return ret; /* delete the bnxt ULP port details */ - bnxt_ulp_port_deinit(bp); + if (bnxt_enable_ulp(bp)) + bnxt_ulp_port_deinit(bp); bnxt_cancel_fw_health_check(bp); @@ -1641,9 +1643,11 @@ int bnxt_dev_start_op(struct rte_eth_dev *eth_dev) goto error; /* Initialize bnxt ULP port details */ - rc = bnxt_ulp_port_init(bp); - if (rc) - goto error; + if (bnxt_enable_ulp(bp)) { + rc = bnxt_ulp_port_init(bp); + if (rc) + goto error; + } eth_dev->rx_pkt_burst = bnxt_receive_function(eth_dev); eth_dev->tx_pkt_burst = bnxt_transmit_function(eth_dev); @@ -3426,7 +3430,7 @@ bnxt_flow_ops_get_op(struct rte_eth_dev *dev, */ dev->data->dev_flags |= RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE; - if (BNXT_TRUFLOW_EN(bp)) + if (bnxt_enable_ulp(bp)) *ops = &bnxt_ulp_rte_flow_ops; else *ops = &bnxt_flow_ops; @@ -6666,6 +6670,20 @@ struct tf *bnxt_get_tfp_session(struct bnxt *bp, enum bnxt_session_type type) &bp->tfp[BNXT_SESSION_TYPE_REGULAR] : &bp->tfp[type]; } +/* check if ULP should be enabled or not */ +static bool bnxt_enable_ulp(struct bnxt *bp) +{ + /* truflow and MPC should be enabled */ + /* not enabling ulp for cli and no truflow apps */ + if (BNXT_TRUFLOW_EN(bp) && bp->app_id != 254 && + bp->app_id != 255) { + if (BNXT_CHIP_P7(bp)) + return false; + return true; + } + return false; +} + RTE_LOG_REGISTER_SUFFIX(bnxt_logtype_driver, driver, NOTICE); RTE_PMD_REGISTER_PCI(net_bnxt, bnxt_rte_pmd); RTE_PMD_REGISTER_PCI_TABLE(net_bnxt, bnxt_pci_id_map); From patchwork Mon Dec 11 17:11:06 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 135038 X-Patchwork-Delegate: ajit.khaparde@broadcom.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id E7D57436C8; Mon, 11 Dec 2023 18:12:50 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 9A04B42DD4; Mon, 11 Dec 2023 18:11:42 +0100 (CET) Received: from mail-oa1-f49.google.com (mail-oa1-f49.google.com [209.85.160.49]) by mails.dpdk.org (Postfix) with ESMTP id 26DF942DEC for ; Mon, 11 Dec 2023 18:11:41 +0100 (CET) Received: by mail-oa1-f49.google.com with SMTP id 586e51a60fabf-1fb9c24a16aso3423723fac.0 for ; Mon, 11 Dec 2023 09:11:41 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; t=1702314700; x=1702919500; darn=dpdk.org; h=mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:from:to:cc:subject:date:message-id:reply-to; bh=walHHvkbVOR7tCKXSU/LUIhTPw3kSMiMmPV206igwv4=; b=NlVz349PzYZXs88HqmTY2NIYehv4OjUfr75SaKJ/JlQEjaiqA5/45N1GV14sKNogWc 
3dnIiJtI8uJHuKal+vgkBI6HsTk8ryVb52dJoNoIwjBFsWOZv/KXa2ilWTzbmjiYjQ5B u5pDoJw5Y5AqX4eaA/V3Q1DNwNvng7YUA1LXk= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1702314700; x=1702919500; h=mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=walHHvkbVOR7tCKXSU/LUIhTPw3kSMiMmPV206igwv4=; b=BqFAznAmw5BraCqjkWHuW/YOXK7MH6pBi72jJQHqjBowoOuwX6dGE7T96OKI4sfoz2 QnzshhDsdOR2YPelkM7sOq1nn6uUyCCHGNust/r3kzeNLSt34W4L2dmtK/ks7hgIytj9 dW0igURtLa0kt4s0CNDMVPyjL46oALvcP/4W7hAlNHbOZGjgTIuCFUZKyLr2h+UaeWI8 Tgw1mTVPUP8LdlgxaUzkGzpy4+Dbyhcip0XpVIaCgb3NXxXHNy32mb+DUssuMXSgxfsU dSh6/mss6pa/zGN+WNuylKkmYoX2zoZl6frJ8NQ1K5PNx9TnSX7t4vXFDfLGgC+EE8Mq o7Fw== X-Gm-Message-State: AOJu0Yxius7+Q7j43u6S5DVsSMNL2Ji1aEbEB5EhXtyZjdm795SFIeAK ppAZ+rrnYwzJL5WjBKqQT/sR3myVb9jUvRHX5joH2K+cW7MqKd2Qka39RrMy2TADI9OkD+4E5fp K1rCVchcvbI6O8OHBWqg9nicelCCjOFai7wX7wJMw24oNsXzRgBaQgD+JbKNCZGa5/Np5 X-Google-Smtp-Source: AGHT+IEcTjarIBdpemNHJqt9no749Of9FhDQr6iKlfJf2L2JFXxHm+Wy7nxubkabjBj2TnAsJcp36w== X-Received: by 2002:a05:6871:58a1:b0:1fa:fcfb:2a7d with SMTP id ok33-20020a05687158a100b001fafcfb2a7dmr5886330oac.19.1702314700071; Mon, 11 Dec 2023 09:11:40 -0800 (PST) Received: from C02GC2QQMD6T.wifi.broadcom.net ([192.19.223.252]) by smtp.gmail.com with ESMTPSA id qz16-20020a05620a8c1000b0077efdfbd730sm3094581qkn.34.2023.12.11.09.11.38 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 11 Dec 2023 09:11:39 -0800 (PST) From: Ajit Khaparde To: dev@dpdk.org Cc: Damodharam Ammepalli Subject: [PATCH v3 11/14] net/bnxt: modify sending new HWRM commands to firmware Date: Mon, 11 Dec 2023 09:11:06 -0800 Message-Id: <20231211171109.89716-12-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.39.2 (Apple Git-143) In-Reply-To: <20231211171109.89716-1-ajit.khaparde@broadcom.com> References: <20231211171109.89716-1-ajit.khaparde@broadcom.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org If the firmware fails to respond a HWRM command in a certain time, it may be because the firmware is in a bad state. Do not send any new HWRM commands in such a scenario. 
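
A minimal sketch of this latch-on-timeout pattern (illustrative names, not the driver's code): a sticky flag is set the first time a command times out and is checked before every later send, so no new commands reach an unresponsive firmware.

    #include <stdbool.h>

    struct cmd_channel {
            bool fw_timed_out;   /* stays set once any command has timed out */
    };

    static int cmd_send(struct cmd_channel *ch, const void *msg,
                        int (*do_send)(const void *msg))
    {
            /* Firmware presumed unresponsive: skip sending new commands. */
            if (ch->fw_timed_out)
                    return 0;

            if (do_send(msg) != 0) {
                    ch->fw_timed_out = true;   /* latch; block further commands */
                    return -1;                 /* the driver returns -ETIMEDOUT */
            }
            return 0;
    }
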
Signed-off-by: Ajit Khaparde Reviewed-by: Damodharam Ammepalli --- drivers/net/bnxt/bnxt.h | 1 + drivers/net/bnxt/bnxt_hwrm.c | 5 +++++ 2 files changed, 6 insertions(+) diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index 68c4778dc3..f7a60eb9a1 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -745,6 +745,7 @@ struct bnxt { #define BNXT_FLAG_DFLT_MAC_SET BIT(26) #define BNXT_FLAG_GFID_ENABLE BIT(27) #define BNXT_FLAG_CHIP_P7 BIT(30) +#define BNXT_FLAG_FW_TIMEDOUT BIT(31) #define BNXT_PF(bp) (!((bp)->flags & BNXT_FLAG_VF)) #define BNXT_VF(bp) ((bp)->flags & BNXT_FLAG_VF) #define BNXT_NPAR(bp) ((bp)->flags & BNXT_FLAG_NPAR_PF) diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c index a2182af036..1cc2c532dd 100644 --- a/drivers/net/bnxt/bnxt_hwrm.c +++ b/drivers/net/bnxt/bnxt_hwrm.c @@ -215,6 +215,10 @@ static int bnxt_hwrm_send_message(struct bnxt *bp, void *msg, if (bp->flags & BNXT_FLAG_FATAL_ERROR) return 0; + /* If previous HWRM command timed out, donot send new HWRM command */ + if (bp->flags & BNXT_FLAG_FW_TIMEDOUT) + return 0; + timeout = bp->hwrm_cmd_timeout; /* Update the message length for backing store config for new FW. */ @@ -315,6 +319,7 @@ static int bnxt_hwrm_send_message(struct bnxt *bp, void *msg, PMD_DRV_LOG(ERR, "Error(timeout) sending msg 0x%04x, seq_id %d\n", req->req_type, req->seq_id); + bp->flags |= BNXT_FLAG_FW_TIMEDOUT; return -ETIMEDOUT; } return 0; From patchwork Mon Dec 11 17:11:07 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 135039 X-Patchwork-Delegate: ajit.khaparde@broadcom.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id C080F436C8; Mon, 11 Dec 2023 18:12:56 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C43A842DD8; Mon, 11 Dec 2023 18:11:44 +0100 (CET) Received: from mail-qk1-f172.google.com (mail-qk1-f172.google.com [209.85.222.172]) by mails.dpdk.org (Postfix) with ESMTP id 4BBF042DDF for ; Mon, 11 Dec 2023 18:11:43 +0100 (CET) Received: by mail-qk1-f172.google.com with SMTP id af79cd13be357-77f320ca2d5so347033085a.1 for ; Mon, 11 Dec 2023 09:11:43 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; t=1702314702; x=1702919502; darn=dpdk.org; h=mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:from:to:cc:subject:date:message-id:reply-to; bh=tmqjWk9MFRDUctyRn+WtIb/J5GfDiBOT+EZzibIcv/U=; b=LINOhY1zjeLgacQk1nTP1fytSiJ0I5h6ilYWINGsS0LcMmaTIoYPtG8gBAVuo8qmmO ZQ3vdFg4WYJ5u5q+XHqCC0t+mpgIOA5T8YySGXFBYtcq26Ykq8N1Ji0MXHUh9rpTmcCd mtiU2+glLLGeRq4+igo02PngzbbV7zI+9X7xE= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1702314702; x=1702919502; h=mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=tmqjWk9MFRDUctyRn+WtIb/J5GfDiBOT+EZzibIcv/U=; b=YSUrLvV1SiNv22TJkSHtJQgBhculMkk/yHcFG8+AXNHvu+KlCOdSlBYQKH8mw2OHYa InJPXfbtsvMn3WP/JYenO9/n/J5Eox6e8Vpp6w1sIjZmD0kMWm7ge4LIMLvbkuM5Db5G xxSypRavUOp0bH6wpI2kqeuwhHjIeoYtF2rg9rIQqwqkyumTQ+DoHsBL095SXlPOgI4s z6cqzVPK99wvuvw1fxM5Yfl8jRnmOw3rx1m0kUEfVoJY/Xr7MuGFdzwBK5S6QIcYSESj 0l07retoV5Ft1DGGANbzRUf+AEFzyNDd22VP+f4YomCxR9t/WmML/M3FSfwwZP+hOuqx YG/Q== X-Gm-Message-State: 
AOJu0YxC5j206lI+IjoHu97h17JpcPZtwsjfq/Q3KZcSTC46kJu0JYOF dbC5yIpyt2ETd+PnC5ABz0WPqx1Sr0I3ZFGLBqSBh96rOaiLjK+MxCEqkeNbgtiQVs+GO899itE b2NFoyKcXvb0Umj9HV7nNHCqAO/6MOJJrME4Iddn9N0FNtYTiwNDzodTUdLtBta0MbjnL X-Google-Smtp-Source: AGHT+IH6lipbq4dte+yFWXCpd/Nuph5CXYVEnwE901CtYd5jL83JEx6KJBjV11LYY82vPilqLL8JpQ== X-Received: by 2002:a05:620a:31aa:b0:77d:84d1:a594 with SMTP id bi42-20020a05620a31aa00b0077d84d1a594mr7940607qkb.10.1702314702325; Mon, 11 Dec 2023 09:11:42 -0800 (PST) Received: from C02GC2QQMD6T.wifi.broadcom.net ([192.19.223.252]) by smtp.gmail.com with ESMTPSA id qz16-20020a05620a8c1000b0077efdfbd730sm3094581qkn.34.2023.12.11.09.11.40 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 11 Dec 2023 09:11:40 -0800 (PST) From: Ajit Khaparde To: dev@dpdk.org Cc: Kalesh AP , Somnath Kotur Subject: [PATCH v3 12/14] net/bnxt: retry HWRM ver get if the command fails Date: Mon, 11 Dec 2023 09:11:07 -0800 Message-Id: <20231211171109.89716-13-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.39.2 (Apple Git-143) In-Reply-To: <20231211171109.89716-1-ajit.khaparde@broadcom.com> References: <20231211171109.89716-1-ajit.khaparde@broadcom.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Retry HWRM ver get if the command timesout because of PCI FLR. When the PCI driver issues an FLR during device initialization, the firmware may have to block the PXP target traffic till the FLR is complete. HWRM_VER_GET command issued during that window may time out. So retry the command again in such a scenario. Signed-off-by: Ajit Khaparde Reviewed-by: Kalesh AP Reviewed-by: Somnath Kotur --- drivers/net/bnxt/bnxt.h | 1 + drivers/net/bnxt/bnxt_ethdev.c | 12 +++++++++++- 2 files changed, 12 insertions(+), 1 deletion(-) diff --git a/drivers/net/bnxt/bnxt.h b/drivers/net/bnxt/bnxt.h index f7a60eb9a1..7aed4c3da3 100644 --- a/drivers/net/bnxt/bnxt.h +++ b/drivers/net/bnxt/bnxt.h @@ -879,6 +879,7 @@ struct bnxt { /* default command timeout value of 500ms */ #define DFLT_HWRM_CMD_TIMEOUT 500000 +#define PCI_FUNC_RESET_WAIT_TIMEOUT 1500000 /* short command timeout value of 50ms */ #define SHORT_HWRM_CMD_TIMEOUT 50000 /* default HWRM request timeout value */ diff --git a/drivers/net/bnxt/bnxt_ethdev.c b/drivers/net/bnxt/bnxt_ethdev.c index 81a30eb983..75e968394f 100644 --- a/drivers/net/bnxt/bnxt_ethdev.c +++ b/drivers/net/bnxt/bnxt_ethdev.c @@ -5441,6 +5441,7 @@ static int bnxt_map_hcomm_fw_status_reg(struct bnxt *bp) static int bnxt_get_config(struct bnxt *bp) { uint16_t mtu; + int timeout; int rc = 0; bp->fw_cap = 0; @@ -5449,8 +5450,17 @@ static int bnxt_get_config(struct bnxt *bp) if (rc) return rc; - rc = bnxt_hwrm_ver_get(bp, DFLT_HWRM_CMD_TIMEOUT); + timeout = BNXT_CHIP_P7(bp) ? 
+ PCI_FUNC_RESET_WAIT_TIMEOUT : + DFLT_HWRM_CMD_TIMEOUT; +try_again: + rc = bnxt_hwrm_ver_get(bp, timeout); if (rc) { + if (rc == -ETIMEDOUT && timeout == PCI_FUNC_RESET_WAIT_TIMEOUT) { + bp->flags &= ~BNXT_FLAG_FW_TIMEDOUT; + timeout = DFLT_HWRM_CMD_TIMEOUT; + goto try_again; + } bnxt_check_fw_status(bp); return rc; } From patchwork Mon Dec 11 17:11:08 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 135040 X-Patchwork-Delegate: ajit.khaparde@broadcom.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 0A2F1436C8; Mon, 11 Dec 2023 18:13:05 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 5E94B42DF7; Mon, 11 Dec 2023 18:11:46 +0100 (CET) Received: from mail-qk1-f174.google.com (mail-qk1-f174.google.com [209.85.222.174]) by mails.dpdk.org (Postfix) with ESMTP id 3339242DEA for ; Mon, 11 Dec 2023 18:11:45 +0100 (CET) Received: by mail-qk1-f174.google.com with SMTP id af79cd13be357-77f37d19b6fso253024485a.2 for ; Mon, 11 Dec 2023 09:11:45 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; t=1702314704; x=1702919504; darn=dpdk.org; h=mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:from:to:cc:subject:date:message-id:reply-to; bh=1FDl3kCDNXOk7VGiNEWkqK8Wjnt9TJ4+8YCf7l9gUxE=; b=PxbU/FP85hZWgQ08wQpwFt2zGUo/YDcLEVCYDVG1GVktFWQN8M6yNDF+lFCz2ekN7G QUC8MoXkcPrIAFSBUQ2d3nVD6lwYZDXqhI6JdH7Qu+X2q6ZMGk84lg7/EZdf7JPuqHOZ Lw6YXL/mrUJkMuUB4lbfoJnaMNGw6UfwpKVUM= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1702314704; x=1702919504; h=mime-version:references:in-reply-to:message-id:date:subject:cc:to :from:x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=1FDl3kCDNXOk7VGiNEWkqK8Wjnt9TJ4+8YCf7l9gUxE=; b=rwHi+wgeAmeMy1Pg1zreDzbhgsCPcdMlowKlu31x4vtpGcDdUX1qpvXmZDs8HQCub/ l36uayAIozROpzIlDxRSxlVUQlTM53NYFBmMXd2MANVUKoGAnsIokT4xsQXvftMWkEgX UF/hCvuU0KoNpe3EYhGdToSGIXGHELtZEwJQ7VKdertWjb4IaGcHIleIXY882OJckawH H9Nv8hXBscOe9WOP6ssunjHb+aQ1iya1h5HqWNawa8VG0tezZnWv0Ow/1ILDflBSJ8Ec 6YBZ2bQhqWa2IU4jZHzwbBOvgootBTjRwXB05AHQeMz3vyPTx7ln+uK51CtXqehO5PDg 8n7Q== X-Gm-Message-State: AOJu0YzU5jcyuhxxPXDh1yrcgETaTE6HjIroNvKJ0dx+gEuGtPfVLKPP EZ6Upjv+mB13UDUh7FvS9xZJb8h9tgdubz4xK8eH49zCa40TQw25UxSSpmkDRvgitH1ePkzbimR f+3/0srjbwShk937IKHdEGqBQFGZS245NsbxDDdsDL+zH8CMelG/9MNSYHm34kogKS/Pz X-Google-Smtp-Source: AGHT+IGpAXK8u4sv9BePYrYC5T87pWygHVilx4ZfPNMvWYBegllD55ny+SU8QzOAYQ3k0YPJaY9kOg== X-Received: by 2002:a05:620a:10a6:b0:77f:9c9:70cb with SMTP id h6-20020a05620a10a600b0077f09c970cbmr5809039qkk.118.1702314704193; Mon, 11 Dec 2023 09:11:44 -0800 (PST) Received: from C02GC2QQMD6T.wifi.broadcom.net ([192.19.223.252]) by smtp.gmail.com with ESMTPSA id qz16-20020a05620a8c1000b0077efdfbd730sm3094581qkn.34.2023.12.11.09.11.42 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 11 Dec 2023 09:11:43 -0800 (PST) From: Ajit Khaparde To: dev@dpdk.org Cc: Kalesh AP , Damodharam Ammepalli Subject: [PATCH v3 13/14] net/bnxt: cap ring resources for P7 devices Date: Mon, 11 Dec 2023 09:11:08 -0800 Message-Id: <20231211171109.89716-14-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.39.2 (Apple Git-143) In-Reply-To: <20231211171109.89716-1-ajit.khaparde@broadcom.com> References: 
<20231211171109.89716-1-ajit.khaparde@broadcom.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Cap the NQ count for P7 devices. Driver does not need a high NQ ring count anyway since we operate in poll mode. Signed-off-by: Ajit Khaparde Reviewed-by: Kalesh AP Reviewed-by: Damodharam Ammepalli --- drivers/net/bnxt/bnxt_hwrm.c | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/drivers/net/bnxt/bnxt_hwrm.c b/drivers/net/bnxt/bnxt_hwrm.c index 1cc2c532dd..e56f7693af 100644 --- a/drivers/net/bnxt/bnxt_hwrm.c +++ b/drivers/net/bnxt/bnxt_hwrm.c @@ -1237,7 +1237,10 @@ int bnxt_hwrm_func_resc_qcaps(struct bnxt *bp) else bp->max_vnics = rte_le_to_cpu_16(resp->max_vnics); bp->max_stat_ctx = rte_le_to_cpu_16(resp->max_stat_ctx); - bp->max_nq_rings = rte_le_to_cpu_16(resp->max_msix); + if (BNXT_CHIP_P7(bp)) + bp->max_nq_rings = BNXT_P7_MAX_NQ_RING_CNT; + else + bp->max_nq_rings = rte_le_to_cpu_16(resp->max_msix); bp->vf_resv_strategy = rte_le_to_cpu_16(resp->vf_reservation_strategy); if (bp->vf_resv_strategy > HWRM_FUNC_RESOURCE_QCAPS_OUTPUT_VF_RESV_STRATEGY_MINIMAL_STATIC) From patchwork Mon Dec 11 17:11:09 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Ajit Khaparde X-Patchwork-Id: 135041 X-Patchwork-Delegate: ajit.khaparde@broadcom.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 01633436C8; Mon, 11 Dec 2023 18:13:12 +0100 (CET) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 975FC42DF0; Mon, 11 Dec 2023 18:11:48 +0100 (CET) Received: from mail-qk1-f182.google.com (mail-qk1-f182.google.com [209.85.222.182]) by mails.dpdk.org (Postfix) with ESMTP id 39CCB42DF9 for ; Mon, 11 Dec 2023 18:11:47 +0100 (CET) Received: by mail-qk1-f182.google.com with SMTP id af79cd13be357-77f2f492a43so262233785a.2 for ; Mon, 11 Dec 2023 09:11:47 -0800 (PST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=broadcom.com; s=google; t=1702314706; x=1702919506; darn=dpdk.org; h=mime-version:references:in-reply-to:message-id:date:subject:to:from :from:to:cc:subject:date:message-id:reply-to; bh=jAAtH5bX3yTV4E3FZbOubWeMpZPcWthQ0JF7m7pbQxM=; b=IG9O6K/fJij65wsNXIGpiMpYHRFy3aNPU+XFKikRFssc5RSivhPZG+2t0gj4vCOCDn bzXT/CbzuaJxVpjXvZ5dDhVJKSzBavXnS6J845p5fUQTh/MtQEC8O9hrUnSC61zcINEq RAmixmVMa1I46M2jo0isEJp6EBpuYdOmRwe84= X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20230601; t=1702314706; x=1702919506; h=mime-version:references:in-reply-to:message-id:date:subject:to:from :x-gm-message-state:from:to:cc:subject:date:message-id:reply-to; bh=jAAtH5bX3yTV4E3FZbOubWeMpZPcWthQ0JF7m7pbQxM=; b=lDj2mw9nOkYcFkWCrG20sbBfFXrqx4chhDKNkz2v6DsbxAU62ojUl9iNBdPgsoWuAF Hyo1kaC5cI+TrMR1AscldIwa2AFjeywg3bVy4ACaV4ZoRkggAsKnV6UMzmNFw9sUW7tD HzQ8HdvXXRZz66Uw2AsflAkx1f6n6LQ4XTi4PXGa/0k0TWphTQU6ScbTtRVlpOBYGWbl DspiD0M20AnrtP0ysVmvjMF1C0uWWH3Y6Kuco0mLU7VIIJGRDlMeed9JiUilvXNI4mbu vLgmQ+yCZ/RqA7JvK53VZC9ksdh7JmxioLCxcnkoXM7CxDh40oJ4IQbt88kC+vBMy2AM tvJA== X-Gm-Message-State: AOJu0YzXFDkJRCqsPCe9EIa3JFS2NrKUk033PgDlwwUhbNkRBRRzBYAi +m8jonB77KbaLWsy+AVnY7zxdsO0Nsq04pQgitmzbUn63Q++k5zt+4jXjNwCJZXhP5aqFC5XPG+ 
Jii3qYGmOQeZuIM2lv3fKQnBB4LExwgNv6b6SBpyMKe/SppPzWBzL0K/9z3wZL/TUb+l/ X-Google-Smtp-Source: AGHT+IHTC16Mv8yZbK23jmHbEJsgYG1qv5od/CHrg9WHw7M0iqe/GvXnieXKz5NGZ8VwURQ7WguDAQ== X-Received: by 2002:a05:620a:20d7:b0:77a:1476:fb93 with SMTP id f23-20020a05620a20d700b0077a1476fb93mr5658079qka.47.1702314705988; Mon, 11 Dec 2023 09:11:45 -0800 (PST) Received: from C02GC2QQMD6T.wifi.broadcom.net ([192.19.223.252]) by smtp.gmail.com with ESMTPSA id qz16-20020a05620a8c1000b0077efdfbd730sm3094581qkn.34.2023.12.11.09.11.44 for (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 11 Dec 2023 09:11:44 -0800 (PST) From: Ajit Khaparde To: dev@dpdk.org Subject: [PATCH v3 14/14] net/bnxt: add support for v3 Rx completion Date: Mon, 11 Dec 2023 09:11:09 -0800 Message-Id: <20231211171109.89716-15-ajit.khaparde@broadcom.com> X-Mailer: git-send-email 2.39.2 (Apple Git-143) In-Reply-To: <20231211171109.89716-1-ajit.khaparde@broadcom.com> References: <20231211171109.89716-1-ajit.khaparde@broadcom.com> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org P7 devices support the newer Rx completion version. This Rx completion though similar to the previous generation, provides some extra information for flow offload scenarios apart from the normal information. Signed-off-by: Ajit Khaparde --- drivers/net/bnxt/bnxt_rxr.c | 87 ++++++++++++++++++++++++++++++++++- drivers/net/bnxt/bnxt_rxr.h | 92 +++++++++++++++++++++++++++++++++++++ 2 files changed, 177 insertions(+), 2 deletions(-) diff --git a/drivers/net/bnxt/bnxt_rxr.c b/drivers/net/bnxt/bnxt_rxr.c index 9d45065f28..59ea0121de 100644 --- a/drivers/net/bnxt/bnxt_rxr.c +++ b/drivers/net/bnxt/bnxt_rxr.c @@ -553,6 +553,41 @@ bnxt_parse_pkt_type(struct rx_pkt_cmpl *rxcmp, struct rx_pkt_cmpl_hi *rxcmp1) return bnxt_ptype_table[index]; } +static void +bnxt_parse_pkt_type_v3(struct rte_mbuf *mbuf, + struct rx_pkt_cmpl *rxcmp_v1, + struct rx_pkt_cmpl_hi *rxcmp1_v1) +{ + uint32_t flags_type, flags2, meta; + struct rx_pkt_v3_cmpl_hi *rxcmp1; + struct rx_pkt_v3_cmpl *rxcmp; + uint8_t index; + + rxcmp = (void *)rxcmp_v1; + rxcmp1 = (void *)rxcmp1_v1; + + flags_type = rte_le_to_cpu_16(rxcmp->flags_type); + flags2 = rte_le_to_cpu_32(rxcmp1->flags2); + meta = rte_le_to_cpu_32(rxcmp->metadata1_payload_offset); + + /* TODO */ + /* Validate ptype table indexing at build time. */ + /* bnxt_check_ptype_constants_v3(); */ + + /* + * Index format: + * bit 0: Set if IP tunnel encapsulated packet. + * bit 1: Set if IPv6 packet, clear if IPv4. + * bit 2: Set if VLAN tag present. + * bits 3-6: Four-bit hardware packet type field. + */ + index = BNXT_CMPL_V3_ITYPE_TO_IDX(flags_type) | + BNXT_CMPL_V3_VLAN_TO_IDX(meta) | + BNXT_CMPL_V3_IP_VER_TO_IDX(flags2); + + mbuf->packet_type = bnxt_ptype_table[index]; +} + static void __rte_cold bnxt_init_ol_flags_tables(struct bnxt_rx_queue *rxq) { @@ -716,6 +751,43 @@ bnxt_get_rx_ts_p5(struct bnxt *bp, uint32_t rx_ts_cmpl) ptp->rx_timestamp = pkt_time; } +static uint32_t +bnxt_ulp_set_mark_in_mbuf_v3(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1, + struct rte_mbuf *mbuf, uint32_t *vfr_flag) +{ + struct rx_pkt_v3_cmpl_hi *rxcmp1_v3 = (void *)rxcmp1; + uint32_t flags2, meta, mark_id = 0; + /* revisit the usage of gfid/lfid if mark action is supported. 
+ * for now, only VFR is using mark and the metadata is the SVIF + * (a small number) + */ + bool gfid = false; + int rc = 0; + + flags2 = rte_le_to_cpu_32(rxcmp1_v3->flags2); + + switch (flags2 & RX_PKT_V3_CMPL_HI_FLAGS2_META_FORMAT_MASK) { + case RX_PKT_V3_CMPL_HI_FLAGS2_META_FORMAT_CHDR_DATA: + /* Only supporting Metadata for ulp now */ + meta = rxcmp1_v3->metadata2; + break; + default: + goto skip_mark; + } + + rc = ulp_mark_db_mark_get(bp->ulp_ctx, gfid, meta, vfr_flag, &mark_id); + if (!rc) { + /* Only supporting VFR for now, no Mark actions */ + if (vfr_flag && *vfr_flag) + return mark_id; + } + +skip_mark: + mbuf->hash.fdir.hi = 0; + + return 0; +} + static uint32_t bnxt_ulp_set_mark_in_mbuf(struct bnxt *bp, struct rx_pkt_cmpl_hi *rxcmp1, struct rte_mbuf *mbuf, uint32_t *vfr_flag) @@ -892,7 +964,8 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt, *rx_pkt = mbuf; goto next_rx; } else if ((cmp_type != CMPL_BASE_TYPE_RX_L2) && - (cmp_type != CMPL_BASE_TYPE_RX_L2_V2)) { + (cmp_type != CMPL_BASE_TYPE_RX_L2_V2) && + (cmp_type != CMPL_BASE_TYPE_RX_L2_V3)) { rc = -EINVAL; goto next_rx; } @@ -929,6 +1002,16 @@ static int bnxt_rx_pkt(struct rte_mbuf **rx_pkt, bp->ptp_all_rx_tstamp) bnxt_get_rx_ts_p5(rxq->bp, rxcmp1->reorder); + if (cmp_type == CMPL_BASE_TYPE_RX_L2_V3) { + bnxt_parse_csum_v3(mbuf, rxcmp1); + bnxt_parse_pkt_type_v3(mbuf, rxcmp, rxcmp1); + bnxt_rx_vlan_v3(mbuf, rxcmp, rxcmp1); + if (BNXT_TRUFLOW_EN(bp)) + mark_id = bnxt_ulp_set_mark_in_mbuf_v3(rxq->bp, rxcmp1, + mbuf, &vfr_flag); + goto reuse_rx_mbuf; + } + if (cmp_type == CMPL_BASE_TYPE_RX_L2_V2) { bnxt_parse_csum_v2(mbuf, rxcmp1); bnxt_parse_pkt_type_v2(mbuf, rxcmp, rxcmp1); @@ -1066,7 +1149,7 @@ uint16_t bnxt_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, if (CMP_TYPE(rxcmp) == CMPL_BASE_TYPE_HWRM_DONE) { PMD_DRV_LOG(ERR, "Rx flush done\n"); } else if ((CMP_TYPE(rxcmp) >= CMPL_BASE_TYPE_RX_TPA_START_V2) && - (CMP_TYPE(rxcmp) <= RX_TPA_V2_ABUF_CMPL_TYPE_RX_TPA_AGG)) { + (CMP_TYPE(rxcmp) <= CMPL_BASE_TYPE_RX_TPA_START_V3)) { rc = bnxt_rx_pkt(&rx_pkts[nb_rx_pkts], rxq, &raw_cons); if (!rc) nb_rx_pkts++; diff --git a/drivers/net/bnxt/bnxt_rxr.h b/drivers/net/bnxt/bnxt_rxr.h index af53bc0c25..439d29a07f 100644 --- a/drivers/net/bnxt/bnxt_rxr.h +++ b/drivers/net/bnxt/bnxt_rxr.h @@ -386,4 +386,96 @@ bnxt_parse_pkt_type_v2(struct rte_mbuf *mbuf, mbuf->packet_type = pkt_type; } + +/* Thor2 specific code for RX completion parsing */ +#define RX_PKT_V3_CMPL_FLAGS2_IP_TYPE_SFT 8 +#define RX_PKT_V3_CMPL_METADATA1_VALID_SFT 15 + +#define BNXT_CMPL_V3_ITYPE_TO_IDX(ft) \ + (((ft) & RX_PKT_V3_CMPL_FLAGS_ITYPE_MASK) >> \ + (RX_PKT_V3_CMPL_FLAGS_ITYPE_SFT - BNXT_PTYPE_TBL_TYPE_SFT)) + +#define BNXT_CMPL_V3_VLAN_TO_IDX(meta) \ + (((meta) & (1 << RX_PKT_V3_CMPL_METADATA1_VALID_SFT)) >> \ + (RX_PKT_V3_CMPL_METADATA1_VALID_SFT - BNXT_PTYPE_TBL_VLAN_SFT)) + +#define BNXT_CMPL_V3_IP_VER_TO_IDX(f2) \ + (((f2) & RX_PKT_V3_CMPL_HI_FLAGS2_IP_TYPE) >> \ + (RX_PKT_V3_CMPL_FLAGS2_IP_TYPE_SFT - BNXT_PTYPE_TBL_IP_VER_SFT)) + +#define RX_CMP_V3_VLAN_VALID(rxcmp) \ + (((struct rx_pkt_v3_cmpl *)rxcmp)->metadata1_payload_offset & \ + RX_PKT_V3_CMPL_METADATA1_VALID) + +#define RX_CMP_V3_METADATA0_VID(rxcmp1) \ + ((((struct rx_pkt_v3_cmpl_hi *)rxcmp1)->metadata0) & \ + (RX_PKT_V3_CMPL_HI_METADATA0_VID_MASK | \ + RX_PKT_V3_CMPL_HI_METADATA0_DE | \ + RX_PKT_V3_CMPL_HI_METADATA0_PRI_MASK)) + +static inline void bnxt_rx_vlan_v3(struct rte_mbuf *mbuf, + struct rx_pkt_cmpl *rxcmp, + struct rx_pkt_cmpl_hi *rxcmp1) +{ + if (RX_CMP_V3_VLAN_VALID(rxcmp)) { + 
mbuf->vlan_tci = RX_CMP_V3_METADATA0_VID(rxcmp1); + mbuf->ol_flags |= RTE_MBUF_F_RX_VLAN | RTE_MBUF_F_RX_VLAN_STRIPPED; + } +} + +#define RX_CMP_V3_L4_CS_ERR(err) \ + (((err) & RX_PKT_CMPL_ERRORS_MASK) \ + & (RX_PKT_CMPL_ERRORS_L4_CS_ERROR)) +#define RX_CMP_V3_L3_CS_ERR(err) \ + (((err) & RX_PKT_CMPL_ERRORS_MASK) \ + & (RX_PKT_CMPL_ERRORS_IP_CS_ERROR)) +#define RX_CMP_V3_T_IP_CS_ERR(err) \ + (((err) & RX_PKT_CMPL_ERRORS_MASK) \ + & (RX_PKT_CMPL_ERRORS_T_IP_CS_ERROR)) +#define RX_CMP_V3_T_L4_CS_ERR(err) \ + (((err) & RX_PKT_CMPL_ERRORS_MASK) \ + & (RX_PKT_CMPL_ERRORS_T_L4_CS_ERROR)) +#define RX_PKT_CMPL_CALC \ + (RX_PKT_CMPL_FLAGS2_IP_CS_CALC | \ + RX_PKT_CMPL_FLAGS2_L4_CS_CALC | \ + RX_PKT_CMPL_FLAGS2_T_IP_CS_CALC | \ + RX_PKT_CMPL_FLAGS2_T_L4_CS_CALC) + +static inline uint64_t +bnxt_parse_csum_fields_v3(uint32_t flags2, uint32_t error_v2) +{ + uint64_t ol_flags = 0; + + if (flags2 & RX_PKT_CMPL_CALC) { + if (unlikely(RX_CMP_V3_L4_CS_ERR(error_v2))) + ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_BAD; + else + ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD; + if (unlikely(RX_CMP_V3_L3_CS_ERR(error_v2))) + ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_BAD; + if (unlikely(RX_CMP_V3_T_L4_CS_ERR(error_v2))) + ol_flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_BAD; + else + ol_flags |= RTE_MBUF_F_RX_OUTER_L4_CKSUM_GOOD; + if (unlikely(RX_CMP_V3_T_IP_CS_ERR(error_v2))) + ol_flags |= RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD; + if (!(ol_flags & (RTE_MBUF_F_RX_IP_CKSUM_BAD | RTE_MBUF_F_RX_OUTER_IP_CKSUM_BAD))) + ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_GOOD; + } else { + /* Unknown is defined as 0 for all packets types hence using below for all */ + ol_flags |= RTE_MBUF_F_RX_IP_CKSUM_UNKNOWN; + } + return ol_flags; +} + +static inline void +bnxt_parse_csum_v3(struct rte_mbuf *mbuf, struct rx_pkt_cmpl_hi *rxcmp1) +{ + struct rx_pkt_v3_cmpl_hi *v3_cmp = + (struct rx_pkt_v3_cmpl_hi *)(rxcmp1); + uint16_t error_v2 = rte_le_to_cpu_16(v3_cmp->errors_v2); + uint32_t flags2 = rte_le_to_cpu_32(v3_cmp->flags2); + + mbuf->ol_flags |= bnxt_parse_csum_fields_v3(flags2, error_v2); +} #endif /* _BNXT_RXR_H_ */
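For context on the packet-type handling shared by the v2 and v3 Rx paths above: the driver packs a handful of completion bit-fields (IP version, VLAN presence, hardware item type) into a small index and resolves the mbuf packet_type with a single lookup into a precomputed table, so the per-packet cost is a few shifts, masks and one load. The following standalone sketch only illustrates that lookup-table idea; the bit positions and table contents are made up and do not reproduce the real completion layout or bnxt_ptype_table:

#include <stdint.h>

#define EX_TUNNEL_BIT (1u << 0)   /* hypothetical: tunnel encapsulated */
#define EX_IPV6_BIT   (1u << 1)   /* hypothetical: IPv6, clear means IPv4 */
#define EX_VLAN_BIT   (1u << 2)   /* hypothetical: VLAN tag present */

/* Precomputed once at init time: one packet_type value per index. */
static uint32_t ex_ptype_table[8];

static void ex_ptype_table_init(void)
{
	for (uint32_t i = 0; i < 8; i++) {
		uint32_t pt = (i & EX_IPV6_BIT) ? 0x40 : 0x10;   /* fake L3 codes */

		if (i & EX_VLAN_BIT)
			pt |= 0x100;                             /* fake VLAN code */
		if (i & EX_TUNNEL_BIT)
			pt |= 0x1000;                            /* fake tunnel code */
		ex_ptype_table[i] = pt;
	}
}

/* Hot path: pack the completion bits into an index, then one table load. */
static uint32_t ex_parse_pkt_type(uint32_t cmpl_flags)
{
	uint32_t idx = cmpl_flags & (EX_VLAN_BIT | EX_IPV6_BIT | EX_TUNNEL_BIT);

	return ex_ptype_table[idx];
}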