From patchwork Mon Mar  4 09:33:50 2024
X-Patchwork-Submitter: Mingjin Ye <mingjinx.ye@intel.com>
X-Patchwork-Id: 137877
X-Patchwork-Delegate: bruce.richardson@intel.com
From: Mingjin Ye <mingjinx.ye@intel.com>
To: dev@dpdk.org
Cc: Mingjin Ye <mingjinx.ye@intel.com>
Subject: [PATCH v4] net/ice: add diagnostic support in Tx path
Date: Mon, 4 Mar 2024 09:33:50 +0000
Message-Id: <20240304093350.592125-1-mingjinx.ye@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20240301095052.460597-1-mingjinx.ye@intel.com>
References: <20240301095052.460597-1-mingjinx.ye@intel.com>
List-Id: DPDK patches and discussions

Implement a Tx wrapper that performs a thorough check on mbufs,
categorizing and counting invalid cases by type for diagnostic purposes.
The count of invalid cases is accessible through xstats_get.

The devarg option "mbuf_check" is also introduced to configure the
diagnostic parameters and enable the corresponding diagnostic features.

Supported cases: mbuf, size, segment, offload.
1. mbuf: Check for corrupted mbuf.
2. size: Check min/max packet length according to the HW spec.
3. segment: Check that the number of mbuf segments does not exceed HW limits.
4. offload: Check for use of an unsupported offload flag.

Parameter format: "mbuf_check=<case>" or "mbuf_check=[<case1>,<case2>]"
e.g.: dpdk-testpmd -a 0000:81:00.0,mbuf_check=[mbuf,size] -- -i

Signed-off-by: Mingjin Ye <mingjinx.ye@intel.com>
---
v2: rebase.
---
v3: Modify the commit log.
---
v4: Change the commit log.
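[Editor's illustration, not part of the patch: once a port has been started with the
mbuf_check devarg, the new tx_mbuf_error_packets counter can be read back through the
generic xstats API. A minimal sketch, assuming a hypothetical helper name and the
standard rte_eth_xstats_get_names()/rte_eth_xstats_get() calls:]

#include <string.h>
#include <rte_ethdev.h>

/* Hypothetical helper, for illustration only: return the value of the
 * "tx_mbuf_error_packets" xstat on the given port, or 0 if it is absent.
 */
static uint64_t
get_tx_mbuf_errors(uint16_t port_id)
{
	int i, nb = rte_eth_xstats_get_names(port_id, NULL, 0);

	if (nb <= 0)
		return 0;

	struct rte_eth_xstat_name names[nb];
	struct rte_eth_xstat xstats[nb];

	/* Fetch names and values; both arrays are indexed by xstat id. */
	if (rte_eth_xstats_get_names(port_id, names, nb) != nb ||
	    rte_eth_xstats_get(port_id, xstats, nb) != nb)
		return 0;

	for (i = 0; i < nb; i++)
		if (strcmp(names[i].name, "tx_mbuf_error_packets") == 0)
			return xstats[i].value;

	return 0;
}

[In testpmd the same counter is visible via "show port xstats all", as noted in the
documentation hunk below.]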
---
 doc/guides/nics/ice.rst      |  14 ++++
 drivers/net/ice/ice_ethdev.c | 108 +++++++++++++++++++++++-
 drivers/net/ice/ice_ethdev.h |  23 +++++
 drivers/net/ice/ice_rxtx.c   | 158 ++++++++++++++++++++++++++++++++---
 drivers/net/ice/ice_rxtx.h   |  20 +++++
 5 files changed, 312 insertions(+), 11 deletions(-)

diff --git a/doc/guides/nics/ice.rst b/doc/guides/nics/ice.rst
index 8f33751577..53b4a79095 100644
--- a/doc/guides/nics/ice.rst
+++ b/doc/guides/nics/ice.rst
@@ -257,6 +257,20 @@ Runtime Configuration
   As a trade-off, this configuration may cause the packet processing performance
   degradation due to the PCI bandwidth limitation.
 
+- ``Tx diagnostics`` (default ``not enabled``)
+
+  Set the ``devargs`` parameter ``mbuf_check`` to enable Tx diagnostics.
+  For example, ``-a 81:00.0,mbuf_check=<case>`` or
+  ``-a 81:00.0,mbuf_check=[<case1>,<case2>...]``.
+  Thereafter, ``rte_eth_xstats_get()`` can be used to get the error counts,
+  which are collected in ``tx_mbuf_error_packets`` xstats.
+  In testpmd these can be shown via: ``testpmd> show port xstats all``.
+  Supported values for the ``case`` parameter are:
+
+  * mbuf: Check for corrupted mbuf.
+  * size: Check min/max packet length according to the HW spec.
+  * segment: Check that the number of mbuf segments does not exceed HW limits.
+  * offload: Check for use of an unsupported offload flag.
+
 Driver compilation and testing
 ------------------------------
 
diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c
index f07b236ad4..b2e1a664d4 100644
--- a/drivers/net/ice/ice_ethdev.c
+++ b/drivers/net/ice/ice_ethdev.c
@@ -12,6 +12,7 @@
 #include
 #include
+#include
 
 #include "eal_firmware.h"
 
@@ -34,6 +35,7 @@
 #define ICE_HW_DEBUG_MASK_ARG "hw_debug_mask"
 #define ICE_ONE_PPS_OUT_ARG "pps_out"
 #define ICE_RX_LOW_LATENCY_ARG "rx_low_latency"
+#define ICE_MBUF_CHECK_ARG "mbuf_check"
 
 #define ICE_CYCLECOUNTER_MASK 0xffffffffffffffffULL
 
@@ -49,6 +51,7 @@ static const char * const ice_valid_args[] = {
 	ICE_ONE_PPS_OUT_ARG,
 	ICE_RX_LOW_LATENCY_ARG,
 	ICE_DEFAULT_MAC_DISABLE,
+	ICE_MBUF_CHECK_ARG,
 	NULL
 };
 
@@ -320,6 +323,14 @@ static const struct ice_xstats_name_off ice_stats_strings[] = {
 #define ICE_NB_ETH_XSTATS (sizeof(ice_stats_strings) / \
 		sizeof(ice_stats_strings[0]))
 
+static const struct ice_xstats_name_off ice_mbuf_strings[] = {
+	{"tx_mbuf_error_packets", offsetof(struct ice_mbuf_stats,
+		tx_pkt_errors)},
+};
+
+#define ICE_NB_MBUF_XSTATS (sizeof(ice_mbuf_strings) / \
+		sizeof(ice_mbuf_strings[0]))
+
 static const struct ice_xstats_name_off ice_hw_port_strings[] = {
 	{"tx_link_down_dropped", offsetof(struct ice_hw_port_stats,
 		tx_dropped_link_down)},
@@ -2062,6 +2073,58 @@ handle_pps_out_arg(__rte_unused const char *key, const char *value,
 	return 0;
 }
 
+static int
+ice_parse_mbuf_check(__rte_unused const char *key, const char *value, void *args)
+{
+	char *cur;
+	char *tmp;
+	int str_len;
+	int valid_len;
+
+	int ret = 0;
+	uint64_t *mc_flags = args;
+	char *str2 = strdup(value);
+	if (str2 == NULL)
+		return -1;
+
+	str_len = strlen(str2);
+	if (str_len == 0) {
+		ret = -1;
+		goto err_end;
+	}
+
+	/* Try stripping the outer square brackets of the parameter string. */
+	str_len = strlen(str2);
+	if (str2[0] == '[' && str2[str_len - 1] == ']') {
+		if (str_len < 3) {
+			ret = -1;
+			goto err_end;
+		}
+		valid_len = str_len - 2;
+		memmove(str2, str2 + 1, valid_len);
+		memset(str2 + valid_len, '\0', 2);
+	}
+
+	cur = strtok_r(str2, ",", &tmp);
+	while (cur != NULL) {
+		if (!strcmp(cur, "mbuf"))
+			*mc_flags |= ICE_MBUF_CHECK_F_TX_MBUF;
+		else if (!strcmp(cur, "size"))
+			*mc_flags |= ICE_MBUF_CHECK_F_TX_SIZE;
+		else if (!strcmp(cur, "segment"))
+			*mc_flags |= ICE_MBUF_CHECK_F_TX_SEGMENT;
+		else if (!strcmp(cur, "offload"))
+			*mc_flags |= ICE_MBUF_CHECK_F_TX_OFFLOAD;
+		else
+			PMD_DRV_LOG(ERR, "Unsupported diagnostic type: %s", cur);
+		cur = strtok_r(NULL, ",", &tmp);
+	}
+
+err_end:
+	free(str2);
+	return ret;
+}
+
 static int ice_parse_devargs(struct rte_eth_dev *dev)
 {
 	struct ice_adapter *ad =
@@ -2118,6 +2181,11 @@ static int ice_parse_devargs(struct rte_eth_dev *dev)
 	if (ret)
 		goto bail;
 
+	ret = rte_kvargs_process(kvlist, ICE_MBUF_CHECK_ARG,
+				 &ice_parse_mbuf_check, &ad->devargs.mbuf_check);
+	if (ret)
+		goto bail;
+
 	ret = rte_kvargs_process(kvlist, ICE_RX_LOW_LATENCY_ARG,
 				 &parse_bool, &ad->devargs.rx_low_latency);
@@ -6162,6 +6230,9 @@ ice_stats_reset(struct rte_eth_dev *dev)
 	/* read the stats, reading current register values into offset */
 	ice_read_stats_registers(pf, hw);
 
+	memset(&pf->mbuf_stats, 0,
+	       sizeof(struct ice_mbuf_stats));
+
 	return 0;
 }
 
@@ -6170,17 +6241,33 @@ ice_xstats_calc_num(void)
 {
 	uint32_t num;
 
-	num = ICE_NB_ETH_XSTATS + ICE_NB_HW_PORT_XSTATS;
+	num = ICE_NB_ETH_XSTATS + ICE_NB_MBUF_XSTATS + ICE_NB_HW_PORT_XSTATS;
 
 	return num;
 }
 
+static void
+ice_update_mbuf_stats(struct rte_eth_dev *ethdev,
+		struct ice_mbuf_stats *mbuf_stats)
+{
+	uint16_t idx;
+	struct ice_tx_queue *txq;
+
+	for (idx = 0; idx < ethdev->data->nb_tx_queues; idx++) {
+		txq = ethdev->data->tx_queues[idx];
+		mbuf_stats->tx_pkt_errors += txq->mbuf_errors;
+	}
+}
+
 static int
 ice_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 	       unsigned int n)
 {
 	struct ice_pf *pf = ICE_DEV_PRIVATE_TO_PF(dev->data->dev_private);
 	struct ice_hw *hw = ICE_DEV_PRIVATE_TO_HW(dev->data->dev_private);
+	struct ice_adapter *adapter =
+		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	struct ice_mbuf_stats mbuf_stats = {0};
 	unsigned int i;
 	unsigned int count;
 	struct ice_hw_port_stats *hw_stats = &pf->stats;
@@ -6196,6 +6283,9 @@ ice_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 
 	count = 0;
 
+	if (adapter->devargs.mbuf_check)
+		ice_update_mbuf_stats(dev, &mbuf_stats);
+
 	/* Get stats from ice_eth_stats struct */
 	for (i = 0; i < ICE_NB_ETH_XSTATS; i++) {
 		xstats[count].value =
@@ -6205,6 +6295,15 @@ ice_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 		count++;
 	}
 
+	/* Get stats from ice_mbuf_stats struct */
+	for (i = 0; i < ICE_NB_MBUF_XSTATS; i++) {
+		xstats[count].value =
+			*(uint64_t *)((char *)&mbuf_stats +
+			ice_mbuf_strings[i].offset);
+		xstats[count].id = count;
+		count++;
+	}
+
 	/* Get individual stats from ice_hw_port struct */
 	for (i = 0; i < ICE_NB_HW_PORT_XSTATS; i++) {
 		xstats[count].value =
@@ -6236,6 +6335,13 @@ static int ice_xstats_get_names(__rte_unused struct rte_eth_dev *dev,
 		count++;
 	}
 
+	/* Get stats from ice_mbuf_stats struct */
+	for (i = 0; i < ICE_NB_MBUF_XSTATS; i++) {
+		strlcpy(xstats_names[count].name, ice_mbuf_strings[i].name,
+			sizeof(xstats_names[count].name));
+		count++;
+	}
+
 	/* Get individual stats from ice_hw_port struct */
 	for (i = 0; i < ICE_NB_HW_PORT_XSTATS; i++) {
 		strlcpy(xstats_names[count].name,
 			ice_hw_port_strings[i].name,
diff --git a/drivers/net/ice/ice_ethdev.h b/drivers/net/ice/ice_ethdev.h
index 008a7a23b9..4740a85405 100644
--- a/drivers/net/ice/ice_ethdev.h
+++ b/drivers/net/ice/ice_ethdev.h
@@ -497,6 +497,10 @@ struct ice_tm_conf {
 	bool clear_on_fail;
 };
 
+struct ice_mbuf_stats {
+	uint64_t tx_pkt_errors;
+};
+
 struct ice_pf {
 	struct ice_adapter *adapter; /* The adapter this PF associate to */
 	struct ice_vsi *main_vsi; /* pointer to main VSI structure */
@@ -526,6 +530,7 @@ struct ice_pf {
 	uint16_t fdir_fltr_cnt[ICE_FLTR_PTYPE_MAX][ICE_FD_HW_SEG_MAX];
 	struct ice_hw_port_stats stats_offset;
 	struct ice_hw_port_stats stats;
+	struct ice_mbuf_stats mbuf_stats;
 	/* internal packet statistics, it should be excluded from the total */
 	struct ice_eth_stats internal_stats_offset;
 	struct ice_eth_stats internal_stats;
@@ -564,6 +569,7 @@ struct ice_devargs {
 	uint8_t xtr_flag_offs[PROTO_XTR_MAX];
 	/* Name of the field. */
 	char xtr_field_name[RTE_MBUF_DYN_NAMESIZE];
+	uint64_t mbuf_check;
 };
 
 /**
@@ -582,6 +588,21 @@ struct ice_rss_prof_info {
 	bool symm;
 };
 
+#define ICE_MBUF_CHECK_F_TX_MBUF	(1ULL << 0)
+#define ICE_MBUF_CHECK_F_TX_SIZE	(1ULL << 1)
+#define ICE_MBUF_CHECK_F_TX_SEGMENT	(1ULL << 2)
+#define ICE_MBUF_CHECK_F_TX_OFFLOAD	(1ULL << 3)
+
+enum ice_tx_burst_type {
+	ICE_TX_DEFAULT,
+	ICE_TX_SIMPLE,
+	ICE_TX_VEC_SSE,
+	ICE_TX_VEC_AVX2,
+	ICE_TX_VEC_AVX2_OFFLOAD,
+	ICE_TX_VEC_AVX512,
+	ICE_TX_VEC_AVX512_OFFLOAD,
+};
+
 /**
  * Structure to store private data for each PF/VF instance.
  */
@@ -599,6 +620,8 @@ struct ice_adapter {
 	struct ice_devargs devargs;
 	enum ice_pkg_type active_pkg_type; /* loaded ddp package type */
 	uint16_t fdir_ref_cnt;
+	/* For vector PMD */
+	enum ice_tx_burst_type tx_burst_type;
 	/* For PTP */
 	struct rte_timecounter systime_tc;
 	struct rte_timecounter rx_tstamp_tc;
diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index 1f33700f1d..5836bf82f6 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -3695,6 +3695,132 @@ ice_check_empty_mbuf(struct rte_mbuf *tx_pkt)
 	return 0;
 }
 
+static
+const eth_tx_burst_t ice_tx_pkt_burst_ops[] = {
+	[ICE_TX_DEFAULT] = ice_xmit_pkts,
+	[ICE_TX_SIMPLE] = ice_xmit_pkts_simple,
+#ifdef RTE_ARCH_X86
+	[ICE_TX_VEC_SSE] = ice_xmit_pkts_vec,
+	[ICE_TX_VEC_AVX2] = ice_xmit_pkts_vec_avx2,
+	[ICE_TX_VEC_AVX2_OFFLOAD] = ice_xmit_pkts_vec_avx2_offload,
+#ifdef CC_AVX512_SUPPORT
+	[ICE_TX_VEC_AVX512] = ice_xmit_pkts_vec_avx512,
+	[ICE_TX_VEC_AVX512_OFFLOAD] = ice_xmit_pkts_vec_avx512_offload,
+#endif
+#endif
+};
+
+/* Tx mbuf check */
+static uint16_t
+ice_xmit_pkts_check(void *tx_queue, struct rte_mbuf **tx_pkts,
+		uint16_t nb_pkts)
+{
+	struct ice_tx_queue *txq = tx_queue;
+	uint16_t idx;
+	struct rte_mbuf *mb;
+	bool pkt_error = false;
+	uint16_t good_pkts = nb_pkts;
+	const char *reason = NULL;
+	struct ice_adapter *adapter = txq->vsi->adapter;
+	enum ice_tx_burst_type tx_burst_type =
+		txq->vsi->adapter->tx_burst_type;
+	uint64_t ol_flags;
+
+	for (idx = 0; idx < nb_pkts; idx++) {
+		mb = tx_pkts[idx];
+		ol_flags = mb->ol_flags;
+
+		if ((adapter->devargs.mbuf_check & ICE_MBUF_CHECK_F_TX_MBUF) &&
+		    (rte_mbuf_check(mb, 0, &reason) != 0)) {
+			PMD_TX_LOG(ERR, "INVALID mbuf: %s\n", reason);
+			pkt_error = true;
+			break;
+		}
+
+		if ((adapter->devargs.mbuf_check & ICE_MBUF_CHECK_F_TX_SIZE) &&
+		    (mb->data_len > mb->pkt_len ||
+		     mb->data_len < ICE_TX_MIN_PKT_LEN ||
+		     mb->data_len > ICE_FRAME_SIZE_MAX)) {
+			PMD_TX_LOG(ERR, "INVALID mbuf: data_len (%u) is out "
+				"of range, reasonable range (%d - %d)\n",
+				mb->data_len, ICE_TX_MIN_PKT_LEN, ICE_FRAME_SIZE_MAX);
+			pkt_error = true;
+			break;
+		}
+
+		if (adapter->devargs.mbuf_check & ICE_MBUF_CHECK_F_TX_SEGMENT) {
+			if (!(ol_flags & RTE_MBUF_F_TX_TCP_SEG)) {
+				/**
+				 * No TSO case: nb_segs and pkt_len must not
+				 * exceed the limits.
+				 */
+				if (mb->nb_segs > ICE_TX_MTU_SEG_MAX) {
+					PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs (%d) exceeds "
+						"HW limit, maximum allowed value is %d\n",
+						mb->nb_segs, ICE_TX_MTU_SEG_MAX);
+					pkt_error = true;
+					break;
+				}
+				if (mb->pkt_len > ICE_FRAME_SIZE_MAX) {
+					PMD_TX_LOG(ERR, "INVALID mbuf: pkt_len (%d) exceeds "
+						"HW limit, maximum allowed value is %d\n",
+						mb->pkt_len, ICE_FRAME_SIZE_MAX);
+					pkt_error = true;
+					break;
+				}
+			} else if (ol_flags & RTE_MBUF_F_TX_TCP_SEG) {
+				/**
+				 * TSO case: tso_segsz, nb_segs and pkt_len must
+				 * not exceed the limits.
+				 */
+				if (mb->tso_segsz < ICE_MIN_TSO_MSS ||
+				    mb->tso_segsz > ICE_MAX_TSO_MSS) {
+					/**
+					 * MSS outside the range is considered malicious.
+					 */
+					PMD_TX_LOG(ERR, "INVALID mbuf: tso_segsz (%u) is out "
+						"of range, reasonable range (%d - %u)\n",
+						mb->tso_segsz, ICE_MIN_TSO_MSS, ICE_MAX_TSO_MSS);
+					pkt_error = true;
+					break;
+				}
+				if (mb->nb_segs >
+				    ((struct ice_tx_queue *)tx_queue)->nb_tx_desc) {
+					PMD_TX_LOG(ERR, "INVALID mbuf: nb_segs out "
+						"of ring length\n");
+					pkt_error = true;
+					break;
+				}
+			}
+		}
+
+		if (adapter->devargs.mbuf_check & ICE_MBUF_CHECK_F_TX_OFFLOAD) {
+			if (ol_flags & ICE_TX_OFFLOAD_NOTSUP_MASK) {
+				PMD_TX_LOG(ERR, "INVALID mbuf: TX offload "
+					"is not supported\n");
+				pkt_error = true;
+				break;
+			}
+
+			if (!rte_validate_tx_offload(mb)) {
+				PMD_TX_LOG(ERR, "INVALID mbuf: TX offload "
+					"setup error\n");
+				pkt_error = true;
+				break;
+			}
+		}
+	}
+
+	if (pkt_error) {
+		txq->mbuf_errors++;
+		good_pkts = idx;
+		if (good_pkts == 0)
+			return 0;
+	}
+
+	return ice_tx_pkt_burst_ops[tx_burst_type](tx_queue,
+						tx_pkts, good_pkts);
+}
+
 uint16_t
 ice_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	      uint16_t nb_pkts)
@@ -3763,6 +3889,8 @@ ice_set_tx_function(struct rte_eth_dev *dev)
 {
 	struct ice_adapter *ad =
 		ICE_DEV_PRIVATE_TO_ADAPTER(dev->data->dev_private);
+	enum ice_tx_burst_type tx_burst_type = ICE_TX_DEFAULT;
+	int mbuf_check = ad->devargs.mbuf_check;
 #ifdef RTE_ARCH_X86
 	struct ice_tx_queue *txq;
 	int i;
@@ -3817,14 +3945,13 @@ ice_set_tx_function(struct rte_eth_dev *dev)
 				PMD_DRV_LOG(NOTICE,
 					    "Using AVX512 OFFLOAD Vector Tx (port %d).",
 					    dev->data->port_id);
-				dev->tx_pkt_burst =
-					ice_xmit_pkts_vec_avx512_offload;
+				tx_burst_type = ICE_TX_VEC_AVX512_OFFLOAD;
 				dev->tx_pkt_prepare = ice_prep_pkts;
 			} else {
 				PMD_DRV_LOG(NOTICE,
 					    "Using AVX512 Vector Tx (port %d).",
 					    dev->data->port_id);
-				dev->tx_pkt_burst = ice_xmit_pkts_vec_avx512;
+				tx_burst_type = ICE_TX_VEC_AVX512;
 			}
 #endif
 		} else {
@@ -3832,32 +3959,43 @@ ice_set_tx_function(struct rte_eth_dev *dev)
 				PMD_DRV_LOG(NOTICE,
 					    "Using AVX2 OFFLOAD Vector Tx (port %d).",
 					    dev->data->port_id);
-				dev->tx_pkt_burst =
-					ice_xmit_pkts_vec_avx2_offload;
+				tx_burst_type = ICE_TX_VEC_AVX2_OFFLOAD;
 				dev->tx_pkt_prepare = ice_prep_pkts;
 			} else {
 				PMD_DRV_LOG(DEBUG, "Using %sVector Tx (port %d).",
 					    ad->tx_use_avx2 ? "avx2 " : "",
 					    dev->data->port_id);
-				dev->tx_pkt_burst = ad->tx_use_avx2 ?
-						    ice_xmit_pkts_vec_avx2 :
-						    ice_xmit_pkts_vec;
+				tx_burst_type = ad->tx_use_avx2 ?
+					ICE_TX_VEC_AVX2 : ICE_TX_VEC_SSE;
 			}
 		}
+
+		if (mbuf_check) {
+			ad->tx_burst_type = tx_burst_type;
+			dev->tx_pkt_burst = ice_xmit_pkts_check;
+		} else {
+			dev->tx_pkt_burst = ice_tx_pkt_burst_ops[tx_burst_type];
+		}
 		return;
 	}
 #endif
 
 	if (ad->tx_simple_allowed) {
 		PMD_INIT_LOG(DEBUG, "Simple tx finally be used.");
-		dev->tx_pkt_burst = ice_xmit_pkts_simple;
+		tx_burst_type = ICE_TX_SIMPLE;
 		dev->tx_pkt_prepare = NULL;
 	} else {
 		PMD_INIT_LOG(DEBUG, "Normal tx finally be used.");
-		dev->tx_pkt_burst = ice_xmit_pkts;
+		tx_burst_type = ICE_TX_DEFAULT;
 		dev->tx_pkt_prepare = ice_prep_pkts;
 	}
+
+	if (mbuf_check) {
+		ad->tx_burst_type = tx_burst_type;
+		dev->tx_pkt_burst = ice_xmit_pkts_check;
+	} else {
+		dev->tx_pkt_burst = ice_tx_pkt_burst_ops[tx_burst_type];
+	}
 }
 
 static const struct {
diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h
index 52e52cff34..f7276cfc9f 100644
--- a/drivers/net/ice/ice_rxtx.h
+++ b/drivers/net/ice/ice_rxtx.h
@@ -45,6 +45,25 @@
 
 #define ICE_TX_MIN_PKT_LEN 17
 
+#define ICE_TX_OFFLOAD_MASK (		\
+		RTE_MBUF_F_TX_OUTER_IPV6 |	\
+		RTE_MBUF_F_TX_OUTER_IPV4 |	\
+		RTE_MBUF_F_TX_OUTER_IP_CKSUM |	\
+		RTE_MBUF_F_TX_VLAN |		\
+		RTE_MBUF_F_TX_IPV6 |		\
+		RTE_MBUF_F_TX_IPV4 |		\
+		RTE_MBUF_F_TX_IP_CKSUM |	\
+		RTE_MBUF_F_TX_L4_MASK |		\
+		RTE_MBUF_F_TX_IEEE1588_TMST |	\
+		RTE_MBUF_F_TX_TCP_SEG |		\
+		RTE_MBUF_F_TX_QINQ |		\
+		RTE_MBUF_F_TX_TUNNEL_MASK |	\
+		RTE_MBUF_F_TX_UDP_SEG |		\
+		RTE_MBUF_F_TX_OUTER_UDP_CKSUM)
+
+#define ICE_TX_OFFLOAD_NOTSUP_MASK \
+		(RTE_MBUF_F_TX_OFFLOAD_MASK ^ ICE_TX_OFFLOAD_MASK)
+
 extern uint64_t ice_timestamp_dynflag;
 extern int ice_timestamp_dynfield_offset;
 
@@ -164,6 +183,7 @@ struct ice_tx_queue {
 	struct ice_vsi *vsi; /* the VSI this queue belongs to */
 	uint16_t tx_next_dd;
 	uint16_t tx_next_rs;
+	uint64_t mbuf_errors;
 	bool tx_deferred_start; /* don't start this queue in dev start */
 	bool q_set; /* indicate if tx queue has been configured */
 	ice_tx_release_mbufs_t tx_rel_mbufs;
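
[Editor's usage note, not part of the patch: besides passing the devarg on the EAL
command line as in the commit log, the same diagnostics could be enabled when the
device is attached from an application, for example through rte_dev_probe(). A minimal
sketch with a placeholder PCI address:]

#include <rte_dev.h>

/* Attach an ice port with Tx diagnostics enabled for the "mbuf" and
 * "size" cases; the PCI address below is a placeholder.
 */
static int
attach_ice_with_tx_diag(void)
{
	return rte_dev_probe("0000:81:00.0,mbuf_check=[mbuf,size]");
}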