From patchwork Wed Jun 1 13:50:56 2022
X-Patchwork-Submitter: "Wu, WenxuanX"
X-Patchwork-Id: 112234
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: wenxuanx.wu@intel.com
To: thomas@monjalon.net, andrew.rybchenko@oktetlabs.ru, xiaoyun.li@intel.com, ferruh.yigit@xilinx.com, aman.deep.singh@intel.com, dev@dpdk.org, yuying.zhang@intel.com, qi.z.zhang@intel.com, jerinjacobk@gmail.com
Cc: stephen@networkplumber.org, Wenxuan Wu, Xuan Ding, Yuan Wang, Ray Kinsella
Subject: [PATCH v8 1/3] ethdev: introduce protocol hdr based buffer split
Date: Wed, 1 Jun 2022 13:50:56 +0000
Message-Id: <20220601135059.958882-2-wenxuanx.wu@intel.com>
In-Reply-To: <20220601135059.958882-1-wenxuanx.wu@intel.com>
References: <20220303060136.36427-1-xuan.ding@intel.com> <20220601135059.958882-1-wenxuanx.wu@intel.com>

From: Wenxuan Wu

Currently, Rx buffer split supports length based split. With the Rx queue offload RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT enabled and Rx packet segments configured, the PMD is able to split received packets into multiple segments. However, length based buffer split is not suitable for NICs that split based on protocol headers. Given an arbitrarily variable length in an Rx packet segment, it is almost impossible to pass a fixed protocol header to the PMD. Besides, tunneling makes the composition of a packet variable, which makes the situation even worse.
This patch extends the current buffer split to support protocol header based buffer split. A new proto_hdr field is introduced in the reserved field of the rte_eth_rxseg_split structure to specify the protocol header. The proto_hdr field defines the split position of a packet: splitting always happens after the protocol header defined in the Rx packet segment. When the Rx queue offload RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT is enabled and the corresponding protocol header is configured, the PMD will split the ingress packets into multiple segments.

struct rte_eth_rxseg_split {
        struct rte_mempool *mp; /* memory pool to allocate segment from */
        uint16_t length; /* segment maximal data length, configures "split point" */
        uint16_t offset; /* data offset from beginning of mbuf data buffer */
        uint32_t proto_hdr; /* inner/outer L2/L3/L4 protocol header, configures "split point" */
};

Both inner and outer L2/L3/L4 level protocol header split can be supported. The corresponding protocol header capabilities are: RTE_PTYPE_L2_ETHER, RTE_PTYPE_L3_IPV4, RTE_PTYPE_L3_IPV6, RTE_PTYPE_L4_TCP, RTE_PTYPE_L4_UDP, RTE_PTYPE_L4_SCTP, RTE_PTYPE_INNER_L2_ETHER, RTE_PTYPE_INNER_L3_IPV4, RTE_PTYPE_INNER_L3_IPV6, RTE_PTYPE_INNER_L4_TCP, RTE_PTYPE_INNER_L4_UDP, RTE_PTYPE_INNER_L4_SCTP.

For example, let's suppose we configured the Rx queue with the following segments:
    seg0 - pool0, proto_hdr0=RTE_PTYPE_L3_IPV4, off0=2B
    seg1 - pool1, proto_hdr1=RTE_PTYPE_L4_UDP, off1=128B
    seg2 - pool2, off2=0B

A packet consisting of MAC_IPV4_UDP_PAYLOAD will be split as follows:
    seg0 - ipv4 header @ RTE_PKTMBUF_HEADROOM + 2 in mbuf from pool0
    seg1 - udp header @ 128 in mbuf from pool1
    seg2 - payload @ 0 in mbuf from pool2

Now buffer split can be configured in two modes. For length based buffer split, the mp, length and offset fields in the Rx packet segment should be configured, while the proto_hdr field should not be configured. For protocol header based buffer split, the mp, offset and proto_hdr fields in the Rx packet segment should be configured, while the length field should not be configured.

The split limitations imposed by the underlying PMD are reported in the rte_eth_dev_info->rx_seg_capa field. The memory attributes of the split parts may also differ, e.g. DPDK memory and external memory, respectively.

Signed-off-by: Xuan Ding
Signed-off-by: Yuan Wang
Signed-off-by: Wenxuan Wu
Reviewed-by: Qi Zhang
Acked-by: Ray Kinsella
---
 lib/ethdev/rte_ethdev.c | 40 +++++++++++++++++++++++++++++++++-------
 lib/ethdev/rte_ethdev.h | 28 +++++++++++++++++++++++++++-
 2 files changed, 60 insertions(+), 8 deletions(-)

diff --git a/lib/ethdev/rte_ethdev.c b/lib/ethdev/rte_ethdev.c index 29a3d80466..fbd55cdd9d 100644 --- a/lib/ethdev/rte_ethdev.c +++ b/lib/ethdev/rte_ethdev.c @@ -1661,6 +1661,7 @@ rte_eth_rx_queue_check_split(const struct rte_eth_rxseg_split *rx_seg, struct rte_mempool *mpl = rx_seg[seg_idx].mp; uint32_t length = rx_seg[seg_idx].length; uint32_t offset = rx_seg[seg_idx].offset; + uint32_t proto_hdr = rx_seg[seg_idx].proto_hdr; if (mpl == NULL) { RTE_ETHDEV_LOG(ERR, "null mempool pointer\n"); @@ -1694,13 +1695,38 @@ rte_eth_rx_queue_check_split(const struct rte_eth_rxseg_split *rx_seg, } offset += seg_idx != 0 ? 0 : RTE_PKTMBUF_HEADROOM; *mbp_buf_size = rte_pktmbuf_data_room_size(mpl); - length = length != 0 ?
length : *mbp_buf_size; - if (*mbp_buf_size < length + offset) { - RTE_ETHDEV_LOG(ERR, - "%s mbuf_data_room_size %u < %u (segment length=%u + segment offset=%u)\n", - mpl->name, *mbp_buf_size, - length + offset, length, offset); - return -EINVAL; + if (proto_hdr == RTE_PTYPE_UNKNOWN) { + /* Split at fixed length. */ + length = length != 0 ? length : *mbp_buf_size; + if (*mbp_buf_size < length + offset) { + RTE_ETHDEV_LOG(ERR, + "%s mbuf_data_room_size %u < %u (segment length=%u + segment offset=%u)\n", + mpl->name, *mbp_buf_size, + length + offset, length, offset); + return -EINVAL; + } + } else { + /* Split after specified protocol header. */ + if (!(proto_hdr & RTE_BUFFER_SPLIT_PROTO_HDR_MASK)) { + RTE_ETHDEV_LOG(ERR, + "Protocol header %u not supported)\n", + proto_hdr); + return -EINVAL; + } + + if (length != 0) { + RTE_ETHDEV_LOG(ERR, "segment length should be set to zero in protocol header " + "based buffer split\n"); + return -EINVAL; + } + + if (*mbp_buf_size < offset) { + RTE_ETHDEV_LOG(ERR, + "%s mbuf_data_room_size %u < %u segment offset)\n", + mpl->name, *mbp_buf_size, + offset); + return -EINVAL; + } } } return 0; diff --git a/lib/ethdev/rte_ethdev.h b/lib/ethdev/rte_ethdev.h index 04cff8ee10..0cd9dd6cc0 100644 --- a/lib/ethdev/rte_ethdev.h +++ b/lib/ethdev/rte_ethdev.h @@ -1187,6 +1187,9 @@ struct rte_eth_txmode { * mbuf) the following data will be pushed to the next segment * up to its own length, and so on. * + * - The proto_hdrs in the elements define the split position of + * received packets. + * * - If the length in the segment description element is zero * the actual buffer size will be deduced from the appropriate * memory pool properties. @@ -1197,14 +1200,37 @@ struct rte_eth_txmode { * - pool from the last valid element * - the buffer size from this pool * - zero offset + * + * - Length based buffer split: + * - mp, length, offset should be configured. + * - The proto_hdr field should not be configured. + * + * - Protocol header based buffer split: + * - mp, offset, proto_hdr should be configured. + * - The length field should not be configured. */ struct rte_eth_rxseg_split { struct rte_mempool *mp; /**< Memory pool to allocate segment from. */ uint16_t length; /**< Segment data length, configures split point. */ uint16_t offset; /**< Data offset from beginning of mbuf data buffer. */ - uint32_t reserved; /**< Reserved field. */ + uint32_t proto_hdr; /**< Inner/outer L2/L3/L4 protocol header, configures split point. */ }; +/* Buffer split protocol header capability. */ +#define RTE_BUFFER_SPLIT_PROTO_HDR_MASK ( \ + RTE_PTYPE_L2_ETHER | \ + RTE_PTYPE_L3_IPV4 | \ + RTE_PTYPE_L3_IPV6 | \ + RTE_PTYPE_L4_TCP | \ + RTE_PTYPE_L4_UDP | \ + RTE_PTYPE_L4_SCTP | \ + RTE_PTYPE_INNER_L2_ETHER | \ + RTE_PTYPE_INNER_L3_IPV4 | \ + RTE_PTYPE_INNER_L3_IPV6 | \ + RTE_PTYPE_INNER_L4_TCP | \ + RTE_PTYPE_INNER_L4_UDP | \ + RTE_PTYPE_INNER_L4_SCTP) + /** * @warning * @b EXPERIMENTAL: this structure may change without prior notice. 
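For reference, below is a minimal sketch of how an application could configure protocol header based buffer split through the extended rte_eth_rxseg_split. It is illustrative only and not part of the patch: the helper name setup_proto_split_queue is hypothetical, and it assumes the application has already created the two mempools hdr_pool and pay_pool and that the port reports RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT.

#include <string.h>
#include <rte_ethdev.h>
#include <rte_mbuf_ptype.h>

/* Illustrative sketch: hdr_pool/pay_pool and the port/queue identifiers are
 * assumed to be created elsewhere; error handling is reduced to a minimum. */
static int
setup_proto_split_queue(uint16_t port_id, uint16_t queue_id, uint16_t nb_rxd,
			unsigned int socket_id, struct rte_mempool *hdr_pool,
			struct rte_mempool *pay_pool)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rxconf rxconf;
	union rte_eth_rxseg rx_seg[2];
	int ret;

	ret = rte_eth_dev_info_get(port_id, &dev_info);
	if (ret != 0)
		return ret;

	rxconf = dev_info.default_rxconf;
	rxconf.offloads = RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;

	memset(rx_seg, 0, sizeof(rx_seg));
	/* Segment 0: split right after the outer IPv4 header, mbufs from
	 * hdr_pool. length must stay 0 for protocol header based split. */
	rx_seg[0].split.mp = hdr_pool;
	rx_seg[0].split.proto_hdr = RTE_PTYPE_L3_IPV4;
	/* Segment 1: the remaining payload, mbufs from pay_pool. */
	rx_seg[1].split.mp = pay_pool;

	rxconf.rx_seg = rx_seg;
	rxconf.rx_nseg = 2;

	/* mb_pool must be NULL when rx_seg/rx_nseg describe the buffers. */
	return rte_eth_rx_queue_setup(port_id, queue_id, nb_rxd, socket_id,
				      &rxconf, NULL);
}

With the net/ice patch that follows in this series, such a queue would then deliver each packet as two chained mbufs: the headers up to and including IPv4 in an mbuf from hdr_pool, and the rest of the packet in an mbuf from pay_pool.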
From patchwork Wed Jun 1 13:50:58 2022
X-Patchwork-Submitter: "Wu, WenxuanX"
X-Patchwork-Id: 112236
X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru
From: wenxuanx.wu@intel.com
To: thomas@monjalon.net, andrew.rybchenko@oktetlabs.ru, xiaoyun.li@intel.com, ferruh.yigit@xilinx.com, aman.deep.singh@intel.com, dev@dpdk.org, yuying.zhang@intel.com, qi.z.zhang@intel.com, jerinjacobk@gmail.com
Cc: stephen@networkplumber.org, Wenxuan Wu, Xuan Ding, Yuan Wang
Subject: [PATCH v8 2/3] net/ice: support buffer split in Rx path
Date: Wed, 1 Jun 2022 13:50:58 +0000
Message-Id: <20220601135059.958882-4-wenxuanx.wu@intel.com>
In-Reply-To: <20220601135059.958882-1-wenxuanx.wu@intel.com>
References: <20220303060136.36427-1-xuan.ding@intel.com> <20220601135059.958882-1-wenxuanx.wu@intel.com>

From: Wenxuan Wu

This patch adds support for protocol based buffer split in the normal Rx data paths. When the Rx queue is configured with a specific protocol type, received packets are split into a protocol header part and a payload part, and the two parts are put into different mempools.

Currently, protocol based buffer split is not supported in vectorized paths.
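With this patch applied, a packet received on such a queue arrives as a two-segment mbuf chain: the protocol header in an mbuf taken from the first (header) mempool and the payload in a chained mbuf taken from the second (payload) mempool. The sketch below shows how an application might inspect such packets; it is illustrative only and the helper name handle_split_pkts is hypothetical.

#include <stdio.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Illustrative only: walk packets received on a queue configured for
 * protocol header based buffer split (header + payload segments). */
static void
handle_split_pkts(uint16_t port_id, uint16_t queue_id)
{
	struct rte_mbuf *pkts[32];
	uint16_t nb_rx, i;

	nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, 32);
	for (i = 0; i < nb_rx; i++) {
		struct rte_mbuf *hdr = pkts[i];   /* from the header pool */
		struct rte_mbuf *pay = hdr->next; /* from the payload pool */

		/* hdr->data_len covers only the protocol headers,
		 * hdr->pkt_len covers the whole packet. */
		printf("pkt %u: %u header bytes, %u payload bytes, %u segs\n",
		       i, (unsigned int)hdr->data_len,
		       pay != NULL ? (unsigned int)pay->data_len : 0,
		       (unsigned int)hdr->nb_segs);

		rte_pktmbuf_free(hdr); /* frees the whole chain */
	}
}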
Signed-off-by: Xuan Ding Signed-off-by: Wenxuan Wu Signed-off-by: Yuan Wang Reviewed-by: Qi Zhang --- drivers/net/ice/ice_ethdev.c | 10 +- drivers/net/ice/ice_rxtx.c | 220 ++++++++++++++++++++++---- drivers/net/ice/ice_rxtx.h | 16 ++ drivers/net/ice/ice_rxtx_vec_common.h | 3 + 4 files changed, 217 insertions(+), 32 deletions(-) diff --git a/drivers/net/ice/ice_ethdev.c b/drivers/net/ice/ice_ethdev.c index 73e550f5fb..ce3f49c863 100644 --- a/drivers/net/ice/ice_ethdev.c +++ b/drivers/net/ice/ice_ethdev.c @@ -3713,7 +3713,8 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM | RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | RTE_ETH_RX_OFFLOAD_RSS_HASH | - RTE_ETH_RX_OFFLOAD_TIMESTAMP; + RTE_ETH_RX_OFFLOAD_TIMESTAMP | + RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT; dev_info->tx_offload_capa |= RTE_ETH_TX_OFFLOAD_QINQ_INSERT | RTE_ETH_TX_OFFLOAD_IPV4_CKSUM | @@ -3725,7 +3726,7 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->flow_type_rss_offloads |= ICE_RSS_OFFLOAD_ALL; } - dev_info->rx_queue_offload_capa = 0; + dev_info->rx_queue_offload_capa = RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT; dev_info->tx_queue_offload_capa = RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE; dev_info->reta_size = pf->hash_lut_size; @@ -3794,6 +3795,11 @@ ice_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info) dev_info->default_rxportconf.ring_size = ICE_BUF_SIZE_MIN; dev_info->default_txportconf.ring_size = ICE_BUF_SIZE_MIN; + dev_info->rx_seg_capa.max_nseg = ICE_RX_MAX_NSEG; + dev_info->rx_seg_capa.multi_pools = 1; + dev_info->rx_seg_capa.offset_allowed = 0; + dev_info->rx_seg_capa.offset_align_log2 = 0; + return 0; } diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c index 2dd2637fbb..77ab258f7f 100644 --- a/drivers/net/ice/ice_rxtx.c +++ b/drivers/net/ice/ice_rxtx.c @@ -282,7 +282,6 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq) /* Set buffer size as the head split is disabled. 
*/ buf_size = (uint16_t)(rte_pktmbuf_data_room_size(rxq->mp) - RTE_PKTMBUF_HEADROOM); - rxq->rx_hdr_len = 0; rxq->rx_buf_len = RTE_ALIGN(buf_size, (1 << ICE_RLAN_CTX_DBUF_S)); rxq->max_pkt_len = RTE_MIN((uint32_t)ICE_SUPPORT_CHAIN_NUM * rxq->rx_buf_len, @@ -311,11 +310,53 @@ ice_program_hw_rx_queue(struct ice_rx_queue *rxq) memset(&rx_ctx, 0, sizeof(rx_ctx)); + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + switch (rxq->rxseg[0].proto_hdr) { + case RTE_PTYPE_L2_ETHER: + rx_ctx.dtype = ICE_RX_DTYPE_HEADER_SPLIT; + rx_ctx.hsplit_1 = ICE_RLAN_RX_HSPLIT_1_SPLIT_L2; + break; + case RTE_PTYPE_INNER_L2_ETHER: + rx_ctx.dtype = ICE_RX_DTYPE_HEADER_SPLIT; + rx_ctx.hsplit_0 = ICE_RLAN_RX_HSPLIT_0_SPLIT_L2; + break; + case RTE_PTYPE_L3_IPV4: + case RTE_PTYPE_L3_IPV6: + case RTE_PTYPE_INNER_L3_IPV4: + case RTE_PTYPE_INNER_L3_IPV6: + case RTE_PTYPE_L3_IPV4 | RTE_PTYPE_L3_IPV6: + rx_ctx.dtype = ICE_RX_DTYPE_HEADER_SPLIT; + rx_ctx.hsplit_0 = ICE_RLAN_RX_HSPLIT_0_SPLIT_IP; + break; + case RTE_PTYPE_L4_TCP: + case RTE_PTYPE_L4_UDP: + case RTE_PTYPE_INNER_L4_TCP: + case RTE_PTYPE_INNER_L4_UDP: + rx_ctx.dtype = ICE_RX_DTYPE_HEADER_SPLIT; + rx_ctx.hsplit_0 = ICE_RLAN_RX_HSPLIT_0_SPLIT_TCP_UDP; + break; + case RTE_PTYPE_L4_SCTP: + case RTE_PTYPE_INNER_L4_SCTP: + rx_ctx.dtype = ICE_RX_DTYPE_HEADER_SPLIT; + rx_ctx.hsplit_0 = ICE_RLAN_RX_HSPLIT_0_SPLIT_SCTP; + break; + case 0: + PMD_DRV_LOG(ERR, "Buffer split protocol must be configured"); + return -EINVAL; + default: + PMD_DRV_LOG(ERR, "Buffer split protocol is not supported"); + return -EINVAL; + } + rxq->rx_hdr_len = ICE_RX_HDR_BUF_SIZE; + } else { + rxq->rx_hdr_len = 0; + rx_ctx.dtype = 0; /* No Protocol Based Buffer Split mode */ + } + rx_ctx.base = rxq->rx_ring_dma / ICE_QUEUE_BASE_ADDR_UNIT; rx_ctx.qlen = rxq->nb_rx_desc; rx_ctx.dbuf = rxq->rx_buf_len >> ICE_RLAN_CTX_DBUF_S; rx_ctx.hbuf = rxq->rx_hdr_len >> ICE_RLAN_CTX_HBUF_S; - rx_ctx.dtype = 0; /* No Header Split mode */ #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC rx_ctx.dsize = 1; /* 32B descriptors */ #endif @@ -401,6 +442,7 @@ ice_alloc_rx_queue_mbufs(struct ice_rx_queue *rxq) for (i = 0; i < rxq->nb_rx_desc; i++) { volatile union ice_rx_flex_desc *rxd; + rxd = &rxq->rx_ring[i]; struct rte_mbuf *mbuf = rte_mbuf_raw_alloc(rxq->mp); if (unlikely(!mbuf)) { @@ -408,8 +450,6 @@ ice_alloc_rx_queue_mbufs(struct ice_rx_queue *rxq) return -ENOMEM; } - rte_mbuf_refcnt_set(mbuf, 1); - mbuf->next = NULL; mbuf->data_off = RTE_PKTMBUF_HEADROOM; mbuf->nb_segs = 1; mbuf->port = rxq->port_id; @@ -417,9 +457,33 @@ ice_alloc_rx_queue_mbufs(struct ice_rx_queue *rxq) dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf)); - rxd = &rxq->rx_ring[i]; - rxd->read.pkt_addr = dma_addr; - rxd->read.hdr_addr = 0; + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + struct rte_mbuf *mbuf_pay; + mbuf_pay = rte_mbuf_raw_alloc(rxq->rxseg[1].mp); + if (unlikely(!mbuf_pay)) { + PMD_DRV_LOG(ERR, "Failed to allocate payload mbuf for RX"); + return -ENOMEM; + } + + mbuf_pay->next = NULL; + mbuf_pay->data_off = RTE_PKTMBUF_HEADROOM; + mbuf_pay->nb_segs = 1; + mbuf_pay->port = rxq->port_id; + mbuf->next = mbuf_pay; + + rxd->read.hdr_addr = dma_addr; + /* The LS bit should be set to zero regardless of + * buffer split enablement. 
+ */ + rxd->read.pkt_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mbuf_pay)); + + } else { + rte_mbuf_refcnt_set(mbuf, 1); + mbuf->next = NULL; + rxd->read.hdr_addr = 0; + rxd->read.pkt_addr = dma_addr; + } + #ifndef RTE_LIBRTE_ICE_16BYTE_RX_DESC rxd->read.rsvd1 = 0; rxd->read.rsvd2 = 0; @@ -443,14 +507,14 @@ _ice_rx_queue_release_mbufs(struct ice_rx_queue *rxq) for (i = 0; i < rxq->nb_rx_desc; i++) { if (rxq->sw_ring[i].mbuf) { - rte_pktmbuf_free_seg(rxq->sw_ring[i].mbuf); + rte_pktmbuf_free(rxq->sw_ring[i].mbuf); rxq->sw_ring[i].mbuf = NULL; } } if (rxq->rx_nb_avail == 0) return; for (i = 0; i < rxq->rx_nb_avail; i++) - rte_pktmbuf_free_seg(rxq->rx_stage[rxq->rx_next_avail + i]); + rte_pktmbuf_free(rxq->rx_stage[rxq->rx_next_avail + i]); rxq->rx_nb_avail = 0; } @@ -742,7 +806,7 @@ ice_fdir_program_hw_rx_queue(struct ice_rx_queue *rxq) rx_ctx.qlen = rxq->nb_rx_desc; rx_ctx.dbuf = rxq->rx_buf_len >> ICE_RLAN_CTX_DBUF_S; rx_ctx.hbuf = rxq->rx_hdr_len >> ICE_RLAN_CTX_HBUF_S; - rx_ctx.dtype = 0; /* No Header Split mode */ + rx_ctx.dtype = 0; /* No Buffer Split mode */ rx_ctx.dsize = 1; /* 32B descriptors */ rx_ctx.rxmax = ICE_ETH_MAX_LEN; /* TPH: Transaction Layer Packet (TLP) processing hints */ @@ -1076,6 +1140,7 @@ ice_rx_queue_setup(struct rte_eth_dev *dev, uint16_t len; int use_def_burst_func = 1; uint64_t offloads; + uint16_t n_seg = rx_conf->rx_nseg; if (nb_desc % ICE_ALIGN_RING_DESC != 0 || nb_desc > ICE_MAX_RING_DESC || @@ -1087,6 +1152,17 @@ ice_rx_queue_setup(struct rte_eth_dev *dev, offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads; + if (mp) + n_seg = 1; + + if (n_seg > 1) { + if (!(offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT)) { + PMD_INIT_LOG(ERR, "port %u queue index %u split offload not configured", + dev->data->port_id, queue_idx); + return -EINVAL; + } + } + /* Free memory if needed */ if (dev->data->rx_queues[queue_idx]) { ice_rx_queue_release(dev->data->rx_queues[queue_idx]); @@ -1098,12 +1174,22 @@ ice_rx_queue_setup(struct rte_eth_dev *dev, sizeof(struct ice_rx_queue), RTE_CACHE_LINE_SIZE, socket_id); + if (!rxq) { PMD_INIT_LOG(ERR, "Failed to allocate memory for " "rx queue data structure"); return -ENOMEM; } - rxq->mp = mp; + + rxq->rxseg_nb = n_seg; + if (n_seg > 1) { + rte_memcpy(rxq->rxseg, rx_conf->rx_seg, + sizeof(struct rte_eth_rxseg_split) * n_seg); + rxq->mp = rxq->rxseg[0].mp; + } else { + rxq->mp = mp; + } + rxq->nb_rx_desc = nb_desc; rxq->rx_free_thresh = rx_conf->rx_free_thresh; rxq->queue_id = queue_idx; @@ -1568,7 +1654,7 @@ ice_rx_scan_hw_ring(struct ice_rx_queue *rxq) struct ice_rx_entry *rxep; struct rte_mbuf *mb; uint16_t stat_err0; - uint16_t pkt_len; + uint16_t pkt_len, hdr_len; int32_t s[ICE_LOOK_AHEAD], nb_dd; int32_t i, j, nb_rx = 0; uint64_t pkt_flags = 0; @@ -1623,6 +1709,24 @@ ice_rx_scan_hw_ring(struct ice_rx_queue *rxq) ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; mb->data_len = pkt_len; mb->pkt_len = pkt_len; + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + mb->nb_segs = (uint16_t)(mb->nb_segs + mb->next->nb_segs); + mb->next->next = NULL; + hdr_len = rte_le_to_cpu_16(rxdp[j].wb.hdr_len_sph_flex_flags1) & + ICE_RX_FLEX_DESC_HEADER_LEN_M; + pkt_len = (rte_le_to_cpu_16(rxdp[j].wb.pkt_len) & + ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; + mb->data_len = hdr_len; + mb->pkt_len = hdr_len + pkt_len; + mb->next->data_len = pkt_len; + } else { + pkt_len = (rte_le_to_cpu_16(rxdp[j].wb.pkt_len) & + ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; + mb->data_len = pkt_len; + mb->pkt_len = pkt_len; + } + mb->ol_flags = 0; 
stat_err0 = rte_le_to_cpu_16(rxdp[j].wb.status_error0); pkt_flags = ice_rxd_error_to_pkt_flags(stat_err0); @@ -1714,7 +1818,9 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq) struct rte_mbuf *mb; uint16_t alloc_idx, i; uint64_t dma_addr; - int diag; + int diag, diag_pay; + uint64_t pay_addr; + struct rte_mbuf *mbufs_pay[rxq->rx_free_thresh]; /* Allocate buffers in bulk */ alloc_idx = (uint16_t)(rxq->rx_free_trigger - @@ -1727,6 +1833,15 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq) return -ENOMEM; } + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + diag_pay = rte_mempool_get_bulk(rxq->rxseg[1].mp, + (void *)mbufs_pay, rxq->rx_free_thresh); + if (unlikely(diag_pay != 0)) { + PMD_RX_LOG(ERR, "Failed to get payload mbufs in bulk"); + return -ENOMEM; + } + } + rxdp = &rxq->rx_ring[alloc_idx]; for (i = 0; i < rxq->rx_free_thresh; i++) { if (likely(i < (rxq->rx_free_thresh - 1))) @@ -1735,13 +1850,21 @@ ice_rx_alloc_bufs(struct ice_rx_queue *rxq) mb = rxep[i].mbuf; rte_mbuf_refcnt_set(mb, 1); - mb->next = NULL; mb->data_off = RTE_PKTMBUF_HEADROOM; mb->nb_segs = 1; mb->port = rxq->port_id; dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(mb)); - rxdp[i].read.hdr_addr = 0; - rxdp[i].read.pkt_addr = dma_addr; + + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + mb->next = mbufs_pay[i]; + pay_addr = rte_mbuf_data_iova_default(mbufs_pay[i]); + rxdp[i].read.hdr_addr = dma_addr; + rxdp[i].read.pkt_addr = rte_cpu_to_le_64(pay_addr); + } else { + mb->next = NULL; + rxdp[i].read.hdr_addr = 0; + rxdp[i].read.pkt_addr = dma_addr; + } } /* Update Rx tail register */ @@ -2350,11 +2473,13 @@ ice_recv_pkts(void *rx_queue, struct ice_rx_entry *sw_ring = rxq->sw_ring; struct ice_rx_entry *rxe; struct rte_mbuf *nmb; /* new allocated mbuf */ + struct rte_mbuf *nmb_pay; /* new allocated payload mbuf */ struct rte_mbuf *rxm; /* pointer to store old mbuf in SW ring */ uint16_t rx_id = rxq->rx_tail; uint16_t nb_rx = 0; uint16_t nb_hold = 0; uint16_t rx_packet_len; + uint16_t rx_header_len; uint16_t rx_stat_err0; uint64_t dma_addr; uint64_t pkt_flags; @@ -2382,12 +2507,16 @@ ice_recv_pkts(void *rx_queue, if (!(rx_stat_err0 & (1 << ICE_RX_FLEX_DESC_STATUS0_DD_S))) break; - /* allocate mbuf */ + if (rx_stat_err0 & (1 << ICE_RX_FLEX_DESC_STATUS0_HBO_S)) + break; + + /* allocate header mbuf */ nmb = rte_mbuf_raw_alloc(rxq->mp); if (unlikely(!nmb)) { rxq->vsi->adapter->pf.dev_data->rx_mbuf_alloc_failed++; break; } + rxd = *rxdp; /* copy descriptor in ring to temp variable*/ nb_hold++; @@ -2400,24 +2529,55 @@ ice_recv_pkts(void *rx_queue, dma_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb)); - /** - * fill the read format of descriptor with physic address in - * new allocated mbuf: nmb - */ - rxdp->read.hdr_addr = 0; - rxdp->read.pkt_addr = dma_addr; + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + /* allocate payload mbuf */ + nmb_pay = rte_mbuf_raw_alloc(rxq->rxseg[1].mp); + if (unlikely(!nmb_pay)) { + rxq->vsi->adapter->pf.dev_data->rx_mbuf_alloc_failed++; + break; + } + + nmb->next = nmb_pay; + nmb_pay->next = NULL; - /* calculate rx_packet_len of the received pkt */ - rx_packet_len = (rte_le_to_cpu_16(rxd.wb.pkt_len) & - ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; + /** + * fill the read format of descriptor with physic address in + * new allocated mbuf: nmb + */ + rxdp->read.hdr_addr = dma_addr; + rxdp->read.pkt_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb_pay)); + } else { + /** + * fill the read format of descriptor with physic address in + * new allocated mbuf: nmb + 
*/ + rxdp->read.hdr_addr = 0; + rxdp->read.pkt_addr = dma_addr; + } /* fill old mbuf with received descriptor: rxd */ rxm->data_off = RTE_PKTMBUF_HEADROOM; rte_prefetch0(RTE_PTR_ADD(rxm->buf_addr, RTE_PKTMBUF_HEADROOM)); - rxm->nb_segs = 1; - rxm->next = NULL; - rxm->pkt_len = rx_packet_len; - rxm->data_len = rx_packet_len; + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) { + rxm->nb_segs = (uint16_t)(rxm->nb_segs + rxm->next->nb_segs); + rxm->next->next = NULL; + /* calculate rx_packet_len of the received pkt */ + rx_header_len = rte_le_to_cpu_16(rxd.wb.hdr_len_sph_flex_flags1) & + ICE_RX_FLEX_DESC_HEADER_LEN_M; + rx_packet_len = (rte_le_to_cpu_16(rxd.wb.pkt_len) & + ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; + rxm->data_len = rx_header_len; + rxm->pkt_len = rx_header_len + rx_packet_len; + rxm->next->data_len = rx_packet_len; + } else { + rxm->nb_segs = 1; + rxm->next = NULL; + /* calculate rx_packet_len of the received pkt */ + rx_packet_len = (rte_le_to_cpu_16(rxd.wb.pkt_len) & + ICE_RX_FLX_DESC_PKT_LEN_M) - rxq->crc_len; + rxm->data_len = rx_packet_len; + rxm->pkt_len = rx_packet_len; + } rxm->port = rxq->port_id; rxm->packet_type = ptype_tbl[ICE_RX_FLEX_DESC_PTYPE_M & rte_le_to_cpu_16(rxd.wb.ptype_flex_flags0)]; diff --git a/drivers/net/ice/ice_rxtx.h b/drivers/net/ice/ice_rxtx.h index bb18a01951..611dbc8503 100644 --- a/drivers/net/ice/ice_rxtx.h +++ b/drivers/net/ice/ice_rxtx.h @@ -16,6 +16,9 @@ #define ICE_RX_MAX_BURST 32 #define ICE_TX_MAX_BURST 32 +/* Maximal number of segments to split. */ +#define ICE_RX_MAX_NSEG 2 + #define ICE_CHK_Q_ENA_COUNT 100 #define ICE_CHK_Q_ENA_INTERVAL_US 100 @@ -43,6 +46,11 @@ extern uint64_t ice_timestamp_dynflag; extern int ice_timestamp_dynfield_offset; +/* Max header size can be 2K - 64 bytes */ +#define ICE_RX_HDR_BUF_SIZE (2048 - 64) + +#define ICE_HEADER_SPLIT_ENA BIT(0) + typedef void (*ice_rx_release_mbufs_t)(struct ice_rx_queue *rxq); typedef void (*ice_tx_release_mbufs_t)(struct ice_tx_queue *txq); typedef void (*ice_rxd_to_pkt_fields_t)(struct ice_rx_queue *rxq, @@ -53,6 +61,12 @@ struct ice_rx_entry { struct rte_mbuf *mbuf; }; +enum ice_rx_dtype { + ICE_RX_DTYPE_NO_SPLIT = 0, + ICE_RX_DTYPE_HEADER_SPLIT = 1, + ICE_RX_DTYPE_SPLIT_ALWAYS = 2, +}; + struct ice_rx_queue { struct rte_mempool *mp; /* mbuf pool to populate RX ring */ volatile union ice_rx_flex_desc *rx_ring;/* RX ring virtual address */ @@ -95,6 +109,8 @@ struct ice_rx_queue { uint32_t time_high; uint32_t hw_register_set; const struct rte_memzone *mz; + struct rte_eth_rxseg_split rxseg[ICE_RX_MAX_NSEG]; + uint32_t rxseg_nb; }; struct ice_tx_entry { diff --git a/drivers/net/ice/ice_rxtx_vec_common.h b/drivers/net/ice/ice_rxtx_vec_common.h index 2dd2d83650..eec6ea2134 100644 --- a/drivers/net/ice/ice_rxtx_vec_common.h +++ b/drivers/net/ice/ice_rxtx_vec_common.h @@ -291,6 +291,9 @@ ice_rx_vec_queue_default(struct ice_rx_queue *rxq) if (rxq->offloads & RTE_ETH_RX_OFFLOAD_TIMESTAMP) return -1; + if (rxq->offloads & RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT) + return -1; + if (rxq->offloads & ICE_RX_VECTOR_OFFLOAD) return ICE_VECTOR_OFFLOAD_PATH; From patchwork Wed Jun 1 13:50:59 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: "Wu, WenxuanX" X-Patchwork-Id: 112237 X-Patchwork-Delegate: andrew.rybchenko@oktetlabs.ru Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 
042B6A0548; Wed, 1 Jun 2022 16:15:18 +0200 (CEST)
From: wenxuanx.wu@intel.com
To: thomas@monjalon.net, andrew.rybchenko@oktetlabs.ru, xiaoyun.li@intel.com, ferruh.yigit@xilinx.com, aman.deep.singh@intel.com, dev@dpdk.org, yuying.zhang@intel.com, qi.z.zhang@intel.com, jerinjacobk@gmail.com
Cc: stephen@networkplumber.org, Wenxuan Wu, Xuan Ding, Yuan Wang
Subject: [PATCH v8 3/3] app/testpmd: add rxhdrs commands and parameters
Date: Wed, 1 Jun 2022 13:50:59 +0000
Message-Id: <20220601135059.958882-5-wenxuanx.wu@intel.com>
In-Reply-To: <20220601135059.958882-1-wenxuanx.wu@intel.com>
References: <20220303060136.36427-1-xuan.ding@intel.com> <20220601135059.958882-1-wenxuanx.wu@intel.com>

From: Wenxuan Wu

Add a command line parameter:
    --rxhdrs=mac[,ipv4,udp]
It sets the protocol_hdr of the segments used to scatter packets on receiving when the split feature is engaged, and only affects the queues configured with the BUFFER_SPLIT offload flag.

Add an interactive mode command:
    testpmd> set rxhdrs mac,ipv4,l3,tcp,udp,sctp
(the protocol sequence and nb_segs should be valid)

The protocol split feature is off by default. To enable protocol split, you need to:
1. Start testpmd with two mempools, e.g. --mbuf-size=2048,2048
2. Configure the Rx queue with the buffer split Rx offload on.
3. Set the protocol types of buffer split, e.g.
set rxhdrs mac,ipv4 (Supported protocols: mac|ipv4|ipv6|l3|tcp|udp|sctp|l4|inner_mac| inner_ipv4|inner_ipv6|inner_l3|inner_tcp| inner_udp|inner_sctp|inner_l4) Signed-off-by: Wenxuan Wu Signed-off-by: Xuan Ding Signed-off-by: Yuan Wang Reviewed-by: Qi Zhang --- app/test-pmd/cmdline.c | 127 +++++++++++++++++++++++++++++++++++++- app/test-pmd/config.c | 81 ++++++++++++++++++++++++ app/test-pmd/parameters.c | 15 ++++- app/test-pmd/testpmd.c | 6 +- app/test-pmd/testpmd.h | 6 ++ 5 files changed, 228 insertions(+), 7 deletions(-) diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c index 6ffea8e21a..52e98e1c06 100644 --- a/app/test-pmd/cmdline.c +++ b/app/test-pmd/cmdline.c @@ -183,7 +183,7 @@ static void cmd_help_long_parsed(void *parsed_result, "show (rxq|txq) info (port_id) (queue_id)\n" " Display information for configured RX/TX queue.\n\n" - "show config (rxtx|cores|fwd|rxoffs|rxpkts|txpkts)\n" + "show config (rxtx|cores|fwd|rxoffs|rxpkts|rxhdrs|txpkts)\n" " Display the given configuration.\n\n" "read rxd (port_id) (queue_id) (rxd_id)\n" @@ -316,6 +316,15 @@ static void cmd_help_long_parsed(void *parsed_result, " Affects only the queues configured with split" " offloads.\n\n" + "set rxhdrs (mac[,ipv4])*\n" + " Set the protocol hdr of each segment to scatter" + " packets on receiving if split feature is engaged." + " Affects only the queues configured with split" + " offloads.\n\n" + " Supported proto header: mac|ipv4|ipv6|l3|tcp|udp|sctp|l4|" + "inner_mac|inner_ipv4|inner_ipv6|inner_l3|inner_tcp|" + "inner_udp|inner_sctp\n" + "set txpkts (x[,y]*)\n" " Set the length of each segment of TXONLY" " and optionally CSUM packets.\n\n" @@ -3617,6 +3626,72 @@ cmdline_parse_inst_t cmd_stop = { }, }; +static unsigned int +get_ptype(char *value) +{ + uint32_t protocol; + if (!strcmp(value, "mac")) + protocol = RTE_PTYPE_L2_ETHER; + else if (!strcmp(value, "ipv4")) + protocol = RTE_PTYPE_L3_IPV4; + else if (!strcmp(value, "ipv6")) + protocol = RTE_PTYPE_L3_IPV6; + else if (!strcmp(value, "l3")) + protocol = RTE_PTYPE_L3_IPV4|RTE_PTYPE_L3_IPV6; + else if (!strcmp(value, "tcp")) + protocol = RTE_PTYPE_L4_TCP; + else if (!strcmp(value, "udp")) + protocol = RTE_PTYPE_L4_UDP; + else if (!strcmp(value, "sctp")) + protocol = RTE_PTYPE_L4_SCTP; + else if (!strcmp(value, "l4")) + protocol = RTE_PTYPE_L4_TCP|RTE_PTYPE_L4_UDP|RTE_PTYPE_L4_SCTP; + else if (!strcmp(value, "inner_mac")) + protocol = RTE_PTYPE_INNER_L2_ETHER; + else if (!strcmp(value, "inner_ipv4")) + protocol = RTE_PTYPE_INNER_L3_IPV4; + else if (!strcmp(value, "inner_ipv6")) + protocol = RTE_PTYPE_INNER_L3_IPV6; + else if (!strcmp(value, "inner_l3")) + protocol = RTE_PTYPE_INNER_L3_IPV4|RTE_PTYPE_INNER_L3_IPV6; + else if (!strcmp(value, "inner_tcp")) + protocol = RTE_PTYPE_INNER_L4_TCP; + else if (!strcmp(value, "inner_udp")) + protocol = RTE_PTYPE_INNER_L4_UDP; + else if (!strcmp(value, "inner_sctp")) + protocol = RTE_PTYPE_INNER_L4_SCTP; + else { + fprintf(stderr, "Unknown protocol name: %s\n", value); + return 0; + } + return protocol; +} +/* *** SET RXHDRSLIST *** */ + +unsigned int +parse_hdrs_list(const char *str, const char *item_name, unsigned int max_items, + unsigned int *parsed_items, int check_hdrs_sequence) +{ + unsigned int nb_item; + char *cur; + char *tmp; + nb_item = 0; + char *str2 = strdup(str); + cur = strtok_r(str2, ",", &tmp); + while (cur != NULL) { + parsed_items[nb_item] = get_ptype(cur); + cur = strtok_r(NULL, ",", &tmp); + nb_item++; + } + if (nb_item > max_items) + fprintf(stderr, "Number of %s = %u > %u (maximum 
items)\n", + item_name, nb_item + 1, max_items); + set_rx_pkt_hdrs(parsed_items, nb_item); + free(str2); + if (!check_hdrs_sequence) + return nb_item; + return nb_item; +} /* *** SET CORELIST and PORTLIST CONFIGURATION *** */ unsigned int @@ -3986,6 +4061,49 @@ cmdline_parse_inst_t cmd_set_rxpkts = { }, }; +/* *** SET SEGMENT HEADERS OF RX PACKETS SPLIT *** */ +struct cmd_set_rxhdrs_result { + cmdline_fixed_string_t cmd_keyword; + cmdline_fixed_string_t rxhdrs; + cmdline_fixed_string_t seg_hdrs; +}; + +static void +cmd_set_rxhdrs_parsed(void *parsed_result, + __rte_unused struct cmdline *cl, + __rte_unused void *data) +{ + struct cmd_set_rxhdrs_result *res; + unsigned int seg_hdrs[MAX_SEGS_BUFFER_SPLIT]; + unsigned int nb_segs; + + res = parsed_result; + nb_segs = parse_hdrs_list(res->seg_hdrs, "segment hdrs", + MAX_SEGS_BUFFER_SPLIT, seg_hdrs, 0); + if (nb_segs >= 1) + set_rx_pkt_hdrs(seg_hdrs, nb_segs); + cmd_reconfig_device_queue(RTE_PORT_ALL, 0, 1); +} +cmdline_parse_token_string_t cmd_set_rxhdrs_keyword = + TOKEN_STRING_INITIALIZER(struct cmd_set_rxhdrs_result, + cmd_keyword, "set"); +cmdline_parse_token_string_t cmd_set_rxhdrs_name = + TOKEN_STRING_INITIALIZER(struct cmd_set_rxhdrs_result, + rxhdrs, "rxhdrs"); +cmdline_parse_token_string_t cmd_set_rxhdrs_seg_hdrs = + TOKEN_STRING_INITIALIZER(struct cmd_set_rxhdrs_result, + seg_hdrs, NULL); +cmdline_parse_inst_t cmd_set_rxhdrs = { + .f = cmd_set_rxhdrs_parsed, + .data = NULL, + .help_str = "set rxhdrs ", + .tokens = { + (void *)&cmd_set_rxhdrs_keyword, + (void *)&cmd_set_rxhdrs_name, + (void *)&cmd_set_rxhdrs_seg_hdrs, + NULL, + }, +}; /* *** SET SEGMENT LENGTHS OF TXONLY PACKETS *** */ struct cmd_set_txpkts_result { @@ -8058,6 +8176,8 @@ static void cmd_showcfg_parsed(void *parsed_result, show_rx_pkt_offsets(); else if (!strcmp(res->what, "rxpkts")) show_rx_pkt_segments(); + else if (!strcmp(res->what, "rxhdrs")) + show_rx_pkt_hdrs(); else if (!strcmp(res->what, "txpkts")) show_tx_pkt_segments(); else if (!strcmp(res->what, "txtimes")) @@ -8070,12 +8190,12 @@ cmdline_parse_token_string_t cmd_showcfg_port = TOKEN_STRING_INITIALIZER(struct cmd_showcfg_result, cfg, "config"); cmdline_parse_token_string_t cmd_showcfg_what = TOKEN_STRING_INITIALIZER(struct cmd_showcfg_result, what, - "rxtx#cores#fwd#rxoffs#rxpkts#txpkts#txtimes"); + "rxtx#cores#fwd#rxoffs#rxpkts#rxhdrs#txpkts#txtimes"); cmdline_parse_inst_t cmd_showcfg = { .f = cmd_showcfg_parsed, .data = NULL, - .help_str = "show config rxtx|cores|fwd|rxoffs|rxpkts|txpkts|txtimes", + .help_str = "show config rxtx|cores|fwd|rxoffs|rxpkts|rxhdrs|txpkts|txtimes", .tokens = { (void *)&cmd_showcfg_show, (void *)&cmd_showcfg_port, @@ -17833,6 +17953,7 @@ cmdline_parse_ctx_t main_ctx[] = { (cmdline_parse_inst_t *)&cmd_set_log, (cmdline_parse_inst_t *)&cmd_set_rxoffs, (cmdline_parse_inst_t *)&cmd_set_rxpkts, + (cmdline_parse_inst_t *)&cmd_set_rxhdrs, (cmdline_parse_inst_t *)&cmd_set_txpkts, (cmdline_parse_inst_t *)&cmd_set_txsplit, (cmdline_parse_inst_t *)&cmd_set_txtimes, diff --git a/app/test-pmd/config.c b/app/test-pmd/config.c index cc8e7aa138..742473456a 100644 --- a/app/test-pmd/config.c +++ b/app/test-pmd/config.c @@ -4757,6 +4757,87 @@ show_rx_pkt_segments(void) printf("%hu\n", rx_pkt_seg_lengths[i]); } } +static const char *get_ptype_str(uint32_t ptype) +{ + switch (ptype) { + case RTE_PTYPE_L2_ETHER: + return "mac"; + case RTE_PTYPE_L3_IPV4: + return "ipv4"; + case RTE_PTYPE_L3_IPV6: + return "ipv6"; + case RTE_PTYPE_L3_IPV6|RTE_PTYPE_L3_IPV4: + return "l3"; + case RTE_PTYPE_L4_TCP: + 
return "tcp"; + case RTE_PTYPE_L4_UDP: + return "udp"; + case RTE_PTYPE_L4_SCTP: + return "sctp"; + case RTE_PTYPE_L4_TCP|RTE_PTYPE_L4_UDP|RTE_PTYPE_L4_SCTP: + return "l4"; + case RTE_PTYPE_INNER_L2_ETHER: + return "inner_mac"; + case RTE_PTYPE_INNER_L3_IPV4: + return "inner_ipv4"; + case RTE_PTYPE_INNER_L3_IPV6: + return "inner_ipv6"; + case RTE_PTYPE_INNER_L4_TCP: + return "inner_tcp"; + case RTE_PTYPE_INNER_L4_UDP: + return "inner_udp"; + case RTE_PTYPE_INNER_L4_SCTP: + return "inner_sctp"; + default: + return "unknown"; + } +} +void +show_rx_pkt_hdrs(void) +{ + uint32_t i, n; + + n = rx_pkt_nb_segs; + printf("Number of segments: %u\n", n); + if (n) { + printf("Packet segs: "); + for (i = 0; i != n - 1; i++) + printf("%s, ", get_ptype_str(rx_pkt_hdr_protos[i])); + printf("%s\n", rx_pkt_hdr_protos[i] == 0 ? "payload" : + get_ptype_str(rx_pkt_hdr_protos[i])); + } +} +void +set_rx_pkt_hdrs(unsigned int *seg_hdrs, unsigned int nb_segs) +{ + unsigned int i; + + if (nb_segs >= MAX_SEGS_BUFFER_SPLIT) { + printf("nb segments per RX packets=%u >= " + "MAX_SEGS_BUFFER_SPLIT - ignored\n", nb_segs); + return; + } + + /* + * No extra check here, the segment length will be checked by PMD + * in the extended queue setup. + */ + for (i = 0; i < nb_segs; i++) { + if (!(seg_hdrs[i] & RTE_BUFFER_SPLIT_PROTO_HDR_MASK)) { + printf("ptype [%u]=%u > is not supported - give up\n", + i, seg_hdrs[i]); + return; + } + } + + for (i = 0; i < nb_segs; i++) + rx_pkt_hdr_protos[i] = (uint32_t) seg_hdrs[i]; + /* + * We calculate the number of hdrs, but payload is not included, + * so rx_pkt_nb_segs would increase 1. + */ + rx_pkt_nb_segs = (uint8_t) nb_segs + 1; +} void set_rx_pkt_segments(unsigned int *seg_lengths, unsigned int nb_segs) diff --git a/app/test-pmd/parameters.c b/app/test-pmd/parameters.c index daf6a31b2b..f86d626276 100644 --- a/app/test-pmd/parameters.c +++ b/app/test-pmd/parameters.c @@ -161,6 +161,7 @@ usage(char* progname) " Used mainly with PCAP drivers.\n"); printf(" --rxoffs=X[,Y]*: set RX segment offsets for split.\n"); printf(" --rxpkts=X[,Y]*: set RX segment sizes to split.\n"); + printf(" --rxhdrs=mac[,ipv4]*: set RX segment protocol to split.\n"); printf(" --txpkts=X[,Y]*: set TX segment sizes" " or total packet length.\n"); printf(" --txonly-multi-flow: generate multiple flows in txonly mode\n"); @@ -673,6 +674,7 @@ launch_args_parse(int argc, char** argv) { "flow-isolate-all", 0, 0, 0 }, { "rxoffs", 1, 0, 0 }, { "rxpkts", 1, 0, 0 }, + { "rxhdrs", 1, 0, 0 }, { "txpkts", 1, 0, 0 }, { "txonly-multi-flow", 0, 0, 0 }, { "rxq-share", 2, 0, 0 }, @@ -1327,7 +1329,6 @@ launch_args_parse(int argc, char** argv) if (!strcmp(lgopts[opt_idx].name, "rxpkts")) { unsigned int seg_len[MAX_SEGS_BUFFER_SPLIT]; unsigned int nb_segs; - nb_segs = parse_item_list (optarg, "rxpkt segments", MAX_SEGS_BUFFER_SPLIT, @@ -1337,6 +1338,18 @@ launch_args_parse(int argc, char** argv) else rte_exit(EXIT_FAILURE, "bad rxpkts\n"); } + if (!strcmp(lgopts[opt_idx].name, "rxhdrs")) { + unsigned int seg_hdrs[MAX_SEGS_BUFFER_SPLIT]; + unsigned int nb_segs; + nb_segs = parse_hdrs_list + (optarg, "rxpkt segments", + MAX_SEGS_BUFFER_SPLIT, + seg_hdrs, 0); + if (nb_segs >= 1) + set_rx_pkt_hdrs(seg_hdrs, nb_segs); + else + rte_exit(EXIT_FAILURE, "bad rxpkts\n"); + } if (!strcmp(lgopts[opt_idx].name, "txpkts")) { unsigned seg_lengths[RTE_MAX_SEGS_PER_PKT]; unsigned int nb_segs; diff --git a/app/test-pmd/testpmd.c b/app/test-pmd/testpmd.c index fe2ce19f99..77379b7aa9 100644 --- a/app/test-pmd/testpmd.c +++ b/app/test-pmd/testpmd.c @@ 
-240,6 +240,7 @@ uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT]; uint8_t rx_pkt_nb_segs; /**< Number of segments to split */ uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT]; uint8_t rx_pkt_nb_offs; /**< Number of specified offsets */ +uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT]; /* * Configuration of packet segments used by the "txonly" processing engine. @@ -2586,12 +2587,11 @@ rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, mp_n = (i > mbuf_data_size_n) ? mbuf_data_size_n - 1 : i; mpx = mbuf_pool_find(socket_id, mp_n); /* Handle zero as mbuf data buffer size. */ - rx_seg->length = rx_pkt_seg_lengths[i] ? - rx_pkt_seg_lengths[i] : - mbuf_data_size[mp_n]; + rx_seg->length = rx_pkt_seg_lengths[i]; rx_seg->offset = i < rx_pkt_nb_offs ? rx_pkt_seg_offsets[i] : 0; rx_seg->mp = mpx ? mpx : mp; + rx_seg->proto_hdr = rx_pkt_hdr_protos[i]; } rx_conf->rx_nseg = rx_pkt_nb_segs; rx_conf->rx_seg = rx_useg; diff --git a/app/test-pmd/testpmd.h b/app/test-pmd/testpmd.h index 31f766c965..e791b9becd 100644 --- a/app/test-pmd/testpmd.h +++ b/app/test-pmd/testpmd.h @@ -534,6 +534,7 @@ extern uint32_t max_rx_pkt_len; * Configuration of packet segments used to scatter received packets * if some of split features is configured. */ +extern uint32_t rx_pkt_hdr_protos[MAX_SEGS_BUFFER_SPLIT]; extern uint16_t rx_pkt_seg_lengths[MAX_SEGS_BUFFER_SPLIT]; extern uint8_t rx_pkt_nb_segs; /**< Number of segments to split */ extern uint16_t rx_pkt_seg_offsets[MAX_SEGS_BUFFER_SPLIT]; @@ -864,6 +865,9 @@ inc_tx_burst_stats(struct fwd_stream *fs, uint16_t nb_tx) unsigned int parse_item_list(const char *str, const char *item_name, unsigned int max_items, unsigned int *parsed_items, int check_unique_values); +unsigned int parse_hdrs_list(const char *str, const char *item_name, + unsigned int max_item, + unsigned int *parsed_items, int check_unique_values); void launch_args_parse(int argc, char** argv); void cmdline_read_from_file(const char *filename); void prompt(void); @@ -1018,6 +1022,8 @@ void set_record_core_cycles(uint8_t on_off); void set_record_burst_stats(uint8_t on_off); void set_verbose_level(uint16_t vb_level); void set_rx_pkt_segments(unsigned int *seg_lengths, unsigned int nb_segs); +void set_rx_pkt_hdrs(unsigned int *seg_protos, unsigned int nb_segs); +void show_rx_pkt_hdrs(void); void show_rx_pkt_segments(void); void set_rx_pkt_offsets(unsigned int *seg_offsets, unsigned int nb_offs); void show_rx_pkt_offsets(void);
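Taking the series as a whole, an illustrative testpmd session for protocol header based buffer split could look as follows. The EAL core/memory options are placeholders, the sequence assumes a port whose PMD reports the buffer split capability (e.g. net/ice with this series), and the buffer_split keyword follows the existing "port config <port_id> rx_offload" syntax:

./dpdk-testpmd -l 1-2 -n 4 -- -i --mbuf-size=2048,2048 --rxhdrs=ipv4
testpmd> port stop all
testpmd> port config 0 rx_offload buffer_split on
testpmd> port start all
testpmd> show config rxhdrs
testpmd> start

Here --rxhdrs=ipv4 requests a split after the (outer) IPv4 header, so each received packet is delivered as two segments: headers in mbufs from the first pool and payload in mbufs from the second pool, matching the two-segment limit advertised by the ice driver in this series.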