From patchwork Tue Oct 13 13:45:43 2020
X-Patchwork-Submitter: Andrew Rybchenko <arybchenko@solarflare.com>
X-Patchwork-Id: 80543
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Andrew Rybchenko <arybchenko@solarflare.com>
To: dev@dpdk.org
Date: Tue, 13 Oct 2020 14:45:43 +0100
Message-ID: <1602596753-32282-27-git-send-email-arybchenko@solarflare.com>
In-Reply-To: <1602596753-32282-1-git-send-email-arybchenko@solarflare.com>
References: <1602596753-32282-1-git-send-email-arybchenko@solarflare.com>
Subject: [dpdk-dev] [PATCH 26/36] net/sfc: support Rx checksum offload for EF100

Also support Rx packet type offload.

Checksumming is actually always enabled. Report it as a per-queue
offload to give applications maximum flexibility.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
---
 drivers/net/sfc/sfc_ef100_rx.c | 183 ++++++++++++++++++++++++++++++++-
 1 file changed, 182 insertions(+), 1 deletion(-)

diff --git a/drivers/net/sfc/sfc_ef100_rx.c b/drivers/net/sfc/sfc_ef100_rx.c
index c0e70c9943..2f5c5ab533 100644
--- a/drivers/net/sfc/sfc_ef100_rx.c
+++ b/drivers/net/sfc/sfc_ef100_rx.c
@@ -177,6 +177,166 @@ sfc_ef100_rx_qrefill(struct sfc_ef100_rxq *rxq)
 	sfc_ef100_rx_qpush(rxq, added);
 }
 
+static inline uint64_t
+sfc_ef100_rx_nt_or_inner_l4_csum(const efx_word_t class)
+{
+	return EFX_WORD_FIELD(class,
+			      ESF_GZ_RX_PREFIX_HCLASS_NT_OR_INNER_L4_CSUM) ==
+		ESE_GZ_RH_HCLASS_L4_CSUM_GOOD ?
+		PKT_RX_L4_CKSUM_GOOD : PKT_RX_L4_CKSUM_BAD;
+}
+
+static inline uint64_t
+sfc_ef100_rx_tun_outer_l4_csum(const efx_word_t class)
+{
+	return EFX_WORD_FIELD(class,
+			      ESF_GZ_RX_PREFIX_HCLASS_TUN_OUTER_L4_CSUM) ==
+		ESE_GZ_RH_HCLASS_L4_CSUM_GOOD ?
+		PKT_RX_OUTER_L4_CKSUM_GOOD : PKT_RX_OUTER_L4_CKSUM_BAD;
+}
+
+static uint32_t
+sfc_ef100_rx_class_decode(const efx_word_t class, uint64_t *ol_flags)
+{
+	uint32_t ptype;
+	bool no_tunnel = false;
+
+	if (unlikely(EFX_WORD_FIELD(class, ESF_GZ_RX_PREFIX_HCLASS_L2_CLASS) !=
+		     ESE_GZ_RH_HCLASS_L2_CLASS_E2_0123VLAN))
+		return 0;
+
+	switch (EFX_WORD_FIELD(class, ESF_GZ_RX_PREFIX_HCLASS_L2_N_VLAN)) {
+	case 0:
+		ptype = RTE_PTYPE_L2_ETHER;
+		break;
+	case 1:
+		ptype = RTE_PTYPE_L2_ETHER_VLAN;
+		break;
+	default:
+		ptype = RTE_PTYPE_L2_ETHER_QINQ;
+		break;
+	}
+
+	switch (EFX_WORD_FIELD(class, ESF_GZ_RX_PREFIX_HCLASS_TUNNEL_CLASS)) {
+	case ESE_GZ_RH_HCLASS_TUNNEL_CLASS_NONE:
+		no_tunnel = true;
+		break;
+	case ESE_GZ_RH_HCLASS_TUNNEL_CLASS_VXLAN:
+		ptype |= RTE_PTYPE_TUNNEL_VXLAN | RTE_PTYPE_L4_UDP;
+		*ol_flags |= sfc_ef100_rx_tun_outer_l4_csum(class);
+		break;
+	case ESE_GZ_RH_HCLASS_TUNNEL_CLASS_NVGRE:
+		ptype |= RTE_PTYPE_TUNNEL_NVGRE;
+		break;
+	case ESE_GZ_RH_HCLASS_TUNNEL_CLASS_GENEVE:
+		ptype |= RTE_PTYPE_TUNNEL_GENEVE | RTE_PTYPE_L4_UDP;
+		*ol_flags |= sfc_ef100_rx_tun_outer_l4_csum(class);
+		break;
+	default:
+		/*
+		 * Driver does not know the tunnel, but it is
+		 * still a tunnel and NT_OR_INNER refers to the
+		 * inner frame.
+		 */
+		no_tunnel = false;
+	}
+
+	if (no_tunnel) {
+		bool l4_valid = true;
+
+		switch (EFX_WORD_FIELD(class,
+			ESF_GZ_RX_PREFIX_HCLASS_NT_OR_INNER_L3_CLASS)) {
+		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4GOOD:
+			ptype |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
+			*ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			break;
+		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4BAD:
+			ptype |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
+			*ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			break;
+		case ESE_GZ_RH_HCLASS_L3_CLASS_IP6:
+			ptype |= RTE_PTYPE_L3_IPV6_EXT_UNKNOWN;
+			break;
+		default:
+			l4_valid = false;
+		}
+
+		if (l4_valid) {
+			switch (EFX_WORD_FIELD(class,
+				ESF_GZ_RX_PREFIX_HCLASS_NT_OR_INNER_L4_CLASS)) {
+			case ESE_GZ_RH_HCLASS_L4_CLASS_TCP:
+				ptype |= RTE_PTYPE_L4_TCP;
+				*ol_flags |=
+					sfc_ef100_rx_nt_or_inner_l4_csum(class);
+				break;
+			case ESE_GZ_RH_HCLASS_L4_CLASS_UDP:
+				ptype |= RTE_PTYPE_L4_UDP;
+				*ol_flags |=
+					sfc_ef100_rx_nt_or_inner_l4_csum(class);
+				break;
+			case ESE_GZ_RH_HCLASS_L4_CLASS_FRAG:
+				ptype |= RTE_PTYPE_L4_FRAG;
+				break;
+			}
+		}
+	} else {
+		bool l4_valid = true;
+
+		switch (EFX_WORD_FIELD(class,
+			ESF_GZ_RX_PREFIX_HCLASS_TUN_OUTER_L3_CLASS)) {
+		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4GOOD:
+			ptype |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
+			break;
+		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4BAD:
+			ptype |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;
+			*ol_flags |= PKT_RX_EIP_CKSUM_BAD;
+			break;
+		case ESE_GZ_RH_HCLASS_L3_CLASS_IP6:
+			ptype |= RTE_PTYPE_L3_IPV6_EXT_UNKNOWN;
+			break;
+		}
+
+		switch (EFX_WORD_FIELD(class,
+			ESF_GZ_RX_PREFIX_HCLASS_NT_OR_INNER_L3_CLASS)) {
+		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4GOOD:
+			ptype |= RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN;
+			*ol_flags |= PKT_RX_IP_CKSUM_GOOD;
+			break;
+		case ESE_GZ_RH_HCLASS_L3_CLASS_IP4BAD:
+			ptype |= RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN;
+			*ol_flags |= PKT_RX_IP_CKSUM_BAD;
+			break;
+		case ESE_GZ_RH_HCLASS_L3_CLASS_IP6:
+			ptype |= RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN;
+			break;
+		default:
+			l4_valid = false;
+			break;
+		}
+
+		if (l4_valid) {
+			switch (EFX_WORD_FIELD(class,
+				ESF_GZ_RX_PREFIX_HCLASS_NT_OR_INNER_L4_CLASS)) {
+			case ESE_GZ_RH_HCLASS_L4_CLASS_TCP:
+				ptype |= RTE_PTYPE_INNER_L4_TCP;
+				*ol_flags |=
+					sfc_ef100_rx_nt_or_inner_l4_csum(class);
+				break;
+			case ESE_GZ_RH_HCLASS_L4_CLASS_UDP:
+				ptype |= RTE_PTYPE_INNER_L4_UDP;
+				*ol_flags |=
+					sfc_ef100_rx_nt_or_inner_l4_csum(class);
+				break;
+			case ESE_GZ_RH_HCLASS_L4_CLASS_FRAG:
+				ptype |= RTE_PTYPE_INNER_L4_FRAG;
+				break;
+			}
+		}
+	}
+
+	return ptype;
+}
+
 static bool
 sfc_ef100_rx_prefix_to_offloads(const efx_oword_t *rx_prefix,
 				struct rte_mbuf *m)
@@ -195,6 +355,8 @@ sfc_ef100_rx_prefix_to_offloads(const efx_oword_t *rx_prefix,
 			ESE_GZ_RH_HCLASS_L2_STATUS_OK))
 		return false;
 
+	m->packet_type = sfc_ef100_rx_class_decode(*class, &ol_flags);
+
 	m->ol_flags = ol_flags;
 	return true;
 }
@@ -374,6 +536,22 @@
 static const uint32_t *
 sfc_ef100_supported_ptypes_get(__rte_unused uint32_t tunnel_encaps)
 {
 	static const uint32_t ef100_native_ptypes[] = {
+		RTE_PTYPE_L2_ETHER,
+		RTE_PTYPE_L2_ETHER_VLAN,
+		RTE_PTYPE_L2_ETHER_QINQ,
+		RTE_PTYPE_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_L3_IPV6_EXT_UNKNOWN,
+		RTE_PTYPE_L4_TCP,
+		RTE_PTYPE_L4_UDP,
+		RTE_PTYPE_L4_FRAG,
+		RTE_PTYPE_TUNNEL_VXLAN,
+		RTE_PTYPE_TUNNEL_NVGRE,
+		RTE_PTYPE_TUNNEL_GENEVE,
+		RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN,
+		RTE_PTYPE_INNER_L3_IPV6_EXT_UNKNOWN,
+		RTE_PTYPE_INNER_L4_TCP,
+		RTE_PTYPE_INNER_L4_UDP,
+		RTE_PTYPE_INNER_L4_FRAG,
 		RTE_PTYPE_UNKNOWN
 	};
 
@@ -596,7 +774,10 @@ struct sfc_dp_rx sfc_ef100_rx = {
 	},
 	.features		= SFC_DP_RX_FEAT_MULTI_PROCESS,
 	.dev_offload_capa	= 0,
-	.queue_offload_capa	= DEV_RX_OFFLOAD_SCATTER,
+	.queue_offload_capa	= DEV_RX_OFFLOAD_CHECKSUM |
+				  DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
+				  DEV_RX_OFFLOAD_OUTER_UDP_CKSUM |
+				  DEV_RX_OFFLOAD_SCATTER,
 	.get_dev_info		= sfc_ef100_rx_get_dev_info,
 	.qsize_up_rings		= sfc_ef100_rx_qsize_up_rings,
 	.qcreate		= sfc_ef100_rx_qcreate,
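
Note for reviewers (not part of the patch): a minimal sketch of how an
application could request the per-queue Rx offloads advertised above and
consume the resulting mbuf fields, using the ethdev/mbuf API of this DPDK
era. The names app_setup_rxq_with_csum, app_rx_csum_is_bad, port_id,
queue_id, nb_rxd, socket_id and mp are illustrative placeholders, not part
of the driver.

#include <stdbool.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/*
 * Illustrative only: enable the checksum offloads on a single Rx queue.
 * port_id, queue_id, nb_rxd, socket_id and mp are assumed to be set up
 * elsewhere by the application.
 */
static int
app_setup_rxq_with_csum(uint16_t port_id, uint16_t queue_id, uint16_t nb_rxd,
			unsigned int socket_id, struct rte_mempool *mp)
{
	struct rte_eth_dev_info dev_info;
	struct rte_eth_rxconf rxconf;
	int rc;

	rc = rte_eth_dev_info_get(port_id, &dev_info);
	if (rc != 0)
		return rc;

	rxconf = dev_info.default_rxconf;
	/* Per-queue offloads; must be a subset of rx_queue_offload_capa. */
	rxconf.offloads = DEV_RX_OFFLOAD_CHECKSUM |
			  DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM |
			  DEV_RX_OFFLOAD_OUTER_UDP_CKSUM;

	return rte_eth_rx_queue_setup(port_id, queue_id, nb_rxd, socket_id,
				      &rxconf, mp);
}

/*
 * Illustrative only: check the IP/L4 checksum flags that
 * sfc_ef100_rx_prefix_to_offloads() fills in on received mbufs.
 */
static inline bool
app_rx_csum_is_bad(const struct rte_mbuf *m)
{
	return (m->ol_flags & PKT_RX_IP_CKSUM_MASK) == PKT_RX_IP_CKSUM_BAD ||
	       (m->ol_flags & PKT_RX_L4_CKSUM_MASK) == PKT_RX_L4_CKSUM_BAD;
}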