From patchwork Fri Oct 28 09:42:32 2022
X-Patchwork-Submitter: Zhichao Zeng
X-Patchwork-Id: 119258
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Zhichao Zeng
To: dev@dpdk.org
Cc: qiming.yang@intel.com, yidingx.zhou@intel.com, Zhichao Zeng,
 Radu Nicolau, Ke Xu, Jingjing Wu, Beilei Xing, Qi Zhang, Peng Zhang
Subject: [PATCH] net/iavf: fix Tx descriptors for IPSec
Date: Fri, 28 Oct 2022 17:42:32 +0800
Message-Id: <20221028094232.103542-1-zhichaox.zeng@intel.com>
List-Id: DPDK patches and discussions

This patch fixes the building of the context and data descriptors on the
scalar path for IPSec.

Fixes: f7c8c36fdeb7 ("net/iavf: enable inner and outer Tx checksum offload")

Signed-off-by: Radu Nicolau
Signed-off-by: Zhichao Zeng
Tested-by: Ke Xu
---
 drivers/net/iavf/iavf_rxtx.c | 80 +++++++++++++++++++-----------------
 1 file changed, 43 insertions(+), 37 deletions(-)

diff --git a/drivers/net/iavf/iavf_rxtx.c b/drivers/net/iavf/iavf_rxtx.c
index 3292541ad9..bd5dd2d4ed 100644
--- a/drivers/net/iavf/iavf_rxtx.c
+++ b/drivers/net/iavf/iavf_rxtx.c
@@ -2417,43 +2417,45 @@ iavf_fill_ctx_desc_tunnelling_field(volatile uint64_t *qw0,
 		break;
 	}
 
-	/* L4TUNT: L4 Tunneling Type */
-	switch (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
-	case RTE_MBUF_F_TX_TUNNEL_IPIP:
-		/* for non UDP / GRE tunneling, set to 00b */
-		break;
-	case RTE_MBUF_F_TX_TUNNEL_VXLAN:
-	case RTE_MBUF_F_TX_TUNNEL_GTP:
-	case RTE_MBUF_F_TX_TUNNEL_GENEVE:
-		eip_typ |= IAVF_TXD_CTX_UDP_TUNNELING;
-		break;
-	case RTE_MBUF_F_TX_TUNNEL_GRE:
-		eip_typ |= IAVF_TXD_CTX_GRE_TUNNELING;
-		break;
-	default:
-		PMD_TX_LOG(ERR, "Tunnel type not supported");
-		return;
-	}
+	if (!(m->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD)) {
+		/* L4TUNT: L4 Tunneling Type */
+		switch (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK) {
+		case RTE_MBUF_F_TX_TUNNEL_IPIP:
+			/* for non UDP / GRE tunneling, set to 00b */
+			break;
+		case RTE_MBUF_F_TX_TUNNEL_VXLAN:
+		case RTE_MBUF_F_TX_TUNNEL_GTP:
+		case RTE_MBUF_F_TX_TUNNEL_GENEVE:
+			eip_typ |= IAVF_TXD_CTX_UDP_TUNNELING;
+			break;
+		case RTE_MBUF_F_TX_TUNNEL_GRE:
+			eip_typ |= IAVF_TXD_CTX_GRE_TUNNELING;
+			break;
+		default:
+			PMD_TX_LOG(ERR, "Tunnel type not supported");
+			return;
+		}
 
-	/* L4TUNLEN: L4 Tunneling Length, in Words
-	 *
-	 * We depend on app to set rte_mbuf.l2_len correctly.
-	 * For IP in GRE it should be set to the length of the GRE
-	 * header;
-	 * For MAC in GRE or MAC in UDP it should be set to the length
-	 * of the GRE or UDP headers plus the inner MAC up to including
-	 * its last Ethertype.
-	 * If MPLS labels exists, it should include them as well.
-	 */
-	eip_typ |= (m->l2_len >> 1) << IAVF_TXD_CTX_QW0_NATLEN_SHIFT;
+		/* L4TUNLEN: L4 Tunneling Length, in Words
+		 *
+		 * We depend on app to set rte_mbuf.l2_len correctly.
+		 * For IP in GRE it should be set to the length of the GRE
+		 * header;
+		 * For MAC in GRE or MAC in UDP it should be set to the length
+		 * of the GRE or UDP headers plus the inner MAC up to including
+		 * its last Ethertype.
+		 * If MPLS labels exists, it should include them as well.
+		 */
+		eip_typ |= (m->l2_len >> 1) << IAVF_TXD_CTX_QW0_NATLEN_SHIFT;
 
-	/**
-	 * Calculate the tunneling UDP checksum.
-	 * Shall be set only if L4TUNT = 01b and EIPT is not zero
-	 */
-	if (!(eip_typ & IAVF_TX_CTX_EXT_IP_NONE) &&
-			(eip_typ & IAVF_TXD_CTX_UDP_TUNNELING))
-		eip_typ |= IAVF_TXD_CTX_QW0_L4T_CS_MASK;
+		/**
+		 * Calculate the tunneling UDP checksum.
+		 * Shall be set only if L4TUNT = 01b and EIPT is not zero
+		 */
+		if (!(eip_typ & IAVF_TX_CTX_EXT_IP_NONE) &&
+				(eip_typ & IAVF_TXD_CTX_UDP_TUNNELING))
+			eip_typ |= IAVF_TXD_CTX_QW0_L4T_CS_MASK;
+	}
 
 	*qw0 = eip_typ << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIPT_SHIFT |
 	       eip_len << IAVF_TXD_CTX_QW0_TUN_PARAMS_EIPLEN_SHIFT |
@@ -2591,7 +2593,8 @@ iavf_build_data_desc_cmd_offset_fields(volatile uint64_t *qw1,
 	}
 
 	/* Set MACLEN */
-	if (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK)
+	if (m->ol_flags & RTE_MBUF_F_TX_TUNNEL_MASK &&
+			!(m->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD))
 		offset |= (m->outer_l2_len >> 1)
 			<< IAVF_TX_DESC_LENGTH_MACLEN_SHIFT;
 	else
@@ -2844,7 +2847,10 @@ iavf_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 			txe->mbuf = mb_seg;
 
-			if (mb_seg->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) {
+			if ((mb_seg->ol_flags & RTE_MBUF_F_TX_SEC_OFFLOAD) &&
+					(mb_seg->ol_flags &
+					(RTE_MBUF_F_TX_TCP_SEG |
+					RTE_MBUF_F_TX_UDP_SEG))) {
 				slen = tlen + mb_seg->l2_len + mb_seg->l3_len +
 						mb_seg->outer_l3_len + ipseclen;
 				if (mb_seg->ol_flags & RTE_MBUF_F_TX_L4_MASK)