From patchwork Thu Jul 30 16:04:30 2020
X-Patchwork-Submitter: "Wang, Haiyue"
X-Patchwork-Id: 75048
X-Patchwork-Delegate: qi.z.zhang@intel.com
From: Haiyue Wang
To: dev@dpdk.org, qiming.yang@intel.com, qi.z.zhang@intel.com
Cc: ting.xu@intel.com, yinan.wang@intel.com, Haiyue Wang, stable@dpdk.org
Date: Fri, 31 Jul 2020 00:04:30 +0800
Message-Id: <20200730160430.10609-1-haiyue.wang@intel.com>
X-Mailer: git-send-email 2.28.0
Subject: [dpdk-dev] [PATCH v1] net/ice: optimize TCP header size calculation

The rte_pktmbuf_read function already handles reading from a contiguous
data buffer, so remove the redundant contiguous memory handling.

Fixes: 17c7d0f9d6a4 ("net/ice: support basic Rx/Tx")
Fixes: 2a0c9ae4f646 ("net/ice: fix TCP checksum offload")
Fixes: 7365a3cee51f ("net/ice: calculate TCP header size for offload")
Cc: stable@dpdk.org

Signed-off-by: Haiyue Wang
---
 drivers/net/ice/ice_rxtx.c | 14 ++++----------
 1 file changed, 4 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ice/ice_rxtx.c b/drivers/net/ice/ice_rxtx.c
index bcb67ec25..a35c5546b 100644
--- a/drivers/net/ice/ice_rxtx.c
+++ b/drivers/net/ice/ice_rxtx.c
@@ -2375,18 +2375,12 @@ ice_calc_pkt_desc(struct rte_mbuf *tx_pkt)
 static inline uint16_t
 ice_calc_pkt_tcp_hdr(struct rte_mbuf *tx_pkt, union ice_tx_offload tx_offload)
 {
-	uint16_t tcpoff = tx_offload.l2_len + tx_offload.l3_len;
 	const struct rte_tcp_hdr *tcp_hdr;
-	struct rte_tcp_hdr _tcp_hdr;
+	struct rte_tcp_hdr tcp_hdr_buf;
 
-	if (tcpoff + sizeof(struct rte_tcp_hdr) < tx_pkt->data_len) {
-		tcp_hdr = rte_pktmbuf_mtod_offset(tx_pkt, struct rte_tcp_hdr *,
-						  tcpoff);
-
-		return (tcp_hdr->data_off & 0xf0) >> 2;
-	}
-
-	tcp_hdr = rte_pktmbuf_read(tx_pkt, tcpoff, sizeof(_tcp_hdr), &_tcp_hdr);
+	tcp_hdr = rte_pktmbuf_read(tx_pkt,
+				   tx_offload.l2_len + tx_offload.l3_len,
+				   sizeof(tcp_hdr_buf), &tcp_hdr_buf);
 
 	if (tcp_hdr)
 		return (tcp_hdr->data_off & 0xf0) >> 2;
 	else
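
For context, a minimal sketch of the rte_pktmbuf_read() pattern the patch
switches to (the helper name example_tcp_hdr_len and its parameters are
made up for illustration and are not part of the patch): the function
returns a pointer straight into the mbuf when the requested bytes are
contiguous in the first segment, copies them into the caller-supplied
buffer when they span segments, and returns NULL only when the requested
region falls outside the packet, so no separate contiguous fast path is
needed by the caller.

#include <rte_mbuf.h>
#include <rte_tcp.h>

/*
 * Illustrative helper (hypothetical, not part of this patch): return the
 * TCP header size in bytes for the TCP header located at tcp_off inside
 * a possibly segmented mbuf chain.
 */
static inline uint16_t
example_tcp_hdr_len(const struct rte_mbuf *pkt, uint32_t tcp_off)
{
	struct rte_tcp_hdr hdr_buf;
	const struct rte_tcp_hdr *tcp_hdr;

	/* Handles both the contiguous and the segmented case internally. */
	tcp_hdr = rte_pktmbuf_read(pkt, tcp_off, sizeof(hdr_buf), &hdr_buf);
	if (tcp_hdr == NULL)
		return 0;	/* offset/length beyond packet data */

	/* Upper 4 bits of data_off hold the header length in 32-bit words. */
	return (tcp_hdr->data_off & 0xf0) >> 2;
}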