From patchwork Sat Jul 8 01:57:09 2023
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 129388
X-Patchwork-Delegate: thomas@monjalon.net
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger, Jiayu Hu
Subject: [PATCH v5 02/11] gso: use rte_pktmbuf_mtod_offset
Date: Fri, 7 Jul 2023 18:57:09 -0700
Message-Id: <20230708015718.75565-3-stephen@networkplumber.org>
X-Mailer: git-send-email 2.39.2
In-Reply-To: <20230708015718.75565-1-stephen@networkplumber.org>
References: <20230505174813.133894-1-stephen@networkplumber.org> <20230708015718.75565-1-stephen@networkplumber.org>

Replace explicit packet offset computations with rte_pktmbuf_mtod_offset().

Signed-off-by: Stephen Hemminger
---
 lib/gso/gso_common.h      | 12 ++++++------
 lib/gso/gso_tcp4.c        |  8 ++++----
 lib/gso/gso_tunnel_tcp4.c | 12 ++++++------
 lib/gso/gso_tunnel_udp4.c | 18 +++++++++---------
 4 files changed, 25 insertions(+), 25 deletions(-)

diff --git a/lib/gso/gso_common.h b/lib/gso/gso_common.h
index 9456d596d3c5..d1c1b73091e2 100644
--- a/lib/gso/gso_common.h
+++ b/lib/gso/gso_common.h
@@ -52,8 +52,8 @@ update_udp_header(struct rte_mbuf *pkt, uint16_t udp_offset)
 {
        struct rte_udp_hdr *udp_hdr;

-       udp_hdr = (struct rte_udp_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-                       udp_offset);
+       udp_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_udp_hdr *,
+                       udp_offset);
        udp_hdr->dgram_len = rte_cpu_to_be_16(pkt->pkt_len - udp_offset);
 }

@@ -77,8 +77,8 @@ update_tcp_header(struct rte_mbuf *pkt, uint16_t l4_offset, uint32_t sent_seq,
 {
        struct rte_tcp_hdr *tcp_hdr;

-       tcp_hdr = (struct rte_tcp_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-                       l4_offset);
+       tcp_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_tcp_hdr *,
+                       l4_offset);
        tcp_hdr->sent_seq = rte_cpu_to_be_32(sent_seq);
        if (likely(non_tail))
                tcp_hdr->tcp_flags &= (~(TCP_HDR_PSH_MASK |
@@ -104,8 +104,8 @@ update_ipv4_header(struct rte_mbuf *pkt, uint16_t l3_offset, uint16_t id)
 {
        struct rte_ipv4_hdr *ipv4_hdr;

-       ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-                       l3_offset);
+       ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
+                       l3_offset);
        ipv4_hdr->total_length = rte_cpu_to_be_16(pkt->pkt_len - l3_offset);
        ipv4_hdr->packet_id = rte_cpu_to_be_16(id);
 }
diff --git a/lib/gso/gso_tcp4.c b/lib/gso/gso_tcp4.c
index d31feaff95cd..e2ae4aaf6c5a 100644
--- a/lib/gso/gso_tcp4.c
+++ b/lib/gso/gso_tcp4.c
@@ -16,8 +16,8 @@ update_ipv4_tcp_headers(struct rte_mbuf *pkt, uint8_t ipid_delta,
        uint16_t l3_offset = pkt->l2_len;
        uint16_t l4_offset = l3_offset + pkt->l3_len;

-       ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char*) +
-                       l3_offset);
+       ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
+                       l3_offset);
        tcp_hdr = (struct rte_tcp_hdr *)((char *)ipv4_hdr + pkt->l3_len);
        id = rte_be_to_cpu_16(ipv4_hdr->packet_id);
        sent_seq = rte_be_to_cpu_32(tcp_hdr->sent_seq);
@@ -46,8 +46,8 @@ gso_tcp4_segment(struct rte_mbuf *pkt,
        int ret;

        /* Don't process the fragmented packet */
-       ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-                       pkt->l2_len);
+       ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
+                       pkt->l2_len);
        frag_off = rte_be_to_cpu_16(ipv4_hdr->fragment_offset);
        if (unlikely(IS_FRAGMENTED(frag_off))) {
                return 0;
        }
diff --git a/lib/gso/gso_tunnel_tcp4.c b/lib/gso/gso_tunnel_tcp4.c
index 1a7ef30ddebf..3a9159774b27 100644
--- a/lib/gso/gso_tunnel_tcp4.c
+++ b/lib/gso/gso_tunnel_tcp4.c
@@ -23,13 +23,13 @@ update_tunnel_ipv4_tcp_headers(struct rte_mbuf *pkt, uint8_t ipid_delta,
        tcp_offset = inner_ipv4_offset + pkt->l3_len;

        /* Outer IPv4 header. */
-       ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-                       outer_ipv4_offset);
+       ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
+                       outer_ipv4_offset);
        outer_id = rte_be_to_cpu_16(ipv4_hdr->packet_id);

        /* Inner IPv4 header. */
-       ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-                       inner_ipv4_offset);
+       ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
+                       inner_ipv4_offset);
        inner_id = rte_be_to_cpu_16(ipv4_hdr->packet_id);

        tcp_hdr = (struct rte_tcp_hdr *)((char *)ipv4_hdr + pkt->l3_len);
@@ -65,8 +65,8 @@ gso_tunnel_tcp4_segment(struct rte_mbuf *pkt,
        int ret;

        hdr_offset = pkt->outer_l2_len + pkt->outer_l3_len + pkt->l2_len;
-       inner_ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-                       hdr_offset);
+       inner_ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
+                       hdr_offset);
        /*
         * Don't process the packet whose MF bit or offset in the inner
         * IPv4 header are non-zero.
diff --git a/lib/gso/gso_tunnel_udp4.c b/lib/gso/gso_tunnel_udp4.c
index 1fc7a8dbc5aa..4fb275484ca8 100644
--- a/lib/gso/gso_tunnel_udp4.c
+++ b/lib/gso/gso_tunnel_udp4.c
@@ -22,13 +22,13 @@ update_tunnel_ipv4_udp_headers(struct rte_mbuf *pkt, struct rte_mbuf **segs,
        inner_ipv4_offset = outer_udp_offset + pkt->l2_len;

        /* Outer IPv4 header. */
-       ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-                       outer_ipv4_offset);
+       ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
+                       outer_ipv4_offset);
        outer_id = rte_be_to_cpu_16(ipv4_hdr->packet_id);

        /* Inner IPv4 header. */
-       ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-                       inner_ipv4_offset);
+       ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
+                       inner_ipv4_offset);
        inner_id = rte_be_to_cpu_16(ipv4_hdr->packet_id);

        tail_idx = nb_segs - 1;
@@ -42,9 +42,9 @@ update_tunnel_ipv4_udp_headers(struct rte_mbuf *pkt, struct rte_mbuf **segs,
                 *
                 * Set IP fragment offset for inner IP header.
                 */
-               ipv4_hdr = (struct rte_ipv4_hdr *)
-                       (rte_pktmbuf_mtod(segs[i], char *) +
-                               inner_ipv4_offset);
+               ipv4_hdr = rte_pktmbuf_mtod_offset(segs[i],
+                               struct rte_ipv4_hdr *,
+                               inner_ipv4_offset);
                is_mf = i < tail_idx ? IPV4_HDR_MF_BIT : 0;
                ipv4_hdr->fragment_offset =
                        rte_cpu_to_be_16(frag_offset | is_mf);
@@ -67,8 +67,8 @@ gso_tunnel_udp4_segment(struct rte_mbuf *pkt,
        int ret;

        hdr_offset = pkt->outer_l2_len + pkt->outer_l3_len + pkt->l2_len;
-       inner_ipv4_hdr = (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
-                       hdr_offset);
+       inner_ipv4_hdr = rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
+                       hdr_offset);
        /*
         * Don't process the packet whose MF bit or offset in the inner
         * IPv4 header are non-zero.
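
For reference, the shape of the conversion is sketched below. This is an illustrative,
standalone example rather than a hunk from the patch; the two helper functions are
hypothetical, while rte_pktmbuf_mtod() and rte_pktmbuf_mtod_offset() are the existing
rte_mbuf.h macros (arguments: mbuf, target pointer type, byte offset).

#include <rte_mbuf.h>
#include <rte_ip.h>

/* Hypothetical helpers, for illustration only. */
static inline struct rte_ipv4_hdr *
l3_hdr_open_coded(struct rte_mbuf *pkt)
{
        /* Old style: cast the byte pointer and add the offset by hand. */
        return (struct rte_ipv4_hdr *)(rte_pktmbuf_mtod(pkt, char *) +
                        pkt->l2_len);
}

static inline struct rte_ipv4_hdr *
l3_hdr_mtod_offset(struct rte_mbuf *pkt)
{
        /* New style: the macro folds the cast and the offset arithmetic
         * into one expression, which is what this patch switches to. */
        return rte_pktmbuf_mtod_offset(pkt, struct rte_ipv4_hdr *,
                        pkt->l2_len);
}

Both forms yield the same pointer; the macro form simply drops the intermediate
char * cast and manual pointer arithmetic, which is the point of the cleanup.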