From patchwork Mon May  3 13:26:45 2021
X-Patchwork-Submitter: David Marchand <david.marchand@redhat.com>
X-Patchwork-Id: 92634
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: David Marchand <david.marchand@redhat.com>
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, olivier.matz@6wind.com, fbl@sysclose.org,
 i.maximets@ovn.org, chenbo.xia@intel.com, ian.stokes@intel.com,
 Ruifeng Wang, Bruce Richardson, Konstantin Ananyev, Jerin Jacob
Date: Mon, 3 May 2021 15:26:45 +0200
Message-Id: <20210503132646.16076-4-david.marchand@redhat.com>
In-Reply-To: <20210503132646.16076-1-david.marchand@redhat.com>
References: <20210401095243.18211-1-david.marchand@redhat.com>
 <20210503132646.16076-1-david.marchand@redhat.com>
Subject: [dpdk-dev] [PATCH v3 3/4] net/virtio: refactor Tx offload helper

Purely cosmetic, but it is rather odd to have an "offload" helper that
checks whether it actually must do something. We already have the same
checks in most callers, so move this branch into them.
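To make the shape of the change concrete, here is a condensed before/after
view of a call site (an illustrative sketch distilled from the hunks below,
not an extra hunk of this patch):

	/* Before: the caller passes a flag, and the helper may do nothing. */
	virtqueue_xmit_offload(hdr, cookie, vq->hw->has_tx_offload);

	/* After: the caller branches, and the helper always does real work. */
	if (vq->hw->has_tx_offload)
		virtqueue_xmit_offload(hdr, cookie);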
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Flavio Leitner <fbl@sysclose.org>
Reviewed-by: Ruifeng Wang
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 drivers/net/virtio/virtio_rxtx.c             |  7 +-
 drivers/net/virtio/virtio_rxtx_packed_avx.h  |  2 +-
 drivers/net/virtio/virtio_rxtx_packed_neon.h |  2 +-
 drivers/net/virtio/virtqueue.h               | 83 +++++++++-----------
 4 files changed, 44 insertions(+), 50 deletions(-)

diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 8df913b0ba..34108fb946 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -448,7 +448,7 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
 		if (!vq->hw->has_tx_offload)
 			virtqueue_clear_net_hdr(hdr);
 		else
-			virtqueue_xmit_offload(hdr, cookies[i], true);
+			virtqueue_xmit_offload(hdr, cookies[i]);
 
 		start_dp[idx].addr = rte_mbuf_data_iova(cookies[i]) - head_size;
 		start_dp[idx].len = cookies[i]->data_len + head_size;
@@ -495,7 +495,7 @@ virtqueue_enqueue_xmit_packed_fast(struct virtnet_tx *txvq,
 	if (!vq->hw->has_tx_offload)
 		virtqueue_clear_net_hdr(hdr);
 	else
-		virtqueue_xmit_offload(hdr, cookie, true);
+		virtqueue_xmit_offload(hdr, cookie);
 
 	dp->addr = rte_mbuf_data_iova(cookie) - head_size;
 	dp->len = cookie->data_len + head_size;
@@ -581,7 +581,8 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
 		idx = start_dp[idx].next;
 	}
 
-	virtqueue_xmit_offload(hdr, cookie, vq->hw->has_tx_offload);
+	if (vq->hw->has_tx_offload)
+		virtqueue_xmit_offload(hdr, cookie);
 
 	do {
 		start_dp[idx].addr = rte_mbuf_data_iova(cookie);
diff --git a/drivers/net/virtio/virtio_rxtx_packed_avx.h b/drivers/net/virtio/virtio_rxtx_packed_avx.h
index 228cf5437b..c819d2e4f2 100644
--- a/drivers/net/virtio/virtio_rxtx_packed_avx.h
+++ b/drivers/net/virtio/virtio_rxtx_packed_avx.h
@@ -115,7 +115,7 @@ virtqueue_enqueue_batch_packed_vec(struct virtnet_tx *txvq,
 		virtio_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
 			hdr = rte_pktmbuf_mtod_offset(tx_pkts[i],
 					struct virtio_net_hdr *, -head_size);
-			virtqueue_xmit_offload(hdr, tx_pkts[i], true);
+			virtqueue_xmit_offload(hdr, tx_pkts[i]);
 		}
 	}
 
diff --git a/drivers/net/virtio/virtio_rxtx_packed_neon.h b/drivers/net/virtio/virtio_rxtx_packed_neon.h
index d4257e68f0..f19e618635 100644
--- a/drivers/net/virtio/virtio_rxtx_packed_neon.h
+++ b/drivers/net/virtio/virtio_rxtx_packed_neon.h
@@ -134,7 +134,7 @@ virtqueue_enqueue_batch_packed_vec(struct virtnet_tx *txvq,
 		virtio_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
 			hdr = rte_pktmbuf_mtod_offset(tx_pkts[i],
 					struct virtio_net_hdr *, -head_size);
-			virtqueue_xmit_offload(hdr, tx_pkts[i], true);
+			virtqueue_xmit_offload(hdr, tx_pkts[i]);
 		}
 	}
 
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index ed3b85080e..03957b2bd0 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -617,52 +617,44 @@ virtqueue_notify(struct virtqueue *vq)
 } while (0)
 
 static inline void
-virtqueue_xmit_offload(struct virtio_net_hdr *hdr,
-			struct rte_mbuf *cookie,
-			uint8_t offload)
+virtqueue_xmit_offload(struct virtio_net_hdr *hdr, struct rte_mbuf *cookie)
 {
-	if (offload) {
-		uint64_t csum_l4 = cookie->ol_flags & PKT_TX_L4_MASK;
-
-		if (cookie->ol_flags & PKT_TX_TCP_SEG)
-			csum_l4 |= PKT_TX_TCP_CKSUM;
-
-		switch (csum_l4) {
-		case PKT_TX_UDP_CKSUM:
-			hdr->csum_start = cookie->l2_len + cookie->l3_len;
-			hdr->csum_offset = offsetof(struct rte_udp_hdr,
-				dgram_cksum);
-			hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
-			break;
-
-		case PKT_TX_TCP_CKSUM:
-			hdr->csum_start = cookie->l2_len + cookie->l3_len;
-			hdr->csum_offset = offsetof(struct rte_tcp_hdr, cksum);
-			hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
-			break;
-
-		default:
-			ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);
-			ASSIGN_UNLESS_EQUAL(hdr->csum_offset, 0);
-			ASSIGN_UNLESS_EQUAL(hdr->flags, 0);
-			break;
-		}
+	uint64_t csum_l4 = cookie->ol_flags & PKT_TX_L4_MASK;
+
+	if (cookie->ol_flags & PKT_TX_TCP_SEG)
+		csum_l4 |= PKT_TX_TCP_CKSUM;
+
+	switch (csum_l4) {
+	case PKT_TX_UDP_CKSUM:
+		hdr->csum_start = cookie->l2_len + cookie->l3_len;
+		hdr->csum_offset = offsetof(struct rte_udp_hdr, dgram_cksum);
+		hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
+		break;
+
+	case PKT_TX_TCP_CKSUM:
+		hdr->csum_start = cookie->l2_len + cookie->l3_len;
+		hdr->csum_offset = offsetof(struct rte_tcp_hdr, cksum);
+		hdr->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
+		break;
+
+	default:
+		ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);
+		ASSIGN_UNLESS_EQUAL(hdr->csum_offset, 0);
+		ASSIGN_UNLESS_EQUAL(hdr->flags, 0);
+		break;
+	}
 
-		/* TCP Segmentation Offload */
-		if (cookie->ol_flags & PKT_TX_TCP_SEG) {
-			hdr->gso_type = (cookie->ol_flags & PKT_TX_IPV6) ?
-				VIRTIO_NET_HDR_GSO_TCPV6 :
-				VIRTIO_NET_HDR_GSO_TCPV4;
-			hdr->gso_size = cookie->tso_segsz;
-			hdr->hdr_len =
-				cookie->l2_len +
-				cookie->l3_len +
-				cookie->l4_len;
-		} else {
-			ASSIGN_UNLESS_EQUAL(hdr->gso_type, 0);
-			ASSIGN_UNLESS_EQUAL(hdr->gso_size, 0);
-			ASSIGN_UNLESS_EQUAL(hdr->hdr_len, 0);
-		}
+	/* TCP Segmentation Offload */
+	if (cookie->ol_flags & PKT_TX_TCP_SEG) {
+		hdr->gso_type = (cookie->ol_flags & PKT_TX_IPV6) ?
+			VIRTIO_NET_HDR_GSO_TCPV6 :
+			VIRTIO_NET_HDR_GSO_TCPV4;
+		hdr->gso_size = cookie->tso_segsz;
+		hdr->hdr_len = cookie->l2_len + cookie->l3_len + cookie->l4_len;
+	} else {
+		ASSIGN_UNLESS_EQUAL(hdr->gso_type, 0);
+		ASSIGN_UNLESS_EQUAL(hdr->gso_size, 0);
+		ASSIGN_UNLESS_EQUAL(hdr->hdr_len, 0);
 	}
 }
 
@@ -741,7 +733,8 @@ virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
 		}
 	}
 
-	virtqueue_xmit_offload(hdr, cookie, vq->hw->has_tx_offload);
+	if (vq->hw->has_tx_offload)
+		virtqueue_xmit_offload(hdr, cookie);
 
 	do {
 		uint16_t flags;
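For reference, once the "offload" parameter is gone, a caller that has
already checked vq->hw->has_tx_offload gets the header filled
unconditionally. For a TSO'd IPv4/TCP mbuf (PKT_TX_TCP_SEG set,
PKT_TX_IPV6 unset), the helper leaves the virtio net header roughly as
follows (a sketch derived from the virtqueue.h hunk above, not code from
the patch):

	/* PKT_TX_TCP_SEG implies the PKT_TX_TCP_CKSUM case is taken. */
	hdr->csum_start  = cookie->l2_len + cookie->l3_len; /* L4 start */
	hdr->csum_offset = offsetof(struct rte_tcp_hdr, cksum);
	hdr->flags       = VIRTIO_NET_HDR_F_NEEDS_CSUM;
	hdr->gso_type    = VIRTIO_NET_HDR_GSO_TCPV4;
	hdr->gso_size    = cookie->tso_segsz;
	hdr->hdr_len     = cookie->l2_len + cookie->l3_len + cookie->l4_len;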