From patchwork Wed Jun 8 12:49:45 2022
X-Patchwork-Submitter: Maxime Coquelin
X-Patchwork-Id: 112571
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Maxime Coquelin
To: dev@dpdk.org, jasowang@redhat.com, chenbo.xia@intel.com,
 david.marchand@redhat.com, olivier.matz@6wind.com, wenwux.ma@intel.com,
 yuying.zhang@intel.com, aman.deep.singh@intel.com
Cc: Maxime Coquelin
Subject: [PATCH v2 5/6] net/vhost: perform SW checksum in Rx path
Date: Wed, 8 Jun 2022 14:49:45 +0200
Message-Id: <20220608124946.102623-6-maxime.coquelin@redhat.com>
In-Reply-To: <20220608124946.102623-1-maxime.coquelin@redhat.com>
References: <20220608124946.102623-1-maxime.coquelin@redhat.com>
List-Id: DPDK patches and discussions

The Virtio specification supports host checksum offloading for L4, which
is enabled with VIRTIO_NET_F_CSUM feature negotiation. However, the
Vhost PMD does not advertise Rx checksum offload capabilities, so we can
end up with the VIRTIO_NET_F_CSUM feature being negotiated, implying the
Vhost library returns packets with the checksum offloaded while the
application did not request it.

Advertising these offload capabilities at the ethdev level is not
enough, because we could still end up with the application not enabling
these offloads while the guest still negotiates them.

This patch advertises the Rx checksum offload capabilities, and
introduces a compatibility layer to cover the case where
VIRTIO_NET_F_CSUM has been negotiated but the application does not
configure the Rx checksum offloads. This function performs the L4 Rx
checksum in SW for UDP and TCP.
Note that it is not needed to calculate the pseudo-header checksum,
because the Virtio specification requires that the driver do it.

This patch does not advertise SCTP checksum offloading capability for
now, but it could be handled later if the need arises.

Reported-by: Jason Wang
Signed-off-by: Maxime Coquelin
Reviewed-by: Chenbo Xia
Reviewed-by: Cheng Jiang
---
 doc/guides/nics/features/vhost.ini |  1 +
 drivers/net/vhost/rte_eth_vhost.c  | 83 ++++++++++++++++++++++++++++++
 2 files changed, 84 insertions(+)

diff --git a/doc/guides/nics/features/vhost.ini b/doc/guides/nics/features/vhost.ini
index ef81abb439..15f4dfe5e8 100644
--- a/doc/guides/nics/features/vhost.ini
+++ b/doc/guides/nics/features/vhost.ini
@@ -7,6 +7,7 @@
 Link status = Y
 Free Tx mbuf on demand = Y
 Queue status event = Y
+L4 checksum offload = P
 Basic stats = Y
 Extended stats = Y
 x86-32 = Y
diff --git a/drivers/net/vhost/rte_eth_vhost.c b/drivers/net/vhost/rte_eth_vhost.c
index e931d59053..42f0d52ebc 100644
--- a/drivers/net/vhost/rte_eth_vhost.c
+++ b/drivers/net/vhost/rte_eth_vhost.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include <rte_net.h>
 #include
 #include
 #include
@@ -85,10 +86,12 @@ struct pmd_internal {
 	char *iface_name;
 	uint64_t flags;
 	uint64_t disable_flags;
+	uint64_t features;
 	uint16_t max_queues;
 	int vid;
 	rte_atomic32_t started;
 	bool vlan_strip;
+	bool rx_sw_csum;
 };

 struct internal_list {
@@ -275,6 +278,70 @@ vhost_dev_xstats_get(struct rte_eth_dev *dev, struct rte_eth_xstat *xstats,
 	return nstats;
 }

+static void
+vhost_dev_csum_configure(struct rte_eth_dev *eth_dev)
+{
+	struct pmd_internal *internal = eth_dev->data->dev_private;
+	const struct rte_eth_rxmode *rxmode = &eth_dev->data->dev_conf.rxmode;
+
+	internal->rx_sw_csum = false;
+
+	/* SW checksum is not compatible with legacy mode */
+	if (!(internal->flags & RTE_VHOST_USER_NET_COMPLIANT_OL_FLAGS))
+		return;
+
+	if (internal->features & (1ULL << VIRTIO_NET_F_CSUM)) {
+		if (!(rxmode->offloads &
+				(RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+				 RTE_ETH_RX_OFFLOAD_TCP_CKSUM))) {
+			VHOST_LOG(NOTICE, "Rx csum will be done in SW, may impact performance.");
+			internal->rx_sw_csum = true;
+		}
+	}
+}
+
+static void
+vhost_dev_rx_sw_csum(struct rte_mbuf *mbuf)
+{
+	struct rte_net_hdr_lens hdr_lens;
+	uint32_t ptype, hdr_len;
+	uint16_t csum = 0, csum_offset;
+
+	/* Return early if the L4 checksum was not offloaded */
+	if ((mbuf->ol_flags & RTE_MBUF_F_RX_L4_CKSUM_MASK) != RTE_MBUF_F_RX_L4_CKSUM_NONE)
+		return;
+
+	ptype = rte_net_get_ptype(mbuf, &hdr_lens, RTE_PTYPE_ALL_MASK);
+
+	hdr_len = hdr_lens.l2_len + hdr_lens.l3_len;
+
+	switch (ptype & RTE_PTYPE_L4_MASK) {
+	case RTE_PTYPE_L4_TCP:
+		csum_offset = offsetof(struct rte_tcp_hdr, cksum) + hdr_len;
+		break;
+	case RTE_PTYPE_L4_UDP:
+		csum_offset = offsetof(struct rte_udp_hdr, dgram_cksum) + hdr_len;
+		break;
+	default:
+		/* Unsupported packet type */
+		return;
+	}
+
+	/* The pseudo-header checksum is already performed, as per Virtio spec */
+	if (rte_raw_cksum_mbuf(mbuf, hdr_len, rte_pktmbuf_pkt_len(mbuf) - hdr_len, &csum) < 0)
+		return;
+
+	csum = ~csum;
+	/* See RFC768 */
+	if (unlikely((ptype & RTE_PTYPE_L4_UDP) && csum == 0))
+		csum = 0xffff;
+
+	if (rte_pktmbuf_data_len(mbuf) >= csum_offset + 1)
+		*rte_pktmbuf_mtod_offset(mbuf, uint16_t *, csum_offset) = csum;
+
+	mbuf->ol_flags &= ~RTE_MBUF_F_RX_L4_CKSUM_MASK;
+	mbuf->ol_flags |= RTE_MBUF_F_RX_L4_CKSUM_GOOD;
+}
+
 static uint16_t
 eth_vhost_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 {
@@ -315,6 +382,9 @@ eth_vhost_rx(void *q, struct rte_mbuf **bufs, uint16_t nb_bufs)
 		if (r->internal->vlan_strip)
 			rte_vlan_strip(bufs[i]);

+		if (r->internal->rx_sw_csum)
+			vhost_dev_rx_sw_csum(bufs[i]);
+
 		r->stats.bytes += bufs[i]->pkt_len;
 	}

@@ -711,6 +781,11 @@ new_device(int vid)
 		eth_dev->data->numa_node = newnode;
 #endif

+	if (rte_vhost_get_negotiated_features(vid, &internal->features)) {
+		VHOST_LOG(ERR, "Failed to get device features\n");
+		return -1;
+	}
+
 	internal->vid = vid;
 	if (rte_atomic32_read(&internal->started) == 1) {
 		queue_setup(eth_dev, internal);
@@ -733,6 +808,8 @@ new_device(int vid)

 	eth_dev->data->dev_link.link_status = RTE_ETH_LINK_UP;

+	vhost_dev_csum_configure(eth_dev);
+
 	rte_atomic32_set(&internal->dev_attached, 1);
 	update_queuing_status(eth_dev);

@@ -1039,6 +1116,8 @@ eth_dev_configure(struct rte_eth_dev *dev)
 	internal->vlan_strip = !!(rxmode->offloads &
 			RTE_ETH_RX_OFFLOAD_VLAN_STRIP);

+	vhost_dev_csum_configure(dev);
+
 	return 0;
 }

@@ -1189,6 +1268,10 @@ eth_dev_info(struct rte_eth_dev *dev,
 	dev_info->tx_offload_capa = RTE_ETH_TX_OFFLOAD_MULTI_SEGS |
 				RTE_ETH_TX_OFFLOAD_VLAN_INSERT;
 	dev_info->rx_offload_capa = RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
+	if (internal->flags & RTE_VHOST_USER_NET_COMPLIANT_OL_FLAGS) {
+		dev_info->rx_offload_capa |= RTE_ETH_RX_OFFLOAD_UDP_CKSUM |
+			RTE_ETH_RX_OFFLOAD_TCP_CKSUM;
+	}

 	return 0;
 }
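
A note on the "See RFC768" step in the patch: UDP reserves a transmitted
checksum of 0 to mean "no checksum computed", so when the complemented
sum happens to be 0 it must be stored as 0xffff instead (TCP has no such
rule). The sketch below isolates just that finalization step under a
hypothetical helper name, `finalize_l4_cksum`; it is an illustration of
the rule, not code from the patch:

```c
#include <stdint.h>

/* Finalize an L4 checksum the way the patch does after
 * rte_raw_cksum_mbuf(): complement the raw one's-complement sum and,
 * for UDP only, map a resulting 0 to 0xffff, because RFC 768 reserves
 * a transmitted checksum of 0 to mean "checksum not computed".
 * Hypothetical helper, for illustration. */
static uint16_t finalize_l4_cksum(uint16_t raw_sum, int is_udp)
{
	uint16_t csum = (uint16_t)~raw_sum;

	if (is_udp && csum == 0)
		csum = 0xffff;	/* 0 is reserved; send all-ones instead */
	return csum;
}
```

For example, a raw sum of 0xffff complements to 0, which becomes 0xffff
for UDP but stays 0 for TCP; any non-zero result is used unchanged.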