From patchwork Fri Feb 5 07:47:58 2021
X-Patchwork-Submitter: Marvin Liu <yong.liu@intel.com>
X-Patchwork-Id: 87792
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Marvin Liu <yong.liu@intel.com>
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, Marvin Liu <yong.liu@intel.com>, stable@dpdk.org
Date: Fri, 5 Feb 2021 15:47:58 +0800
Message-Id: <20210205074758.28278-1-yong.liu@intel.com>
Subject: [dpdk-dev] [PATCH] vhost: fix packed ring dequeue offloading

When vhost does dequeue offloading, it parses the Ethernet and L3/L4
headers of the packet and then sets the corresponding values in the
mbuf attributes. This means the offloading step must run after the
packet data has been copied into the mbuf.
Fixes: 75ed51697820 ("vhost: add packed ring batch dequeue")
Cc: stable@dpdk.org

Signed-off-by: Marvin Liu <yong.liu@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 730b92e478..0a7d008a91 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -2267,7 +2267,6 @@ vhost_reserve_avail_batch_packed(struct virtio_net *dev,
 {
 	bool wrap = vq->avail_wrap_counter;
 	struct vring_packed_desc *descs = vq->desc_packed;
-	struct virtio_net_hdr *hdr;
 	uint64_t lens[PACKED_BATCH_SIZE];
 	uint64_t buf_lens[PACKED_BATCH_SIZE];
 	uint32_t buf_offset = sizeof(struct virtio_net_hdr_mrg_rxbuf);
@@ -2324,13 +2323,6 @@ vhost_reserve_avail_batch_packed(struct virtio_net *dev,
 		ids[i] = descs[avail_idx + i].id;
 	}
 
-	if (virtio_net_with_host_offload(dev)) {
-		vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
-			hdr = (struct virtio_net_hdr *)(desc_addrs[i]);
-			vhost_dequeue_offload(hdr, pkts[i]);
-		}
-	}
-
 	return 0;
 
 free_buf:
@@ -2348,6 +2340,7 @@ virtio_dev_tx_batch_packed(struct virtio_net *dev,
 {
 	uint16_t avail_idx = vq->last_avail_idx;
 	uint32_t buf_offset = sizeof(struct virtio_net_hdr_mrg_rxbuf);
+	struct virtio_net_hdr *hdr;
 	uintptr_t desc_addrs[PACKED_BATCH_SIZE];
 	uint16_t ids[PACKED_BATCH_SIZE];
 	uint16_t i;
@@ -2364,6 +2357,13 @@ virtio_dev_tx_batch_packed(struct virtio_net *dev,
 			   (void *)(uintptr_t)(desc_addrs[i] + buf_offset),
 			   pkts[i]->pkt_len);
 
+	if (virtio_net_with_host_offload(dev)) {
+		vhost_for_each_try_unroll(i, 0, PACKED_BATCH_SIZE) {
+			hdr = (struct virtio_net_hdr *)(desc_addrs[i]);
+			vhost_dequeue_offload(hdr, pkts[i]);
+		}
+	}
+
 	if (virtio_net_is_inorder(dev))
 		vhost_shadow_dequeue_batch_packed_inorder(vq,
 			ids[PACKED_BATCH_SIZE - 1]);
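
For readers less familiar with this path: the reordering matters because,
as the commit message notes, the offload step parses the Ethernet and
L3/L4 headers out of the mbuf data area, and that area only contains
valid bytes once the rte_memcpy() in virtio_dev_tx_batch_packed() has
copied the packet in. Below is a minimal, standalone sketch of that
ordering issue. It is not DPDK code; the struct and function names
(fake_mbuf, parse_offload) are invented for illustration only.

/* Standalone sketch, not DPDK code: models why header parsing must run
 * after the payload copy. fake_mbuf/parse_offload are illustrative only. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

struct fake_mbuf {
	uint8_t  data[64];  /* packet data area, filled by the "copy" step */
	uint16_t l2_len;    /* set by the "offload" parser */
	uint16_t l3_len;
};

/* Stand-in for the header-parsing part of dequeue offloading: reads the
 * frame from the mbuf data area and fills mbuf offload attributes. */
static void parse_offload(struct fake_mbuf *m)
{
	uint16_t ethertype = (uint16_t)((m->data[12] << 8) | m->data[13]);

	m->l2_len = 14;                               /* Ethernet header */
	m->l3_len = (ethertype == 0x0800) ? 20 : 0;   /* IPv4 header */
}

int main(void)
{
	uint8_t desc_buf[64] = {0};   /* guest buffer: IPv4-over-Ethernet */
	struct fake_mbuf m;

	desc_buf[12] = 0x08;          /* EtherType 0x0800 (IPv4) */
	desc_buf[13] = 0x00;
	memset(&m, 0, sizeof(m));

	/* Old order: parse before copy -> parser sees zeroed mbuf data. */
	parse_offload(&m);
	printf("parse before copy: l3_len=%u\n", (unsigned)m.l3_len);

	/* Order after the patch: copy the packet data, then parse. */
	memcpy(m.data, desc_buf, sizeof(desc_buf));
	parse_offload(&m);
	printf("parse after copy:  l3_len=%u\n", (unsigned)m.l3_len);

	return 0;
}

With the old order the sketch reports l3_len=0 because the mbuf still
holds zeroed bytes; after the copy the same parser sees the real
EtherType and reports 20. Moving the offload handling after the batched
copy gives the real code the same guarantee.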