From patchwork Mon Aug 22 04:31:25 2022
X-Patchwork-Submitter: "Jiang, Cheng1"
X-Patchwork-Id: 115319
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Cheng Jiang
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, xuan.ding@intel.com, wenwux.ma@intel.com,
 yuanx.wang@intel.com, yvonnex.yang@intel.com, xingguang.he@intel.com,
 Cheng Jiang
Subject: [PATCH 1/2] vhost: fix descs count in async vhost packed ring
Date: Mon, 22 Aug 2022 04:31:25 +0000
Message-Id: <20220822043126.19340-2-cheng1.jiang@intel.com>
X-Mailer: git-send-email 2.35.1
In-Reply-To: <20220822043126.19340-1-cheng1.jiang@intel.com>
References: <20220822043126.19340-1-cheng1.jiang@intel.com>
List-Id: DPDK patches and discussions

When vhost receives packets from the front end through a packed virtqueue,
a single packet may consume multiple descriptors. We therefore need to
calculate and record the descriptor count for each packet, so that the
available and used descriptor counters are updated correctly and the
indexes can be rolled back when the DMA ring is full.
Signed-off-by: Cheng Jiang
---
 lib/vhost/virtio_net.c | 24 +++++++++++++++---------
 1 file changed, 15 insertions(+), 9 deletions(-)

diff --git a/lib/vhost/virtio_net.c b/lib/vhost/virtio_net.c
index 35fa4670fd..bfc6d65b7c 100644
--- a/lib/vhost/virtio_net.c
+++ b/lib/vhost/virtio_net.c
@@ -3553,14 +3553,15 @@ virtio_dev_tx_async_split_compliant(struct virtio_net *dev,
 }
 
 static __rte_always_inline void
-vhost_async_shadow_dequeue_single_packed(struct vhost_virtqueue *vq, uint16_t buf_id)
+vhost_async_shadow_dequeue_single_packed(struct vhost_virtqueue *vq,
+				uint16_t buf_id, uint16_t count)
 {
 	struct vhost_async *async = vq->async;
 	uint16_t idx = async->buffer_idx_packed;
 
 	async->buffers_packed[idx].id  = buf_id;
 	async->buffers_packed[idx].len = 0;
-	async->buffers_packed[idx].count = 1;
+	async->buffers_packed[idx].count = count;
 
 	async->buffer_idx_packed++;
 	if (async->buffer_idx_packed >= vq->size)
@@ -3581,6 +3582,8 @@ virtio_dev_tx_async_single_packed(struct virtio_net *dev,
 	uint16_t nr_vec = 0;
 	uint32_t buf_len;
 	struct buf_vector buf_vec[BUF_VECTOR_MAX];
+	struct vhost_async *async = vq->async;
+	struct async_inflight_info *pkts_info = async->pkts_info;
 	static bool allocerr_warned;
 
 	if (unlikely(fill_vec_buf_packed(dev, vq, vq->last_avail_idx, &desc_count,
@@ -3609,8 +3612,12 @@ virtio_dev_tx_async_single_packed(struct virtio_net *dev,
 		return -1;
 	}
 
+	pkts_info[slot_idx].descs = desc_count;
+
 	/* update async shadow packed ring */
-	vhost_async_shadow_dequeue_single_packed(vq, buf_id);
+	vhost_async_shadow_dequeue_single_packed(vq, buf_id, desc_count);
+
+	vq_inc_last_avail_packed(vq, desc_count);
 
 	return err;
 }
@@ -3649,9 +3656,6 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		}
 
 		pkts_info[slot_idx].mbuf = pkt;
-
-		vq_inc_last_avail_packed(vq, 1);
-
 	}
 
 	n_xfer = vhost_async_dma_transfer(dev, vq, dma_id, vchan_id, async->pkts_idx,
@@ -3662,6 +3666,8 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	pkt_err = pkt_idx - n_xfer;
 
 	if (unlikely(pkt_err)) {
+		uint16_t descs_err = 0;
+
 		pkt_idx -= pkt_err;
 
 		/**
@@ -3678,10 +3684,10 @@ virtio_dev_tx_async_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		}
 
 		/* recover available ring */
-		if (vq->last_avail_idx >= pkt_err) {
-			vq->last_avail_idx -= pkt_err;
+		if (vq->last_avail_idx >= descs_err) {
+			vq->last_avail_idx -= descs_err;
 		} else {
-			vq->last_avail_idx += vq->size - pkt_err;
+			vq->last_avail_idx += vq->size - descs_err;
 			vq->avail_wrap_counter ^= 1;
 		}
 	}
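
For readers unfamiliar with the wrap-around recovery that this patch switches
from a packet count (pkt_err) to a descriptor count (descs_err), below is a
minimal standalone sketch of the same rollback arithmetic on last_avail_idx.
The helper name rollback_avail_idx, the ring size, and the per-packet
descriptor counts are illustrative assumptions only; they are not part of the
patch or of the vhost library API.

#include <stdint.h>
#include <stdio.h>

/* Hypothetical helper mirroring the recovery logic in
 * virtio_dev_tx_async_packed(): roll last_avail_idx back by the total
 * number of descriptors consumed by the failed packets, toggling the
 * avail wrap counter when the index crosses the ring boundary.
 */
static void
rollback_avail_idx(uint16_t *last_avail_idx, uint8_t *avail_wrap_counter,
		uint16_t ring_size, uint16_t descs_err)
{
	if (*last_avail_idx >= descs_err) {
		*last_avail_idx -= descs_err;
	} else {
		*last_avail_idx += ring_size - descs_err;
		*avail_wrap_counter ^= 1;
	}
}

int
main(void)
{
	uint16_t last_avail_idx = 2;	/* index after consuming the packets */
	uint8_t avail_wrap_counter = 1;
	const uint16_t ring_size = 256;

	/* Assume two failed packets that consumed 3 and 2 descriptors:
	 * rolling back by packets (2) would leak 3 descriptors, so the
	 * rollback has to use the accumulated descriptor count (5).
	 */
	uint16_t descs_err = 3 + 2;

	rollback_avail_idx(&last_avail_idx, &avail_wrap_counter,
			ring_size, descs_err);

	/* 2 < 5, so the index wraps: 2 + 256 - 5 = 253 and the counter flips. */
	printf("last_avail_idx=%u wrap=%u\n", last_avail_idx, avail_wrap_counter);
	return 0;
}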