From patchwork Mon May 4 17:11:17 2020
X-Patchwork-Submitter: Sivaprasad Tummala
X-Patchwork-Id: 69710
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Sivaprasad Tummala
To: Maxime Coquelin, Zhihong Wang, Xiaolong Ye
Cc: dev@dpdk.org, stable@dpdk.org, fbl@sysclose.org
Date: Mon, 4 May 2020 22:41:17 +0530
Message-Id: <20200504171118.93782-1-Sivaprasad.Tummala@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200428095203.64935-1-Sivaprasad.Tummala@intel.com>
References: <20200428095203.64935-1-Sivaprasad.Tummala@intel.com>
Subject: [dpdk-dev] [PATCH v2] vhost: fix mbuf alloc failure
List-Id: DPDK patches and discussions
vhost buffer allocation succeeds for packets that fit into a linear
buffer. If it fails, the vhost library is expected to drop the current
packet and skip to the next one.

This patch fixes the error scenario by skipping to the next packet.
Note: drop counters are not currently supported.

Fixes: c3ff0ac70acb ("vhost: improve performance by supporting large buffer")
Cc: stable@dpdk.org
Cc: fbl@sysclose.org

Signed-off-by: Sivaprasad Tummala <Sivaprasad.Tummala@intel.com>
---
v2:
 * fixed review comments - Maxime Coquelin
 * fixed mbuf alloc errors for packed virtqueues - Maxime Coquelin
 * fixed mbuf copy errors - Flavio Leitner
---
 lib/librte_vhost/virtio_net.c | 50 ++++++++++++++++++++++++++---------
 1 file changed, 37 insertions(+), 13 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 1fc30c681..764c514fd 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -1674,6 +1674,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 {
 	uint16_t i;
 	uint16_t free_entries;
+	uint16_t dropped = 0;
 
 	if (unlikely(dev->dequeue_zero_copy)) {
 		struct zcopy_mbuf *zmbuf, *next;
@@ -1737,13 +1738,31 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		update_shadow_used_ring_split(vq, head_idx, 0);
 
 		pkts[i] = virtio_dev_pktmbuf_alloc(dev, mbuf_pool, buf_len);
-		if (unlikely(pkts[i] == NULL))
+		if (unlikely(pkts[i] == NULL)) {
+			/*
+			 * mbuf allocation fails for jumbo packets when external
+			 * buffer allocation is not allowed and linear buffer
+			 * is required. Drop this packet.
+			 */
+#ifdef RTE_LIBRTE_VHOST_DEBUG
+			VHOST_LOG_DATA(ERR,
+				"Failed to allocate memory for mbuf. Packet dropped!\n");
+#endif
+			dropped += 1;
+			i++;
 			break;
+		}
 
 		err = copy_desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts[i],
 				mbuf_pool);
 		if (unlikely(err)) {
 			rte_pktmbuf_free(pkts[i]);
+#ifdef RTE_LIBRTE_VHOST_DEBUG
+			VHOST_LOG_DATA(ERR,
+				"Failed to copy desc to mbuf. Packet dropped!\n");
+#endif
+			dropped += 1;
+			i++;
 			break;
 		}
 
@@ -1753,6 +1772,8 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			zmbuf = get_zmbuf(vq);
 			if (!zmbuf) {
 				rte_pktmbuf_free(pkts[i]);
+				dropped += 1;
+				i++;
 				break;
 			}
 
 			zmbuf->mbuf = pkts[i];
@@ -1782,7 +1803,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		}
 	}
 
-	return i;
+	return (i - dropped);
 }
 
 static __rte_always_inline int
@@ -1946,21 +1967,24 @@ virtio_dev_tx_single_packed(struct virtio_net *dev,
 			    struct vhost_virtqueue *vq,
 			    struct rte_mempool *mbuf_pool,
 			    struct rte_mbuf **pkts)
 {
 
-	uint16_t buf_id, desc_count;
+	uint16_t buf_id, desc_count = 0;
+	int ret;
 
-	if (vhost_dequeue_single_packed(dev, vq, mbuf_pool, pkts, &buf_id,
-					&desc_count))
-		return -1;
+	ret = vhost_dequeue_single_packed(dev, vq, mbuf_pool, pkts, &buf_id,
+					&desc_count);
 
-	if (virtio_net_is_inorder(dev))
-		vhost_shadow_dequeue_single_packed_inorder(vq, buf_id,
-							   desc_count);
-	else
-		vhost_shadow_dequeue_single_packed(vq, buf_id, desc_count);
+	if (likely(desc_count > 0)) {
+		if (virtio_net_is_inorder(dev))
+			vhost_shadow_dequeue_single_packed_inorder(vq, buf_id,
+					desc_count);
+		else
+			vhost_shadow_dequeue_single_packed(vq, buf_id,
+					desc_count);
 
-		vq_inc_last_avail_packed(vq, desc_count);
+		vq_inc_last_avail_packed(vq, desc_count);
+	}
 
-	return 0;
+	return ret;
 }
 
 static __rte_always_inline int
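[Editor's illustration, not part of the patch] The accounting change in
virtio_dev_tx_split() can be sketched in isolation: on allocation failure
the loop still advances `i` past the failed descriptor (so the used ring
stays consistent) but counts it as dropped, and the caller receives only
the number of successfully filled mbufs. `dequeue_count()` and
`fake_alloc()` below are hypothetical stand-ins, not DPDK APIs.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical allocator modeling virtio_dev_pktmbuf_alloc(), which can
 * fail for jumbo frames when a linear buffer is required and external
 * buffer allocation is not allowed. Returns a dummy non-NULL token. */
typedef void *(*alloc_fn)(size_t len);

static void *
fake_alloc(size_t len)
{
	return len > 1500 ? NULL : (void *)1;
}

/* Sketch of the fixed accounting: on failure, consume the descriptor
 * (i++), count the drop, stop the burst, and report i - dropped. */
static uint16_t
dequeue_count(alloc_fn alloc, const size_t *buf_len, uint16_t count,
	      void **pkts)
{
	uint16_t i, dropped = 0;

	for (i = 0; i < count; i++) {
		pkts[i] = alloc(buf_len[i]);
		if (pkts[i] == NULL) {
			/* Drop this packet, as the patch does after
			 * update_shadow_used_ring_split(). */
			dropped += 1;
			i++;
			break;
		}
	}
	return (uint16_t)(i - dropped);
}
```

With the pre-patch `return i`, the caller would have counted the failed
slot as a delivered packet and read an uninitialized `pkts[]` entry.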