From patchwork Thu May 28 15:16:43 2015
X-Patchwork-Submitter: Ouyang Changchun
X-Patchwork-Id: 4938
From: Ouyang Changchun
To: dev@dpdk.org
Date: Thu, 28 May 2015 23:16:43 +0800
Message-Id: <1432826207-8428-2-git-send-email-changchun.ouyang@intel.com>
In-Reply-To: <1432826207-8428-1-git-send-email-changchun.ouyang@intel.com>
References: <1430720780-27525-1-git-send-email-changchun.ouyang@intel.com>
 <1432826207-8428-1-git-send-email-changchun.ouyang@intel.com>
Subject: [dpdk-dev] [PATCH v2 1/5] lib_vhost: Fix enqueue/dequeue can't
 handle chained vring descriptors

Vring enqueue needs to consider two cases:
1. The vring descriptors are chained together: the first one carries the
   virtio header and the rest carry the real data. The virtio driver in
   Linux usually uses this scheme.
2. There is only one descriptor: the virtio header and the real data
   share a single descriptor. The virtio-net PMD uses this scheme.

The same applies to vring dequeue: it must not assume that a vring
descriptor is chained or unchained, because virtio behaves differently
across Linux versions. For example, Fedora 20 uses chained vring
descriptors, while Fedora 21 uses a single vring descriptor for tx.
The sketch below illustrates the two layouts.
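For illustration only (not part of the patch): a minimal standalone C
sketch of the two descriptor layouts described above, and of the decision
both the enqueue and dequeue fixes now make instead of assuming the
chained case. The struct vring_desc fields and VRING_DESC_F_NEXT mirror
the virtio ring layout; find_data_desc() and the 12-byte header length
used in main() are hypothetical stand-ins, not DPDK APIs.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define VRING_DESC_F_NEXT 1 /* this descriptor continues via 'next' */

struct vring_desc {
	uint64_t addr;  /* guest-physical address of the buffer */
	uint32_t len;   /* length of the buffer */
	uint16_t flags; /* VRING_DESC_F_* */
	uint16_t next;  /* index of the next descriptor when chained */
};

/*
 * Case 1: header descriptor chained to separate data descriptor(s).
 * Case 2: a single descriptor holding header and data back to back.
 * Returns the descriptor holding the data and sets *offset to where
 * the data starts inside it.
 */
static const struct vring_desc *
find_data_desc(const struct vring_desc *table,
	       const struct vring_desc *head,
	       uint32_t vhost_hlen, uint32_t *offset)
{
	if (head->flags & VRING_DESC_F_NEXT) {
		*offset = 0;          /* data starts in the next desc */
		return &table[head->next];
	}
	*offset = vhost_hlen;         /* data follows the header */
	return head;
}

int main(void)
{
	/* Chained layout: desc 0 carries only a 12-byte virtio header. */
	struct vring_desc table[2] = {
		{ .addr = 0x1000, .len = 12,   .flags = VRING_DESC_F_NEXT, .next = 1 },
		{ .addr = 0x2000, .len = 1500, .flags = 0,                 .next = 0 },
	};
	uint32_t off;
	const struct vring_desc *d = find_data_desc(table, &table[0], 12, &off);

	printf("data at gpa 0x%" PRIx64 ", offset %u, %u bytes available\n",
	       d->addr, off, d->len - off);
	return 0;
}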
Changes in v2:
  - drop the uncompleted packet
  - refine code logic

Signed-off-by: Changchun Ouyang
---
 lib/librte_vhost/vhost_rxtx.c | 65 +++++++++++++++++++++++++++++++++----------
 1 file changed, 50 insertions(+), 15 deletions(-)

diff --git a/lib/librte_vhost/vhost_rxtx.c b/lib/librte_vhost/vhost_rxtx.c
index 4809d32..06ae2df 100644
--- a/lib/librte_vhost/vhost_rxtx.c
+++ b/lib/librte_vhost/vhost_rxtx.c
@@ -59,7 +59,7 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 	struct virtio_net_hdr_mrg_rxbuf virtio_hdr = {{0, 0, 0, 0, 0, 0}, 0};
 	uint64_t buff_addr = 0;
 	uint64_t buff_hdr_addr = 0;
-	uint32_t head[MAX_PKT_BURST], packet_len = 0;
+	uint32_t head[MAX_PKT_BURST];
 	uint32_t head_idx, packet_success = 0;
 	uint16_t avail_idx, res_cur_idx;
 	uint16_t res_base_idx, res_end_idx;
@@ -113,6 +113,10 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 	rte_prefetch0(&vq->desc[head[packet_success]]);
 
 	while (res_cur_idx != res_end_idx) {
+		uint32_t offset = 0;
+		uint32_t data_len, len_to_cpy;
+		uint8_t hdr = 0, uncompleted_pkt = 0;
+
 		/* Get descriptor from available ring */
 		desc = &vq->desc[head[packet_success]];
 
@@ -125,7 +129,6 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 
 		/* Copy virtio_hdr to packet and increment buffer address */
 		buff_hdr_addr = buff_addr;
-		packet_len = rte_pktmbuf_data_len(buff) + vq->vhost_hlen;
 
 		/*
 		 * If the descriptors are chained the header and data are
@@ -136,28 +139,55 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 			desc = &vq->desc[desc->next];
 			/* Buffer address translation. */
 			buff_addr = gpa_to_vva(dev, desc->addr);
-			desc->len = rte_pktmbuf_data_len(buff);
 		} else {
 			buff_addr += vq->vhost_hlen;
-			desc->len = packet_len;
+			hdr = 1;
 		}
 
+		data_len = rte_pktmbuf_data_len(buff);
+		len_to_cpy = RTE_MIN(data_len,
+			hdr ? desc->len - vq->vhost_hlen : desc->len);
+		while (len_to_cpy > 0) {
+			/* Copy mbuf data to buffer */
+			rte_memcpy((void *)(uintptr_t)buff_addr,
+				(const void *)(rte_pktmbuf_mtod(buff, const char *) + offset),
+				len_to_cpy);
+			PRINT_PACKET(dev, (uintptr_t)buff_addr,
+				len_to_cpy, 0);
+
+			offset += len_to_cpy;
+
+			if (offset == data_len)
+				break;
+
+			if (desc->flags & VRING_DESC_F_NEXT) {
+				desc = &vq->desc[desc->next];
+				buff_addr = gpa_to_vva(dev, desc->addr);
+				len_to_cpy = RTE_MIN(data_len - offset, desc->len);
+			} else {
+				/* Room in vring buffer is not enough */
+				uncompleted_pkt = 1;
+				break;
+			}
+		};
+
 		/* Update used ring with desc information */
 		vq->used->ring[res_cur_idx & (vq->size - 1)].id =
 							head[packet_success];
-		vq->used->ring[res_cur_idx & (vq->size - 1)].len = packet_len;
 
-		/* Copy mbuf data to buffer */
-		/* FIXME for sg mbuf and the case that desc couldn't hold the mbuf data */
-		rte_memcpy((void *)(uintptr_t)buff_addr,
-			rte_pktmbuf_mtod(buff, const void *),
-			rte_pktmbuf_data_len(buff));
-		PRINT_PACKET(dev, (uintptr_t)buff_addr,
-			rte_pktmbuf_data_len(buff), 0);
+		/* Drop the packet if it is uncompleted */
+		if (unlikely(uncompleted_pkt == 1))
+			vq->used->ring[res_cur_idx & (vq->size - 1)].len = 0;
+		else
+			vq->used->ring[res_cur_idx & (vq->size - 1)].len =
+							offset + vq->vhost_hlen;
 
 		res_cur_idx++;
 		packet_success++;
 
+		if (unlikely(uncompleted_pkt == 1))
+			continue;
+
 		rte_memcpy((void *)(uintptr_t)buff_hdr_addr,
 			(const void *)&virtio_hdr, vq->vhost_hlen);
@@ -589,7 +619,14 @@ rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id,
 		desc = &vq->desc[head[entry_success]];
 
 		/* Discard first buffer as it is the virtio header */
-		desc = &vq->desc[desc->next];
+		if (desc->flags & VRING_DESC_F_NEXT) {
+			desc = &vq->desc[desc->next];
+			vb_offset = 0;
+			vb_avail = desc->len;
+		} else {
+			vb_offset = vq->vhost_hlen;
+			vb_avail = desc->len - vb_offset;
+		}
 
 		/* Buffer address translation. */
 		vb_addr = gpa_to_vva(dev, desc->addr);
@@ -608,8 +645,6 @@ rte_vhost_dequeue_burst(struct virtio_net *dev, uint16_t queue_id,
 		vq->used->ring[used_idx].id = head[entry_success];
 		vq->used->ring[used_idx].len = 0;
 
-		vb_offset = 0;
-		vb_avail = desc->len;
 		/* Allocate an mbuf and populate the structure. */
 		m = rte_pktmbuf_alloc(mbuf_pool);
 		if (unlikely(m == NULL)) {
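Aside (illustration only, not part of the patch): the core of the enqueue
fix is the bounded copy loop that replaces the single rte_memcpy() flagged
by the old FIXME. Below is a self-contained sketch of that loop under
simplified assumptions; copy_to_desc_chain(), desc_buf() and the toy
guest_mem array are hypothetical stand-ins for the patch's
gpa_to_vva()-based copies, not DPDK APIs.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define VRING_DESC_F_NEXT 1

struct vring_desc {
	uint64_t addr;
	uint32_t len;
	uint16_t flags;
	uint16_t next;
};

static uint8_t guest_mem[4096]; /* toy "guest memory" for the sketch */

/* Stand-in for gpa_to_vva(): toy guest-physical to host mapping. */
static uint8_t *desc_buf(uint64_t gpa)
{
	return &guest_mem[gpa];
}

/*
 * Copy data_len bytes of packet data into the chain starting at desc,
 * where 'avail' bytes are still free in that first descriptor.
 * Returns the number of bytes copied; a short count means the chain
 * ran out of room and the packet must be dropped (the patch reports
 * this by writing len = 0 into the used ring).
 */
static uint32_t
copy_to_desc_chain(const struct vring_desc *table,
		   const struct vring_desc *desc, uint32_t avail,
		   const uint8_t *src, uint32_t data_len)
{
	uint32_t offset = 0;
	uint32_t len_to_cpy = data_len < avail ? data_len : avail;

	while (len_to_cpy > 0) {
		/* Write at desc->len - avail: past the header, if any. */
		memcpy(desc_buf(desc->addr) + (desc->len - avail),
		       src + offset, len_to_cpy);
		offset += len_to_cpy;
		if (offset == data_len)
			break;
		if (!(desc->flags & VRING_DESC_F_NEXT))
			break; /* room in the vring is not enough */
		desc = &table[desc->next];
		avail = desc->len;
		len_to_cpy = data_len - offset < avail ?
			     data_len - offset : avail;
	}
	return offset;
}

int main(void)
{
	/* 100-byte payload vs. a chain of 52 + 64 usable bytes: fits. */
	struct vring_desc table[2] = {
		{ .addr = 0,  .len = 64, .flags = VRING_DESC_F_NEXT, .next = 1 },
		{ .addr = 64, .len = 64, .flags = 0,                 .next = 0 },
	};
	uint8_t pkt[100];
	uint32_t copied;

	memset(pkt, 0xab, sizeof(pkt));
	/* The first descriptor also holds a 12-byte header. */
	copied = copy_to_desc_chain(table, &table[0], 64 - 12,
				    pkt, sizeof(pkt));
	printf("copied %u of %zu bytes%s\n", copied, sizeof(pkt),
	       copied == sizeof(pkt) ? "" : " (drop: chain too short)");
	return 0;
}

Note how the patch itself still advances res_cur_idx and fills in the
used-ring entry (with len = 0) for a dropped packet, so the guest's view
of the ring stays consistent even when a copy cannot complete.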