From patchwork Mon Jan 15 11:32:19 2018
X-Patchwork-Submitter: junjie.j.chen@intel.com
X-Patchwork-Id: 33709
X-Patchwork-Delegate: yuanhan.liu@linux.intel.com
From: Junjie Chen <junjie.j.chen@intel.com>
To: yliu@fridaylinux.org, maxime.coquelin@redhat.com
Cc: dev@dpdk.org, Junjie Chen <junjie.j.chen@intel.com>
Date: Mon, 15 Jan 2018 06:32:19 -0500
Message-Id: <1516015939-11266-1-git-send-email-junjie.j.chen@intel.com>
X-Mailer: git-send-email 2.0.1
Subject: [dpdk-dev] [PATCH] vhost: do deep copy while reallocating vq

When vhost reallocates dev and vq for the NUMA-enabled case, it does not
perform a deep copy, which leads to 1) an invalid zmbuf list and
2) remote memory accesses. This patch re-initializes the zmbuf list and
does the deep copy.

Signed-off-by: Junjie Chen <junjie.j.chen@intel.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Zhiyong Yang
---
 lib/librte_vhost/vhost_user.c | 31 +++++++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
index f4c7ce4..795462c 100644
--- a/lib/librte_vhost/vhost_user.c
+++ b/lib/librte_vhost/vhost_user.c
@@ -227,6 +227,7 @@ vhost_user_set_vring_num(struct virtio_net *dev,
 				"zero copy is force disabled\n");
 			dev->dequeue_zero_copy = 0;
 		}
+		TAILQ_INIT(&vq->zmbuf_list);
 	}
 
 	vq->shadow_used_ring = rte_malloc(NULL,
@@ -261,6 +262,9 @@ numa_realloc(struct virtio_net *dev, int index)
 	int oldnode, newnode;
 	struct virtio_net *old_dev;
 	struct vhost_virtqueue *old_vq, *vq;
+	struct zcopy_mbuf *new_zmbuf;
+	struct vring_used_elem *new_shadow_used_ring;
+	struct batch_copy_elem *new_batch_copy_elems;
 	int ret;
 
 	old_dev = dev;
@@ -285,6 +289,33 @@ numa_realloc(struct virtio_net *dev, int index)
 			return dev;
 
 		memcpy(vq, old_vq, sizeof(*vq));
+		TAILQ_INIT(&vq->zmbuf_list);
+
+		new_zmbuf = rte_malloc_socket(NULL, vq->zmbuf_size *
+				sizeof(struct zcopy_mbuf), 0, newnode);
+		if (new_zmbuf) {
+			rte_free(vq->zmbufs);
+			vq->zmbufs = new_zmbuf;
+		}
+
+		new_shadow_used_ring = rte_malloc_socket(NULL,
+				vq->size * sizeof(struct vring_used_elem),
+				RTE_CACHE_LINE_SIZE,
+				newnode);
+		if (new_shadow_used_ring) {
+			rte_free(vq->shadow_used_ring);
+			vq->shadow_used_ring = new_shadow_used_ring;
+		}
+
+		new_batch_copy_elems = rte_malloc_socket(NULL,
+				vq->size * sizeof(struct batch_copy_elem),
+				RTE_CACHE_LINE_SIZE,
+				newnode);
+		if (new_batch_copy_elems) {
+			rte_free(vq->batch_copy_elems);
+			vq->batch_copy_elems = new_batch_copy_elems;
+		}
+
 		rte_free(old_vq);
 	}
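
The TAILQ_INIT() right after the memcpy() deserves a note: byte-copying a
structure that embeds a TAILQ_HEAD leaves the copied head (and, for a
non-empty list, the elements' back-pointers) referencing the old
allocation, so the list must be re-initialized on the copy before the old
vq is freed. The standalone sketch below illustrates the pitfall; the
struct names are simplified stand-ins for this illustration, not the real
vhost_virtqueue/zcopy_mbuf types:

/*
 * Illustrative sketch only (not part of the patch): why memcpy() of a
 * struct embedding a TAILQ_HEAD must be followed by TAILQ_INIT() on
 * the copy.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/queue.h>

struct entry {
	TAILQ_ENTRY(entry) next;
	int id;
};

struct queue {
	TAILQ_HEAD(, entry) list;
};

int main(void)
{
	struct queue *oldq = malloc(sizeof(*oldq));
	struct queue *newq = malloc(sizeof(*newq));
	struct entry e = { .id = 1 };

	TAILQ_INIT(&oldq->list);
	TAILQ_INSERT_TAIL(&oldq->list, &e, next);

	/* Shallow copy, as numa_realloc() does for the virtqueue. */
	memcpy(newq, oldq, sizeof(*newq));

	/*
	 * After the memcpy, newq->list still refers into oldq's
	 * storage: an empty head's tqh_last points back at the head
	 * itself, and a non-empty list's element back-pointers point
	 * at the old head. Freeing oldq would leave them dangling,
	 * hence the re-initialization the patch adds:
	 */
	TAILQ_INIT(&newq->list);

	free(oldq);
	printf("new list is %s\n",
	       TAILQ_EMPTY(&newq->list) ? "empty (safe)" : "stale");
	free(newq);
	return 0;
}

The shadow_used_ring and batch_copy_elems arrays carry no such internal
pointers, so freshly allocating them with rte_malloc_socket() on newnode
is sufficient; it also places these hot-path arrays on node-local memory,
which is what addresses the remote-memory-access part of the problem.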