From patchwork Thu Sep 29 01:32:43 2022
X-Patchwork-Submitter: "Ding, Xuan" <xuan.ding@intel.com>
X-Patchwork-Id: 117087
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: xuan.ding@intel.com
To: maxime.coquelin@redhat.com, chenbo.xia@intel.com
Cc: dev@dpdk.org, jiayu.hu@intel.com, xingguang.he@intel.com,
 yvonnex.yang@intel.com, cheng1.jiang@intel.com, yuanx.wang@intel.com,
 wenwux.ma@intel.com, Xuan Ding <xuan.ding@intel.com>
Subject: [PATCH v3 2/2] examples/vhost: unconfigure DMA vchannel
Date: Thu, 29 Sep 2022 01:32:43 +0000
Message-Id: <20220929013243.15889-3-xuan.ding@intel.com>
In-Reply-To: <20220929013243.15889-1-xuan.ding@intel.com>
References: <20220814140442.82525-1-xuan.ding@intel.com>
 <20220929013243.15889-1-xuan.ding@intel.com>

From: Xuan Ding <xuan.ding@intel.com>

This patch applies the rte_vhost_async_dma_unconfigure() API to free
DMA vchannels as vhost devices are destroyed, instead of waiting until
the program ends for them to be released.
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
---
 examples/vhost/main.c | 45 ++++++++++++++++++++++++++++++-------------
 examples/vhost/main.h |  1 +
 2 files changed, 33 insertions(+), 13 deletions(-)

diff --git a/examples/vhost/main.c b/examples/vhost/main.c
index 0fa4753e70..32f396d88a 100644
--- a/examples/vhost/main.c
+++ b/examples/vhost/main.c
@@ -1558,6 +1558,28 @@ vhost_clear_queue(struct vhost_dev *vdev, uint16_t queue_id)
 	}
 }
 
+static void
+vhost_clear_async(struct vhost_dev *vdev, int vid, uint16_t queue_id)
+{
+	int16_t dma_id;
+	uint16_t ref_count;
+
+	if (dma_bind[vid].dmas[queue_id].async_enabled) {
+		vhost_clear_queue(vdev, queue_id);
+		rte_vhost_async_channel_unregister(vid, queue_id);
+		dma_bind[vid].dmas[queue_id].async_enabled = false;
+	}
+
+	dma_id = dma_bind[vid2socketid[vdev->vid]].dmas[queue_id].dev_id;
+	dma_bind[vid2socketid[vdev->vid]].dmas[queue_id].dma_ref_count--;
+	ref_count = dma_bind[vid2socketid[vdev->vid]].dmas[queue_id].dma_ref_count;
+
+	if (ref_count == 0 && dma_id != INVALID_DMA_ID) {
+		if (rte_vhost_async_dma_unconfigure(dma_id, 0) < 0)
+			RTE_LOG(ERR, VHOST_PORT, "Failed to unconfigure DMA in vhost.\n");
+	}
+}
+
 /*
  * Remove a device from the specific data core linked list and from the
  * main linked list. Synchronization occurs through the use of the
@@ -1614,17 +1636,8 @@ destroy_device(int vid)
 		"(%d) device has been removed from data core\n",
 		vdev->vid);
 
-	if (dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled) {
-		vhost_clear_queue(vdev, VIRTIO_RXQ);
-		rte_vhost_async_channel_unregister(vid, VIRTIO_RXQ);
-		dma_bind[vid].dmas[VIRTIO_RXQ].async_enabled = false;
-	}
-
-	if (dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled) {
-		vhost_clear_queue(vdev, VIRTIO_TXQ);
-		rte_vhost_async_channel_unregister(vid, VIRTIO_TXQ);
-		dma_bind[vid].dmas[VIRTIO_TXQ].async_enabled = false;
-	}
+	vhost_clear_async(vdev, vid, VIRTIO_RXQ);
+	vhost_clear_async(vdev, vid, VIRTIO_TXQ);
 
 	rte_free(vdev);
 }
@@ -1673,14 +1686,19 @@ vhost_async_channel_register(int vid)
 
 	if (dma_bind[vid2socketid[vid]].dmas[VIRTIO_RXQ].dev_id != INVALID_DMA_ID) {
 		rx_ret = rte_vhost_async_channel_register(vid, VIRTIO_RXQ);
-		if (rx_ret == 0)
+		if (rx_ret == 0) {
 			dma_bind[vid2socketid[vid]].dmas[VIRTIO_RXQ].async_enabled = true;
+			dma_bind[vid2socketid[vid]].dmas[VIRTIO_RXQ].dma_ref_count++;
+		}
+
 	}
 
 	if (dma_bind[vid2socketid[vid]].dmas[VIRTIO_TXQ].dev_id != INVALID_DMA_ID) {
 		tx_ret = rte_vhost_async_channel_register(vid, VIRTIO_TXQ);
-		if (tx_ret == 0)
+		if (tx_ret == 0) {
 			dma_bind[vid2socketid[vid]].dmas[VIRTIO_TXQ].async_enabled = true;
+			dma_bind[vid2socketid[vid]].dmas[VIRTIO_TXQ].dma_ref_count++;
+		}
 	}
 
 	return rx_ret | tx_ret;
@@ -1886,6 +1904,7 @@ reset_dma(void)
 		for (j = 0; j < RTE_MAX_QUEUES_PER_PORT * 2; j++) {
 			dma_bind[i].dmas[j].dev_id = INVALID_DMA_ID;
 			dma_bind[i].dmas[j].async_enabled = false;
+			dma_bind[i].dmas[j].dma_ref_count = 0;
 		}
 	}
 
diff --git a/examples/vhost/main.h b/examples/vhost/main.h
index 2fcb8376c5..2b2cf828d3 100644
--- a/examples/vhost/main.h
+++ b/examples/vhost/main.h
@@ -96,6 +96,7 @@ struct dma_info {
 	struct rte_pci_addr addr;
 	int16_t dev_id;
 	bool async_enabled;
+	uint16_t dma_ref_count;
 };
 
 struct dma_for_vhost {
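
For readers following the logic rather than the diff hunks, below is a
minimal standalone sketch of the ref-counting scheme the patch uses to
decide when it is safe to call rte_vhost_async_dma_unconfigure(). The
struct and helper names (dma_slot_example, dma_put_example) and the
INVALID_DMA_ID value are hypothetical, not part of the patch; only the
rte_vhost_async_* calls come from the vhost library and this series.

/*
 * Minimal sketch (not from the patch): per-queue DMA device slot with
 * a reference count. The last vhost device to drop its reference
 * unconfigures the DMA vchannel. Names suffixed _example are
 * hypothetical.
 */
#include <stdint.h>
#include <rte_log.h>
#include <rte_vhost_async.h>

#define INVALID_DMA_ID -1

struct dma_slot_example {
	int16_t dev_id;     /* DMA device bound to this virtqueue */
	uint16_t ref_count; /* vhost devices still sharing dev_id */
};

/* Called once per queue during vhost device teardown. */
static void
dma_put_example(int vid, uint16_t queue_id, struct dma_slot_example *slot)
{
	/* Stop the async datapath on this queue before touching the DMA device. */
	if (rte_vhost_async_channel_unregister(vid, queue_id) < 0)
		RTE_LOG(ERR, USER1, "async channel unregister failed\n");

	if (slot->ref_count > 0)
		slot->ref_count--;

	/* Unconfigure vchannel 0 only once no vhost device references it. */
	if (slot->ref_count == 0 && slot->dev_id != INVALID_DMA_ID) {
		if (rte_vhost_async_dma_unconfigure(slot->dev_id, 0) < 0)
			RTE_LOG(ERR, USER1, "DMA unconfigure failed\n");
		slot->dev_id = INVALID_DMA_ID;
	}
}

The design point mirrored here is that rte_vhost_async_channel_unregister()
is per device/queue, while rte_vhost_async_dma_unconfigure() acts on the
DMA device itself, which may be shared by several vhost ports; the
reference count ensures the device is only torn down by its last user.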