From patchwork Mon Nov 14 03:04:26 2022
X-Patchwork-Submitter: Hao Chen
X-Patchwork-Id: 119813
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Hao Chen
To: dev@dpdk.org
Cc: zy@yusur.tech, Maxime Coquelin, Chenbo Xia
Subject: [PATCH v4] examples/vdpa: support running in nested virtualization environment
Date: Sun, 13 Nov 2022 22:04:26 -0500
Message-Id: <20221114030426.1363561-1-chenh@yusur.tech>
X-Mailer: git-send-email 2.27.0
In-Reply-To: <20221025061939.3229676-1-chenh@yusur.tech>
References: <20221025061939.3229676-1-chenh@yusur.tech>

When we run dpdk vdpa in a nested virtual machine (vm-L1) and run a ping
test in vm-L2, the ping fails. Troubleshooting shows that the virtio net
device in vm-L2 writes control information to the vring, but the qemu
back-end device in vm-L1 cannot read correct data from the vring. The
problem is related to vIOMMU being enabled.

This patch adds the RTE_VHOST_USER_IOMMU_SUPPORT flag so that vhost
vIOMMU support is used; the VIRTIO_F_IOMMU_PLATFORM feature can then be
negotiated successfully when virtio iommu is used in a nested
virtualization environment.

The configuration is as follows:
The host enables the IOMMU, with 'intel_iommu=on iommu=pt' added to its
kernel parameters.
VM-L1's xml adds a vIOMMU device, and the virtio device adds iommu='on'
ats='on'.
VM-L2's xml enables the vIOMMU, and 'intel_iommu=on iommu=pt' is added
to its kernel parameters.
Then the ping test in vm-L2 works.

Signed-off-by: Hao Chen
Reviewed-by: Maxime Coquelin
---
v4:
* Simplify the patch. Set the RTE_VHOST_USER_IOMMU_SUPPORT flag by default.
v3:
* Modify mail title.
v2:
* fprintf all strings, including the EAL one.
* Remove exit(1).

 examples/vdpa/main.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/examples/vdpa/main.c b/examples/vdpa/main.c
index 4c7e81d7b6..4d3203f3a7 100644
--- a/examples/vdpa/main.c
+++ b/examples/vdpa/main.c
@@ -214,6 +214,8 @@ start_vdpa(struct vdpa_port *vport)
 	if (client_mode)
 		vport->flags |= RTE_VHOST_USER_CLIENT;
 
+	vport->flags |= RTE_VHOST_USER_IOMMU_SUPPORT;
+
 	if (access(socket_path, F_OK) != -1 && !client_mode) {
 		RTE_LOG(ERR, VDPA,
 			"%s exists, please remove it or specify another file and try again.\n",
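
For reference, below is a minimal sketch of how the flag takes effect. The
helper register_vdpa_socket() is illustrative only (the example's real code
sets vport->flags inside start_vdpa()); the rte_vhost_driver_register() call
and the RTE_VHOST_USER_* flags are the existing DPDK vhost API. With
RTE_VHOST_USER_IOMMU_SUPPORT included in the registration flags, the vhost
library exposes IOTLB support, which is what allows the guest to negotiate
VIRTIO_F_IOMMU_PLATFORM.

#include <stdbool.h>
#include <stdint.h>
#include <rte_vhost.h>

/* Illustrative helper: combine the vhost-user flags and register the
 * socket, as the vDPA example does with vport->flags. */
static int
register_vdpa_socket(const char *socket_path, bool client_mode)
{
	uint64_t flags = RTE_VHOST_USER_IOMMU_SUPPORT; /* enable vIOMMU/IOTLB support */

	if (client_mode)
		flags |= RTE_VHOST_USER_CLIENT;

	/* The flags are consumed here; with IOMMU support set, the guest
	 * can negotiate VIRTIO_F_IOMMU_PLATFORM. */
	return rte_vhost_driver_register(socket_path, flags);
}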