From patchwork Tue Nov 22 06:52:42 2022
X-Patchwork-Submitter: "Ling, WeiX"
X-Patchwork-Id: 120038
From: Wei Ling
To: dts@dpdk.org
Cc: Wei Ling
Subject: [dts][PATCH V3 2/3] test_plans/vhost_user_interrupt_cbdma_test_plan: add new testplan
Date: Tue, 22 Nov 2022 14:52:42 +0800
Message-Id: <20221122065242.2894075-1-weix.ling@intel.com>

Add the new vhost_user_interrupt_cbdma test plan to test virtio enqueue
and dequeue with the l3fwd-power sample, covering the split ring and
packed ring paths with CBDMA.

Signed-off-by: Wei Ling
---
 .../vhost_user_interrupt_cbdma_test_plan.rst | 84 +++++++++++++++++++
 1 file changed, 84 insertions(+)
 create mode 100644 test_plans/vhost_user_interrupt_cbdma_test_plan.rst

diff --git a/test_plans/vhost_user_interrupt_cbdma_test_plan.rst b/test_plans/vhost_user_interrupt_cbdma_test_plan.rst
new file mode 100644
index 00000000..3eab4797
--- /dev/null
+++ b/test_plans/vhost_user_interrupt_cbdma_test_plan.rst
@@ -0,0 +1,84 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2022 Intel Corporation
+
+==============================================
+vhost-user interrupt mode with CBDMA test plan
+==============================================
+
+Description
+===========
+
+Vhost-user interrupt mode needs to be tested with the l3fwd-power sample
+and CBDMA channels: send small packets from virtio-user to the vhost side,
+check that the vhost-user cores wake up, and check that the vhost-user
+cores go back to sleep after packets stop being sent from the virtio side.
+
+.. note::
+
+   1. For packed virtqueue virtio-net tests, QEMU version > 4.2.0 and a VM
+      kernel version > 5.1 are required; packed ring multi-queues do not
+      support reconnection in QEMU yet.
+   2. For split virtqueue virtio-net multi-queues server mode tests, it is
+      better to use QEMU version >= 5.2.0, because QEMU v4.2.0~v5.1.0 has a
+      split ring multi-queues reconnection issue.
+   3. A kernel version > 4.8.0 is required; most Linux distributions do not
+      enable vfio-noiommu mode by default, so testing this case requires
+      rebuilding the kernel with vfio-noiommu enabled.
+   4. When DMA devices are bound to the vfio driver, VA mode is the default
+      and recommended mode. In PA mode, page-by-page mapping may exceed the
+      IOMMU's maximum capability, so it is better to use 1G guest hugepages.
+   5. A DPDK local patch for the vhost PMD is needed when testing the vhost
+      asynchronous data path with testpmd.
+
+Prerequisites
+=============
+
+Topology
+--------
+Test flow: Virtio-user --> Vhost-user
+
+General set up
+--------------
+1. Compile DPDK::
+
+      # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <dpdk build dir>
+      # ninja -C <dpdk build dir> -j 110
+
+   For example::
+
+      CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
+      ninja -C x86_64-native-linuxapp-gcc -j 110
+
+Test case
+=========
+
+Test Case 1: Wake up split ring vhost-user cores with l3fwd-power sample when multi queues and cbdma are enabled
+----------------------------------------------------------------------------------------------------------------
+
+1. Launch virtio-user with server mode::
+
+      ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-pci --file-prefix=virtio \
+      --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,server=1,queues=4 -- -i --rxq=4 --txq=4 --rss-ip
+
+2. Bind 4 CBDMA ports to the vfio-pci driver, then launch l3fwd-power with a virtual vhost device::
+
+      ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 9-12 -n 4 --log-level=9 \
+      --vdev 'eth_vhost0,iface=/tmp/sock0,queues=4,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;rxq0@0000:80:04.0;rxq1@0000:80:04.1;rxq2@0000:80:04.2;rxq3@0000:80:04.3]' -- -p 0x1 --parse-ptype 1 \
+      --config "(0,0,9),(0,1,10),(0,2,11),(0,3,12)"
+
+3.
Send packets from testpmd and check that the vhost-user multi-cores stay in the woken-up state::
+
+      testpmd>set fwd txonly
+      testpmd>start
+
+4. Stop and restart testpmd, then check that the vhost-user cores go to sleep and wake up again.
+
+Test Case 2: Wake up packed ring vhost-user cores with l3fwd-power sample when multi queues and cbdma are enabled
+-----------------------------------------------------------------------------------------------------------------
+
+1. Launch virtio-user with server mode::
+
+      ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-pci --file-prefix=virtio \
+      --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,server=1,queues=4,packed_vq=1 -- -i --rxq=4 --txq=4 --rss-ip
+
+2. Bind 4 CBDMA ports to the vfio-pci driver, then launch l3fwd-power with a virtual vhost device::
+
+      ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 9-12 -n 4 --log-level=9 \
+      --vdev 'eth_vhost0,iface=/tmp/sock0,queues=4,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;rxq0@0000:80:04.0;rxq1@0000:80:04.1;rxq2@0000:80:04.2;rxq3@0000:80:04.3]' -- -p 0x1 --parse-ptype 1 \
+      --config "(0,0,9),(0,1,10),(0,2,11),(0,3,12)"
+
+3. Send packets from testpmd and check that the vhost-user multi-cores stay in the woken-up state::
+
+      testpmd>set fwd txonly
+      testpmd>start
+
+4. Stop and restart testpmd, then check that the vhost-user cores go to sleep and wake up again.
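
The "cores sleep / cores wake up" checks in steps 3-4 of both test cases can be scripted by scanning the l3fwd-power console output. The sketch below is only an illustration: the log path `/tmp/l3fwd_power.log` and the per-lcore message text "sleeps until interrupt triggers" are assumptions about the l3fwd-power sample's output, not something this test plan specifies, and the sample log is fabricated here so the snippet is self-contained.

```shell
#!/bin/sh
# Hypothetical check: count how many lcores report going to sleep in the
# l3fwd-power output captured to a log file. Both the path and the message
# text are assumptions (see the note above), not taken from the test plan.
LOG=/tmp/l3fwd_power.log

# Fabricated sample log, created only so this sketch runs on its own.
# In a real run, redirect the dpdk-l3fwd-power output to $LOG instead.
cat > "$LOG" <<'EOF'
L3FWD_POWER: lcore 9 sleeps until interrupt triggers
L3FWD_POWER: lcore 10 sleeps until interrupt triggers
L3FWD_POWER: lcore 11 sleeps until interrupt triggers
L3FWD_POWER: lcore 12 sleeps until interrupt triggers
EOF

# Count sleep messages: after testpmd stops, each of the four vhost-user
# cores (9-12) should eventually report sleeping.
ASLEEP=$(grep -c "sleeps until interrupt triggers" "$LOG")
echo "cores asleep: $ASLEEP"
```

While testpmd is sending in txonly mode, the same count should stop growing (cores stay awake); after `testpmd>stop`, it should rise again as the cores return to sleep.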