From patchwork Wed Nov 9 02:54:05 2022
X-Patchwork-Submitter: "Ling, WeiX" <weix.ling@intel.com>
X-Patchwork-Id: 119578
From: Wei Ling <weix.ling@intel.com>
To: dts@dpdk.org
Cc: Wei Ling <weix.ling@intel.com>
Subject: [dts][PATCH V1 2/3] test_plans/vhost_user_interrupt_cbdma_test_plan: add testplan
Date: Wed, 9 Nov 2022 10:54:05 +0800
Message-Id: <20221109025405.1204987-1-weix.ling@intel.com>
List-Id: test suite reviews and discussions

Add vhost_user_interrupt_cbdma_test_plan to test_plans, covering virtio
enqueue and dequeue with the split ring and packed ring paths when CBDMA
channels are enabled.

Signed-off-by: Wei Ling <weix.ling@intel.com>
---
 .../vhost_user_interrupt_cbdma_test_plan.rst | 76 +++++++++++++++++++
 1 file changed, 76 insertions(+)
 create mode 100644 test_plans/vhost_user_interrupt_cbdma_test_plan.rst

diff --git a/test_plans/vhost_user_interrupt_cbdma_test_plan.rst b/test_plans/vhost_user_interrupt_cbdma_test_plan.rst
new file mode 100644
index 00000000..d21aac1f
--- /dev/null
+++ b/test_plans/vhost_user_interrupt_cbdma_test_plan.rst
@@ -0,0 +1,76 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2022 Intel Corporation
+
+==============================================
+vhost-user interrupt mode with CBDMA test plan
+==============================================
+
+Description
+===========
+
+Vhost-user interrupt mode is tested with the l3fwd-power sample and CBDMA
+channels: small packets are sent from virtio-user to the vhost side, and we
+check that the vhost-user cores wake up, then go back to sleep after the
+virtio side stops sending packets.
+For the packed virtqueue test, QEMU version > 4.2.0 is required.
+
+Prerequisites
+=============
+
+Topology
+--------
+Test flow: Virtio-user --> Vhost-user
+
+General set up
+--------------
+1. Compile DPDK::
+
+    # CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static <dpdk build dir>
+    # ninja -C <dpdk build dir> -j 110
+    For example:
+    CC=gcc meson --werror -Denable_kmods=True -Dlibdir=lib -Dexamples=all --default-library=static x86_64-native-linuxapp-gcc
+    ninja -C x86_64-native-linuxapp-gcc -j 110
+
+Test case
+=========
+
+Test Case1: Wake up split ring vhost-user cores with l3fwd-power sample when multi queues and CBDMA are enabled
+---------------------------------------------------------------------------------------------------------------
+
+1. Launch virtio-user in server mode::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,server=1,queues=4 -- -i --rxq=4 --txq=4 --rss-ip
+
+2. Bind 4 CBDMA ports to the vfio-pci driver, then launch l3fwd-power with a virtual vhost device::
+
+    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 9-12 -n 4 --log-level=9 \
+    --vdev 'eth_vhost0,iface=/tmp/sock0,queues=4,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;rxq0@0000:80:04.0;rxq1@0000:80:04.1;rxq2@0000:80:04.2;rxq3@0000:80:04.3]' -- -p 0x1 --parse-ptype 1 \
+    --config "(0,0,9),(0,1,10),(0,2,11),(0,3,12)"
+
+3. Send packets from testpmd, and check that the vhost-user cores stay awake::
+
+    testpmd>set fwd txonly
+    testpmd>start
+
+4. Stop and restart testpmd, and check that the vhost-user cores go back to sleep and then wake up again.
+
+Test Case2: Wake up packed ring vhost-user cores with l3fwd-power sample when multi queues and CBDMA are enabled
+----------------------------------------------------------------------------------------------------------------
+1. Launch virtio-user in server mode::
+
+    ./x86_64-native-linuxapp-gcc/app/dpdk-testpmd -l 1-5 -n 4 --no-pci --file-prefix=virtio \
+    --vdev=net_virtio_user0,mac=00:11:22:33:44:10,path=/tmp/sock0,server=1,queues=4,packed_vq=1 -- -i --rxq=4 --txq=4 --rss-ip
+
+2. Bind 4 CBDMA ports to the vfio-pci driver, then launch l3fwd-power with a virtual vhost device::
+
+    ./x86_64-native-linuxapp-gcc/examples/dpdk-l3fwd-power -l 9-12 -n 4 --log-level=9 \
+    --vdev 'eth_vhost0,iface=/tmp/sock0,queues=4,client=1,dmas=[txq0@0000:80:04.0;txq1@0000:80:04.1;txq2@0000:80:04.2;txq3@0000:80:04.3;rxq0@0000:80:04.0;rxq1@0000:80:04.1;rxq2@0000:80:04.2;rxq3@0000:80:04.3]' -- -p 0x1 --parse-ptype 1 \
+    --config "(0,0,9),(0,1,10),(0,2,11),(0,3,12)"
+
+3. Send packets from testpmd, and check that the vhost-user cores stay awake::
+
+    testpmd>set fwd txonly
+    testpmd>start
+
+4. Stop and restart testpmd, and check that the vhost-user cores go back to sleep and then wake up again.
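
Note on step 2 of both cases: the plan assumes the four CBDMA channels are
already bound to vfio-pci, but the bind command itself is not shown. Below is
a minimal dry-run sketch, assuming a DPDK source tree with
usertools/dpdk-devbind.py and the 0000:80:04.x addresses used in the
dmas=[...] lists above (both are assumptions; adjust them to the machine
under test). It only prints the commands for review; pipe to sh (as root)
to actually bind:

```shell
# Dry run: print the dpdk-devbind command for each of the four CBDMA
# channels referenced by the dmas=[...] lists in this test plan.
# The 0000:80:04.x addresses are examples from the plan, not universal.
for chan in 0 1 2 3; do
    echo "./usertools/dpdk-devbind.py --bind=vfio-pci 0000:80:04.${chan}"
done
```

Afterwards, `./usertools/dpdk-devbind.py --status` can be used to confirm the
devices moved under vfio-pci before launching l3fwd-power.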