get:
Show a patch.

patch:
Partially update a patch.

put:
Update a patch.
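
A minimal sketch of driving this endpoint with Python's requests library, assuming only that the library is installed and that a maintainer API token is available for write operations; the token value and the updated field values below are illustrative, not taken from this page:

    import requests

    API = "https://patches.dpdk.org/api/patches/76073/"
    TOKEN = "0123456789abcdef"  # hypothetical maintainer token; writes require one

    # GET: show the patch (read access needs no authentication)
    patch = requests.get(API, params={"format": "json"}).json()
    print(patch["name"], patch["state"])

    # PATCH: partially update the patch, e.g. change its state and archive it
    resp = requests.patch(
        API,
        headers={"Authorization": "Token " + TOKEN},
        json={"state": "accepted", "archived": True},
    )
    resp.raise_for_status()

The example GET request and its JSON response follow.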

GET /api/patches/76073/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 76073,
    "url": "https://patches.dpdk.org/api/patches/76073/?format=api",
    "web_url": "https://patches.dpdk.org/project/dts/patch/20200827172459.43152-1-yinan.wang@intel.com/",
    "project": {
        "id": 3,
        "url": "https://patches.dpdk.org/api/projects/3/?format=api",
        "name": "DTS",
        "link_name": "dts",
        "list_id": "dts.dpdk.org",
        "list_email": "dts@dpdk.org",
        "web_url": "",
        "scm_url": "git://dpdk.org/tools/dts",
        "webscm_url": "http://git.dpdk.org/tools/dts/",
        "list_archive_url": "https://inbox.dpdk.org/dts",
        "list_archive_url_format": "https://inbox.dpdk.org/dts/{}",
        "commit_url_format": ""
    },
    "msgid": "<20200827172459.43152-1-yinan.wang@intel.com>",
    "list_archive_url": "https://inbox.dpdk.org/dts/20200827172459.43152-1-yinan.wang@intel.com",
    "date": "2020-08-27T17:24:59",
    "name": "[v1] test_plans/vhost_virtio_pmd_interrupt_test_plan.rst",
    "commit_ref": null,
    "pull_url": null,
    "state": "accepted",
    "archived": false,
    "hash": "2e8f35f58d27f6cbc28efcff50cb893ea20283f0",
    "submitter": {
        "id": 1081,
        "url": "https://patches.dpdk.org/api/people/1081/?format=api",
        "name": "Wang, Yinan",
        "email": "yinan.wang@intel.com"
    },
    "delegate": null,
    "mbox": "https://patches.dpdk.org/project/dts/patch/20200827172459.43152-1-yinan.wang@intel.com/mbox/",
    "series": [
        {
            "id": 11821,
            "url": "https://patches.dpdk.org/api/series/11821/?format=api",
            "web_url": "https://patches.dpdk.org/project/dts/list/?series=11821",
            "date": "2020-08-27T17:24:59",
            "name": "[v1] test_plans/vhost_virtio_pmd_interrupt_test_plan.rst",
            "version": 1,
            "mbox": "https://patches.dpdk.org/series/11821/mbox/"
        }
    ],
    "comments": "https://patches.dpdk.org/api/patches/76073/comments/",
    "check": "pending",
    "checks": "https://patches.dpdk.org/api/patches/76073/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dts-bounces@dpdk.org>",
        "X-Original-To": "patchwork@inbox.dpdk.org",
        "Delivered-To": "patchwork@inbox.dpdk.org",
        "Received": [
            "from dpdk.org (dpdk.org [92.243.14.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id E2F96A04B1;\n\tThu, 27 Aug 2020 10:35:44 +0200 (CEST)",
            "from [92.243.14.124] (localhost [127.0.0.1])\n\tby dpdk.org (Postfix) with ESMTP id A2D871C0B9;\n\tThu, 27 Aug 2020 10:35:44 +0200 (CEST)",
            "from mga11.intel.com (mga11.intel.com [192.55.52.93])\n by dpdk.org (Postfix) with ESMTP id BECCB1C0B8\n for <dts@dpdk.org>; Thu, 27 Aug 2020 10:35:42 +0200 (CEST)",
            "from orsmga008.jf.intel.com ([10.7.209.65])\n by fmsmga102.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;\n 27 Aug 2020 01:35:41 -0700",
            "from dpdk-yinan-ntb1.sh.intel.com ([10.67.119.39])\n by orsmga008.jf.intel.com with ESMTP; 27 Aug 2020 01:35:40 -0700"
        ],
        "IronPort-SDR": [
            "\n Uxtv7C4AKLo3s46bGykm4W7AVkVw15Mtmmxcsyh0m0WwrefPSDKOicCprh4UvsCnK+WiRLgyfS\n YZAqXP50NPsQ==",
            "\n GOfJPSvBqvXZNuxh7KJZYEQGpUyR7GmWwL+F3/3ld9iXAHppyPxmOuHvSvGFw5P9rijTONpU25\n e+EkXSdsTC2g=="
        ],
        "X-IronPort-AV": [
            "E=McAfee;i=\"6000,8403,9725\"; a=\"154006413\"",
            "E=Sophos;i=\"5.76,359,1592895600\"; d=\"scan'208\";a=\"154006413\"",
            "E=Sophos;i=\"5.76,359,1592895600\"; d=\"scan'208\";a=\"329515947\""
        ],
        "X-Amp-Result": "SKIPPED(no attachment in message)",
        "X-Amp-File-Uploaded": "False",
        "X-ExtLoop1": "1",
        "From": "Yinan Wang <yinan.wang@intel.com>",
        "To": "dts@dpdk.org",
        "Cc": "Yinan Wang <yinan.wang@intel.com>",
        "Date": "Thu, 27 Aug 2020 13:24:59 -0400",
        "Message-Id": "<20200827172459.43152-1-yinan.wang@intel.com>",
        "X-Mailer": "git-send-email 2.17.1",
        "MIME-Version": "1.0",
        "Content-Type": "text/plain; charset=UTF-8",
        "Content-Transfer-Encoding": "8bit",
        "Subject": "[dts] [PATCH v1] test_plans/vhost_virtio_pmd_interrupt_test_plan.rst",
        "X-BeenThere": "dts@dpdk.org",
        "X-Mailman-Version": "2.1.15",
        "Precedence": "list",
        "List-Id": "test suite reviews and discussions <dts.dpdk.org>",
        "List-Unsubscribe": "<https://mails.dpdk.org/options/dts>,\n <mailto:dts-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://mails.dpdk.org/archives/dts/>",
        "List-Post": "<mailto:dts@dpdk.org>",
        "List-Help": "<mailto:dts-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://mails.dpdk.org/listinfo/dts>,\n <mailto:dts-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dts-bounces@dpdk.org",
        "Sender": "\"dts\" <dts-bounces@dpdk.org>"
    },
    "content": "Add cbdma cases in vhost_virtio_pmd_interrupt test plan\n\nSigned-off-by: Yinan Wang <yinan.wang@intel.com>\n---\n .../vhost_virtio_pmd_interrupt_test_plan.rst  | 103 +++++++++++++++---\n 1 file changed, 88 insertions(+), 15 deletions(-)",
    "diff": "diff --git a/test_plans/vhost_virtio_pmd_interrupt_test_plan.rst b/test_plans/vhost_virtio_pmd_interrupt_test_plan.rst\nindex 389d8d8..4f8b6c4 100644\n--- a/test_plans/vhost_virtio_pmd_interrupt_test_plan.rst\n+++ b/test_plans/vhost_virtio_pmd_interrupt_test_plan.rst\n@@ -39,7 +39,6 @@ to virtio-pmd side,check virtio-pmd cores can be wakeup status,and virtio-pm\n sleep status after stop sending packets from traffic generator.This test plan cover virtio 0.95,\n virtio 1.0 and virtio 1.1 test.For packed virtqueue test, need using qemu version > 4.2.0.\n \n-\n Prerequisites\n =============\n \n@@ -56,23 +55,24 @@ Test Case 1: Basic virtio interrupt test with 4 queues\n 1. Bind one NIC port to igb_uio, then launch testpmd by below command::\n \n     rm -rf vhost-net*\n-    ./testpmd -c 0x7c -n 4 --socket-mem 1024,1024 --vdev 'net_vhost0,iface=vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip\n+    ./testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip\n \n 2. Launch VM1, set queues=4, vectors>=2xqueues+2, mq=on::\n \n     taskset -c 34-35 \\\n     qemu-system-x86_64 -name us-vhost-vm2 \\\n      -cpu host -enable-kvm -m 4096 -object memory-backend-file,id=mem,size=4096M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \\\n-     -smp cores=4,sockets=1 -drive file=/home/osimg/noiommu-ubt16.img \\\n-     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6002-:22 \\\n+     -smp cores=4,sockets=1 -drive file=/home/osimg/ubuntu1910.img \\\n+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \\\n      -chardev socket,id=char1,path=./vhost-net -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce,queues=4 \\\n      -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=on,csum=on,mq=on,vectors=15  \\\n      -vnc :10 -daemonize\n \n 3. Bind virtio port to vfio-pci::\n \n-\tmodprobe vfio enable_unsafe_noiommu_mode=1\n-\tmodprobe vfio-pci\n+\t  modprobe vfio enable_unsafe_noiommu_mode=1\n+\t  modprobe vfio-pci\n     ./usertools/dpdk-devbind.py --bind=vfio-pci xx:xx.x\n \n 4. In VM, launch l3fwd-power sample::\n@@ -91,15 +91,16 @@ Test Case 2: Basic virtio interrupt test with 16 queues\n 1. Bind one NIC port to igb_uio, then launch testpmd by below command::\n \n     rm -rf vhost-net*\n-    ./testpmd -c 0x1ffff -n 4 --socket-mem 1024 1024 --vdev 'eth_vhost0,iface=vhost-net,queues=16' -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip\n+    ./testpmd -c 0x1ffff -n 4 --vdev 'eth_vhost0,iface=vhost-net,queues=16' -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip\n \n 2. 
Launch VM1, set queues=16, vectors>=2xqueues+2, mq=on::\n \n     taskset -c 34-35 \\\n     qemu-system-x86_64 -name us-vhost-vm2 \\\n      -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \\\n-     -smp cores=16,sockets=1 -drive file=/home/osimg/noiommu-ubt16.img \\\n-     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \\\n+     -smp cores=16,sockets=1 -drive file=/home/osimg/ubuntu1910.img \\\n+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \\\n      -chardev socket,id=char1,path=./vhost-net -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce,queues=16 \\\n      -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=on,csum=on,mq=on,vectors=40  \\\n      -vnc :11 -daemonize\n@@ -126,15 +127,16 @@ Test Case 3: Basic virtio-1.0 interrupt test with 4 queues\n 1. Bind one NIC port to igb_uio, then launch testpmd by below command::\n \n     rm -rf vhost-net*\n-    ./testpmd -c 0x7c -n 4 --socket-mem 1024,1024 --vdev 'net_vhost0,iface=vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip\n+    ./testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=4' -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip\n \n 2. Launch VM1, set queues=4, vectors>=2xqueues+2, mq=on::\n \n     taskset -c 34-35 \\\n     qemu-system-x86_64 -name us-vhost-vm2 \\\n      -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \\\n-     -smp cores=4,sockets=1 -drive file=/home/osimg/noiommu-ubt16.img \\\n-     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \\\n+     -smp cores=4,sockets=1 -drive file=/home/osimg/ubuntu1910.img \\\n+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \\\n      -chardev socket,id=char1,path=./vhost-net -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce,queues=4 \\\n      -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,disable-modern=false,mrg_rxbuf=on,csum=on,mq=on,vectors=15  \\\n      -vnc :11 -daemonize\n@@ -161,15 +163,16 @@ Test Case 4: Packed ring virtio interrupt test with 16 queues\n 1. Bind one NIC port to igb_uio, then launch testpmd by below command::\n \n     rm -rf vhost-net*\n-    ./testpmd -c 0x1ffff -n 4 --socket-mem 1024 1024 --vdev 'eth_vhost0,iface=vhost-net,queues=16' -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip\n+    ./testpmd -c 0x1ffff -n 4 --vdev 'eth_vhost0,iface=vhost-net,queues=16' -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip\n \n 2. 
Launch VM1, set queues=16, vectors>=2xqueues+2, mq=on::\n \n     taskset -c 34-35 \\\n     qemu-system-x86_64 -name us-vhost-vm2 \\\n      -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \\\n-     -smp cores=16,sockets=1 -drive file=/home/osimg/noiommu-ubt16.img \\\n-     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -net nic,vlan=2,macaddr=00:00:00:08:e8:aa,addr=1f -net user,vlan=2,hostfwd=tcp:127.0.0.1:6005-:22 \\\n+     -smp cores=16,sockets=1 -drive file=/home/osimg/ubuntu1910.img \\\n+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \\\n      -chardev socket,id=char1,path=./vhost-net -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce,queues=16 \\\n      -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=on,csum=on,mq=on,vectors=40,packed=on  \\\n      -vnc :11 -daemonize\n@@ -189,3 +192,73 @@ Test Case 4: Packed ring virtio interrupt test with 16 queues\n 6. Change dest IP address to fixed ip, packets will distribute to 1 queue, check l3fwd-power log that only one related core is waked up.\n \n 7. Stop the date transmitter, check all related core will be back to sleep status.\n+\n+Test Case 5: Basic virtio interrupt test with 16 queues and cbdma enabled\n+=========================================================================\n+\n+1. Bind four cbdma ports and one NIC port to igb_uio, then launch testpmd by below command::\n+\n+    ./testpmd -c 0x1ffff -n 4 --vdev 'eth_vhost0,iface=vhost-net,queues=16,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3;txq4@80:04.4;txq5@80:04.5;txq6@80:04.6;txq7@80:04.7;txq8@00:04.0;txq9@00:04.1;txq10@00:04.2;txq11@00:04.3;txq12@00:04.4;txq13@00:04.5;txq14@00:04.6;txq15@00:04.7],dmathr=1024' -- -i --nb-cores=16 --rxq=16 --txq=16 --rss-ip\n+\n+2. Launch VM1, set queues=16, vectors>=2xqueues+2, mq=on::\n+\n+    taskset -c 34-35 \\\n+    qemu-system-x86_64 -name us-vhost-vm2 \\\n+     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \\\n+     -smp cores=16,sockets=1 -drive file=/home/osimg/ubuntu1910.img \\\n+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \\\n+     -chardev socket,id=char1,path=./vhost-net -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce,queues=16 \\\n+     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,mrg_rxbuf=on,csum=on,mq=on,vectors=40  \\\n+     -vnc :11 -daemonize\n+\n+3. Bind virtio port to vfio-pci::\n+\n+    modprobe vfio enable_unsafe_noiommu_mode=1\n+    modprobe vfio-pci\n+    ./usertools/dpdk-devbind.py --bind=vfio-pci xx:xx.x\n+\n+4. In VM, launch l3fwd-power sample::\n+\n+    ./l3fwd-power -c 0x0ffff -n 4 --log-level='user1,7' -- -p 1 -P  --config '(0,0,0),(0,1,1),(0,2,2),(0,3,3)(0,4,4),(0,5,5),(0,6,6),(0,7,7)(0,8,8),(0,9,9),(0,10,10),(0,11,11)(0,12,12),(0,13,13),(0,14,14),(0,15,15)' --no-numa  --parse-ptype\n+\n+5. Send random dest ip address packets to host nic with packet generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up.\n+\n+6. Change dest IP address to fixed ip, packets will distribute to 1 queue, check l3fwd-power log that only one related core is waked up.\n+\n+7. 
Stop the date transmitter, check all related core will be back to sleep status.\n+\n+Test Case 6: Basic virtio-1.0 interrupt test with 4 queues and cbdma enabled\n+============================================================================\n+\n+1. Bind four cbdma port and one NIC port to igb_uio, then launch testpmd by below command::\n+\n+    ./testpmd -c 0x7c -n 4 --vdev 'net_vhost0,iface=vhost-net,queues=4,dmas=[txq0@80:04.0;txq1@80:04.1;txq2@80:04.2;txq3@80:04.3]' -- -i --nb-cores=4 --rxq=4 --txq=4 --rss-ip\n+\n+2. Launch VM1, set queues=4, vectors>=2xqueues+2, mq=on::\n+\n+    taskset -c 34-35 \\\n+    qemu-system-x86_64 -name us-vhost-vm2 \\\n+     -cpu host -enable-kvm -m 2048 -object memory-backend-file,id=mem,size=2048M,mem-path=/mnt/huge,share=on -numa node,memdev=mem -mem-prealloc \\\n+     -smp cores=4,sockets=1 -drive file=/home/osimg/ubuntu1910.img \\\n+     -monitor unix:/tmp/vm2_monitor.sock,server,nowait -device e1000,netdev=nttsip1 \\\n+     -netdev user,id=nttsip1,hostfwd=tcp:127.0.0.1:6003-:22 \\\n+     -chardev socket,id=char1,path=./vhost-net -netdev type=vhost-user,id=mynet2,chardev=char1,vhostforce,queues=4 \\\n+     -device virtio-net-pci,mac=52:54:00:00:00:02,netdev=mynet2,disable-modern=false,mrg_rxbuf=on,csum=on,mq=on,vectors=15  \\\n+     -vnc :11 -daemonize\n+\n+3. Bind virtio port to vfio-pci::\n+\n+    modprobe vfio enable_unsafe_noiommu_mode=1\n+    modprobe vfio-pci\n+    ./usertools/dpdk-devbind.py --bind=vfio-pci xx:xx.x\n+\n+4. In VM, launch l3fwd-power sample::\n+\n+    ./l3fwd-power -c 0xf -n 4 --log-level='user1,7' -- -p 1 -P --config=\"(0,0,0),(0,1,1),(0,2,2),(0,3,3)\" --no-numa --parse-ptype\n+\n+5. Send random dest ip address packets to host nic with packet generator, packets will distribute to all queues, check l3fwd-power log that all related cores are waked up.\n+\n+6. Change dest IP address to fixed ip, packets will distribute to 1 queue, check l3fwd-power log that only one related core is waked up.\n+\n+7. Stop the date transmitter, check all related core will be back to sleep status.\n\\ No newline at end of file\n",
    "prefixes": [
        "v1"
    ]
}
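
The response above links to several follow-on resources ("mbox", "comments", "checks", "series"). A short sketch, again assuming the requests library, that pulls the mbox for this patch so it can be applied locally, and then lists the recorded checks; the local filename is an arbitrary choice:

    import requests

    patch = requests.get("https://patches.dpdk.org/api/patches/76073/",
                         params={"format": "json"}).json()

    # Fetch the raw mbox referenced by the "mbox" field and save it locally,
    # ready to apply with `git am 76073.mbox` in a DTS checkout.
    mbox = requests.get(patch["mbox"])
    mbox.raise_for_status()
    with open("76073.mbox", "wb") as f:
        f.write(mbox.content)

    # "comments" and "checks" are themselves API URLs and can be fetched the
    # same way, e.g. to see why "check" is still reported as "pending".
    checks = requests.get(patch["checks"], params={"format": "json"}).json()
    print(len(checks), "check(s) recorded for this patch")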