get:
Show a patch.

patch:
Partially update a patch (only the fields supplied in the request are changed).

put:
Update a patch (the resource is replaced as a whole).

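An update to a writable field such as `state` might be issued as in the following sketch. This is an illustration only: it assumes token authentication and that `state` is writable for the authenticated user (maintainer permissions are typically required), and the token shown is hypothetical. The request is constructed but deliberately not sent.

```python
import json
import urllib.request

# Endpoint for the patch shown below.
url = "http://patches.dpdk.org/api/patches/5433/"

# Partial update: only the "state" field is supplied.
body = json.dumps({"state": "accepted"}).encode()

req = urllib.request.Request(
    url,
    data=body,
    method="PATCH",
    headers={
        "Content-Type": "application/json",
        # Hypothetical token; a real request needs a valid API token.
        "Authorization": "Token 0123456789abcdef",
    },
)

# urllib.request.urlopen(req) would perform the update; here we only
# inspect the request that would be sent.
print(req.get_method())  # PATCH
```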
GET /api/patches/5433/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 5433,
    "url": "http://patches.dpdk.org/api/patches/5433/?format=api",
    "web_url": "http://patches.dpdk.org/project/dpdk/patch/1434355006-30583-10-git-send-email-changchun.ouyang@intel.com/",
    "project": {
        "id": 1,
        "url": "http://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<1434355006-30583-10-git-send-email-changchun.ouyang@intel.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/1434355006-30583-10-git-send-email-changchun.ouyang@intel.com",
    "date": "2015-06-15T07:56:46",
    "name": "[dpdk-dev,v3,9/9] doc: Update doc for vhost multiple queues",
    "commit_ref": null,
    "pull_url": null,
    "state": "changes-requested",
    "archived": true,
    "hash": "054ab563df73bdd09023027e231ad168162ec828",
    "submitter": {
        "id": 31,
        "url": "http://patches.dpdk.org/api/people/31/?format=api",
        "name": "Ouyang Changchun",
        "email": "changchun.ouyang@intel.com"
    },
    "delegate": null,
    "mbox": "http://patches.dpdk.org/project/dpdk/patch/1434355006-30583-10-git-send-email-changchun.ouyang@intel.com/mbox/",
    "series": [],
    "comments": "http://patches.dpdk.org/api/patches/5433/comments/",
    "check": "pending",
    "checks": "http://patches.dpdk.org/api/patches/5433/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@dpdk.org",
        "Delivered-To": "patchwork@dpdk.org",
        "Received": [
            "from [92.243.14.124] (localhost [IPv6:::1])\n\tby dpdk.org (Postfix) with ESMTP id DED05C34C;\n\tMon, 15 Jun 2015 09:57:20 +0200 (CEST)",
            "from mga01.intel.com (mga01.intel.com [192.55.52.88])\n\tby dpdk.org (Postfix) with ESMTP id E393EC314\n\tfor <dev@dpdk.org>; Mon, 15 Jun 2015 09:57:16 +0200 (CEST)",
            "from orsmga002.jf.intel.com ([10.7.209.21])\n\tby fmsmga101.fm.intel.com with ESMTP; 15 Jun 2015 00:57:18 -0700",
            "from shvmail01.sh.intel.com ([10.239.29.42])\n\tby orsmga002.jf.intel.com with ESMTP; 15 Jun 2015 00:57:15 -0700",
            "from shecgisg004.sh.intel.com (shecgisg004.sh.intel.com\n\t[10.239.29.89])\n\tby shvmail01.sh.intel.com with ESMTP id t5F7vDp3005835;\n\tMon, 15 Jun 2015 15:57:13 +0800",
            "from shecgisg004.sh.intel.com (localhost [127.0.0.1])\n\tby shecgisg004.sh.intel.com (8.13.6/8.13.6/SuSE Linux 0.8) with ESMTP\n\tid t5F7v9CC030681; Mon, 15 Jun 2015 15:57:11 +0800",
            "(from couyang@localhost)\n\tby shecgisg004.sh.intel.com (8.13.6/8.13.6/Submit) id t5F7v9nM030677; \n\tMon, 15 Jun 2015 15:57:09 +0800"
        ],
        "X-ExtLoop1": "1",
        "X-IronPort-AV": "E=Sophos;i=\"5.13,617,1427785200\"; d=\"scan'208\";a=\"746791548\"",
        "From": "Ouyang Changchun <changchun.ouyang@intel.com>",
        "To": "dev@dpdk.org",
        "Date": "Mon, 15 Jun 2015 15:56:46 +0800",
        "Message-Id": "<1434355006-30583-10-git-send-email-changchun.ouyang@intel.com>",
        "X-Mailer": "git-send-email 1.7.12.2",
        "In-Reply-To": "<1434355006-30583-1-git-send-email-changchun.ouyang@intel.com>",
        "References": "<1433915549-18571-1-git-send-email-changchun.ouyang@intel.com>\n\t<1434355006-30583-1-git-send-email-changchun.ouyang@intel.com>",
        "Subject": "[dpdk-dev] [PATCH v3 9/9] doc: Update doc for vhost multiple queues",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.15",
        "Precedence": "list",
        "List-Id": "patches and discussions about DPDK <dev.dpdk.org>",
        "List-Unsubscribe": "<http://dpdk.org/ml/options/dev>,\n\t<mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://dpdk.org/ml/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<http://dpdk.org/ml/listinfo/dev>,\n\t<mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "Update the sample guide doc for vhost multiple queues;\nUpdate the prog guide doc for vhost lib multiple queues feature;\n\nIt is added since v3\n\nSigned-off-by: Changchun Ouyang <changchun.ouyang@intel.com>\n---\n doc/guides/prog_guide/vhost_lib.rst |  35 ++++++++++++\n doc/guides/sample_app_ug/vhost.rst  | 110 ++++++++++++++++++++++++++++++++++++\n 2 files changed, 145 insertions(+)",
"diff": "diff --git a/doc/guides/prog_guide/vhost_lib.rst b/doc/guides/prog_guide/vhost_lib.rst\nindex 48e1fff..e444681 100644\n--- a/doc/guides/prog_guide/vhost_lib.rst\n+++ b/doc/guides/prog_guide/vhost_lib.rst\n@@ -128,6 +128,41 @@ VHOST_GET_VRING_BASE is used as the signal to remove vhost device from data plan\n \n When the socket connection is closed, vhost will destroy the device.\n \n+Vhost multiple queues feature\n+-----------------------------\n+This feature supports the multiple queues for each virtio device in vhost.\n+The vhost-user is used to enable the multiple queues feature, It's not ready for vhost-cuse.\n+\n+The QEMU patch of enabling vhost-use multiple queues has already merged into upstream sub-tree in\n+QEMU community and it will be put in QEMU 2.4. If using QEMU 2.3, it requires applying the\n+same patch onto QEMU 2.3 and rebuild the QEMU before running vhost multiple queues:\n+http://patchwork.ozlabs.org/patch/477461/\n+\n+The vhost will get the queue pair number based on the communication message with QEMU.\n+\n+HW queue numbers in pool is strongly recommended to set as identical with the queue number to start\n+the QMEU guest and identical with the queue number to start with virtio port on guest.\n+\n+=========================================\n+==================|   |==================|\n+       vport0     |   |      vport1      |\n+---  ---  ---  ---|   |---  ---  ---  ---|\n+q0 | q1 | q2 | q3 |   |q0 | q1 | q2 | q3 |\n+/\\= =/\\= =/\\= =/\\=|   |/\\= =/\\= =/\\= =/\\=|\n+||   ||   ||   ||      ||   ||   ||   ||\n+||   ||   ||   ||      ||   ||   ||   ||\n+||= =||= =||= =||=|   =||== ||== ||== ||=|\n+q0 | q1 | q2 | q3 |   |q0 | q1 | q2 | q3 |\n+------------------|   |------------------|\n+     VMDq pool0   |   |    VMDq pool1    |\n+==================|   |==================|\n+\n+In RX side, it firstly polls each queue of the pool and gets the packets from\n+it and enqueue them into its corresponding virtqueue in virtio device/port.\n+In TX side, it dequeue packets from each virtqueue of virtio device/port and send\n+to either physical port or another virtio device according to its destination\n+MAC address.\n+\n Vhost supported vSwitch reference\n ---------------------------------\n \ndiff --git a/doc/guides/sample_app_ug/vhost.rst b/doc/guides/sample_app_ug/vhost.rst\nindex 730b9da..9a57d19 100644\n--- a/doc/guides/sample_app_ug/vhost.rst\n+++ b/doc/guides/sample_app_ug/vhost.rst\n@@ -514,6 +514,13 @@ It is enabled by default.\n \n     user@target:~$ ./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge -- --vlan-strip [0, 1]\n \n+**rxq.**\n+The rxq option specify the rx queue number per VMDq pool, it is 1 on default.\n+\n+.. code-block:: console\n+\n+    user@target:~$ ./build/app/vhost-switch -c f -n 4 --huge-dir /mnt/huge -- --rxq [1, 2, 4]\n+\n Running the Virtual Machine (QEMU)\n ----------------------------------\n \n@@ -833,3 +840,106 @@ For example:\n The above message indicates that device 0 has been registered with MAC address cc:bb:bb:bb:bb:bb and VLAN tag 1000.\n Any packets received on the NIC with these values is placed on the devices receive queue.\n When a virtio-net device transmits packets, the VLAN tag is added to the packet by the DPDK vhost sample code.\n+\n+Vhost multiple queues\n+---------------------\n+\n+This feature supports the multiple queues for each virtio device in vhost.\n+The vhost-user is used to enable the multiple queues feature, It's not ready for vhost-cuse.\n+\n+The QEMU patch of enabling vhost-use multiple queues has already merged into upstream sub-tree in\n+QEMU community and it will be put in QEMU 2.4. If using QEMU 2.3, it requires applying the\n+same patch onto QEMU 2.3 and rebuild the QEMU before running vhost multiple queues:\n+http://patchwork.ozlabs.org/patch/477461/\n+\n+Basically vhost sample leverages the VMDq+RSS in HW to receive packets and distribute them\n+into different queue in the pool according to their 5 tuples.\n+\n+On the other hand, the vhost will get the queue pair number based on the communication message with\n+QEMU.\n+\n+HW queue numbers in pool is strongly recommended to set as identical with the queue number to start\n+the QMEU guest and identical with the queue number to start with virtio port on guest.\n+E.g. use '--rxq 4' to set the queue number as 4, it means there are 4 HW queues in each VMDq pool,\n+and 4 queues in each vhost device/port, every queue in pool maps to one queue in vhost device.\n+\n+=========================================\n+==================|   |==================|\n+       vport0     |   |      vport1      |\n+---  ---  ---  ---|   |---  ---  ---  ---|\n+q0 | q1 | q2 | q3 |   |q0 | q1 | q2 | q3 |\n+/\\= =/\\= =/\\= =/\\=|   |/\\= =/\\= =/\\= =/\\=|\n+||   ||   ||   ||      ||   ||   ||   ||\n+||   ||   ||   ||      ||   ||   ||   ||\n+||= =||= =||= =||=|   =||== ||== ||== ||=|\n+q0 | q1 | q2 | q3 |   |q0 | q1 | q2 | q3 |\n+------------------|   |------------------|\n+     VMDq pool0   |   |    VMDq pool1    |\n+==================|   |==================|\n+\n+In RX side, it firstly polls each queue of the pool and gets the packets from\n+it and enqueue them into its corresponding virtqueue in virtio device/port.\n+In TX side, it dequeue packets from each virtqueue of virtio device/port and send\n+to either physical port or another virtio device according to its destination\n+MAC address.\n+\n+\n+Test guidance\n+~~~~~~~~~~~~~\n+\n+#.  On host, firstly mount hugepage, and insmod uio, igb_uio, bind one nic on igb_uio;\n+    and then run vhost sample, key steps as follows:\n+\n+.. code-block:: console\n+\n+    sudo mount -t hugetlbfs nodev /mnt/huge\n+    sudo modprobe uio\n+    sudo insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko\n+\n+    $RTE_SDK/tools/dpdk_nic_bind.py --bind igb_uio 0000:08:00.0\n+    sudo $RTE_SDK/examples/vhost/build/vhost-switch -c 0xf0 -n 4 --huge-dir \\\n+    /mnt/huge --socket-mem 1024,0 -- -p 1 --vm2vm 0 --dev-basename usvhost --rxq 2\n+\n+.. note::\n+\n+    use '--stats 1' to enable the stats dumping on screen for vhost.\n+\n+#.  After step 1, on host, modprobe kvm and kvm_intel, and use qemu command line to start one guest:\n+\n+.. code-block:: console\n+\n+    modprobe kvm\n+    modprobe kvm_intel\n+    sudo mount -t hugetlbfs nodev /dev/hugepages -o pagesize=1G\n+\n+    $QEMU_PATH/qemu-system-x86_64 -enable-kvm -m 4096 \\\n+    -object memory-backend-file,id=mem,size=4096M,mem-path=/dev/hugepages,share=on \\\n+    -numa node,memdev=mem -mem-prealloc -smp 10 -cpu core2duo,+sse3,+sse4.1,+sse4.2 \\\n+    -name <vm-name> -drive file=<img-path>/vm.img \\\n+    -chardev socket,id=char0,path=<usvhost-path>/usvhost \\\n+    -netdev type=vhost-user,id=hostnet2,chardev=char0,vhostforce=on,queues=2 \\\n+    -device virtio-net-pci,mq=on,vectors=6,netdev=hostnet2,id=net2,mac=52:54:00:12:34:56,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off \\\n+    -chardev socket,id=char1,path=<usvhost-path>/usvhost \\\n+    -netdev type=vhost-user,id=hostnet3,chardev=char1,vhostforce=on,queues=2 \\\n+    -device virtio-net-pci,mq=on,vectors=6,netdev=hostnet3,id=net3,mac=52:54:00:12:34:57,csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off\n+\n+#.  Log on guest, use testpmd(dpdk based) to test, use multiple virtio queues to rx and tx packets.\n+\n+.. code-block:: console\n+\n+    modprobe uio\n+    insmod $RTE_SDK/$RTE_TARGET/kmod/igb_uio.ko\n+    echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages\n+    ./tools/dpdk_nic_bind.py --bind igb_uio 00:03.0 00:04.0\n+\n+    $RTE_SDK/$RTE_TARGET/app/testpmd -c 1f -n 4 -- --rxq=2 --txq=2 --nb-cores=4 \\\n+    --rx-queue-stats-mapping=\"(0,0,0),(0,1,1),(1,0,2),(1,1,3)\" \\\n+    --tx-queue-stats-mapping=\"(0,0,0),(0,1,1),(1,0,2),(1,1,3)\" -i --disable-hw-vlan --txqflags 0xf00\n+\n+    set fwd mac\n+    start tx_first\n+\n+#.  Use packet generator to send packets with dest MAC:52 54 00 12 34 57  VLAN tag:1001,\n+    select IPv4 as protocols and continuous incremental IP address.\n+\n+#.  Testpmd on guest can display packets received/transmitted in both queues of each virtio port.\n",
    "prefixes": [
        "dpdk-dev",
        "v3",
        "9/9"
    ]
}
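A client consuming this endpoint might extract the fields it needs as in the sketch below, which parses a trimmed copy of the response above. The field names (`id`, `name`, `state`, `submitter`, `mbox`) are taken directly from the response; the summary format is just one possible presentation.

```python
import json

# Trimmed copy of the GET response shown above.
response_body = """
{
    "id": 5433,
    "name": "[dpdk-dev,v3,9/9] doc: Update doc for vhost multiple queues",
    "state": "changes-requested",
    "archived": true,
    "submitter": {
        "name": "Ouyang Changchun",
        "email": "changchun.ouyang@intel.com"
    },
    "mbox": "http://patches.dpdk.org/project/dpdk/patch/1434355006-30583-10-git-send-email-changchun.ouyang@intel.com/mbox/"
}
"""

patch = json.loads(response_body)

# One-line summary of the patch and who sent it.
summary = f'{patch["id"]}: {patch["name"]} [{patch["state"]}]'
print(summary)
print(patch["submitter"]["email"])

# The "mbox" URL serves the raw mail, suitable for `git am` after download.
print(patch["mbox"])
```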