[dpdk-dev,v2,1/2] doc: live migration of VM with vhost_user on host

Message ID 1468424161-13064-2-git-send-email-bernard.iremonger@intel.com (mailing list archive)
State Superseded, archived

Commit Message

Iremonger, Bernard July 13, 2016, 3:36 p.m. UTC
  This patch describes the procedure to be followed to perform
Live Migration of a VM with Virtio PMD running on a host which
is running the vhost_user sample application (vhost-switch).

It includes sample host and VM scripts used in the procedure.

Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
---
 doc/guides/howto/index.rst                |   1 +
 doc/guides/howto/lm_virtio_vhost_user.rst | 455 ++++++++++++++++++++++++++++++
 2 files changed, 456 insertions(+)
 create mode 100644 doc/guides/howto/lm_virtio_vhost_user.rst
  

Comments

John McNamara July 17, 2016, 6:12 p.m. UTC | #1
> -----Original Message-----
> From: Iremonger, Bernard
> Sent: Wednesday, July 13, 2016 4:36 PM
> To: Mcnamara, John <john.mcnamara@intel.com>; dev@dpdk.org
> Cc: Liu, Yong <yong.liu@intel.com>; Xu, Qian Q <qian.q.xu@intel.com>;
> yuanhan.liu@linux.intel.com; Iremonger, Bernard
> <bernard.iremonger@intel.com>
> Subject: [PATCH v2 1/2] doc: live migration of VM with vhost_user on host
> 
> This patch describes the procedure to be be followed to perform Live
> Migration of a VM with Virtio PMD running on a host which is running the
> vhost_user sample application (vhost-switch).
> 
> It includes sample host and VM scripts used in the procedure.

Hi Bernard,

Same comments to this as to the SRIOV version.

Also, it may be worth restricting the index to 2 levels since there are a
lot of third level entries showing up in the index.html file and the level
2 headings are sufficiently informative.

Thanks,

John
  
Iremonger, Bernard July 18, 2016, 7:53 a.m. UTC | #2
Hi John

> -----Original Message-----
> From: Mcnamara, John
> Sent: Sunday, July 17, 2016 7:13 PM
> To: Iremonger, Bernard <bernard.iremonger@intel.com>; dev@dpdk.org
> Cc: Liu, Yong <yong.liu@intel.com>; Xu, Qian Q <qian.q.xu@intel.com>;
> yuanhan.liu@linux.intel.com
> Subject: RE: [PATCH v2 1/2] doc: live migration of VM with vhost_user on
> host
> 
> > -----Original Message-----
> > From: Iremonger, Bernard
> > Sent: Wednesday, July 13, 2016 4:36 PM
> > To: Mcnamara, John <john.mcnamara@intel.com>; dev@dpdk.org
> > Cc: Liu, Yong <yong.liu@intel.com>; Xu, Qian Q <qian.q.xu@intel.com>;
> > yuanhan.liu@linux.intel.com; Iremonger, Bernard
> > <bernard.iremonger@intel.com>
> > Subject: [PATCH v2 1/2] doc: live migration of VM with vhost_user on
> > host
> >
> > This patch describes the procedure to be be followed to perform Live
> > Migration of a VM with Virtio PMD running on a host which is running
> > the vhost_user sample application (vhost-switch).
> >
> > It includes sample host and VM scripts used in the procedure.
> 
> Hi Bernard,
> 
> Same comments to this as to the SRIOV version.
> 
> Also, it may be worth restricting the index to 2 levels since there are a lot of
> third level entries showing up in the index.html file and the level
> 2 headings are sufficiently informative.
> 
> Thanks,
> 
> John

I will send a V3 and restrict the index to 2 levels.

Regards,

Bernard.
  

Patch

diff --git a/doc/guides/howto/index.rst b/doc/guides/howto/index.rst
index 4b97a32..d3e3a90 100644
--- a/doc/guides/howto/index.rst
+++ b/doc/guides/howto/index.rst
@@ -36,3 +36,4 @@  How To User Guide
     :numbered:
 
     lm_bond_virtio_sriov
+    lm_virtio_vhost_user
diff --git a/doc/guides/howto/lm_virtio_vhost_user.rst b/doc/guides/howto/lm_virtio_vhost_user.rst
new file mode 100644
index 0000000..2de3ef7
--- /dev/null
+++ b/doc/guides/howto/lm_virtio_vhost_user.rst
@@ -0,0 +1,455 @@ 
+..  BSD LICENSE
+    Copyright(c) 2016 Intel Corporation. All rights reserved.
+    All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+Live Migration of VM with Virtio on host running vhost_user
+============================================================
+
+Live Migration overview for VM with Virtio
+-------------------------------------------
+
+To test Live Migration, two servers with identical operating systems installed are used.
+KVM and QEMU are also required on both servers.
+
+QEMU 2.5 is required for Live Migration of a VM with vhost_user running on the hosts.
+
+The servers have Niantic and/or Fortville NICs installed.
+The NICs on both servers are connected to a switch,
+which is also connected to the traffic generator.
+
+The switch is configured to broadcast traffic on all the NIC ports.
+
+Live Migration with Virtio and vhost_user test setup
+-----------------------------------------------------
+
+Live Migration steps for VM with Virtio PMD and vhost_user on host
+-------------------------------------------------------------------
+
+The host is running the DPDK PMD (ixgbe or i40e) and the DPDK vhost_user
+sample application (vhost-switch).
+
+The IP address of host_server_1 is 10.237.212.46.
+
+The IP address of host_server_2 is 10.237.212.131.
+
+The sample scripts mentioned in the steps below can be found in the host_scripts
+and vm_scripts sections.
+
+On host_server_1: Terminal 1
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Setup DPDK on host_server_1
+
+.. code-block:: console
+
+    cd /root/dpdk/host_scripts
+    ./setup_dpdk_on_host.sh
+
+On host_server_1: Terminal 2
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Bind the Niantic or Fortville NIC to igb_uio on host_server_1.
+
+For Fortville NIC
+
+.. code-block:: console
+
+   cd /root/dpdk/tools
+   ./dpdk_nic_bind.py -b igb_uio 0000:02:00.0
+
+For Niantic NIC
+
+.. code-block:: console
+
+   cd /root/dpdk/tools
+   ./dpdk_nic_bind.py -b igb_uio 0000:09:00.0
+
+On host_server_1: Terminal 3
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+For both the Fortville and Niantic NICs, reset SRIOV and run the
+vhost_user sample application (vhost-switch) on host_server_1.
+
+.. code-block:: console
+
+    cd /root/dpdk/host_scripts
+    ./reset_vf_on_212_46.sh
+    ./run_vhost_switch_on_host.sh
+
+On host_server_1: Terminal 1
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Start the VM on host_server_1
+
+.. code-block:: console
+
+   ./vm_virtio_vhost_user.sh
+
+On host_server_1: Terminal 4
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Connect to the QEMU monitor on host_server_1
+
+.. code-block:: console
+
+    cd /root/dpdk/host_scripts
+    ./connect_to_qemu_mon_on_host.sh
+    (qemu)
+
+On host_server_1: Terminal 1
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+**In VM on host_server_1:**
+
+Setup DPDK in the VM and run testpmd in the VM.
+
+.. code-block:: console
+
+    cd /root/dpdk/vm_scripts
+    ./setup_dpdk_virtio_in_vm.sh
+    ./run_testpmd_in_vm.sh
+
+    testpmd> show port info all
+    testpmd> set fwd mac retry
+    testpmd> start tx_first
+    testpmd> show port stats all
+
+Virtio traffic seen at P1 and P2
+
+On host_server_2: Terminal 1
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Set up DPDK on host_server_2
+
+.. code-block:: console
+
+    cd /root/dpdk/host_scripts
+    ./setup_dpdk_on_host.sh
+
+On host_server_2: Terminal 2
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Bind the Niantic or Fortville NIC to igb_uio on host_server_2.
+
+For Fortville NIC
+
+.. code-block:: console
+
+   cd /root/dpdk/tools
+   ./dpdk_nic_bind.py -b igb_uio 0000:03:00.0
+
+For Niantic NIC
+
+.. code-block:: console
+
+   cd /root/dpdk/tools
+   ./dpdk_nic_bind.py -b igb_uio 0000:06:00.0
+
+On host_server_2: Terminal 3
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+For both the Fortville and Niantic NICs, reset SRIOV and run
+the vhost_user sample application (vhost-switch) on host_server_2.
+
+.. code-block:: console
+
+    cd /root/dpdk/host_scripts
+    ./reset_vf_on_212_131.sh
+    ./run_vhost_switch_on_host.sh
+
+On host_server_2: Terminal 1
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Start the VM on host_server_2
+
+.. code-block:: console
+
+   ./vm_virtio_vhost_user_migrate.sh
+
+On host_server_2: Terminal 4
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Connect to the QEMU monitor on host_server_2
+
+.. code-block:: console
+
+    cd /root/dpdk/host_scripts
+    ./connect_to_qemu_mon_on_host.sh
+    (qemu) info status
+    VM status: paused (inmigrate)
+    (qemu)
+
+On host_server_1: Terminal 4
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Check that the switch is up before migrating the VM.
+
+.. code-block:: console
+
+    (qemu) migrate tcp:10.237.212.131:5555
+    (qemu) info status
+    VM status: paused (postmigrate)
+
+    (qemu) info migrate
+    capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off
+    Migration status: completed
+    total time: 11619 milliseconds
+    downtime: 5 milliseconds
+    setup: 7 milliseconds
+    transferred ram: 379699 kbytes
+    throughput: 267.82 mbps
+    remaining ram: 0 kbytes
+    total ram: 1590088 kbytes
+    duplicate: 303985 pages
+    skipped: 0 pages
+    normal: 94073 pages
+    normal bytes: 376292 kbytes
+    dirty sync count: 2
+    (qemu) quit
+
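The ``info migrate`` figures shown above can also be pulled out of a saved
monitor transcript. The sketch below is an illustrative helper only, not part
of this patch, and is tied to the QEMU 2.5 field names shown in the output
above; other QEMU versions may print different fields.

```shell
#!/bin/sh
# Illustrative helper: extract timing fields from saved
# "info migrate" output.  The field names follow the QEMU 2.5
# transcript shown above and may differ in other versions.

parse_migrate_stats() {
    awk -F': ' '
        /^total time:/ { sub(/ milliseconds/, "", $2); print "total_ms=" $2 }
        /^downtime:/   { sub(/ milliseconds/, "", $2); print "downtime_ms=" $2 }
        /^setup:/      { sub(/ milliseconds/, "", $2); print "setup_ms=" $2 }
    '
}

printf 'total time: 11619 milliseconds\ndowntime: 5 milliseconds\nsetup: 7 milliseconds\n' \
    | parse_migrate_stats
```

Fed the transcript above, this prints ``total_ms=11619``, ``downtime_ms=5``
and ``setup_ms=7``.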
+On host_server_2: Terminal 1
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+**In VM on host_server_2:**
+
+Hit the Enter key. This brings the user to the testpmd prompt.
+
+.. code-block:: console
+
+    testpmd>
+
+On host_server_2: Terminal 4
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+**In QEMU monitor on host_server_2**
+
+.. code-block:: console
+
+    (qemu) info status
+    VM status: running
+
+On host_server_2: Terminal 1
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+**In VM on host_server_2:**
+
+.. code-block:: console
+
+   testpmd> show port info all
+   testpmd> show port stats all
+
+Virtio traffic seen at P0 and P1.
+
+Sample host scripts
+-------------------
+
+reset_vf_on_212_46.sh
+^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: sh
+
+   #!/bin/sh
+   # This script is run on the host 10.237.212.46 to reset SRIOV
+
+   # BDF for Fortville NIC is 0000:02:00.0
+   cat /sys/bus/pci/devices/0000\:02\:00.0/max_vfs
+   echo 0 > /sys/bus/pci/devices/0000\:02\:00.0/max_vfs
+   cat /sys/bus/pci/devices/0000\:02\:00.0/max_vfs
+
+   # BDF for Niantic NIC is 0000:09:00.0
+   cat /sys/bus/pci/devices/0000\:09\:00.0/max_vfs
+   echo 0 > /sys/bus/pci/devices/0000\:09\:00.0/max_vfs
+   cat /sys/bus/pci/devices/0000\:09\:00.0/max_vfs
+
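The two reset scripts in this section differ only in their hard-coded BDFs. As
a sketch, the same write can be factored into a function; the helper name and
the optional sysfs-root parameter are inventions for illustration and testing,
not part of this patch.

```shell
#!/bin/sh
# Hypothetical helper: disable VFs for one PCI device by writing 0
# to its max_vfs attribute.  The optional root argument exists only
# so the logic can be exercised against a scratch directory.

reset_vfs() {
    bdf="$1"
    root="${2:-/sys/bus/pci/devices}"
    f="$root/$bdf/max_vfs"
    [ -w "$f" ] || { echo "no writable max_vfs for $bdf" >&2; return 1; }
    echo "before: $(cat "$f")"
    echo 0 > "$f"
    echo "after: $(cat "$f")"
}
```

On the real hosts this would be called as ``reset_vfs 0000:02:00.0`` and so on,
once per BDF.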
+vm_virtio_vhost_user.sh
+^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: sh
+
+   #!/bin/sh
+   # Script for use with vhost_user sample application
+   # The host system has 8 CPUs (0-7)
+
+   # Path to KVM tool
+   KVM_PATH="/usr/bin/qemu-system-x86_64"
+
+   # Guest Disk image
+   DISK_IMG="/home/user/disk_image/virt1_sml.disk"
+
+   # Number of guest cpus
+   VCPUS_NR="6"
+
+   # Memory
+   MEM=1024
+
+   VIRTIO_OPTIONS="csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off"
+
+   # Socket Path
+   SOCKET_PATH="/root/dpdk/host_scripts/usvhost"
+
+   taskset -c 2-7 $KVM_PATH \
+    -enable-kvm \
+    -m $MEM \
+    -smp $VCPUS_NR \
+    -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem,nodeid=0 \
+    -cpu host \
+    -name VM1 \
+    -no-reboot \
+    -net none \
+    -vnc none \
+    -nographic \
+    -hda $DISK_IMG \
+    -chardev socket,id=chr0,path=$SOCKET_PATH \
+    -netdev type=vhost-user,id=net1,chardev=chr0,vhostforce \
+    -device virtio-net-pci,netdev=net1,mac=CC:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
+    -chardev socket,id=chr1,path=$SOCKET_PATH \
+    -netdev type=vhost-user,id=net2,chardev=chr1,vhostforce \
+    -device virtio-net-pci,netdev=net2,mac=DD:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
+    -monitor telnet::3333,server,nowait
+
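QEMU fails immediately if the vhost-user socket path does not exist yet, which
is why vhost-switch is started in an earlier terminal. A guard of the following
shape could be added before the ``taskset`` line; ``wait_for_socket`` is a
hypothetical helper, and it checks with ``-e`` rather than ``-S`` purely so it
is easy to exercise with an ordinary file.

```shell
#!/bin/sh
# Hypothetical guard: poll until the vhost-user socket appears,
# giving up after a timeout (in seconds).  Uses -e instead of -S
# only so the check is easy to exercise with an ordinary file.

wait_for_socket() {
    path="$1"
    timeout="${2:-10}"
    i=0
    while [ ! -e "$path" ]; do
        i=$((i + 1))
        [ "$i" -gt "$timeout" ] && return 1
        sleep 1
    done
    return 0
}
```

For example, ``wait_for_socket "$SOCKET_PATH" 30 || exit 1`` before launching
QEMU.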
+connect_to_qemu_mon_on_host.sh
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: sh
+
+  #!/bin/sh
+  # This script is run on both hosts when the VM is up,
+  # to connect to the Qemu Monitor.
+
+  telnet 0 3333
+
+reset_vf_on_212_131.sh
+^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: sh
+
+  #!/bin/sh
+  # This script is run on the host 10.237.212.131 to reset SRIOV
+
+  # BDF for Niantic NIC is 0000:06:00.0
+  cat /sys/bus/pci/devices/0000\:06\:00.0/max_vfs
+  echo 0 > /sys/bus/pci/devices/0000\:06\:00.0/max_vfs
+  cat /sys/bus/pci/devices/0000\:06\:00.0/max_vfs
+
+  # BDF for Fortville NIC is 0000:03:00.0
+  cat /sys/bus/pci/devices/0000\:03\:00.0/max_vfs
+  echo 0 > /sys/bus/pci/devices/0000\:03\:00.0/max_vfs
+  cat /sys/bus/pci/devices/0000\:03\:00.0/max_vfs
+
+vm_virtio_vhost_user_migrate.sh
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: sh
+
+   #!/bin/sh
+   # Script for use with vhost user sample application
+   # The host system has 8 CPUs (0-7)
+
+   # Path to KVM tool
+   KVM_PATH="/usr/bin/qemu-system-x86_64"
+
+   # Guest Disk image
+   DISK_IMG="/home/user/disk_image/virt1_sml.disk"
+
+   # Number of guest cpus
+   VCPUS_NR="6"
+
+   # Memory
+   MEM=1024
+
+   VIRTIO_OPTIONS="csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off"
+
+   # Socket Path
+   SOCKET_PATH="/root/dpdk/host_scripts/usvhost"
+
+   taskset -c 2-7 $KVM_PATH \
+    -enable-kvm \
+    -m $MEM \
+    -smp $VCPUS_NR \
+    -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on \
+    -numa node,memdev=mem,nodeid=0 \
+    -cpu host \
+    -name VM1 \
+    -no-reboot \
+    -net none \
+    -vnc none \
+    -nographic \
+    -hda $DISK_IMG \
+    -chardev socket,id=chr0,path=$SOCKET_PATH \
+    -netdev type=vhost-user,id=net1,chardev=chr0,vhostforce \
+    -device virtio-net-pci,netdev=net1,mac=CC:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
+    -chardev socket,id=chr1,path=$SOCKET_PATH \
+    -netdev type=vhost-user,id=net2,chardev=chr1,vhostforce \
+    -device virtio-net-pci,netdev=net2,mac=DD:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
+    -incoming tcp:0:5555 \
+    -monitor telnet::3333,server,nowait
+
+Sample VM scripts
+-----------------
+
+setup_dpdk_virtio_in_vm.sh
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: sh
+
+  #!/bin/sh
+  # This script matches the vm_virtio_vhost_user script.
+  # The virtio ports are 03 and 04.
+
+  cat  /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+  echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+  cat  /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+
+  ifconfig -a
+  /root/dpdk/tools/dpdk_nic_bind.py --status
+
+  rmmod virtio-pci
+
+  modprobe uio
+  insmod /root/dpdk/x86_64-default-linuxapp-gcc/kmod/igb_uio.ko
+
+  /root/dpdk/tools/dpdk_nic_bind.py -b igb_uio 0000:00:03.0
+  /root/dpdk/tools/dpdk_nic_bind.py -b igb_uio 0000:00:04.0
+
+  /root/dpdk/tools/dpdk_nic_bind.py --status
+
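Each 2 MB hugepage reserved by the script above removes 2048 kB from the
guest's memory, so the count written to ``nr_hugepages`` should be sized
against the guest's RAM. The arithmetic can be sanity-checked with a one-liner;
the helper name is just for illustration.

```shell
#!/bin/sh
# Convert a 2 MB hugepage count into the memory it reserves, in MB.

hugepage_mem_mb() {
    echo $(( $1 * 2048 / 1024 ))
}

hugepage_mem_mb 1024   # 1024 pages x 2 MB = 2048 MB
```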
+run_testpmd_in_vm.sh
+^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: sh
+
+   #!/bin/sh
+   # Run testpmd for use with vhost_user sample app.
+   # test system has 8 cpus (0-7), use cpus 2-7 for VM
+
+   /root/dpdk/x86_64-default-linuxapp-gcc/app/testpmd \
+   -c 3f -n 4 --socket-mem 350 -- --burst=64 -i --disable-hw-vlan-filter
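The ``-c 3f`` coremask above is hexadecimal for binary 111111, i.e. cores 0-5,
matching the six guest vCPUs. A mask covering the first n cores is
``(1 << n) - 1``; a throwaway helper (illustrative only) shows the derivation:

```shell
#!/bin/sh
# Build a testpmd-style hex coremask covering cores 0..n-1.

coremask() {
    printf '%x\n' $(( (1 << $1) - 1 ))
}

coremask 6   # prints 3f, the mask used with -c above
```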