
[dpdk-dev,1/2] doc: live migration of VM with Virtio and VF

Message ID 1467370086-24386-1-git-send-email-bernard.iremonger@intel.com (mailing list archive)
State Superseded, archived
Delegated to: Yuanhan Liu
Headers show

Commit Message

Bernard Iremonger July 1, 2016, 10:48 a.m. UTC
This patch describes the procedure to be followed
to perform Live Migration of a VM with Virtio and VF PMDs
using the bonding PMD.

It includes sample host and VM scripts used in the procedure,
and a sample switch configuration.

Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
---
 doc/guides/how_to/index.rst                |  38 ++
 doc/guides/how_to/lm_bond_virtio_sriov.rst | 687 +++++++++++++++++++++++++++++
 doc/guides/index.rst                       |   3 +-
 3 files changed, 727 insertions(+), 1 deletion(-)
 create mode 100644 doc/guides/how_to/index.rst
 create mode 100644 doc/guides/how_to/lm_bond_virtio_sriov.rst

Comments

Bernard Iremonger July 6, 2016, 4:01 p.m. UTC | #1
This patch set describes the procedure to live migrate
a VM with Virtio and VF PMDs using the bonding PMD.

Changes in v2:
change primary port before removing slave port
remove unused variables from QEMU scripts
identify NIC's in bridge setup scripts

Bernard Iremonger (2):
  doc: live migration of VM with Virtio and VF
  doc: add live migration overview image

 doc/guides/how_to/img/lm_overview.svg      | 666 +++++++++++++++++++++++++++
 doc/guides/how_to/index.rst                |  38 ++
 doc/guides/how_to/lm_bond_virtio_sriov.rst | 693 +++++++++++++++++++++++++++++
 doc/guides/index.rst                       |   3 +-
 4 files changed, 1399 insertions(+), 1 deletion(-)
 create mode 100644 doc/guides/how_to/img/lm_overview.svg
 create mode 100644 doc/guides/how_to/index.rst
 create mode 100644 doc/guides/how_to/lm_bond_virtio_sriov.rst
Thomas Monjalon July 6, 2016, 4:25 p.m. UTC | #2
Hi Bernard,

2016-07-06 17:01, Bernard Iremonger:
> This patch set describes the procedure to live migrate
> a VM with Virtio and VF PMDs using the bonding PMD.
[...]
>  doc/guides/how_to/img/lm_overview.svg      | 666 +++++++++++++++++++++++++++
>  doc/guides/how_to/index.rst                |  38 ++
>  doc/guides/how_to/lm_bond_virtio_sriov.rst | 693 +++++++++++++++++++++++++++++
>  doc/guides/index.rst                       |   3 +-

I'm surprised to see a howto appearing in the repository.
My first reaction is: how hard will it be to keep up-to-date?
Otherwise I don't see any other reason not to accept it.
It may actually be a good idea :)

Opinion from others?
Bernard Iremonger July 7, 2016, 10:42 a.m. UTC | #3
This patch set describes the procedure to live migrate
a VM with Virtio and VF PMDs using the bonding PMD.

Changes in v3:
rename directory from how_to to howto
move to after FAQ in the index

Changes in v2:
change primary port before removing slave port
remove unused variables from QEMU scripts
identify NIC's in bridge setup scripts

Bernard Iremonger (2):
  doc: live migration of VM with Virtio and VF
  doc: add live migration overview image

 doc/guides/howto/img/lm_overview.svg      | 666 ++++++++++++++++++++++++++++
 doc/guides/howto/index.rst                |  38 ++
 doc/guides/howto/lm_bond_virtio_sriov.rst | 693 ++++++++++++++++++++++++++++++
 doc/guides/index.rst                      |   1 +
 4 files changed, 1398 insertions(+)
 create mode 100644 doc/guides/howto/img/lm_overview.svg
 create mode 100644 doc/guides/howto/index.rst
 create mode 100644 doc/guides/howto/lm_bond_virtio_sriov.rst
Bernard Iremonger July 13, 2016, 3:35 p.m. UTC | #4
This patch set describes the procedure to live migrate a VM with
Virtio PMDs with the vhost_user sample application (vhost-switch)
running on the host.

This patchset depends on the following patch:
http://dpdk.org/dev/patchwork/patch/14625

Changes in v2:
removed l2fwd information
minor changes to svg file

Bernard Iremonger (2):
  doc: live migration of VM with vhost_user on host
  doc: add vhost_user live migration image

 doc/guides/howto/img/lm_vhost_user.svg    | 644 ++++++++++++++++++++++++++++++
 doc/guides/howto/index.rst                |   1 +
 doc/guides/howto/lm_virtio_vhost_user.rst | 459 +++++++++++++++++++++
 3 files changed, 1104 insertions(+)
 create mode 100644 doc/guides/howto/img/lm_vhost_user.svg
 create mode 100644 doc/guides/howto/lm_virtio_vhost_user.rst
Bernard Iremonger July 18, 2016, 10:17 a.m. UTC | #5
This patch set describes the procedure to live migrate
a VM with Virtio and VF PMDs using the bonding PMD.

Changes in v4:
rename image file and patch
added links to rst file
updated rst file in line with comments

Changes in v3:
rename directory from how_to to howto
move to after FAQ in the index

Changes in v2:
change primary port before removing slave port
remove unused variables from QEMU scripts
identify NIC's in bridge setup scripts

Bernard Iremonger (2):
  doc: live migration of VM with Virtio and VF
  doc: add live migration virtio sriov image

 doc/guides/howto/index.rst                |  38 ++
 doc/guides/howto/lm_bond_virtio_sriov.rst | 713 ++++++++++++++++++++++++++++++
 doc/guides/index.rst                      |   1 +
 3 files changed, 752 insertions(+)
 create mode 100644 doc/guides/howto/index.rst
 create mode 100644 doc/guides/howto/lm_bond_virtio_sriov.rst
Bernard Iremonger July 18, 2016, 2:30 p.m. UTC | #6
This patch set describes the procedure to live migrate a VM with
Virtio PMDs with the vhost_user sample application (vhost-switch)
running on the host.

This patchset depends on the following patch:
http://dpdk.org/dev/patchwork/patch/14871

Changes in v3:
added links in rst file
updated rst file in line with comments

Changes in v2:
removed l2fwd information
minor changes to svg file

Bernard Iremonger (2):
  doc: live migration of VM with vhost_user on host
  doc: add vhost_user live migration image

 doc/guides/howto/img/lm_vhost_user.svg    | 644 ++++++++++++++++++++++++++++++
 doc/guides/howto/index.rst                |   1 +
 doc/guides/howto/lm_virtio_vhost_user.rst | 469 ++++++++++++++++++++++
 3 files changed, 1114 insertions(+)
 create mode 100644 doc/guides/howto/img/lm_vhost_user.svg
 create mode 100644 doc/guides/howto/lm_virtio_vhost_user.rst
Bernard Iremonger July 19, 2016, 3:09 p.m. UTC | #7
This patch set describes the procedure to live migrate
a VM with Virtio and VF PMDs using the bonding PMD.

Changes in v5:
restore missing image file
change Guide to Guides in heading

Changes in v4:
rename image file and patch
added links to rst file
updated rst file in line with comments

Changes in v3:
rename directory from how_to to howto
move to after FAQ in the index

Changes in v2:
change primary port before removing slave port
remove unused variables from QEMU scripts
identify NIC's in bridge setup scripts

Bernard Iremonger (2):
  doc: live migration of VM with Virtio and VF
  doc: add live migration virtio sriov image

 doc/guides/howto/img/lm_bond_virtio_sriov.svg | 666 ++++++++++++++++++++++++
 doc/guides/howto/index.rst                    |  38 ++
 doc/guides/howto/lm_bond_virtio_sriov.rst     | 713 ++++++++++++++++++++++++++
 doc/guides/index.rst                          |   1 +
 4 files changed, 1418 insertions(+)
 create mode 100644 doc/guides/howto/img/lm_bond_virtio_sriov.svg
 create mode 100644 doc/guides/howto/index.rst
 create mode 100644 doc/guides/howto/lm_bond_virtio_sriov.rst
Thomas Monjalon July 22, 2016, 4:56 p.m. UTC | #8
2016-07-19 16:09, Bernard Iremonger:
> This patch set describes the procedure to Live migrate
> a VM with Virtio and VF PMD's using the bonding PMD.

Applied, thanks
Thomas Monjalon July 22, 2016, 4:59 p.m. UTC | #9
2016-07-18 15:30, Bernard Iremonger:
> This patchset describes the procedure to Live migrate a VM with
> Virtio PMD's with the vhost_user sample application (vhost-switch)
> running on the host.

Applied, thanks

Patch

diff --git a/doc/guides/how_to/index.rst b/doc/guides/how_to/index.rst
new file mode 100644
index 0000000..4b97a32
--- /dev/null
+++ b/doc/guides/how_to/index.rst
@@ -0,0 +1,38 @@ 
+..  BSD LICENSE
+    Copyright(c) 2016 Intel Corporation. All rights reserved.
+    All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+How To Guides
+=================
+
+.. toctree::
+    :maxdepth: 3
+    :numbered:
+
+    lm_bond_virtio_sriov
diff --git a/doc/guides/how_to/lm_bond_virtio_sriov.rst b/doc/guides/how_to/lm_bond_virtio_sriov.rst
new file mode 100644
index 0000000..95d5523
--- /dev/null
+++ b/doc/guides/how_to/lm_bond_virtio_sriov.rst
@@ -0,0 +1,687 @@ 
+..  BSD LICENSE
+    Copyright(c) 2016 Intel Corporation. All rights reserved.
+    All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Intel Corporation nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+
+Live Migration of VM with SR-IOV VF:
+====================================
+
+Live Migration overview for VM with Virtio and VF PMD's:
+--------------------------------------------------------
+
+It is not possible to migrate a Virtual Machine which has an SR-IOV Virtual Function.
+To get around this problem the bonding PMD is used.
+
+A bonded device is created in the VM.
+The virtio and VF PMD's are added as slaves to the bonded device.
+The VF is set as the primary slave of the bonded device.
+
+A bridge must be set up on the Host connecting the tap device, which is the
+backend of the Virtio device and the Physical Function device.
+
+To test the Live Migration two servers with identical operating systems installed are used.
+KVM and Qemu 2.3 are also required on the servers.
+
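A quick pre-flight check can confirm the QEMU requirement before starting. This is a sketch, assuming ``qemu-system-x86_64`` is on the PATH and GNU ``sort -V`` is available; the comparison helper is exercised against fixed strings so it can run anywhere:

```shell
#!/bin/sh
# Sketch: check that the installed QEMU is at least version 2.3.

version_ge() {
    # true if $1 >= $2 when compared as version strings (GNU sort -V)
    [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n 1)" = "$2" ]
}

version_ge "2.5.0" "2.3" && echo "ok: 2.5.0 >= 2.3"
version_ge "2.1.2" "2.3" || echo "too old: 2.1.2 < 2.3"

# On a real host, feed it the actual version, e.g.:
# qemu-system-x86_64 --version
```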
+The servers have Niantic and/or Fortville NICs installed.
+The NICs on both servers are connected to a switch,
+which is also connected to the traffic generator.
+
+The switch is configured to broadcast traffic on all the NIC ports.
+
+Live Migration with SR-IOV VF test setup:
+-----------------------------------------
+
+
+Live Migration steps for VM with Virtio and VF PMD's:
+-----------------------------------------------------
+
+The host is running the Kernel Physical Function driver (ixgbe or i40e).
+
+The IP address of host_server_1 is 10.237.212.46
+
+The IP address of host_server_2 is 10.237.212.131
+
+The sample scripts mentioned in the steps below can be found in the host_scripts
+and vm_scripts sections.
+
+On host_server_1: Terminal 1
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+    cd /root/dpdk/host_scripts
+    ./setup_vf_on_212_46.sh
+
+For Fortville NIC
+
+.. code-block:: console
+
+    ./vm_virtio_vf_i40e_212_46.sh
+
+For Niantic NIC
+
+.. code-block:: console
+
+    ./vm_virtio_vf_one_212_46.sh
+
+On host_server_1: Terminal 2
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+    cd /root/dpdk/host_scripts
+    ./setup_bridge_on_212_46.sh
+    ./connect_to_qemu_mon_on_host.sh
+    (qemu)
+
+On host_server_1: Terminal 1
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+**In VM on host_server_1:**
+
+.. code-block:: console
+
+    cd /root/dpdk/vm_scripts
+    ./setup_dpdk_in_vm.sh
+    ./run_testpmd_bonding_in_vm.sh
+
+    testpmd> show port info all
+
+The following command only works with kernel PF for Niantic
+
+.. code-block:: console
+
+    testpmd> mac_addr add port 1 vf 0 AA:BB:CC:DD:EE:FF
+
+create bonded device (mode) (socket)
+
+Mode 1 is active backup.
+
+Virtio is port 0.
+
+VF is port 1.
+
+.. code-block:: console
+
+    testpmd> create bonded device 1 0
+    Created new bonded device eth_bond_testpmd_0 on (port 2).
+    testpmd> add bonding slave 0 2
+    testpmd> add bonding slave 1 2
+    testpmd> show bonding config 2
+
+set bonding primary (slave id) (port id)
+
+set primary to 1 before starting bonding port
+
+.. code-block:: console
+
+    testpmd> set bonding primary 1 2
+    testpmd> show bonding config 2
+    testpmd> port start 2
+    Port 2: 02:09:C0:68:99:A5
+    Checking link statuses...
+    Port 0 Link Up - speed 10000 Mbps - full-duplex
+    Port 1 Link Up - speed 10000 Mbps - full-duplex
+    Port 2 Link Up - speed 10000 Mbps - full-duplex
+
+    testpmd> show bonding config 2
+
+primary is port 1, 2 active slaves
+
+use port 2 only for forwarding
+
+.. code-block:: console
+
+    testpmd> set portlist 2
+    testpmd> show config fwd
+    testpmd> set fwd mac
+    testpmd> start
+    testpmd> show bonding config 2
+
+primary is 1, 2 active slaves
+
+.. code-block:: console
+
+    testpmd> show port stats all
+
+VF traffic seen at P1 and P2
+
+.. code-block:: console
+
+    testpmd> clear port stats all
+    testpmd> remove bonding slave 1 2
+    testpmd> show bonding config 2
+
+primary is 0, active slaves 1
+
+.. code-block:: console
+
+    testpmd> clear port stats all
+    testpmd> show port stats all
+
+No VF traffic is seen at P0 and P2; the VF MAC address is still present.
+
+.. code-block:: console
+
+    testpmd> port stop 1
+    testpmd> port close 1
+
+Port close should remove the VF MAC address; it does not remove the perm_addr.
+
+The following command only works with kernel PF for Niantic.
+
+.. code-block:: console
+
+    testpmd> mac_addr remove 1 AA:BB:CC:DD:EE:FF
+    testpmd> port detach 1
+    Port '0000:00:04.0' is detached. Now total ports is 2
+    testpmd> show port stats all
+
+no VF traffic seen at P0 and P2
+
+On host_server_1: Terminal 2
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+    (qemu) device_del vf1
+
+
+On host_server_1: Terminal 1
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+**In VM on host_server_1:**
+
+.. code-block:: console
+
+    testpmd> show bonding config 2
+
+primary is 0, active slaves 1
+
+.. code-block:: console
+
+    testpmd> show port info all
+    testpmd> show port stats all
+
+On host_server_2: Terminal 1
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+    cd /root/dpdk/host_scripts
+    ./setup_vf_on_212_131.sh
+    ./vm_virtio_one_migrate.sh
+
+On host_server_2: Terminal 2
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+   ./setup_bridge_on_212_131.sh
+   ./connect_to_qemu_mon_on_host.sh
+   (qemu) info status
+   VM status: paused (inmigrate)
+   (qemu)
+
+
+On host_server_1: Terminal 2
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Check that the switch is up before migrating.
+
+.. code-block:: console
+
+    (qemu) migrate tcp:10.237.212.131:5555
+    (qemu) info status
+    VM status: paused (postmigrate)
+
+    /* for Niantic ixgbe PF */
+    (qemu) info migrate
+    capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off
+    Migration status: completed
+    total time: 11834 milliseconds
+    downtime: 18 milliseconds
+    setup: 3 milliseconds
+    transferred ram: 389137 kbytes
+    throughput: 269.49 mbps
+    remaining ram: 0 kbytes
+    total ram: 1590088 kbytes
+    duplicate: 301620 pages
+    skipped: 0 pages
+    normal: 96433 pages
+    normal bytes: 385732 kbytes
+    dirty sync count: 2
+    (qemu) quit
+
+    /* for Fortville i40e PF  */
+    (qemu) info migrate
+    capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off
+    Migration status: completed
+    total time: 11619 milliseconds
+    downtime: 5 milliseconds
+    setup: 7 milliseconds
+    transferred ram: 379699 kbytes
+    throughput: 267.82 mbps
+    remaining ram: 0 kbytes
+    total ram: 1590088 kbytes
+    duplicate: 303985 pages
+    skipped: 0 pages
+    normal: 94073 pages
+    normal bytes: 376292 kbytes
+    dirty sync count: 2
+    (qemu) quit
+
+On host_server_2: Terminal 1
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+**In VM on host_server_2:**
+
+    Hit the Enter key. This brings the user to the testpmd prompt.
+
+.. code-block:: console
+
+    testpmd>
+
+On host_server_2: Terminal 2
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: console
+
+    (qemu) info status
+    VM status: running
+
+for Niantic NIC
+
+.. code-block:: console
+
+    (qemu) device_add pci-assign,host=06:10.0,id=vf1
+
+for Fortville NIC
+
+.. code-block:: console
+
+    (qemu) device_add pci-assign,host=03:02.0,id=vf1
+
+On host_server_2: Terminal 1
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+**In VM on host_server_2:**
+
+.. code-block:: console
+
+    testpmd> show port info all
+    testpmd> show port stats all
+    testpmd> show bonding config 2
+    testpmd> port attach 0000:00:04.0
+    Port 1 is attached.
+    Now total ports is 3
+    Done
+
+    testpmd> port start 1
+
+The mac_addr command only works with the Kernel PF for Niantic.
+
+.. code-block:: console
+
+    testpmd> mac_addr add port 1 vf 0 AA:BB:CC:DD:EE:FF
+    testpmd> show port stats all
+    testpmd> show config fwd
+    testpmd> show bonding config 2
+    testpmd> add bonding slave 1 2
+    testpmd> set bonding primary 1 2
+    testpmd> show bonding config 2
+    testpmd> show port stats all
+
+VF traffic seen at P1 (VF) and P2 (Bonded device).
+
+.. code-block:: console
+
+    testpmd> remove bonding slave 0 2
+    testpmd> show bonding config 2
+    testpmd> port stop 0
+    testpmd> port close 0
+    testpmd> port detach 0
+    Port '0000:00:03.0' is detached. Now total ports is 2
+
+    testpmd> show port info all
+    testpmd> show config fwd
+    testpmd> show port stats all
+
+VF traffic seen at P1 (VF) and P2 (Bonded device).
+
+Sample host scripts
+-------------------
+
+setup_vf_on_212_46.sh
+^^^^^^^^^^^^^^^^^^^^^
+
+Set up Virtual Functions on host_server_1
+
+.. code-block:: sh
+
+  #!/bin/sh
+  # This script is run on the host 10.237.212.46 to setup the VF
+
+  # set up Niantic VF
+  cat /sys/bus/pci/devices/0000\:09\:00.0/sriov_numvfs
+  echo 1 > /sys/bus/pci/devices/0000\:09\:00.0/sriov_numvfs
+  cat /sys/bus/pci/devices/0000\:09\:00.0/sriov_numvfs
+  rmmod ixgbevf
+
+  # set up Fortville VF
+  cat /sys/bus/pci/devices/0000\:02\:00.0/sriov_numvfs
+  echo 1 > /sys/bus/pci/devices/0000\:02\:00.0/sriov_numvfs
+  cat /sys/bus/pci/devices/0000\:02\:00.0/sriov_numvfs
+  rmmod i40evf
+
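After writing to ``sriov_numvfs``, the kernel adds one ``virtfn<N>`` symlink under the PF's sysfs node per VF, which gives a simple sanity check. This is a sketch demonstrated against a mock sysfs tree so it can run without hardware; on a real host, point ``PF_DIR`` at the actual PF node (e.g. ``/sys/bus/pci/devices/0000:09:00.0``):

```shell
#!/bin/sh
# Sketch: confirm that setting sriov_numvfs actually created VF entries.

count_vfs() {
    # count the virtfn* symlinks under the given PF directory
    ls -d "$1"/virtfn* 2>/dev/null | wc -l
}

PF_DIR="$(mktemp -d)/0000:09:00.0"   # mock PF node (address from the script above)
mkdir -p "$PF_DIR"
ln -s /dev/null "$PF_DIR/virtfn0"    # stand-in for the symlink the kernel creates

count_vfs "$PF_DIR"
```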
+vm_virtio_vf_one_212_46.sh
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Setup Virtual Machine on host_server_1
+
+.. code-block:: sh
+
+  #!/bin/sh
+
+  # Path to KVM tool
+  KVM_PATH="/usr/bin/qemu-system-x86_64"
+
+  # Guest Disk image
+  DISK_IMG="/home/username/disk_image/virt1_sml.disk"
+
+  # Number of guest cpus
+  VCPUS_NR="4"
+
+  # Memory
+  MEM=1536
+
+  VIRTIO_OPTIONS="csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off"
+
+  taskset -c 1-5 $KVM_PATH \
+    -enable-kvm \
+    -m $MEM \
+    -smp $VCPUS_NR \
+    -cpu host \
+    -name VM1 \
+    -no-reboot \
+    -net none \
+    -vnc none -nographic \
+    -hda $DISK_IMG \
+    -netdev type=tap,id=net1,script=no,downscript=no,ifname=tap1 \
+    -device virtio-net-pci,netdev=net1,mac=CC:BB:BB:BB:BB:BB \
+    -device pci-assign,host=09:10.0,id=vf1 \
+    -monitor telnet::3333,server,nowait
+
+setup_bridge_on_212_46.sh
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Setup bridge on host_server_1
+
+.. code-block:: sh
+
+  #!/bin/sh
+  # This script is run on the host 10.237.212.46 to setup the bridge
+  # for the Tap device and the PF device.
+  # This enables traffic to go from the PF to the Tap to the Virtio PMD in the VM.
+
+  ifconfig ens3f0 down
+  ifconfig tap1 down
+  ifconfig ens6f0 down
+  ifconfig virbr0 down
+
+  brctl show virbr0
+  brctl addif virbr0 ens3f0
+  brctl addif virbr0 ens6f0
+  brctl addif virbr0 tap1
+  brctl show virbr0
+
+  ifconfig ens3f0 up
+  ifconfig tap1 up
+  ifconfig ens6f0 up
+  ifconfig virbr0 up
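
On hosts without bridge-utils, the same bridge membership can be expressed with iproute2. This is a sketch, assuming the same interface names as the script above (they will differ on other hosts):

```shell
#!/bin/sh
# Sketch: iproute2 equivalent of the brctl/ifconfig bridge setup above.

ip link set dev ens3f0 down
ip link set dev tap1 down
ip link set dev ens6f0 down

ip link set dev ens3f0 master virbr0   # same effect as brctl addif
ip link set dev ens6f0 master virbr0
ip link set dev tap1 master virbr0

ip link set dev ens3f0 up
ip link set dev tap1 up
ip link set dev ens6f0 up
ip link set dev virbr0 up
```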
+
+connect_to_qemu_mon_on_host.sh
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: sh
+
+  #!/bin/sh
+  # This script is run on both hosts when the VM is up,
+  # to connect to the Qemu Monitor.
+
+  telnet 0 3333
+
+setup_vf_on_212_131.sh
+^^^^^^^^^^^^^^^^^^^^^^
+
+Set up Virtual Functions on host_server_2
+
+.. code-block:: sh
+
+  #!/bin/sh
+  # This script is run on the host 10.237.212.131 to setup the VF
+
+  # set up Niantic VF
+  cat /sys/bus/pci/devices/0000\:06\:00.0/sriov_numvfs
+  echo 1 > /sys/bus/pci/devices/0000\:06\:00.0/sriov_numvfs
+  cat /sys/bus/pci/devices/0000\:06\:00.0/sriov_numvfs
+  rmmod ixgbevf
+
+  # set up Fortville VF
+  cat /sys/bus/pci/devices/0000\:03\:00.0/sriov_numvfs
+  echo 1 > /sys/bus/pci/devices/0000\:03\:00.0/sriov_numvfs
+  cat /sys/bus/pci/devices/0000\:03\:00.0/sriov_numvfs
+  rmmod i40evf
+
+vm_virtio_one_migrate.sh
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+Setup Virtual Machine on host_server_2
+
+.. code-block:: sh
+
+  #!/bin/sh
+  # Start the VM on host_server_2 with the same parameters except without the VF
+  # parameters, as the VM on host_server_1, in migration-listen mode
+  # (-incoming tcp:0:5555)
+
+  # Path to KVM tool
+  KVM_PATH="/usr/bin/qemu-system-x86_64"
+
+  # Guest Disk image
+  DISK_IMG="/home/username/disk_image/virt1_sml.disk"
+
+  # Number of guest cpus
+  VCPUS_NR="4"
+
+  # Memory
+  MEM=1536
+
+  VIRTIO_OPTIONS="csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off"
+
+  taskset -c 1-5 $KVM_PATH \
+    -enable-kvm \
+    -m $MEM \
+    -smp $VCPUS_NR \
+    -cpu host \
+    -name VM1 \
+    -no-reboot \
+    -net none \
+    -vnc none -nographic \
+    -hda $DISK_IMG \
+    -netdev type=tap,id=net1,script=no,downscript=no,ifname=tap1 \
+    -device virtio-net-pci,netdev=net1,mac=CC:BB:BB:BB:BB:BB \
+    -incoming tcp:0:5555 \
+    -monitor telnet::3333,server,nowait
+
+setup_bridge_on_212_131.sh
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Setup bridge on host_server_2
+
+.. code-block:: sh
+
+   #!/bin/sh
+   # This script is run on the host to setup the bridge
+   # for the Tap device and the PF device.
+   # This enables traffic to go from the PF to the Tap to the Virtio PMD in the VM.
+
+   ifconfig ens4f0 down
+   ifconfig tap1 down
+   ifconfig ens5f0 down
+   ifconfig virbr0 down
+
+   brctl show virbr0
+   brctl addif virbr0 ens4f0
+   brctl addif virbr0 ens5f0
+   brctl addif virbr0 tap1
+   brctl show virbr0
+
+   ifconfig ens4f0 up
+   ifconfig tap1 up
+   ifconfig ens5f0 up
+   ifconfig virbr0 up
+
+Sample VM scripts
+-----------------
+
+Set up DPDK in the Virtual Machine
+
+setup_dpdk_in_vm.sh
+^^^^^^^^^^^^^^^^^^^
+
+.. code-block:: sh
+
+  #!/bin/sh
+  # this script matches the vm_virtio_vf_one script
+  # virtio port is 03
+  # vf port is 04
+
+  cat  /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+  echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+  cat  /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
+
+  ifconfig -a
+  /root/dpdk/tools/dpdk_nic_bind.py --status
+
+  rmmod virtio-pci ixgbevf
+
+  modprobe uio
+  insmod /root/dpdk/x86_64-default-linuxapp-gcc/kmod/igb_uio.ko
+
+  /root/dpdk/tools/dpdk_nic_bind.py -b igb_uio 0000:00:03.0
+  /root/dpdk/tools/dpdk_nic_bind.py -b igb_uio 0000:00:04.0
+
+  /root/dpdk/tools/dpdk_nic_bind.py --status
+
+
+run_testpmd_bonding_in_vm.sh
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Run testpmd in the Virtual Machine.
+
+.. code-block:: sh
+
+  #!/bin/sh
+  # Run testpmd in the VM
+
+  # The test system has 8 cpus (0-7), use cpus 2-7 for VM
+  # Use taskset -pc <core number> <thread_id>
+
+  # use for bonding of virtio and vf tests in VM
+
+  /root/dpdk/x86_64-default-linuxapp-gcc/app/testpmd \
+  -c f -n 4 --socket-mem 350 -- -i --port-topology=chained
+
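The ``-c f`` coremask is simply a hex bit mask of the cores testpmd may use. As a sketch (plain POSIX shell arithmetic, not part of the original scripts), such a mask can be derived from a count of consecutive cores starting at core 0:

```shell
#!/bin/sh
# Sketch: build the hex coremask that testpmd's -c option expects,
# for N consecutive cores starting at core 0 (4 cores -> f, as used above).

coremask() {
    # mask with the lowest $1 bits set, printed as hex without the 0x prefix
    printf '%x\n' $(( (1 << $1) - 1 ))
}

coremask 4    # -> f
coremask 6    # -> 3f
```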
+Sample switch configuration
+---------------------------
+
+The Intel switch is used to connect the traffic generator to the
+NICs on host_server_1 and host_server_2.
+
+In order to run the switch configuration two console windows are required.
+
+Log in as root in both windows.
+
+TestPointShared, run_switch.sh and load /root/switch_config must be executed
+in the sequence below.
+
+On Switch: Terminal 1
+^^^^^^^^^^^^^^^^^^^^^
+
+run TestPointShared
+
+.. code-block:: console
+
+  /usr/bin/TestPointShared
+
+On Switch: Terminal 2
+^^^^^^^^^^^^^^^^^^^^^
+
+execute run_switch.sh
+
+.. code-block:: console
+
+  /root/run_switch.sh
+
+On Switch: Terminal 1
+^^^^^^^^^^^^^^^^^^^^^
+
+load switch configuration
+
+.. code-block:: console
+
+  load /root/switch_config
+
+Sample switch configuration script
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+   /root/switch_config
+
+.. code-block:: sh
+
+  # TestPoint History
+  show port 1,5,9,13,17,21,25
+  set port 1,5,9,13,17,21,25 up
+  show port 1,5,9,13,17,21,25
+  del acl 1
+  create acl 1
+  create acl-port-set
+  create acl-port-set
+  add port port-set 1 0
+  add port port-set 5,9,13,17,21,25 1
+  create acl-rule 1 1
+  add acl-rule condition 1 1 port-set 1
+  add acl-rule action 1 1 redirect 1
+  apply acl
+  create vlan 1000
+  add vlan port 1000 1,5,9,13,17,21,25
+  set vlan tagging 1000 1,5,9,13,17,21,25 tag
+  set switch config flood_ucast fwd
+  show port stats all 1,5,9,13,17,21,25
diff --git a/doc/guides/index.rst b/doc/guides/index.rst
index 7aef7a3..8a0a359 100644
--- a/doc/guides/index.rst
+++ b/doc/guides/index.rst
@@ -1,5 +1,5 @@ 
 ..  BSD LICENSE
-    Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+    Copyright(c) 2010-2016 Intel Corporation. All rights reserved.
     All rights reserved.
 
     Redistribution and use in source and binary forms, with or without
@@ -45,3 +45,4 @@  DPDK documentation
    faq/index
    rel_notes/index
    contributing/index
+   how_to/index