[dpdk-dev,v3,16/16] doc: adds information related to the AVP PMD

Message ID 1488414008-162839-17-git-send-email-allain.legacy@windriver.com (mailing list archive)
State Superseded, archived
Delegated to: Ferruh Yigit
Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK

Commit Message

Allain Legacy March 2, 2017, 12:20 a.m. UTC
  Updates the documentation and feature lists for the AVP PMD device.

Signed-off-by: Allain Legacy <allain.legacy@windriver.com>
Signed-off-by: Matt Peters <matt.peters@windriver.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
---
 MAINTAINERS                            |   1 +
 doc/guides/nics/avp.rst                | 112 +++++++++++++++++++++++++++++++++
 doc/guides/nics/features/avp.ini       |  17 +++++
 doc/guides/nics/index.rst              |   1 +
 doc/guides/rel_notes/release_17_05.rst |   5 ++
 5 files changed, 136 insertions(+)
 create mode 100644 doc/guides/nics/avp.rst
 create mode 100644 doc/guides/nics/features/avp.ini
  

Comments

Vincent Jardin March 3, 2017, 4:21 p.m. UTC | #1
On 02/03/2017 at 01:20, Allain Legacy wrote:
> +Since the initial implementation of AVP devices, vhost-user has become
> +part of the qemu offering with a significant performance increase over
> +the original virtio implementation.  However, vhost-user still does
> +not achieve the level of performance that the AVP device can provide
> +to our customers for DPDK based VM instances.

Allain,

please, can you be more explicit: why is virtio not fast enough?

Moreover, why should we get another PMD for Qemu/kvm which is not 
virtio? There is no argument in your doc about it.
NEC, before vhost-user, made a memnic proposal too because 
virtio/vhost-user was not available.
Now, we all agree that vhost-user is the right way to support VMs; it 
avoids duplication of maintenance effort.

Please add some arguments that explain why virtio should not be used, 
and why alternatives like memnic or AVP should be.

Regarding,
+    nova boot --flavor small --image my-image \
+       --nic net-id=${NETWORK1_UUID} \
+       --nic net-id=${NETWORK2_UUID},vif-model=avp \
+       --nic net-id=${NETWORK3_UUID},vif-model=avp \
+       --security-group default my-instance1

I do not see how to get this working with vanilla nova. I think you 
should rather show it with qemu or virsh.

Also, there is no such AVP netdevice in the upstream Linux kernel. Before 
adding any AVP support here, it should be added to the upstream kernel 
first so we can be sure that the APIs will be solid and won't need to be 
updated because of kernel constraints.

Thank you,
   Vincent
  
Allain Legacy March 13, 2017, 7:17 p.m. UTC | #2
Vincent,
Perhaps you can help me understand why the performance or functionality of AVP vs. virtio is relevant to the decision of accepting this driver. There are many drivers in the DPDK, most of which provide the same functionality at comparable performance rates. AVP is just another such driver. The fact that it is virtual rather than physical should not, in my opinion, influence the decision to accept it. On the other hand, code quality/complexity or the lack of a maintainer are reasonable grounds for rejection. If our driver is accepted, we are committed to maintaining it and to testing changes required by any driver framework changes which may impact all drivers.

Along the same lines, I do not understand why upstreaming AVP into the Linux kernel or qemu/kvm should be a prerequisite for inclusion in the DPDK. Continuing my analogy from above, the AVP device is a commercial offering tied to the Wind River Systems Titanium product line. It enables virtualized DPDK applications and increases DPDK adoption. Just as a driver from company XYZ is tied to a commercial NIC that must be purchased by a customer, our AVP device is available to operators that choose to leverage our Titanium product to implement their cloud solutions. It is not our intention to upstream the qemu/kvm or host vswitch portion of the AVP device. Our qemu/kvm extensions are GPL, so they are available to our customers if they desire to rebuild qemu/kvm with their own proprietary extensions.

Our AVP device was implemented in 2013 in response to the challenge of lower than required performance of qemu/virtio for both user space and DPDK applications in the VM. Rather than making complex changes to qemu/virtio and continuously having to forward-port them as we upgraded to newer versions of qemu, we decided to decouple ourselves from that code base. We developed the AVP device as an evolution of KNI+ivshmem, enhancing both with features that meet the needs of our customers: better performance, multi-queue support, live-migration support, and hot-plug support. As I said in my earlier response, since 2013 qemu/virtio has seen improved performance with the introduction of vhost-user. However, vhost-user has still not reached the performance levels of our AVP PMD.

I acknowledge that the AVP driver could exist as an out-of-tree driver loaded as a shared library at runtime. In fact, two years ago we released our driver source on github for this very reason. We provide instructions and support for building the AVP PMD as a shared library; a rough sketch of that workflow is included at the end of this message. Some customers have adopted this method, while many insist on an in-tree driver for several reasons.

Most importantly, they want to eliminate the burden of building and supporting an additional package in their product. An in-tree driver would eliminate the need for a separate build/packaging process. They also want the option of developing directly on the bleeding edge of DPDK rather than waiting for us to update our out-of-tree driver against stable releases of the DPDK. In this regard, an in-tree driver would allow our customers to work directly on the latest DPDK.

An in-tree driver provides obvious benefits to our customers, but keep in mind that it also benefits the DPDK. If a customer must develop on a stable release because they must use an out-of-tree driver, then they are less likely to contribute fixes, enhancements and testing upstream. I know this first hand because I work with software from different sources on a daily basis, and it is a significant burden to have to reproduce and test fixes on master when you build and ship on an older stable release. Accepting this driver would increase the pool of developers available for contributions and reviews.

Again, we are committed to contributing to the DPDK community by supporting our driver and upstreaming other fixes and enhancements we develop along the way. We feel that if the DPDK is limited to a single virtual driver of any type, then choice and innovation are also limited. In the end, if more variety and innovation increase DPDK adoption, then that is a win for DPDK and everyone involved in the project.
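
For reference, the out-of-tree workflow mentioned above looks roughly like the following. This is only a minimal sketch: it assumes a shared-library DPDK build (legacy make system) and that the AVP PMD builds to librte_pmd_avp.so; the exact paths and library name depend on the instructions published with our github source.

    # build DPDK as shared libraries (sketch, legacy make build system)
    make config T=x86_64-native-linuxapp-gcc
    sed -i 's/CONFIG_RTE_BUILD_SHARED_LIB=n/CONFIG_RTE_BUILD_SHARED_LIB=y/' \
        build/.config
    make

    # build the out-of-tree AVP PMD against that DPDK tree, then load the
    # resulting shared object at runtime with the EAL -d option, e.g.:
    ./build/app/testpmd -d /path/to/librte_pmd_avp.so -c 0x3 -n 4 -- -i

With an in-tree driver none of this extra build and packaging work is needed.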

Regards,
Allain

Allain Legacy, Software Developer
direct 613.270.2279  fax 613.492.7870 skype allain.legacy
 



  

Patch

diff --git a/MAINTAINERS b/MAINTAINERS
index fef23a0..4a14945 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -427,6 +427,7 @@  Wind River AVP PMD
 M: Allain Legacy <allain.legacy@windriver.com>
 M: Matt Peters <matt.peters@windriver.com>
 F: drivers/net/avp
+F: doc/guides/nics/avp.rst
 
 
 Crypto Drivers
diff --git a/doc/guides/nics/avp.rst b/doc/guides/nics/avp.rst
new file mode 100644
index 0000000..af6d04d
--- /dev/null
+++ b/doc/guides/nics/avp.rst
@@ -0,0 +1,112 @@ 
+..  BSD LICENSE
+    Copyright(c) 2017 Wind River Systems, Inc.
+    All rights reserved.
+
+    Redistribution and use in source and binary forms, with or without
+    modification, are permitted provided that the following conditions
+    are met:
+
+    * Redistributions of source code must retain the above copyright
+    notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+    notice, this list of conditions and the following disclaimer in
+    the documentation and/or other materials provided with the
+    distribution.
+    * Neither the name of Wind River Systems nor the names of its
+    contributors may be used to endorse or promote products derived
+    from this software without specific prior written permission.
+
+    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+AVP Poll Mode Driver
+====================
+
+The Accelerated Virtual Port (AVP) device is a shared memory based device
+available on the `virtualization platforms <http://www.windriver.com/products/titanium-cloud/>`_
+from Wind River Systems.  It is based on an earlier implementation of the DPDK
+KNI device and made available to VM instances via a mechanism based on an early
+implementation of qemu-kvm ivshmem.
+
+It enables optimized packet throughput without requiring any packet processing
+in qemu. This provides our customers with a significant performance increase
+for DPDK applications in the VM.  Because our AVP implementation supports VM
+live-migration it is viewed as a better alternative to PCI passthrough or PCI
+SR-IOV, since neither of those supports VM live-migration without manual
+intervention or significant performance penalties.
+
+Since the initial implementation of AVP devices, vhost-user has become
+part of the qemu offering with a significant performance increase over
+the original virtio implementation.  However, vhost-user still does
+not achieve the level of performance that the AVP device can provide
+to our customers for DPDK based VM instances.
+
+The driver binds to PCI devices that are exported by the hypervisor DPDK
+application via the ivshmem-like mechanism.  The device structure and
+configuration options are defined in rte_avp_common.h and
+rte_avp_fifo.h.  These two header files are made available as part of the PMD
+implementation in order to share the device definitions between the guest
+implementation (i.e., the PMD) and the host implementation (i.e., the
+hypervisor DPDK vswitch application).
+
+
+Features and Limitations of the AVP PMD
+---------------------------------------
+
+The AVP PMD provides the following functionality.
+
+*   Receive and transmit of both simple and chained mbuf packets
+
+*   Chained mbufs may include up to 5 chained segments
+
+*   Up to 8 receive and transmit queues per device
+
+*   Only a single MAC address is supported
+
+*   The MAC address cannot be modified
+
+*   The maximum receive packet length is 9238 bytes
+
+*   VLAN header stripping and inserting
+
+*   Promiscuous mode
+
+*   VM live-migration
+
+*   PCI hotplug insertion and removal
+
+
+Prerequisites
+-------------
+
+The following prerequisites apply:
+
+*   A virtual machine running in a Wind River Systems virtualization
+    environment and configured with at least one neutron port defined with a
+    vif-model set to "avp".
+
+
+Launching a VM with an AVP type network attachment
+--------------------------------------------------
+
+The following example will launch a VM with three network attachments.  The
+first attachment will have a default vif-model of "virtio".  The next two
+network attachments will have a vif-model of "avp" and may be used with a DPDK
+application which is built to include the AVP PMD.
+
+.. code-block:: console
+
+    nova boot --flavor small --image my-image \
+       --nic net-id=${NETWORK1_UUID} \
+       --nic net-id=${NETWORK2_UUID},vif-model=avp \
+       --nic net-id=${NETWORK3_UUID},vif-model=avp \
+       --security-group default my-instance1
diff --git a/doc/guides/nics/features/avp.ini b/doc/guides/nics/features/avp.ini
new file mode 100644
index 0000000..64bf42e
--- /dev/null
+++ b/doc/guides/nics/features/avp.ini
@@ -0,0 +1,17 @@ 
+;
+; Supported features of the 'AVP' network poll mode driver.
+;
+; Refer to default.ini for the full list of available PMD features.
+;
+[Features]
+Link status          = Y
+Queue start/stop     = Y
+Jumbo frame          = Y
+Scattered Rx         = Y
+Promiscuous mode     = Y
+Unicast MAC filter   = Y
+VLAN offload         = Y
+Basic stats          = Y
+Stats per queue      = Y
+Linux UIO            = Y
+x86-64               = Y
diff --git a/doc/guides/nics/index.rst b/doc/guides/nics/index.rst
index 87f9334..0ddcea5 100644
--- a/doc/guides/nics/index.rst
+++ b/doc/guides/nics/index.rst
@@ -36,6 +36,7 @@  Network Interface Controller Drivers
     :numbered:
 
     overview
+    avp
     bnx2x
     bnxt
     cxgbe
diff --git a/doc/guides/rel_notes/release_17_05.rst b/doc/guides/rel_notes/release_17_05.rst
index e25ea9f..3accbac 100644
--- a/doc/guides/rel_notes/release_17_05.rst
+++ b/doc/guides/rel_notes/release_17_05.rst
@@ -41,6 +41,11 @@  New Features
      Also, make sure to start the actual text at the margin.
      =========================================================
 
+* **Added support for the Wind River Systems AVP PMD.**
+
+  Added a new networking driver for the AVP device type. These devices are
+  specific to the Wind River Systems virtualization platforms.
+
 
 Resolved Issues
 ---------------
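
As a usage note to go with the "Launching a VM" example in avp.rst above: inside the guest, the AVP devices appear as PCI devices driven through Linux UIO (see the "Linux UIO" entry in avp.ini). A hypothetical guest-side sequence might look like the following; the PCI address, module path, and tool location are placeholders and depend on the guest image and DPDK build.

    # load a UIO kernel module and bind the AVP PCI device to it
    modprobe uio
    insmod ./build/kmod/igb_uio.ko
    ./usertools/dpdk-devbind.py --status
    ./usertools/dpdk-devbind.py --bind=igb_uio 0000:00:05.0

    # run a DPDK application built with the AVP PMD, e.g. testpmd
    ./build/app/testpmd -c 0x3 -n 4 -- -i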