List comments

List the comments on a cover letter. The response is a JSON array in date order; each entry carries the comment's metadata, its submitter, the full message body, and the original mail headers.

GET /api/covers/45067/comments/
HTTP 200 OK
Allow: GET, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

[
    {
        "id": 85898,
        "web_url": "http://patches.dpdk.org/comment/85898/",
        "msgid": "<20180921123222.GA25292@debian>",
        "date": "2018-09-21T12:32:22",
        "subject": "Re: [dpdk-dev] [PATCH v6 00/11] implement packed virtqueues",
        "submitter": {
            "id": 617,
            "url": "http://patches.dpdk.org/api/people/617/",
            "name": "Tiwei Bie",
            "email": "tiwei.bie@intel.com"
        },
        "content": "On Fri, Sep 21, 2018 at 12:32:57PM +0200, Jens Freimann wrote:\n> This is a basic implementation of packed virtqueues as specified in the\n> Virtio 1.1 draft. A compiled version of the current draft is available\n> at https://github.com/oasis-tcs/virtio-docs.git (or as .pdf at\n> https://github.com/oasis-tcs/virtio-docs/blob/master/virtio-v1.1-packed-wd10.pdf\n> \n> A packed virtqueue is different from a split virtqueue in that it\n> consists of only a single descriptor ring that replaces available and\n> used ring, index and descriptor buffer.\n> \n> Each descriptor is readable and writable and has a flags field. These flags\n> will mark if a descriptor is available or used.  To detect new available descriptors\n> even after the ring has wrapped, device and driver each have a\n> single-bit wrap counter that is flipped from 0 to 1 and vice versa every time\n> the last descriptor in the ring is used/made available.\n> \n> The idea behind this is to 1. improve performance by avoiding cache misses\n> and 2. be easier for devices to implement.\n> \n> Regarding performance: with these patches I get 21.13 Mpps on my system\n> as compared to 18.8 Mpps with the virtio 1.0 code. Packet size was 64\n\nDid you enable multiple-queue and use multiple cores on\nvhost side? If not, I guess the above performance gain\nis the gain in vhost side instead of virtio side.\n\nIf you use more cores on vhost side or virtio side, will\nyou see any performance changes?\n\nDid you do any performance test with the kernel vhost-net\nbackend (with zero-copy enabled and disabled)? I think we\nalso need some performance data for these two cases. And\nit can help us to make sure that it works with the kernel\nbackends.\n\nAnd for the \"virtio-PMD + vhost-PMD\" test cases, I think\nwe need below performance data:\n\n#1. The maximum 1 core performance of virtio PMD when using split ring.\n#2. The maximum 1 core performance of virtio PMD when using packed ring.\n#3. 
The maximum 1 core performance of vhost PMD when using split ring.\n#4. The maximum 1 core performance of vhost PMD when using packed ring.\n\nAnd then we can have a clear understanding of the\nperformance gain in DPDK with packed ring.\n\nAnd FYI, the maximum 1 core performance of virtio PMD\ncan be got in below steps:\n\n1. Launch vhost-PMD with multiple queues, and use multiple\n   CPU cores for forwarding.\n2. Launch virtio-PMD with multiple queues and use 1 CPU\n   core for forwarding.\n3. Repeat above two steps with adding more CPU cores\n   for forwarding in vhost-PMD side until we can't see\n   performance increase anymore.\n\nBesides, I just did a quick glance at the Tx implementation,\nit still assumes the descs will be written back in order\nby device. You can find more details from my comments on\nthat patch.\n\nThanks\n\n\n\n> bytes, 0.05% acceptable loss.  Test setup is described as in\n> http://dpdk.org/doc/guides/howto/pvp_reference_benchmark.html\n> \n> Packet generator:\n> MoonGen\n> Intel(R) Xeon(R) CPU E5-2665 0 @ 2.40GHz\n> Intel X710 NIC\n> RHEL 7.4\n> \n> Device under test:\n> Intel(R) Xeon(R) CPU E5-2667 v4 @ 3.20GHz\n> Intel X710 NIC\n> RHEL 7.4\n> \n> VM on DuT: RHEL7.4\n> \n> I plan to do more performance test with bigger frame sizes.\n> \n> changes from v5->v6:\n> * fix VIRTQUEUE_DUMP macro\n> * rework mergeable rx buffer path, support out of order (not sure if I\n>   need a .next field to support chains) \n> * move wmb in virtio_receive_pkts_packed() (Gavin)\n> * rename to virtio_init_split/_packed (Maxime)\n> * add support for ctrl virtqueues (Tiwei, thx Max for fixing)\n> * rework tx path to support update_packet_stats and\n>   virtqueue_xmit_offload, TODO: merge with split-ring code to\n>   avoid a lot of duplicate code\n> * remove unnecessary check for avoiding to call VIRTQUEUE_DUMP (Maxime)\n> \n> changes from v4->v5:\n> * fix VIRTQUEUE_DUMP macro\n> * fix wrap counter logic in transmit and receive functions  \n> \n> changes from 
v3->v4:\n> * added helpers to increment index and set available/used flags\n> * driver keeps track of number of descriptors used\n> * change logic in set_rxtx_funcs()\n> * add patch for ctrl virtqueue with support for packed virtqueues\n> * rename virtio-1.1.h to virtio-packed.h\n> * fix wrong sizeof() in \"vhost: vring address setup for packed queues\"\n> * fix coding style of function definition in \"net/virtio: add packed\n>   virtqueue helpers\"\n> * fix padding in vring_size()\n> * move patches to enable packed virtqueues end of series\n> * v4 has two open problems: I'm sending it out anyway for feedback/help:\n>  * when VIRTIO_NET_F_MRG_RXBUF enabled only 128 packets are send in\n>    guest, i.e. when ring is full for the first time. I suspect a bug in\n>    setting the avail/used flags\n> \n> changes from v2->v3:\n> * implement event suppression\n> * add code do dump packed virtqueues\n> * don't use assert in vhost code\n> * rename virtio-user parameter to packed-vq\n> * support rxvf flush\n> \n> changes from v1->v2:\n> * don't use VIRTQ_DESC_F_NEXT in used descriptors (Jason)\n> * no rte_panice() in guest triggerable code (Maxime)\n> * use unlikely when checking for vq (Maxime)\n> * rename everything from _1_1 to _packed  (Yuanhan)\n> * add two more patches to implement mergeable receive buffers\n> \n> *** BLURB HERE ***\n> \n> Jens Freimann (10):\n>   net/virtio: vring init for packed queues\n>   net/virtio: add packed virtqueue defines\n>   net/virtio: add packed virtqueue helpers\n>   net/virtio: flush packed receive virtqueues\n>   net/virtio: dump packed virtqueue data\n>   net/virtio: implement transmit path for packed queues\n>   net/virtio: implement receive path for packed queues\n>   net/virtio: add support for mergeable buffers with packed virtqueues\n>   net/virtio: add virtio send command packed queue support\n>   net/virtio: enable packed virtqueues by default\n> \n> Yuanhan Liu (1):\n>   net/virtio-user: add option to use packed queues\n> \n> 
 drivers/net/virtio/virtio_ethdev.c            | 135 ++++-\n>  drivers/net/virtio/virtio_ethdev.h            |   5 +\n>  drivers/net/virtio/virtio_pci.h               |   8 +\n>  drivers/net/virtio/virtio_ring.h              |  96 +++-\n>  drivers/net/virtio/virtio_rxtx.c              | 490 +++++++++++++++++-\n>  .../net/virtio/virtio_user/virtio_user_dev.c  |  10 +-\n>  .../net/virtio/virtio_user/virtio_user_dev.h  |   2 +-\n>  drivers/net/virtio/virtio_user_ethdev.c       |  14 +-\n>  drivers/net/virtio/virtqueue.c                |  21 +\n>  drivers/net/virtio/virtqueue.h                |  50 +-\n>  10 files changed, 796 insertions(+), 35 deletions(-)\n> \n> -- \n> 2.17.1\n>",
        "headers": {
            "Return-Path": "<dev-bounces@dpdk.org>",
            "References": "<20180921103308.16357-1-jfreimann@redhat.com>",
            "X-Mailman-Version": "2.1.15",
            "X-IronPort-AV": "E=Sophos;i=\"5.54,285,1534834800\"; d=\"scan'208\";a=\"92084888\"",
            "From": "Tiwei Bie <tiwei.bie@intel.com>",
            "User-Agent": "Mutt/1.10.1 (2018-07-13)",
            "List-Post": "<mailto:dev@dpdk.org>",
            "Content-Type": "text/plain; charset=utf-8",
            "Delivered-To": "patchwork@dpdk.org",
            "X-Original-To": "patchwork@dpdk.org",
            "Received": [
                "from [92.243.14.124] (localhost [127.0.0.1])\n\tby dpdk.org (Postfix) with ESMTP id 658B81B19;\n\tFri, 21 Sep 2018 14:33:36 +0200 (CEST)",
                "from mga12.intel.com (mga12.intel.com [192.55.52.136])\n\tby dpdk.org (Postfix) with ESMTP id 80A33A49\n\tfor <dev@dpdk.org>; Fri, 21 Sep 2018 14:33:34 +0200 (CEST)",
                "from fmsmga001.fm.intel.com ([10.253.24.23])\n\tby fmsmga106.fm.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;\n\t21 Sep 2018 05:33:33 -0700",
                "from btwcube1.sh.intel.com (HELO debian) ([10.67.104.151])\n\tby fmsmga001.fm.intel.com with ESMTP; 21 Sep 2018 05:33:28 -0700"
            ],
            "Subject": "Re: [dpdk-dev] [PATCH v6 00/11] implement packed virtqueues",
            "Sender": "\"dev\" <dev-bounces@dpdk.org>",
            "X-Amp-File-Uploaded": "False",
            "Message-ID": "<20180921123222.GA25292@debian>",
            "X-Amp-Original-Verdict": "FILE UNKNOWN",
            "X-BeenThere": "dev@dpdk.org",
            "Date": "Fri, 21 Sep 2018 20:32:22 +0800",
            "List-Archive": "<http://mails.dpdk.org/archives/dev/>",
            "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
            "X-ExtLoop1": "1",
            "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n\t<mailto:dev-request@dpdk.org?subject=subscribe>",
            "Cc": "dev@dpdk.org, maxime.coquelin@redhat.com, Gavin.Hu@arm.com,\n\tzhihong.wang@intel.com",
            "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
            "Precedence": "list",
            "In-Reply-To": "<20180921103308.16357-1-jfreimann@redhat.com>",
            "Errors-To": "dev-bounces@dpdk.org",
            "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n\t<mailto:dev-request@dpdk.org?subject=unsubscribe>",
            "MIME-Version": "1.0",
            "To": "Jens Freimann <jfreimann@redhat.com>",
            "X-Amp-Result": "UNKNOWN",
            "Content-Disposition": "inline"
        }
    },
    {
        "id": 85911,
        "web_url": "http://patches.dpdk.org/comment/85911/",
        "msgid": "<20180921140644.oiwl7gwekreflc7v@jenstp.localdomain>",
        "date": "2018-09-21T14:06:44",
        "subject": "Re: [dpdk-dev] [PATCH v6 00/11] implement packed virtqueues",
        "submitter": {
            "id": 745,
            "url": "http://patches.dpdk.org/api/people/745/",
            "name": "Jens Freimann",
            "email": "jfreimann@redhat.com"
        },
        "content": "On Fri, Sep 21, 2018 at 08:32:22PM +0800, Tiwei Bie wrote:\n>On Fri, Sep 21, 2018 at 12:32:57PM +0200, Jens Freimann wrote:\n>> This is a basic implementation of packed virtqueues as specified in the\n>> Virtio 1.1 draft. A compiled version of the current draft is available\n>> at https://github.com/oasis-tcs/virtio-docs.git (or as .pdf at\n>> https://github.com/oasis-tcs/virtio-docs/blob/master/virtio-v1.1-packed-wd10.pdf\n>>\n>> A packed virtqueue is different from a split virtqueue in that it\n>> consists of only a single descriptor ring that replaces available and\n>> used ring, index and descriptor buffer.\n>>\n>> Each descriptor is readable and writable and has a flags field. These flags\n>> will mark if a descriptor is available or used.  To detect new available descriptors\n>> even after the ring has wrapped, device and driver each have a\n>> single-bit wrap counter that is flipped from 0 to 1 and vice versa every time\n>> the last descriptor in the ring is used/made available.\n>>\n>> The idea behind this is to 1. improve performance by avoiding cache misses\n>> and 2. be easier for devices to implement.\n>>\n>> Regarding performance: with these patches I get 21.13 Mpps on my system\n>> as compared to 18.8 Mpps with the virtio 1.0 code. Packet size was 64\n>\n>Did you enable multiple-queue and use multiple cores on\n>vhost side? If not, I guess the above performance gain\n>is the gain in vhost side instead of virtio side.\n\nI tested several variations back then and they all looked very good.\nBut code change a lot meanwhile and I need to do more benchmarking\nin any case.\n\n>\n>If you use more cores on vhost side or virtio side, will\n>you see any performance changes?\n>\n>Did you do any performance test with the kernel vhost-net\n>backend (with zero-copy enabled and disabled)? I think we\n>also need some performance data for these two cases. 
And\n>it can help us to make sure that it works with the kernel\n>backends.\n\nI tested against vhost-kernel but only to test functionality not\nto benchmark. \n>\n>And for the \"virtio-PMD + vhost-PMD\" test cases, I think\n>we need below performance data:\n>\n>#1. The maximum 1 core performance of virtio PMD when using split ring.\n>#2. The maximum 1 core performance of virtio PMD when using packed ring.\n>#3. The maximum 1 core performance of vhost PMD when using split ring.\n>#4. The maximum 1 core performance of vhost PMD when using packed ring.\n>\n>And then we can have a clear understanding of the\n>performance gain in DPDK with packed ring.\n>\n>And FYI, the maximum 1 core performance of virtio PMD\n>can be got in below steps:\n>\n>1. Launch vhost-PMD with multiple queues, and use multiple\n>   CPU cores for forwarding.\n>2. Launch virtio-PMD with multiple queues and use 1 CPU\n>   core for forwarding.\n>3. Repeat above two steps with adding more CPU cores\n>   for forwarding in vhost-PMD side until we can't see\n>   performance increase anymore.\n\n Thanks for the suggestions, I'll come back with more\nnumbers.\n\n>\n>Besides, I just did a quick glance at the Tx implementation,\n>it still assumes the descs will be written back in order\n>by device. You can find more details from my comments on\n>that patch.\n\nSaw it and noted. I had hoped to be able to avoid the list but\nI see no way around it now. \n\nThanks for your review Tiwei!\n\nregards,\nJens",
        "headers": {
            "Return-Path": "<dev-bounces@dpdk.org>",
            "References": "<20180921103308.16357-1-jfreimann@redhat.com>\n\t<20180921123222.GA25292@debian>",
            "X-Mailman-Version": "2.1.15",
            "X-Greylist": "Sender IP whitelisted, not delayed by milter-greylist-4.5.16\n\t(mx1.redhat.com [10.5.110.46]); Fri, 21 Sep 2018 14:06:49 +0000 (UTC)",
            "From": "Jens Freimann <jfreimann@redhat.com>",
            "User-Agent": "NeoMutt/20180716",
            "List-Post": "<mailto:dev@dpdk.org>",
            "Content-Type": "text/plain; charset=us-ascii; format=flowed",
            "X-BeenThere": "dev@dpdk.org",
            "X-Original-To": "patchwork@dpdk.org",
            "X-Scanned-By": "MIMEDefang 2.84 on 10.5.11.22",
            "Received": [
                "from [92.243.14.124] (localhost [127.0.0.1])\n\tby dpdk.org (Postfix) with ESMTP id F1BE01DA4;\n\tFri, 21 Sep 2018 16:06:50 +0200 (CEST)",
                "from mx1.redhat.com (mx1.redhat.com [209.132.183.28])\n\tby dpdk.org (Postfix) with ESMTP id 0E616F04\n\tfor <dev@dpdk.org>; Fri, 21 Sep 2018 16:06:50 +0200 (CEST)",
                "from smtp.corp.redhat.com\n\t(int-mx07.intmail.prod.int.phx2.redhat.com [10.5.11.22])\n\t(using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))\n\t(No client certificate requested)\n\tby mx1.redhat.com (Postfix) with ESMTPS id 5CBCD3001773;\n\tFri, 21 Sep 2018 14:06:49 +0000 (UTC)",
                "from localhost (dhcp-192-209.str.redhat.com [10.33.192.209])\n\tby smtp.corp.redhat.com (Postfix) with ESMTPS id 862F81073022;\n\tFri, 21 Sep 2018 14:06:45 +0000 (UTC)"
            ],
            "Subject": "Re: [dpdk-dev] [PATCH v6 00/11] implement packed virtqueues",
            "Sender": "\"dev\" <dev-bounces@dpdk.org>",
            "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
            "Message-ID": "<20180921140644.oiwl7gwekreflc7v@jenstp.localdomain>",
            "Precedence": "list",
            "Date": "Fri, 21 Sep 2018 16:06:44 +0200",
            "List-Archive": "<http://mails.dpdk.org/archives/dev/>",
            "Errors-To": "dev-bounces@dpdk.org",
            "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n\t<mailto:dev-request@dpdk.org?subject=subscribe>",
            "Cc": "dev@dpdk.org, maxime.coquelin@redhat.com, Gavin.Hu@arm.com,\n\tzhihong.wang@intel.com",
            "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
            "Delivered-To": "patchwork@dpdk.org",
            "In-Reply-To": "<20180921123222.GA25292@debian>",
            "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n\t<mailto:dev-request@dpdk.org?subject=unsubscribe>",
            "MIME-Version": "1.0",
            "To": "Tiwei Bie <tiwei.bie@intel.com>",
            "Content-Disposition": "inline"
        }
    }
]
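A minimal sketch of consuming this response in Python. The sample body below is trimmed from the response above to the fields the loop uses; in practice you would fetch `http://patches.dpdk.org/api/covers/<id>/comments/` with `urllib.request` or a similar HTTP client and decode the body the same way.

```python
import json

# Trimmed sample of the JSON body returned by
# GET /api/covers/45067/comments/ (see the full response above).
body = """
[
    {
        "id": 85898,
        "msgid": "<20180921123222.GA25292@debian>",
        "date": "2018-09-21T12:32:22",
        "subject": "Re: [dpdk-dev] [PATCH v6 00/11] implement packed virtqueues",
        "submitter": {"name": "Tiwei Bie", "email": "tiwei.bie@intel.com"}
    },
    {
        "id": 85911,
        "msgid": "<20180921140644.oiwl7gwekreflc7v@jenstp.localdomain>",
        "date": "2018-09-21T14:06:44",
        "subject": "Re: [dpdk-dev] [PATCH v6 00/11] implement packed virtqueues",
        "submitter": {"name": "Jens Freimann", "email": "jfreimann@redhat.com"}
    }
]
"""

# Decode the array and print a one-line summary per comment.
comments = json.loads(body)
for c in comments:
    s = c["submitter"]
    print(f"{c['date']}  {s['name']} <{s['email']}>  {c['subject']}")
```

Nested objects such as `submitter` are plain dicts after decoding, so fields like the email address are reached with ordinary key lookups.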