Message ID: 20190219105951.31046-1-tiwei.bie@intel.com (mailing list archive)
Headers:
From: Tiwei Bie <tiwei.bie@intel.com>
To: maxime.coquelin@redhat.com, zhihong.wang@intel.com, dev@dpdk.org
Date: Tue, 19 Feb 2019 18:59:46 +0800
Message-Id: <20190219105951.31046-1-tiwei.bie@intel.com>
Subject: [dpdk-dev] [PATCH 0/5] Fixes and enhancements for Tx path in Virtio PMD
Series |
Fixes and enhancements for Tx path in Virtio PMD
Message
Tiwei Bie
Feb. 19, 2019, 10:59 a.m. UTC
Below is a quick (unofficial) performance test (macfwd loop, 64B) for the
packed ring optimizations in this series on an Intel(R) Xeon(R) Gold 6140
CPU @ 2.30GHz platform:

w/o this series:
    packed ring normal/in-order: ~10.4 Mpps

w/ this series:
    packed ring normal:   ~10.9 Mpps
    packed ring in-order: ~11.3 Mpps

In the test, we need to make sure that the vhost side is fast enough, so
4 forwarding cores are used on the vhost side and 1 forwarding core on
the virtio side.

vhost side:

./x86_64-native-linuxapp-gcc/app/testpmd \
    -l 13,14,15,16,17 \
    --socket-mem 1024,0 \
    --file-prefix=vhost \
    --vdev=net_vhost0,iface=/tmp/vhost0,queues=4 \
    -- \
    --forward-mode=mac \
    -i \
    --rxq=4 \
    --txq=4 \
    --nb-cores 4

virtio side:

./x86_64-native-linuxapp-gcc/app/testpmd \
    -l 8,9,10,11,12 \
    --socket-mem 1024,0 \
    --single-file-segments \
    --file-prefix=virtio-user \
    --vdev=virtio_user0,path=/tmp/vhost0,queues=4,in_order=1,packed_vq=1 \
    -- \
    --forward-mode=mac \
    -i \
    --rxq=4 \
    --txq=4 \
    --nb-cores 1

Tiwei Bie (5):
  net/virtio: fix Tx desc cleanup for packed ring
  net/virtio: fix in-order Tx path for split ring
  net/virtio: fix in-order Tx path for packed ring
  net/virtio: introduce a helper for clearing net header
  net/virtio: optimize xmit enqueue for packed ring

 drivers/net/virtio/virtio_ethdev.c |   4 +-
 drivers/net/virtio/virtio_rxtx.c   | 203 ++++++++++++++++++++---------
 2 files changed, 146 insertions(+), 61 deletions(-)
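For reference, the relative gains implied by the Mpps figures above can be computed with a short script. This is only arithmetic on the numbers reported in the cover letter, not a new measurement:

```python
# Compute the relative throughput gain of each packed-ring variant
# over the ~10.4 Mpps baseline reported without this series.
baseline = 10.4  # Mpps, packed ring normal/in-order, w/o this series
results = {
    "packed ring normal": 10.9,    # Mpps, w/ this series
    "packed ring in-order": 11.3,  # Mpps, w/ this series
}

for name, mpps in results.items():
    gain = (mpps - baseline) / baseline * 100
    print(f"{name}: {mpps} Mpps ({gain:+.1f}% vs {baseline} Mpps)")
# packed ring normal comes out at about +4.8%, in-order at about +8.7%.
```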
Comments
On 2019/2/19 6:59 PM, Tiwei Bie wrote:
> Below is a quick (unofficial) performance test (macfwd loop, 64B)
> for the packed ring optimizations in this series on an Intel(R)
> Xeon(R) Gold 6140 CPU @ 2.30GHz platform:
>
> w/o this series:
>     packed ring normal/in-order: ~10.4 Mpps
>
> w/ this series:
>     packed ring normal:   ~10.9 Mpps
>     packed ring in-order: ~11.3 Mpps

Since your series contains optimizations for split ring as well, I wonder
whether you have its numbers too.

Thanks
On Tue, Feb 19, 2019 at 09:40:05PM +0800, Jason Wang wrote:
> On 2019/2/19 6:59 PM, Tiwei Bie wrote:
> > w/o this series:
> >     packed ring normal/in-order: ~10.4 Mpps
> >
> > w/ this series:
> >     packed ring normal:   ~10.9 Mpps
> >     packed ring in-order: ~11.3 Mpps
>
> Since your series contains optimizations for split ring as well, I wonder
> whether you have its numbers too.

The PPS of split ring in-order (with or without this series) shown by
testpmd isn't stable in my test above, so I didn't manage to get any
numbers.
On 2/19/19 11:59 AM, Tiwei Bie wrote:
> Below is a quick (unofficial) performance test (macfwd loop, 64B)
> for the packed ring optimizations in this series on an Intel(R)
> Xeon(R) Gold 6140 CPU @ 2.30GHz platform:

Applied to dpdk-next-virtio/master.

Thanks,
Maxime