Message ID: f7c9c6c2-b0bb-a7df-cca1-abe93c5089fb@redhat.com (mailing list archive)
State: Not Applicable, archived
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: Zhiyong Yang <zhiyong.yang@intel.com>, dev@dpdk.org
Cc: yuanhan.liu@linux.intel.com, bruce.richardson@intel.com, konstantin.ananyev@intel.com
Date: Fri, 2 Dec 2016 11:00:48 +0100
Subject: Re: [dpdk-dev] [PATCH 0/4] eal/common: introduce rte_memset and related test
Commit Message
Maxime Coquelin
Dec. 2, 2016, 10 a.m. UTC
Hi Zhiyong,

On 12/05/2016 09:26 AM, Zhiyong Yang wrote:
> DPDK has hit bad performance drops in some cases when calling the glibc
> memset function; see the discussion about memset at
> http://dpdk.org/ml/archives/dev/2016-October/048628.html
> A more efficient function is needed to fix this. One important property
> of rte_memset is that we get explicit control over which instruction
> flow is used.
>
> This patchset introduces rte_memset as a more efficient implementation,
> and it brings an obvious performance improvement, especially for small
> N bytes, which dominate most application scenarios.
>
> Patch 1 implements rte_memset in the file rte_memset.h on the IA
> platform. The file supports three instruction sets: SSE & AVX
> (128 bits), AVX2 (256 bits), and AVX512 (512 bits). rte_memset makes
> use of vectorization and inline functions to improve performance on IA.
> In addition, cache-line and memory alignment are fully taken into
> consideration.
>
> Patch 2 implements a functional autotest that validates that the
> function works correctly.
>
> Patch 3 implements performance autotests separately for cache and
> memory.
>
> Patch 4 uses rte_memset instead of copy_virtio_net_hdr, which brings a
> 3%~4% performance improvement on the IA platform in virtio/vhost
> non-mergeable loopback testing.
>
> Zhiyong Yang (4):
>   eal/common: introduce rte_memset on IA platform
>   app/test: add functional autotest for rte_memset
>   app/test: add performance autotest for rte_memset
>   lib/librte_vhost: improve vhost perf using rte_memset
>
>  app/test/Makefile                                  |   3 +
>  app/test/test_memset.c                             | 158 +++++++++
>  app/test/test_memset_perf.c                        | 348 +++++++++++++++++++
>  doc/guides/rel_notes/release_17_02.rst             |  11 +
>  .../common/include/arch/x86/rte_memset.h           | 376 +++++++++++++++++++++
>  lib/librte_eal/common/include/generic/rte_memset.h |  51 +++
>  lib/librte_vhost/virtio_net.c                      |  18 +-
>  7 files changed, 958 insertions(+), 7 deletions(-)
>  create mode 100644 app/test/test_memset.c
>  create mode 100644 app/test/test_memset_perf.c
>  create mode 100644 lib/librte_eal/common/include/arch/x86/rte_memset.h
>  create mode 100644 lib/librte_eal/common/include/generic/rte_memset.h

Thanks for the series, the idea looks good to me.

Wouldn't it also be worth using rte_memset in the virtio PMD (not
compiled/tested)?:

diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 22d97a4..a5f70c4 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -287,7 +287,7 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
 		rte_pktmbuf_prepend(cookie, head_size);
 		/* if offload disabled, it is not zeroed below, do it now */
 		if (offload == 0)
-			memset(hdr, 0, head_size);
+			rte_memset(hdr, 0, head_size);
 	} else if (use_indirect) {
 		/* setup tx ring slot to point to indirect
 		 * descriptor list stored in reserved region.

Cheers,
Maxime
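For readers outside the thread, the core idea of rte_memset — choosing the store width explicitly instead of relying on glibc's runtime dispatch — can be sketched in portable C. This is an illustration only, with hypothetical names; the actual patch uses SSE/AVX intrinsics and additionally handles cache-line alignment and larger sizes:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative sketch of the rte_memset idea in portable C: pick the
 * store width explicitly (64-bit here; the real header uses 128/256/512-bit
 * SSE/AVX registers) instead of leaving the choice to glibc's runtime
 * dispatch. Names are hypothetical, not the actual DPDK API. */
static inline void *wide_memset(void *dst, int c, size_t n)
{
	uint8_t *p = dst;
	uint64_t v = (uint8_t)c * 0x0101010101010101ULL; /* replicate byte x8 */

	while (n >= 8) {        /* bulk: one 8-byte store per iteration */
		memcpy(p, &v, 8);   /* memcpy of a constant size compiles to one mov */
		p += 8;
		n -= 8;
	}
	while (n--)             /* tail: remaining 0..7 bytes */
		*p++ = (uint8_t)c;
	return dst;
}
```

Because the function is `static inline` in a header, the compiler can specialize it per call site, which is part of what the series relies on for small sizes.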
Comments
Hi, Maxime:

> -----Original Message-----
> From: Maxime Coquelin [mailto:maxime.coquelin@redhat.com]
> Sent: Friday, December 2, 2016 6:01 PM
> To: Yang, Zhiyong <zhiyong.yang@intel.com>; dev@dpdk.org
> Cc: yuanhan.liu@linux.intel.com; Richardson, Bruce
> <bruce.richardson@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>
> Subject: Re: [dpdk-dev] [PATCH 0/4] eal/common: introduce rte_memset
> and related test
>
> Hi Zhiyong,
>
> On 12/05/2016 09:26 AM, Zhiyong Yang wrote:
> [... cover letter and diffstat snipped ...]
>
> Thanks for the series, the idea looks good to me.
>
> Wouldn't it also be worth using rte_memset in the virtio PMD (not
> compiled/tested)?:

I think rte_memset may bring some benefit here, but I'm not clear on how
to reach that branch and test it. :)

thanks
Zhiyong

> diff --git a/drivers/net/virtio/virtio_rxtx.c
> b/drivers/net/virtio/virtio_rxtx.c
> index 22d97a4..a5f70c4 100644
> --- a/drivers/net/virtio/virtio_rxtx.c
> +++ b/drivers/net/virtio/virtio_rxtx.c
> @@ -287,7 +287,7 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq,
> struct rte_mbuf *cookie,
> 		rte_pktmbuf_prepend(cookie, head_size);
> 		/* if offload disabled, it is not zeroed below, do it now */
> 		if (offload == 0)
> -			memset(hdr, 0, head_size);
> +			rte_memset(hdr, 0, head_size);
> 	} else if (use_indirect) {
> 		/* setup tx ring slot to point to indirect
> 		 * descriptor list stored in reserved region.
>
> Cheers,
> Maxime
On 12/06/2016 07:33 AM, Yang, Zhiyong wrote:
> Hi, Maxime:
>
>> [... original message quoted in full; cover letter and diffstat
>> snipped ...]
>>
>> Thanks for the series, the idea looks good to me.
>>
>> Wouldn't it also be worth using rte_memset in the virtio PMD (not
>> compiled/tested)?:
>
> I think rte_memset may bring some benefit here, but I'm not clear on
> how to reach that branch and test it. :)

Indeed, you will need Pierre's patch:
[dpdk-dev] [PATCH] virtio: tx with can_push when VERSION_1 is set

Thanks,
Maxime
Hi, Maxime:

> -----Original Message-----
> From: Maxime Coquelin [mailto:maxime.coquelin@redhat.com]
> Sent: Tuesday, December 6, 2016 4:30 PM
> To: Yang, Zhiyong <zhiyong.yang@intel.com>; dev@dpdk.org
> Cc: yuanhan.liu@linux.intel.com; Richardson, Bruce
> <bruce.richardson@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Pierre Pfister (ppfister)
> <ppfister@cisco.com>
> Subject: Re: [dpdk-dev] [PATCH 0/4] eal/common: introduce rte_memset
> and related test
>
> On 12/06/2016 07:33 AM, Yang, Zhiyong wrote:
> [... earlier discussion, cover letter, and diffstat snipped ...]
>>
>> I think rte_memset may bring some benefit here, but I'm not clear on
>> how to reach that branch and test it. :)
>
> Indeed, you will need Pierre's patch:
> [dpdk-dev] [PATCH] virtio: tx with can_push when VERSION_1 is set
>
> Thanks,
> Maxime

Thank you, Maxime.
I can see a small, but not obvious, performance improvement here.
You know, memset(hdr, 0, head_size) consumes only a few cycles in the
virtio PMD; head_size is only 10 or 12 bytes.
I have optimized rte_memset performance further for N = 8~15 bytes.
The main purpose of introducing rte_memset is that we can use it to
avoid the performance-drop issue of glibc memset on some platforms, I
think.
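The N = 8~15 optimization Zhiyong mentions is commonly implemented with two overlapping 8-byte stores: the second store is placed at offset n - 8, so together they cover any length in [8, 16] without a loop or branch ladder — a good fit for the 10- or 12-byte virtio header. A minimal sketch with a hypothetical helper name (not the actual patch code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Two overlapping 8-byte stores cover any n in [8, 16] with no loop and
 * no per-byte tail -- ideal for the 10- or 12-byte virtio-net header.
 * Hypothetical helper name; not the actual DPDK implementation. */
static inline void memset_8_to_16(void *dst, int c, size_t n)
{
	uint64_t v = (uint8_t)c * 0x0101010101010101ULL; /* replicate byte x8 */
	uint8_t *p = dst;

	memcpy(p, &v, 8);          /* bytes [0, 8) */
	memcpy(p + n - 8, &v, 8);  /* bytes [n-8, n), may overlap the first */
}
```

The redundant overlapped bytes cost nothing compared to the branch and loop overhead they eliminate, which is why this pattern shows up in most hand-tuned small-copy/small-set routines.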
On Wed, Dec 07, 2016 at 09:28:17AM +0000, Yang, Zhiyong wrote:
> > >> Wouldn't it also be worth using rte_memset in the virtio PMD (not
> > >> compiled/tested)?:
> > >
> > > I think rte_memset may bring some benefit here, but I'm not clear
> > > on how to reach that branch and test it. :)
> >
> > Indeed, you will need Pierre's patch:
> > [dpdk-dev] [PATCH] virtio: tx with can_push when VERSION_1 is set

I will apply it shortly.

> > Thanks,
> > Maxime
>
> Thank you, Maxime.
> I can see a small, but not obvious, performance improvement here.

Are you sure you have run into that code piece? FYI, you have to enable
virtio 1.0 explicitly, which is disabled by default.

> You know, memset(hdr, 0, head_size) consumes only a few cycles in the
> virtio PMD; head_size is only 10 or 12 bytes.
> I have optimized rte_memset performance further for N = 8~15 bytes.
> The main purpose of introducing rte_memset is that we can use it to
> avoid the performance-drop issue of glibc memset on some platforms, I
> think.

For this case (as well as the 4th patch), it's more about making sure
rte_memset is inlined.

	--yliu
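On the inlining point: when the size is a compile-time constant, another way to guarantee the compiler emits a few direct stores (rather than a library call with runtime size dispatch) is to assign a zero compound literal to a fixed-size struct. This is a sketch only — the struct below is a hypothetical stand-in for struct virtio_net_hdr, not the real definition:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical 12-byte header, a stand-in for struct virtio_net_hdr.
 * Assigning a zero compound literal gives the compiler a fixed-size,
 * fully inlinable zeroing -- no libc call, no runtime size dispatch. */
struct vnet_hdr {
	uint8_t  flags;
	uint8_t  gso_type;
	uint16_t hdr_len;
	uint16_t gso_size;
	uint16_t csum_start;
	uint16_t csum_offset;
	uint16_t num_buffers;
};

static inline void zero_hdr(struct vnet_hdr *h)
{
	/* typically compiles down to two or three direct stores */
	*h = (struct vnet_hdr){0};
}
```

An inlined rte_memset achieves the same effect for arbitrary constant sizes, which is presumably why the thread stresses that the function lives in a header as a `static inline`.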
Hi, Yuanhan:

> -----Original Message-----
> From: Yuanhan Liu [mailto:yuanhan.liu@linux.intel.com]
> Sent: Wednesday, December 7, 2016 5:38 PM
> To: Yang, Zhiyong <zhiyong.yang@intel.com>
> Cc: Maxime Coquelin <maxime.coquelin@redhat.com>; dev@dpdk.org;
> Richardson, Bruce <bruce.richardson@intel.com>; Ananyev, Konstantin
> <konstantin.ananyev@intel.com>; Pierre Pfister (ppfister)
> <ppfister@cisco.com>
> Subject: Re: [dpdk-dev] [PATCH 0/4] eal/common: introduce rte_memset
> and related test
>
> On Wed, Dec 07, 2016 at 09:28:17AM +0000, Yang, Zhiyong wrote:
> [... earlier discussion snipped ...]
>
> Are you sure you have run into that code piece? FYI, you have to enable
> virtio 1.0 explicitly, which is disabled by default.

Yes. I used the patch from Pierre and set offload = 0.

Thanks
Zhiyong

> For this case (as well as the 4th patch), it's more about making sure
> rte_memset is inlined.
>
> 	--yliu
On Wed, Dec 07, 2016 at 09:43:06AM +0000, Yang, Zhiyong wrote:
> > Are you sure you have run into that code piece? FYI, you have to
> > enable virtio 1.0 explicitly, which is disabled by default.
>
> Yes. I used the patch from Pierre and set offload = 0.

I meant virtio 1.0. Have you added the following option for the QEMU
virtio-net device?

    disable-modern=false

	--yliu
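For reference, a QEMU invocation with virtio 1.0 ("modern" mode) enabled might look as follows. Everything except the disable-modern=false device property is a placeholder (paths, ids, memory size) — adapt to your own vhost-user setup:

```shell
# Build the device argument; the key knob is disable-modern=false, which
# enables the virtio 1.0 ("modern") interface on the emulated NIC.
# Paths and ids below are placeholders.
DEV="virtio-net-pci,netdev=net0,disable-modern=false"
CMD="qemu-system-x86_64 -m 2048 -enable-kvm /path/to/guest.img \
 -chardev socket,id=char0,path=/tmp/vhost-user0.sock \
 -netdev type=vhost-user,id=net0,chardev=char0 \
 -device $DEV"
echo "$CMD"
```

Without disable-modern=false the device stays in legacy (virtio 0.9) mode, and the can_push/VERSION_1 code path discussed above is never taken.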
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 22d97a4..a5f70c4 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -287,7 +287,7 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
 		rte_pktmbuf_prepend(cookie, head_size);
 		/* if offload disabled, it is not zeroed below, do it now */
 		if (offload == 0)
-			memset(hdr, 0, head_size);
+			rte_memset(hdr, 0, head_size);
 	} else if (use_indirect) {