[dpdk-dev] Avoid possible memory copy when sorting hugepages

Message ID 1418178341-4193-1-git-send-email-michael.qiu@intel.com (mailing list archive)
State Changes Requested, archived

Commit Message

Michael Qiu Dec. 10, 2014, 2:25 a.m. UTC
  When the first address is the compared address in the loop,
the code still does a memory copy, which is pointless; worse,
this happens often when hugepg_tbl is mostly in order. This can
be a big deal on systems with large hugepage memory (hundreds
or thousands of GB).

Meanwhile, compare_idx can never end up as -1, so remove that check.

This patch also includes some coding style fixes.

Signed-off-by: Michael Qiu <michael.qiu@intel.com>
---
 lib/librte_eal/linuxapp/eal/eal_memory.c | 13 +++++--------
 1 file changed, 5 insertions(+), 8 deletions(-)
  

Comments

Bruce Richardson Dec. 10, 2014, 10:41 a.m. UTC | #1
On Wed, Dec 10, 2014 at 10:25:41AM +0800, Michael Qiu wrote:
> When the first address is the compared address in the loop,
> the code still does a memory copy, which is pointless; worse,
> this happens often when hugepg_tbl is mostly in order. This can
> be a big deal on systems with large hugepage memory (hundreds
> or thousands of GB).

I actually doubt the time taken by this sorting is a significant part of the
initialization time taken, even for hundreds of GBs of memory. Do you have
any measurements to confirm this?
[However, that's not to say that we can't merge in this patch]


  
Michael Qiu Dec. 10, 2014, 10:59 a.m. UTC | #2
On 12/10/2014 6:41 PM, Richardson, Bruce wrote:
> On Wed, Dec 10, 2014 at 10:25:41AM +0800, Michael Qiu wrote:
>> When the first address is the compared address in the loop,
>> the code still does a memory copy, which is pointless; worse,
>> this happens often when hugepg_tbl is mostly in order. This can
>> be a big deal on systems with large hugepage memory (hundreds
>> or thousands of GB).
> I actually doubt the time taken by this sorting is a significant part of the
> initialization time taken, even for hundreds of GBs of memory. Do you have
> any measurements to confirm this?
> [However, that's not to say that we can't merge in this patch]

I have no hardware environment with that much memory, so I haven't
done measurements on this.

This is not a big improvement, but it may indeed do lots of useless
memory copies at the initialization stage.

It could at least save some time :)

Thanks,
Michael
  
Ananyev, Konstantin Dec. 10, 2014, 11:08 a.m. UTC | #3
> -----Original Message-----
> From: dev [mailto:dev-bounces@dpdk.org] On Behalf Of Qiu, Michael
> Sent: Wednesday, December 10, 2014 10:59 AM
> To: Richardson, Bruce
> Cc: dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] Avoid possible memory copy when sorting hugepages
> 
> On 12/10/2014 6:41 PM, Richardson, Bruce wrote:
> > On Wed, Dec 10, 2014 at 10:25:41AM +0800, Michael Qiu wrote:
> >> When the first address is the compared address in the loop,
> >> the code still does a memory copy, which is pointless; worse,
> >> this happens often when hugepg_tbl is mostly in order. This can
> >> be a big deal on systems with large hugepage memory (hundreds
> >> or thousands of GB).
> > I actually doubt the time taken by this sorting is a significant part of the
> > initialization time taken, even for hundreds of GBs of memory. Do you have
> > any measurements to confirm this?
> > [However, that's not to say that we can't merge in this patch]
> 
> I have no hardware environment with that much memory, so I haven't
> done measurements on this.
> 
> This is not a big improvement, but it may indeed do lots of useless
> memory copies at the initialization stage.
> 
> It could at least save some time :)

I wonder why we need to write our own bubble sort procedure.
Why can't we use the standard qsort() here?
Konstantin

  
Jay Rolette Dec. 10, 2014, 5:58 p.m. UTC | #4
On Wed, Dec 10, 2014 at 5:08 AM, Ananyev, Konstantin <
konstantin.ananyev@intel.com> wrote:

> I wonder why we need to write our own bubble sort procedure.
> Why can't we use the standard qsort() here?
>

Sadly, even bubble sort would be better than the selection sort being used
here. It's guaranteed to be O(n^2) in all cases.

I just got through replacing that entire function in my repo with a call to
qsort() from the standard library last night myself. Faster (although
probably not material to most deployments) and less code.

Jay
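A qsort()-based replacement along the lines Jay describes might look like the sketch below. This is an illustration, not the actual patch: the struct is reduced to the one field the comparator needs (the real struct hugepage_file in eal_memory.c carries several more), and physaddr is assumed to be the sort key.

```c
#include <stdint.h>
#include <stdlib.h>

/* Minimal stand-in for DPDK's struct hugepage_file; only the
 * sort key is shown here. */
struct hugepage_file {
	uint64_t physaddr;
};

/* Comparator for qsort(): order entries by physical address.
 * Comparing explicitly rather than subtracting avoids overflow
 * over the full 64-bit address range. */
static int
cmp_physaddr(const void *a, const void *b)
{
	const struct hugepage_file *p1 = a;
	const struct hugepage_file *p2 = b;

	if (p1->physaddr < p2->physaddr)
		return -1;
	if (p1->physaddr > p2->physaddr)
		return 1;
	return 0;
}

/* sort_by_physaddr() then collapses to a single library call. */
static void
sort_by_physaddr(struct hugepage_file *hugepg_tbl, unsigned num_pages)
{
	qsort(hugepg_tbl, num_pages, sizeof(hugepg_tbl[0]), cmp_physaddr);
}
```

Besides dropping the hand-rolled swap, this brings the sort down from O(n^2) to O(n log n), and since qsort() cannot fail, the function no longer needs an error return.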
  
Ananyev, Konstantin Dec. 10, 2014, 6:39 p.m. UTC | #5
> From: Jay Rolette [mailto:rolette@infiniteio.com]
> Sent: Wednesday, December 10, 2014 5:58 PM
> To: Ananyev, Konstantin
> Cc: Qiu, Michael; Richardson, Bruce; dev@dpdk.org
> Subject: Re: [dpdk-dev] [PATCH] Avoid possible memory copy when sorting hugepages
>
> On Wed, Dec 10, 2014 at 5:08 AM, Ananyev, Konstantin <konstantin.ananyev@intel.com> wrote:
> I wonder why we need to write our own bubble sort procedure.
> Why can't we use the standard qsort() here?
>
> Sadly, even bubble sort would be better than the selection sort being used here. It's guaranteed to be O(n^2) in all cases.

Ah yes, you're right, it is a selection sort here.

> I just got through replacing that entire function in my repo with a call to qsort() from the standard library last night myself. Faster
> (although probably not material to most deployments) and less code.

If you feel like it is worth it, why not submit a patch? :)
Thanks
Konstantin

> Jay
  
Jay Rolette Dec. 10, 2014, 9:36 p.m. UTC | #6
On Wed, Dec 10, 2014 at 12:39 PM, Ananyev, Konstantin <
konstantin.ananyev@intel.com> wrote:

> > I just got through replacing that entire function in my repo with a call
> to qsort() from the standard library last night myself. Faster
> > (although probably not material to most deployments) and less code.
>
> If you feel like it is worth it, why not submit a patch? :)


On Haswell and IvyBridge Xeons, with 128 1G huge pages, it doesn't make a
user-noticeable difference in the time required for
rte_eal_hugepage_init(). The reason I went ahead and checked it in my repo
is because:

a) it eats at my soul to see an O(n^2) case for something where qsort() is
trivial to use
b) we will increase that up to ~232 1G huge pages soon. Likely doesn't
matter at that point either, but since it was already written...

What *does* chew up a lot of time in init is where the huge pages are being
explicitly zeroed in map_all_hugepages().

Removing that memset() makes find_numasocket() blow up, but I was able to
do a quick test where I only memset 1 byte on each page. That cut init time
by 30% (~20 seconds in my test).  Significant, but since I'm not entirely
sure it is safe, I'm not making that change right now.

On Linux, shared memory that isn't file-backed is automatically zeroed
before the app gets it. However, I haven't had a chance to chase down
whether that applies to huge pages or not, much less how hugetlbfs factors
into the equation.

Back to the question about the patch, if you guys are interested in it,
I'll have to figure out your patch submission process. Shouldn't be a huge
deal other than the fact that we are on DPDK 1.6 (r2).

Cheers,
Jay
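The one-byte-per-page experiment Jay mentions can be sketched as below. This is a hypothetical helper, not the actual map_all_hugepages() code; the mapping address, page size, and page count are assumed to come from the caller.

```c
#include <stddef.h>
#include <stdint.h>

/* Touch one byte per page instead of memset()ing the whole mapping.
 * A single write is enough to fault the page in so the kernel backs
 * it with physical memory (which the later physaddr/NUMA lookups
 * need), without streaming every byte of a 1G page through the CPU. */
static void
touch_pages(void *addr, size_t page_sz, unsigned num_pages)
{
	unsigned i;

	for (i = 0; i < num_pages; i++)
		*((volatile uint8_t *)addr + i * page_sz) = 0;
}
```

Whether this is safe hinges on the open question in the thread: if hugetlbfs pages are already zeroed by the kernel before the application gets them, the full memset() is redundant; if not, any caller relying on zeroed memory would break.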
  
Michael Qiu Dec. 11, 2014, 1:44 a.m. UTC | #7
On 12/11/2014 5:37 AM, Jay Rolette wrote:
> On Wed, Dec 10, 2014 at 12:39 PM, Ananyev, Konstantin <
> konstantin.ananyev@intel.com> wrote:
>
>>> I just got through replacing that entire function in my repo with a call
>> to qsort() from the standard library last night myself. Faster
>>> (although probably not material to most deployments) and less code.
>> If you feel like it is worth it, why not submit a patch? :)
>
> On Haswell and IvyBridge Xeons, with 128 1G huge pages, it doesn't make a
> user-noticeable difference in the time required for
> rte_eal_hugepage_init(). The reason I went ahead and checked it in my repo
> is because:
>
> a) it eats at my soul to see an O(n^2) case for something where qsort() is
> trivial to use
> b) we will increase that up to ~232 1G huge pages soon. Likely doesn't
> matter at that point either, but since it was already written...
>
> What *does* chew up a lot of time in init is where the huge pages are being
> explicitly zeroed in map_all_hugepages().
>
> Removing that memset() makes find_numasocket() blow up, but I was able to
> do a quick test where I only memset 1 byte on each page. That cut init time
> by 30% (~20 seconds in my test).  Significant, but since I'm not entirely
> sure it is safe, I'm not making that change right now.
>
> On Linux, shared memory that isn't file-backed is automatically zeroed
> before the app gets it. However, I haven't had a chance to chase down
> whether that applies to huge pages or not, much less how hugetlbfs factors
> into the equation.
>
> Back to the question about the patch, if you guys are interested in it,
> I'll have to figure out your patch submission process. Shouldn't be a huge
> deal other than the fact that we are on DPDK 1.6 (r2).

Go ahead and post it :)

Thanks,
Michael
> Cheers,
> Jay
>
  

Patch

diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
index e6cb919..700aba2 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memory.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
@@ -678,14 +678,13 @@  error:
 static int
 sort_by_physaddr(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi)
 {
-	unsigned i, j;
-	int compare_idx;
+	unsigned i, j, compare_idx;
 	uint64_t compare_addr;
 	struct hugepage_file tmp;
 
 	for (i = 0; i < hpi->num_pages[0]; i++) {
 		compare_addr = 0;
-		compare_idx = -1;
+		compare_idx = i;
 
 		/*
 		 * browse all entries starting at 'i', and find the
@@ -704,11 +703,9 @@  sort_by_physaddr(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi)
 			}
 		}
 
-		/* should not happen */
-		if (compare_idx == -1) {
-			RTE_LOG(ERR, EAL, "%s(): error in physaddr sorting\n", __func__);
-			return -1;
-		}
+		/* avoid memory copy when the first entry is the compared */
+		if (compare_idx == i)
+			continue;
 
 		/* swap the 2 entries in the table */
 		memcpy(&tmp, &hugepg_tbl[compare_idx],
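For illustration, the patched function can be rendered as a self-contained sketch. This is a simplification, not the real code: the struct is reduced to its sort key, struct assignment stands in for the memcpy() swap, and a swap counter is added so the effect of the `compare_idx == i` early-continue is observable. On an already-sorted table it now performs zero copies.

```c
#include <stdint.h>

/* Reduced model of struct hugepage_file: just the sort key. */
struct hugepage_file {
	uint64_t physaddr;
};

/* Condensed rendering of the patched sort_by_physaddr(): a selection
 * sort by physical address. The returned swap count is instrumentation
 * added here, not part of the original function. */
static unsigned
sort_by_physaddr(struct hugepage_file *tbl, unsigned n)
{
	unsigned i, j, compare_idx;
	uint64_t compare_addr;
	struct hugepage_file tmp;
	unsigned swaps = 0;

	for (i = 0; i < n; i++) {
		compare_addr = 0;
		compare_idx = i;

		/* find the entry with the smallest physaddr in tbl[i..n-1] */
		for (j = i; j < n; j++) {
			if (compare_addr == 0 ||
			    tbl[j].physaddr < compare_addr) {
				compare_addr = tbl[j].physaddr;
				compare_idx = j;
			}
		}

		/* the fix: no copy when entry i is already the smallest */
		if (compare_idx == i)
			continue;

		/* swap the 2 entries in the table */
		tmp = tbl[compare_idx];
		tbl[compare_idx] = tbl[i];
		tbl[i] = tmp;
		swaps++;
	}
	return swaps;
}
```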