hash: fix SSE comparison

Message ID 20230906023100.3618303-1-jieqiang.wang@arm.com (mailing list archive)
State Superseded, archived
Delegated to: David Marchand
Series: hash: fix SSE comparison

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/loongarch-compilation success Compilation OK
ci/loongarch-unit-testing success Unit Testing PASS
ci/Intel-compilation success Compilation OK
ci/intel-Testing success Testing PASS
ci/github-robot: build success github build: passed
ci/iol-mellanox-Performance success Performance Testing PASS
ci/intel-Functional success Functional PASS
ci/iol-compile-amd64-testing success Testing PASS
ci/iol-sample-apps-testing success Testing PASS
ci/iol-unit-amd64-testing success Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-unit-arm64-testing success Testing PASS
ci/iol-compile-arm64-testing success Testing PASS
ci/iol-broadcom-Performance success Performance Testing PASS
ci/iol-broadcom-Functional success Functional Testing PASS
ci/iol-intel-Functional success Functional Testing PASS

Commit Message

Jieqiang Wang Sept. 6, 2023, 2:31 a.m. UTC
  _mm_cmpeq_epi16 sets each 16-bit element to 0xFFFF if the corresponding
16-bit elements are equal. The original SSE2 implementation of the
compare_signatures function uses _mm_movemask_epi8 to create a mask from
the MSB of each 8-bit element, while we should only care about the MSB
of the lower 8 bits in each 16-bit element.
For example, if the comparison result is all equal, the SSE2 path returns
0xFFFF while the NEON and default scalar paths return 0x5555.
Although this bug causes no negative effects, since the caller function
only examines the trailing zeros of each match mask, we recommend this
fix to ensure consistency with the NEON and default scalar code
behaviors.

Fixes: c7d93df552c2 ("hash: use partial-key hashing")
Cc: yipeng1.wang@intel.com
Cc: stable@dpdk.org

Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
Signed-off-by: Jieqiang Wang <jieqiang.wang@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
---
 lib/hash/rte_cuckoo_hash.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)
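To illustrate the mask difference, here is a minimal standalone sketch
(not part of the patch; the 0x1234 signature value is made up for the
example):

#include <emmintrin.h> /* SSE2 intrinsics */
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Eight 16-bit signatures, all equal to the probed signature. */
	uint16_t sigs[8] = {0x1234, 0x1234, 0x1234, 0x1234,
			    0x1234, 0x1234, 0x1234, 0x1234};
	__m128i bucket = _mm_loadu_si128((const __m128i *)sigs);
	__m128i cmp = _mm_cmpeq_epi16(bucket, _mm_set1_epi16(0x1234));

	/* Unfixed: both bytes of every matching 16-bit element set a bit,
	 * so an all-equal comparison yields 0xFFFF. */
	unsigned int raw = _mm_movemask_epi8(cmp);

	/* Fixed: keep only the MSB of the low byte of each element, so an
	 * all-equal comparison yields 0x5555, as on NEON and scalar. */
	unsigned int fixed = _mm_movemask_epi8(
			_mm_and_si128(cmp, _mm_set1_epi16(0x0080)));

	/* The lowest set bit is at the same position in both masks, which
	 * is why callers that only count trailing zeros were unaffected. */
	printf("raw=0x%04x fixed=0x%04x\n", raw, fixed);
	return 0;
}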
  

Comments

David Marchand Sept. 29, 2023, 3:32 p.m. UTC | #1
On Wed, Sep 6, 2023 at 4:31 AM Jieqiang Wang <jieqiang.wang@arm.com> wrote:
>
> _mm_cmpeq_epi16 sets each 16-bit element to 0xFFFF if the corresponding
> 16-bit elements are equal. The original SSE2 implementation of the
> compare_signatures function uses _mm_movemask_epi8 to create a mask from
> the MSB of each 8-bit element, while we should only care about the MSB
> of the lower 8 bits in each 16-bit element.
> For example, if the comparison result is all equal, the SSE2 path returns
> 0xFFFF while the NEON and default scalar paths return 0x5555.
> Although this bug causes no negative effects, since the caller function
> only examines the trailing zeros of each match mask, we recommend this
> fix to ensure consistency with the NEON and default scalar code
> behaviors.
>
> Fixes: c7d93df552c2 ("hash: use partial-key hashing")
> Cc: yipeng1.wang@intel.com
> Cc: stable@dpdk.org
>
> Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
> Signed-off-by: Jieqiang Wang <jieqiang.wang@arm.com>
> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>

A review from this library's maintainers, please?
  
Bruce Richardson Oct. 2, 2023, 10:39 a.m. UTC | #2
On Wed, Sep 06, 2023 at 10:31:00AM +0800, Jieqiang Wang wrote:
> _mm_cmpeq_epi16 sets each 16-bit element to 0xFFFF if the corresponding
> 16-bit elements are equal. The original SSE2 implementation of the
> compare_signatures function uses _mm_movemask_epi8 to create a mask from
> the MSB of each 8-bit element, while we should only care about the MSB
> of the lower 8 bits in each 16-bit element.
> For example, if the comparison result is all equal, the SSE2 path returns
> 0xFFFF while the NEON and default scalar paths return 0x5555.
> Although this bug causes no negative effects, since the caller function
> only examines the trailing zeros of each match mask, we recommend this
> fix to ensure consistency with the NEON and default scalar code
> behaviors.
> 
> Fixes: c7d93df552c2 ("hash: use partial-key hashing")
> Cc: yipeng1.wang@intel.com
> Cc: stable@dpdk.org
> 
> Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
> Signed-off-by: Jieqiang Wang <jieqiang.wang@arm.com>
> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>

Fix looks correct, but see comment below. I think we can convert the vector
mask to a simpler - and possibly faster - scalar one.

/Bruce

> ---
>  lib/hash/rte_cuckoo_hash.c | 16 +++++++++-------
>  1 file changed, 9 insertions(+), 7 deletions(-)
> 
> diff --git a/lib/hash/rte_cuckoo_hash.c b/lib/hash/rte_cuckoo_hash.c
> index d92a903bb3..acaa8b74bd 100644
> --- a/lib/hash/rte_cuckoo_hash.c
> +++ b/lib/hash/rte_cuckoo_hash.c
> @@ -1862,17 +1862,19 @@ compare_signatures(uint32_t *prim_hash_matches, uint32_t *sec_hash_matches,
>  	/* For match mask the first bit of every two bits indicates the match */
>  	switch (sig_cmp_fn) {
>  #if defined(__SSE2__)
> -	case RTE_HASH_COMPARE_SSE:
> +	case RTE_HASH_COMPARE_SSE: {
>  		/* Compare all signatures in the bucket */
> -		*prim_hash_matches = _mm_movemask_epi8(_mm_cmpeq_epi16(
> -				_mm_load_si128(
> +		__m128i shift_mask = _mm_set1_epi16(0x0080);

Not sure that this variable name is the most descriptive, as we don't
actually shift anything using this. How about "results_mask"?

> +		__m128i prim_cmp = _mm_cmpeq_epi16(_mm_load_si128(
>  					(__m128i const *)prim_bkt->sig_current),
> -				_mm_set1_epi16(sig)));
> +					_mm_set1_epi16(sig));
> +		*prim_hash_matches = _mm_movemask_epi8(_mm_and_si128(prim_cmp, shift_mask));

While this will work like you describe, I would think the simpler solution
here is not to do a vector mask, but instead to simply do a scalar one.
This would save extra vector loads too, since all values could just be
masked with compile-time constant 0xAAAA.

>  		/* Compare all signatures in the bucket */
> -		*sec_hash_matches = _mm_movemask_epi8(_mm_cmpeq_epi16(
> -				_mm_load_si128(
> +		__m128i sec_cmp = _mm_cmpeq_epi16(_mm_load_si128(
>  					(__m128i const *)sec_bkt->sig_current),
> -				_mm_set1_epi16(sig)));
> +					_mm_set1_epi16(sig));
> +		*sec_hash_matches = _mm_movemask_epi8(_mm_and_si128(sec_cmp, shift_mask));
> +		}
>  		break;
>  #elif defined(__ARM_NEON)
>  	case RTE_HASH_COMPARE_NEON: {
> -- 
> 2.25.1
>
  
Jieqiang Wang Oct. 7, 2023, 6:41 a.m. UTC | #3
Thanks for your comments, Bruce!
A few comments inline.

BR,
Jieqiang Wang
-----Original Message-----
From: Bruce Richardson <bruce.richardson@intel.com>
Sent: Monday, October 2, 2023 6:40 PM
To: Jieqiang Wang <Jieqiang.Wang@arm.com>
Cc: Yipeng Wang <yipeng1.wang@intel.com>; Sameh Gobriel <sameh.gobriel@intel.com>; Vladimir Medvedkin <vladimir.medvedkin@intel.com>; Honnappa Nagarahalli <Honnappa.Nagarahalli@arm.com>; Dharmik Jayesh Thakkar <DharmikJayesh.Thakkar@arm.com>; dev@dpdk.org; nd <nd@arm.com>; stable@dpdk.org; Feifei Wang <Feifei.Wang2@arm.com>; Ruifeng Wang <Ruifeng.Wang@arm.com>
Subject: Re: [PATCH] hash: fix SSE comparison

On Wed, Sep 06, 2023 at 10:31:00AM +0800, Jieqiang Wang wrote:
> _mm_cmpeq_epi16 sets each 16-bit element to 0xFFFF if the corresponding
> 16-bit elements are equal. The original SSE2 implementation of the
> compare_signatures function uses _mm_movemask_epi8 to create a mask from
> the MSB of each 8-bit element, while we should only care about the MSB
> of the lower 8 bits in each 16-bit element.
> For example, if the comparison result is all equal, the SSE2 path returns
> 0xFFFF while the NEON and default scalar paths return 0x5555.
> Although this bug causes no negative effects, since the caller function
> only examines the trailing zeros of each match mask, we recommend this
> fix to ensure consistency with the NEON and default scalar code
> behaviors.
>
> Fixes: c7d93df552c2 ("hash: use partial-key hashing")
> Cc: yipeng1.wang@intel.com
> Cc: stable@dpdk.org
>
> Signed-off-by: Feifei Wang <feifei.wang2@arm.com>
> Signed-off-by: Jieqiang Wang <jieqiang.wang@arm.com>
> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>

Fix looks correct, but see comment below. I think we can convert the vector mask to a simpler - and possibly faster - scalar one.

/Bruce

> ---
>  lib/hash/rte_cuckoo_hash.c | 16 +++++++++-------
>  1 file changed, 9 insertions(+), 7 deletions(-)
>
> diff --git a/lib/hash/rte_cuckoo_hash.c b/lib/hash/rte_cuckoo_hash.c
> index d92a903bb3..acaa8b74bd 100644
> --- a/lib/hash/rte_cuckoo_hash.c
> +++ b/lib/hash/rte_cuckoo_hash.c
> @@ -1862,17 +1862,19 @@ compare_signatures(uint32_t *prim_hash_matches, uint32_t *sec_hash_matches,
>       /* For match mask the first bit of every two bits indicates the match */
>       switch (sig_cmp_fn) {
>  #if defined(__SSE2__)
> -     case RTE_HASH_COMPARE_SSE:
> +     case RTE_HASH_COMPARE_SSE: {
>               /* Compare all signatures in the bucket */
> -             *prim_hash_matches = _mm_movemask_epi8(_mm_cmpeq_epi16(
> -                             _mm_load_si128(
> +             __m128i shift_mask = _mm_set1_epi16(0x0080);

Not sure that this variable name is the most descriptive, as we don't actually shift anything using this. How about "results_mask"?

Ack.

> +             __m128i prim_cmp = _mm_cmpeq_epi16(_mm_load_si128(
>                                       (__m128i const *)prim_bkt->sig_current),
> -                             _mm_set1_epi16(sig)));
> +                                     _mm_set1_epi16(sig));
> +             *prim_hash_matches = _mm_movemask_epi8(_mm_and_si128(prim_cmp,
> +shift_mask));

While this will work like you describe, I would think the simpler solution here is not to do a vector mask, but instead to simply do a scalar one.
This would save extra vector loads too, since all values could just be masked with compile-time constant 0xAAAA.

Bingo! That's indeed a better way to fix this issue. Just to confirm my understanding: we don't need to construct a vector mask and AND it with the comparison result; instead, we can AND the final result (prim_hash_matches/sec_hash_matches) with a scalar constant mask at the end. But it appears the correct constant should be 0x5555, not 0xAAAA, because we only care about the even-index bits, following the code logic of the default scalar path.

>               /* Compare all signatures in the bucket */
> -             *sec_hash_matches = _mm_movemask_epi8(_mm_cmpeq_epi16(
> -                             _mm_load_si128(
> +             __m128i sec_cmp = _mm_cmpeq_epi16(_mm_load_si128(
>                                       (__m128i const *)sec_bkt->sig_current),
> -                             _mm_set1_epi16(sig)));
> +                                     _mm_set1_epi16(sig));
> +             *sec_hash_matches = _mm_movemask_epi8(_mm_and_si128(sec_cmp, shift_mask));
> +             }
>               break;
>  #elif defined(__ARM_NEON)
>       case RTE_HASH_COMPARE_NEON: {
> --
> 2.25.1
>
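For reference, a minimal sketch of the scalar-mask variant agreed on
above (assuming the surrounding compare_signatures() context from
rte_cuckoo_hash.c; this shows the discussed direction, not necessarily
the code as merged):

	case RTE_HASH_COMPARE_SSE:
		/* Compare all signatures in the bucket. ANDing the
		 * movemask result with the compile-time constant 0x5555
		 * keeps only the even-index bits (the low-byte MSB of
		 * each 16-bit comparison), matching the NEON and default
		 * scalar paths without an extra vector load. */
		*prim_hash_matches = _mm_movemask_epi8(_mm_cmpeq_epi16(
				_mm_load_si128(
				(__m128i const *)prim_bkt->sig_current),
				_mm_set1_epi16(sig))) & 0x5555;
		*sec_hash_matches = _mm_movemask_epi8(_mm_cmpeq_epi16(
				_mm_load_si128(
				(__m128i const *)sec_bkt->sig_current),
				_mm_set1_epi16(sig))) & 0x5555;
		break;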
  

Patch

diff --git a/lib/hash/rte_cuckoo_hash.c b/lib/hash/rte_cuckoo_hash.c
index d92a903bb3..acaa8b74bd 100644
--- a/lib/hash/rte_cuckoo_hash.c
+++ b/lib/hash/rte_cuckoo_hash.c
@@ -1862,17 +1862,19 @@  compare_signatures(uint32_t *prim_hash_matches, uint32_t *sec_hash_matches,
 	/* For match mask the first bit of every two bits indicates the match */
 	switch (sig_cmp_fn) {
 #if defined(__SSE2__)
-	case RTE_HASH_COMPARE_SSE:
+	case RTE_HASH_COMPARE_SSE: {
 		/* Compare all signatures in the bucket */
-		*prim_hash_matches = _mm_movemask_epi8(_mm_cmpeq_epi16(
-				_mm_load_si128(
+		__m128i shift_mask = _mm_set1_epi16(0x0080);
+		__m128i prim_cmp = _mm_cmpeq_epi16(_mm_load_si128(
 					(__m128i const *)prim_bkt->sig_current),
-				_mm_set1_epi16(sig)));
+					_mm_set1_epi16(sig));
+		*prim_hash_matches = _mm_movemask_epi8(_mm_and_si128(prim_cmp, shift_mask));
 		/* Compare all signatures in the bucket */
-		*sec_hash_matches = _mm_movemask_epi8(_mm_cmpeq_epi16(
-				_mm_load_si128(
+		__m128i sec_cmp = _mm_cmpeq_epi16(_mm_load_si128(
 					(__m128i const *)sec_bkt->sig_current),
-				_mm_set1_epi16(sig)));
+					_mm_set1_epi16(sig));
+		*sec_hash_matches = _mm_movemask_epi8(_mm_and_si128(sec_cmp, shift_mask));
+		}
 		break;
 #elif defined(__ARM_NEON)
 	case RTE_HASH_COMPARE_NEON: {