crypto/ipsec_mb: Do not dequeue ops from ring after job flush.

Message ID 20230927124034.1092086-1-krzysztof.karas@intel.com (mailing list archive)
State Changes Requested, archived
Delegated to: akhil goyal
Series crypto/ipsec_mb: Do not dequeue ops from ring after job flush.

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/loongarch-compilation success Compilation OK
ci/loongarch-unit-testing success Unit Testing PASS
ci/github-robot: build success github build: passed
ci/iol-mellanox-Performance success Performance Testing PASS
ci/iol-compile-amd64-testing success Testing PASS
ci/iol-broadcom-Performance success Performance Testing PASS
ci/iol-intel-Performance success Performance Testing PASS
ci/iol-broadcom-Functional success Functional Testing PASS
ci/iol-intel-Functional success Functional Testing PASS
ci/iol-unit-arm64-testing success Testing PASS
ci/iol-unit-amd64-testing success Testing PASS
ci/iol-compile-arm64-testing success Testing PASS
ci/iol-sample-apps-testing success Testing PASS
ci/Intel-compilation success Compilation OK
ci/intel-Testing success Testing PASS
ci/intel-Functional success Functional PASS

Commit Message

Krzysztof Karas Sept. 27, 2023, 12:40 p.m. UTC
  Previously it was possible to increment `processed_jobs` to a value
greater than the requested `nb_ops`: after flushing at most `nb_ops`
jobs, the while loop continued, so `processed_jobs` could keep growing
past `nb_ops`. If the `ops` array passed to the function was only
`nb_ops` entries long, `aesni_mb_dequeue_burst()` would then write
outside the bounds of `ops`.

Fixes: b50b8b5b38f8 ("crypto/ipsec_mb: use burst API in AESNI")
Cc: stable@dpdk.org

Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>

---
 drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)
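
To make the failure mode concrete, here is a toy model of the old control flow. The helper names (get_next_burst(), flush_burst()) and the fixed sizes are hypothetical stand-ins for IMB_GET_NEXT_BURST(), IMB_FLUSH_BURST() and the driver's bookkeeping, not the actual pmd_aesni_mb.c source; only the loop shape matters here.

/*
 * Toy model of the old dequeue flow (hypothetical helper names, not the
 * driver source): the flush path keeps storing completed jobs without
 * checking the nb_ops budget, so processed_jobs can overrun ops[].
 */
#include <stdint.h>
#include <stdio.h>

static uint16_t ring = 12;	/* completed jobs waiting in the manager */

/* Pretend a full burst of free job slots never becomes available,
 * which is what forces the flush path in the real driver. */
static uint16_t get_next_burst(uint16_t n) { (void)n; return 0; }

/* Flush up to n completed jobs out of the ring. */
static uint16_t flush_burst(uint16_t n)
{
	uint16_t done = ring < n ? ring : n;

	ring -= done;
	return done;
}

int main(void)
{
	const uint16_t nb_ops = 4;	/* caller's ops[] has only 4 slots */
	uint16_t processed_jobs = 0;
	uint16_t n = nb_ops;		/* burst size requested this round */

	/* Old code: a *while* loop, so flushing repeats until a full burst
	 * of free slots shows up, storing every flushed job on the way. */
	while (get_next_burst(n) < n) {
		uint16_t nb_jobs = flush_burst(n);

		if (nb_jobs == 0)
			break;		/* toy-only termination */
		/* The driver stores each flushed job into
		 * ops[processed_jobs++] here, with no check against nb_ops. */
		processed_jobs += nb_jobs;
	}

	printf("processed_jobs=%u, ops[] capacity=%u\n",
	       (unsigned)processed_jobs, (unsigned)nb_ops);
	return 0;
}

Running it prints processed_jobs=12 against an ops[] capacity of 4, which is exactly the overrun the patch guards against.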
  

Comments

Akhil Goyal Oct. 30, 2023, 7:10 a.m. UTC | #1
> Subject: [EXT] [PATCH] crypto/ipsec_mb: Do not dequeue ops from ring after job
> flush.
> Previously it was possible to increment `processed_jobs` to a value
> greater than the requested `nb_ops`: after flushing at most `nb_ops`
> jobs, the while loop continued, so `processed_jobs` could keep growing
> past `nb_ops`. If the `ops` array passed to the function was only
> `nb_ops` entries long, `aesni_mb_dequeue_burst()` would then write
> outside the bounds of `ops`.
> 
> Fixes: b50b8b5b38f8 ("crypto/ipsec_mb: use burst API in AESNI")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
> 
Pablo/Kai,
Can you please review?
  
Cornu, Marcel D Nov. 2, 2023, 5:32 p.m. UTC | #2
> -----Original Message-----
> From: Karas, Krzysztof <krzysztof.karas@intel.com>
> Sent: Wednesday, September 27, 2023 1:41 PM
> To: Ji, Kai <kai.ji@intel.com>; De Lara Guarch, Pablo
> <pablo.de.lara.guarch@intel.com>; Cornu, Marcel D
> <marcel.d.cornu@intel.com>; Power, Ciara <ciara.power@intel.com>
> Cc: dev@dpdk.org; Karas, Krzysztof <krzysztof.karas@intel.com>;
> stable@dpdk.org
> Subject: [PATCH] crypto/ipsec_mb: Do not dequeue ops from ring after job flush.
> 
> Previously it was possible to increment `processed_jobs` to a value greater than
> the requested `nb_ops`: after flushing at most `nb_ops` jobs, the while loop
> continued, so `processed_jobs` could keep growing past `nb_ops`. If the `ops`
> array passed to the function was only `nb_ops` entries long, then
> `aesni_mb_dequeue_burst()` would write outside the bounds of `ops`.
> 
> Fixes: b50b8b5b38f8 ("crypto/ipsec_mb: use burst API in AESNI")
> Cc: stable@dpdk.org
> 
> Signed-off-by: Krzysztof Karas <krzysztof.karas@intel.com>
> 
> ---
>  drivers/crypto/ipsec_mb/pmd_aesni_mb.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
> index 9e298023d7..ff52bc85a4 100644
> --- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
> +++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
> @@ -2056,7 +2056,7 @@ aesni_mb_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
>  		uint16_t n = (nb_ops / burst_sz) ?
>  			burst_sz : nb_ops;
> 
> -		while (unlikely((IMB_GET_NEXT_BURST(mb_mgr, n, jobs)) < n)) {
> +		if (unlikely((IMB_GET_NEXT_BURST(mb_mgr, n, jobs)) < n)) {
>  			/*
>  			 * Not enough free jobs in the queue
>  			 * Flush n jobs until enough jobs available
> @@ -2074,6 +2074,8 @@ aesni_mb_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
>  					break;
>  				}
>  			}
> +			nb_ops -= nb_jobs;
This assumes the loop above completes without errors.
If post_process_mb_job() returns an error, the loop breaks out early and nb_ops ends up decremented by the wrong value.
Maybe decrementing by 'i' would work better here; a sketch follows the quoted diff below.
> +			continue;
>  		}
> 
>  		/*
> --
> 2.34.1
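
For clarity, a sketch of the adjustment suggested above (a hand-written fragment based on the quoted hunk, assuming post_process_mb_job() keeps its current call shape; not a tested patch): decrement nb_ops by the number of jobs that actually made it into ops[] before the loop broke, rather than by the full flush count.

nb_jobs = IMB_FLUSH_BURST(mb_mgr, n, jobs);
for (i = 0; i < nb_jobs; i++) {
	op = post_process_mb_job(qp, jobs[i]);
	if (op == NULL)
		break;			/* only i ops were stored so far */
	ops[processed_jobs++] = op;
}
nb_ops -= i;	/* account for what reached ops[], not for nb_jobs */
continue;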

Hi Krzysztof, I noticed that when I run the dpdk-test-crypto-perf application with an imix test, the number of failed enqueue ops increases compared to a run without this patch.

Example command: dpdk-test-crypto-perf -l 4,5  --no-huge  --vdev="crypto_aesni_mb" -- --pool-sz 8192 --cipher-algo aes-cbc --cipher-key-sz 16 --optype cipher-only --cipher-iv-sz 16 --cipher-op encrypt --silent --buffer-sz 16,6144 --imix 99,1 --burst-sz 32

Could you try it and see if you get the same result?

Regards,
Marcel
  

Patch

diff --git a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
index 9e298023d7..ff52bc85a4 100644
--- a/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
+++ b/drivers/crypto/ipsec_mb/pmd_aesni_mb.c
@@ -2056,7 +2056,7 @@  aesni_mb_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
 		uint16_t n = (nb_ops / burst_sz) ?
 			burst_sz : nb_ops;
 
-		while (unlikely((IMB_GET_NEXT_BURST(mb_mgr, n, jobs)) < n)) {
+		if (unlikely((IMB_GET_NEXT_BURST(mb_mgr, n, jobs)) < n)) {
 			/*
 			 * Not enough free jobs in the queue
 			 * Flush n jobs until enough jobs available
@@ -2074,6 +2074,8 @@  aesni_mb_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops,
 					break;
 				}
 			}
+			nb_ops -= nb_jobs;
+			continue;
 		}
 
 		/*
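
For contrast with the toy model under the commit message, the same sketch with this patch applied (same hypothetical ring/get_next_burst()/flush_burst() helpers, not the driver source): the flush branch becomes an if, reduces the budget by what it flushed, and continues, so the stores into ops[] stay within nb_ops.

	uint16_t nb_ops = 4;			/* ops[] capacity */
	uint16_t processed_jobs = 0;

	while (nb_ops != 0) {
		uint16_t n = nb_ops < 8 ? nb_ops : 8;	/* 8 = toy burst size */

		if (get_next_burst(n) < n) {		/* patched: if, not while */
			uint16_t nb_jobs = flush_burst(n);

			if (nb_jobs == 0)
				break;			/* toy-only termination */
			processed_jobs += nb_jobs;	/* what went into ops[] */
			nb_ops -= nb_jobs;		/* shrink the budget */
			continue;			/* re-check before storing more */
		}

		/* ... normal submit/dequeue path, consuming n from nb_ops ... */
		nb_ops -= n;
	}

With the toy numbers this stops after storing 4 jobs. Marcel's note above still applies: on an error in the post-processing loop the posted code would subtract nb_jobs even though fewer ops were stored, which is the accounting gap the 'i' suggestion addresses.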