[v2,4/4] test/lpm: improve coverage on tbl8

Message ID 20210114065926.1200855-5-ruifeng.wang@arm.com (mailing list archive)
State Accepted, archived
Delegated to: David Marchand
Series lpm lookupx4 fixes

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK
ci/intel-Testing success Testing PASS

Commit Message

Ruifeng Wang Jan. 14, 2021, 6:59 a.m. UTC
  Existing test cases create 256 tbl8 groups for testing. That number covers
only an 8-bit next_hop/group field. Since the next_hop/group field has been
extended to 24 bits, creating more than 256 groups in the tests improves
coverage.

Coverage was not expanded to the maximum supported number of groups, because
that would take too long to run in this fast test.

Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
Tested-by: David Christensen <drc@linux.vnet.ibm.com>
Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
---
v2:
Check all 4 returned next hops. (Vladimir)

 app/test/test_lpm.c | 25 +++++++++++++++++--------
 1 file changed, 17 insertions(+), 8 deletions(-)
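
For context, the change amounts to configuring the LPM table with more than 256
tbl8 groups, so that group indices and next hops exercise values beyond the old
8-bit range. Below is a minimal standalone sketch of that setup, not part of the
patch (it assumes EAL has already been initialized by the surrounding harness;
the table name "lpm_tbl8_demo" and the helper name are illustrative only):

#include <stdint.h>
#include <rte_ip.h>
#include <rte_lpm.h>
#include <rte_memory.h>

/* Sketch only: build an LPM table with 512 tbl8 groups and add /32 routes
 * whose next hops grow past 255, mirroring what the updated test does.
 * Assumes rte_eal_init() has already been called. */
static int
tbl8_coverage_sketch(void)
{
	struct rte_lpm_config config = {
		.max_rules = 256 * 32,
		.number_tbl8s = 512,	/* more groups than an 8-bit field can index */
		.flags = 0,
	};
	struct rte_lpm *lpm;
	uint32_t ip, next_hop_base = 100, next_hop_return;

	lpm = rte_lpm_create("lpm_tbl8_demo", SOCKET_ID_ANY, &config);
	if (lpm == NULL)
		return -1;

	/* Each iteration lands in a different /24, so every /32 route
	 * consumes its own tbl8 group and gets a distinct next hop. */
	for (ip = RTE_IPV4(0, 0, 0, 0); ip <= RTE_IPV4(0, 1, 255, 0); ip += 256) {
		if (rte_lpm_add(lpm, ip, 32, next_hop_base + ip) != 0)
			break;
		if (rte_lpm_lookup(lpm, ip, &next_hop_return) != 0 ||
				next_hop_return != next_hop_base + ip)
			break;
	}

	rte_lpm_free(lpm);
	return 0;
}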
  

Comments

Vladimir Medvedkin Jan. 14, 2021, 11:14 a.m. UTC | #1
On 14/01/2021 06:59, Ruifeng Wang wrote:
> Existing test cases create 256 tbl8 groups for testing. That number covers
> only an 8-bit next_hop/group field. Since the next_hop/group field has been
> extended to 24 bits, creating more than 256 groups in the tests improves
> coverage.
> 
> Coverage was not expanded to the maximum supported number of groups, because
> that would take too long to run in this fast test.
> 
> Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
> Tested-by: David Christensen <drc@linux.vnet.ibm.com>
> Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
> ---
> v2:
> Check all 4 returned next hops. (Vladimir)
> 
> ...

Acked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
  

Patch

diff --git a/app/test/test_lpm.c b/app/test/test_lpm.c
index 258b2f67c..556f5a67b 100644
--- a/app/test/test_lpm.c
+++ b/app/test/test_lpm.c
@@ -993,7 +993,7 @@  test13(void)
 }
 
 /*
- * Fore TBL8 extension exhaustion. Add 256 rules that require a tbl8 extension.
+ * For TBL8 extension exhaustion. Add 512 rules that require a tbl8 extension.
  * No more tbl8 extensions will be allowed. Now add one more rule that required
  * a tbl8 extension and get fail.
  * */
@@ -1008,28 +1008,37 @@  test14(void)
 	struct rte_lpm_config config;
 
 	config.max_rules = 256 * 32;
-	config.number_tbl8s = NUMBER_TBL8S;
+	config.number_tbl8s = 512;
 	config.flags = 0;
-	uint32_t ip, next_hop_add, next_hop_return;
+	uint32_t ip, next_hop_base, next_hop_return;
 	uint8_t depth;
 	int32_t status = 0;
+	xmm_t ipx4;
+	uint32_t hop[4];
 
 	/* Add enough space for 256 rules for every depth */
 	lpm = rte_lpm_create(__func__, SOCKET_ID_ANY, &config);
 	TEST_LPM_ASSERT(lpm != NULL);
 
 	depth = 32;
-	next_hop_add = 100;
+	next_hop_base = 100;
 	ip = RTE_IPV4(0, 0, 0, 0);
 
 	/* Add 256 rules that require a tbl8 extension */
-	for (; ip <= RTE_IPV4(0, 0, 255, 0); ip += 256) {
-		status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	for (; ip <= RTE_IPV4(0, 1, 255, 0); ip += 256) {
+		status = rte_lpm_add(lpm, ip, depth, next_hop_base + ip);
 		TEST_LPM_ASSERT(status == 0);
 
 		status = rte_lpm_lookup(lpm, ip, &next_hop_return);
 		TEST_LPM_ASSERT((status == 0) &&
-				(next_hop_return == next_hop_add));
+				(next_hop_return == next_hop_base + ip));
+
+		ipx4 = vect_set_epi32(ip + 3, ip + 2, ip + 1, ip);
+		rte_lpm_lookupx4(lpm, ipx4, hop, UINT32_MAX);
+		TEST_LPM_ASSERT(hop[0] == next_hop_base + ip);
+		TEST_LPM_ASSERT(hop[1] == UINT32_MAX);
+		TEST_LPM_ASSERT(hop[2] == UINT32_MAX);
+		TEST_LPM_ASSERT(hop[3] == UINT32_MAX);
 	}
 
 	/* All tbl8 extensions have been used above. Try to add one more and
@@ -1037,7 +1046,7 @@  test14(void)
 	ip = RTE_IPV4(1, 0, 0, 0);
 	depth = 32;
 
-	status = rte_lpm_add(lpm, ip, depth, next_hop_add);
+	status = rte_lpm_add(lpm, ip, depth, next_hop_base + ip);
 	TEST_LPM_ASSERT(status < 0);
 
 	rte_lpm_free(lpm);
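
The new lookupx4 assertions rely on rte_lpm_lookupx4() filling lanes that miss
with the default value passed as its last argument (UINT32_MAX here): only the
exact /32 address added in the loop matches, so ip + 1 through ip + 3 come back
as the default. A condensed sketch of that check outside the test harness (the
helper name check_lookupx4 and the expected parameter are illustrative only):

#include <assert.h>
#include <stdint.h>
#include <rte_lpm.h>
#include <rte_vect.h>

/* Sketch of the x4 check added above: look up ip .. ip + 3 in one call.
 * Only `ip` has a /32 route, so the other three lanes miss and are filled
 * with the default value given as the last argument (UINT32_MAX here).
 * `expected` is the next hop that was stored for `ip`. */
static void
check_lookupx4(struct rte_lpm *lpm, uint32_t ip, uint32_t expected)
{
	uint32_t hop[4];
	xmm_t ipx4 = vect_set_epi32(ip + 3, ip + 2, ip + 1, ip);

	rte_lpm_lookupx4(lpm, ipx4, hop, UINT32_MAX);

	assert(hop[0] == expected);	/* lane 0 hits the /32 route */
	assert(hop[1] == UINT32_MAX);	/* lanes 1-3 miss, get the default */
	assert(hop[2] == UINT32_MAX);
	assert(hop[3] == UINT32_MAX);
}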