[1/2] hash: add lock free support for extendable bucket

Message ID 20190320223513.31249-2-dharmik.thakkar@arm.com
State Superseded, archived
Delegated to: Thomas Monjalon
Series
  • hash: add lock free support for ext bkt

Checks

Context Check Description
ci/checkpatch success coding style OK
ci/Intel-compilation success Compilation OK
ci/mellanox-Performance-Testing success Performance Testing PASS
ci/intel-Performance-Testing success Performance Testing PASS

Commit Message

Dharmik Thakkar March 20, 2019, 10:35 p.m.
This patch enables lock-free read-write concurrency support for the
extendable bucket feature.

Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Gavin Hu <gavin.hu@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
---
 doc/guides/prog_guide/hash_lib.rst |   3 +-
 lib/librte_hash/rte_cuckoo_hash.c  | 163 ++++++++++++++++++++---------
 lib/librte_hash/rte_cuckoo_hash.h  |   7 ++
 3 files changed, 121 insertions(+), 52 deletions(-)
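
With this change, both extra flags can be combined at table creation. A minimal sketch (field values are illustrative assumptions, not taken from the patch):

	struct rte_hash_parameters params = {
		.name = "lf_ext_tbl",
		.entries = 1024,
		.key_len = sizeof(uint32_t),
		.socket_id = (int)rte_socket_id(),
		.extra_flag = RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF |
			      RTE_HASH_EXTRA_FLAGS_EXT_TABLE,
	};
	struct rte_hash *h = rte_hash_create(&params);
	/* Before this patch, this flag combination failed with
	 * rte_errno == EINVAL.
	 */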

Comments

Wang, Yipeng1 March 22, 2019, 11:48 p.m. | #1
Thanks for the patch! 

Comments inlined:

>-----Original Message-----
>From: Dharmik Thakkar [mailto:dharmik.thakkar@arm.com]
>Sent: Wednesday, March 20, 2019 3:35 PM
>To: Wang, Yipeng1 <yipeng1.wang@intel.com>; Gobriel, Sameh <sameh.gobriel@intel.com>; Richardson, Bruce
><bruce.richardson@intel.com>; De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>; Mcnamara, John
><john.mcnamara@intel.com>; Kovacevic, Marko <marko.kovacevic@intel.com>
>Cc: dev@dpdk.org; Dharmik Thakkar <dharmik.thakkar@arm.com>
>Subject: [PATCH 1/2] hash: add lock free support for extendable bucket
>
>This patch enables lock-free read-write concurrency support for the
>extendable bucket feature.
>
>Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
>Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
>Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
>Reviewed-by: Gavin Hu <gavin.hu@arm.com>
>Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
>---
> doc/guides/prog_guide/hash_lib.rst |   3 +-
> lib/librte_hash/rte_cuckoo_hash.c  | 163 ++++++++++++++++++++---------
> lib/librte_hash/rte_cuckoo_hash.h  |   7 ++
> 3 files changed, 121 insertions(+), 52 deletions(-)
>
>diff --git a/doc/guides/prog_guide/hash_lib.rst b/doc/guides/prog_guide/hash_lib.rst
>index 85a6edfa8b16..b00446e949ba 100644
>--- a/doc/guides/prog_guide/hash_lib.rst
>+++ b/doc/guides/prog_guide/hash_lib.rst
>@@ -108,8 +108,7 @@ Extendable Bucket Functionality support
> An extra flag is used to enable this functionality (flag is not set by default). When the (RTE_HASH_EXTRA_FLAGS_EXT_TABLE) is set
>and
> in the very unlikely case due to excessive hash collisions that a key has failed to be inserted, the hash table bucket is extended with a
>linked
> list to insert these failed keys. This feature is important for the workloads (e.g. telco workloads) that need to insert up to 100% of the
>-hash table size and can't tolerate any key insertion failure (even if very few). Currently the extendable bucket is not supported
>-with the lock-free concurrency implementation (RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF).
>+hash table size and can't tolerate any key insertion failure (even if very few).
[Wang, Yipeng] I am thinking maybe we make it a bit clearer here by adding something like:
Please note that with the lock-free flag enabled, users need to promptly free the deleted keys, to maintain the 100% capacity guarantee.

I want to add this because of the piggy-back mechanism: one un-recycled key with an un-recycled ext bucket may actually make a total
of 9 entries unavailable (the key's own entry plus the 8 entries in the ext bucket). So it would be useful to remind the user here.
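
For illustration, the application-side flow would look roughly like this (a sketch only; wait_for_readers() stands in for the application's own grace-period/RCU mechanism and is not a DPDK API):

	int32_t pos = rte_hash_del_key(h, &key);
	if (pos >= 0) {
		/* The key's slot - and any empty ext bucket piggy-backed
		 * on it - stays unavailable until the key is freed.
		 */
		wait_for_readers();
		rte_hash_free_key_with_position(h, pos);
	}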
>
>
>@@ -1054,7 +1059,15 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key,
> 			/* Check if slot is available */
> 			if (likely(cur_bkt->key_idx[i] == EMPTY_SLOT)) {
> 				cur_bkt->sig_current[i] = short_sig;
>-				cur_bkt->key_idx[i] = new_idx;
>+				/* Key can be of arbitrary length, so it is
>+				 * not possible to store it atomically.
>+				 * Hence the new key element's memory stores
>+				 * (key as well as data) should be complete
>+				 * before it is referenced.
>+				 */
[Wang, Yipeng] My understanding is that this atomic store is there to prevent the signature store from being reordered after the key_idx store.
But the comment does not exactly describe this reason.
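In other words, the writer's release store pairs with the reader's acquire load, roughly (a sketch, not the exact library code):

	/* Writer: key, data and signature are written first ... */
	cur_bkt->sig_current[i] = short_sig;
	/* ... then the release store publishes the slot; none of the
	 * prior stores may be reordered after it.
	 */
	__atomic_store_n(&cur_bkt->key_idx[i], new_idx, __ATOMIC_RELEASE);

	/* Reader: the acquire load pairs with the release store, so a
	 * non-empty key_idx implies the key/data/signature stores are
	 * visible to this thread.
	 */
	uint32_t key_idx = __atomic_load_n(&bkt->key_idx[i],
					   __ATOMIC_ACQUIRE);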
>+				__atomic_store_n(&cur_bkt->key_idx[i],
>+						 new_idx,
>+						 __ATOMIC_RELEASE);
> 				__hash_rw_writer_unlock(h);
> 				return new_idx - 1;
> 			}
>@@ -1545,6 +1597,14 @@ rte_hash_free_key_with_position(const struct rte_hash *h,
> 	/* Out of bounds */
> 	if (position >= total_entries)
> 		return -EINVAL;
>+	if (h->ext_table_support) {
>+		uint32_t index = h->ext_bkt_to_free[position];
[Wang, Yipeng] I think the user can theoretically set RTE_HASH_EXTRA_FLAGS_NO_FREE_ON_DEL to 1
but the LF flag to 0. I think here you assume this function is only called when the LF flag is 1. You may need to
add another condition, e.g. if (h->ext_table_support && h->readwrite_concur_lf_support).
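Something along these lines (a sketch of the suggested guard applied to the hunk above):

	if (h->ext_table_support && h->readwrite_concur_lf_support) {
		uint32_t index = h->ext_bkt_to_free[position];
		if (index) {
			/* Recycle empty ext bkt to free list. */
			rte_ring_sp_enqueue(h->free_ext_bkts,
					    (void *)(uintptr_t)index);
			h->ext_bkt_to_free[position] = 0;
		}
	}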
>+		if (index) {
>+			/* Recycle empty ext bkt to free list. */
>+			rte_ring_sp_enqueue(h->free_ext_bkts, (void *)(uintptr_t)index);
>+			h->ext_bkt_to_free[position] = 0;
>+		}
>+	}
>
> 	if (h->use_local_cache) {
> 		lcore_id = rte_lcore_id();
Dharmik Thakkar March 25, 2019, 8:10 p.m. | #2
+Honnappa

Hi Yipeng,

Thank you for reviewing!

> On Mar 22, 2019, at 6:48 PM, Wang, Yipeng1 <yipeng1.wang@intel.com> wrote:
> 
> Thanks for the patch! 
> 
> Comments inlined:
> 
>> -----Original Message-----
>> From: Dharmik Thakkar [mailto:dharmik.thakkar@arm.com]
>> Sent: Wednesday, March 20, 2019 3:35 PM
>> To: Wang, Yipeng1 <yipeng1.wang@intel.com>; Gobriel, Sameh <sameh.gobriel@intel.com>; Richardson, Bruce
>> <bruce.richardson@intel.com>; De Lara Guarch, Pablo <pablo.de.lara.guarch@intel.com>; Mcnamara, John
>> <john.mcnamara@intel.com>; Kovacevic, Marko <marko.kovacevic@intel.com>
>> Cc: dev@dpdk.org; Dharmik Thakkar <dharmik.thakkar@arm.com>
>> Subject: [PATCH 1/2] hash: add lock free support for extendable bucket
>> 
>> This patch enables lock-free read-write concurrency support for the
>> extendable bucket feature.
>> 
>> Suggested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
>> Signed-off-by: Dharmik Thakkar <dharmik.thakkar@arm.com>
>> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
>> Reviewed-by: Gavin Hu <gavin.hu@arm.com>
>> Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
>> ---
>> doc/guides/prog_guide/hash_lib.rst |   3 +-
>> lib/librte_hash/rte_cuckoo_hash.c  | 163 ++++++++++++++++++++---------
>> lib/librte_hash/rte_cuckoo_hash.h  |   7 ++
>> 3 files changed, 121 insertions(+), 52 deletions(-)
>> 
>> diff --git a/doc/guides/prog_guide/hash_lib.rst b/doc/guides/prog_guide/hash_lib.rst
>> index 85a6edfa8b16..b00446e949ba 100644
>> --- a/doc/guides/prog_guide/hash_lib.rst
>> +++ b/doc/guides/prog_guide/hash_lib.rst
>> @@ -108,8 +108,7 @@ Extendable Bucket Functionality support
>> An extra flag is used to enable this functionality (flag is not set by default). When the (RTE_HASH_EXTRA_FLAGS_EXT_TABLE) is set
>> and
>> in the very unlikely case due to excessive hash collisions that a key has failed to be inserted, the hash table bucket is extended with a
>> linked
>> list to insert these failed keys. This feature is important for the workloads (e.g. telco workloads) that need to insert up to 100% of the
>> -hash table size and can't tolerate any key insertion failure (even if very few). Currently the extendable bucket is not supported
>> -with the lock-free concurrency implementation (RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF).
>> +hash table size and can't tolerate any key insertion failure (even if very few).
> [Wang, Yipeng] I am thinking maybe we make it a bit clearer here by adding something like:
> Please note that with the lock-free flag enabled, users need to promptly free the deleted keys, to maintain the 100% capacity guarantee.
> 
> I want to add this because of the piggy-back mechanism: one un-recycled key with an un-recycled ext bucket may actually make a total
> of 9 entries unavailable (the key's own entry plus the 8 entries in the ext bucket). So it would be useful to remind the user here.
All right. I will add it.
>> 
>> 
>> @@ -1054,7 +1059,15 @@ __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key,
>> 			/* Check if slot is available */
>> 			if (likely(cur_bkt->key_idx[i] == EMPTY_SLOT)) {
>> 				cur_bkt->sig_current[i] = short_sig;
>> -				cur_bkt->key_idx[i] = new_idx;
>> +				/* Key can be of arbitrary length, so it is
>> +				 * not possible to store it atomically.
>> +				 * Hence the new key element's memory stores
>> +				 * (key as well as data) should be complete
>> +				 * before it is referenced.
>> +				 */
> [Wang, Yipeng] My understanding is that this atomic store is there to prevent the signature store from being reordered after the key_idx store.
> But the comment does not exactly describe this reason.
I will update the comment.
>> +				__atomic_store_n(&cur_bkt->key_idx[i],
>> +						 new_idx,
>> +						 __ATOMIC_RELEASE);
>> 				__hash_rw_writer_unlock(h);
>> 				return new_idx - 1;
>> 			}
>> @@ -1545,6 +1597,14 @@ rte_hash_free_key_with_position(const struct rte_hash *h,
>> 	/* Out of bounds */
>> 	if (position >= total_entries)
>> 		return -EINVAL;
>> +	if (h->ext_table_support) {
>> +		uint32_t index = h->ext_bkt_to_free[position];
> [Wang, Yipeng] I think the user can theoretically set RTE_HASH_EXTRA_FLAGS_NO_FREE_ON_DEL to 1
> but the LF flag to 0. I think here you assume this function is only called when the LF flag is 1. You may need to
> add another condition, e.g. if (h->ext_table_support && h->readwrite_concur_lf_support).
Correct. I will update it.
>> +		if (index) {
>> +			/* Recycle empty ext bkt to free list. */
>> +			rte_ring_sp_enqueue(h->free_ext_bkts, (void *)(uintptr_t)index);
>> +			h->ext_bkt_to_free[position] = 0;
>> +		}
>> +	}
>> 
>> 	if (h->use_local_cache) {
>> 		lcore_id = rte_lcore_id();

Patch

diff --git a/doc/guides/prog_guide/hash_lib.rst b/doc/guides/prog_guide/hash_lib.rst
index 85a6edfa8b16..b00446e949ba 100644
--- a/doc/guides/prog_guide/hash_lib.rst
+++ b/doc/guides/prog_guide/hash_lib.rst
@@ -108,8 +108,7 @@  Extendable Bucket Functionality support
 An extra flag is used to enable this functionality (flag is not set by default). When the (RTE_HASH_EXTRA_FLAGS_EXT_TABLE) is set and
 in the very unlikely case due to excessive hash collisions that a key has failed to be inserted, the hash table bucket is extended with a linked
 list to insert these failed keys. This feature is important for the workloads (e.g. telco workloads) that need to insert up to 100% of the
-hash table size and can't tolerate any key insertion failure (even if very few). Currently the extendable bucket is not supported
-with the lock-free concurrency implementation (RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF).
+hash table size and can't tolerate any key insertion failure (even if very few).
 
 
 Implementation Details (non Extendable Bucket Case)
diff --git a/lib/librte_hash/rte_cuckoo_hash.c b/lib/librte_hash/rte_cuckoo_hash.c
index c01489ba5193..4cb05a5528c1 100644
--- a/lib/librte_hash/rte_cuckoo_hash.c
+++ b/lib/librte_hash/rte_cuckoo_hash.c
@@ -140,6 +140,7 @@  rte_hash_create(const struct rte_hash_parameters *params)
 	unsigned int readwrite_concur_support = 0;
 	unsigned int writer_takes_lock = 0;
 	unsigned int no_free_on_del = 0;
+	uint32_t *ext_bkt_to_free = NULL;
 	uint32_t *tbl_chng_cnt = NULL;
 	unsigned int readwrite_concur_lf_support = 0;
 
@@ -170,15 +171,6 @@  rte_hash_create(const struct rte_hash_parameters *params)
 		return NULL;
 	}
 
-	if ((params->extra_flag & RTE_HASH_EXTRA_FLAGS_RW_CONCURRENCY_LF) &&
-	    (params->extra_flag & RTE_HASH_EXTRA_FLAGS_EXT_TABLE)) {
-		rte_errno = EINVAL;
-		RTE_LOG(ERR, HASH, "rte_hash_create: extendable bucket "
-			"feature not supported with rw concurrency "
-			"lock free\n");
-		return NULL;
-	}
-
 	/* Check extra flags field to check extra options. */
 	if (params->extra_flag & RTE_HASH_EXTRA_FLAGS_TRANS_MEM_SUPPORT)
 		hw_trans_mem_support = 1;
@@ -302,6 +294,16 @@  rte_hash_create(const struct rte_hash_parameters *params)
 		 */
 		for (i = 1; i <= num_buckets; i++)
 			rte_ring_sp_enqueue(r_ext, (void *)((uintptr_t) i));
+
+		if (readwrite_concur_lf_support) {
+			ext_bkt_to_free = rte_zmalloc(NULL, sizeof(uint32_t) *
+								num_key_slots, 0);
+			if (ext_bkt_to_free == NULL) {
+				RTE_LOG(ERR, HASH, "ext bkt to free memory allocation "
+								"failed\n");
+				goto err_unlock;
+			}
+		}
 	}
 
 	const uint32_t key_entry_size =
@@ -393,6 +395,7 @@  rte_hash_create(const struct rte_hash_parameters *params)
 		default_hash_func : params->hash_func;
 	h->key_store = k;
 	h->free_slots = r;
+	h->ext_bkt_to_free = ext_bkt_to_free;
 	h->tbl_chng_cnt = tbl_chng_cnt;
 	*h->tbl_chng_cnt = 0;
 	h->hw_trans_mem_support = hw_trans_mem_support;
@@ -443,6 +446,7 @@  rte_hash_create(const struct rte_hash_parameters *params)
 	rte_free(buckets_ext);
 	rte_free(k);
 	rte_free(tbl_chng_cnt);
+	rte_free(ext_bkt_to_free);
 	return NULL;
 }
 
@@ -484,6 +488,7 @@  rte_hash_free(struct rte_hash *h)
 	rte_free(h->buckets);
 	rte_free(h->buckets_ext);
 	rte_free(h->tbl_chng_cnt);
+	rte_free(h->ext_bkt_to_free);
 	rte_free(h);
 	rte_free(te);
 }
@@ -799,7 +804,7 @@  rte_hash_cuckoo_move_insert_mw(const struct rte_hash *h,
 			__atomic_store_n(h->tbl_chng_cnt,
 					 *h->tbl_chng_cnt + 1,
 					 __ATOMIC_RELEASE);
-			/* The stores to sig_alt and sig_current should not
+			/* The store to sig_current should not
 			 * move above the store to tbl_chng_cnt.
 			 */
 			__atomic_thread_fence(__ATOMIC_RELEASE);
@@ -831,7 +836,7 @@  rte_hash_cuckoo_move_insert_mw(const struct rte_hash *h,
 		__atomic_store_n(h->tbl_chng_cnt,
 				 *h->tbl_chng_cnt + 1,
 				 __ATOMIC_RELEASE);
-		/* The stores to sig_alt and sig_current should not
+		/* The store to sig_current should not
 		 * move above the store to tbl_chng_cnt.
 		 */
 		__atomic_thread_fence(__ATOMIC_RELEASE);
@@ -1054,7 +1059,15 @@  __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key,
 			/* Check if slot is available */
 			if (likely(cur_bkt->key_idx[i] == EMPTY_SLOT)) {
 				cur_bkt->sig_current[i] = short_sig;
-				cur_bkt->key_idx[i] = new_idx;
+				/* Key can be of arbitrary length, so it is
+				 * not possible to store it atomically.
+				 * Hence the new key element's memory stores
+				 * (key as well as data) should be complete
+				 * before it is referenced.
+				 */
+				__atomic_store_n(&cur_bkt->key_idx[i],
+						 new_idx,
+						 __ATOMIC_RELEASE);
 				__hash_rw_writer_unlock(h);
 				return new_idx - 1;
 			}
@@ -1072,7 +1085,15 @@  __rte_hash_add_key_with_hash(const struct rte_hash *h, const void *key,
 	bkt_id = (uint32_t)((uintptr_t)ext_bkt_id) - 1;
 	/* Use the first location of the new bucket */
 	(h->buckets_ext[bkt_id]).sig_current[0] = short_sig;
-	(h->buckets_ext[bkt_id]).key_idx[0] = new_idx;
+	/* Key can be of arbitrary length, so it is
+	 * not possible to store it atomically.
+	 * Hence the new key element's memory stores
+	 * (key as well as data) should be complete
+	 * before it is referenced.
+	 */
+	__atomic_store_n(&(h->buckets_ext[bkt_id]).key_idx[0],
+			 new_idx,
+			 __ATOMIC_RELEASE);
 	/* Link the new bucket to sec bucket linked list */
 	last = rte_hash_get_last_bkt(sec_bkt);
 	last->next = &h->buckets_ext[bkt_id];
@@ -1366,7 +1387,8 @@  remove_entry(const struct rte_hash *h, struct rte_hash_bucket *bkt, unsigned i)
  * empty slot.
  */
 static inline void
-__rte_hash_compact_ll(struct rte_hash_bucket *cur_bkt, int pos) {
+__rte_hash_compact_ll(const struct rte_hash *h,
+			struct rte_hash_bucket *cur_bkt, int pos) {
 	int i;
 	struct rte_hash_bucket *last_bkt;
 
@@ -1377,10 +1399,27 @@  __rte_hash_compact_ll(struct rte_hash_bucket *cur_bkt, int pos) {
 
 	for (i = RTE_HASH_BUCKET_ENTRIES - 1; i >= 0; i--) {
 		if (last_bkt->key_idx[i] != EMPTY_SLOT) {
-			cur_bkt->key_idx[pos] = last_bkt->key_idx[i];
 			cur_bkt->sig_current[pos] = last_bkt->sig_current[i];
+			__atomic_store_n(&cur_bkt->key_idx[pos],
+					 last_bkt->key_idx[i],
+					 __ATOMIC_RELEASE);
+			if (h->readwrite_concur_lf_support) {
+				/* Inform the readers that the table has changed
+				 * Since there is one writer, load acquire on
+				 * tbl_chng_cnt is not required.
+				 */
+				__atomic_store_n(h->tbl_chng_cnt,
+					 *h->tbl_chng_cnt + 1,
+					 __ATOMIC_RELEASE);
+				/* The store to sig_current should
+				 * not move above the store to tbl_chng_cnt.
+				 */
+				__atomic_thread_fence(__ATOMIC_RELEASE);
+			}
 			last_bkt->sig_current[i] = NULL_SIGNATURE;
-			last_bkt->key_idx[i] = EMPTY_SLOT;
+			__atomic_store_n(&last_bkt->key_idx[i],
+					 EMPTY_SLOT,
+					 __ATOMIC_RELEASE);
 			return;
 		}
 	}
@@ -1449,7 +1488,7 @@  __rte_hash_del_key_with_hash(const struct rte_hash *h, const void *key,
 	/* look for key in primary bucket */
 	ret = search_and_remove(h, key, prim_bkt, short_sig, &pos);
 	if (ret != -1) {
-		__rte_hash_compact_ll(prim_bkt, pos);
+		__rte_hash_compact_ll(h, prim_bkt, pos);
 		last_bkt = prim_bkt->next;
 		prev_bkt = prim_bkt;
 		goto return_bkt;
@@ -1461,7 +1500,7 @@  __rte_hash_del_key_with_hash(const struct rte_hash *h, const void *key,
 	FOR_EACH_BUCKET(cur_bkt, sec_bkt) {
 		ret = search_and_remove(h, key, cur_bkt, short_sig, &pos);
 		if (ret != -1) {
-			__rte_hash_compact_ll(cur_bkt, pos);
+			__rte_hash_compact_ll(h, cur_bkt, pos);
 			last_bkt = sec_bkt->next;
 			prev_bkt = sec_bkt;
 			goto return_bkt;
@@ -1488,11 +1527,24 @@  __rte_hash_del_key_with_hash(const struct rte_hash *h, const void *key,
 	}
 	/* found empty bucket and recycle */
 	if (i == RTE_HASH_BUCKET_ENTRIES) {
-		prev_bkt->next = last_bkt->next = NULL;
+		prev_bkt->next = NULL;
 		uint32_t index = last_bkt - h->buckets_ext + 1;
-		rte_ring_sp_enqueue(h->free_ext_bkts, (void *)(uintptr_t)index);
+		/* Recycle the empty bkt if
+		 * no_free_on_del is disabled.
+		 */
+		if (h->no_free_on_del)
+			/* Store index of an empty ext bkt to be recycled
+			 * on calling rte_hash_del_xxx APIs.
+			 * When lock free read-write concurrency is enabled,
+			 * an empty ext bkt cannot be put into free list
+			 * immediately (as readers might be using it still).
+			 * Hence freeing of the ext bkt is piggy-backed to
+			 * freeing of the key index.
+			 */
+			h->ext_bkt_to_free[ret] = index;
+		else
+			rte_ring_sp_enqueue(h->free_ext_bkts, (void *)(uintptr_t)index);
 	}
-
 	__hash_rw_writer_unlock(h);
 	return ret;
 }
@@ -1545,6 +1597,14 @@  rte_hash_free_key_with_position(const struct rte_hash *h,
 	/* Out of bounds */
 	if (position >= total_entries)
 		return -EINVAL;
+	if (h->ext_table_support) {
+		uint32_t index = h->ext_bkt_to_free[position];
+		if (index) {
+			/* Recycle empty ext bkt to free list. */
+			rte_ring_sp_enqueue(h->free_ext_bkts, (void *)(uintptr_t)index);
+			h->ext_bkt_to_free[position] = 0;
+		}
+	}
 
 	if (h->use_local_cache) {
 		lcore_id = rte_lcore_id();
@@ -1855,6 +1915,9 @@  __rte_hash_lookup_bulk_lf(const struct rte_hash *h, const void **keys,
 		rte_prefetch0(secondary_bkt[i]);
 	}
 
+	for (i = 0; i < num_keys; i++)
+		positions[i] = -ENOENT;
+
 	do {
 		/* Load the table change counter before the lookup
 		 * starts. Acquire semantics will make sure that
@@ -1899,7 +1962,6 @@  __rte_hash_lookup_bulk_lf(const struct rte_hash *h, const void **keys,
 
 		/* Compare keys, first hits in primary first */
 		for (i = 0; i < num_keys; i++) {
-			positions[i] = -ENOENT;
 			while (prim_hitmask[i]) {
 				uint32_t hit_index =
 						__builtin_ctzl(prim_hitmask[i])
@@ -1972,6 +2034,35 @@  __rte_hash_lookup_bulk_lf(const struct rte_hash *h, const void **keys,
 			continue;
 		}
 
+		/* all found, do not need to go through ext bkt */
+		if (hits == ((1ULL << num_keys) - 1)) {
+			if (hit_mask != NULL)
+				*hit_mask = hits;
+			return;
+		}
+		/* need to check ext buckets for match */
+		if (h->ext_table_support) {
+			for (i = 0; i < num_keys; i++) {
+				if ((hits & (1ULL << i)) != 0)
+					continue;
+				next_bkt = secondary_bkt[i]->next;
+				FOR_EACH_BUCKET(cur_bkt, next_bkt) {
+					if (data != NULL)
+						ret = search_one_bucket_lf(h,
+							keys[i], sig[i],
+							&data[i], cur_bkt);
+					else
+						ret = search_one_bucket_lf(h,
+								keys[i], sig[i],
+								NULL, cur_bkt);
+					if (ret != -1) {
+						positions[i] = ret;
+						hits |= 1ULL << i;
+						break;
+					}
+				}
+			}
+		}
 		/* The loads of sig_current in compare_signatures
 		 * should not move below the load from tbl_chng_cnt.
 		 */
@@ -1988,34 +2079,6 @@  __rte_hash_lookup_bulk_lf(const struct rte_hash *h, const void **keys,
 					__ATOMIC_ACQUIRE);
 	} while (cnt_b != cnt_a);
 
-	/* all found, do not need to go through ext bkt */
-	if ((hits == ((1ULL << num_keys) - 1)) || !h->ext_table_support) {
-		if (hit_mask != NULL)
-			*hit_mask = hits;
-		__hash_rw_reader_unlock(h);
-		return;
-	}
-
-	/* need to check ext buckets for match */
-	for (i = 0; i < num_keys; i++) {
-		if ((hits & (1ULL << i)) != 0)
-			continue;
-		next_bkt = secondary_bkt[i]->next;
-		FOR_EACH_BUCKET(cur_bkt, next_bkt) {
-			if (data != NULL)
-				ret = search_one_bucket_lf(h, keys[i],
-						sig[i], &data[i], cur_bkt);
-			else
-				ret = search_one_bucket_lf(h, keys[i],
-						sig[i], NULL, cur_bkt);
-			if (ret != -1) {
-				positions[i] = ret;
-				hits |= 1ULL << i;
-				break;
-			}
-		}
-	}
-
 	if (hit_mask != NULL)
 		*hit_mask = hits;
 }
diff --git a/lib/librte_hash/rte_cuckoo_hash.h b/lib/librte_hash/rte_cuckoo_hash.h
index eacdaa8d4684..48c85c890712 100644
--- a/lib/librte_hash/rte_cuckoo_hash.h
+++ b/lib/librte_hash/rte_cuckoo_hash.h
@@ -210,6 +210,13 @@  struct rte_hash {
 	rte_rwlock_t *readwrite_lock; /**< Read-write lock thread-safety. */
 	struct rte_hash_bucket *buckets_ext; /**< Extra buckets array */
 	struct rte_ring *free_ext_bkts; /**< Ring of indexes of free buckets */
+	/* Stores index of an empty ext bkt to be recycled on calling
+	 * rte_hash_del_xxx APIs. When lock free read-write concurrency is
+	 * enabled, an empty ext bkt cannot be put into free list immediately
+	 * (as readers might be using it still). Hence freeing of the ext bkt
+	 * is piggy-backed to freeing of the key index.
+	 */
+	uint32_t *ext_bkt_to_free;
 	uint32_t *tbl_chng_cnt;
 	/**< Indicates if the hash table changed from last read. */
 } __rte_cache_aligned;