From patchwork Fri Oct 23 07:14:42 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Suanming Mou
X-Patchwork-Id: 81880
X-Patchwork-Delegate: rasland@nvidia.com
From: Suanming Mou
To: Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko
Cc: dev@dpdk.org, Xueming Li
Date: Fri, 23 Oct 2020 15:14:42 +0800
Message-Id: <1603437295-119083-13-git-send-email-suanmingm@nvidia.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1603437295-119083-1-git-send-email-suanmingm@nvidia.com>
References: <1601984948-313027-1-git-send-email-suanmingm@nvidia.com> <1603437295-119083-1-git-send-email-suanmingm@nvidia.com>
Subject: [dpdk-dev] [PATCH v2 12/25] net/mlx5: support concurrent access for hash list

From: Xueming Li

To support concurrent access to the hash list, add the following:

1. A list-level read/write lock.
2. An entry reference counter.
3. Entry create/match/remove callbacks.
4. Removal of the insert/lookup/remove functions, which are not thread safe.
5. Register/unregister functions that support entry reuse.

For better performance, the lookup function takes the read lock so that
lookups from different threads can run concurrently, while all hash list
modification functions take the write lock, which blocks concurrent
modifications and lookups from other threads.

The changes to the objects that use the hash list are applied in the next
patches.

Signed-off-by: Xueming Li
Acked-by: Matan Azrad
---
 drivers/net/mlx5/linux/mlx5_os.c |  27 ++++---
 drivers/net/mlx5/mlx5.c          |  13 ++--
 drivers/net/mlx5/mlx5_flow.c     |   7 +-
 drivers/net/mlx5/mlx5_flow_dv.c  |   6 +-
 drivers/net/mlx5/mlx5_utils.c    | 154 ++++++++++++++++++++++++++++++++-------
 drivers/net/mlx5/mlx5_utils.h    | 149 ++++++++++++++++++++++++++++++-------
 6 files changed, 276 insertions(+), 80 deletions(-)

diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c
index 0900307..929fed2 100644
--- a/drivers/net/mlx5/linux/mlx5_os.c
+++ b/drivers/net/mlx5/linux/mlx5_os.c
@@ -236,14 +236,16 @@
 	return err;
 	/* Create tags hash list table.
*/ snprintf(s, sizeof(s), "%s_tags", sh->ibdev_name); - sh->tag_table = mlx5_hlist_create(s, MLX5_TAGS_HLIST_ARRAY_SIZE); + sh->tag_table = mlx5_hlist_create(s, MLX5_TAGS_HLIST_ARRAY_SIZE, 0, + 0, NULL, NULL, NULL); if (!sh->tag_table) { DRV_LOG(ERR, "tags with hash creation failed."); err = ENOMEM; goto error; } snprintf(s, sizeof(s), "%s_hdr_modify", sh->ibdev_name); - sh->modify_cmds = mlx5_hlist_create(s, MLX5_FLOW_HDR_MODIFY_HTABLE_SZ); + sh->modify_cmds = mlx5_hlist_create(s, MLX5_FLOW_HDR_MODIFY_HTABLE_SZ, + 0, 0, NULL, NULL, NULL); if (!sh->modify_cmds) { DRV_LOG(ERR, "hdr modify hash creation failed"); err = ENOMEM; @@ -251,7 +253,8 @@ } snprintf(s, sizeof(s), "%s_encaps_decaps", sh->ibdev_name); sh->encaps_decaps = mlx5_hlist_create(s, - MLX5_FLOW_ENCAP_DECAP_HTABLE_SZ); + MLX5_FLOW_ENCAP_DECAP_HTABLE_SZ, + 0, 0, NULL, NULL, NULL); if (!sh->encaps_decaps) { DRV_LOG(ERR, "encap decap hash creation failed"); err = ENOMEM; @@ -327,16 +330,16 @@ sh->pop_vlan_action = NULL; } if (sh->encaps_decaps) { - mlx5_hlist_destroy(sh->encaps_decaps, NULL, NULL); + mlx5_hlist_destroy(sh->encaps_decaps); sh->encaps_decaps = NULL; } if (sh->modify_cmds) { - mlx5_hlist_destroy(sh->modify_cmds, NULL, NULL); + mlx5_hlist_destroy(sh->modify_cmds); sh->modify_cmds = NULL; } if (sh->tag_table) { /* tags should be destroyed with flow before. */ - mlx5_hlist_destroy(sh->tag_table, NULL, NULL); + mlx5_hlist_destroy(sh->tag_table); sh->tag_table = NULL; } mlx5_free_table_hash_list(priv); @@ -386,16 +389,16 @@ mlx5_glue->destroy_flow_action (sh->default_miss_action); if (sh->encaps_decaps) { - mlx5_hlist_destroy(sh->encaps_decaps, NULL, NULL); + mlx5_hlist_destroy(sh->encaps_decaps); sh->encaps_decaps = NULL; } if (sh->modify_cmds) { - mlx5_hlist_destroy(sh->modify_cmds, NULL, NULL); + mlx5_hlist_destroy(sh->modify_cmds); sh->modify_cmds = NULL; } if (sh->tag_table) { /* tags should be destroyed with flow before. 
*/ - mlx5_hlist_destroy(sh->tag_table, NULL, NULL); + mlx5_hlist_destroy(sh->tag_table); sh->tag_table = NULL; } mlx5_free_table_hash_list(priv); @@ -1454,7 +1457,9 @@ mlx5_flow_ext_mreg_supported(eth_dev) && priv->sh->dv_regc0_mask) { priv->mreg_cp_tbl = mlx5_hlist_create(MLX5_FLOW_MREG_HNAME, - MLX5_FLOW_MREG_HTABLE_SZ); + MLX5_FLOW_MREG_HTABLE_SZ, + 0, 0, + NULL, NULL, NULL); if (!priv->mreg_cp_tbl) { err = ENOMEM; goto error; @@ -1465,7 +1470,7 @@ error: if (priv) { if (priv->mreg_cp_tbl) - mlx5_hlist_destroy(priv->mreg_cp_tbl, NULL, NULL); + mlx5_hlist_destroy(priv->mreg_cp_tbl); if (priv->sh) mlx5_os_free_shared_dr(priv); if (priv->nl_socket_route >= 0) diff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c index 5fbb342..da043e2 100644 --- a/drivers/net/mlx5/mlx5.c +++ b/drivers/net/mlx5/mlx5.c @@ -1025,7 +1025,7 @@ struct mlx5_dev_ctx_shared * if (!sh->flow_tbls) return; - pos = mlx5_hlist_lookup(sh->flow_tbls, table_key.v64); + pos = mlx5_hlist_lookup(sh->flow_tbls, table_key.v64, NULL); if (pos) { tbl_data = container_of(pos, struct mlx5_flow_tbl_data_entry, entry); @@ -1034,7 +1034,7 @@ struct mlx5_dev_ctx_shared * mlx5_free(tbl_data); } table_key.direction = 1; - pos = mlx5_hlist_lookup(sh->flow_tbls, table_key.v64); + pos = mlx5_hlist_lookup(sh->flow_tbls, table_key.v64, NULL); if (pos) { tbl_data = container_of(pos, struct mlx5_flow_tbl_data_entry, entry); @@ -1044,7 +1044,7 @@ struct mlx5_dev_ctx_shared * } table_key.direction = 0; table_key.domain = 1; - pos = mlx5_hlist_lookup(sh->flow_tbls, table_key.v64); + pos = mlx5_hlist_lookup(sh->flow_tbls, table_key.v64, NULL); if (pos) { tbl_data = container_of(pos, struct mlx5_flow_tbl_data_entry, entry); @@ -1052,7 +1052,7 @@ struct mlx5_dev_ctx_shared * mlx5_hlist_remove(sh->flow_tbls, pos); mlx5_free(tbl_data); } - mlx5_hlist_destroy(sh->flow_tbls, NULL, NULL); + mlx5_hlist_destroy(sh->flow_tbls); } /** @@ -1074,7 +1074,8 @@ struct mlx5_dev_ctx_shared * MLX5_ASSERT(sh); snprintf(s, sizeof(s), "%s_flow_table", priv->sh->ibdev_name); - sh->flow_tbls = mlx5_hlist_create(s, MLX5_FLOW_TABLE_HLIST_ARRAY_SIZE); + sh->flow_tbls = mlx5_hlist_create(s, MLX5_FLOW_TABLE_HLIST_ARRAY_SIZE, + 0, 0, NULL, NULL, NULL); if (!sh->flow_tbls) { DRV_LOG(ERR, "flow tables with hash creation failed."); err = ENOMEM; @@ -1304,7 +1305,7 @@ struct mlx5_dev_ctx_shared * if (priv->drop_queue.hrxq) mlx5_drop_action_destroy(dev); if (priv->mreg_cp_tbl) - mlx5_hlist_destroy(priv->mreg_cp_tbl, NULL, NULL); + mlx5_hlist_destroy(priv->mreg_cp_tbl); mlx5_mprq_free_mp(dev); mlx5_os_free_shared_dr(priv); if (priv->rss_conf.rss_key != NULL) diff --git a/drivers/net/mlx5/mlx5_flow.c b/drivers/net/mlx5/mlx5_flow.c index c6d3cc4..80b4980 100644 --- a/drivers/net/mlx5/mlx5_flow.c +++ b/drivers/net/mlx5/mlx5_flow.c @@ -3377,7 +3377,7 @@ struct mlx5_flow_tunnel_info { cp_mreg.src = ret; /* Check if already registered. */ MLX5_ASSERT(priv->mreg_cp_tbl); - mcp_res = (void *)mlx5_hlist_lookup(priv->mreg_cp_tbl, mark_id); + mcp_res = (void *)mlx5_hlist_lookup(priv->mreg_cp_tbl, mark_id, NULL); if (mcp_res) { /* For non-default rule. 
*/ if (mark_id != MLX5_DEFAULT_COPY_ID) @@ -3454,8 +3454,7 @@ struct mlx5_flow_tunnel_info { goto error; mcp_res->refcnt++; mcp_res->hlist_ent.key = mark_id; - ret = mlx5_hlist_insert(priv->mreg_cp_tbl, - &mcp_res->hlist_ent); + ret = !mlx5_hlist_insert(priv->mreg_cp_tbl, &mcp_res->hlist_ent); MLX5_ASSERT(!ret); if (ret) goto error; @@ -3605,7 +3604,7 @@ struct mlx5_flow_tunnel_info { if (!priv->mreg_cp_tbl) return; mcp_res = (void *)mlx5_hlist_lookup(priv->mreg_cp_tbl, - MLX5_DEFAULT_COPY_ID); + MLX5_DEFAULT_COPY_ID, NULL); if (!mcp_res) return; MLX5_ASSERT(mcp_res->rix_flow); diff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c index 708ec65..43d16b4 100644 --- a/drivers/net/mlx5/mlx5_flow_dv.c +++ b/drivers/net/mlx5/mlx5_flow_dv.c @@ -7835,7 +7835,7 @@ struct field_modify_info modify_tcp[] = { } }; struct mlx5_hlist_entry *pos = mlx5_hlist_lookup(sh->flow_tbls, - table_key.v64); + table_key.v64, NULL); struct mlx5_flow_tbl_data_entry *tbl_data; uint32_t idx = 0; int ret; @@ -7892,7 +7892,7 @@ struct field_modify_info modify_tcp[] = { } } pos->key = table_key.v64; - ret = mlx5_hlist_insert(sh->flow_tbls, pos); + ret = !mlx5_hlist_insert(sh->flow_tbls, pos); if (ret < 0) { rte_flow_error_set(error, -ret, RTE_FLOW_ERROR_TYPE_UNSPECIFIED, NULL, @@ -8072,7 +8072,7 @@ struct field_modify_info modify_tcp[] = { int ret; /* Lookup a matching resource from cache. */ - entry = mlx5_hlist_lookup(sh->tag_table, (uint64_t)tag_be24); + entry = mlx5_hlist_lookup(sh->tag_table, (uint64_t)tag_be24, NULL); if (entry) { cache_resource = container_of (entry, struct mlx5_flow_dv_tag_resource, entry); diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c index 7a6b0c6..d041b07 100644 --- a/drivers/net/mlx5/mlx5_utils.c +++ b/drivers/net/mlx5/mlx5_utils.c @@ -9,14 +9,40 @@ #include "mlx5_utils.h" +/********************* Hash List **********************/ + +static struct mlx5_hlist_entry * +mlx5_hlist_default_create_cb(struct mlx5_hlist *h, uint64_t key __rte_unused, + void *ctx __rte_unused) +{ + return mlx5_malloc(MLX5_MEM_ZERO, h->entry_sz, 0, SOCKET_ID_ANY); +} + +static void +mlx5_hlist_default_remove_cb(struct mlx5_hlist *h __rte_unused, + struct mlx5_hlist_entry *entry) +{ + mlx5_free(entry); +} + +static int +mlx5_hlist_default_match_cb(struct mlx5_hlist *h __rte_unused, + struct mlx5_hlist_entry *entry, + uint64_t key, void *ctx __rte_unused) +{ + return entry->key != key; +} + struct mlx5_hlist * -mlx5_hlist_create(const char *name, uint32_t size) +mlx5_hlist_create(const char *name, uint32_t size, uint32_t entry_size, + uint32_t flags, mlx5_hlist_create_cb cb_create, + mlx5_hlist_match_cb cb_match, mlx5_hlist_remove_cb cb_remove) { struct mlx5_hlist *h; uint32_t act_size; uint32_t alloc_size; - if (!size) + if (!size || (!cb_create ^ !cb_remove)) return NULL; /* Align to the next power of 2, 32bits integer is enough now. */ if (!rte_is_power_of_2(size)) { @@ -40,45 +66,108 @@ struct mlx5_hlist * snprintf(h->name, MLX5_HLIST_NAMESIZE, "%s", name); h->table_sz = act_size; h->mask = act_size - 1; + h->entry_sz = entry_size; + h->direct_key = !!(flags & MLX5_HLIST_DIRECT_KEY); + h->write_most = !!(flags & MLX5_HLIST_WRITE_MOST); + h->cb_create = cb_create ? cb_create : mlx5_hlist_default_create_cb; + h->cb_match = cb_match ? cb_match : mlx5_hlist_default_match_cb; + h->cb_remove = cb_remove ? 
cb_remove : mlx5_hlist_default_remove_cb; + rte_rwlock_init(&h->lock); DRV_LOG(DEBUG, "Hash list with %s size 0x%" PRIX32 " is created.", h->name, act_size); return h; } -struct mlx5_hlist_entry * -mlx5_hlist_lookup(struct mlx5_hlist *h, uint64_t key) +static struct mlx5_hlist_entry * +__hlist_lookup(struct mlx5_hlist *h, uint64_t key, void *ctx, bool reuse) { uint32_t idx; struct mlx5_hlist_head *first; struct mlx5_hlist_entry *node; MLX5_ASSERT(h); - idx = rte_hash_crc_8byte(key, 0) & h->mask; + if (h->direct_key) + idx = (uint32_t)(key & h->mask); + else + idx = rte_hash_crc_8byte(key, 0) & h->mask; first = &h->heads[idx]; LIST_FOREACH(node, first, next) { - if (node->key == key) - return node; + if (!h->cb_match(h, node, key, ctx)) { + if (reuse) { + __atomic_add_fetch(&node->ref_cnt, 1, + __ATOMIC_RELAXED); + DRV_LOG(DEBUG, "Hash list %s entry %p " + "reuse: %u.", + h->name, (void *)node, node->ref_cnt); + } + break; + } } - return NULL; + return node; } -int -mlx5_hlist_insert(struct mlx5_hlist *h, struct mlx5_hlist_entry *entry) +static struct mlx5_hlist_entry * +hlist_lookup(struct mlx5_hlist *h, uint64_t key, void *ctx, bool reuse) +{ + struct mlx5_hlist_entry *node; + + MLX5_ASSERT(h); + rte_rwlock_read_lock(&h->lock); + node = __hlist_lookup(h, key, ctx, reuse); + rte_rwlock_read_unlock(&h->lock); + return node; +} + +struct mlx5_hlist_entry * +mlx5_hlist_lookup(struct mlx5_hlist *h, uint64_t key, void *ctx) +{ + return hlist_lookup(h, key, ctx, false); +} + +struct mlx5_hlist_entry* +mlx5_hlist_register(struct mlx5_hlist *h, uint64_t key, void *ctx) { uint32_t idx; struct mlx5_hlist_head *first; - struct mlx5_hlist_entry *node; + struct mlx5_hlist_entry *entry; + uint32_t prev_gen_cnt = 0; MLX5_ASSERT(h && entry); - idx = rte_hash_crc_8byte(entry->key, 0) & h->mask; + /* Use write lock directly for write-most list. */ + if (!h->write_most) { + prev_gen_cnt = __atomic_load_n(&h->gen_cnt, __ATOMIC_ACQUIRE); + entry = hlist_lookup(h, key, ctx, true); + if (entry) + return entry; + } + rte_rwlock_write_lock(&h->lock); + /* Check if the list changed by other threads. */ + if (h->write_most || + prev_gen_cnt != __atomic_load_n(&h->gen_cnt, __ATOMIC_ACQUIRE)) { + entry = __hlist_lookup(h, key, ctx, true); + if (entry) + goto done; + } + if (h->direct_key) + idx = (uint32_t)(key & h->mask); + else + idx = rte_hash_crc_8byte(key, 0) & h->mask; first = &h->heads[idx]; - /* No need to reuse the lookup function. 
*/ - LIST_FOREACH(node, first, next) { - if (node->key == entry->key) - return -EEXIST; + entry = h->cb_create(h, key, ctx); + if (!entry) { + rte_errno = ENOMEM; + DRV_LOG(ERR, "Can't allocate hash list %s entry.", h->name); + goto done; } + entry->key = key; + entry->ref_cnt = 1; LIST_INSERT_HEAD(first, entry, next); - return 0; + __atomic_add_fetch(&h->gen_cnt, 1, __ATOMIC_ACQ_REL); + DRV_LOG(DEBUG, "Hash list %s entry %p new: %u.", + h->name, (void *)entry, entry->ref_cnt); +done: + rte_rwlock_write_unlock(&h->lock); + return entry; } struct mlx5_hlist_entry * @@ -119,26 +208,36 @@ struct mlx5_hlist_entry * return 0; } -void -mlx5_hlist_remove(struct mlx5_hlist *h __rte_unused, - struct mlx5_hlist_entry *entry) +int +mlx5_hlist_unregister(struct mlx5_hlist *h, struct mlx5_hlist_entry *entry) { - MLX5_ASSERT(entry && entry->next.le_prev); + rte_rwlock_write_lock(&h->lock); + MLX5_ASSERT(entry && entry->ref_cnt && entry->next.le_prev); + DRV_LOG(DEBUG, "Hash list %s entry %p deref: %u.", + h->name, (void *)entry, entry->ref_cnt); + if (--entry->ref_cnt) { + rte_rwlock_write_unlock(&h->lock); + return 1; + } LIST_REMOVE(entry, next); /* Set to NULL to get rid of removing action for more than once. */ entry->next.le_prev = NULL; + h->cb_remove(h, entry); + rte_rwlock_write_unlock(&h->lock); + DRV_LOG(DEBUG, "Hash list %s entry %p removed.", + h->name, (void *)entry); + return 0; } void -mlx5_hlist_destroy(struct mlx5_hlist *h, - mlx5_hlist_destroy_callback_fn cb, void *ctx) +mlx5_hlist_destroy(struct mlx5_hlist *h) { uint32_t idx; struct mlx5_hlist_entry *entry; MLX5_ASSERT(h); for (idx = 0; idx < h->table_sz; ++idx) { - /* no LIST_FOREACH_SAFE, using while instead */ + /* No LIST_FOREACH_SAFE, using while instead. */ while (!LIST_EMPTY(&h->heads[idx])) { entry = LIST_FIRST(&h->heads[idx]); LIST_REMOVE(entry, next); @@ -150,15 +249,14 @@ struct mlx5_hlist_entry * * the beginning). Or else the default free function * will be used. */ - if (cb) - cb(entry, ctx); - else - mlx5_free(entry); + h->cb_remove(h, entry); } } mlx5_free(h); } +/********************* Indexed pool **********************/ + static inline void mlx5_ipool_lock(struct mlx5_indexed_pool *pool) { diff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h index ca9bb76..c665558 100644 --- a/drivers/net/mlx5/mlx5_utils.h +++ b/drivers/net/mlx5/mlx5_utils.h @@ -13,6 +13,7 @@ #include #include +#include #include #include @@ -20,6 +21,11 @@ #include "mlx5_defs.h" +#define mlx5_hlist_remove(h, e) \ + mlx5_hlist_unregister(h, e) + +#define mlx5_hlist_insert(h, e) \ + mlx5_hlist_register(h, 0, e) /* Convert a bit number to the corresponding 64-bit mask */ #define MLX5_BITSHIFT(v) (UINT64_C(1) << (v)) @@ -259,9 +265,14 @@ struct mlx5_indexed_pool { return l + r; } +#define MLX5_HLIST_DIRECT_KEY 0x0001 /* Use the key directly as hash index. */ +#define MLX5_HLIST_WRITE_MOST 0x0002 /* List mostly used for append new. */ + /** Maximum size of string for naming the hlist table. */ #define MLX5_HLIST_NAMESIZE 32 +struct mlx5_hlist; + /** * Structure of the entry in the hash list, user should define its own struct * that contains this in order to store the data. The 'key' is 64-bits right @@ -270,6 +281,7 @@ struct mlx5_indexed_pool { struct mlx5_hlist_entry { LIST_ENTRY(mlx5_hlist_entry) next; /* entry pointers in the list. */ uint64_t key; /* user defined 'key', could be the hash signature. */ + uint32_t ref_cnt; /* Reference count. */ }; /** Structure for hash head. 
*/ @@ -292,13 +304,77 @@ struct mlx5_hlist_entry { typedef int (*mlx5_hlist_match_callback_fn)(struct mlx5_hlist_entry *entry, void *ctx); -/** hash list table structure */ +/** + * Type of callback function for entry removal. + * + * @param list + * The hash list. + * @param entry + * The entry in the list. + */ +typedef void (*mlx5_hlist_remove_cb)(struct mlx5_hlist *list, + struct mlx5_hlist_entry *entry); + +/** + * Type of function for user defined matching. + * + * @param list + * The hash list. + * @param entry + * The entry in the list. + * @param key + * The new entry key. + * @param ctx + * The pointer to new entry context. + * + * @return + * 0 if matching, non-zero number otherwise. + */ +typedef int (*mlx5_hlist_match_cb)(struct mlx5_hlist *list, + struct mlx5_hlist_entry *entry, + uint64_t key, void *ctx); + +/** + * Type of function for user defined hash list entry creation. + * + * @param list + * The hash list. + * @param key + * The key of the new entry. + * @param ctx + * The pointer to new entry context. + * + * @return + * Pointer to allocated entry on success, NULL otherwise. + */ +typedef struct mlx5_hlist_entry *(*mlx5_hlist_create_cb) + (struct mlx5_hlist *list, + uint64_t key, void *ctx); + +/** + * Hash list table structure + * + * Entry in hash list could be reused if entry already exists, reference + * count will increase and the existing entry returns. + * + * When destroy an entry from list, decrease reference count and only + * destroy when no further reference. + */ struct mlx5_hlist { char name[MLX5_HLIST_NAMESIZE]; /**< Name of the hash list. */ /**< number of heads, need to be power of 2. */ uint32_t table_sz; + uint32_t entry_sz; /**< Size of entry, used to allocate entry. */ /**< mask to get the index of the list heads. */ uint32_t mask; + rte_rwlock_t lock; + uint32_t gen_cnt; /* List modification will update generation count. */ + bool direct_key; /* Use the new entry key directly as hash index. */ + bool write_most; /* List mostly used for append new or destroy. */ + void *ctx; + mlx5_hlist_create_cb cb_create; /**< entry create callback. */ + mlx5_hlist_match_cb cb_match; /**< entry match callback. */ + mlx5_hlist_remove_cb cb_remove; /**< entry remove callback. */ struct mlx5_hlist_head heads[]; /**< list head arrays. */ }; @@ -314,40 +390,43 @@ struct mlx5_hlist { * Name of the hash list(optional). * @param size * Heads array size of the hash list. - * + * @param entry_size + * Entry size to allocate if cb_create not specified. + * @param flags + * The hash list attribute flags. + * @param cb_create + * Callback function for entry create. + * @param cb_match + * Callback function for entry match. + * @param cb_destroy + * Callback function for entry destroy. * @return * Pointer of the hash list table created, NULL on failure. */ -struct mlx5_hlist *mlx5_hlist_create(const char *name, uint32_t size); +struct mlx5_hlist *mlx5_hlist_create(const char *name, uint32_t size, + uint32_t entry_size, uint32_t flags, + mlx5_hlist_create_cb cb_create, + mlx5_hlist_match_cb cb_match, + mlx5_hlist_remove_cb cb_destroy); /** * Search an entry matching the key. * + * Result returned might be destroyed by other thread, must use + * this function only in main thread. + * * @param h * Pointer to the hast list table. * @param key * Key for the searching entry. + * @param ctx + * Common context parameter used by entry callback function. * * @return * Pointer of the hlist entry if found, NULL otherwise. 
*/ -struct mlx5_hlist_entry *mlx5_hlist_lookup(struct mlx5_hlist *h, uint64_t key); - -/** - * Insert an entry to the hash list table, the entry is only part of whole data - * element and a 64B key is used for matching. User should construct the key or - * give a calculated hash signature and guarantee there is no collision. - * - * @param h - * Pointer to the hast list table. - * @param entry - * Entry to be inserted into the hash list table. - * - * @return - * - zero for success. - * - -EEXIST if the entry is already inserted. - */ -int mlx5_hlist_insert(struct mlx5_hlist *h, struct mlx5_hlist_entry *entry); +struct mlx5_hlist_entry *mlx5_hlist_lookup(struct mlx5_hlist *h, uint64_t key, + void *ctx); /** * Extended routine to search an entry matching the context with @@ -393,6 +472,24 @@ int mlx5_hlist_insert_ex(struct mlx5_hlist *h, struct mlx5_hlist_entry *entry, mlx5_hlist_match_callback_fn cb, void *ctx); /** + * Insert an entry to the hash list table, the entry is only part of whole data + * element and a 64B key is used for matching. User should construct the key or + * give a calculated hash signature and guarantee there is no collision. + * + * @param h + * Pointer to the hast list table. + * @param entry + * Entry to be inserted into the hash list table. + * @param ctx + * Common context parameter used by callback function. + * + * @return + * registered entry on success, NULL otherwise + */ +struct mlx5_hlist_entry *mlx5_hlist_register(struct mlx5_hlist *h, uint64_t key, + void *ctx); + +/** * Remove an entry from the hash list table. User should guarantee the validity * of the entry. * @@ -400,9 +497,10 @@ int mlx5_hlist_insert_ex(struct mlx5_hlist *h, struct mlx5_hlist_entry *entry, * Pointer to the hast list table. (not used) * @param entry * Entry to be removed from the hash list table. + * @return + * 0 on entry removed, 1 on entry still referenced. */ -void mlx5_hlist_remove(struct mlx5_hlist *h __rte_unused, - struct mlx5_hlist_entry *entry); +int mlx5_hlist_unregister(struct mlx5_hlist *h, struct mlx5_hlist_entry *entry); /** * Destroy the hash list table, all the entries already inserted into the lists @@ -411,13 +509,8 @@ void mlx5_hlist_remove(struct mlx5_hlist *h __rte_unused, * * @param h * Pointer to the hast list table. - * @param cb - * Callback function for each inserted entry when destroying the hash list. - * @param ctx - * Common context parameter used by callback function for each entry. */ -void mlx5_hlist_destroy(struct mlx5_hlist *h, - mlx5_hlist_destroy_callback_fn cb, void *ctx); +void mlx5_hlist_destroy(struct mlx5_hlist *h); /** * This function allocates non-initialized memory entry from pool.
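For readers following the API change, below is an illustrative sketch, not part
of the patch, of how a driver-internal user could move from the removed
insert/lookup/remove calls to the new register/unregister flow. It assumes it is
built inside the mlx5 PMD where mlx5_utils.h and mlx5_malloc.h are available;
the tag_obj, tag_*_cb and tag_hlist_example names and the "example_tags" list
name are made up for the example, while the mlx5_hlist_* calls, flags and
callback signatures are the ones introduced by this patch.

/*
 * Sketch only: register/unregister usage of the reworked hash list.
 * Hypothetical names; real users are converted in the following patches.
 */
#include <errno.h>

#include <rte_common.h>
#include <rte_errno.h>
#include <rte_memory.h>

#include "mlx5_malloc.h"
#include "mlx5_utils.h"

struct tag_obj {
	struct mlx5_hlist_entry entry;	/* Embedded list entry. */
	uint32_t tag_id;		/* User payload. */
};

/* Allocate an entry when mlx5_hlist_register() misses, under the write lock. */
static struct mlx5_hlist_entry *
tag_create_cb(struct mlx5_hlist *list __rte_unused, uint64_t key,
	      void *ctx __rte_unused)
{
	struct tag_obj *tag;

	tag = mlx5_malloc(MLX5_MEM_ZERO, sizeof(*tag), 0, SOCKET_ID_ANY);
	if (!tag)
		return NULL;
	tag->tag_id = (uint32_t)key;
	return &tag->entry;
}

/* Return 0 on match, like the default key comparison. */
static int
tag_match_cb(struct mlx5_hlist *list __rte_unused,
	     struct mlx5_hlist_entry *entry, uint64_t key,
	     void *ctx __rte_unused)
{
	struct tag_obj *tag = container_of(entry, struct tag_obj, entry);

	return tag->tag_id != (uint32_t)key;
}

/* Release the entry once the last reference is dropped. */
static void
tag_remove_cb(struct mlx5_hlist *list __rte_unused,
	      struct mlx5_hlist_entry *entry)
{
	mlx5_free(container_of(entry, struct tag_obj, entry));
}

static int
tag_hlist_example(void)
{
	struct mlx5_hlist *h;
	struct mlx5_hlist_entry *e;

	/* 64 heads; entry_size is unused because cb_create is provided. */
	h = mlx5_hlist_create("example_tags", 64, 0, MLX5_HLIST_WRITE_MOST,
			      tag_create_cb, tag_match_cb, tag_remove_cb);
	if (!h)
		return -ENOMEM;
	/* Lookup under the read lock, create under the write lock on miss. */
	e = mlx5_hlist_register(h, 0x24, NULL);
	if (!e) {
		mlx5_hlist_destroy(h);
		return -rte_errno;
	}
	/* A second register of the same key reuses the entry (ref_cnt++). */
	(void)mlx5_hlist_register(h, 0x24, NULL);
	/* Returns 1 while still referenced, 0 once the entry is removed. */
	(void)mlx5_hlist_unregister(h, e);
	(void)mlx5_hlist_unregister(h, e);
	mlx5_hlist_destroy(h);
	return 0;
}

The MLX5_HLIST_WRITE_MOST flag in the sketch makes mlx5_hlist_register() skip
the optimistic read-locked lookup and take the write lock directly, which suits
lists where most registrations create new entries; a read-mostly table would
simply leave the flag out.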