get:
Show the details of a single patch.

patch:
Partially update a patch (only the fields supplied in the request are changed).

put:
Update a patch (a full update of the writable fields).
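The response body below is plain JSON, so a client can consume it with any HTTP library and standard JSON parsing. As an illustration only, here is a minimal self-contained Python sketch that summarizes a patch object; the `sample` dict is a trimmed copy of the real payload shown below, and the `summarize` helper is a hypothetical convenience, not part of the Patchwork API:

```python
import json

# A trimmed sample of the payload returned by GET /api/patches/<id>/
sample = json.loads("""
{
  "id": 79769,
  "name": "[15/25] net/mlx5: introduce thread safe linked list cache",
  "state": "superseded",
  "archived": true,
  "check": "success",
  "submitter": {"name": "Suanming Mou", "email": "suanmingm@nvidia.com"}
}
""")

def summarize(patch: dict) -> str:
    """Render a one-line summary of a Patchwork patch object."""
    flags = "archived" if patch.get("archived") else "active"
    return (f"#{patch['id']} {patch['name']} "
            f"[{patch['state']}/{flags}, CI: {patch['check']}] "
            f"by {patch['submitter']['name']}")

print(summarize(sample))
# → #79769 [15/25] net/mlx5: introduce thread safe linked list cache [superseded/archived, CI: success] by Suanming Mou
```

In a real client you would fetch the JSON over HTTP first (e.g. with `urllib.request` or `requests`) and pass the decoded dict to the same helper.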

GET /api/patches/79769/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 79769,
    "url": "http://patches.dpdk.org/api/patches/79769/?format=api",
    "web_url": "http://patches.dpdk.org/project/dpdk/patch/1601984948-313027-16-git-send-email-suanmingm@nvidia.com/",
    "project": {
        "id": 1,
        "url": "http://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<1601984948-313027-16-git-send-email-suanmingm@nvidia.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/1601984948-313027-16-git-send-email-suanmingm@nvidia.com",
    "date": "2020-10-06T11:48:58",
    "name": "[15/25] net/mlx5: introduce thread safe linked list cache",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "77b15a4fb681c0458039c74e360cb7297fe8af0f",
    "submitter": {
        "id": 1887,
        "url": "http://patches.dpdk.org/api/people/1887/?format=api",
        "name": "Suanming Mou",
        "email": "suanmingm@nvidia.com"
    },
    "delegate": {
        "id": 3268,
        "url": "http://patches.dpdk.org/api/users/3268/?format=api",
        "username": "rasland",
        "first_name": "Raslan",
        "last_name": "Darawsheh",
        "email": "rasland@nvidia.com"
    },
    "mbox": "http://patches.dpdk.org/project/dpdk/patch/1601984948-313027-16-git-send-email-suanmingm@nvidia.com/mbox/",
    "series": [
        {
            "id": 12718,
            "url": "http://patches.dpdk.org/api/series/12718/?format=api",
            "web_url": "http://patches.dpdk.org/project/dpdk/list/?series=12718",
            "date": "2020-10-06T11:48:45",
            "name": "net/mlx5: support multiple-thread flow operations",
            "version": 1,
            "mbox": "http://patches.dpdk.org/series/12718/mbox/"
        }
    ],
    "comments": "http://patches.dpdk.org/api/patches/79769/comments/",
    "check": "success",
    "checks": "http://patches.dpdk.org/api/patches/79769/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@inbox.dpdk.org",
        "Delivered-To": "patchwork@inbox.dpdk.org",
        "Received": [
            "from dpdk.org (dpdk.org [92.243.14.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id 0B5C5A04BB;\n\tTue,  6 Oct 2020 13:54:53 +0200 (CEST)",
            "from [92.243.14.124] (localhost [127.0.0.1])\n\tby dpdk.org (Postfix) with ESMTP id E84EB1BA96;\n\tTue,  6 Oct 2020 13:49:47 +0200 (CEST)",
            "from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129])\n by dpdk.org (Postfix) with ESMTP id 25D041B9EB\n for <dev@dpdk.org>; Tue,  6 Oct 2020 13:49:45 +0200 (CEST)",
            "from Internal Mail-Server by MTLPINE1 (envelope-from\n suanmingm@nvidia.com) with SMTP; 6 Oct 2020 14:49:40 +0300",
            "from nvidia.com (mtbc-r640-04.mtbc.labs.mlnx [10.75.70.9])\n by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 096BnC0Z028553;\n Tue, 6 Oct 2020 14:49:39 +0300"
        ],
        "From": "Suanming Mou <suanmingm@nvidia.com>",
        "To": "viacheslavo@nvidia.com, matan@nvidia.com",
        "Cc": "rasland@nvidia.com, dev@dpdk.org, Xueming Li <xuemingl@nvidia.com>",
        "Date": "Tue,  6 Oct 2020 19:48:58 +0800",
        "Message-Id": "<1601984948-313027-16-git-send-email-suanmingm@nvidia.com>",
        "X-Mailer": "git-send-email 1.8.3.1",
        "In-Reply-To": "<1601984948-313027-1-git-send-email-suanmingm@nvidia.com>",
        "References": "<1601984948-313027-1-git-send-email-suanmingm@nvidia.com>",
        "Subject": "[dpdk-dev] [PATCH 15/25] net/mlx5: introduce thread safe linked\n\tlist cache",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.15",
        "Precedence": "list",
        "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
        "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://mails.dpdk.org/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "From: Xueming Li <xuemingl@nvidia.com>\n\nNew API of linked list for cache:\n- optimized for small amount cache list\n- optimized for read-most list\n- thread safe\n- since number of entries are limited, entries allocated by API\n- for dynamic entry size, pass 0 as entry size, then the creation\ncallback allocate the entry.\n- since number of entries are limited, no need to use indexed pool to\nallocate memory. API will remove entry and free with mlx5_free.\n- search API is not supposed to be used in multi-thread\n\nSigned-off-by: Xueming Li <xuemingl@nvidia.com>\n---\n drivers/net/mlx5/mlx5_utils.c | 170 +++++++++++++++++++++++++++++++++++++++++\n drivers/net/mlx5/mlx5_utils.h | 172 ++++++++++++++++++++++++++++++++++++++++++\n 2 files changed, 342 insertions(+)",
    "diff": "diff --git a/drivers/net/mlx5/mlx5_utils.c b/drivers/net/mlx5/mlx5_utils.c\nindex 387a988..c0b4ae5 100644\n--- a/drivers/net/mlx5/mlx5_utils.c\n+++ b/drivers/net/mlx5/mlx5_utils.c\n@@ -223,6 +223,176 @@ struct mlx5_hlist_entry*\n \tmlx5_free(h);\n }\n \n+/********************* Cache list ************************/\n+\n+static struct mlx5_cache_entry *\n+mlx5_clist_default_create_cb(struct mlx5_cache_list *list,\n+\t\t\t     struct mlx5_cache_entry *entry __rte_unused,\n+\t\t\t     void *ctx __rte_unused)\n+{\n+\treturn mlx5_malloc(MLX5_MEM_ZERO, list->entry_sz, 0, SOCKET_ID_ANY);\n+}\n+\n+static void\n+mlx5_clist_default_remove_cb(struct mlx5_cache_list *list __rte_unused,\n+\t\t\t     struct mlx5_cache_entry *entry)\n+{\n+\tmlx5_free(entry);\n+}\n+\n+int\n+mlx5_cache_list_init(struct mlx5_cache_list *list, const char *name,\n+\t\t     uint32_t entry_size, void *ctx,\n+\t\t     mlx5_cache_create_cb cb_create,\n+\t\t     mlx5_cache_match_cb cb_match,\n+\t\t     mlx5_cache_remove_cb cb_remove)\n+{\n+\tMLX5_ASSERT(list);\n+\tif (!cb_match || (!cb_create ^ !cb_remove))\n+\t\treturn -1;\n+\tif (name)\n+\t\tsnprintf(list->name, sizeof(list->name), \"%s\", name);\n+\tlist->entry_sz = entry_size;\n+\tlist->ctx = ctx;\n+\tlist->cb_create = cb_create ? cb_create : mlx5_clist_default_create_cb;\n+\tlist->cb_match = cb_match;\n+\tlist->cb_remove = cb_remove ? 
cb_remove : mlx5_clist_default_remove_cb;\n+\trte_rwlock_init(&list->lock);\n+\tDRV_LOG(DEBUG, \"Cache list %s initialized.\", list->name);\n+\tLIST_INIT(&list->head);\n+\treturn 0;\n+}\n+\n+static struct mlx5_cache_entry *\n+__cache_lookup(struct mlx5_cache_list *list, void *ctx, bool reuse)\n+{\n+\tstruct mlx5_cache_entry *entry;\n+\n+\tLIST_FOREACH(entry, &list->head, next) {\n+\t\tif (!__atomic_load_n(&entry->ref_cnt, __ATOMIC_RELAXED))\n+\t\t\t/* Ignore entry in middle of removal */\n+\t\t\tcontinue;\n+\t\tif (list->cb_match(list, entry, ctx))\n+\t\t\tcontinue;\n+\t\tif (reuse) {\n+\t\t\t__atomic_add_fetch(&entry->ref_cnt, 1,\n+\t\t\t\t\t   __ATOMIC_RELAXED);\n+\t\t\tDRV_LOG(DEBUG, \"cache list %s entry %p ref++: %u\",\n+\t\t\t\tlist->name, (void *)entry, entry->ref_cnt);\n+\t\t}\n+\t\tbreak;\n+\t}\n+\treturn entry;\n+}\n+\n+static struct mlx5_cache_entry *\n+cache_lookup(struct mlx5_cache_list *list, void *ctx, bool reuse)\n+{\n+\tstruct mlx5_cache_entry *entry;\n+\n+\trte_rwlock_read_lock(&list->lock);\n+\tentry = __cache_lookup(list, ctx, reuse);\n+\trte_rwlock_read_unlock(&list->lock);\n+\treturn entry;\n+}\n+\n+struct mlx5_cache_entry *\n+mlx5_cache_lookup(struct mlx5_cache_list *list, void *ctx)\n+{\n+\treturn __cache_lookup(list, ctx, false);\n+}\n+\n+struct mlx5_cache_entry *\n+mlx5_cache_register(struct mlx5_cache_list *list, void *ctx)\n+{\n+\tstruct mlx5_cache_entry *entry;\n+\tuint32_t prev_gen_cnt = 0;\n+\n+\tMLX5_ASSERT(list);\n+\tprev_gen_cnt = __atomic_load_n(&list->gen_cnt, __ATOMIC_ACQUIRE);\n+\t/* Lookup with read lock, reuse if found. */\n+\tentry = cache_lookup(list, ctx, true);\n+\tif (entry)\n+\t\treturn entry;\n+\t/* Not found, append with write lock - block read from other threads. */\n+\trte_rwlock_write_lock(&list->lock);\n+\t/* If list changed by other threads before lock, search again. 
*/\n+\tif (prev_gen_cnt != __atomic_load_n(&list->gen_cnt, __ATOMIC_ACQUIRE)) {\n+\t\t/* Lookup and reuse w/o read lock */\n+\t\tentry = __cache_lookup(list, ctx, true);\n+\t\tif (entry)\n+\t\t\tgoto done;\n+\t}\n+\tentry = list->cb_create(list, entry, ctx);\n+\tif (!entry) {\n+\t\tif (list->entry_sz)\n+\t\t\tmlx5_free(entry);\n+\t\telse if (list->cb_remove)\n+\t\t\tlist->cb_remove(list, entry);\n+\t\tDRV_LOG(ERR, \"Failed to init cache list %s entry %p\",\n+\t\t\tlist->name, (void *)entry);\n+\t\tentry = NULL;\n+\t\tgoto done;\n+\t}\n+\tentry->ref_cnt = 1;\n+\tLIST_INSERT_HEAD(&list->head, entry, next);\n+\t__atomic_add_fetch(&list->gen_cnt, 1, __ATOMIC_ACQUIRE);\n+\t__atomic_add_fetch(&list->count, 1, __ATOMIC_ACQUIRE);\n+\tDRV_LOG(DEBUG, \"cache list %s entry %p new: %u\",\n+\t\tlist->name, (void *)entry, entry->ref_cnt);\n+done:\n+\trte_rwlock_write_unlock(&list->lock);\n+\treturn entry;\n+}\n+\n+int\n+mlx5_cache_unregister(struct mlx5_cache_list *list,\n+\t\t      struct mlx5_cache_entry *entry)\n+{\n+\tuint32_t ref_cnt;\n+\n+\tMLX5_ASSERT(entry && entry->next.le_prev);\n+\tMLX5_ASSERT(__atomic_fetch_n(&entry->ref_cnt, __ATOMIC_RELAXED));\n+\n+\tref_cnt = __atomic_sub_fetch(&entry->ref_cnt, 1, __ATOMIC_ACQ_REL);\n+\tDRV_LOG(DEBUG, \"cache list %s entry %p ref--: %u\",\n+\t\tlist->name, (void *)entry, entry->ref_cnt);\n+\tif (ref_cnt)\n+\t\treturn 1;\n+\trte_rwlock_write_lock(&list->lock);\n+\tif (__atomic_load_n(&entry->ref_cnt, __ATOMIC_RELAXED)) {\n+\t\treturn 1;\n+\t\trte_rwlock_write_unlock(&list->lock);\n+\t}\n+\t__atomic_add_fetch(&list->gen_cnt, 1, __ATOMIC_ACQUIRE);\n+\t__atomic_sub_fetch(&list->count, 1, __ATOMIC_ACQUIRE);\n+\tLIST_REMOVE(entry, next);\n+\tlist->cb_remove(list, entry);\n+\trte_rwlock_write_unlock(&list->lock);\n+\tDRV_LOG(DEBUG, \"cache list %s entry %p removed\",\n+\t\tlist->name, (void *)entry);\n+\treturn 0;\n+}\n+\n+void\n+mlx5_cache_list_destroy(struct mlx5_cache_list *list)\n+{\n+\tstruct mlx5_cache_entry 
*entry;\n+\n+\tMLX5_ASSERT(list);\n+\tif (__atomic_load_n(&list->count, __ATOMIC_RELAXED)) {\n+\t\t/* no LIST_FOREACH_SAFE, using while instead */\n+\t\twhile (!LIST_EMPTY(&list->head)) {\n+\t\t\tentry = LIST_FIRST(&list->head);\n+\t\t\tLIST_REMOVE(entry, next);\n+\t\t\tlist->cb_remove(list, entry);\n+\t\t\tDRV_LOG(DEBUG, \"cache list %s entry %p destroyed\",\n+\t\t\t\tlist->name, (void *)entry);\n+\t\t}\n+\t}\n+\tmemset(list, 0, sizeof(*list));\n+}\n+\n /********************* Indexed pool **********************/\n \n static inline void\ndiff --git a/drivers/net/mlx5/mlx5_utils.h b/drivers/net/mlx5/mlx5_utils.h\nindex 479dd10..5c39f98 100644\n--- a/drivers/net/mlx5/mlx5_utils.h\n+++ b/drivers/net/mlx5/mlx5_utils.h\n@@ -422,6 +422,178 @@ struct mlx5_hlist_entry *mlx5_hlist_register(struct mlx5_hlist *h, uint64_t key,\n  */\n void mlx5_hlist_destroy(struct mlx5_hlist *h);\n \n+/************************ cache list *****************************/\n+\n+/** Maximum size of string for naming. */\n+#define MLX5_NAME_SIZE\t\t\t32\n+\n+struct mlx5_cache_list;\n+\n+/**\n+ * Structure of the entry in the cache list, user should define its own struct\n+ * that contains this in order to store the data.\n+ */\n+struct mlx5_cache_entry {\n+\tLIST_ENTRY(mlx5_cache_entry) next; /* entry pointers in the list. */\n+\tuint32_t ref_cnt; /* reference count. 
*/\n+};\n+\n+/**\n+ * Type of callback function for entry removal.\n+ *\n+ * @param list\n+ *   The cache list.\n+ * @param entry\n+ *   The entry in the list.\n+ */\n+typedef void (*mlx5_cache_remove_cb)(struct mlx5_cache_list *list,\n+\t\t\t\t     struct mlx5_cache_entry *entry);\n+\n+/**\n+ * Type of function for user defined matching.\n+ *\n+ * @param list\n+ *   The cache list.\n+ * @param entry\n+ *   The entry in the list.\n+ * @param ctx\n+ *   The pointer to new entry context.\n+ *\n+ * @return\n+ *   0 if matching, non-zero number otherwise.\n+ */\n+typedef int (*mlx5_cache_match_cb)(struct mlx5_cache_list *list,\n+\t\t\t\t   struct mlx5_cache_entry *entry, void *ctx);\n+\n+/**\n+ * Type of function for user defined cache list entry creation.\n+ *\n+ * @param list\n+ *   The cache list.\n+ * @param entry\n+ *   The new allocated entry, NULL if list entry size unspecified,\n+ *   New entry has to be allocated in callback and return.\n+ * @param ctx\n+ *   The pointer to new entry context.\n+ *\n+ * @return\n+ *   Pointer of entry on success, NULL otherwise.\n+ */\n+typedef struct mlx5_cache_entry *(*mlx5_cache_create_cb)\n+\t\t\t\t (struct mlx5_cache_list *list,\n+\t\t\t\t  struct mlx5_cache_entry *entry,\n+\t\t\t\t  void *ctx);\n+\n+/**\n+ * Linked cache list structure.\n+ *\n+ * Entry in cache list could be reused if entry already exists,\n+ * reference count will increase and the existing entry returns.\n+ *\n+ * When destroy an entry from list, decrease reference count and only\n+ * destroy when no further reference.\n+ *\n+ * Linked list cache is designed for limited number of entries cache,\n+ * read mostly, less modification.\n+ *\n+ * For huge amount of entries cache, please consider hash list cache.\n+ *\n+ */\n+struct mlx5_cache_list {\n+\tchar name[MLX5_NAME_SIZE]; /**< Name of the cache list. */\n+\tuint32_t entry_sz; /**< Entry size, 0: use create callback. */\n+\trte_rwlock_t lock; /* read/write lock. 
*/\n+\tuint32_t gen_cnt; /* List modification will update generation count. */\n+\tuint32_t count; /* number of entries in list. */\n+\tvoid *ctx; /* user objects target to callback. */\n+\tmlx5_cache_create_cb cb_create; /**< entry create callback. */\n+\tmlx5_cache_match_cb cb_match; /**< entry match callback. */\n+\tmlx5_cache_remove_cb cb_remove; /**< entry remove callback. */\n+\tLIST_HEAD(mlx5_cache_head, mlx5_cache_entry) head;\n+};\n+\n+/**\n+ * Initialize a cache list.\n+ *\n+ * @param list\n+ *   Pointer to the hast list table.\n+ * @param name\n+ *   Name of the cache list.\n+ * @param entry_size\n+ *   Entry size to allocate, 0 to allocate by creation callback.\n+ * @param ctx\n+ *   Pointer to the list context data.\n+ * @param cb_create\n+ *   Callback function for entry create.\n+ * @param cb_match\n+ *   Callback function for entry match.\n+ * @param cb_remove\n+ *   Callback function for entry remove.\n+ * @return\n+ *   0 on success, otherwise failure.\n+ */\n+int mlx5_cache_list_init(struct mlx5_cache_list *list,\n+\t\t\t const char *name, uint32_t entry_size, void *ctx,\n+\t\t\t mlx5_cache_create_cb cb_create,\n+\t\t\t mlx5_cache_match_cb cb_match,\n+\t\t\t mlx5_cache_remove_cb cb_remove);\n+\n+/**\n+ * Search an entry matching the key.\n+ *\n+ * Result returned might be destroyed by other thread, must use\n+ * this function only in main thread.\n+ *\n+ * @param list\n+ *   Pointer to the cache list.\n+ * @param ctx\n+ *   Common context parameter used by entry callback function.\n+ *\n+ * @return\n+ *   Pointer of the cache entry if found, NULL otherwise.\n+ */\n+struct mlx5_cache_entry *mlx5_cache_lookup(struct mlx5_cache_list *list,\n+\t\t\t\t\t   void *ctx);\n+\n+/**\n+ * Reuse or create an entry to the cache list.\n+ *\n+ * @param list\n+ *   Pointer to the hast list table.\n+ * @param ctx\n+ *   Common context parameter used by callback function.\n+ *\n+ * @return\n+ *   registered entry on success, NULL otherwise\n+ */\n+struct 
mlx5_cache_entry *mlx5_cache_register(struct mlx5_cache_list *list,\n+\t\t\t\t\t     void *ctx);\n+\n+/**\n+ * Remove an entry from the cache list.\n+ *\n+ * User should guarantee the validity of the entry.\n+ *\n+ * @param list\n+ *   Pointer to the hast list.\n+ * @param entry\n+ *   Entry to be removed from the cache list table.\n+ * @return\n+ *   0 on entry removed, 1 on entry still referenced.\n+ */\n+int mlx5_cache_unregister(struct mlx5_cache_list *list,\n+\t\t\t  struct mlx5_cache_entry *entry);\n+\n+/**\n+ * Destroy the cache list.\n+ *\n+ * @param list\n+ *   Pointer to the cache list.\n+ */\n+void mlx5_cache_list_destroy(struct mlx5_cache_list *list);\n+\n+/********************************* indexed pool *************************/\n+\n /**\n  * This function allocates non-initialized memory entry from pool.\n  * In NUMA systems, the memory entry allocated resides on the same\n",
    "prefixes": [
        "15/25"
    ]
}
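Beyond fetching a single patch, the list endpoint at `/api/patches/` accepts query-string filters. As a sketch, the snippet below only builds a query URL (no network I/O); the filter names `project`, `state`, and `archived` are assumptions based on common Patchwork deployments, so check your instance's API documentation before relying on them:

```python
from urllib.parse import urlencode

BASE = "http://patches.dpdk.org/api/patches/"

def list_url(**filters) -> str:
    """Build a patch-list query URL from keyword filters.

    Filters are sorted so the resulting URL is deterministic.
    """
    if not filters:
        return BASE
    return BASE + "?" + urlencode(sorted(filters.items()))

url = list_url(project="dpdk", state="superseded", archived="true")
print(url)
# → http://patches.dpdk.org/api/patches/?archived=true&project=dpdk&state=superseded
```

The `mbox` URL in the payload above is the usual way to consume a patch once found: download it and feed it to `git am`.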