List patch comments

GET /api/patches/73449/comments/?format=api
HTTP 200 OK
Allow: GET, HEAD, OPTIONS
Content-Type: application/json
Link: <https://patches.dpdk.org/api/patches/73449/comments/?format=api&page=1>; rel="first",
      <https://patches.dpdk.org/api/patches/73449/comments/?format=api&page=1>; rel="last"
Vary: Accept
[ { "id": 115511, "web_url": "https://patches.dpdk.org/comment/115511/", "msgid": "<b9e80481-7247-5214-0e09-48d0f6fbe84b@intel.com>", "list_archive_url": "https://inbox.dpdk.org/dev/b9e80481-7247-5214-0e09-48d0f6fbe84b@intel.com", "date": "2020-07-08T12:36:58", "subject": "Re: [dpdk-dev] [PATCH v7 1/3] lib/lpm: integrate RCU QSBR", "submitter": { "id": 1216, "url": "https://patches.dpdk.org/api/people/1216/?format=api", "name": "Vladimir Medvedkin", "email": "vladimir.medvedkin@intel.com" }, "content": "On 07/07/2020 16:15, Ruifeng Wang wrote:\n> Currently, the tbl8 group is freed even though the readers might be\n> using the tbl8 group entries. The freed tbl8 group can be reallocated\n> quickly. This results in incorrect lookup results.\n>\n> RCU QSBR process is integrated for safe tbl8 group reclaim.\n> Refer to RCU documentation to understand various aspects of\n> integrating RCU library into other libraries.\n>\n> Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>\n> Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>\n> Acked-by: Ray Kinsella <mdr@ashroe.eu>\n> ---\n> doc/guides/prog_guide/lpm_lib.rst | 32 ++++++++\n> lib/librte_lpm/Makefile | 2 +-\n> lib/librte_lpm/meson.build | 1 +\n> lib/librte_lpm/rte_lpm.c | 120 ++++++++++++++++++++++++++---\n> lib/librte_lpm/rte_lpm.h | 59 ++++++++++++++\n> lib/librte_lpm/rte_lpm_version.map | 6 ++\n> 6 files changed, 208 insertions(+), 12 deletions(-)\n>\n> diff --git a/doc/guides/prog_guide/lpm_lib.rst b/doc/guides/prog_guide/lpm_lib.rst\n> index 1609a57d0..03945904b 100644\n> --- a/doc/guides/prog_guide/lpm_lib.rst\n> +++ b/doc/guides/prog_guide/lpm_lib.rst\n> @@ -145,6 +145,38 @@ depending on whether we need to move to the next table or not.\n> Prefix expansion is one of the keys of this algorithm,\n> since it improves the speed dramatically by adding redundancy.\n> \n> +Deletion\n> +~~~~~~~~\n> +\n> +When deleting a rule, a replacement rule is searched for. Replacement rule is an existing rule that has\n> +the longest prefix match with the rule to be deleted, but has shorter prefix.\n> +\n> +If a replacement rule is found, target tbl24 and tbl8 entries are updated to have the same depth and next hop\n> +value with the replacement rule.\n> +\n> +If no replacement rule can be found, target tbl24 and tbl8 entries will be cleared.\n> +\n> +Prefix expansion is performed if the rule's depth is not exactly 24 bits or 32 bits.\n> +\n> +After deleting a rule, a group of tbl8s that belongs to the same tbl24 entry are freed in following cases:\n> +\n> +* All tbl8s in the group are empty .\n> +\n> +* All tbl8s in the group have the same values and with depth no greater than 24.\n> +\n> +Free of tbl8s have different behaviors:\n> +\n> +* If RCU is not used, tbl8s are cleared and reclaimed immediately.\n> +\n> +* If RCU is used, tbl8s are reclaimed when readers are in quiescent state.\n> +\n> +When the LPM is not using RCU, tbl8 group can be freed immediately even though the readers might be using\n> +the tbl8 group entries. This might result in incorrect lookup results.\n> +\n> +RCU QSBR process is integrated for safe tbl8 group reclamation. Application has certain responsibilities\n> +while using this feature. 
Please refer to resource reclamation framework of :ref:`RCU library <RCU_Library>`\n> +for more details.\n> +\n> Lookup\n> ~~~~~~\n> \n> diff --git a/lib/librte_lpm/Makefile b/lib/librte_lpm/Makefile\n> index d682785b6..6f06c5c03 100644\n> --- a/lib/librte_lpm/Makefile\n> +++ b/lib/librte_lpm/Makefile\n> @@ -8,7 +8,7 @@ LIB = librte_lpm.a\n> \n> CFLAGS += -O3\n> CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR)\n> -LDLIBS += -lrte_eal -lrte_hash\n> +LDLIBS += -lrte_eal -lrte_hash -lrte_rcu\n> \n> EXPORT_MAP := rte_lpm_version.map\n> \n> diff --git a/lib/librte_lpm/meson.build b/lib/librte_lpm/meson.build\n> index 021ac6d8d..6cfc083c5 100644\n> --- a/lib/librte_lpm/meson.build\n> +++ b/lib/librte_lpm/meson.build\n> @@ -7,3 +7,4 @@ headers = files('rte_lpm.h', 'rte_lpm6.h')\n> # without worrying about which architecture we actually need\n> headers += files('rte_lpm_altivec.h', 'rte_lpm_neon.h', 'rte_lpm_sse.h')\n> deps += ['hash']\n> +deps += ['rcu']\n> diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c\n> index 38ab512a4..d498ba761 100644\n> --- a/lib/librte_lpm/rte_lpm.c\n> +++ b/lib/librte_lpm/rte_lpm.c\n> @@ -1,5 +1,6 @@\n> /* SPDX-License-Identifier: BSD-3-Clause\n> * Copyright(c) 2010-2014 Intel Corporation\n> + * Copyright(c) 2020 Arm Limited\n> */\n> \n> #include <string.h>\n> @@ -246,12 +247,82 @@ rte_lpm_free(struct rte_lpm *lpm)\n> \n> \trte_mcfg_tailq_write_unlock();\n> \n> +\tif (lpm->dq)\n> +\t\trte_rcu_qsbr_dq_delete(lpm->dq);\n> \trte_free(lpm->tbl8);\n> \trte_free(lpm->rules_tbl);\n> \trte_free(lpm);\n> \trte_free(te);\n> }\n> \n> +static void\n> +__lpm_rcu_qsbr_free_resource(void *p, void *data, unsigned int n)\n> +{\n> +\tstruct rte_lpm_tbl_entry zero_tbl8_entry = {0};\n> +\tuint32_t tbl8_group_index = *(uint32_t *)data;\n> +\tstruct rte_lpm_tbl_entry *tbl8 = ((struct rte_lpm *)p)->tbl8;\n> +\n> +\tRTE_SET_USED(n);\n> +\t/* Set tbl8 group invalid */\n> +\t__atomic_store(&tbl8[tbl8_group_index], &zero_tbl8_entry,\n> +\t\t__ATOMIC_RELAXED);\n> +}\n> +\n> +/* Associate QSBR variable with an LPM object.\n> + */\n> +int\n> +rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_lpm_rcu_config *cfg,\n> +\tstruct rte_rcu_qsbr_dq **dq)\n> +{\n> +\tchar rcu_dq_name[RTE_RCU_QSBR_DQ_NAMESIZE];\n> +\tstruct rte_rcu_qsbr_dq_parameters params = {0};\n> +\n> +\tif ((lpm == NULL) || (cfg == NULL)) {\n> +\t\trte_errno = EINVAL;\n> +\t\treturn 1;\n> +\t}\n> +\n> +\tif (lpm->v) {\n> +\t\trte_errno = EEXIST;\n> +\t\treturn 1;\n> +\t}\n> +\n> +\tif (cfg->mode == RTE_LPM_QSBR_MODE_SYNC) {\n> +\t\t/* No other things to do. */\n> +\t} else if (cfg->mode == RTE_LPM_QSBR_MODE_DQ) {\n> +\t\t/* Init QSBR defer queue. 
*/\n> +\t\tsnprintf(rcu_dq_name, sizeof(rcu_dq_name),\n> +\t\t\t\t\"LPM_RCU_%s\", lpm->name);\n> +\t\tparams.name = rcu_dq_name;\n> +\t\tparams.size = cfg->dq_size;\n> +\t\tif (params.size == 0)\n> +\t\t\tparams.size = lpm->number_tbl8s;\n> +\t\tparams.trigger_reclaim_limit = cfg->reclaim_thd;\n> +\t\tparams.max_reclaim_size = cfg->reclaim_max;\n> +\t\tif (params.max_reclaim_size == 0)\n> +\t\t\tparams.max_reclaim_size = RTE_LPM_RCU_DQ_RECLAIM_MAX;\n> +\t\tparams.esize = sizeof(uint32_t);\t/* tbl8 group index */\n> +\t\tparams.free_fn = __lpm_rcu_qsbr_free_resource;\n> +\t\tparams.p = lpm;\n> +\t\tparams.v = cfg->v;\n> +\t\tlpm->dq = rte_rcu_qsbr_dq_create(&params);\n> +\t\tif (lpm->dq == NULL) {\n> +\t\t\tRTE_LOG(ERR, LPM,\n> +\t\t\t\t\t\"LPM QS defer queue creation failed\\n\");\n> +\t\t\treturn 1;\n> +\t\t}\n> +\t\tif (dq)\n> +\t\t\t*dq = lpm->dq;\n> +\t} else {\n> +\t\trte_errno = EINVAL;\n> +\t\treturn 1;\n> +\t}\n> +\tlpm->rcu_mode = cfg->mode;\n> +\tlpm->v = cfg->v;\n> +\n> +\treturn 0;\n> +}\n> +\n> /*\n> * Adds a rule to the rule table.\n> *\n> @@ -394,14 +465,15 @@ rule_find(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth)\n> * Find, clean and allocate a tbl8.\n> */\n> static int32_t\n> -tbl8_alloc(struct rte_lpm_tbl_entry *tbl8, uint32_t number_tbl8s)\n> +_tbl8_alloc(struct rte_lpm *lpm)\n> {\n> \tuint32_t group_idx; /* tbl8 group index. */\n> \tstruct rte_lpm_tbl_entry *tbl8_entry;\n> \n> \t/* Scan through tbl8 to find a free (i.e. INVALID) tbl8 group. */\n> -\tfor (group_idx = 0; group_idx < number_tbl8s; group_idx++) {\n> -\t\ttbl8_entry = &tbl8[group_idx * RTE_LPM_TBL8_GROUP_NUM_ENTRIES];\n> +\tfor (group_idx = 0; group_idx < lpm->number_tbl8s; group_idx++) {\n> +\t\ttbl8_entry = &lpm->tbl8[group_idx *\n> +\t\t\t\t\tRTE_LPM_TBL8_GROUP_NUM_ENTRIES];\n> \t\t/* If a free tbl8 group is found clean it and set as VALID. */\n> \t\tif (!tbl8_entry->valid_group) {\n> \t\t\tstruct rte_lpm_tbl_entry new_tbl8_entry = {\n> @@ -427,14 +499,40 @@ tbl8_alloc(struct rte_lpm_tbl_entry *tbl8, uint32_t number_tbl8s)\n> \treturn -ENOSPC;\n> }\n> \n> +static int32_t\n> +tbl8_alloc(struct rte_lpm *lpm)\n> +{\n> +\tint32_t group_idx; /* tbl8 group index. */\n> +\n> +\tgroup_idx = _tbl8_alloc(lpm);\n> +\tif ((group_idx == -ENOSPC) && (lpm->dq != NULL)) {\n> +\t\t/* If there are no tbl8 groups try to reclaim one. */\n> +\t\tif (rte_rcu_qsbr_dq_reclaim(lpm->dq, 1, NULL, NULL, NULL) == 0)\n> +\t\t\tgroup_idx = _tbl8_alloc(lpm);\n> +\t}\n> +\n> +\treturn group_idx;\n> +}\n> +\n> static void\n> -tbl8_free(struct rte_lpm_tbl_entry *tbl8, uint32_t tbl8_group_start)\n> +tbl8_free(struct rte_lpm *lpm, uint32_t tbl8_group_start)\n> {\n> -\t/* Set tbl8 group invalid*/\n> \tstruct rte_lpm_tbl_entry zero_tbl8_entry = {0};\n> \n> -\t__atomic_store(&tbl8[tbl8_group_start], &zero_tbl8_entry,\n> -\t\t\t__ATOMIC_RELAXED);\n> +\tif (!lpm->v) {\n> +\t\t/* Set tbl8 group invalid*/\n> +\t\t__atomic_store(&lpm->tbl8[tbl8_group_start], &zero_tbl8_entry,\n> +\t\t\t\t__ATOMIC_RELAXED);\n> +\t} else if (lpm->rcu_mode == RTE_LPM_QSBR_MODE_SYNC) {\n> +\t\t/* Wait for quiescent state change. */\n> +\t\trte_rcu_qsbr_synchronize(lpm->v, RTE_QSBR_THRID_INVALID);\n> +\t\t/* Set tbl8 group invalid*/\n> +\t\t__atomic_store(&lpm->tbl8[tbl8_group_start], &zero_tbl8_entry,\n> +\t\t\t\t__ATOMIC_RELAXED);\n> +\t} else if (lpm->rcu_mode == RTE_LPM_QSBR_MODE_DQ) {\n> +\t\t/* Push into QSBR defer queue. 
*/\n> +\t\trte_rcu_qsbr_dq_enqueue(lpm->dq, (void *)&tbl8_group_start);\n> +\t}\n> }\n> \n> static __rte_noinline int32_t\n> @@ -523,7 +621,7 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,\n> \n> \tif (!lpm->tbl24[tbl24_index].valid) {\n> \t\t/* Search for a free tbl8 group. */\n> -\t\ttbl8_group_index = tbl8_alloc(lpm->tbl8, lpm->number_tbl8s);\n> +\t\ttbl8_group_index = tbl8_alloc(lpm);\n> \n> \t\t/* Check tbl8 allocation was successful. */\n> \t\tif (tbl8_group_index < 0) {\n> @@ -569,7 +667,7 @@ add_depth_big(struct rte_lpm *lpm, uint32_t ip_masked, uint8_t depth,\n> \t} /* If valid entry but not extended calculate the index into Table8. */\n> \telse if (lpm->tbl24[tbl24_index].valid_group == 0) {\n> \t\t/* Search for free tbl8 group. */\n> -\t\ttbl8_group_index = tbl8_alloc(lpm->tbl8, lpm->number_tbl8s);\n> +\t\ttbl8_group_index = tbl8_alloc(lpm);\n> \n> \t\tif (tbl8_group_index < 0) {\n> \t\t\treturn tbl8_group_index;\n> @@ -977,7 +1075,7 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,\n> \t\t */\n> \t\tlpm->tbl24[tbl24_index].valid = 0;\n> \t\t__atomic_thread_fence(__ATOMIC_RELEASE);\n> -\t\ttbl8_free(lpm->tbl8, tbl8_group_start);\n> +\t\ttbl8_free(lpm, tbl8_group_start);\n> \t} else if (tbl8_recycle_index > -1) {\n> \t\t/* Update tbl24 entry. */\n> \t\tstruct rte_lpm_tbl_entry new_tbl24_entry = {\n> @@ -993,7 +1091,7 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,\n> \t\t__atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,\n> \t\t\t\t__ATOMIC_RELAXED);\n> \t\t__atomic_thread_fence(__ATOMIC_RELEASE);\n> -\t\ttbl8_free(lpm->tbl8, tbl8_group_start);\n> +\t\ttbl8_free(lpm, tbl8_group_start);\n> \t}\n> #undef group_idx\n> \treturn 0;\n> diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h\n> index b9d49ac87..7889f21b3 100644\n> --- a/lib/librte_lpm/rte_lpm.h\n> +++ b/lib/librte_lpm/rte_lpm.h\n> @@ -1,5 +1,6 @@\n> /* SPDX-License-Identifier: BSD-3-Clause\n> * Copyright(c) 2010-2014 Intel Corporation\n> + * Copyright(c) 2020 Arm Limited\n> */\n> \n> #ifndef _RTE_LPM_H_\n> @@ -20,6 +21,7 @@\n> #include <rte_memory.h>\n> #include <rte_common.h>\n> #include <rte_vect.h>\n> +#include <rte_rcu_qsbr.h>\n> \n> #ifdef __cplusplus\n> extern \"C\" {\n> @@ -62,6 +64,17 @@ extern \"C\" {\n> /** Bitmask used to indicate successful lookup */\n> #define RTE_LPM_LOOKUP_SUCCESS 0x01000000\n> \n> +/** @internal Default RCU defer queue entries to reclaim in one go. */\n> +#define RTE_LPM_RCU_DQ_RECLAIM_MAX\t16\n> +\n> +/** RCU reclamation modes */\n> +enum rte_lpm_qsbr_mode {\n> +\t/** Create defer queue for reclaim. */\n> +\tRTE_LPM_QSBR_MODE_DQ = 0,\n> +\t/** Use blocking mode reclaim. No defer queue created. */\n> +\tRTE_LPM_QSBR_MODE_SYNC\n> +};\n> +\n> #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN\n> /** @internal Tbl24 entry structure. */\n> __extension__\n> @@ -130,6 +143,28 @@ struct rte_lpm {\n> \t\t\t__rte_cache_aligned; /**< LPM tbl24 table. */\n> \tstruct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */\n> \tstruct rte_lpm_rule *rules_tbl; /**< LPM rules. */\n> +#ifdef ALLOW_EXPERIMENTAL_API\n> +\t/* RCU config. */\n> +\tstruct rte_rcu_qsbr *v;\t\t/* RCU QSBR variable. */\n> +\tenum rte_lpm_qsbr_mode rcu_mode;/* Blocking, defer queue. */\n> +\tstruct rte_rcu_qsbr_dq *dq;\t/* RCU QSBR defer queue. */\n> +#endif\n> +};\n> +\n> +/** LPM RCU QSBR configuration structure. */\n> +struct rte_lpm_rcu_config {\n> +\tstruct rte_rcu_qsbr *v;\t/* RCU QSBR variable. */\n> +\t/* Mode of RCU QSBR. 
RTE_LPM_QSBR_MODE_xxx\n> +\t * '0' for default: create defer queue for reclaim.\n> +\t */\n> +\tenum rte_lpm_qsbr_mode mode;\n> +\tuint32_t dq_size;\t/* RCU defer queue size.\n> +\t\t\t\t * default: lpm->number_tbl8s.\n> +\t\t\t\t */\n> +\tuint32_t reclaim_thd;\t/* Threshold to trigger auto reclaim. */\n> +\tuint32_t reclaim_max;\t/* Max entries to reclaim in one go.\n> +\t\t\t\t * default: RTE_LPM_RCU_DQ_RECLAIM_MAX.\n> +\t\t\t\t */\n> };\n> \n> /**\n> @@ -179,6 +214,30 @@ rte_lpm_find_existing(const char *name);\n> void\n> rte_lpm_free(struct rte_lpm *lpm);\n> \n> +/**\n> + * @warning\n> + * @b EXPERIMENTAL: this API may change without prior notice\n> + *\n> + * Associate RCU QSBR variable with an LPM object.\n> + *\n> + * @param lpm\n> + * the lpm object to add RCU QSBR\n> + * @param cfg\n> + * RCU QSBR configuration\n> + * @param dq\n> + * handler of created RCU QSBR defer queue\n> + * @return\n> + * On success - 0\n> + * On error - 1 with error code set in rte_errno.\n> + * Possible rte_errno codes are:\n> + * - EINVAL - invalid pointer\n> + * - EEXIST - already added QSBR\n> + * - ENOMEM - memory allocation failure\n> + */\n> +__rte_experimental\n> +int rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct rte_lpm_rcu_config *cfg,\n> +\tstruct rte_rcu_qsbr_dq **dq);\n> +\n> /**\n> * Add a rule to the LPM table.\n> *\n> diff --git a/lib/librte_lpm/rte_lpm_version.map b/lib/librte_lpm/rte_lpm_version.map\n> index 500f58b80..bfccd7eac 100644\n> --- a/lib/librte_lpm/rte_lpm_version.map\n> +++ b/lib/librte_lpm/rte_lpm_version.map\n> @@ -21,3 +21,9 @@ DPDK_20.0 {\n> \n> \tlocal: *;\n> };\n> +\n> +EXPERIMENTAL {\n> +\tglobal:\n> +\n> +\trte_lpm_rcu_qsbr_add;\n> +};\n\nAcked-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>", "headers": { "Return-Path": "<dev-bounces@dpdk.org>", "X-Original-To": "patchwork@inbox.dpdk.org", "Delivered-To": "patchwork@inbox.dpdk.org", "Received": [ "from dpdk.org (dpdk.org [92.243.14.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id 740C7A0526;\n\tWed, 8 Jul 2020 14:37:06 +0200 (CEST)", "from [92.243.14.124] (localhost [127.0.0.1])\n\tby dpdk.org (Postfix) with ESMTP id 701061DC6A;\n\tWed, 8 Jul 2020 14:37:05 +0200 (CEST)", "from mga12.intel.com (mga12.intel.com [192.55.52.136])\n by dpdk.org (Postfix) with ESMTP id 406661DB4F\n for <dev@dpdk.org>; Wed, 8 Jul 2020 14:37:03 +0200 (CEST)", "from fmsmga002.fm.intel.com ([10.253.24.26])\n by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;\n 08 Jul 2020 05:37:02 -0700", "from vmedvedk-mobl.ger.corp.intel.com (HELO [10.213.247.70])\n ([10.213.247.70])\n by fmsmga002.fm.intel.com with ESMTP; 08 Jul 2020 05:36:59 -0700" ], "IronPort-SDR": [ "\n /EV5TCSE/OYVekQ7eEEmLyCfFTcMFhPaQk93HZydAS5Xj+6/J8LZynnyF3pWM5H1VEkNpxhSzt\n 3y9j2udq+sHw==", "\n x2UqOymYFu6Drl3LV8qrUyIao5r6lFjCcyXslNhwxWfI+uoogJs7k8P6yV2rbTnt/gUGX54YDs\n JmPredL4xl5Q==" ], "X-IronPort-AV": [ "E=McAfee;i=\"6000,8403,9675\"; a=\"127384705\"", "E=Sophos;i=\"5.75,327,1589266800\";\n d=\"scan'208,217\";a=\"127384705\"", "E=Sophos;i=\"5.75,327,1589266800\";\n d=\"scan'208,217\";a=\"315856356\"" ], "X-Amp-Result": "SKIPPED(no attachment in message)", "X-Amp-File-Uploaded": "False", "X-ExtLoop1": "1", "To": "Ruifeng Wang <ruifeng.wang@arm.com>,\n Bruce Richardson <bruce.richardson@intel.com>,\n John McNamara <john.mcnamara@intel.com>,\n Marko Kovacevic <marko.kovacevic@intel.com>, Ray Kinsella <mdr@ashroe.eu>,\n Neil Horman <nhorman@tuxdriver.com>", "Cc": "dev@dpdk.org, konstantin.ananyev@intel.com, honnappa.nagarahalli@arm.com,\n nd@arm.com", 
"References": "<20190906094534.36060-1-ruifeng.wang@arm.com>\n <20200707151554.64431-1-ruifeng.wang@arm.com>\n <20200707151554.64431-2-ruifeng.wang@arm.com>", "From": "\"Medvedkin, Vladimir\" <vladimir.medvedkin@intel.com>", "Message-ID": "<b9e80481-7247-5214-0e09-48d0f6fbe84b@intel.com>", "Date": "Wed, 8 Jul 2020 13:36:58 +0100", "User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101\n Thunderbird/68.10.0", "MIME-Version": "1.0", "In-Reply-To": "<20200707151554.64431-2-ruifeng.wang@arm.com>", "Content-Language": "en-US", "Content-Type": "text/plain; charset=utf-8; format=flowed", "Content-Transfer-Encoding": "7bit", "X-Content-Filtered-By": "Mailman/MimeDel 2.1.15", "Subject": "Re: [dpdk-dev] [PATCH v7 1/3] lib/lpm: integrate RCU QSBR", "X-BeenThere": "dev@dpdk.org", "X-Mailman-Version": "2.1.15", "Precedence": "list", "List-Id": "DPDK patches and discussions <dev.dpdk.org>", "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>", "List-Archive": "<http://mails.dpdk.org/archives/dev/>", "List-Post": "<mailto:dev@dpdk.org>", "List-Help": "<mailto:dev-request@dpdk.org?subject=help>", "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>", "Errors-To": "dev-bounces@dpdk.org", "Sender": "\"dev\" <dev-bounces@dpdk.org>" }, "addressed": null }, { "id": 115539, "web_url": "https://patches.dpdk.org/comment/115539/", "msgid": "<CAJFAV8wSvAR0sBXotu1ssGOZKD634hVJT2OcMs=XYsxc10F3-g@mail.gmail.com>", "list_archive_url": "https://inbox.dpdk.org/dev/CAJFAV8wSvAR0sBXotu1ssGOZKD634hVJT2OcMs=XYsxc10F3-g@mail.gmail.com", "date": "2020-07-08T14:30:21", "subject": "Re: [dpdk-dev] [PATCH v7 1/3] lib/lpm: integrate RCU QSBR", "submitter": { "id": 1173, "url": "https://patches.dpdk.org/api/people/1173/?format=api", "name": "David Marchand", "email": "david.marchand@redhat.com" }, "content": "On Tue, Jul 7, 2020 at 5:16 PM Ruifeng Wang <ruifeng.wang@arm.com> wrote:\n> diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h\n> index b9d49ac87..7889f21b3 100644\n> --- a/lib/librte_lpm/rte_lpm.h\n> +++ b/lib/librte_lpm/rte_lpm.h\n> @@ -1,5 +1,6 @@\n> /* SPDX-License-Identifier: BSD-3-Clause\n> * Copyright(c) 2010-2014 Intel Corporation\n> + * Copyright(c) 2020 Arm Limited\n> */\n>\n> #ifndef _RTE_LPM_H_\n> @@ -20,6 +21,7 @@\n> #include <rte_memory.h>\n> #include <rte_common.h>\n> #include <rte_vect.h>\n> +#include <rte_rcu_qsbr.h>\n>\n> #ifdef __cplusplus\n> extern \"C\" {\n> @@ -62,6 +64,17 @@ extern \"C\" {\n> /** Bitmask used to indicate successful lookup */\n> #define RTE_LPM_LOOKUP_SUCCESS 0x01000000\n>\n> +/** @internal Default RCU defer queue entries to reclaim in one go. */\n> +#define RTE_LPM_RCU_DQ_RECLAIM_MAX 16\n> +\n> +/** RCU reclamation modes */\n> +enum rte_lpm_qsbr_mode {\n> + /** Create defer queue for reclaim. */\n> + RTE_LPM_QSBR_MODE_DQ = 0,\n> + /** Use blocking mode reclaim. No defer queue created. */\n> + RTE_LPM_QSBR_MODE_SYNC\n> +};\n> +\n> #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN\n> /** @internal Tbl24 entry structure. */\n> __extension__\n> @@ -130,6 +143,28 @@ struct rte_lpm {\n> __rte_cache_aligned; /**< LPM tbl24 table. */\n> struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */\n> struct rte_lpm_rule *rules_tbl; /**< LPM rules. */\n> +#ifdef ALLOW_EXPERIMENTAL_API\n> + /* RCU config. */\n> + struct rte_rcu_qsbr *v; /* RCU QSBR variable. */\n> + enum rte_lpm_qsbr_mode rcu_mode;/* Blocking, defer queue. 
*/\n> + struct rte_rcu_qsbr_dq *dq; /* RCU QSBR defer queue. */\n> +#endif\n> +};\n\nI can see failures in travis reports for v7 and v6.\nI reproduced them in my env.\n\n1 function with some indirect sub-type change:\n\n [C]'function int rte_lpm_add(rte_lpm*, uint32_t, uint8_t, uint32_t)'\nat rte_lpm.c:764:1 has some indirect sub-type changes:\n parameter 1 of type 'rte_lpm*' has sub-type changes:\n in pointed to type 'struct rte_lpm' at rte_lpm.h:134:1:\n type size hasn't changed\n 3 data member insertions:\n 'rte_rcu_qsbr* rte_lpm::v', at offset 536873600 (in bits) at\nrte_lpm.h:148:1\n 'rte_lpm_qsbr_mode rte_lpm::rcu_mode', at offset 536873664\n(in bits) at rte_lpm.h:149:1\n 'rte_rcu_qsbr_dq* rte_lpm::dq', at offset 536873728 (in\nbits) at rte_lpm.h:150:1\n\n\nGoing back to my proposal of hiding what does not need to be seen.\n\nDisclaimer, *this is quick & dirty* but it builds and passes ABI check:\n\n$ git diff\ndiff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c\nindex d498ba761..7109aef6a 100644\n--- a/lib/librte_lpm/rte_lpm.c\n+++ b/lib/librte_lpm/rte_lpm.c\n@@ -115,6 +115,15 @@ rte_lpm_find_existing(const char *name)\n return l;\n }\n\n+struct internal_lpm {\n+ /* Public object */\n+ struct rte_lpm lpm;\n+ /* RCU config. */\n+ struct rte_rcu_qsbr *v; /* RCU QSBR variable. */\n+ enum rte_lpm_qsbr_mode rcu_mode;/* Blocking, defer queue. */\n+ struct rte_rcu_qsbr_dq *dq; /* RCU QSBR defer queue. */\n+};\n+\n /*\n * Allocates memory for LPM object\n */\n@@ -123,6 +132,7 @@ rte_lpm_create(const char *name, int socket_id,\n const struct rte_lpm_config *config)\n {\n char mem_name[RTE_LPM_NAMESIZE];\n+ struct internal_lpm *internal = NULL;\n struct rte_lpm *lpm = NULL;\n struct rte_tailq_entry *te;\n uint32_t mem_size, rules_size, tbl8s_size;\n@@ -141,12 +151,6 @@ rte_lpm_create(const char *name, int socket_id,\n\n snprintf(mem_name, sizeof(mem_name), \"LPM_%s\", name);\n\n- /* Determine the amount of memory to allocate. */\n- mem_size = sizeof(*lpm);\n- rules_size = sizeof(struct rte_lpm_rule) * config->max_rules;\n- tbl8s_size = (sizeof(struct rte_lpm_tbl_entry) *\n- RTE_LPM_TBL8_GROUP_NUM_ENTRIES * config->number_tbl8s);\n-\n rte_mcfg_tailq_write_lock();\n\n /* guarantee there's no existing */\n@@ -170,16 +174,23 @@ rte_lpm_create(const char *name, int socket_id,\n goto exit;\n }\n\n+ /* Determine the amount of memory to allocate. */\n+ mem_size = sizeof(*internal);\n+ rules_size = sizeof(struct rte_lpm_rule) * config->max_rules;\n+ tbl8s_size = (sizeof(struct rte_lpm_tbl_entry) *\n+ RTE_LPM_TBL8_GROUP_NUM_ENTRIES * config->number_tbl8s);\n+\n /* Allocate memory to store the LPM data structures. 
*/\n- lpm = rte_zmalloc_socket(mem_name, mem_size,\n+ internal = rte_zmalloc_socket(mem_name, mem_size,\n RTE_CACHE_LINE_SIZE, socket_id);\n- if (lpm == NULL) {\n+ if (internal == NULL) {\n RTE_LOG(ERR, LPM, \"LPM memory allocation failed\\n\");\n rte_free(te);\n rte_errno = ENOMEM;\n goto exit;\n }\n\n+ lpm = &internal->lpm;\n lpm->rules_tbl = rte_zmalloc_socket(NULL,\n (size_t)rules_size, RTE_CACHE_LINE_SIZE, socket_id);\n\n@@ -226,6 +237,7 @@ rte_lpm_create(const char *name, int socket_id,\n void\n rte_lpm_free(struct rte_lpm *lpm)\n {\n+ struct internal_lpm *internal;\n struct rte_lpm_list *lpm_list;\n struct rte_tailq_entry *te;\n\n@@ -247,8 +259,9 @@ rte_lpm_free(struct rte_lpm *lpm)\n\n rte_mcfg_tailq_write_unlock();\n\n- if (lpm->dq)\n- rte_rcu_qsbr_dq_delete(lpm->dq);\n+ internal = container_of(lpm, struct internal_lpm, lpm);\n+ if (internal->dq != NULL)\n+ rte_rcu_qsbr_dq_delete(internal->dq);\n rte_free(lpm->tbl8);\n rte_free(lpm->rules_tbl);\n rte_free(lpm);\n@@ -276,13 +289,15 @@ rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct\nrte_lpm_rcu_config *cfg,\n {\n char rcu_dq_name[RTE_RCU_QSBR_DQ_NAMESIZE];\n struct rte_rcu_qsbr_dq_parameters params = {0};\n+ struct internal_lpm *internal;\n\n- if ((lpm == NULL) || (cfg == NULL)) {\n+ if (lpm == NULL || cfg == NULL) {\n rte_errno = EINVAL;\n return 1;\n }\n\n- if (lpm->v) {\n+ internal = container_of(lpm, struct internal_lpm, lpm);\n+ if (internal->v != NULL) {\n rte_errno = EEXIST;\n return 1;\n }\n@@ -305,20 +320,19 @@ rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct\nrte_lpm_rcu_config *cfg,\n params.free_fn = __lpm_rcu_qsbr_free_resource;\n params.p = lpm;\n params.v = cfg->v;\n- lpm->dq = rte_rcu_qsbr_dq_create(&params);\n- if (lpm->dq == NULL) {\n- RTE_LOG(ERR, LPM,\n- \"LPM QS defer queue creation failed\\n\");\n+ internal->dq = rte_rcu_qsbr_dq_create(&params);\n+ if (internal->dq == NULL) {\n+ RTE_LOG(ERR, LPM, \"LPM QS defer queue creation\nfailed\\n\");\n return 1;\n }\n if (dq)\n- *dq = lpm->dq;\n+ *dq = internal->dq;\n } else {\n rte_errno = EINVAL;\n return 1;\n }\n- lpm->rcu_mode = cfg->mode;\n- lpm->v = cfg->v;\n+ internal->rcu_mode = cfg->mode;\n+ internal->v = cfg->v;\n\n return 0;\n }\n@@ -502,12 +516,13 @@ _tbl8_alloc(struct rte_lpm *lpm)\n static int32_t\n tbl8_alloc(struct rte_lpm *lpm)\n {\n+ struct internal_lpm *internal = container_of(lpm, struct\ninternal_lpm, lpm);\n int32_t group_idx; /* tbl8 group index. */\n\n group_idx = _tbl8_alloc(lpm);\n- if ((group_idx == -ENOSPC) && (lpm->dq != NULL)) {\n+ if (group_idx == -ENOSPC && internal->dq != NULL) {\n /* If there are no tbl8 groups try to reclaim one. */\n- if (rte_rcu_qsbr_dq_reclaim(lpm->dq, 1, NULL, NULL, NULL) == 0)\n+ if (rte_rcu_qsbr_dq_reclaim(internal->dq, 1, NULL,\nNULL, NULL) == 0)\n group_idx = _tbl8_alloc(lpm);\n }\n\n@@ -518,20 +533,21 @@ static void\n tbl8_free(struct rte_lpm *lpm, uint32_t tbl8_group_start)\n {\n struct rte_lpm_tbl_entry zero_tbl8_entry = {0};\n+ struct internal_lpm *internal = container_of(lpm, struct\ninternal_lpm, lpm);\n\n- if (!lpm->v) {\n+ if (internal->v == NULL) {\n /* Set tbl8 group invalid*/\n __atomic_store(&lpm->tbl8[tbl8_group_start], &zero_tbl8_entry,\n __ATOMIC_RELAXED);\n- } else if (lpm->rcu_mode == RTE_LPM_QSBR_MODE_SYNC) {\n+ } else if (internal->rcu_mode == RTE_LPM_QSBR_MODE_SYNC) {\n /* Wait for quiescent state change. 
*/\n- rte_rcu_qsbr_synchronize(lpm->v, RTE_QSBR_THRID_INVALID);\n+ rte_rcu_qsbr_synchronize(internal->v, RTE_QSBR_THRID_INVALID);\n /* Set tbl8 group invalid*/\n __atomic_store(&lpm->tbl8[tbl8_group_start], &zero_tbl8_entry,\n __ATOMIC_RELAXED);\n- } else if (lpm->rcu_mode == RTE_LPM_QSBR_MODE_DQ) {\n+ } else if (internal->rcu_mode == RTE_LPM_QSBR_MODE_DQ) {\n /* Push into QSBR defer queue. */\n- rte_rcu_qsbr_dq_enqueue(lpm->dq, (void *)&tbl8_group_start);\n+ rte_rcu_qsbr_dq_enqueue(internal->dq, (void\n*)&tbl8_group_start);\n }\n }\n\ndiff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h\nindex 7889f21b3..a9568fcdd 100644\n--- a/lib/librte_lpm/rte_lpm.h\n+++ b/lib/librte_lpm/rte_lpm.h\n@@ -143,12 +143,6 @@ struct rte_lpm {\n __rte_cache_aligned; /**< LPM tbl24 table. */\n struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */\n struct rte_lpm_rule *rules_tbl; /**< LPM rules. */\n-#ifdef ALLOW_EXPERIMENTAL_API\n- /* RCU config. */\n- struct rte_rcu_qsbr *v; /* RCU QSBR variable. */\n- enum rte_lpm_qsbr_mode rcu_mode;/* Blocking, defer queue. */\n- struct rte_rcu_qsbr_dq *dq; /* RCU QSBR defer queue. */\n-#endif\n };\n\n /** LPM RCU QSBR configuration structure. */", "headers": { "Return-Path": "<dev-bounces@dpdk.org>", "X-Original-To": "patchwork@inbox.dpdk.org", "Delivered-To": "patchwork@inbox.dpdk.org", "Received": [ "from dpdk.org (dpdk.org [92.243.14.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id 795CFA0526;\n\tWed, 8 Jul 2020 16:30:39 +0200 (CEST)", "from [92.243.14.124] (localhost [127.0.0.1])\n\tby dpdk.org (Postfix) with ESMTP id 4B73C1DF90;\n\tWed, 8 Jul 2020 16:30:39 +0200 (CEST)", "from us-smtp-1.mimecast.com (us-smtp-delivery-1.mimecast.com\n [207.211.31.120]) by dpdk.org (Postfix) with ESMTP id F1B7E1DEF3\n for <dev@dpdk.org>; Wed, 8 Jul 2020 16:30:37 +0200 (CEST)", "from mail-ua1-f71.google.com (mail-ua1-f71.google.com\n [209.85.222.71]) (Using TLS) by relay.mimecast.com with ESMTP id\n us-mta-457-RNDqqi6GPTWSnaDxYztmWg-1; Wed, 08 Jul 2020 10:30:34 -0400", "by mail-ua1-f71.google.com with SMTP id x1so12071877uar.4\n for <dev@dpdk.org>; Wed, 08 Jul 2020 07:30:33 -0700 (PDT)" ], "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;\n s=mimecast20190719; t=1594218637;\n h=from:from:reply-to:subject:subject:date:date:message-id:message-id:\n to:to:cc:cc:mime-version:mime-version:content-type:content-type:\n in-reply-to:in-reply-to:references:references;\n bh=B+U8naIVOKBuB8Sxis+3r9QPIxwFdBVTDtYu9D7X0ek=;\n b=Tk0PIadzLG4VxSFI3r4Pk/MbZAhIlXMSlRcbw65C5Ow6ZUqM+SJYF7NBucSTjJcs44PB8o\n A5ZprXCk05wzi6f4ex4VPHZqFYo0Oev6DDUPEJ+7vapahtWf0PihQkRk78kyjlT1k2uS8/\n sWOwa4vn8HRo34AzqlDuX0e8B/12hKg=", "X-MC-Unique": "RNDqqi6GPTWSnaDxYztmWg-1", "X-Google-DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed;\n d=1e100.net; s=20161025;\n h=x-gm-message-state:mime-version:references:in-reply-to:from:date\n :message-id:subject:to:cc;\n bh=B+U8naIVOKBuB8Sxis+3r9QPIxwFdBVTDtYu9D7X0ek=;\n b=PqLzQkVvE2RAw1+PoySO1SGvXOSwu4wQCnD2JEJIeNA2dGBLFOFVfXDHa9i2mWGpm2\n ntXei1qns7UIhjRBAZxrePkSux5pByiltUJ9RSL2cCXDdMbjM0UMe0df1QosghWZt2tQ\n 5K3ernoj5GSvEnlHaO6CO6ausL6HRdzrmBpXUCfY7Mtwhyyzk1vCRP4LX1Bn+o22YicG\n gaIpp5oQDvR+Su/W8LsDcCPQAulLSI8fvibxz5Qf3c8qaZlOjkRGgzm3UM/RkWwZ6vLG\n /r77Pdk46PmvLVRvbppSYfHvYjoqzqr26j4zmHfR1x8ZC93RxjNhaKPf+UUCk+Ul0ysd\n P0sQ==", "X-Gm-Message-State": "AOAM53071IU4robKRFfUxuAw8OHMRcXZ1GUQ2nkRJCnC4cM+2bfHItNg\n 36GohIhcSug6te9nLng/seajnlHR92dUCRxnRbnE3+cnEPrbvnZojybymGionQm+GFOwg4SL0jQ\n vQxZxZEOTHxWO/4fS6c8=", "X-Received": [ "by 2002:ab0:2c3:: with 
SMTP id 61mr40861921uah.87.1594218633306;\n Wed, 08 Jul 2020 07:30:33 -0700 (PDT)", "by 2002:ab0:2c3:: with SMTP id 61mr40861869uah.87.1594218632895;\n Wed, 08 Jul 2020 07:30:32 -0700 (PDT)" ], "X-Google-Smtp-Source": "\n ABdhPJxluA5mF2D955UVCj5r3NPF496eCrhNdfrgfuEp7oy6M8Nt7MIJl8GDTMoeumXwS1dyajMCvQRpLOe42xCStNA=", "MIME-Version": "1.0", "References": "<20190906094534.36060-1-ruifeng.wang@arm.com>\n <20200707151554.64431-1-ruifeng.wang@arm.com>\n <20200707151554.64431-2-ruifeng.wang@arm.com>", "In-Reply-To": "<20200707151554.64431-2-ruifeng.wang@arm.com>", "From": "David Marchand <david.marchand@redhat.com>", "Date": "Wed, 8 Jul 2020 16:30:21 +0200", "Message-ID": "\n <CAJFAV8wSvAR0sBXotu1ssGOZKD634hVJT2OcMs=XYsxc10F3-g@mail.gmail.com>", "To": "Ruifeng Wang <ruifeng.wang@arm.com>", "Cc": "Bruce Richardson <bruce.richardson@intel.com>,\n Vladimir Medvedkin <vladimir.medvedkin@intel.com>,\n John McNamara <john.mcnamara@intel.com>,\n Marko Kovacevic <marko.kovacevic@intel.com>, Ray Kinsella <mdr@ashroe.eu>,\n Neil Horman <nhorman@tuxdriver.com>, dev <dev@dpdk.org>,\n \"Ananyev, Konstantin\" <konstantin.ananyev@intel.com>,\n Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>, nd <nd@arm.com>", "Authentication-Results": "relay.mimecast.com;\n auth=pass smtp.auth=CUSA124A263 smtp.mailfrom=dmarchan@redhat.com", "X-Mimecast-Spam-Score": "0", "X-Mimecast-Originator": "redhat.com", "Content-Type": "text/plain; charset=\"UTF-8\"", "Subject": "Re: [dpdk-dev] [PATCH v7 1/3] lib/lpm: integrate RCU QSBR", "X-BeenThere": "dev@dpdk.org", "X-Mailman-Version": "2.1.15", "Precedence": "list", "List-Id": "DPDK patches and discussions <dev.dpdk.org>", "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>", "List-Archive": "<http://mails.dpdk.org/archives/dev/>", "List-Post": "<mailto:dev@dpdk.org>", "List-Help": "<mailto:dev-request@dpdk.org?subject=help>", "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>", "Errors-To": "dev-bounces@dpdk.org", "Sender": "\"dev\" <dev-bounces@dpdk.org>" }, "addressed": null }, { "id": 115552, "web_url": "https://patches.dpdk.org/comment/115552/", "msgid": "<HE1PR0801MB2025F66DF4F63ADC0C5DC0909E670@HE1PR0801MB2025.eurprd08.prod.outlook.com>", "list_archive_url": "https://inbox.dpdk.org/dev/HE1PR0801MB2025F66DF4F63ADC0C5DC0909E670@HE1PR0801MB2025.eurprd08.prod.outlook.com", "date": "2020-07-08T15:34:44", "subject": "Re: [dpdk-dev] [PATCH v7 1/3] lib/lpm: integrate RCU QSBR", "submitter": { "id": 1198, "url": "https://patches.dpdk.org/api/people/1198/?format=api", "name": "Ruifeng Wang", "email": "ruifeng.wang@arm.com" }, "content": "> -----Original Message-----\n> From: David Marchand <david.marchand@redhat.com>\n> Sent: Wednesday, July 8, 2020 10:30 PM\n> To: Ruifeng Wang <Ruifeng.Wang@arm.com>\n> Cc: Bruce Richardson <bruce.richardson@intel.com>; Vladimir Medvedkin\n> <vladimir.medvedkin@intel.com>; John McNamara\n> <john.mcnamara@intel.com>; Marko Kovacevic\n> <marko.kovacevic@intel.com>; Ray Kinsella <mdr@ashroe.eu>; Neil Horman\n> <nhorman@tuxdriver.com>; dev <dev@dpdk.org>; Ananyev, Konstantin\n> <konstantin.ananyev@intel.com>; Honnappa Nagarahalli\n> <Honnappa.Nagarahalli@arm.com>; nd <nd@arm.com>\n> Subject: Re: [dpdk-dev] [PATCH v7 1/3] lib/lpm: integrate RCU QSBR\n> \n> On Tue, Jul 7, 2020 at 5:16 PM Ruifeng Wang <ruifeng.wang@arm.com>\n> wrote:\n> > diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h index\n> > b9d49ac87..7889f21b3 100644\n> 
> --- a/lib/librte_lpm/rte_lpm.h\n> > +++ b/lib/librte_lpm/rte_lpm.h\n> > @@ -1,5 +1,6 @@\n> > /* SPDX-License-Identifier: BSD-3-Clause\n> > * Copyright(c) 2010-2014 Intel Corporation\n> > + * Copyright(c) 2020 Arm Limited\n> > */\n> >\n> > #ifndef _RTE_LPM_H_\n> > @@ -20,6 +21,7 @@\n> > #include <rte_memory.h>\n> > #include <rte_common.h>\n> > #include <rte_vect.h>\n> > +#include <rte_rcu_qsbr.h>\n> >\n> > #ifdef __cplusplus\n> > extern \"C\" {\n> > @@ -62,6 +64,17 @@ extern \"C\" {\n> > /** Bitmask used to indicate successful lookup */\n> > #define RTE_LPM_LOOKUP_SUCCESS 0x01000000\n> >\n> > +/** @internal Default RCU defer queue entries to reclaim in one go. */\n> > +#define RTE_LPM_RCU_DQ_RECLAIM_MAX 16\n> > +\n> > +/** RCU reclamation modes */\n> > +enum rte_lpm_qsbr_mode {\n> > + /** Create defer queue for reclaim. */\n> > + RTE_LPM_QSBR_MODE_DQ = 0,\n> > + /** Use blocking mode reclaim. No defer queue created. */\n> > + RTE_LPM_QSBR_MODE_SYNC\n> > +};\n> > +\n> > #if RTE_BYTE_ORDER == RTE_LITTLE_ENDIAN\n> > /** @internal Tbl24 entry structure. */ __extension__ @@ -130,6\n> > +143,28 @@ struct rte_lpm {\n> > __rte_cache_aligned; /**< LPM tbl24 table. */\n> > struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */\n> > struct rte_lpm_rule *rules_tbl; /**< LPM rules. */\n> > +#ifdef ALLOW_EXPERIMENTAL_API\n> > + /* RCU config. */\n> > + struct rte_rcu_qsbr *v; /* RCU QSBR variable. */\n> > + enum rte_lpm_qsbr_mode rcu_mode;/* Blocking, defer queue. */\n> > + struct rte_rcu_qsbr_dq *dq; /* RCU QSBR defer queue. */\n> > +#endif\n> > +};\n> \n> I can see failures in travis reports for v7 and v6.\n> I reproduced them in my env.\n> \n> 1 function with some indirect sub-type change:\n> \n> [C]'function int rte_lpm_add(rte_lpm*, uint32_t, uint8_t, uint32_t)'\n> at rte_lpm.c:764:1 has some indirect sub-type changes:\n> parameter 1 of type 'rte_lpm*' has sub-type changes:\n> in pointed to type 'struct rte_lpm' at rte_lpm.h:134:1:\n> type size hasn't changed\n> 3 data member insertions:\n> 'rte_rcu_qsbr* rte_lpm::v', at offset 536873600 (in bits) at\n> rte_lpm.h:148:1\n> 'rte_lpm_qsbr_mode rte_lpm::rcu_mode', at offset 536873664 (in bits)\n> at rte_lpm.h:149:1\n> 'rte_rcu_qsbr_dq* rte_lpm::dq', at offset 536873728 (in\n> bits) at rte_lpm.h:150:1\n> \nSorry, I thought if ALLOW_EXPERIMENTAL was added, ABI would be kept when experimental was not allowed by user.\nABI and ALLOW_EXPERIMENTAL should be two different things.\n\n> \n> Going back to my proposal of hiding what does not need to be seen.\n> \n> Disclaimer, *this is quick & dirty* but it builds and passes ABI check:\n> \n> $ git diff\n> diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c index\n> d498ba761..7109aef6a 100644\n> --- a/lib/librte_lpm/rte_lpm.c\n> +++ b/lib/librte_lpm/rte_lpm.c\nI understand your proposal in v5 now. A new data structure encloses rte_lpm and new members that for RCU use.\nIn this way, rte_lpm ABI is kept. And we can move out other members in rte_lpm that not need to be exposed in 20.11 release.\nI will fix the ABI issue in next version.\n\n> @@ -115,6 +115,15 @@ rte_lpm_find_existing(const char *name)\n> return l;\n> }\n> \n> +struct internal_lpm {\n> + /* Public object */\n> + struct rte_lpm lpm;\n> + /* RCU config. */\n> + struct rte_rcu_qsbr *v; /* RCU QSBR variable. */\n> + enum rte_lpm_qsbr_mode rcu_mode;/* Blocking, defer queue. */\n> + struct rte_rcu_qsbr_dq *dq; /* RCU QSBR defer queue. 
*/\n> +};\n> +\n> /*\n> * Allocates memory for LPM object\n> */\n> @@ -123,6 +132,7 @@ rte_lpm_create(const char *name, int socket_id,\n> const struct rte_lpm_config *config) {\n> char mem_name[RTE_LPM_NAMESIZE];\n> + struct internal_lpm *internal = NULL;\n> struct rte_lpm *lpm = NULL;\n> struct rte_tailq_entry *te;\n> uint32_t mem_size, rules_size, tbl8s_size; @@ -141,12 +151,6 @@\n> rte_lpm_create(const char *name, int socket_id,\n> \n> snprintf(mem_name, sizeof(mem_name), \"LPM_%s\", name);\n> \n> - /* Determine the amount of memory to allocate. */\n> - mem_size = sizeof(*lpm);\n> - rules_size = sizeof(struct rte_lpm_rule) * config->max_rules;\n> - tbl8s_size = (sizeof(struct rte_lpm_tbl_entry) *\n> - RTE_LPM_TBL8_GROUP_NUM_ENTRIES * config-\n> >number_tbl8s);\n> -\n> rte_mcfg_tailq_write_lock();\n> \n> /* guarantee there's no existing */ @@ -170,16 +174,23 @@\n> rte_lpm_create(const char *name, int socket_id,\n> goto exit;\n> }\n> \n> + /* Determine the amount of memory to allocate. */\n> + mem_size = sizeof(*internal);\n> + rules_size = sizeof(struct rte_lpm_rule) * config->max_rules;\n> + tbl8s_size = (sizeof(struct rte_lpm_tbl_entry) *\n> + RTE_LPM_TBL8_GROUP_NUM_ENTRIES *\n> + config->number_tbl8s);\n> +\n> /* Allocate memory to store the LPM data structures. */\n> - lpm = rte_zmalloc_socket(mem_name, mem_size,\n> + internal = rte_zmalloc_socket(mem_name, mem_size,\n> RTE_CACHE_LINE_SIZE, socket_id);\n> - if (lpm == NULL) {\n> + if (internal == NULL) {\n> RTE_LOG(ERR, LPM, \"LPM memory allocation failed\\n\");\n> rte_free(te);\n> rte_errno = ENOMEM;\n> goto exit;\n> }\n> \n> + lpm = &internal->lpm;\n> lpm->rules_tbl = rte_zmalloc_socket(NULL,\n> (size_t)rules_size, RTE_CACHE_LINE_SIZE, socket_id);\n> \n> @@ -226,6 +237,7 @@ rte_lpm_create(const char *name, int socket_id,\n> void rte_lpm_free(struct rte_lpm *lpm) {\n> + struct internal_lpm *internal;\n> struct rte_lpm_list *lpm_list;\n> struct rte_tailq_entry *te;\n> \n> @@ -247,8 +259,9 @@ rte_lpm_free(struct rte_lpm *lpm)\n> \n> rte_mcfg_tailq_write_unlock();\n> \n> - if (lpm->dq)\n> - rte_rcu_qsbr_dq_delete(lpm->dq);\n> + internal = container_of(lpm, struct internal_lpm, lpm);\n> + if (internal->dq != NULL)\n> + rte_rcu_qsbr_dq_delete(internal->dq);\n> rte_free(lpm->tbl8);\n> rte_free(lpm->rules_tbl);\n> rte_free(lpm);\n> @@ -276,13 +289,15 @@ rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct\n> rte_lpm_rcu_config *cfg, {\n> char rcu_dq_name[RTE_RCU_QSBR_DQ_NAMESIZE];\n> struct rte_rcu_qsbr_dq_parameters params = {0};\n> + struct internal_lpm *internal;\n> \n> - if ((lpm == NULL) || (cfg == NULL)) {\n> + if (lpm == NULL || cfg == NULL) {\n> rte_errno = EINVAL;\n> return 1;\n> }\n> \n> - if (lpm->v) {\n> + internal = container_of(lpm, struct internal_lpm, lpm);\n> + if (internal->v != NULL) {\n> rte_errno = EEXIST;\n> return 1;\n> }\n> @@ -305,20 +320,19 @@ rte_lpm_rcu_qsbr_add(struct rte_lpm *lpm, struct\n> rte_lpm_rcu_config *cfg,\n> params.free_fn = __lpm_rcu_qsbr_free_resource;\n> params.p = lpm;\n> params.v = cfg->v;\n> - lpm->dq = rte_rcu_qsbr_dq_create(&params);\n> - if (lpm->dq == NULL) {\n> - RTE_LOG(ERR, LPM,\n> - \"LPM QS defer queue creation failed\\n\");\n> + internal->dq = rte_rcu_qsbr_dq_create(&params);\n> + if (internal->dq == NULL) {\n> + RTE_LOG(ERR, LPM, \"LPM QS defer queue creation\n> failed\\n\");\n> return 1;\n> }\n> if (dq)\n> - *dq = lpm->dq;\n> + *dq = internal->dq;\n> } else {\n> rte_errno = EINVAL;\n> return 1;\n> }\n> - lpm->rcu_mode = cfg->mode;\n> - lpm->v = cfg->v;\n> + internal->rcu_mode = 
cfg->mode;\n> + internal->v = cfg->v;\n> \n> return 0;\n> }\n> @@ -502,12 +516,13 @@ _tbl8_alloc(struct rte_lpm *lpm) static int32_t\n> tbl8_alloc(struct rte_lpm *lpm) {\n> + struct internal_lpm *internal = container_of(lpm, struct\n> internal_lpm, lpm);\n> int32_t group_idx; /* tbl8 group index. */\n> \n> group_idx = _tbl8_alloc(lpm);\n> - if ((group_idx == -ENOSPC) && (lpm->dq != NULL)) {\n> + if (group_idx == -ENOSPC && internal->dq != NULL) {\n> /* If there are no tbl8 groups try to reclaim one. */\n> - if (rte_rcu_qsbr_dq_reclaim(lpm->dq, 1, NULL, NULL, NULL) == 0)\n> + if (rte_rcu_qsbr_dq_reclaim(internal->dq, 1, NULL,\n> NULL, NULL) == 0)\n> group_idx = _tbl8_alloc(lpm);\n> }\n> \n> @@ -518,20 +533,21 @@ static void\n> tbl8_free(struct rte_lpm *lpm, uint32_t tbl8_group_start) {\n> struct rte_lpm_tbl_entry zero_tbl8_entry = {0};\n> + struct internal_lpm *internal = container_of(lpm, struct\n> internal_lpm, lpm);\n> \n> - if (!lpm->v) {\n> + if (internal->v == NULL) {\n> /* Set tbl8 group invalid*/\n> __atomic_store(&lpm->tbl8[tbl8_group_start], &zero_tbl8_entry,\n> __ATOMIC_RELAXED);\n> - } else if (lpm->rcu_mode == RTE_LPM_QSBR_MODE_SYNC) {\n> + } else if (internal->rcu_mode == RTE_LPM_QSBR_MODE_SYNC) {\n> /* Wait for quiescent state change. */\n> - rte_rcu_qsbr_synchronize(lpm->v, RTE_QSBR_THRID_INVALID);\n> + rte_rcu_qsbr_synchronize(internal->v,\n> + RTE_QSBR_THRID_INVALID);\n> /* Set tbl8 group invalid*/\n> __atomic_store(&lpm->tbl8[tbl8_group_start], &zero_tbl8_entry,\n> __ATOMIC_RELAXED);\n> - } else if (lpm->rcu_mode == RTE_LPM_QSBR_MODE_DQ) {\n> + } else if (internal->rcu_mode == RTE_LPM_QSBR_MODE_DQ) {\n> /* Push into QSBR defer queue. */\n> - rte_rcu_qsbr_dq_enqueue(lpm->dq, (void *)&tbl8_group_start);\n> + rte_rcu_qsbr_dq_enqueue(internal->dq, (void\n> *)&tbl8_group_start);\n> }\n> }\n> \n> diff --git a/lib/librte_lpm/rte_lpm.h b/lib/librte_lpm/rte_lpm.h index\n> 7889f21b3..a9568fcdd 100644\n> --- a/lib/librte_lpm/rte_lpm.h\n> +++ b/lib/librte_lpm/rte_lpm.h\n> @@ -143,12 +143,6 @@ struct rte_lpm {\n> __rte_cache_aligned; /**< LPM tbl24 table. */\n> struct rte_lpm_tbl_entry *tbl8; /**< LPM tbl8 table. */\n> struct rte_lpm_rule *rules_tbl; /**< LPM rules. */ -#ifdef\n> ALLOW_EXPERIMENTAL_API\n> - /* RCU config. */\n> - struct rte_rcu_qsbr *v; /* RCU QSBR variable. */\n> - enum rte_lpm_qsbr_mode rcu_mode;/* Blocking, defer queue. */\n> - struct rte_rcu_qsbr_dq *dq; /* RCU QSBR defer queue. */\n> -#endif\n> };\n> \n> /** LPM RCU QSBR configuration structure. 
*/\n> \n> \n> \n> \n> --\n> David Marchand", "headers": { "Return-Path": "<dev-bounces@dpdk.org>", "X-Original-To": "patchwork@inbox.dpdk.org", "Delivered-To": "patchwork@inbox.dpdk.org", "Received": [ "from dpdk.org (dpdk.org [92.243.14.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id 4109FA0527;\n\tWed, 8 Jul 2020 17:34:56 +0200 (CEST)", "from [92.243.14.124] (localhost [127.0.0.1])\n\tby dpdk.org (Postfix) with ESMTP id 60C2D1D73C;\n\tWed, 8 Jul 2020 17:34:55 +0200 (CEST)", "from EUR04-HE1-obe.outbound.protection.outlook.com\n (mail-eopbgr70079.outbound.protection.outlook.com [40.107.7.79])\n by dpdk.org (Postfix) with ESMTP id 9D04A1C2BB\n for <dev@dpdk.org>; Wed, 8 Jul 2020 17:34:53 +0200 (CEST)", "from DB7PR05CA0066.eurprd05.prod.outlook.com (2603:10a6:10:2e::43)\n by DB6PR08MB2805.eurprd08.prod.outlook.com (2603:10a6:6:20::15) with\n Microsoft SMTP Server (version=TLS1_2,\n cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3153.29; Wed, 8 Jul\n 2020 15:34:52 +0000", "from DB5EUR03FT037.eop-EUR03.prod.protection.outlook.com\n (2603:10a6:10:2e:cafe::c6) by DB7PR05CA0066.outlook.office365.com\n (2603:10a6:10:2e::43) with Microsoft SMTP Server (version=TLS1_2,\n cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3174.21 via Frontend\n Transport; Wed, 8 Jul 2020 15:34:52 +0000", "from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by\n DB5EUR03FT037.mail.protection.outlook.com (10.152.20.215) with\n Microsoft SMTP\n Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id\n 15.20.3174.21 via Frontend Transport; Wed, 8 Jul 2020 15:34:52 +0000", "(\"Tessian outbound 8f45de5545d6:v62\");\n Wed, 08 Jul 2020 15:34:52 +0000", "from c2e289cf25af.2\n by 64aa7808-outbound-1.mta.getcheckrecipient.com id\n EAA7A46D-B2EB-48AF-93AB-5F266FE1F0FF.1;\n Wed, 08 Jul 2020 15:34:47 +0000", "from EUR02-AM5-obe.outbound.protection.outlook.com\n by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id\n c2e289cf25af.2\n (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);\n Wed, 08 Jul 2020 15:34:47 +0000", "from HE1PR0801MB2025.eurprd08.prod.outlook.com (2603:10a6:3:50::14)\n by HE1PR0801MB1930.eurprd08.prod.outlook.com (2603:10a6:3:57::17)\n with Microsoft SMTP Server (version=TLS1_2,\n cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3153.24; Wed, 8 Jul\n 2020 15:34:45 +0000", "from HE1PR0801MB2025.eurprd08.prod.outlook.com\n ([fe80::e863:15c9:b803:6533]) by HE1PR0801MB2025.eurprd08.prod.outlook.com\n ([fe80::e863:15c9:b803:6533%7]) with mapi id 15.20.3174.021; Wed, 8 Jul 2020\n 15:34:45 +0000" ], "DKIM-Signature": [ "v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;\n s=selector2-armh-onmicrosoft-com;\n h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;\n bh=t36wL1HBAqhi+T0ZjDPw4FLhC9P+TQ7TXNUIJYFKVcQ=;\n b=Mv4KBia6iNSzX6ARYYFrjnEsAdSQKfI6LWYM9TuFlReCI5Bq5739t506EYDMAaS21GmIE063TFLMhJIH0mJvCdRVGT1p2rOyvTy2O34LJTcaUfu+RcgOChWba0yQ1YdHLumv2cS/f1e5Gz90I15f2iT5ddOFdh2V/8Eq0ELCvRE=", "v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;\n s=selector2-armh-onmicrosoft-com;\n h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;\n bh=t36wL1HBAqhi+T0ZjDPw4FLhC9P+TQ7TXNUIJYFKVcQ=;\n b=Mv4KBia6iNSzX6ARYYFrjnEsAdSQKfI6LWYM9TuFlReCI5Bq5739t506EYDMAaS21GmIE063TFLMhJIH0mJvCdRVGT1p2rOyvTy2O34LJTcaUfu+RcgOChWba0yQ1YdHLumv2cS/f1e5Gz90I15f2iT5ddOFdh2V/8Eq0ELCvRE=" ], "X-MS-Exchange-Authentication-Results": "spf=pass (sender IP is 63.35.35.123)\n smtp.mailfrom=arm.com; dpdk.org; 
dkim=pass (signature was verified)\n header.d=armh.onmicrosoft.com;dpdk.org; dmarc=bestguesspass action=none\n header.from=arm.com;", "Received-SPF": "Pass (protection.outlook.com: domain of arm.com designates\n 63.35.35.123 as permitted sender) receiver=protection.outlook.com;\n client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;", "X-CR-MTA-TID": "64aa7808", "ARC-Seal": "i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;\n b=RwyRt2dyc4SyOQpSK2nHCTznAwalVcSaPydN6ybCwflpOxJDEIXiljnt6QJ8kzKgMeMNue/4IcxpD1OkEAgIT8jHNSukM6104J2DJL60SbzoWSnCK92LJ20pZb3Q8g7dOdO3Tg+NF2u/SuHgJGC/4NjPPC6xMjwQwo4xUfZhXbbr58QoiGXus4koi/4BiCm4oAYD88mFplvkOC8Rvc8g0f1sEbVBkMnUUIUdNJ/D5YatUpGcXCmZQ5sutha9GOoyNUNHR6ChtOqyfMZCibg3PrplwNd3K6Kidus+e1zQb5OSApc0ooLwjgCClXSNWqQID614JQ2KjZpr6WWtbWi0aA==", "ARC-Message-Signature": "i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;\n s=arcselector9901;\n h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;\n bh=t36wL1HBAqhi+T0ZjDPw4FLhC9P+TQ7TXNUIJYFKVcQ=;\n b=i46S16hNUKrIAgyhNLxIWFPcTJ6sLSDatPPm40St+hZRdd5pehPNMZzWniYIrNCmoXz8vtGKo/UCU0iKoEO7sbour3enRhUtOvaXiKVu7qWFawZw901REj6j2ljN4AkJJSKiQEAAlgLWnxirhhsEjPC3GI8tLZmgC1ZMNLRGvAQi+qWggna3zYSZgJto4wlVOqcDVd+mDHSoTtdKonwee4uriAfG6KHfnMnzqkRUkKbmvpoz1m5G4PDVu8F+cqiEoC78AyQwg/yDlJL4jqs4FlRG3b8AqKVbTsUaZHWyXT/HqXvPlUJy2lV9ovra0BE+b7JBM+WRSLdE5DiUkCZt+Q==", "ARC-Authentication-Results": "i=1; mx.microsoft.com 1; spf=pass\n smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass\n header.d=arm.com; arc=none", "From": "Ruifeng Wang <Ruifeng.Wang@arm.com>", "To": "David Marchand <david.marchand@redhat.com>", "CC": "Bruce Richardson <bruce.richardson@intel.com>, Vladimir Medvedkin\n <vladimir.medvedkin@intel.com>, John McNamara <john.mcnamara@intel.com>,\n Marko Kovacevic <marko.kovacevic@intel.com>, Ray Kinsella <mdr@ashroe.eu>,\n Neil Horman <nhorman@tuxdriver.com>, dev <dev@dpdk.org>, \"Ananyev,\n Konstantin\" <konstantin.ananyev@intel.com>, Honnappa Nagarahalli\n <Honnappa.Nagarahalli@arm.com>, nd <nd@arm.com>, nd <nd@arm.com>", "Thread-Topic": "[dpdk-dev] [PATCH v7 1/3] lib/lpm: integrate RCU QSBR", "Thread-Index": "AQHWVHGX18VwTh7s1U+6JqiTNFG00aj9v7uAgAAGnpA=", "Date": "Wed, 8 Jul 2020 15:34:44 +0000", "Message-ID": "\n <HE1PR0801MB2025F66DF4F63ADC0C5DC0909E670@HE1PR0801MB2025.eurprd08.prod.outlook.com>", "References": "<20190906094534.36060-1-ruifeng.wang@arm.com>\n <20200707151554.64431-1-ruifeng.wang@arm.com>\n <20200707151554.64431-2-ruifeng.wang@arm.com>\n <CAJFAV8wSvAR0sBXotu1ssGOZKD634hVJT2OcMs=XYsxc10F3-g@mail.gmail.com>", "In-Reply-To": "\n <CAJFAV8wSvAR0sBXotu1ssGOZKD634hVJT2OcMs=XYsxc10F3-g@mail.gmail.com>", "Accept-Language": "en-US", "Content-Language": "en-US", "X-MS-Has-Attach": "", "X-MS-TNEF-Correlator": "", "x-ts-tracking-id": "dc10ff59-f9f3-45be-9be9-d305cf1850c5.0", "x-checkrecipientchecked": "true", "Authentication-Results-Original": "redhat.com; dkim=none (message not signed)\n header.d=none;redhat.com; dmarc=none action=none header.from=arm.com;", "x-originating-ip": "[203.126.0.113]", "x-ms-publictraffictype": "Email", "X-MS-Office365-Filtering-HT": "Tenant", "X-MS-Office365-Filtering-Correlation-Id": "0e508b7f-4f57-4e90-02ff-08d823547539", "x-ms-traffictypediagnostic": "HE1PR0801MB1930:|DB6PR08MB2805:", "x-ms-exchange-transport-forked": "True", "X-Microsoft-Antispam-PRVS": "\n <DB6PR08MB28058611F05AA594531B05B59E670@DB6PR08MB2805.eurprd08.prod.outlook.com>", "x-checkrecipientrouted": "true", "nodisclaimer": "true", 
"x-ms-oob-tlc-oobclassifiers": "OLM:10000;OLM:10000;", "x-forefront-prvs": [ "04583CED1A", "04583CED1A" ], "X-MS-Exchange-SenderADCheck": "1", "X-Microsoft-Antispam-Untrusted": "BCL:0;", "X-Microsoft-Antispam-Message-Info-Original": "\n FNG548dUAmqGfU8TsPh0Ji6VegO/zZ3dbPmyXAc96INK09+acilftQ8p9Vou/1l8REW3qZoQogjLE3X6sQaq1vnkfp8bCKX/7ixnZhAL9pnATcE/XlMx7YtBO7fVZhRScYEYr5Sv42WtRy2SHG3Y3Dn3SFbNm/Df1FOgQCD9Sh0E8BGKG9gQLes+xjel5kQKB5yJiVLFgp2OUF5KUpMayD66TgOll4Vi4XXupsCteamyqTLg616zBji+1pceYBixf+ZsjYU0W6u2J2Eg+XEM1g/zrB15ZWXz8a1ywEnDbLf3FS3a2fTDVsr2H96++6y9KXHG85VLK7PTejJPUrc8qQ==", "X-Forefront-Antispam-Report-Untrusted": "CIP:255.255.255.255; CTRY:; LANG:en;\n SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:HE1PR0801MB2025.eurprd08.prod.outlook.com;\n PTR:; CAT:NONE; SFTY:;\n SFS:(4636009)(366004)(39860400002)(136003)(346002)(396003)(376002)(83380400001)(186003)(66946007)(4326008)(8936002)(6506007)(66446008)(64756008)(66556008)(66476007)(53546011)(76116006)(2906002)(6916009)(55016002)(86362001)(9686003)(5660300002)(26005)(54906003)(316002)(52536014)(71200400001)(478600001)(33656002)(8676002)(7696005);\n DIR:OUT; SFP:1101;", "x-ms-exchange-antispam-messagedata": "\n UossPZCOOgyAkWedGnDPmkFuztJsNSIgMALLkNQ+AFE8g0Jb79bcFmUy6o0PLy5dvbl9fnTP0eBGa8eg1Q9EVPoMvcbp9Cu6EIh7fGXu1YlhXS2OXv/Qsng9SvOENNJdzSuTN28ryc9jWRmGABbXs5YMPxKiX3QJpNBAlaiE+DDrV1aWM2U5a5d5A8BA+gAngnB5eD/4ZzNu8OJC8028bBc0AK4NAc+9mKCfmAySt1kpYL9xrPEGoOXHjHft8yXR02+VdSL/h0hJv2L+17LbJ3eD7Vqv1sqbWD7A++yj1C63P9b7cfZqdh1a2a6hImND5zKk94J6kA4tahukl6bri+ad++sYj8A7mb4MnAhLqLLMp2PYFttI+46HhP4I9TcR7aRU5GONmg2TVcGo7MoygQa/g/z8pBB/3S6mnhM/5+261bvXPXLtVzViQav4W0gV4lWiE2HcYOtMU9Ra3JOiLKaJ3QW71X+OWZ9ciwqAu94=", "Content-Type": "text/plain; charset=\"utf-8\"", "Content-Transfer-Encoding": "base64", "MIME-Version": "1.0", "X-MS-Exchange-Transport-CrossTenantHeadersStamped": [ "HE1PR0801MB1930", "DB6PR08MB2805" ], "Original-Authentication-Results": "redhat.com; dkim=none (message not signed)\n header.d=none;redhat.com; dmarc=none action=none header.from=arm.com;", "X-EOPAttributedMessage": "0", "X-MS-Exchange-Transport-CrossTenantHeadersStripped": "\n DB5EUR03FT037.eop-EUR03.prod.protection.outlook.com", "X-Forefront-Antispam-Report": "CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;\n IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;\n PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;\n SFS:(4636009)(396003)(39860400002)(346002)(376002)(136003)(46966005)(186003)(478600001)(70586007)(86362001)(70206006)(2906002)(55016002)(47076004)(81166007)(82740400003)(316002)(54906003)(6862004)(83380400001)(33656002)(26005)(6506007)(7696005)(5660300002)(52536014)(8676002)(356005)(30864003)(336012)(9686003)(53546011)(82310400002)(8936002)(4326008);\n DIR:OUT; SFP:1101;", "X-MS-Office365-Filtering-Correlation-Id-Prvs": "\n aa98c40d-9698-4226-0519-08d8235470e6", "X-Forefront-PRVS": [ "04583CED1A", "04583CED1A" ], "X-Microsoft-Antispam": "BCL:0;", "X-Microsoft-Antispam-Message-Info": "\n yABo0yQB83XugI3dTOSePJf5uQp2akQdyDtoCxtybfgK4NgjzF0KgOjtcs9QYvqgIsDKpYCE6dhSBPAHzNwwJYHXgoPc1NlS03I6m2ObeNbZRCRk2OMt29DmnukP65FpOK/Y51iA+8WWjsv/y0KpkxGaQ3UvboAyl2Tq1UL5DtHJpZuejOhrg/owXmW8M+LXYnC0yWs2yq4WeMMqae6QGSg2z6TZ3Il7M3ipGDQZgPqATVfRYUsP/0Gw1J0fvvXnHIMvIFkXWKBq3xguuuKuTvJ0y22JjRZiRaoXv9EGO3wtXU+vghwkg9H7Nelczq7ocdas/z9hKmhwnJGoUH/DoLbwEWSZcelkFX6Zs13m0I4W7sIEuuV3YAl8dbCMRQefq0e6nIxiI+ffyXNhvXmc+w==", "X-OriginatorOrg": "arm.com", "X-MS-Exchange-CrossTenant-OriginalArrivalTime": "08 Jul 2020 15:34:52.3740 (UTC)", "X-MS-Exchange-CrossTenant-Network-Message-Id": 
"\n 0e508b7f-4f57-4e90-02ff-08d823547539", "X-MS-Exchange-CrossTenant-Id": "f34e5979-57d9-4aaa-ad4d-b122a662184d", "X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp": "\n TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];\n Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]", "X-MS-Exchange-CrossTenant-AuthSource": "\n DB5EUR03FT037.eop-EUR03.prod.protection.outlook.com", "X-MS-Exchange-CrossTenant-AuthAs": "Anonymous", "X-MS-Exchange-CrossTenant-FromEntityHeader": "HybridOnPrem", "Subject": "Re: [dpdk-dev] [PATCH v7 1/3] lib/lpm: integrate RCU QSBR", "X-BeenThere": "dev@dpdk.org", "X-Mailman-Version": "2.1.15", "Precedence": "list", "List-Id": "DPDK patches and discussions <dev.dpdk.org>", "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>", "List-Archive": "<http://mails.dpdk.org/archives/dev/>", "List-Post": "<mailto:dev@dpdk.org>", "List-Help": "<mailto:dev-request@dpdk.org?subject=help>", "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>", "Errors-To": "dev-bounces@dpdk.org", "Sender": "\"dev\" <dev-bounces@dpdk.org>" }, "addressed": null } ]