Patch Detail
GET /api/patches/52802/?format=api
{ "id": 52802, "url": "
http://patches.dpdk.org/api/patches/52802/?format=api", "web_url": "http://patches.dpdk.org/project/dpdk/patch/1555364488-28207-2-git-send-email-erik.g.carrillo@intel.com/", "project": { "id": 1, "url": "http://patches.dpdk.org/api/projects/1/?format=api", "name": "DPDK", "link_name": "dpdk", "list_id": "dev.dpdk.org", "list_email": "dev@dpdk.org", "web_url": "http://core.dpdk.org", "scm_url": "git://dpdk.org/dpdk", "webscm_url": "http://git.dpdk.org/dpdk", "list_archive_url": "https://inbox.dpdk.org/dev", "list_archive_url_format": "https://inbox.dpdk.org/dev/{}", "commit_url_format": "" }, "msgid": "<1555364488-28207-2-git-send-email-erik.g.carrillo@intel.com>", "list_archive_url": "https://inbox.dpdk.org/dev/1555364488-28207-2-git-send-email-erik.g.carrillo@intel.com", "date": "2019-04-15T21:41:27", "name": "[v5,1/2] timer: allow timer management in shared memory", "commit_ref": null, "pull_url": null, "state": "accepted", "archived": true, "hash": "b7090a5dc5a75e16251211eb5a3a790bc8b51409", "submitter": { "id": 762, "url": "http://patches.dpdk.org/api/people/762/?format=api", "name": "Carrillo, Erik G", "email": "erik.g.carrillo@intel.com" }, "delegate": null, "mbox": "http://patches.dpdk.org/project/dpdk/patch/1555364488-28207-2-git-send-email-erik.g.carrillo@intel.com/mbox/", "series": [ { "id": 4319, "url": "http://patches.dpdk.org/api/series/4319/?format=api", "web_url": "http://patches.dpdk.org/project/dpdk/list/?series=4319", "date": "2019-04-15T21:41:26", "name": "Timer library changes", "version": 5, "mbox": "http://patches.dpdk.org/series/4319/mbox/" } ], "comments": "http://patches.dpdk.org/api/patches/52802/comments/", "check": "warning", "checks": "http://patches.dpdk.org/api/patches/52802/checks/", "tags": {}, "related": [], "headers": { "Return-Path": "<dev-bounces@dpdk.org>", "X-Original-To": "patchwork@dpdk.org", "Delivered-To": "patchwork@dpdk.org", "Received": [ "from [92.243.14.124] (localhost [127.0.0.1])\n\tby dpdk.org (Postfix) with ESMTP 
id C03931B438;\n\tMon, 15 Apr 2019 23:42:43 +0200 (CEST)", "from mga02.intel.com (mga02.intel.com [134.134.136.20])\n\tby dpdk.org (Postfix) with ESMTP id 9A0DA1B39F\n\tfor <dev@dpdk.org>; Mon, 15 Apr 2019 23:42:35 +0200 (CEST)", "from orsmga002.jf.intel.com ([10.7.209.21])\n\tby orsmga101.jf.intel.com with ESMTP/TLS/DHE-RSA-AES256-GCM-SHA384;\n\t15 Apr 2019 14:42:34 -0700", "from txasoft-yocto.an.intel.com ([10.123.72.192])\n\tby orsmga002.jf.intel.com with ESMTP; 15 Apr 2019 14:42:33 -0700" ], "X-Amp-Result": "SKIPPED(no attachment in message)", "X-Amp-File-Uploaded": "False", "X-ExtLoop1": "1", "X-IronPort-AV": "E=Sophos;i=\"5.60,355,1549958400\"; d=\"scan'208\";a=\"151130531\"", "From": "Erik Gabriel Carrillo <erik.g.carrillo@intel.com>", "To": "rsanford@akamai.com,\n\tthomas@monjalon.net", "Cc": "dev@dpdk.org", "Date": "Mon, 15 Apr 2019 16:41:27 -0500", "Message-Id": "<1555364488-28207-2-git-send-email-erik.g.carrillo@intel.com>", "X-Mailer": "git-send-email 1.7.10", "In-Reply-To": "<1555364488-28207-1-git-send-email-erik.g.carrillo@intel.com>", "References": "<1551892822-23967-1-git-send-email-erik.g.carrillo@intel.com>\n\t<1555364488-28207-1-git-send-email-erik.g.carrillo@intel.com>", "Subject": "[dpdk-dev] [PATCH v5 1/2] timer: allow timer management in shared\n\tmemory", "X-BeenThere": "dev@dpdk.org", "X-Mailman-Version": "2.1.15", "Precedence": "list", "List-Id": "DPDK patches and discussions <dev.dpdk.org>", "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n\t<mailto:dev-request@dpdk.org?subject=unsubscribe>", "List-Archive": "<http://mails.dpdk.org/archives/dev/>", "List-Post": "<mailto:dev@dpdk.org>", "List-Help": "<mailto:dev-request@dpdk.org?subject=help>", "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n\t<mailto:dev-request@dpdk.org?subject=subscribe>", "Errors-To": "dev-bounces@dpdk.org", "Sender": "\"dev\" <dev-bounces@dpdk.org>" }, "content": "Currently, the timer library uses a per-process table of structures to\nmanage 
skiplists of timers presumably because timers contain arbitrary\nfunction pointers whose value may not resolve properly in other\nprocesses.\n\nHowever, if the same callback is used to handle all timers, and that\ncallback is only invoked in one process, then it would be safe to allow\nthe data structures to be allocated in shared memory, and to allow\nsecondary processes to modify the timer lists. This would let timers be\nused in more multi-process scenarios.\n\nThe library's global variables are wrapped with a struct, and an array\nof these structures is created in shared memory. The original APIs\nare updated to reference the zeroth entry in the array. This maintains\nthe original behavior for both primary and secondary processes since\nthe set intersection of their coremasks should be empty [1]. New APIs\nare introduced to enable the allocation/deallocation of other entries\nin the array.\n\nNew variants of the APIs used to start and stop timers are introduced;\nthey allow a caller to specify which array entry should be used to\nlocate the timer list to insert into or delete from.\n\nFinally, a new variant of rte_timer_manage() is introduced, which\nallows a caller to specify which array entry should be used to locate\nthe timer lists to process; it can also process multiple timer lists per\ninvocation.\n\n[1] https://doc.dpdk.org/guides/prog_guide/multi_proc_support.html#multi-process-limitations\n\nSigned-off-by: Erik Gabriel Carrillo <erik.g.carrillo@intel.com>\n---\n lib/librte_timer/Makefile | 1 +\n lib/librte_timer/rte_timer.c | 519 ++++++++++++++++++++++++++++++---\n lib/librte_timer/rte_timer.h | 226 +++++++++++++-\n lib/librte_timer/rte_timer_version.map | 22 ++\n 4 files changed, 723 insertions(+), 45 deletions(-)", "diff": "diff --git a/lib/librte_timer/Makefile b/lib/librte_timer/Makefile\nindex 4ebd528..8ec63f4 100644\n--- a/lib/librte_timer/Makefile\n+++ b/lib/librte_timer/Makefile\n@@ -6,6 +6,7 @@ include $(RTE_SDK)/mk/rte.vars.mk\n # library name\n 
LIB = librte_timer.a\n \n+CFLAGS += -DALLOW_EXPERIMENTAL_API\n CFLAGS += $(WERROR_FLAGS) -I$(SRCDIR) -O3\n LDLIBS += -lrte_eal\n \ndiff --git a/lib/librte_timer/rte_timer.c b/lib/librte_timer/rte_timer.c\nindex 30c7b0a..511d902 100644\n--- a/lib/librte_timer/rte_timer.c\n+++ b/lib/librte_timer/rte_timer.c\n@@ -5,6 +5,7 @@\n #include <string.h>\n #include <stdio.h>\n #include <stdint.h>\n+#include <stdbool.h>\n #include <inttypes.h>\n #include <assert.h>\n #include <sys/queue.h>\n@@ -21,11 +22,15 @@\n #include <rte_spinlock.h>\n #include <rte_random.h>\n #include <rte_pause.h>\n+#include <rte_memzone.h>\n+#include <rte_malloc.h>\n+#include <rte_compat.h>\n \n #include \"rte_timer.h\"\n \n-LIST_HEAD(rte_timer_list, rte_timer);\n-\n+/**\n+ * Per-lcore info for timers.\n+ */\n struct priv_timer {\n \tstruct rte_timer pending_head; /**< dummy timer instance to head up list */\n \trte_spinlock_t list_lock; /**< lock to protect list access */\n@@ -48,25 +53,84 @@ struct priv_timer {\n #endif\n } __rte_cache_aligned;\n \n-/** per-lcore private info for timers */\n-static struct priv_timer priv_timer[RTE_MAX_LCORE];\n+#define FL_ALLOCATED\t(1 << 0)\n+struct rte_timer_data {\n+\tstruct priv_timer priv_timer[RTE_MAX_LCORE];\n+\tuint8_t internal_flags;\n+};\n+\n+#define RTE_MAX_DATA_ELS 64\n+static struct rte_timer_data *rte_timer_data_arr;\n+static const uint32_t default_data_id;\n+static uint32_t rte_timer_subsystem_initialized;\n+\n+/* For maintaining older interfaces for a period */\n+static struct rte_timer_data default_timer_data;\n \n /* when debug is enabled, store some statistics */\n #ifdef RTE_LIBRTE_TIMER_DEBUG\n-#define __TIMER_STAT_ADD(name, n) do {\t\t\t\t\t\\\n+#define __TIMER_STAT_ADD(priv_timer, name, n) do {\t\t\t\\\n \t\tunsigned __lcore_id = rte_lcore_id();\t\t\t\\\n \t\tif (__lcore_id < RTE_MAX_LCORE)\t\t\t\t\\\n \t\t\tpriv_timer[__lcore_id].stats.name += (n);\t\\\n \t} while(0)\n #else\n-#define __TIMER_STAT_ADD(name, n) do {} while(0)\n+#define 
__TIMER_STAT_ADD(priv_timer, name, n) do {} while (0)\n #endif\n \n-/* Init the timer library. */\n+static inline int\n+timer_data_valid(uint32_t id)\n+{\n+\treturn !!(rte_timer_data_arr[id].internal_flags & FL_ALLOCATED);\n+}\n+\n+/* validate ID and retrieve timer data pointer, or return error value */\n+#define TIMER_DATA_VALID_GET_OR_ERR_RET(id, timer_data, retval) do {\t\\\n+\tif (id >= RTE_MAX_DATA_ELS || !timer_data_valid(id))\t\t\\\n+\t\treturn retval;\t\t\t\t\t\t\\\n+\ttimer_data = &rte_timer_data_arr[id];\t\t\t\t\\\n+} while (0)\n+\n+int __rte_experimental\n+rte_timer_data_alloc(uint32_t *id_ptr)\n+{\n+\tint i;\n+\tstruct rte_timer_data *data;\n+\n+\tif (!rte_timer_subsystem_initialized)\n+\t\treturn -ENOMEM;\n+\n+\tfor (i = 0; i < RTE_MAX_DATA_ELS; i++) {\n+\t\tdata = &rte_timer_data_arr[i];\n+\t\tif (!(data->internal_flags & FL_ALLOCATED)) {\n+\t\t\tdata->internal_flags |= FL_ALLOCATED;\n+\n+\t\t\tif (id_ptr)\n+\t\t\t\t*id_ptr = i;\n+\n+\t\t\treturn 0;\n+\t\t}\n+\t}\n+\n+\treturn -ENOSPC;\n+}\n+\n+int __rte_experimental\n+rte_timer_data_dealloc(uint32_t id)\n+{\n+\tstruct rte_timer_data *timer_data;\n+\tTIMER_DATA_VALID_GET_OR_ERR_RET(id, timer_data, -EINVAL);\n+\n+\ttimer_data->internal_flags &= ~(FL_ALLOCATED);\n+\n+\treturn 0;\n+}\n+\n void\n-rte_timer_subsystem_init(void)\n+rte_timer_subsystem_init_v20(void)\n {\n \tunsigned lcore_id;\n+\tstruct priv_timer *priv_timer = default_timer_data.priv_timer;\n \n \t/* since priv_timer is static, it's zeroed by default, so only init some\n \t * fields.\n@@ -76,6 +140,76 @@ rte_timer_subsystem_init(void)\n \t\tpriv_timer[lcore_id].prev_lcore = lcore_id;\n \t}\n }\n+VERSION_SYMBOL(rte_timer_subsystem_init, _v20, 2.0);\n+\n+/* Init the timer library. Allocate an array of timer data structs in shared\n+ * memory, and allocate the zeroth entry for use with original timer\n+ * APIs. 
Since the intersection of the sets of lcore ids in primary and\n+ * secondary processes should be empty, the zeroth entry can be shared by\n+ * multiple processes.\n+ */\n+int\n+rte_timer_subsystem_init_v1905(void)\n+{\n+\tconst struct rte_memzone *mz;\n+\tstruct rte_timer_data *data;\n+\tint i, lcore_id;\n+\tstatic const char *mz_name = \"rte_timer_mz\";\n+\n+\tif (rte_timer_subsystem_initialized)\n+\t\treturn -EALREADY;\n+\n+\tif (rte_eal_process_type() != RTE_PROC_PRIMARY) {\n+\t\tmz = rte_memzone_lookup(mz_name);\n+\t\tif (mz == NULL)\n+\t\t\treturn -EEXIST;\n+\n+\t\trte_timer_data_arr = mz->addr;\n+\n+\t\trte_timer_data_arr[default_data_id].internal_flags |=\n+\t\t\tFL_ALLOCATED;\n+\n+\t\trte_timer_subsystem_initialized = 1;\n+\n+\t\treturn 0;\n+\t}\n+\n+\tmz = rte_memzone_reserve_aligned(mz_name,\n+\t\t\tRTE_MAX_DATA_ELS * sizeof(*rte_timer_data_arr),\n+\t\t\tSOCKET_ID_ANY, 0, RTE_CACHE_LINE_SIZE);\n+\tif (mz == NULL)\n+\t\treturn -ENOMEM;\n+\n+\trte_timer_data_arr = mz->addr;\n+\n+\tfor (i = 0; i < RTE_MAX_DATA_ELS; i++) {\n+\t\tdata = &rte_timer_data_arr[i];\n+\n+\t\tfor (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {\n+\t\t\trte_spinlock_init(\n+\t\t\t\t&data->priv_timer[lcore_id].list_lock);\n+\t\t\tdata->priv_timer[lcore_id].prev_lcore = lcore_id;\n+\t\t}\n+\t}\n+\n+\trte_timer_data_arr[default_data_id].internal_flags |= FL_ALLOCATED;\n+\n+\trte_timer_subsystem_initialized = 1;\n+\n+\treturn 0;\n+}\n+MAP_STATIC_SYMBOL(int rte_timer_subsystem_init(void),\n+\t\t rte_timer_subsystem_init_v1905);\n+BIND_DEFAULT_SYMBOL(rte_timer_subsystem_init, _v1905, 19.05);\n+\n+void __rte_experimental\n+rte_timer_subsystem_finalize(void)\n+{\n+\tif (rte_timer_data_arr)\n+\t\trte_free(rte_timer_data_arr);\n+\n+\trte_timer_subsystem_initialized = 0;\n+}\n \n /* Initialize the timer handle tim for use */\n void\n@@ -95,7 +229,8 @@ rte_timer_init(struct rte_timer *tim)\n */\n static int\n timer_set_config_state(struct rte_timer *tim,\n-\t\t union rte_timer_status 
*ret_prev_status)\n+\t\t union rte_timer_status *ret_prev_status,\n+\t\t struct priv_timer *priv_timer)\n {\n \tunion rte_timer_status prev_status, status;\n \tint success = 0;\n@@ -207,7 +342,7 @@ timer_get_skiplist_level(unsigned curr_depth)\n */\n static void\n timer_get_prev_entries(uint64_t time_val, unsigned tim_lcore,\n-\t\tstruct rte_timer **prev)\n+\t\t struct rte_timer **prev, struct priv_timer *priv_timer)\n {\n \tunsigned lvl = priv_timer[tim_lcore].curr_skiplist_depth;\n \tprev[lvl] = &priv_timer[tim_lcore].pending_head;\n@@ -226,13 +361,15 @@ timer_get_prev_entries(uint64_t time_val, unsigned tim_lcore,\n */\n static void\n timer_get_prev_entries_for_node(struct rte_timer *tim, unsigned tim_lcore,\n-\t\tstruct rte_timer **prev)\n+\t\t\t\tstruct rte_timer **prev,\n+\t\t\t\tstruct priv_timer *priv_timer)\n {\n \tint i;\n+\n \t/* to get a specific entry in the list, look for just lower than the time\n \t * values, and then increment on each level individually if necessary\n \t */\n-\ttimer_get_prev_entries(tim->expire - 1, tim_lcore, prev);\n+\ttimer_get_prev_entries(tim->expire - 1, tim_lcore, prev, priv_timer);\n \tfor (i = priv_timer[tim_lcore].curr_skiplist_depth - 1; i >= 0; i--) {\n \t\twhile (prev[i]->sl_next[i] != NULL &&\n \t\t\t\tprev[i]->sl_next[i] != tim &&\n@@ -247,14 +384,15 @@ timer_get_prev_entries_for_node(struct rte_timer *tim, unsigned tim_lcore,\n * timer must not be in a list\n */\n static void\n-timer_add(struct rte_timer *tim, unsigned int tim_lcore)\n+timer_add(struct rte_timer *tim, unsigned int tim_lcore,\n+\t struct priv_timer *priv_timer)\n {\n \tunsigned lvl;\n \tstruct rte_timer *prev[MAX_SKIPLIST_DEPTH+1];\n \n \t/* find where exactly this element goes in the list of elements\n \t * for each depth. 
*/\n-\ttimer_get_prev_entries(tim->expire, tim_lcore, prev);\n+\ttimer_get_prev_entries(tim->expire, tim_lcore, prev, priv_timer);\n \n \t/* now assign it a new level and add at that level */\n \tconst unsigned tim_level = timer_get_skiplist_level(\n@@ -284,7 +422,7 @@ timer_add(struct rte_timer *tim, unsigned int tim_lcore)\n */\n static void\n timer_del(struct rte_timer *tim, union rte_timer_status prev_status,\n-\t\tint local_is_locked)\n+\t int local_is_locked, struct priv_timer *priv_timer)\n {\n \tunsigned lcore_id = rte_lcore_id();\n \tunsigned prev_owner = prev_status.owner;\n@@ -304,7 +442,7 @@ timer_del(struct rte_timer *tim, union rte_timer_status prev_status,\n \t\t\t\t((tim->sl_next[0] == NULL) ? 0 : tim->sl_next[0]->expire);\n \n \t/* adjust pointers from previous entries to point past this */\n-\ttimer_get_prev_entries_for_node(tim, prev_owner, prev);\n+\ttimer_get_prev_entries_for_node(tim, prev_owner, prev, priv_timer);\n \tfor (i = priv_timer[prev_owner].curr_skiplist_depth - 1; i >= 0; i--) {\n \t\tif (prev[i]->sl_next[i] == tim)\n \t\t\tprev[i]->sl_next[i] = tim->sl_next[i];\n@@ -326,11 +464,13 @@ static int\n __rte_timer_reset(struct rte_timer *tim, uint64_t expire,\n \t\t uint64_t period, unsigned tim_lcore,\n \t\t rte_timer_cb_t fct, void *arg,\n-\t\t int local_is_locked)\n+\t\t int local_is_locked,\n+\t\t struct rte_timer_data *timer_data)\n {\n \tunion rte_timer_status prev_status, status;\n \tint ret;\n \tunsigned lcore_id = rte_lcore_id();\n+\tstruct priv_timer *priv_timer = timer_data->priv_timer;\n \n \t/* round robin for tim_lcore */\n \tif (tim_lcore == (unsigned)LCORE_ID_ANY) {\n@@ -348,11 +488,11 @@ __rte_timer_reset(struct rte_timer *tim, uint64_t expire,\n \n \t/* wait that the timer is in correct status before update,\n \t * and mark it as being configured */\n-\tret = timer_set_config_state(tim, &prev_status);\n+\tret = timer_set_config_state(tim, &prev_status, priv_timer);\n \tif (ret < 0)\n \t\treturn -1;\n 
\n-\t__TIMER_STAT_ADD(reset, 1);\n+\t__TIMER_STAT_ADD(priv_timer, reset, 1);\n \tif (prev_status.state == RTE_TIMER_RUNNING &&\n \t lcore_id < RTE_MAX_LCORE) {\n \t\tpriv_timer[lcore_id].updated = 1;\n@@ -360,8 +500,8 @@ __rte_timer_reset(struct rte_timer *tim, uint64_t expire,\n \n \t/* remove it from list */\n \tif (prev_status.state == RTE_TIMER_PENDING) {\n-\t\ttimer_del(tim, prev_status, local_is_locked);\n-\t\t__TIMER_STAT_ADD(pending, -1);\n+\t\ttimer_del(tim, prev_status, local_is_locked, priv_timer);\n+\t\t__TIMER_STAT_ADD(priv_timer, pending, -1);\n \t}\n \n \ttim->period = period;\n@@ -376,8 +516,8 @@ __rte_timer_reset(struct rte_timer *tim, uint64_t expire,\n \tif (tim_lcore != lcore_id || !local_is_locked)\n \t\trte_spinlock_lock(&priv_timer[tim_lcore].list_lock);\n \n-\t__TIMER_STAT_ADD(pending, 1);\n-\ttimer_add(tim, tim_lcore);\n+\t__TIMER_STAT_ADD(priv_timer, pending, 1);\n+\ttimer_add(tim, tim_lcore, priv_timer);\n \n \t/* update state: as we are in CONFIG state, only us can modify\n \t * the state so we don't need to use cmpset() here */\n@@ -394,9 +534,9 @@ __rte_timer_reset(struct rte_timer *tim, uint64_t expire,\n \n /* Reset and start the timer associated with the timer handle tim */\n int\n-rte_timer_reset(struct rte_timer *tim, uint64_t ticks,\n-\t\tenum rte_timer_type type, unsigned tim_lcore,\n-\t\trte_timer_cb_t fct, void *arg)\n+rte_timer_reset_v20(struct rte_timer *tim, uint64_t ticks,\n+\t\t enum rte_timer_type type, unsigned int tim_lcore,\n+\t\t rte_timer_cb_t fct, void *arg)\n {\n \tuint64_t cur_time = rte_get_timer_cycles();\n \tuint64_t period;\n@@ -412,7 +552,48 @@ rte_timer_reset(struct rte_timer *tim, uint64_t ticks,\n \t\tperiod = 0;\n \n \treturn __rte_timer_reset(tim, cur_time + ticks, period, tim_lcore,\n-\t\t\t fct, arg, 0);\n+\t\t\t fct, arg, 0, &default_timer_data);\n+}\n+VERSION_SYMBOL(rte_timer_reset, _v20, 2.0);\n+\n+int\n+rte_timer_reset_v1905(struct rte_timer *tim, uint64_t ticks,\n+\t\t enum rte_timer_type type, 
unsigned int tim_lcore,\n+\t\t rte_timer_cb_t fct, void *arg)\n+{\n+\treturn rte_timer_alt_reset(default_data_id, tim, ticks, type,\n+\t\t\t\t tim_lcore, fct, arg);\n+}\n+MAP_STATIC_SYMBOL(int rte_timer_reset(struct rte_timer *tim, uint64_t ticks,\n+\t\t\t\t enum rte_timer_type type,\n+\t\t\t\t unsigned int tim_lcore,\n+\t\t\t\t rte_timer_cb_t fct, void *arg),\n+\t\t rte_timer_reset_v1905);\n+BIND_DEFAULT_SYMBOL(rte_timer_reset, _v1905, 19.05);\n+\n+int __rte_experimental\n+rte_timer_alt_reset(uint32_t timer_data_id, struct rte_timer *tim,\n+\t\t uint64_t ticks, enum rte_timer_type type,\n+\t\t unsigned int tim_lcore, rte_timer_cb_t fct, void *arg)\n+{\n+\tuint64_t cur_time = rte_get_timer_cycles();\n+\tuint64_t period;\n+\tstruct rte_timer_data *timer_data;\n+\n+\tTIMER_DATA_VALID_GET_OR_ERR_RET(timer_data_id, timer_data, -EINVAL);\n+\n+\tif (unlikely((tim_lcore != (unsigned int)LCORE_ID_ANY) &&\n+\t\t\t!(rte_lcore_is_enabled(tim_lcore) ||\n+\t\t\t rte_lcore_has_role(tim_lcore, ROLE_SERVICE))))\n+\t\treturn -1;\n+\n+\tif (type == PERIODICAL)\n+\t\tperiod = ticks;\n+\telse\n+\t\tperiod = 0;\n+\n+\treturn __rte_timer_reset(tim, cur_time + ticks, period, tim_lcore,\n+\t\t\t\t fct, arg, 0, timer_data);\n }\n \n /* loop until rte_timer_reset() succeed */\n@@ -426,21 +607,22 @@ rte_timer_reset_sync(struct rte_timer *tim, uint64_t ticks,\n \t\trte_pause();\n }\n \n-/* Stop the timer associated with the timer handle tim */\n-int\n-rte_timer_stop(struct rte_timer *tim)\n+static int\n+__rte_timer_stop(struct rte_timer *tim, int local_is_locked,\n+\t\t struct rte_timer_data *timer_data)\n {\n \tunion rte_timer_status prev_status, status;\n \tunsigned lcore_id = rte_lcore_id();\n \tint ret;\n+\tstruct priv_timer *priv_timer = timer_data->priv_timer;\n \n \t/* wait that the timer is in correct status before update,\n \t * and mark it as being configured */\n-\tret = timer_set_config_state(tim, &prev_status);\n+\tret = timer_set_config_state(tim, &prev_status, priv_timer);\n 
\tif (ret < 0)\n \t\treturn -1;\n \n-\t__TIMER_STAT_ADD(stop, 1);\n+\t__TIMER_STAT_ADD(priv_timer, stop, 1);\n \tif (prev_status.state == RTE_TIMER_RUNNING &&\n \t lcore_id < RTE_MAX_LCORE) {\n \t\tpriv_timer[lcore_id].updated = 1;\n@@ -448,8 +630,8 @@ rte_timer_stop(struct rte_timer *tim)\n \n \t/* remove it from list */\n \tif (prev_status.state == RTE_TIMER_PENDING) {\n-\t\ttimer_del(tim, prev_status, 0);\n-\t\t__TIMER_STAT_ADD(pending, -1);\n+\t\ttimer_del(tim, prev_status, local_is_locked, priv_timer);\n+\t\t__TIMER_STAT_ADD(priv_timer, pending, -1);\n \t}\n \n \t/* mark timer as stopped */\n@@ -461,6 +643,33 @@ rte_timer_stop(struct rte_timer *tim)\n \treturn 0;\n }\n \n+/* Stop the timer associated with the timer handle tim */\n+int\n+rte_timer_stop_v20(struct rte_timer *tim)\n+{\n+\treturn __rte_timer_stop(tim, 0, &default_timer_data);\n+}\n+VERSION_SYMBOL(rte_timer_stop, _v20, 2.0);\n+\n+int\n+rte_timer_stop_v1905(struct rte_timer *tim)\n+{\n+\treturn rte_timer_alt_stop(default_data_id, tim);\n+}\n+MAP_STATIC_SYMBOL(int rte_timer_stop(struct rte_timer *tim),\n+\t\t rte_timer_stop_v1905);\n+BIND_DEFAULT_SYMBOL(rte_timer_stop, _v1905, 19.05);\n+\n+int __rte_experimental\n+rte_timer_alt_stop(uint32_t timer_data_id, struct rte_timer *tim)\n+{\n+\tstruct rte_timer_data *timer_data;\n+\n+\tTIMER_DATA_VALID_GET_OR_ERR_RET(timer_data_id, timer_data, -EINVAL);\n+\n+\treturn __rte_timer_stop(tim, 0, timer_data);\n+}\n+\n /* loop until rte_timer_stop() succeed */\n void\n rte_timer_stop_sync(struct rte_timer *tim)\n@@ -477,7 +686,8 @@ rte_timer_pending(struct rte_timer *tim)\n }\n \n /* must be called periodically, run all timer that expired */\n-void rte_timer_manage(void)\n+static void\n+__rte_timer_manage(struct rte_timer_data *timer_data)\n {\n \tunion rte_timer_status status;\n \tstruct rte_timer *tim, *next_tim;\n@@ -486,11 +696,12 @@ void rte_timer_manage(void)\n \tstruct rte_timer *prev[MAX_SKIPLIST_DEPTH + 1];\n \tuint64_t cur_time;\n \tint i, 
ret;\n+\tstruct priv_timer *priv_timer = timer_data->priv_timer;\n \n \t/* timer manager only runs on EAL thread with valid lcore_id */\n \tassert(lcore_id < RTE_MAX_LCORE);\n \n-\t__TIMER_STAT_ADD(manage, 1);\n+\t__TIMER_STAT_ADD(priv_timer, manage, 1);\n \t/* optimize for the case where per-cpu list is empty */\n \tif (priv_timer[lcore_id].pending_head.sl_next[0] == NULL)\n \t\treturn;\n@@ -518,7 +729,7 @@ void rte_timer_manage(void)\n \ttim = priv_timer[lcore_id].pending_head.sl_next[0];\n \n \t/* break the existing list at current time point */\n-\ttimer_get_prev_entries(cur_time, lcore_id, prev);\n+\ttimer_get_prev_entries(cur_time, lcore_id, prev, priv_timer);\n \tfor (i = priv_timer[lcore_id].curr_skiplist_depth -1; i >= 0; i--) {\n \t\tif (prev[i] == &priv_timer[lcore_id].pending_head)\n \t\t\tcontinue;\n@@ -563,7 +774,7 @@ void rte_timer_manage(void)\n \t\t/* execute callback function with list unlocked */\n \t\ttim->f(tim, tim->arg);\n \n-\t\t__TIMER_STAT_ADD(pending, -1);\n+\t\t__TIMER_STAT_ADD(priv_timer, pending, -1);\n \t\t/* the timer was stopped or reloaded by the callback\n \t\t * function, we have nothing to do here */\n \t\tif (priv_timer[lcore_id].updated == 1)\n@@ -580,24 +791,222 @@ void rte_timer_manage(void)\n \t\t\t/* keep it in list and mark timer as pending */\n \t\t\trte_spinlock_lock(&priv_timer[lcore_id].list_lock);\n \t\t\tstatus.state = RTE_TIMER_PENDING;\n-\t\t\t__TIMER_STAT_ADD(pending, 1);\n+\t\t\t__TIMER_STAT_ADD(priv_timer, pending, 1);\n \t\t\tstatus.owner = (int16_t)lcore_id;\n \t\t\trte_wmb();\n \t\t\ttim->status.u32 = status.u32;\n \t\t\t__rte_timer_reset(tim, tim->expire + tim->period,\n-\t\t\t\ttim->period, lcore_id, tim->f, tim->arg, 1);\n+\t\t\t\ttim->period, lcore_id, tim->f, tim->arg, 1,\n+\t\t\t\ttimer_data);\n \t\t\trte_spinlock_unlock(&priv_timer[lcore_id].list_lock);\n \t\t}\n \t}\n \tpriv_timer[lcore_id].running_tim = NULL;\n }\n 
\n+void\n+rte_timer_manage_v20(void)\n+{\n+\t__rte_timer_manage(&default_timer_data);\n+}\n+VERSION_SYMBOL(rte_timer_manage, _v20, 2.0);\n+\n+int\n+rte_timer_manage_v1905(void)\n+{\n+\tstruct rte_timer_data *timer_data;\n+\n+\tTIMER_DATA_VALID_GET_OR_ERR_RET(default_data_id, timer_data, -EINVAL);\n+\n+\t__rte_timer_manage(timer_data);\n+\n+\treturn 0;\n+}\n+MAP_STATIC_SYMBOL(int rte_timer_manage(void), rte_timer_manage_v1905);\n+BIND_DEFAULT_SYMBOL(rte_timer_manage, _v1905, 19.05);\n+\n+int __rte_experimental\n+rte_timer_alt_manage(uint32_t timer_data_id,\n+\t\t unsigned int *poll_lcores,\n+\t\t int nb_poll_lcores,\n+\t\t rte_timer_alt_manage_cb_t f)\n+{\n+\tunion rte_timer_status status;\n+\tstruct rte_timer *tim, *next_tim, **pprev;\n+\tstruct rte_timer *run_first_tims[RTE_MAX_LCORE];\n+\tunsigned int runlist_lcore_ids[RTE_MAX_LCORE];\n+\tunsigned int this_lcore = rte_lcore_id();\n+\tstruct rte_timer *prev[MAX_SKIPLIST_DEPTH + 1];\n+\tuint64_t cur_time;\n+\tint i, j, ret;\n+\tint nb_runlists = 0;\n+\tstruct rte_timer_data *data;\n+\tstruct priv_timer *privp;\n+\tuint32_t poll_lcore;\n+\n+\tTIMER_DATA_VALID_GET_OR_ERR_RET(timer_data_id, data, -EINVAL);\n+\n+\t/* timer manager only runs on EAL thread with valid lcore_id */\n+\tassert(this_lcore < RTE_MAX_LCORE);\n+\n+\t__TIMER_STAT_ADD(data->priv_timer, manage, 1);\n+\n+\tif (poll_lcores == NULL) {\n+\t\tpoll_lcores = (unsigned int []){rte_lcore_id()};\n+\t\tnb_poll_lcores = 1;\n+\t}\n+\n+\tfor (i = 0; i < nb_poll_lcores; i++) {\n+\t\tpoll_lcore = poll_lcores[i];\n+\t\tprivp = &data->priv_timer[poll_lcore];\n+\n+\t\t/* optimize for the case where per-cpu list is empty */\n+\t\tif (privp->pending_head.sl_next[0] == NULL)\n+\t\t\tcontinue;\n+\t\tcur_time = rte_get_timer_cycles();\n+\n+#ifdef RTE_ARCH_64\n+\t\t/* on 64-bit the value cached in the pending_head.expired will\n+\t\t * be updated atomically, so we can consult that for a quick\n+\t\t * check here outside the lock\n+\t\t */\n+\t\tif 
(likely(privp->pending_head.expire > cur_time))\n+\t\t\tcontinue;\n+#endif\n+\n+\t\t/* browse ordered list, add expired timers in 'expired' list */\n+\t\trte_spinlock_lock(&privp->list_lock);\n+\n+\t\t/* if nothing to do just unlock and return */\n+\t\tif (privp->pending_head.sl_next[0] == NULL ||\n+\t\t privp->pending_head.sl_next[0]->expire > cur_time) {\n+\t\t\trte_spinlock_unlock(&privp->list_lock);\n+\t\t\tcontinue;\n+\t\t}\n+\n+\t\t/* save start of list of expired timers */\n+\t\ttim = privp->pending_head.sl_next[0];\n+\n+\t\t/* break the existing list at current time point */\n+\t\ttimer_get_prev_entries(cur_time, poll_lcore, prev,\n+\t\t\t\t data->priv_timer);\n+\t\tfor (j = privp->curr_skiplist_depth - 1; j >= 0; j--) {\n+\t\t\tif (prev[j] == &privp->pending_head)\n+\t\t\t\tcontinue;\n+\t\t\tprivp->pending_head.sl_next[j] =\n+\t\t\t\tprev[j]->sl_next[j];\n+\t\t\tif (prev[j]->sl_next[j] == NULL)\n+\t\t\t\tprivp->curr_skiplist_depth--;\n+\n+\t\t\tprev[j]->sl_next[j] = NULL;\n+\t\t}\n+\n+\t\t/* transition run-list from PENDING to RUNNING */\n+\t\trun_first_tims[nb_runlists] = tim;\n+\t\trunlist_lcore_ids[nb_runlists] = poll_lcore;\n+\t\tpprev = &run_first_tims[nb_runlists];\n+\t\tnb_runlists++;\n+\n+\t\tfor ( ; tim != NULL; tim = next_tim) {\n+\t\t\tnext_tim = tim->sl_next[0];\n+\n+\t\t\tret = timer_set_running_state(tim);\n+\t\t\tif (likely(ret == 0)) {\n+\t\t\t\tpprev = &tim->sl_next[0];\n+\t\t\t} else {\n+\t\t\t\t/* another core is trying to re-config this one,\n+\t\t\t\t * remove it from local expired list\n+\t\t\t\t */\n+\t\t\t\t*pprev = next_tim;\n+\t\t\t}\n+\t\t}\n+\n+\t\t/* update the next to expire timer value */\n+\t\tprivp->pending_head.expire =\n+\t\t (privp->pending_head.sl_next[0] == NULL) ? 
0 :\n+\t\t\tprivp->pending_head.sl_next[0]->expire;\n+\n+\t\trte_spinlock_unlock(&privp->list_lock);\n+\t}\n+\n+\t/* Now process the run lists */\n+\twhile (1) {\n+\t\tbool done = true;\n+\t\tuint64_t min_expire = UINT64_MAX;\n+\t\tint min_idx = 0;\n+\n+\t\t/* Find the next oldest timer to process */\n+\t\tfor (i = 0; i < nb_runlists; i++) {\n+\t\t\ttim = run_first_tims[i];\n+\n+\t\t\tif (tim != NULL && tim->expire < min_expire) {\n+\t\t\t\tmin_expire = tim->expire;\n+\t\t\t\tmin_idx = i;\n+\t\t\t\tdone = false;\n+\t\t\t}\n+\t\t}\n+\n+\t\tif (done)\n+\t\t\tbreak;\n+\n+\t\ttim = run_first_tims[min_idx];\n+\t\tprivp = &data->priv_timer[runlist_lcore_ids[min_idx]];\n+\n+\t\t/* Move down the runlist from which we picked a timer to\n+\t\t * execute\n+\t\t */\n+\t\trun_first_tims[min_idx] = run_first_tims[min_idx]->sl_next[0];\n+\n+\t\tprivp->updated = 0;\n+\t\tprivp->running_tim = tim;\n+\n+\t\t/* Call the provided callback function */\n+\t\tf(tim);\n+\n+\t\t__TIMER_STAT_ADD(privp, pending, -1);\n+\n+\t\t/* the timer was stopped or reloaded by the callback\n+\t\t * function, we have nothing to do here\n+\t\t */\n+\t\tif (privp->updated == 1)\n+\t\t\tcontinue;\n+\n+\t\tif (tim->period == 0) {\n+\t\t\t/* remove from done list and mark timer as stopped */\n+\t\t\tstatus.state = RTE_TIMER_STOP;\n+\t\t\tstatus.owner = RTE_TIMER_NO_OWNER;\n+\t\t\trte_wmb();\n+\t\t\ttim->status.u32 = status.u32;\n+\t\t} else {\n+\t\t\t/* keep it in list and mark timer as pending */\n+\t\t\trte_spinlock_lock(\n+\t\t\t\t&data->priv_timer[this_lcore].list_lock);\n+\t\t\tstatus.state = RTE_TIMER_PENDING;\n+\t\t\t__TIMER_STAT_ADD(data->priv_timer, pending, 1);\n+\t\t\tstatus.owner = (int16_t)this_lcore;\n+\t\t\trte_wmb();\n+\t\t\ttim->status.u32 = status.u32;\n+\t\t\t__rte_timer_reset(tim, tim->expire + tim->period,\n+\t\t\t\ttim->period, this_lcore, tim->f, tim->arg, 
1,\n+\t\t\t\tdata);\n+\t\t\trte_spinlock_unlock(\n+\t\t\t\t&data->priv_timer[this_lcore].list_lock);\n+\t\t}\n+\n+\t\tprivp->running_tim = NULL;\n+\t}\n+\n+\treturn 0;\n+}\n+\n /* dump statistics about timers */\n-void rte_timer_dump_stats(FILE *f)\n+static void\n+__rte_timer_dump_stats(struct rte_timer_data *timer_data __rte_unused, FILE *f)\n {\n #ifdef RTE_LIBRTE_TIMER_DEBUG\n \tstruct rte_timer_debug_stats sum;\n \tunsigned lcore_id;\n+\tstruct priv_timer *priv_timer = timer_data->priv_timer;\n \n \tmemset(&sum, 0, sizeof(sum));\n \tfor (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {\n@@ -615,3 +1024,31 @@ void rte_timer_dump_stats(FILE *f)\n \tfprintf(f, \"No timer statistics, RTE_LIBRTE_TIMER_DEBUG is disabled\\n\");\n #endif\n }\n+\n+void\n+rte_timer_dump_stats_v20(FILE *f)\n+{\n+\t__rte_timer_dump_stats(&default_timer_data, f);\n+}\n+VERSION_SYMBOL(rte_timer_dump_stats, _v20, 2.0);\n+\n+int\n+rte_timer_dump_stats_v1905(FILE *f)\n+{\n+\treturn rte_timer_alt_dump_stats(default_data_id, f);\n+}\n+MAP_STATIC_SYMBOL(int rte_timer_dump_stats(FILE *f),\n+\t\t rte_timer_dump_stats_v1905);\n+BIND_DEFAULT_SYMBOL(rte_timer_dump_stats, _v1905, 19.05);\n+\n+int __rte_experimental\n+rte_timer_alt_dump_stats(uint32_t timer_data_id __rte_unused, FILE *f)\n+{\n+\tstruct rte_timer_data *timer_data;\n+\n+\tTIMER_DATA_VALID_GET_OR_ERR_RET(timer_data_id, timer_data, -EINVAL);\n+\n+\t__rte_timer_dump_stats(timer_data, f);\n+\n+\treturn 0;\n+}\ndiff --git a/lib/librte_timer/rte_timer.h b/lib/librte_timer/rte_timer.h\nindex 9b95cd2..6a9c499 100644\n--- a/lib/librte_timer/rte_timer.h\n+++ b/lib/librte_timer/rte_timer.h\n@@ -39,6 +39,7 @@\n #include <stddef.h>\n #include <rte_common.h>\n #include <rte_config.h>\n+#include <rte_spinlock.h>\n \n #ifdef __cplusplus\n extern \"C\" {\n@@ -132,12 +133,68 @@ struct rte_timer\n #endif\n \n /**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice\n+ *\n+ * Allocate a timer data instance in shared memory to 
track a set of pending\n+ * timer lists.\n+ *\n+ * @param id_ptr\n+ * Pointer to variable into which to write the identifier of the allocated\n+ * timer data instance.\n+ *\n+ * @return\n+ * - 0: Success\n+ * - -ENOSPC: maximum number of timer data instances already allocated\n+ */\n+int __rte_experimental rte_timer_data_alloc(uint32_t *id_ptr);\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice\n+ *\n+ * Deallocate a timer data instance.\n+ *\n+ * @param id\n+ * Identifier of the timer data instance to deallocate.\n+ *\n+ * @return\n+ * - 0: Success\n+ * - -EINVAL: invalid timer data instance identifier\n+ */\n+int __rte_experimental rte_timer_data_dealloc(uint32_t id);\n+\n+/**\n * Initialize the timer library.\n *\n * Initializes internal variables (list, locks and so on) for the RTE\n * timer library.\n */\n-void rte_timer_subsystem_init(void);\n+void rte_timer_subsystem_init_v20(void);\n+\n+/**\n+ * Initialize the timer library.\n+ *\n+ * Initializes internal variables (list, locks and so on) for the RTE\n+ * timer library.\n+ *\n+ * @return\n+ * - 0: Success\n+ * - -EEXIST: Returned in secondary process when primary process has not\n+ * yet initialized the timer subsystem\n+ * - -ENOMEM: Unable to allocate memory needed to initialize timer\n+ * subsystem\n+ */\n+int rte_timer_subsystem_init_v1905(void);\n+int rte_timer_subsystem_init(void);\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice\n+ *\n+ * Free timer subsystem resources.\n+ */\n+void __rte_experimental rte_timer_subsystem_finalize(void);\n \n /**\n * Initialize a timer handle.\n@@ -193,6 +250,12 @@ void rte_timer_init(struct rte_timer *tim);\n * - 0: Success; the timer is scheduled.\n * - (-1): Timer is in the RUNNING or CONFIG state.\n */\n+int rte_timer_reset_v20(struct rte_timer *tim, uint64_t ticks,\n+\t\t\tenum rte_timer_type type, unsigned int tim_lcore,\n+\t\t\trte_timer_cb_t fct, void *arg);\n+int 
rte_timer_reset_v1905(struct rte_timer *tim, uint64_t ticks,\n+\t\t\t enum rte_timer_type type, unsigned int tim_lcore,\n+\t\t\t rte_timer_cb_t fct, void *arg);\n int rte_timer_reset(struct rte_timer *tim, uint64_t ticks,\n \t\t enum rte_timer_type type, unsigned tim_lcore,\n \t\t rte_timer_cb_t fct, void *arg);\n@@ -252,9 +315,10 @@ rte_timer_reset_sync(struct rte_timer *tim, uint64_t ticks,\n * - 0: Success; the timer is stopped.\n * - (-1): The timer is in the RUNNING or CONFIG state.\n */\n+int rte_timer_stop_v20(struct rte_timer *tim);\n+int rte_timer_stop_v1905(struct rte_timer *tim);\n int rte_timer_stop(struct rte_timer *tim);\n \n-\n /**\n * Loop until rte_timer_stop() succeeds.\n *\n@@ -292,7 +356,25 @@ int rte_timer_pending(struct rte_timer *tim);\n * function. However, the more often the function is called, the more\n * CPU resources it will use.\n */\n-void rte_timer_manage(void);\n+void rte_timer_manage_v20(void);\n+\n+/**\n+ * Manage the timer list and execute callback functions.\n+ *\n+ * This function must be called periodically from EAL lcores\n+ * main_loop(). It browses the list of pending timers and runs all\n+ * timers that are expired.\n+ *\n+ * The precision of the timer depends on the call frequency of this\n+ * function. 
However, the more often the function is called, the more\n+ * CPU resources it will use.\n+ *\n+ * @return\n+ * - 0: Success\n+ * - -EINVAL: timer subsystem not yet initialized\n+ */\n+int rte_timer_manage_v1905(void);\n+int rte_timer_manage(void);\n \n /**\n * Dump statistics about timers.\n@@ -300,7 +382,143 @@ void rte_timer_manage(void);\n * @param f\n * A pointer to a file for output\n */\n-void rte_timer_dump_stats(FILE *f);\n+void rte_timer_dump_stats_v20(FILE *f);\n+\n+/**\n+ * Dump statistics about timers.\n+ *\n+ * @param f\n+ * A pointer to a file for output\n+ * @return\n+ * - 0: Success\n+ * - -EINVAL: timer subsystem not yet initialized\n+ */\n+int rte_timer_dump_stats_v1905(FILE *f);\n+int rte_timer_dump_stats(FILE *f);\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice\n+ *\n+ * This function is the same as rte_timer_reset(), except that it allows a\n+ * caller to specify the rte_timer_data instance containing the list to which\n+ * the timer should be added.\n+ *\n+ * @see rte_timer_reset()\n+ *\n+ * @param timer_data_id\n+ * An identifier indicating which instance of timer data should be used for\n+ * this operation.\n+ * @param tim\n+ * The timer handle.\n+ * @param ticks\n+ * The number of cycles (see rte_get_hpet_hz()) before the callback\n+ * function is called.\n+ * @param type\n+ * The type can be either:\n+ * - PERIODICAL: The timer is automatically reloaded after execution\n+ * (returns to the PENDING state)\n+ * - SINGLE: The timer is one-shot, that is, the timer goes to a\n+ * STOPPED state after execution.\n+ * @param tim_lcore\n+ * The ID of the lcore where the timer callback function has to be\n+ * executed. If tim_lcore is LCORE_ID_ANY, the timer library will\n+ * launch it on a different core for each call (round-robin).\n+ * @param fct\n+ * The callback function of the timer. 
This parameter can be NULL if (and\n+ * only if) rte_timer_alt_manage() will be used to manage this timer.\n+ * @param arg\n+ * The user argument of the callback function.\n+ * @return\n+ * - 0: Success; the timer is scheduled.\n+ * - (-1): Timer is in the RUNNING or CONFIG state.\n+ * - -EINVAL: invalid timer_data_id\n+ */\n+int __rte_experimental\n+rte_timer_alt_reset(uint32_t timer_data_id, struct rte_timer *tim,\n+\t\t uint64_t ticks, enum rte_timer_type type,\n+\t\t unsigned int tim_lcore, rte_timer_cb_t fct, void *arg);\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice\n+ *\n+ * This function is the same as rte_timer_stop(), except that it allows a\n+ * caller to specify the rte_timer_data instance containing the list from which\n+ * this timer should be removed.\n+ *\n+ * @see rte_timer_stop()\n+ *\n+ * @param timer_data_id\n+ * An identifier indicating which instance of timer data should be used for\n+ * this operation.\n+ * @param tim\n+ * The timer handle.\n+ * @return\n+ * - 0: Success; the timer is stopped.\n+ * - (-1): The timer is in the RUNNING or CONFIG state.\n+ * - -EINVAL: invalid timer_data_id\n+ */\n+int __rte_experimental\n+rte_timer_alt_stop(uint32_t timer_data_id, struct rte_timer *tim);\n+\n+/**\n+ * Callback function type for rte_timer_alt_manage().\n+ */\n+typedef void (*rte_timer_alt_manage_cb_t)(struct rte_timer *tim);\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice\n+ *\n+ * Manage a set of timer lists and execute the specified callback function for\n+ * all expired timers. This function is similar to rte_timer_manage(), except\n+ * that it allows a caller to specify the timer_data instance that should\n+ * be operated on, as well as a set of lcore IDs identifying which timer lists\n+ * should be processed. 
Callback functions of individual timers are ignored.\n+ *\n+ * @see rte_timer_manage()\n+ *\n+ * @param timer_data_id\n+ * An identifier indicating which instance of timer data should be used for\n+ * this operation.\n+ * @param poll_lcores\n+ * An array of lcore ids identifying the timer lists that should be processed.\n+ * NULL is allowed - if NULL, the timer list corresponding to the lcore\n+ * calling this routine is processed (same as rte_timer_manage()).\n+ * @param n_poll_lcores\n+ * The size of the poll_lcores array. If 'poll_lcores' is NULL, this parameter\n+ * is ignored.\n+ * @param f\n+ * The callback function which should be called for all expired timers.\n+ * @return\n+ * - 0: success\n+ * - -EINVAL: invalid timer_data_id\n+ */\n+int __rte_experimental\n+rte_timer_alt_manage(uint32_t timer_data_id, unsigned int *poll_lcores,\n+\t\t int n_poll_lcores, rte_timer_alt_manage_cb_t f);\n+\n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change without prior notice\n+ *\n+ * This function is the same as rte_timer_dump_stats(), except that it allows\n+ * the caller to specify the rte_timer_data instance that should be used.\n+ *\n+ * @see rte_timer_dump_stats()\n+ *\n+ * @param timer_data_id\n+ * An identifier indicating which instance of timer data should be used for\n+ * this operation.\n+ * @param f\n+ * A pointer to a file for output\n+ * @return\n+ * - 0: success\n+ * - -EINVAL: invalid timer_data_id\n+ */\n+int __rte_experimental\n+rte_timer_alt_dump_stats(uint32_t timer_data_id, FILE *f);\n \n #ifdef __cplusplus\n }\ndiff --git a/lib/librte_timer/rte_timer_version.map b/lib/librte_timer/rte_timer_version.map\nindex 9b2e4b8..c2e5836 100644\n--- a/lib/librte_timer/rte_timer_version.map\n+++ b/lib/librte_timer/rte_timer_version.map\n@@ -13,3 +13,25 @@ DPDK_2.0 {\n \n \tlocal: *;\n };\n+\n+DPDK_19.05 {\n+\tglobal:\n+\n+\trte_timer_dump_stats;\n+\trte_timer_manage;\n+\trte_timer_reset;\n+\trte_timer_stop;\n+\trte_timer_subsystem_init;\n+} 
DPDK_2.0;\n+\n+EXPERIMENTAL {\n+\tglobal:\n+\n+\trte_timer_alt_dump_stats;\n+\trte_timer_alt_manage;\n+\trte_timer_alt_reset;\n+\trte_timer_alt_stop;\n+\trte_timer_data_alloc;\n+\trte_timer_data_dealloc;\n+\trte_timer_subsystem_finalize;\n+};\n", "prefixes": [ "v5", "1/2" ] }