get:
Show a patch.

patch:
Partially update a patch; only the fields supplied are changed.

put:
Update a patch in full; every writable field is replaced.
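
A patch can be read without authentication. As a minimal sketch, assuming Python's requests package is available (the URL and field names are taken from the sample response below):

import requests

# Fetch one patch as JSON. The "?format=api" suffix in the sample URL
# below only selects the browsable HTML renderer; API clients omit it.
resp = requests.get("https://patches.dpdk.org/api/patches/6627/")
resp.raise_for_status()
patch = resp.json()

print(patch["state"])              # e.g. "accepted"
print(patch["submitter"]["name"])  # e.g. "Robert Sanford"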

GET /api/patches/6627/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 6627,
    "url": "https://patches.dpdk.org/api/patches/6627/?format=api",
    "web_url": "https://patches.dpdk.org/project/dpdk/patch/1438037168-639-4-git-send-email-rsanford2@gmail.com/",
    "project": {
        "id": 1,
        "url": "https://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<1438037168-639-4-git-send-email-rsanford2@gmail.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/1438037168-639-4-git-send-email-rsanford2@gmail.com",
    "date": "2015-07-27T22:46:06",
    "name": "[dpdk-dev,v2,3/3] timer: fix race condition in rte_timer_manage()",
    "commit_ref": null,
    "pull_url": null,
    "state": "accepted",
    "archived": true,
    "hash": "76c6f1a8597142f0a4963907f7c0a899052d3ca1",
    "submitter": {
        "id": 7,
        "url": "https://patches.dpdk.org/api/people/7/?format=api",
        "name": "Robert Sanford",
        "email": "rsanford2@gmail.com"
    },
    "delegate": null,
    "mbox": "https://patches.dpdk.org/project/dpdk/patch/1438037168-639-4-git-send-email-rsanford2@gmail.com/mbox/",
    "series": [],
    "comments": "https://patches.dpdk.org/api/patches/6627/comments/",
    "check": "pending",
    "checks": "https://patches.dpdk.org/api/patches/6627/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@dpdk.org",
        "Delivered-To": "patchwork@dpdk.org",
        "Received": [
            "from [92.243.14.124] (localhost [IPv6:::1])\n\tby dpdk.org (Postfix) with ESMTP id 49D89C600;\n\tTue, 28 Jul 2015 00:46:30 +0200 (CEST)",
            "from mail-ig0-f173.google.com (mail-ig0-f173.google.com\n\t[209.85.213.173]) by dpdk.org (Postfix) with ESMTP id 8D405C5DC\n\tfor <dev@dpdk.org>; Tue, 28 Jul 2015 00:46:26 +0200 (CEST)",
            "by igk11 with SMTP id 11so82810912igk.1\n\tfor <dev@dpdk.org>; Mon, 27 Jul 2015 15:46:26 -0700 (PDT)",
            "from localhost.localdomain ([23.79.237.14])\n\tby smtp.gmail.com with ESMTPSA id\n\tj18sm7063061igf.2.2015.07.27.15.46.25\n\t(version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128/128);\n\tMon, 27 Jul 2015 15:46:25 -0700 (PDT)"
        ],
        "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20120113;\n\th=from:to:cc:subject:date:message-id:in-reply-to:references;\n\tbh=1rVxFFPMqASOwh/VumSD3yA+M6EF25roe9bt+TdFKoc=;\n\tb=OnHbVo8nWYPUbB9YZKkDB/tmebHyB2R4mx93BElIwYzjZ+vo5XUI8OW+OAgUXV5+80\n\tH04WNgSdZ1oYKVY5VFBhChm9f6bohld+Eest5qhNE3e11MjwxrXM3G0+AgL4A63NFlPp\n\tkM56T7eLwXCGO6aqimhAO1skVJKkHt+FkMshr6QbFFTYVInnkpGzeZcQfYI34rxDBTkB\n\tdg9XDdhwa/gmOCdFSJgB1lu30PrH6nYPXuBlVL6ZG7otGlwnaViNtVt9IFh2Srq7XMYp\n\txLxfOqw0Me2Jg+pGxxyF9pM0ksn3AMNh3MRLvWnA7j6cG0LTWOD5OBih1p0qW6iZOvuS\n\txt1g==",
        "X-Received": "by 10.107.135.200 with SMTP id r69mr47994292ioi.54.1438037186118;\n\tMon, 27 Jul 2015 15:46:26 -0700 (PDT)",
        "From": "rsanford2@gmail.com",
        "To": "dev@dpdk.org",
        "Date": "Mon, 27 Jul 2015 18:46:06 -0400",
        "Message-Id": "<1438037168-639-4-git-send-email-rsanford2@gmail.com>",
        "X-Mailer": "git-send-email 1.7.1",
        "In-Reply-To": "<1437691347-58708-1-git-send-email-rsanford2@gmail.com>",
        "References": "<1437691347-58708-1-git-send-email-rsanford2@gmail.com>",
        "Subject": "[dpdk-dev] [PATCH v2 3/3] timer: fix race condition in\n\trte_timer_manage()",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.15",
        "Precedence": "list",
        "List-Id": "patches and discussions about DPDK <dev.dpdk.org>",
        "List-Unsubscribe": "<http://dpdk.org/ml/options/dev>,\n\t<mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://dpdk.org/ml/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<http://dpdk.org/ml/listinfo/dev>,\n\t<mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "From: Robert Sanford <rsanford@akamai.com>\n\nEliminate problematic race condition in rte_timer_manage() that can\nlead to corruption of per-lcore pending-lists (implemented as\nskip-lists). The race condition occurs when rte_timer_manage() expires\nmultiple timers on lcore A, while lcore B simultaneously invokes\nrte_timer_reset() for one of the expiring timers (other than the first\none).\n\nLcore A splits its pending-list, creating a local list of expired timers\nlinked through their sl_next[0] pointers, and sets the first expired\ntimer to the RUNNING state, all during one list-lock round trip.\nLcore A then unlocks the list-lock to run the first callback, and that\nis when A and B can have different interpretations of the subsequent\nexpired timers' true state. Lcore B sees an expired timer still in the\nPENDING state, atomically changes the timer to the CONFIG state, locks\nlcore A's list-lock, and reinserts the timer into A's pending-list.\nThe two lcores try to use the same next-pointers to maintain both lists!\n\nOur solution is to remove expired timers from the pending-list and try\nto set them all to the RUNNING state in one atomic step, i.e.,\nrte_timer_manage() should perform these two actions within one\nownership of the list-lock.\n\nAfter splitting the pending-list at the current point in time and trying\nto set all expired timers to the RUNNING state, we must put back into\nthe pending-list any timers that we failed to set to the RUNNING state,\nall while still holding the list-lock. It is then safe to release the\nlock and run the callback functions for all expired timers that remain\non our local run-list.\n\nSigned-off-by: Robert Sanford <rsanford@akamai.com>\n---\n lib/librte_timer/rte_timer.c |   56 ++++++++++++++++++++++++++---------------\n 1 files changed, 35 insertions(+), 21 deletions(-)",
    "diff": "diff --git a/lib/librte_timer/rte_timer.c b/lib/librte_timer/rte_timer.c\nindex 8e9243a..3dcdab5 100644\n--- a/lib/librte_timer/rte_timer.c\n+++ b/lib/librte_timer/rte_timer.c\n@@ -504,6 +504,7 @@ void rte_timer_manage(void)\n {\n \tunion rte_timer_status status;\n \tstruct rte_timer *tim, *next_tim;\n+\tstruct rte_timer *run_first_tim, **pprev;\n \tunsigned lcore_id = rte_lcore_id();\n \tstruct rte_timer *prev[MAX_SKIPLIST_DEPTH + 1];\n \tuint64_t cur_time;\n@@ -519,9 +520,9 @@ void rte_timer_manage(void)\n \tcur_time = rte_get_timer_cycles();\n \n #ifdef RTE_ARCH_X86_64\n-\t/* on 64-bit the value cached in the pending_head.expired will be updated\n-\t * atomically, so we can consult that for a quick check here outside the\n-\t * lock */\n+\t/* on 64-bit the value cached in the pending_head.expired will be\n+\t * updated atomically, so we can consult that for a quick check here\n+\t * outside the lock */\n \tif (likely(priv_timer[lcore_id].pending_head.expire > cur_time))\n \t\treturn;\n #endif\n@@ -531,8 +532,10 @@ void rte_timer_manage(void)\n \n \t/* if nothing to do just unlock and return */\n \tif (priv_timer[lcore_id].pending_head.sl_next[0] == NULL ||\n-\t\t\tpriv_timer[lcore_id].pending_head.sl_next[0]->expire > cur_time)\n-\t\tgoto done;\n+\t    priv_timer[lcore_id].pending_head.sl_next[0]->expire > cur_time) {\n+\t\trte_spinlock_unlock(&priv_timer[lcore_id].list_lock);\n+\t\treturn;\n+\t}\n \n \t/* save start of list of expired timers */\n \ttim = priv_timer[lcore_id].pending_head.sl_next[0];\n@@ -540,30 +543,47 @@ void rte_timer_manage(void)\n \t/* break the existing list at current time point */\n \ttimer_get_prev_entries(cur_time, lcore_id, prev);\n \tfor (i = priv_timer[lcore_id].curr_skiplist_depth -1; i >= 0; i--) {\n-\t\tpriv_timer[lcore_id].pending_head.sl_next[i] = prev[i]->sl_next[i];\n+\t\tpriv_timer[lcore_id].pending_head.sl_next[i] =\n+\t\t    prev[i]->sl_next[i];\n \t\tif (prev[i]->sl_next[i] == NULL)\n \t\t\tpriv_timer[lcore_id].curr_skiplist_depth--;\n \t\tprev[i] ->sl_next[i] = NULL;\n \t}\n \n-\t/* now scan expired list and call callbacks */\n+\t/* transition run-list from PENDING to RUNNING */\n+\trun_first_tim = tim;\n+\tpprev = &run_first_tim;\n+\n \tfor ( ; tim != NULL; tim = next_tim) {\n \t\tnext_tim = tim->sl_next[0];\n \n \t\tret = timer_set_running_state(tim);\n+\t\tif (likely(ret == 0)) {\n+\t\t\tpprev = &tim->sl_next[0];\n+\t\t} else {\n+\t\t\t/* another core is trying to re-config this one,\n+\t\t\t * remove it from local expired list and put it\n+\t\t\t * back on the priv_timer[] skip list */\n+\t\t\t*pprev = next_tim;\n+\t\t\ttimer_add(tim, lcore_id, 1);\n+\t\t}\n+\t}\n \n-\t\t/* this timer was not pending, continue */\n-\t\tif (ret < 0)\n-\t\t\tcontinue;\n+\t/* update the next to expire timer value */\n+\tpriv_timer[lcore_id].pending_head.expire =\n+\t    (priv_timer[lcore_id].pending_head.sl_next[0] == NULL) ? 
0 :\n+\t\tpriv_timer[lcore_id].pending_head.sl_next[0]->expire;\n \n-\t\trte_spinlock_unlock(&priv_timer[lcore_id].list_lock);\n+\trte_spinlock_unlock(&priv_timer[lcore_id].list_lock);\n \n+\t/* now scan expired list and call callbacks */\n+\tfor (tim = run_first_tim; tim != NULL; tim = next_tim) {\n+\t\tnext_tim = tim->sl_next[0];\n \t\tpriv_timer[lcore_id].updated = 0;\n \n \t\t/* execute callback function with list unlocked */\n \t\ttim->f(tim, tim->arg);\n \n-\t\trte_spinlock_lock(&priv_timer[lcore_id].list_lock);\n \t\t__TIMER_STAT_ADD(pending, -1);\n \t\t/* the timer was stopped or reloaded by the callback\n \t\t * function, we have nothing to do here */\n@@ -579,23 +599,17 @@ void rte_timer_manage(void)\n \t\t}\n \t\telse {\n \t\t\t/* keep it in list and mark timer as pending */\n+\t\t\trte_spinlock_lock(&priv_timer[lcore_id].list_lock);\n \t\t\tstatus.state = RTE_TIMER_PENDING;\n \t\t\t__TIMER_STAT_ADD(pending, 1);\n \t\t\tstatus.owner = (int16_t)lcore_id;\n \t\t\trte_wmb();\n \t\t\ttim->status.u32 = status.u32;\n \t\t\t__rte_timer_reset(tim, cur_time + tim->period,\n-\t\t\t\t\ttim->period, lcore_id, tim->f, tim->arg, 1);\n+\t\t\t\ttim->period, lcore_id, tim->f, tim->arg, 1);\n+\t\t\trte_spinlock_unlock(&priv_timer[lcore_id].list_lock);\n \t\t}\n \t}\n-\n-\t/* update the next to expire timer value */\n-\tpriv_timer[lcore_id].pending_head.expire =\n-\t\t\t(priv_timer[lcore_id].pending_head.sl_next[0] == NULL) ? 0 :\n-\t\t\t\t\tpriv_timer[lcore_id].pending_head.sl_next[0]->expire;\n-done:\n-\t/* job finished, unlock the list lock */\n-\trte_spinlock_unlock(&priv_timer[lcore_id].list_lock);\n }\n \n /* dump statistics about timers */\n",
    "prefixes": [
        "dpdk-dev",
        "v2",
        "3/3"
    ]
}
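
The PUT and PATCH methods advertised in the Allow header require an authenticated account with rights to update patches in this project. A hedged sketch of a partial update, again with requests; the token value is a placeholder, and treating "state" and "archived" as writable is an assumption based on their presence in the response above:

import requests

# Placeholder token; real tokens are issued per user by the Patchwork instance.
headers = {"Authorization": "Token 0123456789abcdef0123456789abcdef"}

# PATCH submits only the fields to change; PUT would resubmit every
# writable field in one request.
resp = requests.patch(
    "https://patches.dpdk.org/api/patches/6627/",
    headers=headers,
    json={"state": "accepted", "archived": True},
)
resp.raise_for_status()
print(resp.json()["state"])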