get:
Show a patch.

patch:
Partially update a patch (only the fields supplied are changed).

put:
Update a patch (full replacement of writable fields).

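Before the sample exchange below, a quick sketch of how a client might drive the write side of this endpoint. This is an illustrative assumption, not Patchwork's own tooling: the helper name, token value, and chosen fields are made up, though the URL shape matches the response below and Patchwork's REST API does accept `Authorization: Token <key>` authentication.

```python
# Sketch: assembling an HTTP PATCH request for this endpoint.
# The URL shape is taken from the response below; the helper name,
# token value, and field choices are illustrative assumptions.

def build_patch_update(patch_id: int, fields: dict, token: str) -> dict:
    """Assemble the pieces of a PATCH request for a patch.

    PATCH sends only the fields being changed (e.g. 'state',
    'delegate', 'archived'); PUT would replace them all.
    """
    return {
        "method": "PATCH",
        "url": f"http://patches.dpdk.org/api/patches/{patch_id}/",
        "headers": {
            "Authorization": f"Token {token}",
            "Content-Type": "application/json",
        },
        "body": fields,
    }

req = build_patch_update(103220, {"state": "superseded", "archived": True},
                         "0123abcd")
print(req["method"], req["url"])
```

The dict returned here maps directly onto whatever HTTP client you prefer (`requests.patch(url, headers=..., json=body)`, for instance).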
GET /api/patches/103220/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 103220,
    "url": "http://patches.dpdk.org/api/patches/103220/?format=api",
    "web_url": "http://patches.dpdk.org/project/dpdk/patch/20211029082021.945586-2-feifei.wang2@arm.com/",
    "project": {
        "id": 1,
        "url": "http://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<20211029082021.945586-2-feifei.wang2@arm.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/20211029082021.945586-2-feifei.wang2@arm.com",
    "date": "2021-10-29T08:20:17",
    "name": "[v8,1/5] eal: add new definitions for wait scheme",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "671cf47732d58a17db9ee3c0086e464123381ff7",
    "submitter": {
        "id": 1771,
        "url": "http://patches.dpdk.org/api/people/1771/?format=api",
        "name": "Feifei Wang",
        "email": "feifei.wang2@arm.com"
    },
    "delegate": {
        "id": 24651,
        "url": "http://patches.dpdk.org/api/users/24651/?format=api",
        "username": "dmarchand",
        "first_name": "David",
        "last_name": "Marchand",
        "email": "david.marchand@redhat.com"
    },
    "mbox": "http://patches.dpdk.org/project/dpdk/patch/20211029082021.945586-2-feifei.wang2@arm.com/mbox/",
    "series": [
        {
            "id": 20120,
            "url": "http://patches.dpdk.org/api/series/20120/?format=api",
            "web_url": "http://patches.dpdk.org/project/dpdk/list/?series=20120",
            "date": "2021-10-29T08:20:16",
            "name": "add new definitions for wait scheme",
            "version": 8,
            "mbox": "http://patches.dpdk.org/series/20120/mbox/"
        }
    ],
    "comments": "http://patches.dpdk.org/api/patches/103220/comments/",
    "check": "success",
    "checks": "http://patches.dpdk.org/api/patches/103220/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@inbox.dpdk.org",
        "Delivered-To": "patchwork@inbox.dpdk.org",
        "Received": [
            "from mails.dpdk.org (mails.dpdk.org [217.70.189.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id 39AD7A0032;\n\tFri, 29 Oct 2021 10:20:38 +0200 (CEST)",
            "from [217.70.189.124] (localhost [127.0.0.1])\n\tby mails.dpdk.org (Postfix) with ESMTP id 2734C41139;\n\tFri, 29 Oct 2021 10:20:37 +0200 (CEST)",
            "from foss.arm.com (foss.arm.com [217.140.110.172])\n by mails.dpdk.org (Postfix) with ESMTP id 9E65B41136\n for <dev@dpdk.org>; Fri, 29 Oct 2021 10:20:35 +0200 (CEST)",
            "from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])\n by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 0B861ED1;\n Fri, 29 Oct 2021 01:20:35 -0700 (PDT)",
            "from net-x86-dell-8268.shanghai.arm.com\n (net-x86-dell-8268.shanghai.arm.com [10.169.210.102])\n by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPA id E37793F5A1;\n Fri, 29 Oct 2021 01:20:31 -0700 (PDT)"
        ],
        "From": "Feifei Wang <feifei.wang2@arm.com>",
        "To": "Ruifeng Wang <ruifeng.wang@arm.com>",
        "Cc": "dev@dpdk.org, nd@arm.com, jerinjacobk@gmail.com,\n stephen@networkplumber.org, david.marchand@redhat.com, thomas@monjalon.net,\n mattias.ronnblom@ericsson.com, konstantin.ananyev@intel.com,\n Feifei Wang <feifei.wang2@arm.com>",
        "Date": "Fri, 29 Oct 2021 16:20:17 +0800",
        "Message-Id": "<20211029082021.945586-2-feifei.wang2@arm.com>",
        "X-Mailer": "git-send-email 2.25.1",
        "In-Reply-To": "<20211029082021.945586-1-feifei.wang2@arm.com>",
        "References": "<20210902053253.3017858-1-feifei.wang2@arm.com>\n <20211029082021.945586-1-feifei.wang2@arm.com>",
        "MIME-Version": "1.0",
        "Content-Type": "text/plain; charset=UTF-8",
        "Content-Transfer-Encoding": "8bit",
        "Subject": "[dpdk-dev] [PATCH v8 1/5] eal: add new definitions for wait scheme",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.29",
        "Precedence": "list",
        "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
        "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://mails.dpdk.org/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "Introduce macros as generic interface for address monitoring.\n\nAdd '__LOAD_EXC_128' for size of 128. For different size, encapsulate\n'__LOAD_EXC_16', '__LOAD_EXC_32', '__LOAD_EXC_64' and '__LOAD_EXC_128'\ninto a new macro '__LOAD_EXC'.\n\nFurthermore, to prevent compilation warning in arm:\n----------------------------------------------\n'warning: implicit declaration of function ...'\n----------------------------------------------\nDelete 'undef' constructions for '__LOAD_EXC_xx', '__SEVL' and '__WFE'.\nAnd add ‘__RTE_ARM’ for these macros to fix the namespace.\nThis is because original macros are undefine at the end of the file.\nIf new macro 'rte_wait_event' calls them in other files, they will be\nseen as 'not defined'.\n\nSigned-off-by: Feifei Wang <feifei.wang2@arm.com>\nReviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>\nAcked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>\n---\n lib/eal/arm/include/rte_pause_64.h  | 202 +++++++++++++++++-----------\n lib/eal/include/generic/rte_pause.h |  28 ++++\n 2 files changed, 154 insertions(+), 76 deletions(-)",
    "diff": "diff --git a/lib/eal/arm/include/rte_pause_64.h b/lib/eal/arm/include/rte_pause_64.h\nindex e87d10b8cc..783c6aae87 100644\n--- a/lib/eal/arm/include/rte_pause_64.h\n+++ b/lib/eal/arm/include/rte_pause_64.h\n@@ -26,47 +26,120 @@ static inline void rte_pause(void)\n #ifdef RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED\n \n /* Send an event to quit WFE. */\n-#define __SEVL() { asm volatile(\"sevl\" : : : \"memory\"); }\n+#define __RTE_ARM_SEVL() { asm volatile(\"sevl\" : : : \"memory\"); }\n \n /* Put processor into low power WFE(Wait For Event) state. */\n-#define __WFE() { asm volatile(\"wfe\" : : : \"memory\"); }\n+#define __RTE_ARM_WFE() { asm volatile(\"wfe\" : : : \"memory\"); }\n \n-static __rte_always_inline void\n-rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,\n-\t\tint memorder)\n-{\n-\tuint16_t value;\n-\n-\tassert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);\n-\n-\t/*\n-\t * Atomic exclusive load from addr, it returns the 16-bit content of\n-\t * *addr while making it 'monitored',when it is written by someone\n-\t * else, the 'monitored' state is cleared and a event is generated\n-\t * implicitly to exit WFE.\n-\t */\n-#define __LOAD_EXC_16(src, dst, memorder) {               \\\n+/*\n+ * Atomic exclusive load from addr, it returns the 16-bit content of\n+ * *addr while making it 'monitored', when it is written by someone\n+ * else, the 'monitored' state is cleared and an event is generated\n+ * implicitly to exit WFE.\n+ */\n+#define __RTE_ARM_LOAD_EXC_16(src, dst, memorder) {       \\\n \tif (memorder == __ATOMIC_RELAXED) {               \\\n \t\tasm volatile(\"ldxrh %w[tmp], [%x[addr]]\"  \\\n \t\t\t: [tmp] \"=&r\" (dst)               \\\n-\t\t\t: [addr] \"r\"(src)                 \\\n+\t\t\t: [addr] \"r\" (src)                \\\n \t\t\t: \"memory\");                      \\\n \t} else {                                          \\\n \t\tasm volatile(\"ldaxrh %w[tmp], [%x[addr]]\" \\\n \t\t\t: [tmp] \"=&r\" (dst)  
             \\\n-\t\t\t: [addr] \"r\"(src)                 \\\n+\t\t\t: [addr] \"r\" (src)                \\\n \t\t\t: \"memory\");                      \\\n \t} }\n \n-\t__LOAD_EXC_16(addr, value, memorder)\n+/*\n+ * Atomic exclusive load from addr, it returns the 32-bit content of\n+ * *addr while making it 'monitored', when it is written by someone\n+ * else, the 'monitored' state is cleared and an event is generated\n+ * implicitly to exit WFE.\n+ */\n+#define __RTE_ARM_LOAD_EXC_32(src, dst, memorder) {      \\\n+\tif (memorder == __ATOMIC_RELAXED) {              \\\n+\t\tasm volatile(\"ldxr %w[tmp], [%x[addr]]\"  \\\n+\t\t\t: [tmp] \"=&r\" (dst)              \\\n+\t\t\t: [addr] \"r\" (src)               \\\n+\t\t\t: \"memory\");                     \\\n+\t} else {                                         \\\n+\t\tasm volatile(\"ldaxr %w[tmp], [%x[addr]]\" \\\n+\t\t\t: [tmp] \"=&r\" (dst)              \\\n+\t\t\t: [addr] \"r\" (src)               \\\n+\t\t\t: \"memory\");                     \\\n+\t} }\n+\n+/*\n+ * Atomic exclusive load from addr, it returns the 64-bit content of\n+ * *addr while making it 'monitored', when it is written by someone\n+ * else, the 'monitored' state is cleared and an event is generated\n+ * implicitly to exit WFE.\n+ */\n+#define __RTE_ARM_LOAD_EXC_64(src, dst, memorder) {      \\\n+\tif (memorder == __ATOMIC_RELAXED) {              \\\n+\t\tasm volatile(\"ldxr %x[tmp], [%x[addr]]\"  \\\n+\t\t\t: [tmp] \"=&r\" (dst)              \\\n+\t\t\t: [addr] \"r\" (src)               \\\n+\t\t\t: \"memory\");                     \\\n+\t} else {                                         \\\n+\t\tasm volatile(\"ldaxr %x[tmp], [%x[addr]]\" \\\n+\t\t\t: [tmp] \"=&r\" (dst)              \\\n+\t\t\t: [addr] \"r\" (src)               \\\n+\t\t\t: \"memory\");                     \\\n+\t} }\n+\n+/*\n+ * Atomic exclusive load from addr, it returns the 128-bit content of\n+ * *addr while making it 'monitored', when it is written by someone\n+ * else, 
the 'monitored' state is cleared and an event is generated\n+ * implicitly to exit WFE.\n+ */\n+#define __RTE_ARM_LOAD_EXC_128(src, dst, memorder) {                    \\\n+\tvolatile rte_int128_t *dst_128 = (volatile rte_int128_t *)&dst; \\\n+\tif (memorder == __ATOMIC_RELAXED) {                             \\\n+\t\tasm volatile(\"ldxp %x[tmp0], %x[tmp1], [%x[addr]]\"      \\\n+\t\t\t: [tmp0] \"=&r\" (dst_128->val[0]),               \\\n+\t\t\t  [tmp1] \"=&r\" (dst_128->val[1])                \\\n+\t\t\t: [addr] \"r\" (src)                              \\\n+\t\t\t: \"memory\");                                    \\\n+\t} else {                                                        \\\n+\t\tasm volatile(\"ldaxp %x[tmp0], %x[tmp1], [%x[addr]]\"     \\\n+\t\t\t: [tmp0] \"=&r\" (dst_128->val[0]),               \\\n+\t\t\t  [tmp1] \"=&r\" (dst_128->val[1])                \\\n+\t\t\t: [addr] \"r\" (src)                              \\\n+\t\t\t: \"memory\");                                    \\\n+\t} }                                                             \\\n+\n+#define __RTE_ARM_LOAD_EXC(src, dst, memorder, size) {          \\\n+\tRTE_BUILD_BUG_ON(size != 16 && size != 32 && size != 64 \\\n+\t\t\t\t&& size != 128);                \\\n+\tif (size == 16)                                         \\\n+\t\t__RTE_ARM_LOAD_EXC_16(src, dst, memorder)       \\\n+\telse if (size == 32)                                    \\\n+\t\t__RTE_ARM_LOAD_EXC_32(src, dst, memorder)       \\\n+\telse if (size == 64)                                    \\\n+\t\t__RTE_ARM_LOAD_EXC_64(src, dst, memorder)       \\\n+\telse if (size == 128)                                   \\\n+\t\t__RTE_ARM_LOAD_EXC_128(src, dst, memorder)      \\\n+}\n+\n+static __rte_always_inline void\n+rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,\n+\t\tint memorder)\n+{\n+\tuint16_t value;\n+\n+\tRTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE &&\n+\t\t\t\tmemorder != 
__ATOMIC_RELAXED);\n+\n+\t__RTE_ARM_LOAD_EXC_16(addr, value, memorder)\n \tif (value != expected) {\n-\t\t__SEVL()\n+\t\t__RTE_ARM_SEVL()\n \t\tdo {\n-\t\t\t__WFE()\n-\t\t\t__LOAD_EXC_16(addr, value, memorder)\n+\t\t\t__RTE_ARM_WFE()\n+\t\t\t__RTE_ARM_LOAD_EXC_16(addr, value, memorder)\n \t\t} while (value != expected);\n \t}\n-#undef __LOAD_EXC_16\n }\n \n static __rte_always_inline void\n@@ -75,36 +148,17 @@ rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,\n {\n \tuint32_t value;\n \n-\tassert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);\n-\n-\t/*\n-\t * Atomic exclusive load from addr, it returns the 32-bit content of\n-\t * *addr while making it 'monitored',when it is written by someone\n-\t * else, the 'monitored' state is cleared and a event is generated\n-\t * implicitly to exit WFE.\n-\t */\n-#define __LOAD_EXC_32(src, dst, memorder) {              \\\n-\tif (memorder == __ATOMIC_RELAXED) {              \\\n-\t\tasm volatile(\"ldxr %w[tmp], [%x[addr]]\"  \\\n-\t\t\t: [tmp] \"=&r\" (dst)              \\\n-\t\t\t: [addr] \"r\"(src)                \\\n-\t\t\t: \"memory\");                     \\\n-\t} else {                                         \\\n-\t\tasm volatile(\"ldaxr %w[tmp], [%x[addr]]\" \\\n-\t\t\t: [tmp] \"=&r\" (dst)              \\\n-\t\t\t: [addr] \"r\"(src)                \\\n-\t\t\t: \"memory\");                     \\\n-\t} }\n+\tRTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE &&\n+\t\t\t\tmemorder != __ATOMIC_RELAXED);\n \n-\t__LOAD_EXC_32(addr, value, memorder)\n+\t__RTE_ARM_LOAD_EXC_32(addr, value, memorder)\n \tif (value != expected) {\n-\t\t__SEVL()\n+\t\t__RTE_ARM_SEVL()\n \t\tdo {\n-\t\t\t__WFE()\n-\t\t\t__LOAD_EXC_32(addr, value, memorder)\n+\t\t\t__RTE_ARM_WFE()\n+\t\t\t__RTE_ARM_LOAD_EXC_32(addr, value, memorder)\n \t\t} while (value != expected);\n \t}\n-#undef __LOAD_EXC_32\n }\n \n static __rte_always_inline void\n@@ -113,40 +167,36 @@ rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t 
expected,\n {\n \tuint64_t value;\n \n-\tassert(memorder == __ATOMIC_ACQUIRE || memorder == __ATOMIC_RELAXED);\n-\n-\t/*\n-\t * Atomic exclusive load from addr, it returns the 64-bit content of\n-\t * *addr while making it 'monitored',when it is written by someone\n-\t * else, the 'monitored' state is cleared and a event is generated\n-\t * implicitly to exit WFE.\n-\t */\n-#define __LOAD_EXC_64(src, dst, memorder) {              \\\n-\tif (memorder == __ATOMIC_RELAXED) {              \\\n-\t\tasm volatile(\"ldxr %x[tmp], [%x[addr]]\"  \\\n-\t\t\t: [tmp] \"=&r\" (dst)              \\\n-\t\t\t: [addr] \"r\"(src)                \\\n-\t\t\t: \"memory\");                     \\\n-\t} else {                                         \\\n-\t\tasm volatile(\"ldaxr %x[tmp], [%x[addr]]\" \\\n-\t\t\t: [tmp] \"=&r\" (dst)              \\\n-\t\t\t: [addr] \"r\"(src)                \\\n-\t\t\t: \"memory\");                     \\\n-\t} }\n+\tRTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE &&\n+\t\t\t\tmemorder != __ATOMIC_RELAXED);\n \n-\t__LOAD_EXC_64(addr, value, memorder)\n+\t__RTE_ARM_LOAD_EXC_64(addr, value, memorder)\n \tif (value != expected) {\n-\t\t__SEVL()\n+\t\t__RTE_ARM_SEVL()\n \t\tdo {\n-\t\t\t__WFE()\n-\t\t\t__LOAD_EXC_64(addr, value, memorder)\n+\t\t\t__RTE_ARM_WFE()\n+\t\t\t__RTE_ARM_LOAD_EXC_64(addr, value, memorder)\n \t\t} while (value != expected);\n \t}\n }\n-#undef __LOAD_EXC_64\n \n-#undef __SEVL\n-#undef __WFE\n+#define rte_wait_event(addr, mask, cond, expected, memorder)              \\\n+do {                                                                      \\\n+\tRTE_BUILD_BUG_ON(!__builtin_constant_p(memorder));                \\\n+\tRTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE &&                  \\\n+\t\t\t\tmemorder != __ATOMIC_RELAXED);            \\\n+\tconst uint32_t size = sizeof(*(addr)) << 3;                       \\\n+\ttypeof(*(addr)) expected_value = (expected);                      \\\n+\ttypeof(*(addr)) value;                           
                 \\\n+\t__RTE_ARM_LOAD_EXC((addr), value, memorder, size)                 \\\n+\tif ((value & (mask)) cond expected_value) {                       \\\n+\t\t__RTE_ARM_SEVL()                                          \\\n+\t\tdo {                                                      \\\n+\t\t\t__RTE_ARM_WFE()                                   \\\n+\t\t\t__RTE_ARM_LOAD_EXC((addr), value, memorder, size) \\\n+\t\t} while ((value & (mask)) cond expected_value);           \\\n+\t}                                                                 \\\n+} while (0)\n \n #endif\n \ndiff --git a/lib/eal/include/generic/rte_pause.h b/lib/eal/include/generic/rte_pause.h\nindex 668ee4a184..d0c5b5a415 100644\n--- a/lib/eal/include/generic/rte_pause.h\n+++ b/lib/eal/include/generic/rte_pause.h\n@@ -111,6 +111,34 @@ rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,\n \twhile (__atomic_load_n(addr, memorder) != expected)\n \t\trte_pause();\n }\n+\n+/*\n+ * Wait until *addr breaks the condition, with a relaxed memory\n+ * ordering model meaning the loads around this API can be reordered.\n+ *\n+ * @param addr\n+ *  A pointer to the memory location.\n+ * @param mask\n+ *  A mask of value bits in interest.\n+ * @param cond\n+ *  A symbol representing the condition.\n+ * @param expected\n+ *  An expected value to be in the memory location.\n+ * @param memorder\n+ *  Two different memory orders that can be specified:\n+ *  __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. 
These map to\n+ *  C++11 memory orders with the same names, see the C++11 standard or\n+ *  the GCC wiki on atomic synchronization for detailed definition.\n+ */\n+#define rte_wait_event(addr, mask, cond, expected, memorder)                       \\\n+do {                                                                               \\\n+\tRTE_BUILD_BUG_ON(!__builtin_constant_p(memorder));                         \\\n+\tRTE_BUILD_BUG_ON(memorder != __ATOMIC_ACQUIRE &&                           \\\n+\t\t\t\tmemorder != __ATOMIC_RELAXED);                     \\\n+\ttypeof(*(addr)) expected_value = (expected);                               \\\n+\twhile ((__atomic_load_n((addr), (memorder)) & (mask)) cond expected_value) \\\n+\t\trte_pause();                                                       \\\n+} while (0)\n #endif\n \n #endif /* _RTE_PAUSE_H_ */\n",
    "prefixes": [
        "v8",
        "1/5"
    ]
}
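As a minimal sketch of consuming this response: the snippet below parses an inline, trimmed copy of the payload above and pulls out the fields most clients care about. The field names match the real response; the inline sample is abbreviated, and the `summary` shape is an assumption for illustration.

```python
# Sketch: extracting the commonly used fields from a
# GET /api/patches/<id>/ response. The inline sample is a trimmed
# copy of the response above.
import json

sample = """{
    "id": 103220,
    "name": "[v8,1/5] eal: add new definitions for wait scheme",
    "state": "superseded",
    "archived": true,
    "submitter": {"name": "Feifei Wang", "email": "feifei.wang2@arm.com"},
    "series": [{"id": 20120, "version": 8,
                "mbox": "http://patches.dpdk.org/series/20120/mbox/"}],
    "check": "success"
}"""

patch = json.loads(sample)

# 'state' tracks the patch lifecycle (new, superseded, accepted, ...);
# 'series[0]["mbox"]' is the URL whose contents can be fed to `git am`
# to apply the whole series.
summary = {
    "name": patch["name"],
    "state": patch["state"],
    "submitter": patch["submitter"]["email"],
    "series_mbox": patch["series"][0]["mbox"],
}
print(summary["state"])        # superseded
print(summary["series_mbox"])  # http://patches.dpdk.org/series/20120/mbox/
```

In a live client the `sample` string would be replaced by the body of an actual GET against the endpoint; everything after `json.loads` stays the same.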