Show a patch.

GET /api/patches/73411/ HTTP/1.1

HTTP/1.1 200 OK
Content-Type: application/json
Vary: Accept

    {
    "id": 73411,
    "url": "",
    "web_url": "",
    "project": {
        "id": 1,
        "url": "",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "",
        "list_email": "",
        "web_url": "",
        "scm_url": "git://",
        "webscm_url": ""
    },
    "msgid": "<>",
    "date": "2020-07-07T09:50:46",
    "name": "[v6,1/4] doc: add generic atomic deprecation section",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "074effdc3c49347e4b621717b5a5ec86c06804bc",
    "submitter": {
        "id": 833,
        "url": "",
        "name": "Phil Yang",
        "email": ""
    },
    "delegate": {
        "id": 24651,
        "url": "",
        "username": "dmarchand",
        "first_name": "David",
        "last_name": "Marchand",
        "email": ""
    },
    "mbox": "",
    "series": [
        {
            "id": 10843,
            "url": "",
            "web_url": "",
            "date": "2020-07-07T09:50:45",
            "name": "generic rte atomic APIs deprecate proposal",
            "version": 6,
            "mbox": ""
        }
    ],
    "comments": "",
    "check": "fail",
    "checks": "",
    "tags": {},
    "headers": {
        "X-Mailer": "git-send-email 2.7.4",
        "List-Id": "DPDK patches and discussions <>",
        "From": "Phil Yang <>",
        "List-Help": "<>",
        "Date": "Tue,  7 Jul 2020 17:50:46 +0800",
        "X-Mailman-Version": "2.1.15",
        "Delivered-To": "",
        "List-Subscribe": "<>,\n <>",
        "Cc": ",,,\n,,,\n, Honnappa Nagarahalli <>",
        "To": ",\n ,\n ",
        "Errors-To": "",
        "References": "<>\n <>",
        "Sender": "\"dev\" <>",
        "Return-Path": "<>",
        "X-BeenThere": "",
        "List-Post": "<>",
        "Received": [
            "from ( [])\n\tby (Postfix) with ESMTP id 93834A00BE;\n\tTue,  7 Jul 2020 11:51:32 +0200 (CEST)",
            "from [] (localhost [])\n\tby (Postfix) with ESMTP id 243A11DD74;\n\tTue,  7 Jul 2020 11:51:31 +0200 (CEST)",
            "from ( [])\n by (Postfix) with ESMTP id AE5C41D59E\n for <>; Tue,  7 Jul 2020 11:51:29 +0200 (CEST)",
            "from (unknown [])\n by (Postfix) with ESMTP id 36B00C0A;\n Tue,  7 Jul 2020 02:51:29 -0700 (PDT)",
            "from\n ( [])\n by (Postfix) with ESMTPA id C487F3F718;\n Tue,  7 Jul 2020 02:51:25 -0700 (PDT)"
        ],
        "List-Archive": "<>",
        "Subject": "[dpdk-dev] [PATCH v6 1/4] doc: add generic atomic deprecation\n\tsection",
        "In-Reply-To": "<>",
        "Message-Id": "<>",
        "Precedence": "list",
        "List-Unsubscribe": "<>,\n <>",
        "X-Original-To": ""
    },
    "content": "Add deprecating the generic rte_atomic_xx APIs to c11 atomic built-ins\nguide and examples.\n\nSigned-off-by: Phil Yang <>\nSigned-off-by: Honnappa Nagarahalli <>\n---\n doc/guides/prog_guide/writing_efficient_code.rst | 139 ++++++++++++++++++++++-\n 1 file changed, 138 insertions(+), 1 deletion(-)",
    "diff": "diff --git a/doc/guides/prog_guide/writing_efficient_code.rst b/doc/guides/prog_guide/writing_efficient_code.rst\nindex 849f63e..3bd2601 100644\n--- a/doc/guides/prog_guide/writing_efficient_code.rst\n+++ b/doc/guides/prog_guide/writing_efficient_code.rst\n@@ -167,7 +167,13 @@ but with the added cost of lower throughput.\n Locks and Atomic Operations\n ---------------------------\n \n-Atomic operations imply a lock prefix before the instruction,\n+This section describes some key considerations when using locks and atomic\n+operations in the DPDK environment.\n+\n+Locks\n+~~~~~\n+\n+On x86, atomic operations imply a lock prefix before the instruction,\n causing the processor's LOCK# signal to be asserted during execution of the following instruction.\n This has a big impact on performance in a multicore environment.\n \n@@ -176,6 +182,137 @@ It can often be replaced by other solutions like per-lcore variables.\n Also, some locking techniques are more efficient than others.\n For instance, the Read-Copy-Update (RCU) algorithm can frequently replace simple rwlocks.\n \n+Atomic Operations: Use C11 Atomic Built-ins\n+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n+\n+DPDK `generic rte_atomic <>`_ operations are\n+implemented by `__sync built-ins <>`_.\n+These __sync built-ins result in full barriers on aarch64, which are unnecessary\n+in many use cases. They can be replaced by `__atomic built-ins <>`_ that\n+conform to the C11 memory model and provide finer memory order control.\n+\n+So replacing the rte_atomic operations with __atomic built-ins might improve\n+performance for aarch64 machines. `More details <>`_.\n+\n+Some typical optimization cases are listed below:\n+\n+Atomicity\n+^^^^^^^^^\n+\n+Some use cases require atomicity alone, the ordering of the memory operations\n+does not matter. For example the packets statistics in the `vhost <>`_ example application.\n+\n+It just updates the number of transmitted packets, no subsequent logic depends\n+on these counters. So the RELAXED memory ordering is sufficient:\n+\n+.. code-block:: c\n+\n+    static __rte_always_inline void\n+    virtio_xmit(struct vhost_dev *dst_vdev, struct vhost_dev *src_vdev,\n+            struct rte_mbuf *m)\n+    {\n+        ...\n+        ...\n+        if (enable_stats) {\n+            __atomic_add_fetch(&dst_vdev->stats.rx_total_atomic, 1, __ATOMIC_RELAXED);\n+            __atomic_add_fetch(&dst_vdev->stats.rx_atomic, ret, __ATOMIC_RELAXED);\n+            ...\n+        }\n+    }\n+\n+One-way Barrier\n+^^^^^^^^^^^^^^^\n+\n+Some use cases allow for memory reordering in one way while requiring memory\n+ordering in the other direction.\n+\n+For example, the memory operations before the `lock <>`_ can move to the\n+critical section, but the memory operations in the critical section cannot move\n+above the lock. In this case, the full memory barrier in the CAS operation can\n+be replaced to ACQUIRE. On the other hand, the memory operations after the\n+`unlock <>`_ can move to the critical section, but the memory operations in the\n+critical section cannot move below the unlock. So the full barrier in the STORE\n+operation can be replaced with RELEASE.\n+\n+Reader-Writer Concurrency\n+^^^^^^^^^^^^^^^^^^^^^^^^^\n+Lock-free reader-writer concurrency is one of the common use cases in DPDK.\n+\n+The payload or the data that the writer wants to communicate to the reader,\n+can be written with RELAXED memory order. However, the guard variable should\n+be written with RELEASE memory order. This ensures that the store to guard\n+variable is observable only after the store to payload is observable.\n+Refer to `rte_hash insert <>`_ for an example.\n+\n+.. code-block:: c\n+\n+    static inline int32_t\n+    rte_hash_cuckoo_insert_mw(const struct rte_hash *h,\n+        ...\n+        int32_t *ret_val)\n+    {\n+        ...\n+        ...\n+\n+        /* Insert new entry if there is room in the primary\n+         * bucket.\n+         */\n+        for (i = 0; i < RTE_HASH_BUCKET_ENTRIES; i++) {\n+                /* Check if slot is available */\n+                if (likely(prim_bkt->key_idx[i] == EMPTY_SLOT)) {\n+                        prim_bkt->sig_current[i] = sig;\n+                        /* Store to signature and key should not\n+                         * leak after the store to key_idx. i.e.\n+                         * key_idx is the guard variable for signature\n+                         * and key.\n+                         */\n+                        __atomic_store_n(&prim_bkt->key_idx[i],\n+                                         new_idx,\n+                                         __ATOMIC_RELEASE);\n+                        break;\n+                }\n+        }\n+\n+        ...\n+    }\n+\n+Correspondingly, on the reader side, the guard variable should be read\n+with ACQUIRE memory order. The payload or the data the writer communicated,\n+can be read with RELAXED memory order. This ensures that, if the store to\n+guard variable is observable, the store to payload is also observable. Refer to `rte_hash lookup <>`_ for an example.\n+\n+.. code-block:: c\n+\n+    static inline int32_t\n+    search_one_bucket_lf(const struct rte_hash *h, const void *key, uint16_t sig,\n+        void **data, const struct rte_hash_bucket *bkt)\n+    {\n+        ...\n+\n+        for (i = 0; i < RTE_HASH_BUCKET_ENTRIES; i++) {\n+            ....\n+            if (bkt->sig_current[i] == sig) {\n+                key_idx = __atomic_load_n(&bkt->key_idx[i],\n+                                        __ATOMIC_ACQUIRE);\n+                if (key_idx != EMPTY_SLOT) {\n+                    k = (struct rte_hash_key *) ((char *)keys +\n+                        key_idx * h->key_entry_size);\n+\n+                if (rte_hash_cmp_eq(key, k->key, h) == 0) {\n+                    if (data != NULL) {\n+                        *data = __atomic_load_n(&k->pdata,\n+                                        __ATOMIC_ACQUIRE);\n+                    }\n+\n+                    /*\n+                    * Return index where key is stored,\n+                    * subtracting the first dummy index\n+                    */\n+                    return key_idx - 1;\n+                }\n+            ...\n+    }\n+\n Coding Considerations\n ---------------------\n \n",
    "prefixes": [
        "v6",
        "1/4"
    ]
}
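As a usage sketch (not part of the endpoint reference above): a client can load this response with any JSON library and inspect fields such as ``state``, ``check``, and ``series``. The snippet below parses a trimmed copy of the response shown above; the empty URL and email strings are elided in the docs and stay elided here, and the ``needs_review`` heuristic is purely illustrative, not a Patchwork API concept.

```python
import json

# Trimmed copy of the /api/patches/73411/ response body shown above.
# Empty strings stand in for the URLs/emails elided in the docs.
response_body = """
{
    "id": 73411,
    "name": "[v6,1/4] doc: add generic atomic deprecation section",
    "state": "superseded",
    "archived": true,
    "check": "fail",
    "series": [
        {
            "id": 10843,
            "name": "generic rte atomic APIs deprecate proposal",
            "version": 6
        }
    ]
}
"""

patch = json.loads(response_body)

# Illustrative triage rule (an assumption, not API semantics): a patch that
# is superseded/rejected or whose aggregate check failed needs no review.
needs_review = (patch["state"] not in ("superseded", "rejected")
                and patch["check"] != "fail")

print(patch["name"])
print("series version:", patch["series"][0]["version"])
print("needs review:", needs_review)
```

A real client would obtain ``response_body`` by issuing the ``GET`` request shown at the top of this section and reading the response with HTTP status 200.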