get:
Show a patch.

patch:
Partially update a patch.

put:
Update a patch.
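
A minimal sketch of how these endpoints can be exercised from Python with the requests library, assuming the standard Patchwork REST API layout shown below (/api/patches/{id}/). Reads need no authentication; PATCH/PUT require a Patchwork API token with maintainer or delegate rights on the project. The token value here is a placeholder, not a real credential.

    # Sketch: query and update a patch through the Patchwork REST API.
    # Assumes the `requests` library; TOKEN is a hypothetical placeholder.
    import requests

    BASE = "http://patches.dpdk.org/api"
    TOKEN = "0123456789abcdef"  # hypothetical; substitute your own API token

    # GET: show a patch (read access needs no authentication).
    patch = requests.get(f"{BASE}/patches/105235/").json()
    print(patch["name"], patch["state"])

    # PATCH: partial update, e.g. change only the state field.
    # Requires maintainer/delegate permissions on the project.
    resp = requests.patch(
        f"{BASE}/patches/105235/",
        headers={"Authorization": f"Token {TOKEN}"},
        json={"state": "accepted"},
    )
    resp.raise_for_status()

PUT behaves the same way but expects a full representation of the writable fields rather than only the ones being changed.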

GET /api/patches/105235/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 105235,
    "url": "http://patches.dpdk.org/api/patches/105235/?format=api",
    "web_url": "http://patches.dpdk.org/project/dpdk/patch/20211217131526.3135144-1-ferruh.yigit@intel.com/",
    "project": {
        "id": 1,
        "url": "http://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<20211217131526.3135144-1-ferruh.yigit@intel.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/20211217131526.3135144-1-ferruh.yigit@intel.com",
    "date": "2021-12-17T13:15:26",
    "name": "examples/performance-thread: remove",
    "commit_ref": null,
    "pull_url": null,
    "state": "accepted",
    "archived": true,
    "hash": "2d3e838164e97bb51428e29b1f3946bc51073f81",
    "submitter": {
        "id": 324,
        "url": "http://patches.dpdk.org/api/people/324/?format=api",
        "name": "Ferruh Yigit",
        "email": "ferruh.yigit@intel.com"
    },
    "delegate": {
        "id": 1,
        "url": "http://patches.dpdk.org/api/users/1/?format=api",
        "username": "tmonjalo",
        "first_name": "Thomas",
        "last_name": "Monjalon",
        "email": "thomas@monjalon.net"
    },
    "mbox": "http://patches.dpdk.org/project/dpdk/patch/20211217131526.3135144-1-ferruh.yigit@intel.com/mbox/",
    "series": [
        {
            "id": 20969,
            "url": "http://patches.dpdk.org/api/series/20969/?format=api",
            "web_url": "http://patches.dpdk.org/project/dpdk/list/?series=20969",
            "date": "2021-12-17T13:15:26",
            "name": "examples/performance-thread: remove",
            "version": 1,
            "mbox": "http://patches.dpdk.org/series/20969/mbox/"
        }
    ],
    "comments": "http://patches.dpdk.org/api/patches/105235/comments/",
    "check": "success",
    "checks": "http://patches.dpdk.org/api/patches/105235/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@inbox.dpdk.org",
        "Delivered-To": "patchwork@inbox.dpdk.org",
        "Received": [
            "from mails.dpdk.org (mails.dpdk.org [217.70.189.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id 30066A04A2;\n\tFri, 17 Dec 2021 14:15:39 +0100 (CET)",
            "from [217.70.189.124] (localhost [127.0.0.1])\n\tby mails.dpdk.org (Postfix) with ESMTP id A8A7B4013F;\n\tFri, 17 Dec 2021 14:15:38 +0100 (CET)",
            "from mga05.intel.com (mga05.intel.com [192.55.52.43])\n by mails.dpdk.org (Postfix) with ESMTP id EE26040040\n for <dev@dpdk.org>; Fri, 17 Dec 2021 14:15:34 +0100 (CET)",
            "from fmsmga004.fm.intel.com ([10.253.24.48])\n by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;\n 17 Dec 2021 05:15:33 -0800",
            "from silpixa00399752.ir.intel.com (HELO\n silpixa00399752.ger.corp.intel.com) ([10.237.222.27])\n by fmsmga004.fm.intel.com with ESMTP; 17 Dec 2021 05:15:30 -0800"
        ],
        "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/simple;\n d=intel.com; i=@intel.com; q=dns/txt; s=Intel;\n t=1639746935; x=1671282935;\n h=from:to:cc:subject:date:message-id:mime-version:\n content-transfer-encoding;\n bh=lT5skxf1e2LtUkoXQ6AbyqRU8ktXqZQdHL3VC+pAuWw=;\n b=HmgHyQ7a7nJtIf0A4hl0rv/MaFTF1KRbKpdg5jULxYr+DEzqymHwHSwt\n 9sCEu2Ql1fYZLTQ1WhO1N7rHQnYYfHU4ugyFwpCQ0fK1EYR+Ux68NztZK\n Dfe7YVzNOO/KA/f3Jh5KYDLW+NMtMdRHAHJL1mgUv2VXsDW15YXXHOpcw\n 7MwYhhzPZZUBIQ/YNUGseUVZchOeH7voYSeet+OQS7iiF940+CDkiJmnG\n 4vfCzrdJfFN8cBMTnyu6ty3+xbuDu9d8/SRZJlhyO24F2Pv3Oq+P5iGWw\n +mK7EbVkAPqo1NOteMw5gBw5Isz/ijCHNp4YlLrzu3/8h3X0UKtDHe3Oj g==;",
        "X-IronPort-AV": [
            "E=McAfee;i=\"6200,9189,10200\"; a=\"326039720\"",
            "E=Sophos;i=\"5.88,213,1635231600\"; d=\"scan'208\";a=\"326039720\"",
            "E=Sophos;i=\"5.88,213,1635231600\"; d=\"scan'208\";a=\"585577337\""
        ],
        "X-ExtLoop1": "1",
        "From": "Ferruh Yigit <ferruh.yigit@intel.com>",
        "To": "Thomas Monjalon <thomas@monjalon.net>",
        "Cc": "dev@dpdk.org, Ferruh Yigit <ferruh.yigit@intel.com>,\n David Marchand <david.marchand@redhat.com>",
        "Subject": "[PATCH] examples/performance-thread: remove",
        "Date": "Fri, 17 Dec 2021 13:15:26 +0000",
        "Message-Id": "<20211217131526.3135144-1-ferruh.yigit@intel.com>",
        "X-Mailer": "git-send-email 2.33.1",
        "MIME-Version": "1.0",
        "Content-Transfer-Encoding": "8bit",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.29",
        "Precedence": "list",
        "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
        "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://mails.dpdk.org/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org"
    },
    "content": "Remove sample application which is not clear if it is still relevant.\n\nSigned-off-by: Ferruh Yigit <ferruh.yigit@intel.com>\n---\n\nPlease comment if there is an reason to keep the sample application.\n---\n MAINTAINERS                                   |    5 -\n doc/guides/rel_notes/release_22_03.rst        |    1 +\n .../img/performance_thread_1.svg              |  799 ----\n .../img/performance_thread_2.svg              |  865 ----\n doc/guides/sample_app_ug/index.rst            |    1 -\n .../sample_app_ug/performance_thread.rst      | 1201 ------\n examples/meson.build                          |    2 -\n examples/performance-thread/Makefile          |   14 -\n .../common/arch/arm64/ctx.c                   |   62 -\n .../common/arch/arm64/ctx.h                   |   55 -\n .../common/arch/arm64/stack.h                 |   56 -\n .../performance-thread/common/arch/x86/ctx.c  |   37 -\n .../performance-thread/common/arch/x86/ctx.h  |   36 -\n .../common/arch/x86/stack.h                   |   40 -\n examples/performance-thread/common/common.mk  |   21 -\n examples/performance-thread/common/lthread.c  |  470 --\n examples/performance-thread/common/lthread.h  |   51 -\n .../performance-thread/common/lthread_api.h   |  784 ----\n .../performance-thread/common/lthread_cond.c  |  184 -\n .../performance-thread/common/lthread_cond.h  |   30 -\n .../performance-thread/common/lthread_diag.c  |  293 --\n .../performance-thread/common/lthread_diag.h  |  112 -\n .../common/lthread_diag_api.h                 |  304 --\n .../performance-thread/common/lthread_int.h   |  151 -\n .../performance-thread/common/lthread_mutex.c |  226 -\n .../performance-thread/common/lthread_mutex.h |   31 -\n .../common/lthread_objcache.h                 |  136 -\n .../performance-thread/common/lthread_pool.h  |  277 --\n .../performance-thread/common/lthread_queue.h |  247 --\n .../performance-thread/common/lthread_sched.c |  540 ---\n .../performance-thread/common/lthread_sched.h |  104 -\n .../performance-thread/common/lthread_timer.h |   68 -\n .../performance-thread/common/lthread_tls.c   |  223 -\n .../performance-thread/common/lthread_tls.h   |   35 -\n .../performance-thread/l3fwd-thread/Makefile  |   54 -\n .../performance-thread/l3fwd-thread/main.c    | 3797 -----------------\n .../l3fwd-thread/meson.build                  |   32 -\n .../performance-thread/l3fwd-thread/test.sh   |  150 -\n .../performance-thread/pthread_shim/Makefile  |   63 -\n .../performance-thread/pthread_shim/main.c    |  271 --\n .../pthread_shim/meson.build                  |   33 -\n .../pthread_shim/pthread_shim.c               |  713 ----\n .../pthread_shim/pthread_shim.h               |   85 -\n 43 files changed, 1 insertion(+), 12658 deletions(-)\n delete mode 100644 doc/guides/sample_app_ug/img/performance_thread_1.svg\n delete mode 100644 doc/guides/sample_app_ug/img/performance_thread_2.svg\n delete mode 100644 doc/guides/sample_app_ug/performance_thread.rst\n delete mode 100644 examples/performance-thread/Makefile\n delete mode 100644 examples/performance-thread/common/arch/arm64/ctx.c\n delete mode 100644 examples/performance-thread/common/arch/arm64/ctx.h\n delete mode 100644 examples/performance-thread/common/arch/arm64/stack.h\n delete mode 100644 examples/performance-thread/common/arch/x86/ctx.c\n delete mode 100644 examples/performance-thread/common/arch/x86/ctx.h\n delete mode 100644 examples/performance-thread/common/arch/x86/stack.h\n delete mode 100644 examples/performance-thread/common/common.mk\n 
delete mode 100644 examples/performance-thread/common/lthread.c\n delete mode 100644 examples/performance-thread/common/lthread.h\n delete mode 100644 examples/performance-thread/common/lthread_api.h\n delete mode 100644 examples/performance-thread/common/lthread_cond.c\n delete mode 100644 examples/performance-thread/common/lthread_cond.h\n delete mode 100644 examples/performance-thread/common/lthread_diag.c\n delete mode 100644 examples/performance-thread/common/lthread_diag.h\n delete mode 100644 examples/performance-thread/common/lthread_diag_api.h\n delete mode 100644 examples/performance-thread/common/lthread_int.h\n delete mode 100644 examples/performance-thread/common/lthread_mutex.c\n delete mode 100644 examples/performance-thread/common/lthread_mutex.h\n delete mode 100644 examples/performance-thread/common/lthread_objcache.h\n delete mode 100644 examples/performance-thread/common/lthread_pool.h\n delete mode 100644 examples/performance-thread/common/lthread_queue.h\n delete mode 100644 examples/performance-thread/common/lthread_sched.c\n delete mode 100644 examples/performance-thread/common/lthread_sched.h\n delete mode 100644 examples/performance-thread/common/lthread_timer.h\n delete mode 100644 examples/performance-thread/common/lthread_tls.c\n delete mode 100644 examples/performance-thread/common/lthread_tls.h\n delete mode 100644 examples/performance-thread/l3fwd-thread/Makefile\n delete mode 100644 examples/performance-thread/l3fwd-thread/main.c\n delete mode 100644 examples/performance-thread/l3fwd-thread/meson.build\n delete mode 100755 examples/performance-thread/l3fwd-thread/test.sh\n delete mode 100644 examples/performance-thread/pthread_shim/Makefile\n delete mode 100644 examples/performance-thread/pthread_shim/main.c\n delete mode 100644 examples/performance-thread/pthread_shim/meson.build\n delete mode 100644 examples/performance-thread/pthread_shim/pthread_shim.c\n delete mode 100644 examples/performance-thread/pthread_shim/pthread_shim.h",
    "diff": "diff --git a/MAINTAINERS b/MAINTAINERS\nindex 18d9edaf88e5..648d4236be07 100644\n--- a/MAINTAINERS\n+++ b/MAINTAINERS\n@@ -1787,11 +1787,6 @@ Link status interrupt example\n F: examples/link_status_interrupt/\n F: doc/guides/sample_app_ug/link_status_intr.rst\n \n-L-threads - EXPERIMENTAL\n-M: John McNamara <john.mcnamara@intel.com>\n-F: examples/performance-thread/\n-F: doc/guides/sample_app_ug/performance_thread.rst\n-\n PTP client example\n M: Kirill Rybalchenko <kirill.rybalchenko@intel.com>\n F: examples/ptpclient/\ndiff --git a/doc/guides/rel_notes/release_22_03.rst b/doc/guides/rel_notes/release_22_03.rst\nindex 6d99d1eaa94a..567cb39e34c6 100644\n--- a/doc/guides/rel_notes/release_22_03.rst\n+++ b/doc/guides/rel_notes/release_22_03.rst\n@@ -68,6 +68,7 @@ Removed Items\n    Also, make sure to start the actual text at the margin.\n    =======================================================\n \n+* **Removed experimental performance thread example application.**\n \n API Changes\n -----------\ndiff --git a/doc/guides/sample_app_ug/img/performance_thread_1.svg b/doc/guides/sample_app_ug/img/performance_thread_1.svg\ndeleted file mode 100644\nindex db01d7c24885..000000000000\n--- a/doc/guides/sample_app_ug/img/performance_thread_1.svg\n+++ /dev/null\n@@ -1,799 +0,0 @@\n-<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"no\"?>\n-<!-- Created with Inkscape (http://www.inkscape.org/) -->\n-\n-<svg\n-   xmlns:dc=\"http://purl.org/dc/elements/1.1/\"\n-   xmlns:cc=\"http://creativecommons.org/ns#\"\n-   xmlns:rdf=\"http://www.w3.org/1999/02/22-rdf-syntax-ns#\"\n-   xmlns:svg=\"http://www.w3.org/2000/svg\"\n-   xmlns=\"http://www.w3.org/2000/svg\"\n-   xmlns:sodipodi=\"http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd\"\n-   xmlns:inkscape=\"http://www.inkscape.org/namespaces/inkscape\"\n-   width=\"449.57141\"\n-   height=\"187.34319\"\n-   viewBox=\"0 0 449.57143 187.34319\"\n-   id=\"svg2\"\n-   version=\"1.1\"\n-   inkscape:version=\"0.48.3.1 r9886\"\n-   sodipodi:docname=\"performance_thread_1.svg\"\n-   inkscape:export-filename=\"C:\\Users\\tkulasex\\Documents\\L-threads\\model-v2.png\"\n-   inkscape:export-xdpi=\"90\"\n-   inkscape:export-ydpi=\"90\">\n-  <defs\n-     id=\"defs4\">\n-    <marker\n-       inkscape:stockid=\"Arrow1Mend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker11487\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         id=\"path11489\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.4,0,0,-0.4,-4,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker11285\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         id=\"path11287\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker11107\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      
<path\n-         id=\"path11109\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:isstock=\"true\"\n-       style=\"overflow:visible\"\n-       id=\"marker10757\"\n-       refX=\"0\"\n-       refY=\"0\"\n-       orient=\"auto\"\n-       inkscape:stockid=\"Arrow1Lend\">\n-      <path\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         id=\"path10759\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker10431\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         id=\"path10433\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker10421\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         id=\"path10423\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker10273\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         id=\"path10275\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:isstock=\"true\"\n-       style=\"overflow:visible\"\n-       id=\"marker9983\"\n-       refX=\"0\"\n-       refY=\"0\"\n-       orient=\"auto\"\n-       inkscape:stockid=\"Arrow1Mend\">\n-      <path\n-         transform=\"matrix(-0.4,0,0,-0.4,-4,0)\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         id=\"path9985\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:isstock=\"true\"\n-       style=\"overflow:visible\"\n-       id=\"marker9853\"\n-       refX=\"0\"\n-       refY=\"0\"\n-       orient=\"auto\"\n-       inkscape:stockid=\"Arrow1Mend\">\n-      <path\n-         transform=\"matrix(-0.4,0,0,-0.4,-4,0)\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         id=\"path9855\"\n-         inkscape:connector-curvature=\"0\" />\n-    
</marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"Arrow1Lend-6\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         inkscape:connector-curvature=\"0\"\n-         id=\"path4248-0\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\" />\n-    </marker>\n-    <marker\n-       inkscape:isstock=\"true\"\n-       style=\"overflow:visible\"\n-       id=\"marker4992-4\"\n-       refX=\"0\"\n-       refY=\"0\"\n-       orient=\"auto\"\n-       inkscape:stockid=\"Arrow1Lend\">\n-      <path\n-         inkscape:connector-curvature=\"0\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         id=\"path4994-2\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Mend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"Arrow1Mend-1\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         inkscape:connector-curvature=\"0\"\n-         id=\"path4254-1\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.4,0,0,-0.4,-4,0)\" />\n-    </marker>\n-    <marker\n-       inkscape:isstock=\"true\"\n-       style=\"overflow:visible\"\n-       id=\"marker4992-4-0\"\n-       refX=\"0\"\n-       refY=\"0\"\n-       orient=\"auto\"\n-       inkscape:stockid=\"Arrow1Lend\">\n-      <path\n-         inkscape:connector-curvature=\"0\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         id=\"path4994-2-9\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"Arrow1Lend-6-8\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         inkscape:connector-curvature=\"0\"\n-         id=\"path4248-0-3\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\" />\n-    </marker>\n-    <marker\n-       inkscape:isstock=\"true\"\n-       style=\"overflow:visible\"\n-       id=\"marker5952-2\"\n-       refX=\"0\"\n-       refY=\"0\"\n-       orient=\"auto\"\n-       inkscape:stockid=\"Arrow1Mend\">\n-      <path\n-         inkscape:connector-curvature=\"0\"\n-         transform=\"matrix(-0.4,0,0,-0.4,-4,0)\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         id=\"path5954-4\" />\n-    </marker>\n-    <marker\n-       inkscape:isstock=\"true\"\n-       style=\"overflow:visible\"\n-       id=\"marker5952-2-1\"\n-       refX=\"0\"\n-       refY=\"0\"\n-       orient=\"auto\"\n-       inkscape:stockid=\"Arrow1Mend\">\n-      <path\n-         
inkscape:connector-curvature=\"0\"\n-         transform=\"matrix(-0.4,0,0,-0.4,-4,0)\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         id=\"path5954-4-2\" />\n-    </marker>\n-    <marker\n-       inkscape:isstock=\"true\"\n-       style=\"overflow:visible\"\n-       id=\"marker6881-5\"\n-       refX=\"0\"\n-       refY=\"0\"\n-       orient=\"auto\"\n-       inkscape:stockid=\"Arrow1Lend\">\n-      <path\n-         inkscape:connector-curvature=\"0\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         id=\"path6883-0\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker10431-3\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         id=\"path10433-4\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker10431-3-0\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         id=\"path10433-4-2\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker10431-3-0-4\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         id=\"path10433-4-2-4\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker10431-3-1\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         id=\"path10433-4-6\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:isstock=\"true\"\n-       style=\"overflow:visible\"\n-       id=\"marker10119-2\"\n-       refX=\"0\"\n-       refY=\"0\"\n-       orient=\"auto\"\n-       inkscape:stockid=\"Arrow1Mend\"\n-       inkscape:collect=\"always\">\n-      <path\n-         transform=\"matrix(-0.4,0,0,-0.4,-4,0)\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         
id=\"path10121-6\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Mend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker11487-0\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         id=\"path11489-6\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.4,0,0,-0.4,-4,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker10585\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\"\n-       inkscape:collect=\"always\">\n-      <path\n-         id=\"path10587\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker10273-9\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         id=\"path10275-3\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker10421-3\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         id=\"path10423-1\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker10431-2\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         id=\"path10433-5\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:isstock=\"true\"\n-       style=\"overflow:visible\"\n-       id=\"marker10119\"\n-       refX=\"0\"\n-       refY=\"0\"\n-       orient=\"auto\"\n-       inkscape:stockid=\"Arrow1Mend\"\n-       inkscape:collect=\"always\">\n-      <path\n-         transform=\"matrix(-0.4,0,0,-0.4,-4,0)\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         id=\"path10121\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:isstock=\"true\"\n-       style=\"overflow:visible\"\n-       id=\"marker10923\"\n-       
refX=\"0\"\n-       refY=\"0\"\n-       orient=\"auto\"\n-       inkscape:stockid=\"Arrow1Lend\"\n-       inkscape:collect=\"always\">\n-      <path\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         id=\"path10925\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:isstock=\"true\"\n-       style=\"overflow:visible\"\n-       id=\"marker10757-4\"\n-       refX=\"0\"\n-       refY=\"0\"\n-       orient=\"auto\"\n-       inkscape:stockid=\"Arrow1Lend\">\n-      <path\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         id=\"path10759-3\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-  </defs>\n-  <sodipodi:namedview\n-     id=\"base\"\n-     pagecolor=\"#ffffff\"\n-     bordercolor=\"#666666\"\n-     borderopacity=\"1.0\"\n-     inkscape:pageopacity=\"0.0\"\n-     inkscape:pageshadow=\"2\"\n-     inkscape:zoom=\"1.4\"\n-     inkscape:cx=\"138.23152\"\n-     inkscape:cy=\"-30.946457\"\n-     inkscape:document-units=\"px\"\n-     inkscape:current-layer=\"g4142-7\"\n-     showgrid=\"false\"\n-     fit-margin-top=\"0\"\n-     fit-margin-left=\"0\"\n-     fit-margin-right=\"0\"\n-     fit-margin-bottom=\"0\"\n-     inkscape:window-width=\"1920\"\n-     inkscape:window-height=\"1148\"\n-     inkscape:window-x=\"0\"\n-     inkscape:window-y=\"0\"\n-     inkscape:window-maximized=\"1\"\n-     width=\"744.09px\" />\n-  <metadata\n-     id=\"metadata7\">\n-    <rdf:RDF>\n-      <cc:Work\n-         rdf:about=\"\">\n-        <dc:format>image/svg+xml</dc:format>\n-        <dc:type\n-           rdf:resource=\"http://purl.org/dc/dcmitype/StillImage\" />\n-        <dc:title></dc:title>\n-      </cc:Work>\n-    </rdf:RDF>\n-  </metadata>\n-  <g\n-     inkscape:label=\"Layer 1\"\n-     inkscape:groupmode=\"layer\"\n-     id=\"layer1\"\n-     transform=\"translate(-40.428564,-78.569476)\">\n-    <g\n-       transform=\"translate(7.9156519e-7,106.78572)\"\n-       id=\"g4142-7\">\n-      <g\n-         transform=\"translate(162.14285,0.35714094)\"\n-         id=\"g4177-1\">\n-        <g\n-           transform=\"translate(-160.49999,-56.592401)\"\n-           id=\"g4142-55-1\">\n-          <rect\n-             y=\"43.076488\"\n-             x=\"39.285713\"\n-             height=\"65\"\n-             width=\"38.57143\"\n-             id=\"rect4136-65-2\"\n-             style=\"fill:none;stroke:#000000;stroke-width:1;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:4, 1;stroke-dashoffset:0\" />\n-          <text\n-             transform=\"matrix(0,-1,1,0,0,0)\"\n-             sodipodi:linespacing=\"125%\"\n-             id=\"text4138-4-8\"\n-             y=\"62.447506\"\n-             x=\"-95.515633\"\n-             style=\"font-size:15px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans\"\n-             xml:space=\"preserve\"><tspan\n-               y=\"62.447506\"\n-               x=\"-95.515633\"\n-               id=\"tspan4140-2-4\"\n-               sodipodi:role=\"line\">Port 1</tspan></text>\n-        </g>\n-        <rect\n-           y=\"93.269798\"\n-     
      x=\"-121.21429\"\n-           height=\"65\"\n-           width=\"38.57143\"\n-           id=\"rect4136-8-3-7\"\n-           style=\"fill:none;stroke:#000000;stroke-width:1;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:4.00000008, 1.00000002;stroke-dashoffset:0\" />\n-        <text\n-           transform=\"matrix(0,-1,1,0,0,0)\"\n-           sodipodi:linespacing=\"125%\"\n-           id=\"text4138-8-7-3\"\n-           y=\"-98.052498\"\n-           x=\"-145.70891\"\n-           style=\"font-size:15px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans\"\n-           xml:space=\"preserve\"><tspan\n-             y=\"-98.052498\"\n-             x=\"-145.70891\"\n-             id=\"tspan4140-5-8-3\"\n-             sodipodi:role=\"line\">Port 2</tspan></text>\n-        <g\n-           transform=\"translate(-158.35713,1.6218895)\"\n-           id=\"g4177-7-6\">\n-          <rect\n-             y=\"1.2907723\"\n-             x=\"132.85715\"\n-             height=\"46.42857\"\n-             width=\"94.285713\"\n-             id=\"rect4171-1-9\"\n-             style=\"fill:none;stroke:#000000;stroke-width:1;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:4, 1;stroke-dashoffset:0\" />\n-          <text\n-             sodipodi:linespacing=\"125%\"\n-             id=\"text4173-0-0\"\n-             y=\"29.147915\"\n-             x=\"146.42856\"\n-             style=\"font-size:15px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans\"\n-             xml:space=\"preserve\"><tspan\n-               y=\"29.147915\"\n-               x=\"146.42856\"\n-               id=\"tspan4175-6-1\"\n-               sodipodi:role=\"line\">rx-thread</tspan></text>\n-        </g>\n-        <text\n-           xml:space=\"preserve\"\n-           style=\"font-size:15px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans\"\n-           x=\"86.642853\"\n-           y=\"78.626976\"\n-           id=\"text5627-0-5\"\n-           sodipodi:linespacing=\"125%\"><tspan\n-             sodipodi:role=\"line\"\n-             id=\"tspan5629-8-6\"\n-             x=\"86.642853\"\n-             y=\"78.626976\">rings</tspan></text>\n-        <path\n-           style=\"fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#marker10757)\"\n-           d=\"m -83.357144,17.912679 56.42858,4.28571\"\n-           id=\"path4239-3-5\"\n-           inkscape:connector-curvature=\"0\"\n-           sodipodi:nodetypes=\"cc\" />\n-        <path\n-           style=\"fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#marker10923)\"\n-           d=\"m -82.808124,125.71821 53.57145,-9.28573\"\n-           id=\"path4239-0-3-6\"\n-           inkscape:connector-curvature=\"0\"\n-           sodipodi:nodetypes=\"cc\" />\n-        <path\n-           style=\"fill:none;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:4.00000008, 2.00000004;stroke-dashoffset:0;marker-end:url(#marker10119)\"\n-           d=\"m 68.78571,29.341249 62.5,28.21429\"\n-           id=\"path5457-1-2\"\n-        
   inkscape:connector-curvature=\"0\"\n-           sodipodi:nodetypes=\"cc\" />\n-        <g\n-           transform=\"translate(-161.92858,95.100119)\"\n-           id=\"g4177-7-6-7\">\n-          <rect\n-             y=\"1.2907723\"\n-             x=\"132.85715\"\n-             height=\"46.42857\"\n-             width=\"94.285713\"\n-             id=\"rect4171-1-9-8\"\n-             style=\"fill:none;stroke:#000000;stroke-width:1;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:4, 1;stroke-dashoffset:0\" />\n-          <text\n-             sodipodi:linespacing=\"125%\"\n-             id=\"text4173-0-0-6\"\n-             y=\"29.147915\"\n-             x=\"146.42856\"\n-             style=\"font-size:15px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans\"\n-             xml:space=\"preserve\"><tspan\n-               y=\"29.147915\"\n-               x=\"146.42856\"\n-               id=\"tspan4175-6-1-8\"\n-               sodipodi:role=\"line\">rx-thread</tspan></text>\n-        </g>\n-        <g\n-           transform=\"translate(249.5,-71.149881)\"\n-           id=\"g4142-5-1-2\">\n-          <rect\n-             y=\"43.076488\"\n-             x=\"39.285713\"\n-             height=\"65\"\n-             width=\"38.57143\"\n-             id=\"rect4136-6-5-3\"\n-             style=\"fill:none;stroke:#000000;stroke-width:1;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:4, 1;stroke-dashoffset:0\" />\n-          <text\n-             transform=\"matrix(0,-1,1,0,0,0)\"\n-             sodipodi:linespacing=\"125%\"\n-             id=\"text4138-3-3-5\"\n-             y=\"62.447506\"\n-             x=\"-95.515633\"\n-             style=\"font-size:15px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans\"\n-             xml:space=\"preserve\"><tspan\n-               y=\"62.447506\"\n-               x=\"-95.515633\"\n-               id=\"tspan4140-7-3-5\"\n-               sodipodi:role=\"line\">Port 1</tspan></text>\n-        </g>\n-        <rect\n-           y=\"74.426659\"\n-           x=\"288.07141\"\n-           height=\"65\"\n-           width=\"38.57143\"\n-           id=\"rect4136-8-4-7-7\"\n-           style=\"fill:none;stroke:#000000;stroke-width:1;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:4.00000008, 1.00000002;stroke-dashoffset:0\" />\n-        <text\n-           transform=\"matrix(0,-1,1,0,0,0)\"\n-           sodipodi:linespacing=\"125%\"\n-           id=\"text4138-8-2-5-8\"\n-           y=\"311.23318\"\n-           x=\"-126.86578\"\n-           style=\"font-size:15px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans\"\n-           xml:space=\"preserve\"><tspan\n-             y=\"311.23318\"\n-             x=\"-126.86578\"\n-             id=\"tspan4140-5-4-9-6\"\n-             sodipodi:role=\"line\">Port 2</tspan></text>\n-        <g\n-           id=\"g5905-4\"\n-           transform=\"translate(-1.2142913,-215.16774)\">\n-          <rect\n-             style=\"fill:none;stroke:#000000;stroke-width:1;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:4, 1;stroke-dashoffset:0\"\n-             
id=\"rect4171-9-0-0\"\n-             width=\"94.285713\"\n-             height=\"46.42857\"\n-             x=\"132.85715\"\n-             y=\"250.48721\" />\n-          <text\n-             xml:space=\"preserve\"\n-             style=\"font-size:15px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans\"\n-             x=\"146.42856\"\n-             y=\"278.34433\"\n-             id=\"text4173-9-2-6\"\n-             sodipodi:linespacing=\"125%\"><tspan\n-               sodipodi:role=\"line\"\n-               id=\"tspan4175-0-7-3\"\n-               x=\"146.42856\"\n-               y=\"278.34433\">tx-thread</tspan></text>\n-        </g>\n-        <path\n-           style=\"fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#marker10431)\"\n-           d=\"M 226.28573,52.462339 287.7143,2.8194795\"\n-           id=\"path4984-4-07\"\n-           inkscape:connector-curvature=\"0\"\n-           sodipodi:nodetypes=\"cc\" />\n-        <path\n-           style=\"fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#marker10421)\"\n-           d=\"m 227.09388,122.75669 60.35714,9.64286\"\n-           id=\"path4984-1-6-8\"\n-           inkscape:connector-curvature=\"0\"\n-           sodipodi:nodetypes=\"cc\" />\n-        <g\n-           id=\"g5905-6-0\"\n-           transform=\"translate(0.21427875,-156.1499)\">\n-          <rect\n-             style=\"fill:none;stroke:#000000;stroke-width:1;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:4, 1;stroke-dashoffset:0\"\n-             id=\"rect4171-9-0-9-1\"\n-             width=\"94.285713\"\n-             height=\"46.42857\"\n-             x=\"132.85715\"\n-             y=\"250.48721\" />\n-          <text\n-             xml:space=\"preserve\"\n-             style=\"font-size:15px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans\"\n-             x=\"146.42856\"\n-             y=\"278.34433\"\n-             id=\"text4173-9-2-0-0\"\n-             sodipodi:linespacing=\"125%\"><tspan\n-               sodipodi:role=\"line\"\n-               id=\"tspan4175-0-7-7-8\"\n-               x=\"146.42856\"\n-               y=\"278.34433\">tx-thread</tspan></text>\n-        </g>\n-        <path\n-           style=\"fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#marker10273)\"\n-           d=\"m 227.19687,67.801919 58.92857,41.071411\"\n-           id=\"path4984-4-0-4\"\n-           inkscape:connector-curvature=\"0\"\n-           sodipodi:nodetypes=\"cc\" />\n-        <path\n-           style=\"fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#marker10585)\"\n-           d=\"M 227.30382,110.24508 286.94667,24.530799\"\n-           id=\"path4984-4-0-0-7\"\n-           inkscape:connector-curvature=\"0\"\n-           sodipodi:nodetypes=\"cc\" />\n-        <path\n-           style=\"fill:none;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:4.00000008, 2.00000004;stroke-dashoffset:0;marker-end:url(#marker11487)\"\n-           d=\"m 66.28572,118.8909 65.71429,-2.14285\"\n-           
id=\"path5457-1-2-8\"\n-           inkscape:connector-curvature=\"0\"\n-           sodipodi:nodetypes=\"cc\" />\n-        <g\n-           id=\"g5905-4-6\"\n-           transform=\"translate(-3.5000113,-277.43173)\">\n-          <rect\n-             style=\"fill:none;stroke:#000000;stroke-width:1;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:4, 1;stroke-dashoffset:0\"\n-             id=\"rect4171-9-0-0-7\"\n-             width=\"94.285713\"\n-             height=\"46.42857\"\n-             x=\"132.85715\"\n-             y=\"250.48721\" />\n-          <text\n-             xml:space=\"preserve\"\n-             style=\"font-size:15px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans\"\n-             x=\"146.42856\"\n-             y=\"278.34433\"\n-             id=\"text4173-9-2-6-8\"\n-             sodipodi:linespacing=\"125%\"><tspan\n-               sodipodi:role=\"line\"\n-               id=\"tspan4175-0-7-3-5\"\n-               x=\"146.42856\"\n-               y=\"278.34433\">tx-thread</tspan></text>\n-        </g>\n-        <path\n-           style=\"fill:none;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:4.00000008, 2.00000004;stroke-dashoffset:0;marker-end:url(#marker10119-2)\"\n-           d=\"M 68.35772,16.118199 127.64343,-6.3818105\"\n-           id=\"path5457-1-2-2\"\n-           inkscape:connector-curvature=\"0\"\n-           sodipodi:nodetypes=\"cc\" />\n-        <path\n-           style=\"fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#marker10431-3)\"\n-           d=\"m 224.52079,-13.531251 64.28571,2.14286\"\n-           id=\"path4984-4-07-4\"\n-           inkscape:connector-curvature=\"0\"\n-           sodipodi:nodetypes=\"cc\" />\n-        <path\n-           style=\"fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#marker10431-3-0)\"\n-           d=\"M 224.17025,2.1505695 287.02739,87.864849\"\n-           id=\"path4984-4-07-4-7\"\n-           inkscape:connector-curvature=\"0\"\n-           sodipodi:nodetypes=\"cc\" />\n-      </g>\n-    </g>\n-  </g>\n-</svg>\ndiff --git a/doc/guides/sample_app_ug/img/performance_thread_2.svg b/doc/guides/sample_app_ug/img/performance_thread_2.svg\ndeleted file mode 100644\nindex 48cf83383000..000000000000\n--- a/doc/guides/sample_app_ug/img/performance_thread_2.svg\n+++ /dev/null\n@@ -1,865 +0,0 @@\n-<?xml version=\"1.0\" encoding=\"UTF-8\" standalone=\"no\"?>\n-<!-- Created with Inkscape (http://www.inkscape.org/) -->\n-\n-<svg\n-   xmlns:dc=\"http://purl.org/dc/elements/1.1/\"\n-   xmlns:cc=\"http://creativecommons.org/ns#\"\n-   xmlns:rdf=\"http://www.w3.org/1999/02/22-rdf-syntax-ns#\"\n-   xmlns:svg=\"http://www.w3.org/2000/svg\"\n-   xmlns=\"http://www.w3.org/2000/svg\"\n-   xmlns:sodipodi=\"http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd\"\n-   xmlns:inkscape=\"http://www.inkscape.org/namespaces/inkscape\"\n-   width=\"449.57141\"\n-   height=\"187.34319\"\n-   viewBox=\"0 0 449.57143 187.34319\"\n-   id=\"svg2\"\n-   version=\"1.1\"\n-   inkscape:version=\"0.48.3.1 r9886\"\n-   sodipodi:docname=\"performance_thread_2.svg\"\n-   inkscape:export-filename=\"C:\\Users\\tkulasex\\Documents\\L-threads\\model-v2.png\"\n-   inkscape:export-xdpi=\"90\"\n-   
inkscape:export-ydpi=\"90\">\n-  <defs\n-     id=\"defs4\">\n-    <marker\n-       inkscape:stockid=\"Arrow1Mend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker11487\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         id=\"path11489\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.4,0,0,-0.4,-4,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker11285\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         id=\"path11287\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker11107\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         id=\"path11109\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:isstock=\"true\"\n-       style=\"overflow:visible\"\n-       id=\"marker10757\"\n-       refX=\"0\"\n-       refY=\"0\"\n-       orient=\"auto\"\n-       inkscape:stockid=\"Arrow1Lend\">\n-      <path\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         id=\"path10759\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker10431\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         id=\"path10433\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker10421\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         id=\"path10423\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker10273\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-        
 id=\"path10275\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:isstock=\"true\"\n-       style=\"overflow:visible\"\n-       id=\"marker9983\"\n-       refX=\"0\"\n-       refY=\"0\"\n-       orient=\"auto\"\n-       inkscape:stockid=\"Arrow1Mend\">\n-      <path\n-         transform=\"matrix(-0.4,0,0,-0.4,-4,0)\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         id=\"path9985\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:isstock=\"true\"\n-       style=\"overflow:visible\"\n-       id=\"marker9853\"\n-       refX=\"0\"\n-       refY=\"0\"\n-       orient=\"auto\"\n-       inkscape:stockid=\"Arrow1Mend\">\n-      <path\n-         transform=\"matrix(-0.4,0,0,-0.4,-4,0)\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         id=\"path9855\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"Arrow1Lend-6\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         inkscape:connector-curvature=\"0\"\n-         id=\"path4248-0\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\" />\n-    </marker>\n-    <marker\n-       inkscape:isstock=\"true\"\n-       style=\"overflow:visible\"\n-       id=\"marker4992-4\"\n-       refX=\"0\"\n-       refY=\"0\"\n-       orient=\"auto\"\n-       inkscape:stockid=\"Arrow1Lend\">\n-      <path\n-         inkscape:connector-curvature=\"0\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         id=\"path4994-2\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Mend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"Arrow1Mend-1\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         inkscape:connector-curvature=\"0\"\n-         id=\"path4254-1\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.4,0,0,-0.4,-4,0)\" />\n-    </marker>\n-    <marker\n-       inkscape:isstock=\"true\"\n-       style=\"overflow:visible\"\n-       id=\"marker4992-4-0\"\n-       refX=\"0\"\n-       refY=\"0\"\n-       orient=\"auto\"\n-       inkscape:stockid=\"Arrow1Lend\">\n-      <path\n-         inkscape:connector-curvature=\"0\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         id=\"path4994-2-9\" />\n-    
</marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"Arrow1Lend-6-8\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         inkscape:connector-curvature=\"0\"\n-         id=\"path4248-0-3\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\" />\n-    </marker>\n-    <marker\n-       inkscape:isstock=\"true\"\n-       style=\"overflow:visible\"\n-       id=\"marker5952-2\"\n-       refX=\"0\"\n-       refY=\"0\"\n-       orient=\"auto\"\n-       inkscape:stockid=\"Arrow1Mend\">\n-      <path\n-         inkscape:connector-curvature=\"0\"\n-         transform=\"matrix(-0.4,0,0,-0.4,-4,0)\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         id=\"path5954-4\" />\n-    </marker>\n-    <marker\n-       inkscape:isstock=\"true\"\n-       style=\"overflow:visible\"\n-       id=\"marker5952-2-1\"\n-       refX=\"0\"\n-       refY=\"0\"\n-       orient=\"auto\"\n-       inkscape:stockid=\"Arrow1Mend\">\n-      <path\n-         inkscape:connector-curvature=\"0\"\n-         transform=\"matrix(-0.4,0,0,-0.4,-4,0)\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         id=\"path5954-4-2\" />\n-    </marker>\n-    <marker\n-       inkscape:isstock=\"true\"\n-       style=\"overflow:visible\"\n-       id=\"marker6881-5\"\n-       refX=\"0\"\n-       refY=\"0\"\n-       orient=\"auto\"\n-       inkscape:stockid=\"Arrow1Lend\">\n-      <path\n-         inkscape:connector-curvature=\"0\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         id=\"path6883-0\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker10431-3\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         id=\"path10433-4\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker10431-3-0\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         id=\"path10433-4-2\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker10431-3-0-4\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         
id=\"path10433-4-2-4\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker10431-3-1\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         id=\"path10433-4-6\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:isstock=\"true\"\n-       style=\"overflow:visible\"\n-       id=\"marker10119-2\"\n-       refX=\"0\"\n-       refY=\"0\"\n-       orient=\"auto\"\n-       inkscape:stockid=\"Arrow1Mend\"\n-       inkscape:collect=\"always\">\n-      <path\n-         transform=\"matrix(-0.4,0,0,-0.4,-4,0)\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         id=\"path10121-6\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Mend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker11487-0\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         id=\"path11489-6\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.4,0,0,-0.4,-4,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker10585\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\"\n-       inkscape:collect=\"always\">\n-      <path\n-         id=\"path10587\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker10273-9\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         id=\"path10275-3\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker10421-3\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         id=\"path10423-1\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         
transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:stockid=\"Arrow1Lend\"\n-       orient=\"auto\"\n-       refY=\"0\"\n-       refX=\"0\"\n-       id=\"marker10431-2\"\n-       style=\"overflow:visible\"\n-       inkscape:isstock=\"true\">\n-      <path\n-         id=\"path10433-5\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:isstock=\"true\"\n-       style=\"overflow:visible\"\n-       id=\"marker10119\"\n-       refX=\"0\"\n-       refY=\"0\"\n-       orient=\"auto\"\n-       inkscape:stockid=\"Arrow1Mend\"\n-       inkscape:collect=\"always\">\n-      <path\n-         transform=\"matrix(-0.4,0,0,-0.4,-4,0)\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         id=\"path10121\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:isstock=\"true\"\n-       style=\"overflow:visible\"\n-       id=\"marker10923\"\n-       refX=\"0\"\n-       refY=\"0\"\n-       orient=\"auto\"\n-       inkscape:stockid=\"Arrow1Lend\"\n-       inkscape:collect=\"always\">\n-      <path\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         id=\"path10925\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-    <marker\n-       inkscape:isstock=\"true\"\n-       style=\"overflow:visible\"\n-       id=\"marker10757-4\"\n-       refX=\"0\"\n-       refY=\"0\"\n-       orient=\"auto\"\n-       inkscape:stockid=\"Arrow1Lend\">\n-      <path\n-         transform=\"matrix(-0.8,0,0,-0.8,-10,0)\"\n-         style=\"fill:#000000;fill-opacity:1;fill-rule:evenodd;stroke:#000000;stroke-width:1pt;stroke-opacity:1\"\n-         d=\"M 0,0 5,-5 -12.5,0 5,5 0,0 z\"\n-         id=\"path10759-3\"\n-         inkscape:connector-curvature=\"0\" />\n-    </marker>\n-  </defs>\n-  <sodipodi:namedview\n-     id=\"base\"\n-     pagecolor=\"#ffffff\"\n-     bordercolor=\"#666666\"\n-     borderopacity=\"1.0\"\n-     inkscape:pageopacity=\"0.0\"\n-     inkscape:pageshadow=\"2\"\n-     inkscape:zoom=\"1.4\"\n-     inkscape:cx=\"138.23152\"\n-     inkscape:cy=\"-30.946457\"\n-     inkscape:document-units=\"px\"\n-     inkscape:current-layer=\"g4177-1\"\n-     showgrid=\"false\"\n-     fit-margin-top=\"0\"\n-     fit-margin-left=\"0\"\n-     fit-margin-right=\"0\"\n-     fit-margin-bottom=\"0\"\n-     inkscape:window-width=\"1920\"\n-     inkscape:window-height=\"1148\"\n-     inkscape:window-x=\"0\"\n-     inkscape:window-y=\"0\"\n-     inkscape:window-maximized=\"1\"\n-     width=\"744.09px\" />\n-  <metadata\n-     id=\"metadata7\">\n-    <rdf:RDF>\n-      <cc:Work\n-         rdf:about=\"\">\n-        <dc:format>image/svg+xml</dc:format>\n-        <dc:type\n-           rdf:resource=\"http://purl.org/dc/dcmitype/StillImage\" />\n-        <dc:title></dc:title>\n-      </cc:Work>\n-    </rdf:RDF>\n-  </metadata>\n-  <g\n-     inkscape:label=\"Layer 1\"\n-     inkscape:groupmode=\"layer\"\n-     id=\"layer1\"\n-     
transform=\"translate(-40.428564,-78.569476)\">\n-    <g\n-       transform=\"translate(7.9156519e-7,106.78572)\"\n-       id=\"g4142-7\">\n-      <g\n-         transform=\"translate(162.14285,0.35714094)\"\n-         id=\"g4177-1\">\n-        <g\n-           transform=\"translate(-160.49999,-56.592401)\"\n-           id=\"g4142-55-1\">\n-          <rect\n-             y=\"43.076488\"\n-             x=\"39.285713\"\n-             height=\"65\"\n-             width=\"38.57143\"\n-             id=\"rect4136-65-2\"\n-             style=\"fill:none;stroke:#000000;stroke-width:1;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:4, 1;stroke-dashoffset:0\" />\n-          <text\n-             transform=\"matrix(0,-1,1,0,0,0)\"\n-             sodipodi:linespacing=\"125%\"\n-             id=\"text4138-4-8\"\n-             y=\"62.447506\"\n-             x=\"-95.515633\"\n-             style=\"font-size:15px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans\"\n-             xml:space=\"preserve\"><tspan\n-               y=\"62.447506\"\n-               x=\"-95.515633\"\n-               id=\"tspan4140-2-4\"\n-               sodipodi:role=\"line\">Port 1</tspan></text>\n-        </g>\n-        <rect\n-           y=\"93.269798\"\n-           x=\"-121.21429\"\n-           height=\"65\"\n-           width=\"38.57143\"\n-           id=\"rect4136-8-3-7\"\n-           style=\"fill:none;stroke:#000000;stroke-width:1;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:4.00000008, 1.00000002;stroke-dashoffset:0\" />\n-        <text\n-           transform=\"matrix(0,-1,1,0,0,0)\"\n-           sodipodi:linespacing=\"125%\"\n-           id=\"text4138-8-7-3\"\n-           y=\"-98.052498\"\n-           x=\"-145.70891\"\n-           style=\"font-size:15px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans\"\n-           xml:space=\"preserve\"><tspan\n-             y=\"-98.052498\"\n-             x=\"-145.70891\"\n-             id=\"tspan4140-5-8-3\"\n-             sodipodi:role=\"line\">Port 2</tspan></text>\n-        <g\n-           transform=\"translate(-158.35713,1.6218895)\"\n-           id=\"g4177-7-6\">\n-          <rect\n-             y=\"1.2907723\"\n-             x=\"132.85715\"\n-             height=\"46.42857\"\n-             width=\"94.285713\"\n-             id=\"rect4171-1-9\"\n-             style=\"fill:none;stroke:#000000;stroke-width:1;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:4, 1;stroke-dashoffset:0\" />\n-          <text\n-             sodipodi:linespacing=\"125%\"\n-             id=\"text4173-0-0\"\n-             y=\"29.147915\"\n-             x=\"146.42856\"\n-             style=\"font-size:15px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans\"\n-             xml:space=\"preserve\"><tspan\n-               y=\"29.147915\"\n-               x=\"146.42856\"\n-               id=\"tspan4175-6-1\"\n-               sodipodi:role=\"line\">rx-thread</tspan></text>\n-        </g>\n-        <text\n-           xml:space=\"preserve\"\n-           
style=\"font-size:15px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans\"\n-           x=\"86.642853\"\n-           y=\"78.626976\"\n-           id=\"text5627-0-5\"\n-           sodipodi:linespacing=\"125%\"><tspan\n-             sodipodi:role=\"line\"\n-             id=\"tspan5629-8-6\"\n-             x=\"86.642853\"\n-             y=\"78.626976\">rings</tspan></text>\n-        <path\n-           style=\"fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#marker10757)\"\n-           d=\"m -83.357144,17.912679 56.42858,4.28571\"\n-           id=\"path4239-3-5\"\n-           inkscape:connector-curvature=\"0\"\n-           sodipodi:nodetypes=\"cc\" />\n-        <path\n-           style=\"fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#marker10923)\"\n-           d=\"m -82.808124,125.71821 53.57145,-9.28573\"\n-           id=\"path4239-0-3-6\"\n-           inkscape:connector-curvature=\"0\"\n-           sodipodi:nodetypes=\"cc\" />\n-        <path\n-           style=\"fill:none;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:4.00000008, 2.00000004;stroke-dashoffset:0;marker-end:url(#marker10119)\"\n-           d=\"m 68.78571,29.341249 62.5,28.21429\"\n-           id=\"path5457-1-2\"\n-           inkscape:connector-curvature=\"0\"\n-           sodipodi:nodetypes=\"cc\" />\n-        <g\n-           transform=\"translate(-161.92858,95.100119)\"\n-           id=\"g4177-7-6-7\">\n-          <rect\n-             y=\"1.2907723\"\n-             x=\"132.85715\"\n-             height=\"46.42857\"\n-             width=\"94.285713\"\n-             id=\"rect4171-1-9-8\"\n-             style=\"fill:none;stroke:#000000;stroke-width:1;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:4, 1;stroke-dashoffset:0\" />\n-          <text\n-             sodipodi:linespacing=\"125%\"\n-             id=\"text4173-0-0-6\"\n-             y=\"29.147915\"\n-             x=\"146.42856\"\n-             style=\"font-size:15px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans\"\n-             xml:space=\"preserve\"><tspan\n-               y=\"29.147915\"\n-               x=\"146.42856\"\n-               id=\"tspan4175-6-1-8\"\n-               sodipodi:role=\"line\">rx-thread</tspan></text>\n-        </g>\n-        <g\n-           transform=\"translate(249.5,-71.149881)\"\n-           id=\"g4142-5-1-2\">\n-          <rect\n-             y=\"43.076488\"\n-             x=\"39.285713\"\n-             height=\"65\"\n-             width=\"38.57143\"\n-             id=\"rect4136-6-5-3\"\n-             style=\"fill:none;stroke:#000000;stroke-width:1;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:4, 1;stroke-dashoffset:0\" />\n-          <text\n-             transform=\"matrix(0,-1,1,0,0,0)\"\n-             sodipodi:linespacing=\"125%\"\n-             id=\"text4138-3-3-5\"\n-             y=\"62.447506\"\n-             x=\"-95.515633\"\n-             style=\"font-size:15px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans\"\n-             
xml:space=\"preserve\"><tspan\n-               y=\"62.447506\"\n-               x=\"-95.515633\"\n-               id=\"tspan4140-7-3-5\"\n-               sodipodi:role=\"line\">Port 1</tspan></text>\n-        </g>\n-        <rect\n-           y=\"74.426659\"\n-           x=\"288.07141\"\n-           height=\"65\"\n-           width=\"38.57143\"\n-           id=\"rect4136-8-4-7-7\"\n-           style=\"fill:none;stroke:#000000;stroke-width:1;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:4.00000008, 1.00000002;stroke-dashoffset:0\" />\n-        <text\n-           transform=\"matrix(0,-1,1,0,0,0)\"\n-           sodipodi:linespacing=\"125%\"\n-           id=\"text4138-8-2-5-8\"\n-           y=\"311.23318\"\n-           x=\"-126.86578\"\n-           style=\"font-size:15px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans\"\n-           xml:space=\"preserve\"><tspan\n-             y=\"311.23318\"\n-             x=\"-126.86578\"\n-             id=\"tspan4140-5-4-9-6\"\n-             sodipodi:role=\"line\">Port 2</tspan></text>\n-        <path\n-           style=\"fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#marker10431)\"\n-           d=\"M 226.28573,52.462339 287.7143,2.8194795\"\n-           id=\"path4984-4-07\"\n-           inkscape:connector-curvature=\"0\"\n-           sodipodi:nodetypes=\"cc\" />\n-        <path\n-           style=\"fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#marker10421)\"\n-           d=\"m 227.09388,122.75669 60.35714,9.64286\"\n-           id=\"path4984-1-6-8\"\n-           inkscape:connector-curvature=\"0\"\n-           sodipodi:nodetypes=\"cc\" />\n-        <path\n-           style=\"fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#marker10273)\"\n-           d=\"m 227.19687,67.801919 58.92857,41.071411\"\n-           id=\"path4984-4-0-4\"\n-           inkscape:connector-curvature=\"0\"\n-           sodipodi:nodetypes=\"cc\" />\n-        <path\n-           style=\"fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#marker10585)\"\n-           d=\"M 228.01811,113.10222 287.66096,27.387942\"\n-           id=\"path4984-4-0-0-7\"\n-           inkscape:connector-curvature=\"0\"\n-           sodipodi:nodetypes=\"cc\" />\n-        <path\n-           style=\"fill:none;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:4.00000008, 2.00000004;stroke-dashoffset:0;marker-end:url(#marker11487)\"\n-           d=\"m 66.28572,118.8909 65.71429,-2.14285\"\n-           id=\"path5457-1-2-8\"\n-           inkscape:connector-curvature=\"0\"\n-           sodipodi:nodetypes=\"cc\" />\n-        <g\n-           id=\"g5905-4-6\"\n-           transform=\"matrix(1,0,0,0.48279909,-0.64286832,-142.16523)\">\n-          <rect\n-             style=\"fill:none;stroke:#000000;stroke-width:1;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:4, 1;stroke-dashoffset:0\"\n-             id=\"rect4171-9-0-0-7\"\n-             width=\"94.285713\"\n-             height=\"46.42857\"\n-             x=\"132.85715\"\n-             y=\"250.48721\" />\n-          <text\n-      
       xml:space=\"preserve\"\n-             style=\"font-size:15px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans\"\n-             x=\"146.42856\"\n-             y=\"278.34433\"\n-             id=\"text4173-9-2-6-8\"\n-             sodipodi:linespacing=\"125%\"><tspan\n-               sodipodi:role=\"line\"\n-               id=\"tspan4175-0-7-3-5\"\n-               x=\"146.42856\"\n-               y=\"278.34433\">tx-thread</tspan></text>\n-        </g>\n-        <path\n-           style=\"fill:none;stroke:#000000;stroke-width:2;stroke-linecap:butt;stroke-linejoin:miter;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:4.00000008, 2.00000004;stroke-dashoffset:0;marker-end:url(#marker10119-2)\"\n-           d=\"M 68.35772,16.118199 127.64343,-6.3818105\"\n-           id=\"path5457-1-2-2\"\n-           inkscape:connector-curvature=\"0\"\n-           sodipodi:nodetypes=\"cc\" />\n-        <path\n-           style=\"fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#marker10431-3)\"\n-           d=\"m 224.52079,-13.531251 64.28571,2.14286\"\n-           id=\"path4984-4-07-4\"\n-           inkscape:connector-curvature=\"0\"\n-           sodipodi:nodetypes=\"cc\" />\n-        <path\n-           style=\"fill:none;stroke:#000000;stroke-width:1px;stroke-linecap:butt;stroke-linejoin:miter;stroke-opacity:1;marker-end:url(#marker10431-3-0)\"\n-           d=\"M 224.17025,2.1505695 287.02739,87.864849\"\n-           id=\"path4984-4-07-4-7\"\n-           inkscape:connector-curvature=\"0\"\n-           sodipodi:nodetypes=\"cc\" />\n-        <g\n-           id=\"g5905-4-6-5\"\n-           transform=\"matrix(1,0,0,0.45244466,-0.99999222,-110.73112)\">\n-          <rect\n-             style=\"fill:none;stroke:#000000;stroke-width:1;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:4, 1;stroke-dashoffset:0\"\n-             id=\"rect4171-9-0-0-7-6\"\n-             width=\"94.285713\"\n-             height=\"46.42857\"\n-             x=\"132.85715\"\n-             y=\"250.48721\" />\n-          <text\n-             xml:space=\"preserve\"\n-             style=\"font-size:15px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans\"\n-             x=\"146.42856\"\n-             y=\"278.34433\"\n-             id=\"text4173-9-2-6-8-7\"\n-             sodipodi:linespacing=\"125%\"><tspan\n-               sodipodi:role=\"line\"\n-               id=\"tspan4175-0-7-3-5-0\"\n-               x=\"146.42856\"\n-               y=\"278.34433\">tx-drain</tspan></text>\n-        </g>\n-        <g\n-           id=\"g5905-4-6-2\"\n-           transform=\"matrix(1,0,0,0.48279909,1.3158755,-80.292458)\">\n-          <rect\n-             style=\"fill:none;stroke:#000000;stroke-width:1;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:4, 1;stroke-dashoffset:0\"\n-             id=\"rect4171-9-0-0-7-8\"\n-             width=\"94.285713\"\n-             height=\"46.42857\"\n-             x=\"132.85715\"\n-             y=\"250.48721\" />\n-          <text\n-             xml:space=\"preserve\"\n-             style=\"font-size:15px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans\"\n-        
     x=\"146.42856\"\n-             y=\"278.34433\"\n-             id=\"text4173-9-2-6-8-0\"\n-             sodipodi:linespacing=\"125%\"><tspan\n-               sodipodi:role=\"line\"\n-               id=\"tspan4175-0-7-3-5-6\"\n-               x=\"146.42856\"\n-               y=\"278.34433\">tx-thread</tspan></text>\n-        </g>\n-        <g\n-           id=\"g5905-4-6-5-9\"\n-           transform=\"matrix(1,0,0,0.45244466,0.95875552,-48.858358)\">\n-          <rect\n-             style=\"fill:none;stroke:#000000;stroke-width:1;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:4, 1;stroke-dashoffset:0\"\n-             id=\"rect4171-9-0-0-7-6-6\"\n-             width=\"94.285713\"\n-             height=\"46.42857\"\n-             x=\"132.85715\"\n-             y=\"250.48721\" />\n-          <text\n-             xml:space=\"preserve\"\n-             style=\"font-size:15px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans\"\n-             x=\"146.42856\"\n-             y=\"278.34433\"\n-             id=\"text4173-9-2-6-8-7-4\"\n-             sodipodi:linespacing=\"125%\"><tspan\n-               sodipodi:role=\"line\"\n-               id=\"tspan4175-0-7-3-5-0-0\"\n-               x=\"146.42856\"\n-               y=\"278.34433\">tx-drain</tspan></text>\n-        </g>\n-        <g\n-           id=\"g5905-4-6-6\"\n-           transform=\"matrix(1,0,0,0.48279909,1.315876,-24.578174)\">\n-          <rect\n-             style=\"fill:none;stroke:#000000;stroke-width:1;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:4, 1;stroke-dashoffset:0\"\n-             id=\"rect4171-9-0-0-7-3\"\n-             width=\"94.285713\"\n-             height=\"46.42857\"\n-             x=\"132.85715\"\n-             y=\"250.48721\" />\n-          <text\n-             xml:space=\"preserve\"\n-             style=\"font-size:15px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans\"\n-             x=\"146.42856\"\n-             y=\"278.34433\"\n-             id=\"text4173-9-2-6-8-78\"\n-             sodipodi:linespacing=\"125%\"><tspan\n-               sodipodi:role=\"line\"\n-               id=\"tspan4175-0-7-3-5-9\"\n-               x=\"146.42856\"\n-               y=\"278.34433\">tx-thread</tspan></text>\n-        </g>\n-        <g\n-           id=\"g5905-4-6-5-0\"\n-           transform=\"matrix(1,0,0,0.45244466,0.958756,6.8559263)\">\n-          <rect\n-             style=\"fill:none;stroke:#000000;stroke-width:1;stroke-linecap:square;stroke-linejoin:round;stroke-miterlimit:4;stroke-opacity:1;stroke-dasharray:4, 1;stroke-dashoffset:0\"\n-             id=\"rect4171-9-0-0-7-6-0\"\n-             width=\"94.285713\"\n-             height=\"46.42857\"\n-             x=\"132.85715\"\n-             y=\"250.48721\" />\n-          <text\n-             xml:space=\"preserve\"\n-             style=\"font-size:15px;font-style:normal;font-weight:normal;line-height:125%;letter-spacing:0px;word-spacing:0px;fill:#000000;fill-opacity:1;stroke:none;font-family:Sans\"\n-             x=\"146.42856\"\n-             y=\"278.34433\"\n-             id=\"text4173-9-2-6-8-7-0\"\n-             sodipodi:linespacing=\"125%\"><tspan\n-               sodipodi:role=\"line\"\n-               id=\"tspan4175-0-7-3-5-0-3\"\n-               x=\"146.42856\"\n-          
     y=\"278.34433\">tx-drain</tspan></text>\n-        </g>\n-      </g>\n-    </g>\n-  </g>\n-</svg>\ndiff --git a/doc/guides/sample_app_ug/index.rst b/doc/guides/sample_app_ug/index.rst\nindex 8835dd03acd0..853e338778ba 100644\n--- a/doc/guides/sample_app_ug/index.rst\n+++ b/doc/guides/sample_app_ug/index.rst\n@@ -53,7 +53,6 @@ Sample Applications User Guides\n     dist_app\n     vm_power_management\n     ptpclient\n-    performance_thread\n     fips_validation\n     ipsec_secgw\n     bbdev_app\ndiff --git a/doc/guides/sample_app_ug/performance_thread.rst b/doc/guides/sample_app_ug/performance_thread.rst\ndeleted file mode 100644\nindex 7d1bf6eaae8c..000000000000\n--- a/doc/guides/sample_app_ug/performance_thread.rst\n+++ /dev/null\n@@ -1,1201 +0,0 @@\n-..  SPDX-License-Identifier: BSD-3-Clause\n-    Copyright(c) 2015 Intel Corporation.\n-\n-Performance Thread Sample Application\n-=====================================\n-\n-The performance thread sample application is a derivative of the standard L3\n-forwarding application that demonstrates different threading models.\n-\n-Overview\n---------\n-For a general description of the L3 forwarding applications capabilities\n-please refer to the documentation of the standard application in\n-:doc:`l3_forward`.\n-\n-The performance thread sample application differs from the standard L3\n-forwarding example in that it divides the TX and RX processing between\n-different threads, and makes it possible to assign individual threads to\n-different cores.\n-\n-Three threading models are considered:\n-\n-#. When there is one EAL thread per physical core.\n-#. When there are multiple EAL threads per physical core.\n-#. When there are multiple lightweight threads per EAL thread.\n-\n-Since DPDK release 2.0 it is possible to launch applications using the\n-``--lcores`` EAL parameter, specifying cpu-sets for a physical core. With the\n-performance thread sample application its is now also possible to assign\n-individual RX and TX functions to different cores.\n-\n-As an alternative to dividing the L3 forwarding work between different EAL\n-threads the performance thread sample introduces the possibility to run the\n-application threads as lightweight threads (L-threads) within one or\n-more EAL threads.\n-\n-In order to facilitate this threading model the example includes a primitive\n-cooperative scheduler (L-thread) subsystem. 
More details of the L-thread\n-subsystem can be found in :ref:`lthread_subsystem`.\n-\n-**Note:** Whilst theoretically possible it is not anticipated that multiple\n-L-thread schedulers would be run on the same physical core, this mode of\n-operation should not be expected to yield useful performance and is considered\n-invalid.\n-\n-Compiling the Application\n--------------------------\n-\n-To compile the sample application see :doc:`compiling`.\n-\n-The application is located in the `performance-thread/l3fwd-thread` sub-directory.\n-\n-Running the Application\n------------------------\n-\n-The application has a number of command line options::\n-\n-    ./<build_dir>/examples/dpdk-l3fwd-thread [EAL options] --\n-        -p PORTMASK [-P]\n-        --rx(port,queue,lcore,thread)[,(port,queue,lcore,thread)]\n-        --tx(lcore,thread)[,(lcore,thread)]\n-        [--max-pkt-len PKTLEN]  [--no-numa]\n-        [--hash-entry-num] [--ipv6] [--no-lthreads] [--stat-lcore lcore]\n-        [--parse-ptype]\n-\n-Where:\n-\n-* ``-p PORTMASK``: Hexadecimal bitmask of ports to configure.\n-\n-* ``-P``: optional, sets all ports to promiscuous mode so that packets are\n-  accepted regardless of the packet's Ethernet MAC destination address.\n-  Without this option, only packets with the Ethernet MAC destination address\n-  set to the Ethernet address of the port are accepted.\n-\n-* ``--rx (port,queue,lcore,thread)[,(port,queue,lcore,thread)]``: the list of\n-  NIC RX ports and queues handled by the RX lcores and threads. The parameters\n-  are explained below.\n-\n-* ``--tx (lcore,thread)[,(lcore,thread)]``: the list of TX threads identifying\n-  the lcore the thread runs on, and the id of RX thread with which it is\n-  associated. The parameters are explained below.\n-\n-* ``--max-pkt-len``: optional, maximum packet length in decimal (64-9600).\n-\n-* ``--no-numa``: optional, disables numa awareness.\n-\n-* ``--hash-entry-num``: optional, specifies the hash entry number in hex to be\n-  setup.\n-\n-* ``--ipv6``: optional, set it if running ipv6 packets.\n-\n-* ``--no-lthreads``: optional, disables l-thread model and uses EAL threading\n-  model. See below.\n-\n-* ``--stat-lcore``: optional, run CPU load stats collector on the specified\n-  lcore.\n-\n-* ``--parse-ptype:`` optional, set to use software to analyze packet type.\n-  Without this option, hardware will check the packet type.\n-\n-The parameters of the ``--rx`` and ``--tx`` options are:\n-\n-* ``--rx`` parameters\n-\n-   .. _table_l3fwd_rx_parameters:\n-\n-   +--------+------------------------------------------------------+\n-   | port   | RX port                                              |\n-   +--------+------------------------------------------------------+\n-   | queue  | RX queue that will be read on the specified RX port  |\n-   +--------+------------------------------------------------------+\n-   | lcore  | Core to use for the thread                           |\n-   +--------+------------------------------------------------------+\n-   | thread | Thread id (continuously from 0 to N)                 |\n-   +--------+------------------------------------------------------+\n-\n-\n-* ``--tx`` parameters\n-\n-   .. 
_table_l3fwd_tx_parameters:\n-\n-   +--------+------------------------------------------------------+\n-   | lcore  | Core to use for L3 route match and transmit          |\n-   +--------+------------------------------------------------------+\n-   | thread | Id of RX thread to be associated with this TX thread |\n-   +--------+------------------------------------------------------+\n-\n-The ``l3fwd-thread`` application allows you to start packet processing in two\n-threading models: L-Threads (default) and EAL Threads (when the\n-``--no-lthreads`` parameter is used). For consistency all parameters are used\n-in the same way for both models.\n-\n-\n-Running with L-threads\n-~~~~~~~~~~~~~~~~~~~~~~\n-\n-When the L-thread model is used (default option), lcore and thread parameters\n-in ``--rx/--tx`` are used to affinitize threads to the selected scheduler.\n-\n-For example, the following places every l-thread on different lcores::\n-\n-   dpdk-l3fwd-thread -l 0-7 -n 2 -- -P -p 3 \\\n-                --rx=\"(0,0,0,0)(1,0,1,1)\" \\\n-                --tx=\"(2,0)(3,1)\"\n-\n-The following places RX l-threads on lcore 0 and TX l-threads on lcore 1 and 2\n-and so on::\n-\n-   dpdk-l3fwd-thread -l 0-7 -n 2 -- -P -p 3 \\\n-                --rx=\"(0,0,0,0)(1,0,0,1)\" \\\n-                --tx=\"(1,0)(2,1)\"\n-\n-\n-Running with EAL threads\n-~~~~~~~~~~~~~~~~~~~~~~~~\n-\n-When the ``--no-lthreads`` parameter is used, the L-threading model is turned\n-off and EAL threads are used for all processing. EAL threads are enumerated in\n-the same way as L-threads, but the ``--lcores`` EAL parameter is used to\n-affinitize threads to the selected cpu-set (scheduler). Thus it is possible to\n-place every RX and TX thread on different lcores.\n-\n-For example, the following places every EAL thread on different lcores::\n-\n-   dpdk-l3fwd-thread -l 0-7 -n 2 -- -P -p 3 \\\n-                --rx=\"(0,0,0,0)(1,0,1,1)\" \\\n-                --tx=\"(2,0)(3,1)\" \\\n-                --no-lthreads\n-\n-\n-To affinitize two or more EAL threads to one cpu-set, the EAL ``--lcores``\n-parameter is used.\n-\n-The following places RX EAL threads on lcore 0 and TX EAL threads on lcore 1\n-and 2 and so on::\n-\n-   dpdk-l3fwd-thread -l 0-7 -n 2 --lcores=\"(0,1)@0,(2,3)@1\" -- -P -p 3 \\\n-                --rx=\"(0,0,0,0)(1,0,1,1)\" \\\n-                --tx=\"(2,0)(3,1)\" \\\n-                --no-lthreads\n-\n-\n-Examples\n-~~~~~~~~\n-\n-For selected scenarios the command line configuration of the application for L-threads\n-and its corresponding EAL threads command line can be realized as follows:\n-\n-a) Start every thread on different scheduler (1:1)::\n-\n-      dpdk-l3fwd-thread -l 0-7 -n 2 -- -P -p 3 \\\n-                   --rx=\"(0,0,0,0)(1,0,1,1)\" \\\n-                   --tx=\"(2,0)(3,1)\"\n-\n-   EAL thread equivalent::\n-\n-      dpdk-l3fwd-thread -l 0-7 -n 2 -- -P -p 3 \\\n-                   --rx=\"(0,0,0,0)(1,0,1,1)\" \\\n-                   --tx=\"(2,0)(3,1)\" \\\n-                   --no-lthreads\n-\n-b) Start all threads on one core (N:1).\n-\n-   Start 4 L-threads on lcore 0::\n-\n-      dpdk-l3fwd-thread -l 0-7 -n 2 -- -P -p 3 \\\n-                   --rx=\"(0,0,0,0)(1,0,0,1)\" \\\n-                   --tx=\"(0,0)(0,1)\"\n-\n-   Start 4 EAL threads on cpu-set 0::\n-\n-      dpdk-l3fwd-thread -l 0-7 -n 2 --lcores=\"(0-3)@0\" -- -P -p 3 \\\n-                   --rx=\"(0,0,0,0)(1,0,0,1)\" \\\n-                   --tx=\"(2,0)(3,1)\" \\\n-                   --no-lthreads\n-\n-c) Start threads on different cores 
(N:M).\n-\n-   Start 2 L-threads for RX on lcore 0, and 2 L-threads for TX on lcore 1::\n-\n-      dpdk-l3fwd-thread -l 0-7 -n 2 -- -P -p 3 \\\n-                   --rx=\"(0,0,0,0)(1,0,0,1)\" \\\n-                   --tx=\"(1,0)(1,1)\"\n-\n-   Start 2 EAL threads for RX on cpu-set 0, and 2 EAL threads for TX on\n-   cpu-set 1::\n-\n-      dpdk-l3fwd-thread -l 0-7 -n 2 --lcores=\"(0-1)@0,(2-3)@1\" -- -P -p 3 \\\n-                   --rx=\"(0,0,0,0)(1,0,1,1)\" \\\n-                   --tx=\"(2,0)(3,1)\" \\\n-                   --no-lthreads\n-\n-Explanation\n------------\n-\n-For the most part the sample application differs little from the standard L3\n-forwarding application, and readers are advised to familiarize themselves with\n-the material covered in the :doc:`l3_forward` documentation before proceeding.\n-\n-The following explanation is focused on the way threading is handled in the\n-performance thread example.\n-\n-\n-Mode of operation with EAL threads\n-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n-\n-The performance thread sample application has split the RX and TX functionality\n-into two different threads, and the RX and TX threads are\n-interconnected via software rings. With respect to these rings, the RX threads\n-are producers and the TX threads are consumers.\n-\n-On initialization the TX and RX threads are started according to the command\n-line parameters.\n-\n-The RX threads poll the network interface queues and post received packets to a\n-TX thread via a corresponding software ring.\n-\n-The TX threads poll software rings, perform the L3 forwarding hash/LPM match,\n-and assemble packet bursts before performing burst transmit on the network\n-interface.\n-\n-As with the standard L3 forward application, burst draining of residual packets\n-is performed periodically with the period calculated from elapsed time using\n-the timestamp counter.\n-\n-The diagram below illustrates a case with two RX threads and three TX threads.\n-\n-.. _figure_performance_thread_1:\n-\n-.. figure:: img/performance_thread_1.*\n-\n-\n-Mode of operation with L-threads\n-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n-\n-Like the EAL thread configuration, the application has split the RX and TX\n-functionality into different threads, and the pairs of RX and TX threads are\n-interconnected via software rings.\n-\n-On initialization an L-thread scheduler is started on every EAL thread. On all\n-but the main EAL thread only a dummy L-thread is initially started.\n-The L-thread started on the main EAL thread then spawns other L-threads on\n-different L-thread schedulers according to the command line parameters.\n-\n-The RX threads poll the network interface queues and post received packets\n-to a TX thread via the corresponding software ring.\n-\n-The ring interface is augmented by means of an L-thread condition variable that\n-enables the TX thread to be suspended when the TX ring is empty. The RX thread\n-signals the condition whenever it posts to the TX ring, causing the TX thread\n-to be resumed.\n-\n-Additionally the TX L-thread spawns a worker L-thread to take care of\n-polling the software rings, whilst it handles burst draining of the transmit\n-buffer.\n-\n-The worker threads poll the software rings, perform L3 route lookup and\n-assemble packet bursts. 
If the TX ring is empty the worker thread suspends\n-itself by waiting on the condition variable associated with the ring.\n-\n-Burst draining of residual packets, less than the burst size, is performed by\n-the TX thread which sleeps (using an L-thread sleep function) and resumes\n-periodically to flush the TX buffer.\n-\n-This design means that L-threads that have no work can yield the CPU to other\n-L-threads and avoid having to constantly poll the software rings.\n-\n-The diagram below illustrates a case with two RX threads and three TX functions\n-(each comprising a thread that processes forwarding and a thread that\n-periodically drains the output buffer of residual packets).\n-\n-.. _figure_performance_thread_2:\n-\n-.. figure:: img/performance_thread_2.*\n-\n-\n-CPU load statistics\n-~~~~~~~~~~~~~~~~~~~\n-\n-It is possible to display statistics showing estimated CPU load on each core.\n-The statistics indicate the percentage of CPU time spent: processing\n-received packets (forwarding), polling queues/rings (waiting for work),\n-and doing any other processing (context switch and other overhead).\n-\n-When enabled, statistics are gathered by having the application threads set and\n-clear flags when they enter and exit pertinent code sections. The flags are\n-then sampled in real time by a statistics collector thread running on another\n-core. This thread displays the data in real time on the console.\n-\n-This feature is enabled by designating a statistics collector core, using the\n-``--stat-lcore`` parameter.\n-\n-\n-.. _lthread_subsystem:\n-\n-The L-thread subsystem\n-----------------------\n-\n-The L-thread subsystem resides in the examples/performance-thread/common\n-directory and is built and linked automatically when building the\n-``l3fwd-thread`` example.\n-\n-The subsystem provides a simple cooperative scheduler to enable arbitrary\n-functions to run as cooperative threads within a single EAL thread.\n-The subsystem provides a pthread-like API that is intended to assist in\n-reuse of legacy code written for POSIX pthreads.\n-\n-The following sections provide some detail on the features, constraints,\n-performance and porting considerations when using L-threads.\n-\n-\n-.. _comparison_between_lthreads_and_pthreads:\n-\n-Comparison between L-threads and POSIX pthreads\n-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n-\n-The fundamental difference between the L-thread and pthread models is the\n-way in which threads are scheduled. The simplest way to think about this is to\n-consider the case of a processor with a single CPU. To run multiple threads\n-on a single CPU, the scheduler must frequently switch between the threads,\n-in order that each thread is able to make timely progress.\n-This is the basis of any multitasking operating system.\n-\n-This section explores the differences between the pthread model and the\n-L-thread model as implemented in the provided L-thread subsystem. If needed, a\n-theoretical discussion of preemptive vs cooperative multi-threading can be\n-found in any good text on operating system design.\n-\n-\n-Scheduling and context switching\n-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n-\n-The POSIX pthread library provides an application programming interface to\n-create and synchronize threads. Scheduling policy is determined by the host OS,\n-and may be configurable. 
The OS may use sophisticated rules to determine which\n-thread should be run next, threads may suspend themselves or make other threads\n-ready, and the scheduler may employ a time slice giving each thread a maximum\n-time quantum after which it will be preempted in favor of another thread that\n-is ready to run. To complicate matters further, threads may be assigned\n-different scheduling priorities.\n-\n-By contrast the L-thread subsystem is considerably simpler. Logically the\n-L-thread scheduler performs the same multiplexing function for L-threads\n-within a single pthread as the OS scheduler does for pthreads within an\n-application process. The L-thread scheduler is simply the main loop of a\n-pthread, and in so far as the host OS is concerned it is a regular pthread\n-just like any other. The host OS is oblivious to the existence of and\n-not at all involved in the scheduling of L-threads.\n-\n-The other and most significant difference between the two models is that\n-L-threads are scheduled cooperatively. L-threads cannot preempt each\n-other, nor can the L-thread scheduler preempt a running L-thread (i.e.\n-there is no time slicing). The consequence is that programs implemented with\n-L-threads must possess frequent rescheduling points, meaning that they must\n-explicitly and of their own volition return to the scheduler at frequent\n-intervals, in order to allow other L-threads an opportunity to proceed.\n-\n-In both models switching between threads requires that the current CPU\n-context is saved and a new context (belonging to the next thread ready to run)\n-is restored. With pthreads this context switching is handled transparently\n-and the set of CPU registers that must be preserved between context switches\n-is the same as for an interrupt handler.\n-\n-An L-thread context switch is achieved by the thread itself making a function\n-call to the L-thread scheduler. Thus it is only necessary to preserve the\n-callee-save registers. The caller is responsible for saving and restoring any\n-other registers it is using before a function call, and for restoring them on\n-return, and this is handled by the compiler. For ``X86_64`` on both Linux and BSD the\n-System V calling convention is used; this defines registers RBX, RSP, RBP, and\n-R12-R15 as callee-save registers (for more detailed discussion a good reference\n-is `X86 Calling Conventions <https://en.wikipedia.org/wiki/X86_calling_conventions>`_).\n-\n-Taking advantage of this, and due to the absence of preemption, an L-thread\n-context switch is achieved with less than 20 load/store instructions.\n-\n-The scheduling policy for L-threads is fixed: there is no prioritization of\n-L-threads, all L-threads are equal, and scheduling is based on a FIFO\n-ready queue.\n-\n-An L-thread is a struct containing the CPU context of the thread\n-(saved on context switch) and other useful items. The ready queue contains\n-pointers to threads that are ready to run. The L-thread scheduler is a simple\n-loop that polls the ready queue, reads from it the next thread ready to run,\n-which it resumes by saving the current context (the current position in the\n-scheduler loop) and restoring the context of the next thread from its thread\n-struct. Thus an L-thread is always resumed at the last place it yielded.\n-\n-A well behaved L-thread will call the context switch regularly (at least once\n-in its main loop), thus returning to the scheduler's own main loop. 
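\n-\n-For illustration, a minimal sketch of such a well behaved main loop is shown\n-below (the ``handle_burst()`` helper is hypothetical and the ring and lthread\n-calls are shown in simplified form)::\n-\n-   static void worker(void *arg)\n-   {\n-       struct rte_ring *ring = arg;\n-       void *pkts[32];\n-       unsigned int n;\n-\n-       while (1) {\n-           /* do a bounded amount of work ... */\n-           n = rte_ring_dequeue_burst(ring, pkts, 32, NULL);\n-           if (n > 0)\n-               handle_burst(pkts, n);\n-           /* ... then voluntarily return to the L-thread scheduler */\n-           lthread_yield();\n-       }\n-   }\n-\n-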
Yielding\n-inserts the current thread at the back of the ready queue, and the process of\n-servicing the ready queue is repeated; thus the system runs by flipping back\n-and forth between the L-threads and the scheduler loop.\n-\n-In the case of pthreads, the preemptive scheduling, time slicing, and support\n-for thread prioritization mean that progress is normally possible for any\n-thread that is ready to run. This comes at the price of a relatively heavier\n-context switch and scheduling overhead.\n-\n-With L-threads the progress of any particular thread is determined by the\n-frequency of rescheduling opportunities in the other L-threads. This means that\n-an errant L-thread monopolizing the CPU might cause scheduling of other threads\n-to be stalled. Due to the lower cost of context switching, however, voluntary\n-rescheduling to ensure progress of other threads, if managed sensibly, is not\n-a prohibitive overhead, and overall performance can exceed that of an\n-application using pthreads.\n-\n-\n-Mutual exclusion\n-^^^^^^^^^^^^^^^^\n-\n-With pthreads preemption means that threads that share data must observe\n-some form of mutual exclusion protocol.\n-\n-The fact that L-threads cannot preempt each other means that in many cases\n-mutual exclusion devices can be completely avoided.\n-\n-Locking to protect shared data can be a significant bottleneck in\n-multi-threaded applications, so a carefully designed cooperatively scheduled\n-program can enjoy significant performance advantages.\n-\n-So far we have considered only the simplistic case of a single core CPU;\n-when multiple CPUs are considered, things are somewhat more complex.\n-\n-First of all, it is inevitable that there must be multiple L-thread schedulers,\n-one running on each EAL thread. So long as these schedulers remain isolated\n-from each other the above assertions about the potential advantages of\n-cooperative scheduling hold true.\n-\n-A configuration with isolated cooperative schedulers is less flexible than the\n-pthread model where threads can be affinitized to run on any CPU. With isolated\n-schedulers, scaling of applications to utilize fewer or more CPUs according to\n-system demand is very difficult to achieve.\n-\n-The L-thread subsystem makes it possible for L-threads to migrate between\n-schedulers running on different CPUs. Needless to say, if the migration means\n-that threads that share data end up running on different CPUs then this will\n-introduce the need for some kind of mutual exclusion system.\n-\n-Of course ``rte_ring`` software rings can always be used to interconnect\n-threads running on different cores; however, to protect other kinds of shared\n-data structures, lock free constructs or else explicit locking will be\n-required. 
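\n-\n-Where such locking is unavoidable, the mutexes provided by the L-thread\n-subsystem itself (described below) can be used. A minimal, indicative sketch\n-(the mutex is assumed to have been created with ``lthread_mutex_init()``\n-during start up, and the shared counter is purely illustrative)::\n-\n-   /* assumed to have been created with lthread_mutex_init() at start up */\n-   static struct lthread_mutex *stats_mutex;\n-   static uint64_t shared_packet_count;\n-\n-   static void count_packets(unsigned int n)\n-   {\n-       lthread_mutex_lock(stats_mutex);\n-       shared_packet_count += n;\n-       lthread_mutex_unlock(stats_mutex);\n-   }\n-\n-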
This is a consideration for the application design.\n-\n-In support of this extended functionality, the L-thread subsystem implements\n-thread safe mutexes and condition variables.\n-\n-The cost of affinitizing and of condition variable signaling is significantly\n-lower than the equivalent pthread operations, and so applications using these\n-features will see a performance benefit.\n-\n-\n-Thread local storage\n-^^^^^^^^^^^^^^^^^^^^\n-\n-As with applications written for pthreads, an application written for L-threads\n-can take advantage of thread local storage, in this case local to an L-thread.\n-An application may save and retrieve a single pointer to application data in\n-the L-thread struct.\n-\n-For legacy and backward compatibility reasons two alternative methods are also\n-offered: the first is modeled directly on the pthread get/set specific APIs,\n-and the second approach is modeled on the ``RTE_PER_LCORE`` macros, whereby\n-``PER_LTHREAD`` macros are introduced. In both cases the storage is local to\n-the L-thread.\n-\n-\n-.. _constraints_and_performance_implications:\n-\n-Constraints and performance implications when using L-threads\n-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n-\n-\n-.. _API_compatibility:\n-\n-API compatibility\n-^^^^^^^^^^^^^^^^^\n-\n-The L-thread subsystem provides a set of functions that are logically equivalent\n-to the corresponding functions offered by the POSIX pthread library; however, not\n-all pthread functions have a corresponding L-thread equivalent, and not all\n-features available to pthreads are implemented for L-threads.\n-\n-The pthread library offers considerable flexibility via programmable attributes\n-that can be associated with threads, mutexes, and condition variables.\n-\n-By contrast the L-thread subsystem has fixed functionality: the scheduler policy\n-cannot be varied, and L-threads cannot be prioritized. There are no variable\n-attributes associated with any L-thread objects. L-threads, mutexes and\n-condition variables all have fixed functionality. (Note: reserved parameters\n-are included in the APIs to facilitate possible future support for attributes).\n-\n-The table below lists the pthread and equivalent L-thread APIs with notes on\n-differences and/or constraints. Where there is no L-thread entry in the table,\n-then the L-thread subsystem provides no equivalent function.\n-\n-.. _table_lthread_pthread:\n-\n-.. 
table:: Pthread and equivalent L-thread APIs.\n-\n-   +----------------------------+------------------------+-------------------+\n-   | **Pthread function**       | **L-thread function**  | **Notes**         |\n-   +============================+========================+===================+\n-   | pthread_barrier_destroy    |                        |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_barrier_init       |                        |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_barrier_wait       |                        |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_cond_broadcast     | lthread_cond_broadcast | See note 1        |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_cond_destroy       | lthread_cond_destroy   |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_cond_init          | lthread_cond_init      |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_cond_signal        | lthread_cond_signal    | See note 1        |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_cond_timedwait     |                        |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_cond_wait          | lthread_cond_wait      | See note 5        |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_create             | lthread_create         | See notes 2, 3    |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_detach             | lthread_detach         | See note 4        |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_equal              |                        |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_exit               | lthread_exit           |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_getspecific        | lthread_getspecific    |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_getcpuclockid      |                        |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_join               | lthread_join           |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_key_create         | lthread_key_create     |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_key_delete         | lthread_key_delete     |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_mutex_destroy      | lthread_mutex_destroy  |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_mutex_init         | lthread_mutex_init     |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | 
pthread_mutex_lock         | lthread_mutex_lock     | See note 6        |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_mutex_trylock      | lthread_mutex_trylock  | See note 6        |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_mutex_timedlock    |                        |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_mutex_unlock       | lthread_mutex_unlock   |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_once               |                        |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_rwlock_destroy     |                        |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_rwlock_init        |                        |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_rwlock_rdlock      |                        |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_rwlock_timedrdlock |                        |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_rwlock_timedwrlock |                        |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_rwlock_tryrdlock   |                        |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_rwlock_trywrlock   |                        |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_rwlock_unlock      |                        |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_rwlock_wrlock      |                        |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_self               | lthread_current        |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_setspecific        | lthread_setspecific    |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_spin_init          |                        | See note 10       |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_spin_destroy       |                        | See note 10       |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_spin_lock          |                        | See note 10       |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_spin_trylock       |                        | See note 10       |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_spin_unlock        |                        | See note 10       |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_cancel             | lthread_cancel         |                   |\n-   
+----------------------------+------------------------+-------------------+\n-   | pthread_setcancelstate     |                        |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_setcanceltype      |                        |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_testcancel         |                        |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_getschedparam      |                        |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_setschedparam      |                        |                   |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_yield              | lthread_yield          | See note 7        |\n-   +----------------------------+------------------------+-------------------+\n-   | pthread_setaffinity_np     | lthread_set_affinity   | See notes 2, 3, 8 |\n-   +----------------------------+------------------------+-------------------+\n-   |                            | lthread_sleep          | See note 9        |\n-   +----------------------------+------------------------+-------------------+\n-   |                            | lthread_sleep_clks     | See note 9        |\n-   +----------------------------+------------------------+-------------------+\n-\n-\n-**Note 1**:\n-\n-Neither lthread signal nor broadcast may be called concurrently by L-threads\n-running on different schedulers, although multiple L-threads running in the\n-same scheduler may freely perform signal or broadcast operations. L-threads\n-running on the same or different schedulers may always safely wait on a\n-condition variable.\n-\n-\n-**Note 2**:\n-\n-Pthread attributes may be used to affinitize a pthread with a cpu-set. The\n-L-thread subsystem does not support a cpu-set. An L-thread may be affinitized\n-only with a single CPU at any time.\n-\n-\n-**Note 3**:\n-\n-If an L-thread is intended to run on a different NUMA node than the node that\n-creates the thread then, when calling ``lthread_create()``, it is advantageous\n-to specify the destination core as a parameter of ``lthread_create()``. See\n-:ref:`memory_allocation_and_NUMA_awareness` for details.\n-\n-\n-**Note 4**:\n-\n-An L-thread can only detach itself, and cannot detach other L-threads.\n-\n-\n-**Note 5**:\n-\n-A wait operation on a pthread condition variable is always associated with and\n-protected by a mutex which must be owned by the thread at the time it invokes\n-``pthread_cond_wait()``. By contrast L-thread condition variables are thread safe\n-(for waiters) and do not use an associated mutex. Multiple L-threads (including\n-L-threads running on other schedulers) can safely wait on an L-thread condition\n-variable. As a consequence the performance of an L-thread condition variable\n-is typically an order of magnitude faster than its pthread counterpart.\n-\n-\n-**Note 6**:\n-\n-Recursive locking is not supported with L-threads; attempts to take a lock\n-recursively will be detected and rejected.\n-\n-\n-**Note 7**:\n-\n-``lthread_yield()`` will save the current context, insert the current thread\n-at the back of the ready queue, and resume the next ready thread. Yielding\n-increases the ready queue backlog; see :ref:`ready_queue_backlog` for more details\n-about the implications of this.\n-\n-\n-N.B. 
The context switch time as measured from immediately before the call to\n-``lthread_yield()`` to the point at which the next ready thread is resumed,\n-can be an order of magnitude faster than the same measurement for\n-``pthread_yield()``.\n-\n-\n-**Note 8**:\n-\n-``lthread_set_affinity()`` is similar to a yield apart from the fact that the\n-yielding thread is inserted into a peer ready queue of another scheduler.\n-The peer ready queue is actually a separate thread-safe queue, which means that\n-threads appearing in the peer ready queue can jump any backlog in the local\n-ready queue on the destination scheduler.\n-\n-The context switch time as measured from the time just before the call to\n-``lthread_set_affinity()`` to just after the same thread is resumed on the new\n-scheduler can be orders of magnitude faster than the same measurement for\n-``pthread_setaffinity_np()``.\n-\n-\n-**Note 9**:\n-\n-Although there is no ``pthread_sleep()`` function, ``lthread_sleep()`` and\n-``lthread_sleep_clks()`` can be used wherever ``sleep()``, ``usleep()`` or\n-``nanosleep()`` might ordinarily be used. The L-thread sleep functions suspend\n-the current thread, start an ``rte_timer`` and resume the thread when the\n-timer matures. The ``rte_timer_manage()`` entry point is called on every pass\n-of the scheduler loop. This means that the worst case jitter on timer expiry\n-is determined by the longest period between context switches of any running\n-L-threads.\n-\n-In a synthetic test with many threads sleeping and resuming, the measured\n-jitter is typically orders of magnitude lower than the same measurement made\n-for ``nanosleep()``.\n-\n-\n-**Note 10**:\n-\n-Spin locks are not provided because they are problematic in a cooperative\n-environment; see :ref:`porting_locks_and_spinlocks` for a more detailed\n-discussion of how to avoid spin locks.\n-\n-\n-.. _Thread_local_storage_performance:\n-\n-Thread local storage\n-^^^^^^^^^^^^^^^^^^^^\n-\n-Of the three L-thread local storage options the simplest and most efficient is\n-storing a single application data pointer in the L-thread struct.\n-\n-The ``PER_LTHREAD`` macros involve a run time computation to obtain the address\n-of the variable being saved/retrieved and also require that the accesses are\n-dereferenced via a pointer. This means that code which has used the\n-``RTE_PER_LCORE`` macros might need some slight adjustment when being ported to\n-L-threads (see :ref:`porting_thread_local_storage` for hints about porting\n-code that makes use of thread local storage).\n-\n-The get/set specific APIs are consistent with their pthread counterparts both\n-in use and in performance.
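\n-\n-As an illustration of the simplest option, the following minimal sketch stores\n-a single application context pointer in the current L-thread with\n-``lthread_set_data()`` and retrieves it later with ``lthread_get_data()``. The\n-``app_ctx`` structure and ``worker()`` function are hypothetical names used\n-only for illustration:\n-\n-.. code-block:: c\n-\n-    #include \"lthread_api.h\"\n-\n-    struct app_ctx {\n-        uint64_t packets;    /* hypothetical per-thread counter */\n-    };\n-\n-    static void *worker(void *arg)\n-    {\n-        /* associate the context passed at creation with this L-thread */\n-        lthread_set_data(arg);\n-\n-        /* any code running in this L-thread can later retrieve it */\n-        struct app_ctx *ctx = lthread_get_data();\n-        ctx->packets++;\n-        return NULL;\n-    }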
\n-\n-\n-.. _memory_allocation_and_NUMA_awareness:\n-\n-Memory allocation and NUMA awareness\n-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n-\n-All memory allocation is from DPDK huge pages, and is NUMA aware. Each\n-scheduler maintains its own caches of objects: lthreads, their stacks, TLS,\n-mutexes and condition variables. These caches are implemented as unbounded\n-lock-free MPSC queues. When objects are created they are always allocated from\n-the caches on the local core (current EAL thread).\n-\n-If an L-thread has been affinitized to a different scheduler, then it can\n-always safely free resources to the caches from which they originated (because\n-the caches are MPSC queues).\n-\n-If the L-thread has been affinitized to a different NUMA node then the memory\n-resources associated with it may incur longer access latency.\n-\n-The commonly used pattern of setting affinity on entry to a thread after it\n-has started means that memory allocation for both the stack and TLS will have\n-been made from caches on the NUMA node on which the thread's creator is\n-running. This has the side effect that access latency will be sub-optimal\n-after affinitizing.\n-\n-This side effect can be mitigated to some extent (although not completely) by\n-specifying the destination CPU as a parameter of ``lthread_create()``; this\n-causes the L-thread's stack and TLS to be allocated when it is first scheduled\n-on the destination scheduler. If the destination is on another NUMA node this\n-results in a more optimal memory allocation.\n-\n-Note that the lthread struct itself remains allocated from memory on the\n-creating node; this is unavoidable because an L-thread is known everywhere by\n-the address of this struct.\n-\n-\n-.. _object_cache_sizing:\n-\n-Object cache sizing\n-^^^^^^^^^^^^^^^^^^^\n-\n-The per-lcore object caches pre-allocate objects in bulk whenever a request to\n-allocate an object finds a cache empty. By default 100 objects are\n-pre-allocated; this is defined by ``LTHREAD_PREALLOC`` in the public API\n-header file lthread_api.h. This means that the caches constantly grow to meet\n-system demand.\n-\n-In the present implementation there is no mechanism to reduce the cache sizes\n-if system demand reduces. Thus the caches will remain at their maximum extent\n-indefinitely.\n-\n-A consequence of the bulk pre-allocation of objects is that every 100 (default\n-value) additional new object create operations results in a call to\n-``rte_malloc()``. For objects such as L-threads, which trigger the allocation\n-of even more objects (i.e. their stacks and TLS), this can cause outliers in\n-scheduling performance.\n-\n-If this is a problem, the simplest mitigation strategy is to dimension the\n-system by setting the bulk object pre-allocation size to some large number\n-that you do not expect to be exceeded. This means the caches will be populated\n-once only, the very first time a thread is created.\n-\n-\n-.. _Ready_queue_backlog:\n-\n-Ready queue backlog\n-^^^^^^^^^^^^^^^^^^^\n-\n-One of the more subtle performance considerations is managing the ready queue\n-backlog. The fewer threads that are waiting in the ready queue, the faster any\n-particular thread will get serviced.\n-\n-In a naive L-thread application with N L-threads simply looping and yielding,\n-this backlog will always be equal to the number of L-threads; thus the cost of\n-a yield to a particular L-thread will be N times the context switch time.\n-\n-This side effect can be mitigated by arranging for threads to be suspended and\n-to wait to be resumed, rather than polling for work by constantly yielding.\n-Blocking on a mutex or condition variable, or, even more obviously, having a\n-thread sleep if it has a low-frequency workload, are all mechanisms by which a\n-thread can be excluded from the ready queue until it really does need to be\n-run. This can have a significant positive impact on performance.
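\n-\n-For example, rather than spinning on ``lthread_yield()``, a low-frequency\n-worker can sleep between polls so that it stays out of the ready queue until\n-its timer matures. In the minimal sketch below, ``work_available()`` and\n-``handle_work()`` are hypothetical application hooks, not part of the L-thread\n-API:\n-\n-.. code-block:: c\n-\n-    static void *low_frequency_worker(void *arg)\n-    {\n-        /* sleep between polls instead of yielding in a tight loop;\n-         * the argument to lthread_sleep() is in nanoseconds (100 us here)\n-         */\n-        while (!work_available())\n-            lthread_sleep(100000);\n-\n-        handle_work();\n-        return arg;\n-    }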
\n-\n-\n-.. _Initialization_and_shutdown_dependencies:\n-\n-Initialization, shutdown and dependencies\n-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n-\n-The L-thread subsystem depends on DPDK for huge page allocation and on the\n-``rte_timer`` subsystem. The DPDK EAL initialization and\n-``rte_timer_subsystem_init()`` **MUST** be completed before the L-thread\n-subsystem can be used.\n-\n-Thereafter initialization of the L-thread subsystem is largely transparent to\n-the application. Constructor functions ensure that global variables are\n-properly initialized. Other than global variables, each scheduler is\n-initialized independently the first time that an L-thread is created by a\n-particular EAL thread.\n-\n-If the schedulers are to be run as isolated and independent schedulers, with\n-no intention that L-threads running on different schedulers will migrate\n-between schedulers or synchronize with L-threads running on other schedulers,\n-then initialization consists simply of creating an L-thread and then running\n-the L-thread scheduler.\n-\n-If there will be interaction between L-threads running on different\n-schedulers, then it is important that the starting of schedulers on different\n-EAL threads is synchronized.\n-\n-To achieve this an additional initialization step is necessary; this is simply\n-to set the number of schedulers by calling the API function\n-``lthread_num_schedulers_set(n)``, where ``n`` is the number of EAL threads\n-that will run L-thread schedulers. Setting the number of schedulers to a\n-number greater than 0 will cause all schedulers to wait until the others have\n-started before beginning to schedule L-threads.\n-\n-The L-thread scheduler is started by calling the function ``lthread_run()``,\n-which should be called from the EAL thread and thus becomes the main loop of\n-that EAL thread.\n-\n-The function ``lthread_run()`` will not return until all threads running on\n-the scheduler have exited and the scheduler has been explicitly stopped by\n-calling ``lthread_scheduler_shutdown(lcore)`` or\n-``lthread_scheduler_shutdown_all()``.\n-\n-All these functions do is tell the scheduler that it can exit when there are\n-no longer any running L-threads; neither function forces any running L-thread\n-to terminate. Any desired application shutdown behavior must be designed and\n-built into the application to ensure that L-threads complete in a timely\n-manner.\n-\n-**Important Note:** It is assumed when the scheduler exits that the\n-application is terminating for good; the scheduler does not free resources\n-before exiting, and running the scheduler a subsequent time will result in\n-undefined behavior.
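\n-\n-The start-up and shutdown sequence described above can be illustrated with the\n-following minimal sketch. Error handling is omitted, and ``initial_lthread()``\n-is a hypothetical application function that would create the application's\n-L-threads and eventually request shutdown:\n-\n-.. code-block:: c\n-\n-    #include <rte_eal.h>\n-    #include <rte_launch.h>\n-    #include <rte_lcore.h>\n-    #include <rte_timer.h>\n-\n-    #include \"lthread_api.h\"\n-\n-    static void *initial_lthread(void *arg)\n-    {\n-        /* ... create and join the application's L-threads here ... */\n-        lthread_scheduler_shutdown_all();  /* let every lthread_run() return */\n-        return arg;\n-    }\n-\n-    static int sched_main(void *arg)\n-    {\n-        (void)arg;\n-        if (rte_lcore_id() == rte_get_main_lcore()) {\n-            struct lthread *lt;\n-\n-            lthread_create(&lt, -1, initial_lthread, NULL);\n-        }\n-        lthread_run();   /* becomes the main loop of this EAL thread */\n-        return 0;\n-    }\n-\n-    int main(int argc, char **argv)\n-    {\n-        rte_eal_init(argc, argv);\n-        rte_timer_subsystem_init();\n-\n-        /* one scheduler on every EAL thread, including the main lcore */\n-        lthread_num_schedulers_set(rte_lcore_count());\n-        rte_eal_mp_remote_launch(sched_main, NULL, CALL_MAIN);\n-        rte_eal_mp_wait_lcore();\n-        return 0;\n-    }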
\n-\n-\n-.. _porting_legacy_code_to_run_on_lthreads:\n-\n-Porting legacy code to run on L-threads\n-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n-\n-Legacy code originally written for a pthread environment may be ported to\n-L-threads if the considerations about differences in scheduling policy and the\n-constraints discussed in the previous sections can be accommodated.\n-\n-This section looks in more detail at some of the issues that may have to be\n-resolved when porting code.\n-\n-\n-.. _pthread_API_compatibility:\n-\n-pthread API compatibility\n-^^^^^^^^^^^^^^^^^^^^^^^^^\n-\n-The first step is to establish exactly which pthread APIs the legacy\n-application uses, and to understand the requirements of those APIs. If there\n-are corresponding L-thread APIs, and where the default pthread functionality\n-is used by the application then, notwithstanding the other issues discussed\n-here, it should be feasible to run the application with L-threads. If the\n-legacy code modifies the default behavior using attributes then it may be\n-necessary to make some adjustments to eliminate those requirements.\n-\n-\n-.. _blocking_system_calls:\n-\n-Blocking system API calls\n-^^^^^^^^^^^^^^^^^^^^^^^^^\n-\n-It is important to understand what other system services the application may\n-be using, bearing in mind that in a cooperatively scheduled environment a\n-thread cannot block without stalling the scheduler and with it all other\n-cooperative threads. Any kind of blocking system call, for example file or\n-socket IO, is a potential problem; a good tool to analyze the application for\n-this purpose is the ``strace`` utility.\n-\n-There are many strategies to resolve these kinds of issues, each with its\n-merits. Possible solutions include:\n-\n-* Adopting a polled mode of the system API concerned (if available).\n-\n-* Arranging for another core to perform the function and synchronizing with\n-  that core via constructs that will not block the L-thread.\n-\n-* Affinitizing the thread to another scheduler devoted (as a matter of policy)\n-  to handling threads wishing to make blocking calls, and affinitizing it back\n-  again when finished.\n-\n-\n-.. _porting_locks_and_spinlocks:\n-\n-Locks and spinlocks\n-^^^^^^^^^^^^^^^^^^^\n-\n-Locks and spinlocks are another source of blocking behavior that, for the same\n-reasons as system calls, will need to be addressed.\n-\n-If the application design ensures that the contending L-threads will always\n-run on the same scheduler then it is probably safe to remove locks and spin\n-locks completely.\n-\n-The only exception to the above rule is if for some reason the code performs\n-any kind of context switch whilst holding the lock (e.g. yield, sleep, or\n-block on a different lock, or on a condition variable). This will need to be\n-determined before deciding to eliminate a lock.\n-\n-If a lock cannot be eliminated then an L-thread mutex can be substituted for\n-either kind of lock.\n-\n-An L-thread blocking on an L-thread mutex will be suspended and will cause\n-another ready L-thread to be resumed, thus not blocking the scheduler. When\n-default behavior is required, it can be used as a direct replacement for a\n-pthread mutex lock.\n-\n-Spin locks are typically used when lock contention is likely to be rare and\n-where the period during which the lock may be held is relatively short.\n-When the contending L-threads are running on the same scheduler then an\n-L-thread blocking on a spin lock will enter an infinite loop stopping the\n-scheduler completely (see :ref:`porting_infinite_loops` below).\n-\n-If the application design ensures that contending L-threads will always run\n-on different schedulers then it might be reasonable to leave a short spin lock\n-that rarely experiences contention in place.\n-\n-If after all considerations it appears that a spin lock can neither be\n-eliminated completely, nor replaced with an L-thread mutex, nor left in place\n-as is, then an alternative is to loop on a flag, with a call to\n-``lthread_yield()`` inside the loop (n.b. if the contending L-threads might\n-ever run on different schedulers the flag will need to be manipulated\n-atomically).
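\n-\n-A minimal sketch of this last resort is shown below. The ``flag`` variable and\n-``critical_section()`` function are hypothetical, and the GCC/clang\n-``__atomic`` built-ins are used on the assumption that the contending\n-L-threads may run on different schedulers:\n-\n-.. code-block:: c\n-\n-    static int flag;  /* hypothetical flag guarding a short critical section */\n-\n-    static void critical_section(void)\n-    {\n-        /* \"lock\": spin on the flag, yielding so other L-threads can run */\n-        while (__atomic_exchange_n(&flag, 1, __ATOMIC_ACQUIRE) != 0)\n-            lthread_yield();\n-\n-        /* ... short critical section ... */\n-\n-        /* \"unlock\" */\n-        __atomic_store_n(&flag, 0, __ATOMIC_RELEASE);\n-    }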
\n-\n-Spinning and yielding is the least preferred solution since it introduces\n-ready queue backlog (see also :ref:`ready_queue_backlog`).\n-\n-\n-.. _porting_sleeps_and_delays:\n-\n-Sleeps and delays\n-^^^^^^^^^^^^^^^^^\n-\n-Yet another kind of blocking behavior (albeit momentary) is the use of delay\n-functions like ``sleep()``, ``usleep()``, ``nanosleep()`` etc. All will have\n-the consequence of stalling the L-thread scheduler, and unless the delay is\n-very short (e.g. a very short nanosleep) calls to these functions will need\n-to be eliminated.\n-\n-The simplest mitigation strategy is to use the L-thread sleep API functions,\n-of which two variants exist, ``lthread_sleep()`` and ``lthread_sleep_clks()``.\n-These functions start an rte_timer against the L-thread, suspend the L-thread\n-and cause another ready L-thread to be resumed. The suspended L-thread is\n-resumed when the rte_timer matures.\n-\n-\n-.. _porting_infinite_loops:\n-\n-Infinite loops\n-^^^^^^^^^^^^^^\n-\n-Some applications have threads with loops that contain no inherent\n-rescheduling opportunity, and rely solely on the OS time slicing to share\n-the CPU. In a cooperative environment this will stop everything dead. These\n-kinds of loops are not hard to identify; in a debug session you will find\n-that the debugger is always stopping in the same loop.\n-\n-The simplest solution to this kind of problem is to insert an explicit\n-``lthread_yield()`` or ``lthread_sleep()`` into the loop. Another solution\n-might be to include the function performed by the loop into the execution path\n-of some other loop that does in fact yield, if this is possible.\n-\n-\n-.. _porting_thread_local_storage:\n-\n-Thread local storage\n-^^^^^^^^^^^^^^^^^^^^\n-\n-If the application uses thread local storage, the use case should be\n-studied carefully.\n-\n-In a legacy pthread application, either or both of the ``__thread`` prefix and\n-the pthread set/get specific APIs may have been used to define storage local\n-to a pthread.\n-\n-In some applications it may be a reasonable assumption that the data could, or\n-in fact most likely should, be placed in L-thread local storage.\n-\n-If the application (like many DPDK applications) has assumed a certain\n-relationship between a pthread and the CPU to which it is affinitized, there\n-is a risk that thread local storage may have been used to save some data items\n-that are correctly logically associated with the CPU, and other items which\n-relate to application context for the thread. Only a good understanding of the\n-application will reveal such cases.\n-\n-If the application requires that an L-thread be able to move between\n-schedulers then care should be taken to separate these kinds of data into\n-per-lcore and per-L-thread storage. In this way a migrating thread will bring\n-with it the local data it needs, and pick up the new logical core specific\n-values from pthread local storage at its new home.
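\n-\n-Data that belongs with the L-thread can be moved from the pthread set/get\n-specific APIs to their L-thread equivalents. A minimal sketch is shown below;\n-the ``app_ctx`` structure and the ``tls_init()`` and ``worker()`` functions\n-are hypothetical names used only for illustration:\n-\n-.. code-block:: c\n-\n-    #include <stdlib.h>\n-\n-    #include \"lthread_api.h\"\n-\n-    struct app_ctx {\n-        int id;    /* hypothetical per-thread state */\n-    };\n-\n-    static unsigned int ctx_key;\n-\n-    static void ctx_destructor(void *p)\n-    {\n-        free(p);   /* invoked automatically when the L-thread exits */\n-    }\n-\n-    /* called once, before the worker L-threads are created */\n-    static void tls_init(void)\n-    {\n-        lthread_key_create(&ctx_key, ctx_destructor);\n-    }\n-\n-    static void *worker(void *arg)\n-    {\n-        struct app_ctx *ctx = malloc(sizeof(*ctx));\n-\n-        lthread_setspecific(ctx_key, ctx);\n-\n-        /* ... later, anywhere in the same L-thread ... */\n-        ctx = (struct app_ctx *)lthread_getspecific(ctx_key);\n-        return arg;\n-    }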
\n-\n-\n-.. _pthread_shim:\n-\n-Pthread shim\n-~~~~~~~~~~~~\n-\n-A convenient way to get something working with legacy code can be to use a\n-shim that adapts pthread API calls to the corresponding L-thread ones.\n-This approach will not mitigate any of the porting considerations mentioned\n-in the previous sections, but it will reduce the amount of code churn that\n-would otherwise be involved. It is a reasonable approach to evaluate L-threads\n-before investing effort in porting to the native L-thread APIs.\n-\n-\n-Overview\n-^^^^^^^^\n-\n-The L-thread subsystem includes an example pthread shim. This is a partial\n-implementation but does contain the API stubs needed to get basic applications\n-running. There is a simple \"hello world\" application that demonstrates the\n-use of the pthread shim.\n-\n-A subtlety of working with a shim is that the application will still need\n-to make use of the genuine pthread library functions, at the very least in\n-order to create the EAL threads in which the L-thread schedulers will run.\n-This is the case with DPDK initialization, and exit.\n-\n-To deal with the initialization and shutdown scenarios, the shim is capable of\n-switching its adaptor functionality on or off; an application can control this\n-behavior by calling the function ``pt_override_set()``. The default state is\n-disabled.\n-\n-The pthread shim uses the dynamic linker loader and saves the loaded addresses\n-of the genuine pthread API functions in an internal table; when the shim\n-functionality is enabled it performs the adaptor function, and when disabled\n-it invokes the genuine pthread function.\n-\n-The function ``pthread_exit()`` has additional special handling. The standard\n-system header file pthread.h declares ``pthread_exit()`` with\n-``__rte_noreturn``; this is an optimization that is possible because the\n-pthread is terminating, and it enables the compiler to omit the normal\n-handling of stack and protection of registers since the function is not\n-expected to return, and in fact the thread is being destroyed. These\n-optimizations are applied in both the callee and the caller of the\n-``pthread_exit()`` function.\n-\n-In our cooperative scheduling environment this behavior is inadmissible. The\n-pthread is the L-thread scheduler thread, and, although an L-thread is\n-terminating, there must be a return to the scheduler in order that the system\n-can continue to run. Further, returning from a function with attribute\n-``noreturn`` is invalid and may result in undefined behavior.\n-\n-The solution is to redefine the ``pthread_exit`` function with a macro,\n-causing it to be mapped to a stub function in the shim that does not have the\n-``noreturn`` attribute. This macro is defined in the file\n-``pthread_shim.h``. The stub function is otherwise no different than any of\n-the other stub functions in the shim, and will switch between the real\n-``pthread_exit()`` function and the ``lthread_exit()`` function as\n-required. The only difference is that the mapping to the stub is done by macro\n-substitution.\n-\n-A consequence of this is that the file ``pthread_shim.h`` must be included in\n-legacy code wishing to make use of the shim. It also means that dynamic\n-linkage of a pre-compiled binary that did not include pthread_shim.h is not\n-supported.\n-\n-Given the requirements for porting legacy code outlined in\n-:ref:`porting_legacy_code_to_run_on_lthreads` most applications will require\n-at least some minimal adjustment and recompilation to run on L-threads, so\n-pre-compiled binaries are unlikely to be encountered in practice.\n-\n-In summary the shim approach adds some overhead but can be a useful tool to\n-help establish the feasibility of a code reuse project. It is also a fairly\n-straightforward task to extend the shim if necessary.\n-\n-**Note:** Bearing in mind the preceding discussions about the impact of making\n-blocking calls, switching the shim in and out on the fly to invoke any pthread\n-API that might block is something that should typically be avoided.
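\n-\n-The typical usage pattern is therefore to leave the shim disabled while the\n-EAL threads and schedulers are being set up with the genuine pthread library,\n-and to enable it only inside L-threads that execute legacy code. A minimal\n-sketch is shown below; ``legacy_entry()`` is a hypothetical legacy function\n-written against the pthread API, and the boolean argument assumed for\n-``pt_override_set()`` is an illustration based on the description above:\n-\n-.. code-block:: c\n-\n-    #include \"pthread_shim.h\"\n-\n-    static void *legacy_wrapper(void *arg)\n-    {\n-        pt_override_set(1);   /* map pthread_* calls onto L-thread APIs */\n-        legacy_entry(arg);    /* hypothetical legacy code written for pthreads */\n-        pt_override_set(0);   /* revert to the genuine pthread library */\n-        return NULL;\n-    }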
\n-\n-\n-Building and running the pthread shim\n-^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n-\n-The pthread shim example application is located in the performance-thread\n-folder of the sample applications.\n-\n-To build and run the pthread shim example:\n-\n-#. Build the application:\n-\n-   To compile the sample application see :doc:`compiling`.\n-\n-#. To run the pthread_shim example:\n-\n-   .. code-block:: console\n-\n-       dpdk-pthread-shim -c core_mask -n number_of_channels\n-\n-.. _lthread_diagnostics:\n-\n-L-thread Diagnostics\n-~~~~~~~~~~~~~~~~~~~~\n-\n-When debugging you must take account of the fact that the L-threads are run in\n-a single pthread. The current scheduler is defined by\n-``RTE_PER_LCORE(this_sched)``, and the current lthread is stored at\n-``RTE_PER_LCORE(this_sched)->current_lthread``. Thus on a breakpoint in a GDB\n-session the current lthread can be obtained by displaying the pthread local\n-variable ``per_lcore_this_sched->current_lthread``.\n-\n-Another useful diagnostic feature is the possibility to trace significant\n-events in the life of an L-thread; this feature is enabled by changing the\n-value of ``LTHREAD_DIAG`` from 0 to 1 in the file ``lthread_diag_api.h``.\n-\n-Tracing of events can be individually masked, and the mask may be programmed\n-at run time. An unmasked event results in a callback that provides information\n-about the event. The default callback simply prints trace information. The\n-default mask is 0 (all events off); the mask can be modified by calling the\n-function ``lthread_diagnostic_set_mask()``.\n-\n-It is possible to register a user callback function to implement more\n-sophisticated diagnostic functions.\n-Object creation events (lthread, mutex, and condition variable) accept, and\n-store in the created object, a user supplied reference value returned by the\n-callback function.\n-\n-The lthread reference value is passed back in all subsequent event callbacks,\n-and APIs are provided to retrieve the reference value from mutexes and\n-condition variables. This enables a user to monitor, count, or filter for\n-specific events on specific objects, for example to monitor for a specific\n-thread signaling a specific condition variable, or to monitor all timer\n-events; the possibilities and combinations are endless.\n-\n-The callback function can be set by calling the function\n-``lthread_diagnostic_enable()``, supplying a callback function pointer and an\n-event mask.\n-\n-Setting ``LTHREAD_DIAG`` also enables counting of statistics about cache and\n-queue usage, and these statistics can be displayed by calling the function\n-``lthread_diag_stats_display()``. This function also performs a consistency\n-check on the caches and queues. 
The function should only be called from the\n-main EAL thread after all worker threads have stopped and returned to the C\n-main program, otherwise the consistency check will fail.\ndiff --git a/examples/meson.build b/examples/meson.build\nindex bac9b760077e..268422a25703 100644\n--- a/examples/meson.build\n+++ b/examples/meson.build\n@@ -43,8 +43,6 @@ all_examples = [\n         'multi_process/symmetric_mp',\n         'ntb',\n         'packet_ordering',\n-        'performance-thread/l3fwd-thread',\n-        'performance-thread/pthread_shim',\n         'pipeline',\n         'ptpclient',\n         'qos_meter',\ndiff --git a/examples/performance-thread/Makefile b/examples/performance-thread/Makefile\ndeleted file mode 100644\nindex ef88722d3c29..000000000000\n--- a/examples/performance-thread/Makefile\n+++ /dev/null\n@@ -1,14 +0,0 @@\n-# SPDX-License-Identifier: BSD-3-Clause\n-# Copyright(c) 2015-2020 Intel Corporation\n-\n-subdirs := l3fwd-thread pthread_shim\n-\n-.PHONY: all static shared clean $(subdirs)\n-all static shared clean: $(subdirs)\n-\n-ifeq ($(filter $(shell uname -m),x86_64 arm64),)\n-$(error This application is only supported for x86_64 and arm64 targets)\n-endif\n-\n-$(subdirs):\n-\t$(MAKE) -C $@ $(MAKECMDGOALS)\ndiff --git a/examples/performance-thread/common/arch/arm64/ctx.c b/examples/performance-thread/common/arch/arm64/ctx.c\ndeleted file mode 100644\nindex 7c5c91658d52..000000000000\n--- a/examples/performance-thread/common/arch/arm64/ctx.c\n+++ /dev/null\n@@ -1,62 +0,0 @@\n-/* SPDX-License-Identifier: BSD-3-Clause\n- * Copyright(c) 2017 Cavium, Inc\n- */\n-\n-#include <rte_common.h>\n-#include <ctx.h>\n-\n-void\n-ctx_switch(struct ctx *new_ctx __rte_unused, struct ctx *curr_ctx __rte_unused)\n-{\n-\t/* SAVE CURRENT CONTEXT */\n-\tasm volatile (\n-\t\t/* Save SP */\n-\t\t\"mov x3, sp\\n\"\n-\t\t\"str x3, [x1, #0]\\n\"\n-\n-\t\t/* Save FP and LR */\n-\t\t\"stp x29, x30, [x1, #8]\\n\"\n-\n-\t\t/* Save Callee Saved Regs x19 - x28 */\n-\t\t\"stp x19, x20, [x1, #24]\\n\"\n-\t\t\"stp x21, x22, [x1, #40]\\n\"\n-\t\t\"stp x23, x24, [x1, #56]\\n\"\n-\t\t\"stp x25, x26, [x1, #72]\\n\"\n-\t\t\"stp x27, x28, [x1, #88]\\n\"\n-\n-\t\t/*\n-\t\t * Save bottom 64-bits of Callee Saved\n-\t\t * SIMD Regs v8 - v15\n-\t\t */\n-\t\t\"stp d8, d9, [x1, #104]\\n\"\n-\t\t\"stp d10, d11, [x1, #120]\\n\"\n-\t\t\"stp d12, d13, [x1, #136]\\n\"\n-\t\t\"stp d14, d15, [x1, #152]\\n\"\n-\t);\n-\n-\t/* RESTORE NEW CONTEXT */\n-\tasm volatile (\n-\t\t/* Restore SP */\n-\t\t\"ldr x3, [x0, #0]\\n\"\n-\t\t\"mov sp, x3\\n\"\n-\n-\t\t/* Restore FP and LR */\n-\t\t\"ldp x29, x30, [x0, #8]\\n\"\n-\n-\t\t/* Restore Callee Saved Regs x19 - x28 */\n-\t\t\"ldp x19, x20, [x0, #24]\\n\"\n-\t\t\"ldp x21, x22, [x0, #40]\\n\"\n-\t\t\"ldp x23, x24, [x0, #56]\\n\"\n-\t\t\"ldp x25, x26, [x0, #72]\\n\"\n-\t\t\"ldp x27, x28, [x0, #88]\\n\"\n-\n-\t\t/*\n-\t\t * Restore bottom 64-bits of Callee Saved\n-\t\t * SIMD Regs v8 - v15\n-\t\t */\n-\t\t\"ldp d8, d9, [x0, #104]\\n\"\n-\t\t\"ldp d10, d11, [x0, #120]\\n\"\n-\t\t\"ldp d12, d13, [x0, #136]\\n\"\n-\t\t\"ldp d14, d15, [x0, #152]\\n\"\n-\t);\n-}\ndiff --git a/examples/performance-thread/common/arch/arm64/ctx.h b/examples/performance-thread/common/arch/arm64/ctx.h\ndeleted file mode 100644\nindex 74c2e7a73cd3..000000000000\n--- a/examples/performance-thread/common/arch/arm64/ctx.h\n+++ /dev/null\n@@ -1,55 +0,0 @@\n-/* SPDX-License-Identifier: BSD-3-Clause\n- * Copyright(c) 2017 Cavium, Inc\n- */\n-\n-#ifndef CTX_H\n-#define CTX_H\n-\n-#ifdef __cplusplus\n-extern \"C\" 
{\n-#endif\n-\n-/*\n- * CPU context registers\n- */\n-struct ctx {\n-\tvoid\t*sp;\t\t/* 0  */\n-\tvoid\t*fp;\t\t/* 8 */\n-\tvoid\t*lr;\t\t/* 16  */\n-\n-\t/* Callee Saved Generic Registers */\n-\tvoid\t*r19;\t\t/* 24 */\n-\tvoid\t*r20;\t\t/* 32 */\n-\tvoid\t*r21;\t\t/* 40 */\n-\tvoid\t*r22;\t\t/* 48 */\n-\tvoid\t*r23;\t\t/* 56 */\n-\tvoid\t*r24;\t\t/* 64 */\n-\tvoid\t*r25;\t\t/* 72 */\n-\tvoid\t*r26;\t\t/* 80 */\n-\tvoid\t*r27;\t\t/* 88 */\n-\tvoid\t*r28;\t\t/* 96 */\n-\n-\t/*\n-\t * Callee Saved SIMD Registers. Only the bottom 64-bits\n-\t * of these registers needs to be saved.\n-\t */\n-\tvoid\t*v8;\t\t/* 104 */\n-\tvoid\t*v9;\t\t/* 112 */\n-\tvoid\t*v10;\t\t/* 120 */\n-\tvoid\t*v11;\t\t/* 128 */\n-\tvoid\t*v12;\t\t/* 136 */\n-\tvoid\t*v13;\t\t/* 144 */\n-\tvoid\t*v14;\t\t/* 152 */\n-\tvoid\t*v15;\t\t/* 160 */\n-};\n-\n-\n-void\n-ctx_switch(struct ctx *new_ctx, struct ctx *curr_ctx);\n-\n-\n-#ifdef __cplusplus\n-}\n-#endif\n-\n-#endif /* RTE_CTX_H_ */\ndiff --git a/examples/performance-thread/common/arch/arm64/stack.h b/examples/performance-thread/common/arch/arm64/stack.h\ndeleted file mode 100644\nindex 722c473353ca..000000000000\n--- a/examples/performance-thread/common/arch/arm64/stack.h\n+++ /dev/null\n@@ -1,56 +0,0 @@\n-/* SPDX-License-Identifier: BSD-3-Clause\n- * Copyright(c) 2017 Cavium, Inc\n- */\n-\n-#ifndef STACK_H\n-#define STACK_H\n-\n-#ifdef __cplusplus\n-extern \"C\" {\n-#endif\n-\n-#include \"lthread_int.h\"\n-\n-/*\n- * Sets up the initial stack for the lthread.\n- */\n-static inline void\n-arch_set_stack(struct lthread *lt, void *func)\n-{\n-\tvoid **stack_top = (void *)((char *)(lt->stack) + lt->stack_size);\n-\n-\t/*\n-\t * Align stack_top to 16 bytes. Arm64 has the constraint that the\n-\t * stack pointer must always be quad-word aligned.\n-\t */\n-\tstack_top = (void **)(((unsigned long)(stack_top)) & ~0xfUL);\n-\n-\t/*\n-\t * First Stack Frame\n-\t */\n-\tstack_top[0] = NULL;\n-\tstack_top[-1] = NULL;\n-\n-\t/*\n-\t * Initialize the context\n-\t */\n-\tlt->ctx.fp = &stack_top[-1];\n-\tlt->ctx.sp = &stack_top[-2];\n-\n-\t/*\n-\t * Here only the address of _lthread_exec is saved as the link\n-\t * register value. The argument to _lthread_exec i.e the address of\n-\t * the lthread struct is not saved. 
This is because the first\n-\t * argument to ctx_switch is the address of the new context,\n-\t * which also happens to be the address of required lthread struct.\n-\t * So while returning from ctx_switch into _thread_exec, parameter\n-\t * register x0 will always contain the required value.\n-\t */\n-\tlt->ctx.lr = func;\n-}\n-\n-#ifdef __cplusplus\n-}\n-#endif\n-\n-#endif /* STACK_H_ */\ndiff --git a/examples/performance-thread/common/arch/x86/ctx.c b/examples/performance-thread/common/arch/x86/ctx.c\ndeleted file mode 100644\nindex d63fd9fc0d10..000000000000\n--- a/examples/performance-thread/common/arch/x86/ctx.c\n+++ /dev/null\n@@ -1,37 +0,0 @@\n-/*\n- * SPDX-License-Identifier: BSD-3-Clause\n- * Copyright 2015 Intel Corporation.\n- * Copyright 2012 Hasan Alayli <halayli@gmail.com>\n- */\n-\n-#if defined(__x86_64__)\n-__asm__ (\n-\".text\\n\"\n-\".p2align 4,,15\\n\"\n-\".globl ctx_switch\\n\"\n-\".globl _ctx_switch\\n\"\n-\"ctx_switch:\\n\"\n-\"_ctx_switch:\\n\"\n-\"\tmovq %rsp, 0(%rsi)\t# save stack_pointer\\n\"\n-\"\tmovq %rbp, 8(%rsi)\t# save frame_pointer\\n\"\n-\"\tmovq (%rsp), %rax\t# save insn_pointer\\n\"\n-\"\tmovq %rax, 16(%rsi)\\n\"\n-\"\tmovq %rbx, 24(%rsi)\\n\t# save rbx,r12-r15\\n\"\n-\"\tmovq 24(%rdi), %rbx\\n\"\n-\"\tmovq %r15, 56(%rsi)\\n\"\n-\"\tmovq %r14, 48(%rsi)\\n\"\n-\"\tmovq 48(%rdi), %r14\\n\"\n-\"\tmovq 56(%rdi), %r15\\n\"\n-\"\tmovq %r13, 40(%rsi)\\n\"\n-\"\tmovq %r12, 32(%rsi)\\n\"\n-\"\tmovq 32(%rdi), %r12\\n\"\n-\"\tmovq 40(%rdi), %r13\\n\"\n-\"\tmovq 0(%rdi), %rsp\t# restore stack_pointer\\n\"\n-\"\tmovq 16(%rdi), %rax\t# restore insn_pointer\\n\"\n-\"\tmovq 8(%rdi), %rbp\t# restore frame_pointer\\n\"\n-\"\tmovq %rax, (%rsp)\\n\"\n-\"\tret\\n\"\n-\t);\n-#else\n-#pragma GCC error \"__x86_64__ is not defined\"\n-#endif\ndiff --git a/examples/performance-thread/common/arch/x86/ctx.h b/examples/performance-thread/common/arch/x86/ctx.h\ndeleted file mode 100644\nindex c6a46c52913f..000000000000\n--- a/examples/performance-thread/common/arch/x86/ctx.h\n+++ /dev/null\n@@ -1,36 +0,0 @@\n-/* SPDX-License-Identifier: BSD-3-Clause\n- * Copyright(c) 2015 Intel Corporation\n- */\n-\n-\n-#ifndef CTX_H\n-#define CTX_H\n-\n-#ifdef __cplusplus\n-extern \"C\" {\n-#endif\n-\n-/*\n- * CPU context registers\n- */\n-struct ctx {\n-\tvoid\t*rsp;\t\t/* 0  */\n-\tvoid\t*rbp;\t\t/* 8  */\n-\tvoid\t*rip;\t\t/* 16 */\n-\tvoid\t*rbx;\t\t/* 24 */\n-\tvoid\t*r12;\t\t/* 32 */\n-\tvoid\t*r13;\t\t/* 40 */\n-\tvoid\t*r14;\t\t/* 48 */\n-\tvoid\t*r15;\t\t/* 56 */\n-};\n-\n-\n-void\n-ctx_switch(struct ctx *new_ctx, struct ctx *curr_ctx);\n-\n-\n-#ifdef __cplusplus\n-}\n-#endif\n-\n-#endif /* RTE_CTX_H_ */\ndiff --git a/examples/performance-thread/common/arch/x86/stack.h b/examples/performance-thread/common/arch/x86/stack.h\ndeleted file mode 100644\nindex 7cdd5c7aecde..000000000000\n--- a/examples/performance-thread/common/arch/x86/stack.h\n+++ /dev/null\n@@ -1,40 +0,0 @@\n-/* SPDX-License-Identifier: BSD-3-Clause\n- * Copyright(c) 2015 Intel Corporation.\n- * Copyright(c) Cavium, Inc. 
2017.\n- * All rights reserved\n- * Copyright (C) 2012, Hasan Alayli <halayli@gmail.com>\n- * Portions derived from: https://github.com/halayli/lthread\n- * With permissions from Hasan Alayli to use them as BSD-3-Clause\n- */\n-\n-#ifndef STACK_H\n-#define STACK_H\n-\n-#ifdef __cplusplus\n-extern \"C\" {\n-#endif\n-\n-#include \"lthread_int.h\"\n-\n-/*\n- * Sets up the initial stack for the lthread.\n- */\n-static inline void\n-arch_set_stack(struct lthread *lt, void *func)\n-{\n-\tchar *stack_top = (char *)(lt->stack) + lt->stack_size;\n-\tvoid **s = (void **)stack_top;\n-\n-\t/* set initial context */\n-\ts[-3] = NULL;\n-\ts[-2] = (void *)lt;\n-\tlt->ctx.rsp = (void *)(stack_top - (4 * sizeof(void *)));\n-\tlt->ctx.rbp = (void *)(stack_top - (3 * sizeof(void *)));\n-\tlt->ctx.rip = func;\n-}\n-\n-#ifdef __cplusplus\n-}\n-#endif\n-\n-#endif /* STACK_H_ */\ndiff --git a/examples/performance-thread/common/common.mk b/examples/performance-thread/common/common.mk\ndeleted file mode 100644\nindex a33e2ab03880..000000000000\n--- a/examples/performance-thread/common/common.mk\n+++ /dev/null\n@@ -1,21 +0,0 @@\n-# SPDX-License-Identifier: BSD-3-Clause\n-# Copyright(c) 2015 Intel Corporation\n-\n-# list the C files belonging to the lthread subsystem, these are common to all\n-# lthread apps. Any makefile including this should set VPATH to include this\n-# directory path\n-#\n-\n-MKFILE_PATH=$(abspath $(dir $(lastword $(MAKEFILE_LIST))))\n-\n-ifeq ($(shell uname -m),x86_64)\n-ARCH_PATH += $(MKFILE_PATH)/arch/x86\n-else ifeq ($(shell uname -m),arm64)\n-ARCH_PATH += $(MKFILE_PATH)/arch/arm64\n-endif\n-\n-VPATH := $(MKFILE_PATH) $(ARCH_PATH)\n-\n-SRCS-y += lthread.c lthread_sched.c lthread_cond.c lthread_tls.c lthread_mutex.c lthread_diag.c ctx.c\n-\n-CFLAGS += -I$(MKFILE_PATH) -I$(ARCH_PATH)\ndiff --git a/examples/performance-thread/common/lthread.c b/examples/performance-thread/common/lthread.c\ndeleted file mode 100644\nindex 009374a8c3e0..000000000000\n--- a/examples/performance-thread/common/lthread.c\n+++ /dev/null\n@@ -1,470 +0,0 @@\n-/*\n- * SPDX-License-Identifier: BSD-3-Clause\n- * Copyright 2015 Intel Corporation.\n- * Copyright 2012 Hasan Alayli <halayli@gmail.com>\n- */\n-\n-#define RTE_MEM 1\n-\n-#include <stdio.h>\n-#include <stdlib.h>\n-#include <string.h>\n-#include <stdint.h>\n-#include <stddef.h>\n-#include <limits.h>\n-#include <inttypes.h>\n-#include <unistd.h>\n-#include <pthread.h>\n-#include <fcntl.h>\n-#include <sys/time.h>\n-#include <sys/mman.h>\n-\n-#include <rte_log.h>\n-#include <rte_string_fns.h>\n-#include <ctx.h>\n-#include <stack.h>\n-\n-#include \"lthread_api.h\"\n-#include \"lthread.h\"\n-#include \"lthread_timer.h\"\n-#include \"lthread_tls.h\"\n-#include \"lthread_objcache.h\"\n-#include \"lthread_diag.h\"\n-\n-\n-/*\n- * This function gets called after an lthread function has returned.\n- */\n-void _lthread_exit_handler(struct lthread *lt)\n-{\n-\n-\tlt->state |= BIT(ST_LT_EXITED);\n-\n-\tif (!(lt->state & BIT(ST_LT_DETACH))) {\n-\t\t/* thread is this not explicitly detached\n-\t\t * it must be joinable, so we call lthread_exit().\n-\t\t */\n-\t\tlthread_exit(NULL);\n-\t}\n-\n-\t/* if we get here the thread is detached so we can reschedule it,\n-\t * allowing the scheduler to free it\n-\t */\n-\t_reschedule();\n-}\n-\n-\n-/*\n- * Free resources allocated to an lthread\n- */\n-void _lthread_free(struct lthread *lt)\n-{\n-\n-\tDIAG_EVENT(lt, LT_DIAG_LTHREAD_FREE, lt, 0);\n-\n-\t/* invoke any user TLS destructor functions */\n-\t_lthread_tls_destroy(lt);\n-\n-\t/* 
free memory allocated for TLS defined using RTE_PER_LTHREAD macros */\n-\tif (sizeof(void *) < (uint64_t)RTE_PER_LTHREAD_SECTION_SIZE)\n-\t\t_lthread_objcache_free(lt->tls->root_sched->per_lthread_cache,\n-\t\t\t\t\tlt->per_lthread_data);\n-\n-\t/* free pthread style TLS memory */\n-\t_lthread_objcache_free(lt->tls->root_sched->tls_cache, lt->tls);\n-\n-\t/* free the stack */\n-\t_lthread_objcache_free(lt->stack_container->root_sched->stack_cache,\n-\t\t\t\tlt->stack_container);\n-\n-\t/* now free the thread */\n-\t_lthread_objcache_free(lt->root_sched->lthread_cache, lt);\n-\n-}\n-\n-/*\n- * Allocate a stack and maintain a cache of stacks\n- */\n-struct lthread_stack *_stack_alloc(void)\n-{\n-\tstruct lthread_stack *s;\n-\n-\ts = _lthread_objcache_alloc((THIS_SCHED)->stack_cache);\n-\tRTE_ASSERT(s != NULL);\n-\n-\ts->root_sched = THIS_SCHED;\n-\ts->stack_size = LTHREAD_MAX_STACK_SIZE;\n-\treturn s;\n-}\n-\n-/*\n- * Execute a ctx by invoking the start function\n- * On return call an exit handler if the user has provided one\n- */\n-static void _lthread_exec(void *arg)\n-{\n-\tstruct lthread *lt = (struct lthread *)arg;\n-\n-\t/* invoke the contexts function */\n-\tlt->fun(lt->arg);\n-\t/* do exit handling */\n-\tif (lt->exit_handler != NULL)\n-\t\tlt->exit_handler(lt);\n-}\n-\n-/*\n- *\tInitialize an lthread\n- *\tSet its function, args, and exit handler\n- */\n-void\n-_lthread_init(struct lthread *lt,\n-\tlthread_func_t fun, void *arg, lthread_exit_func exit_handler)\n-{\n-\n-\t/* set ctx func and args */\n-\tlt->fun = fun;\n-\tlt->arg = arg;\n-\tlt->exit_handler = exit_handler;\n-\n-\t/* set initial state */\n-\tlt->birth = _sched_now();\n-\tlt->state = BIT(ST_LT_INIT);\n-\tlt->join = LT_JOIN_INITIAL;\n-}\n-\n-/*\n- *\tset the lthread stack\n- */\n-void _lthread_set_stack(struct lthread *lt, void *stack, size_t stack_size)\n-{\n-\t/* set stack */\n-\tlt->stack = stack;\n-\tlt->stack_size = stack_size;\n-\n-\tarch_set_stack(lt, _lthread_exec);\n-}\n-\n-/*\n- * Create an lthread on the current scheduler\n- * If there is no current scheduler on this pthread then first create one\n- */\n-int\n-lthread_create(struct lthread **new_lt, int lcore_id,\n-\t\tlthread_func_t fun, void *arg)\n-{\n-\tif ((new_lt == NULL) || (fun == NULL))\n-\t\treturn POSIX_ERRNO(EINVAL);\n-\n-\tif (lcore_id < 0)\n-\t\tlcore_id = rte_lcore_id();\n-\telse if (lcore_id > LTHREAD_MAX_LCORES)\n-\t\treturn POSIX_ERRNO(EINVAL);\n-\n-\tstruct lthread *lt = NULL;\n-\n-\tif (THIS_SCHED == NULL) {\n-\t\tTHIS_SCHED = _lthread_sched_create(0);\n-\t\tif (THIS_SCHED == NULL) {\n-\t\t\tperror(\"Failed to create scheduler\");\n-\t\t\treturn POSIX_ERRNO(EAGAIN);\n-\t\t}\n-\t}\n-\n-\t/* allocate a thread structure */\n-\tlt = _lthread_objcache_alloc((THIS_SCHED)->lthread_cache);\n-\tif (lt == NULL)\n-\t\treturn POSIX_ERRNO(EAGAIN);\n-\n-\tbzero(lt, sizeof(struct lthread));\n-\tlt->root_sched = THIS_SCHED;\n-\n-\t/* set the function args and exit handlder */\n-\t_lthread_init(lt, fun, arg, _lthread_exit_handler);\n-\n-\t/* put it in the ready queue */\n-\t*new_lt = lt;\n-\n-\tif (lcore_id < 0)\n-\t\tlcore_id = rte_lcore_id();\n-\n-\tDIAG_CREATE_EVENT(lt, LT_DIAG_LTHREAD_CREATE);\n-\n-\trte_wmb();\n-\t_ready_queue_insert(_lthread_sched_get(lcore_id), lt);\n-\treturn 0;\n-}\n-\n-/*\n- * Schedules lthread to sleep for `nsecs`\n- * setting the lthread state to LT_ST_SLEEPING.\n- * lthread state is cleared upon resumption or expiry.\n- */\n-static inline void _lthread_sched_sleep(struct lthread *lt, uint64_t nsecs)\n-{\n-\tuint64_t state = 
lt->state;\n-\tuint64_t clks = _ns_to_clks(nsecs);\n-\n-\tif (clks) {\n-\t\t_timer_start(lt, clks);\n-\t\tlt->state = state | BIT(ST_LT_SLEEPING);\n-\t}\n-\tDIAG_EVENT(lt, LT_DIAG_LTHREAD_SLEEP, clks, 0);\n-\t_suspend();\n-}\n-\n-\n-\n-/*\n- * Cancels any running timer.\n- * This can be called multiple times on the same lthread regardless if it was\n- * sleeping or not.\n- */\n-int _lthread_desched_sleep(struct lthread *lt)\n-{\n-\tuint64_t state = lt->state;\n-\n-\tif (state & BIT(ST_LT_SLEEPING)) {\n-\t\t_timer_stop(lt);\n-\t\tstate &= (CLEARBIT(ST_LT_SLEEPING) & CLEARBIT(ST_LT_EXPIRED));\n-\t\tlt->state = state | BIT(ST_LT_READY);\n-\t\treturn 1;\n-\t}\n-\treturn 0;\n-}\n-\n-/*\n- * set user data pointer in an lthread\n- */\n-void lthread_set_data(void *data)\n-{\n-\tif (sizeof(void *) == RTE_PER_LTHREAD_SECTION_SIZE)\n-\t\tTHIS_LTHREAD->per_lthread_data = data;\n-}\n-\n-/*\n- * Retrieve user data pointer from an lthread\n- */\n-void *lthread_get_data(void)\n-{\n-\treturn THIS_LTHREAD->per_lthread_data;\n-}\n-\n-/*\n- * Return the current lthread handle\n- */\n-struct lthread *lthread_current(void)\n-{\n-\tstruct lthread_sched *sched = THIS_SCHED;\n-\n-\tif (sched)\n-\t\treturn sched->current_lthread;\n-\treturn NULL;\n-}\n-\n-\n-\n-/*\n- * Tasklet to cancel a thread\n- */\n-static void *\n-_cancel(void *arg)\n-{\n-\tstruct lthread *lt = (struct lthread *) arg;\n-\n-\tlt->state |= BIT(ST_LT_CANCELLED);\n-\tlthread_detach();\n-\treturn NULL;\n-}\n-\n-\n-/*\n- * Mark the specified as canceled\n- */\n-int lthread_cancel(struct lthread *cancel_lt)\n-{\n-\tstruct lthread *lt;\n-\n-\tif ((cancel_lt == NULL) || (cancel_lt == THIS_LTHREAD))\n-\t\treturn POSIX_ERRNO(EINVAL);\n-\n-\tDIAG_EVENT(cancel_lt, LT_DIAG_LTHREAD_CANCEL, cancel_lt, 0);\n-\n-\tif (cancel_lt->sched != THIS_SCHED) {\n-\n-\t\t/* spawn task-let to cancel the thread */\n-\t\tlthread_create(&lt,\n-\t\t\t\tcancel_lt->sched->lcore_id,\n-\t\t\t\t_cancel,\n-\t\t\t\tcancel_lt);\n-\t\treturn 0;\n-\t}\n-\tcancel_lt->state |= BIT(ST_LT_CANCELLED);\n-\treturn 0;\n-}\n-\n-/*\n- * Suspend the current lthread for specified time\n- */\n-void lthread_sleep(uint64_t nsecs)\n-{\n-\tstruct lthread *lt = THIS_LTHREAD;\n-\n-\t_lthread_sched_sleep(lt, nsecs);\n-\n-}\n-\n-/*\n- * Suspend the current lthread for specified time\n- */\n-void lthread_sleep_clks(uint64_t clks)\n-{\n-\tstruct lthread *lt = THIS_LTHREAD;\n-\tuint64_t state = lt->state;\n-\n-\tif (clks) {\n-\t\t_timer_start(lt, clks);\n-\t\tlt->state = state | BIT(ST_LT_SLEEPING);\n-\t}\n-\tDIAG_EVENT(lt, LT_DIAG_LTHREAD_SLEEP, clks, 0);\n-\t_suspend();\n-}\n-\n-/*\n- * Requeue the current thread to the back of the ready queue\n- */\n-void lthread_yield(void)\n-{\n-\tstruct lthread *lt = THIS_LTHREAD;\n-\n-\tDIAG_EVENT(lt, LT_DIAG_LTHREAD_YIELD, 0, 0);\n-\n-\t_ready_queue_insert(THIS_SCHED, lt);\n-\tctx_switch(&(THIS_SCHED)->ctx, &lt->ctx);\n-}\n-\n-/*\n- * Exit the current lthread\n- * If a thread is joining pass the user pointer to it\n- */\n-void lthread_exit(void *ptr)\n-{\n-\tstruct lthread *lt = THIS_LTHREAD;\n-\n-\t/* if thread is detached (this is not valid) just exit */\n-\tif (lt->state & BIT(ST_LT_DETACH))\n-\t\treturn;\n-\n-\t/* There is a race between lthread_join() and lthread_exit()\n-\t *  - if exit before join then we suspend and resume on join\n-\t *  - if join before exit then we resume the joining thread\n-\t */\n-\tuint64_t join_initial = LT_JOIN_INITIAL;\n-\tif ((lt->join == LT_JOIN_INITIAL)\n-\t    && __atomic_compare_exchange_n(&lt->join, 
&join_initial,\n-\t\tLT_JOIN_EXITING, 0, __ATOMIC_RELAXED, __ATOMIC_RELAXED)) {\n-\n-\t\tDIAG_EVENT(lt, LT_DIAG_LTHREAD_EXIT, 1, 0);\n-\t\t_suspend();\n-\t\t/* set the exit value */\n-\t\tif ((ptr != NULL) && (lt->lt_join->lt_exit_ptr != NULL))\n-\t\t\t*(lt->lt_join->lt_exit_ptr) = ptr;\n-\n-\t\t/* let the joining thread know we have set the exit value */\n-\t\tlt->join = LT_JOIN_EXIT_VAL_SET;\n-\t} else {\n-\n-\t\tDIAG_EVENT(lt, LT_DIAG_LTHREAD_EXIT, 0, 0);\n-\t\t/* set the exit value */\n-\t\tif ((ptr != NULL) && (lt->lt_join->lt_exit_ptr != NULL))\n-\t\t\t*(lt->lt_join->lt_exit_ptr) = ptr;\n-\t\t/* let the joining thread know we have set the exit value */\n-\t\tlt->join = LT_JOIN_EXIT_VAL_SET;\n-\t\t_ready_queue_insert(lt->lt_join->sched,\n-\t\t\t\t    (struct lthread *)lt->lt_join);\n-\t}\n-\n-\n-\t/* wait until the joinging thread has collected the exit value */\n-\twhile (lt->join != LT_JOIN_EXIT_VAL_READ)\n-\t\t_reschedule();\n-\n-\t/* reset join state */\n-\tlt->join = LT_JOIN_INITIAL;\n-\n-\t/* detach it so its resources can be released */\n-\tlt->state |= (BIT(ST_LT_DETACH) | BIT(ST_LT_EXITED));\n-}\n-\n-/*\n- * Join an lthread\n- * Suspend until the joined thread returns\n- */\n-int lthread_join(struct lthread *lt, void **ptr)\n-{\n-\tif (lt == NULL)\n-\t\treturn POSIX_ERRNO(EINVAL);\n-\n-\tstruct lthread *current = THIS_LTHREAD;\n-\tuint64_t lt_state = lt->state;\n-\n-\t/* invalid to join a detached thread, or a thread that is joined */\n-\tif ((lt_state & BIT(ST_LT_DETACH)) || (lt->join == LT_JOIN_THREAD_SET))\n-\t\treturn POSIX_ERRNO(EINVAL);\n-\t/* pointer to the joining thread and a poingter to return a value */\n-\tlt->lt_join = current;\n-\tcurrent->lt_exit_ptr = ptr;\n-\t/* There is a race between lthread_join() and lthread_exit()\n-\t *  - if join before exit we suspend and will resume when exit is called\n-\t *  - if exit before join we resume the exiting thread\n-\t */\n-\tuint64_t join_initial = LT_JOIN_INITIAL;\n-\tif ((lt->join == LT_JOIN_INITIAL)\n-\t    && __atomic_compare_exchange_n(&lt->join, &join_initial,\n-\t\tLT_JOIN_THREAD_SET, 0, __ATOMIC_RELAXED, __ATOMIC_RELAXED)) {\n-\n-\t\tDIAG_EVENT(current, LT_DIAG_LTHREAD_JOIN, lt, 1);\n-\t\t_suspend();\n-\t} else {\n-\t\tDIAG_EVENT(current, LT_DIAG_LTHREAD_JOIN, lt, 0);\n-\t\t_ready_queue_insert(lt->sched, lt);\n-\t}\n-\n-\t/* wait for exiting thread to set return value */\n-\twhile (lt->join != LT_JOIN_EXIT_VAL_SET)\n-\t\t_reschedule();\n-\n-\t/* collect the return value */\n-\tif (ptr != NULL)\n-\t\t*ptr = *current->lt_exit_ptr;\n-\n-\t/* let the exiting thread proceed to exit */\n-\tlt->join = LT_JOIN_EXIT_VAL_READ;\n-\treturn 0;\n-}\n-\n-\n-/*\n- * Detach current lthread\n- * A detached thread cannot be joined\n- */\n-void lthread_detach(void)\n-{\n-\tstruct lthread *lt = THIS_LTHREAD;\n-\n-\tDIAG_EVENT(lt, LT_DIAG_LTHREAD_DETACH, 0, 0);\n-\n-\tuint64_t state = lt->state;\n-\n-\tlt->state = state | BIT(ST_LT_DETACH);\n-}\n-\n-/*\n- * Set function name of an lthread\n- * this is a debug aid\n- */\n-void lthread_set_funcname(const char *f)\n-{\n-\tstruct lthread *lt = THIS_LTHREAD;\n-\n-\tstrlcpy(lt->funcname, f, sizeof(lt->funcname));\n-}\ndiff --git a/examples/performance-thread/common/lthread.h b/examples/performance-thread/common/lthread.h\ndeleted file mode 100644\nindex 4c945cf76a3a..000000000000\n--- a/examples/performance-thread/common/lthread.h\n+++ /dev/null\n@@ -1,51 +0,0 @@\n-/*\n- * SPDX-License-Identifier: BSD-3-Clause\n- * Copyright 2015 Intel Corporation.\n- * Copyright 2012 Hasan Alayli 
<halayli@gmail.com>\n- */\n-#ifndef LTHREAD_H_\n-#define LTHREAD_H_\n-\n-#ifdef __cplusplus\n-extern \"C\" {\n-#endif\n-\n-#include <rte_per_lcore.h>\n-\n-#include \"lthread_api.h\"\n-#include \"lthread_diag.h\"\n-\n-struct lthread;\n-struct lthread_sched;\n-\n-/* function to be called when a context function returns */\n-typedef void (*lthread_exit_func) (struct lthread *);\n-\n-void _lthread_exit_handler(struct lthread *lt);\n-\n-void lthread_set_funcname(const char *f);\n-\n-void _lthread_sched_busy_sleep(struct lthread *lt, uint64_t nsecs);\n-\n-int _lthread_desched_sleep(struct lthread *lt);\n-\n-void _lthread_free(struct lthread *lt);\n-\n-struct lthread_sched *_lthread_sched_get(unsigned int lcore_id);\n-\n-struct lthread_stack *_stack_alloc(void);\n-\n-struct\n-lthread_sched *_lthread_sched_create(size_t stack_size);\n-\n-void\n-_lthread_init(struct lthread *lt,\n-\t      lthread_func_t fun, void *arg, lthread_exit_func exit_handler);\n-\n-void _lthread_set_stack(struct lthread *lt, void *stack, size_t stack_size);\n-\n-#ifdef __cplusplus\n-}\n-#endif\n-\n-#endif\t\t\t\t/* LTHREAD_H_ */\ndiff --git a/examples/performance-thread/common/lthread_api.h b/examples/performance-thread/common/lthread_api.h\ndeleted file mode 100644\nindex e6879ea5ce53..000000000000\n--- a/examples/performance-thread/common/lthread_api.h\n+++ /dev/null\n@@ -1,784 +0,0 @@\n-/*\n- * SPDX-License-Identifier: BSD-3-Clause\n- * Copyright 2015 Intel Corporation.\n- * Copyright 2012 Hasan Alayli <halayli@gmail.com>\n- */\n-/**\n- *  @file lthread_api.h\n- *\n- *  @warning\n- *  @b EXPERIMENTAL: this API may change without prior notice\n- *\n- *  This file contains the public API for the L-thread subsystem\n- *\n- *  The L_thread subsystem provides a simple cooperative scheduler to\n- *  enable arbitrary functions to run as cooperative threads within a\n- * single P-thread.\n- *\n- * The subsystem provides a P-thread like API that is intended to assist in\n- * reuse of legacy code written for POSIX p_threads.\n- *\n- * The L-thread subsystem relies on cooperative multitasking, as such\n- * an L-thread must possess frequent rescheduling points. Often these\n- * rescheduling points are provided transparently when the application\n- * invokes an L-thread API.\n- *\n- * In some applications it is possible that the program may enter a loop the\n- * exit condition for which depends on the action of another thread or a\n- * response from hardware. In such a case it is necessary to yield the thread\n- * periodically in the loop body, to allow other threads an opportunity to\n- * run. This can be done by inserting a call to lthread_yield() or\n- * lthread_sleep(n) in the body of the loop.\n- *\n- * If the application makes expensive / blocking system calls or does other\n- * work that would take an inordinate amount of time to complete, this will\n- * stall the cooperative scheduler resulting in very poor performance.\n- *\n- * In such cases an L-thread can be migrated temporarily to another scheduler\n- * running in a different P-thread on another core. When the expensive or\n- * blocking operation is completed it can be migrated back to the original\n- * scheduler.  
In this way other threads can continue to run on the original\n- * scheduler and will be completely unaffected by the blocking behaviour.\n- * To migrate an L-thread to another scheduler the API lthread_set_affinity()\n- * is provided.\n- *\n- * If L-threads that share data are running on the same core it is possible\n- * to design programs where mutual exclusion mechanisms to protect shared data\n- * can be avoided. This is due to the fact that the cooperative threads cannot\n- * preempt each other.\n- *\n- * There are two cases where mutual exclusion mechanisms are necessary.\n- *\n- *  a) Where the L-threads sharing data are running on different cores.\n- *  b) Where code must yield while updating data shared with another thread.\n- *\n- * The L-thread subsystem provides a set of mutex APIs to help with such\n- * scenarios, however excessive reliance on on these will impact performance\n- * and is best avoided if possible.\n- *\n- * L-threads can synchronise using a fast condition variable implementation\n- * that supports signal and broadcast. An L-thread running on any core can\n- * wait on a condition.\n- *\n- * L-threads can have L-thread local storage with an API modelled on either the\n- * P-thread get/set specific API or using PER_LTHREAD macros modelled on the\n- * RTE_PER_LCORE macros. Alternatively a simple user data pointer may be set\n- * and retrieved from a thread.\n- */\n-#ifndef LTHREAD_H\n-#define LTHREAD_H\n-\n-#ifdef __cplusplus\n-extern \"C\" {\n-#endif\n-\n-#include <stdint.h>\n-#include <sys/socket.h>\n-#include <fcntl.h>\n-#include <netinet/in.h>\n-\n-#include <rte_cycles.h>\n-\n-\n-struct lthread;\n-struct lthread_cond;\n-struct lthread_mutex;\n-\n-struct lthread_condattr;\n-struct lthread_mutexattr;\n-\n-typedef void *(*lthread_func_t) (void *);\n-\n-/*\n- * Define the size of stack for an lthread\n- * Then this is the size that will be allocated on lthread creation\n- * This is a fixed size and will not grow.\n- */\n-#define LTHREAD_MAX_STACK_SIZE (1024*64)\n-\n-/**\n- * Define the maximum number of TLS keys that can be created\n- *\n- */\n-#define LTHREAD_MAX_KEYS 1024\n-\n-/**\n- * Define the maximum number of attempts to destroy an lthread's\n- * TLS data on thread exit\n- */\n-#define LTHREAD_DESTRUCTOR_ITERATIONS 4\n-\n-\n-/**\n- * Define the maximum number of lcores that will support lthreads\n- */\n-#define LTHREAD_MAX_LCORES RTE_MAX_LCORE\n-\n-/**\n- * How many lthread objects to pre-allocate as the system grows\n- * applies to lthreads + stacks, TLS, mutexs, cond vars.\n- *\n- * @see _lthread_alloc()\n- * @see _cond_alloc()\n- * @see _mutex_alloc()\n- *\n- */\n-#define LTHREAD_PREALLOC 100\n-\n-/**\n- * Set the number of schedulers in the system.\n- *\n- * This function may optionally be called before starting schedulers.\n- *\n- * If the number of schedulers is not set, or set to 0 then each scheduler\n- * will begin scheduling lthreads immediately it is started.\n-\n- * If the number of schedulers is set to greater than 0, then each scheduler\n- * will wait until all schedulers have started before beginning to schedule\n- * lthreads.\n- *\n- * If an application wishes to have threads migrate between cores using\n- * lthread_set_affinity(), or join threads running on other cores using\n- * lthread_join(), then it is prudent to set the number of schedulers to ensure\n- * that all schedulers are initialised beforehand.\n- *\n- * @param num\n- *  the number of schedulers in the system\n- * @return\n- * the number of schedulers in the system\n- */\n-int 
lthread_num_schedulers_set(int num);\n-\n-/**\n- * Return the number of schedulers currently running\n- * @return\n- *  the number of schedulers in the system\n- */\n-int lthread_active_schedulers(void);\n-\n-/**\n-  * Shutdown the specified scheduler\n-  *\n-  *  This function tells the specified scheduler to\n-  *  exit if/when there is no more work to do.\n-  *\n-  *  Note that although the scheduler will stop\n-  *  resources are not freed.\n-  *\n-  * @param lcore\n-  *\tThe lcore of the scheduler to shutdown\n-  *\n-  * @return\n-  *  none\n-  */\n-void lthread_scheduler_shutdown(unsigned lcore);\n-\n-/**\n-  * Shutdown all schedulers\n-  *\n-  *  This function tells all schedulers  including the current scheduler to\n-  *  exit if/when there is no more work to do.\n-  *\n-  *  Note that although the schedulers will stop\n-  *  resources are not freed.\n-  *\n-  * @return\n-  *  none\n-  */\n-void lthread_scheduler_shutdown_all(void);\n-\n-/**\n-  * Run the lthread scheduler\n-  *\n-  *  Runs the lthread scheduler.\n-  *  This function returns only if/when all lthreads have exited.\n-  *  This function must be the main loop of an EAL thread.\n-  *\n-  * @return\n-  *\t none\n-  */\n-\n-void lthread_run(void);\n-\n-/**\n-  * Create an lthread\n-  *\n-  *  Creates an lthread and places it in the ready queue on a particular\n-  *  lcore.\n-  *\n-  *  If no scheduler exists yet on the current lcore then one is created.\n-  *\n-  * @param new_lt\n-  *  Pointer to an lthread pointer that will be initialized\n-  * @param lcore\n-  *  the lcore the thread should be started on or the current lcore\n-  *    -1 the current lcore\n-  *    0 - LTHREAD_MAX_LCORES any other lcore\n-  * @param lthread_func\n-  *  Pointer to the function the for the thread to run\n-  * @param arg\n-  *  Pointer to args that will be passed to the thread\n-  *\n-  * @return\n-  *\t 0    success\n-  *\t EAGAIN  no resources available\n-  *\t EINVAL  NULL thread or function pointer, or lcore_id out of range\n-  */\n-int\n-lthread_create(struct lthread **new_lt,\n-\t\tint lcore, lthread_func_t func, void *arg);\n-\n-/**\n-  * Cancel an lthread\n-  *\n-  *  Cancels an lthread and causes it to be terminated\n-  *  If the lthread is detached it will be freed immediately\n-  *  otherwise its resources will not be released until it is joined.\n-  *\n-  * @param new_lt\n-  *  Pointer to an lthread that will be cancelled\n-  *\n-  * @return\n-  *\t 0    success\n-  *\t EINVAL  thread was NULL\n-  */\n-int lthread_cancel(struct lthread *lt);\n-\n-/**\n-  * Join an lthread\n-  *\n-  *  Joins the current thread with the specified lthread, and waits for that\n-  *  thread to exit.\n-  *  Passes an optional pointer to collect returned data.\n-  *\n-  * @param lt\n-  *  Pointer to the lthread to be joined\n-  * @param ptr\n-  *  Pointer to pointer to collect returned data\n-  *\n-0  * @return\n-  *  0    success\n-  *  EINVAL lthread could not be joined.\n-  */\n-int lthread_join(struct lthread *lt, void **ptr);\n-\n-/**\n-  * Detach an lthread\n-  *\n-  * Detaches the current thread\n-  * On exit a detached lthread will be freed immediately and will not wait\n-  * to be joined. The default state for a thread is not detached.\n-  *\n-  * @return\n-  *  none\n-  */\n-void lthread_detach(void);\n-\n-/**\n-  *  Exit an lthread\n-  *\n-  * Terminate the current thread, optionally return data.\n-  * The data may be collected by lthread_join()\n-  *\n-  * After calling this function the lthread will be suspended until it is\n-  * joined. 
After it is joined then its resources will be freed.\n-  *\n-  * @param ptr\n-  *  Pointer to pointer to data to be returned\n-  *\n-  * @return\n-  *  none\n-  */\n-void lthread_exit(void *val);\n-\n-/**\n-  * Cause the current lthread to sleep for n nanoseconds\n-  *\n-  * The current thread will be suspended until the specified time has elapsed\n-  * or has been exceeded.\n-  *\n-  * Execution will switch to the next lthread that is ready to run\n-  *\n-  * @param nsecs\n-  *  Number of nanoseconds to sleep\n-  *\n-  * @return\n-  *  none\n-  */\n-void lthread_sleep(uint64_t nsecs);\n-\n-/**\n-  * Cause the current lthread to sleep for n cpu clock ticks\n-  *\n-  *  The current thread will be suspended until the specified time has elapsed\n-  *  or has been exceeded.\n-  *\n-  *\t Execution will switch to the next lthread that is ready to run\n-  *\n-  * @param clks\n-  *  Number of clock ticks to sleep\n-  *\n-  * @return\n-  *  none\n-  */\n-void lthread_sleep_clks(uint64_t clks);\n-\n-/**\n-  * Yield the current lthread\n-  *\n-  *  The current thread will yield and execution will switch to the\n-  *  next lthread that is ready to run\n-  *\n-  * @return\n-  *  none\n-  */\n-void lthread_yield(void);\n-\n-/**\n-  * Migrate the current thread to another scheduler\n-  *\n-  *  This function migrates the current thread to another scheduler.\n-  *  Execution will switch to the next lthread that is ready to run on the\n-  *  current scheduler. The current thread will be resumed on the new scheduler.\n-  *\n-  * @param lcore\n-  *\tThe lcore to migrate to\n-  *\n-  * @return\n-  *  0   success we are now running on the specified core\n-  *  EINVAL the destination lcore was not valid\n-  */\n-int lthread_set_affinity(unsigned lcore);\n-\n-/**\n-  * Return the current lthread\n-  *\n-  *  Returns the current lthread\n-  *\n-  * @return\n-  *  pointer to the current lthread\n-  */\n-struct lthread\n-*lthread_current(void);\n-\n-/**\n-  * Associate user data with an lthread\n-  *\n-  *  This function sets a user data pointer in the current lthread\n-  *  The pointer can be retrieved with lthread_get_data()\n-  *  It is the users responsibility to allocate and free any data referenced\n-  *  by the user pointer.\n-  *\n-  * @param data\n-  *  pointer to user data\n-  *\n-  * @return\n-  *  none\n-  */\n-void lthread_set_data(void *data);\n-\n-/**\n-  * Get user data for the current lthread\n-  *\n-  *  This function returns a user data pointer for the current lthread\n-  *  The pointer must first be set with lthread_set_data()\n-  *  It is the users responsibility to allocate and free any data referenced\n-  *  by the user pointer.\n-  *\n-  * @return\n-  *  pointer to user data\n-  */\n-void\n-*lthread_get_data(void);\n-\n-struct lthread_key;\n-typedef void (*tls_destructor_func) (void *);\n-\n-/**\n-  * Create a key for lthread TLS\n-  *\n-  *  This function is modelled on pthread_key_create\n-  *  It creates a thread-specific data key visible to all lthreads on the\n-  *  current scheduler.\n-  *\n-  *  Key values may be used to locate thread-specific data.\n-  *  The same key value\tmay be used by different threads, the values bound\n-  *  to the key by\tlthread_setspecific() are maintained on\ta per-thread\n-  *  basis and persist for the life of the calling thread.\n-  *\n-  *  An\toptional destructor function may be associated with each key value.\n-  *  At\tthread exit, if\ta key value has\ta non-NULL destructor pointer, and the\n-  *  thread has\ta non-NULL value associated with the 
key, the function pointed\n-  *  to\tis called with the current associated value as its sole\targument.\n-  *\n-  * @param key\n-  *   Pointer to the key to be created\n-  * @param destructor\n-  *   Pointer to destructor function\n-  *\n-  * @return\n-  *  0 success\n-  *  EINVAL the key ptr was NULL\n-  *  EAGAIN no resources available\n-  */\n-int lthread_key_create(unsigned int *key, tls_destructor_func destructor);\n-\n-/**\n-  * Delete key for lthread TLS\n-  *\n-  *  This function is modelled on pthread_key_delete().\n-  *  It deletes a thread-specific data key previously returned by\n-  *  lthread_key_create().\n-  *  The thread-specific data values associated with the key need not be NULL\n-  *  at the time that lthread_key_delete is called.\n-  *  It is the responsibility of the application to free any application\n-  *  storage or perform any cleanup actions for data structures related to the\n-  *  deleted key. This cleanup can be done either before or after\n-  * lthread_key_delete is called.\n-  *\n-  * @param key\n-  *  The key to be deleted\n-  *\n-  * @return\n-  *  0 Success\n-  *  EINVAL the key was invalid\n-  */\n-int lthread_key_delete(unsigned int key);\n-\n-/**\n-  * Get lthread TLS\n-  *\n-  *  This function is modelled on pthread_get_specific().\n-  *  It returns the value currently bound to the specified key on behalf of the\n-  *  calling thread. Calling lthread_getspecific() with a key value not\n-  *  obtained from lthread_key_create() or after key has been deleted with\n-  *  lthread_key_delete() will result in undefined behaviour.\n-  *  lthread_getspecific() may be called from a thread-specific data destructor\n-  *  function.\n-  *\n-  * @param key\n-  *  The key for which data is requested\n-  *\n-  * @return\n-  *  Pointer to the thread specific data associated with that key\n-  *  or NULL if no data has been set.\n-  */\n-void\n-*lthread_getspecific(unsigned int key);\n-\n-/**\n-  * Set lthread TLS\n-  *\n-  *  This function is modelled on pthread_set_specific()\n-  *  It associates a thread-specific value with a key obtained via a previous\n-  *  call to lthread_key_create().\n-  *  Different threads may bind different values to the same key. These values\n-  *  are typically pointers to dynamically allocated memory that have been\n-  *  reserved by the calling thread. Calling lthread_setspecific with a key\n-  *  value not obtained from lthread_key_create or after the key has been\n-  *  deleted with lthread_key_delete will result in undefined behaviour.\n-  *\n-  * @param key\n-  *  The key for which data is to be set\n-  * @param key\n-  *  Pointer to the user data\n-  *\n-  * @return\n-  *  0 success\n-  *  EINVAL the key was invalid\n-  */\n-\n-int lthread_setspecific(unsigned int key, const void *value);\n-\n-/**\n- * The macros below provide an alternative mechanism to access lthread local\n- *  storage.\n- *\n- * The macros can be used to declare define and access per lthread local\n- * storage in a similar way to the RTE_PER_LCORE macros which control storage\n- * local to an lcore.\n- *\n- * Memory for per lthread variables declared in this way is allocated when the\n- * lthread is created and a pointer to this memory is stored in the lthread.\n- * The per lthread variables are accessed via the pointer + the offset of the\n- * particular variable.\n- *\n- * The total size of per lthread storage, and the variable offsets are found by\n- * defining the variables in a unique global memory section, the start and end\n- * of which is known. 
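A short sketch of the lthread TLS calls described above (lthread_key_create / lthread_setspecific / lthread_getspecific), modelled on their pthread counterparts; the key variable and the scratch buffer are illustrative, and the destructor only runs at lthread exit when the bound value is non-NULL.

#include <stdlib.h>
#include "lthread_api.h"

static unsigned int buf_key;             /* illustrative key */

static void buf_destructor(void *p)
{
	free(p);                         /* invoked at lthread exit if value != NULL */
}

static void tls_setup(void)
{
	lthread_key_create(&buf_key, buf_destructor);
}

static void worker(void *arg)
{
	char *buf = malloc(64);          /* per-lthread scratch buffer */
	(void)arg;

	lthread_setspecific(buf_key, buf);
	/* ... later, anywhere while running on this lthread ... */
	char *same = lthread_getspecific(buf_key);
	(void)same;
}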
This global memory section is used only in the\n- * computation of the addresses of the lthread variables, and is never actually\n- * used to store any data.\n- *\n- * Due to the fact that variables declared this way may be scattered across\n- * many files, the start and end of the section and variable offsets are only\n- * known after linking, thus the computation of section size and variable\n- * addresses is performed at run time.\n- *\n- * These macros are primarily provided to aid porting of code that makes use\n- * of the existing RTE_PER_LCORE macros. In principle it would be more efficient\n- * to gather all lthread local variables into a single structure and\n- * set/retrieve a pointer to that struct using the alternative\n- * lthread_data_set/get APIs.\n- *\n- * These macros are mutually exclusive with the lthread_data_set/get APIs.\n- * If you define storage using these macros then the lthread_data_set/get APIs\n- * will not perform as expected, the lthread_data_set API does nothing, and the\n- * lthread_data_get API returns the start of global section.\n- *\n- */\n-/* start and end of per lthread section */\n-extern char __start_per_lt;\n-extern char __stop_per_lt;\n-\n-\n-#define RTE_DEFINE_PER_LTHREAD(type, name)                      \\\n-__typeof__(type)__attribute((section(\"per_lt\"))) per_lt_##name\n-\n-/**\n- * Macro to declare an extern per lthread variable \"var\" of type \"type\"\n- */\n-#define RTE_DECLARE_PER_LTHREAD(type, name)                     \\\n-extern __typeof__(type)__attribute((section(\"per_lt\"))) per_lt_##name\n-\n-/**\n- * Read/write the per-lcore variable value\n- */\n-#define RTE_PER_LTHREAD(name) ((typeof(per_lt_##name) *)\\\n-((char *)lthread_get_data() +\\\n-((char *) &per_lt_##name - &__start_per_lt)))\n-\n-/**\n-  * Initialize a mutex\n-  *\n-  *  This function provides a mutual exclusion device, the need for which\n-  *  can normally be avoided in a cooperative multitasking environment.\n-  *  It is provided to aid porting of legacy code originally written for\n-  *   preemptive multitasking environments such as pthreads.\n-  *\n-  *  A mutex may be unlocked (not owned by any thread), or locked (owned by\n-  *  one thread).\n-  *\n-  *  A mutex can never be owned  by more than one thread simultaneously.\n-  *  A thread attempting to lock a mutex that is already locked by another\n-  *  thread is suspended until the owning thread unlocks the mutex.\n-  *\n-  *  lthread_mutex_init() initializes the mutex object pointed to by mutex\n-  *  Optional mutex attributes specified in mutexattr, are reserved for future\n-  *  use and are currently ignored.\n-  *\n-  *  If a thread calls lthread_mutex_lock() on the mutex, then if the mutex\n-  *  is currently unlocked,  it  becomes  locked  and  owned  by  the calling\n-  *  thread, and lthread_mutex_lock returns immediately. If the mutex is\n-  *  already locked by another thread, lthread_mutex_lock suspends the calling\n-  *  thread until the mutex is unlocked.\n-  *\n-  *  lthread_mutex_trylock behaves identically to rte_thread_mutex_lock, except\n-  *  that it does not block the calling  thread  if the mutex is already locked\n-  *  by another thread.\n-  *\n-  *  lthread_mutex_unlock() unlocks the specified mutex. 
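As a sketch of the macro contract above: a per-lthread variable is defined once, optionally declared in headers for other compilation units, and always accessed through the pointer returned by RTE_PER_LTHREAD(). The rx_pkts counter name is illustrative.

#include <stdint.h>
#include "lthread_api.h"

/* in exactly one .c file: places the variable in the per_lt section */
RTE_DEFINE_PER_LTHREAD(uint64_t, rx_pkts);

/* in a header, for other compilation units */
RTE_DECLARE_PER_LTHREAD(uint64_t, rx_pkts);

static void worker(void *arg)
{
	(void)arg;
	/* RTE_PER_LTHREAD() returns a pointer into this lthread's private block */
	*RTE_PER_LTHREAD(rx_pkts) += 1;
}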
The mutex is assumed\n-  *  to be locked and owned by the calling thread.\n-  *\n-  *  lthread_mutex_destroy() destroys a\tmutex object, freeing its resources.\n-  *  The mutex must be unlocked with nothing blocked on it before calling\n-  *  lthread_mutex_destroy.\n-  *\n-  * @param name\n-  *  Optional pointer to string describing the mutex\n-  * @param mutex\n-  *  Pointer to pointer to the mutex to be initialized\n-  * @param attribute\n-  *  Pointer to attribute - unused reserved\n-  *\n-  * @return\n-  *  0 success\n-  *  EINVAL mutex was not a valid pointer\n-  *  EAGAIN insufficient resources\n-  */\n-\n-int\n-lthread_mutex_init(char *name, struct lthread_mutex **mutex,\n-\t\t   const struct lthread_mutexattr *attr);\n-\n-/**\n-  * Destroy a mutex\n-  *\n-  *  This function destroys the specified mutex freeing its resources.\n-  *  The mutex must be unlocked before calling lthread_mutex_destroy.\n-  *\n-  * @see lthread_mutex_init()\n-  *\n-  * @param mutex\n-  *  Pointer to pointer to the mutex to be initialized\n-  *\n-  * @return\n-  *  0 success\n-  *  EINVAL mutex was not an initialized mutex\n-  *  EBUSY mutex was still in use\n-  */\n-int lthread_mutex_destroy(struct lthread_mutex *mutex);\n-\n-/**\n-  * Lock a mutex\n-  *\n-  *  This function attempts to lock a mutex.\n-  *  If a thread calls lthread_mutex_lock() on the mutex, then if the mutex\n-  *  is currently unlocked,  it  becomes  locked  and  owned  by  the calling\n-  *  thread, and lthread_mutex_lock returns immediately. If the mutex is\n-  *  already locked by another thread, lthread_mutex_lock suspends the calling\n-  *  thread until the mutex is unlocked.\n-  *\n-  * @see lthread_mutex_init()\n-  *\n-  * @param mutex\n-  *  Pointer to pointer to the mutex to be initialized\n-  *\n-  * @return\n-  *  0 success\n-  *  EINVAL mutex was not an initialized mutex\n-  *  EDEADLOCK the mutex was already owned by the calling thread\n-  */\n-\n-int lthread_mutex_lock(struct lthread_mutex *mutex);\n-\n-/**\n-  * Try to lock a mutex\n-  *\n-  *  This function attempts to lock a mutex.\n-  *  lthread_mutex_trylock behaves identically to rte_thread_mutex_lock, except\n-  *  that it does not block the calling  thread  if the mutex is already locked\n-  *  by another thread.\n-  *\n-  *\n-  * @see lthread_mutex_init()\n-  *\n-  * @param mutex\n-  *  Pointer to pointer to the mutex to be initialized\n-  *\n-  * @return\n-  * 0 success\n-  * EINVAL mutex was not an initialized mutex\n-  * EBUSY the mutex was already locked by another thread\n-  */\n-int lthread_mutex_trylock(struct lthread_mutex *mutex);\n-\n-/**\n-  * Unlock a mutex\n-  *\n-  * This function attempts to unlock the specified mutex. 
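A minimal sketch of the mutex calls documented above; the name string, the shared counter and the init/bump helpers are illustrative, and the attribute argument is passed as NULL since it is documented as reserved.

#include <stdint.h>
#include "lthread_api.h"

static struct lthread_mutex *stats_lock;   /* illustrative shared lock */
static uint64_t stats_count;

static int stats_init(void)
{
	/* attr is reserved for future use, so NULL is passed */
	return lthread_mutex_init("stats", &stats_lock, NULL);
}

static void stats_bump(void)
{
	lthread_mutex_lock(stats_lock);     /* suspends this lthread if contended */
	stats_count++;
	lthread_mutex_unlock(stats_lock);   /* readies the oldest blocked waiter, if any */
}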
The mutex is assumed\n-  * to be locked and owned by the calling thread.\n-  *\n-  * The oldest of any threads blocked on the mutex is made ready and may\n-  * compete with any other running thread to gain the mutex, it fails it will\n-  *  be blocked again.\n-  *\n-  * @param mutex\n-  * Pointer to pointer to the mutex to be initialized\n-  *\n-  * @return\n-  *  0 mutex was unlocked\n-  *  EINVAL mutex was not an initialized mutex\n-  *  EPERM the mutex was not owned by the calling thread\n-  */\n-\n-int lthread_mutex_unlock(struct lthread_mutex *mutex);\n-\n-/**\n-  * Initialize a condition variable\n-  *\n-  *  This function initializes a condition variable.\n-  *\n-  *  Condition variables can be used to communicate changes in the state of data\n-  *  shared between threads.\n-  *\n-  * @see lthread_cond_wait()\n-  *\n-  * @param name\n-  *  Pointer to optional string describing the condition variable\n-  * @param c\n-  *  Pointer to pointer to the condition variable to be initialized\n-  * @param attr\n-  *  Pointer to optional attribute reserved for future use, currently ignored\n-  *\n-  * @return\n-  *  0 success\n-  *  EINVAL cond was not a valid pointer\n-  *  EAGAIN insufficient resources\n-  */\n-int\n-lthread_cond_init(char *name, struct lthread_cond **c,\n-\t\t  const struct lthread_condattr *attr);\n-\n-/**\n-  * Destroy a condition variable\n-  *\n-  *  This function destroys a condition variable that was created with\n-  *  lthread_cond_init() and releases its resources.\n-  *\n-  * @param cond\n-  *  Pointer to pointer to the condition variable to be destroyed\n-  *\n-  * @return\n-  *  0 Success\n-  *  EBUSY condition variable was still in use\n-  *  EINVAL was not an initialised condition variable\n-  */\n-int lthread_cond_destroy(struct lthread_cond *cond);\n-\n-/**\n-  * Wait on a condition variable\n-  *\n-  *  The function blocks the current thread waiting on the condition variable\n-  *  specified by cond. 
The waiting thread unblocks only after another thread\n-  *  calls lthread_cond_signal, or lthread_cond_broadcast, specifying the\n-  *  same condition variable.\n-  *\n-  * @param cond\n-  *  Pointer to pointer to the condition variable to be waited on\n-  *\n-  * @param reserved\n-  *  reserved for future use\n-  *\n-  * @return\n-  *  0 The condition was signalled ( Success )\n-  *  EINVAL was not a an initialised condition variable\n-  */\n-int lthread_cond_wait(struct lthread_cond *c, uint64_t reserved);\n-\n-/**\n-  * Signal a condition variable\n-  *\n-  *  The function unblocks one thread waiting for the condition variable cond.\n-  *  If no threads are waiting on cond, the rte_lthread_cond_signal() function\n-  *  has no effect.\n-  *\n-  * @param cond\n-  *  Pointer to pointer to the condition variable to be signalled\n-  *\n-  * @return\n-  *  0 The condition was signalled ( Success )\n-  *  EINVAL was not a an initialised condition variable\n-  */\n-int lthread_cond_signal(struct lthread_cond *c);\n-\n-/**\n-  * Broadcast a condition variable\n-  *\n-  *  The function unblocks all threads waiting for the condition variable cond.\n-  *  If no threads are waiting on cond, the rte_lathed_cond_broadcast()\n-  *  function has no effect.\n-  *\n-  * @param cond\n-  *  Pointer to pointer to the condition variable to be signalled\n-  *\n-  * @return\n-  *  0 The condition was signalled ( Success )\n-  *  EINVAL was not a an initialised condition variable\n-  */\n-int lthread_cond_broadcast(struct lthread_cond *c);\n-\n-#ifdef __cplusplus\n-}\n-#endif\n-\n-#endif\t\t\t\t/* LTHREAD_H */\ndiff --git a/examples/performance-thread/common/lthread_cond.c b/examples/performance-thread/common/lthread_cond.c\ndeleted file mode 100644\nindex e7be17089aad..000000000000\n--- a/examples/performance-thread/common/lthread_cond.c\n+++ /dev/null\n@@ -1,184 +0,0 @@\n-/*\n- * SPDX-License-Identifier: BSD-3-Clause\n- * Copyright 2015 Intel Corporation.\n- * Copyright 2012 Hasan Alayli <halayli@gmail.com>\n- */\n-\n-#include <stdio.h>\n-#include <stdlib.h>\n-#include <string.h>\n-#include <stdint.h>\n-#include <stddef.h>\n-#include <limits.h>\n-#include <inttypes.h>\n-#include <unistd.h>\n-#include <pthread.h>\n-#include <fcntl.h>\n-#include <sys/time.h>\n-#include <sys/mman.h>\n-#include <errno.h>\n-\n-#include <rte_log.h>\n-#include <rte_common.h>\n-#include <rte_string_fns.h>\n-\n-#include \"lthread_api.h\"\n-#include \"lthread_diag_api.h\"\n-#include \"lthread_diag.h\"\n-#include \"lthread_int.h\"\n-#include \"lthread_sched.h\"\n-#include \"lthread_queue.h\"\n-#include \"lthread_objcache.h\"\n-#include \"lthread_timer.h\"\n-#include \"lthread_mutex.h\"\n-#include \"lthread_cond.h\"\n-\n-/*\n- * Create a condition variable\n- */\n-int\n-lthread_cond_init(char *name, struct lthread_cond **cond,\n-\t\t  __rte_unused const struct lthread_condattr *attr)\n-{\n-\tstruct lthread_cond *c;\n-\n-\tif (cond == NULL)\n-\t\treturn POSIX_ERRNO(EINVAL);\n-\n-\t/* allocate a condition variable from cache */\n-\tc = _lthread_objcache_alloc((THIS_SCHED)->cond_cache);\n-\n-\tif (c == NULL)\n-\t\treturn POSIX_ERRNO(EAGAIN);\n-\n-\tc->blocked = _lthread_queue_create(\"blocked\");\n-\tif (c->blocked == NULL) {\n-\t\t_lthread_objcache_free((THIS_SCHED)->cond_cache, (void *)c);\n-\t\treturn POSIX_ERRNO(EAGAIN);\n-\t}\n-\n-\tif (name == NULL)\n-\t\tstrlcpy(c->name, \"no name\", sizeof(c->name));\n-\telse\n-\t\tstrlcpy(c->name, name, sizeof(c->name));\n-\n-\tc->root_sched = THIS_SCHED;\n-\n-\t(*cond) = 
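To illustrate the condition-variable calls documented above: unlike pthreads, lthread_cond_wait() takes no mutex and its second argument is reserved, so a simple flag plus a re-check loop is enough in this cooperatively scheduled model. All names below are illustrative.

#include "lthread_api.h"

static struct lthread_cond *data_ready;    /* illustrative condition variable */
static int have_data;

static int sync_init(void)
{
	return lthread_cond_init("data_ready", &data_ready, NULL);
}

static void consumer(void *arg)
{
	(void)arg;
	while (!have_data)                 /* re-check after every wakeup */
		lthread_cond_wait(data_ready, 0);
	/* ... consume the data ... */
}

static void producer(void *arg)
{
	(void)arg;
	have_data = 1;
	lthread_cond_signal(data_ready);   /* wakes one waiter; no-op if none */
}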
c;\n-\tDIAG_CREATE_EVENT((*cond), LT_DIAG_COND_CREATE);\n-\treturn 0;\n-}\n-\n-/*\n- * Destroy a condition variable\n- */\n-int lthread_cond_destroy(struct lthread_cond *c)\n-{\n-\tif (c == NULL) {\n-\t\tDIAG_EVENT(c, LT_DIAG_COND_DESTROY, c, POSIX_ERRNO(EINVAL));\n-\t\treturn POSIX_ERRNO(EINVAL);\n-\t}\n-\n-\t/* try to free it */\n-\tif (_lthread_queue_destroy(c->blocked) < 0) {\n-\t\t/* queue in use */\n-\t\tDIAG_EVENT(c, LT_DIAG_COND_DESTROY, c, POSIX_ERRNO(EBUSY));\n-\t\treturn POSIX_ERRNO(EBUSY);\n-\t}\n-\n-\t/* okay free it */\n-\t_lthread_objcache_free(c->root_sched->cond_cache, c);\n-\tDIAG_EVENT(c, LT_DIAG_COND_DESTROY, c, 0);\n-\treturn 0;\n-}\n-\n-/*\n- * Wait on a condition variable\n- */\n-int lthread_cond_wait(struct lthread_cond *c, __rte_unused uint64_t reserved)\n-{\n-\tstruct lthread *lt = THIS_LTHREAD;\n-\n-\tif (c == NULL) {\n-\t\tDIAG_EVENT(c, LT_DIAG_COND_WAIT, c, POSIX_ERRNO(EINVAL));\n-\t\treturn POSIX_ERRNO(EINVAL);\n-\t}\n-\n-\n-\tDIAG_EVENT(c, LT_DIAG_COND_WAIT, c, 0);\n-\n-\t/* queue the current thread in the blocked queue\n-\t * this will be written when we return to the scheduler\n-\t * to ensure that the current thread context is saved\n-\t * before any signal could result in it being dequeued and\n-\t * resumed\n-\t */\n-\tlt->pending_wr_queue = c->blocked;\n-\t_suspend();\n-\n-\t/* the condition happened */\n-\treturn 0;\n-}\n-\n-/*\n- * Signal a condition variable\n- * attempt to resume any blocked thread\n- */\n-int lthread_cond_signal(struct lthread_cond *c)\n-{\n-\tstruct lthread *lt;\n-\n-\tif (c == NULL) {\n-\t\tDIAG_EVENT(c, LT_DIAG_COND_SIGNAL, c, POSIX_ERRNO(EINVAL));\n-\t\treturn POSIX_ERRNO(EINVAL);\n-\t}\n-\n-\tlt = _lthread_queue_remove(c->blocked);\n-\n-\tif (lt != NULL) {\n-\t\t/* okay wake up this thread */\n-\t\tDIAG_EVENT(c, LT_DIAG_COND_SIGNAL, c, lt);\n-\t\t_ready_queue_insert((struct lthread_sched *)lt->sched, lt);\n-\t}\n-\treturn 0;\n-}\n-\n-/*\n- * Broadcast a condition variable\n- */\n-int lthread_cond_broadcast(struct lthread_cond *c)\n-{\n-\tstruct lthread *lt;\n-\n-\tif (c == NULL) {\n-\t\tDIAG_EVENT(c, LT_DIAG_COND_BROADCAST, c, POSIX_ERRNO(EINVAL));\n-\t\treturn POSIX_ERRNO(EINVAL);\n-\t}\n-\n-\tDIAG_EVENT(c, LT_DIAG_COND_BROADCAST, c, 0);\n-\tdo {\n-\t\t/* drain the queue waking everybody */\n-\t\tlt = _lthread_queue_remove(c->blocked);\n-\n-\t\tif (lt != NULL) {\n-\t\t\tDIAG_EVENT(c, LT_DIAG_COND_BROADCAST, c, lt);\n-\t\t\t/* wake up */\n-\t\t\t_ready_queue_insert((struct lthread_sched *)lt->sched,\n-\t\t\t\t\t    lt);\n-\t\t}\n-\t} while (!_lthread_queue_empty(c->blocked));\n-\t_reschedule();\n-\tDIAG_EVENT(c, LT_DIAG_COND_BROADCAST, c, 0);\n-\treturn 0;\n-}\n-\n-/*\n- * return the diagnostic ref val stored in a condition var\n- */\n-uint64_t\n-lthread_cond_diag_ref(struct lthread_cond *c)\n-{\n-\tif (c == NULL)\n-\t\treturn 0;\n-\treturn c->diag_ref;\n-}\ndiff --git a/examples/performance-thread/common/lthread_cond.h b/examples/performance-thread/common/lthread_cond.h\ndeleted file mode 100644\nindex 616a55c4dad2..000000000000\n--- a/examples/performance-thread/common/lthread_cond.h\n+++ /dev/null\n@@ -1,30 +0,0 @@\n-/*\n- * SPDX-License-Identifier: BSD-3-Clause\n- * Copyright 2015 Intel Corporation.\n- * Copyright 2012 Hasan Alayli <halayli@gmail.com>\n- */\n-\n-#ifndef LTHREAD_COND_H_\n-#define LTHREAD_COND_H_\n-\n-#ifdef __cplusplus\n-extern \"C\" {\n-#endif\n-\n-#include \"lthread_queue.h\"\n-\n-#define MAX_COND_NAME_SIZE 64\n-\n-struct lthread_cond {\n-\tstruct lthread_queue *blocked;\n-\tstruct lthread_sched 
*root_sched;\n-\tint count;\n-\tchar name[MAX_COND_NAME_SIZE];\n-\tuint64_t diag_ref;\t/* optional ref to user diag data */\n-} __rte_cache_aligned;\n-\n-#ifdef __cplusplus\n-}\n-#endif\n-\n-#endif\t\t\t\t/* LTHREAD_COND_H_ */\ndiff --git a/examples/performance-thread/common/lthread_diag.c b/examples/performance-thread/common/lthread_diag.c\ndeleted file mode 100644\nindex 57760a1e230c..000000000000\n--- a/examples/performance-thread/common/lthread_diag.c\n+++ /dev/null\n@@ -1,293 +0,0 @@\n-/* SPDX-License-Identifier: BSD-3-Clause\n- * Copyright(c) 2015 Intel Corporation\n- */\n-\n-#include <rte_log.h>\n-#include <rte_common.h>\n-\n-#include \"lthread_diag.h\"\n-#include \"lthread_queue.h\"\n-#include \"lthread_pool.h\"\n-#include \"lthread_objcache.h\"\n-#include \"lthread_sched.h\"\n-#include \"lthread_diag_api.h\"\n-\n-\n-/* dummy ref value of default diagnostic callback */\n-static uint64_t dummy_ref;\n-\n-#define DIAG_SCHED_STATS_FORMAT \\\n-\"core %d\\n%33s %12s %12s %12s %12s\\n\"\n-\n-#define DIAG_CACHE_STATS_FORMAT \\\n-\"%20s %12lu %12lu %12lu %12lu %12lu\\n\"\n-\n-#define DIAG_QUEUE_STATS_FORMAT \\\n-\"%20s %12lu %12lu %12lu\\n\"\n-\n-\n-/*\n- * texts used in diagnostic events,\n- * corresponding diagnostic mask bit positions are given as comment\n- */\n-const char *diag_event_text[] = {\n-\t\"LTHREAD_CREATE     \",\t/* 00 */\n-\t\"LTHREAD_EXIT       \",\t/* 01 */\n-\t\"LTHREAD_JOIN       \",\t/* 02 */\n-\t\"LTHREAD_CANCEL     \",\t/* 03 */\n-\t\"LTHREAD_DETACH     \",\t/* 04 */\n-\t\"LTHREAD_FREE       \",\t/* 05 */\n-\t\"LTHREAD_SUSPENDED  \",\t/* 06 */\n-\t\"LTHREAD_YIELD      \",\t/* 07 */\n-\t\"LTHREAD_RESCHEDULED\",\t/* 08 */\n-\t\"LTHREAD_SLEEP      \",\t/* 09 */\n-\t\"LTHREAD_RESUMED    \",\t/* 10 */\n-\t\"LTHREAD_AFFINITY   \",\t/* 11 */\n-\t\"LTHREAD_TMR_START  \",\t/* 12 */\n-\t\"LTHREAD_TMR_DELETE \",\t/* 13 */\n-\t\"LTHREAD_TMR_EXPIRED\",\t/* 14 */\n-\t\"COND_CREATE        \",\t/* 15 */\n-\t\"COND_DESTROY       \",\t/* 16 */\n-\t\"COND_WAIT          \",\t/* 17 */\n-\t\"COND_SIGNAL        \",\t/* 18 */\n-\t\"COND_BROADCAST     \",\t/* 19 */\n-\t\"MUTEX_CREATE       \",\t/* 20 */\n-\t\"MUTEX_DESTROY      \",\t/* 21 */\n-\t\"MUTEX_LOCK         \",\t/* 22 */\n-\t\"MUTEX_TRYLOCK      \",\t/* 23 */\n-\t\"MUTEX_BLOCKED      \",\t/* 24 */\n-\t\"MUTEX_UNLOCKED     \",\t/* 25 */\n-\t\"SCHED_CREATE       \",\t/* 26 */\n-\t\"SCHED_SHUTDOWN     \"\t/* 27 */\n-};\n-\n-\n-/*\n- * set diagnostic ,ask\n- */\n-void lthread_diagnostic_set_mask(DIAG_USED uint64_t mask)\n-{\n-#if LTHREAD_DIAG\n-\tdiag_mask = mask;\n-#else\n-\tRTE_LOG(INFO, LTHREAD,\n-\t\t\"LTHREAD_DIAG is not set, see lthread_diag_api.h\\n\");\n-#endif\n-}\n-\n-\n-/*\n- * Check consistency of the scheduler stats\n- * Only sensible run after the schedulers are stopped\n- * Count the number of objects lying in caches and queues\n- * and available in the qnode pool.\n- * This should be equal to the total capacity of all\n- * qnode pools.\n- */\n-void\n-_sched_stats_consistency_check(void);\n-void\n-_sched_stats_consistency_check(void)\n-{\n-#if LTHREAD_DIAG\n-\tint i;\n-\tstruct lthread_sched *sched;\n-\tuint64_t count = 0;\n-\tuint64_t capacity = 0;\n-\n-\tfor (i = 0; i < LTHREAD_MAX_LCORES; i++) {\n-\t\tsched = schedcore[i];\n-\t\tif (sched == NULL)\n-\t\t\tcontinue;\n-\n-\t\t/* each of these queues consumes a stub node */\n-\t\tcount += 8;\n-\t\tcount += DIAG_COUNT(sched->ready, size);\n-\t\tcount += DIAG_COUNT(sched->pready, size);\n-\t\tcount += DIAG_COUNT(sched->lthread_cache, available);\n-\t\tcount += 
DIAG_COUNT(sched->stack_cache, available);\n-\t\tcount += DIAG_COUNT(sched->tls_cache, available);\n-\t\tcount += DIAG_COUNT(sched->per_lthread_cache, available);\n-\t\tcount += DIAG_COUNT(sched->cond_cache, available);\n-\t\tcount += DIAG_COUNT(sched->mutex_cache, available);\n-\n-\t\t/* the node pool does not consume a stub node */\n-\t\tif (sched->qnode_pool->fast_alloc != NULL)\n-\t\t\tcount++;\n-\t\tcount += DIAG_COUNT(sched->qnode_pool, available);\n-\n-\t\tcapacity += DIAG_COUNT(sched->qnode_pool, capacity);\n-\t}\n-\tif (count != capacity) {\n-\t\tRTE_LOG(CRIT, LTHREAD,\n-\t\t\t\"Scheduler caches are inconsistent\\n\");\n-\t} else {\n-\t\tRTE_LOG(INFO, LTHREAD,\n-\t\t\t\"Scheduler caches are ok\\n\");\n-\t}\n-#endif\n-}\n-\n-\n-#if LTHREAD_DIAG\n-/*\n- * Display node pool stats\n- */\n-static inline void\n-_qnode_pool_display(DIAG_USED struct qnode_pool *p)\n-{\n-\n-\tprintf(DIAG_CACHE_STATS_FORMAT,\n-\t\t\tp->name,\n-\t\t\tDIAG_COUNT(p, rd),\n-\t\t\tDIAG_COUNT(p, wr),\n-\t\t\tDIAG_COUNT(p, available),\n-\t\t\tDIAG_COUNT(p, prealloc),\n-\t\t\tDIAG_COUNT(p, capacity));\n-\tfflush(stdout);\n-}\n-#endif\n-\n-\n-#if LTHREAD_DIAG\n-/*\n- * Display queue stats\n- */\n-static inline void\n-_lthread_queue_display(DIAG_USED struct lthread_queue *q)\n-{\n-#if DISPLAY_OBJCACHE_QUEUES\n-\tprintf(DIAG_QUEUE_STATS_FORMAT,\n-\t\t\tq->name,\n-\t\t\tDIAG_COUNT(q, rd),\n-\t\t\tDIAG_COUNT(q, wr),\n-\t\t\tDIAG_COUNT(q, size));\n-\tfflush(stdout);\n-#else\n-\tprintf(\"%s: queue stats disabled\\n\",\n-\t\t\tq->name);\n-\n-#endif\n-}\n-#endif\n-\n-#if LTHREAD_DIAG\n-/*\n- * Display objcache stats\n- */\n-static inline void\n-_objcache_display(DIAG_USED struct lthread_objcache *c)\n-{\n-\n-\tprintf(DIAG_CACHE_STATS_FORMAT,\n-\t\t\tc->name,\n-\t\t\tDIAG_COUNT(c, rd),\n-\t\t\tDIAG_COUNT(c, wr),\n-\t\t\tDIAG_COUNT(c, available),\n-\t\t\tDIAG_COUNT(c, prealloc),\n-\t\t\tDIAG_COUNT(c, capacity));\n-\t_lthread_queue_display(c->q);\n-\tfflush(stdout);\n-}\n-#endif\n-\n-/*\n- * Display sched stats\n- */\n-void\n-lthread_sched_stats_display(void)\n-{\n-#if LTHREAD_DIAG\n-\tint i;\n-\tstruct lthread_sched *sched;\n-\n-\tfor (i = 0; i < LTHREAD_MAX_LCORES; i++) {\n-\t\tsched = schedcore[i];\n-\t\tif (sched != NULL) {\n-\t\t\tprintf(DIAG_SCHED_STATS_FORMAT,\n-\t\t\t\t\tsched->lcore_id,\n-\t\t\t\t\t\"rd\",\n-\t\t\t\t\t\"wr\",\n-\t\t\t\t\t\"present\",\n-\t\t\t\t\t\"nb preallocs\",\n-\t\t\t\t\t\"capacity\");\n-\t\t\t_lthread_queue_display(sched->ready);\n-\t\t\t_lthread_queue_display(sched->pready);\n-\t\t\t_qnode_pool_display(sched->qnode_pool);\n-\t\t\t_objcache_display(sched->lthread_cache);\n-\t\t\t_objcache_display(sched->stack_cache);\n-\t\t\t_objcache_display(sched->tls_cache);\n-\t\t\t_objcache_display(sched->per_lthread_cache);\n-\t\t\t_objcache_display(sched->cond_cache);\n-\t\t\t_objcache_display(sched->mutex_cache);\n-\t\tfflush(stdout);\n-\t\t}\n-\t}\n-\t_sched_stats_consistency_check();\n-#else\n-\tRTE_LOG(INFO, LTHREAD,\n-\t\t\"lthread diagnostics disabled\\n\"\n-\t\t\"hint - set LTHREAD_DIAG in lthread_diag_api.h\\n\");\n-#endif\n-}\n-\n-/*\n- * Defafult diagnostic callback\n- */\n-static uint64_t\n-_lthread_diag_default_cb(uint64_t time, struct lthread *lt, int diag_event,\n-\t\tuint64_t diag_ref, const char *text, uint64_t p1, uint64_t p2)\n-{\n-\tuint64_t _p2;\n-\tint lcore = (int) rte_lcore_id();\n-\n-\tswitch (diag_event) {\n-\tcase LT_DIAG_LTHREAD_CREATE:\n-\tcase LT_DIAG_MUTEX_CREATE:\n-\tcase LT_DIAG_COND_CREATE:\n-\t\t_p2 = dummy_ref;\n-\t\tbreak;\n-\tdefault:\n-\t\t_p2 = 
p2;\n-\t\tbreak;\n-\t}\n-\n-\tprintf(\"%\"PRIu64\" %d %8.8lx %8.8lx %s %8.8lx %8.8lx\\n\",\n-\t\ttime,\n-\t\tlcore,\n-\t\t(uint64_t) lt,\n-\t\tdiag_ref,\n-\t\ttext,\n-\t\tp1,\n-\t\t_p2);\n-\n-\treturn dummy_ref++;\n-}\n-\n-/*\n- * plug in default diag callback with mask off\n- */\n-RTE_INIT(_lthread_diag_ctor)\n-{\n-\tdiag_cb = _lthread_diag_default_cb;\n-\tdiag_mask = 0;\n-}\n-\n-\n-/*\n- * enable diagnostics\n- */\n-void lthread_diagnostic_enable(DIAG_USED diag_callback cb,\n-\t\t\t\tDIAG_USED uint64_t mask)\n-{\n-#if LTHREAD_DIAG\n-\tif (cb == NULL)\n-\t\tdiag_cb = _lthread_diag_default_cb;\n-\telse\n-\t\tdiag_cb = cb;\n-\tdiag_mask = mask;\n-#else\n-\tRTE_LOG(INFO, LTHREAD,\n-\t\t\"LTHREAD_DIAG is not set, see lthread_diag_api.h\\n\");\n-#endif\n-}\ndiff --git a/examples/performance-thread/common/lthread_diag.h b/examples/performance-thread/common/lthread_diag.h\ndeleted file mode 100644\nindex 7ee89eef388d..000000000000\n--- a/examples/performance-thread/common/lthread_diag.h\n+++ /dev/null\n@@ -1,112 +0,0 @@\n-/* SPDX-License-Identifier: BSD-3-Clause\n- * Copyright(c) 2015 Intel Corporation\n- */\n-\n-#ifndef LTHREAD_DIAG_H_\n-#define LTHREAD_DIAG_H_\n-\n-#ifdef __cplusplus\n-extern \"C\" {\n-#endif\n-\n-#include <stdint.h>\n-#include <inttypes.h>\n-\n-#include <rte_log.h>\n-#include <rte_common.h>\n-\n-#include \"lthread_api.h\"\n-#include \"lthread_diag_api.h\"\n-\n-extern diag_callback diag_cb;\n-\n-extern const char *diag_event_text[];\n-extern uint64_t diag_mask;\n-\n-/* max size of name strings */\n-#define LT_MAX_NAME_SIZE 64\n-\n-#if LTHREAD_DIAG\n-#define DISPLAY_OBJCACHE_QUEUES 1\n-\n-/*\n- * Generate a diagnostic trace or event in the case where an object is created.\n- *\n- * The value returned by the callback is stored in the object.\n- *\n- * @ param obj\n- *  pointer to the object that was created\n- * @ param ev\n- *  the event code\n- *\n- */\n-#define DIAG_CREATE_EVENT(obj, ev) do {\t\t\t\t\t\\\n-\tstruct lthread *ct = RTE_PER_LCORE(this_sched)->current_lthread;\\\n-\tif ((BIT(ev) & diag_mask) && (ev < LT_DIAG_EVENT_MAX)) {\t\\\n-\t\t(obj)->diag_ref = (diag_cb)(rte_rdtsc(),\t\t\\\n-\t\t\t\t\tct,\t\t\t\t\\\n-\t\t\t\t\t(ev),\t\t\t\t\\\n-\t\t\t\t\t0,\t\t\t\t\\\n-\t\t\t\t\tdiag_event_text[(ev)],\t\t\\\n-\t\t\t\t\t(uint64_t)obj,\t\t\t\\\n-\t\t\t\t\t0);\t\t\t\t\\\n-\t}\t\t\t\t\t\t\t\t\\\n-} while (0)\n-\n-/*\n- * Generate a diagnostic trace event.\n- *\n- * @ param obj\n- *  pointer to the lthread, cond or mutex object\n- * @ param ev\n- *  the event code\n- * @ param p1\n- *  object specific value ( see lthread_diag_api.h )\n- * @ param p2\n- *  object specific value ( see lthread_diag_api.h )\n- */\n-#define DIAG_EVENT(obj, ev, p1, p2) do {\t\t\t\t\\\n-\tstruct lthread *ct = RTE_PER_LCORE(this_sched)->current_lthread;\\\n-\tif ((BIT(ev) & diag_mask) && (ev < LT_DIAG_EVENT_MAX)) {\t\\\n-\t\t(diag_cb)(rte_rdtsc(),\t\t\t\t\t\\\n-\t\t\t\tct,\t\t\t\t\t\\\n-\t\t\t\tev,\t\t\t\t\t\\\n-\t\t\t\t(obj)->diag_ref,\t\t\t\\\n-\t\t\t\tdiag_event_text[(ev)],\t\t\t\\\n-\t\t\t\t(uint64_t)(p1),\t\t\t\t\\\n-\t\t\t\t(uint64_t)(p2));\t\t\t\\\n-\t}\t\t\t\t\t\t\t\t\\\n-} while (0)\n-\n-#define DIAG_COUNT_DEFINE(x) uint64_t count_##x\n-#define DIAG_COUNT_INIT(o, x) __atomic_store_n(&((o)->count_##x), 0, __ATOMIC_RELAXED)\n-#define DIAG_COUNT_INC(o, x) __atomic_fetch_add(&((o)->count_##x), 1, __ATOMIC_RELAXED)\n-#define DIAG_COUNT_DEC(o, x) __atomic_fetch_sub(&((o)->count_##x), 1, __ATOMIC_RELAXED)\n-#define DIAG_COUNT(o, x) __atomic_load_n(&((o)->count_##x), __ATOMIC_RELAXED)\n-\n-#define 
DIAG_USED\n-\n-#else\n-\n-/* no diagnostics configured */\n-\n-#define DISPLAY_OBJCACHE_QUEUES 0\n-\n-#define DIAG_CREATE_EVENT(obj, ev)\n-#define DIAG_EVENT(obj, ev, p1, p)\n-\n-#define DIAG_COUNT_DEFINE(x)\n-#define DIAG_COUNT_INIT(o, x) do {} while (0)\n-#define DIAG_COUNT_INC(o, x) do {} while (0)\n-#define DIAG_COUNT_DEC(o, x) do {} while (0)\n-#define DIAG_COUNT(o, x) 0\n-\n-#define DIAG_USED __rte_unused\n-\n-#endif\t\t\t\t/* LTHREAD_DIAG */\n-\n-#ifdef __cplusplus\n-}\n-#endif\n-\n-#endif\t\t\t\t/* LTHREAD_DIAG_H_ */\ndiff --git a/examples/performance-thread/common/lthread_diag_api.h b/examples/performance-thread/common/lthread_diag_api.h\ndeleted file mode 100644\nindex d65f486ecea0..000000000000\n--- a/examples/performance-thread/common/lthread_diag_api.h\n+++ /dev/null\n@@ -1,304 +0,0 @@\n-/* SPDX-License-Identifier: BSD-3-Clause\n- * Copyright(c) 2015 Intel Corporation\n- */\n-#ifndef LTHREAD_DIAG_API_H_\n-#define LTHREAD_DIAG_API_H_\n-\n-#ifdef __cplusplus\n-extern \"C\" {\n-#endif\n-\n-#include <stdint.h>\n-#include <inttypes.h>\n-\n-/*\n- * Enable diagnostics\n- * 0 = conditionally compiled out\n- * 1 = compiled in and maskable at run time, see below for details\n- */\n-#define LTHREAD_DIAG 0\n-\n-/**\n- *\n- * @file lthread_diag_api.h\n- *\n- * @warning\n- * @b EXPERIMENTAL: this API may change without prior notice\n- *\n- * lthread diagnostic interface\n- *\n- * If enabled via configuration file option ( tbd ) the lthread subsystem\n- * can generate selected trace information, either RTE_LOG  (INFO) messages,\n- * or else invoke a user supplied callback function when any of the events\n- * listed below occur.\n- *\n- * Reporting of events can be selectively masked, the bit position in the\n- * mask is determined by the corresponding event identifier listed below.\n- *\n- * Diagnostics are enabled by registering the callback function and mask\n- * using the API lthread_diagnostic_enable().\n- *\n- * Various interesting parameters are passed to the callback, including the\n- * time in cpu clks, the lthread id, the diagnostic event id, a user ref value,\n- * event text string, object being traced, and two context dependent parameters\n- * (p1 and p2). 
The meaning of the two parameters p1 and p2 depends on\n- * the specific event.\n- *\n- * The events LT_DIAG_LTHREAD_CREATE, LT_DIAG_MUTEX_CREATE and\n- * LT_DIAG_COND_CREATE are implicitly enabled if the event mask includes any of\n- * the LT_DIAG_LTHREAD_XXX, LT_DIAG_MUTEX_XXX or LT_DIAG_COND_XXX events\n- * respectively.\n- *\n- * These create events may also be included in the mask discreetly if it is\n- * desired to monitor only create events.\n- *\n- * @param  time\n- *  The time in cpu clks at which the event occurred\n- *\n- * @param  lthread\n- *  The current lthread\n- *\n- * @param diag_event\n- *  The diagnostic event id (bit position in the mask)\n- *\n- * @param  diag_ref\n- *\n- * For LT_DIAG_LTHREAD_CREATE, LT_DIAG_MUTEX_CREATE or LT_DIAG_COND_CREATE\n- * this parameter is not used and set to 0.\n- * All other events diag_ref contains the user ref value returned by the\n- * callback function when lthread is created.\n- *\n- * The diag_ref values assigned to mutex and cond var can be retrieved\n- * using the APIs lthread_mutex_diag_ref(), and lthread_cond_diag_ref()\n- * respectively.\n- *\n- * @param p1\n- *  see below\n- *\n- * @param p1\n- *  see below\n- *\n- * @returns\n- * For LT_DIAG_LTHREAD_CREATE, LT_DIAG_MUTEX_CREATE or LT_DIAG_COND_CREATE\n- * expects a user diagnostic ref value that will be saved in the lthread, mutex\n- * or cond var.\n- *\n- * For all other events return value is ignored.\n- *\n- *\tLT_DIAG_SCHED_CREATE - Invoked when a scheduler is created\n- *\t\tp1 = the scheduler that was created\n- *\t\tp2 = not used\n- *\t\treturn value will be ignored\n- *\n- *\tLT_DIAG_SCHED_SHUTDOWN - Invoked when a shutdown request is received\n- *\t\tp1 = the scheduler to be shutdown\n- *\t\tp2 = not used\n- *\t\treturn value will be ignored\n- *\n- *\tLT_DIAG_LTHREAD_CREATE - Invoked when a thread is created\n- *\t\tp1 = the lthread that was created\n- *\t\tp2 = not used\n- *\t\treturn value will be stored in the lthread\n- *\n- *\tLT_DIAG_LTHREAD_EXIT - Invoked when a lthread exits\n- *\t\tp2 = 0 if the thread was already joined\n- *\t\tp2 = 1 if the thread was not already joined\n- *\t\treturn val ignored\n- *\n- *\tLT_DIAG_LTHREAD_JOIN - Invoked when a lthread exits\n- *\t\tp1 = the lthread that is being joined\n- *\t\tp2 = 0 if the thread was already exited\n- *\t\tp2 = 1 if the thread was not already exited\n- *\t\treturn val ignored\n- *\n- *\tLT_DIAG_LTHREAD_CANCELLED - Invoked when an lthread is cancelled\n- *\t\tp1 = not used\n- *\t\tp2 = not used\n- *\t\treturn val ignored\n- *\n- *\tLT_DIAG_LTHREAD_DETACH - Invoked when an lthread is detached\n- *\t\tp1 = not used\n- *\t\tp2 = not used\n- *\t\treturn val ignored\n- *\n- *\tLT_DIAG_LTHREAD_FREE - Invoked when an lthread is freed\n- *\t\tp1 = not used\n- *\t\tp2 = not used\n- *\t\treturn val ignored\n- *\n- *\tLT_DIAG_LTHREAD_SUSPENDED - Invoked when an lthread is suspended\n- *\t\tp1 = not used\n- *\t\tp2 = not used\n- *\t\treturn val ignored\n- *\n- *\tLT_DIAG_LTHREAD_YIELD - Invoked when an lthread explicitly yields\n- *\t\tp1 = not used\n- *\t\tp2 = not used\n- *\t\treturn val ignored\n- *\n- *\tLT_DIAG_LTHREAD_RESCHEDULED - Invoked when an lthread is rescheduled\n- *\t\tp1 = not used\n- *\t\tp2 = not used\n- *\t\treturn val ignored\n- *\n- *\tLT_DIAG_LTHREAD_RESUMED - Invoked when an lthread is resumed\n- *\t\tp1 = not used\n- *\t\tp2 = not used\n- *\t\treturn val ignored\n- *\n- *\tLT_DIAG_LTHREAD_AFFINITY - Invoked when an lthread is affinitised\n- *\t\tp1 = the destination lcore_id\n- *\t\tp2 = not 
used\n- *\t\treturn val ignored\n- *\n- *\tLT_DIAG_LTHREAD_TMR_START - Invoked when an lthread starts a timer\n- *\t\tp1 = address of timer node\n- *\t\tp2 = the timeout value\n- *\t\treturn val ignored\n- *\n- *\tLT_DIAG_LTHREAD_TMR_DELETE - Invoked when an lthread deletes a timer\n- *\t\tp1 = address of the timer node\n- *\t\tp2 = 0 the timer and the was successfully deleted\n- *\t\tp2 = not usee\n- *\t\treturn val ignored\n- *\n- *\tLT_DIAG_LTHREAD_TMR_EXPIRED - Invoked when an lthread timer expires\n- *\t\tp1 = address of scheduler the timer expired on\n- *\t\tp2 = the thread associated with the timer\n- *\t\treturn val ignored\n- *\n- *\tLT_DIAG_COND_CREATE - Invoked when a condition variable is created\n- *\t\tp1 = address of cond var that was created\n- *\t\tp2 = not used\n- *\t\treturn diag ref value will be stored in the condition variable\n- *\n- *\tLT_DIAG_COND_DESTROY - Invoked when a condition variable is destroyed\n- *\t\tp1 = not used\n- *\t\tp2 = not used\n- *\t\treturn val ignored\n- *\n- *\tLT_DIAG_COND_WAIT - Invoked when an lthread waits on a cond var\n- *\t\tp1 = the address of the condition variable\n- *\t\tp2 = not used\n- *\t\treturn val ignored\n- *\n- *\tLT_DIAG_COND_SIGNAL - Invoked when an lthread signals a cond var\n- *\t\tp1 = the address of the cond var\n- *\t\tp2 = the lthread that was signalled, or error code\n- *\t\treturn val ignored\n- *\n- *\tLT_DIAG_COND_BROADCAST - Invoked when an lthread broadcasts a cond var\n- *\t\tp1 = the address of the condition variable\n- *\t\tp2 = the lthread(s) that are signalled, or error code\n- *\n- *\tLT_DIAG_MUTEX_CREATE - Invoked when a mutex is created\n- *\t\tp1 = address of muex\n- *\t\tp2 = not used\n- *\t\treturn diag ref value will be stored in the mutex variable\n- *\n- *\tLT_DIAG_MUTEX_DESTROY - Invoked when a mutex is destroyed\n- *\t\tp1 = address of mutex\n- *\t\tp2 = not used\n- *\t\treturn val ignored\n- *\n- *\tLT_DIAG_MUTEX_LOCK - Invoked when a mutex lock is obtained\n- *\t\tp1 = address of mutex\n- *\t\tp2 = function return value\n- *\t\treturn val ignored\n- *\n- *\tLT_DIAG_MUTEX_BLOCKED  - Invoked when an lthread blocks on a mutex\n- *\t\tp1 = address of mutex\n- *\t\tp2 = function return value\n- *\t\treturn val ignored\n- *\n- *\tLT_DIAG_MUTEX_TRYLOCK - Invoked when a mutex try lock is attempted\n- *\t\tp1 = address of mutex\n- *\t\tp2 = the function return value\n- *\t\treturn val ignored\n- *\n- *\tLT_DIAG_MUTEX_UNLOCKED - Invoked when a mutex is unlocked\n- *\t\tp1 = address of mutex\n- *\t\tp2 = the thread that was unlocked, or error code\n- *\t\treturn val ignored\n- */\n-typedef uint64_t (*diag_callback) (uint64_t time, struct lthread *lt,\n-\t\t\t\t  int diag_event, uint64_t diag_ref,\n-\t\t\t\tconst char *text, uint64_t p1, uint64_t p2);\n-\n-/*\n- * Set user diagnostic callback and mask\n- * If the callback function pointer is NULL the default\n- * callback handler will be restored.\n- */\n-void lthread_diagnostic_enable(diag_callback cb, uint64_t diag_mask);\n-\n-/*\n- * Set diagnostic mask\n- */\n-void lthread_diagnostic_set_mask(uint64_t mask);\n-\n-/*\n- * lthread diagnostic callback\n- */\n-enum lthread_diag_ev {\n-\t/* bits 0 - 14 lthread flag group */\n-\tLT_DIAG_LTHREAD_CREATE,\t\t/* 00 mask 0x00000001 */\n-\tLT_DIAG_LTHREAD_EXIT,\t\t/* 01 mask 0x00000002 */\n-\tLT_DIAG_LTHREAD_JOIN,\t\t/* 02 mask 0x00000004 */\n-\tLT_DIAG_LTHREAD_CANCEL,\t\t/* 03 mask 0x00000008 */\n-\tLT_DIAG_LTHREAD_DETACH,\t\t/* 04 mask 0x00000010 */\n-\tLT_DIAG_LTHREAD_FREE,\t\t/* 05 mask 0x00000020 
*/\n-\tLT_DIAG_LTHREAD_SUSPENDED,\t/* 06 mask 0x00000040 */\n-\tLT_DIAG_LTHREAD_YIELD,\t\t/* 07 mask 0x00000080 */\n-\tLT_DIAG_LTHREAD_RESCHEDULED,\t/* 08 mask 0x00000100 */\n-\tLT_DIAG_LTHREAD_SLEEP,\t\t/* 09 mask 0x00000200 */\n-\tLT_DIAG_LTHREAD_RESUMED,\t/* 10 mask 0x00000400 */\n-\tLT_DIAG_LTHREAD_AFFINITY,\t/* 11 mask 0x00000800 */\n-\tLT_DIAG_LTHREAD_TMR_START,\t/* 12 mask 0x00001000 */\n-\tLT_DIAG_LTHREAD_TMR_DELETE,\t/* 13 mask 0x00002000 */\n-\tLT_DIAG_LTHREAD_TMR_EXPIRED,\t/* 14 mask 0x00004000 */\n-\t/* bits 15 - 19 conditional variable flag group */\n-\tLT_DIAG_COND_CREATE,\t\t/* 15 mask 0x00008000 */\n-\tLT_DIAG_COND_DESTROY,\t\t/* 16 mask 0x00010000 */\n-\tLT_DIAG_COND_WAIT,\t\t/* 17 mask 0x00020000 */\n-\tLT_DIAG_COND_SIGNAL,\t\t/* 18 mask 0x00040000 */\n-\tLT_DIAG_COND_BROADCAST,\t\t/* 19 mask 0x00080000 */\n-\t/* bits 20 - 25 mutex flag group */\n-\tLT_DIAG_MUTEX_CREATE,\t\t/* 20 mask 0x00100000 */\n-\tLT_DIAG_MUTEX_DESTROY,\t\t/* 21 mask 0x00200000 */\n-\tLT_DIAG_MUTEX_LOCK,\t\t/* 22 mask 0x00400000 */\n-\tLT_DIAG_MUTEX_TRYLOCK,\t\t/* 23 mask 0x00800000 */\n-\tLT_DIAG_MUTEX_BLOCKED,\t\t/* 24 mask 0x01000000 */\n-\tLT_DIAG_MUTEX_UNLOCKED,\t\t/* 25 mask 0x02000000 */\n-\t/* bits 26 - 27 scheduler flag group - 8 bits */\n-\tLT_DIAG_SCHED_CREATE,\t\t/* 26 mask 0x04000000 */\n-\tLT_DIAG_SCHED_SHUTDOWN,\t\t/* 27 mask 0x08000000 */\n-\tLT_DIAG_EVENT_MAX\n-};\n-\n-#define LT_DIAG_ALL 0xffffffffffffffff\n-\n-\n-/*\n- * Display scheduler stats\n- */\n-void\n-lthread_sched_stats_display(void);\n-\n-/*\n- * return the diagnostic ref val stored in a condition var\n- */\n-uint64_t\n-lthread_cond_diag_ref(struct lthread_cond *c);\n-\n-/*\n- * return the diagnostic ref val stored in a mutex\n- */\n-uint64_t\n-lthread_mutex_diag_ref(struct lthread_mutex *m);\n-\n-#ifdef __cplusplus\n-}\n-#endif\n-\n-#endif\t\t\t\t/* LTHREAD_DIAG_API_H_ */\ndiff --git a/examples/performance-thread/common/lthread_int.h b/examples/performance-thread/common/lthread_int.h\ndeleted file mode 100644\nindex d010126f1681..000000000000\n--- a/examples/performance-thread/common/lthread_int.h\n+++ /dev/null\n@@ -1,151 +0,0 @@\n-/*\n- * SPDX-License-Identifier: BSD-3-Clause\n- * Copyright 2015 Intel Corporation.\n- * Copyright 2012 Hasan Alayli <halayli@gmail.com>\n- */\n-#ifndef LTHREAD_INT_H\n-#define LTHREAD_INT_H\n-\n-#ifdef __cplusplus\n-extern \"C\" {\n-#endif\n-\n-#include <stdint.h>\n-#include <sys/time.h>\n-#include <sys/types.h>\n-#include <errno.h>\n-#include <pthread.h>\n-#include <time.h>\n-\n-#include <rte_memory.h>\n-#include <rte_cycles.h>\n-#include <rte_per_lcore.h>\n-#include <rte_timer.h>\n-#include <rte_spinlock.h>\n-#include <ctx.h>\n-\n-#include <lthread_api.h>\n-#include \"lthread.h\"\n-#include \"lthread_diag.h\"\n-#include \"lthread_tls.h\"\n-\n-struct lthread;\n-struct lthread_sched;\n-struct lthread_cond;\n-struct lthread_mutex;\n-struct lthread_key;\n-\n-struct key_pool;\n-struct qnode;\n-struct qnode_pool;\n-struct lthread_sched;\n-struct lthread_tls;\n-\n-\n-#define BIT(x) (1 << (x))\n-#define CLEARBIT(x) ~(1 << (x))\n-\n-#define POSIX_ERRNO(x)  (x)\n-\n-#define MAX_LTHREAD_NAME_SIZE 64\n-\n-#define RTE_LOGTYPE_LTHREAD RTE_LOGTYPE_USER1\n-\n-\n-/* define some shorthand for current scheduler and current thread */\n-#define THIS_SCHED RTE_PER_LCORE(this_sched)\n-#define THIS_LTHREAD RTE_PER_LCORE(this_sched)->current_lthread\n-\n-/*\n- * Definition of an scheduler struct\n- */\n-struct lthread_sched {\n-\tstruct ctx ctx;\t\t\t\t\t/* cpu context */\n-\tuint64_t birth;\t\t\t\t\t/* time 
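A sketch of how the diagnostic hook described above could be installed, assuming LTHREAD_DIAG has been set to 1 in lthread_diag_api.h; the callback body and the chosen event mask are illustrative. The return value only matters for the *_CREATE events, where it is stored as the object's diag_ref.

#include <stdio.h>
#include <inttypes.h>
#include "lthread_diag_api.h"

static uint64_t trace_cb(uint64_t time, struct lthread *lt, int diag_event,
			 uint64_t diag_ref, const char *text,
			 uint64_t p1, uint64_t p2)
{
	(void)diag_event;
	printf("%" PRIu64 " lt=%p ref=%" PRIu64 " %s p1=%" PRIx64 " p2=%" PRIx64 "\n",
	       time, (void *)lt, diag_ref, text, p1, p2);
	return 0;   /* becomes diag_ref for create events, ignored otherwise */
}

static void enable_mutex_tracing(void)
{
	uint64_t mask = (1ULL << LT_DIAG_MUTEX_LOCK) |
			(1ULL << LT_DIAG_MUTEX_BLOCKED) |
			(1ULL << LT_DIAG_MUTEX_UNLOCKED);

	lthread_diagnostic_enable(trace_cb, mask);   /* NULL cb restores the default */
}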
created */\n-\tstruct lthread *current_lthread;\t\t/* running thread */\n-\tunsigned lcore_id;\t\t\t\t/* this sched lcore */\n-\tint run_flag;\t\t\t\t\t/* sched shutdown */\n-\tuint64_t nb_blocked_threads;\t/* blocked threads */\n-\tstruct lthread_queue *ready;\t\t\t/* local ready queue */\n-\tstruct lthread_queue *pready;\t\t\t/* peer ready queue */\n-\tstruct lthread_objcache *lthread_cache;\t\t/* free lthreads */\n-\tstruct lthread_objcache *stack_cache;\t\t/* free stacks */\n-\tstruct lthread_objcache *per_lthread_cache;\t/* free per lthread */\n-\tstruct lthread_objcache *tls_cache;\t\t/* free TLS */\n-\tstruct lthread_objcache *cond_cache;\t\t/* free cond vars */\n-\tstruct lthread_objcache *mutex_cache;\t\t/* free mutexes */\n-\tstruct qnode_pool *qnode_pool;\t\t/* pool of queue nodes */\n-\tstruct key_pool *key_pool;\t\t/* pool of free TLS keys */\n-\tsize_t stack_size;\n-\tuint64_t diag_ref;\t\t\t\t/* diag ref */\n-} __rte_cache_aligned;\n-\n-RTE_DECLARE_PER_LCORE(struct lthread_sched *, this_sched);\n-\n-\n-/*\n- * State for an lthread\n- */\n-enum lthread_st {\n-\tST_LT_INIT,\t\t/* initial state */\n-\tST_LT_READY,\t\t/* lthread is ready to run */\n-\tST_LT_SLEEPING,\t\t/* lthread is sleeping */\n-\tST_LT_EXPIRED,\t\t/* lthread timeout has expired  */\n-\tST_LT_EXITED,\t\t/* lthread has exited and needs cleanup */\n-\tST_LT_DETACH,\t\t/* lthread frees on exit*/\n-\tST_LT_CANCELLED,\t/* lthread has been cancelled */\n-};\n-\n-/*\n- * lthread sub states for exit/join\n- */\n-enum join_st {\n-\tLT_JOIN_INITIAL,\t/* initial state */\n-\tLT_JOIN_EXITING,\t/* thread is exiting */\n-\tLT_JOIN_THREAD_SET,\t/* joining thread has been set */\n-\tLT_JOIN_EXIT_VAL_SET,\t/* exiting thread has set ret val */\n-\tLT_JOIN_EXIT_VAL_READ,\t/* joining thread has collected ret val */\n-};\n-\n-/* defnition of an lthread stack object */\n-struct lthread_stack {\n-\tuint8_t stack[LTHREAD_MAX_STACK_SIZE];\n-\tsize_t stack_size;\n-\tstruct lthread_sched *root_sched;\n-} __rte_cache_aligned;\n-\n-/*\n- * Definition of an lthread\n- */\n-struct lthread {\n-\tstruct ctx ctx;\t\t\t\t/* cpu context */\n-\n-\tuint64_t state;\t\t\t\t/* current lthread state */\n-\n-\tstruct lthread_sched *sched;\t\t/* current scheduler */\n-\tvoid *stack;\t\t\t\t/* ptr to actual stack */\n-\tsize_t stack_size;\t\t\t/* current stack_size */\n-\tsize_t last_stack_size;\t\t\t/* last yield  stack_size */\n-\tlthread_func_t fun;\t\t\t/* func ctx is running */\n-\tvoid *arg;\t\t\t\t/* func args passed to func */\n-\tvoid *per_lthread_data;\t\t\t/* per lthread user data */\n-\tlthread_exit_func exit_handler;\t\t/* called when thread exits */\n-\tuint64_t birth;\t\t\t\t/* time lthread was born */\n-\tstruct lthread_queue *pending_wr_queue;\t/* deferred  queue to write */\n-\tstruct lthread *lt_join;\t\t/* lthread to join on */\n-\tuint64_t join;\t\t\t\t/* state for joining */\n-\tvoid **lt_exit_ptr;\t\t\t/* exit ptr for lthread_join */\n-\tstruct lthread_sched *root_sched;\t/* thread was created here*/\n-\tstruct queue_node *qnode;\t\t/* node when in a queue */\n-\tstruct rte_timer tim;\t\t\t/* sleep timer */\n-\tstruct lthread_tls *tls;\t\t/* keys in use by the thread */\n-\tstruct lthread_stack *stack_container;\t/* stack */\n-\tchar funcname[MAX_LTHREAD_NAME_SIZE];\t/* thread func name */\n-\tuint64_t diag_ref;\t\t\t/* ref to user diag data */\n-} __rte_cache_aligned;\n-\n-#ifdef __cplusplus\n-}\n-#endif\n-\n-#endif\t\t\t\t/* LTHREAD_INT_H */\ndiff --git a/examples/performance-thread/common/lthread_mutex.c 
b/examples/performance-thread/common/lthread_mutex.c\ndeleted file mode 100644\nindex f3ec7c1c60cd..000000000000\n--- a/examples/performance-thread/common/lthread_mutex.c\n+++ /dev/null\n@@ -1,226 +0,0 @@\n-/* SPDX-License-Identifier: BSD-3-Clause\n- * Copyright(c) 2015 Intel Corporation\n- */\n-\n-#include <stdio.h>\n-#include <stdlib.h>\n-#include <string.h>\n-#include <stdint.h>\n-#include <stddef.h>\n-#include <limits.h>\n-#include <inttypes.h>\n-#include <unistd.h>\n-#include <pthread.h>\n-#include <fcntl.h>\n-#include <sys/time.h>\n-#include <sys/mman.h>\n-\n-#include <rte_per_lcore.h>\n-#include <rte_log.h>\n-#include <rte_spinlock.h>\n-#include <rte_common.h>\n-#include <rte_string_fns.h>\n-\n-#include \"lthread_api.h\"\n-#include \"lthread_int.h\"\n-#include \"lthread_mutex.h\"\n-#include \"lthread_sched.h\"\n-#include \"lthread_queue.h\"\n-#include \"lthread_objcache.h\"\n-#include \"lthread_diag.h\"\n-\n-/*\n- * Create a mutex\n- */\n-int\n-lthread_mutex_init(char *name, struct lthread_mutex **mutex,\n-\t\t   __rte_unused const struct lthread_mutexattr *attr)\n-{\n-\tstruct lthread_mutex *m;\n-\n-\tif (mutex == NULL)\n-\t\treturn POSIX_ERRNO(EINVAL);\n-\n-\n-\tm = _lthread_objcache_alloc((THIS_SCHED)->mutex_cache);\n-\tif (m == NULL)\n-\t\treturn POSIX_ERRNO(EAGAIN);\n-\n-\tm->blocked = _lthread_queue_create(\"blocked queue\");\n-\tif (m->blocked == NULL) {\n-\t\t_lthread_objcache_free((THIS_SCHED)->mutex_cache, m);\n-\t\treturn POSIX_ERRNO(EAGAIN);\n-\t}\n-\n-\tif (name == NULL)\n-\t\tstrlcpy(m->name, \"no name\", sizeof(m->name));\n-\telse\n-\t\tstrlcpy(m->name, name, sizeof(m->name));\n-\n-\tm->root_sched = THIS_SCHED;\n-\tm->owner = NULL;\n-\n-\t__atomic_store_n(&m->count, 0, __ATOMIC_RELAXED);\n-\n-\tDIAG_CREATE_EVENT(m, LT_DIAG_MUTEX_CREATE);\n-\t/* success */\n-\t(*mutex) = m;\n-\treturn 0;\n-}\n-\n-/*\n- * Destroy a mutex\n- */\n-int lthread_mutex_destroy(struct lthread_mutex *m)\n-{\n-\tif ((m == NULL) || (m->blocked == NULL)) {\n-\t\tDIAG_EVENT(m, LT_DIAG_MUTEX_DESTROY, m, POSIX_ERRNO(EINVAL));\n-\t\treturn POSIX_ERRNO(EINVAL);\n-\t}\n-\n-\tif (m->owner == NULL) {\n-\t\t/* try to delete the blocked queue */\n-\t\tif (_lthread_queue_destroy(m->blocked) < 0) {\n-\t\t\tDIAG_EVENT(m, LT_DIAG_MUTEX_DESTROY,\n-\t\t\t\t\tm, POSIX_ERRNO(EBUSY));\n-\t\t\treturn POSIX_ERRNO(EBUSY);\n-\t\t}\n-\n-\t\t/* free the mutex to cache */\n-\t\t_lthread_objcache_free(m->root_sched->mutex_cache, m);\n-\t\tDIAG_EVENT(m, LT_DIAG_MUTEX_DESTROY, m, 0);\n-\t\treturn 0;\n-\t}\n-\t/* can't do its still in use */\n-\tDIAG_EVENT(m, LT_DIAG_MUTEX_DESTROY, m, POSIX_ERRNO(EBUSY));\n-\treturn POSIX_ERRNO(EBUSY);\n-}\n-\n-/*\n- * Try to obtain a mutex\n- */\n-int lthread_mutex_lock(struct lthread_mutex *m)\n-{\n-\tstruct lthread *lt = THIS_LTHREAD;\n-\n-\tif ((m == NULL) || (m->blocked == NULL)) {\n-\t\tDIAG_EVENT(m, LT_DIAG_MUTEX_LOCK, m, POSIX_ERRNO(EINVAL));\n-\t\treturn POSIX_ERRNO(EINVAL);\n-\t}\n-\n-\t/* allow no recursion */\n-\tif (m->owner == lt) {\n-\t\tDIAG_EVENT(m, LT_DIAG_MUTEX_LOCK, m, POSIX_ERRNO(EDEADLK));\n-\t\treturn POSIX_ERRNO(EDEADLK);\n-\t}\n-\n-\tfor (;;) {\n-\t\t__atomic_fetch_add(&m->count, 1, __ATOMIC_RELAXED);\n-\t\tdo {\n-\t\t\tuint64_t lt_init = 0;\n-\t\t\tif (__atomic_compare_exchange_n((uint64_t *) &m->owner, &lt_init,\n-\t\t\t\t(uint64_t) lt, 0, __ATOMIC_RELAXED, __ATOMIC_RELAXED)) {\n-\t\t\t\t/* happy days, we got the lock */\n-\t\t\t\tDIAG_EVENT(m, LT_DIAG_MUTEX_LOCK, m, 0);\n-\t\t\t\treturn 0;\n-\t\t\t}\n-\t\t\t/* spin due to race with unlock when\n-\t\t\t* nothing was 
blocked\n-\t\t\t*/\n-\t\t} while ((__atomic_load_n(&m->count, __ATOMIC_RELAXED) == 1) &&\n-\t\t\t\t(m->owner == NULL));\n-\n-\t\t/* queue the current thread in the blocked queue\n-\t\t * we defer this to after we return to the scheduler\n-\t\t * to ensure that the current thread context is saved\n-\t\t * before unlock could result in it being dequeued and\n-\t\t * resumed\n-\t\t */\n-\t\tDIAG_EVENT(m, LT_DIAG_MUTEX_BLOCKED, m, lt);\n-\t\tlt->pending_wr_queue = m->blocked;\n-\t\t/* now relinquish cpu */\n-\t\t_suspend();\n-\t\t/* resumed, must loop and compete for the lock again */\n-\t}\n-\treturn 0;\n-}\n-\n-/* try to lock a mutex but don't block */\n-int lthread_mutex_trylock(struct lthread_mutex *m)\n-{\n-\tstruct lthread *lt = THIS_LTHREAD;\n-\n-\tif ((m == NULL) || (m->blocked == NULL)) {\n-\t\tDIAG_EVENT(m, LT_DIAG_MUTEX_TRYLOCK, m, POSIX_ERRNO(EINVAL));\n-\t\treturn POSIX_ERRNO(EINVAL);\n-\t}\n-\n-\tif (m->owner == lt) {\n-\t\t/* no recursion */\n-\t\tDIAG_EVENT(m, LT_DIAG_MUTEX_TRYLOCK, m, POSIX_ERRNO(EDEADLK));\n-\t\treturn POSIX_ERRNO(EDEADLK);\n-\t}\n-\n-\t__atomic_fetch_add(&m->count, 1, __ATOMIC_RELAXED);\n-\tuint64_t lt_init = 0;\n-\tif (__atomic_compare_exchange_n((uint64_t *) &m->owner, &lt_init,\n-\t\t(uint64_t) lt, 0, __ATOMIC_RELAXED, __ATOMIC_RELAXED)) {\n-\t\t/* got the lock */\n-\t\tDIAG_EVENT(m, LT_DIAG_MUTEX_TRYLOCK, m, 0);\n-\t\treturn 0;\n-\t}\n-\n-\t/* failed so return busy */\n-\t__atomic_fetch_sub(&m->count, 1, __ATOMIC_RELAXED);\n-\tDIAG_EVENT(m, LT_DIAG_MUTEX_TRYLOCK, m, POSIX_ERRNO(EBUSY));\n-\treturn POSIX_ERRNO(EBUSY);\n-}\n-\n-/*\n- * Unlock a mutex\n- */\n-int lthread_mutex_unlock(struct lthread_mutex *m)\n-{\n-\tstruct lthread *lt = THIS_LTHREAD;\n-\tstruct lthread *unblocked;\n-\n-\tif ((m == NULL) || (m->blocked == NULL)) {\n-\t\tDIAG_EVENT(m, LT_DIAG_MUTEX_UNLOCKED, m, POSIX_ERRNO(EINVAL));\n-\t\treturn POSIX_ERRNO(EINVAL);\n-\t}\n-\n-\t/* fail if its owned */\n-\tif (m->owner != lt || m->owner == NULL) {\n-\t\tDIAG_EVENT(m, LT_DIAG_MUTEX_UNLOCKED, m, POSIX_ERRNO(EPERM));\n-\t\treturn POSIX_ERRNO(EPERM);\n-\t}\n-\n-\t__atomic_fetch_sub(&m->count, 1, __ATOMIC_RELAXED);\n-\t/* if there are blocked threads then make one ready */\n-\twhile (__atomic_load_n(&m->count, __ATOMIC_RELAXED) > 0) {\n-\t\tunblocked = _lthread_queue_remove(m->blocked);\n-\n-\t\tif (unblocked != NULL) {\n-\t\t\t__atomic_fetch_sub(&m->count, 1, __ATOMIC_RELAXED);\n-\t\t\tDIAG_EVENT(m, LT_DIAG_MUTEX_UNLOCKED, m, unblocked);\n-\t\t\tRTE_ASSERT(unblocked->sched != NULL);\n-\t\t\t_ready_queue_insert((struct lthread_sched *)\n-\t\t\t\t\t    unblocked->sched, unblocked);\n-\t\t\tbreak;\n-\t\t}\n-\t}\n-\t/* release the lock */\n-\tm->owner = NULL;\n-\treturn 0;\n-}\n-\n-/*\n- * return the diagnostic ref val stored in a mutex\n- */\n-uint64_t\n-lthread_mutex_diag_ref(struct lthread_mutex *m)\n-{\n-\tif (m == NULL)\n-\t\treturn 0;\n-\treturn m->diag_ref;\n-}\ndiff --git a/examples/performance-thread/common/lthread_mutex.h b/examples/performance-thread/common/lthread_mutex.h\ndeleted file mode 100644\nindex 730092bdf8bb..000000000000\n--- a/examples/performance-thread/common/lthread_mutex.h\n+++ /dev/null\n@@ -1,31 +0,0 @@\n-/* SPDX-License-Identifier: BSD-3-Clause\n- * Copyright(c) 2015 Intel Corporation\n- */\n-\n-\n-#ifndef LTHREAD_MUTEX_H_\n-#define LTHREAD_MUTEX_H_\n-\n-#ifdef __cplusplus\n-extern \"C\" {\n-#endif\n-\n-#include \"lthread_queue.h\"\n-\n-\n-#define MAX_MUTEX_NAME_SIZE 64\n-\n-struct lthread_mutex {\n-\tstruct lthread *owner;\n-\tuint64_t count;\n-\tstruct lthread_queue 
*blocked __rte_cache_aligned;\n-\tstruct lthread_sched *root_sched;\n-\tchar\t\t\tname[MAX_MUTEX_NAME_SIZE];\n-\tuint64_t\t\tdiag_ref; /* optional ref to user diag data */\n-} __rte_cache_aligned;\n-\n-#ifdef __cplusplus\n-}\n-#endif\n-\n-#endif /* LTHREAD_MUTEX_H_ */\ndiff --git a/examples/performance-thread/common/lthread_objcache.h b/examples/performance-thread/common/lthread_objcache.h\ndeleted file mode 100644\nindex 777a1945b486..000000000000\n--- a/examples/performance-thread/common/lthread_objcache.h\n+++ /dev/null\n@@ -1,136 +0,0 @@\n-/* SPDX-License-Identifier: BSD-3-Clause\n- * Copyright(c) 2015 Intel Corporation\n- */\n-#ifndef LTHREAD_OBJCACHE_H_\n-#define LTHREAD_OBJCACHE_H_\n-\n-#ifdef __cplusplus\n-extern \"C\" {\n-#endif\n-\n-#include <string.h>\n-\n-#include <rte_per_lcore.h>\n-#include <rte_malloc.h>\n-#include <rte_memory.h>\n-\n-#include \"lthread_int.h\"\n-#include \"lthread_diag.h\"\n-#include \"lthread_queue.h\"\n-\n-\n-RTE_DECLARE_PER_LCORE(struct lthread_sched *, this_sched);\n-\n-struct lthread_objcache {\n-\tstruct lthread_queue *q;\n-\tsize_t obj_size;\n-\tint prealloc_size;\n-\tchar name[LT_MAX_NAME_SIZE];\n-\n-\tDIAG_COUNT_DEFINE(rd);\n-\tDIAG_COUNT_DEFINE(wr);\n-\tDIAG_COUNT_DEFINE(prealloc);\n-\tDIAG_COUNT_DEFINE(capacity);\n-\tDIAG_COUNT_DEFINE(available);\n-};\n-\n-/*\n- * Create a cache\n- */\n-static inline struct\n-lthread_objcache *_lthread_objcache_create(const char *name,\n-\t\t\t\t\tsize_t obj_size,\n-\t\t\t\t\tint prealloc_size)\n-{\n-\tstruct lthread_objcache *c =\n-\t    rte_malloc_socket(NULL, sizeof(struct lthread_objcache),\n-\t\t\t\tRTE_CACHE_LINE_SIZE,\n-\t\t\t\trte_socket_id());\n-\tif (c == NULL)\n-\t\treturn NULL;\n-\n-\tc->q = _lthread_queue_create(\"cache queue\");\n-\tif (c->q == NULL) {\n-\t\trte_free(c);\n-\t\treturn NULL;\n-\t}\n-\tc->obj_size = obj_size;\n-\tc->prealloc_size = prealloc_size;\n-\n-\tif (name != NULL)\n-\t\tstrncpy(c->name, name, LT_MAX_NAME_SIZE);\n-\tc->name[sizeof(c->name)-1] = 0;\n-\n-\tDIAG_COUNT_INIT(c, rd);\n-\tDIAG_COUNT_INIT(c, wr);\n-\tDIAG_COUNT_INIT(c, prealloc);\n-\tDIAG_COUNT_INIT(c, capacity);\n-\tDIAG_COUNT_INIT(c, available);\n-\treturn c;\n-}\n-\n-/*\n- * Destroy an objcache\n- */\n-static inline int\n-_lthread_objcache_destroy(struct lthread_objcache *c)\n-{\n-\tif (_lthread_queue_destroy(c->q) == 0) {\n-\t\trte_free(c);\n-\t\treturn 0;\n-\t}\n-\treturn -1;\n-}\n-\n-/*\n- * Allocate an object from an object cache\n- */\n-static inline void *\n-_lthread_objcache_alloc(struct lthread_objcache *c)\n-{\n-\tint i;\n-\tvoid *data;\n-\tstruct lthread_queue *q = c->q;\n-\tsize_t obj_size = c->obj_size;\n-\tint prealloc_size = c->prealloc_size;\n-\n-\tdata = _lthread_queue_remove(q);\n-\n-\tif (data == NULL) {\n-\t\tDIAG_COUNT_INC(c, prealloc);\n-\t\tfor (i = 0; i < prealloc_size; i++) {\n-\t\t\tdata =\n-\t\t\t    rte_zmalloc_socket(NULL, obj_size,\n-\t\t\t\t\tRTE_CACHE_LINE_SIZE,\n-\t\t\t\t\trte_socket_id());\n-\t\t\tif (data == NULL)\n-\t\t\t\treturn NULL;\n-\n-\t\t\tDIAG_COUNT_INC(c, available);\n-\t\t\tDIAG_COUNT_INC(c, capacity);\n-\t\t\t_lthread_queue_insert_mp(q, data);\n-\t\t}\n-\t\tdata = _lthread_queue_remove(q);\n-\t}\n-\tDIAG_COUNT_INC(c, rd);\n-\tDIAG_COUNT_DEC(c, available);\n-\treturn data;\n-}\n-\n-/*\n- * free an object to a cache\n- */\n-static inline void\n-_lthread_objcache_free(struct lthread_objcache *c, void *obj)\n-{\n-\tDIAG_COUNT_INC(c, wr);\n-\tDIAG_COUNT_INC(c, available);\n-\t_lthread_queue_insert_mp(c->q, obj);\n-}\n-\n-\n-#ifdef __cplusplus\n-}\n-#endif\n-\n-#endif\t\t\t\t/* 
LTHREAD_OBJCACHE_H_ */\ndiff --git a/examples/performance-thread/common/lthread_pool.h b/examples/performance-thread/common/lthread_pool.h\ndeleted file mode 100644\nindex 6f93775fbca3..000000000000\n--- a/examples/performance-thread/common/lthread_pool.h\n+++ /dev/null\n@@ -1,277 +0,0 @@\n-/*\n- * SPDX-License-Identifier: BSD-3-Clause\n- * Copyright 2015 Intel Corporation.\n- * Copyright 2010-2011 Dmitry Vyukov\n- */\n-\n-#ifndef LTHREAD_POOL_H_\n-#define LTHREAD_POOL_H_\n-\n-#ifdef __cplusplus\n-extern \"C\" {\n-#endif\n-\n-#include <rte_malloc.h>\n-#include <rte_per_lcore.h>\n-#include <rte_log.h>\n-\n-#include \"lthread_int.h\"\n-#include \"lthread_diag.h\"\n-\n-/*\n- * This file implements pool of queue nodes used by the queue implemented\n- * in lthread_queue.h.\n- *\n- * The pool is an intrusive lock free MPSC queue.\n- *\n- * The pool is created empty and populated lazily, i.e. on first attempt to\n- * allocate a the pool.\n- *\n- * Whenever the pool is empty more nodes are added to the pool\n- * The number of nodes preallocated in this way is a parameter of\n- * _qnode_pool_create. Freeing an object returns it to the pool.\n- *\n- * Each lthread scheduler maintains its own pool of nodes. L-threads must always\n- * allocate from this local pool ( because it is a single consumer queue ).\n- * L-threads can free nodes to any pool (because it is a multi producer queue)\n- * This enables threads that have affined to a different scheduler to free\n- * nodes safely.\n- */\n-\n-struct qnode;\n-struct qnode_cache;\n-\n-/*\n- * define intermediate node\n- */\n-struct qnode {\n-\tstruct qnode *next;\n-\tvoid *data;\n-\tstruct qnode_pool *pool;\n-} __rte_cache_aligned;\n-\n-/*\n- * a pool structure\n- */\n-struct qnode_pool {\n-\tstruct qnode *head;\n-\tstruct qnode *stub;\n-\tstruct qnode *fast_alloc;\n-\tstruct qnode *tail __rte_cache_aligned;\n-\tint pre_alloc;\n-\tchar name[LT_MAX_NAME_SIZE];\n-\n-\tDIAG_COUNT_DEFINE(rd);\n-\tDIAG_COUNT_DEFINE(wr);\n-\tDIAG_COUNT_DEFINE(available);\n-\tDIAG_COUNT_DEFINE(prealloc);\n-\tDIAG_COUNT_DEFINE(capacity);\n-} __rte_cache_aligned;\n-\n-/*\n- * Create a pool of qnodes\n- */\n-\n-static inline struct qnode_pool *\n-_qnode_pool_create(const char *name, int prealloc_size) {\n-\n-\tstruct qnode_pool *p = rte_malloc_socket(NULL,\n-\t\t\t\t\tsizeof(struct qnode_pool),\n-\t\t\t\t\tRTE_CACHE_LINE_SIZE,\n-\t\t\t\t\trte_socket_id());\n-\n-\tRTE_ASSERT(p);\n-\n-\tp->stub = rte_malloc_socket(NULL,\n-\t\t\t\tsizeof(struct qnode),\n-\t\t\t\tRTE_CACHE_LINE_SIZE,\n-\t\t\t\trte_socket_id());\n-\n-\tRTE_ASSERT(p->stub);\n-\n-\tif (name != NULL)\n-\t\tstrncpy(p->name, name, LT_MAX_NAME_SIZE);\n-\tp->name[sizeof(p->name)-1] = 0;\n-\n-\tp->stub->pool = p;\n-\tp->stub->next = NULL;\n-\tp->tail = p->stub;\n-\tp->head = p->stub;\n-\tp->pre_alloc = prealloc_size;\n-\n-\tDIAG_COUNT_INIT(p, rd);\n-\tDIAG_COUNT_INIT(p, wr);\n-\tDIAG_COUNT_INIT(p, available);\n-\tDIAG_COUNT_INIT(p, prealloc);\n-\tDIAG_COUNT_INIT(p, capacity);\n-\n-\treturn p;\n-}\n-\n-\n-/*\n- * Insert a node into the pool\n- */\n-static __rte_always_inline void\n-_qnode_pool_insert(struct qnode_pool *p, struct qnode *n)\n-{\n-\tn->next = NULL;\n-\tstruct qnode *prev = n;\n-\t/* We insert at the head */\n-\tprev = (struct qnode *) __sync_lock_test_and_set((uint64_t *)&p->head,\n-\t\t\t\t\t\t(uint64_t) prev);\n-\t/* there is a window of inconsistency until prev next is set */\n-\t/* which is why remove must retry */\n-\tprev->next = (n);\n-}\n-\n-/*\n- * Remove a node from the pool\n- *\n- * There is a race with 
_qnode_pool_insert() whereby the queue could appear\n- * empty during a concurrent insert, this is handled by retrying\n- *\n- * The queue uses a stub node, which must be swung as the queue becomes\n- * empty, this requires an insert of the stub, which means that removing the\n- * last item from the queue incurs the penalty of an atomic exchange. Since the\n- * pool is maintained with a bulk pre-allocation the cost of this is amortised.\n- */\n-static __rte_always_inline struct qnode *\n-_pool_remove(struct qnode_pool *p)\n-{\n-\tstruct qnode *head;\n-\tstruct qnode *tail = p->tail;\n-\tstruct qnode *next = tail->next;\n-\n-\t/* we remove from the tail */\n-\tif (tail == p->stub) {\n-\t\tif (next == NULL)\n-\t\t\treturn NULL;\n-\t\t/* advance the tail */\n-\t\tp->tail = next;\n-\t\ttail = next;\n-\t\tnext = next->next;\n-\t}\n-\tif (likely(next != NULL)) {\n-\t\tp->tail = next;\n-\t\treturn tail;\n-\t}\n-\n-\thead = p->head;\n-\tif (tail == head)\n-\t\treturn NULL;\n-\n-\t/* swing stub node */\n-\t_qnode_pool_insert(p, p->stub);\n-\n-\tnext = tail->next;\n-\tif (next) {\n-\t\tp->tail = next;\n-\t\treturn tail;\n-\t}\n-\treturn NULL;\n-}\n-\n-\n-/*\n- * This adds a retry to the _pool_remove function\n- * defined above\n- */\n-static __rte_always_inline struct qnode *\n-_qnode_pool_remove(struct qnode_pool *p)\n-{\n-\tstruct qnode *n;\n-\n-\tdo {\n-\t\tn = _pool_remove(p);\n-\t\tif (likely(n != NULL))\n-\t\t\treturn n;\n-\n-\t\trte_compiler_barrier();\n-\t}  while ((p->head != p->tail) &&\n-\t\t\t(p->tail != p->stub));\n-\treturn NULL;\n-}\n-\n-/*\n- * Allocate a node from the pool\n- * If the pool is empty add mode nodes\n- */\n-static __rte_always_inline struct qnode *\n-_qnode_alloc(void)\n-{\n-\tstruct qnode_pool *p = (THIS_SCHED)->qnode_pool;\n-\tint prealloc_size = p->pre_alloc;\n-\tstruct qnode *n;\n-\tint i;\n-\n-\tif (likely(p->fast_alloc != NULL)) {\n-\t\tn = p->fast_alloc;\n-\t\tp->fast_alloc = NULL;\n-\t\treturn n;\n-\t}\n-\n-\tn = _qnode_pool_remove(p);\n-\n-\tif (unlikely(n == NULL)) {\n-\t\tDIAG_COUNT_INC(p, prealloc);\n-\t\tfor (i = 0; i < prealloc_size; i++) {\n-\t\t\tn = rte_malloc_socket(NULL,\n-\t\t\t\t\tsizeof(struct qnode),\n-\t\t\t\t\tRTE_CACHE_LINE_SIZE,\n-\t\t\t\t\trte_socket_id());\n-\t\t\tif (n == NULL)\n-\t\t\t\treturn NULL;\n-\n-\t\t\tDIAG_COUNT_INC(p, available);\n-\t\t\tDIAG_COUNT_INC(p, capacity);\n-\n-\t\t\tn->pool = p;\n-\t\t\t_qnode_pool_insert(p, n);\n-\t\t}\n-\t\tn = _qnode_pool_remove(p);\n-\t}\n-\tn->pool = p;\n-\tDIAG_COUNT_INC(p, rd);\n-\tDIAG_COUNT_DEC(p, available);\n-\treturn n;\n-}\n-\n-\n-\n-/*\n-* free a queue node to the per scheduler pool from which it came\n-*/\n-static __rte_always_inline void\n-_qnode_free(struct qnode *n)\n-{\n-\tstruct qnode_pool *p = n->pool;\n-\n-\n-\tif (unlikely(p->fast_alloc != NULL) ||\n-\t\t\tunlikely(n->pool != (THIS_SCHED)->qnode_pool)) {\n-\t\tDIAG_COUNT_INC(p, wr);\n-\t\tDIAG_COUNT_INC(p, available);\n-\t\t_qnode_pool_insert(p, n);\n-\t\treturn;\n-\t}\n-\tp->fast_alloc = n;\n-}\n-\n-/*\n- * Destroy an qnode pool\n- * queue must be empty when this is called\n- */\n-static inline int\n-_qnode_pool_destroy(struct qnode_pool *p)\n-{\n-\trte_free(p->stub);\n-\trte_free(p);\n-\treturn 0;\n-}\n-\n-#ifdef __cplusplus\n-}\n-#endif\n-\n-#endif\t\t\t\t/* LTHREAD_POOL_H_ */\ndiff --git a/examples/performance-thread/common/lthread_queue.h b/examples/performance-thread/common/lthread_queue.h\ndeleted file mode 100644\nindex 5b63ba220cda..000000000000\n--- a/examples/performance-thread/common/lthread_queue.h\n+++ 
/dev/null\n@@ -1,247 +0,0 @@\n-/*\n- * SPDX-License-Identifier: BSD-3-Clause\n- * Copyright 2015 Intel Corporation.\n- * Copyright 2010-2011 Dmitry Vyukov\n- */\n-\n-#ifndef LTHREAD_QUEUE_H_\n-#define LTHREAD_QUEUE_H_\n-\n-#ifdef __cplusplus\n-extern \"C\" {\n-#endif\n-\n-#include <string.h>\n-\n-#include <rte_prefetch.h>\n-#include <rte_per_lcore.h>\n-\n-#include \"lthread_int.h\"\n-#include \"lthread.h\"\n-#include \"lthread_diag.h\"\n-#include \"lthread_pool.h\"\n-\n-struct lthread_queue;\n-\n-/*\n- * This file implements an unbounded FIFO queue based on a lock free\n- * linked list.\n- *\n- * The queue is non-intrusive in that it uses intermediate nodes, and does\n- * not require these nodes to be inserted into the object being placed\n- * in the queue.\n- *\n- * This is slightly more efficient than the very similar queue in lthread_pool\n- * in that it does not have to swing a stub node as the queue becomes empty.\n- *\n- * The queue access functions allocate and free intermediate node\n- * transparently from/to a per scheduler pool ( see lthread_pool.h ).\n- *\n- * The queue provides both MPSC and SPSC insert methods\n- */\n-\n-/*\n- * define a queue of lthread nodes\n- */\n-struct lthread_queue {\n-\tstruct qnode *head;\n-\tstruct qnode *tail __rte_cache_aligned;\n-\tstruct lthread_queue *p;\n-\tchar name[LT_MAX_NAME_SIZE];\n-\n-\tDIAG_COUNT_DEFINE(rd);\n-\tDIAG_COUNT_DEFINE(wr);\n-\tDIAG_COUNT_DEFINE(size);\n-\n-} __rte_cache_aligned;\n-\n-\n-\n-static inline struct lthread_queue *\n-_lthread_queue_create(const char *name)\n-{\n-\tstruct qnode *stub;\n-\tstruct lthread_queue *new_queue;\n-\n-\tnew_queue = rte_malloc_socket(NULL, sizeof(struct lthread_queue),\n-\t\t\t\t\tRTE_CACHE_LINE_SIZE,\n-\t\t\t\t\trte_socket_id());\n-\tif (new_queue == NULL)\n-\t\treturn NULL;\n-\n-\t/* allocated stub node */\n-\tstub = _qnode_alloc();\n-\tRTE_ASSERT(stub);\n-\n-\tif (name != NULL)\n-\t\tstrncpy(new_queue->name, name, sizeof(new_queue->name));\n-\tnew_queue->name[sizeof(new_queue->name)-1] = 0;\n-\n-\t/* initialize queue as empty */\n-\tstub->next = NULL;\n-\tnew_queue->head = stub;\n-\tnew_queue->tail = stub;\n-\n-\tDIAG_COUNT_INIT(new_queue, rd);\n-\tDIAG_COUNT_INIT(new_queue, wr);\n-\tDIAG_COUNT_INIT(new_queue, size);\n-\n-\treturn new_queue;\n-}\n-\n-/**\n- * Return true if the queue is empty\n- */\n-static __rte_always_inline int\n-_lthread_queue_empty(struct lthread_queue *q)\n-{\n-\treturn q->tail == q->head;\n-}\n-\n-\n-\n-/**\n- * Destroy a queue\n- * fail if queue is not empty\n- */\n-static inline int _lthread_queue_destroy(struct lthread_queue *q)\n-{\n-\tif (q == NULL)\n-\t\treturn -1;\n-\n-\tif (!_lthread_queue_empty(q))\n-\t\treturn -1;\n-\n-\t_qnode_free(q->head);\n-\trte_free(q);\n-\treturn 0;\n-}\n-\n-RTE_DECLARE_PER_LCORE(struct lthread_sched *, this_sched);\n-\n-/*\n- * Insert a node into a queue\n- * this implementation is multi producer safe\n- */\n-static __rte_always_inline struct qnode *\n-_lthread_queue_insert_mp(struct lthread_queue\n-\t\t\t\t\t\t\t  *q, void *data)\n-{\n-\tstruct qnode *prev;\n-\tstruct qnode *n = _qnode_alloc();\n-\n-\tif (n == NULL)\n-\t\treturn NULL;\n-\n-\t/* set object in node */\n-\tn->data = data;\n-\tn->next = NULL;\n-\n-\t/* this is an MPSC method, perform a locked update */\n-\tprev = n;\n-\tprev =\n-\t    (struct qnode *)__sync_lock_test_and_set((uint64_t *) &(q)->head,\n-\t\t\t\t\t       (uint64_t) prev);\n-\t/* there is a window of inconsistency until prev next is set,\n-\t * which is why remove must retry\n-\t */\n-\tprev->next = 
n;\n-\n-\tDIAG_COUNT_INC(q, wr);\n-\tDIAG_COUNT_INC(q, size);\n-\n-\treturn n;\n-}\n-\n-/*\n- * Insert an node into a queue in single producer mode\n- * this implementation is NOT mult producer safe\n- */\n-static __rte_always_inline struct qnode *\n-_lthread_queue_insert_sp(struct lthread_queue\n-\t\t\t\t\t\t\t  *q, void *data)\n-{\n-\t/* allocate a queue node */\n-\tstruct qnode *prev;\n-\tstruct qnode *n = _qnode_alloc();\n-\n-\tif (n == NULL)\n-\t\treturn NULL;\n-\n-\t/* set data in node */\n-\tn->data = data;\n-\tn->next = NULL;\n-\n-\t/* this is an SPSC method, no need for locked exchange operation */\n-\tprev = q->head;\n-\tprev->next = q->head = n;\n-\n-\tDIAG_COUNT_INC(q, wr);\n-\tDIAG_COUNT_INC(q, size);\n-\n-\treturn n;\n-}\n-\n-/*\n- * Remove a node from a queue\n- */\n-static __rte_always_inline void *\n-_lthread_queue_poll(struct lthread_queue *q)\n-{\n-\tvoid *data = NULL;\n-\tstruct qnode *tail = q->tail;\n-\tstruct qnode *next = (struct qnode *)tail->next;\n-\t/*\n-\t * There is a small window of inconsistency between producer and\n-\t * consumer whereby the queue may appear empty if consumer and\n-\t * producer access it at the same time.\n-\t * The consumer must handle this by retrying\n-\t */\n-\n-\tif (likely(next != NULL)) {\n-\t\tq->tail = next;\n-\t\ttail->data = next->data;\n-\t\tdata = tail->data;\n-\n-\t\t/* free the node */\n-\t\t_qnode_free(tail);\n-\n-\t\tDIAG_COUNT_INC(q, rd);\n-\t\tDIAG_COUNT_DEC(q, size);\n-\t\treturn data;\n-\t}\n-\treturn NULL;\n-}\n-\n-/*\n- * Remove a node from a queue\n- */\n-static __rte_always_inline void *\n-_lthread_queue_remove(struct lthread_queue *q)\n-{\n-\tvoid *data = NULL;\n-\n-\t/*\n-\t * There is a small window of inconsistency between producer and\n-\t * consumer whereby the queue may appear empty if consumer and\n-\t * producer access it at the same time. 
We handle this by retrying\n-\t */\n-\tdo {\n-\t\tdata = _lthread_queue_poll(q);\n-\n-\t\tif (likely(data != NULL)) {\n-\n-\t\t\tDIAG_COUNT_INC(q, rd);\n-\t\t\tDIAG_COUNT_DEC(q, size);\n-\t\t\treturn data;\n-\t\t}\n-\t\trte_compiler_barrier();\n-\t} while (unlikely(!_lthread_queue_empty(q)));\n-\treturn NULL;\n-}\n-\n-#ifdef __cplusplus\n-}\n-#endif\n-\n-#endif\t\t\t\t/* LTHREAD_QUEUE_H_ */\ndiff --git a/examples/performance-thread/common/lthread_sched.c b/examples/performance-thread/common/lthread_sched.c\ndeleted file mode 100644\nindex 3784b010c221..000000000000\n--- a/examples/performance-thread/common/lthread_sched.c\n+++ /dev/null\n@@ -1,540 +0,0 @@\n-/*\n- * SPDX-License-Identifier: BSD-3-Clause\n- * Copyright 2015 Intel Corporation.\n- * Copyright 2012 Hasan Alayli <halayli@gmail.com>\n- */\n-\n-#define RTE_MEM 1\n-\n-#include <stdio.h>\n-#include <stdlib.h>\n-#include <string.h>\n-#include <stdint.h>\n-#include <stddef.h>\n-#include <limits.h>\n-#include <inttypes.h>\n-#include <unistd.h>\n-#include <pthread.h>\n-#include <fcntl.h>\n-#include <sys/time.h>\n-#include <sys/mman.h>\n-#include <sched.h>\n-\n-#include <rte_prefetch.h>\n-#include <rte_per_lcore.h>\n-#include <rte_log.h>\n-#include <rte_common.h>\n-#include <rte_branch_prediction.h>\n-\n-#include \"lthread_api.h\"\n-#include \"lthread_int.h\"\n-#include \"lthread_sched.h\"\n-#include \"lthread_objcache.h\"\n-#include \"lthread_timer.h\"\n-#include \"lthread_mutex.h\"\n-#include \"lthread_cond.h\"\n-#include \"lthread_tls.h\"\n-#include \"lthread_diag.h\"\n-\n-/*\n- * This file implements the lthread scheduler\n- * The scheduler is the function lthread_run()\n- * This must be run as the main loop of an EAL thread.\n- *\n- * Currently once a scheduler is created it cannot be destroyed\n- * When a scheduler shuts down it is assumed that the application is terminating\n- */\n-\n-static uint16_t num_schedulers;\n-static uint16_t active_schedulers;\n-\n-/* one scheduler per lcore */\n-RTE_DEFINE_PER_LCORE(struct lthread_sched *, this_sched) = NULL;\n-\n-struct lthread_sched *schedcore[LTHREAD_MAX_LCORES];\n-\n-diag_callback diag_cb;\n-\n-uint64_t diag_mask;\n-\n-\n-/* constructor */\n-RTE_INIT(lthread_sched_ctor)\n-{\n-\tmemset(schedcore, 0, sizeof(schedcore));\n-\t__atomic_store_n(&num_schedulers, 1, __ATOMIC_RELAXED);\n-\t__atomic_store_n(&active_schedulers, 0, __ATOMIC_RELAXED);\n-\tdiag_cb = NULL;\n-}\n-\n-\n-enum sched_alloc_phase {\n-\tSCHED_ALLOC_OK,\n-\tSCHED_ALLOC_QNODE_POOL,\n-\tSCHED_ALLOC_READY_QUEUE,\n-\tSCHED_ALLOC_PREADY_QUEUE,\n-\tSCHED_ALLOC_LTHREAD_CACHE,\n-\tSCHED_ALLOC_STACK_CACHE,\n-\tSCHED_ALLOC_PERLT_CACHE,\n-\tSCHED_ALLOC_TLS_CACHE,\n-\tSCHED_ALLOC_COND_CACHE,\n-\tSCHED_ALLOC_MUTEX_CACHE,\n-};\n-\n-static int\n-_lthread_sched_alloc_resources(struct lthread_sched *new_sched)\n-{\n-\tint alloc_status;\n-\n-\tdo {\n-\t\t/* Initialize per scheduler queue node pool */\n-\t\talloc_status = SCHED_ALLOC_QNODE_POOL;\n-\t\tnew_sched->qnode_pool =\n-\t\t\t_qnode_pool_create(\"qnode pool\", LTHREAD_PREALLOC);\n-\t\tif (new_sched->qnode_pool == NULL)\n-\t\t\tbreak;\n-\n-\t\t/* Initialize per scheduler local ready queue */\n-\t\talloc_status = SCHED_ALLOC_READY_QUEUE;\n-\t\tnew_sched->ready = _lthread_queue_create(\"ready queue\");\n-\t\tif (new_sched->ready == NULL)\n-\t\t\tbreak;\n-\n-\t\t/* Initialize per scheduler local peer ready queue */\n-\t\talloc_status = SCHED_ALLOC_PREADY_QUEUE;\n-\t\tnew_sched->pready = _lthread_queue_create(\"pready queue\");\n-\t\tif (new_sched->pready == 
NULL)\n-\t\t\tbreak;\n-\n-\t\t/* Initialize per scheduler local free lthread cache */\n-\t\talloc_status = SCHED_ALLOC_LTHREAD_CACHE;\n-\t\tnew_sched->lthread_cache =\n-\t\t\t_lthread_objcache_create(\"lthread cache\",\n-\t\t\t\t\t\tsizeof(struct lthread),\n-\t\t\t\t\t\tLTHREAD_PREALLOC);\n-\t\tif (new_sched->lthread_cache == NULL)\n-\t\t\tbreak;\n-\n-\t\t/* Initialize per scheduler local free stack cache */\n-\t\talloc_status = SCHED_ALLOC_STACK_CACHE;\n-\t\tnew_sched->stack_cache =\n-\t\t\t_lthread_objcache_create(\"stack_cache\",\n-\t\t\t\t\t\tsizeof(struct lthread_stack),\n-\t\t\t\t\t\tLTHREAD_PREALLOC);\n-\t\tif (new_sched->stack_cache == NULL)\n-\t\t\tbreak;\n-\n-\t\t/* Initialize per scheduler local free per lthread data cache */\n-\t\talloc_status = SCHED_ALLOC_PERLT_CACHE;\n-\t\tnew_sched->per_lthread_cache =\n-\t\t\t_lthread_objcache_create(\"per_lt cache\",\n-\t\t\t\t\t\tRTE_PER_LTHREAD_SECTION_SIZE,\n-\t\t\t\t\t\tLTHREAD_PREALLOC);\n-\t\tif (new_sched->per_lthread_cache == NULL)\n-\t\t\tbreak;\n-\n-\t\t/* Initialize per scheduler local free tls cache */\n-\t\talloc_status = SCHED_ALLOC_TLS_CACHE;\n-\t\tnew_sched->tls_cache =\n-\t\t\t_lthread_objcache_create(\"TLS cache\",\n-\t\t\t\t\t\tsizeof(struct lthread_tls),\n-\t\t\t\t\t\tLTHREAD_PREALLOC);\n-\t\tif (new_sched->tls_cache == NULL)\n-\t\t\tbreak;\n-\n-\t\t/* Initialize per scheduler local free cond var cache */\n-\t\talloc_status = SCHED_ALLOC_COND_CACHE;\n-\t\tnew_sched->cond_cache =\n-\t\t\t_lthread_objcache_create(\"cond cache\",\n-\t\t\t\t\t\tsizeof(struct lthread_cond),\n-\t\t\t\t\t\tLTHREAD_PREALLOC);\n-\t\tif (new_sched->cond_cache == NULL)\n-\t\t\tbreak;\n-\n-\t\t/* Initialize per scheduler local free mutex cache */\n-\t\talloc_status = SCHED_ALLOC_MUTEX_CACHE;\n-\t\tnew_sched->mutex_cache =\n-\t\t\t_lthread_objcache_create(\"mutex cache\",\n-\t\t\t\t\t\tsizeof(struct lthread_mutex),\n-\t\t\t\t\t\tLTHREAD_PREALLOC);\n-\t\tif (new_sched->mutex_cache == NULL)\n-\t\t\tbreak;\n-\n-\t\talloc_status = SCHED_ALLOC_OK;\n-\t} while (0);\n-\n-\t/* roll back on any failure */\n-\tswitch (alloc_status) {\n-\tcase SCHED_ALLOC_MUTEX_CACHE:\n-\t\t_lthread_objcache_destroy(new_sched->cond_cache);\n-\t\t/* fall through */\n-\tcase SCHED_ALLOC_COND_CACHE:\n-\t\t_lthread_objcache_destroy(new_sched->tls_cache);\n-\t\t/* fall through */\n-\tcase SCHED_ALLOC_TLS_CACHE:\n-\t\t_lthread_objcache_destroy(new_sched->per_lthread_cache);\n-\t\t/* fall through */\n-\tcase SCHED_ALLOC_PERLT_CACHE:\n-\t\t_lthread_objcache_destroy(new_sched->stack_cache);\n-\t\t/* fall through */\n-\tcase SCHED_ALLOC_STACK_CACHE:\n-\t\t_lthread_objcache_destroy(new_sched->lthread_cache);\n-\t\t/* fall through */\n-\tcase SCHED_ALLOC_LTHREAD_CACHE:\n-\t\t_lthread_queue_destroy(new_sched->pready);\n-\t\t/* fall through */\n-\tcase SCHED_ALLOC_PREADY_QUEUE:\n-\t\t_lthread_queue_destroy(new_sched->ready);\n-\t\t/* fall through */\n-\tcase SCHED_ALLOC_READY_QUEUE:\n-\t\t_qnode_pool_destroy(new_sched->qnode_pool);\n-\t\t/* fall through */\n-\tcase SCHED_ALLOC_QNODE_POOL:\n-\t\t/* fall through */\n-\tcase SCHED_ALLOC_OK:\n-\t\tbreak;\n-\t}\n-\treturn alloc_status;\n-}\n-\n-\n-/*\n- * Create a scheduler on the current lcore\n- */\n-struct lthread_sched *_lthread_sched_create(size_t stack_size)\n-{\n-\tint status;\n-\tstruct lthread_sched *new_sched;\n-\tunsigned lcoreid = rte_lcore_id();\n-\n-\tRTE_ASSERT(stack_size <= LTHREAD_MAX_STACK_SIZE);\n-\n-\tif (stack_size == 0)\n-\t\tstack_size = LTHREAD_MAX_STACK_SIZE;\n-\n-\tnew_sched =\n-\t     rte_calloc_socket(NULL, 1, 
sizeof(struct lthread_sched),\n-\t\t\t\tRTE_CACHE_LINE_SIZE,\n-\t\t\t\trte_socket_id());\n-\tif (new_sched == NULL) {\n-\t\tRTE_LOG(CRIT, LTHREAD,\n-\t\t\t\"Failed to allocate memory for scheduler\\n\");\n-\t\treturn NULL;\n-\t}\n-\n-\t_lthread_key_pool_init();\n-\n-\tnew_sched->stack_size = stack_size;\n-\tnew_sched->birth = rte_rdtsc();\n-\tTHIS_SCHED = new_sched;\n-\n-\tstatus = _lthread_sched_alloc_resources(new_sched);\n-\tif (status != SCHED_ALLOC_OK) {\n-\t\tRTE_LOG(CRIT, LTHREAD,\n-\t\t\t\"Failed to allocate resources for scheduler code = %d\\n\",\n-\t\t\tstatus);\n-\t\trte_free(new_sched);\n-\t\treturn NULL;\n-\t}\n-\n-\tbzero(&new_sched->ctx, sizeof(struct ctx));\n-\n-\tnew_sched->lcore_id = lcoreid;\n-\n-\tschedcore[lcoreid] = new_sched;\n-\n-\tnew_sched->run_flag = 1;\n-\n-\tDIAG_EVENT(new_sched, LT_DIAG_SCHED_CREATE, rte_lcore_id(), 0);\n-\n-\trte_wmb();\n-\treturn new_sched;\n-}\n-\n-/*\n- * Set the number of schedulers in the system\n- */\n-int lthread_num_schedulers_set(int num)\n-{\n-\t__atomic_store_n(&num_schedulers, num, __ATOMIC_RELAXED);\n-\treturn (int)__atomic_load_n(&num_schedulers, __ATOMIC_RELAXED);\n-}\n-\n-/*\n- * Return the number of schedulers active\n- */\n-int lthread_active_schedulers(void)\n-{\n-\treturn (int)__atomic_load_n(&active_schedulers, __ATOMIC_RELAXED);\n-}\n-\n-\n-/**\n- * shutdown the scheduler running on the specified lcore\n- */\n-void lthread_scheduler_shutdown(unsigned lcoreid)\n-{\n-\tuint64_t coreid = (uint64_t) lcoreid;\n-\n-\tif (coreid < LTHREAD_MAX_LCORES) {\n-\t\tif (schedcore[coreid] != NULL)\n-\t\t\tschedcore[coreid]->run_flag = 0;\n-\t}\n-}\n-\n-/**\n- * shutdown all schedulers\n- */\n-void lthread_scheduler_shutdown_all(void)\n-{\n-\tuint64_t i;\n-\n-\t/*\n-\t * give time for all schedulers to have started\n-\t * Note we use sched_yield() rather than pthread_yield() to allow\n-\t * for the possibility of a pthread wrapper on lthread_yield(),\n-\t * something that is not possible unless the scheduler is running.\n-\t */\n-\twhile (__atomic_load_n(&active_schedulers, __ATOMIC_RELAXED) <\n-\t       __atomic_load_n(&num_schedulers, __ATOMIC_RELAXED))\n-\t\tsched_yield();\n-\n-\tfor (i = 0; i < LTHREAD_MAX_LCORES; i++) {\n-\t\tif (schedcore[i] != NULL)\n-\t\t\tschedcore[i]->run_flag = 0;\n-\t}\n-}\n-\n-/*\n- * Resume a suspended lthread\n- */\n-static __rte_always_inline void\n-_lthread_resume(struct lthread *lt);\n-static inline void _lthread_resume(struct lthread *lt)\n-{\n-\tstruct lthread_sched *sched = THIS_SCHED;\n-\tstruct lthread_stack *s;\n-\tuint64_t state = lt->state;\n-#if LTHREAD_DIAG\n-\tint init = 0;\n-#endif\n-\n-\tsched->current_lthread = lt;\n-\n-\tif (state & (BIT(ST_LT_CANCELLED) | BIT(ST_LT_EXITED))) {\n-\t\t/* if detached we can free the thread now */\n-\t\tif (state & BIT(ST_LT_DETACH)) {\n-\t\t\t_lthread_free(lt);\n-\t\t\tsched->current_lthread = NULL;\n-\t\t\treturn;\n-\t\t}\n-\t}\n-\n-\tif (state & BIT(ST_LT_INIT)) {\n-\t\t/* first time this thread has been run */\n-\t\t/* assign thread to this scheduler */\n-\t\tlt->sched = THIS_SCHED;\n-\n-\t\t/* allocate stack */\n-\t\ts = _stack_alloc();\n-\n-\t\tlt->stack_container = s;\n-\t\t_lthread_set_stack(lt, s->stack, s->stack_size);\n-\n-\t\t/* allocate memory for TLS used by this thread */\n-\t\t_lthread_tls_alloc(lt);\n-\n-\t\tlt->state = BIT(ST_LT_READY);\n-#if LTHREAD_DIAG\n-\t\tinit = 1;\n-#endif\n-\t}\n-\n-\tDIAG_EVENT(lt, LT_DIAG_LTHREAD_RESUMED, init, lt);\n-\n-\t/* switch to the new thread */\n-\tctx_switch(&lt->ctx, &sched->ctx);\n-\n-\t/* If posting 
to a queue that could be read by another lcore\n-\t * we defer the queue write till now to ensure the context has been\n-\t * saved before the other core tries to resume it\n-\t * This applies to blocking on mutex, cond, and to set_affinity\n-\t */\n-\tif (lt->pending_wr_queue != NULL) {\n-\t\tstruct lthread_queue *dest = lt->pending_wr_queue;\n-\n-\t\tlt->pending_wr_queue = NULL;\n-\n-\t\t/* queue the current thread to the specified queue */\n-\t\t_lthread_queue_insert_mp(dest, lt);\n-\t}\n-\n-\tsched->current_lthread = NULL;\n-}\n-\n-/*\n- * Handle sleep timer expiry\n-*/\n-void\n-_sched_timer_cb(struct rte_timer *tim, void *arg)\n-{\n-\tstruct lthread *lt = (struct lthread *) arg;\n-\tuint64_t state = lt->state;\n-\n-\tDIAG_EVENT(lt, LT_DIAG_LTHREAD_TMR_EXPIRED, &lt->tim, 0);\n-\n-\trte_timer_stop(tim);\n-\n-\tif (lt->state & BIT(ST_LT_CANCELLED))\n-\t\t(THIS_SCHED)->nb_blocked_threads--;\n-\n-\tlt->state = state | BIT(ST_LT_EXPIRED);\n-\t_lthread_resume(lt);\n-\tlt->state = state & CLEARBIT(ST_LT_EXPIRED);\n-}\n-\n-\n-\n-/*\n- * Returns 0 if there is a pending job in scheduler or 1 if done and can exit.\n- */\n-static inline int _lthread_sched_isdone(struct lthread_sched *sched)\n-{\n-\treturn (sched->run_flag == 0) &&\n-\t\t\t(_lthread_queue_empty(sched->ready)) &&\n-\t\t\t(_lthread_queue_empty(sched->pready)) &&\n-\t\t\t(sched->nb_blocked_threads == 0);\n-}\n-\n-/*\n- * Wait for all schedulers to start\n- */\n-static inline void _lthread_schedulers_sync_start(void)\n-{\n-\t__atomic_fetch_add(&active_schedulers, 1, __ATOMIC_RELAXED);\n-\n-\t/* wait for lthread schedulers\n-\t * Note we use sched_yield() rather than pthread_yield() to allow\n-\t * for the possibility of a pthread wrapper on lthread_yield(),\n-\t * something that is not possible unless the scheduler is running.\n-\t */\n-\twhile (__atomic_load_n(&active_schedulers, __ATOMIC_RELAXED) <\n-\t       __atomic_load_n(&num_schedulers, __ATOMIC_RELAXED))\n-\t\tsched_yield();\n-\n-}\n-\n-/*\n- * Wait for all schedulers to stop\n- */\n-static inline void _lthread_schedulers_sync_stop(void)\n-{\n-\t__atomic_fetch_sub(&active_schedulers, 1, __ATOMIC_RELAXED);\n-\t__atomic_fetch_sub(&num_schedulers, 1, __ATOMIC_RELAXED);\n-\n-\t/* wait for schedulers\n-\t * Note we use sched_yield() rather than pthread_yield() to allow\n-\t * for the possibility of a pthread wrapper on lthread_yield(),\n-\t * something that is not possible unless the scheduler is running.\n-\t */\n-\twhile (__atomic_load_n(&active_schedulers, __ATOMIC_RELAXED) > 0)\n-\t\tsched_yield();\n-\n-}\n-\n-\n-/*\n- * Run the lthread scheduler\n- * This loop is the heart of the system\n- */\n-void lthread_run(void)\n-{\n-\n-\tstruct lthread_sched *sched = THIS_SCHED;\n-\tstruct lthread *lt = NULL;\n-\n-\tRTE_LOG(INFO, LTHREAD,\n-\t\t\"starting scheduler %p on lcore %u phys core %u\\n\",\n-\t\tsched, rte_lcore_id(),\n-\t\trte_lcore_index(rte_lcore_id()));\n-\n-\t/* if more than one, wait for all schedulers to start */\n-\t_lthread_schedulers_sync_start();\n-\n-\n-\t/*\n-\t * This is the main scheduling loop\n-\t * So long as there are tasks in existence we run this loop.\n-\t * We check for:-\n-\t *   expired timers,\n-\t *   the local ready queue,\n-\t *   and the peer ready queue,\n-\t *\n-\t * and resume lthreads ad infinitum.\n-\t */\n-\twhile (!_lthread_sched_isdone(sched)) {\n-\n-\t\trte_timer_manage();\n-\n-\t\tlt = _lthread_queue_poll(sched->ready);\n-\t\tif (lt != NULL)\n-\t\t\t_lthread_resume(lt);\n-\t\tlt = _lthread_queue_poll(sched->pready);\n-\t\tif (lt != 
NULL)\n-\t\t\t_lthread_resume(lt);\n-\t}\n-\n-\n-\t/* if more than one wait for all schedulers to stop */\n-\t_lthread_schedulers_sync_stop();\n-\n-\t(THIS_SCHED) = NULL;\n-\n-\tRTE_LOG(INFO, LTHREAD,\n-\t\t\"stopping scheduler %p on lcore %u phys core %u\\n\",\n-\t\tsched, rte_lcore_id(),\n-\t\trte_lcore_index(rte_lcore_id()));\n-\tfflush(stdout);\n-}\n-\n-/*\n- * Return the scheduler for this lcore\n- *\n- */\n-struct lthread_sched *_lthread_sched_get(unsigned int lcore_id)\n-{\n-\tstruct lthread_sched *res = NULL;\n-\n-\tif (lcore_id < LTHREAD_MAX_LCORES)\n-\t\tres = schedcore[lcore_id];\n-\n-\treturn res;\n-}\n-\n-/*\n- * migrate the current thread to another scheduler running\n- * on the specified lcore.\n- */\n-int lthread_set_affinity(unsigned lcoreid)\n-{\n-\tstruct lthread *lt = THIS_LTHREAD;\n-\tstruct lthread_sched *dest_sched;\n-\n-\tif (unlikely(lcoreid >= LTHREAD_MAX_LCORES))\n-\t\treturn POSIX_ERRNO(EINVAL);\n-\n-\tDIAG_EVENT(lt, LT_DIAG_LTHREAD_AFFINITY, lcoreid, 0);\n-\n-\tdest_sched = schedcore[lcoreid];\n-\n-\tif (unlikely(dest_sched == NULL))\n-\t\treturn POSIX_ERRNO(EINVAL);\n-\n-\tif (likely(dest_sched != THIS_SCHED)) {\n-\t\tlt->sched = dest_sched;\n-\t\tlt->pending_wr_queue = dest_sched->pready;\n-\t\t_affinitize();\n-\t\treturn 0;\n-\t}\n-\treturn 0;\n-}\ndiff --git a/examples/performance-thread/common/lthread_sched.h b/examples/performance-thread/common/lthread_sched.h\ndeleted file mode 100644\nindex d14bec1c860f..000000000000\n--- a/examples/performance-thread/common/lthread_sched.h\n+++ /dev/null\n@@ -1,104 +0,0 @@\n-/*\n- * SPDX-License-Identifier: BSD-3-Clause\n- * Copyright 2015 Intel Corporation.\n- * Copyright 2012 Hasan Alayli <halayli@gmail.com>\n- */\n-\n-#ifndef LTHREAD_SCHED_H_\n-#define LTHREAD_SCHED_H_\n-\n-#ifdef __cplusplus\n-extern \"C\" {\n-#endif\n-\n-#include \"lthread_int.h\"\n-#include \"lthread_queue.h\"\n-#include \"lthread_objcache.h\"\n-#include \"lthread_diag.h\"\n-#include \"ctx.h\"\n-\n-/*\n- * insert an lthread into a queue\n- */\n-static inline void\n-_ready_queue_insert(struct lthread_sched *sched, struct lthread *lt)\n-{\n-\tif (sched == THIS_SCHED)\n-\t\t_lthread_queue_insert_sp((THIS_SCHED)->ready, lt);\n-\telse\n-\t\t_lthread_queue_insert_mp(sched->pready, lt);\n-}\n-\n-/*\n- * remove an lthread from a queue\n- */\n-static inline struct lthread *_ready_queue_remove(struct lthread_queue *q)\n-{\n-\treturn _lthread_queue_remove(q);\n-}\n-\n-/**\n- * Return true if the ready queue is empty\n- */\n-static inline int _ready_queue_empty(struct lthread_queue *q)\n-{\n-\treturn _lthread_queue_empty(q);\n-}\n-\n-static inline uint64_t _sched_now(void)\n-{\n-\tuint64_t now = rte_rdtsc();\n-\n-\tif (now > (THIS_SCHED)->birth)\n-\t\treturn now - (THIS_SCHED)->birth;\n-\tif (now < (THIS_SCHED)->birth)\n-\t\treturn (THIS_SCHED)->birth - now;\n-\t/* never return 0 because this means sleep forever */\n-\treturn 1;\n-}\n-\n-static __rte_always_inline void\n-_affinitize(void);\n-static inline void\n-_affinitize(void)\n-{\n-\tstruct lthread *lt = THIS_LTHREAD;\n-\n-\tDIAG_EVENT(lt, LT_DIAG_LTHREAD_SUSPENDED, 0, 0);\n-\tctx_switch(&(THIS_SCHED)->ctx, &lt->ctx);\n-}\n-\n-static __rte_always_inline void\n-_suspend(void);\n-static inline void\n-_suspend(void)\n-{\n-\tstruct lthread *lt = THIS_LTHREAD;\n-\n-\t(THIS_SCHED)->nb_blocked_threads++;\n-\tDIAG_EVENT(lt, LT_DIAG_LTHREAD_SUSPENDED, 0, 0);\n-\tctx_switch(&(THIS_SCHED)->ctx, &lt->ctx);\n-\t(THIS_SCHED)->nb_blocked_threads--;\n-}\n-\n-static __rte_always_inline void\n-_reschedule(void);\n-static 
inline void\n-_reschedule(void)\n-{\n-\tstruct lthread *lt = THIS_LTHREAD;\n-\n-\tDIAG_EVENT(lt, LT_DIAG_LTHREAD_RESCHEDULED, 0, 0);\n-\t_ready_queue_insert(THIS_SCHED, lt);\n-\tctx_switch(&(THIS_SCHED)->ctx, &lt->ctx);\n-}\n-\n-extern struct lthread_sched *schedcore[];\n-void _sched_timer_cb(struct rte_timer *tim, void *arg);\n-void _sched_shutdown(__rte_unused void *arg);\n-\n-#ifdef __cplusplus\n-}\n-#endif\n-\n-#endif\t\t\t\t/* LTHREAD_SCHED_H_ */\ndiff --git a/examples/performance-thread/common/lthread_timer.h b/examples/performance-thread/common/lthread_timer.h\ndeleted file mode 100644\nindex f2d8671a4f81..000000000000\n--- a/examples/performance-thread/common/lthread_timer.h\n+++ /dev/null\n@@ -1,68 +0,0 @@\n-/* SPDX-License-Identifier: BSD-3-Clause\n- * Copyright(c) 2015 Intel Corporation\n- */\n-\n-\n-#ifndef LTHREAD_TIMER_H_\n-#define LTHREAD_TIMER_H_\n-\n-#ifdef __cplusplus\n-extern \"C\" {\n-#endif\n-\n-#include \"lthread_int.h\"\n-#include \"lthread_sched.h\"\n-\n-\n-static inline uint64_t\n-_ns_to_clks(uint64_t ns)\n-{\n-\t/*\n-\t * clkns needs to be divided by 1E9 to get ns clocks. However,\n-\t * dividing by this first would lose a lot of accuracy.\n-\t * Dividing after a multiply by ns, could cause overflow of\n-\t * uint64_t if ns is about 5 seconds [if we assume a max tsc\n-\t * rate of 4GHz]. Therefore we first divide by 1E4, then\n-\t * multiply and finally divide by 1E5. This allows ns to be\n-\t * values many hours long, without overflow, while still keeping\n-\t * reasonable accuracy.\n-\t */\n-\tuint64_t clkns = rte_get_tsc_hz() / 1e4;\n-\n-\tclkns *= ns;\n-\tclkns /= 1e5;\n-\n-\treturn clkns;\n-}\n-\n-\n-static inline void\n-_timer_start(struct lthread *lt, uint64_t clks)\n-{\n-\tif (clks > 0) {\n-\t\tDIAG_EVENT(lt, LT_DIAG_LTHREAD_TMR_START, &lt->tim, clks);\n-\t\trte_timer_init(&lt->tim);\n-\t\trte_timer_reset(&lt->tim,\n-\t\t\t\tclks,\n-\t\t\t\tSINGLE,\n-\t\t\t\trte_lcore_id(),\n-\t\t\t\t_sched_timer_cb,\n-\t\t\t\t(void *)lt);\n-\t}\n-}\n-\n-\n-static inline void\n-_timer_stop(struct lthread *lt)\n-{\n-\tif (lt != NULL) {\n-\t\tDIAG_EVENT(lt, LT_DIAG_LTHREAD_TMR_DELETE, &lt->tim, 0);\n-\t\trte_timer_stop(&lt->tim);\n-\t}\n-}\n-\n-#ifdef __cplusplus\n-}\n-#endif\n-\n-#endif /* LTHREAD_TIMER_H_ */\ndiff --git a/examples/performance-thread/common/lthread_tls.c b/examples/performance-thread/common/lthread_tls.c\ndeleted file mode 100644\nindex 4ab2e3558b1c..000000000000\n--- a/examples/performance-thread/common/lthread_tls.c\n+++ /dev/null\n@@ -1,223 +0,0 @@\n-/* SPDX-License-Identifier: BSD-3-Clause\n- * Copyright(c) 2015 Intel Corporation\n- */\n-\n-#include <stdio.h>\n-#include <stdlib.h>\n-#include <string.h>\n-#include <stdint.h>\n-#include <limits.h>\n-#include <inttypes.h>\n-#include <unistd.h>\n-#include <pthread.h>\n-#include <fcntl.h>\n-#include <sys/time.h>\n-#include <sys/mman.h>\n-#include <sched.h>\n-\n-#include <rte_malloc.h>\n-#include <rte_log.h>\n-#include <rte_ring.h>\n-\n-#include \"lthread_tls.h\"\n-#include \"lthread_queue.h\"\n-#include \"lthread_objcache.h\"\n-#include \"lthread_sched.h\"\n-\n-static struct rte_ring *key_pool;\n-static uint64_t key_pool_init;\n-\n-/* needed to cause section start and end to be defined */\n-RTE_DEFINE_PER_LTHREAD(void *, dummy);\n-\n-static struct lthread_key key_table[LTHREAD_MAX_KEYS];\n-\n-RTE_INIT(thread_tls_ctor)\n-{\n-\tkey_pool = NULL;\n-\tkey_pool_init = 0;\n-}\n-\n-/*\n- * Initialize a pool of keys\n- * These are unique tokens that can be obtained by threads\n- * calling lthread_key_create()\n- 
*/\n-void _lthread_key_pool_init(void)\n-{\n-\tstatic struct rte_ring *pool;\n-\tstruct lthread_key *new_key;\n-\tchar name[MAX_LTHREAD_NAME_SIZE];\n-\n-\tbzero(key_table, sizeof(key_table));\n-\n-\tuint64_t pool_init = 0;\n-\t/* only one lcore should do this */\n-\tif (__atomic_compare_exchange_n(&key_pool_init, &pool_init, 1, 0,\n-\t\t\t__ATOMIC_RELAXED, __ATOMIC_RELAXED)) {\n-\n-\t\tsnprintf(name,\n-\t\t\tMAX_LTHREAD_NAME_SIZE,\n-\t\t\t\"lthread_key_pool_%d\",\n-\t\t\tgetpid());\n-\n-\t\tpool = rte_ring_create(name,\n-\t\t\t\t\tLTHREAD_MAX_KEYS, 0, 0);\n-\t\tRTE_ASSERT(pool);\n-\n-\t\tint i;\n-\n-\t\tfor (i = 1; i < LTHREAD_MAX_KEYS; i++) {\n-\t\t\tnew_key = &key_table[i];\n-\t\t\trte_ring_mp_enqueue((struct rte_ring *)pool,\n-\t\t\t\t\t\t(void *)new_key);\n-\t\t}\n-\t\tkey_pool = pool;\n-\t}\n-\t/* other lcores wait here till done */\n-\twhile (key_pool == NULL) {\n-\t\trte_compiler_barrier();\n-\t\tsched_yield();\n-\t};\n-}\n-\n-/*\n- * Create a key\n- * this means getting a key from the pool\n- */\n-int lthread_key_create(unsigned int *key, tls_destructor_func destructor)\n-{\n-\tif (key == NULL)\n-\t\treturn POSIX_ERRNO(EINVAL);\n-\n-\tstruct lthread_key *new_key;\n-\n-\tif (rte_ring_mc_dequeue((struct rte_ring *)key_pool, (void **)&new_key)\n-\t    == 0) {\n-\t\tnew_key->destructor = destructor;\n-\t\t*key = (new_key - key_table);\n-\n-\t\treturn 0;\n-\t}\n-\treturn POSIX_ERRNO(EAGAIN);\n-}\n-\n-\n-/*\n- * Delete a key\n- */\n-int lthread_key_delete(unsigned int k)\n-{\n-\tstruct lthread_key *key;\n-\n-\tkey = (struct lthread_key *) &key_table[k];\n-\n-\tif (k > LTHREAD_MAX_KEYS)\n-\t\treturn POSIX_ERRNO(EINVAL);\n-\n-\tkey->destructor = NULL;\n-\trte_ring_mp_enqueue((struct rte_ring *)key_pool,\n-\t\t\t\t\t(void *)key);\n-\treturn 0;\n-}\n-\n-\n-\n-/*\n- * Break association for all keys in use by this thread\n- * invoke the destructor if available.\n- * Since a destructor can create keys we could enter an infinite loop\n- * therefore we give up after LTHREAD_DESTRUCTOR_ITERATIONS\n- * the behavior is modelled on pthread\n- */\n-void _lthread_tls_destroy(struct lthread *lt)\n-{\n-\tint i, k;\n-\tint nb_keys;\n-\tvoid *data;\n-\n-\tfor (i = 0; i < LTHREAD_DESTRUCTOR_ITERATIONS; i++) {\n-\n-\t\tfor (k = 1; k < LTHREAD_MAX_KEYS; k++) {\n-\n-\t\t\t/* no keys in use ? */\n-\t\t\tnb_keys = lt->tls->nb_keys_inuse;\n-\t\t\tif (nb_keys == 0)\n-\t\t\t\treturn;\n-\n-\t\t\t/* this key not in use ? 
*/\n-\t\t\tif (lt->tls->data[k] == NULL)\n-\t\t\t\tcontinue;\n-\n-\t\t\t/* remove this key */\n-\t\t\tdata = lt->tls->data[k];\n-\t\t\tlt->tls->data[k] = NULL;\n-\t\t\tlt->tls->nb_keys_inuse = nb_keys-1;\n-\n-\t\t\t/* invoke destructor */\n-\t\t\tif (key_table[k].destructor != NULL)\n-\t\t\t\tkey_table[k].destructor(data);\n-\t\t}\n-\t}\n-}\n-\n-/*\n- * Return the pointer associated with a key\n- * If the key is no longer valid return NULL\n- */\n-void\n-*lthread_getspecific(unsigned int k)\n-{\n-\tvoid *res = NULL;\n-\n-\tif (k < LTHREAD_MAX_KEYS)\n-\t\tres = THIS_LTHREAD->tls->data[k];\n-\n-\treturn res;\n-}\n-\n-/*\n- * Set a value against a key\n- * If the key is no longer valid return an error\n- * when storing value\n- */\n-int lthread_setspecific(unsigned int k, const void *data)\n-{\n-\tif (k >= LTHREAD_MAX_KEYS)\n-\t\treturn POSIX_ERRNO(EINVAL);\n-\n-\tint n = THIS_LTHREAD->tls->nb_keys_inuse;\n-\n-\t/* discard const qualifier */\n-\tchar *p = (char *) (uintptr_t) data;\n-\n-\n-\tif (data != NULL) {\n-\t\tif (THIS_LTHREAD->tls->data[k] == NULL)\n-\t\t\tTHIS_LTHREAD->tls->nb_keys_inuse = n+1;\n-\t}\n-\n-\tTHIS_LTHREAD->tls->data[k] = (void *) p;\n-\treturn 0;\n-}\n-\n-/*\n- * Allocate data for TLS cache\n-*/\n-void _lthread_tls_alloc(struct lthread *lt)\n-{\n-\tstruct lthread_tls *tls;\n-\n-\ttls = _lthread_objcache_alloc((THIS_SCHED)->tls_cache);\n-\n-\tRTE_ASSERT(tls != NULL);\n-\n-\ttls->root_sched = (THIS_SCHED);\n-\tlt->tls = tls;\n-\n-\t/* allocate data for TLS varaiables using RTE_PER_LTHREAD macros */\n-\tif (sizeof(void *) < (uint64_t)RTE_PER_LTHREAD_SECTION_SIZE) {\n-\t\tlt->per_lthread_data =\n-\t\t    _lthread_objcache_alloc((THIS_SCHED)->per_lthread_cache);\n-\t}\n-}\ndiff --git a/examples/performance-thread/common/lthread_tls.h b/examples/performance-thread/common/lthread_tls.h\ndeleted file mode 100644\nindex 4c262e98b019..000000000000\n--- a/examples/performance-thread/common/lthread_tls.h\n+++ /dev/null\n@@ -1,35 +0,0 @@\n-/* SPDX-License-Identifier: BSD-3-Clause\n- * Copyright(c) 2015 Intel Corporation\n- */\n-\n-#ifndef LTHREAD_TLS_H_\n-#define LTHREAD_TLS_H_\n-\n-#ifdef __cplusplus\n-extern \"C\" {\n-#endif\n-\n-#include \"lthread_api.h\"\n-\n-#define RTE_PER_LTHREAD_SECTION_SIZE \\\n-(&__stop_per_lt - &__start_per_lt)\n-\n-struct lthread_key {\n-\ttls_destructor_func destructor;\n-};\n-\n-struct lthread_tls {\n-\tvoid *data[LTHREAD_MAX_KEYS];\n-\tint  nb_keys_inuse;\n-\tstruct lthread_sched *root_sched;\n-};\n-\n-void _lthread_tls_destroy(struct lthread *lt);\n-void _lthread_key_pool_init(void);\n-void _lthread_tls_alloc(struct lthread *lt);\n-\n-#ifdef __cplusplus\n-}\n-#endif\n-\n-#endif\t\t\t\t/* LTHREAD_TLS_H_ */\ndiff --git a/examples/performance-thread/l3fwd-thread/Makefile b/examples/performance-thread/l3fwd-thread/Makefile\ndeleted file mode 100644\nindex 14ce9c0eb297..000000000000\n--- a/examples/performance-thread/l3fwd-thread/Makefile\n+++ /dev/null\n@@ -1,54 +0,0 @@\n-# SPDX-License-Identifier: BSD-3-Clause\n-# Copyright(c) 2010-2020 Intel Corporation\n-\n-# binary name\n-APP = l3fwd-thread\n-\n-# all source are stored in SRCS-y\n-SRCS-y := main.c\n-\n-include ../common/common.mk\n-\n-ifeq ($(MAKECMDGOALS),static)\n-# check for broken pkg-config\n-ifeq ($(shell echo $(LDFLAGS_STATIC) | grep 'whole-archive.*l:lib.*no-whole-archive'),)\n-$(warning \"pkg-config output list does not contain drivers between 'whole-archive'/'no-whole-archive' flags.\")\n-$(error \"Cannot generate statically-linked binaries with this version of 
pkg-config\")\n-endif\n-endif\n-\n-PKGCONF ?= pkg-config\n-\n-CFLAGS += -DALLOW_EXPERIMENTAL_API\n-\n-# Build using pkg-config variables if possible\n-ifneq ($(shell $(PKGCONF) --exists libdpdk && echo 0),0)\n-$(error \"no installation of DPDK found\")\n-endif\n-\n-all: shared\n-.PHONY: shared static\n-shared: build/$(APP)-shared\n-\tln -sf $(APP)-shared build/$(APP)\n-static: build/$(APP)-static\n-\tln -sf $(APP)-static build/$(APP)\n-\n-\n-PC_FILE := $(shell $(PKGCONF) --path libdpdk 2>/dev/null)\n-CFLAGS += -O3 $(shell $(PKGCONF) --cflags libdpdk)\n-LDFLAGS_SHARED = $(shell $(PKGCONF) --libs libdpdk)\n-LDFLAGS_STATIC = $(shell $(PKGCONF) --static --libs libdpdk)\n-\n-build/$(APP)-shared: $(SRCS-y) Makefile $(PC_FILE) | build\n-\t$(CC) $(CFLAGS) $(filter %.c,$^) -o $@ $(LDFLAGS) $(LDFLAGS_SHARED)\n-\n-build/$(APP)-static: $(SRCS-y) Makefile $(PC_FILE) | build\n-\t$(CC) $(CFLAGS) $(filter %.c,$^) -o $@ $(LDFLAGS) $(LDFLAGS_SHARED)\n-\n-build:\n-\t@mkdir -p $@\n-\n-.PHONY: clean\n-clean:\n-\trm -f build/$(APP) build/$(APP)-static build/$(APP)-shared\n-\ttest -d build && rmdir -p build || true\ndiff --git a/examples/performance-thread/l3fwd-thread/main.c b/examples/performance-thread/l3fwd-thread/main.c\ndeleted file mode 100644\nindex 8a3504059766..000000000000\n--- a/examples/performance-thread/l3fwd-thread/main.c\n+++ /dev/null\n@@ -1,3797 +0,0 @@\n-/* SPDX-License-Identifier: BSD-3-Clause\n- * Copyright(c) 2010-2016 Intel Corporation\n- */\n-\n-#ifndef _GNU_SOURCE\n-#define _GNU_SOURCE\n-#endif\n-\n-#include <stdio.h>\n-#include <stdlib.h>\n-#include <stdint.h>\n-#include <inttypes.h>\n-#include <sys/types.h>\n-#include <string.h>\n-#include <sys/queue.h>\n-#include <stdarg.h>\n-#include <errno.h>\n-#include <getopt.h>\n-#include <sched.h>\n-\n-#include <rte_common.h>\n-#include <rte_vect.h>\n-#include <rte_byteorder.h>\n-#include <rte_log.h>\n-#include <rte_memory.h>\n-#include <rte_memcpy.h>\n-#include <rte_eal.h>\n-#include <rte_launch.h>\n-#include <rte_cycles.h>\n-#include <rte_prefetch.h>\n-#include <rte_lcore.h>\n-#include <rte_per_lcore.h>\n-#include <rte_branch_prediction.h>\n-#include <rte_interrupts.h>\n-#include <rte_random.h>\n-#include <rte_debug.h>\n-#include <rte_ether.h>\n-#include <rte_ethdev.h>\n-#include <rte_ring.h>\n-#include <rte_mempool.h>\n-#include <rte_mbuf.h>\n-#include <rte_ip.h>\n-#include <rte_tcp.h>\n-#include <rte_udp.h>\n-#include <rte_string_fns.h>\n-#include <rte_pause.h>\n-#include <rte_timer.h>\n-\n-#include <cmdline_parse.h>\n-#include <cmdline_parse_etheraddr.h>\n-\n-#include <lthread_api.h>\n-\n-#define APP_LOOKUP_EXACT_MATCH          0\n-#define APP_LOOKUP_LPM                  1\n-#define DO_RFC_1812_CHECKS\n-\n-/* Enable cpu-load stats 0-off, 1-on */\n-#define APP_CPU_LOAD                 1\n-\n-#ifndef APP_LOOKUP_METHOD\n-#define APP_LOOKUP_METHOD             APP_LOOKUP_LPM\n-#endif\n-\n-#ifndef __GLIBC__ /* sched_getcpu() is glibc specific */\n-#define sched_getcpu() rte_lcore_id()\n-#endif\n-\n-static int\n-check_ptype(int portid)\n-{\n-\tint i, ret;\n-\tint ipv4 = 0, ipv6 = 0;\n-\n-\tret = rte_eth_dev_get_supported_ptypes(portid, RTE_PTYPE_L3_MASK, NULL,\n-\t\t\t0);\n-\tif (ret <= 0)\n-\t\treturn 0;\n-\n-\tuint32_t ptypes[ret];\n-\n-\tret = rte_eth_dev_get_supported_ptypes(portid, RTE_PTYPE_L3_MASK,\n-\t\t\tptypes, ret);\n-\tfor (i = 0; i < ret; ++i) {\n-\t\tif (ptypes[i] & RTE_PTYPE_L3_IPV4)\n-\t\t\tipv4 = 1;\n-\t\tif (ptypes[i] & RTE_PTYPE_L3_IPV6)\n-\t\t\tipv6 = 1;\n-\t}\n-\n-\tif (ipv4 && ipv6)\n-\t\treturn 1;\n-\n-\treturn 
0;\n-}\n-\n-static inline void\n-parse_ptype(struct rte_mbuf *m)\n-{\n-\tstruct rte_ether_hdr *eth_hdr;\n-\tuint32_t packet_type = RTE_PTYPE_UNKNOWN;\n-\tuint16_t ether_type;\n-\n-\teth_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);\n-\tether_type = eth_hdr->ether_type;\n-\tif (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV4))\n-\t\tpacket_type |= RTE_PTYPE_L3_IPV4_EXT_UNKNOWN;\n-\telse if (ether_type == rte_cpu_to_be_16(RTE_ETHER_TYPE_IPV6))\n-\t\tpacket_type |= RTE_PTYPE_L3_IPV6_EXT_UNKNOWN;\n-\n-\tm->packet_type = packet_type;\n-}\n-\n-static uint16_t\n-cb_parse_ptype(__rte_unused uint16_t port, __rte_unused uint16_t queue,\n-\t\tstruct rte_mbuf *pkts[], uint16_t nb_pkts,\n-\t\t__rte_unused uint16_t max_pkts, __rte_unused void *user_param)\n-{\n-\tunsigned int i;\n-\n-\tfor (i = 0; i < nb_pkts; i++)\n-\t\tparse_ptype(pkts[i]);\n-\n-\treturn nb_pkts;\n-}\n-\n-/*\n- *  When set to zero, simple forwaring path is eanbled.\n- *  When set to one, optimized forwarding path is enabled.\n- *  Note that LPM optimisation path uses SSE4.1 instructions.\n- */\n-#define ENABLE_MULTI_BUFFER_OPTIMIZE\t1\n-\n-#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)\n-#include <rte_hash.h>\n-#elif (APP_LOOKUP_METHOD == APP_LOOKUP_LPM)\n-#include <rte_lpm.h>\n-#include <rte_lpm6.h>\n-#else\n-#error \"APP_LOOKUP_METHOD set to incorrect value\"\n-#endif\n-\n-#define RTE_LOGTYPE_L3FWD RTE_LOGTYPE_USER1\n-\n-#define MAX_JUMBO_PKT_LEN  9600\n-\n-#define IPV6_ADDR_LEN 16\n-\n-#define MEMPOOL_CACHE_SIZE 256\n-\n-/*\n- * This expression is used to calculate the number of mbufs needed depending on\n- * user input, taking into account memory for rx and tx hardware rings, cache\n- * per lcore and mtable per port per lcore. RTE_MAX is used to ensure that\n- * NB_MBUF never goes below a minimum value of 8192\n- */\n-\n-#define NB_MBUF RTE_MAX(\\\n-\t\t(nb_ports*nb_rx_queue*nb_rxd +      \\\n-\t\tnb_ports*nb_lcores*MAX_PKT_BURST +  \\\n-\t\tnb_ports*n_tx_queue*nb_txd +        \\\n-\t\tnb_lcores*MEMPOOL_CACHE_SIZE),      \\\n-\t\t(unsigned)8192)\n-\n-#define MAX_PKT_BURST     32\n-#define BURST_TX_DRAIN_US 100 /* TX drain every ~100us */\n-\n-/*\n- * Try to avoid TX buffering if we have at least MAX_TX_BURST packets to send.\n- */\n-#define\tMAX_TX_BURST  (MAX_PKT_BURST / 2)\n-#define BURST_SIZE    MAX_TX_BURST\n-\n-#define NB_SOCKETS 8\n-\n-/* Configure how many packets ahead to prefetch, when reading packets */\n-#define PREFETCH_OFFSET\t3\n-\n-/* Used to mark destination port as 'invalid'. */\n-#define\tBAD_PORT\t((uint16_t)-1)\n-\n-#define FWDSTEP\t4\n-\n-/*\n- * Configurable number of RX/TX ring descriptors\n- */\n-#define RTE_TEST_RX_DESC_DEFAULT 1024\n-#define RTE_TEST_TX_DESC_DEFAULT 1024\n-static uint16_t nb_rxd = RTE_TEST_RX_DESC_DEFAULT;\n-static uint16_t nb_txd = RTE_TEST_TX_DESC_DEFAULT;\n-\n-/* ethernet addresses of ports */\n-static uint64_t dest_eth_addr[RTE_MAX_ETHPORTS];\n-static struct rte_ether_addr ports_eth_addr[RTE_MAX_ETHPORTS];\n-\n-static xmm_t val_eth[RTE_MAX_ETHPORTS];\n-\n-/* replace first 12B of the ethernet header. */\n-#define\tMASK_ETH 0x3f\n-\n-/* mask of enabled ports */\n-static uint32_t enabled_port_mask;\n-static int promiscuous_on; /**< Set in promiscuous mode off by default. */\n-static int numa_on = 1;    /**< NUMA is enabled by default. */\n-static int parse_ptype_on;\n-\n-#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)\n-static int ipv6;           /**< ipv6 is false by default. 
*/\n-#endif\n-\n-#if (APP_CPU_LOAD == 1)\n-\n-#define MAX_CPU RTE_MAX_LCORE\n-#define CPU_LOAD_TIMEOUT_US (5 * 1000 * 1000)  /**< Timeout for collecting 5s */\n-\n-#define CPU_PROCESS     0\n-#define CPU_POLL        1\n-#define MAX_CPU_COUNTER 2\n-\n-struct cpu_load {\n-\tuint16_t       n_cpu;\n-\tuint64_t       counter;\n-\tuint64_t       hits[MAX_CPU_COUNTER][MAX_CPU];\n-} __rte_cache_aligned;\n-\n-static struct cpu_load cpu_load;\n-static int cpu_load_lcore_id = -1;\n-\n-#define SET_CPU_BUSY(thread, counter) \\\n-\t\tthread->conf.busy[counter] = 1\n-\n-#define SET_CPU_IDLE(thread, counter) \\\n-\t\tthread->conf.busy[counter] = 0\n-\n-#define IS_CPU_BUSY(thread, counter) \\\n-\t\t(thread->conf.busy[counter] > 0)\n-\n-#else\n-\n-#define SET_CPU_BUSY(thread, counter)\n-#define SET_CPU_IDLE(thread, counter)\n-#define IS_CPU_BUSY(thread, counter) 0\n-\n-#endif\n-\n-struct mbuf_table {\n-\tuint16_t len;\n-\tstruct rte_mbuf *m_table[MAX_PKT_BURST];\n-};\n-\n-struct lcore_rx_queue {\n-\tuint16_t port_id;\n-\tuint8_t queue_id;\n-} __rte_cache_aligned;\n-\n-#define MAX_RX_QUEUE_PER_LCORE 16\n-#define MAX_TX_QUEUE_PER_PORT  RTE_MAX_ETHPORTS\n-#define MAX_RX_QUEUE_PER_PORT  128\n-\n-#define MAX_LCORE_PARAMS       1024\n-struct rx_thread_params {\n-\tuint16_t port_id;\n-\tuint8_t queue_id;\n-\tuint8_t lcore_id;\n-\tuint8_t thread_id;\n-} __rte_cache_aligned;\n-\n-static struct rx_thread_params rx_thread_params_array[MAX_LCORE_PARAMS];\n-static struct rx_thread_params rx_thread_params_array_default[] = {\n-\t{0, 0, 2, 0},\n-\t{0, 1, 2, 1},\n-\t{0, 2, 2, 2},\n-\t{1, 0, 2, 3},\n-\t{1, 1, 2, 4},\n-\t{1, 2, 2, 5},\n-\t{2, 0, 2, 6},\n-\t{3, 0, 3, 7},\n-\t{3, 1, 3, 8},\n-};\n-\n-static struct rx_thread_params *rx_thread_params =\n-\t\trx_thread_params_array_default;\n-static uint16_t nb_rx_thread_params = RTE_DIM(rx_thread_params_array_default);\n-\n-struct tx_thread_params {\n-\tuint8_t lcore_id;\n-\tuint8_t thread_id;\n-} __rte_cache_aligned;\n-\n-static struct tx_thread_params tx_thread_params_array[MAX_LCORE_PARAMS];\n-static struct tx_thread_params tx_thread_params_array_default[] = {\n-\t{4, 0},\n-\t{5, 1},\n-\t{6, 2},\n-\t{7, 3},\n-\t{8, 4},\n-\t{9, 5},\n-\t{10, 6},\n-\t{11, 7},\n-\t{12, 8},\n-};\n-\n-static struct tx_thread_params *tx_thread_params =\n-\t\ttx_thread_params_array_default;\n-static uint16_t nb_tx_thread_params = RTE_DIM(tx_thread_params_array_default);\n-\n-static struct rte_eth_conf port_conf = {\n-\t.rxmode = {\n-\t\t.mq_mode = RTE_ETH_MQ_RX_RSS,\n-\t\t.split_hdr_size = 0,\n-\t\t.offloads = RTE_ETH_RX_OFFLOAD_CHECKSUM,\n-\t},\n-\t.rx_adv_conf = {\n-\t\t.rss_conf = {\n-\t\t\t.rss_key = NULL,\n-\t\t\t.rss_hf = RTE_ETH_RSS_TCP,\n-\t\t},\n-\t},\n-\t.txmode = {\n-\t\t.mq_mode = RTE_ETH_MQ_TX_NONE,\n-\t},\n-};\n-\n-static uint32_t max_pkt_len;\n-\n-static struct rte_mempool *pktmbuf_pool[NB_SOCKETS];\n-\n-#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)\n-\n-#include <rte_hash_crc.h>\n-#define DEFAULT_HASH_FUNC       rte_hash_crc\n-\n-struct ipv4_5tuple {\n-\tuint32_t ip_dst;\n-\tuint32_t ip_src;\n-\tuint16_t port_dst;\n-\tuint16_t port_src;\n-\tuint8_t  proto;\n-} __rte_packed;\n-\n-union ipv4_5tuple_host {\n-\tstruct {\n-\t\tuint8_t  pad0;\n-\t\tuint8_t  proto;\n-\t\tuint16_t pad1;\n-\t\tuint32_t ip_src;\n-\t\tuint32_t ip_dst;\n-\t\tuint16_t port_src;\n-\t\tuint16_t port_dst;\n-\t};\n-\t__m128i xmm;\n-};\n-\n-#define XMM_NUM_IN_IPV6_5TUPLE 3\n-\n-struct ipv6_5tuple {\n-\tuint8_t  ip_dst[IPV6_ADDR_LEN];\n-\tuint8_t  ip_src[IPV6_ADDR_LEN];\n-\tuint16_t port_dst;\n-\tuint16_t 
port_src;\n-\tuint8_t  proto;\n-} __rte_packed;\n-\n-union ipv6_5tuple_host {\n-\tstruct {\n-\t\tuint16_t pad0;\n-\t\tuint8_t  proto;\n-\t\tuint8_t  pad1;\n-\t\tuint8_t  ip_src[IPV6_ADDR_LEN];\n-\t\tuint8_t  ip_dst[IPV6_ADDR_LEN];\n-\t\tuint16_t port_src;\n-\t\tuint16_t port_dst;\n-\t\tuint64_t reserve;\n-\t};\n-\t__m128i xmm[XMM_NUM_IN_IPV6_5TUPLE];\n-};\n-\n-struct ipv4_l3fwd_route {\n-\tstruct ipv4_5tuple key;\n-\tuint8_t if_out;\n-};\n-\n-struct ipv6_l3fwd_route {\n-\tstruct ipv6_5tuple key;\n-\tuint8_t if_out;\n-};\n-\n-static struct ipv4_l3fwd_route ipv4_l3fwd_route_array[] = {\n-\t{{RTE_IPV4(101, 0, 0, 0), RTE_IPV4(100, 10, 0, 1),  101, 11, IPPROTO_TCP}, 0},\n-\t{{RTE_IPV4(201, 0, 0, 0), RTE_IPV4(200, 20, 0, 1),  102, 12, IPPROTO_TCP}, 1},\n-\t{{RTE_IPV4(111, 0, 0, 0), RTE_IPV4(100, 30, 0, 1),  101, 11, IPPROTO_TCP}, 2},\n-\t{{RTE_IPV4(211, 0, 0, 0), RTE_IPV4(200, 40, 0, 1),  102, 12, IPPROTO_TCP}, 3},\n-};\n-\n-static struct ipv6_l3fwd_route ipv6_l3fwd_route_array[] = {\n-\t{{\n-\t{0xfe, 0x80, 0, 0, 0, 0, 0, 0, 0x02, 0x1e, 0x67, 0xff, 0xfe, 0, 0, 0},\n-\t{0xfe, 0x80, 0, 0, 0, 0, 0, 0, 0x02, 0x1b, 0x21, 0xff, 0xfe, 0x91, 0x38,\n-\t\t\t0x05},\n-\t101, 11, IPPROTO_TCP}, 0},\n-\n-\t{{\n-\t{0xfe, 0x90, 0, 0, 0, 0, 0, 0, 0x02, 0x1e, 0x67, 0xff, 0xfe, 0, 0, 0},\n-\t{0xfe, 0x90, 0, 0, 0, 0, 0, 0, 0x02, 0x1b, 0x21, 0xff, 0xfe, 0x91, 0x38,\n-\t\t\t0x05},\n-\t102, 12, IPPROTO_TCP}, 1},\n-\n-\t{{\n-\t{0xfe, 0xa0, 0, 0, 0, 0, 0, 0, 0x02, 0x1e, 0x67, 0xff, 0xfe, 0, 0, 0},\n-\t{0xfe, 0xa0, 0, 0, 0, 0, 0, 0, 0x02, 0x1b, 0x21, 0xff, 0xfe, 0x91, 0x38,\n-\t\t\t0x05},\n-\t101, 11, IPPROTO_TCP}, 2},\n-\n-\t{{\n-\t{0xfe, 0xb0, 0, 0, 0, 0, 0, 0, 0x02, 0x1e, 0x67, 0xff, 0xfe, 0, 0, 0},\n-\t{0xfe, 0xb0, 0, 0, 0, 0, 0, 0, 0x02, 0x1b, 0x21, 0xff, 0xfe, 0x91, 0x38,\n-\t\t\t0x05},\n-\t102, 12, IPPROTO_TCP}, 3},\n-};\n-\n-typedef struct rte_hash lookup_struct_t;\n-static lookup_struct_t *ipv4_l3fwd_lookup_struct[NB_SOCKETS];\n-static lookup_struct_t *ipv6_l3fwd_lookup_struct[NB_SOCKETS];\n-\n-#ifdef RTE_ARCH_X86_64\n-/* default to 4 million hash entries (approx) */\n-#define L3FWD_HASH_ENTRIES (1024*1024*4)\n-#else\n-/* 32-bit has less address-space for hugepage memory, limit to 1M entries */\n-#define L3FWD_HASH_ENTRIES (1024*1024*1)\n-#endif\n-#define HASH_ENTRY_NUMBER_DEFAULT 4\n-\n-static uint32_t hash_entry_number = HASH_ENTRY_NUMBER_DEFAULT;\n-\n-static inline uint32_t\n-ipv4_hash_crc(const void *data, __rte_unused uint32_t data_len,\n-\t\tuint32_t init_val)\n-{\n-\tconst union ipv4_5tuple_host *k;\n-\tuint32_t t;\n-\tconst uint32_t *p;\n-\n-\tk = data;\n-\tt = k->proto;\n-\tp = (const uint32_t *)&k->port_src;\n-\n-\tinit_val = rte_hash_crc_4byte(t, init_val);\n-\tinit_val = rte_hash_crc_4byte(k->ip_src, init_val);\n-\tinit_val = rte_hash_crc_4byte(k->ip_dst, init_val);\n-\tinit_val = rte_hash_crc_4byte(*p, init_val);\n-\treturn init_val;\n-}\n-\n-static inline uint32_t\n-ipv6_hash_crc(const void *data, __rte_unused uint32_t data_len,\n-\t\tuint32_t init_val)\n-{\n-\tconst union ipv6_5tuple_host *k;\n-\tuint32_t t;\n-\tconst uint32_t *p;\n-\tconst uint32_t *ip_src0, *ip_src1, *ip_src2, *ip_src3;\n-\tconst uint32_t *ip_dst0, *ip_dst1, *ip_dst2, *ip_dst3;\n-\n-\tk = data;\n-\tt = k->proto;\n-\tp = (const uint32_t *)&k->port_src;\n-\n-\tip_src0 = (const uint32_t *) k->ip_src;\n-\tip_src1 = (const uint32_t *)(k->ip_src + 4);\n-\tip_src2 = (const uint32_t *)(k->ip_src + 8);\n-\tip_src3 = (const uint32_t *)(k->ip_src + 12);\n-\tip_dst0 = (const uint32_t *) k->ip_dst;\n-\tip_dst1 = (const uint32_t *)(k->ip_dst + 
4);\n-\tip_dst2 = (const uint32_t *)(k->ip_dst + 8);\n-\tip_dst3 = (const uint32_t *)(k->ip_dst + 12);\n-\tinit_val = rte_hash_crc_4byte(t, init_val);\n-\tinit_val = rte_hash_crc_4byte(*ip_src0, init_val);\n-\tinit_val = rte_hash_crc_4byte(*ip_src1, init_val);\n-\tinit_val = rte_hash_crc_4byte(*ip_src2, init_val);\n-\tinit_val = rte_hash_crc_4byte(*ip_src3, init_val);\n-\tinit_val = rte_hash_crc_4byte(*ip_dst0, init_val);\n-\tinit_val = rte_hash_crc_4byte(*ip_dst1, init_val);\n-\tinit_val = rte_hash_crc_4byte(*ip_dst2, init_val);\n-\tinit_val = rte_hash_crc_4byte(*ip_dst3, init_val);\n-\tinit_val = rte_hash_crc_4byte(*p, init_val);\n-\treturn init_val;\n-}\n-\n-#define IPV4_L3FWD_NUM_ROUTES RTE_DIM(ipv4_l3fwd_route_array)\n-#define IPV6_L3FWD_NUM_ROUTES RTE_DIM(ipv6_l3fwd_route_array)\n-\n-static uint8_t ipv4_l3fwd_out_if[L3FWD_HASH_ENTRIES] __rte_cache_aligned;\n-static uint8_t ipv6_l3fwd_out_if[L3FWD_HASH_ENTRIES] __rte_cache_aligned;\n-\n-#endif\n-\n-#if (APP_LOOKUP_METHOD == APP_LOOKUP_LPM)\n-struct ipv4_l3fwd_route {\n-\tuint32_t ip;\n-\tuint8_t  depth;\n-\tuint8_t  if_out;\n-};\n-\n-struct ipv6_l3fwd_route {\n-\tuint8_t ip[16];\n-\tuint8_t depth;\n-\tuint8_t if_out;\n-};\n-\n-static struct ipv4_l3fwd_route ipv4_l3fwd_route_array[] = {\n-\t{RTE_IPV4(1, 1, 1, 0), 24, 0},\n-\t{RTE_IPV4(2, 1, 1, 0), 24, 1},\n-\t{RTE_IPV4(3, 1, 1, 0), 24, 2},\n-\t{RTE_IPV4(4, 1, 1, 0), 24, 3},\n-\t{RTE_IPV4(5, 1, 1, 0), 24, 4},\n-\t{RTE_IPV4(6, 1, 1, 0), 24, 5},\n-\t{RTE_IPV4(7, 1, 1, 0), 24, 6},\n-\t{RTE_IPV4(8, 1, 1, 0), 24, 7},\n-};\n-\n-static struct ipv6_l3fwd_route ipv6_l3fwd_route_array[] = {\n-\t{{1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}, 48, 0},\n-\t{{2, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}, 48, 1},\n-\t{{3, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}, 48, 2},\n-\t{{4, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}, 48, 3},\n-\t{{5, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}, 48, 4},\n-\t{{6, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}, 48, 5},\n-\t{{7, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}, 48, 6},\n-\t{{8, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1}, 48, 7},\n-};\n-\n-#define IPV4_L3FWD_NUM_ROUTES RTE_DIM(ipv4_l3fwd_route_array)\n-#define IPV6_L3FWD_NUM_ROUTES RTE_DIM(ipv6_l3fwd_route_array)\n-\n-#define IPV4_L3FWD_LPM_MAX_RULES         1024\n-#define IPV6_L3FWD_LPM_MAX_RULES         1024\n-#define IPV6_L3FWD_LPM_NUMBER_TBL8S (1 << 16)\n-\n-typedef struct rte_lpm lookup_struct_t;\n-typedef struct rte_lpm6 lookup6_struct_t;\n-static lookup_struct_t *ipv4_l3fwd_lookup_struct[NB_SOCKETS];\n-static lookup6_struct_t *ipv6_l3fwd_lookup_struct[NB_SOCKETS];\n-#endif\n-\n-struct lcore_conf {\n-\tlookup_struct_t *ipv4_lookup_struct;\n-#if (APP_LOOKUP_METHOD == APP_LOOKUP_LPM)\n-\tlookup6_struct_t *ipv6_lookup_struct;\n-#else\n-\tlookup_struct_t *ipv6_lookup_struct;\n-#endif\n-\tvoid *data;\n-} __rte_cache_aligned;\n-\n-static struct lcore_conf lcore_conf[RTE_MAX_LCORE];\n-RTE_DEFINE_PER_LCORE(struct lcore_conf *, lcore_conf);\n-\n-#define MAX_RX_QUEUE_PER_THREAD 16\n-#define MAX_TX_PORT_PER_THREAD  RTE_MAX_ETHPORTS\n-#define MAX_TX_QUEUE_PER_PORT   RTE_MAX_ETHPORTS\n-#define MAX_RX_QUEUE_PER_PORT   128\n-\n-#define MAX_RX_THREAD 1024\n-#define MAX_TX_THREAD 1024\n-#define MAX_THREAD    (MAX_RX_THREAD + MAX_TX_THREAD)\n-\n-/**\n- * Producers and consumers threads configuration\n- */\n-static int lthreads_on = 1; /**< Use lthreads for processing*/\n-\n-uint16_t rx_counter;  /**< Number of spawned rx threads */\n-uint16_t tx_counter;  /**< Number of spawned tx threads */\n-\n-struct 
thread_conf {\n-\tuint16_t lcore_id;      /**< Initial lcore for rx thread */\n-\tuint16_t cpu_id;        /**< Cpu id for cpu load stats counter */\n-\tuint16_t thread_id;     /**< Thread ID */\n-\n-#if (APP_CPU_LOAD > 0)\n-\tint busy[MAX_CPU_COUNTER];\n-#endif\n-};\n-\n-struct thread_rx_conf {\n-\tstruct thread_conf conf;\n-\n-\tuint16_t n_rx_queue;\n-\tstruct lcore_rx_queue rx_queue_list[MAX_RX_QUEUE_PER_LCORE];\n-\n-\tuint16_t n_ring;        /**< Number of output rings */\n-\tstruct rte_ring *ring[RTE_MAX_LCORE];\n-\tstruct lthread_cond *ready[RTE_MAX_LCORE];\n-\n-#if (APP_CPU_LOAD > 0)\n-\tint busy[MAX_CPU_COUNTER];\n-#endif\n-} __rte_cache_aligned;\n-\n-uint16_t n_rx_thread;\n-struct thread_rx_conf rx_thread[MAX_RX_THREAD];\n-\n-struct thread_tx_conf {\n-\tstruct thread_conf conf;\n-\n-\tuint16_t tx_queue_id[RTE_MAX_ETHPORTS];\n-\tstruct mbuf_table tx_mbufs[RTE_MAX_ETHPORTS];\n-\n-\tstruct rte_ring *ring;\n-\tstruct lthread_cond **ready;\n-\n-} __rte_cache_aligned;\n-\n-uint16_t n_tx_thread;\n-struct thread_tx_conf tx_thread[MAX_TX_THREAD];\n-\n-/* Send burst of packets on an output interface */\n-static inline int\n-send_burst(struct thread_tx_conf *qconf, uint16_t n, uint16_t port)\n-{\n-\tstruct rte_mbuf **m_table;\n-\tint ret;\n-\tuint16_t queueid;\n-\n-\tqueueid = qconf->tx_queue_id[port];\n-\tm_table = (struct rte_mbuf **)qconf->tx_mbufs[port].m_table;\n-\n-\tret = rte_eth_tx_burst(port, queueid, m_table, n);\n-\tif (unlikely(ret < n)) {\n-\t\tdo {\n-\t\t\trte_pktmbuf_free(m_table[ret]);\n-\t\t} while (++ret < n);\n-\t}\n-\n-\treturn 0;\n-}\n-\n-/* Enqueue a single packet, and send burst if queue is filled */\n-static inline int\n-send_single_packet(struct rte_mbuf *m, uint16_t port)\n-{\n-\tuint16_t len;\n-\tstruct thread_tx_conf *qconf;\n-\n-\tif (lthreads_on)\n-\t\tqconf = (struct thread_tx_conf *)lthread_get_data();\n-\telse\n-\t\tqconf = (struct thread_tx_conf *)RTE_PER_LCORE(lcore_conf)->data;\n-\n-\tlen = qconf->tx_mbufs[port].len;\n-\tqconf->tx_mbufs[port].m_table[len] = m;\n-\tlen++;\n-\n-\t/* enough pkts to be sent */\n-\tif (unlikely(len == MAX_PKT_BURST)) {\n-\t\tsend_burst(qconf, MAX_PKT_BURST, port);\n-\t\tlen = 0;\n-\t}\n-\n-\tqconf->tx_mbufs[port].len = len;\n-\treturn 0;\n-}\n-\n-#if ((APP_LOOKUP_METHOD == APP_LOOKUP_LPM) && \\\n-\t(ENABLE_MULTI_BUFFER_OPTIMIZE == 1))\n-static __rte_always_inline void\n-send_packetsx4(uint16_t port,\n-\tstruct rte_mbuf *m[], uint32_t num)\n-{\n-\tuint32_t len, j, n;\n-\tstruct thread_tx_conf *qconf;\n-\n-\tif (lthreads_on)\n-\t\tqconf = (struct thread_tx_conf *)lthread_get_data();\n-\telse\n-\t\tqconf = (struct thread_tx_conf *)RTE_PER_LCORE(lcore_conf)->data;\n-\n-\tlen = qconf->tx_mbufs[port].len;\n-\n-\t/*\n-\t * If TX buffer for that queue is empty, and we have enough packets,\n-\t * then send them straightway.\n-\t */\n-\tif (num >= MAX_TX_BURST && len == 0) {\n-\t\tn = rte_eth_tx_burst(port, qconf->tx_queue_id[port], m, num);\n-\t\tif (unlikely(n < num)) {\n-\t\t\tdo {\n-\t\t\t\trte_pktmbuf_free(m[n]);\n-\t\t\t} while (++n < num);\n-\t\t}\n-\t\treturn;\n-\t}\n-\n-\t/*\n-\t * Put packets into TX buffer for that queue.\n-\t */\n-\n-\tn = len + num;\n-\tn = (n > MAX_PKT_BURST) ? 
MAX_PKT_BURST - len : num;\n-\n-\tj = 0;\n-\tswitch (n % FWDSTEP) {\n-\twhile (j < n) {\n-\tcase 0:\n-\t\tqconf->tx_mbufs[port].m_table[len + j] = m[j];\n-\t\tj++;\n-\t\t/* fall-through */\n-\tcase 3:\n-\t\tqconf->tx_mbufs[port].m_table[len + j] = m[j];\n-\t\tj++;\n-\t\t/* fall-through */\n-\tcase 2:\n-\t\tqconf->tx_mbufs[port].m_table[len + j] = m[j];\n-\t\tj++;\n-\t\t/* fall-through */\n-\tcase 1:\n-\t\tqconf->tx_mbufs[port].m_table[len + j] = m[j];\n-\t\tj++;\n-\t}\n-\t}\n-\n-\tlen += n;\n-\n-\t/* enough pkts to be sent */\n-\tif (unlikely(len == MAX_PKT_BURST)) {\n-\n-\t\tsend_burst(qconf, MAX_PKT_BURST, port);\n-\n-\t\t/* copy rest of the packets into the TX buffer. */\n-\t\tlen = num - n;\n-\t\tj = 0;\n-\t\tswitch (len % FWDSTEP) {\n-\t\twhile (j < len) {\n-\t\tcase 0:\n-\t\t\tqconf->tx_mbufs[port].m_table[j] = m[n + j];\n-\t\t\tj++;\n-\t\t\t/* fall-through */\n-\t\tcase 3:\n-\t\t\tqconf->tx_mbufs[port].m_table[j] = m[n + j];\n-\t\t\tj++;\n-\t\t\t/* fall-through */\n-\t\tcase 2:\n-\t\t\tqconf->tx_mbufs[port].m_table[j] = m[n + j];\n-\t\t\tj++;\n-\t\t\t/* fall-through */\n-\t\tcase 1:\n-\t\t\tqconf->tx_mbufs[port].m_table[j] = m[n + j];\n-\t\t\tj++;\n-\t\t}\n-\t\t}\n-\t}\n-\n-\tqconf->tx_mbufs[port].len = len;\n-}\n-#endif /* APP_LOOKUP_LPM */\n-\n-#ifdef DO_RFC_1812_CHECKS\n-static inline int\n-is_valid_ipv4_pkt(struct rte_ipv4_hdr *pkt, uint32_t link_len)\n-{\n-\t/* From http://www.rfc-editor.org/rfc/rfc1812.txt section 5.2.2 */\n-\t/*\n-\t * 1. The packet length reported by the Link Layer must be large\n-\t * enough to hold the minimum length legal IP datagram (20 bytes).\n-\t */\n-\tif (link_len < sizeof(struct rte_ipv4_hdr))\n-\t\treturn -1;\n-\n-\t/* 2. The IP checksum must be correct. */\n-\t/* this is checked in H/W */\n-\n-\t/*\n-\t * 3. The IP version number must be 4. If the version number is not 4\n-\t * then the packet may be another version of IP, such as IPng or\n-\t * ST-II.\n-\t */\n-\tif (((pkt->version_ihl) >> 4) != 4)\n-\t\treturn -3;\n-\t/*\n-\t * 4. The IP header length field must be large enough to hold the\n-\t * minimum length legal IP datagram (20 bytes = 5 words).\n-\t */\n-\tif ((pkt->version_ihl & 0xf) < 5)\n-\t\treturn -4;\n-\n-\t/*\n-\t * 5. The IP total length field must be large enough to hold the IP\n-\t * datagram header, whose length is specified in the IP header length\n-\t * field.\n-\t */\n-\tif (rte_cpu_to_be_16(pkt->total_length) < sizeof(struct rte_ipv4_hdr))\n-\t\treturn -5;\n-\n-\treturn 0;\n-}\n-#endif\n-\n-#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)\n-\n-static __m128i mask0;\n-static __m128i mask1;\n-static __m128i mask2;\n-static inline uint16_t\n-get_ipv4_dst_port(void *ipv4_hdr, uint16_t portid,\n-\t\tlookup_struct_t *ipv4_l3fwd_lookup_struct)\n-{\n-\tint ret = 0;\n-\tunion ipv4_5tuple_host key;\n-\n-\tipv4_hdr = (uint8_t *)ipv4_hdr +\n-\t\toffsetof(struct rte_ipv4_hdr, time_to_live);\n-\t__m128i data = _mm_loadu_si128((__m128i *)(ipv4_hdr));\n-\t/* Get 5 tuple: dst port, src port, dst IP address, src IP address and\n-\t   protocol */\n-\tkey.xmm = _mm_and_si128(data, mask0);\n-\t/* Find destination port */\n-\tret = rte_hash_lookup(ipv4_l3fwd_lookup_struct, (const void *)&key);\n-\treturn ((ret < 0) ? 
portid : ipv4_l3fwd_out_if[ret]);\n-}\n-\n-static inline uint16_t\n-get_ipv6_dst_port(void *ipv6_hdr, uint16_t portid,\n-\t\tlookup_struct_t *ipv6_l3fwd_lookup_struct)\n-{\n-\tint ret = 0;\n-\tunion ipv6_5tuple_host key;\n-\n-\tipv6_hdr = (uint8_t *)ipv6_hdr +\n-\t\toffsetof(struct rte_ipv6_hdr, payload_len);\n-\t__m128i data0 = _mm_loadu_si128((__m128i *)(ipv6_hdr));\n-\t__m128i data1 = _mm_loadu_si128((__m128i *)(((uint8_t *)ipv6_hdr) +\n-\t\t\tsizeof(__m128i)));\n-\t__m128i data2 = _mm_loadu_si128((__m128i *)(((uint8_t *)ipv6_hdr) +\n-\t\t\tsizeof(__m128i) + sizeof(__m128i)));\n-\t/* Get part of 5 tuple: src IP address lower 96 bits and protocol */\n-\tkey.xmm[0] = _mm_and_si128(data0, mask1);\n-\t/* Get part of 5 tuple: dst IP address lower 96 bits and src IP address\n-\t   higher 32 bits */\n-\tkey.xmm[1] = data1;\n-\t/* Get part of 5 tuple: dst port and src port and dst IP address higher\n-\t   32 bits */\n-\tkey.xmm[2] = _mm_and_si128(data2, mask2);\n-\n-\t/* Find destination port */\n-\tret = rte_hash_lookup(ipv6_l3fwd_lookup_struct, (const void *)&key);\n-\treturn ((ret < 0) ? portid : ipv6_l3fwd_out_if[ret]);\n-}\n-#endif\n-\n-#if (APP_LOOKUP_METHOD == APP_LOOKUP_LPM)\n-\n-static inline uint16_t\n-get_ipv4_dst_port(void *ipv4_hdr, uint16_t portid,\n-\t\tlookup_struct_t *ipv4_l3fwd_lookup_struct)\n-{\n-\tuint32_t next_hop;\n-\n-\treturn ((rte_lpm_lookup(ipv4_l3fwd_lookup_struct,\n-\t\trte_be_to_cpu_32(((struct rte_ipv4_hdr *)ipv4_hdr)->dst_addr),\n-\t\t&next_hop) == 0) ? next_hop : portid);\n-}\n-\n-static inline uint16_t\n-get_ipv6_dst_port(void *ipv6_hdr,  uint16_t portid,\n-\t\tlookup6_struct_t *ipv6_l3fwd_lookup_struct)\n-{\n-\tuint32_t next_hop;\n-\n-\treturn ((rte_lpm6_lookup(ipv6_l3fwd_lookup_struct,\n-\t\t((struct rte_ipv6_hdr *)ipv6_hdr)->dst_addr, &next_hop) == 0) ?\n-\t\tnext_hop : portid);\n-}\n-#endif\n-\n-static inline void l3fwd_simple_forward(struct rte_mbuf *m, uint16_t portid)\n-\t\t__rte_unused;\n-\n-#if ((APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH) && \\\n-\t(ENABLE_MULTI_BUFFER_OPTIMIZE == 1))\n-\n-#define MASK_ALL_PKTS   0xff\n-#define EXCLUDE_1ST_PKT 0xfe\n-#define EXCLUDE_2ND_PKT 0xfd\n-#define EXCLUDE_3RD_PKT 0xfb\n-#define EXCLUDE_4TH_PKT 0xf7\n-#define EXCLUDE_5TH_PKT 0xef\n-#define EXCLUDE_6TH_PKT 0xdf\n-#define EXCLUDE_7TH_PKT 0xbf\n-#define EXCLUDE_8TH_PKT 0x7f\n-\n-static inline void\n-simple_ipv4_fwd_8pkts(struct rte_mbuf *m[8], uint16_t portid)\n-{\n-\tstruct rte_ether_hdr *eth_hdr[8];\n-\tstruct rte_ipv4_hdr *ipv4_hdr[8];\n-\tuint16_t dst_port[8];\n-\tint32_t ret[8];\n-\tunion ipv4_5tuple_host key[8];\n-\t__m128i data[8];\n-\n-\teth_hdr[0] = rte_pktmbuf_mtod(m[0], struct rte_ether_hdr *);\n-\teth_hdr[1] = rte_pktmbuf_mtod(m[1], struct rte_ether_hdr *);\n-\teth_hdr[2] = rte_pktmbuf_mtod(m[2], struct rte_ether_hdr *);\n-\teth_hdr[3] = rte_pktmbuf_mtod(m[3], struct rte_ether_hdr *);\n-\teth_hdr[4] = rte_pktmbuf_mtod(m[4], struct rte_ether_hdr *);\n-\teth_hdr[5] = rte_pktmbuf_mtod(m[5], struct rte_ether_hdr *);\n-\teth_hdr[6] = rte_pktmbuf_mtod(m[6], struct rte_ether_hdr *);\n-\teth_hdr[7] = rte_pktmbuf_mtod(m[7], struct rte_ether_hdr *);\n-\n-\t/* Handle IPv4 headers.*/\n-\tipv4_hdr[0] = rte_pktmbuf_mtod_offset(m[0], struct rte_ipv4_hdr *,\n-\t\t\tsizeof(struct rte_ether_hdr));\n-\tipv4_hdr[1] = rte_pktmbuf_mtod_offset(m[1], struct rte_ipv4_hdr *,\n-\t\t\tsizeof(struct rte_ether_hdr));\n-\tipv4_hdr[2] = rte_pktmbuf_mtod_offset(m[2], struct rte_ipv4_hdr *,\n-\t\t\tsizeof(struct rte_ether_hdr));\n-\tipv4_hdr[3] = rte_pktmbuf_mtod_offset(m[3], struct 
rte_ipv4_hdr *,\n-\t\t\tsizeof(struct rte_ether_hdr));\n-\tipv4_hdr[4] = rte_pktmbuf_mtod_offset(m[4], struct rte_ipv4_hdr *,\n-\t\t\tsizeof(struct rte_ether_hdr));\n-\tipv4_hdr[5] = rte_pktmbuf_mtod_offset(m[5], struct rte_ipv4_hdr *,\n-\t\t\tsizeof(struct rte_ether_hdr));\n-\tipv4_hdr[6] = rte_pktmbuf_mtod_offset(m[6], struct rte_ipv4_hdr *,\n-\t\t\tsizeof(struct rte_ether_hdr));\n-\tipv4_hdr[7] = rte_pktmbuf_mtod_offset(m[7], struct rte_ipv4_hdr *,\n-\t\t\tsizeof(struct rte_ether_hdr));\n-\n-#ifdef DO_RFC_1812_CHECKS\n-\t/* Check to make sure the packet is valid (RFC1812) */\n-\tuint8_t valid_mask = MASK_ALL_PKTS;\n-\n-\tif (is_valid_ipv4_pkt(ipv4_hdr[0], m[0]->pkt_len) < 0) {\n-\t\trte_pktmbuf_free(m[0]);\n-\t\tvalid_mask &= EXCLUDE_1ST_PKT;\n-\t}\n-\tif (is_valid_ipv4_pkt(ipv4_hdr[1], m[1]->pkt_len) < 0) {\n-\t\trte_pktmbuf_free(m[1]);\n-\t\tvalid_mask &= EXCLUDE_2ND_PKT;\n-\t}\n-\tif (is_valid_ipv4_pkt(ipv4_hdr[2], m[2]->pkt_len) < 0) {\n-\t\trte_pktmbuf_free(m[2]);\n-\t\tvalid_mask &= EXCLUDE_3RD_PKT;\n-\t}\n-\tif (is_valid_ipv4_pkt(ipv4_hdr[3], m[3]->pkt_len) < 0) {\n-\t\trte_pktmbuf_free(m[3]);\n-\t\tvalid_mask &= EXCLUDE_4TH_PKT;\n-\t}\n-\tif (is_valid_ipv4_pkt(ipv4_hdr[4], m[4]->pkt_len) < 0) {\n-\t\trte_pktmbuf_free(m[4]);\n-\t\tvalid_mask &= EXCLUDE_5TH_PKT;\n-\t}\n-\tif (is_valid_ipv4_pkt(ipv4_hdr[5], m[5]->pkt_len) < 0) {\n-\t\trte_pktmbuf_free(m[5]);\n-\t\tvalid_mask &= EXCLUDE_6TH_PKT;\n-\t}\n-\tif (is_valid_ipv4_pkt(ipv4_hdr[6], m[6]->pkt_len) < 0) {\n-\t\trte_pktmbuf_free(m[6]);\n-\t\tvalid_mask &= EXCLUDE_7TH_PKT;\n-\t}\n-\tif (is_valid_ipv4_pkt(ipv4_hdr[7], m[7]->pkt_len) < 0) {\n-\t\trte_pktmbuf_free(m[7]);\n-\t\tvalid_mask &= EXCLUDE_8TH_PKT;\n-\t}\n-\tif (unlikely(valid_mask != MASK_ALL_PKTS)) {\n-\t\tif (valid_mask == 0)\n-\t\t\treturn;\n-\n-\t\tuint8_t i = 0;\n-\n-\t\tfor (i = 0; i < 8; i++)\n-\t\t\tif ((0x1 << i) & valid_mask)\n-\t\t\t\tl3fwd_simple_forward(m[i], portid);\n-\t}\n-#endif /* End of #ifdef DO_RFC_1812_CHECKS */\n-\n-\tdata[0] = _mm_loadu_si128(rte_pktmbuf_mtod_offset(m[0], __m128i *,\n-\t\t\tsizeof(struct rte_ether_hdr) +\n-\t\t\toffsetof(struct rte_ipv4_hdr, time_to_live)));\n-\tdata[1] = _mm_loadu_si128(rte_pktmbuf_mtod_offset(m[1], __m128i *,\n-\t\t\tsizeof(struct rte_ether_hdr) +\n-\t\t\toffsetof(struct rte_ipv4_hdr, time_to_live)));\n-\tdata[2] = _mm_loadu_si128(rte_pktmbuf_mtod_offset(m[2], __m128i *,\n-\t\t\tsizeof(struct rte_ether_hdr) +\n-\t\t\toffsetof(struct rte_ipv4_hdr, time_to_live)));\n-\tdata[3] = _mm_loadu_si128(rte_pktmbuf_mtod_offset(m[3], __m128i *,\n-\t\t\tsizeof(struct rte_ether_hdr) +\n-\t\t\toffsetof(struct rte_ipv4_hdr, time_to_live)));\n-\tdata[4] = _mm_loadu_si128(rte_pktmbuf_mtod_offset(m[4], __m128i *,\n-\t\t\tsizeof(struct rte_ether_hdr) +\n-\t\t\toffsetof(struct rte_ipv4_hdr, time_to_live)));\n-\tdata[5] = _mm_loadu_si128(rte_pktmbuf_mtod_offset(m[5], __m128i *,\n-\t\t\tsizeof(struct rte_ether_hdr) +\n-\t\t\toffsetof(struct rte_ipv4_hdr, time_to_live)));\n-\tdata[6] = _mm_loadu_si128(rte_pktmbuf_mtod_offset(m[6], __m128i *,\n-\t\t\tsizeof(struct rte_ether_hdr) +\n-\t\t\toffsetof(struct rte_ipv4_hdr, time_to_live)));\n-\tdata[7] = _mm_loadu_si128(rte_pktmbuf_mtod_offset(m[7], __m128i *,\n-\t\t\tsizeof(struct rte_ether_hdr) +\n-\t\t\toffsetof(struct rte_ipv4_hdr, time_to_live)));\n-\n-\tkey[0].xmm = _mm_and_si128(data[0], mask0);\n-\tkey[1].xmm = _mm_and_si128(data[1], mask0);\n-\tkey[2].xmm = _mm_and_si128(data[2], mask0);\n-\tkey[3].xmm = _mm_and_si128(data[3], mask0);\n-\tkey[4].xmm = _mm_and_si128(data[4], 
mask0);\n-\tkey[5].xmm = _mm_and_si128(data[5], mask0);\n-\tkey[6].xmm = _mm_and_si128(data[6], mask0);\n-\tkey[7].xmm = _mm_and_si128(data[7], mask0);\n-\n-\tconst void *key_array[8] = {&key[0], &key[1], &key[2], &key[3],\n-\t\t\t&key[4], &key[5], &key[6], &key[7]};\n-\n-\trte_hash_lookup_bulk(RTE_PER_LCORE(lcore_conf)->ipv4_lookup_struct,\n-\t\t\t&key_array[0], 8, ret);\n-\tdst_port[0] = ((ret[0] < 0) ? portid : ipv4_l3fwd_out_if[ret[0]]);\n-\tdst_port[1] = ((ret[1] < 0) ? portid : ipv4_l3fwd_out_if[ret[1]]);\n-\tdst_port[2] = ((ret[2] < 0) ? portid : ipv4_l3fwd_out_if[ret[2]]);\n-\tdst_port[3] = ((ret[3] < 0) ? portid : ipv4_l3fwd_out_if[ret[3]]);\n-\tdst_port[4] = ((ret[4] < 0) ? portid : ipv4_l3fwd_out_if[ret[4]]);\n-\tdst_port[5] = ((ret[5] < 0) ? portid : ipv4_l3fwd_out_if[ret[5]]);\n-\tdst_port[6] = ((ret[6] < 0) ? portid : ipv4_l3fwd_out_if[ret[6]]);\n-\tdst_port[7] = ((ret[7] < 0) ? portid : ipv4_l3fwd_out_if[ret[7]]);\n-\n-\tif (dst_port[0] >= RTE_MAX_ETHPORTS ||\n-\t\t\t(enabled_port_mask & 1 << dst_port[0]) == 0)\n-\t\tdst_port[0] = portid;\n-\tif (dst_port[1] >= RTE_MAX_ETHPORTS ||\n-\t\t\t(enabled_port_mask & 1 << dst_port[1]) == 0)\n-\t\tdst_port[1] = portid;\n-\tif (dst_port[2] >= RTE_MAX_ETHPORTS ||\n-\t\t\t(enabled_port_mask & 1 << dst_port[2]) == 0)\n-\t\tdst_port[2] = portid;\n-\tif (dst_port[3] >= RTE_MAX_ETHPORTS ||\n-\t\t\t(enabled_port_mask & 1 << dst_port[3]) == 0)\n-\t\tdst_port[3] = portid;\n-\tif (dst_port[4] >= RTE_MAX_ETHPORTS ||\n-\t\t\t(enabled_port_mask & 1 << dst_port[4]) == 0)\n-\t\tdst_port[4] = portid;\n-\tif (dst_port[5] >= RTE_MAX_ETHPORTS ||\n-\t\t\t(enabled_port_mask & 1 << dst_port[5]) == 0)\n-\t\tdst_port[5] = portid;\n-\tif (dst_port[6] >= RTE_MAX_ETHPORTS ||\n-\t\t\t(enabled_port_mask & 1 << dst_port[6]) == 0)\n-\t\tdst_port[6] = portid;\n-\tif (dst_port[7] >= RTE_MAX_ETHPORTS ||\n-\t\t\t(enabled_port_mask & 1 << dst_port[7]) == 0)\n-\t\tdst_port[7] = portid;\n-\n-#ifdef DO_RFC_1812_CHECKS\n-\t/* Update time to live and header checksum */\n-\t--(ipv4_hdr[0]->time_to_live);\n-\t--(ipv4_hdr[1]->time_to_live);\n-\t--(ipv4_hdr[2]->time_to_live);\n-\t--(ipv4_hdr[3]->time_to_live);\n-\t++(ipv4_hdr[0]->hdr_checksum);\n-\t++(ipv4_hdr[1]->hdr_checksum);\n-\t++(ipv4_hdr[2]->hdr_checksum);\n-\t++(ipv4_hdr[3]->hdr_checksum);\n-\t--(ipv4_hdr[4]->time_to_live);\n-\t--(ipv4_hdr[5]->time_to_live);\n-\t--(ipv4_hdr[6]->time_to_live);\n-\t--(ipv4_hdr[7]->time_to_live);\n-\t++(ipv4_hdr[4]->hdr_checksum);\n-\t++(ipv4_hdr[5]->hdr_checksum);\n-\t++(ipv4_hdr[6]->hdr_checksum);\n-\t++(ipv4_hdr[7]->hdr_checksum);\n-#endif\n-\n-\t/* dst addr */\n-\t*(uint64_t *)&eth_hdr[0]->dst_addr = dest_eth_addr[dst_port[0]];\n-\t*(uint64_t *)&eth_hdr[1]->dst_addr = dest_eth_addr[dst_port[1]];\n-\t*(uint64_t *)&eth_hdr[2]->dst_addr = dest_eth_addr[dst_port[2]];\n-\t*(uint64_t *)&eth_hdr[3]->dst_addr = dest_eth_addr[dst_port[3]];\n-\t*(uint64_t *)&eth_hdr[4]->dst_addr = dest_eth_addr[dst_port[4]];\n-\t*(uint64_t *)&eth_hdr[5]->dst_addr = dest_eth_addr[dst_port[5]];\n-\t*(uint64_t *)&eth_hdr[6]->dst_addr = dest_eth_addr[dst_port[6]];\n-\t*(uint64_t *)&eth_hdr[7]->dst_addr = dest_eth_addr[dst_port[7]];\n-\n-\t/* src addr */\n-\trte_ether_addr_copy(&ports_eth_addr[dst_port[0]], &eth_hdr[0]->src_addr);\n-\trte_ether_addr_copy(&ports_eth_addr[dst_port[1]], &eth_hdr[1]->src_addr);\n-\trte_ether_addr_copy(&ports_eth_addr[dst_port[2]], &eth_hdr[2]->src_addr);\n-\trte_ether_addr_copy(&ports_eth_addr[dst_port[3]], &eth_hdr[3]->src_addr);\n-\trte_ether_addr_copy(&ports_eth_addr[dst_port[4]], 
&eth_hdr[4]->src_addr);\n-\trte_ether_addr_copy(&ports_eth_addr[dst_port[5]], &eth_hdr[5]->src_addr);\n-\trte_ether_addr_copy(&ports_eth_addr[dst_port[6]], &eth_hdr[6]->src_addr);\n-\trte_ether_addr_copy(&ports_eth_addr[dst_port[7]], &eth_hdr[7]->src_addr);\n-\n-\tsend_single_packet(m[0], (uint8_t)dst_port[0]);\n-\tsend_single_packet(m[1], (uint8_t)dst_port[1]);\n-\tsend_single_packet(m[2], (uint8_t)dst_port[2]);\n-\tsend_single_packet(m[3], (uint8_t)dst_port[3]);\n-\tsend_single_packet(m[4], (uint8_t)dst_port[4]);\n-\tsend_single_packet(m[5], (uint8_t)dst_port[5]);\n-\tsend_single_packet(m[6], (uint8_t)dst_port[6]);\n-\tsend_single_packet(m[7], (uint8_t)dst_port[7]);\n-\n-}\n-\n-static inline void get_ipv6_5tuple(struct rte_mbuf *m0, __m128i mask0,\n-\t\t__m128i mask1, union ipv6_5tuple_host *key)\n-{\n-\t__m128i tmpdata0 = _mm_loadu_si128(rte_pktmbuf_mtod_offset(m0,\n-\t\t\t__m128i *, sizeof(struct rte_ether_hdr) +\n-\t\t\toffsetof(struct rte_ipv6_hdr, payload_len)));\n-\t__m128i tmpdata1 = _mm_loadu_si128(rte_pktmbuf_mtod_offset(m0,\n-\t\t\t__m128i *, sizeof(struct rte_ether_hdr) +\n-\t\t\toffsetof(struct rte_ipv6_hdr, payload_len) +\n-\t\t\tsizeof(__m128i)));\n-\t__m128i tmpdata2 = _mm_loadu_si128(rte_pktmbuf_mtod_offset(m0,\n-\t\t\t__m128i *, sizeof(struct rte_ether_hdr) +\n-\t\t\toffsetof(struct rte_ipv6_hdr, payload_len) +\n-\t\t\tsizeof(__m128i) + sizeof(__m128i)));\n-\tkey->xmm[0] = _mm_and_si128(tmpdata0, mask0);\n-\tkey->xmm[1] = tmpdata1;\n-\tkey->xmm[2] = _mm_and_si128(tmpdata2, mask1);\n-}\n-\n-static inline void\n-simple_ipv6_fwd_8pkts(struct rte_mbuf *m[8], uint16_t portid)\n-{\n-\tint32_t ret[8];\n-\tuint16_t dst_port[8];\n-\tstruct rte_ether_hdr *eth_hdr[8];\n-\tunion ipv6_5tuple_host key[8];\n-\n-\t__rte_unused struct rte_ipv6_hdr *ipv6_hdr[8];\n-\n-\teth_hdr[0] = rte_pktmbuf_mtod(m[0], struct rte_ether_hdr *);\n-\teth_hdr[1] = rte_pktmbuf_mtod(m[1], struct rte_ether_hdr *);\n-\teth_hdr[2] = rte_pktmbuf_mtod(m[2], struct rte_ether_hdr *);\n-\teth_hdr[3] = rte_pktmbuf_mtod(m[3], struct rte_ether_hdr *);\n-\teth_hdr[4] = rte_pktmbuf_mtod(m[4], struct rte_ether_hdr *);\n-\teth_hdr[5] = rte_pktmbuf_mtod(m[5], struct rte_ether_hdr *);\n-\teth_hdr[6] = rte_pktmbuf_mtod(m[6], struct rte_ether_hdr *);\n-\teth_hdr[7] = rte_pktmbuf_mtod(m[7], struct rte_ether_hdr *);\n-\n-\t/* Handle IPv6 headers.*/\n-\tipv6_hdr[0] = rte_pktmbuf_mtod_offset(m[0], struct rte_ipv6_hdr *,\n-\t\t\tsizeof(struct rte_ether_hdr));\n-\tipv6_hdr[1] = rte_pktmbuf_mtod_offset(m[1], struct rte_ipv6_hdr *,\n-\t\t\tsizeof(struct rte_ether_hdr));\n-\tipv6_hdr[2] = rte_pktmbuf_mtod_offset(m[2], struct rte_ipv6_hdr *,\n-\t\t\tsizeof(struct rte_ether_hdr));\n-\tipv6_hdr[3] = rte_pktmbuf_mtod_offset(m[3], struct rte_ipv6_hdr *,\n-\t\t\tsizeof(struct rte_ether_hdr));\n-\tipv6_hdr[4] = rte_pktmbuf_mtod_offset(m[4], struct rte_ipv6_hdr *,\n-\t\t\tsizeof(struct rte_ether_hdr));\n-\tipv6_hdr[5] = rte_pktmbuf_mtod_offset(m[5], struct rte_ipv6_hdr *,\n-\t\t\tsizeof(struct rte_ether_hdr));\n-\tipv6_hdr[6] = rte_pktmbuf_mtod_offset(m[6], struct rte_ipv6_hdr *,\n-\t\t\tsizeof(struct rte_ether_hdr));\n-\tipv6_hdr[7] = rte_pktmbuf_mtod_offset(m[7], struct rte_ipv6_hdr *,\n-\t\t\tsizeof(struct rte_ether_hdr));\n-\n-\tget_ipv6_5tuple(m[0], mask1, mask2, &key[0]);\n-\tget_ipv6_5tuple(m[1], mask1, mask2, &key[1]);\n-\tget_ipv6_5tuple(m[2], mask1, mask2, &key[2]);\n-\tget_ipv6_5tuple(m[3], mask1, mask2, &key[3]);\n-\tget_ipv6_5tuple(m[4], mask1, mask2, &key[4]);\n-\tget_ipv6_5tuple(m[5], mask1, mask2, 
&key[5]);\n-\tget_ipv6_5tuple(m[6], mask1, mask2, &key[6]);\n-\tget_ipv6_5tuple(m[7], mask1, mask2, &key[7]);\n-\n-\tconst void *key_array[8] = {&key[0], &key[1], &key[2], &key[3],\n-\t\t\t&key[4], &key[5], &key[6], &key[7]};\n-\n-\trte_hash_lookup_bulk(RTE_PER_LCORE(lcore_conf)->ipv6_lookup_struct,\n-\t\t\t&key_array[0], 4, ret);\n-\tdst_port[0] = ((ret[0] < 0) ? portid : ipv6_l3fwd_out_if[ret[0]]);\n-\tdst_port[1] = ((ret[1] < 0) ? portid : ipv6_l3fwd_out_if[ret[1]]);\n-\tdst_port[2] = ((ret[2] < 0) ? portid : ipv6_l3fwd_out_if[ret[2]]);\n-\tdst_port[3] = ((ret[3] < 0) ? portid : ipv6_l3fwd_out_if[ret[3]]);\n-\tdst_port[4] = ((ret[4] < 0) ? portid : ipv6_l3fwd_out_if[ret[4]]);\n-\tdst_port[5] = ((ret[5] < 0) ? portid : ipv6_l3fwd_out_if[ret[5]]);\n-\tdst_port[6] = ((ret[6] < 0) ? portid : ipv6_l3fwd_out_if[ret[6]]);\n-\tdst_port[7] = ((ret[7] < 0) ? portid : ipv6_l3fwd_out_if[ret[7]]);\n-\n-\tif (dst_port[0] >= RTE_MAX_ETHPORTS ||\n-\t\t\t(enabled_port_mask & 1 << dst_port[0]) == 0)\n-\t\tdst_port[0] = portid;\n-\tif (dst_port[1] >= RTE_MAX_ETHPORTS ||\n-\t\t\t(enabled_port_mask & 1 << dst_port[1]) == 0)\n-\t\tdst_port[1] = portid;\n-\tif (dst_port[2] >= RTE_MAX_ETHPORTS ||\n-\t\t\t(enabled_port_mask & 1 << dst_port[2]) == 0)\n-\t\tdst_port[2] = portid;\n-\tif (dst_port[3] >= RTE_MAX_ETHPORTS ||\n-\t\t\t(enabled_port_mask & 1 << dst_port[3]) == 0)\n-\t\tdst_port[3] = portid;\n-\tif (dst_port[4] >= RTE_MAX_ETHPORTS ||\n-\t\t\t(enabled_port_mask & 1 << dst_port[4]) == 0)\n-\t\tdst_port[4] = portid;\n-\tif (dst_port[5] >= RTE_MAX_ETHPORTS ||\n-\t\t\t(enabled_port_mask & 1 << dst_port[5]) == 0)\n-\t\tdst_port[5] = portid;\n-\tif (dst_port[6] >= RTE_MAX_ETHPORTS ||\n-\t\t\t(enabled_port_mask & 1 << dst_port[6]) == 0)\n-\t\tdst_port[6] = portid;\n-\tif (dst_port[7] >= RTE_MAX_ETHPORTS ||\n-\t\t\t(enabled_port_mask & 1 << dst_port[7]) == 0)\n-\t\tdst_port[7] = portid;\n-\n-\t/* dst addr */\n-\t*(uint64_t *)&eth_hdr[0]->dst_addr = dest_eth_addr[dst_port[0]];\n-\t*(uint64_t *)&eth_hdr[1]->dst_addr = dest_eth_addr[dst_port[1]];\n-\t*(uint64_t *)&eth_hdr[2]->dst_addr = dest_eth_addr[dst_port[2]];\n-\t*(uint64_t *)&eth_hdr[3]->dst_addr = dest_eth_addr[dst_port[3]];\n-\t*(uint64_t *)&eth_hdr[4]->dst_addr = dest_eth_addr[dst_port[4]];\n-\t*(uint64_t *)&eth_hdr[5]->dst_addr = dest_eth_addr[dst_port[5]];\n-\t*(uint64_t *)&eth_hdr[6]->dst_addr = dest_eth_addr[dst_port[6]];\n-\t*(uint64_t *)&eth_hdr[7]->dst_addr = dest_eth_addr[dst_port[7]];\n-\n-\t/* src addr */\n-\trte_ether_addr_copy(&ports_eth_addr[dst_port[0]], &eth_hdr[0]->src_addr);\n-\trte_ether_addr_copy(&ports_eth_addr[dst_port[1]], &eth_hdr[1]->src_addr);\n-\trte_ether_addr_copy(&ports_eth_addr[dst_port[2]], &eth_hdr[2]->src_addr);\n-\trte_ether_addr_copy(&ports_eth_addr[dst_port[3]], &eth_hdr[3]->src_addr);\n-\trte_ether_addr_copy(&ports_eth_addr[dst_port[4]], &eth_hdr[4]->src_addr);\n-\trte_ether_addr_copy(&ports_eth_addr[dst_port[5]], &eth_hdr[5]->src_addr);\n-\trte_ether_addr_copy(&ports_eth_addr[dst_port[6]], &eth_hdr[6]->src_addr);\n-\trte_ether_addr_copy(&ports_eth_addr[dst_port[7]], &eth_hdr[7]->src_addr);\n-\n-\tsend_single_packet(m[0], dst_port[0]);\n-\tsend_single_packet(m[1], dst_port[1]);\n-\tsend_single_packet(m[2], dst_port[2]);\n-\tsend_single_packet(m[3], dst_port[3]);\n-\tsend_single_packet(m[4], dst_port[4]);\n-\tsend_single_packet(m[5], dst_port[5]);\n-\tsend_single_packet(m[6], dst_port[6]);\n-\tsend_single_packet(m[7], dst_port[7]);\n-\n-}\n-#endif /* APP_LOOKUP_METHOD */\n-\n-static __rte_always_inline 
void\n-l3fwd_simple_forward(struct rte_mbuf *m, uint16_t portid)\n-{\n-\tstruct rte_ether_hdr *eth_hdr;\n-\tstruct rte_ipv4_hdr *ipv4_hdr;\n-\tuint16_t dst_port;\n-\n-\teth_hdr = rte_pktmbuf_mtod(m, struct rte_ether_hdr *);\n-\n-\tif (RTE_ETH_IS_IPV4_HDR(m->packet_type)) {\n-\t\t/* Handle IPv4 headers.*/\n-\t\tipv4_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv4_hdr *,\n-\t\t\t\tsizeof(struct rte_ether_hdr));\n-\n-#ifdef DO_RFC_1812_CHECKS\n-\t\t/* Check to make sure the packet is valid (RFC1812) */\n-\t\tif (is_valid_ipv4_pkt(ipv4_hdr, m->pkt_len) < 0) {\n-\t\t\trte_pktmbuf_free(m);\n-\t\t\treturn;\n-\t\t}\n-#endif\n-\n-\t\t dst_port = get_ipv4_dst_port(ipv4_hdr, portid,\n-\t\t\tRTE_PER_LCORE(lcore_conf)->ipv4_lookup_struct);\n-\t\tif (dst_port >= RTE_MAX_ETHPORTS ||\n-\t\t\t\t(enabled_port_mask & 1 << dst_port) == 0)\n-\t\t\tdst_port = portid;\n-\n-#ifdef DO_RFC_1812_CHECKS\n-\t\t/* Update time to live and header checksum */\n-\t\t--(ipv4_hdr->time_to_live);\n-\t\t++(ipv4_hdr->hdr_checksum);\n-#endif\n-\t\t/* dst addr */\n-\t\t*(uint64_t *)&eth_hdr->dst_addr = dest_eth_addr[dst_port];\n-\n-\t\t/* src addr */\n-\t\trte_ether_addr_copy(&ports_eth_addr[dst_port],\n-\t\t\t\t&eth_hdr->src_addr);\n-\n-\t\tsend_single_packet(m, dst_port);\n-\t} else if (RTE_ETH_IS_IPV6_HDR(m->packet_type)) {\n-\t\t/* Handle IPv6 headers.*/\n-\t\tstruct rte_ipv6_hdr *ipv6_hdr;\n-\n-\t\tipv6_hdr = rte_pktmbuf_mtod_offset(m, struct rte_ipv6_hdr *,\n-\t\t\t\tsizeof(struct rte_ether_hdr));\n-\n-\t\tdst_port = get_ipv6_dst_port(ipv6_hdr, portid,\n-\t\t\t\tRTE_PER_LCORE(lcore_conf)->ipv6_lookup_struct);\n-\n-\t\tif (dst_port >= RTE_MAX_ETHPORTS ||\n-\t\t\t\t(enabled_port_mask & 1 << dst_port) == 0)\n-\t\t\tdst_port = portid;\n-\n-\t\t/* dst addr */\n-\t\t*(uint64_t *)&eth_hdr->dst_addr = dest_eth_addr[dst_port];\n-\n-\t\t/* src addr */\n-\t\trte_ether_addr_copy(&ports_eth_addr[dst_port],\n-\t\t\t\t&eth_hdr->src_addr);\n-\n-\t\tsend_single_packet(m, dst_port);\n-\t} else\n-\t\t/* Free the mbuf that contains non-IPV4/IPV6 packet */\n-\t\trte_pktmbuf_free(m);\n-}\n-\n-#if ((APP_LOOKUP_METHOD == APP_LOOKUP_LPM) && \\\n-\t(ENABLE_MULTI_BUFFER_OPTIMIZE == 1))\n-#ifdef DO_RFC_1812_CHECKS\n-\n-#define\tIPV4_MIN_VER_IHL\t0x45\n-#define\tIPV4_MAX_VER_IHL\t0x4f\n-#define\tIPV4_MAX_VER_IHL_DIFF\t(IPV4_MAX_VER_IHL - IPV4_MIN_VER_IHL)\n-\n-/* Minimum value of IPV4 total length (20B) in network byte order. 
*/\n-#define\tIPV4_MIN_LEN_BE\t(sizeof(struct rte_ipv4_hdr) << 8)\n-\n-/*\n- * From http://www.rfc-editor.org/rfc/rfc1812.txt section 5.2.2:\n- * - The IP version number must be 4.\n- * - The IP header length field must be large enough to hold the\n- *    minimum length legal IP datagram (20 bytes = 5 words).\n- * - The IP total length field must be large enough to hold the IP\n- *   datagram header, whose length is specified in the IP header length\n- *   field.\n- * If we encounter invalid IPV4 packet, then set destination port for it\n- * to BAD_PORT value.\n- */\n-static __rte_always_inline void\n-rfc1812_process(struct rte_ipv4_hdr *ipv4_hdr, uint16_t *dp, uint32_t ptype)\n-{\n-\tuint8_t ihl;\n-\n-\tif (RTE_ETH_IS_IPV4_HDR(ptype)) {\n-\t\tihl = ipv4_hdr->version_ihl - IPV4_MIN_VER_IHL;\n-\n-\t\tipv4_hdr->time_to_live--;\n-\t\tipv4_hdr->hdr_checksum++;\n-\n-\t\tif (ihl > IPV4_MAX_VER_IHL_DIFF ||\n-\t\t\t\t((uint8_t)ipv4_hdr->total_length == 0 &&\n-\t\t\t\tipv4_hdr->total_length < IPV4_MIN_LEN_BE)) {\n-\t\t\tdp[0] = BAD_PORT;\n-\t\t}\n-\t}\n-}\n-\n-#else\n-#define\trfc1812_process(mb, dp, ptype)\tdo { } while (0)\n-#endif /* DO_RFC_1812_CHECKS */\n-#endif /* APP_LOOKUP_LPM && ENABLE_MULTI_BUFFER_OPTIMIZE */\n-\n-\n-#if ((APP_LOOKUP_METHOD == APP_LOOKUP_LPM) && \\\n-\t(ENABLE_MULTI_BUFFER_OPTIMIZE == 1))\n-\n-static __rte_always_inline uint16_t\n-get_dst_port(struct rte_mbuf *pkt, uint32_t dst_ipv4, uint16_t portid)\n-{\n-\tuint32_t next_hop;\n-\tstruct rte_ipv6_hdr *ipv6_hdr;\n-\tstruct rte_ether_hdr *eth_hdr;\n-\n-\tif (RTE_ETH_IS_IPV4_HDR(pkt->packet_type)) {\n-\t\treturn (uint16_t) ((rte_lpm_lookup(\n-\t\t\t\tRTE_PER_LCORE(lcore_conf)->ipv4_lookup_struct, dst_ipv4,\n-\t\t\t\t&next_hop) == 0) ? next_hop : portid);\n-\n-\t} else if (RTE_ETH_IS_IPV6_HDR(pkt->packet_type)) {\n-\n-\t\teth_hdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);\n-\t\tipv6_hdr = (struct rte_ipv6_hdr *)(eth_hdr + 1);\n-\n-\t\treturn (uint16_t) ((rte_lpm6_lookup(\n-\t\t\t\tRTE_PER_LCORE(lcore_conf)->ipv6_lookup_struct,\n-\t\t\t\tipv6_hdr->dst_addr, &next_hop) == 0) ?\n-\t\t\t\tnext_hop : portid);\n-\n-\t}\n-\n-\treturn portid;\n-}\n-\n-static inline void\n-process_packet(struct rte_mbuf *pkt, uint16_t *dst_port, uint16_t portid)\n-{\n-\tstruct rte_ether_hdr *eth_hdr;\n-\tstruct rte_ipv4_hdr *ipv4_hdr;\n-\tuint32_t dst_ipv4;\n-\tuint16_t dp;\n-\t__m128i te, ve;\n-\n-\teth_hdr = rte_pktmbuf_mtod(pkt, struct rte_ether_hdr *);\n-\tipv4_hdr = (struct rte_ipv4_hdr *)(eth_hdr + 1);\n-\n-\tdst_ipv4 = ipv4_hdr->dst_addr;\n-\tdst_ipv4 = rte_be_to_cpu_32(dst_ipv4);\n-\tdp = get_dst_port(pkt, dst_ipv4, portid);\n-\n-\tte = _mm_load_si128((__m128i *)eth_hdr);\n-\tve = val_eth[dp];\n-\n-\tdst_port[0] = dp;\n-\trfc1812_process(ipv4_hdr, dst_port, pkt->packet_type);\n-\n-\tte =  _mm_blend_epi16(te, ve, MASK_ETH);\n-\t_mm_store_si128((__m128i *)eth_hdr, te);\n-}\n-\n-/*\n- * Read packet_type and destination IPV4 addresses from 4 mbufs.\n- */\n-static inline void\n-processx4_step1(struct rte_mbuf *pkt[FWDSTEP],\n-\t\t__m128i *dip,\n-\t\tuint32_t *ipv4_flag)\n-{\n-\tstruct rte_ipv4_hdr *ipv4_hdr;\n-\tstruct rte_ether_hdr *eth_hdr;\n-\tuint32_t x0, x1, x2, x3;\n-\n-\teth_hdr = rte_pktmbuf_mtod(pkt[0], struct rte_ether_hdr *);\n-\tipv4_hdr = (struct rte_ipv4_hdr *)(eth_hdr + 1);\n-\tx0 = ipv4_hdr->dst_addr;\n-\tipv4_flag[0] = pkt[0]->packet_type & RTE_PTYPE_L3_IPV4;\n-\n-\teth_hdr = rte_pktmbuf_mtod(pkt[1], struct rte_ether_hdr *);\n-\tipv4_hdr = (struct rte_ipv4_hdr *)(eth_hdr + 1);\n-\tx1 = ipv4_hdr->dst_addr;\n-\tipv4_flag[0] &= 
pkt[1]->packet_type;\n-\n-\teth_hdr = rte_pktmbuf_mtod(pkt[2], struct rte_ether_hdr *);\n-\tipv4_hdr = (struct rte_ipv4_hdr *)(eth_hdr + 1);\n-\tx2 = ipv4_hdr->dst_addr;\n-\tipv4_flag[0] &= pkt[2]->packet_type;\n-\n-\teth_hdr = rte_pktmbuf_mtod(pkt[3], struct rte_ether_hdr *);\n-\tipv4_hdr = (struct rte_ipv4_hdr *)(eth_hdr + 1);\n-\tx3 = ipv4_hdr->dst_addr;\n-\tipv4_flag[0] &= pkt[3]->packet_type;\n-\n-\tdip[0] = _mm_set_epi32(x3, x2, x1, x0);\n-}\n-\n-/*\n- * Lookup into LPM for destination port.\n- * If lookup fails, use incoming port (portid) as destination port.\n- */\n-static inline void\n-processx4_step2(__m128i dip,\n-\t\tuint32_t ipv4_flag,\n-\t\tuint16_t portid,\n-\t\tstruct rte_mbuf *pkt[FWDSTEP],\n-\t\tuint16_t dprt[FWDSTEP])\n-{\n-\trte_xmm_t dst;\n-\tconst __m128i bswap_mask = _mm_set_epi8(12, 13, 14, 15, 8, 9, 10, 11,\n-\t\t\t4, 5, 6, 7, 0, 1, 2, 3);\n-\n-\t/* Byte swap 4 IPV4 addresses. */\n-\tdip = _mm_shuffle_epi8(dip, bswap_mask);\n-\n-\t/* if all 4 packets are IPV4. */\n-\tif (likely(ipv4_flag)) {\n-\t\trte_lpm_lookupx4(RTE_PER_LCORE(lcore_conf)->ipv4_lookup_struct, dip,\n-\t\t\t\tdst.u32, portid);\n-\n-\t\t/* get rid of unused upper 16 bit for each dport. */\n-\t\tdst.x = _mm_packs_epi32(dst.x, dst.x);\n-\t\t*(uint64_t *)dprt = dst.u64[0];\n-\t} else {\n-\t\tdst.x = dip;\n-\t\tdprt[0] = get_dst_port(pkt[0], dst.u32[0], portid);\n-\t\tdprt[1] = get_dst_port(pkt[1], dst.u32[1], portid);\n-\t\tdprt[2] = get_dst_port(pkt[2], dst.u32[2], portid);\n-\t\tdprt[3] = get_dst_port(pkt[3], dst.u32[3], portid);\n-\t}\n-}\n-\n-/*\n- * Update source and destination MAC addresses in the ethernet header.\n- * Perform RFC1812 checks and updates for IPV4 packets.\n- */\n-static inline void\n-processx4_step3(struct rte_mbuf *pkt[FWDSTEP], uint16_t dst_port[FWDSTEP])\n-{\n-\t__m128i te[FWDSTEP];\n-\t__m128i ve[FWDSTEP];\n-\t__m128i *p[FWDSTEP];\n-\n-\tp[0] = rte_pktmbuf_mtod(pkt[0], __m128i *);\n-\tp[1] = rte_pktmbuf_mtod(pkt[1], __m128i *);\n-\tp[2] = rte_pktmbuf_mtod(pkt[2], __m128i *);\n-\tp[3] = rte_pktmbuf_mtod(pkt[3], __m128i *);\n-\n-\tve[0] = val_eth[dst_port[0]];\n-\tte[0] = _mm_load_si128(p[0]);\n-\n-\tve[1] = val_eth[dst_port[1]];\n-\tte[1] = _mm_load_si128(p[1]);\n-\n-\tve[2] = val_eth[dst_port[2]];\n-\tte[2] = _mm_load_si128(p[2]);\n-\n-\tve[3] = val_eth[dst_port[3]];\n-\tte[3] = _mm_load_si128(p[3]);\n-\n-\t/* Update first 12 bytes, keep rest bytes intact. 
*/\n-\tte[0] =  _mm_blend_epi16(te[0], ve[0], MASK_ETH);\n-\tte[1] =  _mm_blend_epi16(te[1], ve[1], MASK_ETH);\n-\tte[2] =  _mm_blend_epi16(te[2], ve[2], MASK_ETH);\n-\tte[3] =  _mm_blend_epi16(te[3], ve[3], MASK_ETH);\n-\n-\t_mm_store_si128(p[0], te[0]);\n-\t_mm_store_si128(p[1], te[1]);\n-\t_mm_store_si128(p[2], te[2]);\n-\t_mm_store_si128(p[3], te[3]);\n-\n-\trfc1812_process((struct rte_ipv4_hdr *)\n-\t\t\t((struct rte_ether_hdr *)p[0] + 1),\n-\t\t\t&dst_port[0], pkt[0]->packet_type);\n-\trfc1812_process((struct rte_ipv4_hdr *)\n-\t\t\t((struct rte_ether_hdr *)p[1] + 1),\n-\t\t\t&dst_port[1], pkt[1]->packet_type);\n-\trfc1812_process((struct rte_ipv4_hdr *)\n-\t\t\t((struct rte_ether_hdr *)p[2] + 1),\n-\t\t\t&dst_port[2], pkt[2]->packet_type);\n-\trfc1812_process((struct rte_ipv4_hdr *)\n-\t\t\t((struct rte_ether_hdr *)p[3] + 1),\n-\t\t\t&dst_port[3], pkt[3]->packet_type);\n-}\n-\n-/*\n- * We group consecutive packets with the same destionation port into one burst.\n- * To avoid extra latency this is done together with some other packet\n- * processing, but after we made a final decision about packet's destination.\n- * To do this we maintain:\n- * pnum - array of number of consecutive packets with the same dest port for\n- * each packet in the input burst.\n- * lp - pointer to the last updated element in the pnum.\n- * dlp - dest port value lp corresponds to.\n- */\n-\n-#define\tGRPSZ\t(1 << FWDSTEP)\n-#define\tGRPMSK\t(GRPSZ - 1)\n-\n-#define GROUP_PORT_STEP(dlp, dcp, lp, pn, idx)\tdo { \\\n-\tif (likely((dlp) == (dcp)[(idx)])) {         \\\n-\t\t(lp)[0]++;                           \\\n-\t} else {                                     \\\n-\t\t(dlp) = (dcp)[idx];                  \\\n-\t\t(lp) = (pn) + (idx);                 \\\n-\t\t(lp)[0] = 1;                         \\\n-\t}                                            \\\n-} while (0)\n-\n-/*\n- * Group consecutive packets with the same destination port in bursts of 4.\n- * Suppose we have array of destionation ports:\n- * dst_port[] = {a, b, c, d,, e, ... }\n- * dp1 should contain: <a, b, c, d>, dp2: <b, c, d, e>.\n- * We doing 4 comparisons at once and the result is 4 bit mask.\n- * This mask is used as an index into prebuild array of pnum values.\n- */\n-static inline uint16_t *\n-port_groupx4(uint16_t pn[FWDSTEP + 1], uint16_t *lp, __m128i dp1, __m128i dp2)\n-{\n-\tstatic const struct {\n-\t\tuint64_t pnum; /* prebuild 4 values for pnum[]. */\n-\t\tint32_t  idx;  /* index for new last updated elemnet. */\n-\t\tuint16_t lpv;  /* add value to the last updated element. 
*/\n-\t} gptbl[GRPSZ] = {\n-\t{\n-\t\t/* 0: a != b, b != c, c != d, d != e */\n-\t\t.pnum = UINT64_C(0x0001000100010001),\n-\t\t.idx = 4,\n-\t\t.lpv = 0,\n-\t},\n-\t{\n-\t\t/* 1: a == b, b != c, c != d, d != e */\n-\t\t.pnum = UINT64_C(0x0001000100010002),\n-\t\t.idx = 4,\n-\t\t.lpv = 1,\n-\t},\n-\t{\n-\t\t/* 2: a != b, b == c, c != d, d != e */\n-\t\t.pnum = UINT64_C(0x0001000100020001),\n-\t\t.idx = 4,\n-\t\t.lpv = 0,\n-\t},\n-\t{\n-\t\t/* 3: a == b, b == c, c != d, d != e */\n-\t\t.pnum = UINT64_C(0x0001000100020003),\n-\t\t.idx = 4,\n-\t\t.lpv = 2,\n-\t},\n-\t{\n-\t\t/* 4: a != b, b != c, c == d, d != e */\n-\t\t.pnum = UINT64_C(0x0001000200010001),\n-\t\t.idx = 4,\n-\t\t.lpv = 0,\n-\t},\n-\t{\n-\t\t/* 5: a == b, b != c, c == d, d != e */\n-\t\t.pnum = UINT64_C(0x0001000200010002),\n-\t\t.idx = 4,\n-\t\t.lpv = 1,\n-\t},\n-\t{\n-\t\t/* 6: a != b, b == c, c == d, d != e */\n-\t\t.pnum = UINT64_C(0x0001000200030001),\n-\t\t.idx = 4,\n-\t\t.lpv = 0,\n-\t},\n-\t{\n-\t\t/* 7: a == b, b == c, c == d, d != e */\n-\t\t.pnum = UINT64_C(0x0001000200030004),\n-\t\t.idx = 4,\n-\t\t.lpv = 3,\n-\t},\n-\t{\n-\t\t/* 8: a != b, b != c, c != d, d == e */\n-\t\t.pnum = UINT64_C(0x0002000100010001),\n-\t\t.idx = 3,\n-\t\t.lpv = 0,\n-\t},\n-\t{\n-\t\t/* 9: a == b, b != c, c != d, d == e */\n-\t\t.pnum = UINT64_C(0x0002000100010002),\n-\t\t.idx = 3,\n-\t\t.lpv = 1,\n-\t},\n-\t{\n-\t\t/* 0xa: a != b, b == c, c != d, d == e */\n-\t\t.pnum = UINT64_C(0x0002000100020001),\n-\t\t.idx = 3,\n-\t\t.lpv = 0,\n-\t},\n-\t{\n-\t\t/* 0xb: a == b, b == c, c != d, d == e */\n-\t\t.pnum = UINT64_C(0x0002000100020003),\n-\t\t.idx = 3,\n-\t\t.lpv = 2,\n-\t},\n-\t{\n-\t\t/* 0xc: a != b, b != c, c == d, d == e */\n-\t\t.pnum = UINT64_C(0x0002000300010001),\n-\t\t.idx = 2,\n-\t\t.lpv = 0,\n-\t},\n-\t{\n-\t\t/* 0xd: a == b, b != c, c == d, d == e */\n-\t\t.pnum = UINT64_C(0x0002000300010002),\n-\t\t.idx = 2,\n-\t\t.lpv = 1,\n-\t},\n-\t{\n-\t\t/* 0xe: a != b, b == c, c == d, d == e */\n-\t\t.pnum = UINT64_C(0x0002000300040001),\n-\t\t.idx = 1,\n-\t\t.lpv = 0,\n-\t},\n-\t{\n-\t\t/* 0xf: a == b, b == c, c == d, d == e */\n-\t\t.pnum = UINT64_C(0x0002000300040005),\n-\t\t.idx = 0,\n-\t\t.lpv = 4,\n-\t},\n-\t};\n-\n-\tunion {\n-\t\tuint16_t u16[FWDSTEP + 1];\n-\t\tuint64_t u64;\n-\t} *pnum = (void *)pn;\n-\n-\tint32_t v;\n-\n-\tdp1 = _mm_cmpeq_epi16(dp1, dp2);\n-\tdp1 = _mm_unpacklo_epi16(dp1, dp1);\n-\tv = _mm_movemask_ps((__m128)dp1);\n-\n-\t/* update last port counter. */\n-\tlp[0] += gptbl[v].lpv;\n-\n-\t/* if dest port value has changed. 
*/\n-\tif (v != GRPMSK) {\n-\t\tpnum->u64 = gptbl[v].pnum;\n-\t\tpnum->u16[FWDSTEP] = 1;\n-\t\tlp = pnum->u16 + gptbl[v].idx;\n-\t}\n-\n-\treturn lp;\n-}\n-\n-#endif /* APP_LOOKUP_METHOD */\n-\n-static void\n-process_burst(struct rte_mbuf *pkts_burst[MAX_PKT_BURST], int nb_rx,\n-\t\tuint16_t portid)\n-{\n-\n-\tint j;\n-\n-#if ((APP_LOOKUP_METHOD == APP_LOOKUP_LPM) && \\\n-\t(ENABLE_MULTI_BUFFER_OPTIMIZE == 1))\n-\tint32_t k;\n-\tuint16_t dlp;\n-\tuint16_t *lp;\n-\tuint16_t dst_port[MAX_PKT_BURST];\n-\t__m128i dip[MAX_PKT_BURST / FWDSTEP];\n-\tuint32_t ipv4_flag[MAX_PKT_BURST / FWDSTEP];\n-\tuint16_t pnum[MAX_PKT_BURST + 1];\n-#endif\n-\n-\n-#if (ENABLE_MULTI_BUFFER_OPTIMIZE == 1)\n-#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)\n-\t{\n-\t\t/*\n-\t\t * Send nb_rx - nb_rx%8 packets\n-\t\t * in groups of 8.\n-\t\t */\n-\t\tint32_t n = RTE_ALIGN_FLOOR(nb_rx, 8);\n-\n-\t\tfor (j = 0; j < n; j += 8) {\n-\t\t\tuint32_t pkt_type =\n-\t\t\t\tpkts_burst[j]->packet_type &\n-\t\t\t\tpkts_burst[j+1]->packet_type &\n-\t\t\t\tpkts_burst[j+2]->packet_type &\n-\t\t\t\tpkts_burst[j+3]->packet_type &\n-\t\t\t\tpkts_burst[j+4]->packet_type &\n-\t\t\t\tpkts_burst[j+5]->packet_type &\n-\t\t\t\tpkts_burst[j+6]->packet_type &\n-\t\t\t\tpkts_burst[j+7]->packet_type;\n-\t\t\tif (pkt_type & RTE_PTYPE_L3_IPV4) {\n-\t\t\t\tsimple_ipv4_fwd_8pkts(&pkts_burst[j], portid);\n-\t\t\t} else if (pkt_type &\n-\t\t\t\tRTE_PTYPE_L3_IPV6) {\n-\t\t\t\tsimple_ipv6_fwd_8pkts(&pkts_burst[j], portid);\n-\t\t\t} else {\n-\t\t\t\tl3fwd_simple_forward(pkts_burst[j], portid);\n-\t\t\t\tl3fwd_simple_forward(pkts_burst[j+1], portid);\n-\t\t\t\tl3fwd_simple_forward(pkts_burst[j+2], portid);\n-\t\t\t\tl3fwd_simple_forward(pkts_burst[j+3], portid);\n-\t\t\t\tl3fwd_simple_forward(pkts_burst[j+4], portid);\n-\t\t\t\tl3fwd_simple_forward(pkts_burst[j+5], portid);\n-\t\t\t\tl3fwd_simple_forward(pkts_burst[j+6], portid);\n-\t\t\t\tl3fwd_simple_forward(pkts_burst[j+7], portid);\n-\t\t\t}\n-\t\t}\n-\t\tfor (; j < nb_rx ; j++)\n-\t\t\tl3fwd_simple_forward(pkts_burst[j], portid);\n-\t}\n-#elif (APP_LOOKUP_METHOD == APP_LOOKUP_LPM)\n-\n-\tk = RTE_ALIGN_FLOOR(nb_rx, FWDSTEP);\n-\tfor (j = 0; j != k; j += FWDSTEP)\n-\t\tprocessx4_step1(&pkts_burst[j], &dip[j / FWDSTEP],\n-\t\t\t\t&ipv4_flag[j / FWDSTEP]);\n-\n-\tk = RTE_ALIGN_FLOOR(nb_rx, FWDSTEP);\n-\tfor (j = 0; j != k; j += FWDSTEP)\n-\t\tprocessx4_step2(dip[j / FWDSTEP], ipv4_flag[j / FWDSTEP],\n-\t\t\t\tportid, &pkts_burst[j], &dst_port[j]);\n-\n-\t/*\n-\t * Finish packet processing and group consecutive\n-\t * packets with the same destination port.\n-\t */\n-\tk = RTE_ALIGN_FLOOR(nb_rx, FWDSTEP);\n-\tif (k != 0) {\n-\t\t__m128i dp1, dp2;\n-\n-\t\tlp = pnum;\n-\t\tlp[0] = 1;\n-\n-\t\tprocessx4_step3(pkts_burst, dst_port);\n-\n-\t\t/* dp1: <d[0], d[1], d[2], d[3], ... > */\n-\t\tdp1 = _mm_loadu_si128((__m128i *)dst_port);\n-\n-\t\tfor (j = FWDSTEP; j != k; j += FWDSTEP) {\n-\t\t\tprocessx4_step3(&pkts_burst[j], &dst_port[j]);\n-\n-\t\t\t/*\n-\t\t\t * dp2:\n-\t\t\t * <d[j-3], d[j-2], d[j-1], d[j], ... >\n-\t\t\t */\n-\t\t\tdp2 = _mm_loadu_si128(\n-\t\t\t\t\t(__m128i *)&dst_port[j - FWDSTEP + 1]);\n-\t\t\tlp  = port_groupx4(&pnum[j - FWDSTEP], lp, dp1, dp2);\n-\n-\t\t\t/*\n-\t\t\t * dp1:\n-\t\t\t * <d[j], d[j+1], d[j+2], d[j+3], ... >\n-\t\t\t */\n-\t\t\tdp1 = _mm_srli_si128(dp2, (FWDSTEP - 1) *\n-\t\t\t\t\tsizeof(dst_port[0]));\n-\t\t}\n-\n-\t\t/*\n-\t\t * dp2: <d[j-3], d[j-2], d[j-1], d[j-1], ... 
>\n-\t\t */\n-\t\tdp2 = _mm_shufflelo_epi16(dp1, 0xf9);\n-\t\tlp  = port_groupx4(&pnum[j - FWDSTEP], lp, dp1, dp2);\n-\n-\t\t/*\n-\t\t * remove values added by the last repeated\n-\t\t * dst port.\n-\t\t */\n-\t\tlp[0]--;\n-\t\tdlp = dst_port[j - 1];\n-\t} else {\n-\t\t/* set dlp and lp to the never used values. */\n-\t\tdlp = BAD_PORT - 1;\n-\t\tlp = pnum + MAX_PKT_BURST;\n-\t}\n-\n-\t/* Process up to last 3 packets one by one. */\n-\tswitch (nb_rx % FWDSTEP) {\n-\tcase 3:\n-\t\tprocess_packet(pkts_burst[j], dst_port + j, portid);\n-\t\tGROUP_PORT_STEP(dlp, dst_port, lp, pnum, j);\n-\t\tj++;\n-\t\t/* fall-through */\n-\tcase 2:\n-\t\tprocess_packet(pkts_burst[j], dst_port + j, portid);\n-\t\tGROUP_PORT_STEP(dlp, dst_port, lp, pnum, j);\n-\t\tj++;\n-\t\t/* fall-through */\n-\tcase 1:\n-\t\tprocess_packet(pkts_burst[j], dst_port + j, portid);\n-\t\tGROUP_PORT_STEP(dlp, dst_port, lp, pnum, j);\n-\t\tj++;\n-\t}\n-\n-\t/*\n-\t * Send packets out, through destination port.\n-\t * Consecuteve pacekts with the same destination port\n-\t * are already grouped together.\n-\t * If destination port for the packet equals BAD_PORT,\n-\t * then free the packet without sending it out.\n-\t */\n-\tfor (j = 0; j < nb_rx; j += k) {\n-\n-\t\tint32_t m;\n-\t\tuint16_t pn;\n-\n-\t\tpn = dst_port[j];\n-\t\tk = pnum[j];\n-\n-\t\tif (likely(pn != BAD_PORT))\n-\t\t\tsend_packetsx4(pn, pkts_burst + j, k);\n-\t\telse\n-\t\t\tfor (m = j; m != j + k; m++)\n-\t\t\t\trte_pktmbuf_free(pkts_burst[m]);\n-\n-\t}\n-\n-#endif /* APP_LOOKUP_METHOD */\n-#else /* ENABLE_MULTI_BUFFER_OPTIMIZE == 0 */\n-\n-\t/* Prefetch first packets */\n-\tfor (j = 0; j < PREFETCH_OFFSET && j < nb_rx; j++)\n-\t\trte_prefetch0(rte_pktmbuf_mtod(pkts_burst[j], void *));\n-\n-\t/* Prefetch and forward already prefetched packets */\n-\tfor (j = 0; j < (nb_rx - PREFETCH_OFFSET); j++) {\n-\t\trte_prefetch0(rte_pktmbuf_mtod(pkts_burst[\n-\t\t\t\tj + PREFETCH_OFFSET], void *));\n-\t\tl3fwd_simple_forward(pkts_burst[j], portid);\n-\t}\n-\n-\t/* Forward remaining prefetched packets */\n-\tfor (; j < nb_rx; j++)\n-\t\tl3fwd_simple_forward(pkts_burst[j], portid);\n-\n-#endif /* ENABLE_MULTI_BUFFER_OPTIMIZE */\n-\n-}\n-\n-#if (APP_CPU_LOAD > 0)\n-\n-/*\n- * CPU-load stats collector\n- */\n-static int __rte_noreturn\n-cpu_load_collector(__rte_unused void *arg) {\n-\tunsigned i, j, k;\n-\tuint64_t prev_tsc, diff_tsc, cur_tsc;\n-\tuint64_t total[MAX_CPU] = { 0 };\n-\tunsigned min_cpu = MAX_CPU;\n-\tunsigned max_cpu = 0;\n-\tunsigned cpu_id;\n-\tint busy_total = 0;\n-\tint busy_flag = 0;\n-\n-\tunsigned int n_thread_per_cpu[MAX_CPU] = { 0 };\n-\tstruct thread_conf *thread_per_cpu[MAX_CPU][MAX_THREAD];\n-\n-\tstruct thread_conf *thread_conf;\n-\n-\tconst uint64_t interval_tsc = (rte_get_tsc_hz() + US_PER_S - 1) /\n-\t\tUS_PER_S * CPU_LOAD_TIMEOUT_US;\n-\n-\tprev_tsc = 0;\n-\t/*\n-\t * Wait for all threads\n-\t */\n-\n-\tprintf(\"Waiting for %d rx threads and %d tx threads\\n\", n_rx_thread,\n-\t\t\tn_tx_thread);\n-\n-\trte_wait_until_equal_16(&rx_counter, n_rx_thread, __ATOMIC_RELAXED);\n-\trte_wait_until_equal_16(&tx_counter, n_tx_thread, __ATOMIC_RELAXED);\n-\n-\tfor (i = 0; i < n_rx_thread; i++) {\n-\n-\t\tthread_conf = &rx_thread[i].conf;\n-\t\tcpu_id = thread_conf->cpu_id;\n-\t\tthread_per_cpu[cpu_id][n_thread_per_cpu[cpu_id]++] = thread_conf;\n-\n-\t\tif (cpu_id > max_cpu)\n-\t\t\tmax_cpu = cpu_id;\n-\t\tif (cpu_id < min_cpu)\n-\t\t\tmin_cpu = cpu_id;\n-\t}\n-\tfor (i = 0; i < n_tx_thread; i++) {\n-\n-\t\tthread_conf = &tx_thread[i].conf;\n-\t\tcpu_id = 
thread_conf->cpu_id;\n-\t\tthread_per_cpu[cpu_id][n_thread_per_cpu[cpu_id]++] = thread_conf;\n-\n-\t\tif (thread_conf->cpu_id > max_cpu)\n-\t\t\tmax_cpu = thread_conf->cpu_id;\n-\t\tif (thread_conf->cpu_id < min_cpu)\n-\t\t\tmin_cpu = thread_conf->cpu_id;\n-\t}\n-\n-\twhile (1) {\n-\n-\t\tcpu_load.counter++;\n-\t\tfor (i = min_cpu; i <= max_cpu; i++) {\n-\t\t\tfor (j = 0; j < MAX_CPU_COUNTER; j++) {\n-\t\t\t\tfor (k = 0; k < n_thread_per_cpu[i]; k++)\n-\t\t\t\t\tif (thread_per_cpu[i][k]->busy[j]) {\n-\t\t\t\t\t\tbusy_flag = 1;\n-\t\t\t\t\t\tbreak;\n-\t\t\t\t\t}\n-\t\t\t\tif (busy_flag) {\n-\t\t\t\t\tcpu_load.hits[j][i]++;\n-\t\t\t\t\tbusy_total = 1;\n-\t\t\t\t\tbusy_flag = 0;\n-\t\t\t\t}\n-\t\t\t}\n-\n-\t\t\tif (busy_total) {\n-\t\t\t\ttotal[i]++;\n-\t\t\t\tbusy_total = 0;\n-\t\t\t}\n-\t\t}\n-\n-\t\tcur_tsc = rte_rdtsc();\n-\n-\t\tdiff_tsc = cur_tsc - prev_tsc;\n-\t\tif (unlikely(diff_tsc > interval_tsc)) {\n-\n-\t\t\tprintf(\"\\033c\");\n-\n-\t\t\tprintf(\"Cpu usage for %d rx threads and %d tx threads:\\n\\n\",\n-\t\t\t\t\tn_rx_thread, n_tx_thread);\n-\n-\t\t\tprintf(\"cpu#     proc%%  poll%%  overhead%%\\n\\n\");\n-\n-\t\t\tfor (i = min_cpu; i <= max_cpu; i++) {\n-\t\t\t\tprintf(\"CPU %d:\", i);\n-\t\t\t\tfor (j = 0; j < MAX_CPU_COUNTER; j++) {\n-\t\t\t\t\tprintf(\"%7\" PRIu64 \"\",\n-\t\t\t\t\t\t\tcpu_load.hits[j][i] * 100 / cpu_load.counter);\n-\t\t\t\t\tcpu_load.hits[j][i] = 0;\n-\t\t\t\t}\n-\t\t\t\tprintf(\"%7\" PRIu64 \"\\n\",\n-\t\t\t\t\t\t100 - total[i] * 100 / cpu_load.counter);\n-\t\t\t\ttotal[i] = 0;\n-\t\t\t}\n-\t\t\tcpu_load.counter = 0;\n-\n-\t\t\tprev_tsc = cur_tsc;\n-\t\t}\n-\n-\t}\n-}\n-#endif /* APP_CPU_LOAD */\n-\n-/*\n- * Null processing lthread loop\n- *\n- * This loop is used to start empty scheduler on lcore.\n- */\n-static void *\n-lthread_null(__rte_unused void *args)\n-{\n-\tint lcore_id = rte_lcore_id();\n-\n-\tRTE_LOG(INFO, L3FWD, \"Starting scheduler on lcore %d.\\n\", lcore_id);\n-\tlthread_exit(NULL);\n-\treturn NULL;\n-}\n-\n-/* main processing loop */\n-static void *\n-lthread_tx_per_ring(void *dummy)\n-{\n-\tint nb_rx;\n-\tuint16_t portid;\n-\tstruct rte_ring *ring;\n-\tstruct thread_tx_conf *tx_conf;\n-\tstruct rte_mbuf *pkts_burst[MAX_PKT_BURST];\n-\tstruct lthread_cond *ready;\n-\n-\ttx_conf = (struct thread_tx_conf *)dummy;\n-\tring = tx_conf->ring;\n-\tready = *tx_conf->ready;\n-\n-\tlthread_set_data((void *)tx_conf);\n-\n-\t/*\n-\t * Move this lthread to lcore\n-\t */\n-\tlthread_set_affinity(tx_conf->conf.lcore_id);\n-\n-\tRTE_LOG(INFO, L3FWD, \"entering main tx loop on lcore %u\\n\", rte_lcore_id());\n-\n-\tnb_rx = 0;\n-\t__atomic_fetch_add(&tx_counter, 1, __ATOMIC_RELAXED);\n-\twhile (1) {\n-\n-\t\t/*\n-\t\t * Read packet from ring\n-\t\t */\n-\t\tSET_CPU_BUSY(tx_conf, CPU_POLL);\n-\t\tnb_rx = rte_ring_sc_dequeue_burst(ring, (void **)pkts_burst,\n-\t\t\t\tMAX_PKT_BURST, NULL);\n-\t\tSET_CPU_IDLE(tx_conf, CPU_POLL);\n-\n-\t\tif (nb_rx > 0) {\n-\t\t\tSET_CPU_BUSY(tx_conf, CPU_PROCESS);\n-\t\t\tportid = pkts_burst[0]->port;\n-\t\t\tprocess_burst(pkts_burst, nb_rx, portid);\n-\t\t\tSET_CPU_IDLE(tx_conf, CPU_PROCESS);\n-\t\t\tlthread_yield();\n-\t\t} else\n-\t\t\tlthread_cond_wait(ready, 0);\n-\n-\t}\n-\treturn NULL;\n-}\n-\n-/*\n- * Main tx-lthreads spawner lthread.\n- *\n- * This lthread is used to spawn one new lthread per ring from producers.\n- *\n- */\n-static void *\n-lthread_tx(void *args)\n-{\n-\tstruct lthread *lt;\n-\n-\tunsigned lcore_id;\n-\tuint16_t portid;\n-\tstruct thread_tx_conf *tx_conf;\n-\n-\ttx_conf = (struct thread_tx_conf 
*)args;\n-\tlthread_set_data((void *)tx_conf);\n-\n-\t/*\n-\t * Move this lthread to the selected lcore\n-\t */\n-\tlthread_set_affinity(tx_conf->conf.lcore_id);\n-\n-\t/*\n-\t * Spawn tx readers (one per input ring)\n-\t */\n-\tlthread_create(&lt, tx_conf->conf.lcore_id, lthread_tx_per_ring,\n-\t\t\t(void *)tx_conf);\n-\n-\tlcore_id = rte_lcore_id();\n-\n-\tRTE_LOG(INFO, L3FWD, \"Entering Tx main loop on lcore %u\\n\", lcore_id);\n-\n-\ttx_conf->conf.cpu_id = sched_getcpu();\n-\twhile (1) {\n-\n-\t\tlthread_sleep(BURST_TX_DRAIN_US * 1000);\n-\n-\t\t/*\n-\t\t * TX burst queue drain\n-\t\t */\n-\t\tfor (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {\n-\t\t\tif (tx_conf->tx_mbufs[portid].len == 0)\n-\t\t\t\tcontinue;\n-\t\t\tSET_CPU_BUSY(tx_conf, CPU_PROCESS);\n-\t\t\tsend_burst(tx_conf, tx_conf->tx_mbufs[portid].len, portid);\n-\t\t\tSET_CPU_IDLE(tx_conf, CPU_PROCESS);\n-\t\t\ttx_conf->tx_mbufs[portid].len = 0;\n-\t\t}\n-\n-\t}\n-\treturn NULL;\n-}\n-\n-static void *\n-lthread_rx(void *dummy)\n-{\n-\tint ret;\n-\tuint16_t nb_rx;\n-\tint i;\n-\tuint16_t portid;\n-\tuint8_t queueid;\n-\tint worker_id;\n-\tint len[RTE_MAX_LCORE] = { 0 };\n-\tint old_len, new_len;\n-\tstruct rte_mbuf *pkts_burst[MAX_PKT_BURST];\n-\tstruct thread_rx_conf *rx_conf;\n-\n-\trx_conf = (struct thread_rx_conf *)dummy;\n-\tlthread_set_data((void *)rx_conf);\n-\n-\t/*\n-\t * Move this lthread to lcore\n-\t */\n-\tlthread_set_affinity(rx_conf->conf.lcore_id);\n-\n-\tif (rx_conf->n_rx_queue == 0) {\n-\t\tRTE_LOG(INFO, L3FWD, \"lcore %u has nothing to do\\n\", rte_lcore_id());\n-\t\treturn NULL;\n-\t}\n-\n-\tRTE_LOG(INFO, L3FWD, \"Entering main Rx loop on lcore %u\\n\", rte_lcore_id());\n-\n-\tfor (i = 0; i < rx_conf->n_rx_queue; i++) {\n-\n-\t\tportid = rx_conf->rx_queue_list[i].port_id;\n-\t\tqueueid = rx_conf->rx_queue_list[i].queue_id;\n-\t\tRTE_LOG(INFO, L3FWD,\n-\t\t\t\" -- lcoreid=%u portid=%u rxqueueid=%hhu\\n\",\n-\t\t\t\trte_lcore_id(), portid, queueid);\n-\t}\n-\n-\t/*\n-\t * Init all condition variables (one per rx thread)\n-\t */\n-\tfor (i = 0; i < rx_conf->n_rx_queue; i++)\n-\t\tlthread_cond_init(NULL, &rx_conf->ready[i], NULL);\n-\n-\tworker_id = 0;\n-\n-\trx_conf->conf.cpu_id = sched_getcpu();\n-\t__atomic_fetch_add(&rx_counter, 1, __ATOMIC_RELAXED);\n-\twhile (1) {\n-\n-\t\t/*\n-\t\t * Read packet from RX queues\n-\t\t */\n-\t\tfor (i = 0; i < rx_conf->n_rx_queue; ++i) {\n-\t\t\tportid = rx_conf->rx_queue_list[i].port_id;\n-\t\t\tqueueid = rx_conf->rx_queue_list[i].queue_id;\n-\n-\t\t\tSET_CPU_BUSY(rx_conf, CPU_POLL);\n-\t\t\tnb_rx = rte_eth_rx_burst(portid, queueid, pkts_burst,\n-\t\t\t\tMAX_PKT_BURST);\n-\t\t\tSET_CPU_IDLE(rx_conf, CPU_POLL);\n-\n-\t\t\tif (nb_rx != 0) {\n-\t\t\t\tworker_id = (worker_id + 1) % rx_conf->n_ring;\n-\t\t\t\told_len = len[worker_id];\n-\n-\t\t\t\tSET_CPU_BUSY(rx_conf, CPU_PROCESS);\n-\t\t\t\tret = rte_ring_sp_enqueue_burst(\n-\t\t\t\t\t\trx_conf->ring[worker_id],\n-\t\t\t\t\t\t(void **) pkts_burst,\n-\t\t\t\t\t\tnb_rx, NULL);\n-\n-\t\t\t\tnew_len = old_len + ret;\n-\n-\t\t\t\tif (new_len >= BURST_SIZE) {\n-\t\t\t\t\tlthread_cond_signal(rx_conf->ready[worker_id]);\n-\t\t\t\t\tnew_len = 0;\n-\t\t\t\t}\n-\n-\t\t\t\tlen[worker_id] = new_len;\n-\n-\t\t\t\tif (unlikely(ret < nb_rx)) {\n-\t\t\t\t\tuint32_t k;\n-\n-\t\t\t\t\tfor (k = ret; k < nb_rx; k++) {\n-\t\t\t\t\t\tstruct rte_mbuf *m = pkts_burst[k];\n-\n-\t\t\t\t\t\trte_pktmbuf_free(m);\n-\t\t\t\t\t}\n-\t\t\t\t}\n-\t\t\t\tSET_CPU_IDLE(rx_conf, CPU_PROCESS);\n-\t\t\t}\n-\n-\t\t\tlthread_yield();\n-\t\t}\n-\t}\n-\treturn 
NULL;\n-}\n-\n-/*\n- * Start scheduler with initial lthread on lcore\n- *\n- * This lthread loop spawns all rx and tx lthreads on main lcore\n- */\n-\n-static void *\n-lthread_spawner(__rte_unused void *arg)\n-{\n-\tstruct lthread *lt[MAX_THREAD];\n-\tint i;\n-\tint n_thread = 0;\n-\n-\tprintf(\"Entering lthread_spawner\\n\");\n-\n-\t/*\n-\t * Create producers (rx threads) on default lcore\n-\t */\n-\tfor (i = 0; i < n_rx_thread; i++) {\n-\t\trx_thread[i].conf.thread_id = i;\n-\t\tlthread_create(&lt[n_thread], -1, lthread_rx,\n-\t\t\t\t(void *)&rx_thread[i]);\n-\t\tn_thread++;\n-\t}\n-\n-\t/*\n-\t * Wait for all producers. Until some producers can be started on the same\n-\t * scheduler as this lthread, yielding is required to let them to run and\n-\t * prevent deadlock here.\n-\t */\n-\twhile (__atomic_load_n(&rx_counter, __ATOMIC_RELAXED) < n_rx_thread)\n-\t\tlthread_sleep(100000);\n-\n-\t/*\n-\t * Create consumers (tx threads) on default lcore_id\n-\t */\n-\tfor (i = 0; i < n_tx_thread; i++) {\n-\t\ttx_thread[i].conf.thread_id = i;\n-\t\tlthread_create(&lt[n_thread], -1, lthread_tx,\n-\t\t\t\t(void *)&tx_thread[i]);\n-\t\tn_thread++;\n-\t}\n-\n-\t/*\n-\t * Wait for all threads finished\n-\t */\n-\tfor (i = 0; i < n_thread; i++)\n-\t\tlthread_join(lt[i], NULL);\n-\n-\treturn NULL;\n-}\n-\n-/*\n- * Start main scheduler with initial lthread spawning rx and tx lthreads\n- * (main_lthread_main).\n- */\n-static int\n-lthread_main_spawner(__rte_unused void *arg) {\n-\tstruct lthread *lt;\n-\tint lcore_id = rte_lcore_id();\n-\n-\tRTE_PER_LCORE(lcore_conf) = &lcore_conf[lcore_id];\n-\tlthread_create(&lt, -1, lthread_spawner, NULL);\n-\tlthread_run();\n-\n-\treturn 0;\n-}\n-\n-/*\n- * Start scheduler on lcore.\n- */\n-static int\n-sched_spawner(__rte_unused void *arg) {\n-\tstruct lthread *lt;\n-\tint lcore_id = rte_lcore_id();\n-\n-#if (APP_CPU_LOAD)\n-\tif (lcore_id == cpu_load_lcore_id) {\n-\t\tcpu_load_collector(arg);\n-\t\treturn 0;\n-\t}\n-#endif /* APP_CPU_LOAD */\n-\n-\tRTE_PER_LCORE(lcore_conf) = &lcore_conf[lcore_id];\n-\tlthread_create(&lt, -1, lthread_null, NULL);\n-\tlthread_run();\n-\n-\treturn 0;\n-}\n-\n-/* main processing loop */\n-static int __rte_noreturn\n-pthread_tx(void *dummy)\n-{\n-\tstruct rte_mbuf *pkts_burst[MAX_PKT_BURST];\n-\tuint64_t prev_tsc, diff_tsc, cur_tsc;\n-\tint nb_rx;\n-\tuint16_t portid;\n-\tstruct thread_tx_conf *tx_conf;\n-\n-\tconst uint64_t drain_tsc = (rte_get_tsc_hz() + US_PER_S - 1) /\n-\t\tUS_PER_S * BURST_TX_DRAIN_US;\n-\n-\tprev_tsc = 0;\n-\n-\ttx_conf = (struct thread_tx_conf *)dummy;\n-\n-\tRTE_LOG(INFO, L3FWD, \"Entering main Tx loop on lcore %u\\n\", rte_lcore_id());\n-\n-\ttx_conf->conf.cpu_id = sched_getcpu();\n-\t__atomic_fetch_add(&tx_counter, 1, __ATOMIC_RELAXED);\n-\twhile (1) {\n-\n-\t\tcur_tsc = rte_rdtsc();\n-\n-\t\t/*\n-\t\t * TX burst queue drain\n-\t\t */\n-\t\tdiff_tsc = cur_tsc - prev_tsc;\n-\t\tif (unlikely(diff_tsc > drain_tsc)) {\n-\n-\t\t\t/*\n-\t\t\t * This could be optimized (use queueid instead of\n-\t\t\t * portid), but it is not called so often\n-\t\t\t */\n-\t\t\tSET_CPU_BUSY(tx_conf, CPU_PROCESS);\n-\t\t\tfor (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {\n-\t\t\t\tif (tx_conf->tx_mbufs[portid].len == 0)\n-\t\t\t\t\tcontinue;\n-\t\t\t\tsend_burst(tx_conf, tx_conf->tx_mbufs[portid].len, portid);\n-\t\t\t\ttx_conf->tx_mbufs[portid].len = 0;\n-\t\t\t}\n-\t\t\tSET_CPU_IDLE(tx_conf, CPU_PROCESS);\n-\n-\t\t\tprev_tsc = cur_tsc;\n-\t\t}\n-\n-\t\t/*\n-\t\t * Read packet from ring\n-\t\t */\n-\t\tSET_CPU_BUSY(tx_conf, 
CPU_POLL);\n-\t\tnb_rx = rte_ring_sc_dequeue_burst(tx_conf->ring,\n-\t\t\t\t(void **)pkts_burst, MAX_PKT_BURST, NULL);\n-\t\tSET_CPU_IDLE(tx_conf, CPU_POLL);\n-\n-\t\tif (unlikely(nb_rx == 0)) {\n-\t\t\tsched_yield();\n-\t\t\tcontinue;\n-\t\t}\n-\n-\t\tSET_CPU_BUSY(tx_conf, CPU_PROCESS);\n-\t\tportid = pkts_burst[0]->port;\n-\t\tprocess_burst(pkts_burst, nb_rx, portid);\n-\t\tSET_CPU_IDLE(tx_conf, CPU_PROCESS);\n-\n-\t}\n-}\n-\n-static int\n-pthread_rx(void *dummy)\n-{\n-\tint i;\n-\tint worker_id;\n-\tuint32_t n;\n-\tuint32_t nb_rx;\n-\tunsigned lcore_id;\n-\tuint8_t queueid;\n-\tuint16_t portid;\n-\tstruct rte_mbuf *pkts_burst[MAX_PKT_BURST];\n-\n-\tstruct thread_rx_conf *rx_conf;\n-\n-\tlcore_id = rte_lcore_id();\n-\trx_conf = (struct thread_rx_conf *)dummy;\n-\n-\tif (rx_conf->n_rx_queue == 0) {\n-\t\tRTE_LOG(INFO, L3FWD, \"lcore %u has nothing to do\\n\", lcore_id);\n-\t\treturn 0;\n-\t}\n-\n-\tRTE_LOG(INFO, L3FWD, \"entering main rx loop on lcore %u\\n\", lcore_id);\n-\n-\tfor (i = 0; i < rx_conf->n_rx_queue; i++) {\n-\n-\t\tportid = rx_conf->rx_queue_list[i].port_id;\n-\t\tqueueid = rx_conf->rx_queue_list[i].queue_id;\n-\t\tRTE_LOG(INFO, L3FWD,\n-\t\t\t\" -- lcoreid=%u portid=%u rxqueueid=%hhu\\n\",\n-\t\t\t\tlcore_id, portid, queueid);\n-\t}\n-\n-\tworker_id = 0;\n-\trx_conf->conf.cpu_id = sched_getcpu();\n-\t__atomic_fetch_add(&rx_counter, 1, __ATOMIC_RELAXED);\n-\twhile (1) {\n-\n-\t\t/*\n-\t\t * Read packet from RX queues\n-\t\t */\n-\t\tfor (i = 0; i < rx_conf->n_rx_queue; ++i) {\n-\t\t\tportid = rx_conf->rx_queue_list[i].port_id;\n-\t\t\tqueueid = rx_conf->rx_queue_list[i].queue_id;\n-\n-\t\t\tSET_CPU_BUSY(rx_conf, CPU_POLL);\n-\t\t\tnb_rx = rte_eth_rx_burst(portid, queueid, pkts_burst,\n-\t\t\t\tMAX_PKT_BURST);\n-\t\t\tSET_CPU_IDLE(rx_conf, CPU_POLL);\n-\n-\t\t\tif (nb_rx == 0) {\n-\t\t\t\tsched_yield();\n-\t\t\t\tcontinue;\n-\t\t\t}\n-\n-\t\t\tSET_CPU_BUSY(rx_conf, CPU_PROCESS);\n-\t\t\tworker_id = (worker_id + 1) % rx_conf->n_ring;\n-\t\t\tn = rte_ring_sp_enqueue_burst(rx_conf->ring[worker_id],\n-\t\t\t\t\t(void **)pkts_burst, nb_rx, NULL);\n-\n-\t\t\tif (unlikely(n != nb_rx)) {\n-\t\t\t\tuint32_t k;\n-\n-\t\t\t\tfor (k = n; k < nb_rx; k++) {\n-\t\t\t\t\tstruct rte_mbuf *m = pkts_burst[k];\n-\n-\t\t\t\t\trte_pktmbuf_free(m);\n-\t\t\t\t}\n-\t\t\t}\n-\n-\t\t\tSET_CPU_IDLE(rx_conf, CPU_PROCESS);\n-\n-\t\t}\n-\t}\n-}\n-\n-/*\n- * P-Thread spawner.\n- */\n-static int\n-pthread_run(__rte_unused void *arg) {\n-\tint lcore_id = rte_lcore_id();\n-\tint i;\n-\n-\tfor (i = 0; i < n_rx_thread; i++)\n-\t\tif (rx_thread[i].conf.lcore_id == lcore_id) {\n-\t\t\tprintf(\"Start rx thread on %d...\\n\", lcore_id);\n-\t\t\tRTE_PER_LCORE(lcore_conf) = &lcore_conf[lcore_id];\n-\t\t\tRTE_PER_LCORE(lcore_conf)->data = (void *)&rx_thread[i];\n-\t\t\tpthread_rx((void *)&rx_thread[i]);\n-\t\t\treturn 0;\n-\t\t}\n-\n-\tfor (i = 0; i < n_tx_thread; i++)\n-\t\tif (tx_thread[i].conf.lcore_id == lcore_id) {\n-\t\t\tprintf(\"Start tx thread on %d...\\n\", lcore_id);\n-\t\t\tRTE_PER_LCORE(lcore_conf) = &lcore_conf[lcore_id];\n-\t\t\tRTE_PER_LCORE(lcore_conf)->data = (void *)&tx_thread[i];\n-\t\t\tpthread_tx((void *)&tx_thread[i]);\n-\t\t\treturn 0;\n-\t\t}\n-\n-#if (APP_CPU_LOAD)\n-\tif (lcore_id == cpu_load_lcore_id)\n-\t\tcpu_load_collector(arg);\n-#endif /* APP_CPU_LOAD */\n-\n-\treturn 0;\n-}\n-\n-static int\n-check_lcore_params(void)\n-{\n-\tuint8_t queue, lcore;\n-\tuint16_t i;\n-\tint socketid;\n-\n-\tfor (i = 0; i < nb_rx_thread_params; ++i) {\n-\t\tqueue = rx_thread_params[i].queue_id;\n-\t\tif 
(queue >= MAX_RX_QUEUE_PER_PORT) {\n-\t\t\tprintf(\"invalid queue number: %hhu\\n\", queue);\n-\t\t\treturn -1;\n-\t\t}\n-\t\tlcore = rx_thread_params[i].lcore_id;\n-\t\tif (!rte_lcore_is_enabled(lcore)) {\n-\t\t\tprintf(\"error: lcore %hhu is not enabled in lcore mask\\n\", lcore);\n-\t\t\treturn -1;\n-\t\t}\n-\t\tsocketid = rte_lcore_to_socket_id(lcore);\n-\t\tif ((socketid != 0) && (numa_on == 0))\n-\t\t\tprintf(\"warning: lcore %hhu is on socket %d with numa off\\n\",\n-\t\t\t\tlcore, socketid);\n-\t}\n-\treturn 0;\n-}\n-\n-static int\n-check_port_config(void)\n-{\n-\tunsigned portid;\n-\tuint16_t i;\n-\n-\tfor (i = 0; i < nb_rx_thread_params; ++i) {\n-\t\tportid = rx_thread_params[i].port_id;\n-\t\tif ((enabled_port_mask & (1 << portid)) == 0) {\n-\t\t\tprintf(\"port %u is not enabled in port mask\\n\", portid);\n-\t\t\treturn -1;\n-\t\t}\n-\t\tif (!rte_eth_dev_is_valid_port(portid)) {\n-\t\t\tprintf(\"port %u is not present on the board\\n\", portid);\n-\t\t\treturn -1;\n-\t\t}\n-\t}\n-\treturn 0;\n-}\n-\n-static uint8_t\n-get_port_n_rx_queues(const uint16_t port)\n-{\n-\tint queue = -1;\n-\tuint16_t i;\n-\n-\tfor (i = 0; i < nb_rx_thread_params; ++i)\n-\t\tif (rx_thread_params[i].port_id == port &&\n-\t\t\t\trx_thread_params[i].queue_id > queue)\n-\t\t\tqueue = rx_thread_params[i].queue_id;\n-\n-\treturn (uint8_t)(++queue);\n-}\n-\n-static int\n-init_rx_rings(void)\n-{\n-\tunsigned socket_io;\n-\tstruct thread_rx_conf *rx_conf;\n-\tstruct thread_tx_conf *tx_conf;\n-\tunsigned rx_thread_id, tx_thread_id;\n-\tchar name[256];\n-\tstruct rte_ring *ring = NULL;\n-\n-\tfor (tx_thread_id = 0; tx_thread_id < n_tx_thread; tx_thread_id++) {\n-\n-\t\ttx_conf = &tx_thread[tx_thread_id];\n-\n-\t\tprintf(\"Connecting tx-thread %d with rx-thread %d\\n\", tx_thread_id,\n-\t\t\t\ttx_conf->conf.thread_id);\n-\n-\t\trx_thread_id = tx_conf->conf.thread_id;\n-\t\tif (rx_thread_id > n_tx_thread) {\n-\t\t\tprintf(\"connection from tx-thread %u to rx-thread %u fails \"\n-\t\t\t\t\t\"(rx-thread not defined)\\n\", tx_thread_id, rx_thread_id);\n-\t\t\treturn -1;\n-\t\t}\n-\n-\t\trx_conf = &rx_thread[rx_thread_id];\n-\t\tsocket_io = rte_lcore_to_socket_id(rx_conf->conf.lcore_id);\n-\n-\t\tsnprintf(name, sizeof(name), \"app_ring_s%u_rx%u_tx%u\",\n-\t\t\t\tsocket_io, rx_thread_id, tx_thread_id);\n-\n-\t\tring = rte_ring_create(name, 1024 * 4, socket_io,\n-\t\t\t\tRING_F_SP_ENQ | RING_F_SC_DEQ);\n-\n-\t\tif (ring == NULL) {\n-\t\t\trte_panic(\"Cannot create ring to connect rx-thread %u \"\n-\t\t\t\t\t\"with tx-thread %u\\n\", rx_thread_id, tx_thread_id);\n-\t\t}\n-\n-\t\trx_conf->ring[rx_conf->n_ring] = ring;\n-\n-\t\ttx_conf->ring = ring;\n-\t\ttx_conf->ready = &rx_conf->ready[rx_conf->n_ring];\n-\n-\t\trx_conf->n_ring++;\n-\t}\n-\treturn 0;\n-}\n-\n-static int\n-init_rx_queues(void)\n-{\n-\tuint16_t i, nb_rx_queue;\n-\tuint8_t thread;\n-\n-\tn_rx_thread = 0;\n-\n-\tfor (i = 0; i < nb_rx_thread_params; ++i) {\n-\t\tthread = rx_thread_params[i].thread_id;\n-\t\tnb_rx_queue = rx_thread[thread].n_rx_queue;\n-\n-\t\tif (nb_rx_queue >= MAX_RX_QUEUE_PER_LCORE) {\n-\t\t\tprintf(\"error: too many queues (%u) for thread: %u\\n\",\n-\t\t\t\t(unsigned)nb_rx_queue + 1, (unsigned)thread);\n-\t\t\treturn -1;\n-\t\t}\n-\n-\t\trx_thread[thread].conf.thread_id = thread;\n-\t\trx_thread[thread].conf.lcore_id = rx_thread_params[i].lcore_id;\n-\t\trx_thread[thread].rx_queue_list[nb_rx_queue].port_id =\n-\t\t\trx_thread_params[i].port_id;\n-\t\trx_thread[thread].rx_queue_list[nb_rx_queue].queue_id 
=\n-\t\t\trx_thread_params[i].queue_id;\n-\t\trx_thread[thread].n_rx_queue++;\n-\n-\t\tif (thread >= n_rx_thread)\n-\t\t\tn_rx_thread = thread + 1;\n-\n-\t}\n-\treturn 0;\n-}\n-\n-static int\n-init_tx_threads(void)\n-{\n-\tint i;\n-\n-\tn_tx_thread = 0;\n-\tfor (i = 0; i < nb_tx_thread_params; ++i) {\n-\t\ttx_thread[n_tx_thread].conf.thread_id = tx_thread_params[i].thread_id;\n-\t\ttx_thread[n_tx_thread].conf.lcore_id = tx_thread_params[i].lcore_id;\n-\t\tn_tx_thread++;\n-\t}\n-\treturn 0;\n-}\n-\n-/* display usage */\n-static void\n-print_usage(const char *prgname)\n-{\n-\tprintf(\"%s [EAL options] -- -p PORTMASK -P\"\n-\t\t\"  [--rx (port,queue,lcore,thread)[,(port,queue,lcore,thread]]\"\n-\t\t\"  [--tx (lcore,thread)[,(lcore,thread]]\"\n-\t\t\"  [--max-pkt-len PKTLEN]\"\n-\t\t\"  [--parse-ptype]\\n\\n\"\n-\t\t\"  -p PORTMASK: hexadecimal bitmask of ports to configure\\n\"\n-\t\t\"  -P : enable promiscuous mode\\n\"\n-\t\t\"  --rx (port,queue,lcore,thread): rx queues configuration\\n\"\n-\t\t\"  --tx (lcore,thread): tx threads configuration\\n\"\n-\t\t\"  --stat-lcore LCORE: use lcore for stat collector\\n\"\n-\t\t\"  --eth-dest=X,MM:MM:MM:MM:MM:MM: optional, ethernet destination for port X\\n\"\n-\t\t\"  --no-numa: optional, disable numa awareness\\n\"\n-\t\t\"  --ipv6: optional, specify it if running ipv6 packets\\n\"\n-\t\t\"  --max-pkt-len PKTLEN: maximum packet length in decimal (64-9600)\\n\"\n-\t\t\"  --hash-entry-num: specify the hash entry number in hexadecimal to be setup\\n\"\n-\t\t\"  --no-lthreads: turn off lthread model\\n\"\n-\t\t\"  --parse-ptype: set to use software to analyze packet type\\n\\n\",\n-\t\tprgname);\n-}\n-\n-static int parse_max_pkt_len(const char *pktlen)\n-{\n-\tchar *end = NULL;\n-\tunsigned long len;\n-\n-\t/* parse decimal string */\n-\tlen = strtoul(pktlen, &end, 10);\n-\tif ((pktlen[0] == '\\0') || (end == NULL) || (*end != '\\0'))\n-\t\treturn -1;\n-\n-\tif (len == 0)\n-\t\treturn -1;\n-\n-\treturn len;\n-}\n-\n-static int\n-parse_portmask(const char *portmask)\n-{\n-\tchar *end = NULL;\n-\tunsigned long pm;\n-\n-\t/* parse hexadecimal string */\n-\tpm = strtoul(portmask, &end, 16);\n-\tif ((portmask[0] == '\\0') || (end == NULL) || (*end != '\\0'))\n-\t\treturn 0;\n-\n-\treturn pm;\n-}\n-\n-#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)\n-static int\n-parse_hash_entry_number(const char *hash_entry_num)\n-{\n-\tchar *end = NULL;\n-\tunsigned long hash_en;\n-\n-\t/* parse hexadecimal string */\n-\thash_en = strtoul(hash_entry_num, &end, 16);\n-\tif ((hash_entry_num[0] == '\\0') || (end == NULL) || (*end != '\\0'))\n-\t\treturn -1;\n-\n-\tif (hash_en == 0)\n-\t\treturn -1;\n-\n-\treturn hash_en;\n-}\n-#endif\n-\n-static int\n-parse_rx_config(const char *q_arg)\n-{\n-\tchar s[256];\n-\tconst char *p, *p0 = q_arg;\n-\tchar *end;\n-\tenum fieldnames {\n-\t\tFLD_PORT = 0,\n-\t\tFLD_QUEUE,\n-\t\tFLD_LCORE,\n-\t\tFLD_THREAD,\n-\t\t_NUM_FLD\n-\t};\n-\tunsigned long int_fld[_NUM_FLD];\n-\tchar *str_fld[_NUM_FLD];\n-\tint i;\n-\tunsigned size;\n-\n-\tnb_rx_thread_params = 0;\n-\n-\twhile ((p = strchr(p0, '(')) != NULL) {\n-\t\t++p;\n-\t\tp0 = strchr(p, ')');\n-\t\tif (p0 == NULL)\n-\t\t\treturn -1;\n-\n-\t\tsize = p0 - p;\n-\t\tif (size >= sizeof(s))\n-\t\t\treturn -1;\n-\n-\t\tsnprintf(s, sizeof(s), \"%.*s\", size, p);\n-\t\tif (rte_strsplit(s, sizeof(s), str_fld, _NUM_FLD, ',') != _NUM_FLD)\n-\t\t\treturn -1;\n-\t\tfor (i = 0; i < _NUM_FLD; i++) {\n-\t\t\terrno = 0;\n-\t\t\tint_fld[i] = strtoul(str_fld[i], &end, 0);\n-\t\t\tif (errno != 0 || end == 
str_fld[i] || int_fld[i] > 255)\n-\t\t\t\treturn -1;\n-\t\t}\n-\t\tif (nb_rx_thread_params >= MAX_LCORE_PARAMS) {\n-\t\t\tprintf(\"exceeded max number of rx params: %hu\\n\",\n-\t\t\t\t\tnb_rx_thread_params);\n-\t\t\treturn -1;\n-\t\t}\n-\t\trx_thread_params_array[nb_rx_thread_params].port_id =\n-\t\t\t\tint_fld[FLD_PORT];\n-\t\trx_thread_params_array[nb_rx_thread_params].queue_id =\n-\t\t\t\t(uint8_t)int_fld[FLD_QUEUE];\n-\t\trx_thread_params_array[nb_rx_thread_params].lcore_id =\n-\t\t\t\t(uint8_t)int_fld[FLD_LCORE];\n-\t\trx_thread_params_array[nb_rx_thread_params].thread_id =\n-\t\t\t\t(uint8_t)int_fld[FLD_THREAD];\n-\t\t++nb_rx_thread_params;\n-\t}\n-\trx_thread_params = rx_thread_params_array;\n-\treturn 0;\n-}\n-\n-static int\n-parse_tx_config(const char *q_arg)\n-{\n-\tchar s[256];\n-\tconst char *p, *p0 = q_arg;\n-\tchar *end;\n-\tenum fieldnames {\n-\t\tFLD_LCORE = 0,\n-\t\tFLD_THREAD,\n-\t\t_NUM_FLD\n-\t};\n-\tunsigned long int_fld[_NUM_FLD];\n-\tchar *str_fld[_NUM_FLD];\n-\tint i;\n-\tunsigned size;\n-\n-\tnb_tx_thread_params = 0;\n-\n-\twhile ((p = strchr(p0, '(')) != NULL) {\n-\t\t++p;\n-\t\tp0 = strchr(p, ')');\n-\t\tif (p0 == NULL)\n-\t\t\treturn -1;\n-\n-\t\tsize = p0 - p;\n-\t\tif (size >= sizeof(s))\n-\t\t\treturn -1;\n-\n-\t\tsnprintf(s, sizeof(s), \"%.*s\", size, p);\n-\t\tif (rte_strsplit(s, sizeof(s), str_fld, _NUM_FLD, ',') != _NUM_FLD)\n-\t\t\treturn -1;\n-\t\tfor (i = 0; i < _NUM_FLD; i++) {\n-\t\t\terrno = 0;\n-\t\t\tint_fld[i] = strtoul(str_fld[i], &end, 0);\n-\t\t\tif (errno != 0 || end == str_fld[i] || int_fld[i] > 255)\n-\t\t\t\treturn -1;\n-\t\t}\n-\t\tif (nb_tx_thread_params >= MAX_LCORE_PARAMS) {\n-\t\t\tprintf(\"exceeded max number of tx params: %hu\\n\",\n-\t\t\t\tnb_tx_thread_params);\n-\t\t\treturn -1;\n-\t\t}\n-\t\ttx_thread_params_array[nb_tx_thread_params].lcore_id =\n-\t\t\t\t(uint8_t)int_fld[FLD_LCORE];\n-\t\ttx_thread_params_array[nb_tx_thread_params].thread_id =\n-\t\t\t\t(uint8_t)int_fld[FLD_THREAD];\n-\t\t++nb_tx_thread_params;\n-\t}\n-\ttx_thread_params = tx_thread_params_array;\n-\n-\treturn 0;\n-}\n-\n-#if (APP_CPU_LOAD > 0)\n-static int\n-parse_stat_lcore(const char *stat_lcore)\n-{\n-\tchar *end = NULL;\n-\tunsigned long lcore_id;\n-\n-\tlcore_id = strtoul(stat_lcore, &end, 10);\n-\tif ((stat_lcore[0] == '\\0') || (end == NULL) || (*end != '\\0'))\n-\t\treturn -1;\n-\n-\treturn lcore_id;\n-}\n-#endif\n-\n-static void\n-parse_eth_dest(const char *optarg)\n-{\n-\tuint16_t portid;\n-\tchar *port_end;\n-\tuint8_t c, *dest, peer_addr[6];\n-\n-\terrno = 0;\n-\tportid = strtoul(optarg, &port_end, 10);\n-\tif (errno != 0 || port_end == optarg || *port_end++ != ',')\n-\t\trte_exit(EXIT_FAILURE,\n-\t\t\"Invalid eth-dest: %s\", optarg);\n-\tif (portid >= RTE_MAX_ETHPORTS)\n-\t\trte_exit(EXIT_FAILURE,\n-\t\t\"eth-dest: port %d >= RTE_MAX_ETHPORTS(%d)\\n\",\n-\t\tportid, RTE_MAX_ETHPORTS);\n-\n-\tif (cmdline_parse_etheraddr(NULL, port_end,\n-\t\t&peer_addr, sizeof(peer_addr)) < 0)\n-\t\trte_exit(EXIT_FAILURE,\n-\t\t\"Invalid ethernet address: %s\\n\",\n-\t\tport_end);\n-\tdest = (uint8_t *)&dest_eth_addr[portid];\n-\tfor (c = 0; c < 6; c++)\n-\t\tdest[c] = peer_addr[c];\n-\t*(uint64_t *)(val_eth + portid) = dest_eth_addr[portid];\n-}\n-\n-enum {\n-#define OPT_RX_CONFIG       \"rx\"\n-\tOPT_RX_CONFIG_NUM = 256,\n-#define OPT_TX_CONFIG       \"tx\"\n-\tOPT_TX_CONFIG_NUM,\n-#define OPT_STAT_LCORE      \"stat-lcore\"\n-\tOPT_STAT_LCORE_NUM,\n-#define OPT_ETH_DEST        \"eth-dest\"\n-\tOPT_ETH_DEST_NUM,\n-#define OPT_NO_NUMA         
\"no-numa\"\n-\tOPT_NO_NUMA_NUM,\n-#define OPT_IPV6            \"ipv6\"\n-\tOPT_IPV6_NUM,\n-#define OPT_MAX_PKT_LEN \"max-pkt-len\"\n-\tOPT_MAX_PKT_LEN_NUM,\n-#define OPT_HASH_ENTRY_NUM  \"hash-entry-num\"\n-\tOPT_HASH_ENTRY_NUM_NUM,\n-#define OPT_NO_LTHREADS     \"no-lthreads\"\n-\tOPT_NO_LTHREADS_NUM,\n-#define OPT_PARSE_PTYPE     \"parse-ptype\"\n-\tOPT_PARSE_PTYPE_NUM,\n-};\n-\n-/* Parse the argument given in the command line of the application */\n-static int\n-parse_args(int argc, char **argv)\n-{\n-\tint opt, ret;\n-\tchar **argvopt;\n-\tint option_index;\n-\tchar *prgname = argv[0];\n-\tstatic struct option lgopts[] = {\n-\t\t{OPT_RX_CONFIG,      1, NULL, OPT_RX_CONFIG_NUM      },\n-\t\t{OPT_TX_CONFIG,      1, NULL, OPT_TX_CONFIG_NUM      },\n-\t\t{OPT_STAT_LCORE,     1, NULL, OPT_STAT_LCORE_NUM     },\n-\t\t{OPT_ETH_DEST,       1, NULL, OPT_ETH_DEST_NUM       },\n-\t\t{OPT_NO_NUMA,        0, NULL, OPT_NO_NUMA_NUM        },\n-\t\t{OPT_IPV6,           0, NULL, OPT_IPV6_NUM           },\n-\t\t{OPT_MAX_PKT_LEN,    1, NULL, OPT_MAX_PKT_LEN_NUM    },\n-\t\t{OPT_HASH_ENTRY_NUM, 1, NULL, OPT_HASH_ENTRY_NUM_NUM },\n-\t\t{OPT_NO_LTHREADS,    0, NULL, OPT_NO_LTHREADS_NUM    },\n-\t\t{OPT_PARSE_PTYPE,    0, NULL, OPT_PARSE_PTYPE_NUM    },\n-\t\t{NULL,               0, 0,    0                      }\n-\t};\n-\n-\targvopt = argv;\n-\n-\twhile ((opt = getopt_long(argc, argvopt, \"p:P\",\n-\t\t\t\tlgopts, &option_index)) != EOF) {\n-\n-\t\tswitch (opt) {\n-\t\t/* portmask */\n-\t\tcase 'p':\n-\t\t\tenabled_port_mask = parse_portmask(optarg);\n-\t\t\tif (enabled_port_mask == 0) {\n-\t\t\t\tprintf(\"invalid portmask\\n\");\n-\t\t\t\tprint_usage(prgname);\n-\t\t\t\treturn -1;\n-\t\t\t}\n-\t\t\tbreak;\n-\n-\t\tcase 'P':\n-\t\t\tprintf(\"Promiscuous mode selected\\n\");\n-\t\t\tpromiscuous_on = 1;\n-\t\t\tbreak;\n-\n-\t\t/* long options */\n-\t\tcase OPT_RX_CONFIG_NUM:\n-\t\t\tret = parse_rx_config(optarg);\n-\t\t\tif (ret) {\n-\t\t\t\tprintf(\"invalid rx-config\\n\");\n-\t\t\t\tprint_usage(prgname);\n-\t\t\t\treturn -1;\n-\t\t\t}\n-\t\t\tbreak;\n-\n-\t\tcase OPT_TX_CONFIG_NUM:\n-\t\t\tret = parse_tx_config(optarg);\n-\t\t\tif (ret) {\n-\t\t\t\tprintf(\"invalid tx-config\\n\");\n-\t\t\t\tprint_usage(prgname);\n-\t\t\t\treturn -1;\n-\t\t\t}\n-\t\t\tbreak;\n-\n-#if (APP_CPU_LOAD > 0)\n-\t\tcase OPT_STAT_LCORE_NUM:\n-\t\t\tcpu_load_lcore_id = parse_stat_lcore(optarg);\n-\t\t\tbreak;\n-#endif\n-\n-\t\tcase OPT_ETH_DEST_NUM:\n-\t\t\tparse_eth_dest(optarg);\n-\t\t\tbreak;\n-\n-\t\tcase OPT_NO_NUMA_NUM:\n-\t\t\tprintf(\"numa is disabled\\n\");\n-\t\t\tnuma_on = 0;\n-\t\t\tbreak;\n-\n-#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)\n-\t\tcase OPT_IPV6_NUM:\n-\t\t\tprintf(\"ipv6 is specified\\n\");\n-\t\t\tipv6 = 1;\n-\t\t\tbreak;\n-#endif\n-\n-\t\tcase OPT_NO_LTHREADS_NUM:\n-\t\t\tprintf(\"l-threads model is disabled\\n\");\n-\t\t\tlthreads_on = 0;\n-\t\t\tbreak;\n-\n-\t\tcase OPT_PARSE_PTYPE_NUM:\n-\t\t\tprintf(\"software packet type parsing enabled\\n\");\n-\t\t\tparse_ptype_on = 1;\n-\t\t\tbreak;\n-\n-\t\tcase OPT_MAX_PKT_LEN_NUM:\n-\t\t\tmax_pkt_len = parse_max_pkt_len(optarg);\n-\t\t\tbreak;\n-\n-#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)\n-\t\tcase OPT_HASH_ENTRY_NUM_NUM:\n-\t\t\tret = parse_hash_entry_number(optarg);\n-\t\t\tif ((ret > 0) && (ret <= L3FWD_HASH_ENTRIES)) {\n-\t\t\t\thash_entry_number = ret;\n-\t\t\t} else {\n-\t\t\t\tprintf(\"invalid hash entry number\\n\");\n-\t\t\t\tprint_usage(prgname);\n-\t\t\t\treturn 
-1;\n-\t\t\t}\n-\t\t\tbreak;\n-#endif\n-\n-\t\tdefault:\n-\t\t\tprint_usage(prgname);\n-\t\t\treturn -1;\n-\t\t}\n-\t}\n-\n-\tif (optind >= 0)\n-\t\targv[optind-1] = prgname;\n-\n-\tret = optind-1;\n-\toptind = 1; /* reset getopt lib */\n-\treturn ret;\n-}\n-\n-static void\n-print_ethaddr(const char *name, const struct rte_ether_addr *eth_addr)\n-{\n-\tchar buf[RTE_ETHER_ADDR_FMT_SIZE];\n-\n-\trte_ether_format_addr(buf, RTE_ETHER_ADDR_FMT_SIZE, eth_addr);\n-\tprintf(\"%s%s\", name, buf);\n-}\n-\n-#if (APP_LOOKUP_METHOD == APP_LOOKUP_EXACT_MATCH)\n-\n-static void convert_ipv4_5tuple(struct ipv4_5tuple *key1,\n-\t\tunion ipv4_5tuple_host *key2)\n-{\n-\tkey2->ip_dst = rte_cpu_to_be_32(key1->ip_dst);\n-\tkey2->ip_src = rte_cpu_to_be_32(key1->ip_src);\n-\tkey2->port_dst = rte_cpu_to_be_16(key1->port_dst);\n-\tkey2->port_src = rte_cpu_to_be_16(key1->port_src);\n-\tkey2->proto = key1->proto;\n-\tkey2->pad0 = 0;\n-\tkey2->pad1 = 0;\n-}\n-\n-static void convert_ipv6_5tuple(struct ipv6_5tuple *key1,\n-\t\tunion ipv6_5tuple_host *key2)\n-{\n-\tuint32_t i;\n-\n-\tfor (i = 0; i < 16; i++) {\n-\t\tkey2->ip_dst[i] = key1->ip_dst[i];\n-\t\tkey2->ip_src[i] = key1->ip_src[i];\n-\t}\n-\tkey2->port_dst = rte_cpu_to_be_16(key1->port_dst);\n-\tkey2->port_src = rte_cpu_to_be_16(key1->port_src);\n-\tkey2->proto = key1->proto;\n-\tkey2->pad0 = 0;\n-\tkey2->pad1 = 0;\n-\tkey2->reserve = 0;\n-}\n-\n-#define BYTE_VALUE_MAX 256\n-#define ALL_32_BITS 0xffffffff\n-#define BIT_8_TO_15 0x0000ff00\n-static inline void\n-populate_ipv4_few_flow_into_table(const struct rte_hash *h)\n-{\n-\tuint32_t i;\n-\tint32_t ret;\n-\tuint32_t array_len = RTE_DIM(ipv4_l3fwd_route_array);\n-\n-\tmask0 = _mm_set_epi32(ALL_32_BITS, ALL_32_BITS, ALL_32_BITS, BIT_8_TO_15);\n-\tfor (i = 0; i < array_len; i++) {\n-\t\tstruct ipv4_l3fwd_route  entry;\n-\t\tunion ipv4_5tuple_host newkey;\n-\n-\t\tentry = ipv4_l3fwd_route_array[i];\n-\t\tconvert_ipv4_5tuple(&entry.key, &newkey);\n-\t\tret = rte_hash_add_key(h, (void *)&newkey);\n-\t\tif (ret < 0) {\n-\t\t\trte_exit(EXIT_FAILURE, \"Unable to add entry %\" PRIu32\n-\t\t\t\t\" to the l3fwd hash.\\n\", i);\n-\t\t}\n-\t\tipv4_l3fwd_out_if[ret] = entry.if_out;\n-\t}\n-\tprintf(\"Hash: Adding 0x%\" PRIx32 \" keys\\n\", array_len);\n-}\n-\n-#define BIT_16_TO_23 0x00ff0000\n-static inline void\n-populate_ipv6_few_flow_into_table(const struct rte_hash *h)\n-{\n-\tuint32_t i;\n-\tint32_t ret;\n-\tuint32_t array_len = RTE_DIM(ipv6_l3fwd_route_array);\n-\n-\tmask1 = _mm_set_epi32(ALL_32_BITS, ALL_32_BITS, ALL_32_BITS, BIT_16_TO_23);\n-\tmask2 = _mm_set_epi32(0, 0, ALL_32_BITS, ALL_32_BITS);\n-\tfor (i = 0; i < array_len; i++) {\n-\t\tstruct ipv6_l3fwd_route entry;\n-\t\tunion ipv6_5tuple_host newkey;\n-\n-\t\tentry = ipv6_l3fwd_route_array[i];\n-\t\tconvert_ipv6_5tuple(&entry.key, &newkey);\n-\t\tret = rte_hash_add_key(h, (void *)&newkey);\n-\t\tif (ret < 0) {\n-\t\t\trte_exit(EXIT_FAILURE, \"Unable to add entry %\" PRIu32\n-\t\t\t\t\" to the l3fwd hash.\\n\", i);\n-\t\t}\n-\t\tipv6_l3fwd_out_if[ret] = entry.if_out;\n-\t}\n-\tprintf(\"Hash: Adding 0x%\" PRIx32 \"keys\\n\", array_len);\n-}\n-\n-#define NUMBER_PORT_USED 4\n-static inline void\n-populate_ipv4_many_flow_into_table(const struct rte_hash *h,\n-\t\tunsigned int nr_flow)\n-{\n-\tunsigned i;\n-\n-\tmask0 = _mm_set_epi32(ALL_32_BITS, ALL_32_BITS, ALL_32_BITS, BIT_8_TO_15);\n-\n-\tfor (i = 0; i < nr_flow; i++) {\n-\t\tstruct ipv4_l3fwd_route entry;\n-\t\tunion ipv4_5tuple_host newkey;\n-\t\tuint8_t a = (uint8_t)((i / NUMBER_PORT_USED) % 
BYTE_VALUE_MAX);\n-\t\tuint8_t b = (uint8_t)(((i / NUMBER_PORT_USED) / BYTE_VALUE_MAX) %\n-\t\t\t\tBYTE_VALUE_MAX);\n-\t\tuint8_t c = (uint8_t)((i / NUMBER_PORT_USED) / (BYTE_VALUE_MAX *\n-\t\t\t\tBYTE_VALUE_MAX));\n-\t\t/* Create the ipv4 exact match flow */\n-\t\tmemset(&entry, 0, sizeof(entry));\n-\t\tswitch (i & (NUMBER_PORT_USED - 1)) {\n-\t\tcase 0:\n-\t\t\tentry = ipv4_l3fwd_route_array[0];\n-\t\t\tentry.key.ip_dst = RTE_IPV4(101, c, b, a);\n-\t\t\tbreak;\n-\t\tcase 1:\n-\t\t\tentry = ipv4_l3fwd_route_array[1];\n-\t\t\tentry.key.ip_dst = RTE_IPV4(201, c, b, a);\n-\t\t\tbreak;\n-\t\tcase 2:\n-\t\t\tentry = ipv4_l3fwd_route_array[2];\n-\t\t\tentry.key.ip_dst = RTE_IPV4(111, c, b, a);\n-\t\t\tbreak;\n-\t\tcase 3:\n-\t\t\tentry = ipv4_l3fwd_route_array[3];\n-\t\t\tentry.key.ip_dst = RTE_IPV4(211, c, b, a);\n-\t\t\tbreak;\n-\t\t};\n-\t\tconvert_ipv4_5tuple(&entry.key, &newkey);\n-\t\tint32_t ret = rte_hash_add_key(h, (void *)&newkey);\n-\n-\t\tif (ret < 0)\n-\t\t\trte_exit(EXIT_FAILURE, \"Unable to add entry %u\\n\", i);\n-\n-\t\tipv4_l3fwd_out_if[ret] = (uint8_t)entry.if_out;\n-\n-\t}\n-\tprintf(\"Hash: Adding 0x%x keys\\n\", nr_flow);\n-}\n-\n-static inline void\n-populate_ipv6_many_flow_into_table(const struct rte_hash *h,\n-\t\tunsigned int nr_flow)\n-{\n-\tunsigned i;\n-\n-\tmask1 = _mm_set_epi32(ALL_32_BITS, ALL_32_BITS, ALL_32_BITS, BIT_16_TO_23);\n-\tmask2 = _mm_set_epi32(0, 0, ALL_32_BITS, ALL_32_BITS);\n-\tfor (i = 0; i < nr_flow; i++) {\n-\t\tstruct ipv6_l3fwd_route entry;\n-\t\tunion ipv6_5tuple_host newkey;\n-\n-\t\tuint8_t a = (uint8_t) ((i / NUMBER_PORT_USED) % BYTE_VALUE_MAX);\n-\t\tuint8_t b = (uint8_t) (((i / NUMBER_PORT_USED) / BYTE_VALUE_MAX) %\n-\t\t\t\tBYTE_VALUE_MAX);\n-\t\tuint8_t c = (uint8_t) ((i / NUMBER_PORT_USED) / (BYTE_VALUE_MAX *\n-\t\t\t\tBYTE_VALUE_MAX));\n-\n-\t\t/* Create the ipv6 exact match flow */\n-\t\tmemset(&entry, 0, sizeof(entry));\n-\t\tswitch (i & (NUMBER_PORT_USED - 1)) {\n-\t\tcase 0:\n-\t\t\tentry = ipv6_l3fwd_route_array[0];\n-\t\t\tbreak;\n-\t\tcase 1:\n-\t\t\tentry = ipv6_l3fwd_route_array[1];\n-\t\t\tbreak;\n-\t\tcase 2:\n-\t\t\tentry = ipv6_l3fwd_route_array[2];\n-\t\t\tbreak;\n-\t\tcase 3:\n-\t\t\tentry = ipv6_l3fwd_route_array[3];\n-\t\t\tbreak;\n-\t\t};\n-\t\tentry.key.ip_dst[13] = c;\n-\t\tentry.key.ip_dst[14] = b;\n-\t\tentry.key.ip_dst[15] = a;\n-\t\tconvert_ipv6_5tuple(&entry.key, &newkey);\n-\t\tint32_t ret = rte_hash_add_key(h, (void *)&newkey);\n-\n-\t\tif (ret < 0)\n-\t\t\trte_exit(EXIT_FAILURE, \"Unable to add entry %u\\n\", i);\n-\n-\t\tipv6_l3fwd_out_if[ret] = (uint8_t) entry.if_out;\n-\n-\t}\n-\tprintf(\"Hash: Adding 0x%x keys\\n\", nr_flow);\n-}\n-\n-static void\n-setup_hash(int socketid)\n-{\n-\tstruct rte_hash_parameters ipv4_l3fwd_hash_params = {\n-\t\t.name = NULL,\n-\t\t.entries = L3FWD_HASH_ENTRIES,\n-\t\t.key_len = sizeof(union ipv4_5tuple_host),\n-\t\t.hash_func = ipv4_hash_crc,\n-\t\t.hash_func_init_val = 0,\n-\t};\n-\n-\tstruct rte_hash_parameters ipv6_l3fwd_hash_params = {\n-\t\t.name = NULL,\n-\t\t.entries = L3FWD_HASH_ENTRIES,\n-\t\t.key_len = sizeof(union ipv6_5tuple_host),\n-\t\t.hash_func = ipv6_hash_crc,\n-\t\t.hash_func_init_val = 0,\n-\t};\n-\n-\tchar s[64];\n-\n-\t/* create ipv4 hash */\n-\tsnprintf(s, sizeof(s), \"ipv4_l3fwd_hash_%d\", socketid);\n-\tipv4_l3fwd_hash_params.name = s;\n-\tipv4_l3fwd_hash_params.socket_id = socketid;\n-\tipv4_l3fwd_lookup_struct[socketid] =\n-\t\t\trte_hash_create(&ipv4_l3fwd_hash_params);\n-\tif (ipv4_l3fwd_lookup_struct[socketid] == 
NULL)\n-\t\trte_exit(EXIT_FAILURE, \"Unable to create the l3fwd hash on \"\n-\t\t\t\t\"socket %d\\n\", socketid);\n-\n-\t/* create ipv6 hash */\n-\tsnprintf(s, sizeof(s), \"ipv6_l3fwd_hash_%d\", socketid);\n-\tipv6_l3fwd_hash_params.name = s;\n-\tipv6_l3fwd_hash_params.socket_id = socketid;\n-\tipv6_l3fwd_lookup_struct[socketid] =\n-\t\t\trte_hash_create(&ipv6_l3fwd_hash_params);\n-\tif (ipv6_l3fwd_lookup_struct[socketid] == NULL)\n-\t\trte_exit(EXIT_FAILURE, \"Unable to create the l3fwd hash on \"\n-\t\t\t\t\"socket %d\\n\", socketid);\n-\n-\tif (hash_entry_number != HASH_ENTRY_NUMBER_DEFAULT) {\n-\t\t/* For testing hash matching with a large number of flows we\n-\t\t * generate millions of IP 5-tuples with an incremented dst\n-\t\t * address to initialize the hash table. */\n-\t\tif (ipv6 == 0) {\n-\t\t\t/* populate the ipv4 hash */\n-\t\t\tpopulate_ipv4_many_flow_into_table(\n-\t\t\t\tipv4_l3fwd_lookup_struct[socketid], hash_entry_number);\n-\t\t} else {\n-\t\t\t/* populate the ipv6 hash */\n-\t\t\tpopulate_ipv6_many_flow_into_table(\n-\t\t\t\tipv6_l3fwd_lookup_struct[socketid], hash_entry_number);\n-\t\t}\n-\t} else {\n-\t\t/* Use data in ipv4/ipv6 l3fwd lookup table directly to initialize\n-\t\t * the hash table */\n-\t\tif (ipv6 == 0) {\n-\t\t\t/* populate the ipv4 hash */\n-\t\t\tpopulate_ipv4_few_flow_into_table(\n-\t\t\t\t\tipv4_l3fwd_lookup_struct[socketid]);\n-\t\t} else {\n-\t\t\t/* populate the ipv6 hash */\n-\t\t\tpopulate_ipv6_few_flow_into_table(\n-\t\t\t\t\tipv6_l3fwd_lookup_struct[socketid]);\n-\t\t}\n-\t}\n-}\n-#endif\n-\n-#if (APP_LOOKUP_METHOD == APP_LOOKUP_LPM)\n-static void\n-setup_lpm(int socketid)\n-{\n-\tstruct rte_lpm6_config config;\n-\tstruct rte_lpm_config lpm_ipv4_config;\n-\tunsigned i;\n-\tint ret;\n-\tchar s[64];\n-\n-\t/* create the LPM table */\n-\tsnprintf(s, sizeof(s), \"IPV4_L3FWD_LPM_%d\", socketid);\n-\tlpm_ipv4_config.max_rules = IPV4_L3FWD_LPM_MAX_RULES;\n-\tlpm_ipv4_config.number_tbl8s = 256;\n-\tlpm_ipv4_config.flags = 0;\n-\tipv4_l3fwd_lookup_struct[socketid] =\n-\t\t\trte_lpm_create(s, socketid, &lpm_ipv4_config);\n-\tif (ipv4_l3fwd_lookup_struct[socketid] == NULL)\n-\t\trte_exit(EXIT_FAILURE, \"Unable to create the l3fwd LPM table\"\n-\t\t\t\t\" on socket %d\\n\", socketid);\n-\n-\t/* populate the LPM table */\n-\tfor (i = 0; i < IPV4_L3FWD_NUM_ROUTES; i++) {\n-\n-\t\t/* skip unused ports */\n-\t\tif ((1 << ipv4_l3fwd_route_array[i].if_out &\n-\t\t\t\tenabled_port_mask) == 0)\n-\t\t\tcontinue;\n-\n-\t\tret = rte_lpm_add(ipv4_l3fwd_lookup_struct[socketid],\n-\t\t\tipv4_l3fwd_route_array[i].ip,\n-\t\t\tipv4_l3fwd_route_array[i].depth,\n-\t\t\tipv4_l3fwd_route_array[i].if_out);\n-\n-\t\tif (ret < 0) {\n-\t\t\trte_exit(EXIT_FAILURE, \"Unable to add entry %u to the \"\n-\t\t\t\t\"l3fwd LPM table on socket %d\\n\",\n-\t\t\t\ti, socketid);\n-\t\t}\n-\n-\t\tprintf(\"LPM: Adding route 0x%08x / %d (%d)\\n\",\n-\t\t\t(unsigned)ipv4_l3fwd_route_array[i].ip,\n-\t\t\tipv4_l3fwd_route_array[i].depth,\n-\t\t\tipv4_l3fwd_route_array[i].if_out);\n-\t}\n-\n-\t/* create the LPM6 table */\n-\tsnprintf(s, sizeof(s), \"IPV6_L3FWD_LPM_%d\", socketid);\n-\n-\tconfig.max_rules = IPV6_L3FWD_LPM_MAX_RULES;\n-\tconfig.number_tbl8s = IPV6_L3FWD_LPM_NUMBER_TBL8S;\n-\tconfig.flags = 0;\n-\tipv6_l3fwd_lookup_struct[socketid] = rte_lpm6_create(s, socketid,\n-\t\t\t\t&config);\n-\tif (ipv6_l3fwd_lookup_struct[socketid] == NULL)\n-\t\trte_exit(EXIT_FAILURE, \"Unable to create the l3fwd LPM table\"\n-\t\t\t\t\" on socket %d\\n\", socketid);\n-\n-\t/* populate the LPM table 
*/\n-\tfor (i = 0; i < IPV6_L3FWD_NUM_ROUTES; i++) {\n-\n-\t\t/* skip unused ports */\n-\t\tif ((1 << ipv6_l3fwd_route_array[i].if_out &\n-\t\t\t\tenabled_port_mask) == 0)\n-\t\t\tcontinue;\n-\n-\t\tret = rte_lpm6_add(ipv6_l3fwd_lookup_struct[socketid],\n-\t\t\tipv6_l3fwd_route_array[i].ip,\n-\t\t\tipv6_l3fwd_route_array[i].depth,\n-\t\t\tipv6_l3fwd_route_array[i].if_out);\n-\n-\t\tif (ret < 0) {\n-\t\t\trte_exit(EXIT_FAILURE, \"Unable to add entry %u to the \"\n-\t\t\t\t\"l3fwd LPM table on socket %d\\n\",\n-\t\t\t\ti, socketid);\n-\t\t}\n-\n-\t\tprintf(\"LPM: Adding route %s / %d (%d)\\n\",\n-\t\t\t\"IPV6\",\n-\t\t\tipv6_l3fwd_route_array[i].depth,\n-\t\t\tipv6_l3fwd_route_array[i].if_out);\n-\t}\n-}\n-#endif\n-\n-static int\n-init_mem(unsigned nb_mbuf)\n-{\n-\tstruct lcore_conf *qconf;\n-\tint socketid;\n-\tunsigned lcore_id;\n-\tchar s[64];\n-\n-\tfor (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {\n-\t\tif (rte_lcore_is_enabled(lcore_id) == 0)\n-\t\t\tcontinue;\n-\n-\t\tif (numa_on)\n-\t\t\tsocketid = rte_lcore_to_socket_id(lcore_id);\n-\t\telse\n-\t\t\tsocketid = 0;\n-\n-\t\tif (socketid >= NB_SOCKETS) {\n-\t\t\trte_exit(EXIT_FAILURE, \"Socket %d of lcore %u is out of range %d\\n\",\n-\t\t\t\tsocketid, lcore_id, NB_SOCKETS);\n-\t\t}\n-\t\tif (pktmbuf_pool[socketid] == NULL) {\n-\t\t\tsnprintf(s, sizeof(s), \"mbuf_pool_%d\", socketid);\n-\t\t\tpktmbuf_pool[socketid] =\n-\t\t\t\trte_pktmbuf_pool_create(s, nb_mbuf,\n-\t\t\t\t\tMEMPOOL_CACHE_SIZE, 0,\n-\t\t\t\t\tRTE_MBUF_DEFAULT_BUF_SIZE, socketid);\n-\t\t\tif (pktmbuf_pool[socketid] == NULL)\n-\t\t\t\trte_exit(EXIT_FAILURE,\n-\t\t\t\t\t\t\"Cannot init mbuf pool on socket %d\\n\", socketid);\n-\t\t\telse\n-\t\t\t\tprintf(\"Allocated mbuf pool on socket %d\\n\", socketid);\n-\n-#if (APP_LOOKUP_METHOD == APP_LOOKUP_LPM)\n-\t\t\tsetup_lpm(socketid);\n-#else\n-\t\t\tsetup_hash(socketid);\n-#endif\n-\t\t}\n-\t\tqconf = &lcore_conf[lcore_id];\n-\t\tqconf->ipv4_lookup_struct = ipv4_l3fwd_lookup_struct[socketid];\n-\t\tqconf->ipv6_lookup_struct = ipv6_l3fwd_lookup_struct[socketid];\n-\t}\n-\treturn 0;\n-}\n-\n-/* Check the link status of all ports in up to 9s, and print them finally */\n-static void\n-check_all_ports_link_status(uint32_t port_mask)\n-{\n-#define CHECK_INTERVAL 100 /* 100ms */\n-#define MAX_CHECK_TIME 90 /* 9s (90 * 100ms) in total */\n-\tuint16_t portid;\n-\tuint8_t count, all_ports_up, print_flag = 0;\n-\tstruct rte_eth_link link;\n-\tint ret;\n-\tchar link_status_text[RTE_ETH_LINK_MAX_STR_LEN];\n-\n-\tprintf(\"\\nChecking link status\");\n-\tfflush(stdout);\n-\tfor (count = 0; count <= MAX_CHECK_TIME; count++) {\n-\t\tall_ports_up = 1;\n-\t\tRTE_ETH_FOREACH_DEV(portid) {\n-\t\t\tif ((port_mask & (1 << portid)) == 0)\n-\t\t\t\tcontinue;\n-\t\t\tmemset(&link, 0, sizeof(link));\n-\t\t\tret = rte_eth_link_get_nowait(portid, &link);\n-\t\t\tif (ret < 0) {\n-\t\t\t\tall_ports_up = 0;\n-\t\t\t\tif (print_flag == 1)\n-\t\t\t\t\tprintf(\"Port %u link get failed: %s\\n\",\n-\t\t\t\t\t\tportid, rte_strerror(-ret));\n-\t\t\t\tcontinue;\n-\t\t\t}\n-\t\t\t/* print link status if flag set */\n-\t\t\tif (print_flag == 1) {\n-\t\t\t\trte_eth_link_to_str(link_status_text,\n-\t\t\t\t\tsizeof(link_status_text), &link);\n-\t\t\t\tprintf(\"Port %d %s\\n\", portid,\n-\t\t\t\t\tlink_status_text);\n-\t\t\t\tcontinue;\n-\t\t\t}\n-\t\t\t/* clear all_ports_up flag if any link down */\n-\t\t\tif (link.link_status == RTE_ETH_LINK_DOWN) {\n-\t\t\t\tall_ports_up = 0;\n-\t\t\t\tbreak;\n-\t\t\t}\n-\t\t}\n-\t\t/* after finally printing all link status, 
get out */\n-\t\tif (print_flag == 1)\n-\t\t\tbreak;\n-\n-\t\tif (all_ports_up == 0) {\n-\t\t\tprintf(\".\");\n-\t\t\tfflush(stdout);\n-\t\t\trte_delay_ms(CHECK_INTERVAL);\n-\t\t}\n-\n-\t\t/* set the print_flag if all ports up or timeout */\n-\t\tif (all_ports_up == 1 || count == (MAX_CHECK_TIME - 1)) {\n-\t\t\tprint_flag = 1;\n-\t\t\tprintf(\"done\\n\");\n-\t\t}\n-\t}\n-}\n-\n-static uint32_t\n-eth_dev_get_overhead_len(uint32_t max_rx_pktlen, uint16_t max_mtu)\n-{\n-\tuint32_t overhead_len;\n-\n-\tif (max_mtu != UINT16_MAX && max_rx_pktlen > max_mtu)\n-\t\toverhead_len = max_rx_pktlen - max_mtu;\n-\telse\n-\t\toverhead_len = RTE_ETHER_HDR_LEN + RTE_ETHER_CRC_LEN;\n-\n-\treturn overhead_len;\n-}\n-\n-static int\n-config_port_max_pkt_len(struct rte_eth_conf *conf,\n-\t\tstruct rte_eth_dev_info *dev_info)\n-{\n-\tuint32_t overhead_len;\n-\n-\tif (max_pkt_len == 0)\n-\t\treturn 0;\n-\n-\tif (max_pkt_len < RTE_ETHER_MIN_LEN || max_pkt_len > MAX_JUMBO_PKT_LEN)\n-\t\treturn -1;\n-\n-\toverhead_len = eth_dev_get_overhead_len(dev_info->max_rx_pktlen,\n-\t\t\tdev_info->max_mtu);\n-\tconf->rxmode.mtu = max_pkt_len - overhead_len;\n-\n-\tif (conf->rxmode.mtu > RTE_ETHER_MTU)\n-\t\tconf->txmode.offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;\n-\n-\treturn 0;\n-}\n-\n-int\n-main(int argc, char **argv)\n-{\n-\tstruct rte_eth_dev_info dev_info;\n-\tstruct rte_eth_txconf *txconf;\n-\tint ret;\n-\tint i;\n-\tunsigned nb_ports;\n-\tuint16_t queueid, portid;\n-\tunsigned lcore_id;\n-\tuint32_t n_tx_queue, nb_lcores;\n-\tuint8_t nb_rx_queue, queue, socketid;\n-\n-\t/* init EAL */\n-\tret = rte_eal_init(argc, argv);\n-\tif (ret < 0)\n-\t\trte_exit(EXIT_FAILURE, \"Invalid EAL parameters\\n\");\n-\targc -= ret;\n-\targv += ret;\n-\n-\tret = rte_timer_subsystem_init();\n-\tif (ret < 0)\n-\t\trte_exit(EXIT_FAILURE, \"Failed to initialize timer subystem\\n\");\n-\n-\t/* pre-init dst MACs for all ports to 02:00:00:00:00:xx */\n-\tfor (portid = 0; portid < RTE_MAX_ETHPORTS; portid++) {\n-\t\tdest_eth_addr[portid] = RTE_ETHER_LOCAL_ADMIN_ADDR +\n-\t\t\t\t((uint64_t)portid << 40);\n-\t\t*(uint64_t *)(val_eth + portid) = dest_eth_addr[portid];\n-\t}\n-\n-\t/* parse application arguments (after the EAL ones) */\n-\tret = parse_args(argc, argv);\n-\tif (ret < 0)\n-\t\trte_exit(EXIT_FAILURE, \"Invalid L3FWD parameters\\n\");\n-\n-\tif (check_lcore_params() < 0)\n-\t\trte_exit(EXIT_FAILURE, \"check_lcore_params failed\\n\");\n-\n-\tprintf(\"Initializing rx-queues...\\n\");\n-\tret = init_rx_queues();\n-\tif (ret < 0)\n-\t\trte_exit(EXIT_FAILURE, \"init_rx_queues failed\\n\");\n-\n-\tprintf(\"Initializing tx-threads...\\n\");\n-\tret = init_tx_threads();\n-\tif (ret < 0)\n-\t\trte_exit(EXIT_FAILURE, \"init_tx_threads failed\\n\");\n-\n-\tprintf(\"Initializing rings...\\n\");\n-\tret = init_rx_rings();\n-\tif (ret < 0)\n-\t\trte_exit(EXIT_FAILURE, \"init_rx_rings failed\\n\");\n-\n-\tnb_ports = rte_eth_dev_count_avail();\n-\n-\tif (check_port_config() < 0)\n-\t\trte_exit(EXIT_FAILURE, \"check_port_config failed\\n\");\n-\n-\tnb_lcores = rte_lcore_count();\n-\n-\t/* initialize all ports */\n-\tRTE_ETH_FOREACH_DEV(portid) {\n-\t\tstruct rte_eth_conf local_port_conf = port_conf;\n-\n-\t\t/* skip ports that are not enabled */\n-\t\tif ((enabled_port_mask & (1 << portid)) == 0) {\n-\t\t\tprintf(\"\\nSkipping disabled port %d\\n\", portid);\n-\t\t\tcontinue;\n-\t\t}\n-\n-\t\t/* init port */\n-\t\tprintf(\"Initializing port %d ... 
\", portid);\n-\t\tfflush(stdout);\n-\n-\t\tnb_rx_queue = get_port_n_rx_queues(portid);\n-\t\tn_tx_queue = nb_lcores;\n-\t\tif (n_tx_queue > MAX_TX_QUEUE_PER_PORT)\n-\t\t\tn_tx_queue = MAX_TX_QUEUE_PER_PORT;\n-\t\tprintf(\"Creating queues: nb_rxq=%d nb_txq=%u... \",\n-\t\t\tnb_rx_queue, (unsigned)n_tx_queue);\n-\n-\t\tret = rte_eth_dev_info_get(portid, &dev_info);\n-\t\tif (ret != 0)\n-\t\t\trte_exit(EXIT_FAILURE,\n-\t\t\t\t\"Error during getting device (port %u) info: %s\\n\",\n-\t\t\t\tportid, strerror(-ret));\n-\n-\t\tret = config_port_max_pkt_len(&local_port_conf, &dev_info);\n-\t\tif (ret != 0)\n-\t\t\trte_exit(EXIT_FAILURE,\n-\t\t\t\t\"Invalid max packet length: %u (port %u)\\n\",\n-\t\t\t\tmax_pkt_len, portid);\n-\n-\t\tif (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)\n-\t\t\tlocal_port_conf.txmode.offloads |=\n-\t\t\t\tRTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;\n-\n-\t\tlocal_port_conf.rx_adv_conf.rss_conf.rss_hf &=\n-\t\t\tdev_info.flow_type_rss_offloads;\n-\t\tif (local_port_conf.rx_adv_conf.rss_conf.rss_hf !=\n-\t\t\t\tport_conf.rx_adv_conf.rss_conf.rss_hf) {\n-\t\t\tprintf(\"Port %u modified RSS hash function based on hardware support,\"\n-\t\t\t\t\"requested:%#\"PRIx64\" configured:%#\"PRIx64\"\\n\",\n-\t\t\t\tportid,\n-\t\t\t\tport_conf.rx_adv_conf.rss_conf.rss_hf,\n-\t\t\t\tlocal_port_conf.rx_adv_conf.rss_conf.rss_hf);\n-\t\t}\n-\n-\t\tret = rte_eth_dev_configure(portid, nb_rx_queue,\n-\t\t\t\t\t(uint16_t)n_tx_queue, &local_port_conf);\n-\t\tif (ret < 0)\n-\t\t\trte_exit(EXIT_FAILURE, \"Cannot configure device: err=%d, port=%d\\n\",\n-\t\t\t\tret, portid);\n-\n-\t\tret = rte_eth_dev_adjust_nb_rx_tx_desc(portid, &nb_rxd,\n-\t\t\t\t\t\t       &nb_txd);\n-\t\tif (ret < 0)\n-\t\t\trte_exit(EXIT_FAILURE,\n-\t\t\t\t \"rte_eth_dev_adjust_nb_rx_tx_desc: err=%d, port=%d\\n\",\n-\t\t\t\t ret, portid);\n-\n-\t\tret = rte_eth_macaddr_get(portid, &ports_eth_addr[portid]);\n-\t\tif (ret < 0)\n-\t\t\trte_exit(EXIT_FAILURE,\n-\t\t\t\t \"rte_eth_macaddr_get: err=%d, port=%d\\n\",\n-\t\t\t\t ret, portid);\n-\n-\t\tprint_ethaddr(\" Address:\", &ports_eth_addr[portid]);\n-\t\tprintf(\", \");\n-\t\tprint_ethaddr(\"Destination:\",\n-\t\t\t(const struct rte_ether_addr *)&dest_eth_addr[portid]);\n-\t\tprintf(\", \");\n-\n-\t\t/*\n-\t\t * prepare src MACs for each port.\n-\t\t */\n-\t\trte_ether_addr_copy(&ports_eth_addr[portid],\n-\t\t\t(struct rte_ether_addr *)(val_eth + portid) + 1);\n-\n-\t\t/* init memory */\n-\t\tret = init_mem(NB_MBUF);\n-\t\tif (ret < 0)\n-\t\t\trte_exit(EXIT_FAILURE, \"init_mem failed\\n\");\n-\n-\t\t/* init one TX queue per couple (lcore,port) */\n-\t\tqueueid = 0;\n-\t\tfor (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {\n-\t\t\tif (rte_lcore_is_enabled(lcore_id) == 0)\n-\t\t\t\tcontinue;\n-\n-\t\t\tif (numa_on)\n-\t\t\t\tsocketid = (uint8_t)rte_lcore_to_socket_id(lcore_id);\n-\t\t\telse\n-\t\t\t\tsocketid = 0;\n-\n-\t\t\tprintf(\"txq=%u,%d,%d \", lcore_id, queueid, socketid);\n-\t\t\tfflush(stdout);\n-\n-\t\t\ttxconf = &dev_info.default_txconf;\n-\t\t\ttxconf->offloads = local_port_conf.txmode.offloads;\n-\t\t\tret = rte_eth_tx_queue_setup(portid, queueid, nb_txd,\n-\t\t\t\t\t\t     socketid, txconf);\n-\t\t\tif (ret < 0)\n-\t\t\t\trte_exit(EXIT_FAILURE, \"rte_eth_tx_queue_setup: err=%d, \"\n-\t\t\t\t\t\"port=%d\\n\", ret, portid);\n-\n-\t\t\ttx_thread[lcore_id].tx_queue_id[portid] = queueid;\n-\t\t\tqueueid++;\n-\t\t}\n-\t\tprintf(\"\\n\");\n-\t}\n-\n-\tfor (i = 0; i < n_rx_thread; i++) {\n-\t\tlcore_id = rx_thread[i].conf.lcore_id;\n-\n-\t\tif 
(rte_lcore_is_enabled(lcore_id) == 0) {\n-\t\t\trte_exit(EXIT_FAILURE,\n-\t\t\t\t\t\"Cannot start Rx thread on lcore %u: lcore disabled\\n\",\n-\t\t\t\t\tlcore_id\n-\t\t\t\t);\n-\t\t}\n-\n-\t\tprintf(\"\\nInitializing rx queues for Rx thread %d on lcore %u ... \",\n-\t\t\t\ti, lcore_id);\n-\t\tfflush(stdout);\n-\n-\t\t/* init RX queues */\n-\t\tfor (queue = 0; queue < rx_thread[i].n_rx_queue; ++queue) {\n-\t\t\tstruct rte_eth_rxconf rxq_conf;\n-\n-\t\t\tportid = rx_thread[i].rx_queue_list[queue].port_id;\n-\t\t\tqueueid = rx_thread[i].rx_queue_list[queue].queue_id;\n-\n-\t\t\tif (numa_on)\n-\t\t\t\tsocketid = (uint8_t)rte_lcore_to_socket_id(lcore_id);\n-\t\t\telse\n-\t\t\t\tsocketid = 0;\n-\n-\t\t\tprintf(\"rxq=%d,%d,%d \", portid, queueid, socketid);\n-\t\t\tfflush(stdout);\n-\n-\t\t\tret = rte_eth_dev_info_get(portid, &dev_info);\n-\t\t\tif (ret != 0)\n-\t\t\t\trte_exit(EXIT_FAILURE,\n-\t\t\t\t\t\"Error during getting device (port %u) info: %s\\n\",\n-\t\t\t\t\tportid, strerror(-ret));\n-\n-\t\t\trxq_conf = dev_info.default_rxconf;\n-\t\t\trxq_conf.offloads = port_conf.rxmode.offloads;\n-\t\t\tret = rte_eth_rx_queue_setup(portid, queueid, nb_rxd,\n-\t\t\t\t\tsocketid,\n-\t\t\t\t\t&rxq_conf,\n-\t\t\t\t\tpktmbuf_pool[socketid]);\n-\t\t\tif (ret < 0)\n-\t\t\t\trte_exit(EXIT_FAILURE, \"rte_eth_rx_queue_setup: err=%d, \"\n-\t\t\t\t\t\t\"port=%d\\n\", ret, portid);\n-\t\t}\n-\t}\n-\n-\tprintf(\"\\n\");\n-\n-\t/* start ports */\n-\tRTE_ETH_FOREACH_DEV(portid) {\n-\t\tif ((enabled_port_mask & (1 << portid)) == 0)\n-\t\t\tcontinue;\n-\n-\t\t/* Start device */\n-\t\tret = rte_eth_dev_start(portid);\n-\t\tif (ret < 0)\n-\t\t\trte_exit(EXIT_FAILURE, \"rte_eth_dev_start: err=%d, port=%d\\n\",\n-\t\t\t\tret, portid);\n-\n-\t\t/*\n-\t\t * If enabled, put device in promiscuous mode.\n-\t\t * This allows IO forwarding mode to forward packets\n-\t\t * to itself through 2 cross-connected  ports of the\n-\t\t * target machine.\n-\t\t */\n-\t\tif (promiscuous_on) {\n-\t\t\tret = rte_eth_promiscuous_enable(portid);\n-\t\t\tif (ret != 0)\n-\t\t\t\trte_exit(EXIT_FAILURE,\n-\t\t\t\t\t\"rte_eth_promiscuous_enable: err=%s, port=%u\\n\",\n-\t\t\t\t\trte_strerror(-ret), portid);\n-\t\t}\n-\t}\n-\n-\tfor (i = 0; i < n_rx_thread; i++) {\n-\t\tlcore_id = rx_thread[i].conf.lcore_id;\n-\t\tif (rte_lcore_is_enabled(lcore_id) == 0)\n-\t\t\tcontinue;\n-\n-\t\t/* check if hw packet type is supported */\n-\t\tfor (queue = 0; queue < rx_thread[i].n_rx_queue; ++queue) {\n-\t\t\tportid = rx_thread[i].rx_queue_list[queue].port_id;\n-\t\t\tqueueid = rx_thread[i].rx_queue_list[queue].queue_id;\n-\n-\t\t\tif (parse_ptype_on) {\n-\t\t\t\tif (!rte_eth_add_rx_callback(portid, queueid,\n-\t\t\t\t\t\tcb_parse_ptype, NULL))\n-\t\t\t\t\trte_exit(EXIT_FAILURE,\n-\t\t\t\t\t\t\"Failed to add rx callback: \"\n-\t\t\t\t\t\t\"port=%d\\n\", portid);\n-\t\t\t} else if (!check_ptype(portid))\n-\t\t\t\trte_exit(EXIT_FAILURE,\n-\t\t\t\t\t\"Port %d cannot parse packet type.\\n\\n\"\n-\t\t\t\t\t\"Please add --parse-ptype to use sw \"\n-\t\t\t\t\t\"packet type analyzer.\\n\\n\",\n-\t\t\t\t\tportid);\n-\t\t}\n-\t}\n-\n-\tcheck_all_ports_link_status(enabled_port_mask);\n-\n-\tif (lthreads_on) {\n-\t\tprintf(\"Starting L-Threading Model\\n\");\n-\n-#if (APP_CPU_LOAD > 0)\n-\t\tif (cpu_load_lcore_id > 0)\n-\t\t\t/* Use one lcore for cpu load collector */\n-\t\t\tnb_lcores--;\n-#endif\n-\n-\t\tlthread_num_schedulers_set(nb_lcores);\n-\t\trte_eal_mp_remote_launch(sched_spawner, NULL, SKIP_MAIN);\n-\t\tlthread_main_spawner(NULL);\n-\n-\t} else 
{\n-\t\tprintf(\"Starting P-Threading Model\\n\");\n-\t\t/* launch per-lcore init on every lcore */\n-\t\trte_eal_mp_remote_launch(pthread_run, NULL, CALL_MAIN);\n-\t\tRTE_LCORE_FOREACH_WORKER(lcore_id) {\n-\t\t\tif (rte_eal_wait_lcore(lcore_id) < 0)\n-\t\t\t\treturn -1;\n-\t\t}\n-\t}\n-\n-\t/* clean up the EAL */\n-\trte_eal_cleanup();\n-\n-\treturn 0;\n-}\ndiff --git a/examples/performance-thread/l3fwd-thread/meson.build b/examples/performance-thread/l3fwd-thread/meson.build\ndeleted file mode 100644\nindex 58d4e96a45d7..000000000000\n--- a/examples/performance-thread/l3fwd-thread/meson.build\n+++ /dev/null\n@@ -1,32 +0,0 @@\n-# SPDX-License-Identifier: BSD-3-Clause\n-# Copyright(c) 2019 Intel Corporation\n-\n-# meson file, for building this example as part of a main DPDK build.\n-#\n-# To build this example as a standalone application with an already-installed\n-# DPDK instance, use 'make'\n-\n-build = dpdk_conf.has('RTE_ARCH_X86_64')\n-if not build\n-    subdir_done()\n-endif\n-\n-deps += ['timer', 'lpm']\n-allow_experimental_apis = true\n-\n-# get the performance thread (pt) architecture subdir\n-if dpdk_conf.has('RTE_ARCH_ARM64')\n-    pt_arch_dir = '../common/arch/arm64'\n-else\n-    pt_arch_dir = '../common/arch/x86'\n-endif\n-sources += files('main.c',\n-    '../common/lthread.c',\n-    '../common/lthread_cond.c',\n-    '../common/lthread_diag.c',\n-    '../common/lthread_mutex.c',\n-    '../common/lthread_sched.c',\n-    '../common/lthread_tls.c',\n-    pt_arch_dir + '/ctx.c')\n-\n-includes += include_directories('../common', pt_arch_dir)\ndiff --git a/examples/performance-thread/l3fwd-thread/test.sh b/examples/performance-thread/l3fwd-thread/test.sh\ndeleted file mode 100755\nindex 3dd33407ea41..000000000000\n--- a/examples/performance-thread/l3fwd-thread/test.sh\n+++ /dev/null\n@@ -1,150 +0,0 @@\n-#!/bin/bash\n-# SPDX-License-Identifier: BSD-3-Clause\n-\n-case \"$1\" in\n-\n-\t######################\n-\t# 1 L-core per pcore #\n-\t######################\n-\n-\t\"1.1\")\n-\t\techo \"1.1 1 L-core per pcore (N=2)\"\n-\n-\t\t./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \\\n-\t\t\t\t--max-pkt-len 1500  \\\n-\t\t\t\t--rx=\"(0,0,0,0)(1,0,0,0)\"          \\\n-\t\t\t\t--tx=\"(1,0)\"                       \\\n-\t\t\t\t--stat-lcore 2                     \\\n-\t\t\t\t--no-lthread\n-\n-\t\t;;\n-\n-\t\"1.2\")\n-\t\techo \"1.2 1 L-core per pcore (N=4)\"\n-\n-\t\t./build/l3fwd-thread -c ff -n 2 -- -P -p 3 \\\n-\t\t\t\t--max-pkt-len 1500  \\\n-\t\t\t\t--rx=\"(0,0,0,0)(1,0,1,1)\"          \\\n-\t\t\t\t--tx=\"(2,0)(3,1)\"                  \\\n-\t\t\t\t--stat-lcore 4                     \\\n-\t\t\t\t--no-lthread\n-\t\t;;\n-\n-\t\"1.3\")\n-\t\techo \"1.3 1 L-core per pcore (N=8)\"\n-\n-\t\t./build/l3fwd-thread -c 1ff -n 2 -- -P -p 3                          \\\n-\t\t\t\t--max-pkt-len 1500                            \\\n-\t\t\t\t--rx=\"(0,0,0,0)(0,1,1,1)(1,0,2,2)(1,1,3,3)\"                  \\\n-\t\t\t\t--tx=\"(4,0)(5,1)(6,2)(7,3)\"                                  \\\n-\t\t\t\t--stat-lcore 8                                               \\\n-\t\t\t\t--no-lthread\n-\t\t;;\n-\n-\t\"1.4\")\n-\t\techo \"1.3 1 L-core per pcore (N=16)\"\n-\n-\t\t./build/l3fwd-thread -c 3ffff -n 2 -- -P -p 3                          \\\n-\t\t\t\t--max-pkt-len 1500                              \\\n-\t\t\t\t--rx=\"(0,0,0,0)(0,1,1,1)(0,2,2,2)(0,3,3,3)(1,0,4,4)(1,1,5,5)(1,2,6,6)(1,3,7,7)\" \\\n-\t\t\t\t--tx=\"(8,0)(9,1)(10,2)(11,3)(12,4)(13,5)(14,6)(15,7)\"          \\\n-\t\t\t\t--stat-lcore 16                                    
            \\\n-\t\t\t\t--no-lthread\n-\t\t;;\n-\n-\n-\t######################\n-\t# N L-core per pcore #\n-\t######################\n-\n-\t\"2.1\")\n-\t\techo \"2.1 N L-core per pcore (N=2)\"\n-\n-\t\t./build/l3fwd-thread -c ff -n 2 --lcores=\"2,(0-1)@0\" -- -P -p 3 \\\n-\t\t\t\t--max-pkt-len 1500                       \\\n-\t\t\t\t--rx=\"(0,0,0,0)(1,0,0,0)\"                               \\\n-\t\t\t\t--tx=\"(1,0)\"                                            \\\n-\t\t\t\t--stat-lcore 2                                          \\\n-\t\t\t\t--no-lthread\n-\n-\t\t;;\n-\n-\t\"2.2\")\n-\t\techo \"2.2 N L-core per pcore (N=4)\"\n-\n-\t\t./build/l3fwd-thread -c ff -n 2 --lcores=\"(0-3)@0,4\" -- -P -p 3 \\\n-\t\t\t\t--max-pkt-len 1500  \\\n-\t\t\t\t--rx=\"(0,0,0,0)(1,0,1,1)\"          \\\n-\t\t\t\t--tx=\"(2,0)(3,1)\"                  \\\n-\t\t\t\t--stat-lcore 4                     \\\n-\t\t\t\t--no-lthread\n-\t\t;;\n-\n-\t\"2.3\")\n-\t\techo \"2.3 N L-core per pcore (N=8)\"\n-\n-\t\t./build/l3fwd-thread -c 3ffff -n 2 --lcores=\"(0-7)@0,8\" -- -P -p 3     \\\n-\t\t\t\t--max-pkt-len 1500                              \\\n-\t\t\t\t--rx=\"(0,0,0,0)(0,1,1,1)(1,0,2,2)(1,1,3,3)\"                    \\\n-\t\t\t\t--tx=\"(4,0)(5,1)(6,2)(7,3)\"                                    \\\n-\t\t\t\t--stat-lcore 8                                                 \\\n-\t\t\t\t--no-lthread\n-\t\t;;\n-\n-\t\"2.4\")\n-\t\techo \"2.3 N L-core per pcore (N=16)\"\n-\n-\t\t./build/l3fwd-thread -c 3ffff -n 2 --lcores=\"(0-15)@0,16\" -- -P -p 3   \\\n-\t\t\t\t--max-pkt-len 1500                              \\\n-\t\t\t\t--rx=\"(0,0,0,0)(0,1,1,1)(0,2,2,2)(0,3,3,3)(1,0,4,4)(1,1,5,5)(1,2,6,6)(1,3,7,7)\" \\\n-\t\t\t\t--tx=\"(8,0)(9,1)(10,2)(11,3)(12,4)(13,5)(14,6)(15,7)\"          \\\n-\t\t\t\t--stat-lcore 16                                                \\\n-\t\t\t\t--no-lthread\n-\t\t;;\n-\n-\n-\t#########################\n-\t# N L-threads per pcore #\n-\t#########################\n-\n-\t\"3.1\")\n-\t\techo \"3.1 N L-threads per pcore (N=2)\"\n-\n-\t\t./build/l3fwd-thread -c ff -n 2 -- -P -p 3  \\\n-\t\t\t\t--max-pkt-len 1500   \\\n-\t\t\t\t--rx=\"(0,0,0,0)(1,0,0,0)\"           \\\n-\t\t\t\t--tx=\"(0,0)\"                        \\\n-\t\t\t\t--stat-lcore 1\n-\t\t;;\n-\n-\t\"3.2\")\n-\t\techo \"3.2 N L-threads per pcore (N=4)\"\n-\n-\t\t./build/l3fwd-thread -c ff -n 2 -- -P -p 3  \\\n-\t\t\t\t--max-pkt-len 1500   \\\n-\t\t\t\t--rx=\"(0,0,0,0)(1,0,0,1)\"           \\\n-\t\t\t\t--tx=\"(0,0)(0,1)\"                   \\\n-\t\t\t\t--stat-lcore 1\n-\t\t;;\n-\n-\t\"3.3\")\n-\t\techo \"3.2 N L-threads per pcore (N=8)\"\n-\n-\t\t./build/l3fwd-thread -c ff -n 2 -- -P -p 3                             \\\n-\t\t\t\t--max-pkt-len 1500                              \\\n-\t\t\t\t--rx=\"(0,0,0,0)(0,1,0,1)(1,0,0,2)(1,1,0,3)\"                    \\\n-\t\t\t\t--tx=\"(0,0)(0,1)(0,2)(0,3)\"                                    \\\n-\t\t\t\t--stat-lcore 1\n-\t\t;;\n-\n-\t\"3.4\")\n-\t\techo \"3.2 N L-threads per pcore (N=16)\"\n-\n-\t\t./build/l3fwd-thread -c ff -n 2 -- -P -p 3                             \\\n-\t\t\t\t--max-pkt-len 1500                              \\\n-\t\t\t\t--rx=\"(0,0,0,0)(0,1,0,1)(0,2,0,2)(0,0,0,3)(1,0,0,4)(1,1,0,5)(1,2,0,6)(1,3,0,7)\" \\\n-\t\t\t\t--tx=\"(0,0)(0,1)(0,2)(0,3)(0,4)(0,5)(0,6)(0,7)\"                \\\n-\t\t\t\t--stat-lcore 1\n-\t\t;;\n-\n-esac\ndiff --git a/examples/performance-thread/pthread_shim/Makefile b/examples/performance-thread/pthread_shim/Makefile\ndeleted file mode 100644\nindex 
5acf74fff30c..000000000000\n--- a/examples/performance-thread/pthread_shim/Makefile\n+++ /dev/null\n@@ -1,63 +0,0 @@\n-# SPDX-License-Identifier: BSD-3-Clause\n-# Copyright(c) 2010-2020 Intel Corporation\n-\n-# binary name\n-APP = lthread_pthread_shim\n-\n-# all source are stored in SRCS-y\n-SRCS-y := main.c pthread_shim.c\n-\n-include ../common/common.mk\n-\n-ifeq ($(MAKECMDGOALS),static)\n-# check for broken pkg-config\n-ifeq ($(shell echo $(LDFLAGS_STATIC) | grep 'whole-archive.*l:lib.*no-whole-archive'),)\n-$(warning \"pkg-config output list does not contain drivers between 'whole-archive'/'no-whole-archive' flags.\")\n-$(error \"Cannot generate statically-linked binaries with this version of pkg-config\")\n-endif\n-endif\n-\n-CFLAGS += -DALLOW_EXPERIMENTAL_API\n-CFLAGS += -D_GNU_SOURCE\n-LDFLAGS += \"-Wl,--copy-dt-needed-entries\"\n-\n-PKGCONF ?= pkg-config\n-\n-# Build using pkg-config variables if possible\n-ifneq ($(shell $(PKGCONF) --exists libdpdk && echo 0),0)\n-$(error \"no installation of DPDK found\")\n-endif\n-\n-all: shared\n-.PHONY: shared static\n-shared: build/$(APP)-shared\n-\tln -sf $(APP)-shared build/$(APP)\n-static: build/$(APP)-static\n-\tln -sf $(APP)-static build/$(APP)\n-\n-LDFLAGS += -lpthread\n-\n-PC_FILE := $(shell $(PKGCONF) --path libdpdk 2>/dev/null)\n-CFLAGS += -O3 $(shell $(PKGCONF) --cflags libdpdk)\n-LDFLAGS_SHARED = $(shell $(PKGCONF) --libs libdpdk)\n-LDFLAGS_STATIC = $(shell $(PKGCONF) --static --libs libdpdk)\n-\n-build/$(APP)-shared: $(SRCS-y) Makefile $(PC_FILE) | build\n-\t$(CC) $(CFLAGS) $(filter %.c,$^) -o $@ $(LDFLAGS) $(LDFLAGS_SHARED)\n-\n-build/$(APP)-static: $(SRCS-y) Makefile $(PC_FILE) | build\n-\t$(CC) $(CFLAGS) $(filter %.c,$^) -o $@ $(LDFLAGS) $(LDFLAGS_SHARED)\n-\n-# workaround for a gcc bug with noreturn attribute\n-# http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12603\n-ifeq ($(shell gcc -dumpversion),-gt 0)\n-CFLAGS_main.o += -Wno-return-type\n-endif\n-\n-build:\n-\t@mkdir -p $@\n-\n-.PHONY: clean\n-clean:\n-\trm -f build/$(APP) build/$(APP)-static build/$(APP)-shared\n-\ttest -d build && rmdir -p build || true\ndiff --git a/examples/performance-thread/pthread_shim/main.c b/examples/performance-thread/pthread_shim/main.c\ndeleted file mode 100644\nindex 7ce6cfb0c8c0..000000000000\n--- a/examples/performance-thread/pthread_shim/main.c\n+++ /dev/null\n@@ -1,271 +0,0 @@\n-/* SPDX-License-Identifier: BSD-3-Clause\n- * Copyright(c) 2015 Intel Corporation\n- */\n-\n-#include <stdio.h>\n-#include <stdlib.h>\n-#include <stdint.h>\n-#include <inttypes.h>\n-#include <sys/types.h>\n-#include <string.h>\n-#include <sys/queue.h>\n-#include <stdarg.h>\n-#include <errno.h>\n-#include <getopt.h>\n-#include <unistd.h>\n-#include <sched.h>\n-#include <pthread.h>\n-\n-#include <rte_common.h>\n-#include <rte_lcore.h>\n-#include <rte_per_lcore.h>\n-#include <rte_timer.h>\n-\n-#include \"lthread_api.h\"\n-#include \"lthread_diag_api.h\"\n-#include \"pthread_shim.h\"\n-\n-#define DEBUG_APP 0\n-#define HELLOW_WORLD_MAX_LTHREADS 10\n-#define THREAD_NAME_LEN\t16\n-\n-#ifndef __GLIBC__ /* sched_getcpu() is glibc-specific */\n-#define sched_getcpu() rte_lcore_id()\n-#endif\n-\n-__thread int print_count;\n-__thread pthread_mutex_t print_lock;\n-\n-__thread pthread_mutex_t exit_lock;\n-__thread pthread_cond_t exit_cond;\n-\n-/*\n- * A simple thread that demonstrates use of a mutex, a condition\n- * variable, thread local storage, explicit yield, and thread exit.\n- *\n- * The thread uses a mutex to protect a shared counter which is incremented\n- * and then it 
waits on condition variable before exiting.\n- *\n- * The thread argument is stored in and retrieved from TLS, using\n- * the pthread key create, get and set specific APIs.\n- *\n- * The thread yields while holding the mutex, to provide opportunity\n- * for other threads to contend.\n- *\n- * All of the pthread API functions used by this thread are actually\n- * resolved to corresponding lthread functions by the pthread shim\n- * implemented in pthread_shim.c\n- */\n-void *helloworld_pthread(void *arg);\n-void *helloworld_pthread(void *arg)\n-{\n-\tpthread_key_t key;\n-\n-\t/* create a key for TLS */\n-\tpthread_key_create(&key, NULL);\n-\n-\t/* store the arg in TLS */\n-\tpthread_setspecific(key, arg);\n-\n-\t/* grab lock and increment shared counter */\n-\tpthread_mutex_lock(&print_lock);\n-\tprint_count++;\n-\n-\t/* yield thread to give opportunity for lock contention */\n-\tsched_yield();\n-\n-\t/* retrieve arg from TLS */\n-\tuint64_t thread_no = (uint64_t) pthread_getspecific(key);\n-\n-\tprintf(\"Hello - lcore = %d count = %d thread_no = %d thread_id = %p\\n\",\n-\t\t\tsched_getcpu(),\n-\t\t\tprint_count,\n-\t\t\t(int) thread_no,\n-\t\t\t(void *)pthread_self());\n-\n-\t/* release the lock */\n-\tpthread_mutex_unlock(&print_lock);\n-\n-\t/*\n-\t * wait on condition variable\n-\t * before exiting\n-\t */\n-\tpthread_mutex_lock(&exit_lock);\n-\tpthread_cond_wait(&exit_cond, &exit_lock);\n-\tpthread_mutex_unlock(&exit_lock);\n-\n-\t/* exit */\n-\tpthread_exit((void *) thread_no);\n-}\n-\n-\n-/*\n- * This is the initial thread\n- *\n- * It demonstrates pthread, mutex and condition variable creation,\n- * broadcast and pthread join APIs.\n- *\n- * This initial thread must always start life as an lthread.\n- *\n- * This thread creates many more threads then waits a short time\n- * before signalling them to exit using a broadcast.\n- *\n- * All of the pthread API functions used by this thread are actually\n- * resolved to corresponding lthread functions by the pthread shim\n- * implemented in pthread_shim.c\n- *\n- * After all threads have finished the lthread scheduler is shutdown\n- * and normal pthread operation is restored\n- */\n-__thread pthread_t tid[HELLOW_WORLD_MAX_LTHREADS];\n-\n-static void *initial_lthread(void *args __rte_unused)\n-{\n-\tint lcore = (int) rte_lcore_id();\n-\t/*\n-\t *\n-\t * We can now enable pthread API override\n-\t * and start to use the pthread APIs\n-\t */\n-\tpthread_override_set(1);\n-\n-\tuint64_t i;\n-\tint ret;\n-\n-\t/* initialize mutex for shared counter */\n-\tprint_count = 0;\n-\tpthread_mutex_init(&print_lock, NULL);\n-\n-\t/* initialize mutex and condition variable controlling thread exit */\n-\tpthread_mutex_init(&exit_lock, NULL);\n-\tpthread_cond_init(&exit_cond, NULL);\n-\n-\t/* spawn a number of threads */\n-\tfor (i = 0; i < HELLOW_WORLD_MAX_LTHREADS; i++) {\n-\n-\t\t/*\n-\t\t * Not strictly necessary but\n-\t\t * for the sake of this example\n-\t\t * use an attribute to pass the desired lcore\n-\t\t */\n-\t\tpthread_attr_t attr;\n-\t\trte_cpuset_t cpuset;\n-\t\tchar name[THREAD_NAME_LEN];\n-\n-\t\tCPU_ZERO(&cpuset);\n-\t\tCPU_SET(lcore, &cpuset);\n-\t\tpthread_attr_init(&attr);\n-\t\tpthread_attr_setaffinity_np(&attr, sizeof(rte_cpuset_t), &cpuset);\n-\n-\t\t/* create the thread */\n-\t\tret = pthread_create(&tid[i], &attr,\n-\t\t\t\thelloworld_pthread, (void *) i);\n-\t\tif (ret != 0)\n-\t\t\trte_exit(EXIT_FAILURE, \"Cannot create helloworld thread\\n\");\n-\n-\t\tsnprintf(name, sizeof(name), \"helloworld-%u\", 
(uint32_t)i);\n-\t\trte_thread_setname(tid[i], name);\n-\t}\n-\n-\t/* wait for 1s to allow threads\n-\t * to block on the condition variable\n-\t * N.B. nanosleep() is resolved to lthread_sleep()\n-\t * by the shim.\n-\t */\n-\tstruct timespec time;\n-\n-\ttime.tv_sec = 1;\n-\ttime.tv_nsec = 0;\n-\tnanosleep(&time, NULL);\n-\n-\t/* wake up all the threads */\n-\tpthread_cond_broadcast(&exit_cond);\n-\n-\t/* wait for them to finish */\n-\tfor (i = 0; i < HELLOW_WORLD_MAX_LTHREADS; i++) {\n-\n-\t\tuint64_t thread_no;\n-\n-\t\tpthread_join(tid[i], (void *) &thread_no);\n-\t\tif (thread_no != i)\n-\t\t\tprintf(\"error on thread exit\\n\");\n-\t}\n-\n-\tpthread_cond_destroy(&exit_cond);\n-\tpthread_mutex_destroy(&print_lock);\n-\tpthread_mutex_destroy(&exit_lock);\n-\n-\t/* shutdown the lthread scheduler */\n-\tlthread_scheduler_shutdown(rte_lcore_id());\n-\tlthread_detach();\n-\treturn NULL;\n-}\n-\n-\n-\n-/* This thread creates a single initial lthread\n- * and then runs the scheduler\n- * An instance of this thread is created on each thread\n- * in the core mask\n- */\n-static int\n-lthread_scheduler(void *args __rte_unused)\n-{\n-\t/* create initial thread  */\n-\tstruct lthread *lt;\n-\n-\tlthread_create(&lt, -1, initial_lthread, (void *) NULL);\n-\n-\t/* run the lthread scheduler */\n-\tlthread_run();\n-\n-\t/* restore genuine pthread operation */\n-\tpthread_override_set(0);\n-\treturn 0;\n-}\n-\n-int main(int argc, char **argv)\n-{\n-\tint num_sched = 0;\n-\n-\t/* basic DPDK initialization is all that is necessary to run lthreads*/\n-\tint ret = rte_eal_init(argc, argv);\n-\n-\tif (ret < 0)\n-\t\trte_exit(EXIT_FAILURE, \"Invalid EAL parameters\\n\");\n-\n-\t/* enable timer subsystem */\n-\trte_timer_subsystem_init();\n-\n-#if DEBUG_APP\n-\tlthread_diagnostic_set_mask(LT_DIAG_ALL);\n-#endif\n-\n-\t/* create a scheduler on every core in the core mask\n-\t * and launch an initial lthread that will spawn many more.\n-\t */\n-\tunsigned lcore_id;\n-\n-\tfor (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) {\n-\t\tif (rte_lcore_is_enabled(lcore_id))\n-\t\t\tnum_sched++;\n-\t}\n-\n-\t/* set the number of schedulers, this forces all schedulers synchronize\n-\t * before entering their main loop\n-\t */\n-\tlthread_num_schedulers_set(num_sched);\n-\n-\t/* launch all threads */\n-\trte_eal_mp_remote_launch(lthread_scheduler, (void *)NULL, CALL_MAIN);\n-\n-\t/* wait for threads to stop */\n-\tRTE_LCORE_FOREACH_WORKER(lcore_id) {\n-\t\trte_eal_wait_lcore(lcore_id);\n-\t}\n-\n-\t/* clean up the EAL */\n-\trte_eal_cleanup();\n-\n-\treturn 0;\n-}\ndiff --git a/examples/performance-thread/pthread_shim/meson.build b/examples/performance-thread/pthread_shim/meson.build\ndeleted file mode 100644\nindex 866e6c94e891..000000000000\n--- a/examples/performance-thread/pthread_shim/meson.build\n+++ /dev/null\n@@ -1,33 +0,0 @@\n-# SPDX-License-Identifier: BSD-3-Clause\n-# Copyright(c) 2019 Intel Corporation\n-\n-# meson file, for building this example as part of a main DPDK build.\n-#\n-# To build this example as a standalone application with an already-installed\n-# DPDK instance, use 'make'\n-\n-build = dpdk_conf.has('RTE_ARCH_X86_64') or dpdk_conf.has('RTE_ARCH_ARM64')\n-if not build\n-    subdir_done()\n-endif\n-\n-deps += ['timer']\n-allow_experimental_apis = true\n-\n-# get the performance thread (pt) architecture subdir\n-if dpdk_conf.has('RTE_ARCH_ARM64')\n-    pt_arch_dir = '../common/arch/arm64'\n-else\n-    pt_arch_dir = '../common/arch/x86'\n-endif\n-sources += files('main.c',\n-    
'pthread_shim.c',\n-    '../common/lthread.c',\n-    '../common/lthread_cond.c',\n-    '../common/lthread_diag.c',\n-    '../common/lthread_mutex.c',\n-    '../common/lthread_sched.c',\n-    '../common/lthread_tls.c',\n-    pt_arch_dir + '/ctx.c')\n-\n-includes += include_directories('../common', pt_arch_dir)\ndiff --git a/examples/performance-thread/pthread_shim/pthread_shim.c b/examples/performance-thread/pthread_shim/pthread_shim.c\ndeleted file mode 100644\nindex bbc076584b0e..000000000000\n--- a/examples/performance-thread/pthread_shim/pthread_shim.c\n+++ /dev/null\n@@ -1,713 +0,0 @@\n-/* SPDX-License-Identifier: BSD-3-Clause\n- * Copyright(c) 2015 Intel Corporation\n- */\n-\n-#include <stdio.h>\n-#include <stdlib.h>\n-#include <sys/types.h>\n-#include <errno.h>\n-#include <sched.h>\n-#include <dlfcn.h>\n-\n-#include <rte_log.h>\n-\n-#include \"lthread_api.h\"\n-#include \"pthread_shim.h\"\n-\n-#define RTE_LOGTYPE_PTHREAD_SHIM RTE_LOGTYPE_USER3\n-\n-#define POSIX_ERRNO(x)  (x)\n-\n-/* some releases of FreeBSD 10, e.g. 10.0, don't have CPU_COUNT macro */\n-#ifndef CPU_COUNT\n-#define CPU_COUNT(x) __cpu_count(x)\n-\n-static inline unsigned int\n-__cpu_count(const rte_cpuset_t *cpuset)\n-{\n-\tunsigned int i, count = 0;\n-\tfor (i = 0; i < RTE_MAX_LCORE; i++)\n-\t\tif (CPU_ISSET(i, cpuset))\n-\t\t\tcount++;\n-\treturn count;\n-}\n-#endif\n-\n-/*\n- * this flag determines at run time if we override pthread\n- * calls and map then to equivalent lthread calls\n- * or of we call the standard pthread function\n- */\n-static __thread int override;\n-\n-\n-/*\n- * this structures contains function pointers that will be\n- * initialised to the loaded address of the real\n- * pthread library API functions\n- */\n-struct pthread_lib_funcs {\n-int (*f_pthread_barrier_destroy)\n-\t(pthread_barrier_t *);\n-int (*f_pthread_barrier_init)\n-\t(pthread_barrier_t *, const pthread_barrierattr_t *, unsigned);\n-int (*f_pthread_barrier_wait)\n-\t(pthread_barrier_t *);\n-int (*f_pthread_cond_broadcast)\n-\t(pthread_cond_t *);\n-int (*f_pthread_cond_destroy)\n-\t(pthread_cond_t *);\n-int (*f_pthread_cond_init)\n-\t(pthread_cond_t *, const pthread_condattr_t *);\n-int (*f_pthread_cond_signal)\n-\t(pthread_cond_t *);\n-int (*f_pthread_cond_timedwait)\n-\t(pthread_cond_t *, pthread_mutex_t *, const struct timespec *);\n-int (*f_pthread_cond_wait)\n-\t(pthread_cond_t *, pthread_mutex_t *);\n-int (*f_pthread_create)\n-\t(pthread_t *, const pthread_attr_t *, void *(*)(void *), void *);\n-int (*f_pthread_detach)\n-\t(pthread_t);\n-int (*f_pthread_equal)\n-\t(pthread_t, pthread_t);\n-void (*f_pthread_exit)\n-\t(void *);\n-void * (*f_pthread_getspecific)\n-\t(pthread_key_t);\n-int (*f_pthread_getcpuclockid)\n-\t(pthread_t, clockid_t *);\n-int (*f_pthread_join)\n-\t(pthread_t, void **);\n-int (*f_pthread_key_create)\n-\t(pthread_key_t *, void (*) (void *));\n-int (*f_pthread_key_delete)\n-\t(pthread_key_t);\n-int (*f_pthread_mutex_destroy)\n-\t(pthread_mutex_t *__mutex);\n-int (*f_pthread_mutex_init)\n-\t(pthread_mutex_t *__mutex, const pthread_mutexattr_t *);\n-int (*f_pthread_mutex_lock)\n-\t(pthread_mutex_t *__mutex);\n-int (*f_pthread_mutex_trylock)\n-\t(pthread_mutex_t *__mutex);\n-int (*f_pthread_mutex_timedlock)\n-\t(pthread_mutex_t *__mutex, const struct timespec *);\n-int (*f_pthread_mutex_unlock)\n-\t(pthread_mutex_t *__mutex);\n-int (*f_pthread_once)\n-\t(pthread_once_t *, void (*) (void));\n-int (*f_pthread_rwlock_destroy)\n-\t(pthread_rwlock_t *__rwlock);\n-int (*f_pthread_rwlock_init)\n-\t(pthread_rwlock_t 
*__rwlock, const pthread_rwlockattr_t *);\n-int (*f_pthread_rwlock_rdlock)\n-\t(pthread_rwlock_t *__rwlock);\n-int (*f_pthread_rwlock_timedrdlock)\n-\t(pthread_rwlock_t *__rwlock, const struct timespec *);\n-int (*f_pthread_rwlock_timedwrlock)\n-\t(pthread_rwlock_t *__rwlock, const struct timespec *);\n-int (*f_pthread_rwlock_tryrdlock)\n-\t(pthread_rwlock_t *__rwlock);\n-int (*f_pthread_rwlock_trywrlock)\n-\t(pthread_rwlock_t *__rwlock);\n-int (*f_pthread_rwlock_unlock)\n-\t(pthread_rwlock_t *__rwlock);\n-int (*f_pthread_rwlock_wrlock)\n-\t(pthread_rwlock_t *__rwlock);\n-pthread_t (*f_pthread_self)\n-\t(void);\n-int (*f_pthread_setspecific)\n-\t(pthread_key_t, const void *);\n-int (*f_pthread_spin_init)\n-\t(pthread_spinlock_t *__spin, int);\n-int (*f_pthread_spin_destroy)\n-\t(pthread_spinlock_t *__spin);\n-int (*f_pthread_spin_lock)\n-\t(pthread_spinlock_t *__spin);\n-int (*f_pthread_spin_trylock)\n-\t(pthread_spinlock_t *__spin);\n-int (*f_pthread_spin_unlock)\n-\t(pthread_spinlock_t *__spin);\n-int (*f_pthread_cancel)\n-\t(pthread_t);\n-int (*f_pthread_setcancelstate)\n-\t(int, int *);\n-int (*f_pthread_setcanceltype)\n-\t(int, int *);\n-void (*f_pthread_testcancel)\n-\t(void);\n-int (*f_pthread_getschedparam)\n-\t(pthread_t pthread, int *, struct sched_param *);\n-int (*f_pthread_setschedparam)\n-\t(pthread_t, int, const struct sched_param *);\n-int (*f_pthread_yield)\n-\t(void);\n-int (*f_pthread_setaffinity_np)\n-\t(pthread_t thread, size_t cpusetsize, const rte_cpuset_t *cpuset);\n-int (*f_nanosleep)\n-\t(const struct timespec *req, struct timespec *rem);\n-} _sys_pthread_funcs = {\n-\t.f_pthread_barrier_destroy = NULL,\n-};\n-\n-\n-/*\n- * this macro obtains the loaded address of a library function\n- * and saves it.\n- */\n-static void *__libc_dl_handle = RTLD_NEXT;\n-\n-#define get_addr_of_loaded_symbol(name) do {\t\t\t\t\\\n-\tchar *error_str;\t\t\t\t\t\t\\\n-\t_sys_pthread_funcs.f_##name = dlsym(__libc_dl_handle, (#name));\t\\\n-\terror_str = dlerror();\t\t\t\t\t\t\\\n-\tif (error_str != NULL) {\t\t\t\t\t\\\n-\t\tfprintf(stderr, \"%s\\n\", error_str);\t\t\t\\\n-\t}\t\t\t\t\t\t\t\t\\\n-} while (0)\n-\n-\n-/*\n- * The constructor function initialises the\n- * function pointers for pthread library functions\n- */\n-RTE_INIT(pthread_intercept_ctor)\n-{\n-\toverride = 0;\n-\t/*\n-\t * Get the original functions\n-\t 
*/\n-\tget_addr_of_loaded_symbol(pthread_barrier_destroy);\n-\tget_addr_of_loaded_symbol(pthread_barrier_init);\n-\tget_addr_of_loaded_symbol(pthread_barrier_wait);\n-\tget_addr_of_loaded_symbol(pthread_cond_broadcast);\n-\tget_addr_of_loaded_symbol(pthread_cond_destroy);\n-\tget_addr_of_loaded_symbol(pthread_cond_init);\n-\tget_addr_of_loaded_symbol(pthread_cond_signal);\n-\tget_addr_of_loaded_symbol(pthread_cond_timedwait);\n-\tget_addr_of_loaded_symbol(pthread_cond_wait);\n-\tget_addr_of_loaded_symbol(pthread_create);\n-\tget_addr_of_loaded_symbol(pthread_detach);\n-\tget_addr_of_loaded_symbol(pthread_equal);\n-\tget_addr_of_loaded_symbol(pthread_exit);\n-\tget_addr_of_loaded_symbol(pthread_getspecific);\n-\tget_addr_of_loaded_symbol(pthread_getcpuclockid);\n-\tget_addr_of_loaded_symbol(pthread_join);\n-\tget_addr_of_loaded_symbol(pthread_key_create);\n-\tget_addr_of_loaded_symbol(pthread_key_delete);\n-\tget_addr_of_loaded_symbol(pthread_mutex_destroy);\n-\tget_addr_of_loaded_symbol(pthread_mutex_init);\n-\tget_addr_of_loaded_symbol(pthread_mutex_lock);\n-\tget_addr_of_loaded_symbol(pthread_mutex_trylock);\n-\tget_addr_of_loaded_symbol(pthread_mutex_timedlock);\n-\tget_addr_of_loaded_symbol(pthread_mutex_unlock);\n-\tget_addr_of_loaded_symbol(pthread_once);\n-\tget_addr_of_loaded_symbol(pthread_rwlock_destroy);\n-\tget_addr_of_loaded_symbol(pthread_rwlock_init);\n-\tget_addr_of_loaded_symbol(pthread_rwlock_rdlock);\n-\tget_addr_of_loaded_symbol(pthread_rwlock_timedrdlock);\n-\tget_addr_of_loaded_symbol(pthread_rwlock_timedwrlock);\n-\tget_addr_of_loaded_symbol(pthread_rwlock_tryrdlock);\n-\tget_addr_of_loaded_symbol(pthread_rwlock_trywrlock);\n-\tget_addr_of_loaded_symbol(pthread_rwlock_unlock);\n-\tget_addr_of_loaded_symbol(pthread_rwlock_wrlock);\n-\tget_addr_of_loaded_symbol(pthread_self);\n-\tget_addr_of_loaded_symbol(pthread_setspecific);\n-\tget_addr_of_loaded_symbol(pthread_spin_init);\n-\tget_addr_of_loaded_symbol(pthread_spin_destroy);\n-\tget_addr_of_loaded_symbol(pthread_spin_lock);\n-\tget_addr_of_loaded_symbol(pthread_spin_trylock);\n-\tget_addr_of_loaded_symbol(pthread_spin_unlock);\n-\tget_addr_of_loaded_symbol(pthread_cancel);\n-\tget_addr_of_loaded_symbol(pthread_setcancelstate);\n-\tget_addr_of_loaded_symbol(pthread_setcanceltype);\n-\tget_addr_of_loaded_symbol(pthread_testcancel);\n-\tget_addr_of_loaded_symbol(pthread_getschedparam);\n-\tget_addr_of_loaded_symbol(pthread_setschedparam);\n-\tget_addr_of_loaded_symbol(pthread_yield);\n-\tget_addr_of_loaded_symbol(pthread_setaffinity_np);\n-\tget_addr_of_loaded_symbol(nanosleep);\n-}\n-\n-\n-/*\n- * Enable/Disable pthread override\n- * state\n- *  0 disable\n- *  1 enable\n- */\n-void pthread_override_set(int state)\n-{\n-\toverride = state;\n-}\n-\n-\n-/*\n- * Return pthread override state\n- * return\n- *  0 disable\n- *  1 enable\n- */\n-int pthread_override_get(void)\n-{\n-\treturn override;\n-}\n-\n-/*\n- * This macro is used to catch and log\n- * invocation of stubs for unimplemented pthread\n- * API functions.\n- */\n-#define NOT_IMPLEMENTED do {\t\t\t\t\\\n-\tif (override) {\t\t\t\t\t\\\n-\t\tRTE_LOG(WARNING,\t\t\t\\\n-\t\t\tPTHREAD_SHIM,\t\t\t\\\n-\t\t\t\"WARNING %s NOT IMPLEMENTED\\n\",\t\\\n-\t\t\t__func__);\t\t\t\\\n-\t}\t\t\t\t\t\t\\\n-} while (0)\n-\n-/*\n- * pthread API override functions follow\n- * Note in this example code only a subset of functions are\n- * implemented.\n- *\n- * The stub functions provided will issue a warning log\n- * message if an unimplemented function is invoked\n- *\n- 
*/\n-\n-int pthread_barrier_destroy(pthread_barrier_t *a)\n-{\n-\tNOT_IMPLEMENTED;\n-\treturn _sys_pthread_funcs.f_pthread_barrier_destroy(a);\n-}\n-\n-int\n-pthread_barrier_init(pthread_barrier_t *a,\n-\t\t     const pthread_barrierattr_t *b, unsigned c)\n-{\n-\tNOT_IMPLEMENTED;\n-\treturn _sys_pthread_funcs.f_pthread_barrier_init(a, b, c);\n-}\n-\n-int pthread_barrier_wait(pthread_barrier_t *a)\n-{\n-\tNOT_IMPLEMENTED;\n-\treturn _sys_pthread_funcs.f_pthread_barrier_wait(a);\n-}\n-\n-int pthread_cond_broadcast(pthread_cond_t *cond)\n-{\n-\tif (override) {\n-\n-\t\tlthread_cond_broadcast(*(struct lthread_cond **)cond);\n-\t\treturn 0;\n-\t}\n-\treturn _sys_pthread_funcs.f_pthread_cond_broadcast(cond);\n-}\n-\n-int pthread_mutex_destroy(pthread_mutex_t *mutex)\n-{\n-\tif (override)\n-\t\treturn lthread_mutex_destroy(*(struct lthread_mutex **)mutex);\n-\treturn _sys_pthread_funcs.f_pthread_mutex_destroy(mutex);\n-}\n-\n-int pthread_cond_destroy(pthread_cond_t *cond)\n-{\n-\tif (override)\n-\t\treturn lthread_cond_destroy(*(struct lthread_cond **)cond);\n-\treturn _sys_pthread_funcs.f_pthread_cond_destroy(cond);\n-}\n-\n-int pthread_cond_init(pthread_cond_t *cond, const pthread_condattr_t *attr)\n-{\n-\tif (override)\n-\t\treturn lthread_cond_init(NULL,\n-\t\t\t\t(struct lthread_cond **)cond,\n-\t\t\t\t(const struct lthread_condattr *) attr);\n-\treturn _sys_pthread_funcs.f_pthread_cond_init(cond, attr);\n-}\n-\n-int pthread_cond_signal(pthread_cond_t *cond)\n-{\n-\tif (override) {\n-\t\tlthread_cond_signal(*(struct lthread_cond **)cond);\n-\t\treturn 0;\n-\t}\n-\treturn _sys_pthread_funcs.f_pthread_cond_signal(cond);\n-}\n-\n-int\n-pthread_cond_timedwait(pthread_cond_t *__rte_restrict cond,\n-\t\t       pthread_mutex_t *__rte_restrict mutex,\n-\t\t       const struct timespec *__rte_restrict time)\n-{\n-\tNOT_IMPLEMENTED;\n-\treturn _sys_pthread_funcs.f_pthread_cond_timedwait(cond, mutex, time);\n-}\n-\n-int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex)\n-{\n-\tif (override) {\n-\t\tpthread_mutex_unlock(mutex);\n-\t\tint rv = lthread_cond_wait(*(struct lthread_cond **)cond, 0);\n-\n-\t\tpthread_mutex_lock(mutex);\n-\t\treturn rv;\n-\t}\n-\treturn _sys_pthread_funcs.f_pthread_cond_wait(cond, mutex);\n-}\n-\n-int\n-pthread_create(pthread_t *__rte_restrict tid,\n-\t\tconst pthread_attr_t *__rte_restrict attr,\n-\t\tlthread_func_t func,\n-\t       void *__rte_restrict arg)\n-{\n-\tif (override) {\n-\t\tint lcore = -1;\n-\n-\t\tif (attr != NULL) {\n-\t\t\t/* determine CPU being requested */\n-\t\t\trte_cpuset_t cpuset;\n-\n-\t\t\tCPU_ZERO(&cpuset);\n-\t\t\tpthread_attr_getaffinity_np(attr,\n-\t\t\t\t\t\tsizeof(rte_cpuset_t),\n-\t\t\t\t\t\t&cpuset);\n-\n-\t\t\tif (CPU_COUNT(&cpuset) != 1)\n-\t\t\t\treturn POSIX_ERRNO(EINVAL);\n-\n-\t\t\tfor (lcore = 0; lcore < LTHREAD_MAX_LCORES; lcore++) {\n-\t\t\t\tif (!CPU_ISSET(lcore, &cpuset))\n-\t\t\t\t\tcontinue;\n-\t\t\t\tbreak;\n-\t\t\t}\n-\t\t}\n-\t\treturn lthread_create((struct lthread **)tid, lcore,\n-\t\t\t\t      func, arg);\n-\t}\n-\treturn _sys_pthread_funcs.f_pthread_create(tid, attr, func, arg);\n-}\n-\n-int pthread_detach(pthread_t tid)\n-{\n-\tif (override) {\n-\t\tstruct lthread *lt = (struct lthread *)tid;\n-\n-\t\tif (lt == lthread_current()) {\n-\t\t\tlthread_detach();\n-\t\t\treturn 0;\n-\t\t}\n-\t\tNOT_IMPLEMENTED;\n-\t}\n-\treturn _sys_pthread_funcs.f_pthread_detach(tid);\n-}\n-\n-int pthread_equal(pthread_t a, pthread_t b)\n-{\n-\tNOT_IMPLEMENTED;\n-\treturn _sys_pthread_funcs.f_pthread_equal(a, b);\n-}\n-\n-void 
pthread_exit_override(void *v)\n-{\n-\tif (override) {\n-\t\tlthread_exit(v);\n-\t\treturn;\n-\t}\n-\t_sys_pthread_funcs.f_pthread_exit(v);\n-}\n-\n-void\n-*pthread_getspecific(pthread_key_t key)\n-{\n-\tif (override)\n-\t\treturn lthread_getspecific((unsigned int) key);\n-\treturn _sys_pthread_funcs.f_pthread_getspecific(key);\n-}\n-\n-int pthread_getcpuclockid(pthread_t a, clockid_t *b)\n-{\n-\tNOT_IMPLEMENTED;\n-\treturn _sys_pthread_funcs.f_pthread_getcpuclockid(a, b);\n-}\n-\n-int pthread_join(pthread_t tid, void **val)\n-{\n-\tif (override)\n-\t\treturn lthread_join((struct lthread *)tid, val);\n-\treturn _sys_pthread_funcs.f_pthread_join(tid, val);\n-}\n-\n-int pthread_key_create(pthread_key_t *keyptr, void (*dtor) (void *))\n-{\n-\tif (override)\n-\t\treturn lthread_key_create((unsigned int *)keyptr, dtor);\n-\treturn _sys_pthread_funcs.f_pthread_key_create(keyptr, dtor);\n-}\n-\n-int pthread_key_delete(pthread_key_t key)\n-{\n-\tif (override) {\n-\t\tlthread_key_delete((unsigned int) key);\n-\t\treturn 0;\n-\t}\n-\treturn _sys_pthread_funcs.f_pthread_key_delete(key);\n-}\n-\n-\n-int\n-pthread_mutex_init(pthread_mutex_t *mutex, const pthread_mutexattr_t *attr)\n-{\n-\tif (override)\n-\t\treturn lthread_mutex_init(NULL,\n-\t\t\t\t(struct lthread_mutex **)mutex,\n-\t\t\t\t(const struct lthread_mutexattr *)attr);\n-\treturn _sys_pthread_funcs.f_pthread_mutex_init(mutex, attr);\n-}\n-\n-int pthread_mutex_lock(pthread_mutex_t *mutex)\n-{\n-\tif (override)\n-\t\treturn lthread_mutex_lock(*(struct lthread_mutex **)mutex);\n-\treturn _sys_pthread_funcs.f_pthread_mutex_lock(mutex);\n-}\n-\n-int pthread_mutex_trylock(pthread_mutex_t *mutex)\n-{\n-\tif (override)\n-\t\treturn lthread_mutex_trylock(*(struct lthread_mutex **)mutex);\n-\treturn _sys_pthread_funcs.f_pthread_mutex_trylock(mutex);\n-}\n-\n-int pthread_mutex_timedlock(pthread_mutex_t *mutex, const struct timespec *b)\n-{\n-\tNOT_IMPLEMENTED;\n-\treturn _sys_pthread_funcs.f_pthread_mutex_timedlock(mutex, b);\n-}\n-\n-int pthread_mutex_unlock(pthread_mutex_t *mutex)\n-{\n-\tif (override)\n-\t\treturn lthread_mutex_unlock(*(struct lthread_mutex **)mutex);\n-\treturn _sys_pthread_funcs.f_pthread_mutex_unlock(mutex);\n-}\n-\n-int pthread_once(pthread_once_t *a, void (b) (void))\n-{\n-\tNOT_IMPLEMENTED;\n-\treturn _sys_pthread_funcs.f_pthread_once(a, b);\n-}\n-\n-int pthread_rwlock_destroy(pthread_rwlock_t *a)\n-{\n-\tNOT_IMPLEMENTED;\n-\treturn _sys_pthread_funcs.f_pthread_rwlock_destroy(a);\n-}\n-\n-int pthread_rwlock_init(pthread_rwlock_t *a, const pthread_rwlockattr_t *b)\n-{\n-\tNOT_IMPLEMENTED;\n-\treturn _sys_pthread_funcs.f_pthread_rwlock_init(a, b);\n-}\n-\n-int pthread_rwlock_rdlock(pthread_rwlock_t *a)\n-{\n-\tNOT_IMPLEMENTED;\n-\treturn _sys_pthread_funcs.f_pthread_rwlock_rdlock(a);\n-}\n-\n-int pthread_rwlock_timedrdlock(pthread_rwlock_t *a, const struct timespec *b)\n-{\n-\tNOT_IMPLEMENTED;\n-\treturn _sys_pthread_funcs.f_pthread_rwlock_timedrdlock(a, b);\n-}\n-\n-int pthread_rwlock_timedwrlock(pthread_rwlock_t *a, const struct timespec *b)\n-{\n-\tNOT_IMPLEMENTED;\n-\treturn _sys_pthread_funcs.f_pthread_rwlock_timedwrlock(a, b);\n-}\n-\n-int pthread_rwlock_tryrdlock(pthread_rwlock_t *a)\n-{\n-\tNOT_IMPLEMENTED;\n-\treturn _sys_pthread_funcs.f_pthread_rwlock_tryrdlock(a);\n-}\n-\n-int pthread_rwlock_trywrlock(pthread_rwlock_t *a)\n-{\n-\tNOT_IMPLEMENTED;\n-\treturn _sys_pthread_funcs.f_pthread_rwlock_trywrlock(a);\n-}\n-\n-int pthread_rwlock_unlock(pthread_rwlock_t *a)\n-{\n-\tNOT_IMPLEMENTED;\n-\treturn 
_sys_pthread_funcs.f_pthread_rwlock_unlock(a);\n-}\n-\n-int pthread_rwlock_wrlock(pthread_rwlock_t *a)\n-{\n-\tNOT_IMPLEMENTED;\n-\treturn _sys_pthread_funcs.f_pthread_rwlock_wrlock(a);\n-}\n-\n-#ifdef RTE_EXEC_ENV_LINUX\n-int\n-pthread_yield(void)\n-{\n-\tif (override) {\n-\t\tlthread_yield();\n-\t\treturn 0;\n-\t}\n-\treturn _sys_pthread_funcs.f_pthread_yield();\n-}\n-#else\n-void\n-pthread_yield(void)\n-{\n-\tif (override)\n-\t\tlthread_yield();\n-\telse\n-\t\t_sys_pthread_funcs.f_pthread_yield();\n-}\n-#endif\n-\n-pthread_t pthread_self(void)\n-{\n-\tif (override)\n-\t\treturn (pthread_t) lthread_current();\n-\treturn _sys_pthread_funcs.f_pthread_self();\n-}\n-\n-int pthread_setspecific(pthread_key_t key, const void *data)\n-{\n-\tif (override) {\n-\t\tint rv =  lthread_setspecific((unsigned int)key, data);\n-\t\treturn rv;\n-\t}\n-\treturn _sys_pthread_funcs.f_pthread_setspecific(key, data);\n-}\n-\n-int pthread_spin_init(pthread_spinlock_t *a, int b)\n-{\n-\tNOT_IMPLEMENTED;\n-\treturn _sys_pthread_funcs.f_pthread_spin_init(a, b);\n-}\n-\n-int pthread_spin_destroy(pthread_spinlock_t *a)\n-{\n-\tNOT_IMPLEMENTED;\n-\treturn _sys_pthread_funcs.f_pthread_spin_destroy(a);\n-}\n-\n-int pthread_spin_lock(pthread_spinlock_t *a)\n-{\n-\tNOT_IMPLEMENTED;\n-\treturn _sys_pthread_funcs.f_pthread_spin_lock(a);\n-}\n-\n-int pthread_spin_trylock(pthread_spinlock_t *a)\n-{\n-\tNOT_IMPLEMENTED;\n-\treturn _sys_pthread_funcs.f_pthread_spin_trylock(a);\n-}\n-\n-int pthread_spin_unlock(pthread_spinlock_t *a)\n-{\n-\tNOT_IMPLEMENTED;\n-\treturn _sys_pthread_funcs.f_pthread_spin_unlock(a);\n-}\n-\n-int pthread_cancel(pthread_t tid)\n-{\n-\tif (override) {\n-\t\tlthread_cancel(*(struct lthread **)tid);\n-\t\treturn 0;\n-\t}\n-\treturn _sys_pthread_funcs.f_pthread_cancel(tid);\n-}\n-\n-int pthread_setcancelstate(int a, int *b)\n-{\n-\tNOT_IMPLEMENTED;\n-\treturn _sys_pthread_funcs.f_pthread_setcancelstate(a, b);\n-}\n-\n-int pthread_setcanceltype(int a, int *b)\n-{\n-\tNOT_IMPLEMENTED;\n-\treturn _sys_pthread_funcs.f_pthread_setcanceltype(a, b);\n-}\n-\n-void pthread_testcancel(void)\n-{\n-\tNOT_IMPLEMENTED;\n-\treturn _sys_pthread_funcs.f_pthread_testcancel();\n-}\n-\n-\n-int pthread_getschedparam(pthread_t tid, int *a, struct sched_param *b)\n-{\n-\tNOT_IMPLEMENTED;\n-\treturn _sys_pthread_funcs.f_pthread_getschedparam(tid, a, b);\n-}\n-\n-int pthread_setschedparam(pthread_t a, int b, const struct sched_param *c)\n-{\n-\tNOT_IMPLEMENTED;\n-\treturn _sys_pthread_funcs.f_pthread_setschedparam(a, b, c);\n-}\n-\n-\n-int nanosleep(const struct timespec *req, struct timespec *rem)\n-{\n-\tif (override) {\n-\t\tuint64_t ns = req->tv_sec * 1000000000 + req->tv_nsec;\n-\n-\t\tlthread_sleep(ns);\n-\t\treturn 0;\n-\t}\n-\treturn _sys_pthread_funcs.f_nanosleep(req, rem);\n-}\n-\n-int\n-pthread_setaffinity_np(pthread_t thread, size_t cpusetsize,\n-\t\t       const rte_cpuset_t *cpuset)\n-{\n-\tif (override) {\n-\t\t/* we only allow affinity with a single CPU */\n-\t\tif (CPU_COUNT(cpuset) != 1)\n-\t\t\treturn POSIX_ERRNO(EINVAL);\n-\n-\t\t/* we only allow the current thread to sets its own affinity */\n-\t\tstruct lthread *lt = (struct lthread *)thread;\n-\n-\t\tif (lthread_current() != lt)\n-\t\t\treturn POSIX_ERRNO(EINVAL);\n-\n-\t\t/* determine the CPU being requested */\n-\t\tint i;\n-\n-\t\tfor (i = 0; i < LTHREAD_MAX_LCORES; i++) {\n-\t\t\tif (!CPU_ISSET(i, cpuset))\n-\t\t\t\tcontinue;\n-\t\t\tbreak;\n-\t\t}\n-\t\t/* check requested core is allowed */\n-\t\tif (i == LTHREAD_MAX_LCORES)\n-\t\t\treturn 
POSIX_ERRNO(EINVAL);\n-\n-\t\t/* finally we can set affinity to the requested lcore */\n-\t\tlthread_set_affinity(i);\n-\t\treturn 0;\n-\t}\n-\treturn _sys_pthread_funcs.f_pthread_setaffinity_np(thread, cpusetsize,\n-\t\t\t\t\t\t\t   cpuset);\n-}\ndiff --git a/examples/performance-thread/pthread_shim/pthread_shim.h b/examples/performance-thread/pthread_shim/pthread_shim.h\ndeleted file mode 100644\nindex e90fb15fc138..000000000000\n--- a/examples/performance-thread/pthread_shim/pthread_shim.h\n+++ /dev/null\n@@ -1,85 +0,0 @@\n-/* SPDX-License-Identifier: BSD-3-Clause\n- * Copyright(c) 2015 Intel Corporation\n- */\n-\n-#ifndef _PTHREAD_SHIM_H_\n-#define _PTHREAD_SHIM_H_\n-\n-#include <rte_lcore.h>\n-\n-/*\n- * This pthread shim is an example that demonstrates how legacy code\n- * that makes use of POSIX pthread services can make use of lthreads\n- * with reduced porting effort.\n- *\n- * N.B. The example is not a complete implementation, only a subset of\n- * pthread APIs sufficient to demonstrate the principle of operation\n- * are implemented.\n- *\n- * In general pthread attribute objects do not have equivalent functions\n- * in lthreads, and are ignored.\n- *\n- * There is one exception and that is the use of attr to specify a\n- * core affinity in calls to pthread_create.\n- *\n- * The shim operates as follows:-\n- *\n- * On initialisation a constructor function uses dlsym to obtain and\n- * save the loaded address of the full set of pthread APIs that will\n- * be overridden.\n- *\n- * For each function there is a stub provided that will invoke either\n- * the genuine pthread library function saved saved by the constructor,\n- * or else the corresponding equivalent lthread function.\n- *\n- * The stub functions are implemented in pthread_shim.c\n- *\n- * The stub will take care of adapting parameters, and any police\n- * any constraints where lthread functionality differs.\n- *\n- * The initial thread must always be a pure lthread.\n- *\n- * The decision whether to invoke the real library function or the lthread\n- * function is controlled by a per pthread flag that can be switched\n- * on of off by the pthread_override_set() API described below. Typcially\n- * this should be done as the first action of the initial lthread.\n- *\n- * N.B In general it would be poor practice to revert to invoke a real\n- * pthread function when running as an lthread, since these may block and\n- * effectively stall the lthread scheduler.\n- *\n- */\n-\n-\n-/*\n- * An exiting lthread must not terminate the pthread it is running in\n- * since this would mean terminating the lthread scheduler.\n- * We override pthread_exit() with a macro because it is typically declared with\n- * __rte_noreturn\n- */\n-void pthread_exit_override(void *v);\n-\n-#define pthread_exit(v) do { \\\n-\tpthread_exit_override((v));\t\\\n-\treturn NULL;\t\\\n-} while (0)\n-\n-/*\n- * Enable/Disable pthread override\n- * state\n- * 0 disable\n- * 1 enable\n- */\n-void pthread_override_set(int state);\n-\n-\n-/*\n- * Return pthread override state\n- * return\n- * 0 disable\n- * 1 enable\n- */\n-int pthread_override_get(void);\n-\n-\n-#endif /* _PTHREAD_SHIM_H_ */\n",
    "prefixes": []
}
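
Editor's note (not part of the patch or of the API payload above): the removed pthread_shim.c relied on run-time symbol interposition. A constructor saved the genuine pthread entry points with dlsym(RTLD_NEXT, ...), and each stub then decided per call, via a per-thread override flag, whether to forward to the saved function or to the lthread equivalent. The minimal sketch below illustrates only that pattern for a single function; the names shim_init and real_mutex_lock are illustrative and are not taken from DPDK.

/* Sketch of the dlsym(RTLD_NEXT) interposition pattern used by the removed
 * shim; illustrative names, not DPDK code.
 * Build as a shared object and load it with LD_PRELOAD, e.g.
 *   gcc -shared -fPIC -o shim.so shim.c -ldl   (-ldl needed on older glibc)
 */
#define _GNU_SOURCE
#include <dlfcn.h>
#include <pthread.h>
#include <stdio.h>

/* saved address of the real libpthread implementation */
static int (*real_mutex_lock)(pthread_mutex_t *);

/* per-thread switch: 0 = call the real function, 1 = take the override path */
static __thread int override;

__attribute__((constructor))
static void shim_init(void)
{
	real_mutex_lock = (int (*)(pthread_mutex_t *))
			  dlsym(RTLD_NEXT, "pthread_mutex_lock");
	if (real_mutex_lock == NULL)
		fprintf(stderr, "dlsym: %s\n", dlerror());
}

/* Same symbol as the libc function, so existing callers are intercepted. */
int pthread_mutex_lock(pthread_mutex_t *mutex)
{
	if (override)
		return 0;	/* an alternative locking scheme would go here */
	return real_mutex_lock(mutex);
}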
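
A second pattern worth noting from the removed pthread_create() and pthread_setaffinity_np() stubs: an affinity request was honoured only when the cpuset named exactly one CPU, whose index was then found by scanning the set. The self-contained sketch below shows that check under the assumption of a glibc cpu_set_t; the helper name single_cpu_from_set is illustrative only.

#define _GNU_SOURCE
#include <sched.h>
#include <errno.h>

/* Return the index of the single CPU set in 'set', or -EINVAL if the set
 * does not contain exactly one CPU (the shim rejected such requests). */
static int single_cpu_from_set(const cpu_set_t *set, int max_cpus)
{
	int i;

	if (CPU_COUNT(set) != 1)
		return -EINVAL;
	for (i = 0; i < max_cpus; i++)
		if (CPU_ISSET(i, set))
			return i;
	return -EINVAL;
}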