get:
Show a patch.

patch:
Update a patch (partial update).

put:
Update a patch (full update).
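
As a quick illustration of the read-only method, here is a minimal Python sketch using the third-party requests package to fetch this patch. The endpoint URL and the field names printed at the end are taken from the example response below; the format=json query parameter is an assumption (this page itself is rendered with format=api).

import requests

# Endpoint shown in the example request below.
URL = "https://patches.dpdk.org/api/patches/94894/"

# format=json is assumed to be available alongside the format=api renderer
# used for this page.
response = requests.get(URL, params={"format": "json"}, timeout=30)
response.raise_for_status()

patch = response.json()
# Field names match the JSON document below.
print(patch["name"])    # "[v3,5/7] power: support callbacks for multiple Rx queues"
print(patch["state"])   # "superseded"
print(patch["mbox"])    # URL of the raw patch in mbox format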

GET /api/patches/94894/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 94894,
    "url": "https://patches.dpdk.org/api/patches/94894/?format=api",
    "web_url": "https://patches.dpdk.org/project/dpdk/patch/a3ff0d51b51125390a5736e488cb2afbd4a15c52.1624884053.git.anatoly.burakov@intel.com/",
    "project": {
        "id": 1,
        "url": "https://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<a3ff0d51b51125390a5736e488cb2afbd4a15c52.1624884053.git.anatoly.burakov@intel.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/a3ff0d51b51125390a5736e488cb2afbd4a15c52.1624884053.git.anatoly.burakov@intel.com",
    "date": "2021-06-28T12:41:11",
    "name": "[v3,5/7] power: support callbacks for multiple Rx queues",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "5b0e867664ef9747c200a1f150e3843e6690c277",
    "submitter": {
        "id": 4,
        "url": "https://patches.dpdk.org/api/people/4/?format=api",
        "name": "Anatoly Burakov",
        "email": "anatoly.burakov@intel.com"
    },
    "delegate": null,
    "mbox": "https://patches.dpdk.org/project/dpdk/patch/a3ff0d51b51125390a5736e488cb2afbd4a15c52.1624884053.git.anatoly.burakov@intel.com/mbox/",
    "series": [
        {
            "id": 17501,
            "url": "https://patches.dpdk.org/api/series/17501/?format=api",
            "web_url": "https://patches.dpdk.org/project/dpdk/list/?series=17501",
            "date": "2021-06-28T12:41:06",
            "name": "Enhancements for PMD power management",
            "version": 3,
            "mbox": "https://patches.dpdk.org/series/17501/mbox/"
        }
    ],
    "comments": "https://patches.dpdk.org/api/patches/94894/comments/",
    "check": "warning",
    "checks": "https://patches.dpdk.org/api/patches/94894/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@inbox.dpdk.org",
        "Delivered-To": "patchwork@inbox.dpdk.org",
        "Received": [
            "from mails.dpdk.org (mails.dpdk.org [217.70.189.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id C0964A0C3F;\n\tMon, 28 Jun 2021 14:41:52 +0200 (CEST)",
            "from [217.70.189.124] (localhost [127.0.0.1])\n\tby mails.dpdk.org (Postfix) with ESMTP id 112FD4114C;\n\tMon, 28 Jun 2021 14:41:35 +0200 (CEST)",
            "from mga14.intel.com (mga14.intel.com [192.55.52.115])\n by mails.dpdk.org (Postfix) with ESMTP id 64DBC4068A\n for <dev@dpdk.org>; Mon, 28 Jun 2021 14:41:26 +0200 (CEST)",
            "from fmsmga005.fm.intel.com ([10.253.24.32])\n by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;\n 28 Jun 2021 05:41:26 -0700",
            "from silpixa00399498.ir.intel.com (HELO\n silpixa00399498.ger.corp.intel.com) ([10.237.223.53])\n by fmsmga005.fm.intel.com with ESMTP; 28 Jun 2021 05:41:24 -0700"
        ],
        "X-IronPort-AV": [
            "E=McAfee;i=\"6200,9189,10028\"; a=\"207761734\"",
            "E=Sophos;i=\"5.83,306,1616482800\"; d=\"scan'208\";a=\"207761734\"",
            "E=Sophos;i=\"5.83,306,1616482800\"; d=\"scan'208\";a=\"643319603\""
        ],
        "X-ExtLoop1": "1",
        "From": "Anatoly Burakov <anatoly.burakov@intel.com>",
        "To": "dev@dpdk.org, David Hunt <david.hunt@intel.com>,\n Ray Kinsella <mdr@ashroe.eu>, Neil Horman <nhorman@tuxdriver.com>",
        "Cc": "ciara.loftus@intel.com",
        "Date": "Mon, 28 Jun 2021 12:41:11 +0000",
        "Message-Id": "\n <a3ff0d51b51125390a5736e488cb2afbd4a15c52.1624884053.git.anatoly.burakov@intel.com>",
        "X-Mailer": "git-send-email 2.25.1",
        "In-Reply-To": "<cover.1624884053.git.anatoly.burakov@intel.com>",
        "References": "<cover.1624629506.git.anatoly.burakov@intel.com>\n <cover.1624884053.git.anatoly.burakov@intel.com>",
        "MIME-Version": "1.0",
        "Content-Transfer-Encoding": "8bit",
        "Subject": "[dpdk-dev] [PATCH v3 5/7] power: support callbacks for multiple Rx\n queues",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.29",
        "Precedence": "list",
        "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
        "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://mails.dpdk.org/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "Currently, there is a hard limitation on the PMD power management\nsupport that only allows it to support a single queue per lcore. This is\nnot ideal as most DPDK use cases will poll multiple queues per core.\n\nThe PMD power management mechanism relies on ethdev Rx callbacks, so it\nis very difficult to implement such support because callbacks are\neffectively stateless and have no visibility into what the other ethdev\ndevices are doing. This places limitations on what we can do within the\nframework of Rx callbacks, but the basics of this implementation are as\nfollows:\n\n- Replace per-queue structures with per-lcore ones, so that any device\n  polled from the same lcore can share data\n- Any queue that is going to be polled from a specific lcore has to be\n  added to the list of cores to poll, so that the callback is aware of\n  other queues being polled by the same lcore\n- Both the empty poll counter and the actual power saving mechanism is\n  shared between all queues polled on a particular lcore, and is only\n  activated when a special designated \"power saving\" queue is polled. To\n  put it another way, we have no idea which queue the user will poll in\n  what order, so we rely on them telling us that queue X is the last one\n  in the polling loop, so any power management should happen there.\n- A new API is added to mark a specific Rx queue as \"power saving\".\n  Failing to call this API will result in no power management, however\n  when having only one queue per core it is obvious which queue is the\n  \"power saving\" one, so things will still work without this new API for\n  use cases that were previously working without it.\n- The limitation on UMWAIT-based polling is not removed because UMWAIT\n  is incapable of monitoring more than one address.\n\nAlso, while we're at it, update and improve the docs.\n\nSigned-off-by: Anatoly Burakov <anatoly.burakov@intel.com>\n---\n\nNotes:\n    v3:\n    - Move the list of supported NICs to NIC feature table\n    \n    v2:\n    - Use a TAILQ for queues instead of a static array\n    - Address feedback from Konstantin\n    - Add additional checks for stopped queues\n\n doc/guides/nics/features.rst           |  10 +\n doc/guides/prog_guide/power_man.rst    |  75 +++--\n doc/guides/rel_notes/release_21_08.rst |   3 +\n lib/power/rte_power_pmd_mgmt.c         | 381 ++++++++++++++++++++-----\n lib/power/rte_power_pmd_mgmt.h         |  34 +++\n lib/power/version.map                  |   3 +\n 6 files changed, 412 insertions(+), 94 deletions(-)",
    "diff": "diff --git a/doc/guides/nics/features.rst b/doc/guides/nics/features.rst\nindex 403c2b03a3..a96e12d155 100644\n--- a/doc/guides/nics/features.rst\n+++ b/doc/guides/nics/features.rst\n@@ -912,6 +912,16 @@ Supports to get Rx/Tx packet burst mode information.\n * **[implements] eth_dev_ops**: ``rx_burst_mode_get``, ``tx_burst_mode_get``.\n * **[related] API**: ``rte_eth_rx_burst_mode_get()``, ``rte_eth_tx_burst_mode_get()``.\n \n+.. _nic_features_get_monitor_addr:\n+\n+PMD power management using monitor addresses\n+--------------------------------------------\n+\n+Supports getting a monitoring condition to use together with Ethernet PMD power\n+management (see :doc:`../prog_guide/power_man` for more details).\n+\n+* **[implements] eth_dev_ops**: ``get_monitor_addr``\n+\n .. _nic_features_other:\n \n Other dev ops not represented by a Feature\ndiff --git a/doc/guides/prog_guide/power_man.rst b/doc/guides/prog_guide/power_man.rst\nindex c70ae128ac..fac2c19516 100644\n--- a/doc/guides/prog_guide/power_man.rst\n+++ b/doc/guides/prog_guide/power_man.rst\n@@ -198,34 +198,41 @@ Ethernet PMD Power Management API\n Abstract\n ~~~~~~~~\n \n-Existing power management mechanisms require developers\n-to change application design or change code to make use of it.\n-The PMD power management API provides a convenient alternative\n-by utilizing Ethernet PMD RX callbacks,\n-and triggering power saving whenever empty poll count reaches a certain number.\n-\n-Monitor\n-   This power saving scheme will put the CPU into optimized power state\n-   and use the ``rte_power_monitor()`` function\n-   to monitor the Ethernet PMD RX descriptor address,\n-   and wake the CPU up whenever there's new traffic.\n-\n-Pause\n-   This power saving scheme will avoid busy polling\n-   by either entering power-optimized sleep state\n-   with ``rte_power_pause()`` function,\n-   or, if it's not available, use ``rte_pause()``.\n-\n-Frequency scaling\n-   This power saving scheme will use ``librte_power`` library\n-   functionality to scale the core frequency up/down\n-   depending on traffic volume.\n-\n-.. note::\n-\n-   Currently, this power management API is limited to mandatory mapping\n-   of 1 queue to 1 core (multiple queues are supported,\n-   but they must be polled from different cores).\n+Existing power management mechanisms require developers to change application\n+design or change code to make use of it. The PMD power management API provides a\n+convenient alternative by utilizing Ethernet PMD RX callbacks, and triggering\n+power saving whenever empty poll count reaches a certain number.\n+\n+* Monitor\n+   This power saving scheme will put the CPU into optimized power state and\n+   monitor the Ethernet PMD RX descriptor address, waking the CPU up whenever\n+   there's new traffic. 
Support for this scheme may not be available on all\n+   platforms, and further limitations may apply (see below).\n+\n+* Pause\n+   This power saving scheme will avoid busy polling by either entering\n+   power-optimized sleep state with ``rte_power_pause()`` function, or, if it's\n+   not supported by the underlying platform, use ``rte_pause()``.\n+\n+* Frequency scaling\n+   This power saving scheme will use ``librte_power`` library functionality to\n+   scale the core frequency up/down depending on traffic volume.\n+\n+The \"monitor\" mode is only supported in the following configurations and scenarios:\n+\n+* If ``rte_cpu_get_intrinsics_support()`` function indicates that\n+  ``rte_power_monitor()`` is supported by the platform, then monitoring will be\n+  limited to a mapping of 1 core 1 queue (thus, each Rx queue will have to be\n+  monitored from a different lcore).\n+\n+* If ``rte_cpu_get_intrinsics_support()`` function indicates that the\n+  ``rte_power_monitor()`` function is not supported, then monitor mode will not\n+  be supported.\n+\n+* Not all Ethernet devices support monitoring, even if the underlying\n+  platform may support the necessary CPU instructions. Please refer to\n+  :doc:`../nics/overview` for more information.\n+\n \n API Overview for Ethernet PMD Power Management\n ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n@@ -234,6 +241,16 @@ API Overview for Ethernet PMD Power Management\n \n * **Queue Disable**: Disable power scheme for certain queue/port/core.\n \n+* **Set Power Save Queue**: In case of polling multiple queues from one lcore,\n+  designate a specific queue to be the one that triggers power management routines.\n+\n+.. note::\n+\n+   When using PMD power management with multiple Ethernet Rx queues on one lcore,\n+   it is required to designate one of the configured Rx queues as a \"power save\"\n+   queue by calling the appropriate API. Failing to do so will result in no\n+   power saving ever taking effect.\n+\n References\n ----------\n \n@@ -242,3 +259,5 @@ References\n \n *   The :doc:`../sample_app_ug/vm_power_management`\n     chapter in the :doc:`../sample_app_ug/index` section.\n+\n+*   The :doc:`../nics/overview` chapter in the :doc:`../nics/index` section\ndiff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst\nindex f015c509fc..3926d45ef8 100644\n--- a/doc/guides/rel_notes/release_21_08.rst\n+++ b/doc/guides/rel_notes/release_21_08.rst\n@@ -57,6 +57,9 @@ New Features\n \n * eal: added ``rte_power_monitor_multi`` to support waiting for multiple events.\n \n+* rte_power: The experimental PMD power management API now supports managing\n+  multiple Ethernet Rx queues per lcore.\n+\n \n Removed Items\n -------------\ndiff --git a/lib/power/rte_power_pmd_mgmt.c b/lib/power/rte_power_pmd_mgmt.c\nindex 9b95cf1794..7762cd39b8 100644\n--- a/lib/power/rte_power_pmd_mgmt.c\n+++ b/lib/power/rte_power_pmd_mgmt.c\n@@ -33,7 +33,28 @@ enum pmd_mgmt_state {\n \tPMD_MGMT_ENABLED\n };\n \n-struct pmd_queue_cfg {\n+union queue {\n+\tuint32_t val;\n+\tstruct {\n+\t\tuint16_t portid;\n+\t\tuint16_t qid;\n+\t};\n+};\n+\n+struct queue_list_entry {\n+\tTAILQ_ENTRY(queue_list_entry) next;\n+\tunion queue queue;\n+};\n+\n+struct pmd_core_cfg {\n+\tTAILQ_HEAD(queue_list_head, queue_list_entry) head;\n+\t/**< Which port-queue pairs are associated with this lcore? 
*/\n+\tunion queue power_save_queue;\n+\t/**< When polling multiple queues, all but this one will be ignored */\n+\tbool power_save_queue_set;\n+\t/**< When polling multiple queues, power save queue must be set */\n+\tsize_t n_queues;\n+\t/**< How many queues are in the list? */\n \tvolatile enum pmd_mgmt_state pwr_mgmt_state;\n \t/**< State of power management for this queue */\n \tenum rte_power_pmd_mgmt_type cb_mode;\n@@ -43,8 +64,96 @@ struct pmd_queue_cfg {\n \tuint64_t empty_poll_stats;\n \t/**< Number of empty polls */\n } __rte_cache_aligned;\n+static struct pmd_core_cfg lcore_cfg[RTE_MAX_LCORE];\n \n-static struct pmd_queue_cfg port_cfg[RTE_MAX_ETHPORTS][RTE_MAX_QUEUES_PER_PORT];\n+static inline bool\n+queue_equal(const union queue *l, const union queue *r)\n+{\n+\treturn l->val == r->val;\n+}\n+\n+static inline void\n+queue_copy(union queue *dst, const union queue *src)\n+{\n+\tdst->val = src->val;\n+}\n+\n+static inline bool\n+queue_is_power_save(const struct pmd_core_cfg *cfg, const union queue *q)\n+{\n+\tconst union queue *pwrsave = &cfg->power_save_queue;\n+\n+\t/* if there's only single queue, no need to check anything */\n+\tif (cfg->n_queues == 1)\n+\t\treturn true;\n+\treturn cfg->power_save_queue_set && queue_equal(q, pwrsave);\n+}\n+\n+static struct queue_list_entry *\n+queue_list_find(const struct pmd_core_cfg *cfg, const union queue *q)\n+{\n+\tstruct queue_list_entry *cur;\n+\n+\tTAILQ_FOREACH(cur, &cfg->head, next) {\n+\t\tif (queue_equal(&cur->queue, q))\n+\t\t\treturn cur;\n+\t}\n+\treturn NULL;\n+}\n+\n+static int\n+queue_set_power_save(struct pmd_core_cfg *cfg, const union queue *q)\n+{\n+\tconst struct queue_list_entry *found = queue_list_find(cfg, q);\n+\tif (found == NULL)\n+\t\treturn -ENOENT;\n+\tqueue_copy(&cfg->power_save_queue, q);\n+\tcfg->power_save_queue_set = true;\n+\treturn 0;\n+}\n+\n+static int\n+queue_list_add(struct pmd_core_cfg *cfg, const union queue *q)\n+{\n+\tstruct queue_list_entry *qle;\n+\n+\t/* is it already in the list? 
*/\n+\tif (queue_list_find(cfg, q) != NULL)\n+\t\treturn -EEXIST;\n+\n+\tqle = malloc(sizeof(*qle));\n+\tif (qle == NULL)\n+\t\treturn -ENOMEM;\n+\n+\tqueue_copy(&qle->queue, q);\n+\tTAILQ_INSERT_TAIL(&cfg->head, qle, next);\n+\tcfg->n_queues++;\n+\n+\treturn 0;\n+}\n+\n+static int\n+queue_list_remove(struct pmd_core_cfg *cfg, const union queue *q)\n+{\n+\tstruct queue_list_entry *found;\n+\n+\tfound = queue_list_find(cfg, q);\n+\tif (found == NULL)\n+\t\treturn -ENOENT;\n+\n+\tTAILQ_REMOVE(&cfg->head, found, next);\n+\tcfg->n_queues--;\n+\tfree(found);\n+\n+\t/* if this was a power save queue, unset it */\n+\tif (cfg->power_save_queue_set && queue_is_power_save(cfg, q)) {\n+\t\tunion queue *pwrsave = &cfg->power_save_queue;\n+\t\tcfg->power_save_queue_set = false;\n+\t\tpwrsave->val = 0;\n+\t}\n+\n+\treturn 0;\n+}\n \n static void\n calc_tsc(void)\n@@ -79,10 +188,10 @@ clb_umwait(uint16_t port_id, uint16_t qidx, struct rte_mbuf **pkts __rte_unused,\n \t\tuint16_t nb_rx, uint16_t max_pkts __rte_unused,\n \t\tvoid *addr __rte_unused)\n {\n+\tconst unsigned int lcore = rte_lcore_id();\n+\tstruct pmd_core_cfg *q_conf;\n \n-\tstruct pmd_queue_cfg *q_conf;\n-\n-\tq_conf = &port_cfg[port_id][qidx];\n+\tq_conf = &lcore_cfg[lcore];\n \n \tif (unlikely(nb_rx == 0)) {\n \t\tq_conf->empty_poll_stats++;\n@@ -107,11 +216,26 @@ clb_pause(uint16_t port_id, uint16_t qidx, struct rte_mbuf **pkts __rte_unused,\n \t\tuint16_t nb_rx, uint16_t max_pkts __rte_unused,\n \t\tvoid *addr __rte_unused)\n {\n-\tstruct pmd_queue_cfg *q_conf;\n+\tconst unsigned int lcore = rte_lcore_id();\n+\tconst union queue q = {.portid = port_id, .qid = qidx};\n+\tconst bool empty = nb_rx == 0;\n+\tstruct pmd_core_cfg *q_conf;\n \n-\tq_conf = &port_cfg[port_id][qidx];\n+\tq_conf = &lcore_cfg[lcore];\n \n-\tif (unlikely(nb_rx == 0)) {\n+\t/* early exit */\n+\tif (likely(!empty)) {\n+\t\tq_conf->empty_poll_stats = 0;\n+\t} else {\n+\t\t/* do we care about this particular queue? */\n+\t\tif (!queue_is_power_save(q_conf, &q))\n+\t\t\treturn nb_rx;\n+\n+\t\t/*\n+\t\t * we can increment unconditionally here because if there were\n+\t\t * non-empty polls in other queues assigned to this core, we\n+\t\t * dropped the counter to zero anyway.\n+\t\t */\n \t\tq_conf->empty_poll_stats++;\n \t\t/* sleep for 1 microsecond */\n \t\tif (unlikely(q_conf->empty_poll_stats > EMPTYPOLL_MAX)) {\n@@ -127,8 +251,7 @@ clb_pause(uint16_t port_id, uint16_t qidx, struct rte_mbuf **pkts __rte_unused,\n \t\t\t\t\trte_pause();\n \t\t\t}\n \t\t}\n-\t} else\n-\t\tq_conf->empty_poll_stats = 0;\n+\t}\n \n \treturn nb_rx;\n }\n@@ -138,19 +261,33 @@ clb_scale_freq(uint16_t port_id, uint16_t qidx,\n \t\tstruct rte_mbuf **pkts __rte_unused, uint16_t nb_rx,\n \t\tuint16_t max_pkts __rte_unused, void *_  __rte_unused)\n {\n-\tstruct pmd_queue_cfg *q_conf;\n+\tconst unsigned int lcore = rte_lcore_id();\n+\tconst union queue q = {.portid = port_id, .qid = qidx};\n+\tconst bool empty = nb_rx == 0;\n+\tstruct pmd_core_cfg *q_conf;\n \n-\tq_conf = &port_cfg[port_id][qidx];\n+\tq_conf = &lcore_cfg[lcore];\n \n-\tif (unlikely(nb_rx == 0)) {\n+\t/* early exit */\n+\tif (likely(!empty)) {\n+\t\tq_conf->empty_poll_stats = 0;\n+\n+\t\t/* scale up freq immediately */\n+\t\trte_power_freq_max(rte_lcore_id());\n+\t} else {\n+\t\t/* do we care about this particular queue? 
*/\n+\t\tif (!queue_is_power_save(q_conf, &q))\n+\t\t\treturn nb_rx;\n+\n+\t\t/*\n+\t\t * we can increment unconditionally here because if there were\n+\t\t * non-empty polls in other queues assigned to this core, we\n+\t\t * dropped the counter to zero anyway.\n+\t\t */\n \t\tq_conf->empty_poll_stats++;\n \t\tif (unlikely(q_conf->empty_poll_stats > EMPTYPOLL_MAX))\n \t\t\t/* scale down freq */\n \t\t\trte_power_freq_min(rte_lcore_id());\n-\t} else {\n-\t\tq_conf->empty_poll_stats = 0;\n-\t\t/* scale up freq */\n-\t\trte_power_freq_max(rte_lcore_id());\n \t}\n \n \treturn nb_rx;\n@@ -167,11 +304,79 @@ queue_stopped(const uint16_t port_id, const uint16_t queue_id)\n \treturn qinfo.queue_state == RTE_ETH_QUEUE_STATE_STOPPED;\n }\n \n+static int\n+cfg_queues_stopped(struct pmd_core_cfg *queue_cfg)\n+{\n+\tconst struct queue_list_entry *entry;\n+\n+\tTAILQ_FOREACH(entry, &queue_cfg->head, next) {\n+\t\tconst union queue *q = &entry->queue;\n+\t\tint ret = queue_stopped(q->portid, q->qid);\n+\t\tif (ret != 1)\n+\t\t\treturn ret;\n+\t}\n+\treturn 1;\n+}\n+\n+static int\n+check_scale(unsigned int lcore)\n+{\n+\tenum power_management_env env;\n+\n+\t/* only PSTATE and ACPI modes are supported */\n+\tif (!rte_power_check_env_supported(PM_ENV_ACPI_CPUFREQ) &&\n+\t    !rte_power_check_env_supported(PM_ENV_PSTATE_CPUFREQ)) {\n+\t\tRTE_LOG(DEBUG, POWER, \"Neither ACPI nor PSTATE modes are supported\\n\");\n+\t\treturn -ENOTSUP;\n+\t}\n+\t/* ensure we could initialize the power library */\n+\tif (rte_power_init(lcore))\n+\t\treturn -EINVAL;\n+\n+\t/* ensure we initialized the correct env */\n+\tenv = rte_power_get_env();\n+\tif (env != PM_ENV_ACPI_CPUFREQ && env != PM_ENV_PSTATE_CPUFREQ) {\n+\t\tRTE_LOG(DEBUG, POWER, \"Neither ACPI nor PSTATE modes were initialized\\n\");\n+\t\treturn -ENOTSUP;\n+\t}\n+\n+\t/* we're done */\n+\treturn 0;\n+}\n+\n+static int\n+check_monitor(struct pmd_core_cfg *cfg, const union queue *qdata)\n+{\n+\tstruct rte_power_monitor_cond dummy;\n+\n+\t/* check if rte_power_monitor is supported */\n+\tif (!global_data.intrinsics_support.power_monitor) {\n+\t\tRTE_LOG(DEBUG, POWER, \"Monitoring intrinsics are not supported\\n\");\n+\t\treturn -ENOTSUP;\n+\t}\n+\n+\tif (cfg->n_queues > 0) {\n+\t\tRTE_LOG(DEBUG, POWER, \"Monitoring multiple queues is not supported\\n\");\n+\t\treturn -ENOTSUP;\n+\t}\n+\n+\t/* check if the device supports the necessary PMD API */\n+\tif (rte_eth_get_monitor_addr(qdata->portid, qdata->qid,\n+\t\t\t&dummy) == -ENOTSUP) {\n+\t\tRTE_LOG(DEBUG, POWER, \"The device does not support rte_eth_get_monitor_addr\\n\");\n+\t\treturn -ENOTSUP;\n+\t}\n+\n+\t/* we're done */\n+\treturn 0;\n+}\n+\n int\n rte_power_ethdev_pmgmt_queue_enable(unsigned int lcore_id, uint16_t port_id,\n \t\tuint16_t queue_id, enum rte_power_pmd_mgmt_type mode)\n {\n-\tstruct pmd_queue_cfg *queue_cfg;\n+\tconst union queue qdata = {.portid = port_id, .qid = queue_id};\n+\tstruct pmd_core_cfg *queue_cfg;\n \tstruct rte_eth_dev_info info;\n \trte_rx_callback_fn clb;\n \tint ret;\n@@ -202,9 +407,19 @@ rte_power_ethdev_pmgmt_queue_enable(unsigned int lcore_id, uint16_t port_id,\n \t\tgoto end;\n \t}\n \n-\tqueue_cfg = &port_cfg[port_id][queue_id];\n+\tqueue_cfg = &lcore_cfg[lcore_id];\n \n-\tif (queue_cfg->pwr_mgmt_state != PMD_MGMT_DISABLED) {\n+\t/* check if other queues are stopped as well */\n+\tret = cfg_queues_stopped(queue_cfg);\n+\tif (ret != 1) {\n+\t\t/* error means invalid queue, 0 means queue wasn't stopped */\n+\t\tret = ret < 0 ? 
-EINVAL : -EBUSY;\n+\t\tgoto end;\n+\t}\n+\n+\t/* if callback was already enabled, check current callback type */\n+\tif (queue_cfg->pwr_mgmt_state != PMD_MGMT_DISABLED &&\n+\t\t\tqueue_cfg->cb_mode != mode) {\n \t\tret = -EINVAL;\n \t\tgoto end;\n \t}\n@@ -214,53 +429,20 @@ rte_power_ethdev_pmgmt_queue_enable(unsigned int lcore_id, uint16_t port_id,\n \n \tswitch (mode) {\n \tcase RTE_POWER_MGMT_TYPE_MONITOR:\n-\t{\n-\t\tstruct rte_power_monitor_cond dummy;\n-\n-\t\t/* check if rte_power_monitor is supported */\n-\t\tif (!global_data.intrinsics_support.power_monitor) {\n-\t\t\tRTE_LOG(DEBUG, POWER, \"Monitoring intrinsics are not supported\\n\");\n-\t\t\tret = -ENOTSUP;\n+\t\t/* check if we can add a new queue */\n+\t\tret = check_monitor(queue_cfg, &qdata);\n+\t\tif (ret < 0)\n \t\t\tgoto end;\n-\t\t}\n \n-\t\t/* check if the device supports the necessary PMD API */\n-\t\tif (rte_eth_get_monitor_addr(port_id, queue_id,\n-\t\t\t\t&dummy) == -ENOTSUP) {\n-\t\t\tRTE_LOG(DEBUG, POWER, \"The device does not support rte_eth_get_monitor_addr\\n\");\n-\t\t\tret = -ENOTSUP;\n-\t\t\tgoto end;\n-\t\t}\n \t\tclb = clb_umwait;\n \t\tbreak;\n-\t}\n \tcase RTE_POWER_MGMT_TYPE_SCALE:\n-\t{\n-\t\tenum power_management_env env;\n-\t\t/* only PSTATE and ACPI modes are supported */\n-\t\tif (!rte_power_check_env_supported(PM_ENV_ACPI_CPUFREQ) &&\n-\t\t\t\t!rte_power_check_env_supported(\n-\t\t\t\t\tPM_ENV_PSTATE_CPUFREQ)) {\n-\t\t\tRTE_LOG(DEBUG, POWER, \"Neither ACPI nor PSTATE modes are supported\\n\");\n-\t\t\tret = -ENOTSUP;\n+\t\t/* check if we can add a new queue */\n+\t\tret = check_scale(lcore_id);\n+\t\tif (ret < 0)\n \t\t\tgoto end;\n-\t\t}\n-\t\t/* ensure we could initialize the power library */\n-\t\tif (rte_power_init(lcore_id)) {\n-\t\t\tret = -EINVAL;\n-\t\t\tgoto end;\n-\t\t}\n-\t\t/* ensure we initialized the correct env */\n-\t\tenv = rte_power_get_env();\n-\t\tif (env != PM_ENV_ACPI_CPUFREQ &&\n-\t\t\t\tenv != PM_ENV_PSTATE_CPUFREQ) {\n-\t\t\tRTE_LOG(DEBUG, POWER, \"Neither ACPI nor PSTATE modes were initialized\\n\");\n-\t\t\tret = -ENOTSUP;\n-\t\t\tgoto end;\n-\t\t}\n \t\tclb = clb_scale_freq;\n \t\tbreak;\n-\t}\n \tcase RTE_POWER_MGMT_TYPE_PAUSE:\n \t\t/* figure out various time-to-tsc conversions */\n \t\tif (global_data.tsc_per_us == 0)\n@@ -273,11 +455,20 @@ rte_power_ethdev_pmgmt_queue_enable(unsigned int lcore_id, uint16_t port_id,\n \t\tret = -EINVAL;\n \t\tgoto end;\n \t}\n+\t/* add this queue to the list */\n+\tret = queue_list_add(queue_cfg, &qdata);\n+\tif (ret < 0) {\n+\t\tRTE_LOG(DEBUG, POWER, \"Failed to add queue to list: %s\\n\",\n+\t\t\t\tstrerror(-ret));\n+\t\tgoto end;\n+\t}\n \n \t/* initialize data before enabling the callback */\n-\tqueue_cfg->empty_poll_stats = 0;\n-\tqueue_cfg->cb_mode = mode;\n-\tqueue_cfg->pwr_mgmt_state = PMD_MGMT_ENABLED;\n+\tif (queue_cfg->n_queues == 1) {\n+\t\tqueue_cfg->empty_poll_stats = 0;\n+\t\tqueue_cfg->cb_mode = mode;\n+\t\tqueue_cfg->pwr_mgmt_state = PMD_MGMT_ENABLED;\n+\t}\n \tqueue_cfg->cur_cb = rte_eth_add_rx_callback(port_id, queue_id,\n \t\t\tclb, NULL);\n \n@@ -290,7 +481,8 @@ int\n rte_power_ethdev_pmgmt_queue_disable(unsigned int lcore_id,\n \t\tuint16_t port_id, uint16_t queue_id)\n {\n-\tstruct pmd_queue_cfg *queue_cfg;\n+\tconst union queue qdata = {.portid = port_id, .qid = queue_id};\n+\tstruct pmd_core_cfg *queue_cfg;\n \tint ret;\n \n \tRTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);\n@@ -306,13 +498,31 @@ rte_power_ethdev_pmgmt_queue_disable(unsigned int lcore_id,\n \t}\n \n \t/* no need to check queue id as 
wrong queue id would not be enabled */\n-\tqueue_cfg = &port_cfg[port_id][queue_id];\n+\tqueue_cfg = &lcore_cfg[lcore_id];\n+\n+\t/* check if other queues are stopped as well */\n+\tret = cfg_queues_stopped(queue_cfg);\n+\tif (ret != 1) {\n+\t\t/* error means invalid queue, 0 means queue wasn't stopped */\n+\t\treturn ret < 0 ? -EINVAL : -EBUSY;\n+\t}\n \n \tif (queue_cfg->pwr_mgmt_state != PMD_MGMT_ENABLED)\n \t\treturn -EINVAL;\n \n-\t/* stop any callbacks from progressing */\n-\tqueue_cfg->pwr_mgmt_state = PMD_MGMT_DISABLED;\n+\t/*\n+\t * There is no good/easy way to do this without race conditions, so we\n+\t * are just going to throw our hands in the air and hope that the user\n+\t * has read the documentation and has ensured that ports are stopped at\n+\t * the time we enter the API functions.\n+\t */\n+\tret = queue_list_remove(queue_cfg, &qdata);\n+\tif (ret < 0)\n+\t\treturn -ret;\n+\n+\t/* if we've removed all queues from the lists, set state to disabled */\n+\tif (queue_cfg->n_queues == 0)\n+\t\tqueue_cfg->pwr_mgmt_state = PMD_MGMT_DISABLED;\n \n \tswitch (queue_cfg->cb_mode) {\n \tcase RTE_POWER_MGMT_TYPE_MONITOR: /* fall-through */\n@@ -336,3 +546,42 @@ rte_power_ethdev_pmgmt_queue_disable(unsigned int lcore_id,\n \n \treturn 0;\n }\n+\n+int\n+rte_power_ethdev_pmgmt_queue_set_power_save(unsigned int lcore_id,\n+\t\tuint16_t port_id, uint16_t queue_id)\n+{\n+\tconst union queue qdata = {.portid = port_id, .qid = queue_id};\n+\tstruct pmd_core_cfg *queue_cfg;\n+\tint ret;\n+\n+\tRTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -EINVAL);\n+\n+\tif (lcore_id >= RTE_MAX_LCORE || queue_id >= RTE_MAX_QUEUES_PER_PORT)\n+\t\treturn -EINVAL;\n+\n+\t/* no need to check queue id as wrong queue id would not be enabled */\n+\tqueue_cfg = &lcore_cfg[lcore_id];\n+\n+\tif (queue_cfg->pwr_mgmt_state != PMD_MGMT_ENABLED)\n+\t\treturn -EINVAL;\n+\n+\tret = queue_set_power_save(queue_cfg, &qdata);\n+\tif (ret < 0) {\n+\t\tRTE_LOG(DEBUG, POWER, \"Failed to set power save queue: %s\\n\",\n+\t\t\tstrerror(-ret));\n+\t\treturn -ret;\n+\t}\n+\n+\treturn 0;\n+}\n+\n+RTE_INIT(rte_power_ethdev_pmgmt_init) {\n+\tsize_t i;\n+\n+\t/* initialize all tailqs */\n+\tfor (i = 0; i < RTE_DIM(lcore_cfg); i++) {\n+\t\tstruct pmd_core_cfg *cfg = &lcore_cfg[i];\n+\t\tTAILQ_INIT(&cfg->head);\n+\t}\n+}\ndiff --git a/lib/power/rte_power_pmd_mgmt.h b/lib/power/rte_power_pmd_mgmt.h\nindex 444e7b8a66..d6ef8f778a 100644\n--- a/lib/power/rte_power_pmd_mgmt.h\n+++ b/lib/power/rte_power_pmd_mgmt.h\n@@ -90,6 +90,40 @@ int\n rte_power_ethdev_pmgmt_queue_disable(unsigned int lcore_id,\n \t\tuint16_t port_id, uint16_t queue_id);\n \n+/**\n+ * @warning\n+ * @b EXPERIMENTAL: this API may change, or be removed, without prior notice.\n+ *\n+ * Set a specific Ethernet device Rx queue to be the \"power save\" queue for a\n+ * particular lcore. When multiple queues are assigned to a single lcore using\n+ * the `rte_power_ethdev_pmgmt_queue_enable` API, only one of them will trigger\n+ * the power management. In a typical scenario, the last queue to be polled on\n+ * a particular lcore should be designated as power save queue.\n+ *\n+ * @note This function is not thread-safe.\n+ *\n+ * @note When using multiple queues per lcore, calling this function is\n+ *   mandatory. 
If not called, no power management routines would be triggered\n+ *   when the traffic starts.\n+ *\n+ * @warning This function must be called when all affected Ethernet ports are\n+ *   stopped and no Rx/Tx is in progress!\n+ *\n+ * @param lcore_id\n+ *   The lcore the Rx queue is polled from.\n+ * @param port_id\n+ *   The port identifier of the Ethernet device.\n+ * @param queue_id\n+ *   The queue identifier of the Ethernet device.\n+ * @return\n+ *   0 on success\n+ *   <0 on error\n+ */\n+__rte_experimental\n+int\n+rte_power_ethdev_pmgmt_queue_set_power_save(unsigned int lcore_id,\n+\t\tuint16_t port_id, uint16_t queue_id);\n+\n #ifdef __cplusplus\n }\n #endif\ndiff --git a/lib/power/version.map b/lib/power/version.map\nindex b004e3e4a9..105d1d94c2 100644\n--- a/lib/power/version.map\n+++ b/lib/power/version.map\n@@ -38,4 +38,7 @@ EXPERIMENTAL {\n \t# added in 21.02\n \trte_power_ethdev_pmgmt_queue_disable;\n \trte_power_ethdev_pmgmt_queue_enable;\n+\n+\t# added in 21.08\n+\trte_power_ethdev_pmgmt_queue_set_power_save;\n };\n",
    "prefixes": [
        "v3",
        "5/7"
    ]
}
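
Since the Allow header above lists PUT and PATCH alongside GET, an authenticated maintainer can also modify this record. A hedged sketch of a partial update follows, assuming token authentication is enabled on this instance and that fields such as "state" are writable for your account; the token value and the new state value are placeholders, not taken from this page.

import requests

URL = "https://patches.dpdk.org/api/patches/94894/"

# Placeholder token: token authentication is assumed here; consult the
# instance's API documentation for the schemes it actually supports.
HEADERS = {"Authorization": "Token 0123456789abcdef0123456789abcdef01234567"}

# PATCH performs a partial update: only the fields sent in the body change.
# "accepted" is an assumed example value for the patch state.
resp = requests.patch(URL, headers=HEADERS, json={"state": "accepted"}, timeout=30)
resp.raise_for_status()
print(resp.json()["state"])

PUT behaves the same way, except that it expects a complete representation of the resource rather than a subset of fields.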