get:
Show a patch.

patch:
Partially update a patch.

put:
Update a patch.
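
For illustration, here is a minimal Python sketch of how a client might read this patch through the REST API. The URL and field names are taken from the response shown below; the `requests` dependency and the token-authenticated PATCH call are assumptions about the client environment and the instance's configuration, not part of this page.

    import requests

    BASE = "https://patches.dpdk.org/api"

    # Fetch the patch shown on this page (GET requires no authentication).
    resp = requests.get(f"{BASE}/patches/91506/")
    resp.raise_for_status()
    patch = resp.json()

    # Field names match the JSON body below.
    print(patch["name"])                 # "[v4,21/27] event/dlb2: use new ..."
    print(patch["state"])                # "superseded"
    print(patch["submitter"]["email"])   # "timothy.mcdaniel@intel.com"

    # A partial update (PATCH) is a sketch only: it assumes token auth is
    # enabled for this instance and that the caller is a maintainer.
    # requests.patch(
    #     f"{BASE}/patches/91506/",
    #     headers={"Authorization": "Token <your-token>"},
    #     json={"state": "accepted"},
    # )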

GET /api/patches/91506/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 91506,
    "url": "https://patches.dpdk.org/api/patches/91506/?format=api",
    "web_url": "https://patches.dpdk.org/project/dpdk/patch/1618451359-20693-22-git-send-email-timothy.mcdaniel@intel.com/",
    "project": {
        "id": 1,
        "url": "https://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<1618451359-20693-22-git-send-email-timothy.mcdaniel@intel.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/1618451359-20693-22-git-send-email-timothy.mcdaniel@intel.com",
    "date": "2021-04-15T01:49:13",
    "name": "[v4,21/27] event/dlb2: use new implementation of resource file",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "a10816747913db860138125d54643d65856f35ff",
    "submitter": {
        "id": 826,
        "url": "https://patches.dpdk.org/api/people/826/?format=api",
        "name": "Timothy McDaniel",
        "email": "timothy.mcdaniel@intel.com"
    },
    "delegate": {
        "id": 310,
        "url": "https://patches.dpdk.org/api/users/310/?format=api",
        "username": "jerin",
        "first_name": "Jerin",
        "last_name": "Jacob",
        "email": "jerinj@marvell.com"
    },
    "mbox": "https://patches.dpdk.org/project/dpdk/patch/1618451359-20693-22-git-send-email-timothy.mcdaniel@intel.com/mbox/",
    "series": [
        {
            "id": 16383,
            "url": "https://patches.dpdk.org/api/series/16383/?format=api",
            "web_url": "https://patches.dpdk.org/project/dpdk/list/?series=16383",
            "date": "2021-04-15T01:48:52",
            "name": "Add DLB v2.5",
            "version": 4,
            "mbox": "https://patches.dpdk.org/series/16383/mbox/"
        }
    ],
    "comments": "https://patches.dpdk.org/api/patches/91506/comments/",
    "check": "warning",
    "checks": "https://patches.dpdk.org/api/patches/91506/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@inbox.dpdk.org",
        "Delivered-To": "patchwork@inbox.dpdk.org",
        "Received": [
            "from mails.dpdk.org (mails.dpdk.org [217.70.189.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id E6D56A0562;\n\tThu, 15 Apr 2021 03:53:10 +0200 (CEST)",
            "from [217.70.189.124] (localhost [127.0.0.1])\n\tby mails.dpdk.org (Postfix) with ESMTP id 5FD0B161EA2;\n\tThu, 15 Apr 2021 03:51:04 +0200 (CEST)",
            "from mga01.intel.com (mga01.intel.com [192.55.52.88])\n by mails.dpdk.org (Postfix) with ESMTP id 43927161E22\n for <dev@dpdk.org>; Thu, 15 Apr 2021 03:50:45 +0200 (CEST)",
            "from orsmga003.jf.intel.com ([10.7.209.27])\n by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;\n 14 Apr 2021 18:50:44 -0700",
            "from txasoft-yocto.an.intel.com ([10.123.72.192])\n by orsmga003.jf.intel.com with ESMTP; 14 Apr 2021 18:50:42 -0700"
        ],
        "IronPort-SDR": [
            "\n VUMyOHoC6yC4aAm4Izj85O2E9nrtDa7DWd34qlvnl8U/gHhWFS4TOIvLZnNlHTxEvYzzdK0eK5\n rG733CuYZ12w==",
            "\n VO5tIVe/lrxedbwuf2gVlJJASHN7qRXztgP6RgdlXudmbnmBN3qEbdZd781zKa7V68GVtjHKbn\n kHYvWmyn3xzg=="
        ],
        "X-IronPort-AV": [
            "E=McAfee;i=\"6200,9189,9954\"; a=\"215272823\"",
            "E=Sophos;i=\"5.82,223,1613462400\"; d=\"scan'208\";a=\"215272823\"",
            "E=Sophos;i=\"5.82,223,1613462400\"; d=\"scan'208\";a=\"382569898\""
        ],
        "X-ExtLoop1": "1",
        "From": "Timothy McDaniel <timothy.mcdaniel@intel.com>",
        "To": "",
        "Cc": "dev@dpdk.org, erik.g.carrillo@intel.com, harry.van.haaren@intel.com,\n jerinj@marvell.com, thomas@monjalon.net",
        "Date": "Wed, 14 Apr 2021 20:49:13 -0500",
        "Message-Id": "<1618451359-20693-22-git-send-email-timothy.mcdaniel@intel.com>",
        "X-Mailer": "git-send-email 1.7.10",
        "In-Reply-To": "<1618451359-20693-1-git-send-email-timothy.mcdaniel@intel.com>",
        "References": "<20210316221857.2254-2-timothy.mcdaniel@intel.com>\n <1618451359-20693-1-git-send-email-timothy.mcdaniel@intel.com>",
        "Subject": "[dpdk-dev] [PATCH v4 21/27] event/dlb2: use new implementation of\n resource file",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.29",
        "Precedence": "list",
        "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
        "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://mails.dpdk.org/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "The file dlb_resource_new.c now contains all of the low level\nfunctions required to support both DLB v2.0 and DLB v2.5, and\nthe original file (dlb_resource.c) was removed in the previous\ncommit, so rename dlb_resource_new.c to dlb_resource.c, and\nupdate the meson build file so that the new file is built.\n\nSigned-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>\n---\n drivers/event/dlb2/meson.build                |    1 -\n drivers/event/dlb2/pf/base/dlb2_resource.c    | 6205 +++++++++++++++-\n .../event/dlb2/pf/base/dlb2_resource_new.c    | 6235 -----------------\n 3 files changed, 6203 insertions(+), 6238 deletions(-)\n delete mode 100644 drivers/event/dlb2/pf/base/dlb2_resource_new.c",
    "diff": "diff --git a/drivers/event/dlb2/meson.build b/drivers/event/dlb2/meson.build\nindex bded07e06..f22638b8e 100644\n--- a/drivers/event/dlb2/meson.build\n+++ b/drivers/event/dlb2/meson.build\n@@ -14,7 +14,6 @@ sources = files('dlb2.c',\n \t\t'pf/dlb2_main.c',\n \t\t'pf/dlb2_pf.c',\n \t\t'pf/base/dlb2_resource.c',\n-\t\t'pf/base/dlb2_resource_new.c',\n \t\t'rte_pmd_dlb2.c',\n \t\t'dlb2_selftest.c'\n )\ndiff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c\nindex e8a9d52f6..2f66b2c71 100644\n--- a/drivers/event/dlb2/pf/base/dlb2_resource.c\n+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c\n@@ -2,13 +2,15 @@\n  * Copyright(c) 2016-2020 Intel Corporation\n  */\n \n+#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */\n+\n #include \"dlb2_user.h\"\n \n-#include \"dlb2_hw_types.h\"\n+#include \"dlb2_hw_types_new.h\"\n #include \"dlb2_osdep.h\"\n #include \"dlb2_osdep_bitmap.h\"\n #include \"dlb2_osdep_types.h\"\n-#include \"dlb2_regs.h\"\n+#include \"dlb2_regs_new.h\"\n #include \"dlb2_resource.h\"\n \n #include \"../../dlb2_priv.h\"\n@@ -32,3 +34,6202 @@\n #define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \\\n \tDLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)\n \n+/*\n+ * The PF driver cannot assume that a register write will affect subsequent HCW\n+ * writes. To ensure a write completes, the driver must read back a CSR. This\n+ * function only need be called for configuration that can occur after the\n+ * domain has started; prior to starting, applications can't send HCWs.\n+ */\n+static inline void dlb2_flush_csr(struct dlb2_hw *hw)\n+{\n+\tDLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS(hw->ver));\n+}\n+\n+static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)\n+{\n+\tint i;\n+\n+\tdlb2_list_init_head(&domain->used_ldb_queues);\n+\tdlb2_list_init_head(&domain->used_dir_pq_pairs);\n+\tdlb2_list_init_head(&domain->avail_ldb_queues);\n+\tdlb2_list_init_head(&domain->avail_dir_pq_pairs);\n+\n+\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)\n+\t\tdlb2_list_init_head(&domain->used_ldb_ports[i]);\n+\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)\n+\t\tdlb2_list_init_head(&domain->avail_ldb_ports[i]);\n+}\n+\n+static void dlb2_init_fn_rsrc_lists(struct dlb2_function_resources *rsrc)\n+{\n+\tint i;\n+\tdlb2_list_init_head(&rsrc->avail_domains);\n+\tdlb2_list_init_head(&rsrc->used_domains);\n+\tdlb2_list_init_head(&rsrc->avail_ldb_queues);\n+\tdlb2_list_init_head(&rsrc->avail_dir_pq_pairs);\n+\n+\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)\n+\t\tdlb2_list_init_head(&rsrc->avail_ldb_ports[i]);\n+}\n+\n+/**\n+ * dlb2_resource_free() - free device state memory\n+ * @hw: dlb2_hw handle for a particular device.\n+ *\n+ * This function frees software state pointed to by dlb2_hw. This function\n+ * should be called when resetting the device or unloading the driver.\n+ */\n+void dlb2_resource_free(struct dlb2_hw *hw)\n+{\n+\tint i;\n+\n+\tif (hw->pf.avail_hist_list_entries)\n+\t\tdlb2_bitmap_free(hw->pf.avail_hist_list_entries);\n+\n+\tfor (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {\n+\t\tif (hw->vdev[i].avail_hist_list_entries)\n+\t\t\tdlb2_bitmap_free(hw->vdev[i].avail_hist_list_entries);\n+\t}\n+}\n+\n+/**\n+ * dlb2_resource_init() - initialize the device\n+ * @hw: pointer to struct dlb2_hw.\n+ * @ver: device version.\n+ *\n+ * This function initializes the device's software state (pointed to by the hw\n+ * argument) and programs global scheduling QoS registers. 
This function should\n+ * be called during driver initialization, and the dlb2_hw structure should\n+ * be zero-initialized before calling the function.\n+ *\n+ * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the\n+ * device is reset.\n+ *\n+ * Return:\n+ * Returns 0 upon success, <0 otherwise.\n+ */\n+int dlb2_resource_init(struct dlb2_hw *hw, enum dlb2_hw_ver ver)\n+{\n+\tstruct dlb2_list_entry *list;\n+\tunsigned int i;\n+\tint ret;\n+\n+\t/*\n+\t * For optimal load-balancing, ports that map to one or more QIDs in\n+\t * common should not be in numerical sequence. The port->QID mapping is\n+\t * application dependent, but the driver interleaves port IDs as much\n+\t * as possible to reduce the likelihood of sequential ports mapping to\n+\t * the same QID(s). This initial allocation of port IDs maximizes the\n+\t * average distance between an ID and its immediate neighbors (i.e.\n+\t * the distance from 1 to 0 and to 2, the distance from 2 to 1 and to\n+\t * 3, etc.).\n+\t */\n+\tconst u8 init_ldb_port_allocation[DLB2_MAX_NUM_LDB_PORTS] = {\n+\t\t0,  7,  14,  5, 12,  3, 10,  1,  8, 15,  6, 13,  4, 11,  2,  9,\n+\t\t16, 23, 30, 21, 28, 19, 26, 17, 24, 31, 22, 29, 20, 27, 18, 25,\n+\t\t32, 39, 46, 37, 44, 35, 42, 33, 40, 47, 38, 45, 36, 43, 34, 41,\n+\t\t48, 55, 62, 53, 60, 51, 58, 49, 56, 63, 54, 61, 52, 59, 50, 57,\n+\t};\n+\n+\thw->ver = ver;\n+\n+\tdlb2_init_fn_rsrc_lists(&hw->pf);\n+\n+\tfor (i = 0; i < DLB2_MAX_NUM_VDEVS; i++)\n+\t\tdlb2_init_fn_rsrc_lists(&hw->vdev[i]);\n+\n+\tfor (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {\n+\t\tdlb2_init_domain_rsrc_lists(&hw->domains[i]);\n+\t\thw->domains[i].parent_func = &hw->pf;\n+\t}\n+\n+\t/* Give all resources to the PF driver */\n+\thw->pf.num_avail_domains = DLB2_MAX_NUM_DOMAINS;\n+\tfor (i = 0; i < hw->pf.num_avail_domains; i++) {\n+\t\tlist = &hw->domains[i].func_list;\n+\n+\t\tdlb2_list_add(&hw->pf.avail_domains, list);\n+\t}\n+\n+\thw->pf.num_avail_ldb_queues = DLB2_MAX_NUM_LDB_QUEUES;\n+\tfor (i = 0; i < hw->pf.num_avail_ldb_queues; i++) {\n+\t\tlist = &hw->rsrcs.ldb_queues[i].func_list;\n+\n+\t\tdlb2_list_add(&hw->pf.avail_ldb_queues, list);\n+\t}\n+\n+\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)\n+\t\thw->pf.num_avail_ldb_ports[i] =\n+\t\t\tDLB2_MAX_NUM_LDB_PORTS / DLB2_NUM_COS_DOMAINS;\n+\n+\tfor (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {\n+\t\tint cos_id = i >> DLB2_NUM_COS_DOMAINS;\n+\t\tstruct dlb2_ldb_port *port;\n+\n+\t\tport = &hw->rsrcs.ldb_ports[init_ldb_port_allocation[i]];\n+\n+\t\tdlb2_list_add(&hw->pf.avail_ldb_ports[cos_id],\n+\t\t\t      &port->func_list);\n+\t}\n+\n+\thw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS(hw->ver);\n+\tfor (i = 0; i < hw->pf.num_avail_dir_pq_pairs; i++) {\n+\t\tlist = &hw->rsrcs.dir_pq_pairs[i].func_list;\n+\n+\t\tdlb2_list_add(&hw->pf.avail_dir_pq_pairs, list);\n+\t}\n+\n+\tif (hw->ver == DLB2_HW_V2) {\n+\t\thw->pf.num_avail_qed_entries = DLB2_MAX_NUM_LDB_CREDITS;\n+\t\thw->pf.num_avail_dqed_entries =\n+\t\t\tDLB2_MAX_NUM_DIR_CREDITS(hw->ver);\n+\t} else {\n+\t\thw->pf.num_avail_entries = DLB2_MAX_NUM_CREDITS(hw->ver);\n+\t}\n+\n+\thw->pf.num_avail_aqed_entries = DLB2_MAX_NUM_AQED_ENTRIES;\n+\n+\tret = dlb2_bitmap_alloc(&hw->pf.avail_hist_list_entries,\n+\t\t\t\tDLB2_MAX_NUM_HIST_LIST_ENTRIES);\n+\tif (ret)\n+\t\tgoto unwind;\n+\n+\tret = dlb2_bitmap_fill(hw->pf.avail_hist_list_entries);\n+\tif (ret)\n+\t\tgoto unwind;\n+\n+\tfor (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {\n+\t\tret = 
dlb2_bitmap_alloc(&hw->vdev[i].avail_hist_list_entries,\n+\t\t\t\t\tDLB2_MAX_NUM_HIST_LIST_ENTRIES);\n+\t\tif (ret)\n+\t\t\tgoto unwind;\n+\n+\t\tret = dlb2_bitmap_zero(hw->vdev[i].avail_hist_list_entries);\n+\t\tif (ret)\n+\t\t\tgoto unwind;\n+\t}\n+\n+\t/* Initialize the hardware resource IDs */\n+\tfor (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {\n+\t\thw->domains[i].id.phys_id = i;\n+\t\thw->domains[i].id.vdev_owned = false;\n+\t}\n+\n+\tfor (i = 0; i < DLB2_MAX_NUM_LDB_QUEUES; i++) {\n+\t\thw->rsrcs.ldb_queues[i].id.phys_id = i;\n+\t\thw->rsrcs.ldb_queues[i].id.vdev_owned = false;\n+\t}\n+\n+\tfor (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {\n+\t\thw->rsrcs.ldb_ports[i].id.phys_id = i;\n+\t\thw->rsrcs.ldb_ports[i].id.vdev_owned = false;\n+\t}\n+\n+\tfor (i = 0; i < DLB2_MAX_NUM_DIR_PORTS(hw->ver); i++) {\n+\t\thw->rsrcs.dir_pq_pairs[i].id.phys_id = i;\n+\t\thw->rsrcs.dir_pq_pairs[i].id.vdev_owned = false;\n+\t}\n+\n+\tfor (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {\n+\t\thw->rsrcs.sn_groups[i].id = i;\n+\t\t/* Default mode (0) is 64 sequence numbers per queue */\n+\t\thw->rsrcs.sn_groups[i].mode = 0;\n+\t\thw->rsrcs.sn_groups[i].sequence_numbers_per_queue = 64;\n+\t\thw->rsrcs.sn_groups[i].slot_use_bitmap = 0;\n+\t}\n+\n+\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)\n+\t\thw->cos_reservation[i] = 100 / DLB2_NUM_COS_DOMAINS;\n+\n+\treturn 0;\n+\n+unwind:\n+\tdlb2_resource_free(hw);\n+\n+\treturn ret;\n+}\n+\n+/**\n+ * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic\n+ * @hw: dlb2_hw handle for a particular device.\n+ * @ver: device version.\n+ *\n+ * Clearing the PMCSR must be done at initialization to make the device fully\n+ * operational.\n+ */\n+void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver)\n+{\n+\tu32 pmcsr_dis;\n+\n+\tpmcsr_dis = DLB2_CSR_RD(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver));\n+\n+\tDLB2_BITS_CLR(pmcsr_dis, DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE);\n+\n+\tDLB2_CSR_WR(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver), pmcsr_dis);\n+}\n+\n+/**\n+ * dlb2_hw_get_num_resources() - query the PCI function's available resources\n+ * @hw: dlb2_hw handle for a particular device.\n+ * @arg: pointer to resource counts.\n+ * @vdev_req: indicates whether this request came from a vdev.\n+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.\n+ *\n+ * This function returns the number of available resources for the PF or for a\n+ * VF.\n+ *\n+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual\n+ * device.\n+ *\n+ * Return:\n+ * Returns 0 upon success, -EINVAL if vdev_req is true and vdev_id is\n+ * invalid.\n+ */\n+int dlb2_hw_get_num_resources(struct dlb2_hw *hw,\n+\t\t\t      struct dlb2_get_num_resources_args *arg,\n+\t\t\t      bool vdev_req,\n+\t\t\t      unsigned int vdev_id)\n+{\n+\tstruct dlb2_function_resources *rsrcs;\n+\tstruct dlb2_bitmap *map;\n+\tint i;\n+\n+\tif (vdev_req && vdev_id >= DLB2_MAX_NUM_VDEVS)\n+\t\treturn -EINVAL;\n+\n+\tif (vdev_req)\n+\t\trsrcs = &hw->vdev[vdev_id];\n+\telse\n+\t\trsrcs = &hw->pf;\n+\n+\targ->num_sched_domains = rsrcs->num_avail_domains;\n+\n+\targ->num_ldb_queues = rsrcs->num_avail_ldb_queues;\n+\n+\targ->num_ldb_ports = 0;\n+\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)\n+\t\targ->num_ldb_ports += rsrcs->num_avail_ldb_ports[i];\n+\n+\targ->num_cos_ldb_ports[0] = rsrcs->num_avail_ldb_ports[0];\n+\targ->num_cos_ldb_ports[1] = rsrcs->num_avail_ldb_ports[1];\n+\targ->num_cos_ldb_ports[2] = rsrcs->num_avail_ldb_ports[2];\n+\targ->num_cos_ldb_ports[3] = 
rsrcs->num_avail_ldb_ports[3];\n+\n+\targ->num_dir_ports = rsrcs->num_avail_dir_pq_pairs;\n+\n+\targ->num_atomic_inflights = rsrcs->num_avail_aqed_entries;\n+\n+\tmap = rsrcs->avail_hist_list_entries;\n+\n+\targ->num_hist_list_entries = dlb2_bitmap_count(map);\n+\n+\targ->max_contiguous_hist_list_entries =\n+\t\tdlb2_bitmap_longest_set_range(map);\n+\n+\tif (hw->ver == DLB2_HW_V2) {\n+\t\targ->num_ldb_credits = rsrcs->num_avail_qed_entries;\n+\t\targ->num_dir_credits = rsrcs->num_avail_dqed_entries;\n+\t} else {\n+\t\targ->num_credits = rsrcs->num_avail_entries;\n+\t}\n+\treturn 0;\n+}\n+\n+static void dlb2_configure_domain_credits_v2_5(struct dlb2_hw *hw,\n+\t\t\t\t\t       struct dlb2_hw_domain *domain)\n+{\n+\tu32 reg = 0;\n+\n+\tDLB2_BITS_SET(reg, domain->num_credits, DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);\n+\tDLB2_CSR_WR(hw, DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id), reg);\n+}\n+\n+static void dlb2_configure_domain_credits_v2(struct dlb2_hw *hw,\n+\t\t\t\t\t     struct dlb2_hw_domain *domain)\n+{\n+\tu32 reg = 0;\n+\n+\tDLB2_BITS_SET(reg, domain->num_ldb_credits,\n+\t\t      DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);\n+\tDLB2_CSR_WR(hw, DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id), reg);\n+\n+\treg = 0;\n+\tDLB2_BITS_SET(reg, domain->num_dir_credits,\n+\t\t      DLB2_CHP_CFG_DIR_VAS_CRD_COUNT);\n+\tDLB2_CSR_WR(hw, DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id), reg);\n+}\n+\n+static void dlb2_configure_domain_credits(struct dlb2_hw *hw,\n+\t\t\t\t\t  struct dlb2_hw_domain *domain)\n+{\n+\tif (hw->ver == DLB2_HW_V2)\n+\t\tdlb2_configure_domain_credits_v2(hw, domain);\n+\telse\n+\t\tdlb2_configure_domain_credits_v2_5(hw, domain);\n+}\n+\n+static int dlb2_attach_credits(struct dlb2_function_resources *rsrcs,\n+\t\t\t       struct dlb2_hw_domain *domain,\n+\t\t\t       u32 num_credits,\n+\t\t\t       struct dlb2_cmd_response *resp)\n+{\n+\tif (rsrcs->num_avail_entries < num_credits) {\n+\t\tresp->status = DLB2_ST_CREDITS_UNAVAILABLE;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\trsrcs->num_avail_entries -= num_credits;\n+\tdomain->num_credits += num_credits;\n+\treturn 0;\n+}\n+\n+static struct dlb2_ldb_port *\n+dlb2_get_next_ldb_port(struct dlb2_hw *hw,\n+\t\t       struct dlb2_function_resources *rsrcs,\n+\t\t       u32 domain_id,\n+\t\t       u32 cos_id)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_ldb_port *port;\n+\tRTE_SET_USED(iter);\n+\n+\t/*\n+\t * To reduce the odds of consecutive load-balanced ports mapping to the\n+\t * same queue(s), the driver attempts to allocate ports whose neighbors\n+\t * are owned by a different domain.\n+\t */\n+\tDLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {\n+\t\tu32 next, prev;\n+\t\tu32 phys_id;\n+\n+\t\tphys_id = port->id.phys_id;\n+\t\tnext = phys_id + 1;\n+\t\tprev = phys_id - 1;\n+\n+\t\tif (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)\n+\t\t\tnext = 0;\n+\t\tif (phys_id == 0)\n+\t\t\tprev = DLB2_MAX_NUM_LDB_PORTS - 1;\n+\n+\t\tif (!hw->rsrcs.ldb_ports[next].owned ||\n+\t\t    hw->rsrcs.ldb_ports[next].domain_id.phys_id == domain_id)\n+\t\t\tcontinue;\n+\n+\t\tif (!hw->rsrcs.ldb_ports[prev].owned ||\n+\t\t    hw->rsrcs.ldb_ports[prev].domain_id.phys_id == domain_id)\n+\t\t\tcontinue;\n+\n+\t\treturn port;\n+\t}\n+\n+\t/*\n+\t * Failing that, the driver looks for a port with one neighbor owned by\n+\t * a different domain and the other unallocated.\n+\t */\n+\tDLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {\n+\t\tu32 next, prev;\n+\t\tu32 phys_id;\n+\n+\t\tphys_id = port->id.phys_id;\n+\t\tnext = phys_id + 1;\n+\t\tprev = phys_id - 
1;\n+\n+\t\tif (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)\n+\t\t\tnext = 0;\n+\t\tif (phys_id == 0)\n+\t\t\tprev = DLB2_MAX_NUM_LDB_PORTS - 1;\n+\n+\t\tif (!hw->rsrcs.ldb_ports[prev].owned &&\n+\t\t    hw->rsrcs.ldb_ports[next].owned &&\n+\t\t    hw->rsrcs.ldb_ports[next].domain_id.phys_id != domain_id)\n+\t\t\treturn port;\n+\n+\t\tif (!hw->rsrcs.ldb_ports[next].owned &&\n+\t\t    hw->rsrcs.ldb_ports[prev].owned &&\n+\t\t    hw->rsrcs.ldb_ports[prev].domain_id.phys_id != domain_id)\n+\t\t\treturn port;\n+\t}\n+\n+\t/*\n+\t * Failing that, the driver looks for a port with both neighbors\n+\t * unallocated.\n+\t */\n+\tDLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {\n+\t\tu32 next, prev;\n+\t\tu32 phys_id;\n+\n+\t\tphys_id = port->id.phys_id;\n+\t\tnext = phys_id + 1;\n+\t\tprev = phys_id - 1;\n+\n+\t\tif (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)\n+\t\t\tnext = 0;\n+\t\tif (phys_id == 0)\n+\t\t\tprev = DLB2_MAX_NUM_LDB_PORTS - 1;\n+\n+\t\tif (!hw->rsrcs.ldb_ports[prev].owned &&\n+\t\t    !hw->rsrcs.ldb_ports[next].owned)\n+\t\t\treturn port;\n+\t}\n+\n+\t/* If all else fails, the driver returns the next available port. */\n+\treturn DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_ports[cos_id],\n+\t\t\t\t   typeof(*port));\n+}\n+\n+static int __dlb2_attach_ldb_ports(struct dlb2_hw *hw,\n+\t\t\t\t   struct dlb2_function_resources *rsrcs,\n+\t\t\t\t   struct dlb2_hw_domain *domain,\n+\t\t\t\t   u32 num_ports,\n+\t\t\t\t   u32 cos_id,\n+\t\t\t\t   struct dlb2_cmd_response *resp)\n+{\n+\tunsigned int i;\n+\n+\tif (rsrcs->num_avail_ldb_ports[cos_id] < num_ports) {\n+\t\tresp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tfor (i = 0; i < num_ports; i++) {\n+\t\tstruct dlb2_ldb_port *port;\n+\n+\t\tport = dlb2_get_next_ldb_port(hw, rsrcs,\n+\t\t\t\t\t      domain->id.phys_id, cos_id);\n+\t\tif (port == NULL) {\n+\t\t\tDLB2_HW_ERR(hw,\n+\t\t\t\t    \"[%s()] Internal error: domain validation failed\\n\",\n+\t\t\t\t    __func__);\n+\t\t\treturn -EFAULT;\n+\t\t}\n+\n+\t\tdlb2_list_del(&rsrcs->avail_ldb_ports[cos_id],\n+\t\t\t      &port->func_list);\n+\n+\t\tport->domain_id = domain->id;\n+\t\tport->owned = true;\n+\n+\t\tdlb2_list_add(&domain->avail_ldb_ports[cos_id],\n+\t\t\t      &port->domain_list);\n+\t}\n+\n+\trsrcs->num_avail_ldb_ports[cos_id] -= num_ports;\n+\n+\treturn 0;\n+}\n+\n+\n+static int dlb2_attach_ldb_ports(struct dlb2_hw *hw,\n+\t\t\t\t struct dlb2_function_resources *rsrcs,\n+\t\t\t\t struct dlb2_hw_domain *domain,\n+\t\t\t\t struct dlb2_create_sched_domain_args *args,\n+\t\t\t\t struct dlb2_cmd_response *resp)\n+{\n+\tunsigned int i, j;\n+\tint ret;\n+\n+\tif (args->cos_strict) {\n+\t\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n+\t\t\tu32 num = args->num_cos_ldb_ports[i];\n+\n+\t\t\t/* Allocate ports from specific classes-of-service */\n+\t\t\tret = __dlb2_attach_ldb_ports(hw,\n+\t\t\t\t\t\t      rsrcs,\n+\t\t\t\t\t\t      domain,\n+\t\t\t\t\t\t      num,\n+\t\t\t\t\t\t      i,\n+\t\t\t\t\t\t      resp);\n+\t\t\tif (ret)\n+\t\t\t\treturn ret;\n+\t\t}\n+\t} else {\n+\t\tunsigned int k;\n+\t\tu32 cos_id;\n+\n+\t\t/*\n+\t\t * Attempt to allocate from specific class-of-service, but\n+\t\t * fallback to the other classes if that fails.\n+\t\t */\n+\t\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n+\t\t\tfor (j = 0; j < args->num_cos_ldb_ports[i]; j++) {\n+\t\t\t\tfor (k = 0; k < DLB2_NUM_COS_DOMAINS; k++) {\n+\t\t\t\t\tcos_id = (i + k) % DLB2_NUM_COS_DOMAINS;\n+\n+\t\t\t\t\tret = __dlb2_attach_ldb_ports(hw,\n+\t\t\t\t\t\t\t\t      
rsrcs,\n+\t\t\t\t\t\t\t\t      domain,\n+\t\t\t\t\t\t\t\t      1,\n+\t\t\t\t\t\t\t\t      cos_id,\n+\t\t\t\t\t\t\t\t      resp);\n+\t\t\t\t\tif (ret == 0)\n+\t\t\t\t\t\tbreak;\n+\t\t\t\t}\n+\n+\t\t\t\tif (ret)\n+\t\t\t\t\treturn ret;\n+\t\t\t}\n+\t\t}\n+\t}\n+\n+\t/* Allocate num_ldb_ports from any class-of-service */\n+\tfor (i = 0; i < args->num_ldb_ports; i++) {\n+\t\tfor (j = 0; j < DLB2_NUM_COS_DOMAINS; j++) {\n+\t\t\tret = __dlb2_attach_ldb_ports(hw,\n+\t\t\t\t\t\t      rsrcs,\n+\t\t\t\t\t\t      domain,\n+\t\t\t\t\t\t      1,\n+\t\t\t\t\t\t      j,\n+\t\t\t\t\t\t      resp);\n+\t\t\tif (ret == 0)\n+\t\t\t\tbreak;\n+\t\t}\n+\n+\t\tif (ret)\n+\t\t\treturn ret;\n+\t}\n+\n+\treturn 0;\n+}\n+\n+static int dlb2_attach_dir_ports(struct dlb2_hw *hw,\n+\t\t\t\t struct dlb2_function_resources *rsrcs,\n+\t\t\t\t struct dlb2_hw_domain *domain,\n+\t\t\t\t u32 num_ports,\n+\t\t\t\t struct dlb2_cmd_response *resp)\n+{\n+\tunsigned int i;\n+\n+\tif (rsrcs->num_avail_dir_pq_pairs < num_ports) {\n+\t\tresp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tfor (i = 0; i < num_ports; i++) {\n+\t\tstruct dlb2_dir_pq_pair *port;\n+\n+\t\tport = DLB2_FUNC_LIST_HEAD(rsrcs->avail_dir_pq_pairs,\n+\t\t\t\t\t   typeof(*port));\n+\t\tif (port == NULL) {\n+\t\t\tDLB2_HW_ERR(hw,\n+\t\t\t\t    \"[%s()] Internal error: domain validation failed\\n\",\n+\t\t\t\t    __func__);\n+\t\t\treturn -EFAULT;\n+\t\t}\n+\n+\t\tdlb2_list_del(&rsrcs->avail_dir_pq_pairs, &port->func_list);\n+\n+\t\tport->domain_id = domain->id;\n+\t\tport->owned = true;\n+\n+\t\tdlb2_list_add(&domain->avail_dir_pq_pairs, &port->domain_list);\n+\t}\n+\n+\trsrcs->num_avail_dir_pq_pairs -= num_ports;\n+\n+\treturn 0;\n+}\n+\n+static int dlb2_attach_ldb_credits(struct dlb2_function_resources *rsrcs,\n+\t\t\t\t   struct dlb2_hw_domain *domain,\n+\t\t\t\t   u32 num_credits,\n+\t\t\t\t   struct dlb2_cmd_response *resp)\n+{\n+\tif (rsrcs->num_avail_qed_entries < num_credits) {\n+\t\tresp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\trsrcs->num_avail_qed_entries -= num_credits;\n+\tdomain->num_ldb_credits += num_credits;\n+\treturn 0;\n+}\n+\n+static int dlb2_attach_dir_credits(struct dlb2_function_resources *rsrcs,\n+\t\t\t\t   struct dlb2_hw_domain *domain,\n+\t\t\t\t   u32 num_credits,\n+\t\t\t\t   struct dlb2_cmd_response *resp)\n+{\n+\tif (rsrcs->num_avail_dqed_entries < num_credits) {\n+\t\tresp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\trsrcs->num_avail_dqed_entries -= num_credits;\n+\tdomain->num_dir_credits += num_credits;\n+\treturn 0;\n+}\n+\n+\n+static int dlb2_attach_atomic_inflights(struct dlb2_function_resources *rsrcs,\n+\t\t\t\t\tstruct dlb2_hw_domain *domain,\n+\t\t\t\t\tu32 num_atomic_inflights,\n+\t\t\t\t\tstruct dlb2_cmd_response *resp)\n+{\n+\tif (rsrcs->num_avail_aqed_entries < num_atomic_inflights) {\n+\t\tresp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\trsrcs->num_avail_aqed_entries -= num_atomic_inflights;\n+\tdomain->num_avail_aqed_entries += num_atomic_inflights;\n+\treturn 0;\n+}\n+\n+static int\n+dlb2_attach_domain_hist_list_entries(struct dlb2_function_resources *rsrcs,\n+\t\t\t\t     struct dlb2_hw_domain *domain,\n+\t\t\t\t     u32 num_hist_list_entries,\n+\t\t\t\t     struct dlb2_cmd_response *resp)\n+{\n+\tstruct dlb2_bitmap *bitmap;\n+\tint base;\n+\n+\tif (num_hist_list_entries) {\n+\t\tbitmap = rsrcs->avail_hist_list_entries;\n+\n+\t\tbase = 
dlb2_bitmap_find_set_bit_range(bitmap,\n+\t\t\t\t\t\t      num_hist_list_entries);\n+\t\tif (base < 0)\n+\t\t\tgoto error;\n+\n+\t\tdomain->total_hist_list_entries = num_hist_list_entries;\n+\t\tdomain->avail_hist_list_entries = num_hist_list_entries;\n+\t\tdomain->hist_list_entry_base = base;\n+\t\tdomain->hist_list_entry_offset = 0;\n+\n+\t\tdlb2_bitmap_clear_range(bitmap, base, num_hist_list_entries);\n+\t}\n+\treturn 0;\n+\n+error:\n+\tresp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;\n+\treturn -EINVAL;\n+}\n+\n+static int dlb2_attach_ldb_queues(struct dlb2_hw *hw,\n+\t\t\t\t  struct dlb2_function_resources *rsrcs,\n+\t\t\t\t  struct dlb2_hw_domain *domain,\n+\t\t\t\t  u32 num_queues,\n+\t\t\t\t  struct dlb2_cmd_response *resp)\n+{\n+\tunsigned int i;\n+\n+\tif (rsrcs->num_avail_ldb_queues < num_queues) {\n+\t\tresp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tfor (i = 0; i < num_queues; i++) {\n+\t\tstruct dlb2_ldb_queue *queue;\n+\n+\t\tqueue = DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_queues,\n+\t\t\t\t\t    typeof(*queue));\n+\t\tif (queue == NULL) {\n+\t\t\tDLB2_HW_ERR(hw,\n+\t\t\t\t    \"[%s()] Internal error: domain validation failed\\n\",\n+\t\t\t\t    __func__);\n+\t\t\treturn -EFAULT;\n+\t\t}\n+\n+\t\tdlb2_list_del(&rsrcs->avail_ldb_queues, &queue->func_list);\n+\n+\t\tqueue->domain_id = domain->id;\n+\t\tqueue->owned = true;\n+\n+\t\tdlb2_list_add(&domain->avail_ldb_queues, &queue->domain_list);\n+\t}\n+\n+\trsrcs->num_avail_ldb_queues -= num_queues;\n+\n+\treturn 0;\n+}\n+\n+static int\n+dlb2_domain_attach_resources(struct dlb2_hw *hw,\n+\t\t\t     struct dlb2_function_resources *rsrcs,\n+\t\t\t     struct dlb2_hw_domain *domain,\n+\t\t\t     struct dlb2_create_sched_domain_args *args,\n+\t\t\t     struct dlb2_cmd_response *resp)\n+{\n+\tint ret;\n+\n+\tret = dlb2_attach_ldb_queues(hw,\n+\t\t\t\t     rsrcs,\n+\t\t\t\t     domain,\n+\t\t\t\t     args->num_ldb_queues,\n+\t\t\t\t     resp);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\tret = dlb2_attach_ldb_ports(hw,\n+\t\t\t\t    rsrcs,\n+\t\t\t\t    domain,\n+\t\t\t\t    args,\n+\t\t\t\t    resp);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\tret = dlb2_attach_dir_ports(hw,\n+\t\t\t\t    rsrcs,\n+\t\t\t\t    domain,\n+\t\t\t\t    args->num_dir_ports,\n+\t\t\t\t    resp);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\tif (hw->ver == DLB2_HW_V2) {\n+\t\tret = dlb2_attach_ldb_credits(rsrcs,\n+\t\t\t\t\t      domain,\n+\t\t\t\t\t      args->num_ldb_credits,\n+\t\t\t\t\t      resp);\n+\t\tif (ret)\n+\t\t\treturn ret;\n+\n+\t\tret = dlb2_attach_dir_credits(rsrcs,\n+\t\t\t\t\t      domain,\n+\t\t\t\t\t      args->num_dir_credits,\n+\t\t\t\t\t      resp);\n+\t\tif (ret)\n+\t\t\treturn ret;\n+\t} else {  /* DLB 2.5 */\n+\t\tret = dlb2_attach_credits(rsrcs,\n+\t\t\t\t\t  domain,\n+\t\t\t\t\t  args->num_credits,\n+\t\t\t\t\t  resp);\n+\t\tif (ret)\n+\t\t\treturn ret;\n+\t}\n+\n+\tret = dlb2_attach_domain_hist_list_entries(rsrcs,\n+\t\t\t\t\t\t   domain,\n+\t\t\t\t\t\t   args->num_hist_list_entries,\n+\t\t\t\t\t\t   resp);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\tret = dlb2_attach_atomic_inflights(rsrcs,\n+\t\t\t\t\t   domain,\n+\t\t\t\t\t   args->num_atomic_inflights,\n+\t\t\t\t\t   resp);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\tdlb2_configure_domain_credits(hw, domain);\n+\n+\tdomain->configured = true;\n+\n+\tdomain->started = false;\n+\n+\trsrcs->num_avail_domains--;\n+\n+\treturn 0;\n+}\n+\n+static int\n+dlb2_verify_create_sched_dom_args(struct dlb2_function_resources *rsrcs,\n+\t\t\t\t  struct dlb2_create_sched_domain_args 
*args,\n+\t\t\t\t  struct dlb2_cmd_response *resp,\n+\t\t\t\t  struct dlb2_hw *hw,\n+\t\t\t\t  struct dlb2_hw_domain **out_domain)\n+{\n+\tu32 num_avail_ldb_ports, req_ldb_ports;\n+\tstruct dlb2_bitmap *avail_hl_entries;\n+\tunsigned int max_contig_hl_range;\n+\tstruct dlb2_hw_domain *domain;\n+\tint i;\n+\n+\tavail_hl_entries = rsrcs->avail_hist_list_entries;\n+\n+\tmax_contig_hl_range = dlb2_bitmap_longest_set_range(avail_hl_entries);\n+\n+\tnum_avail_ldb_ports = 0;\n+\treq_ldb_ports = 0;\n+\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n+\t\tnum_avail_ldb_ports += rsrcs->num_avail_ldb_ports[i];\n+\n+\t\treq_ldb_ports += args->num_cos_ldb_ports[i];\n+\t}\n+\n+\treq_ldb_ports += args->num_ldb_ports;\n+\n+\tif (rsrcs->num_avail_domains < 1) {\n+\t\tresp->status = DLB2_ST_DOMAIN_UNAVAILABLE;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tdomain = DLB2_FUNC_LIST_HEAD(rsrcs->avail_domains, typeof(*domain));\n+\tif (domain == NULL) {\n+\t\tresp->status = DLB2_ST_DOMAIN_UNAVAILABLE;\n+\t\treturn -EFAULT;\n+\t}\n+\n+\tif (rsrcs->num_avail_ldb_queues < args->num_ldb_queues) {\n+\t\tresp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (req_ldb_ports > num_avail_ldb_ports) {\n+\t\tresp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tfor (i = 0; args->cos_strict && i < DLB2_NUM_COS_DOMAINS; i++) {\n+\t\tif (args->num_cos_ldb_ports[i] >\n+\t\t    rsrcs->num_avail_ldb_ports[i]) {\n+\t\t\tresp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;\n+\t\t\treturn -EINVAL;\n+\t\t}\n+\t}\n+\n+\tif (args->num_ldb_queues > 0 && req_ldb_ports == 0) {\n+\t\tresp->status = DLB2_ST_LDB_PORT_REQUIRED_FOR_LDB_QUEUES;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (rsrcs->num_avail_dir_pq_pairs < args->num_dir_ports) {\n+\t\tresp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;\n+\t\treturn -EINVAL;\n+\t}\n+\tif (hw->ver == DLB2_HW_V2_5) {\n+\t\tif (rsrcs->num_avail_entries < args->num_credits) {\n+\t\t\tresp->status = DLB2_ST_CREDITS_UNAVAILABLE;\n+\t\t\treturn -EINVAL;\n+\t\t}\n+\t} else {\n+\t\tif (rsrcs->num_avail_qed_entries < args->num_ldb_credits) {\n+\t\t\tresp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;\n+\t\t\treturn -EINVAL;\n+\t\t}\n+\t\tif (rsrcs->num_avail_dqed_entries < args->num_dir_credits) {\n+\t\t\tresp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;\n+\t\t\treturn -EINVAL;\n+\t\t}\n+\t}\n+\n+\tif (rsrcs->num_avail_aqed_entries < args->num_atomic_inflights) {\n+\t\tresp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (max_contig_hl_range < args->num_hist_list_entries) {\n+\t\tresp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\t*out_domain = domain;\n+\n+\treturn 0;\n+}\n+\n+static void\n+dlb2_log_create_sched_domain_args(struct dlb2_hw *hw,\n+\t\t\t\t  struct dlb2_create_sched_domain_args *args,\n+\t\t\t\t  bool vdev_req,\n+\t\t\t\t  unsigned int vdev_id)\n+{\n+\tDLB2_HW_DBG(hw, \"DLB2 create sched domain arguments:\\n\");\n+\tif (vdev_req)\n+\t\tDLB2_HW_DBG(hw, \"(Request from vdev %d)\\n\", vdev_id);\n+\tDLB2_HW_DBG(hw, \"\\tNumber of LDB queues:          %d\\n\",\n+\t\t    args->num_ldb_queues);\n+\tDLB2_HW_DBG(hw, \"\\tNumber of LDB ports (any CoS): %d\\n\",\n+\t\t    args->num_ldb_ports);\n+\tDLB2_HW_DBG(hw, \"\\tNumber of LDB ports (CoS 0):   %d\\n\",\n+\t\t    args->num_cos_ldb_ports[0]);\n+\tDLB2_HW_DBG(hw, \"\\tNumber of LDB ports (CoS 1):   %d\\n\",\n+\t\t    args->num_cos_ldb_ports[1]);\n+\tDLB2_HW_DBG(hw, \"\\tNumber of LDB ports (CoS 2):   %d\\n\",\n+\t\t    
args->num_cos_ldb_ports[2]);\n+\tDLB2_HW_DBG(hw, \"\\tNumber of LDB ports (CoS 3):   %d\\n\",\n+\t\t    args->num_cos_ldb_ports[3]);\n+\tDLB2_HW_DBG(hw, \"\\tStrict CoS allocation:         %d\\n\",\n+\t\t    args->cos_strict);\n+\tDLB2_HW_DBG(hw, \"\\tNumber of DIR ports:           %d\\n\",\n+\t\t    args->num_dir_ports);\n+\tDLB2_HW_DBG(hw, \"\\tNumber of ATM inflights:       %d\\n\",\n+\t\t    args->num_atomic_inflights);\n+\tDLB2_HW_DBG(hw, \"\\tNumber of hist list entries:   %d\\n\",\n+\t\t    args->num_hist_list_entries);\n+\tif (hw->ver == DLB2_HW_V2) {\n+\t\tDLB2_HW_DBG(hw, \"\\tNumber of LDB credits:         %d\\n\",\n+\t\t\t    args->num_ldb_credits);\n+\t\tDLB2_HW_DBG(hw, \"\\tNumber of DIR credits:         %d\\n\",\n+\t\t\t    args->num_dir_credits);\n+\t} else {\n+\t\tDLB2_HW_DBG(hw, \"\\tNumber of credits:         %d\\n\",\n+\t\t\t    args->num_credits);\n+\t}\n+}\n+\n+/**\n+ * dlb2_hw_create_sched_domain() - create a scheduling domain\n+ * @hw: dlb2_hw handle for a particular device.\n+ * @args: scheduling domain creation arguments.\n+ * @resp: response structure.\n+ * @vdev_req: indicates whether this request came from a vdev.\n+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.\n+ *\n+ * This function creates a scheduling domain containing the resources specified\n+ * in args. The individual resources (queues, ports, credits) can be configured\n+ * after creating a scheduling domain.\n+ *\n+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual\n+ * device.\n+ *\n+ * Return:\n+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is\n+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id\n+ * contains the domain ID.\n+ *\n+ * resp->id contains a virtual ID if vdev_req is true.\n+ *\n+ * Errors:\n+ * EINVAL - A requested resource is unavailable, or the requested domain name\n+ *\t    is already in use.\n+ * EFAULT - Internal error (resp->status not set).\n+ */\n+int dlb2_hw_create_sched_domain(struct dlb2_hw *hw,\n+\t\t\t\tstruct dlb2_create_sched_domain_args *args,\n+\t\t\t\tstruct dlb2_cmd_response *resp,\n+\t\t\t\tbool vdev_req,\n+\t\t\t\tunsigned int vdev_id)\n+{\n+\tstruct dlb2_function_resources *rsrcs;\n+\tstruct dlb2_hw_domain *domain;\n+\tint ret;\n+\n+\trsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;\n+\n+\tdlb2_log_create_sched_domain_args(hw, args, vdev_req, vdev_id);\n+\n+\t/*\n+\t * Verify that hardware resources are available before attempting to\n+\t * satisfy the request. This simplifies the error unwinding code.\n+\t */\n+\tret = dlb2_verify_create_sched_dom_args(rsrcs, args, resp, hw, &domain);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\tdlb2_init_domain_rsrc_lists(domain);\n+\n+\tret = dlb2_domain_attach_resources(hw, rsrcs, domain, args, resp);\n+\tif (ret) {\n+\t\tDLB2_HW_ERR(hw,\n+\t\t\t    \"[%s()] Internal error: failed to verify args.\\n\",\n+\t\t\t    __func__);\n+\n+\t\treturn ret;\n+\t}\n+\n+\tdlb2_list_del(&rsrcs->avail_domains, &domain->func_list);\n+\n+\tdlb2_list_add(&rsrcs->used_domains, &domain->func_list);\n+\n+\tresp->id = (vdev_req) ? 
domain->id.virt_id : domain->id.phys_id;\n+\tresp->status = 0;\n+\n+\treturn 0;\n+}\n+\n+static void dlb2_dir_port_cq_disable(struct dlb2_hw *hw,\n+\t\t\t\t     struct dlb2_dir_pq_pair *port)\n+{\n+\tu32 reg = 0;\n+\n+\tDLB2_BIT_SET(reg, DLB2_LSP_CQ_DIR_DSBL_DISABLED);\n+\tDLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id), reg);\n+\n+\tdlb2_flush_csr(hw);\n+}\n+\n+static u32 dlb2_dir_cq_token_count(struct dlb2_hw *hw,\n+\t\t\t\t   struct dlb2_dir_pq_pair *port)\n+{\n+\tu32 cnt;\n+\n+\tcnt = DLB2_CSR_RD(hw,\n+\t\t\t  DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id));\n+\n+\t/*\n+\t * Account for the initial token count, which is used in order to\n+\t * provide a CQ with depth less than 8.\n+\t */\n+\n+\treturn DLB2_BITS_GET(cnt, DLB2_LSP_CQ_DIR_TKN_CNT_COUNT) -\n+\t       port->init_tkn_cnt;\n+}\n+\n+static void dlb2_drain_dir_cq(struct dlb2_hw *hw,\n+\t\t\t      struct dlb2_dir_pq_pair *port)\n+{\n+\tunsigned int port_id = port->id.phys_id;\n+\tu32 cnt;\n+\n+\t/* Return any outstanding tokens */\n+\tcnt = dlb2_dir_cq_token_count(hw, port);\n+\n+\tif (cnt != 0) {\n+\t\tstruct dlb2_hcw hcw_mem[8], *hcw;\n+\t\tvoid __iomem *pp_addr;\n+\n+\t\tpp_addr = os_map_producer_port(hw, port_id, false);\n+\n+\t\t/* Point hcw to a 64B-aligned location */\n+\t\thcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);\n+\n+\t\t/*\n+\t\t * Program the first HCW for a batch token return and\n+\t\t * the rest as NOOPS\n+\t\t */\n+\t\tmemset(hcw, 0, 4 * sizeof(*hcw));\n+\t\thcw->cq_token = 1;\n+\t\thcw->lock_id = cnt - 1;\n+\n+\t\tdlb2_movdir64b(pp_addr, hcw);\n+\n+\t\tos_fence_hcw(hw, pp_addr);\n+\n+\t\tos_unmap_producer_port(hw, pp_addr);\n+\t}\n+}\n+\n+static void dlb2_dir_port_cq_enable(struct dlb2_hw *hw,\n+\t\t\t\t    struct dlb2_dir_pq_pair *port)\n+{\n+\tu32 reg = 0;\n+\n+\tDLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id), reg);\n+\n+\tdlb2_flush_csr(hw);\n+}\n+\n+static int dlb2_domain_drain_dir_cqs(struct dlb2_hw *hw,\n+\t\t\t\t     struct dlb2_hw_domain *domain,\n+\t\t\t\t     bool toggle_port)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_dir_pq_pair *port;\n+\tRTE_SET_USED(iter);\n+\n+\tDLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {\n+\t\t/*\n+\t\t * Can't drain a port if it's not configured, and there's\n+\t\t * nothing to drain if its queue is unconfigured.\n+\t\t */\n+\t\tif (!port->port_configured || !port->queue_configured)\n+\t\t\tcontinue;\n+\n+\t\tif (toggle_port)\n+\t\t\tdlb2_dir_port_cq_disable(hw, port);\n+\n+\t\tdlb2_drain_dir_cq(hw, port);\n+\n+\t\tif (toggle_port)\n+\t\t\tdlb2_dir_port_cq_enable(hw, port);\n+\t}\n+\n+\treturn 0;\n+}\n+\n+static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,\n+\t\t\t\tstruct dlb2_dir_pq_pair *queue)\n+{\n+\tu32 cnt;\n+\n+\tcnt = DLB2_CSR_RD(hw, DLB2_LSP_QID_DIR_ENQUEUE_CNT(hw->ver,\n+\t\t\t\t\t\t      queue->id.phys_id));\n+\n+\treturn DLB2_BITS_GET(cnt, DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT);\n+}\n+\n+static bool dlb2_dir_queue_is_empty(struct dlb2_hw *hw,\n+\t\t\t\t    struct dlb2_dir_pq_pair *queue)\n+{\n+\treturn dlb2_dir_queue_depth(hw, queue) == 0;\n+}\n+\n+static bool dlb2_domain_dir_queues_empty(struct dlb2_hw *hw,\n+\t\t\t\t\t struct dlb2_hw_domain *domain)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_dir_pq_pair *queue;\n+\tRTE_SET_USED(iter);\n+\n+\tDLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {\n+\t\tif (!dlb2_dir_queue_is_empty(hw, queue))\n+\t\t\treturn false;\n+\t}\n+\n+\treturn true;\n+}\n+static int dlb2_domain_drain_dir_queues(struct dlb2_hw *hw,\n+\t\t\t\t\tstruct 
dlb2_hw_domain *domain)\n+{\n+\tint i;\n+\n+\t/* If the domain hasn't been started, there's no traffic to drain */\n+\tif (!domain->started)\n+\t\treturn 0;\n+\n+\tfor (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {\n+\t\tdlb2_domain_drain_dir_cqs(hw, domain, true);\n+\n+\t\tif (dlb2_domain_dir_queues_empty(hw, domain))\n+\t\t\tbreak;\n+\t}\n+\n+\tif (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {\n+\t\tDLB2_HW_ERR(hw,\n+\t\t\t    \"[%s()] Internal error: failed to empty queues\\n\",\n+\t\t\t    __func__);\n+\t\treturn -EFAULT;\n+\t}\n+\n+\t/*\n+\t * Drain the CQs one more time. For the queues to go empty, they would\n+\t * have scheduled one or more QEs.\n+\t */\n+\tdlb2_domain_drain_dir_cqs(hw, domain, true);\n+\n+\treturn 0;\n+}\n+\n+static void dlb2_ldb_port_cq_enable(struct dlb2_hw *hw,\n+\t\t\t\t    struct dlb2_ldb_port *port)\n+{\n+\tu32 reg = 0;\n+\n+\t/*\n+\t * Don't re-enable the port if a removal is pending. The caller should\n+\t * mark this port as enabled (if it isn't already), and when the\n+\t * removal completes the port will be enabled.\n+\t */\n+\tif (port->num_pending_removals)\n+\t\treturn;\n+\n+\tDLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id), reg);\n+\n+\tdlb2_flush_csr(hw);\n+}\n+\n+static void dlb2_ldb_port_cq_disable(struct dlb2_hw *hw,\n+\t\t\t\t     struct dlb2_ldb_port *port)\n+{\n+\tu32 reg = 0;\n+\n+\tDLB2_BIT_SET(reg, DLB2_LSP_CQ_LDB_DSBL_DISABLED);\n+\tDLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id), reg);\n+\n+\tdlb2_flush_csr(hw);\n+}\n+\n+static u32 dlb2_ldb_cq_inflight_count(struct dlb2_hw *hw,\n+\t\t\t\t      struct dlb2_ldb_port *port)\n+{\n+\tu32 cnt;\n+\n+\tcnt = DLB2_CSR_RD(hw,\n+\t\t\t  DLB2_LSP_CQ_LDB_INFL_CNT(hw->ver, port->id.phys_id));\n+\n+\treturn DLB2_BITS_GET(cnt, DLB2_LSP_CQ_LDB_INFL_CNT_COUNT);\n+}\n+\n+static u32 dlb2_ldb_cq_token_count(struct dlb2_hw *hw,\n+\t\t\t\t   struct dlb2_ldb_port *port)\n+{\n+\tu32 cnt;\n+\n+\tcnt = DLB2_CSR_RD(hw,\n+\t\t\t  DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id));\n+\n+\t/*\n+\t * Account for the initial token count, which is used in order to\n+\t * provide a CQ with depth less than 8.\n+\t */\n+\n+\treturn DLB2_BITS_GET(cnt, DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT) -\n+\t\tport->init_tkn_cnt;\n+}\n+\n+static void dlb2_drain_ldb_cq(struct dlb2_hw *hw, struct dlb2_ldb_port *port)\n+{\n+\tu32 infl_cnt, tkn_cnt;\n+\tunsigned int i;\n+\n+\tinfl_cnt = dlb2_ldb_cq_inflight_count(hw, port);\n+\ttkn_cnt = dlb2_ldb_cq_token_count(hw, port);\n+\n+\tif (infl_cnt || tkn_cnt) {\n+\t\tstruct dlb2_hcw hcw_mem[8], *hcw;\n+\t\tvoid __iomem *pp_addr;\n+\n+\t\tpp_addr = os_map_producer_port(hw, port->id.phys_id, true);\n+\n+\t\t/* Point hcw to a 64B-aligned location */\n+\t\thcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);\n+\n+\t\t/*\n+\t\t * Program the first HCW for a completion and token return and\n+\t\t * the other HCWs as NOOPS\n+\t\t */\n+\n+\t\tmemset(hcw, 0, 4 * sizeof(*hcw));\n+\t\thcw->qe_comp = (infl_cnt > 0);\n+\t\thcw->cq_token = (tkn_cnt > 0);\n+\t\thcw->lock_id = tkn_cnt - 1;\n+\n+\t\t/* Return tokens in the first HCW */\n+\t\tdlb2_movdir64b(pp_addr, hcw);\n+\n+\t\thcw->cq_token = 0;\n+\n+\t\t/* Issue remaining completions (if any) */\n+\t\tfor (i = 1; i < infl_cnt; i++)\n+\t\t\tdlb2_movdir64b(pp_addr, hcw);\n+\n+\t\tos_fence_hcw(hw, pp_addr);\n+\n+\t\tos_unmap_producer_port(hw, pp_addr);\n+\t}\n+}\n+\n+static void dlb2_domain_drain_ldb_cqs(struct dlb2_hw *hw,\n+\t\t\t\t      struct dlb2_hw_domain *domain,\n+\t\t\t\t      bool toggle_port)\n+{\n+\tstruct 
dlb2_list_entry *iter;\n+\tstruct dlb2_ldb_port *port;\n+\tint i;\n+\tRTE_SET_USED(iter);\n+\n+\t/* If the domain hasn't been started, there's no traffic to drain */\n+\tif (!domain->started)\n+\t\treturn;\n+\n+\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n+\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {\n+\t\t\tif (toggle_port)\n+\t\t\t\tdlb2_ldb_port_cq_disable(hw, port);\n+\n+\t\t\tdlb2_drain_ldb_cq(hw, port);\n+\n+\t\t\tif (toggle_port)\n+\t\t\t\tdlb2_ldb_port_cq_enable(hw, port);\n+\t\t}\n+\t}\n+}\n+\n+static u32 dlb2_ldb_queue_depth(struct dlb2_hw *hw,\n+\t\t\t\tstruct dlb2_ldb_queue *queue)\n+{\n+\tu32 aqed, ldb, atm;\n+\n+\taqed = DLB2_CSR_RD(hw, DLB2_LSP_QID_AQED_ACTIVE_CNT(hw->ver,\n+\t\t\t\t\t\t       queue->id.phys_id));\n+\tldb = DLB2_CSR_RD(hw, DLB2_LSP_QID_LDB_ENQUEUE_CNT(hw->ver,\n+\t\t\t\t\t\t      queue->id.phys_id));\n+\tatm = DLB2_CSR_RD(hw,\n+\t\t\t  DLB2_LSP_QID_ATM_ACTIVE(hw->ver, queue->id.phys_id));\n+\n+\treturn DLB2_BITS_GET(aqed, DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT)\n+\t       + DLB2_BITS_GET(ldb, DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT)\n+\t       + DLB2_BITS_GET(atm, DLB2_LSP_QID_ATM_ACTIVE_COUNT);\n+}\n+\n+static bool dlb2_ldb_queue_is_empty(struct dlb2_hw *hw,\n+\t\t\t\t    struct dlb2_ldb_queue *queue)\n+{\n+\treturn dlb2_ldb_queue_depth(hw, queue) == 0;\n+}\n+\n+static bool dlb2_domain_mapped_queues_empty(struct dlb2_hw *hw,\n+\t\t\t\t\t    struct dlb2_hw_domain *domain)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_ldb_queue *queue;\n+\tRTE_SET_USED(iter);\n+\n+\tDLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {\n+\t\tif (queue->num_mappings == 0)\n+\t\t\tcontinue;\n+\n+\t\tif (!dlb2_ldb_queue_is_empty(hw, queue))\n+\t\t\treturn false;\n+\t}\n+\n+\treturn true;\n+}\n+\n+static int dlb2_domain_drain_mapped_queues(struct dlb2_hw *hw,\n+\t\t\t\t\t   struct dlb2_hw_domain *domain)\n+{\n+\tint i;\n+\n+\t/* If the domain hasn't been started, there's no traffic to drain */\n+\tif (!domain->started)\n+\t\treturn 0;\n+\n+\tif (domain->num_pending_removals > 0) {\n+\t\tDLB2_HW_ERR(hw,\n+\t\t\t    \"[%s()] Internal error: failed to unmap domain queues\\n\",\n+\t\t\t    __func__);\n+\t\treturn -EFAULT;\n+\t}\n+\n+\tfor (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {\n+\t\tdlb2_domain_drain_ldb_cqs(hw, domain, true);\n+\n+\t\tif (dlb2_domain_mapped_queues_empty(hw, domain))\n+\t\t\tbreak;\n+\t}\n+\n+\tif (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {\n+\t\tDLB2_HW_ERR(hw,\n+\t\t\t    \"[%s()] Internal error: failed to empty queues\\n\",\n+\t\t\t    __func__);\n+\t\treturn -EFAULT;\n+\t}\n+\n+\t/*\n+\t * Drain the CQs one more time. 
For the queues to go empty, they would\n+\t * have scheduled one or more QEs.\n+\t */\n+\tdlb2_domain_drain_ldb_cqs(hw, domain, true);\n+\n+\treturn 0;\n+}\n+\n+static void dlb2_domain_enable_ldb_cqs(struct dlb2_hw *hw,\n+\t\t\t\t       struct dlb2_hw_domain *domain)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_ldb_port *port;\n+\tint i;\n+\tRTE_SET_USED(iter);\n+\n+\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n+\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {\n+\t\t\tport->enabled = true;\n+\n+\t\t\tdlb2_ldb_port_cq_enable(hw, port);\n+\t\t}\n+\t}\n+}\n+\n+static struct dlb2_ldb_queue *\n+dlb2_get_ldb_queue_from_id(struct dlb2_hw *hw,\n+\t\t\t   u32 id,\n+\t\t\t   bool vdev_req,\n+\t\t\t   unsigned int vdev_id)\n+{\n+\tstruct dlb2_list_entry *iter1;\n+\tstruct dlb2_list_entry *iter2;\n+\tstruct dlb2_function_resources *rsrcs;\n+\tstruct dlb2_hw_domain *domain;\n+\tstruct dlb2_ldb_queue *queue;\n+\tRTE_SET_USED(iter1);\n+\tRTE_SET_USED(iter2);\n+\n+\tif (id >= DLB2_MAX_NUM_LDB_QUEUES)\n+\t\treturn NULL;\n+\n+\trsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;\n+\n+\tif (!vdev_req)\n+\t\treturn &hw->rsrcs.ldb_queues[id];\n+\n+\tDLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter1) {\n+\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter2) {\n+\t\t\tif (queue->id.virt_id == id)\n+\t\t\t\treturn queue;\n+\t\t}\n+\t}\n+\n+\tDLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_queues, queue, iter1) {\n+\t\tif (queue->id.virt_id == id)\n+\t\t\treturn queue;\n+\t}\n+\n+\treturn NULL;\n+}\n+\n+static struct dlb2_hw_domain *dlb2_get_domain_from_id(struct dlb2_hw *hw,\n+\t\t\t\t\t\t      u32 id,\n+\t\t\t\t\t\t      bool vdev_req,\n+\t\t\t\t\t\t      unsigned int vdev_id)\n+{\n+\tstruct dlb2_list_entry *iteration;\n+\tstruct dlb2_function_resources *rsrcs;\n+\tstruct dlb2_hw_domain *domain;\n+\tRTE_SET_USED(iteration);\n+\n+\tif (id >= DLB2_MAX_NUM_DOMAINS)\n+\t\treturn NULL;\n+\n+\tif (!vdev_req)\n+\t\treturn &hw->domains[id];\n+\n+\trsrcs = &hw->vdev[vdev_id];\n+\n+\tDLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iteration) {\n+\t\tif (domain->id.virt_id == id)\n+\t\t\treturn domain;\n+\t}\n+\n+\treturn NULL;\n+}\n+\n+static int dlb2_port_slot_state_transition(struct dlb2_hw *hw,\n+\t\t\t\t\t   struct dlb2_ldb_port *port,\n+\t\t\t\t\t   struct dlb2_ldb_queue *queue,\n+\t\t\t\t\t   int slot,\n+\t\t\t\t\t   enum dlb2_qid_map_state new_state)\n+{\n+\tenum dlb2_qid_map_state curr_state = port->qid_map[slot].state;\n+\tstruct dlb2_hw_domain *domain;\n+\tint domain_id;\n+\n+\tdomain_id = port->domain_id.phys_id;\n+\n+\tdomain = dlb2_get_domain_from_id(hw, domain_id, false, 0);\n+\tif (domain == NULL) {\n+\t\tDLB2_HW_ERR(hw,\n+\t\t\t    \"[%s()] Internal error: unable to find domain %d\\n\",\n+\t\t\t    __func__, domain_id);\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tswitch (curr_state) {\n+\tcase DLB2_QUEUE_UNMAPPED:\n+\t\tswitch (new_state) {\n+\t\tcase DLB2_QUEUE_MAPPED:\n+\t\t\tqueue->num_mappings++;\n+\t\t\tport->num_mappings++;\n+\t\t\tbreak;\n+\t\tcase DLB2_QUEUE_MAP_IN_PROG:\n+\t\t\tqueue->num_pending_additions++;\n+\t\t\tdomain->num_pending_additions++;\n+\t\t\tbreak;\n+\t\tdefault:\n+\t\t\tgoto error;\n+\t\t}\n+\t\tbreak;\n+\tcase DLB2_QUEUE_MAPPED:\n+\t\tswitch (new_state) {\n+\t\tcase DLB2_QUEUE_UNMAPPED:\n+\t\t\tqueue->num_mappings--;\n+\t\t\tport->num_mappings--;\n+\t\t\tbreak;\n+\t\tcase DLB2_QUEUE_UNMAP_IN_PROG:\n+\t\t\tport->num_pending_removals++;\n+\t\t\tdomain->num_pending_removals++;\n+\t\t\tbreak;\n+\t\tcase DLB2_QUEUE_MAPPED:\n+\t\t\t/* Priority change, nothing to update 
*/\n+\t\t\tbreak;\n+\t\tdefault:\n+\t\t\tgoto error;\n+\t\t}\n+\t\tbreak;\n+\tcase DLB2_QUEUE_MAP_IN_PROG:\n+\t\tswitch (new_state) {\n+\t\tcase DLB2_QUEUE_UNMAPPED:\n+\t\t\tqueue->num_pending_additions--;\n+\t\t\tdomain->num_pending_additions--;\n+\t\t\tbreak;\n+\t\tcase DLB2_QUEUE_MAPPED:\n+\t\t\tqueue->num_mappings++;\n+\t\t\tport->num_mappings++;\n+\t\t\tqueue->num_pending_additions--;\n+\t\t\tdomain->num_pending_additions--;\n+\t\t\tbreak;\n+\t\tdefault:\n+\t\t\tgoto error;\n+\t\t}\n+\t\tbreak;\n+\tcase DLB2_QUEUE_UNMAP_IN_PROG:\n+\t\tswitch (new_state) {\n+\t\tcase DLB2_QUEUE_UNMAPPED:\n+\t\t\tport->num_pending_removals--;\n+\t\t\tdomain->num_pending_removals--;\n+\t\t\tqueue->num_mappings--;\n+\t\t\tport->num_mappings--;\n+\t\t\tbreak;\n+\t\tcase DLB2_QUEUE_MAPPED:\n+\t\t\tport->num_pending_removals--;\n+\t\t\tdomain->num_pending_removals--;\n+\t\t\tbreak;\n+\t\tcase DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:\n+\t\t\t/* Nothing to update */\n+\t\t\tbreak;\n+\t\tdefault:\n+\t\t\tgoto error;\n+\t\t}\n+\t\tbreak;\n+\tcase DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:\n+\t\tswitch (new_state) {\n+\t\tcase DLB2_QUEUE_UNMAP_IN_PROG:\n+\t\t\t/* Nothing to update */\n+\t\t\tbreak;\n+\t\tcase DLB2_QUEUE_UNMAPPED:\n+\t\t\t/*\n+\t\t\t * An UNMAP_IN_PROG_PENDING_MAP slot briefly\n+\t\t\t * becomes UNMAPPED before it transitions to\n+\t\t\t * MAP_IN_PROG.\n+\t\t\t */\n+\t\t\tqueue->num_mappings--;\n+\t\t\tport->num_mappings--;\n+\t\t\tport->num_pending_removals--;\n+\t\t\tdomain->num_pending_removals--;\n+\t\t\tbreak;\n+\t\tdefault:\n+\t\t\tgoto error;\n+\t\t}\n+\t\tbreak;\n+\tdefault:\n+\t\tgoto error;\n+\t}\n+\n+\tport->qid_map[slot].state = new_state;\n+\n+\tDLB2_HW_DBG(hw,\n+\t\t    \"[%s()] queue %d -> port %d state transition (%d -> %d)\\n\",\n+\t\t    __func__, queue->id.phys_id, port->id.phys_id,\n+\t\t    curr_state, new_state);\n+\treturn 0;\n+\n+error:\n+\tDLB2_HW_ERR(hw,\n+\t\t    \"[%s()] Internal error: invalid queue %d -> port %d state transition (%d -> %d)\\n\",\n+\t\t    __func__, queue->id.phys_id, port->id.phys_id,\n+\t\t    curr_state, new_state);\n+\treturn -EFAULT;\n+}\n+\n+static bool dlb2_port_find_slot(struct dlb2_ldb_port *port,\n+\t\t\t\tenum dlb2_qid_map_state state,\n+\t\t\t\tint *slot)\n+{\n+\tint i;\n+\n+\tfor (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {\n+\t\tif (port->qid_map[i].state == state)\n+\t\t\tbreak;\n+\t}\n+\n+\t*slot = i;\n+\n+\treturn (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);\n+}\n+\n+static bool dlb2_port_find_slot_queue(struct dlb2_ldb_port *port,\n+\t\t\t\t      enum dlb2_qid_map_state state,\n+\t\t\t\t      struct dlb2_ldb_queue *queue,\n+\t\t\t\t      int *slot)\n+{\n+\tint i;\n+\n+\tfor (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {\n+\t\tif (port->qid_map[i].state == state &&\n+\t\t    port->qid_map[i].qid == queue->id.phys_id)\n+\t\t\tbreak;\n+\t}\n+\n+\t*slot = i;\n+\n+\treturn (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);\n+}\n+\n+/*\n+ * dlb2_ldb_queue_{enable, disable}_mapped_cqs() don't operate exactly as\n+ * their function names imply, and should only be called by the dynamic CQ\n+ * mapping code.\n+ */\n+static void dlb2_ldb_queue_disable_mapped_cqs(struct dlb2_hw *hw,\n+\t\t\t\t\t      struct dlb2_hw_domain *domain,\n+\t\t\t\t\t      struct dlb2_ldb_queue *queue)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_ldb_port *port;\n+\tint slot, i;\n+\tRTE_SET_USED(iter);\n+\n+\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n+\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {\n+\t\t\tenum dlb2_qid_map_state state = 
DLB2_QUEUE_MAPPED;\n+\n+\t\t\tif (!dlb2_port_find_slot_queue(port, state,\n+\t\t\t\t\t\t       queue, &slot))\n+\t\t\t\tcontinue;\n+\n+\t\t\tif (port->enabled)\n+\t\t\t\tdlb2_ldb_port_cq_disable(hw, port);\n+\t\t}\n+\t}\n+}\n+\n+static void dlb2_ldb_queue_enable_mapped_cqs(struct dlb2_hw *hw,\n+\t\t\t\t\t     struct dlb2_hw_domain *domain,\n+\t\t\t\t\t     struct dlb2_ldb_queue *queue)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_ldb_port *port;\n+\tint slot, i;\n+\tRTE_SET_USED(iter);\n+\n+\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n+\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {\n+\t\t\tenum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;\n+\n+\t\t\tif (!dlb2_port_find_slot_queue(port, state,\n+\t\t\t\t\t\t       queue, &slot))\n+\t\t\t\tcontinue;\n+\n+\t\t\tif (port->enabled)\n+\t\t\t\tdlb2_ldb_port_cq_enable(hw, port);\n+\t\t}\n+\t}\n+}\n+\n+static void dlb2_ldb_port_clear_queue_if_status(struct dlb2_hw *hw,\n+\t\t\t\t\t\tstruct dlb2_ldb_port *port,\n+\t\t\t\t\t\tint slot)\n+{\n+\tu32 ctrl = 0;\n+\n+\tDLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);\n+\tDLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);\n+\tDLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V);\n+\n+\tDLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);\n+\n+\tdlb2_flush_csr(hw);\n+}\n+\n+static void dlb2_ldb_port_set_queue_if_status(struct dlb2_hw *hw,\n+\t\t\t\t\t      struct dlb2_ldb_port *port,\n+\t\t\t\t\t      int slot)\n+{\n+\tu32 ctrl = 0;\n+\n+\tDLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);\n+\tDLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);\n+\tDLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);\n+\tDLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V);\n+\n+\tDLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);\n+\n+\tdlb2_flush_csr(hw);\n+}\n+\n+static int dlb2_ldb_port_map_qid_static(struct dlb2_hw *hw,\n+\t\t\t\t\tstruct dlb2_ldb_port *p,\n+\t\t\t\t\tstruct dlb2_ldb_queue *q,\n+\t\t\t\t\tu8 priority)\n+{\n+\tenum dlb2_qid_map_state state;\n+\tu32 lsp_qid2cq2;\n+\tu32 lsp_qid2cq;\n+\tu32 atm_qid2cq;\n+\tu32 cq2priov;\n+\tu32 cq2qid;\n+\tint i;\n+\n+\t/* Look for a pending or already mapped slot, else an unused slot */\n+\tif (!dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAP_IN_PROG, q, &i) &&\n+\t    !dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAPPED, q, &i) &&\n+\t    !dlb2_port_find_slot(p, DLB2_QUEUE_UNMAPPED, &i)) {\n+\t\tDLB2_HW_ERR(hw,\n+\t\t\t    \"[%s():%d] Internal error: CQ has no available QID mapping slots\\n\",\n+\t\t\t    __func__, __LINE__);\n+\t\treturn -EFAULT;\n+\t}\n+\n+\t/* Read-modify-write the priority and valid bit register */\n+\tcq2priov = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(hw->ver, p->id.phys_id));\n+\n+\tcq2priov |= (1 << (i + DLB2_LSP_CQ2PRIOV_V_LOC)) & DLB2_LSP_CQ2PRIOV_V;\n+\tcq2priov |= ((priority & 0x7) << (i + DLB2_LSP_CQ2PRIOV_PRIO_LOC) * 3)\n+\t\t    & DLB2_LSP_CQ2PRIOV_PRIO;\n+\n+\tDLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, p->id.phys_id), cq2priov);\n+\n+\t/* Read-modify-write the QID map register */\n+\tif (i < 4)\n+\t\tcq2qid = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID0(hw->ver,\n+\t\t\t\t\t\t\t  p->id.phys_id));\n+\telse\n+\t\tcq2qid = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID1(hw->ver,\n+\t\t\t\t\t\t\t  p->id.phys_id));\n+\n+\tif (i == 0 || i == 4)\n+\t\tDLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P0);\n+\tif (i == 1 || i == 5)\n+\t\tDLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P1);\n+\tif (i == 2 || i == 6)\n+\t\tDLB2_BITS_SET(cq2qid, q->id.phys_id, 
DLB2_LSP_CQ2QID0_QID_P2);\n+\tif (i == 3 || i == 7)\n+\t\tDLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P3);\n+\n+\tif (i < 4)\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_LSP_CQ2QID0(hw->ver, p->id.phys_id), cq2qid);\n+\telse\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_LSP_CQ2QID1(hw->ver, p->id.phys_id), cq2qid);\n+\n+\tatm_qid2cq = DLB2_CSR_RD(hw,\n+\t\t\t\t DLB2_ATM_QID2CQIDIX(q->id.phys_id,\n+\t\t\t\t\t\tp->id.phys_id / 4));\n+\n+\tlsp_qid2cq = DLB2_CSR_RD(hw,\n+\t\t\t\t DLB2_LSP_QID2CQIDIX(hw->ver, q->id.phys_id,\n+\t\t\t\t\t\tp->id.phys_id / 4));\n+\n+\tlsp_qid2cq2 = DLB2_CSR_RD(hw,\n+\t\t\t\t  DLB2_LSP_QID2CQIDIX2(hw->ver, q->id.phys_id,\n+\t\t\t\t\t\t  p->id.phys_id / 4));\n+\n+\tswitch (p->id.phys_id % 4) {\n+\tcase 0:\n+\t\tDLB2_BIT_SET(atm_qid2cq,\n+\t\t\t     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC));\n+\t\tDLB2_BIT_SET(lsp_qid2cq,\n+\t\t\t     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC));\n+\t\tDLB2_BIT_SET(lsp_qid2cq2,\n+\t\t\t     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC));\n+\t\tbreak;\n+\n+\tcase 1:\n+\t\tDLB2_BIT_SET(atm_qid2cq,\n+\t\t\t     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC));\n+\t\tDLB2_BIT_SET(lsp_qid2cq,\n+\t\t\t     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC));\n+\t\tDLB2_BIT_SET(lsp_qid2cq2,\n+\t\t\t     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC));\n+\t\tbreak;\n+\n+\tcase 2:\n+\t\tDLB2_BIT_SET(atm_qid2cq,\n+\t\t\t     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC));\n+\t\tDLB2_BIT_SET(lsp_qid2cq,\n+\t\t\t     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC));\n+\t\tDLB2_BIT_SET(lsp_qid2cq2,\n+\t\t\t     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC));\n+\t\tbreak;\n+\n+\tcase 3:\n+\t\tDLB2_BIT_SET(atm_qid2cq,\n+\t\t\t     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC));\n+\t\tDLB2_BIT_SET(lsp_qid2cq,\n+\t\t\t     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC));\n+\t\tDLB2_BIT_SET(lsp_qid2cq2,\n+\t\t\t     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC));\n+\t\tbreak;\n+\t}\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_ATM_QID2CQIDIX(q->id.phys_id, p->id.phys_id / 4),\n+\t\t    atm_qid2cq);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_LSP_QID2CQIDIX(hw->ver,\n+\t\t\t\t\tq->id.phys_id, p->id.phys_id / 4),\n+\t\t    lsp_qid2cq);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_LSP_QID2CQIDIX2(hw->ver,\n+\t\t\t\t\t q->id.phys_id, p->id.phys_id / 4),\n+\t\t    lsp_qid2cq2);\n+\n+\tdlb2_flush_csr(hw);\n+\n+\tp->qid_map[i].qid = q->id.phys_id;\n+\tp->qid_map[i].priority = priority;\n+\n+\tstate = DLB2_QUEUE_MAPPED;\n+\n+\treturn dlb2_port_slot_state_transition(hw, p, q, i, state);\n+}\n+\n+static int dlb2_ldb_port_set_has_work_bits(struct dlb2_hw *hw,\n+\t\t\t\t\t   struct dlb2_ldb_port *port,\n+\t\t\t\t\t   struct dlb2_ldb_queue *queue,\n+\t\t\t\t\t   int slot)\n+{\n+\tu32 ctrl = 0;\n+\tu32 active;\n+\tu32 enq;\n+\n+\t/* Set the atomic scheduling haswork bit */\n+\tactive = DLB2_CSR_RD(hw, DLB2_LSP_QID_AQED_ACTIVE_CNT(hw->ver,\n+\t\t\t\t\t\t\t queue->id.phys_id));\n+\n+\tDLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);\n+\tDLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);\n+\tDLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);\n+\tDLB2_BITS_SET(ctrl,\n+\t\t      DLB2_BITS_GET(active,\n+\t\t\t\t    DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT) > 0,\n+\t\t      DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V);\n+\n+\tDLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);\n+\n+\t/* Set the non-atomic scheduling haswork bit */\n+\tenq = DLB2_CSR_RD(hw,\n+\t\t\t  DLB2_LSP_QID_LDB_ENQUEUE_CNT(hw->ver,\n+\t\t\t\t\t\t       queue->id.phys_id));\n+\n+\tmemset(&ctrl, 0, 
sizeof(ctrl));\n+\n+\tDLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);\n+\tDLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);\n+\tDLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);\n+\tDLB2_BITS_SET(ctrl,\n+\t\t      DLB2_BITS_GET(enq,\n+\t\t\t\t    DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT) > 0,\n+\t\t      DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V);\n+\n+\tDLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);\n+\n+\tdlb2_flush_csr(hw);\n+\n+\treturn 0;\n+}\n+\n+static void dlb2_ldb_port_clear_has_work_bits(struct dlb2_hw *hw,\n+\t\t\t\t\t      struct dlb2_ldb_port *port,\n+\t\t\t\t\t      u8 slot)\n+{\n+\tu32 ctrl = 0;\n+\n+\tDLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);\n+\tDLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);\n+\tDLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V);\n+\n+\tDLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);\n+\n+\tmemset(&ctrl, 0, sizeof(ctrl));\n+\n+\tDLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);\n+\tDLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);\n+\tDLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V);\n+\n+\tDLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);\n+\n+\tdlb2_flush_csr(hw);\n+}\n+\n+\n+static void dlb2_ldb_queue_set_inflight_limit(struct dlb2_hw *hw,\n+\t\t\t\t\t      struct dlb2_ldb_queue *queue)\n+{\n+\tu32 infl_lim = 0;\n+\n+\tDLB2_BITS_SET(infl_lim, queue->num_qid_inflights,\n+\t\t DLB2_LSP_QID_LDB_INFL_LIM_LIMIT);\n+\n+\tDLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue->id.phys_id),\n+\t\t    infl_lim);\n+}\n+\n+static void dlb2_ldb_queue_clear_inflight_limit(struct dlb2_hw *hw,\n+\t\t\t\t\t\tstruct dlb2_ldb_queue *queue)\n+{\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue->id.phys_id),\n+\t\t    DLB2_LSP_QID_LDB_INFL_LIM_RST);\n+}\n+\n+static int dlb2_ldb_port_finish_map_qid_dynamic(struct dlb2_hw *hw,\n+\t\t\t\t\t\tstruct dlb2_hw_domain *domain,\n+\t\t\t\t\t\tstruct dlb2_ldb_port *port,\n+\t\t\t\t\t\tstruct dlb2_ldb_queue *queue)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tenum dlb2_qid_map_state state;\n+\tint slot, ret, i;\n+\tu32 infl_cnt;\n+\tu8 prio;\n+\tRTE_SET_USED(iter);\n+\n+\tinfl_cnt = DLB2_CSR_RD(hw,\n+\t\t\t       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,\n+\t\t\t\t\t\t    queue->id.phys_id));\n+\n+\tif (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {\n+\t\tDLB2_HW_ERR(hw,\n+\t\t\t    \"[%s()] Internal error: non-zero QID inflight count\\n\",\n+\t\t\t    __func__);\n+\t\treturn -EINVAL;\n+\t}\n+\n+\t/*\n+\t * Static map the port and set its corresponding has_work bits.\n+\t */\n+\tstate = DLB2_QUEUE_MAP_IN_PROG;\n+\tif (!dlb2_port_find_slot_queue(port, state, queue, &slot))\n+\t\treturn -EINVAL;\n+\n+\tprio = port->qid_map[slot].priority;\n+\n+\t/*\n+\t * Update the CQ2QID, CQ2PRIOV, and QID2CQIDX registers, and\n+\t * the port's qid_map state.\n+\t */\n+\tret = dlb2_ldb_port_map_qid_static(hw, port, queue, prio);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\tret = dlb2_ldb_port_set_has_work_bits(hw, port, queue, slot);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\t/*\n+\t * Ensure IF_status(cq,qid) is 0 before enabling the port to\n+\t * prevent spurious schedules to cause the queue's inflight\n+\t * count to increase.\n+\t */\n+\tdlb2_ldb_port_clear_queue_if_status(hw, port, slot);\n+\n+\t/* Reset the queue's inflight status */\n+\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n+\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {\n+\t\t\tstate = DLB2_QUEUE_MAPPED;\n+\t\t\tif 
(!dlb2_port_find_slot_queue(port, state,\n+\t\t\t\t\t\t       queue, &slot))\n+\t\t\t\tcontinue;\n+\n+\t\t\tdlb2_ldb_port_set_queue_if_status(hw, port, slot);\n+\t\t}\n+\t}\n+\n+\tdlb2_ldb_queue_set_inflight_limit(hw, queue);\n+\n+\t/* Re-enable CQs mapped to this queue */\n+\tdlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);\n+\n+\t/* If this queue has other mappings pending, clear its inflight limit */\n+\tif (queue->num_pending_additions > 0)\n+\t\tdlb2_ldb_queue_clear_inflight_limit(hw, queue);\n+\n+\treturn 0;\n+}\n+\n+/**\n+ * dlb2_ldb_port_map_qid_dynamic() - perform a \"dynamic\" QID->CQ mapping\n+ * @hw: dlb2_hw handle for a particular device.\n+ * @port: load-balanced port\n+ * @queue: load-balanced queue\n+ * @priority: queue servicing priority\n+ *\n+ * Returns 0 if the queue was mapped, 1 if the mapping is scheduled to occur\n+ * at a later point, and <0 if an error occurred.\n+ */\n+static int dlb2_ldb_port_map_qid_dynamic(struct dlb2_hw *hw,\n+\t\t\t\t\t struct dlb2_ldb_port *port,\n+\t\t\t\t\t struct dlb2_ldb_queue *queue,\n+\t\t\t\t\t u8 priority)\n+{\n+\tenum dlb2_qid_map_state state;\n+\tstruct dlb2_hw_domain *domain;\n+\tint domain_id, slot, ret;\n+\tu32 infl_cnt;\n+\n+\tdomain_id = port->domain_id.phys_id;\n+\n+\tdomain = dlb2_get_domain_from_id(hw, domain_id, false, 0);\n+\tif (domain == NULL) {\n+\t\tDLB2_HW_ERR(hw,\n+\t\t\t    \"[%s()] Internal error: unable to find domain %d\\n\",\n+\t\t\t    __func__, port->domain_id.phys_id);\n+\t\treturn -EINVAL;\n+\t}\n+\n+\t/*\n+\t * Set the QID inflight limit to 0 to prevent further scheduling of the\n+\t * queue.\n+\t */\n+\tDLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver,\n+\t\t\t\t\t\t  queue->id.phys_id), 0);\n+\n+\tif (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &slot)) {\n+\t\tDLB2_HW_ERR(hw,\n+\t\t\t    \"Internal error: No available unmapped slots\\n\");\n+\t\treturn -EFAULT;\n+\t}\n+\n+\tport->qid_map[slot].qid = queue->id.phys_id;\n+\tport->qid_map[slot].priority = priority;\n+\n+\tstate = DLB2_QUEUE_MAP_IN_PROG;\n+\tret = dlb2_port_slot_state_transition(hw, port, queue, slot, state);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\tinfl_cnt = DLB2_CSR_RD(hw,\n+\t\t\t       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,\n+\t\t\t\t\t\t    queue->id.phys_id));\n+\n+\tif (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {\n+\t\t/*\n+\t\t * The queue is owed completions so it's not safe to map it\n+\t\t * yet. Schedule a kernel thread to complete the mapping later,\n+\t\t * once software has completed all the queue's inflight events.\n+\t\t */\n+\t\tif (!os_worker_active(hw))\n+\t\t\tos_schedule_work(hw);\n+\n+\t\treturn 1;\n+\t}\n+\n+\t/*\n+\t * Disable the affected CQ, and the CQs already mapped to the QID,\n+\t * before reading the QID's inflight count a second time. 
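The first read,\n+\t * just above, already deferred the map to a worker thread if any\n+\t * inflights were outstanding.\n+\t * 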
There is an\n+\t * unlikely race in which the QID may schedule one more QE after we\n+\t * read an inflight count of 0, and disabling the CQs guarantees that\n+\t * the race will not occur after a re-read of the inflight count\n+\t * register.\n+\t */\n+\tif (port->enabled)\n+\t\tdlb2_ldb_port_cq_disable(hw, port);\n+\n+\tdlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);\n+\n+\tinfl_cnt = DLB2_CSR_RD(hw,\n+\t\t\t       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,\n+\t\t\t\t\t\t    queue->id.phys_id));\n+\n+\tif (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {\n+\t\tif (port->enabled)\n+\t\t\tdlb2_ldb_port_cq_enable(hw, port);\n+\n+\t\tdlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);\n+\n+\t\t/*\n+\t\t * The queue is owed completions so it's not safe to map it\n+\t\t * yet. Schedule a kernel thread to complete the mapping later,\n+\t\t * once software has completed all the queue's inflight events.\n+\t\t */\n+\t\tif (!os_worker_active(hw))\n+\t\t\tos_schedule_work(hw);\n+\n+\t\treturn 1;\n+\t}\n+\n+\treturn dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);\n+}\n+\n+static void dlb2_domain_finish_map_port(struct dlb2_hw *hw,\n+\t\t\t\t\tstruct dlb2_hw_domain *domain,\n+\t\t\t\t\tstruct dlb2_ldb_port *port)\n+{\n+\tint i;\n+\n+\tfor (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {\n+\t\tu32 infl_cnt;\n+\t\tstruct dlb2_ldb_queue *queue;\n+\t\tint qid;\n+\n+\t\tif (port->qid_map[i].state != DLB2_QUEUE_MAP_IN_PROG)\n+\t\t\tcontinue;\n+\n+\t\tqid = port->qid_map[i].qid;\n+\n+\t\tqueue = dlb2_get_ldb_queue_from_id(hw, qid, false, 0);\n+\n+\t\tif (queue == NULL) {\n+\t\t\tDLB2_HW_ERR(hw,\n+\t\t\t\t    \"[%s()] Internal error: unable to find queue %d\\n\",\n+\t\t\t\t    __func__, qid);\n+\t\t\tcontinue;\n+\t\t}\n+\n+\t\tinfl_cnt = DLB2_CSR_RD(hw,\n+\t\t\t\t       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver, qid));\n+\n+\t\tif (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT))\n+\t\t\tcontinue;\n+\n+\t\t/*\n+\t\t * Disable the affected CQ, and the CQs already mapped to the\n+\t\t * QID, before reading the QID's inflight count a second time.\n+\t\t * There is an unlikely race in which the QID may schedule one\n+\t\t * more QE after we read an inflight count of 0, and disabling\n+\t\t * the CQs guarantees that the race will not occur after a\n+\t\t * re-read of the inflight count register.\n+\t\t */\n+\t\tif (port->enabled)\n+\t\t\tdlb2_ldb_port_cq_disable(hw, port);\n+\n+\t\tdlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);\n+\n+\t\tinfl_cnt = DLB2_CSR_RD(hw,\n+\t\t\t\t       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver, qid));\n+\n+\t\tif (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {\n+\t\t\tif (port->enabled)\n+\t\t\t\tdlb2_ldb_port_cq_enable(hw, port);\n+\n+\t\t\tdlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);\n+\n+\t\t\tcontinue;\n+\t\t}\n+\n+\t\tdlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);\n+\t}\n+}\n+\n+static unsigned int\n+dlb2_domain_finish_map_qid_procedures(struct dlb2_hw *hw,\n+\t\t\t\t      struct dlb2_hw_domain *domain)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_ldb_port *port;\n+\tint i;\n+\tRTE_SET_USED(iter);\n+\n+\tif (!domain->configured || domain->num_pending_additions == 0)\n+\t\treturn 0;\n+\n+\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n+\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)\n+\t\t\tdlb2_domain_finish_map_port(hw, domain, port);\n+\t}\n+\n+\treturn domain->num_pending_additions;\n+}\n+\n+static int dlb2_ldb_port_unmap_qid(struct dlb2_hw *hw,\n+\t\t\t\t   struct dlb2_ldb_port 
*port,\n+\t\t\t\t   struct dlb2_ldb_queue *queue)\n+{\n+\tenum dlb2_qid_map_state mapped, in_progress, pending_map, unmapped;\n+\tu32 lsp_qid2cq2;\n+\tu32 lsp_qid2cq;\n+\tu32 atm_qid2cq;\n+\tu32 cq2priov;\n+\tu32 queue_id;\n+\tu32 port_id;\n+\tint i;\n+\n+\t/* Find the queue's slot */\n+\tmapped = DLB2_QUEUE_MAPPED;\n+\tin_progress = DLB2_QUEUE_UNMAP_IN_PROG;\n+\tpending_map = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;\n+\n+\tif (!dlb2_port_find_slot_queue(port, mapped, queue, &i) &&\n+\t    !dlb2_port_find_slot_queue(port, in_progress, queue, &i) &&\n+\t    !dlb2_port_find_slot_queue(port, pending_map, queue, &i)) {\n+\t\tDLB2_HW_ERR(hw,\n+\t\t\t    \"[%s():%d] Internal error: QID %d isn't mapped\\n\",\n+\t\t\t    __func__, __LINE__, queue->id.phys_id);\n+\t\treturn -EFAULT;\n+\t}\n+\n+\tport_id = port->id.phys_id;\n+\tqueue_id = queue->id.phys_id;\n+\n+\t/* Read-modify-write the priority and valid bit register */\n+\tcq2priov = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port_id));\n+\n+\tcq2priov &= ~(1 << (i + DLB2_LSP_CQ2PRIOV_V_LOC));\n+\n+\tDLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port_id), cq2priov);\n+\n+\tatm_qid2cq = DLB2_CSR_RD(hw, DLB2_ATM_QID2CQIDIX(queue_id,\n+\t\t\t\t\t\t\t port_id / 4));\n+\n+\tlsp_qid2cq = DLB2_CSR_RD(hw,\n+\t\t\t\t DLB2_LSP_QID2CQIDIX(hw->ver,\n+\t\t\t\t\t\tqueue_id, port_id / 4));\n+\n+\tlsp_qid2cq2 = DLB2_CSR_RD(hw,\n+\t\t\t\t  DLB2_LSP_QID2CQIDIX2(hw->ver,\n+\t\t\t\t\t\t  queue_id, port_id / 4));\n+\n+\tswitch (port_id % 4) {\n+\tcase 0:\n+\t\tatm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC));\n+\t\tlsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC));\n+\t\tlsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC));\n+\t\tbreak;\n+\n+\tcase 1:\n+\t\tatm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC));\n+\t\tlsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC));\n+\t\tlsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC));\n+\t\tbreak;\n+\n+\tcase 2:\n+\t\tatm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC));\n+\t\tlsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC));\n+\t\tlsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC));\n+\t\tbreak;\n+\n+\tcase 3:\n+\t\tatm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC));\n+\t\tlsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC));\n+\t\tlsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC));\n+\t\tbreak;\n+\t}\n+\n+\tDLB2_CSR_WR(hw, DLB2_ATM_QID2CQIDIX(queue_id, port_id / 4), atm_qid2cq);\n+\n+\tDLB2_CSR_WR(hw, DLB2_LSP_QID2CQIDIX(hw->ver, queue_id, port_id / 4),\n+\t\t    lsp_qid2cq);\n+\n+\tDLB2_CSR_WR(hw, DLB2_LSP_QID2CQIDIX2(hw->ver, queue_id, port_id / 4),\n+\t\t    lsp_qid2cq2);\n+\n+\tdlb2_flush_csr(hw);\n+\n+\tunmapped = DLB2_QUEUE_UNMAPPED;\n+\n+\treturn dlb2_port_slot_state_transition(hw, port, queue, i, unmapped);\n+}\n+\n+static int dlb2_ldb_port_map_qid(struct dlb2_hw *hw,\n+\t\t\t\t struct dlb2_hw_domain *domain,\n+\t\t\t\t struct dlb2_ldb_port *port,\n+\t\t\t\t struct dlb2_ldb_queue *queue,\n+\t\t\t\t u8 prio)\n+{\n+\tif (domain->started)\n+\t\treturn dlb2_ldb_port_map_qid_dynamic(hw, port, queue, prio);\n+\telse\n+\t\treturn dlb2_ldb_port_map_qid_static(hw, port, queue, prio);\n+}\n+\n+static void\n+dlb2_domain_finish_unmap_port_slot(struct dlb2_hw *hw,\n+\t\t\t\t   struct dlb2_hw_domain *domain,\n+\t\t\t\t   struct dlb2_ldb_port *port,\n+\t\t\t\t   int slot)\n+{\n+\tenum dlb2_qid_map_state state;\n+\tstruct dlb2_ldb_queue *queue;\n+\n+\tqueue = 
&hw->rsrcs.ldb_queues[port->qid_map[slot].qid];\n+\n+\tstate = port->qid_map[slot].state;\n+\n+\t/* Update the QID2CQIDX and CQ2QID vectors */\n+\tdlb2_ldb_port_unmap_qid(hw, port, queue);\n+\n+\t/*\n+\t * Ensure the QID will not be serviced by this {CQ, slot} by clearing\n+\t * the has_work bits\n+\t */\n+\tdlb2_ldb_port_clear_has_work_bits(hw, port, slot);\n+\n+\t/* Reset the {CQ, slot} to its default state */\n+\tdlb2_ldb_port_set_queue_if_status(hw, port, slot);\n+\n+\t/* Re-enable the CQ if it was not manually disabled by the user */\n+\tif (port->enabled)\n+\t\tdlb2_ldb_port_cq_enable(hw, port);\n+\n+\t/*\n+\t * If there is a mapping that is pending this slot's removal, perform\n+\t * the mapping now.\n+\t */\n+\tif (state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP) {\n+\t\tstruct dlb2_ldb_port_qid_map *map;\n+\t\tstruct dlb2_ldb_queue *map_queue;\n+\t\tu8 prio;\n+\n+\t\tmap = &port->qid_map[slot];\n+\n+\t\tmap->qid = map->pending_qid;\n+\t\tmap->priority = map->pending_priority;\n+\n+\t\tmap_queue = &hw->rsrcs.ldb_queues[map->qid];\n+\t\tprio = map->priority;\n+\n+\t\tdlb2_ldb_port_map_qid(hw, domain, port, map_queue, prio);\n+\t}\n+}\n+\n+\n+static bool dlb2_domain_finish_unmap_port(struct dlb2_hw *hw,\n+\t\t\t\t\t  struct dlb2_hw_domain *domain,\n+\t\t\t\t\t  struct dlb2_ldb_port *port)\n+{\n+\tu32 infl_cnt;\n+\tint i;\n+\n+\tif (port->num_pending_removals == 0)\n+\t\treturn false;\n+\n+\t/*\n+\t * The unmap requires all the CQ's outstanding inflights to be\n+\t * completed.\n+\t */\n+\tinfl_cnt = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_INFL_CNT(hw->ver,\n+\t\t\t\t\t\t       port->id.phys_id));\n+\tif (DLB2_BITS_GET(infl_cnt, DLB2_LSP_CQ_LDB_INFL_CNT_COUNT) > 0)\n+\t\treturn false;\n+\n+\tfor (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {\n+\t\tstruct dlb2_ldb_port_qid_map *map;\n+\n+\t\tmap = &port->qid_map[i];\n+\n+\t\tif (map->state != DLB2_QUEUE_UNMAP_IN_PROG &&\n+\t\t    map->state != DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP)\n+\t\t\tcontinue;\n+\n+\t\tdlb2_domain_finish_unmap_port_slot(hw, domain, port, i);\n+\t}\n+\n+\treturn true;\n+}\n+\n+static unsigned int\n+dlb2_domain_finish_unmap_qid_procedures(struct dlb2_hw *hw,\n+\t\t\t\t\tstruct dlb2_hw_domain *domain)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_ldb_port *port;\n+\tint i;\n+\tRTE_SET_USED(iter);\n+\n+\tif (!domain->configured || domain->num_pending_removals == 0)\n+\t\treturn 0;\n+\n+\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n+\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)\n+\t\t\tdlb2_domain_finish_unmap_port(hw, domain, port);\n+\t}\n+\n+\treturn domain->num_pending_removals;\n+}\n+\n+static void dlb2_domain_disable_ldb_cqs(struct dlb2_hw *hw,\n+\t\t\t\t\tstruct dlb2_hw_domain *domain)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_ldb_port *port;\n+\tint i;\n+\tRTE_SET_USED(iter);\n+\n+\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n+\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {\n+\t\t\tport->enabled = false;\n+\n+\t\t\tdlb2_ldb_port_cq_disable(hw, port);\n+\t\t}\n+\t}\n+}\n+\n+\n+static void dlb2_log_reset_domain(struct dlb2_hw *hw,\n+\t\t\t\t  u32 domain_id,\n+\t\t\t\t  bool vdev_req,\n+\t\t\t\t  unsigned int vdev_id)\n+{\n+\tDLB2_HW_DBG(hw, \"DLB2 reset domain:\\n\");\n+\tif (vdev_req)\n+\t\tDLB2_HW_DBG(hw, \"(Request from vdev %d)\\n\", vdev_id);\n+\tDLB2_HW_DBG(hw, \"\\tDomain ID: %d\\n\", domain_id);\n+}\n+\n+static void dlb2_domain_disable_dir_vpps(struct dlb2_hw *hw,\n+\t\t\t\t\t struct dlb2_hw_domain *domain,\n+\t\t\t\t\t unsigned int vdev_id)\n+{\n+\tstruct 
dlb2_list_entry *iter;\n+\tstruct dlb2_dir_pq_pair *port;\n+\tu32 vpp_v = 0;\n+\tRTE_SET_USED(iter);\n+\n+\tDLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {\n+\t\tunsigned int offs;\n+\t\tu32 virt_id;\n+\n+\t\tif (hw->virt_mode == DLB2_VIRT_SRIOV)\n+\t\t\tvirt_id = port->id.virt_id;\n+\t\telse\n+\t\t\tvirt_id = port->id.phys_id;\n+\n+\t\toffs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;\n+\n+\t\tDLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), vpp_v);\n+\t}\n+}\n+\n+static void dlb2_domain_disable_ldb_vpps(struct dlb2_hw *hw,\n+\t\t\t\t\t struct dlb2_hw_domain *domain,\n+\t\t\t\t\t unsigned int vdev_id)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_ldb_port *port;\n+\tu32 vpp_v = 0;\n+\tint i;\n+\tRTE_SET_USED(iter);\n+\n+\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n+\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {\n+\t\t\tunsigned int offs;\n+\t\t\tu32 virt_id;\n+\n+\t\t\tif (hw->virt_mode == DLB2_VIRT_SRIOV)\n+\t\t\t\tvirt_id = port->id.virt_id;\n+\t\t\telse\n+\t\t\t\tvirt_id = port->id.phys_id;\n+\n+\t\t\toffs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;\n+\n+\t\t\tDLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), vpp_v);\n+\t\t}\n+\t}\n+}\n+\n+static void\n+dlb2_domain_disable_ldb_port_interrupts(struct dlb2_hw *hw,\n+\t\t\t\t\tstruct dlb2_hw_domain *domain)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_ldb_port *port;\n+\tu32 int_en = 0;\n+\tu32 wd_en = 0;\n+\tint i;\n+\tRTE_SET_USED(iter);\n+\n+\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n+\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {\n+\t\t\tDLB2_CSR_WR(hw,\n+\t\t\t\t    DLB2_CHP_LDB_CQ_INT_ENB(hw->ver,\n+\t\t\t\t\t\t       port->id.phys_id),\n+\t\t\t\t    int_en);\n+\n+\t\t\tDLB2_CSR_WR(hw,\n+\t\t\t\t    DLB2_CHP_LDB_CQ_WD_ENB(hw->ver,\n+\t\t\t\t\t\t      port->id.phys_id),\n+\t\t\t\t    wd_en);\n+\t\t}\n+\t}\n+}\n+\n+static void\n+dlb2_domain_disable_dir_port_interrupts(struct dlb2_hw *hw,\n+\t\t\t\t\tstruct dlb2_hw_domain *domain)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_dir_pq_pair *port;\n+\tu32 int_en = 0;\n+\tu32 wd_en = 0;\n+\tRTE_SET_USED(iter);\n+\n+\tDLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_CHP_DIR_CQ_INT_ENB(hw->ver, port->id.phys_id),\n+\t\t\t    int_en);\n+\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_CHP_DIR_CQ_WD_ENB(hw->ver, port->id.phys_id),\n+\t\t\t    wd_en);\n+\t}\n+}\n+\n+static void\n+dlb2_domain_disable_ldb_queue_write_perms(struct dlb2_hw *hw,\n+\t\t\t\t\t  struct dlb2_hw_domain *domain)\n+{\n+\tint domain_offset = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES;\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_ldb_queue *queue;\n+\tRTE_SET_USED(iter);\n+\n+\tDLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {\n+\t\tint idx = domain_offset + queue->id.phys_id;\n+\n+\t\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(idx), 0);\n+\n+\t\tif (queue->id.vdev_owned) {\n+\t\t\tDLB2_CSR_WR(hw,\n+\t\t\t\t    DLB2_SYS_LDB_QID2VQID(queue->id.phys_id),\n+\t\t\t\t    0);\n+\n+\t\t\tidx = queue->id.vdev_id * DLB2_MAX_NUM_LDB_QUEUES +\n+\t\t\t\tqueue->id.virt_id;\n+\n+\t\t\tDLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(idx), 0);\n+\n+\t\t\tDLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(idx), 0);\n+\t\t}\n+\t}\n+}\n+\n+static void\n+dlb2_domain_disable_dir_queue_write_perms(struct dlb2_hw *hw,\n+\t\t\t\t\t  struct dlb2_hw_domain *domain)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_dir_pq_pair *queue;\n+\tunsigned long max_ports;\n+\tint domain_offset;\n+\tRTE_SET_USED(iter);\n+\n+\tmax_ports = 
DLB2_MAX_NUM_DIR_PORTS(hw->ver);\n+\n+\tdomain_offset = domain->id.phys_id * max_ports;\n+\n+\tDLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {\n+\t\tint idx = domain_offset + queue->id.phys_id;\n+\n+\t\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(idx), 0);\n+\n+\t\tif (queue->id.vdev_owned) {\n+\t\t\tidx = queue->id.vdev_id * max_ports + queue->id.virt_id;\n+\n+\t\t\tDLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(idx), 0);\n+\n+\t\t\tDLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(idx), 0);\n+\t\t}\n+\t}\n+}\n+\n+static void dlb2_domain_disable_ldb_seq_checks(struct dlb2_hw *hw,\n+\t\t\t\t\t       struct dlb2_hw_domain *domain)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_ldb_port *port;\n+\tu32 chk_en = 0;\n+\tint i;\n+\tRTE_SET_USED(iter);\n+\n+\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n+\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {\n+\t\t\tDLB2_CSR_WR(hw,\n+\t\t\t\t    DLB2_CHP_SN_CHK_ENBL(hw->ver,\n+\t\t\t\t\t\t\t port->id.phys_id),\n+\t\t\t\t    chk_en);\n+\t\t}\n+\t}\n+}\n+\n+static int dlb2_domain_wait_for_ldb_cqs_to_empty(struct dlb2_hw *hw,\n+\t\t\t\t\t\t struct dlb2_hw_domain *domain)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_ldb_port *port;\n+\tint i;\n+\tRTE_SET_USED(iter);\n+\n+\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n+\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {\n+\t\t\tint j;\n+\n+\t\t\tfor (j = 0; j < DLB2_MAX_CQ_COMP_CHECK_LOOPS; j++) {\n+\t\t\t\tif (dlb2_ldb_cq_inflight_count(hw, port) == 0)\n+\t\t\t\t\tbreak;\n+\t\t\t}\n+\n+\t\t\tif (j == DLB2_MAX_CQ_COMP_CHECK_LOOPS) {\n+\t\t\t\tDLB2_HW_ERR(hw,\n+\t\t\t\t\t    \"[%s()] Internal error: failed to flush load-balanced port %d's completions.\\n\",\n+\t\t\t\t\t    __func__, port->id.phys_id);\n+\t\t\t\treturn -EFAULT;\n+\t\t\t}\n+\t\t}\n+\t}\n+\n+\treturn 0;\n+}\n+\n+static void dlb2_domain_disable_dir_cqs(struct dlb2_hw *hw,\n+\t\t\t\t\tstruct dlb2_hw_domain *domain)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_dir_pq_pair *port;\n+\tRTE_SET_USED(iter);\n+\n+\tDLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {\n+\t\tport->enabled = false;\n+\n+\t\tdlb2_dir_port_cq_disable(hw, port);\n+\t}\n+}\n+\n+static void\n+dlb2_domain_disable_dir_producer_ports(struct dlb2_hw *hw,\n+\t\t\t\t       struct dlb2_hw_domain *domain)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_dir_pq_pair *port;\n+\tu32 pp_v = 0;\n+\tRTE_SET_USED(iter);\n+\n+\tDLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_SYS_DIR_PP_V(port->id.phys_id),\n+\t\t\t    pp_v);\n+\t}\n+}\n+\n+static void\n+dlb2_domain_disable_ldb_producer_ports(struct dlb2_hw *hw,\n+\t\t\t\t       struct dlb2_hw_domain *domain)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_ldb_port *port;\n+\tu32 pp_v = 0;\n+\tint i;\n+\tRTE_SET_USED(iter);\n+\n+\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n+\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {\n+\t\t\tDLB2_CSR_WR(hw,\n+\t\t\t\t    DLB2_SYS_LDB_PP_V(port->id.phys_id),\n+\t\t\t\t    pp_v);\n+\t\t}\n+\t}\n+}\n+\n+static int dlb2_domain_verify_reset_success(struct dlb2_hw *hw,\n+\t\t\t\t\t    struct dlb2_hw_domain *domain)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_dir_pq_pair *dir_port;\n+\tstruct dlb2_ldb_port *ldb_port;\n+\tstruct dlb2_ldb_queue *queue;\n+\tint i;\n+\tRTE_SET_USED(iter);\n+\n+\t/*\n+\t * Confirm that all the domain's queues' inflight counts and AQED\n+\t * active counts are 0.\n+\t */\n+\tDLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {\n+\t\tif 
(!dlb2_ldb_queue_is_empty(hw, queue)) {\n+\t\t\tDLB2_HW_ERR(hw,\n+\t\t\t\t    \"[%s()] Internal error: failed to empty ldb queue %d\\n\",\n+\t\t\t\t    __func__, queue->id.phys_id);\n+\t\t\treturn -EFAULT;\n+\t\t}\n+\t}\n+\n+\t/* Confirm that all the domain's CQs' inflight and token counts are 0. */\n+\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n+\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], ldb_port, iter) {\n+\t\t\tif (dlb2_ldb_cq_inflight_count(hw, ldb_port) ||\n+\t\t\t    dlb2_ldb_cq_token_count(hw, ldb_port)) {\n+\t\t\t\tDLB2_HW_ERR(hw,\n+\t\t\t\t\t    \"[%s()] Internal error: failed to empty ldb port %d\\n\",\n+\t\t\t\t\t    __func__, ldb_port->id.phys_id);\n+\t\t\t\treturn -EFAULT;\n+\t\t\t}\n+\t\t}\n+\t}\n+\n+\tDLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_port, iter) {\n+\t\tif (!dlb2_dir_queue_is_empty(hw, dir_port)) {\n+\t\t\tDLB2_HW_ERR(hw,\n+\t\t\t\t    \"[%s()] Internal error: failed to empty dir queue %d\\n\",\n+\t\t\t\t    __func__, dir_port->id.phys_id);\n+\t\t\treturn -EFAULT;\n+\t\t}\n+\n+\t\tif (dlb2_dir_cq_token_count(hw, dir_port)) {\n+\t\t\tDLB2_HW_ERR(hw,\n+\t\t\t\t    \"[%s()] Internal error: failed to empty dir port %d\\n\",\n+\t\t\t\t    __func__, dir_port->id.phys_id);\n+\t\t\treturn -EFAULT;\n+\t\t}\n+\t}\n+\n+\treturn 0;\n+}\n+\n+static void __dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,\n+\t\t\t\t\t\t   struct dlb2_ldb_port *port)\n+{\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_SYS_LDB_PP2VAS(port->id.phys_id),\n+\t\t    DLB2_SYS_LDB_PP2VAS_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_CHP_LDB_CQ2VAS(hw->ver, port->id.phys_id),\n+\t\t    DLB2_CHP_LDB_CQ2VAS_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_SYS_LDB_PP2VDEV(port->id.phys_id),\n+\t\t    DLB2_SYS_LDB_PP2VDEV_RST);\n+\n+\tif (port->id.vdev_owned) {\n+\t\tunsigned int offs;\n+\t\tu32 virt_id;\n+\n+\t\t/*\n+\t\t * DLB uses producer port address bits 17:12 to determine the\n+\t\t * producer port ID. 
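That is, each PP occupies a distinct\n+\t\t * 4 KiB window of MMIO space.\n+\t\t * 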
In Scalable IOV mode, PP accesses come\n+\t\t * through the PF MMIO window for the physical producer port,\n+\t\t * so for translation purposes the virtual and physical port\n+\t\t * IDs are equal.\n+\t\t */\n+\t\tif (hw->virt_mode == DLB2_VIRT_SRIOV)\n+\t\t\tvirt_id = port->id.virt_id;\n+\t\telse\n+\t\t\tvirt_id = port->id.phys_id;\n+\n+\t\toffs = port->id.vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;\n+\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_SYS_VF_LDB_VPP2PP(offs),\n+\t\t\t    DLB2_SYS_VF_LDB_VPP2PP_RST);\n+\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_SYS_VF_LDB_VPP_V(offs),\n+\t\t\t    DLB2_SYS_VF_LDB_VPP_V_RST);\n+\t}\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_SYS_LDB_PP_V(port->id.phys_id),\n+\t\t    DLB2_SYS_LDB_PP_V_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id),\n+\t\t    DLB2_LSP_CQ_LDB_DSBL_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_CHP_LDB_CQ_DEPTH(hw->ver, port->id.phys_id),\n+\t\t    DLB2_CHP_LDB_CQ_DEPTH_RST);\n+\n+\tif (hw->ver != DLB2_HW_V2)\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_LSP_CFG_CQ_LDB_WU_LIMIT(port->id.phys_id),\n+\t\t\t    DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_LSP_CQ_LDB_INFL_LIM(hw->ver, port->id.phys_id),\n+\t\t    DLB2_LSP_CQ_LDB_INFL_LIM_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_CHP_HIST_LIST_LIM(hw->ver, port->id.phys_id),\n+\t\t    DLB2_CHP_HIST_LIST_LIM_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_CHP_HIST_LIST_BASE(hw->ver, port->id.phys_id),\n+\t\t    DLB2_CHP_HIST_LIST_BASE_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_CHP_HIST_LIST_POP_PTR(hw->ver, port->id.phys_id),\n+\t\t    DLB2_CHP_HIST_LIST_POP_PTR_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_CHP_HIST_LIST_PUSH_PTR(hw->ver, port->id.phys_id),\n+\t\t    DLB2_CHP_HIST_LIST_PUSH_PTR_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(hw->ver, port->id.phys_id),\n+\t\t    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_CHP_LDB_CQ_TMR_THRSH(hw->ver, port->id.phys_id),\n+\t\t    DLB2_CHP_LDB_CQ_TMR_THRSH_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_CHP_LDB_CQ_INT_ENB(hw->ver, port->id.phys_id),\n+\t\t    DLB2_CHP_LDB_CQ_INT_ENB_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_SYS_LDB_CQ_ISR(port->id.phys_id),\n+\t\t    DLB2_SYS_LDB_CQ_ISR_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),\n+\t\t    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),\n+\t\t    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_CHP_LDB_CQ_WPTR(hw->ver, port->id.phys_id),\n+\t\t    DLB2_CHP_LDB_CQ_WPTR_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),\n+\t\t    DLB2_LSP_CQ_LDB_TKN_CNT_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id),\n+\t\t    DLB2_SYS_LDB_CQ_ADDR_L_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id),\n+\t\t    DLB2_SYS_LDB_CQ_ADDR_U_RST);\n+\n+\tif (hw->ver == DLB2_HW_V2)\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_SYS_LDB_CQ_AT(port->id.phys_id),\n+\t\t\t    DLB2_SYS_LDB_CQ_AT_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_SYS_LDB_CQ_PASID(hw->ver, port->id.phys_id),\n+\t\t    DLB2_SYS_LDB_CQ_PASID_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id),\n+\t\t    DLB2_SYS_LDB_CQ2VF_PF_RO_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(hw->ver, port->id.phys_id),\n+\t\t    
DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(hw->ver, port->id.phys_id),\n+\t\t    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_LSP_CQ2QID0(hw->ver, port->id.phys_id),\n+\t\t    DLB2_LSP_CQ2QID0_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_LSP_CQ2QID1(hw->ver, port->id.phys_id),\n+\t\t    DLB2_LSP_CQ2QID1_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id),\n+\t\t    DLB2_LSP_CQ2PRIOV_RST);\n+}\n+\n+static void dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,\n+\t\t\t\t\t\t struct dlb2_hw_domain *domain)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_ldb_port *port;\n+\tint i;\n+\tRTE_SET_USED(iter);\n+\n+\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n+\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)\n+\t\t\t__dlb2_domain_reset_ldb_port_registers(hw, port);\n+\t}\n+}\n+\n+static void\n+__dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,\n+\t\t\t\t       struct dlb2_dir_pq_pair *port)\n+{\n+\tu32 reg = 0;\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id),\n+\t\t    DLB2_CHP_DIR_CQ2VAS_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id),\n+\t\t    DLB2_LSP_CQ_DIR_DSBL_RST);\n+\n+\tDLB2_BIT_SET(reg, DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR);\n+\n+\tif (hw->ver == DLB2_HW_V2)\n+\t\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_OPT_CLR, port->id.phys_id);\n+\telse\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_SYS_WB_DIR_CQ_STATE(port->id.phys_id), reg);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_CHP_DIR_CQ_DEPTH(hw->ver, port->id.phys_id),\n+\t\t    DLB2_CHP_DIR_CQ_DEPTH_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(hw->ver, port->id.phys_id),\n+\t\t    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_CHP_DIR_CQ_TMR_THRSH(hw->ver, port->id.phys_id),\n+\t\t    DLB2_CHP_DIR_CQ_TMR_THRSH_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_CHP_DIR_CQ_INT_ENB(hw->ver, port->id.phys_id),\n+\t\t    DLB2_CHP_DIR_CQ_INT_ENB_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_SYS_DIR_CQ_ISR(port->id.phys_id),\n+\t\t    DLB2_SYS_DIR_CQ_ISR_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(hw->ver,\n+\t\t\t\t\t\t      port->id.phys_id),\n+\t\t    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),\n+\t\t    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_CHP_DIR_CQ_WPTR(hw->ver, port->id.phys_id),\n+\t\t    DLB2_CHP_DIR_CQ_WPTR_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),\n+\t\t    DLB2_LSP_CQ_DIR_TKN_CNT_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id),\n+\t\t    DLB2_SYS_DIR_CQ_ADDR_L_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id),\n+\t\t    DLB2_SYS_DIR_CQ_ADDR_U_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_SYS_DIR_CQ_AT(port->id.phys_id),\n+\t\t    DLB2_SYS_DIR_CQ_AT_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_SYS_DIR_CQ_PASID(hw->ver, port->id.phys_id),\n+\t\t    DLB2_SYS_DIR_CQ_PASID_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_SYS_DIR_CQ_FMT(port->id.phys_id),\n+\t\t    DLB2_SYS_DIR_CQ_FMT_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id),\n+\t\t    
DLB2_SYS_DIR_CQ2VF_PF_RO_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(hw->ver, port->id.phys_id),\n+\t\t    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(hw->ver, port->id.phys_id),\n+\t\t    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_SYS_DIR_PP2VAS(port->id.phys_id),\n+\t\t    DLB2_SYS_DIR_PP2VAS_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id),\n+\t\t    DLB2_CHP_DIR_CQ2VAS_RST);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_SYS_DIR_PP2VDEV(port->id.phys_id),\n+\t\t    DLB2_SYS_DIR_PP2VDEV_RST);\n+\n+\tif (port->id.vdev_owned) {\n+\t\tunsigned int offs;\n+\t\tu32 virt_id;\n+\n+\t\t/*\n+\t\t * DLB uses producer port address bits 17:12 to determine the\n+\t\t * producer port ID. In Scalable IOV mode, PP accesses come\n+\t\t * through the PF MMIO window for the physical producer port,\n+\t\t * so for translation purposes the virtual and physical port\n+\t\t * IDs are equal.\n+\t\t */\n+\t\tif (hw->virt_mode == DLB2_VIRT_SRIOV)\n+\t\t\tvirt_id = port->id.virt_id;\n+\t\telse\n+\t\t\tvirt_id = port->id.phys_id;\n+\n+\t\toffs = port->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +\n+\t\t\tvirt_id;\n+\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_SYS_VF_DIR_VPP2PP(offs),\n+\t\t\t    DLB2_SYS_VF_DIR_VPP2PP_RST);\n+\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_SYS_VF_DIR_VPP_V(offs),\n+\t\t\t    DLB2_SYS_VF_DIR_VPP_V_RST);\n+\t}\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_SYS_DIR_PP_V(port->id.phys_id),\n+\t\t    DLB2_SYS_DIR_PP_V_RST);\n+}\n+\n+static void dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,\n+\t\t\t\t\t\t struct dlb2_hw_domain *domain)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_dir_pq_pair *port;\n+\tRTE_SET_USED(iter);\n+\n+\tDLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)\n+\t\t__dlb2_domain_reset_dir_port_registers(hw, port);\n+}\n+\n+static void dlb2_domain_reset_ldb_queue_registers(struct dlb2_hw *hw,\n+\t\t\t\t\t\t  struct dlb2_hw_domain *domain)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_ldb_queue *queue;\n+\tRTE_SET_USED(iter);\n+\n+\tDLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {\n+\t\tunsigned int queue_id = queue->id.phys_id;\n+\t\tint i;\n+\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(hw->ver, queue_id),\n+\t\t\t    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST);\n+\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(hw->ver, queue_id),\n+\t\t\t    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST);\n+\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(hw->ver, queue_id),\n+\t\t\t    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST);\n+\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(hw->ver, queue_id),\n+\t\t\t    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST);\n+\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_LSP_QID_NALDB_MAX_DEPTH(hw->ver, queue_id),\n+\t\t\t    DLB2_LSP_QID_NALDB_MAX_DEPTH_RST);\n+\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue_id),\n+\t\t\t    DLB2_LSP_QID_LDB_INFL_LIM_RST);\n+\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_LSP_QID_AQED_ACTIVE_LIM(hw->ver, queue_id),\n+\t\t\t    DLB2_LSP_QID_AQED_ACTIVE_LIM_RST);\n+\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_LSP_QID_ATM_DEPTH_THRSH(hw->ver, queue_id),\n+\t\t\t    DLB2_LSP_QID_ATM_DEPTH_THRSH_RST);\n+\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_LSP_QID_NALDB_DEPTH_THRSH(hw->ver, queue_id),\n+\t\t\t    DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST);\n+\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_SYS_LDB_QID_ITS(queue_id),\n+\t\t\t    
DLB2_SYS_LDB_QID_ITS_RST);\n+\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_CHP_ORD_QID_SN(hw->ver, queue_id),\n+\t\t\t    DLB2_CHP_ORD_QID_SN_RST);\n+\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_CHP_ORD_QID_SN_MAP(hw->ver, queue_id),\n+\t\t\t    DLB2_CHP_ORD_QID_SN_MAP_RST);\n+\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_SYS_LDB_QID_V(queue_id),\n+\t\t\t    DLB2_SYS_LDB_QID_V_RST);\n+\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_SYS_LDB_QID_CFG_V(queue_id),\n+\t\t\t    DLB2_SYS_LDB_QID_CFG_V_RST);\n+\n+\t\tif (queue->sn_cfg_valid) {\n+\t\t\tu32 offs[2];\n+\n+\t\t\toffs[0] = DLB2_RO_GRP_0_SLT_SHFT(hw->ver,\n+\t\t\t\t\t\t\t queue->sn_slot);\n+\t\t\toffs[1] = DLB2_RO_GRP_1_SLT_SHFT(hw->ver,\n+\t\t\t\t\t\t\t queue->sn_slot);\n+\n+\t\t\tDLB2_CSR_WR(hw,\n+\t\t\t\t    offs[queue->sn_group],\n+\t\t\t\t    DLB2_RO_GRP_0_SLT_SHFT_RST);\n+\t\t}\n+\n+\t\tfor (i = 0; i < DLB2_LSP_QID2CQIDIX_NUM; i++) {\n+\t\t\tDLB2_CSR_WR(hw,\n+\t\t\t\t    DLB2_LSP_QID2CQIDIX(hw->ver, queue_id, i),\n+\t\t\t\t    DLB2_LSP_QID2CQIDIX_00_RST);\n+\n+\t\t\tDLB2_CSR_WR(hw,\n+\t\t\t\t    DLB2_LSP_QID2CQIDIX2(hw->ver, queue_id, i),\n+\t\t\t\t    DLB2_LSP_QID2CQIDIX2_00_RST);\n+\n+\t\t\tDLB2_CSR_WR(hw,\n+\t\t\t\t    DLB2_ATM_QID2CQIDIX(queue_id, i),\n+\t\t\t\t    DLB2_ATM_QID2CQIDIX_00_RST);\n+\t\t}\n+\t}\n+}\n+\n+static void dlb2_domain_reset_dir_queue_registers(struct dlb2_hw *hw,\n+\t\t\t\t\t\t  struct dlb2_hw_domain *domain)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_dir_pq_pair *queue;\n+\tRTE_SET_USED(iter);\n+\n+\tDLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_LSP_QID_DIR_MAX_DEPTH(hw->ver,\n+\t\t\t\t\t\t       queue->id.phys_id),\n+\t\t\t    DLB2_LSP_QID_DIR_MAX_DEPTH_RST);\n+\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(hw->ver,\n+\t\t\t\t\t\t\t  queue->id.phys_id),\n+\t\t\t    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST);\n+\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(hw->ver,\n+\t\t\t\t\t\t\t  queue->id.phys_id),\n+\t\t\t    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST);\n+\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver,\n+\t\t\t\t\t\t\t queue->id.phys_id),\n+\t\t\t    DLB2_LSP_QID_DIR_DEPTH_THRSH_RST);\n+\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_SYS_DIR_QID_ITS(queue->id.phys_id),\n+\t\t\t    DLB2_SYS_DIR_QID_ITS_RST);\n+\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_SYS_DIR_QID_V(queue->id.phys_id),\n+\t\t\t    DLB2_SYS_DIR_QID_V_RST);\n+\t}\n+}\n+\n+static void dlb2_domain_reset_registers(struct dlb2_hw *hw,\n+\t\t\t\t\tstruct dlb2_hw_domain *domain)\n+{\n+\tdlb2_domain_reset_ldb_port_registers(hw, domain);\n+\n+\tdlb2_domain_reset_dir_port_registers(hw, domain);\n+\n+\tdlb2_domain_reset_ldb_queue_registers(hw, domain);\n+\n+\tdlb2_domain_reset_dir_queue_registers(hw, domain);\n+\n+\tif (hw->ver == DLB2_HW_V2) {\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id),\n+\t\t\t    DLB2_CHP_CFG_LDB_VAS_CRD_RST);\n+\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id),\n+\t\t\t    DLB2_CHP_CFG_DIR_VAS_CRD_RST);\n+\t} else\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id),\n+\t\t\t    DLB2_CHP_CFG_VAS_CRD_RST);\n+}\n+\n+static int dlb2_domain_reset_software_state(struct dlb2_hw *hw,\n+\t\t\t\t\t    struct dlb2_hw_domain *domain)\n+{\n+\tstruct dlb2_dir_pq_pair *tmp_dir_port;\n+\tstruct dlb2_ldb_queue *tmp_ldb_queue;\n+\tstruct dlb2_ldb_port *tmp_ldb_port;\n+\tstruct dlb2_list_entry *iter1;\n+\tstruct dlb2_list_entry *iter2;\n+\tstruct dlb2_function_resources 
*rsrcs;\n+\tstruct dlb2_dir_pq_pair *dir_port;\n+\tstruct dlb2_ldb_queue *ldb_queue;\n+\tstruct dlb2_ldb_port *ldb_port;\n+\tstruct dlb2_list_head *list;\n+\tint ret, i;\n+\tRTE_SET_USED(tmp_dir_port);\n+\tRTE_SET_USED(tmp_ldb_queue);\n+\tRTE_SET_USED(tmp_ldb_port);\n+\tRTE_SET_USED(iter1);\n+\tRTE_SET_USED(iter2);\n+\n+\trsrcs = domain->parent_func;\n+\n+\t/* Move the domain's ldb queues to the function's avail list */\n+\tlist = &domain->used_ldb_queues;\n+\tDLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {\n+\t\tif (ldb_queue->sn_cfg_valid) {\n+\t\t\tstruct dlb2_sn_group *grp;\n+\n+\t\t\tgrp = &hw->rsrcs.sn_groups[ldb_queue->sn_group];\n+\n+\t\t\tdlb2_sn_group_free_slot(grp, ldb_queue->sn_slot);\n+\t\t\tldb_queue->sn_cfg_valid = false;\n+\t\t}\n+\n+\t\tldb_queue->owned = false;\n+\t\tldb_queue->num_mappings = 0;\n+\t\tldb_queue->num_pending_additions = 0;\n+\n+\t\tdlb2_list_del(&domain->used_ldb_queues,\n+\t\t\t      &ldb_queue->domain_list);\n+\t\tdlb2_list_add(&rsrcs->avail_ldb_queues,\n+\t\t\t      &ldb_queue->func_list);\n+\t\trsrcs->num_avail_ldb_queues++;\n+\t}\n+\n+\tlist = &domain->avail_ldb_queues;\n+\tDLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {\n+\t\tldb_queue->owned = false;\n+\n+\t\tdlb2_list_del(&domain->avail_ldb_queues,\n+\t\t\t      &ldb_queue->domain_list);\n+\t\tdlb2_list_add(&rsrcs->avail_ldb_queues,\n+\t\t\t      &ldb_queue->func_list);\n+\t\trsrcs->num_avail_ldb_queues++;\n+\t}\n+\n+\t/* Move the domain's ldb ports to the function's avail list */\n+\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n+\t\tlist = &domain->used_ldb_ports[i];\n+\t\tDLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,\n+\t\t\t\t       iter1, iter2) {\n+\t\t\tint j;\n+\n+\t\t\tldb_port->owned = false;\n+\t\t\tldb_port->configured = false;\n+\t\t\tldb_port->num_pending_removals = 0;\n+\t\t\tldb_port->num_mappings = 0;\n+\t\t\tldb_port->init_tkn_cnt = 0;\n+\t\t\tldb_port->cq_depth = 0;\n+\t\t\tfor (j = 0; j < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; j++)\n+\t\t\t\tldb_port->qid_map[j].state =\n+\t\t\t\t\tDLB2_QUEUE_UNMAPPED;\n+\n+\t\t\tdlb2_list_del(&domain->used_ldb_ports[i],\n+\t\t\t\t      &ldb_port->domain_list);\n+\t\t\tdlb2_list_add(&rsrcs->avail_ldb_ports[i],\n+\t\t\t\t      &ldb_port->func_list);\n+\t\t\trsrcs->num_avail_ldb_ports[i]++;\n+\t\t}\n+\n+\t\tlist = &domain->avail_ldb_ports[i];\n+\t\tDLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,\n+\t\t\t\t       iter1, iter2) {\n+\t\t\tldb_port->owned = false;\n+\n+\t\t\tdlb2_list_del(&domain->avail_ldb_ports[i],\n+\t\t\t\t      &ldb_port->domain_list);\n+\t\t\tdlb2_list_add(&rsrcs->avail_ldb_ports[i],\n+\t\t\t\t      &ldb_port->func_list);\n+\t\t\trsrcs->num_avail_ldb_ports[i]++;\n+\t\t}\n+\t}\n+\n+\t/* Move the domain's dir ports to the function's avail list */\n+\tlist = &domain->used_dir_pq_pairs;\n+\tDLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {\n+\t\tdir_port->owned = false;\n+\t\tdir_port->port_configured = false;\n+\t\tdir_port->init_tkn_cnt = 0;\n+\n+\t\tdlb2_list_del(&domain->used_dir_pq_pairs,\n+\t\t\t      &dir_port->domain_list);\n+\n+\t\tdlb2_list_add(&rsrcs->avail_dir_pq_pairs,\n+\t\t\t      &dir_port->func_list);\n+\t\trsrcs->num_avail_dir_pq_pairs++;\n+\t}\n+\n+\tlist = &domain->avail_dir_pq_pairs;\n+\tDLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {\n+\t\tdir_port->owned = false;\n+\n+\t\tdlb2_list_del(&domain->avail_dir_pq_pairs,\n+\t\t\t      &dir_port->domain_list);\n+\n+\t\tdlb2_list_add(&rsrcs->avail_dir_pq_pairs,\n+\t\t\t      
&dir_port->func_list);\n+\t\trsrcs->num_avail_dir_pq_pairs++;\n+\t}\n+\n+\t/* Return hist list entries to the function */\n+\tret = dlb2_bitmap_set_range(rsrcs->avail_hist_list_entries,\n+\t\t\t\t    domain->hist_list_entry_base,\n+\t\t\t\t    domain->total_hist_list_entries);\n+\tif (ret) {\n+\t\tDLB2_HW_ERR(hw,\n+\t\t\t    \"[%s()] Internal error: domain hist list base does not match the function's bitmap.\\n\",\n+\t\t\t    __func__);\n+\t\treturn ret;\n+\t}\n+\n+\tdomain->total_hist_list_entries = 0;\n+\tdomain->avail_hist_list_entries = 0;\n+\tdomain->hist_list_entry_base = 0;\n+\tdomain->hist_list_entry_offset = 0;\n+\n+\tif (hw->ver == DLB2_HW_V2_5) {\n+\t\trsrcs->num_avail_entries += domain->num_credits;\n+\t\tdomain->num_credits = 0;\n+\t} else {\n+\t\trsrcs->num_avail_qed_entries += domain->num_ldb_credits;\n+\t\tdomain->num_ldb_credits = 0;\n+\n+\t\trsrcs->num_avail_dqed_entries += domain->num_dir_credits;\n+\t\tdomain->num_dir_credits = 0;\n+\t}\n+\trsrcs->num_avail_aqed_entries += domain->num_avail_aqed_entries;\n+\trsrcs->num_avail_aqed_entries += domain->num_used_aqed_entries;\n+\tdomain->num_avail_aqed_entries = 0;\n+\tdomain->num_used_aqed_entries = 0;\n+\n+\tdomain->num_pending_removals = 0;\n+\tdomain->num_pending_additions = 0;\n+\tdomain->configured = false;\n+\tdomain->started = false;\n+\n+\t/*\n+\t * Move the domain out of the used_domains list and back to the\n+\t * function's avail_domains list.\n+\t */\n+\tdlb2_list_del(&rsrcs->used_domains, &domain->func_list);\n+\tdlb2_list_add(&rsrcs->avail_domains, &domain->func_list);\n+\trsrcs->num_avail_domains++;\n+\n+\treturn 0;\n+}\n+\n+static int dlb2_domain_drain_unmapped_queue(struct dlb2_hw *hw,\n+\t\t\t\t\t    struct dlb2_hw_domain *domain,\n+\t\t\t\t\t    struct dlb2_ldb_queue *queue)\n+{\n+\tstruct dlb2_ldb_port *port = NULL;\n+\tint ret, i;\n+\n+\t/* If a domain has LDB queues, it must have LDB ports */\n+\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n+\t\tport = DLB2_DOM_LIST_HEAD(domain->used_ldb_ports[i],\n+\t\t\t\t\t  typeof(*port));\n+\t\tif (port)\n+\t\t\tbreak;\n+\t}\n+\n+\tif (port == NULL) {\n+\t\tDLB2_HW_ERR(hw,\n+\t\t\t    \"[%s()] Internal error: No configured LDB ports\\n\",\n+\t\t\t    __func__);\n+\t\treturn -EFAULT;\n+\t}\n+\n+\t/* If necessary, free up a QID slot in this CQ */\n+\tif (port->num_mappings == DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {\n+\t\tstruct dlb2_ldb_queue *mapped_queue;\n+\n+\t\tmapped_queue = &hw->rsrcs.ldb_queues[port->qid_map[0].qid];\n+\n+\t\tret = dlb2_ldb_port_unmap_qid(hw, port, mapped_queue);\n+\t\tif (ret)\n+\t\t\treturn ret;\n+\t}\n+\n+\tret = dlb2_ldb_port_map_qid_dynamic(hw, port, queue, 0);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\treturn dlb2_domain_drain_mapped_queues(hw, domain);\n+}\n+\n+static int dlb2_domain_drain_unmapped_queues(struct dlb2_hw *hw,\n+\t\t\t\t\t     struct dlb2_hw_domain *domain)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_ldb_queue *queue;\n+\tint ret;\n+\tRTE_SET_USED(iter);\n+\n+\t/* If the domain hasn't been started, there's no traffic to drain */\n+\tif (!domain->started)\n+\t\treturn 0;\n+\n+\t/*\n+\t * Pre-condition: the unattached queue must not have any outstanding\n+\t * completions. 
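(Otherwise the dynamic map\n+\t * below could read a nonzero inflight count and defer the mapping.)\n+\t * 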
This is ensured by calling dlb2_domain_drain_ldb_cqs()\n+\t * prior to this in dlb2_domain_drain_mapped_queues().\n+\t */\n+\tDLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {\n+\t\tif (queue->num_mappings != 0 ||\n+\t\t    dlb2_ldb_queue_is_empty(hw, queue))\n+\t\t\tcontinue;\n+\n+\t\tret = dlb2_domain_drain_unmapped_queue(hw, domain, queue);\n+\t\tif (ret)\n+\t\t\treturn ret;\n+\t}\n+\n+\treturn 0;\n+}\n+\n+/**\n+ * dlb2_reset_domain() - reset a scheduling domain\n+ * @hw: dlb2_hw handle for a particular device.\n+ * @domain_id: domain ID.\n+ * @vdev_req: indicates whether this request came from a vdev.\n+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.\n+ *\n+ * This function resets and frees a DLB 2.0 scheduling domain and its associated\n+ * resources.\n+ *\n+ * Pre-condition: the driver must ensure software has stopped sending QEs\n+ * through this domain's producer ports before invoking this function, or\n+ * undefined behavior will result.\n+ *\n+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual\n+ * device.\n+ *\n+ * Return:\n+ * Returns 0 upon success, < 0 otherwise.\n+ *\n+ * EINVAL - Invalid domain ID, or the domain is not configured.\n+ * EFAULT - Internal error. (Possibly caused if the pre-condition is not\n+ *\t    met.)\n+ * ETIMEDOUT - Hardware component didn't reset in the expected time.\n+ */\n+int dlb2_reset_domain(struct dlb2_hw *hw,\n+\t\t      u32 domain_id,\n+\t\t      bool vdev_req,\n+\t\t      unsigned int vdev_id)\n+{\n+\tstruct dlb2_hw_domain *domain;\n+\tint ret;\n+\n+\tdlb2_log_reset_domain(hw, domain_id, vdev_req, vdev_id);\n+\n+\tdomain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);\n+\n+\tif (domain == NULL || !domain->configured)\n+\t\treturn -EINVAL;\n+\n+\t/* Disable VPPs */\n+\tif (vdev_req) {\n+\t\tdlb2_domain_disable_dir_vpps(hw, domain, vdev_id);\n+\n+\t\tdlb2_domain_disable_ldb_vpps(hw, domain, vdev_id);\n+\t}\n+\n+\t/* Disable CQ interrupts */\n+\tdlb2_domain_disable_dir_port_interrupts(hw, domain);\n+\n+\tdlb2_domain_disable_ldb_port_interrupts(hw, domain);\n+\n+\t/*\n+\t * For each queue owned by this domain, disable its write permissions to\n+\t * cause any traffic sent to it to be dropped. Well-behaved software\n+\t * should not be sending QEs at this point.\n+\t */\n+\tdlb2_domain_disable_dir_queue_write_perms(hw, domain);\n+\n+\tdlb2_domain_disable_ldb_queue_write_perms(hw, domain);\n+\n+\t/* Turn off completion tracking on all the domain's PPs. */\n+\tdlb2_domain_disable_ldb_seq_checks(hw, domain);\n+\n+\t/*\n+\t * Disable the LDB CQs and drain them in order to complete the map and\n+\t * unmap procedures, which require zero CQ inflights and zero QID\n+\t * inflights respectively.\n+\t */\n+\tdlb2_domain_disable_ldb_cqs(hw, domain);\n+\n+\tdlb2_domain_drain_ldb_cqs(hw, domain, false);\n+\n+\tret = dlb2_domain_wait_for_ldb_cqs_to_empty(hw, domain);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\tret = dlb2_domain_finish_unmap_qid_procedures(hw, domain);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\tret = dlb2_domain_finish_map_qid_procedures(hw, domain);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\t/* Re-enable the CQs in order to drain the mapped queues. */\n+\tdlb2_domain_enable_ldb_cqs(hw, domain);\n+\n+\tret = dlb2_domain_drain_mapped_queues(hw, domain);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\tret = dlb2_domain_drain_unmapped_queues(hw, domain);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\t/* Done draining LDB QEs, so disable the CQs. 
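Only DIR traffic remains. 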
*/\n+\tdlb2_domain_disable_ldb_cqs(hw, domain);\n+\n+\tdlb2_domain_drain_dir_queues(hw, domain);\n+\n+\t/* Done draining DIR QEs, so disable the CQs. */\n+\tdlb2_domain_disable_dir_cqs(hw, domain);\n+\n+\t/* Disable PPs */\n+\tdlb2_domain_disable_dir_producer_ports(hw, domain);\n+\n+\tdlb2_domain_disable_ldb_producer_ports(hw, domain);\n+\n+\tret = dlb2_domain_verify_reset_success(hw, domain);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\t/* Reset the QID and port state. */\n+\tdlb2_domain_reset_registers(hw, domain);\n+\n+\t/* Hardware reset complete. Reset the domain's software state */\n+\treturn dlb2_domain_reset_software_state(hw, domain);\n+}\n+\n+static void\n+dlb2_log_create_ldb_queue_args(struct dlb2_hw *hw,\n+\t\t\t       u32 domain_id,\n+\t\t\t       struct dlb2_create_ldb_queue_args *args,\n+\t\t\t       bool vdev_req,\n+\t\t\t       unsigned int vdev_id)\n+{\n+\tDLB2_HW_DBG(hw, \"DLB2 create load-balanced queue arguments:\\n\");\n+\tif (vdev_req)\n+\t\tDLB2_HW_DBG(hw, \"(Request from vdev %d)\\n\", vdev_id);\n+\tDLB2_HW_DBG(hw, \"\\tDomain ID:                  %d\\n\",\n+\t\t    domain_id);\n+\tDLB2_HW_DBG(hw, \"\\tNumber of sequence numbers: %d\\n\",\n+\t\t    args->num_sequence_numbers);\n+\tDLB2_HW_DBG(hw, \"\\tNumber of QID inflights:    %d\\n\",\n+\t\t    args->num_qid_inflights);\n+\tDLB2_HW_DBG(hw, \"\\tNumber of ATM inflights:    %d\\n\",\n+\t\t    args->num_atomic_inflights);\n+}\n+\n+static int\n+dlb2_ldb_queue_attach_to_sn_group(struct dlb2_hw *hw,\n+\t\t\t\t  struct dlb2_ldb_queue *queue,\n+\t\t\t\t  struct dlb2_create_ldb_queue_args *args)\n+{\n+\tint slot = -1;\n+\tint i;\n+\n+\tqueue->sn_cfg_valid = false;\n+\n+\tif (args->num_sequence_numbers == 0)\n+\t\treturn 0;\n+\n+\tfor (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {\n+\t\tstruct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];\n+\n+\t\tif (group->sequence_numbers_per_queue ==\n+\t\t    args->num_sequence_numbers &&\n+\t\t    !dlb2_sn_group_full(group)) {\n+\t\t\tslot = dlb2_sn_group_alloc_slot(group);\n+\t\t\tif (slot >= 0)\n+\t\t\t\tbreak;\n+\t\t}\n+\t}\n+\n+\tif (slot == -1) {\n+\t\tDLB2_HW_ERR(hw,\n+\t\t\t    \"[%s():%d] Internal error: no sequence number slots available\\n\",\n+\t\t\t    __func__, __LINE__);\n+\t\treturn -EFAULT;\n+\t}\n+\n+\tqueue->sn_cfg_valid = true;\n+\tqueue->sn_group = i;\n+\tqueue->sn_slot = slot;\n+\treturn 0;\n+}\n+\n+static int\n+dlb2_verify_create_ldb_queue_args(struct dlb2_hw *hw,\n+\t\t\t\t  u32 domain_id,\n+\t\t\t\t  struct dlb2_create_ldb_queue_args *args,\n+\t\t\t\t  struct dlb2_cmd_response *resp,\n+\t\t\t\t  bool vdev_req,\n+\t\t\t\t  unsigned int vdev_id,\n+\t\t\t\t  struct dlb2_hw_domain **out_domain,\n+\t\t\t\t  struct dlb2_ldb_queue **out_queue)\n+{\n+\tstruct dlb2_hw_domain *domain;\n+\tstruct dlb2_ldb_queue *queue;\n+\tint i;\n+\n+\tdomain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);\n+\n+\tif (!domain) {\n+\t\tresp->status = DLB2_ST_INVALID_DOMAIN_ID;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (!domain->configured) {\n+\t\tresp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (domain->started) {\n+\t\tresp->status = DLB2_ST_DOMAIN_STARTED;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tqueue = DLB2_DOM_LIST_HEAD(domain->avail_ldb_queues, typeof(*queue));\n+\tif (!queue) {\n+\t\tresp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (args->num_sequence_numbers) {\n+\t\tfor (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {\n+\t\t\tstruct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];\n+\n+\t\t\tif 
(group->sequence_numbers_per_queue ==\n+\t\t\t    args->num_sequence_numbers &&\n+\t\t\t    !dlb2_sn_group_full(group))\n+\t\t\t\tbreak;\n+\t\t}\n+\n+\t\tif (i == DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS) {\n+\t\t\tresp->status = DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE;\n+\t\t\treturn -EINVAL;\n+\t\t}\n+\t}\n+\n+\tif (args->num_qid_inflights > 4096) {\n+\t\tresp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\t/* Inflights must be <= number of sequence numbers if ordered */\n+\tif (args->num_sequence_numbers != 0 &&\n+\t    args->num_qid_inflights > args->num_sequence_numbers) {\n+\t\tresp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (domain->num_avail_aqed_entries < args->num_atomic_inflights) {\n+\t\tresp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (args->num_atomic_inflights &&\n+\t    args->lock_id_comp_level != 0 &&\n+\t    args->lock_id_comp_level != 64 &&\n+\t    args->lock_id_comp_level != 128 &&\n+\t    args->lock_id_comp_level != 256 &&\n+\t    args->lock_id_comp_level != 512 &&\n+\t    args->lock_id_comp_level != 1024 &&\n+\t    args->lock_id_comp_level != 2048 &&\n+\t    args->lock_id_comp_level != 4096 &&\n+\t    args->lock_id_comp_level != 65536) {\n+\t\tresp->status = DLB2_ST_INVALID_LOCK_ID_COMP_LEVEL;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\t*out_domain = domain;\n+\t*out_queue = queue;\n+\n+\treturn 0;\n+}\n+\n+static int\n+dlb2_ldb_queue_attach_resources(struct dlb2_hw *hw,\n+\t\t\t\tstruct dlb2_hw_domain *domain,\n+\t\t\t\tstruct dlb2_ldb_queue *queue,\n+\t\t\t\tstruct dlb2_create_ldb_queue_args *args)\n+{\n+\tint ret;\n+\tret = dlb2_ldb_queue_attach_to_sn_group(hw, queue, args);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\t/* Attach QID inflights */\n+\tqueue->num_qid_inflights = args->num_qid_inflights;\n+\n+\t/* Attach atomic inflights */\n+\tqueue->aqed_limit = args->num_atomic_inflights;\n+\n+\tdomain->num_avail_aqed_entries -= args->num_atomic_inflights;\n+\tdomain->num_used_aqed_entries += args->num_atomic_inflights;\n+\n+\treturn 0;\n+}\n+\n+static void dlb2_configure_ldb_queue(struct dlb2_hw *hw,\n+\t\t\t\t     struct dlb2_hw_domain *domain,\n+\t\t\t\t     struct dlb2_ldb_queue *queue,\n+\t\t\t\t     struct dlb2_create_ldb_queue_args *args,\n+\t\t\t\t     bool vdev_req,\n+\t\t\t\t     unsigned int vdev_id)\n+{\n+\tstruct dlb2_sn_group *sn_group;\n+\tunsigned int offs;\n+\tu32 reg = 0;\n+\tu32 alimit;\n+\n+\t/* QID write permissions are turned on when the domain is started */\n+\toffs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.phys_id;\n+\n+\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), reg);\n+\n+\t/*\n+\t * Unordered QIDs get 4K inflights, ordered get as many as the number\n+\t * of sequence numbers.\n+\t */\n+\tDLB2_BITS_SET(reg, args->num_qid_inflights,\n+\t\t      DLB2_LSP_QID_LDB_INFL_LIM_LIMIT);\n+\tDLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver,\n+\t\t\t\t\t\t  queue->id.phys_id), reg);\n+\n+\talimit = queue->aqed_limit;\n+\n+\tif (alimit > DLB2_MAX_NUM_AQED_ENTRIES)\n+\t\talimit = DLB2_MAX_NUM_AQED_ENTRIES;\n+\n+\treg = 0;\n+\tDLB2_BITS_SET(reg, alimit, DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT);\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_LSP_QID_AQED_ACTIVE_LIM(hw->ver,\n+\t\t\t\t\t\t queue->id.phys_id), reg);\n+\n+\treg = 0;\n+\tswitch (args->lock_id_comp_level) {\n+\tcase 64:\n+\t\tDLB2_BITS_SET(reg, 1, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);\n+\t\tbreak;\n+\tcase 128:\n+\t\tDLB2_BITS_SET(reg, 2, 
DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);\n+\t\tbreak;\n+\tcase 256:\n+\t\tDLB2_BITS_SET(reg, 3, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);\n+\t\tbreak;\n+\tcase 512:\n+\t\tDLB2_BITS_SET(reg, 4, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);\n+\t\tbreak;\n+\tcase 1024:\n+\t\tDLB2_BITS_SET(reg, 5, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);\n+\t\tbreak;\n+\tcase 2048:\n+\t\tDLB2_BITS_SET(reg, 6, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);\n+\t\tbreak;\n+\tcase 4096:\n+\t\tDLB2_BITS_SET(reg, 7, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);\n+\t\tbreak;\n+\tdefault:\n+\t\t/* No compression by default */\n+\t\tbreak;\n+\t}\n+\n+\tDLB2_CSR_WR(hw, DLB2_AQED_QID_HID_WIDTH(queue->id.phys_id), reg);\n+\n+\treg = 0;\n+\t/* Don't timestamp QEs that pass through this queue */\n+\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_ITS(queue->id.phys_id), reg);\n+\n+\tDLB2_BITS_SET(reg, args->depth_threshold,\n+\t\t      DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH);\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_LSP_QID_ATM_DEPTH_THRSH(hw->ver,\n+\t\t\t\t\t\t queue->id.phys_id), reg);\n+\n+\treg = 0;\n+\tDLB2_BITS_SET(reg, args->depth_threshold,\n+\t\t      DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH);\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_LSP_QID_NALDB_DEPTH_THRSH(hw->ver, queue->id.phys_id),\n+\t\t    reg);\n+\n+\t/*\n+\t * This register limits the number of inflight flows a queue can have\n+\t * at one time.  It has an upper bound of 2048, but can be\n+\t * over-subscribed. 512 is chosen so that a single queue does not use\n+\t * the entire atomic storage, but can use a substantial portion if\n+\t * needed.\n+\t */\n+\treg = 0;\n+\tDLB2_BITS_SET(reg, 512, DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT);\n+\tDLB2_CSR_WR(hw, DLB2_AQED_QID_FID_LIM(queue->id.phys_id), reg);\n+\n+\t/* Configure SNs */\n+\treg = 0;\n+\tsn_group = &hw->rsrcs.sn_groups[queue->sn_group];\n+\tDLB2_BITS_SET(reg, sn_group->mode, DLB2_CHP_ORD_QID_SN_MAP_MODE);\n+\tDLB2_BITS_SET(reg, queue->sn_slot, DLB2_CHP_ORD_QID_SN_MAP_SLOT);\n+\tDLB2_BITS_SET(reg, sn_group->id, DLB2_CHP_ORD_QID_SN_MAP_GRP);\n+\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_CHP_ORD_QID_SN_MAP(hw->ver, queue->id.phys_id), reg);\n+\n+\treg = 0;\n+\tDLB2_BITS_SET(reg, (args->num_sequence_numbers != 0),\n+\t\t DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V);\n+\tDLB2_BITS_SET(reg, (args->num_atomic_inflights != 0),\n+\t\t DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V);\n+\n+\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_CFG_V(queue->id.phys_id), reg);\n+\n+\tif (vdev_req) {\n+\t\toffs = vdev_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.virt_id;\n+\n+\t\treg = 0;\n+\t\tDLB2_BIT_SET(reg, DLB2_SYS_VF_LDB_VQID_V_VQID_V);\n+\t\tDLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(offs), reg);\n+\n+\t\treg = 0;\n+\t\tDLB2_BITS_SET(reg, queue->id.phys_id,\n+\t\t\t      DLB2_SYS_VF_LDB_VQID2QID_QID);\n+\t\tDLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(offs), reg);\n+\n+\t\treg = 0;\n+\t\tDLB2_BITS_SET(reg, queue->id.virt_id,\n+\t\t\t      DLB2_SYS_LDB_QID2VQID_VQID);\n+\t\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_QID2VQID(queue->id.phys_id), reg);\n+\t}\n+\n+\treg = 0;\n+\tDLB2_BIT_SET(reg, DLB2_SYS_LDB_QID_V_QID_V);\n+\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_V(queue->id.phys_id), reg);\n+}\n+\n+/**\n+ * dlb2_hw_create_ldb_queue() - create a load-balanced queue\n+ * @hw: dlb2_hw handle for a particular device.\n+ * @domain_id: domain ID.\n+ * @args: queue creation arguments.\n+ * @resp: response structure.\n+ * @vdev_req: indicates whether this request came from a vdev.\n+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.\n+ *\n+ * This function creates a load-balanced queue.\n+ *\n+ * A vdev can be either an SR-IOV virtual function 
or a Scalable IOV virtual\n+ * device.\n+ *\n+ * Return:\n+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is\n+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id\n+ * contains the queue ID.\n+ *\n+ * resp->id contains a virtual ID if vdev_req is true.\n+ *\n+ * Errors:\n+ * EINVAL - A requested resource is unavailable, the domain is not configured,\n+ *\t    the domain has already been started, or the requested queue name is\n+ *\t    already in use.\n+ * EFAULT - Internal error (resp->status not set).\n+ */\n+int dlb2_hw_create_ldb_queue(struct dlb2_hw *hw,\n+\t\t\t     u32 domain_id,\n+\t\t\t     struct dlb2_create_ldb_queue_args *args,\n+\t\t\t     struct dlb2_cmd_response *resp,\n+\t\t\t     bool vdev_req,\n+\t\t\t     unsigned int vdev_id)\n+{\n+\tstruct dlb2_hw_domain *domain;\n+\tstruct dlb2_ldb_queue *queue;\n+\tint ret;\n+\n+\tdlb2_log_create_ldb_queue_args(hw, domain_id, args, vdev_req, vdev_id);\n+\n+\t/*\n+\t * Verify that hardware resources are available before attempting to\n+\t * satisfy the request. This simplifies the error unwinding code.\n+\t */\n+\tret = dlb2_verify_create_ldb_queue_args(hw,\n+\t\t\t\t\t\tdomain_id,\n+\t\t\t\t\t\targs,\n+\t\t\t\t\t\tresp,\n+\t\t\t\t\t\tvdev_req,\n+\t\t\t\t\t\tvdev_id,\n+\t\t\t\t\t\t&domain,\n+\t\t\t\t\t\t&queue);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\tret = dlb2_ldb_queue_attach_resources(hw, domain, queue, args);\n+\n+\tif (ret) {\n+\t\tDLB2_HW_ERR(hw,\n+\t\t\t    \"[%s():%d] Internal error: failed to attach the ldb queue resources\\n\",\n+\t\t\t    __func__, __LINE__);\n+\t\treturn ret;\n+\t}\n+\n+\tdlb2_configure_ldb_queue(hw, domain, queue, args, vdev_req, vdev_id);\n+\n+\tqueue->num_mappings = 0;\n+\n+\tqueue->configured = true;\n+\n+\t/*\n+\t * Configuration succeeded, so move the resource from the 'avail' to\n+\t * the 'used' list.\n+\t */\n+\tdlb2_list_del(&domain->avail_ldb_queues, &queue->domain_list);\n+\n+\tdlb2_list_add(&domain->used_ldb_queues, &queue->domain_list);\n+\n+\tresp->status = 0;\n+\tresp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;\n+\n+\treturn 0;\n+}
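\n+\n+/*\n+ * Editorial usage sketch, not part of the original patch: one way a\n+ * PF-level caller might invoke dlb2_hw_create_ldb_queue(). The wrapper\n+ * name and argument values below are hypothetical.\n+ *\n+ *\tstatic int example_create_ordered_queue(struct dlb2_hw *hw, u32 dom)\n+ *\t{\n+ *\t\tstruct dlb2_create_ldb_queue_args args = {0};\n+ *\t\tstruct dlb2_cmd_response resp = {0};\n+ *\t\tint ret;\n+ *\n+ *\t\targs.num_sequence_numbers = 64; (ordered: needs an SN slot)\n+ *\t\targs.num_qid_inflights = 64;    (must be <= SNs when ordered)\n+ *\t\targs.num_atomic_inflights = 0;\n+ *\n+ *\t\tret = dlb2_hw_create_ldb_queue(hw, dom, &args, &resp,\n+ *\t\t\t\t\t       false, 0);\n+ *\t\tif (ret)\n+ *\t\t\treturn ret; (resp.status holds a dlb2_error code)\n+ *\n+ *\t\treturn resp.id; (physical queue ID for a PF request)\n+ *\t}\n+ */\n+\n+static void dlb2_ldb_port_configure_pp(struct dlb2_hw *hw,\n+\t\t\t\t       struct dlb2_hw_domain *domain,\n+\t\t\t\t       struct dlb2_ldb_port *port,\n+\t\t\t\t       bool vdev_req,\n+\t\t\t\t       unsigned int vdev_id)\n+{\n+\tu32 reg = 0;\n+\n+\tDLB2_BITS_SET(reg, domain->id.phys_id, DLB2_SYS_LDB_PP2VAS_VAS);\n+\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VAS(port->id.phys_id), reg);\n+\n+\tif (vdev_req) {\n+\t\tunsigned int offs;\n+\t\tu32 virt_id;\n+\n+\t\t/*\n+\t\t * DLB uses producer port address bits 17:12 to determine the\n+\t\t * producer port ID. 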
In Scalable IOV mode, PP accesses come\n+\t\t * through the PF MMIO window for the physical producer port,\n+\t\t * so for translation purposes the virtual and physical port\n+\t\t * IDs are equal.\n+\t\t */\n+\t\tif (hw->virt_mode == DLB2_VIRT_SRIOV)\n+\t\t\tvirt_id = port->id.virt_id;\n+\t\telse\n+\t\t\tvirt_id = port->id.phys_id;\n+\n+\t\treg = 0;\n+\t\tDLB2_BITS_SET(reg, port->id.phys_id, DLB2_SYS_VF_LDB_VPP2PP_PP);\n+\t\toffs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;\n+\t\tDLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP2PP(offs), reg);\n+\n+\t\treg = 0;\n+\t\tDLB2_BITS_SET(reg, vdev_id, DLB2_SYS_LDB_PP2VDEV_VDEV);\n+\t\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VDEV(port->id.phys_id), reg);\n+\n+\t\treg = 0;\n+\t\tDLB2_BIT_SET(reg, DLB2_SYS_VF_LDB_VPP_V_VPP_V);\n+\t\tDLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), reg);\n+\t}\n+\n+\treg = 0;\n+\tDLB2_BIT_SET(reg, DLB2_SYS_LDB_PP_V_PP_V);\n+\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_PP_V(port->id.phys_id), reg);\n+}\n+\n+static int dlb2_ldb_port_configure_cq(struct dlb2_hw *hw,\n+\t\t\t\t      struct dlb2_hw_domain *domain,\n+\t\t\t\t      struct dlb2_ldb_port *port,\n+\t\t\t\t      uintptr_t cq_dma_base,\n+\t\t\t\t      struct dlb2_create_ldb_port_args *args,\n+\t\t\t\t      bool vdev_req,\n+\t\t\t\t      unsigned int vdev_id)\n+{\n+\tu32 hl_base = 0;\n+\tu32 reg = 0;\n+\tu32 ds = 0;\n+\n+\t/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */\n+\tDLB2_BITS_SET(reg, cq_dma_base >> 6, DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L);\n+\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id), reg);\n+\n+\treg = cq_dma_base >> 32;\n+\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id), reg);\n+\n+\t/*\n+\t * 'ro' == relaxed ordering. This setting allows DLB2 to write\n+\t * cache lines out-of-order (but QEs within a cache line are always\n+\t * updated in-order).\n+\t */\n+\treg = 0;\n+\tDLB2_BITS_SET(reg, vdev_id, DLB2_SYS_LDB_CQ2VF_PF_RO_VF);\n+\tDLB2_BITS_SET(reg,\n+\t\t !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV),\n+\t\t DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF);\n+\tDLB2_BIT_SET(reg, DLB2_SYS_LDB_CQ2VF_PF_RO_RO);\n+\n+\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id), reg);\n+\n+\tport->cq_depth = args->cq_depth;\n+\n+\tif (args->cq_depth <= 8) {\n+\t\tds = 1;\n+\t} else if (args->cq_depth == 16) {\n+\t\tds = 2;\n+\t} else if (args->cq_depth == 32) {\n+\t\tds = 3;\n+\t} else if (args->cq_depth == 64) {\n+\t\tds = 4;\n+\t} else if (args->cq_depth == 128) {\n+\t\tds = 5;\n+\t} else if (args->cq_depth == 256) {\n+\t\tds = 6;\n+\t} else if (args->cq_depth == 512) {\n+\t\tds = 7;\n+\t} else if (args->cq_depth == 1024) {\n+\t\tds = 8;\n+\t} else {\n+\t\tDLB2_HW_ERR(hw,\n+\t\t\t    \"[%s():%d] Internal error: invalid CQ depth\\n\",\n+\t\t\t    __func__, __LINE__);\n+\t\treturn -EFAULT;\n+\t}\n+\n+\treg = 0;\n+\tDLB2_BITS_SET(reg, ds,\n+\t\t      DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT);\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),\n+\t\t    reg);\n+\n+\t/*\n+\t * To support CQs with depth less than 8, program the token count\n+\t * register with a non-zero initial value. 
Operations such as domain\n+\t * reset must take this initial value into account when quiescing the\n+\t * CQ.\n+\t */\n+\tport->init_tkn_cnt = 0;\n+\n+\tif (args->cq_depth < 8) {\n+\t\treg = 0;\n+\t\tport->init_tkn_cnt = 8 - args->cq_depth;\n+\n+\t\tDLB2_BITS_SET(reg,\n+\t\t\t      port->init_tkn_cnt,\n+\t\t\t      DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT);\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),\n+\t\t\t    reg);\n+\t} else {\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),\n+\t\t\t    DLB2_LSP_CQ_LDB_TKN_CNT_RST);\n+\t}\n+\n+\treg = 0;\n+\tDLB2_BITS_SET(reg, ds,\n+\t\t      DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2);\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),\n+\t\t    reg);\n+\n+\t/* Reset the CQ write pointer */\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_CHP_LDB_CQ_WPTR(hw->ver, port->id.phys_id),\n+\t\t    DLB2_CHP_LDB_CQ_WPTR_RST);\n+\n+\treg = 0;\n+\tDLB2_BITS_SET(reg,\n+\t\t      port->hist_list_entry_limit - 1,\n+\t\t      DLB2_CHP_HIST_LIST_LIM_LIMIT);\n+\tDLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_LIM(hw->ver, port->id.phys_id), reg);\n+\n+\tDLB2_BITS_SET(hl_base, port->hist_list_entry_base,\n+\t\t      DLB2_CHP_HIST_LIST_BASE_BASE);\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_CHP_HIST_LIST_BASE(hw->ver, port->id.phys_id),\n+\t\t    hl_base);\n+\n+\t/*\n+\t * The inflight limit sets a cap on the number of QEs for which this CQ\n+\t * can owe completions at one time.\n+\t */\n+\treg = 0;\n+\tDLB2_BITS_SET(reg, args->cq_history_list_size,\n+\t\t      DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT);\n+\tDLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_INFL_LIM(hw->ver, port->id.phys_id),\n+\t\t    reg);\n+\n+\treg = 0;\n+\tDLB2_BITS_SET(reg, DLB2_BITS_GET(hl_base, DLB2_CHP_HIST_LIST_BASE_BASE),\n+\t\t      DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR);\n+\tDLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_PUSH_PTR(hw->ver, port->id.phys_id),\n+\t\t    reg);\n+\n+\treg = 0;\n+\tDLB2_BITS_SET(reg, DLB2_BITS_GET(hl_base, DLB2_CHP_HIST_LIST_BASE_BASE),\n+\t\t      DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR);\n+\tDLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_POP_PTR(hw->ver, port->id.phys_id),\n+\t\t    reg);\n+\n+\t/*\n+\t * Address translation (AT) settings: 0: untranslated, 2: translated\n+\t * (see ATS spec regarding Address Type field for more details)\n+\t */\n+\n+\tif (hw->ver == DLB2_HW_V2) {\n+\t\treg = 0;\n+\t\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_AT(port->id.phys_id), reg);\n+\t}\n+\n+\tif (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {\n+\t\treg = 0;\n+\t\tDLB2_BITS_SET(reg, hw->pasid[vdev_id],\n+\t\t\t      DLB2_SYS_LDB_CQ_PASID_PASID);\n+\t\tDLB2_BIT_SET(reg, DLB2_SYS_LDB_CQ_PASID_FMT2);\n+\t}\n+\n+\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_PASID(hw->ver, port->id.phys_id), reg);\n+\n+\treg = 0;\n+\tDLB2_BITS_SET(reg, domain->id.phys_id, DLB2_CHP_LDB_CQ2VAS_CQ2VAS);\n+\tDLB2_CSR_WR(hw, DLB2_CHP_LDB_CQ2VAS(hw->ver, port->id.phys_id), reg);\n+\n+\t/* Disable the port's QID mappings */\n+\treg = 0;\n+\tDLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id), reg);\n+\n+\treturn 0;\n+}\n+\n+static bool\n+dlb2_cq_depth_is_valid(u32 depth)\n+{\n+\tif (depth != 1 && depth != 2 &&\n+\t    depth != 4 && depth != 8 &&\n+\t    depth != 16 && depth != 32 &&\n+\t    depth != 64 && depth != 128 &&\n+\t    depth != 256 && depth != 512 &&\n+\t    depth != 1024)\n+\t\treturn false;\n+\n+\treturn true;\n+}
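\n+\n+/*\n+ * Editorial note, not part of the original patch: the token-depth-select\n+ * ladders in dlb2_ldb_port_configure_cq() above and in\n+ * dlb2_dir_port_configure_cq() below encode a valid CQ depth as\n+ * ds = log2(depth) - 2, floored at 1 for depths below 8. A hypothetical\n+ * closed-form equivalent, assuming dlb2_cq_depth_is_valid() has passed:\n+ *\n+ *\tstatic u32 example_cq_token_depth_select(u32 depth)\n+ *\t{\n+ *\t\tu32 ds = 1;\n+ *\n+ *\t\twhile (depth > 8) {\n+ *\t\t\tdepth >>= 1;\n+ *\t\t\tds++;\n+ *\t\t}\n+ *\n+ *\t\treturn ds; (8 -> 1, 16 -> 2, ..., 1024 -> 8)\n+ *\t}\n+ */\n+\n+static int dlb2_configure_ldb_port(struct dlb2_hw *hw,\n+\t\t\t\t   struct dlb2_hw_domain *domain,\n+\t\t\t\t   struct dlb2_ldb_port *port,\n+\t\t\t\t   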
uintptr_t cq_dma_base,\n+\t\t\t\t   struct dlb2_create_ldb_port_args *args,\n+\t\t\t\t   bool vdev_req,\n+\t\t\t\t   unsigned int vdev_id)\n+{\n+\tint ret, i;\n+\n+\tport->hist_list_entry_base = domain->hist_list_entry_base +\n+\t\t\t\t     domain->hist_list_entry_offset;\n+\tport->hist_list_entry_limit = port->hist_list_entry_base +\n+\t\t\t\t      args->cq_history_list_size;\n+\n+\tdomain->hist_list_entry_offset += args->cq_history_list_size;\n+\tdomain->avail_hist_list_entries -= args->cq_history_list_size;\n+\n+\tret = dlb2_ldb_port_configure_cq(hw,\n+\t\t\t\t\t domain,\n+\t\t\t\t\t port,\n+\t\t\t\t\t cq_dma_base,\n+\t\t\t\t\t args,\n+\t\t\t\t\t vdev_req,\n+\t\t\t\t\t vdev_id);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\tdlb2_ldb_port_configure_pp(hw,\n+\t\t\t\t   domain,\n+\t\t\t\t   port,\n+\t\t\t\t   vdev_req,\n+\t\t\t\t   vdev_id);\n+\n+\tdlb2_ldb_port_cq_enable(hw, port);\n+\n+\tfor (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++)\n+\t\tport->qid_map[i].state = DLB2_QUEUE_UNMAPPED;\n+\tport->num_mappings = 0;\n+\n+\tport->enabled = true;\n+\n+\tport->configured = true;\n+\n+\treturn 0;\n+}\n+\n+static void\n+dlb2_log_create_ldb_port_args(struct dlb2_hw *hw,\n+\t\t\t      u32 domain_id,\n+\t\t\t      uintptr_t cq_dma_base,\n+\t\t\t      struct dlb2_create_ldb_port_args *args,\n+\t\t\t      bool vdev_req,\n+\t\t\t      unsigned int vdev_id)\n+{\n+\tDLB2_HW_DBG(hw, \"DLB2 create load-balanced port arguments:\\n\");\n+\tif (vdev_req)\n+\t\tDLB2_HW_DBG(hw, \"(Request from vdev %d)\\n\", vdev_id);\n+\tDLB2_HW_DBG(hw, \"\\tDomain ID:                 %d\\n\",\n+\t\t    domain_id);\n+\tDLB2_HW_DBG(hw, \"\\tCQ depth:                  %d\\n\",\n+\t\t    args->cq_depth);\n+\tDLB2_HW_DBG(hw, \"\\tCQ hist list size:         %d\\n\",\n+\t\t    args->cq_history_list_size);\n+\tDLB2_HW_DBG(hw, \"\\tCQ base address:           0x%lx\\n\",\n+\t\t    cq_dma_base);\n+\tDLB2_HW_DBG(hw, \"\\tCoS ID:                    %u\\n\", args->cos_id);\n+\tDLB2_HW_DBG(hw, \"\\tStrict CoS allocation:     %u\\n\",\n+\t\t    args->cos_strict);\n+}\n+\n+static int\n+dlb2_verify_create_ldb_port_args(struct dlb2_hw *hw,\n+\t\t\t\t u32 domain_id,\n+\t\t\t\t uintptr_t cq_dma_base,\n+\t\t\t\t struct dlb2_create_ldb_port_args *args,\n+\t\t\t\t struct dlb2_cmd_response *resp,\n+\t\t\t\t bool vdev_req,\n+\t\t\t\t unsigned int vdev_id,\n+\t\t\t\t struct dlb2_hw_domain **out_domain,\n+\t\t\t\t struct dlb2_ldb_port **out_port,\n+\t\t\t\t int *out_cos_id)\n+{\n+\tstruct dlb2_hw_domain *domain;\n+\tstruct dlb2_ldb_port *port;\n+\tint i, id;\n+\n+\tdomain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);\n+\n+\tif (!domain) {\n+\t\tresp->status = DLB2_ST_INVALID_DOMAIN_ID;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (!domain->configured) {\n+\t\tresp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (domain->started) {\n+\t\tresp->status = DLB2_ST_DOMAIN_STARTED;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (args->cos_id >= DLB2_NUM_COS_DOMAINS) {\n+\t\tresp->status = DLB2_ST_INVALID_COS_ID;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (args->cos_strict) {\n+\t\tid = args->cos_id;\n+\t\tport = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[id],\n+\t\t\t\t\t  typeof(*port));\n+\t} else {\n+\t\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n+\t\t\tid = (args->cos_id + i) % DLB2_NUM_COS_DOMAINS;\n+\n+\t\t\tport = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[id],\n+\t\t\t\t\t\t  typeof(*port));\n+\t\t\tif (port)\n+\t\t\t\tbreak;\n+\t\t}\n+\t}\n+\n+\tif (!port) {\n+\t\tresp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;\n+\t\treturn 
-EINVAL;\n+\t}\n+\n+\t/* Check cache-line alignment */\n+\tif ((cq_dma_base & 0x3F) != 0) {\n+\t\tresp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (!dlb2_cq_depth_is_valid(args->cq_depth)) {\n+\t\tresp->status = DLB2_ST_INVALID_CQ_DEPTH;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\t/* The history list size must be >= 1 */\n+\tif (!args->cq_history_list_size) {\n+\t\tresp->status = DLB2_ST_INVALID_HIST_LIST_DEPTH;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (args->cq_history_list_size > domain->avail_hist_list_entries) {\n+\t\tresp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\t*out_domain = domain;\n+\t*out_port = port;\n+\t*out_cos_id = id;\n+\n+\treturn 0;\n+}\n+\n+/**\n+ * dlb2_hw_create_ldb_port() - create a load-balanced port\n+ * @hw: dlb2_hw handle for a particular device.\n+ * @domain_id: domain ID.\n+ * @args: port creation arguments.\n+ * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.\n+ * @resp: response structure.\n+ * @vdev_req: indicates whether this request came from a vdev.\n+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.\n+ *\n+ * This function creates a load-balanced port.\n+ *\n+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual\n+ * device.\n+ *\n+ * Return:\n+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is\n+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id\n+ * contains the port ID.\n+ *\n+ * resp->id contains a virtual ID if vdev_req is true.\n+ *\n+ * Errors:\n+ * EINVAL - A requested resource is unavailable, a credit setting is invalid, a\n+ *\t    pointer address is not properly aligned, the domain is not\n+ *\t    configured, or the domain has already been started.\n+ * EFAULT - Internal error (resp->status not set).\n+ */\n+int dlb2_hw_create_ldb_port(struct dlb2_hw *hw,\n+\t\t\t    u32 domain_id,\n+\t\t\t    struct dlb2_create_ldb_port_args *args,\n+\t\t\t    uintptr_t cq_dma_base,\n+\t\t\t    struct dlb2_cmd_response *resp,\n+\t\t\t    bool vdev_req,\n+\t\t\t    unsigned int vdev_id)\n+{\n+\tstruct dlb2_hw_domain *domain;\n+\tstruct dlb2_ldb_port *port;\n+\tint ret, cos_id;\n+\n+\tdlb2_log_create_ldb_port_args(hw,\n+\t\t\t\t      domain_id,\n+\t\t\t\t      cq_dma_base,\n+\t\t\t\t      args,\n+\t\t\t\t      vdev_req,\n+\t\t\t\t      vdev_id);\n+\n+\t/*\n+\t * Verify that hardware resources are available before attempting to\n+\t * satisfy the request. This simplifies the error unwinding code.\n+\t */\n+\tret = dlb2_verify_create_ldb_port_args(hw,\n+\t\t\t\t\t       domain_id,\n+\t\t\t\t\t       cq_dma_base,\n+\t\t\t\t\t       args,\n+\t\t\t\t\t       resp,\n+\t\t\t\t\t       vdev_req,\n+\t\t\t\t\t       vdev_id,\n+\t\t\t\t\t       &domain,\n+\t\t\t\t\t       &port,\n+\t\t\t\t\t       &cos_id);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\tret = dlb2_configure_ldb_port(hw,\n+\t\t\t\t      domain,\n+\t\t\t\t      port,\n+\t\t\t\t      cq_dma_base,\n+\t\t\t\t      args,\n+\t\t\t\t      vdev_req,\n+\t\t\t\t      vdev_id);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\t/*\n+\t * Configuration succeeded, so move the resource from the 'avail' to\n+\t * the 'used' list.\n+\t */\n+\tdlb2_list_del(&domain->avail_ldb_ports[cos_id], &port->domain_list);\n+\n+\tdlb2_list_add(&domain->used_ldb_ports[cos_id], &port->domain_list);\n+\n+\tresp->status = 0;\n+\tresp->id = (vdev_req) ? 
port->id.virt_id : port->id.phys_id;\n+\n+\treturn 0;\n+}\n+\n+static void\n+dlb2_log_create_dir_port_args(struct dlb2_hw *hw,\n+\t\t\t      u32 domain_id,\n+\t\t\t      uintptr_t cq_dma_base,\n+\t\t\t      struct dlb2_create_dir_port_args *args,\n+\t\t\t      bool vdev_req,\n+\t\t\t      unsigned int vdev_id)\n+{\n+\tDLB2_HW_DBG(hw, \"DLB2 create directed port arguments:\\n\");\n+\tif (vdev_req)\n+\t\tDLB2_HW_DBG(hw, \"(Request from vdev %d)\\n\", vdev_id);\n+\tDLB2_HW_DBG(hw, \"\\tDomain ID:                 %d\\n\",\n+\t\t    domain_id);\n+\tDLB2_HW_DBG(hw, \"\\tCQ depth:                  %d\\n\",\n+\t\t    args->cq_depth);\n+\tDLB2_HW_DBG(hw, \"\\tCQ base address:           0x%lx\\n\",\n+\t\t    cq_dma_base);\n+}\n+\n+static struct dlb2_dir_pq_pair *\n+dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,\n+\t\t\t    u32 id,\n+\t\t\t    bool vdev_req,\n+\t\t\t    struct dlb2_hw_domain *domain)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_dir_pq_pair *port;\n+\tRTE_SET_USED(iter);\n+\n+\tif (id >= DLB2_MAX_NUM_DIR_PORTS(hw->ver))\n+\t\treturn NULL;\n+\n+\tDLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {\n+\t\tif ((!vdev_req && port->id.phys_id == id) ||\n+\t\t    (vdev_req && port->id.virt_id == id))\n+\t\t\treturn port;\n+\t}\n+\n+\treturn NULL;\n+}\n+\n+static int\n+dlb2_verify_create_dir_port_args(struct dlb2_hw *hw,\n+\t\t\t\t u32 domain_id,\n+\t\t\t\t uintptr_t cq_dma_base,\n+\t\t\t\t struct dlb2_create_dir_port_args *args,\n+\t\t\t\t struct dlb2_cmd_response *resp,\n+\t\t\t\t bool vdev_req,\n+\t\t\t\t unsigned int vdev_id,\n+\t\t\t\t struct dlb2_hw_domain **out_domain,\n+\t\t\t\t struct dlb2_dir_pq_pair **out_port)\n+{\n+\tstruct dlb2_hw_domain *domain;\n+\tstruct dlb2_dir_pq_pair *pq;\n+\n+\tdomain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);\n+\n+\tif (!domain) {\n+\t\tresp->status = DLB2_ST_INVALID_DOMAIN_ID;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (!domain->configured) {\n+\t\tresp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (domain->started) {\n+\t\tresp->status = DLB2_ST_DOMAIN_STARTED;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (args->queue_id != -1) {\n+\t\t/*\n+\t\t * If the user claims the queue is already configured, validate\n+\t\t * the queue ID, its domain, and whether the queue is\n+\t\t * configured.\n+\t\t */\n+\t\tpq = dlb2_get_domain_used_dir_pq(hw,\n+\t\t\t\t\t\t args->queue_id,\n+\t\t\t\t\t\t vdev_req,\n+\t\t\t\t\t\t domain);\n+\n+\t\tif (!pq || pq->domain_id.phys_id != domain->id.phys_id ||\n+\t\t    !pq->queue_configured) {\n+\t\t\tresp->status = DLB2_ST_INVALID_DIR_QUEUE_ID;\n+\t\t\treturn -EINVAL;\n+\t\t}\n+\t} else {\n+\t\t/*\n+\t\t * If the port's queue is not configured, validate that a free\n+\t\t * port-queue pair is available.\n+\t\t */\n+\t\tpq = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,\n+\t\t\t\t\ttypeof(*pq));\n+\t\tif (!pq) {\n+\t\t\tresp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;\n+\t\t\treturn -EINVAL;\n+\t\t}\n+\t}\n+\n+\t/* Check cache-line alignment */\n+\tif ((cq_dma_base & 0x3F) != 0) {\n+\t\tresp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (!dlb2_cq_depth_is_valid(args->cq_depth)) {\n+\t\tresp->status = DLB2_ST_INVALID_CQ_DEPTH;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\t*out_domain = domain;\n+\t*out_port = pq;\n+\n+\treturn 0;\n+}
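\n+\n+/*\n+ * Editorial illustration, not part of the original patch: per the verify\n+ * logic above, a caller can target dlb2_hw_create_dir_port() in two ways\n+ * (values hypothetical):\n+ *\n+ *\targs.queue_id = -1;    (claim a free directed port-queue pair)\n+ *\targs.queue_id = qid;   (attach to a directed queue that was created\n+ *\t\t\t\tearlier with port_id == -1)\n+ *\n+ * Only the first form moves the pair from the domain's 'avail' list to\n+ * its 'used' list on success; in the second form the pair is already on\n+ * the 'used' list.\n+ */\n+\n+static void dlb2_dir_port_configure_pp(struct dlb2_hw *hw,\n+\t\t\t\t       struct dlb2_hw_domain *domain,\n+\t\t\t\t       struct dlb2_dir_pq_pair *port,\n+\t\t\t\t       bool vdev_req,\n+\t\t\t\t       unsigned int 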
vdev_id)\n+{\n+\tu32 reg = 0;\n+\n+\tDLB2_BITS_SET(reg, domain->id.phys_id, DLB2_SYS_DIR_PP2VAS_VAS);\n+\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VAS(port->id.phys_id), reg);\n+\n+\tif (vdev_req) {\n+\t\tunsigned int offs;\n+\t\tu32 virt_id;\n+\n+\t\t/*\n+\t\t * DLB uses producer port address bits 17:12 to determine the\n+\t\t * producer port ID. In Scalable IOV mode, PP accesses come\n+\t\t * through the PF MMIO window for the physical producer port,\n+\t\t * so for translation purposes the virtual and physical port\n+\t\t * IDs are equal.\n+\t\t */\n+\t\tif (hw->virt_mode == DLB2_VIRT_SRIOV)\n+\t\t\tvirt_id = port->id.virt_id;\n+\t\telse\n+\t\t\tvirt_id = port->id.phys_id;\n+\n+\t\treg = 0;\n+\t\tDLB2_BITS_SET(reg, port->id.phys_id, DLB2_SYS_VF_DIR_VPP2PP_PP);\n+\t\toffs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;\n+\t\tDLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP2PP(offs), reg);\n+\n+\t\treg = 0;\n+\t\tDLB2_BITS_SET(reg, vdev_id, DLB2_SYS_DIR_PP2VDEV_VDEV);\n+\t\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VDEV(port->id.phys_id), reg);\n+\n+\t\treg = 0;\n+\t\tDLB2_BIT_SET(reg, DLB2_SYS_VF_DIR_VPP_V_VPP_V);\n+\t\tDLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), reg);\n+\t}\n+\n+\treg = 0;\n+\tDLB2_BIT_SET(reg, DLB2_SYS_DIR_PP_V_PP_V);\n+\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_PP_V(port->id.phys_id), reg);\n+}\n+\n+static int dlb2_dir_port_configure_cq(struct dlb2_hw *hw,\n+\t\t\t\t      struct dlb2_hw_domain *domain,\n+\t\t\t\t      struct dlb2_dir_pq_pair *port,\n+\t\t\t\t      uintptr_t cq_dma_base,\n+\t\t\t\t      struct dlb2_create_dir_port_args *args,\n+\t\t\t\t      bool vdev_req,\n+\t\t\t\t      unsigned int vdev_id)\n+{\n+\tu32 reg = 0;\n+\tu32 ds = 0;\n+\n+\t/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */\n+\tDLB2_BITS_SET(reg, cq_dma_base >> 6, DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L);\n+\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id), reg);\n+\n+\treg = cq_dma_base >> 32;\n+\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id), reg);\n+\n+\t/*\n+\t * 'ro' == relaxed ordering. This setting allows DLB2 to write\n+\t * cache lines out-of-order (but QEs within a cache line are always\n+\t * updated in-order).\n+\t */\n+\treg = 0;\n+\tDLB2_BITS_SET(reg, vdev_id, DLB2_SYS_DIR_CQ2VF_PF_RO_VF);\n+\tDLB2_BITS_SET(reg, !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV),\n+\t\t DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF);\n+\tDLB2_BIT_SET(reg, DLB2_SYS_DIR_CQ2VF_PF_RO_RO);\n+\n+\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id), reg);\n+\n+\tif (args->cq_depth <= 8) {\n+\t\tds = 1;\n+\t} else if (args->cq_depth == 16) {\n+\t\tds = 2;\n+\t} else if (args->cq_depth == 32) {\n+\t\tds = 3;\n+\t} else if (args->cq_depth == 64) {\n+\t\tds = 4;\n+\t} else if (args->cq_depth == 128) {\n+\t\tds = 5;\n+\t} else if (args->cq_depth == 256) {\n+\t\tds = 6;\n+\t} else if (args->cq_depth == 512) {\n+\t\tds = 7;\n+\t} else if (args->cq_depth == 1024) {\n+\t\tds = 8;\n+\t} else {\n+\t\tDLB2_HW_ERR(hw,\n+\t\t\t    \"[%s():%d] Internal error: invalid CQ depth\\n\",\n+\t\t\t    __func__, __LINE__);\n+\t\treturn -EFAULT;\n+\t}\n+\n+\treg = 0;\n+\tDLB2_BITS_SET(reg, ds,\n+\t\t      DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT);\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),\n+\t\t    reg);\n+\n+\t/*\n+\t * To support CQs with depth less than 8, program the token count\n+\t * register with a non-zero initial value. 
Operations such as domain\n+\t * reset must take this initial value into account when quiescing the\n+\t * CQ.\n+\t */\n+\tport->init_tkn_cnt = 0;\n+\n+\tif (args->cq_depth < 8) {\n+\t\treg = 0;\n+\t\tport->init_tkn_cnt = 8 - args->cq_depth;\n+\n+\t\tDLB2_BITS_SET(reg, port->init_tkn_cnt,\n+\t\t\t      DLB2_LSP_CQ_DIR_TKN_CNT_COUNT);\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),\n+\t\t\t    reg);\n+\t} else {\n+\t\tDLB2_CSR_WR(hw,\n+\t\t\t    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),\n+\t\t\t    DLB2_LSP_CQ_DIR_TKN_CNT_RST);\n+\t}\n+\n+\treg = 0;\n+\tDLB2_BITS_SET(reg, ds,\n+\t\t      DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2);\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(hw->ver,\n+\t\t\t\t\t\t      port->id.phys_id),\n+\t\t    reg);\n+\n+\t/* Reset the CQ write pointer */\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_CHP_DIR_CQ_WPTR(hw->ver, port->id.phys_id),\n+\t\t    DLB2_CHP_DIR_CQ_WPTR_RST);\n+\n+\t/* Virtualize the PPID */\n+\treg = 0;\n+\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_FMT(port->id.phys_id), reg);\n+\n+\t/*\n+\t * Address translation (AT) settings: 0: untranslated, 2: translated\n+\t * (see ATS spec regarding Address Type field for more details)\n+\t */\n+\tif (hw->ver == DLB2_HW_V2) {\n+\t\treg = 0;\n+\t\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_AT(port->id.phys_id), reg);\n+\t}\n+\n+\tif (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {\n+\t\tDLB2_BITS_SET(reg, hw->pasid[vdev_id],\n+\t\t\t      DLB2_SYS_DIR_CQ_PASID_PASID);\n+\t\tDLB2_BIT_SET(reg, DLB2_SYS_DIR_CQ_PASID_FMT2);\n+\t}\n+\n+\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_PASID(hw->ver, port->id.phys_id), reg);\n+\n+\treg = 0;\n+\tDLB2_BITS_SET(reg, domain->id.phys_id, DLB2_CHP_DIR_CQ2VAS_CQ2VAS);\n+\tDLB2_CSR_WR(hw, DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id), reg);\n+\n+\treturn 0;\n+}\n+\n+static int dlb2_configure_dir_port(struct dlb2_hw *hw,\n+\t\t\t\t   struct dlb2_hw_domain *domain,\n+\t\t\t\t   struct dlb2_dir_pq_pair *port,\n+\t\t\t\t   uintptr_t cq_dma_base,\n+\t\t\t\t   struct dlb2_create_dir_port_args *args,\n+\t\t\t\t   bool vdev_req,\n+\t\t\t\t   unsigned int vdev_id)\n+{\n+\tint ret;\n+\n+\tret = dlb2_dir_port_configure_cq(hw,\n+\t\t\t\t\t domain,\n+\t\t\t\t\t port,\n+\t\t\t\t\t cq_dma_base,\n+\t\t\t\t\t args,\n+\t\t\t\t\t vdev_req,\n+\t\t\t\t\t vdev_id);\n+\n+\tif (ret)\n+\t\treturn ret;\n+\n+\tdlb2_dir_port_configure_pp(hw,\n+\t\t\t\t   domain,\n+\t\t\t\t   port,\n+\t\t\t\t   vdev_req,\n+\t\t\t\t   vdev_id);\n+\n+\tdlb2_dir_port_cq_enable(hw, port);\n+\n+\tport->enabled = true;\n+\n+\tport->port_configured = true;\n+\n+\treturn 0;\n+}\n+\n+/**\n+ * dlb2_hw_create_dir_port() - create a directed port\n+ * @hw: dlb2_hw handle for a particular device.\n+ * @domain_id: domain ID.\n+ * @args: port creation arguments.\n+ * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.\n+ * @resp: response structure.\n+ * @vdev_req: indicates whether this request came from a vdev.\n+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.\n+ *\n+ * This function creates a directed port.\n+ *\n+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual\n+ * device.\n+ *\n+ * Return:\n+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is\n+ * assigned a detailed error code from enum dlb2_error. 
If successful, resp->id\n+ * contains the port ID.\n+ *\n+ * resp->id contains a virtual ID if vdev_req is true.\n+ *\n+ * Errors:\n+ * EINVAL - A requested resource is unavailable, a credit setting is invalid, a\n+ *\t    pointer address is not properly aligned, the domain is not\n+ *\t    configured, or the domain has already been started.\n+ * EFAULT - Internal error (resp->status not set).\n+ */\n+int dlb2_hw_create_dir_port(struct dlb2_hw *hw,\n+\t\t\t    u32 domain_id,\n+\t\t\t    struct dlb2_create_dir_port_args *args,\n+\t\t\t    uintptr_t cq_dma_base,\n+\t\t\t    struct dlb2_cmd_response *resp,\n+\t\t\t    bool vdev_req,\n+\t\t\t    unsigned int vdev_id)\n+{\n+\tstruct dlb2_dir_pq_pair *port;\n+\tstruct dlb2_hw_domain *domain;\n+\tint ret;\n+\n+\tdlb2_log_create_dir_port_args(hw,\n+\t\t\t\t      domain_id,\n+\t\t\t\t      cq_dma_base,\n+\t\t\t\t      args,\n+\t\t\t\t      vdev_req,\n+\t\t\t\t      vdev_id);\n+\n+\t/*\n+\t * Verify that hardware resources are available before attempting to\n+\t * satisfy the request. This simplifies the error unwinding code.\n+\t */\n+\tret = dlb2_verify_create_dir_port_args(hw,\n+\t\t\t\t\t       domain_id,\n+\t\t\t\t\t       cq_dma_base,\n+\t\t\t\t\t       args,\n+\t\t\t\t\t       resp,\n+\t\t\t\t\t       vdev_req,\n+\t\t\t\t\t       vdev_id,\n+\t\t\t\t\t       &domain,\n+\t\t\t\t\t       &port);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\tret = dlb2_configure_dir_port(hw,\n+\t\t\t\t      domain,\n+\t\t\t\t      port,\n+\t\t\t\t      cq_dma_base,\n+\t\t\t\t      args,\n+\t\t\t\t      vdev_req,\n+\t\t\t\t      vdev_id);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\t/*\n+\t * Configuration succeeded, so move the resource from the 'avail' to\n+\t * the 'used' list (if it's not already there).\n+\t */\n+\tif (args->queue_id == -1) {\n+\t\tdlb2_list_del(&domain->avail_dir_pq_pairs, &port->domain_list);\n+\n+\t\tdlb2_list_add(&domain->used_dir_pq_pairs, &port->domain_list);\n+\t}\n+\n+\tresp->status = 0;\n+\tresp->id = (vdev_req) ? 
port->id.virt_id : port->id.phys_id;\n+\n+\treturn 0;\n+}\n+\n+static void dlb2_configure_dir_queue(struct dlb2_hw *hw,\n+\t\t\t\t     struct dlb2_hw_domain *domain,\n+\t\t\t\t     struct dlb2_dir_pq_pair *queue,\n+\t\t\t\t     struct dlb2_create_dir_queue_args *args,\n+\t\t\t\t     bool vdev_req,\n+\t\t\t\t     unsigned int vdev_id)\n+{\n+\tunsigned int offs;\n+\tu32 reg = 0;\n+\n+\t/* QID write permissions are turned on when the domain is started */\n+\toffs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +\n+\t\tqueue->id.phys_id;\n+\n+\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), reg);\n+\n+\t/* Don't timestamp QEs that pass through this queue */\n+\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_ITS(queue->id.phys_id), reg);\n+\n+\treg = 0;\n+\tDLB2_BITS_SET(reg, args->depth_threshold,\n+\t\t      DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH);\n+\tDLB2_CSR_WR(hw,\n+\t\t    DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver, queue->id.phys_id),\n+\t\t    reg);\n+\n+\tif (vdev_req) {\n+\t\toffs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +\n+\t\t\tqueue->id.virt_id;\n+\n+\t\treg = 0;\n+\t\tDLB2_BIT_SET(reg, DLB2_SYS_VF_DIR_VQID_V_VQID_V);\n+\t\tDLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(offs), reg);\n+\n+\t\treg = 0;\n+\t\tDLB2_BITS_SET(reg, queue->id.phys_id,\n+\t\t\t      DLB2_SYS_VF_DIR_VQID2QID_QID);\n+\t\tDLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(offs), reg);\n+\t}\n+\n+\treg = 0;\n+\tDLB2_BIT_SET(reg, DLB2_SYS_DIR_QID_V_QID_V);\n+\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_V(queue->id.phys_id), reg);\n+\n+\tqueue->queue_configured = true;\n+}\n+\n+static void\n+dlb2_log_create_dir_queue_args(struct dlb2_hw *hw,\n+\t\t\t       u32 domain_id,\n+\t\t\t       struct dlb2_create_dir_queue_args *args,\n+\t\t\t       bool vdev_req,\n+\t\t\t       unsigned int vdev_id)\n+{\n+\tDLB2_HW_DBG(hw, \"DLB2 create directed queue arguments:\\n\");\n+\tif (vdev_req)\n+\t\tDLB2_HW_DBG(hw, \"(Request from vdev %d)\\n\", vdev_id);\n+\tDLB2_HW_DBG(hw, \"\\tDomain ID: %d\\n\", domain_id);\n+\tDLB2_HW_DBG(hw, \"\\tPort ID:   %d\\n\", args->port_id);\n+}\n+\n+static int\n+dlb2_verify_create_dir_queue_args(struct dlb2_hw *hw,\n+\t\t\t\t  u32 domain_id,\n+\t\t\t\t  struct dlb2_create_dir_queue_args *args,\n+\t\t\t\t  struct dlb2_cmd_response *resp,\n+\t\t\t\t  bool vdev_req,\n+\t\t\t\t  unsigned int vdev_id,\n+\t\t\t\t  struct dlb2_hw_domain **out_domain,\n+\t\t\t\t  struct dlb2_dir_pq_pair **out_queue)\n+{\n+\tstruct dlb2_hw_domain *domain;\n+\tstruct dlb2_dir_pq_pair *pq;\n+\n+\tdomain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);\n+\n+\tif (!domain) {\n+\t\tresp->status = DLB2_ST_INVALID_DOMAIN_ID;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (!domain->configured) {\n+\t\tresp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (domain->started) {\n+\t\tresp->status = DLB2_ST_DOMAIN_STARTED;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\t/*\n+\t * If the user claims the port is already configured, validate the port\n+\t * ID, its domain, and whether the port is configured.\n+\t */\n+\tif (args->port_id != -1) {\n+\t\tpq = dlb2_get_domain_used_dir_pq(hw,\n+\t\t\t\t\t\t args->port_id,\n+\t\t\t\t\t\t vdev_req,\n+\t\t\t\t\t\t domain);\n+\n+\t\tif (!pq || pq->domain_id.phys_id != domain->id.phys_id ||\n+\t\t    !pq->port_configured) {\n+\t\t\tresp->status = DLB2_ST_INVALID_PORT_ID;\n+\t\t\treturn -EINVAL;\n+\t\t}\n+\t} else {\n+\t\t/*\n+\t\t * If the queue's port is not configured, validate that a free\n+\t\t * port-queue pair is available.\n+\t\t */\n+\t\tpq = 
DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,\n+\t\t\t\t\ttypeof(*pq));\n+\t\tif (!pq) {\n+\t\t\tresp->status = DLB2_ST_DIR_QUEUES_UNAVAILABLE;\n+\t\t\treturn -EINVAL;\n+\t\t}\n+\t}\n+\n+\t*out_domain = domain;\n+\t*out_queue = pq;\n+\n+\treturn 0;\n+}\n+\n+/**\n+ * dlb2_hw_create_dir_queue() - create a directed queue\n+ * @hw: dlb2_hw handle for a particular device.\n+ * @domain_id: domain ID.\n+ * @args: queue creation arguments.\n+ * @resp: response structure.\n+ * @vdev_req: indicates whether this request came from a vdev.\n+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.\n+ *\n+ * This function creates a directed queue.\n+ *\n+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual\n+ * device.\n+ *\n+ * Return:\n+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is\n+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id\n+ * contains the queue ID.\n+ *\n+ * resp->id contains a virtual ID if vdev_req is true.\n+ *\n+ * Errors:\n+ * EINVAL - A requested resource is unavailable, the domain is not configured,\n+ *\t    or the domain has already been started.\n+ * EFAULT - Internal error (resp->status not set).\n+ */\n+int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,\n+\t\t\t     u32 domain_id,\n+\t\t\t     struct dlb2_create_dir_queue_args *args,\n+\t\t\t     struct dlb2_cmd_response *resp,\n+\t\t\t     bool vdev_req,\n+\t\t\t     unsigned int vdev_id)\n+{\n+\tstruct dlb2_dir_pq_pair *queue;\n+\tstruct dlb2_hw_domain *domain;\n+\tint ret;\n+\n+\tdlb2_log_create_dir_queue_args(hw, domain_id, args, vdev_req, vdev_id);\n+\n+\t/*\n+\t * Verify that hardware resources are available before attempting to\n+\t * satisfy the request. This simplifies the error unwinding code.\n+\t */\n+\tret = dlb2_verify_create_dir_queue_args(hw,\n+\t\t\t\t\t\tdomain_id,\n+\t\t\t\t\t\targs,\n+\t\t\t\t\t\tresp,\n+\t\t\t\t\t\tvdev_req,\n+\t\t\t\t\t\tvdev_id,\n+\t\t\t\t\t\t&domain,\n+\t\t\t\t\t\t&queue);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\tdlb2_configure_dir_queue(hw, domain, queue, args, vdev_req, vdev_id);\n+\n+\t/*\n+\t * Configuration succeeded, so move the resource from the 'avail' to\n+\t * the 'used' list (if it's not already there).\n+\t */\n+\tif (args->port_id == -1) {\n+\t\tdlb2_list_del(&domain->avail_dir_pq_pairs,\n+\t\t\t      &queue->domain_list);\n+\n+\t\tdlb2_list_add(&domain->used_dir_pq_pairs,\n+\t\t\t      &queue->domain_list);\n+\t}\n+\n+\tresp->status = 0;\n+\n+\tresp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;\n+\n+\treturn 0;\n+}\n+\n+static bool\n+dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,\n+\t\t\t\t\t   struct dlb2_ldb_queue *queue,\n+\t\t\t\t\t   int *slot)\n+{\n+\tint i;\n+\n+\tfor (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {\n+\t\tstruct dlb2_ldb_port_qid_map *map = &port->qid_map[i];\n+\n+\t\tif (map->state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP &&\n+\t\t    map->pending_qid == queue->id.phys_id)\n+\t\t\tbreak;\n+\t}\n+\n+\t*slot = i;\n+\n+\treturn (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);\n+}\n+\n+static int dlb2_verify_map_qid_slot_available(struct dlb2_ldb_port *port,\n+\t\t\t\t\t      struct dlb2_ldb_queue *queue,\n+\t\t\t\t\t      struct dlb2_cmd_response *resp)\n+{\n+\tenum dlb2_qid_map_state state;\n+\tint i;\n+\n+\t/* Unused slot available? 
*/\n+\tif (port->num_mappings < DLB2_MAX_NUM_QIDS_PER_LDB_CQ)\n+\t\treturn 0;\n+\n+\t/*\n+\t * If the queue is already mapped (from the application's perspective),\n+\t * this is simply a priority update.\n+\t */\n+\tstate = DLB2_QUEUE_MAPPED;\n+\tif (dlb2_port_find_slot_queue(port, state, queue, &i))\n+\t\treturn 0;\n+\n+\tstate = DLB2_QUEUE_MAP_IN_PROG;\n+\tif (dlb2_port_find_slot_queue(port, state, queue, &i))\n+\t\treturn 0;\n+\n+\tif (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i))\n+\t\treturn 0;\n+\n+\t/*\n+\t * If the slot contains an unmap in progress, it's considered\n+\t * available.\n+\t */\n+\tstate = DLB2_QUEUE_UNMAP_IN_PROG;\n+\tif (dlb2_port_find_slot(port, state, &i))\n+\t\treturn 0;\n+\n+\tstate = DLB2_QUEUE_UNMAPPED;\n+\tif (dlb2_port_find_slot(port, state, &i))\n+\t\treturn 0;\n+\n+\tresp->status = DLB2_ST_NO_QID_SLOTS_AVAILABLE;\n+\treturn -EINVAL;\n+}\n+\n+static struct dlb2_ldb_queue *\n+dlb2_get_domain_ldb_queue(u32 id,\n+\t\t\t  bool vdev_req,\n+\t\t\t  struct dlb2_hw_domain *domain)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_ldb_queue *queue;\n+\tRTE_SET_USED(iter);\n+\n+\tif (id >= DLB2_MAX_NUM_LDB_QUEUES)\n+\t\treturn NULL;\n+\n+\tDLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {\n+\t\tif ((!vdev_req && queue->id.phys_id == id) ||\n+\t\t    (vdev_req && queue->id.virt_id == id))\n+\t\t\treturn queue;\n+\t}\n+\n+\treturn NULL;\n+}\n+\n+static struct dlb2_ldb_port *\n+dlb2_get_domain_used_ldb_port(u32 id,\n+\t\t\t      bool vdev_req,\n+\t\t\t      struct dlb2_hw_domain *domain)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_ldb_port *port;\n+\tint i;\n+\tRTE_SET_USED(iter);\n+\n+\tif (id >= DLB2_MAX_NUM_LDB_PORTS)\n+\t\treturn NULL;\n+\n+\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n+\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {\n+\t\t\tif ((!vdev_req && port->id.phys_id == id) ||\n+\t\t\t    (vdev_req && port->id.virt_id == id))\n+\t\t\t\treturn port;\n+\t\t}\n+\n+\t\tDLB2_DOM_LIST_FOR(domain->avail_ldb_ports[i], port, iter) {\n+\t\t\tif ((!vdev_req && port->id.phys_id == id) ||\n+\t\t\t    (vdev_req && port->id.virt_id == id))\n+\t\t\t\treturn port;\n+\t\t}\n+\t}\n+\n+\treturn NULL;\n+}\n+\n+static void dlb2_ldb_port_change_qid_priority(struct dlb2_hw *hw,\n+\t\t\t\t\t      struct dlb2_ldb_port *port,\n+\t\t\t\t\t      int slot,\n+\t\t\t\t\t      struct dlb2_map_qid_args *args)\n+{\n+\tu32 cq2priov;\n+\n+\t/* Read-modify-write the priority and valid bit register */\n+\tcq2priov = DLB2_CSR_RD(hw,\n+\t\t\t       DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id));\n+\n+\tcq2priov |= (1 << (slot + DLB2_LSP_CQ2PRIOV_V_LOC)) &\n+\t\t    DLB2_LSP_CQ2PRIOV_V;\n+\tcq2priov |= ((args->priority & 0x7) << slot * 3) &\n+\t\t    DLB2_LSP_CQ2PRIOV_PRIO;\n+\n+\tDLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id), cq2priov);\n+\n+\tdlb2_flush_csr(hw);\n+\n+\tport->qid_map[slot].priority = args->priority;\n+}\n+\n+static int dlb2_verify_map_qid_args(struct dlb2_hw *hw,\n+\t\t\t\t    u32 domain_id,\n+\t\t\t\t    struct dlb2_map_qid_args *args,\n+\t\t\t\t    struct dlb2_cmd_response *resp,\n+\t\t\t\t    bool vdev_req,\n+\t\t\t\t    unsigned int vdev_id,\n+\t\t\t\t    struct dlb2_hw_domain **out_domain,\n+\t\t\t\t    struct dlb2_ldb_port **out_port,\n+\t\t\t\t    struct dlb2_ldb_queue **out_queue)\n+{\n+\tstruct dlb2_hw_domain *domain;\n+\tstruct dlb2_ldb_queue *queue;\n+\tstruct dlb2_ldb_port *port;\n+\tint id;\n+\n+\tdomain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);\n+\n+\tif (!domain) 
{\n+\t\tresp->status = DLB2_ST_INVALID_DOMAIN_ID;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (!domain->configured) {\n+\t\tresp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tid = args->port_id;\n+\n+\tport = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);\n+\n+\tif (!port || !port->configured) {\n+\t\tresp->status = DLB2_ST_INVALID_PORT_ID;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (args->priority >= DLB2_QID_PRIORITIES) {\n+\t\tresp->status = DLB2_ST_INVALID_PRIORITY;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tqueue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);\n+\n+\tif (!queue || !queue->configured) {\n+\t\tresp->status = DLB2_ST_INVALID_QID;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (queue->domain_id.phys_id != domain->id.phys_id) {\n+\t\tresp->status = DLB2_ST_INVALID_QID;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (port->domain_id.phys_id != domain->id.phys_id) {\n+\t\tresp->status = DLB2_ST_INVALID_PORT_ID;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\t*out_domain = domain;\n+\t*out_queue = queue;\n+\t*out_port = port;\n+\n+\treturn 0;\n+}\n+\n+static void dlb2_log_map_qid(struct dlb2_hw *hw,\n+\t\t\t     u32 domain_id,\n+\t\t\t     struct dlb2_map_qid_args *args,\n+\t\t\t     bool vdev_req,\n+\t\t\t     unsigned int vdev_id)\n+{\n+\tDLB2_HW_DBG(hw, \"DLB2 map QID arguments:\\n\");\n+\tif (vdev_req)\n+\t\tDLB2_HW_DBG(hw, \"(Request from vdev %d)\\n\", vdev_id);\n+\tDLB2_HW_DBG(hw, \"\\tDomain ID: %d\\n\",\n+\t\t    domain_id);\n+\tDLB2_HW_DBG(hw, \"\\tPort ID:   %d\\n\",\n+\t\t    args->port_id);\n+\tDLB2_HW_DBG(hw, \"\\tQueue ID:  %d\\n\",\n+\t\t    args->qid);\n+\tDLB2_HW_DBG(hw, \"\\tPriority:  %d\\n\",\n+\t\t    args->priority);\n+}\n+\n+/**\n+ * dlb2_hw_map_qid() - map a load-balanced queue to a load-balanced port\n+ * @hw: dlb2_hw handle for a particular device.\n+ * @domain_id: domain ID.\n+ * @args: map QID arguments.\n+ * @resp: response structure.\n+ * @vdev_req: indicates whether this request came from a vdev.\n+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.\n+ *\n+ * This function configures the DLB to schedule QEs from the specified queue\n+ * to the specified port. Each load-balanced port can be mapped to up to 8\n+ * queues; each load-balanced queue can potentially map to all the\n+ * load-balanced ports.\n+ *\n+ * A successful return does not necessarily mean the mapping was configured. If\n+ * this function is unable to immediately map the queue to the port, it will\n+ * add the requested operation to a per-port list of pending map/unmap\n+ * operations, and (if it's not already running) launch a kernel thread that\n+ * periodically attempts to process all pending operations. In a sense, this is\n+ * an asynchronous function.\n+ *\n+ * This asynchronicity creates two views of the state of hardware: the actual\n+ * hardware state and the requested state (as if every request completed\n+ * immediately). If there are any pending map/unmap operations, the requested\n+ * state will differ from the actual state. All validation is performed with\n+ * respect to the pending state; for instance, if there are 8 pending map\n+ * operations for port X, a request for a 9th will fail because a load-balanced\n+ * port can only map up to 8 queues.\n+ *\n+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual\n+ * device.\n+ *\n+ * Return:\n+ * Returns 0 upon success, < 0 otherwise. 
If an error occurs, resp->status is\n+ * assigned a detailed error code from enum dlb2_error.\n+ *\n+ * Errors:\n+ * EINVAL - A requested resource is unavailable, invalid port or queue ID, or\n+ *\t    the domain is not configured.\n+ * EFAULT - Internal error (resp->status not set).\n+ */\n+int dlb2_hw_map_qid(struct dlb2_hw *hw,\n+\t\t    u32 domain_id,\n+\t\t    struct dlb2_map_qid_args *args,\n+\t\t    struct dlb2_cmd_response *resp,\n+\t\t    bool vdev_req,\n+\t\t    unsigned int vdev_id)\n+{\n+\tstruct dlb2_hw_domain *domain;\n+\tstruct dlb2_ldb_queue *queue;\n+\tenum dlb2_qid_map_state st;\n+\tstruct dlb2_ldb_port *port;\n+\tint ret, i;\n+\tu8 prio;\n+\n+\tdlb2_log_map_qid(hw, domain_id, args, vdev_req, vdev_id);\n+\n+\t/*\n+\t * Verify that hardware resources are available before attempting to\n+\t * satisfy the request. This simplifies the error unwinding code.\n+\t */\n+\tret = dlb2_verify_map_qid_args(hw,\n+\t\t\t\t       domain_id,\n+\t\t\t\t       args,\n+\t\t\t\t       resp,\n+\t\t\t\t       vdev_req,\n+\t\t\t\t       vdev_id,\n+\t\t\t\t       &domain,\n+\t\t\t\t       &port,\n+\t\t\t\t       &queue);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\tprio = args->priority;\n+\n+\t/*\n+\t * If there are any outstanding detach operations for this port,\n+\t * attempt to complete them. This may be necessary to free up a QID\n+\t * slot for this requested mapping.\n+\t */\n+\tif (port->num_pending_removals)\n+\t\tdlb2_domain_finish_unmap_port(hw, domain, port);\n+\n+\tret = dlb2_verify_map_qid_slot_available(port, queue, resp);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\t/* Hardware requires disabling the CQ before mapping QIDs. */\n+\tif (port->enabled)\n+\t\tdlb2_ldb_port_cq_disable(hw, port);\n+\n+\t/*\n+\t * If this is only a priority change, don't perform the full QID->CQ\n+\t * mapping procedure\n+\t */\n+\tst = DLB2_QUEUE_MAPPED;\n+\tif (dlb2_port_find_slot_queue(port, st, queue, &i)) {\n+\t\tif (prio != port->qid_map[i].priority) {\n+\t\t\tdlb2_ldb_port_change_qid_priority(hw, port, i, args);\n+\t\t\tDLB2_HW_DBG(hw, \"DLB2 map: priority change\\n\");\n+\t\t}\n+\n+\t\tst = DLB2_QUEUE_MAPPED;\n+\t\tret = dlb2_port_slot_state_transition(hw, port, queue, i, st);\n+\t\tif (ret)\n+\t\t\treturn ret;\n+\n+\t\tgoto map_qid_done;\n+\t}\n+\n+\tst = DLB2_QUEUE_UNMAP_IN_PROG;\n+\tif (dlb2_port_find_slot_queue(port, st, queue, &i)) {\n+\t\tif (prio != port->qid_map[i].priority) {\n+\t\t\tdlb2_ldb_port_change_qid_priority(hw, port, i, args);\n+\t\t\tDLB2_HW_DBG(hw, \"DLB2 map: priority change\\n\");\n+\t\t}\n+\n+\t\tst = DLB2_QUEUE_MAPPED;\n+\t\tret = dlb2_port_slot_state_transition(hw, port, queue, i, st);\n+\t\tif (ret)\n+\t\t\treturn ret;\n+\n+\t\tgoto map_qid_done;\n+\t}\n+\n+\t/*\n+\t * If this is a priority change on an in-progress mapping, don't\n+\t * perform the full QID->CQ mapping procedure.\n+\t */\n+\tst = DLB2_QUEUE_MAP_IN_PROG;\n+\tif (dlb2_port_find_slot_queue(port, st, queue, &i)) {\n+\t\tport->qid_map[i].priority = prio;\n+\n+\t\tDLB2_HW_DBG(hw, \"DLB2 map: priority change only\\n\");\n+\n+\t\tgoto map_qid_done;\n+\t}\n+\n+\t/*\n+\t * If this is a priority change on a pending mapping, update the\n+\t * pending priority\n+\t */\n+\tif (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {\n+\t\tport->qid_map[i].pending_priority = prio;\n+\n+\t\tDLB2_HW_DBG(hw, \"DLB2 map: priority change only\\n\");\n+\n+\t\tgoto map_qid_done;\n+\t}\n+\n+\t/*\n+\t * If all the CQ's slots are in use, then there's an unmap in progress\n+\t * (guaranteed by dlb2_verify_map_qid_slot_available()), so 
add this\n+\t * mapping to pending_map and return. When the removal is completed for\n+\t * the slot's current occupant, this mapping will be performed.\n+\t */\n+\tif (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &i)) {\n+\t\tif (dlb2_port_find_slot(port, DLB2_QUEUE_UNMAP_IN_PROG, &i)) {\n+\t\t\tenum dlb2_qid_map_state new_st;\n+\n+\t\t\tport->qid_map[i].pending_qid = queue->id.phys_id;\n+\t\t\tport->qid_map[i].pending_priority = prio;\n+\n+\t\t\tnew_st = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;\n+\n+\t\t\tret = dlb2_port_slot_state_transition(hw, port, queue,\n+\t\t\t\t\t\t\t      i, new_st);\n+\t\t\tif (ret)\n+\t\t\t\treturn ret;\n+\n+\t\t\tDLB2_HW_DBG(hw, \"DLB2 map: map pending removal\\n\");\n+\n+\t\t\tgoto map_qid_done;\n+\t\t}\n+\t}\n+\n+\t/*\n+\t * If the domain has started, a special \"dynamic\" CQ->queue mapping\n+\t * procedure is required in order to safely update the CQ<->QID tables.\n+\t * The \"static\" procedure cannot be used when traffic is flowing,\n+\t * because the CQ<->QID tables cannot be updated atomically and the\n+\t * scheduler won't see the new mapping unless the queue's if_status\n+\t * changes, which isn't guaranteed.\n+\t */\n+\tret = dlb2_ldb_port_map_qid(hw, domain, port, queue, prio);\n+\n+\t/* If ret is less than zero, it's due to an internal error */\n+\tif (ret < 0)\n+\t\treturn ret;\n+\n+map_qid_done:\n+\tif (port->enabled)\n+\t\tdlb2_ldb_port_cq_enable(hw, port);\n+\n+\tresp->status = 0;\n+\n+\treturn 0;\n+}\n+\n+static void dlb2_log_unmap_qid(struct dlb2_hw *hw,\n+\t\t\t       u32 domain_id,\n+\t\t\t       struct dlb2_unmap_qid_args *args,\n+\t\t\t       bool vdev_req,\n+\t\t\t       unsigned int vdev_id)\n+{\n+\tDLB2_HW_DBG(hw, \"DLB2 unmap QID arguments:\\n\");\n+\tif (vdev_req)\n+\t\tDLB2_HW_DBG(hw, \"(Request from vdev %d)\\n\", vdev_id);\n+\tDLB2_HW_DBG(hw, \"\\tDomain ID: %d\\n\",\n+\t\t    domain_id);\n+\tDLB2_HW_DBG(hw, \"\\tPort ID:   %d\\n\",\n+\t\t    args->port_id);\n+\tDLB2_HW_DBG(hw, \"\\tQueue ID:  %d\\n\",\n+\t\t    args->qid);\n+\tif (args->qid < DLB2_MAX_NUM_LDB_QUEUES)\n+\t\tDLB2_HW_DBG(hw, \"\\tQueue's num mappings:  %d\\n\",\n+\t\t\t    hw->rsrcs.ldb_queues[args->qid].num_mappings);\n+}\n+\n+static int dlb2_verify_unmap_qid_args(struct dlb2_hw *hw,\n+\t\t\t\t      u32 domain_id,\n+\t\t\t\t      struct dlb2_unmap_qid_args *args,\n+\t\t\t\t      struct dlb2_cmd_response *resp,\n+\t\t\t\t      bool vdev_req,\n+\t\t\t\t      unsigned int vdev_id,\n+\t\t\t\t      struct dlb2_hw_domain **out_domain,\n+\t\t\t\t      struct dlb2_ldb_port **out_port,\n+\t\t\t\t      struct dlb2_ldb_queue **out_queue)\n+{\n+\tenum dlb2_qid_map_state state;\n+\tstruct dlb2_hw_domain *domain;\n+\tstruct dlb2_ldb_queue *queue;\n+\tstruct dlb2_ldb_port *port;\n+\tint slot;\n+\tint id;\n+\n+\tdomain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);\n+\n+\tif (!domain) {\n+\t\tresp->status = DLB2_ST_INVALID_DOMAIN_ID;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (!domain->configured) {\n+\t\tresp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tid = args->port_id;\n+\n+\tport = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);\n+\n+\tif (!port || !port->configured) {\n+\t\tresp->status = DLB2_ST_INVALID_PORT_ID;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (port->domain_id.phys_id != domain->id.phys_id) {\n+\t\tresp->status = DLB2_ST_INVALID_PORT_ID;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tqueue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);\n+\n+\tif (!queue || !queue->configured) {\n+\t\tDLB2_HW_ERR(hw, \"[%s()] Can't unmap 
unconfigured queue %d\\n\",\n+\t\t\t    __func__, args->qid);\n+\t\tresp->status = DLB2_ST_INVALID_QID;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\t/*\n+\t * Verify that the port has the queue mapped. From the application's\n+\t * perspective a queue is mapped if it is actually mapped, the map is\n+\t * in progress, or the map is blocked pending an unmap.\n+\t */\n+\tstate = DLB2_QUEUE_MAPPED;\n+\tif (dlb2_port_find_slot_queue(port, state, queue, &slot))\n+\t\tgoto done;\n+\n+\tstate = DLB2_QUEUE_MAP_IN_PROG;\n+\tif (dlb2_port_find_slot_queue(port, state, queue, &slot))\n+\t\tgoto done;\n+\n+\tif (dlb2_port_find_slot_with_pending_map_queue(port, queue, &slot))\n+\t\tgoto done;\n+\n+\tresp->status = DLB2_ST_INVALID_QID;\n+\treturn -EINVAL;\n+\n+done:\n+\t*out_domain = domain;\n+\t*out_port = port;\n+\t*out_queue = queue;\n+\n+\treturn 0;\n+}\n+\n+/**\n+ * dlb2_hw_unmap_qid() - Unmap a load-balanced queue from a load-balanced port\n+ * @hw: dlb2_hw handle for a particular device.\n+ * @domain_id: domain ID.\n+ * @args: unmap QID arguments.\n+ * @resp: response structure.\n+ * @vdev_req: indicates whether this request came from a vdev.\n+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.\n+ *\n+ * This function configures the DLB to stop scheduling QEs from the specified\n+ * queue to the specified port.\n+ *\n+ * A successful return does not necessarily mean the mapping was removed. If\n+ * this function is unable to immediately unmap the queue from the port, it\n+ * will add the requested operation to a per-port list of pending map/unmap\n+ * operations, and (if it's not already running) launch a kernel thread that\n+ * periodically attempts to process all pending operations. See\n+ * dlb2_hw_map_qid() for more details.\n+ *\n+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual\n+ * device.\n+ *\n+ * Return:\n+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is\n+ * assigned a detailed error code from enum dlb2_error.\n+ *\n+ * Errors:\n+ * EINVAL - A requested resource is unavailable, invalid port or queue ID, or\n+ *\t    the domain is not configured.\n+ * EFAULT - Internal error (resp->status not set).\n+ */\n+int dlb2_hw_unmap_qid(struct dlb2_hw *hw,\n+\t\t      u32 domain_id,\n+\t\t      struct dlb2_unmap_qid_args *args,\n+\t\t      struct dlb2_cmd_response *resp,\n+\t\t      bool vdev_req,\n+\t\t      unsigned int vdev_id)\n+{\n+\tstruct dlb2_hw_domain *domain;\n+\tstruct dlb2_ldb_queue *queue;\n+\tenum dlb2_qid_map_state st;\n+\tstruct dlb2_ldb_port *port;\n+\tbool unmap_complete;\n+\tint i, ret;\n+\n+\tdlb2_log_unmap_qid(hw, domain_id, args, vdev_req, vdev_id);\n+\n+\t/*\n+\t * Verify that hardware resources are available before attempting to\n+\t * satisfy the request. 
This simplifies the error unwinding code.\n+\t */\n+\tret = dlb2_verify_unmap_qid_args(hw,\n+\t\t\t\t\t domain_id,\n+\t\t\t\t\t args,\n+\t\t\t\t\t resp,\n+\t\t\t\t\t vdev_req,\n+\t\t\t\t\t vdev_id,\n+\t\t\t\t\t &domain,\n+\t\t\t\t\t &port,\n+\t\t\t\t\t &queue);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\t/*\n+\t * If the queue hasn't been mapped yet, we need to update the slot's\n+\t * state and re-enable the queue's inflights.\n+\t */\n+\tst = DLB2_QUEUE_MAP_IN_PROG;\n+\tif (dlb2_port_find_slot_queue(port, st, queue, &i)) {\n+\t\t/*\n+\t\t * Since the in-progress map was aborted, re-enable the QID's\n+\t\t * inflights.\n+\t\t */\n+\t\tif (queue->num_pending_additions == 0)\n+\t\t\tdlb2_ldb_queue_set_inflight_limit(hw, queue);\n+\n+\t\tst = DLB2_QUEUE_UNMAPPED;\n+\t\tret = dlb2_port_slot_state_transition(hw, port, queue, i, st);\n+\t\tif (ret)\n+\t\t\treturn ret;\n+\n+\t\tgoto unmap_qid_done;\n+\t}\n+\n+\t/*\n+\t * If the queue mapping is on hold pending an unmap, we simply need to\n+\t * update the slot's state.\n+\t */\n+\tif (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {\n+\t\tst = DLB2_QUEUE_UNMAP_IN_PROG;\n+\t\tret = dlb2_port_slot_state_transition(hw, port, queue, i, st);\n+\t\tif (ret)\n+\t\t\treturn ret;\n+\n+\t\tgoto unmap_qid_done;\n+\t}\n+\n+\tst = DLB2_QUEUE_MAPPED;\n+\tif (!dlb2_port_find_slot_queue(port, st, queue, &i)) {\n+\t\tDLB2_HW_ERR(hw,\n+\t\t\t    \"[%s()] Internal error: no available CQ slots\\n\",\n+\t\t\t    __func__);\n+\t\treturn -EFAULT;\n+\t}\n+\n+\t/*\n+\t * QID->CQ mapping removal is an asynchronous procedure. It requires\n+\t * stopping the DLB2 from scheduling this CQ, draining all inflights\n+\t * from the CQ, then unmapping the queue from the CQ. This function\n+\t * simply marks the port as needing the queue unmapped, and (if\n+\t * necessary) starts the unmapping worker thread.\n+\t */\n+\tdlb2_ldb_port_cq_disable(hw, port);\n+\n+\tst = DLB2_QUEUE_UNMAP_IN_PROG;\n+\tret = dlb2_port_slot_state_transition(hw, port, queue, i, st);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\t/*\n+\t * Attempt to finish the unmapping now, in case the port has no\n+\t * outstanding inflights. If that's not the case, this will fail and\n+\t * the unmapping will be completed at a later time.\n+\t */\n+\tunmap_complete = dlb2_domain_finish_unmap_port(hw, domain, port);\n+\n+\t/*\n+\t * If the unmapping couldn't complete immediately, launch the worker\n+\t * thread (if it isn't already launched) to finish it later.\n+\t */\n+\tif (!unmap_complete && !os_worker_active(hw))\n+\t\tos_schedule_work(hw);\n+\n+unmap_qid_done:\n+\tresp->status = 0;\n+\n+\treturn 0;\n+}\n+\n+static void\n+dlb2_log_pending_port_unmaps_args(struct dlb2_hw *hw,\n+\t\t\t\t  struct dlb2_pending_port_unmaps_args *args,\n+\t\t\t\t  bool vdev_req,\n+\t\t\t\t  unsigned int vdev_id)\n+{\n+\tDLB2_HW_DBG(hw, \"DLB unmaps in progress arguments:\\n\");\n+\tif (vdev_req)\n+\t\tDLB2_HW_DBG(hw, \"(Request from VF %d)\\n\", vdev_id);\n+\tDLB2_HW_DBG(hw, \"\\tPort ID: %d\\n\", args->port_id);\n+}\n+\n+/**\n+ * dlb2_hw_pending_port_unmaps() - returns the number of unmap operations in\n+ *\tprogress.\n+ * @hw: dlb2_hw handle for a particular device.\n+ * @domain_id: domain ID.\n+ * @args: number of unmaps in progress args\n+ * @resp: response structure.\n+ * @vdev_req: indicates whether this request came from a vdev.\n+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.\n+ *\n+ * Return:\n+ * Returns 0 upon success, < 0 otherwise. 
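/*
 * Illustrative sketch (not part of this patch): driving an asynchronous
 * unmap to completion from a caller's point of view. It assumes the
 * background worker (os_schedule_work()) is running to finish unmaps that
 * cannot complete immediately; dlb2_unmap_and_wait() is a hypothetical
 * helper, while the args/resp layouts are the ones used in this patch.
 * This busy-waits; a real caller would sleep between polls.
 */
static int dlb2_unmap_and_wait(struct dlb2_hw *hw, u32 domain_id,
			       u32 port_id, u32 qid)
{
	struct dlb2_unmap_qid_args unmap_args = { 0 };
	struct dlb2_pending_port_unmaps_args pend_args = { 0 };
	struct dlb2_cmd_response resp = { 0 };
	int ret;

	unmap_args.port_id = port_id;
	unmap_args.qid = qid;

	ret = dlb2_hw_unmap_qid(hw, domain_id, &unmap_args, &resp, false, 0);
	if (ret)
		return ret;

	/* Poll until the port reports no pending removals (resp.id) */
	pend_args.port_id = port_id;
	do {
		ret = dlb2_hw_pending_port_unmaps(hw, domain_id, &pend_args,
						  &resp, false, 0);
	} while (ret == 0 && resp.id > 0);

	return ret;
}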
If an error occurs, resp->status is\n+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id\n+ * contains the number of unmaps in progress.\n+ *\n+ * Errors:\n+ * EINVAL - Invalid port ID.\n+ */\n+int dlb2_hw_pending_port_unmaps(struct dlb2_hw *hw,\n+\t\t\t\tu32 domain_id,\n+\t\t\t\tstruct dlb2_pending_port_unmaps_args *args,\n+\t\t\t\tstruct dlb2_cmd_response *resp,\n+\t\t\t\tbool vdev_req,\n+\t\t\t\tunsigned int vdev_id)\n+{\n+\tstruct dlb2_hw_domain *domain;\n+\tstruct dlb2_ldb_port *port;\n+\n+\tdlb2_log_pending_port_unmaps_args(hw, args, vdev_req, vdev_id);\n+\n+\tdomain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);\n+\n+\tif (!domain) {\n+\t\tresp->status = DLB2_ST_INVALID_DOMAIN_ID;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tport = dlb2_get_domain_used_ldb_port(args->port_id, vdev_req, domain);\n+\tif (!port || !port->configured) {\n+\t\tresp->status = DLB2_ST_INVALID_PORT_ID;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tresp->id = port->num_pending_removals;\n+\n+\treturn 0;\n+}\n+\n+static int dlb2_verify_start_domain_args(struct dlb2_hw *hw,\n+\t\t\t\t\t u32 domain_id,\n+\t\t\t\t\t struct dlb2_cmd_response *resp,\n+\t\t\t\t\t bool vdev_req,\n+\t\t\t\t\t unsigned int vdev_id,\n+\t\t\t\t\t struct dlb2_hw_domain **out_domain)\n+{\n+\tstruct dlb2_hw_domain *domain;\n+\n+\tdomain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);\n+\n+\tif (!domain) {\n+\t\tresp->status = DLB2_ST_INVALID_DOMAIN_ID;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (!domain->configured) {\n+\t\tresp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tif (domain->started) {\n+\t\tresp->status = DLB2_ST_DOMAIN_STARTED;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\t*out_domain = domain;\n+\n+\treturn 0;\n+}\n+\n+static void dlb2_log_start_domain(struct dlb2_hw *hw,\n+\t\t\t\t  u32 domain_id,\n+\t\t\t\t  bool vdev_req,\n+\t\t\t\t  unsigned int vdev_id)\n+{\n+\tDLB2_HW_DBG(hw, \"DLB2 start domain arguments:\\n\");\n+\tif (vdev_req)\n+\t\tDLB2_HW_DBG(hw, \"(Request from vdev %d)\\n\", vdev_id);\n+\tDLB2_HW_DBG(hw, \"\\tDomain ID: %d\\n\", domain_id);\n+}\n+\n+/**\n+ * dlb2_hw_start_domain() - start a scheduling domain\n+ * @hw: dlb2_hw handle for a particular device.\n+ * @domain_id: domain ID.\n+ * @args: start domain arguments.\n+ * @resp: response structure.\n+ * @vdev_req: indicates whether this request came from a vdev.\n+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.\n+ *\n+ * This function starts a scheduling domain, which allows applications to send\n+ * traffic through it. Once a domain is started, its resources can no longer be\n+ * configured (besides QID remapping and port enable/disable).\n+ *\n+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual\n+ * device.\n+ *\n+ * Return:\n+ * Returns 0 upon success, < 0 otherwise. 
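/*
 * Illustrative sketch (not part of this patch): the call-ordering contract
 * that dlb2_verify_start_domain_args() above enforces. Starting an
 * unconfigured domain fails with DLB2_ST_DOMAIN_NOT_CONFIGURED in
 * resp->status, and a second start fails with DLB2_ST_DOMAIN_STARTED;
 * the wrapper name is hypothetical.
 */
static int dlb2_start_domain_once(struct dlb2_hw *hw, u32 domain_id)
{
	struct dlb2_start_domain_args args = { 0 };
	struct dlb2_cmd_response resp = { 0 };
	int ret;

	ret = dlb2_hw_start_domain(hw, domain_id, &args, &resp, false, 0);
	if (ret)
		/* resp.status holds the detailed dlb2_error code */
		return ret;

	/* Calling dlb2_hw_start_domain() again would now fail with
	 * resp.status == DLB2_ST_DOMAIN_STARTED.
	 */
	return 0;
}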
If an error occurs, resp->status is\n+ * assigned a detailed error code from enum dlb2_error.\n+ *\n+ * Errors:\n+ * EINVAL - the domain is not configured, or the domain is already started.\n+ */\n+int\n+dlb2_hw_start_domain(struct dlb2_hw *hw,\n+\t\t     u32 domain_id,\n+\t\t     struct dlb2_start_domain_args *args,\n+\t\t     struct dlb2_cmd_response *resp,\n+\t\t     bool vdev_req,\n+\t\t     unsigned int vdev_id)\n+{\n+\tstruct dlb2_list_entry *iter;\n+\tstruct dlb2_dir_pq_pair *dir_queue;\n+\tstruct dlb2_ldb_queue *ldb_queue;\n+\tstruct dlb2_hw_domain *domain;\n+\tint ret;\n+\tRTE_SET_USED(args);\n+\tRTE_SET_USED(iter);\n+\n+\tdlb2_log_start_domain(hw, domain_id, vdev_req, vdev_id);\n+\n+\tret = dlb2_verify_start_domain_args(hw,\n+\t\t\t\t\t    domain_id,\n+\t\t\t\t\t    resp,\n+\t\t\t\t\t    vdev_req,\n+\t\t\t\t\t    vdev_id,\n+\t\t\t\t\t    &domain);\n+\tif (ret)\n+\t\treturn ret;\n+\n+\t/*\n+\t * Enable load-balanced and directed queue write permissions for the\n+\t * queues this domain owns. Without this, the DLB2 will drop all\n+\t * incoming traffic to those queues.\n+\t */\n+\tDLB2_DOM_LIST_FOR(domain->used_ldb_queues, ldb_queue, iter) {\n+\t\tu32 vasqid_v = 0;\n+\t\tunsigned int offs;\n+\n+\t\tDLB2_BIT_SET(vasqid_v, DLB2_SYS_LDB_VASQID_V_VASQID_V);\n+\n+\t\toffs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES +\n+\t\t\tldb_queue->id.phys_id;\n+\n+\t\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), vasqid_v);\n+\t}\n+\n+\tDLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_queue, iter) {\n+\t\tu32 vasqid_v = 0;\n+\t\tunsigned int offs;\n+\n+\t\tDLB2_BIT_SET(vasqid_v, DLB2_SYS_DIR_VASQID_V_VASQID_V);\n+\n+\t\toffs = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +\n+\t\t\tdir_queue->id.phys_id;\n+\n+\t\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), vasqid_v);\n+\t}\n+\n+\tdlb2_flush_csr(hw);\n+\n+\tdomain->started = true;\n+\n+\tresp->status = 0;\n+\n+\treturn 0;\n+}\n+\n+static void dlb2_log_get_dir_queue_depth(struct dlb2_hw *hw,\n+\t\t\t\t\t u32 domain_id,\n+\t\t\t\t\t u32 queue_id,\n+\t\t\t\t\t bool vdev_req,\n+\t\t\t\t\t unsigned int vf_id)\n+{\n+\tDLB2_HW_DBG(hw, \"DLB get directed queue depth:\\n\");\n+\tif (vdev_req)\n+\t\tDLB2_HW_DBG(hw, \"(Request from VF %d)\\n\", vf_id);\n+\tDLB2_HW_DBG(hw, \"\\tDomain ID: %d\\n\", domain_id);\n+\tDLB2_HW_DBG(hw, \"\\tQueue ID: %d\\n\", queue_id);\n+}\n+\n+/**\n+ * dlb2_hw_get_dir_queue_depth() - returns the depth of a directed queue\n+ * @hw: dlb2_hw handle for a particular device.\n+ * @domain_id: domain ID.\n+ * @args: queue depth args\n+ * @resp: response structure.\n+ * @vdev_req: indicates whether this request came from a vdev.\n+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.\n+ *\n+ * This function returns the depth of a directed queue.\n+ *\n+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual\n+ * device.\n+ *\n+ * Return:\n+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is\n+ * assigned a detailed error code from enum dlb2_error. 
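/*
 * Illustrative sketch (not part of this patch): the per-(domain, queue)
 * register indexing used by the VASQID_V write-permission loops above. Each
 * virtual address space/queue pair owns one entry; assuming
 * DLB2_MAX_NUM_LDB_QUEUES is 32, domain 2 with queue 5 selects entry
 * 2 * 32 + 5 = 69. The helper name is hypothetical.
 */
static inline unsigned int dlb2_ldb_vasqid_offs(unsigned int domain_phys_id,
						unsigned int queue_phys_id,
						unsigned int max_ldb_queues)
{
	/* Row-major layout: one row of max_ldb_queues entries per domain */
	return domain_phys_id * max_ldb_queues + queue_phys_id;
}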
If successful, resp->id\n+ * contains the depth.\n+ *\n+ * Errors:\n+ * EINVAL - Invalid domain ID or queue ID.\n+ */\n+int dlb2_hw_get_dir_queue_depth(struct dlb2_hw *hw,\n+\t\t\t\tu32 domain_id,\n+\t\t\t\tstruct dlb2_get_dir_queue_depth_args *args,\n+\t\t\t\tstruct dlb2_cmd_response *resp,\n+\t\t\t\tbool vdev_req,\n+\t\t\t\tunsigned int vdev_id)\n+{\n+\tstruct dlb2_dir_pq_pair *queue;\n+\tstruct dlb2_hw_domain *domain;\n+\tint id;\n+\n+\tid = domain_id;\n+\n+\tdlb2_log_get_dir_queue_depth(hw, domain_id, args->queue_id,\n+\t\t\t\t     vdev_req, vdev_id);\n+\n+\tdomain = dlb2_get_domain_from_id(hw, id, vdev_req, vdev_id);\n+\tif (!domain) {\n+\t\tresp->status = DLB2_ST_INVALID_DOMAIN_ID;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tid = args->queue_id;\n+\n+\tqueue = dlb2_get_domain_used_dir_pq(hw, id, vdev_req, domain);\n+\tif (!queue) {\n+\t\tresp->status = DLB2_ST_INVALID_QID;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tresp->id = dlb2_dir_queue_depth(hw, queue);\n+\n+\treturn 0;\n+}\n+\n+static void dlb2_log_get_ldb_queue_depth(struct dlb2_hw *hw,\n+\t\t\t\t\t u32 domain_id,\n+\t\t\t\t\t u32 queue_id,\n+\t\t\t\t\t bool vdev_req,\n+\t\t\t\t\t unsigned int vf_id)\n+{\n+\tDLB2_HW_DBG(hw, \"DLB get load-balanced queue depth:\\n\");\n+\tif (vdev_req)\n+\t\tDLB2_HW_DBG(hw, \"(Request from VF %d)\\n\", vf_id);\n+\tDLB2_HW_DBG(hw, \"\\tDomain ID: %d\\n\", domain_id);\n+\tDLB2_HW_DBG(hw, \"\\tQueue ID: %d\\n\", queue_id);\n+}\n+\n+/**\n+ * dlb2_hw_get_ldb_queue_depth() - returns the depth of a load-balanced queue\n+ * @hw: dlb2_hw handle for a particular device.\n+ * @domain_id: domain ID.\n+ * @args: queue depth args\n+ * @resp: response structure.\n+ * @vdev_req: indicates whether this request came from a vdev.\n+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.\n+ *\n+ * This function returns the depth of a load-balanced queue.\n+ *\n+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual\n+ * device.\n+ *\n+ * Return:\n+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is\n+ * assigned a detailed error code from enum dlb2_error. 
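/*
 * Illustrative sketch (not part of this patch): a thin wrapper showing how
 * the depth queries are consumed. On success the depth comes back in
 * resp.id, as the kernel-doc above states; the wrapper name is hypothetical.
 */
static int dlb2_read_dir_queue_depth(struct dlb2_hw *hw, u32 domain_id,
				     u32 queue_id, u32 *depth)
{
	struct dlb2_get_dir_queue_depth_args args = { 0 };
	struct dlb2_cmd_response resp = { 0 };
	int ret;

	args.queue_id = queue_id;

	ret = dlb2_hw_get_dir_queue_depth(hw, domain_id, &args, &resp,
					  false, 0);
	if (ret == 0)
		*depth = resp.id;

	return ret;
}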
If successful, resp->id\n+ * contains the depth.\n+ *\n+ * Errors:\n+ * EINVAL - Invalid domain ID or queue ID.\n+ */\n+int dlb2_hw_get_ldb_queue_depth(struct dlb2_hw *hw,\n+\t\t\t\tu32 domain_id,\n+\t\t\t\tstruct dlb2_get_ldb_queue_depth_args *args,\n+\t\t\t\tstruct dlb2_cmd_response *resp,\n+\t\t\t\tbool vdev_req,\n+\t\t\t\tunsigned int vdev_id)\n+{\n+\tstruct dlb2_hw_domain *domain;\n+\tstruct dlb2_ldb_queue *queue;\n+\n+\tdlb2_log_get_ldb_queue_depth(hw, domain_id, args->queue_id,\n+\t\t\t\t     vdev_req, vdev_id);\n+\n+\tdomain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);\n+\tif (!domain) {\n+\t\tresp->status = DLB2_ST_INVALID_DOMAIN_ID;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tqueue = dlb2_get_domain_ldb_queue(args->queue_id, vdev_req, domain);\n+\tif (!queue) {\n+\t\tresp->status = DLB2_ST_INVALID_QID;\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tresp->id = dlb2_ldb_queue_depth(hw, queue);\n+\n+\treturn 0;\n+}\n+\n+/**\n+ * dlb2_finish_unmap_qid_procedures() - finish any pending unmap procedures\n+ * @hw: dlb2_hw handle for a particular device.\n+ *\n+ * This function attempts to finish any outstanding unmap procedures.\n+ * This function should be called by the kernel thread responsible for\n+ * finishing map/unmap procedures.\n+ *\n+ * Return:\n+ * Returns the number of procedures that weren't completed.\n+ */\n+unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw)\n+{\n+\tint i, num = 0;\n+\n+\t/* Finish queue unmap jobs for any domain that needs it */\n+\tfor (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {\n+\t\tstruct dlb2_hw_domain *domain = &hw->domains[i];\n+\n+\t\tnum += dlb2_domain_finish_unmap_qid_procedures(hw, domain);\n+\t}\n+\n+\treturn num;\n+}\n+\n+/**\n+ * dlb2_finish_map_qid_procedures() - finish any pending map procedures\n+ * @hw: dlb2_hw handle for a particular device.\n+ *\n+ * This function attempts to finish any outstanding map procedures.\n+ * This function should be called by the kernel thread responsible for\n+ * finishing map/unmap procedures.\n+ *\n+ * Return:\n+ * Returns the number of procedures that weren't completed.\n+ */\n+unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)\n+{\n+\tint i, num = 0;\n+\n+\t/* Finish queue map jobs for any domain that needs it */\n+\tfor (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {\n+\t\tstruct dlb2_hw_domain *domain = &hw->domains[i];\n+\n+\t\tnum += dlb2_domain_finish_map_qid_procedures(hw, domain);\n+\t}\n+\n+\treturn num;\n+}\n+\n+/**\n+ * dlb2_hw_enable_sparse_dir_cq_mode() - enable sparse mode for directed ports.\n+ * @hw: dlb2_hw handle for a particular device.\n+ *\n+ * This function must be called prior to configuring scheduling domains.\n+ */\n+\n+void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)\n+{\n+\tu32 ctrl;\n+\n+\tctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);\n+\n+\tDLB2_BIT_SET(ctrl,\n+\t\t     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE);\n+\n+\tDLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);\n+}\n+\n+/**\n+ * dlb2_hw_enable_sparse_ldb_cq_mode() - enable sparse mode for load-balanced\n+ *\tports.\n+ * @hw: dlb2_hw handle for a particular device.\n+ *\n+ * This function must be called prior to configuring scheduling domains.\n+ */\n+void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)\n+{\n+\tu32 ctrl;\n+\n+\tctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);\n+\n+\tDLB2_BIT_SET(ctrl,\n+\t\t     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE);\n+\n+\tDLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);\n+}\n+\n+/**\n+ * dlb2_get_group_sequence_numbers() 
- return a group's number of SNs per queue\n+ * @hw: dlb2_hw handle for a particular device.\n+ * @group_id: sequence number group ID.\n+ *\n+ * This function returns the configured number of sequence numbers per queue\n+ * for the specified group.\n+ *\n+ * Return:\n+ * Returns -EINVAL if group_id is invalid, else the group's SNs per queue.\n+ */\n+int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, u32 group_id)\n+{\n+\tif (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)\n+\t\treturn -EINVAL;\n+\n+\treturn hw->rsrcs.sn_groups[group_id].sequence_numbers_per_queue;\n+}\n+\n+/**\n+ * dlb2_get_group_sequence_number_occupancy() - return a group's in-use slots\n+ * @hw: dlb2_hw handle for a particular device.\n+ * @group_id: sequence number group ID.\n+ *\n+ * This function returns the group's number of in-use slots (i.e. load-balanced\n+ * queues using the specified group).\n+ *\n+ * Return:\n+ * Returns -EINVAL if group_id is invalid, else the group's number of in-use\n+ * slots.\n+ */\n+int dlb2_get_group_sequence_number_occupancy(struct dlb2_hw *hw, u32 group_id)\n+{\n+\tif (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)\n+\t\treturn -EINVAL;\n+\n+\treturn dlb2_sn_group_used_slots(&hw->rsrcs.sn_groups[group_id]);\n+}\n+\n+static void dlb2_log_set_group_sequence_numbers(struct dlb2_hw *hw,\n+\t\t\t\t\t\tu32 group_id,\n+\t\t\t\t\t\tu32 val)\n+{\n+\tDLB2_HW_DBG(hw, \"DLB2 set group sequence numbers:\\n\");\n+\tDLB2_HW_DBG(hw, \"\\tGroup ID: %u\\n\", group_id);\n+\tDLB2_HW_DBG(hw, \"\\tValue:    %u\\n\", val);\n+}\n+\n+/**\n+ * dlb2_set_group_sequence_numbers() - assign a group's number of SNs per queue\n+ * @hw: dlb2_hw handle for a particular device.\n+ * @group_id: sequence number group ID.\n+ * @val: requested number of sequence numbers per queue.\n+ *\n+ * This function configures the group's number of sequence numbers per queue.\n+ * val can be a power-of-two between 64 and 1024, inclusive. 
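/*
 * Illustrative sketch (not part of this patch): the capacity trade-off
 * behind the sequence-number queries above. Assuming each group provides
 * 1024 sequence numbers in total (the largest allocation accepted below),
 * the number of ordered-queue slots in a group is 1024 / SNs-per-queue:
 *
 *   SNs per queue:  64  128  256  512  1024
 *   queue slots:    16    8    4    2     1
 */
static inline unsigned int dlb2_sn_group_slots(unsigned int sns_per_queue)
{
	/* e.g. dlb2_set_group_sequence_numbers(hw, 0, 256) leaves room for
	 * four ordered queues in group 0, assuming the 1024-SN group size.
	 */
	return 1024 / sns_per_queue;
}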
This setting can\n+ * be configured until the first ordered load-balanced queue is configured, at\n+ * which point the configuration is locked.\n+ *\n+ * Return:\n+ * Returns 0 upon success; -EINVAL if group_id or val is invalid, -EPERM if an\n+ * ordered queue is configured.\n+ */\n+int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,\n+\t\t\t\t    u32 group_id,\n+\t\t\t\t    u32 val)\n+{\n+\tconst u32 valid_allocations[] = {64, 128, 256, 512, 1024};\n+\tstruct dlb2_sn_group *group;\n+\tu32 sn_mode = 0;\n+\tint mode;\n+\n+\tif (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)\n+\t\treturn -EINVAL;\n+\n+\tgroup = &hw->rsrcs.sn_groups[group_id];\n+\n+\t/*\n+\t * Once the first load-balanced queue using an SN group is configured,\n+\t * the group cannot be changed.\n+\t */\n+\tif (group->slot_use_bitmap != 0)\n+\t\treturn -EPERM;\n+\n+\tfor (mode = 0; mode < DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES; mode++)\n+\t\tif (val == valid_allocations[mode])\n+\t\t\tbreak;\n+\n+\tif (mode == DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES)\n+\t\treturn -EINVAL;\n+\n+\tgroup->mode = mode;\n+\tgroup->sequence_numbers_per_queue = val;\n+\n+\tDLB2_BITS_SET(sn_mode, hw->rsrcs.sn_groups[0].mode,\n+\t\t DLB2_RO_GRP_SN_MODE_SN_MODE_0);\n+\tDLB2_BITS_SET(sn_mode, hw->rsrcs.sn_groups[1].mode,\n+\t\t DLB2_RO_GRP_SN_MODE_SN_MODE_1);\n+\n+\tDLB2_CSR_WR(hw, DLB2_RO_GRP_SN_MODE(hw->ver), sn_mode);\n+\n+\tdlb2_log_set_group_sequence_numbers(hw, group_id, val);\n+\n+\treturn 0;\n+}\n+\ndiff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c\ndeleted file mode 100644\nindex 2f66b2c71..000000000\n--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c\n+++ /dev/null\n@@ -1,6235 +0,0 @@\n-/* SPDX-License-Identifier: BSD-3-Clause\n- * Copyright(c) 2016-2020 Intel Corporation\n- */\n-\n-#define DLB2_USE_NEW_HEADERS /* TEMPORARY FOR MERGE */\n-\n-#include \"dlb2_user.h\"\n-\n-#include \"dlb2_hw_types_new.h\"\n-#include \"dlb2_osdep.h\"\n-#include \"dlb2_osdep_bitmap.h\"\n-#include \"dlb2_osdep_types.h\"\n-#include \"dlb2_regs_new.h\"\n-#include \"dlb2_resource.h\"\n-\n-#include \"../../dlb2_priv.h\"\n-#include \"../../dlb2_inline_fns.h\"\n-\n-#define DLB2_DOM_LIST_HEAD(head, type) \\\n-\tDLB2_LIST_HEAD((head), type, domain_list)\n-\n-#define DLB2_FUNC_LIST_HEAD(head, type) \\\n-\tDLB2_LIST_HEAD((head), type, func_list)\n-\n-#define DLB2_DOM_LIST_FOR(head, ptr, iter) \\\n-\tDLB2_LIST_FOR_EACH(head, ptr, domain_list, iter)\n-\n-#define DLB2_FUNC_LIST_FOR(head, ptr, iter) \\\n-\tDLB2_LIST_FOR_EACH(head, ptr, func_list, iter)\n-\n-#define DLB2_DOM_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \\\n-\tDLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, domain_list, it, it_tmp)\n-\n-#define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \\\n-\tDLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)\n-\n-/*\n- * The PF driver cannot assume that a register write will affect subsequent HCW\n- * writes. To ensure a write completes, the driver must read back a CSR. 
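/*
 * Illustrative sketch (not part of this patch): the posted-write flush
 * pattern that dlb2_flush_csr() relies on. A PCIe write may be posted;
 * issuing a read from the same device forces earlier writes to complete
 * before the read returns. The accessors below are hypothetical stand-ins
 * for the driver's DLB2_CSR_WR()/DLB2_CSR_RD().
 */
#include <stdint.h>

static inline void csr_write_flushed(volatile uint32_t *csr, uint32_t val,
				     volatile uint32_t *readable_csr)
{
	*csr = val;		/* posted: may still be in flight */
	(void)*readable_csr;	/* non-posted read drains prior writes */
}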
This\n- * function only need be called for configuration that can occur after the\n- * domain has started; prior to starting, applications can't send HCWs.\n- */\n-static inline void dlb2_flush_csr(struct dlb2_hw *hw)\n-{\n-\tDLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS(hw->ver));\n-}\n-\n-static void dlb2_init_domain_rsrc_lists(struct dlb2_hw_domain *domain)\n-{\n-\tint i;\n-\n-\tdlb2_list_init_head(&domain->used_ldb_queues);\n-\tdlb2_list_init_head(&domain->used_dir_pq_pairs);\n-\tdlb2_list_init_head(&domain->avail_ldb_queues);\n-\tdlb2_list_init_head(&domain->avail_dir_pq_pairs);\n-\n-\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)\n-\t\tdlb2_list_init_head(&domain->used_ldb_ports[i]);\n-\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)\n-\t\tdlb2_list_init_head(&domain->avail_ldb_ports[i]);\n-}\n-\n-static void dlb2_init_fn_rsrc_lists(struct dlb2_function_resources *rsrc)\n-{\n-\tint i;\n-\tdlb2_list_init_head(&rsrc->avail_domains);\n-\tdlb2_list_init_head(&rsrc->used_domains);\n-\tdlb2_list_init_head(&rsrc->avail_ldb_queues);\n-\tdlb2_list_init_head(&rsrc->avail_dir_pq_pairs);\n-\n-\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)\n-\t\tdlb2_list_init_head(&rsrc->avail_ldb_ports[i]);\n-}\n-\n-/**\n- * dlb2_resource_free() - free device state memory\n- * @hw: dlb2_hw handle for a particular device.\n- *\n- * This function frees software state pointed to by dlb2_hw. This function\n- * should be called when resetting the device or unloading the driver.\n- */\n-void dlb2_resource_free(struct dlb2_hw *hw)\n-{\n-\tint i;\n-\n-\tif (hw->pf.avail_hist_list_entries)\n-\t\tdlb2_bitmap_free(hw->pf.avail_hist_list_entries);\n-\n-\tfor (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {\n-\t\tif (hw->vdev[i].avail_hist_list_entries)\n-\t\t\tdlb2_bitmap_free(hw->vdev[i].avail_hist_list_entries);\n-\t}\n-}\n-\n-/**\n- * dlb2_resource_init() - initialize the device\n- * @hw: pointer to struct dlb2_hw.\n- * @ver: device version.\n- *\n- * This function initializes the device's software state (pointed to by the hw\n- * argument) and programs global scheduling QoS registers. This function should\n- * be called during driver initialization, and the dlb2_hw structure should\n- * be zero-initialized before calling the function.\n- *\n- * The dlb2_hw struct must be unique per DLB 2.0 device and persist until the\n- * device is reset.\n- *\n- * Return:\n- * Returns 0 upon success, <0 otherwise.\n- */\n-int dlb2_resource_init(struct dlb2_hw *hw, enum dlb2_hw_ver ver)\n-{\n-\tstruct dlb2_list_entry *list;\n-\tunsigned int i;\n-\tint ret;\n-\n-\t/*\n-\t * For optimal load-balancing, ports that map to one or more QIDs in\n-\t * common should not be in numerical sequence. The port->QID mapping is\n-\t * application dependent, but the driver interleaves port IDs as much\n-\t * as possible to reduce the likelihood of sequential ports mapping to\n-\t * the same QID(s). 
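/*
 * Illustrative sketch (not part of this patch): a standalone check of the
 * interleaving property claimed for the allocation table that follows. For
 * each port ID v it measures how far apart v and v+1 sit in allocation
 * order within the first 16-entry block; numeric neighbors are always
 * allocated at least 7 picks apart, never consecutively.
 */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
	const int alloc[16] = {
		0, 7, 14, 5, 12, 3, 10, 1, 8, 15, 6, 13, 4, 11, 2, 9,
	};
	int pos[16];
	int i, v;

	for (i = 0; i < 16; i++)	/* invert: pos[ID] = allocation step */
		pos[alloc[i]] = i;

	for (v = 0; v < 15; v++)
		printf("IDs %2d and %2d allocated %d picks apart\n",
		       v, v + 1, abs(pos[v + 1] - pos[v]));

	return 0;
}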
This initial allocation of port IDs maximizes the\n-\t * average distance between an ID and its immediate neighbors (i.e.\n-\t * the distance from 1 to 0 and to 2, the distance from 2 to 1 and to\n-\t * 3, etc.).\n-\t */\n-\tconst u8 init_ldb_port_allocation[DLB2_MAX_NUM_LDB_PORTS] = {\n-\t\t0,  7,  14,  5, 12,  3, 10,  1,  8, 15,  6, 13,  4, 11,  2,  9,\n-\t\t16, 23, 30, 21, 28, 19, 26, 17, 24, 31, 22, 29, 20, 27, 18, 25,\n-\t\t32, 39, 46, 37, 44, 35, 42, 33, 40, 47, 38, 45, 36, 43, 34, 41,\n-\t\t48, 55, 62, 53, 60, 51, 58, 49, 56, 63, 54, 61, 52, 59, 50, 57,\n-\t};\n-\n-\thw->ver = ver;\n-\n-\tdlb2_init_fn_rsrc_lists(&hw->pf);\n-\n-\tfor (i = 0; i < DLB2_MAX_NUM_VDEVS; i++)\n-\t\tdlb2_init_fn_rsrc_lists(&hw->vdev[i]);\n-\n-\tfor (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {\n-\t\tdlb2_init_domain_rsrc_lists(&hw->domains[i]);\n-\t\thw->domains[i].parent_func = &hw->pf;\n-\t}\n-\n-\t/* Give all resources to the PF driver */\n-\thw->pf.num_avail_domains = DLB2_MAX_NUM_DOMAINS;\n-\tfor (i = 0; i < hw->pf.num_avail_domains; i++) {\n-\t\tlist = &hw->domains[i].func_list;\n-\n-\t\tdlb2_list_add(&hw->pf.avail_domains, list);\n-\t}\n-\n-\thw->pf.num_avail_ldb_queues = DLB2_MAX_NUM_LDB_QUEUES;\n-\tfor (i = 0; i < hw->pf.num_avail_ldb_queues; i++) {\n-\t\tlist = &hw->rsrcs.ldb_queues[i].func_list;\n-\n-\t\tdlb2_list_add(&hw->pf.avail_ldb_queues, list);\n-\t}\n-\n-\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)\n-\t\thw->pf.num_avail_ldb_ports[i] =\n-\t\t\tDLB2_MAX_NUM_LDB_PORTS / DLB2_NUM_COS_DOMAINS;\n-\n-\tfor (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {\n-\t\tint cos_id = i >> DLB2_NUM_COS_DOMAINS;\n-\t\tstruct dlb2_ldb_port *port;\n-\n-\t\tport = &hw->rsrcs.ldb_ports[init_ldb_port_allocation[i]];\n-\n-\t\tdlb2_list_add(&hw->pf.avail_ldb_ports[cos_id],\n-\t\t\t      &port->func_list);\n-\t}\n-\n-\thw->pf.num_avail_dir_pq_pairs = DLB2_MAX_NUM_DIR_PORTS(hw->ver);\n-\tfor (i = 0; i < hw->pf.num_avail_dir_pq_pairs; i++) {\n-\t\tlist = &hw->rsrcs.dir_pq_pairs[i].func_list;\n-\n-\t\tdlb2_list_add(&hw->pf.avail_dir_pq_pairs, list);\n-\t}\n-\n-\tif (hw->ver == DLB2_HW_V2) {\n-\t\thw->pf.num_avail_qed_entries = DLB2_MAX_NUM_LDB_CREDITS;\n-\t\thw->pf.num_avail_dqed_entries =\n-\t\t\tDLB2_MAX_NUM_DIR_CREDITS(hw->ver);\n-\t} else {\n-\t\thw->pf.num_avail_entries = DLB2_MAX_NUM_CREDITS(hw->ver);\n-\t}\n-\n-\thw->pf.num_avail_aqed_entries = DLB2_MAX_NUM_AQED_ENTRIES;\n-\n-\tret = dlb2_bitmap_alloc(&hw->pf.avail_hist_list_entries,\n-\t\t\t\tDLB2_MAX_NUM_HIST_LIST_ENTRIES);\n-\tif (ret)\n-\t\tgoto unwind;\n-\n-\tret = dlb2_bitmap_fill(hw->pf.avail_hist_list_entries);\n-\tif (ret)\n-\t\tgoto unwind;\n-\n-\tfor (i = 0; i < DLB2_MAX_NUM_VDEVS; i++) {\n-\t\tret = dlb2_bitmap_alloc(&hw->vdev[i].avail_hist_list_entries,\n-\t\t\t\t\tDLB2_MAX_NUM_HIST_LIST_ENTRIES);\n-\t\tif (ret)\n-\t\t\tgoto unwind;\n-\n-\t\tret = dlb2_bitmap_zero(hw->vdev[i].avail_hist_list_entries);\n-\t\tif (ret)\n-\t\t\tgoto unwind;\n-\t}\n-\n-\t/* Initialize the hardware resource IDs */\n-\tfor (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {\n-\t\thw->domains[i].id.phys_id = i;\n-\t\thw->domains[i].id.vdev_owned = false;\n-\t}\n-\n-\tfor (i = 0; i < DLB2_MAX_NUM_LDB_QUEUES; i++) {\n-\t\thw->rsrcs.ldb_queues[i].id.phys_id = i;\n-\t\thw->rsrcs.ldb_queues[i].id.vdev_owned = false;\n-\t}\n-\n-\tfor (i = 0; i < DLB2_MAX_NUM_LDB_PORTS; i++) {\n-\t\thw->rsrcs.ldb_ports[i].id.phys_id = i;\n-\t\thw->rsrcs.ldb_ports[i].id.vdev_owned = false;\n-\t}\n-\n-\tfor (i = 0; i < DLB2_MAX_NUM_DIR_PORTS(hw->ver); i++) {\n-\t\thw->rsrcs.dir_pq_pairs[i].id.phys_id = 
i;\n-\t\thw->rsrcs.dir_pq_pairs[i].id.vdev_owned = false;\n-\t}\n-\n-\tfor (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {\n-\t\thw->rsrcs.sn_groups[i].id = i;\n-\t\t/* Default mode (0) is 64 sequence numbers per queue */\n-\t\thw->rsrcs.sn_groups[i].mode = 0;\n-\t\thw->rsrcs.sn_groups[i].sequence_numbers_per_queue = 64;\n-\t\thw->rsrcs.sn_groups[i].slot_use_bitmap = 0;\n-\t}\n-\n-\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)\n-\t\thw->cos_reservation[i] = 100 / DLB2_NUM_COS_DOMAINS;\n-\n-\treturn 0;\n-\n-unwind:\n-\tdlb2_resource_free(hw);\n-\n-\treturn ret;\n-}\n-\n-/**\n- * dlb2_clr_pmcsr_disable() - power on bulk of DLB 2.0 logic\n- * @hw: dlb2_hw handle for a particular device.\n- * @ver: device version.\n- *\n- * Clearing the PMCSR must be done at initialization to make the device fully\n- * operational.\n- */\n-void dlb2_clr_pmcsr_disable(struct dlb2_hw *hw, enum dlb2_hw_ver ver)\n-{\n-\tu32 pmcsr_dis;\n-\n-\tpmcsr_dis = DLB2_CSR_RD(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver));\n-\n-\tDLB2_BITS_CLR(pmcsr_dis, DLB2_CM_CFG_PM_PMCSR_DISABLE_DISABLE);\n-\n-\tDLB2_CSR_WR(hw, DLB2_CM_CFG_PM_PMCSR_DISABLE(ver), pmcsr_dis);\n-}\n-\n-/**\n- * dlb2_hw_get_num_resources() - query the PCI function's available resources\n- * @hw: dlb2_hw handle for a particular device.\n- * @arg: pointer to resource counts.\n- * @vdev_req: indicates whether this request came from a vdev.\n- * @vdev_id: If vdev_req is true, this contains the vdev's ID.\n- *\n- * This function returns the number of available resources for the PF or for a\n- * VF.\n- *\n- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual\n- * device.\n- *\n- * Return:\n- * Returns 0 upon success, -EINVAL if vdev_req is true and vdev_id is\n- * invalid.\n- */\n-int dlb2_hw_get_num_resources(struct dlb2_hw *hw,\n-\t\t\t      struct dlb2_get_num_resources_args *arg,\n-\t\t\t      bool vdev_req,\n-\t\t\t      unsigned int vdev_id)\n-{\n-\tstruct dlb2_function_resources *rsrcs;\n-\tstruct dlb2_bitmap *map;\n-\tint i;\n-\n-\tif (vdev_req && vdev_id >= DLB2_MAX_NUM_VDEVS)\n-\t\treturn -EINVAL;\n-\n-\tif (vdev_req)\n-\t\trsrcs = &hw->vdev[vdev_id];\n-\telse\n-\t\trsrcs = &hw->pf;\n-\n-\targ->num_sched_domains = rsrcs->num_avail_domains;\n-\n-\targ->num_ldb_queues = rsrcs->num_avail_ldb_queues;\n-\n-\targ->num_ldb_ports = 0;\n-\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++)\n-\t\targ->num_ldb_ports += rsrcs->num_avail_ldb_ports[i];\n-\n-\targ->num_cos_ldb_ports[0] = rsrcs->num_avail_ldb_ports[0];\n-\targ->num_cos_ldb_ports[1] = rsrcs->num_avail_ldb_ports[1];\n-\targ->num_cos_ldb_ports[2] = rsrcs->num_avail_ldb_ports[2];\n-\targ->num_cos_ldb_ports[3] = rsrcs->num_avail_ldb_ports[3];\n-\n-\targ->num_dir_ports = rsrcs->num_avail_dir_pq_pairs;\n-\n-\targ->num_atomic_inflights = rsrcs->num_avail_aqed_entries;\n-\n-\tmap = rsrcs->avail_hist_list_entries;\n-\n-\targ->num_hist_list_entries = dlb2_bitmap_count(map);\n-\n-\targ->max_contiguous_hist_list_entries =\n-\t\tdlb2_bitmap_longest_set_range(map);\n-\n-\tif (hw->ver == DLB2_HW_V2) {\n-\t\targ->num_ldb_credits = rsrcs->num_avail_qed_entries;\n-\t\targ->num_dir_credits = rsrcs->num_avail_dqed_entries;\n-\t} else {\n-\t\targ->num_credits = rsrcs->num_avail_entries;\n-\t}\n-\treturn 0;\n-}\n-\n-static void dlb2_configure_domain_credits_v2_5(struct dlb2_hw *hw,\n-\t\t\t\t\t       struct dlb2_hw_domain *domain)\n-{\n-\tu32 reg = 0;\n-\n-\tDLB2_BITS_SET(reg, domain->num_credits, DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);\n-\tDLB2_CSR_WR(hw, DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id), reg);\n-}\n-\n-static 
void dlb2_configure_domain_credits_v2(struct dlb2_hw *hw,\n-\t\t\t\t\t     struct dlb2_hw_domain *domain)\n-{\n-\tu32 reg = 0;\n-\n-\tDLB2_BITS_SET(reg, domain->num_ldb_credits,\n-\t\t      DLB2_CHP_CFG_LDB_VAS_CRD_COUNT);\n-\tDLB2_CSR_WR(hw, DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id), reg);\n-\n-\treg = 0;\n-\tDLB2_BITS_SET(reg, domain->num_dir_credits,\n-\t\t      DLB2_CHP_CFG_DIR_VAS_CRD_COUNT);\n-\tDLB2_CSR_WR(hw, DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id), reg);\n-}\n-\n-static void dlb2_configure_domain_credits(struct dlb2_hw *hw,\n-\t\t\t\t\t  struct dlb2_hw_domain *domain)\n-{\n-\tif (hw->ver == DLB2_HW_V2)\n-\t\tdlb2_configure_domain_credits_v2(hw, domain);\n-\telse\n-\t\tdlb2_configure_domain_credits_v2_5(hw, domain);\n-}\n-\n-static int dlb2_attach_credits(struct dlb2_function_resources *rsrcs,\n-\t\t\t       struct dlb2_hw_domain *domain,\n-\t\t\t       u32 num_credits,\n-\t\t\t       struct dlb2_cmd_response *resp)\n-{\n-\tif (rsrcs->num_avail_entries < num_credits) {\n-\t\tresp->status = DLB2_ST_CREDITS_UNAVAILABLE;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\trsrcs->num_avail_entries -= num_credits;\n-\tdomain->num_credits += num_credits;\n-\treturn 0;\n-}\n-\n-static struct dlb2_ldb_port *\n-dlb2_get_next_ldb_port(struct dlb2_hw *hw,\n-\t\t       struct dlb2_function_resources *rsrcs,\n-\t\t       u32 domain_id,\n-\t\t       u32 cos_id)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_ldb_port *port;\n-\tRTE_SET_USED(iter);\n-\n-\t/*\n-\t * To reduce the odds of consecutive load-balanced ports mapping to the\n-\t * same queue(s), the driver attempts to allocate ports whose neighbors\n-\t * are owned by a different domain.\n-\t */\n-\tDLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {\n-\t\tu32 next, prev;\n-\t\tu32 phys_id;\n-\n-\t\tphys_id = port->id.phys_id;\n-\t\tnext = phys_id + 1;\n-\t\tprev = phys_id - 1;\n-\n-\t\tif (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)\n-\t\t\tnext = 0;\n-\t\tif (phys_id == 0)\n-\t\t\tprev = DLB2_MAX_NUM_LDB_PORTS - 1;\n-\n-\t\tif (!hw->rsrcs.ldb_ports[next].owned ||\n-\t\t    hw->rsrcs.ldb_ports[next].domain_id.phys_id == domain_id)\n-\t\t\tcontinue;\n-\n-\t\tif (!hw->rsrcs.ldb_ports[prev].owned ||\n-\t\t    hw->rsrcs.ldb_ports[prev].domain_id.phys_id == domain_id)\n-\t\t\tcontinue;\n-\n-\t\treturn port;\n-\t}\n-\n-\t/*\n-\t * Failing that, the driver looks for a port with one neighbor owned by\n-\t * a different domain and the other unallocated.\n-\t */\n-\tDLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {\n-\t\tu32 next, prev;\n-\t\tu32 phys_id;\n-\n-\t\tphys_id = port->id.phys_id;\n-\t\tnext = phys_id + 1;\n-\t\tprev = phys_id - 1;\n-\n-\t\tif (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)\n-\t\t\tnext = 0;\n-\t\tif (phys_id == 0)\n-\t\t\tprev = DLB2_MAX_NUM_LDB_PORTS - 1;\n-\n-\t\tif (!hw->rsrcs.ldb_ports[prev].owned &&\n-\t\t    hw->rsrcs.ldb_ports[next].owned &&\n-\t\t    hw->rsrcs.ldb_ports[next].domain_id.phys_id != domain_id)\n-\t\t\treturn port;\n-\n-\t\tif (!hw->rsrcs.ldb_ports[next].owned &&\n-\t\t    hw->rsrcs.ldb_ports[prev].owned &&\n-\t\t    hw->rsrcs.ldb_ports[prev].domain_id.phys_id != domain_id)\n-\t\t\treturn port;\n-\t}\n-\n-\t/*\n-\t * Failing that, the driver looks for a port with both neighbors\n-\t * unallocated.\n-\t */\n-\tDLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_ports[cos_id], port, iter) {\n-\t\tu32 next, prev;\n-\t\tu32 phys_id;\n-\n-\t\tphys_id = port->id.phys_id;\n-\t\tnext = phys_id + 1;\n-\t\tprev = phys_id - 1;\n-\n-\t\tif (phys_id == DLB2_MAX_NUM_LDB_PORTS - 1)\n-\t\t\tnext = 0;\n-\t\tif 
(phys_id == 0)\n-\t\t\tprev = DLB2_MAX_NUM_LDB_PORTS - 1;\n-\n-\t\tif (!hw->rsrcs.ldb_ports[prev].owned &&\n-\t\t    !hw->rsrcs.ldb_ports[next].owned)\n-\t\t\treturn port;\n-\t}\n-\n-\t/* If all else fails, the driver returns the next available port. */\n-\treturn DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_ports[cos_id],\n-\t\t\t\t   typeof(*port));\n-}\n-\n-static int __dlb2_attach_ldb_ports(struct dlb2_hw *hw,\n-\t\t\t\t   struct dlb2_function_resources *rsrcs,\n-\t\t\t\t   struct dlb2_hw_domain *domain,\n-\t\t\t\t   u32 num_ports,\n-\t\t\t\t   u32 cos_id,\n-\t\t\t\t   struct dlb2_cmd_response *resp)\n-{\n-\tunsigned int i;\n-\n-\tif (rsrcs->num_avail_ldb_ports[cos_id] < num_ports) {\n-\t\tresp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tfor (i = 0; i < num_ports; i++) {\n-\t\tstruct dlb2_ldb_port *port;\n-\n-\t\tport = dlb2_get_next_ldb_port(hw, rsrcs,\n-\t\t\t\t\t      domain->id.phys_id, cos_id);\n-\t\tif (port == NULL) {\n-\t\t\tDLB2_HW_ERR(hw,\n-\t\t\t\t    \"[%s()] Internal error: domain validation failed\\n\",\n-\t\t\t\t    __func__);\n-\t\t\treturn -EFAULT;\n-\t\t}\n-\n-\t\tdlb2_list_del(&rsrcs->avail_ldb_ports[cos_id],\n-\t\t\t      &port->func_list);\n-\n-\t\tport->domain_id = domain->id;\n-\t\tport->owned = true;\n-\n-\t\tdlb2_list_add(&domain->avail_ldb_ports[cos_id],\n-\t\t\t      &port->domain_list);\n-\t}\n-\n-\trsrcs->num_avail_ldb_ports[cos_id] -= num_ports;\n-\n-\treturn 0;\n-}\n-\n-\n-static int dlb2_attach_ldb_ports(struct dlb2_hw *hw,\n-\t\t\t\t struct dlb2_function_resources *rsrcs,\n-\t\t\t\t struct dlb2_hw_domain *domain,\n-\t\t\t\t struct dlb2_create_sched_domain_args *args,\n-\t\t\t\t struct dlb2_cmd_response *resp)\n-{\n-\tunsigned int i, j;\n-\tint ret;\n-\n-\tif (args->cos_strict) {\n-\t\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n-\t\t\tu32 num = args->num_cos_ldb_ports[i];\n-\n-\t\t\t/* Allocate ports from specific classes-of-service */\n-\t\t\tret = __dlb2_attach_ldb_ports(hw,\n-\t\t\t\t\t\t      rsrcs,\n-\t\t\t\t\t\t      domain,\n-\t\t\t\t\t\t      num,\n-\t\t\t\t\t\t      i,\n-\t\t\t\t\t\t      resp);\n-\t\t\tif (ret)\n-\t\t\t\treturn ret;\n-\t\t}\n-\t} else {\n-\t\tunsigned int k;\n-\t\tu32 cos_id;\n-\n-\t\t/*\n-\t\t * Attempt to allocate from specific class-of-service, but\n-\t\t * fallback to the other classes if that fails.\n-\t\t */\n-\t\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n-\t\t\tfor (j = 0; j < args->num_cos_ldb_ports[i]; j++) {\n-\t\t\t\tfor (k = 0; k < DLB2_NUM_COS_DOMAINS; k++) {\n-\t\t\t\t\tcos_id = (i + k) % DLB2_NUM_COS_DOMAINS;\n-\n-\t\t\t\t\tret = __dlb2_attach_ldb_ports(hw,\n-\t\t\t\t\t\t\t\t      rsrcs,\n-\t\t\t\t\t\t\t\t      domain,\n-\t\t\t\t\t\t\t\t      1,\n-\t\t\t\t\t\t\t\t      cos_id,\n-\t\t\t\t\t\t\t\t      resp);\n-\t\t\t\t\tif (ret == 0)\n-\t\t\t\t\t\tbreak;\n-\t\t\t\t}\n-\n-\t\t\t\tif (ret)\n-\t\t\t\t\treturn ret;\n-\t\t\t}\n-\t\t}\n-\t}\n-\n-\t/* Allocate num_ldb_ports from any class-of-service */\n-\tfor (i = 0; i < args->num_ldb_ports; i++) {\n-\t\tfor (j = 0; j < DLB2_NUM_COS_DOMAINS; j++) {\n-\t\t\tret = __dlb2_attach_ldb_ports(hw,\n-\t\t\t\t\t\t      rsrcs,\n-\t\t\t\t\t\t      domain,\n-\t\t\t\t\t\t      1,\n-\t\t\t\t\t\t      j,\n-\t\t\t\t\t\t      resp);\n-\t\t\tif (ret == 0)\n-\t\t\t\tbreak;\n-\t\t}\n-\n-\t\tif (ret)\n-\t\t\treturn ret;\n-\t}\n-\n-\treturn 0;\n-}\n-\n-static int dlb2_attach_dir_ports(struct dlb2_hw *hw,\n-\t\t\t\t struct dlb2_function_resources *rsrcs,\n-\t\t\t\t struct dlb2_hw_domain *domain,\n-\t\t\t\t u32 num_ports,\n-\t\t\t\t struct dlb2_cmd_response 
*resp)\n-{\n-\tunsigned int i;\n-\n-\tif (rsrcs->num_avail_dir_pq_pairs < num_ports) {\n-\t\tresp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tfor (i = 0; i < num_ports; i++) {\n-\t\tstruct dlb2_dir_pq_pair *port;\n-\n-\t\tport = DLB2_FUNC_LIST_HEAD(rsrcs->avail_dir_pq_pairs,\n-\t\t\t\t\t   typeof(*port));\n-\t\tif (port == NULL) {\n-\t\t\tDLB2_HW_ERR(hw,\n-\t\t\t\t    \"[%s()] Internal error: domain validation failed\\n\",\n-\t\t\t\t    __func__);\n-\t\t\treturn -EFAULT;\n-\t\t}\n-\n-\t\tdlb2_list_del(&rsrcs->avail_dir_pq_pairs, &port->func_list);\n-\n-\t\tport->domain_id = domain->id;\n-\t\tport->owned = true;\n-\n-\t\tdlb2_list_add(&domain->avail_dir_pq_pairs, &port->domain_list);\n-\t}\n-\n-\trsrcs->num_avail_dir_pq_pairs -= num_ports;\n-\n-\treturn 0;\n-}\n-\n-static int dlb2_attach_ldb_credits(struct dlb2_function_resources *rsrcs,\n-\t\t\t\t   struct dlb2_hw_domain *domain,\n-\t\t\t\t   u32 num_credits,\n-\t\t\t\t   struct dlb2_cmd_response *resp)\n-{\n-\tif (rsrcs->num_avail_qed_entries < num_credits) {\n-\t\tresp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\trsrcs->num_avail_qed_entries -= num_credits;\n-\tdomain->num_ldb_credits += num_credits;\n-\treturn 0;\n-}\n-\n-static int dlb2_attach_dir_credits(struct dlb2_function_resources *rsrcs,\n-\t\t\t\t   struct dlb2_hw_domain *domain,\n-\t\t\t\t   u32 num_credits,\n-\t\t\t\t   struct dlb2_cmd_response *resp)\n-{\n-\tif (rsrcs->num_avail_dqed_entries < num_credits) {\n-\t\tresp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\trsrcs->num_avail_dqed_entries -= num_credits;\n-\tdomain->num_dir_credits += num_credits;\n-\treturn 0;\n-}\n-\n-\n-static int dlb2_attach_atomic_inflights(struct dlb2_function_resources *rsrcs,\n-\t\t\t\t\tstruct dlb2_hw_domain *domain,\n-\t\t\t\t\tu32 num_atomic_inflights,\n-\t\t\t\t\tstruct dlb2_cmd_response *resp)\n-{\n-\tif (rsrcs->num_avail_aqed_entries < num_atomic_inflights) {\n-\t\tresp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\trsrcs->num_avail_aqed_entries -= num_atomic_inflights;\n-\tdomain->num_avail_aqed_entries += num_atomic_inflights;\n-\treturn 0;\n-}\n-\n-static int\n-dlb2_attach_domain_hist_list_entries(struct dlb2_function_resources *rsrcs,\n-\t\t\t\t     struct dlb2_hw_domain *domain,\n-\t\t\t\t     u32 num_hist_list_entries,\n-\t\t\t\t     struct dlb2_cmd_response *resp)\n-{\n-\tstruct dlb2_bitmap *bitmap;\n-\tint base;\n-\n-\tif (num_hist_list_entries) {\n-\t\tbitmap = rsrcs->avail_hist_list_entries;\n-\n-\t\tbase = dlb2_bitmap_find_set_bit_range(bitmap,\n-\t\t\t\t\t\t      num_hist_list_entries);\n-\t\tif (base < 0)\n-\t\t\tgoto error;\n-\n-\t\tdomain->total_hist_list_entries = num_hist_list_entries;\n-\t\tdomain->avail_hist_list_entries = num_hist_list_entries;\n-\t\tdomain->hist_list_entry_base = base;\n-\t\tdomain->hist_list_entry_offset = 0;\n-\n-\t\tdlb2_bitmap_clear_range(bitmap, base, num_hist_list_entries);\n-\t}\n-\treturn 0;\n-\n-error:\n-\tresp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;\n-\treturn -EINVAL;\n-}\n-\n-static int dlb2_attach_ldb_queues(struct dlb2_hw *hw,\n-\t\t\t\t  struct dlb2_function_resources *rsrcs,\n-\t\t\t\t  struct dlb2_hw_domain *domain,\n-\t\t\t\t  u32 num_queues,\n-\t\t\t\t  struct dlb2_cmd_response *resp)\n-{\n-\tunsigned int i;\n-\n-\tif (rsrcs->num_avail_ldb_queues < num_queues) {\n-\t\tresp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tfor (i = 0; i < num_queues; i++) {\n-\t\tstruct 
dlb2_ldb_queue *queue;\n-\n-\t\tqueue = DLB2_FUNC_LIST_HEAD(rsrcs->avail_ldb_queues,\n-\t\t\t\t\t    typeof(*queue));\n-\t\tif (queue == NULL) {\n-\t\t\tDLB2_HW_ERR(hw,\n-\t\t\t\t    \"[%s()] Internal error: domain validation failed\\n\",\n-\t\t\t\t    __func__);\n-\t\t\treturn -EFAULT;\n-\t\t}\n-\n-\t\tdlb2_list_del(&rsrcs->avail_ldb_queues, &queue->func_list);\n-\n-\t\tqueue->domain_id = domain->id;\n-\t\tqueue->owned = true;\n-\n-\t\tdlb2_list_add(&domain->avail_ldb_queues, &queue->domain_list);\n-\t}\n-\n-\trsrcs->num_avail_ldb_queues -= num_queues;\n-\n-\treturn 0;\n-}\n-\n-static int\n-dlb2_domain_attach_resources(struct dlb2_hw *hw,\n-\t\t\t     struct dlb2_function_resources *rsrcs,\n-\t\t\t     struct dlb2_hw_domain *domain,\n-\t\t\t     struct dlb2_create_sched_domain_args *args,\n-\t\t\t     struct dlb2_cmd_response *resp)\n-{\n-\tint ret;\n-\n-\tret = dlb2_attach_ldb_queues(hw,\n-\t\t\t\t     rsrcs,\n-\t\t\t\t     domain,\n-\t\t\t\t     args->num_ldb_queues,\n-\t\t\t\t     resp);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\tret = dlb2_attach_ldb_ports(hw,\n-\t\t\t\t    rsrcs,\n-\t\t\t\t    domain,\n-\t\t\t\t    args,\n-\t\t\t\t    resp);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\tret = dlb2_attach_dir_ports(hw,\n-\t\t\t\t    rsrcs,\n-\t\t\t\t    domain,\n-\t\t\t\t    args->num_dir_ports,\n-\t\t\t\t    resp);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\tif (hw->ver == DLB2_HW_V2) {\n-\t\tret = dlb2_attach_ldb_credits(rsrcs,\n-\t\t\t\t\t      domain,\n-\t\t\t\t\t      args->num_ldb_credits,\n-\t\t\t\t\t      resp);\n-\t\tif (ret)\n-\t\t\treturn ret;\n-\n-\t\tret = dlb2_attach_dir_credits(rsrcs,\n-\t\t\t\t\t      domain,\n-\t\t\t\t\t      args->num_dir_credits,\n-\t\t\t\t\t      resp);\n-\t\tif (ret)\n-\t\t\treturn ret;\n-\t} else {  /* DLB 2.5 */\n-\t\tret = dlb2_attach_credits(rsrcs,\n-\t\t\t\t\t  domain,\n-\t\t\t\t\t  args->num_credits,\n-\t\t\t\t\t  resp);\n-\t\tif (ret)\n-\t\t\treturn ret;\n-\t}\n-\n-\tret = dlb2_attach_domain_hist_list_entries(rsrcs,\n-\t\t\t\t\t\t   domain,\n-\t\t\t\t\t\t   args->num_hist_list_entries,\n-\t\t\t\t\t\t   resp);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\tret = dlb2_attach_atomic_inflights(rsrcs,\n-\t\t\t\t\t   domain,\n-\t\t\t\t\t   args->num_atomic_inflights,\n-\t\t\t\t\t   resp);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\tdlb2_configure_domain_credits(hw, domain);\n-\n-\tdomain->configured = true;\n-\n-\tdomain->started = false;\n-\n-\trsrcs->num_avail_domains--;\n-\n-\treturn 0;\n-}\n-\n-static int\n-dlb2_verify_create_sched_dom_args(struct dlb2_function_resources *rsrcs,\n-\t\t\t\t  struct dlb2_create_sched_domain_args *args,\n-\t\t\t\t  struct dlb2_cmd_response *resp,\n-\t\t\t\t  struct dlb2_hw *hw,\n-\t\t\t\t  struct dlb2_hw_domain **out_domain)\n-{\n-\tu32 num_avail_ldb_ports, req_ldb_ports;\n-\tstruct dlb2_bitmap *avail_hl_entries;\n-\tunsigned int max_contig_hl_range;\n-\tstruct dlb2_hw_domain *domain;\n-\tint i;\n-\n-\tavail_hl_entries = rsrcs->avail_hist_list_entries;\n-\n-\tmax_contig_hl_range = dlb2_bitmap_longest_set_range(avail_hl_entries);\n-\n-\tnum_avail_ldb_ports = 0;\n-\treq_ldb_ports = 0;\n-\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n-\t\tnum_avail_ldb_ports += rsrcs->num_avail_ldb_ports[i];\n-\n-\t\treq_ldb_ports += args->num_cos_ldb_ports[i];\n-\t}\n-\n-\treq_ldb_ports += args->num_ldb_ports;\n-\n-\tif (rsrcs->num_avail_domains < 1) {\n-\t\tresp->status = DLB2_ST_DOMAIN_UNAVAILABLE;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tdomain = DLB2_FUNC_LIST_HEAD(rsrcs->avail_domains, typeof(*domain));\n-\tif (domain == NULL) {\n-\t\tresp->status = 
DLB2_ST_DOMAIN_UNAVAILABLE;\n-\t\treturn -EFAULT;\n-\t}\n-\n-\tif (rsrcs->num_avail_ldb_queues < args->num_ldb_queues) {\n-\t\tresp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (req_ldb_ports > num_avail_ldb_ports) {\n-\t\tresp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tfor (i = 0; args->cos_strict && i < DLB2_NUM_COS_DOMAINS; i++) {\n-\t\tif (args->num_cos_ldb_ports[i] >\n-\t\t    rsrcs->num_avail_ldb_ports[i]) {\n-\t\t\tresp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;\n-\t\t\treturn -EINVAL;\n-\t\t}\n-\t}\n-\n-\tif (args->num_ldb_queues > 0 && req_ldb_ports == 0) {\n-\t\tresp->status = DLB2_ST_LDB_PORT_REQUIRED_FOR_LDB_QUEUES;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (rsrcs->num_avail_dir_pq_pairs < args->num_dir_ports) {\n-\t\tresp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;\n-\t\treturn -EINVAL;\n-\t}\n-\tif (hw->ver == DLB2_HW_V2_5) {\n-\t\tif (rsrcs->num_avail_entries < args->num_credits) {\n-\t\t\tresp->status = DLB2_ST_CREDITS_UNAVAILABLE;\n-\t\t\treturn -EINVAL;\n-\t\t}\n-\t} else {\n-\t\tif (rsrcs->num_avail_qed_entries < args->num_ldb_credits) {\n-\t\t\tresp->status = DLB2_ST_LDB_CREDITS_UNAVAILABLE;\n-\t\t\treturn -EINVAL;\n-\t\t}\n-\t\tif (rsrcs->num_avail_dqed_entries < args->num_dir_credits) {\n-\t\t\tresp->status = DLB2_ST_DIR_CREDITS_UNAVAILABLE;\n-\t\t\treturn -EINVAL;\n-\t\t}\n-\t}\n-\n-\tif (rsrcs->num_avail_aqed_entries < args->num_atomic_inflights) {\n-\t\tresp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (max_contig_hl_range < args->num_hist_list_entries) {\n-\t\tresp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\t*out_domain = domain;\n-\n-\treturn 0;\n-}\n-\n-static void\n-dlb2_log_create_sched_domain_args(struct dlb2_hw *hw,\n-\t\t\t\t  struct dlb2_create_sched_domain_args *args,\n-\t\t\t\t  bool vdev_req,\n-\t\t\t\t  unsigned int vdev_id)\n-{\n-\tDLB2_HW_DBG(hw, \"DLB2 create sched domain arguments:\\n\");\n-\tif (vdev_req)\n-\t\tDLB2_HW_DBG(hw, \"(Request from vdev %d)\\n\", vdev_id);\n-\tDLB2_HW_DBG(hw, \"\\tNumber of LDB queues:          %d\\n\",\n-\t\t    args->num_ldb_queues);\n-\tDLB2_HW_DBG(hw, \"\\tNumber of LDB ports (any CoS): %d\\n\",\n-\t\t    args->num_ldb_ports);\n-\tDLB2_HW_DBG(hw, \"\\tNumber of LDB ports (CoS 0):   %d\\n\",\n-\t\t    args->num_cos_ldb_ports[0]);\n-\tDLB2_HW_DBG(hw, \"\\tNumber of LDB ports (CoS 1):   %d\\n\",\n-\t\t    args->num_cos_ldb_ports[1]);\n-\tDLB2_HW_DBG(hw, \"\\tNumber of LDB ports (CoS 2):   %d\\n\",\n-\t\t    args->num_cos_ldb_ports[2]);\n-\tDLB2_HW_DBG(hw, \"\\tNumber of LDB ports (CoS 3):   %d\\n\",\n-\t\t    args->num_cos_ldb_ports[3]);\n-\tDLB2_HW_DBG(hw, \"\\tStrict CoS allocation:         %d\\n\",\n-\t\t    args->cos_strict);\n-\tDLB2_HW_DBG(hw, \"\\tNumber of DIR ports:           %d\\n\",\n-\t\t    args->num_dir_ports);\n-\tDLB2_HW_DBG(hw, \"\\tNumber of ATM inflights:       %d\\n\",\n-\t\t    args->num_atomic_inflights);\n-\tDLB2_HW_DBG(hw, \"\\tNumber of hist list entries:   %d\\n\",\n-\t\t    args->num_hist_list_entries);\n-\tif (hw->ver == DLB2_HW_V2) {\n-\t\tDLB2_HW_DBG(hw, \"\\tNumber of LDB credits:         %d\\n\",\n-\t\t\t    args->num_ldb_credits);\n-\t\tDLB2_HW_DBG(hw, \"\\tNumber of DIR credits:         %d\\n\",\n-\t\t\t    args->num_dir_credits);\n-\t} else {\n-\t\tDLB2_HW_DBG(hw, \"\\tNumber of credits:         %d\\n\",\n-\t\t\t    args->num_credits);\n-\t}\n-}\n-\n-/**\n- * dlb2_hw_create_sched_domain() - create a scheduling domain\n- * @hw: dlb2_hw handle for a 
particular device.\n- * @args: scheduling domain creation arguments.\n- * @resp: response structure.\n- * @vdev_req: indicates whether this request came from a vdev.\n- * @vdev_id: If vdev_req is true, this contains the vdev's ID.\n- *\n- * This function creates a scheduling domain containing the resources specified\n- * in args. The individual resources (queues, ports, credits) can be configured\n- * after creating a scheduling domain.\n- *\n- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual\n- * device.\n- *\n- * Return:\n- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is\n- * assigned a detailed error code from enum dlb2_error. If successful, resp->id\n- * contains the domain ID.\n- *\n- * resp->id contains a virtual ID if vdev_req is true.\n- *\n- * Errors:\n- * EINVAL - A requested resource is unavailable, or the requested domain name\n- *\t    is already in use.\n- * EFAULT - Internal error (resp->status not set).\n- */\n-int dlb2_hw_create_sched_domain(struct dlb2_hw *hw,\n-\t\t\t\tstruct dlb2_create_sched_domain_args *args,\n-\t\t\t\tstruct dlb2_cmd_response *resp,\n-\t\t\t\tbool vdev_req,\n-\t\t\t\tunsigned int vdev_id)\n-{\n-\tstruct dlb2_function_resources *rsrcs;\n-\tstruct dlb2_hw_domain *domain;\n-\tint ret;\n-\n-\trsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;\n-\n-\tdlb2_log_create_sched_domain_args(hw, args, vdev_req, vdev_id);\n-\n-\t/*\n-\t * Verify that hardware resources are available before attempting to\n-\t * satisfy the request. This simplifies the error unwinding code.\n-\t */\n-\tret = dlb2_verify_create_sched_dom_args(rsrcs, args, resp, hw, &domain);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\tdlb2_init_domain_rsrc_lists(domain);\n-\n-\tret = dlb2_domain_attach_resources(hw, rsrcs, domain, args, resp);\n-\tif (ret) {\n-\t\tDLB2_HW_ERR(hw,\n-\t\t\t    \"[%s()] Internal error: failed to verify args.\\n\",\n-\t\t\t    __func__);\n-\n-\t\treturn ret;\n-\t}\n-\n-\tdlb2_list_del(&rsrcs->avail_domains, &domain->func_list);\n-\n-\tdlb2_list_add(&rsrcs->used_domains, &domain->func_list);\n-\n-\tresp->id = (vdev_req) ? 
domain->id.virt_id : domain->id.phys_id;\n-\tresp->status = 0;\n-\n-\treturn 0;\n-}\n-\n-static void dlb2_dir_port_cq_disable(struct dlb2_hw *hw,\n-\t\t\t\t     struct dlb2_dir_pq_pair *port)\n-{\n-\tu32 reg = 0;\n-\n-\tDLB2_BIT_SET(reg, DLB2_LSP_CQ_DIR_DSBL_DISABLED);\n-\tDLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id), reg);\n-\n-\tdlb2_flush_csr(hw);\n-}\n-\n-static u32 dlb2_dir_cq_token_count(struct dlb2_hw *hw,\n-\t\t\t\t   struct dlb2_dir_pq_pair *port)\n-{\n-\tu32 cnt;\n-\n-\tcnt = DLB2_CSR_RD(hw,\n-\t\t\t  DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id));\n-\n-\t/*\n-\t * Account for the initial token count, which is used in order to\n-\t * provide a CQ with depth less than 8.\n-\t */\n-\n-\treturn DLB2_BITS_GET(cnt, DLB2_LSP_CQ_DIR_TKN_CNT_COUNT) -\n-\t       port->init_tkn_cnt;\n-}\n-\n-static void dlb2_drain_dir_cq(struct dlb2_hw *hw,\n-\t\t\t      struct dlb2_dir_pq_pair *port)\n-{\n-\tunsigned int port_id = port->id.phys_id;\n-\tu32 cnt;\n-\n-\t/* Return any outstanding tokens */\n-\tcnt = dlb2_dir_cq_token_count(hw, port);\n-\n-\tif (cnt != 0) {\n-\t\tstruct dlb2_hcw hcw_mem[8], *hcw;\n-\t\tvoid __iomem *pp_addr;\n-\n-\t\tpp_addr = os_map_producer_port(hw, port_id, false);\n-\n-\t\t/* Point hcw to a 64B-aligned location */\n-\t\thcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);\n-\n-\t\t/*\n-\t\t * Program the first HCW for a batch token return and\n-\t\t * the rest as NOOPS\n-\t\t */\n-\t\tmemset(hcw, 0, 4 * sizeof(*hcw));\n-\t\thcw->cq_token = 1;\n-\t\thcw->lock_id = cnt - 1;\n-\n-\t\tdlb2_movdir64b(pp_addr, hcw);\n-\n-\t\tos_fence_hcw(hw, pp_addr);\n-\n-\t\tos_unmap_producer_port(hw, pp_addr);\n-\t}\n-}\n-\n-static void dlb2_dir_port_cq_enable(struct dlb2_hw *hw,\n-\t\t\t\t    struct dlb2_dir_pq_pair *port)\n-{\n-\tu32 reg = 0;\n-\n-\tDLB2_CSR_WR(hw, DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id), reg);\n-\n-\tdlb2_flush_csr(hw);\n-}\n-\n-static int dlb2_domain_drain_dir_cqs(struct dlb2_hw *hw,\n-\t\t\t\t     struct dlb2_hw_domain *domain,\n-\t\t\t\t     bool toggle_port)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_dir_pq_pair *port;\n-\tRTE_SET_USED(iter);\n-\n-\tDLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {\n-\t\t/*\n-\t\t * Can't drain a port if it's not configured, and there's\n-\t\t * nothing to drain if its queue is unconfigured.\n-\t\t */\n-\t\tif (!port->port_configured || !port->queue_configured)\n-\t\t\tcontinue;\n-\n-\t\tif (toggle_port)\n-\t\t\tdlb2_dir_port_cq_disable(hw, port);\n-\n-\t\tdlb2_drain_dir_cq(hw, port);\n-\n-\t\tif (toggle_port)\n-\t\t\tdlb2_dir_port_cq_enable(hw, port);\n-\t}\n-\n-\treturn 0;\n-}\n-\n-static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,\n-\t\t\t\tstruct dlb2_dir_pq_pair *queue)\n-{\n-\tu32 cnt;\n-\n-\tcnt = DLB2_CSR_RD(hw, DLB2_LSP_QID_DIR_ENQUEUE_CNT(hw->ver,\n-\t\t\t\t\t\t      queue->id.phys_id));\n-\n-\treturn DLB2_BITS_GET(cnt, DLB2_LSP_QID_DIR_ENQUEUE_CNT_COUNT);\n-}\n-\n-static bool dlb2_dir_queue_is_empty(struct dlb2_hw *hw,\n-\t\t\t\t    struct dlb2_dir_pq_pair *queue)\n-{\n-\treturn dlb2_dir_queue_depth(hw, queue) == 0;\n-}\n-\n-static bool dlb2_domain_dir_queues_empty(struct dlb2_hw *hw,\n-\t\t\t\t\t struct dlb2_hw_domain *domain)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_dir_pq_pair *queue;\n-\tRTE_SET_USED(iter);\n-\n-\tDLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {\n-\t\tif (!dlb2_dir_queue_is_empty(hw, queue))\n-\t\t\treturn false;\n-\t}\n-\n-\treturn true;\n-}\n-static int dlb2_domain_drain_dir_queues(struct dlb2_hw *hw,\n-\t\t\t\t\tstruct 
dlb2_hw_domain *domain)\n-{\n-\tint i;\n-\n-\t/* If the domain hasn't been started, there's no traffic to drain */\n-\tif (!domain->started)\n-\t\treturn 0;\n-\n-\tfor (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {\n-\t\tdlb2_domain_drain_dir_cqs(hw, domain, true);\n-\n-\t\tif (dlb2_domain_dir_queues_empty(hw, domain))\n-\t\t\tbreak;\n-\t}\n-\n-\tif (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {\n-\t\tDLB2_HW_ERR(hw,\n-\t\t\t    \"[%s()] Internal error: failed to empty queues\\n\",\n-\t\t\t    __func__);\n-\t\treturn -EFAULT;\n-\t}\n-\n-\t/*\n-\t * Drain the CQs one more time. For the queues to go empty, they would\n-\t * have scheduled one or more QEs.\n-\t */\n-\tdlb2_domain_drain_dir_cqs(hw, domain, true);\n-\n-\treturn 0;\n-}\n-\n-static void dlb2_ldb_port_cq_enable(struct dlb2_hw *hw,\n-\t\t\t\t    struct dlb2_ldb_port *port)\n-{\n-\tu32 reg = 0;\n-\n-\t/*\n-\t * Don't re-enable the port if a removal is pending. The caller should\n-\t * mark this port as enabled (if it isn't already), and when the\n-\t * removal completes the port will be enabled.\n-\t */\n-\tif (port->num_pending_removals)\n-\t\treturn;\n-\n-\tDLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id), reg);\n-\n-\tdlb2_flush_csr(hw);\n-}\n-\n-static void dlb2_ldb_port_cq_disable(struct dlb2_hw *hw,\n-\t\t\t\t     struct dlb2_ldb_port *port)\n-{\n-\tu32 reg = 0;\n-\n-\tDLB2_BIT_SET(reg, DLB2_LSP_CQ_LDB_DSBL_DISABLED);\n-\tDLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id), reg);\n-\n-\tdlb2_flush_csr(hw);\n-}\n-\n-static u32 dlb2_ldb_cq_inflight_count(struct dlb2_hw *hw,\n-\t\t\t\t      struct dlb2_ldb_port *port)\n-{\n-\tu32 cnt;\n-\n-\tcnt = DLB2_CSR_RD(hw,\n-\t\t\t  DLB2_LSP_CQ_LDB_INFL_CNT(hw->ver, port->id.phys_id));\n-\n-\treturn DLB2_BITS_GET(cnt, DLB2_LSP_CQ_LDB_INFL_CNT_COUNT);\n-}\n-\n-static u32 dlb2_ldb_cq_token_count(struct dlb2_hw *hw,\n-\t\t\t\t   struct dlb2_ldb_port *port)\n-{\n-\tu32 cnt;\n-\n-\tcnt = DLB2_CSR_RD(hw,\n-\t\t\t  DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id));\n-\n-\t/*\n-\t * Account for the initial token count, which is used in order to\n-\t * provide a CQ with depth less than 8.\n-\t */\n-\n-\treturn DLB2_BITS_GET(cnt, DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT) -\n-\t\tport->init_tkn_cnt;\n-}\n-\n-static void dlb2_drain_ldb_cq(struct dlb2_hw *hw, struct dlb2_ldb_port *port)\n-{\n-\tu32 infl_cnt, tkn_cnt;\n-\tunsigned int i;\n-\n-\tinfl_cnt = dlb2_ldb_cq_inflight_count(hw, port);\n-\ttkn_cnt = dlb2_ldb_cq_token_count(hw, port);\n-\n-\tif (infl_cnt || tkn_cnt) {\n-\t\tstruct dlb2_hcw hcw_mem[8], *hcw;\n-\t\tvoid __iomem *pp_addr;\n-\n-\t\tpp_addr = os_map_producer_port(hw, port->id.phys_id, true);\n-\n-\t\t/* Point hcw to a 64B-aligned location */\n-\t\thcw = (struct dlb2_hcw *)((uintptr_t)&hcw_mem[4] & ~0x3F);\n-\n-\t\t/*\n-\t\t * Program the first HCW for a completion and token return and\n-\t\t * the other HCWs as NOOPS\n-\t\t */\n-\n-\t\tmemset(hcw, 0, 4 * sizeof(*hcw));\n-\t\thcw->qe_comp = (infl_cnt > 0);\n-\t\thcw->cq_token = (tkn_cnt > 0);\n-\t\thcw->lock_id = tkn_cnt - 1;\n-\n-\t\t/* Return tokens in the first HCW */\n-\t\tdlb2_movdir64b(pp_addr, hcw);\n-\n-\t\thcw->cq_token = 0;\n-\n-\t\t/* Issue remaining completions (if any) */\n-\t\tfor (i = 1; i < infl_cnt; i++)\n-\t\t\tdlb2_movdir64b(pp_addr, hcw);\n-\n-\t\tos_fence_hcw(hw, pp_addr);\n-\n-\t\tos_unmap_producer_port(hw, pp_addr);\n-\t}\n-}\n-\n-static void dlb2_domain_drain_ldb_cqs(struct dlb2_hw *hw,\n-\t\t\t\t      struct dlb2_hw_domain *domain,\n-\t\t\t\t      bool toggle_port)\n-{\n-\tstruct 
dlb2_list_entry *iter;\n-\tstruct dlb2_ldb_port *port;\n-\tint i;\n-\tRTE_SET_USED(iter);\n-\n-\t/* If the domain hasn't been started, there's no traffic to drain */\n-\tif (!domain->started)\n-\t\treturn;\n-\n-\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n-\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {\n-\t\t\tif (toggle_port)\n-\t\t\t\tdlb2_ldb_port_cq_disable(hw, port);\n-\n-\t\t\tdlb2_drain_ldb_cq(hw, port);\n-\n-\t\t\tif (toggle_port)\n-\t\t\t\tdlb2_ldb_port_cq_enable(hw, port);\n-\t\t}\n-\t}\n-}\n-\n-static u32 dlb2_ldb_queue_depth(struct dlb2_hw *hw,\n-\t\t\t\tstruct dlb2_ldb_queue *queue)\n-{\n-\tu32 aqed, ldb, atm;\n-\n-\taqed = DLB2_CSR_RD(hw, DLB2_LSP_QID_AQED_ACTIVE_CNT(hw->ver,\n-\t\t\t\t\t\t       queue->id.phys_id));\n-\tldb = DLB2_CSR_RD(hw, DLB2_LSP_QID_LDB_ENQUEUE_CNT(hw->ver,\n-\t\t\t\t\t\t      queue->id.phys_id));\n-\tatm = DLB2_CSR_RD(hw,\n-\t\t\t  DLB2_LSP_QID_ATM_ACTIVE(hw->ver, queue->id.phys_id));\n-\n-\treturn DLB2_BITS_GET(aqed, DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT)\n-\t       + DLB2_BITS_GET(ldb, DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT)\n-\t       + DLB2_BITS_GET(atm, DLB2_LSP_QID_ATM_ACTIVE_COUNT);\n-}\n-\n-static bool dlb2_ldb_queue_is_empty(struct dlb2_hw *hw,\n-\t\t\t\t    struct dlb2_ldb_queue *queue)\n-{\n-\treturn dlb2_ldb_queue_depth(hw, queue) == 0;\n-}\n-\n-static bool dlb2_domain_mapped_queues_empty(struct dlb2_hw *hw,\n-\t\t\t\t\t    struct dlb2_hw_domain *domain)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_ldb_queue *queue;\n-\tRTE_SET_USED(iter);\n-\n-\tDLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {\n-\t\tif (queue->num_mappings == 0)\n-\t\t\tcontinue;\n-\n-\t\tif (!dlb2_ldb_queue_is_empty(hw, queue))\n-\t\t\treturn false;\n-\t}\n-\n-\treturn true;\n-}\n-\n-static int dlb2_domain_drain_mapped_queues(struct dlb2_hw *hw,\n-\t\t\t\t\t   struct dlb2_hw_domain *domain)\n-{\n-\tint i;\n-\n-\t/* If the domain hasn't been started, there's no traffic to drain */\n-\tif (!domain->started)\n-\t\treturn 0;\n-\n-\tif (domain->num_pending_removals > 0) {\n-\t\tDLB2_HW_ERR(hw,\n-\t\t\t    \"[%s()] Internal error: failed to unmap domain queues\\n\",\n-\t\t\t    __func__);\n-\t\treturn -EFAULT;\n-\t}\n-\n-\tfor (i = 0; i < DLB2_MAX_QID_EMPTY_CHECK_LOOPS; i++) {\n-\t\tdlb2_domain_drain_ldb_cqs(hw, domain, true);\n-\n-\t\tif (dlb2_domain_mapped_queues_empty(hw, domain))\n-\t\t\tbreak;\n-\t}\n-\n-\tif (i == DLB2_MAX_QID_EMPTY_CHECK_LOOPS) {\n-\t\tDLB2_HW_ERR(hw,\n-\t\t\t    \"[%s()] Internal error: failed to empty queues\\n\",\n-\t\t\t    __func__);\n-\t\treturn -EFAULT;\n-\t}\n-\n-\t/*\n-\t * Drain the CQs one more time. 
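/*
 * Illustrative sketch (not part of this patch): the 64-byte alignment trick
 * in dlb2_drain_ldb_cq()/dlb2_drain_dir_cq() above. Assuming a 16-byte
 * struct dlb2_hcw (four HCWs fill the 64-byte line that dlb2_movdir64b()
 * copies), hcw_mem[8] spans 128 bytes: &hcw_mem[4] sits 64 bytes in, so
 * rounding that address down to a 64-byte boundary moves it back at most
 * 63 bytes (still inside the buffer) while leaving at least 64 bytes, i.e.
 * four HCWs, for the store.
 */
#include <stdint.h>

static inline void *dlb2_align_down_64(void *ptr)
{
	return (void *)((uintptr_t)ptr & ~(uintptr_t)0x3F);
}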
For the queues to go empty, they would\n-\t * have scheduled one or more QEs.\n-\t */\n-\tdlb2_domain_drain_ldb_cqs(hw, domain, true);\n-\n-\treturn 0;\n-}\n-\n-static void dlb2_domain_enable_ldb_cqs(struct dlb2_hw *hw,\n-\t\t\t\t       struct dlb2_hw_domain *domain)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_ldb_port *port;\n-\tint i;\n-\tRTE_SET_USED(iter);\n-\n-\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n-\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {\n-\t\t\tport->enabled = true;\n-\n-\t\t\tdlb2_ldb_port_cq_enable(hw, port);\n-\t\t}\n-\t}\n-}\n-\n-static struct dlb2_ldb_queue *\n-dlb2_get_ldb_queue_from_id(struct dlb2_hw *hw,\n-\t\t\t   u32 id,\n-\t\t\t   bool vdev_req,\n-\t\t\t   unsigned int vdev_id)\n-{\n-\tstruct dlb2_list_entry *iter1;\n-\tstruct dlb2_list_entry *iter2;\n-\tstruct dlb2_function_resources *rsrcs;\n-\tstruct dlb2_hw_domain *domain;\n-\tstruct dlb2_ldb_queue *queue;\n-\tRTE_SET_USED(iter1);\n-\tRTE_SET_USED(iter2);\n-\n-\tif (id >= DLB2_MAX_NUM_LDB_QUEUES)\n-\t\treturn NULL;\n-\n-\trsrcs = (vdev_req) ? &hw->vdev[vdev_id] : &hw->pf;\n-\n-\tif (!vdev_req)\n-\t\treturn &hw->rsrcs.ldb_queues[id];\n-\n-\tDLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iter1) {\n-\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter2) {\n-\t\t\tif (queue->id.virt_id == id)\n-\t\t\t\treturn queue;\n-\t\t}\n-\t}\n-\n-\tDLB2_FUNC_LIST_FOR(rsrcs->avail_ldb_queues, queue, iter1) {\n-\t\tif (queue->id.virt_id == id)\n-\t\t\treturn queue;\n-\t}\n-\n-\treturn NULL;\n-}\n-\n-static struct dlb2_hw_domain *dlb2_get_domain_from_id(struct dlb2_hw *hw,\n-\t\t\t\t\t\t      u32 id,\n-\t\t\t\t\t\t      bool vdev_req,\n-\t\t\t\t\t\t      unsigned int vdev_id)\n-{\n-\tstruct dlb2_list_entry *iteration;\n-\tstruct dlb2_function_resources *rsrcs;\n-\tstruct dlb2_hw_domain *domain;\n-\tRTE_SET_USED(iteration);\n-\n-\tif (id >= DLB2_MAX_NUM_DOMAINS)\n-\t\treturn NULL;\n-\n-\tif (!vdev_req)\n-\t\treturn &hw->domains[id];\n-\n-\trsrcs = &hw->vdev[vdev_id];\n-\n-\tDLB2_FUNC_LIST_FOR(rsrcs->used_domains, domain, iteration) {\n-\t\tif (domain->id.virt_id == id)\n-\t\t\treturn domain;\n-\t}\n-\n-\treturn NULL;\n-}\n-\n-static int dlb2_port_slot_state_transition(struct dlb2_hw *hw,\n-\t\t\t\t\t   struct dlb2_ldb_port *port,\n-\t\t\t\t\t   struct dlb2_ldb_queue *queue,\n-\t\t\t\t\t   int slot,\n-\t\t\t\t\t   enum dlb2_qid_map_state new_state)\n-{\n-\tenum dlb2_qid_map_state curr_state = port->qid_map[slot].state;\n-\tstruct dlb2_hw_domain *domain;\n-\tint domain_id;\n-\n-\tdomain_id = port->domain_id.phys_id;\n-\n-\tdomain = dlb2_get_domain_from_id(hw, domain_id, false, 0);\n-\tif (domain == NULL) {\n-\t\tDLB2_HW_ERR(hw,\n-\t\t\t    \"[%s()] Internal error: unable to find domain %d\\n\",\n-\t\t\t    __func__, domain_id);\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tswitch (curr_state) {\n-\tcase DLB2_QUEUE_UNMAPPED:\n-\t\tswitch (new_state) {\n-\t\tcase DLB2_QUEUE_MAPPED:\n-\t\t\tqueue->num_mappings++;\n-\t\t\tport->num_mappings++;\n-\t\t\tbreak;\n-\t\tcase DLB2_QUEUE_MAP_IN_PROG:\n-\t\t\tqueue->num_pending_additions++;\n-\t\t\tdomain->num_pending_additions++;\n-\t\t\tbreak;\n-\t\tdefault:\n-\t\t\tgoto error;\n-\t\t}\n-\t\tbreak;\n-\tcase DLB2_QUEUE_MAPPED:\n-\t\tswitch (new_state) {\n-\t\tcase DLB2_QUEUE_UNMAPPED:\n-\t\t\tqueue->num_mappings--;\n-\t\t\tport->num_mappings--;\n-\t\t\tbreak;\n-\t\tcase DLB2_QUEUE_UNMAP_IN_PROG:\n-\t\t\tport->num_pending_removals++;\n-\t\t\tdomain->num_pending_removals++;\n-\t\t\tbreak;\n-\t\tcase DLB2_QUEUE_MAPPED:\n-\t\t\t/* Priority change, nothing to update 
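/*
 * Illustrative sketch, not driver code: the ID-resolution rule in
 * dlb2_get_ldb_queue_from_id()/dlb2_get_domain_from_id() above. A PF
 * request treats the ID as a physical index, while a vdev request must
 * search for a matching virtual ID; lookup_virt() is a placeholder for
 * the list walks in the real functions.
 */
#include <stdbool.h>
#include <stddef.h>

struct queue { unsigned int phys_id, virt_id; };

static struct queue *get_queue(struct queue *phys_table, unsigned int nqueues,
			       unsigned int id, bool vdev_req,
			       struct queue *(*lookup_virt)(unsigned int))
{
	if (id >= nqueues)
		return NULL;		/* out-of-range IDs never resolve */
	if (!vdev_req)
		return &phys_table[id];	/* PF IDs are physical IDs */
	return lookup_virt(id);		/* vdev IDs are virtual IDs */
}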
*/\n-\t\t\tbreak;\n-\t\tdefault:\n-\t\t\tgoto error;\n-\t\t}\n-\t\tbreak;\n-\tcase DLB2_QUEUE_MAP_IN_PROG:\n-\t\tswitch (new_state) {\n-\t\tcase DLB2_QUEUE_UNMAPPED:\n-\t\t\tqueue->num_pending_additions--;\n-\t\t\tdomain->num_pending_additions--;\n-\t\t\tbreak;\n-\t\tcase DLB2_QUEUE_MAPPED:\n-\t\t\tqueue->num_mappings++;\n-\t\t\tport->num_mappings++;\n-\t\t\tqueue->num_pending_additions--;\n-\t\t\tdomain->num_pending_additions--;\n-\t\t\tbreak;\n-\t\tdefault:\n-\t\t\tgoto error;\n-\t\t}\n-\t\tbreak;\n-\tcase DLB2_QUEUE_UNMAP_IN_PROG:\n-\t\tswitch (new_state) {\n-\t\tcase DLB2_QUEUE_UNMAPPED:\n-\t\t\tport->num_pending_removals--;\n-\t\t\tdomain->num_pending_removals--;\n-\t\t\tqueue->num_mappings--;\n-\t\t\tport->num_mappings--;\n-\t\t\tbreak;\n-\t\tcase DLB2_QUEUE_MAPPED:\n-\t\t\tport->num_pending_removals--;\n-\t\t\tdomain->num_pending_removals--;\n-\t\t\tbreak;\n-\t\tcase DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:\n-\t\t\t/* Nothing to update */\n-\t\t\tbreak;\n-\t\tdefault:\n-\t\t\tgoto error;\n-\t\t}\n-\t\tbreak;\n-\tcase DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP:\n-\t\tswitch (new_state) {\n-\t\tcase DLB2_QUEUE_UNMAP_IN_PROG:\n-\t\t\t/* Nothing to update */\n-\t\t\tbreak;\n-\t\tcase DLB2_QUEUE_UNMAPPED:\n-\t\t\t/*\n-\t\t\t * An UNMAP_IN_PROG_PENDING_MAP slot briefly\n-\t\t\t * becomes UNMAPPED before it transitions to\n-\t\t\t * MAP_IN_PROG.\n-\t\t\t */\n-\t\t\tqueue->num_mappings--;\n-\t\t\tport->num_mappings--;\n-\t\t\tport->num_pending_removals--;\n-\t\t\tdomain->num_pending_removals--;\n-\t\t\tbreak;\n-\t\tdefault:\n-\t\t\tgoto error;\n-\t\t}\n-\t\tbreak;\n-\tdefault:\n-\t\tgoto error;\n-\t}\n-\n-\tport->qid_map[slot].state = new_state;\n-\n-\tDLB2_HW_DBG(hw,\n-\t\t    \"[%s()] queue %d -> port %d state transition (%d -> %d)\\n\",\n-\t\t    __func__, queue->id.phys_id, port->id.phys_id,\n-\t\t    curr_state, new_state);\n-\treturn 0;\n-\n-error:\n-\tDLB2_HW_ERR(hw,\n-\t\t    \"[%s()] Internal error: invalid queue %d -> port %d state transition (%d -> %d)\\n\",\n-\t\t    __func__, queue->id.phys_id, port->id.phys_id,\n-\t\t    curr_state, new_state);\n-\treturn -EFAULT;\n-}\n-\n-static bool dlb2_port_find_slot(struct dlb2_ldb_port *port,\n-\t\t\t\tenum dlb2_qid_map_state state,\n-\t\t\t\tint *slot)\n-{\n-\tint i;\n-\n-\tfor (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {\n-\t\tif (port->qid_map[i].state == state)\n-\t\t\tbreak;\n-\t}\n-\n-\t*slot = i;\n-\n-\treturn (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);\n-}\n-\n-static bool dlb2_port_find_slot_queue(struct dlb2_ldb_port *port,\n-\t\t\t\t      enum dlb2_qid_map_state state,\n-\t\t\t\t      struct dlb2_ldb_queue *queue,\n-\t\t\t\t      int *slot)\n-{\n-\tint i;\n-\n-\tfor (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {\n-\t\tif (port->qid_map[i].state == state &&\n-\t\t    port->qid_map[i].qid == queue->id.phys_id)\n-\t\t\tbreak;\n-\t}\n-\n-\t*slot = i;\n-\n-\treturn (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);\n-}\n-\n-/*\n- * dlb2_ldb_queue_{enable, disable}_mapped_cqs() don't operate exactly as\n- * their function names imply, and should only be called by the dynamic CQ\n- * mapping code.\n- */\n-static void dlb2_ldb_queue_disable_mapped_cqs(struct dlb2_hw *hw,\n-\t\t\t\t\t      struct dlb2_hw_domain *domain,\n-\t\t\t\t\t      struct dlb2_ldb_queue *queue)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_ldb_port *port;\n-\tint slot, i;\n-\tRTE_SET_USED(iter);\n-\n-\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n-\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {\n-\t\t\tenum dlb2_qid_map_state state = 
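/*
 * Illustrative sketch, not driver code: the switch statement above encodes
 * a five-state slot state machine. This table restates which transitions
 * it accepts (everything else reaches the error label); the per-arc
 * counter bookkeeping stays in the switch. Enum names are shortened here.
 */
#include <stdbool.h>

enum map_state {
	ST_UNMAPPED,
	ST_MAPPED,
	ST_MAP_IN_PROG,
	ST_UNMAP_IN_PROG,
	ST_UNMAP_IN_PROG_PENDING_MAP,
	ST_NUM
};

static const bool legal[ST_NUM][ST_NUM] = {
	[ST_UNMAPPED]      = { [ST_MAPPED] = true, [ST_MAP_IN_PROG] = true },
	[ST_MAPPED]        = { [ST_UNMAPPED] = true, [ST_UNMAP_IN_PROG] = true,
			       [ST_MAPPED] = true /* priority change only */ },
	[ST_MAP_IN_PROG]   = { [ST_UNMAPPED] = true, [ST_MAPPED] = true },
	[ST_UNMAP_IN_PROG] = { [ST_UNMAPPED] = true, [ST_MAPPED] = true,
			       [ST_UNMAP_IN_PROG_PENDING_MAP] = true },
	[ST_UNMAP_IN_PROG_PENDING_MAP] =
			     { [ST_UNMAP_IN_PROG] = true, [ST_UNMAPPED] = true },
};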
DLB2_QUEUE_MAPPED;\n-\n-\t\t\tif (!dlb2_port_find_slot_queue(port, state,\n-\t\t\t\t\t\t       queue, &slot))\n-\t\t\t\tcontinue;\n-\n-\t\t\tif (port->enabled)\n-\t\t\t\tdlb2_ldb_port_cq_disable(hw, port);\n-\t\t}\n-\t}\n-}\n-\n-static void dlb2_ldb_queue_enable_mapped_cqs(struct dlb2_hw *hw,\n-\t\t\t\t\t     struct dlb2_hw_domain *domain,\n-\t\t\t\t\t     struct dlb2_ldb_queue *queue)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_ldb_port *port;\n-\tint slot, i;\n-\tRTE_SET_USED(iter);\n-\n-\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n-\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {\n-\t\t\tenum dlb2_qid_map_state state = DLB2_QUEUE_MAPPED;\n-\n-\t\t\tif (!dlb2_port_find_slot_queue(port, state,\n-\t\t\t\t\t\t       queue, &slot))\n-\t\t\t\tcontinue;\n-\n-\t\t\tif (port->enabled)\n-\t\t\t\tdlb2_ldb_port_cq_enable(hw, port);\n-\t\t}\n-\t}\n-}\n-\n-static void dlb2_ldb_port_clear_queue_if_status(struct dlb2_hw *hw,\n-\t\t\t\t\t\tstruct dlb2_ldb_port *port,\n-\t\t\t\t\t\tint slot)\n-{\n-\tu32 ctrl = 0;\n-\n-\tDLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);\n-\tDLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);\n-\tDLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V);\n-\n-\tDLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);\n-\n-\tdlb2_flush_csr(hw);\n-}\n-\n-static void dlb2_ldb_port_set_queue_if_status(struct dlb2_hw *hw,\n-\t\t\t\t\t      struct dlb2_ldb_port *port,\n-\t\t\t\t\t      int slot)\n-{\n-\tu32 ctrl = 0;\n-\n-\tDLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);\n-\tDLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);\n-\tDLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);\n-\tDLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_INFLIGHT_OK_V);\n-\n-\tDLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);\n-\n-\tdlb2_flush_csr(hw);\n-}\n-\n-static int dlb2_ldb_port_map_qid_static(struct dlb2_hw *hw,\n-\t\t\t\t\tstruct dlb2_ldb_port *p,\n-\t\t\t\t\tstruct dlb2_ldb_queue *q,\n-\t\t\t\t\tu8 priority)\n-{\n-\tenum dlb2_qid_map_state state;\n-\tu32 lsp_qid2cq2;\n-\tu32 lsp_qid2cq;\n-\tu32 atm_qid2cq;\n-\tu32 cq2priov;\n-\tu32 cq2qid;\n-\tint i;\n-\n-\t/* Look for a pending or already mapped slot, else an unused slot */\n-\tif (!dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAP_IN_PROG, q, &i) &&\n-\t    !dlb2_port_find_slot_queue(p, DLB2_QUEUE_MAPPED, q, &i) &&\n-\t    !dlb2_port_find_slot(p, DLB2_QUEUE_UNMAPPED, &i)) {\n-\t\tDLB2_HW_ERR(hw,\n-\t\t\t    \"[%s():%d] Internal error: CQ has no available QID mapping slots\\n\",\n-\t\t\t    __func__, __LINE__);\n-\t\treturn -EFAULT;\n-\t}\n-\n-\t/* Read-modify-write the priority and valid bit register */\n-\tcq2priov = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(hw->ver, p->id.phys_id));\n-\n-\tcq2priov |= (1 << (i + DLB2_LSP_CQ2PRIOV_V_LOC)) & DLB2_LSP_CQ2PRIOV_V;\n-\tcq2priov |= ((priority & 0x7) << (i + DLB2_LSP_CQ2PRIOV_PRIO_LOC) * 3)\n-\t\t    & DLB2_LSP_CQ2PRIOV_PRIO;\n-\n-\tDLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, p->id.phys_id), cq2priov);\n-\n-\t/* Read-modify-write the QID map register */\n-\tif (i < 4)\n-\t\tcq2qid = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID0(hw->ver,\n-\t\t\t\t\t\t\t  p->id.phys_id));\n-\telse\n-\t\tcq2qid = DLB2_CSR_RD(hw, DLB2_LSP_CQ2QID1(hw->ver,\n-\t\t\t\t\t\t\t  p->id.phys_id));\n-\n-\tif (i == 0 || i == 4)\n-\t\tDLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P0);\n-\tif (i == 1 || i == 5)\n-\t\tDLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P1);\n-\tif (i == 2 || i == 6)\n-\t\tDLB2_BITS_SET(cq2qid, q->id.phys_id, 
DLB2_LSP_CQ2QID0_QID_P2);\n-\tif (i == 3 || i == 7)\n-\t\tDLB2_BITS_SET(cq2qid, q->id.phys_id, DLB2_LSP_CQ2QID0_QID_P3);\n-\n-\tif (i < 4)\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_LSP_CQ2QID0(hw->ver, p->id.phys_id), cq2qid);\n-\telse\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_LSP_CQ2QID1(hw->ver, p->id.phys_id), cq2qid);\n-\n-\tatm_qid2cq = DLB2_CSR_RD(hw,\n-\t\t\t\t DLB2_ATM_QID2CQIDIX(q->id.phys_id,\n-\t\t\t\t\t\tp->id.phys_id / 4));\n-\n-\tlsp_qid2cq = DLB2_CSR_RD(hw,\n-\t\t\t\t DLB2_LSP_QID2CQIDIX(hw->ver, q->id.phys_id,\n-\t\t\t\t\t\tp->id.phys_id / 4));\n-\n-\tlsp_qid2cq2 = DLB2_CSR_RD(hw,\n-\t\t\t\t  DLB2_LSP_QID2CQIDIX2(hw->ver, q->id.phys_id,\n-\t\t\t\t\t\t  p->id.phys_id / 4));\n-\n-\tswitch (p->id.phys_id % 4) {\n-\tcase 0:\n-\t\tDLB2_BIT_SET(atm_qid2cq,\n-\t\t\t     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC));\n-\t\tDLB2_BIT_SET(lsp_qid2cq,\n-\t\t\t     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC));\n-\t\tDLB2_BIT_SET(lsp_qid2cq2,\n-\t\t\t     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC));\n-\t\tbreak;\n-\n-\tcase 1:\n-\t\tDLB2_BIT_SET(atm_qid2cq,\n-\t\t\t     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC));\n-\t\tDLB2_BIT_SET(lsp_qid2cq,\n-\t\t\t     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC));\n-\t\tDLB2_BIT_SET(lsp_qid2cq2,\n-\t\t\t     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC));\n-\t\tbreak;\n-\n-\tcase 2:\n-\t\tDLB2_BIT_SET(atm_qid2cq,\n-\t\t\t     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC));\n-\t\tDLB2_BIT_SET(lsp_qid2cq,\n-\t\t\t     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC));\n-\t\tDLB2_BIT_SET(lsp_qid2cq2,\n-\t\t\t     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC));\n-\t\tbreak;\n-\n-\tcase 3:\n-\t\tDLB2_BIT_SET(atm_qid2cq,\n-\t\t\t     1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC));\n-\t\tDLB2_BIT_SET(lsp_qid2cq,\n-\t\t\t     1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC));\n-\t\tDLB2_BIT_SET(lsp_qid2cq2,\n-\t\t\t     1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC));\n-\t\tbreak;\n-\t}\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_ATM_QID2CQIDIX(q->id.phys_id, p->id.phys_id / 4),\n-\t\t    atm_qid2cq);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_LSP_QID2CQIDIX(hw->ver,\n-\t\t\t\t\tq->id.phys_id, p->id.phys_id / 4),\n-\t\t    lsp_qid2cq);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_LSP_QID2CQIDIX2(hw->ver,\n-\t\t\t\t\t q->id.phys_id, p->id.phys_id / 4),\n-\t\t    lsp_qid2cq2);\n-\n-\tdlb2_flush_csr(hw);\n-\n-\tp->qid_map[i].qid = q->id.phys_id;\n-\tp->qid_map[i].priority = priority;\n-\n-\tstate = DLB2_QUEUE_MAPPED;\n-\n-\treturn dlb2_port_slot_state_transition(hw, p, q, i, state);\n-}\n-\n-static int dlb2_ldb_port_set_has_work_bits(struct dlb2_hw *hw,\n-\t\t\t\t\t   struct dlb2_ldb_port *port,\n-\t\t\t\t\t   struct dlb2_ldb_queue *queue,\n-\t\t\t\t\t   int slot)\n-{\n-\tu32 ctrl = 0;\n-\tu32 active;\n-\tu32 enq;\n-\n-\t/* Set the atomic scheduling haswork bit */\n-\tactive = DLB2_CSR_RD(hw, DLB2_LSP_QID_AQED_ACTIVE_CNT(hw->ver,\n-\t\t\t\t\t\t\t queue->id.phys_id));\n-\n-\tDLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);\n-\tDLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);\n-\tDLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);\n-\tDLB2_BITS_SET(ctrl,\n-\t\t      DLB2_BITS_GET(active,\n-\t\t\t\t    DLB2_LSP_QID_AQED_ACTIVE_CNT_COUNT) > 0,\n-\t\t\t\t    DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V);\n-\n-\t/* Set the non-atomic scheduling haswork bit */\n-\tDLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);\n-\n-\tenq = DLB2_CSR_RD(hw,\n-\t\t\t  DLB2_LSP_QID_LDB_ENQUEUE_CNT(hw->ver,\n-\t\t\t\t\t\t       queue->id.phys_id));\n-\n-\tmemset(&ctrl, 0, 
sizeof(ctrl));\n-\n-\tDLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);\n-\tDLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);\n-\tDLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_VALUE);\n-\tDLB2_BITS_SET(ctrl,\n-\t\t      DLB2_BITS_GET(enq,\n-\t\t\t\t    DLB2_LSP_QID_LDB_ENQUEUE_CNT_COUNT) > 0,\n-\t\t      DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V);\n-\n-\tDLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);\n-\n-\tdlb2_flush_csr(hw);\n-\n-\treturn 0;\n-}\n-\n-static void dlb2_ldb_port_clear_has_work_bits(struct dlb2_hw *hw,\n-\t\t\t\t\t      struct dlb2_ldb_port *port,\n-\t\t\t\t\t      u8 slot)\n-{\n-\tu32 ctrl = 0;\n-\n-\tDLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);\n-\tDLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);\n-\tDLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_RLIST_HASWORK_V);\n-\n-\tDLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);\n-\n-\tmemset(&ctrl, 0, sizeof(ctrl));\n-\n-\tDLB2_BITS_SET(ctrl, port->id.phys_id, DLB2_LSP_LDB_SCHED_CTRL_CQ);\n-\tDLB2_BITS_SET(ctrl, slot, DLB2_LSP_LDB_SCHED_CTRL_QIDIX);\n-\tDLB2_BIT_SET(ctrl, DLB2_LSP_LDB_SCHED_CTRL_NALB_HASWORK_V);\n-\n-\tDLB2_CSR_WR(hw, DLB2_LSP_LDB_SCHED_CTRL(hw->ver), ctrl);\n-\n-\tdlb2_flush_csr(hw);\n-}\n-\n-\n-static void dlb2_ldb_queue_set_inflight_limit(struct dlb2_hw *hw,\n-\t\t\t\t\t      struct dlb2_ldb_queue *queue)\n-{\n-\tu32 infl_lim = 0;\n-\n-\tDLB2_BITS_SET(infl_lim, queue->num_qid_inflights,\n-\t\t DLB2_LSP_QID_LDB_INFL_LIM_LIMIT);\n-\n-\tDLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue->id.phys_id),\n-\t\t    infl_lim);\n-}\n-\n-static void dlb2_ldb_queue_clear_inflight_limit(struct dlb2_hw *hw,\n-\t\t\t\t\t\tstruct dlb2_ldb_queue *queue)\n-{\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue->id.phys_id),\n-\t\t    DLB2_LSP_QID_LDB_INFL_LIM_RST);\n-}\n-\n-static int dlb2_ldb_port_finish_map_qid_dynamic(struct dlb2_hw *hw,\n-\t\t\t\t\t\tstruct dlb2_hw_domain *domain,\n-\t\t\t\t\t\tstruct dlb2_ldb_port *port,\n-\t\t\t\t\t\tstruct dlb2_ldb_queue *queue)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tenum dlb2_qid_map_state state;\n-\tint slot, ret, i;\n-\tu32 infl_cnt;\n-\tu8 prio;\n-\tRTE_SET_USED(iter);\n-\n-\tinfl_cnt = DLB2_CSR_RD(hw,\n-\t\t\t       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,\n-\t\t\t\t\t\t    queue->id.phys_id));\n-\n-\tif (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {\n-\t\tDLB2_HW_ERR(hw,\n-\t\t\t    \"[%s()] Internal error: non-zero QID inflight count\\n\",\n-\t\t\t    __func__);\n-\t\treturn -EINVAL;\n-\t}\n-\n-\t/*\n-\t * Static map the port and set its corresponding has_work bits.\n-\t */\n-\tstate = DLB2_QUEUE_MAP_IN_PROG;\n-\tif (!dlb2_port_find_slot_queue(port, state, queue, &slot))\n-\t\treturn -EINVAL;\n-\n-\tprio = port->qid_map[slot].priority;\n-\n-\t/*\n-\t * Update the CQ2QID, CQ2PRIOV, and QID2CQIDX registers, and\n-\t * the port's qid_map state.\n-\t */\n-\tret = dlb2_ldb_port_map_qid_static(hw, port, queue, prio);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\tret = dlb2_ldb_port_set_has_work_bits(hw, port, queue, slot);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\t/*\n-\t * Ensure IF_status(cq,qid) is 0 before enabling the port to\n-\t * prevent spurious schedules to cause the queue's inflight\n-\t * count to increase.\n-\t */\n-\tdlb2_ldb_port_clear_queue_if_status(hw, port, slot);\n-\n-\t/* Reset the queue's inflight status */\n-\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n-\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {\n-\t\t\tstate = DLB2_QUEUE_MAPPED;\n-\t\t\tif 
(!dlb2_port_find_slot_queue(port, state,\n-\t\t\t\t\t\t       queue, &slot))\n-\t\t\t\tcontinue;\n-\n-\t\t\tdlb2_ldb_port_set_queue_if_status(hw, port, slot);\n-\t\t}\n-\t}\n-\n-\tdlb2_ldb_queue_set_inflight_limit(hw, queue);\n-\n-\t/* Re-enable CQs mapped to this queue */\n-\tdlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);\n-\n-\t/* If this queue has other mappings pending, clear its inflight limit */\n-\tif (queue->num_pending_additions > 0)\n-\t\tdlb2_ldb_queue_clear_inflight_limit(hw, queue);\n-\n-\treturn 0;\n-}\n-\n-/**\n- * dlb2_ldb_port_map_qid_dynamic() - perform a \"dynamic\" QID->CQ mapping\n- * @hw: dlb2_hw handle for a particular device.\n- * @port: load-balanced port\n- * @queue: load-balanced queue\n- * @priority: queue servicing priority\n- *\n- * Returns 0 if the queue was mapped, 1 if the mapping is scheduled to occur\n- * at a later point, and <0 if an error occurred.\n- */\n-static int dlb2_ldb_port_map_qid_dynamic(struct dlb2_hw *hw,\n-\t\t\t\t\t struct dlb2_ldb_port *port,\n-\t\t\t\t\t struct dlb2_ldb_queue *queue,\n-\t\t\t\t\t u8 priority)\n-{\n-\tenum dlb2_qid_map_state state;\n-\tstruct dlb2_hw_domain *domain;\n-\tint domain_id, slot, ret;\n-\tu32 infl_cnt;\n-\n-\tdomain_id = port->domain_id.phys_id;\n-\n-\tdomain = dlb2_get_domain_from_id(hw, domain_id, false, 0);\n-\tif (domain == NULL) {\n-\t\tDLB2_HW_ERR(hw,\n-\t\t\t    \"[%s()] Internal error: unable to find domain %d\\n\",\n-\t\t\t    __func__, port->domain_id.phys_id);\n-\t\treturn -EINVAL;\n-\t}\n-\n-\t/*\n-\t * Set the QID inflight limit to 0 to prevent further scheduling of the\n-\t * queue.\n-\t */\n-\tDLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver,\n-\t\t\t\t\t\t  queue->id.phys_id), 0);\n-\n-\tif (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &slot)) {\n-\t\tDLB2_HW_ERR(hw,\n-\t\t\t    \"Internal error: No available unmapped slots\\n\");\n-\t\treturn -EFAULT;\n-\t}\n-\n-\tport->qid_map[slot].qid = queue->id.phys_id;\n-\tport->qid_map[slot].priority = priority;\n-\n-\tstate = DLB2_QUEUE_MAP_IN_PROG;\n-\tret = dlb2_port_slot_state_transition(hw, port, queue, slot, state);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\tinfl_cnt = DLB2_CSR_RD(hw,\n-\t\t\t       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,\n-\t\t\t\t\t\t    queue->id.phys_id));\n-\n-\tif (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {\n-\t\t/*\n-\t\t * The queue is owed completions so it's not safe to map it\n-\t\t * yet. Schedule a kernel thread to complete the mapping later,\n-\t\t * once software has completed all the queue's inflight events.\n-\t\t */\n-\t\tif (!os_worker_active(hw))\n-\t\t\tos_schedule_work(hw);\n-\n-\t\treturn 1;\n-\t}\n-\n-\t/*\n-\t * Disable the affected CQ, and the CQs already mapped to the QID,\n-\t * before reading the QID's inflight count a second time. 
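/*
 * Illustrative sketch, not driver code: the dynamic-map contract
 * implemented above. Force the queue's inflight limit to zero so nothing
 * new is scheduled, then either finish the map now or hand it to the
 * worker thread and report "deferred" by returning 1 (0 = mapped now,
 * <0 = error). Placeholder callbacks only.
 */
static int map_dynamic(void (*block_scheduling)(void),
		       unsigned int (*read_qid_inflights)(void),
		       void (*defer_to_worker)(void),
		       int (*finish_map)(void))
{
	block_scheduling();		/* INFL_LIM = 0: no further schedules */

	if (read_qid_inflights() > 0) {
		defer_to_worker();	/* software still owes completions */
		return 1;		/* mapping will complete later */
	}

	return finish_map();		/* 0 on success, <0 on error */
}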
There is an\n-\t * unlikely race in which the QID may schedule one more QE after we\n-\t * read an inflight count of 0, and disabling the CQs guarantees that\n-\t * the race will not occur after a re-read of the inflight count\n-\t * register.\n-\t */\n-\tif (port->enabled)\n-\t\tdlb2_ldb_port_cq_disable(hw, port);\n-\n-\tdlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);\n-\n-\tinfl_cnt = DLB2_CSR_RD(hw,\n-\t\t\t       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver,\n-\t\t\t\t\t\t    queue->id.phys_id));\n-\n-\tif (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {\n-\t\tif (port->enabled)\n-\t\t\tdlb2_ldb_port_cq_enable(hw, port);\n-\n-\t\tdlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);\n-\n-\t\t/*\n-\t\t * The queue is owed completions so it's not safe to map it\n-\t\t * yet. Schedule a kernel thread to complete the mapping later,\n-\t\t * once software has completed all the queue's inflight events.\n-\t\t */\n-\t\tif (!os_worker_active(hw))\n-\t\t\tos_schedule_work(hw);\n-\n-\t\treturn 1;\n-\t}\n-\n-\treturn dlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);\n-}\n-\n-static void dlb2_domain_finish_map_port(struct dlb2_hw *hw,\n-\t\t\t\t\tstruct dlb2_hw_domain *domain,\n-\t\t\t\t\tstruct dlb2_ldb_port *port)\n-{\n-\tint i;\n-\n-\tfor (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {\n-\t\tu32 infl_cnt;\n-\t\tstruct dlb2_ldb_queue *queue;\n-\t\tint qid;\n-\n-\t\tif (port->qid_map[i].state != DLB2_QUEUE_MAP_IN_PROG)\n-\t\t\tcontinue;\n-\n-\t\tqid = port->qid_map[i].qid;\n-\n-\t\tqueue = dlb2_get_ldb_queue_from_id(hw, qid, false, 0);\n-\n-\t\tif (queue == NULL) {\n-\t\t\tDLB2_HW_ERR(hw,\n-\t\t\t\t    \"[%s()] Internal error: unable to find queue %d\\n\",\n-\t\t\t\t    __func__, qid);\n-\t\t\tcontinue;\n-\t\t}\n-\n-\t\tinfl_cnt = DLB2_CSR_RD(hw,\n-\t\t\t\t       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver, qid));\n-\n-\t\tif (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT))\n-\t\t\tcontinue;\n-\n-\t\t/*\n-\t\t * Disable the affected CQ, and the CQs already mapped to the\n-\t\t * QID, before reading the QID's inflight count a second time.\n-\t\t * There is an unlikely race in which the QID may schedule one\n-\t\t * more QE after we read an inflight count of 0, and disabling\n-\t\t * the CQs guarantees that the race will not occur after a\n-\t\t * re-read of the inflight count register.\n-\t\t */\n-\t\tif (port->enabled)\n-\t\t\tdlb2_ldb_port_cq_disable(hw, port);\n-\n-\t\tdlb2_ldb_queue_disable_mapped_cqs(hw, domain, queue);\n-\n-\t\tinfl_cnt = DLB2_CSR_RD(hw,\n-\t\t\t\t       DLB2_LSP_QID_LDB_INFL_CNT(hw->ver, qid));\n-\n-\t\tif (DLB2_BITS_GET(infl_cnt, DLB2_LSP_QID_LDB_INFL_CNT_COUNT)) {\n-\t\t\tif (port->enabled)\n-\t\t\t\tdlb2_ldb_port_cq_enable(hw, port);\n-\n-\t\t\tdlb2_ldb_queue_enable_mapped_cqs(hw, domain, queue);\n-\n-\t\t\tcontinue;\n-\t\t}\n-\n-\t\tdlb2_ldb_port_finish_map_qid_dynamic(hw, domain, port, queue);\n-\t}\n-}\n-\n-static unsigned int\n-dlb2_domain_finish_map_qid_procedures(struct dlb2_hw *hw,\n-\t\t\t\t      struct dlb2_hw_domain *domain)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_ldb_port *port;\n-\tint i;\n-\tRTE_SET_USED(iter);\n-\n-\tif (!domain->configured || domain->num_pending_additions == 0)\n-\t\treturn 0;\n-\n-\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n-\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)\n-\t\t\tdlb2_domain_finish_map_port(hw, domain, port);\n-\t}\n-\n-\treturn domain->num_pending_additions;\n-}\n-\n-static int dlb2_ldb_port_unmap_qid(struct dlb2_hw *hw,\n-\t\t\t\t   struct dlb2_ldb_port 
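/*
 * Illustrative sketch, not driver code: the map paths above use the same
 * recheck-under-quiescence idiom twice. A zero inflight count read while
 * the CQs are still enabled may be stale, so the CQs are disabled and the
 * counter re-read; only a zero observed with scheduling stopped is
 * trusted. Placeholder callbacks only.
 */
#include <stdbool.h>

static bool quiesced_and_empty(unsigned int (*read_inflights)(void),
			       void (*disable_cqs)(void),
			       void (*enable_cqs)(void))
{
	if (read_inflights() > 0)
		return false;		/* clearly not drained yet */

	disable_cqs();			/* close the scheduling race */

	if (read_inflights() > 0) {
		enable_cqs();		/* roll back; retry via the worker */
		return false;
	}
	return true;			/* genuinely empty */
}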
*port,\n-\t\t\t\t   struct dlb2_ldb_queue *queue)\n-{\n-\tenum dlb2_qid_map_state mapped, in_progress, pending_map, unmapped;\n-\tu32 lsp_qid2cq2;\n-\tu32 lsp_qid2cq;\n-\tu32 atm_qid2cq;\n-\tu32 cq2priov;\n-\tu32 queue_id;\n-\tu32 port_id;\n-\tint i;\n-\n-\t/* Find the queue's slot */\n-\tmapped = DLB2_QUEUE_MAPPED;\n-\tin_progress = DLB2_QUEUE_UNMAP_IN_PROG;\n-\tpending_map = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;\n-\n-\tif (!dlb2_port_find_slot_queue(port, mapped, queue, &i) &&\n-\t    !dlb2_port_find_slot_queue(port, in_progress, queue, &i) &&\n-\t    !dlb2_port_find_slot_queue(port, pending_map, queue, &i)) {\n-\t\tDLB2_HW_ERR(hw,\n-\t\t\t    \"[%s():%d] Internal error: QID %d isn't mapped\\n\",\n-\t\t\t    __func__, __LINE__, queue->id.phys_id);\n-\t\treturn -EFAULT;\n-\t}\n-\n-\tport_id = port->id.phys_id;\n-\tqueue_id = queue->id.phys_id;\n-\n-\t/* Read-modify-write the priority and valid bit register */\n-\tcq2priov = DLB2_CSR_RD(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port_id));\n-\n-\tcq2priov &= ~(1 << (i + DLB2_LSP_CQ2PRIOV_V_LOC));\n-\n-\tDLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port_id), cq2priov);\n-\n-\tatm_qid2cq = DLB2_CSR_RD(hw, DLB2_ATM_QID2CQIDIX(queue_id,\n-\t\t\t\t\t\t\t port_id / 4));\n-\n-\tlsp_qid2cq = DLB2_CSR_RD(hw,\n-\t\t\t\t DLB2_LSP_QID2CQIDIX(hw->ver,\n-\t\t\t\t\t\tqueue_id, port_id / 4));\n-\n-\tlsp_qid2cq2 = DLB2_CSR_RD(hw,\n-\t\t\t\t  DLB2_LSP_QID2CQIDIX2(hw->ver,\n-\t\t\t\t\t\t  queue_id, port_id / 4));\n-\n-\tswitch (port_id % 4) {\n-\tcase 0:\n-\t\tatm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P0_LOC));\n-\t\tlsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P0_LOC));\n-\t\tlsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P0_LOC));\n-\t\tbreak;\n-\n-\tcase 1:\n-\t\tatm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P1_LOC));\n-\t\tlsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P1_LOC));\n-\t\tlsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P1_LOC));\n-\t\tbreak;\n-\n-\tcase 2:\n-\t\tatm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P2_LOC));\n-\t\tlsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P2_LOC));\n-\t\tlsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P2_LOC));\n-\t\tbreak;\n-\n-\tcase 3:\n-\t\tatm_qid2cq &= ~(1 << (i + DLB2_ATM_QID2CQIDIX_00_CQ_P3_LOC));\n-\t\tlsp_qid2cq &= ~(1 << (i + DLB2_LSP_QID2CQIDIX_00_CQ_P3_LOC));\n-\t\tlsp_qid2cq2 &= ~(1 << (i + DLB2_LSP_QID2CQIDIX2_00_CQ_P3_LOC));\n-\t\tbreak;\n-\t}\n-\n-\tDLB2_CSR_WR(hw, DLB2_ATM_QID2CQIDIX(queue_id, port_id / 4), atm_qid2cq);\n-\n-\tDLB2_CSR_WR(hw, DLB2_LSP_QID2CQIDIX(hw->ver, queue_id, port_id / 4),\n-\t\t    lsp_qid2cq);\n-\n-\tDLB2_CSR_WR(hw, DLB2_LSP_QID2CQIDIX2(hw->ver, queue_id, port_id / 4),\n-\t\t    lsp_qid2cq2);\n-\n-\tdlb2_flush_csr(hw);\n-\n-\tunmapped = DLB2_QUEUE_UNMAPPED;\n-\n-\treturn dlb2_port_slot_state_transition(hw, port, queue, i, unmapped);\n-}\n-\n-static int dlb2_ldb_port_map_qid(struct dlb2_hw *hw,\n-\t\t\t\t struct dlb2_hw_domain *domain,\n-\t\t\t\t struct dlb2_ldb_port *port,\n-\t\t\t\t struct dlb2_ldb_queue *queue,\n-\t\t\t\t u8 prio)\n-{\n-\tif (domain->started)\n-\t\treturn dlb2_ldb_port_map_qid_dynamic(hw, port, queue, prio);\n-\telse\n-\t\treturn dlb2_ldb_port_map_qid_static(hw, port, queue, prio);\n-}\n-\n-static void\n-dlb2_domain_finish_unmap_port_slot(struct dlb2_hw *hw,\n-\t\t\t\t   struct dlb2_hw_domain *domain,\n-\t\t\t\t   struct dlb2_ldb_port *port,\n-\t\t\t\t   int slot)\n-{\n-\tenum dlb2_qid_map_state state;\n-\tstruct dlb2_ldb_queue *queue;\n-\n-\tqueue = 
&hw->rsrcs.ldb_queues[port->qid_map[slot].qid];\n-\n-\tstate = port->qid_map[slot].state;\n-\n-\t/* Update the QID2CQIDX and CQ2QID vectors */\n-\tdlb2_ldb_port_unmap_qid(hw, port, queue);\n-\n-\t/*\n-\t * Ensure the QID will not be serviced by this {CQ, slot} by clearing\n-\t * the has_work bits\n-\t */\n-\tdlb2_ldb_port_clear_has_work_bits(hw, port, slot);\n-\n-\t/* Reset the {CQ, slot} to its default state */\n-\tdlb2_ldb_port_set_queue_if_status(hw, port, slot);\n-\n-\t/* Re-enable the CQ if it was not manually disabled by the user */\n-\tif (port->enabled)\n-\t\tdlb2_ldb_port_cq_enable(hw, port);\n-\n-\t/*\n-\t * If there is a mapping that is pending this slot's removal, perform\n-\t * the mapping now.\n-\t */\n-\tif (state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP) {\n-\t\tstruct dlb2_ldb_port_qid_map *map;\n-\t\tstruct dlb2_ldb_queue *map_queue;\n-\t\tu8 prio;\n-\n-\t\tmap = &port->qid_map[slot];\n-\n-\t\tmap->qid = map->pending_qid;\n-\t\tmap->priority = map->pending_priority;\n-\n-\t\tmap_queue = &hw->rsrcs.ldb_queues[map->qid];\n-\t\tprio = map->priority;\n-\n-\t\tdlb2_ldb_port_map_qid(hw, domain, port, map_queue, prio);\n-\t}\n-}\n-\n-\n-static bool dlb2_domain_finish_unmap_port(struct dlb2_hw *hw,\n-\t\t\t\t\t  struct dlb2_hw_domain *domain,\n-\t\t\t\t\t  struct dlb2_ldb_port *port)\n-{\n-\tu32 infl_cnt;\n-\tint i;\n-\n-\tif (port->num_pending_removals == 0)\n-\t\treturn false;\n-\n-\t/*\n-\t * The unmap requires all the CQ's outstanding inflights to be\n-\t * completed.\n-\t */\n-\tinfl_cnt = DLB2_CSR_RD(hw, DLB2_LSP_CQ_LDB_INFL_CNT(hw->ver,\n-\t\t\t\t\t\t       port->id.phys_id));\n-\tif (DLB2_BITS_GET(infl_cnt, DLB2_LSP_CQ_LDB_INFL_CNT_COUNT) > 0)\n-\t\treturn false;\n-\n-\tfor (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {\n-\t\tstruct dlb2_ldb_port_qid_map *map;\n-\n-\t\tmap = &port->qid_map[i];\n-\n-\t\tif (map->state != DLB2_QUEUE_UNMAP_IN_PROG &&\n-\t\t    map->state != DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP)\n-\t\t\tcontinue;\n-\n-\t\tdlb2_domain_finish_unmap_port_slot(hw, domain, port, i);\n-\t}\n-\n-\treturn true;\n-}\n-\n-static unsigned int\n-dlb2_domain_finish_unmap_qid_procedures(struct dlb2_hw *hw,\n-\t\t\t\t\tstruct dlb2_hw_domain *domain)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_ldb_port *port;\n-\tint i;\n-\tRTE_SET_USED(iter);\n-\n-\tif (!domain->configured || domain->num_pending_removals == 0)\n-\t\treturn 0;\n-\n-\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n-\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)\n-\t\t\tdlb2_domain_finish_unmap_port(hw, domain, port);\n-\t}\n-\n-\treturn domain->num_pending_removals;\n-}\n-\n-static void dlb2_domain_disable_ldb_cqs(struct dlb2_hw *hw,\n-\t\t\t\t\tstruct dlb2_hw_domain *domain)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_ldb_port *port;\n-\tint i;\n-\tRTE_SET_USED(iter);\n-\n-\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n-\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {\n-\t\t\tport->enabled = false;\n-\n-\t\t\tdlb2_ldb_port_cq_disable(hw, port);\n-\t\t}\n-\t}\n-}\n-\n-\n-static void dlb2_log_reset_domain(struct dlb2_hw *hw,\n-\t\t\t\t  u32 domain_id,\n-\t\t\t\t  bool vdev_req,\n-\t\t\t\t  unsigned int vdev_id)\n-{\n-\tDLB2_HW_DBG(hw, \"DLB2 reset domain:\\n\");\n-\tif (vdev_req)\n-\t\tDLB2_HW_DBG(hw, \"(Request from vdev %d)\\n\", vdev_id);\n-\tDLB2_HW_DBG(hw, \"\\tDomain ID: %d\\n\", domain_id);\n-}\n-\n-static void dlb2_domain_disable_dir_vpps(struct dlb2_hw *hw,\n-\t\t\t\t\t struct dlb2_hw_domain *domain,\n-\t\t\t\t\t unsigned int vdev_id)\n-{\n-\tstruct 
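/*
 * Illustrative sketch, not driver code: the pending-map replay in
 * dlb2_domain_finish_unmap_port_slot() above. A slot unmapped while a new
 * mapping was queued (UNMAP_IN_PROG_PENDING_MAP) promotes its pending
 * QID/priority into the live fields and re-maps immediately. Simplified
 * struct only.
 */
struct slot_map {
	int qid, priority;
	int pending_qid, pending_priority;
};

static void replay_pending(struct slot_map *map,
			   void (*map_qid)(int qid, int prio))
{
	map->qid = map->pending_qid;		/* promote queued request */
	map->priority = map->pending_priority;
	map_qid(map->qid, map->priority);	/* perform the deferred map */
}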
dlb2_list_entry *iter;\n-\tstruct dlb2_dir_pq_pair *port;\n-\tu32 vpp_v = 0;\n-\tRTE_SET_USED(iter);\n-\n-\tDLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {\n-\t\tunsigned int offs;\n-\t\tu32 virt_id;\n-\n-\t\tif (hw->virt_mode == DLB2_VIRT_SRIOV)\n-\t\t\tvirt_id = port->id.virt_id;\n-\t\telse\n-\t\t\tvirt_id = port->id.phys_id;\n-\n-\t\toffs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;\n-\n-\t\tDLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), vpp_v);\n-\t}\n-}\n-\n-static void dlb2_domain_disable_ldb_vpps(struct dlb2_hw *hw,\n-\t\t\t\t\t struct dlb2_hw_domain *domain,\n-\t\t\t\t\t unsigned int vdev_id)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_ldb_port *port;\n-\tu32 vpp_v = 0;\n-\tint i;\n-\tRTE_SET_USED(iter);\n-\n-\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n-\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {\n-\t\t\tunsigned int offs;\n-\t\t\tu32 virt_id;\n-\n-\t\t\tif (hw->virt_mode == DLB2_VIRT_SRIOV)\n-\t\t\t\tvirt_id = port->id.virt_id;\n-\t\t\telse\n-\t\t\t\tvirt_id = port->id.phys_id;\n-\n-\t\t\toffs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;\n-\n-\t\t\tDLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), vpp_v);\n-\t\t}\n-\t}\n-}\n-\n-static void\n-dlb2_domain_disable_ldb_port_interrupts(struct dlb2_hw *hw,\n-\t\t\t\t\tstruct dlb2_hw_domain *domain)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_ldb_port *port;\n-\tu32 int_en = 0;\n-\tu32 wd_en = 0;\n-\tint i;\n-\tRTE_SET_USED(iter);\n-\n-\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n-\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {\n-\t\t\tDLB2_CSR_WR(hw,\n-\t\t\t\t    DLB2_CHP_LDB_CQ_INT_ENB(hw->ver,\n-\t\t\t\t\t\t       port->id.phys_id),\n-\t\t\t\t    int_en);\n-\n-\t\t\tDLB2_CSR_WR(hw,\n-\t\t\t\t    DLB2_CHP_LDB_CQ_WD_ENB(hw->ver,\n-\t\t\t\t\t\t      port->id.phys_id),\n-\t\t\t\t    wd_en);\n-\t\t}\n-\t}\n-}\n-\n-static void\n-dlb2_domain_disable_dir_port_interrupts(struct dlb2_hw *hw,\n-\t\t\t\t\tstruct dlb2_hw_domain *domain)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_dir_pq_pair *port;\n-\tu32 int_en = 0;\n-\tu32 wd_en = 0;\n-\tRTE_SET_USED(iter);\n-\n-\tDLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_CHP_DIR_CQ_INT_ENB(hw->ver, port->id.phys_id),\n-\t\t\t    int_en);\n-\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_CHP_DIR_CQ_WD_ENB(hw->ver, port->id.phys_id),\n-\t\t\t    wd_en);\n-\t}\n-}\n-\n-static void\n-dlb2_domain_disable_ldb_queue_write_perms(struct dlb2_hw *hw,\n-\t\t\t\t\t  struct dlb2_hw_domain *domain)\n-{\n-\tint domain_offset = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES;\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_ldb_queue *queue;\n-\tRTE_SET_USED(iter);\n-\n-\tDLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {\n-\t\tint idx = domain_offset + queue->id.phys_id;\n-\n-\t\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(idx), 0);\n-\n-\t\tif (queue->id.vdev_owned) {\n-\t\t\tDLB2_CSR_WR(hw,\n-\t\t\t\t    DLB2_SYS_LDB_QID2VQID(queue->id.phys_id),\n-\t\t\t\t    0);\n-\n-\t\t\tidx = queue->id.vdev_id * DLB2_MAX_NUM_LDB_QUEUES +\n-\t\t\t\tqueue->id.virt_id;\n-\n-\t\t\tDLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(idx), 0);\n-\n-\t\t\tDLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(idx), 0);\n-\t\t}\n-\t}\n-}\n-\n-static void\n-dlb2_domain_disable_dir_queue_write_perms(struct dlb2_hw *hw,\n-\t\t\t\t\t  struct dlb2_hw_domain *domain)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_dir_pq_pair *queue;\n-\tunsigned long max_ports;\n-\tint domain_offset;\n-\tRTE_SET_USED(iter);\n-\n-\tmax_ports = 
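/*
 * Illustrative sketch, not driver code: the VPP-disable loops above
 * address a flattened per-vdev table. Each vdev owns a contiguous block
 * of virtual producer-port entries, so the register offset is a row-major
 * index of (vdev_id, virtual port ID), as in
 * "offs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id" above.
 */
static unsigned int vpp_index(unsigned int vdev_id,
			      unsigned int ports_per_vdev,
			      unsigned int virt_id)
{
	return vdev_id * ports_per_vdev + virt_id;	/* row-major flatten */
}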
DLB2_MAX_NUM_DIR_PORTS(hw->ver);\n-\n-\tdomain_offset = domain->id.phys_id * max_ports;\n-\n-\tDLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {\n-\t\tint idx = domain_offset + queue->id.phys_id;\n-\n-\t\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(idx), 0);\n-\n-\t\tif (queue->id.vdev_owned) {\n-\t\t\tidx = queue->id.vdev_id * max_ports + queue->id.virt_id;\n-\n-\t\t\tDLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(idx), 0);\n-\n-\t\t\tDLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(idx), 0);\n-\t\t}\n-\t}\n-}\n-\n-static void dlb2_domain_disable_ldb_seq_checks(struct dlb2_hw *hw,\n-\t\t\t\t\t       struct dlb2_hw_domain *domain)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_ldb_port *port;\n-\tu32 chk_en = 0;\n-\tint i;\n-\tRTE_SET_USED(iter);\n-\n-\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n-\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {\n-\t\t\tDLB2_CSR_WR(hw,\n-\t\t\t\t    DLB2_CHP_SN_CHK_ENBL(hw->ver,\n-\t\t\t\t\t\t\t port->id.phys_id),\n-\t\t\t\t    chk_en);\n-\t\t}\n-\t}\n-}\n-\n-static int dlb2_domain_wait_for_ldb_cqs_to_empty(struct dlb2_hw *hw,\n-\t\t\t\t\t\t struct dlb2_hw_domain *domain)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_ldb_port *port;\n-\tint i;\n-\tRTE_SET_USED(iter);\n-\n-\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n-\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {\n-\t\t\tint j;\n-\n-\t\t\tfor (j = 0; j < DLB2_MAX_CQ_COMP_CHECK_LOOPS; j++) {\n-\t\t\t\tif (dlb2_ldb_cq_inflight_count(hw, port) == 0)\n-\t\t\t\t\tbreak;\n-\t\t\t}\n-\n-\t\t\tif (j == DLB2_MAX_CQ_COMP_CHECK_LOOPS) {\n-\t\t\t\tDLB2_HW_ERR(hw,\n-\t\t\t\t\t    \"[%s()] Internal error: failed to flush load-balanced port %d's completions.\\n\",\n-\t\t\t\t\t    __func__, port->id.phys_id);\n-\t\t\t\treturn -EFAULT;\n-\t\t\t}\n-\t\t}\n-\t}\n-\n-\treturn 0;\n-}\n-\n-static void dlb2_domain_disable_dir_cqs(struct dlb2_hw *hw,\n-\t\t\t\t\tstruct dlb2_hw_domain *domain)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_dir_pq_pair *port;\n-\tRTE_SET_USED(iter);\n-\n-\tDLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {\n-\t\tport->enabled = false;\n-\n-\t\tdlb2_dir_port_cq_disable(hw, port);\n-\t}\n-}\n-\n-static void\n-dlb2_domain_disable_dir_producer_ports(struct dlb2_hw *hw,\n-\t\t\t\t       struct dlb2_hw_domain *domain)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_dir_pq_pair *port;\n-\tu32 pp_v = 0;\n-\tRTE_SET_USED(iter);\n-\n-\tDLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_SYS_DIR_PP_V(port->id.phys_id),\n-\t\t\t    pp_v);\n-\t}\n-}\n-\n-static void\n-dlb2_domain_disable_ldb_producer_ports(struct dlb2_hw *hw,\n-\t\t\t\t       struct dlb2_hw_domain *domain)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_ldb_port *port;\n-\tu32 pp_v = 0;\n-\tint i;\n-\tRTE_SET_USED(iter);\n-\n-\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n-\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {\n-\t\t\tDLB2_CSR_WR(hw,\n-\t\t\t\t    DLB2_SYS_LDB_PP_V(port->id.phys_id),\n-\t\t\t\t    pp_v);\n-\t\t}\n-\t}\n-}\n-\n-static int dlb2_domain_verify_reset_success(struct dlb2_hw *hw,\n-\t\t\t\t\t    struct dlb2_hw_domain *domain)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_dir_pq_pair *dir_port;\n-\tstruct dlb2_ldb_port *ldb_port;\n-\tstruct dlb2_ldb_queue *queue;\n-\tint i;\n-\tRTE_SET_USED(iter);\n-\n-\t/*\n-\t * Confirm that all the domain's queue's inflight counts and AQED\n-\t * active counts are 0.\n-\t */\n-\tDLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {\n-\t\tif 
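/*
 * Illustrative sketch, not driver code: the verification step beginning
 * above. A domain reset is only declared successful if every queue depth,
 * CQ inflight count and CQ token count reads back zero; the first
 * non-zero counter aborts with -EFAULT. Placeholder callbacks only.
 */
#include <errno.h>

static int verify_all_zero(unsigned int (*const counter[])(void), int n)
{
	int i;

	for (i = 0; i < n; i++) {
		if (counter[i]() != 0)
			return -EFAULT;	/* some traffic survived the drain */
	}
	return 0;
}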
(!dlb2_ldb_queue_is_empty(hw, queue)) {\n-\t\t\tDLB2_HW_ERR(hw,\n-\t\t\t\t    \"[%s()] Internal error: failed to empty ldb queue %d\\n\",\n-\t\t\t\t    __func__, queue->id.phys_id);\n-\t\t\treturn -EFAULT;\n-\t\t}\n-\t}\n-\n-\t/* Confirm that all the domain's CQs inflight and token counts are 0. */\n-\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n-\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], ldb_port, iter) {\n-\t\t\tif (dlb2_ldb_cq_inflight_count(hw, ldb_port) ||\n-\t\t\t    dlb2_ldb_cq_token_count(hw, ldb_port)) {\n-\t\t\t\tDLB2_HW_ERR(hw,\n-\t\t\t\t\t    \"[%s()] Internal error: failed to empty ldb port %d\\n\",\n-\t\t\t\t\t    __func__, ldb_port->id.phys_id);\n-\t\t\t\treturn -EFAULT;\n-\t\t\t}\n-\t\t}\n-\t}\n-\n-\tDLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_port, iter) {\n-\t\tif (!dlb2_dir_queue_is_empty(hw, dir_port)) {\n-\t\t\tDLB2_HW_ERR(hw,\n-\t\t\t\t    \"[%s()] Internal error: failed to empty dir queue %d\\n\",\n-\t\t\t\t    __func__, dir_port->id.phys_id);\n-\t\t\treturn -EFAULT;\n-\t\t}\n-\n-\t\tif (dlb2_dir_cq_token_count(hw, dir_port)) {\n-\t\t\tDLB2_HW_ERR(hw,\n-\t\t\t\t    \"[%s()] Internal error: failed to empty dir port %d\\n\",\n-\t\t\t\t    __func__, dir_port->id.phys_id);\n-\t\t\treturn -EFAULT;\n-\t\t}\n-\t}\n-\n-\treturn 0;\n-}\n-\n-static void __dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,\n-\t\t\t\t\t\t   struct dlb2_ldb_port *port)\n-{\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_SYS_LDB_PP2VAS(port->id.phys_id),\n-\t\t    DLB2_SYS_LDB_PP2VAS_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_CHP_LDB_CQ2VAS(hw->ver, port->id.phys_id),\n-\t\t    DLB2_CHP_LDB_CQ2VAS_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_SYS_LDB_PP2VDEV(port->id.phys_id),\n-\t\t    DLB2_SYS_LDB_PP2VDEV_RST);\n-\n-\tif (port->id.vdev_owned) {\n-\t\tunsigned int offs;\n-\t\tu32 virt_id;\n-\n-\t\t/*\n-\t\t * DLB uses producer port address bits 17:12 to determine the\n-\t\t * producer port ID. 
In Scalable IOV mode, PP accesses come\n-\t\t * through the PF MMIO window for the physical producer port,\n-\t\t * so for translation purposes the virtual and physical port\n-\t\t * IDs are equal.\n-\t\t */\n-\t\tif (hw->virt_mode == DLB2_VIRT_SRIOV)\n-\t\t\tvirt_id = port->id.virt_id;\n-\t\telse\n-\t\t\tvirt_id = port->id.phys_id;\n-\n-\t\toffs = port->id.vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;\n-\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_SYS_VF_LDB_VPP2PP(offs),\n-\t\t\t    DLB2_SYS_VF_LDB_VPP2PP_RST);\n-\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_SYS_VF_LDB_VPP_V(offs),\n-\t\t\t    DLB2_SYS_VF_LDB_VPP_V_RST);\n-\t}\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_SYS_LDB_PP_V(port->id.phys_id),\n-\t\t    DLB2_SYS_LDB_PP_V_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_LSP_CQ_LDB_DSBL(hw->ver, port->id.phys_id),\n-\t\t    DLB2_LSP_CQ_LDB_DSBL_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_CHP_LDB_CQ_DEPTH(hw->ver, port->id.phys_id),\n-\t\t    DLB2_CHP_LDB_CQ_DEPTH_RST);\n-\n-\tif (hw->ver != DLB2_HW_V2)\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_LSP_CFG_CQ_LDB_WU_LIMIT(port->id.phys_id),\n-\t\t\t    DLB2_LSP_CFG_CQ_LDB_WU_LIMIT_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_LSP_CQ_LDB_INFL_LIM(hw->ver, port->id.phys_id),\n-\t\t    DLB2_LSP_CQ_LDB_INFL_LIM_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_CHP_HIST_LIST_LIM(hw->ver, port->id.phys_id),\n-\t\t    DLB2_CHP_HIST_LIST_LIM_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_CHP_HIST_LIST_BASE(hw->ver, port->id.phys_id),\n-\t\t    DLB2_CHP_HIST_LIST_BASE_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_CHP_HIST_LIST_POP_PTR(hw->ver, port->id.phys_id),\n-\t\t    DLB2_CHP_HIST_LIST_POP_PTR_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_CHP_HIST_LIST_PUSH_PTR(hw->ver, port->id.phys_id),\n-\t\t    DLB2_CHP_HIST_LIST_PUSH_PTR_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH(hw->ver, port->id.phys_id),\n-\t\t    DLB2_CHP_LDB_CQ_INT_DEPTH_THRSH_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_CHP_LDB_CQ_TMR_THRSH(hw->ver, port->id.phys_id),\n-\t\t    DLB2_CHP_LDB_CQ_TMR_THRSH_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_CHP_LDB_CQ_INT_ENB(hw->ver, port->id.phys_id),\n-\t\t    DLB2_CHP_LDB_CQ_INT_ENB_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_SYS_LDB_CQ_ISR(port->id.phys_id),\n-\t\t    DLB2_SYS_LDB_CQ_ISR_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),\n-\t\t    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),\n-\t\t    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_CHP_LDB_CQ_WPTR(hw->ver, port->id.phys_id),\n-\t\t    DLB2_CHP_LDB_CQ_WPTR_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),\n-\t\t    DLB2_LSP_CQ_LDB_TKN_CNT_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id),\n-\t\t    DLB2_SYS_LDB_CQ_ADDR_L_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id),\n-\t\t    DLB2_SYS_LDB_CQ_ADDR_U_RST);\n-\n-\tif (hw->ver == DLB2_HW_V2)\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_SYS_LDB_CQ_AT(port->id.phys_id),\n-\t\t\t    DLB2_SYS_LDB_CQ_AT_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_SYS_LDB_CQ_PASID(hw->ver, port->id.phys_id),\n-\t\t    DLB2_SYS_LDB_CQ_PASID_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id),\n-\t\t    DLB2_SYS_LDB_CQ2VF_PF_RO_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_LSP_CQ_LDB_TOT_SCH_CNTL(hw->ver, port->id.phys_id),\n-\t\t    
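/*
 * Illustrative sketch, not driver code: the comment above notes that DLB
 * derives the producer port ID from bits 17:12 of the PP address, i.e.
 * one 4KiB page per producer port. The 6-bit mask below follows from that
 * field width.
 */
#include <stdint.h>

static unsigned int pp_id_from_addr(uint64_t pp_mmio_addr)
{
	return (unsigned int)((pp_mmio_addr >> 12) & 0x3F); /* bits 17:12 */
}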
DLB2_LSP_CQ_LDB_TOT_SCH_CNTL_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH(hw->ver, port->id.phys_id),\n-\t\t    DLB2_LSP_CQ_LDB_TOT_SCH_CNTH_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_LSP_CQ2QID0(hw->ver, port->id.phys_id),\n-\t\t    DLB2_LSP_CQ2QID0_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_LSP_CQ2QID1(hw->ver, port->id.phys_id),\n-\t\t    DLB2_LSP_CQ2QID1_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id),\n-\t\t    DLB2_LSP_CQ2PRIOV_RST);\n-}\n-\n-static void dlb2_domain_reset_ldb_port_registers(struct dlb2_hw *hw,\n-\t\t\t\t\t\t struct dlb2_hw_domain *domain)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_ldb_port *port;\n-\tint i;\n-\tRTE_SET_USED(iter);\n-\n-\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n-\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter)\n-\t\t\t__dlb2_domain_reset_ldb_port_registers(hw, port);\n-\t}\n-}\n-\n-static void\n-__dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,\n-\t\t\t\t       struct dlb2_dir_pq_pair *port)\n-{\n-\tu32 reg = 0;\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id),\n-\t\t    DLB2_CHP_DIR_CQ2VAS_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_LSP_CQ_DIR_DSBL(hw->ver, port->id.phys_id),\n-\t\t    DLB2_LSP_CQ_DIR_DSBL_RST);\n-\n-\tDLB2_BIT_SET(reg, DLB2_SYS_WB_DIR_CQ_STATE_CQ_OPT_CLR);\n-\n-\tif (hw->ver == DLB2_HW_V2)\n-\t\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_OPT_CLR, port->id.phys_id);\n-\telse\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_SYS_WB_DIR_CQ_STATE(port->id.phys_id), reg);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_CHP_DIR_CQ_DEPTH(hw->ver, port->id.phys_id),\n-\t\t    DLB2_CHP_DIR_CQ_DEPTH_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH(hw->ver, port->id.phys_id),\n-\t\t    DLB2_CHP_DIR_CQ_INT_DEPTH_THRSH_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_CHP_DIR_CQ_TMR_THRSH(hw->ver, port->id.phys_id),\n-\t\t    DLB2_CHP_DIR_CQ_TMR_THRSH_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_CHP_DIR_CQ_INT_ENB(hw->ver, port->id.phys_id),\n-\t\t    DLB2_CHP_DIR_CQ_INT_ENB_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_SYS_DIR_CQ_ISR(port->id.phys_id),\n-\t\t    DLB2_SYS_DIR_CQ_ISR_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(hw->ver,\n-\t\t\t\t\t\t      port->id.phys_id),\n-\t\t    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),\n-\t\t    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_CHP_DIR_CQ_WPTR(hw->ver, port->id.phys_id),\n-\t\t    DLB2_CHP_DIR_CQ_WPTR_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),\n-\t\t    DLB2_LSP_CQ_DIR_TKN_CNT_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id),\n-\t\t    DLB2_SYS_DIR_CQ_ADDR_L_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id),\n-\t\t    DLB2_SYS_DIR_CQ_ADDR_U_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_SYS_DIR_CQ_AT(port->id.phys_id),\n-\t\t    DLB2_SYS_DIR_CQ_AT_RST);\n-\n-\tif (hw->ver == DLB2_HW_V2)\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_SYS_DIR_CQ_AT(port->id.phys_id),\n-\t\t\t    DLB2_SYS_DIR_CQ_AT_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_SYS_DIR_CQ_PASID(hw->ver, port->id.phys_id),\n-\t\t    DLB2_SYS_DIR_CQ_PASID_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_SYS_DIR_CQ_FMT(port->id.phys_id),\n-\t\t    DLB2_SYS_DIR_CQ_FMT_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id),\n-\t\t    
DLB2_SYS_DIR_CQ2VF_PF_RO_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL(hw->ver, port->id.phys_id),\n-\t\t    DLB2_LSP_CQ_DIR_TOT_SCH_CNTL_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH(hw->ver, port->id.phys_id),\n-\t\t    DLB2_LSP_CQ_DIR_TOT_SCH_CNTH_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_SYS_DIR_PP2VAS(port->id.phys_id),\n-\t\t    DLB2_SYS_DIR_PP2VAS_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id),\n-\t\t    DLB2_CHP_DIR_CQ2VAS_RST);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_SYS_DIR_PP2VDEV(port->id.phys_id),\n-\t\t    DLB2_SYS_DIR_PP2VDEV_RST);\n-\n-\tif (port->id.vdev_owned) {\n-\t\tunsigned int offs;\n-\t\tu32 virt_id;\n-\n-\t\t/*\n-\t\t * DLB uses producer port address bits 17:12 to determine the\n-\t\t * producer port ID. In Scalable IOV mode, PP accesses come\n-\t\t * through the PF MMIO window for the physical producer port,\n-\t\t * so for translation purposes the virtual and physical port\n-\t\t * IDs are equal.\n-\t\t */\n-\t\tif (hw->virt_mode == DLB2_VIRT_SRIOV)\n-\t\t\tvirt_id = port->id.virt_id;\n-\t\telse\n-\t\t\tvirt_id = port->id.phys_id;\n-\n-\t\toffs = port->id.vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +\n-\t\t\tvirt_id;\n-\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_SYS_VF_DIR_VPP2PP(offs),\n-\t\t\t    DLB2_SYS_VF_DIR_VPP2PP_RST);\n-\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_SYS_VF_DIR_VPP_V(offs),\n-\t\t\t    DLB2_SYS_VF_DIR_VPP_V_RST);\n-\t}\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_SYS_DIR_PP_V(port->id.phys_id),\n-\t\t    DLB2_SYS_DIR_PP_V_RST);\n-}\n-\n-static void dlb2_domain_reset_dir_port_registers(struct dlb2_hw *hw,\n-\t\t\t\t\t\t struct dlb2_hw_domain *domain)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_dir_pq_pair *port;\n-\tRTE_SET_USED(iter);\n-\n-\tDLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)\n-\t\t__dlb2_domain_reset_dir_port_registers(hw, port);\n-}\n-\n-static void dlb2_domain_reset_ldb_queue_registers(struct dlb2_hw *hw,\n-\t\t\t\t\t\t  struct dlb2_hw_domain *domain)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_ldb_queue *queue;\n-\tRTE_SET_USED(iter);\n-\n-\tDLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {\n-\t\tunsigned int queue_id = queue->id.phys_id;\n-\t\tint i;\n-\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL(hw->ver, queue_id),\n-\t\t\t    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTL_RST);\n-\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH(hw->ver, queue_id),\n-\t\t\t    DLB2_LSP_QID_NALDB_TOT_ENQ_CNTH_RST);\n-\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL(hw->ver, queue_id),\n-\t\t\t    DLB2_LSP_QID_ATM_TOT_ENQ_CNTL_RST);\n-\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH(hw->ver, queue_id),\n-\t\t\t    DLB2_LSP_QID_ATM_TOT_ENQ_CNTH_RST);\n-\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_LSP_QID_NALDB_MAX_DEPTH(hw->ver, queue_id),\n-\t\t\t    DLB2_LSP_QID_NALDB_MAX_DEPTH_RST);\n-\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_LSP_QID_LDB_INFL_LIM(hw->ver, queue_id),\n-\t\t\t    DLB2_LSP_QID_LDB_INFL_LIM_RST);\n-\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_LSP_QID_AQED_ACTIVE_LIM(hw->ver, queue_id),\n-\t\t\t    DLB2_LSP_QID_AQED_ACTIVE_LIM_RST);\n-\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_LSP_QID_ATM_DEPTH_THRSH(hw->ver, queue_id),\n-\t\t\t    DLB2_LSP_QID_ATM_DEPTH_THRSH_RST);\n-\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_LSP_QID_NALDB_DEPTH_THRSH(hw->ver, queue_id),\n-\t\t\t    DLB2_LSP_QID_NALDB_DEPTH_THRSH_RST);\n-\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_SYS_LDB_QID_ITS(queue_id),\n-\t\t\t    
DLB2_SYS_LDB_QID_ITS_RST);\n-\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_CHP_ORD_QID_SN(hw->ver, queue_id),\n-\t\t\t    DLB2_CHP_ORD_QID_SN_RST);\n-\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_CHP_ORD_QID_SN_MAP(hw->ver, queue_id),\n-\t\t\t    DLB2_CHP_ORD_QID_SN_MAP_RST);\n-\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_SYS_LDB_QID_V(queue_id),\n-\t\t\t    DLB2_SYS_LDB_QID_V_RST);\n-\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_SYS_LDB_QID_CFG_V(queue_id),\n-\t\t\t    DLB2_SYS_LDB_QID_CFG_V_RST);\n-\n-\t\tif (queue->sn_cfg_valid) {\n-\t\t\tu32 offs[2];\n-\n-\t\t\toffs[0] = DLB2_RO_GRP_0_SLT_SHFT(hw->ver,\n-\t\t\t\t\t\t\t queue->sn_slot);\n-\t\t\toffs[1] = DLB2_RO_GRP_1_SLT_SHFT(hw->ver,\n-\t\t\t\t\t\t\t queue->sn_slot);\n-\n-\t\t\tDLB2_CSR_WR(hw,\n-\t\t\t\t    offs[queue->sn_group],\n-\t\t\t\t    DLB2_RO_GRP_0_SLT_SHFT_RST);\n-\t\t}\n-\n-\t\tfor (i = 0; i < DLB2_LSP_QID2CQIDIX_NUM; i++) {\n-\t\t\tDLB2_CSR_WR(hw,\n-\t\t\t\t    DLB2_LSP_QID2CQIDIX(hw->ver, queue_id, i),\n-\t\t\t\t    DLB2_LSP_QID2CQIDIX_00_RST);\n-\n-\t\t\tDLB2_CSR_WR(hw,\n-\t\t\t\t    DLB2_LSP_QID2CQIDIX2(hw->ver, queue_id, i),\n-\t\t\t\t    DLB2_LSP_QID2CQIDIX2_00_RST);\n-\n-\t\t\tDLB2_CSR_WR(hw,\n-\t\t\t\t    DLB2_ATM_QID2CQIDIX(queue_id, i),\n-\t\t\t\t    DLB2_ATM_QID2CQIDIX_00_RST);\n-\t\t}\n-\t}\n-}\n-\n-static void dlb2_domain_reset_dir_queue_registers(struct dlb2_hw *hw,\n-\t\t\t\t\t\t  struct dlb2_hw_domain *domain)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_dir_pq_pair *queue;\n-\tRTE_SET_USED(iter);\n-\n-\tDLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, queue, iter) {\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_LSP_QID_DIR_MAX_DEPTH(hw->ver,\n-\t\t\t\t\t\t       queue->id.phys_id),\n-\t\t\t    DLB2_LSP_QID_DIR_MAX_DEPTH_RST);\n-\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL(hw->ver,\n-\t\t\t\t\t\t\t  queue->id.phys_id),\n-\t\t\t    DLB2_LSP_QID_DIR_TOT_ENQ_CNTL_RST);\n-\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH(hw->ver,\n-\t\t\t\t\t\t\t  queue->id.phys_id),\n-\t\t\t    DLB2_LSP_QID_DIR_TOT_ENQ_CNTH_RST);\n-\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver,\n-\t\t\t\t\t\t\t queue->id.phys_id),\n-\t\t\t    DLB2_LSP_QID_DIR_DEPTH_THRSH_RST);\n-\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_SYS_DIR_QID_ITS(queue->id.phys_id),\n-\t\t\t    DLB2_SYS_DIR_QID_ITS_RST);\n-\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_SYS_DIR_QID_V(queue->id.phys_id),\n-\t\t\t    DLB2_SYS_DIR_QID_V_RST);\n-\t}\n-}\n-\n-\n-\n-\n-\n-static void dlb2_domain_reset_registers(struct dlb2_hw *hw,\n-\t\t\t\t\tstruct dlb2_hw_domain *domain)\n-{\n-\tdlb2_domain_reset_ldb_port_registers(hw, domain);\n-\n-\tdlb2_domain_reset_dir_port_registers(hw, domain);\n-\n-\tdlb2_domain_reset_ldb_queue_registers(hw, domain);\n-\n-\tdlb2_domain_reset_dir_queue_registers(hw, domain);\n-\n-\tif (hw->ver == DLB2_HW_V2) {\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_CHP_CFG_LDB_VAS_CRD(domain->id.phys_id),\n-\t\t\t    DLB2_CHP_CFG_LDB_VAS_CRD_RST);\n-\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_CHP_CFG_DIR_VAS_CRD(domain->id.phys_id),\n-\t\t\t    DLB2_CHP_CFG_DIR_VAS_CRD_RST);\n-\t} else\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_CHP_CFG_VAS_CRD(domain->id.phys_id),\n-\t\t\t    DLB2_CHP_CFG_VAS_CRD_RST);\n-}\n-\n-static int dlb2_domain_reset_software_state(struct dlb2_hw *hw,\n-\t\t\t\t\t    struct dlb2_hw_domain *domain)\n-{\n-\tstruct dlb2_dir_pq_pair *tmp_dir_port;\n-\tstruct dlb2_ldb_queue *tmp_ldb_queue;\n-\tstruct dlb2_ldb_port *tmp_ldb_port;\n-\tstruct dlb2_list_entry *iter1;\n-\tstruct dlb2_list_entry *iter2;\n-\tstruct dlb2_function_resources 
*rsrcs;\n-\tstruct dlb2_dir_pq_pair *dir_port;\n-\tstruct dlb2_ldb_queue *ldb_queue;\n-\tstruct dlb2_ldb_port *ldb_port;\n-\tstruct dlb2_list_head *list;\n-\tint ret, i;\n-\tRTE_SET_USED(tmp_dir_port);\n-\tRTE_SET_USED(tmp_ldb_queue);\n-\tRTE_SET_USED(tmp_ldb_port);\n-\tRTE_SET_USED(iter1);\n-\tRTE_SET_USED(iter2);\n-\n-\trsrcs = domain->parent_func;\n-\n-\t/* Move the domain's ldb queues to the function's avail list */\n-\tlist = &domain->used_ldb_queues;\n-\tDLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {\n-\t\tif (ldb_queue->sn_cfg_valid) {\n-\t\t\tstruct dlb2_sn_group *grp;\n-\n-\t\t\tgrp = &hw->rsrcs.sn_groups[ldb_queue->sn_group];\n-\n-\t\t\tdlb2_sn_group_free_slot(grp, ldb_queue->sn_slot);\n-\t\t\tldb_queue->sn_cfg_valid = false;\n-\t\t}\n-\n-\t\tldb_queue->owned = false;\n-\t\tldb_queue->num_mappings = 0;\n-\t\tldb_queue->num_pending_additions = 0;\n-\n-\t\tdlb2_list_del(&domain->used_ldb_queues,\n-\t\t\t      &ldb_queue->domain_list);\n-\t\tdlb2_list_add(&rsrcs->avail_ldb_queues,\n-\t\t\t      &ldb_queue->func_list);\n-\t\trsrcs->num_avail_ldb_queues++;\n-\t}\n-\n-\tlist = &domain->avail_ldb_queues;\n-\tDLB2_DOM_LIST_FOR_SAFE(*list, ldb_queue, tmp_ldb_queue, iter1, iter2) {\n-\t\tldb_queue->owned = false;\n-\n-\t\tdlb2_list_del(&domain->avail_ldb_queues,\n-\t\t\t      &ldb_queue->domain_list);\n-\t\tdlb2_list_add(&rsrcs->avail_ldb_queues,\n-\t\t\t      &ldb_queue->func_list);\n-\t\trsrcs->num_avail_ldb_queues++;\n-\t}\n-\n-\t/* Move the domain's ldb ports to the function's avail list */\n-\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n-\t\tlist = &domain->used_ldb_ports[i];\n-\t\tDLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,\n-\t\t\t\t       iter1, iter2) {\n-\t\t\tint j;\n-\n-\t\t\tldb_port->owned = false;\n-\t\t\tldb_port->configured = false;\n-\t\t\tldb_port->num_pending_removals = 0;\n-\t\t\tldb_port->num_mappings = 0;\n-\t\t\tldb_port->init_tkn_cnt = 0;\n-\t\t\tldb_port->cq_depth = 0;\n-\t\t\tfor (j = 0; j < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; j++)\n-\t\t\t\tldb_port->qid_map[j].state =\n-\t\t\t\t\tDLB2_QUEUE_UNMAPPED;\n-\n-\t\t\tdlb2_list_del(&domain->used_ldb_ports[i],\n-\t\t\t\t      &ldb_port->domain_list);\n-\t\t\tdlb2_list_add(&rsrcs->avail_ldb_ports[i],\n-\t\t\t\t      &ldb_port->func_list);\n-\t\t\trsrcs->num_avail_ldb_ports[i]++;\n-\t\t}\n-\n-\t\tlist = &domain->avail_ldb_ports[i];\n-\t\tDLB2_DOM_LIST_FOR_SAFE(*list, ldb_port, tmp_ldb_port,\n-\t\t\t\t       iter1, iter2) {\n-\t\t\tldb_port->owned = false;\n-\n-\t\t\tdlb2_list_del(&domain->avail_ldb_ports[i],\n-\t\t\t\t      &ldb_port->domain_list);\n-\t\t\tdlb2_list_add(&rsrcs->avail_ldb_ports[i],\n-\t\t\t\t      &ldb_port->func_list);\n-\t\t\trsrcs->num_avail_ldb_ports[i]++;\n-\t\t}\n-\t}\n-\n-\t/* Move the domain's dir ports to the function's avail list */\n-\tlist = &domain->used_dir_pq_pairs;\n-\tDLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {\n-\t\tdir_port->owned = false;\n-\t\tdir_port->port_configured = false;\n-\t\tdir_port->init_tkn_cnt = 0;\n-\n-\t\tdlb2_list_del(&domain->used_dir_pq_pairs,\n-\t\t\t      &dir_port->domain_list);\n-\n-\t\tdlb2_list_add(&rsrcs->avail_dir_pq_pairs,\n-\t\t\t      &dir_port->func_list);\n-\t\trsrcs->num_avail_dir_pq_pairs++;\n-\t}\n-\n-\tlist = &domain->avail_dir_pq_pairs;\n-\tDLB2_DOM_LIST_FOR_SAFE(*list, dir_port, tmp_dir_port, iter1, iter2) {\n-\t\tdir_port->owned = false;\n-\n-\t\tdlb2_list_del(&domain->avail_dir_pq_pairs,\n-\t\t\t      &dir_port->domain_list);\n-\n-\t\tdlb2_list_add(&rsrcs->avail_dir_pq_pairs,\n-\t\t\t      
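/*
 * Illustrative sketch, not driver code: each loop in
 * dlb2_domain_reset_software_state() above repeats one hand-back pattern.
 * Clear the object's ownership state, move it from the domain list to the
 * parent function's avail list, and bump the avail counter in lock-step.
 * Simplified singly linked list; the driver's dlb2_list also deletes the
 * object from its source list.
 */
#include <stdbool.h>

struct node { struct node *next; bool owned; };
struct avail_list { struct node *head; unsigned int count; };

static void hand_back(struct avail_list *avail, struct node *obj)
{
	obj->owned = false;		/* no longer owned by the domain */
	obj->next = avail->head;	/* list_add(avail, obj) */
	avail->head = obj;
	avail->count++;			/* counter must mirror the list */
}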
&dir_port->func_list);\n-\t\trsrcs->num_avail_dir_pq_pairs++;\n-\t}\n-\n-\t/* Return hist list entries to the function */\n-\tret = dlb2_bitmap_set_range(rsrcs->avail_hist_list_entries,\n-\t\t\t\t    domain->hist_list_entry_base,\n-\t\t\t\t    domain->total_hist_list_entries);\n-\tif (ret) {\n-\t\tDLB2_HW_ERR(hw,\n-\t\t\t    \"[%s()] Internal error: domain hist list base does not match the function's bitmap.\\n\",\n-\t\t\t    __func__);\n-\t\treturn ret;\n-\t}\n-\n-\tdomain->total_hist_list_entries = 0;\n-\tdomain->avail_hist_list_entries = 0;\n-\tdomain->hist_list_entry_base = 0;\n-\tdomain->hist_list_entry_offset = 0;\n-\n-\tif (hw->ver == DLB2_HW_V2_5) {\n-\t\trsrcs->num_avail_entries += domain->num_credits;\n-\t\tdomain->num_credits = 0;\n-\t} else {\n-\t\trsrcs->num_avail_qed_entries += domain->num_ldb_credits;\n-\t\tdomain->num_ldb_credits = 0;\n-\n-\t\trsrcs->num_avail_dqed_entries += domain->num_dir_credits;\n-\t\tdomain->num_dir_credits = 0;\n-\t}\n-\trsrcs->num_avail_aqed_entries += domain->num_avail_aqed_entries;\n-\trsrcs->num_avail_aqed_entries += domain->num_used_aqed_entries;\n-\tdomain->num_avail_aqed_entries = 0;\n-\tdomain->num_used_aqed_entries = 0;\n-\n-\tdomain->num_pending_removals = 0;\n-\tdomain->num_pending_additions = 0;\n-\tdomain->configured = false;\n-\tdomain->started = false;\n-\n-\t/*\n-\t * Move the domain out of the used_domains list and back to the\n-\t * function's avail_domains list.\n-\t */\n-\tdlb2_list_del(&rsrcs->used_domains, &domain->func_list);\n-\tdlb2_list_add(&rsrcs->avail_domains, &domain->func_list);\n-\trsrcs->num_avail_domains++;\n-\n-\treturn 0;\n-}\n-\n-static int dlb2_domain_drain_unmapped_queue(struct dlb2_hw *hw,\n-\t\t\t\t\t    struct dlb2_hw_domain *domain,\n-\t\t\t\t\t    struct dlb2_ldb_queue *queue)\n-{\n-\tstruct dlb2_ldb_port *port = NULL;\n-\tint ret, i;\n-\n-\t/* If a domain has LDB queues, it must have LDB ports */\n-\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n-\t\tport = DLB2_DOM_LIST_HEAD(domain->used_ldb_ports[i],\n-\t\t\t\t\t  typeof(*port));\n-\t\tif (port)\n-\t\t\tbreak;\n-\t}\n-\n-\tif (port == NULL) {\n-\t\tDLB2_HW_ERR(hw,\n-\t\t\t    \"[%s()] Internal error: No configured LDB ports\\n\",\n-\t\t\t    __func__);\n-\t\treturn -EFAULT;\n-\t}\n-\n-\t/* If necessary, free up a QID slot in this CQ */\n-\tif (port->num_mappings == DLB2_MAX_NUM_QIDS_PER_LDB_CQ) {\n-\t\tstruct dlb2_ldb_queue *mapped_queue;\n-\n-\t\tmapped_queue = &hw->rsrcs.ldb_queues[port->qid_map[0].qid];\n-\n-\t\tret = dlb2_ldb_port_unmap_qid(hw, port, mapped_queue);\n-\t\tif (ret)\n-\t\t\treturn ret;\n-\t}\n-\n-\tret = dlb2_ldb_port_map_qid_dynamic(hw, port, queue, 0);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\treturn dlb2_domain_drain_mapped_queues(hw, domain);\n-}\n-\n-static int dlb2_domain_drain_unmapped_queues(struct dlb2_hw *hw,\n-\t\t\t\t\t     struct dlb2_hw_domain *domain)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_ldb_queue *queue;\n-\tint ret;\n-\tRTE_SET_USED(iter);\n-\n-\t/* If the domain hasn't been started, there's no traffic to drain */\n-\tif (!domain->started)\n-\t\treturn 0;\n-\n-\t/*\n-\t * Pre-condition: the unattached queue must not have any outstanding\n-\t * completions. 
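(That is, no QE scheduled from this queue may still be awaiting a completion from any CQ.) 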
This is ensured by calling dlb2_domain_drain_ldb_cqs()\n-\t * prior to this in dlb2_domain_drain_mapped_queues().\n-\t */\n-\tDLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {\n-\t\tif (queue->num_mappings != 0 ||\n-\t\t    dlb2_ldb_queue_is_empty(hw, queue))\n-\t\t\tcontinue;\n-\n-\t\tret = dlb2_domain_drain_unmapped_queue(hw, domain, queue);\n-\t\tif (ret)\n-\t\t\treturn ret;\n-\t}\n-\n-\treturn 0;\n-}\n-\n-/**\n- * dlb2_reset_domain() - reset a scheduling domain\n- * @hw: dlb2_hw handle for a particular device.\n- * @domain_id: domain ID.\n- * @vdev_req: indicates whether this request came from a vdev.\n- * @vdev_id: If vdev_req is true, this contains the vdev's ID.\n- *\n- * This function resets and frees a DLB 2.0 scheduling domain and its associated\n- * resources.\n- *\n- * Pre-condition: the driver must ensure software has stopped sending QEs\n- * through this domain's producer ports before invoking this function, or\n- * undefined behavior will result.\n- *\n- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual\n- * device.\n- *\n- * Return:\n- * Returns 0 upon success, < 0 otherwise.\n- *\n- * EINVAL - Invalid domain ID, or the domain is not configured.\n- * EFAULT - Internal error. (Possibly caused if the pre-condition above\n- *\t    is not met.)\n- * ETIMEDOUT - Hardware component didn't reset in the expected time.\n- */\n-int dlb2_reset_domain(struct dlb2_hw *hw,\n-\t\t      u32 domain_id,\n-\t\t      bool vdev_req,\n-\t\t      unsigned int vdev_id)\n-{\n-\tstruct dlb2_hw_domain *domain;\n-\tint ret;\n-\n-\tdlb2_log_reset_domain(hw, domain_id, vdev_req, vdev_id);\n-\n-\tdomain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);\n-\n-\tif (domain == NULL || !domain->configured)\n-\t\treturn -EINVAL;\n-\n-\t/* Disable VPPs */\n-\tif (vdev_req) {\n-\t\tdlb2_domain_disable_dir_vpps(hw, domain, vdev_id);\n-\n-\t\tdlb2_domain_disable_ldb_vpps(hw, domain, vdev_id);\n-\t}\n-\n-\t/* Disable CQ interrupts */\n-\tdlb2_domain_disable_dir_port_interrupts(hw, domain);\n-\n-\tdlb2_domain_disable_ldb_port_interrupts(hw, domain);\n-\n-\t/*\n-\t * For each queue owned by this domain, disable its write permissions to\n-\t * cause any traffic sent to it to be dropped. Well-behaved software\n-\t * should not be sending QEs at this point.\n-\t */\n-\tdlb2_domain_disable_dir_queue_write_perms(hw, domain);\n-\n-\tdlb2_domain_disable_ldb_queue_write_perms(hw, domain);\n-\n-\t/* Turn off completion tracking on all the domain's PPs. */\n-\tdlb2_domain_disable_ldb_seq_checks(hw, domain);\n-\n-\t/*\n-\t * Disable the LDB CQs and drain them in order to complete the map and\n-\t * unmap procedures, which require zero CQ inflights and zero QID\n-\t * inflights respectively.\n-\t */\n-\tdlb2_domain_disable_ldb_cqs(hw, domain);\n-\n-\tdlb2_domain_drain_ldb_cqs(hw, domain, false);\n-\n-\tret = dlb2_domain_wait_for_ldb_cqs_to_empty(hw, domain);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\tret = dlb2_domain_finish_unmap_qid_procedures(hw, domain);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\tret = dlb2_domain_finish_map_qid_procedures(hw, domain);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\t/* Re-enable the CQs in order to drain the mapped queues. */\n-\tdlb2_domain_enable_ldb_cqs(hw, domain);\n-\n-\tret = dlb2_domain_drain_mapped_queues(hw, domain);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\tret = dlb2_domain_drain_unmapped_queues(hw, domain);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\t/* Done draining LDB QEs, so disable the CQs. 
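What remains is directed-traffic teardown and cleanup: drain the DIR queues, disable the DIR CQs and producer ports, verify the reset, then clear the hardware registers and software state. 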
*/\n-\tdlb2_domain_disable_ldb_cqs(hw, domain);\n-\n-\tdlb2_domain_drain_dir_queues(hw, domain);\n-\n-\t/* Done draining DIR QEs, so disable the CQs. */\n-\tdlb2_domain_disable_dir_cqs(hw, domain);\n-\n-\t/* Disable PPs */\n-\tdlb2_domain_disable_dir_producer_ports(hw, domain);\n-\n-\tdlb2_domain_disable_ldb_producer_ports(hw, domain);\n-\n-\tret = dlb2_domain_verify_reset_success(hw, domain);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\t/* Reset the QID and port state. */\n-\tdlb2_domain_reset_registers(hw, domain);\n-\n-\t/* Hardware reset complete. Reset the domain's software state */\n-\treturn dlb2_domain_reset_software_state(hw, domain);\n-}\n-\n-static void\n-dlb2_log_create_ldb_queue_args(struct dlb2_hw *hw,\n-\t\t\t       u32 domain_id,\n-\t\t\t       struct dlb2_create_ldb_queue_args *args,\n-\t\t\t       bool vdev_req,\n-\t\t\t       unsigned int vdev_id)\n-{\n-\tDLB2_HW_DBG(hw, \"DLB2 create load-balanced queue arguments:\\n\");\n-\tif (vdev_req)\n-\t\tDLB2_HW_DBG(hw, \"(Request from vdev %d)\\n\", vdev_id);\n-\tDLB2_HW_DBG(hw, \"\\tDomain ID:                  %d\\n\",\n-\t\t    domain_id);\n-\tDLB2_HW_DBG(hw, \"\\tNumber of sequence numbers: %d\\n\",\n-\t\t    args->num_sequence_numbers);\n-\tDLB2_HW_DBG(hw, \"\\tNumber of QID inflights:    %d\\n\",\n-\t\t    args->num_qid_inflights);\n-\tDLB2_HW_DBG(hw, \"\\tNumber of ATM inflights:    %d\\n\",\n-\t\t    args->num_atomic_inflights);\n-}\n-\n-static int\n-dlb2_ldb_queue_attach_to_sn_group(struct dlb2_hw *hw,\n-\t\t\t\t  struct dlb2_ldb_queue *queue,\n-\t\t\t\t  struct dlb2_create_ldb_queue_args *args)\n-{\n-\tint slot = -1;\n-\tint i;\n-\n-\tqueue->sn_cfg_valid = false;\n-\n-\tif (args->num_sequence_numbers == 0)\n-\t\treturn 0;\n-\n-\tfor (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {\n-\t\tstruct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];\n-\n-\t\tif (group->sequence_numbers_per_queue ==\n-\t\t    args->num_sequence_numbers &&\n-\t\t    !dlb2_sn_group_full(group)) {\n-\t\t\tslot = dlb2_sn_group_alloc_slot(group);\n-\t\t\tif (slot >= 0)\n-\t\t\t\tbreak;\n-\t\t}\n-\t}\n-\n-\tif (slot == -1) {\n-\t\tDLB2_HW_ERR(hw,\n-\t\t\t    \"[%s():%d] Internal error: no sequence number slots available\\n\",\n-\t\t\t    __func__, __LINE__);\n-\t\treturn -EFAULT;\n-\t}\n-\n-\tqueue->sn_cfg_valid = true;\n-\tqueue->sn_group = i;\n-\tqueue->sn_slot = slot;\n-\treturn 0;\n-}\n-\n-static int\n-dlb2_verify_create_ldb_queue_args(struct dlb2_hw *hw,\n-\t\t\t\t  u32 domain_id,\n-\t\t\t\t  struct dlb2_create_ldb_queue_args *args,\n-\t\t\t\t  struct dlb2_cmd_response *resp,\n-\t\t\t\t  bool vdev_req,\n-\t\t\t\t  unsigned int vdev_id,\n-\t\t\t\t  struct dlb2_hw_domain **out_domain,\n-\t\t\t\t  struct dlb2_ldb_queue **out_queue)\n-{\n-\tstruct dlb2_hw_domain *domain;\n-\tstruct dlb2_ldb_queue *queue;\n-\tint i;\n-\n-\tdomain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);\n-\n-\tif (!domain) {\n-\t\tresp->status = DLB2_ST_INVALID_DOMAIN_ID;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (!domain->configured) {\n-\t\tresp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (domain->started) {\n-\t\tresp->status = DLB2_ST_DOMAIN_STARTED;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tqueue = DLB2_DOM_LIST_HEAD(domain->avail_ldb_queues, typeof(*queue));\n-\tif (!queue) {\n-\t\tresp->status = DLB2_ST_LDB_QUEUES_UNAVAILABLE;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (args->num_sequence_numbers) {\n-\t\tfor (i = 0; i < DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {\n-\t\t\tstruct dlb2_sn_group *group = &hw->rsrcs.sn_groups[i];\n-\n-\t\t\tif 
(group->sequence_numbers_per_queue ==\n-\t\t\t    args->num_sequence_numbers &&\n-\t\t\t    !dlb2_sn_group_full(group))\n-\t\t\t\tbreak;\n-\t\t}\n-\n-\t\tif (i == DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS) {\n-\t\t\tresp->status = DLB2_ST_SEQUENCE_NUMBERS_UNAVAILABLE;\n-\t\t\treturn -EINVAL;\n-\t\t}\n-\t}\n-\n-\tif (args->num_qid_inflights > 4096) {\n-\t\tresp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\t/* Inflights must be <= number of sequence numbers if ordered */\n-\tif (args->num_sequence_numbers != 0 &&\n-\t    args->num_qid_inflights > args->num_sequence_numbers) {\n-\t\tresp->status = DLB2_ST_INVALID_QID_INFLIGHT_ALLOCATION;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (domain->num_avail_aqed_entries < args->num_atomic_inflights) {\n-\t\tresp->status = DLB2_ST_ATOMIC_INFLIGHTS_UNAVAILABLE;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (args->num_atomic_inflights &&\n-\t    args->lock_id_comp_level != 0 &&\n-\t    args->lock_id_comp_level != 64 &&\n-\t    args->lock_id_comp_level != 128 &&\n-\t    args->lock_id_comp_level != 256 &&\n-\t    args->lock_id_comp_level != 512 &&\n-\t    args->lock_id_comp_level != 1024 &&\n-\t    args->lock_id_comp_level != 2048 &&\n-\t    args->lock_id_comp_level != 4096 &&\n-\t    args->lock_id_comp_level != 65536) {\n-\t\tresp->status = DLB2_ST_INVALID_LOCK_ID_COMP_LEVEL;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\t*out_domain = domain;\n-\t*out_queue = queue;\n-\n-\treturn 0;\n-}\n-\n-static int\n-dlb2_ldb_queue_attach_resources(struct dlb2_hw *hw,\n-\t\t\t\tstruct dlb2_hw_domain *domain,\n-\t\t\t\tstruct dlb2_ldb_queue *queue,\n-\t\t\t\tstruct dlb2_create_ldb_queue_args *args)\n-{\n-\tint ret;\n-\tret = dlb2_ldb_queue_attach_to_sn_group(hw, queue, args);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\t/* Attach QID inflights */\n-\tqueue->num_qid_inflights = args->num_qid_inflights;\n-\n-\t/* Attach atomic inflights */\n-\tqueue->aqed_limit = args->num_atomic_inflights;\n-\n-\tdomain->num_avail_aqed_entries -= args->num_atomic_inflights;\n-\tdomain->num_used_aqed_entries += args->num_atomic_inflights;\n-\n-\treturn 0;\n-}\n-\n-static void dlb2_configure_ldb_queue(struct dlb2_hw *hw,\n-\t\t\t\t     struct dlb2_hw_domain *domain,\n-\t\t\t\t     struct dlb2_ldb_queue *queue,\n-\t\t\t\t     struct dlb2_create_ldb_queue_args *args,\n-\t\t\t\t     bool vdev_req,\n-\t\t\t\t     unsigned int vdev_id)\n-{\n-\tstruct dlb2_sn_group *sn_group;\n-\tunsigned int offs;\n-\tu32 reg = 0;\n-\tu32 alimit;\n-\n-\t/* QID write permissions are turned on when the domain is started */\n-\toffs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.phys_id;\n-\n-\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), reg);\n-\n-\t/*\n-\t * Unordered QIDs get 4K inflights, ordered get as many as the number\n-\t * of sequence numbers.\n-\t */\n-\tDLB2_BITS_SET(reg, args->num_qid_inflights,\n-\t\t      DLB2_LSP_QID_LDB_INFL_LIM_LIMIT);\n-\tDLB2_CSR_WR(hw, DLB2_LSP_QID_LDB_INFL_LIM(hw->ver,\n-\t\t\t\t\t\t  queue->id.phys_id), reg);\n-\n-\talimit = queue->aqed_limit;\n-\n-\tif (alimit > DLB2_MAX_NUM_AQED_ENTRIES)\n-\t\talimit = DLB2_MAX_NUM_AQED_ENTRIES;\n-\n-\treg = 0;\n-\tDLB2_BITS_SET(reg, alimit, DLB2_LSP_QID_AQED_ACTIVE_LIM_LIMIT);\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_LSP_QID_AQED_ACTIVE_LIM(hw->ver,\n-\t\t\t\t\t\t queue->id.phys_id), reg);\n-\n-\treg = 0;\n-\tswitch (args->lock_id_comp_level) {\n-\tcase 64:\n-\t\tDLB2_BITS_SET(reg, 1, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);\n-\t\tbreak;\n-\tcase 128:\n-\t\tDLB2_BITS_SET(reg, 2, 
DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);\n-\t\tbreak;\n-\tcase 256:\n-\t\tDLB2_BITS_SET(reg, 3, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);\n-\t\tbreak;\n-\tcase 512:\n-\t\tDLB2_BITS_SET(reg, 4, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);\n-\t\tbreak;\n-\tcase 1024:\n-\t\tDLB2_BITS_SET(reg, 5, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);\n-\t\tbreak;\n-\tcase 2048:\n-\t\tDLB2_BITS_SET(reg, 6, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);\n-\t\tbreak;\n-\tcase 4096:\n-\t\tDLB2_BITS_SET(reg, 7, DLB2_AQED_QID_HID_WIDTH_COMPRESS_CODE);\n-\t\tbreak;\n-\tdefault:\n-\t\t/* No compression by default */\n-\t\tbreak;\n-\t}\n-\n-\tDLB2_CSR_WR(hw, DLB2_AQED_QID_HID_WIDTH(queue->id.phys_id), reg);\n-\n-\treg = 0;\n-\t/* Don't timestamp QEs that pass through this queue */\n-\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_ITS(queue->id.phys_id), reg);\n-\n-\tDLB2_BITS_SET(reg, args->depth_threshold,\n-\t\t      DLB2_LSP_QID_ATM_DEPTH_THRSH_THRESH);\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_LSP_QID_ATM_DEPTH_THRSH(hw->ver,\n-\t\t\t\t\t\t queue->id.phys_id), reg);\n-\n-\treg = 0;\n-\tDLB2_BITS_SET(reg, args->depth_threshold,\n-\t\t      DLB2_LSP_QID_NALDB_DEPTH_THRSH_THRESH);\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_LSP_QID_NALDB_DEPTH_THRSH(hw->ver, queue->id.phys_id),\n-\t\t    reg);\n-\n-\t/*\n-\t * This register limits the number of inflight flows a queue can have\n-\t * at one time.  It has an upper bound of 2048, but can be\n-\t * over-subscribed. 512 is chosen so that a single queue does not use\n-\t * the entire atomic storage, but can use a substantial portion if\n-\t * needed.\n-\t */\n-\treg = 0;\n-\tDLB2_BITS_SET(reg, 512, DLB2_AQED_QID_FID_LIM_QID_FID_LIMIT);\n-\tDLB2_CSR_WR(hw, DLB2_AQED_QID_FID_LIM(queue->id.phys_id), reg);\n-\n-\t/* Configure SNs */\n-\treg = 0;\n-\tsn_group = &hw->rsrcs.sn_groups[queue->sn_group];\n-\tDLB2_BITS_SET(reg, sn_group->mode, DLB2_CHP_ORD_QID_SN_MAP_MODE);\n-\tDLB2_BITS_SET(reg, queue->sn_slot, DLB2_CHP_ORD_QID_SN_MAP_SLOT);\n-\tDLB2_BITS_SET(reg, sn_group->id, DLB2_CHP_ORD_QID_SN_MAP_GRP);\n-\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_CHP_ORD_QID_SN_MAP(hw->ver, queue->id.phys_id), reg);\n-\n-\treg = 0;\n-\tDLB2_BITS_SET(reg, (args->num_sequence_numbers != 0),\n-\t\t DLB2_SYS_LDB_QID_CFG_V_SN_CFG_V);\n-\tDLB2_BITS_SET(reg, (args->num_atomic_inflights != 0),\n-\t\t DLB2_SYS_LDB_QID_CFG_V_FID_CFG_V);\n-\n-\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_CFG_V(queue->id.phys_id), reg);\n-\n-\tif (vdev_req) {\n-\t\toffs = vdev_id * DLB2_MAX_NUM_LDB_QUEUES + queue->id.virt_id;\n-\n-\t\treg = 0;\n-\t\tDLB2_BIT_SET(reg, DLB2_SYS_VF_LDB_VQID_V_VQID_V);\n-\t\tDLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID_V(offs), reg);\n-\n-\t\treg = 0;\n-\t\tDLB2_BITS_SET(reg, queue->id.phys_id,\n-\t\t\t      DLB2_SYS_VF_LDB_VQID2QID_QID);\n-\t\tDLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VQID2QID(offs), reg);\n-\n-\t\treg = 0;\n-\t\tDLB2_BITS_SET(reg, queue->id.virt_id,\n-\t\t\t      DLB2_SYS_LDB_QID2VQID_VQID);\n-\t\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_QID2VQID(queue->id.phys_id), reg);\n-\t}\n-\n-\treg = 0;\n-\tDLB2_BIT_SET(reg, DLB2_SYS_LDB_QID_V_QID_V);\n-\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_QID_V(queue->id.phys_id), reg);\n-}\n-\n-/**\n- * dlb2_hw_create_ldb_queue() - create a load-balanced queue\n- * @hw: dlb2_hw handle for a particular device.\n- * @domain_id: domain ID.\n- * @args: queue creation arguments.\n- * @resp: response structure.\n- * @vdev_req: indicates whether this request came from a vdev.\n- * @vdev_id: If vdev_req is true, this contains the vdev's ID.\n- *\n- * This function creates a load-balanced queue.\n- *\n- * A vdev can be either an SR-IOV virtual function 
or a Scalable IOV virtual\n- * device.\n- *\n- * Return:\n- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is\n- * assigned a detailed error code from enum dlb2_error. If successful, resp->id\n- * contains the queue ID.\n- *\n- * resp->id contains a virtual ID if vdev_req is true.\n- *\n- * Errors:\n- * EINVAL - A requested resource is unavailable, the domain is not configured,\n- *\t    the domain has already been started, or the requested queue name is\n- *\t    already in use.\n- * EFAULT - Internal error (resp->status not set).\n- */\n-int dlb2_hw_create_ldb_queue(struct dlb2_hw *hw,\n-\t\t\t     u32 domain_id,\n-\t\t\t     struct dlb2_create_ldb_queue_args *args,\n-\t\t\t     struct dlb2_cmd_response *resp,\n-\t\t\t     bool vdev_req,\n-\t\t\t     unsigned int vdev_id)\n-{\n-\tstruct dlb2_hw_domain *domain;\n-\tstruct dlb2_ldb_queue *queue;\n-\tint ret;\n-\n-\tdlb2_log_create_ldb_queue_args(hw, domain_id, args, vdev_req, vdev_id);\n-\n-\t/*\n-\t * Verify that hardware resources are available before attempting to\n-\t * satisfy the request. This simplifies the error unwinding code.\n-\t */\n-\tret = dlb2_verify_create_ldb_queue_args(hw,\n-\t\t\t\t\t\tdomain_id,\n-\t\t\t\t\t\targs,\n-\t\t\t\t\t\tresp,\n-\t\t\t\t\t\tvdev_req,\n-\t\t\t\t\t\tvdev_id,\n-\t\t\t\t\t\t&domain,\n-\t\t\t\t\t\t&queue);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\tret = dlb2_ldb_queue_attach_resources(hw, domain, queue, args);\n-\n-\tif (ret) {\n-\t\tDLB2_HW_ERR(hw,\n-\t\t\t    \"[%s():%d] Internal error: failed to attach the ldb queue resources\\n\",\n-\t\t\t    __func__, __LINE__);\n-\t\treturn ret;\n-\t}\n-\n-\tdlb2_configure_ldb_queue(hw, domain, queue, args, vdev_req, vdev_id);\n-\n-\tqueue->num_mappings = 0;\n-\n-\tqueue->configured = true;\n-\n-\t/*\n-\t * Configuration succeeded, so move the resource from the 'avail' to\n-\t * the 'used' list.\n-\t */\n-\tdlb2_list_del(&domain->avail_ldb_queues, &queue->domain_list);\n-\n-\tdlb2_list_add(&domain->used_ldb_queues, &queue->domain_list);\n-\n-\tresp->status = 0;\n-\tresp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;\n-\n-\treturn 0;\n-}\n-\n-static void dlb2_ldb_port_configure_pp(struct dlb2_hw *hw,\n-\t\t\t\t       struct dlb2_hw_domain *domain,\n-\t\t\t\t       struct dlb2_ldb_port *port,\n-\t\t\t\t       bool vdev_req,\n-\t\t\t\t       unsigned int vdev_id)\n-{\n-\tu32 reg = 0;\n-\n-\tDLB2_BITS_SET(reg, domain->id.phys_id, DLB2_SYS_LDB_PP2VAS_VAS);\n-\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VAS(port->id.phys_id), reg);\n-\n-\tif (vdev_req) {\n-\t\tunsigned int offs;\n-\t\tu32 virt_id;\n-\n-\t\t/*\n-\t\t * DLB uses producer port address bits 17:12 to determine the\n-\t\t * producer port ID. 
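(i.e., each producer port owns a 4 KiB-aligned MMIO window, so port N's window begins at N << 12 and bits 17:12 of an access address recover N.) 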
In Scalable IOV mode, PP accesses come\n-\t\t * through the PF MMIO window for the physical producer port,\n-\t\t * so for translation purposes the virtual and physical port\n-\t\t * IDs are equal.\n-\t\t */\n-\t\tif (hw->virt_mode == DLB2_VIRT_SRIOV)\n-\t\t\tvirt_id = port->id.virt_id;\n-\t\telse\n-\t\t\tvirt_id = port->id.phys_id;\n-\n-\t\treg = 0;\n-\t\tDLB2_BITS_SET(reg, port->id.phys_id, DLB2_SYS_VF_LDB_VPP2PP_PP);\n-\t\toffs = vdev_id * DLB2_MAX_NUM_LDB_PORTS + virt_id;\n-\t\tDLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP2PP(offs), reg);\n-\n-\t\treg = 0;\n-\t\tDLB2_BITS_SET(reg, vdev_id, DLB2_SYS_LDB_PP2VDEV_VDEV);\n-\t\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_PP2VDEV(port->id.phys_id), reg);\n-\n-\t\treg = 0;\n-\t\tDLB2_BIT_SET(reg, DLB2_SYS_VF_LDB_VPP_V_VPP_V);\n-\t\tDLB2_CSR_WR(hw, DLB2_SYS_VF_LDB_VPP_V(offs), reg);\n-\t}\n-\n-\treg = 0;\n-\tDLB2_BIT_SET(reg, DLB2_SYS_LDB_PP_V_PP_V);\n-\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_PP_V(port->id.phys_id), reg);\n-}\n-\n-static int dlb2_ldb_port_configure_cq(struct dlb2_hw *hw,\n-\t\t\t\t      struct dlb2_hw_domain *domain,\n-\t\t\t\t      struct dlb2_ldb_port *port,\n-\t\t\t\t      uintptr_t cq_dma_base,\n-\t\t\t\t      struct dlb2_create_ldb_port_args *args,\n-\t\t\t\t      bool vdev_req,\n-\t\t\t\t      unsigned int vdev_id)\n-{\n-\tu32 hl_base = 0;\n-\tu32 reg = 0;\n-\tu32 ds = 0;\n-\n-\t/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */\n-\tDLB2_BITS_SET(reg, cq_dma_base >> 6, DLB2_SYS_LDB_CQ_ADDR_L_ADDR_L);\n-\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_L(port->id.phys_id), reg);\n-\n-\treg = cq_dma_base >> 32;\n-\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_ADDR_U(port->id.phys_id), reg);\n-\n-\t/*\n-\t * 'ro' == relaxed ordering. This setting allows DLB2 to write\n-\t * cache lines out-of-order (but QEs within a cache line are always\n-\t * updated in-order).\n-\t */\n-\treg = 0;\n-\tDLB2_BITS_SET(reg, vdev_id, DLB2_SYS_LDB_CQ2VF_PF_RO_VF);\n-\tDLB2_BITS_SET(reg,\n-\t\t !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV),\n-\t\t DLB2_SYS_LDB_CQ2VF_PF_RO_IS_PF);\n-\tDLB2_BIT_SET(reg, DLB2_SYS_LDB_CQ2VF_PF_RO_RO);\n-\n-\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ2VF_PF_RO(port->id.phys_id), reg);\n-\n-\tport->cq_depth = args->cq_depth;\n-\n-\tif (args->cq_depth <= 8) {\n-\t\tds = 1;\n-\t} else if (args->cq_depth == 16) {\n-\t\tds = 2;\n-\t} else if (args->cq_depth == 32) {\n-\t\tds = 3;\n-\t} else if (args->cq_depth == 64) {\n-\t\tds = 4;\n-\t} else if (args->cq_depth == 128) {\n-\t\tds = 5;\n-\t} else if (args->cq_depth == 256) {\n-\t\tds = 6;\n-\t} else if (args->cq_depth == 512) {\n-\t\tds = 7;\n-\t} else if (args->cq_depth == 1024) {\n-\t\tds = 8;\n-\t} else {\n-\t\tDLB2_HW_ERR(hw,\n-\t\t\t    \"[%s():%d] Internal error: invalid CQ depth\\n\",\n-\t\t\t    __func__, __LINE__);\n-\t\treturn -EFAULT;\n-\t}\n-\n-\treg = 0;\n-\tDLB2_BITS_SET(reg, ds,\n-\t\t      DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT);\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_CHP_LDB_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),\n-\t\t    reg);\n-\n-\t/*\n-\t * To support CQs with depth less than 8, program the token count\n-\t * register with a non-zero initial value. 
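For example, a depth-2 CQ starts with an initial count of 8 - 2 = 6 tokens. 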
Operations such as domain\n-\t * reset must take this initial value into account when quiescing the\n-\t * CQ.\n-\t */\n-\tport->init_tkn_cnt = 0;\n-\n-\tif (args->cq_depth < 8) {\n-\t\treg = 0;\n-\t\tport->init_tkn_cnt = 8 - args->cq_depth;\n-\n-\t\tDLB2_BITS_SET(reg,\n-\t\t\t      port->init_tkn_cnt,\n-\t\t\t      DLB2_LSP_CQ_LDB_TKN_CNT_TOKEN_COUNT);\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),\n-\t\t\t    reg);\n-\t} else {\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_LSP_CQ_LDB_TKN_CNT(hw->ver, port->id.phys_id),\n-\t\t\t    DLB2_LSP_CQ_LDB_TKN_CNT_RST);\n-\t}\n-\n-\treg = 0;\n-\tDLB2_BITS_SET(reg, ds,\n-\t\t      DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT_V2);\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_LSP_CQ_LDB_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),\n-\t\t    reg);\n-\n-\t/* Reset the CQ write pointer */\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_CHP_LDB_CQ_WPTR(hw->ver, port->id.phys_id),\n-\t\t    DLB2_CHP_LDB_CQ_WPTR_RST);\n-\n-\treg = 0;\n-\tDLB2_BITS_SET(reg,\n-\t\t      port->hist_list_entry_limit - 1,\n-\t\t      DLB2_CHP_HIST_LIST_LIM_LIMIT);\n-\tDLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_LIM(hw->ver, port->id.phys_id), reg);\n-\n-\tDLB2_BITS_SET(hl_base, port->hist_list_entry_base,\n-\t\t      DLB2_CHP_HIST_LIST_BASE_BASE);\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_CHP_HIST_LIST_BASE(hw->ver, port->id.phys_id),\n-\t\t    hl_base);\n-\n-\t/*\n-\t * The inflight limit sets a cap on the number of QEs for which this CQ\n-\t * can owe completions at one time.\n-\t */\n-\treg = 0;\n-\tDLB2_BITS_SET(reg, args->cq_history_list_size,\n-\t\t      DLB2_LSP_CQ_LDB_INFL_LIM_LIMIT);\n-\tDLB2_CSR_WR(hw, DLB2_LSP_CQ_LDB_INFL_LIM(hw->ver, port->id.phys_id),\n-\t\t    reg);\n-\n-\treg = 0;\n-\tDLB2_BITS_SET(reg, DLB2_BITS_GET(hl_base, DLB2_CHP_HIST_LIST_BASE_BASE),\n-\t\t      DLB2_CHP_HIST_LIST_PUSH_PTR_PUSH_PTR);\n-\tDLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_PUSH_PTR(hw->ver, port->id.phys_id),\n-\t\t    reg);\n-\n-\treg = 0;\n-\tDLB2_BITS_SET(reg, DLB2_BITS_GET(hl_base, DLB2_CHP_HIST_LIST_BASE_BASE),\n-\t\t      DLB2_CHP_HIST_LIST_POP_PTR_POP_PTR);\n-\tDLB2_CSR_WR(hw, DLB2_CHP_HIST_LIST_POP_PTR(hw->ver, port->id.phys_id),\n-\t\t    reg);\n-\n-\t/*\n-\t * Address translation (AT) settings: 0: untranslated, 2: translated\n-\t * (see ATS spec regarding Address Type field for more details)\n-\t */\n-\n-\tif (hw->ver == DLB2_HW_V2) {\n-\t\treg = 0;\n-\t\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_AT(port->id.phys_id), reg);\n-\t}\n-\n-\tif (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {\n-\t\treg = 0;\n-\t\tDLB2_BITS_SET(reg, hw->pasid[vdev_id],\n-\t\t\t      DLB2_SYS_LDB_CQ_PASID_PASID);\n-\t\tDLB2_BIT_SET(reg, DLB2_SYS_LDB_CQ_PASID_FMT2);\n-\t}\n-\n-\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_CQ_PASID(hw->ver, port->id.phys_id), reg);\n-\n-\treg = 0;\n-\tDLB2_BITS_SET(reg, domain->id.phys_id, DLB2_CHP_LDB_CQ2VAS_CQ2VAS);\n-\tDLB2_CSR_WR(hw, DLB2_CHP_LDB_CQ2VAS(hw->ver, port->id.phys_id), reg);\n-\n-\t/* Disable the port's QID mappings */\n-\treg = 0;\n-\tDLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id), reg);\n-\n-\treturn 0;\n-}\n-\n-static bool\n-dlb2_cq_depth_is_valid(u32 depth)\n-{\n-\tif (depth != 1 && depth != 2 &&\n-\t    depth != 4 && depth != 8 &&\n-\t    depth != 16 && depth != 32 &&\n-\t    depth != 64 && depth != 128 &&\n-\t    depth != 256 && depth != 512 &&\n-\t    depth != 1024)\n-\t\treturn false;\n-\n-\treturn true;\n-}\n-\n-static int dlb2_configure_ldb_port(struct dlb2_hw *hw,\n-\t\t\t\t   struct dlb2_hw_domain *domain,\n-\t\t\t\t   struct dlb2_ldb_port *port,\n-\t\t\t\t   
uintptr_t cq_dma_base,\n-\t\t\t\t   struct dlb2_create_ldb_port_args *args,\n-\t\t\t\t   bool vdev_req,\n-\t\t\t\t   unsigned int vdev_id)\n-{\n-\tint ret, i;\n-\n-\tport->hist_list_entry_base = domain->hist_list_entry_base +\n-\t\t\t\t     domain->hist_list_entry_offset;\n-\tport->hist_list_entry_limit = port->hist_list_entry_base +\n-\t\t\t\t      args->cq_history_list_size;\n-\n-\tdomain->hist_list_entry_offset += args->cq_history_list_size;\n-\tdomain->avail_hist_list_entries -= args->cq_history_list_size;\n-\n-\tret = dlb2_ldb_port_configure_cq(hw,\n-\t\t\t\t\t domain,\n-\t\t\t\t\t port,\n-\t\t\t\t\t cq_dma_base,\n-\t\t\t\t\t args,\n-\t\t\t\t\t vdev_req,\n-\t\t\t\t\t vdev_id);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\tdlb2_ldb_port_configure_pp(hw,\n-\t\t\t\t   domain,\n-\t\t\t\t   port,\n-\t\t\t\t   vdev_req,\n-\t\t\t\t   vdev_id);\n-\n-\tdlb2_ldb_port_cq_enable(hw, port);\n-\n-\tfor (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++)\n-\t\tport->qid_map[i].state = DLB2_QUEUE_UNMAPPED;\n-\tport->num_mappings = 0;\n-\n-\tport->enabled = true;\n-\n-\tport->configured = true;\n-\n-\treturn 0;\n-}\n-\n-static void\n-dlb2_log_create_ldb_port_args(struct dlb2_hw *hw,\n-\t\t\t      u32 domain_id,\n-\t\t\t      uintptr_t cq_dma_base,\n-\t\t\t      struct dlb2_create_ldb_port_args *args,\n-\t\t\t      bool vdev_req,\n-\t\t\t      unsigned int vdev_id)\n-{\n-\tDLB2_HW_DBG(hw, \"DLB2 create load-balanced port arguments:\\n\");\n-\tif (vdev_req)\n-\t\tDLB2_HW_DBG(hw, \"(Request from vdev %d)\\n\", vdev_id);\n-\tDLB2_HW_DBG(hw, \"\\tDomain ID:                 %d\\n\",\n-\t\t    domain_id);\n-\tDLB2_HW_DBG(hw, \"\\tCQ depth:                  %d\\n\",\n-\t\t    args->cq_depth);\n-\tDLB2_HW_DBG(hw, \"\\tCQ hist list size:         %d\\n\",\n-\t\t    args->cq_history_list_size);\n-\tDLB2_HW_DBG(hw, \"\\tCQ base address:           0x%lx\\n\",\n-\t\t    cq_dma_base);\n-\tDLB2_HW_DBG(hw, \"\\tCoS ID:                    %u\\n\", args->cos_id);\n-\tDLB2_HW_DBG(hw, \"\\tStrict CoS allocation:     %u\\n\",\n-\t\t    args->cos_strict);\n-}\n-\n-static int\n-dlb2_verify_create_ldb_port_args(struct dlb2_hw *hw,\n-\t\t\t\t u32 domain_id,\n-\t\t\t\t uintptr_t cq_dma_base,\n-\t\t\t\t struct dlb2_create_ldb_port_args *args,\n-\t\t\t\t struct dlb2_cmd_response *resp,\n-\t\t\t\t bool vdev_req,\n-\t\t\t\t unsigned int vdev_id,\n-\t\t\t\t struct dlb2_hw_domain **out_domain,\n-\t\t\t\t struct dlb2_ldb_port **out_port,\n-\t\t\t\t int *out_cos_id)\n-{\n-\tstruct dlb2_hw_domain *domain;\n-\tstruct dlb2_ldb_port *port;\n-\tint i, id;\n-\n-\tdomain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);\n-\n-\tif (!domain) {\n-\t\tresp->status = DLB2_ST_INVALID_DOMAIN_ID;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (!domain->configured) {\n-\t\tresp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (domain->started) {\n-\t\tresp->status = DLB2_ST_DOMAIN_STARTED;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (args->cos_id >= DLB2_NUM_COS_DOMAINS) {\n-\t\tresp->status = DLB2_ST_INVALID_COS_ID;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (args->cos_strict) {\n-\t\tid = args->cos_id;\n-\t\tport = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[id],\n-\t\t\t\t\t  typeof(*port));\n-\t} else {\n-\t\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n-\t\t\tid = (args->cos_id + i) % DLB2_NUM_COS_DOMAINS;\n-\n-\t\t\tport = DLB2_DOM_LIST_HEAD(domain->avail_ldb_ports[id],\n-\t\t\t\t\t\t  typeof(*port));\n-\t\t\tif (port)\n-\t\t\t\tbreak;\n-\t\t}\n-\t}\n-\n-\tif (!port) {\n-\t\tresp->status = DLB2_ST_LDB_PORTS_UNAVAILABLE;\n-\t\treturn 
-EINVAL;\n-\t}\n-\n-\t/* Check cache-line alignment */\n-\tif ((cq_dma_base & 0x3F) != 0) {\n-\t\tresp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (!dlb2_cq_depth_is_valid(args->cq_depth)) {\n-\t\tresp->status = DLB2_ST_INVALID_CQ_DEPTH;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\t/* The history list size must be >= 1 */\n-\tif (!args->cq_history_list_size) {\n-\t\tresp->status = DLB2_ST_INVALID_HIST_LIST_DEPTH;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (args->cq_history_list_size > domain->avail_hist_list_entries) {\n-\t\tresp->status = DLB2_ST_HIST_LIST_ENTRIES_UNAVAILABLE;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\t*out_domain = domain;\n-\t*out_port = port;\n-\t*out_cos_id = id;\n-\n-\treturn 0;\n-}\n-\n-/**\n- * dlb2_hw_create_ldb_port() - create a load-balanced port\n- * @hw: dlb2_hw handle for a particular device.\n- * @domain_id: domain ID.\n- * @args: port creation arguments.\n- * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.\n- * @resp: response structure.\n- * @vdev_req: indicates whether this request came from a vdev.\n- * @vdev_id: If vdev_req is true, this contains the vdev's ID.\n- *\n- * This function creates a load-balanced port.\n- *\n- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual\n- * device.\n- *\n- * Return:\n- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is\n- * assigned a detailed error code from enum dlb2_error. If successful, resp->id\n- * contains the port ID.\n- *\n- * resp->id contains a virtual ID if vdev_req is true.\n- *\n- * Errors:\n- * EINVAL - A requested resource is unavailable, a credit setting is invalid, a\n- *\t    pointer address is not properly aligned, the domain is not\n- *\t    configured, or the domain has already been started.\n- * EFAULT - Internal error (resp->status not set).\n- */\n-int dlb2_hw_create_ldb_port(struct dlb2_hw *hw,\n-\t\t\t    u32 domain_id,\n-\t\t\t    struct dlb2_create_ldb_port_args *args,\n-\t\t\t    uintptr_t cq_dma_base,\n-\t\t\t    struct dlb2_cmd_response *resp,\n-\t\t\t    bool vdev_req,\n-\t\t\t    unsigned int vdev_id)\n-{\n-\tstruct dlb2_hw_domain *domain;\n-\tstruct dlb2_ldb_port *port;\n-\tint ret, cos_id;\n-\n-\tdlb2_log_create_ldb_port_args(hw,\n-\t\t\t\t      domain_id,\n-\t\t\t\t      cq_dma_base,\n-\t\t\t\t      args,\n-\t\t\t\t      vdev_req,\n-\t\t\t\t      vdev_id);\n-\n-\t/*\n-\t * Verify that hardware resources are available before attempting to\n-\t * satisfy the request. This simplifies the error unwinding code.\n-\t */\n-\tret = dlb2_verify_create_ldb_port_args(hw,\n-\t\t\t\t\t       domain_id,\n-\t\t\t\t\t       cq_dma_base,\n-\t\t\t\t\t       args,\n-\t\t\t\t\t       resp,\n-\t\t\t\t\t       vdev_req,\n-\t\t\t\t\t       vdev_id,\n-\t\t\t\t\t       &domain,\n-\t\t\t\t\t       &port,\n-\t\t\t\t\t       &cos_id);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\tret = dlb2_configure_ldb_port(hw,\n-\t\t\t\t      domain,\n-\t\t\t\t      port,\n-\t\t\t\t      cq_dma_base,\n-\t\t\t\t      args,\n-\t\t\t\t      vdev_req,\n-\t\t\t\t      vdev_id);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\t/*\n-\t * Configuration succeeded, so move the resource from the 'avail' to\n-\t * the 'used' list.\n-\t */\n-\tdlb2_list_del(&domain->avail_ldb_ports[cos_id], &port->domain_list);\n-\n-\tdlb2_list_add(&domain->used_ldb_ports[cos_id], &port->domain_list);\n-\n-\tresp->status = 0;\n-\tresp->id = (vdev_req) ? 
port->id.virt_id : port->id.phys_id;\n-\n-\treturn 0;\n-}\n-\n-static void\n-dlb2_log_create_dir_port_args(struct dlb2_hw *hw,\n-\t\t\t      u32 domain_id,\n-\t\t\t      uintptr_t cq_dma_base,\n-\t\t\t      struct dlb2_create_dir_port_args *args,\n-\t\t\t      bool vdev_req,\n-\t\t\t      unsigned int vdev_id)\n-{\n-\tDLB2_HW_DBG(hw, \"DLB2 create directed port arguments:\\n\");\n-\tif (vdev_req)\n-\t\tDLB2_HW_DBG(hw, \"(Request from vdev %d)\\n\", vdev_id);\n-\tDLB2_HW_DBG(hw, \"\\tDomain ID:                 %d\\n\",\n-\t\t    domain_id);\n-\tDLB2_HW_DBG(hw, \"\\tCQ depth:                  %d\\n\",\n-\t\t    args->cq_depth);\n-\tDLB2_HW_DBG(hw, \"\\tCQ base address:           0x%lx\\n\",\n-\t\t    cq_dma_base);\n-}\n-\n-static struct dlb2_dir_pq_pair *\n-dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,\n-\t\t\t    u32 id,\n-\t\t\t    bool vdev_req,\n-\t\t\t    struct dlb2_hw_domain *domain)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_dir_pq_pair *port;\n-\tRTE_SET_USED(iter);\n-\n-\tif (id >= DLB2_MAX_NUM_DIR_PORTS(hw->ver))\n-\t\treturn NULL;\n-\n-\tDLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter) {\n-\t\tif ((!vdev_req && port->id.phys_id == id) ||\n-\t\t    (vdev_req && port->id.virt_id == id))\n-\t\t\treturn port;\n-\t}\n-\n-\treturn NULL;\n-}\n-\n-static int\n-dlb2_verify_create_dir_port_args(struct dlb2_hw *hw,\n-\t\t\t\t u32 domain_id,\n-\t\t\t\t uintptr_t cq_dma_base,\n-\t\t\t\t struct dlb2_create_dir_port_args *args,\n-\t\t\t\t struct dlb2_cmd_response *resp,\n-\t\t\t\t bool vdev_req,\n-\t\t\t\t unsigned int vdev_id,\n-\t\t\t\t struct dlb2_hw_domain **out_domain,\n-\t\t\t\t struct dlb2_dir_pq_pair **out_port)\n-{\n-\tstruct dlb2_hw_domain *domain;\n-\tstruct dlb2_dir_pq_pair *pq;\n-\n-\tdomain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);\n-\n-\tif (!domain) {\n-\t\tresp->status = DLB2_ST_INVALID_DOMAIN_ID;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (!domain->configured) {\n-\t\tresp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (domain->started) {\n-\t\tresp->status = DLB2_ST_DOMAIN_STARTED;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (args->queue_id != -1) {\n-\t\t/*\n-\t\t * If the user claims the queue is already configured, validate\n-\t\t * the queue ID, its domain, and whether the queue is\n-\t\t * configured.\n-\t\t */\n-\t\tpq = dlb2_get_domain_used_dir_pq(hw,\n-\t\t\t\t\t\t args->queue_id,\n-\t\t\t\t\t\t vdev_req,\n-\t\t\t\t\t\t domain);\n-\n-\t\tif (!pq || pq->domain_id.phys_id != domain->id.phys_id ||\n-\t\t    !pq->queue_configured) {\n-\t\t\tresp->status = DLB2_ST_INVALID_DIR_QUEUE_ID;\n-\t\t\treturn -EINVAL;\n-\t\t}\n-\t} else {\n-\t\t/*\n-\t\t * If the port's queue is not configured, validate that a free\n-\t\t * port-queue pair is available.\n-\t\t */\n-\t\tpq = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,\n-\t\t\t\t\ttypeof(*pq));\n-\t\tif (!pq) {\n-\t\t\tresp->status = DLB2_ST_DIR_PORTS_UNAVAILABLE;\n-\t\t\treturn -EINVAL;\n-\t\t}\n-\t}\n-\n-\t/* Check cache-line alignment */\n-\tif ((cq_dma_base & 0x3F) != 0) {\n-\t\tresp->status = DLB2_ST_INVALID_CQ_VIRT_ADDR;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (!dlb2_cq_depth_is_valid(args->cq_depth)) {\n-\t\tresp->status = DLB2_ST_INVALID_CQ_DEPTH;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\t*out_domain = domain;\n-\t*out_port = pq;\n-\n-\treturn 0;\n-}\n-\n-static void dlb2_dir_port_configure_pp(struct dlb2_hw *hw,\n-\t\t\t\t       struct dlb2_hw_domain *domain,\n-\t\t\t\t       struct dlb2_dir_pq_pair *port,\n-\t\t\t\t       bool vdev_req,\n-\t\t\t\t       unsigned int 
vdev_id)\n-{\n-\tu32 reg = 0;\n-\n-\tDLB2_BITS_SET(reg, domain->id.phys_id, DLB2_SYS_DIR_PP2VAS_VAS);\n-\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VAS(port->id.phys_id), reg);\n-\n-\tif (vdev_req) {\n-\t\tunsigned int offs;\n-\t\tu32 virt_id;\n-\n-\t\t/*\n-\t\t * DLB uses producer port address bits 17:12 to determine the\n-\t\t * producer port ID. In Scalable IOV mode, PP accesses come\n-\t\t * through the PF MMIO window for the physical producer port,\n-\t\t * so for translation purposes the virtual and physical port\n-\t\t * IDs are equal.\n-\t\t */\n-\t\tif (hw->virt_mode == DLB2_VIRT_SRIOV)\n-\t\t\tvirt_id = port->id.virt_id;\n-\t\telse\n-\t\t\tvirt_id = port->id.phys_id;\n-\n-\t\treg = 0;\n-\t\tDLB2_BITS_SET(reg, port->id.phys_id, DLB2_SYS_VF_DIR_VPP2PP_PP);\n-\t\toffs = vdev_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) + virt_id;\n-\t\tDLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP2PP(offs), reg);\n-\n-\t\treg = 0;\n-\t\tDLB2_BITS_SET(reg, vdev_id, DLB2_SYS_DIR_PP2VDEV_VDEV);\n-\t\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_PP2VDEV(port->id.phys_id), reg);\n-\n-\t\treg = 0;\n-\t\tDLB2_BIT_SET(reg, DLB2_SYS_VF_DIR_VPP_V_VPP_V);\n-\t\tDLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VPP_V(offs), reg);\n-\t}\n-\n-\treg = 0;\n-\tDLB2_BIT_SET(reg, DLB2_SYS_DIR_PP_V_PP_V);\n-\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_PP_V(port->id.phys_id), reg);\n-}\n-\n-static int dlb2_dir_port_configure_cq(struct dlb2_hw *hw,\n-\t\t\t\t      struct dlb2_hw_domain *domain,\n-\t\t\t\t      struct dlb2_dir_pq_pair *port,\n-\t\t\t\t      uintptr_t cq_dma_base,\n-\t\t\t\t      struct dlb2_create_dir_port_args *args,\n-\t\t\t\t      bool vdev_req,\n-\t\t\t\t      unsigned int vdev_id)\n-{\n-\tu32 reg = 0;\n-\tu32 ds = 0;\n-\n-\t/* The CQ address is 64B-aligned, and the DLB only wants bits [63:6] */\n-\tDLB2_BITS_SET(reg, cq_dma_base >> 6, DLB2_SYS_DIR_CQ_ADDR_L_ADDR_L);\n-\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_L(port->id.phys_id), reg);\n-\n-\treg = cq_dma_base >> 32;\n-\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_ADDR_U(port->id.phys_id), reg);\n-\n-\t/*\n-\t * 'ro' == relaxed ordering. This setting allows DLB2 to write\n-\t * cache lines out-of-order (but QEs within a cache line are always\n-\t * updated in-order).\n-\t */\n-\treg = 0;\n-\tDLB2_BITS_SET(reg, vdev_id, DLB2_SYS_DIR_CQ2VF_PF_RO_VF);\n-\tDLB2_BITS_SET(reg, !vdev_req && (hw->virt_mode != DLB2_VIRT_SIOV),\n-\t\t DLB2_SYS_DIR_CQ2VF_PF_RO_IS_PF);\n-\tDLB2_BIT_SET(reg, DLB2_SYS_DIR_CQ2VF_PF_RO_RO);\n-\n-\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ2VF_PF_RO(port->id.phys_id), reg);\n-\n-\tif (args->cq_depth <= 8) {\n-\t\tds = 1;\n-\t} else if (args->cq_depth == 16) {\n-\t\tds = 2;\n-\t} else if (args->cq_depth == 32) {\n-\t\tds = 3;\n-\t} else if (args->cq_depth == 64) {\n-\t\tds = 4;\n-\t} else if (args->cq_depth == 128) {\n-\t\tds = 5;\n-\t} else if (args->cq_depth == 256) {\n-\t\tds = 6;\n-\t} else if (args->cq_depth == 512) {\n-\t\tds = 7;\n-\t} else if (args->cq_depth == 1024) {\n-\t\tds = 8;\n-\t} else {\n-\t\tDLB2_HW_ERR(hw,\n-\t\t\t    \"[%s():%d] Internal error: invalid CQ depth\\n\",\n-\t\t\t    __func__, __LINE__);\n-\t\treturn -EFAULT;\n-\t}\n-\n-\treg = 0;\n-\tDLB2_BITS_SET(reg, ds,\n-\t\t      DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL_TOKEN_DEPTH_SELECT);\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_CHP_DIR_CQ_TKN_DEPTH_SEL(hw->ver, port->id.phys_id),\n-\t\t    reg);\n-\n-\t/*\n-\t * To support CQs with depth less than 8, program the token count\n-\t * register with a non-zero initial value. 
Operations such as domain\n-\t * reset must take this initial value into account when quiescing the\n-\t * CQ.\n-\t */\n-\tport->init_tkn_cnt = 0;\n-\n-\tif (args->cq_depth < 8) {\n-\t\treg = 0;\n-\t\tport->init_tkn_cnt = 8 - args->cq_depth;\n-\n-\t\tDLB2_BITS_SET(reg, port->init_tkn_cnt,\n-\t\t\t      DLB2_LSP_CQ_DIR_TKN_CNT_COUNT);\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),\n-\t\t\t    reg);\n-\t} else {\n-\t\tDLB2_CSR_WR(hw,\n-\t\t\t    DLB2_LSP_CQ_DIR_TKN_CNT(hw->ver, port->id.phys_id),\n-\t\t\t    DLB2_LSP_CQ_DIR_TKN_CNT_RST);\n-\t}\n-\n-\treg = 0;\n-\tDLB2_BITS_SET(reg, ds,\n-\t\t      DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI_TOKEN_DEPTH_SELECT_V2);\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_LSP_CQ_DIR_TKN_DEPTH_SEL_DSI(hw->ver,\n-\t\t\t\t\t\t      port->id.phys_id),\n-\t\t    reg);\n-\n-\t/* Reset the CQ write pointer */\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_CHP_DIR_CQ_WPTR(hw->ver, port->id.phys_id),\n-\t\t    DLB2_CHP_DIR_CQ_WPTR_RST);\n-\n-\t/* Virtualize the PPID */\n-\treg = 0;\n-\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_FMT(port->id.phys_id), reg);\n-\n-\t/*\n-\t * Address translation (AT) settings: 0: untranslated, 2: translated\n-\t * (see ATS spec regarding Address Type field for more details)\n-\t */\n-\tif (hw->ver == DLB2_HW_V2) {\n-\t\treg = 0;\n-\t\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_AT(port->id.phys_id), reg);\n-\t}\n-\n-\tif (vdev_req && hw->virt_mode == DLB2_VIRT_SIOV) {\n-\t\tDLB2_BITS_SET(reg, hw->pasid[vdev_id],\n-\t\t\t      DLB2_SYS_DIR_CQ_PASID_PASID);\n-\t\tDLB2_BIT_SET(reg, DLB2_SYS_DIR_CQ_PASID_FMT2);\n-\t}\n-\n-\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_CQ_PASID(hw->ver, port->id.phys_id), reg);\n-\n-\treg = 0;\n-\tDLB2_BITS_SET(reg, domain->id.phys_id, DLB2_CHP_DIR_CQ2VAS_CQ2VAS);\n-\tDLB2_CSR_WR(hw, DLB2_CHP_DIR_CQ2VAS(hw->ver, port->id.phys_id), reg);\n-\n-\treturn 0;\n-}\n-\n-static int dlb2_configure_dir_port(struct dlb2_hw *hw,\n-\t\t\t\t   struct dlb2_hw_domain *domain,\n-\t\t\t\t   struct dlb2_dir_pq_pair *port,\n-\t\t\t\t   uintptr_t cq_dma_base,\n-\t\t\t\t   struct dlb2_create_dir_port_args *args,\n-\t\t\t\t   bool vdev_req,\n-\t\t\t\t   unsigned int vdev_id)\n-{\n-\tint ret;\n-\n-\tret = dlb2_dir_port_configure_cq(hw,\n-\t\t\t\t\t domain,\n-\t\t\t\t\t port,\n-\t\t\t\t\t cq_dma_base,\n-\t\t\t\t\t args,\n-\t\t\t\t\t vdev_req,\n-\t\t\t\t\t vdev_id);\n-\n-\tif (ret)\n-\t\treturn ret;\n-\n-\tdlb2_dir_port_configure_pp(hw,\n-\t\t\t\t   domain,\n-\t\t\t\t   port,\n-\t\t\t\t   vdev_req,\n-\t\t\t\t   vdev_id);\n-\n-\tdlb2_dir_port_cq_enable(hw, port);\n-\n-\tport->enabled = true;\n-\n-\tport->port_configured = true;\n-\n-\treturn 0;\n-}\n-\n-/**\n- * dlb2_hw_create_dir_port() - create a directed port\n- * @hw: dlb2_hw handle for a particular device.\n- * @domain_id: domain ID.\n- * @args: port creation arguments.\n- * @cq_dma_base: base address of the CQ memory. This can be a PA or an IOVA.\n- * @resp: response structure.\n- * @vdev_req: indicates whether this request came from a vdev.\n- * @vdev_id: If vdev_req is true, this contains the vdev's ID.\n- *\n- * This function creates a directed port.\n- *\n- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual\n- * device.\n- *\n- * Return:\n- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is\n- * assigned a detailed error code from enum dlb2_error. 
If successful, resp->id\n- * contains the port ID.\n- *\n- * resp->id contains a virtual ID if vdev_req is true.\n- *\n- * Errors:\n- * EINVAL - A requested resource is unavailable, a credit setting is invalid, a\n- *\t    pointer address is not properly aligned, the domain is not\n- *\t    configured, or the domain has already been started.\n- * EFAULT - Internal error (resp->status not set).\n- */\n-int dlb2_hw_create_dir_port(struct dlb2_hw *hw,\n-\t\t\t    u32 domain_id,\n-\t\t\t    struct dlb2_create_dir_port_args *args,\n-\t\t\t    uintptr_t cq_dma_base,\n-\t\t\t    struct dlb2_cmd_response *resp,\n-\t\t\t    bool vdev_req,\n-\t\t\t    unsigned int vdev_id)\n-{\n-\tstruct dlb2_dir_pq_pair *port;\n-\tstruct dlb2_hw_domain *domain;\n-\tint ret;\n-\n-\tdlb2_log_create_dir_port_args(hw,\n-\t\t\t\t      domain_id,\n-\t\t\t\t      cq_dma_base,\n-\t\t\t\t      args,\n-\t\t\t\t      vdev_req,\n-\t\t\t\t      vdev_id);\n-\n-\t/*\n-\t * Verify that hardware resources are available before attempting to\n-\t * satisfy the request. This simplifies the error unwinding code.\n-\t */\n-\tret = dlb2_verify_create_dir_port_args(hw,\n-\t\t\t\t\t       domain_id,\n-\t\t\t\t\t       cq_dma_base,\n-\t\t\t\t\t       args,\n-\t\t\t\t\t       resp,\n-\t\t\t\t\t       vdev_req,\n-\t\t\t\t\t       vdev_id,\n-\t\t\t\t\t       &domain,\n-\t\t\t\t\t       &port);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\tret = dlb2_configure_dir_port(hw,\n-\t\t\t\t      domain,\n-\t\t\t\t      port,\n-\t\t\t\t      cq_dma_base,\n-\t\t\t\t      args,\n-\t\t\t\t      vdev_req,\n-\t\t\t\t      vdev_id);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\t/*\n-\t * Configuration succeeded, so move the resource from the 'avail' to\n-\t * the 'used' list (if it's not already there).\n-\t */\n-\tif (args->queue_id == -1) {\n-\t\tdlb2_list_del(&domain->avail_dir_pq_pairs, &port->domain_list);\n-\n-\t\tdlb2_list_add(&domain->used_dir_pq_pairs, &port->domain_list);\n-\t}\n-\n-\tresp->status = 0;\n-\tresp->id = (vdev_req) ? 
port->id.virt_id : port->id.phys_id;\n-\n-\treturn 0;\n-}\n-\n-static void dlb2_configure_dir_queue(struct dlb2_hw *hw,\n-\t\t\t\t     struct dlb2_hw_domain *domain,\n-\t\t\t\t     struct dlb2_dir_pq_pair *queue,\n-\t\t\t\t     struct dlb2_create_dir_queue_args *args,\n-\t\t\t\t     bool vdev_req,\n-\t\t\t\t     unsigned int vdev_id)\n-{\n-\tunsigned int offs;\n-\tu32 reg = 0;\n-\n-\t/* QID write permissions are turned on when the domain is started */\n-\toffs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +\n-\t\tqueue->id.phys_id;\n-\n-\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), reg);\n-\n-\t/* Don't timestamp QEs that pass through this queue */\n-\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_ITS(queue->id.phys_id), reg);\n-\n-\treg = 0;\n-\tDLB2_BITS_SET(reg, args->depth_threshold,\n-\t\t      DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH);\n-\tDLB2_CSR_WR(hw,\n-\t\t    DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver, queue->id.phys_id),\n-\t\t    reg);\n-\n-\tif (vdev_req) {\n-\t\toffs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +\n-\t\t\tqueue->id.virt_id;\n-\n-\t\treg = 0;\n-\t\tDLB2_BIT_SET(reg, DLB2_SYS_VF_DIR_VQID_V_VQID_V);\n-\t\tDLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(offs), reg);\n-\n-\t\treg = 0;\n-\t\tDLB2_BITS_SET(reg, queue->id.phys_id,\n-\t\t\t      DLB2_SYS_VF_DIR_VQID2QID_QID);\n-\t\tDLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(offs), reg);\n-\t}\n-\n-\treg = 0;\n-\tDLB2_BIT_SET(reg, DLB2_SYS_DIR_QID_V_QID_V);\n-\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_V(queue->id.phys_id), reg);\n-\n-\tqueue->queue_configured = true;\n-}\n-\n-static void\n-dlb2_log_create_dir_queue_args(struct dlb2_hw *hw,\n-\t\t\t       u32 domain_id,\n-\t\t\t       struct dlb2_create_dir_queue_args *args,\n-\t\t\t       bool vdev_req,\n-\t\t\t       unsigned int vdev_id)\n-{\n-\tDLB2_HW_DBG(hw, \"DLB2 create directed queue arguments:\\n\");\n-\tif (vdev_req)\n-\t\tDLB2_HW_DBG(hw, \"(Request from vdev %d)\\n\", vdev_id);\n-\tDLB2_HW_DBG(hw, \"\\tDomain ID: %d\\n\", domain_id);\n-\tDLB2_HW_DBG(hw, \"\\tPort ID:   %d\\n\", args->port_id);\n-}\n-\n-static int\n-dlb2_verify_create_dir_queue_args(struct dlb2_hw *hw,\n-\t\t\t\t  u32 domain_id,\n-\t\t\t\t  struct dlb2_create_dir_queue_args *args,\n-\t\t\t\t  struct dlb2_cmd_response *resp,\n-\t\t\t\t  bool vdev_req,\n-\t\t\t\t  unsigned int vdev_id,\n-\t\t\t\t  struct dlb2_hw_domain **out_domain,\n-\t\t\t\t  struct dlb2_dir_pq_pair **out_queue)\n-{\n-\tstruct dlb2_hw_domain *domain;\n-\tstruct dlb2_dir_pq_pair *pq;\n-\n-\tdomain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);\n-\n-\tif (!domain) {\n-\t\tresp->status = DLB2_ST_INVALID_DOMAIN_ID;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (!domain->configured) {\n-\t\tresp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (domain->started) {\n-\t\tresp->status = DLB2_ST_DOMAIN_STARTED;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\t/*\n-\t * If the user claims the port is already configured, validate the port\n-\t * ID, its domain, and whether the port is configured.\n-\t */\n-\tif (args->port_id != -1) {\n-\t\tpq = dlb2_get_domain_used_dir_pq(hw,\n-\t\t\t\t\t\t args->port_id,\n-\t\t\t\t\t\t vdev_req,\n-\t\t\t\t\t\t domain);\n-\n-\t\tif (!pq || pq->domain_id.phys_id != domain->id.phys_id ||\n-\t\t    !pq->port_configured) {\n-\t\t\tresp->status = DLB2_ST_INVALID_PORT_ID;\n-\t\t\treturn -EINVAL;\n-\t\t}\n-\t} else {\n-\t\t/*\n-\t\t * If the queue's port is not configured, validate that a free\n-\t\t * port-queue pair is available.\n-\t\t */\n-\t\tpq = 
DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,\n-\t\t\t\t\ttypeof(*pq));\n-\t\tif (!pq) {\n-\t\t\tresp->status = DLB2_ST_DIR_QUEUES_UNAVAILABLE;\n-\t\t\treturn -EINVAL;\n-\t\t}\n-\t}\n-\n-\t*out_domain = domain;\n-\t*out_queue = pq;\n-\n-\treturn 0;\n-}\n-\n-/**\n- * dlb2_hw_create_dir_queue() - create a directed queue\n- * @hw: dlb2_hw handle for a particular device.\n- * @domain_id: domain ID.\n- * @args: queue creation arguments.\n- * @resp: response structure.\n- * @vdev_req: indicates whether this request came from a vdev.\n- * @vdev_id: If vdev_req is true, this contains the vdev's ID.\n- *\n- * This function creates a directed queue.\n- *\n- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual\n- * device.\n- *\n- * Return:\n- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is\n- * assigned a detailed error code from enum dlb2_error. If successful, resp->id\n- * contains the queue ID.\n- *\n- * resp->id contains a virtual ID if vdev_req is true.\n- *\n- * Errors:\n- * EINVAL - A requested resource is unavailable, the domain is not configured,\n- *\t    or the domain has already been started.\n- * EFAULT - Internal error (resp->status not set).\n- */\n-int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,\n-\t\t\t     u32 domain_id,\n-\t\t\t     struct dlb2_create_dir_queue_args *args,\n-\t\t\t     struct dlb2_cmd_response *resp,\n-\t\t\t     bool vdev_req,\n-\t\t\t     unsigned int vdev_id)\n-{\n-\tstruct dlb2_dir_pq_pair *queue;\n-\tstruct dlb2_hw_domain *domain;\n-\tint ret;\n-\n-\tdlb2_log_create_dir_queue_args(hw, domain_id, args, vdev_req, vdev_id);\n-\n-\t/*\n-\t * Verify that hardware resources are available before attempting to\n-\t * satisfy the request. This simplifies the error unwinding code.\n-\t */\n-\tret = dlb2_verify_create_dir_queue_args(hw,\n-\t\t\t\t\t\tdomain_id,\n-\t\t\t\t\t\targs,\n-\t\t\t\t\t\tresp,\n-\t\t\t\t\t\tvdev_req,\n-\t\t\t\t\t\tvdev_id,\n-\t\t\t\t\t\t&domain,\n-\t\t\t\t\t\t&queue);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\tdlb2_configure_dir_queue(hw, domain, queue, args, vdev_req, vdev_id);\n-\n-\t/*\n-\t * Configuration succeeded, so move the resource from the 'avail' to\n-\t * the 'used' list (if it's not already there).\n-\t */\n-\tif (args->port_id == -1) {\n-\t\tdlb2_list_del(&domain->avail_dir_pq_pairs,\n-\t\t\t      &queue->domain_list);\n-\n-\t\tdlb2_list_add(&domain->used_dir_pq_pairs,\n-\t\t\t      &queue->domain_list);\n-\t}\n-\n-\tresp->status = 0;\n-\n-\tresp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;\n-\n-\treturn 0;\n-}\n-\n-static bool\n-dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,\n-\t\t\t\t\t   struct dlb2_ldb_queue *queue,\n-\t\t\t\t\t   int *slot)\n-{\n-\tint i;\n-\n-\tfor (i = 0; i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ; i++) {\n-\t\tstruct dlb2_ldb_port_qid_map *map = &port->qid_map[i];\n-\n-\t\tif (map->state == DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP &&\n-\t\t    map->pending_qid == queue->id.phys_id)\n-\t\t\tbreak;\n-\t}\n-\n-\t*slot = i;\n-\n-\treturn (i < DLB2_MAX_NUM_QIDS_PER_LDB_CQ);\n-}\n-\n-static int dlb2_verify_map_qid_slot_available(struct dlb2_ldb_port *port,\n-\t\t\t\t\t      struct dlb2_ldb_queue *queue,\n-\t\t\t\t\t      struct dlb2_cmd_response *resp)\n-{\n-\tenum dlb2_qid_map_state state;\n-\tint i;\n-\n-\t/* Unused slot available? 
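(i.e., is port->num_mappings still below DLB2_MAX_NUM_QIDS_PER_LDB_CQ?) 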
*/\n-\tif (port->num_mappings < DLB2_MAX_NUM_QIDS_PER_LDB_CQ)\n-\t\treturn 0;\n-\n-\t/*\n-\t * If the queue is already mapped (from the application's perspective),\n-\t * this is simply a priority update.\n-\t */\n-\tstate = DLB2_QUEUE_MAPPED;\n-\tif (dlb2_port_find_slot_queue(port, state, queue, &i))\n-\t\treturn 0;\n-\n-\tstate = DLB2_QUEUE_MAP_IN_PROG;\n-\tif (dlb2_port_find_slot_queue(port, state, queue, &i))\n-\t\treturn 0;\n-\n-\tif (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i))\n-\t\treturn 0;\n-\n-\t/*\n-\t * If the slot contains an unmap in progress, it's considered\n-\t * available.\n-\t */\n-\tstate = DLB2_QUEUE_UNMAP_IN_PROG;\n-\tif (dlb2_port_find_slot(port, state, &i))\n-\t\treturn 0;\n-\n-\tstate = DLB2_QUEUE_UNMAPPED;\n-\tif (dlb2_port_find_slot(port, state, &i))\n-\t\treturn 0;\n-\n-\tresp->status = DLB2_ST_NO_QID_SLOTS_AVAILABLE;\n-\treturn -EINVAL;\n-}\n-\n-static struct dlb2_ldb_queue *\n-dlb2_get_domain_ldb_queue(u32 id,\n-\t\t\t  bool vdev_req,\n-\t\t\t  struct dlb2_hw_domain *domain)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_ldb_queue *queue;\n-\tRTE_SET_USED(iter);\n-\n-\tif (id >= DLB2_MAX_NUM_LDB_QUEUES)\n-\t\treturn NULL;\n-\n-\tDLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter) {\n-\t\tif ((!vdev_req && queue->id.phys_id == id) ||\n-\t\t    (vdev_req && queue->id.virt_id == id))\n-\t\t\treturn queue;\n-\t}\n-\n-\treturn NULL;\n-}\n-\n-static struct dlb2_ldb_port *\n-dlb2_get_domain_used_ldb_port(u32 id,\n-\t\t\t      bool vdev_req,\n-\t\t\t      struct dlb2_hw_domain *domain)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_ldb_port *port;\n-\tint i;\n-\tRTE_SET_USED(iter);\n-\n-\tif (id >= DLB2_MAX_NUM_LDB_PORTS)\n-\t\treturn NULL;\n-\n-\tfor (i = 0; i < DLB2_NUM_COS_DOMAINS; i++) {\n-\t\tDLB2_DOM_LIST_FOR(domain->used_ldb_ports[i], port, iter) {\n-\t\t\tif ((!vdev_req && port->id.phys_id == id) ||\n-\t\t\t    (vdev_req && port->id.virt_id == id))\n-\t\t\t\treturn port;\n-\t\t}\n-\n-\t\tDLB2_DOM_LIST_FOR(domain->avail_ldb_ports[i], port, iter) {\n-\t\t\tif ((!vdev_req && port->id.phys_id == id) ||\n-\t\t\t    (vdev_req && port->id.virt_id == id))\n-\t\t\t\treturn port;\n-\t\t}\n-\t}\n-\n-\treturn NULL;\n-}\n-\n-static void dlb2_ldb_port_change_qid_priority(struct dlb2_hw *hw,\n-\t\t\t\t\t      struct dlb2_ldb_port *port,\n-\t\t\t\t\t      int slot,\n-\t\t\t\t\t      struct dlb2_map_qid_args *args)\n-{\n-\tu32 cq2priov;\n-\n-\t/* Read-modify-write the priority and valid bit register */\n-\tcq2priov = DLB2_CSR_RD(hw,\n-\t\t\t       DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id));\n-\n-\tcq2priov |= (1 << (slot + DLB2_LSP_CQ2PRIOV_V_LOC)) &\n-\t\t    DLB2_LSP_CQ2PRIOV_V;\n-\tcq2priov |= ((args->priority & 0x7) << slot * 3) &\n-\t\t    DLB2_LSP_CQ2PRIOV_PRIO;\n-\n-\tDLB2_CSR_WR(hw, DLB2_LSP_CQ2PRIOV(hw->ver, port->id.phys_id), cq2priov);\n-\n-\tdlb2_flush_csr(hw);\n-\n-\tport->qid_map[slot].priority = args->priority;\n-}\n-\n-static int dlb2_verify_map_qid_args(struct dlb2_hw *hw,\n-\t\t\t\t    u32 domain_id,\n-\t\t\t\t    struct dlb2_map_qid_args *args,\n-\t\t\t\t    struct dlb2_cmd_response *resp,\n-\t\t\t\t    bool vdev_req,\n-\t\t\t\t    unsigned int vdev_id,\n-\t\t\t\t    struct dlb2_hw_domain **out_domain,\n-\t\t\t\t    struct dlb2_ldb_port **out_port,\n-\t\t\t\t    struct dlb2_ldb_queue **out_queue)\n-{\n-\tstruct dlb2_hw_domain *domain;\n-\tstruct dlb2_ldb_queue *queue;\n-\tstruct dlb2_ldb_port *port;\n-\tint id;\n-\n-\tdomain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);\n-\n-\tif (!domain) 
{\n-\t\tresp->status = DLB2_ST_INVALID_DOMAIN_ID;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (!domain->configured) {\n-\t\tresp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tid = args->port_id;\n-\n-\tport = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);\n-\n-\tif (!port || !port->configured) {\n-\t\tresp->status = DLB2_ST_INVALID_PORT_ID;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (args->priority >= DLB2_QID_PRIORITIES) {\n-\t\tresp->status = DLB2_ST_INVALID_PRIORITY;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tqueue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);\n-\n-\tif (!queue || !queue->configured) {\n-\t\tresp->status = DLB2_ST_INVALID_QID;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (queue->domain_id.phys_id != domain->id.phys_id) {\n-\t\tresp->status = DLB2_ST_INVALID_QID;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (port->domain_id.phys_id != domain->id.phys_id) {\n-\t\tresp->status = DLB2_ST_INVALID_PORT_ID;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\t*out_domain = domain;\n-\t*out_queue = queue;\n-\t*out_port = port;\n-\n-\treturn 0;\n-}\n-\n-static void dlb2_log_map_qid(struct dlb2_hw *hw,\n-\t\t\t     u32 domain_id,\n-\t\t\t     struct dlb2_map_qid_args *args,\n-\t\t\t     bool vdev_req,\n-\t\t\t     unsigned int vdev_id)\n-{\n-\tDLB2_HW_DBG(hw, \"DLB2 map QID arguments:\\n\");\n-\tif (vdev_req)\n-\t\tDLB2_HW_DBG(hw, \"(Request from vdev %d)\\n\", vdev_id);\n-\tDLB2_HW_DBG(hw, \"\\tDomain ID: %d\\n\",\n-\t\t    domain_id);\n-\tDLB2_HW_DBG(hw, \"\\tPort ID:   %d\\n\",\n-\t\t    args->port_id);\n-\tDLB2_HW_DBG(hw, \"\\tQueue ID:  %d\\n\",\n-\t\t    args->qid);\n-\tDLB2_HW_DBG(hw, \"\\tPriority:  %d\\n\",\n-\t\t    args->priority);\n-}\n-\n-/**\n- * dlb2_hw_map_qid() - map a load-balanced queue to a load-balanced port\n- * @hw: dlb2_hw handle for a particular device.\n- * @domain_id: domain ID.\n- * @args: map QID arguments.\n- * @resp: response structure.\n- * @vdev_req: indicates whether this request came from a vdev.\n- * @vdev_id: If vdev_req is true, this contains the vdev's ID.\n- *\n- * This function configures the DLB to schedule QEs from the specified queue\n- * to the specified port. Each load-balanced port can be mapped to up to 8\n- * queues; each load-balanced queue can potentially map to all the\n- * load-balanced ports.\n- *\n- * A successful return does not necessarily mean the mapping was configured. If\n- * this function is unable to immediately map the queue to the port, it will\n- * add the requested operation to a per-port list of pending map/unmap\n- * operations, and (if it's not already running) launch a kernel thread that\n- * periodically attempts to process all pending operations. In a sense, this is\n- * an asynchronous function.\n- *\n- * This asynchronicity creates two views of the state of hardware: the actual\n- * hardware state and the requested state (as if every request completed\n- * immediately). If there are any pending map/unmap operations, the requested\n- * state will differ from the actual state. All validation is performed with\n- * respect to the pending state; for instance, if there are 8 pending map\n- * operations for port X, a request for a 9th will fail because a load-balanced\n- * port can only map up to 8 queues.\n- *\n- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual\n- * device.\n- *\n- * Return:\n- * Returns 0 upon success, < 0 otherwise. 
If an error occurs, resp->status is\n- * assigned a detailed error code from enum dlb2_error.\n- *\n- * Errors:\n- * EINVAL - A requested resource is unavailable, invalid port or queue ID, or\n- *\t    the domain is not configured.\n- * EFAULT - Internal error (resp->status not set).\n- */\n-int dlb2_hw_map_qid(struct dlb2_hw *hw,\n-\t\t    u32 domain_id,\n-\t\t    struct dlb2_map_qid_args *args,\n-\t\t    struct dlb2_cmd_response *resp,\n-\t\t    bool vdev_req,\n-\t\t    unsigned int vdev_id)\n-{\n-\tstruct dlb2_hw_domain *domain;\n-\tstruct dlb2_ldb_queue *queue;\n-\tenum dlb2_qid_map_state st;\n-\tstruct dlb2_ldb_port *port;\n-\tint ret, i;\n-\tu8 prio;\n-\n-\tdlb2_log_map_qid(hw, domain_id, args, vdev_req, vdev_id);\n-\n-\t/*\n-\t * Verify that hardware resources are available before attempting to\n-\t * satisfy the request. This simplifies the error unwinding code.\n-\t */\n-\tret = dlb2_verify_map_qid_args(hw,\n-\t\t\t\t       domain_id,\n-\t\t\t\t       args,\n-\t\t\t\t       resp,\n-\t\t\t\t       vdev_req,\n-\t\t\t\t       vdev_id,\n-\t\t\t\t       &domain,\n-\t\t\t\t       &port,\n-\t\t\t\t       &queue);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\tprio = args->priority;\n-\n-\t/*\n-\t * If there are any outstanding detach operations for this port,\n-\t * attempt to complete them. This may be necessary to free up a QID\n-\t * slot for this requested mapping.\n-\t */\n-\tif (port->num_pending_removals)\n-\t\tdlb2_domain_finish_unmap_port(hw, domain, port);\n-\n-\tret = dlb2_verify_map_qid_slot_available(port, queue, resp);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\t/* Hardware requires disabling the CQ before mapping QIDs. */\n-\tif (port->enabled)\n-\t\tdlb2_ldb_port_cq_disable(hw, port);\n-\n-\t/*\n-\t * If this is only a priority change, don't perform the full QID->CQ\n-\t * mapping procedure\n-\t */\n-\tst = DLB2_QUEUE_MAPPED;\n-\tif (dlb2_port_find_slot_queue(port, st, queue, &i)) {\n-\t\tif (prio != port->qid_map[i].priority) {\n-\t\t\tdlb2_ldb_port_change_qid_priority(hw, port, i, args);\n-\t\t\tDLB2_HW_DBG(hw, \"DLB2 map: priority change\\n\");\n-\t\t}\n-\n-\t\tst = DLB2_QUEUE_MAPPED;\n-\t\tret = dlb2_port_slot_state_transition(hw, port, queue, i, st);\n-\t\tif (ret)\n-\t\t\treturn ret;\n-\n-\t\tgoto map_qid_done;\n-\t}\n-\n-\tst = DLB2_QUEUE_UNMAP_IN_PROG;\n-\tif (dlb2_port_find_slot_queue(port, st, queue, &i)) {\n-\t\tif (prio != port->qid_map[i].priority) {\n-\t\t\tdlb2_ldb_port_change_qid_priority(hw, port, i, args);\n-\t\t\tDLB2_HW_DBG(hw, \"DLB2 map: priority change\\n\");\n-\t\t}\n-\n-\t\tst = DLB2_QUEUE_MAPPED;\n-\t\tret = dlb2_port_slot_state_transition(hw, port, queue, i, st);\n-\t\tif (ret)\n-\t\t\treturn ret;\n-\n-\t\tgoto map_qid_done;\n-\t}\n-\n-\t/*\n-\t * If this is a priority change on an in-progress mapping, don't\n-\t * perform the full QID->CQ mapping procedure.\n-\t */\n-\tst = DLB2_QUEUE_MAP_IN_PROG;\n-\tif (dlb2_port_find_slot_queue(port, st, queue, &i)) {\n-\t\tport->qid_map[i].priority = prio;\n-\n-\t\tDLB2_HW_DBG(hw, \"DLB2 map: priority change only\\n\");\n-\n-\t\tgoto map_qid_done;\n-\t}\n-\n-\t/*\n-\t * If this is a priority change on a pending mapping, update the\n-\t * pending priority\n-\t */\n-\tif (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {\n-\t\tport->qid_map[i].pending_priority = prio;\n-\n-\t\tDLB2_HW_DBG(hw, \"DLB2 map: priority change only\\n\");\n-\n-\t\tgoto map_qid_done;\n-\t}\n-\n-\t/*\n-\t * If all the CQ's slots are in use, then there's an unmap in progress\n-\t * (guaranteed by dlb2_verify_map_qid_slot_available()), so 
add this\n-\t * mapping to pending_map and return. When the removal is completed for\n-\t * the slot's current occupant, this mapping will be performed.\n-\t */\n-\tif (!dlb2_port_find_slot(port, DLB2_QUEUE_UNMAPPED, &i)) {\n-\t\tif (dlb2_port_find_slot(port, DLB2_QUEUE_UNMAP_IN_PROG, &i)) {\n-\t\t\tenum dlb2_qid_map_state new_st;\n-\n-\t\t\tport->qid_map[i].pending_qid = queue->id.phys_id;\n-\t\t\tport->qid_map[i].pending_priority = prio;\n-\n-\t\t\tnew_st = DLB2_QUEUE_UNMAP_IN_PROG_PENDING_MAP;\n-\n-\t\t\tret = dlb2_port_slot_state_transition(hw, port, queue,\n-\t\t\t\t\t\t\t      i, new_st);\n-\t\t\tif (ret)\n-\t\t\t\treturn ret;\n-\n-\t\t\tDLB2_HW_DBG(hw, \"DLB2 map: map pending removal\\n\");\n-\n-\t\t\tgoto map_qid_done;\n-\t\t}\n-\t}\n-\n-\t/*\n-\t * If the domain has started, a special \"dynamic\" CQ->queue mapping\n-\t * procedure is required in order to safely update the CQ<->QID tables.\n-\t * The \"static\" procedure cannot be used when traffic is flowing,\n-\t * because the CQ<->QID tables cannot be updated atomically and the\n-\t * scheduler won't see the new mapping unless the queue's if_status\n-\t * changes, which isn't guaranteed.\n-\t */\n-\tret = dlb2_ldb_port_map_qid(hw, domain, port, queue, prio);\n-\n-\t/* If ret is less than zero, it's due to an internal error */\n-\tif (ret < 0)\n-\t\treturn ret;\n-\n-map_qid_done:\n-\tif (port->enabled)\n-\t\tdlb2_ldb_port_cq_enable(hw, port);\n-\n-\tresp->status = 0;\n-\n-\treturn 0;\n-}\n-\n-static void dlb2_log_unmap_qid(struct dlb2_hw *hw,\n-\t\t\t       u32 domain_id,\n-\t\t\t       struct dlb2_unmap_qid_args *args,\n-\t\t\t       bool vdev_req,\n-\t\t\t       unsigned int vdev_id)\n-{\n-\tDLB2_HW_DBG(hw, \"DLB2 unmap QID arguments:\\n\");\n-\tif (vdev_req)\n-\t\tDLB2_HW_DBG(hw, \"(Request from vdev %d)\\n\", vdev_id);\n-\tDLB2_HW_DBG(hw, \"\\tDomain ID: %d\\n\",\n-\t\t    domain_id);\n-\tDLB2_HW_DBG(hw, \"\\tPort ID:   %d\\n\",\n-\t\t    args->port_id);\n-\tDLB2_HW_DBG(hw, \"\\tQueue ID:  %d\\n\",\n-\t\t    args->qid);\n-\tif (args->qid < DLB2_MAX_NUM_LDB_QUEUES)\n-\t\tDLB2_HW_DBG(hw, \"\\tQueue's num mappings:  %d\\n\",\n-\t\t\t    hw->rsrcs.ldb_queues[args->qid].num_mappings);\n-}\n-\n-static int dlb2_verify_unmap_qid_args(struct dlb2_hw *hw,\n-\t\t\t\t      u32 domain_id,\n-\t\t\t\t      struct dlb2_unmap_qid_args *args,\n-\t\t\t\t      struct dlb2_cmd_response *resp,\n-\t\t\t\t      bool vdev_req,\n-\t\t\t\t      unsigned int vdev_id,\n-\t\t\t\t      struct dlb2_hw_domain **out_domain,\n-\t\t\t\t      struct dlb2_ldb_port **out_port,\n-\t\t\t\t      struct dlb2_ldb_queue **out_queue)\n-{\n-\tenum dlb2_qid_map_state state;\n-\tstruct dlb2_hw_domain *domain;\n-\tstruct dlb2_ldb_queue *queue;\n-\tstruct dlb2_ldb_port *port;\n-\tint slot;\n-\tint id;\n-\n-\tdomain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);\n-\n-\tif (!domain) {\n-\t\tresp->status = DLB2_ST_INVALID_DOMAIN_ID;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (!domain->configured) {\n-\t\tresp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tid = args->port_id;\n-\n-\tport = dlb2_get_domain_used_ldb_port(id, vdev_req, domain);\n-\n-\tif (!port || !port->configured) {\n-\t\tresp->status = DLB2_ST_INVALID_PORT_ID;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (port->domain_id.phys_id != domain->id.phys_id) {\n-\t\tresp->status = DLB2_ST_INVALID_PORT_ID;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tqueue = dlb2_get_domain_ldb_queue(args->qid, vdev_req, domain);\n-\n-\tif (!queue || !queue->configured) {\n-\t\tDLB2_HW_ERR(hw, \"[%s()] Can't unmap 
unconfigured queue %d\\n\",\n-\t\t\t    __func__, args->qid);\n-\t\tresp->status = DLB2_ST_INVALID_QID;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\t/*\n-\t * Verify that the port has the queue mapped. From the application's\n-\t * perspective a queue is mapped if it is actually mapped, the map is\n-\t * in progress, or the map is blocked pending an unmap.\n-\t */\n-\tstate = DLB2_QUEUE_MAPPED;\n-\tif (dlb2_port_find_slot_queue(port, state, queue, &slot))\n-\t\tgoto done;\n-\n-\tstate = DLB2_QUEUE_MAP_IN_PROG;\n-\tif (dlb2_port_find_slot_queue(port, state, queue, &slot))\n-\t\tgoto done;\n-\n-\tif (dlb2_port_find_slot_with_pending_map_queue(port, queue, &slot))\n-\t\tgoto done;\n-\n-\tresp->status = DLB2_ST_INVALID_QID;\n-\treturn -EINVAL;\n-\n-done:\n-\t*out_domain = domain;\n-\t*out_port = port;\n-\t*out_queue = queue;\n-\n-\treturn 0;\n-}\n-\n-/**\n- * dlb2_hw_unmap_qid() - Unmap a load-balanced queue from a load-balanced port\n- * @hw: dlb2_hw handle for a particular device.\n- * @domain_id: domain ID.\n- * @args: unmap QID arguments.\n- * @resp: response structure.\n- * @vdev_req: indicates whether this request came from a vdev.\n- * @vdev_id: If vdev_req is true, this contains the vdev's ID.\n- *\n- * This function configures the DLB to stop scheduling QEs from the specified\n- * queue to the specified port.\n- *\n- * A successful return does not necessarily mean the mapping was removed. If\n- * this function is unable to immediately unmap the queue from the port, it\n- * will add the requested operation to a per-port list of pending map/unmap\n- * operations, and (if it's not already running) launch a kernel thread that\n- * periodically attempts to process all pending operations. See\n- * dlb2_hw_map_qid() for more details.\n- *\n- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual\n- * device.\n- *\n- * Return:\n- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is\n- * assigned a detailed error code from enum dlb2_error.\n- *\n- * Errors:\n- * EINVAL - A requested resource is unavailable, invalid port or queue ID, or\n- *\t    the domain is not configured.\n- * EFAULT - Internal error (resp->status not set).\n- */\n-int dlb2_hw_unmap_qid(struct dlb2_hw *hw,\n-\t\t      u32 domain_id,\n-\t\t      struct dlb2_unmap_qid_args *args,\n-\t\t      struct dlb2_cmd_response *resp,\n-\t\t      bool vdev_req,\n-\t\t      unsigned int vdev_id)\n-{\n-\tstruct dlb2_hw_domain *domain;\n-\tstruct dlb2_ldb_queue *queue;\n-\tenum dlb2_qid_map_state st;\n-\tstruct dlb2_ldb_port *port;\n-\tbool unmap_complete;\n-\tint i, ret;\n-\n-\tdlb2_log_unmap_qid(hw, domain_id, args, vdev_req, vdev_id);\n-\n-\t/*\n-\t * Verify that hardware resources are available before attempting to\n-\t * satisfy the request. 
This simplifies the error unwinding code.\n-\t */\n-\tret = dlb2_verify_unmap_qid_args(hw,\n-\t\t\t\t\t domain_id,\n-\t\t\t\t\t args,\n-\t\t\t\t\t resp,\n-\t\t\t\t\t vdev_req,\n-\t\t\t\t\t vdev_id,\n-\t\t\t\t\t &domain,\n-\t\t\t\t\t &port,\n-\t\t\t\t\t &queue);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\t/*\n-\t * If the queue hasn't been mapped yet, we need to update the slot's\n-\t * state and re-enable the queue's inflights.\n-\t */\n-\tst = DLB2_QUEUE_MAP_IN_PROG;\n-\tif (dlb2_port_find_slot_queue(port, st, queue, &i)) {\n-\t\t/*\n-\t\t * Since the in-progress map was aborted, re-enable the QID's\n-\t\t * inflights.\n-\t\t */\n-\t\tif (queue->num_pending_additions == 0)\n-\t\t\tdlb2_ldb_queue_set_inflight_limit(hw, queue);\n-\n-\t\tst = DLB2_QUEUE_UNMAPPED;\n-\t\tret = dlb2_port_slot_state_transition(hw, port, queue, i, st);\n-\t\tif (ret)\n-\t\t\treturn ret;\n-\n-\t\tgoto unmap_qid_done;\n-\t}\n-\n-\t/*\n-\t * If the queue mapping is on hold pending an unmap, we simply need to\n-\t * update the slot's state.\n-\t */\n-\tif (dlb2_port_find_slot_with_pending_map_queue(port, queue, &i)) {\n-\t\tst = DLB2_QUEUE_UNMAP_IN_PROG;\n-\t\tret = dlb2_port_slot_state_transition(hw, port, queue, i, st);\n-\t\tif (ret)\n-\t\t\treturn ret;\n-\n-\t\tgoto unmap_qid_done;\n-\t}\n-\n-\tst = DLB2_QUEUE_MAPPED;\n-\tif (!dlb2_port_find_slot_queue(port, st, queue, &i)) {\n-\t\tDLB2_HW_ERR(hw,\n-\t\t\t    \"[%s()] Internal error: no available CQ slots\\n\",\n-\t\t\t    __func__);\n-\t\treturn -EFAULT;\n-\t}\n-\n-\t/*\n-\t * QID->CQ mapping removal is an asynchronous procedure. It requires\n-\t * stopping the DLB2 from scheduling this CQ, draining all inflights\n-\t * from the CQ, then unmapping the queue from the CQ. This function\n-\t * simply marks the port as needing the queue unmapped, and (if\n-\t * necessary) starts the unmapping worker thread.\n-\t */\n-\tdlb2_ldb_port_cq_disable(hw, port);\n-\n-\tst = DLB2_QUEUE_UNMAP_IN_PROG;\n-\tret = dlb2_port_slot_state_transition(hw, port, queue, i, st);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\t/*\n-\t * Attempt to finish the unmapping now, in case the port has no\n-\t * outstanding inflights. If that's not the case, this will fail and\n-\t * the unmapping will be completed at a later time.\n-\t */\n-\tunmap_complete = dlb2_domain_finish_unmap_port(hw, domain, port);\n-\n-\t/*\n-\t * If the unmapping couldn't complete immediately, launch the worker\n-\t * thread (if it isn't already launched) to finish it later.\n-\t */\n-\tif (!unmap_complete && !os_worker_active(hw))\n-\t\tos_schedule_work(hw);\n-\nunmap_qid_done:\n-\tresp->status = 0;\n-\n-\treturn 0;\n-}\n-\n-static void\n-dlb2_log_pending_port_unmaps_args(struct dlb2_hw *hw,\n-\t\t\t\t  struct dlb2_pending_port_unmaps_args *args,\n-\t\t\t\t  bool vdev_req,\n-\t\t\t\t  unsigned int vdev_id)\n-{\n-\tDLB2_HW_DBG(hw, \"DLB2 unmaps in progress arguments:\\n\");\n-\tif (vdev_req)\n-\t\tDLB2_HW_DBG(hw, \"(Request from vdev %d)\\n\", vdev_id);\n-\tDLB2_HW_DBG(hw, \"\\tPort ID: %d\\n\", args->port_id);\n-}\n-\n-/**\n- * dlb2_hw_pending_port_unmaps() - returns the number of unmap operations in\n- *\tprogress.\n- * @hw: dlb2_hw handle for a particular device.\n- * @domain_id: domain ID.\n- * @args: number of unmaps in progress args\n- * @resp: response structure.\n- * @vdev_req: indicates whether this request came from a vdev.\n- * @vdev_id: If vdev_req is true, this contains the vdev's ID.\n- *\n- * Return:\n- * Returns 0 upon success, < 0 otherwise. 
If an error occurs, resp->status is\n- * assigned a detailed error code from enum dlb2_error. If successful, resp->id\n- * contains the number of unmaps in progress.\n- *\n- * Errors:\n- * EINVAL - Invalid domain ID or port ID.\n- */\n-int dlb2_hw_pending_port_unmaps(struct dlb2_hw *hw,\n-\t\t\t\tu32 domain_id,\n-\t\t\t\tstruct dlb2_pending_port_unmaps_args *args,\n-\t\t\t\tstruct dlb2_cmd_response *resp,\n-\t\t\t\tbool vdev_req,\n-\t\t\t\tunsigned int vdev_id)\n-{\n-\tstruct dlb2_hw_domain *domain;\n-\tstruct dlb2_ldb_port *port;\n-\n-\tdlb2_log_pending_port_unmaps_args(hw, args, vdev_req, vdev_id);\n-\n-\tdomain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);\n-\n-\tif (!domain) {\n-\t\tresp->status = DLB2_ST_INVALID_DOMAIN_ID;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tport = dlb2_get_domain_used_ldb_port(args->port_id, vdev_req, domain);\n-\tif (!port || !port->configured) {\n-\t\tresp->status = DLB2_ST_INVALID_PORT_ID;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tresp->id = port->num_pending_removals;\n-\n-\treturn 0;\n-}\n-\n-static int dlb2_verify_start_domain_args(struct dlb2_hw *hw,\n-\t\t\t\t\t u32 domain_id,\n-\t\t\t\t\t struct dlb2_cmd_response *resp,\n-\t\t\t\t\t bool vdev_req,\n-\t\t\t\t\t unsigned int vdev_id,\n-\t\t\t\t\t struct dlb2_hw_domain **out_domain)\n-{\n-\tstruct dlb2_hw_domain *domain;\n-\n-\tdomain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);\n-\n-\tif (!domain) {\n-\t\tresp->status = DLB2_ST_INVALID_DOMAIN_ID;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (!domain->configured) {\n-\t\tresp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tif (domain->started) {\n-\t\tresp->status = DLB2_ST_DOMAIN_STARTED;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\t*out_domain = domain;\n-\n-\treturn 0;\n-}\n-\n-static void dlb2_log_start_domain(struct dlb2_hw *hw,\n-\t\t\t\t  u32 domain_id,\n-\t\t\t\t  bool vdev_req,\n-\t\t\t\t  unsigned int vdev_id)\n-{\n-\tDLB2_HW_DBG(hw, \"DLB2 start domain arguments:\\n\");\n-\tif (vdev_req)\n-\t\tDLB2_HW_DBG(hw, \"(Request from vdev %d)\\n\", vdev_id);\n-\tDLB2_HW_DBG(hw, \"\\tDomain ID: %d\\n\", domain_id);\n-}\n-\n-/**\n- * dlb2_hw_start_domain() - start a scheduling domain\n- * @hw: dlb2_hw handle for a particular device.\n- * @domain_id: domain ID.\n- * @args: start domain arguments.\n- * @resp: response structure.\n- * @vdev_req: indicates whether this request came from a vdev.\n- * @vdev_id: If vdev_req is true, this contains the vdev's ID.\n- *\n- * This function starts a scheduling domain, which allows applications to send\n- * traffic through it. Once a domain is started, its resources can no longer be\n- * configured (besides QID remapping and port enable/disable).\n- *\n- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual\n- * device.\n- *\n- * Return:\n- * Returns 0 upon success, < 0 otherwise. 
If an error occurs, resp->status is\n- * assigned a detailed error code from enum dlb2_error.\n- *\n- * Errors:\n- * EINVAL - the domain is not configured, or the domain is already started.\n- */\n-int\n-dlb2_hw_start_domain(struct dlb2_hw *hw,\n-\t\t     u32 domain_id,\n-\t\t     struct dlb2_start_domain_args *args,\n-\t\t     struct dlb2_cmd_response *resp,\n-\t\t     bool vdev_req,\n-\t\t     unsigned int vdev_id)\n-{\n-\tstruct dlb2_list_entry *iter;\n-\tstruct dlb2_dir_pq_pair *dir_queue;\n-\tstruct dlb2_ldb_queue *ldb_queue;\n-\tstruct dlb2_hw_domain *domain;\n-\tint ret;\n-\tRTE_SET_USED(args);\n-\tRTE_SET_USED(iter);\n-\n-\tdlb2_log_start_domain(hw, domain_id, vdev_req, vdev_id);\n-\n-\tret = dlb2_verify_start_domain_args(hw,\n-\t\t\t\t\t    domain_id,\n-\t\t\t\t\t    resp,\n-\t\t\t\t\t    vdev_req,\n-\t\t\t\t\t    vdev_id,\n-\t\t\t\t\t    &domain);\n-\tif (ret)\n-\t\treturn ret;\n-\n-\t/*\n-\t * Enable load-balanced and directed queue write permissions for the\n-\t * queues this domain owns. Without this, the DLB2 will drop all\n-\t * incoming traffic to those queues.\n-\t */\n-\tDLB2_DOM_LIST_FOR(domain->used_ldb_queues, ldb_queue, iter) {\n-\t\tu32 vasqid_v = 0;\n-\t\tunsigned int offs;\n-\n-\t\tDLB2_BIT_SET(vasqid_v, DLB2_SYS_LDB_VASQID_V_VASQID_V);\n-\n-\t\toffs = domain->id.phys_id * DLB2_MAX_NUM_LDB_QUEUES +\n-\t\t\tldb_queue->id.phys_id;\n-\n-\t\tDLB2_CSR_WR(hw, DLB2_SYS_LDB_VASQID_V(offs), vasqid_v);\n-\t}\n-\n-\tDLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, dir_queue, iter) {\n-\t\tu32 vasqid_v = 0;\n-\t\tunsigned int offs;\n-\n-\t\tDLB2_BIT_SET(vasqid_v, DLB2_SYS_DIR_VASQID_V_VASQID_V);\n-\n-\t\toffs = domain->id.phys_id * DLB2_MAX_NUM_DIR_PORTS(hw->ver) +\n-\t\t\tdir_queue->id.phys_id;\n-\n-\t\tDLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), vasqid_v);\n-\t}\n-\n-\tdlb2_flush_csr(hw);\n-\n-\tdomain->started = true;\n-\n-\tresp->status = 0;\n-\n-\treturn 0;\n-}\n-\n-static void dlb2_log_get_dir_queue_depth(struct dlb2_hw *hw,\n-\t\t\t\t\t u32 domain_id,\n-\t\t\t\t\t u32 queue_id,\n-\t\t\t\t\t bool vdev_req,\n-\t\t\t\t\t unsigned int vdev_id)\n-{\n-\tDLB2_HW_DBG(hw, \"DLB2 get directed queue depth:\\n\");\n-\tif (vdev_req)\n-\t\tDLB2_HW_DBG(hw, \"(Request from vdev %d)\\n\", vdev_id);\n-\tDLB2_HW_DBG(hw, \"\\tDomain ID: %d\\n\", domain_id);\n-\tDLB2_HW_DBG(hw, \"\\tQueue ID: %d\\n\", queue_id);\n-}\n-\n-/**\n- * dlb2_hw_get_dir_queue_depth() - returns the depth of a directed queue\n- * @hw: dlb2_hw handle for a particular device.\n- * @domain_id: domain ID.\n- * @args: queue depth args\n- * @resp: response structure.\n- * @vdev_req: indicates whether this request came from a vdev.\n- * @vdev_id: If vdev_req is true, this contains the vdev's ID.\n- *\n- * This function returns the depth of a directed queue.\n- *\n- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual\n- * device.\n- *\n- * Return:\n- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is\n- * assigned a detailed error code from enum dlb2_error. 
If successful, resp->id\n- * contains the depth.\n- *\n- * Errors:\n- * EINVAL - Invalid domain ID or queue ID.\n- */\n-int dlb2_hw_get_dir_queue_depth(struct dlb2_hw *hw,\n-\t\t\t\tu32 domain_id,\n-\t\t\t\tstruct dlb2_get_dir_queue_depth_args *args,\n-\t\t\t\tstruct dlb2_cmd_response *resp,\n-\t\t\t\tbool vdev_req,\n-\t\t\t\tunsigned int vdev_id)\n-{\n-\tstruct dlb2_dir_pq_pair *queue;\n-\tstruct dlb2_hw_domain *domain;\n-\tint id;\n-\n-\tid = domain_id;\n-\n-\tdlb2_log_get_dir_queue_depth(hw, domain_id, args->queue_id,\n-\t\t\t\t     vdev_req, vdev_id);\n-\n-\tdomain = dlb2_get_domain_from_id(hw, id, vdev_req, vdev_id);\n-\tif (!domain) {\n-\t\tresp->status = DLB2_ST_INVALID_DOMAIN_ID;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tid = args->queue_id;\n-\n-\tqueue = dlb2_get_domain_used_dir_pq(hw, id, vdev_req, domain);\n-\tif (!queue) {\n-\t\tresp->status = DLB2_ST_INVALID_QID;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tresp->id = dlb2_dir_queue_depth(hw, queue);\n-\n-\treturn 0;\n-}\n-\n-static void dlb2_log_get_ldb_queue_depth(struct dlb2_hw *hw,\n-\t\t\t\t\t u32 domain_id,\n-\t\t\t\t\t u32 queue_id,\n-\t\t\t\t\t bool vdev_req,\n-\t\t\t\t\t unsigned int vdev_id)\n-{\n-\tDLB2_HW_DBG(hw, \"DLB2 get load-balanced queue depth:\\n\");\n-\tif (vdev_req)\n-\t\tDLB2_HW_DBG(hw, \"(Request from vdev %d)\\n\", vdev_id);\n-\tDLB2_HW_DBG(hw, \"\\tDomain ID: %d\\n\", domain_id);\n-\tDLB2_HW_DBG(hw, \"\\tQueue ID: %d\\n\", queue_id);\n-}\n-\n-/**\n- * dlb2_hw_get_ldb_queue_depth() - returns the depth of a load-balanced queue\n- * @hw: dlb2_hw handle for a particular device.\n- * @domain_id: domain ID.\n- * @args: queue depth args\n- * @resp: response structure.\n- * @vdev_req: indicates whether this request came from a vdev.\n- * @vdev_id: If vdev_req is true, this contains the vdev's ID.\n- *\n- * This function returns the depth of a load-balanced queue.\n- *\n- * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual\n- * device.\n- *\n- * Return:\n- * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is\n- * assigned a detailed error code from enum dlb2_error. 
If successful, resp->id\n- * contains the depth.\n- *\n- * Errors:\n- * EINVAL - Invalid domain ID or queue ID.\n- */\n-int dlb2_hw_get_ldb_queue_depth(struct dlb2_hw *hw,\n-\t\t\t\tu32 domain_id,\n-\t\t\t\tstruct dlb2_get_ldb_queue_depth_args *args,\n-\t\t\t\tstruct dlb2_cmd_response *resp,\n-\t\t\t\tbool vdev_req,\n-\t\t\t\tunsigned int vdev_id)\n-{\n-\tstruct dlb2_hw_domain *domain;\n-\tstruct dlb2_ldb_queue *queue;\n-\n-\tdlb2_log_get_ldb_queue_depth(hw, domain_id, args->queue_id,\n-\t\t\t\t     vdev_req, vdev_id);\n-\n-\tdomain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);\n-\tif (!domain) {\n-\t\tresp->status = DLB2_ST_INVALID_DOMAIN_ID;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tqueue = dlb2_get_domain_ldb_queue(args->queue_id, vdev_req, domain);\n-\tif (!queue) {\n-\t\tresp->status = DLB2_ST_INVALID_QID;\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tresp->id = dlb2_ldb_queue_depth(hw, queue);\n-\n-\treturn 0;\n-}\n-\n-/**\n- * dlb2_finish_unmap_qid_procedures() - finish any pending unmap procedures\n- * @hw: dlb2_hw handle for a particular device.\n- *\n- * This function attempts to finish any outstanding unmap procedures.\n- * This function should be called by the kernel thread responsible for\n- * finishing map/unmap procedures.\n- *\n- * Return:\n- * Returns the number of procedures that weren't completed.\n- */\n-unsigned int dlb2_finish_unmap_qid_procedures(struct dlb2_hw *hw)\n-{\n-\tint i, num = 0;\n-\n-\t/* Finish queue unmap jobs for any domain that needs it */\n-\tfor (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {\n-\t\tstruct dlb2_hw_domain *domain = &hw->domains[i];\n-\n-\t\tnum += dlb2_domain_finish_unmap_qid_procedures(hw, domain);\n-\t}\n-\n-\treturn num;\n-}\n-\n-/**\n- * dlb2_finish_map_qid_procedures() - finish any pending map procedures\n- * @hw: dlb2_hw handle for a particular device.\n- *\n- * This function attempts to finish any outstanding map procedures.\n- * This function should be called by the kernel thread responsible for\n- * finishing map/unmap procedures.\n- *\n- * Return:\n- * Returns the number of procedures that weren't completed.\n- */\n-unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)\n-{\n-\tint i, num = 0;\n-\n-\t/* Finish queue map jobs for any domain that needs it */\n-\tfor (i = 0; i < DLB2_MAX_NUM_DOMAINS; i++) {\n-\t\tstruct dlb2_hw_domain *domain = &hw->domains[i];\n-\n-\t\tnum += dlb2_domain_finish_map_qid_procedures(hw, domain);\n-\t}\n-\n-\treturn num;\n-}\n-\n-/**\n- * dlb2_hw_enable_sparse_dir_cq_mode() - enable sparse mode for directed ports.\n- * @hw: dlb2_hw handle for a particular device.\n- *\n- * This function must be called prior to configuring scheduling domains.\n- */\n-void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)\n-{\n-\tu32 ctrl;\n-\n-\tctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);\n-\n-\tDLB2_BIT_SET(ctrl,\n-\t\t     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE);\n-\n-\tDLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);\n-}\n-\n-/**\n- * dlb2_hw_enable_sparse_ldb_cq_mode() - enable sparse mode for load-balanced\n- *\tports.\n- * @hw: dlb2_hw handle for a particular device.\n- *\n- * This function must be called prior to configuring scheduling domains.\n- */\n-void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)\n-{\n-\tu32 ctrl;\n-\n-\tctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);\n-\n-\tDLB2_BIT_SET(ctrl,\n-\t\t     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE);\n-\n-\tDLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);\n-}\n-\n-/**\n- * dlb2_get_group_sequence_numbers() 
- return a group's number of SNs per queue\n- * @hw: dlb2_hw handle for a particular device.\n- * @group_id: sequence number group ID.\n- *\n- * This function returns the configured number of sequence numbers per queue\n- * for the specified group.\n- *\n- * Return:\n- * Returns -EINVAL if group_id is invalid, else the group's SNs per queue.\n- */\n-int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, u32 group_id)\n-{\n-\tif (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)\n-\t\treturn -EINVAL;\n-\n-\treturn hw->rsrcs.sn_groups[group_id].sequence_numbers_per_queue;\n-}\n-\n-/**\n- * dlb2_get_group_sequence_number_occupancy() - return a group's in-use slots\n- * @hw: dlb2_hw handle for a particular device.\n- * @group_id: sequence number group ID.\n- *\n- * This function returns the group's number of in-use slots (i.e. load-balanced\n- * queues using the specified group).\n- *\n- * Return:\n- * Returns -EINVAL if group_id is invalid, else the group's number of in-use\n- * slots.\n- */\n-int dlb2_get_group_sequence_number_occupancy(struct dlb2_hw *hw, u32 group_id)\n-{\n-\tif (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)\n-\t\treturn -EINVAL;\n-\n-\treturn dlb2_sn_group_used_slots(&hw->rsrcs.sn_groups[group_id]);\n-}\n-\n-static void dlb2_log_set_group_sequence_numbers(struct dlb2_hw *hw,\n-\t\t\t\t\t\tu32 group_id,\n-\t\t\t\t\t\tu32 val)\n-{\n-\tDLB2_HW_DBG(hw, \"DLB2 set group sequence numbers:\\n\");\n-\tDLB2_HW_DBG(hw, \"\\tGroup ID: %u\\n\", group_id);\n-\tDLB2_HW_DBG(hw, \"\\tValue:    %u\\n\", val);\n-}\n-\n-/**\n- * dlb2_set_group_sequence_numbers() - assign a group's number of SNs per queue\n- * @hw: dlb2_hw handle for a particular device.\n- * @group_id: sequence number group ID.\n- * @val: requested number of sequence numbers per queue.\n- *\n- * This function configures the group's number of sequence numbers per queue.\n- * val can be a power-of-two between 64 and 1024, inclusive. This setting can\n- * be configured until the first ordered load-balanced queue is configured, at\n- * which point the configuration is locked.\n- *\n- * Return:\n- * Returns 0 upon success; -EINVAL if group_id or val is invalid, -EPERM if an\n- * ordered queue is configured.\n- */\n-int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,\n-\t\t\t\t    u32 group_id,\n-\t\t\t\t    u32 val)\n-{\n-\tconst u32 valid_allocations[] = {64, 128, 256, 512, 1024};\n-\tstruct dlb2_sn_group *group;\n-\tu32 sn_mode = 0;\n-\tint mode;\n-\n-\tif (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)\n-\t\treturn -EINVAL;\n-\n-\tgroup = &hw->rsrcs.sn_groups[group_id];\n-\n-\t/*\n-\t * Once the first load-balanced queue using an SN group is configured,\n-\t * the group cannot be changed.\n-\t */\n-\tif (group->slot_use_bitmap != 0)\n-\t\treturn -EPERM;\n-\n-\tfor (mode = 0; mode < DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES; mode++)\n-\t\tif (val == valid_allocations[mode])\n-\t\t\tbreak;\n-\n-\tif (mode == DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES)\n-\t\treturn -EINVAL;\n-\n-\tgroup->mode = mode;\n-\tgroup->sequence_numbers_per_queue = val;\n-\n-\tDLB2_BITS_SET(sn_mode, hw->rsrcs.sn_groups[0].mode,\n-\t\t DLB2_RO_GRP_SN_MODE_SN_MODE_0);\n-\tDLB2_BITS_SET(sn_mode, hw->rsrcs.sn_groups[1].mode,\n-\t\t DLB2_RO_GRP_SN_MODE_SN_MODE_1);\n-\n-\tDLB2_CSR_WR(hw, DLB2_RO_GRP_SN_MODE(hw->ver), sn_mode);\n-\n-\tdlb2_log_set_group_sequence_numbers(hw, group_id, val);\n-\n-\treturn 0;\n-}\n-\n",
    "prefixes": [
        "v4",
        "21/27"
    ]
}