get:
Show a patch.

patch:
Partially update a patch.

put:
Update a patch.
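
For illustration, the same patch resource can also be fetched programmatically. The following is a minimal sketch, assuming Python 3 with only the standard library and that anonymous read access is sufficient; updating a patch via PUT or PATCH would additionally require API credentials, which are not shown here.

# A minimal sketch: fetch this patch as JSON and print a few of its fields.
# Assumptions: Python 3 standard library only; read access needs no auth.
import json
from urllib.request import urlopen

PATCH_ID = 71415  # the patch shown in the example response below
url = f"http://patches.dpdk.org/api/patches/{PATCH_ID}/?format=json"

with urlopen(url) as resp:
    patch = json.loads(resp.read().decode("utf-8"))

# Field names match the JSON document reproduced below.
print(patch["name"])   # "[23/50] net/bnxt: update table get to use new design"
print(patch["state"])  # "superseded"
print(patch["mbox"])   # raw mbox URL

The mbox URL returned in the response points at the raw patch in mbox form, which can be applied to a local tree with `git am`.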

GET /api/patches/71415/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 71415,
    "url": "http://patches.dpdk.org/api/patches/71415/?format=api",
    "web_url": "http://patches.dpdk.org/project/dpdk/patch/20200612132934.16488-24-somnath.kotur@broadcom.com/",
    "project": {
        "id": 1,
        "url": "http://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<20200612132934.16488-24-somnath.kotur@broadcom.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/20200612132934.16488-24-somnath.kotur@broadcom.com",
    "date": "2020-06-12T13:29:07",
    "name": "[23/50] net/bnxt: update table get to use new design",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "71cc5b2d50c2b163dfb90cfa440e912c87d206ba",
    "submitter": {
        "id": 908,
        "url": "http://patches.dpdk.org/api/people/908/?format=api",
        "name": "Somnath Kotur",
        "email": "somnath.kotur@broadcom.com"
    },
    "delegate": {
        "id": 1766,
        "url": "http://patches.dpdk.org/api/users/1766/?format=api",
        "username": "ajitkhaparde",
        "first_name": "Ajit",
        "last_name": "Khaparde",
        "email": "ajit.khaparde@broadcom.com"
    },
    "mbox": "http://patches.dpdk.org/project/dpdk/patch/20200612132934.16488-24-somnath.kotur@broadcom.com/mbox/",
    "series": [
        {
            "id": 10436,
            "url": "http://patches.dpdk.org/api/series/10436/?format=api",
            "web_url": "http://patches.dpdk.org/project/dpdk/list/?series=10436",
            "date": "2020-06-12T13:28:44",
            "name": "add features for host-based flow management",
            "version": 1,
            "mbox": "http://patches.dpdk.org/series/10436/mbox/"
        }
    ],
    "comments": "http://patches.dpdk.org/api/patches/71415/comments/",
    "check": "fail",
    "checks": "http://patches.dpdk.org/api/patches/71415/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@inbox.dpdk.org",
        "Delivered-To": "patchwork@inbox.dpdk.org",
        "Received": [
            "from dpdk.org (dpdk.org [92.243.14.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id 210C1A00BE;\n\tFri, 12 Jun 2020 15:44:55 +0200 (CEST)",
            "from [92.243.14.124] (localhost [127.0.0.1])\n\tby dpdk.org (Postfix) with ESMTP id DF0431C0D8;\n\tFri, 12 Jun 2020 15:34:39 +0200 (CEST)",
            "from relay.smtp.broadcom.com (relay.smtp.broadcom.com\n [192.19.232.149]) by dpdk.org (Postfix) with ESMTP id 027F71C0B0\n for <dev@dpdk.org>; Fri, 12 Jun 2020 15:34:36 +0200 (CEST)",
            "from dhcp-10-123-153-55.dhcp.broadcom.net\n (dhcp-10-123-153-55.dhcp.broadcom.net [10.123.153.55])\n by relay.smtp.broadcom.com (Postfix) with ESMTP id C1FBC1BD7A4;\n Fri, 12 Jun 2020 06:34:34 -0700 (PDT)"
        ],
        "DKIM-Filter": "OpenDKIM Filter v2.10.3 relay.smtp.broadcom.com C1FBC1BD7A4",
        "DKIM-Signature": "v=1; a=rsa-sha256; c=relaxed/simple; d=broadcom.com;\n s=dkimrelay; t=1591968875;\n bh=gfdIgGEBBJMCOAresba/NwNYl+9rCvRzaJ8pgQ7Ui5s=;\n h=From:To:Cc:Subject:Date:In-Reply-To:References:From;\n b=g21p+RH6hcEPKa6zYKJ/WpzTTPgtmkP9Bwy2ZC3HRRMS49xy2tr+2iqe5ceKaukwu\n tqNqzISkeaMtZqvhg5/3zOaOfxmaBzxFbp90j8cllOMa74hd7nQzP7LPDYZhDSz9xn\n kD8aCsC53ItbUBkOWqoi8TklnRUnpjMzvc5d/OrA=",
        "From": "Somnath Kotur <somnath.kotur@broadcom.com>",
        "To": "dev@dpdk.org",
        "Cc": "ferruh.yigit@intel.com",
        "Date": "Fri, 12 Jun 2020 18:59:07 +0530",
        "Message-Id": "<20200612132934.16488-24-somnath.kotur@broadcom.com>",
        "X-Mailer": "git-send-email 2.10.1.613.g2cc2e70",
        "In-Reply-To": "<20200612132934.16488-1-somnath.kotur@broadcom.com>",
        "References": "<20200612132934.16488-1-somnath.kotur@broadcom.com>",
        "MIME-Version": "1.0",
        "Content-Type": "text/plain; charset=UTF-8",
        "Content-Transfer-Encoding": "8bit",
        "Subject": "[dpdk-dev] [PATCH 23/50] net/bnxt: update table get to use new\n\tdesign",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.15",
        "Precedence": "list",
        "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
        "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://mails.dpdk.org/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "From: Michael Wildt <michael.wildt@broadcom.com>\n\n- Move bulk table get implementation to new Tbl Module design.\n- Update messages for bulk table get\n- Retrieve specified table element using bulk mechanism\n- Remove deprecated resource definitions\n- Update device type configuration for P4.\n- Update RM DB HCAPI count check and fix EM internal and host\n  code such that EM DBs can be created correctly.\n- Update error logging to be info on unbind in the different modules.\n- Move RTE RSVD out of tf_resources.h\n\nSigned-off-by: Michael Wildt <michael.wildt@broadcom.com>\nReviewed-by: Randy Schacher <stuart.schacher@broadcom.com>\nReviewed-by: Ajit Kumar Khaparde <ajit.khaparde@broadcom.com>\nSigned-off-by: Venkat Duvvuru <venkatkumar.duvvuru@broadcom.com>\n---\n drivers/net/bnxt/hcapi/cfa_p40_hw.h       |    2 +-\n drivers/net/bnxt/hcapi/cfa_p40_tbl.h      |    2 +-\n drivers/net/bnxt/hcapi/hcapi_cfa.h        |    2 +\n drivers/net/bnxt/meson.build              |    3 +-\n drivers/net/bnxt/tf_core/Makefile         |    2 -\n drivers/net/bnxt/tf_core/tf_common.h      |   55 +-\n drivers/net/bnxt/tf_core/tf_core.c        |   86 +-\n drivers/net/bnxt/tf_core/tf_device.h      |   24 +-\n drivers/net/bnxt/tf_core/tf_device_p4.c   |    4 +-\n drivers/net/bnxt/tf_core/tf_device_p4.h   |    5 +-\n drivers/net/bnxt/tf_core/tf_em.h          |   88 +-\n drivers/net/bnxt/tf_core/tf_em_common.c   |   29 +-\n drivers/net/bnxt/tf_core/tf_em_internal.c |   59 +-\n drivers/net/bnxt/tf_core/tf_identifier.c  |   14 +-\n drivers/net/bnxt/tf_core/tf_msg.c         |   31 +-\n drivers/net/bnxt/tf_core/tf_msg.h         |    8 +-\n drivers/net/bnxt/tf_core/tf_resources.h   |  529 -----\n drivers/net/bnxt/tf_core/tf_rm.c          | 3695 ++++++-----------------------\n drivers/net/bnxt/tf_core/tf_rm.h          |  539 +++--\n drivers/net/bnxt/tf_core/tf_rm_new.c      |  907 -------\n drivers/net/bnxt/tf_core/tf_rm_new.h      |  446 ----\n drivers/net/bnxt/tf_core/tf_session.h     |  214 +-\n drivers/net/bnxt/tf_core/tf_tbl.c         |  478 +++-\n drivers/net/bnxt/tf_core/tf_tbl.h         |  436 +++-\n drivers/net/bnxt/tf_core/tf_tbl_type.c    |  342 ---\n drivers/net/bnxt/tf_core/tf_tbl_type.h    |  318 ---\n drivers/net/bnxt/tf_core/tf_tcam.c        |   15 +-\n 27 files changed, 2088 insertions(+), 6245 deletions(-)\n delete mode 100644 drivers/net/bnxt/tf_core/tf_rm_new.c\n delete mode 100644 drivers/net/bnxt/tf_core/tf_rm_new.h\n delete mode 100644 drivers/net/bnxt/tf_core/tf_tbl_type.c\n delete mode 100644 drivers/net/bnxt/tf_core/tf_tbl_type.h",
    "diff": "diff --git a/drivers/net/bnxt/hcapi/cfa_p40_hw.h b/drivers/net/bnxt/hcapi/cfa_p40_hw.h\nindex 1c51da8..efaf607 100644\n--- a/drivers/net/bnxt/hcapi/cfa_p40_hw.h\n+++ b/drivers/net/bnxt/hcapi/cfa_p40_hw.h\n@@ -5,7 +5,7 @@\n /*\n  * Name:  cfa_p40_hw.h\n  *\n- * Description: header for SWE based on TFLIB2.0\n+ * Description: header for SWE based on Truflow\n  *\n  * Date:  taken from 12/16/19 17:18:12\n  *\ndiff --git a/drivers/net/bnxt/hcapi/cfa_p40_tbl.h b/drivers/net/bnxt/hcapi/cfa_p40_tbl.h\nindex 4238561..c30e4f4 100644\n--- a/drivers/net/bnxt/hcapi/cfa_p40_tbl.h\n+++ b/drivers/net/bnxt/hcapi/cfa_p40_tbl.h\n@@ -5,7 +5,7 @@\n /*\n  * Name:  cfa_p40_tbl.h\n  *\n- * Description: header for SWE based on TFLIB2.0\n+ * Description: header for SWE based on Truflow\n  *\n  * Date:  12/16/19 17:18:12\n  *\ndiff --git a/drivers/net/bnxt/hcapi/hcapi_cfa.h b/drivers/net/bnxt/hcapi/hcapi_cfa.h\nindex a27c749..d2a494e 100644\n--- a/drivers/net/bnxt/hcapi/hcapi_cfa.h\n+++ b/drivers/net/bnxt/hcapi/hcapi_cfa.h\n@@ -244,6 +244,8 @@ int hcapi_cfa_p4_wc_tcam_hwop(struct hcapi_cfa_hwop *op,\n \t\t\t       struct hcapi_cfa_data *obj_data);\n int hcapi_cfa_p4_wc_tcam_rec_hwop(struct hcapi_cfa_hwop *op,\n \t\t\t\t   struct hcapi_cfa_data *obj_data);\n+int hcapi_cfa_p4_mirror_hwop(struct hcapi_cfa_hwop *op,\n+\t\t\t     struct hcapi_cfa_data *mirror);\n #endif /* SUPPORT_CFA_HW_P4 */\n /**\n  *  HCAPI CFA device HW operation function callback definition\ndiff --git a/drivers/net/bnxt/meson.build b/drivers/net/bnxt/meson.build\nindex 35038dc..7f3ec62 100644\n--- a/drivers/net/bnxt/meson.build\n+++ b/drivers/net/bnxt/meson.build\n@@ -41,10 +41,9 @@ sources = files('bnxt_cpr.c',\n \t'tf_core/tf_identifier.c',\n \t'tf_core/tf_shadow_tbl.c',\n \t'tf_core/tf_shadow_tcam.c',\n-\t'tf_core/tf_tbl_type.c',\n \t'tf_core/tf_tcam.c',\n \t'tf_core/tf_util.c',\n-\t'tf_core/tf_rm_new.c',\n+\t'tf_core/tf_rm.c',\n \n \t'hcapi/hcapi_cfa_p4.c',\n \ndiff --git a/drivers/net/bnxt/tf_core/Makefile b/drivers/net/bnxt/tf_core/Makefile\nindex 6ae5c34..b31ed60 100644\n--- a/drivers/net/bnxt/tf_core/Makefile\n+++ b/drivers/net/bnxt/tf_core/Makefile\n@@ -23,10 +23,8 @@ SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_device_p4.c\n SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_identifier.c\n SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tbl.c\n SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_shadow_tcam.c\n-SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tbl_type.c\n SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_tcam.c\n SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_util.c\n-SRCS-$(CONFIG_RTE_LIBRTE_BNXT_PMD) += tf_core/tf_rm_new.c\n \n \n SYMLINK-$(CONFIG_RTE_LIBRTE_BNXT_PMD)-include += tf_core/tf_core.h\ndiff --git a/drivers/net/bnxt/tf_core/tf_common.h b/drivers/net/bnxt/tf_core/tf_common.h\nindex ec3bca8..b982203 100644\n--- a/drivers/net/bnxt/tf_core/tf_common.h\n+++ b/drivers/net/bnxt/tf_core/tf_common.h\n@@ -6,52 +6,11 @@\n #ifndef _TF_COMMON_H_\n #define _TF_COMMON_H_\n \n-/* Helper to check the parms */\n-#define TF_CHECK_PARMS_SESSION(tfp, parms) do {\t\\\n-\t\tif ((parms) == NULL || (tfp) == NULL) { \\\n-\t\t\tTFP_DRV_LOG(ERR, \"Invalid Argument(s)\\n\"); \\\n-\t\t\treturn -EINVAL; \\\n-\t\t} \\\n-\t\tif ((tfp)->session == NULL || \\\n-\t\t    (tfp)->session->core_data == NULL) { \\\n-\t\t\tTFP_DRV_LOG(ERR, \"%s: session error\\n\", \\\n-\t\t\t\t    tf_dir_2_str((parms)->dir)); \\\n-\t\t\treturn -EINVAL; \\\n-\t\t} \\\n-\t} while (0)\n-\n-#define TF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms) do 
{\t\\\n-\t\tif ((parms) == NULL || (tfp) == NULL) { \\\n-\t\t\tTFP_DRV_LOG(ERR, \"Invalid Argument(s)\\n\"); \\\n-\t\t\treturn -EINVAL; \\\n-\t\t} \\\n-\t\tif ((tfp)->session == NULL || \\\n-\t\t    (tfp)->session->core_data == NULL) { \\\n-\t\t\tTFP_DRV_LOG(ERR, \"Session error\\n\"); \\\n-\t\t\treturn -EINVAL; \\\n-\t\t} \\\n-\t} while (0)\n-\n-#define TF_CHECK_PARMS(tfp, parms) do {\t\\\n-\t\tif ((parms) == NULL || (tfp) == NULL) { \\\n-\t\t\tTFP_DRV_LOG(ERR, \"Invalid Argument(s)\\n\"); \\\n-\t\t\treturn -EINVAL; \\\n-\t\t} \\\n-\t} while (0)\n-\n-#define TF_CHECK_TFP_SESSION(tfp) do { \\\n-\t\tif ((tfp) == NULL) { \\\n-\t\t\tTFP_DRV_LOG(ERR, \"Invalid Argument(s)\\n\"); \\\n-\t\t\treturn -EINVAL; \\\n-\t\t} \\\n-\t\tif ((tfp)->session == NULL || \\\n-\t\t    (tfp)->session->core_data == NULL) { \\\n-\t\t\tTFP_DRV_LOG(ERR, \"Session error\\n\"); \\\n-\t\t\treturn -EINVAL; \\\n-\t\t} \\\n-\t} while (0)\n-\n+/* Helpers to performs parameter check */\n \n+/**\n+ * Checks 1 parameter against NULL.\n+ */\n #define TF_CHECK_PARMS1(parms) do {\t\t\t\t\t\\\n \t\tif ((parms) == NULL) {\t\t\t\t\t\\\n \t\t\tTFP_DRV_LOG(ERR, \"Invalid Argument(s)\\n\");\t\\\n@@ -59,6 +18,9 @@\n \t\t}\t\t\t\t\t\t\t\\\n \t} while (0)\n \n+/**\n+ * Checks 2 parameters against NULL.\n+ */\n #define TF_CHECK_PARMS2(parms1, parms2) do {\t\t\t\t\\\n \t\tif ((parms1) == NULL || (parms2) == NULL) {\t\t\\\n \t\t\tTFP_DRV_LOG(ERR, \"Invalid Argument(s)\\n\");\t\\\n@@ -66,6 +28,9 @@\n \t\t}\t\t\t\t\t\t\t\\\n \t} while (0)\n \n+/**\n+ * Checks 3 parameters against NULL.\n+ */\n #define TF_CHECK_PARMS3(parms1, parms2, parms3) do {\t\t\t\\\n \t\tif ((parms1) == NULL ||\t\t\t\t\t\\\n \t\t    (parms2) == NULL ||\t\t\t\t\t\\\ndiff --git a/drivers/net/bnxt/tf_core/tf_core.c b/drivers/net/bnxt/tf_core/tf_core.c\nindex 8b3e15c..8727900 100644\n--- a/drivers/net/bnxt/tf_core/tf_core.c\n+++ b/drivers/net/bnxt/tf_core/tf_core.c\n@@ -186,7 +186,7 @@ int tf_insert_em_entry(struct tf *tfp,\n \tstruct tf_dev_info     *dev;\n \tint rc;\n \n-\tTF_CHECK_PARMS_SESSION(tfp, parms);\n+\tTF_CHECK_PARMS2(tfp, parms);\n \n \t/* Retrieve the session information */\n \trc = tf_session_get_session(tfp, &tfs);\n@@ -241,7 +241,7 @@ int tf_delete_em_entry(struct tf *tfp,\n \tstruct tf_dev_info     *dev;\n \tint rc;\n \n-\tTF_CHECK_PARMS_SESSION(tfp, parms);\n+\tTF_CHECK_PARMS2(tfp, parms);\n \n \t/* Retrieve the session information */\n \trc = tf_session_get_session(tfp, &tfs);\n@@ -523,7 +523,7 @@ int\n tf_get_tcam_entry(struct tf *tfp __rte_unused,\n \t\t  struct tf_get_tcam_entry_parms *parms __rte_unused)\n {\n-\tTF_CHECK_PARMS_SESSION(tfp, parms);\n+\tTF_CHECK_PARMS2(tfp, parms);\n \treturn -EOPNOTSUPP;\n }\n \n@@ -821,7 +821,80 @@ tf_get_tbl_entry(struct tf *tfp,\n \treturn rc;\n }\n \n-/* API defined in tf_core.h */\n+int\n+tf_bulk_get_tbl_entry(struct tf *tfp,\n+\t\t struct tf_bulk_get_tbl_entry_parms *parms)\n+{\n+\tint rc = 0;\n+\tstruct tf_session *tfs;\n+\tstruct tf_dev_info *dev;\n+\tstruct tf_tbl_get_bulk_parms bparms;\n+\n+\tTF_CHECK_PARMS2(tfp, parms);\n+\n+\t/* Can't do static initialization due to UT enum check */\n+\tmemset(&bparms, 0, sizeof(struct tf_tbl_get_bulk_parms));\n+\n+\t/* Retrieve the session information */\n+\trc = tf_session_get_session(tfp, &tfs);\n+\tif (rc) {\n+\t\tTFP_DRV_LOG(ERR,\n+\t\t\t    \"%s: Failed to lookup session, rc:%s\\n\",\n+\t\t\t    tf_dir_2_str(parms->dir),\n+\t\t\t    strerror(-rc));\n+\t\treturn rc;\n+\t}\n+\n+\t/* Retrieve the device information */\n+\trc = tf_session_get_device(tfs, &dev);\n+\tif 
(rc) {\n+\t\tTFP_DRV_LOG(ERR,\n+\t\t\t    \"%s: Failed to lookup device, rc:%s\\n\",\n+\t\t\t    tf_dir_2_str(parms->dir),\n+\t\t\t    strerror(-rc));\n+\t\treturn rc;\n+\t}\n+\n+\tif (parms->type == TF_TBL_TYPE_EXT) {\n+\t\t/* Not supported, yet */\n+\t\trc = -EOPNOTSUPP;\n+\t\tTFP_DRV_LOG(ERR,\n+\t\t\t    \"%s, External table type not supported, rc:%s\\n\",\n+\t\t\t    tf_dir_2_str(parms->dir),\n+\t\t\t    strerror(-rc));\n+\n+\t\treturn rc;\n+\t}\n+\n+\t/* Internal table type processing */\n+\n+\tif (dev->ops->tf_dev_get_bulk_tbl == NULL) {\n+\t\trc = -EOPNOTSUPP;\n+\t\tTFP_DRV_LOG(ERR,\n+\t\t\t    \"%s: Operation not supported, rc:%s\\n\",\n+\t\t\t    tf_dir_2_str(parms->dir),\n+\t\t\t    strerror(-rc));\n+\t\treturn -EOPNOTSUPP;\n+\t}\n+\n+\tbparms.dir = parms->dir;\n+\tbparms.type = parms->type;\n+\tbparms.starting_idx = parms->starting_idx;\n+\tbparms.num_entries = parms->num_entries;\n+\tbparms.entry_sz_in_bytes = parms->entry_sz_in_bytes;\n+\tbparms.physical_mem_addr = parms->physical_mem_addr;\n+\trc = dev->ops->tf_dev_get_bulk_tbl(tfp, &bparms);\n+\tif (rc) {\n+\t\tTFP_DRV_LOG(ERR,\n+\t\t\t    \"%s: Table get bulk failed, rc:%s\\n\",\n+\t\t\t    tf_dir_2_str(parms->dir),\n+\t\t\t    strerror(-rc));\n+\t\treturn rc;\n+\t}\n+\n+\treturn rc;\n+}\n+\n int\n tf_alloc_tbl_scope(struct tf *tfp,\n \t\t   struct tf_alloc_tbl_scope_parms *parms)\n@@ -830,7 +903,7 @@ tf_alloc_tbl_scope(struct tf *tfp,\n \tstruct tf_dev_info *dev;\n \tint rc;\n \n-\tTF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);\n+\tTF_CHECK_PARMS2(tfp, parms);\n \n \t/* Retrieve the session information */\n \trc = tf_session_get_session(tfp, &tfs);\n@@ -861,7 +934,6 @@ tf_alloc_tbl_scope(struct tf *tfp,\n \treturn rc;\n }\n \n-/* API defined in tf_core.h */\n int\n tf_free_tbl_scope(struct tf *tfp,\n \t\t  struct tf_free_tbl_scope_parms *parms)\n@@ -870,7 +942,7 @@ tf_free_tbl_scope(struct tf *tfp,\n \tstruct tf_dev_info *dev;\n \tint rc;\n \n-\tTF_CHECK_PARMS_SESSION_NO_DIR(tfp, parms);\n+\tTF_CHECK_PARMS2(tfp, parms);\n \n \t/* Retrieve the session information */\n \trc = tf_session_get_session(tfp, &tfs);\ndiff --git a/drivers/net/bnxt/tf_core/tf_device.h b/drivers/net/bnxt/tf_core/tf_device.h\nindex 2712d10..93f3627 100644\n--- a/drivers/net/bnxt/tf_core/tf_device.h\n+++ b/drivers/net/bnxt/tf_core/tf_device.h\n@@ -8,7 +8,7 @@\n \n #include \"tf_core.h\"\n #include \"tf_identifier.h\"\n-#include \"tf_tbl_type.h\"\n+#include \"tf_tbl.h\"\n #include \"tf_tcam.h\"\n \n struct tf;\n@@ -293,7 +293,27 @@ struct tf_dev_ops {\n \t *   - (-EINVAL) on failure.\n \t */\n \tint (*tf_dev_get_tbl)(struct tf *tfp,\n-\t\t\t       struct tf_tbl_get_parms *parms);\n+\t\t\t      struct tf_tbl_get_parms *parms);\n+\n+\t/**\n+\t * Retrieves the specified table type element using 'bulk'\n+\t * mechanism.\n+\t *\n+\t * This API retrieves the specified element data by invoking the\n+\t * firmware.\n+\t *\n+\t * [in] tfp\n+\t *   Pointer to TF handle\n+\t *\n+\t * [in] parms\n+\t *   Pointer to table get bulk parameters\n+\t *\n+\t * Returns\n+\t *   - (0) if successful.\n+\t *   - (-EINVAL) on failure.\n+\t */\n+\tint (*tf_dev_get_bulk_tbl)(struct tf *tfp,\n+\t\t\t\t   struct tf_tbl_get_bulk_parms *parms);\n \n \t/**\n \t * Allocation of a tcam element.\ndiff --git a/drivers/net/bnxt/tf_core/tf_device_p4.c b/drivers/net/bnxt/tf_core/tf_device_p4.c\nindex 127c655..e352667 100644\n--- a/drivers/net/bnxt/tf_core/tf_device_p4.c\n+++ b/drivers/net/bnxt/tf_core/tf_device_p4.c\n@@ -8,7 +8,7 @@\n \n #include \"tf_device.h\"\n #include 
\"tf_identifier.h\"\n-#include \"tf_tbl_type.h\"\n+#include \"tf_tbl.h\"\n #include \"tf_tcam.h\"\n #include \"tf_em.h\"\n \n@@ -88,6 +88,7 @@ const struct tf_dev_ops tf_dev_ops_p4_init = {\n \t.tf_dev_alloc_search_tbl = NULL,\n \t.tf_dev_set_tbl = NULL,\n \t.tf_dev_get_tbl = NULL,\n+\t.tf_dev_get_bulk_tbl = NULL,\n \t.tf_dev_alloc_tcam = NULL,\n \t.tf_dev_free_tcam = NULL,\n \t.tf_dev_alloc_search_tcam = NULL,\n@@ -114,6 +115,7 @@ const struct tf_dev_ops tf_dev_ops_p4 = {\n \t.tf_dev_alloc_search_tbl = NULL,\n \t.tf_dev_set_tbl = tf_tbl_set,\n \t.tf_dev_get_tbl = tf_tbl_get,\n+\t.tf_dev_get_bulk_tbl = tf_tbl_bulk_get,\n \t.tf_dev_alloc_tcam = tf_tcam_alloc,\n \t.tf_dev_free_tcam = tf_tcam_free,\n \t.tf_dev_alloc_search_tcam = NULL,\ndiff --git a/drivers/net/bnxt/tf_core/tf_device_p4.h b/drivers/net/bnxt/tf_core/tf_device_p4.h\nindex da6dd65..473e4ea 100644\n--- a/drivers/net/bnxt/tf_core/tf_device_p4.h\n+++ b/drivers/net/bnxt/tf_core/tf_device_p4.h\n@@ -9,7 +9,7 @@\n #include <cfa_resource_types.h>\n \n #include \"tf_core.h\"\n-#include \"tf_rm_new.h\"\n+#include \"tf_rm.h\"\n \n struct tf_rm_element_cfg tf_ident_p4[TF_IDENT_TYPE_MAX] = {\n \t{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_L2_CTXT_REMAP },\n@@ -41,8 +41,7 @@ struct tf_rm_element_cfg tf_tbl_p4[TF_TBL_TYPE_MAX] = {\n \t{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_ENCAP_64B },\n \t{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC },\n \t{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV4 },\n-\t/* CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6 */\n-\t{ TF_RM_ELEM_CFG_NULL, CFA_RESOURCE_TYPE_INVALID },\n+\t{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_SP_MAC_IPV6 },\n \t{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_COUNTER_64B },\n \t{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_SPORT },\n \t{ TF_RM_ELEM_CFG_HCAPI, CFA_RESOURCE_TYPE_P4_NAT_DPORT },\ndiff --git a/drivers/net/bnxt/tf_core/tf_em.h b/drivers/net/bnxt/tf_core/tf_em.h\nindex cf799c2..7042f44 100644\n--- a/drivers/net/bnxt/tf_core/tf_em.h\n+++ b/drivers/net/bnxt/tf_core/tf_em.h\n@@ -23,6 +23,56 @@\n #define TF_EM_MAX_MASK 0x7FFF\n #define TF_EM_MAX_ENTRY (128 * 1024 * 1024)\n \n+/**\n+ * Hardware Page sizes supported for EEM:\n+ *   4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.\n+ *\n+ * Round-down other page sizes to the lower hardware page\n+ * size supported.\n+ */\n+#define TF_EM_PAGE_SIZE_4K 12\n+#define TF_EM_PAGE_SIZE_8K 13\n+#define TF_EM_PAGE_SIZE_64K 16\n+#define TF_EM_PAGE_SIZE_256K 18\n+#define TF_EM_PAGE_SIZE_1M 20\n+#define TF_EM_PAGE_SIZE_2M 21\n+#define TF_EM_PAGE_SIZE_4M 22\n+#define TF_EM_PAGE_SIZE_1G 30\n+\n+/* Set page size */\n+#define PAGE_SIZE TF_EM_PAGE_SIZE_2M\n+\n+#if (PAGE_SIZE == TF_EM_PAGE_SIZE_4K)\t/** 4K */\n+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4K\n+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4K\n+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_8K)\t/** 8K */\n+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_8K\n+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_8K\n+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_64K)\t/** 64K */\n+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_64K\n+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_64K\n+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_256K)\t/** 256K */\n+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_256K\n+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_256K\n+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_1M)\t/** 1M */\n+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1M\n+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1M\n+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_2M)\t/** 2M 
*/\n+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_2M\n+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M\n+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_4M)\t/** 4M */\n+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4M\n+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4M\n+#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_1G)\t/** 1G */\n+#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1G\n+#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G\n+#else\n+#error \"Invalid Page Size specified. Please use a TF_EM_PAGE_SIZE_n define\"\n+#endif\n+\n+#define TF_EM_PAGE_SIZE\t(1 << TF_EM_PAGE_SHIFT)\n+#define TF_EM_PAGE_ALIGNMENT (1 << TF_EM_PAGE_SHIFT)\n+\n /*\n  * Used to build GFID:\n  *\n@@ -80,13 +130,43 @@ struct tf_em_cfg_parms {\n };\n \n /**\n- * @page table Table\n+ * @page em EM\n  *\n  * @ref tf_alloc_eem_tbl_scope\n  *\n  * @ref tf_free_eem_tbl_scope_cb\n  *\n- * @ref tbl_scope_cb_find\n+ * @ref tf_em_insert_int_entry\n+ *\n+ * @ref tf_em_delete_int_entry\n+ *\n+ * @ref tf_em_insert_ext_entry\n+ *\n+ * @ref tf_em_delete_ext_entry\n+ *\n+ * @ref tf_em_insert_ext_sys_entry\n+ *\n+ * @ref tf_em_delete_ext_sys_entry\n+ *\n+ * @ref tf_em_int_bind\n+ *\n+ * @ref tf_em_int_unbind\n+ *\n+ * @ref tf_em_ext_common_bind\n+ *\n+ * @ref tf_em_ext_common_unbind\n+ *\n+ * @ref tf_em_ext_host_alloc\n+ *\n+ * @ref tf_em_ext_host_free\n+ *\n+ * @ref tf_em_ext_system_alloc\n+ *\n+ * @ref tf_em_ext_system_free\n+ *\n+ * @ref tf_em_ext_common_free\n+ *\n+ * @ref tf_em_ext_common_alloc\n  */\n \n /**\n@@ -328,7 +408,7 @@ int tf_em_ext_host_free(struct tf *tfp,\n  *   -EINVAL - Parameter error\n  */\n int tf_em_ext_system_alloc(struct tf *tfp,\n-\t\t\t struct tf_alloc_tbl_scope_parms *parms);\n+\t\t\t   struct tf_alloc_tbl_scope_parms *parms);\n \n /**\n  * Free for external EEM using system memory\n@@ -344,7 +424,7 @@ int tf_em_ext_system_alloc(struct tf *tfp,\n  *   -EINVAL - Parameter error\n  */\n int tf_em_ext_system_free(struct tf *tfp,\n-\t\t\tstruct tf_free_tbl_scope_parms *parms);\n+\t\t\t  struct tf_free_tbl_scope_parms *parms);\n \n /**\n  * Common free for external EEM using host or system memory\ndiff --git a/drivers/net/bnxt/tf_core/tf_em_common.c b/drivers/net/bnxt/tf_core/tf_em_common.c\nindex ba6aa7a..d0d80da 100644\n--- a/drivers/net/bnxt/tf_core/tf_em_common.c\n+++ b/drivers/net/bnxt/tf_core/tf_em_common.c\n@@ -194,12 +194,13 @@ tf_em_ext_common_bind(struct tf *tfp,\n \tint rc;\n \tint i;\n \tstruct tf_rm_create_db_parms db_cfg = { 0 };\n+\tuint8_t db_exists = 0;\n \n \tTF_CHECK_PARMS2(tfp, parms);\n \n \tif (init) {\n \t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"Identifier already initialized\\n\");\n+\t\t\t    \"EM Ext DB already initialized\\n\");\n \t\treturn -EINVAL;\n \t}\n \n@@ -210,19 +211,29 @@ tf_em_ext_common_bind(struct tf *tfp,\n \tfor (i = 0; i < TF_DIR_MAX; i++) {\n \t\tdb_cfg.dir = i;\n \t\tdb_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;\n+\n+\t\t/* Check if we got any request to support EEM, if so\n+\t\t * we build an EM Ext DB holding Table Scopes.\n+\t\t */\n+\t\tif (db_cfg.alloc_cnt[TF_EM_TBL_TYPE_TBL_SCOPE] == 0)\n+\t\t\tcontinue;\n+\n \t\tdb_cfg.rm_db = &eem_db[i];\n \t\trc = tf_rm_create_db(tfp, &db_cfg);\n \t\tif (rc) {\n \t\t\tTFP_DRV_LOG(ERR,\n-\t\t\t\t    \"%s: EM DB creation failed\\n\",\n+\t\t\t\t    \"%s: EM Ext DB creation failed\\n\",\n \t\t\t\t    tf_dir_2_str(i));\n \n \t\t\treturn rc;\n \t\t}\n+\t\tdb_exists = 1;\n \t}\n \n-\tmem_type = parms->mem_type;\n-\tinit = 1;\n+\tif (db_exists) {\n+\t\tmem_type = parms->mem_type;\n+\t\tinit = 1;\n+\t}\n 
\n \treturn 0;\n }\n@@ -236,13 +247,11 @@ tf_em_ext_common_unbind(struct tf *tfp)\n \n \tTF_CHECK_PARMS1(tfp);\n \n-\t/* Bail if nothing has been initialized done silent as to\n-\t * allow for creation cleanup.\n-\t */\n+\t/* Bail if nothing has been initialized */\n \tif (!init) {\n-\t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"No EM DBs created\\n\");\n-\t\treturn -EINVAL;\n+\t\tTFP_DRV_LOG(INFO,\n+\t\t\t    \"No EM Ext DBs created\\n\");\n+\t\treturn 0;\n \t}\n \n \tfor (i = 0; i < TF_DIR_MAX; i++) {\ndiff --git a/drivers/net/bnxt/tf_core/tf_em_internal.c b/drivers/net/bnxt/tf_core/tf_em_internal.c\nindex 9be91ad..1c51474 100644\n--- a/drivers/net/bnxt/tf_core/tf_em_internal.c\n+++ b/drivers/net/bnxt/tf_core/tf_em_internal.c\n@@ -225,12 +225,13 @@ tf_em_int_bind(struct tf *tfp,\n \tint i;\n \tstruct tf_rm_create_db_parms db_cfg = { 0 };\n \tstruct tf_session *session;\n+\tuint8_t db_exists = 0;\n \n \tTF_CHECK_PARMS2(tfp, parms);\n \n \tif (init) {\n \t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"Identifier already initialized\\n\");\n+\t\t\t    \"EM Int DB already initialized\\n\");\n \t\treturn -EINVAL;\n \t}\n \n@@ -242,31 +243,35 @@ tf_em_int_bind(struct tf *tfp,\n \t\t\t\t  TF_SESSION_EM_POOL_SIZE);\n \t}\n \n-\t/*\n-\t * I'm not sure that this code is needed.\n-\t * leaving for now until resolved\n-\t */\n-\tif (parms->num_elements) {\n-\t\tdb_cfg.type = TF_DEVICE_MODULE_TYPE_EM;\n-\t\tdb_cfg.num_elements = parms->num_elements;\n-\t\tdb_cfg.cfg = parms->cfg;\n-\n-\t\tfor (i = 0; i < TF_DIR_MAX; i++) {\n-\t\t\tdb_cfg.dir = i;\n-\t\t\tdb_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;\n-\t\t\tdb_cfg.rm_db = &em_db[i];\n-\t\t\trc = tf_rm_create_db(tfp, &db_cfg);\n-\t\t\tif (rc) {\n-\t\t\t\tTFP_DRV_LOG(ERR,\n-\t\t\t\t\t    \"%s: EM DB creation failed\\n\",\n-\t\t\t\t\t    tf_dir_2_str(i));\n+\tdb_cfg.type = TF_DEVICE_MODULE_TYPE_EM;\n+\tdb_cfg.num_elements = parms->num_elements;\n+\tdb_cfg.cfg = parms->cfg;\n \n-\t\t\t\treturn rc;\n-\t\t\t}\n+\tfor (i = 0; i < TF_DIR_MAX; i++) {\n+\t\tdb_cfg.dir = i;\n+\t\tdb_cfg.alloc_cnt = parms->resources->em_cnt[i].cnt;\n+\n+\t\t/* Check if we got any request to support EEM, if so\n+\t\t * we build an EM Int DB holding Table Scopes.\n+\t\t */\n+\t\tif (db_cfg.alloc_cnt[TF_EM_TBL_TYPE_EM_RECORD] == 0)\n+\t\t\tcontinue;\n+\n+\t\tdb_cfg.rm_db = &em_db[i];\n+\t\trc = tf_rm_create_db(tfp, &db_cfg);\n+\t\tif (rc) {\n+\t\t\tTFP_DRV_LOG(ERR,\n+\t\t\t\t    \"%s: EM Int DB creation failed\\n\",\n+\t\t\t\t    tf_dir_2_str(i));\n+\n+\t\t\treturn rc;\n \t\t}\n+\t\tdb_exists = 1;\n \t}\n \n-\tinit = 1;\n+\tif (db_exists)\n+\t\tinit = 1;\n+\n \treturn 0;\n }\n \n@@ -280,13 +285,11 @@ tf_em_int_unbind(struct tf *tfp)\n \n \tTF_CHECK_PARMS1(tfp);\n \n-\t/* Bail if nothing has been initialized done silent as to\n-\t * allow for creation cleanup.\n-\t */\n+\t/* Bail if nothing has been initialized */\n \tif (!init) {\n-\t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"No EM DBs created\\n\");\n-\t\treturn -EINVAL;\n+\t\tTFP_DRV_LOG(INFO,\n+\t\t\t    \"No EM Int DBs created\\n\");\n+\t\treturn 0;\n \t}\n \n \tsession = (struct tf_session *)tfp->session->core_data;\ndiff --git a/drivers/net/bnxt/tf_core/tf_identifier.c b/drivers/net/bnxt/tf_core/tf_identifier.c\nindex b197bb2..2113710 100644\n--- a/drivers/net/bnxt/tf_core/tf_identifier.c\n+++ b/drivers/net/bnxt/tf_core/tf_identifier.c\n@@ -7,7 +7,7 @@\n \n #include \"tf_identifier.h\"\n #include \"tf_common.h\"\n-#include \"tf_rm_new.h\"\n+#include \"tf_rm.h\"\n #include \"tf_util.h\"\n #include \"tfp.h\"\n \n@@ -35,7 +35,7 @@ tf_ident_bind(struct tf 
*tfp,\n \n \tif (init) {\n \t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"Identifier already initialized\\n\");\n+\t\t\t    \"Identifier DB already initialized\\n\");\n \t\treturn -EINVAL;\n \t}\n \n@@ -65,7 +65,7 @@ tf_ident_bind(struct tf *tfp,\n }\n \n int\n-tf_ident_unbind(struct tf *tfp __rte_unused)\n+tf_ident_unbind(struct tf *tfp)\n {\n \tint rc;\n \tint i;\n@@ -73,13 +73,11 @@ tf_ident_unbind(struct tf *tfp __rte_unused)\n \n \tTF_CHECK_PARMS1(tfp);\n \n-\t/* Bail if nothing has been initialized done silent as to\n-\t * allow for creation cleanup.\n-\t */\n+\t/* Bail if nothing has been initialized */\n \tif (!init) {\n-\t\tTFP_DRV_LOG(ERR,\n+\t\tTFP_DRV_LOG(INFO,\n \t\t\t    \"No Identifier DBs created\\n\");\n-\t\treturn -EINVAL;\n+\t\treturn 0;\n \t}\n \n \tfor (i = 0; i < TF_DIR_MAX; i++) {\ndiff --git a/drivers/net/bnxt/tf_core/tf_msg.c b/drivers/net/bnxt/tf_core/tf_msg.c\nindex d8b80bc..02d8a49 100644\n--- a/drivers/net/bnxt/tf_core/tf_msg.c\n+++ b/drivers/net/bnxt/tf_core/tf_msg.c\n@@ -871,26 +871,41 @@ tf_msg_get_tbl_entry(struct tf *tfp,\n \n int\n tf_msg_bulk_get_tbl_entry(struct tf *tfp,\n-\t\t\t  struct tf_bulk_get_tbl_entry_parms *params)\n+\t\t\t  enum tf_dir dir,\n+\t\t\t  uint16_t hcapi_type,\n+\t\t\t  uint32_t starting_idx,\n+\t\t\t  uint16_t num_entries,\n+\t\t\t  uint16_t entry_sz_in_bytes,\n+\t\t\t  uint64_t physical_mem_addr)\n {\n \tint rc;\n \tstruct tfp_send_msg_parms parms = { 0 };\n \tstruct tf_tbl_type_bulk_get_input req = { 0 };\n \tstruct tf_tbl_type_bulk_get_output resp = { 0 };\n-\tstruct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);\n+\tstruct tf_session *tfs;\n \tint data_size = 0;\n \n+\t/* Retrieve the session information */\n+\trc = tf_session_get_session(tfp, &tfs);\n+\tif (rc) {\n+\t\tTFP_DRV_LOG(ERR,\n+\t\t\t    \"%s: Failed to lookup session, rc:%s\\n\",\n+\t\t\t    tf_dir_2_str(dir),\n+\t\t\t    strerror(-rc));\n+\t\treturn rc;\n+\t}\n+\n \t/* Populate the request */\n \treq.fw_session_id =\n \t\ttfp_cpu_to_le_32(tfs->session_id.internal.fw_session_id);\n-\treq.flags = tfp_cpu_to_le_16(params->dir);\n-\treq.type = tfp_cpu_to_le_32(params->type);\n-\treq.start_index = tfp_cpu_to_le_32(params->starting_idx);\n-\treq.num_entries = tfp_cpu_to_le_32(params->num_entries);\n+\treq.flags = tfp_cpu_to_le_16(dir);\n+\treq.type = tfp_cpu_to_le_32(hcapi_type);\n+\treq.start_index = tfp_cpu_to_le_32(starting_idx);\n+\treq.num_entries = tfp_cpu_to_le_32(num_entries);\n \n-\tdata_size = params->num_entries * params->entry_sz_in_bytes;\n+\tdata_size = num_entries * entry_sz_in_bytes;\n \n-\treq.host_addr = tfp_cpu_to_le_64(params->physical_mem_addr);\n+\treq.host_addr = tfp_cpu_to_le_64(physical_mem_addr);\n \n \tMSG_PREP(parms,\n \t\t TF_KONG_MB,\ndiff --git a/drivers/net/bnxt/tf_core/tf_msg.h b/drivers/net/bnxt/tf_core/tf_msg.h\nindex 8e276d4..7432873 100644\n--- a/drivers/net/bnxt/tf_core/tf_msg.h\n+++ b/drivers/net/bnxt/tf_core/tf_msg.h\n@@ -11,7 +11,6 @@\n \n #include \"tf_tbl.h\"\n #include \"tf_rm.h\"\n-#include \"tf_rm_new.h\"\n #include \"tf_tcam.h\"\n \n struct tf;\n@@ -422,6 +421,11 @@ int tf_msg_get_tbl_entry(struct tf *tfp,\n  *  0 on Success else internal Truflow error\n  */\n int tf_msg_bulk_get_tbl_entry(struct tf *tfp,\n-\t\t\t  struct tf_bulk_get_tbl_entry_parms *parms);\n+\t\t\t      enum tf_dir dir,\n+\t\t\t      uint16_t hcapi_type,\n+\t\t\t      uint32_t starting_idx,\n+\t\t\t      uint16_t num_entries,\n+\t\t\t      uint16_t entry_sz_in_bytes,\n+\t\t\t      uint64_t physical_mem_addr);\n \n #endif  /* _TF_MSG_H_ */\ndiff 
--git a/drivers/net/bnxt/tf_core/tf_resources.h b/drivers/net/bnxt/tf_core/tf_resources.h\nindex b7b4451..4688514 100644\n--- a/drivers/net/bnxt/tf_core/tf_resources.h\n+++ b/drivers/net/bnxt/tf_core/tf_resources.h\n@@ -6,535 +6,6 @@\n #ifndef _TF_RESOURCES_H_\n #define _TF_RESOURCES_H_\n \n-/*\n- * Hardware specific MAX values\n- * NOTE: Should really come from the chip_cfg.h in some MAX form or HCAPI\n- */\n-\n-/* Common HW resources for all chip variants */\n-#define TF_NUM_L2_CTXT_TCAM      1024      /* < Number of L2 context TCAM\n-\t\t\t\t\t    * entries\n-\t\t\t\t\t    */\n-#define TF_NUM_PROF_FUNC          128      /* < Number prof_func ID */\n-#define TF_NUM_PROF_TCAM         1024      /* < Number entries in profile\n-\t\t\t\t\t    * TCAM\n-\t\t\t\t\t    */\n-#define TF_NUM_EM_PROF_ID          64      /* < Number software EM Profile\n-\t\t\t\t\t    * IDs\n-\t\t\t\t\t    */\n-#define TF_NUM_WC_PROF_ID         256      /* < Number WC profile IDs */\n-#define TF_NUM_WC_TCAM_ROW        512      /* < Number of rows in WC TCAM */\n-#define TF_NUM_METER_PROF         256      /* < Number of meter profiles */\n-#define TF_NUM_METER             1024      /* < Number of meter instances */\n-#define TF_NUM_MIRROR               2      /* < Number of mirror instances */\n-#define TF_NUM_UPAR                 2      /* < Number of UPAR instances */\n-\n-/* Wh+/SR specific HW resources */\n-#define TF_NUM_SP_TCAM            512      /* < Number of Source Property TCAM\n-\t\t\t\t\t    * entries\n-\t\t\t\t\t    */\n-\n-/* SR/SR2 specific HW resources */\n-#define TF_NUM_L2_FUNC            256      /* < Number of L2 Func */\n-\n-\n-/* Thor, SR2 common HW resources */\n-#define TF_NUM_FKB                  1      /* < Number of Flexible Key Builder\n-\t\t\t\t\t    * templates\n-\t\t\t\t\t    */\n-\n-/* SR2 specific HW resources */\n #define TF_NUM_TBL_SCOPE           16      /* < Number of TBL scopes */\n-#define TF_NUM_EPOCH0               1      /* < Number of Epoch0 */\n-#define TF_NUM_EPOCH1               1      /* < Number of Epoch1 */\n-#define TF_NUM_METADATA             8      /* < Number of MetaData Profiles */\n-#define TF_NUM_CT_STATE            32      /* < Number of Connection Tracking\n-\t\t\t\t\t    * States\n-\t\t\t\t\t    */\n-#define TF_NUM_RANGE_PROF          16      /* < Number of Range Profiles */\n-#define TF_NUM_RANGE_ENTRY (64 * 1024)     /* < Number of Range Entries */\n-#define TF_NUM_LAG_ENTRY          256      /* < Number of LAG Entries */\n-\n-/*\n- * Common for the Reserved Resource defines below:\n- *\n- * - HW Resources\n- *   For resources where a priority level plays a role, i.e. l2 ctx\n- *   tcam entries, both a number of resources and a begin/end pair is\n- *   required. 
The begin/end is used to assure TFLIB gets the correct\n- *   priority setting for that resource.\n- *\n- *   For EM records there is no priority required thus a number of\n- *   resources is sufficient.\n- *\n- *   Example, TCAM:\n- *     64 L2 CTXT TCAM entries would in a max 1024 pool be entry\n- *     0-63 as HW presents 0 as the highest priority entry.\n- *\n- * - SRAM Resources\n- *   Handled as regular resources as there is no priority required.\n- *\n- * Common for these resources is that they are handled per direction,\n- * rx/tx.\n- */\n-\n-/* HW Resources */\n-\n-/* L2 CTX */\n-#define TF_RSVD_L2_CTXT_TCAM_RX                   64\n-#define TF_RSVD_L2_CTXT_TCAM_BEGIN_IDX_RX         0\n-#define TF_RSVD_L2_CTXT_TCAM_END_IDX_RX           (TF_RSVD_L2_CTXT_RX - 1)\n-#define TF_RSVD_L2_CTXT_TCAM_TX                   960\n-#define TF_RSVD_L2_CTXT_TCAM_BEGIN_IDX_TX         0\n-#define TF_RSVD_L2_CTXT_TCAM_END_IDX_TX           (TF_RSVD_L2_CTXT_TX - 1)\n-\n-/* Profiler */\n-#define TF_RSVD_PROF_FUNC_RX                      64\n-#define TF_RSVD_PROF_FUNC_BEGIN_IDX_RX            64\n-#define TF_RSVD_PROF_FUNC_END_IDX_RX              127\n-#define TF_RSVD_PROF_FUNC_TX                      64\n-#define TF_RSVD_PROF_FUNC_BEGIN_IDX_TX            64\n-#define TF_RSVD_PROF_FUNC_END_IDX_TX              127\n-\n-#define TF_RSVD_PROF_TCAM_RX                      64\n-#define TF_RSVD_PROF_TCAM_BEGIN_IDX_RX            960\n-#define TF_RSVD_PROF_TCAM_END_IDX_RX              1023\n-#define TF_RSVD_PROF_TCAM_TX                      64\n-#define TF_RSVD_PROF_TCAM_BEGIN_IDX_TX            960\n-#define TF_RSVD_PROF_TCAM_END_IDX_TX              1023\n-\n-/* EM Profiles IDs */\n-#define TF_RSVD_EM_PROF_ID_RX                     64\n-#define TF_RSVD_EM_PROF_ID_BEGIN_IDX_RX           0\n-#define TF_RSVD_EM_PROF_ID_END_IDX_RX             63  /* Less on CU+ then SR */\n-#define TF_RSVD_EM_PROF_ID_TX                     64\n-#define TF_RSVD_EM_PROF_ID_BEGIN_IDX_TX           0\n-#define TF_RSVD_EM_PROF_ID_END_IDX_TX             63  /* Less on CU+ then SR */\n-\n-/* EM Records */\n-#define TF_RSVD_EM_REC_RX                         16000\n-#define TF_RSVD_EM_REC_BEGIN_IDX_RX               0\n-#define TF_RSVD_EM_REC_TX                         16000\n-#define TF_RSVD_EM_REC_BEGIN_IDX_TX               0\n-\n-/* Wildcard */\n-#define TF_RSVD_WC_TCAM_PROF_ID_RX                128\n-#define TF_RSVD_WC_TCAM_PROF_ID_BEGIN_IDX_RX      128\n-#define TF_RSVD_WC_TCAM_PROF_ID_END_IDX_RX        255\n-#define TF_RSVD_WC_TCAM_PROF_ID_TX                128\n-#define TF_RSVD_WC_TCAM_PROF_ID_BEGIN_IDX_TX      128\n-#define TF_RSVD_WC_TCAM_PROF_ID_END_IDX_TX        255\n-\n-#define TF_RSVD_WC_TCAM_RX                        64\n-#define TF_RSVD_WC_TCAM_BEGIN_IDX_RX              0\n-#define TF_RSVD_WC_TCAM_END_IDX_RX                63\n-#define TF_RSVD_WC_TCAM_TX                        64\n-#define TF_RSVD_WC_TCAM_BEGIN_IDX_TX              0\n-#define TF_RSVD_WC_TCAM_END_IDX_TX                63\n-\n-#define TF_RSVD_METER_PROF_RX                     0\n-#define TF_RSVD_METER_PROF_BEGIN_IDX_RX           0\n-#define TF_RSVD_METER_PROF_END_IDX_RX             0\n-#define TF_RSVD_METER_PROF_TX                     0\n-#define TF_RSVD_METER_PROF_BEGIN_IDX_TX           0\n-#define TF_RSVD_METER_PROF_END_IDX_TX             0\n-\n-#define TF_RSVD_METER_INST_RX                     0\n-#define TF_RSVD_METER_INST_BEGIN_IDX_RX           0\n-#define TF_RSVD_METER_INST_END_IDX_RX             0\n-#define TF_RSVD_METER_INST_TX                     
0\n-#define TF_RSVD_METER_INST_BEGIN_IDX_TX           0\n-#define TF_RSVD_METER_INST_END_IDX_TX             0\n-\n-/* Mirror */\n-/* Not yet supported fully in the infra */\n-#define TF_RSVD_MIRROR_RX                         0\n-#define TF_RSVD_MIRROR_BEGIN_IDX_RX               0\n-#define TF_RSVD_MIRROR_END_IDX_RX                 0\n-#define TF_RSVD_MIRROR_TX                         0\n-#define TF_RSVD_MIRROR_BEGIN_IDX_TX               0\n-#define TF_RSVD_MIRROR_END_IDX_TX                 0\n-\n-/* UPAR */\n-/* Not yet supported fully in the infra */\n-#define TF_RSVD_UPAR_RX                           0\n-#define TF_RSVD_UPAR_BEGIN_IDX_RX                 0\n-#define TF_RSVD_UPAR_END_IDX_RX                   0\n-#define TF_RSVD_UPAR_TX                           0\n-#define TF_RSVD_UPAR_BEGIN_IDX_TX                 0\n-#define TF_RSVD_UPAR_END_IDX_TX                   0\n-\n-/* Source Properties */\n-/* Not yet supported fully in the infra */\n-#define TF_RSVD_SP_TCAM_RX                        0\n-#define TF_RSVD_SP_TCAM_BEGIN_IDX_RX              0\n-#define TF_RSVD_SP_TCAM_END_IDX_RX                0\n-#define TF_RSVD_SP_TCAM_TX                        0\n-#define TF_RSVD_SP_TCAM_BEGIN_IDX_TX              0\n-#define TF_RSVD_SP_TCAM_END_IDX_TX                0\n-\n-/* L2 Func */\n-#define TF_RSVD_L2_FUNC_RX                        0\n-#define TF_RSVD_L2_FUNC_BEGIN_IDX_RX              0\n-#define TF_RSVD_L2_FUNC_END_IDX_RX                0\n-#define TF_RSVD_L2_FUNC_TX                        0\n-#define TF_RSVD_L2_FUNC_BEGIN_IDX_TX              0\n-#define TF_RSVD_L2_FUNC_END_IDX_TX                0\n-\n-/* FKB */\n-#define TF_RSVD_FKB_RX                            0\n-#define TF_RSVD_FKB_BEGIN_IDX_RX                  0\n-#define TF_RSVD_FKB_END_IDX_RX                    0\n-#define TF_RSVD_FKB_TX                            0\n-#define TF_RSVD_FKB_BEGIN_IDX_TX                  0\n-#define TF_RSVD_FKB_END_IDX_TX                    0\n-\n-/* TBL Scope */\n-#define TF_RSVD_TBL_SCOPE_RX                      1\n-#define TF_RSVD_TBL_SCOPE_BEGIN_IDX_RX            0\n-#define TF_RSVD_TBL_SCOPE_END_IDX_RX              1\n-#define TF_RSVD_TBL_SCOPE_TX                      1\n-#define TF_RSVD_TBL_SCOPE_BEGIN_IDX_TX            0\n-#define TF_RSVD_TBL_SCOPE_END_IDX_TX              1\n-\n-/* EPOCH0 */\n-/* Not yet supported fully in the infra */\n-#define TF_RSVD_EPOCH0_RX                         0\n-#define TF_RSVD_EPOCH0_BEGIN_IDX_RX               0\n-#define TF_RSVD_EPOCH0_END_IDX_RX                 0\n-#define TF_RSVD_EPOCH0_TX                         0\n-#define TF_RSVD_EPOCH0_BEGIN_IDX_TX               0\n-#define TF_RSVD_EPOCH0_END_IDX_TX                 0\n-\n-/* EPOCH1 */\n-/* Not yet supported fully in the infra */\n-#define TF_RSVD_EPOCH1_RX                         0\n-#define TF_RSVD_EPOCH1_BEGIN_IDX_RX               0\n-#define TF_RSVD_EPOCH1_END_IDX_RX                 0\n-#define TF_RSVD_EPOCH1_TX                         0\n-#define TF_RSVD_EPOCH1_BEGIN_IDX_TX               0\n-#define TF_RSVD_EPOCH1_END_IDX_TX                 0\n-\n-/* METADATA */\n-/* Not yet supported fully in the infra */\n-#define TF_RSVD_METADATA_RX                       0\n-#define TF_RSVD_METADATA_BEGIN_IDX_RX             0\n-#define TF_RSVD_METADATA_END_IDX_RX               0\n-#define TF_RSVD_METADATA_TX                       0\n-#define TF_RSVD_METADATA_BEGIN_IDX_TX             0\n-#define TF_RSVD_METADATA_END_IDX_TX               0\n-\n-/* CT_STATE */\n-/* Not yet supported fully in the infra */\n-#define 
TF_RSVD_CT_STATE_RX                       0\n-#define TF_RSVD_CT_STATE_BEGIN_IDX_RX             0\n-#define TF_RSVD_CT_STATE_END_IDX_RX               0\n-#define TF_RSVD_CT_STATE_TX                       0\n-#define TF_RSVD_CT_STATE_BEGIN_IDX_TX             0\n-#define TF_RSVD_CT_STATE_END_IDX_TX               0\n-\n-/* RANGE_PROF */\n-/* Not yet supported fully in the infra */\n-#define TF_RSVD_RANGE_PROF_RX                     0\n-#define TF_RSVD_RANGE_PROF_BEGIN_IDX_RX           0\n-#define TF_RSVD_RANGE_PROF_END_IDX_RX             0\n-#define TF_RSVD_RANGE_PROF_TX                     0\n-#define TF_RSVD_RANGE_PROF_BEGIN_IDX_TX           0\n-#define TF_RSVD_RANGE_PROF_END_IDX_TX             0\n-\n-/* RANGE_ENTRY */\n-/* Not yet supported fully in the infra */\n-#define TF_RSVD_RANGE_ENTRY_RX                    0\n-#define TF_RSVD_RANGE_ENTRY_BEGIN_IDX_RX          0\n-#define TF_RSVD_RANGE_ENTRY_END_IDX_RX            0\n-#define TF_RSVD_RANGE_ENTRY_TX                    0\n-#define TF_RSVD_RANGE_ENTRY_BEGIN_IDX_TX          0\n-#define TF_RSVD_RANGE_ENTRY_END_IDX_TX            0\n-\n-/* LAG_ENTRY */\n-/* Not yet supported fully in the infra */\n-#define TF_RSVD_LAG_ENTRY_RX                      0\n-#define TF_RSVD_LAG_ENTRY_BEGIN_IDX_RX            0\n-#define TF_RSVD_LAG_ENTRY_END_IDX_RX              0\n-#define TF_RSVD_LAG_ENTRY_TX                      0\n-#define TF_RSVD_LAG_ENTRY_BEGIN_IDX_TX            0\n-#define TF_RSVD_LAG_ENTRY_END_IDX_TX              0\n-\n-\n-/* SRAM - Resources\n- * Limited to the types that CFA provides.\n- */\n-#define TF_RSVD_SRAM_FULL_ACTION_RX               8001\n-#define TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_RX     0\n-#define TF_RSVD_SRAM_FULL_ACTION_TX               8001\n-#define TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_TX     0\n-\n-/* Not yet supported fully in the infra */\n-#define TF_RSVD_SRAM_MCG_RX                       0\n-#define TF_RSVD_SRAM_MCG_BEGIN_IDX_RX             0\n-/* Multicast Group on TX is not supported */\n-#define TF_RSVD_SRAM_MCG_TX                       0\n-#define TF_RSVD_SRAM_MCG_BEGIN_IDX_TX             0\n-\n-/* First encap of 8B RX is reserved by CFA */\n-#define TF_RSVD_SRAM_ENCAP_8B_RX                  32\n-#define TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_RX        0\n-/* First encap of 8B TX is reserved by CFA */\n-#define TF_RSVD_SRAM_ENCAP_8B_TX                  0\n-#define TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_TX        0\n-\n-#define TF_RSVD_SRAM_ENCAP_16B_RX                 16\n-#define TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_RX       0\n-/* First encap of 16B TX is reserved by CFA */\n-#define TF_RSVD_SRAM_ENCAP_16B_TX                 20\n-#define TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_TX       0\n-\n-/* Encap of 64B on RX is not supported */\n-#define TF_RSVD_SRAM_ENCAP_64B_RX                 0\n-#define TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_RX       0\n-/* First encap of 64B TX is reserved by CFA */\n-#define TF_RSVD_SRAM_ENCAP_64B_TX                 1007\n-#define TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_TX       0\n-\n-#define TF_RSVD_SRAM_SP_SMAC_RX                   0\n-#define TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_RX         0\n-#define TF_RSVD_SRAM_SP_SMAC_TX                   0\n-#define TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_TX         0\n-\n-/* SRAM SP IPV4 on RX is not supported */\n-#define TF_RSVD_SRAM_SP_SMAC_IPV4_RX              0\n-#define TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_RX    0\n-#define TF_RSVD_SRAM_SP_SMAC_IPV4_TX              511\n-#define TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_TX    0\n-\n-/* SRAM SP IPV6 on RX is not supported */\n-#define 
TF_RSVD_SRAM_SP_SMAC_IPV6_RX              0\n-#define TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_RX    0\n-/* Not yet supported fully in infra */\n-#define TF_RSVD_SRAM_SP_SMAC_IPV6_TX              0\n-#define TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_TX    0\n-\n-#define TF_RSVD_SRAM_COUNTER_64B_RX               160\n-#define TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_RX     0\n-#define TF_RSVD_SRAM_COUNTER_64B_TX               160\n-#define TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_TX     0\n-\n-#define TF_RSVD_SRAM_NAT_SPORT_RX                 0\n-#define TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_RX       0\n-#define TF_RSVD_SRAM_NAT_SPORT_TX                 0\n-#define TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_TX       0\n-\n-#define TF_RSVD_SRAM_NAT_DPORT_RX                 0\n-#define TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_RX       0\n-#define TF_RSVD_SRAM_NAT_DPORT_TX                 0\n-#define TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_TX       0\n-\n-#define TF_RSVD_SRAM_NAT_S_IPV4_RX                0\n-#define TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_RX      0\n-#define TF_RSVD_SRAM_NAT_S_IPV4_TX                0\n-#define TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_TX      0\n-\n-#define TF_RSVD_SRAM_NAT_D_IPV4_RX                0\n-#define TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_RX      0\n-#define TF_RSVD_SRAM_NAT_D_IPV4_TX                0\n-#define TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_TX      0\n-\n-/* HW Resource Pool names */\n-\n-#define TF_L2_CTXT_TCAM_POOL_NAME         l2_ctxt_tcam_pool\n-#define TF_L2_CTXT_TCAM_POOL_NAME_RX      l2_ctxt_tcam_pool_rx\n-#define TF_L2_CTXT_TCAM_POOL_NAME_TX      l2_ctxt_tcam_pool_tx\n-\n-#define TF_PROF_FUNC_POOL_NAME            prof_func_pool\n-#define TF_PROF_FUNC_POOL_NAME_RX         prof_func_pool_rx\n-#define TF_PROF_FUNC_POOL_NAME_TX         prof_func_pool_tx\n-\n-#define TF_PROF_TCAM_POOL_NAME            prof_tcam_pool\n-#define TF_PROF_TCAM_POOL_NAME_RX         prof_tcam_pool_rx\n-#define TF_PROF_TCAM_POOL_NAME_TX         prof_tcam_pool_tx\n-\n-#define TF_EM_PROF_ID_POOL_NAME           em_prof_id_pool\n-#define TF_EM_PROF_ID_POOL_NAME_RX        em_prof_id_pool_rx\n-#define TF_EM_PROF_ID_POOL_NAME_TX        em_prof_id_pool_tx\n-\n-#define TF_WC_TCAM_PROF_ID_POOL_NAME      wc_tcam_prof_id_pool\n-#define TF_WC_TCAM_PROF_ID_POOL_NAME_RX   wc_tcam_prof_id_pool_rx\n-#define TF_WC_TCAM_PROF_ID_POOL_NAME_TX   wc_tcam_prof_id_pool_tx\n-\n-#define TF_WC_TCAM_POOL_NAME              wc_tcam_pool\n-#define TF_WC_TCAM_POOL_NAME_RX           wc_tcam_pool_rx\n-#define TF_WC_TCAM_POOL_NAME_TX           wc_tcam_pool_tx\n-\n-#define TF_METER_PROF_POOL_NAME           meter_prof_pool\n-#define TF_METER_PROF_POOL_NAME_RX        meter_prof_pool_rx\n-#define TF_METER_PROF_POOL_NAME_TX        meter_prof_pool_tx\n-\n-#define TF_METER_INST_POOL_NAME           meter_inst_pool\n-#define TF_METER_INST_POOL_NAME_RX        meter_inst_pool_rx\n-#define TF_METER_INST_POOL_NAME_TX        meter_inst_pool_tx\n-\n-#define TF_MIRROR_POOL_NAME               mirror_pool\n-#define TF_MIRROR_POOL_NAME_RX            mirror_pool_rx\n-#define TF_MIRROR_POOL_NAME_TX            mirror_pool_tx\n-\n-#define TF_UPAR_POOL_NAME                 upar_pool\n-#define TF_UPAR_POOL_NAME_RX              upar_pool_rx\n-#define TF_UPAR_POOL_NAME_TX              upar_pool_tx\n-\n-#define TF_SP_TCAM_POOL_NAME              sp_tcam_pool\n-#define TF_SP_TCAM_POOL_NAME_RX           sp_tcam_pool_rx\n-#define TF_SP_TCAM_POOL_NAME_TX           sp_tcam_pool_tx\n-\n-#define TF_FKB_POOL_NAME                  fkb_pool\n-#define TF_FKB_POOL_NAME_RX               fkb_pool_rx\n-#define 
TF_FKB_POOL_NAME_TX               fkb_pool_tx\n-\n-#define TF_TBL_SCOPE_POOL_NAME            tbl_scope_pool\n-#define TF_TBL_SCOPE_POOL_NAME_RX         tbl_scope_pool_rx\n-#define TF_TBL_SCOPE_POOL_NAME_TX         tbl_scope_pool_tx\n-\n-#define TF_L2_FUNC_POOL_NAME              l2_func_pool\n-#define TF_L2_FUNC_POOL_NAME_RX           l2_func_pool_rx\n-#define TF_L2_FUNC_POOL_NAME_TX           l2_func_pool_tx\n-\n-#define TF_EPOCH0_POOL_NAME               epoch0_pool\n-#define TF_EPOCH0_POOL_NAME_RX            epoch0_pool_rx\n-#define TF_EPOCH0_POOL_NAME_TX            epoch0_pool_tx\n-\n-#define TF_EPOCH1_POOL_NAME               epoch1_pool\n-#define TF_EPOCH1_POOL_NAME_RX            epoch1_pool_rx\n-#define TF_EPOCH1_POOL_NAME_TX            epoch1_pool_tx\n-\n-#define TF_METADATA_POOL_NAME             metadata_pool\n-#define TF_METADATA_POOL_NAME_RX          metadata_pool_rx\n-#define TF_METADATA_POOL_NAME_TX          metadata_pool_tx\n-\n-#define TF_CT_STATE_POOL_NAME             ct_state_pool\n-#define TF_CT_STATE_POOL_NAME_RX          ct_state_pool_rx\n-#define TF_CT_STATE_POOL_NAME_TX          ct_state_pool_tx\n-\n-#define TF_RANGE_PROF_POOL_NAME           range_prof_pool\n-#define TF_RANGE_PROF_POOL_NAME_RX        range_prof_pool_rx\n-#define TF_RANGE_PROF_POOL_NAME_TX        range_prof_pool_tx\n-\n-#define TF_RANGE_ENTRY_POOL_NAME          range_entry_pool\n-#define TF_RANGE_ENTRY_POOL_NAME_RX       range_entry_pool_rx\n-#define TF_RANGE_ENTRY_POOL_NAME_TX       range_entry_pool_tx\n-\n-#define TF_LAG_ENTRY_POOL_NAME            lag_entry_pool\n-#define TF_LAG_ENTRY_POOL_NAME_RX         lag_entry_pool_rx\n-#define TF_LAG_ENTRY_POOL_NAME_TX         lag_entry_pool_tx\n-\n-/* SRAM Resource Pool names */\n-#define TF_SRAM_FULL_ACTION_POOL_NAME     sram_full_action_pool\n-#define TF_SRAM_FULL_ACTION_POOL_NAME_RX  sram_full_action_pool_rx\n-#define TF_SRAM_FULL_ACTION_POOL_NAME_TX  sram_full_action_pool_tx\n-\n-#define TF_SRAM_MCG_POOL_NAME             sram_mcg_pool\n-#define TF_SRAM_MCG_POOL_NAME_RX          sram_mcg_pool_rx\n-#define TF_SRAM_MCG_POOL_NAME_TX          sram_mcg_pool_tx\n-\n-#define TF_SRAM_ENCAP_8B_POOL_NAME        sram_encap_8b_pool\n-#define TF_SRAM_ENCAP_8B_POOL_NAME_RX     sram_encap_8b_pool_rx\n-#define TF_SRAM_ENCAP_8B_POOL_NAME_TX     sram_encap_8b_pool_tx\n-\n-#define TF_SRAM_ENCAP_16B_POOL_NAME       sram_encap_16b_pool\n-#define TF_SRAM_ENCAP_16B_POOL_NAME_RX    sram_encap_16b_pool_rx\n-#define TF_SRAM_ENCAP_16B_POOL_NAME_TX    sram_encap_16b_pool_tx\n-\n-#define TF_SRAM_ENCAP_64B_POOL_NAME       sram_encap_64b_pool\n-#define TF_SRAM_ENCAP_64B_POOL_NAME_RX    sram_encap_64b_pool_rx\n-#define TF_SRAM_ENCAP_64B_POOL_NAME_TX    sram_encap_64b_pool_tx\n-\n-#define TF_SRAM_SP_SMAC_POOL_NAME         sram_sp_smac_pool\n-#define TF_SRAM_SP_SMAC_POOL_NAME_RX      sram_sp_smac_pool_rx\n-#define TF_SRAM_SP_SMAC_POOL_NAME_TX      sram_sp_smac_pool_tx\n-\n-#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME    sram_sp_smac_ipv4_pool\n-#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME_RX sram_sp_smac_ipv4_pool_rx\n-#define TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX sram_sp_smac_ipv4_pool_tx\n-\n-#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME    sram_sp_smac_ipv6_pool\n-#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME_RX sram_sp_smac_ipv6_pool_rx\n-#define TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX sram_sp_smac_ipv6_pool_tx\n-\n-#define TF_SRAM_STATS_64B_POOL_NAME       sram_stats_64b_pool\n-#define TF_SRAM_STATS_64B_POOL_NAME_RX    sram_stats_64b_pool_rx\n-#define TF_SRAM_STATS_64B_POOL_NAME_TX    sram_stats_64b_pool_tx\n-\n-#define 
TF_SRAM_NAT_SPORT_POOL_NAME       sram_nat_sport_pool\n-#define TF_SRAM_NAT_SPORT_POOL_NAME_RX    sram_nat_sport_pool_rx\n-#define TF_SRAM_NAT_SPORT_POOL_NAME_TX    sram_nat_sport_pool_tx\n-\n-#define TF_SRAM_NAT_DPORT_POOL_NAME       sram_nat_dport_pool\n-#define TF_SRAM_NAT_DPORT_POOL_NAME_RX    sram_nat_dport_pool_rx\n-#define TF_SRAM_NAT_DPORT_POOL_NAME_TX    sram_nat_dport_pool_tx\n-\n-#define TF_SRAM_NAT_S_IPV4_POOL_NAME      sram_nat_s_ipv4_pool\n-#define TF_SRAM_NAT_S_IPV4_POOL_NAME_RX   sram_nat_s_ipv4_pool_rx\n-#define TF_SRAM_NAT_S_IPV4_POOL_NAME_TX   sram_nat_s_ipv4_pool_tx\n-\n-#define TF_SRAM_NAT_D_IPV4_POOL_NAME      sram_nat_d_ipv4_pool\n-#define TF_SRAM_NAT_D_IPV4_POOL_NAME_RX   sram_nat_d_ipv4_pool_rx\n-#define TF_SRAM_NAT_D_IPV4_POOL_NAME_TX   sram_nat_d_ipv4_pool_tx\n-\n-/* Sw Resource Pool Names */\n-\n-#define TF_L2_CTXT_REMAP_POOL_NAME         l2_ctxt_remap_pool\n-#define TF_L2_CTXT_REMAP_POOL_NAME_RX      l2_ctxt_remap_pool_rx\n-#define TF_L2_CTXT_REMAP_POOL_NAME_TX      l2_ctxt_remap_pool_tx\n-\n-\n-/** HW Resource types\n- */\n-enum tf_resource_type_hw {\n-\t/* Common HW resources for all chip variants */\n-\tTF_RESC_TYPE_HW_L2_CTXT_TCAM,\n-\tTF_RESC_TYPE_HW_PROF_FUNC,\n-\tTF_RESC_TYPE_HW_PROF_TCAM,\n-\tTF_RESC_TYPE_HW_EM_PROF_ID,\n-\tTF_RESC_TYPE_HW_EM_REC,\n-\tTF_RESC_TYPE_HW_WC_TCAM_PROF_ID,\n-\tTF_RESC_TYPE_HW_WC_TCAM,\n-\tTF_RESC_TYPE_HW_METER_PROF,\n-\tTF_RESC_TYPE_HW_METER_INST,\n-\tTF_RESC_TYPE_HW_MIRROR,\n-\tTF_RESC_TYPE_HW_UPAR,\n-\t/* Wh+/SR specific HW resources */\n-\tTF_RESC_TYPE_HW_SP_TCAM,\n-\t/* SR/SR2 specific HW resources */\n-\tTF_RESC_TYPE_HW_L2_FUNC,\n-\t/* Thor, SR2 common HW resources */\n-\tTF_RESC_TYPE_HW_FKB,\n-\t/* SR2 specific HW resources */\n-\tTF_RESC_TYPE_HW_TBL_SCOPE,\n-\tTF_RESC_TYPE_HW_EPOCH0,\n-\tTF_RESC_TYPE_HW_EPOCH1,\n-\tTF_RESC_TYPE_HW_METADATA,\n-\tTF_RESC_TYPE_HW_CT_STATE,\n-\tTF_RESC_TYPE_HW_RANGE_PROF,\n-\tTF_RESC_TYPE_HW_RANGE_ENTRY,\n-\tTF_RESC_TYPE_HW_LAG_ENTRY,\n-\tTF_RESC_TYPE_HW_MAX\n-};\n-\n-/** HW Resource types\n- */\n-enum tf_resource_type_sram {\n-\tTF_RESC_TYPE_SRAM_FULL_ACTION,\n-\tTF_RESC_TYPE_SRAM_MCG,\n-\tTF_RESC_TYPE_SRAM_ENCAP_8B,\n-\tTF_RESC_TYPE_SRAM_ENCAP_16B,\n-\tTF_RESC_TYPE_SRAM_ENCAP_64B,\n-\tTF_RESC_TYPE_SRAM_SP_SMAC,\n-\tTF_RESC_TYPE_SRAM_SP_SMAC_IPV4,\n-\tTF_RESC_TYPE_SRAM_SP_SMAC_IPV6,\n-\tTF_RESC_TYPE_SRAM_COUNTER_64B,\n-\tTF_RESC_TYPE_SRAM_NAT_SPORT,\n-\tTF_RESC_TYPE_SRAM_NAT_DPORT,\n-\tTF_RESC_TYPE_SRAM_NAT_S_IPV4,\n-\tTF_RESC_TYPE_SRAM_NAT_D_IPV4,\n-\tTF_RESC_TYPE_SRAM_MAX\n-};\n \n #endif /* _TF_RESOURCES_H_ */\ndiff --git a/drivers/net/bnxt/tf_core/tf_rm.c b/drivers/net/bnxt/tf_core/tf_rm.c\nindex e0a84e6..e0469b6 100644\n--- a/drivers/net/bnxt/tf_core/tf_rm.c\n+++ b/drivers/net/bnxt/tf_core/tf_rm.c\n@@ -7,3171 +7,916 @@\n \n #include <rte_common.h>\n \n+#include <cfa_resource_types.h>\n+\n #include \"tf_rm.h\"\n-#include \"tf_core.h\"\n+#include \"tf_common.h\"\n #include \"tf_util.h\"\n #include \"tf_session.h\"\n-#include \"tf_resources.h\"\n-#include \"tf_msg.h\"\n-#include \"bnxt.h\"\n+#include \"tf_device.h\"\n #include \"tfp.h\"\n+#include \"tf_msg.h\"\n \n /**\n- * Internal macro to perform HW resource allocation check between what\n- * firmware reports vs what was statically requested.\n- *\n- * Parameters:\n- *   struct tf_rm_hw_query    *hquery      - Pointer to the hw query result\n- *   enum tf_dir               dir         - Direction to process\n- *   enum tf_resource_type_hw  hcapi_type  - HCAPI type, the index element\n- *                                           in 
the hw query structure\n- *   define                    def_value   - Define value to check against\n- *   uint32_t                 *eflag       - Result of the check\n- */\n-#define TF_RM_CHECK_HW_ALLOC(hquery, dir, hcapi_type, def_value, eflag) do {  \\\n-\tif ((dir) == TF_DIR_RX) {\t\t\t\t\t      \\\n-\t\tif ((hquery)->hw_query[(hcapi_type)].max != def_value ## _RX) \\\n-\t\t\t*(eflag) |= 1 << (hcapi_type);\t\t\t      \\\n-\t} else {\t\t\t\t\t\t\t      \\\n-\t\tif ((hquery)->hw_query[(hcapi_type)].max != def_value ## _TX) \\\n-\t\t\t*(eflag) |= 1 << (hcapi_type);\t\t\t      \\\n-\t}\t\t\t\t\t\t\t\t      \\\n-} while (0)\n-\n-/**\n- * Internal macro to perform HW resource allocation check between what\n- * firmware reports vs what was statically requested.\n- *\n- * Parameters:\n- *   struct tf_rm_sram_query   *squery      - Pointer to the sram query result\n- *   enum tf_dir                dir         - Direction to process\n- *   enum tf_resource_type_sram hcapi_type  - HCAPI type, the index element\n- *                                            in the hw query structure\n- *   define                     def_value   - Define value to check against\n- *   uint32_t                  *eflag       - Result of the check\n- */\n-#define TF_RM_CHECK_SRAM_ALLOC(squery, dir, hcapi_type, def_value, eflag) do { \\\n-\tif ((dir) == TF_DIR_RX) {\t\t\t\t\t       \\\n-\t\tif ((squery)->sram_query[(hcapi_type)].max != def_value ## _RX)\\\n-\t\t\t*(eflag) |= 1 << (hcapi_type);\t\t\t       \\\n-\t} else {\t\t\t\t\t\t\t       \\\n-\t\tif ((squery)->sram_query[(hcapi_type)].max != def_value ## _TX)\\\n-\t\t\t*(eflag) |= 1 << (hcapi_type);\t\t\t       \\\n-\t}\t\t\t\t\t\t\t\t       \\\n-} while (0)\n-\n-/**\n- * Internal macro to convert a reserved resource define name to be\n- * direction specific.\n- *\n- * Parameters:\n- *   enum tf_dir    dir         - Direction to process\n- *   string         type        - Type name to append RX or TX to\n- *   string         dtype       - Direction specific type\n- *\n- *\n+ * Generic RM Element data type that an RM DB is build upon.\n  */\n-#define TF_RESC_RSVD(dir, type, dtype) do {\t\\\n-\t\tif ((dir) == TF_DIR_RX)\t\t\\\n-\t\t\t(dtype) = type ## _RX;\t\\\n-\t\telse\t\t\t\t\\\n-\t\t\t(dtype) = type ## _TX;\t\\\n-\t} while (0)\n-\n-const char\n-*tf_hcapi_hw_2_str(enum tf_resource_type_hw hw_type)\n-{\n-\tswitch (hw_type) {\n-\tcase TF_RESC_TYPE_HW_L2_CTXT_TCAM:\n-\t\treturn \"L2 ctxt tcam\";\n-\tcase TF_RESC_TYPE_HW_PROF_FUNC:\n-\t\treturn \"Profile Func\";\n-\tcase TF_RESC_TYPE_HW_PROF_TCAM:\n-\t\treturn \"Profile tcam\";\n-\tcase TF_RESC_TYPE_HW_EM_PROF_ID:\n-\t\treturn \"EM profile id\";\n-\tcase TF_RESC_TYPE_HW_EM_REC:\n-\t\treturn \"EM record\";\n-\tcase TF_RESC_TYPE_HW_WC_TCAM_PROF_ID:\n-\t\treturn \"WC tcam profile id\";\n-\tcase TF_RESC_TYPE_HW_WC_TCAM:\n-\t\treturn \"WC tcam\";\n-\tcase TF_RESC_TYPE_HW_METER_PROF:\n-\t\treturn \"Meter profile\";\n-\tcase TF_RESC_TYPE_HW_METER_INST:\n-\t\treturn \"Meter instance\";\n-\tcase TF_RESC_TYPE_HW_MIRROR:\n-\t\treturn \"Mirror\";\n-\tcase TF_RESC_TYPE_HW_UPAR:\n-\t\treturn \"UPAR\";\n-\tcase TF_RESC_TYPE_HW_SP_TCAM:\n-\t\treturn \"Source properties tcam\";\n-\tcase TF_RESC_TYPE_HW_L2_FUNC:\n-\t\treturn \"L2 Function\";\n-\tcase TF_RESC_TYPE_HW_FKB:\n-\t\treturn \"FKB\";\n-\tcase TF_RESC_TYPE_HW_TBL_SCOPE:\n-\t\treturn \"Table scope\";\n-\tcase TF_RESC_TYPE_HW_EPOCH0:\n-\t\treturn \"EPOCH0\";\n-\tcase TF_RESC_TYPE_HW_EPOCH1:\n-\t\treturn \"EPOCH1\";\n-\tcase TF_RESC_TYPE_HW_METADATA:\n-\t\treturn 
\"Metadata\";\n-\tcase TF_RESC_TYPE_HW_CT_STATE:\n-\t\treturn \"Connection tracking state\";\n-\tcase TF_RESC_TYPE_HW_RANGE_PROF:\n-\t\treturn \"Range profile\";\n-\tcase TF_RESC_TYPE_HW_RANGE_ENTRY:\n-\t\treturn \"Range entry\";\n-\tcase TF_RESC_TYPE_HW_LAG_ENTRY:\n-\t\treturn \"LAG\";\n-\tdefault:\n-\t\treturn \"Invalid identifier\";\n-\t}\n-}\n-\n-const char\n-*tf_hcapi_sram_2_str(enum tf_resource_type_sram sram_type)\n-{\n-\tswitch (sram_type) {\n-\tcase TF_RESC_TYPE_SRAM_FULL_ACTION:\n-\t\treturn \"Full action\";\n-\tcase TF_RESC_TYPE_SRAM_MCG:\n-\t\treturn \"MCG\";\n-\tcase TF_RESC_TYPE_SRAM_ENCAP_8B:\n-\t\treturn \"Encap 8B\";\n-\tcase TF_RESC_TYPE_SRAM_ENCAP_16B:\n-\t\treturn \"Encap 16B\";\n-\tcase TF_RESC_TYPE_SRAM_ENCAP_64B:\n-\t\treturn \"Encap 64B\";\n-\tcase TF_RESC_TYPE_SRAM_SP_SMAC:\n-\t\treturn \"Source properties SMAC\";\n-\tcase TF_RESC_TYPE_SRAM_SP_SMAC_IPV4:\n-\t\treturn \"Source properties SMAC IPv4\";\n-\tcase TF_RESC_TYPE_SRAM_SP_SMAC_IPV6:\n-\t\treturn \"Source properties IPv6\";\n-\tcase TF_RESC_TYPE_SRAM_COUNTER_64B:\n-\t\treturn \"Counter 64B\";\n-\tcase TF_RESC_TYPE_SRAM_NAT_SPORT:\n-\t\treturn \"NAT source port\";\n-\tcase TF_RESC_TYPE_SRAM_NAT_DPORT:\n-\t\treturn \"NAT destination port\";\n-\tcase TF_RESC_TYPE_SRAM_NAT_S_IPV4:\n-\t\treturn \"NAT source IPv4\";\n-\tcase TF_RESC_TYPE_SRAM_NAT_D_IPV4:\n-\t\treturn \"NAT destination IPv4\";\n-\tdefault:\n-\t\treturn \"Invalid identifier\";\n-\t}\n-}\n+struct tf_rm_element {\n+\t/**\n+\t * RM Element configuration type. If Private then the\n+\t * hcapi_type can be ignored. If Null then the element is not\n+\t * valid for the device.\n+\t */\n+\tenum tf_rm_elem_cfg_type cfg_type;\n \n-/**\n- * Helper function to perform a HW HCAPI resource type lookup against\n- * the reserved value of the same static type.\n- *\n- * Returns:\n- *   -EOPNOTSUPP - Reserved resource type not supported\n- *   Value       - Integer value of the reserved value for the requested type\n- */\n-static int\n-tf_rm_rsvd_hw_value(enum tf_dir dir, enum tf_resource_type_hw index)\n-{\n-\tuint32_t value = -EOPNOTSUPP;\n+\t/**\n+\t * HCAPI RM Type for the element.\n+\t */\n+\tuint16_t hcapi_type;\n \n-\tswitch (index) {\n-\tcase TF_RESC_TYPE_HW_L2_CTXT_TCAM:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_L2_CTXT_TCAM, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_HW_PROF_FUNC:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_PROF_FUNC, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_HW_PROF_TCAM:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_PROF_TCAM, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_HW_EM_PROF_ID:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_EM_PROF_ID, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_HW_EM_REC:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_EM_REC, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_HW_WC_TCAM_PROF_ID:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_WC_TCAM_PROF_ID, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_HW_WC_TCAM:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_WC_TCAM, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_HW_METER_PROF:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_METER_PROF, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_HW_METER_INST:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_METER_INST, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_HW_MIRROR:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_MIRROR, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_HW_UPAR:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_UPAR, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_HW_SP_TCAM:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_SP_TCAM, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_HW_L2_FUNC:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_L2_FUNC, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_HW_FKB:\n-\t\tTF_RESC_RSVD(dir, 
TF_RSVD_FKB, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_HW_TBL_SCOPE:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_TBL_SCOPE, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_HW_EPOCH0:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_EPOCH0, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_HW_EPOCH1:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_EPOCH1, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_HW_METADATA:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_METADATA, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_HW_CT_STATE:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_CT_STATE, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_HW_RANGE_PROF:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_RANGE_PROF, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_HW_RANGE_ENTRY:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_RANGE_ENTRY, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_HW_LAG_ENTRY:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_LAG_ENTRY, value);\n-\t\tbreak;\n-\tdefault:\n-\t\tbreak;\n-\t}\n+\t/**\n+\t * HCAPI RM allocated range information for the element.\n+\t */\n+\tstruct tf_rm_alloc_info alloc;\n \n-\treturn value;\n-}\n+\t/**\n+\t * Bit allocator pool for the element. Pool size is controlled\n+\t * by the struct tf_session_resources at time of session creation.\n+\t * Null indicates that the element is not used for the device.\n+\t */\n+\tstruct bitalloc *pool;\n+};\n \n /**\n- * Helper function to perform a SRAM HCAPI resource type lookup\n- * against the reserved value of the same static type.\n- *\n- * Returns:\n- *   -EOPNOTSUPP - Reserved resource type not supported\n- *   Value       - Integer value of the reserved value for the requested type\n+ * TF RM DB definition\n  */\n-static int\n-tf_rm_rsvd_sram_value(enum tf_dir dir, enum tf_resource_type_sram index)\n-{\n-\tuint32_t value = -EOPNOTSUPP;\n-\n-\tswitch (index) {\n-\tcase TF_RESC_TYPE_SRAM_FULL_ACTION:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_SRAM_FULL_ACTION, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_SRAM_MCG:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_SRAM_MCG, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_SRAM_ENCAP_8B:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_8B, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_SRAM_ENCAP_16B:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_16B, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_SRAM_ENCAP_64B:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_SRAM_ENCAP_64B, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_SRAM_SP_SMAC:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_SRAM_SP_SMAC_IPV4:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC_IPV4, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_SRAM_SP_SMAC_IPV6:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_SRAM_SP_SMAC_IPV6, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_SRAM_COUNTER_64B:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_SRAM_COUNTER_64B, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_SRAM_NAT_SPORT:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_SPORT, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_SRAM_NAT_DPORT:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_DPORT, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_SRAM_NAT_S_IPV4:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_S_IPV4, value);\n-\t\tbreak;\n-\tcase TF_RESC_TYPE_SRAM_NAT_D_IPV4:\n-\t\tTF_RESC_RSVD(dir, TF_RSVD_SRAM_NAT_D_IPV4, value);\n-\t\tbreak;\n-\tdefault:\n-\t\tbreak;\n-\t}\n-\n-\treturn value;\n-}\n+struct tf_rm_new_db {\n+\t/**\n+\t * Number of elements in the DB\n+\t */\n+\tuint16_t num_entries;\n \n-/**\n- * Helper function to print all the HW resource qcaps errors reported\n- * in the error_flag.\n- *\n- * [in] dir\n- *   Receive or transmit direction\n- *\n- * [in] error_flag\n- *   Pointer to the hw error flags created at time of the query check\n- */\n-static 
void\n-tf_rm_print_hw_qcaps_error(enum tf_dir dir,\n-\t\t\t   struct tf_rm_hw_query *hw_query,\n-\t\t\t   uint32_t *error_flag)\n-{\n-\tint i;\n+\t/**\n+\t * Direction this DB controls.\n+\t */\n+\tenum tf_dir dir;\n \n-\tTFP_DRV_LOG(ERR, \"QCAPS errors HW\\n\");\n-\tTFP_DRV_LOG(ERR, \"  Direction: %s\\n\", tf_dir_2_str(dir));\n-\tTFP_DRV_LOG(ERR, \"  Elements:\\n\");\n+\t/**\n+\t * Module type, used for logging purposes.\n+\t */\n+\tenum tf_device_module_type type;\n \n-\tfor (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {\n-\t\tif (*error_flag & 1 << i)\n-\t\t\tTFP_DRV_LOG(ERR, \"    %s, %d elem available, req:%d\\n\",\n-\t\t\t\t    tf_hcapi_hw_2_str(i),\n-\t\t\t\t    hw_query->hw_query[i].max,\n-\t\t\t\t    tf_rm_rsvd_hw_value(dir, i));\n-\t}\n-}\n+\t/**\n+\t * The DB consists of an array of elements\n+\t */\n+\tstruct tf_rm_element *db;\n+};\n \n /**\n- * Helper function to print all the SRAM resource qcaps errors\n- * reported in the error_flag.\n+ * Adjust an index according to the allocation information.\n  *\n- * [in] dir\n- *   Receive or transmit direction\n+ * All resources are controlled in a 0 based pool. Some resources, by\n+ * design, are not 0 based, i.e. Full Action Records (SRAM) thus they\n+ * need to be adjusted before they are handed out.\n  *\n- * [in] error_flag\n- *   Pointer to the sram error flags created at time of the query check\n- */\n-static void\n-tf_rm_print_sram_qcaps_error(enum tf_dir dir,\n-\t\t\t     struct tf_rm_sram_query *sram_query,\n-\t\t\t     uint32_t *error_flag)\n-{\n-\tint i;\n-\n-\tTFP_DRV_LOG(ERR, \"QCAPS errors SRAM\\n\");\n-\tTFP_DRV_LOG(ERR, \"  Direction: %s\\n\", tf_dir_2_str(dir));\n-\tTFP_DRV_LOG(ERR, \"  Elements:\\n\");\n-\n-\tfor (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {\n-\t\tif (*error_flag & 1 << i)\n-\t\t\tTFP_DRV_LOG(ERR, \"    %s, %d elem available, req:%d\\n\",\n-\t\t\t\t    tf_hcapi_sram_2_str(i),\n-\t\t\t\t    sram_query->sram_query[i].max,\n-\t\t\t\t    tf_rm_rsvd_sram_value(dir, i));\n-\t}\n-}\n-\n-/**\n- * Performs a HW resource check between what firmware capability\n- * reports and what the core expects is available.\n+ * [in] cfg\n+ *   Pointer to the DB configuration\n  *\n- * Firmware performs the resource carving at AFM init time and the\n- * resource capability is reported in the TruFlow qcaps msg.\n+ * [in] reservations\n+ *   Pointer to the allocation values associated with the module\n  *\n- * [in] query\n- *   Pointer to HW Query data structure. Query holds what the firmware\n- *   offers of the HW resources.\n+ * [in] count\n+ *   Number of DB configuration elements\n  *\n- * [in] dir\n- *   Receive or transmit direction\n- *\n- * [in/out] error_flag\n- *   Pointer to a bit array indicating the error of a single HCAPI\n- *   resource type. When a bit is set to 1, the HCAPI resource type\n- *   failed static allocation.\n+ * [out] valid_count\n+ *   Number of HCAPI entries with a reservation value greater than 0\n  *\n  * Returns:\n- *  0       - Success\n- *  -ENOMEM - Failure on one of the allocated resources. 
Check the\n- *            error_flag for what types are flagged errored.\n- */\n-static int\n-tf_rm_check_hw_qcaps_static(struct tf_rm_hw_query *query,\n-\t\t\t    enum tf_dir dir,\n-\t\t\t    uint32_t *error_flag)\n-{\n-\t*error_flag = 0;\n-\n-\tTF_RM_CHECK_HW_ALLOC(query,\n-\t\t\t     dir,\n-\t\t\t     TF_RESC_TYPE_HW_L2_CTXT_TCAM,\n-\t\t\t     TF_RSVD_L2_CTXT_TCAM,\n-\t\t\t     error_flag);\n-\n-\tTF_RM_CHECK_HW_ALLOC(query,\n-\t\t\t     dir,\n-\t\t\t     TF_RESC_TYPE_HW_PROF_FUNC,\n-\t\t\t     TF_RSVD_PROF_FUNC,\n-\t\t\t     error_flag);\n-\n-\tTF_RM_CHECK_HW_ALLOC(query,\n-\t\t\t     dir,\n-\t\t\t     TF_RESC_TYPE_HW_PROF_TCAM,\n-\t\t\t     TF_RSVD_PROF_TCAM,\n-\t\t\t     error_flag);\n-\n-\tTF_RM_CHECK_HW_ALLOC(query,\n-\t\t\t     dir,\n-\t\t\t     TF_RESC_TYPE_HW_EM_PROF_ID,\n-\t\t\t     TF_RSVD_EM_PROF_ID,\n-\t\t\t     error_flag);\n-\n-\tTF_RM_CHECK_HW_ALLOC(query,\n-\t\t\t     dir,\n-\t\t\t     TF_RESC_TYPE_HW_EM_REC,\n-\t\t\t     TF_RSVD_EM_REC,\n-\t\t\t     error_flag);\n-\n-\tTF_RM_CHECK_HW_ALLOC(query,\n-\t\t\t     dir,\n-\t\t\t     TF_RESC_TYPE_HW_WC_TCAM_PROF_ID,\n-\t\t\t     TF_RSVD_WC_TCAM_PROF_ID,\n-\t\t\t     error_flag);\n-\n-\tTF_RM_CHECK_HW_ALLOC(query,\n-\t\t\t     dir,\n-\t\t\t     TF_RESC_TYPE_HW_WC_TCAM,\n-\t\t\t     TF_RSVD_WC_TCAM,\n-\t\t\t     error_flag);\n-\n-\tTF_RM_CHECK_HW_ALLOC(query,\n-\t\t\t     dir,\n-\t\t\t     TF_RESC_TYPE_HW_METER_PROF,\n-\t\t\t     TF_RSVD_METER_PROF,\n-\t\t\t     error_flag);\n-\n-\tTF_RM_CHECK_HW_ALLOC(query,\n-\t\t\t     dir,\n-\t\t\t     TF_RESC_TYPE_HW_METER_INST,\n-\t\t\t     TF_RSVD_METER_INST,\n-\t\t\t     error_flag);\n-\n-\tTF_RM_CHECK_HW_ALLOC(query,\n-\t\t\t     dir,\n-\t\t\t     TF_RESC_TYPE_HW_MIRROR,\n-\t\t\t     TF_RSVD_MIRROR,\n-\t\t\t     error_flag);\n-\n-\tTF_RM_CHECK_HW_ALLOC(query,\n-\t\t\t     dir,\n-\t\t\t     TF_RESC_TYPE_HW_UPAR,\n-\t\t\t     TF_RSVD_UPAR,\n-\t\t\t     error_flag);\n-\n-\tTF_RM_CHECK_HW_ALLOC(query,\n-\t\t\t     dir,\n-\t\t\t     TF_RESC_TYPE_HW_SP_TCAM,\n-\t\t\t     TF_RSVD_SP_TCAM,\n-\t\t\t     error_flag);\n-\n-\tTF_RM_CHECK_HW_ALLOC(query,\n-\t\t\t     dir,\n-\t\t\t     TF_RESC_TYPE_HW_L2_FUNC,\n-\t\t\t     TF_RSVD_L2_FUNC,\n-\t\t\t     error_flag);\n-\n-\tTF_RM_CHECK_HW_ALLOC(query,\n-\t\t\t     dir,\n-\t\t\t     TF_RESC_TYPE_HW_FKB,\n-\t\t\t     TF_RSVD_FKB,\n-\t\t\t     error_flag);\n-\n-\tTF_RM_CHECK_HW_ALLOC(query,\n-\t\t\t     dir,\n-\t\t\t     TF_RESC_TYPE_HW_TBL_SCOPE,\n-\t\t\t     TF_RSVD_TBL_SCOPE,\n-\t\t\t     error_flag);\n-\n-\tTF_RM_CHECK_HW_ALLOC(query,\n-\t\t\t     dir,\n-\t\t\t     TF_RESC_TYPE_HW_EPOCH0,\n-\t\t\t     TF_RSVD_EPOCH0,\n-\t\t\t     error_flag);\n-\n-\tTF_RM_CHECK_HW_ALLOC(query,\n-\t\t\t     dir,\n-\t\t\t     TF_RESC_TYPE_HW_EPOCH1,\n-\t\t\t     TF_RSVD_EPOCH1,\n-\t\t\t     error_flag);\n-\n-\tTF_RM_CHECK_HW_ALLOC(query,\n-\t\t\t     dir,\n-\t\t\t     TF_RESC_TYPE_HW_METADATA,\n-\t\t\t     TF_RSVD_METADATA,\n-\t\t\t     error_flag);\n-\n-\tTF_RM_CHECK_HW_ALLOC(query,\n-\t\t\t     dir,\n-\t\t\t     TF_RESC_TYPE_HW_CT_STATE,\n-\t\t\t     TF_RSVD_CT_STATE,\n-\t\t\t     error_flag);\n-\n-\tTF_RM_CHECK_HW_ALLOC(query,\n-\t\t\t     dir,\n-\t\t\t     TF_RESC_TYPE_HW_RANGE_PROF,\n-\t\t\t     TF_RSVD_RANGE_PROF,\n-\t\t\t     error_flag);\n-\n-\tTF_RM_CHECK_HW_ALLOC(query,\n-\t\t\t     dir,\n-\t\t\t     TF_RESC_TYPE_HW_RANGE_ENTRY,\n-\t\t\t     TF_RSVD_RANGE_ENTRY,\n-\t\t\t     error_flag);\n-\n-\tTF_RM_CHECK_HW_ALLOC(query,\n-\t\t\t     dir,\n-\t\t\t     TF_RESC_TYPE_HW_LAG_ENTRY,\n-\t\t\t     TF_RSVD_LAG_ENTRY,\n-\t\t\t     error_flag);\n-\n-\tif (*error_flag 
!= 0)\n-\t\treturn -ENOMEM;\n-\n-\treturn 0;\n-}\n-\n-/**\n- * Performs a SRAM resource check between what firmware capability\n- * reports and what the core expects is available.\n- *\n- * Firmware performs the resource carving at AFM init time and the\n- * resource capability is reported in the TruFlow qcaps msg.\n- *\n- * [in] query\n- *   Pointer to SRAM Query data structure. Query holds what the\n- *   firmware offers of the SRAM resources.\n- *\n- * [in] dir\n- *   Receive or transmit direction\n- *\n- * [in/out] error_flag\n- *   Pointer to a bit array indicating the error of a single HCAPI\n- *   resource type. When a bit is set to 1, the HCAPI resource type\n- *   failed static allocation.\n- *\n- * Returns:\n- *  0       - Success\n- *  -ENOMEM - Failure on one of the allocated resources. Check the\n- *            error_flag for what types are flagged errored.\n- */\n-static int\n-tf_rm_check_sram_qcaps_static(struct tf_rm_sram_query *query,\n-\t\t\t      enum tf_dir dir,\n-\t\t\t      uint32_t *error_flag)\n-{\n-\t*error_flag = 0;\n-\n-\tTF_RM_CHECK_SRAM_ALLOC(query,\n-\t\t\t       dir,\n-\t\t\t       TF_RESC_TYPE_SRAM_FULL_ACTION,\n-\t\t\t       TF_RSVD_SRAM_FULL_ACTION,\n-\t\t\t       error_flag);\n-\n-\tTF_RM_CHECK_SRAM_ALLOC(query,\n-\t\t\t       dir,\n-\t\t\t       TF_RESC_TYPE_SRAM_MCG,\n-\t\t\t       TF_RSVD_SRAM_MCG,\n-\t\t\t       error_flag);\n-\n-\tTF_RM_CHECK_SRAM_ALLOC(query,\n-\t\t\t       dir,\n-\t\t\t       TF_RESC_TYPE_SRAM_ENCAP_8B,\n-\t\t\t       TF_RSVD_SRAM_ENCAP_8B,\n-\t\t\t       error_flag);\n-\n-\tTF_RM_CHECK_SRAM_ALLOC(query,\n-\t\t\t       dir,\n-\t\t\t       TF_RESC_TYPE_SRAM_ENCAP_16B,\n-\t\t\t       TF_RSVD_SRAM_ENCAP_16B,\n-\t\t\t       error_flag);\n-\n-\tTF_RM_CHECK_SRAM_ALLOC(query,\n-\t\t\t       dir,\n-\t\t\t       TF_RESC_TYPE_SRAM_ENCAP_64B,\n-\t\t\t       TF_RSVD_SRAM_ENCAP_64B,\n-\t\t\t       error_flag);\n-\n-\tTF_RM_CHECK_SRAM_ALLOC(query,\n-\t\t\t       dir,\n-\t\t\t       TF_RESC_TYPE_SRAM_SP_SMAC,\n-\t\t\t       TF_RSVD_SRAM_SP_SMAC,\n-\t\t\t       error_flag);\n-\n-\tTF_RM_CHECK_SRAM_ALLOC(query,\n-\t\t\t       dir,\n-\t\t\t       TF_RESC_TYPE_SRAM_SP_SMAC_IPV4,\n-\t\t\t       TF_RSVD_SRAM_SP_SMAC_IPV4,\n-\t\t\t       error_flag);\n-\n-\tTF_RM_CHECK_SRAM_ALLOC(query,\n-\t\t\t       dir,\n-\t\t\t       TF_RESC_TYPE_SRAM_SP_SMAC_IPV6,\n-\t\t\t       TF_RSVD_SRAM_SP_SMAC_IPV6,\n-\t\t\t       error_flag);\n-\n-\tTF_RM_CHECK_SRAM_ALLOC(query,\n-\t\t\t       dir,\n-\t\t\t       TF_RESC_TYPE_SRAM_COUNTER_64B,\n-\t\t\t       TF_RSVD_SRAM_COUNTER_64B,\n-\t\t\t       error_flag);\n-\n-\tTF_RM_CHECK_SRAM_ALLOC(query,\n-\t\t\t       dir,\n-\t\t\t       TF_RESC_TYPE_SRAM_NAT_SPORT,\n-\t\t\t       TF_RSVD_SRAM_NAT_SPORT,\n-\t\t\t       error_flag);\n-\n-\tTF_RM_CHECK_SRAM_ALLOC(query,\n-\t\t\t       dir,\n-\t\t\t       TF_RESC_TYPE_SRAM_NAT_DPORT,\n-\t\t\t       TF_RSVD_SRAM_NAT_DPORT,\n-\t\t\t       error_flag);\n-\n-\tTF_RM_CHECK_SRAM_ALLOC(query,\n-\t\t\t       dir,\n-\t\t\t       TF_RESC_TYPE_SRAM_NAT_S_IPV4,\n-\t\t\t       TF_RSVD_SRAM_NAT_S_IPV4,\n-\t\t\t       error_flag);\n-\n-\tTF_RM_CHECK_SRAM_ALLOC(query,\n-\t\t\t       dir,\n-\t\t\t       TF_RESC_TYPE_SRAM_NAT_D_IPV4,\n-\t\t\t       TF_RSVD_SRAM_NAT_D_IPV4,\n-\t\t\t       error_flag);\n-\n-\tif (*error_flag != 0)\n-\t\treturn -ENOMEM;\n-\n-\treturn 0;\n-}\n-\n-/**\n- * Internal function to mark pool entries used.\n+ *     0          - Success\n+ *   - EOPNOTSUPP - Operation not supported\n  */\n static void\n-tf_rm_reserve_range(uint32_t count,\n-\t\t    uint32_t rsv_begin,\n-\t\t    
uint32_t rsv_end,\n-\t\t    uint32_t max,\n-\t\t    struct bitalloc *pool)\n+tf_rm_count_hcapi_reservations(enum tf_dir dir,\n+\t\t\t       enum tf_device_module_type type,\n+\t\t\t       struct tf_rm_element_cfg *cfg,\n+\t\t\t       uint16_t *reservations,\n+\t\t\t       uint16_t count,\n+\t\t\t       uint16_t *valid_count)\n {\n-\tuint32_t i;\n+\tint i;\n+\tuint16_t cnt = 0;\n \n-\t/* If no resources has been requested we mark everything\n-\t * 'used'\n-\t */\n-\tif (count == 0)\t{\n-\t\tfor (i = 0; i < max; i++)\n-\t\t\tba_alloc_index(pool, i);\n-\t} else {\n-\t\t/* Support 2 main modes\n-\t\t * Reserved range starts from bottom up (with\n-\t\t * pre-reserved value or not)\n-\t\t * - begin = 0 to end xx\n-\t\t * - begin = 1 to end xx\n-\t\t *\n-\t\t * Reserved range starts from top down\n-\t\t * - begin = yy to end max\n-\t\t */\n+\tfor (i = 0; i < count; i++) {\n+\t\tif (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&\n+\t\t    reservations[i] > 0)\n+\t\t\tcnt++;\n \n-\t\t/* Bottom up check, start from 0 */\n-\t\tif (rsv_begin == 0) {\n-\t\t\tfor (i = rsv_end + 1; i < max; i++)\n-\t\t\t\tba_alloc_index(pool, i);\n-\t\t}\n-\n-\t\t/* Bottom up check, start from 1 or higher OR\n-\t\t * Top Down\n+\t\t/* Only log msg if a type is attempted reserved and\n+\t\t * not supported. We ignore EM module as its using a\n+\t\t * split configuration array thus it would fail for\n+\t\t * this type of check.\n \t\t */\n-\t\tif (rsv_begin >= 1) {\n-\t\t\t/* Allocate from 0 until start */\n-\t\t\tfor (i = 0; i < rsv_begin; i++)\n-\t\t\t\tba_alloc_index(pool, i);\n-\n-\t\t\t/* Skip and then do the remaining */\n-\t\t\tif (rsv_end < max - 1) {\n-\t\t\t\tfor (i = rsv_end; i < max; i++)\n-\t\t\t\t\tba_alloc_index(pool, i);\n-\t\t\t}\n-\t\t}\n-\t}\n-}\n-\n-/**\n- * Internal function to mark all the l2 ctxt allocated that Truflow\n- * does not own.\n- */\n-static void\n-tf_rm_rsvd_l2_ctxt(struct tf_session *tfs)\n-{\n-\tuint32_t index = TF_RESC_TYPE_HW_L2_CTXT_TCAM;\n-\tuint32_t end = 0;\n-\n-\t/* l2 ctxt rx direction */\n-\tif (tfs->resc.rx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.hw_entry[index].start +\n-\t\t\ttfs->resc.rx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.rx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_L2_CTXT_TCAM,\n-\t\t\t    tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX);\n-\n-\t/* l2 ctxt tx direction */\n-\tif (tfs->resc.tx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.hw_entry[index].start +\n-\t\t\ttfs->resc.tx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.tx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_L2_CTXT_TCAM,\n-\t\t\t    tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX);\n-}\n-\n-/**\n- * Internal function to mark all the profile tcam and profile func\n- * resources that Truflow does not own.\n- */\n-static void\n-tf_rm_rsvd_prof(struct tf_session *tfs)\n-{\n-\tuint32_t index = TF_RESC_TYPE_HW_PROF_FUNC;\n-\tuint32_t end = 0;\n-\n-\t/* profile func rx direction */\n-\tif (tfs->resc.rx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.hw_entry[index].start +\n-\t\t\ttfs->resc.rx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.rx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_PROF_FUNC,\n-\t\t\t    tfs->TF_PROF_FUNC_POOL_NAME_RX);\n-\n-\t/* profile func tx direction */\n-\tif (tfs->resc.tx.hw_entry[index].stride > 0)\n-\t\tend = 
tfs->resc.tx.hw_entry[index].start +\n-\t\t\ttfs->resc.tx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.tx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_PROF_FUNC,\n-\t\t\t    tfs->TF_PROF_FUNC_POOL_NAME_TX);\n-\n-\tindex = TF_RESC_TYPE_HW_PROF_TCAM;\n-\n-\t/* profile tcam rx direction */\n-\tif (tfs->resc.rx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.hw_entry[index].start +\n-\t\t\ttfs->resc.rx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.rx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_PROF_TCAM,\n-\t\t\t    tfs->TF_PROF_TCAM_POOL_NAME_RX);\n-\n-\t/* profile tcam tx direction */\n-\tif (tfs->resc.tx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.hw_entry[index].start +\n-\t\t\ttfs->resc.tx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.tx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_PROF_TCAM,\n-\t\t\t    tfs->TF_PROF_TCAM_POOL_NAME_TX);\n-}\n-\n-/**\n- * Internal function to mark all the em profile id allocated that\n- * Truflow does not own.\n- */\n-static void\n-tf_rm_rsvd_em_prof(struct tf_session *tfs)\n-{\n-\tuint32_t index = TF_RESC_TYPE_HW_EM_PROF_ID;\n-\tuint32_t end = 0;\n-\n-\t/* em prof id rx direction */\n-\tif (tfs->resc.rx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.hw_entry[index].start +\n-\t\t\ttfs->resc.rx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.rx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_EM_PROF_ID,\n-\t\t\t    tfs->TF_EM_PROF_ID_POOL_NAME_RX);\n-\n-\t/* em prof id tx direction */\n-\tif (tfs->resc.tx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.hw_entry[index].start +\n-\t\t\ttfs->resc.tx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.tx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_EM_PROF_ID,\n-\t\t\t    tfs->TF_EM_PROF_ID_POOL_NAME_TX);\n-}\n-\n-/**\n- * Internal function to mark all the wildcard tcam and profile id\n- * resources that Truflow does not own.\n- */\n-static void\n-tf_rm_rsvd_wc(struct tf_session *tfs)\n-{\n-\tuint32_t index = TF_RESC_TYPE_HW_WC_TCAM_PROF_ID;\n-\tuint32_t end = 0;\n-\n-\t/* wc profile id rx direction */\n-\tif (tfs->resc.rx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.hw_entry[index].start +\n-\t\t\ttfs->resc.rx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.rx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_WC_PROF_ID,\n-\t\t\t    tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_RX);\n-\n-\t/* wc profile id tx direction */\n-\tif (tfs->resc.tx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.hw_entry[index].start +\n-\t\t\ttfs->resc.tx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.tx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_WC_PROF_ID,\n-\t\t\t    tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_TX);\n-\n-\tindex = TF_RESC_TYPE_HW_WC_TCAM;\n-\n-\t/* wc tcam rx direction */\n-\tif (tfs->resc.rx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.hw_entry[index].start +\n-\t\t\ttfs->resc.rx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.rx.hw_entry[index].start,\n-\t\t\t    
end,\n-\t\t\t    TF_NUM_WC_TCAM_ROW,\n-\t\t\t    tfs->TF_WC_TCAM_POOL_NAME_RX);\n-\n-\t/* wc tcam tx direction */\n-\tif (tfs->resc.tx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.hw_entry[index].start +\n-\t\t\ttfs->resc.tx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.tx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_WC_TCAM_ROW,\n-\t\t\t    tfs->TF_WC_TCAM_POOL_NAME_TX);\n-}\n-\n-/**\n- * Internal function to mark all the meter resources allocated that\n- * Truflow does not own.\n- */\n-static void\n-tf_rm_rsvd_meter(struct tf_session *tfs)\n-{\n-\tuint32_t index = TF_RESC_TYPE_HW_METER_PROF;\n-\tuint32_t end = 0;\n-\n-\t/* meter profiles rx direction */\n-\tif (tfs->resc.rx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.hw_entry[index].start +\n-\t\t\ttfs->resc.rx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.rx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_METER_PROF,\n-\t\t\t    tfs->TF_METER_PROF_POOL_NAME_RX);\n-\n-\t/* meter profiles tx direction */\n-\tif (tfs->resc.tx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.hw_entry[index].start +\n-\t\t\ttfs->resc.tx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.tx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_METER_PROF,\n-\t\t\t    tfs->TF_METER_PROF_POOL_NAME_TX);\n-\n-\tindex = TF_RESC_TYPE_HW_METER_INST;\n-\n-\t/* meter rx direction */\n-\tif (tfs->resc.rx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.hw_entry[index].start +\n-\t\t\ttfs->resc.rx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.rx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_METER,\n-\t\t\t    tfs->TF_METER_INST_POOL_NAME_RX);\n-\n-\t/* meter tx direction */\n-\tif (tfs->resc.tx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.hw_entry[index].start +\n-\t\t\ttfs->resc.tx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.tx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_METER,\n-\t\t\t    tfs->TF_METER_INST_POOL_NAME_TX);\n-}\n-\n-/**\n- * Internal function to mark all the mirror resources allocated that\n- * Truflow does not own.\n- */\n-static void\n-tf_rm_rsvd_mirror(struct tf_session *tfs)\n-{\n-\tuint32_t index = TF_RESC_TYPE_HW_MIRROR;\n-\tuint32_t end = 0;\n-\n-\t/* mirror rx direction */\n-\tif (tfs->resc.rx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.hw_entry[index].start +\n-\t\t\ttfs->resc.rx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.rx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_MIRROR,\n-\t\t\t    tfs->TF_MIRROR_POOL_NAME_RX);\n-\n-\t/* mirror tx direction */\n-\tif (tfs->resc.tx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.hw_entry[index].start +\n-\t\t\ttfs->resc.tx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.tx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_MIRROR,\n-\t\t\t    tfs->TF_MIRROR_POOL_NAME_TX);\n-}\n-\n-/**\n- * Internal function to mark all the upar resources allocated that\n- * Truflow does not own.\n- */\n-static void\n-tf_rm_rsvd_upar(struct tf_session *tfs)\n-{\n-\tuint32_t index = TF_RESC_TYPE_HW_UPAR;\n-\tuint32_t end = 
0;\n-\n-\t/* upar rx direction */\n-\tif (tfs->resc.rx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.hw_entry[index].start +\n-\t\t\ttfs->resc.rx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.rx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_UPAR,\n-\t\t\t    tfs->TF_UPAR_POOL_NAME_RX);\n-\n-\t/* upar tx direction */\n-\tif (tfs->resc.tx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.hw_entry[index].start +\n-\t\t\ttfs->resc.tx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.tx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_UPAR,\n-\t\t\t    tfs->TF_UPAR_POOL_NAME_TX);\n-}\n-\n-/**\n- * Internal function to mark all the sp tcam resources allocated that\n- * Truflow does not own.\n- */\n-static void\n-tf_rm_rsvd_sp_tcam(struct tf_session *tfs)\n-{\n-\tuint32_t index = TF_RESC_TYPE_HW_SP_TCAM;\n-\tuint32_t end = 0;\n-\n-\t/* sp tcam rx direction */\n-\tif (tfs->resc.rx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.hw_entry[index].start +\n-\t\t\ttfs->resc.rx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.rx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_SP_TCAM,\n-\t\t\t    tfs->TF_SP_TCAM_POOL_NAME_RX);\n-\n-\t/* sp tcam tx direction */\n-\tif (tfs->resc.tx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.hw_entry[index].start +\n-\t\t\ttfs->resc.tx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.tx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_SP_TCAM,\n-\t\t\t    tfs->TF_SP_TCAM_POOL_NAME_TX);\n-}\n-\n-/**\n- * Internal function to mark all the l2 func resources allocated that\n- * Truflow does not own.\n- */\n-static void\n-tf_rm_rsvd_l2_func(struct tf_session *tfs)\n-{\n-\tuint32_t index = TF_RESC_TYPE_HW_L2_FUNC;\n-\tuint32_t end = 0;\n-\n-\t/* l2 func rx direction */\n-\tif (tfs->resc.rx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.hw_entry[index].start +\n-\t\t\ttfs->resc.rx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.rx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_L2_FUNC,\n-\t\t\t    tfs->TF_L2_FUNC_POOL_NAME_RX);\n-\n-\t/* l2 func tx direction */\n-\tif (tfs->resc.tx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.hw_entry[index].start +\n-\t\t\ttfs->resc.tx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.tx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_L2_FUNC,\n-\t\t\t    tfs->TF_L2_FUNC_POOL_NAME_TX);\n-}\n-\n-/**\n- * Internal function to mark all the fkb resources allocated that\n- * Truflow does not own.\n- */\n-static void\n-tf_rm_rsvd_fkb(struct tf_session *tfs)\n-{\n-\tuint32_t index = TF_RESC_TYPE_HW_FKB;\n-\tuint32_t end = 0;\n-\n-\t/* fkb rx direction */\n-\tif (tfs->resc.rx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.hw_entry[index].start +\n-\t\t\ttfs->resc.rx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.rx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_FKB,\n-\t\t\t    tfs->TF_FKB_POOL_NAME_RX);\n-\n-\t/* fkb tx direction */\n-\tif (tfs->resc.tx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.hw_entry[index].start +\n-\t\t\ttfs->resc.tx.hw_entry[index].stride - 
1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.tx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_FKB,\n-\t\t\t    tfs->TF_FKB_POOL_NAME_TX);\n-}\n-\n-/**\n- * Internal function to mark all the tbld scope resources allocated\n- * that Truflow does not own.\n- */\n-static void\n-tf_rm_rsvd_tbl_scope(struct tf_session *tfs)\n-{\n-\tuint32_t index = TF_RESC_TYPE_HW_TBL_SCOPE;\n-\tuint32_t end = 0;\n-\n-\t/* tbl scope rx direction */\n-\tif (tfs->resc.rx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.hw_entry[index].start +\n-\t\t\ttfs->resc.rx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.rx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_TBL_SCOPE,\n-\t\t\t    tfs->TF_TBL_SCOPE_POOL_NAME_RX);\n-\n-\t/* tbl scope tx direction */\n-\tif (tfs->resc.tx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.hw_entry[index].start +\n-\t\t\ttfs->resc.tx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.tx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_TBL_SCOPE,\n-\t\t\t    tfs->TF_TBL_SCOPE_POOL_NAME_TX);\n-}\n-\n-/**\n- * Internal function to mark all the l2 epoch resources allocated that\n- * Truflow does not own.\n- */\n-static void\n-tf_rm_rsvd_epoch(struct tf_session *tfs)\n-{\n-\tuint32_t index = TF_RESC_TYPE_HW_EPOCH0;\n-\tuint32_t end = 0;\n-\n-\t/* epoch0 rx direction */\n-\tif (tfs->resc.rx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.hw_entry[index].start +\n-\t\t\ttfs->resc.rx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.rx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_EPOCH0,\n-\t\t\t    tfs->TF_EPOCH0_POOL_NAME_RX);\n-\n-\t/* epoch0 tx direction */\n-\tif (tfs->resc.tx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.hw_entry[index].start +\n-\t\t\ttfs->resc.tx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.tx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_EPOCH0,\n-\t\t\t    tfs->TF_EPOCH0_POOL_NAME_TX);\n-\n-\tindex = TF_RESC_TYPE_HW_EPOCH1;\n-\n-\t/* epoch1 rx direction */\n-\tif (tfs->resc.rx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.hw_entry[index].start +\n-\t\t\ttfs->resc.rx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.rx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_EPOCH1,\n-\t\t\t    tfs->TF_EPOCH1_POOL_NAME_RX);\n-\n-\t/* epoch1 tx direction */\n-\tif (tfs->resc.tx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.hw_entry[index].start +\n-\t\t\ttfs->resc.tx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.tx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_EPOCH1,\n-\t\t\t    tfs->TF_EPOCH1_POOL_NAME_TX);\n-}\n-\n-/**\n- * Internal function to mark all the metadata resources allocated that\n- * Truflow does not own.\n- */\n-static void\n-tf_rm_rsvd_metadata(struct tf_session *tfs)\n-{\n-\tuint32_t index = TF_RESC_TYPE_HW_METADATA;\n-\tuint32_t end = 0;\n-\n-\t/* metadata rx direction */\n-\tif (tfs->resc.rx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.hw_entry[index].start +\n-\t\t\ttfs->resc.rx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,\n-\t\t\t    
tfs->resc.rx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_METADATA,\n-\t\t\t    tfs->TF_METADATA_POOL_NAME_RX);\n-\n-\t/* metadata tx direction */\n-\tif (tfs->resc.tx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.hw_entry[index].start +\n-\t\t\ttfs->resc.tx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.tx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_METADATA,\n-\t\t\t    tfs->TF_METADATA_POOL_NAME_TX);\n-}\n-\n-/**\n- * Internal function to mark all the ct state resources allocated that\n- * Truflow does not own.\n- */\n-static void\n-tf_rm_rsvd_ct_state(struct tf_session *tfs)\n-{\n-\tuint32_t index = TF_RESC_TYPE_HW_CT_STATE;\n-\tuint32_t end = 0;\n-\n-\t/* ct state rx direction */\n-\tif (tfs->resc.rx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.hw_entry[index].start +\n-\t\t\ttfs->resc.rx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.rx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_CT_STATE,\n-\t\t\t    tfs->TF_CT_STATE_POOL_NAME_RX);\n-\n-\t/* ct state tx direction */\n-\tif (tfs->resc.tx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.hw_entry[index].start +\n-\t\t\ttfs->resc.tx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.tx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_CT_STATE,\n-\t\t\t    tfs->TF_CT_STATE_POOL_NAME_TX);\n-}\n-\n-/**\n- * Internal function to mark all the range resources allocated that\n- * Truflow does not own.\n- */\n-static void\n-tf_rm_rsvd_range(struct tf_session *tfs)\n-{\n-\tuint32_t index = TF_RESC_TYPE_HW_RANGE_PROF;\n-\tuint32_t end = 0;\n-\n-\t/* range profile rx direction */\n-\tif (tfs->resc.rx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.hw_entry[index].start +\n-\t\t\ttfs->resc.rx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.rx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_RANGE_PROF,\n-\t\t\t    tfs->TF_RANGE_PROF_POOL_NAME_RX);\n-\n-\t/* range profile tx direction */\n-\tif (tfs->resc.tx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.hw_entry[index].start +\n-\t\t\ttfs->resc.tx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.tx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_RANGE_PROF,\n-\t\t\t    tfs->TF_RANGE_PROF_POOL_NAME_TX);\n-\n-\tindex = TF_RESC_TYPE_HW_RANGE_ENTRY;\n-\n-\t/* range entry rx direction */\n-\tif (tfs->resc.rx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.hw_entry[index].start +\n-\t\t\ttfs->resc.rx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.rx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_RANGE_ENTRY,\n-\t\t\t    tfs->TF_RANGE_ENTRY_POOL_NAME_RX);\n-\n-\t/* range entry tx direction */\n-\tif (tfs->resc.tx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.hw_entry[index].start +\n-\t\t\ttfs->resc.tx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.tx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_RANGE_ENTRY,\n-\t\t\t    tfs->TF_RANGE_ENTRY_POOL_NAME_TX);\n-}\n-\n-/**\n- * Internal function to mark all the lag resources allocated that\n- * Truflow does not own.\n- */\n-static void\n-tf_rm_rsvd_lag_entry(struct 
tf_session *tfs)\n-{\n-\tuint32_t index = TF_RESC_TYPE_HW_LAG_ENTRY;\n-\tuint32_t end = 0;\n-\n-\t/* lag entry rx direction */\n-\tif (tfs->resc.rx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.hw_entry[index].start +\n-\t\t\ttfs->resc.rx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.rx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_LAG_ENTRY,\n-\t\t\t    tfs->TF_LAG_ENTRY_POOL_NAME_RX);\n-\n-\t/* lag entry tx direction */\n-\tif (tfs->resc.tx.hw_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.hw_entry[index].start +\n-\t\t\ttfs->resc.tx.hw_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.hw_entry[index].stride,\n-\t\t\t    tfs->resc.tx.hw_entry[index].start,\n-\t\t\t    end,\n-\t\t\t    TF_NUM_LAG_ENTRY,\n-\t\t\t    tfs->TF_LAG_ENTRY_POOL_NAME_TX);\n-}\n-\n-/**\n- * Internal function to mark all the full action resources allocated\n- * that Truflow does not own.\n- */\n-static void\n-tf_rm_rsvd_sram_full_action(struct tf_session *tfs)\n-{\n-\tuint32_t index = TF_RESC_TYPE_SRAM_FULL_ACTION;\n-\tuint16_t end = 0;\n-\n-\t/* full action rx direction */\n-\tif (tfs->resc.rx.sram_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.sram_entry[index].start +\n-\t\t\ttfs->resc.rx.sram_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,\n-\t\t\t    TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_RX,\n-\t\t\t    end,\n-\t\t\t    TF_RSVD_SRAM_FULL_ACTION_RX,\n-\t\t\t    tfs->TF_SRAM_FULL_ACTION_POOL_NAME_RX);\n-\n-\t/* full action tx direction */\n-\tif (tfs->resc.tx.sram_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.sram_entry[index].start +\n-\t\t\ttfs->resc.tx.sram_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,\n-\t\t\t    TF_RSVD_SRAM_FULL_ACTION_BEGIN_IDX_TX,\n-\t\t\t    end,\n-\t\t\t    TF_RSVD_SRAM_FULL_ACTION_TX,\n-\t\t\t    tfs->TF_SRAM_FULL_ACTION_POOL_NAME_TX);\n-}\n-\n-/**\n- * Internal function to mark all the multicast group resources\n- * allocated that Truflow does not own.\n- */\n-static void\n-tf_rm_rsvd_sram_mcg(struct tf_session *tfs)\n-{\n-\tuint32_t index = TF_RESC_TYPE_SRAM_MCG;\n-\tuint16_t end = 0;\n-\n-\t/* multicast group rx direction */\n-\tif (tfs->resc.rx.sram_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.sram_entry[index].start +\n-\t\t\ttfs->resc.rx.sram_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,\n-\t\t\t    TF_RSVD_SRAM_MCG_BEGIN_IDX_RX,\n-\t\t\t    end,\n-\t\t\t    TF_RSVD_SRAM_MCG_RX,\n-\t\t\t    tfs->TF_SRAM_MCG_POOL_NAME_RX);\n-\n-\t/* Multicast Group on TX is not supported */\n-}\n-\n-/**\n- * Internal function to mark all the encap resources allocated that\n- * Truflow does not own.\n- */\n-static void\n-tf_rm_rsvd_sram_encap(struct tf_session *tfs)\n-{\n-\tuint32_t index = TF_RESC_TYPE_SRAM_ENCAP_8B;\n-\tuint16_t end = 0;\n-\n-\t/* encap 8b rx direction */\n-\tif (tfs->resc.rx.sram_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.sram_entry[index].start +\n-\t\t\ttfs->resc.rx.sram_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,\n-\t\t\t    TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_RX,\n-\t\t\t    end,\n-\t\t\t    TF_RSVD_SRAM_ENCAP_8B_RX,\n-\t\t\t    tfs->TF_SRAM_ENCAP_8B_POOL_NAME_RX);\n-\n-\t/* encap 8b tx direction */\n-\tif (tfs->resc.tx.sram_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.sram_entry[index].start +\n-\t\t\ttfs->resc.tx.sram_entry[index].stride - 
1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,\n-\t\t\t    TF_RSVD_SRAM_ENCAP_8B_BEGIN_IDX_TX,\n-\t\t\t    end,\n-\t\t\t    TF_RSVD_SRAM_ENCAP_8B_TX,\n-\t\t\t    tfs->TF_SRAM_ENCAP_8B_POOL_NAME_TX);\n-\n-\tindex = TF_RESC_TYPE_SRAM_ENCAP_16B;\n-\n-\t/* encap 16b rx direction */\n-\tif (tfs->resc.rx.sram_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.sram_entry[index].start +\n-\t\t\ttfs->resc.rx.sram_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,\n-\t\t\t    TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_RX,\n-\t\t\t    end,\n-\t\t\t    TF_RSVD_SRAM_ENCAP_16B_RX,\n-\t\t\t    tfs->TF_SRAM_ENCAP_16B_POOL_NAME_RX);\n-\n-\t/* encap 16b tx direction */\n-\tif (tfs->resc.tx.sram_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.sram_entry[index].start +\n-\t\t\ttfs->resc.tx.sram_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,\n-\t\t\t    TF_RSVD_SRAM_ENCAP_16B_BEGIN_IDX_TX,\n-\t\t\t    end,\n-\t\t\t    TF_RSVD_SRAM_ENCAP_16B_TX,\n-\t\t\t    tfs->TF_SRAM_ENCAP_16B_POOL_NAME_TX);\n-\n-\tindex = TF_RESC_TYPE_SRAM_ENCAP_64B;\n-\n-\t/* Encap 64B not supported on RX */\n-\n-\t/* Encap 64b tx direction */\n-\tif (tfs->resc.tx.sram_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.sram_entry[index].start +\n-\t\t\ttfs->resc.tx.sram_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,\n-\t\t\t    TF_RSVD_SRAM_ENCAP_64B_BEGIN_IDX_TX,\n-\t\t\t    end,\n-\t\t\t    TF_RSVD_SRAM_ENCAP_64B_TX,\n-\t\t\t    tfs->TF_SRAM_ENCAP_64B_POOL_NAME_TX);\n-}\n-\n-/**\n- * Internal function to mark all the sp resources allocated that\n- * Truflow does not own.\n- */\n-static void\n-tf_rm_rsvd_sram_sp(struct tf_session *tfs)\n-{\n-\tuint32_t index = TF_RESC_TYPE_SRAM_SP_SMAC;\n-\tuint16_t end = 0;\n-\n-\t/* sp smac rx direction */\n-\tif (tfs->resc.rx.sram_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.sram_entry[index].start +\n-\t\t\ttfs->resc.rx.sram_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,\n-\t\t\t    TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_RX,\n-\t\t\t    end,\n-\t\t\t    TF_RSVD_SRAM_SP_SMAC_RX,\n-\t\t\t    tfs->TF_SRAM_SP_SMAC_POOL_NAME_RX);\n-\n-\t/* sp smac tx direction */\n-\tif (tfs->resc.tx.sram_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.sram_entry[index].start +\n-\t\t\ttfs->resc.tx.sram_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,\n-\t\t\t    TF_RSVD_SRAM_SP_SMAC_BEGIN_IDX_TX,\n-\t\t\t    end,\n-\t\t\t    TF_RSVD_SRAM_SP_SMAC_TX,\n-\t\t\t    tfs->TF_SRAM_SP_SMAC_POOL_NAME_TX);\n-\n-\tindex = TF_RESC_TYPE_SRAM_SP_SMAC_IPV4;\n-\n-\t/* SP SMAC IPv4 not supported on RX */\n-\n-\t/* sp smac ipv4 tx direction */\n-\tif (tfs->resc.tx.sram_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.sram_entry[index].start +\n-\t\t\ttfs->resc.tx.sram_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,\n-\t\t\t    TF_RSVD_SRAM_SP_SMAC_IPV4_BEGIN_IDX_TX,\n-\t\t\t    end,\n-\t\t\t    TF_RSVD_SRAM_SP_SMAC_IPV4_TX,\n-\t\t\t    tfs->TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX);\n-\n-\tindex = TF_RESC_TYPE_SRAM_SP_SMAC_IPV6;\n-\n-\t/* SP SMAC IPv6 not supported on RX */\n-\n-\t/* sp smac ipv6 tx direction */\n-\tif (tfs->resc.tx.sram_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.sram_entry[index].start +\n-\t\t\ttfs->resc.tx.sram_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,\n-\t\t\t    
TF_RSVD_SRAM_SP_SMAC_IPV6_BEGIN_IDX_TX,\n-\t\t\t    end,\n-\t\t\t    TF_RSVD_SRAM_SP_SMAC_IPV6_TX,\n-\t\t\t    tfs->TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX);\n-}\n-\n-/**\n- * Internal function to mark all the stat resources allocated that\n- * Truflow does not own.\n- */\n-static void\n-tf_rm_rsvd_sram_stats(struct tf_session *tfs)\n-{\n-\tuint32_t index = TF_RESC_TYPE_SRAM_COUNTER_64B;\n-\tuint16_t end = 0;\n-\n-\t/* counter 64b rx direction */\n-\tif (tfs->resc.rx.sram_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.sram_entry[index].start +\n-\t\t\ttfs->resc.rx.sram_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,\n-\t\t\t    TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_RX,\n-\t\t\t    end,\n-\t\t\t    TF_RSVD_SRAM_COUNTER_64B_RX,\n-\t\t\t    tfs->TF_SRAM_STATS_64B_POOL_NAME_RX);\n-\n-\t/* counter 64b tx direction */\n-\tif (tfs->resc.tx.sram_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.sram_entry[index].start +\n-\t\t\ttfs->resc.tx.sram_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,\n-\t\t\t    TF_RSVD_SRAM_COUNTER_64B_BEGIN_IDX_TX,\n-\t\t\t    end,\n-\t\t\t    TF_RSVD_SRAM_COUNTER_64B_TX,\n-\t\t\t    tfs->TF_SRAM_STATS_64B_POOL_NAME_TX);\n-}\n-\n-/**\n- * Internal function to mark all the nat resources allocated that\n- * Truflow does not own.\n- */\n-static void\n-tf_rm_rsvd_sram_nat(struct tf_session *tfs)\n-{\n-\tuint32_t index = TF_RESC_TYPE_SRAM_NAT_SPORT;\n-\tuint16_t end = 0;\n-\n-\t/* nat source port rx direction */\n-\tif (tfs->resc.rx.sram_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.sram_entry[index].start +\n-\t\t\ttfs->resc.rx.sram_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,\n-\t\t\t    TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_RX,\n-\t\t\t    end,\n-\t\t\t    TF_RSVD_SRAM_NAT_SPORT_RX,\n-\t\t\t    tfs->TF_SRAM_NAT_SPORT_POOL_NAME_RX);\n-\n-\t/* nat source port tx direction */\n-\tif (tfs->resc.tx.sram_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.sram_entry[index].start +\n-\t\t\ttfs->resc.tx.sram_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,\n-\t\t\t    TF_RSVD_SRAM_NAT_SPORT_BEGIN_IDX_TX,\n-\t\t\t    end,\n-\t\t\t    TF_RSVD_SRAM_NAT_SPORT_TX,\n-\t\t\t    tfs->TF_SRAM_NAT_SPORT_POOL_NAME_TX);\n-\n-\tindex = TF_RESC_TYPE_SRAM_NAT_DPORT;\n-\n-\t/* nat destination port rx direction */\n-\tif (tfs->resc.rx.sram_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.sram_entry[index].start +\n-\t\t\ttfs->resc.rx.sram_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,\n-\t\t\t    TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_RX,\n-\t\t\t    end,\n-\t\t\t    TF_RSVD_SRAM_NAT_DPORT_RX,\n-\t\t\t    tfs->TF_SRAM_NAT_DPORT_POOL_NAME_RX);\n-\n-\t/* nat destination port tx direction */\n-\tif (tfs->resc.tx.sram_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.sram_entry[index].start +\n-\t\t\ttfs->resc.tx.sram_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,\n-\t\t\t    TF_RSVD_SRAM_NAT_DPORT_BEGIN_IDX_TX,\n-\t\t\t    end,\n-\t\t\t    TF_RSVD_SRAM_NAT_DPORT_TX,\n-\t\t\t    tfs->TF_SRAM_NAT_DPORT_POOL_NAME_TX);\n-\n-\tindex = TF_RESC_TYPE_SRAM_NAT_S_IPV4;\n-\n-\t/* nat source port ipv4 rx direction */\n-\tif (tfs->resc.rx.sram_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.sram_entry[index].start +\n-\t\t\ttfs->resc.rx.sram_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,\n-\t\t\t    
TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_RX,\n-\t\t\t    end,\n-\t\t\t    TF_RSVD_SRAM_NAT_S_IPV4_RX,\n-\t\t\t    tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_RX);\n-\n-\t/* nat source ipv4 port tx direction */\n-\tif (tfs->resc.tx.sram_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.sram_entry[index].start +\n-\t\t\ttfs->resc.tx.sram_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,\n-\t\t\t    TF_RSVD_SRAM_NAT_S_IPV4_BEGIN_IDX_TX,\n-\t\t\t    end,\n-\t\t\t    TF_RSVD_SRAM_NAT_S_IPV4_TX,\n-\t\t\t    tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_TX);\n-\n-\tindex = TF_RESC_TYPE_SRAM_NAT_D_IPV4;\n-\n-\t/* nat destination port ipv4 rx direction */\n-\tif (tfs->resc.rx.sram_entry[index].stride > 0)\n-\t\tend = tfs->resc.rx.sram_entry[index].start +\n-\t\t\ttfs->resc.rx.sram_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.rx.sram_entry[index].stride,\n-\t\t\t    TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_RX,\n-\t\t\t    end,\n-\t\t\t    TF_RSVD_SRAM_NAT_D_IPV4_RX,\n-\t\t\t    tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_RX);\n-\n-\t/* nat destination ipv4 port tx direction */\n-\tif (tfs->resc.tx.sram_entry[index].stride > 0)\n-\t\tend = tfs->resc.tx.sram_entry[index].start +\n-\t\t\ttfs->resc.tx.sram_entry[index].stride - 1;\n-\n-\ttf_rm_reserve_range(tfs->resc.tx.sram_entry[index].stride,\n-\t\t\t    TF_RSVD_SRAM_NAT_D_IPV4_BEGIN_IDX_TX,\n-\t\t\t    end,\n-\t\t\t    TF_RSVD_SRAM_NAT_D_IPV4_TX,\n-\t\t\t    tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_TX);\n-}\n-\n-/**\n- * Internal function used to validate the HW allocated resources\n- * against the requested values.\n- */\n-static int\n-tf_rm_hw_alloc_validate(enum tf_dir dir,\n-\t\t\tstruct tf_rm_hw_alloc *hw_alloc,\n-\t\t\tstruct tf_rm_entry *hw_entry)\n-{\n-\tint error = 0;\n-\tint i;\n-\n-\tfor (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {\n-\t\tif (hw_entry[i].stride != hw_alloc->hw_num[i]) {\n+\t\tif (type != TF_DEVICE_MODULE_TYPE_EM &&\n+\t\t    cfg[i].cfg_type == TF_RM_ELEM_CFG_NULL &&\n+\t\t    reservations[i] > 0) {\n \t\t\tTFP_DRV_LOG(ERR,\n-\t\t\t\t\"%s, Alloc failed id:%d expect:%d got:%d\\n\",\n+\t\t\t\t\"%s, %s, %s allocation not supported\\n\",\n+\t\t\t\ttf_device_module_type_2_str(type),\n \t\t\t\ttf_dir_2_str(dir),\n-\t\t\t\ti,\n-\t\t\t\thw_alloc->hw_num[i],\n-\t\t\t\thw_entry[i].stride);\n-\t\t\terror = -1;\n-\t\t}\n-\t}\n-\n-\treturn error;\n-}\n-\n-/**\n- * Internal function used to validate the SRAM allocated resources\n- * against the requested values.\n- */\n-static int\n-tf_rm_sram_alloc_validate(enum tf_dir dir __rte_unused,\n-\t\t\t  struct tf_rm_sram_alloc *sram_alloc,\n-\t\t\t  struct tf_rm_entry *sram_entry)\n-{\n-\tint error = 0;\n-\tint i;\n-\n-\tfor (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {\n-\t\tif (sram_entry[i].stride != sram_alloc->sram_num[i]) {\n-\t\t\tTFP_DRV_LOG(ERR,\n-\t\t\t\t\"%s, Alloc failed idx:%d expect:%d got:%d\\n\",\n+\t\t\t\ttf_device_module_type_subtype_2_str(type, i));\n+\t\t\tprintf(\"%s, %s, %s allocation of %d not supported\\n\",\n+\t\t\t\ttf_device_module_type_2_str(type),\n \t\t\t\ttf_dir_2_str(dir),\n-\t\t\t\ti,\n-\t\t\t\tsram_alloc->sram_num[i],\n-\t\t\t\tsram_entry[i].stride);\n-\t\t\terror = -1;\n+\t\t\t       tf_device_module_type_subtype_2_str(type, i),\n+\t\t\t       reservations[i]);\n+\n \t\t}\n \t}\n \n-\treturn error;\n+\t*valid_count = cnt;\n }\n \n /**\n- * Internal function used to mark all the HW resources allocated that\n- * Truflow does not own.\n+ * Resource Manager Adjust of base index definitions.\n  */\n-static void\n-tf_rm_reserve_hw(struct tf *tfp)\n-{\n-\tstruct 
tf_session *tfs = (struct tf_session *)(tfp->session->core_data);\n-\n-\t/* TBD\n-\t * There is no direct AFM resource allocation as it is carved\n-\t * statically at AFM boot time. Thus the bit allocators work\n-\t * on the full HW resource amount and we just mark everything\n-\t * used except the resources that Truflow took ownership off.\n-\t */\n-\ttf_rm_rsvd_l2_ctxt(tfs);\n-\ttf_rm_rsvd_prof(tfs);\n-\ttf_rm_rsvd_em_prof(tfs);\n-\ttf_rm_rsvd_wc(tfs);\n-\ttf_rm_rsvd_mirror(tfs);\n-\ttf_rm_rsvd_meter(tfs);\n-\ttf_rm_rsvd_upar(tfs);\n-\ttf_rm_rsvd_sp_tcam(tfs);\n-\ttf_rm_rsvd_l2_func(tfs);\n-\ttf_rm_rsvd_fkb(tfs);\n-\ttf_rm_rsvd_tbl_scope(tfs);\n-\ttf_rm_rsvd_epoch(tfs);\n-\ttf_rm_rsvd_metadata(tfs);\n-\ttf_rm_rsvd_ct_state(tfs);\n-\ttf_rm_rsvd_range(tfs);\n-\ttf_rm_rsvd_lag_entry(tfs);\n-}\n-\n-/**\n- * Internal function used to mark all the SRAM resources allocated\n- * that Truflow does not own.\n- */\n-static void\n-tf_rm_reserve_sram(struct tf *tfp)\n-{\n-\tstruct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);\n-\n-\t/* TBD\n-\t * There is no direct AFM resource allocation as it is carved\n-\t * statically at AFM boot time. Thus the bit allocators work\n-\t * on the full HW resource amount and we just mark everything\n-\t * used except the resources that Truflow took ownership off.\n-\t */\n-\ttf_rm_rsvd_sram_full_action(tfs);\n-\ttf_rm_rsvd_sram_mcg(tfs);\n-\ttf_rm_rsvd_sram_encap(tfs);\n-\ttf_rm_rsvd_sram_sp(tfs);\n-\ttf_rm_rsvd_sram_stats(tfs);\n-\ttf_rm_rsvd_sram_nat(tfs);\n-}\n-\n-/**\n- * Internal function used to allocate and validate all HW resources.\n- */\n-static int\n-tf_rm_allocate_validate_hw(struct tf *tfp,\n-\t\t\t   enum tf_dir dir)\n-{\n-\tint rc;\n-\tint i;\n-\tstruct tf_rm_hw_query hw_query;\n-\tstruct tf_rm_hw_alloc hw_alloc;\n-\tstruct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);\n-\tstruct tf_rm_entry *hw_entries;\n-\tuint32_t error_flag;\n-\n-\tif (dir == TF_DIR_RX)\n-\t\thw_entries = tfs->resc.rx.hw_entry;\n-\telse\n-\t\thw_entries = tfs->resc.tx.hw_entry;\n-\n-\t/* Query for Session HW Resources */\n-\n-\tmemset(&hw_query, 0, sizeof(hw_query)); /* RSXX */\n-\trc = tf_rm_check_hw_qcaps_static(&hw_query, dir, &error_flag);\n-\tif (rc) {\n-\t\t/* Log error */\n-\t\tTFP_DRV_LOG(ERR,\n-\t\t\t\"%s, HW QCAPS validation failed,\"\n-\t\t\t\"error_flag:0x%x, rc:%s\\n\",\n-\t\t\ttf_dir_2_str(dir),\n-\t\t\terror_flag,\n-\t\t\tstrerror(-rc));\n-\t\ttf_rm_print_hw_qcaps_error(dir, &hw_query, &error_flag);\n-\t\tgoto cleanup;\n-\t}\n-\n-\t/* Post process HW capability */\n-\tfor (i = 0; i < TF_RESC_TYPE_HW_MAX; i++)\n-\t\thw_alloc.hw_num[i] = hw_query.hw_query[i].max;\n-\n-\t/* Allocate Session HW Resources */\n-\t/* Perform HW allocation validation as its possible the\n-\t * resource availability changed between qcaps and alloc\n-\t */\n-\trc = tf_rm_hw_alloc_validate(dir, &hw_alloc, hw_entries);\n-\tif (rc) {\n-\t\t/* Log error */\n-\t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"%s, HW Resource validation failed, rc:%s\\n\",\n-\t\t\t    tf_dir_2_str(dir),\n-\t\t\t    strerror(-rc));\n-\t\tgoto cleanup;\n-\t}\n-\n-\treturn 0;\n-\n- cleanup:\n-\n-\treturn -1;\n-}\n+enum tf_rm_adjust_type {\n+\tTF_RM_ADJUST_ADD_BASE, /**< Adds base to the index */\n+\tTF_RM_ADJUST_RM_BASE   /**< Removes base from the index */\n+};\n \n /**\n- * Internal function used to allocate and validate all SRAM resources.\n+ * Adjust an index according to the allocation information.\n  *\n- * [in] tfp\n- *   Pointer to TF handle\n+ * All resources are controlled in a 0 
based pool. Some resources, by\n+ * design, are not 0 based, i.e. Full Action Records (SRAM) thus they\n+ * need to be adjusted before they are handed out.\n  *\n- * [in] dir\n- *   Receive or transmit direction\n+ * [in] db\n+ *   Pointer to the db, used for the lookup\n+ *\n+ * [in] action\n+ *   Adjust action\n+ *\n+ * [in] db_index\n+ *   DB index for the element type\n+ *\n+ * [in] index\n+ *   Index to convert\n+ *\n+ * [out] adj_index\n+ *   Adjusted index\n  *\n  * Returns:\n- *   0  - Success\n- *   -1 - Internal error\n+ *     0          - Success\n+ *   - EOPNOTSUPP - Operation not supported\n  */\n static int\n-tf_rm_allocate_validate_sram(struct tf *tfp,\n-\t\t\t     enum tf_dir dir)\n+tf_rm_adjust_index(struct tf_rm_element *db,\n+\t\t   enum tf_rm_adjust_type action,\n+\t\t   uint32_t db_index,\n+\t\t   uint32_t index,\n+\t\t   uint32_t *adj_index)\n {\n-\tint rc;\n-\tint i;\n-\tstruct tf_rm_sram_query sram_query;\n-\tstruct tf_rm_sram_alloc sram_alloc;\n-\tstruct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);\n-\tstruct tf_rm_entry *sram_entries;\n-\tuint32_t error_flag;\n-\n-\tif (dir == TF_DIR_RX)\n-\t\tsram_entries = tfs->resc.rx.sram_entry;\n-\telse\n-\t\tsram_entries = tfs->resc.tx.sram_entry;\n-\n-\tmemset(&sram_query, 0, sizeof(sram_query)); /* RSXX */\n-\trc = tf_rm_check_sram_qcaps_static(&sram_query, dir, &error_flag);\n-\tif (rc) {\n-\t\t/* Log error */\n-\t\tTFP_DRV_LOG(ERR,\n-\t\t\t\"%s, SRAM QCAPS validation failed,\"\n-\t\t\t\"error_flag:%x, rc:%s\\n\",\n-\t\t\ttf_dir_2_str(dir),\n-\t\t\terror_flag,\n-\t\t\tstrerror(-rc));\n-\t\ttf_rm_print_sram_qcaps_error(dir, &sram_query, &error_flag);\n-\t\tgoto cleanup;\n-\t}\n+\tint rc = 0;\n+\tuint32_t base_index;\n \n-\t/* Post process SRAM capability */\n-\tfor (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++)\n-\t\tsram_alloc.sram_num[i] = sram_query.sram_query[i].max;\n+\tbase_index = db[db_index].alloc.entry.start;\n \n-\t/* Perform SRAM allocation validation as its possible the\n-\t * resource availability changed between qcaps and alloc\n-\t */\n-\trc = tf_rm_sram_alloc_validate(dir, &sram_alloc, sram_entries);\n-\tif (rc) {\n-\t\t/* Log error */\n-\t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"%s, SRAM Resource allocation validation failed,\"\n-\t\t\t    \" rc:%s\\n\",\n-\t\t\t    tf_dir_2_str(dir),\n-\t\t\t    strerror(-rc));\n-\t\tgoto cleanup;\n+\tswitch (action) {\n+\tcase TF_RM_ADJUST_RM_BASE:\n+\t\t*adj_index = index - base_index;\n+\t\tbreak;\n+\tcase TF_RM_ADJUST_ADD_BASE:\n+\t\t*adj_index = index + base_index;\n+\t\tbreak;\n+\tdefault:\n+\t\treturn -EOPNOTSUPP;\n \t}\n \n-\treturn 0;\n-\n- cleanup:\n-\n-\treturn -1;\n+\treturn rc;\n }\n \n /**\n- * Helper function used to prune a HW resource array to only hold\n- * elements that needs to be flushed.\n- *\n- * [in] tfs\n- *   Session handle\n+ * Logs an array of found residual entries to the console.\n  *\n  * [in] dir\n  *   Receive or transmit direction\n  *\n- * [in] hw_entries\n- *   Master HW Resource database\n+ * [in] type\n+ *   Type of Device Module\n  *\n- * [in/out] flush_entries\n- *   Pruned HW Resource database of entries to be flushed. This\n- *   array should be passed in as a complete copy of the master HW\n- *   Resource database. 
The outgoing result will be a pruned version\n- *   based on the result of the requested checking\n+ * [in] count\n+ *   Number of entries in the residual array\n  *\n- * Returns:\n- *    0 - Success, no flush required\n- *    1 - Success, flush required\n- *   -1 - Internal error\n+ * [in] residuals\n+ *   Pointer to an array of residual entries. Array is index same as\n+ *   the DB in which this function is used. Each entry holds residual\n+ *   value for that entry.\n  */\n-static int\n-tf_rm_hw_to_flush(struct tf_session *tfs,\n-\t\t  enum tf_dir dir,\n-\t\t  struct tf_rm_entry *hw_entries,\n-\t\t  struct tf_rm_entry *flush_entries)\n+static void\n+tf_rm_log_residuals(enum tf_dir dir,\n+\t\t    enum tf_device_module_type type,\n+\t\t    uint16_t count,\n+\t\t    uint16_t *residuals)\n {\n-\tint rc;\n-\tint flush_rc = 0;\n-\tint free_cnt;\n-\tstruct bitalloc *pool;\n+\tint i;\n \n-\t/* Check all the hw resource pools and check for left over\n-\t * elements. Any found will result in the complete pool of a\n-\t * type to get invalidated.\n+\t/* Walk the residual array and log the types that wasn't\n+\t * cleaned up to the console.\n \t */\n-\n-\tTF_RM_GET_POOLS(tfs, dir, &pool,\n-\t\t\tTF_L2_CTXT_TCAM_POOL_NAME,\n-\t\t\trc);\n-\tif (rc)\n-\t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == hw_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].stride) {\n-\t\tflush_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_HW_L2_CTXT_TCAM].stride = 0;\n-\t} else {\n-\t\tflush_rc = 1;\n-\t}\n-\n-\tTF_RM_GET_POOLS(tfs, dir, &pool,\n-\t\t\tTF_PROF_FUNC_POOL_NAME,\n-\t\t\trc);\n-\tif (rc)\n-\t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == hw_entries[TF_RESC_TYPE_HW_PROF_FUNC].stride) {\n-\t\tflush_entries[TF_RESC_TYPE_HW_PROF_FUNC].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_HW_PROF_FUNC].stride = 0;\n-\t} else {\n-\t\tflush_rc = 1;\n-\t}\n-\n-\tTF_RM_GET_POOLS(tfs, dir, &pool,\n-\t\t\tTF_PROF_TCAM_POOL_NAME,\n-\t\t\trc);\n-\tif (rc)\n-\t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == hw_entries[TF_RESC_TYPE_HW_PROF_TCAM].stride) {\n-\t\tflush_entries[TF_RESC_TYPE_HW_PROF_TCAM].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_HW_PROF_TCAM].stride = 0;\n-\t} else {\n-\t\tflush_rc = 1;\n-\t}\n-\n-\tTF_RM_GET_POOLS(tfs, dir, &pool,\n-\t\t\tTF_EM_PROF_ID_POOL_NAME,\n-\t\t\trc);\n-\tif (rc)\n-\t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == hw_entries[TF_RESC_TYPE_HW_EM_PROF_ID].stride) {\n-\t\tflush_entries[TF_RESC_TYPE_HW_EM_PROF_ID].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_HW_EM_PROF_ID].stride = 0;\n-\t} else {\n-\t\tflush_rc = 1;\n-\t}\n-\n-\tflush_entries[TF_RESC_TYPE_HW_EM_REC].start = 0;\n-\tflush_entries[TF_RESC_TYPE_HW_EM_REC].stride = 0;\n-\n-\tTF_RM_GET_POOLS(tfs, dir, &pool,\n-\t\t\tTF_WC_TCAM_PROF_ID_POOL_NAME,\n-\t\t\trc);\n-\tif (rc)\n-\t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == hw_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].stride) {\n-\t\tflush_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_HW_WC_TCAM_PROF_ID].stride = 0;\n-\t} else {\n-\t\tflush_rc = 1;\n-\t}\n-\n-\tTF_RM_GET_POOLS(tfs, dir, &pool,\n-\t\t\tTF_WC_TCAM_POOL_NAME,\n-\t\t\trc);\n-\tif (rc)\n-\t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == hw_entries[TF_RESC_TYPE_HW_WC_TCAM].stride) {\n-\t\tflush_entries[TF_RESC_TYPE_HW_WC_TCAM].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_HW_WC_TCAM].stride = 0;\n-\t} else {\n-\t\tflush_rc = 
1;\n-\t}\n-\n-\tTF_RM_GET_POOLS(tfs, dir, &pool,\n-\t\t\tTF_METER_PROF_POOL_NAME,\n-\t\t\trc);\n-\tif (rc)\n-\t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == hw_entries[TF_RESC_TYPE_HW_METER_PROF].stride) {\n-\t\tflush_entries[TF_RESC_TYPE_HW_METER_PROF].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_HW_METER_PROF].stride = 0;\n-\t} else {\n-\t\tflush_rc = 1;\n-\t}\n-\n-\tTF_RM_GET_POOLS(tfs, dir, &pool,\n-\t\t\tTF_METER_INST_POOL_NAME,\n-\t\t\trc);\n-\tif (rc)\n-\t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == hw_entries[TF_RESC_TYPE_HW_METER_INST].stride) {\n-\t\tflush_entries[TF_RESC_TYPE_HW_METER_INST].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_HW_METER_INST].stride = 0;\n-\t} else {\n-\t\tflush_rc = 1;\n-\t}\n-\n-\tTF_RM_GET_POOLS(tfs, dir, &pool,\n-\t\t\tTF_MIRROR_POOL_NAME,\n-\t\t\trc);\n-\tif (rc)\n-\t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == hw_entries[TF_RESC_TYPE_HW_MIRROR].stride) {\n-\t\tflush_entries[TF_RESC_TYPE_HW_MIRROR].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_HW_MIRROR].stride = 0;\n-\t} else {\n-\t\tflush_rc = 1;\n-\t}\n-\n-\tTF_RM_GET_POOLS(tfs, dir, &pool,\n-\t\t\tTF_UPAR_POOL_NAME,\n-\t\t\trc);\n-\tif (rc)\n-\t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == hw_entries[TF_RESC_TYPE_HW_UPAR].stride) {\n-\t\tflush_entries[TF_RESC_TYPE_HW_UPAR].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_HW_UPAR].stride = 0;\n-\t} else {\n-\t\tflush_rc = 1;\n-\t}\n-\n-\tTF_RM_GET_POOLS(tfs, dir, &pool,\n-\t\t\tTF_SP_TCAM_POOL_NAME,\n-\t\t\trc);\n-\tif (rc)\n-\t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == hw_entries[TF_RESC_TYPE_HW_SP_TCAM].stride) {\n-\t\tflush_entries[TF_RESC_TYPE_HW_SP_TCAM].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_HW_SP_TCAM].stride = 0;\n-\t} else {\n-\t\tflush_rc = 1;\n-\t}\n-\n-\tTF_RM_GET_POOLS(tfs, dir, &pool,\n-\t\t\tTF_L2_FUNC_POOL_NAME,\n-\t\t\trc);\n-\tif (rc)\n-\t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == hw_entries[TF_RESC_TYPE_HW_L2_FUNC].stride) {\n-\t\tflush_entries[TF_RESC_TYPE_HW_L2_FUNC].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_HW_L2_FUNC].stride = 0;\n-\t} else {\n-\t\tflush_rc = 1;\n-\t}\n-\n-\tTF_RM_GET_POOLS(tfs, dir, &pool,\n-\t\t\tTF_FKB_POOL_NAME,\n-\t\t\trc);\n-\tif (rc)\n-\t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == hw_entries[TF_RESC_TYPE_HW_FKB].stride) {\n-\t\tflush_entries[TF_RESC_TYPE_HW_FKB].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_HW_FKB].stride = 0;\n-\t} else {\n-\t\tflush_rc = 1;\n-\t}\n-\n-\tTF_RM_GET_POOLS(tfs, dir, &pool,\n-\t\t\tTF_TBL_SCOPE_POOL_NAME,\n-\t\t\trc);\n-\tif (rc)\n-\t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == hw_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride) {\n-\t\tflush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride = 0;\n-\t} else {\n-\t\tTFP_DRV_LOG(ERR, \"%s, TBL_SCOPE free_cnt:%d, entries:%d\\n\",\n-\t\t\t    tf_dir_2_str(dir),\n-\t\t\t    free_cnt,\n-\t\t\t    hw_entries[TF_RESC_TYPE_HW_TBL_SCOPE].stride);\n-\t\tflush_rc = 1;\n-\t}\n-\n-\tTF_RM_GET_POOLS(tfs, dir, &pool,\n-\t\t\tTF_EPOCH0_POOL_NAME,\n-\t\t\trc);\n-\tif (rc)\n-\t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == hw_entries[TF_RESC_TYPE_HW_EPOCH0].stride) {\n-\t\tflush_entries[TF_RESC_TYPE_HW_EPOCH0].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_HW_EPOCH0].stride = 0;\n-\t} else {\n-\t\tflush_rc = 1;\n-\t}\n-\n-\tTF_RM_GET_POOLS(tfs, dir, 
&pool,\n-\t\t\tTF_EPOCH1_POOL_NAME,\n-\t\t\trc);\n-\tif (rc)\n-\t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == hw_entries[TF_RESC_TYPE_HW_EPOCH1].stride) {\n-\t\tflush_entries[TF_RESC_TYPE_HW_EPOCH1].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_HW_EPOCH1].stride = 0;\n-\t} else {\n-\t\tflush_rc = 1;\n-\t}\n-\n-\tTF_RM_GET_POOLS(tfs, dir, &pool,\n-\t\t\tTF_METADATA_POOL_NAME,\n-\t\t\trc);\n-\tif (rc)\n-\t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == hw_entries[TF_RESC_TYPE_HW_METADATA].stride) {\n-\t\tflush_entries[TF_RESC_TYPE_HW_METADATA].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_HW_METADATA].stride = 0;\n-\t} else {\n-\t\tflush_rc = 1;\n-\t}\n-\n-\tTF_RM_GET_POOLS(tfs, dir, &pool,\n-\t\t\tTF_CT_STATE_POOL_NAME,\n-\t\t\trc);\n-\tif (rc)\n-\t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == hw_entries[TF_RESC_TYPE_HW_CT_STATE].stride) {\n-\t\tflush_entries[TF_RESC_TYPE_HW_CT_STATE].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_HW_CT_STATE].stride = 0;\n-\t} else {\n-\t\tflush_rc = 1;\n-\t}\n-\n-\tTF_RM_GET_POOLS(tfs, dir, &pool,\n-\t\t\tTF_RANGE_PROF_POOL_NAME,\n-\t\t\trc);\n-\tif (rc)\n-\t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == hw_entries[TF_RESC_TYPE_HW_RANGE_PROF].stride) {\n-\t\tflush_entries[TF_RESC_TYPE_HW_RANGE_PROF].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_HW_RANGE_PROF].stride = 0;\n-\t} else {\n-\t\tflush_rc = 1;\n-\t}\n-\n-\tTF_RM_GET_POOLS(tfs, dir, &pool,\n-\t\t\tTF_RANGE_ENTRY_POOL_NAME,\n-\t\t\trc);\n-\tif (rc)\n-\t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == hw_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].stride) {\n-\t\tflush_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_HW_RANGE_ENTRY].stride = 0;\n-\t} else {\n-\t\tflush_rc = 1;\n-\t}\n-\n-\tTF_RM_GET_POOLS(tfs, dir, &pool,\n-\t\t\tTF_LAG_ENTRY_POOL_NAME,\n-\t\t\trc);\n-\tif (rc)\n-\t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == hw_entries[TF_RESC_TYPE_HW_LAG_ENTRY].stride) {\n-\t\tflush_entries[TF_RESC_TYPE_HW_LAG_ENTRY].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_HW_LAG_ENTRY].stride = 0;\n-\t} else {\n-\t\tflush_rc = 1;\n+\tfor (i = 0; i < count; i++) {\n+\t\tif (residuals[i] != 0)\n+\t\t\tTFP_DRV_LOG(ERR,\n+\t\t\t\t\"%s, %s was not cleaned up, %d outstanding\\n\",\n+\t\t\t\ttf_dir_2_str(dir),\n+\t\t\t\ttf_device_module_type_subtype_2_str(type, i),\n+\t\t\t\tresiduals[i]);\n \t}\n-\n-\treturn flush_rc;\n }\n \n /**\n- * Helper function used to prune a SRAM resource array to only hold\n- * elements that needs to be flushed.\n+ * Performs a check of the passed in DB for any lingering elements. If\n+ * a resource type was found to not have been cleaned up by the caller\n+ * then its residual values are recorded, logged and passed back in an\n+ * allocate reservation array that the caller can pass to the FW for\n+ * cleanup.\n  *\n- * [in] tfs\n- *   Session handle\n- *\n- * [in] dir\n- *   Receive or transmit direction\n+ * [in] db\n+ *   Pointer to the db, used for the lookup\n  *\n- * [in] hw_entries\n- *   Master SRAM Resource data base\n+ * [out] resv_size\n+ *   Pointer to the reservation size of the generated reservation\n+ *   array.\n  *\n- * [in/out] flush_entries\n- *   Pruned SRAM Resource database of entries to be flushed. This\n- *   array should be passed in as a complete copy of the master SRAM\n- *   Resource database. 
The outgoing result will be a pruned version\n- *   based on the result of the requested checking\n+ * [in/out] resv\n+ *   Pointer Pointer to a reservation array. The reservation array is\n+ *   allocated after the residual scan and holds any found residual\n+ *   entries. Thus it can be smaller than the DB that the check was\n+ *   performed on. Array must be freed by the caller.\n+ *\n+ * [out] residuals_present\n+ *   Pointer to a bool flag indicating if residual was present in the\n+ *   DB\n  *\n  * Returns:\n- *    0 - Success, no flush required\n- *    1 - Success, flush required\n- *   -1 - Internal error\n+ *     0          - Success\n+ *   - EOPNOTSUPP - Operation not supported\n  */\n static int\n-tf_rm_sram_to_flush(struct tf_session *tfs,\n-\t\t    enum tf_dir dir,\n-\t\t    struct tf_rm_entry *sram_entries,\n-\t\t    struct tf_rm_entry *flush_entries)\n+tf_rm_check_residuals(struct tf_rm_new_db *rm_db,\n+\t\t      uint16_t *resv_size,\n+\t\t      struct tf_rm_resc_entry **resv,\n+\t\t      bool *residuals_present)\n {\n \tint rc;\n-\tint flush_rc = 0;\n-\tint free_cnt;\n-\tstruct bitalloc *pool;\n-\n-\t/* Check all the sram resource pools and check for left over\n-\t * elements. Any found will result in the complete pool of a\n-\t * type to get invalidated.\n-\t */\n-\n-\tTF_RM_GET_POOLS(tfs, dir, &pool,\n-\t\t\tTF_SRAM_FULL_ACTION_POOL_NAME,\n-\t\t\trc);\n+\tint i;\n+\tint f;\n+\tuint16_t count;\n+\tuint16_t found;\n+\tuint16_t *residuals = NULL;\n+\tuint16_t hcapi_type;\n+\tstruct tf_rm_get_inuse_count_parms iparms;\n+\tstruct tf_rm_get_alloc_info_parms aparms;\n+\tstruct tf_rm_get_hcapi_parms hparms;\n+\tstruct tf_rm_alloc_info info;\n+\tstruct tfp_calloc_parms cparms;\n+\tstruct tf_rm_resc_entry *local_resv = NULL;\n+\n+\t/* Create array to hold the entries that have residuals */\n+\tcparms.nitems = rm_db->num_entries;\n+\tcparms.size = sizeof(uint16_t);\n+\tcparms.alignment = 0;\n+\trc = tfp_calloc(&cparms);\n \tif (rc)\n \t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].stride) {\n-\t\tflush_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_SRAM_FULL_ACTION].stride = 0;\n-\t} else {\n-\t\tflush_rc = 1;\n+\n+\tresiduals = (uint16_t *)cparms.mem_va;\n+\n+\t/* Traverse the DB and collect any residual elements */\n+\tiparms.rm_db = rm_db;\n+\tiparms.count = &count;\n+\tfor (i = 0, found = 0; i < rm_db->num_entries; i++) {\n+\t\tiparms.db_index = i;\n+\t\trc = tf_rm_get_inuse_count(&iparms);\n+\t\t/* Not a device supported entry, just skip */\n+\t\tif (rc == -ENOTSUP)\n+\t\t\tcontinue;\n+\t\tif (rc)\n+\t\t\tgoto cleanup_residuals;\n+\n+\t\tif (count) {\n+\t\t\tfound++;\n+\t\t\tresiduals[i] = count;\n+\t\t\t*residuals_present = true;\n+\t\t}\n \t}\n \n-\t/* Only pools for RX direction */\n-\tif (dir == TF_DIR_RX) {\n-\t\tTF_RM_GET_POOLS_RX(tfs, &pool,\n-\t\t\t\t   TF_SRAM_MCG_POOL_NAME);\n+\tif (*residuals_present) {\n+\t\t/* Populate a reduced resv array with only the entries\n+\t\t * that have residuals.\n+\t\t */\n+\t\tcparms.nitems = found;\n+\t\tcparms.size = sizeof(struct tf_rm_resc_entry);\n+\t\tcparms.alignment = 0;\n+\t\trc = tfp_calloc(&cparms);\n \t\tif (rc)\n \t\t\treturn rc;\n-\t\tfree_cnt = ba_free_count(pool);\n-\t\tif (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_MCG].stride) {\n-\t\t\tflush_entries[TF_RESC_TYPE_SRAM_MCG].start = 0;\n-\t\t\tflush_entries[TF_RESC_TYPE_SRAM_MCG].stride = 0;\n-\t\t} else {\n-\t\t\tflush_rc = 1;\n+\n+\t\tlocal_resv = (struct 
tf_rm_resc_entry *)cparms.mem_va;\n+\n+\t\taparms.rm_db = rm_db;\n+\t\thparms.rm_db = rm_db;\n+\t\thparms.hcapi_type = &hcapi_type;\n+\t\tfor (i = 0, f = 0; i < rm_db->num_entries; i++) {\n+\t\t\tif (residuals[i] == 0)\n+\t\t\t\tcontinue;\n+\t\t\taparms.db_index = i;\n+\t\t\taparms.info = &info;\n+\t\t\trc = tf_rm_get_info(&aparms);\n+\t\t\tif (rc)\n+\t\t\t\tgoto cleanup_all;\n+\n+\t\t\thparms.db_index = i;\n+\t\t\trc = tf_rm_get_hcapi_type(&hparms);\n+\t\t\tif (rc)\n+\t\t\t\tgoto cleanup_all;\n+\n+\t\t\tlocal_resv[f].type = hcapi_type;\n+\t\t\tlocal_resv[f].start = info.entry.start;\n+\t\t\tlocal_resv[f].stride = info.entry.stride;\n+\t\t\tf++;\n \t\t}\n-\t} else {\n-\t\t/* Always prune TX direction */\n-\t\tflush_entries[TF_RESC_TYPE_SRAM_MCG].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_SRAM_MCG].stride = 0;\n+\t\t*resv_size = found;\n \t}\n \n-\tTF_RM_GET_POOLS(tfs, dir, &pool,\n-\t\t\tTF_SRAM_ENCAP_8B_POOL_NAME,\n-\t\t\trc);\n+\ttf_rm_log_residuals(rm_db->dir,\n+\t\t\t    rm_db->type,\n+\t\t\t    rm_db->num_entries,\n+\t\t\t    residuals);\n+\n+\ttfp_free((void *)residuals);\n+\t*resv = local_resv;\n+\n+\treturn 0;\n+\n+ cleanup_all:\n+\ttfp_free((void *)local_resv);\n+\t*resv = NULL;\n+ cleanup_residuals:\n+\ttfp_free((void *)residuals);\n+\n+\treturn rc;\n+}\n+\n+int\n+tf_rm_create_db(struct tf *tfp,\n+\t\tstruct tf_rm_create_db_parms *parms)\n+{\n+\tint rc;\n+\tint i;\n+\tint j;\n+\tstruct tf_session *tfs;\n+\tstruct tf_dev_info *dev;\n+\tuint16_t max_types;\n+\tstruct tfp_calloc_parms cparms;\n+\tstruct tf_rm_resc_req_entry *query;\n+\tenum tf_rm_resc_resv_strategy resv_strategy;\n+\tstruct tf_rm_resc_req_entry *req;\n+\tstruct tf_rm_resc_entry *resv;\n+\tstruct tf_rm_new_db *rm_db;\n+\tstruct tf_rm_element *db;\n+\tuint32_t pool_size;\n+\tuint16_t hcapi_items;\n+\n+\tTF_CHECK_PARMS2(tfp, parms);\n+\n+\t/* Retrieve the session information */\n+\trc = tf_session_get_session(tfp, &tfs);\n \tif (rc)\n \t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].stride) {\n-\t\tflush_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_SRAM_ENCAP_8B].stride = 0;\n-\t} else {\n-\t\tflush_rc = 1;\n-\t}\n \n-\tTF_RM_GET_POOLS(tfs, dir, &pool,\n-\t\t\tTF_SRAM_ENCAP_16B_POOL_NAME,\n-\t\t\trc);\n+\t/* Retrieve device information */\n+\trc = tf_session_get_device(tfs, &dev);\n \tif (rc)\n \t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].stride) {\n-\t\tflush_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_SRAM_ENCAP_16B].stride = 0;\n-\t} else {\n-\t\tflush_rc = 1;\n-\t}\n \n-\t/* Only pools for TX direction */\n-\tif (dir == TF_DIR_TX) {\n-\t\tTF_RM_GET_POOLS_TX(tfs, &pool,\n-\t\t\t\t   TF_SRAM_ENCAP_64B_POOL_NAME);\n-\t\tif (rc)\n-\t\t\treturn rc;\n-\t\tfree_cnt = ba_free_count(pool);\n-\t\tif (free_cnt ==\n-\t\t    sram_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride) {\n-\t\t\tflush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].start = 0;\n-\t\t\tflush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride = 0;\n-\t\t} else {\n-\t\t\tflush_rc = 1;\n-\t\t}\n-\t} else {\n-\t\t/* Always prune RX direction */\n-\t\tflush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_SRAM_ENCAP_64B].stride = 0;\n-\t}\n+\t/* Need device max number of elements for the RM QCAPS */\n+\trc = dev->ops->tf_dev_get_max_types(tfp, &max_types);\n+\tif (rc)\n+\t\treturn rc;\n \n-\tTF_RM_GET_POOLS(tfs, dir, 
&pool,\n-\t\t\tTF_SRAM_SP_SMAC_POOL_NAME,\n-\t\t\trc);\n+\tcparms.nitems = max_types;\n+\tcparms.size = sizeof(struct tf_rm_resc_req_entry);\n+\tcparms.alignment = 0;\n+\trc = tfp_calloc(&cparms);\n \tif (rc)\n \t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC].stride) {\n-\t\tflush_entries[TF_RESC_TYPE_SRAM_SP_SMAC].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_SRAM_SP_SMAC].stride = 0;\n-\t} else {\n-\t\tflush_rc = 1;\n-\t}\n \n-\t/* Only pools for TX direction */\n-\tif (dir == TF_DIR_TX) {\n-\t\tTF_RM_GET_POOLS_TX(tfs, &pool,\n-\t\t\t\t   TF_SRAM_SP_SMAC_IPV4_POOL_NAME);\n-\t\tif (rc)\n-\t\t\treturn rc;\n-\t\tfree_cnt = ba_free_count(pool);\n-\t\tif (free_cnt ==\n-\t\t    sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride) {\n-\t\t\tflush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].start = 0;\n-\t\t\tflush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride =\n-\t\t\t\t0;\n-\t\t} else {\n-\t\t\tflush_rc = 1;\n-\t\t}\n-\t} else {\n-\t\t/* Always prune RX direction */\n-\t\tflush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV4].stride = 0;\n-\t}\n+\tquery = (struct tf_rm_resc_req_entry *)cparms.mem_va;\n \n-\t/* Only pools for TX direction */\n-\tif (dir == TF_DIR_TX) {\n-\t\tTF_RM_GET_POOLS_TX(tfs, &pool,\n-\t\t\t\t   TF_SRAM_SP_SMAC_IPV6_POOL_NAME);\n-\t\tif (rc)\n-\t\t\treturn rc;\n-\t\tfree_cnt = ba_free_count(pool);\n-\t\tif (free_cnt ==\n-\t\t    sram_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride) {\n-\t\t\tflush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].start = 0;\n-\t\t\tflush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride =\n-\t\t\t\t0;\n-\t\t} else {\n-\t\t\tflush_rc = 1;\n-\t\t}\n-\t} else {\n-\t\t/* Always prune RX direction */\n-\t\tflush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_SRAM_SP_SMAC_IPV6].stride = 0;\n+\t/* Get Firmware Capabilities */\n+\trc = tf_msg_session_resc_qcaps(tfp,\n+\t\t\t\t       parms->dir,\n+\t\t\t\t       max_types,\n+\t\t\t\t       query,\n+\t\t\t\t       &resv_strategy);\n+\tif (rc)\n+\t\treturn rc;\n+\n+\t/* Process capabilities against DB requirements. However, as a\n+\t * DB can hold elements that are not HCAPI we can reduce the\n+\t * req msg content by removing those out of the request yet\n+\t * the DB holds them all as to give a fast lookup. We can also\n+\t * remove entries where there are no request for elements.\n+\t */\n+\ttf_rm_count_hcapi_reservations(parms->dir,\n+\t\t\t\t       parms->type,\n+\t\t\t\t       parms->cfg,\n+\t\t\t\t       parms->alloc_cnt,\n+\t\t\t\t       parms->num_elements,\n+\t\t\t\t       &hcapi_items);\n+\n+\t/* Handle the case where a DB create request really ends up\n+\t * being empty. 
Unsupported (if not rare) case but possible\n+\t * that no resources are necessary for a 'direction'.\n+\t */\n+\tif (hcapi_items == 0) {\n+\t\tTFP_DRV_LOG(ERR,\n+\t\t\t\"%s: DB create request for Zero elements, DB Type:%s\\n\",\n+\t\t\ttf_dir_2_str(parms->dir),\n+\t\t\ttf_device_module_type_2_str(parms->type));\n+\n+\t\tparms->rm_db = NULL;\n+\t\treturn -ENOMEM;\n \t}\n \n-\tTF_RM_GET_POOLS(tfs, dir, &pool,\n-\t\t\tTF_SRAM_STATS_64B_POOL_NAME,\n-\t\t\trc);\n+\t/* Alloc request, alignment already set */\n+\tcparms.nitems = (size_t)hcapi_items;\n+\tcparms.size = sizeof(struct tf_rm_resc_req_entry);\n+\trc = tfp_calloc(&cparms);\n \tif (rc)\n \t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].stride) {\n-\t\tflush_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_SRAM_COUNTER_64B].stride = 0;\n-\t} else {\n-\t\tflush_rc = 1;\n-\t}\n+\treq = (struct tf_rm_resc_req_entry *)cparms.mem_va;\n \n-\tTF_RM_GET_POOLS(tfs, dir, &pool,\n-\t\t\tTF_SRAM_NAT_SPORT_POOL_NAME,\n-\t\t\trc);\n+\t/* Alloc reservation, alignment and nitems already set */\n+\tcparms.size = sizeof(struct tf_rm_resc_entry);\n+\trc = tfp_calloc(&cparms);\n \tif (rc)\n \t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].stride) {\n-\t\tflush_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_SRAM_NAT_SPORT].stride = 0;\n-\t} else {\n-\t\tflush_rc = 1;\n+\tresv = (struct tf_rm_resc_entry *)cparms.mem_va;\n+\n+\t/* Build the request */\n+\tfor (i = 0, j = 0; i < parms->num_elements; i++) {\n+\t\t/* Skip any non HCAPI cfg elements */\n+\t\tif (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI) {\n+\t\t\t/* Only perform reservation for entries that\n+\t\t\t * has been requested\n+\t\t\t */\n+\t\t\tif (parms->alloc_cnt[i] == 0)\n+\t\t\t\tcontinue;\n+\n+\t\t\t/* Verify that we can get the full amount\n+\t\t\t * allocated per the qcaps availability.\n+\t\t\t */\n+\t\t\tif (parms->alloc_cnt[i] <=\n+\t\t\t    query[parms->cfg[i].hcapi_type].max) {\n+\t\t\t\treq[j].type = parms->cfg[i].hcapi_type;\n+\t\t\t\treq[j].min = parms->alloc_cnt[i];\n+\t\t\t\treq[j].max = parms->alloc_cnt[i];\n+\t\t\t\tj++;\n+\t\t\t} else {\n+\t\t\t\tTFP_DRV_LOG(ERR,\n+\t\t\t\t\t    \"%s: Resource failure, type:%d\\n\",\n+\t\t\t\t\t    tf_dir_2_str(parms->dir),\n+\t\t\t\t\t    parms->cfg[i].hcapi_type);\n+\t\t\t\tTFP_DRV_LOG(ERR,\n+\t\t\t\t\t\"req:%d, avail:%d\\n\",\n+\t\t\t\t\tparms->alloc_cnt[i],\n+\t\t\t\t\tquery[parms->cfg[i].hcapi_type].max);\n+\t\t\t\treturn -EINVAL;\n+\t\t\t}\n+\t\t}\n \t}\n \n-\tTF_RM_GET_POOLS(tfs, dir, &pool,\n-\t\t\tTF_SRAM_NAT_DPORT_POOL_NAME,\n-\t\t\trc);\n+\trc = tf_msg_session_resc_alloc(tfp,\n+\t\t\t\t       parms->dir,\n+\t\t\t\t       hcapi_items,\n+\t\t\t\t       req,\n+\t\t\t\t       resv);\n \tif (rc)\n \t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].stride) {\n-\t\tflush_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_SRAM_NAT_DPORT].stride = 0;\n-\t} else {\n-\t\tflush_rc = 1;\n-\t}\n \n-\tTF_RM_GET_POOLS(tfs, dir, &pool,\n-\t\t\tTF_SRAM_NAT_S_IPV4_POOL_NAME,\n-\t\t\trc);\n+\t/* Build the RM DB per the request */\n+\tcparms.nitems = 1;\n+\tcparms.size = sizeof(struct tf_rm_new_db);\n+\trc = tfp_calloc(&cparms);\n \tif (rc)\n \t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].stride) 
{\n-\t\tflush_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_SRAM_NAT_S_IPV4].stride = 0;\n-\t} else {\n-\t\tflush_rc = 1;\n-\t}\n+\trm_db = (void *)cparms.mem_va;\n \n-\tTF_RM_GET_POOLS(tfs, dir, &pool,\n-\t\t\tTF_SRAM_NAT_D_IPV4_POOL_NAME,\n-\t\t\trc);\n+\t/* Build the DB within RM DB */\n+\tcparms.nitems = parms->num_elements;\n+\tcparms.size = sizeof(struct tf_rm_element);\n+\trc = tfp_calloc(&cparms);\n \tif (rc)\n \t\treturn rc;\n-\tfree_cnt = ba_free_count(pool);\n-\tif (free_cnt == sram_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].stride) {\n-\t\tflush_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].start = 0;\n-\t\tflush_entries[TF_RESC_TYPE_SRAM_NAT_D_IPV4].stride = 0;\n-\t} else {\n-\t\tflush_rc = 1;\n-\t}\n+\trm_db->db = (struct tf_rm_element *)cparms.mem_va;\n \n-\treturn flush_rc;\n-}\n+\tdb = rm_db->db;\n+\tfor (i = 0, j = 0; i < parms->num_elements; i++) {\n+\t\tdb[i].cfg_type = parms->cfg[i].cfg_type;\n+\t\tdb[i].hcapi_type = parms->cfg[i].hcapi_type;\n \n-/**\n- * Helper function used to generate an error log for the HW types that\n- * needs to be flushed. The types should have been cleaned up ahead of\n- * invoking tf_close_session.\n- *\n- * [in] hw_entries\n- *   HW Resource database holding elements to be flushed\n- */\n-static void\n-tf_rm_log_hw_flush(enum tf_dir dir,\n-\t\t   struct tf_rm_entry *hw_entries)\n-{\n-\tint i;\n+\t\t/* Skip any non HCAPI types as we didn't include them\n+\t\t * in the reservation request.\n+\t\t */\n+\t\tif (parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI)\n+\t\t\tcontinue;\n \n-\t/* Walk the hw flush array and log the types that wasn't\n-\t * cleaned up.\n-\t */\n-\tfor (i = 0; i < TF_RESC_TYPE_HW_MAX; i++) {\n-\t\tif (hw_entries[i].stride != 0)\n+\t\t/* If the element didn't request an allocation no need\n+\t\t * to create a pool nor verify if we got a reservation.\n+\t\t */\n+\t\tif (parms->alloc_cnt[i] == 0)\n+\t\t\tcontinue;\n+\n+\t\t/* If the element had requested an allocation and that\n+\t\t * allocation was a success (full amount) then\n+\t\t * allocate the pool.\n+\t\t */\n+\t\tif (parms->alloc_cnt[i] == resv[j].stride) {\n+\t\t\tdb[i].alloc.entry.start = resv[j].start;\n+\t\t\tdb[i].alloc.entry.stride = resv[j].stride;\n+\n+\t\t\tprintf(\"Entry:%d Start:%d Stride:%d\\n\",\n+\t\t\t       i,\n+\t\t\t       resv[j].start,\n+\t\t\t       resv[j].stride);\n+\n+\t\t\t/* Create pool */\n+\t\t\tpool_size = (BITALLOC_SIZEOF(resv[j].stride) /\n+\t\t\t\t     sizeof(struct bitalloc));\n+\t\t\t/* Alloc request, alignment already set */\n+\t\t\tcparms.nitems = pool_size;\n+\t\t\tcparms.size = sizeof(struct bitalloc);\n+\t\t\trc = tfp_calloc(&cparms);\n+\t\t\tif (rc) {\n+\t\t\t\tTFP_DRV_LOG(ERR,\n+\t\t\t\t\t    \"%s: Pool alloc failed, type:%d\\n\",\n+\t\t\t\t\t    tf_dir_2_str(parms->dir),\n+\t\t\t\t\t    db[i].cfg_type);\n+\t\t\t\tgoto fail;\n+\t\t\t}\n+\t\t\tdb[i].pool = (struct bitalloc *)cparms.mem_va;\n+\n+\t\t\trc = ba_init(db[i].pool, resv[j].stride);\n+\t\t\tif (rc) {\n+\t\t\t\tTFP_DRV_LOG(ERR,\n+\t\t\t\t\t    \"%s: Pool init failed, type:%d\\n\",\n+\t\t\t\t\t    tf_dir_2_str(parms->dir),\n+\t\t\t\t\t    db[i].cfg_type);\n+\t\t\t\tgoto fail;\n+\t\t\t}\n+\t\t\tj++;\n+\t\t} else {\n+\t\t\t/* Bail out as we want what we requested for\n+\t\t\t * all elements, not any less.\n+\t\t\t */\n \t\t\tTFP_DRV_LOG(ERR,\n-\t\t\t\t    \"%s, %s was not cleaned up\\n\",\n-\t\t\t\t    tf_dir_2_str(dir),\n-\t\t\t\t    tf_hcapi_hw_2_str(i));\n+\t\t\t\t    \"%s: Alloc failed, type:%d\\n\",\n+\t\t\t\t    
tf_dir_2_str(parms->dir),\n+\t\t\t\t    db[i].cfg_type);\n+\t\t\tTFP_DRV_LOG(ERR,\n+\t\t\t\t    \"req:%d, alloc:%d\\n\",\n+\t\t\t\t    parms->alloc_cnt[i],\n+\t\t\t\t    resv[j].stride);\n+\t\t\tgoto fail;\n+\t\t}\n \t}\n+\n+\trm_db->num_entries = parms->num_elements;\n+\trm_db->dir = parms->dir;\n+\trm_db->type = parms->type;\n+\t*parms->rm_db = (void *)rm_db;\n+\n+\tprintf(\"%s: type:%d num_entries:%d\\n\",\n+\t       tf_dir_2_str(parms->dir),\n+\t       parms->type,\n+\t       i);\n+\n+\ttfp_free((void *)req);\n+\ttfp_free((void *)resv);\n+\n+\treturn 0;\n+\n+ fail:\n+\ttfp_free((void *)req);\n+\ttfp_free((void *)resv);\n+\ttfp_free((void *)db->pool);\n+\ttfp_free((void *)db);\n+\ttfp_free((void *)rm_db);\n+\tparms->rm_db = NULL;\n+\n+\treturn -EINVAL;\n }\n \n-/**\n- * Helper function used to generate an error log for the SRAM types\n- * that needs to be flushed. The types should have been cleaned up\n- * ahead of invoking tf_close_session.\n- *\n- * [in] sram_entries\n- *   SRAM Resource database holding elements to be flushed\n- */\n-static void\n-tf_rm_log_sram_flush(enum tf_dir dir,\n-\t\t     struct tf_rm_entry *sram_entries)\n+int\n+tf_rm_free_db(struct tf *tfp,\n+\t      struct tf_rm_free_db_parms *parms)\n {\n+\tint rc;\n \tint i;\n+\tuint16_t resv_size = 0;\n+\tstruct tf_rm_new_db *rm_db;\n+\tstruct tf_rm_resc_entry *resv;\n+\tbool residuals_found = false;\n+\n+\tTF_CHECK_PARMS2(parms, parms->rm_db);\n+\n+\t/* Device unbind happens when the TF Session is closed and the\n+\t * session ref count is 0. Device unbind will cleanup each of\n+\t * its support modules, i.e. Identifier, thus we're ending up\n+\t * here to close the DB.\n+\t *\n+\t * On TF Session close it is assumed that the session has already\n+\t * cleaned up all its resources, individually, while\n+\t * destroying its flows.\n+\t *\n+\t * To assist in the 'cleanup checking' the DB is checked for any\n+\t * remaining elements and logged if found to be the case.\n+\t *\n+\t * Any such elements will need to be 'cleared' ahead of\n+\t * returning the resources to the HCAPI RM.\n+\t *\n+\t * RM will signal FW to flush the DB resources. FW will\n+\t * perform the invalidation. TF Session close will return the\n+\t * previous allocated elements to the RM and then close the\n+\t * HCAPI RM registration. 
That then saves several 'free' msgs\n+\t * from being required.\n+\t */\n \n-\t/* Walk the sram flush array and log the types that wasn't\n-\t * cleaned up.\n+\trm_db = (struct tf_rm_new_db *)parms->rm_db;\n+\n+\t/* Check for residuals that the client didn't clean up */\n+\trc = tf_rm_check_residuals(rm_db,\n+\t\t\t\t   &resv_size,\n+\t\t\t\t   &resv,\n+\t\t\t\t   &residuals_found);\n+\tif (rc)\n+\t\treturn rc;\n+\n+\t/* Invalidate any residuals followed by a DB traversal for\n+\t * pool cleanup.\n \t */\n-\tfor (i = 0; i < TF_RESC_TYPE_SRAM_MAX; i++) {\n-\t\tif (sram_entries[i].stride != 0)\n+\tif (residuals_found) {\n+\t\trc = tf_msg_session_resc_flush(tfp,\n+\t\t\t\t\t       parms->dir,\n+\t\t\t\t\t       resv_size,\n+\t\t\t\t\t       resv);\n+\t\ttfp_free((void *)resv);\n+\t\t/* On failure we still have to cleanup so we can only\n+\t\t * log that FW failed.\n+\t\t */\n+\t\tif (rc)\n \t\t\tTFP_DRV_LOG(ERR,\n-\t\t\t\t    \"%s, %s was not cleaned up\\n\",\n-\t\t\t\t    tf_dir_2_str(dir),\n-\t\t\t\t    tf_hcapi_sram_2_str(i));\n+\t\t\t\t    \"%s: Internal Flush error, module:%s\\n\",\n+\t\t\t\t    tf_dir_2_str(parms->dir),\n+\t\t\t\t    tf_device_module_type_2_str(rm_db->type));\n \t}\n-}\n \n-void\n-tf_rm_init(struct tf *tfp __rte_unused)\n-{\n-\tstruct tf_session *tfs =\n-\t\t(struct tf_session *)(tfp->session->core_data);\n+\tfor (i = 0; i < rm_db->num_entries; i++)\n+\t\ttfp_free((void *)rm_db->db[i].pool);\n \n-\t/* This version is host specific and should be checked against\n-\t * when attaching as there is no guarantee that a secondary\n-\t * would run from same image version.\n-\t */\n-\ttfs->ver.major = TF_SESSION_VER_MAJOR;\n-\ttfs->ver.minor = TF_SESSION_VER_MINOR;\n-\ttfs->ver.update = TF_SESSION_VER_UPDATE;\n-\n-\ttfs->session_id.id = 0;\n-\ttfs->ref_count = 0;\n-\n-\t/* Initialization of Table Scopes */\n-\t/* ll_init(&tfs->tbl_scope_ll); */\n-\n-\t/* Initialization of HW and SRAM resource DB */\n-\tmemset(&tfs->resc, 0, sizeof(struct tf_rm_db));\n-\n-\t/* Initialization of HW Resource Pools */\n-\tba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);\n-\tba_init(tfs->TF_L2_CTXT_TCAM_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);\n-\tba_init(tfs->TF_PROF_FUNC_POOL_NAME_RX, TF_NUM_PROF_FUNC);\n-\tba_init(tfs->TF_PROF_FUNC_POOL_NAME_TX, TF_NUM_PROF_FUNC);\n-\tba_init(tfs->TF_PROF_TCAM_POOL_NAME_RX, TF_NUM_PROF_TCAM);\n-\tba_init(tfs->TF_PROF_TCAM_POOL_NAME_TX, TF_NUM_PROF_TCAM);\n-\tba_init(tfs->TF_EM_PROF_ID_POOL_NAME_RX, TF_NUM_EM_PROF_ID);\n-\tba_init(tfs->TF_EM_PROF_ID_POOL_NAME_TX, TF_NUM_EM_PROF_ID);\n-\n-\t/* TBD, how do we want to handle EM records ?*/\n-\t/* EM Records should not be controlled by way of a pool */\n-\n-\tba_init(tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_RX, TF_NUM_WC_PROF_ID);\n-\tba_init(tfs->TF_WC_TCAM_PROF_ID_POOL_NAME_TX, TF_NUM_WC_PROF_ID);\n-\tba_init(tfs->TF_WC_TCAM_POOL_NAME_RX, TF_NUM_WC_TCAM_ROW);\n-\tba_init(tfs->TF_WC_TCAM_POOL_NAME_TX, TF_NUM_WC_TCAM_ROW);\n-\tba_init(tfs->TF_METER_PROF_POOL_NAME_RX, TF_NUM_METER_PROF);\n-\tba_init(tfs->TF_METER_PROF_POOL_NAME_TX, TF_NUM_METER_PROF);\n-\tba_init(tfs->TF_METER_INST_POOL_NAME_RX, TF_NUM_METER);\n-\tba_init(tfs->TF_METER_INST_POOL_NAME_TX, TF_NUM_METER);\n-\tba_init(tfs->TF_MIRROR_POOL_NAME_RX, TF_NUM_MIRROR);\n-\tba_init(tfs->TF_MIRROR_POOL_NAME_TX, TF_NUM_MIRROR);\n-\tba_init(tfs->TF_UPAR_POOL_NAME_RX, TF_NUM_UPAR);\n-\tba_init(tfs->TF_UPAR_POOL_NAME_TX, TF_NUM_UPAR);\n-\n-\tba_init(tfs->TF_SP_TCAM_POOL_NAME_RX, TF_NUM_SP_TCAM);\n-\tba_init(tfs->TF_SP_TCAM_POOL_NAME_TX, 
TF_NUM_SP_TCAM);\n-\n-\tba_init(tfs->TF_FKB_POOL_NAME_RX, TF_NUM_FKB);\n-\tba_init(tfs->TF_FKB_POOL_NAME_TX, TF_NUM_FKB);\n-\n-\tba_init(tfs->TF_TBL_SCOPE_POOL_NAME_RX, TF_NUM_TBL_SCOPE);\n-\tba_init(tfs->TF_TBL_SCOPE_POOL_NAME_TX, TF_NUM_TBL_SCOPE);\n-\tba_init(tfs->TF_L2_FUNC_POOL_NAME_RX, TF_NUM_L2_FUNC);\n-\tba_init(tfs->TF_L2_FUNC_POOL_NAME_TX, TF_NUM_L2_FUNC);\n-\tba_init(tfs->TF_EPOCH0_POOL_NAME_RX, TF_NUM_EPOCH0);\n-\tba_init(tfs->TF_EPOCH0_POOL_NAME_TX, TF_NUM_EPOCH0);\n-\tba_init(tfs->TF_EPOCH1_POOL_NAME_RX, TF_NUM_EPOCH1);\n-\tba_init(tfs->TF_EPOCH1_POOL_NAME_TX, TF_NUM_EPOCH1);\n-\tba_init(tfs->TF_METADATA_POOL_NAME_RX, TF_NUM_METADATA);\n-\tba_init(tfs->TF_METADATA_POOL_NAME_TX, TF_NUM_METADATA);\n-\tba_init(tfs->TF_CT_STATE_POOL_NAME_RX, TF_NUM_CT_STATE);\n-\tba_init(tfs->TF_CT_STATE_POOL_NAME_TX, TF_NUM_CT_STATE);\n-\tba_init(tfs->TF_RANGE_PROF_POOL_NAME_RX, TF_NUM_RANGE_PROF);\n-\tba_init(tfs->TF_RANGE_PROF_POOL_NAME_TX, TF_NUM_RANGE_PROF);\n-\tba_init(tfs->TF_RANGE_ENTRY_POOL_NAME_RX, TF_NUM_RANGE_ENTRY);\n-\tba_init(tfs->TF_RANGE_ENTRY_POOL_NAME_TX, TF_NUM_RANGE_ENTRY);\n-\tba_init(tfs->TF_LAG_ENTRY_POOL_NAME_RX, TF_NUM_LAG_ENTRY);\n-\tba_init(tfs->TF_LAG_ENTRY_POOL_NAME_TX, TF_NUM_LAG_ENTRY);\n-\n-\t/* Initialization of SRAM Resource Pools\n-\t * These pools are set to the TFLIB defined MAX sizes not\n-\t * AFM's HW max as to limit the memory consumption\n-\t */\n-\tba_init(tfs->TF_SRAM_FULL_ACTION_POOL_NAME_RX,\n-\t\tTF_RSVD_SRAM_FULL_ACTION_RX);\n-\tba_init(tfs->TF_SRAM_FULL_ACTION_POOL_NAME_TX,\n-\t\tTF_RSVD_SRAM_FULL_ACTION_TX);\n-\t/* Only Multicast Group on RX is supported */\n-\tba_init(tfs->TF_SRAM_MCG_POOL_NAME_RX,\n-\t\tTF_RSVD_SRAM_MCG_RX);\n-\tba_init(tfs->TF_SRAM_ENCAP_8B_POOL_NAME_RX,\n-\t\tTF_RSVD_SRAM_ENCAP_8B_RX);\n-\tba_init(tfs->TF_SRAM_ENCAP_8B_POOL_NAME_TX,\n-\t\tTF_RSVD_SRAM_ENCAP_8B_TX);\n-\tba_init(tfs->TF_SRAM_ENCAP_16B_POOL_NAME_RX,\n-\t\tTF_RSVD_SRAM_ENCAP_16B_RX);\n-\tba_init(tfs->TF_SRAM_ENCAP_16B_POOL_NAME_TX,\n-\t\tTF_RSVD_SRAM_ENCAP_16B_TX);\n-\t/* Only Encap 64B on TX is supported */\n-\tba_init(tfs->TF_SRAM_ENCAP_64B_POOL_NAME_TX,\n-\t\tTF_RSVD_SRAM_ENCAP_64B_TX);\n-\tba_init(tfs->TF_SRAM_SP_SMAC_POOL_NAME_RX,\n-\t\tTF_RSVD_SRAM_SP_SMAC_RX);\n-\tba_init(tfs->TF_SRAM_SP_SMAC_POOL_NAME_TX,\n-\t\tTF_RSVD_SRAM_SP_SMAC_TX);\n-\t/* Only SP SMAC IPv4 on TX is supported */\n-\tba_init(tfs->TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX,\n-\t\tTF_RSVD_SRAM_SP_SMAC_IPV4_TX);\n-\t/* Only SP SMAC IPv6 on TX is supported */\n-\tba_init(tfs->TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX,\n-\t\tTF_RSVD_SRAM_SP_SMAC_IPV6_TX);\n-\tba_init(tfs->TF_SRAM_STATS_64B_POOL_NAME_RX,\n-\t\tTF_RSVD_SRAM_COUNTER_64B_RX);\n-\tba_init(tfs->TF_SRAM_STATS_64B_POOL_NAME_TX,\n-\t\tTF_RSVD_SRAM_COUNTER_64B_TX);\n-\tba_init(tfs->TF_SRAM_NAT_SPORT_POOL_NAME_RX,\n-\t\tTF_RSVD_SRAM_NAT_SPORT_RX);\n-\tba_init(tfs->TF_SRAM_NAT_SPORT_POOL_NAME_TX,\n-\t\tTF_RSVD_SRAM_NAT_SPORT_TX);\n-\tba_init(tfs->TF_SRAM_NAT_DPORT_POOL_NAME_RX,\n-\t\tTF_RSVD_SRAM_NAT_DPORT_RX);\n-\tba_init(tfs->TF_SRAM_NAT_DPORT_POOL_NAME_TX,\n-\t\tTF_RSVD_SRAM_NAT_DPORT_TX);\n-\tba_init(tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_RX,\n-\t\tTF_RSVD_SRAM_NAT_S_IPV4_RX);\n-\tba_init(tfs->TF_SRAM_NAT_S_IPV4_POOL_NAME_TX,\n-\t\tTF_RSVD_SRAM_NAT_S_IPV4_TX);\n-\tba_init(tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_RX,\n-\t\tTF_RSVD_SRAM_NAT_D_IPV4_RX);\n-\tba_init(tfs->TF_SRAM_NAT_D_IPV4_POOL_NAME_TX,\n-\t\tTF_RSVD_SRAM_NAT_D_IPV4_TX);\n-\n-\t/* Initialization of pools local to TF Core */\n-\tba_init(tfs->TF_L2_CTXT_REMAP_POOL_NAME_RX, 
TF_NUM_L2_CTXT_TCAM);\n-\tba_init(tfs->TF_L2_CTXT_REMAP_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);\n+\ttfp_free((void *)parms->rm_db);\n+\n+\treturn rc;\n }\n \n int\n-tf_rm_allocate_validate(struct tf *tfp)\n+tf_rm_allocate(struct tf_rm_allocate_parms *parms)\n {\n \tint rc;\n-\tint i;\n+\tint id;\n+\tuint32_t index;\n+\tstruct tf_rm_new_db *rm_db;\n+\tenum tf_rm_elem_cfg_type cfg_type;\n \n-\tfor (i = 0; i < TF_DIR_MAX; i++) {\n-\t\trc = tf_rm_allocate_validate_hw(tfp, i);\n-\t\tif (rc)\n-\t\t\treturn rc;\n-\t\trc = tf_rm_allocate_validate_sram(tfp, i);\n-\t\tif (rc)\n-\t\t\treturn rc;\n+\tTF_CHECK_PARMS2(parms, parms->rm_db);\n+\n+\trm_db = (struct tf_rm_new_db *)parms->rm_db;\n+\tcfg_type = rm_db->db[parms->db_index].cfg_type;\n+\n+\t/* Bail out if not controlled by RM */\n+\tif (cfg_type != TF_RM_ELEM_CFG_HCAPI &&\n+\t    cfg_type != TF_RM_ELEM_CFG_PRIVATE)\n+\t\treturn -ENOTSUP;\n+\n+\t/* Bail out if the pool is not valid, should never happen */\n+\tif (rm_db->db[parms->db_index].pool == NULL) {\n+\t\trc = -ENOTSUP;\n+\t\tTFP_DRV_LOG(ERR,\n+\t\t\t    \"%s: Invalid pool for this type:%d, rc:%s\\n\",\n+\t\t\t    tf_dir_2_str(rm_db->dir),\n+\t\t\t    parms->db_index,\n+\t\t\t    strerror(-rc));\n+\t\treturn rc;\n \t}\n \n-\t/* With both HW and SRAM allocated and validated we can\n-\t * 'scrub' the reservation on the pools.\n+\t/*\n+\t * priority  0: allocate from top of the tcam i.e. high\n+\t * priority !0: allocate index from bottom i.e lowest\n \t */\n-\ttf_rm_reserve_hw(tfp);\n-\ttf_rm_reserve_sram(tfp);\n+\tif (parms->priority)\n+\t\tid = ba_alloc_reverse(rm_db->db[parms->db_index].pool);\n+\telse\n+\t\tid = ba_alloc(rm_db->db[parms->db_index].pool);\n+\tif (id == BA_FAIL) {\n+\t\trc = -ENOMEM;\n+\t\tTFP_DRV_LOG(ERR,\n+\t\t\t    \"%s: Allocation failed, rc:%s\\n\",\n+\t\t\t    tf_dir_2_str(rm_db->dir),\n+\t\t\t    strerror(-rc));\n+\t\treturn rc;\n+\t}\n+\n+\t/* Adjust for any non zero start value */\n+\trc = tf_rm_adjust_index(rm_db->db,\n+\t\t\t\tTF_RM_ADJUST_ADD_BASE,\n+\t\t\t\tparms->db_index,\n+\t\t\t\tid,\n+\t\t\t\t&index);\n+\tif (rc) {\n+\t\tTFP_DRV_LOG(ERR,\n+\t\t\t    \"%s: Alloc adjust of base index failed, rc:%s\\n\",\n+\t\t\t    tf_dir_2_str(rm_db->dir),\n+\t\t\t    strerror(-rc));\n+\t\treturn -EINVAL;\n+\t}\n+\n+\t*parms->index = index;\n \n \treturn rc;\n }\n \n int\n-tf_rm_close(struct tf *tfp)\n+tf_rm_free(struct tf_rm_free_parms *parms)\n {\n \tint rc;\n-\tint rc_close = 0;\n-\tint i;\n-\tstruct tf_rm_entry *hw_entries;\n-\tstruct tf_rm_entry *hw_flush_entries;\n-\tstruct tf_rm_entry *sram_entries;\n-\tstruct tf_rm_entry *sram_flush_entries;\n-\tstruct tf_session *tfs __rte_unused =\n-\t\t(struct tf_session *)(tfp->session->core_data);\n-\n-\tstruct tf_rm_db flush_resc = tfs->resc;\n-\n-\t/* On close it is assumed that the session has already cleaned\n-\t * up all its resources, individually, while destroying its\n-\t * flows. No checking is performed thus the behavior is as\n-\t * follows.\n-\t *\n-\t * Session RM will signal FW to release session resources. 
FW\n-\t * will perform invalidation of all the allocated entries\n-\t * (assures any outstanding resources has been cleared, then\n-\t * free the FW RM instance.\n-\t *\n-\t * Session will then be freed by tf_close_session() thus there\n-\t * is no need to clean each resource pool as the whole session\n-\t * is going away.\n-\t */\n-\n-\tfor (i = 0; i < TF_DIR_MAX; i++) {\n-\t\tif (i == TF_DIR_RX) {\n-\t\t\thw_entries = tfs->resc.rx.hw_entry;\n-\t\t\thw_flush_entries = flush_resc.rx.hw_entry;\n-\t\t\tsram_entries = tfs->resc.rx.sram_entry;\n-\t\t\tsram_flush_entries = flush_resc.rx.sram_entry;\n-\t\t} else {\n-\t\t\thw_entries = tfs->resc.tx.hw_entry;\n-\t\t\thw_flush_entries = flush_resc.tx.hw_entry;\n-\t\t\tsram_entries = tfs->resc.tx.sram_entry;\n-\t\t\tsram_flush_entries = flush_resc.tx.sram_entry;\n-\t\t}\n+\tuint32_t adj_index;\n+\tstruct tf_rm_new_db *rm_db;\n+\tenum tf_rm_elem_cfg_type cfg_type;\n \n-\t\t/* Check for any not previously freed HW resources and\n-\t\t * flush if required.\n-\t\t */\n-\t\trc = tf_rm_hw_to_flush(tfs, i, hw_entries, hw_flush_entries);\n-\t\tif (rc) {\n-\t\t\trc_close = -ENOTEMPTY;\n-\t\t\t/* Log error */\n-\t\t\tTFP_DRV_LOG(ERR,\n-\t\t\t\t    \"%s, lingering HW resources, rc:%s\\n\",\n-\t\t\t\t    tf_dir_2_str(i),\n-\t\t\t\t    strerror(-rc));\n+\tTF_CHECK_PARMS2(parms, parms->rm_db);\n \n-\t\t\t/* Log the entries to be flushed */\n-\t\t\ttf_rm_log_hw_flush(i, hw_flush_entries);\n-\t\t}\n+\trm_db = (struct tf_rm_new_db *)parms->rm_db;\n+\tcfg_type = rm_db->db[parms->db_index].cfg_type;\n \n-\t\t/* Check for any not previously freed SRAM resources\n-\t\t * and flush if required.\n-\t\t */\n-\t\trc = tf_rm_sram_to_flush(tfs,\n-\t\t\t\t\t i,\n-\t\t\t\t\t sram_entries,\n-\t\t\t\t\t sram_flush_entries);\n-\t\tif (rc) {\n-\t\t\trc_close = -ENOTEMPTY;\n-\t\t\t/* Log error */\n-\t\t\tTFP_DRV_LOG(ERR,\n-\t\t\t\t    \"%s, lingering SRAM resources, rc:%s\\n\",\n-\t\t\t\t    tf_dir_2_str(i),\n-\t\t\t\t    strerror(-rc));\n+\t/* Bail out if not controlled by RM */\n+\tif (cfg_type != TF_RM_ELEM_CFG_HCAPI &&\n+\t    cfg_type != TF_RM_ELEM_CFG_PRIVATE)\n+\t\treturn -ENOTSUP;\n \n-\t\t\t/* Log the entries to be flushed */\n-\t\t\ttf_rm_log_sram_flush(i, sram_flush_entries);\n-\t\t}\n+\t/* Bail out if the pool is not valid, should never happen */\n+\tif (rm_db->db[parms->db_index].pool == NULL) {\n+\t\trc = -ENOTSUP;\n+\t\tTFP_DRV_LOG(ERR,\n+\t\t\t    \"%s: Invalid pool for this type:%d, rc:%s\\n\",\n+\t\t\t    tf_dir_2_str(rm_db->dir),\n+\t\t\t    parms->db_index,\n+\t\t\t    strerror(-rc));\n+\t\treturn rc;\n \t}\n \n-\treturn rc_close;\n-}\n+\t/* Adjust for any non zero start value */\n+\trc = tf_rm_adjust_index(rm_db->db,\n+\t\t\t\tTF_RM_ADJUST_RM_BASE,\n+\t\t\t\tparms->db_index,\n+\t\t\t\tparms->index,\n+\t\t\t\t&adj_index);\n+\tif (rc)\n+\t\treturn rc;\n \n-#if (TF_SHADOW == 1)\n-int\n-tf_rm_shadow_db_init(struct tf_session *tfs)\n-{\n-\trc = 1;\n+\trc = ba_free(rm_db->db[parms->db_index].pool, adj_index);\n+\t/* No logging direction matters and that is not available here */\n+\tif (rc)\n+\t\treturn rc;\n \n \treturn rc;\n }\n-#endif /* TF_SHADOW */\n \n int\n-tf_rm_lookup_tcam_type_pool(struct tf_session *tfs,\n-\t\t\t    enum tf_dir dir,\n-\t\t\t    enum tf_tcam_tbl_type type,\n-\t\t\t    struct bitalloc **pool)\n+tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)\n {\n-\tint rc = -EOPNOTSUPP;\n+\tint rc;\n+\tuint32_t adj_index;\n+\tstruct tf_rm_new_db *rm_db;\n+\tenum tf_rm_elem_cfg_type cfg_type;\n \n-\t*pool = NULL;\n+\tTF_CHECK_PARMS2(parms, 
parms->rm_db);\n \n-\tswitch (type) {\n-\tcase TF_TCAM_TBL_TYPE_L2_CTXT_TCAM:\n-\t\tTF_RM_GET_POOLS(tfs, dir, pool,\n-\t\t\t\tTF_L2_CTXT_TCAM_POOL_NAME,\n-\t\t\t\trc);\n-\t\tbreak;\n-\tcase TF_TCAM_TBL_TYPE_PROF_TCAM:\n-\t\tTF_RM_GET_POOLS(tfs, dir, pool,\n-\t\t\t\tTF_PROF_TCAM_POOL_NAME,\n-\t\t\t\trc);\n-\t\tbreak;\n-\tcase TF_TCAM_TBL_TYPE_WC_TCAM:\n-\t\tTF_RM_GET_POOLS(tfs, dir, pool,\n-\t\t\t\tTF_WC_TCAM_POOL_NAME,\n-\t\t\t\trc);\n-\t\tbreak;\n-\tcase TF_TCAM_TBL_TYPE_VEB_TCAM:\n-\tcase TF_TCAM_TBL_TYPE_SP_TCAM:\n-\tcase TF_TCAM_TBL_TYPE_CT_RULE_TCAM:\n-\tdefault:\n-\t\tbreak;\n-\t}\n+\trm_db = (struct tf_rm_new_db *)parms->rm_db;\n+\tcfg_type = rm_db->db[parms->db_index].cfg_type;\n \n-\tif (rc == -EOPNOTSUPP) {\n-\t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"%s, Tcam type not supported, type:%d\\n\",\n-\t\t\t    tf_dir_2_str(dir),\n-\t\t\t    type);\n-\t\treturn rc;\n-\t} else if (rc == -1) {\n+\t/* Bail out if not controlled by RM */\n+\tif (cfg_type != TF_RM_ELEM_CFG_HCAPI &&\n+\t    cfg_type != TF_RM_ELEM_CFG_PRIVATE)\n+\t\treturn -ENOTSUP;\n+\n+\t/* Bail out if the pool is not valid, should never happen */\n+\tif (rm_db->db[parms->db_index].pool == NULL) {\n+\t\trc = -ENOTSUP;\n \t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"%s, Tcam type lookup failed, type:%d\\n\",\n-\t\t\t    tf_dir_2_str(dir),\n-\t\t\t    type);\n+\t\t\t    \"%s: Invalid pool for this type:%d, rc:%s\\n\",\n+\t\t\t    tf_dir_2_str(rm_db->dir),\n+\t\t\t    parms->db_index,\n+\t\t\t    strerror(-rc));\n \t\treturn rc;\n \t}\n \n-\treturn 0;\n+\t/* Adjust for any non zero start value */\n+\trc = tf_rm_adjust_index(rm_db->db,\n+\t\t\t\tTF_RM_ADJUST_RM_BASE,\n+\t\t\t\tparms->db_index,\n+\t\t\t\tparms->index,\n+\t\t\t\t&adj_index);\n+\tif (rc)\n+\t\treturn rc;\n+\n+\t*parms->allocated = ba_inuse(rm_db->db[parms->db_index].pool,\n+\t\t\t\t     adj_index);\n+\n+\treturn rc;\n }\n \n int\n-tf_rm_lookup_tbl_type_pool(struct tf_session *tfs,\n-\t\t\t   enum tf_dir dir,\n-\t\t\t   enum tf_tbl_type type,\n-\t\t\t   struct bitalloc **pool)\n+tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)\n {\n-\tint rc = -EOPNOTSUPP;\n+\tstruct tf_rm_new_db *rm_db;\n+\tenum tf_rm_elem_cfg_type cfg_type;\n \n-\t*pool = NULL;\n+\tTF_CHECK_PARMS2(parms, parms->rm_db);\n \n-\tswitch (type) {\n-\tcase TF_TBL_TYPE_FULL_ACT_RECORD:\n-\t\tTF_RM_GET_POOLS(tfs, dir, pool,\n-\t\t\t\tTF_SRAM_FULL_ACTION_POOL_NAME,\n-\t\t\t\trc);\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_MCAST_GROUPS:\n-\t\t/* No pools for TX direction, so bail out */\n-\t\tif (dir == TF_DIR_TX)\n-\t\t\tbreak;\n-\t\tTF_RM_GET_POOLS_RX(tfs, pool,\n-\t\t\t\t   TF_SRAM_MCG_POOL_NAME);\n-\t\trc = 0;\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_ACT_ENCAP_8B:\n-\t\tTF_RM_GET_POOLS(tfs, dir, pool,\n-\t\t\t\tTF_SRAM_ENCAP_8B_POOL_NAME,\n-\t\t\t\trc);\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_ACT_ENCAP_16B:\n-\t\tTF_RM_GET_POOLS(tfs, dir, pool,\n-\t\t\t\tTF_SRAM_ENCAP_16B_POOL_NAME,\n-\t\t\t\trc);\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_ACT_ENCAP_64B:\n-\t\t/* No pools for RX direction, so bail out */\n-\t\tif (dir == TF_DIR_RX)\n-\t\t\tbreak;\n-\t\tTF_RM_GET_POOLS_TX(tfs, pool,\n-\t\t\t\t   TF_SRAM_ENCAP_64B_POOL_NAME);\n-\t\trc = 0;\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_ACT_SP_SMAC:\n-\t\tTF_RM_GET_POOLS(tfs, dir, pool,\n-\t\t\t\tTF_SRAM_SP_SMAC_POOL_NAME,\n-\t\t\t\trc);\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_ACT_SP_SMAC_IPV4:\n-\t\t/* No pools for TX direction, so bail out */\n-\t\tif (dir == TF_DIR_RX)\n-\t\t\tbreak;\n-\t\tTF_RM_GET_POOLS_TX(tfs, pool,\n-\t\t\t\t   TF_SRAM_SP_SMAC_IPV4_POOL_NAME);\n-\t\trc = 0;\n-\t\tbreak;\n-\tcase 
TF_TBL_TYPE_ACT_SP_SMAC_IPV6:\n-\t\t/* No pools for TX direction, so bail out */\n-\t\tif (dir == TF_DIR_RX)\n-\t\t\tbreak;\n-\t\tTF_RM_GET_POOLS_TX(tfs, pool,\n-\t\t\t\t   TF_SRAM_SP_SMAC_IPV6_POOL_NAME);\n-\t\trc = 0;\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_ACT_STATS_64:\n-\t\tTF_RM_GET_POOLS(tfs, dir, pool,\n-\t\t\t\tTF_SRAM_STATS_64B_POOL_NAME,\n-\t\t\t\trc);\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_ACT_MODIFY_SPORT:\n-\t\tTF_RM_GET_POOLS(tfs, dir, pool,\n-\t\t\t\tTF_SRAM_NAT_SPORT_POOL_NAME,\n-\t\t\t\trc);\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:\n-\t\tTF_RM_GET_POOLS(tfs, dir, pool,\n-\t\t\t\tTF_SRAM_NAT_S_IPV4_POOL_NAME,\n-\t\t\t\trc);\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:\n-\t\tTF_RM_GET_POOLS(tfs, dir, pool,\n-\t\t\t\tTF_SRAM_NAT_D_IPV4_POOL_NAME,\n-\t\t\t\trc);\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_METER_PROF:\n-\t\tTF_RM_GET_POOLS(tfs, dir, pool,\n-\t\t\t\tTF_METER_PROF_POOL_NAME,\n-\t\t\t\trc);\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_METER_INST:\n-\t\tTF_RM_GET_POOLS(tfs, dir, pool,\n-\t\t\t\tTF_METER_INST_POOL_NAME,\n-\t\t\t\trc);\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_MIRROR_CONFIG:\n-\t\tTF_RM_GET_POOLS(tfs, dir, pool,\n-\t\t\t\tTF_MIRROR_POOL_NAME,\n-\t\t\t\trc);\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_UPAR:\n-\t\tTF_RM_GET_POOLS(tfs, dir, pool,\n-\t\t\t\tTF_UPAR_POOL_NAME,\n-\t\t\t\trc);\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_EPOCH0:\n-\t\tTF_RM_GET_POOLS(tfs, dir, pool,\n-\t\t\t\tTF_EPOCH0_POOL_NAME,\n-\t\t\t\trc);\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_EPOCH1:\n-\t\tTF_RM_GET_POOLS(tfs, dir, pool,\n-\t\t\t\tTF_EPOCH1_POOL_NAME,\n-\t\t\t\trc);\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_METADATA:\n-\t\tTF_RM_GET_POOLS(tfs, dir, pool,\n-\t\t\t\tTF_METADATA_POOL_NAME,\n-\t\t\t\trc);\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_CT_STATE:\n-\t\tTF_RM_GET_POOLS(tfs, dir, pool,\n-\t\t\t\tTF_CT_STATE_POOL_NAME,\n-\t\t\t\trc);\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_RANGE_PROF:\n-\t\tTF_RM_GET_POOLS(tfs, dir, pool,\n-\t\t\t\tTF_RANGE_PROF_POOL_NAME,\n-\t\t\t\trc);\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_RANGE_ENTRY:\n-\t\tTF_RM_GET_POOLS(tfs, dir, pool,\n-\t\t\t\tTF_RANGE_ENTRY_POOL_NAME,\n-\t\t\t\trc);\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_LAG:\n-\t\tTF_RM_GET_POOLS(tfs, dir, pool,\n-\t\t\t\tTF_LAG_ENTRY_POOL_NAME,\n-\t\t\t\trc);\n-\t\tbreak;\n-\t/* Not yet supported */\n-\tcase TF_TBL_TYPE_ACT_ENCAP_32B:\n-\tcase TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:\n-\tcase TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:\n-\tcase TF_TBL_TYPE_VNIC_SVIF:\n-\t\tbreak;\n-\t/* No bitalloc pools for these types */\n-\tcase TF_TBL_TYPE_EXT:\n-\tdefault:\n-\t\tbreak;\n-\t}\n+\trm_db = (struct tf_rm_new_db *)parms->rm_db;\n+\tcfg_type = rm_db->db[parms->db_index].cfg_type;\n \n-\tif (rc == -EOPNOTSUPP) {\n-\t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"%s, Table type not supported, type:%d\\n\",\n-\t\t\t    tf_dir_2_str(dir),\n-\t\t\t    type);\n-\t\treturn rc;\n-\t} else if (rc == -1) {\n-\t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"%s, Table type lookup failed, type:%d\\n\",\n-\t\t\t    tf_dir_2_str(dir),\n-\t\t\t    type);\n-\t\treturn rc;\n-\t}\n+\t/* Bail out if not controlled by RM */\n+\tif (cfg_type != TF_RM_ELEM_CFG_HCAPI &&\n+\t    cfg_type != TF_RM_ELEM_CFG_PRIVATE)\n+\t\treturn -ENOTSUP;\n+\n+\tmemcpy(parms->info,\n+\t       &rm_db->db[parms->db_index].alloc,\n+\t       sizeof(struct tf_rm_alloc_info));\n \n \treturn 0;\n }\n \n int\n-tf_rm_convert_tbl_type(enum tf_tbl_type type,\n-\t\t       uint32_t *hcapi_type)\n+tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)\n {\n-\tint rc = 0;\n+\tstruct tf_rm_new_db *rm_db;\n+\tenum tf_rm_elem_cfg_type cfg_type;\n \n-\tswitch 
(type) {\n-\tcase TF_TBL_TYPE_FULL_ACT_RECORD:\n-\t\t*hcapi_type = TF_RESC_TYPE_SRAM_FULL_ACTION;\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_MCAST_GROUPS:\n-\t\t*hcapi_type = TF_RESC_TYPE_SRAM_MCG;\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_ACT_ENCAP_8B:\n-\t\t*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_8B;\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_ACT_ENCAP_16B:\n-\t\t*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_16B;\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_ACT_ENCAP_64B:\n-\t\t*hcapi_type = TF_RESC_TYPE_SRAM_ENCAP_64B;\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_ACT_SP_SMAC:\n-\t\t*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC;\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_ACT_SP_SMAC_IPV4:\n-\t\t*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC_IPV4;\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_ACT_SP_SMAC_IPV6:\n-\t\t*hcapi_type = TF_RESC_TYPE_SRAM_SP_SMAC_IPV6;\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_ACT_STATS_64:\n-\t\t*hcapi_type = TF_RESC_TYPE_SRAM_COUNTER_64B;\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_ACT_MODIFY_SPORT:\n-\t\t*hcapi_type = TF_RESC_TYPE_SRAM_NAT_SPORT;\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_ACT_MODIFY_DPORT:\n-\t\t*hcapi_type = TF_RESC_TYPE_SRAM_NAT_DPORT;\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:\n-\t\t*hcapi_type = TF_RESC_TYPE_SRAM_NAT_S_IPV4;\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:\n-\t\t*hcapi_type = TF_RESC_TYPE_SRAM_NAT_D_IPV4;\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_METER_PROF:\n-\t\t*hcapi_type = TF_RESC_TYPE_HW_METER_PROF;\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_METER_INST:\n-\t\t*hcapi_type = TF_RESC_TYPE_HW_METER_INST;\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_MIRROR_CONFIG:\n-\t\t*hcapi_type = TF_RESC_TYPE_HW_MIRROR;\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_UPAR:\n-\t\t*hcapi_type = TF_RESC_TYPE_HW_UPAR;\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_EPOCH0:\n-\t\t*hcapi_type = TF_RESC_TYPE_HW_EPOCH0;\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_EPOCH1:\n-\t\t*hcapi_type = TF_RESC_TYPE_HW_EPOCH1;\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_METADATA:\n-\t\t*hcapi_type = TF_RESC_TYPE_HW_METADATA;\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_CT_STATE:\n-\t\t*hcapi_type = TF_RESC_TYPE_HW_CT_STATE;\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_RANGE_PROF:\n-\t\t*hcapi_type = TF_RESC_TYPE_HW_RANGE_PROF;\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_RANGE_ENTRY:\n-\t\t*hcapi_type = TF_RESC_TYPE_HW_RANGE_ENTRY;\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_LAG:\n-\t\t*hcapi_type = TF_RESC_TYPE_HW_LAG_ENTRY;\n-\t\tbreak;\n-\t/* Not yet supported */\n-\tcase TF_TBL_TYPE_ACT_ENCAP_32B:\n-\tcase TF_TBL_TYPE_ACT_MODIFY_IPV6_DEST:\n-\tcase TF_TBL_TYPE_ACT_MODIFY_IPV6_SRC:\n-\tcase TF_TBL_TYPE_VNIC_SVIF:\n-\tcase TF_TBL_TYPE_EXT:   /* No pools for this type */\n-\tdefault:\n-\t\t*hcapi_type = -1;\n-\t\trc = -EOPNOTSUPP;\n-\t}\n+\tTF_CHECK_PARMS2(parms, parms->rm_db);\n \n-\treturn rc;\n+\trm_db = (struct tf_rm_new_db *)parms->rm_db;\n+\tcfg_type = rm_db->db[parms->db_index].cfg_type;\n+\n+\t/* Bail out if not controlled by RM */\n+\tif (cfg_type != TF_RM_ELEM_CFG_HCAPI &&\n+\t    cfg_type != TF_RM_ELEM_CFG_PRIVATE)\n+\t\treturn -ENOTSUP;\n+\n+\t*parms->hcapi_type = rm_db->db[parms->db_index].hcapi_type;\n+\n+\treturn 0;\n }\n \n int\n-tf_rm_convert_index(struct tf_session *tfs,\n-\t\t    enum tf_dir dir,\n-\t\t    enum tf_tbl_type type,\n-\t\t    enum tf_rm_convert_type c_type,\n-\t\t    uint32_t index,\n-\t\t    uint32_t *convert_index)\n+tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms)\n {\n-\tint rc;\n-\tstruct tf_rm_resc *resc;\n-\tuint32_t hcapi_type;\n-\tuint32_t base_index;\n+\tint rc = 0;\n+\tstruct tf_rm_new_db *rm_db;\n+\tenum tf_rm_elem_cfg_type cfg_type;\n \n-\tif (dir == TF_DIR_RX)\n-\t\tresc = &tfs->resc.rx;\n-\telse if (dir == 
TF_DIR_TX)\n-\t\tresc = &tfs->resc.tx;\n-\telse\n-\t\treturn -EOPNOTSUPP;\n+\tTF_CHECK_PARMS2(parms, parms->rm_db);\n \n-\trc = tf_rm_convert_tbl_type(type, &hcapi_type);\n-\tif (rc)\n-\t\treturn -1;\n-\n-\tswitch (type) {\n-\tcase TF_TBL_TYPE_FULL_ACT_RECORD:\n-\tcase TF_TBL_TYPE_MCAST_GROUPS:\n-\tcase TF_TBL_TYPE_ACT_ENCAP_8B:\n-\tcase TF_TBL_TYPE_ACT_ENCAP_16B:\n-\tcase TF_TBL_TYPE_ACT_ENCAP_32B:\n-\tcase TF_TBL_TYPE_ACT_ENCAP_64B:\n-\tcase TF_TBL_TYPE_ACT_SP_SMAC:\n-\tcase TF_TBL_TYPE_ACT_SP_SMAC_IPV4:\n-\tcase TF_TBL_TYPE_ACT_SP_SMAC_IPV6:\n-\tcase TF_TBL_TYPE_ACT_STATS_64:\n-\tcase TF_TBL_TYPE_ACT_MODIFY_SPORT:\n-\tcase TF_TBL_TYPE_ACT_MODIFY_DPORT:\n-\tcase TF_TBL_TYPE_ACT_MODIFY_IPV4_SRC:\n-\tcase TF_TBL_TYPE_ACT_MODIFY_IPV4_DEST:\n-\t\tbase_index = resc->sram_entry[hcapi_type].start;\n-\t\tbreak;\n-\tcase TF_TBL_TYPE_MIRROR_CONFIG:\n-\tcase TF_TBL_TYPE_METER_PROF:\n-\tcase TF_TBL_TYPE_METER_INST:\n-\tcase TF_TBL_TYPE_UPAR:\n-\tcase TF_TBL_TYPE_EPOCH0:\n-\tcase TF_TBL_TYPE_EPOCH1:\n-\tcase TF_TBL_TYPE_METADATA:\n-\tcase TF_TBL_TYPE_CT_STATE:\n-\tcase TF_TBL_TYPE_RANGE_PROF:\n-\tcase TF_TBL_TYPE_RANGE_ENTRY:\n-\tcase TF_TBL_TYPE_LAG:\n-\t\tbase_index = resc->hw_entry[hcapi_type].start;\n-\t\tbreak;\n-\t/* Not yet supported */\n-\tcase TF_TBL_TYPE_VNIC_SVIF:\n-\tcase TF_TBL_TYPE_EXT:   /* No pools for this type */\n-\tdefault:\n-\t\treturn -EOPNOTSUPP;\n-\t}\n+\trm_db = (struct tf_rm_new_db *)parms->rm_db;\n+\tcfg_type = rm_db->db[parms->db_index].cfg_type;\n \n-\tswitch (c_type) {\n-\tcase TF_RM_CONVERT_RM_BASE:\n-\t\t*convert_index = index - base_index;\n-\t\tbreak;\n-\tcase TF_RM_CONVERT_ADD_BASE:\n-\t\t*convert_index = index + base_index;\n-\t\tbreak;\n-\tdefault:\n-\t\treturn -EOPNOTSUPP;\n+\t/* Bail out if not controlled by RM */\n+\tif (cfg_type != TF_RM_ELEM_CFG_HCAPI &&\n+\t    cfg_type != TF_RM_ELEM_CFG_PRIVATE)\n+\t\treturn -ENOTSUP;\n+\n+\t/* Bail silently (no logging), if the pool is not valid there\n+\t * was no elements allocated for it.\n+\t */\n+\tif (rm_db->db[parms->db_index].pool == NULL) {\n+\t\t*parms->count = 0;\n+\t\treturn 0;\n \t}\n \n-\treturn 0;\n+\t*parms->count = ba_inuse_count(rm_db->db[parms->db_index].pool);\n+\n+\treturn rc;\n+\n }\ndiff --git a/drivers/net/bnxt/tf_core/tf_rm.h b/drivers/net/bnxt/tf_core/tf_rm.h\nindex 1a09f13..5cb6889 100644\n--- a/drivers/net/bnxt/tf_core/tf_rm.h\n+++ b/drivers/net/bnxt/tf_core/tf_rm.h\n@@ -3,301 +3,444 @@\n  * All rights reserved.\n  */\n \n-#ifndef TF_RM_H_\n-#define TF_RM_H_\n+#ifndef TF_RM_NEW_H_\n+#define TF_RM_NEW_H_\n \n-#include \"tf_resources.h\"\n #include \"tf_core.h\"\n #include \"bitalloc.h\"\n+#include \"tf_device.h\"\n \n struct tf;\n-struct tf_session;\n \n-/* Internal macro to determine appropriate allocation pools based on\n- * DIRECTION parm, also performs error checking for DIRECTION parm. The\n- * SESSION_POOL and SESSION pointers are set appropriately upon\n- * successful return (the GLOBAL_POOL is used to globally manage\n- * resource allocation and the SESSION_POOL is used to track the\n- * resources that have been allocated to the session)\n+/**\n+ * The Resource Manager (RM) module provides basic DB handling for\n+ * internal resources. These resources exists within the actual device\n+ * and are controlled by the HCAPI Resource Manager running on the\n+ * firmware.\n+ *\n+ * The RM DBs are all intended to be indexed using TF types there for\n+ * a lookup requires no additional conversion. 
The DB configuration\n+ * specifies the TF Type to HCAPI Type mapping and it becomes the\n+ * responsibility of the DB initialization to handle this static\n+ * mapping.\n+ *\n+ * Accessor functions are providing access to the DB, thus hiding the\n+ * implementation.\n *\n- * parameters:\n- *   struct tfp        *tfp\n- *   enum tf_dir        direction\n- *   struct bitalloc  **session_pool\n- *   string             base_pool_name - used to form pointers to the\n- *\t\t\t\t\t appropriate bit allocation\n- *\t\t\t\t\t pools, both directions of the\n- *\t\t\t\t\t session pools must have same\n- *\t\t\t\t\t base name, for example if\n- *\t\t\t\t\t POOL_NAME is feat_pool: - the\n- *\t\t\t\t\t ptr's to the session pools\n- *\t\t\t\t\t are feat_pool_rx feat_pool_tx\n+ * The RM DB will work on its initial allocated sizes so the\n+ * capability of dynamically growing a particular resource is not\n+ * possible. If this capability later becomes a requirement then the\n+ * MAX pool size of the Chip needs to be added to the tf_rm_elem_info\n+ * structure and several new APIs would need to be added to allow for\n+ * growth of a single TF resource type.\n *\n- *  int                  rc - return code\n- *\t\t\t      0 - Success\n- *\t\t\t     -1 - invalid DIRECTION parm\n+ * The access functions does not check for NULL pointers as it's a\n+ * support module, not called directly.\n  */\n-#define TF_RM_GET_POOLS(tfs, direction, session_pool, pool_name, rc) do { \\\n-\t\t(rc) = 0;\t\t\t\t\t\t\\\n-\t\tif ((direction) == TF_DIR_RX) {\t\t\t\t\\\n-\t\t\t*(session_pool) = (tfs)->pool_name ## _RX;\t\\\n-\t\t} else if ((direction) == TF_DIR_TX) {\t\t\t\\\n-\t\t\t*(session_pool) = (tfs)->pool_name ## _TX;\t\\\n-\t\t} else {\t\t\t\t\t\t\\\n-\t\t\trc = -1;\t\t\t\t\t\\\n-\t\t}\t\t\t\t\t\t\t\\\n-\t} while (0)\n \n-#define TF_RM_GET_POOLS_RX(tfs, session_pool, pool_name)\t\\\n-\t(*(session_pool) = (tfs)->pool_name ## _RX)\n+/**\n+ * Resource reservation single entry result. Used when accessing HCAPI\n+ * RM on the firmware.\n+ */\n+struct tf_rm_new_entry {\n+\t/** Starting index of the allocated resource */\n+\tuint16_t start;\n+\t/** Number of allocated elements */\n+\tuint16_t stride;\n+};\n \n-#define TF_RM_GET_POOLS_TX(tfs, session_pool, pool_name)\t\\\n-\t(*(session_pool) = (tfs)->pool_name ## _TX)\n+/**\n+ * RM Element configuration enumeration. Used by the Device to\n+ * indicate how the RM elements the DB consists off, are to be\n+ * configured at time of DB creation. The TF may present types to the\n+ * ULP layer that is not controlled by HCAPI within the Firmware.\n+ */\n+enum tf_rm_elem_cfg_type {\n+\t/** No configuration */\n+\tTF_RM_ELEM_CFG_NULL,\n+\t/** HCAPI 'controlled', uses a Pool for internal storage */\n+\tTF_RM_ELEM_CFG_HCAPI,\n+\t/** Private thus not HCAPI 'controlled', creates a Pool for storage */\n+\tTF_RM_ELEM_CFG_PRIVATE,\n+\t/**\n+\t * Shared element thus it belongs to a shared FW Session and\n+\t * is not controlled by the Host.\n+\t */\n+\tTF_RM_ELEM_CFG_SHARED,\n+\tTF_RM_TYPE_MAX\n+};\n \n /**\n- * Resource query single entry\n+ * RM Reservation strategy enumeration. 
Type of strategy comes from\n+ * the HCAPI RM QCAPS handshake.\n  */\n-struct tf_rm_query_entry {\n-\t/** Minimum guaranteed number of elements */\n-\tuint16_t min;\n-\t/** Maximum non-guaranteed number of elements */\n-\tuint16_t max;\n+enum tf_rm_resc_resv_strategy {\n+\tTF_RM_RESC_RESV_STATIC_PARTITION,\n+\tTF_RM_RESC_RESV_STRATEGY_1,\n+\tTF_RM_RESC_RESV_STRATEGY_2,\n+\tTF_RM_RESC_RESV_STRATEGY_3,\n+\tTF_RM_RESC_RESV_MAX\n };\n \n /**\n- * Resource single entry\n+ * RM Element configuration structure, used by the Device to configure\n+ * how an individual TF type is configured in regard to the HCAPI RM\n+ * of same type.\n  */\n-struct tf_rm_entry {\n-\t/** Starting index of the allocated resource */\n-\tuint16_t start;\n-\t/** Number of allocated elements */\n-\tuint16_t stride;\n+struct tf_rm_element_cfg {\n+\t/**\n+\t * RM Element config controls how the DB for that element is\n+\t * processed.\n+\t */\n+\tenum tf_rm_elem_cfg_type cfg_type;\n+\n+\t/* If a HCAPI to TF type conversion is required then TF type\n+\t * can be added here.\n+\t */\n+\n+\t/**\n+\t * HCAPI RM Type for the element. Used for TF to HCAPI type\n+\t * conversion.\n+\t */\n+\tuint16_t hcapi_type;\n };\n \n /**\n- * Resource query array of HW entities\n+ * Allocation information for a single element.\n  */\n-struct tf_rm_hw_query {\n-\t/** array of HW resource entries */\n-\tstruct tf_rm_query_entry hw_query[TF_RESC_TYPE_HW_MAX];\n+struct tf_rm_alloc_info {\n+\t/**\n+\t * HCAPI RM allocated range information.\n+\t *\n+\t * NOTE:\n+\t * In case of dynamic allocation support this would have\n+\t * to be changed to linked list of tf_rm_entry instead.\n+\t */\n+\tstruct tf_rm_new_entry entry;\n };\n \n /**\n- * Resource allocation array of HW entities\n+ * Create RM DB parameters\n  */\n-struct tf_rm_hw_alloc {\n-\t/** array of HW resource entries */\n-\tuint16_t hw_num[TF_RESC_TYPE_HW_MAX];\n+struct tf_rm_create_db_parms {\n+\t/**\n+\t * [in] Device module type. Used for logging purposes.\n+\t */\n+\tenum tf_device_module_type type;\n+\t/**\n+\t * [in] Receive or transmit direction.\n+\t */\n+\tenum tf_dir dir;\n+\t/**\n+\t * [in] Number of elements.\n+\t */\n+\tuint16_t num_elements;\n+\t/**\n+\t * [in] Parameter structure array. Array size is num_elements.\n+\t */\n+\tstruct tf_rm_element_cfg *cfg;\n+\t/**\n+\t * Resource allocation count array. This array content\n+\t * originates from the tf_session_resources that is passed in\n+\t * on session open.\n+\t * Array size is num_elements.\n+\t */\n+\tuint16_t *alloc_cnt;\n+\t/**\n+\t * [out] RM DB Handle\n+\t */\n+\tvoid **rm_db;\n };\n \n /**\n- * Resource query array of SRAM entities\n+ * Free RM DB parameters\n  */\n-struct tf_rm_sram_query {\n-\t/** array of SRAM resource entries */\n-\tstruct tf_rm_query_entry sram_query[TF_RESC_TYPE_SRAM_MAX];\n+struct tf_rm_free_db_parms {\n+\t/**\n+\t * [in] Receive or transmit direction\n+\t */\n+\tenum tf_dir dir;\n+\t/**\n+\t * [in] RM DB Handle\n+\t */\n+\tvoid *rm_db;\n };\n \n /**\n- * Resource allocation array of SRAM entities\n+ * Allocate RM parameters for a single element\n  */\n-struct tf_rm_sram_alloc {\n-\t/** array of SRAM resource entries */\n-\tuint16_t sram_num[TF_RESC_TYPE_SRAM_MAX];\n+struct tf_rm_allocate_parms {\n+\t/**\n+\t * [in] RM DB Handle\n+\t */\n+\tvoid *rm_db;\n+\t/**\n+\t * [in] DB Index, indicates which DB entry to perform the\n+\t * action on.\n+\t */\n+\tuint16_t db_index;\n+\t/**\n+\t * [in] Pointer to the allocated index in normalized\n+\t * form. 
Normalized means the index has been adjusted,\n+\t * i.e. Full Action Record offsets.\n+\t */\n+\tuint32_t *index;\n+\t/**\n+\t * [in] Priority, indicates the prority of the entry\n+\t * priority  0: allocate from top of the tcam (from index 0\n+\t *              or lowest available index)\n+\t * priority !0: allocate from bottom of the tcam (from highest\n+\t *              available index)\n+\t */\n+\tuint32_t priority;\n };\n \n /**\n- * Resource Manager arrays for a single direction\n+ * Free RM parameters for a single element\n  */\n-struct tf_rm_resc {\n-\t/** array of HW resource entries */\n-\tstruct tf_rm_entry hw_entry[TF_RESC_TYPE_HW_MAX];\n-\t/** array of SRAM resource entries */\n-\tstruct tf_rm_entry sram_entry[TF_RESC_TYPE_SRAM_MAX];\n+struct tf_rm_free_parms {\n+\t/**\n+\t * [in] RM DB Handle\n+\t */\n+\tvoid *rm_db;\n+\t/**\n+\t * [in] DB Index, indicates which DB entry to perform the\n+\t * action on.\n+\t */\n+\tuint16_t db_index;\n+\t/**\n+\t * [in] Index to free\n+\t */\n+\tuint16_t index;\n };\n \n /**\n- * Resource Manager Database\n+ * Is Allocated parameters for a single element\n  */\n-struct tf_rm_db {\n-\tstruct tf_rm_resc rx;\n-\tstruct tf_rm_resc tx;\n+struct tf_rm_is_allocated_parms {\n+\t/**\n+\t * [in] RM DB Handle\n+\t */\n+\tvoid *rm_db;\n+\t/**\n+\t * [in] DB Index, indicates which DB entry to perform the\n+\t * action on.\n+\t */\n+\tuint16_t db_index;\n+\t/**\n+\t * [in] Index to free\n+\t */\n+\tuint32_t index;\n+\t/**\n+\t * [in] Pointer to flag that indicates the state of the query\n+\t */\n+\tint *allocated;\n };\n \n /**\n- * Helper function used to convert HW HCAPI resource type to a string.\n+ * Get Allocation information for a single element\n  */\n-const char\n-*tf_hcapi_hw_2_str(enum tf_resource_type_hw hw_type);\n+struct tf_rm_get_alloc_info_parms {\n+\t/**\n+\t * [in] RM DB Handle\n+\t */\n+\tvoid *rm_db;\n+\t/**\n+\t * [in] DB Index, indicates which DB entry to perform the\n+\t * action on.\n+\t */\n+\tuint16_t db_index;\n+\t/**\n+\t * [out] Pointer to the requested allocation information for\n+\t * the specified db_index\n+\t */\n+\tstruct tf_rm_alloc_info *info;\n+};\n \n /**\n- * Helper function used to convert SRAM HCAPI resource type to a string.\n+ * Get HCAPI type parameters for a single element\n  */\n-const char\n-*tf_hcapi_sram_2_str(enum tf_resource_type_sram sram_type);\n+struct tf_rm_get_hcapi_parms {\n+\t/**\n+\t * [in] RM DB Handle\n+\t */\n+\tvoid *rm_db;\n+\t/**\n+\t * [in] DB Index, indicates which DB entry to perform the\n+\t * action on.\n+\t */\n+\tuint16_t db_index;\n+\t/**\n+\t * [out] Pointer to the hcapi type for the specified db_index\n+\t */\n+\tuint16_t *hcapi_type;\n+};\n \n /**\n- * Initializes the Resource Manager and the associated database\n- * entries for HW and SRAM resources. 
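/*
 * End-to-end sketch (illustrative only, not part of the patch) of the
 * parameter structures defined above: create a two element DB,
 * allocate and verify one entry, then return everything.  The helper
 * name is hypothetical, TF_DEVICE_MODULE_TYPE_EM is used purely as a
 * placeholder module type, and the hcapi_type values would normally
 * come from the CFA resource type definitions (cfa_resource_types.h);
 * 0 is a placeholder here.
 */
#include <stdint.h>
#include \"tf_rm.h\"

static int
example_rm_db_lifecycle(struct tf *tfp)
{
	struct tf_rm_element_cfg cfg[] = {
		{ .cfg_type = TF_RM_ELEM_CFG_HCAPI, .hcapi_type = 0 },
		{ .cfg_type = TF_RM_ELEM_CFG_NULL, .hcapi_type = 0 },
	};
	uint16_t alloc_cnt[] = { 64, 0 };	/* reservation per element */
	struct tf_rm_create_db_parms cparms = { 0 };
	struct tf_rm_allocate_parms aparms = { 0 };
	struct tf_rm_is_allocated_parms iparms = { 0 };
	struct tf_rm_free_parms fparms = { 0 };
	struct tf_rm_free_db_parms dparms = { 0 };
	void *rm_db = NULL;
	uint32_t index = 0;
	int allocated = 0;
	int rc;

	/* Create the DB; the HCAPI QCAPS/reservation happens inside */
	cparms.type = TF_DEVICE_MODULE_TYPE_EM;	/* placeholder module */
	cparms.dir = TF_DIR_RX;
	cparms.num_elements = 2;
	cparms.cfg = cfg;
	cparms.alloc_cnt = alloc_cnt;
	cparms.rm_db = &rm_db;
	rc = tf_rm_create_db(tfp, &cparms);
	if (rc)
		return rc;

	/* Allocate one entry from element 0, lowest free index first */
	aparms.rm_db = rm_db;
	aparms.db_index = 0;
	aparms.index = &index;
	aparms.priority = 0;
	rc = tf_rm_allocate(&aparms);
	if (rc)
		goto done;

	/* Confirm the returned (normalized) index is marked in use */
	iparms.rm_db = rm_db;
	iparms.db_index = 0;
	iparms.index = index;
	iparms.allocated = &allocated;
	rc = tf_rm_is_allocated(&iparms);
	if (rc)
		goto done;

	/* Return the entry; note tf_rm_free_parms carries a 16 bit index */
	fparms.rm_db = rm_db;
	fparms.db_index = 0;
	fparms.index = (uint16_t)index;
	rc = tf_rm_free(&fparms);

done:
	dparms.dir = TF_DIR_RX;
	dparms.rm_db = rm_db;
	tf_rm_free_db(tfp, &dparms);
	return rc;
}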
Must be called before any other\n- * Resource Manager functions.\n+ * Get InUse count parameters for single element\n+ */\n+struct tf_rm_get_inuse_count_parms {\n+\t/**\n+\t * [in] RM DB Handle\n+\t */\n+\tvoid *rm_db;\n+\t/**\n+\t * [in] DB Index, indicates which DB entry to perform the\n+\t * action on.\n+\t */\n+\tuint16_t db_index;\n+\t/**\n+\t * [out] Pointer to the inuse count for the specified db_index\n+\t */\n+\tuint16_t *count;\n+};\n+\n+/**\n+ * @page rm Resource Manager\n  *\n- * [in] tfp\n- *   Pointer to TF handle\n+ * @ref tf_rm_create_db\n+ *\n+ * @ref tf_rm_free_db\n+ *\n+ * @ref tf_rm_allocate\n+ *\n+ * @ref tf_rm_free\n+ *\n+ * @ref tf_rm_is_allocated\n+ *\n+ * @ref tf_rm_get_info\n+ *\n+ * @ref tf_rm_get_hcapi_type\n+ *\n+ * @ref tf_rm_get_inuse_count\n  */\n-void tf_rm_init(struct tf *tfp);\n \n /**\n- * Allocates and validates both HW and SRAM resources per the NVM\n- * configuration. If any allocation fails all resources for the\n- * session is deallocated.\n+ * Creates and fills a Resource Manager (RM) DB with requested\n+ * elements. The DB is indexed per the parms structure.\n  *\n  * [in] tfp\n- *   Pointer to TF handle\n+ *   Pointer to TF handle, used for HCAPI communication\n+ *\n+ * [in] parms\n+ *   Pointer to create parameters\n  *\n  * Returns\n  *   - (0) if successful.\n  *   - (-EINVAL) on failure.\n  */\n-int tf_rm_allocate_validate(struct tf *tfp);\n+/*\n+ * NOTE:\n+ * - Fail on parameter check\n+ * - Fail on DB creation, i.e. alloc amount is not possible or validation fails\n+ * - Fail on DB creation if DB already exist\n+ *\n+ * - Allocs local DB\n+ * - Does hcapi qcaps\n+ * - Does hcapi reservation\n+ * - Populates the pool with allocated elements\n+ * - Returns handle to the created DB\n+ */\n+int tf_rm_create_db(struct tf *tfp,\n+\t\t    struct tf_rm_create_db_parms *parms);\n \n /**\n- * Closes the Resource Manager and frees all allocated resources per\n- * the associated database.\n+ * Closes the Resource Manager (RM) DB and frees all allocated\n+ * resources per the associated database.\n  *\n  * [in] tfp\n- *   Pointer to TF handle\n+ *   Pointer to TF handle, used for HCAPI communication\n+ *\n+ * [in] parms\n+ *   Pointer to free parameters\n  *\n  * Returns\n  *   - (0) if successful.\n  *   - (-EINVAL) on failure.\n- *   - (-ENOTEMPTY) if resources are not cleaned up before close\n  */\n-int tf_rm_close(struct tf *tfp);\n+int tf_rm_free_db(struct tf *tfp,\n+\t\t  struct tf_rm_free_db_parms *parms);\n \n-#if (TF_SHADOW == 1)\n /**\n- * Initializes Shadow DB of configuration elements\n+ * Allocates a single element for the type specified, within the DB.\n  *\n- * [in] tfs\n- *   Pointer to TF Session\n+ * [in] parms\n+ *   Pointer to allocate parameters\n  *\n- * Returns:\n- *  0  - Success\n+ * Returns\n+ *   - (0) if successful.\n+ *   - (-EINVAL) on failure.\n+ *   - (-ENOMEM) if pool is empty\n  */\n-int tf_rm_shadow_db_init(struct tf_session *tfs);\n-#endif /* TF_SHADOW */\n+int tf_rm_allocate(struct tf_rm_allocate_parms *parms);\n \n /**\n- * Perform a Session Pool lookup using the Tcam table type.\n- *\n- * Function will print error msg if tcam type is unsupported or lookup\n- * failed.\n+ * Free's a single element for the type specified, within the DB.\n  *\n- * [in] tfs\n- *   Pointer to TF Session\n+ * [in] parms\n+ *   Pointer to free parameters\n  *\n- * [in] type\n- *   Type of the object\n- *\n- * [in] dir\n- *    Receive or transmit direction\n- *\n- * [in/out]  session_pool\n- *   Session pool\n- *\n- * Returns:\n- *  0      
     - Success will set the **pool\n- *  -EOPNOTSUPP - Type is not supported\n+ * Returns\n+ *   - (0) if successful.\n+ *   - (-EINVAL) on failure.\n  */\n-int\n-tf_rm_lookup_tcam_type_pool(struct tf_session *tfs,\n-\t\t\t    enum tf_dir dir,\n-\t\t\t    enum tf_tcam_tbl_type type,\n-\t\t\t    struct bitalloc **pool);\n+int tf_rm_free(struct tf_rm_free_parms *parms);\n \n /**\n- * Perform a Session Pool lookup using the Table type.\n- *\n- * Function will print error msg if table type is unsupported or\n- * lookup failed.\n- *\n- * [in] tfs\n- *   Pointer to TF Session\n- *\n- * [in] type\n- *   Type of the object\n+ * Performs an allocation verification check on a specified element.\n  *\n- * [in] dir\n- *    Receive or transmit direction\n+ * [in] parms\n+ *   Pointer to is allocated parameters\n  *\n- * [in/out]  session_pool\n- *   Session pool\n- *\n- * Returns:\n- *  0           - Success will set the **pool\n- *  -EOPNOTSUPP - Type is not supported\n+ * Returns\n+ *   - (0) if successful.\n+ *   - (-EINVAL) on failure.\n  */\n-int\n-tf_rm_lookup_tbl_type_pool(struct tf_session *tfs,\n-\t\t\t   enum tf_dir dir,\n-\t\t\t   enum tf_tbl_type type,\n-\t\t\t   struct bitalloc **pool);\n+/*\n+ * NOTE:\n+ *  - If pool is set to Chip MAX, then the query index must be checked\n+ *    against the allocated range and query index must be allocated as well.\n+ *  - If pool is allocated size only, then check if query index is allocated.\n+ */\n+int tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms);\n \n /**\n- * Converts the TF Table Type to internal HCAPI_TYPE\n- *\n- * [in] type\n- *   Type to be converted\n+ * Retrieves an elements allocation information from the Resource\n+ * Manager (RM) DB.\n  *\n- * [in/out] hcapi_type\n- *   Converted type\n+ * [in] parms\n+ *   Pointer to get info parameters\n  *\n- * Returns:\n- *  0           - Success will set the *hcapi_type\n- *  -EOPNOTSUPP - Type is not supported\n+ * Returns\n+ *   - (0) if successful.\n+ *   - (-EINVAL) on failure.\n  */\n-int\n-tf_rm_convert_tbl_type(enum tf_tbl_type type,\n-\t\t       uint32_t *hcapi_type);\n+int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);\n \n /**\n- * TF RM Convert of index methods.\n+ * Performs a lookup in the Resource Manager DB and retrives the\n+ * requested HCAPI RM type.\n+ *\n+ * [in] parms\n+ *   Pointer to get hcapi parameters\n+ *\n+ * Returns\n+ *   - (0) if successful.\n+ *   - (-EINVAL) on failure.\n  */\n-enum tf_rm_convert_type {\n-\t/** Adds the base of the Session Pool to the index */\n-\tTF_RM_CONVERT_ADD_BASE,\n-\t/** Removes the Session Pool base from the index */\n-\tTF_RM_CONVERT_RM_BASE\n-};\n+int tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms);\n \n /**\n- * Provides conversion of the Table Type index in relation to the\n- * Session Pool base.\n- *\n- * [in] tfs\n- *   Pointer to TF Session\n- *\n- * [in] dir\n- *    Receive or transmit direction\n- *\n- * [in] type\n- *   Type of the object\n+ * Performs a lookup in the Resource Manager DB and retrives the\n+ * requested HCAPI RM type inuse count.\n  *\n- * [in] c_type\n- *   Type of conversion to perform\n+ * [in] parms\n+ *   Pointer to get inuse parameters\n  *\n- * [in] index\n- *   Index to be converted\n- *\n- * [in/out]  convert_index\n- *   Pointer to the converted index\n+ * Returns\n+ *   - (0) if successful.\n+ *   - (-EINVAL) on failure.\n  */\n-int\n-tf_rm_convert_index(struct tf_session *tfs,\n-\t\t    enum tf_dir dir,\n-\t\t    enum tf_tbl_type type,\n-\t\t    enum tf_rm_convert_type 
c_type,\n-\t\t    uint32_t index,\n-\t\t    uint32_t *convert_index);\n+int tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms);\n \n-#endif /* TF_RM_H_ */\n+#endif /* TF_RM_NEW_H_ */\ndiff --git a/drivers/net/bnxt/tf_core/tf_rm_new.c b/drivers/net/bnxt/tf_core/tf_rm_new.c\ndeleted file mode 100644\nindex 2d9be65..0000000\n--- a/drivers/net/bnxt/tf_core/tf_rm_new.c\n+++ /dev/null\n@@ -1,907 +0,0 @@\n-/* SPDX-License-Identifier: BSD-3-Clause\n- * Copyright(c) 2019-2020 Broadcom\n- * All rights reserved.\n- */\n-\n-#include <string.h>\n-\n-#include <rte_common.h>\n-\n-#include <cfa_resource_types.h>\n-\n-#include \"tf_rm_new.h\"\n-#include \"tf_common.h\"\n-#include \"tf_util.h\"\n-#include \"tf_session.h\"\n-#include \"tf_device.h\"\n-#include \"tfp.h\"\n-#include \"tf_msg.h\"\n-\n-/**\n- * Generic RM Element data type that an RM DB is build upon.\n- */\n-struct tf_rm_element {\n-\t/**\n-\t * RM Element configuration type. If Private then the\n-\t * hcapi_type can be ignored. If Null then the element is not\n-\t * valid for the device.\n-\t */\n-\tenum tf_rm_elem_cfg_type cfg_type;\n-\n-\t/**\n-\t * HCAPI RM Type for the element.\n-\t */\n-\tuint16_t hcapi_type;\n-\n-\t/**\n-\t * HCAPI RM allocated range information for the element.\n-\t */\n-\tstruct tf_rm_alloc_info alloc;\n-\n-\t/**\n-\t * Bit allocator pool for the element. Pool size is controlled\n-\t * by the struct tf_session_resources at time of session creation.\n-\t * Null indicates that the element is not used for the device.\n-\t */\n-\tstruct bitalloc *pool;\n-};\n-\n-/**\n- * TF RM DB definition\n- */\n-struct tf_rm_new_db {\n-\t/**\n-\t * Number of elements in the DB\n-\t */\n-\tuint16_t num_entries;\n-\n-\t/**\n-\t * Direction this DB controls.\n-\t */\n-\tenum tf_dir dir;\n-\n-\t/**\n-\t * Module type, used for logging purposes.\n-\t */\n-\tenum tf_device_module_type type;\n-\n-\t/**\n-\t * The DB consists of an array of elements\n-\t */\n-\tstruct tf_rm_element *db;\n-};\n-\n-/**\n- * Adjust an index according to the allocation information.\n- *\n- * All resources are controlled in a 0 based pool. Some resources, by\n- * design, are not 0 based, i.e. Full Action Records (SRAM) thus they\n- * need to be adjusted before they are handed out.\n- *\n- * [in] cfg\n- *   Pointer to the DB configuration\n- *\n- * [in] reservations\n- *   Pointer to the allocation values associated with the module\n- *\n- * [in] count\n- *   Number of DB configuration elements\n- *\n- * [out] valid_count\n- *   Number of HCAPI entries with a reservation value greater than 0\n- *\n- * Returns:\n- *     0          - Success\n- *   - EOPNOTSUPP - Operation not supported\n- */\n-static void\n-tf_rm_count_hcapi_reservations(enum tf_dir dir,\n-\t\t\t       enum tf_device_module_type type,\n-\t\t\t       struct tf_rm_element_cfg *cfg,\n-\t\t\t       uint16_t *reservations,\n-\t\t\t       uint16_t count,\n-\t\t\t       uint16_t *valid_count)\n-{\n-\tint i;\n-\tuint16_t cnt = 0;\n-\n-\tfor (i = 0; i < count; i++) {\n-\t\tif (cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI &&\n-\t\t    reservations[i] > 0)\n-\t\t\tcnt++;\n-\n-\t\t/* Only log msg if a type is attempted reserved and\n-\t\t * not supported. 
We ignore EM module as its using a\n-\t\t * split configuration array thus it would fail for\n-\t\t * this type of check.\n-\t\t */\n-\t\tif (type != TF_DEVICE_MODULE_TYPE_EM &&\n-\t\t    cfg[i].cfg_type == TF_RM_ELEM_CFG_NULL &&\n-\t\t    reservations[i] > 0) {\n-\t\t\tTFP_DRV_LOG(ERR,\n-\t\t\t\t\"%s, %s, %s allocation not supported\\n\",\n-\t\t\t\ttf_device_module_type_2_str(type),\n-\t\t\t\ttf_dir_2_str(dir),\n-\t\t\t\ttf_device_module_type_subtype_2_str(type, i));\n-\t\t\tprintf(\"%s, %s, %s allocation of %d not supported\\n\",\n-\t\t\t\ttf_device_module_type_2_str(type),\n-\t\t\t\ttf_dir_2_str(dir),\n-\t\t\t       tf_device_module_type_subtype_2_str(type, i),\n-\t\t\t       reservations[i]);\n-\t\t}\n-\t}\n-\n-\t*valid_count = cnt;\n-}\n-\n-/**\n- * Resource Manager Adjust of base index definitions.\n- */\n-enum tf_rm_adjust_type {\n-\tTF_RM_ADJUST_ADD_BASE, /**< Adds base to the index */\n-\tTF_RM_ADJUST_RM_BASE   /**< Removes base from the index */\n-};\n-\n-/**\n- * Adjust an index according to the allocation information.\n- *\n- * All resources are controlled in a 0 based pool. Some resources, by\n- * design, are not 0 based, i.e. Full Action Records (SRAM) thus they\n- * need to be adjusted before they are handed out.\n- *\n- * [in] db\n- *   Pointer to the db, used for the lookup\n- *\n- * [in] action\n- *   Adjust action\n- *\n- * [in] db_index\n- *   DB index for the element type\n- *\n- * [in] index\n- *   Index to convert\n- *\n- * [out] adj_index\n- *   Adjusted index\n- *\n- * Returns:\n- *     0          - Success\n- *   - EOPNOTSUPP - Operation not supported\n- */\n-static int\n-tf_rm_adjust_index(struct tf_rm_element *db,\n-\t\t   enum tf_rm_adjust_type action,\n-\t\t   uint32_t db_index,\n-\t\t   uint32_t index,\n-\t\t   uint32_t *adj_index)\n-{\n-\tint rc = 0;\n-\tuint32_t base_index;\n-\n-\tbase_index = db[db_index].alloc.entry.start;\n-\n-\tswitch (action) {\n-\tcase TF_RM_ADJUST_RM_BASE:\n-\t\t*adj_index = index - base_index;\n-\t\tbreak;\n-\tcase TF_RM_ADJUST_ADD_BASE:\n-\t\t*adj_index = index + base_index;\n-\t\tbreak;\n-\tdefault:\n-\t\treturn -EOPNOTSUPP;\n-\t}\n-\n-\treturn rc;\n-}\n-\n-/**\n- * Logs an array of found residual entries to the console.\n- *\n- * [in] dir\n- *   Receive or transmit direction\n- *\n- * [in] type\n- *   Type of Device Module\n- *\n- * [in] count\n- *   Number of entries in the residual array\n- *\n- * [in] residuals\n- *   Pointer to an array of residual entries. Array is index same as\n- *   the DB in which this function is used. Each entry holds residual\n- *   value for that entry.\n- */\n-static void\n-tf_rm_log_residuals(enum tf_dir dir,\n-\t\t    enum tf_device_module_type type,\n-\t\t    uint16_t count,\n-\t\t    uint16_t *residuals)\n-{\n-\tint i;\n-\n-\t/* Walk the residual array and log the types that wasn't\n-\t * cleaned up to the console.\n-\t */\n-\tfor (i = 0; i < count; i++) {\n-\t\tif (residuals[i] != 0)\n-\t\t\tTFP_DRV_LOG(ERR,\n-\t\t\t\t\"%s, %s was not cleaned up, %d outstanding\\n\",\n-\t\t\t\ttf_dir_2_str(dir),\n-\t\t\t\ttf_device_module_type_subtype_2_str(type, i),\n-\t\t\t\tresiduals[i]);\n-\t}\n-}\n-\n-/**\n- * Performs a check of the passed in DB for any lingering elements. 
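/*
 * Hypothetical sketch, not part of the patch: the residual check
 * described in this helper walks every DB element and records any
 * non zero in-use counts before the DB is handed back to firmware.
 * A caller can perform the same audit through the public accessor
 * ahead of tf_rm_free_db(); the helper name and 'num_elements'
 * parameter are illustrative, and unlike the real check below this
 * sketch does not build the reservation array sent to FW.
 */
#include <stdint.h>
#include <errno.h>
#include \"tf_rm.h\"

static int
example_residual_audit(void *rm_db, uint16_t num_elements)
{
	struct tf_rm_get_inuse_count_parms cparms = { 0 };
	uint16_t count = 0;
	uint32_t residual = 0;
	uint16_t i;
	int rc;

	cparms.rm_db = rm_db;
	cparms.count = &count;
	for (i = 0; i < num_elements; i++) {
		cparms.db_index = i;
		rc = tf_rm_get_inuse_count(&cparms);
		if (rc == -ENOTSUP)
			continue;	/* element not RM controlled */
		if (rc)
			return rc;
		residual += count;
	}

	/* 0 means the caller cleaned up all of its allocations */
	return residual ? -EBUSY : 0;
}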
If\n- * a resource type was found to not have been cleaned up by the caller\n- * then its residual values are recorded, logged and passed back in an\n- * allocate reservation array that the caller can pass to the FW for\n- * cleanup.\n- *\n- * [in] db\n- *   Pointer to the db, used for the lookup\n- *\n- * [out] resv_size\n- *   Pointer to the reservation size of the generated reservation\n- *   array.\n- *\n- * [in/out] resv\n- *   Pointer Pointer to a reservation array. The reservation array is\n- *   allocated after the residual scan and holds any found residual\n- *   entries. Thus it can be smaller than the DB that the check was\n- *   performed on. Array must be freed by the caller.\n- *\n- * [out] residuals_present\n- *   Pointer to a bool flag indicating if residual was present in the\n- *   DB\n- *\n- * Returns:\n- *     0          - Success\n- *   - EOPNOTSUPP - Operation not supported\n- */\n-static int\n-tf_rm_check_residuals(struct tf_rm_new_db *rm_db,\n-\t\t      uint16_t *resv_size,\n-\t\t      struct tf_rm_resc_entry **resv,\n-\t\t      bool *residuals_present)\n-{\n-\tint rc;\n-\tint i;\n-\tint f;\n-\tuint16_t count;\n-\tuint16_t found;\n-\tuint16_t *residuals = NULL;\n-\tuint16_t hcapi_type;\n-\tstruct tf_rm_get_inuse_count_parms iparms;\n-\tstruct tf_rm_get_alloc_info_parms aparms;\n-\tstruct tf_rm_get_hcapi_parms hparms;\n-\tstruct tf_rm_alloc_info info;\n-\tstruct tfp_calloc_parms cparms;\n-\tstruct tf_rm_resc_entry *local_resv = NULL;\n-\n-\t/* Create array to hold the entries that have residuals */\n-\tcparms.nitems = rm_db->num_entries;\n-\tcparms.size = sizeof(uint16_t);\n-\tcparms.alignment = 0;\n-\trc = tfp_calloc(&cparms);\n-\tif (rc)\n-\t\treturn rc;\n-\n-\tresiduals = (uint16_t *)cparms.mem_va;\n-\n-\t/* Traverse the DB and collect any residual elements */\n-\tiparms.rm_db = rm_db;\n-\tiparms.count = &count;\n-\tfor (i = 0, found = 0; i < rm_db->num_entries; i++) {\n-\t\tiparms.db_index = i;\n-\t\trc = tf_rm_get_inuse_count(&iparms);\n-\t\t/* Not a device supported entry, just skip */\n-\t\tif (rc == -ENOTSUP)\n-\t\t\tcontinue;\n-\t\tif (rc)\n-\t\t\tgoto cleanup_residuals;\n-\n-\t\tif (count) {\n-\t\t\tfound++;\n-\t\t\tresiduals[i] = count;\n-\t\t\t*residuals_present = true;\n-\t\t}\n-\t}\n-\n-\tif (*residuals_present) {\n-\t\t/* Populate a reduced resv array with only the entries\n-\t\t * that have residuals.\n-\t\t */\n-\t\tcparms.nitems = found;\n-\t\tcparms.size = sizeof(struct tf_rm_resc_entry);\n-\t\tcparms.alignment = 0;\n-\t\trc = tfp_calloc(&cparms);\n-\t\tif (rc)\n-\t\t\treturn rc;\n-\n-\t\tlocal_resv = (struct tf_rm_resc_entry *)cparms.mem_va;\n-\n-\t\taparms.rm_db = rm_db;\n-\t\thparms.rm_db = rm_db;\n-\t\thparms.hcapi_type = &hcapi_type;\n-\t\tfor (i = 0, f = 0; i < rm_db->num_entries; i++) {\n-\t\t\tif (residuals[i] == 0)\n-\t\t\t\tcontinue;\n-\t\t\taparms.db_index = i;\n-\t\t\taparms.info = &info;\n-\t\t\trc = tf_rm_get_info(&aparms);\n-\t\t\tif (rc)\n-\t\t\t\tgoto cleanup_all;\n-\n-\t\t\thparms.db_index = i;\n-\t\t\trc = tf_rm_get_hcapi_type(&hparms);\n-\t\t\tif (rc)\n-\t\t\t\tgoto cleanup_all;\n-\n-\t\t\tlocal_resv[f].type = hcapi_type;\n-\t\t\tlocal_resv[f].start = info.entry.start;\n-\t\t\tlocal_resv[f].stride = info.entry.stride;\n-\t\t\tf++;\n-\t\t}\n-\t\t*resv_size = found;\n-\t}\n-\n-\ttf_rm_log_residuals(rm_db->dir,\n-\t\t\t    rm_db->type,\n-\t\t\t    rm_db->num_entries,\n-\t\t\t    residuals);\n-\n-\ttfp_free((void *)residuals);\n-\t*resv = local_resv;\n-\n-\treturn 0;\n-\n- cleanup_all:\n-\ttfp_free((void *)local_resv);\n-\t*resv = 
NULL;\n- cleanup_residuals:\n-\ttfp_free((void *)residuals);\n-\n-\treturn rc;\n-}\n-\n-int\n-tf_rm_create_db(struct tf *tfp,\n-\t\tstruct tf_rm_create_db_parms *parms)\n-{\n-\tint rc;\n-\tint i;\n-\tint j;\n-\tstruct tf_session *tfs;\n-\tstruct tf_dev_info *dev;\n-\tuint16_t max_types;\n-\tstruct tfp_calloc_parms cparms;\n-\tstruct tf_rm_resc_req_entry *query;\n-\tenum tf_rm_resc_resv_strategy resv_strategy;\n-\tstruct tf_rm_resc_req_entry *req;\n-\tstruct tf_rm_resc_entry *resv;\n-\tstruct tf_rm_new_db *rm_db;\n-\tstruct tf_rm_element *db;\n-\tuint32_t pool_size;\n-\tuint16_t hcapi_items;\n-\n-\tTF_CHECK_PARMS2(tfp, parms);\n-\n-\t/* Retrieve the session information */\n-\trc = tf_session_get_session(tfp, &tfs);\n-\tif (rc)\n-\t\treturn rc;\n-\n-\t/* Retrieve device information */\n-\trc = tf_session_get_device(tfs, &dev);\n-\tif (rc)\n-\t\treturn rc;\n-\n-\t/* Need device max number of elements for the RM QCAPS */\n-\trc = dev->ops->tf_dev_get_max_types(tfp, &max_types);\n-\tif (rc)\n-\t\treturn rc;\n-\n-\tcparms.nitems = max_types;\n-\tcparms.size = sizeof(struct tf_rm_resc_req_entry);\n-\tcparms.alignment = 0;\n-\trc = tfp_calloc(&cparms);\n-\tif (rc)\n-\t\treturn rc;\n-\n-\tquery = (struct tf_rm_resc_req_entry *)cparms.mem_va;\n-\n-\t/* Get Firmware Capabilities */\n-\trc = tf_msg_session_resc_qcaps(tfp,\n-\t\t\t\t       parms->dir,\n-\t\t\t\t       max_types,\n-\t\t\t\t       query,\n-\t\t\t\t       &resv_strategy);\n-\tif (rc)\n-\t\treturn rc;\n-\n-\t/* Process capabilities against DB requirements. However, as a\n-\t * DB can hold elements that are not HCAPI we can reduce the\n-\t * req msg content by removing those out of the request yet\n-\t * the DB holds them all as to give a fast lookup. We can also\n-\t * remove entries where there are no request for elements.\n-\t */\n-\ttf_rm_count_hcapi_reservations(parms->dir,\n-\t\t\t\t       parms->type,\n-\t\t\t\t       parms->cfg,\n-\t\t\t\t       parms->alloc_cnt,\n-\t\t\t\t       parms->num_elements,\n-\t\t\t\t       &hcapi_items);\n-\n-\t/* Alloc request, alignment already set */\n-\tcparms.nitems = (size_t)hcapi_items;\n-\tcparms.size = sizeof(struct tf_rm_resc_req_entry);\n-\trc = tfp_calloc(&cparms);\n-\tif (rc)\n-\t\treturn rc;\n-\treq = (struct tf_rm_resc_req_entry *)cparms.mem_va;\n-\n-\t/* Alloc reservation, alignment and nitems already set */\n-\tcparms.size = sizeof(struct tf_rm_resc_entry);\n-\trc = tfp_calloc(&cparms);\n-\tif (rc)\n-\t\treturn rc;\n-\tresv = (struct tf_rm_resc_entry *)cparms.mem_va;\n-\n-\t/* Build the request */\n-\tfor (i = 0, j = 0; i < parms->num_elements; i++) {\n-\t\t/* Skip any non HCAPI cfg elements */\n-\t\tif (parms->cfg[i].cfg_type == TF_RM_ELEM_CFG_HCAPI) {\n-\t\t\t/* Only perform reservation for entries that\n-\t\t\t * has been requested\n-\t\t\t */\n-\t\t\tif (parms->alloc_cnt[i] == 0)\n-\t\t\t\tcontinue;\n-\n-\t\t\t/* Verify that we can get the full amount\n-\t\t\t * allocated per the qcaps availability.\n-\t\t\t */\n-\t\t\tif (parms->alloc_cnt[i] <=\n-\t\t\t    query[parms->cfg[i].hcapi_type].max) {\n-\t\t\t\treq[j].type = parms->cfg[i].hcapi_type;\n-\t\t\t\treq[j].min = parms->alloc_cnt[i];\n-\t\t\t\treq[j].max = parms->alloc_cnt[i];\n-\t\t\t\tj++;\n-\t\t\t} else {\n-\t\t\t\tTFP_DRV_LOG(ERR,\n-\t\t\t\t\t    \"%s: Resource failure, type:%d\\n\",\n-\t\t\t\t\t    tf_dir_2_str(parms->dir),\n-\t\t\t\t\t    parms->cfg[i].hcapi_type);\n-\t\t\t\tTFP_DRV_LOG(ERR,\n-\t\t\t\t\t\"req:%d, 
avail:%d\\n\",\n-\t\t\t\t\tparms->alloc_cnt[i],\n-\t\t\t\t\tquery[parms->cfg[i].hcapi_type].max);\n-\t\t\t\treturn -EINVAL;\n-\t\t\t}\n-\t\t}\n-\t}\n-\n-\trc = tf_msg_session_resc_alloc(tfp,\n-\t\t\t\t       parms->dir,\n-\t\t\t\t       hcapi_items,\n-\t\t\t\t       req,\n-\t\t\t\t       resv);\n-\tif (rc)\n-\t\treturn rc;\n-\n-\t/* Build the RM DB per the request */\n-\tcparms.nitems = 1;\n-\tcparms.size = sizeof(struct tf_rm_new_db);\n-\trc = tfp_calloc(&cparms);\n-\tif (rc)\n-\t\treturn rc;\n-\trm_db = (void *)cparms.mem_va;\n-\n-\t/* Build the DB within RM DB */\n-\tcparms.nitems = parms->num_elements;\n-\tcparms.size = sizeof(struct tf_rm_element);\n-\trc = tfp_calloc(&cparms);\n-\tif (rc)\n-\t\treturn rc;\n-\trm_db->db = (struct tf_rm_element *)cparms.mem_va;\n-\n-\tdb = rm_db->db;\n-\tfor (i = 0, j = 0; i < parms->num_elements; i++) {\n-\t\tdb[i].cfg_type = parms->cfg[i].cfg_type;\n-\t\tdb[i].hcapi_type = parms->cfg[i].hcapi_type;\n-\n-\t\t/* Skip any non HCAPI types as we didn't include them\n-\t\t * in the reservation request.\n-\t\t */\n-\t\tif (parms->cfg[i].cfg_type != TF_RM_ELEM_CFG_HCAPI)\n-\t\t\tcontinue;\n-\n-\t\t/* If the element didn't request an allocation no need\n-\t\t * to create a pool nor verify if we got a reservation.\n-\t\t */\n-\t\tif (parms->alloc_cnt[i] == 0)\n-\t\t\tcontinue;\n-\n-\t\t/* If the element had requested an allocation and that\n-\t\t * allocation was a success (full amount) then\n-\t\t * allocate the pool.\n-\t\t */\n-\t\tif (parms->alloc_cnt[i] == resv[j].stride) {\n-\t\t\tdb[i].alloc.entry.start = resv[j].start;\n-\t\t\tdb[i].alloc.entry.stride = resv[j].stride;\n-\n-\t\t\tprintf(\"Entry:%d Start:%d Stride:%d\\n\",\n-\t\t\t       i,\n-\t\t\t       resv[j].start,\n-\t\t\t       resv[j].stride);\n-\n-\t\t\t/* Create pool */\n-\t\t\tpool_size = (BITALLOC_SIZEOF(resv[j].stride) /\n-\t\t\t\t     sizeof(struct bitalloc));\n-\t\t\t/* Alloc request, alignment already set */\n-\t\t\tcparms.nitems = pool_size;\n-\t\t\tcparms.size = sizeof(struct bitalloc);\n-\t\t\trc = tfp_calloc(&cparms);\n-\t\t\tif (rc) {\n-\t\t\t\tTFP_DRV_LOG(ERR,\n-\t\t\t\t\t    \"%s: Pool alloc failed, type:%d\\n\",\n-\t\t\t\t\t    tf_dir_2_str(parms->dir),\n-\t\t\t\t\t    db[i].cfg_type);\n-\t\t\t\tgoto fail;\n-\t\t\t}\n-\t\t\tdb[i].pool = (struct bitalloc *)cparms.mem_va;\n-\n-\t\t\trc = ba_init(db[i].pool, resv[j].stride);\n-\t\t\tif (rc) {\n-\t\t\t\tTFP_DRV_LOG(ERR,\n-\t\t\t\t\t    \"%s: Pool init failed, type:%d\\n\",\n-\t\t\t\t\t    tf_dir_2_str(parms->dir),\n-\t\t\t\t\t    db[i].cfg_type);\n-\t\t\t\tgoto fail;\n-\t\t\t}\n-\t\t\tj++;\n-\t\t} else {\n-\t\t\t/* Bail out as we want what we requested for\n-\t\t\t * all elements, not any less.\n-\t\t\t */\n-\t\t\tTFP_DRV_LOG(ERR,\n-\t\t\t\t    \"%s: Alloc failed, type:%d\\n\",\n-\t\t\t\t    tf_dir_2_str(parms->dir),\n-\t\t\t\t    db[i].cfg_type);\n-\t\t\tTFP_DRV_LOG(ERR,\n-\t\t\t\t    \"req:%d, alloc:%d\\n\",\n-\t\t\t\t    parms->alloc_cnt[i],\n-\t\t\t\t    resv[j].stride);\n-\t\t\tgoto fail;\n-\t\t}\n-\t}\n-\n-\trm_db->num_entries = parms->num_elements;\n-\trm_db->dir = parms->dir;\n-\trm_db->type = parms->type;\n-\t*parms->rm_db = (void *)rm_db;\n-\n-\tprintf(\"%s: type:%d num_entries:%d\\n\",\n-\t       tf_dir_2_str(parms->dir),\n-\t       parms->type,\n-\t       i);\n-\n-\ttfp_free((void *)req);\n-\ttfp_free((void *)resv);\n-\n-\treturn 0;\n-\n- fail:\n-\ttfp_free((void *)req);\n-\ttfp_free((void *)resv);\n-\ttfp_free((void *)db->pool);\n-\ttfp_free((void *)db);\n-\ttfp_free((void *)rm_db);\n-\tparms->rm_db = NULL;\n-\n-\treturn 
-EINVAL;\n-}\n-\n-int\n-tf_rm_free_db(struct tf *tfp,\n-\t      struct tf_rm_free_db_parms *parms)\n-{\n-\tint rc;\n-\tint i;\n-\tuint16_t resv_size = 0;\n-\tstruct tf_rm_new_db *rm_db;\n-\tstruct tf_rm_resc_entry *resv;\n-\tbool residuals_found = false;\n-\n-\tTF_CHECK_PARMS2(parms, parms->rm_db);\n-\n-\t/* Device unbind happens when the TF Session is closed and the\n-\t * session ref count is 0. Device unbind will cleanup each of\n-\t * its support modules, i.e. Identifier, thus we're ending up\n-\t * here to close the DB.\n-\t *\n-\t * On TF Session close it is assumed that the session has already\n-\t * cleaned up all its resources, individually, while\n-\t * destroying its flows.\n-\t *\n-\t * To assist in the 'cleanup checking' the DB is checked for any\n-\t * remaining elements and logged if found to be the case.\n-\t *\n-\t * Any such elements will need to be 'cleared' ahead of\n-\t * returning the resources to the HCAPI RM.\n-\t *\n-\t * RM will signal FW to flush the DB resources. FW will\n-\t * perform the invalidation. TF Session close will return the\n-\t * previous allocated elements to the RM and then close the\n-\t * HCAPI RM registration. That then saves several 'free' msgs\n-\t * from being required.\n-\t */\n-\n-\trm_db = (struct tf_rm_new_db *)parms->rm_db;\n-\n-\t/* Check for residuals that the client didn't clean up */\n-\trc = tf_rm_check_residuals(rm_db,\n-\t\t\t\t   &resv_size,\n-\t\t\t\t   &resv,\n-\t\t\t\t   &residuals_found);\n-\tif (rc)\n-\t\treturn rc;\n-\n-\t/* Invalidate any residuals followed by a DB traversal for\n-\t * pool cleanup.\n-\t */\n-\tif (residuals_found) {\n-\t\trc = tf_msg_session_resc_flush(tfp,\n-\t\t\t\t\t       parms->dir,\n-\t\t\t\t\t       resv_size,\n-\t\t\t\t\t       resv);\n-\t\ttfp_free((void *)resv);\n-\t\t/* On failure we still have to cleanup so we can only\n-\t\t * log that FW failed.\n-\t\t */\n-\t\tif (rc)\n-\t\t\tTFP_DRV_LOG(ERR,\n-\t\t\t\t    \"%s: Internal Flush error, module:%s\\n\",\n-\t\t\t\t    tf_dir_2_str(parms->dir),\n-\t\t\t\t    tf_device_module_type_2_str(rm_db->type));\n-\t}\n-\n-\tfor (i = 0; i < rm_db->num_entries; i++)\n-\t\ttfp_free((void *)rm_db->db[i].pool);\n-\n-\ttfp_free((void *)parms->rm_db);\n-\n-\treturn rc;\n-}\n-\n-int\n-tf_rm_allocate(struct tf_rm_allocate_parms *parms)\n-{\n-\tint rc;\n-\tint id;\n-\tuint32_t index;\n-\tstruct tf_rm_new_db *rm_db;\n-\tenum tf_rm_elem_cfg_type cfg_type;\n-\n-\tTF_CHECK_PARMS2(parms, parms->rm_db);\n-\n-\trm_db = (struct tf_rm_new_db *)parms->rm_db;\n-\tcfg_type = rm_db->db[parms->db_index].cfg_type;\n-\n-\t/* Bail out if not controlled by RM */\n-\tif (cfg_type != TF_RM_ELEM_CFG_HCAPI &&\n-\t    cfg_type != TF_RM_ELEM_CFG_PRIVATE)\n-\t\treturn -ENOTSUP;\n-\n-\t/* Bail out if the pool is not valid, should never happen */\n-\tif (rm_db->db[parms->db_index].pool == NULL) {\n-\t\trc = -ENOTSUP;\n-\t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"%s: Invalid pool for this type:%d, rc:%s\\n\",\n-\t\t\t    tf_dir_2_str(rm_db->dir),\n-\t\t\t    parms->db_index,\n-\t\t\t    strerror(-rc));\n-\t\treturn rc;\n-\t}\n-\n-\t/*\n-\t * priority  0: allocate from top of the tcam i.e. 
high\n-\t * priority !0: allocate index from bottom i.e lowest\n-\t */\n-\tif (parms->priority)\n-\t\tid = ba_alloc_reverse(rm_db->db[parms->db_index].pool);\n-\telse\n-\t\tid = ba_alloc(rm_db->db[parms->db_index].pool);\n-\tif (id == BA_FAIL) {\n-\t\trc = -ENOMEM;\n-\t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"%s: Allocation failed, rc:%s\\n\",\n-\t\t\t    tf_dir_2_str(rm_db->dir),\n-\t\t\t    strerror(-rc));\n-\t\treturn rc;\n-\t}\n-\n-\t/* Adjust for any non zero start value */\n-\trc = tf_rm_adjust_index(rm_db->db,\n-\t\t\t\tTF_RM_ADJUST_ADD_BASE,\n-\t\t\t\tparms->db_index,\n-\t\t\t\tid,\n-\t\t\t\t&index);\n-\tif (rc) {\n-\t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"%s: Alloc adjust of base index failed, rc:%s\\n\",\n-\t\t\t    tf_dir_2_str(rm_db->dir),\n-\t\t\t    strerror(-rc));\n-\t\treturn -EINVAL;\n-\t}\n-\n-\t*parms->index = index;\n-\n-\treturn rc;\n-}\n-\n-int\n-tf_rm_free(struct tf_rm_free_parms *parms)\n-{\n-\tint rc;\n-\tuint32_t adj_index;\n-\tstruct tf_rm_new_db *rm_db;\n-\tenum tf_rm_elem_cfg_type cfg_type;\n-\n-\tTF_CHECK_PARMS2(parms, parms->rm_db);\n-\n-\trm_db = (struct tf_rm_new_db *)parms->rm_db;\n-\tcfg_type = rm_db->db[parms->db_index].cfg_type;\n-\n-\t/* Bail out if not controlled by RM */\n-\tif (cfg_type != TF_RM_ELEM_CFG_HCAPI &&\n-\t    cfg_type != TF_RM_ELEM_CFG_PRIVATE)\n-\t\treturn -ENOTSUP;\n-\n-\t/* Bail out if the pool is not valid, should never happen */\n-\tif (rm_db->db[parms->db_index].pool == NULL) {\n-\t\trc = -ENOTSUP;\n-\t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"%s: Invalid pool for this type:%d, rc:%s\\n\",\n-\t\t\t    tf_dir_2_str(rm_db->dir),\n-\t\t\t    parms->db_index,\n-\t\t\t    strerror(-rc));\n-\t\treturn rc;\n-\t}\n-\n-\t/* Adjust for any non zero start value */\n-\trc = tf_rm_adjust_index(rm_db->db,\n-\t\t\t\tTF_RM_ADJUST_RM_BASE,\n-\t\t\t\tparms->db_index,\n-\t\t\t\tparms->index,\n-\t\t\t\t&adj_index);\n-\tif (rc)\n-\t\treturn rc;\n-\n-\trc = ba_free(rm_db->db[parms->db_index].pool, adj_index);\n-\t/* No logging direction matters and that is not available here */\n-\tif (rc)\n-\t\treturn rc;\n-\n-\treturn rc;\n-}\n-\n-int\n-tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms)\n-{\n-\tint rc;\n-\tuint32_t adj_index;\n-\tstruct tf_rm_new_db *rm_db;\n-\tenum tf_rm_elem_cfg_type cfg_type;\n-\n-\tTF_CHECK_PARMS2(parms, parms->rm_db);\n-\n-\trm_db = (struct tf_rm_new_db *)parms->rm_db;\n-\tcfg_type = rm_db->db[parms->db_index].cfg_type;\n-\n-\t/* Bail out if not controlled by RM */\n-\tif (cfg_type != TF_RM_ELEM_CFG_HCAPI &&\n-\t    cfg_type != TF_RM_ELEM_CFG_PRIVATE)\n-\t\treturn -ENOTSUP;\n-\n-\t/* Bail out if the pool is not valid, should never happen */\n-\tif (rm_db->db[parms->db_index].pool == NULL) {\n-\t\trc = -ENOTSUP;\n-\t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"%s: Invalid pool for this type:%d, rc:%s\\n\",\n-\t\t\t    tf_dir_2_str(rm_db->dir),\n-\t\t\t    parms->db_index,\n-\t\t\t    strerror(-rc));\n-\t\treturn rc;\n-\t}\n-\n-\t/* Adjust for any non zero start value */\n-\trc = tf_rm_adjust_index(rm_db->db,\n-\t\t\t\tTF_RM_ADJUST_RM_BASE,\n-\t\t\t\tparms->db_index,\n-\t\t\t\tparms->index,\n-\t\t\t\t&adj_index);\n-\tif (rc)\n-\t\treturn rc;\n-\n-\t*parms->allocated = ba_inuse(rm_db->db[parms->db_index].pool,\n-\t\t\t\t     adj_index);\n-\n-\treturn rc;\n-}\n-\n-int\n-tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms)\n-{\n-\tstruct tf_rm_new_db *rm_db;\n-\tenum tf_rm_elem_cfg_type cfg_type;\n-\n-\tTF_CHECK_PARMS2(parms, parms->rm_db);\n-\n-\trm_db = (struct tf_rm_new_db *)parms->rm_db;\n-\tcfg_type = rm_db->db[parms->db_index].cfg_type;\n-\n-\t/* Bail 
out if not controlled by RM */\n-\tif (cfg_type != TF_RM_ELEM_CFG_HCAPI &&\n-\t    cfg_type != TF_RM_ELEM_CFG_PRIVATE)\n-\t\treturn -ENOTSUP;\n-\n-\tmemcpy(parms->info,\n-\t       &rm_db->db[parms->db_index].alloc,\n-\t       sizeof(struct tf_rm_alloc_info));\n-\n-\treturn 0;\n-}\n-\n-int\n-tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms)\n-{\n-\tstruct tf_rm_new_db *rm_db;\n-\tenum tf_rm_elem_cfg_type cfg_type;\n-\n-\tTF_CHECK_PARMS2(parms, parms->rm_db);\n-\n-\trm_db = (struct tf_rm_new_db *)parms->rm_db;\n-\tcfg_type = rm_db->db[parms->db_index].cfg_type;\n-\n-\t/* Bail out if not controlled by RM */\n-\tif (cfg_type != TF_RM_ELEM_CFG_HCAPI &&\n-\t    cfg_type != TF_RM_ELEM_CFG_PRIVATE)\n-\t\treturn -ENOTSUP;\n-\n-\t*parms->hcapi_type = rm_db->db[parms->db_index].hcapi_type;\n-\n-\treturn 0;\n-}\n-\n-int\n-tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms)\n-{\n-\tint rc = 0;\n-\tstruct tf_rm_new_db *rm_db;\n-\tenum tf_rm_elem_cfg_type cfg_type;\n-\n-\tTF_CHECK_PARMS2(parms, parms->rm_db);\n-\n-\trm_db = (struct tf_rm_new_db *)parms->rm_db;\n-\tcfg_type = rm_db->db[parms->db_index].cfg_type;\n-\n-\t/* Bail out if not controlled by RM */\n-\tif (cfg_type != TF_RM_ELEM_CFG_HCAPI &&\n-\t    cfg_type != TF_RM_ELEM_CFG_PRIVATE)\n-\t\treturn -ENOTSUP;\n-\n-\t/* Bail silently (no logging), if the pool is not valid there\n-\t * was no elements allocated for it.\n-\t */\n-\tif (rm_db->db[parms->db_index].pool == NULL) {\n-\t\t*parms->count = 0;\n-\t\treturn 0;\n-\t}\n-\n-\t*parms->count = ba_inuse_count(rm_db->db[parms->db_index].pool);\n-\n-\treturn rc;\n-\n-}\ndiff --git a/drivers/net/bnxt/tf_core/tf_rm_new.h b/drivers/net/bnxt/tf_core/tf_rm_new.h\ndeleted file mode 100644\nindex 5cb6889..0000000\n--- a/drivers/net/bnxt/tf_core/tf_rm_new.h\n+++ /dev/null\n@@ -1,446 +0,0 @@\n-/* SPDX-License-Identifier: BSD-3-Clause\n- * Copyright(c) 2019-2020 Broadcom\n- * All rights reserved.\n- */\n-\n-#ifndef TF_RM_NEW_H_\n-#define TF_RM_NEW_H_\n-\n-#include \"tf_core.h\"\n-#include \"bitalloc.h\"\n-#include \"tf_device.h\"\n-\n-struct tf;\n-\n-/**\n- * The Resource Manager (RM) module provides basic DB handling for\n- * internal resources. These resources exists within the actual device\n- * and are controlled by the HCAPI Resource Manager running on the\n- * firmware.\n- *\n- * The RM DBs are all intended to be indexed using TF types there for\n- * a lookup requires no additional conversion. The DB configuration\n- * specifies the TF Type to HCAPI Type mapping and it becomes the\n- * responsibility of the DB initialization to handle this static\n- * mapping.\n- *\n- * Accessor functions are providing access to the DB, thus hiding the\n- * implementation.\n- *\n- * The RM DB will work on its initial allocated sizes so the\n- * capability of dynamically growing a particular resource is not\n- * possible. If this capability later becomes a requirement then the\n- * MAX pool size of the Chip needs to be added to the tf_rm_elem_info\n- * structure and several new APIs would need to be added to allow for\n- * growth of a single TF resource type.\n- *\n- * The access functions does not check for NULL pointers as it's a\n- * support module, not called directly.\n- */\n-\n-/**\n- * Resource reservation single entry result. 
Used when accessing HCAPI\n- * RM on the firmware.\n- */\n-struct tf_rm_new_entry {\n-\t/** Starting index of the allocated resource */\n-\tuint16_t start;\n-\t/** Number of allocated elements */\n-\tuint16_t stride;\n-};\n-\n-/**\n- * RM Element configuration enumeration. Used by the Device to\n- * indicate how the RM elements the DB consists off, are to be\n- * configured at time of DB creation. The TF may present types to the\n- * ULP layer that is not controlled by HCAPI within the Firmware.\n- */\n-enum tf_rm_elem_cfg_type {\n-\t/** No configuration */\n-\tTF_RM_ELEM_CFG_NULL,\n-\t/** HCAPI 'controlled', uses a Pool for internal storage */\n-\tTF_RM_ELEM_CFG_HCAPI,\n-\t/** Private thus not HCAPI 'controlled', creates a Pool for storage */\n-\tTF_RM_ELEM_CFG_PRIVATE,\n-\t/**\n-\t * Shared element thus it belongs to a shared FW Session and\n-\t * is not controlled by the Host.\n-\t */\n-\tTF_RM_ELEM_CFG_SHARED,\n-\tTF_RM_TYPE_MAX\n-};\n-\n-/**\n- * RM Reservation strategy enumeration. Type of strategy comes from\n- * the HCAPI RM QCAPS handshake.\n- */\n-enum tf_rm_resc_resv_strategy {\n-\tTF_RM_RESC_RESV_STATIC_PARTITION,\n-\tTF_RM_RESC_RESV_STRATEGY_1,\n-\tTF_RM_RESC_RESV_STRATEGY_2,\n-\tTF_RM_RESC_RESV_STRATEGY_3,\n-\tTF_RM_RESC_RESV_MAX\n-};\n-\n-/**\n- * RM Element configuration structure, used by the Device to configure\n- * how an individual TF type is configured in regard to the HCAPI RM\n- * of same type.\n- */\n-struct tf_rm_element_cfg {\n-\t/**\n-\t * RM Element config controls how the DB for that element is\n-\t * processed.\n-\t */\n-\tenum tf_rm_elem_cfg_type cfg_type;\n-\n-\t/* If a HCAPI to TF type conversion is required then TF type\n-\t * can be added here.\n-\t */\n-\n-\t/**\n-\t * HCAPI RM Type for the element. Used for TF to HCAPI type\n-\t * conversion.\n-\t */\n-\tuint16_t hcapi_type;\n-};\n-\n-/**\n- * Allocation information for a single element.\n- */\n-struct tf_rm_alloc_info {\n-\t/**\n-\t * HCAPI RM allocated range information.\n-\t *\n-\t * NOTE:\n-\t * In case of dynamic allocation support this would have\n-\t * to be changed to linked list of tf_rm_entry instead.\n-\t */\n-\tstruct tf_rm_new_entry entry;\n-};\n-\n-/**\n- * Create RM DB parameters\n- */\n-struct tf_rm_create_db_parms {\n-\t/**\n-\t * [in] Device module type. Used for logging purposes.\n-\t */\n-\tenum tf_device_module_type type;\n-\t/**\n-\t * [in] Receive or transmit direction.\n-\t */\n-\tenum tf_dir dir;\n-\t/**\n-\t * [in] Number of elements.\n-\t */\n-\tuint16_t num_elements;\n-\t/**\n-\t * [in] Parameter structure array. Array size is num_elements.\n-\t */\n-\tstruct tf_rm_element_cfg *cfg;\n-\t/**\n-\t * Resource allocation count array. This array content\n-\t * originates from the tf_session_resources that is passed in\n-\t * on session open.\n-\t * Array size is num_elements.\n-\t */\n-\tuint16_t *alloc_cnt;\n-\t/**\n-\t * [out] RM DB Handle\n-\t */\n-\tvoid **rm_db;\n-};\n-\n-/**\n- * Free RM DB parameters\n- */\n-struct tf_rm_free_db_parms {\n-\t/**\n-\t * [in] Receive or transmit direction\n-\t */\n-\tenum tf_dir dir;\n-\t/**\n-\t * [in] RM DB Handle\n-\t */\n-\tvoid *rm_db;\n-};\n-\n-/**\n- * Allocate RM parameters for a single element\n- */\n-struct tf_rm_allocate_parms {\n-\t/**\n-\t * [in] RM DB Handle\n-\t */\n-\tvoid *rm_db;\n-\t/**\n-\t * [in] DB Index, indicates which DB entry to perform the\n-\t * action on.\n-\t */\n-\tuint16_t db_index;\n-\t/**\n-\t * [in] Pointer to the allocated index in normalized\n-\t * form. 
Normalized means the index has been adjusted,\n-\t * i.e. Full Action Record offsets.\n-\t */\n-\tuint32_t *index;\n-\t/**\n-\t * [in] Priority, indicates the prority of the entry\n-\t * priority  0: allocate from top of the tcam (from index 0\n-\t *              or lowest available index)\n-\t * priority !0: allocate from bottom of the tcam (from highest\n-\t *              available index)\n-\t */\n-\tuint32_t priority;\n-};\n-\n-/**\n- * Free RM parameters for a single element\n- */\n-struct tf_rm_free_parms {\n-\t/**\n-\t * [in] RM DB Handle\n-\t */\n-\tvoid *rm_db;\n-\t/**\n-\t * [in] DB Index, indicates which DB entry to perform the\n-\t * action on.\n-\t */\n-\tuint16_t db_index;\n-\t/**\n-\t * [in] Index to free\n-\t */\n-\tuint16_t index;\n-};\n-\n-/**\n- * Is Allocated parameters for a single element\n- */\n-struct tf_rm_is_allocated_parms {\n-\t/**\n-\t * [in] RM DB Handle\n-\t */\n-\tvoid *rm_db;\n-\t/**\n-\t * [in] DB Index, indicates which DB entry to perform the\n-\t * action on.\n-\t */\n-\tuint16_t db_index;\n-\t/**\n-\t * [in] Index to free\n-\t */\n-\tuint32_t index;\n-\t/**\n-\t * [in] Pointer to flag that indicates the state of the query\n-\t */\n-\tint *allocated;\n-};\n-\n-/**\n- * Get Allocation information for a single element\n- */\n-struct tf_rm_get_alloc_info_parms {\n-\t/**\n-\t * [in] RM DB Handle\n-\t */\n-\tvoid *rm_db;\n-\t/**\n-\t * [in] DB Index, indicates which DB entry to perform the\n-\t * action on.\n-\t */\n-\tuint16_t db_index;\n-\t/**\n-\t * [out] Pointer to the requested allocation information for\n-\t * the specified db_index\n-\t */\n-\tstruct tf_rm_alloc_info *info;\n-};\n-\n-/**\n- * Get HCAPI type parameters for a single element\n- */\n-struct tf_rm_get_hcapi_parms {\n-\t/**\n-\t * [in] RM DB Handle\n-\t */\n-\tvoid *rm_db;\n-\t/**\n-\t * [in] DB Index, indicates which DB entry to perform the\n-\t * action on.\n-\t */\n-\tuint16_t db_index;\n-\t/**\n-\t * [out] Pointer to the hcapi type for the specified db_index\n-\t */\n-\tuint16_t *hcapi_type;\n-};\n-\n-/**\n- * Get InUse count parameters for single element\n- */\n-struct tf_rm_get_inuse_count_parms {\n-\t/**\n-\t * [in] RM DB Handle\n-\t */\n-\tvoid *rm_db;\n-\t/**\n-\t * [in] DB Index, indicates which DB entry to perform the\n-\t * action on.\n-\t */\n-\tuint16_t db_index;\n-\t/**\n-\t * [out] Pointer to the inuse count for the specified db_index\n-\t */\n-\tuint16_t *count;\n-};\n-\n-/**\n- * @page rm Resource Manager\n- *\n- * @ref tf_rm_create_db\n- *\n- * @ref tf_rm_free_db\n- *\n- * @ref tf_rm_allocate\n- *\n- * @ref tf_rm_free\n- *\n- * @ref tf_rm_is_allocated\n- *\n- * @ref tf_rm_get_info\n- *\n- * @ref tf_rm_get_hcapi_type\n- *\n- * @ref tf_rm_get_inuse_count\n- */\n-\n-/**\n- * Creates and fills a Resource Manager (RM) DB with requested\n- * elements. The DB is indexed per the parms structure.\n- *\n- * [in] tfp\n- *   Pointer to TF handle, used for HCAPI communication\n- *\n- * [in] parms\n- *   Pointer to create parameters\n- *\n- * Returns\n- *   - (0) if successful.\n- *   - (-EINVAL) on failure.\n- */\n-/*\n- * NOTE:\n- * - Fail on parameter check\n- * - Fail on DB creation, i.e. 
alloc amount is not possible or validation fails\n- * - Fail on DB creation if DB already exist\n- *\n- * - Allocs local DB\n- * - Does hcapi qcaps\n- * - Does hcapi reservation\n- * - Populates the pool with allocated elements\n- * - Returns handle to the created DB\n- */\n-int tf_rm_create_db(struct tf *tfp,\n-\t\t    struct tf_rm_create_db_parms *parms);\n-\n-/**\n- * Closes the Resource Manager (RM) DB and frees all allocated\n- * resources per the associated database.\n- *\n- * [in] tfp\n- *   Pointer to TF handle, used for HCAPI communication\n- *\n- * [in] parms\n- *   Pointer to free parameters\n- *\n- * Returns\n- *   - (0) if successful.\n- *   - (-EINVAL) on failure.\n- */\n-int tf_rm_free_db(struct tf *tfp,\n-\t\t  struct tf_rm_free_db_parms *parms);\n-\n-/**\n- * Allocates a single element for the type specified, within the DB.\n- *\n- * [in] parms\n- *   Pointer to allocate parameters\n- *\n- * Returns\n- *   - (0) if successful.\n- *   - (-EINVAL) on failure.\n- *   - (-ENOMEM) if pool is empty\n- */\n-int tf_rm_allocate(struct tf_rm_allocate_parms *parms);\n-\n-/**\n- * Free's a single element for the type specified, within the DB.\n- *\n- * [in] parms\n- *   Pointer to free parameters\n- *\n- * Returns\n- *   - (0) if successful.\n- *   - (-EINVAL) on failure.\n- */\n-int tf_rm_free(struct tf_rm_free_parms *parms);\n-\n-/**\n- * Performs an allocation verification check on a specified element.\n- *\n- * [in] parms\n- *   Pointer to is allocated parameters\n- *\n- * Returns\n- *   - (0) if successful.\n- *   - (-EINVAL) on failure.\n- */\n-/*\n- * NOTE:\n- *  - If pool is set to Chip MAX, then the query index must be checked\n- *    against the allocated range and query index must be allocated as well.\n- *  - If pool is allocated size only, then check if query index is allocated.\n- */\n-int tf_rm_is_allocated(struct tf_rm_is_allocated_parms *parms);\n-\n-/**\n- * Retrieves an elements allocation information from the Resource\n- * Manager (RM) DB.\n- *\n- * [in] parms\n- *   Pointer to get info parameters\n- *\n- * Returns\n- *   - (0) if successful.\n- *   - (-EINVAL) on failure.\n- */\n-int tf_rm_get_info(struct tf_rm_get_alloc_info_parms *parms);\n-\n-/**\n- * Performs a lookup in the Resource Manager DB and retrives the\n- * requested HCAPI RM type.\n- *\n- * [in] parms\n- *   Pointer to get hcapi parameters\n- *\n- * Returns\n- *   - (0) if successful.\n- *   - (-EINVAL) on failure.\n- */\n-int tf_rm_get_hcapi_type(struct tf_rm_get_hcapi_parms *parms);\n-\n-/**\n- * Performs a lookup in the Resource Manager DB and retrives the\n- * requested HCAPI RM type inuse count.\n- *\n- * [in] parms\n- *   Pointer to get inuse parameters\n- *\n- * Returns\n- *   - (0) if successful.\n- *   - (-EINVAL) on failure.\n- */\n-int tf_rm_get_inuse_count(struct tf_rm_get_inuse_count_parms *parms);\n-\n-#endif /* TF_RM_NEW_H_ */\ndiff --git a/drivers/net/bnxt/tf_core/tf_session.h b/drivers/net/bnxt/tf_core/tf_session.h\nindex 705bb09..e4472ed 100644\n--- a/drivers/net/bnxt/tf_core/tf_session.h\n+++ b/drivers/net/bnxt/tf_core/tf_session.h\n@@ -14,6 +14,7 @@\n #include \"tf_device.h\"\n #include \"tf_rm.h\"\n #include \"tf_tbl.h\"\n+#include \"tf_resources.h\"\n #include \"stack.h\"\n \n /**\n@@ -43,7 +44,8 @@\n #define TF_SESSION_EM_POOL_SIZE \\\n \t(TF_SESSION_TOTAL_FN_BLOCKS / TF_SESSION_EM_ENTRY_SIZE)\n \n-/** Session\n+/**\n+ * Session\n  *\n  * Shared memory containing private TruFlow session information.\n  * Through this structure the session can keep track of resource\n@@ -99,216 
+101,6 @@ struct tf_session {\n \t/** Device handle */\n \tstruct tf_dev_info dev;\n \n-\t/** Session HW and SRAM resources */\n-\tstruct tf_rm_db resc;\n-\n-\t/* Session HW resource pools */\n-\n-\t/** RX L2 CTXT TCAM Pool */\n-\tBITALLOC_INST(TF_L2_CTXT_TCAM_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);\n-\t/** TX L2 CTXT TCAM Pool */\n-\tBITALLOC_INST(TF_L2_CTXT_TCAM_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);\n-\n-\t/** RX Profile Func Pool */\n-\tBITALLOC_INST(TF_PROF_FUNC_POOL_NAME_RX, TF_NUM_PROF_FUNC);\n-\t/** TX Profile Func Pool */\n-\tBITALLOC_INST(TF_PROF_FUNC_POOL_NAME_TX, TF_NUM_PROF_FUNC);\n-\n-\t/** RX Profile TCAM Pool */\n-\tBITALLOC_INST(TF_PROF_TCAM_POOL_NAME_RX, TF_NUM_PROF_TCAM);\n-\t/** TX Profile TCAM Pool */\n-\tBITALLOC_INST(TF_PROF_TCAM_POOL_NAME_TX, TF_NUM_PROF_TCAM);\n-\n-\t/** RX EM Profile ID Pool */\n-\tBITALLOC_INST(TF_EM_PROF_ID_POOL_NAME_RX, TF_NUM_EM_PROF_ID);\n-\t/** TX EM Key Pool */\n-\tBITALLOC_INST(TF_EM_PROF_ID_POOL_NAME_TX, TF_NUM_EM_PROF_ID);\n-\n-\t/** RX WC Profile Pool */\n-\tBITALLOC_INST(TF_WC_TCAM_PROF_ID_POOL_NAME_RX, TF_NUM_WC_PROF_ID);\n-\t/** TX WC Profile Pool */\n-\tBITALLOC_INST(TF_WC_TCAM_PROF_ID_POOL_NAME_TX, TF_NUM_WC_PROF_ID);\n-\n-\t/* TBD, how do we want to handle EM records ?*/\n-\t/* EM Records are not controlled by way of a pool */\n-\n-\t/** RX WC TCAM Pool */\n-\tBITALLOC_INST(TF_WC_TCAM_POOL_NAME_RX, TF_NUM_WC_TCAM_ROW);\n-\t/** TX WC TCAM Pool */\n-\tBITALLOC_INST(TF_WC_TCAM_POOL_NAME_TX, TF_NUM_WC_TCAM_ROW);\n-\n-\t/** RX Meter Profile Pool */\n-\tBITALLOC_INST(TF_METER_PROF_POOL_NAME_RX, TF_NUM_METER_PROF);\n-\t/** TX Meter Profile Pool */\n-\tBITALLOC_INST(TF_METER_PROF_POOL_NAME_TX, TF_NUM_METER_PROF);\n-\n-\t/** RX Meter Instance Pool */\n-\tBITALLOC_INST(TF_METER_INST_POOL_NAME_RX, TF_NUM_METER);\n-\t/** TX Meter Pool */\n-\tBITALLOC_INST(TF_METER_INST_POOL_NAME_TX, TF_NUM_METER);\n-\n-\t/** RX Mirror Configuration Pool*/\n-\tBITALLOC_INST(TF_MIRROR_POOL_NAME_RX, TF_NUM_MIRROR);\n-\t/** RX Mirror Configuration Pool */\n-\tBITALLOC_INST(TF_MIRROR_POOL_NAME_TX, TF_NUM_MIRROR);\n-\n-\t/** RX UPAR Pool */\n-\tBITALLOC_INST(TF_UPAR_POOL_NAME_RX, TF_NUM_UPAR);\n-\t/** TX UPAR Pool */\n-\tBITALLOC_INST(TF_UPAR_POOL_NAME_TX, TF_NUM_UPAR);\n-\n-\t/** RX SP TCAM Pool */\n-\tBITALLOC_INST(TF_SP_TCAM_POOL_NAME_RX, TF_NUM_SP_TCAM);\n-\t/** TX SP TCAM Pool */\n-\tBITALLOC_INST(TF_SP_TCAM_POOL_NAME_TX, TF_NUM_SP_TCAM);\n-\n-\t/** RX FKB Pool */\n-\tBITALLOC_INST(TF_FKB_POOL_NAME_RX, TF_NUM_FKB);\n-\t/** TX FKB Pool */\n-\tBITALLOC_INST(TF_FKB_POOL_NAME_TX, TF_NUM_FKB);\n-\n-\t/** RX Table Scope Pool */\n-\tBITALLOC_INST(TF_TBL_SCOPE_POOL_NAME_RX, TF_NUM_TBL_SCOPE);\n-\t/** TX Table Scope Pool */\n-\tBITALLOC_INST(TF_TBL_SCOPE_POOL_NAME_TX, TF_NUM_TBL_SCOPE);\n-\n-\t/** RX L2 Func Pool */\n-\tBITALLOC_INST(TF_L2_FUNC_POOL_NAME_RX, TF_NUM_L2_FUNC);\n-\t/** TX L2 Func Pool */\n-\tBITALLOC_INST(TF_L2_FUNC_POOL_NAME_TX, TF_NUM_L2_FUNC);\n-\n-\t/** RX Epoch0 Pool */\n-\tBITALLOC_INST(TF_EPOCH0_POOL_NAME_RX, TF_NUM_EPOCH0);\n-\t/** TX Epoch0 Pool */\n-\tBITALLOC_INST(TF_EPOCH0_POOL_NAME_TX, TF_NUM_EPOCH0);\n-\n-\t/** TX Epoch1 Pool */\n-\tBITALLOC_INST(TF_EPOCH1_POOL_NAME_RX, TF_NUM_EPOCH1);\n-\t/** TX Epoch1 Pool */\n-\tBITALLOC_INST(TF_EPOCH1_POOL_NAME_TX, TF_NUM_EPOCH1);\n-\n-\t/** RX MetaData Profile Pool */\n-\tBITALLOC_INST(TF_METADATA_POOL_NAME_RX, TF_NUM_METADATA);\n-\t/** TX MetaData Profile Pool */\n-\tBITALLOC_INST(TF_METADATA_POOL_NAME_TX, TF_NUM_METADATA);\n-\n-\t/** RX Connection Tracking State Pool 
*/\n-\tBITALLOC_INST(TF_CT_STATE_POOL_NAME_RX, TF_NUM_CT_STATE);\n-\t/** TX Connection Tracking State Pool */\n-\tBITALLOC_INST(TF_CT_STATE_POOL_NAME_TX, TF_NUM_CT_STATE);\n-\n-\t/** RX Range Profile Pool */\n-\tBITALLOC_INST(TF_RANGE_PROF_POOL_NAME_RX, TF_NUM_RANGE_PROF);\n-\t/** TX Range Profile Pool */\n-\tBITALLOC_INST(TF_RANGE_PROF_POOL_NAME_TX, TF_NUM_RANGE_PROF);\n-\n-\t/** RX Range Pool */\n-\tBITALLOC_INST(TF_RANGE_ENTRY_POOL_NAME_RX, TF_NUM_RANGE_ENTRY);\n-\t/** TX Range Pool */\n-\tBITALLOC_INST(TF_RANGE_ENTRY_POOL_NAME_TX, TF_NUM_RANGE_ENTRY);\n-\n-\t/** RX LAG Pool */\n-\tBITALLOC_INST(TF_LAG_ENTRY_POOL_NAME_RX, TF_NUM_LAG_ENTRY);\n-\t/** TX LAG Pool */\n-\tBITALLOC_INST(TF_LAG_ENTRY_POOL_NAME_TX, TF_NUM_LAG_ENTRY);\n-\n-\t/* Session SRAM pools */\n-\n-\t/** RX Full Action Record Pool */\n-\tBITALLOC_INST(TF_SRAM_FULL_ACTION_POOL_NAME_RX,\n-\t\t      TF_RSVD_SRAM_FULL_ACTION_RX);\n-\t/** TX Full Action Record Pool */\n-\tBITALLOC_INST(TF_SRAM_FULL_ACTION_POOL_NAME_TX,\n-\t\t      TF_RSVD_SRAM_FULL_ACTION_TX);\n-\n-\t/** RX Multicast Group Pool, only RX is supported */\n-\tBITALLOC_INST(TF_SRAM_MCG_POOL_NAME_RX,\n-\t\t      TF_RSVD_SRAM_MCG_RX);\n-\n-\t/** RX Encap 8B Pool*/\n-\tBITALLOC_INST(TF_SRAM_ENCAP_8B_POOL_NAME_RX,\n-\t\t      TF_RSVD_SRAM_ENCAP_8B_RX);\n-\t/** TX Encap 8B Pool*/\n-\tBITALLOC_INST(TF_SRAM_ENCAP_8B_POOL_NAME_TX,\n-\t\t      TF_RSVD_SRAM_ENCAP_8B_TX);\n-\n-\t/** RX Encap 16B Pool */\n-\tBITALLOC_INST(TF_SRAM_ENCAP_16B_POOL_NAME_RX,\n-\t\t      TF_RSVD_SRAM_ENCAP_16B_RX);\n-\t/** TX Encap 16B Pool */\n-\tBITALLOC_INST(TF_SRAM_ENCAP_16B_POOL_NAME_TX,\n-\t\t      TF_RSVD_SRAM_ENCAP_16B_TX);\n-\n-\t/** TX Encap 64B Pool, only TX is supported */\n-\tBITALLOC_INST(TF_SRAM_ENCAP_64B_POOL_NAME_TX,\n-\t\t      TF_RSVD_SRAM_ENCAP_64B_TX);\n-\n-\t/** RX Source Properties SMAC Pool */\n-\tBITALLOC_INST(TF_SRAM_SP_SMAC_POOL_NAME_RX,\n-\t\t      TF_RSVD_SRAM_SP_SMAC_RX);\n-\t/** TX Source Properties SMAC Pool */\n-\tBITALLOC_INST(TF_SRAM_SP_SMAC_POOL_NAME_TX,\n-\t\t      TF_RSVD_SRAM_SP_SMAC_TX);\n-\n-\t/** TX Source Properties SMAC IPv4 Pool, only TX is supported */\n-\tBITALLOC_INST(TF_SRAM_SP_SMAC_IPV4_POOL_NAME_TX,\n-\t\t      TF_RSVD_SRAM_SP_SMAC_IPV4_TX);\n-\n-\t/** TX Source Properties SMAC IPv6 Pool, only TX is supported */\n-\tBITALLOC_INST(TF_SRAM_SP_SMAC_IPV6_POOL_NAME_TX,\n-\t\t      TF_RSVD_SRAM_SP_SMAC_IPV6_TX);\n-\n-\t/** RX Counter 64B Pool */\n-\tBITALLOC_INST(TF_SRAM_STATS_64B_POOL_NAME_RX,\n-\t\t      TF_RSVD_SRAM_COUNTER_64B_RX);\n-\t/** TX Counter 64B Pool */\n-\tBITALLOC_INST(TF_SRAM_STATS_64B_POOL_NAME_TX,\n-\t\t      TF_RSVD_SRAM_COUNTER_64B_TX);\n-\n-\t/** RX NAT Source Port Pool */\n-\tBITALLOC_INST(TF_SRAM_NAT_SPORT_POOL_NAME_RX,\n-\t\t      TF_RSVD_SRAM_NAT_SPORT_RX);\n-\t/** TX NAT Source Port Pool */\n-\tBITALLOC_INST(TF_SRAM_NAT_SPORT_POOL_NAME_TX,\n-\t\t      TF_RSVD_SRAM_NAT_SPORT_TX);\n-\n-\t/** RX NAT Destination Port Pool */\n-\tBITALLOC_INST(TF_SRAM_NAT_DPORT_POOL_NAME_RX,\n-\t\t      TF_RSVD_SRAM_NAT_DPORT_RX);\n-\t/** TX NAT Destination Port Pool */\n-\tBITALLOC_INST(TF_SRAM_NAT_DPORT_POOL_NAME_TX,\n-\t\t      TF_RSVD_SRAM_NAT_DPORT_TX);\n-\n-\t/** RX NAT Source IPv4 Pool */\n-\tBITALLOC_INST(TF_SRAM_NAT_S_IPV4_POOL_NAME_RX,\n-\t\t      TF_RSVD_SRAM_NAT_S_IPV4_RX);\n-\t/** TX NAT Source IPv4 Pool */\n-\tBITALLOC_INST(TF_SRAM_NAT_S_IPV4_POOL_NAME_TX,\n-\t\t      TF_RSVD_SRAM_NAT_S_IPV4_TX);\n-\n-\t/** RX NAT Destination IPv4 Pool */\n-\tBITALLOC_INST(TF_SRAM_NAT_D_IPV4_POOL_NAME_RX,\n-\t\t      TF_RSVD_SRAM_NAT_D_IPV4_RX);\n-\t/** 
TX NAT IPv4 Destination Pool */\n-\tBITALLOC_INST(TF_SRAM_NAT_D_IPV4_POOL_NAME_TX,\n-\t\t      TF_RSVD_SRAM_NAT_D_IPV4_TX);\n-\n-\t/**\n-\t * Pools not allocated from HCAPI RM\n-\t */\n-\n-\t/** RX L2 Ctx Remap ID  Pool */\n-\tBITALLOC_INST(TF_L2_CTXT_REMAP_POOL_NAME_RX, TF_NUM_L2_CTXT_TCAM);\n-\t/** TX L2 Ctx Remap ID Pool */\n-\tBITALLOC_INST(TF_L2_CTXT_REMAP_POOL_NAME_TX, TF_NUM_L2_CTXT_TCAM);\n-\n-\t/** CRC32 seed table */\n-#define TF_LKUP_SEED_MEM_SIZE 512\n-\tuint32_t lkup_em_seed_mem[TF_DIR_MAX][TF_LKUP_SEED_MEM_SIZE];\n-\n-\t/** Lookup3 init values */\n-\tuint32_t lkup_lkup3_init_cfg[TF_DIR_MAX];\n-\n \t/** Table scope array */\n \tstruct tf_tbl_scope_cb tbl_scopes[TF_NUM_TBL_SCOPE];\n \ndiff --git a/drivers/net/bnxt/tf_core/tf_tbl.c b/drivers/net/bnxt/tf_core/tf_tbl.c\nindex b5ce860..6303033 100644\n--- a/drivers/net/bnxt/tf_core/tf_tbl.c\n+++ b/drivers/net/bnxt/tf_core/tf_tbl.c\n@@ -3,176 +3,412 @@\n  * All rights reserved.\n  */\n \n-/* Truflow Table APIs and supporting code */\n-\n-#include <stdio.h>\n-#include <string.h>\n-#include <stdbool.h>\n-#include <math.h>\n-#include <sys/param.h>\n #include <rte_common.h>\n-#include <rte_errno.h>\n-#include \"hsi_struct_def_dpdk.h\"\n \n-#include \"tf_core.h\"\n+#include \"tf_tbl.h\"\n+#include \"tf_common.h\"\n+#include \"tf_rm.h\"\n #include \"tf_util.h\"\n-#include \"tf_em.h\"\n #include \"tf_msg.h\"\n #include \"tfp.h\"\n-#include \"hwrm_tf.h\"\n-#include \"bnxt.h\"\n-#include \"tf_resources.h\"\n-#include \"tf_rm.h\"\n-#include \"stack.h\"\n-#include \"tf_common.h\"\n+\n+struct tf;\n+\n+/**\n+ * Table DBs.\n+ */\n+static void *tbl_db[TF_DIR_MAX];\n+\n+/**\n+ * Table Shadow DBs\n+ */\n+/* static void *shadow_tbl_db[TF_DIR_MAX]; */\n+\n+/**\n+ * Init flag, set on bind and cleared on unbind\n+ */\n+static uint8_t init;\n \n /**\n- * Internal function to get a Table Entry. 
Supports all Table Types\n- * except the TF_TBL_TYPE_EXT as that is handled as a table scope.\n- *\n- * [in] tfp\n- *   Pointer to TruFlow handle\n- *\n- * [in] parms\n- *   Pointer to input parameters\n- *\n- * Returns:\n- *   0       - Success\n- *   -EINVAL - Parameter error\n+ * Shadow init flag, set on bind and cleared on unbind\n  */\n-static int\n-tf_bulk_get_tbl_entry_internal(struct tf *tfp,\n-\t\t\t  struct tf_bulk_get_tbl_entry_parms *parms)\n+/* static uint8_t shadow_init; */\n+\n+int\n+tf_tbl_bind(struct tf *tfp,\n+\t    struct tf_tbl_cfg_parms *parms)\n+{\n+\tint rc;\n+\tint i;\n+\tstruct tf_rm_create_db_parms db_cfg = { 0 };\n+\n+\tTF_CHECK_PARMS2(tfp, parms);\n+\n+\tif (init) {\n+\t\tTFP_DRV_LOG(ERR,\n+\t\t\t    \"Table DB already initialized\\n\");\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tdb_cfg.num_elements = parms->num_elements;\n+\tdb_cfg.type = TF_DEVICE_MODULE_TYPE_TABLE;\n+\tdb_cfg.num_elements = parms->num_elements;\n+\tdb_cfg.cfg = parms->cfg;\n+\n+\tfor (i = 0; i < TF_DIR_MAX; i++) {\n+\t\tdb_cfg.dir = i;\n+\t\tdb_cfg.alloc_cnt = parms->resources->tbl_cnt[i].cnt;\n+\t\tdb_cfg.rm_db = &tbl_db[i];\n+\t\trc = tf_rm_create_db(tfp, &db_cfg);\n+\t\tif (rc) {\n+\t\t\tTFP_DRV_LOG(ERR,\n+\t\t\t\t    \"%s: Table DB creation failed\\n\",\n+\t\t\t\t    tf_dir_2_str(i));\n+\n+\t\t\treturn rc;\n+\t\t}\n+\t}\n+\n+\tinit = 1;\n+\n+\tprintf(\"Table Type - initialized\\n\");\n+\n+\treturn 0;\n+}\n+\n+int\n+tf_tbl_unbind(struct tf *tfp)\n {\n \tint rc;\n-\tint id;\n-\tuint32_t index;\n-\tstruct bitalloc *session_pool;\n-\tstruct tf_session *tfs = (struct tf_session *)(tfp->session->core_data);\n-\n-\t/* Lookup the pool using the table type of the element */\n-\trc = tf_rm_lookup_tbl_type_pool(tfs,\n-\t\t\t\t\tparms->dir,\n-\t\t\t\t\tparms->type,\n-\t\t\t\t\t&session_pool);\n-\t/* Error logging handled by tf_rm_lookup_tbl_type_pool */\n+\tint i;\n+\tstruct tf_rm_free_db_parms fparms = { 0 };\n+\n+\tTF_CHECK_PARMS1(tfp);\n+\n+\t/* Bail if nothing has been initialized */\n+\tif (!init) {\n+\t\tTFP_DRV_LOG(INFO,\n+\t\t\t    \"No Table DBs created\\n\");\n+\t\treturn 0;\n+\t}\n+\n+\tfor (i = 0; i < TF_DIR_MAX; i++) {\n+\t\tfparms.dir = i;\n+\t\tfparms.rm_db = tbl_db[i];\n+\t\trc = tf_rm_free_db(tfp, &fparms);\n+\t\tif (rc)\n+\t\t\treturn rc;\n+\n+\t\ttbl_db[i] = NULL;\n+\t}\n+\n+\tinit = 0;\n+\n+\treturn 0;\n+}\n+\n+int\n+tf_tbl_alloc(struct tf *tfp __rte_unused,\n+\t     struct tf_tbl_alloc_parms *parms)\n+{\n+\tint rc;\n+\tuint32_t idx;\n+\tstruct tf_rm_allocate_parms aparms = { 0 };\n+\n+\tTF_CHECK_PARMS2(tfp, parms);\n+\n+\tif (!init) {\n+\t\tTFP_DRV_LOG(ERR,\n+\t\t\t    \"%s: No Table DBs created\\n\",\n+\t\t\t    tf_dir_2_str(parms->dir));\n+\t\treturn -EINVAL;\n+\t}\n+\n+\t/* Allocate requested element */\n+\taparms.rm_db = tbl_db[parms->dir];\n+\taparms.db_index = parms->type;\n+\taparms.index = &idx;\n+\trc = tf_rm_allocate(&aparms);\n+\tif (rc) {\n+\t\tTFP_DRV_LOG(ERR,\n+\t\t\t    \"%s: Failed allocate, type:%d\\n\",\n+\t\t\t    tf_dir_2_str(parms->dir),\n+\t\t\t    parms->type);\n+\t\treturn rc;\n+\t}\n+\n+\t*parms->idx = idx;\n+\n+\treturn 0;\n+}\n+\n+int\n+tf_tbl_free(struct tf *tfp __rte_unused,\n+\t    struct tf_tbl_free_parms *parms)\n+{\n+\tint rc;\n+\tstruct tf_rm_is_allocated_parms aparms = { 0 };\n+\tstruct tf_rm_free_parms fparms = { 0 };\n+\tint allocated = 0;\n+\n+\tTF_CHECK_PARMS2(tfp, parms);\n+\n+\tif (!init) {\n+\t\tTFP_DRV_LOG(ERR,\n+\t\t\t    \"%s: No Table DBs created\\n\",\n+\t\t\t    tf_dir_2_str(parms->dir));\n+\t\treturn -EINVAL;\n+\t}\n+\n+\t/* Check if 
element is in use */\n+\taparms.rm_db = tbl_db[parms->dir];\n+\taparms.db_index = parms->type;\n+\taparms.index = parms->idx;\n+\taparms.allocated = &allocated;\n+\trc = tf_rm_is_allocated(&aparms);\n \tif (rc)\n \t\treturn rc;\n \n-\tindex = parms->starting_idx;\n+\tif (!allocated) {\n+\t\tTFP_DRV_LOG(ERR,\n+\t\t\t    \"%s: Entry already free, type:%d, index:%d\\n\",\n+\t\t\t    tf_dir_2_str(parms->dir),\n+\t\t\t    parms->type,\n+\t\t\t    parms->idx);\n+\t\treturn rc;\n+\t}\n \n-\t/*\n-\t * Adjust the returned index/offset as there is no guarantee\n-\t * that the start is 0 at time of RM allocation\n-\t */\n-\ttf_rm_convert_index(tfs,\n-\t\t\t    parms->dir,\n+\t/* Free requested element */\n+\tfparms.rm_db = tbl_db[parms->dir];\n+\tfparms.db_index = parms->type;\n+\tfparms.index = parms->idx;\n+\trc = tf_rm_free(&fparms);\n+\tif (rc) {\n+\t\tTFP_DRV_LOG(ERR,\n+\t\t\t    \"%s: Free failed, type:%d, index:%d\\n\",\n+\t\t\t    tf_dir_2_str(parms->dir),\n \t\t\t    parms->type,\n-\t\t\t    TF_RM_CONVERT_RM_BASE,\n-\t\t\t    parms->starting_idx,\n-\t\t\t    &index);\n+\t\t\t    parms->idx);\n+\t\treturn rc;\n+\t}\n+\n+\treturn 0;\n+}\n+\n+int\n+tf_tbl_alloc_search(struct tf *tfp __rte_unused,\n+\t\t    struct tf_tbl_alloc_search_parms *parms __rte_unused)\n+{\n+\treturn 0;\n+}\n+\n+int\n+tf_tbl_set(struct tf *tfp,\n+\t   struct tf_tbl_set_parms *parms)\n+{\n+\tint rc;\n+\tint allocated = 0;\n+\tuint16_t hcapi_type;\n+\tstruct tf_rm_is_allocated_parms aparms = { 0 };\n+\tstruct tf_rm_get_hcapi_parms hparms = { 0 };\n+\n+\tTF_CHECK_PARMS3(tfp, parms, parms->data);\n+\n+\tif (!init) {\n+\t\tTFP_DRV_LOG(ERR,\n+\t\t\t    \"%s: No Table DBs created\\n\",\n+\t\t\t    tf_dir_2_str(parms->dir));\n+\t\treturn -EINVAL;\n+\t}\n \n \t/* Verify that the entry has been previously allocated */\n-\tid = ba_inuse(session_pool, index);\n-\tif (id != 1) {\n+\taparms.rm_db = tbl_db[parms->dir];\n+\taparms.db_index = parms->type;\n+\taparms.index = parms->idx;\n+\taparms.allocated = &allocated;\n+\trc = tf_rm_is_allocated(&aparms);\n+\tif (rc)\n+\t\treturn rc;\n+\n+\tif (!allocated) {\n \t\tTFP_DRV_LOG(ERR,\n-\t\t   \"%s, Invalid or not allocated index, type:%d, starting_idx:%d\\n\",\n+\t\t   \"%s, Invalid or not allocated index, type:%d, idx:%d\\n\",\n \t\t   tf_dir_2_str(parms->dir),\n \t\t   parms->type,\n-\t\t   index);\n+\t\t   parms->idx);\n \t\treturn -EINVAL;\n \t}\n \n-\t/* Get the entry */\n-\trc = tf_msg_bulk_get_tbl_entry(tfp, parms);\n+\t/* Set the entry */\n+\thparms.rm_db = tbl_db[parms->dir];\n+\thparms.db_index = parms->type;\n+\thparms.hcapi_type = &hcapi_type;\n+\trc = tf_rm_get_hcapi_type(&hparms);\n \tif (rc) {\n \t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"%s, Bulk get failed, type:%d, rc:%s\\n\",\n+\t\t\t    \"%s, Failed type lookup, type:%d, rc:%s\\n\",\n \t\t\t    tf_dir_2_str(parms->dir),\n \t\t\t    parms->type,\n \t\t\t    strerror(-rc));\n+\t\treturn rc;\n \t}\n \n-\treturn rc;\n+\trc = tf_msg_set_tbl_entry(tfp,\n+\t\t\t\t  parms->dir,\n+\t\t\t\t  hcapi_type,\n+\t\t\t\t  parms->data_sz_in_bytes,\n+\t\t\t\t  parms->data,\n+\t\t\t\t  parms->idx);\n+\tif (rc) {\n+\t\tTFP_DRV_LOG(ERR,\n+\t\t\t    \"%s, Set failed, type:%d, rc:%s\\n\",\n+\t\t\t    tf_dir_2_str(parms->dir),\n+\t\t\t    parms->type,\n+\t\t\t    strerror(-rc));\n+\t}\n+\n+\treturn 0;\n }\n \n-#if (TF_SHADOW == 1)\n-/**\n- * Allocate Tbl entry from the Shadow DB. Shadow DB is searched for\n- * the requested entry. 
If found the ref count is incremente and\n- * returned.\n- *\n- * [in] tfs\n- *   Pointer to session\n- * [in] parms\n- *   Allocation parameters\n- *\n- * Return:\n- *  0       - Success, entry found and ref count incremented\n- *  -ENOENT - Failure, entry not found\n- */\n-static int\n-tf_alloc_tbl_entry_shadow(struct tf_session *tfs __rte_unused,\n-\t\t\t  struct tf_alloc_tbl_entry_parms *parms __rte_unused)\n+int\n+tf_tbl_get(struct tf *tfp,\n+\t   struct tf_tbl_get_parms *parms)\n {\n-\tTFP_DRV_LOG(ERR,\n-\t\t    \"%s, Entry Alloc with search not supported\\n\",\n-\t\t    tf_dir_2_str(parms->dir));\n+\tint rc;\n+\tuint16_t hcapi_type;\n+\tint allocated = 0;\n+\tstruct tf_rm_is_allocated_parms aparms = { 0 };\n+\tstruct tf_rm_get_hcapi_parms hparms = { 0 };\n \n-\treturn -EOPNOTSUPP;\n-}\n+\tTF_CHECK_PARMS3(tfp, parms, parms->data);\n \n-/**\n- * Free Tbl entry from the Shadow DB. Shadow DB is searched for\n- * the requested entry. If found the ref count is decremente and\n- * new ref_count returned.\n- *\n- * [in] tfs\n- *   Pointer to session\n- * [in] parms\n- *   Allocation parameters\n- *\n- * Return:\n- *  0       - Success, entry found and ref count decremented\n- *  -ENOENT - Failure, entry not found\n- */\n-static int\n-tf_free_tbl_entry_shadow(struct tf_session *tfs,\n-\t\t\t struct tf_free_tbl_entry_parms *parms)\n-{\n-\tTFP_DRV_LOG(ERR,\n-\t\t    \"%s, Entry Free with search not supported\\n\",\n-\t\t    tf_dir_2_str(parms->dir));\n+\tif (!init) {\n+\t\tTFP_DRV_LOG(ERR,\n+\t\t\t    \"%s: No Table DBs created\\n\",\n+\t\t\t    tf_dir_2_str(parms->dir));\n+\t\treturn -EINVAL;\n+\t}\n+\n+\t/* Verify that the entry has been previously allocated */\n+\taparms.rm_db = tbl_db[parms->dir];\n+\taparms.db_index = parms->type;\n+\taparms.index = parms->idx;\n+\taparms.allocated = &allocated;\n+\trc = tf_rm_is_allocated(&aparms);\n+\tif (rc)\n+\t\treturn rc;\n+\n+\tif (!allocated) {\n+\t\tTFP_DRV_LOG(ERR,\n+\t\t   \"%s, Invalid or not allocated index, type:%d, idx:%d\\n\",\n+\t\t   tf_dir_2_str(parms->dir),\n+\t\t   parms->type,\n+\t\t   parms->idx);\n+\t\treturn -EINVAL;\n+\t}\n+\n+\t/* Set the entry */\n+\thparms.rm_db = tbl_db[parms->dir];\n+\thparms.db_index = parms->type;\n+\thparms.hcapi_type = &hcapi_type;\n+\trc = tf_rm_get_hcapi_type(&hparms);\n+\tif (rc) {\n+\t\tTFP_DRV_LOG(ERR,\n+\t\t\t    \"%s, Failed type lookup, type:%d, rc:%s\\n\",\n+\t\t\t    tf_dir_2_str(parms->dir),\n+\t\t\t    parms->type,\n+\t\t\t    strerror(-rc));\n+\t\treturn rc;\n+\t}\n \n-\treturn -EOPNOTSUPP;\n+\t/* Get the entry */\n+\trc = tf_msg_get_tbl_entry(tfp,\n+\t\t\t\t  parms->dir,\n+\t\t\t\t  hcapi_type,\n+\t\t\t\t  parms->data_sz_in_bytes,\n+\t\t\t\t  parms->data,\n+\t\t\t\t  parms->idx);\n+\tif (rc) {\n+\t\tTFP_DRV_LOG(ERR,\n+\t\t\t    \"%s, Get failed, type:%d, rc:%s\\n\",\n+\t\t\t    tf_dir_2_str(parms->dir),\n+\t\t\t    parms->type,\n+\t\t\t    strerror(-rc));\n+\t}\n+\n+\treturn 0;\n }\n-#endif /* TF_SHADOW */\n \n-/* API defined in tf_core.h */\n int\n-tf_bulk_get_tbl_entry(struct tf *tfp,\n-\t\t struct tf_bulk_get_tbl_entry_parms *parms)\n+tf_tbl_bulk_get(struct tf *tfp,\n+\t\tstruct tf_tbl_get_bulk_parms *parms)\n {\n-\tint rc = 0;\n+\tint rc;\n+\tint i;\n+\tuint16_t hcapi_type;\n+\tuint32_t idx;\n+\tint allocated = 0;\n+\tstruct tf_rm_is_allocated_parms aparms = { 0 };\n+\tstruct tf_rm_get_hcapi_parms hparms = { 0 };\n \n-\tTF_CHECK_PARMS_SESSION(tfp, parms);\n+\tTF_CHECK_PARMS2(tfp, parms);\n \n-\tif (parms->type == TF_TBL_TYPE_EXT) {\n-\t\t/* Not supported, yet */\n+\tif (!init) {\n 
\t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"%s, External table type not supported\\n\",\n+\t\t\t    \"%s: No Table DBs created\\n\",\n \t\t\t    tf_dir_2_str(parms->dir));\n+\t\treturn -EINVAL;\n+\t}\n \n-\t\trc = -EOPNOTSUPP;\n-\t} else {\n-\t\t/* Internal table type processing */\n-\t\trc = tf_bulk_get_tbl_entry_internal(tfp, parms);\n+\t/* Verify that the entries has been previously allocated */\n+\taparms.rm_db = tbl_db[parms->dir];\n+\taparms.db_index = parms->type;\n+\taparms.allocated = &allocated;\n+\tidx = parms->starting_idx;\n+\tfor (i = 0; i < parms->num_entries; i++) {\n+\t\taparms.index = idx;\n+\t\trc = tf_rm_is_allocated(&aparms);\n \t\tif (rc)\n+\t\t\treturn rc;\n+\n+\t\tif (!allocated) {\n \t\t\tTFP_DRV_LOG(ERR,\n-\t\t\t\t    \"%s, Bulk get failed, type:%d, rc:%s\\n\",\n+\t\t\t\t    \"%s, Invalid or not allocated index, type:%d, idx:%d\\n\",\n \t\t\t\t    tf_dir_2_str(parms->dir),\n \t\t\t\t    parms->type,\n-\t\t\t\t    strerror(-rc));\n+\t\t\t\t    idx);\n+\t\t\treturn -EINVAL;\n+\t\t}\n+\t\tidx++;\n+\t}\n+\n+\thparms.rm_db = tbl_db[parms->dir];\n+\thparms.db_index = parms->type;\n+\thparms.hcapi_type = &hcapi_type;\n+\trc = tf_rm_get_hcapi_type(&hparms);\n+\tif (rc) {\n+\t\tTFP_DRV_LOG(ERR,\n+\t\t\t    \"%s, Failed type lookup, type:%d, rc:%s\\n\",\n+\t\t\t    tf_dir_2_str(parms->dir),\n+\t\t\t    parms->type,\n+\t\t\t    strerror(-rc));\n+\t\treturn rc;\n+\t}\n+\n+\t/* Get the entries */\n+\trc = tf_msg_bulk_get_tbl_entry(tfp,\n+\t\t\t\t       parms->dir,\n+\t\t\t\t       hcapi_type,\n+\t\t\t\t       parms->starting_idx,\n+\t\t\t\t       parms->num_entries,\n+\t\t\t\t       parms->entry_sz_in_bytes,\n+\t\t\t\t       parms->physical_mem_addr);\n+\tif (rc) {\n+\t\tTFP_DRV_LOG(ERR,\n+\t\t\t    \"%s, Bulk get failed, type:%d, rc:%s\\n\",\n+\t\t\t    tf_dir_2_str(parms->dir),\n+\t\t\t    parms->type,\n+\t\t\t    strerror(-rc));\n \t}\n \n \treturn rc;\ndiff --git a/drivers/net/bnxt/tf_core/tf_tbl.h b/drivers/net/bnxt/tf_core/tf_tbl.h\nindex 2b7456f..eb560ff 100644\n--- a/drivers/net/bnxt/tf_core/tf_tbl.h\n+++ b/drivers/net/bnxt/tf_core/tf_tbl.h\n@@ -3,17 +3,21 @@\n  * All rights reserved.\n  */\n \n-#ifndef _TF_TBL_H_\n-#define _TF_TBL_H_\n-\n-#include <stdint.h>\n+#ifndef TF_TBL_TYPE_H_\n+#define TF_TBL_TYPE_H_\n \n #include \"tf_core.h\"\n #include \"stack.h\"\n \n-struct tf_session;\n+struct tf;\n+\n+/**\n+ * The Table module provides processing of Internal TF table types.\n+ */\n \n-/** table scope control block content */\n+/**\n+ * Table scope control block content\n+ */\n struct tf_em_caps {\n \tuint32_t flags;\n \tuint32_t supported;\n@@ -35,66 +39,364 @@ struct tf_em_caps {\n struct tf_tbl_scope_cb {\n \tuint32_t tbl_scope_id;\n \tint index;\n-\tstruct hcapi_cfa_em_ctx_mem_info  em_ctx_info[TF_DIR_MAX];\n-\tstruct tf_em_caps          em_caps[TF_DIR_MAX];\n-\tstruct stack               ext_act_pool[TF_DIR_MAX];\n-\tuint32_t                  *ext_act_pool_mem[TF_DIR_MAX];\n+\tstruct hcapi_cfa_em_ctx_mem_info em_ctx_info[TF_DIR_MAX];\n+\tstruct tf_em_caps em_caps[TF_DIR_MAX];\n+\tstruct stack ext_act_pool[TF_DIR_MAX];\n+\tuint32_t *ext_act_pool_mem[TF_DIR_MAX];\n+};\n+\n+/**\n+ * Table configuration parameters\n+ */\n+struct tf_tbl_cfg_parms {\n+\t/**\n+\t * Number of table types in each of the configuration arrays\n+\t */\n+\tuint16_t num_elements;\n+\t/**\n+\t * Table Type element configuration array\n+\t */\n+\tstruct tf_rm_element_cfg *cfg;\n+\t/**\n+\t * Shadow table type configuration array\n+\t */\n+\tstruct tf_shadow_tbl_cfg *shadow_cfg;\n+\t/**\n+\t * Boolean 
controlling the request shadow copy.\n+\t */\n+\tbool shadow_copy;\n+\t/**\n+\t * Session resource allocations\n+\t */\n+\tstruct tf_session_resources *resources;\n+};\n+\n+/**\n+ * Table allocation parameters\n+ */\n+struct tf_tbl_alloc_parms {\n+\t/**\n+\t * [in] Receive or transmit direction\n+\t */\n+\tenum tf_dir dir;\n+\t/**\n+\t * [in] Type of the allocation\n+\t */\n+\tenum tf_tbl_type type;\n+\t/**\n+\t * [out] Idx of allocated entry or found entry (if search_enable)\n+\t */\n+\tuint32_t *idx;\n+};\n+\n+/**\n+ * Table free parameters\n+ */\n+struct tf_tbl_free_parms {\n+\t/**\n+\t * [in] Receive or transmit direction\n+\t */\n+\tenum tf_dir dir;\n+\t/**\n+\t * [in] Type of the allocation type\n+\t */\n+\tenum tf_tbl_type type;\n+\t/**\n+\t * [in] Index to free\n+\t */\n+\tuint32_t idx;\n+\t/**\n+\t * [out] Reference count after free, only valid if session has been\n+\t * created with shadow_copy.\n+\t */\n+\tuint16_t ref_cnt;\n };\n \n-/** Hardware Page sizes supported for EEM: 4K, 8K, 64K, 256K, 1M, 2M, 4M, 1G.\n- * Round-down other page sizes to the lower hardware page size supported.\n- */\n-#define TF_EM_PAGE_SIZE_4K 12\n-#define TF_EM_PAGE_SIZE_8K 13\n-#define TF_EM_PAGE_SIZE_64K 16\n-#define TF_EM_PAGE_SIZE_256K 18\n-#define TF_EM_PAGE_SIZE_1M 20\n-#define TF_EM_PAGE_SIZE_2M 21\n-#define TF_EM_PAGE_SIZE_4M 22\n-#define TF_EM_PAGE_SIZE_1G 30\n-\n-/* Set page size */\n-#define PAGE_SIZE TF_EM_PAGE_SIZE_2M\n-\n-#if (PAGE_SIZE == TF_EM_PAGE_SIZE_4K)\t/** 4K */\n-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4K\n-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4K\n-#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_8K)\t/** 8K */\n-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_8K\n-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_8K\n-#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_64K)\t/** 64K */\n-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_64K\n-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_64K\n-#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_256K)\t/** 256K */\n-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_256K\n-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_256K\n-#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_1M)\t/** 1M */\n-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1M\n-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1M\n-#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_2M)\t/** 2M */\n-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_2M\n-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_2M\n-#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_4M)\t/** 4M */\n-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_4M\n-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_4M\n-#elif (PAGE_SIZE == TF_EM_PAGE_SIZE_1G)\t/** 1G */\n-#define TF_EM_PAGE_SHIFT TF_EM_PAGE_SIZE_1G\n-#define TF_EM_PAGE_SIZE_ENUM HWRM_TF_CTXT_MEM_RGTR_INPUT_PAGE_SIZE_1G\n-#else\n-#error \"Invalid Page Size specified. 
Please use a TF_EM_PAGE_SIZE_n define\"\n-#endif\n-\n-#define TF_EM_PAGE_SIZE\t(1 << TF_EM_PAGE_SHIFT)\n-#define TF_EM_PAGE_ALIGNMENT (1 << TF_EM_PAGE_SHIFT)\n-\n-/**\n- * Initialize table pool structure to indicate\n- * no table scope has been associated with the\n- * external pool of indexes.\n- *\n- * [in] session\n- */\n-void\n-tf_init_tbl_pool(struct tf_session *session);\n-\n-#endif /* _TF_TBL_H_ */\n+/**\n+ * Table allocate search parameters\n+ */\n+struct tf_tbl_alloc_search_parms {\n+\t/**\n+\t * [in] Receive or transmit direction\n+\t */\n+\tenum tf_dir dir;\n+\t/**\n+\t * [in] Type of the allocation\n+\t */\n+\tenum tf_tbl_type type;\n+\t/**\n+\t * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)\n+\t */\n+\tuint32_t tbl_scope_id;\n+\t/**\n+\t * [in] Enable search for matching entry. If the table type is\n+\t * internal the shadow copy will be searched before\n+\t * alloc. Session must be configured with shadow copy enabled.\n+\t */\n+\tuint8_t search_enable;\n+\t/**\n+\t * [in] Result data to search for (if search_enable)\n+\t */\n+\tuint8_t *result;\n+\t/**\n+\t * [in] Result data size in bytes (if search_enable)\n+\t */\n+\tuint16_t result_sz_in_bytes;\n+\t/**\n+\t * [out] If search_enable, set if matching entry found\n+\t */\n+\tuint8_t hit;\n+\t/**\n+\t * [out] Current ref count after allocation (if search_enable)\n+\t */\n+\tuint16_t ref_cnt;\n+\t/**\n+\t * [out] Idx of allocated entry or found entry (if search_enable)\n+\t */\n+\tuint32_t idx;\n+};\n+\n+/**\n+ * Table set parameters\n+ */\n+struct tf_tbl_set_parms {\n+\t/**\n+\t * [in] Receive or transmit direction\n+\t */\n+\tenum tf_dir dir;\n+\t/**\n+\t * [in] Type of object to set\n+\t */\n+\tenum tf_tbl_type type;\n+\t/**\n+\t * [in] Entry data\n+\t */\n+\tuint8_t *data;\n+\t/**\n+\t * [in] Entry size\n+\t */\n+\tuint16_t data_sz_in_bytes;\n+\t/**\n+\t * [in] Entry index to write to\n+\t */\n+\tuint32_t idx;\n+};\n+\n+/**\n+ * Table get parameters\n+ */\n+struct tf_tbl_get_parms {\n+\t/**\n+\t * [in] Receive or transmit direction\n+\t */\n+\tenum tf_dir dir;\n+\t/**\n+\t * [in] Type of object to get\n+\t */\n+\tenum tf_tbl_type type;\n+\t/**\n+\t * [out] Entry data\n+\t */\n+\tuint8_t *data;\n+\t/**\n+\t * [out] Entry size\n+\t */\n+\tuint16_t data_sz_in_bytes;\n+\t/**\n+\t * [in] Entry index to read\n+\t */\n+\tuint32_t idx;\n+};\n+\n+/**\n+ * Table get bulk parameters\n+ */\n+struct tf_tbl_get_bulk_parms {\n+\t/**\n+\t * [in] Receive or transmit direction\n+\t */\n+\tenum tf_dir dir;\n+\t/**\n+\t * [in] Type of object to get\n+\t */\n+\tenum tf_tbl_type type;\n+\t/**\n+\t * [in] Starting index to read from\n+\t */\n+\tuint32_t starting_idx;\n+\t/**\n+\t * [in] Number of sequential entries\n+\t */\n+\tuint16_t num_entries;\n+\t/**\n+\t * [in] Size of the single entry\n+\t */\n+\tuint16_t entry_sz_in_bytes;\n+\t/**\n+\t * [out] Host physical address, where the data\n+\t * will be copied to by the firmware.\n+\t * Use tfp_calloc() API and mem_pa\n+\t * variable of the tfp_calloc_parms\n+\t * structure for the physical address.\n+\t */\n+\tuint64_t physical_mem_addr;\n+};\n+\n+/**\n+ * @page tbl Table\n+ *\n+ * @ref tf_tbl_bind\n+ *\n+ * @ref tf_tbl_unbind\n+ *\n+ * @ref tf_tbl_alloc\n+ *\n+ * @ref tf_tbl_free\n+ *\n+ * @ref tf_tbl_alloc_search\n+ *\n+ * @ref tf_tbl_set\n+ *\n+ * @ref tf_tbl_get\n+ *\n+ * @ref tf_tbl_bulk_get\n+ */\n+\n+/**\n+ * Initializes the Table module with the requested DBs. 
Must be\n+ * invoked as the first thing before any of the access functions.\n+ *\n+ * [in] tfp\n+ *   Pointer to TF handle, used for HCAPI communication\n+ *\n+ * [in] parms\n+ *   Pointer to Table configuration parameters\n+ *\n+ * Returns\n+ *   - (0) if successful.\n+ *   - (-EINVAL) on failure.\n+ */\n+int tf_tbl_bind(struct tf *tfp,\n+\t\tstruct tf_tbl_cfg_parms *parms);\n+\n+/**\n+ * Cleans up the private DBs and releases all the data.\n+ *\n+ * [in] tfp\n+ *   Pointer to TF handle, used for HCAPI communication\n+ *\n+ * [in] parms\n+ *   Pointer to parameters\n+ *\n+ * Returns\n+ *   - (0) if successful.\n+ *   - (-EINVAL) on failure.\n+ */\n+int tf_tbl_unbind(struct tf *tfp);\n+\n+/**\n+ * Allocates the requested table type from the internal RM DB.\n+ *\n+ * [in] tfp\n+ *   Pointer to TF handle, used for HCAPI communication\n+ *\n+ * [in] parms\n+ *   Pointer to Table allocation parameters\n+ *\n+ * Returns\n+ *   - (0) if successful.\n+ *   - (-EINVAL) on failure.\n+ */\n+int tf_tbl_alloc(struct tf *tfp,\n+\t\t struct tf_tbl_alloc_parms *parms);\n+\n+/**\n+ * Frees the requested table type and returns it to the DB. If shadow\n+ * DB is enabled it is searched first and if found the element refcount\n+ * is decremented. If refcount goes to 0 then it is returned to the\n+ * table type DB.\n+ *\n+ * [in] tfp\n+ *   Pointer to TF handle, used for HCAPI communication\n+ *\n+ * [in] parms\n+ *   Pointer to Table free parameters\n+ *\n+ * Returns\n+ *   - (0) if successful.\n+ *   - (-EINVAL) on failure.\n+ */\n+int tf_tbl_free(struct tf *tfp,\n+\t\tstruct tf_tbl_free_parms *parms);\n+\n+/**\n+ * Supported if Shadow DB is configured. Searches the Shadow DB for\n+ * any matching element. If found the refcount in the shadow DB is\n+ * updated accordingly. 
If not found a new element is allocated and\n+ * installed into the shadow DB.\n+ *\n+ * [in] tfp\n+ *   Pointer to TF handle, used for HCAPI communication\n+ *\n+ * [in] parms\n+ *   Pointer to parameters\n+ *\n+ * Returns\n+ *   - (0) if successful.\n+ *   - (-EINVAL) on failure.\n+ */\n+int tf_tbl_alloc_search(struct tf *tfp,\n+\t\t\tstruct tf_tbl_alloc_search_parms *parms);\n+\n+/**\n+ * Configures the requested element by sending a firmware request which\n+ * then installs it into the device internal structures.\n+ *\n+ * [in] tfp\n+ *   Pointer to TF handle, used for HCAPI communication\n+ *\n+ * [in] parms\n+ *   Pointer to Table set parameters\n+ *\n+ * Returns\n+ *   - (0) if successful.\n+ *   - (-EINVAL) on failure.\n+ */\n+int tf_tbl_set(struct tf *tfp,\n+\t       struct tf_tbl_set_parms *parms);\n+\n+/**\n+ * Retrieves the requested element by sending a firmware request to get\n+ * the element.\n+ *\n+ * [in] tfp\n+ *   Pointer to TF handle, used for HCAPI communication\n+ *\n+ * [in] parms\n+ *   Pointer to Table get parameters\n+ *\n+ * Returns\n+ *   - (0) if successful.\n+ *   - (-EINVAL) on failure.\n+ */\n+int tf_tbl_get(struct tf *tfp,\n+\t       struct tf_tbl_get_parms *parms);\n+\n+/**\n+ * Retrieves bulk block of elements by sending a firmware request to\n+ * get the elements.\n+ *\n+ * [in] tfp\n+ *   Pointer to TF handle, used for HCAPI communication\n+ *\n+ * [in] parms\n+ *   Pointer to Table get bulk parameters\n+ *\n+ * Returns\n+ *   - (0) if successful.\n+ *   - (-EINVAL) on failure.\n+ */\n+int tf_tbl_bulk_get(struct tf *tfp,\n+\t\t    struct tf_tbl_get_bulk_parms *parms);\n+\n+#endif /* TF_TBL_TYPE_H */\ndiff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.c b/drivers/net/bnxt/tf_core/tf_tbl_type.c\ndeleted file mode 100644\nindex 2f5af60..0000000\n--- a/drivers/net/bnxt/tf_core/tf_tbl_type.c\n+++ /dev/null\n@@ -1,342 +0,0 @@\n-/* SPDX-License-Identifier: BSD-3-Clause\n- * Copyright(c) 2019-2020 Broadcom\n- * All rights reserved.\n- */\n-\n-#include <rte_common.h>\n-\n-#include \"tf_tbl_type.h\"\n-#include \"tf_common.h\"\n-#include \"tf_rm_new.h\"\n-#include \"tf_util.h\"\n-#include \"tf_msg.h\"\n-#include \"tfp.h\"\n-\n-struct tf;\n-\n-/**\n- * Table DBs.\n- */\n-static void *tbl_db[TF_DIR_MAX];\n-\n-/**\n- * Table Shadow DBs\n- */\n-/* static void *shadow_tbl_db[TF_DIR_MAX]; */\n-\n-/**\n- * Init flag, set on bind and cleared on unbind\n- */\n-static uint8_t init;\n-\n-/**\n- * Shadow init flag, set on bind and cleared on unbind\n- */\n-/* static uint8_t shadow_init; */\n-\n-int\n-tf_tbl_bind(struct tf *tfp,\n-\t    struct tf_tbl_cfg_parms *parms)\n-{\n-\tint rc;\n-\tint i;\n-\tstruct tf_rm_create_db_parms db_cfg = { 0 };\n-\n-\tTF_CHECK_PARMS2(tfp, parms);\n-\n-\tif (init) {\n-\t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"Table already initialized\\n\");\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tdb_cfg.num_elements = parms->num_elements;\n-\tdb_cfg.type = TF_DEVICE_MODULE_TYPE_TABLE;\n-\tdb_cfg.num_elements = parms->num_elements;\n-\tdb_cfg.cfg = parms->cfg;\n-\n-\tfor (i = 0; i < TF_DIR_MAX; i++) {\n-\t\tdb_cfg.dir = i;\n-\t\tdb_cfg.alloc_cnt = parms->resources->tbl_cnt[i].cnt;\n-\t\tdb_cfg.rm_db = &tbl_db[i];\n-\t\trc = tf_rm_create_db(tfp, &db_cfg);\n-\t\tif (rc) {\n-\t\t\tTFP_DRV_LOG(ERR,\n-\t\t\t\t    \"%s: Table DB creation failed\\n\",\n-\t\t\t\t    tf_dir_2_str(i));\n-\n-\t\t\treturn rc;\n-\t\t}\n-\t}\n-\n-\tinit = 1;\n-\n-\tprintf(\"Table Type - initialized\\n\");\n-\n-\treturn 0;\n-}\n-\n-int\n-tf_tbl_unbind(struct tf *tfp __rte_unused)\n-{\n-\tint rc;\n-\tint 
i;\n-\tstruct tf_rm_free_db_parms fparms = { 0 };\n-\n-\tTF_CHECK_PARMS1(tfp);\n-\n-\t/* Bail if nothing has been initialized done silent as to\n-\t * allow for creation cleanup.\n-\t */\n-\tif (!init) {\n-\t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"No Table DBs created\\n\");\n-\t\treturn -EINVAL;\n-\t}\n-\n-\tfor (i = 0; i < TF_DIR_MAX; i++) {\n-\t\tfparms.dir = i;\n-\t\tfparms.rm_db = tbl_db[i];\n-\t\trc = tf_rm_free_db(tfp, &fparms);\n-\t\tif (rc)\n-\t\t\treturn rc;\n-\n-\t\ttbl_db[i] = NULL;\n-\t}\n-\n-\tinit = 0;\n-\n-\treturn 0;\n-}\n-\n-int\n-tf_tbl_alloc(struct tf *tfp __rte_unused,\n-\t     struct tf_tbl_alloc_parms *parms)\n-{\n-\tint rc;\n-\tuint32_t idx;\n-\tstruct tf_rm_allocate_parms aparms = { 0 };\n-\n-\tTF_CHECK_PARMS2(tfp, parms);\n-\n-\tif (!init) {\n-\t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"%s: No Table DBs created\\n\",\n-\t\t\t    tf_dir_2_str(parms->dir));\n-\t\treturn -EINVAL;\n-\t}\n-\n-\t/* Allocate requested element */\n-\taparms.rm_db = tbl_db[parms->dir];\n-\taparms.db_index = parms->type;\n-\taparms.index = &idx;\n-\trc = tf_rm_allocate(&aparms);\n-\tif (rc) {\n-\t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"%s: Failed allocate, type:%d\\n\",\n-\t\t\t    tf_dir_2_str(parms->dir),\n-\t\t\t    parms->type);\n-\t\treturn rc;\n-\t}\n-\n-\t*parms->idx = idx;\n-\n-\treturn 0;\n-}\n-\n-int\n-tf_tbl_free(struct tf *tfp __rte_unused,\n-\t    struct tf_tbl_free_parms *parms)\n-{\n-\tint rc;\n-\tstruct tf_rm_is_allocated_parms aparms = { 0 };\n-\tstruct tf_rm_free_parms fparms = { 0 };\n-\tint allocated = 0;\n-\n-\tTF_CHECK_PARMS2(tfp, parms);\n-\n-\tif (!init) {\n-\t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"%s: No Table DBs created\\n\",\n-\t\t\t    tf_dir_2_str(parms->dir));\n-\t\treturn -EINVAL;\n-\t}\n-\n-\t/* Check if element is in use */\n-\taparms.rm_db = tbl_db[parms->dir];\n-\taparms.db_index = parms->type;\n-\taparms.index = parms->idx;\n-\taparms.allocated = &allocated;\n-\trc = tf_rm_is_allocated(&aparms);\n-\tif (rc)\n-\t\treturn rc;\n-\n-\tif (!allocated) {\n-\t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"%s: Entry already free, type:%d, index:%d\\n\",\n-\t\t\t    tf_dir_2_str(parms->dir),\n-\t\t\t    parms->type,\n-\t\t\t    parms->idx);\n-\t\treturn rc;\n-\t}\n-\n-\t/* Free requested element */\n-\tfparms.rm_db = tbl_db[parms->dir];\n-\tfparms.db_index = parms->type;\n-\tfparms.index = parms->idx;\n-\trc = tf_rm_free(&fparms);\n-\tif (rc) {\n-\t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"%s: Free failed, type:%d, index:%d\\n\",\n-\t\t\t    tf_dir_2_str(parms->dir),\n-\t\t\t    parms->type,\n-\t\t\t    parms->idx);\n-\t\treturn rc;\n-\t}\n-\n-\treturn 0;\n-}\n-\n-int\n-tf_tbl_alloc_search(struct tf *tfp __rte_unused,\n-\t\t    struct tf_tbl_alloc_search_parms *parms __rte_unused)\n-{\n-\treturn 0;\n-}\n-\n-int\n-tf_tbl_set(struct tf *tfp,\n-\t   struct tf_tbl_set_parms *parms)\n-{\n-\tint rc;\n-\tint allocated = 0;\n-\tuint16_t hcapi_type;\n-\tstruct tf_rm_is_allocated_parms aparms = { 0 };\n-\tstruct tf_rm_get_hcapi_parms hparms = { 0 };\n-\n-\tTF_CHECK_PARMS3(tfp, parms, parms->data);\n-\n-\tif (!init) {\n-\t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"%s: No Table DBs created\\n\",\n-\t\t\t    tf_dir_2_str(parms->dir));\n-\t\treturn -EINVAL;\n-\t}\n-\n-\t/* Verify that the entry has been previously allocated */\n-\taparms.rm_db = tbl_db[parms->dir];\n-\taparms.db_index = parms->type;\n-\taparms.index = parms->idx;\n-\taparms.allocated = &allocated;\n-\trc = tf_rm_is_allocated(&aparms);\n-\tif (rc)\n-\t\treturn rc;\n-\n-\tif (!allocated) {\n-\t\tTFP_DRV_LOG(ERR,\n-\t\t   \"%s, Invalid or not allocated index, type:%d, 
idx:%d\\n\",\n-\t\t   tf_dir_2_str(parms->dir),\n-\t\t   parms->type,\n-\t\t   parms->idx);\n-\t\treturn -EINVAL;\n-\t}\n-\n-\t/* Set the entry */\n-\thparms.rm_db = tbl_db[parms->dir];\n-\thparms.db_index = parms->type;\n-\thparms.hcapi_type = &hcapi_type;\n-\trc = tf_rm_get_hcapi_type(&hparms);\n-\tif (rc) {\n-\t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"%s, Failed type lookup, type:%d, rc:%s\\n\",\n-\t\t\t    tf_dir_2_str(parms->dir),\n-\t\t\t    parms->type,\n-\t\t\t    strerror(-rc));\n-\t\treturn rc;\n-\t}\n-\n-\trc = tf_msg_set_tbl_entry(tfp,\n-\t\t\t\t  parms->dir,\n-\t\t\t\t  hcapi_type,\n-\t\t\t\t  parms->data_sz_in_bytes,\n-\t\t\t\t  parms->data,\n-\t\t\t\t  parms->idx);\n-\tif (rc) {\n-\t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"%s, Set failed, type:%d, rc:%s\\n\",\n-\t\t\t    tf_dir_2_str(parms->dir),\n-\t\t\t    parms->type,\n-\t\t\t    strerror(-rc));\n-\t}\n-\n-\treturn 0;\n-}\n-\n-int\n-tf_tbl_get(struct tf *tfp,\n-\t   struct tf_tbl_get_parms *parms)\n-{\n-\tint rc;\n-\tuint16_t hcapi_type;\n-\tint allocated = 0;\n-\tstruct tf_rm_is_allocated_parms aparms = { 0 };\n-\tstruct tf_rm_get_hcapi_parms hparms = { 0 };\n-\n-\tTF_CHECK_PARMS3(tfp, parms, parms->data);\n-\n-\tif (!init) {\n-\t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"%s: No Table DBs created\\n\",\n-\t\t\t    tf_dir_2_str(parms->dir));\n-\t\treturn -EINVAL;\n-\t}\n-\n-\t/* Verify that the entry has been previously allocated */\n-\taparms.rm_db = tbl_db[parms->dir];\n-\taparms.db_index = parms->type;\n-\taparms.index = parms->idx;\n-\taparms.allocated = &allocated;\n-\trc = tf_rm_is_allocated(&aparms);\n-\tif (rc)\n-\t\treturn rc;\n-\n-\tif (!allocated) {\n-\t\tTFP_DRV_LOG(ERR,\n-\t\t   \"%s, Invalid or not allocated index, type:%d, idx:%d\\n\",\n-\t\t   tf_dir_2_str(parms->dir),\n-\t\t   parms->type,\n-\t\t   parms->idx);\n-\t\treturn -EINVAL;\n-\t}\n-\n-\t/* Set the entry */\n-\thparms.rm_db = tbl_db[parms->dir];\n-\thparms.db_index = parms->type;\n-\thparms.hcapi_type = &hcapi_type;\n-\trc = tf_rm_get_hcapi_type(&hparms);\n-\tif (rc) {\n-\t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"%s, Failed type lookup, type:%d, rc:%s\\n\",\n-\t\t\t    tf_dir_2_str(parms->dir),\n-\t\t\t    parms->type,\n-\t\t\t    strerror(-rc));\n-\t\treturn rc;\n-\t}\n-\n-\t/* Get the entry */\n-\trc = tf_msg_get_tbl_entry(tfp,\n-\t\t\t\t  parms->dir,\n-\t\t\t\t  hcapi_type,\n-\t\t\t\t  parms->data_sz_in_bytes,\n-\t\t\t\t  parms->data,\n-\t\t\t\t  parms->idx);\n-\tif (rc) {\n-\t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"%s, Get failed, type:%d, rc:%s\\n\",\n-\t\t\t    tf_dir_2_str(parms->dir),\n-\t\t\t    parms->type,\n-\t\t\t    strerror(-rc));\n-\t}\n-\n-\treturn 0;\n-}\ndiff --git a/drivers/net/bnxt/tf_core/tf_tbl_type.h b/drivers/net/bnxt/tf_core/tf_tbl_type.h\ndeleted file mode 100644\nindex 3474489..0000000\n--- a/drivers/net/bnxt/tf_core/tf_tbl_type.h\n+++ /dev/null\n@@ -1,318 +0,0 @@\n-/* SPDX-License-Identifier: BSD-3-Clause\n- * Copyright(c) 2019-2020 Broadcom\n- * All rights reserved.\n- */\n-\n-#ifndef TF_TBL_TYPE_H_\n-#define TF_TBL_TYPE_H_\n-\n-#include \"tf_core.h\"\n-\n-struct tf;\n-\n-/**\n- * The Table module provides processing of Internal TF table types.\n- */\n-\n-/**\n- * Table configuration parameters\n- */\n-struct tf_tbl_cfg_parms {\n-\t/**\n-\t * Number of table types in each of the configuration arrays\n-\t */\n-\tuint16_t num_elements;\n-\t/**\n-\t * Table Type element configuration array\n-\t */\n-\tstruct tf_rm_element_cfg *cfg;\n-\t/**\n-\t * Shadow table type configuration array\n-\t */\n-\tstruct tf_shadow_tbl_cfg *shadow_cfg;\n-\t/**\n-\t * Boolean 
controlling the request shadow copy.\n-\t */\n-\tbool shadow_copy;\n-\t/**\n-\t * Session resource allocations\n-\t */\n-\tstruct tf_session_resources *resources;\n-};\n-\n-/**\n- * Table allocation parameters\n- */\n-struct tf_tbl_alloc_parms {\n-\t/**\n-\t * [in] Receive or transmit direction\n-\t */\n-\tenum tf_dir dir;\n-\t/**\n-\t * [in] Type of the allocation\n-\t */\n-\tenum tf_tbl_type type;\n-\t/**\n-\t * [out] Idx of allocated entry or found entry (if search_enable)\n-\t */\n-\tuint32_t *idx;\n-};\n-\n-/**\n- * Table free parameters\n- */\n-struct tf_tbl_free_parms {\n-\t/**\n-\t * [in] Receive or transmit direction\n-\t */\n-\tenum tf_dir dir;\n-\t/**\n-\t * [in] Type of the allocation type\n-\t */\n-\tenum tf_tbl_type type;\n-\t/**\n-\t * [in] Index to free\n-\t */\n-\tuint32_t idx;\n-\t/**\n-\t * [out] Reference count after free, only valid if session has been\n-\t * created with shadow_copy.\n-\t */\n-\tuint16_t ref_cnt;\n-};\n-\n-/**\n- * Table allocate search parameters\n- */\n-struct tf_tbl_alloc_search_parms {\n-\t/**\n-\t * [in] Receive or transmit direction\n-\t */\n-\tenum tf_dir dir;\n-\t/**\n-\t * [in] Type of the allocation\n-\t */\n-\tenum tf_tbl_type type;\n-\t/**\n-\t * [in] Table scope identifier (ignored unless TF_TBL_TYPE_EXT)\n-\t */\n-\tuint32_t tbl_scope_id;\n-\t/**\n-\t * [in] Enable search for matching entry. If the table type is\n-\t * internal the shadow copy will be searched before\n-\t * alloc. Session must be configured with shadow copy enabled.\n-\t */\n-\tuint8_t search_enable;\n-\t/**\n-\t * [in] Result data to search for (if search_enable)\n-\t */\n-\tuint8_t *result;\n-\t/**\n-\t * [in] Result data size in bytes (if search_enable)\n-\t */\n-\tuint16_t result_sz_in_bytes;\n-\t/**\n-\t * [out] If search_enable, set if matching entry found\n-\t */\n-\tuint8_t hit;\n-\t/**\n-\t * [out] Current ref count after allocation (if search_enable)\n-\t */\n-\tuint16_t ref_cnt;\n-\t/**\n-\t * [out] Idx of allocated entry or found entry (if search_enable)\n-\t */\n-\tuint32_t idx;\n-};\n-\n-/**\n- * Table set parameters\n- */\n-struct tf_tbl_set_parms {\n-\t/**\n-\t * [in] Receive or transmit direction\n-\t */\n-\tenum tf_dir dir;\n-\t/**\n-\t * [in] Type of object to set\n-\t */\n-\tenum tf_tbl_type type;\n-\t/**\n-\t * [in] Entry data\n-\t */\n-\tuint8_t *data;\n-\t/**\n-\t * [in] Entry size\n-\t */\n-\tuint16_t data_sz_in_bytes;\n-\t/**\n-\t * [in] Entry index to write to\n-\t */\n-\tuint32_t idx;\n-};\n-\n-/**\n- * Table get parameters\n- */\n-struct tf_tbl_get_parms {\n-\t/**\n-\t * [in] Receive or transmit direction\n-\t */\n-\tenum tf_dir dir;\n-\t/**\n-\t * [in] Type of object to get\n-\t */\n-\tenum tf_tbl_type type;\n-\t/**\n-\t * [out] Entry data\n-\t */\n-\tuint8_t *data;\n-\t/**\n-\t * [out] Entry size\n-\t */\n-\tuint16_t data_sz_in_bytes;\n-\t/**\n-\t * [in] Entry index to read\n-\t */\n-\tuint32_t idx;\n-};\n-\n-/**\n- * @page tbl Table\n- *\n- * @ref tf_tbl_bind\n- *\n- * @ref tf_tbl_unbind\n- *\n- * @ref tf_tbl_alloc\n- *\n- * @ref tf_tbl_free\n- *\n- * @ref tf_tbl_alloc_search\n- *\n- * @ref tf_tbl_set\n- *\n- * @ref tf_tbl_get\n- */\n-\n-/**\n- * Initializes the Table module with the requested DBs. 
Must be\n- * invoked as the first thing before any of the access functions.\n- *\n- * [in] tfp\n- *   Pointer to TF handle, used for HCAPI communication\n- *\n- * [in] parms\n- *   Pointer to Table configuration parameters\n- *\n- * Returns\n- *   - (0) if successful.\n- *   - (-EINVAL) on failure.\n- */\n-int tf_tbl_bind(struct tf *tfp,\n-\t\tstruct tf_tbl_cfg_parms *parms);\n-\n-/**\n- * Cleans up the private DBs and releases all the data.\n- *\n- * [in] tfp\n- *   Pointer to TF handle, used for HCAPI communication\n- *\n- * [in] parms\n- *   Pointer to parameters\n- *\n- * Returns\n- *   - (0) if successful.\n- *   - (-EINVAL) on failure.\n- */\n-int tf_tbl_unbind(struct tf *tfp);\n-\n-/**\n- * Allocates the requested table type from the internal RM DB.\n- *\n- * [in] tfp\n- *   Pointer to TF handle, used for HCAPI communication\n- *\n- * [in] parms\n- *   Pointer to Table allocation parameters\n- *\n- * Returns\n- *   - (0) if successful.\n- *   - (-EINVAL) on failure.\n- */\n-int tf_tbl_alloc(struct tf *tfp,\n-\t\t struct tf_tbl_alloc_parms *parms);\n-\n-/**\n- * Free's the requested table type and returns it to the DB. If shadow\n- * DB is enabled its searched first and if found the element refcount\n- * is decremented. If refcount goes to 0 then its returned to the\n- * table type DB.\n- *\n- * [in] tfp\n- *   Pointer to TF handle, used for HCAPI communication\n- *\n- * [in] parms\n- *   Pointer to Table free parameters\n- *\n- * Returns\n- *   - (0) if successful.\n- *   - (-EINVAL) on failure.\n- */\n-int tf_tbl_free(struct tf *tfp,\n-\t\tstruct tf_tbl_free_parms *parms);\n-\n-/**\n- * Supported if Shadow DB is configured. Searches the Shadow DB for\n- * any matching element. If found the refcount in the shadow DB is\n- * updated accordingly. 
If not found a new element is allocated and\n- * installed into the shadow DB.\n- *\n- * [in] tfp\n- *   Pointer to TF handle, used for HCAPI communication\n- *\n- * [in] parms\n- *   Pointer to parameters\n- *\n- * Returns\n- *   - (0) if successful.\n- *   - (-EINVAL) on failure.\n- */\n-int tf_tbl_alloc_search(struct tf *tfp,\n-\t\t\tstruct tf_tbl_alloc_search_parms *parms);\n-\n-/**\n- * Configures the requested element by sending a firmware request which\n- * then installs it into the device internal structures.\n- *\n- * [in] tfp\n- *   Pointer to TF handle, used for HCAPI communication\n- *\n- * [in] parms\n- *   Pointer to Table set parameters\n- *\n- * Returns\n- *   - (0) if successful.\n- *   - (-EINVAL) on failure.\n- */\n-int tf_tbl_set(struct tf *tfp,\n-\t       struct tf_tbl_set_parms *parms);\n-\n-/**\n- * Retrieves the requested element by sending a firmware request to get\n- * the element.\n- *\n- * [in] tfp\n- *   Pointer to TF handle, used for HCAPI communication\n- *\n- * [in] parms\n- *   Pointer to Table get parameters\n- *\n- * Returns\n- *   - (0) if successful.\n- *   - (-EINVAL) on failure.\n- */\n-int tf_tbl_get(struct tf *tfp,\n-\t       struct tf_tbl_get_parms *parms);\n-\n-#endif /* TF_TBL_TYPE_H */\ndiff --git a/drivers/net/bnxt/tf_core/tf_tcam.c b/drivers/net/bnxt/tf_core/tf_tcam.c\nindex a1761ad..fc047f8 100644\n--- a/drivers/net/bnxt/tf_core/tf_tcam.c\n+++ b/drivers/net/bnxt/tf_core/tf_tcam.c\n@@ -9,7 +9,7 @@\n #include \"tf_tcam.h\"\n #include \"tf_common.h\"\n #include \"tf_util.h\"\n-#include \"tf_rm_new.h\"\n+#include \"tf_rm.h\"\n #include \"tf_device.h\"\n #include \"tfp.h\"\n #include \"tf_session.h\"\n@@ -49,7 +49,7 @@ tf_tcam_bind(struct tf *tfp,\n \n \tif (init) {\n \t\tTFP_DRV_LOG(ERR,\n-\t\t\t    \"TCAM already initialized\\n\");\n+\t\t\t    \"TCAM DB already initialized\\n\");\n \t\treturn -EINVAL;\n \t}\n \n@@ -86,11 +86,12 @@ tf_tcam_unbind(struct tf *tfp)\n \n \tTF_CHECK_PARMS1(tfp);\n \n-\t/* Bail if nothing has been initialized done silent as to\n-\t * allow for creation cleanup.\n-\t */\n-\tif (!init)\n-\t\treturn -EINVAL;\n+\t/* Bail if nothing has been initialized */\n+\tif (!init) {\n+\t\tTFP_DRV_LOG(INFO,\n+\t\t\t    \"No TCAM DBs created\\n\");\n+\t\treturn 0;\n+\t}\n \n \tfor (i = 0; i < TF_DIR_MAX; i++) {\n \t\tfparms.dir = i;\n",
    "prefixes": [
        "23/50"
    ]
}
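For reviewers following the Table module rework in the diff above, a minimal usage sketch of the tf_tbl API surface introduced by this patch is shown below. It is illustrative only and not part of the patch: it assumes an already-opened TruFlow session handle (tfp), a table-type configuration array and session resource counts normally supplied by the device layer, and it picks TF_DIR_RX and TF_TBL_TYPE_ACT_STATS_64 purely as example values.

/* Illustrative sketch only (not part of the patch). Error handling is
 * shortened; the cfg/resources arguments come from the device layer.
 */
#include "tf_core.h"
#include "tf_tbl.h"

static int
example_tbl_usage(struct tf *tfp,
		  struct tf_rm_element_cfg *tbl_cfg,
		  uint16_t num_elements,
		  struct tf_session_resources *res)
{
	struct tf_tbl_cfg_parms cfg = { 0 };
	struct tf_tbl_alloc_parms aparms = { 0 };
	struct tf_tbl_set_parms sparms = { 0 };
	struct tf_tbl_free_parms fparms = { 0 };
	uint64_t ctr = 0;	/* example 64b stats payload */
	uint32_t idx;
	int rc;

	/* Bind once; creates the RX/TX RM DBs for the table types */
	cfg.num_elements = num_elements;
	cfg.cfg = tbl_cfg;
	cfg.resources = res;
	rc = tf_tbl_bind(tfp, &cfg);
	if (rc)
		return rc;

	/* Allocate an index of the requested type from the RM DB */
	aparms.dir = TF_DIR_RX;
	aparms.type = TF_TBL_TYPE_ACT_STATS_64;	/* example type */
	aparms.idx = &idx;
	rc = tf_tbl_alloc(tfp, &aparms);
	if (rc)
		goto unbind;

	/* Program the entry; the HCAPI type is looked up internally */
	sparms.dir = TF_DIR_RX;
	sparms.type = TF_TBL_TYPE_ACT_STATS_64;
	sparms.data = (uint8_t *)&ctr;
	sparms.data_sz_in_bytes = sizeof(ctr);
	sparms.idx = idx;
	rc = tf_tbl_set(tfp, &sparms);

	/* Return the index to the pool when done */
	fparms.dir = TF_DIR_RX;
	fparms.type = TF_TBL_TYPE_ACT_STATS_64;
	fparms.idx = idx;
	tf_tbl_free(tfp, &fparms);

unbind:
	tf_tbl_unbind(tfp);
	return rc;
}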