get:
Show a patch.

patch:
Partially update a patch (only the supplied fields change).

put:
Update a patch.
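
As a quick illustration of the methods listed above, here is a minimal sketch that fetches this patch anonymously with GET and outlines the authenticated write path. It assumes Python with the requests library; the DRF-style Token authorization header is an assumption about the deployment and is not shown anywhere in this response.

    import requests

    URL = "http://patches.dpdk.org/api/patches/91873/"

    # GET: show a patch (anonymous access suffices, as in the exchange below).
    patch = requests.get(URL, params={"format": "json"}).json()
    print(patch["name"], patch["state"], patch["check"])

    # PATCH/PUT: update a patch, e.g. its state or delegate. Writes require
    # credentials; a token header is assumed here, not confirmed by this page.
    # requests.patch(URL,
    #                headers={"Authorization": "Token <your-api-token>"},
    #                json={"state": "accepted"})

The recorded GET exchange for this patch follows.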

GET /api/patches/91873/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 91873,
    "url": "http://patches.dpdk.org/api/patches/91873/?format=api",
    "web_url": "http://patches.dpdk.org/project/dpdk/patch/1618916122-181792-11-git-send-email-jiaweiw@nvidia.com/",
    "project": {
        "id": 1,
        "url": "http://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<1618916122-181792-11-git-send-email-jiaweiw@nvidia.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/1618916122-181792-11-git-send-email-jiaweiw@nvidia.com",
    "date": "2021-04-20T10:55:17",
    "name": "[v6,10/15] net/mlx5: initialize the flow meter ASO SQ",
    "commit_ref": null,
    "pull_url": null,
    "state": "accepted",
    "archived": true,
    "hash": "e1cbb9e2174d6bbdc7946284b19a80e3cd7fa221",
    "submitter": {
        "id": 1939,
        "url": "http://patches.dpdk.org/api/people/1939/?format=api",
        "name": "Jiawei Wang",
        "email": "jiaweiw@nvidia.com"
    },
    "delegate": {
        "id": 3268,
        "url": "http://patches.dpdk.org/api/users/3268/?format=api",
        "username": "rasland",
        "first_name": "Raslan",
        "last_name": "Darawsheh",
        "email": "rasland@nvidia.com"
    },
    "mbox": "http://patches.dpdk.org/project/dpdk/patch/1618916122-181792-11-git-send-email-jiaweiw@nvidia.com/mbox/",
    "series": [
        {
            "id": 16520,
            "url": "http://patches.dpdk.org/api/series/16520/?format=api",
            "web_url": "http://patches.dpdk.org/project/dpdk/list/?series=16520",
            "date": "2021-04-20T10:55:12",
            "name": "Add ASO meter support in MLX5 PMD",
            "version": 6,
            "mbox": "http://patches.dpdk.org/series/16520/mbox/"
        }
    ],
    "comments": "http://patches.dpdk.org/api/patches/91873/comments/",
    "check": "success",
    "checks": "http://patches.dpdk.org/api/patches/91873/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@inbox.dpdk.org",
        "Delivered-To": "patchwork@inbox.dpdk.org",
        "Received": [
            "from mails.dpdk.org (mails.dpdk.org [217.70.189.124])\n\tby inbox.dpdk.org (Postfix) with ESMTP id 8DA01A0548;\n\tTue, 20 Apr 2021 12:56:51 +0200 (CEST)",
            "from [217.70.189.124] (localhost [127.0.0.1])\n\tby mails.dpdk.org (Postfix) with ESMTP id 4D0A941798;\n\tTue, 20 Apr 2021 12:55:46 +0200 (CEST)",
            "from mellanox.co.il (mail-il-dmz.mellanox.com [193.47.165.129])\n by mails.dpdk.org (Postfix) with ESMTP id 9C3BC41745\n for <dev@dpdk.org>; Tue, 20 Apr 2021 12:55:30 +0200 (CEST)",
            "from Internal Mail-Server by MTLPINE1 (envelope-from\n jiaweiw@nvidia.com) with SMTP; 20 Apr 2021 13:55:25 +0300",
            "from nvidia.com (gen-l-vrt-281.mtl.labs.mlnx [10.237.44.1])\n by labmailer.mlnx (8.13.8/8.13.8) with ESMTP id 13KAtMRw009943;\n Tue, 20 Apr 2021 13:55:25 +0300"
        ],
        "From": "Jiawei Wang <jiaweiw@nvidia.com>",
        "To": "matan@nvidia.com, orika@nvidia.com, viacheslavo@nvidia.com,\n ferruh.yigit@intel.com, thomas@monjalon.net,\n Shahaf Shuler <shahafs@nvidia.com>",
        "Cc": "dev@dpdk.org, rasland@nvidia.com, asafp@nvidia.com,\n Li Zhang <lizh@nvidia.com>",
        "Date": "Tue, 20 Apr 2021 13:55:17 +0300",
        "Message-Id": "<1618916122-181792-11-git-send-email-jiaweiw@nvidia.com>",
        "X-Mailer": "git-send-email 1.8.3.1",
        "In-Reply-To": "<1618916122-181792-1-git-send-email-jiaweiw@nvidia.com>",
        "References": "<20210331073632.1443011-1-lizh@nvidia.com>\n <1618916122-181792-1-git-send-email-jiaweiw@nvidia.com>",
        "Subject": "[dpdk-dev] [PATCH v6 10/15] net/mlx5: initialize the flow meter ASO\n SQ",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.29",
        "Precedence": "list",
        "List-Id": "DPDK patches and discussions <dev.dpdk.org>",
        "List-Unsubscribe": "<https://mails.dpdk.org/options/dev>,\n <mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://mails.dpdk.org/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<https://mails.dpdk.org/listinfo/dev>,\n <mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "From: Li Zhang <lizh@nvidia.com>\n\nInitialize the flow meter ASO SQ WQEs with\nall the constant data that should not be updated\nper enqueue operation.\n\nSigned-off-by: Li Zhang <lizh@nvidia.com>\nAcked-by: Matan Azrad <matan@nvidia.com>\n---\n drivers/net/mlx5/linux/mlx5_os.c   |  16 +\n drivers/net/mlx5/meson.build       |   2 +-\n drivers/net/mlx5/mlx5.c            |  68 +++-\n drivers/net/mlx5/mlx5.h            |  22 +-\n drivers/net/mlx5/mlx5_flow.h       |   4 +-\n drivers/net/mlx5/mlx5_flow_age.c   | 591 ---------------------------------\n drivers/net/mlx5/mlx5_flow_aso.c   | 659 +++++++++++++++++++++++++++++++++++++\n drivers/net/mlx5/mlx5_flow_dv.c    |   7 +-\n drivers/net/mlx5/mlx5_flow_meter.c |   7 +-\n 9 files changed, 767 insertions(+), 609 deletions(-)\n delete mode 100644 drivers/net/mlx5/mlx5_flow_age.c\n create mode 100644 drivers/net/mlx5/mlx5_flow_aso.c",
    "diff": "diff --git a/drivers/net/mlx5/linux/mlx5_os.c b/drivers/net/mlx5/linux/mlx5_os.c\nindex ad43141..336cdbe 100644\n--- a/drivers/net/mlx5/linux/mlx5_os.c\n+++ b/drivers/net/mlx5/linux/mlx5_os.c\n@@ -1290,6 +1290,22 @@\n \t\t\t\t\tpriv->mtr_color_reg);\n \t\t\t}\n \t\t}\n+\t\tif (config->hca_attr.qos.sup &&\n+\t\t\tconfig->hca_attr.qos.flow_meter_aso_sup) {\n+\t\t\tuint32_t log_obj_size =\n+\t\t\t\trte_log2_u32(MLX5_ASO_MTRS_PER_POOL >> 1);\n+\t\t\tif (log_obj_size >=\n+\t\t\tconfig->hca_attr.qos.log_meter_aso_granularity &&\n+\t\t\tlog_obj_size <=\n+\t\t\tconfig->hca_attr.qos.log_meter_aso_max_alloc) {\n+\t\t\t\tsh->meter_aso_en = 1;\n+\t\t\t\terr = mlx5_aso_flow_mtrs_mng_init(priv);\n+\t\t\t\tif (err) {\n+\t\t\t\t\terr = -err;\n+\t\t\t\t\tgoto error;\n+\t\t\t\t}\n+\t\t\t}\n+\t\t}\n #endif\n #ifdef HAVE_MLX5_DR_CREATE_ACTION_ASO\n \t\tif (config->hca_attr.flow_hit_aso &&\ndiff --git a/drivers/net/mlx5/meson.build b/drivers/net/mlx5/meson.build\nindex 8740ca5..0ec3002 100644\n--- a/drivers/net/mlx5/meson.build\n+++ b/drivers/net/mlx5/meson.build\n@@ -15,7 +15,7 @@ sources = files(\n \t'mlx5_flow.c',\n \t'mlx5_flow_meter.c',\n \t'mlx5_flow_dv.c',\n-        'mlx5_flow_age.c',\n+\t'mlx5_flow_aso.c',\n \t'mlx5_mac.c',\n \t'mlx5_mr.c',\n \t'mlx5_rss.c',\ndiff --git a/drivers/net/mlx5/mlx5.c b/drivers/net/mlx5/mlx5.c\nindex 1b5b5cb..00055e3 100644\n--- a/drivers/net/mlx5/mlx5.c\n+++ b/drivers/net/mlx5/mlx5.c\n@@ -403,7 +403,7 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =\n \t\trte_errno = ENOMEM;\n \t\treturn -ENOMEM;\n \t}\n-\terr = mlx5_aso_queue_init(sh);\n+\terr = mlx5_aso_queue_init(sh, ASO_OPC_MOD_FLOW_HIT);\n \tif (err) {\n \t\tmlx5_free(sh->aso_age_mng);\n \t\treturn -1;\n@@ -425,8 +425,8 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =\n {\n \tint i, j;\n \n-\tmlx5_aso_queue_stop(sh);\n-\tmlx5_aso_queue_uninit(sh);\n+\tmlx5_aso_flow_hit_queue_poll_stop(sh);\n+\tmlx5_aso_queue_uninit(sh, ASO_OPC_MOD_FLOW_HIT);\n \tif (sh->aso_age_mng->pools) {\n \t\tstruct mlx5_aso_age_pool *pool;\n \n@@ -564,6 +564,66 @@ static LIST_HEAD(, mlx5_dev_ctx_shared) mlx5_dev_ctx_list =\n \tmemset(&sh->cmng, 0, sizeof(sh->cmng));\n }\n \n+/**\n+ * Initialize the aso flow meters management structure.\n+ *\n+ * @param[in] sh\n+ *   Pointer to mlx5_dev_ctx_shared object to free\n+ */\n+int\n+mlx5_aso_flow_mtrs_mng_init(struct mlx5_priv *priv)\n+{\n+\tif (!priv->mtr_idx_tbl) {\n+\t\tpriv->mtr_idx_tbl = mlx5_l3t_create(MLX5_L3T_TYPE_DWORD);\n+\t\tif (!priv->mtr_idx_tbl) {\n+\t\t\tDRV_LOG(ERR, \"fail to create meter lookup table.\");\n+\t\t\trte_errno = ENOMEM;\n+\t\t\treturn -ENOMEM;\n+\t\t}\n+\t}\n+\tif (!priv->sh->mtrmng) {\n+\t\tpriv->sh->mtrmng = mlx5_malloc(MLX5_MEM_ZERO,\n+\t\t\tsizeof(*priv->sh->mtrmng),\n+\t\t\tRTE_CACHE_LINE_SIZE, SOCKET_ID_ANY);\n+\t\tif (!priv->sh->mtrmng) {\n+\t\t\tDRV_LOG(ERR, \"mlx5_aso_mtr_pools_mng allocation was failed.\");\n+\t\t\trte_errno = ENOMEM;\n+\t\t\treturn -ENOMEM;\n+\t\t}\n+\t\trte_spinlock_init(&priv->sh->mtrmng->mtrsl);\n+\t\tLIST_INIT(&priv->sh->mtrmng->meters);\n+\t}\n+\treturn 0;\n+}\n+\n+/**\n+ * Close and release all the resources of\n+ * the ASO flow meter management structure.\n+ *\n+ * @param[in] sh\n+ *   Pointer to mlx5_dev_ctx_shared object to free.\n+ */\n+static void\n+mlx5_aso_flow_mtrs_mng_close(struct mlx5_dev_ctx_shared *sh)\n+{\n+\tstruct mlx5_aso_mtr_pool *mtr_pool;\n+\tstruct mlx5_aso_mtr_pools_mng *mtrmng = sh->mtrmng;\n+\tuint32_t idx;\n+\n+\tmlx5_aso_queue_uninit(sh, ASO_OPC_MOD_POLICER);\n+\tidx = 
mtrmng->n_valid;\n+\twhile (idx--) {\n+\t\tmtr_pool = mtrmng->pools[idx];\n+\t\tclaim_zero(mlx5_devx_cmd_destroy\n+\t\t\t\t\t\t(mtr_pool->devx_obj));\n+\t\tmtrmng->n_valid--;\n+\t\tmlx5_free(mtr_pool);\n+\t}\n+\tmlx5_free(sh->mtrmng->pools);\n+\tmlx5_free(sh->mtrmng);\n+\tsh->mtrmng = NULL;\n+}\n+\n /* Send FLOW_AGED event if needed. */\n void\n mlx5_age_event_prepare(struct mlx5_dev_ctx_shared *sh)\n@@ -1113,6 +1173,8 @@ struct mlx5_dev_ctx_shared *\n \t\tmlx5_flow_aso_age_mng_close(sh);\n \t\tsh->aso_age_mng = NULL;\n \t}\n+\tif (sh->mtrmng)\n+\t\tmlx5_aso_flow_mtrs_mng_close(sh);\n \tmlx5_flow_ipool_destroy(sh);\n \tmlx5_os_dev_shared_handler_uninstall(sh);\n \tif (sh->cnt_id_tbl) {\ndiff --git a/drivers/net/mlx5/mlx5.h b/drivers/net/mlx5/mlx5.h\nindex 2e93dda..4ad0e14 100644\n--- a/drivers/net/mlx5/mlx5.h\n+++ b/drivers/net/mlx5/mlx5.h\n@@ -491,8 +491,13 @@ struct mlx5_aso_devx_mr {\n };\n \n struct mlx5_aso_sq_elem {\n-\tstruct mlx5_aso_age_pool *pool;\n-\tuint16_t burst_size;\n+\tunion {\n+\t\tstruct {\n+\t\t\tstruct mlx5_aso_age_pool *pool;\n+\t\t\tuint16_t burst_size;\n+\t\t};\n+\t\tstruct mlx5_aso_mtr *mtr;\n+\t};\n };\n \n struct mlx5_aso_sq {\n@@ -764,7 +769,6 @@ struct mlx5_aso_mtr_pools_mng {\n \tvolatile uint16_t n_valid; /* Number of valid pools. */\n \tuint16_t n; /* Number of pools. */\n \trte_spinlock_t mtrsl; /* The ASO flow meter free list lock. */\n-\tstruct mlx5_l3t_tbl *mtr_idx_tbl; /* Meter index lookup table. */\n \tstruct aso_meter_list meters; /* Free ASO flow meter list. */\n \tstruct mlx5_aso_sq sq; /*SQ using by ASO flow meter. */\n \tstruct mlx5_aso_mtr_pool **pools; /* ASO flow meter pool array. */\n@@ -1195,6 +1199,7 @@ struct mlx5_priv {\n \tuint8_t mtr_color_reg; /* Meter color match REG_C. */\n \tstruct mlx5_mtr_profiles flow_meter_profiles; /* MTR profile list. */\n \tstruct mlx5_legacy_flow_meters flow_meters; /* MTR list. */\n+\tstruct mlx5_l3t_tbl *mtr_idx_tbl; /* Meter index lookup table. */\n \tuint8_t skip_default_rss_reta; /* Skip configuration of default reta. */\n \tuint8_t fdb_def_rule; /* Whether fdb jump to table 1 is configured. 
*/\n \tstruct mlx5_mp_id mp_id; /* ID of a multi-process process */\n@@ -1258,6 +1263,7 @@ int mlx5_hairpin_cap_get(struct rte_eth_dev *dev,\n bool mlx5_flex_parser_ecpri_exist(struct rte_eth_dev *dev);\n int mlx5_flex_parser_ecpri_alloc(struct rte_eth_dev *dev);\n int mlx5_flow_aso_age_mng_init(struct mlx5_dev_ctx_shared *sh);\n+int mlx5_aso_flow_mtrs_mng_init(struct mlx5_priv *priv);\n \n /* mlx5_ethdev.c */\n \n@@ -1528,9 +1534,11 @@ int mlx5_txpp_xstats_get_names(struct rte_eth_dev *dev,\n \n /* mlx5_flow_aso.c */\n \n-int mlx5_aso_queue_init(struct mlx5_dev_ctx_shared *sh);\n-int mlx5_aso_queue_start(struct mlx5_dev_ctx_shared *sh);\n-int mlx5_aso_queue_stop(struct mlx5_dev_ctx_shared *sh);\n-void mlx5_aso_queue_uninit(struct mlx5_dev_ctx_shared *sh);\n+int mlx5_aso_queue_init(struct mlx5_dev_ctx_shared *sh,\n+\t\tenum mlx5_access_aso_opc_mod aso_opc_mod);\n+int mlx5_aso_flow_hit_queue_poll_start(struct mlx5_dev_ctx_shared *sh);\n+int mlx5_aso_flow_hit_queue_poll_stop(struct mlx5_dev_ctx_shared *sh);\n+void mlx5_aso_queue_uninit(struct mlx5_dev_ctx_shared *sh,\n+\t\tenum mlx5_access_aso_opc_mod aso_opc_mod);\n \n #endif /* RTE_PMD_MLX5_H_ */\ndiff --git a/drivers/net/mlx5/mlx5_flow.h b/drivers/net/mlx5/mlx5_flow.h\nindex b6ec727..81cf1b5 100644\n--- a/drivers/net/mlx5/mlx5_flow.h\n+++ b/drivers/net/mlx5/mlx5_flow.h\n@@ -826,8 +826,8 @@ struct mlx5_flow {\n #define MLX5_FLOW_METER_DISABLE 0\n #define MLX5_FLOW_METER_ENABLE 1\n \n-#define MLX5_ASO_CQE_RESPONSE_DELAY 10\n-#define MLX5_MTR_POLL_CQE_TIMES    100000u\n+#define MLX5_ASO_WQE_CQE_RESPONSE_DELAY 10u\n+#define MLX5_MTR_POLL_WQE_CQE_TIMES 100000u\n \n #define MLX5_MAN_WIDTH 8\n /* Legacy Meter parameter structure. */\ndiff --git a/drivers/net/mlx5/mlx5_flow_age.c b/drivers/net/mlx5/mlx5_flow_age.c\ndeleted file mode 100644\nindex 00cb20d..0000000\n--- a/drivers/net/mlx5/mlx5_flow_age.c\n+++ /dev/null\n@@ -1,591 +0,0 @@\n-/* SPDX-License-Identifier: BSD-3-Clause\n- * Copyright 2020 Mellanox Technologies, Ltd\n- */\n-#include <mlx5_prm.h>\n-#include <rte_malloc.h>\n-#include <rte_cycles.h>\n-#include <rte_eal_paging.h>\n-\n-#include <mlx5_malloc.h>\n-#include <mlx5_common_os.h>\n-#include <mlx5_common_devx.h>\n-\n-#include \"mlx5.h\"\n-#include \"mlx5_flow.h\"\n-\n-\n-/**\n- * Destroy Completion Queue used for ASO access.\n- *\n- * @param[in] cq\n- *   ASO CQ to destroy.\n- */\n-static void\n-mlx5_aso_cq_destroy(struct mlx5_aso_cq *cq)\n-{\n-\tif (cq->cq_obj.cq)\n-\t\tmlx5_devx_cq_destroy(&cq->cq_obj);\n-\tmemset(cq, 0, sizeof(*cq));\n-}\n-\n-/**\n- * Create Completion Queue used for ASO access.\n- *\n- * @param[in] ctx\n- *   Context returned from mlx5 open_device() glue function.\n- * @param[in/out] cq\n- *   Pointer to CQ to create.\n- * @param[in] log_desc_n\n- *   Log of number of descriptors in queue.\n- * @param[in] socket\n- *   Socket to use for allocation.\n- * @param[in] uar_page_id\n- *   UAR page ID to use.\n- *\n- * @return\n- *   0 on success, a negative errno value otherwise and rte_errno is set.\n- */\n-static int\n-mlx5_aso_cq_create(void *ctx, struct mlx5_aso_cq *cq, uint16_t log_desc_n,\n-\t\t   int socket, int uar_page_id)\n-{\n-\tstruct mlx5_devx_cq_attr attr = {\n-\t\t.uar_page_id = uar_page_id,\n-\t};\n-\n-\tcq->log_desc_n = log_desc_n;\n-\tcq->cq_ci = 0;\n-\treturn mlx5_devx_cq_create(ctx, &cq->cq_obj, log_desc_n, &attr, socket);\n-}\n-\n-/**\n- * Free MR resources.\n- *\n- * @param[in] mr\n- *   MR to free.\n- */\n-static void\n-mlx5_aso_devx_dereg_mr(struct mlx5_aso_devx_mr 
*mr)\n-{\n-\tclaim_zero(mlx5_devx_cmd_destroy(mr->mkey));\n-\tif (!mr->is_indirect && mr->umem)\n-\t\tclaim_zero(mlx5_glue->devx_umem_dereg(mr->umem));\n-\tmlx5_free(mr->buf);\n-\tmemset(mr, 0, sizeof(*mr));\n-}\n-\n-/**\n- * Register Memory Region.\n- *\n- * @param[in] ctx\n- *   Context returned from mlx5 open_device() glue function.\n- * @param[in] length\n- *   Size of MR buffer.\n- * @param[in/out] mr\n- *   Pointer to MR to create.\n- * @param[in] socket\n- *   Socket to use for allocation.\n- * @param[in] pdn\n- *   Protection Domain number to use.\n- *\n- * @return\n- *   0 on success, a negative errno value otherwise and rte_errno is set.\n- */\n-static int\n-mlx5_aso_devx_reg_mr(void *ctx, size_t length, struct mlx5_aso_devx_mr *mr,\n-\t\t     int socket, int pdn)\n-{\n-\tstruct mlx5_devx_mkey_attr mkey_attr;\n-\n-\tmr->buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, length, 4096,\n-\t\t\t      socket);\n-\tif (!mr->buf) {\n-\t\tDRV_LOG(ERR, \"Failed to create ASO bits mem for MR by Devx.\");\n-\t\treturn -1;\n-\t}\n-\tmr->umem = mlx5_os_umem_reg(ctx, mr->buf, length,\n-\t\t\t\t\t\t IBV_ACCESS_LOCAL_WRITE);\n-\tif (!mr->umem) {\n-\t\tDRV_LOG(ERR, \"Failed to register Umem for MR by Devx.\");\n-\t\tgoto error;\n-\t}\n-\tmkey_attr.addr = (uintptr_t)mr->buf;\n-\tmkey_attr.size = length;\n-\tmkey_attr.umem_id = mlx5_os_get_umem_id(mr->umem);\n-\tmkey_attr.pd = pdn;\n-\tmkey_attr.pg_access = 1;\n-\tmkey_attr.klm_array = NULL;\n-\tmkey_attr.klm_num = 0;\n-\tmkey_attr.relaxed_ordering_read = 0;\n-\tmkey_attr.relaxed_ordering_write = 0;\n-\tmr->mkey = mlx5_devx_cmd_mkey_create(ctx, &mkey_attr);\n-\tif (!mr->mkey) {\n-\t\tDRV_LOG(ERR, \"Failed to create direct Mkey.\");\n-\t\tgoto error;\n-\t}\n-\tmr->length = length;\n-\tmr->is_indirect = false;\n-\treturn 0;\n-error:\n-\tif (mr->umem)\n-\t\tclaim_zero(mlx5_glue->devx_umem_dereg(mr->umem));\n-\tmlx5_free(mr->buf);\n-\treturn -1;\n-}\n-\n-/**\n- * Destroy Send Queue used for ASO access.\n- *\n- * @param[in] sq\n- *   ASO SQ to destroy.\n- */\n-static void\n-mlx5_aso_destroy_sq(struct mlx5_aso_sq *sq)\n-{\n-\tmlx5_devx_sq_destroy(&sq->sq_obj);\n-\tmlx5_aso_cq_destroy(&sq->cq);\n-\tmlx5_aso_devx_dereg_mr(&sq->mr);\n-\tmemset(sq, 0, sizeof(*sq));\n-}\n-\n-/**\n- * Initialize Send Queue used for ASO access.\n- *\n- * @param[in] sq\n- *   ASO SQ to initialize.\n- */\n-static void\n-mlx5_aso_init_sq(struct mlx5_aso_sq *sq)\n-{\n-\tvolatile struct mlx5_aso_wqe *restrict wqe;\n-\tint i;\n-\tint size = 1 << sq->log_desc_n;\n-\tuint64_t addr;\n-\n-\t/* All the next fields state should stay constant. 
*/\n-\tfor (i = 0, wqe = &sq->sq_obj.aso_wqes[0]; i < size; ++i, ++wqe) {\n-\t\twqe->general_cseg.sq_ds = rte_cpu_to_be_32((sq->sqn << 8) |\n-\t\t\t\t\t\t\t  (sizeof(*wqe) >> 4));\n-\t\twqe->aso_cseg.lkey = rte_cpu_to_be_32(sq->mr.mkey->id);\n-\t\taddr = (uint64_t)((uint64_t *)sq->mr.buf + i *\n-\t\t\t\t\t    MLX5_ASO_AGE_ACTIONS_PER_POOL / 64);\n-\t\twqe->aso_cseg.va_h = rte_cpu_to_be_32((uint32_t)(addr >> 32));\n-\t\twqe->aso_cseg.va_l_r = rte_cpu_to_be_32((uint32_t)addr | 1u);\n-\t\twqe->aso_cseg.operand_masks = rte_cpu_to_be_32\n-\t\t\t(0u |\n-\t\t\t (ASO_OPER_LOGICAL_OR << ASO_CSEG_COND_OPER_OFFSET) |\n-\t\t\t (ASO_OP_ALWAYS_TRUE << ASO_CSEG_COND_1_OPER_OFFSET) |\n-\t\t\t (ASO_OP_ALWAYS_TRUE << ASO_CSEG_COND_0_OPER_OFFSET) |\n-\t\t\t (BYTEWISE_64BYTE << ASO_CSEG_DATA_MASK_MODE_OFFSET));\n-\t\twqe->aso_cseg.data_mask = RTE_BE64(UINT64_MAX);\n-\t}\n-}\n-\n-/**\n- * Create Send Queue used for ASO access.\n- *\n- * @param[in] ctx\n- *   Context returned from mlx5 open_device() glue function.\n- * @param[in/out] sq\n- *   Pointer to SQ to create.\n- * @param[in] socket\n- *   Socket to use for allocation.\n- * @param[in] uar\n- *   User Access Region object.\n- * @param[in] pdn\n- *   Protection Domain number to use.\n- * @param[in] log_desc_n\n- *   Log of number of descriptors in queue.\n- *\n- * @return\n- *   0 on success, a negative errno value otherwise and rte_errno is set.\n- */\n-static int\n-mlx5_aso_sq_create(void *ctx, struct mlx5_aso_sq *sq, int socket,\n-\t\t   void *uar, uint32_t pdn,  uint16_t log_desc_n,\n-\t\t   uint32_t ts_format)\n-{\n-\tstruct mlx5_devx_create_sq_attr attr = {\n-\t\t.user_index = 0xFFFF,\n-\t\t.wq_attr = (struct mlx5_devx_wq_attr){\n-\t\t\t.pd = pdn,\n-\t\t\t.uar_page = mlx5_os_get_devx_uar_page_id(uar),\n-\t\t},\n-\t\t.ts_format = mlx5_ts_format_conv(ts_format),\n-\t};\n-\tstruct mlx5_devx_modify_sq_attr modify_attr = {\n-\t\t.state = MLX5_SQC_STATE_RDY,\n-\t};\n-\tuint32_t sq_desc_n = 1 << log_desc_n;\n-\tuint16_t log_wqbb_n;\n-\tint ret;\n-\n-\tif (mlx5_aso_devx_reg_mr(ctx, (MLX5_ASO_AGE_ACTIONS_PER_POOL / 8) *\n-\t\t\t\t sq_desc_n, &sq->mr, socket, pdn))\n-\t\treturn -1;\n-\tif (mlx5_aso_cq_create(ctx, &sq->cq, log_desc_n, socket,\n-\t\t\t       mlx5_os_get_devx_uar_page_id(uar)))\n-\t\tgoto error;\n-\tsq->log_desc_n = log_desc_n;\n-\tattr.cqn = sq->cq.cq_obj.cq->id;\n-\t/* for mlx5_aso_wqe that is twice the size of mlx5_wqe */\n-\tlog_wqbb_n = log_desc_n + 1;\n-\tret = mlx5_devx_sq_create(ctx, &sq->sq_obj, log_wqbb_n, &attr, socket);\n-\tif (ret) {\n-\t\tDRV_LOG(ERR, \"Can't create SQ object.\");\n-\t\trte_errno = ENOMEM;\n-\t\tgoto error;\n-\t}\n-\tret = mlx5_devx_cmd_modify_sq(sq->sq_obj.sq, &modify_attr);\n-\tif (ret) {\n-\t\tDRV_LOG(ERR, \"Can't change SQ state to ready.\");\n-\t\trte_errno = ENOMEM;\n-\t\tgoto error;\n-\t}\n-\tsq->pi = 0;\n-\tsq->head = 0;\n-\tsq->tail = 0;\n-\tsq->sqn = sq->sq_obj.sq->id;\n-\tsq->uar_addr = mlx5_os_get_devx_uar_reg_addr(uar);\n-\tmlx5_aso_init_sq(sq);\n-\treturn 0;\n-error:\n-\tmlx5_aso_destroy_sq(sq);\n-\treturn -1;\n-}\n-\n-/**\n- * API to create and initialize Send Queue used for ASO access.\n- *\n- * @param[in] sh\n- *   Pointer to shared device context.\n- *\n- * @return\n- *   0 on success, a negative errno value otherwise and rte_errno is set.\n- */\n-int\n-mlx5_aso_queue_init(struct mlx5_dev_ctx_shared *sh)\n-{\n-\treturn mlx5_aso_sq_create(sh->ctx, &sh->aso_age_mng->aso_sq, 0,\n-\t\t\t\t  sh->tx_uar, sh->pdn, MLX5_ASO_QUEUE_LOG_DESC,\n-\t\t\t\t  sh->sq_ts_format);\n-}\n-\n-/**\n- * API to destroy 
Send Queue used for ASO access.\n- *\n- * @param[in] sh\n- *   Pointer to shared device context.\n- */\n-void\n-mlx5_aso_queue_uninit(struct mlx5_dev_ctx_shared *sh)\n-{\n-\tmlx5_aso_destroy_sq(&sh->aso_age_mng->aso_sq);\n-}\n-\n-/**\n- * Write a burst of WQEs to ASO SQ.\n- *\n- * @param[in] mng\n- *   ASO management data, contains the SQ.\n- * @param[in] n\n- *   Index of the last valid pool.\n- *\n- * @return\n- *   Number of WQEs in burst.\n- */\n-static uint16_t\n-mlx5_aso_sq_enqueue_burst(struct mlx5_aso_age_mng *mng, uint16_t n)\n-{\n-\tvolatile struct mlx5_aso_wqe *wqe;\n-\tstruct mlx5_aso_sq *sq = &mng->aso_sq;\n-\tstruct mlx5_aso_age_pool *pool;\n-\tuint16_t size = 1 << sq->log_desc_n;\n-\tuint16_t mask = size - 1;\n-\tuint16_t max;\n-\tuint16_t start_head = sq->head;\n-\n-\tmax = RTE_MIN(size - (uint16_t)(sq->head - sq->tail), n - sq->next);\n-\tif (unlikely(!max))\n-\t\treturn 0;\n-\tsq->elts[start_head & mask].burst_size = max;\n-\tdo {\n-\t\twqe = &sq->sq_obj.aso_wqes[sq->head & mask];\n-\t\trte_prefetch0(&sq->sq_obj.aso_wqes[(sq->head + 1) & mask]);\n-\t\t/* Fill next WQE. */\n-\t\trte_spinlock_lock(&mng->resize_sl);\n-\t\tpool = mng->pools[sq->next];\n-\t\trte_spinlock_unlock(&mng->resize_sl);\n-\t\tsq->elts[sq->head & mask].pool = pool;\n-\t\twqe->general_cseg.misc =\n-\t\t\t\trte_cpu_to_be_32(((struct mlx5_devx_obj *)\n-\t\t\t\t\t\t (pool->flow_hit_aso_obj))->id);\n-\t\twqe->general_cseg.flags = RTE_BE32(MLX5_COMP_ONLY_FIRST_ERR <<\n-\t\t\t\t\t\t\t MLX5_COMP_MODE_OFFSET);\n-\t\twqe->general_cseg.opcode = rte_cpu_to_be_32\n-\t\t\t\t\t\t(MLX5_OPCODE_ACCESS_ASO |\n-\t\t\t\t\t\t (ASO_OPC_MOD_FLOW_HIT <<\n-\t\t\t\t\t\t  WQE_CSEG_OPC_MOD_OFFSET) |\n-\t\t\t\t\t\t (sq->pi <<\n-\t\t\t\t\t\t  WQE_CSEG_WQE_INDEX_OFFSET));\n-\t\tsq->pi += 2; /* Each WQE contains 2 WQEBB's. */\n-\t\tsq->head++;\n-\t\tsq->next++;\n-\t\tmax--;\n-\t} while (max);\n-\twqe->general_cseg.flags = RTE_BE32(MLX5_COMP_ALWAYS <<\n-\t\t\t\t\t\t\t MLX5_COMP_MODE_OFFSET);\n-\trte_io_wmb();\n-\tsq->sq_obj.db_rec[MLX5_SND_DBR] = rte_cpu_to_be_32(sq->pi);\n-\trte_wmb();\n-\t*sq->uar_addr = *(volatile uint64_t *)wqe; /* Assume 64 bit ARCH.*/\n-\trte_wmb();\n-\treturn sq->elts[start_head & mask].burst_size;\n-}\n-\n-/**\n- * Debug utility function. 
Dump contents of error CQE and WQE.\n- *\n- * @param[in] cqe\n- *   Error CQE to dump.\n- * @param[in] wqe\n- *   Error WQE to dump.\n- */\n-static void\n-mlx5_aso_dump_err_objs(volatile uint32_t *cqe, volatile uint32_t *wqe)\n-{\n-\tint i;\n-\n-\tDRV_LOG(ERR, \"Error cqe:\");\n-\tfor (i = 0; i < 16; i += 4)\n-\t\tDRV_LOG(ERR, \"%08X %08X %08X %08X\", cqe[i], cqe[i + 1],\n-\t\t\tcqe[i + 2], cqe[i + 3]);\n-\tDRV_LOG(ERR, \"\\nError wqe:\");\n-\tfor (i = 0; i < (int)sizeof(struct mlx5_aso_wqe) / 4; i += 4)\n-\t\tDRV_LOG(ERR, \"%08X %08X %08X %08X\", wqe[i], wqe[i + 1],\n-\t\t\twqe[i + 2], wqe[i + 3]);\n-}\n-\n-/**\n- * Handle case of error CQE.\n- *\n- * @param[in] sq\n- *   ASO SQ to use.\n- */\n-static void\n-mlx5_aso_cqe_err_handle(struct mlx5_aso_sq *sq)\n-{\n-\tstruct mlx5_aso_cq *cq = &sq->cq;\n-\tuint32_t idx = cq->cq_ci & ((1 << cq->log_desc_n) - 1);\n-\tvolatile struct mlx5_err_cqe *cqe =\n-\t\t\t(volatile struct mlx5_err_cqe *)&cq->cq_obj.cqes[idx];\n-\n-\tcq->errors++;\n-\tidx = rte_be_to_cpu_16(cqe->wqe_counter) & (1u << sq->log_desc_n);\n-\tmlx5_aso_dump_err_objs((volatile uint32_t *)cqe,\n-\t\t\t       (volatile uint32_t *)&sq->sq_obj.aso_wqes[idx]);\n-}\n-\n-/**\n- * Update ASO objects upon completion.\n- *\n- * @param[in] sh\n- *   Shared device context.\n- * @param[in] n\n- *   Number of completed ASO objects.\n- */\n-static void\n-mlx5_aso_age_action_update(struct mlx5_dev_ctx_shared *sh, uint16_t n)\n-{\n-\tstruct mlx5_aso_age_mng *mng = sh->aso_age_mng;\n-\tstruct mlx5_aso_sq *sq = &mng->aso_sq;\n-\tstruct mlx5_age_info *age_info;\n-\tconst uint16_t size = 1 << sq->log_desc_n;\n-\tconst uint16_t mask = size - 1;\n-\tconst uint64_t curr = MLX5_CURR_TIME_SEC;\n-\tuint16_t expected = AGE_CANDIDATE;\n-\tuint16_t i;\n-\n-\tfor (i = 0; i < n; ++i) {\n-\t\tuint16_t idx = (sq->tail + i) & mask;\n-\t\tstruct mlx5_aso_age_pool *pool = sq->elts[idx].pool;\n-\t\tuint64_t diff = curr - pool->time_of_last_age_check;\n-\t\tuint64_t *addr = sq->mr.buf;\n-\t\tint j;\n-\n-\t\taddr += idx * MLX5_ASO_AGE_ACTIONS_PER_POOL / 64;\n-\t\tpool->time_of_last_age_check = curr;\n-\t\tfor (j = 0; j < MLX5_ASO_AGE_ACTIONS_PER_POOL; j++) {\n-\t\t\tstruct mlx5_aso_age_action *act = &pool->actions[j];\n-\t\t\tstruct mlx5_age_param *ap = &act->age_params;\n-\t\t\tuint8_t byte;\n-\t\t\tuint8_t offset;\n-\t\t\tuint8_t *u8addr;\n-\t\t\tuint8_t hit;\n-\n-\t\t\tif (__atomic_load_n(&ap->state, __ATOMIC_RELAXED) !=\n-\t\t\t\t\t    AGE_CANDIDATE)\n-\t\t\t\tcontinue;\n-\t\t\tbyte = 63 - (j / 8);\n-\t\t\toffset = j % 8;\n-\t\t\tu8addr = (uint8_t *)addr;\n-\t\t\thit = (u8addr[byte] >> offset) & 0x1;\n-\t\t\tif (hit) {\n-\t\t\t\t__atomic_store_n(&ap->sec_since_last_hit, 0,\n-\t\t\t\t\t\t __ATOMIC_RELAXED);\n-\t\t\t} else {\n-\t\t\t\tstruct mlx5_priv *priv;\n-\n-\t\t\t\t__atomic_fetch_add(&ap->sec_since_last_hit,\n-\t\t\t\t\t\t   diff, __ATOMIC_RELAXED);\n-\t\t\t\t/* If timeout passed add to aged-out list. 
*/\n-\t\t\t\tif (ap->sec_since_last_hit <= ap->timeout)\n-\t\t\t\t\tcontinue;\n-\t\t\t\tpriv =\n-\t\t\t\trte_eth_devices[ap->port_id].data->dev_private;\n-\t\t\t\tage_info = GET_PORT_AGE_INFO(priv);\n-\t\t\t\trte_spinlock_lock(&age_info->aged_sl);\n-\t\t\t\tif (__atomic_compare_exchange_n(&ap->state,\n-\t\t\t\t\t\t\t\t&expected,\n-\t\t\t\t\t\t\t\tAGE_TMOUT,\n-\t\t\t\t\t\t\t\tfalse,\n-\t\t\t\t\t\t\t       __ATOMIC_RELAXED,\n-\t\t\t\t\t\t\t    __ATOMIC_RELAXED)) {\n-\t\t\t\t\tLIST_INSERT_HEAD(&age_info->aged_aso,\n-\t\t\t\t\t\t\t act, next);\n-\t\t\t\t\tMLX5_AGE_SET(age_info,\n-\t\t\t\t\t\t     MLX5_AGE_EVENT_NEW);\n-\t\t\t\t}\n-\t\t\t\trte_spinlock_unlock(&age_info->aged_sl);\n-\t\t\t}\n-\t\t}\n-\t}\n-\tmlx5_age_event_prepare(sh);\n-}\n-\n-/**\n- * Handle completions from WQEs sent to ASO SQ.\n- *\n- * @param[in] sh\n- *   Shared device context.\n- *\n- * @return\n- *   Number of CQEs handled.\n- */\n-static uint16_t\n-mlx5_aso_completion_handle(struct mlx5_dev_ctx_shared *sh)\n-{\n-\tstruct mlx5_aso_age_mng *mng = sh->aso_age_mng;\n-\tstruct mlx5_aso_sq *sq = &mng->aso_sq;\n-\tstruct mlx5_aso_cq *cq = &sq->cq;\n-\tvolatile struct mlx5_cqe *restrict cqe;\n-\tconst unsigned int cq_size = 1 << cq->log_desc_n;\n-\tconst unsigned int mask = cq_size - 1;\n-\tuint32_t idx;\n-\tuint32_t next_idx = cq->cq_ci & mask;\n-\tconst uint16_t max = (uint16_t)(sq->head - sq->tail);\n-\tuint16_t i = 0;\n-\tint ret;\n-\tif (unlikely(!max))\n-\t\treturn 0;\n-\tdo {\n-\t\tidx = next_idx;\n-\t\tnext_idx = (cq->cq_ci + 1) & mask;\n-\t\trte_prefetch0(&cq->cq_obj.cqes[next_idx]);\n-\t\tcqe = &cq->cq_obj.cqes[idx];\n-\t\tret = check_cqe(cqe, cq_size, cq->cq_ci);\n-\t\t/*\n-\t\t * Be sure owner read is done before any other cookie field or\n-\t\t * opaque field.\n-\t\t */\n-\t\trte_io_rmb();\n-\t\tif (unlikely(ret != MLX5_CQE_STATUS_SW_OWN)) {\n-\t\t\tif (likely(ret == MLX5_CQE_STATUS_HW_OWN))\n-\t\t\t\tbreak;\n-\t\t\tmlx5_aso_cqe_err_handle(sq);\n-\t\t} else {\n-\t\t\ti += sq->elts[(sq->tail + i) & mask].burst_size;\n-\t\t}\n-\t\tcq->cq_ci++;\n-\t} while (1);\n-\tif (likely(i)) {\n-\t\tmlx5_aso_age_action_update(sh, i);\n-\t\tsq->tail += i;\n-\t\trte_io_wmb();\n-\t\tcq->cq_obj.db_rec[0] = rte_cpu_to_be_32(cq->cq_ci);\n-\t}\n-\treturn i;\n-}\n-\n-/**\n- * Periodically read CQEs and send WQEs to ASO SQ.\n- *\n- * @param[in] arg\n- *   Shared device context containing the ASO SQ.\n- */\n-static void\n-mlx5_flow_aso_alarm(void *arg)\n-{\n-\tstruct mlx5_dev_ctx_shared *sh = arg;\n-\tstruct mlx5_aso_sq *sq = &sh->aso_age_mng->aso_sq;\n-\tuint32_t us = 100u;\n-\tuint16_t n;\n-\n-\trte_spinlock_lock(&sh->aso_age_mng->resize_sl);\n-\tn = sh->aso_age_mng->next;\n-\trte_spinlock_unlock(&sh->aso_age_mng->resize_sl);\n-\tmlx5_aso_completion_handle(sh);\n-\tif (sq->next == n) {\n-\t\t/* End of loop: wait 1 second. 
*/\n-\t\tus = US_PER_S;\n-\t\tsq->next = 0;\n-\t}\n-\tmlx5_aso_sq_enqueue_burst(sh->aso_age_mng, n);\n-\tif (rte_eal_alarm_set(us, mlx5_flow_aso_alarm, sh))\n-\t\tDRV_LOG(ERR, \"Cannot reinitialize aso alarm.\");\n-}\n-\n-/**\n- * API to start ASO access using ASO SQ.\n- *\n- * @param[in] sh\n- *   Pointer to shared device context.\n- *\n- * @return\n- *   0 on success, a negative errno value otherwise and rte_errno is set.\n- */\n-int\n-mlx5_aso_queue_start(struct mlx5_dev_ctx_shared *sh)\n-{\n-\tif (rte_eal_alarm_set(US_PER_S, mlx5_flow_aso_alarm, sh)) {\n-\t\tDRV_LOG(ERR, \"Cannot reinitialize ASO age alarm.\");\n-\t\treturn -rte_errno;\n-\t}\n-\treturn 0;\n-}\n-\n-/**\n- * API to stop ASO access using ASO SQ.\n- *\n- * @param[in] sh\n- *   Pointer to shared device context.\n- *\n- * @return\n- *   0 on success, a negative errno value otherwise and rte_errno is set.\n- */\n-int\n-mlx5_aso_queue_stop(struct mlx5_dev_ctx_shared *sh)\n-{\n-\tint retries = 1024;\n-\n-\tif (!sh->aso_age_mng->aso_sq.sq_obj.sq)\n-\t\treturn -EINVAL;\n-\trte_errno = 0;\n-\twhile (--retries) {\n-\t\trte_eal_alarm_cancel(mlx5_flow_aso_alarm, sh);\n-\t\tif (rte_errno != EINPROGRESS)\n-\t\t\tbreak;\n-\t\trte_pause();\n-\t}\n-\treturn -rte_errno;\n-}\ndiff --git a/drivers/net/mlx5/mlx5_flow_aso.c b/drivers/net/mlx5/mlx5_flow_aso.c\nnew file mode 100644\nindex 0000000..067471b\n--- /dev/null\n+++ b/drivers/net/mlx5/mlx5_flow_aso.c\n@@ -0,0 +1,659 @@\n+/* SPDX-License-Identifier: BSD-3-Clause\n+ * Copyright 2020 Mellanox Technologies, Ltd\n+ */\n+#include <mlx5_prm.h>\n+#include <rte_malloc.h>\n+#include <rte_cycles.h>\n+#include <rte_eal_paging.h>\n+\n+#include <mlx5_malloc.h>\n+#include <mlx5_common_os.h>\n+#include <mlx5_common_devx.h>\n+\n+#include \"mlx5.h\"\n+#include \"mlx5_flow.h\"\n+\n+\n+/**\n+ * Destroy Completion Queue used for ASO access.\n+ *\n+ * @param[in] cq\n+ *   ASO CQ to destroy.\n+ */\n+static void\n+mlx5_aso_cq_destroy(struct mlx5_aso_cq *cq)\n+{\n+\tif (cq->cq_obj.cq)\n+\t\tmlx5_devx_cq_destroy(&cq->cq_obj);\n+\tmemset(cq, 0, sizeof(*cq));\n+}\n+\n+/**\n+ * Create Completion Queue used for ASO access.\n+ *\n+ * @param[in] ctx\n+ *   Context returned from mlx5 open_device() glue function.\n+ * @param[in/out] cq\n+ *   Pointer to CQ to create.\n+ * @param[in] log_desc_n\n+ *   Log of number of descriptors in queue.\n+ * @param[in] socket\n+ *   Socket to use for allocation.\n+ * @param[in] uar_page_id\n+ *   UAR page ID to use.\n+ *\n+ * @return\n+ *   0 on success, a negative errno value otherwise and rte_errno is set.\n+ */\n+static int\n+mlx5_aso_cq_create(void *ctx, struct mlx5_aso_cq *cq, uint16_t log_desc_n,\n+\t\t   int socket, int uar_page_id)\n+{\n+\tstruct mlx5_devx_cq_attr attr = {\n+\t\t.uar_page_id = uar_page_id,\n+\t};\n+\n+\tcq->log_desc_n = log_desc_n;\n+\tcq->cq_ci = 0;\n+\treturn mlx5_devx_cq_create(ctx, &cq->cq_obj, log_desc_n, &attr, socket);\n+}\n+\n+/**\n+ * Free MR resources.\n+ *\n+ * @param[in] mr\n+ *   MR to free.\n+ */\n+static void\n+mlx5_aso_devx_dereg_mr(struct mlx5_aso_devx_mr *mr)\n+{\n+\tclaim_zero(mlx5_devx_cmd_destroy(mr->mkey));\n+\tif (!mr->is_indirect && mr->umem)\n+\t\tclaim_zero(mlx5_glue->devx_umem_dereg(mr->umem));\n+\tmlx5_free(mr->buf);\n+\tmemset(mr, 0, sizeof(*mr));\n+}\n+\n+/**\n+ * Register Memory Region.\n+ *\n+ * @param[in] ctx\n+ *   Context returned from mlx5 open_device() glue function.\n+ * @param[in] length\n+ *   Size of MR buffer.\n+ * @param[in/out] mr\n+ *   Pointer to MR to create.\n+ * @param[in] socket\n+ *   Socket to use for 
allocation.\n+ * @param[in] pdn\n+ *   Protection Domain number to use.\n+ *\n+ * @return\n+ *   0 on success, a negative errno value otherwise and rte_errno is set.\n+ */\n+static int\n+mlx5_aso_devx_reg_mr(void *ctx, size_t length, struct mlx5_aso_devx_mr *mr,\n+\t\t     int socket, int pdn)\n+{\n+\tstruct mlx5_devx_mkey_attr mkey_attr;\n+\n+\tmr->buf = mlx5_malloc(MLX5_MEM_RTE | MLX5_MEM_ZERO, length, 4096,\n+\t\t\t      socket);\n+\tif (!mr->buf) {\n+\t\tDRV_LOG(ERR, \"Failed to create ASO bits mem for MR by Devx.\");\n+\t\treturn -1;\n+\t}\n+\tmr->umem = mlx5_os_umem_reg(ctx, mr->buf, length,\n+\t\t\t\t\t\t IBV_ACCESS_LOCAL_WRITE);\n+\tif (!mr->umem) {\n+\t\tDRV_LOG(ERR, \"Failed to register Umem for MR by Devx.\");\n+\t\tgoto error;\n+\t}\n+\tmkey_attr.addr = (uintptr_t)mr->buf;\n+\tmkey_attr.size = length;\n+\tmkey_attr.umem_id = mlx5_os_get_umem_id(mr->umem);\n+\tmkey_attr.pd = pdn;\n+\tmkey_attr.pg_access = 1;\n+\tmkey_attr.klm_array = NULL;\n+\tmkey_attr.klm_num = 0;\n+\tmkey_attr.relaxed_ordering_read = 0;\n+\tmkey_attr.relaxed_ordering_write = 0;\n+\tmr->mkey = mlx5_devx_cmd_mkey_create(ctx, &mkey_attr);\n+\tif (!mr->mkey) {\n+\t\tDRV_LOG(ERR, \"Failed to create direct Mkey.\");\n+\t\tgoto error;\n+\t}\n+\tmr->length = length;\n+\tmr->is_indirect = false;\n+\treturn 0;\n+error:\n+\tif (mr->umem)\n+\t\tclaim_zero(mlx5_glue->devx_umem_dereg(mr->umem));\n+\tmlx5_free(mr->buf);\n+\treturn -1;\n+}\n+\n+/**\n+ * Destroy Send Queue used for ASO access.\n+ *\n+ * @param[in] sq\n+ *   ASO SQ to destroy.\n+ */\n+static void\n+mlx5_aso_destroy_sq(struct mlx5_aso_sq *sq)\n+{\n+\tmlx5_devx_sq_destroy(&sq->sq_obj);\n+\tmlx5_aso_cq_destroy(&sq->cq);\n+\tmemset(sq, 0, sizeof(*sq));\n+}\n+\n+/**\n+ * Initialize Send Queue used for ASO access.\n+ *\n+ * @param[in] sq\n+ *   ASO SQ to initialize.\n+ */\n+static void\n+mlx5_aso_age_init_sq(struct mlx5_aso_sq *sq)\n+{\n+\tvolatile struct mlx5_aso_wqe *restrict wqe;\n+\tint i;\n+\tint size = 1 << sq->log_desc_n;\n+\tuint64_t addr;\n+\n+\t/* All the next fields state should stay constant. */\n+\tfor (i = 0, wqe = &sq->sq_obj.aso_wqes[0]; i < size; ++i, ++wqe) {\n+\t\twqe->general_cseg.sq_ds = rte_cpu_to_be_32((sq->sqn << 8) |\n+\t\t\t\t\t\t\t  (sizeof(*wqe) >> 4));\n+\t\twqe->aso_cseg.lkey = rte_cpu_to_be_32(sq->mr.mkey->id);\n+\t\taddr = (uint64_t)((uint64_t *)sq->mr.buf + i *\n+\t\t\t\t\t    MLX5_ASO_AGE_ACTIONS_PER_POOL / 64);\n+\t\twqe->aso_cseg.va_h = rte_cpu_to_be_32((uint32_t)(addr >> 32));\n+\t\twqe->aso_cseg.va_l_r = rte_cpu_to_be_32((uint32_t)addr | 1u);\n+\t\twqe->aso_cseg.operand_masks = rte_cpu_to_be_32\n+\t\t\t(0u |\n+\t\t\t (ASO_OPER_LOGICAL_OR << ASO_CSEG_COND_OPER_OFFSET) |\n+\t\t\t (ASO_OP_ALWAYS_TRUE << ASO_CSEG_COND_1_OPER_OFFSET) |\n+\t\t\t (ASO_OP_ALWAYS_TRUE << ASO_CSEG_COND_0_OPER_OFFSET) |\n+\t\t\t (BYTEWISE_64BYTE << ASO_CSEG_DATA_MASK_MODE_OFFSET));\n+\t\twqe->aso_cseg.data_mask = RTE_BE64(UINT64_MAX);\n+\t}\n+}\n+\n+/**\n+ * Initialize Send Queue used for ASO flow meter access.\n+ *\n+ * @param[in] sq\n+ *   ASO SQ to initialize.\n+ */\n+static void\n+mlx5_aso_mtr_init_sq(struct mlx5_aso_sq *sq)\n+{\n+\tvolatile struct mlx5_aso_wqe *restrict wqe;\n+\tint i;\n+\tint size = 1 << sq->log_desc_n;\n+\tuint32_t idx;\n+\n+\t/* All the next fields state should stay constant. 
*/\n+\tfor (i = 0, wqe = &sq->sq_obj.aso_wqes[0]; i < size; ++i, ++wqe) {\n+\t\twqe->general_cseg.sq_ds = rte_cpu_to_be_32((sq->sqn << 8) |\n+\t\t\t\t\t\t\t  (sizeof(*wqe) >> 4));\n+\t\twqe->aso_cseg.operand_masks = RTE_BE32(0u |\n+\t\t\t (ASO_OPER_LOGICAL_OR << ASO_CSEG_COND_OPER_OFFSET) |\n+\t\t\t (ASO_OP_ALWAYS_TRUE << ASO_CSEG_COND_1_OPER_OFFSET) |\n+\t\t\t (ASO_OP_ALWAYS_TRUE << ASO_CSEG_COND_0_OPER_OFFSET) |\n+\t\t\t (BYTEWISE_64BYTE << ASO_CSEG_DATA_MASK_MODE_OFFSET));\n+\t\twqe->general_cseg.flags = RTE_BE32(MLX5_COMP_ALWAYS <<\n+\t\t\t\t\t\t\t MLX5_COMP_MODE_OFFSET);\n+\t\tfor (idx = 0; idx < MLX5_ASO_METERS_PER_WQE;\n+\t\t\tidx++)\n+\t\t\twqe->aso_dseg.mtrs[idx].v_bo_sc_bbog_mm =\n+\t\t\t\tRTE_BE32((1 << ASO_DSEG_VALID_OFFSET) |\n+\t\t\t\t(MLX5_FLOW_COLOR_GREEN << ASO_DSEG_SC_OFFSET));\n+\t}\n+}\n+\n+/**\n+ * Create Send Queue used for ASO access.\n+ *\n+ * @param[in] ctx\n+ *   Context returned from mlx5 open_device() glue function.\n+ * @param[in/out] sq\n+ *   Pointer to SQ to create.\n+ * @param[in] socket\n+ *   Socket to use for allocation.\n+ * @param[in] uar\n+ *   User Access Region object.\n+ * @param[in] pdn\n+ *   Protection Domain number to use.\n+ * @param[in] log_desc_n\n+ *   Log of number of descriptors in queue.\n+ *\n+ * @return\n+ *   0 on success, a negative errno value otherwise and rte_errno is set.\n+ */\n+static int\n+mlx5_aso_sq_create(void *ctx, struct mlx5_aso_sq *sq, int socket,\n+\t\t   void *uar, uint32_t pdn,  uint16_t log_desc_n,\n+\t\t   uint32_t ts_format)\n+{\n+\tstruct mlx5_devx_create_sq_attr attr = {\n+\t\t.user_index = 0xFFFF,\n+\t\t.wq_attr = (struct mlx5_devx_wq_attr){\n+\t\t\t.pd = pdn,\n+\t\t\t.uar_page = mlx5_os_get_devx_uar_page_id(uar),\n+\t\t},\n+\t\t.ts_format = mlx5_ts_format_conv(ts_format),\n+\t};\n+\tstruct mlx5_devx_modify_sq_attr modify_attr = {\n+\t\t.state = MLX5_SQC_STATE_RDY,\n+\t};\n+\tuint16_t log_wqbb_n;\n+\tint ret;\n+\n+\tif (mlx5_aso_cq_create(ctx, &sq->cq, log_desc_n, socket,\n+\t\t\t       mlx5_os_get_devx_uar_page_id(uar)))\n+\t\tgoto error;\n+\tsq->log_desc_n = log_desc_n;\n+\tattr.cqn = sq->cq.cq_obj.cq->id;\n+\t/* for mlx5_aso_wqe that is twice the size of mlx5_wqe */\n+\tlog_wqbb_n = log_desc_n + 1;\n+\tret = mlx5_devx_sq_create(ctx, &sq->sq_obj, log_wqbb_n, &attr, socket);\n+\tif (ret) {\n+\t\tDRV_LOG(ERR, \"Can't create SQ object.\");\n+\t\trte_errno = ENOMEM;\n+\t\tgoto error;\n+\t}\n+\tret = mlx5_devx_cmd_modify_sq(sq->sq_obj.sq, &modify_attr);\n+\tif (ret) {\n+\t\tDRV_LOG(ERR, \"Can't change SQ state to ready.\");\n+\t\trte_errno = ENOMEM;\n+\t\tgoto error;\n+\t}\n+\tsq->pi = 0;\n+\tsq->head = 0;\n+\tsq->tail = 0;\n+\tsq->sqn = sq->sq_obj.sq->id;\n+\tsq->uar_addr = mlx5_os_get_devx_uar_reg_addr(uar);\n+\treturn 0;\n+error:\n+\tmlx5_aso_destroy_sq(sq);\n+\treturn -1;\n+}\n+\n+/**\n+ * API to create and initialize Send Queue used for ASO access.\n+ *\n+ * @param[in] sh\n+ *   Pointer to shared device context.\n+ *\n+ * @return\n+ *   0 on success, a negative errno value otherwise and rte_errno is set.\n+ */\n+int\n+mlx5_aso_queue_init(struct mlx5_dev_ctx_shared *sh,\n+\t\t\tenum mlx5_access_aso_opc_mod aso_opc_mod)\n+{\n+\tuint32_t sq_desc_n = 1 << MLX5_ASO_QUEUE_LOG_DESC;\n+\n+\tswitch (aso_opc_mod) {\n+\tcase ASO_OPC_MOD_FLOW_HIT:\n+\t\tif (mlx5_aso_devx_reg_mr(sh->ctx,\n+\t\t\t(MLX5_ASO_AGE_ACTIONS_PER_POOL / 8) *\n+\t\t\tsq_desc_n, &sh->aso_age_mng->aso_sq.mr, 0, sh->pdn))\n+\t\t\treturn -1;\n+\t\tif (mlx5_aso_sq_create(sh->ctx, &sh->aso_age_mng->aso_sq, 0,\n+\t\t\t\t  sh->tx_uar, sh->pdn, 
MLX5_ASO_QUEUE_LOG_DESC,\n+\t\t\t\t  sh->sq_ts_format)) {\n+\t\t\tmlx5_aso_devx_dereg_mr(&sh->aso_age_mng->aso_sq.mr);\n+\t\t\treturn -1;\n+\t\t}\n+\t\tmlx5_aso_age_init_sq(&sh->aso_age_mng->aso_sq);\n+\t\tbreak;\n+\tcase ASO_OPC_MOD_POLICER:\n+\t\tif (mlx5_aso_sq_create(sh->ctx, &sh->mtrmng->sq, 0,\n+\t\t\t\t  sh->tx_uar, sh->pdn, MLX5_ASO_QUEUE_LOG_DESC,\n+\t\t\t\t  sh->sq_ts_format))\n+\t\t\treturn -1;\n+\t\tmlx5_aso_mtr_init_sq(&sh->mtrmng->sq);\n+\t\tbreak;\n+\tdefault:\n+\t\tDRV_LOG(ERR, \"Unknown ASO operation mode\");\n+\t\treturn -1;\n+\t}\n+\treturn 0;\n+}\n+\n+/**\n+ * API to destroy Send Queue used for ASO access.\n+ *\n+ * @param[in] sh\n+ *   Pointer to shared device context.\n+ */\n+void\n+mlx5_aso_queue_uninit(struct mlx5_dev_ctx_shared *sh,\n+\t\t\t\tenum mlx5_access_aso_opc_mod aso_opc_mod)\n+{\n+\tstruct mlx5_aso_sq *sq;\n+\n+\tswitch (aso_opc_mod) {\n+\tcase ASO_OPC_MOD_FLOW_HIT:\n+\t\tmlx5_aso_devx_dereg_mr(&sh->aso_age_mng->aso_sq.mr);\n+\t\tsq = &sh->aso_age_mng->aso_sq;\n+\t\tbreak;\n+\tcase ASO_OPC_MOD_POLICER:\n+\t\tsq = &sh->mtrmng->sq;\n+\t\tbreak;\n+\tdefault:\n+\t\tDRV_LOG(ERR, \"Unknown ASO operation mode\");\n+\t\treturn;\n+\t}\n+\tmlx5_aso_destroy_sq(sq);\n+}\n+\n+/**\n+ * Write a burst of WQEs to ASO SQ.\n+ *\n+ * @param[in] mng\n+ *   ASO management data, contains the SQ.\n+ * @param[in] n\n+ *   Index of the last valid pool.\n+ *\n+ * @return\n+ *   Number of WQEs in burst.\n+ */\n+static uint16_t\n+mlx5_aso_sq_enqueue_burst(struct mlx5_aso_age_mng *mng, uint16_t n)\n+{\n+\tvolatile struct mlx5_aso_wqe *wqe;\n+\tstruct mlx5_aso_sq *sq = &mng->aso_sq;\n+\tstruct mlx5_aso_age_pool *pool;\n+\tuint16_t size = 1 << sq->log_desc_n;\n+\tuint16_t mask = size - 1;\n+\tuint16_t max;\n+\tuint16_t start_head = sq->head;\n+\n+\tmax = RTE_MIN(size - (uint16_t)(sq->head - sq->tail), n - sq->next);\n+\tif (unlikely(!max))\n+\t\treturn 0;\n+\tsq->elts[start_head & mask].burst_size = max;\n+\tdo {\n+\t\twqe = &sq->sq_obj.aso_wqes[sq->head & mask];\n+\t\trte_prefetch0(&sq->sq_obj.aso_wqes[(sq->head + 1) & mask]);\n+\t\t/* Fill next WQE. */\n+\t\trte_spinlock_lock(&mng->resize_sl);\n+\t\tpool = mng->pools[sq->next];\n+\t\trte_spinlock_unlock(&mng->resize_sl);\n+\t\tsq->elts[sq->head & mask].pool = pool;\n+\t\twqe->general_cseg.misc =\n+\t\t\t\trte_cpu_to_be_32(((struct mlx5_devx_obj *)\n+\t\t\t\t\t\t (pool->flow_hit_aso_obj))->id);\n+\t\twqe->general_cseg.flags = RTE_BE32(MLX5_COMP_ONLY_FIRST_ERR <<\n+\t\t\t\t\t\t\t MLX5_COMP_MODE_OFFSET);\n+\t\twqe->general_cseg.opcode = rte_cpu_to_be_32\n+\t\t\t\t\t\t(MLX5_OPCODE_ACCESS_ASO |\n+\t\t\t\t\t\t (ASO_OPC_MOD_FLOW_HIT <<\n+\t\t\t\t\t\t  WQE_CSEG_OPC_MOD_OFFSET) |\n+\t\t\t\t\t\t (sq->pi <<\n+\t\t\t\t\t\t  WQE_CSEG_WQE_INDEX_OFFSET));\n+\t\tsq->pi += 2; /* Each WQE contains 2 WQEBB's. */\n+\t\tsq->head++;\n+\t\tsq->next++;\n+\t\tmax--;\n+\t} while (max);\n+\twqe->general_cseg.flags = RTE_BE32(MLX5_COMP_ALWAYS <<\n+\t\t\t\t\t\t\t MLX5_COMP_MODE_OFFSET);\n+\trte_io_wmb();\n+\tsq->sq_obj.db_rec[MLX5_SND_DBR] = rte_cpu_to_be_32(sq->pi);\n+\trte_wmb();\n+\t*sq->uar_addr = *(volatile uint64_t *)wqe; /* Assume 64 bit ARCH.*/\n+\trte_wmb();\n+\treturn sq->elts[start_head & mask].burst_size;\n+}\n+\n+/**\n+ * Debug utility function. 
Dump contents of error CQE and WQE.\n+ *\n+ * @param[in] cqe\n+ *   Error CQE to dump.\n+ * @param[in] wqe\n+ *   Error WQE to dump.\n+ */\n+static void\n+mlx5_aso_dump_err_objs(volatile uint32_t *cqe, volatile uint32_t *wqe)\n+{\n+\tint i;\n+\n+\tDRV_LOG(ERR, \"Error cqe:\");\n+\tfor (i = 0; i < 16; i += 4)\n+\t\tDRV_LOG(ERR, \"%08X %08X %08X %08X\", cqe[i], cqe[i + 1],\n+\t\t\tcqe[i + 2], cqe[i + 3]);\n+\tDRV_LOG(ERR, \"\\nError wqe:\");\n+\tfor (i = 0; i < (int)sizeof(struct mlx5_aso_wqe) / 4; i += 4)\n+\t\tDRV_LOG(ERR, \"%08X %08X %08X %08X\", wqe[i], wqe[i + 1],\n+\t\t\twqe[i + 2], wqe[i + 3]);\n+}\n+\n+/**\n+ * Handle case of error CQE.\n+ *\n+ * @param[in] sq\n+ *   ASO SQ to use.\n+ */\n+static void\n+mlx5_aso_cqe_err_handle(struct mlx5_aso_sq *sq)\n+{\n+\tstruct mlx5_aso_cq *cq = &sq->cq;\n+\tuint32_t idx = cq->cq_ci & ((1 << cq->log_desc_n) - 1);\n+\tvolatile struct mlx5_err_cqe *cqe =\n+\t\t\t(volatile struct mlx5_err_cqe *)&cq->cq_obj.cqes[idx];\n+\n+\tcq->errors++;\n+\tidx = rte_be_to_cpu_16(cqe->wqe_counter) & (1u << sq->log_desc_n);\n+\tmlx5_aso_dump_err_objs((volatile uint32_t *)cqe,\n+\t\t\t       (volatile uint32_t *)&sq->sq_obj.aso_wqes[idx]);\n+}\n+\n+/**\n+ * Update ASO objects upon completion.\n+ *\n+ * @param[in] sh\n+ *   Shared device context.\n+ * @param[in] n\n+ *   Number of completed ASO objects.\n+ */\n+static void\n+mlx5_aso_age_action_update(struct mlx5_dev_ctx_shared *sh, uint16_t n)\n+{\n+\tstruct mlx5_aso_age_mng *mng = sh->aso_age_mng;\n+\tstruct mlx5_aso_sq *sq = &mng->aso_sq;\n+\tstruct mlx5_age_info *age_info;\n+\tconst uint16_t size = 1 << sq->log_desc_n;\n+\tconst uint16_t mask = size - 1;\n+\tconst uint64_t curr = MLX5_CURR_TIME_SEC;\n+\tuint16_t expected = AGE_CANDIDATE;\n+\tuint16_t i;\n+\n+\tfor (i = 0; i < n; ++i) {\n+\t\tuint16_t idx = (sq->tail + i) & mask;\n+\t\tstruct mlx5_aso_age_pool *pool = sq->elts[idx].pool;\n+\t\tuint64_t diff = curr - pool->time_of_last_age_check;\n+\t\tuint64_t *addr = sq->mr.buf;\n+\t\tint j;\n+\n+\t\taddr += idx * MLX5_ASO_AGE_ACTIONS_PER_POOL / 64;\n+\t\tpool->time_of_last_age_check = curr;\n+\t\tfor (j = 0; j < MLX5_ASO_AGE_ACTIONS_PER_POOL; j++) {\n+\t\t\tstruct mlx5_aso_age_action *act = &pool->actions[j];\n+\t\t\tstruct mlx5_age_param *ap = &act->age_params;\n+\t\t\tuint8_t byte;\n+\t\t\tuint8_t offset;\n+\t\t\tuint8_t *u8addr;\n+\t\t\tuint8_t hit;\n+\n+\t\t\tif (__atomic_load_n(&ap->state, __ATOMIC_RELAXED) !=\n+\t\t\t\t\t    AGE_CANDIDATE)\n+\t\t\t\tcontinue;\n+\t\t\tbyte = 63 - (j / 8);\n+\t\t\toffset = j % 8;\n+\t\t\tu8addr = (uint8_t *)addr;\n+\t\t\thit = (u8addr[byte] >> offset) & 0x1;\n+\t\t\tif (hit) {\n+\t\t\t\t__atomic_store_n(&ap->sec_since_last_hit, 0,\n+\t\t\t\t\t\t __ATOMIC_RELAXED);\n+\t\t\t} else {\n+\t\t\t\tstruct mlx5_priv *priv;\n+\n+\t\t\t\t__atomic_fetch_add(&ap->sec_since_last_hit,\n+\t\t\t\t\t\t   diff, __ATOMIC_RELAXED);\n+\t\t\t\t/* If timeout passed add to aged-out list. 
*/\n+\t\t\t\tif (ap->sec_since_last_hit <= ap->timeout)\n+\t\t\t\t\tcontinue;\n+\t\t\t\tpriv =\n+\t\t\t\trte_eth_devices[ap->port_id].data->dev_private;\n+\t\t\t\tage_info = GET_PORT_AGE_INFO(priv);\n+\t\t\t\trte_spinlock_lock(&age_info->aged_sl);\n+\t\t\t\tif (__atomic_compare_exchange_n(&ap->state,\n+\t\t\t\t\t\t\t\t&expected,\n+\t\t\t\t\t\t\t\tAGE_TMOUT,\n+\t\t\t\t\t\t\t\tfalse,\n+\t\t\t\t\t\t\t       __ATOMIC_RELAXED,\n+\t\t\t\t\t\t\t    __ATOMIC_RELAXED)) {\n+\t\t\t\t\tLIST_INSERT_HEAD(&age_info->aged_aso,\n+\t\t\t\t\t\t\t act, next);\n+\t\t\t\t\tMLX5_AGE_SET(age_info,\n+\t\t\t\t\t\t     MLX5_AGE_EVENT_NEW);\n+\t\t\t\t}\n+\t\t\t\trte_spinlock_unlock(&age_info->aged_sl);\n+\t\t\t}\n+\t\t}\n+\t}\n+\tmlx5_age_event_prepare(sh);\n+}\n+\n+/**\n+ * Handle completions from WQEs sent to ASO SQ.\n+ *\n+ * @param[in] sh\n+ *   Shared device context.\n+ *\n+ * @return\n+ *   Number of CQEs handled.\n+ */\n+static uint16_t\n+mlx5_aso_completion_handle(struct mlx5_dev_ctx_shared *sh)\n+{\n+\tstruct mlx5_aso_age_mng *mng = sh->aso_age_mng;\n+\tstruct mlx5_aso_sq *sq = &mng->aso_sq;\n+\tstruct mlx5_aso_cq *cq = &sq->cq;\n+\tvolatile struct mlx5_cqe *restrict cqe;\n+\tconst unsigned int cq_size = 1 << cq->log_desc_n;\n+\tconst unsigned int mask = cq_size - 1;\n+\tuint32_t idx;\n+\tuint32_t next_idx = cq->cq_ci & mask;\n+\tconst uint16_t max = (uint16_t)(sq->head - sq->tail);\n+\tuint16_t i = 0;\n+\tint ret;\n+\tif (unlikely(!max))\n+\t\treturn 0;\n+\tdo {\n+\t\tidx = next_idx;\n+\t\tnext_idx = (cq->cq_ci + 1) & mask;\n+\t\trte_prefetch0(&cq->cq_obj.cqes[next_idx]);\n+\t\tcqe = &cq->cq_obj.cqes[idx];\n+\t\tret = check_cqe(cqe, cq_size, cq->cq_ci);\n+\t\t/*\n+\t\t * Be sure owner read is done before any other cookie field or\n+\t\t * opaque field.\n+\t\t */\n+\t\trte_io_rmb();\n+\t\tif (unlikely(ret != MLX5_CQE_STATUS_SW_OWN)) {\n+\t\t\tif (likely(ret == MLX5_CQE_STATUS_HW_OWN))\n+\t\t\t\tbreak;\n+\t\t\tmlx5_aso_cqe_err_handle(sq);\n+\t\t} else {\n+\t\t\ti += sq->elts[(sq->tail + i) & mask].burst_size;\n+\t\t}\n+\t\tcq->cq_ci++;\n+\t} while (1);\n+\tif (likely(i)) {\n+\t\tmlx5_aso_age_action_update(sh, i);\n+\t\tsq->tail += i;\n+\t\trte_io_wmb();\n+\t\tcq->cq_obj.db_rec[0] = rte_cpu_to_be_32(cq->cq_ci);\n+\t}\n+\treturn i;\n+}\n+\n+/**\n+ * Periodically read CQEs and send WQEs to ASO SQ.\n+ *\n+ * @param[in] arg\n+ *   Shared device context containing the ASO SQ.\n+ */\n+static void\n+mlx5_flow_aso_alarm(void *arg)\n+{\n+\tstruct mlx5_dev_ctx_shared *sh = arg;\n+\tstruct mlx5_aso_sq *sq = &sh->aso_age_mng->aso_sq;\n+\tuint32_t us = 100u;\n+\tuint16_t n;\n+\n+\trte_spinlock_lock(&sh->aso_age_mng->resize_sl);\n+\tn = sh->aso_age_mng->next;\n+\trte_spinlock_unlock(&sh->aso_age_mng->resize_sl);\n+\tmlx5_aso_completion_handle(sh);\n+\tif (sq->next == n) {\n+\t\t/* End of loop: wait 1 second. 
*/\n+\t\tus = US_PER_S;\n+\t\tsq->next = 0;\n+\t}\n+\tmlx5_aso_sq_enqueue_burst(sh->aso_age_mng, n);\n+\tif (rte_eal_alarm_set(us, mlx5_flow_aso_alarm, sh))\n+\t\tDRV_LOG(ERR, \"Cannot reinitialize aso alarm.\");\n+}\n+\n+/**\n+ * API to start ASO access using ASO SQ.\n+ *\n+ * @param[in] sh\n+ *   Pointer to shared device context.\n+ *\n+ * @return\n+ *   0 on success, a negative errno value otherwise and rte_errno is set.\n+ */\n+int\n+mlx5_aso_flow_hit_queue_poll_start(struct mlx5_dev_ctx_shared *sh)\n+{\n+\tif (rte_eal_alarm_set(US_PER_S, mlx5_flow_aso_alarm, sh)) {\n+\t\tDRV_LOG(ERR, \"Cannot reinitialize ASO age alarm.\");\n+\t\treturn -rte_errno;\n+\t}\n+\treturn 0;\n+}\n+\n+/**\n+ * API to stop ASO access using ASO SQ.\n+ *\n+ * @param[in] sh\n+ *   Pointer to shared device context.\n+ *\n+ * @return\n+ *   0 on success, a negative errno value otherwise and rte_errno is set.\n+ */\n+int\n+mlx5_aso_flow_hit_queue_poll_stop(struct mlx5_dev_ctx_shared *sh)\n+{\n+\tint retries = 1024;\n+\n+\tif (!sh->aso_age_mng->aso_sq.sq_obj.sq)\n+\t\treturn -EINVAL;\n+\trte_errno = 0;\n+\twhile (--retries) {\n+\t\trte_eal_alarm_cancel(mlx5_flow_aso_alarm, sh);\n+\t\tif (rte_errno != EINPROGRESS)\n+\t\t\tbreak;\n+\t\trte_pause();\n+\t}\n+\treturn -rte_errno;\n+}\ndiff --git a/drivers/net/mlx5/mlx5_flow_dv.c b/drivers/net/mlx5/mlx5_flow_dv.c\nindex 4b2a272..a621417 100644\n--- a/drivers/net/mlx5/mlx5_flow_dv.c\n+++ b/drivers/net/mlx5/mlx5_flow_dv.c\n@@ -5962,6 +5962,11 @@ struct mlx5_hlist_entry *\n \t\trte_errno = ENOMEM;\n \t\treturn -ENOMEM;\n \t}\n+\tif (!mtrmng->n)\n+\t\tif (mlx5_aso_queue_init(priv->sh, ASO_OPC_MOD_POLICER)) {\n+\t\t\tmlx5_free(pools);\n+\t\t\treturn -ENOMEM;\n+\t\t}\n \tif (old_pools)\n \t\tmemcpy(pools, old_pools, mtrmng->n *\n \t\t\t\t       sizeof(struct mlx5_aso_mtr_pool *));\n@@ -10834,7 +10839,7 @@ struct mlx5_cache_entry *\n \t\tmlx5_free(old_pools);\n \t} else {\n \t\t/* First ASO flow hit allocation - starting ASO data-path. 
*/\n-\t\tint ret = mlx5_aso_queue_start(priv->sh);\n+\t\tint ret = mlx5_aso_flow_hit_queue_poll_start(priv->sh);\n \n \t\tif (ret) {\n \t\t\tmlx5_free(pools);\ndiff --git a/drivers/net/mlx5/mlx5_flow_meter.c b/drivers/net/mlx5/mlx5_flow_meter.c\nindex 956a6c3..ef4ca30 100644\n--- a/drivers/net/mlx5/mlx5_flow_meter.c\n+++ b/drivers/net/mlx5/mlx5_flow_meter.c\n@@ -811,7 +811,6 @@\n \t\t\tstruct rte_mtr_error *error)\n {\n \tstruct mlx5_priv *priv = dev->data->dev_private;\n-\tstruct mlx5_aso_mtr_pools_mng *mtrmng = priv->sh->mtrmng;\n \tstruct mlx5_flow_meter_info *fm;\n \tconst struct rte_flow_attr attr = {\n \t\t\t\t.ingress = 1,\n@@ -836,7 +835,7 @@\n \t\t\t\t\t  RTE_MTR_ERROR_TYPE_UNSPECIFIED,\n \t\t\t\t\t  NULL, \"Meter object is being used.\");\n \tif (priv->sh->meter_aso_en) {\n-\t\tif (mlx5_l3t_clear_entry(mtrmng->mtr_idx_tbl, meter_id))\n+\t\tif (mlx5_l3t_clear_entry(priv->mtr_idx_tbl, meter_id))\n \t\t\treturn -rte_mtr_error_set(error, EBUSY,\n \t\t\t\tRTE_MTR_ERROR_TYPE_UNSPECIFIED, NULL,\n \t\t\t\t\"Fail to delete ASO Meter in index table.\");\n@@ -1302,7 +1301,7 @@ struct mlx5_flow_meter_info *\n \t\t\trte_spinlock_unlock(&mtrmng->mtrsl);\n \t\t\treturn NULL;\n \t\t}\n-\t\tif (mlx5_l3t_get_entry(mtrmng->mtr_idx_tbl, meter_id, &data) ||\n+\t\tif (mlx5_l3t_get_entry(priv->mtr_idx_tbl, meter_id, &data) ||\n \t\t\t!data.dword) {\n \t\t\trte_spinlock_unlock(&mtrmng->mtrsl);\n \t\t\treturn NULL;\n@@ -1310,7 +1309,7 @@ struct mlx5_flow_meter_info *\n \t\tif (mtr_idx)\n \t\t\t*mtr_idx = data.dword;\n \t\taso_mtr = mlx5_aso_meter_by_idx(priv, data.dword);\n-\t\tmlx5_l3t_clear_entry(mtrmng->mtr_idx_tbl, meter_id);\n+\t\tmlx5_l3t_clear_entry(priv->mtr_idx_tbl, meter_id);\n \t\tif (meter_id == aso_mtr->fm.meter_id) {\n \t\t\trte_spinlock_unlock(&mtrmng->mtrsl);\n \t\t\treturn &aso_mtr->fm;\n",
    "prefixes": [
        "v6",
        "10/15"
    ]
}
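
The payload above carries everything a client needs to follow up on the patch: the series it belongs to, the per-patch mbox, and the linked comments and checks collections. A minimal sketch of walking those fields (the helper name is hypothetical; all URLs come from the response above, and the comments endpoint is assumed to return a plain JSON list):

    import requests

    def summarize(api_url: str) -> None:
        """Print a short summary of a Patchwork patch and fetch its mbox."""
        patch = requests.get(api_url).json()
        print(f'{patch["name"]} [{patch["state"]}] check={patch["check"]}')
        print("submitter:", patch["submitter"]["name"], patch["submitter"]["email"])
        for series in patch["series"]:
            print(f'series v{series["version"]}: {series["name"]}')
        # The raw patch in mbox format is linked from the payload itself.
        mbox = requests.get(patch["mbox"]).text
        print(f"mbox: {len(mbox)} characters")
        # Comments are a separate collection, also linked from the payload.
        comments = requests.get(patch["comments"]).json()
        print(f"{len(comments)} comment(s)")

    summarize("http://patches.dpdk.org/api/patches/91873/")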