From patchwork Thu Apr 15 01:49:08 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Timothy McDaniel
X-Patchwork-Id: 91500
X-Patchwork-Delegate: jerinj@marvell.com
From: Timothy McDaniel
To: 
Cc: dev@dpdk.org, erik.g.carrillo@intel.com, harry.van.haaren@intel.com,
 jerinj@marvell.com, thomas@monjalon.net
Date: Wed, 14 Apr 2021 20:49:08 -0500
Message-Id: <1618451359-20693-17-git-send-email-timothy.mcdaniel@intel.com>
X-Mailer: git-send-email 1.7.10
In-Reply-To: <1618451359-20693-1-git-send-email-timothy.mcdaniel@intel.com>
References: <20210316221857.2254-2-timothy.mcdaniel@intel.com>
 <1618451359-20693-1-git-send-email-timothy.mcdaniel@intel.com>
Subject: [dpdk-dev] [PATCH v4 16/27] event/dlb2: add v2.5 queue depth functions
List-Id: DPDK patches and discussions
Sender: "dev" <dev-bounces@dpdk.org>

Update the low-level hardware functions responsible for getting the
queue depth. The command arguments are also validated. The logic is
very similar to what was done for v2.0, but the new combined register
map for v2.0 and v2.5 uses new register and bit names. Additionally,
new register access macros are used so that the code performs the
correct action based on the hardware version.

Signed-off-by: Timothy McDaniel
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 160 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 135 +++++++++++++++
 2 files changed, 135 insertions(+), 160 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 1e66ebf50..8c1d8c782 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -65,17 +65,6 @@ static inline void dlb2_flush_csr(struct dlb2_hw *hw)
 	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS);
 }
 
-static u32 dlb2_dir_queue_depth(struct dlb2_hw *hw,
-				struct dlb2_dir_pq_pair *queue)
-{
-	union dlb2_lsp_qid_dir_enqueue_cnt r0;
-
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_DIR_ENQUEUE_CNT(queue->id.phys_id));
-
-	return r0.field.count;
-}
-
 static void dlb2_ldb_port_cq_enable(struct dlb2_hw *hw,
 				    struct dlb2_ldb_port *port)
 {
@@ -108,24 +97,6 @@ static void dlb2_ldb_port_cq_disable(struct dlb2_hw *hw,
 	dlb2_flush_csr(hw);
 }
 
-static u32 dlb2_ldb_queue_depth(struct dlb2_hw *hw,
-				struct dlb2_ldb_queue *queue)
-{
-	union dlb2_lsp_qid_aqed_active_cnt r0;
-	union dlb2_lsp_qid_atm_active r1;
-	union dlb2_lsp_qid_ldb_enqueue_cnt r2;
-
-	r0.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_AQED_ACTIVE_CNT(queue->id.phys_id));
-	r1.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_ATM_ACTIVE(queue->id.phys_id));
-	r2.val = DLB2_CSR_RD(hw,
-			     DLB2_LSP_QID_LDB_ENQUEUE_CNT(queue->id.phys_id));
-
-	return r0.field.count + r1.field.count + r2.field.count;
-}
-
 static struct dlb2_ldb_queue *
 dlb2_get_ldb_queue_from_id(struct dlb2_hw *hw,
 			   u32 id,
@@ -1204,134 +1175,3 @@ int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
 
 	return 0;
 }
-
-static struct dlb2_dir_pq_pair *
-dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
-			    u32 id,
-			    bool vdev_req,
-			    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	if (id >= DLB2_MAX_NUM_DIR_PORTS(hw->ver))
-		return NULL;
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
-		if ((!vdev_req && port->id.phys_id == id) ||
-		    (vdev_req && port->id.virt_id == id))
-			return port;
-
-	return NULL;
-}
-
-static struct dlb2_ldb_queue *
-dlb2_get_domain_ldb_queue(u32 id,
-			  bool vdev_req,
-			  struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
-		return NULL;
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter)
-		if ((!vdev_req && queue->id.phys_id == id) ||
-		    (vdev_req && queue->id.virt_id == id))
-			return queue;
-
-	return NULL;
-}
-
-static void dlb2_log_get_dir_queue_depth(struct dlb2_hw *hw,
-					 u32 domain_id,
-					 u32 queue_id,
-					 bool vdev_req,
-					 unsigned int vf_id)
-{
-	DLB2_HW_DBG(hw, "DLB get directed queue depth:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
-}
-
-int dlb2_hw_get_dir_queue_depth(struct dlb2_hw *hw,
-				u32 domain_id,
-				struct dlb2_get_dir_queue_depth_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_dir_pq_pair *queue;
-	struct dlb2_hw_domain *domain;
-	int id;
-
-	id = domain_id;
-
-	dlb2_log_get_dir_queue_depth(hw, domain_id, args->queue_id,
-				     vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	id = args->queue_id;
-
-	queue = dlb2_get_domain_used_dir_pq(hw, id, vdev_req, domain);
-	if (queue == NULL) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	resp->id = dlb2_dir_queue_depth(hw, queue);
-
-	return 0;
-}
-
-static void dlb2_log_get_ldb_queue_depth(struct dlb2_hw *hw,
-					 u32 domain_id,
-					 u32 queue_id,
-					 bool vdev_req,
-					 unsigned int vf_id)
-{
-	DLB2_HW_DBG(hw, "DLB get load-balanced queue depth:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
-}
-
-int dlb2_hw_get_ldb_queue_depth(struct dlb2_hw *hw,
-				u32 domain_id,
-				struct dlb2_get_ldb_queue_depth_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-
-	dlb2_log_get_ldb_queue_depth(hw, domain_id, args->queue_id,
-				     vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->queue_id, vdev_req, domain);
-	if (queue == NULL) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	resp->id = dlb2_ldb_queue_depth(hw, queue);
-
-	return 0;
-}
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index e806a60ac..6a5af0c1e 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -5904,3 +5904,138 @@ dlb2_hw_start_domain(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void dlb2_log_get_dir_queue_depth(struct dlb2_hw *hw,
+					 u32 domain_id,
+					 u32 queue_id,
+					 bool vdev_req,
+					 unsigned int vf_id)
+{
+	DLB2_HW_DBG(hw, "DLB get directed queue depth:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
+}
+
+/**
+ * dlb2_hw_get_dir_queue_depth() - returns the depth of a directed queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue depth args
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function returns the depth of a directed queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the depth.
+ *
+ * Errors:
+ * EINVAL - Invalid domain ID or queue ID.
+ */
+int dlb2_hw_get_dir_queue_depth(struct dlb2_hw *hw,
+				u32 domain_id,
+				struct dlb2_get_dir_queue_depth_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_dir_pq_pair *queue;
+	struct dlb2_hw_domain *domain;
+	int id;
+
+	id = domain_id;
+
+	dlb2_log_get_dir_queue_depth(hw, domain_id, args->queue_id,
+				     vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, id, vdev_req, vdev_id);
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	id = args->queue_id;
+
+	queue = dlb2_get_domain_used_dir_pq(hw, id, vdev_req, domain);
+	if (!queue) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	resp->id = dlb2_dir_queue_depth(hw, queue);
+
+	return 0;
+}
+
+static void dlb2_log_get_ldb_queue_depth(struct dlb2_hw *hw,
+					 u32 domain_id,
+					 u32 queue_id,
+					 bool vdev_req,
+					 unsigned int vf_id)
+{
+	DLB2_HW_DBG(hw, "DLB get load-balanced queue depth:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
+}
+
+/**
+ * dlb2_hw_get_ldb_queue_depth() - returns the depth of a load-balanced queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue depth args
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function returns the depth of a load-balanced queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the depth.
+ *
+ * Errors:
+ * EINVAL - Invalid domain ID or queue ID.
+ */
+int dlb2_hw_get_ldb_queue_depth(struct dlb2_hw *hw,
+				u32 domain_id,
+				struct dlb2_get_ldb_queue_depth_args *args,
+				struct dlb2_cmd_response *resp,
+				bool vdev_req,
+				unsigned int vdev_id)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_ldb_queue *queue;
+
+	dlb2_log_get_ldb_queue_depth(hw, domain_id, args->queue_id,
+				     vdev_req, vdev_id);
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	queue = dlb2_get_domain_ldb_queue(args->queue_id, vdev_req, domain);
+	if (!queue) {
+		resp->status = DLB2_ST_INVALID_QID;
+		return -EINVAL;
+	}
+
+	resp->id = dlb2_ldb_queue_depth(hw, queue);
+
+	return 0;
+}