From patchwork Tue Mar 16 22:18:41 2021
X-Patchwork-Submitter: Timothy McDaniel
X-Patchwork-Id: 89286
X-Patchwork-Delegate: jerinj@marvell.com
From: Timothy McDaniel
To: dev@dpdk.org
Cc: jerinj@marvell.com, harry.van.haaren@intel.com, mdr@ashroe.eu,
 nhorman@tuxdriver.com, nikhil.rao@intel.com, erik.g.carrillo@intel.com,
 abhinandan.gujjar@intel.com, pbhagavatula@marvell.com, hemant.agrawal@nxp.com,
 mattias.ronnblom@ericsson.com, peter.mccarthy@intel.com
Date: Tue, 16 Mar 2021 17:18:41 -0500
Message-Id: <20210316221857.2254-10-timothy.mcdaniel@intel.com>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20210316221857.2254-1-timothy.mcdaniel@intel.com>
References: <20210316221857.2254-1-timothy.mcdaniel@intel.com>
Subject: [dpdk-dev] [PATCH 09/25] event/dlb2: add DLB v2.5 support to create
 dir queue

Update the low-level hardware functions that create a directed queue to
account for the new register map and the new hardware access macros.
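Reviewer note: the mechanical change in this patch is that the pre-v2.5 code
filled per-register unions (union dlb2_sys_dir_vasqid_v, union
dlb2_sys_dir_qid_v, ...) and wrote their .val member, while the new code
builds a plain u32 with the DLB2_BITS_SET()/DLB2_BIT_SET() helpers against
field-mask macros, and version-dependent register macros such as
DLB2_LSP_QID_DIR_DEPTH_THRSH() now also take hw->ver. The snippet below is
only a minimal, self-contained sketch of that mask-based style; the EXAMPLE_*
names, mask values, and helper definitions are invented for this note and are
not the driver's real macros.

/* Illustrative only: EXAMPLE_* stands in for the driver's DLB2_* helpers. */
#include <stdint.h>
#include <stdio.h>

typedef uint32_t u32;

/* Hypothetical 12-bit depth-threshold field and single-bit QID-valid flag */
#define EXAMPLE_DEPTH_THRSH_MASK 0x00000fff
#define EXAMPLE_QID_V_MASK       0x00000001

/* Write a multi-bit field: clear it, then shift the value to the mask's LSB */
#define EXAMPLE_BITS_SET(reg, val, mask) \
        ((reg) = (((reg) & ~(u32)(mask)) | \
                  (((u32)(val) << __builtin_ctz(mask)) & (u32)(mask))))

/* Set a single-bit flag field */
#define EXAMPLE_BIT_SET(reg, mask) ((reg) |= (u32)(mask))

int main(void)
{
        u32 reg = 0;

        /* New style: accumulate fields in a plain u32, then write it out */
        EXAMPLE_BITS_SET(reg, 256, EXAMPLE_DEPTH_THRSH_MASK);
        printf("depth-threshold register image: 0x%08x\n", (unsigned int)reg);

        reg = 0;
        EXAMPLE_BIT_SET(reg, EXAMPLE_QID_V_MASK);
        printf("QID-valid register image:       0x%08x\n", (unsigned int)reg);

        return 0;
}

Presumably the point of dropping the per-register union types is that a
single set of write sites can serve both register layouts by swapping only
the mask and offset macros, but that rationale is inferred from the series
rather than stated in it.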
Signed-off-by: Timothy McDaniel
---
 drivers/event/dlb2/pf/base/dlb2_resource.c   | 213 ------------------
 .../event/dlb2/pf/base/dlb2_resource_new.c   | 201 +++++++++++++++++
 2 files changed, 201 insertions(+), 213 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index 2442327d3..d9284812a 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -1226,219 +1226,6 @@ dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
 	return NULL;
 }
 
-static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
-				     struct dlb2_hw_domain *domain,
-				     struct dlb2_dir_pq_pair *queue,
-				     struct dlb2_create_dir_queue_args *args,
-				     bool vdev_req,
-				     unsigned int vdev_id)
-{
-	union dlb2_sys_dir_vasqid_v r0 = { {0} };
-	union dlb2_sys_dir_qid_its r1 = { {0} };
-	union dlb2_lsp_qid_dir_depth_thrsh r2 = { {0} };
-	union dlb2_sys_dir_qid_v r5 = { {0} };
-
-	unsigned int offs;
-
-	/* QID write permissions are turned on when the domain is started */
-	r0.field.vasqid_v = 0;
-
-	offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
-		queue->id.phys_id;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), r0.val);
-
-	/* Don't timestamp QEs that pass through this queue */
-	r1.field.qid_its = 0;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_SYS_DIR_QID_ITS(queue->id.phys_id),
-		    r1.val);
-
-	r2.field.thresh = args->depth_threshold;
-
-	DLB2_CSR_WR(hw,
-		    DLB2_LSP_QID_DIR_DEPTH_THRSH(queue->id.phys_id),
-		    r2.val);
-
-	if (vdev_req) {
-		union dlb2_sys_vf_dir_vqid_v r3 = { {0} };
-		union dlb2_sys_vf_dir_vqid2qid r4 = { {0} };
-
-		offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver)
-			+ queue->id.virt_id;
-
-		r3.field.vqid_v = 1;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(offs), r3.val);
-
-		r4.field.qid = queue->id.phys_id;
-
-		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(offs), r4.val);
-	}
-
-	r5.field.qid_v = 1;
-
-	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_V(queue->id.phys_id), r5.val);
-
-	queue->queue_configured = true;
-}
-
-static void
-dlb2_log_create_dir_queue_args(struct dlb2_hw *hw,
-			       u32 domain_id,
-			       struct dlb2_create_dir_queue_args *args,
-			       bool vdev_req,
-			       unsigned int vdev_id)
-{
-	DLB2_HW_DBG(hw, "DLB2 create directed queue arguments:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-	DLB2_HW_DBG(hw, "\tPort ID: %d\n", args->port_id);
-}
-
-static int
-dlb2_verify_create_dir_queue_args(struct dlb2_hw *hw,
-				  u32 domain_id,
-				  struct dlb2_create_dir_queue_args *args,
-				  struct dlb2_cmd_response *resp,
-				  bool vdev_req,
-				  unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	if (!domain->configured) {
-		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
-		return -EINVAL;
-	}
-
-	if (domain->started) {
-		resp->status = DLB2_ST_DOMAIN_STARTED;
-		return -EINVAL;
-	}
-
-	/*
-	 * If the user claims the port is already configured, validate the port
-	 * ID, its domain, and whether the port is configured.
-	 */
-	if (args->port_id != -1) {
-		struct dlb2_dir_pq_pair *port;
-
-		port = dlb2_get_domain_used_dir_pq(hw,
-						   args->port_id,
-						   vdev_req,
-						   domain);
-
-		if (port == NULL || port->domain_id.phys_id !=
-		    domain->id.phys_id || !port->port_configured) {
-			resp->status = DLB2_ST_INVALID_PORT_ID;
-			return -EINVAL;
-		}
-	}
-
-	/*
-	 * If the queue's port is not configured, validate that a free
-	 * port-queue pair is available.
-	 */
-	if (args->port_id == -1 &&
-	    dlb2_list_empty(&domain->avail_dir_pq_pairs)) {
-		resp->status = DLB2_ST_DIR_QUEUES_UNAVAILABLE;
-		return -EINVAL;
-	}
-
-	return 0;
-}
-
-/**
- * dlb2_hw_create_dir_queue() - Allocate and initialize a DLB DIR queue.
- * @hw: Contains the current state of the DLB2 hardware.
- * @domain_id: Domain ID
- * @args: User-provided arguments.
- * @resp: Response to user.
- * @vdev_req: Request came from a virtual device.
- * @vdev_id: If vdev_req is true, this contains the virtual device's ID.
- *
- * Return: returns < 0 on error, 0 otherwise. If the driver is unable to
- * satisfy a request, resp->status will be set accordingly.
- */
-int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
-			     u32 domain_id,
-			     struct dlb2_create_dir_queue_args *args,
-			     struct dlb2_cmd_response *resp,
-			     bool vdev_req,
-			     unsigned int vdev_id)
-{
-	struct dlb2_dir_pq_pair *queue;
-	struct dlb2_hw_domain *domain;
-	int ret;
-
-	dlb2_log_create_dir_queue_args(hw, domain_id, args, vdev_req, vdev_id);
-
-	/*
-	 * Verify that hardware resources are available before attempting to
-	 * satisfy the request. This simplifies the error unwinding code.
-	 */
-	ret = dlb2_verify_create_dir_queue_args(hw,
-						domain_id,
-						args,
-						resp,
-						vdev_req,
-						vdev_id);
-	if (ret)
-		return ret;
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: domain not found\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	if (args->port_id != -1)
-		queue = dlb2_get_domain_used_dir_pq(hw,
-						    args->port_id,
-						    vdev_req,
-						    domain);
-	else
-		queue = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
-					   typeof(*queue));
-	if (queue == NULL) {
-		DLB2_HW_ERR(hw,
-			    "[%s():%d] Internal error: no available dir queues\n",
-			    __func__, __LINE__);
-		return -EFAULT;
-	}
-
-	dlb2_configure_dir_queue(hw, domain, queue, args, vdev_req, vdev_id);
-
-	/*
-	 * Configuration succeeded, so move the resource from the 'avail' to
-	 * the 'used' list (if it's not already there).
-	 */
-	if (args->port_id == -1) {
-		dlb2_list_del(&domain->avail_dir_pq_pairs,
-			      &queue->domain_list);
-
-		dlb2_list_add(&domain->used_dir_pq_pairs,
-			      &queue->domain_list);
-	}
-
-	resp->status = 0;
-
-	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
-
-	return 0;
-}
-
 static bool
 dlb2_port_find_slot_with_pending_map_queue(struct dlb2_ldb_port *port,
 					   struct dlb2_ldb_queue *queue,
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 1dfbc0c6d..998515933 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -4858,3 +4858,204 @@ int dlb2_hw_create_dir_port(struct dlb2_hw *hw,
 
 	return 0;
 }
+
+static void dlb2_configure_dir_queue(struct dlb2_hw *hw,
+				     struct dlb2_hw_domain *domain,
+				     struct dlb2_dir_pq_pair *queue,
+				     struct dlb2_create_dir_queue_args *args,
+				     bool vdev_req,
+				     unsigned int vdev_id)
+{
+	unsigned int offs;
+	u32 reg = 0;
+
+	/* QID write permissions are turned on when the domain is started */
+	offs = domain->id.phys_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
+		queue->id.phys_id;
+
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_VASQID_V(offs), reg);
+
+	/* Don't timestamp QEs that pass through this queue */
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_ITS(queue->id.phys_id), reg);
+
+	reg = 0;
+	DLB2_BITS_SET(reg, args->depth_threshold,
+		      DLB2_LSP_QID_DIR_DEPTH_THRSH_THRESH);
+	DLB2_CSR_WR(hw,
+		    DLB2_LSP_QID_DIR_DEPTH_THRSH(hw->ver, queue->id.phys_id),
+		    reg);
+
+	if (vdev_req) {
+		offs = vdev_id * DLB2_MAX_NUM_DIR_QUEUES(hw->ver) +
+			queue->id.virt_id;
+
+		reg = 0;
+		DLB2_BIT_SET(reg, DLB2_SYS_VF_DIR_VQID_V_VQID_V);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID_V(offs), reg);
+
+		reg = 0;
+		DLB2_BITS_SET(reg, queue->id.phys_id,
+			      DLB2_SYS_VF_DIR_VQID2QID_QID);
+		DLB2_CSR_WR(hw, DLB2_SYS_VF_DIR_VQID2QID(offs), reg);
+	}
+
+	reg = 0;
+	DLB2_BIT_SET(reg, DLB2_SYS_DIR_QID_V_QID_V);
+	DLB2_CSR_WR(hw, DLB2_SYS_DIR_QID_V(queue->id.phys_id), reg);
+
+	queue->queue_configured = true;
+}
+
+static void
+dlb2_log_create_dir_queue_args(struct dlb2_hw *hw,
+			       u32 domain_id,
+			       struct dlb2_create_dir_queue_args *args,
+			       bool vdev_req,
+			       unsigned int vdev_id)
+{
+	DLB2_HW_DBG(hw, "DLB2 create directed queue arguments:\n");
+	if (vdev_req)
+		DLB2_HW_DBG(hw, "(Request from vdev %d)\n", vdev_id);
+	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
+	DLB2_HW_DBG(hw, "\tPort ID: %d\n", args->port_id);
+}
+
+static int
+dlb2_verify_create_dir_queue_args(struct dlb2_hw *hw,
+				  u32 domain_id,
+				  struct dlb2_create_dir_queue_args *args,
+				  struct dlb2_cmd_response *resp,
+				  bool vdev_req,
+				  unsigned int vdev_id,
+				  struct dlb2_hw_domain **out_domain,
+				  struct dlb2_dir_pq_pair **out_queue)
+{
+	struct dlb2_hw_domain *domain;
+	struct dlb2_dir_pq_pair *pq;
+
+	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
+
+	if (!domain) {
+		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
+		return -EINVAL;
+	}
+
+	if (!domain->configured) {
+		resp->status = DLB2_ST_DOMAIN_NOT_CONFIGURED;
+		return -EINVAL;
+	}
+
+	if (domain->started) {
+		resp->status = DLB2_ST_DOMAIN_STARTED;
+		return -EINVAL;
+	}
+
+	/*
+	 * If the user claims the port is already configured, validate the port
+	 * ID, its domain, and whether the port is configured.
+	 */
+	if (args->port_id != -1) {
+		pq = dlb2_get_domain_used_dir_pq(hw,
+						 args->port_id,
+						 vdev_req,
+						 domain);
+
+		if (!pq || pq->domain_id.phys_id != domain->id.phys_id ||
+		    !pq->port_configured) {
+			resp->status = DLB2_ST_INVALID_PORT_ID;
+			return -EINVAL;
+		}
+	} else {
+		/*
+		 * If the queue's port is not configured, validate that a free
+		 * port-queue pair is available.
+		 */
+		pq = DLB2_DOM_LIST_HEAD(domain->avail_dir_pq_pairs,
+					typeof(*pq));
+		if (!pq) {
+			resp->status = DLB2_ST_DIR_QUEUES_UNAVAILABLE;
+			return -EINVAL;
+		}
+	}
+
+	*out_domain = domain;
+	*out_queue = pq;
+
+	return 0;
+}
+
+/**
+ * dlb2_hw_create_dir_queue() - create a directed queue
+ * @hw: dlb2_hw handle for a particular device.
+ * @domain_id: domain ID.
+ * @args: queue creation arguments.
+ * @resp: response structure.
+ * @vdev_req: indicates whether this request came from a vdev.
+ * @vdev_id: If vdev_req is true, this contains the vdev's ID.
+ *
+ * This function creates a directed queue.
+ *
+ * A vdev can be either an SR-IOV virtual function or a Scalable IOV virtual
+ * device.
+ *
+ * Return:
+ * Returns 0 upon success, < 0 otherwise. If an error occurs, resp->status is
+ * assigned a detailed error code from enum dlb2_error. If successful, resp->id
+ * contains the queue ID.
+ *
+ * resp->id contains a virtual ID if vdev_req is true.
+ *
+ * Errors:
+ * EINVAL - A requested resource is unavailable, the domain is not configured,
+ *	    or the domain has already been started.
+ * EFAULT - Internal error (resp->status not set).
+ */
+int dlb2_hw_create_dir_queue(struct dlb2_hw *hw,
+			     u32 domain_id,
+			     struct dlb2_create_dir_queue_args *args,
+			     struct dlb2_cmd_response *resp,
+			     bool vdev_req,
+			     unsigned int vdev_id)
+{
+	struct dlb2_dir_pq_pair *queue;
+	struct dlb2_hw_domain *domain;
+	int ret;
+
+	dlb2_log_create_dir_queue_args(hw, domain_id, args, vdev_req, vdev_id);
+
+	/*
+	 * Verify that hardware resources are available before attempting to
+	 * satisfy the request. This simplifies the error unwinding code.
+	 */
+	ret = dlb2_verify_create_dir_queue_args(hw,
+						domain_id,
+						args,
+						resp,
+						vdev_req,
+						vdev_id,
+						&domain,
+						&queue);
+	if (ret)
+		return ret;
+
+	dlb2_configure_dir_queue(hw, domain, queue, args, vdev_req, vdev_id);
+
+	/*
+	 * Configuration succeeded, so move the resource from the 'avail' to
+	 * the 'used' list (if it's not already there).
+	 */
+	if (args->port_id == -1) {
+		dlb2_list_del(&domain->avail_dir_pq_pairs,
+			      &queue->domain_list);
+
+		dlb2_list_add(&domain->used_dir_pq_pairs,
+			      &queue->domain_list);
+	}
+
+	resp->status = 0;
+
+	resp->id = (vdev_req) ? queue->id.virt_id : queue->id.phys_id;
+
+	return 0;
+}
+
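Additional context (not part of the patch itself): the sketch below shows how
a PF-side caller might drive dlb2_hw_create_dir_queue() following the contract
documented in the kernel-doc above. It relies only on the fields this patch
actually touches (args->port_id, args->depth_threshold, resp->status,
resp->id); the include path and the example_* wrapper name are assumptions
made for illustration.

/*
 * Hypothetical caller sketch; the include path and the wrapper name are
 * assumed, not taken from the DPDK tree.
 */
#include <stdint.h>

#include "base/dlb2_resource.h"	/* assumed home of the dlb2_hw_* prototypes */

static int
example_create_dir_queue(struct dlb2_hw *hw, uint32_t domain_id, int port_id)
{
	struct dlb2_create_dir_queue_args args = {0};
	struct dlb2_cmd_response resp = {0};
	int ret;

	args.port_id = port_id;		/* -1: let the driver pick a free PQ pair */
	args.depth_threshold = 256;	/* example depth threshold */

	/* PF-originated request: vdev_req is false, so vdev_id is ignored */
	ret = dlb2_hw_create_dir_queue(hw, domain_id, &args, &resp, false, 0);
	if (ret) {
		/* resp.status holds a dlb2_error code unless ret == -EFAULT */
		return ret;
	}

	/* resp.id is the physical queue ID (a virtual ID for vdev requests) */
	return (int)resp.id;
}

Because resp->status is filled in before each -EINVAL return, a caller can
report a precise dlb2_error code to the application without re-deriving the
failure reason; per the kernel-doc, only the -EFAULT internal-error path
leaves resp->status unset.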