From patchwork Tue Apr 13 20:14:47 2021
X-Patchwork-Submitter: Timothy McDaniel
X-Patchwork-Id: 91316
X-Patchwork-Delegate: jerinj@marvell.com
From: Timothy McDaniel
Cc: dev@dpdk.org, erik.g.carrillo@intel.com, gage.eads@intel.com,
 harry.van.haaren@intel.com, jerinj@marvell.com, thomas@monjalon.net
Date: Tue, 13 Apr 2021 15:14:47 -0500
Message-Id: <1618344896-2090-18-git-send-email-timothy.mcdaniel@intel.com>
X-Mailer: git-send-email 1.7.10
In-Reply-To: <1618344896-2090-1-git-send-email-timothy.mcdaniel@intel.com>
References: <20210316221857.2254-2-timothy.mcdaniel@intel.com>
 <1618344896-2090-1-git-send-email-timothy.mcdaniel@intel.com>
Subject: [dpdk-dev] [PATCH v3 17/26] event/dlb2: add v2.5 sparse cq mode

Update sparse cq mode functions for DLB v2.5, accounting for new
combined register map and hardware access macros.
Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 22 -----------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 39 +++++++++++++++++++
 2 files changed, 39 insertions(+), 22 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index f05f750f5..d53cce643 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -32,28 +32,6 @@
 #define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
 	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
 
-void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
-{
-	union dlb2_chp_cfg_chp_csr_ctrl r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
-
-	r0.field.cfg_64bytes_qe_dir_cq_mode = 1;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
-}
-
-void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
-{
-	union dlb2_chp_cfg_chp_csr_ctrl r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
-
-	r0.field.cfg_64bytes_qe_ldb_cq_mode = 1;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
-}
-
 int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, unsigned int group_id)
 {
 	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 8cd1762cf..0f18bfeff 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -6089,3 +6089,42 @@ unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)
 
 	return num;
 }
+
+/**
+ * dlb2_hw_enable_sparse_dir_cq_mode() - enable sparse mode for directed ports.
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function must be called prior to configuring scheduling domains.
+ */
+
+void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
+{
+	u32 ctrl;
+
+	ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
+
+	DLB2_BIT_SET(ctrl,
+		     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE);
+
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
+}
+
+/**
+ * dlb2_hw_enable_sparse_ldb_cq_mode() - enable sparse mode for load-balanced
+ * ports.
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function must be called prior to configuring scheduling domains.
+ */
+void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
+{
+	u32 ctrl;
+
+	ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
+
+	DLB2_BIT_SET(ctrl,
+		     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE);
+
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
+}
+
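
For readers skimming the diff, the standalone sketch below contrasts the two
register-access styles the commit message describes: the old union/bitfield
write in dlb2_resource.c versus the new u32 plus bit-mask write used with the
combined register map in dlb2_resource_new.c. It is illustrative only; the
sketch_* and SKETCH_* names, the bit positions, and the union layout are made
up for demonstration and are not the real DLB2 register definitions or the
real DLB2_BIT_SET/DLB2_CSR_RD/DLB2_CSR_WR macros.

/*
 * Illustrative sketch only -- not part of the patch. The types, bit
 * positions, and SKETCH_* names are made up; they are not the DLB2
 * register layout or macros.
 */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

typedef uint32_t u32;

/* Old style (dlb2_resource.c): per-field bitfields inside a union. */
union sketch_chp_csr_ctrl {
	struct {
		u32 other_cfg_bits             : 30; /* placeholder bits */
		u32 cfg_64bytes_qe_dir_cq_mode : 1;
		u32 cfg_64bytes_qe_ldb_cq_mode : 1;
	} field;
	u32 val;
};

/* New style (dlb2_resource_new.c): plain u32 plus mask-based helpers. */
#define SKETCH_DIR_CQ_MODE (1u << 30) /* arbitrary bit for the sketch */
#define SKETCH_LDB_CQ_MODE (1u << 31)
#define SKETCH_BIT_SET(reg, mask) ((reg) |= (mask))

int main(void)
{
	u32 csr = 0; /* stand-in for a DLB2_CSR_RD() result */

	/* Old pattern: write through the bitfield, then write back .val. */
	union sketch_chp_csr_ctrl old;

	old.val = csr;
	old.field.cfg_64bytes_qe_dir_cq_mode = 1;

	/* New pattern: OR in a named mask, then write back the u32. */
	u32 new_csr = csr;

	SKETCH_BIT_SET(new_csr, SKETCH_DIR_CQ_MODE);

	/*
	 * On a typical little-endian GCC build both images match; bitfield
	 * layout is compiler-defined, which is one reason mask macros are
	 * the safer choice for a register map shared across device versions.
	 */
	printf("old=0x%08" PRIx32 " new=0x%08" PRIx32 "\n",
	       old.val, new_csr);

	return 0;
}

The mask form keeps the register image in an ordinary integer, so the same
CSR read/write calls can presumably serve both DLB v2.0 and v2.5 without
per-version union types.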