From patchwork Tue Mar 30 19:35:29 2021
X-Patchwork-Submitter: Timothy McDaniel
X-Patchwork-Id: 90132
X-Patchwork-Delegate: jerinj@marvell.com
From: Timothy McDaniel
To:
Cc: dev@dpdk.org, erik.g.carrillo@intel.com, gage.eads@intel.com,
 harry.van.haaren@intel.com, jerinj@marvell.com, thomas@monjalon.net
Date: Tue, 30 Mar 2021 14:35:29 -0500
Message-Id: <1617132940-24800-17-git-send-email-timothy.mcdaniel@intel.com>
In-Reply-To: <1617132940-24800-1-git-send-email-timothy.mcdaniel@intel.com>
References: <20210316221857.2254-2-timothy.mcdaniel@intel.com>
 <1617132940-24800-1-git-send-email-timothy.mcdaniel@intel.com>
Subject: [dpdk-dev] [PATCH v2 16/27] event/dlb2: add v2.5 sparse cq mode

Update sparse cq mode functions for DLB v2.5, accounting for the new
combined register map and hardware access macros.
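
For clarity, here is a condensed before/after view of the directed-port
function, taken from the diff below (the load-balanced variant changes the
same way): the old code went through the per-register union bitfields, while
the v2.5 code works on a flat u32 and sets fields with bit-mask macros from
the combined register map.

	/* Before (v2.0-only register map): union with named bitfields */
	union dlb2_chp_cfg_chp_csr_ctrl r0;

	r0.val = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
	r0.field.cfg_64bytes_qe_dir_cq_mode = 1;
	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);

	/* After (combined v2.0/v2.5 register map): flat u32 plus bit-mask macro */
	u32 ctrl;

	ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
	DLB2_BIT_SET(ctrl, DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE);
	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);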
Signed-off-by: Timothy McDaniel
---
 drivers/event/dlb2/pf/base/dlb2_resource.c    | 22 -----------
 .../event/dlb2/pf/base/dlb2_resource_new.c    | 39 +++++++++++++++++++
 2 files changed, 39 insertions(+), 22 deletions(-)

diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
index f05f750f5..d53cce643 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource.c
@@ -32,28 +32,6 @@
 #define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
 	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
 
-void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
-{
-	union dlb2_chp_cfg_chp_csr_ctrl r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
-
-	r0.field.cfg_64bytes_qe_dir_cq_mode = 1;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
-}
-
-void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
-{
-	union dlb2_chp_cfg_chp_csr_ctrl r0;
-
-	r0.val = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
-
-	r0.field.cfg_64bytes_qe_ldb_cq_mode = 1;
-
-	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, r0.val);
-}
-
 int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, unsigned int group_id)
 {
 	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource_new.c b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
index 8cd1762cf..0f18bfeff 100644
--- a/drivers/event/dlb2/pf/base/dlb2_resource_new.c
+++ b/drivers/event/dlb2/pf/base/dlb2_resource_new.c
@@ -6089,3 +6089,42 @@ unsigned int dlb2_finish_map_qid_procedures(struct dlb2_hw *hw)
 
 	return num;
 }
+
+/**
+ * dlb2_hw_enable_sparse_dir_cq_mode() - enable sparse mode for directed ports.
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function must be called prior to configuring scheduling domains.
+ */
+
+void dlb2_hw_enable_sparse_dir_cq_mode(struct dlb2_hw *hw)
+{
+	u32 ctrl;
+
+	ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
+
+	DLB2_BIT_SET(ctrl,
+		     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_DIR_CQ_MODE);
+
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
+}
+
+/**
+ * dlb2_hw_enable_sparse_ldb_cq_mode() - enable sparse mode for load-balanced
+ * ports.
+ * @hw: dlb2_hw handle for a particular device.
+ *
+ * This function must be called prior to configuring scheduling domains.
+ */
+void dlb2_hw_enable_sparse_ldb_cq_mode(struct dlb2_hw *hw)
+{
+	u32 ctrl;
+
+	ctrl = DLB2_CSR_RD(hw, DLB2_CHP_CFG_CHP_CSR_CTRL);
+
+	DLB2_BIT_SET(ctrl,
+		     DLB2_CHP_CFG_CHP_CSR_CTRL_CFG_64BYTES_QE_LDB_CQ_MODE);
+
+	DLB2_CSR_WR(hw, DLB2_CHP_CFG_CHP_CSR_CTRL, ctrl);
+}
+
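
Usage note: per the kernel-doc comments added above, both functions must run
before any scheduling domain is configured, so a PF PMD init path that wants
sparse (64-byte) CQ mode for both port types would call them back to back. A
minimal sketch, assuming a hypothetical helper name
(dlb2_pf_init_sparse_cq_mode is not part of this patch):

	/*
	 * Hypothetical helper, for illustration only: enable sparse CQ mode
	 * for both directed and load-balanced ports. Per the kernel-doc
	 * comments in the patch, these calls must happen before any
	 * scheduling domain is configured.
	 */
	static void
	dlb2_pf_init_sparse_cq_mode(struct dlb2_hw *hw)
	{
		dlb2_hw_enable_sparse_dir_cq_mode(hw);
		dlb2_hw_enable_sparse_ldb_cq_mode(hw);
	}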