From patchwork Tue Mar 16 22:18:51 2021
X-Patchwork-Submitter: Timothy McDaniel <timothy.mcdaniel@intel.com>
X-Patchwork-Id: 89297
X-Patchwork-Delegate: jerinj@marvell.com
From: Timothy McDaniel <timothy.mcdaniel@intel.com>
To: dev@dpdk.org
Cc: jerinj@marvell.com, harry.van.haaren@intel.com, mdr@ashroe.eu,
 nhorman@tuxdriver.com, nikhil.rao@intel.com, erik.g.carrillo@intel.com,
 abhinandan.gujjar@intel.com, pbhagavatula@marvell.com,
 hemant.agrawal@nxp.com, mattias.ronnblom@ericsson.com,
 peter.mccarthy@intel.com
Date: Tue, 16 Mar 2021 17:18:51 -0500
Message-Id: <20210316221857.2254-20-timothy.mcdaniel@intel.com>
X-Mailer: git-send-email 2.23.0
In-Reply-To: <20210316221857.2254-1-timothy.mcdaniel@intel.com>
References: <20210316221857.2254-1-timothy.mcdaniel@intel.com>
Subject: [dpdk-dev] [PATCH 19/25] event/dlb2: delete old dlb2_resource.c file

The file dlb2_resource_new.c now contains all of the low-level
functions required to support both DLB v2.0 and DLB v2.5, so delete
the temporary "old" file and stop building it. The new file
(dlb2_resource_new.c) will be renamed to dlb2_resource.c in the next
commit.
Signed-off-by: Timothy McDaniel <timothy.mcdaniel@intel.com>
---
 drivers/event/dlb2/meson.build             |   1 -
 drivers/event/dlb2/pf/base/dlb2_resource.c | 245 ---------------------
 2 files changed, 246 deletions(-)
 delete mode 100644 drivers/event/dlb2/pf/base/dlb2_resource.c

diff --git a/drivers/event/dlb2/meson.build b/drivers/event/dlb2/meson.build
index bded07e06..d8cfd377f 100644
--- a/drivers/event/dlb2/meson.build
+++ b/drivers/event/dlb2/meson.build
@@ -13,7 +13,6 @@ sources = files('dlb2.c',
 		'dlb2_xstats.c',
 		'pf/dlb2_main.c',
 		'pf/dlb2_pf.c',
-		'pf/base/dlb2_resource.c',
 		'pf/base/dlb2_resource_new.c',
 		'rte_pmd_dlb2.c',
 		'dlb2_selftest.c'
diff --git a/drivers/event/dlb2/pf/base/dlb2_resource.c b/drivers/event/dlb2/pf/base/dlb2_resource.c
deleted file mode 100644
index bd1404f33..000000000
--- a/drivers/event/dlb2/pf/base/dlb2_resource.c
+++ /dev/null
@@ -1,245 +0,0 @@
-/* SPDX-License-Identifier: BSD-3-Clause
- * Copyright(c) 2016-2020 Intel Corporation
- */
-
-#include "dlb2_user.h"
-
-#include "dlb2_hw_types.h"
-#include "dlb2_mbox.h"
-#include "dlb2_osdep.h"
-#include "dlb2_osdep_bitmap.h"
-#include "dlb2_osdep_types.h"
-#include "dlb2_regs.h"
-#include "dlb2_resource.h"
-
-#include "../../dlb2_priv.h"
-#include "../../dlb2_inline_fns.h"
-
-#define DLB2_DOM_LIST_HEAD(head, type) \
-	DLB2_LIST_HEAD((head), type, domain_list)
-
-#define DLB2_FUNC_LIST_HEAD(head, type) \
-	DLB2_LIST_HEAD((head), type, func_list)
-
-#define DLB2_DOM_LIST_FOR(head, ptr, iter) \
-	DLB2_LIST_FOR_EACH(head, ptr, domain_list, iter)
-
-#define DLB2_FUNC_LIST_FOR(head, ptr, iter) \
-	DLB2_LIST_FOR_EACH(head, ptr, func_list, iter)
-
-#define DLB2_DOM_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
-	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, domain_list, it, it_tmp)
-
-#define DLB2_FUNC_LIST_FOR_SAFE(head, ptr, ptr_tmp, it, it_tmp) \
-	DLB2_LIST_FOR_EACH_SAFE((head), ptr, ptr_tmp, func_list, it, it_tmp)
-
-/*
- * The PF driver cannot assume that a register write will affect subsequent HCW
- * writes. To ensure a write completes, the driver must read back a CSR. This
- * function only need be called for configuration that can occur after the
- * domain has started; prior to starting, applications can't send HCWs.
- */
-static inline void dlb2_flush_csr(struct dlb2_hw *hw)
-{
-	DLB2_CSR_RD(hw, DLB2_SYS_TOTAL_VAS);
-}
-
-int dlb2_get_group_sequence_numbers(struct dlb2_hw *hw, unsigned int group_id)
-{
-	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
-		return -EINVAL;
-
-	return hw->rsrcs.sn_groups[group_id].sequence_numbers_per_queue;
-}
-
-int dlb2_get_group_sequence_number_occupancy(struct dlb2_hw *hw,
-					     unsigned int group_id)
-{
-	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
-		return -EINVAL;
-
-	return dlb2_sn_group_used_slots(&hw->rsrcs.sn_groups[group_id]);
-}
-
-static void dlb2_log_set_group_sequence_numbers(struct dlb2_hw *hw,
-						unsigned int group_id,
-						unsigned long val)
-{
-	DLB2_HW_DBG(hw, "DLB2 set group sequence numbers:\n");
-	DLB2_HW_DBG(hw, "\tGroup ID: %u\n", group_id);
-	DLB2_HW_DBG(hw, "\tValue: %lu\n", val);
-}
-
-int dlb2_set_group_sequence_numbers(struct dlb2_hw *hw,
-				    unsigned int group_id,
-				    unsigned long val)
-{
-	u32 valid_allocations[] = {64, 128, 256, 512, 1024};
-	union dlb2_ro_pipe_grp_sn_mode r0 = { {0} };
-	struct dlb2_sn_group *group;
-	int mode;
-
-	if (group_id >= DLB2_MAX_NUM_SEQUENCE_NUMBER_GROUPS)
-		return -EINVAL;
-
-	group = &hw->rsrcs.sn_groups[group_id];
-
-	/*
-	 * Once the first load-balanced queue using an SN group is configured,
-	 * the group cannot be changed.
-	 */
-	if (group->slot_use_bitmap != 0)
-		return -EPERM;
-
-	for (mode = 0; mode < DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES; mode++)
-		if (val == valid_allocations[mode])
-			break;
-
-	if (mode == DLB2_MAX_NUM_SEQUENCE_NUMBER_MODES)
-		return -EINVAL;
-
-	group->mode = mode;
-	group->sequence_numbers_per_queue = val;
-
-	r0.field.sn_mode_0 = hw->rsrcs.sn_groups[0].mode;
-	r0.field.sn_mode_1 = hw->rsrcs.sn_groups[1].mode;
-
-	DLB2_CSR_WR(hw, DLB2_RO_PIPE_GRP_SN_MODE, r0.val);
-
-	dlb2_log_set_group_sequence_numbers(hw, group_id, val);
-
-	return 0;
-}
-
-static struct dlb2_dir_pq_pair *
-dlb2_get_domain_used_dir_pq(struct dlb2_hw *hw,
-			    u32 id,
-			    bool vdev_req,
-			    struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_dir_pq_pair *port;
-	RTE_SET_USED(iter);
-
-	if (id >= DLB2_MAX_NUM_DIR_PORTS(hw->ver))
-		return NULL;
-
-	DLB2_DOM_LIST_FOR(domain->used_dir_pq_pairs, port, iter)
-		if ((!vdev_req && port->id.phys_id == id) ||
-		    (vdev_req && port->id.virt_id == id))
-			return port;
-
-	return NULL;
-}
-
-static struct dlb2_ldb_queue *
-dlb2_get_domain_ldb_queue(u32 id,
-			  bool vdev_req,
-			  struct dlb2_hw_domain *domain)
-{
-	struct dlb2_list_entry *iter;
-	struct dlb2_ldb_queue *queue;
-	RTE_SET_USED(iter);
-
-	if (id >= DLB2_MAX_NUM_LDB_QUEUES)
-		return NULL;
-
-	DLB2_DOM_LIST_FOR(domain->used_ldb_queues, queue, iter)
-		if ((!vdev_req && queue->id.phys_id == id) ||
-		    (vdev_req && queue->id.virt_id == id))
-			return queue;
-
-	return NULL;
-}
-
-static void dlb2_log_get_dir_queue_depth(struct dlb2_hw *hw,
-					 u32 domain_id,
-					 u32 queue_id,
-					 bool vdev_req,
-					 unsigned int vf_id)
-{
-	DLB2_HW_DBG(hw, "DLB get directed queue depth:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
-}
-
-int dlb2_hw_get_dir_queue_depth(struct dlb2_hw *hw,
-				u32 domain_id,
-				struct dlb2_get_dir_queue_depth_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_dir_pq_pair *queue;
-	struct dlb2_hw_domain *domain;
-	int id;
-
-	id = domain_id;
-
-	dlb2_log_get_dir_queue_depth(hw, domain_id, args->queue_id,
-				     vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	id = args->queue_id;
-
-	queue = dlb2_get_domain_used_dir_pq(hw, id, vdev_req, domain);
-	if (queue == NULL) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	resp->id = dlb2_dir_queue_depth(hw, queue);
-
-	return 0;
-}
-
-static void dlb2_log_get_ldb_queue_depth(struct dlb2_hw *hw,
-					 u32 domain_id,
-					 u32 queue_id,
-					 bool vdev_req,
-					 unsigned int vf_id)
-{
-	DLB2_HW_DBG(hw, "DLB get load-balanced queue depth:\n");
-	if (vdev_req)
-		DLB2_HW_DBG(hw, "(Request from VF %d)\n", vf_id);
-	DLB2_HW_DBG(hw, "\tDomain ID: %d\n", domain_id);
-	DLB2_HW_DBG(hw, "\tQueue ID: %d\n", queue_id);
-}
-
-int dlb2_hw_get_ldb_queue_depth(struct dlb2_hw *hw,
-				u32 domain_id,
-				struct dlb2_get_ldb_queue_depth_args *args,
-				struct dlb2_cmd_response *resp,
-				bool vdev_req,
-				unsigned int vdev_id)
-{
-	struct dlb2_hw_domain *domain;
-	struct dlb2_ldb_queue *queue;
-
-	dlb2_log_get_ldb_queue_depth(hw, domain_id, args->queue_id,
-				     vdev_req, vdev_id);
-
-	domain = dlb2_get_domain_from_id(hw, domain_id, vdev_req, vdev_id);
-	if (domain == NULL) {
-		resp->status = DLB2_ST_INVALID_DOMAIN_ID;
-		return -EINVAL;
-	}
-
-	queue = dlb2_get_domain_ldb_queue(args->queue_id,
-					  vdev_req, domain);
-	if (queue == NULL) {
-		resp->status = DLB2_ST_INVALID_QID;
-		return -EINVAL;
-	}
-
-	resp->id = dlb2_ldb_queue_depth(hw, queue);
-
-	return 0;
-}