From patchwork Sun Nov  1 23:30:00 2020
X-Patchwork-Submitter: Timothy McDaniel
X-Patchwork-Id: 83311
X-Patchwork-Delegate: jerinj@marvell.com
From: Timothy McDaniel
To:
Cc: dev@dpdk.org, erik.g.carrillo@intel.com, gage.eads@intel.com,
 harry.van.haaren@intel.com, jerinj@marvell.com, thomas@monjalon.net
Date: Sun, 1 Nov 2020 17:30:00 -0600
Message-Id: <1604273415-13912-9-git-send-email-timothy.mcdaniel@intel.com>
X-Mailer: git-send-email 1.7.10
In-Reply-To: <1604273415-13912-1-git-send-email-timothy.mcdaniel@intel.com>
References: <20200612212434.6852-2-timothy.mcdaniel@intel.com>
 <1604273415-13912-1-git-send-email-timothy.mcdaniel@intel.com>
Subject: [dpdk-dev] [PATCH v16 08/23] event/dlb: add probe-time hardware init

This commit adds probe-time low-level hardware initialization. It also
adds probe-time init for both primary and secondary DPDK processes.
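
For reviewers, the primary-process flow added below condenses to the
following sketch (illustrative only: the probe_sketch() name is not in
the patch; devargs capture, the device-version and CQ poll-mode queries,
and error logging are elided; see dlb_primary_eventdev_probe() for the
real sequence):

    /* Condensed, illustrative probe flow (not the literal code) */
    static int probe_sketch(struct rte_eventdev *dev, const char *name)
    {
            struct dlb_eventdev *dlb = dev->data->dev_private;
            int err;

            evdev_dlb_default_info.driver_name = name;

            /* Open the device interface (the kernel module, in vdev mode) */
            err = dlb_iface_open(&dlb->qm_instance, name);
            if (err < 0)
                    return err;

            /* Query hardware resources and override the default evdev info */
            err = dlb_hw_query_resources(dlb);
            if (err)
                    return err;

            /* Initialize low-level MMIO state and the eventdev entry points */
            dlb_iface_low_level_io_init(dlb);
            dlb_entry_points_init(dev);

            return 0;
    }

The secondary-process probe follows the same shape, minus the
devargs, device-version, and poll-mode steps.
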
Signed-off-by: Timothy McDaniel
Reviewed-by: Gage Eads
---
 drivers/event/dlb/dlb.c                  | 158 +++++++++++++++-
 drivers/event/dlb/meson.build            |   3 +-
 drivers/event/dlb/pf/base/dlb_resource.c | 302 +++++++++++++++++++++++++++++++
 drivers/event/dlb/pf/dlb_main.c          |  20 +-
 drivers/event/dlb/pf/dlb_pf.c            |  86 ++++++++-
 5 files changed, 561 insertions(+), 8 deletions(-)
 create mode 100644 drivers/event/dlb/pf/base/dlb_resource.c

diff --git a/drivers/event/dlb/dlb.c b/drivers/event/dlb/dlb.c
index 8008a50..57b2837 100644
--- a/drivers/event/dlb/dlb.c
+++ b/drivers/event/dlb/dlb.c
@@ -42,10 +42,92 @@
 #if (RTE_EVENT_MAX_QUEUES_PER_DEV > UINT8_MAX)
 #error "RTE_EVENT_MAX_QUEUES_PER_DEV cannot fit in member max_event_queues"
 #endif
 
+static struct rte_event_dev_info evdev_dlb_default_info = {
+	.driver_name = "", /* probe will set */
+	.min_dequeue_timeout_ns = DLB_MIN_DEQUEUE_TIMEOUT_NS,
+	.max_dequeue_timeout_ns = DLB_MAX_DEQUEUE_TIMEOUT_NS,
+#if (RTE_EVENT_MAX_QUEUES_PER_DEV < DLB_MAX_NUM_LDB_QUEUES)
+	.max_event_queues = RTE_EVENT_MAX_QUEUES_PER_DEV,
+#else
+	.max_event_queues = DLB_MAX_NUM_LDB_QUEUES,
+#endif
+	.max_event_queue_flows = DLB_MAX_NUM_FLOWS,
+	.max_event_queue_priority_levels = DLB_QID_PRIORITIES,
+	.max_event_priority_levels = DLB_QID_PRIORITIES,
+	.max_event_ports = DLB_MAX_NUM_LDB_PORTS,
+	.max_event_port_dequeue_depth = DLB_MAX_CQ_DEPTH,
+	.max_event_port_enqueue_depth = DLB_MAX_ENQUEUE_DEPTH,
+	.max_event_port_links = DLB_MAX_NUM_QIDS_PER_LDB_CQ,
+	.max_num_events = DLB_MAX_NUM_LDB_CREDITS,
+	.max_single_link_event_port_queue_pairs = DLB_MAX_NUM_DIR_PORTS,
+	.event_dev_cap = (RTE_EVENT_DEV_CAP_QUEUE_QOS |
+			  RTE_EVENT_DEV_CAP_EVENT_QOS |
+			  RTE_EVENT_DEV_CAP_BURST_MODE |
+			  RTE_EVENT_DEV_CAP_DISTRIBUTED_SCHED |
+			  RTE_EVENT_DEV_CAP_IMPLICIT_RELEASE_DISABLE |
+			  RTE_EVENT_DEV_CAP_QUEUE_ALL_TYPES),
+};
 
 struct process_local_port_data
 dlb_port[DLB_MAX_NUM_PORTS][NUM_DLB_PORT_TYPES];
 
+static int
+dlb_hw_query_resources(struct dlb_eventdev *dlb)
+{
+	struct dlb_hw_dev *handle = &dlb->qm_instance;
+	struct dlb_hw_resource_info *dlb_info = &handle->info;
+	int ret;
+
+	ret = dlb_iface_get_num_resources(handle,
+					  &dlb->hw_rsrc_query_results);
+	if (ret) {
+		DLB_LOG_ERR("get dlb num resources, err=%d\n", ret);
+		return ret;
+	}
+
+	/* Complete filling in device resource info returned to evdev app,
+	 * overriding any default values.
+	 * The capabilities (CAPs) were set at compile time.
+	 */
+
+	evdev_dlb_default_info.max_event_queues =
+		dlb->hw_rsrc_query_results.num_ldb_queues;
+
+	evdev_dlb_default_info.max_event_ports =
+		dlb->hw_rsrc_query_results.num_ldb_ports;
+
+	evdev_dlb_default_info.max_num_events =
+		dlb->hw_rsrc_query_results.max_contiguous_ldb_credits;
+
+	/* Save off values used when creating the scheduling domain.
+	 */
+
+	handle->info.num_sched_domains =
+		dlb->hw_rsrc_query_results.num_sched_domains;
+
+	handle->info.hw_rsrc_max.nb_events_limit =
+		dlb->hw_rsrc_query_results.max_contiguous_ldb_credits;
+
+	handle->info.hw_rsrc_max.num_queues =
+		dlb->hw_rsrc_query_results.num_ldb_queues +
+		dlb->hw_rsrc_query_results.num_dir_ports;
+
+	handle->info.hw_rsrc_max.num_ldb_queues =
+		dlb->hw_rsrc_query_results.num_ldb_queues;
+
+	handle->info.hw_rsrc_max.num_ldb_ports =
+		dlb->hw_rsrc_query_results.num_ldb_ports;
+
+	handle->info.hw_rsrc_max.num_dir_ports =
+		dlb->hw_rsrc_query_results.num_dir_ports;
+
+	handle->info.hw_rsrc_max.reorder_window_size =
+		dlb->hw_rsrc_query_results.num_hist_list_entries;
+
+	rte_memcpy(dlb_info, &handle->info.hw_rsrc_max, sizeof(*dlb_info));
+
+	return 0;
+}
+
 /* Wrapper for string to int conversion. Substituted for atoi(...), which is
  * unsafe.
  */
@@ -227,9 +309,54 @@ dlb_primary_eventdev_probe(struct rte_eventdev *dev,
 			   const char *name,
 			   struct dlb_devargs *dlb_args)
 {
-	RTE_SET_USED(dev);
-	RTE_SET_USED(name);
-	RTE_SET_USED(dlb_args);
+	struct dlb_eventdev *dlb;
+	int err;
+
+	dlb = dev->data->dev_private;
+
+	dlb->event_dev = dev; /* backlink */
+
+	evdev_dlb_default_info.driver_name = name;
+
+	dlb->max_num_events_override = dlb_args->max_num_events;
+	dlb->num_dir_credits_override = dlb_args->num_dir_credits_override;
+	dlb->defer_sched = dlb_args->defer_sched;
+	dlb->num_atm_inflights_per_queue = dlb_args->num_atm_inflights;
+
+	/* Open the interface.
+	 * For vdev mode, this means open the dlb kernel module.
+	 */
+	err = dlb_iface_open(&dlb->qm_instance, name);
+	if (err < 0) {
+		DLB_LOG_ERR("could not open event hardware device, err=%d\n",
+			    err);
+		return err;
+	}
+
+	err = dlb_iface_get_device_version(&dlb->qm_instance, &dlb->revision);
+	if (err < 0) {
+		DLB_LOG_ERR("dlb: failed to get the device version, err=%d\n",
+			    err);
+		return err;
+	}
+
+	err = dlb_hw_query_resources(dlb);
+	if (err) {
+		DLB_LOG_ERR("get resources err=%d for %s\n", err, name);
+		return err;
+	}
+
+	err = dlb_iface_get_cq_poll_mode(&dlb->qm_instance, &dlb->poll_mode);
+	if (err < 0) {
+		DLB_LOG_ERR("dlb: failed to get the poll mode, err=%d\n", err);
+		return err;
+	}
+
+	rte_spinlock_init(&dlb->qm_instance.resource_lock);
+
+	dlb_iface_low_level_io_init(dlb);
+
+	dlb_entry_points_init(dev);
 
 	return 0;
 }
@@ -238,8 +365,29 @@ int
 dlb_secondary_eventdev_probe(struct rte_eventdev *dev,
 			     const char *name)
 {
-	RTE_SET_USED(dev);
-	RTE_SET_USED(name);
+	struct dlb_eventdev *dlb;
+	int err;
+
+	dlb = dev->data->dev_private;
+
+	evdev_dlb_default_info.driver_name = name;
+
+	err = dlb_iface_open(&dlb->qm_instance, name);
+	if (err < 0) {
+		DLB_LOG_ERR("could not open event hardware device, err=%d\n",
+			    err);
+		return err;
+	}
+
+	err = dlb_hw_query_resources(dlb);
+	if (err) {
+		DLB_LOG_ERR("get resources err=%d for %s\n", err, name);
+		return err;
+	}
+
+	dlb_iface_low_level_io_init(dlb);
+
+	dlb_entry_points_init(dev);
 
 	return 0;
 }
diff --git a/drivers/event/dlb/meson.build b/drivers/event/dlb/meson.build
index 0e66cdc..5502647 100644
--- a/drivers/event/dlb/meson.build
+++ b/drivers/event/dlb/meson.build
@@ -10,7 +10,8 @@ endif
 sources = files('dlb.c',
 		'dlb_iface.c',
 		'pf/dlb_main.c',
-		'pf/dlb_pf.c'
+		'pf/dlb_pf.c',
+		'pf/base/dlb_resource.c'
 )
 
 headers = files()
diff --git a/drivers/event/dlb/pf/base/dlb_resource.c b/drivers/event/dlb/pf/base/dlb_resource.c
new file mode 100644
index 0000000..9c4267b
--- /dev/null
+++ b/drivers/event/dlb/pf/base/dlb_resource.c
@@ -0,0 +1,302 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2016-2020 Intel Corporation
+ */
+
+#include "dlb_hw_types.h"
+#include "../../dlb_user.h"
+#include "dlb_resource.h"
+#include "dlb_osdep.h"
+#include "dlb_osdep_bitmap.h"
+#include "dlb_osdep_types.h"
+#include "dlb_regs.h"
+
+void dlb_disable_dp_vasr_feature(struct dlb_hw *hw)
+{
+	union dlb_dp_dir_csr_ctrl r0;
+
+	r0.val = DLB_CSR_RD(hw, DLB_DP_DIR_CSR_CTRL);
+
+	r0.field.cfg_vasr_dis = 1;
+
+	DLB_CSR_WR(hw, DLB_DP_DIR_CSR_CTRL, r0.val);
+}
+
+void dlb_enable_excess_tokens_alarm(struct dlb_hw *hw)
+{
+	union dlb_chp_cfg_chp_csr_ctrl r0;
+
+	r0.val = DLB_CSR_RD(hw, DLB_CHP_CFG_CHP_CSR_CTRL);
+
+	r0.val |= 1 << DLB_CHP_CFG_EXCESS_TOKENS_SHIFT;
+
+	DLB_CSR_WR(hw, DLB_CHP_CFG_CHP_CSR_CTRL, r0.val);
+}
+
+void dlb_hw_enable_sparse_ldb_cq_mode(struct dlb_hw *hw)
+{
+	union dlb_sys_cq_mode r0;
+
+	r0.val = DLB_CSR_RD(hw, DLB_SYS_CQ_MODE);
+
+	r0.field.ldb_cq64 = 1;
+
+	DLB_CSR_WR(hw, DLB_SYS_CQ_MODE, r0.val);
+}
+
+void dlb_hw_enable_sparse_dir_cq_mode(struct dlb_hw *hw)
+{
+	union dlb_sys_cq_mode r0;
+
+	r0.val = DLB_CSR_RD(hw, DLB_SYS_CQ_MODE);
+
+	r0.field.dir_cq64 = 1;
+
+	DLB_CSR_WR(hw, DLB_SYS_CQ_MODE, r0.val);
+}
+
+void dlb_hw_disable_pf_to_vf_isr_pend_err(struct dlb_hw *hw)
+{
+	union dlb_sys_sys_alarm_int_enable r0;
+
+	r0.val = DLB_CSR_RD(hw, DLB_SYS_SYS_ALARM_INT_ENABLE);
+
+	r0.field.pf_to_vf_isr_pend_error = 0;
+
+	DLB_CSR_WR(hw, DLB_SYS_SYS_ALARM_INT_ENABLE, r0.val);
+}
+
+void dlb_hw_get_num_resources(struct dlb_hw *hw,
+			      struct dlb_get_num_resources_args *arg)
+{
+	struct dlb_function_resources *rsrcs;
+	struct dlb_bitmap *map;
+
+	rsrcs = &hw->pf;
+
+	arg->num_sched_domains = rsrcs->num_avail_domains;
+
+	arg->num_ldb_queues = rsrcs->num_avail_ldb_queues;
+
+	arg->num_ldb_ports = rsrcs->num_avail_ldb_ports;
+
+	arg->num_dir_ports = rsrcs->num_avail_dir_pq_pairs;
+
+	map = rsrcs->avail_aqed_freelist_entries;
+
+	arg->num_atomic_inflights = dlb_bitmap_count(map);
+
+	arg->max_contiguous_atomic_inflights =
+		dlb_bitmap_longest_set_range(map);
+
+	map = rsrcs->avail_hist_list_entries;
+
+	arg->num_hist_list_entries = dlb_bitmap_count(map);
+
+	arg->max_contiguous_hist_list_entries =
+		dlb_bitmap_longest_set_range(map);
+
+	map = rsrcs->avail_qed_freelist_entries;
+
+	arg->num_ldb_credits = dlb_bitmap_count(map);
+
+	arg->max_contiguous_ldb_credits = dlb_bitmap_longest_set_range(map);
+
+	map = rsrcs->avail_dqed_freelist_entries;
+
+	arg->num_dir_credits = dlb_bitmap_count(map);
+
+	arg->max_contiguous_dir_credits = dlb_bitmap_longest_set_range(map);
+
+	arg->num_ldb_credit_pools = rsrcs->num_avail_ldb_credit_pools;
+
+	arg->num_dir_credit_pools = rsrcs->num_avail_dir_credit_pools;
+}
+
+static void dlb_init_fn_rsrc_lists(struct dlb_function_resources *rsrc)
+{
+	dlb_list_init_head(&rsrc->avail_domains);
+	dlb_list_init_head(&rsrc->used_domains);
+	dlb_list_init_head(&rsrc->avail_ldb_queues);
+	dlb_list_init_head(&rsrc->avail_ldb_ports);
+	dlb_list_init_head(&rsrc->avail_dir_pq_pairs);
+	dlb_list_init_head(&rsrc->avail_ldb_credit_pools);
+	dlb_list_init_head(&rsrc->avail_dir_credit_pools);
+}
+
+static void dlb_init_domain_rsrc_lists(struct dlb_domain *domain)
+{
+	dlb_list_init_head(&domain->used_ldb_queues);
+	dlb_list_init_head(&domain->used_ldb_ports);
+	dlb_list_init_head(&domain->used_dir_pq_pairs);
+	dlb_list_init_head(&domain->used_ldb_credit_pools);
+	dlb_list_init_head(&domain->used_dir_credit_pools);
+	dlb_list_init_head(&domain->avail_ldb_queues);
+	dlb_list_init_head(&domain->avail_ldb_ports);
+	dlb_list_init_head(&domain->avail_dir_pq_pairs);
+	dlb_list_init_head(&domain->avail_ldb_credit_pools);
+	dlb_list_init_head(&domain->avail_dir_credit_pools);
+}
+
+int dlb_resource_init(struct dlb_hw *hw)
+{
+	struct dlb_list_entry *list;
+	unsigned int i;
+
+	/* For optimal load-balancing, ports that map to one or more QIDs in
+	 * common should not be in numerical sequence. This is application
+	 * dependent, but the driver interleaves port IDs as much as possible
+	 * to reduce the likelihood of this. This initial allocation maximizes
+	 * the average distance between an ID and its immediate neighbors (i.e.
+	 * the distance from 1 to 0 and to 2, the distance from 2 to 1 and to
+	 * 3, etc.).
+	 */
+	u32 init_ldb_port_allocation[DLB_MAX_NUM_LDB_PORTS] = {
+		0, 31, 62, 29, 60, 27, 58, 25, 56, 23, 54, 21, 52, 19, 50, 17,
+		48, 15, 46, 13, 44, 11, 42, 9, 40, 7, 38, 5, 36, 3, 34, 1,
+		32, 63, 30, 61, 28, 59, 26, 57, 24, 55, 22, 53, 20, 51, 18, 49,
+		16, 47, 14, 45, 12, 43, 10, 41, 8, 39, 6, 37, 4, 35, 2, 33
+	};
+
+	/* Zero-out resource tracking data structures */
+	memset(&hw->rsrcs, 0, sizeof(hw->rsrcs));
+	memset(&hw->pf, 0, sizeof(hw->pf));
+
+	dlb_init_fn_rsrc_lists(&hw->pf);
+
+	for (i = 0; i < DLB_MAX_NUM_DOMAINS; i++) {
+		memset(&hw->domains[i], 0, sizeof(hw->domains[i]));
+		dlb_init_domain_rsrc_lists(&hw->domains[i]);
+		hw->domains[i].parent_func = &hw->pf;
+	}
+
+	/* Give all resources to the PF driver */
+	hw->pf.num_avail_domains = DLB_MAX_NUM_DOMAINS;
+	for (i = 0; i < hw->pf.num_avail_domains; i++) {
+		list = &hw->domains[i].func_list;
+
+		dlb_list_add(&hw->pf.avail_domains, list);
+	}
+
+	hw->pf.num_avail_ldb_queues = DLB_MAX_NUM_LDB_QUEUES;
+	for (i = 0; i < hw->pf.num_avail_ldb_queues; i++) {
+		list = &hw->rsrcs.ldb_queues[i].func_list;
+
+		dlb_list_add(&hw->pf.avail_ldb_queues, list);
+	}
+
+	hw->pf.num_avail_ldb_ports = DLB_MAX_NUM_LDB_PORTS;
+	for (i = 0; i < hw->pf.num_avail_ldb_ports; i++) {
+		struct dlb_ldb_port *port;
+
+		port = &hw->rsrcs.ldb_ports[init_ldb_port_allocation[i]];
+
+		dlb_list_add(&hw->pf.avail_ldb_ports, &port->func_list);
+	}
+
+	hw->pf.num_avail_dir_pq_pairs = DLB_MAX_NUM_DIR_PORTS;
+	for (i = 0; i < hw->pf.num_avail_dir_pq_pairs; i++) {
+		list = &hw->rsrcs.dir_pq_pairs[i].func_list;
+
+		dlb_list_add(&hw->pf.avail_dir_pq_pairs, list);
+	}
+
+	hw->pf.num_avail_ldb_credit_pools = DLB_MAX_NUM_LDB_CREDIT_POOLS;
+	for (i = 0; i < hw->pf.num_avail_ldb_credit_pools; i++) {
+		list = &hw->rsrcs.ldb_credit_pools[i].func_list;
+
+		dlb_list_add(&hw->pf.avail_ldb_credit_pools, list);
+	}
+
+	hw->pf.num_avail_dir_credit_pools = DLB_MAX_NUM_DIR_CREDIT_POOLS;
+	for (i = 0; i < hw->pf.num_avail_dir_credit_pools; i++) {
+		list = &hw->rsrcs.dir_credit_pools[i].func_list;
+
+		dlb_list_add(&hw->pf.avail_dir_credit_pools, list);
+	}
+
+	/* There are 5120 history list entries, which allows us to overprovision
+	 * the inflight limit (4096) by 1k.
+	 */
+	if (dlb_bitmap_alloc(hw,
+			     &hw->pf.avail_hist_list_entries,
+			     DLB_MAX_NUM_HIST_LIST_ENTRIES))
+		return -1;
+
+	if (dlb_bitmap_fill(hw->pf.avail_hist_list_entries))
+		return -1;
+
+	if (dlb_bitmap_alloc(hw,
+			     &hw->pf.avail_qed_freelist_entries,
+			     DLB_MAX_NUM_LDB_CREDITS))
+		return -1;
+
+	if (dlb_bitmap_fill(hw->pf.avail_qed_freelist_entries))
+		return -1;
+
+	if (dlb_bitmap_alloc(hw,
+			     &hw->pf.avail_dqed_freelist_entries,
+			     DLB_MAX_NUM_DIR_CREDITS))
+		return -1;
+
+	if (dlb_bitmap_fill(hw->pf.avail_dqed_freelist_entries))
+		return -1;
+
+	if (dlb_bitmap_alloc(hw,
+			     &hw->pf.avail_aqed_freelist_entries,
+			     DLB_MAX_NUM_AQOS_ENTRIES))
+		return -1;
+
+	if (dlb_bitmap_fill(hw->pf.avail_aqed_freelist_entries))
+		return -1;
+
+	/* Initialize the hardware resource IDs */
+	for (i = 0; i < DLB_MAX_NUM_DOMAINS; i++)
+		hw->domains[i].id = i;
+
+	for (i = 0; i < DLB_MAX_NUM_LDB_QUEUES; i++)
+		hw->rsrcs.ldb_queues[i].id = i;
+
+	for (i = 0; i < DLB_MAX_NUM_LDB_PORTS; i++)
+		hw->rsrcs.ldb_ports[i].id = i;
+
+	for (i = 0; i < DLB_MAX_NUM_DIR_PORTS; i++)
+		hw->rsrcs.dir_pq_pairs[i].id = i;
+
+	for (i = 0; i < DLB_MAX_NUM_LDB_CREDIT_POOLS; i++)
+		hw->rsrcs.ldb_credit_pools[i].id = i;
+
+	for (i = 0; i < DLB_MAX_NUM_DIR_CREDIT_POOLS; i++)
+		hw->rsrcs.dir_credit_pools[i].id = i;
+
+	for (i = 0; i < DLB_MAX_NUM_SEQUENCE_NUMBER_GROUPS; i++) {
+		hw->rsrcs.sn_groups[i].id = i;
+		/* Default mode (0) is 32 sequence numbers per queue */
+		hw->rsrcs.sn_groups[i].mode = 0;
+		hw->rsrcs.sn_groups[i].sequence_numbers_per_queue = 32;
+		hw->rsrcs.sn_groups[i].slot_use_bitmap = 0;
+	}
+
+	return 0;
+}
+
+void dlb_resource_free(struct dlb_hw *hw)
+{
+	dlb_bitmap_free(hw->pf.avail_hist_list_entries);
+
+	dlb_bitmap_free(hw->pf.avail_qed_freelist_entries);
+
+	dlb_bitmap_free(hw->pf.avail_dqed_freelist_entries);
+
+	dlb_bitmap_free(hw->pf.avail_aqed_freelist_entries);
+}
+
+void dlb_hw_disable_vf_to_pf_isr_pend_err(struct dlb_hw *hw)
+{
+	union dlb_sys_sys_alarm_int_enable r0;
+
+	r0.val = DLB_CSR_RD(hw, DLB_SYS_SYS_ALARM_INT_ENABLE);
+
+	r0.field.vf_to_pf_isr_pend_error = 0;
+
+	DLB_CSR_WR(hw, DLB_SYS_SYS_ALARM_INT_ENABLE, r0.val);
+}
diff --git a/drivers/event/dlb/pf/dlb_main.c b/drivers/event/dlb/pf/dlb_main.c
index f816acb..87ab8dd 100644
--- a/drivers/event/dlb/pf/dlb_main.c
+++ b/drivers/event/dlb/pf/dlb_main.c
@@ -223,12 +223,18 @@ dlb_probe(struct rte_pci_device *pdev)
 	if (ret)
 		goto init_driver_state_fail;
 
+	ret = dlb_resource_init(&dlb_dev->hw);
+	if (ret)
+		goto resource_init_fail;
+
 	dlb_dev->revision = os_get_dev_revision(&dlb_dev->hw);
 
 	dlb_pf_init_hardware(dlb_dev);
 
 	return dlb_dev;
 
+resource_init_fail:
+	dlb_resource_free(&dlb_dev->hw);
 init_driver_state_fail:
 mask_ur_err_fail:
 dlb_reset_fail:
@@ -564,5 +570,17 @@ dlb_pf_init_driver_state(struct dlb_dev *dlb_dev)
 void
 dlb_pf_init_hardware(struct dlb_dev *dlb_dev)
 {
-	RTE_SET_USED(dlb_dev);
+	dlb_disable_dp_vasr_feature(&dlb_dev->hw);
+
+	dlb_enable_excess_tokens_alarm(&dlb_dev->hw);
+
+	if (dlb_dev->revision >= DLB_REV_B0) {
+		dlb_hw_enable_sparse_ldb_cq_mode(&dlb_dev->hw);
+		dlb_hw_enable_sparse_dir_cq_mode(&dlb_dev->hw);
+	}
+
+	if (dlb_dev->revision >= DLB_REV_B0) {
+		dlb_hw_disable_pf_to_vf_isr_pend_err(&dlb_dev->hw);
+		dlb_hw_disable_vf_to_pf_isr_pend_err(&dlb_dev->hw);
+	}
 }
diff --git a/drivers/event/dlb/pf/dlb_pf.c b/drivers/event/dlb/pf/dlb_pf.c
index 05fd76c..7fc85e9 100644
--- a/drivers/event/dlb/pf/dlb_pf.c
+++ b/drivers/event/dlb/pf/dlb_pf.c
@@ -35,9 +35,93 @@
 #include "base/dlb_resource.h"
 
 static void
-dlb_pf_iface_fn_ptrs_init(void)
+dlb_pf_low_level_io_init(struct dlb_eventdev *dlb __rte_unused)
 {
+	int i;
+
+	/* Addresses will be initialized at port create */
+	for (i = 0; i < DLB_MAX_NUM_PORTS; i++) {
+		/* First directed ports */
+
+		/* producer port */
+		dlb_port[i][DLB_DIR].pp_addr = NULL;
+
+		/* popcount */
+		dlb_port[i][DLB_DIR].ldb_popcount = NULL;
+		dlb_port[i][DLB_DIR].dir_popcount = NULL;
+
+		/* consumer queue */
+		dlb_port[i][DLB_DIR].cq_base = NULL;
+		dlb_port[i][DLB_DIR].mmaped = true;
+
+		/* Now load balanced ports */
+
+		/* producer port */
+		dlb_port[i][DLB_LDB].pp_addr = NULL;
+
+		/* popcount */
+		dlb_port[i][DLB_LDB].ldb_popcount = NULL;
+		dlb_port[i][DLB_LDB].dir_popcount = NULL;
+
+		/* consumer queue */
+		dlb_port[i][DLB_LDB].cq_base = NULL;
+		dlb_port[i][DLB_LDB].mmaped = true;
+	}
+}
+
+static int
+dlb_pf_open(struct dlb_hw_dev *handle, const char *name)
+{
+	RTE_SET_USED(handle);
+	RTE_SET_USED(name);
+
+	return 0;
+}
+
+static int
+dlb_pf_get_device_version(struct dlb_hw_dev *handle,
+			  uint8_t *revision)
+{
+	struct dlb_dev *dlb_dev = (struct dlb_dev *)handle->pf_dev;
+
+	*revision = dlb_dev->revision;
+	return 0;
+}
+
+static int
+dlb_pf_get_num_resources(struct dlb_hw_dev *handle,
+			 struct dlb_get_num_resources_args *rsrcs)
+{
+	struct dlb_dev *dlb_dev = (struct dlb_dev *)handle->pf_dev;
+
+	dlb_hw_get_num_resources(&dlb_dev->hw, rsrcs);
+
+	return 0;
+}
+
+static int
+dlb_pf_get_cq_poll_mode(struct dlb_hw_dev *handle,
+			enum dlb_cq_poll_modes *mode)
+{
+	struct dlb_dev *dlb_dev = (struct dlb_dev *)handle->pf_dev;
+
+	if (dlb_dev->revision >= DLB_REV_B0)
+		*mode = DLB_CQ_POLL_MODE_SPARSE;
+	else
+		*mode = DLB_CQ_POLL_MODE_STD;
+
+	return 0;
+}
+
+static void
+dlb_pf_iface_fn_ptrs_init(void)
+{
+	dlb_iface_low_level_io_init = dlb_pf_low_level_io_init;
+	dlb_iface_open = dlb_pf_open;
+	dlb_iface_get_device_version = dlb_pf_get_device_version;
+	dlb_iface_get_num_resources = dlb_pf_get_num_resources;
+	dlb_iface_get_cq_poll_mode = dlb_pf_get_cq_poll_mode;
 }
 
 /* PCI DEV HOOKS */