From patchwork Mon Oct 7 19:34:19 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Serhii Iliushyk
X-Patchwork-Id: 145357
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Serhii Iliushyk
To: dev@dpdk.org
Cc: mko-plv@napatech.com, sil-plv@napatech.com, ckm@napatech.com,
 andrew.rybchenko@oktetlabs.ru, ferruh.yigit@amd.com,
 Danylo Vodopianov
Subject: [PATCH v2 43/50] net/ntnic: add split-queue support
Date: Mon, 7 Oct 2024 21:34:19 +0200
Message-ID: <20241007193436.675785-44-sil-plv@napatech.com>
X-Mailer: git-send-email 2.45.0
In-Reply-To: <20241007193436.675785-1-sil-plv@napatech.com>
References: <20241006203728.330792-2-sil-plv@napatech.com>
 <20241007193436.675785-1-sil-plv@napatech.com>
List-Id: DPDK patches and discussions
From: Danylo Vodopianov

Split-queue support was added. Internal structures were enhanced with
additional management fields. Managed virtual queue setup functions were
implemented based on the queue type and configuration parameters. DBS
control registers were added.
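Below is a small standalone sketch (not part of the patch) of how the
split-ring layout computed by dbs_calc_struct_layout() packs the three
virtqueue areas: the avail ring, the used ring, and the descriptor table
are laid out back to back, each rounded up to a 4 KiB boundary
(STRUCT_ALIGNMENT). The queue size of 256 is an example value only.

	#include <stddef.h>
	#include <stdint.h>
	#include <stdio.h>

	#define STRUCT_ALIGNMENT (4 * 1024LU)

	/* Split-ring structures as declared in the patch. */
	struct virtq_avail { uint16_t flags; uint16_t idx; uint16_t ring[]; };
	struct virtq_used_elem { uint32_t id; uint32_t len; };
	struct virtq_used { uint16_t flags; uint16_t idx; struct virtq_used_elem ring[]; };

	/* Round n up to the next STRUCT_ALIGNMENT boundary. */
	static size_t align_up(size_t n)
	{
		return (n % STRUCT_ALIGNMENT == 0)
			? n
			: STRUCT_ALIGNMENT * (n / STRUCT_ALIGNMENT + 1);
	}

	int main(void)
	{
		uint32_t queue_size = 256; /* example only */

		/* avail area: header + one uint16_t ring entry per descriptor */
		size_t avail = align_up(sizeof(struct virtq_avail) +
					queue_size * sizeof(uint16_t));

		/* used area: header + one used element per descriptor */
		size_t used = align_up(sizeof(struct virtq_used) +
				       queue_size * sizeof(struct virtq_used_elem));

		/* The descriptor table starts after both aligned areas. */
		printf("used_offset=%zu desc_offset=%zu\n", avail, avail + used);
		return 0;
	}

With 4 KiB alignment and a queue size of 256, both the avail and used
areas fit in a single page, so used_offset is 4096 and desc_offset is
8192.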
Signed-off-by: Danylo Vodopianov --- drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c | 411 +++++++++++++++++- drivers/net/ntnic/include/ntnic_dbs.h | 19 + drivers/net/ntnic/include/ntnic_virt_queue.h | 7 + drivers/net/ntnic/nthw/dbs/nthw_dbs.c | 125 +++++- .../ntnic/nthw/supported/nthw_fpga_reg_defs.h | 1 + .../nthw/supported/nthw_fpga_reg_defs_dbs.h | 79 ++++ 6 files changed, 640 insertions(+), 2 deletions(-) create mode 100644 drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_dbs.h diff --git a/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c b/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c index fc1dab6c5f..e69cf7ad21 100644 --- a/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c +++ b/drivers/net/ntnic/dbsconfig/ntnic_dbsconfig.c @@ -10,6 +10,7 @@ #include "ntnic_mod_reg.h" #include "ntlog.h" +#define STRUCT_ALIGNMENT (4 * 1024LU) #define MAX_VIRT_QUEUES 128 #define LAST_QUEUE 127 @@ -34,12 +35,79 @@ #define TX_AM_POLL_SPEED 5 #define TX_UW_POLL_SPEED 8 +#define VIRTQ_AVAIL_F_NO_INTERRUPT 1 + +struct __rte_aligned(8) virtq_avail { + uint16_t flags; + uint16_t idx; + uint16_t ring[]; /* Queue Size */ +}; + +struct __rte_aligned(8) virtq_used_elem { + /* Index of start of used descriptor chain. */ + uint32_t id; + /* Total length of the descriptor chain which was used (written to) */ + uint32_t len; +}; + +struct __rte_aligned(8) virtq_used { + uint16_t flags; + uint16_t idx; + struct virtq_used_elem ring[]; /* Queue Size */ +}; + +struct virtq_struct_layout_s { + size_t used_offset; + size_t desc_offset; +}; + enum nthw_virt_queue_usage { - NTHW_VIRTQ_UNUSED = 0 + NTHW_VIRTQ_UNUSED = 0, + NTHW_VIRTQ_UNMANAGED, + NTHW_VIRTQ_MANAGED }; struct nthw_virt_queue { + /* Pointers to virt-queue structs */ + struct { + /* SPLIT virtqueue */ + struct virtq_avail *p_avail; + struct virtq_used *p_used; + struct virtq_desc *p_desc; + /* Control variables for virt-queue structs */ + uint16_t am_idx; + uint16_t used_idx; + uint16_t cached_idx; + uint16_t tx_descr_avail_idx; + }; + + /* Array with packet buffers */ + struct nthw_memory_descriptor *p_virtual_addr; + + /* Queue configuration info */ + nthw_dbs_t *mp_nthw_dbs; + enum nthw_virt_queue_usage usage; + uint16_t irq_vector; + uint16_t vq_type; + uint16_t in_order; + + uint16_t queue_size; + uint32_t index; + uint32_t am_enable; + uint32_t host_id; + uint32_t port; /* Only used by TX queues */ + uint32_t virtual_port; /* Only used by TX queues */ + /* + * Only used by TX queues: + * 0: VirtIO-Net header (12 bytes). + * 1: Napatech DVIO0 descriptor (12 bytes). + */ +}; + +struct pvirtq_struct_layout_s { + size_t driver_event_offset; + size_t device_event_offset; }; static struct nthw_virt_queue rxvq[MAX_VIRT_QUEUES]; @@ -143,7 +211,348 @@ static int nthw_virt_queue_init(struct fpga_info_s *p_fpga_info) return 0; } +static struct virtq_struct_layout_s dbs_calc_struct_layout(uint32_t queue_size) +{ + /* + sizeof(uint16_t); ("avail->used_event" is not used) */ + size_t avail_mem = sizeof(struct virtq_avail) + queue_size * sizeof(uint16_t); + size_t avail_mem_aligned = ((avail_mem % STRUCT_ALIGNMENT) == 0) + ? avail_mem + : STRUCT_ALIGNMENT * (avail_mem / STRUCT_ALIGNMENT + 1); + + /* + sizeof(uint16_t); ("used->avail_event" is not used) */ + size_t used_mem = sizeof(struct virtq_used) + queue_size * sizeof(struct virtq_used_elem); + size_t used_mem_aligned = ((used_mem % STRUCT_ALIGNMENT) == 0) + ? 
used_mem + : STRUCT_ALIGNMENT * (used_mem / STRUCT_ALIGNMENT + 1); + + struct virtq_struct_layout_s virtq_layout; + virtq_layout.used_offset = avail_mem_aligned; + virtq_layout.desc_offset = avail_mem_aligned + used_mem_aligned; + + return virtq_layout; +} + +static void dbs_initialize_avail_struct(void *addr, uint16_t queue_size, + uint16_t initial_avail_idx) +{ + uint16_t i; + struct virtq_avail *p_avail = (struct virtq_avail *)addr; + + p_avail->flags = VIRTQ_AVAIL_F_NO_INTERRUPT; + p_avail->idx = initial_avail_idx; + + for (i = 0; i < queue_size; ++i) + p_avail->ring[i] = i; +} + +static void dbs_initialize_used_struct(void *addr, uint16_t queue_size) +{ + int i; + struct virtq_used *p_used = (struct virtq_used *)addr; + + p_used->flags = 1; + p_used->idx = 0; + + for (i = 0; i < queue_size; ++i) { + p_used->ring[i].id = 0; + p_used->ring[i].len = 0; + } +} + +static void +dbs_initialize_descriptor_struct(void *addr, + struct nthw_memory_descriptor *packet_buffer_descriptors, + uint16_t queue_size, uint16_t flgs) +{ + if (packet_buffer_descriptors) { + int i; + struct virtq_desc *p_desc = (struct virtq_desc *)addr; + + for (i = 0; i < queue_size; ++i) { + p_desc[i].addr = (uint64_t)packet_buffer_descriptors[i].phys_addr; + p_desc[i].len = packet_buffer_descriptors[i].len; + p_desc[i].flags = flgs; + p_desc[i].next = 0; + } + } +} + +static void +dbs_initialize_virt_queue_structs(void *avail_struct_addr, void *used_struct_addr, + void *desc_struct_addr, + struct nthw_memory_descriptor *packet_buffer_descriptors, + uint16_t queue_size, uint16_t initial_avail_idx, uint16_t flgs) +{ + dbs_initialize_avail_struct(avail_struct_addr, queue_size, initial_avail_idx); + dbs_initialize_used_struct(used_struct_addr, queue_size); + dbs_initialize_descriptor_struct(desc_struct_addr, packet_buffer_descriptors, queue_size, + flgs); +} + +static struct nthw_virt_queue *nthw_setup_rx_virt_queue(nthw_dbs_t *p_nthw_dbs, + uint32_t index, + uint16_t start_idx, + uint16_t start_ptr, + void *avail_struct_phys_addr, + void *used_struct_phys_addr, + void *desc_struct_phys_addr, + uint16_t queue_size, + uint32_t host_id, + uint32_t header, + uint32_t vq_type, + int irq_vector) +{ + (void)header; + (void)desc_struct_phys_addr; + (void)avail_struct_phys_addr; + (void)used_struct_phys_addr; + + + /* + * 5. Initialize all RX queues (all DBS_RX_QUEUES of them) using the + * DBS.RX_INIT register. + */ + dbs_init_rx_queue(p_nthw_dbs, index, start_idx, start_ptr); + + /* Save queue state */ + rxvq[index].usage = NTHW_VIRTQ_UNMANAGED; + rxvq[index].mp_nthw_dbs = p_nthw_dbs; + rxvq[index].index = index; + rxvq[index].queue_size = queue_size; + rxvq[index].am_enable = (irq_vector < 0) ? RX_AM_ENABLE : RX_AM_DISABLE; + rxvq[index].host_id = host_id; + rxvq[index].vq_type = vq_type; + rxvq[index].in_order = 0; /* not used */ + rxvq[index].irq_vector = irq_vector; + + /* Return queue handle */ + return &rxvq[index]; +} + +static struct nthw_virt_queue *nthw_setup_tx_virt_queue(nthw_dbs_t *p_nthw_dbs, + uint32_t index, + uint16_t start_idx, + uint16_t start_ptr, + void *avail_struct_phys_addr, + void *used_struct_phys_addr, + void *desc_struct_phys_addr, + uint16_t queue_size, + uint32_t host_id, + uint32_t port, + uint32_t virtual_port, + uint32_t header, + uint32_t vq_type, + int irq_vector, + uint32_t in_order) +{ + (void)header; + (void)desc_struct_phys_addr; + (void)avail_struct_phys_addr; + (void)used_struct_phys_addr; + + /* + * 5. Initialize all TX queues (all DBS_TX_QUEUES of them) using the + * DBS.TX_INIT register. 
+ */ + dbs_init_tx_queue(p_nthw_dbs, index, start_idx, start_ptr); + + /* Save queue state */ + txvq[index].usage = NTHW_VIRTQ_UNMANAGED; + txvq[index].mp_nthw_dbs = p_nthw_dbs; + txvq[index].index = index; + txvq[index].queue_size = queue_size; + txvq[index].am_enable = (irq_vector < 0) ? TX_AM_ENABLE : TX_AM_DISABLE; + txvq[index].host_id = host_id; + txvq[index].port = port; + txvq[index].virtual_port = virtual_port; + txvq[index].vq_type = vq_type; + txvq[index].in_order = in_order; + txvq[index].irq_vector = irq_vector; + + /* Return queue handle */ + return &txvq[index]; +} + +static struct nthw_virt_queue * +nthw_setup_mngd_rx_virt_queue_split(nthw_dbs_t *p_nthw_dbs, + uint32_t index, + uint32_t queue_size, + uint32_t host_id, + uint32_t header, + struct nthw_memory_descriptor *p_virt_struct_area, + struct nthw_memory_descriptor *p_packet_buffers, + int irq_vector) +{ + struct virtq_struct_layout_s virtq_struct_layout = dbs_calc_struct_layout(queue_size); + + dbs_initialize_virt_queue_structs(p_virt_struct_area->virt_addr, + (char *)p_virt_struct_area->virt_addr + + virtq_struct_layout.used_offset, + (char *)p_virt_struct_area->virt_addr + + virtq_struct_layout.desc_offset, + p_packet_buffers, + (uint16_t)queue_size, + p_packet_buffers ? (uint16_t)queue_size : 0, + VIRTQ_DESC_F_WRITE /* Rx */); + + rxvq[index].p_avail = p_virt_struct_area->virt_addr; + rxvq[index].p_used = + (void *)((char *)p_virt_struct_area->virt_addr + virtq_struct_layout.used_offset); + rxvq[index].p_desc = + (void *)((char *)p_virt_struct_area->virt_addr + virtq_struct_layout.desc_offset); + + rxvq[index].am_idx = p_packet_buffers ? (uint16_t)queue_size : 0; + rxvq[index].used_idx = 0; + rxvq[index].cached_idx = 0; + rxvq[index].p_virtual_addr = NULL; + + if (p_packet_buffers) { + rxvq[index].p_virtual_addr = malloc(queue_size * sizeof(*p_packet_buffers)); + memcpy(rxvq[index].p_virtual_addr, p_packet_buffers, + queue_size * sizeof(*p_packet_buffers)); + } + + nthw_setup_rx_virt_queue(p_nthw_dbs, index, 0, 0, (void *)p_virt_struct_area->phys_addr, + (char *)p_virt_struct_area->phys_addr + + virtq_struct_layout.used_offset, + (char *)p_virt_struct_area->phys_addr + + virtq_struct_layout.desc_offset, + (uint16_t)queue_size, host_id, header, SPLIT_RING, irq_vector); + + rxvq[index].usage = NTHW_VIRTQ_MANAGED; + + return &rxvq[index]; +} + +static struct nthw_virt_queue * +nthw_setup_mngd_tx_virt_queue_split(nthw_dbs_t *p_nthw_dbs, + uint32_t index, + uint32_t queue_size, + uint32_t host_id, + uint32_t port, + uint32_t virtual_port, + uint32_t header, + int irq_vector, + uint32_t in_order, + struct nthw_memory_descriptor *p_virt_struct_area, + struct nthw_memory_descriptor *p_packet_buffers) +{ + struct virtq_struct_layout_s virtq_struct_layout = dbs_calc_struct_layout(queue_size); + + dbs_initialize_virt_queue_structs(p_virt_struct_area->virt_addr, + (char *)p_virt_struct_area->virt_addr + + virtq_struct_layout.used_offset, + (char *)p_virt_struct_area->virt_addr + + virtq_struct_layout.desc_offset, + p_packet_buffers, + (uint16_t)queue_size, + 0, + 0 /* Tx */); + + txvq[index].p_avail = p_virt_struct_area->virt_addr; + txvq[index].p_used = + (void *)((char *)p_virt_struct_area->virt_addr + virtq_struct_layout.used_offset); + txvq[index].p_desc = + (void *)((char *)p_virt_struct_area->virt_addr + virtq_struct_layout.desc_offset); + txvq[index].queue_size = (uint16_t)queue_size; + txvq[index].am_idx = 0; + txvq[index].used_idx = 0; + txvq[index].cached_idx = 0; + txvq[index].p_virtual_addr = NULL; + + 
txvq[index].tx_descr_avail_idx = 0; + + if (p_packet_buffers) { + txvq[index].p_virtual_addr = malloc(queue_size * sizeof(*p_packet_buffers)); + memcpy(txvq[index].p_virtual_addr, p_packet_buffers, + queue_size * sizeof(*p_packet_buffers)); + } + + nthw_setup_tx_virt_queue(p_nthw_dbs, index, 0, 0, (void *)p_virt_struct_area->phys_addr, + (char *)p_virt_struct_area->phys_addr + + virtq_struct_layout.used_offset, + (char *)p_virt_struct_area->phys_addr + + virtq_struct_layout.desc_offset, + (uint16_t)queue_size, host_id, port, virtual_port, header, + SPLIT_RING, irq_vector, in_order); + + txvq[index].usage = NTHW_VIRTQ_MANAGED; + + return &txvq[index]; +} + +/* + * Create a Managed Rx Virt Queue + * + * Notice: The queue will be created with interrupts disabled. + * If interrupts are required, make sure to call nthw_enable_rx_virt_queue() + * afterwards. + */ +static struct nthw_virt_queue * +nthw_setup_mngd_rx_virt_queue(nthw_dbs_t *p_nthw_dbs, + uint32_t index, + uint32_t queue_size, + uint32_t host_id, + uint32_t header, + struct nthw_memory_descriptor *p_virt_struct_area, + struct nthw_memory_descriptor *p_packet_buffers, + uint32_t vq_type, + int irq_vector) +{ + switch (vq_type) { + case SPLIT_RING: + return nthw_setup_mngd_rx_virt_queue_split(p_nthw_dbs, index, queue_size, + host_id, header, p_virt_struct_area, + p_packet_buffers, irq_vector); + + default: + break; + } + + return NULL; +} + +/* + * Create a Managed Tx Virt Queue + * + * Notice: The queue will be created with interrupts disabled. + * If interrupts are required, make sure to call nthw_enable_tx_virt_queue() + * afterwards. + */ +static struct nthw_virt_queue * +nthw_setup_mngd_tx_virt_queue(nthw_dbs_t *p_nthw_dbs, + uint32_t index, + uint32_t queue_size, + uint32_t host_id, + uint32_t port, + uint32_t virtual_port, + uint32_t header, + struct nthw_memory_descriptor *p_virt_struct_area, + struct nthw_memory_descriptor *p_packet_buffers, + uint32_t vq_type, + int irq_vector, + uint32_t in_order) +{ + switch (vq_type) { + case SPLIT_RING: + return nthw_setup_mngd_tx_virt_queue_split(p_nthw_dbs, index, queue_size, + host_id, port, virtual_port, header, + irq_vector, in_order, + p_virt_struct_area, + p_packet_buffers); + + default: + break; + } + + return NULL; +} + static struct sg_ops_s sg_ops = { + .nthw_setup_rx_virt_queue = nthw_setup_rx_virt_queue, + .nthw_setup_tx_virt_queue = nthw_setup_tx_virt_queue, + .nthw_setup_mngd_rx_virt_queue = nthw_setup_mngd_rx_virt_queue, + .nthw_setup_mngd_tx_virt_queue = nthw_setup_mngd_tx_virt_queue, .nthw_virt_queue_init = nthw_virt_queue_init }; diff --git a/drivers/net/ntnic/include/ntnic_dbs.h b/drivers/net/ntnic/include/ntnic_dbs.h index a64d2a0aeb..4e6236e8b4 100644 --- a/drivers/net/ntnic/include/ntnic_dbs.h +++ b/drivers/net/ntnic/include/ntnic_dbs.h @@ -47,6 +47,11 @@ struct nthw_dbs_s { nthw_field_t *mp_fld_rx_init_val_idx; nthw_field_t *mp_fld_rx_init_val_ptr; + nthw_register_t *mp_reg_rx_ptr; + nthw_field_t *mp_fld_rx_ptr_ptr; + nthw_field_t *mp_fld_rx_ptr_queue; + nthw_field_t *mp_fld_rx_ptr_valid; + nthw_register_t *mp_reg_tx_init; nthw_field_t *mp_fld_tx_init_init; nthw_field_t *mp_fld_tx_init_queue; @@ -56,6 +61,20 @@ struct nthw_dbs_s { nthw_field_t *mp_fld_tx_init_val_idx; nthw_field_t *mp_fld_tx_init_val_ptr; + nthw_register_t *mp_reg_tx_ptr; + nthw_field_t *mp_fld_tx_ptr_ptr; + nthw_field_t *mp_fld_tx_ptr_queue; + nthw_field_t *mp_fld_tx_ptr_valid; + + nthw_register_t *mp_reg_rx_idle; + nthw_field_t *mp_fld_rx_idle_idle; + nthw_field_t *mp_fld_rx_idle_queue; + 
nthw_field_t *mp_fld_rx_idle_busy; + + nthw_register_t *mp_reg_tx_idle; + nthw_field_t *mp_fld_tx_idle_idle; + nthw_field_t *mp_fld_tx_idle_queue; + nthw_field_t *mp_fld_tx_idle_busy; }; typedef struct nthw_dbs_s nthw_dbs_t; diff --git a/drivers/net/ntnic/include/ntnic_virt_queue.h b/drivers/net/ntnic/include/ntnic_virt_queue.h index f8842819e4..97cb474dc8 100644 --- a/drivers/net/ntnic/include/ntnic_virt_queue.h +++ b/drivers/net/ntnic/include/ntnic_virt_queue.h @@ -23,6 +23,13 @@ struct nthw_virt_queue; * contiguous) In Used descriptors it must be ignored */ #define VIRTQ_DESC_F_NEXT 1 +/* + * SPLIT : This marks a buffer as device write-only (otherwise device read-only). + * PACKED: This marks a descriptor as device write-only (otherwise device read-only). + * PACKED: In a used descriptor, this bit is used to specify whether any data has been written by + * the device into any parts of the buffer. + */ +#define VIRTQ_DESC_F_WRITE 2 /* * Split Ring virtq Descriptor diff --git a/drivers/net/ntnic/nthw/dbs/nthw_dbs.c b/drivers/net/ntnic/nthw/dbs/nthw_dbs.c index 853d7bc1ec..cd1123b6f3 100644 --- a/drivers/net/ntnic/nthw/dbs/nthw_dbs.c +++ b/drivers/net/ntnic/nthw/dbs/nthw_dbs.c @@ -44,12 +44,135 @@ int dbs_init(nthw_dbs_t *p, nthw_fpga_t *p_fpga, int n_instance) p->mp_fpga->p_fpga_info->mp_adapter_id_str, p->mn_instance); } + p->mp_reg_rx_control = nthw_module_get_register(p->mp_mod_dbs, DBS_RX_CONTROL); + p->mp_fld_rx_control_last_queue = + nthw_register_get_field(p->mp_reg_rx_control, DBS_RX_CONTROL_LQ); + p->mp_fld_rx_control_avail_monitor_enable = + nthw_register_get_field(p->mp_reg_rx_control, DBS_RX_CONTROL_AME); + p->mp_fld_rx_control_avail_monitor_scan_speed = + nthw_register_get_field(p->mp_reg_rx_control, DBS_RX_CONTROL_AMS); + p->mp_fld_rx_control_used_write_enable = + nthw_register_get_field(p->mp_reg_rx_control, DBS_RX_CONTROL_UWE); + p->mp_fld_rx_control_used_writer_update_speed = + nthw_register_get_field(p->mp_reg_rx_control, DBS_RX_CONTROL_UWS); + p->mp_fld_rx_control_rx_queues_enable = + nthw_register_get_field(p->mp_reg_rx_control, DBS_RX_CONTROL_QE); + + p->mp_reg_tx_control = nthw_module_get_register(p->mp_mod_dbs, DBS_TX_CONTROL); + p->mp_fld_tx_control_last_queue = + nthw_register_get_field(p->mp_reg_tx_control, DBS_TX_CONTROL_LQ); + p->mp_fld_tx_control_avail_monitor_enable = + nthw_register_get_field(p->mp_reg_tx_control, DBS_TX_CONTROL_AME); + p->mp_fld_tx_control_avail_monitor_scan_speed = + nthw_register_get_field(p->mp_reg_tx_control, DBS_TX_CONTROL_AMS); + p->mp_fld_tx_control_used_write_enable = + nthw_register_get_field(p->mp_reg_tx_control, DBS_TX_CONTROL_UWE); + p->mp_fld_tx_control_used_writer_update_speed = + nthw_register_get_field(p->mp_reg_tx_control, DBS_TX_CONTROL_UWS); + p->mp_fld_tx_control_tx_queues_enable = + nthw_register_get_field(p->mp_reg_tx_control, DBS_TX_CONTROL_QE); + + p->mp_reg_rx_init = nthw_module_get_register(p->mp_mod_dbs, DBS_RX_INIT); + p->mp_fld_rx_init_init = nthw_register_get_field(p->mp_reg_rx_init, DBS_RX_INIT_INIT); + p->mp_fld_rx_init_queue = nthw_register_get_field(p->mp_reg_rx_init, DBS_RX_INIT_QUEUE); + p->mp_fld_rx_init_busy = nthw_register_get_field(p->mp_reg_rx_init, DBS_RX_INIT_BUSY); + + p->mp_reg_rx_init_val = nthw_module_query_register(p->mp_mod_dbs, DBS_RX_INIT_VAL); + + if (p->mp_reg_rx_init_val) { + p->mp_fld_rx_init_val_idx = + nthw_register_query_field(p->mp_reg_rx_init_val, DBS_RX_INIT_VAL_IDX); + p->mp_fld_rx_init_val_ptr = + nthw_register_query_field(p->mp_reg_rx_init_val, DBS_RX_INIT_VAL_PTR); + } + + 
p->mp_reg_rx_ptr = nthw_module_query_register(p->mp_mod_dbs, DBS_RX_PTR); + + if (p->mp_reg_rx_ptr) { + p->mp_fld_rx_ptr_ptr = nthw_register_query_field(p->mp_reg_rx_ptr, DBS_RX_PTR_PTR); + p->mp_fld_rx_ptr_queue = + nthw_register_query_field(p->mp_reg_rx_ptr, DBS_RX_PTR_QUEUE); + p->mp_fld_rx_ptr_valid = + nthw_register_query_field(p->mp_reg_rx_ptr, DBS_RX_PTR_VALID); + } + + p->mp_reg_tx_init = nthw_module_get_register(p->mp_mod_dbs, DBS_TX_INIT); + p->mp_fld_tx_init_init = nthw_register_get_field(p->mp_reg_tx_init, DBS_TX_INIT_INIT); + p->mp_fld_tx_init_queue = nthw_register_get_field(p->mp_reg_tx_init, DBS_TX_INIT_QUEUE); + p->mp_fld_tx_init_busy = nthw_register_get_field(p->mp_reg_tx_init, DBS_TX_INIT_BUSY); + + p->mp_reg_tx_init_val = nthw_module_query_register(p->mp_mod_dbs, DBS_TX_INIT_VAL); + + if (p->mp_reg_tx_init_val) { + p->mp_fld_tx_init_val_idx = + nthw_register_query_field(p->mp_reg_tx_init_val, DBS_TX_INIT_VAL_IDX); + p->mp_fld_tx_init_val_ptr = + nthw_register_query_field(p->mp_reg_tx_init_val, DBS_TX_INIT_VAL_PTR); + } + + p->mp_reg_tx_ptr = nthw_module_query_register(p->mp_mod_dbs, DBS_TX_PTR); + + if (p->mp_reg_tx_ptr) { + p->mp_fld_tx_ptr_ptr = nthw_register_query_field(p->mp_reg_tx_ptr, DBS_TX_PTR_PTR); + p->mp_fld_tx_ptr_queue = + nthw_register_query_field(p->mp_reg_tx_ptr, DBS_TX_PTR_QUEUE); + p->mp_fld_tx_ptr_valid = + nthw_register_query_field(p->mp_reg_tx_ptr, DBS_TX_PTR_VALID); + } + + p->mp_reg_rx_idle = nthw_module_query_register(p->mp_mod_dbs, DBS_RX_IDLE); + + if (p->mp_reg_rx_idle) { + p->mp_fld_rx_idle_idle = + nthw_register_query_field(p->mp_reg_rx_idle, DBS_RX_IDLE_IDLE); + p->mp_fld_rx_idle_queue = + nthw_register_query_field(p->mp_reg_rx_idle, DBS_RX_IDLE_QUEUE); + p->mp_fld_rx_idle_busy = + nthw_register_query_field(p->mp_reg_rx_idle, DBS_RX_IDLE_BUSY); + } + + p->mp_reg_tx_idle = nthw_module_query_register(p->mp_mod_dbs, DBS_TX_IDLE); + + if (p->mp_reg_tx_idle) { + p->mp_fld_tx_idle_idle = + nthw_register_query_field(p->mp_reg_tx_idle, DBS_TX_IDLE_IDLE); + p->mp_fld_tx_idle_queue = + nthw_register_query_field(p->mp_reg_tx_idle, DBS_TX_IDLE_QUEUE); + p->mp_fld_tx_idle_busy = + nthw_register_query_field(p->mp_reg_tx_idle, DBS_TX_IDLE_BUSY); + } + + return 0; +} + +static int dbs_reset_rx_control(nthw_dbs_t *p) +{ + nthw_field_set_val32(p->mp_fld_rx_control_last_queue, 0); + nthw_field_set_val32(p->mp_fld_rx_control_avail_monitor_enable, 0); + nthw_field_set_val32(p->mp_fld_rx_control_avail_monitor_scan_speed, 8); + nthw_field_set_val32(p->mp_fld_rx_control_used_write_enable, 0); + nthw_field_set_val32(p->mp_fld_rx_control_used_writer_update_speed, 5); + nthw_field_set_val32(p->mp_fld_rx_control_rx_queues_enable, 0); + nthw_register_flush(p->mp_reg_rx_control, 1); + return 0; +} + +static int dbs_reset_tx_control(nthw_dbs_t *p) +{ + nthw_field_set_val32(p->mp_fld_tx_control_last_queue, 0); + nthw_field_set_val32(p->mp_fld_tx_control_avail_monitor_enable, 0); + nthw_field_set_val32(p->mp_fld_tx_control_avail_monitor_scan_speed, 5); + nthw_field_set_val32(p->mp_fld_tx_control_used_write_enable, 0); + nthw_field_set_val32(p->mp_fld_tx_control_used_writer_update_speed, 8); + nthw_field_set_val32(p->mp_fld_tx_control_tx_queues_enable, 0); + nthw_register_flush(p->mp_reg_tx_control, 1); return 0; } void dbs_reset(nthw_dbs_t *p) { - (void)p; + dbs_reset_rx_control(p); + dbs_reset_tx_control(p); } int set_rx_control(nthw_dbs_t *p, diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h index 
45f9794958..3560eeda7d 100644 --- a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h +++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs.h @@ -16,6 +16,7 @@ #include "nthw_fpga_reg_defs_cat.h" #include "nthw_fpga_reg_defs_cpy.h" #include "nthw_fpga_reg_defs_csu.h" +#include "nthw_fpga_reg_defs_dbs.h" #include "nthw_fpga_reg_defs_flm.h" #include "nthw_fpga_reg_defs_gfg.h" #include "nthw_fpga_reg_defs_gmf.h" diff --git a/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_dbs.h b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_dbs.h new file mode 100644 index 0000000000..ee5d726aab --- /dev/null +++ b/drivers/net/ntnic/nthw/supported/nthw_fpga_reg_defs_dbs.h @@ -0,0 +1,79 @@ +/* + * SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2024 Napatech A/S + */ + +/* + * nthw_fpga_reg_defs_dbs.h + * + * Auto-generated file - do *NOT* edit + * + */ + +#ifndef _NTHW_FPGA_REG_DEFS_DBS_ +#define _NTHW_FPGA_REG_DEFS_DBS_ + +/* DBS */ +#define DBS_RX_CONTROL (0xb18b2866UL) +#define DBS_RX_CONTROL_AME (0x1f9219acUL) +#define DBS_RX_CONTROL_AMS (0xeb46acfdUL) +#define DBS_RX_CONTROL_LQ (0xe65f90b2UL) +#define DBS_RX_CONTROL_QE (0x3e928d3UL) +#define DBS_RX_CONTROL_UWE (0xb490e8dbUL) +#define DBS_RX_CONTROL_UWS (0x40445d8aUL) +#define DBS_RX_IDLE (0x93c723bfUL) +#define DBS_RX_IDLE_BUSY (0x8e043b5bUL) +#define DBS_RX_IDLE_IDLE (0x9dba27ccUL) +#define DBS_RX_IDLE_QUEUE (0xbbddab49UL) +#define DBS_RX_INIT (0x899772deUL) +#define DBS_RX_INIT_BUSY (0x8576d90aUL) +#define DBS_RX_INIT_INIT (0x8c9894fcUL) +#define DBS_RX_INIT_QUEUE (0xa7bab8c9UL) +#define DBS_RX_INIT_VAL (0x7789b4d8UL) +#define DBS_RX_INIT_VAL_IDX (0xead0e2beUL) +#define DBS_RX_INIT_VAL_PTR (0x5330810eUL) +#define DBS_RX_PTR (0x628ce523UL) +#define DBS_RX_PTR_PTR (0x7f834481UL) +#define DBS_RX_PTR_QUEUE (0x4f3fa6d1UL) +#define DBS_RX_PTR_VALID (0xbcc5ec4dUL) +#define DBS_STATUS (0xb5f35220UL) +#define DBS_STATUS_OK (0xcf09a30fUL) +#define DBS_TX_CONTROL (0xbc955821UL) +#define DBS_TX_CONTROL_AME (0xe750521aUL) +#define DBS_TX_CONTROL_AMS (0x1384e74bUL) +#define DBS_TX_CONTROL_LQ (0x46ba4f6fUL) +#define DBS_TX_CONTROL_QE (0xa30cf70eUL) +#define DBS_TX_CONTROL_UWE (0x4c52a36dUL) +#define DBS_TX_CONTROL_UWS (0xb886163cUL) +#define DBS_TX_IDLE (0xf0171685UL) +#define DBS_TX_IDLE_BUSY (0x61399ebbUL) +#define DBS_TX_IDLE_IDLE (0x7287822cUL) +#define DBS_TX_IDLE_QUEUE (0x1b387494UL) +#define DBS_TX_INIT (0xea4747e4UL) +#define DBS_TX_INIT_BUSY (0x6a4b7ceaUL) +#define DBS_TX_INIT_INIT (0x63a5311cUL) +#define DBS_TX_INIT_QUEUE (0x75f6714UL) +#define DBS_TX_INIT_VAL (0x9f3c7e9bUL) +#define DBS_TX_INIT_VAL_IDX (0xc82a364cUL) +#define DBS_TX_INIT_VAL_PTR (0x71ca55fcUL) +#define DBS_TX_PTR (0xb4d5063eUL) +#define DBS_TX_PTR_PTR (0x729d34c6UL) +#define DBS_TX_PTR_QUEUE (0xa0020331UL) +#define DBS_TX_PTR_VALID (0x53f849adUL) +#define DBS_TX_QOS_CTRL (0x3b2c3286UL) +#define DBS_TX_QOS_CTRL_ADR (0x666600acUL) +#define DBS_TX_QOS_CTRL_CNT (0x766e997dUL) +#define DBS_TX_QOS_DATA (0x94fdb09fUL) +#define DBS_TX_QOS_DATA_BS (0x2c394071UL) +#define DBS_TX_QOS_DATA_EN (0x7eba6fUL) +#define DBS_TX_QOS_DATA_IR (0xb8caa92cUL) +#define DBS_TX_QOS_DATA_MUL (0xd7407a67UL) +#define DBS_TX_QOS_RATE (0xe6e27cc5UL) +#define DBS_TX_QOS_RATE_DIV (0x8cd07ba3UL) +#define DBS_TX_QOS_RATE_MUL (0x9814e40bUL) + +#endif /* _NTHW_FPGA_REG_DEFS_DBS_ */ + +/* + * Auto-generated file - do *NOT* edit + */