From patchwork Mon Oct  7 19:34:16 2024
X-Patchwork-Submitter: Serhii Iliushyk
X-Patchwork-Id: 145352
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Serhii Iliushyk
To: dev@dpdk.org
Cc: mko-plv@napatech.com, sil-plv@napatech.com, ckm@napatech.com,
 andrew.rybchenko@oktetlabs.ru, ferruh.yigit@amd.com,
 Danylo Vodopianov
Subject: [PATCH v2 40/50] net/ntnic: add queue setup operations
Date: Mon, 7 Oct 2024 21:34:16 +0200
Message-ID: <20241007193436.675785-41-sil-plv@napatech.com>
X-Mailer: git-send-email 2.45.0
In-Reply-To: <20241007193436.675785-1-sil-plv@napatech.com>
References: <20241006203728.330792-2-sil-plv@napatech.com>
 <20241007193436.675785-1-sil-plv@napatech.com>
MIME-Version: 1.0
From: Danylo Vodopianov

Add TX and RX queue setup operations. Allocate and configure memory
for the hardware Virtio queues, including the IOMMU and VFIO mappings.

Signed-off-by: Danylo Vodopianov
---
 drivers/net/ntnic/include/ntnic_virt_queue.h |   3 +-
 drivers/net/ntnic/include/ntos_drv.h         |   6 +
 drivers/net/ntnic/nthw/nthw_drv.h            |   2 +
 drivers/net/ntnic/ntnic_ethdev.c             | 323 +++++++++++++++++++
 4 files changed, 333 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ntnic/include/ntnic_virt_queue.h b/drivers/net/ntnic/include/ntnic_virt_queue.h
index 422ac3b950..821b23af6c 100644
--- a/drivers/net/ntnic/include/ntnic_virt_queue.h
+++ b/drivers/net/ntnic/include/ntnic_virt_queue.h
@@ -13,7 +13,8 @@
 
 struct nthw_virt_queue;
 
-struct nthw_virtq_desc_buf;
+#define SPLIT_RING 0
+#define IN_ORDER 1
 
 struct nthw_cvirtq_desc;
 
diff --git a/drivers/net/ntnic/include/ntos_drv.h b/drivers/net/ntnic/include/ntos_drv.h
index 233d585303..933b012e07 100644
--- a/drivers/net/ntnic/include/ntos_drv.h
+++ b/drivers/net/ntnic/include/ntos_drv.h
@@ -47,6 +47,7 @@ struct __rte_cache_aligned ntnic_rx_queue {
 	struct hwq_s hwq;
 	struct nthw_virt_queue *vq;
+	int nb_hw_rx_descr;
 	nt_meta_port_type_t type;
 	uint32_t port;	/* Rx port for this queue */
 	enum fpga_info_profile profile;	/* Inline / Capture */
@@ -57,7 +58,12 @@ struct __rte_cache_aligned ntnic_tx_queue {
 	struct flow_queue_id_s queue;	/* queue info - user id and hw queue index */
 	struct hwq_s hwq;
 	struct nthw_virt_queue *vq;
+	int nb_hw_tx_descr;
+	/* Used for bypass in NTDVIO0 header on Tx - pre calculated */
+	int target_id;
 	nt_meta_port_type_t type;
+	/* only used for exception tx queue from OVS SW switching */
+	int rss_target_id;
 	uint32_t port;	/* Tx port for this queue */
 	int enabled;	/* Enabling/disabling of this queue */
diff --git a/drivers/net/ntnic/nthw/nthw_drv.h b/drivers/net/ntnic/nthw/nthw_drv.h
index eaa2b19015..69e0360f5f 100644
--- a/drivers/net/ntnic/nthw/nthw_drv.h
+++ b/drivers/net/ntnic/nthw/nthw_drv.h
@@ -71,6 +71,8 @@ typedef struct fpga_info_s {
 	struct nthw_pcie3 *mp_nthw_pcie3;
 	struct nthw_tsm *mp_nthw_tsm;
 
+	nthw_dbs_t *mp_nthw_dbs;
+
 	uint8_t *bar0_addr;	/* Needed for register read/write */
 	size_t bar0_size;
diff --git a/drivers/net/ntnic/ntnic_ethdev.c b/drivers/net/ntnic/ntnic_ethdev.c
index 78a689d444..57827d73d5 100644
--- a/drivers/net/ntnic/ntnic_ethdev.c
+++ b/drivers/net/ntnic/ntnic_ethdev.c
@@ -31,10 +31,16 @@
 
 #define MAX_TOTAL_QUEUES 128
 
+#define SG_NB_HW_RX_DESCRIPTORS 1024
+#define SG_NB_HW_TX_DESCRIPTORS 1024
+#define SG_HW_RX_PKT_BUFFER_SIZE (1024 << 1)
+#define SG_HW_TX_PKT_BUFFER_SIZE (1024 << 1)
+
 /* Max RSS queues */
 #define MAX_QUEUES 125
 
 #define ONE_G_SIZE 0x40000000
+#define ONE_G_MASK (ONE_G_SIZE - 1)
 
 #define ETH_DEV_NTNIC_HELP_ARG "help"
 #define ETH_DEV_NTHW_RXQUEUES_ARG "rxqs"
@@ -187,6 +193,157 @@ eth_dev_infos_get(struct rte_eth_dev *eth_dev, struct rte_eth_dev_info *dev_info
 	return 0;
 }
 
+static int allocate_hw_virtio_queues(struct rte_eth_dev *eth_dev, int vf_num, struct hwq_s *hwq,
+	int num_descr, int buf_size)
+{
+	int i, res;
+	uint32_t size;
+	uint64_t iova_addr;
+
+	NT_LOG(DBG, NTNIC, "***** Configure IOMMU for HW queues on VF %i *****\n", vf_num);
+
+	/* Just allocate 1MB to hold all combined descr rings */
+	uint64_t tot_alloc_size = 0x100000 + buf_size * num_descr;
+
+	void *virt =
+		rte_malloc_socket("VirtQDescr", tot_alloc_size, nt_util_align_size(tot_alloc_size),
+			eth_dev->data->numa_node);
+
+	if (!virt)
+		return -1;
+
+	uint64_t gp_offset = (uint64_t)virt & ONE_G_MASK;
+	rte_iova_t hpa = rte_malloc_virt2iova(virt);
+
+	NT_LOG(DBG, NTNIC, "Allocated virtio descr rings : virt "
+		"%p [0x%" PRIX64 "],hpa %" PRIX64 " [0x%" PRIX64 "]\n",
+		virt, gp_offset, hpa, hpa & ONE_G_MASK);
+
+	/*
+	 * Same offset on both HPA and IOVA
+	 * Make sure 1G boundary is never crossed
+	 */
+	if (((hpa & ONE_G_MASK) != gp_offset) ||
+		(((uint64_t)virt + tot_alloc_size) & ~ONE_G_MASK) !=
+		((uint64_t)virt & ~ONE_G_MASK)) {
+		NT_LOG(ERR, NTNIC, "*********************************************************\n");
+		NT_LOG(ERR, NTNIC, "ERROR, no optimal IOMMU mapping available hpa: %016" PRIX64
+			"(%016" PRIX64 "), gp_offset: %016" PRIX64 " size: %" PRIu64 "\n",
+			hpa, hpa & ONE_G_MASK, gp_offset, tot_alloc_size);
+		NT_LOG(ERR, NTNIC, "*********************************************************\n");
+
+		rte_free(virt);
+
+		/* Just allocate 1MB to hold all combined descr rings */
+		size = 0x100000;
+		void *virt = rte_malloc_socket("VirtQDescr", size, 4096, eth_dev->data->numa_node);
+
+		if (!virt)
+			return -1;
+
+		res = nt_vfio_dma_map(vf_num, virt, &iova_addr, size);
+
+		NT_LOG(DBG, NTNIC, "VFIO MMAP res %i, vf_num %i\n", res, vf_num);
+
+		if (res != 0)
+			return -1;
+
+		hwq->vf_num = vf_num;
+		hwq->virt_queues_ctrl.virt_addr = virt;
+		hwq->virt_queues_ctrl.phys_addr = (void *)iova_addr;
+		hwq->virt_queues_ctrl.len = size;
+
+		NT_LOG(DBG, NTNIC,
+			"Allocated for virtio descr rings combined 1MB : %p, IOVA %016" PRIX64 "\n",
+			virt, iova_addr);
+
+		size = num_descr * sizeof(struct nthw_memory_descriptor);
+		hwq->pkt_buffers =
+			rte_zmalloc_socket("rx_pkt_buffers", size, 64, eth_dev->data->numa_node);
+
+		if (!hwq->pkt_buffers) {
+			NT_LOG(ERR, NTNIC,
+				"Failed to allocated buffer array for hw-queue %p, total size %i, elements %i\n",
+				hwq->pkt_buffers, size, num_descr);
+			rte_free(virt);
+			return -1;
+		}
+
+		size = buf_size * num_descr;
+		void *virt_addr =
+			rte_malloc_socket("pkt_buffer_pkts", size, 4096, eth_dev->data->numa_node);
+
+		if (!virt_addr) {
+			NT_LOG(ERR, NTNIC,
+				"Failed allocate packet buffers for hw-queue %p, buf size %i, elements %i\n",
+				hwq->pkt_buffers, buf_size, num_descr);
+			rte_free(hwq->pkt_buffers);
+			rte_free(virt);
+			return -1;
+		}
+
+		res = nt_vfio_dma_map(vf_num, virt_addr, &iova_addr, size);
+
+		NT_LOG(DBG, NTNIC,
+			"VFIO MMAP res %i, virt %p, iova %016" PRIX64 ", vf_num %i, num pkt bufs %i, tot size %i\n",
+			res, virt_addr, iova_addr, vf_num, num_descr, size);
+
+		if (res != 0)
+			return -1;
+
+		for (i = 0; i < num_descr; i++) {
+			hwq->pkt_buffers[i].virt_addr =
+				(void *)((char *)virt_addr + ((uint64_t)(i) * buf_size));
+			hwq->pkt_buffers[i].phys_addr =
+				(void *)(iova_addr + ((uint64_t)(i) * buf_size));
+			hwq->pkt_buffers[i].len = buf_size;
+		}
+
+		return 0;
+	}	/* End of: no optimal IOMMU mapping available */
+
+	res = nt_vfio_dma_map(vf_num, virt, &iova_addr, ONE_G_SIZE);
+
+	if (res != 0) {
+		NT_LOG(ERR, NTNIC, "VFIO MMAP FAILED! res %i, vf_num %i\n", res, vf_num);
+		return -1;
+	}
+
+	hwq->vf_num = vf_num;
+	hwq->virt_queues_ctrl.virt_addr = virt;
+	hwq->virt_queues_ctrl.phys_addr = (void *)(iova_addr);
+	hwq->virt_queues_ctrl.len = 0x100000;
+	iova_addr += 0x100000;
+
+	NT_LOG(DBG, NTNIC,
+		"VFIO MMAP: virt_addr=%p phys_addr=%p size=%" PRIX32 " hpa=%" PRIX64 "\n",
+		hwq->virt_queues_ctrl.virt_addr, hwq->virt_queues_ctrl.phys_addr,
+		hwq->virt_queues_ctrl.len, rte_malloc_virt2iova(hwq->virt_queues_ctrl.virt_addr));
+
+	size = num_descr * sizeof(struct nthw_memory_descriptor);
+	hwq->pkt_buffers =
+		rte_zmalloc_socket("rx_pkt_buffers", size, 64, eth_dev->data->numa_node);
+
+	if (!hwq->pkt_buffers) {
+		NT_LOG(ERR, NTNIC,
+			"Failed to allocated buffer array for hw-queue %p, total size %i, elements %i\n",
+			hwq->pkt_buffers, size, num_descr);
+		rte_free(virt);
+		return -1;
+	}
+
+	void *virt_addr = (void *)((uint64_t)virt + 0x100000);
+
+	for (i = 0; i < num_descr; i++) {
+		hwq->pkt_buffers[i].virt_addr =
+			(void *)((char *)virt_addr + ((uint64_t)(i) * buf_size));
+		hwq->pkt_buffers[i].phys_addr = (void *)(iova_addr + ((uint64_t)(i) * buf_size));
+		hwq->pkt_buffers[i].len = buf_size;
+	}
+
+	return 0;
+}
+
 static void release_hw_virtio_queues(struct hwq_s *hwq)
 {
 	if (!hwq || hwq->vf_num == 0)
@@ -245,6 +402,170 @@ static int allocate_queue(int num)
 	return next_free;
 }
 
+static int eth_rx_scg_queue_setup(struct rte_eth_dev *eth_dev,
+	uint16_t rx_queue_id,
+	uint16_t nb_rx_desc __rte_unused,
+	unsigned int socket_id __rte_unused,
+	const struct rte_eth_rxconf *rx_conf __rte_unused,
+	struct rte_mempool *mb_pool)
+{
+	NT_LOG_DBGX(DBG, NTNIC, "\n");
+	struct rte_pktmbuf_pool_private *mbp_priv;
+	struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+	struct ntnic_rx_queue *rx_q = &internals->rxq_scg[rx_queue_id];
+	struct drv_s *p_drv = internals->p_drv;
+	struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
+
+	if (sg_ops == NULL) {
+		NT_LOG_DBGX(DBG, NTNIC, "SG module is not initialized\n");
+		return 0;
+	}
+
+	if (internals->type == PORT_TYPE_OVERRIDE) {
+		rx_q->mb_pool = mb_pool;
+		eth_dev->data->rx_queues[rx_queue_id] = rx_q;
+		mbp_priv = rte_mempool_get_priv(rx_q->mb_pool);
+		rx_q->buf_size = (uint16_t)(mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM);
+		rx_q->enabled = 1;
+		return 0;
+	}
+
+	NT_LOG(DBG, NTNIC, "(%i) NTNIC RX OVS-SW queue setup: queue id %i, hw queue index %i\n",
+		internals->port, rx_queue_id, rx_q->queue.hw_id);
+
+	rx_q->mb_pool = mb_pool;
+
+	eth_dev->data->rx_queues[rx_queue_id] = rx_q;
+
+	mbp_priv = rte_mempool_get_priv(rx_q->mb_pool);
+	rx_q->buf_size = (uint16_t)(mbp_priv->mbuf_data_room_size - RTE_PKTMBUF_HEADROOM);
+	rx_q->enabled = 1;
+
+	if (allocate_hw_virtio_queues(eth_dev, EXCEPTION_PATH_HID, &rx_q->hwq,
+			SG_NB_HW_RX_DESCRIPTORS, SG_HW_RX_PKT_BUFFER_SIZE) < 0)
+		return -1;
+
+	rx_q->nb_hw_rx_descr = SG_NB_HW_RX_DESCRIPTORS;
+
+	rx_q->profile = p_drv->ntdrv.adapter_info.fpga_info.profile;
+
+	rx_q->vq =
+		sg_ops->nthw_setup_mngd_rx_virt_queue(p_nt_drv->adapter_info.fpga_info.mp_nthw_dbs,
+			rx_q->queue.hw_id,	/* index */
+			rx_q->nb_hw_rx_descr,
+			EXCEPTION_PATH_HID,	/* host_id */
+			1,	/* header NT DVIO header for exception path */
+			&rx_q->hwq.virt_queues_ctrl,
+			rx_q->hwq.pkt_buffers,
+			SPLIT_RING,
+			-1);
+
+	NT_LOG(DBG, NTNIC, "(%i) NTNIC RX OVS-SW queues successfully setup\n", internals->port);
+
+	return 0;
+}
+
+static int eth_tx_scg_queue_setup(struct rte_eth_dev *eth_dev,
+	uint16_t tx_queue_id,
+	uint16_t nb_tx_desc __rte_unused,
+	unsigned int socket_id __rte_unused,
+	const struct rte_eth_txconf *tx_conf __rte_unused)
+{
+	const struct port_ops *port_ops = get_port_ops();
+
+	if (port_ops == NULL) {
+		NT_LOG_DBGX(ERR, NTNIC, "Link management module uninitialized\n");
+		return -1;
+	}
+
+	NT_LOG_DBGX(DBG, NTNIC, "\n");
+	struct pmd_internals *internals = (struct pmd_internals *)eth_dev->data->dev_private;
+	struct drv_s *p_drv = internals->p_drv;
+	struct ntdrv_4ga_s *p_nt_drv = &p_drv->ntdrv;
+	struct ntnic_tx_queue *tx_q = &internals->txq_scg[tx_queue_id];
+
+	if (internals->type == PORT_TYPE_OVERRIDE) {
+		eth_dev->data->tx_queues[tx_queue_id] = tx_q;
+		return 0;
+	}
+
+	if (sg_ops == NULL) {
+		NT_LOG_DBGX(DBG, NTNIC, "SG module is not initialized\n");
+		return 0;
+	}
+
+	NT_LOG(DBG, NTNIC, "(%i) NTNIC TX OVS-SW queue setup: queue id %i, hw queue index %i\n",
+		tx_q->port, tx_queue_id, tx_q->queue.hw_id);
+
+	if (tx_queue_id > internals->nb_tx_queues) {
+		NT_LOG(ERR, NTNIC, "Error invalid tx queue id\n");
+		return -1;
+	}
+
+	eth_dev->data->tx_queues[tx_queue_id] = tx_q;
+
+	/* Calculate target ID for HW - to be used in NTDVIO0 header bypass_port */
+	if (tx_q->rss_target_id >= 0) {
+		/* bypass to a multiqueue port - qsl-hsh index */
+		tx_q->target_id = tx_q->rss_target_id + 0x90;
+
+	} else if (internals->vpq[tx_queue_id].hw_id > -1) {
+		/* virtual port - queue index */
+		tx_q->target_id = internals->vpq[tx_queue_id].hw_id;
+
+	} else {
+		/* Phy port - phy port identifier */
+		/* output/bypass to MAC */
+		tx_q->target_id = (int)(tx_q->port + 0x80);
+	}
+
+	if (allocate_hw_virtio_queues(eth_dev, EXCEPTION_PATH_HID, &tx_q->hwq,
+			SG_NB_HW_TX_DESCRIPTORS, SG_HW_TX_PKT_BUFFER_SIZE) < 0) {
+		return -1;
+	}
+
+	tx_q->nb_hw_tx_descr = SG_NB_HW_TX_DESCRIPTORS;
+
+	tx_q->profile = p_drv->ntdrv.adapter_info.fpga_info.profile;
+
+	uint32_t port, header;
+	port = tx_q->port;	/* transmit port */
+	header = 0;	/* header type VirtIO-Net */
+
+	tx_q->vq =
+		sg_ops->nthw_setup_mngd_tx_virt_queue(p_nt_drv->adapter_info.fpga_info.mp_nthw_dbs,
+			tx_q->queue.hw_id,	/* index */
+			tx_q->nb_hw_tx_descr,	/* queue size */
+			EXCEPTION_PATH_HID,	/* host_id always VF4 */
+			port,
+			/*
+			 * in_port - in vswitch mode has
+			 * to move tx port from OVS excep.
+			 * away from VM tx port,
+			 * because of QoS is matched by port id!
+			 */
+			tx_q->port + 128,
+			header,
+			&tx_q->hwq.virt_queues_ctrl,
+			tx_q->hwq.pkt_buffers,
+			SPLIT_RING,
+			-1,
+			IN_ORDER);
+
+	tx_q->enabled = 1;
+
+	NT_LOG(DBG, NTNIC, "(%i) NTNIC TX OVS-SW queues successfully setup\n", internals->port);
+
+	if (internals->type == PORT_TYPE_PHYSICAL) {
+		struct adapter_info_s *p_adapter_info = &internals->p_drv->ntdrv.adapter_info;
+		NT_LOG(DBG, NTNIC, "Port %i is ready for data. Enable port\n",
+			internals->n_intf_no);
+		port_ops->set_adm_state(p_adapter_info, internals->n_intf_no, true);
+	}
+
+	return 0;
+}
+
 static int eth_rx_queue_start(struct rte_eth_dev *eth_dev, uint16_t rx_queue_id)
 {
 	eth_dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
@@ -580,9 +901,11 @@ static const struct eth_dev_ops nthw_eth_dev_ops = {
 	.link_update = eth_link_update,
 	.dev_infos_get = eth_dev_infos_get,
 	.fw_version_get = eth_fw_version_get,
+	.rx_queue_setup = eth_rx_scg_queue_setup,
 	.rx_queue_start = eth_rx_queue_start,
 	.rx_queue_stop = eth_rx_queue_stop,
 	.rx_queue_release = eth_rx_queue_release,
+	.tx_queue_setup = eth_tx_scg_queue_setup,
 	.tx_queue_start = eth_tx_queue_start,
 	.tx_queue_stop = eth_tx_queue_stop,
 	.tx_queue_release = eth_tx_queue_release,