From patchwork Thu Oct 12 08:50:29 2023
X-Patchwork-Submitter: Mattias Rönnblom
X-Patchwork-Id: 132575
X-Patchwork-Delegate: david.marchand@redhat.com
From: Mattias Rönnblom
CC: Jerin Jacob, Peter Nilsson, Heng Wang, Naga Harish K S V,
 Pavan Nikhilesh, Gujjar Abhinandan S, Erik Gabriel Carrillo,
 Shijith Thotton, Hemant Agrawal, Sachin Saxena, Liang Ma,
 Peter Mccarthy, Zhirun Yan, Mattias Rönnblom
Subject: [PATCH v8 1/3] lib: introduce dispatcher library
Date: Thu, 12 Oct 2023 10:50:29 +0200
Message-ID: <20231012085031.444483-2-mattias.ronnblom@ericsson.com>
In-Reply-To: <20231012085031.444483-1-mattias.ronnblom@ericsson.com>
References: <20231011071700.442795-2-mattias.ronnblom@ericsson.com>
 <20231012085031.444483-1-mattias.ronnblom@ericsson.com>
The purpose of the dispatcher library is to help reduce coupling in an
Eventdev-based DPDK application. In addition, the dispatcher also provides
a convenient and flexible way for the application to use service cores for
application-level processing.

Signed-off-by: Mattias Rönnblom
Tested-by: Peter Nilsson
Reviewed-by: Heng Wang

---

PATCH v8:
 o Since starting and stopping a dispatcher is always successful (save for
   an inconsistent dispatcher state), have the start and stop calls return
   void.
 o Fix merge conflict in the release notes file.

PATCH v6:
 o Use a single tab as indentation for continuation lines in multiple-line
   function prototypes. (David Marchand)
 o Add dispatcher library release note. (David Marchand)
 o Various indentation and spelling improvements. (David Marchand)
 o Add the direct includes the library needs, instead of relying on
   transitive inclusion. (David Marchand)
 o Avoid Doxygen post annotations for struct fields. (David Marchand)

PATCH v5:
 o Move from using an integer id to a pointer to reference a dispatcher
   instance, to simplify the API.
 o Fix bug where the dispatcher stats retrieval function erroneously
   depended on the user-supplied stats buffer being all-zero.

PATCH v4:
 o Fix bugs in handler and finalizer unregistration. (Naga Harish)
 o Return -EINVAL in cases where NULL pointers were provided in calls
   requiring non-NULL pointers. (Naga Harish)
 o Add experimental warning for the whole API. (Jerin Jacob)

PATCH v3:
 o To underline its optional character and since it does not provide
   hardware abstraction, the event dispatcher is now a separate library.
 o Change name from rte_event_dispatcher -> rte_dispatcher, to make it
   shorter and to avoid the rte_event_* namespace.

PATCH v2:
 o Add dequeue batch count statistic.
 o Add statistics reset function to API.
 o Clarify MT safety guarantees (or lack thereof) in the API
   documentation.
 o Change loop variable type in evd_lcore_get_handler_by_id() to uint16_t,
   to be consistent with similar loops elsewhere in the dispatcher.
 o Fix variable names in finalizer unregister function.

PATCH:
 o Change prefix from RED to EVD, to avoid confusion with random early
   detection.

RFC v4:
 o Move handlers to per-lcore data structures.
 o Introduce a mechanism which rearranges handlers so that often-used
   handlers tend to be tried first.
 o Terminate the dispatch loop in case all events are delivered.
 o To avoid the dispatcher's service function hogging the CPU, process
   only one batch per call.
 o Have the service function return -EAGAIN if no work is performed.
 o Events delivered in the process function are no longer marked 'const',
   since modifying them may be useful for the application and causes no
   difficulties for the dispatcher.
 o Various minor API documentation improvements.

RFC v3:
 o Add stats_get() function to the version.map file.
---
 MAINTAINERS | 4 +
 doc/api/doxy-api-index.md | 1 +
 doc/api/doxy-api.conf.in | 1 +
 doc/guides/rel_notes/release_23_11.rst | 5 +
 lib/dispatcher/meson.build | 13 +
 lib/dispatcher/rte_dispatcher.c | 694 +++++++++++++++++++++++++
 lib/dispatcher/rte_dispatcher.h | 458 ++++++++++++++++
 lib/dispatcher/version.map | 20 +
 lib/meson.build | 2 +
 9 files changed, 1198 insertions(+)
 create mode 100644 lib/dispatcher/meson.build
 create mode 100644 lib/dispatcher/rte_dispatcher.c
 create mode 100644 lib/dispatcher/rte_dispatcher.h
 create mode 100644 lib/dispatcher/version.map

diff --git a/MAINTAINERS b/MAINTAINERS index 9af332ae6b..a7039b06dc 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1734,6 +1734,10 @@ M: Nithin Dabilpuram M: Pavan Nikhilesh F: lib/node/ +Dispatcher - EXPERIMENTAL +M: Mattias Rönnblom +F: lib/dispatcher/ + Test Applications ----------------- diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md index 732e2ecb28..30918995d3 100644 --- a/doc/api/doxy-api-index.md +++ b/doc/api/doxy-api-index.md @@ -157,6 +157,7 @@ The public API headers are grouped by topics: - **classification** [reorder](@ref rte_reorder.h), + [dispatcher](@ref rte_dispatcher.h), [distributor](@ref rte_distributor.h), [EFD](@ref rte_efd.h), [ACL](@ref rte_acl.h), diff --git a/doc/api/doxy-api.conf.in b/doc/api/doxy-api.conf.in index df801d32f9..93709e1d2c 100644 --- a/doc/api/doxy-api.conf.in +++ b/doc/api/doxy-api.conf.in @@ -34,6 +34,7 @@ INPUT = @TOPDIR@/doc/api/doxy-api-index.md \ @TOPDIR@/lib/cmdline \ @TOPDIR@/lib/compressdev \ @TOPDIR@/lib/cryptodev \ + @TOPDIR@/lib/dispatcher \ @TOPDIR@/lib/distributor \ @TOPDIR@/lib/dmadev \ @TOPDIR@/lib/efd \ diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst index 34442e9c6b..00260455b2 100644 --- a/doc/guides/rel_notes/release_23_11.rst +++ b/doc/guides/rel_notes/release_23_11.rst @@ -122,6 +122,11 @@ New Features a group's miss actions, which are the actions to be performed on packets that didn't match any of the flow rules in the group. +* **Added dispatcher library.** + + Added dispatcher library whose purpose is to help decouple different + parts (modules) of an eventdev-based application. + * **Updated Solarflare net driver.** * Added support for transfer flow action ``INDIRECT`` with subtype ``VXLAN_ENCAP``.
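
To make the match/process split described above concrete before the library sources below, here is a minimal sketch of how one application module might hook into a dispatcher. It is not part of the patch series: the module_a_* names, MODULE_A_QUEUE_ID and the context struct are invented for illustration, and error handling is omitted.

#include <stdbool.h>
#include <stdint.h>

#include <rte_dispatcher.h>
#include <rte_eventdev.h>

/* Hypothetical per-module state; not part of this patch. */
struct module_a_context {
	uint64_t num_events;
};

/* Assumed static queue->module mapping. */
#define MODULE_A_QUEUE_ID 0

/* Match callback: claim only events on module A's queue. */
static bool
module_a_match(const struct rte_event *event, void *cb_data)
{
	(void)cb_data;

	return event->queue_id == MODULE_A_QUEUE_ID;
}

/* Process callback: handed the matched events as a batch. */
static void
module_a_process(uint8_t event_dev_id, uint8_t event_port_id,
	struct rte_event *events, uint16_t num, void *cb_data)
{
	struct module_a_context *ctx = cb_data;

	(void)event_dev_id;
	(void)event_port_id;
	(void)events;

	ctx->num_events += num;
}

/* Registration; a negative return value is an error code. */
static int
module_a_hook(struct rte_dispatcher *dispatcher, struct module_a_context *ctx)
{
	return rte_dispatcher_register(dispatcher, module_a_match, NULL,
		module_a_process, ctx);
}

Since an event is delivered to at most one handler, module A needs no knowledge of which other modules are registered on the same event device.
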
diff --git a/lib/dispatcher/meson.build b/lib/dispatcher/meson.build new file mode 100644 index 0000000000..ffaef26a6d --- /dev/null +++ b/lib/dispatcher/meson.build @@ -0,0 +1,13 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(c) 2023 Ericsson AB + +if is_windows + build = false + reason = 'not supported on Windows' + subdir_done() +endif + +sources = files('rte_dispatcher.c') +headers = files('rte_dispatcher.h') + +deps += ['eventdev'] diff --git a/lib/dispatcher/rte_dispatcher.c b/lib/dispatcher/rte_dispatcher.c new file mode 100644 index 0000000000..10d02edde9 --- /dev/null +++ b/lib/dispatcher/rte_dispatcher.c @@ -0,0 +1,694 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 Ericsson AB + */ + +#include +#include + +#include +#include +#include +#include +#include + +#include "eventdev_pmd.h" + +#include + +#define EVD_MAX_PORTS_PER_LCORE 4 +#define EVD_MAX_HANDLERS 32 +#define EVD_MAX_FINALIZERS 16 +#define EVD_AVG_PRIO_INTERVAL 2000 +#define EVD_SERVICE_NAME "dispatcher" + +struct rte_dispatcher_lcore_port { + uint8_t port_id; + uint16_t batch_size; + uint64_t timeout; +}; + +struct rte_dispatcher_handler { + int id; + rte_dispatcher_match_t match_fun; + void *match_data; + rte_dispatcher_process_t process_fun; + void *process_data; +}; + +struct rte_dispatcher_finalizer { + int id; + rte_dispatcher_finalize_t finalize_fun; + void *finalize_data; +}; + +struct rte_dispatcher_lcore { + uint8_t num_ports; + uint16_t num_handlers; + int32_t prio_count; + struct rte_dispatcher_lcore_port ports[EVD_MAX_PORTS_PER_LCORE]; + struct rte_dispatcher_handler handlers[EVD_MAX_HANDLERS]; + struct rte_dispatcher_stats stats; +} __rte_cache_aligned; + +struct rte_dispatcher { + uint8_t event_dev_id; + int socket_id; + uint32_t service_id; + struct rte_dispatcher_lcore lcores[RTE_MAX_LCORE]; + uint16_t num_finalizers; + struct rte_dispatcher_finalizer finalizers[EVD_MAX_FINALIZERS]; +}; + +static int +evd_lookup_handler_idx(struct rte_dispatcher_lcore *lcore, + const struct rte_event *event) +{ + uint16_t i; + + for (i = 0; i < lcore->num_handlers; i++) { + struct rte_dispatcher_handler *handler = + &lcore->handlers[i]; + + if (handler->match_fun(event, handler->match_data)) + return i; + } + + return -1; +} + +static void +evd_prioritize_handler(struct rte_dispatcher_lcore *lcore, + int handler_idx) +{ + struct rte_dispatcher_handler tmp; + + if (handler_idx == 0) + return; + + /* Let the lucky handler "bubble" up the list */ + + tmp = lcore->handlers[handler_idx - 1]; + lcore->handlers[handler_idx - 1] = lcore->handlers[handler_idx]; + lcore->handlers[handler_idx] = tmp; +} + +static inline void +evd_consider_prioritize_handler(struct rte_dispatcher_lcore *lcore, + int handler_idx, uint16_t handler_events) +{ + lcore->prio_count -= handler_events; + + if (unlikely(lcore->prio_count <= 0)) { + evd_prioritize_handler(lcore, handler_idx); + + /* + * Randomize the interval in the unlikely case + * the traffic follows some very strict pattern.
+ */ + lcore->prio_count = + rte_rand_max(EVD_AVG_PRIO_INTERVAL) + + EVD_AVG_PRIO_INTERVAL / 2; + } +} + +static inline void +evd_dispatch_events(struct rte_dispatcher *dispatcher, + struct rte_dispatcher_lcore *lcore, + struct rte_dispatcher_lcore_port *port, + struct rte_event *events, uint16_t num_events) +{ + int i; + struct rte_event bursts[EVD_MAX_HANDLERS][num_events]; + uint16_t burst_lens[EVD_MAX_HANDLERS] = { 0 }; + uint16_t drop_count = 0; + uint16_t dispatch_count; + uint16_t dispatched = 0; + + for (i = 0; i < num_events; i++) { + struct rte_event *event = &events[i]; + int handler_idx; + + handler_idx = evd_lookup_handler_idx(lcore, event); + + if (unlikely(handler_idx < 0)) { + drop_count++; + continue; + } + + bursts[handler_idx][burst_lens[handler_idx]] = *event; + burst_lens[handler_idx]++; + } + + dispatch_count = num_events - drop_count; + + for (i = 0; i < lcore->num_handlers && + dispatched < dispatch_count; i++) { + struct rte_dispatcher_handler *handler = + &lcore->handlers[i]; + uint16_t len = burst_lens[i]; + + if (len == 0) + continue; + + handler->process_fun(dispatcher->event_dev_id, port->port_id, + bursts[i], len, handler->process_data); + + dispatched += len; + + /* + * Safe, since any reshuffling will only involve + * already-processed handlers. + */ + evd_consider_prioritize_handler(lcore, i, len); + } + + lcore->stats.ev_batch_count++; + lcore->stats.ev_dispatch_count += dispatch_count; + lcore->stats.ev_drop_count += drop_count; + + for (i = 0; i < dispatcher->num_finalizers; i++) { + struct rte_dispatcher_finalizer *finalizer = + &dispatcher->finalizers[i]; + + finalizer->finalize_fun(dispatcher->event_dev_id, + port->port_id, + finalizer->finalize_data); + } +} + +static __rte_always_inline uint16_t +evd_port_dequeue(struct rte_dispatcher *dispatcher, + struct rte_dispatcher_lcore *lcore, + struct rte_dispatcher_lcore_port *port) +{ + uint16_t batch_size = port->batch_size; + struct rte_event events[batch_size]; + uint16_t n; + + n = rte_event_dequeue_burst(dispatcher->event_dev_id, port->port_id, + events, batch_size, port->timeout); + + if (likely(n > 0)) + evd_dispatch_events(dispatcher, lcore, port, events, n); + + lcore->stats.poll_count++; + + return n; +} + +static __rte_always_inline uint16_t +evd_lcore_process(struct rte_dispatcher *dispatcher, + struct rte_dispatcher_lcore *lcore) +{ + uint16_t i; + uint16_t event_count = 0; + + for (i = 0; i < lcore->num_ports; i++) { + struct rte_dispatcher_lcore_port *port = + &lcore->ports[i]; + + event_count += evd_port_dequeue(dispatcher, lcore, port); + } + + return event_count; +} + +static int32_t +evd_process(void *userdata) +{ + struct rte_dispatcher *dispatcher = userdata; + unsigned int lcore_id = rte_lcore_id(); + struct rte_dispatcher_lcore *lcore = + &dispatcher->lcores[lcore_id]; + uint64_t event_count; + + event_count = evd_lcore_process(dispatcher, lcore); + + if (unlikely(event_count == 0)) + return -EAGAIN; + + return 0; +} + +static int +evd_service_register(struct rte_dispatcher *dispatcher) +{ + struct rte_service_spec service = { + .callback = evd_process, + .callback_userdata = dispatcher, + .capabilities = RTE_SERVICE_CAP_MT_SAFE, + .socket_id = dispatcher->socket_id + }; + int rc; + + snprintf(service.name, sizeof(service.name), EVD_SERVICE_NAME); + + rc = rte_service_component_register(&service, &dispatcher->service_id); + if (rc != 0) + RTE_EDEV_LOG_ERR("Registration of dispatcher service " + "%s failed with error code %d\n", + service.name, rc); + + return rc; +} + +static int 
+evd_service_unregister(struct rte_dispatcher *dispatcher) +{ + int rc; + + rc = rte_service_component_unregister(dispatcher->service_id); + if (rc != 0) + RTE_EDEV_LOG_ERR("Unregistration of dispatcher service " + "failed with error code %d\n", rc); + + return rc; +} + +struct rte_dispatcher * +rte_dispatcher_create(uint8_t event_dev_id) +{ + int socket_id; + struct rte_dispatcher *dispatcher; + int rc; + + socket_id = rte_event_dev_socket_id(event_dev_id); + + dispatcher = + rte_malloc_socket("dispatcher", sizeof(struct rte_dispatcher), + RTE_CACHE_LINE_SIZE, socket_id); + + if (dispatcher == NULL) { + RTE_EDEV_LOG_ERR("Unable to allocate memory for dispatcher\n"); + rte_errno = ENOMEM; + return NULL; + } + + *dispatcher = (struct rte_dispatcher) { + .event_dev_id = event_dev_id, + .socket_id = socket_id + }; + + rc = evd_service_register(dispatcher); + if (rc < 0) { + rte_free(dispatcher); + rte_errno = -rc; + return NULL; + } + + return dispatcher; +} + +int +rte_dispatcher_free(struct rte_dispatcher *dispatcher) +{ + int rc; + + if (dispatcher == NULL) + return 0; + + rc = evd_service_unregister(dispatcher); + if (rc != 0) + return rc; + + rte_free(dispatcher); + + return 0; +} + +uint32_t +rte_dispatcher_service_id_get(const struct rte_dispatcher *dispatcher) +{ + return dispatcher->service_id; +} + +static int +lcore_port_index(struct rte_dispatcher_lcore *lcore, + uint8_t event_port_id) +{ + uint16_t i; + + for (i = 0; i < lcore->num_ports; i++) { + struct rte_dispatcher_lcore_port *port = + &lcore->ports[i]; + + if (port->port_id == event_port_id) + return i; + } + + return -1; +} + +int +rte_dispatcher_bind_port_to_lcore(struct rte_dispatcher *dispatcher, + uint8_t event_port_id, uint16_t batch_size, uint64_t timeout, + unsigned int lcore_id) +{ + struct rte_dispatcher_lcore *lcore; + struct rte_dispatcher_lcore_port *port; + + lcore = &dispatcher->lcores[lcore_id]; + + if (lcore->num_ports == EVD_MAX_PORTS_PER_LCORE) + return -ENOMEM; + + if (lcore_port_index(lcore, event_port_id) >= 0) + return -EEXIST; + + port = &lcore->ports[lcore->num_ports]; + + *port = (struct rte_dispatcher_lcore_port) { + .port_id = event_port_id, + .batch_size = batch_size, + .timeout = timeout + }; + + lcore->num_ports++; + + return 0; +} + +int +rte_dispatcher_unbind_port_from_lcore(struct rte_dispatcher *dispatcher, + uint8_t event_port_id, unsigned int lcore_id) +{ + struct rte_dispatcher_lcore *lcore; + int port_idx; + struct rte_dispatcher_lcore_port *port; + struct rte_dispatcher_lcore_port *last; + + lcore = &dispatcher->lcores[lcore_id]; + + port_idx = lcore_port_index(lcore, event_port_id); + + if (port_idx < 0) + return -ENOENT; + + port = &lcore->ports[port_idx]; + last = &lcore->ports[lcore->num_ports - 1]; + + if (port != last) + *port = *last; + + lcore->num_ports--; + + return 0; +} + +static struct rte_dispatcher_handler * +evd_lcore_get_handler_by_id(struct rte_dispatcher_lcore *lcore, int handler_id) +{ + uint16_t i; + + for (i = 0; i < lcore->num_handlers; i++) { + struct rte_dispatcher_handler *handler = + &lcore->handlers[i]; + + if (handler->id == handler_id) + return handler; + } + + return NULL; +} + +static int +evd_alloc_handler_id(struct rte_dispatcher *dispatcher) +{ + int handler_id = 0; + struct rte_dispatcher_lcore *reference_lcore = + &dispatcher->lcores[0]; + + if (reference_lcore->num_handlers == EVD_MAX_HANDLERS) + return -1; + + while (evd_lcore_get_handler_by_id(reference_lcore, handler_id) != NULL) + handler_id++; + + return handler_id; +} + +static void 
+evd_lcore_install_handler(struct rte_dispatcher_lcore *lcore, + const struct rte_dispatcher_handler *handler) +{ + int handler_idx = lcore->num_handlers; + + lcore->handlers[handler_idx] = *handler; + lcore->num_handlers++; +} + +static void +evd_install_handler(struct rte_dispatcher *dispatcher, + const struct rte_dispatcher_handler *handler) +{ + int i; + + for (i = 0; i < RTE_MAX_LCORE; i++) { + struct rte_dispatcher_lcore *lcore = + &dispatcher->lcores[i]; + evd_lcore_install_handler(lcore, handler); + } +} + +int +rte_dispatcher_register(struct rte_dispatcher *dispatcher, + rte_dispatcher_match_t match_fun, void *match_data, + rte_dispatcher_process_t process_fun, void *process_data) +{ + struct rte_dispatcher_handler handler = { + .match_fun = match_fun, + .match_data = match_data, + .process_fun = process_fun, + .process_data = process_data + }; + + handler.id = evd_alloc_handler_id(dispatcher); + + if (handler.id < 0) + return -ENOMEM; + + evd_install_handler(dispatcher, &handler); + + return handler.id; +} + +static int +evd_lcore_uninstall_handler(struct rte_dispatcher_lcore *lcore, + int handler_id) +{ + struct rte_dispatcher_handler *unreg_handler; + int handler_idx; + uint16_t last_idx; + + unreg_handler = evd_lcore_get_handler_by_id(lcore, handler_id); + + if (unreg_handler == NULL) { + RTE_EDEV_LOG_ERR("Invalid handler id %d\n", handler_id); + return -EINVAL; + } + + handler_idx = unreg_handler - &lcore->handlers[0]; + + last_idx = lcore->num_handlers - 1; + + if (handler_idx != last_idx) { + /* move all handlers to maintain handler order */ + int n = last_idx - handler_idx; + memmove(unreg_handler, unreg_handler + 1, + sizeof(struct rte_dispatcher_handler) * n); + } + + lcore->num_handlers--; + + return 0; +} + +static int +evd_uninstall_handler(struct rte_dispatcher *dispatcher, int handler_id) +{ + unsigned int lcore_id; + + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { + struct rte_dispatcher_lcore *lcore = + &dispatcher->lcores[lcore_id]; + int rc; + + rc = evd_lcore_uninstall_handler(lcore, handler_id); + if (rc < 0) + return rc; + } + + return 0; +} + +int +rte_dispatcher_unregister(struct rte_dispatcher *dispatcher, int handler_id) +{ + return evd_uninstall_handler(dispatcher, handler_id); +} + +static struct rte_dispatcher_finalizer * +evd_get_finalizer_by_id(struct rte_dispatcher *dispatcher, + int handler_id) +{ + int i; + + for (i = 0; i < dispatcher->num_finalizers; i++) { + struct rte_dispatcher_finalizer *finalizer = + &dispatcher->finalizers[i]; + + if (finalizer->id == handler_id) + return finalizer; + } + + return NULL; +} + +static int +evd_alloc_finalizer_id(struct rte_dispatcher *dispatcher) +{ + int finalizer_id = 0; + + while (evd_get_finalizer_by_id(dispatcher, finalizer_id) != NULL) + finalizer_id++; + + return finalizer_id; +} + +static struct rte_dispatcher_finalizer * +evd_alloc_finalizer(struct rte_dispatcher *dispatcher) +{ + int finalizer_idx; + struct rte_dispatcher_finalizer *finalizer; + + if (dispatcher->num_finalizers == EVD_MAX_FINALIZERS) + return NULL; + + finalizer_idx = dispatcher->num_finalizers; + finalizer = &dispatcher->finalizers[finalizer_idx]; + + finalizer->id = evd_alloc_finalizer_id(dispatcher); + + dispatcher->num_finalizers++; + + return finalizer; +} + +int +rte_dispatcher_finalize_register(struct rte_dispatcher *dispatcher, + rte_dispatcher_finalize_t finalize_fun, void *finalize_data) +{ + struct rte_dispatcher_finalizer *finalizer; + + finalizer = evd_alloc_finalizer(dispatcher); + + if (finalizer == 
NULL) + return -ENOMEM; + + finalizer->finalize_fun = finalize_fun; + finalizer->finalize_data = finalize_data; + + return finalizer->id; +} + +int +rte_dispatcher_finalize_unregister(struct rte_dispatcher *dispatcher, + int finalizer_id) +{ + struct rte_dispatcher_finalizer *unreg_finalizer; + int finalizer_idx; + uint16_t last_idx; + + unreg_finalizer = evd_get_finalizer_by_id(dispatcher, finalizer_id); + + if (unreg_finalizer == NULL) { + RTE_EDEV_LOG_ERR("Invalid finalizer id %d\n", finalizer_id); + return -EINVAL; + } + + finalizer_idx = unreg_finalizer - &dispatcher->finalizers[0]; + + last_idx = dispatcher->num_finalizers - 1; + + if (finalizer_idx != last_idx) { + /* move all finalizers to maintain order */ + int n = last_idx - finalizer_idx; + memmove(unreg_finalizer, unreg_finalizer + 1, + sizeof(struct rte_dispatcher_finalizer) * n); + } + + dispatcher->num_finalizers--; + + return 0; +} + +static void +evd_set_service_runstate(struct rte_dispatcher *dispatcher, int state) +{ + int rc; + + rc = rte_service_component_runstate_set(dispatcher->service_id, + state); + /* + * The only cause of a runstate_set() failure is an invalid + * service id, which in turn means the dispatcher instance's + * state is invalid. + */ + if (rc != 0) + RTE_EDEV_LOG_ERR("Unexpected error %d occurred while setting " + "service component run state to %d\n", rc, + state); + + RTE_VERIFY(rc == 0); +} + +void +rte_dispatcher_start(struct rte_dispatcher *dispatcher) +{ + evd_set_service_runstate(dispatcher, 1); +} + +void +rte_dispatcher_stop(struct rte_dispatcher *dispatcher) +{ + evd_set_service_runstate(dispatcher, 0); +} + +static void +evd_aggregate_stats(struct rte_dispatcher_stats *result, + const struct rte_dispatcher_stats *part) +{ + result->poll_count += part->poll_count; + result->ev_batch_count += part->ev_batch_count; + result->ev_dispatch_count += part->ev_dispatch_count; + result->ev_drop_count += part->ev_drop_count; +} + +void +rte_dispatcher_stats_get(const struct rte_dispatcher *dispatcher, + struct rte_dispatcher_stats *stats) +{ + unsigned int lcore_id; + + *stats = (struct rte_dispatcher_stats) {}; + + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { + const struct rte_dispatcher_lcore *lcore = + &dispatcher->lcores[lcore_id]; + + evd_aggregate_stats(stats, &lcore->stats); + } +} + +void +rte_dispatcher_stats_reset(struct rte_dispatcher *dispatcher) +{ + unsigned int lcore_id; + + for (lcore_id = 0; lcore_id < RTE_MAX_LCORE; lcore_id++) { + struct rte_dispatcher_lcore *lcore = + &dispatcher->lcores[lcore_id]; + + lcore->stats = (struct rte_dispatcher_stats) {}; + } +} diff --git a/lib/dispatcher/rte_dispatcher.h b/lib/dispatcher/rte_dispatcher.h new file mode 100644 index 0000000000..0ad039d6d5 --- /dev/null +++ b/lib/dispatcher/rte_dispatcher.h @@ -0,0 +1,458 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 Ericsson AB + */ + +#ifndef __RTE_DISPATCHER_H__ +#define __RTE_DISPATCHER_H__ + +/** + * @file + * + * RTE Dispatcher + * + * @warning + * @b EXPERIMENTAL: + * All functions in this file may be changed or removed without prior notice. + * + * The purpose of the dispatcher is to help decouple different parts + * of an application (e.g., modules), sharing the same underlying + * event device. + */ + +#ifdef __cplusplus +extern "C" { +#endif + +#include +#include +#include +#include + +/** + * Function prototype for match callbacks.
+ * + * Match callbacks are used by an application to decide how the + * dispatcher distributes events to different parts of the + * application. + * + * The application is not expected to process the event at the point + * of the match call. Such matters should be deferred to the process + * callback invocation. + * + * The match callback may be used as an opportunity to prefetch data. + * + * @param event + * Pointer to event + * + * @param cb_data + * The pointer supplied by the application in + * rte_dispatcher_register(). + * + * @return + * Returns true in case this event should be delivered (via + * the process callback), and false otherwise. + */ +typedef bool +(*rte_dispatcher_match_t)(const struct rte_event *event, void *cb_data); + +/** + * Function prototype for process callbacks. + * + * The process callbacks are used by the dispatcher to deliver + * events for processing. + * + * @param event_dev_id + * The originating event device id. + * + * @param event_port_id + * The originating event port. + * + * @param events + * Pointer to an array of events. + * + * @param num + * The number of events in the @p events array. + * + * @param cb_data + * The pointer supplied by the application in + * rte_dispatcher_register(). + */ + +typedef void +(*rte_dispatcher_process_t)(uint8_t event_dev_id, uint8_t event_port_id, + struct rte_event *events, uint16_t num, void *cb_data); + +/** + * Function prototype for finalize callbacks. + * + * The finalize callbacks are used by the dispatcher to notify the + * application it has delivered all events from a particular batch + * dequeued from the event device. + * + * @param event_dev_id + * The originating event device id. + * + * @param event_port_id + * The originating event port. + * + * @param cb_data + * The pointer supplied by the application in + * rte_dispatcher_finalize_register(). + */ + +typedef void +(*rte_dispatcher_finalize_t)(uint8_t event_dev_id, uint8_t event_port_id, + void *cb_data); + +/** + * Dispatcher statistics + */ +struct rte_dispatcher_stats { + /** Number of event dequeue calls made toward the event device. */ + uint64_t poll_count; + /** Number of non-empty event batches dequeued from event device. */ + uint64_t ev_batch_count; + /** Number of events dispatched to a handler. */ + uint64_t ev_dispatch_count; + /** Number of events dropped because no handler was found. */ + uint64_t ev_drop_count; +}; + +/** + * Create a dispatcher. + * + * @param event_dev_id + * The identifier of the event device from which this dispatcher + * will dequeue events. + * + * @return + * A pointer to a new dispatcher instance, or NULL on failure, in which + * case rte_errno is set. + */ +__rte_experimental +struct rte_dispatcher * +rte_dispatcher_create(uint8_t event_dev_id); + +/** + * Free a dispatcher. + * + * @param dispatcher + * The dispatcher instance. + * + * @return + * - 0: Success + * - <0: Error code on failure + */ +__rte_experimental +int +rte_dispatcher_free(struct rte_dispatcher *dispatcher); + +/** + * Retrieve the service identifier of a dispatcher. + * + * @param dispatcher + * The dispatcher instance. + * + * @return + * The dispatcher service's id. + */ +__rte_experimental +uint32_t +rte_dispatcher_service_id_get(const struct rte_dispatcher *dispatcher); + +/** + * Bind an event device port to a specific lcore on the specified + * dispatcher. + * + * This function configures the event port id to be used by the event + * dispatcher service, if run on the specified lcore.
+ * + * Multiple event device ports may be bound to the same lcore. A + * particular port must not be bound to more than one lcore. + * + * If the dispatcher service is mapped (with rte_service_map_lcore_set()) + * to a lcore to which no ports are bound, the service function will be a + * no-operation. + * + * This function may be called by any thread (including unregistered + * non-EAL threads), but not while the dispatcher is running on the + * lcore specified by @c lcore_id. + * + * @param dispatcher + * The dispatcher instance. + * + * @param event_port_id + * The event device port identifier. + * + * @param batch_size + * The batch size to use in rte_event_dequeue_burst(), for the + * configured event device port and lcore. + * + * @param timeout + * The timeout parameter to use in rte_event_dequeue_burst(), for the + * configured event device port and lcore. + * + * @param lcore_id + * The lcore by which this event port will be used. + * + * @return + * - 0: Success + * - -ENOMEM: Unable to allocate sufficient resources. + * - -EEXIST: Event port is already configured. + * - -EINVAL: Invalid arguments. + */ +__rte_experimental +int +rte_dispatcher_bind_port_to_lcore(struct rte_dispatcher *dispatcher, + uint8_t event_port_id, uint16_t batch_size, uint64_t timeout, + unsigned int lcore_id); + +/** + * Unbind an event device port from a specific lcore. + * + * This function may be called by any thread (including unregistered + * non-EAL threads), but not while the dispatcher is running on the + * lcore specified by @c lcore_id. + * + * @param dispatcher + * The dispatcher instance. + * + * @param event_port_id + * The event device port identifier. + * + * @param lcore_id + * The lcore which was using this event port. + * + * @return + * - 0: Success + * - -ENOENT: Event port id not bound to this @c lcore_id. + */ +__rte_experimental +int +rte_dispatcher_unbind_port_from_lcore(struct rte_dispatcher *dispatcher, + uint8_t event_port_id, unsigned int lcore_id); + +/** + * Register an event handler. + * + * The match callback function is used to select if a particular event + * should be delivered, using the corresponding process callback + * function. + * + * The reason for having two distinct steps is to allow the dispatcher + * to deliver all events as a batch. This in turn will cause + * processing of a particular kind of events to happen in a + * back-to-back manner, improving cache locality. + * + * The list of handler callback functions is shared among all lcores, + * but will only be executed on lcores which have an eventdev port + * bound to them, and which are running the dispatcher service. + * + * An event is delivered to at most one handler. Events where no + * handler is found are dropped. + * + * The application must not depend on the order in which the match + * functions are invoked. + * + * Ordering of events is not guaranteed to be maintained between + * different deliver callbacks. For example, suppose there are two + * callbacks registered, matching different subsets of events arriving + * on an atomic queue. A batch of events [ev0, ev1, ev2] are dequeued + * on a particular port, all pertaining to the same flow. The match + * callback for registration A returns true for ev0 and ev2, and the + * matching function for registration B for ev1. In that scenario, the + * dispatcher may choose to deliver first [ev0, ev2] using A's deliver + * function, and then [ev1] to B - or vice versa.
+ * + * rte_dispatcher_register() may be called by any thread + * (including unregistered non-EAL threads), but not while the event + * dispatcher is running on any service lcore. + * + * @param dispatcher + * The dispatcher instance. + * + * @param match_fun + * The match callback function. + * + * @param match_cb_data + * A pointer to some application-specific opaque data (or NULL), + * which is supplied back to the application when match_fun is + * called. + * + * @param process_fun + * The process callback function. + * + * @param process_cb_data + * A pointer to some application-specific opaque data (or NULL), + * which is supplied back to the application when process_fun is + * called. + * + * @return + * - >= 0: The identifier for this registration. + * - -ENOMEM: Unable to allocate sufficient resources. + */ +__rte_experimental +int +rte_dispatcher_register(struct rte_dispatcher *dispatcher, + rte_dispatcher_match_t match_fun, void *match_cb_data, + rte_dispatcher_process_t process_fun, void *process_cb_data); + +/** + * Unregister an event handler. + * + * This function may be called by any thread (including unregistered + * non-EAL threads), but not while the dispatcher is running on + * any service lcore. + * + * @param dispatcher + * The dispatcher instance. + * + * @param handler_id + * The handler registration id returned by the original + * rte_dispatcher_register() call. + * + * @return + * - 0: Success + * - -EINVAL: The @c handler_id parameter was invalid. + */ +__rte_experimental +int +rte_dispatcher_unregister(struct rte_dispatcher *dispatcher, int handler_id); + +/** + * Register a finalize callback function. + * + * An application may optionally install one or more finalize + * callbacks. + * + * All finalize callbacks are invoked by the dispatcher when a + * complete batch of events (retrieved using rte_event_dequeue_burst()) + * has been delivered to the application (or has been dropped). + * + * The finalize callback is not tied to any particular handler. + * + * The finalize callback provides an opportunity for the application + * to do per-batch processing. One case where this may be useful is if + * an event output buffer is used, and is shared among several + * handlers. In such a case, proper output buffer flushing may be + * assured using a finalize callback. + * + * rte_dispatcher_finalize_register() may be called by any thread + * (including unregistered non-EAL threads), but not while the + * dispatcher is running on any service lcore. + * + * @param dispatcher + * The dispatcher instance. + * + * @param finalize_fun + * The function called after completing the processing of a + * dequeue batch. + * + * @param finalize_data + * A pointer to some application-specific opaque data (or NULL), + * which is supplied back to the application when @c finalize_fun is + * called. + * + * @return + * - >= 0: The identifier for this registration. + * - -ENOMEM: Unable to allocate sufficient resources. + */ +__rte_experimental +int +rte_dispatcher_finalize_register(struct rte_dispatcher *dispatcher, + rte_dispatcher_finalize_t finalize_fun, void *finalize_data); + +/** + * Unregister a finalize callback. + * + * This function may be called by any thread (including unregistered + * non-EAL threads), but not while the dispatcher is running on + * any service lcore. + * + * @param dispatcher + * The dispatcher instance. + * + * @param reg_id + * The finalize registration id returned by the original + * rte_dispatcher_finalize_register() call.
+ * + * @return + * - 0: Success + * - -EINVAL: The @c reg_id parameter was invalid. + */ +__rte_experimental +int +rte_dispatcher_finalize_unregister(struct rte_dispatcher *dispatcher, int reg_id); + +/** + * Start a dispatcher instance. + * + * Enables the dispatcher service. + * + * The underlying event device must have been started prior to calling + * rte_dispatcher_start(). + * + * For the dispatcher to actually perform work (i.e., dispatch + * events), its service must have been mapped to one or more service + * lcores, and its service run state set to '1'. A dispatcher's + * service id is retrieved using rte_dispatcher_service_id_get(). + * + * Each service lcore to which the dispatcher is mapped should + * have at least one event port configured. Such configuration is + * performed by calling rte_dispatcher_bind_port_to_lcore(), prior to + * starting the dispatcher. + * + * @param dispatcher + * The dispatcher instance. + */ +__rte_experimental +void +rte_dispatcher_start(struct rte_dispatcher *dispatcher); + +/** + * Stop a running dispatcher instance. + * + * Disables the dispatcher service. + * + * @param dispatcher + * The dispatcher instance. + */ +__rte_experimental +void +rte_dispatcher_stop(struct rte_dispatcher *dispatcher); + +/** + * Retrieve statistics for a dispatcher instance. + * + * This function is MT safe and may be called by any thread + * (including unregistered non-EAL threads). + * + * @param dispatcher + * The dispatcher instance. + * @param[out] stats + * A pointer to a structure to fill with statistics. + */ +__rte_experimental +void +rte_dispatcher_stats_get(const struct rte_dispatcher *dispatcher, + struct rte_dispatcher_stats *stats); + +/** + * Reset statistics for a dispatcher instance. + * + * This function may be called by any thread (including unregistered + * non-EAL threads), but may not produce the correct result if the + * dispatcher is running on any service lcore. + * + * @param dispatcher + * The dispatcher instance.
+ */ +__rte_experimental +void +rte_dispatcher_stats_reset(struct rte_dispatcher *dispatcher); + +#ifdef __cplusplus +} +#endif + +#endif /* __RTE_DISPATCHER_H__ */ diff --git a/lib/dispatcher/version.map b/lib/dispatcher/version.map new file mode 100644 index 0000000000..44585e4f15 --- /dev/null +++ b/lib/dispatcher/version.map @@ -0,0 +1,20 @@ +EXPERIMENTAL { + global: + + # added in 23.11 + rte_dispatcher_bind_port_to_lcore; + rte_dispatcher_create; + rte_dispatcher_finalize_register; + rte_dispatcher_finalize_unregister; + rte_dispatcher_free; + rte_dispatcher_register; + rte_dispatcher_service_id_get; + rte_dispatcher_start; + rte_dispatcher_stats_get; + rte_dispatcher_stats_reset; + rte_dispatcher_stop; + rte_dispatcher_unbind_port_from_lcore; + rte_dispatcher_unregister; + + local: *; +}; diff --git a/lib/meson.build b/lib/meson.build index cf4aa63630..59d381bf7a 100644 --- a/lib/meson.build +++ b/lib/meson.build @@ -35,6 +35,7 @@ libraries = [ 'distributor', 'efd', 'eventdev', + 'dispatcher', # dispatcher depends on eventdev 'gpudev', 'gro', 'gso', @@ -81,6 +82,7 @@ optional_libs = [ 'cfgfile', 'compressdev', 'cryptodev', + 'dispatcher', 'distributor', 'dmadev', 'efd',
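
Before the test-suite patch that follows, it may help to see the pieces of the library put together. The sketch below is not part of the patch series: the deploy_dispatcher() helper is hypothetical, the batch size of 32 and timeout of 0 are arbitrary, error unwinding is simplified, and it assumes the event device is already configured and started, and that lcore_id has been added as a service lcore (e.g., with rte_service_lcore_add() and rte_service_lcore_start()).

#include <stdint.h>

#include <rte_dispatcher.h>
#include <rte_errno.h>
#include <rte_service.h>

/* Hypothetical helper: create, bind, map and start a dispatcher. */
static int
deploy_dispatcher(uint8_t event_dev_id, uint8_t event_port_id,
	unsigned int lcore_id, struct rte_dispatcher **dispatcher)
{
	struct rte_dispatcher *d;
	uint32_t service_id;
	int rc;

	d = rte_dispatcher_create(event_dev_id);
	if (d == NULL)
		return -rte_errno;

	/* Poll this event port whenever the service runs on lcore_id. */
	rc = rte_dispatcher_bind_port_to_lcore(d, event_port_id, 32, 0,
		lcore_id);
	if (rc != 0)
		goto err_free;

	service_id = rte_dispatcher_service_id_get(d);

	/* Map the dispatcher service to the service lcore... */
	rc = rte_service_map_lcore_set(service_id, lcore_id, 1);
	if (rc != 0)
		goto err_free;

	/* ...and allow the service to be run. */
	rc = rte_service_runstate_set(service_id, 1);
	if (rc != 0)
		goto err_free;

	rte_dispatcher_start(d);

	*dispatcher = d;

	return 0;

err_free:
	rte_dispatcher_free(d);

	return rc;
}
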
From patchwork Thu Oct 12 08:50:30 2023
X-Patchwork-Submitter: Mattias Rönnblom
X-Patchwork-Id: 132574
X-Patchwork-Delegate: david.marchand@redhat.com
From: Mattias Rönnblom
CC: Jerin Jacob, Peter Nilsson, Heng Wang, Naga Harish K S V,
 Pavan Nikhilesh, Gujjar Abhinandan S, Erik Gabriel Carrillo,
 Shijith Thotton, Hemant Agrawal, Sachin Saxena, Liang Ma,
 Peter Mccarthy, Zhirun Yan, Mattias Rönnblom
Subject: [PATCH v8 2/3] test: add dispatcher test suite
Date: Thu, 12 Oct 2023 10:50:30 +0200
Message-ID: <20231012085031.444483-3-mattias.ronnblom@ericsson.com>
In-Reply-To: <20231012085031.444483-1-mattias.ronnblom@ericsson.com>
References: <20231011071700.442795-2-mattias.ronnblom@ericsson.com>
 <20231012085031.444483-1-mattias.ronnblom@ericsson.com>
Add unit tests for the dispatcher.

---

PATCH v8:
 o Adjust the test code to match the fact that the dispatcher start and
   stop functions no longer return a value.

PATCH v7:
 o Skip (not fail) tests in case too few lcores are available or if the
   DSW event device is not available. (David Marchand)
 o Properly clean up resources in the above-mentioned scenarios.

PATCH v6:
 o Register test as "fast". (David Marchand)
 o Use a single tab as indentation for continuation lines in multiple-line
   function prototypes. (David Marchand)
 o Add Signed-off-by line. (David Marchand)
 o Use DPDK atomics wrapper API instead of C11 atomics.

PATCH v5:
 o Update the test suite to use a pointer and not an integer id when
   calling dispatcher functions.

PATCH v3:
 o Adapt the test suite to dispatcher API name changes.

PATCH v2:
 o Test finalize callback functionality.
 o Test handler and finalizer count upper limits.
 o Add statistics reset test.
 o Make sure the dispatcher supplies the proper event dev id and port id
   back to the application.

PATCH:
 o Extend the test to cover the often-used handler optimization feature.

RFC v4:
 o Adapt to non-const events in the process function prototype.

Signed-off-by: Mattias Rönnblom
---
 MAINTAINERS | 1 +
 app/test/meson.build | 1 +
 app/test/test_dispatcher.c | 1056 ++++++++++++++++++++++++++++++++++++
 3 files changed, 1058 insertions(+)
 create mode 100644 app/test/test_dispatcher.c

diff --git a/MAINTAINERS b/MAINTAINERS index a7039b06dc..0e24da11fe 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1737,6 +1737,7 @@ F: lib/node/ Dispatcher - EXPERIMENTAL M: Mattias Rönnblom F: lib/dispatcher/ +F: app/test/test_dispatcher.c Test Applications diff --git a/app/test/meson.build b/app/test/meson.build index 20a9333c72..c238f4b21c 100644 --- a/app/test/meson.build +++ b/app/test/meson.build @@ -59,6 +59,7 @@ source_file_deps = { 'test_cycles.c': [], 'test_debug.c': [], 'test_devargs.c': ['kvargs'], + 'test_dispatcher.c': ['dispatcher'], 'test_distributor.c': ['distributor'], 'test_distributor_perf.c': ['distributor'], 'test_dmadev.c': ['dmadev', 'bus_vdev'], diff --git a/app/test/test_dispatcher.c b/app/test/test_dispatcher.c new file mode 100644 index 0000000000..6eb3f572cf --- /dev/null +++ b/app/test/test_dispatcher.c @@ -0,0 +1,1056 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2023 Ericsson AB + */ + +#include +#include +#include +#include +#include +#include + +#include "test.h" + +#define NUM_WORKERS 3 +#define NUM_PORTS (NUM_WORKERS + 1) +#define WORKER_PORT_ID(worker_idx) (worker_idx) +#define DRIVER_PORT_ID (NUM_PORTS - 1) + +#define NUM_SERVICE_CORES NUM_WORKERS +#define MIN_LCORES (NUM_SERVICE_CORES + 1) + +/* Eventdev */ +#define NUM_QUEUES 8 +#define LAST_QUEUE_ID (NUM_QUEUES - 1) +#define MAX_EVENTS 4096 +#define NEW_EVENT_THRESHOLD (MAX_EVENTS / 2) +#define DEQUEUE_BURST_SIZE 32 +#define ENQUEUE_BURST_SIZE 32 + +#define NUM_EVENTS 10000000 +#define NUM_FLOWS 16 + +#define DSW_VDEV "event_dsw0" + +struct app_queue { + uint8_t queue_id; + uint64_t sn[NUM_FLOWS]; + int dispatcher_reg_id; +}; + +struct cb_count { + uint8_t expected_event_dev_id; + uint8_t expected_event_port_id[RTE_MAX_LCORE]; + RTE_ATOMIC(int) count; +}; + +struct test_app { + uint8_t event_dev_id; + struct rte_dispatcher *dispatcher; + uint32_t dispatcher_service_id; + + unsigned int service_lcores[NUM_SERVICE_CORES]; + + int never_match_reg_id; +
uint64_t never_match_count; + struct cb_count never_process_count; + + struct app_queue queues[NUM_QUEUES]; + + int finalize_reg_id; + struct cb_count finalize_count; + + bool running; + + RTE_ATOMIC(int) completed_events; + RTE_ATOMIC(int) errors; +}; + +static struct test_app * +test_app_create(void) +{ + int i; + struct test_app *app; + + app = calloc(1, sizeof(struct test_app)); + + if (app == NULL) + return NULL; + + for (i = 0; i < NUM_QUEUES; i++) + app->queues[i].queue_id = i; + + return app; +} + +static void +test_app_free(struct test_app *app) +{ + free(app); +} + +static int +test_app_create_vdev(struct test_app *app) +{ + int rc; + + rc = rte_vdev_init(DSW_VDEV, NULL); + if (rc < 0) + return TEST_SKIPPED; + + rc = rte_event_dev_get_dev_id(DSW_VDEV); + + app->event_dev_id = (uint8_t)rc; + + return TEST_SUCCESS; +} + +static int +test_app_destroy_vdev(struct test_app *app) +{ + int rc; + + rc = rte_event_dev_close(app->event_dev_id); + TEST_ASSERT_SUCCESS(rc, "Error while closing event device"); + + rc = rte_vdev_uninit(DSW_VDEV); + TEST_ASSERT_SUCCESS(rc, "Error while uninitializing virtual device"); + + return TEST_SUCCESS; +} + +static int +test_app_setup_event_dev(struct test_app *app) +{ + int rc; + int i; + + rc = test_app_create_vdev(app); + if (rc != TEST_SUCCESS) + return rc; + + struct rte_event_dev_config config = { + .nb_event_queues = NUM_QUEUES, + .nb_event_ports = NUM_PORTS, + .nb_events_limit = MAX_EVENTS, + .nb_event_queue_flows = 64, + .nb_event_port_dequeue_depth = DEQUEUE_BURST_SIZE, + .nb_event_port_enqueue_depth = ENQUEUE_BURST_SIZE + }; + + rc = rte_event_dev_configure(app->event_dev_id, &config); + + TEST_ASSERT_SUCCESS(rc, "Unable to configure event device"); + + struct rte_event_queue_conf queue_config = { + .priority = RTE_EVENT_DEV_PRIORITY_NORMAL, + .schedule_type = RTE_SCHED_TYPE_ATOMIC, + .nb_atomic_flows = 64 + }; + + for (i = 0; i < NUM_QUEUES; i++) { + uint8_t queue_id = i; + + rc = rte_event_queue_setup(app->event_dev_id, queue_id, + &queue_config); + + TEST_ASSERT_SUCCESS(rc, "Unable to setup queue %d", queue_id); + } + + struct rte_event_port_conf port_config = { + .new_event_threshold = NEW_EVENT_THRESHOLD, + .dequeue_depth = DEQUEUE_BURST_SIZE, + .enqueue_depth = ENQUEUE_BURST_SIZE + }; + + for (i = 0; i < NUM_PORTS; i++) { + uint8_t event_port_id = i; + + rc = rte_event_port_setup(app->event_dev_id, event_port_id, + &port_config); + TEST_ASSERT_SUCCESS(rc, "Failed to create event port %d", + event_port_id); + + if (event_port_id == DRIVER_PORT_ID) + continue; + + rc = rte_event_port_link(app->event_dev_id, event_port_id, + NULL, NULL, 0); + + TEST_ASSERT_EQUAL(rc, NUM_QUEUES, "Failed to link port %d", + event_port_id); + } + + return TEST_SUCCESS; +} + +static int +test_app_teardown_event_dev(struct test_app *app) +{ + return test_app_destroy_vdev(app); +} + +static int +test_app_start_event_dev(struct test_app *app) +{ + int rc; + + rc = rte_event_dev_start(app->event_dev_id); + TEST_ASSERT_SUCCESS(rc, "Unable to start event device"); + + return TEST_SUCCESS; +} + +static void +test_app_stop_event_dev(struct test_app *app) +{ + rte_event_dev_stop(app->event_dev_id); +} + +static int +test_app_create_dispatcher(struct test_app *app) +{ + int rc; + + app->dispatcher = rte_dispatcher_create(app->event_dev_id); + + TEST_ASSERT(app->dispatcher != NULL, "Unable to create event " + "dispatcher"); + + app->dispatcher_service_id = + rte_dispatcher_service_id_get(app->dispatcher); + + rc = rte_service_set_stats_enable(app->dispatcher_service_id, 
1); + + TEST_ASSERT_SUCCESS(rc, "Unable to enable event dispatcher service " + "stats"); + + rc = rte_service_runstate_set(app->dispatcher_service_id, 1); + + TEST_ASSERT_SUCCESS(rc, "Unable to set dispatcher service runstate"); + + return TEST_SUCCESS; +} + +static int +test_app_free_dispatcher(struct test_app *app) +{ + int rc; + + rc = rte_service_runstate_set(app->dispatcher_service_id, 0); + TEST_ASSERT_SUCCESS(rc, "Error disabling dispatcher service"); + + rc = rte_dispatcher_free(app->dispatcher); + TEST_ASSERT_SUCCESS(rc, "Error freeing dispatcher"); + + return TEST_SUCCESS; +} + +static int +test_app_bind_ports(struct test_app *app) +{ + int i; + + app->never_process_count.expected_event_dev_id = + app->event_dev_id; + app->finalize_count.expected_event_dev_id = + app->event_dev_id; + + for (i = 0; i < NUM_WORKERS; i++) { + unsigned int lcore_id = app->service_lcores[i]; + uint8_t port_id = WORKER_PORT_ID(i); + + int rc = rte_dispatcher_bind_port_to_lcore( + app->dispatcher, port_id, DEQUEUE_BURST_SIZE, 0, + lcore_id + ); + + TEST_ASSERT_SUCCESS(rc, "Unable to bind event device port %d " + "to lcore %d", port_id, lcore_id); + + app->never_process_count.expected_event_port_id[lcore_id] = + port_id; + app->finalize_count.expected_event_port_id[lcore_id] = port_id; + } + + + return TEST_SUCCESS; +} + +static int +test_app_unbind_ports(struct test_app *app) +{ + int i; + + for (i = 0; i < NUM_WORKERS; i++) { + unsigned int lcore_id = app->service_lcores[i]; + + int rc = rte_dispatcher_unbind_port_from_lcore( + app->dispatcher, + WORKER_PORT_ID(i), + lcore_id + ); + + TEST_ASSERT_SUCCESS(rc, "Unable to unbind event device port %d " + "from lcore %d", WORKER_PORT_ID(i), + lcore_id); + } + + return TEST_SUCCESS; +} + +static bool +match_queue(const struct rte_event *event, void *cb_data) +{ + uintptr_t queue_id = (uintptr_t)cb_data; + + return event->queue_id == queue_id; +} + +static int +test_app_get_worker_index(struct test_app *app, unsigned int lcore_id) +{ + int i; + + for (i = 0; i < NUM_SERVICE_CORES; i++) + if (app->service_lcores[i] == lcore_id) + return i; + + return -1; +} + +static int +test_app_get_worker_port(struct test_app *app, unsigned int lcore_id) +{ + int worker; + + worker = test_app_get_worker_index(app, lcore_id); + + if (worker < 0) + return -1; + + return WORKER_PORT_ID(worker); +} + +static void +test_app_queue_note_error(struct test_app *app) +{ + rte_atomic_fetch_add_explicit(&app->errors, 1, rte_memory_order_relaxed); +} + +static void +test_app_process_queue(uint8_t p_event_dev_id, uint8_t p_event_port_id, + struct rte_event *in_events, uint16_t num, + void *cb_data) +{ + struct app_queue *app_queue = cb_data; + struct test_app *app = container_of(app_queue, struct test_app, + queues[app_queue->queue_id]); + unsigned int lcore_id = rte_lcore_id(); + bool intermediate_queue = app_queue->queue_id != LAST_QUEUE_ID; + int event_port_id; + uint16_t i; + struct rte_event out_events[num]; + + event_port_id = test_app_get_worker_port(app, lcore_id); + + if (event_port_id < 0 || p_event_dev_id != app->event_dev_id || + p_event_port_id != event_port_id) { + test_app_queue_note_error(app); + return; + } + + for (i = 0; i < num; i++) { + const struct rte_event *in_event = &in_events[i]; + struct rte_event *out_event = &out_events[i]; + uint64_t sn = in_event->u64; + uint64_t expected_sn; + + if (in_event->queue_id != app_queue->queue_id) { + test_app_queue_note_error(app); + return; + } + + expected_sn = app_queue->sn[in_event->flow_id]++; + + if (expected_sn != sn) { 
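+			/* a per-flow sequence number mismatch means
+			 * ordering within an atomic flow was violated
+			 */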
+ test_app_queue_note_error(app); + return; + } + + if (intermediate_queue) + *out_event = (struct rte_event) { + .queue_id = in_event->queue_id + 1, + .flow_id = in_event->flow_id, + .sched_type = RTE_SCHED_TYPE_ATOMIC, + .op = RTE_EVENT_OP_FORWARD, + .priority = RTE_EVENT_DEV_PRIORITY_NORMAL, + .u64 = sn + }; + } + + if (intermediate_queue) { + uint16_t n = 0; + + do { + n += rte_event_enqueue_forward_burst(p_event_dev_id, + p_event_port_id, + out_events + n, + num - n); + } while (n != num); + } else + rte_atomic_fetch_add_explicit(&app->completed_events, num, + rte_memory_order_relaxed); +} + +static bool +never_match(const struct rte_event *event __rte_unused, void *cb_data) +{ + uint64_t *count = cb_data; + + (*count)++; + + return false; +} + +static void +test_app_never_process(uint8_t event_dev_id, uint8_t event_port_id, + struct rte_event *in_events __rte_unused, uint16_t num, void *cb_data) +{ + struct cb_count *count = cb_data; + unsigned int lcore_id = rte_lcore_id(); + + if (event_dev_id == count->expected_event_dev_id && + event_port_id == count->expected_event_port_id[lcore_id]) + rte_atomic_fetch_add_explicit(&count->count, num, + rte_memory_order_relaxed); +} + +static void +finalize(uint8_t event_dev_id, uint8_t event_port_id, void *cb_data) +{ + struct cb_count *count = cb_data; + unsigned int lcore_id = rte_lcore_id(); + + if (event_dev_id == count->expected_event_dev_id && + event_port_id == count->expected_event_port_id[lcore_id]) + rte_atomic_fetch_add_explicit(&count->count, 1, + rte_memory_order_relaxed); +} + +static int +test_app_register_callbacks(struct test_app *app) +{ + int i; + + app->never_match_reg_id = + rte_dispatcher_register(app->dispatcher, never_match, + &app->never_match_count, + test_app_never_process, + &app->never_process_count); + + TEST_ASSERT(app->never_match_reg_id >= 0, "Unable to register " + "never-match handler"); + + for (i = 0; i < NUM_QUEUES; i++) { + struct app_queue *app_queue = &app->queues[i]; + uintptr_t queue_id = app_queue->queue_id; + int reg_id; + + reg_id = rte_dispatcher_register(app->dispatcher, + match_queue, (void *)queue_id, + test_app_process_queue, + app_queue); + + TEST_ASSERT(reg_id >= 0, "Unable to register consumer " + "callback for queue %d", i); + + app_queue->dispatcher_reg_id = reg_id; + } + + app->finalize_reg_id = + rte_dispatcher_finalize_register(app->dispatcher, + finalize, + &app->finalize_count); + TEST_ASSERT_SUCCESS(app->finalize_reg_id, "Error registering " + "finalize callback"); + + return TEST_SUCCESS; +} + +static int +test_app_unregister_callback(struct test_app *app, uint8_t queue_id) +{ + int reg_id = app->queues[queue_id].dispatcher_reg_id; + int rc; + + if (reg_id < 0) /* unregistered already */ + return 0; + + rc = rte_dispatcher_unregister(app->dispatcher, reg_id); + + TEST_ASSERT_SUCCESS(rc, "Unable to unregister consumer " + "callback for queue %d", queue_id); + + app->queues[queue_id].dispatcher_reg_id = -1; + + return TEST_SUCCESS; +} + +static int +test_app_unregister_callbacks(struct test_app *app) +{ + int i; + int rc; + + if (app->never_match_reg_id >= 0) { + rc = rte_dispatcher_unregister(app->dispatcher, + app->never_match_reg_id); + + TEST_ASSERT_SUCCESS(rc, "Unable to unregister never-match " + "handler"); + app->never_match_reg_id = -1; + } + + for (i = 0; i < NUM_QUEUES; i++) { + rc = test_app_unregister_callback(app, i); + if (rc != TEST_SUCCESS) + return rc; + } + + if (app->finalize_reg_id >= 0) { + rc = rte_dispatcher_finalize_unregister( + app->dispatcher, 
app->finalize_reg_id
+		);
+		TEST_ASSERT_SUCCESS(rc, "Unable to unregister finalize "
+				    "callback");
+		app->finalize_reg_id = -1;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static void
+test_app_start_dispatcher(struct test_app *app)
+{
+	rte_dispatcher_start(app->dispatcher);
+}
+
+static void
+test_app_stop_dispatcher(struct test_app *app)
+{
+	rte_dispatcher_stop(app->dispatcher);
+}
+
+static int
+test_app_reset_dispatcher_stats(struct test_app *app)
+{
+	struct rte_dispatcher_stats stats;
+
+	rte_dispatcher_stats_reset(app->dispatcher);
+
+	memset(&stats, 0xff, sizeof(stats));
+
+	rte_dispatcher_stats_get(app->dispatcher, &stats);
+
+	TEST_ASSERT_EQUAL(stats.poll_count, 0, "Poll count not zero");
+	TEST_ASSERT_EQUAL(stats.ev_batch_count, 0, "Batch count not zero");
+	TEST_ASSERT_EQUAL(stats.ev_dispatch_count, 0, "Dispatch count "
+			  "not zero");
+	TEST_ASSERT_EQUAL(stats.ev_drop_count, 0, "Drop count not zero");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_app_setup_service_core(struct test_app *app, unsigned int lcore_id)
+{
+	int rc;
+
+	rc = rte_service_lcore_add(lcore_id);
+	TEST_ASSERT_SUCCESS(rc, "Unable to make lcore %d an event dispatcher "
+			    "service core", lcore_id);
+
+	rc = rte_service_map_lcore_set(app->dispatcher_service_id, lcore_id, 1);
+	TEST_ASSERT_SUCCESS(rc, "Unable to map event dispatcher service");
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_app_setup_service_cores(struct test_app *app)
+{
+	int i;
+	int lcore_id = -1;
+
+	for (i = 0; i < NUM_SERVICE_CORES; i++) {
+		lcore_id = rte_get_next_lcore(lcore_id, 1, 0);
+
+		app->service_lcores[i] = lcore_id;
+	}
+
+	for (i = 0; i < NUM_SERVICE_CORES; i++) {
+		int rc;
+
+		rc = test_app_setup_service_core(app, app->service_lcores[i]);
+		if (rc != TEST_SUCCESS)
+			return rc;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_app_teardown_service_core(struct test_app *app, unsigned int lcore_id)
+{
+	int rc;
+
+	rc = rte_service_map_lcore_set(app->dispatcher_service_id, lcore_id, 0);
+	TEST_ASSERT_SUCCESS(rc, "Unable to unmap event dispatcher service");
+
+	rc = rte_service_lcore_del(lcore_id);
+	TEST_ASSERT_SUCCESS(rc, "Unable to change role of service lcore %d",
+			    lcore_id);
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_app_teardown_service_cores(struct test_app *app)
+{
+	int i;
+
+	for (i = 0; i < NUM_SERVICE_CORES; i++) {
+		unsigned int lcore_id = app->service_lcores[i];
+		int rc;
+
+		rc = test_app_teardown_service_core(app, lcore_id);
+		if (rc != TEST_SUCCESS)
+			return rc;
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_app_start_service_cores(struct test_app *app)
+{
+	int i;
+
+	for (i = 0; i < NUM_SERVICE_CORES; i++) {
+		unsigned int lcore_id = app->service_lcores[i];
+		int rc;
+
+		rc = rte_service_lcore_start(lcore_id);
+		TEST_ASSERT_SUCCESS(rc, "Unable to start service lcore %d",
+				    lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_app_stop_service_cores(struct test_app *app)
+{
+	int i;
+
+	for (i = 0; i < NUM_SERVICE_CORES; i++) {
+		unsigned int lcore_id = app->service_lcores[i];
+		int rc;
+
+		rc = rte_service_lcore_stop(lcore_id);
+		TEST_ASSERT_SUCCESS(rc, "Unable to stop service lcore %d",
+				    lcore_id);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_app_start(struct test_app *app)
+{
+	int rc;
+
+	rc = test_app_start_event_dev(app);
+	if (rc != TEST_SUCCESS)
+		return rc;
+
+	rc = test_app_start_service_cores(app);
+	if (rc != TEST_SUCCESS)
+		return rc;
+
+	test_app_start_dispatcher(app);
+
+	app->running = true;
+
+	return TEST_SUCCESS;
+}
+
+static int
+test_app_stop(struct test_app *app)
+{
+	int rc;
+
+	test_app_stop_dispatcher(app);
+
+	rc =
test_app_stop_service_cores(app); + if (rc != TEST_SUCCESS) + return rc; + + test_app_stop_event_dev(app); + + app->running = false; + + return TEST_SUCCESS; +} + +struct test_app *test_app; + +static int +test_setup(void) +{ + int rc; + + if (rte_lcore_count() < MIN_LCORES) { + printf("Not enough cores for dispatcher_autotest; expecting at " + "least %d.\n", MIN_LCORES); + return TEST_SKIPPED; + } + + test_app = test_app_create(); + TEST_ASSERT(test_app != NULL, "Unable to allocate memory"); + + rc = test_app_setup_event_dev(test_app); + if (rc != TEST_SUCCESS) + goto err_free_app; + + rc = test_app_create_dispatcher(test_app); + if (rc != TEST_SUCCESS) + goto err_teardown_event_dev; + + rc = test_app_setup_service_cores(test_app); + if (rc != TEST_SUCCESS) + goto err_free_dispatcher; + + rc = test_app_register_callbacks(test_app); + if (rc != TEST_SUCCESS) + goto err_teardown_service_cores; + + rc = test_app_bind_ports(test_app); + if (rc != TEST_SUCCESS) + goto err_unregister_callbacks; + + return TEST_SUCCESS; + +err_unregister_callbacks: + test_app_unregister_callbacks(test_app); +err_teardown_service_cores: + test_app_teardown_service_cores(test_app); +err_free_dispatcher: + test_app_free_dispatcher(test_app); +err_teardown_event_dev: + test_app_teardown_event_dev(test_app); +err_free_app: + test_app_free(test_app); + + test_app = NULL; + + return rc; +} + +static void test_teardown(void) +{ + if (test_app == NULL) + return; + + if (test_app->running) + test_app_stop(test_app); + + test_app_teardown_service_cores(test_app); + + test_app_unregister_callbacks(test_app); + + test_app_unbind_ports(test_app); + + test_app_free_dispatcher(test_app); + + test_app_teardown_event_dev(test_app); + + test_app_free(test_app); + + test_app = NULL; +} + +static int +test_app_get_completed_events(struct test_app *app) +{ + return rte_atomic_load_explicit(&app->completed_events, + rte_memory_order_relaxed); +} + +static int +test_app_get_errors(struct test_app *app) +{ + return rte_atomic_load_explicit(&app->errors, rte_memory_order_relaxed); +} + +static int +test_basic(void) +{ + int rc; + int i; + + rc = test_app_start(test_app); + if (rc != TEST_SUCCESS) + return rc; + + uint64_t sns[NUM_FLOWS] = { 0 }; + + for (i = 0; i < NUM_EVENTS;) { + struct rte_event events[ENQUEUE_BURST_SIZE]; + int left; + int batch_size; + int j; + uint16_t n = 0; + + batch_size = 1 + rte_rand_max(ENQUEUE_BURST_SIZE); + left = NUM_EVENTS - i; + + batch_size = RTE_MIN(left, batch_size); + + for (j = 0; j < batch_size; j++) { + struct rte_event *event = &events[j]; + uint64_t sn; + uint32_t flow_id; + + flow_id = rte_rand_max(NUM_FLOWS); + + sn = sns[flow_id]++; + + *event = (struct rte_event) { + .queue_id = 0, + .flow_id = flow_id, + .sched_type = RTE_SCHED_TYPE_ATOMIC, + .op = RTE_EVENT_OP_NEW, + .priority = RTE_EVENT_DEV_PRIORITY_NORMAL, + .u64 = sn + }; + } + + while (n < batch_size) + n += rte_event_enqueue_new_burst(test_app->event_dev_id, + DRIVER_PORT_ID, + events + n, + batch_size - n); + + i += batch_size; + } + + while (test_app_get_completed_events(test_app) != NUM_EVENTS) + rte_event_maintain(test_app->event_dev_id, DRIVER_PORT_ID, 0); + + rc = test_app_get_errors(test_app); + TEST_ASSERT(rc == 0, "%d errors occurred", rc); + + rc = test_app_stop(test_app); + if (rc != TEST_SUCCESS) + return rc; + + struct rte_dispatcher_stats stats; + rte_dispatcher_stats_get(test_app->dispatcher, &stats); + + TEST_ASSERT_EQUAL(stats.ev_drop_count, 0, "Drop count is not zero"); + TEST_ASSERT_EQUAL(stats.ev_dispatch_count, 
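/* each event traverses all NUM_QUEUES queues, so it is
			  dispatched NUM_QUEUES times */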
NUM_EVENTS * NUM_QUEUES, + "Invalid dispatch count"); + TEST_ASSERT(stats.poll_count > 0, "Poll count is zero"); + + TEST_ASSERT_EQUAL(test_app->never_process_count.count, 0, + "Never-match handler's process function has " + "been called"); + + int finalize_count = + rte_atomic_load_explicit(&test_app->finalize_count.count, + rte_memory_order_relaxed); + + TEST_ASSERT(finalize_count > 0, "Finalize count is zero"); + TEST_ASSERT(finalize_count <= (int)stats.ev_dispatch_count, + "Finalize count larger than event count"); + + TEST_ASSERT_EQUAL(finalize_count, (int)stats.ev_batch_count, + "%"PRIu64" batches dequeued, but finalize called %d " + "times", stats.ev_batch_count, finalize_count); + + /* + * The event dispatcher should call often-matching match functions + * more often, and thus this never-matching match function should + * be called relatively infrequently. + */ + TEST_ASSERT(test_app->never_match_count < + (stats.ev_dispatch_count / 4), + "Never-matching match function called suspiciously often"); + + rc = test_app_reset_dispatcher_stats(test_app); + if (rc != TEST_SUCCESS) + return rc; + + return TEST_SUCCESS; +} + +static int +test_drop(void) +{ + int rc; + uint8_t unhandled_queue; + struct rte_dispatcher_stats stats; + + unhandled_queue = (uint8_t)rte_rand_max(NUM_QUEUES); + + rc = test_app_start(test_app); + if (rc != TEST_SUCCESS) + return rc; + + rc = test_app_unregister_callback(test_app, unhandled_queue); + if (rc != TEST_SUCCESS) + return rc; + + struct rte_event event = { + .queue_id = unhandled_queue, + .flow_id = 0, + .sched_type = RTE_SCHED_TYPE_ATOMIC, + .op = RTE_EVENT_OP_NEW, + .priority = RTE_EVENT_DEV_PRIORITY_NORMAL, + .u64 = 0 + }; + + do { + rc = rte_event_enqueue_burst(test_app->event_dev_id, + DRIVER_PORT_ID, &event, 1); + } while (rc == 0); + + do { + rte_dispatcher_stats_get(test_app->dispatcher, &stats); + + rte_event_maintain(test_app->event_dev_id, DRIVER_PORT_ID, 0); + } while (stats.ev_drop_count == 0 && stats.ev_dispatch_count == 0); + + rc = test_app_stop(test_app); + if (rc != TEST_SUCCESS) + return rc; + + TEST_ASSERT_EQUAL(stats.ev_drop_count, 1, "Drop count is not one"); + TEST_ASSERT_EQUAL(stats.ev_dispatch_count, 0, + "Dispatch count is not zero"); + TEST_ASSERT(stats.poll_count > 0, "Poll count is zero"); + + return TEST_SUCCESS; +} + +#define MORE_THAN_MAX_HANDLERS 1000 +#define MIN_HANDLERS 32 + +static int +test_many_handler_registrations(void) +{ + int rc; + int num_regs = 0; + int reg_ids[MORE_THAN_MAX_HANDLERS]; + int reg_id; + int i; + + rc = test_app_unregister_callbacks(test_app); + if (rc != TEST_SUCCESS) + return rc; + + for (i = 0; i < MORE_THAN_MAX_HANDLERS; i++) { + reg_id = rte_dispatcher_register(test_app->dispatcher, + never_match, NULL, + test_app_never_process, NULL); + if (reg_id < 0) + break; + + reg_ids[num_regs++] = reg_id; + } + + TEST_ASSERT_EQUAL(reg_id, -ENOMEM, "Incorrect return code. 
Expected "
+			  "%d but was %d", -ENOMEM, reg_id);
+	TEST_ASSERT(num_regs >= MIN_HANDLERS, "Registration failed already "
+		    "after %d handler registrations.", num_regs);
+
+	for (i = 0; i < num_regs; i++) {
+		rc = rte_dispatcher_unregister(test_app->dispatcher,
+					       reg_ids[i]);
+		TEST_ASSERT_SUCCESS(rc, "Unable to unregister handler %d",
+				    reg_ids[i]);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static void
+dummy_finalize(uint8_t event_dev_id __rte_unused,
+	       uint8_t event_port_id __rte_unused,
+	       void *cb_data __rte_unused)
+{
+}
+
+#define MORE_THAN_MAX_FINALIZERS 1000
+#define MIN_FINALIZERS 16
+
+static int
+test_many_finalize_registrations(void)
+{
+	int rc;
+	int num_regs = 0;
+	int reg_ids[MORE_THAN_MAX_FINALIZERS];
+	int reg_id;
+	int i;
+
+	rc = test_app_unregister_callbacks(test_app);
+	if (rc != TEST_SUCCESS)
+		return rc;
+
+	for (i = 0; i < MORE_THAN_MAX_FINALIZERS; i++) {
+		reg_id = rte_dispatcher_finalize_register(
+			test_app->dispatcher, dummy_finalize, NULL
+		);
+
+		if (reg_id < 0)
+			break;
+
+		reg_ids[num_regs++] = reg_id;
+	}
+
+	TEST_ASSERT_EQUAL(reg_id, -ENOMEM, "Incorrect return code. Expected "
+			  "%d but was %d", -ENOMEM, reg_id);
+	TEST_ASSERT(num_regs >= MIN_FINALIZERS, "Finalize registration failed "
+		    "already after %d registrations.", num_regs);
+
+	for (i = 0; i < num_regs; i++) {
+		rc = rte_dispatcher_finalize_unregister(
+			test_app->dispatcher, reg_ids[i]
+		);
+		TEST_ASSERT_SUCCESS(rc, "Unable to unregister finalizer %d",
+				    reg_ids[i]);
+	}
+
+	return TEST_SUCCESS;
+}
+
+static struct unit_test_suite test_suite = {
+	.suite_name = "Event dispatcher test suite",
+	.unit_test_cases = {
+		TEST_CASE_ST(test_setup, test_teardown, test_basic),
+		TEST_CASE_ST(test_setup, test_teardown, test_drop),
+		TEST_CASE_ST(test_setup, test_teardown,
+			     test_many_handler_registrations),
+		TEST_CASE_ST(test_setup, test_teardown,
+			     test_many_finalize_registrations),
+		TEST_CASES_END()
+	}
+};
+
+static int
+test_dispatcher(void)
+{
+	return unit_test_suite_runner(&test_suite);
+}
+
+REGISTER_FAST_TEST(dispatcher_autotest, false, true, test_dispatcher);

From patchwork Thu Oct 12 08:50:31 2023
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Patchwork-Submitter: =?utf-8?q?Mattias_R=C3=B6nnblom?=
X-Patchwork-Id: 132576
X-Patchwork-Delegate: david.marchand@redhat.com
From: =?utf-8?q?Mattias_R=C3=B6nnblom?=
To: ,
CC: Jerin Jacob , , , , Peter Nilsson , Heng Wang ,
 "Naga Harish K S V" , Pavan Nikhilesh , Gujjar Abhinandan S ,
 Erik Gabriel Carrillo , Shijith Thotton , "Hemant Agrawal" ,
 Sachin Saxena , Liang Ma , Peter Mccarthy , Zhirun Yan ,
 =?utf-8?q?Mattias_R=C3=B6nnblom?=
Subject: [PATCH v8 3/3] doc: add dispatcher programming guide
Date: Thu, 12 Oct 2023 10:50:31 +0200
Message-ID: <20231012085031.444483-4-mattias.ronnblom@ericsson.com>
X-Mailer: git-send-email 2.34.1
In-Reply-To: <20231012085031.444483-1-mattias.ronnblom@ericsson.com>
References: <20231011071700.442795-2-mattias.ronnblom@ericsson.com>
 <20231012085031.444483-1-mattias.ronnblom@ericsson.com>
MIME-Version: 1.0
Provide programming guide for the dispatcher library.

Signed-off-by: Mattias Rönnblom
---

PATCH v7:
 o Mark pseudo code blocks as being type "none", to avoid Sphinx
   failures on non-Ubuntu systems. (David Marchand)
 o "Necessarily" necessarily needs to be spelled just so. (David Marchand)
PATCH v6:
 o Eliminate unneeded white space in code blocks. (David Marchand)
PATCH v5:
 o Update guide to match API changes related to dispatcher ids.
PATCH v3:
 o Adapt guide to the dispatcher API name changes.
PATCH:
 o Improve grammar and spelling.
RFC v4:
 o Extend event matching section of the programming guide.
 o Improve grammar and spelling.
---
 MAINTAINERS                              |   1 +
 doc/guides/prog_guide/dispatcher_lib.rst | 433 +++++++++++++++++++++++
 doc/guides/prog_guide/index.rst          |   1 +
 3 files changed, 435 insertions(+)
 create mode 100644 doc/guides/prog_guide/dispatcher_lib.rst

diff --git a/MAINTAINERS b/MAINTAINERS
index 0e24da11fe..affb4b9410 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1738,6 +1738,7 @@ Dispatcher - EXPERIMENTAL
 M: Mattias Rönnblom
 F: lib/dispatcher/
 F: app/test/test_dispatcher.c
+F: doc/guides/prog_guide/dispatcher_lib.rst

 Test Applications

diff --git a/doc/guides/prog_guide/dispatcher_lib.rst b/doc/guides/prog_guide/dispatcher_lib.rst
new file mode 100644
index 0000000000..6de1ea78b0
--- /dev/null
+++ b/doc/guides/prog_guide/dispatcher_lib.rst
@@ -0,0 +1,433 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2023 Ericsson AB.
+
+Dispatcher
+==========
+
+Overview
+--------
+
+The purpose of the dispatcher is to help reduce coupling in an
+:doc:`Eventdev `-based DPDK application.
+
+In particular, the dispatcher addresses a scenario where an
+application's modules share the same event device and event device
+ports, and perform work on the same lcore threads.
+
+The dispatcher replaces the conditional logic that follows an event
+device dequeue operation, where events are dispatched to different
+parts of the application, typically based on fields in the
+``rte_event``, such as the ``queue_id``, ``sub_event_type``, or
+``sched_type``.
+
+Below is an excerpt from a fictitious application consisting of two
+modules; A and B. In this example, event-to-module routing is based
+purely on queue id, where module A expects all events on a certain
+queue id, and module B those on two other queue ids. [#Mapping]_
+
+.. code-block:: c
+
+    for (;;) {
+            struct rte_event events[MAX_BURST];
+            unsigned int i, n;
+
+            n = rte_event_dequeue_burst(dev_id, port_id, events,
+                                        MAX_BURST, 0);
+
+            for (i = 0; i < n; i++) {
+                    const struct rte_event *event = &events[i];
+
+                    switch (event->queue_id) {
+                    case MODULE_A_QUEUE_ID:
+                            module_a_process(event);
+                            break;
+                    case MODULE_B_STAGE_0_QUEUE_ID:
+                            module_b_process_stage_0(event);
+                            break;
+                    case MODULE_B_STAGE_1_QUEUE_ID:
+                            module_b_process_stage_1(event);
+                            break;
+                    }
+            }
+    }
+
+The issue this example attempts to illustrate is that the centralized
+conditional logic has knowledge of things that should be private to
+the modules. In other words, this pattern leads to a violation of
+module encapsulation.
+
+The shared conditional logic contains explicit knowledge about what
+events should go where. If, for example, ``module_a_process()`` is
+broken up into two processing stages — a module-internal affair — the
+shared conditional code must be updated to reflect this change.
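+
+If module A were split in that manner, the shared logic might, for
+example, have to grow along the lines of the following sketch (the
+new queue id macro and the stage functions are hypothetical):
+
+.. code-block:: c
+
+    switch (event->queue_id) {
+    case MODULE_A_STAGE_0_QUEUE_ID:
+            module_a_process_stage_0(event);
+            break;
+    case MODULE_A_STAGE_1_QUEUE_ID:
+            module_a_process_stage_1(event);
+            break;
+    /* ...module B cases as before... */
+    }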
+
+The centralized event routing code becomes an issue in larger
+applications, where modules are developed by different organizations.
+This pattern also makes module reuse across different applications
+more difficult. The part of the conditional logic relevant for a
+particular application may need to be duplicated across many module
+instantiations (e.g., applications and test setups).
+
+The dispatcher separates the mechanism (routing events to their
+receiver) from the policy (which events should go where).
+
+The basic operation of the dispatcher is as follows:
+
+* Dequeue a batch of events from the event device.
+* For each event, determine which handler should receive the event,
+  using a set of application-provided, per-handler event matching
+  callback functions.
+* Provide the events matching a particular handler to that handler,
+  using its process callback.
+
+Had the above application made use of the dispatcher, the code
+relevant for its module A might have looked something like this:
+
+.. code-block:: c
+
+    static bool
+    module_a_match(const struct rte_event *event, void *cb_data)
+    {
+            return event->queue_id == MODULE_A_QUEUE_ID;
+    }
+
+    static void
+    module_a_process_events(uint8_t event_dev_id, uint8_t event_port_id,
+                            struct rte_event *events,
+                            uint16_t num, void *cb_data)
+    {
+            uint16_t i;
+
+            for (i = 0; i < num; i++)
+                    module_a_process_event(&events[i]);
+    }
+
+    /* In the module's initialization code */
+    rte_dispatcher_register(dispatcher, module_a_match, NULL,
+                            module_a_process_events, module_a_data);
+
+(Error handling is left out of this and future example code in this
+chapter.)
+
+When the shared conditional logic is removed, a new question arises:
+which part of the system actually runs the dispatching mechanism? Or
+phrased differently, what is replacing the function hosting the shared
+conditional logic (typically launched on all lcores using
+``rte_eal_remote_launch()``)? To solve this issue, the dispatcher is
+run as a DPDK :doc:`Service `.
+
+The dispatcher is a layer between the application and the event device
+in the receive direction. In the transmit (i.e., item of work
+submission) direction, the application directly accesses the Eventdev
+core API (e.g., ``rte_event_enqueue_burst()``) to submit new or
+forwarded events to the event device.
+
+Dispatcher Creation
+-------------------
+
+A dispatcher is created using ``rte_dispatcher_create()``.
+
+The event device must be configured before the dispatcher is created.
+
+Usually, only one dispatcher is needed per event device. A dispatcher
+handles exactly one event device.
+
+A dispatcher is freed using the ``rte_dispatcher_free()``
+function. The dispatcher's service functions must not be running on
+any lcore at the point of this call.
+
+Event Port Binding
+------------------
+
+To be able to dequeue events, the dispatcher must know which event
+ports are to be used, on all the lcores it uses. The application
+provides this information using
+``rte_dispatcher_bind_port_to_lcore()``.
+
+This call is typically made from the part of the application that
+deals with deployment issues (e.g., iterating lcores and determining
+which lcore does what), at the time of application initialization.
+
+``rte_dispatcher_unbind_port_from_lcore()`` is used to undo
+this operation.
+
+Multiple lcore threads may not safely use the same event
+port. [#Port-MT-Safety]_
+
+Event ports cannot safely be bound or unbound while the dispatcher's
+service function is running on any lcore.
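+
+The following is a minimal sketch combining dispatcher creation and
+event port binding. It assumes a 1:1 mapping between event ports and
+lcores, with event port ids assigned in lcore id order (an
+application-specific convention, not something the API mandates), and
+leaves out error handling, as elsewhere in this chapter:
+
+.. code-block:: c
+
+    static struct rte_dispatcher *
+    create_and_bind_dispatcher(uint8_t event_dev_id)
+    {
+            struct rte_dispatcher *dispatcher;
+            uint8_t event_port_id = 0;
+            unsigned int lcore_id;
+
+            dispatcher = rte_dispatcher_create(event_dev_id);
+
+            /* Dequeue up to MAX_BURST events per service function
+             * invocation, without waiting for events (zero timeout).
+             */
+            RTE_LCORE_FOREACH(lcore_id)
+                    rte_dispatcher_bind_port_to_lcore(dispatcher,
+                                                      event_port_id++,
+                                                      MAX_BURST, 0,
+                                                      lcore_id);
+
+            return dispatcher;
+    }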
+
+Event Handlers
+--------------
+
+The dispatcher handler is an interface between the dispatcher and an
+application module, used to route events to the appropriate part of
+the application.
+
+Handler Registration
+^^^^^^^^^^^^^^^^^^^^
+
+The event handler interface consists of two function pointers:
+
+* The ``rte_dispatcher_match_t`` callback, whose job is to
+  decide if this event is to be the property of this handler.
+* The ``rte_dispatcher_process_t`` callback, which is used by the
+  dispatcher to deliver matched events.
+
+An event handler registration is valid on all lcores.
+
+The functions pointed to by the match and process callbacks reside in
+the application's domain logic, with one or more handlers per
+application module.
+
+A module may use more than one event handler, for convenience or to
+further decouple sub-modules. However, the dispatcher may impose an
+upper limit on the number of handlers. In addition, installing a large
+number of handlers increases dispatcher overhead, although this does
+not necessarily translate to a system-level performance degradation. See
+the section on :ref:`Event Clustering` for more information.
+
+Handler registration and unregistration cannot safely be done while
+the dispatcher's service function is running on any lcore.
+
+Event Matching
+^^^^^^^^^^^^^^
+
+A handler's match callback function decides if an event should be
+delivered to this handler, or not.
+
+An event is routed to no more than one handler. Thus, if a match
+function returns true, no further match functions will be invoked for
+that event.
+
+Match functions must not depend on being invoked in any particular
+order (e.g., in the handler registration order).
+
+Events failing to match any handler are dropped, and the
+``ev_drop_count`` counter is updated accordingly.
+
+Event Delivery
+^^^^^^^^^^^^^^
+
+The handler callbacks are invoked by the dispatcher's service
+function, upon the arrival of events to the event ports bound to the
+running service lcore.
+
+A particular event is delivered to at most one handler.
+
+The application must not depend on all match callback invocations for
+a particular event batch being made prior to any process calls being
+made. For example, if the dispatcher dequeues two events from the
+event device, it may choose to find out the destination for the first
+event, and deliver it, and then continue to find out the destination
+for the second, and then deliver that event as well. The dispatcher
+may also choose a strategy where no event is delivered until the
+destination handlers for both events have been determined.
+
+The events provided in a single process call always belong to the same
+event port dequeue burst.
+
+.. _Event Clustering:
+
+Event Clustering
+^^^^^^^^^^^^^^^^
+
+The dispatcher maintains the order of events destined for the same
+handler.
+
+*Order* here refers to the order in which the events were delivered
+from the event device to the dispatcher (i.e., in the event array
+populated by ``rte_event_dequeue_burst()``), in relation to the order
+in which the dispatcher delivers these events to the application.
+
+The dispatcher *does not* guarantee to maintain the order of events
+delivered to *different* handlers.
+
+For example, assume that ``MODULE_A_QUEUE_ID`` expands to the value 0,
+and ``MODULE_B_STAGE_0_QUEUE_ID`` expands to the value 1. Then
+consider a scenario where the following events are dequeued from the
+event device (qid is short for event queue id).
+
+.. code-block:: none
+
+   [e0: qid=1], [e1: qid=1], [e2: qid=0], [e3: qid=1]
+
+The dispatcher may deliver the events in the following manner:
+
+.. code-block:: none
+
+   module_b_stage_0_process([e0: qid=1], [e1: qid=1])
+   module_a_process([e2: qid=0])
+   module_b_stage_0_process([e3: qid=1])
+
+The dispatcher may also choose to cluster (group) all events destined
+for ``module_b_stage_0_process()`` into one array:
+
+.. code-block:: none
+
+   module_b_stage_0_process([e0: qid=1], [e1: qid=1], [e3: qid=1])
+   module_a_process([e2: qid=0])
+
+Here, the event ``e2`` is reordered and placed behind ``e3``, from a
+delivery order point of view. This kind of reshuffling is allowed,
+since the events are destined for different handlers.
+
+The dispatcher may also deliver ``e2`` before the three events
+destined for module B.
+
+An example of what the dispatcher may not do is to reorder event
+``e1`` so that it precedes ``e0`` in the array passed to module B's
+stage 0 process callback.
+
+Although clustering requires some extra work for the dispatcher, it
+leads to fewer process function calls. In addition, and likely more
+importantly, it improves temporal locality of memory accesses to
+handler-specific data structures in the application, which in turn may
+lead to fewer cache misses and improved overall performance.
+
+Finalize
+--------
+
+The dispatcher may be configured to notify one or more parts of the
+application when the matching and processing of a batch of events has
+completed.
+
+The ``rte_dispatcher_finalize_register`` call is used to
+register a finalize callback. The function
+``rte_dispatcher_finalize_unregister`` is used to remove a
+callback.
+
+The finalize hook may be used by a set of event handlers (in the same
+module, or a set of cooperating modules) sharing an event output
+buffer, since it allows for flushing of the buffers at the last
+possible moment. In particular, it allows for buffering of
+``RTE_EVENT_OP_FORWARD`` events, which must be flushed before the next
+``rte_event_dequeue_burst()`` call is made (assuming implicit release
+is employed).
+
+The following is an example with an application-defined event output
+buffer (the ``event_buffer``):
+
+.. code-block:: c
+
+    static void
+    finalize_batch(uint8_t event_dev_id, uint8_t event_port_id,
+                   void *cb_data)
+    {
+            struct event_buffer *buffer = cb_data;
+            unsigned lcore_id = rte_lcore_id();
+            struct event_buffer_lcore *lcore_buffer =
+                    &buffer->lcore_buffer[lcore_id];
+
+            event_buffer_lcore_flush(lcore_buffer);
+    }
+
+    /* In the module's initialization code */
+    rte_dispatcher_finalize_register(dispatcher, finalize_batch,
+                                     shared_event_buffer);
+
+The dispatcher does not track any relationship between a handler and a
+finalize callback, and all finalize callbacks will be called if (and
+only if) at least one event was dequeued from the event device.
+
+Finalize callback registration and unregistration cannot safely be
+done while the dispatcher's service function is running on any lcore.
+
+Service
+-------
+
+The dispatcher is a DPDK service, and is managed in a manner similar
+to other DPDK services (e.g., an Event Timer Adapter).
+
+Below is an example of how to configure a particular lcore to serve as
+a service lcore, and to map an already-created dispatcher to that
+lcore.
+
+.. code-block:: c
+
+    static void
+    launch_dispatcher_core(struct rte_dispatcher *dispatcher,
+                           unsigned lcore_id)
+    {
+            uint32_t service_id;
+
+            rte_service_lcore_add(lcore_id);
+
+            service_id = rte_dispatcher_service_id_get(dispatcher);
+
+            rte_service_map_lcore_set(service_id, lcore_id, 1);
+
+            rte_service_lcore_start(lcore_id);
+
+            rte_service_runstate_set(service_id, 1);
+    }
+
+As the final step, the dispatcher must be started.
+
+.. code-block:: c
+
+    rte_dispatcher_start(dispatcher);
+
+
+Multi Service Dispatcher Lcores
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In an Eventdev application, most (or all) compute-intensive and
+performance-sensitive processing is done in an event-driven manner,
+where the CPU cycles spent on application domain logic are the direct
+result of items of work (i.e., ``rte_event`` events) dequeued from an
+event device.
+
+In light of this, it makes sense to have the dispatcher service be
+the only DPDK service on all lcores used for packet processing — at
+least in principle.
+
+However, there is nothing in DPDK that prevents colocating other
+services with the dispatcher service on the same lcore.
+
+Tasks that, prior to the introduction of the dispatcher into the
+application, were performed on the lcore even though no events were
+received are prime targets for being converted into such auxiliary
+services, running on the dispatcher core set.
+
+An example of such a task would be the management of a per-lcore timer
+wheel (i.e., calling ``rte_timer_manage()``).
+
+Applications employing :doc:`Read-Copy-Update (RCU) ` (or a
+similar technique) may opt to have quiescent state signaling (e.g.,
+calling ``rte_rcu_qsbr_quiescent()``) factored out into a separate
+service, to assure resource reclamation occurs even though some
+lcores currently do not process any events.
+
+If more services than the dispatcher service are mapped to a service
+lcore, it's important that the other services are well-behaved and
+don't interfere with event processing to the extent that the system's
+throughput and/or latency requirements are at risk of not being met.
+
+In particular, to avoid jitter, they should have a small upper bound
+for the maximum amount of time spent in a single service function
+call.
+
+An example of a scenario with a more CPU-heavy colocated service is a
+low-lcore count deployment, where the event device lacks the
+``RTE_EVENT_ETH_RX_ADAPTER_CAP_INTERNAL_PORT`` capability (and thus
+requires software to feed incoming packets into the event device). In
+this case, the best performance may be achieved if the Event Ethernet
+RX and/or TX Adapters are mapped to lcores also used for event
+dispatching, since otherwise the adapter lcores would have a lot of
+idle CPU cycles.
+
+.. rubric:: Footnotes
+
+.. [#Mapping]
+   Event routing may reasonably be done based on other ``rte_event``
+   fields (or even event user data). Indeed, that's the very reason to
+   have match callback functions, instead of a simple queue
+   id-to-handler mapping scheme. Queue id-based routing serves well in
+   a simple example.
+
+.. [#Port-MT-Safety]
+   This property (which is a feature, not a bug) is inherited from the
+   core Eventdev APIs.
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst index 52a6d9e7aa..ab05bd6074 100644 --- a/doc/guides/prog_guide/index.rst +++ b/doc/guides/prog_guide/index.rst @@ -60,6 +60,7 @@ Programmer's Guide event_ethernet_tx_adapter event_timer_adapter event_crypto_adapter + dispatcher_lib qos_framework power_man packet_classif_access_ctrl