From patchwork Sun Jan 7 15:34:40 2024
X-Patchwork-Submitter: Srikanth Yalavarthi
X-Patchwork-Id: 135779
X-Patchwork-Delegate: jerinj@marvell.com
From: Srikanth Yalavarthi
To: Thomas Monjalon , Bruce Richardson , Srikanth Yalavarthi , Jerin Jacob
Subject: [PATCH 01/11] eventdev: introduce ML event adapter library
Date: Sun, 7 Jan 2024 07:34:40 -0800
Message-ID: <20240107153454.3909-2-syalavarthi@marvell.com>
X-Mailer: git-send-email 2.42.0
In-Reply-To: <20240107153454.3909-1-syalavarthi@marvell.com>
References: <20240107153454.3909-1-syalavarthi@marvell.com>
List-Id: DPDK patches and discussions

Introduce event ML adapter APIs. This patch provides information on adapter modes and usage. Application can use this event adapter interface to transfer packets between ML device and event device.
Signed-off-by: Srikanth Yalavarthi --- MAINTAINERS | 6 + config/rte_config.h | 1 + doc/api/doxy-api-index.md | 1 + doc/guides/prog_guide/event_ml_adapter.rst | 268 ++++ doc/guides/prog_guide/eventdev.rst | 10 +- .../img/event_ml_adapter_op_forward.svg | 1086 +++++++++++++++++ .../img/event_ml_adapter_op_new.svg | 1079 ++++++++++++++++ doc/guides/prog_guide/index.rst | 1 + lib/eventdev/meson.build | 4 +- lib/eventdev/rte_event_ml_adapter.c | 6 + lib/eventdev/rte_event_ml_adapter.h | 594 +++++++++ lib/eventdev/rte_eventdev.h | 45 + lib/meson.build | 2 +- lib/mldev/rte_mldev.h | 6 + 14 files changed, 3102 insertions(+), 7 deletions(-) create mode 100644 doc/guides/prog_guide/event_ml_adapter.rst create mode 100644 doc/guides/prog_guide/img/event_ml_adapter_op_forward.svg create mode 100644 doc/guides/prog_guide/img/event_ml_adapter_op_new.svg create mode 100644 lib/eventdev/rte_event_ml_adapter.c create mode 100644 lib/eventdev/rte_event_ml_adapter.h diff --git a/MAINTAINERS b/MAINTAINERS index 0d1c8126e3e..a1125e93621 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -554,6 +554,12 @@ F: drivers/raw/skeleton/ F: app/test/test_rawdev.c F: doc/guides/prog_guide/rawdev.rst +Eventdev ML Adapter API +M: Srikanth Yalavarthi +T: git://dpdk.org/next/dpdk-next-eventdev +F: lib/eventdev/*ml_adapter* +F: doc/guides/prog_guide/event_ml_adapter.rst + Memory Pool Drivers ------------------- diff --git a/config/rte_config.h b/config/rte_config.h index da265d7dd24..29c5aa558e6 100644 --- a/config/rte_config.h +++ b/config/rte_config.h @@ -80,6 +80,7 @@ #define RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE 32 #define RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE 32 #define RTE_EVENT_DMA_ADAPTER_MAX_INSTANCE 32 +#define RTE_EVENT_ML_ADAPTER_MAX_INSTANCE 32 /* rawdev defines */ #define RTE_RAWDEV_MAX_DEVS 64 diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md index a6a768bd7c6..d8c3d887ade 100644 --- a/doc/api/doxy-api-index.md +++ b/doc/api/doxy-api-index.md @@ -30,6 +30,7 @@ The public 
API headers are grouped by topics: [event_timer_adapter](@ref rte_event_timer_adapter.h), [event_crypto_adapter](@ref rte_event_crypto_adapter.h), [event_dma_adapter](@ref rte_event_dma_adapter.h), + [event_ml_adapter](@ref rte_event_ml_adapter.h), [rawdev](@ref rte_rawdev.h), [metrics](@ref rte_metrics.h), [bitrate](@ref rte_bitrate.h), diff --git a/doc/guides/prog_guide/event_ml_adapter.rst b/doc/guides/prog_guide/event_ml_adapter.rst new file mode 100644 index 00000000000..71f6c4b5974 --- /dev/null +++ b/doc/guides/prog_guide/event_ml_adapter.rst @@ -0,0 +1,268 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright (c) 2024 Marvell. + +Event ML Adapter Library +======================== + +DPDK :doc:`Eventdev library ` provides an event driven programming model with features +to schedule events. :doc:`ML Device library ` provides an interface to ML poll mode +drivers that support Machine Learning inference operations. The Event ML Adapter is intended to +bridge between the event device and the ML device. + +Packet flow from the ML device to the event device can be accomplished using software and hardware +based transfer mechanisms. The adapter queries an eventdev PMD to determine which mechanism is to +be used. The adapter uses an EAL service core function for software based packet transfer and +uses the eventdev PMD functions to configure hardware based packet transfer between the ML device +and the event device. The ML adapter uses a new event type called ``RTE_EVENT_TYPE_MLDEV`` to +indicate the source of the event. + +The application can choose to submit an ML operation directly to an ML device or send it to an ML +adapter via eventdev, based on the RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_FWD capability. The +first mode is known as the event new (RTE_EVENT_ML_ADAPTER_OP_NEW) mode and the second as the +event forward (RTE_EVENT_ML_ADAPTER_OP_FORWARD) mode. The choice of mode can be specified while +creating the adapter.
In the former mode, it is the application's responsibility to enable +ingress packet ordering. In the latter mode, it is the adapter's responsibility to enable +ingress packet ordering. + + +Adapter Modes +------------- + +RTE_EVENT_ML_ADAPTER_OP_NEW mode +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +In the RTE_EVENT_ML_ADAPTER_OP_NEW mode, the application submits ML operations directly to an ML +device. The adapter then dequeues ML completions from the ML device and enqueues them as events +to the event device. This mode does not ensure ingress ordering, as the application directly +enqueues to the mldev without going through an ML/atomic stage. In this mode, events dequeued +from the adapter are treated as new events. The application has to specify the event information +(response information) needed to enqueue an event after the ML operation is completed. + +.. _figure_event_ml_adapter_op_new: + +.. figure:: img/event_ml_adapter_op_new.* + + Working model of ``RTE_EVENT_ML_ADAPTER_OP_NEW`` mode + + +RTE_EVENT_ML_ADAPTER_OP_FORWARD mode +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +In the ``RTE_EVENT_ML_ADAPTER_OP_FORWARD`` mode, if the event PMD and ML PMD support an internal +event port (``RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), the application should use the +``rte_event_ml_adapter_enqueue()`` API to enqueue ML operations as events to the ML adapter. If +not, the application retrieves the ML adapter's event port using the ``rte_event_ml_adapter_event_port_get()`` +API, links its event queue to this port and starts enqueuing ML operations as events to the eventdev +using ``rte_event_enqueue_burst()``. The adapter then dequeues the events and submits the ML +operations to the mldev. After the ML operation is complete, the adapter enqueues events to the +event device. + +Applications can use this mode when ingress packet ordering is needed. In this mode, events +dequeued from the adapter will be treated as forwarded events.
The application has to specify + the mldev ID and queue pair ID (request information) needed to enqueue an ML operation in + addition to the event information (response information) needed to enqueue the event after + the ML operation has completed. + +.. _figure_event_ml_adapter_op_forward: + +.. figure:: img/event_ml_adapter_op_forward.* + + Working model of ``RTE_EVENT_ML_ADAPTER_OP_FORWARD`` mode + + +API Overview +------------ + +This section gives a brief introduction to the event ML adapter APIs. The application is expected +to create an adapter, which is associated with a single eventdev, then add an mldev and queue pair +to the adapter instance. + + +Create an adapter instance +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +An adapter instance is created using ``rte_event_ml_adapter_create()``. This function is called +with the event device to be associated with the adapter and the port configuration used by the adapter to +set up an event port (if the adapter needs to use a service function). + +The adapter can be started in ``RTE_EVENT_ML_ADAPTER_OP_NEW`` or ``RTE_EVENT_ML_ADAPTER_OP_FORWARD`` +mode. + +.. code-block:: c

    enum rte_event_ml_adapter_mode mode;
    struct rte_event_dev_info dev_info;
    struct rte_event_port_conf conf;
    uint8_t evdev_id;
    uint8_t mla_id;
    int ret;

    ret = rte_event_dev_info_get(evdev_id, &dev_info);

    conf.new_event_threshold = dev_info.max_num_events;
    conf.dequeue_depth = dev_info.max_event_port_dequeue_depth;
    conf.enqueue_depth = dev_info.max_event_port_enqueue_depth;
    mode = RTE_EVENT_ML_ADAPTER_OP_FORWARD;
    ret = rte_event_ml_adapter_create(mla_id, evdev_id, &conf, mode);


The ``rte_event_ml_adapter_create_ext()`` function can be used by the application to have finer +control over eventdev port allocation and setup. The ``rte_event_ml_adapter_create_ext()`` +function is passed a callback function. The callback function is invoked if the adapter +creates a service function and uses an event port for it. The callback is expected to fill the +``struct rte_event_ml_adapter_conf`` structure passed to it. + +In the ``RTE_EVENT_ML_ADAPTER_OP_FORWARD`` mode, if the event PMD and ML PMD support an internal +event port (``RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), events with ML operations should +be enqueued to the ML adapter using the ``rte_event_ml_adapter_enqueue()`` API. If not, the event port +created by the adapter can be retrieved using the ``rte_event_ml_adapter_event_port_get()`` API. An +application can use this event port to link with an event queue, on which it enqueues events +towards the ML adapter using ``rte_event_enqueue_burst()``. + +.. code-block:: c

    uint8_t mla_id, evdev_id, cdev_id, ml_ev_port_id, app_ev_port_id, app_qid;
    struct rte_ml_op *op;
    struct rte_event ev;
    uint16_t nb_events = 1;
    uint32_t cap;
    int ret;

    // Fill in event info and update event_ptr with rte_ml_op
    memset(&ev, 0, sizeof(ev));
    .
    .
    ev.event_ptr = op;

    ret = rte_event_ml_adapter_caps_get(evdev_id, cdev_id, &cap);
    if (cap & RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) {
        ret = rte_event_ml_adapter_enqueue(evdev_id, app_ev_port_id, &ev, nb_events);
    } else {
        ret = rte_event_ml_adapter_event_port_get(mla_id, &ml_ev_port_id);
        ret = rte_event_queue_setup(evdev_id, app_qid, NULL);
        ret = rte_event_port_link(evdev_id, ml_ev_port_id, &app_qid, NULL, 1);
        ev.queue_id = app_qid;
        ret = rte_event_enqueue_burst(evdev_id, app_ev_port_id, &ev, nb_events);
    }


Event device configuration for service based adapter +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +When ``rte_event_ml_adapter_create()`` is used for creating an adapter instance, +``rte_event_dev_config::nb_event_ports`` is automatically incremented, and the event device is +reconfigured with an additional event port during service initialization. This event device +reconfigure logic also increments the ``rte_event_dev_config::nb_single_link_event_port_queues`` +parameter if the adapter event port config is of type ``RTE_EVENT_PORT_CFG_SINGLE_LINK``.
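The port-accounting behaviour described above can be modelled with a small self-contained sketch. Note that the types and names below (`evdev_config`, `PORT_CFG_SINGLE_LINK`, `adapter_service_reconfigure`) are hypothetical stand-ins for illustration, not the actual DPDK structures or the adapter's real implementation:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical stand-in for the relevant rte_event_dev_config fields. */
struct evdev_config {
	uint8_t nb_event_ports;
	uint8_t nb_single_link_event_port_queues;
};

/* Stand-in for RTE_EVENT_PORT_CFG_SINGLE_LINK. */
#define PORT_CFG_SINGLE_LINK 0x1

/* Model of the service-init reconfigure step described above: one extra
 * event port is added for the adapter, and the single-link count grows if
 * the adapter's event port is configured as a single-link port. */
static void
adapter_service_reconfigure(struct evdev_config *cfg, uint32_t adapter_port_flags)
{
	cfg->nb_event_ports += 1;
	if (adapter_port_flags & PORT_CFG_SINGLE_LINK)
		cfg->nb_single_link_event_port_queues += 1;
}
```

This is why, in the next paragraph, applications need not pre-provision the adapter's port in their own eventdev configuration.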
+ +Applications using this mode of adapter creation need not configure the event device with the +``rte_event_dev_config::nb_event_ports`` and +``rte_event_dev_config::nb_single_link_event_port_queues`` parameters required for the ML adapter; +these are accounted for internally when the adapter is created using the above-mentioned API. + + +Querying adapter capabilities +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The ``rte_event_ml_adapter_caps_get()`` function allows the application to query the adapter +capabilities for an eventdev and mldev combination. This API indicates whether the mldev and +eventdev are connected using an internal HW port or not. + +.. code-block:: c

    rte_event_ml_adapter_caps_get(dev_id, cdev_id, &cap);


Adding queue pair to the adapter instance +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The mldev ID and queue pair are created using the mldev APIs. For more information +see :doc:`here `. + +.. code-block:: c

    struct rte_mldev_qp_conf qp_conf;
    struct rte_mldev_config conf;
    uint8_t cdev_id = 0;
    uint16_t qp_id = 0;

    rte_mldev_configure(cdev_id, &conf);
    rte_mldev_queue_pair_setup(cdev_id, qp_id, &qp_conf);

The mldev ID and queue pair are added to the adapter instance using the +``rte_event_ml_adapter_queue_pair_add()`` API and removed using the +``rte_event_ml_adapter_queue_pair_del()`` API. If hardware supports the +``RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND`` capability, event information must be passed to +the add API. + +.. code-block:: c

    uint32_t cap;
    int ret;

    ret = rte_event_ml_adapter_caps_get(evdev_id, cdev_id, &cap);
    if (cap & RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND) {
        struct rte_event_ml_adapter_queue_conf conf;

        rte_event_ml_adapter_queue_pair_add(mla_id, cdev_id, qp_id, &conf);
    } else
        rte_event_ml_adapter_queue_pair_add(mla_id, cdev_id, qp_id, NULL);


Configuring service function +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +If the adapter uses a service function, the application is required to assign a service core to +the service function as shown below. + +.. code-block:: c

    uint32_t service_id;

    if (rte_event_ml_adapter_service_id_get(mla_id, &service_id) == 0)
        rte_service_map_lcore_set(service_id, CORE_ID);


Set event request / response information +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +In the RTE_EVENT_ML_ADAPTER_OP_FORWARD mode, the application specifies the mldev ID and +queue pair ID (request information) in addition to the event information (response information) +needed to enqueue an event after the ML operation has completed. The request and response +information are specified in the ``struct rte_ml_op`` private data or session's private data. + +In the RTE_EVENT_ML_ADAPTER_OP_NEW mode, the application is required to provide only the response +information. + + +Start the adapter instance +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The application calls ``rte_event_ml_adapter_start()`` to start the adapter. This function calls +the start callbacks of the eventdev PMDs for hardware based eventdev-mldev connections and +``rte_service_run_state_set()`` to enable the service function if one exists. + +.. code-block:: c

    rte_event_ml_adapter_start(mla_id);

.. Note:: + + The eventdev to which the event_ml_adapter is connected should be started before calling + rte_event_ml_adapter_start().
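The request / response layout described in the section above (request information sharing its first 8 bytes with the response event) can be illustrated with a self-contained sketch. The structs below (`ev_response`, `ml_request`, `ml_metadata`) are hypothetical stand-ins mirroring the shape of ``struct rte_event_ml_request`` / ``union rte_event_ml_metadata``, not the actual DPDK definitions:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stand-in for the 8-byte response event word of rte_event. */
struct ev_response {
	uint64_t event; /* flow id, queue id, sched type, etc. packed into 64 bits */
};

/* Hypothetical stand-in for rte_event_ml_request: the first 8 bytes are
 * reserved so that they can overlap the response information. */
struct ml_request {
	uint8_t resv[8];        /* overlaps ev_response inside the union */
	int16_t mldev_id;       /* request info: ML device to use */
	uint16_t queue_pair_id; /* request info: queue pair to use */
	uint32_t rsvd;
};

/* OP_FORWARD mode fills request_info (response lives in the overlapping
 * bytes); OP_NEW mode fills only response_info. */
union ml_metadata {
	struct ml_request request_info;
	struct ev_response response_info;
};
```

Because the request fields start only after the 8 reserved bytes, writing the response word never clobbers the mldev ID or queue pair ID.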
+ + +Get adapter statistics +~~~~~~~~~~~~~~~~~~~~~~ + +The ``rte_event_ml_adapter_stats_get()`` function reports counters defined in struct +``rte_event_ml_adapter_stats``. The received packet and enqueued event counts are a sum of the +counts from the eventdev PMD callbacks if the callback is supported, and the counts maintained by +the service function, if one exists. + +Set/Get adapter runtime configuration parameters +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The runtime configuration parameters of adapter can be set/get using +``rte_event_ml_adapter_runtime_params_set()`` and +``rte_event_ml_adapter_runtime_params_get()`` respectively. +The parameters that can be set/get are defined in +``struct rte_event_ml_adapter_runtime_params``. diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst index 9d398d07f7f..bfe23be9888 100644 --- a/doc/guides/prog_guide/eventdev.rst +++ b/doc/guides/prog_guide/eventdev.rst @@ -373,8 +373,8 @@ eventdev. .. Note:: EventDev needs to be started before starting the event producers such - as event_eth_rx_adapter, event_timer_adapter, event_crypto_adapter and - event_dma_adapter. + as event_eth_rx_adapter, event_timer_adapter, event_crypto_adapter, + event_dma_adapter and event_ml_adapter. Ingress of New Events ~~~~~~~~~~~~~~~~~~~~~ @@ -486,9 +486,9 @@ using ``rte_event_dev_stop_flush_callback_register()`` function. .. Note:: The event producers such as ``event_eth_rx_adapter``, - ``event_timer_adapter``, ``event_crypto_adapter`` and - ``event_dma_adapter`` need to be stopped before stopping - the event device. + ``event_timer_adapter``, ``event_crypto_adapter``, + ``event_dma_adapter`` and ``event_ml_adapter`` need + to be stopped before stopping the event device. 
Summary ------- diff --git a/doc/guides/prog_guide/img/event_ml_adapter_op_forward.svg b/doc/guides/prog_guide/img/event_ml_adapter_op_forward.svg new file mode 100644 index 00000000000..06fe1d26a30 --- /dev/null +++ b/doc/guides/prog_guide/img/event_ml_adapter_op_forward.svg @@ -0,0 +1,1086 @@
[SVG markup not reproduced. The figure shows Eventdev, ML Adapter, Application in ordered stage and ML Device, annotated: 1. Events from the previous stage. 2. Application in ordered stage dequeues events from eventdev. 3. Application enqueues ML operations as events to eventdev. 4. ML adapter dequeues event from eventdev. 5. ML adapter submits ML operations to ML Device (Atomic stage). 6. ML adapter dequeues ML completions from ML Device. 7. ML adapter enqueues events to the eventdev. 8. Events to the next stage.]
diff --git a/doc/guides/prog_guide/img/event_ml_adapter_op_new.svg b/doc/guides/prog_guide/img/event_ml_adapter_op_new.svg new file mode 100644 index 00000000000..3b3a3d20ed4 --- /dev/null +++ b/doc/guides/prog_guide/img/event_ml_adapter_op_new.svg @@ -0,0 +1,1079 @@
[SVG markup not reproduced. The figure shows Application, Eventdev, ML Adapter, ML Device and an atomic stage that enqueues to the ML Device, annotated: 1. Application dequeues events from the previous stage. 2. Application prepares the ML operations. 3. ML operations are submitted to mldev by application. 4. ML adapter dequeues ML completions from ML device. 5. ML adapter enqueues events to the eventdev. 6. Application dequeues from eventdev and prepares for further processing.]
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst index 94964357ffb..99147c0b4b0 100644 --- a/doc/guides/prog_guide/index.rst +++ b/doc/guides/prog_guide/index.rst @@ -62,6 +62,7 @@ Programmer's Guide event_timer_adapter event_crypto_adapter event_dma_adapter + event_ml_adapter dispatcher_lib qos_framework power_man
diff --git a/lib/eventdev/meson.build b/lib/eventdev/meson.build index a04bb86f0f2..e2dced9050d 100644 --- a/lib/eventdev/meson.build +++ b/lib/eventdev/meson.build @@ -16,6 +16,7 @@ sources = files( 'rte_event_eth_tx_adapter.c', 'rte_event_ring.c', 'rte_event_timer_adapter.c', + 'rte_event_ml_adapter.c', 'rte_eventdev.c', ) headers = files( @@ -25,6 +26,7 @@ headers = files( 'rte_event_eth_tx_adapter.h', 'rte_event_ring.h', 'rte_event_timer_adapter.h', + 'rte_event_ml_adapter.h', 'rte_eventdev.h', 'rte_eventdev_trace_fp.h', ) @@ -38,5 +40,5 @@ driver_sdk_headers += files( 'event_timer_adapter_pmd.h', ) -deps += ['ring', 'ethdev', 'hash', 'mempool', 'mbuf', 'timer', 'cryptodev', 'dmadev'] +deps += ['ring', 'ethdev', 'hash', 'mempool', 'mbuf', 'timer', 'cryptodev', 'dmadev', 'mldev'] deps += ['telemetry']
diff --git a/lib/eventdev/rte_event_ml_adapter.c b/lib/eventdev/rte_event_ml_adapter.c new file mode 100644 index 00000000000..5b8b02a0130 --- /dev/null +++ b/lib/eventdev/rte_event_ml_adapter.c @@ -0,0 +1,6 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2024 Marvell. + */ + +#include "rte_event_ml_adapter.h" +#include "rte_eventdev.h"
diff --git a/lib/eventdev/rte_event_ml_adapter.h b/lib/eventdev/rte_event_ml_adapter.h new file mode 100644 index 00000000000..9e481026f26 --- /dev/null +++ b/lib/eventdev/rte_event_ml_adapter.h @@ -0,0 +1,594 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2024 Marvell.
+ */ + +#ifndef RTE_EVENT_ML_ADAPTER +#define RTE_EVENT_ML_ADAPTER + +/** + * @file rte_event_ml_adapter.h + * + * @warning + * @b EXPERIMENTAL: + * All functions in this file may be changed or removed without prior notice. + * + * ML (Machine Learning) Event Adapter API. + * + * Eventdev library provides adapters to bridge between various components for providing a new + * event source. The event ML adapter is one of those adapters, which is intended to bridge + * between event devices and ML devices. + * + * The ML adapter adds support to enqueue / dequeue ML operations to / from the event device. The packet + * flow between the ML device and the event device can be accomplished using both SW and HW based + * transfer mechanisms. The adapter uses an EAL service core function for SW based packet transfer + * and uses the eventdev PMD functions to configure HW based packet transfer between the ML device + * and the event device. + * + * The application can choose to submit an ML operation directly to an ML device or send it to the ML + * adapter via eventdev based on the RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_FWD capability. The first + * mode is known as the event new (RTE_EVENT_ML_ADAPTER_OP_NEW) mode and the second as the event + * forward (RTE_EVENT_ML_ADAPTER_OP_FORWARD) mode. The choice of mode can be specified while + * creating the adapter. In the former mode, it is the application's responsibility to enable ingress + * packet ordering. In the latter mode, it is the adapter's responsibility to enable ingress + * packet ordering.
+ * + * + * Working model of RTE_EVENT_ML_ADAPTER_OP_NEW mode: + * + * +--------------+ +--------------+ + * | | | ML stage | + * | Application |---[2]-->| + enqueue to | + * | | | mldev | + * +--------------+ +--------------+ + * ^ ^ | + * | | [3] + * [6] [1] | + * | | | + * +--------------+ | + * | | | + * | Event device | | + * | | | + * +--------------+ | + * ^ | + * | | + * [5] | + * | v + * +--------------+ +--------------+ + * | | | | + * | ML adapter |<--[4]---| mldev | + * | | | | + * +--------------+ +--------------+ + * + * + * [1] Application dequeues events from the previous stage. + * [2] Application prepares the ML operations. + * [3] ML operations are submitted to mldev by application. + * [4] ML adapter dequeues ML completions from mldev. + * [5] ML adapter enqueues events to the eventdev. + * [6] Application dequeues from eventdev for further processing. + * + * In the RTE_EVENT_ML_ADAPTER_OP_NEW mode, application submits ML operations directly to ML device. + * The ML adapter then dequeues ML completions from ML device and enqueues events to the event + * device. This mode does not ensure ingress ordering, if the application directly enqueues to mldev + * without going through ML / atomic stage i.e. removing item [1] and [2]. + * + * Events dequeued from the adapter will be treated as new events. In this mode, application needs + * to specify event information (response information) which is needed to enqueue an event after the + * ML operation is completed. + * + * + * Working model of RTE_EVENT_ML_ADAPTER_OP_FORWARD mode: + * + * +--------------+ +--------------+ + * --[1]-->| |---[2]-->| Application | + * | Event device | | in | + * <--[8]--| |<--[3]---| Ordered stage| + * +--------------+ +--------------+ + * ^ | + * | [4] + * [7] | + * | v + * +----------------+ +--------------+ + * | |--[5]->| | + * | ML adapter | | mldev | + * | |<-[6]--| | + * +----------------+ +--------------+ + * + * + * [1] Events from the previous stage. 
+ * [2] Application in ordered stage dequeues events from eventdev. + * [3] Application enqueues ML operations as events to eventdev. + * [4] ML adapter dequeues events from eventdev. + * [5] ML adapter submits ML operations to mldev (Atomic stage). + * [6] ML adapter dequeues ML completions from mldev. + * [7] ML adapter enqueues events to the eventdev. + * [8] Events to the next stage. + * + * In the event forward (RTE_EVENT_ML_ADAPTER_OP_FORWARD) mode, if the HW supports the capability + * RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, the application can directly enqueue the ML operations + * as events to the adapter. If not, the application retrieves the event port of the ML adapter through the API, + * rte_event_ml_adapter_event_port_get(). It then links its event queue to this port and starts + * enqueuing ML operations as events to the eventdev. The adapter then dequeues the events and + * submits the ML operations to the mldev. After the ML operations complete, the adapter enqueues events to + * the event device. + * + * The application can use this mode when ingress packet ordering is needed. Events dequeued from the + * adapter will be treated as forwarded events. In this mode, the application needs to specify the + * mldev ID and queue pair ID (request information) needed to enqueue an ML operation in addition to + * the event information (response information) needed to enqueue an event after the ML operation + * has completed. + * + * The event ML adapter provides common APIs to configure the packet flow from the ML device to + * event devices for both SW and HW based transfers.
The ML event adapter's functions are: + * + * - rte_event_ml_adapter_create_ext() + * - rte_event_ml_adapter_create() + * - rte_event_ml_adapter_free() + * - rte_event_ml_adapter_queue_pair_add() + * - rte_event_ml_adapter_queue_pair_del() + * - rte_event_ml_adapter_start() + * - rte_event_ml_adapter_stop() + * - rte_event_ml_adapter_stats_get() + * - rte_event_ml_adapter_stats_reset() + * + * The application creates an instance using rte_event_ml_adapter_create() or + * rte_event_ml_adapter_create_ext(). + * + * mldev queue pair addition / deletion is done using the rte_event_ml_adapter_queue_pair_add() / + * rte_event_ml_adapter_queue_pair_del() APIs. If HW supports the capability + * RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND, event information must be passed to the add + * API. + * + */ + +#include + +#include "rte_eventdev.h" +#include + +#ifdef __cplusplus extern "C" { #endif + +/** + * ML event adapter mode + */ +enum rte_event_ml_adapter_mode { + RTE_EVENT_ML_ADAPTER_OP_NEW, + /**< Start the ML adapter in event new mode. + * @see RTE_EVENT_OP_NEW. + * + * The application submits ML operations to the mldev. The adapter only dequeues the ML completions + * from the mldev and enqueues events to the eventdev. + */ + + RTE_EVENT_ML_ADAPTER_OP_FORWARD, + /**< Start the ML adapter in event forward mode. + * @see RTE_EVENT_OP_FORWARD. + * + * The application submits ML requests as events to the ML adapter or ML device based on the + * RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_FWD capability. ML completions are enqueued + * back to the eventdev by the ML adapter. + */ +}; + +/** + * ML event request structure, filled by the application to provide event request information to + * the adapter. + */ +struct rte_event_ml_request { + uint8_t resv[8]; + /**< Overlaps with the first 8 bytes of struct rte_event that encode the response event + * information. The application is expected to fill in struct rte_event response_info.
+ */ + + int16_t mldev_id; + /**< ML device ID to be used */ + + uint16_t queue_pair_id; + /**< ML queue pair ID to be used */ + + uint32_t rsvd; + /**< Reserved bits */ +}; + +/** + * ML event metadata structure, filled by the application to provide ML request and event + * response information. + * + * If ML events are enqueued using a HW mechanism, the mldev PMD uses the event response information + * to set up the event that is enqueued back to the eventdev after completion of the ML operation. If + * the transfer is done by SW, the event response information will be used by the adapter. + */ +union rte_event_ml_metadata { + struct rte_event_ml_request request_info; + /**< Request information to be filled in by the application for RTE_EVENT_ML_ADAPTER_OP_FORWARD + * mode. The first 8 bytes of request_info are reserved for response_info. + */ + + struct rte_event response_info; + /**< Response information to be filled in by the application for RTE_EVENT_ML_ADAPTER_OP_NEW and + * RTE_EVENT_ML_ADAPTER_OP_FORWARD modes. + */ +}; + +/** + * Adapter configuration structure that the adapter configuration callback function is expected to + * fill out. + * + * @see rte_event_ml_adapter_conf_cb + */ +struct rte_event_ml_adapter_conf { + uint8_t event_port_id; + /**< Event port identifier; the adapter enqueues events to this port and dequeues ML + * request events in RTE_EVENT_ML_ADAPTER_OP_FORWARD mode. + */ + + uint32_t max_nb; + /**< The adapter can return early if it has processed at least max_nb ML ops. This isn't + * treated as a requirement; batching may cause the adapter to process more than max_nb ML + * ops. + */ +}; + +/** + * Adapter runtime configuration parameters + */ +struct rte_event_ml_adapter_runtime_params { + uint32_t max_nb; + /**< The adapter can return early if it has processed at least max_nb ML ops. This isn't + * treated as a requirement; batching may cause the adapter to process more than max_nb ML + * ops.
+ * + * rte_event_ml_adapter_create() configures the adapter with the default value of max_nb. + * rte_event_ml_adapter_create_ext() configures the adapter with a user-provided value of + * max_nb through the rte_event_ml_adapter_conf::max_nb parameter. + * rte_event_ml_adapter_runtime_params_set() allows max_nb to be reconfigured at runtime + * (after adding at least one queue pair). + * + * This is valid for the devices without the RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_FWD or + * RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_NEW capability. + */ + + uint32_t rsvd[15]; + /**< Reserved fields for future expansion */ +}; + +/** + * Function type used for the adapter configuration callback. The callback is used to fill in members of + * struct rte_event_ml_adapter_conf; it is invoked when creating a SW service for + * packet transfer from an mldev queue pair to the event device. The SW service is created within the + * function rte_event_ml_adapter_queue_pair_add() if SW based packet transfers from an mldev queue + * pair to the event device are required. + * + * @param id + * Adapter identifier. + * @param evdev_id + * Event device identifier. + * @param conf + * Structure that needs to be populated by this callback. + * @param arg + * Argument to the callback. This is the same as the conf_arg passed to + * rte_event_ml_adapter_create_ext(). + */ +typedef int (*rte_event_ml_adapter_conf_cb)(uint8_t id, uint8_t evdev_id, + struct rte_event_ml_adapter_conf *conf, void *arg); + +/** + * A structure used to retrieve statistics for an event ML adapter instance.
+ */
+struct rte_event_ml_adapter_stats {
+	uint64_t event_poll_count;
+	/**< Event port poll count */
+
+	uint64_t event_deq_count;
+	/**< Event dequeue count */
+
+	uint64_t ml_enq_count;
+	/**< mldev enqueue count */
+
+	uint64_t ml_enq_fail_count;
+	/**< mldev enqueue failed count */
+
+	uint64_t ml_deq_count;
+	/**< mldev dequeue count */
+
+	uint64_t event_enq_count;
+	/**< Event enqueue count */
+
+	uint64_t event_enq_retry_count;
+	/**< Event enqueue retry count */
+
+	uint64_t event_enq_fail_count;
+	/**< Event enqueue fail count */
+};
+
+/**
+ * Create a new event ML adapter with the specified identifier.
+ *
+ * @param id
+ *   Adapter identifier.
+ * @param evdev_id
+ *   Event device identifier.
+ * @param conf_cb
+ *   Callback function that fills in members of the struct rte_event_ml_adapter_conf passed
+ *   into it.
+ * @param mode
+ *   Flag to indicate the mode of the adapter.
+ *   @see rte_event_ml_adapter_mode
+ * @param conf_arg
+ *   Argument that is passed to the conf_cb function.
+ *
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure
+ */
+int
+rte_event_ml_adapter_create_ext(uint8_t id, uint8_t evdev_id, rte_event_ml_adapter_conf_cb conf_cb,
+				enum rte_event_ml_adapter_mode mode, void *conf_arg);
+
+/**
+ * Create a new event ML adapter with the specified identifier. This function uses an internal
+ * configuration function that creates an event port. This default function reconfigures the event
+ * device with an additional event port and sets up the event port using the port_config parameter
+ * passed into this function. If the application needs more control over the configuration of the
+ * service, it should use the rte_event_ml_adapter_create_ext() version.
+ *
+ * @param id
+ *   Adapter identifier.
+ * @param evdev_id
+ *   Event device identifier.
+ * @param port_config
+ *   Argument of type *rte_event_port_conf* that is passed to the conf_cb function.
+ * @param mode
+ *   Flag to indicate the mode of the adapter.
+ *   @see rte_event_ml_adapter_mode
+ *
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure
+ */
+int
+rte_event_ml_adapter_create(uint8_t id, uint8_t evdev_id, struct rte_event_port_conf *port_config,
+			    enum rte_event_ml_adapter_mode mode);
+
+/**
+ * Free an event ML adapter.
+ *
+ * @param id
+ *   Adapter identifier.
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure. If the adapter still has queue pairs added to it, the function
+ *     returns -EBUSY.
+ */
+int
+rte_event_ml_adapter_free(uint8_t id);
+
+/**
+ * Add a queue pair to an event ML adapter.
+ *
+ * @param id
+ *   Adapter identifier.
+ * @param mldev_id
+ *   mldev identifier.
+ * @param queue_pair_id
+ *   ML device queue pair identifier. If queue_pair_id is set to -1, the adapter adds all the
+ *   pre-configured queue pairs to the instance.
+ * @param event
+ *   If HW supports mldev queue pair to event queue binding, the application is expected to fill
+ *   in event information; otherwise it should be NULL.
+ *   @see RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND
+ *
+ * @return
+ *   - 0: Success, queue pair added correctly.
+ *   - <0: Error code on failure.
+ */
+int
+rte_event_ml_adapter_queue_pair_add(uint8_t id, int16_t mldev_id, int32_t queue_pair_id,
+				    const struct rte_event *event);
+
+/**
+ * Delete a queue pair from an event ML adapter.
+ *
+ * @param id
+ *   Adapter identifier.
+ * @param mldev_id
+ *   ML device identifier.
+ * @param queue_pair_id
+ *   ML device queue pair identifier.
+ *
+ * @return
+ *   - 0: Success, queue pair deleted successfully.
+ *   - <0: Error code on failure.
+ */
+int
+rte_event_ml_adapter_queue_pair_del(uint8_t id, int16_t mldev_id, int32_t queue_pair_id);
+
+/**
+ * Start event ML adapter.
+ *
+ * @param id
+ *   Adapter identifier.
+ *
+ * @return
+ *   - 0: Success, adapter started successfully.
+ *   - <0: Error code on failure.
+ *
+ * @note The eventdev and mldev to which the event_ml_adapter is connected should be started
+ *   before calling rte_event_ml_adapter_start().
+ */ +int +rte_event_ml_adapter_start(uint8_t id); + +/** + * Stop event ML adapter + * + * @param id + * Adapter identifier. + * + * @return + * - 0: Success, adapter stopped successfully. + * - <0: Error code on failure. + */ +int +rte_event_ml_adapter_stop(uint8_t id); + +/** + * Retrieve statistics for an adapter + * + * @param id + * Adapter identifier. + * @param [out] stats + * A pointer to structure used to retrieve statistics for an adapter. + * + * @return + * - 0: Success, retrieved successfully. + * - <0: Error code on failure. + */ +int +rte_event_ml_adapter_stats_get(uint8_t id, struct rte_event_ml_adapter_stats *stats); + +/** + * Reset statistics for an adapter. + * + * @param id + * Adapter identifier. + * + * @return + * - 0: Success, statistics reset successfully. + * - <0: Error code on failure. + */ +int +rte_event_ml_adapter_stats_reset(uint8_t id); + +/** + * Retrieve the service ID of an adapter. If the adapter doesn't use a rte_service function, this + * function returns -ESRCH. + * + * @param id + * Adapter identifier. + * @param [out] service_id + * A pointer to a uint32_t, to be filled in with the service id. + * + * @return + * - 0: Success + * - <0: Error code on failure, if the adapter doesn't use a rte_service function, this function + * returns -ESRCH. + */ +int +rte_event_ml_adapter_service_id_get(uint8_t id, uint32_t *service_id); + +/** + * Retrieve the event port of an adapter. + * + * @param id + * Adapter identifier. + * + * @param [out] event_port_id + * Application links its event queue to this adapter port which is used in + * RTE_EVENT_ML_ADAPTER_OP_FORWARD mode. + * + * @return + * - 0: Success + * - <0: Error code on failure. 
+ */
+int
+rte_event_ml_adapter_event_port_get(uint8_t id, uint8_t *event_port_id);
+
+/**
+ * Initialize the adapter runtime configuration parameters.
+ *
+ * @param params
+ *   A pointer to a structure of type struct rte_event_ml_adapter_runtime_params.
+ *
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure
+ */
+int
+rte_event_ml_adapter_runtime_params_init(struct rte_event_ml_adapter_runtime_params *params);
+
+/**
+ * Set the adapter runtime configuration parameters.
+ *
+ * @param id
+ *   Adapter identifier
+ *
+ * @param params
+ *   A pointer to a structure of type struct rte_event_ml_adapter_runtime_params with configuration
+ *   parameter values. The reserved fields of this structure must be initialized to zero and the
+ *   valid fields need to be set appropriately. The structure can be initialized to default values
+ *   using the rte_event_ml_adapter_runtime_params_init() API, or the application may zero the
+ *   structure itself and set the required fields.
+ *
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure
+ */
+int
+rte_event_ml_adapter_runtime_params_set(uint8_t id,
+					struct rte_event_ml_adapter_runtime_params *params);
+
+/**
+ * Get the adapter runtime configuration parameters.
+ *
+ * @param id
+ *   Adapter identifier
+ *
+ * @param[out] params
+ *   A pointer to a structure of type struct rte_event_ml_adapter_runtime_params containing valid
+ *   adapter parameters when the return value is 0.
+ *
+ * @return
+ *   - 0: Success
+ *   - <0: Error code on failure
+ */
+int
+rte_event_ml_adapter_runtime_params_get(uint8_t id,
+					struct rte_event_ml_adapter_runtime_params *params);
+
+/**
+ * Enqueue a burst of ML operations as event objects supplied in the *rte_event* structures on an
+ * event ML adapter designated by its event device *evdev_id* through the event port specified by
+ * *port_id*. This function is supported if the eventdev PMD has the
+ * #RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_FWD capability flag set.
+ *
+ * The *nb_events* parameter is the number of event objects to enqueue that are supplied in the
+ * *ev* array of *rte_event* structures.
+ *
+ * The rte_event_ml_adapter_enqueue() function returns the number of event objects it actually
+ * enqueued. A return value equal to *nb_events* means that all event objects have been enqueued.
+ *
+ * @param evdev_id
+ *   The identifier of the device.
+ * @param port_id
+ *   The identifier of the event port.
+ * @param ev
+ *   Points to an array of *nb_events* objects of type *rte_event* structure which contain the
+ *   event object enqueue operations to be processed.
+ * @param nb_events
+ *   The number of event objects to enqueue, typically the number of
+ *   rte_event_port_attr_get(...RTE_EVENT_PORT_ATTR_ENQ_DEPTH...) available for this port.
+ *
+ * @return
+ *   The number of event objects actually enqueued on the event device. The return value can be
+ *   less than the value of the *nb_events* parameter when the event device's queue is full or if
+ *   invalid parameters are specified in a *rte_event*. If the return value is less than
+ *   *nb_events*, the remaining events at the end of ev[] are not consumed and the caller has to
+ *   take care of them, and rte_errno is set accordingly. Possible errno values include:
+ *
+ *   - EINVAL: The port ID is invalid, the device ID is invalid, an event's queue ID is invalid,
+ *     or an event's sched type doesn't match the capabilities of the destination queue.
+ *   - ENOSPC: The event port was backpressured and unable to enqueue one or more events. This
+ *     error code is only applicable to closed systems.
+ */ +uint16_t +rte_event_ml_adapter_enqueue(uint8_t evdev_id, uint8_t port_id, struct rte_event ev[], + uint16_t nb_events); + +#ifdef __cplusplus +} +#endif + +#endif /* RTE_EVENT_ML_ADAPTER */ diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h index ec9b02455d2..c315c9b4788 100644 --- a/lib/eventdev/rte_eventdev.h +++ b/lib/eventdev/rte_eventdev.h @@ -1207,6 +1207,8 @@ struct rte_event_vector { /**< The event generated from event eth Rx adapter */ #define RTE_EVENT_TYPE_DMADEV 0x5 /**< The event generated from dma subsystem */ +#define RTE_EVENT_TYPE_MLDEV 0x6 +/**< The event generated from mldev subsystem */ #define RTE_EVENT_TYPE_VECTOR 0x8 /**< Indicates that event is a vector. * All vector event types should be a logical OR of EVENT_TYPE_VECTOR. @@ -1490,6 +1492,26 @@ rte_event_crypto_adapter_caps_get(uint8_t dev_id, uint8_t cdev_id, #define RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_VCHAN_EV_BIND 0x4 /**< Flag indicates HW is capable of mapping DMA vchan to event queue. */ +/* ML adapter capability bitmap flag */ +#define RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_NEW 0x1 +/**< Flag indicates HW is capable of generating events in + * RTE_EVENT_OP_NEW enqueue operation. MLDEV will send + * packets to the event device as new events using an + * internal event port. + */ + +#define RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_FWD 0x2 +/**< Flag indicates HW is capable of generating events in + * RTE_EVENT_OP_FORWARD enqueue operation. MLDEV will send + * packets to the event device as forwarded event using an + * internal event port. + */ + +#define RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND 0x4 +/**< Flag indicates HW is capable of mapping ML queue pair to + * event queue. 
+ */ + /** * Retrieve the event device's DMA adapter capabilities for the * specified dmadev device @@ -1514,6 +1536,29 @@ __rte_experimental int rte_event_dma_adapter_caps_get(uint8_t dev_id, uint8_t dmadev_id, uint32_t *caps); +/** + * Retrieve the event device's ML adapter capabilities for the + * specified mldev device + * + * @param dev_id + * The identifier of the device. + * + * @param mldev_id + * The identifier of the mldev device. + * + * @param[out] caps + * A pointer to memory filled with event adapter capabilities. + * It is expected to be pre-allocated & initialized by caller. + * + * @return + * - 0: Success, driver provides event adapter capabilities for the + * mldev device. + * - <0: Error code returned by the driver function. + * + */ +int +rte_event_ml_adapter_caps_get(uint8_t dev_id, int16_t mldev_id, uint32_t *caps); + /* Ethdev Tx adapter capability bitmap flags */ #define RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT 0x1 /**< This flag is sent when the PMD supports a packet transmit callback diff --git a/lib/meson.build b/lib/meson.build index 6c143ce5a60..791b77a4a1c 100644 --- a/lib/meson.build +++ b/lib/meson.build @@ -35,6 +35,7 @@ libraries = [ 'distributor', 'dmadev', # eventdev depends on this 'efd', + 'mldev', # eventdev depends on this 'eventdev', 'dispatcher', # dispatcher depends on eventdev 'gpudev', @@ -49,7 +50,6 @@ libraries = [ 'power', 'rawdev', 'regexdev', - 'mldev', 'rib', 'reorder', 'sched', diff --git a/lib/mldev/rte_mldev.h b/lib/mldev/rte_mldev.h index 27e372fbcf1..a031592de61 100644 --- a/lib/mldev/rte_mldev.h +++ b/lib/mldev/rte_mldev.h @@ -469,6 +469,12 @@ struct rte_ml_op { * dequeue and enqueue operation. * The application should not modify this field. */ + uint32_t private_data_offset; + /**< Offset to indicate start of private data (if any). + * The offset is counted from the start of the rte_ml_op. + * The offset provides an offset to locate the request / + * response information in the rte_ml_op. 
+	 */
 } __rte_cache_aligned;

 /* Enqueue/Dequeue operations */

From patchwork Sun Jan 7 15:34:41 2024
X-Patchwork-Submitter: Srikanth Yalavarthi
X-Patchwork-Id: 135780
X-Patchwork-Delegate: jerinj@marvell.com
From: Srikanth Yalavarthi
To: Jerin Jacob
Subject: [PATCH 02/11] event/ml: add ml adapter capabilities get
Date: Sun, 7 Jan 2024 07:34:41 -0800
Message-ID: <20240107153454.3909-3-syalavarthi@marvell.com>
In-Reply-To: <20240107153454.3909-1-syalavarthi@marvell.com>
References: <20240107153454.3909-1-syalavarthi@marvell.com>

Added library function to get ML adapter capabilities.
Signed-off-by: Srikanth Yalavarthi --- lib/eventdev/eventdev_pmd.h | 29 +++++++++++++++++++++++++++++ lib/eventdev/rte_eventdev.c | 27 +++++++++++++++++++++++++++ 2 files changed, 56 insertions(+) diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h index 1790587808a..94d505753dc 100644 --- a/lib/eventdev/eventdev_pmd.h +++ b/lib/eventdev/eventdev_pmd.h @@ -84,6 +84,8 @@ extern "C" { #define RTE_EVENT_TIMER_ADAPTER_SW_CAP \ RTE_EVENT_TIMER_ADAPTER_CAP_PERIODIC +#define RTE_EVENT_ML_ADAPTER_SW_CAP 0x0 + #define RTE_EVENTDEV_DETACHED (0) #define RTE_EVENTDEV_ATTACHED (1) @@ -1522,6 +1524,30 @@ typedef int (*eventdev_dma_adapter_stats_get)(const struct rte_eventdev *dev, typedef int (*eventdev_dma_adapter_stats_reset)(const struct rte_eventdev *dev, const int16_t dma_dev_id); +struct rte_ml_dev; + +/** + * Retrieve the event device's ML adapter capabilities for the + * specified MLDEV + * + * @param dev + * Event device pointer + * + * @param mldev + * ML device pointer + * + * @param[out] caps + * A pointer to memory filled with event adapter capabilities. + * It is expected to be pre-allocated & initialized by caller. + * + * @return + * - 0: Success, driver provides event adapter capabilities for the + * MLDEV. + * - <0: Error code returned by the driver function. 
+ * + */ +typedef int (*eventdev_ml_adapter_caps_get_t)(const struct rte_eventdev *dev, + const struct rte_ml_dev *mldev, uint32_t *caps); /** Event device operations function pointer table */ struct eventdev_ops { @@ -1662,6 +1688,9 @@ struct eventdev_ops { eventdev_dma_adapter_stats_reset dma_adapter_stats_reset; /**< Reset DMA stats */ + eventdev_ml_adapter_caps_get_t ml_adapter_caps_get; + /**< Get ML adapter capabilities */ + eventdev_selftest dev_selftest; /**< Start eventdev Selftest */ diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c index 157752868d5..7fbc6f3d98a 100644 --- a/lib/eventdev/rte_eventdev.c +++ b/lib/eventdev/rte_eventdev.c @@ -22,6 +22,7 @@ #include #include #include +#include #include #include "rte_eventdev.h" @@ -249,6 +250,32 @@ rte_event_dma_adapter_caps_get(uint8_t dev_id, uint8_t dma_dev_id, uint32_t *cap return 0; } +int +rte_event_ml_adapter_caps_get(uint8_t evdev_id, int16_t mldev_id, uint32_t *caps) +{ + struct rte_eventdev *dev; + struct rte_ml_dev *mldev; + + RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(evdev_id, -EINVAL); + if (!rte_ml_dev_is_valid_dev(mldev_id)) + return -EINVAL; + + dev = &rte_eventdevs[evdev_id]; + mldev = rte_ml_dev_pmd_get_dev(mldev_id); + + if (caps == NULL) + return -EINVAL; + + if (dev->dev_ops->ml_adapter_caps_get == NULL) + *caps = RTE_EVENT_ML_ADAPTER_SW_CAP; + else + *caps = 0; + + return dev->dev_ops->ml_adapter_caps_get ? 
+		       (*dev->dev_ops->ml_adapter_caps_get)(dev, mldev, caps) :
+		       0;
+}
+
 static inline int
 event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues)
 {

From patchwork Sun Jan 7 15:34:42 2024
X-Patchwork-Submitter: Srikanth Yalavarthi
X-Patchwork-Id: 135781
X-Patchwork-Delegate: jerinj@marvell.com
From: Srikanth Yalavarthi
To: Srikanth Yalavarthi, Jerin Jacob
Subject: [PATCH 03/11] event/ml: add adapter create and free
Date: Sun, 7 Jan 2024 07:34:42 -0800
Message-ID: <20240107153454.3909-4-syalavarthi@marvell.com>
In-Reply-To: <20240107153454.3909-1-syalavarthi@marvell.com>
References: <20240107153454.3909-1-syalavarthi@marvell.com>

Added ML event adapter create and free functions.
Signed-off-by: Srikanth Yalavarthi --- lib/eventdev/rte_event_ml_adapter.c | 317 ++++++++++++++++++++++++++++ 1 file changed, 317 insertions(+) diff --git a/lib/eventdev/rte_event_ml_adapter.c b/lib/eventdev/rte_event_ml_adapter.c index 5b8b02a0130..fed3b67c858 100644 --- a/lib/eventdev/rte_event_ml_adapter.c +++ b/lib/eventdev/rte_event_ml_adapter.c @@ -4,3 +4,320 @@ #include "rte_event_ml_adapter.h" #include "rte_eventdev.h" +#include + +#include "eventdev_pmd.h" +#include "rte_mldev_pmd.h" + +#define ML_ADAPTER_NAME_LEN 32 +#define ML_DEFAULT_MAX_NB 128 +#define ML_ADAPTER_BUFFER_SIZE 1024 + +#define ML_ADAPTER_ARRAY "event_ml_adapter_array" + +/* ML ops circular buffer */ +struct ml_ops_circular_buffer { + /* Index of head element */ + uint16_t head; + + /* Index of tail element */ + uint16_t tail; + + /* Number of elements in buffer */ + uint16_t count; + + /* Size of circular buffer */ + uint16_t size; + + /* Pointer to hold rte_ml_op for processing */ + struct rte_ml_op **op_buffer; +} __rte_cache_aligned; + +/* ML device information */ +struct ml_device_info { + /* Pointer to mldev */ + struct rte_ml_dev *dev; +} __rte_cache_aligned; + +struct event_ml_adapter { + /* Event device identifier */ + uint8_t eventdev_id; + + /* Event port identifier */ + uint8_t event_port_id; + + /* Adapter mode */ + enum rte_event_ml_adapter_mode mode; + + /* Memory allocation name */ + char mem_name[ML_ADAPTER_NAME_LEN]; + + /* Socket identifier cached from eventdev */ + int socket_id; + + /* Lock to serialize config updates with service function */ + rte_spinlock_t lock; + + /* ML device structure array */ + struct ml_device_info *mldevs; + + /* Circular buffer for processing ML ops to eventdev */ + struct ml_ops_circular_buffer ebuf; + + /* Configuration callback for rte_service configuration */ + rte_event_ml_adapter_conf_cb conf_cb; + + /* Configuration callback argument */ + void *conf_arg; + + /* Set if default_cb is being used */ + int default_cb_arg; +} 
__rte_cache_aligned; + +static struct event_ml_adapter **event_ml_adapter; + +static inline int +emla_valid_id(uint8_t id) +{ + return id < RTE_EVENT_ML_ADAPTER_MAX_INSTANCE; +} + +static inline struct event_ml_adapter * +emla_id_to_adapter(uint8_t id) +{ + return event_ml_adapter ? event_ml_adapter[id] : NULL; +} + +static int +emla_array_init(void) +{ + const struct rte_memzone *mz; + uint32_t sz; + + mz = rte_memzone_lookup(ML_ADAPTER_ARRAY); + if (mz == NULL) { + sz = sizeof(struct event_ml_adapter *) * RTE_EVENT_ML_ADAPTER_MAX_INSTANCE; + sz = RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE); + + mz = rte_memzone_reserve_aligned(ML_ADAPTER_ARRAY, sz, rte_socket_id(), 0, + RTE_CACHE_LINE_SIZE); + if (mz == NULL) { + RTE_EDEV_LOG_ERR("Failed to reserve memzone : %s, err = %d", + ML_ADAPTER_ARRAY, rte_errno); + return -rte_errno; + } + } + + event_ml_adapter = mz->addr; + + return 0; +} + +static inline int +emla_circular_buffer_init(const char *name, struct ml_ops_circular_buffer *buf, uint16_t sz) +{ + buf->op_buffer = rte_zmalloc(name, sizeof(struct rte_ml_op *) * sz, 0); + if (buf->op_buffer == NULL) + return -ENOMEM; + + buf->size = sz; + + return 0; +} + +static inline void +emla_circular_buffer_free(struct ml_ops_circular_buffer *buf) +{ + rte_free(buf->op_buffer); +} + +static int +emla_default_config_cb(uint8_t id, uint8_t evdev_id, struct rte_event_ml_adapter_conf *conf, + void *arg) +{ + struct rte_event_port_conf *port_conf; + struct rte_event_dev_config dev_conf; + struct event_ml_adapter *adapter; + struct rte_eventdev *dev; + uint8_t port_id; + int started; + int ret; + + adapter = emla_id_to_adapter(id); + if (adapter == NULL) + return -EINVAL; + + dev = &rte_eventdevs[adapter->eventdev_id]; + dev_conf = dev->data->dev_conf; + + started = dev->data->dev_started; + if (started) + rte_event_dev_stop(evdev_id); + + port_id = dev_conf.nb_event_ports; + dev_conf.nb_event_ports += 1; + + port_conf = arg; + if (port_conf->event_port_cfg & 
RTE_EVENT_PORT_CFG_SINGLE_LINK) + dev_conf.nb_single_link_event_port_queues += 1; + + ret = rte_event_dev_configure(evdev_id, &dev_conf); + if (ret) { + RTE_EDEV_LOG_ERR("Failed to configure event dev %u", evdev_id); + if (started) { + if (rte_event_dev_start(evdev_id)) + return -EIO; + } + return ret; + } + + ret = rte_event_port_setup(evdev_id, port_id, port_conf); + if (ret) { + RTE_EDEV_LOG_ERR("Failed to setup event port %u", port_id); + return ret; + } + + conf->event_port_id = port_id; + conf->max_nb = ML_DEFAULT_MAX_NB; + if (started) + ret = rte_event_dev_start(evdev_id); + + adapter->default_cb_arg = 1; + adapter->event_port_id = conf->event_port_id; + + return ret; +} + +int +rte_event_ml_adapter_create_ext(uint8_t id, uint8_t evdev_id, rte_event_ml_adapter_conf_cb conf_cb, + enum rte_event_ml_adapter_mode mode, void *conf_arg) +{ + struct rte_event_dev_info dev_info; + struct event_ml_adapter *adapter; + char name[ML_ADAPTER_NAME_LEN]; + int socket_id; + uint8_t i; + int ret; + + if (!emla_valid_id(id)) { + RTE_EDEV_LOG_ERR("Invalid ML adapter id = %d", id); + return -EINVAL; + } + + RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(evdev_id, -EINVAL); + + if (conf_cb == NULL) + return -EINVAL; + + if (event_ml_adapter == NULL) { + ret = emla_array_init(); + if (ret) + return ret; + } + + adapter = emla_id_to_adapter(id); + if (adapter != NULL) { + RTE_EDEV_LOG_ERR("ML adapter ID %d already exists!", id); + return -EEXIST; + } + + socket_id = rte_event_dev_socket_id(evdev_id); + snprintf(name, ML_ADAPTER_NAME_LEN, "rte_event_ml_adapter_%d", id); + adapter = rte_zmalloc_socket(name, sizeof(struct event_ml_adapter), RTE_CACHE_LINE_SIZE, + socket_id); + if (adapter == NULL) { + RTE_EDEV_LOG_ERR("Failed to get mem for event ML adapter!"); + return -ENOMEM; + } + + if (emla_circular_buffer_init("emla_circular_buffer", &adapter->ebuf, + ML_ADAPTER_BUFFER_SIZE)) { + RTE_EDEV_LOG_ERR("Failed to get memory for event adapter circular buffer"); + rte_free(adapter); + return 
-ENOMEM; + } + + ret = rte_event_dev_info_get(evdev_id, &dev_info); + if (ret < 0) { + RTE_EDEV_LOG_ERR("Failed to get info for eventdev %d: %s", evdev_id, + dev_info.driver_name); + emla_circular_buffer_free(&adapter->ebuf); + rte_free(adapter); + return ret; + } + + adapter->eventdev_id = evdev_id; + adapter->mode = mode; + rte_strlcpy(adapter->mem_name, name, ML_ADAPTER_NAME_LEN); + adapter->socket_id = socket_id; + adapter->conf_cb = conf_cb; + adapter->conf_arg = conf_arg; + adapter->mldevs = rte_zmalloc_socket(adapter->mem_name, + rte_ml_dev_count() * sizeof(struct ml_device_info), 0, + socket_id); + if (adapter->mldevs == NULL) { + RTE_EDEV_LOG_ERR("Failed to get memory for ML devices"); + emla_circular_buffer_free(&adapter->ebuf); + rte_free(adapter); + return -ENOMEM; + } + + rte_spinlock_init(&adapter->lock); + for (i = 0; i < rte_ml_dev_count(); i++) + adapter->mldevs[i].dev = rte_ml_dev_pmd_get_dev(i); + + event_ml_adapter[id] = adapter; + + return 0; +} + +int +rte_event_ml_adapter_create(uint8_t id, uint8_t evdev_id, struct rte_event_port_conf *port_config, + enum rte_event_ml_adapter_mode mode) +{ + struct rte_event_port_conf *pc; + int ret; + + if (port_config == NULL) + return -EINVAL; + + if (!emla_valid_id(id)) { + RTE_EDEV_LOG_ERR("Invalid ML adapter id = %d", id); + return -EINVAL; + } + + pc = rte_malloc(NULL, sizeof(struct rte_event_port_conf), 0); + if (pc == NULL) + return -ENOMEM; + + rte_memcpy(pc, port_config, sizeof(struct rte_event_port_conf)); + ret = rte_event_ml_adapter_create_ext(id, evdev_id, emla_default_config_cb, mode, pc); + if (ret != 0) + rte_free(pc); + + return ret; +} + +int +rte_event_ml_adapter_free(uint8_t id) +{ + struct event_ml_adapter *adapter; + + if (!emla_valid_id(id)) { + RTE_EDEV_LOG_ERR("Invalid ML adapter id = %d", id); + return -EINVAL; + } + + adapter = emla_id_to_adapter(id); + if (adapter == NULL) + return -EINVAL; + + if (adapter->default_cb_arg) + rte_free(adapter->conf_arg); + + 
rte_free(adapter->mldevs);
+	rte_free(adapter);
+	event_ml_adapter[id] = NULL;
+
+	return 0;
+}

From patchwork Sun Jan 7 15:34:43 2024
X-Patchwork-Submitter: Srikanth Yalavarthi
X-Patchwork-Id: 135782
X-Patchwork-Delegate: jerinj@marvell.com
From: Srikanth Yalavarthi
To: Srikanth Yalavarthi, Jerin Jacob
Subject: [PATCH 04/11] event/ml: add adapter port get
Date: Sun, 7 Jan 2024 07:34:43 -0800
Message-ID: <20240107153454.3909-5-syalavarthi@marvell.com>
In-Reply-To: <20240107153454.3909-1-syalavarthi@marvell.com>
References: <20240107153454.3909-1-syalavarthi@marvell.com>

Added ML adapter port get function.
Signed-off-by: Srikanth Yalavarthi
---
 lib/eventdev/rte_event_ml_adapter.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/lib/eventdev/rte_event_ml_adapter.c b/lib/eventdev/rte_event_ml_adapter.c
index fed3b67c858..93ba58b3e9e 100644
--- a/lib/eventdev/rte_event_ml_adapter.c
+++ b/lib/eventdev/rte_event_ml_adapter.c
@@ -321,3 +321,22 @@ rte_event_ml_adapter_free(uint8_t id)
 
 	return 0;
 }
+
+int
+rte_event_ml_adapter_event_port_get(uint8_t id, uint8_t *event_port_id)
+{
+	struct event_ml_adapter *adapter;
+
+	if (!emla_valid_id(id)) {
+		RTE_EDEV_LOG_ERR("Invalid ML adapter id = %d", id);
+		return -EINVAL;
+	}
+
+	adapter = emla_id_to_adapter(id);
+	if (adapter == NULL || event_port_id == NULL)
+		return -EINVAL;
+
+	*event_port_id = adapter->event_port_id;
+
+	return 0;
+}

From patchwork Sun Jan 7 15:34:44 2024
X-Patchwork-Submitter: Srikanth Yalavarthi
X-Patchwork-Id: 135783
X-Patchwork-Delegate: jerinj@marvell.com
From: Srikanth Yalavarthi
To: Jerin Jacob, Srikanth Yalavarthi
Subject: [PATCH 05/11] event/ml: add adapter queue pair add and delete
Date: Sun, 7 Jan 2024 07:34:44 -0800
Message-ID: <20240107153454.3909-6-syalavarthi@marvell.com>
In-Reply-To: <20240107153454.3909-1-syalavarthi@marvell.com>
References: <20240107153454.3909-1-syalavarthi@marvell.com>
Added ML adapter queue-pair add and delete functions.

Signed-off-by: Srikanth Yalavarthi
---
 lib/eventdev/eventdev_pmd.h         |  54 ++++++++
 lib/eventdev/rte_event_ml_adapter.c | 193 ++++++++++++++++++++++++++++
 2 files changed, 247 insertions(+)

diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index 94d505753dc..48e970a5097 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -1549,6 +1549,56 @@ struct rte_ml_dev;
 typedef int (*eventdev_ml_adapter_caps_get_t)(const struct rte_eventdev *dev,
 					      const struct rte_ml_dev *mldev, uint32_t *caps);
 
+/**
+ * This API may change without prior notice
+ *
+ * Add ML queue pair to event device. This callback is invoked if
+ * the caps returned from rte_event_ml_adapter_caps_get(, mldev_id)
+ * has RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_* set.
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @param mldev
+ *   MLDEV pointer
+ *
+ * @param queue_pair_id
+ *   MLDEV queue pair identifier.
+ *
+ * @param event
+ *   Event information required for binding mldev queue pair to event queue.
+ *   This structure will have a valid value for only those HW PMDs supporting
+ *   @see RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND capability.
+ *
+ * @return
+ *   - 0: Success, mldev queue pair added successfully.
+ *   - <0: Error code returned by the driver function.
+ */
+typedef int (*eventdev_ml_adapter_queue_pair_add_t)(const struct rte_eventdev *dev,
+						    const struct rte_ml_dev *mldev,
+						    int32_t queue_pair_id,
+						    const struct rte_event *event);
+
+/**
+ * This API may change without prior notice
+ *
+ * Delete ML queue pair from event device. This callback is invoked if
+ * the caps returned from rte_event_ml_adapter_caps_get(, mldev_id)
+ * has RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_* set.
+ *
+ * @param queue_pair_id
+ *   mldev queue pair identifier.
+ *
+ * @return
+ *   - 0: Success, mldev queue pair deleted successfully.
+ * - <0: Error code returned by the driver function. + * + */ +typedef int (*eventdev_ml_adapter_queue_pair_del_t)(const struct rte_eventdev *dev, + const struct rte_ml_dev *cdev, + int32_t queue_pair_id); + /** Event device operations function pointer table */ struct eventdev_ops { eventdev_info_get_t dev_infos_get; /**< Get device info. */ @@ -1690,6 +1740,10 @@ struct eventdev_ops { eventdev_ml_adapter_caps_get_t ml_adapter_caps_get; /**< Get ML adapter capabilities */ + eventdev_ml_adapter_queue_pair_add_t ml_adapter_queue_pair_add; + /**< Add queue pair to ML adapter */ + eventdev_ml_adapter_queue_pair_del_t ml_adapter_queue_pair_del; + /**< Delete queue pair from ML adapter */ eventdev_selftest dev_selftest; /**< Start eventdev Selftest */ diff --git a/lib/eventdev/rte_event_ml_adapter.c b/lib/eventdev/rte_event_ml_adapter.c index 93ba58b3e9e..9d441c5d967 100644 --- a/lib/eventdev/rte_event_ml_adapter.c +++ b/lib/eventdev/rte_event_ml_adapter.c @@ -33,10 +33,27 @@ struct ml_ops_circular_buffer { struct rte_ml_op **op_buffer; } __rte_cache_aligned; +/* Queue pair information */ +struct ml_queue_pair_info { + /* Set to indicate queue pair is enabled */ + bool qp_enabled; + + /* Circular buffer for batching ML ops to mldev */ + struct ml_ops_circular_buffer mlbuf; +} __rte_cache_aligned; + /* ML device information */ struct ml_device_info { /* Pointer to mldev */ struct rte_ml_dev *dev; + + /* Pointer to queue pair info */ + struct ml_queue_pair_info *qpairs; + + /* If num_qpairs > 0, the start callback will + * be invoked if not already invoked + */ + uint16_t num_qpairs; } __rte_cache_aligned; struct event_ml_adapter { @@ -72,6 +89,9 @@ struct event_ml_adapter { /* Set if default_cb is being used */ int default_cb_arg; + + /* No. 
of queue pairs configured */ + uint16_t nb_qps; } __rte_cache_aligned; static struct event_ml_adapter **event_ml_adapter; @@ -340,3 +360,176 @@ rte_event_ml_adapter_event_port_get(uint8_t id, uint8_t *event_port_id) return 0; } + +static void +emla_update_qp_info(struct event_ml_adapter *adapter, struct ml_device_info *dev_info, + int32_t queue_pair_id, uint8_t add) +{ + struct ml_queue_pair_info *qp_info; + int enabled; + uint16_t i; + + if (dev_info->qpairs == NULL) + return; + + if (queue_pair_id == -1) { + for (i = 0; i < dev_info->dev->data->nb_queue_pairs; i++) + emla_update_qp_info(adapter, dev_info, i, add); + } else { + qp_info = &dev_info->qpairs[queue_pair_id]; + enabled = qp_info->qp_enabled; + if (add) { + adapter->nb_qps += !enabled; + dev_info->num_qpairs += !enabled; + } else { + adapter->nb_qps -= enabled; + dev_info->num_qpairs -= enabled; + } + qp_info->qp_enabled = !!add; + } +} + +int +rte_event_ml_adapter_queue_pair_add(uint8_t id, int16_t mldev_id, int32_t queue_pair_id, + const struct rte_event *event) +{ + struct event_ml_adapter *adapter; + struct ml_device_info *dev_info; + struct rte_eventdev *dev; + uint32_t cap; + int ret; + + if (!emla_valid_id(id)) { + RTE_EDEV_LOG_ERR("Invalid ML adapter id = %d", id); + return -EINVAL; + } + + if (!rte_ml_dev_is_valid_dev(mldev_id)) { + RTE_EDEV_LOG_ERR("Invalid mldev_id = %" PRIu8, mldev_id); + return -EINVAL; + } + + adapter = emla_id_to_adapter(id); + if (adapter == NULL) + return -EINVAL; + + dev = &rte_eventdevs[adapter->eventdev_id]; + ret = rte_event_ml_adapter_caps_get(adapter->eventdev_id, mldev_id, &cap); + if (ret) { + RTE_EDEV_LOG_ERR("Failed to get adapter caps dev %u mldev %u", id, mldev_id); + return ret; + } + + if ((cap & RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND) && (event == NULL)) { + RTE_EDEV_LOG_ERR("Event can not be NULL for mldev_id = %u", mldev_id); + return -EINVAL; + } + + dev_info = &adapter->mldevs[mldev_id]; + if (queue_pair_id != -1 && (uint16_t)queue_pair_id 
>= dev_info->dev->data->nb_queue_pairs) { + RTE_EDEV_LOG_ERR("Invalid queue_pair_id %u", (uint16_t)queue_pair_id); + return -EINVAL; + } + + /* In case HW cap is RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, no + * need of service core as HW supports event forward capability. + */ + if ((cap & RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) || + (cap & RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND && + adapter->mode == RTE_EVENT_ML_ADAPTER_OP_NEW) || + (cap & RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_NEW && + adapter->mode == RTE_EVENT_ML_ADAPTER_OP_NEW)) { + if (*dev->dev_ops->ml_adapter_queue_pair_add == NULL) + return -ENOTSUP; + + if (dev_info->qpairs == NULL) { + dev_info->qpairs = + rte_zmalloc_socket(adapter->mem_name, + dev_info->dev->data->nb_queue_pairs * + sizeof(struct ml_queue_pair_info), + 0, adapter->socket_id); + if (dev_info->qpairs == NULL) + return -ENOMEM; + } + + ret = (*dev->dev_ops->ml_adapter_queue_pair_add)(dev, dev_info->dev, queue_pair_id, + event); + if (ret == 0) + emla_update_qp_info(adapter, &adapter->mldevs[mldev_id], queue_pair_id, 1); + } + + return ret; +} + +int +rte_event_ml_adapter_queue_pair_del(uint8_t id, int16_t mldev_id, int32_t queue_pair_id) +{ + struct event_ml_adapter *adapter; + struct ml_device_info *dev_info; + struct rte_eventdev *dev; + int ret; + uint32_t cap; + uint16_t i; + + if (!emla_valid_id(id)) { + RTE_EDEV_LOG_ERR("Invalid ML adapter id = %d", id); + return -EINVAL; + } + + if (!rte_ml_dev_is_valid_dev(mldev_id)) { + RTE_EDEV_LOG_ERR("Invalid mldev_id = %" PRIu8, mldev_id); + return -EINVAL; + } + + adapter = emla_id_to_adapter(id); + if (adapter == NULL) + return -EINVAL; + + dev = &rte_eventdevs[adapter->eventdev_id]; + ret = rte_event_ml_adapter_caps_get(adapter->eventdev_id, mldev_id, &cap); + if (ret) + return ret; + + dev_info = &adapter->mldevs[mldev_id]; + + if (queue_pair_id != -1 && (uint16_t)queue_pair_id >= dev_info->dev->data->nb_queue_pairs) { + RTE_EDEV_LOG_ERR("Invalid queue_pair_id %" 
PRIu16, (uint16_t)queue_pair_id);
+		return -EINVAL;
+	}
+
+	if ((cap & RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) ||
+	    (cap & RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_NEW &&
+	     adapter->mode == RTE_EVENT_ML_ADAPTER_OP_NEW)) {
+		if (*dev->dev_ops->ml_adapter_queue_pair_del == NULL)
+			return -ENOTSUP;
+
+		ret = (*dev->dev_ops->ml_adapter_queue_pair_del)(dev, dev_info->dev, queue_pair_id);
+		if (ret == 0) {
+			emla_update_qp_info(adapter, &adapter->mldevs[mldev_id], queue_pair_id, 0);
+			if (dev_info->num_qpairs == 0) {
+				rte_free(dev_info->qpairs);
+				dev_info->qpairs = NULL;
+			}
+		}
+	} else {
+		if (adapter->nb_qps == 0)
+			return 0;
+
+		rte_spinlock_lock(&adapter->lock);
+		if (queue_pair_id == -1) {
+			for (i = 0; i < dev_info->dev->data->nb_queue_pairs; i++)
+				emla_update_qp_info(adapter, dev_info, i, 0);
+		} else {
+			emla_update_qp_info(adapter, dev_info, (uint16_t)queue_pair_id, 0);
+		}
+
+		if (dev_info->num_qpairs == 0) {
+			rte_free(dev_info->qpairs);
+			dev_info->qpairs = NULL;
+		}
+
+		rte_spinlock_unlock(&adapter->lock);
+	}
+
+	return ret;
+}

From patchwork Sun Jan 7 15:34:45 2024
X-Patchwork-Submitter: Srikanth Yalavarthi
X-Patchwork-Id: 135784
X-Patchwork-Delegate: jerinj@marvell.com
From: Srikanth Yalavarthi
To: Srikanth Yalavarthi, Jerin Jacob
Subject: [PATCH 06/11] event/ml: add support for service function
Date: Sun, 7 Jan 2024 07:34:45 -0800
Message-ID: <20240107153454.3909-7-syalavarthi@marvell.com>
In-Reply-To: <20240107153454.3909-1-syalavarthi@marvell.com>
References: <20240107153454.3909-1-syalavarthi@marvell.com>
Added support for ML adapter service function for software based event devices.

Signed-off-by: Srikanth Yalavarthi
---
 lib/eventdev/rte_event_ml_adapter.c | 538 ++++++++++++++++++++++++++++
 1 file changed, 538 insertions(+)

diff --git a/lib/eventdev/rte_event_ml_adapter.c b/lib/eventdev/rte_event_ml_adapter.c
index 9d441c5d967..95f566b1025 100644
--- a/lib/eventdev/rte_event_ml_adapter.c
+++ b/lib/eventdev/rte_event_ml_adapter.c
@@ -5,6 +5,7 @@
 #include "rte_event_ml_adapter.h"
 #include "rte_eventdev.h"
 #include
+#include
 #include "eventdev_pmd.h"
 #include "rte_mldev_pmd.h"
@@ -13,6 +14,9 @@
 #define ML_DEFAULT_MAX_NB 128
 #define ML_ADAPTER_BUFFER_SIZE 1024
 
+#define ML_BATCH_SIZE 32
+#define ML_ADAPTER_OPS_BUFFER_SIZE (ML_BATCH_SIZE + ML_BATCH_SIZE)
+
 #define ML_ADAPTER_ARRAY "event_ml_adapter_array"
 
 /* ML ops circular buffer */
@@ -54,6 +58,9 @@ struct ml_device_info {
 	 * be invoked if not already invoked
 	 */
 	uint16_t num_qpairs;
+
+	/* Next queue pair to be processed */
+	uint16_t next_queue_pair_id;
 } __rte_cache_aligned;
 
 struct event_ml_adapter {
@@ -78,6 +85,9 @@
 	/* ML device structure array */
 	struct ml_device_info *mldevs;
 
+	/* Next ML device to be processed */
+	int16_t next_mldev_id;
+
 	/* Circular buffer for processing ML ops to eventdev */
 	struct ml_ops_circular_buffer ebuf;
 
@@ -92,6 +102,26 @@
 	/* No.
of queue pairs configured */ uint16_t nb_qps; + + /* Per adapter EAL service ID */ + uint32_t service_id; + + /* Service initialization state */ + uint8_t service_initialized; + + /* Max ML ops processed in any service function invocation */ + uint32_t max_nb; + + /* Store event port's implicit release capability */ + uint8_t implicit_release_disabled; + + /* Flag to indicate backpressure at mldev + * Stop further dequeuing events from eventdev + */ + bool stop_enq_to_mldev; + + /* Loop counter to flush ml ops */ + uint16_t transmit_loop_count; } __rte_cache_aligned; static struct event_ml_adapter **event_ml_adapter; @@ -133,6 +163,18 @@ emla_array_init(void) return 0; } +static inline bool +emla_circular_buffer_batch_ready(struct ml_ops_circular_buffer *bufp) +{ + return bufp->count >= ML_BATCH_SIZE; +} + +static inline bool +emla_circular_buffer_space_for_batch(struct ml_ops_circular_buffer *bufp) +{ + return (bufp->size - bufp->count) >= ML_BATCH_SIZE; +} + static inline int emla_circular_buffer_init(const char *name, struct ml_ops_circular_buffer *buf, uint16_t sz) { @@ -151,6 +193,49 @@ emla_circular_buffer_free(struct ml_ops_circular_buffer *buf) rte_free(buf->op_buffer); } +static inline int +emla_circular_buffer_add(struct ml_ops_circular_buffer *bufp, struct rte_ml_op *op) +{ + uint16_t *tail = &bufp->tail; + + bufp->op_buffer[*tail] = op; + + /* circular buffer, go round */ + *tail = (*tail + 1) % bufp->size; + bufp->count++; + + return 0; +} + +static inline int +emla_circular_buffer_flush_to_mldev(struct ml_ops_circular_buffer *bufp, uint8_t mldev_id, + uint16_t qp_id, uint16_t *nb_ops_flushed) +{ + uint16_t n = 0; + uint16_t *head = &bufp->head; + uint16_t *tail = &bufp->tail; + struct rte_ml_op **ops = bufp->op_buffer; + + if (*tail > *head) + n = *tail - *head; + else if (*tail < *head) + n = bufp->size - *head; + else { + *nb_ops_flushed = 0; + return 0; /* buffer empty */ + } + + *nb_ops_flushed = rte_ml_enqueue_burst(mldev_id, qp_id, &ops[*head], 
n); + bufp->count -= *nb_ops_flushed; + if (!bufp->count) { + *head = 0; + *tail = 0; + } else + *head = (*head + *nb_ops_flushed) % bufp->size; + + return *nb_ops_flushed == n ? 0 : -1; +} + static int emla_default_config_cb(uint8_t id, uint8_t evdev_id, struct rte_event_ml_adapter_conf *conf, void *arg) @@ -361,6 +446,394 @@ rte_event_ml_adapter_event_port_get(uint8_t id, uint8_t *event_port_id) return 0; } +static inline unsigned int +emla_enq_to_mldev(struct event_ml_adapter *adapter, struct rte_event *ev, unsigned int cnt) +{ + union rte_event_ml_metadata *m_data = NULL; + struct ml_queue_pair_info *qp_info = NULL; + struct rte_ml_op *ml_op; + unsigned int i, n; + uint16_t qp_id, nb_enqueued = 0; + int16_t mldev_id; + int ret; + + ret = 0; + n = 0; + + for (i = 0; i < cnt; i++) { + ml_op = ev[i].event_ptr; + if (ml_op == NULL) + continue; + + if (ml_op->private_data_offset) + m_data = (union rte_event_ml_metadata *)((uint8_t *)ml_op + + ml_op->private_data_offset); + if (m_data == NULL) { + if (ml_op != NULL && ml_op->mempool != NULL) + rte_mempool_put(ml_op->mempool, ml_op); + continue; + } + + mldev_id = m_data->request_info.mldev_id; + qp_id = m_data->request_info.queue_pair_id; + qp_info = &adapter->mldevs[mldev_id].qpairs[qp_id]; + if (!qp_info->qp_enabled) { + if (ml_op != NULL && ml_op->mempool != NULL) + rte_mempool_put(ml_op->mempool, ml_op); + continue; + } + emla_circular_buffer_add(&qp_info->mlbuf, ml_op); + + if (emla_circular_buffer_batch_ready(&qp_info->mlbuf)) { + ret = emla_circular_buffer_flush_to_mldev(&qp_info->mlbuf, mldev_id, qp_id, + &nb_enqueued); + n += nb_enqueued; + + /** + * If some ml ops failed to flush to mldev and + * space for another batch is not available, stop + * dequeue from eventdev momentarily + */ + if (unlikely(ret < 0 && + !emla_circular_buffer_space_for_batch(&qp_info->mlbuf))) + adapter->stop_enq_to_mldev = true; + } + } + + return n; +} + +static unsigned int +emla_ml_mldev_flush(struct event_ml_adapter *adapter, 
int16_t mldev_id, uint16_t *nb_ops_flushed) +{ + struct ml_device_info *curr_dev; + struct ml_queue_pair_info *curr_queue; + struct rte_ml_dev *dev; + uint16_t nb = 0, nb_enqueued = 0; + uint16_t qp; + + curr_dev = &adapter->mldevs[mldev_id]; + dev = rte_ml_dev_pmd_get_dev(mldev_id); + + for (qp = 0; qp < dev->data->nb_queue_pairs; qp++) { + + curr_queue = &curr_dev->qpairs[qp]; + if (unlikely(curr_queue == NULL || !curr_queue->qp_enabled)) + continue; + + emla_circular_buffer_flush_to_mldev(&curr_queue->mlbuf, mldev_id, qp, &nb_enqueued); + *nb_ops_flushed += curr_queue->mlbuf.count; + nb += nb_enqueued; + } + + return nb; +} + +static unsigned int +emla_ml_enq_flush(struct event_ml_adapter *adapter) +{ + int16_t mldev_id; + uint16_t nb_enqueued = 0; + uint16_t nb_ops_flushed = 0; + uint16_t num_mldev = rte_ml_dev_count(); + + for (mldev_id = 0; mldev_id < num_mldev; mldev_id++) + nb_enqueued += emla_ml_mldev_flush(adapter, mldev_id, &nb_ops_flushed); + /** + * Enable dequeue from eventdev if all ops from circular + * buffer flushed to mldev + */ + if (!nb_ops_flushed) + adapter->stop_enq_to_mldev = false; + + return nb_enqueued; +} + +/* Flush an instance's enqueue buffers every CRYPTO_ENQ_FLUSH_THRESHOLD + * iterations of emla_ml_adapter_enq_run() + */ +#define ML_ENQ_FLUSH_THRESHOLD 1024 + +static int +emla_ml_adapter_enq_run(struct event_ml_adapter *adapter, unsigned int max_enq) +{ + struct rte_event ev[ML_BATCH_SIZE]; + unsigned int nb_enq, nb_enqueued; + uint16_t n; + uint8_t event_dev_id = adapter->eventdev_id; + uint8_t event_port_id = adapter->event_port_id; + + nb_enqueued = 0; + if (adapter->mode == RTE_EVENT_ML_ADAPTER_OP_NEW) + return 0; + + for (nb_enq = 0; nb_enq < max_enq; nb_enq += n) { + if (unlikely(adapter->stop_enq_to_mldev)) { + nb_enqueued += emla_ml_enq_flush(adapter); + + if (unlikely(adapter->stop_enq_to_mldev)) + break; + } + + n = rte_event_dequeue_burst(event_dev_id, event_port_id, ev, ML_BATCH_SIZE, 0); + + if (!n) + break; + + 
nb_enqueued += emla_enq_to_mldev(adapter, ev, n); + } + + if ((++adapter->transmit_loop_count & (ML_ENQ_FLUSH_THRESHOLD - 1)) == 0) + nb_enqueued += emla_ml_enq_flush(adapter); + + return nb_enqueued; +} + +#define ML_ADAPTER_MAX_EV_ENQ_RETRIES 100 + +static inline uint16_t +emla_ops_enqueue_burst(struct event_ml_adapter *adapter, struct rte_ml_op **ops, uint16_t num) +{ + union rte_event_ml_metadata *m_data = NULL; + uint8_t event_dev_id = adapter->eventdev_id; + uint8_t event_port_id = adapter->event_port_id; + struct rte_event events[ML_BATCH_SIZE]; + uint16_t nb_enqueued, nb_ev; + uint8_t retry; + uint8_t i; + + nb_ev = 0; + retry = 0; + nb_enqueued = 0; + num = RTE_MIN(num, ML_BATCH_SIZE); + for (i = 0; i < num; i++) { + struct rte_event *ev = &events[nb_ev++]; + + if (ops[i]->private_data_offset) + m_data = (union rte_event_ml_metadata *)((uint8_t *)ops[i] + + ops[i]->private_data_offset); + if (unlikely(m_data == NULL)) { + if (ops[i] != NULL && ops[i]->mempool != NULL) + rte_mempool_put(ops[i]->mempool, ops[i]); + continue; + } + + rte_memcpy(ev, &m_data->response_info, sizeof(*ev)); + ev->event_ptr = ops[i]; + ev->event_type = RTE_EVENT_TYPE_CRYPTODEV; + if (adapter->implicit_release_disabled) + ev->op = RTE_EVENT_OP_FORWARD; + else + ev->op = RTE_EVENT_OP_NEW; + } + + do { + nb_enqueued += rte_event_enqueue_burst(event_dev_id, event_port_id, + &events[nb_enqueued], nb_ev - nb_enqueued); + + } while (retry++ < ML_ADAPTER_MAX_EV_ENQ_RETRIES && nb_enqueued < nb_ev); + + return nb_enqueued; +} + +static int +emla_circular_buffer_flush_to_evdev(struct event_ml_adapter *adapter, + struct ml_ops_circular_buffer *bufp) +{ + uint16_t n = 0, nb_ops_flushed; + uint16_t *head = &bufp->head; + uint16_t *tail = &bufp->tail; + struct rte_ml_op **ops = bufp->op_buffer; + + if (*tail > *head) + n = *tail - *head; + else if (*tail < *head) + n = bufp->size - *head; + else + return 0; /* buffer empty */ + + nb_ops_flushed = emla_ops_enqueue_burst(adapter, &ops[*head], n); + 
bufp->count -= nb_ops_flushed; + if (!bufp->count) { + *head = 0; + *tail = 0; + return 0; /* buffer empty */ + } + + *head = (*head + nb_ops_flushed) % bufp->size; + return 1; +} + +static void +emla_ops_buffer_flush(struct event_ml_adapter *adapter) +{ + if (likely(adapter->ebuf.count == 0)) + return; + + while (emla_circular_buffer_flush_to_evdev(adapter, &adapter->ebuf)) + ; +} + +static inline unsigned int +emla_ml_adapter_deq_run(struct event_ml_adapter *adapter, unsigned int max_deq) +{ + struct ml_device_info *curr_dev; + struct ml_queue_pair_info *curr_queue; + struct rte_ml_op *ops[ML_BATCH_SIZE]; + uint16_t n, nb_deq, nb_enqueued, i; + struct rte_ml_dev *dev; + int16_t mldev_id; + uint16_t qp, dev_qps; + bool done; + uint16_t num_mldev = rte_ml_dev_count(); + + nb_deq = 0; + emla_ops_buffer_flush(adapter); + + do { + done = true; + + for (mldev_id = adapter->next_mldev_id; mldev_id < num_mldev; mldev_id++) { + uint16_t queues = 0; + + curr_dev = &adapter->mldevs[mldev_id]; + dev = curr_dev->dev; + if (unlikely(dev == NULL)) + continue; + + dev_qps = dev->data->nb_queue_pairs; + + for (qp = curr_dev->next_queue_pair_id; queues < dev_qps; + qp = (qp + 1) % dev_qps, queues++) { + curr_queue = &curr_dev->qpairs[qp]; + if (unlikely(curr_queue == NULL || !curr_queue->qp_enabled)) + continue; + + n = rte_ml_dequeue_burst(mldev_id, qp, ops, ML_BATCH_SIZE); + if (!n) + continue; + + done = false; + nb_enqueued = 0; + + if (unlikely(!adapter->ebuf.count)) + nb_enqueued = emla_ops_enqueue_burst(adapter, ops, n); + + if (likely(nb_enqueued == n)) + goto check; + + /* Failed to enqueue events case */ + for (i = nb_enqueued; i < n; i++) + emla_circular_buffer_add(&adapter->ebuf, ops[i]); + +check: + nb_deq += n; + + if (nb_deq >= max_deq) { + if ((qp + 1) == dev_qps) + adapter->next_mldev_id = (mldev_id + 1) % num_mldev; + + curr_dev->next_queue_pair_id = + (qp + 1) % dev->data->nb_queue_pairs; + + return nb_deq; + } + } + } + adapter->next_mldev_id = 0; + } while 
(done == false); + + return nb_deq; +} + +static int +emla_ml_adapter_run(struct event_ml_adapter *adapter, unsigned int max_ops) +{ + unsigned int ops_left = max_ops; + + while (ops_left > 0) { + unsigned int e_cnt, d_cnt; + + e_cnt = emla_ml_adapter_deq_run(adapter, ops_left); + ops_left -= RTE_MIN(ops_left, e_cnt); + + d_cnt = emla_ml_adapter_enq_run(adapter, ops_left); + ops_left -= RTE_MIN(ops_left, d_cnt); + + if (e_cnt == 0 && d_cnt == 0) + break; + } + + if (ops_left == max_ops) { + rte_event_maintain(adapter->eventdev_id, adapter->event_port_id, 0); + return -EAGAIN; + } else + return 0; +} + +static int +emla_service_func(void *args) +{ + struct event_ml_adapter *adapter = args; + int ret; + + if (rte_spinlock_trylock(&adapter->lock) == 0) + return 0; + ret = emla_ml_adapter_run(adapter, adapter->max_nb); + rte_spinlock_unlock(&adapter->lock); + + return ret; +} + +static int +emla_init_service(struct event_ml_adapter *adapter, uint8_t id) +{ + struct rte_event_ml_adapter_conf adapter_conf; + struct rte_service_spec service; + int ret; + uint32_t impl_rel; + + if (adapter->service_initialized) + return 0; + + memset(&service, 0, sizeof(service)); + snprintf(service.name, ML_ADAPTER_NAME_LEN, "rte_event_ml_adapter_%d", id); + service.socket_id = adapter->socket_id; + service.callback = emla_service_func; + service.callback_userdata = adapter; + + /* Service function handles locking for queue add/del updates */ + service.capabilities = RTE_SERVICE_CAP_MT_SAFE; + ret = rte_service_component_register(&service, &adapter->service_id); + if (ret) { + RTE_EDEV_LOG_ERR("failed to register service %s err = %" PRId32, service.name, ret); + return ret; + } + + ret = adapter->conf_cb(id, adapter->eventdev_id, &adapter_conf, adapter->conf_arg); + if (ret) { + RTE_EDEV_LOG_ERR("configuration callback failed err = %" PRId32, ret); + return ret; + } + + adapter->max_nb = adapter_conf.max_nb; + adapter->event_port_id = adapter_conf.event_port_id; + + if 
(rte_event_port_attr_get(adapter->eventdev_id, adapter->event_port_id, + RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE, &impl_rel)) { + RTE_EDEV_LOG_ERR("Failed to get port info for eventdev %" PRId32, + adapter->eventdev_id); + emla_circular_buffer_free(&adapter->ebuf); + rte_free(adapter); + return -EINVAL; + } + + adapter->implicit_release_disabled = (uint8_t)impl_rel; + adapter->service_initialized = 1; + + return ret; +} + static void emla_update_qp_info(struct event_ml_adapter *adapter, struct ml_device_info *dev_info, int32_t queue_pair_id, uint8_t add) @@ -389,6 +862,40 @@ emla_update_qp_info(struct event_ml_adapter *adapter, struct ml_device_info *dev } } +static int +emla_add_queue_pair(struct event_ml_adapter *adapter, int16_t mldev_id, int queue_pair_id) +{ + struct ml_device_info *dev_info = &adapter->mldevs[mldev_id]; + struct ml_queue_pair_info *qpairs; + uint32_t i; + + if (dev_info->qpairs == NULL) { + dev_info->qpairs = rte_zmalloc_socket(adapter->mem_name, + dev_info->dev->data->nb_queue_pairs * + sizeof(struct ml_queue_pair_info), + 0, adapter->socket_id); + if (dev_info->qpairs == NULL) + return -ENOMEM; + + qpairs = dev_info->qpairs; + + if (emla_circular_buffer_init("mla_mldev_circular_buffer", &qpairs->mlbuf, + ML_ADAPTER_OPS_BUFFER_SIZE)) { + RTE_EDEV_LOG_ERR("Failed to get memory for mldev buffer"); + rte_free(qpairs); + return -ENOMEM; + } + } + + if (queue_pair_id == -1) { + for (i = 0; i < dev_info->dev->data->nb_queue_pairs; i++) + emla_update_qp_info(adapter, dev_info, i, 1); + } else + emla_update_qp_info(adapter, dev_info, (uint16_t)queue_pair_id, 1); + + return 0; +} + int rte_event_ml_adapter_queue_pair_add(uint8_t id, int16_t mldev_id, int32_t queue_pair_id, const struct rte_event *event) @@ -458,6 +965,36 @@ rte_event_ml_adapter_queue_pair_add(uint8_t id, int16_t mldev_id, int32_t queue_ emla_update_qp_info(adapter, &adapter->mldevs[mldev_id], queue_pair_id, 1); } + /* In case HW cap is 
RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_NEW, or SW adapter, initiate
+	 * services so the application can choose whichever way it wants to use the adapter.
+	 *
+	 * Case 1: RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_NEW. The application may want to
+	 * use one of the two modes below:
+	 *
+	 * a. OP_FORWARD mode -> HW dequeue + SW enqueue
+	 * b. OP_NEW mode     -> HW dequeue
+	 *
+	 * Case 2: No HW caps, use SW adapter
+	 *
+	 * a. OP_FORWARD mode -> SW enqueue & dequeue
+	 * b. OP_NEW mode     -> SW dequeue
+	 */
+	if ((cap & RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_NEW &&
+	     !(cap & RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) &&
+	     adapter->mode == RTE_EVENT_ML_ADAPTER_OP_FORWARD) ||
+	    (!(cap & RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_NEW) &&
+	     !(cap & RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) &&
+	     !(cap & RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND))) {
+		rte_spinlock_lock(&adapter->lock);
+		ret = emla_init_service(adapter, id);
+		if (ret == 0)
+			ret = emla_add_queue_pair(adapter, mldev_id, queue_pair_id);
+		rte_spinlock_unlock(&adapter->lock);
+
+		if (ret == 0)
+			rte_service_component_runstate_set(adapter->service_id, 1);
+	}
+
 	return ret;
 }

@@ -529,6 +1066,7 @@ rte_event_ml_adapter_queue_pair_del(uint8_t id, int16_t mldev_id, int32_t queue_
 		}

 		rte_spinlock_unlock(&adapter->lock);
+		rte_service_component_runstate_set(adapter->service_id, adapter->nb_qps);
 	}

 	return ret;

From patchwork Sun Jan 7 15:34:46 2024
From: Srikanth Yalavarthi
Subject: [PATCH 07/11] event/ml: add adapter start and stop
Date: Sun, 7 Jan 2024 07:34:46 -0800
Message-ID: <20240107153454.3909-8-syalavarthi@marvell.com>
In-Reply-To: <20240107153454.3909-1-syalavarthi@marvell.com>

Added ML adapter start and stop functions.

Signed-off-by: Srikanth Yalavarthi
---
 lib/eventdev/eventdev_pmd.h         | 42 ++++++++++++++++
 lib/eventdev/rte_event_ml_adapter.c | 75 +++++++++++++++++++++++++++++
 2 files changed, 117 insertions(+)

diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h
index 48e970a5097..44f26473075 100644
--- a/lib/eventdev/eventdev_pmd.h
+++ b/lib/eventdev/eventdev_pmd.h
@@ -1599,6 +1599,44 @@ typedef int (*eventdev_ml_adapter_queue_pair_del_t)(const struct rte_eventdev *d
 						    const struct rte_ml_dev *cdev,
 						    int32_t queue_pair_id);

+/**
+ * Start ML adapter. This callback is invoked if
+ * the caps returned from rte_event_ml_adapter_caps_get(.., mldev_id)
+ * has RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_* set and queue pairs
+ * from mldev_id have been added to the event device.
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @param mldev
+ *   ML device pointer
+ *
+ * @return
+ *   - 0: Success, ML adapter started successfully.
+ *   - <0: Error code returned by the driver function.
+ */
+typedef int (*eventdev_ml_adapter_start_t)(const struct rte_eventdev *dev,
+					   const struct rte_ml_dev *mldev);
+
+/**
+ * Stop ML adapter.
This callback is invoked if
+ * the caps returned from rte_event_ml_adapter_caps_get(.., mldev_id)
+ * has RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_* set and queue pairs
+ * from mldev_id have been added to the event device.
+ *
+ * @param dev
+ *   Event device pointer
+ *
+ * @param mldev
+ *   ML device pointer
+ *
+ * @return
+ *   - 0: Success, ML adapter stopped successfully.
+ *   - <0: Error code returned by the driver function.
+ */
+typedef int (*eventdev_ml_adapter_stop_t)(const struct rte_eventdev *dev,
+					  const struct rte_ml_dev *mldev);
+
 /** Event device operations function pointer table */
 struct eventdev_ops {
 	eventdev_info_get_t dev_infos_get; /**< Get device info. */
@@ -1744,6 +1782,10 @@ struct eventdev_ops {
 	/**< Add queue pair to ML adapter */
 	eventdev_ml_adapter_queue_pair_del_t ml_adapter_queue_pair_del;
 	/**< Delete queue pair from ML adapter */
+	eventdev_ml_adapter_start_t ml_adapter_start;
+	/**< Start ML adapter */
+	eventdev_ml_adapter_stop_t ml_adapter_stop;
+	/**< Stop ML adapter */
 	eventdev_selftest dev_selftest;
 	/**< Start eventdev Selftest */

diff --git a/lib/eventdev/rte_event_ml_adapter.c b/lib/eventdev/rte_event_ml_adapter.c
index 95f566b1025..60c10caef68 100644
--- a/lib/eventdev/rte_event_ml_adapter.c
+++ b/lib/eventdev/rte_event_ml_adapter.c
@@ -61,6 +61,14 @@ struct ml_device_info {

 	/* Next queue pair to be processed */
 	uint16_t next_queue_pair_id;
+
+	/* Set to indicate processing has been started */
+	uint8_t dev_started;
+
+	/* Set to indicate mldev->eventdev packet
+	 * transfer uses a hardware mechanism
+	 */
+	uint8_t internal_event_port;
 } __rte_cache_aligned;

 struct event_ml_adapter {
@@ -1071,3 +1079,70 @@ rte_event_ml_adapter_queue_pair_del(uint8_t id, int16_t mldev_id, int32_t queue_

 	return ret;
 }
+
+static int
+emla_adapter_ctrl(uint8_t id, int start)
+{
+	struct event_ml_adapter *adapter;
+	struct ml_device_info *dev_info;
+	struct rte_eventdev *dev;
+	int stop = !start;
+	int use_service;
+	uint32_t i;
+
+	if
(!emla_valid_id(id)) {
+		RTE_EDEV_LOG_ERR("Invalid ML adapter id = %d", id);
+		return -EINVAL;
+	}
+
+	adapter = emla_id_to_adapter(id);
+	if (adapter == NULL)
+		return -EINVAL;
+
+	dev = &rte_eventdevs[adapter->eventdev_id];
+
+	use_service = 0;
+	for (i = 0; i < rte_ml_dev_count(); i++) {
+		dev_info = &adapter->mldevs[i];
+		/* if start check for num queue pairs */
+		if (start && !dev_info->num_qpairs)
+			continue;
+		/* if stop check if dev has been started */
+		if (stop && !dev_info->dev_started)
+			continue;
+		use_service |= !dev_info->internal_event_port;
+		dev_info->dev_started = start;
+		if (dev_info->internal_event_port == 0)
+			continue;
+		start ? (*dev->dev_ops->ml_adapter_start)(dev, dev_info->dev) :
+			(*dev->dev_ops->ml_adapter_stop)(dev, dev_info->dev);
+	}
+
+	if (use_service)
+		rte_service_runstate_set(adapter->service_id, start);
+
+	return 0;
+}
+
+int
+rte_event_ml_adapter_start(uint8_t id)
+{
+	struct event_ml_adapter *adapter;
+
+	if (!emla_valid_id(id)) {
+		RTE_EDEV_LOG_ERR("Invalid ML adapter id = %d", id);
+		return -EINVAL;
+	}
+
+	adapter = emla_id_to_adapter(id);
+	if (adapter == NULL)
+		return -EINVAL;
+
+	return emla_adapter_ctrl(id, 1);
+}
+
+int
+rte_event_ml_adapter_stop(uint8_t id)
+{
+	return emla_adapter_ctrl(id, 0);
+}

From patchwork Sun Jan 7 15:34:47 2024
From: Srikanth Yalavarthi
Subject: [PATCH 08/11] event/ml: add support to get adapter service ID
Date: Sun, 7 Jan 2024 07:34:47 -0800
Message-ID: <20240107153454.3909-9-syalavarthi@marvell.com>
In-Reply-To: <20240107153454.3909-1-syalavarthi@marvell.com>

Added support to get ML adapter service ID.

Signed-off-by: Srikanth Yalavarthi
---
 lib/eventdev/rte_event_ml_adapter.c | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/lib/eventdev/rte_event_ml_adapter.c b/lib/eventdev/rte_event_ml_adapter.c
index 60c10caef68..474aeb6325b 100644
--- a/lib/eventdev/rte_event_ml_adapter.c
+++ b/lib/eventdev/rte_event_ml_adapter.c
@@ -1080,6 +1080,26 @@ rte_event_ml_adapter_queue_pair_del(uint8_t id, int16_t mldev_id, int32_t queue_
 	return ret;
 }

+int
+rte_event_ml_adapter_service_id_get(uint8_t id, uint32_t *service_id)
+{
+	struct event_ml_adapter *adapter;
+
+	if (!emla_valid_id(id)) {
+		RTE_EDEV_LOG_ERR("Invalid ML adapter id = %d", id);
+		return -EINVAL;
+	}
+
+	adapter = emla_id_to_adapter(id);
+	if (adapter == NULL || service_id == NULL)
+		return -EINVAL;
+
+	if (adapter->service_initialized)
+		*service_id = adapter->service_id;
+
+	return adapter->service_initialized ?
0 : -ESRCH;
+}
+
 static int
 emla_adapter_ctrl(uint8_t id, int start)
 {

From patchwork Sun Jan 7 15:34:48 2024
From: Srikanth Yalavarthi
Subject: [PATCH 09/11] event/ml: add support for runtime params
Date: Sun, 7 Jan 2024 07:34:48 -0800
Message-ID: <20240107153454.3909-10-syalavarthi@marvell.com>
In-Reply-To: <20240107153454.3909-1-syalavarthi@marvell.com>

Added support to set and get runtime params for ML adapter.
Signed-off-by: Srikanth Yalavarthi
---
 lib/eventdev/rte_event_ml_adapter.c | 99 +++++++++++++++++++++++++++++
 1 file changed, 99 insertions(+)

diff --git a/lib/eventdev/rte_event_ml_adapter.c b/lib/eventdev/rte_event_ml_adapter.c
index 474aeb6325b..feb488f730a 100644
--- a/lib/eventdev/rte_event_ml_adapter.c
+++ b/lib/eventdev/rte_event_ml_adapter.c
@@ -1166,3 +1166,102 @@ rte_event_ml_adapter_stop(uint8_t id)
 {
 	return emla_adapter_ctrl(id, 0);
 }
+
+#define DEFAULT_MAX_NB 128
+
+int
+rte_event_ml_adapter_runtime_params_init(struct rte_event_ml_adapter_runtime_params *params)
+{
+	if (params == NULL)
+		return -EINVAL;
+
+	memset(params, 0, sizeof(*params));
+	params->max_nb = DEFAULT_MAX_NB;
+
+	return 0;
+}
+
+static int
+ml_adapter_cap_check(struct event_ml_adapter *adapter)
+{
+	uint32_t caps;
+	int ret;
+
+	if (!adapter->nb_qps)
+		return -EINVAL;
+
+	ret = rte_event_ml_adapter_caps_get(adapter->eventdev_id, adapter->next_mldev_id, &caps);
+	if (ret) {
+		RTE_EDEV_LOG_ERR("Failed to get adapter caps dev %" PRIu8 " mldev %" PRId16,
+				 adapter->eventdev_id, adapter->next_mldev_id);
+		return ret;
+	}
+
+	if ((caps & RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) ||
+	    (caps & RTE_EVENT_ML_ADAPTER_CAP_INTERNAL_PORT_OP_NEW))
+		return -ENOTSUP;
+
+	return 0;
+}
+
+int
+rte_event_ml_adapter_runtime_params_set(uint8_t id,
+					struct rte_event_ml_adapter_runtime_params *params)
+{
+	struct event_ml_adapter *adapter;
+	int ret;
+
+	if (!emla_valid_id(id)) {
+		RTE_EDEV_LOG_ERR("Invalid ML adapter id = %d", id);
+		return -EINVAL;
+	}
+
+	if (params == NULL) {
+		RTE_EDEV_LOG_ERR("params pointer is NULL");
+		return -EINVAL;
+	}
+
+	adapter = emla_id_to_adapter(id);
+	if (adapter == NULL)
+		return -EINVAL;
+
+	ret = ml_adapter_cap_check(adapter);
+	if (ret)
+		return ret;
+
+	rte_spinlock_lock(&adapter->lock);
+	adapter->max_nb = params->max_nb;
+	rte_spinlock_unlock(&adapter->lock);
+
+	return 0;
+}
+
+int
+rte_event_ml_adapter_runtime_params_get(uint8_t id,
+					struct rte_event_ml_adapter_runtime_params *params)
+{
+	struct event_ml_adapter *adapter;
+	int ret;
+
+	if (!emla_valid_id(id)) {
+		RTE_EDEV_LOG_ERR("Invalid ML adapter id = %d", id);
+		return -EINVAL;
+	}
+
+	if (params == NULL) {
+		RTE_EDEV_LOG_ERR("params pointer is NULL");
+		return -EINVAL;
+	}
+
+	adapter = emla_id_to_adapter(id);
+	if (adapter == NULL)
+		return -EINVAL;
+
+	ret = ml_adapter_cap_check(adapter);
+	if (ret)
+		return ret;
+
+	params->max_nb = adapter->max_nb;
+
+	return 0;
+}