From patchwork Thu Sep 28 16:49:47 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 8bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 132147 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id E4E4C42659; Thu, 28 Sep 2023 18:50:20 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id D25E6402EE; Thu, 28 Sep 2023 18:50:20 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 4FAAF402DC for ; Thu, 28 Sep 2023 18:50:18 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 38SAfcK5003471; Thu, 28 Sep 2023 09:50:17 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-type : content-transfer-encoding; s=pfpt0220; bh=W/b5VoiJ4gTcgLw6VjB4DmR+ZAQBayTOtIE4GicthdY=; b=KCWAG1R1bqoIjhxH9uAR+khx2roI/t6ULtkn5ifBBlxEIPJlaRzqSXrkP2I16IdXTE3R m+e2fOgcKn8ADF+BDk8WcBX2Mza9omy4ckc0Gz/XIBFDKtFoNqHOxTnwYHyJROolZClJ Aig8Z4Ji7gc9rGgQ56eim9IVXk9f5RMzH32AADvUCWScVepPaMslAxz1mKD4hJfTUjha WcJ0x5bmVDht3yBSzBrqww8jMZuwtjqdhMo+pjgCbZ1bKjpcpJgmUdWsXfAywDD6TGrV YpwrFy0DnR09szaRJKASyfoy9L/GoieNlbhGvMb0NJUai94kOQvjUvpE8M+CcDAX5PVx nA== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3td7y6sd89-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 28 Sep 2023 09:50:16 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Thu, 28 Sep 2023 09:50:14 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Thu, 28 Sep 2023 09:50:14 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id A5A165C68FA; Thu, 28 Sep 2023 09:50:09 -0700 (PDT) From: Amit Prakash Shukla To: Thomas Monjalon , Amit Prakash Shukla , Jerin Jacob CC: , , , , , , , , , , , , Subject: [PATCH v6 01/12] eventdev/dma: introduce DMA adapter Date: Thu, 28 Sep 2023 22:19:47 +0530 Message-ID: <20230928164959.340575-2-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230928164959.340575-1-amitprakashs@marvell.com> References: <20230928103623.216287-1-amitprakashs@marvell.com> <20230928164959.340575-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: jUMfpoWkbHJ5lC2peG6kYegRzjPukkbV X-Proofpoint-GUID: jUMfpoWkbHJ5lC2peG6kYegRzjPukkbV X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.980,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-09-28_16,2023-09-28_03,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Introduce event dma adapter interface to transfer packets between dma device and event device. 
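For review orientation only (an editorial sketch, not part of the patch): with the API added by this series, a minimal end-to-end flow could look as below. All identifiers (``id``, ``evdev_id``, ``dma_dev_id``, ``vchan``) are placeholders, and the port configuration is assumed to be filled from ``rte_event_dev_info_get()`` limits as described in the programmer's guide.

.. code-block:: c

    uint8_t id = 0;                       /* adapter identifier (placeholder) */
    uint8_t evdev_id = 0;                 /* event device identifier (placeholder) */
    int16_t dma_dev_id = 0;               /* dmadev identifier (placeholder) */
    uint16_t vchan = 0;                   /* pre-configured dmadev vchan (placeholder) */
    struct rte_event_port_conf port_conf; /* assumed filled from rte_event_dev_info_get() limits */
    uint32_t cap;

    /* Query how this eventdev/dmadev pair can be connected. */
    rte_event_dma_adapter_caps_get(evdev_id, dma_dev_id, &cap);

    /* Create the adapter and attach the already configured vchan.
     * Passing NULL event info assumes the VCHAN_EV_BIND capability is not set.
     */
    rte_event_dma_adapter_create(id, evdev_id, &port_conf, RTE_EVENT_DMA_ADAPTER_OP_FORWARD);
    rte_event_dma_adapter_vchan_add(id, dma_dev_id, vchan, NULL);

    /* Start the adapter after the eventdev itself has been started. */
    rte_event_dma_adapter_start(id);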
Signed-off-by: Amit Prakash Shukla Acked-by: Jerin Jacob --- MAINTAINERS | 6 + doc/api/doxy-api-index.md | 1 + doc/guides/eventdevs/features/default.ini | 8 + doc/guides/prog_guide/event_dma_adapter.rst | 264 ++++ doc/guides/prog_guide/eventdev.rst | 8 +- .../img/event_dma_adapter_op_forward.svg | 1086 +++++++++++++++++ .../img/event_dma_adapter_op_new.svg | 1079 ++++++++++++++++ doc/guides/prog_guide/index.rst | 1 + doc/guides/rel_notes/release_23_11.rst | 5 + lib/eventdev/eventdev_pmd.h | 171 ++- lib/eventdev/eventdev_private.c | 10 + lib/eventdev/meson.build | 1 + lib/eventdev/rte_event_dma_adapter.h | 581 +++++++++ lib/eventdev/rte_eventdev.h | 44 + lib/eventdev/rte_eventdev_core.h | 8 +- lib/eventdev/version.map | 16 + lib/meson.build | 2 +- 17 files changed, 3285 insertions(+), 6 deletions(-) create mode 100644 doc/guides/prog_guide/event_dma_adapter.rst create mode 100644 doc/guides/prog_guide/img/event_dma_adapter_op_forward.svg create mode 100644 doc/guides/prog_guide/img/event_dma_adapter_op_new.svg create mode 100644 lib/eventdev/rte_event_dma_adapter.h diff --git a/MAINTAINERS b/MAINTAINERS index a926155f26..4ebbbe8bb3 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -540,6 +540,12 @@ F: lib/eventdev/*crypto_adapter* F: app/test/test_event_crypto_adapter.c F: doc/guides/prog_guide/event_crypto_adapter.rst +Eventdev DMA Adapter API +M: Amit Prakash Shukla +T: git://dpdk.org/next/dpdk-next-eventdev +F: lib/eventdev/*dma_adapter* +F: doc/guides/prog_guide/event_dma_adapter.rst + Raw device API M: Sachin Saxena M: Hemant Agrawal diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md index fdeda13932..b7df7be4d9 100644 --- a/doc/api/doxy-api-index.md +++ b/doc/api/doxy-api-index.md @@ -29,6 +29,7 @@ The public API headers are grouped by topics: [event_eth_tx_adapter](@ref rte_event_eth_tx_adapter.h), [event_timer_adapter](@ref rte_event_timer_adapter.h), [event_crypto_adapter](@ref rte_event_crypto_adapter.h), + [event_dma_adapter](@ref rte_event_dma_adapter.h), [rawdev](@ref rte_rawdev.h), [metrics](@ref rte_metrics.h), [bitrate](@ref rte_bitrate.h), diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini index 00360f60c6..73a52d915b 100644 --- a/doc/guides/eventdevs/features/default.ini +++ b/doc/guides/eventdevs/features/default.ini @@ -44,6 +44,14 @@ internal_port_op_fwd = internal_port_qp_ev_bind = session_private_data = +; +; Features of a default DMA adapter. +; +[DMA adapter Features] +internal_port_op_new = +internal_port_op_fwd = +internal_port_vchan_ev_bind = + ; ; Features of a default Timer adapter. ; diff --git a/doc/guides/prog_guide/event_dma_adapter.rst b/doc/guides/prog_guide/event_dma_adapter.rst new file mode 100644 index 0000000000..701e50d042 --- /dev/null +++ b/doc/guides/prog_guide/event_dma_adapter.rst @@ -0,0 +1,264 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright (c) 2023 Marvell. + +Event DMA Adapter Library +========================= + +DPDK :doc:`Eventdev library ` provides event driven programming model with features +to schedule events. :doc:`DMA Device library ` provides an interface to DMA poll mode +drivers that support DMA operations. Event DMA Adapter is intended to bridge between the event +device and the DMA device. + +Packet flow from DMA device to the event device can be accomplished using software and hardware +based transfer mechanisms. The adapter queries an eventdev PMD to determine which mechanism to +be used. 
The adapter uses an EAL service core function for software based packet transfer and +uses the eventdev PMD functions to configure hardware based packet transfer between DMA device +and the event device. DMA adapter uses a new event type called ``RTE_EVENT_TYPE_DMADEV`` to +indicate the source of event. + +Application can choose to submit an DMA operation directly to an DMA device or send it to an DMA +adapter via eventdev based on ``RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD`` capability. The +first mode is known as the event new (``RTE_EVENT_DMA_ADAPTER_OP_NEW``) mode and the second as the +event forward (``RTE_EVENT_DMA_ADAPTER_OP_FORWARD``) mode. Choice of mode can be specified while +creating the adapter. In the former mode, it is the application's responsibility to enable +ingress packet ordering. In the latter mode, it is the adapter's responsibility to enable +ingress packet ordering. + + +Adapter Modes +------------- + +RTE_EVENT_DMA_ADAPTER_OP_NEW mode +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +In the ``RTE_EVENT_DMA_ADAPTER_OP_NEW`` mode, application submits DMA operations directly to an DMA +device. The adapter then dequeues DMA completions from the DMA device and enqueues them as events +to the event device. This mode does not ensure ingress ordering as the application directly +enqueues to the dmadev without going through DMA/atomic stage. In this mode, events dequeued +from the adapter are treated as new events. The application has to specify event information +(response information) which is needed to enqueue an event after the DMA operation is completed. + +.. _figure_event_dma_adapter_op_new: + +.. figure:: img/event_dma_adapter_op_new.* + + Working model of ``RTE_EVENT_DMA_ADAPTER_OP_NEW`` mode + + +RTE_EVENT_DMA_ADAPTER_OP_FORWARD mode +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +In the ``RTE_EVENT_DMA_ADAPTER_OP_FORWARD`` mode, if the event PMD and DMA PMD supports internal +event port (``RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), the application should use +``rte_event_dma_adapter_enqueue()`` API to enqueue DMA operations as events to DMA adapter. If +not, application retrieves DMA adapter's event port using ``rte_event_dma_adapter_event_port_get()`` +API, links its event queue to this port and starts enqueuing DMA operations as events to eventdev +using ``rte_event_enqueue_burst()``. The adapter then dequeues the events and submits the DMA +operations to the dmadev. After the DMA operation is complete, the adapter enqueues events to the +event device. + +Applications can use this mode when ingress packet ordering is needed. In this mode, events +dequeued from the adapter will be treated as forwarded events. Application has to specify event +information (response information) needed to enqueue the event after the DMA operation has +completed. + +.. _figure_event_dma_adapter_op_forward: + +.. figure:: img/event_dma_adapter_op_forward.* + + Working model of ``RTE_EVENT_DMA_ADAPTER_OP_FORWARD`` mode + + +API Overview +------------ + +This section has a brief introduction to the event DMA adapter APIs. The application is expected +to create an adapter which is associated with a single eventdev, then add dmadev and vchan to the +adapter instance. + + +Create an adapter instance +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +An adapter instance is created using ``rte_event_dma_adapter_create()``. This function is called +with event device to be associated with the adapter and port configuration for the adapter to +setup an event port (if the adapter needs to use a service function). 
+ +Adapter can be started in ``RTE_EVENT_DMA_ADAPTER_OP_NEW`` or ``RTE_EVENT_DMA_ADAPTER_OP_FORWARD`` +mode. + +.. code-block:: c + + enum rte_event_dma_adapter_mode mode; + struct rte_event_dev_info dev_info; + struct rte_event_port_conf conf; + uint8_t evdev_id; + uint8_t dma_id; + int ret; + + ret = rte_event_dev_info_get(dma_id, &dev_info); + + conf.new_event_threshold = dev_info.max_num_events; + conf.dequeue_depth = dev_info.max_event_port_dequeue_depth; + conf.enqueue_depth = dev_info.max_event_port_enqueue_depth; + mode = RTE_EVENT_DMA_ADAPTER_OP_FORWARD; + ret = rte_event_dma_adapter_create(dma_id, evdev_id, &conf, mode); + + +``rte_event_dma_adapter_create_ext()`` function can be used by the application to have a finer +control on eventdev port allocation and setup. The ``rte_event_dma_adapter_create_ext()`` +function is passed a callback function. The callback function is invoked if the adapter creates +a service function and uses an event port for it. The callback is expected to fill the +``struct rte_event_dma_adapter_conf`` structure passed to it. + +In the ``RTE_EVENT_DMA_ADAPTER_OP_FORWARD`` mode, if the event PMD and DMA PMD supports internal +event port (``RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), events with DMA operations should +be enqueued to the DMA adapter using ``rte_event_dma_adapter_enqueue()`` API. If not, the event port +created by the adapter can be retrieved using ``rte_event_dma_adapter_event_port_get()`` API. An +application can use this event port to link with an event queue, on which it enqueues events +towards the DMA adapter using ``rte_event_enqueue_burst()``. + +.. code-block:: c + + uint8_t dma_adpt_id, evdev_id, dma_dev_id, dma_ev_port_id, app_qid; + struct rte_event ev; + uint32_t cap; + int ret; + + // Fill in event info and update event_ptr with rte_dma_op + memset(&ev, 0, sizeof(ev)); + . + . + ev.event_ptr = op; + + ret = rte_event_dma_adapter_caps_get(evdev_id, dma_dev_id, &cap); + if (cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) { + ret = rte_event_dma_adapter_enqueue(evdev_id, app_ev_port_id, ev, nb_events); + } else { + ret = rte_event_dma_adapter_event_port_get(dma_adpt_id, &dma_ev_port_id); + ret = rte_event_queue_setup(evdev_id, app_qid, NULL); + ret = rte_event_port_link(evdev_id, dma_ev_port_id, &app_qid, NULL, 1); + ev.queue_id = app_qid; + ret = rte_event_enqueue_burst(evdev_id, app_ev_port_id, ev, nb_events); + } + + +Event device configuration for service based adapter +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +When ``rte_event_dma_adapter_create()`` is used for creating adapter instance, +``rte_event_dev_config::nb_event_ports`` is automatically incremented, and event device is +reconfigured with additional event port during service initialization. This event device +reconfigure logic also increments the ``rte_event_dev_config::nb_single_link_event_port_queues`` +parameter if the adapter event port config is of type ``RTE_EVENT_PORT_CFG_SINGLE_LINK``. + +Applications using this mode of adapter creation need not configure the event device with +``rte_event_dev_config::nb_event_ports`` and +``rte_event_dev_config::nb_single_link_event_port_queues`` parameters required for DMA adapter when +the adapter is created using the above-mentioned API. + + +Querying adapter capabilities +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The ``rte_event_dma_adapter_caps_get()`` function allows the application to query the adapter +capabilities for an eventdev and dmadev combination. 
This API provides whether dmadev and eventdev +are connected using internal HW port or not. + +.. code-block:: c + + rte_event_dma_adapter_caps_get(dev_id, dma_dev_id, &cap); + + +Adding vchan to the adapter instance +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +dmadev device id and vchan are configured using dmadev APIs. For more information +see :doc:`here `. + +.. code-block:: c + + struct rte_dma_vchan_conf vchan_conf; + struct rte_dma_conf dev_conf; + uint8_t dev_id = 0; + uint16_t vchan = 0; + + rte_dma_configure(dev_id, &dev_conf); + rte_dma_vchan_setup(dev_id, vchan, &vchan_conf); + +These dmadev id and vchan are added to the instance using the +``rte_event_dma_adapter_vchan_add()`` API. The same is removed using +``rte_event_dma_adapter_vchan_del()`` API. If hardware supports +``RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_VCHAN_EV_BIND`` capability, event information must be passed to the add API. + +.. code-block:: c + + uint32_t cap; + int ret; + + ret = rte_event_dma_adapter_caps_get(evdev_id, dma_dev_id, &cap); + if (cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_VCHAN_EV_BIND) { + struct rte_event event; + + rte_event_dma_adapter_vchan_add(id, dma_dev_id, vchan, &conf); + } else + rte_event_dma_adapter_vchan_add(id, dma_dev_id, vchan, NULL); + + +Configuring service function +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +If the adapter uses a service function, the application is required to assign a service core to +the service function as show below. + +.. code-block:: c + + uint32_t service_id; + + if (rte_event_dma_adapter_service_id_get(dma_id, &service_id) == 0) + rte_service_map_lcore_set(service_id, CORE_ID); + + +Set event response information +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +In the RTE_EVENT_DMA_ADAPTER_OP_FORWARD / RTE_EVENT_DMA_ADAPTER_OP_NEW mode, the application +specifies the dmadev ID and vchan ID in ``struct rte_event_dma_adapter_op`` and the event +information (response information) needed to enqueue an event after the DMA operation has +completed. The response information is specified in ``struct rte_event`` and appended to the +``struct rte_event_dma_adapter_op``. + + +Start the adapter instance +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The application calls ``rte_event_dma_adapter_start()`` to start the adapter. This function calls +the start callbacks of the eventdev PMDs for hardware based eventdev-dmadev connections and +``rte_service_run_state_set()`` to enable the service function if one exists. + +.. code-block:: c + + rte_event_dma_adapter_start(id); + +.. Note:: + + The eventdev to which the event_dma_adapter is connected should be started before calling + rte_event_dma_adapter_start(). + + +Get adapter statistics +~~~~~~~~~~~~~~~~~~~~~~ + +The ``rte_event_dma_adapter_stats_get()`` function reports counters defined in struct +``rte_event_dma_adapter_stats``. The received packet and enqueued event counts are a sum of the +counts from the eventdev PMD callbacks if the callback is supported, and the counts maintained by +the service function, if one exists. + +Set/Get adapter runtime configuration parameters +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The runtime configuration parameters of adapter can be set/get using +``rte_event_dma_adapter_runtime_params_set()`` and +``rte_event_dma_adapter_runtime_params_get()`` respectively. +The parameters that can be set/get are defined in +``struct rte_event_dma_adapter_runtime_params``. 
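As an illustrative sketch (added for clarity, not part of the patch), the runtime parameter APIs described above could be combined as follows for an already created adapter ``id``; the value ``128`` is only an example:

.. code-block:: c

    struct rte_event_dma_adapter_runtime_params params;
    int ret;

    /* Initialize the structure, including its reserved fields. */
    ret = rte_event_dma_adapter_runtime_params_init(&params);

    /* Example value only: process at most 128 DMA ops per adapter run. */
    params.max_nb = 128;
    ret = rte_event_dma_adapter_runtime_params_set(id, &params);

    /* Read back the values currently applied by the adapter. */
    ret = rte_event_dma_adapter_runtime_params_get(id, &params);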
diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst index 2c83176846..ff55115d0d 100644 --- a/doc/guides/prog_guide/eventdev.rst +++ b/doc/guides/prog_guide/eventdev.rst @@ -333,7 +333,8 @@ eventdev. .. Note:: EventDev needs to be started before starting the event producers such - as event_eth_rx_adapter, event_timer_adapter and event_crypto_adapter. + as event_eth_rx_adapter, event_timer_adapter, event_crypto_adapter and + event_dma_adapter. Ingress of New Events ~~~~~~~~~~~~~~~~~~~~~ @@ -445,8 +446,9 @@ using ``rte_event_dev_stop_flush_callback_register()`` function. .. Note:: The event producers such as ``event_eth_rx_adapter``, - ``event_timer_adapter`` and ``event_crypto_adapter`` - need to be stopped before stopping the event device. + ``event_timer_adapter``, ``event_crypto_adapter`` and + ``event_dma_adapter`` need to be stopped before stopping + the event device. Summary -------
diff --git a/doc/guides/prog_guide/img/event_dma_adapter_op_forward.svg b/doc/guides/prog_guide/img/event_dma_adapter_op_forward.svg new file mode 100644 index 0000000000..b7fe1fecf2 --- /dev/null +++ b/doc/guides/prog_guide/img/event_dma_adapter_op_forward.svg @@ -0,0 +1,1086 @@ [New SVG figure, 1086 lines of markup omitted. It illustrates the working model of ``RTE_EVENT_DMA_ADAPTER_OP_FORWARD`` mode with Eventdev, DMA Adapter, Application in ordered stage and DMA Device blocks, annotated: 1. Events from the previous stage. 2. Application in ordered stage dequeues events from eventdev. 3. Application enqueues DMA operations as events to eventdev. 4. DMA adapter dequeues event from eventdev. 5. DMA adapter submits DMA operations to DMA Device (Atomic stage). 6. DMA adapter dequeues DMA completions from DMA Device. 7. DMA adapter enqueues events to the eventdev. 8. Events to the next stage.]
diff --git a/doc/guides/prog_guide/img/event_dma_adapter_op_new.svg b/doc/guides/prog_guide/img/event_dma_adapter_op_new.svg new file mode 100644 index 0000000000..e9e8bb2b98 --- /dev/null +++ b/doc/guides/prog_guide/img/event_dma_adapter_op_new.svg @@ -0,0 +1,1079 @@ [New SVG figure, 1079 lines of markup omitted. It illustrates the working model of ``RTE_EVENT_DMA_ADAPTER_OP_NEW`` mode with Application, Atomic Stage + Enqueue to DMA Device, Eventdev, DMA Adapter and DMA Device blocks, annotated: 1. Application dequeues events from the previous stage. 2. Application prepares the DMA operations. 3. DMA operations are submitted to dmadev by application. 4. DMA adapter dequeues DMA completions from DMA device. 5. DMA adapter enqueues events to the eventdev. 6. Application dequeues from eventdev and prepares for further processing.]
diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst index 52a6d9e7aa..beaa4b8869 100644 --- a/doc/guides/prog_guide/index.rst +++ b/doc/guides/prog_guide/index.rst @@ -60,6 +60,7 @@ Programmer's Guide event_ethernet_tx_adapter event_timer_adapter event_crypto_adapter + event_dma_adapter qos_framework power_man packet_classif_access_ctrl
diff --git a/doc/guides/rel_notes/release_23_11.rst b/doc/guides/rel_notes/release_23_11.rst index b34ddc0860..1a1f337d23 100644 --- a/doc/guides/rel_notes/release_23_11.rst +++ b/doc/guides/rel_notes/release_23_11.rst @@ -89,6 +89,11 @@ New Features * Added support for ``remaining_ticks_get`` timer adapter PMD callback to get the remaining ticks to expire for a given event timer. +* **Added event DMA adapter library.** + + * Added the Event DMA Adapter Library. This library extends the event-based + model by introducing APIs that allow applications to enqueue/dequeue DMA + operations to/from dmadev as events scheduled by an event device. Removed Items -------------
diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h index f62f42e140..f7227c0bfd 100644 --- a/lib/eventdev/eventdev_pmd.h +++ b/lib/eventdev/eventdev_pmd.h @@ -178,8 +178,12 @@ struct rte_eventdev { event_tx_adapter_enqueue_t txa_enqueue; /**< Pointer to PMD eth Tx adapter enqueue function. */ event_crypto_adapter_enqueue_t ca_enqueue; + /**< Pointer to PMD crypto adapter enqueue function. */ - uint64_t reserved_64s[4]; /**< Reserved for future fields */ + event_dma_adapter_enqueue_t dma_enqueue; + /**< Pointer to PMD DMA adapter enqueue function. */ + + uint64_t reserved_64s[3]; /**< Reserved for future fields */ void *reserved_ptrs[3]; /**< Reserved for future fields */ } __rte_cache_aligned; @@ -1320,6 +1324,156 @@ typedef int (*eventdev_eth_tx_adapter_queue_stop) #define eventdev_stop_flush_t rte_eventdev_stop_flush_t +/** + * Retrieve the event device's DMA adapter capabilities for the + * specified DMA device + * + * @param dev + * Event device pointer + * + * @param dma_dev_id + * DMA device identifier + * + * @param[out] caps + * A pointer to memory filled with event adapter capabilities. + * It is expected to be pre-allocated & initialized by caller. + * + * @return + * - 0: Success, driver provides event adapter capabilities for the + * dmadev. + * - <0: Error code returned by the driver function. + * + */ +typedef int (*eventdev_dma_adapter_caps_get_t)(const struct rte_eventdev *dev, + const int16_t dma_dev_id, uint32_t *caps); + +/** + * Add DMA vchan queue to event device. This callback is invoked if + * the caps returned from rte_event_dma_adapter_caps_get(, dmadev_id) + * has RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_* set. + * + * @param dev + * Event device pointer + * + * @param dma_dev_id + * DMA device identifier + * + * @param vchan_id + * dmadev vchan queue identifier. + * + * @param event + * Event information required for binding dmadev vchan to event queue. + * This structure will have a valid value for only those HW PMDs supporting + * @see RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_VCHAN_EV_BIND capability. + * + * @return + * - 0: Success, dmadev vchan added successfully. + * - <0: Error code returned by the driver function.
+ * + */ +typedef int (*eventdev_dma_adapter_vchan_add_t)(const struct rte_eventdev *dev, + const int16_t dma_dev_id, + uint16_t vchan_id, + const struct rte_event *event); + +/** + * Delete DMA vhcan to event device. This callback is invoked if + * the caps returned from rte_event_dma_adapter_caps_get(, dmadev_id) + * has RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_* set. + * + * @param dev + * Event device pointer + * + * @param dma_dev_id + * DMA device identifier + * + * @param vchan_id + * dmadev vchan identifier. + * + * @return + * - 0: Success, dmadev vchan deleted successfully. + * - <0: Error code returned by the driver function. + * + */ +typedef int (*eventdev_dma_adapter_vchan_del_t)(const struct rte_eventdev *dev, + const int16_t dma_dev_id, + uint16_t vchan_id); + +/** + * Start DMA adapter. This callback is invoked if + * the caps returned from rte_event_dma_adapter_caps_get(.., dmadev_id) + * has RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_* set and vchan for dmadev_id + * have been added to the event device. + * + * @param dev + * Event device pointer + * + * @param dma_dev_id + * DMA device identifier + * + * @return + * - 0: Success, DMA adapter started successfully. + * - <0: Error code returned by the driver function. + */ +typedef int (*eventdev_dma_adapter_start_t)(const struct rte_eventdev *dev, + const int16_t dma_dev_id); + +/** + * Stop DMA adapter. This callback is invoked if + * the caps returned from rte_event_dma_adapter_caps_get(.., dmadev_id) + * has RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_* set and vchan for dmadev_id + * have been added to the event device. + * + * @param dev + * Event device pointer + * + * @param dma_dev_id + * DMA device identifier + * + * @return + * - 0: Success, DMA adapter stopped successfully. + * - <0: Error code returned by the driver function. + */ +typedef int (*eventdev_dma_adapter_stop_t)(const struct rte_eventdev *dev, + const int16_t dma_dev_id); + +struct rte_event_dma_adapter_stats; + +/** + * Retrieve DMA adapter statistics. + * + * @param dev + * Event device pointer + * + * @param dma_dev_id + * DMA device identifier + * + * @param[out] stats + * Pointer to stats structure + * + * @return + * Return 0 on success. + */ +typedef int (*eventdev_dma_adapter_stats_get)(const struct rte_eventdev *dev, + const int16_t dma_dev_id, + struct rte_event_dma_adapter_stats *stats); + +/** + * Reset DMA adapter statistics. + * + * @param dev + * Event device pointer + * + * @param dma_dev_id + * DMA device identifier + * + * @return + * Return 0 on success. + */ +typedef int (*eventdev_dma_adapter_stats_reset)(const struct rte_eventdev *dev, + const int16_t dma_dev_id); + + /** Event device operations function pointer table */ struct eventdev_ops { eventdev_info_get_t dev_infos_get; /**< Get device info. 
*/ @@ -1440,6 +1594,21 @@ struct eventdev_ops { eventdev_eth_tx_adapter_queue_stop eth_tx_adapter_queue_stop; /**< Stop Tx queue assigned to Tx adapter instance */ + eventdev_dma_adapter_caps_get_t dma_adapter_caps_get; + /**< Get DMA adapter capabilities */ + eventdev_dma_adapter_vchan_add_t dma_adapter_vchan_add; + /**< Add vchan queue to DMA adapter */ + eventdev_dma_adapter_vchan_del_t dma_adapter_vchan_del; + /**< Delete vchan queue from DMA adapter */ + eventdev_dma_adapter_start_t dma_adapter_start; + /**< Start DMA adapter */ + eventdev_dma_adapter_stop_t dma_adapter_stop; + /**< Stop DMA adapter */ + eventdev_dma_adapter_stats_get dma_adapter_stats_get; + /**< Get DMA stats */ + eventdev_dma_adapter_stats_reset dma_adapter_stats_reset; + /**< Reset DMA stats */ + eventdev_selftest dev_selftest; /**< Start eventdev Selftest */ diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c index 1d3d9d357e..18ed8bf3c8 100644 --- a/lib/eventdev/eventdev_private.c +++ b/lib/eventdev/eventdev_private.c @@ -81,6 +81,14 @@ dummy_event_crypto_adapter_enqueue(__rte_unused void *port, return 0; } +static uint16_t +dummy_event_dma_adapter_enqueue(__rte_unused void *port, __rte_unused struct rte_event ev[], + __rte_unused uint16_t nb_events) +{ + RTE_EDEV_LOG_ERR("event DMA adapter enqueue requested for unconfigured event device"); + return 0; +} + void event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op) { @@ -97,6 +105,7 @@ event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op) .txa_enqueue_same_dest = dummy_event_tx_adapter_enqueue_same_dest, .ca_enqueue = dummy_event_crypto_adapter_enqueue, + .dma_enqueue = dummy_event_dma_adapter_enqueue, .data = dummy_data, }; @@ -117,5 +126,6 @@ event_dev_fp_ops_set(struct rte_event_fp_ops *fp_op, fp_op->txa_enqueue = dev->txa_enqueue; fp_op->txa_enqueue_same_dest = dev->txa_enqueue_same_dest; fp_op->ca_enqueue = dev->ca_enqueue; + fp_op->dma_enqueue = dev->dma_enqueue; fp_op->data = dev->data->ports; } diff --git a/lib/eventdev/meson.build b/lib/eventdev/meson.build index 6edf98dfa5..21347f7c4c 100644 --- a/lib/eventdev/meson.build +++ b/lib/eventdev/meson.build @@ -25,6 +25,7 @@ sources = files( ) headers = files( 'rte_event_crypto_adapter.h', + 'rte_event_dma_adapter.h', 'rte_event_eth_rx_adapter.h', 'rte_event_eth_tx_adapter.h', 'rte_event_ring.h', diff --git a/lib/eventdev/rte_event_dma_adapter.h b/lib/eventdev/rte_event_dma_adapter.h new file mode 100644 index 0000000000..e924ab673d --- /dev/null +++ b/lib/eventdev/rte_event_dma_adapter.h @@ -0,0 +1,581 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Marvell. + */ + +#ifndef RTE_EVENT_DMA_ADAPTER +#define RTE_EVENT_DMA_ADAPTER + +/** + * @file rte_event_dma_adapter.h + * + * @warning + * @b EXPERIMENTAL: + * All functions in this file may be changed or removed without prior notice. + * + * DMA Event Adapter API. + * + * Eventdev library provides adapters to bridge between various components for providing new + * event source. The event DMA adapter is one of those adapters which is intended to bridge + * between event devices and DMA devices. + * + * The DMA adapter adds support to enqueue / dequeue DMA operations to / from event device. The + * packet flow between DMA device and the event device can be accomplished using both SW and HW + * based transfer mechanisms. 
The adapter uses an EAL service core function for SW based packet + * transfer and uses the eventdev PMD functions to configure HW based packet transfer between the + * DMA device and the event device. + * + * The application can choose to submit a DMA operation directly to an DMA device or send it to the + * DMA adapter via eventdev based on RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD capability. The + * first mode is known as the event new (RTE_EVENT_DMA_ADAPTER_OP_NEW) mode and the second as the + * event forward (RTE_EVENT_DMA_ADAPTER_OP_FORWARD) mode. The choice of mode can be specified while + * creating the adapter. In the former mode, it is an application responsibility to enable ingress + * packet ordering. In the latter mode, it is the adapter responsibility to enable the ingress + * packet ordering. + * + * + * Working model of RTE_EVENT_DMA_ADAPTER_OP_NEW mode: + * + * +--------------+ +--------------+ + * | | | DMA stage | + * | Application |---[2]-->| + enqueue to | + * | | | dmadev | + * +--------------+ +--------------+ + * ^ ^ | + * | | [3] + * [6] [1] | + * | | | + * +--------------+ | + * | | | + * | Event device | | + * | | | + * +--------------+ | + * ^ | + * | | + * [5] | + * | v + * +--------------+ +--------------+ + * | | | | + * | DMA adapter |<--[4]---| dmadev | + * | | | | + * +--------------+ +--------------+ + * + * + * [1] Application dequeues events from the previous stage. + * [2] Application prepares the DMA operations. + * [3] DMA operations are submitted to dmadev by application. + * [4] DMA adapter dequeues DMA completions from dmadev. + * [5] DMA adapter enqueues events to the eventdev. + * [6] Application dequeues from eventdev for further processing. + * + * In the RTE_EVENT_DMA_ADAPTER_OP_NEW mode, application submits DMA operations directly to DMA + * device. The DMA adapter then dequeues DMA completions from DMA device and enqueue events to the + * event device. This mode does not ensure ingress ordering, if the application directly enqueues + * to dmadev without going through DMA / atomic stage i.e. removing item [1] and [2]. + * + * Events dequeued from the adapter will be treated as new events. In this mode, application needs + * to specify event information (response information) which is needed to enqueue an event after the + * DMA operation is completed. + * + * + * Working model of RTE_EVENT_DMA_ADAPTER_OP_FORWARD mode: + * + * +--------------+ +--------------+ + * --[1]-->| |---[2]-->| Application | + * | Event device | | in | + * <--[8]--| |<--[3]---| Ordered stage| + * +--------------+ +--------------+ + * ^ | + * | [4] + * [7] | + * | v + * +----------------+ +--------------+ + * | |--[5]->| | + * | DMA adapter | | dmadev | + * | |<-[6]--| | + * +----------------+ +--------------+ + * + * + * [1] Events from the previous stage. + * [2] Application in ordered stage dequeues events from eventdev. + * [3] Application enqueues DMA operations as events to eventdev. + * [4] DMA adapter dequeues event from eventdev. + * [5] DMA adapter submits DMA operations to dmadev (Atomic stage). + * [6] DMA adapter dequeues DMA completions from dmadev + * [7] DMA adapter enqueues events to the eventdev + * [8] Events to the next stage + * + * In the event forward (RTE_EVENT_DMA_ADAPTER_OP_FORWARD) mode, if the HW supports the capability + * RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, application can directly submit the DMA + * operations to the dmadev. 
If not, application retrieves the event port of the DMA adapter + * through the API, rte_event_DMA_adapter_event_port_get(). Then, links its event queue to this + * port and starts enqueuing DMA operations as events to the eventdev. The adapter then dequeues + * the events and submits the DMA operations to the dmadev. After the DMA completions, the adapter + * enqueues events to the event device. + * + * Application can use this mode, when ingress packet ordering is needed. Events dequeued from the + * adapter will be treated as forwarded events. In this mode, the application needs to specify the + * dmadev ID and queue pair ID (request information) needed to enqueue an DMA operation in addition + * to the event information (response information) needed to enqueue an event after the DMA + * operation has completed. + * + * The event DMA adapter provides common APIs to configure the packet flow from the DMA device to + * event devices for both SW and HW based transfers. The DMA event adapter's functions are: + * + * - rte_event_dma_adapter_create_ext() + * - rte_event_dma_adapter_create() + * - rte_event_dma_adapter_free() + * - rte_event_dma_adapter_vchan_add() + * - rte_event_dma_adapter_vchan_del() + * - rte_event_dma_adapter_start() + * - rte_event_dma_adapter_stop() + * - rte_event_dma_adapter_stats_get() + * - rte_event_dma_adapter_stats_reset() + * + * The application creates an instance using rte_event_dma_adapter_create() or + * rte_event_dma_adapter_create_ext(). + * + * dmadev queue pair addition / deletion is done using the rte_event_dma_adapter_vchan_add() / + * rte_event_dma_adapter_vchan_del() APIs. If HW supports the capability + * RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_VCHAN_EV_BIND, event information must be passed to the + * add API. + * + */ + +#include + +#include +#include +#include + +#ifdef __cplusplus +extern "C" { +#endif + +/** + * A structure used to hold event based DMA operation entry. All the information + * required for a DMA transfer shall be populated in "struct rte_event_dma_adapter_op" + * instance. + */ +struct rte_event_dma_adapter_op { + struct rte_dma_sge *src_seg; + /**< Source segments. */ + struct rte_dma_sge *dst_seg; + /**< Destination segments. */ + uint16_t nb_src; + /**< Number of source segments. */ + uint16_t nb_dst; + /**< Number of destination segments. */ + uint64_t flags; + /**< Flags related to the operation. + * @see RTE_DMA_OP_FLAG_* + */ + int16_t dma_dev_id; + /**< DMA device ID to be used */ + uint16_t vchan; + /**< DMA vchan ID to be used */ + struct rte_mempool *op_mp; + /**< Mempool from which op is allocated. */ +}; + +/** + * DMA event adapter mode + */ +enum rte_event_dma_adapter_mode { + RTE_EVENT_DMA_ADAPTER_OP_NEW, + /**< Start the DMA adapter in event new mode. + * @see RTE_EVENT_OP_NEW. + * + * Application submits DMA operations to the dmadev. Adapter only dequeues the DMA + * completions from dmadev and enqueue events to the eventdev. + */ + + RTE_EVENT_DMA_ADAPTER_OP_FORWARD, + /**< Start the DMA adapter in event forward mode. + * @see RTE_EVENT_OP_FORWARD. + * + * Application submits DMA requests as events to the DMA adapter or DMA device based on + * RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD capability. DMA completions are enqueued + * back to the eventdev by DMA adapter. + */ +}; + +/** + * Adapter configuration structure that the adapter configuration callback function is expected to + * fill out. 
+ * + * @see rte_event_dma_adapter_conf_cb + */ +struct rte_event_dma_adapter_conf { + uint8_t event_port_id; + /** < Event port identifier, the adapter enqueues events to this port and dequeues DMA + * request events in RTE_EVENT_DMA_ADAPTER_OP_FORWARD mode. + */ + + uint32_t max_nb; + /**< The adapter can return early if it has processed at least max_nb DMA ops. This isn't + * treated as a requirement; batching may cause the adapter to process more than max_nb DMA + * ops. + */ +}; + +/** + * Adapter runtime configuration parameters + */ +struct rte_event_dma_adapter_runtime_params { + uint32_t max_nb; + /**< The adapter can return early if it has processed at least max_nb DMA ops. This isn't + * treated as a requirement; batching may cause the adapter to process more than max_nb DMA + * ops. + * + * Callback function passed to rte_event_dma_adapter_create_ext() configures the adapter + * with default value of max_nb. + * rte_event_dma_adapter_runtime_params_set() allows to re-configure max_nb during runtime + * (after adding at least one queue pair) + * + * This is valid for the devices without RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD or + * RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_NEW capability. + */ + + uint32_t rsvd[15]; + /**< Reserved fields for future expansion */ +}; + +/** + * Function type used for adapter configuration callback. The callback is used to fill in members of + * the struct rte_event_dma_adapter_conf, this callback is invoked when creating a SW service for + * packet transfer from dmadev vchan to the event device. The SW service is created within the + * function, rte_event_dma_adapter_vchan_add(), if SW based packet transfers from dmadev vchan + * to the event device are required. + * + * @param id + * Adapter identifier. + * @param evdev_id + * Event device identifier. + * @param conf + * Structure that needs to be populated by this callback. + * @param arg + * Argument to the callback. This is the same as the conf_arg passed to the + * rte_event_dma_adapter_create_ext(). + */ +typedef int (*rte_event_dma_adapter_conf_cb)(uint8_t id, uint8_t evdev_id, + struct rte_event_dma_adapter_conf *conf, void *arg); + +/** + * A structure used to retrieve statistics for an event DMA adapter instance. + */ +struct rte_event_dma_adapter_stats { + uint64_t event_poll_count; + /**< Event port poll count */ + + uint64_t event_deq_count; + /**< Event dequeue count */ + + uint64_t dma_enq_count; + /**< dmadev enqueue count */ + + uint64_t dma_enq_fail_count; + /**< dmadev enqueue failed count */ + + uint64_t dma_deq_count; + /**< dmadev dequeue count */ + + uint64_t event_enq_count; + /**< Event enqueue count */ + + uint64_t event_enq_retry_count; + /**< Event enqueue retry count */ + + uint64_t event_enq_fail_count; + /**< Event enqueue fail count */ +}; + +/** + * Create a new event DMA adapter with the specified identifier. + * + * @param id + * Adapter identifier. + * @param evdev_id + * Event device identifier. + * @param conf_cb + * Callback function that fills in members of a struct rte_event_dma_adapter_conf struct passed + * into it. + * @param mode + * Flag to indicate the mode of the adapter. + * @see rte_event_dma_adapter_mode + * @param conf_arg + * Argument that is passed to the conf_cb function. 
+ * + * @return + * - 0: Success + * - <0: Error code on failure + */ +__rte_experimental +int rte_event_dma_adapter_create_ext(uint8_t id, uint8_t evdev_id, + rte_event_dma_adapter_conf_cb conf_cb, + enum rte_event_dma_adapter_mode mode, void *conf_arg); + +/** + * Create a new event DMA adapter with the specified identifier. This function uses an internal + * configuration function that creates an event port. This default function reconfigures the event + * device with an additional event port and set up the event port using the port_config parameter + * passed into this function. In case the application needs more control in configuration of the + * service, it should use the rte_event_dma_adapter_create_ext() version. + * + * @param id + * Adapter identifier. + * @param evdev_id + * Event device identifier. + * @param port_config + * Argument of type *rte_event_port_conf* that is passed to the conf_cb function. + * @param mode + * Flag to indicate the mode of the adapter. + * @see rte_event_dma_adapter_mode + * + * @return + * - 0: Success + * - <0: Error code on failure + */ +__rte_experimental +int rte_event_dma_adapter_create(uint8_t id, uint8_t evdev_id, + struct rte_event_port_conf *port_config, + enum rte_event_dma_adapter_mode mode); + +/** + * Free an event DMA adapter + * + * @param id + * Adapter identifier. + * @return + * - 0: Success + * - <0: Error code on failure, If the adapter still has queue pairs added to it, the function + * returns -EBUSY. + */ +__rte_experimental +int rte_event_dma_adapter_free(uint8_t id); + +/** + * Retrieve the event port of an adapter. + * + * @param id + * Adapter identifier. + * + * @param [out] event_port_id + * Application links its event queue to this adapter port which is used in + * RTE_EVENT_DMA_ADAPTER_OP_FORWARD mode. + * + * @return + * - 0: Success + * - <0: Error code on failure. + */ +__rte_experimental +int rte_event_dma_adapter_event_port_get(uint8_t id, uint8_t *event_port_id); + +/** + * Add a vchan to an event DMA adapter. + * + * @param id + * Adapter identifier. + * @param dmadev_id + * dmadev identifier. + * @param vchan + * DMA device vchan identifier. If vchan is set -1, adapter adds all the + * preconfigured vchan to the instance. + * @param event + * If HW supports dmadev vchan to event queue binding, application is expected to fill in + * event information, else it will be NULL. + * @see RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_VCHAN_EV_BIND + * + * @return + * - 0: Success, vchan added correctly. + * - <0: Error code on failure. + */ +__rte_experimental +int rte_event_dma_adapter_vchan_add(uint8_t id, int16_t dmadev_id, uint16_t vchan, + const struct rte_event *event); + +/** + * Delete a vchan from an event DMA adapter. + * + * @param id + * Adapter identifier. + * @param dmadev_id + * DMA device identifier. + * @param vchan + * DMA device vchan identifier. + * + * @return + * - 0: Success, vchan deleted successfully. + * - <0: Error code on failure. + */ +__rte_experimental +int rte_event_dma_adapter_vchan_del(uint8_t id, int16_t dmadev_id, uint16_t vchan); + +/** + * Retrieve the service ID of an adapter. If the adapter doesn't use a rte_service function, this + * function returns -ESRCH. + * + * @param id + * Adapter identifier. + * @param [out] service_id + * A pointer to a uint32_t, to be filled in with the service id. + * + * @return + * - 0: Success + * - <0: Error code on failure, if the adapter doesn't use a rte_service function, this function + * returns -ESRCH. 
+ */ +__rte_experimental +int rte_event_dma_adapter_service_id_get(uint8_t id, uint32_t *service_id); + +/** + * Start event DMA adapter + * + * @param id + * Adapter identifier. + * + * @return + * - 0: Success, adapter started successfully. + * - <0: Error code on failure. + * + * @note The eventdev and dmadev to which the event_dma_adapter is connected should be started + * before calling rte_event_dma_adapter_start(). + */ +__rte_experimental +int rte_event_dma_adapter_start(uint8_t id); + +/** + * Stop event DMA adapter + * + * @param id + * Adapter identifier. + * + * @return + * - 0: Success, adapter stopped successfully. + * - <0: Error code on failure. + */ +__rte_experimental +int rte_event_dma_adapter_stop(uint8_t id); + +/** + * Initialize the adapter runtime configuration parameters + * + * @param params + * A pointer to structure of type struct rte_event_dma_adapter_runtime_params + * + * @return + * - 0: Success + * - <0: Error code on failure + */ +__rte_experimental +int rte_event_dma_adapter_runtime_params_init(struct rte_event_dma_adapter_runtime_params *params); + +/** + * Set the adapter runtime configuration parameters + * + * @param id + * Adapter identifier + * + * @param params + * A pointer to structure of type struct rte_event_dma_adapter_runtime_params with configuration + * parameter values. The reserved fields of this structure must be initialized to zero and the valid + * fields need to be set appropriately. This struct can be initialized using + * rte_event_dma_adapter_runtime_params_init() API to default values or application may reset this + * struct and update required fields. + * + * @return + * - 0: Success + * - <0: Error code on failure + */ +__rte_experimental +int rte_event_dma_adapter_runtime_params_set(uint8_t id, + struct rte_event_dma_adapter_runtime_params *params); + +/** + * Get the adapter runtime configuration parameters + * + * @param id + * Adapter identifier + * + * @param[out] params + * A pointer to structure of type struct rte_event_dma_adapter_runtime_params containing valid + * adapter parameters when return value is 0. + * + * @return + * - 0: Success + * - <0: Error code on failure + */ +__rte_experimental +int rte_event_dma_adapter_runtime_params_get(uint8_t id, + struct rte_event_dma_adapter_runtime_params *params); + +/** + * Retrieve statistics for an adapter + * + * @param id + * Adapter identifier. + * @param [out] stats + * A pointer to structure used to retrieve statistics for an adapter. + * + * @return + * - 0: Success, retrieved successfully. + * - <0: Error code on failure. + */ +__rte_experimental +int rte_event_dma_adapter_stats_get(uint8_t id, struct rte_event_dma_adapter_stats *stats); + +/** + * Reset statistics for an adapter. + * + * @param id + * Adapter identifier. + * + * @return + * - 0: Success, statistics reset successfully. + * - <0: Error code on failure. + */ +__rte_experimental +int rte_event_dma_adapter_stats_reset(uint8_t id); + +/** + * Enqueue a burst of DMA operations as event objects supplied in *rte_event* structure on an event + * DMA adapter designated by its event *evdev_id* through the event port specified by *port_id*. + * This function is supported if the eventdev PMD has the + * #RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD capability flag set. + * + * The *nb_events* parameter is the number of event objects to enqueue that are supplied in the + * *ev* array of *rte_event* structure. 
+ * + * The rte_event_dma_adapter_enqueue() function returns the number of event objects it actually + * enqueued. A return value equal to *nb_events* means that all event objects have been enqueued. + * + * @param evdev_id + * The identifier of the device. + * @param port_id + * The identifier of the event port. + * @param ev + * Points to an array of *nb_events* objects of type *rte_event* structure which contain the + * event object enqueue operations to be processed. + * @param nb_events + * The number of event objects to enqueue, typically number of + * rte_event_port_attr_get(...RTE_EVENT_PORT_ATTR_ENQ_DEPTH...) available for this port. + * + * @return + * The number of event objects actually enqueued on the event device. The return value can be + * less than the value of the *nb_events* parameter when the event devices queue is full or if + * invalid parameters are specified in a *rte_event*. If the return value is less than *nb_events*, + * the remaining events at the end of ev[] are not consumed and the caller has to take care of them, + * and rte_errno is set accordingly. Possible errno values include: + * - EINVAL: The port ID is invalid, device ID is invalid, an event's queue ID is invalid, or an + * event's sched type doesn't match the capabilities of the destination queue. + * - ENOSPC: The event port was backpressured and unable to enqueue one or more events. This + * error code is only applicable to closed systems. + */ +__rte_experimental +uint16_t rte_event_dma_adapter_enqueue(uint8_t evdev_id, uint8_t port_id, struct rte_event ev[], + uint16_t nb_events); + +#ifdef __cplusplus +} +#endif + +#endif /* RTE_EVENT_DMA_ADAPTER */ diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h index 2ba8a7b090..41743f91b1 100644 --- a/lib/eventdev/rte_eventdev.h +++ b/lib/eventdev/rte_eventdev.h @@ -1197,6 +1197,8 @@ struct rte_event_vector { */ #define RTE_EVENT_TYPE_ETH_RX_ADAPTER 0x4 /**< The event generated from event eth Rx adapter */ +#define RTE_EVENT_TYPE_DMADEV 0x5 +/**< The event generated from dma subsystem */ #define RTE_EVENT_TYPE_VECTOR 0x8 /**< Indicates that event is a vector. * All vector event types should be a logical OR of EVENT_TYPE_VECTOR. @@ -1462,6 +1464,48 @@ int rte_event_crypto_adapter_caps_get(uint8_t dev_id, uint8_t cdev_id, uint32_t *caps); +/* DMA adapter capability bitmap flag */ +#define RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_NEW 0x1 +/**< Flag indicates HW is capable of generating events in + * RTE_EVENT_OP_NEW enqueue operation. DMADEV will send + * packets to the event device as new events using an + * internal event port. + */ + +#define RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD 0x2 +/**< Flag indicates HW is capable of generating events in + * RTE_EVENT_OP_FORWARD enqueue operation. DMADEV will send + * packets to the event device as forwarded event using an + * internal event port. + */ + +#define RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_VCHAN_EV_BIND 0x4 +/**< Flag indicates HW is capable of mapping DMA vchan to event queue. */ + +/** + * Retrieve the event device's DMA adapter capabilities for the + * specified dmadev device + * + * @param dev_id + * The identifier of the device. + * + * @param dmadev_id + * The identifier of the dmadev device. + * + * @param[out] caps + * A pointer to memory filled with event adapter capabilities. + * It is expected to be pre-allocated & initialized by caller. + * + * @return + * - 0: Success, driver provides event adapter capabilities for the + * dmadev device. 
+ * - <0: Error code returned by the driver function. + * + */ +__rte_experimental +int +rte_event_dma_adapter_caps_get(uint8_t dev_id, uint8_t dmadev_id, uint32_t *caps); + /* Ethdev Tx adapter capability bitmap flags */ #define RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT 0x1 /**< This flag is sent when the PMD supports a packet transmit callback diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h index c27a52ccc0..83e8736c71 100644 --- a/lib/eventdev/rte_eventdev_core.h +++ b/lib/eventdev/rte_eventdev_core.h @@ -42,6 +42,10 @@ typedef uint16_t (*event_crypto_adapter_enqueue_t)(void *port, uint16_t nb_events); /**< @internal Enqueue burst of events on crypto adapter */ +typedef uint16_t (*event_dma_adapter_enqueue_t)(void *port, struct rte_event ev[], + uint16_t nb_events); +/**< @internal Enqueue burst of events on DMA adapter */ + struct rte_event_fp_ops { void **data; /**< points to array of internal port data pointers */ @@ -65,7 +69,9 @@ struct rte_event_fp_ops { /**< PMD Tx adapter enqueue same destination function. */ event_crypto_adapter_enqueue_t ca_enqueue; /**< PMD Crypto adapter enqueue function. */ - uintptr_t reserved[5]; + event_dma_adapter_enqueue_t dma_enqueue; + /**< PMD DMA adapter enqueue function. */ + uintptr_t reserved[4]; } __rte_cache_aligned; extern struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS]; diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map index 7ce09a87bb..b81eb2919c 100644 --- a/lib/eventdev/version.map +++ b/lib/eventdev/version.map @@ -133,6 +133,22 @@ EXPERIMENTAL { rte_event_timer_remaining_ticks_get; # added in 23.11 + rte_event_dma_adapter_caps_get; + rte_event_dma_adapter_create; + rte_event_dma_adapter_create_ext; + rte_event_dma_adapter_enqueue; + rte_event_dma_adapter_event_port_get; + rte_event_dma_adapter_free; + rte_event_dma_adapter_runtime_params_get; + rte_event_dma_adapter_runtime_params_init; + rte_event_dma_adapter_runtime_params_set; + rte_event_dma_adapter_service_id_get; + rte_event_dma_adapter_start; + rte_event_dma_adapter_stats_get; + rte_event_dma_adapter_stats_reset; + rte_event_dma_adapter_stop; + rte_event_dma_adapter_vchan_add; + rte_event_dma_adapter_vchan_del; rte_event_eth_rx_adapter_create_ext_with_params; }; diff --git a/lib/meson.build b/lib/meson.build index 53155be8e9..f3191f10b6 100644 --- a/lib/meson.build +++ b/lib/meson.build @@ -33,6 +33,7 @@ libraries = [ 'compressdev', 'cryptodev', 'distributor', + 'dmadev', 'efd', 'eventdev', 'gpudev', @@ -48,7 +49,6 @@ libraries = [ 'rawdev', 'regexdev', 'mldev', - 'dmadev', 'rib', 'reorder', 'sched', From patchwork Thu Sep 28 16:49:48 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 132148 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 194C242659; Thu, 28 Sep 2023 18:50:31 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 71E8440685; Thu, 28 Sep 2023 18:50:27 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 8C79140685 for ; Thu, 28 Sep 2023 18:50:26 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) 
with ESMTP id 38SFhJf6002529; Thu, 28 Sep 2023 09:50:26 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=mxnBGhhwR0v7WDiHITdFO1SqCOcZVw0pCQDS57Pxyi0=; b=Zg7XsCAZFuNiR9/bYlQeNmS2AWTgMPybGozZ+u3ygHbRabaOW184inMIkZdOadZm3KUT PkGdpUWpJrYCSMGLecrHu3OKwJh+29WmFI4mr1hsX6M6njYGRFbhCLPdxn4fqfFOS2uc yAAwSrwiUe7OfHmpA7qE+KdzSU5mlz5xjUx0Mhv/h3toxMcxitK6QOuo4vGqlVSwLyDH FwlHntvtlzt5zNzSwSUPxBEUfV4TLcTtjWBD6IypP5ceE7Gq8m+bn6P25yjP/S7bWMQp 0WJkKc8ILtsqJkVazcUF3PUzBFAZVEApJKgFqTSZZbV+fVL+1guunXrx+xSceT3cKR3b +Q== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3tcrrs4h2n-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 28 Sep 2023 09:50:25 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Thu, 28 Sep 2023 09:50:23 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Thu, 28 Sep 2023 09:50:22 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id B76D65C6924; Thu, 28 Sep 2023 09:50:16 -0700 (PDT) From: Amit Prakash Shukla To: Jerin Jacob CC: , , , , , , , , , , , , , Amit Prakash Shukla Subject: [PATCH v6 02/12] eventdev/dma: support adapter capabilities get Date: Thu, 28 Sep 2023 22:19:48 +0530 Message-ID: <20230928164959.340575-3-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230928164959.340575-1-amitprakashs@marvell.com> References: <20230928103623.216287-1-amitprakashs@marvell.com> <20230928164959.340575-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: UPPupiCJ6DijGxq2a44fu9pIdNC0SeHt X-Proofpoint-GUID: UPPupiCJ6DijGxq2a44fu9pIdNC0SeHt X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.980,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-09-28_16,2023-09-28_03,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Added a new eventdev API rte_event_dma_adapter_caps_get(), to get DMA adapter capabilities supported by the driver. 
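As a usage sketch (illustrative only, not part of the patch), an application could query the capabilities before deciding how to feed the adapter; the device identifiers below are placeholders:

.. code-block:: c

    uint8_t evdev_id = 0;    /* event device identifier (placeholder) */
    uint8_t dma_dev_id = 0;  /* dmadev identifier (placeholder) */
    uint32_t caps = 0;

    if (rte_event_dma_adapter_caps_get(evdev_id, dma_dev_id, &caps) < 0)
        rte_panic("failed to get DMA adapter capabilities\n");

    if (caps & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) {
        /* Internal port available: DMA ops can be enqueued directly
         * with rte_event_dma_adapter_enqueue() in OP_FORWARD mode.
         */
    } else {
        /* No internal port: enqueue through the adapter's event port
         * obtained via rte_event_dma_adapter_event_port_get().
         */
    }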
Signed-off-by: Amit Prakash Shukla --- lib/eventdev/meson.build | 2 +- lib/eventdev/rte_eventdev.c | 23 +++++++++++++++++++++++ 2 files changed, 24 insertions(+), 1 deletion(-) diff --git a/lib/eventdev/meson.build b/lib/eventdev/meson.build index 21347f7c4c..b46bbbc9aa 100644 --- a/lib/eventdev/meson.build +++ b/lib/eventdev/meson.build @@ -43,5 +43,5 @@ driver_sdk_headers += files( 'event_timer_adapter_pmd.h', ) -deps += ['ring', 'ethdev', 'hash', 'mempool', 'mbuf', 'timer', 'cryptodev'] +deps += ['ring', 'ethdev', 'hash', 'mempool', 'mbuf', 'timer', 'cryptodev', 'dmadev'] deps += ['telemetry'] diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c index 6ab4524332..60509c6efb 100644 --- a/lib/eventdev/rte_eventdev.c +++ b/lib/eventdev/rte_eventdev.c @@ -20,6 +20,7 @@ #include #include #include +#include #include #include @@ -224,6 +225,28 @@ rte_event_eth_tx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id, : 0; } +int +rte_event_dma_adapter_caps_get(uint8_t dev_id, uint8_t dma_dev_id, uint32_t *caps) +{ + struct rte_eventdev *dev; + + RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + if (!rte_dma_is_valid(dma_dev_id)) + return -EINVAL; + + dev = &rte_eventdevs[dev_id]; + + if (caps == NULL) + return -EINVAL; + + *caps = 0; + + if (dev->dev_ops->dma_adapter_caps_get) + return (*dev->dev_ops->dma_adapter_caps_get)(dev, dma_dev_id, caps); + + return 0; +} + static inline int event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues) { From patchwork Thu Sep 28 16:49:49 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 132149 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D1FD442659; Thu, 28 Sep 2023 18:50:37 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 8805D4067A; Thu, 28 Sep 2023 18:50:33 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id DF6C2402DD for ; Thu, 28 Sep 2023 18:50:31 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 38SAfcA5003479; Thu, 28 Sep 2023 09:50:31 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=b1leP3bOB84XiEjn8TvrV3UQd/+khP5D57PC3ji4AJw=; b=L2VH9DJBXWtzrtDe76kHc9WakXJ9PzsBesbrjwBHbbgEXxYvC48nO2Us1E/6H3/qxx7Q nL9ohTYabvm28zOWSivWB+GWqcknvVcJZdqiYGE156IWzS8nFCdTRYoUGfvkQoEJ822v iIyQbc22f9FUiWa/JkOqszAp4nddRuOPcgYlMzTzggED9L4yCz8aj6fw5oiIIBTZ5+k0 xZ0yGX2bD8WXtSJdITwDhHkvyV0I+wmo5Eavm97KMw4Ru/E10TFu10zgu1JZoMY88Cch QGK0yqVhznaqJNCDA0lYYnI/lAT7+rEf4tj8mbAFi9elzmj+bKbiAGQ0FGfovWIjzJZR QQ== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3td7y6sd97-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 28 Sep 2023 09:50:30 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Thu, 28 Sep 2023 09:50:29 -0700 Received: from maili.marvell.com (10.69.176.80) 
by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Thu, 28 Sep 2023 09:50:29 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id 29C045C68EF; Thu, 28 Sep 2023 09:50:24 -0700 (PDT) From: Amit Prakash Shukla To: Bruce Richardson , Jerin Jacob , Amit Prakash Shukla CC: , , , , , , , , , , , Subject: [PATCH v6 03/12] eventdev/dma: support adapter create and free Date: Thu, 28 Sep 2023 22:19:49 +0530 Message-ID: <20230928164959.340575-4-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230928164959.340575-1-amitprakashs@marvell.com> References: <20230928103623.216287-1-amitprakashs@marvell.com> <20230928164959.340575-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: azZXRUVpX0JfMaOEtyYa7UcmrySAaWvh X-Proofpoint-GUID: azZXRUVpX0JfMaOEtyYa7UcmrySAaWvh X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.980,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-09-28_16,2023-09-28_03,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Added API support to create and free DMA adapter. Create function shall be called with event device to be associated with the adapter and port configuration to setup an event port. Signed-off-by: Amit Prakash Shukla --- config/rte_config.h | 1 + lib/eventdev/meson.build | 1 + lib/eventdev/rte_event_dma_adapter.c | 335 +++++++++++++++++++++++++++ 3 files changed, 337 insertions(+) create mode 100644 lib/eventdev/rte_event_dma_adapter.c diff --git a/config/rte_config.h b/config/rte_config.h index 400e44e3cf..401727703f 100644 --- a/config/rte_config.h +++ b/config/rte_config.h @@ -77,6 +77,7 @@ #define RTE_EVENT_ETH_INTR_RING_SIZE 1024 #define RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE 32 #define RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE 32 +#define RTE_EVENT_DMA_ADAPTER_MAX_INSTANCE 32 /* rawdev defines */ #define RTE_RAWDEV_MAX_DEVS 64 diff --git a/lib/eventdev/meson.build b/lib/eventdev/meson.build index b46bbbc9aa..250abcb154 100644 --- a/lib/eventdev/meson.build +++ b/lib/eventdev/meson.build @@ -17,6 +17,7 @@ sources = files( 'eventdev_private.c', 'eventdev_trace_points.c', 'rte_event_crypto_adapter.c', + 'rte_event_dma_adapter.c', 'rte_event_eth_rx_adapter.c', 'rte_event_eth_tx_adapter.c', 'rte_event_ring.c', diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c new file mode 100644 index 0000000000..e57d8407cb --- /dev/null +++ b/lib/eventdev/rte_event_dma_adapter.c @@ -0,0 +1,335 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Marvell. 
+ */ + +#include + +#include "rte_event_dma_adapter.h" + +#define DMA_BATCH_SIZE 32 +#define DMA_DEFAULT_MAX_NB 128 +#define DMA_ADAPTER_NAME_LEN 32 +#define DMA_ADAPTER_BUFFER_SIZE 1024 + +#define DMA_ADAPTER_OPS_BUFFER_SIZE (DMA_BATCH_SIZE + DMA_BATCH_SIZE) + +#define DMA_ADAPTER_ARRAY "event_dma_adapter_array" + +/* Macros to check for valid adapter */ +#define EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) \ + do { \ + if (!edma_adapter_valid_id(id)) { \ + RTE_EDEV_LOG_ERR("Invalid DMA adapter id = %d\n", id); \ + return retval; \ + } \ + } while (0) + +/* DMA ops circular buffer */ +struct dma_ops_circular_buffer { + /* Index of head element */ + uint16_t head; + + /* Index of tail element */ + uint16_t tail; + + /* Number of elements in buffer */ + uint16_t count; + + /* Size of circular buffer */ + uint16_t size; + + /* Pointer to hold rte_event_dma_adapter_op for processing */ + struct rte_event_dma_adapter_op **op_buffer; +} __rte_cache_aligned; + +/* DMA device information */ +struct dma_device_info { + /* Number of vchans configured for a DMA device. */ + uint16_t num_dma_dev_vchan; +} __rte_cache_aligned; + +struct event_dma_adapter { + /* Event device identifier */ + uint8_t eventdev_id; + + /* Event port identifier */ + uint8_t event_port_id; + + /* Adapter mode */ + enum rte_event_dma_adapter_mode mode; + + /* Memory allocation name */ + char mem_name[DMA_ADAPTER_NAME_LEN]; + + /* Socket identifier cached from eventdev */ + int socket_id; + + /* Lock to serialize config updates with service function */ + rte_spinlock_t lock; + + /* DMA device structure array */ + struct dma_device_info *dma_devs; + + /* Circular buffer for processing DMA ops to eventdev */ + struct dma_ops_circular_buffer ebuf; + + /* Configuration callback for rte_service configuration */ + rte_event_dma_adapter_conf_cb conf_cb; + + /* Configuration callback argument */ + void *conf_arg; + + /* Set if default_cb is being used */ + int default_cb_arg; +} __rte_cache_aligned; + +static struct event_dma_adapter **event_dma_adapter; + +static inline int +edma_adapter_valid_id(uint8_t id) +{ + return id < RTE_EVENT_DMA_ADAPTER_MAX_INSTANCE; +} + +static inline struct event_dma_adapter * +edma_id_to_adapter(uint8_t id) +{ + return event_dma_adapter ? 
event_dma_adapter[id] : NULL; +} + +static int +edma_array_init(void) +{ + const struct rte_memzone *mz; + uint32_t sz; + + mz = rte_memzone_lookup(DMA_ADAPTER_ARRAY); + if (mz == NULL) { + sz = sizeof(struct event_dma_adapter *) * RTE_EVENT_DMA_ADAPTER_MAX_INSTANCE; + sz = RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE); + + mz = rte_memzone_reserve_aligned(DMA_ADAPTER_ARRAY, sz, rte_socket_id(), 0, + RTE_CACHE_LINE_SIZE); + if (mz == NULL) { + RTE_EDEV_LOG_ERR("Failed to reserve memzone : %s, err = %d", + DMA_ADAPTER_ARRAY, rte_errno); + return -rte_errno; + } + } + + event_dma_adapter = mz->addr; + + return 0; +} + +static inline int +edma_circular_buffer_init(const char *name, struct dma_ops_circular_buffer *buf, uint16_t sz) +{ + buf->op_buffer = rte_zmalloc(name, sizeof(struct rte_event_dma_adapter_op *) * sz, 0); + if (buf->op_buffer == NULL) + return -ENOMEM; + + buf->size = sz; + + return 0; +} + +static inline void +edma_circular_buffer_free(struct dma_ops_circular_buffer *buf) +{ + rte_free(buf->op_buffer); +} + +static int +edma_default_config_cb(uint8_t id, uint8_t evdev_id, struct rte_event_dma_adapter_conf *conf, + void *arg) +{ + struct rte_event_port_conf *port_conf; + struct rte_event_dev_config dev_conf; + struct event_dma_adapter *adapter; + struct rte_eventdev *dev; + uint8_t port_id; + int started; + int ret; + + adapter = edma_id_to_adapter(id); + if (adapter == NULL) + return -EINVAL; + + dev = &rte_eventdevs[adapter->eventdev_id]; + dev_conf = dev->data->dev_conf; + + started = dev->data->dev_started; + if (started) + rte_event_dev_stop(evdev_id); + + port_id = dev_conf.nb_event_ports; + dev_conf.nb_event_ports += 1; + + port_conf = arg; + if (port_conf->event_port_cfg & RTE_EVENT_PORT_CFG_SINGLE_LINK) + dev_conf.nb_single_link_event_port_queues += 1; + + ret = rte_event_dev_configure(evdev_id, &dev_conf); + if (ret) { + RTE_EDEV_LOG_ERR("Failed to configure event dev %u\n", evdev_id); + if (started) { + if (rte_event_dev_start(evdev_id)) + return -EIO; + } + return ret; + } + + ret = rte_event_port_setup(evdev_id, port_id, port_conf); + if (ret) { + RTE_EDEV_LOG_ERR("Failed to setup event port %u\n", port_id); + return ret; + } + + conf->event_port_id = port_id; + conf->max_nb = DMA_DEFAULT_MAX_NB; + if (started) + ret = rte_event_dev_start(evdev_id); + + adapter->default_cb_arg = 1; + adapter->event_port_id = conf->event_port_id; + + return ret; +} + +int +rte_event_dma_adapter_create_ext(uint8_t id, uint8_t evdev_id, + rte_event_dma_adapter_conf_cb conf_cb, + enum rte_event_dma_adapter_mode mode, void *conf_arg) +{ + struct rte_event_dev_info dev_info; + struct event_dma_adapter *adapter; + char name[DMA_ADAPTER_NAME_LEN]; + struct rte_dma_info info; + uint16_t num_dma_dev; + int socket_id; + uint8_t i; + int ret; + + EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); + RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(evdev_id, -EINVAL); + + if (conf_cb == NULL) + return -EINVAL; + + if (event_dma_adapter == NULL) { + ret = edma_array_init(); + if (ret) + return ret; + } + + adapter = edma_id_to_adapter(id); + if (adapter != NULL) { + RTE_EDEV_LOG_ERR("ML adapter ID %d already exists!", id); + return -EEXIST; + } + + socket_id = rte_event_dev_socket_id(evdev_id); + snprintf(name, DMA_ADAPTER_NAME_LEN, "rte_event_dma_adapter_%d", id); + adapter = rte_zmalloc_socket(name, sizeof(struct event_dma_adapter), RTE_CACHE_LINE_SIZE, + socket_id); + if (adapter == NULL) { + RTE_EDEV_LOG_ERR("Failed to get mem for event ML adapter!"); + return -ENOMEM; + } + + if 
(edma_circular_buffer_init("edma_circular_buffer", &adapter->ebuf, + DMA_ADAPTER_BUFFER_SIZE)) { + RTE_EDEV_LOG_ERR("Failed to get memory for event adapter circular buffer"); + rte_free(adapter); + return -ENOMEM; + } + + ret = rte_event_dev_info_get(evdev_id, &dev_info); + if (ret < 0) { + RTE_EDEV_LOG_ERR("Failed to get info for eventdev %d: %s", evdev_id, + dev_info.driver_name); + edma_circular_buffer_free(&adapter->ebuf); + rte_free(adapter); + return ret; + } + + num_dma_dev = rte_dma_count_avail(); + + adapter->eventdev_id = evdev_id; + adapter->mode = mode; + strcpy(adapter->mem_name, name); + adapter->socket_id = socket_id; + adapter->conf_cb = conf_cb; + adapter->conf_arg = conf_arg; + adapter->dma_devs = rte_zmalloc_socket(adapter->mem_name, + num_dma_dev * sizeof(struct dma_device_info), 0, + socket_id); + if (adapter->dma_devs == NULL) { + RTE_EDEV_LOG_ERR("Failed to get memory for DMA devices\n"); + edma_circular_buffer_free(&adapter->ebuf); + rte_free(adapter); + return -ENOMEM; + } + + rte_spinlock_init(&adapter->lock); + for (i = 0; i < num_dma_dev; i++) { + ret = rte_dma_info_get(i, &info); + if (ret) { + RTE_EDEV_LOG_ERR("Failed to get dma device info\n"); + edma_circular_buffer_free(&adapter->ebuf); + rte_free(adapter); + return ret; + } + + adapter->dma_devs[i].num_dma_dev_vchan = info.nb_vchans; + } + + event_dma_adapter[id] = adapter; + + return 0; +} + +int +rte_event_dma_adapter_create(uint8_t id, uint8_t evdev_id, struct rte_event_port_conf *port_config, + enum rte_event_dma_adapter_mode mode) +{ + struct rte_event_port_conf *pc; + int ret; + + if (port_config == NULL) + return -EINVAL; + + EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); + + pc = rte_malloc(NULL, sizeof(struct rte_event_port_conf), 0); + if (pc == NULL) + return -ENOMEM; + + rte_memcpy(pc, port_config, sizeof(struct rte_event_port_conf)); + ret = rte_event_dma_adapter_create_ext(id, evdev_id, edma_default_config_cb, mode, pc); + if (ret != 0) + rte_free(pc); + + return ret; +} + +int +rte_event_dma_adapter_free(uint8_t id) +{ + struct event_dma_adapter *adapter; + + EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); + + adapter = edma_id_to_adapter(id); + if (adapter == NULL) + return -EINVAL; + + rte_free(adapter->conf_arg); + rte_free(adapter->dma_devs); + edma_circular_buffer_free(&adapter->ebuf); + rte_free(adapter); + event_dma_adapter[id] = NULL; + + return 0; +} From patchwork Thu Sep 28 16:49:50 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 132150 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id C8D2742659; Thu, 28 Sep 2023 18:50:44 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id A218840A7D; Thu, 28 Sep 2023 18:50:39 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 3C382402DD for ; Thu, 28 Sep 2023 18:50:38 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 38SAfcA7003479; Thu, 28 Sep 2023 09:50:37 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : 
mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=/mgE0ZxQDwnYyZ4ZfHx8aUbGHKBzMIh6NfgO0u8XwOE=; b=DZOqi/opVzI+N6/JIBSAtCzcDBTPnjBovmy/sils2R4fEF5E/n5f+H+fd/DTmQz+ufgJ nuh7+H+n94HmMOLTPnA0351TpDyAqJHo2mtUG31K/mIY63KTgORsY29+n2lHiDTvGjm2 fd2TL9RMbK4qqE9b3UqSHwceRCT9YFs+wzpUIGsEIDvI9Nac63A5OOGS30nzgNtO1NO5 NcdA83xEvyrK9GcXrQnHKkHX32h/YM4XIp1v/5CdPF0IxxOeIfGfyUlAaZtPOErulNK8 3MFvGR8IuPMnrKmDPKWI4e1I2uHFsYLaa2Wz4d8ddPkwtiiQ53OYr0jNqX6kv0BsxTGb eQ== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3td7y6sd9m-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 28 Sep 2023 09:50:37 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Thu, 28 Sep 2023 09:50:35 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Thu, 28 Sep 2023 09:50:35 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id 691FB5C68F4; Thu, 28 Sep 2023 09:50:31 -0700 (PDT) From: Amit Prakash Shukla To: Amit Prakash Shukla , Jerin Jacob CC: , , , , , , , , , , , , Subject: [PATCH v6 04/12] eventdev/dma: support vchan add and delete Date: Thu, 28 Sep 2023 22:19:50 +0530 Message-ID: <20230928164959.340575-5-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230928164959.340575-1-amitprakashs@marvell.com> References: <20230928103623.216287-1-amitprakashs@marvell.com> <20230928164959.340575-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: Bz52Ev5rTPlZ2xcLezcjedxPaFhNWPfy X-Proofpoint-GUID: Bz52Ev5rTPlZ2xcLezcjedxPaFhNWPfy X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.980,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-09-28_16,2023-09-28_03,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Added API support to add and delete vchan's from the DMA adapter. DMA devid and vchan are added to the addapter instance by calling rte_event_dma_adapter_vchan_add and deleted using rte_event_dma_adapter_vchan_del. Signed-off-by: Amit Prakash Shukla --- lib/eventdev/rte_event_dma_adapter.c | 204 +++++++++++++++++++++++++++ 1 file changed, 204 insertions(+) diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c index e57d8407cb..ec81281bf8 100644 --- a/lib/eventdev/rte_event_dma_adapter.c +++ b/lib/eventdev/rte_event_dma_adapter.c @@ -42,8 +42,31 @@ struct dma_ops_circular_buffer { struct rte_event_dma_adapter_op **op_buffer; } __rte_cache_aligned; +/* Vchan information */ +struct dma_vchan_info { + /* Set to indicate vchan queue is enabled */ + bool vq_enabled; + + /* Circular buffer for batching DMA ops to dma_dev */ + struct dma_ops_circular_buffer dma_buf; +} __rte_cache_aligned; + /* DMA device information */ struct dma_device_info { + /* Pointer to vchan queue info */ + struct dma_vchan_info *vchanq; + + /* Pointer to vchan queue info. + * This holds ops passed by application till the + * dma completion is done. 
+ */ + struct dma_vchan_info *tqmap; + + /* If num_vchanq > 0, the start callback will + * be invoked if not already invoked + */ + uint16_t num_vchanq; + /* Number of vchans configured for a DMA device. */ uint16_t num_dma_dev_vchan; } __rte_cache_aligned; @@ -81,6 +104,9 @@ struct event_dma_adapter { /* Set if default_cb is being used */ int default_cb_arg; + + /* No. of vchan queue configured */ + uint16_t nb_vchanq; } __rte_cache_aligned; static struct event_dma_adapter **event_dma_adapter; @@ -333,3 +359,181 @@ rte_event_dma_adapter_free(uint8_t id) return 0; } + +static void +edma_update_vchanq_info(struct event_dma_adapter *adapter, struct dma_device_info *dev_info, + uint16_t vchan, uint8_t add) +{ + struct dma_vchan_info *vchan_info; + struct dma_vchan_info *tqmap_info; + int enabled; + uint16_t i; + + if (dev_info->vchanq == NULL) + return; + + if (vchan == RTE_DMA_ALL_VCHAN) { + for (i = 0; i < dev_info->num_dma_dev_vchan; i++) + edma_update_vchanq_info(adapter, dev_info, i, add); + } else { + tqmap_info = &dev_info->tqmap[vchan]; + vchan_info = &dev_info->vchanq[vchan]; + enabled = vchan_info->vq_enabled; + if (add) { + adapter->nb_vchanq += !enabled; + dev_info->num_vchanq += !enabled; + } else { + adapter->nb_vchanq -= enabled; + dev_info->num_vchanq -= enabled; + } + vchan_info->vq_enabled = !!add; + tqmap_info->vq_enabled = !!add; + } +} + +int +rte_event_dma_adapter_vchan_add(uint8_t id, int16_t dma_dev_id, uint16_t vchan, + const struct rte_event *event) +{ + struct event_dma_adapter *adapter; + struct dma_device_info *dev_info; + struct rte_eventdev *dev; + uint32_t cap; + int ret; + + EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); + + if (!rte_dma_is_valid(dma_dev_id)) { + RTE_EDEV_LOG_ERR("Invalid dma_dev_id = %" PRIu8, dma_dev_id); + return -EINVAL; + } + + adapter = edma_id_to_adapter(id); + if (adapter == NULL) + return -EINVAL; + + dev = &rte_eventdevs[adapter->eventdev_id]; + ret = rte_event_dma_adapter_caps_get(adapter->eventdev_id, dma_dev_id, &cap); + if (ret) { + RTE_EDEV_LOG_ERR("Failed to get adapter caps dev %u dma_dev %u", id, dma_dev_id); + return ret; + } + + if ((cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_VCHAN_EV_BIND) && (event == NULL)) { + RTE_EDEV_LOG_ERR("Event can not be NULL for dma_dev_id = %u", dma_dev_id); + return -EINVAL; + } + + dev_info = &adapter->dma_devs[dma_dev_id]; + if (vchan != RTE_DMA_ALL_VCHAN && vchan >= dev_info->num_dma_dev_vchan) { + RTE_EDEV_LOG_ERR("Invalid vhcan %u", vchan); + return -EINVAL; + } + + /* In case HW cap is RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, no + * need of service core as HW supports event forward capability. 
+ */ + if ((cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) || + (cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_VCHAN_EV_BIND && + adapter->mode == RTE_EVENT_DMA_ADAPTER_OP_NEW) || + (cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_NEW && + adapter->mode == RTE_EVENT_DMA_ADAPTER_OP_NEW)) { + if (*dev->dev_ops->dma_adapter_vchan_add == NULL) + return -ENOTSUP; + if (dev_info->vchanq == NULL) { + dev_info->vchanq = rte_zmalloc_socket(adapter->mem_name, + dev_info->num_dma_dev_vchan * + sizeof(struct dma_vchan_info), + 0, adapter->socket_id); + if (dev_info->vchanq == NULL) { + printf("Queue pair add not supported\n"); + return -ENOMEM; + } + } + + if (dev_info->tqmap == NULL) { + dev_info->tqmap = rte_zmalloc_socket(adapter->mem_name, + dev_info->num_dma_dev_vchan * + sizeof(struct dma_vchan_info), + 0, adapter->socket_id); + if (dev_info->tqmap == NULL) { + printf("tq pair add not supported\n"); + return -ENOMEM; + } + } + + ret = (*dev->dev_ops->dma_adapter_vchan_add)(dev, dma_dev_id, vchan, event); + if (ret) + return ret; + + else + edma_update_vchanq_info(adapter, &adapter->dma_devs[dma_dev_id], vchan, 1); + } + + return 0; +} + +int +rte_event_dma_adapter_vchan_del(uint8_t id, int16_t dma_dev_id, uint16_t vchan) +{ + struct event_dma_adapter *adapter; + struct dma_device_info *dev_info; + struct rte_eventdev *dev; + uint32_t cap; + int ret; + + EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); + + if (!rte_dma_is_valid(dma_dev_id)) { + RTE_EDEV_LOG_ERR("Invalid dma_dev_id = %" PRIu8, dma_dev_id); + return -EINVAL; + } + + adapter = edma_id_to_adapter(id); + if (adapter == NULL) + return -EINVAL; + + dev = &rte_eventdevs[adapter->eventdev_id]; + ret = rte_event_dma_adapter_caps_get(adapter->eventdev_id, dma_dev_id, &cap); + if (ret) + return ret; + + dev_info = &adapter->dma_devs[dma_dev_id]; + + if (vchan != RTE_DMA_ALL_VCHAN && vchan >= dev_info->num_dma_dev_vchan) { + RTE_EDEV_LOG_ERR("Invalid vhcan %" PRIu16, vchan); + return -EINVAL; + } + + if ((cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) || + (cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_NEW && + adapter->mode == RTE_EVENT_DMA_ADAPTER_OP_NEW)) { + if (*dev->dev_ops->dma_adapter_vchan_del == NULL) + return -ENOTSUP; + ret = (*dev->dev_ops->dma_adapter_vchan_del)(dev, dma_dev_id, vchan); + if (ret == 0) { + edma_update_vchanq_info(adapter, dev_info, vchan, 0); + if (dev_info->num_vchanq == 0) { + rte_free(dev_info->vchanq); + dev_info->vchanq = NULL; + } + } + } else { + if (adapter->nb_vchanq == 0) + return 0; + + rte_spinlock_lock(&adapter->lock); + edma_update_vchanq_info(adapter, dev_info, vchan, 0); + + if (dev_info->num_vchanq == 0) { + rte_free(dev_info->vchanq); + rte_free(dev_info->tqmap); + dev_info->vchanq = NULL; + dev_info->tqmap = NULL; + } + + rte_spinlock_unlock(&adapter->lock); + } + + return ret; +} From patchwork Thu Sep 28 16:49:51 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 132151 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 3065D42659; Thu, 28 Sep 2023 18:50:51 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id C6CD4402DD; Thu, 28 Sep 2023 18:50:46 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com 
[67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 8E255402DD for ; Thu, 28 Sep 2023 18:50:45 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 38SAfaBn003447; Thu, 28 Sep 2023 09:50:44 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=gaICPbFlOFYfW36/kGYFm94C0Tn2q09YAWdvLOO0WdM=; b=QDSj1LYyrXMFy9U8+DdTH13wR+BmOCs/7YpKZoqiLybFxXHQQB4YCn+oY5aHaXoiHLRe s9oEUxrSFwFWop7S89TgC7HKZccDl6sbkfC0xnsIQxxgFe7mrV71kgXZF7qQK9E8QRHs E5LdZRnztF1T2RdBsZa406C3z/KKNlf7/hJoaj1K96Z10OrkiCC3V+FId2xOPIkjyfJ3 SHNte8IQ5SwmMsxSpLWJrGClmJN7K5lAom+fIpUNd8ui8tpGFuW18qHHp3kyZkbYdYbC sIIhlSQPcl2e7n+ZKluOmK6WFHkO1uhu/GHocfjCKMMd03kUYuzLWG8Ap9Rv+VW0nlB7 cw== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3td7y6sdab-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 28 Sep 2023 09:50:44 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Thu, 28 Sep 2023 09:50:42 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Thu, 28 Sep 2023 09:50:42 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id A08DE5C68F0; Thu, 28 Sep 2023 09:50:38 -0700 (PDT) From: Amit Prakash Shukla To: Amit Prakash Shukla , Jerin Jacob CC: , , , , , , , , , , , , Subject: [PATCH v6 05/12] eventdev/dma: support adapter service function Date: Thu, 28 Sep 2023 22:19:51 +0530 Message-ID: <20230928164959.340575-6-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230928164959.340575-1-amitprakashs@marvell.com> References: <20230928103623.216287-1-amitprakashs@marvell.com> <20230928164959.340575-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: PQlRcZ96j_GIf2v9yL0YzAzax-0uSjC6 X-Proofpoint-GUID: PQlRcZ96j_GIf2v9yL0YzAzax-0uSjC6 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.980,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-09-28_16,2023-09-28_03,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Added support for DMA adapter service function for event devices. Enqueue and dequeue of event from eventdev and DMA device are done based on the adapter mode and the supported HW capabilities. Signed-off-by: Amit Prakash Shukla --- lib/eventdev/rte_event_dma_adapter.c | 588 +++++++++++++++++++++++++++ 1 file changed, 588 insertions(+) diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c index ec81281bf8..b0845eb415 100644 --- a/lib/eventdev/rte_event_dma_adapter.c +++ b/lib/eventdev/rte_event_dma_adapter.c @@ -3,6 +3,7 @@ */ #include +#include #include "rte_event_dma_adapter.h" @@ -69,6 +70,10 @@ struct dma_device_info { /* Number of vchans configured for a DMA device. 
*/ uint16_t num_dma_dev_vchan; + + /* Next queue pair to be processed */ + uint16_t next_vchan_id; + } __rte_cache_aligned; struct event_dma_adapter { @@ -90,6 +95,9 @@ struct event_dma_adapter { /* Lock to serialize config updates with service function */ rte_spinlock_t lock; + /* Next dma device to be processed */ + uint16_t next_dmadev_id; + /* DMA device structure array */ struct dma_device_info *dma_devs; @@ -107,6 +115,26 @@ struct event_dma_adapter { /* No. of vchan queue configured */ uint16_t nb_vchanq; + + /* Per adapter EAL service ID */ + uint32_t service_id; + + /* Service initialization state */ + uint8_t service_initialized; + + /* Max DMA ops processed in any service function invocation */ + uint32_t max_nb; + + /* Store event port's implicit release capability */ + uint8_t implicit_release_disabled; + + /* Flag to indicate backpressure at dma_dev + * Stop further dequeuing events from eventdev + */ + bool stop_enq_to_dma_dev; + + /* Loop counter to flush dma ops */ + uint16_t transmit_loop_count; } __rte_cache_aligned; static struct event_dma_adapter **event_dma_adapter; @@ -148,6 +176,18 @@ edma_array_init(void) return 0; } +static inline bool +edma_circular_buffer_batch_ready(struct dma_ops_circular_buffer *bufp) +{ + return bufp->count >= DMA_BATCH_SIZE; +} + +static inline bool +edma_circular_buffer_space_for_batch(struct dma_ops_circular_buffer *bufp) +{ + return (bufp->size - bufp->count) >= DMA_BATCH_SIZE; +} + static inline int edma_circular_buffer_init(const char *name, struct dma_ops_circular_buffer *buf, uint16_t sz) { @@ -166,6 +206,67 @@ edma_circular_buffer_free(struct dma_ops_circular_buffer *buf) rte_free(buf->op_buffer); } +static inline int +edma_circular_buffer_add(struct dma_ops_circular_buffer *bufp, struct rte_event_dma_adapter_op *op) +{ + uint16_t *tail = &bufp->tail; + + bufp->op_buffer[*tail] = op; + + /* circular buffer, go round */ + *tail = (*tail + 1) % bufp->size; + bufp->count++; + + return 0; +} + +static inline int +edma_circular_buffer_flush_to_dma_dev(struct event_dma_adapter *adapter, + struct dma_ops_circular_buffer *bufp, uint8_t dma_dev_id, + uint16_t vchan, uint16_t *nb_ops_flushed) +{ + struct rte_event_dma_adapter_op *op; + struct dma_vchan_info *tq; + uint16_t *head = &bufp->head; + uint16_t *tail = &bufp->tail; + uint16_t n; + uint16_t i; + int ret; + + if (*tail > *head) + n = *tail - *head; + else if (*tail < *head) + n = bufp->size - *head; + else { + *nb_ops_flushed = 0; + return 0; /* buffer empty */ + } + + tq = &adapter->dma_devs[dma_dev_id].tqmap[vchan]; + + for (i = 0; i < n; i++) { + op = bufp->op_buffer[*head]; + ret = rte_dma_copy_sg(dma_dev_id, vchan, op->src_seg, op->dst_seg, + op->nb_src, op->nb_dst, op->flags); + if (ret < 0) + break; + + /* Enqueue in transaction queue. */ + edma_circular_buffer_add(&tq->dma_buf, op); + + *head = (*head + 1) % bufp->size; + } + + *nb_ops_flushed = i; + bufp->count -= *nb_ops_flushed; + if (!bufp->count) { + *head = 0; + *tail = 0; + } + + return *nb_ops_flushed == n ? 
0 : -1; +} + static int edma_default_config_cb(uint8_t id, uint8_t evdev_id, struct rte_event_dma_adapter_conf *conf, void *arg) @@ -360,6 +461,406 @@ rte_event_dma_adapter_free(uint8_t id) return 0; } +static inline unsigned int +edma_enq_to_dma_dev(struct event_dma_adapter *adapter, struct rte_event *ev, unsigned int cnt) +{ + struct dma_vchan_info *vchan_qinfo = NULL; + struct rte_event_dma_adapter_op *dma_op; + uint16_t vchan, nb_enqueued = 0; + int16_t dma_dev_id; + unsigned int i, n; + int ret; + + ret = 0; + n = 0; + + for (i = 0; i < cnt; i++) { + dma_op = ev[i].event_ptr; + if (dma_op == NULL) + continue; + + /* Expected to have response info appended to dma_op. */ + + dma_dev_id = dma_op->dma_dev_id; + vchan = dma_op->vchan; + vchan_qinfo = &adapter->dma_devs[dma_dev_id].vchanq[vchan]; + if (!vchan_qinfo->vq_enabled) { + if (dma_op != NULL && dma_op->op_mp != NULL) + rte_mempool_put(dma_op->op_mp, dma_op); + continue; + } + edma_circular_buffer_add(&vchan_qinfo->dma_buf, dma_op); + + if (edma_circular_buffer_batch_ready(&vchan_qinfo->dma_buf)) { + ret = edma_circular_buffer_flush_to_dma_dev(adapter, &vchan_qinfo->dma_buf, + dma_dev_id, vchan, + &nb_enqueued); + n += nb_enqueued; + + /** + * If some dma ops failed to flush to dma_dev and + * space for another batch is not available, stop + * dequeue from eventdev momentarily + */ + if (unlikely(ret < 0 && + !edma_circular_buffer_space_for_batch(&vchan_qinfo->dma_buf))) + adapter->stop_enq_to_dma_dev = true; + } + } + + return n; +} + +static unsigned int +edma_adapter_dev_flush(struct event_dma_adapter *adapter, int16_t dma_dev_id, + uint16_t *nb_ops_flushed) +{ + struct dma_vchan_info *vchan_info; + struct dma_device_info *dev_info; + uint16_t nb = 0, nb_enqueued = 0; + uint16_t vchan, nb_vchans; + + dev_info = &adapter->dma_devs[dma_dev_id]; + nb_vchans = dev_info->num_vchanq; + + for (vchan = 0; vchan < nb_vchans; vchan++) { + + vchan_info = &dev_info->vchanq[vchan]; + if (unlikely(vchan_info == NULL || !vchan_info->vq_enabled)) + continue; + + edma_circular_buffer_flush_to_dma_dev(adapter, &vchan_info->dma_buf, dma_dev_id, + vchan, &nb_enqueued); + *nb_ops_flushed += vchan_info->dma_buf.count; + nb += nb_enqueued; + } + + return nb; +} + +static unsigned int +edma_adapter_enq_flush(struct event_dma_adapter *adapter) +{ + int16_t dma_dev_id; + uint16_t nb_enqueued = 0; + uint16_t nb_ops_flushed = 0; + uint16_t num_dma_dev = rte_dma_count_avail(); + + for (dma_dev_id = 0; dma_dev_id < num_dma_dev; dma_dev_id++) + nb_enqueued += edma_adapter_dev_flush(adapter, dma_dev_id, &nb_ops_flushed); + /** + * Enable dequeue from eventdev if all ops from circular + * buffer flushed to dma_dev + */ + if (!nb_ops_flushed) + adapter->stop_enq_to_dma_dev = false; + + return nb_enqueued; +} + +/* Flush an instance's enqueue buffers every DMA_ENQ_FLUSH_THRESHOLD + * iterations of edma_adapter_enq_run() + */ +#define DMA_ENQ_FLUSH_THRESHOLD 1024 + +static int +edma_adapter_enq_run(struct event_dma_adapter *adapter, unsigned int max_enq) +{ + uint8_t event_port_id = adapter->event_port_id; + uint8_t event_dev_id = adapter->eventdev_id; + struct rte_event ev[DMA_BATCH_SIZE]; + unsigned int nb_enq, nb_enqueued; + uint16_t n; + + if (adapter->mode == RTE_EVENT_DMA_ADAPTER_OP_NEW) + return 0; + + nb_enqueued = 0; + for (nb_enq = 0; nb_enq < max_enq; nb_enq += n) { + + if (unlikely(adapter->stop_enq_to_dma_dev)) { + nb_enqueued += edma_adapter_enq_flush(adapter); + + if (unlikely(adapter->stop_enq_to_dma_dev)) + break; + } + + n = 
rte_event_dequeue_burst(event_dev_id, event_port_id, ev, DMA_BATCH_SIZE, 0); + + if (!n) + break; + + nb_enqueued += edma_enq_to_dma_dev(adapter, ev, n); + } + + if ((++adapter->transmit_loop_count & (DMA_ENQ_FLUSH_THRESHOLD - 1)) == 0) + nb_enqueued += edma_adapter_enq_flush(adapter); + + return nb_enqueued; +} + +#define DMA_ADAPTER_MAX_EV_ENQ_RETRIES 100 + +static inline uint16_t +edma_ops_enqueue_burst(struct event_dma_adapter *adapter, struct rte_event_dma_adapter_op **ops, + uint16_t num) +{ + uint8_t event_port_id = adapter->event_port_id; + uint8_t event_dev_id = adapter->eventdev_id; + struct rte_event events[DMA_BATCH_SIZE]; + struct rte_event *response_info; + uint16_t nb_enqueued, nb_ev; + uint8_t retry; + uint8_t i; + + nb_ev = 0; + retry = 0; + nb_enqueued = 0; + num = RTE_MIN(num, DMA_BATCH_SIZE); + for (i = 0; i < num; i++) { + struct rte_event *ev = &events[nb_ev++]; + + /* Expected to have response info appended to dma_op. */ + response_info = (struct rte_event *)((uint8_t *)ops[i] + + sizeof(struct rte_event_dma_adapter_op)); + if (unlikely(response_info == NULL)) { + if (ops[i] != NULL && ops[i]->op_mp != NULL) + rte_mempool_put(ops[i]->op_mp, ops[i]); + continue; + } + + rte_memcpy(ev, response_info, sizeof(struct rte_event)); + ev->event_ptr = ops[i]; + ev->event_type = RTE_EVENT_TYPE_DMADEV; + if (adapter->implicit_release_disabled) + ev->op = RTE_EVENT_OP_FORWARD; + else + ev->op = RTE_EVENT_OP_NEW; + } + + do { + nb_enqueued += rte_event_enqueue_burst(event_dev_id, event_port_id, + &events[nb_enqueued], nb_ev - nb_enqueued); + + } while (retry++ < DMA_ADAPTER_MAX_EV_ENQ_RETRIES && nb_enqueued < nb_ev); + + return nb_enqueued; +} + +static int +edma_circular_buffer_flush_to_evdev(struct event_dma_adapter *adapter, + struct dma_ops_circular_buffer *bufp, + uint16_t *enqueue_count) +{ + struct rte_event_dma_adapter_op **ops = bufp->op_buffer; + uint16_t n = 0, nb_ops_flushed; + uint16_t *head = &bufp->head; + uint16_t *tail = &bufp->tail; + + if (*tail > *head) + n = *tail - *head; + else if (*tail < *head) + n = bufp->size - *head; + else { + if (enqueue_count) + *enqueue_count = 0; + return 0; /* buffer empty */ + } + + if (enqueue_count && n > *enqueue_count) + n = *enqueue_count; + + nb_ops_flushed = edma_ops_enqueue_burst(adapter, &ops[*head], n); + if (enqueue_count) + *enqueue_count = nb_ops_flushed; + + bufp->count -= nb_ops_flushed; + if (!bufp->count) { + *head = 0; + *tail = 0; + return 0; /* buffer empty */ + } + + *head = (*head + nb_ops_flushed) % bufp->size; + return 1; +} + +static void +edma_ops_buffer_flush(struct event_dma_adapter *adapter) +{ + if (likely(adapter->ebuf.count == 0)) + return; + + while (edma_circular_buffer_flush_to_evdev(adapter, &adapter->ebuf, NULL)) + ; +} + +static inline unsigned int +edma_adapter_deq_run(struct event_dma_adapter *adapter, unsigned int max_deq) +{ + struct dma_vchan_info *vchan_info; + struct dma_ops_circular_buffer *tq_buf; + struct rte_event_dma_adapter_op *ops; + uint16_t n, nb_deq, nb_enqueued, i; + struct dma_device_info *dev_info; + uint16_t vchan, num_vchan; + uint16_t num_dma_dev; + int16_t dma_dev_id; + uint16_t index; + bool done; + bool err; + + nb_deq = 0; + edma_ops_buffer_flush(adapter); + + num_dma_dev = rte_dma_count_avail(); + do { + done = true; + + for (dma_dev_id = adapter->next_dmadev_id; dma_dev_id < num_dma_dev; dma_dev_id++) { + uint16_t queues = 0; + dev_info = &adapter->dma_devs[dma_dev_id]; + num_vchan = dev_info->num_vchanq; + + for (vchan = dev_info->next_vchan_id; queues < 
num_vchan; + vchan = (vchan + 1) % num_vchan, queues++) { + + vchan_info = &dev_info->vchanq[vchan]; + if (unlikely(vchan_info == NULL || !vchan_info->vq_enabled)) + continue; + + n = rte_dma_completed(dma_dev_id, vchan, DMA_BATCH_SIZE, + &index, &err); + if (!n) + continue; + + done = false; + + tq_buf = &dev_info->tqmap[vchan].dma_buf; + + nb_enqueued = n; + if (unlikely(!adapter->ebuf.count)) + edma_circular_buffer_flush_to_evdev(adapter, tq_buf, + &nb_enqueued); + + if (likely(nb_enqueued == n)) + goto check; + + /* Failed to enqueue events case */ + for (i = nb_enqueued; i < n; i++) { + ops = tq_buf->op_buffer[tq_buf->head]; + edma_circular_buffer_add(&adapter->ebuf, ops); + tq_buf->head = (tq_buf->head + 1) % tq_buf->size; + } + +check: + nb_deq += n; + if (nb_deq >= max_deq) { + if ((vchan + 1) == num_vchan) + adapter->next_dmadev_id = + (dma_dev_id + 1) % num_dma_dev; + + dev_info->next_vchan_id = (vchan + 1) % num_vchan; + + return nb_deq; + } + } + } + adapter->next_dmadev_id = 0; + + } while (done == false); + + return nb_deq; +} + +static int +edma_adapter_run(struct event_dma_adapter *adapter, unsigned int max_ops) +{ + unsigned int ops_left = max_ops; + + while (ops_left > 0) { + unsigned int e_cnt, d_cnt; + + e_cnt = edma_adapter_deq_run(adapter, ops_left); + ops_left -= RTE_MIN(ops_left, e_cnt); + + d_cnt = edma_adapter_enq_run(adapter, ops_left); + ops_left -= RTE_MIN(ops_left, d_cnt); + + if (e_cnt == 0 && d_cnt == 0) + break; + } + + if (ops_left == max_ops) { + rte_event_maintain(adapter->eventdev_id, adapter->event_port_id, 0); + return -EAGAIN; + } else + return 0; +} + +static int +edma_service_func(void *args) +{ + struct event_dma_adapter *adapter = args; + int ret; + + if (rte_spinlock_trylock(&adapter->lock) == 0) + return 0; + ret = edma_adapter_run(adapter, adapter->max_nb); + rte_spinlock_unlock(&adapter->lock); + + return ret; +} + +static int +edma_init_service(struct event_dma_adapter *adapter, uint8_t id) +{ + struct rte_event_dma_adapter_conf adapter_conf; + struct rte_service_spec service; + uint32_t impl_rel; + int ret; + + if (adapter->service_initialized) + return 0; + + memset(&service, 0, sizeof(service)); + snprintf(service.name, DMA_ADAPTER_NAME_LEN, "rte_event_dma_adapter_%d", id); + service.socket_id = adapter->socket_id; + service.callback = edma_service_func; + service.callback_userdata = adapter; + + /* Service function handles locking for queue add/del updates */ + service.capabilities = RTE_SERVICE_CAP_MT_SAFE; + ret = rte_service_component_register(&service, &adapter->service_id); + if (ret) { + RTE_EDEV_LOG_ERR("failed to register service %s err = %" PRId32, service.name, ret); + return ret; + } + + ret = adapter->conf_cb(id, adapter->eventdev_id, &adapter_conf, adapter->conf_arg); + if (ret) { + RTE_EDEV_LOG_ERR("configuration callback failed err = %" PRId32, ret); + return ret; + } + + adapter->max_nb = adapter_conf.max_nb; + adapter->event_port_id = adapter_conf.event_port_id; + + if (rte_event_port_attr_get(adapter->eventdev_id, adapter->event_port_id, + RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE, &impl_rel)) { + RTE_EDEV_LOG_ERR("Failed to get port info for eventdev %" PRId32, + adapter->eventdev_id); + edma_circular_buffer_free(&adapter->ebuf); + rte_free(adapter); + return -EINVAL; + } + + adapter->implicit_release_disabled = (uint8_t)impl_rel; + adapter->service_initialized = 1; + + return ret; +} + static void edma_update_vchanq_info(struct event_dma_adapter *adapter, struct dma_device_info *dev_info, uint16_t vchan, uint8_t 
add) @@ -391,6 +892,60 @@ edma_update_vchanq_info(struct event_dma_adapter *adapter, struct dma_device_inf } } +static int +edma_add_vchan(struct event_dma_adapter *adapter, int16_t dma_dev_id, uint16_t vchan) +{ + struct dma_device_info *dev_info = &adapter->dma_devs[dma_dev_id]; + struct dma_vchan_info *vchanq; + struct dma_vchan_info *tqmap; + uint16_t nb_vchans; + uint32_t i; + + if (dev_info->vchanq == NULL) { + nb_vchans = dev_info->num_dma_dev_vchan; + + dev_info->vchanq = rte_zmalloc_socket(adapter->mem_name, + nb_vchans * sizeof(struct dma_vchan_info), + 0, adapter->socket_id); + if (dev_info->vchanq == NULL) + return -ENOMEM; + + dev_info->tqmap = rte_zmalloc_socket(adapter->mem_name, + nb_vchans * sizeof(struct dma_vchan_info), + 0, adapter->socket_id); + if (dev_info->tqmap == NULL) + return -ENOMEM; + + for (i = 0; i < nb_vchans; i++) { + vchanq = &dev_info->vchanq[i]; + + if (edma_circular_buffer_init("dma_dev_circular_buffer", &vchanq->dma_buf, + DMA_ADAPTER_OPS_BUFFER_SIZE)) { + RTE_EDEV_LOG_ERR("Failed to get memory for dma_dev buffer"); + rte_free(vchanq); + return -ENOMEM; + } + + tqmap = &dev_info->tqmap[i]; + if (edma_circular_buffer_init("dma_dev_circular_trans_buf", &tqmap->dma_buf, + DMA_ADAPTER_OPS_BUFFER_SIZE)) { + RTE_EDEV_LOG_ERR( + "Failed to get memory for dma_dev transaction buffer"); + rte_free(tqmap); + return -ENOMEM; + } + } + } + + if (vchan == RTE_DMA_ALL_VCHAN) { + for (i = 0; i < dev_info->num_dma_dev_vchan; i++) + edma_update_vchanq_info(adapter, dev_info, i, 1); + } else + edma_update_vchanq_info(adapter, dev_info, vchan, 1); + + return 0; +} + int rte_event_dma_adapter_vchan_add(uint8_t id, int16_t dma_dev_id, uint16_t vchan, const struct rte_event *event) @@ -470,6 +1025,38 @@ rte_event_dma_adapter_vchan_add(uint8_t id, int16_t dma_dev_id, uint16_t vchan, edma_update_vchanq_info(adapter, &adapter->dma_devs[dma_dev_id], vchan, 1); } + /* In case HW cap is RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_NEW, or SW adapter, initiate + * services so the application can choose which ever way it wants to use the adapter. + * + * Case 1: RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_NEW. Application may wants to use one + * of below two modes + * + * a. OP_FORWARD mode -> HW Dequeue + SW enqueue + * b. OP_NEW mode -> HW Dequeue + * + * Case 2: No HW caps, use SW adapter + * + * a. OP_FORWARD mode -> SW enqueue & dequeue + * b. 
OP_NEW mode -> SW Dequeue + */ + if ((cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_NEW && + !(cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) && + adapter->mode == RTE_EVENT_DMA_ADAPTER_OP_FORWARD) || + (!(cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_NEW) && + !(cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) && + !(cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_VCHAN_EV_BIND))) { + rte_spinlock_lock(&adapter->lock); + ret = edma_init_service(adapter, id); + if (ret == 0) + ret = edma_add_vchan(adapter, dma_dev_id, vchan); + rte_spinlock_unlock(&adapter->lock); + + if (ret) + return ret; + + rte_service_component_runstate_set(adapter->service_id, 1); + } + return 0; } @@ -533,6 +1120,7 @@ rte_event_dma_adapter_vchan_del(uint8_t id, int16_t dma_dev_id, uint16_t vchan) } rte_spinlock_unlock(&adapter->lock); + rte_service_component_runstate_set(adapter->service_id, adapter->nb_vchanq); } return ret; From patchwork Thu Sep 28 16:49:52 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 132152 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 6EDE042659; Thu, 28 Sep 2023 18:51:02 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 5F42640273; Thu, 28 Sep 2023 18:51:02 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 44A1940273 for ; Thu, 28 Sep 2023 18:51:00 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 38SAfXqC003433; Thu, 28 Sep 2023 09:50:59 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=oWMyBXfYCHuFynfB0KXr7vW3hT5ylGiJrbNTSBKjwJU=; b=dz+57y+AKHCT8ZMSD5rLjaplME2q8dGK2NsuwciegarER3tWRzBFVB+uC8P4kN+woXe+ hxBDF2ETxpnT2Thvd6n1G4pPdAheha95sWyFo1EVjRcwBxftraxZcMn/Tf4QLgpasxdU P/CQ37QQxQB1JJ7Eo4DPCuqlNW4mX3st3sJapCWs9ZpKAy3WBFgXa7wmSrJrrEVQ9Cab 06Lc/eyUq0GC8a3/yK5eRwjCStgB2dA/ZW6EC37EseeB4qdMHTmxu22vHuOb2p5+5BcF JLNgESo8CHKLmJ2HtqLRDN9v9wZomDRfQ24xwCBHP2oJipjlIUiVkG642ICdFJ/HprbH 9w== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3td7y6sdb7-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 28 Sep 2023 09:50:59 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Thu, 28 Sep 2023 09:50:57 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Thu, 28 Sep 2023 09:50:57 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id 45CD13F703F; Thu, 28 Sep 2023 09:50:53 -0700 (PDT) From: Amit Prakash Shukla To: Amit Prakash Shukla , Jerin Jacob CC: , , , , , , , , , , , , Subject: [PATCH v6 06/12] eventdev/dma: support adapter start and stop Date: Thu, 28 Sep 2023 22:19:52 +0530 Message-ID: <20230928164959.340575-7-amitprakashs@marvell.com> 
X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230928164959.340575-1-amitprakashs@marvell.com> References: <20230928103623.216287-1-amitprakashs@marvell.com> <20230928164959.340575-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: QchTK2zrJGXn-QTbdIeQ2dKwgsullaGI X-Proofpoint-GUID: QchTK2zrJGXn-QTbdIeQ2dKwgsullaGI X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.980,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-09-28_16,2023-09-28_03,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Added API support to start and stop DMA adapter. Signed-off-by: Amit Prakash Shukla --- lib/eventdev/rte_event_dma_adapter.c | 69 ++++++++++++++++++++++++++++ 1 file changed, 69 insertions(+) diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c index b0845eb415..e955f19c68 100644 --- a/lib/eventdev/rte_event_dma_adapter.c +++ b/lib/eventdev/rte_event_dma_adapter.c @@ -74,6 +74,13 @@ struct dma_device_info { /* Next queue pair to be processed */ uint16_t next_vchan_id; + /* Set to indicate processing has been started */ + uint8_t dev_started; + + /* Set to indicate dmadev->eventdev packet + * transfer uses a hardware mechanism + */ + uint8_t internal_event_port; } __rte_cache_aligned; struct event_dma_adapter { @@ -1125,3 +1132,65 @@ rte_event_dma_adapter_vchan_del(uint8_t id, int16_t dma_dev_id, uint16_t vchan) return ret; } + +static int +edma_adapter_ctrl(uint8_t id, int start) +{ + struct event_dma_adapter *adapter; + struct dma_device_info *dev_info; + struct rte_eventdev *dev; + uint16_t num_dma_dev; + int stop = !start; + int use_service; + uint32_t i; + + use_service = 0; + EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); + adapter = edma_id_to_adapter(id); + if (adapter == NULL) + return -EINVAL; + + num_dma_dev = rte_dma_count_avail(); + dev = &rte_eventdevs[adapter->eventdev_id]; + + for (i = 0; i < num_dma_dev; i++) { + dev_info = &adapter->dma_devs[i]; + /* start check for num queue pairs */ + if (start && !dev_info->num_vchanq) + continue; + /* stop check if dev has been started */ + if (stop && !dev_info->dev_started) + continue; + use_service |= !dev_info->internal_event_port; + dev_info->dev_started = start; + if (dev_info->internal_event_port == 0) + continue; + start ? 
(*dev->dev_ops->dma_adapter_start)(dev, i) : + (*dev->dev_ops->dma_adapter_stop)(dev, i); + } + + if (use_service) + rte_service_runstate_set(adapter->service_id, start); + + return 0; +} + +int +rte_event_dma_adapter_start(uint8_t id) +{ + struct event_dma_adapter *adapter; + + EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); + + adapter = edma_id_to_adapter(id); + if (adapter == NULL) + return -EINVAL; + + return edma_adapter_ctrl(id, 1); +} + +int +rte_event_dma_adapter_stop(uint8_t id) +{ + return edma_adapter_ctrl(id, 0); +} From patchwork Thu Sep 28 16:49:53 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 132153 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 936B142659; Thu, 28 Sep 2023 18:51:12 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 83FB5402EE; Thu, 28 Sep 2023 18:51:12 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 653A7402EE for ; Thu, 28 Sep 2023 18:51:10 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 38SAfXqG003433; Thu, 28 Sep 2023 09:51:09 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=4MGI4EEsazQz3fqTpIlWkK4V6oG0KL4Pgf7U/yWHURU=; b=bxpaayS9/IrUP12mG91N/uy3lzrAkD2aZpP/BFFHGtYwQCTqD9ABxo/IsbrTPv5qzzY2 5i2rk2ghZflzQjr6oDKEczK6ojWGBxTGe+MLB25ksOQQG8CV1LGHmGcj+dZ7hdVyW/76 m3NvSQHzJpLdzlgOfQ/L2NtN4G8UN0pUF/6qj+CP6yU5BHXsV5jIcMk8nwrOaWNSIjiw iO4IyJNQTtrRAaQXbMOXHT6E99NBp1CyL1dWQ6kKblw5Ow2RPYJ+jGMVwZCRgx+rjo6o xbye1oULhQr1+0LOfKxF7+YLa6LNY+Gc8paqeazEvvUNpI2tgKatYnP/7QGeiNBVWpx8 kg== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3td7y6sdck-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 28 Sep 2023 09:51:09 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Thu, 28 Sep 2023 09:51:07 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Thu, 28 Sep 2023 09:51:07 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id 89EFC5C68F1; Thu, 28 Sep 2023 09:51:03 -0700 (PDT) From: Amit Prakash Shukla To: Amit Prakash Shukla , Jerin Jacob CC: , , , , , , , , , , , , Subject: [PATCH v6 07/12] eventdev/dma: support adapter service ID get Date: Thu, 28 Sep 2023 22:19:53 +0530 Message-ID: <20230928164959.340575-8-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230928164959.340575-1-amitprakashs@marvell.com> References: <20230928103623.216287-1-amitprakashs@marvell.com> <20230928164959.340575-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: sfArwLaVsfMKkfrNBTkLxFc53GGL1CSc X-Proofpoint-GUID: sfArwLaVsfMKkfrNBTkLxFc53GGL1CSc X-Proofpoint-Virus-Version: vendor=baseguard 
engine=ICAP:2.0.267,Aquarius:18.0.980,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-09-28_16,2023-09-28_03,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Added API support to get DMA adapter service ID. Service id returned in the variable by the API call shall be used by application to map a service core. Signed-off-by: Amit Prakash Shukla --- lib/eventdev/rte_event_dma_adapter.c | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c index e955f19c68..63b07cd14e 100644 --- a/lib/eventdev/rte_event_dma_adapter.c +++ b/lib/eventdev/rte_event_dma_adapter.c @@ -1133,6 +1133,23 @@ rte_event_dma_adapter_vchan_del(uint8_t id, int16_t dma_dev_id, uint16_t vchan) return ret; } +int +rte_event_dma_adapter_service_id_get(uint8_t id, uint32_t *service_id) +{ + struct event_dma_adapter *adapter; + + EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); + + adapter = edma_id_to_adapter(id); + if (adapter == NULL || service_id == NULL) + return -EINVAL; + + if (adapter->service_initialized) + *service_id = adapter->service_id; + + return adapter->service_initialized ? 0 : -ESRCH; +} + static int edma_adapter_ctrl(uint8_t id, int start) { From patchwork Thu Sep 28 16:49:54 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 132154 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id C6E3042659; Thu, 28 Sep 2023 18:51:18 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B653140A73; Thu, 28 Sep 2023 18:51:18 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 2D78240A73 for ; Thu, 28 Sep 2023 18:51:17 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 38SAfcAJ003479; Thu, 28 Sep 2023 09:51:16 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=DweaABuhl1Ag1XPwKSVb2n8QHaXToUCMy4QbMb8cePk=; b=WXcnznfKHZUMzrcEXHOn5WRFMKKPljegKGWjM3jmNLdzfkyhH1uUp2PVOhq4WgzKFsg4 ksQVgQedhuq9WAVYBxg2ho960Sctsrj5gb+uxiS/OoYGuK2Bvrt0GrbPQaU+C/mLjuWr uxCJ+koEyvScRprT4H2KzI9SK7+bEhBLM4RNztGSn6DSCxJf9map6itQTIK8D7I6thTw 331rLkdFf4KZgZCqunjvGMOE0BK+OluiRTVjSBrPlPV15ovtdjL1uVV7K/vrB68IhfWC QOKu0XPOuWwhZoD3aWRh1ldJcRURpmTjPxmNVFl4+ABUH6qEbExEsU3ZRoQG7kiX7VSV jw== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3td7y6sdd3-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 28 Sep 2023 09:51:16 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Thu, 28 Sep 2023 09:51:14 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 
via Frontend Transport; Thu, 28 Sep 2023 09:51:14 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id 7B6D43F703F; Thu, 28 Sep 2023 09:51:10 -0700 (PDT) From: Amit Prakash Shukla To: Amit Prakash Shukla , Jerin Jacob CC: , , , , , , , , , , , , Subject: [PATCH v6 08/12] eventdev/dma: support adapter runtime params Date: Thu, 28 Sep 2023 22:19:54 +0530 Message-ID: <20230928164959.340575-9-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230928164959.340575-1-amitprakashs@marvell.com> References: <20230928103623.216287-1-amitprakashs@marvell.com> <20230928164959.340575-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: DyrFqegjl5xaJfT4v6g6_AaaIEAkY9jm X-Proofpoint-GUID: DyrFqegjl5xaJfT4v6g6_AaaIEAkY9jm X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.980,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-09-28_16,2023-09-28_03,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Added support to set and get runtime params for DMA adapter. The parameters that can be set/get are defined in struct rte_event_dma_adapter_runtime_params. Signed-off-by: Amit Prakash Shukla --- lib/eventdev/rte_event_dma_adapter.c | 93 ++++++++++++++++++++++++++++ 1 file changed, 93 insertions(+) diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c index 63b07cd14e..850b010712 100644 --- a/lib/eventdev/rte_event_dma_adapter.c +++ b/lib/eventdev/rte_event_dma_adapter.c @@ -1211,3 +1211,96 @@ rte_event_dma_adapter_stop(uint8_t id) { return edma_adapter_ctrl(id, 0); } + +#define DEFAULT_MAX_NB 128 + +int +rte_event_dma_adapter_runtime_params_init(struct rte_event_dma_adapter_runtime_params *params) +{ + if (params == NULL) + return -EINVAL; + + memset(params, 0, sizeof(*params)); + params->max_nb = DEFAULT_MAX_NB; + + return 0; +} + +static int +dma_adapter_cap_check(struct event_dma_adapter *adapter) +{ + uint32_t caps; + int ret; + + if (!adapter->nb_vchanq) + return -EINVAL; + + ret = rte_event_dma_adapter_caps_get(adapter->eventdev_id, adapter->next_dmadev_id, &caps); + if (ret) { + RTE_EDEV_LOG_ERR("Failed to get adapter caps dev %" PRIu8 " cdev %" PRIu8, + adapter->eventdev_id, adapter->next_dmadev_id); + return ret; + } + + if ((caps & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) || + (caps & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_NEW)) + return -ENOTSUP; + + return 0; +} + +int +rte_event_dma_adapter_runtime_params_set(uint8_t id, + struct rte_event_dma_adapter_runtime_params *params) +{ + struct event_dma_adapter *adapter; + int ret; + + EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); + + if (params == NULL) { + RTE_EDEV_LOG_ERR("params pointer is NULL\n"); + return -EINVAL; + } + + adapter = edma_id_to_adapter(id); + if (adapter == NULL) + return -EINVAL; + + ret = dma_adapter_cap_check(adapter); + if (ret) + return ret; + + rte_spinlock_lock(&adapter->lock); + adapter->max_nb = params->max_nb; + rte_spinlock_unlock(&adapter->lock); + + return 0; +} + +int +rte_event_dma_adapter_runtime_params_get(uint8_t id, + struct rte_event_dma_adapter_runtime_params *params) +{ + struct event_dma_adapter *adapter; + int ret; + + EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); + + if (params == NULL) { + RTE_EDEV_LOG_ERR("params pointer is NULL\n"); + return -EINVAL; + 
} + + adapter = edma_id_to_adapter(id); + if (adapter == NULL) + return -EINVAL; + + ret = dma_adapter_cap_check(adapter); + if (ret) + return ret; + + params->max_nb = adapter->max_nb; + + return 0; +} From patchwork Thu Sep 28 16:49:55 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 132155 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 067EF42659; Thu, 28 Sep 2023 18:51:26 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id E795C40E4A; Thu, 28 Sep 2023 18:51:25 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id EE8FE40E03 for ; Thu, 28 Sep 2023 18:51:23 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 38SAfbHM003461; Thu, 28 Sep 2023 09:51:23 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=Oni4O77r3aBqn2TFb+rqnhRKrtStu1eavoYijWZvCCU=; b=ZaUX3reVK+Cyh4LWgv2vzcHJDWSnX7JMDXUWP/9Wkq5F69zWtRVmXN0QtPyPNx4TdwnN JItqnzQpx2GVXDqHMEu4O4KsUlFV6UiV8O1CWgYVFHXKPGz9mzjG3OBhw3In+9odyHBk U21lvCUXNCinXqMGT7TYj1rdBHEM1rjjVi0mDCkPKmKchDRZJJk+IkujMVcPvdqv7g1c y7awTU1FrYIwCsWUt521SiuxI1GgaHbbPF8mubEO1mNddZ1EacktdazWk/3xtXtSWnSD F31W1TxMWEEOQOgY/8uXAO29cQp2khuT0NYfwuAd5qUX2zvCclzq8lTv1JK3Yr3AvEW2 jQ== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3td7y6sddc-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 28 Sep 2023 09:51:23 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Thu, 28 Sep 2023 09:51:21 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Thu, 28 Sep 2023 09:51:21 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id 4D6E65C68F5; Thu, 28 Sep 2023 09:51:17 -0700 (PDT) From: Amit Prakash Shukla To: Amit Prakash Shukla , Jerin Jacob CC: , , , , , , , , , , , , Subject: [PATCH v6 09/12] eventdev/dma: support adapter stats Date: Thu, 28 Sep 2023 22:19:55 +0530 Message-ID: <20230928164959.340575-10-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230928164959.340575-1-amitprakashs@marvell.com> References: <20230928103623.216287-1-amitprakashs@marvell.com> <20230928164959.340575-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: Zhr7sHsR-0DKmF5XL3oTBBYuFXuzkhyd X-Proofpoint-GUID: Zhr7sHsR-0DKmF5XL3oTBBYuFXuzkhyd X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.980,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-09-28_16,2023-09-28_03,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Added DMA adapter 
stats API support to get and reset stats. DMA SW adapter stats and eventdev driver supported stats for enqueue and dequeue are reported by get API. Signed-off-by: Amit Prakash Shukla --- lib/eventdev/rte_event_dma_adapter.c | 95 ++++++++++++++++++++++++++++ 1 file changed, 95 insertions(+) diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c index 850b010712..842fb74734 100644 --- a/lib/eventdev/rte_event_dma_adapter.c +++ b/lib/eventdev/rte_event_dma_adapter.c @@ -142,6 +142,9 @@ struct event_dma_adapter { /* Loop counter to flush dma ops */ uint16_t transmit_loop_count; + + /* Per instance stats structure */ + struct rte_event_dma_adapter_stats dma_stats; } __rte_cache_aligned; static struct event_dma_adapter **event_dma_adapter; @@ -471,6 +474,7 @@ rte_event_dma_adapter_free(uint8_t id) static inline unsigned int edma_enq_to_dma_dev(struct event_dma_adapter *adapter, struct rte_event *ev, unsigned int cnt) { + struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats; struct dma_vchan_info *vchan_qinfo = NULL; struct rte_event_dma_adapter_op *dma_op; uint16_t vchan, nb_enqueued = 0; @@ -480,6 +484,7 @@ edma_enq_to_dma_dev(struct event_dma_adapter *adapter, struct rte_event *ev, uns ret = 0; n = 0; + stats->event_deq_count += cnt; for (i = 0; i < cnt; i++) { dma_op = ev[i].event_ptr; @@ -502,6 +507,7 @@ edma_enq_to_dma_dev(struct event_dma_adapter *adapter, struct rte_event *ev, uns ret = edma_circular_buffer_flush_to_dma_dev(adapter, &vchan_qinfo->dma_buf, dma_dev_id, vchan, &nb_enqueued); + stats->dma_enq_count += nb_enqueued; n += nb_enqueued; /** @@ -548,6 +554,7 @@ edma_adapter_dev_flush(struct event_dma_adapter *adapter, int16_t dma_dev_id, static unsigned int edma_adapter_enq_flush(struct event_dma_adapter *adapter) { + struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats; int16_t dma_dev_id; uint16_t nb_enqueued = 0; uint16_t nb_ops_flushed = 0; @@ -562,6 +569,8 @@ edma_adapter_enq_flush(struct event_dma_adapter *adapter) if (!nb_ops_flushed) adapter->stop_enq_to_dma_dev = false; + stats->dma_enq_count += nb_enqueued; + return nb_enqueued; } @@ -573,6 +582,7 @@ edma_adapter_enq_flush(struct event_dma_adapter *adapter) static int edma_adapter_enq_run(struct event_dma_adapter *adapter, unsigned int max_enq) { + struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats; uint8_t event_port_id = adapter->event_port_id; uint8_t event_dev_id = adapter->eventdev_id; struct rte_event ev[DMA_BATCH_SIZE]; @@ -592,6 +602,7 @@ edma_adapter_enq_run(struct event_dma_adapter *adapter, unsigned int max_enq) break; } + stats->event_poll_count++; n = rte_event_dequeue_burst(event_dev_id, event_port_id, ev, DMA_BATCH_SIZE, 0); if (!n) @@ -612,6 +623,7 @@ static inline uint16_t edma_ops_enqueue_burst(struct event_dma_adapter *adapter, struct rte_event_dma_adapter_op **ops, uint16_t num) { + struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats; uint8_t event_port_id = adapter->event_port_id; uint8_t event_dev_id = adapter->eventdev_id; struct rte_event events[DMA_BATCH_SIZE]; @@ -651,6 +663,10 @@ edma_ops_enqueue_burst(struct event_dma_adapter *adapter, struct rte_event_dma_a } while (retry++ < DMA_ADAPTER_MAX_EV_ENQ_RETRIES && nb_enqueued < nb_ev); + stats->event_enq_fail_count += nb_ev - nb_enqueued; + stats->event_enq_count += nb_enqueued; + stats->event_enq_retry_count += retry - 1; + return nb_enqueued; } @@ -705,6 +721,7 @@ edma_ops_buffer_flush(struct event_dma_adapter *adapter) static inline unsigned int 
edma_adapter_deq_run(struct event_dma_adapter *adapter, unsigned int max_deq) { + struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats; struct dma_vchan_info *vchan_info; struct dma_ops_circular_buffer *tq_buf; struct rte_event_dma_adapter_op *ops; @@ -742,6 +759,7 @@ edma_adapter_deq_run(struct event_dma_adapter *adapter, unsigned int max_deq) continue; done = false; + stats->dma_deq_count += n; tq_buf = &dev_info->tqmap[vchan].dma_buf; @@ -1304,3 +1322,80 @@ rte_event_dma_adapter_runtime_params_get(uint8_t id, return 0; } + +int +rte_event_dma_adapter_stats_get(uint8_t id, struct rte_event_dma_adapter_stats *stats) +{ + struct rte_event_dma_adapter_stats dev_stats_sum = {0}; + struct rte_event_dma_adapter_stats dev_stats; + struct event_dma_adapter *adapter; + struct dma_device_info *dev_info; + struct rte_eventdev *dev; + uint16_t num_dma_dev; + uint32_t i; + int ret; + + EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); + + adapter = edma_id_to_adapter(id); + if (adapter == NULL || stats == NULL) + return -EINVAL; + + num_dma_dev = rte_dma_count_avail(); + dev = &rte_eventdevs[adapter->eventdev_id]; + memset(stats, 0, sizeof(*stats)); + for (i = 0; i < num_dma_dev; i++) { + dev_info = &adapter->dma_devs[i]; + + if (dev_info->internal_event_port == 0 || + dev->dev_ops->dma_adapter_stats_get == NULL) + continue; + + ret = (*dev->dev_ops->dma_adapter_stats_get)(dev, i, &dev_stats); + if (ret) + continue; + + dev_stats_sum.dma_deq_count += dev_stats.dma_deq_count; + dev_stats_sum.event_enq_count += dev_stats.event_enq_count; + } + + if (adapter->service_initialized) + *stats = adapter->dma_stats; + + stats->dma_deq_count += dev_stats_sum.dma_deq_count; + stats->event_enq_count += dev_stats_sum.event_enq_count; + + return 0; +} + +int +rte_event_dma_adapter_stats_reset(uint8_t id) +{ + struct event_dma_adapter *adapter; + struct dma_device_info *dev_info; + struct rte_eventdev *dev; + uint16_t num_dma_dev; + uint32_t i; + + EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); + + adapter = edma_id_to_adapter(id); + if (adapter == NULL) + return -EINVAL; + + num_dma_dev = rte_dma_count_avail(); + dev = &rte_eventdevs[adapter->eventdev_id]; + for (i = 0; i < num_dma_dev; i++) { + dev_info = &adapter->dma_devs[i]; + + if (dev_info->internal_event_port == 0 || + dev->dev_ops->dma_adapter_stats_reset == NULL) + continue; + + (*dev->dev_ops->dma_adapter_stats_reset)(dev, i); + } + + memset(&adapter->dma_stats, 0, sizeof(adapter->dma_stats)); + + return 0; +} From patchwork Thu Sep 28 16:49:56 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 132156 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 2EFE842659; Thu, 28 Sep 2023 18:51:34 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 211D1402DD; Thu, 28 Sep 2023 18:51:34 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id DEAAC40E40 for ; Thu, 28 Sep 2023 18:51:30 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 38SFhJfK002529; Thu, 28 Sep 2023 09:51:30 -0700 DKIM-Signature: v=1; a=rsa-sha256; 
c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=lRQ4OFi1gRYo/zRKwLnZhcwd547xAsFzgqdi/XJQJ8U=; b=ADOYkd+rzkGAhOAA9i+0ffuaD2w3lRscXm7I9ds4RyVDKOd9qKeHYwK8CjVKMZoY8ya5 kn3zykrjoPjZf6hwkgZxslFW8dvsjMFTfRblzBEMWkbMyHbi5fhRM91HBNlKtcBhyAj0 isZlwVo5d5aqFp1FC1iibEGaku30PId+3LNUsKRM16CvR+ne99kNoPChR9KAS/TWZikb /KNdIkeoc3Zni1OaT9GRMixFK20V3tf415GgDWjwkPFxFhcMw3H9phlWbiAn3Xd4Bav1 o0neEuuIdAVx+FOS6sR5zSQoKLZwJFIdYApYW6P/CgWxtDvGf4V8OCpRSsEJJBmXMK2Z sQ== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3tcrrs4h9p-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 28 Sep 2023 09:51:30 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Thu, 28 Sep 2023 09:51:28 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Thu, 28 Sep 2023 09:51:27 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id D41EA5C68F1; Thu, 28 Sep 2023 09:51:23 -0700 (PDT) From: Amit Prakash Shukla To: Amit Prakash Shukla , Jerin Jacob CC: , , , , , , , , , , , , Subject: [PATCH v6 10/12] eventdev/dma: support adapter enqueue Date: Thu, 28 Sep 2023 22:19:56 +0530 Message-ID: <20230928164959.340575-11-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230928164959.340575-1-amitprakashs@marvell.com> References: <20230928103623.216287-1-amitprakashs@marvell.com> <20230928164959.340575-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: _7pPo_790l147yD5DKSPbECNvwRRaRfy X-Proofpoint-GUID: _7pPo_790l147yD5DKSPbECNvwRRaRfy X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.980,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-09-28_16,2023-09-28_03,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Added API support to enqueue a DMA operation to the DMA driver. 
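As a rough usage sketch of the new fast-path call (the event contents mirror what the adapter autotest later in this series prepares; dev_id, port_id and op are placeholders for the application's own event device, event port and prepared struct rte_event_dma_adapter_op, and are not defined by this patch):

    struct rte_event ev;
    uint16_t nb;

    memset(&ev, 0, sizeof(ev));
    ev.event_type = RTE_EVENT_TYPE_DMADEV;
    ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
    ev.event_ptr = op;              /* prepared rte_event_dma_adapter_op */

    /* Hand the op to the DMA driver through the adapter enqueue path. */
    nb = rte_event_dma_adapter_enqueue(dev_id, port_id, &ev, 1);
    /* nb < 1 means the op was not accepted and may be retried. */
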
Signed-off-by: Amit Prakash Shukla --- lib/eventdev/rte_event_dma_adapter.c | 13 +++++++++++++ 1 file changed, 13 insertions(+) diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c index 842fb74734..bca2be2731 100644 --- a/lib/eventdev/rte_event_dma_adapter.c +++ b/lib/eventdev/rte_event_dma_adapter.c @@ -1399,3 +1399,16 @@ rte_event_dma_adapter_stats_reset(uint8_t id) return 0; } + +uint16_t +rte_event_dma_adapter_enqueue(uint8_t dev_id, uint8_t port_id, struct rte_event ev[], + uint16_t nb_events) +{ + const struct rte_event_fp_ops *fp_ops; + void *port; + + fp_ops = &rte_event_fp_ops[dev_id]; + port = fp_ops->data[port_id]; + + return fp_ops->dma_enqueue(port, ev, nb_events); +} From patchwork Thu Sep 28 16:49:57 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 132157 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id B703B42659; Thu, 28 Sep 2023 18:51:48 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id A260F4021E; Thu, 28 Sep 2023 18:51:48 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 4A1A140E7C for ; Thu, 28 Sep 2023 18:51:38 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 38SAfaBw003447; Thu, 28 Sep 2023 09:51:37 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=wQkSPk+415/DGzUyAkVRmNz8x0WyksgF1Al/QJzIlc4=; b=UkCesKLaq64HmVZn2NkEUT7TD5nazKDe7s1p7XTARSOxDWU59whoWiWs+jk7uSHYS6z7 qtp0hpbGo7gFzHSRfffxV2WLb1OVdXs37FVJsyWtNBp2Ss/kNpPUsessr/PkPQKzOnFA joAN8Ya9dmB+QvrMzovMg+ulKBJT5VNJQTWmslHuTX+S4ngSE3JtJ6scfOkf+vqDezzk E8PysCGVA4SKdirzSzMjfAhXL6j8P6ZdRJ9LOQMd+f3ie35Fr5FSpCMFRMeeYzA8jQ4e tICXSFbnoVyB1+kDcdccyvToh0HgW6BOK+KvHjHfEXCO7gXaXHceUhCPTTu1Z8gZFpYu aw== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3td7y6sder-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 28 Sep 2023 09:51:37 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Thu, 28 Sep 2023 09:51:35 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Thu, 28 Sep 2023 09:51:35 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id 01D3B5C68F7; Thu, 28 Sep 2023 09:51:30 -0700 (PDT) From: Amit Prakash Shukla To: Amit Prakash Shukla , Jerin Jacob CC: , , , , , , , , , , , , Subject: [PATCH v6 11/12] eventdev/dma: support adapter event port get Date: Thu, 28 Sep 2023 22:19:57 +0530 Message-ID: <20230928164959.340575-12-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230928164959.340575-1-amitprakashs@marvell.com> References: <20230928103623.216287-1-amitprakashs@marvell.com> 
<20230928164959.340575-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: B8IAGRJqVMCMbyBxSFAxAqGfrdZLgyMo X-Proofpoint-GUID: B8IAGRJqVMCMbyBxSFAxAqGfrdZLgyMo X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.980,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-09-28_16,2023-09-28_03,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Added support for DMA adapter event port get. Signed-off-by: Amit Prakash Shukla --- lib/eventdev/rte_event_dma_adapter.c | 16 ++++++++++++++++ 1 file changed, 16 insertions(+) diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c index bca2be2731..4899bc5d0f 100644 --- a/lib/eventdev/rte_event_dma_adapter.c +++ b/lib/eventdev/rte_event_dma_adapter.c @@ -471,6 +471,22 @@ rte_event_dma_adapter_free(uint8_t id) return 0; } +int +rte_event_dma_adapter_event_port_get(uint8_t id, uint8_t *event_port_id) +{ + struct event_dma_adapter *adapter; + + EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); + + adapter = edma_id_to_adapter(id); + if (adapter == NULL || event_port_id == NULL) + return -EINVAL; + + *event_port_id = adapter->event_port_id; + + return 0; +} + static inline unsigned int edma_enq_to_dma_dev(struct event_dma_adapter *adapter, struct rte_event *ev, unsigned int cnt) { From patchwork Thu Sep 28 16:49:58 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 132158 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 1069742659; Thu, 28 Sep 2023 18:51:54 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B5F2740E68; Thu, 28 Sep 2023 18:51:49 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 652B54021E for ; Thu, 28 Sep 2023 18:51:47 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 38SAfXqN003433; Thu, 28 Sep 2023 09:51:46 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=1fzNSHU+kasecd1bjUxxCFcD8L+Sb5i3CLegYWC4pHI=; b=iNdEcvzIG3MB/Z5ZsvWkvj95INkVHyVw0Nmj9Nl9RjYlM+IEwWB1BguDlN84tS6UwbRW 00Kn3820fySFl07g14cfKf3s3uwSGRgNE4aPhNuYVb7/5yPDcNeCEy4M6vj+NgMaR/JF lp9ki1rpM0tNSgZKHcXGFnW1j0wTF/DeJFvH0iS/i5RnkGeNHx+hF/aRxKjzmiXse8Z7 TDUcENRrwZyhGPTDk4Euxu8TKHbdOWMLzEcPwhtt6bLPvKmYc0D/4lkfE5rD9Tgh/i3W wi/MEOJSy3texc/llwCnxYaEvza5sJzPXurGM8kLtrTAkx0PoGwszCScT4qPCn/UMds8 jQ== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3td7y6sdfw-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Thu, 28 Sep 2023 09:51:46 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Thu, 28 Sep 2023 09:51:44 -0700 Received: from maili.marvell.com (10.69.176.80) by 
DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Thu, 28 Sep 2023 09:51:44 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id 12C055C68F5; Thu, 28 Sep 2023 09:51:39 -0700 (PDT) From: Amit Prakash Shukla To: Thomas Monjalon , Amit Prakash Shukla CC: , , , , , , , , , , , , , Subject: [PATCH v6 12/12] app/test: add event DMA adapter auto-test Date: Thu, 28 Sep 2023 22:19:58 +0530 Message-ID: <20230928164959.340575-13-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230928164959.340575-1-amitprakashs@marvell.com> References: <20230928103623.216287-1-amitprakashs@marvell.com> <20230928164959.340575-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: 1GRb3thLhKbD5dZQHjGYLpwKcBcxsI7e X-Proofpoint-GUID: 1GRb3thLhKbD5dZQHjGYLpwKcBcxsI7e X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.980,Hydra:6.0.619,FMLib:17.11.176.26 definitions=2023-09-28_16,2023-09-28_03,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Added testsuite to test the dma adapter functionality. The testsuite detects event and DMA device capability and accordingly dma adapter is configured and modes are tested. Test command: /app/test/dpdk-test event_dma_adapter_autotest Signed-off-by: Amit Prakash Shukla --- MAINTAINERS | 1 + app/test/meson.build | 1 + app/test/test_event_dma_adapter.c | 805 ++++++++++++++++++++++++++++++ 3 files changed, 807 insertions(+) create mode 100644 app/test/test_event_dma_adapter.c diff --git a/MAINTAINERS b/MAINTAINERS index 4ebbbe8bb3..92c0b47618 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -544,6 +544,7 @@ Eventdev DMA Adapter API M: Amit Prakash Shukla T: git://dpdk.org/next/dpdk-next-eventdev F: lib/eventdev/*dma_adapter* +F: app/test/test_event_dma_adapter.c F: doc/guides/prog_guide/event_dma_adapter.rst Raw device API diff --git a/app/test/meson.build b/app/test/meson.build index 05bae9216d..7caf5ae5fc 100644 --- a/app/test/meson.build +++ b/app/test/meson.build @@ -66,6 +66,7 @@ source_file_deps = { 'test_errno.c': [], 'test_ethdev_link.c': ['ethdev'], 'test_event_crypto_adapter.c': ['cryptodev', 'eventdev', 'bus_vdev'], + 'test_event_dma_adapter.c': ['dmadev', 'eventdev', 'bus_vdev'], 'test_event_eth_rx_adapter.c': ['ethdev', 'eventdev', 'bus_vdev'], 'test_event_eth_tx_adapter.c': ['bus_vdev', 'ethdev', 'net_ring', 'eventdev'], 'test_event_ring.c': ['eventdev'], diff --git a/app/test/test_event_dma_adapter.c b/app/test/test_event_dma_adapter.c new file mode 100644 index 0000000000..1e193f4b52 --- /dev/null +++ b/app/test/test_event_dma_adapter.c @@ -0,0 +1,805 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Marvell. 
+ */ + +#include "test.h" +#include +#include +#include +#include +#include +#include + +#ifdef RTE_EXEC_ENV_WINDOWS +static int +test_event_dma_adapter(void) +{ + printf("event_dma_adapter not supported on Windows, skipping test\n"); + return TEST_SKIPPED; +} + +#else + +#include +#include +#include +#include +#include + +#define NUM_MBUFS (8191) +#define MBUF_CACHE_SIZE (256) +#define TEST_APP_PORT_ID 0 +#define TEST_APP_EV_QUEUE_ID 0 +#define TEST_APP_EV_PRIORITY 0 +#define TEST_APP_EV_FLOWID 0xAABB +#define TEST_DMA_EV_QUEUE_ID 1 +#define TEST_ADAPTER_ID 0 +#define TEST_DMA_DEV_ID 0 +#define TEST_DMA_VCHAN_ID 0 +#define PACKET_LENGTH 1024 +#define NB_TEST_PORTS 1 +#define NB_TEST_QUEUES 2 +#define NUM_CORES 2 +#define DMA_OP_POOL_SIZE 128 +#define TEST_MAX_OP 32 +#define TEST_RINGSIZE 512 + +#define MBUF_SIZE (RTE_PKTMBUF_HEADROOM + PACKET_LENGTH) + +/* Handle log statements in same manner as test macros */ +#define LOG_DBG(...) RTE_LOG(DEBUG, EAL, __VA_ARGS__) + +struct event_dma_adapter_test_params { + struct rte_mempool *src_mbuf_pool; + struct rte_mempool *dst_mbuf_pool; + struct rte_mempool *op_mpool; + uint8_t dma_event_port_id; + uint8_t internal_port_op_fwd; +}; + +struct rte_event dma_response_info = { + .queue_id = TEST_APP_EV_QUEUE_ID, + .sched_type = RTE_SCHED_TYPE_ATOMIC, + .flow_id = TEST_APP_EV_FLOWID, + .priority = TEST_APP_EV_PRIORITY +}; + +static struct event_dma_adapter_test_params params; +static uint8_t dma_adapter_setup_done; +static uint32_t slcore_id; +static int evdev; + +static int +send_recv_ev(struct rte_event *ev) +{ + struct rte_event recv_ev[TEST_MAX_OP]; + uint16_t nb_enqueued = 0; + int i = 0; + + if (params.internal_port_op_fwd) { + nb_enqueued = rte_event_dma_adapter_enqueue(evdev, TEST_APP_PORT_ID, ev, + TEST_MAX_OP); + } else { + while (nb_enqueued < TEST_MAX_OP) { + nb_enqueued += rte_event_enqueue_burst(evdev, TEST_APP_PORT_ID, + &ev[nb_enqueued], TEST_MAX_OP - + nb_enqueued); + } + } + + TEST_ASSERT_EQUAL(nb_enqueued, TEST_MAX_OP, "Failed to send event to dma adapter\n"); + + while (i < TEST_MAX_OP) { + if (rte_event_dequeue_burst(evdev, TEST_APP_PORT_ID, &recv_ev[i], 1, 0) != 1) + continue; + i++; + } + + TEST_ASSERT_EQUAL(i, TEST_MAX_OP, "Test failed. 
Failed to dequeue events.\n"); + + return TEST_SUCCESS; +} + +static int +test_dma_adapter_stats(void) +{ + struct rte_event_dma_adapter_stats stats; + + rte_event_dma_adapter_stats_get(TEST_ADAPTER_ID, &stats); + printf(" +------------------------------------------------------+\n"); + printf(" + DMA adapter stats for instance %u:\n", TEST_ADAPTER_ID); + printf(" + Event port poll count 0x%" PRIx64 "\n", + stats.event_poll_count); + printf(" + Event dequeue count 0x%" PRIx64 "\n", + stats.event_deq_count); + printf(" + DMA dev enqueue count 0x%" PRIx64 "\n", + stats.dma_enq_count); + printf(" + DMA dev enqueue failed count 0x%" PRIx64 "\n", + stats.dma_enq_fail_count); + printf(" + DMA dev dequeue count 0x%" PRIx64 "\n", + stats.dma_deq_count); + printf(" + Event enqueue count 0x%" PRIx64 "\n", + stats.event_enq_count); + printf(" + Event enqueue retry count 0x%" PRIx64 "\n", + stats.event_enq_retry_count); + printf(" + Event enqueue fail count 0x%" PRIx64 "\n", + stats.event_enq_fail_count); + printf(" +------------------------------------------------------+\n"); + + rte_event_dma_adapter_stats_reset(TEST_ADAPTER_ID); + return TEST_SUCCESS; +} + +static int +test_dma_adapter_params(void) +{ + struct rte_event_dma_adapter_runtime_params out_params; + struct rte_event_dma_adapter_runtime_params in_params; + struct rte_event event; + uint32_t cap; + int err, rc; + + err = rte_event_dma_adapter_caps_get(evdev, TEST_DMA_DEV_ID, &cap); + TEST_ASSERT_SUCCESS(err, "Failed to get adapter capabilities\n"); + + if (cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_VCHAN_EV_BIND) { + err = rte_event_dma_adapter_vchan_add(TEST_ADAPTER_ID, TEST_DMA_DEV_ID, + TEST_DMA_VCHAN_ID, &event); + } else + err = rte_event_dma_adapter_vchan_add(TEST_ADAPTER_ID, TEST_DMA_DEV_ID, + TEST_DMA_VCHAN_ID, NULL); + + TEST_ASSERT_SUCCESS(err, "Failed to add vchan\n"); + + err = rte_event_dma_adapter_runtime_params_init(&in_params); + TEST_ASSERT(err == 0, "Expected 0 got %d", err); + err = rte_event_dma_adapter_runtime_params_init(&out_params); + TEST_ASSERT(err == 0, "Expected 0 got %d", err); + + /* Case 1: Get the default value of mbufs processed by adapter */ + err = rte_event_dma_adapter_runtime_params_get(TEST_ADAPTER_ID, &out_params); + if (err == -ENOTSUP) { + rc = TEST_SKIPPED; + goto vchan_del; + } + TEST_ASSERT(err == 0, "Expected 0 got %d", err); + + /* Case 2: Set max_nb = 32 (=BATCH_SEIZE) */ + in_params.max_nb = 32; + + err = rte_event_dma_adapter_runtime_params_set(TEST_ADAPTER_ID, &in_params); + TEST_ASSERT(err == 0, "Expected 0 got %d", err); + + err = rte_event_dma_adapter_runtime_params_get(TEST_ADAPTER_ID, &out_params); + TEST_ASSERT(err == 0, "Expected 0 got %d", err); + TEST_ASSERT(in_params.max_nb == out_params.max_nb, "Expected %u got %u", + in_params.max_nb, out_params.max_nb); + + /* Case 3: Set max_nb = 192 */ + in_params.max_nb = 192; + + err = rte_event_dma_adapter_runtime_params_set(TEST_ADAPTER_ID, &in_params); + TEST_ASSERT(err == 0, "Expected 0 got %d", err); + + err = rte_event_dma_adapter_runtime_params_get(TEST_ADAPTER_ID, &out_params); + TEST_ASSERT(err == 0, "Expected 0 got %d", err); + TEST_ASSERT(in_params.max_nb == out_params.max_nb, "Expected %u got %u", + in_params.max_nb, out_params.max_nb); + + /* Case 4: Set max_nb = 256 */ + in_params.max_nb = 256; + + err = rte_event_dma_adapter_runtime_params_set(TEST_ADAPTER_ID, &in_params); + TEST_ASSERT(err == 0, "Expected 0 got %d", err); + + err = rte_event_dma_adapter_runtime_params_get(TEST_ADAPTER_ID, &out_params); + TEST_ASSERT(err == 
0, "Expected 0 got %d", err); + TEST_ASSERT(in_params.max_nb == out_params.max_nb, "Expected %u got %u", + in_params.max_nb, out_params.max_nb); + + /* Case 5: Set max_nb = 30(src_seg = rte_malloc(NULL, sizeof(struct rte_dma_sge), 0); + op->dst_seg = rte_malloc(NULL, sizeof(struct rte_dma_sge), 0); + + /* Update Op */ + op->src_seg->addr = rte_pktmbuf_iova(src_mbuf[i]); + op->dst_seg->addr = rte_pktmbuf_iova(dst_mbuf[i]); + op->src_seg->length = PACKET_LENGTH; + op->dst_seg->length = PACKET_LENGTH; + op->nb_src = 1; + op->nb_dst = 1; + op->flags = RTE_DMA_OP_FLAG_SUBMIT; + op->op_mp = params.op_mpool; + op->dma_dev_id = TEST_DMA_DEV_ID; + op->vchan = TEST_DMA_VCHAN_ID; + + response_info.event = dma_response_info.event; + rte_memcpy((uint8_t *)op + sizeof(struct rte_event_dma_adapter_op), &response_info, + sizeof(struct rte_event)); + + /* Fill in event info and update event_ptr with rte_event_dma_adapter_op */ + memset(&ev[i], 0, sizeof(struct rte_event)); + ev[i].event = 0; + ev[i].event_type = RTE_EVENT_TYPE_DMADEV; + ev[i].queue_id = TEST_DMA_EV_QUEUE_ID; + ev[i].sched_type = RTE_SCHED_TYPE_ATOMIC; + ev[i].flow_id = 0xAABB; + ev[i].event_ptr = op; + } + + ret = send_recv_ev(ev); + TEST_ASSERT_SUCCESS(ret, "Failed to send/receive event to dma adapter\n"); + + test_dma_adapter_stats(); + + for (i = 0; i < TEST_MAX_OP; i++) { + op = ev[i].event_ptr; + ret = memcmp(rte_pktmbuf_mtod(src_mbuf[i], void *), + rte_pktmbuf_mtod(dst_mbuf[i], void *), PACKET_LENGTH); + + TEST_ASSERT_EQUAL(ret, 0, "Data mismatch for dma adapter\n"); + + rte_free(op->src_seg); + rte_free(op->dst_seg); + rte_mempool_put(op->op_mp, op); + } + + rte_pktmbuf_free_bulk(src_mbuf, TEST_MAX_OP); + rte_pktmbuf_free_bulk(dst_mbuf, TEST_MAX_OP); + + return TEST_SUCCESS; +} + +static int +map_adapter_service_core(void) +{ + uint32_t adapter_service_id; + int ret; + + if (rte_event_dma_adapter_service_id_get(TEST_ADAPTER_ID, &adapter_service_id) == 0) { + uint32_t core_list[NUM_CORES]; + + ret = rte_service_lcore_list(core_list, NUM_CORES); + TEST_ASSERT(ret >= 0, "Failed to get service core list!"); + + if (core_list[0] != slcore_id) { + TEST_ASSERT_SUCCESS(rte_service_lcore_add(slcore_id), + "Failed to add service core"); + TEST_ASSERT_SUCCESS(rte_service_lcore_start(slcore_id), + "Failed to start service core"); + } + + TEST_ASSERT_SUCCESS(rte_service_map_lcore_set( + adapter_service_id, slcore_id, 1), + "Failed to map adapter service"); + } + + return TEST_SUCCESS; +} + +static int +test_with_op_forward_mode(void) +{ + uint32_t cap; + int ret; + + ret = rte_event_dma_adapter_caps_get(evdev, TEST_DMA_DEV_ID, &cap); + TEST_ASSERT_SUCCESS(ret, "Failed to get adapter capabilities\n"); + + if (!(cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) && + !(cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_NEW)) + map_adapter_service_core(); + else { + if (!(cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD)) + return TEST_SKIPPED; + } + + TEST_ASSERT_SUCCESS(rte_event_dma_adapter_start(TEST_ADAPTER_ID), + "Failed to start event dma adapter"); + + ret = test_op_forward_mode(); + TEST_ASSERT_SUCCESS(ret, "DMA - FORWARD mode test failed\n"); + return TEST_SUCCESS; +} + +static int +configure_dmadev(void) +{ + const struct rte_dma_conf conf = { .nb_vchans = 1}; + const struct rte_dma_vchan_conf qconf = { + .direction = RTE_DMA_DIR_MEM_TO_MEM, + .nb_desc = TEST_RINGSIZE, + }; + struct rte_dma_info info; + unsigned int elt_size; + int ret; + + ret = rte_dma_count_avail(); + RTE_TEST_ASSERT_FAIL(ret, "No dma devices found!\n"); + + ret 
= rte_dma_info_get(TEST_DMA_DEV_ID, &info); + TEST_ASSERT_SUCCESS(ret, "Error with rte_dma_info_get()\n"); + + if (info.max_vchans < 1) + RTE_LOG(ERR, USER1, "Error, no channels available on device id %u\n", + TEST_DMA_DEV_ID); + + if (rte_dma_configure(TEST_DMA_DEV_ID, &conf) != 0) + RTE_LOG(ERR, USER1, "Error with rte_dma_configure()\n"); + + if (rte_dma_vchan_setup(TEST_DMA_DEV_ID, TEST_DMA_VCHAN_ID, &qconf) < 0) + RTE_LOG(ERR, USER1, "Error with vchan configuration\n"); + + ret = rte_dma_info_get(TEST_DMA_DEV_ID, &info); + if (ret != 0 || info.nb_vchans != 1) + RTE_LOG(ERR, USER1, "Error, no configured vhcan reported on device id %u\n", + TEST_DMA_DEV_ID); + + params.src_mbuf_pool = rte_pktmbuf_pool_create("DMA_ADAPTER_SRC_MBUFPOOL", NUM_MBUFS, + MBUF_CACHE_SIZE, 0, MBUF_SIZE, + rte_socket_id()); + RTE_TEST_ASSERT_NOT_NULL(params.src_mbuf_pool, "Can't create DMA_SRC_MBUFPOOL\n"); + + params.dst_mbuf_pool = rte_pktmbuf_pool_create("DMA_ADAPTER_DST_MBUFPOOL", NUM_MBUFS, + MBUF_CACHE_SIZE, 0, MBUF_SIZE, + rte_socket_id()); + RTE_TEST_ASSERT_NOT_NULL(params.dst_mbuf_pool, "Can't create DMA_DST_MBUFPOOL\n"); + + elt_size = sizeof(struct rte_event_dma_adapter_op) + sizeof(struct rte_event); + params.op_mpool = rte_mempool_create("EVENT_DMA_OP_POOL", DMA_OP_POOL_SIZE, elt_size, 0, + 0, NULL, NULL, NULL, NULL, rte_socket_id(), 0); + RTE_TEST_ASSERT_NOT_NULL(params.op_mpool, "Can't create DMA_OP_POOL\n"); + + return TEST_SUCCESS; +} + +static inline void +evdev_set_conf_values(struct rte_event_dev_config *dev_conf, struct rte_event_dev_info *info) +{ + memset(dev_conf, 0, sizeof(struct rte_event_dev_config)); + dev_conf->dequeue_timeout_ns = info->min_dequeue_timeout_ns; + dev_conf->nb_event_ports = NB_TEST_PORTS; + dev_conf->nb_event_queues = NB_TEST_QUEUES; + dev_conf->nb_event_queue_flows = info->max_event_queue_flows; + dev_conf->nb_event_port_dequeue_depth = + info->max_event_port_dequeue_depth; + dev_conf->nb_event_port_enqueue_depth = + info->max_event_port_enqueue_depth; + dev_conf->nb_event_port_enqueue_depth = + info->max_event_port_enqueue_depth; + dev_conf->nb_events_limit = + info->max_num_events; +} + +static int +configure_eventdev(void) +{ + struct rte_event_queue_conf queue_conf; + struct rte_event_dev_config devconf; + struct rte_event_dev_info info; + uint32_t queue_count; + uint32_t port_count; + uint8_t qid; + int ret; + + if (!rte_event_dev_count()) { + /* If there is no hardware eventdev, or no software vdev was + * specified on the command line, create an instance of + * event_sw. + */ + LOG_DBG("Failed to find a valid event device... 
" + "testing with event_sw device\n"); + TEST_ASSERT_SUCCESS(rte_vdev_init("event_sw0", NULL), + "Error creating eventdev"); + evdev = rte_event_dev_get_dev_id("event_sw0"); + } + + ret = rte_event_dev_info_get(evdev, &info); + TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info\n"); + + evdev_set_conf_values(&devconf, &info); + + ret = rte_event_dev_configure(evdev, &devconf); + TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev\n"); + + /* Set up event queue */ + ret = rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count); + TEST_ASSERT_SUCCESS(ret, "Queue count get failed\n"); + TEST_ASSERT_EQUAL(queue_count, 2, "Unexpected queue count\n"); + + qid = TEST_APP_EV_QUEUE_ID; + ret = rte_event_queue_setup(evdev, qid, NULL); + TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d\n", qid); + + queue_conf.nb_atomic_flows = info.max_event_queue_flows; + queue_conf.nb_atomic_order_sequences = 32; + queue_conf.schedule_type = RTE_SCHED_TYPE_ATOMIC; + queue_conf.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST; + queue_conf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK; + + qid = TEST_DMA_EV_QUEUE_ID; + ret = rte_event_queue_setup(evdev, qid, &queue_conf); + TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%u\n", qid); + + /* Set up event port */ + ret = rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT, + &port_count); + TEST_ASSERT_SUCCESS(ret, "Port count get failed\n"); + TEST_ASSERT_EQUAL(port_count, 1, "Unexpected port count\n"); + + ret = rte_event_port_setup(evdev, TEST_APP_PORT_ID, NULL); + TEST_ASSERT_SUCCESS(ret, "Failed to setup port=%d\n", + TEST_APP_PORT_ID); + + qid = TEST_APP_EV_QUEUE_ID; + ret = rte_event_port_link(evdev, TEST_APP_PORT_ID, &qid, NULL, 1); + TEST_ASSERT(ret >= 0, "Failed to link queue port=%d\n", + TEST_APP_PORT_ID); + + return TEST_SUCCESS; +} + +static void +test_dma_adapter_free(void) +{ + rte_event_dma_adapter_free(TEST_ADAPTER_ID); +} + +static int +test_dma_adapter_create(void) +{ + struct rte_event_dev_info evdev_info = {0}; + struct rte_event_port_conf conf = {0}; + int ret; + + ret = rte_event_dev_info_get(evdev, &evdev_info); + TEST_ASSERT_SUCCESS(ret, "Failed to create event dma adapter\n"); + + conf.new_event_threshold = evdev_info.max_num_events; + conf.dequeue_depth = evdev_info.max_event_port_dequeue_depth; + conf.enqueue_depth = evdev_info.max_event_port_enqueue_depth; + + /* Create adapter with default port creation callback */ + ret = rte_event_dma_adapter_create(TEST_ADAPTER_ID, evdev, &conf, 0); + TEST_ASSERT_SUCCESS(ret, "Failed to create event dma adapter\n"); + + return TEST_SUCCESS; +} + +static int +test_dma_adapter_vchan_add_del(void) +{ + struct rte_event event; + uint32_t cap; + int ret; + + ret = rte_event_dma_adapter_caps_get(evdev, TEST_DMA_DEV_ID, &cap); + TEST_ASSERT_SUCCESS(ret, "Failed to get adapter capabilities\n"); + + if (cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_VCHAN_EV_BIND) { + ret = rte_event_dma_adapter_vchan_add(TEST_ADAPTER_ID, TEST_DMA_DEV_ID, + TEST_DMA_VCHAN_ID, &event); + } else + ret = rte_event_dma_adapter_vchan_add(TEST_ADAPTER_ID, TEST_DMA_DEV_ID, + TEST_DMA_VCHAN_ID, NULL); + + TEST_ASSERT_SUCCESS(ret, "Failed to create add vchan\n"); + + ret = rte_event_dma_adapter_vchan_del(TEST_ADAPTER_ID, TEST_DMA_DEV_ID, + TEST_DMA_VCHAN_ID); + TEST_ASSERT_SUCCESS(ret, "Failed to delete vchan\n"); + + return TEST_SUCCESS; +} + +static int +configure_event_dma_adapter(enum rte_event_dma_adapter_mode mode) +{ + struct rte_event_dev_info evdev_info = {0}; + struct rte_event_port_conf conf 
= {0}; + struct rte_event event; + uint32_t cap; + int ret; + + ret = rte_event_dma_adapter_caps_get(evdev, TEST_DMA_DEV_ID, &cap); + TEST_ASSERT_SUCCESS(ret, "Failed to get adapter capabilities\n"); + + /* Skip mode and capability mismatch check for SW eventdev */ + if (!(cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_NEW) && + !(cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) && + !(cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_VCHAN_EV_BIND)) + goto adapter_create; + + if (mode == RTE_EVENT_DMA_ADAPTER_OP_FORWARD) { + if (cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) + params.internal_port_op_fwd = 1; + else + return -ENOTSUP; + } + +adapter_create: + ret = rte_event_dev_info_get(evdev, &evdev_info); + TEST_ASSERT_SUCCESS(ret, "Failed to create event dma adapter\n"); + + conf.new_event_threshold = evdev_info.max_num_events; + conf.dequeue_depth = evdev_info.max_event_port_dequeue_depth; + conf.enqueue_depth = evdev_info.max_event_port_enqueue_depth; + + /* Create adapter with default port creation callback */ + ret = rte_event_dma_adapter_create(TEST_ADAPTER_ID, evdev, &conf, mode); + TEST_ASSERT_SUCCESS(ret, "Failed to create event dma adapter\n"); + + if (cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_VCHAN_EV_BIND) { + ret = rte_event_dma_adapter_vchan_add(TEST_ADAPTER_ID, TEST_DMA_DEV_ID, + TEST_DMA_VCHAN_ID, &event); + } else + ret = rte_event_dma_adapter_vchan_add(TEST_ADAPTER_ID, TEST_DMA_DEV_ID, + TEST_DMA_VCHAN_ID, NULL); + + TEST_ASSERT_SUCCESS(ret, "Failed to add vchan\n"); + + if (!params.internal_port_op_fwd) { + ret = rte_event_dma_adapter_event_port_get(TEST_ADAPTER_ID, + ¶ms.dma_event_port_id); + TEST_ASSERT_SUCCESS(ret, "Failed to get event port\n"); + } + + return TEST_SUCCESS; +} + +static void +test_dma_adapter_stop(void) +{ + uint32_t evdev_service_id, adapter_service_id; + + /* retrieve service ids & stop services */ + if (rte_event_dma_adapter_service_id_get(TEST_ADAPTER_ID, + &adapter_service_id) == 0) { + rte_service_runstate_set(adapter_service_id, 0); + rte_service_lcore_stop(slcore_id); + rte_service_lcore_del(slcore_id); + rte_event_dma_adapter_stop(TEST_ADAPTER_ID); + } + + if (rte_event_dev_service_id_get(evdev, &evdev_service_id) == 0) { + rte_service_runstate_set(evdev_service_id, 0); + rte_service_lcore_stop(slcore_id); + rte_service_lcore_del(slcore_id); + rte_dma_stop(TEST_DMA_DEV_ID); + rte_event_dev_stop(evdev); + } else { + rte_dma_stop(TEST_DMA_DEV_ID); + rte_event_dev_stop(evdev); + } +} + +static int +test_dma_adapter_conf(enum rte_event_dma_adapter_mode mode) +{ + uint32_t evdev_service_id; + uint8_t qid; + int ret; + + if (!dma_adapter_setup_done) { + ret = configure_event_dma_adapter(mode); + if (ret) + return ret; + if (!params.internal_port_op_fwd) { + qid = TEST_DMA_EV_QUEUE_ID; + ret = rte_event_port_link(evdev, + params.dma_event_port_id, &qid, NULL, 1); + TEST_ASSERT(ret >= 0, "Failed to link queue %d " + "port=%u\n", qid, + params.dma_event_port_id); + } + dma_adapter_setup_done = 1; + } + + /* retrieve service ids */ + if (rte_event_dev_service_id_get(evdev, &evdev_service_id) == 0) { + /* add a service core and start it */ + TEST_ASSERT_SUCCESS(rte_service_lcore_add(slcore_id), + "Failed to add service core"); + TEST_ASSERT_SUCCESS(rte_service_lcore_start(slcore_id), + "Failed to start service core"); + + /* map services to it */ + TEST_ASSERT_SUCCESS(rte_service_map_lcore_set(evdev_service_id, + slcore_id, 1), "Failed to map evdev service"); + + /* set services to running */ + 
TEST_ASSERT_SUCCESS(rte_service_runstate_set(evdev_service_id, + 1), "Failed to start evdev service"); + } + + /* start the eventdev */ + TEST_ASSERT_SUCCESS(rte_event_dev_start(evdev), + "Failed to start event device"); + + /* start the dma dev */ + TEST_ASSERT_SUCCESS(rte_dma_start(TEST_DMA_DEV_ID), + "Failed to start dma device"); + + return TEST_SUCCESS; +} + +static int +test_dma_adapter_conf_op_forward_mode(void) +{ + enum rte_event_dma_adapter_mode mode; + + mode = RTE_EVENT_DMA_ADAPTER_OP_FORWARD; + + return test_dma_adapter_conf(mode); +} + +static int +testsuite_setup(void) +{ + int ret; + + slcore_id = rte_get_next_lcore(-1, 1, 0); + TEST_ASSERT_NOT_EQUAL(slcore_id, RTE_MAX_LCORE, "At least 2 lcores " + "are required to run this autotest\n"); + + /* Setup and start event device. */ + ret = configure_eventdev(); + TEST_ASSERT_SUCCESS(ret, "Failed to setup eventdev\n"); + + /* Setup and start dma device. */ + ret = configure_dmadev(); + TEST_ASSERT_SUCCESS(ret, "dmadev initialization failed\n"); + + return TEST_SUCCESS; +} + +static void +dma_adapter_teardown(void) +{ + int ret; + + ret = rte_event_dma_adapter_stop(TEST_ADAPTER_ID); + if (ret < 0) + RTE_LOG(ERR, USER1, "Failed to stop adapter!"); + + ret = rte_event_dma_adapter_vchan_del(TEST_ADAPTER_ID, TEST_DMA_DEV_ID, + TEST_DMA_VCHAN_ID); + if (ret < 0) + RTE_LOG(ERR, USER1, "Failed to delete vchan!"); + + ret = rte_event_dma_adapter_free(TEST_ADAPTER_ID); + if (ret < 0) + RTE_LOG(ERR, USER1, "Failed to free adapter!"); + + dma_adapter_setup_done = 0; +} + +static void +dma_teardown(void) +{ + /* Free mbuf mempool */ + if (params.src_mbuf_pool != NULL) { + RTE_LOG(DEBUG, USER1, "DMA_ADAPTER_SRC_MBUFPOOL count %u\n", + rte_mempool_avail_count(params.src_mbuf_pool)); + rte_mempool_free(params.src_mbuf_pool); + params.src_mbuf_pool = NULL; + } + + if (params.dst_mbuf_pool != NULL) { + RTE_LOG(DEBUG, USER1, "DMA_ADAPTER_DST_MBUFPOOL count %u\n", + rte_mempool_avail_count(params.dst_mbuf_pool)); + rte_mempool_free(params.dst_mbuf_pool); + params.dst_mbuf_pool = NULL; + } + + /* Free ops mempool */ + if (params.op_mpool != NULL) { + RTE_LOG(DEBUG, USER1, "EVENT_DMA_OP_POOL count %u\n", + rte_mempool_avail_count(params.op_mpool)); + rte_mempool_free(params.op_mpool); + params.op_mpool = NULL; + } +} + +static void +eventdev_teardown(void) +{ + rte_event_dev_stop(evdev); +} + +static void +testsuite_teardown(void) +{ + dma_adapter_teardown(); + dma_teardown(); + eventdev_teardown(); +} + +static struct unit_test_suite functional_testsuite = { + .suite_name = "Event dma adapter test suite", + .setup = testsuite_setup, + .teardown = testsuite_teardown, + .unit_test_cases = { + + TEST_CASE_ST(NULL, test_dma_adapter_free, test_dma_adapter_create), + + TEST_CASE_ST(test_dma_adapter_create, test_dma_adapter_free, + test_dma_adapter_vchan_add_del), + + TEST_CASE_ST(test_dma_adapter_create, test_dma_adapter_free, + test_dma_adapter_stats), + + TEST_CASE_ST(test_dma_adapter_create, test_dma_adapter_free, + test_dma_adapter_params), + + TEST_CASE_ST(test_dma_adapter_conf_op_forward_mode, test_dma_adapter_stop, + test_with_op_forward_mode), + + TEST_CASES_END() /**< NULL terminate unit test array */ + } +}; + +static int +test_event_dma_adapter(void) +{ + return unit_test_suite_runner(&functional_testsuite); +} + +#endif /* !RTE_EXEC_ENV_WINDOWS */ + +REGISTER_TEST_COMMAND(event_dma_adapter_autotest, test_event_dma_adapter);
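
Taken together, the series gives applications a complete control path for the DMA adapter. The following is an illustrative sketch, not taken from the patches, of how a software-transfer application might combine the APIs introduced above; the adapter/dmadev/vchan identifiers, the port configuration values and the chosen max_nb are placeholders that a real application would derive from rte_event_dev_info_get() and its own configuration:

    #include <rte_event_dma_adapter.h>
    #include <rte_eventdev.h>
    #include <rte_service.h>

    #define ADAPTER_ID 0    /* illustrative ids, not mandated by the API */
    #define DMADEV_ID  0
    #define VCHAN_ID   0

    static int
    dma_adapter_setup(uint8_t evdev_id, uint32_t service_lcore)
    {
            struct rte_event_dma_adapter_runtime_params rt_params;
            struct rte_event_dma_adapter_stats stats;
            struct rte_event_port_conf conf = {
                    .new_event_threshold = 1024,    /* placeholders; normally   */
                    .dequeue_depth = 32,            /* taken from the eventdev  */
                    .enqueue_depth = 32,            /* info limits              */
            };
            uint8_t event_port_id;
            uint32_t service_id;
            int ret;

            ret = rte_event_dma_adapter_create(ADAPTER_ID, evdev_id, &conf,
                                               RTE_EVENT_DMA_ADAPTER_OP_FORWARD);
            if (ret)
                    return ret;

            /* Bind a DMA vchan; NULL response event, as in the autotest when
             * the VCHAN_EV_BIND capability is not set.
             */
            ret = rte_event_dma_adapter_vchan_add(ADAPTER_ID, DMADEV_ID,
                                                  VCHAN_ID, NULL);
            if (ret)
                    return ret;

            /* SW transfer path: map the adapter service to a service lcore. */
            if (rte_event_dma_adapter_service_id_get(ADAPTER_ID, &service_id) == 0)
                    rte_service_map_lcore_set(service_id, service_lcore, 1);

            /* Optionally bound the work done per service iteration. */
            if (rte_event_dma_adapter_runtime_params_init(&rt_params) == 0) {
                    rt_params.max_nb = 64;
                    rte_event_dma_adapter_runtime_params_set(ADAPTER_ID, &rt_params);
            }

            /* Adapter event port, used to link the response queue in
             * OP_FORWARD mode.
             */
            rte_event_dma_adapter_event_port_get(ADAPTER_ID, &event_port_id);

            ret = rte_event_dma_adapter_start(ADAPTER_ID);
            if (ret)
                    return ret;

            /* Later, on demand: read and clear the adapter counters. */
            rte_event_dma_adapter_stats_get(ADAPTER_ID, &stats);
            rte_event_dma_adapter_stats_reset(ADAPTER_ID);

            return 0;
    }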