From patchwork Tue Sep 19 13:42:16 2023
X-Patchwork-Submitter: Amit Prakash Shukla
X-Patchwork-Id: 131626
X-Patchwork-Delegate: jerinj@marvell.com
From: Amit Prakash Shukla
To: Jerin Jacob
Subject: [PATCH v1 1/7] eventdev: introduce DMA event adapter library
Date: Tue, 19 Sep 2023 19:12:16 +0530
Message-ID: <20230919134222.2500033-1-amitprakashs@marvell.com>

Introduce event DMA adapter APIs. The change provides information on adapter modes and usage. Application can use this event adapter interface to transfer packets between DMA device and event device.
Signed-off-by: Amit Prakash Shukla --- doc/api/doxy-api-index.md | 1 + doc/guides/eventdevs/features/default.ini | 8 + doc/guides/prog_guide/event_dma_adapter.rst | 268 ++++ doc/guides/prog_guide/eventdev.rst | 8 +- .../img/event_dma_adapter_op_forward.svg | 1086 +++++++++++++++++ .../img/event_dma_adapter_op_new.svg | 1079 ++++++++++++++++ doc/guides/prog_guide/index.rst | 1 + lib/eventdev/eventdev_pmd.h | 171 ++- lib/eventdev/eventdev_private.c | 10 + lib/eventdev/rte_event_dma_adapter.h | 641 ++++++++++ lib/eventdev/rte_eventdev.h | 45 + lib/eventdev/rte_eventdev_core.h | 8 +- lib/eventdev/version.map | 15 + 13 files changed, 3336 insertions(+), 5 deletions(-) create mode 100644 doc/guides/prog_guide/event_dma_adapter.rst create mode 100644 doc/guides/prog_guide/img/event_dma_adapter_op_forward.svg create mode 100644 doc/guides/prog_guide/img/event_dma_adapter_op_new.svg create mode 100644 lib/eventdev/rte_event_dma_adapter.h diff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md index fdeda13932..b7df7be4d9 100644 --- a/doc/api/doxy-api-index.md +++ b/doc/api/doxy-api-index.md @@ -29,6 +29,7 @@ The public API headers are grouped by topics: [event_eth_tx_adapter](@ref rte_event_eth_tx_adapter.h), [event_timer_adapter](@ref rte_event_timer_adapter.h), [event_crypto_adapter](@ref rte_event_crypto_adapter.h), + [event_dma_adapter](@ref rte_event_dma_adapter.h), [rawdev](@ref rte_rawdev.h), [metrics](@ref rte_metrics.h), [bitrate](@ref rte_bitrate.h), diff --git a/doc/guides/eventdevs/features/default.ini b/doc/guides/eventdevs/features/default.ini index 00360f60c6..fda8baf487 100644 --- a/doc/guides/eventdevs/features/default.ini +++ b/doc/guides/eventdevs/features/default.ini @@ -44,6 +44,14 @@ internal_port_op_fwd = internal_port_qp_ev_bind = session_private_data = +; +; Features of a default DMA adapter. +; +[DMA adapter Features] +internal_port_op_new = +internal_port_op_fwd = +internal_port_qp_ev_bind = + ; ; Features of a default Timer adapter. ; diff --git a/doc/guides/prog_guide/event_dma_adapter.rst b/doc/guides/prog_guide/event_dma_adapter.rst new file mode 100644 index 0000000000..7ac3ce744b --- /dev/null +++ b/doc/guides/prog_guide/event_dma_adapter.rst @@ -0,0 +1,268 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright (c) 2023 Marvell. + +Event DMA Adapter Library +========================= + +DPDK :doc:`Eventdev library ` provides event driven programming model with features +to schedule events. :doc:`DMA Device library ` provides an interface to DMA poll mode +drivers that support DMA operations. Event DMA Adapter is intended to bridge between the event +device and the DMA device. + +Packet flow from DMA device to the event device can be accomplished using software and hardware +based transfer mechanisms. The adapter queries an eventdev PMD to determine which mechanism to +be used. The adapter uses an EAL service core function for software based packet transfer and +uses the eventdev PMD functions to configure hardware based packet transfer between DMA device +and the event device. DMA adapter uses a new event type called ``RTE_EVENT_TYPE_DMADEV`` to +indicate the source of event. + +Application can choose to submit an DMA operation directly to an DMA device or send it to an DMA +adapter via eventdev based on RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD capability. The +first mode is known as the event new (RTE_EVENT_DMA_ADAPTER_OP_NEW) mode and the second as the +event forward (RTE_EVENT_DMA_ADAPTER_OP_FORWARD) mode. 
Choice of mode can be specified while +creating the adapter. In the former mode, it is the application's responsibility to enable +ingress packet ordering. In the latter mode, it is the adapter's responsibility to enable +ingress packet ordering. + + +Adapter Modes +------------- + +RTE_EVENT_DMA_ADAPTER_OP_NEW mode +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +In the RTE_EVENT_DMA_ADAPTER_OP_NEW mode, application submits DMA operations directly to an DMA +device. The adapter then dequeues DMA completions from the DMA device and enqueues them as events +to the event device. This mode does not ensure ingress ordering as the application directly +enqueues to the dmadev without going through DMA/atomic stage. In this mode, events dequeued +from the adapter are treated as new events. The application has to specify event information +(response information) which is needed to enqueue an event after the DMA operation is completed. + +.. _figure_event_dma_adapter_op_new: + +.. figure:: img/event_dma_adapter_op_new.* + + Working model of ``RTE_EVENT_DMA_ADAPTER_OP_NEW`` mode + + +RTE_EVENT_DMA_ADAPTER_OP_FORWARD mode +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +In the ``RTE_EVENT_DMA_ADAPTER_OP_FORWARD`` mode, if the event PMD and DMA PMD supports internal +event port (``RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), the application should use +``rte_event_dma_adapter_enqueue()`` API to enqueue DMA operations as events to DMA adapter. If +not, application retrieves DMA adapter's event port using ``rte_event_dma_adapter_event_port_get()`` +API, links its event queue to this port and starts enqueuing DMA operations as events to eventdev +using ``rte_event_enqueue_burst()``. The adapter then dequeues the events and submits the DMA +operations to the dmadev. After the DMA operation is complete, the adapter enqueues events to the +event device. + +Applications can use this mode when ingress packet ordering is needed. In this mode, events +dequeued from the adapter will be treated as forwarded events. Application has to specify +the dmadev ID and vchan ID (request information) needed to enqueue a DMA operation in +addition to the event information (response information) needed to enqueue the event after +the DMA operation has completed. + +.. _figure_event_dma_adapter_op_forward: + +.. figure:: img/event_dma_adapter_op_forward.* + + Working model of ``RTE_EVENT_DMA_ADAPTER_OP_FORWARD`` mode + + +API Overview +------------ + +This section has a brief introduction to the event DMA adapter APIs. The application is expected +to create an adapter which is associated with a single eventdev, then add dmadev and vchan to the +adapter instance. + + +Create an adapter instance +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +An adapter instance is created using ``rte_event_dma_adapter_create()``. This function is called +with event device to be associated with the adapter and port configuration for the adapter to +setup an event port (if the adapter needs to use a service function). + +Adapter can be started in ``RTE_EVENT_DMA_ADAPTER_OP_NEW`` or ``RTE_EVENT_DMA_ADAPTER_OP_FORWARD`` +mode. + +.. 
code-block:: c + + enum rte_event_dma_adapter_mode mode; + struct rte_event_dev_info dev_info; + struct rte_event_port_conf conf; + uint8_t evdev_id; + uint8_t dma_id; + int ret; + + ret = rte_event_dev_info_get(dma_id, &dev_info); + + conf.new_event_threshold = dev_info.max_num_events; + conf.dequeue_depth = dev_info.max_event_port_dequeue_depth; + conf.enqueue_depth = dev_info.max_event_port_enqueue_depth; + mode = RTE_EVENT_DMA_ADAPTER_OP_FORWARD; + ret = rte_event_dma_adapter_create(dma_id, evdev_id, &conf, mode); + + +``rte_event_dma_adapter_create_ext()`` function can be used by the application to have a finer +control on eventdev port allocation and setup. The ``rte_event_dma_adapter_create_ext()`` +function is passed a callback function. The callback function is invoked if the adapter creates +a service function and uses an event port for it. The callback is expected to fill the +``struct rte_event_dma_adapter_conf`` structure passed to it. + +In the ``RTE_EVENT_DMA_ADAPTER_OP_FORWARD`` mode, if the event PMD and DMA PMD supports internal +event port (``RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), events with DMA operations should +be enqueued to the DMA adapter using ``rte_event_dma_adapter_enqueue()`` API. If not, the event port +created by the adapter can be retrieved using ``rte_event_dma_adapter_event_port_get()`` API. An +application can use this event port to link with an event queue, on which it enqueues events +towards the DMA adapter using ``rte_event_enqueue_burst()``. + +.. code-block:: c + + uint8_t dma_id, evdev_id, cdev_id, dma_ev_port_id, app_qid; + struct rte_event ev; + uint32_t cap; + int ret; + + // Fill in event info and update event_ptr with rte_dma_op + memset(&ev, 0, sizeof(ev)); + . + . + ev.event_ptr = op; + + ret = rte_event_dma_adapter_caps_get(evdev_id, cdev_id, &cap); + if (cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) { + ret = rte_event_dma_adapter_enqueue(evdev_id, app_ev_port_id, ev, nb_events); + } else { + ret = rte_event_dma_adapter_event_port_get(dma_id, &dma_ev_port_id); + ret = rte_event_queue_setup(evdev_id, app_qid, NULL); + ret = rte_event_port_link(evdev_id, dma_ev_port_id, &app_qid, NULL, 1); + ev.queue_id = app_qid; + ret = rte_event_enqueue_burst(evdev_id, app_ev_port_id, ev, nb_events); + } + + +Event device configuration for service based adapter +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +When ``rte_event_dma_adapter_create()`` is used for creating adapter instance, +``rte_event_dev_config::nb_event_ports`` is automatically incremented, and event device is +reconfigured with additional event port during service initialization. This event device +reconfigure logic also increments the ``rte_event_dev_config::nb_single_link_event_port_queues`` +parameter if the adapter event port config is of type ``RTE_EVENT_PORT_CFG_SINGLE_LINK``. + +Applications using this mode of adapter creation need not configure the event device with +``rte_event_dev_config::nb_event_ports`` and +``rte_event_dev_config::nb_single_link_event_port_queues`` parameters required for DMA adapter when +the adapter is created using the above-mentioned API. + + +Querying adapter capabilities +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The ``rte_event_dma_adapter_caps_get()`` function allows the application to query the adapter +capabilities for an eventdev and dmadev combination. This API provides whether dmadev and eventdev +are connected using internal HW port or not. + +.. 
code-block:: c + + rte_event_dma_adapter_caps_get(dev_id, cdev_id, &cap); + + +Adding vchan queue to the adapter instance +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +dmadev device id and vchan queue are configured using dmadev APIs. For more information +see :doc:`here `. + +.. code-block:: c + + struct rte_dma_vchan_conf vchan_conf; + struct rte_dma_conf dev_conf; + uint8_t dev_id = 0; + uint16_t vchan = 0; + + rte_dma_configure(dev_id, &dev_conf); + rte_dma_vchan_setup(dev_id, vhcan, &vchan_conf); + +These dmadev id and vchan are added to the instance using the +``rte_event_dma_adapter_vchan_queue_add()`` API. The same is removed using +``rte_event_dma_adapter_vchan_queue_del()`` API. If hardware supports +``RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND`` capability, event information must be passed to +the add API. + +.. code-block:: c + + uint32_t cap; + int ret; + + ret = rte_event_dma_adapter_caps_get(evdev_id, dma_dev_id, &cap); + if (cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND) { + struct rte_event event; + + rte_event_dma_adapter_vchan_queue_add(id, dma_dev_id, vchan, &conf); + } else + rte_event_dma_adapter_vchan_queue_add(id, dma_dev_id, vchan, NULL); + + +Configuring service function +~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +If the adapter uses a service function, the application is required to assign a service core to +the service function as show below. + +.. code-block:: c + + uint32_t service_id; + + if (rte_event_dma_adapter_service_id_get(dma_id, &service_id) == 0) + rte_service_map_lcore_set(service_id, CORE_ID); + + +Set event request / response information +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +In the RTE_EVENT_DMA_ADAPTER_OP_FORWARD mode, the application specifies the dmadev ID and +vchan ID (request information) in addition to the event information (response information) +needed to enqueue an event after the DMA operation has completed. The request and response +information are specified in the ``struct rte_event_dma_metadata``. + +In the RTE_EVENT_DMA_ADAPTER_OP_NEW mode, the application is required to provide only the response +information. + + +Start the adapter instance +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The application calls ``rte_event_dma_adapter_start()`` to start the adapter. This function calls +the start callbacks of the eventdev PMDs for hardware based eventdev-dmadev connections and +``rte_service_run_state_set()`` to enable the service function if one exists. + +.. code-block:: c + + rte_event_dma_adapter_start(id); + +.. Note:: + + The eventdev to which the event_dma_adapter is connected should be started before calling + rte_event_dma_adapter_start(). + + +Get adapter statistics +~~~~~~~~~~~~~~~~~~~~~~ + +The ``rte_event_dma_adapter_stats_get()`` function reports counters defined in struct +``rte_event_dma_adapter_stats``. The received packet and enqueued event counts are a sum of the +counts from the eventdev PMD callbacks if the callback is supported, and the counts maintained by +the service function, if one exists. + +Set/Get adapter runtime configuration parameters +~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ + +The runtime configuration parameters of adapter can be set/get using +``rte_event_dma_adapter_runtime_params_set()`` and +``rte_event_dma_adapter_runtime_params_get()`` respectively. +The parameters that can be set/get are defined in +``struct rte_event_dma_adapter_runtime_params``. 
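To make the runtime-parameter flow above concrete, here is a minimal illustrative sketch; the adapter ``id`` and the ``max_nb`` value of 64 are placeholders rather than values taken from this patch.

.. code-block:: c

    struct rte_event_dma_adapter_runtime_params params;

    /* Start from the default values filled in by the init helper. */
    rte_event_dma_adapter_runtime_params_init(&params);

    /* Allow the service function to return once it has processed at least
     * 64 DMA ops (only meaningful for SW-assisted adapters without the
     * internal-port capabilities, as noted above).
     */
    params.max_nb = 64;
    rte_event_dma_adapter_runtime_params_set(id, &params);

    /* Read back the values currently in effect. */
    rte_event_dma_adapter_runtime_params_get(id, &params);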
diff --git a/doc/guides/prog_guide/eventdev.rst b/doc/guides/prog_guide/eventdev.rst
index 2c83176846..ff55115d0d 100644
--- a/doc/guides/prog_guide/eventdev.rst
+++ b/doc/guides/prog_guide/eventdev.rst
@@ -333,7 +333,8 @@ eventdev.
 .. Note::
          EventDev needs to be started before starting the event producers such
-         as event_eth_rx_adapter, event_timer_adapter and event_crypto_adapter.
+         as event_eth_rx_adapter, event_timer_adapter, event_crypto_adapter and
+         event_dma_adapter.
 Ingress of New Events
 ~~~~~~~~~~~~~~~~~~~~~
@@ -445,8 +446,9 @@ using ``rte_event_dev_stop_flush_callback_register()`` function.
 .. Note::
         The event producers such as ``event_eth_rx_adapter``,
-        ``event_timer_adapter`` and ``event_crypto_adapter``
-        need to be stopped before stopping the event device.
+        ``event_timer_adapter``, ``event_crypto_adapter`` and
+        ``event_dma_adapter`` need to be stopped before stopping
+        the event device.
 Summary
 -------
diff --git a/doc/guides/prog_guide/img/event_dma_adapter_op_forward.svg b/doc/guides/prog_guide/img/event_dma_adapter_op_forward.svg
new file mode 100644
index 0000000000..b7fe1fecf2
--- /dev/null
+++ b/doc/guides/prog_guide/img/event_dma_adapter_op_forward.svg
@@ -0,0 +1,1086 @@
[1086 lines of SVG markup omitted. The figure shows Eventdev, DMA Adapter, Application in ordered stage and DMA Device blocks, with the legend:]
1. Events from the previous stage.
2. Application in ordered stage dequeues events from eventdev.
3. Application enqueues DMA operations as events to eventdev.
4. DMA adapter dequeues event from eventdev.
5. DMA adapter submits DMA operations to DMA Device (Atomic stage)
6. DMA adapter dequeues DMA completions from DMA Device
7. DMA adapter enqueues events to the eventdev
8. Events to the next stage
diff --git a/doc/guides/prog_guide/img/event_dma_adapter_op_new.svg b/doc/guides/prog_guide/img/event_dma_adapter_op_new.svg
new file mode 100644
index 0000000000..e9e8bb2b98
--- /dev/null
+++ b/doc/guides/prog_guide/img/event_dma_adapter_op_new.svg
@@ -0,0 +1,1079 @@
[1079 lines of SVG markup omitted. The figure shows Application, Eventdev, Atomic Stage + Enqueue to DMA Device, DMA Adapter and DMA Device blocks, with the legend:]
1. Application dequeues events from the previous stage
2. Application prepares the DMA operations.
3. DMA operations are submitted to dmadev by application.
4. DMA adapter dequeues DMA completions from DMA device.
5. DMA adapter enqueues events to the eventdev.
6.
Application dequeues from eventdev and prepare for further processing + + + Square + Atomic Queue #1 + + + + + + + + + + + + + + + + Application + + + diff --git a/doc/guides/prog_guide/index.rst b/doc/guides/prog_guide/index.rst index 52a6d9e7aa..beaa4b8869 100644 --- a/doc/guides/prog_guide/index.rst +++ b/doc/guides/prog_guide/index.rst @@ -60,6 +60,7 @@ Programmer's Guide event_ethernet_tx_adapter event_timer_adapter event_crypto_adapter + event_dma_adapter qos_framework power_man packet_classif_access_ctrl diff --git a/lib/eventdev/eventdev_pmd.h b/lib/eventdev/eventdev_pmd.h index f62f42e140..6c77c128ac 100644 --- a/lib/eventdev/eventdev_pmd.h +++ b/lib/eventdev/eventdev_pmd.h @@ -178,8 +178,12 @@ struct rte_eventdev { event_tx_adapter_enqueue_t txa_enqueue; /**< Pointer to PMD eth Tx adapter enqueue function. */ event_crypto_adapter_enqueue_t ca_enqueue; + /**< Pointer to PMD crypto adapter enqueue function. */ - uint64_t reserved_64s[4]; /**< Reserved for future fields */ + event_dma_adapter_enqueue_t dma_enqueue; + /**< Pointer to PMD DMA adapter enqueue function. */ + + uint64_t reserved_64s[3]; /**< Reserved for future fields */ void *reserved_ptrs[3]; /**< Reserved for future fields */ } __rte_cache_aligned; @@ -1320,6 +1324,156 @@ typedef int (*eventdev_eth_tx_adapter_queue_stop) #define eventdev_stop_flush_t rte_eventdev_stop_flush_t +struct rte_dma_dev; + +/** + * Retrieve the event device's DMA adapter capabilities for the + * specified DMA device + * + * @param dev + * Event device pointer + * + * @param dmadev + * DMA device pointer + * + * @param[out] caps + * A pointer to memory filled with event adapter capabilities. + * It is expected to be pre-allocated & initialized by caller. + * + * @return + * - 0: Success, driver provides event adapter capabilities for the + * DMADEV. + * - <0: Error code returned by the driver function. + * + */ +typedef int (*eventdev_dma_adapter_caps_get_t)(const struct rte_eventdev *dev, + const struct rte_dma_dev *dmadev, uint32_t *caps); + +/** + * This API may change without prior notice + * + * Add DMA queue pair to event device. This callback is invoked if + * the caps returned from rte_event_dma_adapter_caps_get(, dmadev_id) + * has RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_* set. + * + * @param dev + * Event device pointer + * + * @param dmadev + * DMADEV pointer + * + * @param queue_pair_id + * DMADEV queue pair identifier. + * + * @param event + * Event information required for binding dmadev queue pair to event queue. + * This structure will have a valid value for only those HW PMDs supporting + * @see RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND capability. + * + * @return + * - 0: Success, dmadev queue pair added successfully. + * - <0: Error code returned by the driver function. + * + */ +typedef int (*eventdev_dma_adapter_queue_pair_add_t)(const struct rte_eventdev *dev, + const struct rte_dma_dev *dmadev, + int32_t queue_pair_id, + const struct rte_event *event); + +/** + * This API may change without prior notice + * + * Delete DMA queue pair to event device. This callback is invoked if + * the caps returned from rte_event_dma_adapter_caps_get(, dmadev_id) + * has RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_* set. + * + * @param queue_pair_id + * dmadev queue pair identifier. + * + * @return + * - 0: Success, dmadev queue pair deleted successfully. + * - <0: Error code returned by the driver function. 
+ * + */ +typedef int (*eventdev_dma_adapter_queue_pair_del_t)(const struct rte_eventdev *dev, + const struct rte_dma_dev *cdev, + int32_t queue_pair_id); + +/** + * Start DMA adapter. This callback is invoked if + * the caps returned from rte_event_dma_adapter_caps_get(.., dmadev_id) + * has RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_* set and queue pairs + * from dmadev_id have been added to the event device. + * + * @param dev + * Event device pointer + * + * @param dmadev + * DMA device pointer + * + * @return + * - 0: Success, DMA adapter started successfully. + * - <0: Error code returned by the driver function. + */ +typedef int (*eventdev_dma_adapter_start_t)(const struct rte_eventdev *dev, + const struct rte_dma_dev *dmadev); + +/** + * Stop DMA adapter. This callback is invoked if + * the caps returned from rte_event_dma_adapter_caps_get(.., dmadev_id) + * has RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_* set and queue pairs + * from dmadev_id have been added to the event device. + * + * @param dev + * Event device pointer + * + * @param dmadev + * DMA device pointer + * + * @return + * - 0: Success, DMA adapter stopped successfully. + * - <0: Error code returned by the driver function. + */ +typedef int (*eventdev_dma_adapter_stop_t)(const struct rte_eventdev *dev, + const struct rte_dma_dev *dmadev); + +struct rte_event_dma_adapter_stats; + +/** + * Retrieve DMA adapter statistics. + * + * @param dev + * Event device pointer + * + * @param dmadev + * DMA device pointer + * + * @param[out] stats + * Pointer to stats structure + * + * @return + * Return 0 on success. + */ +typedef int (*eventdev_dma_adapter_stats_get)(const struct rte_eventdev *dev, + const struct rte_dma_dev *dmadev, + struct rte_event_dma_adapter_stats *stats); + +/** + * Reset DMA adapter statistics. + * + * @param dev + * Event device pointer + * + * @param dmadev + * DMA device pointer + * + * @return + * Return 0 on success. + */ +typedef int (*eventdev_dma_adapter_stats_reset)(const struct rte_eventdev *dev, + const struct rte_dma_dev *dmadev); + + /** Event device operations function pointer table */ struct eventdev_ops { eventdev_info_get_t dev_infos_get; /**< Get device info. 
*/ @@ -1440,6 +1594,21 @@ struct eventdev_ops { eventdev_eth_tx_adapter_queue_stop eth_tx_adapter_queue_stop; /**< Stop Tx queue assigned to Tx adapter instance */ + eventdev_dma_adapter_caps_get_t dma_adapter_caps_get; + /**< Get DMA adapter capabilities */ + eventdev_dma_adapter_queue_pair_add_t dma_adapter_queue_pair_add; + /**< Add queue pair to DMA adapter */ + eventdev_dma_adapter_queue_pair_del_t dma_adapter_queue_pair_del; + /**< Delete queue pair from DMA adapter */ + eventdev_dma_adapter_start_t dma_adapter_start; + /**< Start DMA adapter */ + eventdev_dma_adapter_stop_t dma_adapter_stop; + /**< Stop DMA adapter */ + eventdev_dma_adapter_stats_get dma_adapter_stats_get; + /**< Get DMA stats */ + eventdev_dma_adapter_stats_reset dma_adapter_stats_reset; + /**< Reset DMA stats */ + eventdev_selftest dev_selftest; /**< Start eventdev Selftest */ diff --git a/lib/eventdev/eventdev_private.c b/lib/eventdev/eventdev_private.c index 1d3d9d357e..18ed8bf3c8 100644 --- a/lib/eventdev/eventdev_private.c +++ b/lib/eventdev/eventdev_private.c @@ -81,6 +81,14 @@ dummy_event_crypto_adapter_enqueue(__rte_unused void *port, return 0; } +static uint16_t +dummy_event_dma_adapter_enqueue(__rte_unused void *port, __rte_unused struct rte_event ev[], + __rte_unused uint16_t nb_events) +{ + RTE_EDEV_LOG_ERR("event DMA adapter enqueue requested for unconfigured event device"); + return 0; +} + void event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op) { @@ -97,6 +105,7 @@ event_dev_fp_ops_reset(struct rte_event_fp_ops *fp_op) .txa_enqueue_same_dest = dummy_event_tx_adapter_enqueue_same_dest, .ca_enqueue = dummy_event_crypto_adapter_enqueue, + .dma_enqueue = dummy_event_dma_adapter_enqueue, .data = dummy_data, }; @@ -117,5 +126,6 @@ event_dev_fp_ops_set(struct rte_event_fp_ops *fp_op, fp_op->txa_enqueue = dev->txa_enqueue; fp_op->txa_enqueue_same_dest = dev->txa_enqueue_same_dest; fp_op->ca_enqueue = dev->ca_enqueue; + fp_op->dma_enqueue = dev->dma_enqueue; fp_op->data = dev->data->ports; } diff --git a/lib/eventdev/rte_event_dma_adapter.h b/lib/eventdev/rte_event_dma_adapter.h new file mode 100644 index 0000000000..c667398d08 --- /dev/null +++ b/lib/eventdev/rte_event_dma_adapter.h @@ -0,0 +1,641 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Marvell. + */ + +#ifndef RTE_EVENT_DMA_ADAPTER +#define RTE_EVENT_DMA_ADAPTER + +/** + * @file rte_event_dma_adapter.h + * + * @warning + * @b EXPERIMENTAL: + * All functions in this file may be changed or removed without prior notice. + * + * DMA Event Adapter API. + * + * Eventdev library provides adapters to bridge between various components for providing new + * event source. The event DMA adapter is one of those adapters which is intended to bridge + * between event devices and DMA devices. + * + * The DMA adapter adds support to enqueue / dequeue DMA operations to / from event device. The + * packet flow between DMA device and the event device can be accomplished using both SW and HW + * based transfer mechanisms. The adapter uses an EAL service core function for SW based packet + * transfer and uses the eventdev PMD functions to configure HW based packet transfer between the + * DMA device and the event device. + * + * The application can choose to submit a DMA operation directly to an DMA device or send it to the + * DMA adapter via eventdev based on RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD capability. 
The + * first mode is known as the event new (RTE_EVENT_DMA_ADAPTER_OP_NEW) mode and the second as the + * event forward (RTE_EVENT_DMA_ADAPTER_OP_FORWARD) mode. The choice of mode can be specified while + * creating the adapter. In the former mode, it is an application responsibility to enable ingress + * packet ordering. In the latter mode, it is the adapter responsibility to enable the ingress + * packet ordering. + * + * + * Working model of RTE_EVENT_DMA_ADAPTER_OP_NEW mode: + * + * +--------------+ +--------------+ + * | | | DMA stage | + * | Application |---[2]-->| + enqueue to | + * | | | dmadev | + * +--------------+ +--------------+ + * ^ ^ | + * | | [3] + * [6] [1] | + * | | | + * +--------------+ | + * | | | + * | Event device | | + * | | | + * +--------------+ | + * ^ | + * | | + * [5] | + * | v + * +--------------+ +--------------+ + * | | | | + * | DMA adapter |<--[4]---| dmadev | + * | | | | + * +--------------+ +--------------+ + * + * + * [1] Application dequeues events from the previous stage. + * [2] Application prepares the DMA operations. + * [3] DMA operations are submitted to dmadev by application. + * [4] DMA adapter dequeues DMA completions from dmadev. + * [5] DMA adapter enqueues events to the eventdev. + * [6] Application dequeues from eventdev for further processing. + * + * In the RTE_EVENT_DMA_ADAPTER_OP_NEW mode, application submits DMA operations directly to DMA + * device. The DMA adapter then dequeues DMA completions from DMA device and enqueue events to the + * event device. This mode does not ensure ingress ordering, if the application directly enqueues + * to dmadev without going through DMA / atomic stage i.e. removing item [1] and [2]. + * + * Events dequeued from the adapter will be treated as new events. In this mode, application needs + * to specify event information (response information) which is needed to enqueue an event after the + * DMA operation is completed. + * + * + * Working model of RTE_EVENT_DMA_ADAPTER_OP_FORWARD mode: + * + * +--------------+ +--------------+ + * --[1]-->| |---[2]-->| Application | + * | Event device | | in | + * <--[8]--| |<--[3]---| Ordered stage| + * +--------------+ +--------------+ + * ^ | + * | [4] + * [7] | + * | v + * +----------------+ +--------------+ + * | |--[5]->| | + * | DMA adapter | | dmadev | + * | |<-[6]--| | + * +----------------+ +--------------+ + * + * + * [1] Events from the previous stage. + * [2] Application in ordered stage dequeues events from eventdev. + * [3] Application enqueues DMA operations as events to eventdev. + * [4] DMA adapter dequeues event from eventdev. + * [5] DMA adapter submits DMA operations to dmadev (Atomic stage). + * [6] DMA adapter dequeues DMA completions from dmadev + * [7] DMA adapter enqueues events to the eventdev + * [8] Events to the next stage + * + * In the event forward (RTE_EVENT_DMA_ADAPTER_OP_FORWARD) mode, if the HW supports the capability + * RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, application can directly submit the DMA + * operations to the dmadev. If not, application retrieves the event port of the DMA adapter + * through the API, rte_event_DMA_adapter_event_port_get(). Then, links its event queue to this + * port and starts enqueuing DMA operations as events to the eventdev. The adapter then dequeues + * the events and submits the DMA operations to the dmadev. After the DMA completions, the adapter + * enqueues events to the event device. + * + * Application can use this mode, when ingress packet ordering is needed. 
Events dequeued from the + * adapter will be treated as forwarded events. In this mode, the application needs to specify the + * dmadev ID and queue pair ID (request information) needed to enqueue an DMA operation in addition + * to the event information (response information) needed to enqueue an event after the DMA + * operation has completed. + * + * The event DMA adapter provides common APIs to configure the packet flow from the DMA device to + * event devices for both SW and HW based transfers. The DMA event adapter's functions are: + * + * - rte_event_dma_adapter_create_ext() + * - rte_event_dma_adapter_create() + * - rte_event_dma_adapter_free() + * - rte_event_dma_adapter_vchan_queue_add() + * - rte_event_dma_adapter_vchan_queue_del() + * - rte_event_dma_adapter_start() + * - rte_event_dma_adapter_stop() + * - rte_event_dma_adapter_stats_get() + * - rte_event_dma_adapter_stats_reset() + * + * The application creates an instance using rte_event_dma_adapter_create() or + * rte_event_dma_adapter_create_ext(). + * + * dmadev queue pair addition / deletion is done using the rte_event_dma_adapter_vchan_queue_add() / + * rte_event_dma_adapter_vchan_queue_del() APIs. If HW supports the capability + * RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND, event information must be passed to the add + * API. + * + */ + +#include + +#include "rte_eventdev.h" +#include +#include + +#ifdef __cplusplus +extern "C" { +#endif + +/** + * A structure used to hold event based DMA operation entry. + */ +struct rte_event_dma_adapter_op { + struct rte_dma_sge *src_seg; + /**< Source segments. */ + struct rte_dma_sge *dst_seg; + /**< Destination segments. */ + uint16_t nb_src; + /**< Number of source segments. */ + uint16_t nb_dst; + /**< Number of destination segments. */ + uint64_t flags; + /**< Flags related to the operation. + * @see RTE_DMA_OP_FLAG_* + */ + struct rte_mempool *op_mp; + /**< Mempool from which op is allocated. */ +}; + +/** + * DMA event adapter mode + */ +enum rte_event_dma_adapter_mode { + RTE_EVENT_DMA_ADAPTER_OP_NEW, + /**< Start the DMA adapter in event new mode. + * @see RTE_EVENT_OP_NEW. + * + * Application submits DMA operations to the dmadev. Adapter only dequeues the DMA + * completions from dmadev and enqueue events to the eventdev. + */ + + RTE_EVENT_DMA_ADAPTER_OP_FORWARD, + /**< Start the DMA adapter in event forward mode. + * @see RTE_EVENT_OP_FORWARD. + * + * Application submits DMA requests as events to the DMA adapter or DMA device based on + * RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD capability. DMA completions are enqueued + * back to the eventdev by DMA adapter. + */ +}; + +/** + * DMA event request structure will be filled by application to provide event request information to + * the adapter. + */ +struct rte_event_dma_request { + uint8_t resv[8]; + /**< Overlaps with first 8 bytes of struct rte_event that encode the response event + * information. Application is expected to fill in struct rte_event response_info. + */ + + int16_t dmadev_id; + /**< DMA device ID to be used */ + + uint16_t queue_pair_id; + /**< DMA queue pair ID to be used */ + + uint32_t rsvd; + /**< Reserved bits */ +}; + +/** + * Adapter configuration structure that the adapter configuration callback function is expected to + * fill out. 
+ * + * @see rte_event_dma_adapter_conf_cb + */ +struct rte_event_dma_adapter_conf { + uint8_t event_port_id; + /** < Event port identifier, the adapter enqueues events to this port and dequeues DMA + * request events in RTE_EVENT_DMA_ADAPTER_OP_FORWARD mode. + */ + + uint32_t max_nb; + /**< The adapter can return early if it has processed at least max_nb DMA ops. This isn't + * treated as a requirement; batching may cause the adapter to process more than max_nb DMA + * ops. + */ +}; + +/** + * Adapter runtime configuration parameters + */ +struct rte_event_dma_adapter_runtime_params { + uint32_t max_nb; + /**< The adapter can return early if it has processed at least max_nb DMA ops. This isn't + * treated as a requirement; batching may cause the adapter to process more than max_nb DMA + * ops. + * + * Callback function passed to rte_event_dma_adapter_create_ext() configures the adapter + * with default value of max_nb. + * rte_event_dma_adapter_runtime_params_set() allows to re-configure max_nb during runtime + * (after adding at least one queue pair) + * + * This is valid for the devices without RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD or + * RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_NEW capability. + */ + + uint32_t rsvd[15]; + /**< Reserved fields for future expansion */ +}; + +/** + * Function type used for adapter configuration callback. The callback is used to fill in members of + * the struct rte_event_dma_adapter_conf, this callback is invoked when creating a SW service for + * packet transfer from dmadev vchan queue to the event device. The SW service is created within the + * function, rte_event_dma_adapter_vchan_queue_add(), if SW based packet + * transfers from dmadev vchan queue to the event device are required. + * + * @param id + * Adapter identifier. + * @param evdev_id + * Event device identifier. + * @param conf + * Structure that needs to be populated by this callback. + * @param arg + * Argument to the callback. This is the same as the conf_arg passed to the + * rte_event_dma_adapter_create_ext(). + */ +typedef int (*rte_event_dma_adapter_conf_cb)(uint8_t id, uint8_t evdev_id, + struct rte_event_dma_adapter_conf *conf, void *arg); + +/** + * A structure used to retrieve statistics for an event DMA adapter instance. + */ +struct rte_event_dma_adapter_stats { + uint64_t event_poll_count; + /**< Event port poll count */ + + uint64_t event_deq_count; + /**< Event dequeue count */ + + uint64_t dma_enq_count; + /**< dmadev enqueue count */ + + uint64_t dma_enq_fail_count; + /**< dmadev enqueue failed count */ + + uint64_t dma_deq_count; + /**< dmadev dequeue count */ + + uint64_t event_enq_count; + /**< Event enqueue count */ + + uint64_t event_enq_retry_count; + /**< Event enqueue retry count */ + + uint64_t event_enq_fail_count; + /**< Event enqueue fail count */ +}; + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Create a new event DMA adapter with the specified identifier. + * + * @param id + * Adapter identifier. + * @param evdev_id + * Event device identifier. + * @param conf_cb + * Callback function that fills in members of a struct rte_event_dma_adapter_conf struct passed + * into it. + * @param mode + * Flag to indicate the mode of the adapter. + * @see rte_event_dma_adapter_mode + * @param conf_arg + * Argument that is passed to the conf_cb function. 
+ * + * @return + * - 0: Success + * - <0: Error code on failure + */ +__rte_experimental +int rte_event_dma_adapter_create_ext(uint8_t id, uint8_t evdev_id, + rte_event_dma_adapter_conf_cb conf_cb, + enum rte_event_dma_adapter_mode mode, void *conf_arg); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Create a new event DMA adapter with the specified identifier. This function uses an internal + * configuration function that creates an event port. This default function reconfigures the event + * device with an additional event port and set up the event port using the port_config parameter + * passed into this function. In case the application needs more control in configuration of the + * service, it should use the rte_event_dma_adapter_create_ext() version. + * + * @param id + * Adapter identifier. + * @param evdev_id + * Event device identifier. + * @param port_config + * Argument of type *rte_event_port_conf* that is passed to the conf_cb function. + * @param mode + * Flag to indicate the mode of the adapter. + * @see rte_event_dma_adapter_mode + * + * @return + * - 0: Success + * - <0: Error code on failure + */ +__rte_experimental +int rte_event_dma_adapter_create(uint8_t id, uint8_t evdev_id, + struct rte_event_port_conf *port_config, + enum rte_event_dma_adapter_mode mode); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Free an event DMA adapter + * + * @param id + * Adapter identifier. + * @return + * - 0: Success + * - <0: Error code on failure, If the adapter still has queue pairs added to it, the function + * returns -EBUSY. + */ +__rte_experimental +int rte_event_dma_adapter_free(uint8_t id); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Retrieve the event port of an adapter. + * + * @param id + * Adapter identifier. + * + * @param [out] event_port_id + * Application links its event queue to this adapter port which is used in + * RTE_EVENT_DMA_ADAPTER_OP_FORWARD mode. + * + * @return + * - 0: Success + * - <0: Error code on failure. + */ +__rte_experimental +int rte_event_dma_adapter_event_port_get(uint8_t id, uint8_t *event_port_id); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Add a vchan queue to an event DMA adapter. + * + * @param id + * Adapter identifier. + * @param dmadev_id + * dmadev identifier. + * @param queue_pair_id + * DMA device vchan queue identifier. If queue_pair_id is set -1, adapter adds all the + * preconfigured queue pairs to the instance. + * @param event + * If HW supports dmadev queue pair to event queue binding, application is expected to fill in + * event information, else it will be NULL. + * @see RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND + * + * @return + * - 0: Success, vchan queue added correctly. + * - <0: Error code on failure. + */ +__rte_experimental +int rte_event_dma_adapter_vchan_queue_add(uint8_t id, int16_t dmadev_id, int32_t queue_pair_id, + const struct rte_event *event); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Delete a vchan queue from an event DMA adapter. + * + * @param id + * Adapter identifier. + * @param dmadev_id + * DMA device identifier. + * @param queue_pair_id + * DMA device vchan queue identifier. + * + * @return + * - 0: Success, vchan queue deleted successfully. + * - <0: Error code on failure. 
+ */ +__rte_experimental +int rte_event_dma_adapter_vchan_queue_del(uint8_t id, int16_t dmadev_id, int32_t queue_pair_id); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Retrieve the service ID of an adapter. If the adapter doesn't use a rte_service function, this + * function returns -ESRCH. + * + * @param id + * Adapter identifier. + * @param [out] service_id + * A pointer to a uint32_t, to be filled in with the service id. + * + * @return + * - 0: Success + * - <0: Error code on failure, if the adapter doesn't use a rte_service function, this function + * returns -ESRCH. + */ +__rte_experimental +int rte_event_dma_adapter_service_id_get(uint8_t id, uint32_t *service_id); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Start event DMA adapter + * + * @param id + * Adapter identifier. + * + * @return + * - 0: Success, adapter started successfully. + * - <0: Error code on failure. + * + * @note The eventdev and dmadev to which the event_dma_adapter is connected should be started + * before calling rte_event_dma_adapter_start(). + */ +__rte_experimental +int rte_event_dma_adapter_start(uint8_t id); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Stop event DMA adapter + * + * @param id + * Adapter identifier. + * + * @return + * - 0: Success, adapter stopped successfully. + * - <0: Error code on failure. + */ +__rte_experimental +int rte_event_dma_adapter_stop(uint8_t id); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Initialize the adapter runtime configuration parameters + * + * @param params + * A pointer to structure of type struct rte_event_dma_adapter_runtime_params + * + * @return + * - 0: Success + * - <0: Error code on failure + */ +__rte_experimental +int rte_event_dma_adapter_runtime_params_init(struct rte_event_dma_adapter_runtime_params *params); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Set the adapter runtime configuration parameters + * + * @param id + * Adapter identifier + * + * @param params + * A pointer to structure of type struct rte_event_dma_adapter_runtime_params with configuration + * parameter values. The reserved fields of this structure must be initialized to zero and the valid + * fields need to be set appropriately. This struct can be initialized using + * rte_event_dma_adapter_runtime_params_init() API to default values or application may reset this + * struct and update required fields. + * + * @return + * - 0: Success + * - <0: Error code on failure + */ +__rte_experimental +int rte_event_dma_adapter_runtime_params_set(uint8_t id, + struct rte_event_dma_adapter_runtime_params *params); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Get the adapter runtime configuration parameters + * + * @param id + * Adapter identifier + * + * @param[out] params + * A pointer to structure of type struct rte_event_dma_adapter_runtime_params containing valid + * adapter parameters when return value is 0. + * + * @return + * - 0: Success + * - <0: Error code on failure + */ +__rte_experimental +int rte_event_dma_adapter_runtime_params_get(uint8_t id, + struct rte_event_dma_adapter_runtime_params *params); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Retrieve statistics for an adapter + * + * @param id + * Adapter identifier. 
+ * @param [out] stats + * A pointer to structure used to retrieve statistics for an adapter. + * + * @return + * - 0: Success, retrieved successfully. + * - <0: Error code on failure. + */ +__rte_experimental +int rte_event_dma_adapter_stats_get(uint8_t id, struct rte_event_dma_adapter_stats *stats); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Reset statistics for an adapter. + * + * @param id + * Adapter identifier. + * + * @return + * - 0: Success, statistics reset successfully. + * - <0: Error code on failure. + */ +__rte_experimental +int rte_event_dma_adapter_stats_reset(uint8_t id); + +/** + * @warning + * @b EXPERIMENTAL: this API may change without prior notice. + * + * Enqueue a burst of DMA operations as event objects supplied in *rte_event* structure on an event + * DMA adapter designated by its event *evdev_id* through the event port specified by *port_id*. + * This function is supported if the eventdev PMD has the + * #RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD capability flag set. + * + * The *nb_events* parameter is the number of event objects to enqueue that are supplied in the + * *ev* array of *rte_event* structure. + * + * The rte_event_dma_adapter_enqueue() function returns the number of event objects it actually + * enqueued. A return value equal to *nb_events* means that all event objects have been enqueued. + * + * @param evdev_id + * The identifier of the device. + * @param port_id + * The identifier of the event port. + * @param ev + * Points to an array of *nb_events* objects of type *rte_event* structure which contain the + * event object enqueue operations to be processed. + * @param nb_events + * The number of event objects to enqueue, typically number of + * rte_event_port_attr_get(...RTE_EVENT_PORT_ATTR_ENQ_DEPTH...) available for this port. + * + * @return + * The number of event objects actually enqueued on the event device. The return value can be + * less than the value of the *nb_events* parameter when the event devices queue is full or if + * invalid parameters are specified in a *rte_event*. If the return value is less than *nb_events*, + * the remaining events at the end of ev[] are not consumed and the caller has to take care of them, + * and rte_errno is set accordingly. Possible errno values include: + * + * - EINVAL: The port ID is invalid, device ID is invalid, an event's queue ID is invalid, or an + * event's sched type doesn't match the capabilities of the destination queue. + * - ENOSPC: The event port was backpressured and unable to enqueue one or more events. This + * error code is only applicable to closed systems. + */ +__rte_experimental +uint16_t rte_event_dma_adapter_enqueue(uint8_t evdev_id, uint8_t port_id, struct rte_event ev[], + uint16_t nb_events); + +#ifdef __cplusplus +} +#endif + +#endif /* RTE_EVENT_DMA_ADAPTER */ diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h index 2ba8a7b090..d231f527ae 100644 --- a/lib/eventdev/rte_eventdev.h +++ b/lib/eventdev/rte_eventdev.h @@ -1197,6 +1197,8 @@ struct rte_event_vector { */ #define RTE_EVENT_TYPE_ETH_RX_ADAPTER 0x4 /**< The event generated from event eth Rx adapter */ +#define RTE_EVENT_TYPE_DMADEV 0x5 +/**< The event generated from dma subsystem */ #define RTE_EVENT_TYPE_VECTOR 0x8 /**< Indicates that event is a vector. * All vector event types should be a logical OR of EVENT_TYPE_VECTOR. 
@@ -1462,6 +1464,49 @@ int rte_event_crypto_adapter_caps_get(uint8_t dev_id, uint8_t cdev_id, uint32_t *caps); +/* DMA adapter capability bitmap flag */ +#define RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_NEW 0x1 +/**< Flag indicates HW is capable of generating events in + * RTE_EVENT_OP_NEW enqueue operation. DMADEV will send + * packets to the event device as new events using an + * internal event port. + */ + +#define RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD 0x2 +/**< Flag indicates HW is capable of generating events in + * RTE_EVENT_OP_FORWARD enqueue operation. DMADEV will send + * packets to the event device as forwarded event using an + * internal event port. + */ + +#define RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND 0x4 +/**< Flag indicates HW is capable of mapping DMA queue pair to + * event queue. + */ + +/** + * Retrieve the event device's DMA adapter capabilities for the + * specified dmadev device + * + * @param dev_id + * The identifier of the device. + * + * @param dmadev_id + * The identifier of the dmadev device. + * + * @param[out] caps + * A pointer to memory filled with event adapter capabilities. + * It is expected to be pre-allocated & initialized by caller. + * + * @return + * - 0: Success, driver provides event adapter capabilities for the + * dmadev device. + * - <0: Error code returned by the driver function. + * + */ +int +rte_event_dma_adapter_caps_get(uint8_t dev_id, int16_t dmadev_id, uint32_t *caps); + /* Ethdev Tx adapter capability bitmap flags */ #define RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT 0x1 /**< This flag is sent when the PMD supports a packet transmit callback diff --git a/lib/eventdev/rte_eventdev_core.h b/lib/eventdev/rte_eventdev_core.h index c27a52ccc0..83e8736c71 100644 --- a/lib/eventdev/rte_eventdev_core.h +++ b/lib/eventdev/rte_eventdev_core.h @@ -42,6 +42,10 @@ typedef uint16_t (*event_crypto_adapter_enqueue_t)(void *port, uint16_t nb_events); /**< @internal Enqueue burst of events on crypto adapter */ +typedef uint16_t (*event_dma_adapter_enqueue_t)(void *port, struct rte_event ev[], + uint16_t nb_events); +/**< @internal Enqueue burst of events on DMA adapter */ + struct rte_event_fp_ops { void **data; /**< points to array of internal port data pointers */ @@ -65,7 +69,9 @@ struct rte_event_fp_ops { /**< PMD Tx adapter enqueue same destination function. */ event_crypto_adapter_enqueue_t ca_enqueue; /**< PMD Crypto adapter enqueue function. */ - uintptr_t reserved[5]; + event_dma_adapter_enqueue_t dma_enqueue; + /**< PMD DMA adapter enqueue function. 
*/ + uintptr_t reserved[4]; } __rte_cache_aligned; extern struct rte_event_fp_ops rte_event_fp_ops[RTE_EVENT_MAX_DEVS];
diff --git a/lib/eventdev/version.map b/lib/eventdev/version.map
index 7ce09a87bb..597a5c9cda 100644
--- a/lib/eventdev/version.map
+++ b/lib/eventdev/version.map
@@ -134,6 +134,21 @@ EXPERIMENTAL {
 # added in 23.11
 rte_event_eth_rx_adapter_create_ext_with_params;
+ rte_event_dma_adapter_create_ext;
+ rte_event_dma_adapter_create;
+ rte_event_dma_adapter_free;
+ rte_event_dma_adapter_event_port_get;
+ rte_event_dma_adapter_vchan_queue_add;
+ rte_event_dma_adapter_vchan_queue_del;
+ rte_event_dma_adapter_service_id_get;
+ rte_event_dma_adapter_start;
+ rte_event_dma_adapter_stop;
+ rte_event_dma_adapter_runtime_params_init;
+ rte_event_dma_adapter_runtime_params_set;
+ rte_event_dma_adapter_runtime_params_get;
+ rte_event_dma_adapter_stats_get;
+ rte_event_dma_adapter_stats_reset;
+ rte_event_dma_adapter_enqueue;
 };

 INTERNAL {

From patchwork Tue Sep 19 13:42:17 2023
X-Patchwork-Submitter: Amit Prakash Shukla
X-Patchwork-Id: 131627
X-Patchwork-Delegate: jerinj@marvell.com
From: Amit Prakash Shukla
To: Jerin Jacob
Subject: [PATCH v1 2/7] eventdev: api to get DMA capabilities
Date: Tue, 19 Sep 2023 19:12:17 +0530
Message-ID:
<20230919134222.2500033-2-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230919134222.2500033-1-amitprakashs@marvell.com> References: <20230919134222.2500033-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: 96POnpQTXlgorr5d8-J_KNI4x01bn1UH X-Proofpoint-ORIG-GUID: 96POnpQTXlgorr5d8-J_KNI4x01bn1UH X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.980,Hydra:6.0.601,FMLib:17.11.176.26 definitions=2023-09-19_06,2023-09-19_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Added a new eventdev API rte_event_dma_adapter_caps_get(), to get DMA adapter capabilities supported by the driver. Signed-off-by: Amit Prakash Shukla --- lib/eventdev/meson.build | 2 +- lib/eventdev/rte_eventdev.c | 25 +++++++++++++++++++++++++ lib/eventdev/rte_eventdev.h | 2 +- lib/meson.build | 2 +- 4 files changed, 28 insertions(+), 3 deletions(-) diff --git a/lib/eventdev/meson.build b/lib/eventdev/meson.build index 6edf98dfa5..fbab3a42ad 100644 --- a/lib/eventdev/meson.build +++ b/lib/eventdev/meson.build @@ -42,5 +42,5 @@ driver_sdk_headers += files( 'event_timer_adapter_pmd.h', ) -deps += ['ring', 'ethdev', 'hash', 'mempool', 'mbuf', 'timer', 'cryptodev'] +deps += ['ring', 'ethdev', 'hash', 'mempool', 'mbuf', 'timer', 'cryptodev', 'dmadev'] deps += ['telemetry'] diff --git a/lib/eventdev/rte_eventdev.c b/lib/eventdev/rte_eventdev.c index 6ab4524332..9415788b6a 100644 --- a/lib/eventdev/rte_eventdev.c +++ b/lib/eventdev/rte_eventdev.c @@ -21,6 +21,7 @@ #include #include #include +#include #include #include "rte_eventdev.h" @@ -224,6 +225,30 @@ rte_event_eth_tx_adapter_caps_get(uint8_t dev_id, uint16_t eth_port_id, : 0; } +int +rte_event_dma_adapter_caps_get(uint8_t dev_id, uint8_t dma_dev_id, uint32_t *caps) +{ + struct rte_eventdev *dev; + struct rte_dma_dev *dma_dev; + + RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + if (!rte_dma_is_valid(dma_dev_id)) + return -EINVAL; + + dev = &rte_eventdevs[dev_id]; + dma_dev = rte_dma_pmd_dev_get(dma_dev_id); + + if (caps == NULL || dma_dev == NULL) + return -EINVAL; + + *caps = 0; + + if (dev->dev_ops->dma_adapter_caps_get) + return (*dev->dev_ops->dma_adapter_caps_get)(dev, dma_dev, caps); + + return 0; +} + static inline int event_dev_queue_config(struct rte_eventdev *dev, uint8_t nb_queues) { diff --git a/lib/eventdev/rte_eventdev.h b/lib/eventdev/rte_eventdev.h index d231f527ae..5611880872 100644 --- a/lib/eventdev/rte_eventdev.h +++ b/lib/eventdev/rte_eventdev.h @@ -1505,7 +1505,7 @@ rte_event_crypto_adapter_caps_get(uint8_t dev_id, uint8_t cdev_id, * */ int -rte_event_dma_adapter_caps_get(uint8_t dev_id, int16_t dmadev_id, uint32_t *caps); +rte_event_dma_adapter_caps_get(uint8_t dev_id, uint8_t dmadev_id, uint32_t *caps); /* Ethdev Tx adapter capability bitmap flags */ #define RTE_EVENT_ETH_TX_ADAPTER_CAP_INTERNAL_PORT 0x1 diff --git a/lib/meson.build b/lib/meson.build index 53155be8e9..f3191f10b6 100644 --- a/lib/meson.build +++ b/lib/meson.build @@ -33,6 +33,7 @@ libraries = [ 'compressdev', 'cryptodev', 'distributor', + 'dmadev', 'efd', 'eventdev', 'gpudev', @@ -48,7 +49,6 @@ libraries = [ 'rawdev', 'regexdev', 'mldev', - 'dmadev', 'rib', 'reorder', 'sched', From patchwork Tue Sep 19 13:42:18 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit 
Prakash Shukla X-Patchwork-Id: 131628 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 956A442604; Tue, 19 Sep 2023 15:42:59 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 391E140DC9; Tue, 19 Sep 2023 15:42:57 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 70369406FF for ; Tue, 19 Sep 2023 15:42:55 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 38J6Gi6I022819; Tue, 19 Sep 2023 06:42:54 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=bm0r3138EzrKPZBVrTBT5QqRADvHSIOS7J7pdZsj4hg=; b=CzNS0ZlXT/BUjpeqK8dy58NTFP/vFd5DkOM6PyNz0liK3ZWz2hfLiRyC7x/mrudtFzyc mkYfjBwOC9ZzHeWm1dsfq8BPiNVr9t3Ca5JOF4Imma/uLkcj3y+3LGtsdLEgOYyQmurI tn8TCsbbbCEgUCTkiDPMZMIkqniAniGxicLFV/DQ8SgRH2G9uZU7vwh4zFOBmdtIKRyd q8HKqM33/8OKThoGOGTm/CBwuLLQt5dV1oxmdk5YT+Ct4GyP7Mue4DrFytQ0UiE0eaM8 G5sXOprhQWS4aIU4j0eZpiXka+YkRBelV1dw1wHrhDgU2eoL3r/7BfiQlGKNtu4yk5me 6A== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com (PPS) with ESMTPS id 3t5bvkrkbj-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Tue, 19 Sep 2023 06:42:54 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Tue, 19 Sep 2023 06:42:51 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Tue, 19 Sep 2023 06:42:51 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id 5B2C13F7099; Tue, 19 Sep 2023 06:42:47 -0700 (PDT) From: Amit Prakash Shukla To: Bruce Richardson , Jerin Jacob CC: , , , , , , , , , , , , Amit Prakash Shukla Subject: [PATCH v1 3/7] eventdev: add DMA adapter implementation Date: Tue, 19 Sep 2023 19:12:18 +0530 Message-ID: <20230919134222.2500033-3-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230919134222.2500033-1-amitprakashs@marvell.com> References: <20230919134222.2500033-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-GUID: 4FEIPmGiFPIr9V-0BBqczFU-Gp0tq66k X-Proofpoint-ORIG-GUID: 4FEIPmGiFPIr9V-0BBqczFU-Gp0tq66k X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.980,Hydra:6.0.601,FMLib:17.11.176.26 definitions=2023-09-19_06,2023-09-19_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org This patch adds common code for the dma adapter to support SW and HW based transfer mechanisms. The adapter uses an EAL service core function for SW based packet transfer and uses the eventdev PMD functions to configure HW based packet transfer between the dma device and the event device. 
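For reference, a minimal application-level sketch of the intended SW-path usage is shown below. It is illustrative only and not part of this patch: the adapter id, eventdev id, dmadev 0 / vchan 0 and the service lcore are placeholder choices, eventdev/dmadev bring-up and most error handling are omitted, and the port configuration is assumed to be supplied by the caller.

    #include <rte_event_dma_adapter.h>
    #include <rte_service.h>

    static int
    dma_adapter_setup(uint8_t adapter_id, uint8_t evdev_id, uint32_t slcore_id,
                      struct rte_event_port_conf *port_conf)
    {
            uint32_t service_id;
            int ret;

            /* Create the adapter in OP_FORWARD mode; the default configuration
             * callback allocates an adapter event port on evdev_id.
             */
            ret = rte_event_dma_adapter_create(adapter_id, evdev_id, port_conf,
                                               RTE_EVENT_DMA_ADAPTER_OP_FORWARD);
            if (ret)
                    return ret;

            /* Bind vchan 0 of dmadev 0 to the adapter. The event argument is
             * only needed when the PMD reports the QP_EV_BIND capability.
             */
            ret = rte_event_dma_adapter_vchan_queue_add(adapter_id, 0, 0, NULL);
            if (ret)
                    return ret;

            /* SW transfer path: the adapter runs as an EAL service, so map its
             * service id to a service lcore before starting the adapter.
             */
            if (rte_event_dma_adapter_service_id_get(adapter_id, &service_id) == 0) {
                    rte_service_lcore_add(slcore_id);
                    rte_service_lcore_start(slcore_id);
                    rte_service_map_lcore_set(service_id, slcore_id, 1);
            }

            return rte_event_dma_adapter_start(adapter_id);
    }

With a PMD that provides an internal event port, no service is registered and rte_event_dma_adapter_service_id_get() returns -ESRCH, so the service-core mapping step is simply skipped.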
Signed-off-by: Amit Prakash Shukla --- config/rte_config.h | 1 + lib/eventdev/meson.build | 2 + lib/eventdev/rte_event_dma_adapter.c | 1423 ++++++++++++++++++++++++++ lib/eventdev/rte_event_dma_adapter.h | 41 +- 4 files changed, 1458 insertions(+), 9 deletions(-) create mode 100644 lib/eventdev/rte_event_dma_adapter.c diff --git a/config/rte_config.h b/config/rte_config.h index 400e44e3cf..401727703f 100644 --- a/config/rte_config.h +++ b/config/rte_config.h @@ -77,6 +77,7 @@ #define RTE_EVENT_ETH_INTR_RING_SIZE 1024 #define RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE 32 #define RTE_EVENT_ETH_TX_ADAPTER_MAX_INSTANCE 32 +#define RTE_EVENT_DMA_ADAPTER_MAX_INSTANCE 32 /* rawdev defines */ #define RTE_RAWDEV_MAX_DEVS 64 diff --git a/lib/eventdev/meson.build b/lib/eventdev/meson.build index fbab3a42ad..822cd83857 100644 --- a/lib/eventdev/meson.build +++ b/lib/eventdev/meson.build @@ -19,6 +19,7 @@ sources = files( 'rte_event_crypto_adapter.c', 'rte_event_eth_rx_adapter.c', 'rte_event_eth_tx_adapter.c', + 'rte_event_dma_adapter.c', 'rte_event_ring.c', 'rte_event_timer_adapter.c', 'rte_eventdev.c', @@ -27,6 +28,7 @@ headers = files( 'rte_event_crypto_adapter.h', 'rte_event_eth_rx_adapter.h', 'rte_event_eth_tx_adapter.h', + 'rte_event_dma_adapter.h', 'rte_event_ring.h', 'rte_event_timer_adapter.h', 'rte_eventdev.h', diff --git a/lib/eventdev/rte_event_dma_adapter.c b/lib/eventdev/rte_event_dma_adapter.c new file mode 100644 index 0000000000..e13283726f --- /dev/null +++ b/lib/eventdev/rte_event_dma_adapter.c @@ -0,0 +1,1423 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Marvell. + */ + +#include +#include +#include +#include +#include +#include +#include +#include +#include + +#include "rte_eventdev.h" +#include "eventdev_pmd.h" +#include "rte_event_dma_adapter.h" + +#define DMA_BATCH_SIZE 32 +#define DMA_DEFAULT_MAX_NB 128 +#define DMA_ADAPTER_NAME_LEN 32 +#define DMA_ADAPTER_BUFFER_SIZE 1024 + +#define DMA_ADAPTER_OPS_BUFFER_SIZE (DMA_BATCH_SIZE + DMA_BATCH_SIZE) + +#define DMA_ADAPTER_ARRAY "event_dma_adapter_array" + +/* Macros to check for valid adapter */ +#define EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) \ + do { \ + if (!edma_adapter_valid_id(id)) { \ + RTE_EDEV_LOG_ERR("Invalid DMA adapter id = %d\n", id); \ + return retval; \ + } \ + } while (0) + +/* DMA ops circular buffer */ +struct dma_ops_circular_buffer { + /* Index of head element */ + uint16_t head; + + /* Index of tail element */ + uint16_t tail; + + /* Number of elements in buffer */ + uint16_t count; + + /* Size of circular buffer */ + uint16_t size; + + /* Pointer to hold rte_event_dma_adapter_op for processing */ + struct rte_event_dma_adapter_op **op_buffer; +} __rte_cache_aligned; + +/* Queue pair information */ +struct dma_vchan_queue_info { + /* Set to indicate vchan queue is enabled */ + bool vq_enabled; + + /* Circular buffer for batching DMA ops to dma_dev */ + struct dma_ops_circular_buffer dma_buf; +} __rte_cache_aligned; + +/* DMA device information */ +struct dma_device_info { + /* Pointer to dma_dev */ + struct rte_dma_dev *dev; + + /* Pointer to vchan queue info */ + struct dma_vchan_queue_info *vchanq; + + /* Pointer to vchan queue info. + * This holds ops passed by application. 
+ */ + struct dma_vchan_queue_info *tqmap; + + /* Next queue pair to be processed */ + uint16_t next_queue_pair_id; + + /* If num_vchanq > 0, the start callback will + * be invoked if not already invoked + */ + uint16_t num_vchanq; + + /* Set to indicate processing has been started */ + uint8_t dev_started; + + /* Set to indicate dmadev->eventdev packet + * transfer uses a hardware mechanism + */ + uint8_t internal_event_port; +} __rte_cache_aligned; + +struct event_dma_adapter { + /* Event device identifier */ + uint8_t eventdev_id; + + /* Event port identifier */ + uint8_t event_port_id; + + /* Adapter mode */ + enum rte_event_dma_adapter_mode mode; + + /* Memory allocation name */ + char mem_name[DMA_ADAPTER_NAME_LEN]; + + /* Socket identifier cached from eventdev */ + int socket_id; + + /* Lock to serialize config updates with service function */ + rte_spinlock_t lock; + + /* Next dma device to be processed */ + uint16_t next_dmadev_id; + + /* DMA device structure array */ + struct dma_device_info *dma_devs; + + /* Circular buffer for processing DMA ops to eventdev */ + struct dma_ops_circular_buffer ebuf; + + /* Configuration callback for rte_service configuration */ + rte_event_dma_adapter_conf_cb conf_cb; + + /* Configuration callback argument */ + void *conf_arg; + + /* Set if default_cb is being used */ + int default_cb_arg; + + /* No. of vchan queue configured */ + uint16_t nb_vchanq; + + /* Per adapter EAL service ID */ + uint32_t service_id; + + /* Service initialization state */ + uint8_t service_initialized; + + /* Max DMA ops processed in any service function invocation */ + uint32_t max_nb; + + /* Store event port's implicit release capability */ + uint8_t implicit_release_disabled; + + /* Flag to indicate backpressure at dma_dev + * Stop further dequeuing events from eventdev + */ + bool stop_enq_to_dma_dev; + + /* Loop counter to flush dma ops */ + uint16_t transmit_loop_count; + + /* Per instance stats structure */ + struct rte_event_dma_adapter_stats dma_stats; +} __rte_cache_aligned; + +static struct event_dma_adapter **event_dma_adapter; + +static inline int +edma_adapter_valid_id(uint8_t id) +{ + return id < RTE_EVENT_DMA_ADAPTER_MAX_INSTANCE; +} + +static inline struct event_dma_adapter * +edma_id_to_adapter(uint8_t id) +{ + return event_dma_adapter ? 
event_dma_adapter[id] : NULL; +} + +static int +edma_array_init(void) +{ + const struct rte_memzone *mz; + uint32_t sz; + + mz = rte_memzone_lookup(DMA_ADAPTER_ARRAY); + if (mz == NULL) { + sz = sizeof(struct event_dma_adapter *) * RTE_EVENT_DMA_ADAPTER_MAX_INSTANCE; + sz = RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE); + + mz = rte_memzone_reserve_aligned(DMA_ADAPTER_ARRAY, sz, rte_socket_id(), 0, + RTE_CACHE_LINE_SIZE); + if (mz == NULL) { + RTE_EDEV_LOG_ERR("Failed to reserve memzone : %s, err = %d", + DMA_ADAPTER_ARRAY, rte_errno); + return -rte_errno; + } + } + + event_dma_adapter = mz->addr; + + return 0; +} + +static inline bool +edma_circular_buffer_batch_ready(struct dma_ops_circular_buffer *bufp) +{ + return bufp->count >= DMA_BATCH_SIZE; +} + +static inline bool +edma_circular_buffer_space_for_batch(struct dma_ops_circular_buffer *bufp) +{ + return (bufp->size - bufp->count) >= DMA_BATCH_SIZE; +} + +static inline int +edma_circular_buffer_init(const char *name, struct dma_ops_circular_buffer *buf, uint16_t sz) +{ + buf->op_buffer = rte_zmalloc(name, sizeof(struct rte_event_dma_adapter_op *) * sz, 0); + if (buf->op_buffer == NULL) + return -ENOMEM; + + buf->size = sz; + + return 0; +} + +static inline void +edma_circular_buffer_free(struct dma_ops_circular_buffer *buf) +{ + rte_free(buf->op_buffer); +} + +static inline int +edma_circular_buffer_add(struct dma_ops_circular_buffer *bufp, struct rte_event_dma_adapter_op *op) +{ + uint16_t *tail = &bufp->tail; + + bufp->op_buffer[*tail] = op; + + /* circular buffer, go round */ + *tail = (*tail + 1) % bufp->size; + bufp->count++; + + return 0; +} + +static inline int +edma_circular_buffer_flush_to_dma_dev(struct event_dma_adapter *adapter, + struct dma_ops_circular_buffer *bufp, uint8_t dma_dev_id, + uint16_t vchan, uint16_t *nb_ops_flushed) +{ + struct rte_event_dma_adapter_op *op; + struct dma_vchan_queue_info *tq; + uint16_t *head = &bufp->head; + uint16_t *tail = &bufp->tail; + uint16_t n; + uint16_t i; + int ret; + + if (*tail > *head) + n = *tail - *head; + else if (*tail < *head) + n = bufp->size - *head; + else { + *nb_ops_flushed = 0; + return 0; /* buffer empty */ + } + + tq = &adapter->dma_devs[dma_dev_id].tqmap[vchan]; + + for (i = 0; i < n; i++) { + op = bufp->op_buffer[*head]; + ret = rte_dma_copy_sg(dma_dev_id, vchan, op->src_seg, op->dst_seg, + op->nb_src, op->nb_dst, op->flags); + if (ret < 0) + break; + + /* Enqueue in transaction queue. */ + edma_circular_buffer_add(&tq->dma_buf, op); + + *head = (*head + 1) % bufp->size; + } + + *nb_ops_flushed = i; + bufp->count -= *nb_ops_flushed; + if (!bufp->count) { + *head = 0; + *tail = 0; + } + + return *nb_ops_flushed == n ? 
0 : -1;
+}
+
+static int
+edma_default_config_cb(uint8_t id, uint8_t evdev_id, struct rte_event_dma_adapter_conf *conf,
+		       void *arg)
+{
+	struct rte_event_port_conf *port_conf;
+	struct rte_event_dev_config dev_conf;
+	struct event_dma_adapter *adapter;
+	struct rte_eventdev *dev;
+	uint8_t port_id;
+	int started;
+	int ret;
+
+	adapter = edma_id_to_adapter(id);
+	if (adapter == NULL)
+		return -EINVAL;
+
+	dev = &rte_eventdevs[adapter->eventdev_id];
+	dev_conf = dev->data->dev_conf;
+
+	started = dev->data->dev_started;
+	if (started)
+		rte_event_dev_stop(evdev_id);
+
+	port_id = dev_conf.nb_event_ports;
+	dev_conf.nb_event_ports += 1;
+
+	port_conf = arg;
+	if (port_conf->event_port_cfg & RTE_EVENT_PORT_CFG_SINGLE_LINK)
+		dev_conf.nb_single_link_event_port_queues += 1;
+
+	ret = rte_event_dev_configure(evdev_id, &dev_conf);
+	if (ret) {
+		RTE_EDEV_LOG_ERR("Failed to configure event dev %u\n", evdev_id);
+		if (started) {
+			if (rte_event_dev_start(evdev_id))
+				return -EIO;
+		}
+		return ret;
+	}
+
+	ret = rte_event_port_setup(evdev_id, port_id, port_conf);
+	if (ret) {
+		RTE_EDEV_LOG_ERR("Failed to setup event port %u\n", port_id);
+		return ret;
+	}
+
+	conf->event_port_id = port_id;
+	conf->max_nb = DMA_DEFAULT_MAX_NB;
+	if (started)
+		ret = rte_event_dev_start(evdev_id);
+
+	adapter->default_cb_arg = 1;
+	adapter->event_port_id = conf->event_port_id;
+
+	return ret;
+}
+
+int
+rte_event_dma_adapter_create_ext(uint8_t id, uint8_t evdev_id,
+				 rte_event_dma_adapter_conf_cb conf_cb,
+				 enum rte_event_dma_adapter_mode mode, void *conf_arg)
+{
+	struct rte_event_dev_info dev_info;
+	struct event_dma_adapter *adapter;
+	char name[DMA_ADAPTER_NAME_LEN];
+	uint16_t num_dma_dev;
+	int socket_id;
+	uint8_t i;
+	int ret;
+
+	EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+	RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(evdev_id, -EINVAL);
+
+	if (conf_cb == NULL)
+		return -EINVAL;
+
+	if (event_dma_adapter == NULL) {
+		ret = edma_array_init();
+		if (ret)
+			return ret;
+	}
+
+	adapter = edma_id_to_adapter(id);
+	if (adapter != NULL) {
+		RTE_EDEV_LOG_ERR("DMA adapter ID %d already exists!", id);
+		return -EEXIST;
+	}
+
+	socket_id = rte_event_dev_socket_id(evdev_id);
+	snprintf(name, DMA_ADAPTER_NAME_LEN, "rte_event_dma_adapter_%d", id);
+	adapter = rte_zmalloc_socket(name, sizeof(struct event_dma_adapter), RTE_CACHE_LINE_SIZE,
+				     socket_id);
+	if (adapter == NULL) {
+		RTE_EDEV_LOG_ERR("Failed to get mem for event DMA adapter!");
+		return -ENOMEM;
+	}
+
+	if (edma_circular_buffer_init("edma_circular_buffer", &adapter->ebuf,
+				      DMA_ADAPTER_BUFFER_SIZE)) {
+		RTE_EDEV_LOG_ERR("Failed to get memory for event adapter circular buffer");
+		rte_free(adapter);
+		return -ENOMEM;
+	}
+
+	ret = rte_event_dev_info_get(evdev_id, &dev_info);
+	if (ret < 0) {
+		RTE_EDEV_LOG_ERR("Failed to get info for eventdev %d: %s", evdev_id,
+				 dev_info.driver_name);
+		edma_circular_buffer_free(&adapter->ebuf);
+		rte_free(adapter);
+		return ret;
+	}
+
+	num_dma_dev = rte_dma_count_avail();
+
+	adapter->eventdev_id = evdev_id;
+	adapter->mode = mode;
+	strcpy(adapter->mem_name, name);
+	adapter->socket_id = socket_id;
+	adapter->conf_cb = conf_cb;
+	adapter->conf_arg = conf_arg;
+	adapter->dma_devs = rte_zmalloc_socket(adapter->mem_name,
+					       num_dma_dev * sizeof(struct dma_device_info), 0,
+					       socket_id);
+	if (adapter->dma_devs == NULL) {
+		RTE_EDEV_LOG_ERR("Failed to get memory for DMA devices\n");
+		edma_circular_buffer_free(&adapter->ebuf);
+		rte_free(adapter);
+		return -ENOMEM;
+	}
+
+	rte_spinlock_init(&adapter->lock);
+	for (i = 0; i <
num_dma_dev; i++) + adapter->dma_devs[i].dev = rte_dma_pmd_dev_get(i); + + event_dma_adapter[id] = adapter; + + return 0; +} + +int +rte_event_dma_adapter_create(uint8_t id, uint8_t evdev_id, struct rte_event_port_conf *port_config, + enum rte_event_dma_adapter_mode mode) +{ + struct rte_event_port_conf *pc; + int ret; + + if (port_config == NULL) + return -EINVAL; + + EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); + + pc = rte_malloc(NULL, sizeof(struct rte_event_port_conf), 0); + if (pc == NULL) + return -ENOMEM; + + rte_memcpy(pc, port_config, sizeof(struct rte_event_port_conf)); + ret = rte_event_dma_adapter_create_ext(id, evdev_id, edma_default_config_cb, mode, pc); + if (ret != 0) + rte_free(pc); + + return ret; +} + +int +rte_event_dma_adapter_free(uint8_t id) +{ + struct event_dma_adapter *adapter; + + EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); + + adapter = edma_id_to_adapter(id); + if (adapter == NULL) + return -EINVAL; + + rte_free(adapter->conf_arg); + rte_free(adapter->dma_devs); + edma_circular_buffer_free(&adapter->ebuf); + rte_free(adapter); + event_dma_adapter[id] = NULL; + + return 0; +} + +int +rte_event_dma_adapter_event_port_get(uint8_t id, uint8_t *event_port_id) +{ + struct event_dma_adapter *adapter; + + EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); + + adapter = edma_id_to_adapter(id); + if (adapter == NULL || event_port_id == NULL) + return -EINVAL; + + *event_port_id = adapter->event_port_id; + + return 0; +} + +static inline unsigned int +edma_enq_to_dma_dev(struct event_dma_adapter *adapter, struct rte_event *ev, unsigned int cnt) +{ + struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats; + union rte_event_dma_metadata *m_data = NULL; + struct dma_vchan_queue_info *vchan_qinfo = NULL; + struct rte_event_dma_adapter_op *dma_op; + uint16_t vchan, nb_enqueued = 0; + int16_t dma_dev_id; + unsigned int i, n; + int ret; + + ret = 0; + n = 0; + stats->event_deq_count += cnt; + + for (i = 0; i < cnt; i++) { + dma_op = ev[i].event_ptr; + if (dma_op == NULL) + continue; + + /* Expected to have metadata appended to dma_op. 
*/ + m_data = (union rte_event_dma_metadata *)((uint8_t *)dma_op + + sizeof(struct rte_event_dma_adapter_op)); + if (m_data == NULL) { + if (dma_op != NULL && dma_op->op_mp != NULL) + rte_mempool_put(dma_op->op_mp, dma_op); + continue; + } + + dma_dev_id = m_data->request_info.dma_dev_id; + vchan = m_data->request_info.vchan; + vchan_qinfo = &adapter->dma_devs[dma_dev_id].vchanq[vchan]; + if (!vchan_qinfo->vq_enabled) { + if (dma_op != NULL && dma_op->op_mp != NULL) + rte_mempool_put(dma_op->op_mp, dma_op); + continue; + } + edma_circular_buffer_add(&vchan_qinfo->dma_buf, dma_op); + + if (edma_circular_buffer_batch_ready(&vchan_qinfo->dma_buf)) { + ret = edma_circular_buffer_flush_to_dma_dev(adapter, &vchan_qinfo->dma_buf, + dma_dev_id, vchan, + &nb_enqueued); + stats->dma_enq_count += nb_enqueued; + n += nb_enqueued; + + /** + * If some dma ops failed to flush to dma_dev and + * space for another batch is not available, stop + * dequeue from eventdev momentarily + */ + if (unlikely(ret < 0 && + !edma_circular_buffer_space_for_batch(&vchan_qinfo->dma_buf))) + adapter->stop_enq_to_dma_dev = true; + } + } + + return n; +} + +static unsigned int +edma_adapter_dev_flush(struct event_dma_adapter *adapter, int16_t dma_dev_id, + uint16_t *nb_ops_flushed) +{ + struct dma_vchan_queue_info *vchan_queue; + struct dma_device_info *dev_info; + uint16_t nb = 0, nb_enqueued = 0; + uint16_t vchan, nb_vchans; + struct rte_dma_dev *dev; + + dev_info = &adapter->dma_devs[dma_dev_id]; + dev = rte_dma_pmd_dev_get(dma_dev_id); + nb_vchans = dev->data->dev_conf.nb_vchans; + + for (vchan = 0; vchan < nb_vchans; vchan++) { + + vchan_queue = &dev_info->vchanq[vchan]; + if (unlikely(vchan_queue == NULL || !vchan_queue->vq_enabled)) + continue; + + edma_circular_buffer_flush_to_dma_dev(adapter, &vchan_queue->dma_buf, dma_dev_id, + vchan, &nb_enqueued); + *nb_ops_flushed += vchan_queue->dma_buf.count; + nb += nb_enqueued; + } + + return nb; +} + +static unsigned int +edma_adapter_enq_flush(struct event_dma_adapter *adapter) +{ + struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats; + int16_t dma_dev_id; + uint16_t nb_enqueued = 0; + uint16_t nb_ops_flushed = 0; + uint16_t num_dma_dev = rte_dma_count_avail(); + + for (dma_dev_id = 0; dma_dev_id < num_dma_dev; dma_dev_id++) + nb_enqueued += edma_adapter_dev_flush(adapter, dma_dev_id, &nb_ops_flushed); + /** + * Enable dequeue from eventdev if all ops from circular + * buffer flushed to dma_dev + */ + if (!nb_ops_flushed) + adapter->stop_enq_to_dma_dev = false; + + stats->dma_enq_count += nb_enqueued; + + return nb_enqueued; +} + +/* Flush an instance's enqueue buffers every DMA_ENQ_FLUSH_THRESHOLD + * iterations of edma_adapter_enq_run() + */ +#define DMA_ENQ_FLUSH_THRESHOLD 1024 + +static int +edma_adapter_enq_run(struct event_dma_adapter *adapter, unsigned int max_enq) +{ + struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats; + uint8_t event_port_id = adapter->event_port_id; + uint8_t event_dev_id = adapter->eventdev_id; + struct rte_event ev[DMA_BATCH_SIZE]; + unsigned int nb_enq, nb_enqueued; + uint16_t n; + + if (adapter->mode == RTE_EVENT_DMA_ADAPTER_OP_NEW) + return 0; + + nb_enqueued = 0; + for (nb_enq = 0; nb_enq < max_enq; nb_enq += n) { + + if (unlikely(adapter->stop_enq_to_dma_dev)) { + nb_enqueued += edma_adapter_enq_flush(adapter); + + if (unlikely(adapter->stop_enq_to_dma_dev)) + break; + } + + stats->event_poll_count++; + n = rte_event_dequeue_burst(event_dev_id, event_port_id, ev, DMA_BATCH_SIZE, 0); + + if (!n) + break; + + 
nb_enqueued += edma_enq_to_dma_dev(adapter, ev, n); + } + + if ((++adapter->transmit_loop_count & (DMA_ENQ_FLUSH_THRESHOLD - 1)) == 0) + nb_enqueued += edma_adapter_enq_flush(adapter); + + return nb_enqueued; +} + +#define DMA_ADAPTER_MAX_EV_ENQ_RETRIES 100 + +static inline uint16_t +edma_ops_enqueue_burst(struct event_dma_adapter *adapter, struct rte_event_dma_adapter_op **ops, + uint16_t num) +{ + struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats; + uint8_t event_port_id = adapter->event_port_id; + union rte_event_dma_metadata *m_data = NULL; + uint8_t event_dev_id = adapter->eventdev_id; + struct rte_event events[DMA_BATCH_SIZE]; + uint16_t nb_enqueued, nb_ev; + uint8_t retry; + uint8_t i; + + nb_ev = 0; + retry = 0; + nb_enqueued = 0; + num = RTE_MIN(num, DMA_BATCH_SIZE); + for (i = 0; i < num; i++) { + struct rte_event *ev = &events[nb_ev++]; + + /* Expected to have metadata appended to dma_op. */ + m_data = (union rte_event_dma_metadata *)((uint8_t *)ops[i] + + sizeof(struct rte_event_dma_adapter_op)); + if (unlikely(m_data == NULL)) { + if (ops[i] != NULL && ops[i]->op_mp != NULL) + rte_mempool_put(ops[i]->op_mp, ops[i]); + continue; + } + + rte_memcpy(ev, &m_data->response_info, sizeof(*ev)); + ev->event_ptr = ops[i]; + ev->event_type = RTE_EVENT_TYPE_DMADEV; + if (adapter->implicit_release_disabled) + ev->op = RTE_EVENT_OP_FORWARD; + else + ev->op = RTE_EVENT_OP_NEW; + } + + do { + nb_enqueued += rte_event_enqueue_burst(event_dev_id, event_port_id, + &events[nb_enqueued], nb_ev - nb_enqueued); + + } while (retry++ < DMA_ADAPTER_MAX_EV_ENQ_RETRIES && nb_enqueued < nb_ev); + + stats->event_enq_fail_count += nb_ev - nb_enqueued; + stats->event_enq_count += nb_enqueued; + stats->event_enq_retry_count += retry - 1; + + return nb_enqueued; +} + +static int +edma_circular_buffer_flush_to_evdev(struct event_dma_adapter *adapter, + struct dma_ops_circular_buffer *bufp, + uint16_t *enqueue_count) +{ + struct rte_event_dma_adapter_op **ops = bufp->op_buffer; + uint16_t n = 0, nb_ops_flushed; + uint16_t *head = &bufp->head; + uint16_t *tail = &bufp->tail; + + if (*tail > *head) + n = *tail - *head; + else if (*tail < *head) + n = bufp->size - *head; + else { + if (enqueue_count) + *enqueue_count = 0; + return 0; /* buffer empty */ + } + + if (enqueue_count && n > *enqueue_count) + n = *enqueue_count; + + nb_ops_flushed = edma_ops_enqueue_burst(adapter, &ops[*head], n); + if (enqueue_count) + *enqueue_count = nb_ops_flushed; + + bufp->count -= nb_ops_flushed; + if (!bufp->count) { + *head = 0; + *tail = 0; + return 0; /* buffer empty */ + } + + *head = (*head + nb_ops_flushed) % bufp->size; + return 1; +} + +static void +edma_ops_buffer_flush(struct event_dma_adapter *adapter) +{ + if (likely(adapter->ebuf.count == 0)) + return; + + while (edma_circular_buffer_flush_to_evdev(adapter, &adapter->ebuf, NULL)) + ; +} + +static inline unsigned int +edma_adapter_deq_run(struct event_dma_adapter *adapter, unsigned int max_deq) +{ + struct rte_event_dma_adapter_stats *stats = &adapter->dma_stats; + struct dma_vchan_queue_info *vchan_queue; + struct dma_ops_circular_buffer *tq_buf; + struct rte_event_dma_adapter_op *ops; + uint16_t n, nb_deq, nb_enqueued, i; + struct dma_device_info *dev_info; + uint16_t vchan, num_vchan; + struct rte_dma_dev *dev; + uint16_t num_dma_dev; + int16_t dma_dev_id; + uint16_t index; + bool done; + bool err; + + nb_deq = 0; + edma_ops_buffer_flush(adapter); + + num_dma_dev = rte_dma_count_avail(); + do { + done = true; + + for (dma_dev_id = 
adapter->next_dmadev_id; dma_dev_id < num_dma_dev; dma_dev_id++) { + uint16_t queues = 0; + dev_info = &adapter->dma_devs[dma_dev_id]; + dev = dev_info->dev; + if (unlikely(dev == NULL)) + continue; + + num_vchan = dev->data->dev_conf.nb_vchans; + for (vchan = dev_info->next_queue_pair_id; queues < num_vchan; + vchan = (vchan + 1) % num_vchan, queues++) { + + vchan_queue = &dev_info->vchanq[vchan]; + if (unlikely(vchan_queue == NULL || !vchan_queue->vq_enabled)) + continue; + + n = rte_dma_completed(dma_dev_id, vchan, DMA_BATCH_SIZE, + &index, &err); + if (!n) + continue; + + done = false; + stats->dma_deq_count += n; + + tq_buf = &dev_info->tqmap[vchan].dma_buf; + + nb_enqueued = n; + if (unlikely(!adapter->ebuf.count)) + edma_circular_buffer_flush_to_evdev(adapter, tq_buf, + &nb_enqueued); + + if (likely(nb_enqueued == n)) + goto check; + + /* Failed to enqueue events case */ + for (i = nb_enqueued; i < n; i++) { + ops = tq_buf->op_buffer[tq_buf->head]; + edma_circular_buffer_add(&adapter->ebuf, ops); + tq_buf->head = (tq_buf->head + 1) % tq_buf->size; + } + +check: + nb_deq += n; + if (nb_deq >= max_deq) { + if ((vchan + 1) == num_vchan) + adapter->next_dmadev_id = + (dma_dev_id + 1) % num_dma_dev; + + dev_info->next_queue_pair_id = (vchan + 1) % num_vchan; + + return nb_deq; + } + } + } + adapter->next_dmadev_id = 0; + + } while (done == false); + + return nb_deq; +} + +static int +edma_adapter_run(struct event_dma_adapter *adapter, unsigned int max_ops) +{ + unsigned int ops_left = max_ops; + + while (ops_left > 0) { + unsigned int e_cnt, d_cnt; + + e_cnt = edma_adapter_deq_run(adapter, ops_left); + ops_left -= RTE_MIN(ops_left, e_cnt); + + d_cnt = edma_adapter_enq_run(adapter, ops_left); + ops_left -= RTE_MIN(ops_left, d_cnt); + + if (e_cnt == 0 && d_cnt == 0) + break; + } + + if (ops_left == max_ops) { + rte_event_maintain(adapter->eventdev_id, adapter->event_port_id, 0); + return -EAGAIN; + } else + return 0; +} + +static int +edma_service_func(void *args) +{ + struct event_dma_adapter *adapter = args; + int ret; + + if (rte_spinlock_trylock(&adapter->lock) == 0) + return 0; + ret = edma_adapter_run(adapter, adapter->max_nb); + rte_spinlock_unlock(&adapter->lock); + + return ret; +} + +static int +edma_init_service(struct event_dma_adapter *adapter, uint8_t id) +{ + struct rte_event_dma_adapter_conf adapter_conf; + struct rte_service_spec service; + uint32_t impl_rel; + int ret; + + if (adapter->service_initialized) + return 0; + + memset(&service, 0, sizeof(service)); + snprintf(service.name, DMA_ADAPTER_NAME_LEN, "rte_event_dma_adapter_%d", id); + service.socket_id = adapter->socket_id; + service.callback = edma_service_func; + service.callback_userdata = adapter; + + /* Service function handles locking for queue add/del updates */ + service.capabilities = RTE_SERVICE_CAP_MT_SAFE; + ret = rte_service_component_register(&service, &adapter->service_id); + if (ret) { + RTE_EDEV_LOG_ERR("failed to register service %s err = %" PRId32, service.name, ret); + return ret; + } + + ret = adapter->conf_cb(id, adapter->eventdev_id, &adapter_conf, adapter->conf_arg); + if (ret) { + RTE_EDEV_LOG_ERR("configuration callback failed err = %" PRId32, ret); + return ret; + } + + adapter->max_nb = adapter_conf.max_nb; + adapter->event_port_id = adapter_conf.event_port_id; + + if (rte_event_port_attr_get(adapter->eventdev_id, adapter->event_port_id, + RTE_EVENT_PORT_ATTR_IMPLICIT_RELEASE_DISABLE, &impl_rel)) { + RTE_EDEV_LOG_ERR("Failed to get port info for eventdev %" PRId32, + adapter->eventdev_id); 
+ edma_circular_buffer_free(&adapter->ebuf); + rte_free(adapter); + return -EINVAL; + } + + adapter->implicit_release_disabled = (uint8_t)impl_rel; + adapter->service_initialized = 1; + + return ret; +} + +static void +edma_update_vchanq_info(struct event_dma_adapter *adapter, struct dma_device_info *dev_info, + uint16_t vchan, uint8_t add) +{ + struct dma_vchan_queue_info *vchan_info; + struct dma_vchan_queue_info *tqmap_info; + int enabled; + uint16_t i; + + if (dev_info->vchanq == NULL) + return; + + if (vchan == RTE_DMA_ALL_VCHAN) { + for (i = 0; i < dev_info->dev->data->dev_conf.nb_vchans; i++) + edma_update_vchanq_info(adapter, dev_info, i, add); + } else { + tqmap_info = &dev_info->tqmap[vchan]; + vchan_info = &dev_info->vchanq[vchan]; + enabled = vchan_info->vq_enabled; + if (add) { + adapter->nb_vchanq += !enabled; + dev_info->num_vchanq += !enabled; + } else { + adapter->nb_vchanq -= enabled; + dev_info->num_vchanq -= enabled; + } + vchan_info->vq_enabled = !!add; + tqmap_info->vq_enabled = !!add; + } +} + +static int +edma_add_queue_pair(struct event_dma_adapter *adapter, int16_t dma_dev_id, uint16_t vchan) +{ + struct dma_device_info *dev_info = &adapter->dma_devs[dma_dev_id]; + struct dma_vchan_queue_info *vchanq; + struct dma_vchan_queue_info *tqmap; + uint16_t nb_vchans; + uint32_t i; + + if (dev_info->vchanq == NULL) { + nb_vchans = dev_info->dev->data->dev_conf.nb_vchans; + + dev_info->vchanq = rte_zmalloc_socket(adapter->mem_name, + nb_vchans * sizeof(struct dma_vchan_queue_info), + 0, adapter->socket_id); + if (dev_info->vchanq == NULL) + return -ENOMEM; + + dev_info->tqmap = rte_zmalloc_socket(adapter->mem_name, + nb_vchans * sizeof(struct dma_vchan_queue_info), + 0, adapter->socket_id); + if (dev_info->tqmap == NULL) + return -ENOMEM; + + for (i = 0; i < nb_vchans; i++) { + vchanq = &dev_info->vchanq[i]; + + if (edma_circular_buffer_init("dma_dev_circular_buffer", &vchanq->dma_buf, + DMA_ADAPTER_OPS_BUFFER_SIZE)) { + RTE_EDEV_LOG_ERR("Failed to get memory for dma_dev buffer"); + rte_free(vchanq); + return -ENOMEM; + } + + tqmap = &dev_info->tqmap[i]; + if (edma_circular_buffer_init("dma_dev_circular_trans_buf", &tqmap->dma_buf, + DMA_ADAPTER_OPS_BUFFER_SIZE)) { + RTE_EDEV_LOG_ERR( + "Failed to get memory for dma_dev transction buffer"); + rte_free(tqmap); + return -ENOMEM; + } + } + } + + if (vchan == RTE_DMA_ALL_VCHAN) { + for (i = 0; i < dev_info->dev->data->dev_conf.nb_vchans; i++) + edma_update_vchanq_info(adapter, dev_info, i, 1); + } else + edma_update_vchanq_info(adapter, dev_info, vchan, 1); + + return 0; +} + +int +rte_event_dma_adapter_vchan_queue_add(uint8_t id, int16_t dma_dev_id, uint16_t vchan, + const struct rte_event *event) +{ + struct event_dma_adapter *adapter; + struct dma_device_info *dev_info; + uint32_t cap; + int ret; + + EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); + + if (!rte_dma_is_valid(dma_dev_id)) { + RTE_EDEV_LOG_ERR("Invalid dma_dev_id = %" PRIu8, dma_dev_id); + return -EINVAL; + } + + adapter = edma_id_to_adapter(id); + if (adapter == NULL) + return -EINVAL; + + ret = rte_event_dma_adapter_caps_get(adapter->eventdev_id, dma_dev_id, &cap); + if (ret) { + RTE_EDEV_LOG_ERR("Failed to get adapter caps dev %u dma_dev %u", id, dma_dev_id); + return ret; + } + + if ((cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND) && (event == NULL)) { + RTE_EDEV_LOG_ERR("Event can not be NULL for dma_dev_id = %u", dma_dev_id); + return -EINVAL; + } + + dev_info = &adapter->dma_devs[dma_dev_id]; + if (vchan != RTE_DMA_ALL_VCHAN && vchan >= 
dev_info->dev->data->dev_conf.nb_vchans) {
+		RTE_EDEV_LOG_ERR("Invalid vchan %u", vchan);
+		return -EINVAL;
+	}
+
+	/* In case HW cap is RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, no
+	 * need of service core as HW supports event forward capability.
+	 */
+	if ((cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) ||
+	    (cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND &&
+	     adapter->mode == RTE_EVENT_DMA_ADAPTER_OP_NEW) ||
+	    (cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_NEW &&
+	     adapter->mode == RTE_EVENT_DMA_ADAPTER_OP_NEW)) {
+		if (dev_info->vchanq == NULL) {
+			dev_info->vchanq = rte_zmalloc_socket(adapter->mem_name,
+					dev_info->dev->data->dev_conf.nb_vchans *
+					sizeof(struct dma_vchan_queue_info),
+					0, adapter->socket_id);
+			if (dev_info->vchanq == NULL) {
+				RTE_EDEV_LOG_ERR("Failed to allocate vchan queue info memory");
+				return -ENOMEM;
+			}
+		}
+
+		if (dev_info->tqmap == NULL) {
+			dev_info->tqmap = rte_zmalloc_socket(adapter->mem_name,
+					dev_info->dev->data->dev_conf.nb_vchans *
+					sizeof(struct dma_vchan_queue_info),
+					0, adapter->socket_id);
+			if (dev_info->tqmap == NULL) {
+				RTE_EDEV_LOG_ERR("Failed to allocate tqmap queue info memory");
+				return -ENOMEM;
+			}
+		}
+
+		edma_update_vchanq_info(adapter, &adapter->dma_devs[dma_dev_id], vchan, 1);
+	}
+
+	/* In case HW cap is RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_NEW, or SW adapter, initiate
+	 * services so the application can choose whichever way it wants to use the adapter.
+	 *
+	 * Case 1: RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_NEW. The application may want to use
+	 * one of the two modes below
+	 *
+	 * a. OP_FORWARD mode -> HW Dequeue + SW enqueue
+	 * b. OP_NEW mode -> HW Dequeue
+	 *
+	 * Case 2: No HW caps, use SW adapter
+	 *
+	 * a. OP_FORWARD mode -> SW enqueue & dequeue
+	 * b. OP_NEW mode -> SW Dequeue
+	 */
+	if ((cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_NEW &&
+	     !(cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) &&
+	     adapter->mode == RTE_EVENT_DMA_ADAPTER_OP_FORWARD) ||
+	    (!(cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_NEW) &&
+	     !(cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) &&
+	     !(cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND))) {
+		rte_spinlock_lock(&adapter->lock);
+		ret = edma_init_service(adapter, id);
+		if (ret == 0)
+			ret = edma_add_queue_pair(adapter, dma_dev_id, vchan);
+		rte_spinlock_unlock(&adapter->lock);
+
+		if (ret)
+			return ret;
+
+		rte_service_component_runstate_set(adapter->service_id, 1);
+	}
+
+	return 0;
+}
+
+int
+rte_event_dma_adapter_vchan_queue_del(uint8_t id, int16_t dma_dev_id, uint16_t vchan)
+{
+	struct event_dma_adapter *adapter;
+	struct dma_device_info *dev_info;
+	uint32_t cap;
+	int ret;
+
+	EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+	if (!rte_dma_is_valid(dma_dev_id)) {
+		RTE_EDEV_LOG_ERR("Invalid dma_dev_id = %" PRIu8, dma_dev_id);
+		return -EINVAL;
+	}
+
+	adapter = edma_id_to_adapter(id);
+	if (adapter == NULL)
+		return -EINVAL;
+
+	ret = rte_event_dma_adapter_caps_get(adapter->eventdev_id, dma_dev_id, &cap);
+	if (ret)
+		return ret;
+
+	dev_info = &adapter->dma_devs[dma_dev_id];
+
+	if (vchan != RTE_DMA_ALL_VCHAN && vchan >= dev_info->dev->data->dev_conf.nb_vchans) {
+		RTE_EDEV_LOG_ERR("Invalid vchan %" PRIu16, vchan);
+		return -EINVAL;
+	}
+
+	if ((cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) ||
+	    (cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_NEW &&
+	     adapter->mode == RTE_EVENT_DMA_ADAPTER_OP_NEW)) {
+		edma_update_vchanq_info(adapter, dev_info, vchan, 0);
+		if (dev_info->num_vchanq == 0) {
+			rte_free(dev_info->vchanq);
+			dev_info->vchanq = NULL;
+		}
+	} else { +
if (adapter->nb_vchanq == 0) + return 0; + + rte_spinlock_lock(&adapter->lock); + edma_update_vchanq_info(adapter, dev_info, vchan, 0); + + if (dev_info->num_vchanq == 0) { + rte_free(dev_info->vchanq); + rte_free(dev_info->tqmap); + dev_info->vchanq = NULL; + dev_info->tqmap = NULL; + } + + rte_spinlock_unlock(&adapter->lock); + rte_service_component_runstate_set(adapter->service_id, adapter->nb_vchanq); + } + + return ret; +} + +int +rte_event_dma_adapter_service_id_get(uint8_t id, uint32_t *service_id) +{ + struct event_dma_adapter *adapter; + + EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); + + adapter = edma_id_to_adapter(id); + if (adapter == NULL || service_id == NULL) + return -EINVAL; + + if (adapter->service_initialized) + *service_id = adapter->service_id; + + return adapter->service_initialized ? 0 : -ESRCH; +} + +static int +edma_adapter_ctrl(uint8_t id, int start) +{ + struct event_dma_adapter *adapter; + struct dma_device_info *dev_info; + struct rte_eventdev *dev; + uint16_t num_dma_dev; + int stop = !start; + int use_service; + uint32_t i; + + use_service = 0; + EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); + adapter = edma_id_to_adapter(id); + if (adapter == NULL) + return -EINVAL; + + num_dma_dev = rte_dma_count_avail(); + dev = &rte_eventdevs[adapter->eventdev_id]; + + for (i = 0; i < num_dma_dev; i++) { + dev_info = &adapter->dma_devs[i]; + /* start check for num queue pairs */ + if (start && !dev_info->num_vchanq) + continue; + /* stop check if dev has been started */ + if (stop && !dev_info->dev_started) + continue; + use_service |= !dev_info->internal_event_port; + dev_info->dev_started = start; + if (dev_info->internal_event_port == 0) + continue; + start ? (*dev->dev_ops->dma_adapter_start)(dev, &dev_info->dev[i]) : + (*dev->dev_ops->dma_adapter_stop)(dev, &dev_info->dev[i]); + } + + if (use_service) + rte_service_runstate_set(adapter->service_id, start); + + return 0; +} + +int +rte_event_dma_adapter_start(uint8_t id) +{ + struct event_dma_adapter *adapter; + + EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); + + adapter = edma_id_to_adapter(id); + if (adapter == NULL) + return -EINVAL; + + return edma_adapter_ctrl(id, 1); +} + +int +rte_event_dma_adapter_stop(uint8_t id) +{ + return edma_adapter_ctrl(id, 0); +} + +#define DEFAULT_MAX_NB 128 + +int +rte_event_dma_adapter_runtime_params_init(struct rte_event_dma_adapter_runtime_params *params) +{ + if (params == NULL) + return -EINVAL; + + memset(params, 0, sizeof(*params)); + params->max_nb = DEFAULT_MAX_NB; + + return 0; +} + +static int +dma_adapter_cap_check(struct event_dma_adapter *adapter) +{ + uint32_t caps; + int ret; + + if (!adapter->nb_vchanq) + return -EINVAL; + + ret = rte_event_dma_adapter_caps_get(adapter->eventdev_id, adapter->next_dmadev_id, &caps); + if (ret) { + RTE_EDEV_LOG_ERR("Failed to get adapter caps dev %" PRIu8 " cdev %" PRIu8, + adapter->eventdev_id, adapter->next_dmadev_id); + return ret; + } + + if ((caps & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) || + (caps & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_NEW)) + return -ENOTSUP; + + return 0; +} + +int +rte_event_dma_adapter_runtime_params_set(uint8_t id, + struct rte_event_dma_adapter_runtime_params *params) +{ + struct event_dma_adapter *adapter; + int ret; + + EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); + + if (params == NULL) { + RTE_EDEV_LOG_ERR("params pointer is NULL\n"); + return -EINVAL; + } + + adapter = edma_id_to_adapter(id); + if (adapter == NULL) + return -EINVAL; + + ret = 
dma_adapter_cap_check(adapter); + if (ret) + return ret; + + rte_spinlock_lock(&adapter->lock); + adapter->max_nb = params->max_nb; + rte_spinlock_unlock(&adapter->lock); + + return 0; +} + +int +rte_event_dma_adapter_runtime_params_get(uint8_t id, + struct rte_event_dma_adapter_runtime_params *params) +{ + struct event_dma_adapter *adapter; + int ret; + + EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); + + if (params == NULL) { + RTE_EDEV_LOG_ERR("params pointer is NULL\n"); + return -EINVAL; + } + + adapter = edma_id_to_adapter(id); + if (adapter == NULL) + return -EINVAL; + + ret = dma_adapter_cap_check(adapter); + if (ret) + return ret; + + params->max_nb = adapter->max_nb; + + return 0; +} + +int +rte_event_dma_adapter_stats_get(uint8_t id, struct rte_event_dma_adapter_stats *stats) +{ + struct rte_event_dma_adapter_stats dev_stats_sum = {0}; + struct rte_event_dma_adapter_stats dev_stats; + struct event_dma_adapter *adapter; + struct dma_device_info *dev_info; + struct rte_eventdev *dev; + uint16_t num_dma_dev; + uint32_t i; + int ret; + + EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); + + adapter = edma_id_to_adapter(id); + if (adapter == NULL || stats == NULL) + return -EINVAL; + + num_dma_dev = rte_dma_count_avail(); + dev = &rte_eventdevs[adapter->eventdev_id]; + memset(stats, 0, sizeof(*stats)); + for (i = 0; i < num_dma_dev; i++) { + dev_info = &adapter->dma_devs[i]; + + if (dev_info->internal_event_port == 0 || + dev->dev_ops->dma_adapter_stats_get == NULL) + continue; + + ret = (*dev->dev_ops->dma_adapter_stats_get)(dev, dev_info->dev, &dev_stats); + if (ret) + continue; + + dev_stats_sum.dma_deq_count += dev_stats.dma_deq_count; + dev_stats_sum.event_enq_count += dev_stats.event_enq_count; + } + + stats->dma_deq_count += dev_stats_sum.dma_deq_count; + stats->event_enq_count += dev_stats_sum.event_enq_count; + + return 0; +} + +int +rte_event_dma_adapter_stats_reset(uint8_t id) +{ + struct event_dma_adapter *adapter; + struct dma_device_info *dev_info; + struct rte_eventdev *dev; + uint16_t num_dma_dev; + uint32_t i; + + EVENT_DMA_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); + + adapter = edma_id_to_adapter(id); + if (adapter == NULL) + return -EINVAL; + + num_dma_dev = rte_dma_count_avail(); + dev = &rte_eventdevs[adapter->eventdev_id]; + for (i = 0; i < num_dma_dev; i++) { + dev_info = &adapter->dma_devs[i]; + + if (dev_info->internal_event_port == 0 || + dev->dev_ops->dma_adapter_stats_reset == NULL) + continue; + + (*dev->dev_ops->dma_adapter_stats_reset)(dev, dev_info->dev); + } + + memset(&adapter->dma_stats, 0, sizeof(adapter->dma_stats)); + + return 0; +} + +uint16_t +rte_event_dma_adapter_enqueue(uint8_t dev_id, uint8_t port_id, struct rte_event ev[], + uint16_t nb_events) +{ + const struct rte_event_fp_ops *fp_ops; + void *port; + + fp_ops = &rte_event_fp_ops[dev_id]; + port = fp_ops->data[port_id]; + + return fp_ops->dma_enqueue(port, ev, nb_events); +} diff --git a/lib/eventdev/rte_event_dma_adapter.h b/lib/eventdev/rte_event_dma_adapter.h index c667398d08..d3d8b2dbbd 100644 --- a/lib/eventdev/rte_event_dma_adapter.h +++ b/lib/eventdev/rte_event_dma_adapter.h @@ -203,16 +203,39 @@ struct rte_event_dma_request { * information. Application is expected to fill in struct rte_event response_info. 
*/ - int16_t dmadev_id; + int16_t dma_dev_id; /**< DMA device ID to be used */ - uint16_t queue_pair_id; - /**< DMA queue pair ID to be used */ + uint16_t vchan; + /**< DMA vchan ID to be used */ uint32_t rsvd; /**< Reserved bits */ }; +/** + * DMA event metadata structure will be filled by application + * to provide dma request and event response information. + * + * If dma events are enqueued using a HW mechanism, the dmadev + * PMD will use the event response information to set up the event + * that is enqueued back to eventdev after completion of the dma + * operation. If the transfer is done by SW, event response information + * will be used by the adapter. + */ +union rte_event_dma_metadata { + struct rte_event_dma_request request_info; + /**< Request information to be filled in by application + * for RTE_EVENT_DMA_ADAPTER_OP_FORWARD mode. + * First 8 bytes of request_info is reserved for response_info. + */ + struct rte_event response_info; + /**< Response information to be filled in by application + * for RTE_EVENT_DMA_ADAPTER_OP_NEW and + * RTE_EVENT_DMA_ADAPTER_OP_FORWARD mode. + */ +}; + /** * Adapter configuration structure that the adapter configuration callback function is expected to * fill out. @@ -406,9 +429,9 @@ int rte_event_dma_adapter_event_port_get(uint8_t id, uint8_t *event_port_id); * Adapter identifier. * @param dmadev_id * dmadev identifier. - * @param queue_pair_id - * DMA device vchan queue identifier. If queue_pair_id is set -1, adapter adds all the - * preconfigured queue pairs to the instance. + * @param vchan + * DMA device vchan queue identifier. If vchan is set -1, adapter adds all the + * preconfigured vchan queue to the instance. * @param event * If HW supports dmadev queue pair to event queue binding, application is expected to fill in * event information, else it will be NULL. @@ -419,7 +442,7 @@ int rte_event_dma_adapter_event_port_get(uint8_t id, uint8_t *event_port_id); * - <0: Error code on failure. */ __rte_experimental -int rte_event_dma_adapter_vchan_queue_add(uint8_t id, int16_t dmadev_id, int32_t queue_pair_id, +int rte_event_dma_adapter_vchan_queue_add(uint8_t id, int16_t dmadev_id, uint16_t vchan, const struct rte_event *event); /** @@ -432,7 +455,7 @@ int rte_event_dma_adapter_vchan_queue_add(uint8_t id, int16_t dmadev_id, int32_t * Adapter identifier. * @param dmadev_id * DMA device identifier. - * @param queue_pair_id + * @param vchan * DMA device vchan queue identifier. * * @return @@ -440,7 +463,7 @@ int rte_event_dma_adapter_vchan_queue_add(uint8_t id, int16_t dmadev_id, int32_t * - <0: Error code on failure. 
*/ __rte_experimental -int rte_event_dma_adapter_vchan_queue_del(uint8_t id, int16_t dmadev_id, int32_t queue_pair_id); +int rte_event_dma_adapter_vchan_queue_del(uint8_t id, int16_t dmadev_id, uint16_t vchan); /** * @warning From patchwork Tue Sep 19 13:42:19 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 131629 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id F140642604; Tue, 19 Sep 2023 15:43:09 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B124140DDA; Tue, 19 Sep 2023 15:43:02 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 8784A40DF5 for ; Tue, 19 Sep 2023 15:43:00 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 38JDZuUk011914; Tue, 19 Sep 2023 06:42:59 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=17xPRe7Z730vmmj1u4mHHJWGFqCqM7KgS9IB0eG5pGQ=; b=W7+++QLLucyEPWUxRkbBp1ziSm+AlcKPtmSfgtadtz59isqQhZ7hfcA0VteNjIdkSHoo 2Zjd3MDzPEQV0zpJ3vj9HfYECFvi/4pJgVdHVgYc6+8kosxCGb3Tke2VhK1ucPVWvWMa V1rI83DHBBg+yKDLkFKiEuTs/PkUk319zVxhzm453QCIRQ9zuBbSuyTr9EY6GAxbeYf2 +2/n1X88K5GNCtwwqylu/49T1Yi33YOfshJ4Sl2fBkW9/Srz+WMUJBQbDwxBNxg0Wsxt AdTOzqUOa8MnQmTkUTtJRTrN13xl9pHsUu/6cMydNK9dYV9IXfvCdnvsIrAgP0huLZwO kA== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3t7cnq00vn-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Tue, 19 Sep 2023 06:42:59 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Tue, 19 Sep 2023 06:42:57 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Tue, 19 Sep 2023 06:42:57 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id 8EACF3F709B; Tue, 19 Sep 2023 06:42:53 -0700 (PDT) From: Amit Prakash Shukla To: CC: , , , , , , , , , , , , , , Amit Prakash Shukla Subject: [PATCH v1 4/7] app/test: add event DMA adapter auto-test Date: Tue, 19 Sep 2023 19:12:19 +0530 Message-ID: <20230919134222.2500033-4-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230919134222.2500033-1-amitprakashs@marvell.com> References: <20230919134222.2500033-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: jCb5cqpQ_RKN-dfYlHSSecxfu77uycJp X-Proofpoint-GUID: jCb5cqpQ_RKN-dfYlHSSecxfu77uycJp X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.980,Hydra:6.0.601,FMLib:17.11.176.26 definitions=2023-09-19_06,2023-09-19_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Added testsuite to test the dma adapter 
functionality. The testsuite detects event and DMA device capability and accordingly dma adapter is configured and modes are tested. Signed-off-by: Amit Prakash Shukla --- app/test/meson.build | 1 + app/test/test_event_dma_adapter.c | 814 ++++++++++++++++++++++++++++++ 2 files changed, 815 insertions(+) create mode 100644 app/test/test_event_dma_adapter.c diff --git a/app/test/meson.build b/app/test/meson.build index 05bae9216d..eccd3b72d8 100644 --- a/app/test/meson.build +++ b/app/test/meson.build @@ -66,6 +66,7 @@ source_file_deps = { 'test_errno.c': [], 'test_ethdev_link.c': ['ethdev'], 'test_event_crypto_adapter.c': ['cryptodev', 'eventdev', 'bus_vdev'], + 'test_event_dma_adapter.c': ['dmadev', 'eventdev'], 'test_event_eth_rx_adapter.c': ['ethdev', 'eventdev', 'bus_vdev'], 'test_event_eth_tx_adapter.c': ['bus_vdev', 'ethdev', 'net_ring', 'eventdev'], 'test_event_ring.c': ['eventdev'], diff --git a/app/test/test_event_dma_adapter.c b/app/test/test_event_dma_adapter.c new file mode 100644 index 0000000000..5b8649fe14 --- /dev/null +++ b/app/test/test_event_dma_adapter.c @@ -0,0 +1,814 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2023 Marvell. + */ + +#include "test.h" +#include +#include +#include +#include +#include +#include + +#ifdef RTE_EXEC_ENV_WINDOWS +static int +test_event_dma_adapter(void) +{ + printf("event_dma_adapter not supported on Windows, skipping test\n"); + return TEST_SKIPPED; +} + +#else + +#include +#include +#include +#include +#include + +#define NUM_MBUFS (8191) +#define MBUF_CACHE_SIZE (256) +#define TEST_APP_PORT_ID 0 +#define TEST_APP_EV_QUEUE_ID 0 +#define TEST_APP_EV_PRIORITY 0 +#define TEST_APP_EV_FLOWID 0xAABB +#define TEST_DMA_EV_QUEUE_ID 1 +#define TEST_ADAPTER_ID 0 +#define TEST_DMA_DEV_ID 0 +#define TEST_DMA_VCHAN_ID 0 +#define PACKET_LENGTH 64 +#define NB_TEST_PORTS 1 +#define NB_TEST_QUEUES 2 +#define NUM_CORES 1 +#define DMA_OP_POOL_SIZE 64 +#define TEST_MAX_OP 64 +#define TEST_RINGSIZE 512 + +#define MBUF_SIZE (RTE_PKTMBUF_HEADROOM + PACKET_LENGTH) + +/* Handle log statements in same manner as test macros */ +#define LOG_DBG(...) 
RTE_LOG(DEBUG, EAL, __VA_ARGS__) + +struct event_dma_adapter_test_params { + struct rte_mempool *src_mbuf_pool; + struct rte_mempool *dst_mbuf_pool; + struct rte_mempool *op_mpool; + uint8_t dma_event_port_id; + uint8_t internal_port_op_fwd; +}; + +struct rte_event dma_response_info = { + .queue_id = TEST_APP_EV_QUEUE_ID, + .sched_type = RTE_SCHED_TYPE_ATOMIC, + .flow_id = TEST_APP_EV_FLOWID, + .priority = TEST_APP_EV_PRIORITY +}; + +struct rte_event_dma_request dma_request_info = { + .dma_dev_id = TEST_DMA_DEV_ID, + .vchan = TEST_DMA_VCHAN_ID +}; + +static struct event_dma_adapter_test_params params; +static uint8_t dma_adapter_setup_done; +static uint32_t slcore_id; +static int evdev; + +static int +send_recv_ev(struct rte_event *ev) +{ + struct rte_event recv_ev[TEST_MAX_OP] = {0}; + struct rte_event_dma_adapter_op *op; + uint16_t nb_enqueued = 0; + int ret, i = 0; + + if (params.internal_port_op_fwd) { + nb_enqueued = rte_event_dma_adapter_enqueue(evdev, TEST_APP_PORT_ID, ev, + TEST_MAX_OP); + } else { + while (nb_enqueued < TEST_MAX_OP) { + nb_enqueued += rte_event_enqueue_burst(evdev, TEST_APP_PORT_ID, + &ev[nb_enqueued], TEST_MAX_OP - + nb_enqueued); + } + } + + TEST_ASSERT_EQUAL(nb_enqueued, TEST_MAX_OP, "Failed to send event to dma adapter\n"); + + while (i < TEST_MAX_OP) { + if (rte_event_dequeue_burst(evdev, TEST_APP_PORT_ID, &recv_ev[i], 1, 0) != 1) + continue; + + op = recv_ev[i].event_ptr; + ret = memcmp((uint8_t *)op->src_seg->addr, (uint8_t *)op->dst_seg->addr, + op->src_seg->length); + TEST_ASSERT_EQUAL(ret, 0, "Data mismatch for dma adapter\n"); + i++; + } + + return TEST_SUCCESS; +} + +static int +test_dma_adapter_stats(void) +{ + struct rte_event_dma_adapter_stats stats; + + rte_event_dma_adapter_stats_get(TEST_ADAPTER_ID, &stats); + printf(" +------------------------------------------------------+\n"); + printf(" + DMA adapter stats for instance %u:\n", TEST_ADAPTER_ID); + printf(" + Event port poll count %" PRIx64 "\n", + stats.event_poll_count); + printf(" + Event dequeue count %" PRIx64 "\n", + stats.event_deq_count); + printf(" + DMA dev enqueue count %" PRIx64 "\n", + stats.dma_enq_count); + printf(" + DMA dev enqueue failed count %" PRIx64 "\n", + stats.dma_enq_fail_count); + printf(" + DMA dev dequeue count %" PRIx64 "\n", + stats.dma_deq_count); + printf(" + Event enqueue count %" PRIx64 "\n", + stats.event_enq_count); + printf(" + Event enqueue retry count %" PRIx64 "\n", + stats.event_enq_retry_count); + printf(" + Event enqueue fail count %" PRIx64 "\n", + stats.event_enq_fail_count); + printf(" +------------------------------------------------------+\n"); + + rte_event_dma_adapter_stats_reset(TEST_ADAPTER_ID); + return TEST_SUCCESS; +} + +static int +test_dma_adapter_params(void) +{ + struct rte_event_dma_adapter_runtime_params in_params; + struct rte_event_dma_adapter_runtime_params out_params; + struct rte_event event; + uint32_t cap; + int err, rc; + + err = rte_event_dma_adapter_caps_get(evdev, TEST_DMA_DEV_ID, &cap); + TEST_ASSERT_SUCCESS(err, "Failed to get adapter capabilities\n"); + + if (cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND) { + err = rte_event_dma_adapter_vchan_queue_add(TEST_ADAPTER_ID, TEST_DMA_DEV_ID, + TEST_DMA_VCHAN_ID, &event); + } else + err = rte_event_dma_adapter_vchan_queue_add(TEST_ADAPTER_ID, TEST_DMA_DEV_ID, + TEST_DMA_VCHAN_ID, NULL); + + TEST_ASSERT_SUCCESS(err, "Failed to add queue pair\n"); + + err = rte_event_dma_adapter_runtime_params_init(&in_params); + TEST_ASSERT(err == 0, "Expected 0 got %d", err); + err 
= rte_event_dma_adapter_runtime_params_init(&out_params); + TEST_ASSERT(err == 0, "Expected 0 got %d", err); + + /* Case 1: Get the default value of mbufs processed by adapter */ + err = rte_event_dma_adapter_runtime_params_get(TEST_ADAPTER_ID, &out_params); + if (err == -ENOTSUP) { + rc = TEST_SKIPPED; + goto queue_pair_del; + } + TEST_ASSERT(err == 0, "Expected 0 got %d", err); + + /* Case 2: Set max_nb = 32 (=BATCH_SEIZE) */ + in_params.max_nb = 32; + + err = rte_event_dma_adapter_runtime_params_set(TEST_ADAPTER_ID, &in_params); + TEST_ASSERT(err == 0, "Expected 0 got %d", err); + + err = rte_event_dma_adapter_runtime_params_get(TEST_ADAPTER_ID, &out_params); + TEST_ASSERT(err == 0, "Expected 0 got %d", err); + TEST_ASSERT(in_params.max_nb == out_params.max_nb, "Expected %u got %u", + in_params.max_nb, out_params.max_nb); + + /* Case 3: Set max_nb = 192 */ + in_params.max_nb = 192; + + err = rte_event_dma_adapter_runtime_params_set(TEST_ADAPTER_ID, &in_params); + TEST_ASSERT(err == 0, "Expected 0 got %d", err); + + err = rte_event_dma_adapter_runtime_params_get(TEST_ADAPTER_ID, &out_params); + TEST_ASSERT(err == 0, "Expected 0 got %d", err); + TEST_ASSERT(in_params.max_nb == out_params.max_nb, "Expected %u got %u", + in_params.max_nb, out_params.max_nb); + + /* Case 4: Set max_nb = 256 */ + in_params.max_nb = 256; + + err = rte_event_dma_adapter_runtime_params_set(TEST_ADAPTER_ID, &in_params); + TEST_ASSERT(err == 0, "Expected 0 got %d", err); + + err = rte_event_dma_adapter_runtime_params_get(TEST_ADAPTER_ID, &out_params); + TEST_ASSERT(err == 0, "Expected 0 got %d", err); + TEST_ASSERT(in_params.max_nb == out_params.max_nb, "Expected %u got %u", + in_params.max_nb, out_params.max_nb); + + /* Case 5: Set max_nb = 30(src_seg = rte_malloc(NULL, sizeof(struct rte_dma_sge), 0); + op->dst_seg = rte_malloc(NULL, sizeof(struct rte_dma_sge), 0); + + /* Update Op */ + op->src_seg->addr = rte_pktmbuf_iova(src_mbuf[i]); + op->dst_seg->addr = rte_pktmbuf_iova(dst_mbuf[i]); + op->src_seg->length = PACKET_LENGTH; + op->dst_seg->length = PACKET_LENGTH; + op->nb_src = 1; + op->nb_dst = 1; + op->flags = RTE_DMA_OP_FLAG_SUBMIT; + op->op_mp = params.op_mpool; + + memset(&m_data, 0, sizeof(m_data)); + m_data.request_info.dma_dev_id = dma_request_info.dma_dev_id; + m_data.request_info.vchan = dma_request_info.vchan; + m_data.response_info.event = dma_response_info.event; + rte_memcpy((uint8_t *)op + sizeof(struct rte_event_dma_adapter_op), &m_data, + sizeof(union rte_event_dma_metadata)); + + /* Fill in event info and update event_ptr with rte_event_dma_adapter_op */ + memset(&ev[i], 0, sizeof(struct rte_event)); + ev[i].event = 0; + ev[i].event_type = RTE_EVENT_TYPE_DMADEV; + ev[i].queue_id = TEST_DMA_EV_QUEUE_ID; + ev[i].sched_type = RTE_SCHED_TYPE_ATOMIC; + ev[i].flow_id = 0xAABB; + ev[i].event_ptr = op; + } + + ret = send_recv_ev(ev); + TEST_ASSERT_SUCCESS(ret, "Failed to send/receive event to dma adapter\n"); + + test_dma_adapter_stats(); + + for (i = 0; i < TEST_MAX_OP; i++) { + op = ev[i].event_ptr; + ret = memcmp((uint8_t *)op->src_seg->addr, (uint8_t *)op->dst_seg->addr, + op->src_seg->length); + TEST_ASSERT_EQUAL(ret, 0, "Data mismatch for dma adapter\n"); + + rte_free(op->src_seg); + rte_free(op->dst_seg); + rte_mempool_put(op->op_mp, op); + } + + rte_pktmbuf_free_bulk(src_mbuf, TEST_MAX_OP); + rte_pktmbuf_free_bulk(dst_mbuf, TEST_MAX_OP); + + return TEST_SUCCESS; +} + +static int +map_adapter_service_core(void) +{ + uint32_t adapter_service_id; + int ret; + + if 
(rte_event_dma_adapter_service_id_get(TEST_ADAPTER_ID, &adapter_service_id) == 0) { + uint32_t core_list[NUM_CORES]; + + ret = rte_service_lcore_list(core_list, NUM_CORES); + TEST_ASSERT(ret >= 0, "Failed to get service core list!"); + + if (core_list[0] != slcore_id) { + TEST_ASSERT_SUCCESS(rte_service_lcore_add(slcore_id), + "Failed to add service core"); + TEST_ASSERT_SUCCESS(rte_service_lcore_start(slcore_id), + "Failed to start service core"); + } + + TEST_ASSERT_SUCCESS(rte_service_map_lcore_set( + adapter_service_id, slcore_id, 1), + "Failed to map adapter service"); + } + + return TEST_SUCCESS; +} + +static int +test_with_op_forward_mode(void) +{ + uint32_t cap; + int ret; + + ret = rte_event_dma_adapter_caps_get(evdev, TEST_DMA_DEV_ID, &cap); + TEST_ASSERT_SUCCESS(ret, "Failed to get adapter capabilities\n"); + + if (!(cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) && + !(cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_NEW)) + map_adapter_service_core(); + else { + if (!(cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD)) + return TEST_SKIPPED; + } + + TEST_ASSERT_SUCCESS(rte_event_dma_adapter_start(TEST_ADAPTER_ID), + "Failed to start event dma adapter"); + + ret = test_op_forward_mode(); + TEST_ASSERT_SUCCESS(ret, "DMA - FORWARD mode test failed\n"); + return TEST_SUCCESS; +} + +static int +configure_dmadev(void) +{ + const struct rte_dma_conf conf = { .nb_vchans = 1}; + const struct rte_dma_vchan_conf qconf = { + .direction = RTE_DMA_DIR_MEM_TO_MEM, + .nb_desc = TEST_RINGSIZE, + }; + struct rte_dma_info info; + unsigned int elt_size; + int ret; + + ret = rte_dma_count_avail(); + RTE_TEST_ASSERT_FAIL(ret, "No dma devices found!\n"); + + ret = rte_dma_info_get(TEST_DMA_DEV_ID, &info); + TEST_ASSERT_SUCCESS(ret, "Error with rte_dma_info_get()\n"); + + if (info.max_vchans < 1) + RTE_LOG(ERR, USER1, "Error, no channels available on device id %u\n", + TEST_DMA_DEV_ID); + + if (rte_dma_configure(TEST_DMA_DEV_ID, &conf) != 0) + RTE_LOG(ERR, USER1, "Error with rte_dma_configure()\n"); + + if (rte_dma_vchan_setup(TEST_DMA_DEV_ID, TEST_DMA_VCHAN_ID, &qconf) < 0) + RTE_LOG(ERR, USER1, "Error with queue configuration\n"); + + ret = rte_dma_info_get(TEST_DMA_DEV_ID, &info); + if (ret != 0 || info.nb_vchans != 1) + RTE_LOG(ERR, USER1, "Error, no configured queues reported on device id %u\n", + TEST_DMA_DEV_ID); + + params.src_mbuf_pool = rte_pktmbuf_pool_create("DMA_ADAPTER_SRC_MBUFPOOL", NUM_MBUFS, + MBUF_CACHE_SIZE, 0, MBUF_SIZE, + rte_socket_id()); + RTE_TEST_ASSERT_NOT_NULL(params.src_mbuf_pool, "Can't create DMA_SRC_MBUFPOOL\n"); + + params.dst_mbuf_pool = rte_pktmbuf_pool_create("DMA_ADAPTER_DST_MBUFPOOL", NUM_MBUFS, + MBUF_CACHE_SIZE, 0, MBUF_SIZE, + rte_socket_id()); + RTE_TEST_ASSERT_NOT_NULL(params.dst_mbuf_pool, "Can't create DMA_DST_MBUFPOOL\n"); + + elt_size = sizeof(struct rte_event_dma_adapter_op) + sizeof(union rte_event_dma_metadata); + params.op_mpool = rte_mempool_create("EVENT_DMA_OP_POOL", DMA_OP_POOL_SIZE, elt_size, 0, + 0, NULL, NULL, NULL, NULL, rte_socket_id(), 0); + RTE_TEST_ASSERT_NOT_NULL(params.op_mpool, "Can't create DMA_OP_POOL\n"); + + return TEST_SUCCESS; +} + +static inline void +evdev_set_conf_values(struct rte_event_dev_config *dev_conf, struct rte_event_dev_info *info) +{ + memset(dev_conf, 0, sizeof(struct rte_event_dev_config)); + dev_conf->dequeue_timeout_ns = info->min_dequeue_timeout_ns; + dev_conf->nb_event_ports = NB_TEST_PORTS; + dev_conf->nb_event_queues = NB_TEST_QUEUES; + dev_conf->nb_event_queue_flows = info->max_event_queue_flows; + 
dev_conf->nb_event_port_dequeue_depth = + info->max_event_port_dequeue_depth; + dev_conf->nb_event_port_enqueue_depth = + info->max_event_port_enqueue_depth; + dev_conf->nb_events_limit = + info->max_num_events; +} + +static int +configure_eventdev(void) +{ + struct rte_event_queue_conf queue_conf; + struct rte_event_dev_config devconf; + struct rte_event_dev_info info; + uint32_t queue_count; + uint32_t port_count; + uint8_t qid; + int ret; + + if (!rte_event_dev_count()) { + /* If there is no hardware eventdev, or no software vdev was + * specified on the command line, create an instance of + * event_sw. + */ + LOG_DBG("Failed to find a valid event device... " + "testing with event_sw device\n"); + TEST_ASSERT_SUCCESS(rte_vdev_init("event_sw0", NULL), + "Error creating eventdev"); + evdev = rte_event_dev_get_dev_id("event_sw0"); + } + + ret = rte_event_dev_info_get(evdev, &info); + TEST_ASSERT_SUCCESS(ret, "Failed to get event dev info\n"); + + evdev_set_conf_values(&devconf, &info); + + ret = rte_event_dev_configure(evdev, &devconf); + TEST_ASSERT_SUCCESS(ret, "Failed to configure eventdev\n"); + + /* Set up event queue */ + ret = rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_QUEUE_COUNT, &queue_count); + TEST_ASSERT_SUCCESS(ret, "Queue count get failed\n"); + TEST_ASSERT_EQUAL(queue_count, 2, "Unexpected queue count\n"); + + qid = TEST_APP_EV_QUEUE_ID; + ret = rte_event_queue_setup(evdev, qid, NULL); + TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%d\n", qid); + + queue_conf.nb_atomic_flows = info.max_event_queue_flows; + queue_conf.nb_atomic_order_sequences = 32; + queue_conf.schedule_type = RTE_SCHED_TYPE_ATOMIC; + queue_conf.priority = RTE_EVENT_DEV_PRIORITY_HIGHEST; + queue_conf.event_queue_cfg = RTE_EVENT_QUEUE_CFG_SINGLE_LINK; + + qid = TEST_DMA_EV_QUEUE_ID; + ret = rte_event_queue_setup(evdev, qid, &queue_conf); + TEST_ASSERT_SUCCESS(ret, "Failed to setup queue=%u\n", qid); + + /* Set up event port */ + ret = rte_event_dev_attr_get(evdev, RTE_EVENT_DEV_ATTR_PORT_COUNT, + &port_count); + TEST_ASSERT_SUCCESS(ret, "Port count get failed\n"); + TEST_ASSERT_EQUAL(port_count, 1, "Unexpected port count\n"); + + ret = rte_event_port_setup(evdev, TEST_APP_PORT_ID, NULL); + TEST_ASSERT_SUCCESS(ret, "Failed to setup port=%d\n", + TEST_APP_PORT_ID); + + qid = TEST_APP_EV_QUEUE_ID; + ret = rte_event_port_link(evdev, TEST_APP_PORT_ID, &qid, NULL, 1); + TEST_ASSERT(ret >= 0, "Failed to link queue port=%d\n", + TEST_APP_PORT_ID); + + return TEST_SUCCESS; +} + +static void +test_dma_adapter_free(void) +{ + rte_event_dma_adapter_free(TEST_ADAPTER_ID); +} + +static int +test_dma_adapter_create(void) +{ + struct rte_event_dev_info evdev_info = {0}; + struct rte_event_port_conf conf = {0}; + int ret; + + ret = rte_event_dev_info_get(evdev, &evdev_info); + TEST_ASSERT_SUCCESS(ret, "Failed to get event device info\n"); + + conf.new_event_threshold = evdev_info.max_num_events; + conf.dequeue_depth = evdev_info.max_event_port_dequeue_depth; + conf.enqueue_depth = evdev_info.max_event_port_enqueue_depth; + + /* Create adapter with default port creation callback */ + ret = rte_event_dma_adapter_create(TEST_ADAPTER_ID, evdev, &conf, 0); + TEST_ASSERT_SUCCESS(ret, "Failed to create event dma adapter\n"); + + return TEST_SUCCESS; +} + +static int +test_dma_adapter_qp_add_del(void) +{ + struct rte_event event; + uint32_t cap; + int ret; + + ret = rte_event_dma_adapter_caps_get(evdev, TEST_DMA_DEV_ID, &cap); +
TEST_ASSERT_SUCCESS(ret, "Failed to get adapter capabilities\n"); + + if (cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND) { + ret = rte_event_dma_adapter_vchan_queue_add(TEST_ADAPTER_ID, TEST_DMA_DEV_ID, + TEST_DMA_VCHAN_ID, &event); + } else + ret = rte_event_dma_adapter_vchan_queue_add(TEST_ADAPTER_ID, TEST_DMA_DEV_ID, + TEST_DMA_VCHAN_ID, NULL); + + TEST_ASSERT_SUCCESS(ret, "Failed to add queue pair\n"); + + ret = rte_event_dma_adapter_vchan_queue_del(TEST_ADAPTER_ID, TEST_DMA_DEV_ID, + TEST_DMA_VCHAN_ID); + TEST_ASSERT_SUCCESS(ret, "Failed to delete queue pair\n"); + + return TEST_SUCCESS; +} + +static int +configure_event_dma_adapter(enum rte_event_dma_adapter_mode mode) +{ + struct rte_event_dev_info evdev_info = {0}; + struct rte_event_port_conf conf = {0}; + struct rte_event event; + uint32_t cap; + int ret; + + ret = rte_event_dma_adapter_caps_get(evdev, TEST_DMA_DEV_ID, &cap); + TEST_ASSERT_SUCCESS(ret, "Failed to get adapter capabilities\n"); + + /* Skip mode and capability mismatch check for SW eventdev */ + if (!(cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_NEW) && + !(cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) && + !(cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND)) + goto adapter_create; + + if (mode == RTE_EVENT_DMA_ADAPTER_OP_FORWARD) { + if (cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) + params.internal_port_op_fwd = 1; + else + return -ENOTSUP; + } + +adapter_create: + ret = rte_event_dev_info_get(evdev, &evdev_info); + TEST_ASSERT_SUCCESS(ret, "Failed to get event device info\n"); + + conf.new_event_threshold = evdev_info.max_num_events; + conf.dequeue_depth = evdev_info.max_event_port_dequeue_depth; + conf.enqueue_depth = evdev_info.max_event_port_enqueue_depth; + + /* Create adapter with default port creation callback */ + ret = rte_event_dma_adapter_create(TEST_ADAPTER_ID, evdev, &conf, mode); + TEST_ASSERT_SUCCESS(ret, "Failed to create event dma adapter\n"); + + if (cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND) { + ret = rte_event_dma_adapter_vchan_queue_add(TEST_ADAPTER_ID, TEST_DMA_DEV_ID, + TEST_DMA_VCHAN_ID, &event); + } else + ret = rte_event_dma_adapter_vchan_queue_add(TEST_ADAPTER_ID, TEST_DMA_DEV_ID, + TEST_DMA_VCHAN_ID, NULL); + + TEST_ASSERT_SUCCESS(ret, "Failed to add queue pair\n"); + + if (!params.internal_port_op_fwd) { + ret = rte_event_dma_adapter_event_port_get(TEST_ADAPTER_ID, + &params.dma_event_port_id); + TEST_ASSERT_SUCCESS(ret, "Failed to get event port\n"); + } + + return TEST_SUCCESS; +} + +static void +test_dma_adapter_stop(void) +{ + uint32_t evdev_service_id, adapter_service_id; + + /* retrieve service ids & stop services */ + if (rte_event_dma_adapter_service_id_get(TEST_ADAPTER_ID, + &adapter_service_id) == 0) { + rte_service_runstate_set(adapter_service_id, 0); + rte_service_lcore_stop(slcore_id); + rte_service_lcore_del(slcore_id); + rte_event_dma_adapter_stop(TEST_ADAPTER_ID); + } + + if (rte_event_dev_service_id_get(evdev, &evdev_service_id) == 0) { + rte_service_runstate_set(evdev_service_id, 0); + rte_service_lcore_stop(slcore_id); + rte_service_lcore_del(slcore_id); + rte_dma_stop(TEST_DMA_DEV_ID); + rte_event_dev_stop(evdev); + } else { + rte_dma_stop(TEST_DMA_DEV_ID); + rte_event_dev_stop(evdev); + } +} + +static int +test_dma_adapter_conf(enum rte_event_dma_adapter_mode mode) +{ + uint32_t evdev_service_id; + uint8_t qid; + int ret; + + if (!dma_adapter_setup_done) { + ret = configure_event_dma_adapter(mode); + if (ret) + return ret; + if
(!params.internal_port_op_fwd) { + qid = TEST_DMA_EV_QUEUE_ID; + ret = rte_event_port_link(evdev, + params.dma_event_port_id, &qid, NULL, 1); + TEST_ASSERT(ret >= 0, "Failed to link queue %d " + "port=%u\n", qid, + params.dma_event_port_id); + } + dma_adapter_setup_done = 1; + } + + /* retrieve service ids */ + if (rte_event_dev_service_id_get(evdev, &evdev_service_id) == 0) { + /* add a service core and start it */ + TEST_ASSERT_SUCCESS(rte_service_lcore_add(slcore_id), + "Failed to add service core"); + TEST_ASSERT_SUCCESS(rte_service_lcore_start(slcore_id), + "Failed to start service core"); + + /* map services to it */ + TEST_ASSERT_SUCCESS(rte_service_map_lcore_set(evdev_service_id, + slcore_id, 1), "Failed to map evdev service"); + + /* set services to running */ + TEST_ASSERT_SUCCESS(rte_service_runstate_set(evdev_service_id, + 1), "Failed to start evdev service"); + } + + /* start the eventdev */ + TEST_ASSERT_SUCCESS(rte_event_dev_start(evdev), + "Failed to start event device"); + + /* start the dma dev */ + TEST_ASSERT_SUCCESS(rte_dma_start(TEST_DMA_DEV_ID), + "Failed to start dma device"); + + return TEST_SUCCESS; +} + +static int +test_dma_adapter_conf_op_forward_mode(void) +{ + enum rte_event_dma_adapter_mode mode; + + mode = RTE_EVENT_DMA_ADAPTER_OP_FORWARD; + + return test_dma_adapter_conf(mode); +} + +static int +testsuite_setup(void) +{ + int ret; + + slcore_id = rte_get_next_lcore(-1, 1, 0); + TEST_ASSERT_NOT_EQUAL(slcore_id, RTE_MAX_LCORE, "At least 2 lcores " + "are required to run this autotest\n"); + + /* Setup and start event device. */ + ret = configure_eventdev(); + TEST_ASSERT_SUCCESS(ret, "Failed to setup eventdev\n"); + + /* Setup and start dma device. */ + ret = configure_dmadev(); + TEST_ASSERT_SUCCESS(ret, "dmadev initialization failed\n"); + + return TEST_SUCCESS; +} + +static void +dma_adapter_teardown(void) +{ + int ret; + + ret = rte_event_dma_adapter_stop(TEST_ADAPTER_ID); + if (ret < 0) + RTE_LOG(ERR, USER1, "Failed to stop adapter!"); + + ret = rte_event_dma_adapter_vchan_queue_del(TEST_ADAPTER_ID, TEST_DMA_DEV_ID, + TEST_DMA_VCHAN_ID); + if (ret < 0) + RTE_LOG(ERR, USER1, "Failed to delete queue pair!"); + + ret = rte_event_dma_adapter_free(TEST_ADAPTER_ID); + if (ret < 0) + RTE_LOG(ERR, USER1, "Failed to free adapter!"); + + dma_adapter_setup_done = 0; +} + +static void +dma_teardown(void) +{ + /* Free mbuf mempool */ + if (params.src_mbuf_pool != NULL) { + RTE_LOG(DEBUG, USER1, "DMA_ADAPTER_SRC_MBUFPOOL count %u\n", + rte_mempool_avail_count(params.src_mbuf_pool)); + rte_mempool_free(params.src_mbuf_pool); + params.src_mbuf_pool = NULL; + } + + if (params.dst_mbuf_pool != NULL) { + RTE_LOG(DEBUG, USER1, "DMA_ADAPTER_DST_MBUFPOOL count %u\n", + rte_mempool_avail_count(params.dst_mbuf_pool)); + rte_mempool_free(params.dst_mbuf_pool); + params.dst_mbuf_pool = NULL; + } + + /* Free ops mempool */ + if (params.op_mpool != NULL) { + RTE_LOG(DEBUG, USER1, "EVENT_DMA_OP_POOL count %u\n", + rte_mempool_avail_count(params.op_mpool)); + rte_mempool_free(params.op_mpool); + params.op_mpool = NULL; + } +} + +static void +eventdev_teardown(void) +{ + rte_event_dev_stop(evdev); +} + +static void +testsuite_teardown(void) +{ + dma_adapter_teardown(); + dma_teardown(); + eventdev_teardown(); +} + +static struct unit_test_suite functional_testsuite = { + .suite_name = "Event dma adapter test suite", + .setup = testsuite_setup, + .teardown = testsuite_teardown, + .unit_test_cases = { + + TEST_CASE_ST(NULL, test_dma_adapter_free, test_dma_adapter_create), + + 
TEST_CASE_ST(test_dma_adapter_create, test_dma_adapter_free, + test_dma_adapter_qp_add_del), + + TEST_CASE_ST(test_dma_adapter_create, test_dma_adapter_free, + test_dma_adapter_stats), + + TEST_CASE_ST(test_dma_adapter_create, test_dma_adapter_free, + test_dma_adapter_params), + + TEST_CASE_ST(test_dma_adapter_conf_op_forward_mode, test_dma_adapter_stop, + test_with_op_forward_mode), + + TEST_CASES_END() /**< NULL terminate unit test array */ + } +}; + +static int +test_event_dma_adapter(void) +{ + return unit_test_suite_runner(&functional_testsuite); +} + +#endif /* !RTE_EXEC_ENV_WINDOWS */ + +REGISTER_TEST_COMMAND(event_dma_adapter_autotest, test_event_dma_adapter); From patchwork Tue Sep 19 13:42:20 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 131630 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D757E42604; Tue, 19 Sep 2023 15:43:17 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 1D2F140DF5; Tue, 19 Sep 2023 15:43:09 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 7D0BE40E09 for ; Tue, 19 Sep 2023 15:43:07 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 38JDZvGl011923; Tue, 19 Sep 2023 06:43:06 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=oOdnfywSSqWkdyWkDVlsYIVFNFaueqbNJmlpPKqJeZg=; b=QFpATvB/HoheuK2hwt3Aav3PyY9HWemwCI0Hjmckc63rqp88b+GvSqY7k4x4dwcx8bQm rNYzkvgQT7jLNBOzSn50f116x6g7tEeYqjvpvLg3bRSyrGxBViPUiN/JgGZ3oVwf5e5h pWihwgUoozBI0KFjY/lAy631tvsaGBN6MDdqKRdWXSkIbBDVyGSNBVMJp3k1kHR20ipR Suc1m9DNdsarR+DjLf6bDgAw8oMJmx1wokNEVD2CMm2/Ksx3rzuUP05BfOBiATKgcCCE bK7RPVdU6Xi6FnUcjhnX+6ojZw/wMC7CRJ5WhkCh1byZrkzOoP4uPO3wgYEvN+KXUSQz nw== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3t7cnq00wc-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Tue, 19 Sep 2023 06:43:06 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Tue, 19 Sep 2023 06:43:04 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Tue, 19 Sep 2023 06:43:04 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id 0CF893F709C; Tue, 19 Sep 2023 06:42:59 -0700 (PDT) From: Amit Prakash Shukla To: Nithin Dabilpuram , Kiran Kumar K , Sunil Kumar Kori , Satha Rao , Vamsi Attunuru CC: , , , , , , , , , , , , Amit Prakash Shukla Subject: [PATCH v1 5/7] common/cnxk: dma result to an offset of the event Date: Tue, 19 Sep 2023 19:12:20 +0530 Message-ID: <20230919134222.2500033-5-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230919134222.2500033-1-amitprakashs@marvell.com> References: <20230919134222.2500033-1-amitprakashs@marvell.com> MIME-Version: 1.0 
X-Proofpoint-ORIG-GUID: 0GGzNQyGvA4xtvp_MuG4o58UUS_B-nu4 X-Proofpoint-GUID: 0GGzNQyGvA4xtvp_MuG4o58UUS_B-nu4 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.980,Hydra:6.0.601,FMLib:17.11.176.26 definitions=2023-09-19_06,2023-09-19_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Adds support to configure writing result to offset of the DMA response event. Signed-off-by: Amit Prakash Shukla --- drivers/common/cnxk/roc_dpi.c | 5 ++++- drivers/common/cnxk/roc_dpi.h | 2 +- drivers/common/cnxk/roc_dpi_priv.h | 4 ++++ drivers/common/cnxk/roc_idev.c | 20 ++++++++++++++++++++ drivers/common/cnxk/roc_idev_priv.h | 3 +++ drivers/dma/cnxk/cnxk_dmadev.c | 3 ++- drivers/dma/cnxk/cnxk_dmadev.h | 1 + 7 files changed, 35 insertions(+), 3 deletions(-) diff --git a/drivers/common/cnxk/roc_dpi.c b/drivers/common/cnxk/roc_dpi.c index 2e086b3698..7bf6ac2aaf 100644 --- a/drivers/common/cnxk/roc_dpi.c +++ b/drivers/common/cnxk/roc_dpi.c @@ -84,6 +84,8 @@ roc_dpi_configure(struct roc_dpi *roc_dpi, uint32_t chunk_sz, uint64_t aura, uin mbox_msg.s.aura = aura; mbox_msg.s.sso_pf_func = idev_sso_pffunc_get(); mbox_msg.s.npa_pf_func = idev_npa_pffunc_get(); + mbox_msg.s.wqecs = 1; + mbox_msg.s.wqecsoff = idev_dma_cs_offset_get(); rc = send_msg_to_pf(&pci_dev->addr, (const char *)&mbox_msg, sizeof(dpi_mbox_msg_t)); @@ -95,7 +97,7 @@ roc_dpi_configure(struct roc_dpi *roc_dpi, uint32_t chunk_sz, uint64_t aura, uin } int -roc_dpi_dev_init(struct roc_dpi *roc_dpi) +roc_dpi_dev_init(struct roc_dpi *roc_dpi, uint8_t offset) { struct plt_pci_device *pci_dev = roc_dpi->pci_dev; uint16_t vfid; @@ -104,6 +106,7 @@ roc_dpi_dev_init(struct roc_dpi *roc_dpi) vfid = ((pci_dev->addr.devid & 0x1F) << 3) | (pci_dev->addr.function & 0x7); vfid -= 1; roc_dpi->vfid = vfid; + idev_dma_cs_offset_set(offset); return 0; } diff --git a/drivers/common/cnxk/roc_dpi.h b/drivers/common/cnxk/roc_dpi.h index 4ebde5b8a6..978e2badb2 100644 --- a/drivers/common/cnxk/roc_dpi.h +++ b/drivers/common/cnxk/roc_dpi.h @@ -11,7 +11,7 @@ struct roc_dpi { uint16_t vfid; } __plt_cache_aligned; -int __roc_api roc_dpi_dev_init(struct roc_dpi *roc_dpi); +int __roc_api roc_dpi_dev_init(struct roc_dpi *roc_dpi, uint8_t offset); int __roc_api roc_dpi_dev_fini(struct roc_dpi *roc_dpi); int __roc_api roc_dpi_configure(struct roc_dpi *dpi, uint32_t chunk_sz, uint64_t aura, diff --git a/drivers/common/cnxk/roc_dpi_priv.h b/drivers/common/cnxk/roc_dpi_priv.h index 518a3e7351..52962c8bc0 100644 --- a/drivers/common/cnxk/roc_dpi_priv.h +++ b/drivers/common/cnxk/roc_dpi_priv.h @@ -31,6 +31,10 @@ typedef union dpi_mbox_msg_t { uint64_t sso_pf_func : 16; /* NPA PF function */ uint64_t npa_pf_func : 16; + /* WQE queue DMA completion status enable */ + uint64_t wqecs : 1; + /* WQE queue DMA completion status offset */ + uint64_t wqecsoff : 8; } s; } dpi_mbox_msg_t; diff --git a/drivers/common/cnxk/roc_idev.c b/drivers/common/cnxk/roc_idev.c index e6c6b34d78..7b922c8bae 100644 --- a/drivers/common/cnxk/roc_idev.c +++ b/drivers/common/cnxk/roc_idev.c @@ -301,6 +301,26 @@ idev_sso_set(struct roc_sso *sso) __atomic_store_n(&idev->sso, sso, __ATOMIC_RELEASE); } +void +idev_dma_cs_offset_set(uint8_t offset) +{ + struct idev_cfg *idev = idev_get_cfg(); + + if (idev != NULL) + idev->dma_cs_offset = offset; +} + +uint8_t +idev_dma_cs_offset_get(void) +{ + struct idev_cfg *idev = 
idev_get_cfg(); + + if (idev != NULL) + return idev->dma_cs_offset; + + return 0; +} + uint64_t roc_idev_nix_inl_meta_aura_get(void) { diff --git a/drivers/common/cnxk/roc_idev_priv.h b/drivers/common/cnxk/roc_idev_priv.h index 80f8465e1c..cf63c58d92 100644 --- a/drivers/common/cnxk/roc_idev_priv.h +++ b/drivers/common/cnxk/roc_idev_priv.h @@ -37,6 +37,7 @@ struct idev_cfg { struct roc_nix_list roc_nix_list; plt_spinlock_t nix_inl_dev_lock; plt_spinlock_t npa_dev_lock; + uint8_t dma_cs_offset; }; /* Generic */ @@ -55,6 +56,8 @@ void idev_sso_pffunc_set(uint16_t sso_pf_func); uint16_t idev_sso_pffunc_get(void); struct roc_sso *idev_sso_get(void); void idev_sso_set(struct roc_sso *sso); +void idev_dma_cs_offset_set(uint8_t offset); +uint8_t idev_dma_cs_offset_get(void); /* idev lmt */ uint16_t idev_lmt_pffunc_get(void); diff --git a/drivers/dma/cnxk/cnxk_dmadev.c b/drivers/dma/cnxk/cnxk_dmadev.c index 26680edfde..db127e056f 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.c +++ b/drivers/dma/cnxk/cnxk_dmadev.c @@ -511,6 +511,7 @@ static const struct rte_dma_dev_ops cnxk_dmadev_ops = { static int cnxk_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_device *pci_dev) { + struct cnxk_dpi_compl_s *compl = NULL; struct cnxk_dpi_vf_s *dpivf = NULL; char name[RTE_DEV_NAME_MAX_LEN]; struct rte_dma_dev *dmadev; @@ -556,7 +557,7 @@ cnxk_dmadev_probe(struct rte_pci_driver *pci_drv __rte_unused, struct rte_pci_de rdpi = &dpivf->rdpi; rdpi->pci_dev = pci_dev; - rc = roc_dpi_dev_init(rdpi); + rc = roc_dpi_dev_init(rdpi, (uint64_t)&compl->wqecs); if (rc < 0) goto err_out_free; diff --git a/drivers/dma/cnxk/cnxk_dmadev.h b/drivers/dma/cnxk/cnxk_dmadev.h index 350ae73b5c..75059b8843 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.h +++ b/drivers/dma/cnxk/cnxk_dmadev.h @@ -86,6 +86,7 @@ union cnxk_dpi_instr_cmd { struct cnxk_dpi_compl_s { uint64_t cdata; void *cb_data; + uint32_t wqecs; }; struct cnxk_dpi_cdesc_data_s { From patchwork Tue Sep 19 13:42:21 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 131631 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 83F4F42604; Tue, 19 Sep 2023 15:43:27 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B3B0540E2D; Tue, 19 Sep 2023 15:43:16 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 239B040E6E for ; Tue, 19 Sep 2023 15:43:14 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 38JDZvc1011940; Tue, 19 Sep 2023 06:43:14 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=UQ5OLJG+K3OGY1Hx0LmY5Cc2PvqPDowe2DWwBrgXQMs=; b=B5CMC9RX+DcjhOf98gzdS/DB/nlxKP22YdYizKALYCq68yfSBr5TuZjBMa9ryrIRqj0n q9FsX+drjMG9TLdjvpSr9ElvfpzQ1zoMUvaZdnq1EqO+Gq1q5AR9zktBFUWyImAXiVI4 9SjI89+az+L+D5lUy+phI8CL8Qvs1PyGhXQ1naS0YQtQDSR/rDbvdQJ9O6hmzIaGYij+ cmngA4nK1+/0YRLxfi+GPJRtE4FKw8ZfWhkNLmTz/OzHl8PNUfo0tW29d7/Pup6T76lV JmpLCrBDHVBest/YTbwmZLTx8BZirVXRGEU4QP+y3T5+WhLyFzjyf2OwpD+6af1yUXW7 yw== 
Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3t7cnq00wy-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Tue, 19 Sep 2023 06:43:14 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Tue, 19 Sep 2023 06:43:12 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Tue, 19 Sep 2023 06:43:12 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id 27D863F709B; Tue, 19 Sep 2023 06:43:07 -0700 (PDT) From: Amit Prakash Shukla To: Vamsi Attunuru CC: , , , , , , , , , , , , , Amit Prakash Shukla Subject: [PATCH v1 6/7] dma/cnxk: support for DMA event enqueue dequeue Date: Tue, 19 Sep 2023 19:12:21 +0530 Message-ID: <20230919134222.2500033-6-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20230919134222.2500033-1-amitprakashs@marvell.com> References: <20230919134222.2500033-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: CRoc0BnV0j6ASP-iYdV_J5Bn--kP98c4 X-Proofpoint-GUID: CRoc0BnV0j6ASP-iYdV_J5Bn--kP98c4 X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.980,Hydra:6.0.601,FMLib:17.11.176.26 definitions=2023-09-19_06,2023-09-19_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Added cnxk driver support for dma event enqueue and dequeue. Signed-off-by: Amit Prakash Shukla --- drivers/dma/cnxk/cnxk_dma_event_dp.h | 22 +++ drivers/dma/cnxk/cnxk_dmadev.h | 9 +- drivers/dma/cnxk/cnxk_dmadev_fp.c | 209 +++++++++++++++++++++++++++ drivers/dma/cnxk/meson.build | 6 +- drivers/dma/cnxk/version.map | 9 ++ 5 files changed, 253 insertions(+), 2 deletions(-) create mode 100644 drivers/dma/cnxk/cnxk_dma_event_dp.h create mode 100644 drivers/dma/cnxk/version.map diff --git a/drivers/dma/cnxk/cnxk_dma_event_dp.h b/drivers/dma/cnxk/cnxk_dma_event_dp.h new file mode 100644 index 0000000000..bf9b01f8f1 --- /dev/null +++ b/drivers/dma/cnxk/cnxk_dma_event_dp.h @@ -0,0 +1,22 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2023 Marvell. + */ + +#ifndef _CNXK_DMA_EVENT_DP_H_ +#define _CNXK_DMA_EVENT_DP_H_ + +#include + +#include +#include + +__rte_internal +uint16_t cn10k_dma_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events); + +__rte_internal +uint16_t cn9k_dma_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events); + +__rte_internal +uintptr_t cnxk_dma_adapter_dequeue(uintptr_t get_work1); + +#endif /* _CNXK_DMA_EVENT_DP_H_ */ diff --git a/drivers/dma/cnxk/cnxk_dmadev.h b/drivers/dma/cnxk/cnxk_dmadev.h index 75059b8843..9cba388d02 100644 --- a/drivers/dma/cnxk/cnxk_dmadev.h +++ b/drivers/dma/cnxk/cnxk_dmadev.h @@ -40,6 +40,11 @@ */ #define CNXK_DPI_REQ_CDATA 0xFF +/* Set Completion data to 0xDEADBEEF when request submitted for SSO. + * This helps differentiate if the dequeue is called after cnxk enueue. 
+ */ +#define CNXK_DPI_REQ_SSO_CDATA 0xDEADBEEF + union cnxk_dpi_instr_cmd { uint64_t u; struct cn9k_dpi_instr_cmd { @@ -85,7 +90,9 @@ union cnxk_dpi_instr_cmd { struct cnxk_dpi_compl_s { uint64_t cdata; - void *cb_data; + void *op; + uint16_t dev_id; + uint16_t vchan; uint32_t wqecs; }; diff --git a/drivers/dma/cnxk/cnxk_dmadev_fp.c b/drivers/dma/cnxk/cnxk_dmadev_fp.c index 16d7b5426b..c7cd036a5b 100644 --- a/drivers/dma/cnxk/cnxk_dmadev_fp.c +++ b/drivers/dma/cnxk/cnxk_dmadev_fp.c @@ -5,6 +5,8 @@ #include #include "cnxk_dmadev.h" +#include "cnxk_dma_event_dp.h" +#include static __plt_always_inline void __dpi_cpy_scalar(uint64_t *src, uint64_t *dst, uint8_t n) @@ -434,3 +436,210 @@ cn10k_dmadev_copy_sg(void *dev_private, uint16_t vchan, const struct rte_dma_sge return dpi_conf->desc_idx++; } + +uint16_t +cn10k_dma_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events) +{ + union rte_event_dma_metadata *dma_mdata; + struct rte_event_dma_request *req_info; + const struct rte_dma_sge *src, *dst; + struct rte_event_dma_adapter_op *op; + struct cnxk_dpi_compl_s *comp_ptr; + struct cnxk_dpi_conf *dpi_conf; + struct cnxk_dpi_vf_s *dpivf; + struct rte_event *rsp_info; + uint16_t nb_src, nb_dst; + struct rte_dma_dev *dev; + uint64_t hdr[4]; + uint16_t count; + int rc; + + PLT_SET_USED(ws); + + for (count = 0; count < nb_events; count++) { + op = ev[count].event_ptr; + dma_mdata = (union rte_event_dma_metadata *)((uint8_t *)op + + sizeof(struct rte_event_dma_adapter_op)); + rsp_info = &dma_mdata->response_info; + req_info = &dma_mdata->request_info; + dev = rte_dma_pmd_dev_get(req_info->dma_dev_id); + dpivf = dev->data->dev_private; + dpi_conf = &dpivf->conf[req_info->vchan]; + + if (unlikely(((dpi_conf->c_desc.tail + 1) & dpi_conf->c_desc.max_cnt) == + dpi_conf->c_desc.head)) + return count; + + comp_ptr = dpi_conf->c_desc.compl_ptr[dpi_conf->c_desc.tail]; + CNXK_DPI_STRM_INC(dpi_conf->c_desc, tail); + comp_ptr->op = op; + comp_ptr->dev_id = req_info->dma_dev_id; + comp_ptr->vchan = req_info->vchan; + comp_ptr->cdata = CNXK_DPI_REQ_SSO_CDATA; + + nb_src = op->nb_src & CNXK_DPI_MAX_POINTER; + nb_dst = op->nb_dst & CNXK_DPI_MAX_POINTER; + + hdr[0] = dpi_conf->cmd.u | ((uint64_t)DPI_HDR_PT_WQP << 54); + hdr[0] |= (nb_dst << 6) | nb_src; + hdr[1] = ((uint64_t)comp_ptr); + hdr[2] = (RTE_EVENT_TYPE_DMADEV << 28 | (rsp_info->sub_event_type << 20) | + rsp_info->flow_id); + hdr[2] |= ((uint64_t)(rsp_info->sched_type & DPI_HDR_TT_MASK)) << 32; + hdr[2] |= ((uint64_t)(rsp_info->queue_id & DPI_HDR_GRP_MASK)) << 34; + + src = &op->src_seg[0]; + dst = &op->dst_seg[0]; + + rc = __dpi_queue_write_sg(dpivf, hdr, src, dst, nb_src, nb_dst); + if (unlikely(rc)) { + CNXK_DPI_STRM_DEC(dpi_conf->c_desc, tail); + return rc; + } + + if (op->flags & RTE_DMA_OP_FLAG_SUBMIT) { + rte_wmb(); + plt_write64(dpi_conf->pnum_words + CNXK_DPI_CMD_LEN(nb_src, nb_dst), + dpivf->rdpi.rbase + DPI_VDMA_DBELL); + dpi_conf->stats.submitted += dpi_conf->pending + 1; + dpi_conf->pnum_words = 0; + dpi_conf->pending = 0; + } else { + dpi_conf->pnum_words += CNXK_DPI_CMD_LEN(nb_src, nb_dst); + dpi_conf->pending++; + } + } + + return count; +} + +uint16_t +cn9k_dma_adapter_enqueue(void *ws, struct rte_event ev[], uint16_t nb_events) +{ + union rte_event_dma_metadata *dma_mdata; + struct rte_event_dma_request *req_info; + const struct rte_dma_sge *fptr, *lptr; + struct rte_event_dma_adapter_op *op; + struct cnxk_dpi_compl_s *comp_ptr; + struct cnxk_dpi_conf *dpi_conf; + struct cnxk_dpi_vf_s *dpivf; + struct rte_event *rsp_info; + 
uint16_t nb_src, nb_dst; + struct rte_dma_dev *dev; + uint64_t hdr[4]; + uint16_t count; + int rc; + + PLT_SET_USED(ws); + + for (count = 0; count < nb_events; count++) { + op = ev[count].event_ptr; + dma_mdata = (union rte_event_dma_metadata *)((uint8_t *)op + + sizeof(struct rte_event_dma_adapter_op)); + rsp_info = &dma_mdata->response_info; + req_info = &dma_mdata->request_info; + dev = rte_dma_pmd_dev_get(req_info->dma_dev_id); + dpivf = dev->data->dev_private; + dpi_conf = &dpivf->conf[req_info->vchan]; + + if (unlikely(((dpi_conf->c_desc.tail + 1) & dpi_conf->c_desc.max_cnt) == + dpi_conf->c_desc.head)) + return count; + + comp_ptr = dpi_conf->c_desc.compl_ptr[dpi_conf->c_desc.tail]; + CNXK_DPI_STRM_INC(dpi_conf->c_desc, tail); + comp_ptr->op = op; + comp_ptr->dev_id = req_info->dma_dev_id; + comp_ptr->vchan = req_info->vchan; + comp_ptr->cdata = CNXK_DPI_REQ_SSO_CDATA; + + hdr[1] = dpi_conf->cmd.u | ((uint64_t)DPI_HDR_PT_WQP << 36); + hdr[2] = (uint64_t)comp_ptr; + + nb_src = op->nb_src & CNXK_DPI_MAX_POINTER; + nb_dst = op->nb_dst & CNXK_DPI_MAX_POINTER; + /* + * For inbound case, src pointers are last pointers. + * For all other cases, src pointers are first pointers. + */ + if (((dpi_conf->cmd.u >> 48) & DPI_HDR_XTYPE_MASK) == DPI_XTYPE_INBOUND) { + fptr = &op->dst_seg[0]; + lptr = &op->src_seg[0]; + RTE_SWAP(nb_src, nb_dst); + } else { + fptr = &op->src_seg[0]; + lptr = &op->dst_seg[0]; + } + + hdr[0] = ((uint64_t)nb_dst << 54) | (uint64_t)nb_src << 48; + hdr[0] |= (RTE_EVENT_TYPE_DMADEV << 28 | (rsp_info->sub_event_type << 20) | + rsp_info->flow_id); + hdr[0] |= ((uint64_t)(rsp_info->sched_type & DPI_HDR_TT_MASK)) << 32; + hdr[0] |= ((uint64_t)(rsp_info->queue_id & DPI_HDR_GRP_MASK)) << 34; + + rc = __dpi_queue_write_sg(dpivf, hdr, fptr, lptr, nb_src, nb_dst); + if (unlikely(rc)) { + CNXK_DPI_STRM_DEC(dpi_conf->c_desc, tail); + return rc; + } + + if (op->flags & RTE_DMA_OP_FLAG_SUBMIT) { + rte_wmb(); + plt_write64(dpi_conf->pnum_words + CNXK_DPI_CMD_LEN(nb_src, nb_dst), + dpivf->rdpi.rbase + DPI_VDMA_DBELL); + dpi_conf->stats.submitted += dpi_conf->pending + 1; + dpi_conf->pnum_words = 0; + dpi_conf->pending = 0; + } else { + dpi_conf->pnum_words += CNXK_DPI_CMD_LEN(nb_src, nb_dst); + dpi_conf->pending++; + } + } + + return count; +} + +uintptr_t +cnxk_dma_adapter_dequeue(uintptr_t get_work1) +{ + struct rte_event_dma_adapter_op *op; + struct cnxk_dpi_compl_s *comp_ptr; + struct cnxk_dpi_conf *dpi_conf; + struct cnxk_dpi_vf_s *dpivf; + struct rte_dma_dev *dev; + uint8_t *wqecs; + + comp_ptr = (struct cnxk_dpi_compl_s *)get_work1; + + /* Dequeue can be called without calling cnx_enqueue in case of + * dma_adapter. When its called from adapter, dma op will not be + * embedded in completion pointer. In those cases return op. + */ + if (comp_ptr->cdata != CNXK_DPI_REQ_SSO_CDATA) + return (uintptr_t)comp_ptr; + + dev = rte_dma_pmd_dev_get(comp_ptr->dev_id); + dpivf = dev->data->dev_private; + dpi_conf = &dpivf->conf[comp_ptr->vchan]; + + wqecs = (uint8_t *)&comp_ptr->wqecs; + if (__atomic_load_n(wqecs, __ATOMIC_RELAXED) != 0) + dpi_conf->stats.errors++; + + op = (struct rte_event_dma_adapter_op *)comp_ptr->op; + + /* We are done here. Reset completion buffer.*/ + comp_ptr->wqecs = ~0; + comp_ptr->op = NULL; + comp_ptr->dev_id = ~0; + comp_ptr->vchan = ~0; + comp_ptr->cdata = CNXK_DPI_REQ_CDATA; + + CNXK_DPI_STRM_INC(dpi_conf->c_desc, head); + /* Take into account errors also. This is similar to + * cnxk_dmadev_completed_status(). 
+ */ + dpi_conf->stats.completed++; + + return (uintptr_t)op; +} diff --git a/drivers/dma/cnxk/meson.build b/drivers/dma/cnxk/meson.build index e557349368..9cf5453b0b 100644 --- a/drivers/dma/cnxk/meson.build +++ b/drivers/dma/cnxk/meson.build @@ -8,6 +8,10 @@ foreach flag: error_cflags endif endforeach -deps += ['bus_pci', 'common_cnxk', 'dmadev'] +driver_sdk_headers = files( + 'cnxk_dma_event_dp.h', +) + +deps += ['bus_pci', 'common_cnxk', 'dmadev', 'eventdev'] sources = files('cnxk_dmadev.c', 'cnxk_dmadev_fp.c') require_iova_in_mbuf = false diff --git a/drivers/dma/cnxk/version.map b/drivers/dma/cnxk/version.map new file mode 100644 index 0000000000..6cc1c6aaa5 --- /dev/null +++ b/drivers/dma/cnxk/version.map @@ -0,0 +1,9 @@ +INTERNAL { + global: + + cn10k_dma_adapter_enqueue; + cn9k_dma_adapter_enqueue; + cnxk_dma_adapter_dequeue; + + local: *; +}; From patchwork Tue Sep 19 13:42:22 2023 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Amit Prakash Shukla X-Patchwork-Id: 131632 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 27A0242604; Tue, 19 Sep 2023 15:43:34 +0200 (CEST) Received: from mails.dpdk.org (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id D722E40DFB; Tue, 19 Sep 2023 15:43:23 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 3023B40E72 for ; Tue, 19 Sep 2023 15:43:22 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.17.1.19/8.17.1.19) with ESMTP id 38JDZvGr011923; Tue, 19 Sep 2023 06:43:21 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=ffp43xLkvTb61QSxx5g54sdg14z9oAKhL4q6AWdH53I=; b=hyZxa8BOK2A/iF1La/cozRBecHrprd1U5ev4WcPRt+lO/uX4EUThJU5gtnUHLZbZa3Jc fcjWsCPGeonFgpnvuOsJPgtfE9t10TAQZlROmBfheLS9l6oV8yT5CMq2OkOE98jjim+w ztrCmstlr8ACAbQRe89+X6ayr4nUZaKYDKmAfniR2ws/D95N5g82JBi8cV/6s4VX7nUi 1NPDHEiyXKQ4KsOyO3SIaN3muiHgKqoyiGPltryDxmnU8oBmVnP2tXHLzB+FMXLF54b4 KY+nEfFQizZV5BWaiT5XuystIPiQANDgUAB+GaLhdi+YZ9eGP4pSnDBbyfqLwWJ9hlmI tg== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0a-0016f401.pphosted.com (PPS) with ESMTPS id 3t7cnq00xb-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Tue, 19 Sep 2023 06:43:21 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.48; Tue, 19 Sep 2023 06:43:19 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.48 via Frontend Transport; Tue, 19 Sep 2023 06:43:19 -0700 Received: from localhost.localdomain (unknown [10.28.36.157]) by maili.marvell.com (Postfix) with ESMTP id 8E9793F7099; Tue, 19 Sep 2023 06:43:14 -0700 (PDT) From: Amit Prakash Shukla To: Pavan Nikhilesh , Shijith Thotton CC: , , , , , , , , , , , , , , Amit Prakash Shukla Subject: [PATCH v1 7/7] event/cnxk: support DMA event functions Date: Tue, 19 Sep 2023 19:12:22 +0530 Message-ID: <20230919134222.2500033-7-amitprakashs@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: 
<20230919134222.2500033-1-amitprakashs@marvell.com> References: <20230919134222.2500033-1-amitprakashs@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: g1NIZsjgddGI2G7GxPgM_HtJkxpvG-vX X-Proofpoint-GUID: g1NIZsjgddGI2G7GxPgM_HtJkxpvG-vX X-Proofpoint-Virus-Version: vendor=baseguard engine=ICAP:2.0.267,Aquarius:18.0.980,Hydra:6.0.601,FMLib:17.11.176.26 definitions=2023-09-19_06,2023-09-19_01,2023-05-22_02 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Added support of dma driver callback assignment to eventdev enqueue and dequeue. The change also defines dma adapter capabilities function. Signed-off-by: Amit Prakash Shukla --- drivers/event/cnxk/cn10k_eventdev.c | 20 ++++++++++++++++++++ drivers/event/cnxk/cn10k_worker.h | 3 +++ drivers/event/cnxk/cn9k_eventdev.c | 17 +++++++++++++++++ drivers/event/cnxk/cn9k_worker.h | 3 +++ drivers/event/cnxk/meson.build | 3 +-- 5 files changed, 44 insertions(+), 2 deletions(-) diff --git a/drivers/event/cnxk/cn10k_eventdev.c b/drivers/event/cnxk/cn10k_eventdev.c index c5d4be0474..9bb8b8ff01 100644 --- a/drivers/event/cnxk/cn10k_eventdev.c +++ b/drivers/event/cnxk/cn10k_eventdev.c @@ -8,6 +8,9 @@ #include "cn10k_cryptodev_ops.h" #include "cnxk_eventdev.h" #include "cnxk_worker.h" +#include "cnxk_dma_event_dp.h" + +#include #define CN10K_SET_EVDEV_DEQ_OP(dev, deq_op, deq_ops) \ deq_op = deq_ops[dev->rx_offloads & (NIX_RX_OFFLOAD_MAX - 1)] @@ -469,6 +472,8 @@ cn10k_sso_fp_fns_set(struct rte_eventdev *event_dev) else event_dev->ca_enqueue = cn10k_cpt_sg_ver1_crypto_adapter_enqueue; + event_dev->dma_enqueue = cn10k_dma_adapter_enqueue; + if (dev->tx_offloads & NIX_TX_MULTI_SEG_F) CN10K_SET_EVDEV_ENQ_OP(dev, event_dev->txa_enqueue, sso_hws_tx_adptr_enq_seg); else @@ -978,6 +983,19 @@ cn10k_crypto_adapter_vec_limits(const struct rte_eventdev *event_dev, return 0; } +static int +cn10k_dma_adapter_caps_get(const struct rte_eventdev *event_dev, + const struct rte_dma_dev *dma_dev, uint32_t *caps) +{ + RTE_SET_USED(dma_dev); + + CNXK_VALID_DEV_OR_ERR_RET(event_dev->dev, "event_cn10k", EINVAL); + + *caps = RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD; + + return 0; +} + static struct eventdev_ops cn10k_sso_dev_ops = { .dev_infos_get = cn10k_sso_info_get, .dev_configure = cn10k_sso_dev_configure, @@ -1017,6 +1035,8 @@ static struct eventdev_ops cn10k_sso_dev_ops = { .crypto_adapter_queue_pair_del = cn10k_crypto_adapter_qp_del, .crypto_adapter_vector_limits_get = cn10k_crypto_adapter_vec_limits, + .dma_adapter_caps_get = cn10k_dma_adapter_caps_get, + .xstats_get = cnxk_sso_xstats_get, .xstats_reset = cnxk_sso_xstats_reset, .xstats_get_names = cnxk_sso_xstats_get_names, diff --git a/drivers/event/cnxk/cn10k_worker.h b/drivers/event/cnxk/cn10k_worker.h index e71ab3c523..3d35fcb657 100644 --- a/drivers/event/cnxk/cn10k_worker.h +++ b/drivers/event/cnxk/cn10k_worker.h @@ -7,6 +7,7 @@ #include #include "cn10k_cryptodev_event_dp.h" +#include "cnxk_dma_event_dp.h" #include "cn10k_rx.h" #include "cnxk_worker.h" #include "cn10k_eventdev.h" @@ -226,6 +227,8 @@ cn10k_sso_hws_post_process(struct cn10k_sso_hws *ws, uint64_t *u64, /* Mark vector mempool object as get */ RTE_MEMPOOL_CHECK_COOKIES(rte_mempool_from_obj((void *)u64[1]), (void **)&u64[1], 1, 1); + } else if (CNXK_EVENT_TYPE_FROM_TAG(u64[0]) == RTE_EVENT_TYPE_DMADEV) { + u64[1] = cnxk_dma_adapter_dequeue(u64[1]); } } diff --git 
a/drivers/event/cnxk/cn9k_eventdev.c b/drivers/event/cnxk/cn9k_eventdev.c index f77a9d7085..980932bd12 100644 --- a/drivers/event/cnxk/cn9k_eventdev.c +++ b/drivers/event/cnxk/cn9k_eventdev.c @@ -510,6 +510,8 @@ cn9k_sso_fp_fns_set(struct rte_eventdev *event_dev) sso_hws_dual_tx_adptr_enq); } + event_dev->dma_enqueue = cn9k_dma_adapter_enqueue; + event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue; rte_mb(); #else @@ -991,6 +993,19 @@ cn9k_tim_caps_get(const struct rte_eventdev *evdev, uint64_t flags, cn9k_sso_set_priv_mem); } +static int +cn9k_dma_adapter_caps_get(const struct rte_eventdev *event_dev, + const struct rte_dma_dev *dma_dev, uint32_t *caps) +{ + RTE_SET_USED(dma_dev); + + CNXK_VALID_DEV_OR_ERR_RET(event_dev->dev, "event_cn9k"); + + *caps = RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD; + + return 0; +} + static struct eventdev_ops cn9k_sso_dev_ops = { .dev_infos_get = cn9k_sso_info_get, .dev_configure = cn9k_sso_dev_configure, @@ -1027,6 +1042,8 @@ static struct eventdev_ops cn9k_sso_dev_ops = { .crypto_adapter_queue_pair_add = cn9k_crypto_adapter_qp_add, .crypto_adapter_queue_pair_del = cn9k_crypto_adapter_qp_del, + .dma_adapter_caps_get = cn9k_dma_adapter_caps_get, + .xstats_get = cnxk_sso_xstats_get, .xstats_reset = cnxk_sso_xstats_reset, .xstats_get_names = cnxk_sso_xstats_get_names, diff --git a/drivers/event/cnxk/cn9k_worker.h b/drivers/event/cnxk/cn9k_worker.h index 9ddab095ac..6ac6fffc86 100644 --- a/drivers/event/cnxk/cn9k_worker.h +++ b/drivers/event/cnxk/cn9k_worker.h @@ -11,6 +11,7 @@ #include "cnxk_ethdev.h" #include "cnxk_eventdev.h" #include "cnxk_worker.h" +#include "cnxk_dma_event_dp.h" #include "cn9k_cryptodev_ops.h" #include "cn9k_ethdev.h" @@ -214,6 +215,8 @@ cn9k_sso_hws_post_process(uint64_t *u64, uint64_t mbuf, const uint32_t flags, if (flags & NIX_RX_OFFLOAD_TSTAMP_F) cn9k_sso_process_tstamp(u64[1], mbuf, tstamp[port]); u64[1] = mbuf; + } else if (CNXK_EVENT_TYPE_FROM_TAG(u64[0]) == RTE_EVENT_TYPE_DMADEV) { + u64[1] = cnxk_dma_adapter_dequeue(u64[1]); } } diff --git a/drivers/event/cnxk/meson.build b/drivers/event/cnxk/meson.build index 51f1be8848..649419d5d3 100644 --- a/drivers/event/cnxk/meson.build +++ b/drivers/event/cnxk/meson.build @@ -314,8 +314,7 @@ foreach flag: extra_flags endif endforeach -deps += ['bus_pci', 'common_cnxk', 'net_cnxk', 'crypto_cnxk'] - +deps += ['bus_pci', 'common_cnxk', 'net_cnxk', 'crypto_cnxk', 'dma_cnxk'] require_iova_in_mbuf = false annotate_locks = false
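For readers following the series, below is a rough application-side sketch of how one copy could be handed to the event DMA adapter in OP_FORWARD mode; it simply mirrors what the adapter autotest earlier in this series does. It is a sketch only: the helper name, the device, port and queue ids, and the op mempool (created with room for the metadata union behind each op, as in the test) are illustrative assumptions and are not defined by these patches.

/* Illustrative sketch only (not part of the patch series): submit one copy
 * through the event DMA adapter in OP_FORWARD mode, mirroring the flow of
 * the adapter autotest above. All ids and the op mempool layout are assumed
 * to have been set up by the application beforehand.
 */
#include <string.h>
#include <rte_eventdev.h>
#include <rte_event_dma_adapter.h>
#include <rte_dmadev.h>
#include <rte_mempool.h>

static int
app_submit_dma_copy(uint8_t evdev_id, uint8_t port_id, int16_t dma_dev_id,
		    uint16_t vchan, uint8_t resp_queue_id, uint8_t fwd_queue_id,
		    struct rte_mempool *op_pool,
		    struct rte_dma_sge *src, struct rte_dma_sge *dst)
{
	struct rte_event_dma_adapter_op *op = NULL;
	union rte_event_dma_metadata mdata;
	struct rte_event resp_ev = {
		.queue_id = resp_queue_id,
		.sched_type = RTE_SCHED_TYPE_ATOMIC,
	};
	struct rte_event ev;
	uint32_t cap;

	if (rte_event_dma_adapter_caps_get(evdev_id, dma_dev_id, &cap) != 0)
		return -1;

	if (rte_mempool_get(op_pool, (void **)&op) != 0)
		return -1;

	/* Describe the transfer, as test_op_forward_mode() does. */
	op->src_seg = src;
	op->dst_seg = dst;
	op->nb_src = 1;
	op->nb_dst = 1;
	op->flags = RTE_DMA_OP_FLAG_SUBMIT;
	op->op_mp = op_pool;

	/* Request/response metadata sits right behind the op in the mempool
	 * element, exactly as the test lays it out.
	 */
	memset(&mdata, 0, sizeof(mdata));
	mdata.request_info.dma_dev_id = dma_dev_id;
	mdata.request_info.vchan = vchan;
	mdata.response_info.event = resp_ev.event;
	memcpy((uint8_t *)op + sizeof(struct rte_event_dma_adapter_op), &mdata, sizeof(mdata));

	memset(&ev, 0, sizeof(ev));
	ev.event_type = RTE_EVENT_TYPE_DMADEV;
	ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
	ev.queue_id = fwd_queue_id;
	ev.event_ptr = op;

	/* With the INTERNAL_PORT_OP_FWD capability the op goes straight to
	 * the adapter; otherwise it is enqueued to the event device and the
	 * adapter's service core picks it up from fwd_queue_id.
	 */
	if (cap & RTE_EVENT_DMA_ADAPTER_CAP_INTERNAL_PORT_OP_FWD)
		return rte_event_dma_adapter_enqueue(evdev_id, port_id, &ev, 1) == 1 ? 0 : -1;

	return rte_event_enqueue_burst(evdev_id, port_id, &ev, 1) == 1 ? 0 : -1;
}

On completion, the same op pointer comes back in ev.event_ptr of an RTE_EVENT_TYPE_DMADEV event on the response queue, which is how send_recv_ev() in the test validates the copied data.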