From patchwork Wed Apr 14 18:04:14 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Akhil Goyal X-Patchwork-Id: 91469 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id B33A4A0562; Wed, 14 Apr 2021 20:04:47 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 970D1161C3D; Wed, 14 Apr 2021 20:04:46 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id 3414F161C31 for ; Wed, 14 Apr 2021 20:04:41 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id 13EHk4dw014654; Wed, 14 Apr 2021 11:04:38 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=cogpLVC1zmJBAHyHbmv9HEbrijVMdabfIO16AQwvf6Q=; b=HVaidFQ3f8oDNPVf2IdHfEpKdJQdJItdeD1njnJ7/llcO4JNYOmwI4EjkwU7883op0Dp fNL9+/+Sns3pXcETqhGSB1vZ3dU8jsHSoBUNnTYb3cmeS9sfJhhtMs1PjpCpNAS5h7eE vzfbhjU5NvWvICqz4QjyYdKek6+I3g2jYL3G5dshVkLGT6t+eJwDILxsnO8aAHymFO+V xMKR4S7MbwQWb+UrBcEeyZ4iXLTRYvOsik7uxCmq+YN6fDE1R8Umn0hrjUur9fRDSHwD HGskTN0kYYgt+S09t5WJCeGOngpob0LFH4s+LRcJcfgTeR9i8DQkwQoutCMPKSQUNMi9 Uw== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com with ESMTP id 37wqtm2s8r-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Wed, 14 Apr 2021 11:04:37 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 14 Apr 2021 11:04:35 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Wed, 14 Apr 2021 11:04:36 -0700 Received: from localhost.localdomain (unknown [10.28.36.185]) by maili.marvell.com (Postfix) with ESMTP id E1CD73F7041; Wed, 14 Apr 2021 11:04:30 -0700 (PDT) From: To: , , , , CC: , , , , , , , , , , , , , Akhil Goyal Date: Wed, 14 Apr 2021 23:34:14 +0530 Message-ID: <20210414180417.1263585-2-gakhil@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210414180417.1263585-1-gakhil@marvell.com> References: <20210414122036.1262579-2-gakhil@marvell.com> <20210414180417.1263585-1-gakhil@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: WBoXjdNsHEDRNBD1hY9-0pBvybrn0q_g X-Proofpoint-GUID: WBoXjdNsHEDRNBD1hY9-0pBvybrn0q_g X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.391, 18.0.761 definitions=2021-04-14_10:2021-04-14, 2021-04-14 signatures=0 Subject: [dpdk-dev] [PATCH v10 1/4] devtools: add exception for reserved fields X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Akhil Goyal Certain structures are added with reserved fields to address any future enhancements to retain ABI compatibility. However, ABI script will still report error as it is not aware of reserved fields. Hence, adding a generic exception for reserved fields. 
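For context, a rough sketch of the reserved-field pattern this exception targets is shown below. It is illustrative only: the struct and member names are simplified stand-ins, not the exact DPDK definitions. A later release consumes a reserved slot for a new member, so the size and the offsets of the non-reserved members stay the same and the ABI is preserved, yet libabigail reports the shrunken reserved array as a change unless members matching "reserved" are suppressed.

/* Illustrative sketch of the reserved-field ABI pattern (hypothetical names). */
#include <stdint.h>

/* Release N: space is set aside for future members. */
struct example_dev_v1 {
	uint8_t attached : 1;     /* existing state flag */
	uint64_t reserved_64s[4]; /* reserved for future fields */
	void *reserved_ptrs[4];   /* reserved for future fields */
};

/* Release N+1: one reserved pointer slot is consumed by a new fast-path
 * callback. The structure size and the offsets of the non-reserved members
 * do not change, so the ABI is intact, but an ABI checker still flags
 * reserved_ptrs[4] -> reserved_ptrs[3] unless "reserved" members are
 * suppressed. */
typedef uint16_t (*example_enqueue_t)(void *port, void *ev, uint16_t nb);

struct example_dev_v2 {
	uint8_t attached : 1;
	example_enqueue_t ca_enqueue; /* new member taking a reserved slot */
	uint64_t reserved_64s[4];
	void *reserved_ptrs[3];       /* one slot consumed */
};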
Signed-off-by: Akhil Goyal --- devtools/libabigail.abignore | 6 +++++- 1 file changed, 5 insertions(+), 1 deletion(-) diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore index 6c0b38984..654755314 100644 --- a/devtools/libabigail.abignore +++ b/devtools/libabigail.abignore @@ -19,4 +19,8 @@ ; Ignore fields inserted in cacheline boundary of rte_cryptodev [suppress_type] name = rte_cryptodev - has_data_member_inserted_between = {offset_after(attached), end} \ No newline at end of file + has_data_member_inserted_between = {offset_after(attached), end} + +; Ignore changes in reserved fields +[suppress_variable] + name_regexp = reserved From patchwork Wed Apr 14 18:04:15 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Akhil Goyal X-Patchwork-Id: 91470 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D4535A0562; Wed, 14 Apr 2021 20:04:59 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4CEA6161C4F; Wed, 14 Apr 2021 20:04:53 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id BD8FE161C43 for ; Wed, 14 Apr 2021 20:04:47 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id 13EHkALo014876; Wed, 14 Apr 2021 11:04:45 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=8AL7GO/S0aFoPd65YwLxXgLIkWAoZXNs3mWynKTPaAs=; b=ZuSNz75bJcsyuImXjVZ8jp6ailPs+mT44XoniEPGd0alRFm4OGKIgEhysC8/9Wir3R+z lNboVYGUUy54fj8FjRHRLxw7xQIaR+Dgz1t1nwd44MSUr/44zSW5rK1WXpQDRTdL/Rno rTzmF7q5k4Z5A6Klh/QPGPtoV1XFRhzZvunI/39sV7BWWLXarEUKJYt93mGEIsY9YGzS P/7BbYWLhsVqYoPQOrNWtowfs3+EAqsRuPvo+LjrDZEigEUiaDONfbjfh1xUP1WzQsXT RXcGXuNhviu5lQb8GnzEf3/J8PxBuf03MqGqFUEt96owHUxkwJIXN53zEk5K4UOImzHb wQ== Received: from dc5-exch01.marvell.com ([199.233.59.181]) by mx0b-0016f401.pphosted.com with ESMTP id 37wqtm2s98-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Wed, 14 Apr 2021 11:04:43 -0700 Received: from DC5-EXCH02.marvell.com (10.69.176.39) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 14 Apr 2021 11:04:41 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Wed, 14 Apr 2021 11:04:41 -0700 Received: from localhost.localdomain (unknown [10.28.36.185]) by maili.marvell.com (Postfix) with ESMTP id 80AA13F703F; Wed, 14 Apr 2021 11:04:36 -0700 (PDT) From: To: , , , , CC: , , , , , , , , , , , , , Akhil Goyal Date: Wed, 14 Apr 2021 23:34:15 +0530 Message-ID: <20210414180417.1263585-3-gakhil@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210414180417.1263585-1-gakhil@marvell.com> References: <20210414122036.1262579-2-gakhil@marvell.com> <20210414180417.1263585-1-gakhil@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: Yp0_8iabX-rbAZ_-OpGTlqMw0o4KZUaP X-Proofpoint-GUID: Yp0_8iabX-rbAZ_-OpGTlqMw0o4KZUaP X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.391, 
18.0.761 definitions=2021-04-14_10:2021-04-14, 2021-04-14 signatures=0 Subject: [dpdk-dev] [PATCH v10 2/4] eventdev: introduce crypto adapter enqueue API X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Akhil Goyal In case an event from a previous stage is required to be forwarded to a crypto adapter and PMD supports internal event port in crypto adapter, exposed via capability RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, we do not have a way to check in the API rte_event_enqueue_burst(), whether it is for crypto adapter or for eth tx adapter. Hence we need a new API similar to rte_event_eth_tx_adapter_enqueue(), which can send to a crypto adapter. Note that RTE_EVENT_TYPE_* cannot be used to make that decision, as it is meant for event source and not event destination. And event port designated for crypto adapter is designed to be used for OP_NEW mode. Hence, in order to support an event PMD which has an internal event port in crypto adapter (RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode), exposed via capability RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD, application should use rte_event_crypto_adapter_enqueue() API to enqueue events. When internal port is not available(RTE_EVENT_CRYPTO_ADAPTER_OP_NEW mode), application can use API rte_event_enqueue_burst() as it was doing earlier, i.e. retrieve event port used by crypto adapter and bind its event queues to that port and enqueue events using the API rte_event_enqueue_burst(). Signed-off-by: Akhil Goyal Acked-by: Abhinandan Gujjar --- devtools/libabigail.abignore | 5 ++ .../prog_guide/event_crypto_adapter.rst | 69 ++++++++++++------- doc/guides/rel_notes/deprecation.rst | 4 ++ doc/guides/rel_notes/release_21_05.rst | 6 ++ lib/librte_eventdev/eventdev_trace_points.c | 3 + .../rte_event_crypto_adapter.h | 63 +++++++++++++++++ lib/librte_eventdev/rte_eventdev.c | 10 +++ lib/librte_eventdev/rte_eventdev.h | 9 ++- lib/librte_eventdev/rte_eventdev_trace_fp.h | 10 +++ lib/librte_eventdev/version.map | 1 + 10 files changed, 153 insertions(+), 27 deletions(-) diff --git a/devtools/libabigail.abignore b/devtools/libabigail.abignore index 654755314..31c42cb55 100644 --- a/devtools/libabigail.abignore +++ b/devtools/libabigail.abignore @@ -24,3 +24,8 @@ ; Ignore changes in reserved fields [suppress_variable] name_regexp = reserved + +; Ignore fields inserted in place of reserved fields of rte_eventdev +[suppress_type] + name = rte_eventdev + has_data_member_inserted_between = {offset_after(attached), end} diff --git a/doc/guides/prog_guide/event_crypto_adapter.rst b/doc/guides/prog_guide/event_crypto_adapter.rst index 1e3eb7139..4fb5c688e 100644 --- a/doc/guides/prog_guide/event_crypto_adapter.rst +++ b/doc/guides/prog_guide/event_crypto_adapter.rst @@ -55,21 +55,22 @@ which is needed to enqueue an event after the crypto operation is completed. RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ -In the RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, if HW supports -RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD capability the application -can directly submit the crypto operations to the cryptodev. -If not, application retrieves crypto adapter's event port using -rte_event_crypto_adapter_event_port_get() API. Then, links its event -queue to this port and starts enqueuing crypto operations as events -to the eventdev. 
The adapter then dequeues the events and submits the -crypto operations to the cryptodev. After the crypto completions, the -adapter enqueues events to the event device. -Application can use this mode, when ingress packet ordering is needed. -In this mode, events dequeued from the adapter will be treated as -forwarded events. The application needs to specify the cryptodev ID -and queue pair ID (request information) needed to enqueue a crypto -operation in addition to the event information (response information) -needed to enqueue an event after the crypto operation has completed. +In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event PMD and crypto +PMD supports internal event port +(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), the application should +use ``rte_event_crypto_adapter_enqueue()`` API to enqueue crypto operations as +events to crypto adapter. If not, application retrieves crypto adapter's event +port using ``rte_event_crypto_adapter_event_port_get()`` API, links its event +queue to this port and starts enqueuing crypto operations as events to eventdev +using ``rte_event_enqueue_burst()``. The adapter then dequeues the events and +submits the crypto operations to the cryptodev. After the crypto operation is +complete, the adapter enqueues events to the event device. The application can +use this mode when ingress packet ordering is needed. In this mode, events +dequeued from the adapter will be treated as forwarded events. The application +needs to specify the cryptodev ID and queue pair ID (request information) needed +to enqueue a crypto operation in addition to the event information (response +information) needed to enqueue an event after the crypto operation has +completed. .. _figure_event_crypto_adapter_op_forward: @@ -120,28 +121,44 @@ service function and needs to create an event port for it. The callback is expected to fill the ``struct rte_event_crypto_adapter_conf`` structure passed to it. -For RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD mode, the event port created by adapter -can be retrieved using ``rte_event_crypto_adapter_event_port_get()`` API. -Application can use this event port to link with event queue on which it -enqueues events towards the crypto adapter. +In the ``RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD`` mode, if the event PMD and crypto +PMD supports internal event port +(``RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD``), events with crypto +operations should be enqueued to the crypto adapter using +``rte_event_crypto_adapter_enqueue()`` API. If not, the event port created by +the adapter can be retrieved using ``rte_event_crypto_adapter_event_port_get()`` +API. An application can use this event port to link with an event queue, on +which it enqueues events towards the crypto adapter using +``rte_event_enqueue_burst()``. .. code-block:: c - uint8_t id, evdev, crypto_ev_port_id, app_qid; + uint8_t id, evdev_id, cdev_id, crypto_ev_port_id, app_qid; struct rte_event ev; + uint32_t cap; int ret; - ret = rte_event_crypto_adapter_event_port_get(id, &crypto_ev_port_id); - ret = rte_event_queue_setup(evdev, app_qid, NULL); - ret = rte_event_port_link(evdev, crypto_ev_port_id, &app_qid, NULL, 1); - // Fill in event info and update event_ptr with rte_crypto_op memset(&ev, 0, sizeof(ev)); - ev.queue_id = app_qid; . . 
ev.event_ptr = op; - ret = rte_event_enqueue_burst(evdev, app_ev_port_id, ev, nb_events); + + ret = rte_event_crypto_adapter_caps_get(evdev_id, cdev_id, &cap); + if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) { + ret = rte_event_crypto_adapter_enqueue(evdev_id, app_ev_port_id, + ev, nb_events); + } else { + ret = rte_event_crypto_adapter_event_port_get(id, + &crypto_ev_port_id); + ret = rte_event_queue_setup(evdev_id, app_qid, NULL); + ret = rte_event_port_link(evdev_id, crypto_ev_port_id, &app_qid, + NULL, 1); + ev.queue_id = app_qid; + ret = rte_event_enqueue_burst(evdev_id, app_ev_port_id, ev, + nb_events); + } + Querying adapter capabilities ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ diff --git a/doc/guides/rel_notes/deprecation.rst b/doc/guides/rel_notes/deprecation.rst index 2afc84c39..a973de4a9 100644 --- a/doc/guides/rel_notes/deprecation.rst +++ b/doc/guides/rel_notes/deprecation.rst @@ -127,6 +127,10 @@ Deprecation Notices values to the function ``rte_event_eth_rx_adapter_queue_add`` using the structure ``rte_event_eth_rx_adapter_queue_add``. +* eventdev: The function pointer ``ca_enqueue`` in structure ``rte_eventdev`` + will be moved after ``txa_enqueue`` so that all enqueue/dequeue + function pointers are adjacent to each other. + * sched: To allow more traffic classes, flexible mapping of pipe queues to traffic classes, and subport level configuration of pipes and queues changes will be made to macros, data structures and API functions defined diff --git a/doc/guides/rel_notes/release_21_05.rst b/doc/guides/rel_notes/release_21_05.rst index b21906ccf..773dcbd58 100644 --- a/doc/guides/rel_notes/release_21_05.rst +++ b/doc/guides/rel_notes/release_21_05.rst @@ -182,6 +182,12 @@ New Features * Added command to display Rx queue used descriptor count. ``show port (port_id) rxq (queue_id) desc used count`` +* **Enhanced crypto adapter forward mode.** + + * Added ``rte_event_crypto_adapter_enqueue()`` API to enqueue events to crypto + adapter if forward mode is supported by driver. + * Added support for crypto adapter forward mode in octeontx2 event and crypto + device driver. Removed Items ------------- diff --git a/lib/librte_eventdev/eventdev_trace_points.c b/lib/librte_eventdev/eventdev_trace_points.c index 1a0ccc448..3867ec800 100644 --- a/lib/librte_eventdev/eventdev_trace_points.c +++ b/lib/librte_eventdev/eventdev_trace_points.c @@ -118,3 +118,6 @@ RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_start, RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_stop, lib.eventdev.crypto.stop) + +RTE_TRACE_POINT_REGISTER(rte_eventdev_trace_crypto_adapter_enqueue, + lib.eventdev.crypto.enq) diff --git a/lib/librte_eventdev/rte_event_crypto_adapter.h b/lib/librte_eventdev/rte_event_crypto_adapter.h index 60630ef66..f8c6cca87 100644 --- a/lib/librte_eventdev/rte_event_crypto_adapter.h +++ b/lib/librte_eventdev/rte_event_crypto_adapter.h @@ -171,6 +171,7 @@ extern "C" { #include #include "rte_eventdev.h" +#include "eventdev_pmd.h" /** * Crypto event adapter mode @@ -522,6 +523,68 @@ rte_event_crypto_adapter_service_id_get(uint8_t id, uint32_t *service_id); int rte_event_crypto_adapter_event_port_get(uint8_t id, uint8_t *event_port_id); +/** + * Enqueue a burst of crypto operations as event objects supplied in *rte_event* + * structure on an event crypto adapter designated by its event *dev_id* through + * the event port specified by *port_id*. 
This function is supported if the + * eventdev PMD has the #RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD + * capability flag set. + * + * The *nb_events* parameter is the number of event objects to enqueue which are + * supplied in the *ev* array of *rte_event* structure. + * + * The rte_event_crypto_adapter_enqueue() function returns the number of + * event objects it actually enqueued. A return value equal to *nb_events* + * means that all event objects have been enqueued. + * + * @param dev_id + * The identifier of the device. + * @param port_id + * The identifier of the event port. + * @param ev + * Points to an array of *nb_events* objects of type *rte_event* structure + * which contain the event object enqueue operations to be processed. + * @param nb_events + * The number of event objects to enqueue, typically number of + * rte_event_port_attr_get(...RTE_EVENT_PORT_ATTR_ENQ_DEPTH...) + * available for this port. + * + * @return + * The number of event objects actually enqueued on the event device. The + * return value can be less than the value of the *nb_events* parameter when + * the event devices queue is full or if invalid parameters are specified in a + * *rte_event*. If the return value is less than *nb_events*, the remaining + * events at the end of ev[] are not consumed and the caller has to take care + * of them, and rte_errno is set accordingly. Possible errno values include: + * - EINVAL The port ID is invalid, device ID is invalid, an event's queue + * ID is invalid, or an event's sched type doesn't match the + * capabilities of the destination queue. + * - ENOSPC The event port was backpressured and unable to enqueue + * one or more events. This error code is only applicable to + * closed systems. + */ +static inline uint16_t +rte_event_crypto_adapter_enqueue(uint8_t dev_id, + uint8_t port_id, + struct rte_event ev[], + uint16_t nb_events) +{ + const struct rte_eventdev *dev = &rte_eventdevs[dev_id]; + +#ifdef RTE_LIBRTE_EVENTDEV_DEBUG + RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + + if (port_id >= dev->data->nb_ports) { + rte_errno = EINVAL; + return 0; + } +#endif + rte_eventdev_trace_crypto_adapter_enqueue(dev_id, port_id, ev, + nb_events); + + return dev->ca_enqueue(dev->data->ports[port_id], ev, nb_events); +} + #ifdef __cplusplus } #endif diff --git a/lib/librte_eventdev/rte_eventdev.c b/lib/librte_eventdev/rte_eventdev.c index c9bb5d227..594dd5e75 100644 --- a/lib/librte_eventdev/rte_eventdev.c +++ b/lib/librte_eventdev/rte_eventdev.c @@ -1454,6 +1454,15 @@ rte_event_tx_adapter_enqueue(__rte_unused void *port, return 0; } +static uint16_t +rte_event_crypto_adapter_enqueue(__rte_unused void *port, + __rte_unused struct rte_event ev[], + __rte_unused uint16_t nb_events) +{ + rte_errno = ENOTSUP; + return 0; +} + struct rte_eventdev * rte_event_pmd_allocate(const char *name, int socket_id) { @@ -1476,6 +1485,7 @@ rte_event_pmd_allocate(const char *name, int socket_id) eventdev->txa_enqueue = rte_event_tx_adapter_enqueue; eventdev->txa_enqueue_same_dest = rte_event_tx_adapter_enqueue; + eventdev->ca_enqueue = rte_event_crypto_adapter_enqueue; if (eventdev->data == NULL) { struct rte_eventdev_data *eventdev_data = NULL; diff --git a/lib/librte_eventdev/rte_eventdev.h b/lib/librte_eventdev/rte_eventdev.h index 5f1f544cc..a9c496fb6 100644 --- a/lib/librte_eventdev/rte_eventdev.h +++ b/lib/librte_eventdev/rte_eventdev.h @@ -1352,6 +1352,10 @@ typedef uint16_t (*event_tx_adapter_enqueue_same_dest)(void *port, * burst having same destination Ethernet port & 
Tx queue. */ +typedef uint16_t (*event_crypto_adapter_enqueue)(void *port, + struct rte_event ev[], uint16_t nb_events); +/**< @internal Enqueue burst of events on crypto adapter */ + #define RTE_EVENTDEV_NAME_MAX_LEN (64) /**< @internal Max length of name of event PMD */ @@ -1434,8 +1438,11 @@ struct rte_eventdev { uint8_t attached : 1; /**< Flag indicating the device is attached */ + event_crypto_adapter_enqueue ca_enqueue; + /**< Pointer to PMD crypto adapter enqueue function. */ + uint64_t reserved_64s[4]; /**< Reserved for future fields */ - void *reserved_ptrs[4]; /**< Reserved for future fields */ + void *reserved_ptrs[3]; /**< Reserved for future fields */ } __rte_cache_aligned; extern struct rte_eventdev *rte_eventdevs; diff --git a/lib/librte_eventdev/rte_eventdev_trace_fp.h b/lib/librte_eventdev/rte_eventdev_trace_fp.h index 349129c0f..5639e0b83 100644 --- a/lib/librte_eventdev/rte_eventdev_trace_fp.h +++ b/lib/librte_eventdev/rte_eventdev_trace_fp.h @@ -49,6 +49,16 @@ RTE_TRACE_POINT_FP( rte_trace_point_emit_u8(flags); ) +RTE_TRACE_POINT_FP( + rte_eventdev_trace_crypto_adapter_enqueue, + RTE_TRACE_POINT_ARGS(uint8_t dev_id, uint8_t port_id, void *ev_table, + uint16_t nb_events), + rte_trace_point_emit_u8(dev_id); + rte_trace_point_emit_u8(port_id); + rte_trace_point_emit_ptr(ev_table); + rte_trace_point_emit_u16(nb_events); +) + RTE_TRACE_POINT_FP( rte_eventdev_trace_timer_arm_burst, RTE_TRACE_POINT_ARGS(const void *adapter, void **evtims_table, diff --git a/lib/librte_eventdev/version.map b/lib/librte_eventdev/version.map index 902df0ae3..7e264d3b8 100644 --- a/lib/librte_eventdev/version.map +++ b/lib/librte_eventdev/version.map @@ -143,6 +143,7 @@ EXPERIMENTAL { rte_event_vector_pool_create; rte_event_eth_rx_adapter_vector_limits_get; rte_event_eth_rx_adapter_queue_event_vector_config; + __rte_eventdev_trace_crypto_adapter_enqueue; }; INTERNAL { From patchwork Wed Apr 14 18:04:16 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Akhil Goyal X-Patchwork-Id: 91471 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 5B9D8A0562; Wed, 14 Apr 2021 20:05:08 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 1BE65161C5A; Wed, 14 Apr 2021 20:04:56 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0a-0016f401.pphosted.com [67.231.148.174]) by mails.dpdk.org (Postfix) with ESMTP id 17276161C52 for ; Wed, 14 Apr 2021 20:04:53 +0200 (CEST) Received: from pps.filterd (m0045849.ppops.net [127.0.0.1]) by mx0a-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id 13EHjYGl014727; Wed, 14 Apr 2021 11:04:48 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=8xBvB+jLU5B+pfkg9XevhVOE1NHbEBcjBo2VPWDpcZE=; b=bTcBYIMu9KnqQnu3WvYCsHD5LQNFjJtwq6vf2lcU4shnfUS7x3R3P94SH2cjUsb8dTXt Fv6j+P2HUPt3AYeMJwSuM/0wQUUB0XrttKiF63z2OHNCotYNvvKqF5Iv88nkNzwVYleO ylrMLpM8yAdp/two1DEQVB9GKsKUzwkOPrih0FxJX+LtBfabG+ETtfP7EXk54feZV/LQ upAgZ8HpnLbQZqKJ7PQY8SRL27DhHpM/CGRW9F4KXY8D8jao9GHQ6SaTgsOCgnicxfRl rKBn0Z2coCBMb5ZfRbABi1JgQMlZmta/U+/gWoRXhgGxh0bFZRGbB4da7aQuRcUKQD5e Cw== Received: from dc5-exch02.marvell.com ([199.233.59.182]) 
by mx0a-0016f401.pphosted.com with ESMTP id 37wn4wu57a-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Wed, 14 Apr 2021 11:04:48 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 14 Apr 2021 11:04:46 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Wed, 14 Apr 2021 11:04:47 -0700 Received: from localhost.localdomain (unknown [10.28.36.185]) by maili.marvell.com (Postfix) with ESMTP id 2289C3F7040; Wed, 14 Apr 2021 11:04:41 -0700 (PDT) From: To: , , , , CC: , , , , , , , , , , , , Date: Wed, 14 Apr 2021 23:34:16 +0530 Message-ID: <20210414180417.1263585-4-gakhil@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210414180417.1263585-1-gakhil@marvell.com> References: <20210414122036.1262579-2-gakhil@marvell.com> <20210414180417.1263585-1-gakhil@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: N3_jPjSgbbHqNn6CunQFlea-beuLdfKo X-Proofpoint-GUID: N3_jPjSgbbHqNn6CunQFlea-beuLdfKo X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.391, 18.0.761 definitions=2021-04-14_10:2021-04-14, 2021-04-14 signatures=0 Subject: [dpdk-dev] [PATCH v10 3/4] event/octeontx2: support crypto adapter forward mode X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Shijith Thotton Advertise crypto adapter forward mode capability and set crypto adapter enqueue function in driver. Signed-off-by: Shijith Thotton Acked-by: Abhinandan Gujjar --- drivers/crypto/octeontx2/otx2_cryptodev_ops.c | 49 +++++++---- drivers/event/octeontx2/otx2_evdev.c | 5 +- .../event/octeontx2/otx2_evdev_crypto_adptr.c | 3 +- ...dptr_dp.h => otx2_evdev_crypto_adptr_rx.h} | 6 +- .../octeontx2/otx2_evdev_crypto_adptr_tx.h | 83 +++++++++++++++++++ drivers/event/octeontx2/otx2_worker.h | 2 +- drivers/event/octeontx2/otx2_worker_dual.h | 2 +- 7 files changed, 129 insertions(+), 21 deletions(-) rename drivers/event/octeontx2/{otx2_evdev_crypto_adptr_dp.h => otx2_evdev_crypto_adptr_rx.h} (93%) create mode 100644 drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h diff --git a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c index fc4d5bac4..5ca16a5ae 100644 --- a/drivers/crypto/octeontx2/otx2_cryptodev_ops.c +++ b/drivers/crypto/octeontx2/otx2_cryptodev_ops.c @@ -7,6 +7,7 @@ #include #include #include +#include #include "otx2_cryptodev.h" #include "otx2_cryptodev_capabilities.h" @@ -438,15 +439,35 @@ sym_session_configure(int driver_id, struct rte_crypto_sym_xform *xform, return -ENOTSUP; } -static __rte_always_inline void __rte_hot +static __rte_always_inline int32_t __rte_hot otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, struct cpt_request_info *req, void *lmtline, + struct rte_crypto_op *op, uint64_t cpt_inst_w7) { + union rte_event_crypto_metadata *m_data; union cpt_inst_s inst; uint64_t lmt_status; + if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) { + m_data = rte_cryptodev_sym_session_get_user_data( + op->sym->session); + if (m_data == NULL) { + rte_pktmbuf_free(op->sym->m_src); + rte_crypto_op_free(op); + rte_errno = EINVAL; + return -EINVAL; + } + } else if (op->sess_type == RTE_CRYPTO_OP_SESSIONLESS && + op->private_data_offset) { + m_data = (union 
rte_event_crypto_metadata *) + ((uint8_t *)op + + op->private_data_offset); + } else { + return -EINVAL; + } + inst.u[0] = 0; inst.s9x.res_addr = req->comp_baddr; inst.u[2] = 0; @@ -457,12 +478,11 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, inst.s9x.ei2 = req->ist.ei2; inst.s9x.ei3 = cpt_inst_w7; - inst.s9x.qord = 1; - inst.s9x.grp = qp->ev.queue_id; - inst.s9x.tt = qp->ev.sched_type; - inst.s9x.tag = (RTE_EVENT_TYPE_CRYPTODEV << 28) | - qp->ev.flow_id; - inst.s9x.wq_ptr = (uint64_t)req >> 3; + inst.u[2] = (((RTE_EVENT_TYPE_CRYPTODEV << 28) | + m_data->response_info.flow_id) | + ((uint64_t)m_data->response_info.sched_type << 32) | + ((uint64_t)m_data->response_info.queue_id << 34)); + inst.u[3] = 1 | (((uint64_t)req >> 3) << 3); req->qp = qp; do { @@ -479,22 +499,22 @@ otx2_ca_enqueue_req(const struct otx2_cpt_qp *qp, lmt_status = otx2_lmt_submit(qp->lf_nq_reg); } while (lmt_status == 0); + return 0; } static __rte_always_inline int32_t __rte_hot otx2_cpt_enqueue_req(const struct otx2_cpt_qp *qp, struct pending_queue *pend_q, struct cpt_request_info *req, + struct rte_crypto_op *op, uint64_t cpt_inst_w7) { void *lmtline = qp->lmtline; union cpt_inst_s inst; uint64_t lmt_status; - if (qp->ca_enable) { - otx2_ca_enqueue_req(qp, req, lmtline, cpt_inst_w7); - return 0; - } + if (qp->ca_enable) + return otx2_ca_enqueue_req(qp, req, lmtline, op, cpt_inst_w7); if (unlikely(pend_q->pending_count >= OTX2_CPT_DEFAULT_CMD_QLEN)) return -EAGAIN; @@ -598,7 +618,8 @@ otx2_cpt_enqueue_asym(struct otx2_cpt_qp *qp, goto req_fail; } - ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, params.req, op, + sess->cpt_inst_w7); if (unlikely(ret)) { CPT_LOG_DP_ERR("Could not enqueue crypto req"); @@ -642,7 +663,7 @@ otx2_cpt_enqueue_sym(struct otx2_cpt_qp *qp, struct rte_crypto_op *op, return ret; } - ret = otx2_cpt_enqueue_req(qp, pend_q, req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7); if (unlikely(ret)) { /* Free buffer allocated by fill params routines */ @@ -711,7 +732,7 @@ otx2_cpt_enqueue_sec(struct otx2_cpt_qp *qp, struct rte_crypto_op *op, return ret; } - ret = otx2_cpt_enqueue_req(qp, pend_q, req, sess->cpt_inst_w7); + ret = otx2_cpt_enqueue_req(qp, pend_q, req, op, sess->cpt_inst_w7); if (winsz && esn) { seq_in_sa = ((uint64_t)esn_hi << 32) | esn_low; diff --git a/drivers/event/octeontx2/otx2_evdev.c b/drivers/event/octeontx2/otx2_evdev.c index cdadbb2b2..ee7a6ad51 100644 --- a/drivers/event/octeontx2/otx2_evdev.c +++ b/drivers/event/octeontx2/otx2_evdev.c @@ -12,8 +12,9 @@ #include #include -#include "otx2_evdev_stats.h" #include "otx2_evdev.h" +#include "otx2_evdev_crypto_adptr_tx.h" +#include "otx2_evdev_stats.h" #include "otx2_irq.h" #include "otx2_tim_evdev.h" @@ -311,6 +312,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->tx_offloads & NIX_TX_OFFLOAD_OL3_OL4_CSUM_F)] [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; } + event_dev->ca_enqueue = otx2_ssogws_ca_enq; if (dev->dual_ws) { event_dev->enqueue = otx2_ssogws_dual_enq; @@ -473,6 +475,7 @@ SSO_TX_ADPTR_ENQ_FASTPATH_FUNC [!!(dev->tx_offloads & NIX_TX_OFFLOAD_L3_L4_CSUM_F)]; } + event_dev->ca_enqueue = otx2_ssogws_dual_ca_enq; } event_dev->txa_enqueue_same_dest = event_dev->txa_enqueue; diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c index 4e8a96cb6..2c9b347f0 100644 --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c +++ 
b/drivers/event/octeontx2/otx2_evdev_crypto_adptr.c @@ -18,7 +18,8 @@ otx2_ca_caps_get(const struct rte_eventdev *dev, RTE_SET_USED(cdev); *caps = RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND | - RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW; + RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW | + RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD; return 0; } diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h similarity index 93% rename from drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h rename to drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h index 70b63933e..9e331fdd7 100644 --- a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_dp.h +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_rx.h @@ -2,8 +2,8 @@ * Copyright (C) 2020 Marvell International Ltd. */ -#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ -#define _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ +#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ +#define _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ #include #include @@ -72,4 +72,4 @@ otx2_handle_crypto_event(uint64_t get_work1) return (uint64_t)(cop); } -#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_DP_H_ */ +#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_RX_H_ */ diff --git a/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h new file mode 100644 index 000000000..ecf7eb9f5 --- /dev/null +++ b/drivers/event/octeontx2/otx2_evdev_crypto_adptr_tx.h @@ -0,0 +1,83 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (C) 2021 Marvell International Ltd. + */ + +#ifndef _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ +#define _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ + +#include +#include +#include +#include + +#include +#include + +static inline uint16_t +otx2_ca_enq(uintptr_t tag_op, const struct rte_event *ev) +{ + union rte_event_crypto_metadata *m_data; + struct rte_crypto_op *crypto_op; + struct rte_cryptodev *cdev; + struct otx2_cpt_qp *qp; + uint8_t cdev_id; + uint16_t qp_id; + + crypto_op = ev->event_ptr; + if (crypto_op == NULL) + return 0; + + if (crypto_op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) { + m_data = rte_cryptodev_sym_session_get_user_data( + crypto_op->sym->session); + if (m_data == NULL) + goto free_op; + + cdev_id = m_data->request_info.cdev_id; + qp_id = m_data->request_info.queue_pair_id; + } else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS && + crypto_op->private_data_offset) { + m_data = (union rte_event_crypto_metadata *) + ((uint8_t *)crypto_op + + crypto_op->private_data_offset); + cdev_id = m_data->request_info.cdev_id; + qp_id = m_data->request_info.queue_pair_id; + } else { + goto free_op; + } + + cdev = &rte_cryptodevs[cdev_id]; + qp = cdev->data->queue_pairs[qp_id]; + + if (!ev->sched_type) + otx2_ssogws_head_wait(tag_op); + if (qp->ca_enable) + return cdev->enqueue_burst(qp, &crypto_op, 1); + +free_op: + rte_pktmbuf_free(crypto_op->sym->m_src); + rte_crypto_op_free(crypto_op); + rte_errno = EINVAL; + return 0; +} + +static uint16_t __rte_hot +otx2_ssogws_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events) +{ + struct otx2_ssogws *ws = port; + + RTE_SET_USED(nb_events); + + return otx2_ca_enq(ws->tag_op, ev); +} + +static uint16_t __rte_hot +otx2_ssogws_dual_ca_enq(void *port, struct rte_event ev[], uint16_t nb_events) +{ + struct otx2_ssogws_dual *ws = port; + + RTE_SET_USED(nb_events); + + return otx2_ca_enq(ws->ws_state[!ws->vws].tag_op, ev); +} +#endif /* _OTX2_EVDEV_CRYPTO_ADPTR_TX_H_ */ diff --git a/drivers/event/octeontx2/otx2_worker.h 
b/drivers/event/octeontx2/otx2_worker.h index 2b716c042..fd149be91 100644 --- a/drivers/event/octeontx2/otx2_worker.h +++ b/drivers/event/octeontx2/otx2_worker.h @@ -10,7 +10,7 @@ #include #include "otx2_evdev.h" -#include "otx2_evdev_crypto_adptr_dp.h" +#include "otx2_evdev_crypto_adptr_rx.h" #include "otx2_ethdev_sec_tx.h" /* SSO Operations */ diff --git a/drivers/event/octeontx2/otx2_worker_dual.h b/drivers/event/octeontx2/otx2_worker_dual.h index 72b616439..36ae4dd88 100644 --- a/drivers/event/octeontx2/otx2_worker_dual.h +++ b/drivers/event/octeontx2/otx2_worker_dual.h @@ -10,7 +10,7 @@ #include #include "otx2_evdev.h" -#include "otx2_evdev_crypto_adptr_dp.h" +#include "otx2_evdev_crypto_adptr_rx.h" /* SSO Operations */ static __rte_always_inline uint16_t From patchwork Wed Apr 14 18:04:17 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Akhil Goyal X-Patchwork-Id: 91472 X-Patchwork-Delegate: jerinj@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 0C964A0562; Wed, 14 Apr 2021 20:05:15 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 5BD33161C56; Wed, 14 Apr 2021 20:04:59 +0200 (CEST) Received: from mx0b-0016f401.pphosted.com (mx0b-0016f401.pphosted.com [67.231.156.173]) by mails.dpdk.org (Postfix) with ESMTP id D5CB7161C44 for ; Wed, 14 Apr 2021 20:04:56 +0200 (CEST) Received: from pps.filterd (m0045851.ppops.net [127.0.0.1]) by mx0b-0016f401.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id 13EHk9tS014858; Wed, 14 Apr 2021 11:04:54 -0700 DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=marvell.com; h=from : to : cc : subject : date : message-id : in-reply-to : references : mime-version : content-transfer-encoding : content-type; s=pfpt0220; bh=eTbMTO8IFhv/+J6H59bjbnSrVOxyBL7w/BqgVwa5wr4=; b=hiNeKhbgK7LgnKdA3l72CydbMB9Rr6UhNACunJRppEl6JiG0sm2z4+1sXWKCy7G+vYtE rSLPLSuTP+qihp0T6wdw0qvVZrIqwe+f91oYwO4WmzJdON1uCO0ndlXPayL1Pb/lhAW/ qQKiswoDnPUL8LYYzgooYfQYr97SusSETvgc1zXpnAmaBLd0HHDaOcT1pkZZzru1s9jd 1T+kCDGCL+g7zJ6wiuD0VVJeHiy1vQFB9+EuosbP3L0LZJhNS6WbRgav550PW4X5XTRZ EVg/+BZoqkbtsgcBAMWXj5+gomTtqMlgBUEO2wE1SpaLR8Ac2tmH4lDENuI8NpXRN4zn kg== Received: from dc5-exch02.marvell.com ([199.233.59.182]) by mx0b-0016f401.pphosted.com with ESMTP id 37wqtm2s9t-1 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-SHA384 bits=256 verify=NOT); Wed, 14 Apr 2021 11:04:54 -0700 Received: from DC5-EXCH01.marvell.com (10.69.176.38) by DC5-EXCH02.marvell.com (10.69.176.39) with Microsoft SMTP Server (TLS) id 15.0.1497.2; Wed, 14 Apr 2021 11:04:52 -0700 Received: from maili.marvell.com (10.69.176.80) by DC5-EXCH01.marvell.com (10.69.176.38) with Microsoft SMTP Server id 15.0.1497.2 via Frontend Transport; Wed, 14 Apr 2021 11:04:52 -0700 Received: from localhost.localdomain (unknown [10.28.36.185]) by maili.marvell.com (Postfix) with ESMTP id 79EE53F703F; Wed, 14 Apr 2021 11:04:47 -0700 (PDT) From: To: , , , , CC: , , , , , , , , , , , , Date: Wed, 14 Apr 2021 23:34:17 +0530 Message-ID: <20210414180417.1263585-5-gakhil@marvell.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210414180417.1263585-1-gakhil@marvell.com> References: <20210414122036.1262579-2-gakhil@marvell.com> <20210414180417.1263585-1-gakhil@marvell.com> MIME-Version: 1.0 X-Proofpoint-ORIG-GUID: NgLIHH5EvcFuFGKzbDf8EhJMtIi5pBht X-Proofpoint-GUID: 
NgLIHH5EvcFuFGKzbDf8EhJMtIi5pBht X-Proofpoint-Virus-Version: vendor=fsecure engine=2.50.10434:6.0.391, 18.0.761 definitions=2021-04-14_10:2021-04-14, 2021-04-14 signatures=0 Subject: [dpdk-dev] [PATCH v10 4/4] test/event_crypto: use crypto adapter enqueue API X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Shijith Thotton Use rte_event_crypto_adapter_enqueue() API to enqueue events to crypto adapter if forward mode is supported in driver. Signed-off-by: Shijith Thotton Acked-by: Abhinandan Gujjar --- app/test/test_event_crypto_adapter.c | 33 ++++++++++++++++++---------- 1 file changed, 21 insertions(+), 12 deletions(-) diff --git a/app/test/test_event_crypto_adapter.c b/app/test/test_event_crypto_adapter.c index 335211cd8..f689bc1f2 100644 --- a/app/test/test_event_crypto_adapter.c +++ b/app/test/test_event_crypto_adapter.c @@ -64,6 +64,7 @@ struct event_crypto_adapter_test_params { struct rte_mempool *session_priv_mpool; struct rte_cryptodev_config *config; uint8_t crypto_event_port_id; + uint8_t internal_port_op_fwd; }; struct rte_event response_info = { @@ -110,9 +111,12 @@ send_recv_ev(struct rte_event *ev) struct rte_event recv_ev; int ret; - ret = rte_event_enqueue_burst(evdev, TEST_APP_PORT_ID, ev, NUM); - TEST_ASSERT_EQUAL(ret, NUM, - "Failed to send event to crypto adapter\n"); + if (params.internal_port_op_fwd) + ret = rte_event_crypto_adapter_enqueue(evdev, TEST_APP_PORT_ID, + ev, NUM); + else + ret = rte_event_enqueue_burst(evdev, TEST_APP_PORT_ID, ev, NUM); + TEST_ASSERT_EQUAL(ret, NUM, "Failed to send event to crypto adapter\n"); while (rte_event_dequeue_burst(evdev, TEST_APP_PORT_ID, &recv_ev, NUM, 0) == 0) @@ -747,9 +751,12 @@ configure_event_crypto_adapter(enum rte_event_crypto_adapter_mode mode) !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND)) goto adapter_create; - if ((mode == RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD) && - !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD)) - return -ENOTSUP; + if (mode == RTE_EVENT_CRYPTO_ADAPTER_OP_FORWARD) { + if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) + params.internal_port_op_fwd = 1; + else + return -ENOTSUP; + } if ((mode == RTE_EVENT_CRYPTO_ADAPTER_OP_NEW) && !(cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW)) @@ -771,9 +778,11 @@ configure_event_crypto_adapter(enum rte_event_crypto_adapter_mode mode) TEST_ASSERT_SUCCESS(ret, "Failed to add queue pair\n"); - ret = rte_event_crypto_adapter_event_port_get(TEST_ADAPTER_ID, - ¶ms.crypto_event_port_id); - TEST_ASSERT_SUCCESS(ret, "Failed to get event port\n"); + if (!params.internal_port_op_fwd) { + ret = rte_event_crypto_adapter_event_port_get(TEST_ADAPTER_ID, + ¶ms.crypto_event_port_id); + TEST_ASSERT_SUCCESS(ret, "Failed to get event port\n"); + } return TEST_SUCCESS; } @@ -809,15 +818,15 @@ test_crypto_adapter_conf(enum rte_event_crypto_adapter_mode mode) if (!crypto_adapter_setup_done) { ret = configure_event_crypto_adapter(mode); - if (!ret) { + if (ret) + return ret; + if (!params.internal_port_op_fwd) { qid = TEST_CRYPTO_EV_QUEUE_ID; ret = rte_event_port_link(evdev, params.crypto_event_port_id, &qid, NULL, 1); TEST_ASSERT(ret >= 0, "Failed to link queue %d " "port=%u\n", qid, params.crypto_event_port_id); - } else { - return ret; } crypto_adapter_setup_done = 1; }
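To summarize the application-side flow that patches 2-4 document and exercise, a condensed sketch follows. It is adapted from the programmer's guide snippet and the test changes above; the helper name and ID parameters are placeholders, and device/adapter setup as well as most error handling are omitted.

#include <rte_event_crypto_adapter.h>
#include <rte_eventdev.h>

/* Pick the enqueue path based on the crypto adapter capability. */
static int
submit_crypto_op_as_event(uint8_t evdev_id, uint8_t cdev_id,
			  uint8_t adapter_id, uint8_t app_ev_port_id,
			  uint8_t app_qid, struct rte_event *ev,
			  uint16_t nb_events)
{
	uint8_t crypto_ev_port_id;
	uint32_t cap;
	int ret;

	ret = rte_event_crypto_adapter_caps_get(evdev_id, cdev_id, &cap);
	if (ret)
		return ret;

	if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_FWD) {
		/* Event PMD has an internal crypto adapter port: use the
		 * rte_event_crypto_adapter_enqueue() API from patch 2/4. */
		return rte_event_crypto_adapter_enqueue(evdev_id,
				app_ev_port_id, ev, nb_events);
	}

	/* No internal port: link an event queue to the adapter's event port
	 * (normally a one-time setup step) and enqueue as before. */
	ret = rte_event_crypto_adapter_event_port_get(adapter_id,
			&crypto_ev_port_id);
	if (ret)
		return ret;
	ret = rte_event_port_link(evdev_id, crypto_ev_port_id, &app_qid,
				  NULL, 1);
	if (ret < 0)
		return ret;
	ev->queue_id = app_qid;

	return rte_event_enqueue_burst(evdev_id, app_ev_port_id, ev,
				       nb_events);
}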