From patchwork Fri Sep  6 13:12:55 2019
From: Akhil Goyal <akhil.goyal@nxp.com>
To: dev@dpdk.org
Cc: hemant.agrawal@nxp.com, anoobj@marvell.com, jerinj@marvell.com
Date: Fri, 6 Sep 2019 18:42:55 +0530
Message-Id: <20190906131256.23367-1-akhil.goyal@nxp.com>
X-Patchwork-Id: 58877
Subject: [dpdk-dev] [PATCH 1/2] crypto/dpaa_sec: support event crypto adapter

dpaa_sec hardware queues can be attached to a hardware dpaa event
device, and the application can then configure the event crypto
adapter to receive dpaa_sec packets as hardware events. This patch
defines the APIs used by the dpaa event device to attach and detach
dpaa_sec queues.
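For context, a minimal sketch of the call pattern these APIs are designed
for; it is illustrative only (the wrapper function, queue-pair id, channel
id, and event field values are made up, not part of this patch):

#include <rte_cryptodev.h>
#include <rte_eventdev.h>
#include <dpaa_sec_event.h>

/* Bind dpaa_sec queue pair 0 to a DPAA hardware channel, then unbind
 * it. After the attach, completed crypto operations are delivered as
 * RTE_EVENT_TYPE_CRYPTODEV events on that channel.
 */
static int
example_bind_sec_qp(const struct rte_cryptodev *cdev, uint16_t ch_id)
{
        struct rte_event ev = {
                .queue_id = 0,                       /* target event queue */
                .sched_type = RTE_SCHED_TYPE_ATOMIC, /* or PARALLEL; ORDERED is rejected */
                .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
        };
        int ret;

        ret = dpaa_sec_eventq_attach(cdev, 0 /* qp_id */, ch_id, &ev);
        if (ret)
                return ret;

        /* ... process traffic, then restore the plain poll-mode FQ */
        return dpaa_sec_eventq_detach(cdev, 0 /* qp_id */);
}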
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/bus/dpaa/base/qbman/qman.c            |   9 +-
 drivers/bus/dpaa/include/fsl_qman.h           |   2 +-
 drivers/crypto/dpaa_sec/Makefile              |   1 +
 drivers/crypto/dpaa_sec/dpaa_sec.c            | 200 +++++++++++++++++-
 drivers/crypto/dpaa_sec/dpaa_sec_event.h      |  19 ++
 .../dpaa_sec/rte_pmd_dpaa_sec_version.map     |   8 +
 6 files changed, 231 insertions(+), 8 deletions(-)
 create mode 100644 drivers/crypto/dpaa_sec/dpaa_sec_event.h

diff --git a/drivers/bus/dpaa/base/qbman/qman.c b/drivers/bus/dpaa/base/qbman/qman.c
index c6f7d7bb3..e43fc65ef 100644
--- a/drivers/bus/dpaa/base/qbman/qman.c
+++ b/drivers/bus/dpaa/base/qbman/qman.c
@@ -2286,7 +2286,7 @@ int qman_enqueue_multi(struct qman_fq *fq,
 
 int
 qman_enqueue_multi_fq(struct qman_fq *fq[], const struct qm_fd *fd,
-		      int frames_to_send)
+		      u32 *flags, int frames_to_send)
 {
 	struct qman_portal *p = get_affine_portal();
 	struct qm_portal *portal = &p->p;
@@ -2294,7 +2294,7 @@ qman_enqueue_multi_fq(struct qman_fq *fq[], const struct qm_fd *fd,
 	register struct qm_eqcr *eqcr = &portal->eqcr;
 	struct qm_eqcr_entry *eq = eqcr->cursor, *prev_eq;
 
-	u8 i, diff, old_ci, sent = 0;
+	u8 i = 0, diff, old_ci, sent = 0;
 
 	/* Update the available entries if no entry is free */
 	if (!eqcr->available) {
@@ -2313,6 +2313,11 @@ qman_enqueue_multi_fq(struct qman_fq *fq[], const struct qm_fd *fd,
 		eq->fd.addr = cpu_to_be40(fd->addr);
 		eq->fd.status = cpu_to_be32(fd->status);
 		eq->fd.opaque = cpu_to_be32(fd->opaque);
+		if (flags && (flags[i] & QMAN_ENQUEUE_FLAG_DCA)) {
+			eq->dca = QM_EQCR_DCA_ENABLE |
+				((flags[i] >> 8) & QM_EQCR_DCA_IDXMASK);
+		}
+		i++;
 		eq = (void *)((unsigned long)(eq + 1) &
 			(~(unsigned long)(QM_EQCR_SIZE << 6)));
diff --git a/drivers/bus/dpaa/include/fsl_qman.h b/drivers/bus/dpaa/include/fsl_qman.h
index e5cccbbea..29fb2eb9d 100644
--- a/drivers/bus/dpaa/include/fsl_qman.h
+++ b/drivers/bus/dpaa/include/fsl_qman.h
@@ -1773,7 +1773,7 @@ int qman_enqueue_multi(struct qman_fq *fq, const struct qm_fd *fd, u32 *flags,
  */
 int
 qman_enqueue_multi_fq(struct qman_fq *fq[], const struct qm_fd *fd,
-		      int frames_to_send);
+		      u32 *flags, int frames_to_send);
 
 typedef int (*qman_cb_precommit) (void *arg);
 
diff --git a/drivers/crypto/dpaa_sec/Makefile b/drivers/crypto/dpaa_sec/Makefile
index 1d8b7bec1..353c2549f 100644
--- a/drivers/crypto/dpaa_sec/Makefile
+++ b/drivers/crypto/dpaa_sec/Makefile
@@ -16,6 +16,7 @@ CFLAGS += $(WERROR_FLAGS)
 
 CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa
 CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/include
+CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/base/qbman
 CFLAGS += -I$(RTE_SDK)/drivers/crypto/dpaa_sec/
 #sharing the hw flib headers from dpaa2_sec pmd
 CFLAGS += -I$(RTE_SDK)/drivers/crypto/dpaa2_sec/
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec.c b/drivers/crypto/dpaa_sec/dpaa_sec.c
index e6f57ce3d..e96307a8a 100644
--- a/drivers/crypto/dpaa_sec/dpaa_sec.c
+++ b/drivers/crypto/dpaa_sec/dpaa_sec.c
@@ -37,6 +37,7 @@
 #include <rte_dpaa_bus.h>
 #include <dpaa_sec.h>
+#include <dpaa_sec_event.h>
 #include <dpaa_sec_log.h>
 
 enum rta_sec_era rta_sec_era;
@@ -60,9 +61,6 @@ dpaa_sec_op_ending(struct dpaa_sec_op_ctx *ctx)
 		DPAA_SEC_DP_WARN("SEC return err: 0x%x", ctx->fd_status);
 		ctx->op->status = RTE_CRYPTO_OP_STATUS_ERROR;
 	}
-
-	/* report op status to sym->op and then free the ctx memory */
-	rte_mempool_put(ctx->ctx_pool, (void *)ctx);
 }
 
 static inline struct dpaa_sec_op_ctx *
@@ -1656,7 +1654,7 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
 	struct rte_crypto_op *op;
 	struct dpaa_sec_job *cf;
 	dpaa_sec_session *ses;
-	uint32_t auth_only_len;
+	uint32_t auth_only_len, index, flags[DPAA_SEC_BURST] = {0};
 	struct qman_fq *inq[DPAA_SEC_BURST];
 
 	while (nb_ops) {
@@ -1664,6 +1662,18 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
 				DPAA_SEC_BURST : nb_ops;
 		for (loop = 0; loop < frames_to_send; loop++) {
 			op = *(ops++);
+			if (op->sym->m_src->seqn != 0) {
+				index = op->sym->m_src->seqn - 1;
+				if (DPAA_PER_LCORE_DQRR_HELD & (1 << index)) {
+					/* QM_EQCR_DCA_IDXMASK = 0x0f */
+					flags[loop] = ((index & 0x0f) << 8);
+					flags[loop] |= QMAN_ENQUEUE_FLAG_DCA;
+					DPAA_PER_LCORE_DQRR_SIZE--;
+					DPAA_PER_LCORE_DQRR_HELD &=
+							~(1 << index);
+				}
+			}
+
 			switch (op->sess_type) {
 			case RTE_CRYPTO_OP_WITH_SESSION:
 				ses = (dpaa_sec_session *)
@@ -1764,7 +1774,7 @@ dpaa_sec_enqueue_burst(void *qp, struct rte_crypto_op **ops,
 		loop = 0;
 		while (loop < frames_to_send) {
 			loop += qman_enqueue_multi_fq(&inq[loop], &fds[loop],
-					frames_to_send - loop);
+					&flags[loop], frames_to_send - loop);
 		}
 		nb_ops -= frames_to_send;
 		num_tx += frames_to_send;
@@ -2572,6 +2582,186 @@ dpaa_sec_dev_infos_get(struct rte_cryptodev *dev,
 	}
 }
 
+static enum qman_cb_dqrr_result
+dpaa_sec_process_parallel_event(void *event,
+			struct qman_portal *qm __always_unused,
+			struct qman_fq *outq,
+			const struct qm_dqrr_entry *dqrr,
+			void **bufs)
+{
+	const struct qm_fd *fd;
+	struct dpaa_sec_job *job;
+	struct dpaa_sec_op_ctx *ctx;
+	struct rte_event *ev = (struct rte_event *)event;
+
+	fd = &dqrr->fd;
+
+	/* sg is embedded in an op ctx,
+	 * sg[0] is for output
+	 * sg[1] for input
+	 */
+	job = dpaa_mem_ptov(qm_fd_addr_get64(fd));
+
+	ctx = container_of(job, struct dpaa_sec_op_ctx, job);
+	ctx->fd_status = fd->status;
+	if (ctx->op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
+		struct qm_sg_entry *sg_out;
+		uint32_t len;
+
+		sg_out = &job->sg[0];
+		hw_sg_to_cpu(sg_out);
+		len = sg_out->length;
+		ctx->op->sym->m_src->pkt_len = len;
+		ctx->op->sym->m_src->data_len = len;
+	}
+	if (!ctx->fd_status) {
+		ctx->op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+	} else {
+		DPAA_SEC_DP_WARN("SEC return err: 0x%x", ctx->fd_status);
+		ctx->op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+	}
+	ev->event_ptr = (void *)ctx->op;
+
+	ev->flow_id = outq->ev.flow_id;
+	ev->sub_event_type = outq->ev.sub_event_type;
+	ev->event_type = RTE_EVENT_TYPE_CRYPTODEV;
+	ev->op = RTE_EVENT_OP_NEW;
+	ev->sched_type = outq->ev.sched_type;
+	ev->queue_id = outq->ev.queue_id;
+	ev->priority = outq->ev.priority;
+	*bufs = (void *)ctx->op;
+
+	rte_mempool_put(ctx->ctx_pool, (void *)ctx);
+
+	return qman_cb_dqrr_consume;
+}
+
+static enum qman_cb_dqrr_result
+dpaa_sec_process_atomic_event(void *event,
+			struct qman_portal *qm __rte_unused,
+			struct qman_fq *outq,
+			const struct qm_dqrr_entry *dqrr,
+			void **bufs)
+{
+	u8 index;
+	const struct qm_fd *fd;
+	struct dpaa_sec_job *job;
+	struct dpaa_sec_op_ctx *ctx;
+	struct rte_event *ev = (struct rte_event *)event;
+
+	fd = &dqrr->fd;
+
+	/* sg is embedded in an op ctx,
+	 * sg[0] is for output
+	 * sg[1] for input
+	 */
+	job = dpaa_mem_ptov(qm_fd_addr_get64(fd));
+
+	ctx = container_of(job, struct dpaa_sec_op_ctx, job);
+	ctx->fd_status = fd->status;
+	if (ctx->op->sess_type == RTE_CRYPTO_OP_SECURITY_SESSION) {
+		struct qm_sg_entry *sg_out;
+		uint32_t len;
+
+		sg_out = &job->sg[0];
+		hw_sg_to_cpu(sg_out);
+		len = sg_out->length;
+		ctx->op->sym->m_src->pkt_len = len;
+		ctx->op->sym->m_src->data_len = len;
+	}
+	if (!ctx->fd_status) {
+		ctx->op->status = RTE_CRYPTO_OP_STATUS_SUCCESS;
+	} else {
+		DPAA_SEC_DP_WARN("SEC return err: 0x%x", ctx->fd_status);
+		ctx->op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+	}
+	ev->event_ptr = (void *)ctx->op;
+	ev->flow_id = outq->ev.flow_id;
+	ev->sub_event_type = outq->ev.sub_event_type;
+	ev->event_type = RTE_EVENT_TYPE_CRYPTODEV;
+	ev->op = RTE_EVENT_OP_NEW;
+	ev->sched_type = outq->ev.sched_type;
+	ev->queue_id = outq->ev.queue_id;
+	ev->priority = outq->ev.priority;
+
+	/* Save active dqrr entries */
+	index = ((uintptr_t)dqrr >> 6) & (16/*QM_DQRR_SIZE*/ - 1);
+	DPAA_PER_LCORE_DQRR_SIZE++;
+	DPAA_PER_LCORE_DQRR_HELD |= 1 << index;
+	DPAA_PER_LCORE_DQRR_MBUF(index) = ctx->op->sym->m_src;
+	ev->impl_opaque = index + 1;
+	ctx->op->sym->m_src->seqn = (uint32_t)index + 1;
+	*bufs = (void *)ctx->op;
+
+	rte_mempool_put(ctx->ctx_pool, (void *)ctx);
+
+	return qman_cb_dqrr_defer;
+}
+
+int
+dpaa_sec_eventq_attach(const struct rte_cryptodev *dev,
+		int qp_id,
+		uint16_t ch_id,
+		const struct rte_event *event)
+{
+	struct dpaa_sec_qp *qp = dev->data->queue_pairs[qp_id];
+	struct qm_mcc_initfq opts = {0};
+
+	int ret;
+
+	opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL |
+		       QM_INITFQ_WE_CONTEXTA | QM_INITFQ_WE_CONTEXTB;
+	opts.fqd.dest.channel = ch_id;
+
+	switch (event->sched_type) {
+	case RTE_SCHED_TYPE_ATOMIC:
+		opts.fqd.fq_ctrl |= QM_FQCTRL_HOLDACTIVE;
+		/* Reset FQCTRL_AVOIDBLOCK bit as it is unnecessary
+		 * configuration with HOLD_ACTIVE setting
+		 */
+		opts.fqd.fq_ctrl &= (~QM_FQCTRL_AVOIDBLOCK);
+		qp->outq.cb.dqrr_dpdk_cb = dpaa_sec_process_atomic_event;
+		break;
+	case RTE_SCHED_TYPE_ORDERED:
+		DPAA_SEC_ERR("Ordered queue schedule type is not supported\n");
+		return -1;
+	default:
+		opts.fqd.fq_ctrl |= QM_FQCTRL_AVOIDBLOCK;
+		qp->outq.cb.dqrr_dpdk_cb = dpaa_sec_process_parallel_event;
+		break;
+	}
+
+	ret = qman_init_fq(&qp->outq, QMAN_INITFQ_FLAG_SCHED, &opts);
+	if (unlikely(ret)) {
+		DPAA_SEC_ERR("unable to init caam source fq!");
+		return ret;
+	}
+
+	memcpy(&qp->outq.ev, event, sizeof(struct rte_event));
+
+	return 0;
+}
+
+int
+dpaa_sec_eventq_detach(const struct rte_cryptodev *dev,
+		int qp_id)
+{
+	struct qm_mcc_initfq opts = {0};
+	int ret;
+	struct dpaa_sec_qp *qp = dev->data->queue_pairs[qp_id];
+
+	opts.we_mask = QM_INITFQ_WE_DESTWQ | QM_INITFQ_WE_FQCTRL |
+		       QM_INITFQ_WE_CONTEXTA | QM_INITFQ_WE_CONTEXTB;
+	qp->outq.cb.dqrr = dqrr_out_fq_cb_rx;
+	qp->outq.cb.ern = ern_sec_fq_handler;
+	ret = qman_init_fq(&qp->outq, 0, &opts);
+	if (ret)
+		RTE_LOG(ERR, PMD, "Error in qman_init_fq: ret: %d\n", ret);
+	qp->outq.cb.dqrr = NULL;
+
+	return ret;
+}
+
 static struct rte_cryptodev_ops crypto_ops = {
 	.dev_configure	      = dpaa_sec_dev_configure,
 	.dev_start	      = dpaa_sec_dev_start,
diff --git a/drivers/crypto/dpaa_sec/dpaa_sec_event.h b/drivers/crypto/dpaa_sec/dpaa_sec_event.h
new file mode 100644
index 000000000..8d1a01809
--- /dev/null
+++ b/drivers/crypto/dpaa_sec/dpaa_sec_event.h
@@ -0,0 +1,19 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright 2019 NXP
+ *
+ */
+
+#ifndef _DPAA_SEC_EVENT_H_
+#define _DPAA_SEC_EVENT_H_
+
+int
+dpaa_sec_eventq_attach(const struct rte_cryptodev *dev,
+		int qp_id,
+		uint16_t ch_id,
+		const struct rte_event *event);
+
+int
+dpaa_sec_eventq_detach(const struct rte_cryptodev *dev,
+		int qp_id);
+
+#endif /* _DPAA_SEC_EVENT_H_ */
diff --git a/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map b/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
index a70bd197b..cc7f2162e 100644
--- a/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
+++ b/drivers/crypto/dpaa_sec/rte_pmd_dpaa_sec_version.map
@@ -2,3 +2,11 @@ DPDK_17.11 {
 
 	local: *;
 };
+
+DPDK_19.11 {
+	global:
+
+	dpaa_sec_eventq_attach;
+	dpaa_sec_eventq_detach;
+
+} DPDK_17.11;
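For reference, a sketch of the consumer side of this patch: the dqrr
callbacks above publish the finished crypto operation through
ev.event_ptr, so a hypothetical application worker loop (device and
port ids are placeholder values, not from this patch) could look like:

#include <rte_crypto.h>
#include <rte_eventdev.h>
#include <rte_mbuf.h>

/* Drain crypto completion events from an event port; each event of
 * type RTE_EVENT_TYPE_CRYPTODEV carries the rte_crypto_op that the
 * dqrr callbacks stored in event_ptr.
 */
static void
example_drain_crypto_events(uint8_t evdev_id, uint8_t port_id)
{
        struct rte_event ev;

        while (rte_event_dequeue_burst(evdev_id, port_id, &ev, 1, 0)) {
                if (ev.event_type != RTE_EVENT_TYPE_CRYPTODEV)
                        continue;

                struct rte_crypto_op *op = ev.event_ptr;

                if (op->status == RTE_CRYPTO_OP_STATUS_SUCCESS)
                        rte_pktmbuf_free(op->sym->m_src); /* consume the result */
        }
}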
From patchwork Fri Sep  6 13:12:56 2019
From: Akhil Goyal <akhil.goyal@nxp.com>
To: dev@dpdk.org
Cc: hemant.agrawal@nxp.com, anoobj@marvell.com, jerinj@marvell.com
Date: Fri, 6 Sep 2019 18:42:56 +0530
Message-Id: <20190906131256.23367-2-akhil.goyal@nxp.com>
In-Reply-To: <20190906131256.23367-1-akhil.goyal@nxp.com>
References: <20190906131256.23367-1-akhil.goyal@nxp.com>
X-Patchwork-Id: 58878
Subject: [dpdk-dev] [PATCH 2/2] event/dpaa: support event crypto adapter

The dpaa event device supports both Ethernet and crypto queues being
attached to it. The eth_rx_adapter provides the infrastructure for
attaching Ethernet queues, and the crypto_adapter provides the
equivalent support for crypto queues. This patch adds support in
dpaa_eventdev for attaching dpaa_sec queues.
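As a rough illustration of the application-side wiring this enables
(assuming the rte_event_crypto_adapter API of this DPDK generation; the
adapter id, port configuration, and event field values below are
placeholders):

#include <rte_event_crypto_adapter.h>

/* Create a crypto adapter in OP_NEW mode (matching the
 * INTERNAL_PORT_OP_NEW capability advertised by this patch) and attach
 * all queue pairs of one cryptodev; queue_pair_id -1 lands in
 * dpaa_eventdev_crypto_queue_add_all() further down.
 */
static int
example_setup_crypto_adapter(uint8_t evdev_id, uint8_t cdev_id)
{
        struct rte_event ev = {
                .queue_id = 0,
                .sched_type = RTE_SCHED_TYPE_PARALLEL,
                .priority = RTE_EVENT_DEV_PRIORITY_NORMAL,
        };
        struct rte_event_port_conf port_conf = {
                .new_event_threshold = 4096,
                .dequeue_depth = 16,
                .enqueue_depth = 16,
        };
        int ret;

        ret = rte_event_crypto_adapter_create(0 /* adapter id */, evdev_id,
                        &port_conf, RTE_EVENT_CRYPTO_ADAPTER_OP_NEW);
        if (ret)
                return ret;

        return rte_event_crypto_adapter_queue_pair_add(0, cdev_id, -1, &ev);
}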
Signed-off-by: Akhil Goyal <akhil.goyal@nxp.com>
---
 drivers/event/dpaa/Makefile        |   3 +
 drivers/event/dpaa/dpaa_eventdev.c | 154 ++++++++++++++++++++++++++++-
 drivers/event/dpaa/dpaa_eventdev.h |   5 +
 3 files changed, 161 insertions(+), 1 deletion(-)

diff --git a/drivers/event/dpaa/Makefile b/drivers/event/dpaa/Makefile
index cf9626495..1856fa468 100644
--- a/drivers/event/dpaa/Makefile
+++ b/drivers/event/dpaa/Makefile
@@ -21,6 +21,9 @@ CFLAGS += -I$(RTE_SDK)/drivers/bus/dpaa/include/
 CFLAGS += -I$(RTE_SDK)/drivers/mempool/dpaa
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common/include
 
+LDLIBS += -lrte_pmd_dpaa_sec
+CFLAGS += -I$(RTE_SDK)/drivers/crypto/dpaa_sec
+
 EXPORT_MAP := rte_pmd_dpaa_event_version.map
 
 LIBABIVER := 1
diff --git a/drivers/event/dpaa/dpaa_eventdev.c b/drivers/event/dpaa/dpaa_eventdev.c
index 1e247e4f4..d02b8694e 100644
--- a/drivers/event/dpaa/dpaa_eventdev.c
+++ b/drivers/event/dpaa/dpaa_eventdev.c
@@ -27,12 +27,14 @@
 #include <rte_eventdev_pmd_vdev.h>
 #include <rte_ethdev.h>
 #include <rte_event_eth_rx_adapter.h>
+#include <rte_event_crypto_adapter.h>
 #include <rte_dpaa_bus.h>
 #include <rte_dpaa_logs.h>
 #include <rte_cycles.h>
 #include <rte_kvargs.h>
 
 #include <dpaa_ethdev.h>
+#include <dpaa_sec_event.h>
 #include "dpaa_eventdev.h"
 #include <dpaa_mempool.h>
@@ -322,7 +324,7 @@ dpaa_event_dev_info_get(struct rte_eventdev *dev,
 	EVENTDEV_INIT_FUNC_TRACE();
 
 	RTE_SET_USED(dev);
-	dev_info->driver_name = "event_dpaa";
+	dev_info->driver_name = "event_dpaa1";
 	dev_info->min_dequeue_timeout_ns =
 		DPAA_EVENT_MIN_DEQUEUE_TIMEOUT;
 	dev_info->max_dequeue_timeout_ns =
@@ -718,6 +720,149 @@ dpaa_event_eth_rx_adapter_stop(const struct rte_eventdev *dev,
 	return 0;
 }
 
+static int
+dpaa_eventdev_crypto_caps_get(const struct rte_eventdev *dev,
+			    const struct rte_cryptodev *cdev,
+			    uint32_t *caps)
+{
+	const char *name = cdev->data->name;
+
+	EVENTDEV_INIT_FUNC_TRACE();
+
+	RTE_SET_USED(dev);
+
+	if (!strncmp(name, "dpaa_sec-", 9))
+		*caps = RTE_EVENT_CRYPTO_ADAPTER_DPAA_CAP;
+	else
+		return -1;
+
+	return 0;
+}
+
+static int
+dpaa_eventdev_crypto_queue_add_all(const struct rte_eventdev *dev,
+		const struct rte_cryptodev *cryptodev,
+		const struct rte_event *ev)
+{
+	struct dpaa_eventdev *priv = dev->data->dev_private;
+	uint8_t ev_qid = ev->queue_id;
+	u16 ch_id = priv->evq_info[ev_qid].ch_id;
+	int i, ret;
+
+	EVENTDEV_INIT_FUNC_TRACE();
+
+	for (i = 0; i < cryptodev->data->nb_queue_pairs; i++) {
+		ret = dpaa_sec_eventq_attach(cryptodev, i,
+				ch_id, ev);
+		if (ret) {
+			DPAA_EVENTDEV_ERR("dpaa_sec_eventq_attach failed: ret %d\n",
+				    ret);
+			goto fail;
+		}
+	}
+	return 0;
+fail:
+	for (i = (i - 1); i >= 0 ; i--)
+		dpaa_sec_eventq_detach(cryptodev, i);
+
+	return ret;
+}
+
+static int
+dpaa_eventdev_crypto_queue_add(const struct rte_eventdev *dev,
+		const struct rte_cryptodev *cryptodev,
+		int32_t rx_queue_id,
+		const struct rte_event *ev)
+{
+	struct dpaa_eventdev *priv = dev->data->dev_private;
+	uint8_t ev_qid = ev->queue_id;
+	u16 ch_id = priv->evq_info[ev_qid].ch_id;
+	int ret;
+
+	EVENTDEV_INIT_FUNC_TRACE();
+
+	if (rx_queue_id == -1)
+		return dpaa_eventdev_crypto_queue_add_all(dev,
+				cryptodev, ev);
+
+	ret = dpaa_sec_eventq_attach(cryptodev, rx_queue_id,
+			ch_id, ev);
+	if (ret) {
+		DPAA_EVENTDEV_ERR(
+			"dpaa_sec_eventq_attach failed: ret: %d\n", ret);
+		return ret;
+	}
+	return 0;
+}
+
+static int
+dpaa_eventdev_crypto_queue_del_all(const struct rte_eventdev *dev,
+			     const struct rte_cryptodev *cdev)
+{
+	int i, ret;
+
+	EVENTDEV_INIT_FUNC_TRACE();
+
+	RTE_SET_USED(dev);
+
+	for (i = 0; i < cdev->data->nb_queue_pairs; i++) {
+		ret = dpaa_sec_eventq_detach(cdev, i);
+		if (ret) {
+			DPAA_EVENTDEV_ERR(
+				"dpaa_sec_eventq_detach failed:ret %d\n", ret);
+			return ret;
+		}
+	}
+
+	return 0;
+}
+
+static int
+dpaa_eventdev_crypto_queue_del(const struct rte_eventdev *dev,
+			     const struct rte_cryptodev *cryptodev,
+			     int32_t rx_queue_id)
+{
+	int ret;
+
+	EVENTDEV_INIT_FUNC_TRACE();
+
+	if (rx_queue_id == -1)
+		return dpaa_eventdev_crypto_queue_del_all(dev, cryptodev);
+
+	ret = dpaa_sec_eventq_detach(cryptodev, rx_queue_id);
+	if (ret) {
+		DPAA_EVENTDEV_ERR(
+			"dpaa_sec_eventq_detach failed: ret: %d\n", ret);
+		return ret;
+	}
+
+	return 0;
+}
+
+static int
+dpaa_eventdev_crypto_start(const struct rte_eventdev *dev,
+			   const struct rte_cryptodev *cryptodev)
+{
+	EVENTDEV_INIT_FUNC_TRACE();
+
+	RTE_SET_USED(dev);
+	RTE_SET_USED(cryptodev);
+
+	return 0;
+}
+
+static int
+dpaa_eventdev_crypto_stop(const struct rte_eventdev *dev,
+			  const struct rte_cryptodev *cryptodev)
+{
+	EVENTDEV_INIT_FUNC_TRACE();
+
+	RTE_SET_USED(dev);
+	RTE_SET_USED(cryptodev);
+
+	return 0;
+}
+
 static struct rte_eventdev_ops dpaa_eventdev_ops = {
 	.dev_infos_get    = dpaa_event_dev_info_get,
 	.dev_configure    = dpaa_event_dev_configure,
@@ -738,6 +883,11 @@ static struct rte_eventdev_ops dpaa_eventdev_ops = {
 	.eth_rx_adapter_queue_del = dpaa_event_eth_rx_adapter_queue_del,
 	.eth_rx_adapter_start = dpaa_event_eth_rx_adapter_start,
 	.eth_rx_adapter_stop = dpaa_event_eth_rx_adapter_stop,
+	.crypto_adapter_caps_get = dpaa_eventdev_crypto_caps_get,
+	.crypto_adapter_queue_pair_add = dpaa_eventdev_crypto_queue_add,
+	.crypto_adapter_queue_pair_del = dpaa_eventdev_crypto_queue_del,
+	.crypto_adapter_start = dpaa_eventdev_crypto_start,
+	.crypto_adapter_stop = dpaa_eventdev_crypto_stop,
 };
 
 static int flag_check_handler(__rte_unused const char *key,
@@ -806,6 +956,8 @@ dpaa_event_dev_create(const char *name, const char *params)
 		eventdev->dequeue_burst = dpaa_event_dequeue_burst_intr;
 	}
 
+	RTE_LOG(INFO, PMD, "%s eventdev added", name);
+
 	/* For secondary processes, the primary has done all the work */
 	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
 		return 0;
diff --git a/drivers/event/dpaa/dpaa_eventdev.h b/drivers/event/dpaa/dpaa_eventdev.h
index 8134e6ba9..b8f247c61 100644
--- a/drivers/event/dpaa/dpaa_eventdev.h
+++ b/drivers/event/dpaa/dpaa_eventdev.h
@@ -40,6 +40,11 @@ do {						\
 	RTE_EVENT_ETH_RX_ADAPTER_CAP_MULTI_EVENTQ | \
 	RTE_EVENT_ETH_RX_ADAPTER_CAP_OVERRIDE_FLOW_ID)
 
+#define RTE_EVENT_CRYPTO_ADAPTER_DPAA_CAP \
+		(RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW | \
+		RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_QP_EV_BIND | \
+		RTE_EVENT_CRYPTO_ADAPTER_CAP_SESSION_PRIVATE_DATA)
+
 struct dpaa_eventq {
 	/* Channel Id */
 	uint16_t ch_id;
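Finally, a hypothetical check that the PMD advertises these capabilities
before the application commits to OP_NEW mode (device ids are
illustrative, not from this patch):

#include <rte_eventdev.h>

/* Query the crypto adapter capabilities the eventdev PMD reports for
 * a given cryptodev and verify the internal-port OP_NEW capability.
 */
static int
example_check_adapter_caps(uint8_t evdev_id, uint8_t cdev_id)
{
        uint32_t caps = 0;

        if (rte_event_crypto_adapter_caps_get(evdev_id, cdev_id, &caps))
                return -1;

        return (caps & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT_OP_NEW) ?
                0 : -1;
}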