From patchwork Wed Apr  4 06:56:39 2018
X-Patchwork-Submitter: "Gujjar, Abhinandan S"
X-Patchwork-Id: 37077
X-Patchwork-Delegate: jerinj@marvell.com
From: Abhinandan Gujjar
To: jerin.jacob@caviumnetworks.com, hemant.agrawal@nxp.com,
	akhil.goyal@nxp.com, dev@dpdk.org
Cc: pablo.de.lara.guarch@intel.com, declan.doherty@intel.com,
	narender.vangati@intel.com, abhinandan.gujjar@intel.com,
	nikhil.rao@intel.com, Gage Eads
Date: Wed, 4 Apr 2018 12:26:39 +0530
Message-Id: <1522824999-61614-1-git-send-email-abhinandan.gujjar@intel.com>
X-Mailer: git-send-email 1.9.1
Subject: [dpdk-dev] [dpdk-dev, v1, 2/5] eventdev: add crypto adapter
	implementation

Signed-off-by: Abhinandan Gujjar
Signed-off-by: Nikhil Rao
Signed-off-by: Gage Eads
---
 config/common_base                             |    1 +
 lib/Makefile                                   |    3 +-
 lib/librte_eventdev/Makefile                   |    3 +
 lib/librte_eventdev/rte_event_crypto_adapter.c | 1089 ++++++++++++++++++++++++
 lib/librte_eventdev/rte_event_crypto_adapter.h |  449 ++++++++++
 lib/librte_eventdev/rte_eventdev_version.map   |   12 +
 6 files changed, 1556 insertions(+), 1 deletion(-)
 create mode 100644 lib/librte_eventdev/rte_event_crypto_adapter.c
 create mode 100644 lib/librte_eventdev/rte_event_crypto_adapter.h

diff --git a/config/common_base b/config/common_base
index 7abf7c6..97023c7 100644
--- a/config/common_base
+++ b/config/common_base
@@ -550,6 +550,7 @@ CONFIG_RTE_LIBRTE_EVENTDEV=y
 CONFIG_RTE_LIBRTE_EVENTDEV_DEBUG=n
 CONFIG_RTE_EVENT_MAX_DEVS=16
 CONFIG_RTE_EVENT_MAX_QUEUES_PER_DEV=64
+CONFIG_RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE=32
 
 #
 # Compile PMD for skeleton event device
diff --git a/lib/Makefile b/lib/Makefile
index ec965a6..8553af7 100644
--- a/lib/Makefile
+++ b/lib/Makefile
@@ -31,7 +31,8 @@ DEPDIRS-librte_security := librte_eal librte_mempool librte_ring librte_mbuf
 DEPDIRS-librte_security += librte_ether
 DEPDIRS-librte_security += librte_cryptodev
 DIRS-$(CONFIG_RTE_LIBRTE_EVENTDEV) += librte_eventdev
-DEPDIRS-librte_eventdev := librte_eal librte_ring librte_ether librte_hash
+DEPDIRS-librte_eventdev := librte_eal librte_ring librte_ether librte_hash \
+			   librte_mempool librte_cryptodev
 DIRS-$(CONFIG_RTE_LIBRTE_RAWDEV) += librte_rawdev
 DEPDIRS-librte_rawdev := librte_eal librte_ether
 DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost
diff --git a/lib/librte_eventdev/Makefile b/lib/librte_eventdev/Makefile
index d27dd07..c065f98 100644
--- a/lib/librte_eventdev/Makefile
+++ b/lib/librte_eventdev/Makefile
@@ -15,11 +15,13 @@ CFLAGS += -DALLOW_EXPERIMENTAL_API
 CFLAGS += -O3
 CFLAGS += $(WERROR_FLAGS)
 LDLIBS += -lrte_eal -lrte_ring -lrte_ethdev -lrte_hash
+LDLIBS += -lrte_cryptodev -lrte_mempool
 
 # library source files
 SRCS-y += rte_eventdev.c
 SRCS-y += rte_event_ring.c
 SRCS-y += rte_event_eth_rx_adapter.c
+SRCS-y += rte_event_crypto_adapter.c
 
 # export include files
 SYMLINK-y-include += rte_eventdev.h
@@ -28,6 +30,7 @@ SYMLINK-y-include += rte_eventdev_pmd_pci.h
 SYMLINK-y-include += rte_eventdev_pmd_vdev.h
 SYMLINK-y-include += rte_event_ring.h
 SYMLINK-y-include += rte_event_eth_rx_adapter.h
+SYMLINK-y-include += rte_event_crypto_adapter.h
 
 # versioning export map
 EXPORT_MAP := rte_eventdev_version.map
diff --git a/lib/librte_eventdev/rte_event_crypto_adapter.c b/lib/librte_eventdev/rte_event_crypto_adapter.c
new file mode 100644
index 0000000..a9203bb
--- /dev/null
+++ b/lib/librte_eventdev/rte_event_crypto_adapter.c
@@ -0,0 +1,1089 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2018 Intel Corporation
+ */
+
+#include <string.h>
+#include <stdbool.h>
+#include <rte_common.h>
+#include <rte_dev.h>
+#include <rte_errno.h>
+#include <rte_cryptodev.h>
+#include <rte_cryptodev_pmd.h>
+#include <rte_log.h>
+#include <rte_malloc.h>
+#include <rte_service_component.h>
+
+#include "rte_eventdev.h"
+#include "rte_eventdev_pmd.h"
+#include "rte_event_crypto_adapter.h"
+
+#define BATCH_SIZE 32
+#define DEFAULT_MAX_NB 128
+#define CRYPTO_ADAPTER_NAME_LEN 32
+#define CRYPTO_ADAPTER_MEM_NAME_LEN 32
+#define CRYPTO_ADAPTER_MAX_EV_ENQ_RETRIES 100
+
+/* Flush an instance's enqueue buffers every CRYPTO_ENQ_FLUSH_THRESHOLD
+ * iterations of eca_crypto_adapter_enq_run()
+ */
+#define CRYPTO_ENQ_FLUSH_THRESHOLD 1024
+
+struct rte_event_crypto_adapter {
+	/* Event device identifier */
+	uint8_t eventdev_id;
+	/* Event port identifier */
+	uint8_t event_port_id;
+	/* Max crypto ops processed in any service function invocation */
+	uint32_t max_nb;
+	/* Lock to serialize config updates with service function */
+	rte_spinlock_t lock;
+	/* Next crypto device to be processed */
+	uint16_t next_cdev_id;
+	/* Per crypto device structure */
+	struct crypto_device_info *cdevs;
+	/* Loop counter to flush crypto ops */
+	uint16_t transmit_loop_count;
+	/* Per instance stats structure */
+	struct rte_event_crypto_adapter_stats crypto_stats;
+	/* Configuration callback for rte_service configuration */
+	rte_event_crypto_adapter_conf_cb conf_cb;
+	/* Configuration callback argument */
+	void *conf_arg;
+	/* Set if default_cb is being used */
+	int default_cb_arg;
+	/* Service initialization state */
+	uint8_t service_inited;
+	/* Memory allocation name */
+	char mem_name[CRYPTO_ADAPTER_MEM_NAME_LEN];
+	/* Socket identifier cached from eventdev */
+	int socket_id;
+	/* Per adapter EAL service */
+	uint32_t service_id;
+	/* No.
of queue pairs configured */ + uint16_t nb_qps; + /* Adapter mode */ + enum rte_event_crypto_adapter_mode mode; +} __rte_cache_aligned; + +/* Per crypto device information */ +struct crypto_device_info { + /* Pointer to cryptodev */ + struct rte_cryptodev *dev; + /* Pointer to queue pair info */ + struct crypto_queue_pair_info *qpairs; + /* Next queue pair to be processed */ + uint16_t next_queue_pair_id; + /* Set to indicate cryptodev->eventdev packet + * transfer uses a hardware mechanism + */ + uint8_t internal_event_port; + /* Set to indicate processing has been started */ + uint8_t dev_started; + /* If num_qpairs > 0, the start callback will + * be invoked if not already invoked + */ + uint16_t num_qpairs; +}; + +/* Per queue pair information */ +struct crypto_queue_pair_info { + /* Set to indicate queue pair is enabled */ + bool qp_enabled; + /* Pointer to hold rte_crypto_ops for batching */ + struct rte_crypto_op **op_buffer; + /* No of crypto ops accumulated */ + unsigned len; +}; + +static struct rte_event_crypto_adapter **event_crypto_adapter; + +static inline int +eca_valid_id(uint8_t id) +{ + return id < RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE; +} + +#define RTE_EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, retval) do { \ + if (!eca_valid_id(id)) { \ + RTE_EDEV_LOG_ERR("Invalid crypto adapter id = %d\n", id); \ + return retval; \ + } \ +} while (0) + +static int +eca_init(void) +{ + const char *name = "crypto_adapter_array"; + const struct rte_memzone *mz; + unsigned int sz; + + sz = sizeof(*event_crypto_adapter) * + RTE_EVENT_CRYPTO_ADAPTER_MAX_INSTANCE; + sz = RTE_ALIGN(sz, RTE_CACHE_LINE_SIZE); + + mz = rte_memzone_lookup(name); + if (mz == NULL) { + mz = rte_memzone_reserve_aligned(name, sz, rte_socket_id(), 0, + RTE_CACHE_LINE_SIZE); + if (mz == NULL) { + RTE_EDEV_LOG_ERR("failed to reserve memzone err = %" + PRId32, rte_errno); + return -rte_errno; + } + } + + event_crypto_adapter = mz->addr; + return 0; +} + +static inline struct rte_event_crypto_adapter * +eca_id_to_adapter(uint8_t id) +{ + return event_crypto_adapter ? 
+ event_crypto_adapter[id] : NULL; +} + +static int +eca_default_config_cb(uint8_t id, uint8_t dev_id, + struct rte_event_crypto_adapter_conf *conf, void *arg) +{ + struct rte_event_dev_config dev_conf; + struct rte_eventdev *dev; + uint8_t port_id; + int started; + int ret; + struct rte_event_port_conf *port_conf = arg; + struct rte_event_crypto_adapter *adapter = eca_id_to_adapter(id); + + dev = &rte_eventdevs[adapter->eventdev_id]; + dev_conf = dev->data->dev_conf; + + started = dev->data->dev_started; + if (started) + rte_event_dev_stop(dev_id); + port_id = dev_conf.nb_event_ports; + dev_conf.nb_event_ports += 1; + ret = rte_event_dev_configure(dev_id, &dev_conf); + if (ret) { + RTE_EDEV_LOG_ERR("failed to configure event dev %u\n", dev_id); + if (started) { + if (rte_event_dev_start(dev_id)) + return -EIO; + } + return ret; + } + + ret = rte_event_port_setup(dev_id, port_id, port_conf); + if (ret) { + RTE_EDEV_LOG_ERR("failed to setup event port %u\n", port_id); + return ret; + } + + conf->event_port_id = port_id; + conf->max_nb = DEFAULT_MAX_NB; + if (started) + ret = rte_event_dev_start(dev_id); + + adapter->default_cb_arg = 1; + return ret; +} + +int __rte_experimental +rte_event_crypto_adapter_create_ext(uint8_t id, uint8_t dev_id, + rte_event_crypto_adapter_conf_cb conf_cb, + void *conf_arg) +{ + struct rte_event_crypto_adapter *adapter; + char mem_name[CRYPTO_ADAPTER_NAME_LEN]; + int socket_id; + uint8_t i; + int ret; + + RTE_EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); + RTE_EVENTDEV_VALID_DEVID_OR_ERR_RET(dev_id, -EINVAL); + if (conf_cb == NULL) + return -EINVAL; + + if (event_crypto_adapter == NULL) { + ret = eca_init(); + if (ret) + return ret; + } + + adapter = eca_id_to_adapter(id); + if (adapter != NULL) { + RTE_EDEV_LOG_ERR("Crypto adapter id %u already exists!", id); + return -EEXIST; + } + + socket_id = rte_event_dev_socket_id(dev_id); + snprintf(mem_name, CRYPTO_ADAPTER_MEM_NAME_LEN, + "rte_event_crypto_adapter_%d", id); + + adapter = rte_zmalloc_socket(mem_name, sizeof(*adapter), + RTE_CACHE_LINE_SIZE, socket_id); + if (adapter == NULL) { + RTE_EDEV_LOG_ERR("Failed to get mem for event crypto adapter!"); + return -ENOMEM; + } + + adapter->eventdev_id = dev_id; + adapter->socket_id = socket_id; + adapter->conf_cb = conf_cb; + adapter->conf_arg = conf_arg; + strcpy(adapter->mem_name, mem_name); + adapter->cdevs = rte_zmalloc_socket(adapter->mem_name, + rte_cryptodev_count() * + sizeof(struct crypto_device_info), 0, + socket_id); + if (adapter->cdevs == NULL) { + RTE_EDEV_LOG_ERR("Failed to get mem for crypto devices\n"); + rte_free(adapter); + return -ENOMEM; + } + + rte_spinlock_init(&adapter->lock); + for (i = 0; i < rte_cryptodev_count(); i++) + adapter->cdevs[i].dev = rte_cryptodev_pmd_get_dev(i); + + event_crypto_adapter[id] = adapter; + + return 0; +} + + +int __rte_experimental +rte_event_crypto_adapter_create(uint8_t id, uint8_t cdev_id, + struct rte_event_port_conf *port_config) +{ + struct rte_event_port_conf *pc; + int ret; + + if (port_config == NULL) + return -EINVAL; + RTE_EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); + + pc = rte_malloc(NULL, sizeof(*pc), 0); + if (pc == NULL) + return -ENOMEM; + *pc = *port_config; + ret = rte_event_crypto_adapter_create_ext(id, cdev_id, + eca_default_config_cb, + pc); + if (ret) + rte_free(pc); + + return ret; +} + +int __rte_experimental +rte_event_crypto_adapter_free(uint8_t id) +{ + struct rte_event_crypto_adapter *adapter; + + RTE_EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); + + adapter 
= eca_id_to_adapter(id);
+	if (adapter == NULL)
+		return -EINVAL;
+
+	if (adapter->nb_qps) {
+		RTE_EDEV_LOG_ERR("%" PRIu16 " queue pairs not deleted",
+				adapter->nb_qps);
+		return -EBUSY;
+	}
+
+	if (adapter->default_cb_arg)
+		rte_free(adapter->conf_arg);
+	rte_free(adapter->cdevs);
+	rte_free(adapter);
+	event_crypto_adapter[id] = NULL;
+
+	return 0;
+}
+
+static inline unsigned
+eca_enq_to_cryptodev(struct rte_event_crypto_adapter *adapter,
+		struct rte_event *ev, unsigned cnt)
+{
+	struct rte_event_crypto_adapter_stats *stats = &adapter->crypto_stats;
+	union rte_event_crypto_metadata *m_data = NULL;
+	struct crypto_queue_pair_info *qp_info = NULL;
+	struct rte_crypto_op *crypto_op;
+	unsigned i, n = 0;
+	unsigned ret = 0;
+	uint16_t qp_id = 0;
+	unsigned len = 0;
+	uint8_t cdev_id = 0;
+
+	stats->event_dequeue_count += cnt;
+
+	for (i = 0; i < cnt; i++) {
+		crypto_op = ev[i].event_ptr;
+		if (crypto_op == NULL)
+			continue;
+		if (crypto_op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
+			m_data = rte_cryptodev_sym_session_get_private_data(
+					crypto_op->sym->session);
+			if (m_data == NULL) {
+				rte_pktmbuf_free(crypto_op->sym->m_src);
+				rte_crypto_op_free(crypto_op);
+				continue;
+			}
+
+			cdev_id = m_data->request_info.cdev_id;
+			qp_id = m_data->request_info.queue_pair_id;
+			qp_info = &adapter->cdevs[cdev_id].qpairs[qp_id];
+			if (!qp_info->qp_enabled) {
+				rte_pktmbuf_free(crypto_op->sym->m_src);
+				rte_crypto_op_free(crypto_op);
+				continue;
+			}
+			len = qp_info->len;
+			qp_info->op_buffer[len] = crypto_op;
+			len++;
+		} else if (crypto_op->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
+				crypto_op->private_data_offset) {
+			m_data = (union rte_event_crypto_metadata *)
+				 ((uint8_t *)crypto_op +
+					crypto_op->private_data_offset);
+			cdev_id = m_data->request_info.cdev_id;
+			qp_id = m_data->request_info.queue_pair_id;
+			qp_info = &adapter->cdevs[cdev_id].qpairs[qp_id];
+			if (!qp_info->qp_enabled) {
+				rte_pktmbuf_free(crypto_op->sym->m_src);
+				rte_crypto_op_free(crypto_op);
+				continue;
+			}
+			len = qp_info->len;
+			qp_info->op_buffer[len] = crypto_op;
+			len++;
+		} else {
+			rte_pktmbuf_free(crypto_op->sym->m_src);
+			rte_crypto_op_free(crypto_op);
+			continue;
+		}
+
+		if (len == BATCH_SIZE) {
+			struct rte_crypto_op **op_buffer = qp_info->op_buffer;
+			ret = rte_cryptodev_enqueue_burst(cdev_id,
+							qp_id,
+							op_buffer,
+							BATCH_SIZE);
+
+			stats->crypto_enq_count += ret;
+
+			while (ret < len) {
+				struct rte_crypto_op *op;
+				op = op_buffer[ret++];
+				stats->crypto_enq_fail++;
+				rte_pktmbuf_free(op->sym->m_src);
+				rte_crypto_op_free(op);
+			}
+
+			len = 0;
+		}
+
+		if (qp_info)
+			qp_info->len = len;
+		n += ret;
+	}
+
+	return n;
+}
+
+static unsigned
+eca_crypto_enq_flush(struct rte_event_crypto_adapter *adapter)
+{
+	struct rte_event_crypto_adapter_stats *stats = &adapter->crypto_stats;
+	struct crypto_device_info *curr_dev;
+	struct crypto_queue_pair_info *curr_queue;
+	struct rte_crypto_op **op_buffer;
+	struct rte_cryptodev *dev;
+	uint8_t cdev_id;
+	uint16_t qp;
+	uint16_t ret = 0;
+	uint16_t num_cdev = rte_cryptodev_count();
+
+	for (cdev_id = 0; cdev_id < num_cdev; cdev_id++) {
+		curr_dev = &adapter->cdevs[cdev_id];
+		if (curr_dev == NULL)
+			continue;
+		dev = curr_dev->dev;
+
+		for (qp = 0; qp < dev->data->nb_queue_pairs; qp++) {
+
+			curr_queue = &curr_dev->qpairs[qp];
+			if (!curr_queue->qp_enabled)
+				continue;
+
+			op_buffer = curr_queue->op_buffer;
+			ret = rte_cryptodev_enqueue_burst(cdev_id,
+							qp,
+							op_buffer,
+							curr_queue->len);
+			stats->crypto_enq_count += ret;
+
+			while (ret < curr_queue->len) {
+				struct rte_crypto_op *op;
+				op = op_buffer[ret++];
+				stats->crypto_enq_fail++;
+				rte_pktmbuf_free(op->sym->m_src);
+				rte_crypto_op_free(op);
+			}
+			curr_queue->len = 0;
+		}
+	}
+
+	return ret;
+}
+
+static int
+eca_crypto_adapter_enq_run(struct rte_event_crypto_adapter *adapter,
+			unsigned max_enq)
+{
+	struct rte_event_crypto_adapter_stats *stats = &adapter->crypto_stats;
+	struct rte_event ev[BATCH_SIZE];
+	unsigned nb_enq, nb_enqueued = 0;
+	uint16_t n;
+	uint8_t event_dev_id = adapter->eventdev_id;
+	uint8_t event_port_id = adapter->event_port_id;
+
+	if (adapter->mode == RTE_EVENT_CRYPTO_ADAPTER_DEQ_ONLY)
+		return 0;
+
+	for (nb_enq = 0; nb_enq < max_enq; nb_enq += n) {
+		stats->event_poll_count++;
+		n = rte_event_dequeue_burst(event_dev_id,
+				event_port_id, ev, BATCH_SIZE, 0);
+
+		if (!n)
+			break;
+
+		nb_enqueued += eca_enq_to_cryptodev(adapter, ev, n);
+	}
+
+	if ((++adapter->transmit_loop_count &
+		(CRYPTO_ENQ_FLUSH_THRESHOLD - 1)) == 0) {
+		nb_enqueued += eca_crypto_enq_flush(adapter);
+	}
+
+	return nb_enqueued;
+}
+
+static inline uint16_t
+eca_ops_enqueue_burst(struct rte_event_crypto_adapter *adapter,
+		struct rte_crypto_op **ops, uint16_t num)
+{
+	struct rte_event_crypto_adapter_stats *stats = &adapter->crypto_stats;
+	uint8_t event_dev_id = adapter->eventdev_id;
+	uint8_t event_port_id = adapter->event_port_id;
+	struct rte_event events[BATCH_SIZE];
+	uint16_t nb_enqueued;
+	unsigned retry;
+	unsigned i;
+
+	num = RTE_MIN(num, BATCH_SIZE);
+	for (i = 0; i < num; i++) {
+		if (ops[i]->sess_type == RTE_CRYPTO_OP_WITH_SESSION) {
+			union rte_event_crypto_metadata *m_data;
+			struct rte_event *ev = &events[i];
+			m_data = rte_cryptodev_sym_session_get_private_data(
+					ops[i]->sym->session);
+			if (m_data == NULL) {
+				rte_pktmbuf_free(ops[i]->sym->m_src);
+				rte_crypto_op_free(ops[i]);
+				continue;
+			}
+			rte_memcpy(ev, &m_data->response_info, sizeof(*ev));
+			ev->op = RTE_EVENT_OP_NEW;
+			ev->event_ptr = ops[i];
+			ev->event_type = RTE_EVENT_TYPE_CRYPTODEV;
+		} else if (ops[i]->sess_type == RTE_CRYPTO_OP_SESSIONLESS &&
+				ops[i]->private_data_offset) {
+			union rte_event_crypto_metadata *m_data;
+			struct rte_event *ev = &events[i];
+			m_data = (union rte_event_crypto_metadata *)
+				 ((uint8_t *)ops[i] +
+					ops[i]->private_data_offset);
+			rte_memcpy(ev, &m_data->response_info, sizeof(*ev));
+			ev->op = RTE_EVENT_OP_NEW;
+			ev->event_ptr = ops[i];
+			ev->event_type = RTE_EVENT_TYPE_CRYPTODEV;
+		} else {
+			rte_pktmbuf_free(ops[i]->sym->m_src);
+			rte_crypto_op_free(ops[i]);
+		}
+	}
+
+	nb_enqueued = 0;
+	retry = 0;
+	do {
+		nb_enqueued += rte_event_enqueue_burst(event_dev_id,
+						event_port_id,
+						&events[nb_enqueued],
+						num - nb_enqueued);
+	} while (retry++ < CRYPTO_ADAPTER_MAX_EV_ENQ_RETRIES &&
+			nb_enqueued < num);
+
+	stats->event_enqueue_count += nb_enqueued;
+	stats->event_enq_retry_count += retry - 1;
+
+	return nb_enqueued;
+}
+
+static inline unsigned
+eca_crypto_adapter_deq_run(struct rte_event_crypto_adapter *adapter,
+			unsigned int max_deq)
+{
+	struct rte_event_crypto_adapter_stats *stats = &adapter->crypto_stats;
+	struct crypto_device_info *curr_dev;
+	struct crypto_queue_pair_info *curr_queue;
+	struct rte_crypto_op *ops[BATCH_SIZE];
+	uint16_t n, k, nb_enqueued, nb_deq;
+	struct rte_cryptodev *dev;
+	uint8_t cdev_id;
+	uint16_t qp;
+	bool done;
+	uint16_t num_cdev = rte_cryptodev_count();
+
+	nb_deq = 0;
+	do {
+		unsigned queues = 0;
+		done = true;
+
+		for (cdev_id = adapter->next_cdev_id;
+			cdev_id < num_cdev; cdev_id++) {
+			curr_dev = &adapter->cdevs[cdev_id];
+			if (curr_dev == NULL)
+				continue;
+			dev = curr_dev->dev;
+
+			for
(qp = curr_dev->next_queue_pair_id; + queues < dev->data->nb_queue_pairs; + qp = (qp + 1) % dev->data->nb_queue_pairs, + queues++) { + + curr_queue = &curr_dev->qpairs[qp]; + if (!curr_queue->qp_enabled) + continue; + + nb_enqueued = 0; + + n = rte_cryptodev_dequeue_burst(cdev_id, qp, + ops, BATCH_SIZE); + if (!n) + continue; + + done = false; + stats->crypto_deq_count += n; + nb_enqueued = eca_ops_enqueue_burst(adapter, + ops, n); + + if (nb_enqueued == n) + goto check; + else { + /* Free mbufs and rte_crypto_ops */ + for (k = nb_enqueued; k < n; k++) { + struct rte_mbuf *m; + m = ops[k]->sym->m_src; + rte_pktmbuf_free(m); + rte_crypto_op_free(ops[k]); + } + } + + stats->event_enq_fail_count += n - nb_enqueued; + return (nb_deq + n); + +check: + nb_deq += n; + if (nb_enqueued < n || nb_deq > max_deq) { + curr_dev->next_queue_pair_id = (qp + 1) + % dev->data->nb_queue_pairs; + adapter->next_cdev_id = (cdev_id + 1) + % num_cdev; + return nb_deq; + } + } + } + } while (done == false); + return nb_deq; +} + +static void +eca_crypto_adapter_run(struct rte_event_crypto_adapter *adapter, + unsigned max_ops) +{ + while (max_ops) { + unsigned e_cnt, d_cnt; + + e_cnt = eca_crypto_adapter_deq_run(adapter, max_ops); + max_ops -= RTE_MIN(max_ops, e_cnt); + + d_cnt = eca_crypto_adapter_enq_run(adapter, max_ops); + max_ops -= RTE_MIN(max_ops, d_cnt); + + if (e_cnt == 0 && d_cnt == 0) + break; + + } +} + +static int +eca_service_func(void *args) +{ + struct rte_event_crypto_adapter *adapter = args; + + if (rte_spinlock_trylock(&adapter->lock) == 0) + return 0; + eca_crypto_adapter_run(adapter, adapter->max_nb); + rte_spinlock_unlock(&adapter->lock); + + return 0; +} + +static int +eca_init_service(struct rte_event_crypto_adapter *adapter, uint8_t id) +{ + struct rte_event_crypto_adapter_conf adapter_conf; + struct rte_service_spec service; + int ret; + + if (adapter->service_inited) + return 0; + + memset(&service, 0, sizeof(service)); + snprintf(service.name, CRYPTO_ADAPTER_NAME_LEN, + "rte_event_crypto_adapter_%d", id); + service.socket_id = adapter->socket_id; + service.callback = eca_service_func; + service.callback_userdata = adapter; + /* Service function handles locking for queue add/del updates */ + service.capabilities = RTE_SERVICE_CAP_MT_SAFE; + ret = rte_service_component_register(&service, &adapter->service_id); + if (ret) { + RTE_EDEV_LOG_ERR("failed to register service %s err = %" PRId32, + service.name, ret); + return ret; + } + + ret = adapter->conf_cb(id, adapter->eventdev_id, + &adapter_conf, adapter->conf_arg); + if (ret) { + RTE_EDEV_LOG_ERR("configuration callback failed err = %" PRId32, + ret); + return ret; + } + + adapter->max_nb = adapter_conf.max_nb; + adapter->event_port_id = adapter_conf.event_port_id; + adapter->service_inited = 1; + + return ret; +} + +static void +eca_update_qp_info(struct rte_event_crypto_adapter *adapter, + struct crypto_device_info *dev_info, + int32_t queue_pair_id, + uint8_t add) +{ + struct crypto_queue_pair_info *qp_info; + int enabled; + uint16_t i; + + if (dev_info->qpairs == NULL) + return; + + if (queue_pair_id == -1) { + for (i = 0; i < dev_info->dev->data->nb_queue_pairs; i++) + eca_update_qp_info(adapter, dev_info, i, add); + } else { + qp_info = &dev_info->qpairs[queue_pair_id]; + enabled = qp_info->qp_enabled; + if (add) { + adapter->nb_qps += !enabled; + dev_info->num_qpairs += !enabled; + } else { + adapter->nb_qps -= enabled; + dev_info->num_qpairs -= enabled; + } + qp_info->qp_enabled = !!add; + } +} + +static int 
+eca_add_queue_pair(struct rte_event_crypto_adapter *adapter, + uint8_t cdev_id, + int queue_pair_id) +{ + struct crypto_device_info *dev_info = &adapter->cdevs[cdev_id]; + struct crypto_queue_pair_info *qpairs; + uint32_t i; + + if (dev_info->qpairs == NULL) { + dev_info->qpairs = + rte_zmalloc_socket(adapter->mem_name, + dev_info->dev->data->nb_queue_pairs * + sizeof(struct crypto_queue_pair_info), + 0, adapter->socket_id); + if (dev_info->qpairs == NULL) + return -ENOMEM; + + qpairs = dev_info->qpairs; + qpairs->op_buffer = rte_zmalloc_socket(adapter->mem_name, + BATCH_SIZE * + sizeof(struct rte_crypto_op *), + 0, adapter->socket_id); + if (!qpairs->op_buffer) { + rte_free(qpairs); + return -ENOMEM; + } + } + + if (queue_pair_id == -1) { + for (i = 0; i < dev_info->dev->data->nb_queue_pairs; i++) + eca_update_qp_info(adapter, dev_info, i, 1); + } else + eca_update_qp_info(adapter, dev_info, + (uint16_t)queue_pair_id, 1); + + return 0; +} + +int __rte_experimental +rte_event_crypto_adapter_queue_pair_add(uint8_t id, + uint8_t cdev_id, + int32_t queue_pair_id) +{ + struct rte_event_crypto_adapter *adapter; + struct rte_eventdev *dev; + struct crypto_device_info *dev_info; + uint32_t cap; + int ret; + + RTE_EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); + + if (!rte_cryptodev_pmd_is_valid_dev(cdev_id)) { + RTE_EDEV_LOG_ERR("Invalid dev_id=%" PRIu8, cdev_id); + return -EINVAL; + } + + adapter = eca_id_to_adapter(id); + if (adapter == NULL) + return -EINVAL; + + dev = &rte_eventdevs[adapter->eventdev_id]; + ret = rte_event_crypto_adapter_caps_get(adapter->eventdev_id, + cdev_id, + &cap); + if (ret) { + RTE_EDEV_LOG_ERR("Failed to get adapter caps dev %" PRIu8 + "cdev %" PRIu8, id, cdev_id); + return ret; + } + + dev_info = &adapter->cdevs[cdev_id]; + + if (queue_pair_id != -1 && + (uint16_t)queue_pair_id >= dev_info->dev->data->nb_queue_pairs) { + RTE_EDEV_LOG_ERR("Invalid queue_pair_id %" PRIu16, + (uint16_t)queue_pair_id); + return -EINVAL; + } + + if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT) { + RTE_FUNC_PTR_OR_ERR_RET( + *dev->dev_ops->crypto_adapter_queue_pair_add, + -ENOTSUP); + if (dev_info->qpairs == NULL) { + dev_info->qpairs = + rte_zmalloc_socket(adapter->mem_name, + dev_info->dev->data->nb_queue_pairs * + sizeof(struct crypto_queue_pair_info), + 0, adapter->socket_id); + if (dev_info->qpairs == NULL) + return -ENOMEM; + } + + ret = (*dev->dev_ops->crypto_adapter_queue_pair_add)(dev, + dev_info->dev, + queue_pair_id); + if (ret == 0) { + eca_update_qp_info(adapter, + &adapter->cdevs[cdev_id], + queue_pair_id, + 1); + } + } else { + rte_spinlock_lock(&adapter->lock); + ret = eca_init_service(adapter, id); + if (ret == 0) + ret = eca_add_queue_pair(adapter, cdev_id, + queue_pair_id); + rte_spinlock_unlock(&adapter->lock); + } + + if (ret) + return ret; + + rte_service_component_runstate_set(adapter->service_id, 1); + + return 0; + +} + +int __rte_experimental +rte_event_crypto_adapter_queue_pair_del(uint8_t id, uint8_t cdev_id, + int32_t queue_pair_id) +{ + struct rte_event_crypto_adapter *adapter; + struct crypto_device_info *dev_info; + struct rte_eventdev *dev; + int ret = 0; + uint32_t cap; + uint16_t i; + + RTE_EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL); + + if (!rte_cryptodev_pmd_is_valid_dev(cdev_id)) { + RTE_EDEV_LOG_ERR("Invalid dev_id=%" PRIu8, cdev_id); + return -EINVAL; + } + + adapter = eca_id_to_adapter(id); + if (adapter == NULL) + return -EINVAL; + + dev = &rte_eventdevs[adapter->eventdev_id]; + ret = 
rte_event_crypto_adapter_caps_get(adapter->eventdev_id,
+						cdev_id,
+						&cap);
+	if (ret)
+		return ret;
+
+	dev_info = &adapter->cdevs[cdev_id];
+
+	if (queue_pair_id != -1 &&
+		(uint16_t)queue_pair_id >= dev_info->dev->data->nb_queue_pairs) {
+		RTE_EDEV_LOG_ERR("Invalid queue_pair_id %" PRIu16,
+				(uint16_t)queue_pair_id);
+		return -EINVAL;
+	}
+
+	if (cap & RTE_EVENT_CRYPTO_ADAPTER_CAP_INTERNAL_PORT) {
+		RTE_FUNC_PTR_OR_ERR_RET(
+			*dev->dev_ops->crypto_adapter_queue_pair_del,
+			-ENOTSUP);
+		ret = (*dev->dev_ops->crypto_adapter_queue_pair_del)(dev,
+						dev_info->dev,
+						queue_pair_id);
+		if (ret == 0) {
+			eca_update_qp_info(adapter,
+					&adapter->cdevs[cdev_id],
+					queue_pair_id,
+					0);
+			if (dev_info->num_qpairs == 0) {
+				rte_free(dev_info->qpairs);
+				dev_info->qpairs = NULL;
+			}
+		}
+	} else {
+		if (adapter->nb_qps == 0)
+			return 0;
+
+		rte_spinlock_lock(&adapter->lock);
+		if (queue_pair_id == -1) {
+			for (i = 0; i < dev_info->dev->data->nb_queue_pairs;
+				i++)
+				eca_update_qp_info(adapter, dev_info,
+							i, 0);
+		} else {
+			eca_update_qp_info(adapter, dev_info,
+						(uint16_t)queue_pair_id, 0);
+		}
+
+		if (dev_info->num_qpairs == 0) {
+			rte_free(dev_info->qpairs);
+			dev_info->qpairs = NULL;
+		}
+
+		rte_spinlock_unlock(&adapter->lock);
+		rte_service_component_runstate_set(adapter->service_id,
+						adapter->nb_qps);
+	}
+
+	return ret;
+}
+
+static int
+eca_adapter_ctrl(uint8_t id, int start)
+{
+	struct rte_event_crypto_adapter *adapter;
+	struct crypto_device_info *dev_info;
+	struct rte_eventdev *dev;
+	uint32_t i;
+	int use_service = 0;
+	int stop = !start;
+
+	RTE_EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+	adapter = eca_id_to_adapter(id);
+	if (adapter == NULL)
+		return -EINVAL;
+
+	dev = &rte_eventdevs[adapter->eventdev_id];
+
+	for (i = 0; i < rte_cryptodev_count(); i++) {
+		dev_info = &adapter->cdevs[i];
+		/* if start, check for num queue pairs */
+		if (start && !dev_info->num_qpairs)
+			continue;
+		/* if stop, check if dev has been started */
+		if (stop && !dev_info->dev_started)
+			continue;
+		use_service |= !dev_info->internal_event_port;
+		dev_info->dev_started = start;
+		if (dev_info->internal_event_port == 0)
+			continue;
+		start ? (*dev->dev_ops->crypto_adapter_start)(dev,
+						dev_info->dev) :
+			(*dev->dev_ops->crypto_adapter_stop)(dev,
+						dev_info->dev);
+	}
+
+	if (use_service)
+		rte_service_runstate_set(adapter->service_id, start);
+
+	return 0;
+}
+
+int __rte_experimental
+rte_event_crypto_adapter_start(uint8_t id,
+			enum rte_event_crypto_adapter_mode mode)
+{
+	struct rte_event_crypto_adapter *adapter;
+	int ret;
+
+	RTE_EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+	adapter = eca_id_to_adapter(id);
+	if (adapter == NULL)
+		return -EINVAL;
+
+	ret = eca_adapter_ctrl(id, 1);
+	if (!ret)
+		adapter->mode = mode;
+	return ret;
+}
+
+int __rte_experimental
+rte_event_crypto_adapter_stop(uint8_t id)
+{
+	return eca_adapter_ctrl(id, 0);
+}
+
+int __rte_experimental
+rte_event_crypto_adapter_stats_get(uint8_t id,
+			struct rte_event_crypto_adapter_stats *stats)
+{
+	struct rte_event_crypto_adapter *adapter;
+	struct rte_event_crypto_adapter_stats dev_stats_sum = { 0 };
+	struct rte_event_crypto_adapter_stats dev_stats;
+	struct rte_eventdev *dev;
+	struct crypto_device_info *dev_info;
+	uint32_t i;
+	int ret;
+
+	RTE_EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+	adapter = eca_id_to_adapter(id);
+	if (adapter == NULL || stats == NULL)
+		return -EINVAL;
+
+	dev = &rte_eventdevs[adapter->eventdev_id];
+	memset(stats, 0, sizeof(*stats));
+	for (i = 0; i < rte_cryptodev_count(); i++) {
+		dev_info = &adapter->cdevs[i];
+		if (dev_info->internal_event_port == 0 ||
+			dev->dev_ops->crypto_adapter_stats_get == NULL)
+			continue;
+		ret = (*dev->dev_ops->crypto_adapter_stats_get)(dev,
+						dev_info->dev,
+						&dev_stats);
+		if (ret)
+			continue;
+
+		dev_stats_sum.crypto_deq_count += dev_stats.crypto_deq_count;
+		dev_stats_sum.event_enqueue_count +=
+			dev_stats.event_enqueue_count;
+	}
+
+	if (adapter->service_inited)
+		*stats = adapter->crypto_stats;
+
+	stats->crypto_deq_count += dev_stats_sum.crypto_deq_count;
+	stats->event_enqueue_count += dev_stats_sum.event_enqueue_count;
+
+	return 0;
+}
+
+int __rte_experimental
+rte_event_crypto_adapter_stats_reset(uint8_t id)
+{
+	struct rte_event_crypto_adapter *adapter;
+	struct crypto_device_info *dev_info;
+	struct rte_eventdev *dev;
+	uint32_t i;
+
+	RTE_EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+	adapter = eca_id_to_adapter(id);
+	if (adapter == NULL)
+		return -EINVAL;
+
+	dev = &rte_eventdevs[adapter->eventdev_id];
+	for (i = 0; i < rte_cryptodev_count(); i++) {
+		dev_info = &adapter->cdevs[i];
+		if (dev_info->internal_event_port == 0 ||
+			dev->dev_ops->crypto_adapter_stats_reset == NULL)
+			continue;
+		(*dev->dev_ops->crypto_adapter_stats_reset)(dev,
+						dev_info->dev);
+	}
+
+	memset(&adapter->crypto_stats, 0, sizeof(adapter->crypto_stats));
+	return 0;
+}
+
+int __rte_experimental
+rte_event_crypto_adapter_service_id_get(uint8_t id, uint32_t *service_id)
+{
+	struct rte_event_crypto_adapter *adapter;
+
+	RTE_EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+	adapter = eca_id_to_adapter(id);
+	if (adapter == NULL || service_id == NULL)
+		return -EINVAL;
+
+	if (adapter->service_inited)
+		*service_id = adapter->service_id;
+
+	return adapter->service_inited ? 0 : -ESRCH;
+}
+
+int __rte_experimental
+rte_event_crypto_adapter_event_port_get(uint8_t id, uint8_t *event_port_id)
+{
+	struct rte_event_crypto_adapter *adapter;
+
+	RTE_EVENT_CRYPTO_ADAPTER_ID_VALID_OR_ERR_RET(id, -EINVAL);
+
+	adapter = eca_id_to_adapter(id);
+	if (adapter == NULL || event_port_id == NULL ||
+			!adapter->service_inited)
+		return -EINVAL;
+
+	*event_port_id = adapter->event_port_id;
+
+	return 0;
+}
diff --git a/lib/librte_eventdev/rte_event_crypto_adapter.h b/lib/librte_eventdev/rte_event_crypto_adapter.h
new file mode 100644
index 0000000..a974464
--- /dev/null
+++ b/lib/librte_eventdev/rte_event_crypto_adapter.h
@@ -0,0 +1,449 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2017-2018 Intel Corporation
+ */
+
+#ifndef _RTE_EVENT_CRYPTO_ADAPTER_
+#define _RTE_EVENT_CRYPTO_ADAPTER_
+
+/**
+ * This adapter adds support to enqueue crypto completions to an event device.
+ * The packet flow from the cryptodev to the event device can be accomplished
+ * using both SW and HW based transfer mechanisms.
+ * The adapter uses an EAL service core function for SW based packet transfer
+ * and uses the eventdev PMD functions to configure HW based packet transfer
+ * between the cryptodev and the event device.
+ *
+ * The application can choose to submit a crypto operation directly to a
+ * cryptodev, or send it to the cryptodev adapter via the eventdev; the
+ * cryptodev adapter then submits the crypto operation to the crypto
+ * device. The first mode is known as the dequeue only (DEQ) mode and
+ * the second as the enqueue-dequeue (ENQ_DEQ) mode. The mode can be
+ * specified when starting the adapter.
+ *
+ * In the ENQ_DEQ mode, the application needs to specify the cryptodev ID
+ * and queue pair ID (request information) needed to enqueue a crypto
+ * operation, in addition to the event information (response information)
+ * needed to enqueue an event after the crypto operation has completed.
+ * The request and response information are specified in the
+ * rte_crypto_op private_data. In the DEQ mode, the application is
+ * required to provide only the response information.
+ *
+ * In the ENQ_DEQ mode, the application sends crypto operations as events
+ * to the adapter, which dequeues the events and programs the cryptodev
+ * operations. The adapter then dequeues crypto completions from the
+ * cryptodev and enqueues events to the event device.
+ *
+ * The event crypto adapter provides common APIs to configure the packet flow
+ * from the cryptodev to event devices across both SW and HW based transfers.
+ * The crypto event adapter's functions are:
+ *  - rte_event_crypto_adapter_create_ext()
+ *  - rte_event_crypto_adapter_create()
+ *  - rte_event_crypto_adapter_free()
+ *  - rte_event_crypto_adapter_queue_pair_add()
+ *  - rte_event_crypto_adapter_queue_pair_del()
+ *  - rte_event_crypto_adapter_start()
+ *  - rte_event_crypto_adapter_stop()
+ *  - rte_event_crypto_adapter_stats_get()
+ *  - rte_event_crypto_adapter_stats_reset()
+ *
+ * The application creates an instance using rte_event_crypto_adapter_create()
+ * or rte_event_crypto_adapter_create_ext().
+ *
+ * Cryptodev queue pair addition/deletion is done using the
+ * rte_event_crypto_adapter_queue_pair_add()/del() APIs.
+ *
+ * The SW adapter or HW PMD uses rte_crypto_op::private_data_type to decide
+ * whether the request/response data is located in the crypto session/crypto
+ * security session or at an offset in the rte_crypto_op.
+ * rte_crypto_op::private_data_offset is used to locate the request/response
+ * in the rte_crypto_op. If the rte_crypto_op::private_data_type
+ * indicates that the data is in a session, then the rte_crypto_op::sess_type
+ * is used to decide whether the private data is in the crypto session or
+ * the security session.
+ *
+ * For session-less operations it is mandatory to place the request/response
+ * data with the rte_crypto_op, whereas with a crypto session/security session
+ * it can be placed either with the rte_crypto_op or in the session.
+ */
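+
+/*
+ * Illustrative sketch, not part of this patch's API: how an application
+ * might fill the adapter metadata for the ENQ_DEQ mode with a session
+ * based crypto op. All identifiers (cdev_id, qp_id, ev_qid, adapter_ev_qid,
+ * evdev_id, app_port_id, sess, op) are example values.
+ *
+ *	union rte_event_crypto_metadata m_data;
+ *	struct rte_event ev;
+ *
+ *	memset(&m_data, 0, sizeof(m_data));
+ *
+ *	// Response information: event fields used when the completion is
+ *	// enqueued back to the event device. Only the first 8 bytes are
+ *	// used; they overlap request_info.resv by design.
+ *	m_data.response_info.sched_type = RTE_SCHED_TYPE_ATOMIC;
+ *	m_data.response_info.queue_id = ev_qid;
+ *
+ *	// Request information: where the adapter should submit the op.
+ *	m_data.request_info.cdev_id = cdev_id;
+ *	m_data.request_info.queue_pair_id = qp_id;
+ *
+ *	// For a session based op, stash the metadata in the session
+ *	// private data area.
+ *	rte_cryptodev_sym_session_set_private_data(sess, &m_data,
+ *						sizeof(m_data));
+ *
+ *	// Send the op to the adapter's event port as a NEW event.
+ *	ev.event_ptr = op;
+ *	ev.queue_id = adapter_ev_qid;
+ *	ev.sched_type = RTE_SCHED_TYPE_ATOMIC;
+ *	ev.op = RTE_EVENT_OP_NEW;
+ *	rte_event_enqueue_burst(evdev_id, app_port_id, &ev, 1);
+ */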
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+#include <rte_service.h>
+
+#include "rte_eventdev.h"
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this enum may change without prior notice
+ *
+ * Crypto event adapter mode
+ */
+enum rte_event_crypto_adapter_mode {
+	RTE_EVENT_CRYPTO_ADAPTER_DEQ_ONLY = 1,
+	/**< Start only the dequeue part of the crypto adapter.
+	 * Packets dequeued from the cryptodev are enqueued to the eventdev
+	 * as new events and are treated as RTE_EVENT_OP_NEW.
+	 */
+	RTE_EVENT_CRYPTO_ADAPTER_ENQ_DEQ,
+	/**< Start both the enqueue & dequeue parts of the crypto adapter.
+	 * In this mode, the packet's event context is retained.
+	 */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * Crypto event request structure, filled by the application to
+ * provide event request information to the adapter.
+ */
+struct rte_event_crypto_request {
+	uint8_t resv[8];
+	/**< Overlaps with the first 8 bytes of struct rte_event
+	 * that encode the response event information
+	 */
+	uint16_t cdev_id;
+	/**< cryptodev ID to be used */
+	uint16_t queue_pair_id;
+	/**< cryptodev queue pair ID to be used */
+	uint32_t resv1;
+	/**< Reserved bits */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * Crypto event metadata structure, filled by the application
+ * to provide crypto request and event response information.
+ *
+ * If crypto events are enqueued using a HW mechanism, the cryptodev
+ * PMD will use the event response information to set up the event
+ * that is enqueued back to the eventdev after completion of the crypto
+ * operation. If the transfer is done by SW, the event response information
+ * will be used by the adapter.
+ */
+union rte_event_crypto_metadata {
+	struct rte_event_crypto_request request_info;
+	struct rte_event response_info;
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * Adapter configuration structure that the adapter configuration callback
+ * function is expected to fill out.
+ * @see rte_event_crypto_adapter_conf_cb
+ */
+struct rte_event_crypto_adapter_conf {
+	uint8_t event_port_id;
+	/**< Event port identifier, the adapter enqueues events to this
+	 * port.
+	 */
+	uint32_t max_nb;
+	/**< The adapter can return early if it has processed at least
+	 * max_nb crypto ops. This isn't treated as a requirement; batching
+	 * may cause the adapter to process more than max_nb crypto ops.
+	 */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Function type used for the adapter configuration callback. The callback is
+ * used to fill in members of struct rte_event_crypto_adapter_conf; this
+ * callback is invoked when creating a SW service for packet transfer from a
+ * cryptodev queue pair to the event device. The SW service is created within
+ * the rte_event_crypto_adapter_queue_pair_add() function if SW based packet
+ * transfers from cryptodev queue pair to the event device are required.
+ *
+ * @param id
+ *  Adapter identifier.
+ *
+ * @param dev_id
+ *  Event device identifier.
+ *
+ * @param [out] conf
+ *  Structure that needs to be populated by this callback.
+ *
+ * @param arg
+ *  Argument to the callback. This is the same as the conf_arg passed to
+ *  rte_event_crypto_adapter_create_ext().
+ */
+typedef int (*rte_event_crypto_adapter_conf_cb) (uint8_t id, uint8_t dev_id,
+			struct rte_event_crypto_adapter_conf *conf,
+			void *arg);
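+
+/*
+ * Illustrative sketch, not part of this patch's API: a minimal
+ * configuration callback that hands the adapter an event port already
+ * set up by the application. The conf_arg is assumed to point to that
+ * port's identifier; all names are example values.
+ *
+ *	static int
+ *	app_conf_cb(uint8_t id, uint8_t evdev_id,
+ *		struct rte_event_crypto_adapter_conf *conf, void *arg)
+ *	{
+ *		RTE_SET_USED(id);
+ *		RTE_SET_USED(evdev_id);
+ *		// Event port reserved by the application for the adapter.
+ *		conf->event_port_id = *(uint8_t *)arg;
+ *		// Upper bound on crypto ops processed per service run.
+ *		conf->max_nb = 128;
+ *		return 0;
+ *	}
+ */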
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this structure may change without prior notice
+ *
+ * A structure used to retrieve statistics for an event crypto adapter
+ * instance.
+ */
+struct rte_event_crypto_adapter_stats {
+	uint64_t event_poll_count;
+	/**< Event port poll count */
+	uint64_t event_dequeue_count;
+	/**< Event dequeue count */
+	uint64_t crypto_enq_count;
+	/**< Cryptodev enqueue count */
+	uint64_t crypto_enq_fail;
+	/**< Cryptodev enqueue failed count */
+	uint64_t crypto_deq_count;
+	/**< Cryptodev dequeue count */
+	uint64_t event_enqueue_count;
+	/**< Event enqueue count */
+	uint64_t event_enq_retry_count;
+	/**< Event enqueue retry count */
+	uint64_t event_enq_fail_count;
+	/**< Event enqueue fail count */
+};
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Create a new event crypto adapter with the specified identifier.
+ *
+ * @param id
+ *  Adapter identifier.
+ *
+ * @param cdev_id
+ *  Crypto device identifier.
+ *
+ * @param conf_cb
+ *  Callback function that fills in members of a
+ *  struct rte_event_crypto_adapter_conf passed into it.
+ *
+ * @param conf_arg
+ *  Argument that is passed to the conf_cb function.
+ *
+ * @return
+ *  - 0: Success
+ *  - <0: Error code on failure
+ */
+int __rte_experimental
+rte_event_crypto_adapter_create_ext(uint8_t id, uint8_t cdev_id,
+				rte_event_crypto_adapter_conf_cb conf_cb,
+				void *conf_arg);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Create a new event crypto adapter with the specified identifier.
+ * This function uses an internal configuration function that creates an
+ * event port. This default function reconfigures the event device with an
+ * additional event port and sets up the event port using the port_config
+ * parameter passed into this function. In case the application needs more
+ * control over the configuration of the service, it should use the
+ * rte_event_crypto_adapter_create_ext() version.
+ *
+ * @param id
+ *  Adapter identifier.
+ *
+ * @param cdev_id
+ *  Crypto device identifier.
+ *
+ * @param port_config
+ *  Argument of type *rte_event_port_conf* that is passed to the conf_cb
+ *  function.
+ *
+ * @return
+ *  - 0: Success
+ *  - <0: Error code on failure
+ */
+int __rte_experimental
+rte_event_crypto_adapter_create(uint8_t id, uint8_t cdev_id,
+				struct rte_event_port_conf *port_config);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Free an event crypto adapter.
+ *
+ * @param id
+ *  Adapter identifier.
+ *
+ * @return
+ *  - 0: Success
+ *  - <0: Error code on failure. If the adapter still has queue pairs
+ *	added to it, the function returns -EBUSY.
+ */
+int __rte_experimental
+rte_event_crypto_adapter_free(uint8_t id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Add a queue pair to an event crypto adapter.
+ *
+ * @param id
+ *  Adapter identifier.
+ *
+ * @param cdev_id
+ *  Cryptodev identifier.
+ *
+ * @param queue_pair_id
+ *  Cryptodev queue pair identifier. If queue_pair_id is set to -1, the
+ *  adapter adds all the preconfigured queue pairs to the instance.
+ *
+ * @return
+ *  - 0: Success, queue pair added correctly.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_crypto_adapter_queue_pair_add(uint8_t id,
+					uint8_t cdev_id,
+					int32_t queue_pair_id);
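+
+/*
+ * Illustrative usage sketch, not part of this patch's API: create an
+ * adapter, add all preconfigured queue pairs of a cryptodev and start it
+ * in DEQ only mode. The identifiers and port configuration are example
+ * values; error checks are omitted for brevity.
+ *
+ *	struct rte_event_port_conf port_conf = {
+ *		.new_event_threshold = 4096,
+ *		.dequeue_depth = 32,
+ *		.enqueue_depth = 32,
+ *	};
+ *	uint8_t adapter_id = 0, cdev_id = 0;
+ *
+ *	rte_event_crypto_adapter_create(adapter_id, cdev_id, &port_conf);
+ *	// queue_pair_id of -1 adds every preconfigured queue pair.
+ *	rte_event_crypto_adapter_queue_pair_add(adapter_id, cdev_id, -1);
+ *	rte_event_crypto_adapter_start(adapter_id,
+ *			RTE_EVENT_CRYPTO_ADAPTER_DEQ_ONLY);
+ */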
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Delete a queue pair from an event crypto adapter.
+ *
+ * @param id
+ *  Adapter identifier.
+ *
+ * @param cdev_id
+ *  Cryptodev identifier.
+ *
+ * @param queue_pair_id
+ *  Cryptodev queue pair identifier.
+ *
+ * @return
+ *  - 0: Success, queue pair deleted successfully.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_crypto_adapter_queue_pair_del(uint8_t id, uint8_t cdev_id,
+					int32_t queue_pair_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Start event crypto adapter.
+ *
+ * @param id
+ *  Adapter identifier.
+ *
+ * @param mode
+ *  Flag to indicate whether to start only the dequeue part or both the
+ *  enqueue & dequeue parts.
+ *
+ * @return
+ *  - 0: Success, adapter started successfully.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_crypto_adapter_start(uint8_t id,
+			enum rte_event_crypto_adapter_mode mode);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Stop event crypto adapter.
+ *
+ * @param id
+ *  Adapter identifier.
+ *
+ * @return
+ *  - 0: Success, adapter stopped successfully.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_crypto_adapter_stop(uint8_t id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Retrieve statistics for an adapter.
+ *
+ * @param id
+ *  Adapter identifier.
+ *
+ * @param [out] stats
+ *  A pointer to a structure used to retrieve statistics for an adapter.
+ *
+ * @return
+ *  - 0: Success, statistics retrieved successfully.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_crypto_adapter_stats_get(uint8_t id,
+			struct rte_event_crypto_adapter_stats *stats);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Reset statistics for an adapter.
+ *
+ * @param id
+ *  Adapter identifier.
+ *
+ * @return
+ *  - 0: Success, statistics reset successfully.
+ *  - <0: Error code on failure.
+ */
+int __rte_experimental
+rte_event_crypto_adapter_stats_reset(uint8_t id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Retrieve the service ID of an adapter. If the adapter doesn't use
+ * an rte_service function, this function returns -ESRCH.
+ *
+ * @param id
+ *  Adapter identifier.
+ *
+ * @param [out] service_id
+ *  A pointer to a uint32_t, to be filled in with the service id.
+ *
+ * @return
+ *  - 0: Success
+ *  - <0: Error code on failure. If the adapter doesn't use an rte_service
+ *	function, this function returns -ESRCH.
+ */
+int __rte_experimental
+rte_event_crypto_adapter_service_id_get(uint8_t id, uint32_t *service_id);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Retrieve the event port of an adapter.
+ *
+ * @param id
+ *  Adapter identifier.
+ *
+ * @param [out] event_port_id
+ *  Event port identifier used to link to the queue used in ENQ_DEQ mode.
+ *
+ * @return
+ *  - 0: Success
+ *  - <0: Error code on failure.
+ */ +int __rte_experimental +rte_event_crypto_adapter_event_port_get(uint8_t id, uint8_t *event_port_id); + +#ifdef __cplusplus +} +#endif +#endif /* _RTE_EVENT_CRYPTO_ADAPTER_ */ diff --git a/lib/librte_eventdev/rte_eventdev_version.map b/lib/librte_eventdev/rte_eventdev_version.map index 4396536..83ddbea 100644 --- a/lib/librte_eventdev/rte_eventdev_version.map +++ b/lib/librte_eventdev/rte_eventdev_version.map @@ -79,4 +79,16 @@ DPDK_18.05 { global: rte_event_dev_stop_flush_callback_register; + rte_event_crypto_adapter_create_ext; + rte_event_crypto_adapter_create; + rte_event_crypto_adapter_free; + rte_event_crypto_adapter_queue_pair_add; + rte_event_crypto_adapter_queue_pair_del; + rte_event_crypto_adapter_start; + rte_event_crypto_adapter_stop; + rte_event_crypto_adapter_stats_get; + rte_event_crypto_adapter_stats_reset; + rte_event_crypto_adapter_service_id_get; + rte_event_crypto_adapter_event_port_get; + } DPDK_18.02;