get:
Show a patch.

patch:
Update a patch (partial update of the writable fields).

put:
Update a patch (full update).

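For context, the record shown in the example below can also be driven programmatically. The following is a minimal sketch, not part of the Patchwork output: it assumes the public patches.dpdk.org instance and the Python requests library, and the API token is a hypothetical placeholder needed only for write operations (PUT/PATCH).

    # Minimal sketch: read a patch record and download its mbox via the
    # Patchwork REST API. Assumes https://patches.dpdk.org and `requests`;
    # TOKEN below is a placeholder, required only for PUT/PATCH.
    import requests

    BASE = "https://patches.dpdk.org/api"
    PATCH_ID = 7379

    # GET /api/patches/{id}/ returns the JSON document shown below.
    resp = requests.get(f"{BASE}/patches/{PATCH_ID}/")
    resp.raise_for_status()
    patch = resp.json()
    print(patch["name"], patch["state"])

    # The "mbox" field points at the raw patch, suitable for `git am`.
    mbox = requests.get(patch["mbox"])
    mbox.raise_for_status()
    with open(f"patch-{PATCH_ID}.mbox", "wb") as f:
        f.write(mbox.content)

    # Updating a patch (e.g. its state) uses PATCH with token authentication;
    # only a maintainer or delegate may change the state.
    # requests.patch(
    #     f"{BASE}/patches/{PATCH_ID}/",
    #     headers={"Authorization": "Token TOKEN"},
    #     json={"state": "accepted"},
    # )
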
GET /api/patches/7379/?format=api
HTTP 200 OK
Allow: GET, PUT, PATCH, HEAD, OPTIONS
Content-Type: application/json
Vary: Accept

{
    "id": 7379,
    "url": "https://patches.dpdk.org/api/patches/7379/?format=api",
    "web_url": "https://patches.dpdk.org/project/dpdk/patch/1443826867-21004-2-git-send-email-declan.doherty@intel.com/",
    "project": {
        "id": 1,
        "url": "https://patches.dpdk.org/api/projects/1/?format=api",
        "name": "DPDK",
        "link_name": "dpdk",
        "list_id": "dev.dpdk.org",
        "list_email": "dev@dpdk.org",
        "web_url": "http://core.dpdk.org",
        "scm_url": "git://dpdk.org/dpdk",
        "webscm_url": "http://git.dpdk.org/dpdk",
        "list_archive_url": "https://inbox.dpdk.org/dev",
        "list_archive_url_format": "https://inbox.dpdk.org/dev/{}",
        "commit_url_format": ""
    },
    "msgid": "<1443826867-21004-2-git-send-email-declan.doherty@intel.com>",
    "list_archive_url": "https://inbox.dpdk.org/dev/1443826867-21004-2-git-send-email-declan.doherty@intel.com",
    "date": "2015-10-02T23:01:02",
    "name": "[dpdk-dev,1/6] cryptodev: Initial DPDK Crypto APIs and device framework release",
    "commit_ref": null,
    "pull_url": null,
    "state": "superseded",
    "archived": true,
    "hash": "caba22eced2c2561da877e67307c64e141c12168",
    "submitter": {
        "id": 11,
        "url": "https://patches.dpdk.org/api/people/11/?format=api",
        "name": "Doherty, Declan",
        "email": "declan.doherty@intel.com"
    },
    "delegate": null,
    "mbox": "https://patches.dpdk.org/project/dpdk/patch/1443826867-21004-2-git-send-email-declan.doherty@intel.com/mbox/",
    "series": [],
    "comments": "https://patches.dpdk.org/api/patches/7379/comments/",
    "check": "pending",
    "checks": "https://patches.dpdk.org/api/patches/7379/checks/",
    "tags": {},
    "related": [],
    "headers": {
        "Return-Path": "<dev-bounces@dpdk.org>",
        "X-Original-To": "patchwork@dpdk.org",
        "Delivered-To": "patchwork@dpdk.org",
        "Received": [
            "from [92.243.14.124] (localhost [IPv6:::1])\n\tby dpdk.org (Postfix) with ESMTP id C32A68E70;\n\tSat,  3 Oct 2015 00:54:17 +0200 (CEST)",
            "from mga03.intel.com (mga03.intel.com [134.134.136.65])\n\tby dpdk.org (Postfix) with ESMTP id 03423532D\n\tfor <dev@dpdk.org>; Sat,  3 Oct 2015 00:54:14 +0200 (CEST)",
            "from orsmga001.jf.intel.com ([10.7.209.18])\n\tby orsmga103.jf.intel.com with ESMTP; 02 Oct 2015 15:54:14 -0700",
            "from unknown (HELO dwdohert-dpdk-fedora-20.ir.intel.com)\n\t([163.33.213.96])\n\tby orsmga001.jf.intel.com with ESMTP; 02 Oct 2015 15:54:12 -0700"
        ],
        "X-ExtLoop1": "1",
        "X-IronPort-AV": "E=Sophos;i=\"5.17,625,1437462000\"; d=\"scan'208\";a=\"783186004\"",
        "From": "Declan Doherty <declan.doherty@intel.com>",
        "To": "dev@dpdk.org",
        "Date": "Sat,  3 Oct 2015 00:01:02 +0100",
        "Message-Id": "<1443826867-21004-2-git-send-email-declan.doherty@intel.com>",
        "X-Mailer": "git-send-email 2.4.3",
        "In-Reply-To": "<1443826867-21004-1-git-send-email-declan.doherty@intel.com>",
        "References": "<1443826867-21004-1-git-send-email-declan.doherty@intel.com>",
        "Subject": "[dpdk-dev] [PATCH 1/6] cryptodev: Initial DPDK Crypto APIs and\n\tdevice framework release",
        "X-BeenThere": "dev@dpdk.org",
        "X-Mailman-Version": "2.1.15",
        "Precedence": "list",
        "List-Id": "patches and discussions about DPDK <dev.dpdk.org>",
        "List-Unsubscribe": "<http://dpdk.org/ml/options/dev>,\n\t<mailto:dev-request@dpdk.org?subject=unsubscribe>",
        "List-Archive": "<http://dpdk.org/ml/archives/dev/>",
        "List-Post": "<mailto:dev@dpdk.org>",
        "List-Help": "<mailto:dev-request@dpdk.org?subject=help>",
        "List-Subscribe": "<http://dpdk.org/ml/listinfo/dev>,\n\t<mailto:dev-request@dpdk.org?subject=subscribe>",
        "Errors-To": "dev-bounces@dpdk.org",
        "Sender": "\"dev\" <dev-bounces@dpdk.org>"
    },
    "content": "Co-authored-by: Des O Dea <des.j.o.dea@intel.com>\nCo-authored-by: John Griffin <john.griffin@intel.com>\nCo-authored-by: Fiona Trahe <fiona.trahe@intel.com>\n\nThis patch contains the initial proposed APIs and device framework for\nintegrating crypto packet processing into DPDK.\n\nfeatures include:\n - Crypto device configuration / management APIs\n - Definitions of supported cipher algorithms and operations.\n - Definitions of supported hash/authentication algorithms and\n   operations.\n - Crypto session management APIs\n - Crypto operation data structures and APIs allocation of crypto\n   operation structure used to specify the crypto operations to\n   be performed  on a particular mbuf.\n - Extension of mbuf to contain crypto operation data pointer and\n   extra flags.\n - Burst enqueue / dequeue APIs for processing of crypto operations.\n\nchanges from RFC:\n - Session management API changes to support specification of crypto\n   transform(xform) chains using linked list of xforms.\n - Changes to the crypto operation struct as a result of session\n   management changes.\n - Some movement of common MACROS shared by cryptodevs and ethdevs to\n   common headers\n\nSigned-off-by: Declan Doherty <declan.doherty@intel.com>\n---\n config/common_bsdapp                        |    7 +\n config/common_linuxapp                      |   10 +-\n doc/api/doxy-api-index.md                   |    1 +\n doc/api/doxy-api.conf                       |    1 +\n lib/Makefile                                |    1 +\n lib/librte_cryptodev/Makefile               |   60 ++\n lib/librte_cryptodev/rte_crypto.h           |  720 +++++++++++++++++\n lib/librte_cryptodev/rte_crypto_version.map |   40 +\n lib/librte_cryptodev/rte_cryptodev.c        | 1126 +++++++++++++++++++++++++++\n lib/librte_cryptodev/rte_cryptodev.h        |  592 ++++++++++++++\n lib/librte_cryptodev/rte_cryptodev_pmd.h    |  577 ++++++++++++++\n lib/librte_eal/common/include/rte_common.h  |   15 +\n lib/librte_eal/common/include/rte_eal.h     |   14 +\n lib/librte_eal/common/include/rte_log.h     |    1 +\n lib/librte_eal/common/include/rte_memory.h  |   14 +-\n lib/librte_ether/rte_ethdev.c               |   30 -\n lib/librte_mbuf/rte_mbuf.c                  |    1 +\n lib/librte_mbuf/rte_mbuf.h                  |   53 +-\n mk/rte.app.mk                               |    1 +\n 19 files changed, 3230 insertions(+), 34 deletions(-)\n create mode 100644 lib/librte_cryptodev/Makefile\n create mode 100644 lib/librte_cryptodev/rte_crypto.h\n create mode 100644 lib/librte_cryptodev/rte_crypto_version.map\n create mode 100644 lib/librte_cryptodev/rte_cryptodev.c\n create mode 100644 lib/librte_cryptodev/rte_cryptodev.h\n create mode 100644 lib/librte_cryptodev/rte_cryptodev_pmd.h",
    "diff": "diff --git a/config/common_bsdapp b/config/common_bsdapp\nindex b37dcf4..3313a8e 100644\n--- a/config/common_bsdapp\n+++ b/config/common_bsdapp\n@@ -147,6 +147,13 @@ CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16\n CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y\n \n #\n+# Compile generic Crypto device library\n+#\n+CONFIG_RTE_LIBRTE_CRYPTODEV=y\n+CONFIG_RTE_LIBRTE_CRYPTODEV_DEBUG=y\n+CONFIG_RTE_MAX_CRYPTOPORTS=32\n+\n+#\n # Support NIC bypass logic\n #\n CONFIG_RTE_NIC_BYPASS=n\ndiff --git a/config/common_linuxapp b/config/common_linuxapp\nindex 0de43d5..4ba0299 100644\n--- a/config/common_linuxapp\n+++ b/config/common_linuxapp\n@@ -1,6 +1,6 @@\n #   BSD LICENSE\n #\n-#   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.\n+#   Copyright(c) 2010-2015 Intel Corporation. All rights reserved.\n #   All rights reserved.\n #\n #   Redistribution and use in source and binary forms, with or without\n@@ -145,6 +145,14 @@ CONFIG_RTE_ETHDEV_QUEUE_STAT_CNTRS=16\n CONFIG_RTE_ETHDEV_RXTX_CALLBACKS=y\n \n #\n+# Compile generic Crypto device library\n+#\n+CONFIG_RTE_LIBRTE_CRYPTODEV=y\n+CONFIG_RTE_LIBRTE_CRYPTODEV_DEBUG=y\n+CONFIG_RTE_CRYPTO_MAX_DEVS=64\n+CONFIG_RTE_CRYPTO_MAX_XFORM_CHAIN_LENGTH=2\n+\n+#\n # Support NIC bypass logic\n #\n CONFIG_RTE_NIC_BYPASS=n\ndiff --git a/doc/api/doxy-api-index.md b/doc/api/doxy-api-index.md\nindex 72ac3c4..bdb6130 100644\n--- a/doc/api/doxy-api-index.md\n+++ b/doc/api/doxy-api-index.md\n@@ -39,6 +39,7 @@ There are many libraries, so their headers may be grouped by topics:\n   [dev]                (@ref rte_dev.h),\n   [ethdev]             (@ref rte_ethdev.h),\n   [ethctrl]            (@ref rte_eth_ctrl.h),\n+  [cryptodev]          (@ref rte_cryptodev.h),\n   [devargs]            (@ref rte_devargs.h),\n   [bond]               (@ref rte_eth_bond.h),\n   [vhost]              (@ref rte_virtio_net.h),\ndiff --git a/doc/api/doxy-api.conf b/doc/api/doxy-api.conf\nindex cfb4627..7244b8f 100644\n--- a/doc/api/doxy-api.conf\n+++ b/doc/api/doxy-api.conf\n@@ -37,6 +37,7 @@ INPUT                   = doc/api/doxy-api-index.md \\\n                           lib/librte_cfgfile \\\n                           lib/librte_cmdline \\\n                           lib/librte_compat \\\n+                          lib/librte_cryptodev \\\n                           lib/librte_distributor \\\n                           lib/librte_ether \\\n                           lib/librte_hash \\\ndiff --git a/lib/Makefile b/lib/Makefile\nindex 9727b83..4c5c1b4 100644\n--- a/lib/Makefile\n+++ b/lib/Makefile\n@@ -40,6 +40,7 @@ DIRS-$(CONFIG_RTE_LIBRTE_TIMER) += librte_timer\n DIRS-$(CONFIG_RTE_LIBRTE_CFGFILE) += librte_cfgfile\n DIRS-$(CONFIG_RTE_LIBRTE_CMDLINE) += librte_cmdline\n DIRS-$(CONFIG_RTE_LIBRTE_ETHER) += librte_ether\n+DIRS-$(CONFIG_RTE_LIBRTE_CRYPTODEV) += librte_cryptodev\n DIRS-$(CONFIG_RTE_LIBRTE_VHOST) += librte_vhost\n DIRS-$(CONFIG_RTE_LIBRTE_HASH) += librte_hash\n DIRS-$(CONFIG_RTE_LIBRTE_LPM) += librte_lpm\ndiff --git a/lib/librte_cryptodev/Makefile b/lib/librte_cryptodev/Makefile\nnew file mode 100644\nindex 0000000..6ed9b76\n--- /dev/null\n+++ b/lib/librte_cryptodev/Makefile\n@@ -0,0 +1,60 @@\n+#   BSD LICENSE\n+#\n+#   Copyright(c) 2015 Intel Corporation. 
All rights reserved.\n+#\n+#   Redistribution and use in source and binary forms, with or without\n+#   modification, are permitted provided that the following conditions\n+#   are met:\n+#\n+#     * Redistributions of source code must retain the above copyright\n+#       notice, this list of conditions and the following disclaimer.\n+#     * Redistributions in binary form must reproduce the above copyright\n+#       notice, this list of conditions and the following disclaimer in\n+#       the documentation and/or other materials provided with the\n+#       distribution.\n+#     * Neither the name of Intel Corporation nor the names of its\n+#       contributors may be used to endorse or promote products derived\n+#       from this software without specific prior written permission.\n+#\n+#   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n+#   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n+#   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n+#   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n+#   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n+#   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n+#   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n+#   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n+#   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n+#   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n+#   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n+\n+include $(RTE_SDK)/mk/rte.vars.mk\n+\n+# library name\n+LIB = libcryptodev.a\n+\n+# library version\n+LIBABIVER := 1\n+\n+# build flags\n+CFLAGS += -O3\n+CFLAGS += $(WERROR_FLAGS)\n+\n+# library source files\n+SRCS-y += rte_cryptodev.c\n+\n+# export include files\n+SYMLINK-y-include += rte_crypto.h\n+SYMLINK-y-include += rte_cryptodev.h\n+SYMLINK-y-include += rte_cryptodev_pmd.h\n+\n+# versioning export map\n+EXPORT_MAP := rte_cryptodev_version.map\n+\n+# library dependencies\n+DEPDIRS-y += lib/librte_eal\n+DEPDIRS-y += lib/librte_mempool\n+DEPDIRS-y += lib/librte_ring\n+DEPDIRS-y += lib/librte_mbuf\n+\n+include $(RTE_SDK)/mk/rte.lib.mk\ndiff --git a/lib/librte_cryptodev/rte_crypto.h b/lib/librte_cryptodev/rte_crypto.h\nnew file mode 100644\nindex 0000000..3fe4db7\n--- /dev/null\n+++ b/lib/librte_cryptodev/rte_crypto.h\n@@ -0,0 +1,720 @@\n+/*-\n+ *   BSD LICENSE\n+ *\n+ *   Copyright(c) 2015 Intel Corporation. 
All rights reserved.\n+ *\n+ *   Redistribution and use in source and binary forms, with or without\n+ *   modification, are permitted provided that the following conditions\n+ *   are met:\n+ *\n+ *     * Redistributions of source code must retain the above copyright\n+ *       notice, this list of conditions and the following disclaimer.\n+ *     * Redistributions in binary form must reproduce the above copyright\n+ *       notice, this list of conditions and the following disclaimer in\n+ *       the documentation and/or other materials provided with the\n+ *       distribution.\n+ *     * Neither the name of Intel Corporation nor the names of its\n+ *       contributors may be used to endorse or promote products derived\n+ *       from this software without specific prior written permission.\n+ *\n+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n+ *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n+ */\n+\n+#ifndef _RTE_CRYPTO_H_\n+#define _RTE_CRYPTO_H_\n+\n+/**\n+ * @file rte_crypto.h\n+ *\n+ * RTE Cryptographic Definitions\n+ *\n+ * Defines symmetric cipher and authentication algorithms and modes, as well\n+ * as supported symmetric crypto operation combinations.\n+ */\n+\n+#ifdef __cplusplus\n+extern \"C\" {\n+#endif\n+\n+#include <rte_mbuf.h>\n+#include <rte_memory.h>\n+#include <rte_mempool.h>\n+\n+/**\n+ * This enumeration lists different types of crypto operations supported by rte\n+ * crypto devices. The operation type is defined during session registration and\n+ * cannot be changed for a session once it has been setup, or if using a\n+ * session-less crypto operation it is defined within the crypto operation\n+ * op_params.\n+ */\n+enum rte_crypto_operation_chain {\n+\tRTE_CRYPTO_SYM_OP_CIPHER_ONLY,\n+\t/**< Cipher only operation on the data */\n+\tRTE_CRYPTO_SYM_OP_HASH_ONLY,\n+\t/**< Hash only operation on the data */\n+\tRTE_CRYPTO_SYM_OPCHAIN_HASH_CIPHER,\n+\t/**<\n+\t * Chain a hash followed by any cipher operation.\n+\t *\n+\t * If it is required that the result of the hash (i.e. the digest)\n+\t * is going to be included in the data to be ciphered, then:\n+\t *\n+\t * - The digest MUST be placed in the destination buffer at the\n+\t *   location corresponding to the end of the data region to be hashed\n+\t *   (hash_start_offset + message length to hash),  i.e. 
there must be\n+\t *   no gaps between the start of the digest and the end of the data\n+\t *   region to be hashed.\n+\t *\n+\t * - The message length to cipher member of the rte_crypto_op_data\n+\t *   structure must be equal to the overall length of the plain text,\n+\t *   the digest length and any (optional) trailing data that is to be\n+\t *   included.\n+\t *\n+\t * - The message length to cipher must be a multiple to the block\n+\t *   size if a block cipher is being used - the implementation does not\n+\t *   pad.\n+\t */\n+\tRTE_CRYPTO_SYM_OPCHAIN_CIPHER_HASH,\n+\t/**<\n+\t * Chain any cipher followed by any hash operation.The hash operation\n+\t * will be performed on the ciphertext resulting from the cipher\n+\t * operation.\n+\t */\n+};\n+\n+/** Symmetric Cipher Algorithms */\n+enum rte_crypto_cipher_algorithm {\n+\tRTE_CRYPTO_SYM_CIPHER_NULL = 1,\n+\t/**< NULL cipher algorithm. No mode applies to the NULL algorithm. */\n+\n+\tRTE_CRYPTO_SYM_CIPHER_3DES_CBC,\n+\t/**< Triple DES algorithm in CBC mode */\n+\tRTE_CRYPTO_SYM_CIPHER_3DES_CTR,\n+\t/**< Triple DES algorithm in CTR mode */\n+\tRTE_CRYPTO_SYM_CIPHER_3DES_ECB,\n+\t/**< Triple DES algorithm in ECB mode */\n+\n+\tRTE_CRYPTO_SYM_CIPHER_AES_CBC,\n+\t/**< AES algorithm in CBC mode */\n+\tRTE_CRYPTO_SYM_CIPHER_AES_CCM,\n+\t/**< AES algorithm in CCM mode. When this cipher algorithm is used the\n+\t * *RTE_CRYPTO_SYM_HASH_AES_CCM* element of the\n+\t * *rte_crypto_hash_algorithm* enum MUST be used to set up the related\n+\t * *rte_crypto_hash_setup_data* structure in the session context or in\n+\t * the op_params of the crypto operation structure in the case of a\n+\t * session-less crypto operation\n+\t */\n+\tRTE_CRYPTO_SYM_CIPHER_AES_CTR,\n+\t/**< AES algorithm in Counter mode */\n+\tRTE_CRYPTO_SYM_CIPHER_AES_ECB,\n+\t/**< AES algorithm in ECB mode */\n+\tRTE_CRYPTO_SYM_CIPHER_AES_F8,\n+\t/**< AES algorithm in F8 mode */\n+\tRTE_CRYPTO_SYM_CIPHER_AES_GCM,\n+\t/**< AES algorithm in GCM mode. When this cipher algorithm is used the\n+\t * *RTE_CRYPTO_SYM_HASH_AES_GCM* element of the\n+\t * *rte_crypto_hash_algorithm* enum MUST be used to set up the related\n+\t * *rte_crypto_hash_setup_data* structure in the session context or in\n+\t * the op_params of the crypto operation structure in the case of a\n+\t * session-less crypto operation.\n+\t */\n+\tRTE_CRYPTO_SYM_CIPHER_AES_XTS,\n+\t/**< AES algorithm in XTS mode */\n+\n+\tRTE_CRYPTO_SYM_CIPHER_ARC4,\n+\t/**< (A)RC4 cipher algorithm */\n+\n+\tRTE_CRYPTO_SYM_CIPHER_KASUMI_F8,\n+\t/**< Kasumi algorithm in F8 mode */\n+\n+\tRTE_CRYPTO_SYM_CIPHER_SNOW3G_UEA2,\n+\t/**< SNOW3G algorithm in UEA2 mode */\n+\n+\tRTE_CRYPTO_SYM_CIPHER_ZUC_EEA3\n+\t/**< ZUC algorithm in EEA3 mode */\n+};\n+\n+/** Symmetric Cipher Direction */\n+enum rte_crypto_cipher_operation {\n+\tRTE_CRYPTO_SYM_CIPHER_OP_ENCRYPT,\n+\t/**< Encrypt cipher operation */\n+\tRTE_CRYPTO_SYM_CIPHER_OP_DECRYPT\n+\t/**< Decrypt cipher operation */\n+};\n+\n+/** Crypto key structure */\n+struct rte_crypto_key {\n+\tuint8_t *data;\t/**< pointer to key data */\n+\tphys_addr_t phys_addr;\n+\tsize_t length;\t/**< key length in bytes */\n+};\n+\n+/**\n+ * Symmetric Cipher Setup Data.\n+ *\n+ * This structure contains data relating to Cipher (Encryption and Decryption)\n+ *  use to create a session.\n+ */\n+struct rte_crypto_cipher_xform {\n+\tenum rte_crypto_cipher_operation op;\n+\t/**< This parameter determines if the cipher operation is an encrypt or\n+\t * a decrypt operation. 
For the RC4 algorithm and the F8/CTR modes,\n+\t * only encrypt operations are valid. */\n+\tenum rte_crypto_cipher_algorithm algo;\n+\t/**< Cipher algorithm */\n+\n+\tstruct rte_crypto_key key;\n+\t/**< Cipher key\n+\t *\n+\t * For the RTE_CRYPTO_SYM_CIPHER_AES_F8 mode of operation, key.data will\n+\t * point to a concatenation of the AES encryption key followed by a\n+\t * keymask. As per RFC3711, the keymask should be padded with trailing\n+\t * bytes to match the length of the encryption key used.\n+\t *\n+\t * For AES-XTS mode of operation, two keys must be provided and\n+\t * key.data must point to the two keys concatenated together (Key1 ||\n+\t * Key2). The cipher key length will contain the total size of both keys.\n+\t *\n+\t * Cipher key length is in bytes. For AES it can be 128 bits (16 bytes),\n+\t * 192 bits (24 bytes) or 256 bits (32 bytes).\n+\t *\n+\t * For the CCM mode of operation, the only supported key length is 128\n+\t * bits (16 bytes).\n+\t *\n+\t * For the RTE_CRYPTO_SYM_CIPHER_AES_F8 mode of operation, key.length\n+\t * should be set to the combined length of the encryption key and the\n+\t * keymask. Since the keymask and the encryption key are the same size,\n+\t * key.length should be set to 2 x the AES encryption key length.\n+\t *\n+\t * For the AES-XTS mode of operation:\n+\t *  - Two keys must be provided and key.length refers to total length of\n+\t *    the two keys.\n+\t *  - Each key can be either 128 bits (16 bytes) or 256 bits (32 bytes).\n+\t *  - Both keys must have the same size.\n+\t **/\n+};\n+\n+/** Symmetric Authentication / Hash Algorithms */\n+enum rte_crypto_auth_algorithm {\n+\tRTE_CRYPTO_SYM_HASH_NONE = 0,\n+\t/**< No hash algorithm. */\n+\n+\tRTE_CRYPTO_SYM_HASH_AES_CBC_MAC,\n+\t/**< AES-CBC-MAC algorithm. Only 128-bit keys are supported. */\n+\tRTE_CRYPTO_SYM_HASH_AES_CCM,\n+\t/**< AES algorithm in CCM mode. This is an authenticated cipher. When\n+\t * this hash algorithm is used, the *RTE_CRYPTO_SYM_CIPHER_AES_CCM*\n+\t * element of the *rte_crypto_cipher_algorithm* enum MUST be used to\n+\t * set up the related rte_crypto_cipher_setup_data structure in the\n+\t * session context or the corresponding parameter in the crypto operation\n+\t * data structures op_params parameter MUST be set for a session-less\n+\t * crypto operation.\n+\t * */\n+\tRTE_CRYPTO_SYM_HASH_AES_CMAC,\n+\t/**< AES CMAC algorithm. */\n+\tRTE_CRYPTO_SYM_HASH_AES_GCM,\n+\t/**< AES algorithm in GCM mode. When this hash algorithm\n+\t * is used, the RTE_CRYPTO_SYM_CIPHER_AES_GCM element of the\n+\t * rte_crypto_cipher_algorithm enum MUST be used to set up the related\n+\t * rte_crypto_cipher_setup_data structure in the session context, or\n+\t * the corresponding parameter in the crypto operation data structures\n+\t * op_params parameter MUST be set for a session-less crypto operation.\n+\t */\n+\tRTE_CRYPTO_SYM_HASH_AES_GMAC,\n+\t/**< AES GMAC algorithm. When this hash algorithm\n+\t* is used, the RTE_CRYPTO_SYM_CIPHER_AES_GCM element of the\n+\t* rte_crypto_cipher_algorithm enum MUST be used to set up the related\n+\t* rte_crypto_cipher_setup_data structure in the session context,  or\n+\t* the corresponding parameter in the crypto operation data structures\n+\t* op_params parameter MUST be set for a session-less crypto operation.\n+\t*/\n+\tRTE_CRYPTO_SYM_HASH_AES_XCBC_MAC,\n+\t/**< AES XCBC algorithm. */\n+\n+\tRTE_CRYPTO_SYM_HASH_KASUMI_F9,\n+\t/**< Kasumi algorithm in F9 mode. 
*/\n+\n+\tRTE_CRYPTO_SYM_HASH_MD5,\n+\t/**< MD5 algorithm */\n+\tRTE_CRYPTO_SYM_HASH_MD5_HMAC,\n+\t/**< HMAC using MD5 algorithm */\n+\n+\tRTE_CRYPTO_SYM_HASH_SHA1,\n+\t/**< 128 bit SHA algorithm. */\n+\tRTE_CRYPTO_SYM_HASH_SHA1_HMAC,\n+\t/**< HMAC using 128 bit SHA algorithm. */\n+\tRTE_CRYPTO_SYM_HASH_SHA224,\n+\t/**< 224 bit SHA algorithm. */\n+\tRTE_CRYPTO_SYM_HASH_SHA224_HMAC,\n+\t/**< HMAC using 224 bit SHA algorithm. */\n+\tRTE_CRYPTO_SYM_HASH_SHA256,\n+\t/**< 256 bit SHA algorithm. */\n+\tRTE_CRYPTO_SYM_HASH_SHA256_HMAC,\n+\t/**< HMAC using 256 bit SHA algorithm. */\n+\tRTE_CRYPTO_SYM_HASH_SHA384,\n+\t/**< 384 bit SHA algorithm. */\n+\tRTE_CRYPTO_SYM_HASH_SHA384_HMAC,\n+\t/**< HMAC using 384 bit SHA algorithm. */\n+\tRTE_CRYPTO_SYM_HASH_SHA512,\n+\t/**< 512 bit SHA algorithm. */\n+\tRTE_CRYPTO_SYM_HASH_SHA512_HMAC,\n+\t/**< HMAC using 512 bit SHA algorithm. */\n+\n+\tRTE_CRYPTO_SYM_HASH_SNOW3G_UIA2,\n+\t/**< SNOW3G algorithm in UIA2 mode. */\n+\n+\tRTE_CRYPTO_SYM_HASH_ZUC_EIA3,\n+\t/**< ZUC algorithm in EIA3 mode */\n+};\n+\n+/** Symmetric Authentication / Hash Operations */\n+enum rte_crypto_auth_operation {\n+\tRTE_CRYPTO_SYM_HASH_OP_DIGEST_VERIFY,\t/**< Verify digest */\n+\tRTE_CRYPTO_SYM_HASH_OP_DIGEST_GENERATE\t/**< Generate digest */\n+};\n+\n+/**\n+ * Authentication / Hash transform data.\n+ *\n+ * This structure contains data relating to an authentication/hash crypto\n+ * transforms. The fields op, algo and digest_length are common to all\n+ * authentication transforms and MUST be set.\n+ */\n+struct rte_crypto_auth_xform {\n+\tenum rte_crypto_auth_operation op;\t/**< Authentication operation type */\n+\tenum rte_crypto_auth_algorithm algo;\t/**< Authentication algorithm selection */\n+\n+\tstruct rte_crypto_key key;\t\t/**< Authentication key data.\n+\t * The authentication key length MUST be less than or equal to the\n+\t * block size of the algorithm. It is the callers responsibility to\n+\t * ensure that the key length is compliant with the standard being used\n+\t * (for example RFC 2104, FIPS 198a).\n+\t */\n+\n+\tuint32_t digest_length;\n+\t/**< Length of the digest to be returned. If the verify option is set,\n+\t * this specifies the length of the digest to be compared for the\n+\t * session.\n+\t *\n+\t * If the value is less than the maximum length allowed by the hash,\n+\t * the result shall be truncated.  If the value is greater than the\n+\t * maximum length allowed by the hash then an error will be generated\n+\t * by *rte_cryptodev_session_create* or by the\n+\t * *rte_cryptodev_enqueue_burst* if using session-less APIs.\n+\t */\n+\n+\tuint32_t add_auth_data_length;\n+\t/**< The length of the additional authenticated data (AAD) in bytes.\n+\t * The maximum permitted value is 240 bytes, unless otherwise specified\n+\t * below.\n+\t *\n+\t * This field must be specified when the hash algorithm is one of the\n+\t * following:\n+\t *\n+\t * - For SNOW3G (@ref RTE_CRYPTO_SYM_HASH_SNOW3G_UIA2), this is the\n+\t *   length of the IV (which should be 16).\n+\t *\n+\t * - For GCM (@ref RTE_CRYPTO_SYM_HASH_AES_GCM).  In this case, this is\n+\t *   the length of the Additional Authenticated Data (called A, in NIST\n+\t *   SP800-38D).\n+\t *\n+\t * - For CCM (@ref RTE_CRYPTO_SYM_HASH_AES_CCM).  
In this case, this is\n+\t *   the length of the associated data (called A, in NIST SP800-38C).\n+\t *   Note that this does NOT include the length of any padding, or the\n+\t *   18 bytes reserved at the start of the above field to store the\n+\t *   block B0 and the encoded length.  The maximum permitted value in\n+\t *   this case is 222 bytes.\n+\t *\n+\t * @note\n+\t *  For AES-GMAC (@ref RTE_CRYPTO_SYM_HASH_AES_GMAC) mode of operation\n+\t *  this field is not used and should be set to 0. Instead the length\n+\t *  of the AAD data is specified in the message length to hash field of\n+\t *  the rte_crypto_op_data structure.\n+\t */\n+};\n+\n+enum rte_crypto_xform_type {\n+\tRTE_CRYPTO_XFORM_NOT_SPECIFIED = 0,\n+\tRTE_CRYPTO_XFORM_AUTH,\n+\tRTE_CRYPTO_XFORM_CIPHER\n+};\n+\n+/**\n+ * Crypto transform structure.\n+ *\n+ * This is used to specify the crypto transforms required, multiple transforms\n+ * can be chained together to specify a chain transforms such as authentication\n+ * then cipher, or cipher then authentication. Each transform structure can\n+ * hold a single transform, the type field is used to specify which transform\n+ * is contained within the union */\n+struct rte_crypto_xform {\n+\tstruct rte_crypto_xform *next; /**< next xform in chain */\n+\n+\tenum rte_crypto_xform_type type; /**< xform type */\n+\tunion {\n+\t\tstruct rte_crypto_auth_xform auth;\t/**< Authentication / hash xform */\n+\t\tstruct rte_crypto_cipher_xform cipher;\t/**< Cipher xform */\n+\t};\n+};\n+\n+/**\n+ * Crypto operation session type. This is used to specify whether a crypto\n+ * operation has session structure attached for immutable parameters or if all\n+ * operation information is included in the operation data structure.\n+ */\n+enum rte_crypto_op_sess_type {\n+\tRTE_CRYPTO_OP_WITH_SESSION,\t/**< Session based crypto operation */\n+\tRTE_CRYPTO_OP_SESSIONLESS\t/**< Session-less crypto operation */\n+};\n+\n+\n+/**\n+ * Cryptographic Operation Data.\n+ *\n+ * This structure contains data relating to performing cryptographic processing\n+ * on a data buffer. This request is used with rte_crypto_enqueue_burst() call\n+ * for performing cipher, hash, or a combined hash and cipher operations.\n+ */\n+struct rte_crypto_op_data {\n+\tenum rte_crypto_op_sess_type type;\n+\n+\tstruct rte_mbuf *dst;\n+\n+\tunion {\n+\t\tstruct rte_cryptodev_session *session;\n+\t\t/**< Handle for the initialised session context */\n+\t\tstruct rte_crypto_xform *xform;\n+\t\t/**< Session-less API crypto operation parameters */\n+\t};\n+\n+\tstruct {\n+\t\tstruct {\n+\t\t\t uint32_t offset;\n+\t\t\t /**< Starting point for cipher processing, specified\n+\t\t\t  * as number of bytes from start of data in the source\n+\t\t\t  * buffer. The result of the cipher operation will be\n+\t\t\t  * written back into the output buffer starting at\n+\t\t\t  * this location. */\n+\n+\t\t\t uint32_t length;\n+\t\t\t /**< The message length, in bytes, of the source buffer\n+\t\t\t  * on which the cryptographic operation will be\n+\t\t\t  * computed. This must be a multiple of the block size\n+\t\t\t  * if a block cipher is being used. 
This is also the\n+\t\t\t  * same as the result length.\n+\t\t\t  *\n+\t\t\t  * @note\n+\t\t\t  * In the case of CCM @ref RTE_CRYPTO_SYM_HASH_AES_CCM,\n+\t\t\t  * this value should not include the length of the\n+\t\t\t  * padding or the length of the MAC; the driver will\n+\t\t\t  * compute the actual number of bytes over which the\n+\t\t\t  * encryption will occur, which will include these\n+\t\t\t  * values.\n+\t\t\t  *\n+\t\t\t  * @note\n+\t\t\t  * For AES-GMAC @ref RTE_CRYPTO_SYM_HASH_AES_GMAC, this\n+\t\t\t  * field should be set to 0.\n+\t\t\t  */\n+\t\t} to_cipher; /**< Data offsets and length for ciphering */\n+\n+\t\tstruct {\n+\t\t\t uint32_t offset;\n+\t\t\t /**< Starting point for hash processing, specified as\n+\t\t\t  * number of bytes from start of packet in source\n+\t\t\t  * buffer.\n+\t\t\t  *\n+\t\t\t  * @note\n+\t\t\t  * For CCM and GCM modes of operation, this field is\n+\t\t\t  * ignored. The field @ref additional_auth field\n+\t\t\t  * should be set instead.\n+\t\t\t  *\n+\t\t\t  * @note For AES-GMAC (@ref RTE_CRYPTO_SYM_HASH_AES_GMAC) mode of\n+\t\t\t  * operation, this field specifies the start of the AAD data in\n+\t\t\t  * the source buffer.\n+\t\t\t  */\n+\n+\t\t\t uint32_t length;\n+\t\t\t /**< The message length, in bytes, of the source buffer that\n+\t\t\t  * the hash will be computed on.\n+\t\t\t  *\n+\t\t\t  * @note\n+\t\t\t  * For CCM and GCM modes of operation, this field is\n+\t\t\t  * ignored. The field @ref additional_auth field should\n+\t\t\t  * be set instead.\n+\t\t\t  *\n+\t\t\t  * @note\n+\t\t\t  * For AES-GMAC @ref RTE_CRYPTO_SYM_HASH_AES_GMAC mode\n+\t\t\t  * of operation, this field specifies the length of\n+\t\t\t  * the AAD data in the source buffer.\n+\t\t\t  */\n+\t\t} to_hash; /**< Data offsets and length for authentication */\n+\t} data;\t/**< Details of data to be operated on */\n+\n+\tstruct {\n+\t\tuint8_t *data;\n+\t\t/**< Initialisation Vector or Counter.\n+\t\t *\n+\t\t * - For block ciphers in CBC or F8 mode, or for Kasumi in F8\n+\t\t * mode, or for SNOW3G in UEA2 mode, this is the Initialisation\n+\t\t * Vector (IV) value.\n+\t\t *\n+\t\t * - For block ciphers in CTR mode, this is the counter.\n+\t\t *\n+\t\t * - For GCM mode, this is either the IV (if the length is 96\n+\t\t * bits) or J0 (for other sizes), where J0 is as defined by\n+\t\t * NIST SP800-38D. 
Regardless of the IV length, a full 16 bytes\n+\t\t * needs to be allocated.\n+\t\t *\n+\t\t * - For CCM mode, the first byte is reserved, and the nonce\n+\t\t * should be written starting at &iv[1] (to allow space for the\n+\t\t * implementation to write in the flags in the first byte).\n+\t\t * Note that a full 16 bytes should be allocated, even though\n+\t\t * the length field will have a value less than this.\n+\t\t *\n+\t\t * - For AES-XTS, this is the 128bit tweak, i, from IEEE Std\n+\t\t * 1619-2007.\n+\t\t *\n+\t\t * For optimum performance, the data pointed to SHOULD be\n+\t\t * 8-byte aligned.\n+\t\t */\n+\t\tphys_addr_t phys_addr;\n+\t\tsize_t length;\n+\t\t/**< Length of valid IV data.\n+\t\t *\n+\t\t * - For block ciphers in CBC or F8 mode, or for Kasumi in F8\n+\t\t * mode, or for SNOW3G in UEA2 mode, this is the length of the\n+\t\t * IV (which must be the same as the block length of the\n+\t\t * cipher).\n+\t\t *\n+\t\t * - For block ciphers in CTR mode, this is the length of the\n+\t\t * counter (which must be the same as the block length of the\n+\t\t * cipher).\n+\t\t *\n+\t\t * - For GCM mode, this is either 12 (for 96-bit IVs) or 16, in\n+\t\t * which case data points to J0.\n+\t\t *\n+\t\t * - For CCM mode, this is the length of the nonce, which can\n+\t\t * be in the range 7 to 13 inclusive.\n+\t\t */\n+\t} iv;\t/**< Initialisation vector parameters */\n+\n+\tstruct {\n+\t\tuint8_t *data;\n+\t\t/**< If this member of this structure is set this is a\n+\t\t * pointer to the location where the digest result should be\n+\t\t * inserted (in the case of digest generation) or where the\n+\t\t * purported digest exists (in the case of digest\n+\t\t * verification).\n+\t\t *\n+\t\t * At session creation time, the client specified the digest\n+\t\t * result length with the digest_length member of the @ref\n+\t\t * rte_crypto_hash_setup_data structure. For physical crypto\n+\t\t * devices the caller must allocate at least digest_length of\n+\t\t * physically contiguous memory at this location.\n+\t\t *\n+\t\t * For digest generation, the digest result will overwrite\n+\t\t * any data at this location.\n+\t\t *\n+\t\t * @note\n+\t\t * For GCM (@ref RTE_CRYPTO_SYM_HASH_AES_GCM), for\n+\t\t * \"digest result\" read \"authentication tag T\".\n+\t\t *\n+\t\t * If this member is not set the digest result is understood\n+\t\t * to be in the destination buffer for digest generation, and\n+\t\t * in the source buffer for digest verification. The location\n+\t\t * of the digest result in this case is immediately following\n+\t\t * the region over which the digest is computed.\n+\t\t */\n+\t\tphys_addr_t phys_addr;\t/**< Physical address of digest */\n+\t\tuint32_t length;\t/**< Length of digest */\n+\t} digest; /**< Digest parameters */\n+\n+\tstruct {\n+\t\tuint8_t *data;\n+\t\t/**< Pointer to Additional Authenticated Data (AAD) needed for\n+\t\t * authenticated cipher mechanisms (CCM and GCM), and to the IV\n+\t\t * for SNOW3G authentication\n+\t\t * (@ref RTE_CRYPTO_SYM_HASH_SNOW3G_UIA2). For other\n+\t\t * authentication mechanisms this pointer is ignored.\n+\t\t *\n+\t\t * The length of the data pointed to by this field is set up for\n+\t\t * the session in the @ref rte_crypto_hash_params structure\n+\t\t * as part of the @ref rte_cryptodev_session_create function\n+\t\t * call.  
This length must not exceed 240 bytes.\n+\t\t *\n+\t\t * Specifically for CCM (@ref RTE_CRYPTO_SYM_HASH_AES_CCM), the\n+\t\t * caller should setup this field as follows:\n+\t\t *\n+\t\t * - the nonce should be written starting at an offset of one\n+\t\t *   byte into the array, leaving room for the implementation\n+\t\t *   to write in the flags to the first byte.\n+\t\t *\n+\t\t * - the additional  authentication data itself should be\n+\t\t *   written starting at an offset of 18 bytes into the array,\n+\t\t *   leaving room for the length encoding in the first two\n+\t\t *   bytes of the second block.\n+\t\t *\n+\t\t * - the array should be big enough to hold the above fields,\n+\t\t *   plus any padding to round this up to the nearest multiple\n+\t\t *   of the block size (16 bytes).  Padding will be added by the\n+\t\t *   implementation.\n+\t\t *\n+\t\t * Finally, for GCM (@ref RTE_CRYPTO_SYM_HASH_AES_GCM), the\n+\t\t * caller should setup this field as follows:\n+\t\t *\n+\t\t * - the AAD is written in starting at byte 0\n+\t\t * - the array must be big enough to hold the AAD, plus any\n+\t\t *   padding to round this up to the nearest multiple of the\n+\t\t *   block size (16 bytes).  Padding will be added by the\n+\t\t *    implementation.\n+\t\t *\n+\t\t * @note\n+\t\t * For AES-GMAC (@ref RTE_CRYPTO_SYM_HASH_AES_GMAC) mode of\n+\t\t * operation, this field is not used and should be set to 0.\n+\t\t * Instead the AAD data should be placed in the source buffer.\n+\t\t */\n+\t\tphys_addr_t phys_addr;\t/**< physical address */\n+\t} additional_auth; /**< Additional authentication parameters */\n+\n+\tstruct rte_mempool *pool;\t/**< mempool used to allocate crypto op */\n+};\n+\n+\n+\n+struct crypto_op_pool_private {\n+\tunsigned max_nb_xforms;\n+};\n+\n+\n+extern struct rte_mempool *\n+rte_crypto_op_pool_create(const char *name, unsigned nb_ops,\n+\t\tunsigned cache_size, unsigned nb_xforms, int socket_id);\n+\n+\n+/**\n+ * Reset the fields of a packet mbuf to their default values.\n+ *\n+ * The given mbuf must have only one segment.\n+ *\n+ * @param m\n+ *   The packet mbuf to be resetted.\n+ */\n+static inline void\n+__rte_crypto_op_reset(struct rte_crypto_op_data *op)\n+{\n+\top->type = RTE_CRYPTO_OP_SESSIONLESS;\n+}\n+\n+static inline struct rte_crypto_op_data *\n+__rte_crypto_op_raw_alloc(struct rte_mempool *mp)\n+{\n+\tvoid *buf = NULL;\n+\n+\tif (rte_mempool_get(mp, &buf) < 0)\n+\t\treturn NULL;\n+\n+\treturn (struct rte_crypto_op_data *)buf;\n+}\n+\n+/**\n+ * Create an crypto operation structure which is used to define the crypto\n+ * operation processing which is to be done on a packet.\n+ *\n+ * @param\tdev_id\t\tDevice identifier\n+ * @param\tm_src\t\tSource mbuf of data for processing.\n+ * @param\tm_dst\t\tDestination mbuf for processed data. 
Can be NULL\n+ *\t\t\t\tif crypto operation is done in place.\n+ */\n+static inline struct rte_crypto_op_data *\n+rte_crypto_op_alloc(struct rte_mempool *mp)\n+{\n+\tstruct rte_crypto_op_data *op = __rte_crypto_op_raw_alloc(mp);\n+\n+\tif (op != NULL)\n+\t\t__rte_crypto_op_reset(op);\n+\treturn op;\n+}\n+\n+static inline int\n+rte_crypto_op_bulk_alloc(struct rte_mempool *mp,\n+\t\tstruct rte_crypto_op_data **ops,\n+\t\tunsigned nb_ops) {\n+\tvoid *objs[nb_ops];\n+\tunsigned i;\n+\n+\tif (rte_mempool_get_bulk(mp, objs, nb_ops) < 0)\n+\t\treturn -1;\n+\n+\tfor (i = 0; i < nb_ops; i++) {\n+\t\tops[i] = objs[i];\n+\t\t__rte_crypto_op_reset(ops[i]);\n+\t}\n+\n+\treturn nb_ops;\n+\n+}\n+\n+static inline struct rte_crypto_op_data *\n+rte_crypto_op_alloc_sessionless(struct rte_mempool *mp, unsigned nb_xforms)\n+{\n+\tstruct rte_crypto_op_data *op = NULL;\n+\tstruct rte_crypto_xform *xform = NULL;\n+\tstruct crypto_op_pool_private *priv_data =\n+\t\t\t\t\t(struct crypto_op_pool_private *)\n+\t\t\t\t\trte_mempool_get_priv(mp);\n+\n+\tif (nb_xforms > priv_data->max_nb_xforms && nb_xforms > 0)\n+\t\treturn op;\n+\n+\top = __rte_crypto_op_raw_alloc(mp);\n+\tif (op != NULL) {\n+\t\t__rte_crypto_op_reset(op);\n+\n+\t\txform = op->xform = (struct rte_crypto_xform *)(op + 1);\n+\n+\t\tdo {\n+\t\t\txform->type = RTE_CRYPTO_XFORM_NOT_SPECIFIED;\n+\t\t\txform = xform->next = --nb_xforms > 0 ? xform + 1 : NULL;\n+\t\t} while (xform);\n+\t}\n+\treturn op;\n+}\n+\n+\n+/**\n+ * Free operation structure free function\n+ *\n+ * @param\top\tCrypto operation data structure to be freed\n+ */\n+static inline void\n+rte_crypto_op_free(struct rte_crypto_op_data *op)\n+{\n+\tif (op != NULL)\n+\t\trte_mempool_put(op->pool, op);\n+}\n+\n+\n+static inline void\n+rte_crypto_op_attach_session(struct rte_crypto_op_data *op,\n+\t\tstruct rte_cryptodev_session *sess)\n+{\n+\top->session = sess;\n+\top->type = RTE_CRYPTO_OP_WITH_SESSION;\n+}\n+\n+\n+#ifdef __cplusplus\n+}\n+#endif\n+\n+#endif /* _RTE_CRYPTO_H_ */\ndiff --git a/lib/librte_cryptodev/rte_crypto_version.map b/lib/librte_cryptodev/rte_crypto_version.map\nnew file mode 100644\nindex 0000000..c93fcad\n--- /dev/null\n+++ b/lib/librte_cryptodev/rte_crypto_version.map\n@@ -0,0 +1,40 @@\n+DPDK_2.2 {\n+\tglobal:\n+\n+\trte_cryptodev_create_vdev;\n+\trte_cryptodev_get_dev_id;\n+\trte_cryptodev_count;\n+\trte_cryptodev_configure;\n+\trte_cryptodev_start;\n+\trte_cryptodev_stop;\n+\trte_cryptodev_close;\n+\trte_cryptodev_queue_pair_setup;\n+\trte_cryptodev_queue_pair_start;\n+\trte_cryptodev_queue_pair_stop;\n+\trte_cryptodev_queue_pair_count;\n+\trte_cryptodev_stats_get;\n+\trte_cryptodev_stats_reset;\n+\trte_cryptodev_info_get;\n+\trte_cryptodev_callback_register;\n+\trte_cryptodev_callback_unregister;\n+\trte_cryptodev_enqueue_burst;\n+\trte_cryptodev_dequeue_burst;\n+\trte_cryptodev_create_crypto_op;\n+\trte_cryptodev_crypto_op_free;\n+\trte_cryptodev_session_create;\n+\trte_cryptodev_session_free;\n+\n+\trte_cryptodev_pmd_get_dev;\n+\trte_cryptodev_pmd_get_named_dev;\n+\trte_cryptodev_pmd_is_valid_dev;\n+\trte_cryptodev_pmd_allocate;\n+\trte_cryptodev_pmd_virtual_dev_init;\n+\trte_cryptodev_pmd_release_device;\n+\trte_cryptodev_pmd_attach;\n+\trte_cryptodev_pmd_detach;\n+\trte_cryptodev_pmd_driver_register;\n+\trte_cryptodev_pmd_socket_id;\n+\trte_cryptodev_pmd_callback_process;\n+\n+\tlocal: *;\n+};\ndiff --git a/lib/librte_cryptodev/rte_cryptodev.c b/lib/librte_cryptodev/rte_cryptodev.c\nnew file mode 100644\nindex 0000000..d45feb0\n--- /dev/null\n+++ 
b/lib/librte_cryptodev/rte_cryptodev.c\n@@ -0,0 +1,1126 @@\n+/*-\n+ *   BSD LICENSE\n+ *\n+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.\n+ *\n+ *   Redistribution and use in source and binary forms, with or without\n+ *   modification, are permitted provided that the following conditions\n+ *   are met:\n+ *\n+ *     * Redistributions of source code must retain the above copyright\n+ *       notice, this list of conditions and the following disclaimer.\n+ *     * Redistributions in binary form must reproduce the above copyright\n+ *       notice, this list of conditions and the following disclaimer in\n+ *       the documentation and/or other materials provided with the\n+ *       distribution.\n+ *     * Neither the name of Intel Corporation nor the names of its\n+ *       contributors may be used to endorse or promote products derived\n+ *       from this software without specific prior written permission.\n+ *\n+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n+ *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT\n+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n+ */\n+\n+#include <sys/types.h>\n+#include <sys/queue.h>\n+#include <ctype.h>\n+#include <stdio.h>\n+#include <stdlib.h>\n+#include <string.h>\n+#include <stdarg.h>\n+#include <errno.h>\n+#include <stdint.h>\n+#include <inttypes.h>\n+#include <netinet/in.h>\n+\n+#include <rte_byteorder.h>\n+#include <rte_log.h>\n+#include <rte_debug.h>\n+#include <rte_dev.h>\n+#include <rte_interrupts.h>\n+#include <rte_pci.h>\n+#include <rte_memory.h>\n+#include <rte_memcpy.h>\n+#include <rte_memzone.h>\n+#include <rte_launch.h>\n+#include <rte_tailq.h>\n+#include <rte_eal.h>\n+#include <rte_per_lcore.h>\n+#include <rte_lcore.h>\n+#include <rte_atomic.h>\n+#include <rte_branch_prediction.h>\n+#include <rte_common.h>\n+#include <rte_ring.h>\n+#include <rte_mempool.h>\n+#include <rte_malloc.h>\n+#include <rte_mbuf.h>\n+#include <rte_errno.h>\n+#include <rte_spinlock.h>\n+#include <rte_string_fns.h>\n+\n+#include \"rte_crypto.h\"\n+#include \"rte_cryptodev.h\"\n+#include \"rte_cryptodev_pmd.h\"\n+\n+struct rte_cryptodev rte_crypto_devices[RTE_CRYPTO_MAX_DEVS];\n+\n+struct rte_cryptodev *rte_cryptodevs = &rte_crypto_devices[0];\n+\n+static struct rte_cryptodev_global cryptodev_globals = {\n+\t\t.devs\t\t\t= &rte_crypto_devices[0],\n+\t\t.data\t\t\t= NULL,\n+\t\t.nb_devs\t\t= 0,\n+\t\t.max_devs\t\t= RTE_CRYPTO_MAX_DEVS\n+};\n+\n+struct rte_cryptodev_global *rte_cryptodev_globals = &cryptodev_globals;\n+\n+/* spinlock for crypto device callbacks */\n+static rte_spinlock_t rte_cryptodev_cb_lock = RTE_SPINLOCK_INITIALIZER;\n+\n+\n+/**\n+ * The user application callback description.\n+ *\n+ * It contains callback address to be registered by user application,\n+ * the pointer to the parameters for callback, and the event type.\n+ */\n+struct rte_cryptodev_callback 
{\n+\tTAILQ_ENTRY(rte_cryptodev_callback) next; /**< Callbacks list */\n+\trte_cryptodev_cb_fn cb_fn;                /**< Callback address */\n+\tvoid *cb_arg;                           /**< Parameter for callback */\n+\tenum rte_cryptodev_event_type event;          /**< Interrupt event type */\n+\tuint32_t active;                        /**< Callback is executing */\n+};\n+\n+int\n+rte_cryptodev_create_vdev(const char *name, const char *args)\n+{\n+\treturn rte_eal_vdev_init(name, args);\n+}\n+\n+int\n+rte_cryptodev_get_dev_id(const char *name) {\n+\tunsigned i;\n+\n+\tif (name == NULL)\n+\t\treturn -1;\n+\n+\tfor (i = 0; i < rte_cryptodev_globals->max_devs; i++)\n+\t\tif (strcmp(rte_cryptodev_globals->devs[i].data->name, name) == 0 &&\n+\t\t\t\trte_cryptodev_globals->devs[i].attached ==\n+\t\t\t\t\t\tRTE_CRYPTODEV_ATTACHED)\n+\t\t\treturn i;\n+\n+\treturn -1;\n+}\n+\n+uint8_t\n+rte_cryptodev_count(void)\n+{\n+\treturn rte_cryptodev_globals->nb_devs;\n+}\n+\n+uint8_t\n+rte_cryptodev_count_devtype(enum rte_cryptodev_type type)\n+{\n+\tuint8_t i, dev_count = 0;\n+\n+\tfor (i = 0; i < rte_cryptodev_globals->max_devs; i++)\n+\t\tif (rte_cryptodev_globals->devs[i].dev_type == type &&\n+\t\t\trte_cryptodev_globals->devs[i].attached ==\n+\t\t\t\t\tRTE_CRYPTODEV_ATTACHED)\n+\t\t\tdev_count++;\n+\n+\treturn dev_count;\n+}\n+\n+int\n+rte_cryptodev_socket_id(uint8_t dev_id)\n+{\n+\tstruct rte_cryptodev *dev;\n+\n+\tif (!rte_cryptodev_pmd_is_valid_dev(dev_id))\n+\t\treturn -1;\n+\n+\tdev = rte_cryptodev_pmd_get_dev(dev_id);\n+\n+\tif (dev->pci_dev)\n+\t\treturn dev->pci_dev->numa_node;\n+\telse\n+\t\treturn 0;\n+}\n+\n+static inline void\n+rte_cryptodev_data_alloc(int socket_id)\n+{\n+\tconst unsigned flags = 0;\n+\tconst struct rte_memzone *mz;\n+\n+\tif (rte_eal_process_type() == RTE_PROC_PRIMARY) {\n+\t\tmz = rte_memzone_reserve(\"rte_cryptodev_data\",\n+\t\t\t\tcryptodev_globals.max_devs * sizeof(struct rte_cryptodev_data),\n+\t\t\t\tsocket_id, flags);\n+\t} else\n+\t\tmz = rte_memzone_lookup(\"rte_cryptodev_data\");\n+\tif (mz == NULL)\n+\t\trte_panic(\"Cannot allocate memzone for the crypto device data\");\n+\n+\tcryptodev_globals.data = mz->addr;\n+\tif (rte_eal_process_type() == RTE_PROC_PRIMARY)\n+\t\tmemset(cryptodev_globals.data, 0,\n+\t\t\t\tcryptodev_globals.max_devs * sizeof(struct rte_cryptodev_data));\n+}\n+\n+static uint8_t\n+rte_cryptodev_find_free_device_index(void)\n+{\n+\tuint8_t dev_id;\n+\n+\tfor (dev_id = 0; dev_id < RTE_CRYPTO_MAX_DEVS; dev_id++) {\n+\t\tif (rte_crypto_devices[dev_id].attached == RTE_CRYPTODEV_DETACHED)\n+\t\t\treturn dev_id;\n+\t}\n+\treturn RTE_CRYPTO_MAX_DEVS;\n+}\n+\n+struct rte_cryptodev *\n+rte_cryptodev_pmd_allocate(const char *name, enum pmd_type type, int socket_id)\n+{\n+\tuint8_t dev_id;\n+\tstruct rte_cryptodev *cryptodev;\n+\n+\tdev_id = rte_cryptodev_find_free_device_index();\n+\tif (dev_id == RTE_CRYPTO_MAX_DEVS) {\n+\t\tCDEV_LOG_ERR(\"Reached maximum number of crypto devices\");\n+\t\treturn NULL;\n+\t}\n+\n+\tif (cryptodev_globals.data == NULL)\n+\t\trte_cryptodev_data_alloc(socket_id);\n+\n+\tif (rte_cryptodev_pmd_get_named_dev(name) != NULL) {\n+\t\tCDEV_LOG_ERR(\"Crypto device with name %s already \"\n+\t\t\t\t\"allocated!\", name);\n+\t\treturn NULL;\n+\t}\n+\n+\tcryptodev = rte_cryptodev_pmd_get_dev(dev_id);\n+\tcryptodev->data = &cryptodev_globals.data[dev_id];\n+\tsnprintf(cryptodev->data->name, RTE_CRYPTODEV_NAME_MAX_LEN, \"%s\", name);\n+\tcryptodev->data->dev_id = dev_id;\n+\tcryptodev->attached = 
RTE_CRYPTODEV_ATTACHED;\n+\tcryptodev->pmd_type = type;\n+\tcryptodev_globals.nb_devs++;\n+\n+\treturn cryptodev;\n+}\n+\n+static inline int\n+rte_cryptodev_create_unique_device_name(char *name, size_t size,\n+\t\tstruct rte_pci_device *pci_dev)\n+{\n+\tint ret;\n+\n+\tif ((name == NULL) || (pci_dev == NULL))\n+\t\treturn -EINVAL;\n+\n+\tret = snprintf(name, size, \"%d:%d.%d\",\n+\t\t\tpci_dev->addr.bus, pci_dev->addr.devid,\n+\t\t\tpci_dev->addr.function);\n+\tif (ret < 0)\n+\t\treturn ret;\n+\treturn 0;\n+}\n+\n+int\n+rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev)\n+{\n+\tif (cryptodev == NULL)\n+\t\treturn -EINVAL;\n+\n+\tcryptodev->attached = RTE_CRYPTODEV_DETACHED;\n+\tcryptodev_globals.nb_devs--;\n+\treturn 0;\n+}\n+\n+struct rte_cryptodev *\n+rte_cryptodev_pmd_virtual_dev_init(const char *name, size_t dev_private_size,\n+\t\tint socket_id)\n+{\n+\tstruct rte_cryptodev *cryptodev;\n+\n+\t/* allocate device structure */\n+\tcryptodev = rte_cryptodev_pmd_allocate(name, PMD_VDEV, socket_id);\n+\tif (cryptodev == NULL)\n+\t\treturn NULL;\n+\n+\t/* allocate private device structure */\n+\tif (rte_eal_process_type() == RTE_PROC_PRIMARY) {\n+\t\tcryptodev->data->dev_private =\n+\t\t\t\trte_zmalloc(\"%s private structure\",\n+\t\t\t\t\t\tdev_private_size,\n+\t\t\t\t\t\tRTE_CACHE_LINE_SIZE);\n+\n+\t\tif (cryptodev->data->dev_private == NULL)\n+\t\t\trte_panic(\"Cannot allocate memzone for private device\"\n+\t\t\t\t\t\" data\");\n+\t}\n+\n+\t/* initialise user call-back tail queue */\n+\tTAILQ_INIT(&(cryptodev->link_intr_cbs));\n+\n+\treturn cryptodev;\n+}\n+\n+static int\n+rte_cryptodev_init(struct rte_pci_driver *pci_drv,\n+\t\tstruct rte_pci_device *pci_dev)\n+{\n+\tstruct rte_cryptodev_driver *cryptodrv;\n+\tstruct rte_cryptodev *cryptodev;\n+\n+\tchar cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];\n+\n+\tint retval;\n+\n+\tcryptodrv = (struct rte_cryptodev_driver *)pci_drv;\n+\tif (cryptodrv == NULL)\n+\t\t\treturn -ENODEV;\n+\n+\t/* Create unique Crypto device name using PCI address */\n+\trte_cryptodev_create_unique_device_name(cryptodev_name,\n+\t\t\tsizeof(cryptodev_name), pci_dev);\n+\n+\tcryptodev = rte_cryptodev_pmd_allocate(cryptodev_name, PMD_PDEV, rte_socket_id());\n+\tif (cryptodev == NULL)\n+\t\treturn -ENOMEM;\n+\n+\tif (rte_eal_process_type() == RTE_PROC_PRIMARY) {\n+\t\tcryptodev->data->dev_private =\n+\t\t\t\trte_zmalloc_socket(\"cryptodev private structure\",\n+\t\t\t\t\t\tcryptodrv->dev_private_size,\n+\t\t\t\t\t\tRTE_CACHE_LINE_SIZE, rte_socket_id());\n+\n+\t\tif (cryptodev->data->dev_private == NULL)\n+\t\t\trte_panic(\"Cannot allocate memzone for private device data\");\n+\t}\n+\n+\tcryptodev->pci_dev = pci_dev;\n+\tcryptodev->driver = cryptodrv;\n+\n+\t/* init user callbacks */\n+\tTAILQ_INIT(&(cryptodev->link_intr_cbs));\n+\n+\t/* Invoke PMD device initialization function */\n+\tretval = (*cryptodrv->cryptodev_init)(cryptodrv, cryptodev);\n+\tif (retval == 0)\n+\t\treturn 0;\n+\n+\tCDEV_LOG_ERR(\"driver %s: crypto_dev_init(vendor_id=0x%u device_id=0x%x)\"\n+\t\t\t\" failed\", pci_drv->name,\n+\t\t\t(unsigned) pci_dev->id.vendor_id,\n+\t\t\t(unsigned) pci_dev->id.device_id);\n+\n+\tif (rte_eal_process_type() == RTE_PROC_PRIMARY)\n+\t\trte_free(cryptodev->data->dev_private);\n+\n+\tcryptodev->attached = RTE_CRYPTODEV_DETACHED;\n+\tcryptodev_globals.nb_devs--;\n+\n+\treturn -ENXIO;\n+}\n+\n+static int\n+rte_cryptodev_uninit(struct rte_pci_device *pci_dev)\n+{\n+\tconst struct rte_cryptodev_driver *cryptodrv;\n+\tstruct rte_cryptodev 
*cryptodev;\n+\tchar cryptodev_name[RTE_CRYPTODEV_NAME_MAX_LEN];\n+\tint ret;\n+\n+\tif (pci_dev == NULL)\n+\t\treturn -EINVAL;\n+\n+\t/* Create unique device name using PCI address */\n+\trte_cryptodev_create_unique_device_name(cryptodev_name,\n+\t\t\tsizeof(cryptodev_name), pci_dev);\n+\n+\tcryptodev = rte_cryptodev_pmd_get_named_dev(cryptodev_name);\n+\tif (cryptodev == NULL)\n+\t\treturn -ENODEV;\n+\n+\tcryptodrv = (const struct rte_cryptodev_driver *)pci_dev->driver;\n+\tif (cryptodrv == NULL)\n+\t\t\treturn -ENODEV;\n+\n+\t/* Invoke PMD device uninit function */\n+\tif (*cryptodrv->cryptodev_uninit) {\n+\t\tret = (*cryptodrv->cryptodev_uninit)(cryptodrv, cryptodev);\n+\t\tif (ret)\n+\t\t\treturn ret;\n+\t}\n+\n+\t/* free ether device */\n+\trte_cryptodev_pmd_release_device(cryptodev);\n+\n+\tif (rte_eal_process_type() == RTE_PROC_PRIMARY)\n+\t\trte_free(cryptodev->data->dev_private);\n+\n+\tcryptodev->pci_dev = NULL;\n+\tcryptodev->driver = NULL;\n+\tcryptodev->data = NULL;\n+\n+\treturn 0;\n+}\n+\n+int\n+rte_cryptodev_pmd_driver_register(struct rte_cryptodev_driver *cryptodrv,\n+\t\tenum pmd_type type)\n+{\n+\t/* Call crypto device initialization directly if device is virtual */\n+\tif (type == PMD_VDEV)\n+\t\treturn rte_cryptodev_init((struct rte_pci_driver *)cryptodrv,\n+\t\t\t\tNULL);\n+\n+\t/* Register PCI driver for physical device intialisation during\n+\t * PCI probing */\n+\tcryptodrv->pci_drv.devinit = rte_cryptodev_init;\n+\tcryptodrv->pci_drv.devuninit = rte_cryptodev_uninit;\n+\n+\trte_eal_pci_register(&cryptodrv->pci_drv);\n+\n+\treturn 0;\n+}\n+\n+\n+int\n+rte_cryptodev_pmd_attach(const char *devargs __rte_unused,\n+\t\t\tuint8_t *dev_id __rte_unused)\n+{\n+\tRTE_LOG(ERR, EAL, \"Hotplug support isn't enabled\");\n+\treturn -1;\n+}\n+\n+int\n+rte_cryptodev_pmd_detach(uint8_t dev_id __rte_unused,\n+\t\t\tchar *name __rte_unused)\n+{\n+\tRTE_LOG(ERR, EAL, \"Hotplug support isn't enabled\");\n+\treturn -1;\n+}\n+\n+\n+uint16_t\n+rte_cryptodev_queue_pair_count(uint8_t dev_id)\n+{\n+\tstruct rte_cryptodev *dev;\n+\n+\tdev = &rte_crypto_devices[dev_id];\n+\treturn dev->data->nb_queue_pairs;\n+}\n+\n+static int\n+rte_cryptodev_queue_pairs_config(struct rte_cryptodev *dev, uint16_t nb_qpairs, int socket_id)\n+{\n+\tstruct rte_cryptodev_info dev_info;\n+\tuint16_t old_nb_queues = dev->data->nb_queue_pairs;\n+\tvoid **qp;\n+\tunsigned i;\n+\n+\tif ((dev == NULL) || (nb_qpairs < 1)) {\n+\t\tCDEV_LOG_ERR(\"invalid param: dev %p, nb_queues %u\",\n+\t\t\t\t\t\t\tdev, nb_qpairs);\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tCDEV_LOG_DEBUG(\"Setup %d queues pairs on device %u\",\n+\t\t\tnb_qpairs, dev->data->dev_id);\n+\n+\n+\tmemset(&dev_info, 0, sizeof(struct rte_cryptodev_info));\n+\n+\tFUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_infos_get, -ENOTSUP);\n+\t(*dev->dev_ops->dev_infos_get)(dev, &dev_info);\n+\n+\tif (nb_qpairs > (dev_info.max_queue_pairs)) {\n+\t\tCDEV_LOG_ERR(\"Invalid num queue_pairs (%u) for dev %u\",\n+\t\t\t\tnb_qpairs, dev->data->dev_id);\n+\t    return (-EINVAL);\n+\t}\n+\n+\tif (dev->data->queue_pairs == NULL) { /* first time configuration */\n+\t\tdev->data->queue_pairs = rte_zmalloc_socket(\n+\t\t\t\t\"cryptodev->queue_pairs\",\n+\t\t\t\tsizeof(dev->data->queue_pairs[0]) * nb_qpairs,\n+\t\t\t\tRTE_CACHE_LINE_SIZE, socket_id);\n+\n+\t\tif (dev->data->queue_pairs == NULL) {\n+\t\t\tdev->data->nb_queue_pairs = 0;\n+\t\t\tCDEV_LOG_ERR(\"failed to get memory for qp meta data, \"\n+\t\t\t\t\t\t\t\"nb_queues %u\", nb_qpairs);\n+\t\t\treturn -(ENOMEM);\n+\t\t}\n+\t} else { /* 
re-configure */\n+\t\tFUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_release, -ENOTSUP);\n+\n+\t\tqp = dev->data->queue_pairs;\n+\n+\t\tfor (i = nb_qpairs; i < old_nb_queues; i++)\n+\t\t\t(*dev->dev_ops->queue_pair_release)(dev, i);\n+\t\tqp = rte_realloc(qp, sizeof(qp[0]) * nb_qpairs,\n+\t\t\t\tRTE_CACHE_LINE_SIZE);\n+\t\tif (qp == NULL) {\n+\t\t\tCDEV_LOG_ERR(\"failed to realloc qp meta data,\"\n+\t\t\t\t\t\t\" nb_queues %u\", nb_qpairs);\n+\t\t\treturn -(ENOMEM);\n+\t\t}\n+\t\tif (nb_qpairs > old_nb_queues) {\n+\t\t\tuint16_t new_qs = nb_qpairs - old_nb_queues;\n+\n+\t\t\tmemset(qp + old_nb_queues, 0,\n+\t\t\t\tsizeof(qp[0]) * new_qs);\n+\t\t}\n+\n+\t\tdev->data->queue_pairs = qp;\n+\n+\t}\n+\tdev->data->nb_queue_pairs = nb_qpairs;\n+\treturn 0;\n+}\n+\n+int\n+rte_cryptodev_queue_pair_start(uint8_t dev_id, uint16_t queue_pair_id)\n+{\n+\tstruct rte_cryptodev *dev;\n+\n+\t/* This function is only safe when called from the primary process\n+\t * in a multi-process setup*/\n+\tPROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);\n+\n+\tif (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {\n+\t\tCDEV_LOG_ERR(\"Invalid dev_id=%\" PRIu8, dev_id);\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tdev = &rte_crypto_devices[dev_id];\n+\tif (queue_pair_id >= dev->data->nb_queue_pairs) {\n+\t\tCDEV_LOG_ERR(\"Invalid queue_pair_id=%d\", queue_pair_id);\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tFUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_start, -ENOTSUP);\n+\n+\treturn dev->dev_ops->queue_pair_start(dev, queue_pair_id);\n+\n+}\n+\n+int\n+rte_cryptodev_queue_pair_stop(uint8_t dev_id, uint16_t queue_pair_id)\n+{\n+\tstruct rte_cryptodev *dev;\n+\n+\t/* This function is only safe when called from the primary process\n+\t * in a multi-process setup*/\n+\tPROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);\n+\n+\tif (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {\n+\t\tCDEV_LOG_ERR(\"Invalid dev_id=%\" PRIu8, dev_id);\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tdev = &rte_crypto_devices[dev_id];\n+\tif (queue_pair_id >= dev->data->nb_queue_pairs) {\n+\t\tCDEV_LOG_ERR(\"Invalid queue_pair_id=%d\", queue_pair_id);\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tFUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_stop, -ENOTSUP);\n+\n+\treturn dev->dev_ops->queue_pair_stop(dev, queue_pair_id);\n+\n+}\n+\n+static int\n+rte_crypto_session_pool_create(struct rte_cryptodev *dev, unsigned nb_objs,\n+\t\tunsigned obj_cache_size, int socket_id);\n+\n+int\n+rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config)\n+{\n+\tstruct rte_cryptodev *dev;\n+\tint diag;\n+\n+\t/* This function is only safe when called from the primary process\n+\t * in a multi-process setup*/\n+\tPROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);\n+\n+\tif (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {\n+\t\tCDEV_LOG_ERR(\"Invalid dev_id=%\" PRIu8, dev_id);\n+\t\treturn (-EINVAL);\n+\t}\n+\n+\tdev = &rte_crypto_devices[dev_id];\n+\n+\tif (dev->data->dev_started) {\n+\t\tCDEV_LOG_ERR(\n+\t\t    \"device %d must be stopped to allow configuration\", dev_id);\n+\t\treturn (-EBUSY);\n+\t}\n+\n+\t/* Setup new number of queue pairs and reconfigure device. 
*/\n+\tdiag = rte_cryptodev_queue_pairs_config(dev, config->nb_queue_pairs,\n+\t\t\tconfig->socket_id);\n+\tif (diag != 0) {\n+\t\tCDEV_LOG_ERR(\"dev%d rte_crypto_dev_queue_pairs_config = %d\",\n+\t\t\t\tdev_id, diag);\n+\t\treturn diag;\n+\t}\n+\n+\t/* Setup Session mempool for device */\n+\treturn rte_crypto_session_pool_create(dev, config->session_mp.nb_objs,\n+\t\t\tconfig->session_mp.cache_size, config->socket_id);\n+}\n+\n+static void\n+rte_cryptodev_config_restore(uint8_t dev_id __rte_unused)\n+{\n+}\n+\n+int\n+rte_cryptodev_start(uint8_t dev_id)\n+{\n+\tstruct rte_cryptodev *dev;\n+\tint diag;\n+\n+\tCDEV_LOG_DEBUG(\"Start dev_id=%\" PRIu8, dev_id);\n+\n+\t/* This function is only safe when called from the primary process\n+\t * in a multi-process setup*/\n+\tPROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);\n+\n+\tif (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {\n+\t\tCDEV_LOG_ERR(\"Invalid dev_id=%\" PRIu8, dev_id);\n+\t\treturn (-EINVAL);\n+\t}\n+\n+\tdev = &rte_crypto_devices[dev_id];\n+\n+\tFUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_start, -ENOTSUP);\n+\n+\tif (dev->data->dev_started != 0) {\n+\t\tCDEV_LOG_ERR(\"Device with dev_id=%\" PRIu8 \" already started\",\n+\t\t\tdev_id);\n+\t\treturn 0;\n+\t}\n+\n+\tdiag = (*dev->dev_ops->dev_start)(dev);\n+\tif (diag == 0)\n+\t\tdev->data->dev_started = 1;\n+\telse\n+\t\treturn diag;\n+\n+\trte_cryptodev_config_restore(dev_id);\n+\n+\treturn 0;\n+}\n+\n+void\n+rte_cryptodev_stop(uint8_t dev_id)\n+{\n+\tstruct rte_cryptodev *dev;\n+\n+\t/* This function is only safe when called from the primary process\n+\t * in a multi-process setup*/\n+\tPROC_PRIMARY_OR_RET();\n+\n+\tif (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {\n+\t\tCDEV_LOG_ERR(\"Invalid dev_id=%\" PRIu8, dev_id);\n+\t\treturn;\n+\t}\n+\n+\tdev = &rte_crypto_devices[dev_id];\n+\n+\tFUNC_PTR_OR_RET(*dev->dev_ops->dev_stop);\n+\n+\tif (dev->data->dev_started == 0) {\n+\t\tCDEV_LOG_ERR(\"Device with dev_id=%\" PRIu8 \" already stopped\",\n+\t\t\tdev_id);\n+\t\treturn;\n+\t}\n+\n+\tdev->data->dev_started = 0;\n+\t(*dev->dev_ops->dev_stop)(dev);\n+}\n+\n+int\n+rte_cryptodev_close(uint8_t dev_id)\n+{\n+\tstruct rte_cryptodev *dev;\n+\tint retval;\n+\n+\t/* This function is only safe when called from the primary process\n+\t * in a multi-process setup*/\n+\tPROC_PRIMARY_OR_ERR_RET(-EINVAL);\n+\n+\tif (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {\n+\t\tCDEV_LOG_ERR(\"Invalid dev_id=%\" PRIu8, dev_id);\n+\t\treturn -1;\n+\t}\n+\n+\tdev = &rte_crypto_devices[dev_id];\n+\n+\t/* We can't close the device if there are outstanding session in\n+\t * existence */\n+\tif (dev->data->session_pool != NULL) {\n+\t\tif (!rte_mempool_full(dev->data->session_pool)) {\n+\t\t\tCDEV_LOG_ERR(\"dev_id=%u close failed, session mempool \"\n+\t\t\t\t\t\"has sessions still in use, free \"\n+\t\t\t\t\t\"all sessions before calling close\",\n+\t\t\t\t\t(unsigned)dev_id);\n+\t\t\treturn -ENOTEMPTY;\n+\t\t}\n+\t}\n+\n+\tFUNC_PTR_OR_ERR_RET(*dev->dev_ops->dev_close, -ENOTSUP);\n+\tretval = (*dev->dev_ops->dev_close)(dev);\n+\n+\tif (retval < 0)\n+\t\treturn retval;\n+\n+\tdev->data->dev_started = 0;\n+\treturn 0;\n+}\n+\n+int\n+rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,\n+\t\tconst struct rte_cryptodev_qp_conf *qp_conf, int socket_id)\n+{\n+\tstruct rte_cryptodev *dev;\n+\n+\t/* This function is only safe when called from the primary process\n+\t * in a multi-process setup*/\n+\tPROC_PRIMARY_OR_ERR_RET(-E_RTE_SECONDARY);\n+\n+\tif (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {\n+\t\tCDEV_LOG_ERR(\"Invalid 
dev_id=%\" PRIu8, dev_id);\n+\t\treturn (-EINVAL);\n+\t}\n+\n+\tdev = &rte_crypto_devices[dev_id];\n+\tif (queue_pair_id >= dev->data->nb_queue_pairs) {\n+\t\tCDEV_LOG_ERR(\"Invalid queue_pair_id=%d\", queue_pair_id);\n+\t\treturn (-EINVAL);\n+\t}\n+\n+\tif (dev->data->dev_started) {\n+\t\tCDEV_LOG_ERR(\n+\t\t    \"device %d must be stopped to allow configuration\", dev_id);\n+\t\treturn -EBUSY;\n+\t}\n+\n+\tFUNC_PTR_OR_ERR_RET(*dev->dev_ops->queue_pair_setup, -ENOTSUP);\n+\n+\treturn (*dev->dev_ops->queue_pair_setup)(dev, queue_pair_id, qp_conf,\n+\t\t\tsocket_id);\n+}\n+\n+\n+int\n+rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats)\n+{\n+\tstruct rte_cryptodev *dev;\n+\n+\tif (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {\n+\t\tCDEV_LOG_ERR(\"Invalid dev_id=%d\", dev_id);\n+\t\treturn (-ENODEV);\n+\t}\n+\n+\tif (stats == NULL) {\n+\t\tCDEV_LOG_ERR(\"Invalid stats ptr\");\n+\t\treturn -EINVAL;\n+\t}\n+\n+\tdev = &rte_crypto_devices[dev_id];\n+\tmemset(stats, 0, sizeof(*stats));\n+\n+\tFUNC_PTR_OR_ERR_RET(*dev->dev_ops->stats_get, -ENOTSUP);\n+\t(*dev->dev_ops->stats_get)(dev, stats);\n+\treturn 0;\n+}\n+\n+void\n+rte_cryptodev_stats_reset(uint8_t dev_id)\n+{\n+\tstruct rte_cryptodev *dev;\n+\n+\tif (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {\n+\t\tCDEV_LOG_ERR(\"Invalid dev_id=%\" PRIu8, dev_id);\n+\t\treturn;\n+\t}\n+\n+\tdev = &rte_crypto_devices[dev_id];\n+\n+\tFUNC_PTR_OR_RET(*dev->dev_ops->stats_reset);\n+\t(*dev->dev_ops->stats_reset)(dev);\n+}\n+\n+\n+void\n+rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info)\n+{\n+\tstruct rte_cryptodev *dev;\n+\n+\tif (dev_id >= cryptodev_globals.nb_devs) {\n+\t\tCDEV_LOG_ERR(\"Invalid dev_id=%d\", dev_id);\n+\t\treturn;\n+\t}\n+\n+\tdev = &rte_crypto_devices[dev_id];\n+\n+\tmemset(dev_info, 0, sizeof(struct rte_cryptodev_info));\n+\n+\tFUNC_PTR_OR_RET(*dev->dev_ops->dev_infos_get);\n+\t(*dev->dev_ops->dev_infos_get)(dev, dev_info);\n+\n+\tdev_info->pci_dev = dev->pci_dev;\n+\tif (dev->driver)\n+\t\tdev_info->driver_name = dev->driver->pci_drv.name;\n+}\n+\n+\n+int\n+rte_cryptodev_callback_register(uint8_t dev_id,\n+\t\t\tenum rte_cryptodev_event_type event,\n+\t\t\trte_cryptodev_cb_fn cb_fn, void *cb_arg)\n+{\n+\tstruct rte_cryptodev *dev;\n+\tstruct rte_cryptodev_callback *user_cb;\n+\n+\tif (!cb_fn)\n+\t\treturn (-EINVAL);\n+\n+\tif (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {\n+\t\tCDEV_LOG_ERR(\"Invalid dev_id=%\" PRIu8, dev_id);\n+\t\treturn (-EINVAL);\n+\t}\n+\n+\tdev = &rte_crypto_devices[dev_id];\n+\trte_spinlock_lock(&rte_cryptodev_cb_lock);\n+\n+\tTAILQ_FOREACH(user_cb, &(dev->link_intr_cbs), next) {\n+\t\tif (user_cb->cb_fn == cb_fn &&\n+\t\t\tuser_cb->cb_arg == cb_arg &&\n+\t\t\tuser_cb->event == event) {\n+\t\t\tbreak;\n+\t\t}\n+\t}\n+\n+\t/* create a new callback. */\n+\tif (user_cb == NULL) {\n+\t\tuser_cb = rte_zmalloc(\"INTR_USER_CALLBACK\",\n+\t\t\t\tsizeof(struct rte_cryptodev_callback), 0);\n+\t\tif (user_cb != NULL) {\n+\t\t\tuser_cb->cb_fn = cb_fn;\n+\t\t\tuser_cb->cb_arg = cb_arg;\n+\t\t\tuser_cb->event = event;\n+\t\t\tTAILQ_INSERT_TAIL(&(dev->link_intr_cbs), user_cb, next);\n+\t\t}\n+\t}\n+\n+\trte_spinlock_unlock(&rte_cryptodev_cb_lock);\n+\treturn ((user_cb == NULL) ? 
-ENOMEM : 0);\n+}\n+\n+int\n+rte_cryptodev_callback_unregister(uint8_t dev_id,\n+\t\t\tenum rte_cryptodev_event_type event,\n+\t\t\trte_cryptodev_cb_fn cb_fn, void *cb_arg)\n+{\n+\tint ret;\n+\tstruct rte_cryptodev *dev;\n+\tstruct rte_cryptodev_callback *cb, *next;\n+\n+\tif (!cb_fn)\n+\t\treturn (-EINVAL);\n+\n+\tif (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {\n+\t\tCDEV_LOG_ERR(\"Invalid dev_id=%\" PRIu8, dev_id);\n+\t\treturn (-EINVAL);\n+\t}\n+\n+\tdev = &rte_crypto_devices[dev_id];\n+\trte_spinlock_lock(&rte_cryptodev_cb_lock);\n+\n+\tret = 0;\n+\tfor (cb = TAILQ_FIRST(&dev->link_intr_cbs); cb != NULL; cb = next) {\n+\n+\t\tnext = TAILQ_NEXT(cb, next);\n+\n+\t\tif (cb->cb_fn != cb_fn || cb->event != event ||\n+\t\t\t\t(cb->cb_arg != (void *)-1 &&\n+\t\t\t\tcb->cb_arg != cb_arg))\n+\t\t\tcontinue;\n+\n+\t\t/*\n+\t\t * if this callback is not executing right now,\n+\t\t * then remove it.\n+\t\t */\n+\t\tif (cb->active == 0) {\n+\t\t\tTAILQ_REMOVE(&(dev->link_intr_cbs), cb, next);\n+\t\t\trte_free(cb);\n+\t\t} else {\n+\t\t\tret = -EAGAIN;\n+\t\t}\n+\t}\n+\n+\trte_spinlock_unlock(&rte_cryptodev_cb_lock);\n+\treturn ret;\n+}\n+\n+void\n+rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,\n+\tenum rte_cryptodev_event_type event)\n+{\n+\tstruct rte_cryptodev_callback *cb_lst;\n+\tstruct rte_cryptodev_callback dev_cb;\n+\n+\trte_spinlock_lock(&rte_cryptodev_cb_lock);\n+\tTAILQ_FOREACH(cb_lst, &(dev->link_intr_cbs), next) {\n+\t\tif (cb_lst->cb_fn == NULL || cb_lst->event != event)\n+\t\t\tcontinue;\n+\t\tdev_cb = *cb_lst;\n+\t\tcb_lst->active = 1;\n+\t\trte_spinlock_unlock(&rte_cryptodev_cb_lock);\n+\t\tdev_cb.cb_fn(dev->data->dev_id, dev_cb.event,\n+\t\t\t\t\t\tdev_cb.cb_arg);\n+\t\trte_spinlock_lock(&rte_cryptodev_cb_lock);\n+\t\tcb_lst->active = 0;\n+\t}\n+\trte_spinlock_unlock(&rte_cryptodev_cb_lock);\n+}\n+\n+\n+static void\n+rte_crypto_session_init(struct rte_mempool *mp,\n+\t\tvoid *opaque_arg,\n+\t\tvoid *_sess,\n+\t\t__rte_unused unsigned i)\n+{\n+\tstruct rte_cryptodev_session *sess = _sess;\n+\tstruct rte_cryptodev *dev = opaque_arg;\n+\n+\tmemset(sess, 0, mp->elt_size);\n+\n+\tsess->dev_id = dev->data->dev_id;\n+\tsess->type = dev->dev_type;\n+\tsess->mp = mp;\n+\n+\tif (dev->dev_ops->session_initialize)\n+\t\t(*dev->dev_ops->session_initialize)(mp, sess->_private);\n+}\n+\n+static int\n+rte_crypto_session_pool_create(struct rte_cryptodev *dev, unsigned nb_objs,\n+\t\tunsigned obj_cache_size, int socket_id)\n+{\n+\tchar mp_name[RTE_CRYPTODEV_NAME_MAX_LEN];\n+\tunsigned priv_sess_size;\n+\n+\tunsigned n = snprintf(mp_name, sizeof(mp_name), \"cdev_%d_sess_mp\",\n+\t\t\tdev->data->dev_id);\n+\tif (n > sizeof(mp_name)) {\n+\t\tCDEV_LOG_ERR(\"Unable to create unique name for session mempool\");\n+\t\treturn -ENOMEM;\n+\t}\n+\n+\tFUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_get_size, -ENOTSUP);\n+\tpriv_sess_size = (*dev->dev_ops->session_get_size)(dev);\n+\tif (priv_sess_size == 0) {\n+\t\tCDEV_LOG_ERR(\"%s returned and invalid private session size \",\n+\t\t\t\t\t\tdev->data->name);\n+\t\treturn -ENOMEM;\n+\t}\n+\n+\tunsigned elt_size = sizeof(struct rte_cryptodev_session) + priv_sess_size;\n+\n+\tdev->data->session_pool = rte_mempool_lookup(mp_name);\n+\tif (dev->data->session_pool != NULL) {\n+\t\tif (dev->data->session_pool->elt_size != elt_size ||\n+\t\t\t\tdev->data->session_pool->cache_size < obj_cache_size ||\n+\t\t\t\tdev->data->session_pool->size < nb_objs) {\n+\n+\t\t\tCDEV_LOG_ERR(\"%s mempool already exists with different \"\n+\t\t\t\t\t\"initialization 
parameters\", mp_name);\n+\t\t\tdev->data->session_pool = NULL;\n+\t\t\treturn -ENOMEM;\n+\t\t}\n+\t} else {\n+\t\tdev->data->session_pool = rte_mempool_create(\n+\t\t\t\tmp_name, /* mempool name */\n+\t\t\t\tnb_objs, /* number of elements*/\n+\t\t\t\telt_size, /* element size*/\n+\t\t\t\tobj_cache_size, /* Cache size*/\n+\t\t\t\t0, /* private data size */\n+\t\t\t\tNULL, /* obj initialization constructor */\n+\t\t\t\tNULL, /* obj initialization constructor arg */\n+\t\t\t\trte_crypto_session_init, /* obj constructor */\n+\t\t\t\tdev, /* obj constructor arg */\n+\t\t\t\tsocket_id, /* socket id */\n+\t\t\t\t0); /* flags */\n+\n+\t\tif (dev->data->session_pool == NULL) {\n+\t\t\tCDEV_LOG_ERR(\"%s mempool allocation failed\", mp_name);\n+\t\t\treturn -ENOMEM;\n+\t\t}\n+\t}\n+\n+\tCDEV_LOG_DEBUG(\"%s mempool created!\", mp_name);\n+\treturn 0;\n+}\n+\n+struct rte_cryptodev_session *\n+rte_cryptodev_session_create(uint8_t dev_id, struct rte_crypto_xform *xform)\n+{\n+\tstruct rte_cryptodev *dev;\n+\tstruct rte_cryptodev_session *sess;\n+\n+\tif (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {\n+\t\tCDEV_LOG_ERR(\"Invalid dev_id=%d\", dev_id);\n+\t\treturn NULL;\n+\t}\n+\n+\tdev = &rte_crypto_devices[dev_id];\n+\n+\t/* Allocate a session structure from the session pool */\n+\tif (rte_mempool_get(dev->data->session_pool, (void **)&sess)) {\n+\t\tCDEV_LOG_ERR(\"Couldn't get object from session mempool\");\n+\t\treturn NULL;\n+\t}\n+\n+\tFUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_configure, NULL);\n+\tif (dev->dev_ops->session_configure(dev, xform, sess->_private) ==\n+\t\t\tNULL) {\n+\t\tCDEV_LOG_ERR(\"dev_id %d failed to configure session details\",\n+\t\t\t\tdev_id);\n+\n+\t\t/* Return session to mempool */\n+\t\trte_mempool_put(sess->mp, (void *)sess);\n+\t\treturn NULL;\n+\t}\n+\n+\treturn sess;\n+}\n+\n+struct rte_cryptodev_session *\n+rte_cryptodev_session_free(uint8_t dev_id, struct rte_cryptodev_session *sess)\n+{\n+\tstruct rte_cryptodev *dev;\n+\n+\tif (!rte_cryptodev_pmd_is_valid_dev(dev_id)) {\n+\t\tCDEV_LOG_ERR(\"Invalid dev_id=%d\", dev_id);\n+\t\treturn sess;\n+\t}\n+\n+\tdev = &rte_crypto_devices[dev_id];\n+\n+\t/* Check the session belongs to this device type */\n+\tif (sess->type != dev->dev_type)\n+\t\treturn sess;\n+\n+\t/* Let device implementation clear session material */\n+\tFUNC_PTR_OR_ERR_RET(*dev->dev_ops->session_clear, sess);\n+\tdev->dev_ops->session_clear(dev, (void *)sess->_private);\n+\n+\t/* Return session to mempool */\n+\trte_mempool_put(sess->mp, (void *)sess);\n+\n+\treturn NULL;\n+}\n+\n+\n+static void\n+rte_crypto_op_init(struct rte_mempool *mp,\n+\t\t__rte_unused void *opaque_arg,\n+\t\tvoid *_op_data,\n+\t\t__rte_unused unsigned i)\n+{\n+\tstruct rte_crypto_op_data *op_data = _op_data;\n+\n+\tmemset(op_data, 0, mp->elt_size);\n+\n+\top_data->pool = mp;\n+}\n+\n+static void\n+rte_crypto_op_pool_init(__rte_unused struct rte_mempool *mp,\n+\t\t__rte_unused void *opaque_arg)\n+{\n+}\n+\n+struct rte_mempool *\n+rte_crypto_op_pool_create(const char *name, unsigned size,\n+\t\tunsigned cache_size, unsigned nb_xforms, int socket_id)\n+{\n+\tstruct crypto_op_pool_private *priv_data = NULL;\n+\n+\tunsigned elt_size = sizeof(struct rte_crypto_op_data) +\n+\t\t\t(sizeof(struct rte_crypto_xform) * nb_xforms);\n+\n+\t/* lookup mempool in case already allocated */\n+\tstruct rte_mempool *mp = rte_mempool_lookup(name);\n+\n+\tif (mp != NULL) {\n+\t\tpriv_data = (struct crypto_op_pool_private *)\n+\t\t\t\trte_mempool_get_priv(mp);\n+\n+\t\tif (priv_data->max_nb_xforms <  
nb_xforms ||\n+\t\t\t\tmp->elt_size != elt_size ||\n+\t\t\t\tmp->cache_size < cache_size ||\n+\t\t\t\tmp->size < size) {\n+\t\t\tmp = NULL;\n+\t\t\tCDEV_LOG_ERR(\"%s mempool already exists with \"\n+\t\t\t\t\t\"incompatible initialisation parameters\",\n+\t\t\t\t\tname);\n+\t\t\treturn NULL;\n+\t\t}\n+\t\tCDEV_LOG_DEBUG(\"%s mempool already exists, reusing!\", name);\n+\t\treturn mp;\n+\t}\n+\n+\tmp = rte_mempool_create(\n+\t\t\tname,\t\t\t\t/* mempool name */\n+\t\t\tsize,\t\t\t\t/* number of elements*/\n+\t\t\telt_size,\t\t\t/* element size*/\n+\t\t\tcache_size,\t\t\t/* Cache size*/\n+\t\t\tsizeof(struct crypto_op_pool_private),\t/* private data size */\n+\t\t\trte_crypto_op_pool_init,\t/* pool initialisation constructor */\n+\t\t\tNULL,\t\t\t\t/* pool initialisation constructor argument */\n+\t\t\trte_crypto_op_init,\t\t/* obj constructor */\n+\t\t\tNULL,\t\t\t\t/* obj constructor argument */\n+\t\t\tsocket_id,\t\t\t/* socket id */\n+\t\t\t0);\t\t\t\t/* flags */\n+\n+\tif (mp == NULL) {\n+\t\tCDEV_LOG_ERR(\"failed to allocate %s mempool\", name);\n+\t\treturn NULL;\n+\t}\n+\n+\tpriv_data = (struct crypto_op_pool_private *)rte_mempool_get_priv(mp);\n+\n+\tpriv_data->max_nb_xforms = nb_xforms;\n+\n+\tCDEV_LOG_DEBUG(\"%s mempool created!\", name);\n+\treturn mp;\n+}\n+\n+\ndiff --git a/lib/librte_cryptodev/rte_cryptodev.h b/lib/librte_cryptodev/rte_cryptodev.h\nnew file mode 100644\nindex 0000000..e436356\n--- /dev/null\n+++ b/lib/librte_cryptodev/rte_cryptodev.h\n@@ -0,0 +1,592 @@\n+/*-\n+ *\n+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.\n+ *\n+ *   Redistribution and use in source and binary forms, with or without\n+ *   modification, are permitted provided that the following conditions\n+ *   are met:\n+ *\n+ *     * Redistributions of source code must retain the above copyright\n+ *       notice, this list of conditions and the following disclaimer.\n+ *     * Redistributions in binary form must reproduce the above copyright\n+ *       notice, this list of conditions and the following disclaimer in\n+ *       the documentation and/or other materials provided with the\n+ *       distribution.\n+ *     * Neither the name of Intel Corporation nor the names of its\n+ *       contributors may be used to endorse or promote products derived\n+ *       from this software without specific prior written permission.\n+ *\n+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n+ *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n+ */\n+\n+#ifndef _RTE_CRYPTODEV_H_\n+#define _RTE_CRYPTODEV_H_\n+\n+/**\n+ * @file rte_cryptodev.h\n+ *\n+ * RTE Cryptographic Device APIs\n+ *\n+ * Defines RTE Crypto Device APIs for the provisioning of cipher and\n+ * authentication operations.\n+ */\n+\n+#ifdef __cplusplus\n+extern \"C\" {\n+#endif\n+\n+#include \"stddef.h\"\n+\n+#include \"rte_crypto.h\"\n+#include \"rte_dev.h\"\n+\n+#define CRYPTODEV_NAME_AESNI_MB_PMD\t(\"cryptodev_aesni_mb_pmd\")\n+/**< AES-NI Multi buffer PMD device name */\n+#define CRYPTODEV_NAME_QAT_PMD\t\t(\"cryptodev_qat_pmd\")\n+/**< Intel QAT PMD device name */\n+\n+/** Crypto device type */\n+enum rte_cryptodev_type {\n+\tRTE_CRYPTODEV_AESNI_MB_PMD = 1,\t/**< AES-NI multi buffer PMD */\n+\tRTE_CRYPTODEV_QAT_PMD,\t\t/**< QAT PMD */\n+};\n+\n+\n+/**  Crypto device information */\n+struct rte_cryptodev_info {\n+\tconst char *driver_name;\t\t/**< Driver name. */\n+\tenum rte_cryptodev_type dev_type;\t/**< Device type */\n+\tstruct rte_pci_device *pci_dev;\t\t/**< PCI information. */\n+\tuint16_t max_queue_pairs;\t\t/**< Maximum number of queues\n+\t\t\t\t\t\t* pairs supported by device.\n+\t\t\t\t\t\t*/\n+};\n+\n+#define RTE_CRYPTODEV_DETACHED  (0)\n+#define RTE_CRYPTODEV_ATTACHED  (1)\n+\n+/** Definitions of Crypto device event types */\n+enum rte_cryptodev_event_type {\n+\tRTE_CRYPTODEV_EVENT_UNKNOWN,\t/**< unknown event type */\n+\tRTE_CRYPTODEV_EVENT_ERROR,\t/**< error interrupt event */\n+\tRTE_CRYPTODEV_EVENT_MAX\t\t/**< max value of this enum */\n+};\n+\n+/** Crypto device queue pair configuration structure. */\n+struct rte_cryptodev_qp_conf {\n+\tuint32_t nb_descriptors; /**< Number of descriptors per queue pair */\n+};\n+\n+/**\n+ * Typedef for application callback function to be registered by application\n+ * software for notification of device events\n+ *\n+ * @param\tdev_id\tCrypto device identifier\n+ * @param\tevent\tCrypto device event to register for notification of.\n+ * @param\tcb_arg\tUser specified parameter to be passed as to passed to\n+ *\t\t\tusers callback function.\n+ */\n+typedef void (*rte_cryptodev_cb_fn)(uint8_t dev_id,\n+\t\tenum rte_cryptodev_event_type event, void *cb_arg);\n+\n+#ifdef RTE_CRYPTODEV_PERF\n+/**\n+ * Crypto Device performance counter statistics structure. 
This structure is\n+ * used for RDTSC counters for measuring crypto operations.\n+ */\n+struct rte_cryptodev_perf_stats {\n+\tuint64_t t_accumlated;\t\t/**< Accumulated time processing operation */\n+\tuint64_t t_min;\t\t\t/**< Min time */\n+\tuint64_t t_max;\t\t\t/**< Max time */\n+};\n+#endif\n+\n+/** Crypto Device statistics */\n+struct rte_cryptodev_stats {\n+\tuint64_t enqueued_count;\t/**< Count of all operations enqueued */\n+\tuint64_t dequeued_count;\t/**< Count of all operations dequeued */\n+\n+\tuint64_t enqueue_err_count;\t/**< Total error count on operations enqueued */\n+\tuint64_t dequeue_err_count;\t/**< Total error count on operations dequeued */\n+\n+#ifdef RTE_CRYPTODEV_DETAILED_STATS\n+\tstruct {\n+\t\tuint64_t encrypt_ops;\t/**< Count of encrypt operations */\n+\t\tuint64_t encrypt_bytes;\t/**< Number of bytes encrypted */\n+\n+\t\tuint64_t decrypt_ops;\t/**< Count of decrypt operations */\n+\t\tuint64_t decrypt_bytes;\t/**< Number of bytes decrypted */\n+\t} cipher; /**< Cipher operations stats */\n+\n+\tstruct {\n+\t\tuint64_t generate_ops;\t/**< Count of generate operations */\n+\t\tuint64_t bytes_hashed;\t/**< Number of bytes hashed */\n+\n+\t\tuint64_t verify_ops;\t/**< Count of verify operations */\n+\t\tuint64_t bytes_verified;/**< Number of bytes verified */\n+\t} hash;\t /**< Hash operations stats */\n+#endif\n+\n+#ifdef RTE_CRYPTODEV_PERF\n+\tstruct rte_cryptodev_perf_stats op_perf;\t/**< Operations stats */\n+#endif\n+} __rte_cache_aligned;\n+\n+/**\n+ * Create a virtual crypto device\n+ *\n+ * @param\tname\tCryptodev PMD name of device to be created.\n+ * @param\targs\tOption arguments for the device.\n+ *\n+ * @return\n+ * - On successful creation of the cryptodev the device index is returned,\n+ *   which will be between 0 and rte_cryptodev_count().\n+ * - In the case of a failure, returns -1.\n+ */\n+extern int\n+rte_cryptodev_create_vdev(const char *name, const char *args);\n+\n+/**\n+ * Get the device identifier for the named crypto device.\n+ *\n+ * @param\tname\tdevice name to select the device structure.\n+ *\n+ * @return\n+ *   - Returns crypto device identifier on success.\n+ *   - Return -1 on failure to find named crypto device.\n+ */\n+extern int\n+rte_cryptodev_get_dev_id(const char *name);\n+\n+/**\n+ * Get the total number of crypto devices that have been successfully\n+ * initialised.\n+ *\n+ * @return\n+ *   - The total number of usable crypto devices.\n+ */\n+extern uint8_t\n+rte_cryptodev_count(void);\n+\n+extern uint8_t\n+rte_cryptodev_count_devtype(enum rte_cryptodev_type type);\n+/**\n+ * Return the NUMA socket to which a device is connected\n+ *\n+ * @param dev_id\n+ *   The identifier of the device\n+ * @return\n+ *   The NUMA socket id to which the device is connected or\n+ *   a default of zero if the socket could not be determined.\n+ *   -1 is returned if the dev_id value is out of range.\n+ */\n+extern int\n+rte_cryptodev_socket_id(uint8_t dev_id);\n+\n+/** Crypto device configuration structure */\n+struct rte_cryptodev_config {\n+\tint socket_id;\t\t\t/**< Socket to allocate resources on */\n+\tuint16_t nb_queue_pairs;\t/**< Number of queue pairs to configure\n+\t\t\t\t\t* on device */\n+\n+\tstruct {\n+\t\tuint32_t nb_objs;\t/**< Number of objects in mempool */\n+\t\tuint32_t cache_size;\t/**< l-core object cache size */\n+\t} session_mp;\t\t/**< Session mempool configuration */\n+};\n+\n+/**\n+ * Configure a device.\n+ *\n+ * This function must be invoked first before any other function in the\n+ * API. 
This function can also be re-invoked when a device is in the\n+ * stopped state.\n+ *\n+ * @param\tdev_id\t\tThe identifier of the device to configure.\n+ * @param\tconfig\t\tThe crypto device configuration to apply to the device.\n+ *\n+ * @return\n+ *   - 0: Success, device configured.\n+ *   - <0: Error code returned by the driver configuration function.\n+ */\n+extern int\n+rte_cryptodev_configure(uint8_t dev_id, struct rte_cryptodev_config *config);\n+\n+/**\n+ * Start a device.\n+ *\n+ * The device start step is the last one and consists of setting the configured\n+ * offload features and starting the enqueue and dequeue paths of the\n+ * device.\n+ * On success, all basic functions exported by the API (stats,\n+ * enqueue/dequeue, and so on) can be invoked.\n+ *\n+ * @param dev_id\n+ *   The identifier of the device.\n+ * @return\n+ *   - 0: Success, device started.\n+ *   - <0: Error code of the driver device start function.\n+ */\n+extern int\n+rte_cryptodev_start(uint8_t dev_id);\n+\n+/**\n+ * Stop a device. The device can be restarted with a call to\n+ * rte_cryptodev_start().\n+ *\n+ * @param\tdev_id\t\tThe identifier of the device.\n+ */\n+extern void\n+rte_cryptodev_stop(uint8_t dev_id);\n+\n+/**\n+ * Close a device. The device cannot be restarted!\n+ *\n+ * @param\tdev_id\t\tThe identifier of the device.\n+ *\n+ * @return\n+ *  - 0 on successfully closing device\n+ *  - <0 on failure to close device\n+ */\n+extern int\n+rte_cryptodev_close(uint8_t dev_id);\n+\n+/**\n+ * Allocate and set up a queue pair for a device.\n+ *\n+ *\n+ * @param\tdev_id\t\tThe identifier of the device.\n+ * @param\tqueue_pair_id\tThe index of the queue pair to set up. The\n+ *\t\t\t\tvalue must be in the range [0, nb_queue_pair\n+ *\t\t\t\t- 1] previously supplied to\n+ *\t\t\t\trte_cryptodev_configure().\n+ * @param\tqp_conf\t\tThe pointer to the configuration data to be\n+ *\t\t\t\tused for the queue pair. NULL value is\n+ *\t\t\t\tallowed, in which case default configuration\n+ *\t\t\t\twill be used.\n+ * @param\tsocket_id\tThe *socket_id* argument is the socket\n+ *\t\t\t\tidentifier in case of NUMA. The value can be\n+ *\t\t\t\t*SOCKET_ID_ANY* if there is no NUMA constraint\n+ *\t\t\t\tfor the DMA memory allocated for the\n+ *\t\t\t\tqueue pair.\n+ *\n+ * @return\n+ *   - 0: Success, queue pair correctly set up.\n+ *   - <0: Queue pair configuration failed\n+ */\n+extern int\n+rte_cryptodev_queue_pair_setup(uint8_t dev_id, uint16_t queue_pair_id,\n+\t\tconst struct rte_cryptodev_qp_conf *qp_conf, int socket_id);\n+\n+/**\n+ * Start a specified queue pair of a device. It is used\n+ * when the deferred_start flag of the specified queue pair is true.\n+ *\n+ * @param\tdev_id\t\tThe identifier of the device\n+ * @param\tqueue_pair_id\tThe index of the queue pair to start. The value\n+ *\t\t\t\tmust be in the range [0, nb_queue_pair - 1]\n+ *\t\t\t\tpreviously supplied to rte_cryptodev_configure().\n+ * @return\n+ *   - 0: Success, the queue pair is correctly started.\n+ *   - -EINVAL: The dev_id or the queue_pair_id is out of range.\n+ *   - -ENOTSUP: The function is not supported by the PMD driver.\n+ */\n+extern int\n+rte_cryptodev_queue_pair_start(uint8_t dev_id, uint16_t queue_pair_id);\n+\n+/**\n+ * Stop specified queue pair of a device\n+ *\n+ * @param\tdev_id\t\tThe identifier of the device\n+ * @param\tqueue_pair_id\tThe index of the queue pair to stop. 
The value\n+ *\t\t\t\tmust be in the range [0, nb_queue_pair - 1]\n+ *\t\t\t\tpreviously supplied to rte_cryptodev_configure().\n+ * @return\n+ *   - 0: Success, the transmit queue is correctly set up.\n+ *   - -EINVAL: The dev_id or the queue_id out of range.\n+ *   - -ENOTSUP: The function not supported in PMD driver.\n+ */\n+extern int\n+rte_cryptodev_queue_pair_stop(uint8_t dev_id, uint16_t queue_pair_id);\n+\n+/**\n+ * Get the number of queue pairs on a specific crypto device\n+ *\n+ * @param\tdev_id\t\tCrypto device identifier.\n+ * @return\n+ *   - The number of configured queue pairs.\n+ */\n+extern uint16_t\n+rte_cryptodev_queue_pair_count(uint8_t dev_id);\n+\n+\n+/**\n+ * Retrieve the general I/O statistics of a device.\n+ *\n+ * @param\tdev_id\t\tThe identifier of the device.\n+ * @param\tstats\t\tA pointer to a structure of type\n+ *\t\t\t\t*rte_cryptodev_stats* to be filled with the\n+ *\t\t\t\tvalues of device counters.\n+ * @return\n+ *   - Zero if successful.\n+ *   - Non-zero otherwise.\n+ */\n+extern int\n+rte_cryptodev_stats_get(uint8_t dev_id, struct rte_cryptodev_stats *stats);\n+\n+/**\n+ * Reset the general I/O statistics of a device.\n+ *\n+ * @param\tdev_id\t\tThe identifier of the device.\n+ */\n+extern void\n+rte_cryptodev_stats_reset(uint8_t dev_id);\n+\n+/**\n+ * Retrieve the contextual information of a device.\n+ *\n+ * @param\tdev_id\t\tThe identifier of the device.\n+ * @param\tdev_info\tA pointer to a structure of type\n+ *\t\t\t\t*rte_cryptodev_info* to be filled with the\n+ *\t\t\t\tcontextual information of the device.\n+ */\n+extern void\n+rte_cryptodev_info_get(uint8_t dev_id, struct rte_cryptodev_info *dev_info);\n+\n+\n+/**\n+ * Register a callback function for specific device id.\n+ *\n+ * @param\tdev_id\t\tDevice id.\n+ * @param\tevent\t\tEvent interested.\n+ * @param\tcb_fn\t\tUser supplied callback function to be called.\n+ * @param\tcb_arg\t\tPointer to the parameters for the registered callback.\n+ *\n+ * @return\n+ *  - On success, zero.\n+ *  - On failure, a negative value.\n+ */\n+extern int\n+rte_cryptodev_callback_register(uint8_t dev_id,\n+\t\tenum rte_cryptodev_event_type event,\n+\t\trte_cryptodev_cb_fn cb_fn, void *cb_arg);\n+\n+/**\n+ * Unregister a callback function for specific device id.\n+ *\n+ * @param\tdev_id\t\tThe device identifier.\n+ * @param\tevent\t\tEvent interested.\n+ * @param\tcb_fn\t\tUser supplied callback function to be called.\n+ * @param\tcb_arg\t\tPointer to the parameters for the registered callback.\n+ *\n+ * @return\n+ *  - On success, zero.\n+ *  - On failure, a negative value.\n+ */\n+extern int\n+rte_cryptodev_callback_unregister(uint8_t dev_id,\n+\t\tenum rte_cryptodev_event_type event,\n+\t\trte_cryptodev_cb_fn cb_fn, void *cb_arg);\n+\n+\n+typedef uint16_t (*dequeue_pkt_burst_t)(void *qp, struct rte_mbuf **pkts,\n+\t\tuint16_t nb_pkts);\n+/**< Dequeue processed packets from queue pair of a device. */\n+\n+typedef uint16_t (*enqueue_pkt_burst_t)(void *qp, struct rte_mbuf **pkts,\n+\t\tuint16_t nb_pkts);\n+/**< Enqueue packets for processing on queue pair of a device. */\n+\n+\n+struct rte_cryptodev_callback;\n+\n+/** Structure to keep track of registered callbacks */\n+TAILQ_HEAD(rte_cryptodev_cb_list, rte_cryptodev_callback);\n+\n+/** The data structure associated with each crypto device. */\n+struct rte_cryptodev {\n+\tdequeue_pkt_burst_t dequeue_burst;\t/**< Pointer to PMD receive function. */\n+\tenqueue_pkt_burst_t enqueue_burst;\t/**< Pointer to PMD transmit function. 
*/\n+\n+\tconst struct rte_cryptodev_driver *driver;\t/**< Driver for this device */\n+\tstruct rte_cryptodev_data *data;\t\t/**< Pointer to device data */\n+\tstruct rte_cryptodev_ops *dev_ops;\t\t/**< Functions exported by PMD */\n+\tstruct rte_pci_device *pci_dev;\t\t\t/**< PCI info. supplied by probing */\n+\n+\tenum rte_cryptodev_type dev_type;\t\t/**< Crypto device type */\n+\tenum pmd_type pmd_type;\t\t\t\t/**< PMD type - PDEV / VDEV */\n+\n+\tstruct rte_cryptodev_cb_list link_intr_cbs;\n+\t/**< User application callback for interrupts if present */\n+\n+\tuint8_t attached : 1;\t/**< Flag indicating the device is attached */\n+};\n+\n+\n+#define RTE_CRYPTODEV_NAME_MAX_LEN\t(64)\n+/**< Max length of name of crypto PMD */\n+\n+/**\n+ *\n+ * The data part, with no function pointers, associated with each crypto device.\n+ *\n+ * This structure is safe to place in shared memory to be common among different\n+ * processes in a multi-process configuration.\n+ */\n+struct rte_cryptodev_data {\n+\tuint8_t dev_id;\t\t\t\t/**< Device ID for this instance */\n+\tchar name[RTE_CRYPTODEV_NAME_MAX_LEN];\t/**< Unique identifier name */\n+\n+\tuint8_t dev_started : 1;\t\t/**< Device state: STARTED(1)/STOPPED(0) */\n+\n+\tstruct rte_mempool *session_pool;\t/**< Session memory pool */\n+\tvoid **queue_pairs;\t\t/**< Array of pointers to queue pairs. */\n+\tuint16_t nb_queue_pairs;\t/**< Number of device queue pairs. */\n+\n+\tvoid *dev_private;\t\t/**< PMD-specific private data */\n+};\n+\n+extern struct rte_cryptodev *rte_cryptodevs;\n+/**\n+ *\n+ * Dequeue a burst of processed packets from a queue of the crypto device.\n+ * The dequeued packets are stored in *rte_mbuf* structures whose pointers are\n+ * supplied in the *pkts* array.\n+ *\n+ * The rte_crypto_dequeue_burst() function returns the number of packets\n+ * actually dequeued, which is the number of *rte_mbuf* data structures\n+ * effectively supplied into the *pkts* array.\n+ *\n+ * A return value equal to *nb_pkts* indicates that the queue contained\n+ * at least *rx_pkts* packets, and this is likely to signify that other\n+ * received packets remain in the input queue. Applications implementing\n+ * a \"retrieve as much received packets as possible\" policy can check this\n+ * specific case and keep invoking the rte_crypto_dequeue_burst() function until\n+ * a value less than *nb_pkts* is returned.\n+ *\n+ * The rte_crypto_dequeue_burst() function does not provide any error\n+ * notification to avoid the corresponding overhead.\n+ *\n+ * @param\tdev_id\t\tThe identifier of the device.\n+ * @param\tqp_id\t\tThe index of the queue pair from which to\n+ *\t\t\t\tretrieve processed packets. 
The value must be\n+ *\t\t\t\tin the range [0, nb_queue_pair - 1] previously\n+ *\t\t\t\tsupplied to rte_cryptodev_configure().\n+ * @param\tpkts\t\tThe address of an array of pointers to\n+ *\t\t\t\t*rte_mbuf* structures that must be large enough\n+ *\t\t\t\tto store *nb_pkts* pointers in it.\n+ * @param\tnb_pkts\t\tThe maximum number of packets to dequeue.\n+ *\n+ * @return\n+ *   - The number of packets actually dequeued, which is the number\n+ *   of pointers to *rte_mbuf* structures effectively supplied to the\n+ *   *pkts* array.\n+ */\n+static inline uint16_t\n+rte_cryptodev_dequeue_burst(uint8_t dev_id, uint16_t qp_id,\n+\t\tstruct rte_mbuf **pkts, uint16_t nb_pkts)\n+{\n+\tstruct rte_cryptodev *dev = &rte_cryptodevs[dev_id];\n+\n+\tnb_pkts = (*dev->dequeue_burst)\n+\t\t\t(dev->data->queue_pairs[qp_id], pkts, nb_pkts);\n+\n+\treturn nb_pkts;\n+}\n+\n+/**\n+ * Enqueue a burst of packets for processing on a crypto device.\n+ *\n+ * The rte_crypto_enqueue_burst() function is invoked to place packets\n+ * on the queue *queue_id* of the device designated by its *dev_id*.\n+ *\n+ * The *nb_pkts* parameter is the number of packets to process which are\n+ * supplied in the *pkts* array of *rte_mbuf* structures.\n+ *\n+ * The rte_crypto_enqueue_burst() function returns the number of packets it\n+ * actually sent. A return value equal to *nb_pkts* means that all packets\n+ * have been sent.\n+ * *\n+ * @param\tdev_id\t\tThe identifier of the device.\n+ * @param\tqueue_id\tThe index of the transmit queue through\n+ *\t\t\t\twhich output packets must be sent. The value\n+ *\t\t\t\tmust be in the range [0, nb_queue_pairs - 1]\n+ *\t\t\t\tpreviously supplied to rte_cryptodev_configure().\n+ * @param\ttx_pkts\t\tThe address of an array of *nb_pkts* pointers\n+ *\t\t\t\tto *rte_mbuf* structures which contain the\n+ *\t\t\t\toutput packets.\n+ * @param\tnb_pkts\t\tThe number of packets to transmit.\n+ *\n+ * @return\n+ * The number of packets actually enqueued on the crypto device. The return\n+ * value can be less than the value of the *nb_pkts* parameter when the\n+ * crypto devices queue is full or has been filled up.\n+ */\n+static inline uint16_t\n+rte_cryptodev_enqueue_burst(uint8_t dev_id, uint16_t qp_id,\n+\t\tstruct rte_mbuf **pkts, uint16_t nb_pkts)\n+{\n+\tstruct rte_cryptodev *dev = &rte_cryptodevs[dev_id];\n+\n+\treturn (*dev->enqueue_burst)\n+\t\t\t(dev->data->queue_pairs[qp_id], pkts, nb_pkts);\n+}\n+\n+\n+/**\n+ * Initialise a session for symmetric cryptographic operations.\n+ *\n+ * This function is used by the client to initialize immutable\n+ * parameters of symmetric cryptographic operation.\n+ * To perform the operation the rte_cryptodev_enqueue_burst function is\n+ * used.  Each mbuf should contain a reference to the session\n+ * pointer returned from this function contained within it's crypto_op if a\n+ * session-based operation is being provisioned. 
Memory to contain the session\n+ * information is allocated from within mempool managed by the cryptodev.\n+ *\n+ * The rte_cryptodev_session_free must be called to free allocated\n+ * memory when the session is no longer required.\n+ *\n+ * @param\tdev_id\t\tThe device identifier.\n+ * @param\txform\t\tCrypto transform chain.\n+\n+ *\n+ * @return\n+ *  Pointer to the created session or NULL\n+ */\n+extern struct rte_cryptodev_session *\n+rte_cryptodev_session_create(uint8_t dev_id,\n+\t\tstruct rte_crypto_xform *xform);\n+\n+\n+/**\n+ * Free the memory associated with a previously allocated session.\n+ *\n+ * @param\tdev_id\t\tThe device identifier.\n+ * @param\tsession\t\tSession pointer previously allocated by\n+ *\t\t\t\t*rte_cryptodev_session_create*.\n+ *\n+ * @return\n+ *   NULL on successful freeing of session.\n+ *   Session pointer on failure to free session.\n+ */\n+extern struct rte_cryptodev_session *\n+rte_cryptodev_session_free(uint8_t dev_id,\n+\t\tstruct rte_cryptodev_session *session);\n+\n+\n+#ifdef __cplusplus\n+}\n+#endif\n+\n+#endif /* _RTE_CRYPTODEV_H_ */\ndiff --git a/lib/librte_cryptodev/rte_cryptodev_pmd.h b/lib/librte_cryptodev/rte_cryptodev_pmd.h\nnew file mode 100644\nindex 0000000..9a6271e\n--- /dev/null\n+++ b/lib/librte_cryptodev/rte_cryptodev_pmd.h\n@@ -0,0 +1,577 @@\n+/*-\n+ *\n+ *   Copyright(c) 2015 Intel Corporation. All rights reserved.\n+ *\n+ *   Redistribution and use in source and binary forms, with or without\n+ *   modification, are permitted provided that the following conditions\n+ *   are met:\n+ *\n+ *     * Redistributions of source code must retain the above copyright\n+ *       notice, this list of conditions and the following disclaimer.\n+ *     * Redistributions in binary form must reproduce the above copyright\n+ *       notice, this list of conditions and the following disclaimer in\n+ *       the documentation and/or other materials provided with the\n+ *       distribution.\n+ *     * Neither the name of Intel Corporation nor the names of its\n+ *       contributors may be used to endorse or promote products derived\n+ *       from this software without specific prior written permission.\n+ *\n+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS\n+ *   \"AS IS\" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT\n+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR\n+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT\n+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,\n+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT\n+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,\n+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY\n+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT\n+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE\n+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.\n+ */\n+\n+#ifndef _RTE_CRYPTODEV_PMD_H_\n+#define _RTE_CRYPTODEV_PMD_H_\n+\n+/** @file\n+ * RTE Crypto PMD APIs\n+ *\n+ * @note\n+ * These API are from crypto PMD only and user applications should not call them\n+ * directly.\n+ */\n+\n+#ifdef __cplusplus\n+extern \"C\" {\n+#endif\n+\n+#include <string.h>\n+\n+#include <rte_dev.h>\n+#include <rte_pci.h>\n+#include <rte_malloc.h>\n+#include <rte_mbuf.h>\n+#include <rte_mempool.h>\n+#include <rte_log.h>\n+\n+#include \"rte_crypto.h\"\n+#include \"rte_cryptodev.h\"\n+\n+struct rte_cryptodev_stats;\n+struct rte_cryptodev_info;\n+struct rte_cryptodev_qp_conf;\n+\n+enum rte_cryptodev_event_type;\n+\n+/* Logging Macros */\n+\n+#define CDEV_LOG_ERR(fmt, args...) do { \\\n+\tRTE_LOG(ERR, CRYPTODEV, \"%s() line %u: \" fmt \"\\n\", \\\n+\t\t\t__func__, __LINE__, ## args); \\\n+\t} while (0)\n+\n+#define CDEV_PMD_LOG_ERR(dev, fmt, args...) do { \\\n+\tRTE_LOG(ERR, CRYPTODEV, \"[%s] %s() line %u: \" fmt \"\\n\", \\\n+\t\t\tdev, __func__, __LINE__, ## args); \\\n+\t} while (0)\n+\n+#ifdef RTE_LIBRTE_CRYPTODEV_DEBUG\n+#define CDEV_LOG_DEBUG(fmt, args...) do {                        \\\n+\t\tRTE_LOG(DEBUG, CRYPTODEV, \"%s() line %u: \" fmt \"\\n\", \\\n+\t\t\t\t__func__, __LINE__, ## args); \\\n+\t} while (0)\n+\n+#define CDEV_PMD_TRACE(fmt, args...) 
do {                        \\\n+\t\tRTE_LOG(DEBUG, CRYPTODEV, \"[%s] %s: \" fmt \"\\n\", dev, __func__, ## args); \\\n+\t} while (0)\n+\n+#else\n+#define CDEV_LOG_DEBUG(fmt, args...)\n+#define CDEV_PMD_TRACE(fmt, args...)\n+#endif\n+\n+\n+struct rte_cryptodev_session {\n+\tstruct {\n+\t\tuint8_t dev_id;\n+\t\tenum rte_cryptodev_type type;\n+\t\tstruct rte_mempool *mp;\n+\t} __rte_aligned(8);\n+\n+\tchar _private[];\n+};\n+\n+struct rte_cryptodev_driver;\n+struct rte_cryptodev;\n+\n+/**\n+ * Initialisation function of a crypto driver invoked for each matching\n+ * crypto PCI device detected during the PCI probing phase.\n+ *\n+ * @param\tdrv\tThe pointer to the [matching] crypto driver structure\n+ *\t\t\tsupplied by the PMD when it registered itself.\n+ * @param\tdev\tThe dev pointer is the address of the *rte_cryptodev*\n+ *\t\t\tstructure associated with the matching device and which\n+ *\t\t\thas been [automatically] allocated in the\n+ *\t\t\t*rte_crypto_devices* array.\n+ *\n+ * @return\n+ *   - 0: Success, the device is properly initialised by the driver.\n+ *        In particular, the driver MUST have set up the *dev_ops* pointer\n+ *        of the *dev* structure.\n+ *   - <0: Error code of the device initialisation failure.\n+ */\n+typedef int (*cryptodev_init_t)(struct rte_cryptodev_driver *drv,\n+\t\tstruct rte_cryptodev *dev);\n+\n+/**\n+ * Finalisation function of a driver invoked for each matching\n+ * PCI device detected during the PCI closing phase.\n+ *\n+ * @param\tdrv\tThe pointer to the [matching] driver structure supplied\n+ *\t\t\tby the PMD when it registered itself.\n+ * @param\tdev\tThe dev pointer is the address of the *rte_cryptodev*\n+ *\t\t\tstructure associated with the matching device and which\n+ *\t\t\thas been [automatically] allocated in the\n+ *\t\t\t*rte_crypto_devices* array.\n+ *\n+ *  * @return\n+ *   - 0: Success, the device is properly finalised by the driver.\n+ *        In particular, the driver MUST free the *dev_ops* pointer\n+ *        of the *dev* structure.\n+ *   - <0: Error code of the device initialisation failure.\n+ */\n+typedef int (*cryptodev_uninit_t)(const struct rte_cryptodev_driver  *drv,\n+\t\t\t\tstruct rte_cryptodev *dev);\n+\n+/**\n+ * The structure associated with a PMD driver.\n+ *\n+ * Each driver acts as a PCI driver and is represented by a generic\n+ * *crypto_driver* structure that holds:\n+ *\n+ * - An *rte_pci_driver* structure (which must be the first field).\n+ *\n+ * - The *cryptodev_init* function invoked for each matching PCI device.\n+ *\n+ * - The size of the private data to allocate for each matching device.\n+ */\n+struct rte_cryptodev_driver {\n+\tstruct rte_pci_driver pci_drv;\t/**< The PMD is also a PCI driver. */\n+\tunsigned dev_private_size;\t/**< Size of device private data. */\n+\n+\tcryptodev_init_t cryptodev_init;\t/**< Device init function. */\n+\tcryptodev_uninit_t cryptodev_uninit;\t/**< Device uninit function. */\n+};\n+\n+\n+/** Global structure used for maintaining state of allocated crypto devices */\n+struct rte_cryptodev_global {\n+\tstruct rte_cryptodev *devs;\t\t/**< Device information array */\n+\tstruct rte_cryptodev_data *data;\t/**< Device private data */\n+\tuint8_t nb_devs;\t\t\t/**< Number of devices found */\n+\tuint8_t max_devs;\t\t\t/**< Max number of devices */\n+};\n+\n+/** pointer to global crypto devices data structure. */\n+extern struct rte_cryptodev_global *rte_cryptodev_globals;\n+\n+/**\n+ * Get the rte_cryptodev structure device pointer for the device. 
Assumes a\n+ * valid device index.\n+ *\n+ * @param\tdev_id\tDevice ID value to select the device structure.\n+ *\n+ * @return\n+ *   - The rte_cryptodev structure pointer for the given device ID.\n+ */\n+static inline struct rte_cryptodev *\n+rte_cryptodev_pmd_get_dev(uint8_t dev_id)\n+{\n+\treturn &rte_cryptodev_globals->devs[dev_id];\n+}\n+\n+/**\n+ * Get the rte_cryptodev structure device pointer for the named device.\n+ *\n+ * @param\tname\tdevice name to select the device structure.\n+ *\n+ * @return\n+ *   - The rte_cryptodev structure pointer for the given device ID.\n+ */\n+static inline struct rte_cryptodev *\n+rte_cryptodev_pmd_get_named_dev(const char *name)\n+{\n+\tunsigned i;\n+\n+\tif (name == NULL)\n+\t\treturn NULL;\n+\n+\tfor (i = 0; i < rte_cryptodev_globals->max_devs; i++) {\n+\t\tif (rte_cryptodev_globals->devs[i].attached == RTE_CRYPTODEV_ATTACHED &&\n+\t\t\t\tstrcmp(rte_cryptodev_globals->devs[i].data->name, name) == 0)\n+\t\t\treturn &rte_cryptodev_globals->devs[i];\n+\t}\n+\n+\treturn NULL;\n+}\n+\n+/**\n+ * Validate if the crypto device index is valid attached crypto device.\n+ *\n+ * @param\tdev_id\tCrypto device index.\n+ *\n+ * @return\n+ *   - If the device index is valid (1) or not (0).\n+ */\n+static inline unsigned\n+rte_cryptodev_pmd_is_valid_dev(uint8_t dev_id)\n+{\n+\tstruct rte_cryptodev *dev = NULL;\n+\n+\tif (dev_id >= rte_cryptodev_globals->nb_devs)\n+\t\treturn 0;\n+\n+\tdev = rte_cryptodev_pmd_get_dev(dev_id);\n+\tif (dev->attached != RTE_CRYPTODEV_ATTACHED)\n+\t\treturn 0;\n+\telse\n+\t\treturn 1;\n+}\n+\n+/**\n+ * The pool of rte_cryptodev structures. The size of the pool\n+ * is configured at compile-time in the <rte_cryptodev.c> file.\n+ */\n+extern struct rte_cryptodev rte_crypto_devices[];\n+\n+\n+/**\n+ * Definitions of all functions exported by a driver through the\n+ * the generic structure of type *crypto_dev_ops* supplied in the\n+ * *rte_cryptodev* structure associated with a device.\n+ */\n+\n+/**\n+ *\tFunction used to configure device.\n+ *\n+ * @param\tdev\tCrypto device pointer\n+ *\n+ * @return\tReturns 0 on success\n+ */\n+typedef int (*cryptodev_configure_t)(struct rte_cryptodev *dev);\n+\n+/**\n+ * Function used to start a configured device.\n+ *\n+ * @param\tdev\tCrypto device pointer\n+ *\n+ * @return\tReturns 0 on success\n+ */\n+typedef int (*cryptodev_start_t)(struct rte_cryptodev *dev);\n+\n+/**\n+ * Function used to stop a configured device.\n+ *\n+ * @param\tdev\tCrypto device pointer\n+ */\n+typedef void (*cryptodev_stop_t)(struct rte_cryptodev *dev);\n+\n+/**\n+ Function used to close a configured device.\n+ *\n+ * @param\tdev\tCrypto device pointer\n+ */\n+typedef int (*cryptodev_close_t)(struct rte_cryptodev *dev);\n+\n+\n+/**\n+ * Function used to get statistics of a device.\n+ *\n+ * @param\tdev\tCrypto device pointer\n+ * @param\tstats\tPointer to crypto device stats structure to populate\n+ */\n+typedef void (*cryptodev_stats_get_t)(struct rte_cryptodev *dev,\n+\t\t\t\tstruct rte_cryptodev_stats *stats);\n+\n+\n+/**\n+ * Function used to reset statistics of a device.\n+ *\n+ * @param\tdev\tCrypto device pointer\n+ */\n+typedef void (*cryptodev_stats_reset_t)(struct rte_cryptodev *dev);\n+\n+\n+/**\n+ * Function used to get specific information of a device.\n+ *\n+ * @param\tdev\tCrypto device pointer\n+ */\n+typedef void (*cryptodev_info_get_t)(struct rte_cryptodev *dev,\n+\t\t\t\tstruct rte_cryptodev_info *dev_info);\n+\n+/**\n+ * Start queue pair of a device.\n+ *\n+ * @param\tdev\tCrypto device pointer\n+ * 
@param\tqp_id\tQueue Pair Index\n+ *\n+ * @return\tReturns 0 on success.\n+ */\n+typedef int (*cryptodev_queue_pair_start_t)(struct rte_cryptodev *dev,\n+\t\t\t\tuint16_t qp_id);\n+\n+/**\n+ * Stop queue pair of a device.\n+ *\n+ * @param\tdev\tCrypto device pointer\n+ * @param\tqp_id\tQueue Pair Index\n+ *\n+ * @return\tReturns 0 on success.\n+ */\n+typedef int (*cryptodev_queue_pair_stop_t)(struct rte_cryptodev *dev,\n+\t\t\t\tuint16_t qp_id);\n+\n+/**\n+ * Setup a queue pair for a device.\n+ *\n+ * @param\tdev\t\tCrypto device pointer\n+ * @param\tqp_id\t\tQueue Pair Index\n+ * @param\tqp_conf\t\tQueue configuration structure\n+ * @param\tsocket_id\tSocket Index\n+ *\n+ * @return\tReturns 0 on success.\n+ */\n+typedef int (*cryptodev_queue_pair_setup_t)(struct rte_cryptodev *dev,\n+\t\tuint16_t qp_id,\tconst struct rte_cryptodev_qp_conf *qp_conf,\n+\t\tint socket_id);\n+\n+/**\n+ * Release memory resources allocated by given queue pair.\n+ *\n+ * @param\tdev\tCrypto device pointer\n+ * @param\tqp_id\tQueue Pair Index\n+ */\n+typedef void (*cryptodev_queue_pair_release_t)(struct rte_cryptodev *dev,\n+\t\tuint16_t qp_id);\n+\n+/**\n+ * Get number of available queue pairs of a device.\n+ *\n+ * @param\tdev\tCrypto device pointer\n+ *\n+ * @return\tReturns number of queue pairs on success.\n+ */\n+typedef uint32_t (*cryptodev_queue_pair_count_t)(struct rte_cryptodev *dev);\n+\n+/**\n+ * Create a session mempool to allocate sessions from\n+ *\n+ * @param\tdev\t\tCrypto device pointer\n+ * @param\tnb_objs\t\tnumber of sessions objects in mempool\n+ * @param\tobj_cache\tl-core object cache size, see *rte_ring_create*\n+ * @param\tsocket_id\tSocket Id to allocate  mempool on.\n+ *\n+ * @return\n+ * - On success returns a pointer to a rte_mempool\n+ * - On failure returns a NULL pointer\n+ *  */\n+typedef int (*cryptodev_create_session_pool_t)(\n+\t\tstruct rte_cryptodev *dev, unsigned nb_objs,\n+\t\tunsigned obj_cache_size, int socket_id);\n+\n+\n+/**\n+ * Get the size of a cryptodev session\n+ *\n+ * @param\tdev\t\tCrypto device pointer\n+ *\n+ * @return\n+ *  - On success returns the size of the session structure for device\n+ *  - On failure returns 0\n+ * */\n+typedef unsigned (*cryptodev_get_session_private_size_t)(\n+\t\tstruct rte_cryptodev *dev);\n+\n+/**\n+ * Initialize a Crypto session on a device.\n+ *\n+ * @param\tdev\t\tCrypto device pointer\n+ * @param\txform\t\tSingle or chain of crypto xforms\n+ * @param\tpriv_sess\tPointer to cryptodev's private session structure\n+ *\n+ * @return\n+ *  - Returns private session structure on success.\n+ *  - Returns NULL on failure.\n+ * */\n+typedef void (*cryptodev_initialize_session_t)(struct rte_mempool *mempool,\n+\t\tvoid *session_private);\n+\n+/**\n+ * Configure a Crypto session on a device.\n+ *\n+ * @param\tdev\t\tCrypto device pointer\n+ * @param\txform\t\tSingle or chain of crypto xforms\n+ * @param\tpriv_sess\tPointer to cryptodev's private session structure\n+ *\n+ * @return\n+ *  - Returns private session structure on success.\n+ *  - Returns NULL on failure.\n+ * */\n+typedef void * (*cryptodev_configure_session_t)(struct rte_cryptodev *dev,\n+\t\tstruct rte_crypto_xform *xform, void *session_private);\n+\n+/**\n+ * Free Crypto session.\n+ * @param\tsession\t\tCryptodev session structure to free\n+ * */\n+typedef void (*cryptodev_free_session_t)(struct rte_cryptodev *dev,\n+\t\tvoid *session_private);\n+\n+\n+/** Crypto device operations function pointer table */\n+struct rte_cryptodev_ops {\n+\tcryptodev_configure_t 
dev_configure;\t/**< Configure device. */\n+\tcryptodev_start_t dev_start;\t\t/**< Start device. */\n+\tcryptodev_stop_t dev_stop;\t\t/**< Stop device. */\n+\tcryptodev_close_t dev_close;\t\t/**< Close device. */\n+\n+\tcryptodev_info_get_t dev_infos_get;\t/**< Get device info. */\n+\n+\tcryptodev_stats_get_t stats_get;\t/**< Get generic device statistics. */\n+\tcryptodev_stats_reset_t stats_reset;\t/**< Reset generic device statistics. */\n+\n+\tcryptodev_queue_pair_setup_t queue_pair_setup;\t\t/**< Set up a device queue pair. */\n+\tcryptodev_queue_pair_release_t queue_pair_release;\t/**< Release a queue pair. */\n+\tcryptodev_queue_pair_start_t queue_pair_start;\t\t/**< Start a queue pair. */\n+\tcryptodev_queue_pair_stop_t queue_pair_stop;\t\t/**< Stop a queue pair. */\n+\tcryptodev_queue_pair_count_t queue_pair_count;\t\t/**< Get count of the queue pairs. */\n+\n+\tcryptodev_get_session_private_size_t session_get_size;\t/**< Return private session. */\n+\tcryptodev_initialize_session_t session_initialize;\t/**< Initialization function for private session data */\n+\tcryptodev_configure_session_t session_configure;\t/**< Configure a Crypto session. */\n+\tcryptodev_free_session_t session_clear;\t\t/**< Clear a Crypto sessions private data. */\n+};\n+\n+\n+/**\n+ * Function for internal use by dummy drivers primarily, e.g. ring-based\n+ * driver.\n+ * Allocates a new cryptodev slot for an crypto device and returns the pointer\n+ * to that slot for the driver to use.\n+ *\n+ * @param\tname\t\tUnique identifier name for each device\n+ * @param\ttype\t\tDevice type of this Crypto device\n+ * @param\tsocket_id\tSocket to allocate resources on.\n+ * @return\n+ *   - Slot in the rte_dev_devices array for a new device;\n+ */\n+struct rte_cryptodev *\n+rte_cryptodev_pmd_allocate(const char *name, enum pmd_type type, int  socket_id);\n+\n+/**\n+ * Creates a new virtual crypto device and returns the pointer\n+ * to that device.\n+ *\n+ * @param\tname\t\t\tPMD type name\n+ * @param\tdev_private_size\tSize of crypto PMDs private data\n+ * @param\tsocket_id\t\tSocket to allocate resources on.\n+ *\n+ * @return\n+ *   - Cryptodev pointer if device is successfully created.\n+ *   - NULL if device cannot be created.\n+ */\n+struct rte_cryptodev *\n+rte_cryptodev_pmd_virtual_dev_init(const char *name, size_t dev_private_size,\n+\t\tint socket_id);\n+\n+\n+/**\n+ * Function for internal use by dummy drivers primarily, e.g. ring-based\n+ * driver.\n+ * Release the specified cryptodev device.\n+ *\n+ * @param cryptodev\n+ * The *cryptodev* pointer is the address of the *rte_cryptodev* structure.\n+ * @return\n+ *   - 0 on success, negative on error\n+ */\n+extern int\n+rte_cryptodev_pmd_release_device(struct rte_cryptodev *cryptodev);\n+\n+/**\n+ * Attach a new device specified by arguments.\n+ *\n+ * @param devargs\n+ *  A pointer to a string array describing the new device\n+ *  to be attached. 
The string should be a pci address like\n+ *  '0000:01:00.0' or virtual device name like 'crypto_pcap0'.\n+ * @param dev_id\n+ *  A pointer to a identifier actually attached.\n+ * @return\n+ *  0 on success and dev_id is filled, negative on error\n+ */\n+extern int\n+rte_cryptodev_pmd_attach(const char *devargs, uint8_t *dev_id);\n+\n+/**\n+ * Detach a device specified by identifier.\n+ *\n+ * @param dev_id\n+ *   The identifier of the device to detach.\n+ * @param addr\n+ *  A pointer to a device name actually detached.\n+ * @return\n+ *  0 on success and devname is filled, negative on error\n+ */\n+extern int\n+rte_cryptodev_pmd_detach(uint8_t dev_id, char *devname);\n+\n+/**\n+ * Register a Crypto [Poll Mode] driver.\n+ *\n+ * Function invoked by the initialization function of a Crypto driver\n+ * to simultaneously register itself as Crypto Poll Mode Driver and to either:\n+ *\n+ *\ta - register itself as PCI driver if the crypto device is a physical\n+ *\t\tdevice, by invoking the rte_eal_pci_register() function to\n+ *\t\tregister the *pci_drv* structure embedded in the *crypto_drv*\n+ *\t\tstructure, after having stored the address of the\n+ *\t\trte_cryptodev_init() function in the *devinit* field of the\n+ *\t\t*pci_drv* structure.\n+ *\n+ *\t\tDuring the PCI probing phase, the rte_cryptodev_init()\n+ *\t\tfunction is invoked for each PCI [device] matching the\n+ *\t\tembedded PCI identifiers provided by the driver.\n+ *\n+ *\tb, complete the initialization sequence if the device is a virtual\n+ *\t\tdevice by calling the rte_cryptodev_init() directly passing a\n+ *\t\tNULL parameter for the rte_pci_device structure.\n+ *\n+ *   @param crypto_drv\tcrypto_driver structure associated with the crypto\n+ *\t\t\t\t\tdriver.\n+ *   @param type\t\tpmd type\n+ */\n+extern int\n+rte_cryptodev_pmd_driver_register(struct rte_cryptodev_driver *crypto_drv,\n+\t\tenum pmd_type type);\n+\n+/**\n+ * Executes all the user application registered callbacks for the specific\n+ * device.\n+ *  *\n+ * @param\tdev\tPointer to cryptodev struct\n+ * @param\tevent\tCrypto device interrupt event type.\n+ *\n+ * @return\n+ *  void\n+ */\n+void rte_cryptodev_pmd_callback_process(struct rte_cryptodev *dev,\n+\t\t\t\tenum rte_cryptodev_event_type event);\n+\n+\n+#ifdef __cplusplus\n+}\n+#endif\n+\n+#endif /* _RTE_CRYPTODEV_PMD_H_ */\ndiff --git a/lib/librte_eal/common/include/rte_common.h b/lib/librte_eal/common/include/rte_common.h\nindex 3121314..bae4054 100644\n--- a/lib/librte_eal/common/include/rte_common.h\n+++ b/lib/librte_eal/common/include/rte_common.h\n@@ -270,8 +270,23 @@ rte_align64pow2(uint64_t v)\n \t\t_a > _b ? 
_a : _b; \\\n \t})\n \n+\n /*********** Other general functions / macros ********/\n \n+#define FUNC_PTR_OR_ERR_RET(func, retval) do { \\\n+\tif ((func) == NULL) { \\\n+\t\tRTE_LOG(ERR, PMD, \"Function not supported\"); \\\n+\t\treturn retval; \\\n+\t} \\\n+} while (0)\n+\n+#define FUNC_PTR_OR_RET(func) do { \\\n+\tif ((func) == NULL) { \\\n+\t\tRTE_LOG(ERR, PMD, \"Function not supported\"); \\\n+\t\treturn; \\\n+\t} \\\n+} while (0)\n+\n #ifdef __SSE2__\n #include <emmintrin.h>\n /**\ndiff --git a/lib/librte_eal/common/include/rte_eal.h b/lib/librte_eal/common/include/rte_eal.h\nindex f36a792..948cc0a 100644\n--- a/lib/librte_eal/common/include/rte_eal.h\n+++ b/lib/librte_eal/common/include/rte_eal.h\n@@ -115,6 +115,20 @@ enum rte_lcore_role_t rte_eal_lcore_role(unsigned lcore_id);\n  */\n enum rte_proc_type_t rte_eal_process_type(void);\n \n+#define PROC_PRIMARY_OR_RET() do { \\\n+\tif (rte_eal_process_type() != RTE_PROC_PRIMARY) { \\\n+\t\tRTE_LOG(ERR, PMD, \"Cannot run in secondary processes\"); \\\n+\t\treturn; \\\n+\t} \\\n+} while (0)\n+\n+#define PROC_PRIMARY_OR_ERR_RET(retval) do { \\\n+\tif (rte_eal_process_type() != RTE_PROC_PRIMARY) { \\\n+\t\tRTE_LOG(ERR, PMD, \"Cannot run in secondary processes\"); \\\n+\t\treturn retval; \\\n+\t} \\\n+} while (0)\n+\n /**\n  * Request iopl privilege for all RPL.\n  *\ndiff --git a/lib/librte_eal/common/include/rte_log.h b/lib/librte_eal/common/include/rte_log.h\nindex ede0dca..2e47e7f 100644\n--- a/lib/librte_eal/common/include/rte_log.h\n+++ b/lib/librte_eal/common/include/rte_log.h\n@@ -78,6 +78,7 @@ extern struct rte_logs rte_logs;\n #define RTE_LOGTYPE_TABLE   0x00004000 /**< Log related to table. */\n #define RTE_LOGTYPE_PIPELINE 0x00008000 /**< Log related to pipeline. */\n #define RTE_LOGTYPE_MBUF    0x00010000 /**< Log related to mbuf. */\n+#define RTE_LOGTYPE_CRYPTODEV 0x00020000 /**< Log related to cryptodev. */\n \n /* these log types can be used in an application */\n #define RTE_LOGTYPE_USER1   0x01000000 /**< User-defined log type 1. */\ndiff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h\nindex 1bed415..40e8d43 100644\n--- a/lib/librte_eal/common/include/rte_memory.h\n+++ b/lib/librte_eal/common/include/rte_memory.h\n@@ -76,9 +76,19 @@ enum rte_page_sizes {\n /**< Return the first cache-aligned value greater or equal to size. */\n \n /**\n+ * Force alignment.\n+ */\n+#define __rte_aligned(a) __attribute__((__aligned__(a)))\n+\n+/**\n  * Force alignment to cache line.\n  */\n-#define __rte_cache_aligned __attribute__((__aligned__(RTE_CACHE_LINE_SIZE)))\n+#define __rte_cache_aligned __rte_aligned(RTE_CACHE_LINE_SIZE)\n+\n+/**\n+ * Force a structure to be packed\n+ */\n+#define __rte_packed __attribute__((__packed__))\n \n typedef uint64_t phys_addr_t; /**< Physical address definition. 
*/\n #define RTE_BAD_PHYS_ADDR ((phys_addr_t)-1)\n@@ -104,7 +114,7 @@ struct rte_memseg {\n \t /**< store segment MFNs */\n \tuint64_t mfn[DOM0_NUM_MEMBLOCK];\n #endif\n-} __attribute__((__packed__));\n+} __rte_packed;\n \n /**\n  * Lock page in physical memory and prevent from swapping.\ndiff --git a/lib/librte_ether/rte_ethdev.c b/lib/librte_ether/rte_ethdev.c\nindex b309309..bff6744 100644\n--- a/lib/librte_ether/rte_ethdev.c\n+++ b/lib/librte_ether/rte_ethdev.c\n@@ -77,36 +77,6 @@\n #define PMD_DEBUG_TRACE(fmt, args...)\n #endif\n \n-/* Macros for checking for restricting functions to primary instance only */\n-#define PROC_PRIMARY_OR_ERR_RET(retval) do { \\\n-\tif (rte_eal_process_type() != RTE_PROC_PRIMARY) { \\\n-\t\tPMD_DEBUG_TRACE(\"Cannot run in secondary processes\\n\"); \\\n-\t\treturn (retval); \\\n-\t} \\\n-} while (0)\n-\n-#define PROC_PRIMARY_OR_RET() do { \\\n-\tif (rte_eal_process_type() != RTE_PROC_PRIMARY) { \\\n-\t\tPMD_DEBUG_TRACE(\"Cannot run in secondary processes\\n\"); \\\n-\t\treturn; \\\n-\t} \\\n-} while (0)\n-\n-/* Macros to check for invalid function pointers in dev_ops structure */\n-#define FUNC_PTR_OR_ERR_RET(func, retval) do { \\\n-\tif ((func) == NULL) { \\\n-\t\tPMD_DEBUG_TRACE(\"Function not supported\\n\"); \\\n-\t\treturn (retval); \\\n-\t} \\\n-} while (0)\n-\n-#define FUNC_PTR_OR_RET(func) do { \\\n-\tif ((func) == NULL) { \\\n-\t\tPMD_DEBUG_TRACE(\"Function not supported\\n\"); \\\n-\t\treturn; \\\n-\t} \\\n-} while (0)\n-\n /* Macros to check for valid port */\n #define VALID_PORTID_OR_ERR_RET(port_id, retval) do {\t\t\\\n \tif (!rte_eth_dev_is_valid_port(port_id)) {\t\t\\\ndiff --git a/lib/librte_mbuf/rte_mbuf.c b/lib/librte_mbuf/rte_mbuf.c\nindex c18b438..b7a2498 100644\n--- a/lib/librte_mbuf/rte_mbuf.c\n+++ b/lib/librte_mbuf/rte_mbuf.c\n@@ -271,6 +271,7 @@ const char *rte_get_rx_ol_flag_name(uint64_t mask)\n const char *rte_get_tx_ol_flag_name(uint64_t mask)\n {\n \tswitch (mask) {\n+\tcase PKT_TX_CRYPTO_OP: return \"PKT_TX_CRYPTO_OP\";\n \tcase PKT_TX_VLAN_PKT: return \"PKT_TX_VLAN_PKT\";\n \tcase PKT_TX_IP_CKSUM: return \"PKT_TX_IP_CKSUM\";\n \tcase PKT_TX_TCP_CKSUM: return \"PKT_TX_TCP_CKSUM\";\ndiff --git a/lib/librte_mbuf/rte_mbuf.h b/lib/librte_mbuf/rte_mbuf.h\nindex d7c9030..281486d 100644\n--- a/lib/librte_mbuf/rte_mbuf.h\n+++ b/lib/librte_mbuf/rte_mbuf.h\n@@ -98,14 +98,16 @@ extern \"C\" {\n #define PKT_RX_FDIR_ID       (1ULL << 13) /**< FD id reported if FDIR match. */\n #define PKT_RX_FDIR_FLX      (1ULL << 14) /**< Flexible bytes reported if FDIR match. */\n #define PKT_RX_QINQ_PKT      (1ULL << 15)  /**< RX packet with double VLAN stripped. */\n+#define PKT_RX_CRYPTO_DIGEST_BAD (1ULL << 16) /**< Crypto hash digest verification failed. */\n /* add new RX flags here */\n \n /* add new TX flags here */\n \n+#define PKT_TX_CRYPTO_OP\t(1ULL << 48) /**< Valid Crypto Operation attached to mbuf */\n /**\n  * Second VLAN insertion (QinQ) flag.\n  */\n-#define PKT_TX_QINQ_PKT    (1ULL << 49)   /**< TX packet with double VLAN inserted. */\n+#define PKT_TX_QINQ_PKT\t\t(1ULL << 49) /**< TX packet with double VLAN inserted. */\n \n /**\n  * TCP segmentation offload. 
To enable this offload feature for a\n@@ -728,6 +730,9 @@ typedef uint8_t  MARKER8[0];  /**< generic marker with 1B alignment */\n typedef uint64_t MARKER64[0]; /**< marker that allows us to overwrite 8 bytes\n                                * with a single assignment */\n \n+/** Opaque accelerator operations declarations */\n+struct rte_crypto_op_data;\n+\n /**\n  * The generic rte_mbuf, containing a packet mbuf.\n  */\n@@ -841,6 +846,8 @@ struct rte_mbuf {\n \n \t/** Timesync flags for use with IEEE1588. */\n \tuint16_t timesync;\n+\t/* Crypto Accelerator operation */\n+\tstruct rte_crypto_op_data *crypto_op;\n } __rte_cache_aligned;\n \n static inline uint16_t rte_pktmbuf_priv_size(struct rte_mempool *mp);\n@@ -1622,6 +1629,33 @@ static inline struct rte_mbuf *rte_pktmbuf_lastseg(struct rte_mbuf *m)\n #define rte_pktmbuf_mtod(m, t) rte_pktmbuf_mtod_offset(m, t, 0)\n \n /**\n+ * A macro that returns the physical address of the data in the mbuf.\n+ *\n+ * The returned pointer is cast to type t. Before using this\n+ * function, the user must ensure that m_headlen(m) is large enough to\n+ * read its data.\n+ *\n+ * @param m\n+ *   The packet mbuf.\n+ * @param o\n+ *   The offset into the data to calculate address from.\n+ */\n+#define rte_pktmbuf_mtophys_offset(m, o) ((phys_addr_t)((char *)(m)->buf_physaddr + (m)->data_off) + (o))\n+\n+/**\n+ * A macro that returns the physical address of the data in the mbuf.\n+ *\n+ * The returned pointer is cast to type t. Before using this\n+ * function, the user must ensure that m_headlen(m) is large enough to\n+ * read its data.\n+ *\n+ * @param m\n+ *   The packet mbuf.\n+ * @param o\n+ *   The offset into the data to calculate address from.\n+ */\n+#define rte_pktmbuf_mtophys(m) rte_pktmbuf_mtophys_offset(m, 0)\n+/**\n  * A macro that returns the length of the packet.\n  *\n  * The value can be read or assigned.\n@@ -1790,6 +1824,23 @@ static inline int rte_pktmbuf_is_contiguous(const struct rte_mbuf *m)\n  */\n void rte_pktmbuf_dump(FILE *f, const struct rte_mbuf *m, unsigned dump_len);\n \n+\n+\n+/**\n+ * Attach a crypto operation to a mbuf.\n+ *\n+ * @param m\n+ *   The packet mbuf.\n+ * @param op\n+ *   The crypto operation data structure to attach.\n+ */\n+static inline void\n+rte_pktmbuf_attach_crypto_op(struct rte_mbuf *m, struct rte_crypto_op_data *op)\n+{\n+\tm->crypto_op = op;\n+\tm->ol_flags |= PKT_TX_CRYPTO_OP;\n+}\n+\n #ifdef __cplusplus\n }\n #endif\ndiff --git a/mk/rte.app.mk b/mk/rte.app.mk\nindex 9e1909e..4a3c41b 100644\n--- a/mk/rte.app.mk\n+++ b/mk/rte.app.mk\n@@ -114,6 +114,7 @@ _LDLIBS-$(CONFIG_RTE_LIBRTE_KVARGS)         += -lrte_kvargs\n _LDLIBS-$(CONFIG_RTE_LIBRTE_MBUF)           += -lrte_mbuf\n _LDLIBS-$(CONFIG_RTE_LIBRTE_IP_FRAG)        += -lrte_ip_frag\n _LDLIBS-$(CONFIG_RTE_LIBRTE_ETHER)          += -lethdev\n+_LDLIBS-$(CONFIG_RTE_LIBRTE_CRYPTODEV)      += -lcryptodev\n _LDLIBS-$(CONFIG_RTE_LIBRTE_MEMPOOL)        += -lrte_mempool\n _LDLIBS-$(CONFIG_RTE_LIBRTE_RING)           += -lrte_ring\n _LDLIBS-$(CONFIG_RTE_LIBRTE_EAL)            += -lrte_eal\n",
    "prefixes": [
        "dpdk-dev",
        "1/6"
    ]
}