From patchwork Mon Oct 12 19:21:24 2020
X-Patchwork-Submitter: Arkadiusz Kusztal
X-Patchwork-Id: 80396
X-Patchwork-Delegate: gakhil@marvell.com
From: Arek Kusztal
To: dev@dpdk.org
Cc: akhil.goyal@nxp.com, fiona.trahe@intel.com, Arek Kusztal
Date: Mon, 12 Oct 2020 20:21:24 +0100
Message-Id: <20201012192125.28263-2-arkadiuszx.kusztal@intel.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20201012192125.28263-1-arkadiuszx.kusztal@intel.com>
References: <20201012192125.28263-1-arkadiuszx.kusztal@intel.com>
Subject: [dpdk-dev] [PATCH v4 1/2] cryptodev: remove crypto list end enumerators

This patch removes the enumerators RTE_CRYPTO_CIPHER_LIST_END,
RTE_CRYPTO_AUTH_LIST_END and RTE_CRYPTO_AEAD_LIST_END to prevent
ABI breakage that can arise when new crypto algorithms are added
to these enums.

Signed-off-by: Arek Kusztal
Acked-by: Akhil Goyal
---
 lib/librte_cryptodev/rte_crypto_sym.h | 36 ++++++++++++++++++---------
 1 file changed, 24 insertions(+), 12 deletions(-)

diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index f29c98051..84170e24e 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -87,7 +87,13 @@ union rte_crypto_sym_ofs {
 	} ofs;
 };
 
-/** Symmetric Cipher Algorithms */
+/** Symmetric Cipher Algorithms
+ *
+ * Note, to avoid ABI breakage across releases
+ * - LIST_END should not be added to this enum
+ * - the order of enums should not be changed
+ * - new algorithms should only be added to the end
+ */
 enum rte_crypto_cipher_algorithm {
 	RTE_CRYPTO_CIPHER_NULL = 1,
 	/**< NULL cipher algorithm. No mode applies to the NULL algorithm. */
@@ -132,15 +138,12 @@ enum rte_crypto_cipher_algorithm {
 	 * for m_src and m_dst in the rte_crypto_sym_op must be NULL.
 	 */
 
-	RTE_CRYPTO_CIPHER_DES_DOCSISBPI,
+	RTE_CRYPTO_CIPHER_DES_DOCSISBPI
 	/**< DES algorithm using modes required by
 	 * DOCSIS Baseline Privacy Plus Spec.
 	 * Chained mbufs are not supported in this mode, i.e. rte_mbuf.next
 	 * for m_src and m_dst in the rte_crypto_sym_op must be NULL.
 	 */
-
-	RTE_CRYPTO_CIPHER_LIST_END
-
 };
 
 /** Cipher algorithm name strings */
@@ -246,7 +249,13 @@ struct rte_crypto_cipher_xform {
 	} iv;	/**< Initialisation vector parameters */
 };
 
-/** Symmetric Authentication / Hash Algorithms */
+/** Symmetric Authentication / Hash Algorithms
+ *
+ * Note, to avoid ABI breakage across releases
+ * - LIST_END should not be added to this enum
+ * - the order of enums should not be changed
+ * - new algorithms should only be added to the end
+ */
 enum rte_crypto_auth_algorithm {
 	RTE_CRYPTO_AUTH_NULL = 1,
 	/**< NULL hash algorithm. */
@@ -312,10 +321,8 @@ enum rte_crypto_auth_algorithm {
 	/**< HMAC using 384 bit SHA3 algorithm. */
 	RTE_CRYPTO_AUTH_SHA3_512,
 	/**< 512 bit SHA3 algorithm. */
-	RTE_CRYPTO_AUTH_SHA3_512_HMAC,
+	RTE_CRYPTO_AUTH_SHA3_512_HMAC
 	/**< HMAC using 512 bit SHA3 algorithm. */
-
-	RTE_CRYPTO_AUTH_LIST_END
 };
 
 /** Authentication algorithm name strings */
@@ -406,15 +413,20 @@ struct rte_crypto_auth_xform {
 };
 
 
-/** Symmetric AEAD Algorithms */
+/** Symmetric AEAD Algorithms
+ *
+ * Note, to avoid ABI breakage across releases
+ * - LIST_END should not be added to this enum
+ * - the order of enums should not be changed
+ * - new algorithms should only be added to the end
+ */
 enum rte_crypto_aead_algorithm {
 	RTE_CRYPTO_AEAD_AES_CCM = 1,
 	/**< AES algorithm in CCM mode. */
 	RTE_CRYPTO_AEAD_AES_GCM,
 	/**< AES algorithm in GCM mode. */
-	RTE_CRYPTO_AEAD_CHACHA20_POLY1305,
+	RTE_CRYPTO_AEAD_CHACHA20_POLY1305
 	/**< Chacha20 cipher with poly1305 authenticator */
-	RTE_CRYPTO_AEAD_LIST_END
 };
 
 /** AEAD algorithm name strings */
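
Background on why a LIST_END enumerator breaks ABI: its value equals the
number of algorithms at the time an application is compiled, and
applications commonly used it to size lookup tables or to bounds-check
algorithm values, baking that count into their binaries. When a later
release appends a new algorithm, LIST_END grows, the shared library hands
out enum values at or above the old bound, and an un-recompiled binary
indexes out of range or rejects valid algorithms. The sketch below is a
minimal illustration of the failure mode and a LIST_END-free alternative;
it is hypothetical application-side code, not part of this patch
(app_cipher_supported() is an invented name).

#include <rte_crypto_sym.h>

/* Pre-patch pattern (breaks across releases): a table sized at compile
 * time by the LIST_END enumerator,
 *
 *     static const char *name_tbl[RTE_CRYPTO_CIPHER_LIST_END];
 *
 * overflows once the library starts returning algorithm values that were
 * added after this binary was built.
 */

/* Post-patch pattern: enumerate only the algorithms this application
 * actually handles, and treat unknown (possibly newer) values gracefully
 * instead of trusting a compile-time upper bound. */
static int
app_cipher_supported(enum rte_crypto_cipher_algorithm algo)
{
	switch (algo) {
	case RTE_CRYPTO_CIPHER_NULL:
	case RTE_CRYPTO_CIPHER_AES_CBC:
	case RTE_CRYPTO_CIPHER_AES_CTR:
		return 1;	/* algorithms this app was written for */
	default:
		return 0;	/* unknown or newer value: refuse, do not index */
	}
}

This is also why the new doc comments forbid reordering existing entries:
each enumerator's numeric value is part of the ABI, so new algorithms may
only be appended at the end.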