From patchwork Sun Jul 2 05:41:10 2017
X-Patchwork-Submitter: "De Lara Guarch, Pablo" <pablo.de.lara.guarch@intel.com>
X-Patchwork-Id: 26194
From: Pablo de Lara <pablo.de.lara.guarch@intel.com>
To: declan.doherty@intel.com, zbigniew.bodek@caviumnetworks.com,
	jerin.jacob@caviumnetworks.com, akhil.goyal@nxp.com,
	hemant.agrawal@nxp.com, fiona.trahe@intel.com,
	john.griffin@intel.com, deepak.k.jain@intel.com
Cc: dev@dpdk.org, Pablo de Lara <pablo.de.lara.guarch@intel.com>
Date: Sun, 2 Jul 2017 06:41:10 +0100
Message-Id: <20170702054127.75610-10-pablo.de.lara.guarch@intel.com>
In-Reply-To: <20170702054127.75610-1-pablo.de.lara.guarch@intel.com>
References: <20170629113521.5560-1-pablo.de.lara.guarch@intel.com>
	<20170702054127.75610-1-pablo.de.lara.guarch@intel.com>
Subject: [dpdk-dev] [PATCH v4 09/26] app/crypto-perf: move IV to crypto op
	private data

Usually, the IV changes for each crypto operation. Therefore, instead of
having every operation point at the same location, the IV is now copied
into each crypto operation's private data. This allows the IV to be passed
as an offset from the beginning of the crypto operation, instead of as a
pointer.
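As a minimal sketch of the pattern the populate_ops callbacks now follow
(assuming the rte_crypto API used by this series; set_iv_from_offset is an
illustrative name, not a helper added by the patch):

#include <string.h>
#include <rte_crypto.h>

/*
 * Resolve the IV area inside the crypto op's private data and fill it
 * from a test vector, so each op carries its own writable IV copy.
 */
static void
set_iv_from_offset(struct rte_crypto_op *op, const uint8_t *iv,
		uint16_t iv_len, uint16_t iv_offset)
{
	struct rte_crypto_sym_op *sym_op = op->sym;

	/* Virtual and physical IV addresses, both at iv_offset from the op. */
	sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(op,
			uint8_t *, iv_offset);
	sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(op,
			iv_offset);
	sym_op->cipher.iv.length = iv_len;

	/*
	 * The copy is what allows a per-op IV; the perf tool only does it
	 * for the verify test, where the IV contents actually matter.
	 */
	memcpy(sym_op->cipher.iv.data, iv, iv_len);
}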
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
Acked-by: Akhil Goyal <akhil.goyal@nxp.com>
Acked-by: Fiona Trahe <fiona.trahe@intel.com>
---
 app/test-crypto-perf/cperf_ops.c                 | 57 +++++++++++++++++++-----
 app/test-crypto-perf/cperf_ops.h                 |  3 +-
 app/test-crypto-perf/cperf_test_latency.c        |  8 +++-
 app/test-crypto-perf/cperf_test_throughput.c     | 11 +++--
 app/test-crypto-perf/cperf_test_vector_parsing.c |  1 -
 app/test-crypto-perf/cperf_test_vectors.c        |  1 -
 app/test-crypto-perf/cperf_test_verify.c         | 10 +++--
 7 files changed, 68 insertions(+), 23 deletions(-)

diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index 17df2eb..0f45a3c 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -40,7 +40,8 @@ cperf_set_ops_null_cipher(struct rte_crypto_op **ops,
 		struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
 		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
 		const struct cperf_options *options,
-		const struct cperf_test_vector *test_vector __rte_unused)
+		const struct cperf_test_vector *test_vector __rte_unused,
+		uint16_t iv_offset __rte_unused)
 {
 	uint16_t i;
 
@@ -65,7 +66,8 @@ cperf_set_ops_null_auth(struct rte_crypto_op **ops,
 		struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
 		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
 		const struct cperf_options *options,
-		const struct cperf_test_vector *test_vector __rte_unused)
+		const struct cperf_test_vector *test_vector __rte_unused,
+		uint16_t iv_offset __rte_unused)
 {
 	uint16_t i;
 
@@ -90,7 +92,8 @@ cperf_set_ops_cipher(struct rte_crypto_op **ops,
 		struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
 		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
 		const struct cperf_options *options,
-		const struct cperf_test_vector *test_vector)
+		const struct cperf_test_vector *test_vector,
+		uint16_t iv_offset)
 {
 	uint16_t i;
 
@@ -103,8 +106,10 @@ cperf_set_ops_cipher(struct rte_crypto_op **ops,
 		sym_op->m_dst = bufs_out[i];
 
 		/* cipher parameters */
-		sym_op->cipher.iv.data = test_vector->iv.data;
-		sym_op->cipher.iv.phys_addr = test_vector->iv.phys_addr;
+		sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ops[i],
+						uint8_t *, iv_offset);
+		sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ops[i],
+						iv_offset);
 		sym_op->cipher.iv.length = test_vector->iv.length;
 
 		if (options->cipher_algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
@@ -117,6 +122,13 @@ cperf_set_ops_cipher(struct rte_crypto_op **ops,
 			sym_op->cipher.data.offset = 0;
 	}
 
+	if (options->test == CPERF_TEST_TYPE_VERIFY) {
+		for (i = 0; i < nb_ops; i++)
+			memcpy(ops[i]->sym->cipher.iv.data,
+				test_vector->iv.data,
+				test_vector->iv.length);
+	}
+
 	return 0;
 }
 
@@ -125,7 +137,8 @@ cperf_set_ops_auth(struct rte_crypto_op **ops,
 		struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
 		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
 		const struct cperf_options *options,
-		const struct cperf_test_vector *test_vector)
+		const struct cperf_test_vector *test_vector,
+		uint16_t iv_offset __rte_unused)
 {
 	uint16_t i;
 
@@ -189,7 +202,8 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
 		struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
 		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
 		const struct cperf_options *options,
-		const struct cperf_test_vector *test_vector)
+		const struct cperf_test_vector *test_vector,
+		uint16_t iv_offset)
 {
 	uint16_t i;
 
@@ -202,8 +216,10 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
 		sym_op->m_dst = bufs_out[i];
 
 		/* cipher parameters */
-		sym_op->cipher.iv.data = test_vector->iv.data;
-		sym_op->cipher.iv.phys_addr = test_vector->iv.phys_addr;
+		sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ops[i],
+						uint8_t *, iv_offset);
+		sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ops[i],
+						iv_offset);
 		sym_op->cipher.iv.length = test_vector->iv.length;
 
 		if (options->cipher_algo == RTE_CRYPTO_CIPHER_SNOW3G_UEA2 ||
@@ -258,6 +274,13 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
 			sym_op->auth.data.offset = 0;
 	}
 
+	if (options->test == CPERF_TEST_TYPE_VERIFY) {
+		for (i = 0; i < nb_ops; i++)
+			memcpy(ops[i]->sym->cipher.iv.data,
+				test_vector->iv.data,
+				test_vector->iv.length);
+	}
+
 	return 0;
 }
 
@@ -266,7 +289,8 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
 		struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
 		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
 		const struct cperf_options *options,
-		const struct cperf_test_vector *test_vector)
+		const struct cperf_test_vector *test_vector,
+		uint16_t iv_offset)
 {
 	uint16_t i;
 
@@ -279,8 +303,10 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
 		sym_op->m_dst = bufs_out[i];
 
 		/* cipher parameters */
-		sym_op->cipher.iv.data = test_vector->iv.data;
-		sym_op->cipher.iv.phys_addr = test_vector->iv.phys_addr;
+		sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ops[i],
+						uint8_t *, iv_offset);
+		sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ops[i],
+						iv_offset);
 		sym_op->cipher.iv.length = test_vector->iv.length;
 
 		sym_op->cipher.data.length = options->test_buffer_size;
@@ -327,6 +353,13 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
 			sym_op->auth.data.offset = options->auth_aad_sz;
 	}
 
+	if (options->test == CPERF_TEST_TYPE_VERIFY) {
+		for (i = 0; i < nb_ops; i++)
+			memcpy(ops[i]->sym->cipher.iv.data,
+				test_vector->iv.data,
+				test_vector->iv.length);
+	}
+
 	return 0;
 }
 
diff --git a/app/test-crypto-perf/cperf_ops.h b/app/test-crypto-perf/cperf_ops.h
index 1b748da..f7b431c 100644
--- a/app/test-crypto-perf/cperf_ops.h
+++ b/app/test-crypto-perf/cperf_ops.h
@@ -48,7 +48,8 @@ typedef int (*cperf_populate_ops_t)(struct rte_crypto_op **ops,
 		struct rte_mbuf **bufs_in, struct rte_mbuf **bufs_out,
 		uint16_t nb_ops, struct rte_cryptodev_sym_session *sess,
 		const struct cperf_options *options,
-		const struct cperf_test_vector *test_vector);
+		const struct cperf_test_vector *test_vector,
+		uint16_t iv_offset);
 
 struct cperf_op_fns {
 	cperf_sessions_create_t sess_create;
diff --git a/app/test-crypto-perf/cperf_test_latency.c b/app/test-crypto-perf/cperf_test_latency.c
index 32cf5fd..c33129b 100644
--- a/app/test-crypto-perf/cperf_test_latency.c
+++ b/app/test-crypto-perf/cperf_test_latency.c
@@ -280,7 +280,7 @@ cperf_latency_test_constructor(uint8_t dev_id, uint16_t qp_id,
 	snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
 			dev_id);
 
-	uint16_t priv_size = sizeof(struct priv_op_data);
+	uint16_t priv_size = sizeof(struct priv_op_data) + test_vector->iv.length;
 	ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
 			RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
 			512, priv_size, rte_socket_id());
@@ -355,6 +355,10 @@ cperf_latency_test_runner(void *arg)
 	else
 		test_burst_size = ctx->options->burst_size_list[0];
 
+	uint16_t iv_offset = sizeof(struct rte_crypto_op) +
+			sizeof(struct rte_crypto_sym_op) +
+			sizeof(struct cperf_op_result *);
+
 	while (test_burst_size <= ctx->options->max_burst_size) {
 		uint64_t ops_enqd = 0, ops_deqd = 0;
 		uint64_t m_idx = 0, b_idx = 0;
@@ -383,7 +387,7 @@ cperf_latency_test_runner(void *arg)
 			(ctx->populate_ops)(ops, &ctx->mbufs_in[m_idx],
 					&ctx->mbufs_out[m_idx],
 					burst_size, ctx->sess, ctx->options,
-					ctx->test_vector);
+					ctx->test_vector, iv_offset);
 
 			tsc_start = rte_rdtsc_precise();
 
diff --git a/app/test-crypto-perf/cperf_test_throughput.c b/app/test-crypto-perf/cperf_test_throughput.c
index 85947a5..5a90eb0 100644
--- a/app/test-crypto-perf/cperf_test_throughput.c
+++ b/app/test-crypto-perf/cperf_test_throughput.c
@@ -262,9 +262,11 @@ cperf_throughput_test_constructor(uint8_t dev_id, uint16_t qp_id,
 	snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
 			dev_id);
 
+	uint16_t priv_size = test_vector->iv.length;
+
 	ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
-			RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz, 512, 0,
-			rte_socket_id());
+			RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
+			512, priv_size, rte_socket_id());
 	if (ctx->crypto_op_pool == NULL)
 		goto err;
 
@@ -315,6 +317,9 @@ cperf_throughput_test_runner(void *test_ctx)
 	else
 		test_burst_size = ctx->options->burst_size_list[0];
 
+	uint16_t iv_offset = sizeof(struct rte_crypto_op) +
+			sizeof(struct rte_crypto_sym_op);
+
 	while (test_burst_size <= ctx->options->max_burst_size) {
 		uint64_t ops_enqd = 0, ops_enqd_total = 0, ops_enqd_failed = 0;
 		uint64_t ops_deqd = 0, ops_deqd_total = 0, ops_deqd_failed = 0;
@@ -346,7 +351,7 @@ cperf_throughput_test_runner(void *test_ctx)
 			(ctx->populate_ops)(ops, &ctx->mbufs_in[m_idx],
 					&ctx->mbufs_out[m_idx], ops_needed,
 					ctx->sess, ctx->options,
-					ctx->test_vector);
+					ctx->test_vector, iv_offset);
 
 			/**
 			 * When ops_needed is smaller than ops_enqd, the
diff --git a/app/test-crypto-perf/cperf_test_vector_parsing.c b/app/test-crypto-perf/cperf_test_vector_parsing.c
index f384e3d..62d0c91 100644
--- a/app/test-crypto-perf/cperf_test_vector_parsing.c
+++ b/app/test-crypto-perf/cperf_test_vector_parsing.c
@@ -303,7 +303,6 @@ parse_entry(char *entry, struct cperf_test_vector *vector,
 	} else if (strstr(key_token, "iv")) {
 		rte_free(vector->iv.data);
 		vector->iv.data = data;
-		vector->iv.phys_addr = rte_malloc_virt2phy(vector->iv.data);
 		if (tc_found)
 			vector->iv.length = data_length;
 		else {
diff --git a/app/test-crypto-perf/cperf_test_vectors.c b/app/test-crypto-perf/cperf_test_vectors.c
index 757957f..36b3f6f 100644
--- a/app/test-crypto-perf/cperf_test_vectors.c
+++ b/app/test-crypto-perf/cperf_test_vectors.c
@@ -423,7 +423,6 @@ cperf_test_vector_get_dummy(struct cperf_options *options)
 			memcpy(t_vec->iv.data, iv, options->cipher_iv_sz);
 		}
 		t_vec->ciphertext.length = options->max_buffer_size;
-		t_vec->iv.phys_addr = rte_malloc_virt2phy(t_vec->iv.data);
 		t_vec->iv.length = options->cipher_iv_sz;
 		t_vec->data.cipher_offset = 0;
 		t_vec->data.cipher_length = options->max_buffer_size;
diff --git a/app/test-crypto-perf/cperf_test_verify.c b/app/test-crypto-perf/cperf_test_verify.c
index b19f5e1..be684a6 100644
--- a/app/test-crypto-perf/cperf_test_verify.c
+++ b/app/test-crypto-perf/cperf_test_verify.c
@@ -266,9 +266,10 @@ cperf_verify_test_constructor(uint8_t dev_id, uint16_t qp_id,
 	snprintf(pool_name, sizeof(pool_name), "cperf_op_pool_cdev_%d",
 			dev_id);
 
+	uint16_t priv_size = test_vector->iv.length;
 	ctx->crypto_op_pool = rte_crypto_op_pool_create(pool_name,
-			RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz, 512, 0,
-			rte_socket_id());
+			RTE_CRYPTO_OP_TYPE_SYMMETRIC, options->pool_sz,
+			512, priv_size, rte_socket_id());
 	if (ctx->crypto_op_pool == NULL)
 		goto err;
 
@@ -417,6 +418,9 @@ cperf_verify_test_runner(void *test_ctx)
 	printf("\n# Running verify test on device: %u, lcore: %u\n",
 			ctx->dev_id, lcore);
 
+	uint16_t iv_offset = sizeof(struct rte_crypto_op) +
+			sizeof(struct rte_crypto_sym_op);
+
 	while (ops_enqd_total < ctx->options->total_ops) {
 
 		uint16_t burst_size = ((ops_enqd_total + ctx->options->max_burst_size)
@@ -438,7 +442,7 @@ cperf_verify_test_runner(void *test_ctx)
 			(ctx->populate_ops)(ops, &ctx->mbufs_in[m_idx],
 					&ctx->mbufs_out[m_idx], ops_needed,
 					ctx->sess, ctx->options,
-					ctx->test_vector);
+					ctx->test_vector, iv_offset);
 
 #ifdef CPERF_LINEARIZATION_ENABLE
 			if (linearize) {
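For reference, the per-op memory layout this patch relies on, and the
matching pool creation, can be sketched as follows (a sketch against the
DPDK 17.x rte_crypto API; create_op_pool_with_iv is an illustrative
wrapper, not part of the patch):

#include <rte_crypto.h>

/*
 * Layout of one crypto op in the mempool after this patch:
 *
 *   struct rte_crypto_op                            <- op base address
 *   struct rte_crypto_sym_op
 *   [struct cperf_op_result * -- latency test only]
 *   IV bytes                                        <- iv_offset points here
 */
static struct rte_mempool *
create_op_pool_with_iv(const char *name, unsigned int nb_ops,
		uint16_t iv_len, int socket_id)
{
	/* Reserve iv_len bytes of per-op private data to hold the IV. */
	return rte_crypto_op_pool_create(name,
			RTE_CRYPTO_OP_TYPE_SYMMETRIC, nb_ops,
			512 /* cache size, as in the patch */,
			iv_len, socket_id);
}

Note that the latency test adds sizeof(struct cperf_op_result *) to both
priv_size and iv_offset because it already stores a result pointer in the
op private data, ahead of the IV.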