From patchwork Sun May 28 21:05:18 2017
X-Patchwork-Submitter: "De Lara Guarch, Pablo" <pablo.de.lara.guarch@intel.com>
X-Patchwork-Id: 24850
From: Pablo de Lara <pablo.de.lara.guarch@intel.com>
To: declan.doherty@intel.com, akhil.goyal@nxp.com, hemant.agrawal@nxp.com,
	zbigniew.bodek@caviumnetworks.com, jerin.jacob@caviumnetworks.com
Cc: dev@dpdk.org
Date: Sun, 28 May 2017 22:05:18 +0100
Message-Id: <1496005522-134934-10-git-send-email-pablo.de.lara.guarch@intel.com>
In-Reply-To: <1496005522-134934-1-git-send-email-pablo.de.lara.guarch@intel.com>
References: <1496005522-134934-1-git-send-email-pablo.de.lara.guarch@intel.com>
Subject: [dpdk-dev] [PATCH 09/13] cryptodev: pass IV as offset

Since the IV is now copied after the crypto operation, into its private
data area, the IV can be passed with only an offset and a length,
instead of a pointer and a physical address.
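To illustrate the new contract (this sketch is not part of the patch):
an application reserves room for the IV in the operation's private data,
copies the IV there, and publishes only its offset and length.
IV_OFFSET and set_cipher_iv() are assumed helper names, not symbols
introduced by this series; the layout simply places the IV right after
the symmetric operation, as the cperf code in this patch does via
iv_offset. It also assumes the op mempool was created with enough
private data space to hold the IV.

    #include <rte_crypto.h>
    #include <rte_memcpy.h>

    /* Assumed layout: IV stored in the op's private data, right after
     * the symmetric crypto operation. */
    #define IV_OFFSET (sizeof(struct rte_crypto_op) + \
            sizeof(struct rte_crypto_sym_op))

    static void
    set_cipher_iv(struct rte_crypto_op *op, const uint8_t *iv,
            uint16_t iv_len)
    {
            uint8_t *iv_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
                            IV_OFFSET);

            /* Copy the IV into the op's private data... */
            rte_memcpy(iv_ptr, iv, iv_len);
            /* ...and hand the PMD only its offset and length. */
            op->sym->cipher.iv.offset = IV_OFFSET;
            op->sym->cipher.iv.length = iv_len;
    }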
Signed-off-by: Pablo de Lara <pablo.de.lara.guarch@intel.com>
---
 app/test-crypto-perf/cperf_ops.c            | 21 +++------
 drivers/crypto/aesni_gcm/aesni_gcm_pmd.c    | 68 +++++++++++++++--------------
 drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c  |  3 +-
 drivers/crypto/armv8/rte_armv8_pmd.c        |  3 +-
 drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c |  8 +++-
 drivers/crypto/kasumi/rte_kasumi_pmd.c      | 10 ++++-
 drivers/crypto/openssl/rte_openssl_pmd.c    | 12 +++--
 drivers/crypto/qat/qat_crypto.c             | 16 ++++---
 drivers/crypto/snow3g/rte_snow3g_pmd.c      |  6 ++-
 drivers/crypto/zuc/rte_zuc_pmd.c            |  3 +-
 lib/librte_cryptodev/rte_crypto_sym.h       |  7 +--
 11 files changed, 89 insertions(+), 68 deletions(-)

diff --git a/app/test-crypto-perf/cperf_ops.c b/app/test-crypto-perf/cperf_ops.c
index a1f2c69..4846b68 100644
--- a/app/test-crypto-perf/cperf_ops.c
+++ b/app/test-crypto-perf/cperf_ops.c
@@ -106,12 +106,9 @@ cperf_set_ops_cipher(struct rte_crypto_op **ops,
         sym_op->m_dst = bufs_out[i];
 
         /* cipher parameters */
-        sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ops[i],
-                uint8_t *, iv_offset);
-        sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ops[i],
-                iv_offset);
+        sym_op->cipher.iv.offset = iv_offset;
         sym_op->cipher.iv.length = test_vector->iv.length;
-        memcpy(sym_op->cipher.iv.data,
+        memcpy(rte_crypto_op_ctod_offset(ops[i], uint8_t *, iv_offset),
                 test_vector->iv.data,
                 test_vector->iv.length);
 
@@ -211,12 +208,9 @@ cperf_set_ops_cipher_auth(struct rte_crypto_op **ops,
         sym_op->m_dst = bufs_out[i];
 
         /* cipher parameters */
-        sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ops[i],
-                uint8_t *, iv_offset);
-        sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ops[i],
-                iv_offset);
+        sym_op->cipher.iv.offset = iv_offset;
         sym_op->cipher.iv.length = test_vector->iv.length;
-        memcpy(sym_op->cipher.iv.data,
+        memcpy(rte_crypto_op_ctod_offset(ops[i], uint8_t *, iv_offset),
                 test_vector->iv.data,
                 test_vector->iv.length);
 
@@ -293,12 +287,9 @@ cperf_set_ops_aead(struct rte_crypto_op **ops,
         sym_op->m_dst = bufs_out[i];
 
         /* cipher parameters */
-        sym_op->cipher.iv.data = rte_crypto_op_ctod_offset(ops[i],
-                uint8_t *, iv_offset);
-        sym_op->cipher.iv.phys_addr = rte_crypto_op_ctophys_offset(ops[i],
-                iv_offset);
+        sym_op->cipher.iv.offset = iv_offset;
         sym_op->cipher.iv.length = test_vector->iv.length;
-        memcpy(sym_op->cipher.iv.data,
+        memcpy(rte_crypto_op_ctod_offset(ops[i], uint8_t *, iv_offset),
                 test_vector->iv.data,
                 test_vector->iv.length);
 
diff --git a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
index 31e48aa..573e071 100644
--- a/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
+++ b/drivers/crypto/aesni_gcm/aesni_gcm_pmd.c
@@ -191,12 +191,14 @@ aesni_gcm_get_session(struct aesni_gcm_qp *qp, struct rte_crypto_op *op)
  *
  */
 static int
-process_gcm_crypto_op(struct rte_crypto_sym_op *op,
+process_gcm_crypto_op(struct rte_crypto_op *op,
         struct aesni_gcm_session *session)
 {
     uint8_t *src, *dst;
-    struct rte_mbuf *m_src = op->m_src;
-    uint32_t offset = op->cipher.data.offset;
+    uint8_t *IV_ptr;
+    struct rte_crypto_sym_op *sym_op = op->sym;
+    struct rte_mbuf *m_src = sym_op->m_src;
+    uint32_t offset = sym_op->cipher.data.offset;
     uint32_t part_len, total_len, data_len;
 
     RTE_ASSERT(m_src != NULL);
@@ -209,53 +211,55 @@ process_gcm_crypto_op(struct rte_crypto_sym_op *op,
     }
 
     data_len = m_src->data_len - offset;
-    part_len = (data_len < op->cipher.data.length) ? data_len :
-            op->cipher.data.length;
+    part_len = (data_len < sym_op->cipher.data.length) ? data_len :
+            sym_op->cipher.data.length;
 
     /* Destination buffer is required when segmented source buffer */
-    RTE_ASSERT((part_len == op->cipher.data.length) ||
-            ((part_len != op->cipher.data.length) &&
-            (op->m_dst != NULL)));
+    RTE_ASSERT((part_len == sym_op->cipher.data.length) ||
+            ((part_len != sym_op->cipher.data.length) &&
+            (sym_op->m_dst != NULL)));
     /* Segmented destination buffer is not supported */
-    RTE_ASSERT((op->m_dst == NULL) ||
-            ((op->m_dst != NULL) &&
-            rte_pktmbuf_is_contiguous(op->m_dst)));
+    RTE_ASSERT((sym_op->m_dst == NULL) ||
+            ((sym_op->m_dst != NULL) &&
+            rte_pktmbuf_is_contiguous(sym_op->m_dst)));
 
-    dst = op->m_dst ?
-            rte_pktmbuf_mtod_offset(op->m_dst, uint8_t *,
-                    op->cipher.data.offset) :
-            rte_pktmbuf_mtod_offset(op->m_src, uint8_t *,
-                    op->cipher.data.offset);
+    dst = sym_op->m_dst ?
+            rte_pktmbuf_mtod_offset(sym_op->m_dst, uint8_t *,
+                    sym_op->cipher.data.offset) :
+            rte_pktmbuf_mtod_offset(sym_op->m_src, uint8_t *,
+                    sym_op->cipher.data.offset);
 
     src = rte_pktmbuf_mtod_offset(m_src, uint8_t *, offset);
 
     /* sanity checks */
-    if (op->cipher.iv.length != 16 && op->cipher.iv.length != 12 &&
-            op->cipher.iv.length != 0) {
+    if (sym_op->cipher.iv.length != 16 && sym_op->cipher.iv.length != 12 &&
+            sym_op->cipher.iv.length != 0) {
         GCM_LOG_ERR("iv");
         return -1;
     }
 
+    IV_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
+            sym_op->cipher.iv.offset);
     /*
     * GCM working in 12B IV mode => 16B pre-counter block we need
     * to set BE LSB to 1, driver expects that 16B is allocated
     */
-    if (op->cipher.iv.length == 12) {
-        uint32_t *iv_padd = (uint32_t *)&op->cipher.iv.data[12];
+    if (sym_op->cipher.iv.length == 12) {
+        uint32_t *iv_padd = (uint32_t *)&(IV_ptr[12]);
         *iv_padd = rte_bswap32(1);
     }
 
     if (session->op == AESNI_GCM_OP_AUTHENTICATED_ENCRYPTION) {
         aesni_gcm_enc[session->key].init(&session->gdata,
-                op->cipher.iv.data,
-                op->auth.aad.data,
-                (uint64_t)op->auth.aad.length);
+                IV_ptr,
+                sym_op->auth.aad.data,
+                (uint64_t)sym_op->auth.aad.length);
 
         aesni_gcm_enc[session->key].update(&session->gdata, dst, src,
                 (uint64_t)part_len);
-        total_len = op->cipher.data.length - part_len;
+        total_len = sym_op->cipher.data.length - part_len;
 
         while (total_len) {
             dst += part_len;
@@ -274,11 +278,11 @@ process_gcm_crypto_op(struct rte_crypto_sym_op *op,
         }
 
         aesni_gcm_enc[session->key].finalize(&session->gdata,
-                op->auth.digest.data,
+                sym_op->auth.digest.data,
                 (uint64_t)session->digest_length);
     } else { /* session->op == AESNI_GCM_OP_AUTHENTICATED_DECRYPTION */
-        uint8_t *auth_tag = (uint8_t *)rte_pktmbuf_append(op->m_dst ?
-                op->m_dst : op->m_src,
+        uint8_t *auth_tag = (uint8_t *)rte_pktmbuf_append(sym_op->m_dst ?
+                sym_op->m_dst : sym_op->m_src,
                 session->digest_length);
 
         if (!auth_tag) {
@@ -287,13 +291,13 @@ process_gcm_crypto_op(struct rte_crypto_sym_op *op,
         }
 
         aesni_gcm_dec[session->key].init(&session->gdata,
-                op->cipher.iv.data,
-                op->auth.aad.data,
-                (uint64_t)op->auth.aad.length);
+                IV_ptr,
+                sym_op->auth.aad.data,
+                (uint64_t)sym_op->auth.aad.length);
 
         aesni_gcm_dec[session->key].update(&session->gdata, dst, src,
                 (uint64_t)part_len);
-        total_len = op->cipher.data.length - part_len;
+        total_len = sym_op->cipher.data.length - part_len;
 
         while (total_len) {
             dst += part_len;
@@ -405,7 +409,7 @@ aesni_gcm_pmd_dequeue_burst(void *queue_pair,
             break;
         }
 
-        retval = process_gcm_crypto_op(ops[i]->sym, sess);
+        retval = process_gcm_crypto_op(ops[i], sess);
         if (retval < 0) {
             ops[i]->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
             qp->qp_stats.dequeue_err_count++;
diff --git a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
index 21e3bb2..284e111 100644
--- a/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
+++ b/drivers/crypto/aesni_mb/rte_aesni_mb_pmd.c
@@ -470,7 +470,8 @@ set_mb_job_params(JOB_AES_HMAC *job, struct aesni_mb_qp *qp,
             get_truncated_digest_byte_length(job->hash_alg);
 
     /* Set IV parameters */
-    job->iv = op->sym->cipher.iv.data;
+    job->iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+            op->sym->cipher.iv.offset);
     job->iv_len_in_bytes = op->sym->cipher.iv.length;
 
     /* Data Parameter */
diff --git a/drivers/crypto/armv8/rte_armv8_pmd.c b/drivers/crypto/armv8/rte_armv8_pmd.c
index 3ca9007..77d79df 100644
--- a/drivers/crypto/armv8/rte_armv8_pmd.c
+++ b/drivers/crypto/armv8/rte_armv8_pmd.c
@@ -656,7 +656,8 @@ process_armv8_chained_op
         return;
     }
 
-    arg.cipher.iv = op->sym->cipher.iv.data;
+    arg.cipher.iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+            op->sym->cipher.iv.offset);
     arg.cipher.key = sess->cipher.key.data;
     /* Acquire combined mode function */
     crypto_func = sess->crypto_func;
diff --git a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
index 336c281..c192141 100644
--- a/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
+++ b/drivers/crypto/dpaa2_sec/dpaa2_sec_dpseci.c
@@ -87,6 +87,8 @@ build_authenc_fd(dpaa2_sec_session *sess,
     int icv_len = sess->digest_length;
     uint8_t *old_icv;
     uint32_t mem_len = (7 * sizeof(struct qbman_fle)) + icv_len;
+    uint8_t *IV_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
+            op->sym->cipher.iv.offset);
 
     PMD_INIT_FUNC_TRACE();
 
@@ -178,7 +180,7 @@ build_authenc_fd(dpaa2_sec_session *sess,
          sess->digest_length);
 
     /* Configure Input SGE for Encap/Decap */
-    DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(sym_op->cipher.iv.data));
+    DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(IV_ptr));
     sge->length = sym_op->cipher.iv.length;
     sge++;
 
@@ -307,6 +309,8 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
     uint32_t mem_len = (5 * sizeof(struct qbman_fle));
     struct sec_flow_context *flc;
     struct ctxt_priv *priv = sess->ctxt;
+    uint8_t *IV_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
+            op->sym->cipher.iv.offset);
 
     PMD_INIT_FUNC_TRACE();
 
@@ -369,7 +373,7 @@ build_cipher_fd(dpaa2_sec_session *sess, struct rte_crypto_op *op,
 
     DPAA2_SET_FLE_SG_EXT(fle);
 
-    DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(sym_op->cipher.iv.data));
+    DPAA2_SET_FLE_ADDR(sge, DPAA2_VADDR_TO_IOVA(IV_ptr));
     sge->length = sym_op->cipher.iv.length;
     sge++;
 
diff --git a/drivers/crypto/kasumi/rte_kasumi_pmd.c b/drivers/crypto/kasumi/rte_kasumi_pmd.c
index 6407a7d..4905641 100644
--- a/drivers/crypto/kasumi/rte_kasumi_pmd.c
+++ b/drivers/crypto/kasumi/rte_kasumi_pmd.c
@@ -179,6 +179,7 @@ process_kasumi_cipher_op(struct rte_crypto_op **ops,
     unsigned i;
     uint8_t processed_ops = 0;
     uint8_t *src[num_ops], *dst[num_ops];
+    uint8_t *IV_ptr;
     uint64_t IV[num_ops];
     uint32_t num_bytes[num_ops];
 
@@ -197,7 +198,9 @@ process_kasumi_cipher_op(struct rte_crypto_op **ops,
             (ops[i]->sym->cipher.data.offset >> 3) :
             rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
             (ops[i]->sym->cipher.data.offset >> 3);
-        IV[i] = *((uint64_t *)(ops[i]->sym->cipher.iv.data));
+        IV_ptr = rte_crypto_op_ctod_offset(ops[i], uint8_t *,
+                ops[i]->sym->cipher.iv.offset);
+        IV[i] = *((uint64_t *)(IV_ptr));
         num_bytes[i] = ops[i]->sym->cipher.data.length >> 3;
 
         processed_ops++;
@@ -216,6 +219,7 @@ process_kasumi_cipher_op_bit(struct rte_crypto_op *op,
         struct kasumi_session *session)
 {
     uint8_t *src, *dst;
+    uint8_t *IV_ptr;
     uint64_t IV;
     uint32_t length_in_bits, offset_in_bits;
 
@@ -234,7 +238,9 @@ process_kasumi_cipher_op_bit(struct rte_crypto_op *op,
         return 0;
     }
     dst = rte_pktmbuf_mtod(op->sym->m_dst, uint8_t *);
-    IV = *((uint64_t *)(op->sym->cipher.iv.data));
+    IV_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
+            op->sym->cipher.iv.offset);
+    IV = *((uint64_t *)(IV_ptr));
     length_in_bits = op->sym->cipher.data.length;
 
     sso_kasumi_f8_1_buffer_bit(&session->pKeySched_cipher, IV,
diff --git a/drivers/crypto/openssl/rte_openssl_pmd.c b/drivers/crypto/openssl/rte_openssl_pmd.c
index 0333526..c3e3cf2 100644
--- a/drivers/crypto/openssl/rte_openssl_pmd.c
+++ b/drivers/crypto/openssl/rte_openssl_pmd.c
@@ -924,7 +924,8 @@ process_openssl_combined_op
         return;
     }
 
-    iv = op->sym->cipher.iv.data;
+    iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+            op->sym->cipher.iv.offset);
     ivlen = op->sym->cipher.iv.length;
     aad = op->sym->auth.aad.data;
     aadlen = op->sym->auth.aad.length;
@@ -988,7 +989,8 @@ process_openssl_cipher_op
     dst = rte_pktmbuf_mtod_offset(mbuf_dst, uint8_t *,
             op->sym->cipher.data.offset);
 
-    iv = op->sym->cipher.iv.data;
+    iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+            op->sym->cipher.iv.offset);
 
     if (sess->cipher.mode == OPENSSL_CIPHER_LIB)
         if (sess->cipher.direction == RTE_CRYPTO_CIPHER_OP_ENCRYPT)
@@ -1029,7 +1031,8 @@ process_openssl_docsis_bpi_op(struct rte_crypto_op *op,
     dst = rte_pktmbuf_mtod_offset(mbuf_dst, uint8_t *,
             op->sym->cipher.data.offset);
 
-    iv = op->sym->cipher.iv.data;
+    iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+            op->sym->cipher.iv.offset);
 
     block_size = DES_BLOCK_SIZE;
 
@@ -1087,7 +1090,8 @@ process_openssl_docsis_bpi_op(struct rte_crypto_op *op,
                 dst, iv,
                 last_block_len, sess->cipher.bpi_ctx);
         /* Prepare parameters for CBC mode op */
-        iv = op->sym->cipher.iv.data;
+        iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+                op->sym->cipher.iv.offset);
         dst += last_block_len - srclen;
         srclen -= last_block_len;
     }
diff --git a/drivers/crypto/qat/qat_crypto.c b/drivers/crypto/qat/qat_crypto.c
index 329f88a..f72d3e3 100644
--- a/drivers/crypto/qat/qat_crypto.c
+++ b/drivers/crypto/qat/qat_crypto.c
@@ -639,7 +639,8 @@ qat_bpicipher_preprocess(struct qat_session *ctx,
         iv = last_block - block_len;
     else
         /* runt block, i.e. less than one full block */
-        iv = sym_op->cipher.iv.data;
+        iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+                sym_op->cipher.iv.offset);
 
 #ifdef RTE_LIBRTE_PMD_QAT_DEBUG_TX
     rte_hexdump(stdout, "BPI: src before pre-process:", last_block,
@@ -694,7 +695,8 @@ qat_bpicipher_postprocess(struct qat_session *ctx,
         iv = dst - block_len;
     else
         /* runt block, i.e. less than one full block */
-        iv = sym_op->cipher.iv.data;
+        iv = rte_crypto_op_ctod_offset(op, uint8_t *,
+                sym_op->cipher.iv.offset);
 
 #ifdef RTE_LIBRTE_PMD_QAT_DEBUG_RX
     rte_hexdump(stdout, "BPI: src before post-process:", last_block,
@@ -895,6 +897,7 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
     uint32_t min_ofs = 0;
     uint64_t src_buf_start = 0, dst_buf_start = 0;
     uint8_t do_sgl = 0;
+    uint8_t *IV_ptr;
 
 #ifdef RTE_LIBRTE_PMD_QAT_DEBUG_TX
 
@@ -935,6 +938,8 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
         do_cipher = 1;
     }
 
+    IV_ptr = rte_crypto_op_ctod_offset(op, uint8_t *,
+            op->sym->cipher.iv.offset);
     if (do_cipher) {
 
         if (ctx->qat_cipher_alg ==
@@ -978,14 +983,15 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
         if (op->sym->cipher.iv.length <=
                 sizeof(cipher_param->u.cipher_IV_array)) {
             rte_memcpy(cipher_param->u.cipher_IV_array,
-                    op->sym->cipher.iv.data,
+                    IV_ptr,
                     op->sym->cipher.iv.length);
         } else {
             ICP_QAT_FW_LA_CIPH_IV_FLD_FLAG_SET(
                 qat_req->comn_hdr.serv_specif_flags,
                 ICP_QAT_FW_CIPH_IV_64BIT_PTR);
             cipher_param->u.s.cipher_IV_ptr =
-                    op->sym->cipher.iv.phys_addr;
+                    rte_crypto_op_ctophys_offset(op,
+                        op->sym->cipher.iv.offset);
         }
         min_ofs = cipher_ofs;
     }
@@ -1185,7 +1191,7 @@ qat_write_hw_desc_entry(struct rte_crypto_op *op, uint8_t *out_msg,
     rte_hexdump(stdout, "src_data:",
             rte_pktmbuf_mtod(op->sym->m_src, uint8_t*),
             rte_pktmbuf_data_len(op->sym->m_src));
-    rte_hexdump(stdout, "iv:", op->sym->cipher.iv.data,
+    rte_hexdump(stdout, "iv:", IV_ptr,
             op->sym->cipher.iv.length);
     rte_hexdump(stdout, "digest:", op->sym->auth.digest.data,
             ctx->digest_length);
diff --git a/drivers/crypto/snow3g/rte_snow3g_pmd.c b/drivers/crypto/snow3g/rte_snow3g_pmd.c
index 75989da..8ebe302 100644
--- a/drivers/crypto/snow3g/rte_snow3g_pmd.c
+++ b/drivers/crypto/snow3g/rte_snow3g_pmd.c
@@ -197,7 +197,8 @@ process_snow3g_cipher_op(struct rte_crypto_op **ops,
             (ops[i]->sym->cipher.data.offset >> 3) :
             rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
             (ops[i]->sym->cipher.data.offset >> 3);
-        IV[i] = ops[i]->sym->cipher.iv.data;
+        IV[i] = rte_crypto_op_ctod_offset(ops[i], uint8_t *,
+                ops[i]->sym->cipher.iv.offset);
         num_bytes[i] = ops[i]->sym->cipher.data.length >> 3;
 
         processed_ops++;
@@ -233,7 +234,8 @@ process_snow3g_cipher_op_bit(struct rte_crypto_op *op,
         return 0;
     }
     dst = rte_pktmbuf_mtod(op->sym->m_dst, uint8_t *);
-    IV = op->sym->cipher.iv.data;
+    IV = rte_crypto_op_ctod_offset(op, uint8_t *,
+            op->sym->cipher.iv.offset);
     length_in_bits = op->sym->cipher.data.length;
 
     sso_snow3g_f8_1_buffer_bit(&session->pKeySched_cipher, IV,
diff --git a/drivers/crypto/zuc/rte_zuc_pmd.c b/drivers/crypto/zuc/rte_zuc_pmd.c
index e7a3de8..df58ec4 100644
--- a/drivers/crypto/zuc/rte_zuc_pmd.c
+++ b/drivers/crypto/zuc/rte_zuc_pmd.c
@@ -218,7 +218,8 @@ process_zuc_cipher_op(struct rte_crypto_op **ops,
             (ops[i]->sym->cipher.data.offset >> 3) :
             rte_pktmbuf_mtod(ops[i]->sym->m_src, uint8_t *) +
             (ops[i]->sym->cipher.data.offset >> 3);
-        IV[i] = ops[i]->sym->cipher.iv.data;
+        IV[i] = rte_crypto_op_ctod_offset(ops[i], uint8_t *,
+                ops[i]->sym->cipher.iv.offset);
         num_bytes[i] = ops[i]->sym->cipher.data.length >> 3;
 
         cipher_keys[i] = session->pKey_cipher;
diff --git a/lib/librte_cryptodev/rte_crypto_sym.h b/lib/librte_cryptodev/rte_crypto_sym.h
index 982a97c..4b921e8 100644
--- a/lib/librte_cryptodev/rte_crypto_sym.h
+++ b/lib/librte_cryptodev/rte_crypto_sym.h
@@ -435,8 +435,10 @@ struct rte_crypto_sym_op {
         } data; /**< Data offsets and length for ciphering */
 
         struct {
-            uint8_t *data;
-            /**< Initialisation Vector or Counter.
+            uint16_t offset;
+            /**< Starting point for Initialisation Vector or Counter,
+             * specified as number of bytes from start of crypto
+             * operation.
              *
              * - For block ciphers in CBC or F8 mode, or for KASUMI
              *   in F8 mode, or for SNOW 3G in UEA2 mode, this is the
@@ -462,7 +464,6 @@ struct rte_crypto_sym_op {
              * For optimum performance, the data pointed to SHOULD
              * be 8-byte aligned.
              */
-            phys_addr_t phys_addr;
             uint16_t length;
             /**< Length of valid IV data.
              *
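The driver-side counterpart is the pattern every PMD hunk above now
repeats: resolve the IV's virtual (and, for hardware PMDs, physical)
address from the offset. A minimal sketch of that shared pattern, using
only the accessors already shown in this patch (the helper names
cipher_iv_vaddr/cipher_iv_paddr are hypothetical, not part of the API):

    #include <rte_crypto.h>

    /* Hypothetical helpers mirroring the per-PMD code above. */
    static inline uint8_t *
    cipher_iv_vaddr(struct rte_crypto_op *op)
    {
            return rte_crypto_op_ctod_offset(op, uint8_t *,
                            op->sym->cipher.iv.offset);
    }

    static inline phys_addr_t
    cipher_iv_paddr(struct rte_crypto_op *op)
    {
            return rte_crypto_op_ctophys_offset(op,
                            op->sym->cipher.iv.offset);
    }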