From patchwork Wed Jun 13 12:13:52 2018
X-Patchwork-Submitter: Tomasz Jozwiak
X-Patchwork-Id: 41041
X-Patchwork-Delegate: pablo.de.lara.guarch@intel.com
From: Tomasz Jozwiak
To: fiona.trahe@intel.com, tomaszx.jozwiak@intel.com, dev@dpdk.org
Date: Wed, 13 Jun 2018 14:13:52 +0200
Message-Id: <1528892062-4997-9-git-send-email-tomaszx.jozwiak@intel.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1528892062-4997-1-git-send-email-tomaszx.jozwiak@intel.com>
References: <1523040732-3290-1-git-send-email-fiona.trahe@intel.com>
 <1528892062-4997-1-git-send-email-tomaszx.jozwiak@intel.com>
Subject: [dpdk-dev] [PATCH v3 08/38] crypto/qat: make enqueue function generic

From: Fiona Trahe

Queue-handling code in the enqueue path is made generic, so it can be
used by other services in the future. This is done by:
- removing all sym-specific references in the input parameters and
  replacing them with void pointers;
- wrapping this generic enqueue with the sym-specific enqueue called
  through the API;
- setting a function pointer for build_request in the qp on qp creation;
- passing void * parameters to this; in the service-specific
  implementation qat_sym_build_request they are cast back to sym structs.
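To make the mechanism above concrete, here is a minimal, self-contained
sketch of the same dispatch pattern in plain C. All names in it (my_qp,
my_sym_op, my_sym_cookie, build_request_fn, my_enqueue_burst) are invented
for illustration and are not the driver's definitions: the queue pair
stores a build_request callback that takes only void pointers, the generic
enqueue loop invokes it, and the service-specific implementation casts the
arguments back to its own types.

  #include <stdint.h>
  #include <stdio.h>

  /* Hypothetical op/cookie types standing in for the sym-specific structs. */
  struct my_sym_op { uint32_t data; };
  struct my_sym_cookie { uint32_t id; };

  /* Generic build_request callback: only void pointers in the signature. */
  typedef int (*build_request_fn)(void *op, uint8_t *req, void *cookie);

  /* Generic queue pair: service-agnostic, behaviour selected via the fn ptr. */
  struct my_qp {
          build_request_fn build_request;
          uint8_t ring[4][64];
          void *cookies[4];
  };

  /* Service-specific implementation casts the void pointers back. */
  static int my_sym_build_request(void *op, uint8_t *req, void *cookie)
  {
          struct my_sym_op *sym_op = op;
          struct my_sym_cookie *c = cookie;

          req[0] = (uint8_t)sym_op->data; /* fill the request descriptor */
          printf("built request for op %u, cookie %u\n", sym_op->data, c->id);
          return 0;
  }

  /* Generic enqueue: knows nothing about sym, only walks void pointers. */
  static uint16_t my_enqueue_burst(struct my_qp *qp, void **ops, uint16_t nb)
  {
          uint16_t i;

          for (i = 0; i < nb && i < 4; i++)
                  if (qp->build_request(ops[i], qp->ring[i], qp->cookies[i]) != 0)
                          break;
          return i;
  }

  int main(void)
  {
          struct my_sym_op op = { .data = 7 };
          struct my_sym_cookie ck = { .id = 1 };
          void *ops[1] = { &op };
          struct my_qp qp = { .build_request = my_sym_build_request };

          qp.cookies[0] = &ck; /* installed at "qp creation" time */
          return my_enqueue_burst(&qp, ops, 1) == 1 ? 0 : 1;
  }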
Signed-off-by: Fiona Trahe
---
 drivers/crypto/qat/qat_qp.c  |  1 +
 drivers/crypto/qat/qat_sym.c | 46 ++++++++++++++++++++----------------
 drivers/crypto/qat/qat_sym.h | 11 +++++++++
 3 files changed, 38 insertions(+), 20 deletions(-)

diff --git a/drivers/crypto/qat/qat_qp.c b/drivers/crypto/qat/qat_qp.c
index fb9c2a7ef..d7d79f1af 100644
--- a/drivers/crypto/qat/qat_qp.c
+++ b/drivers/crypto/qat/qat_qp.c
@@ -197,6 +197,7 @@ int qat_sym_qp_setup(struct rte_cryptodev *dev, uint16_t queue_pair_id,
 	struct qat_pmd_private *internals = dev->data->dev_private;
 
 	qp->qat_dev_gen = internals->qat_dev_gen;
+	qp->build_request = qat_sym_build_request;
 
 	dev->data->queue_pairs[queue_pair_id] = qp;
 	return 0;
diff --git a/drivers/crypto/qat/qat_sym.c b/drivers/crypto/qat/qat_sym.c
index 2dfdc9cce..4e404749a 100644
--- a/drivers/crypto/qat/qat_sym.c
+++ b/drivers/crypto/qat/qat_sym.c
@@ -86,10 +86,6 @@ bpi_cipher_decrypt(uint8_t *src, uint8_t *dst,
 static inline uint32_t
 adf_modulo(uint32_t data, uint32_t shift);
 
-static inline int
-qat_sym_build_request(struct rte_crypto_op *op, uint8_t *out_msg,
-		struct qat_sym_op_cookie *qat_op_cookie, struct qat_qp *qp);
-
 static inline uint32_t
 qat_bpicipher_preprocess(struct qat_sym_session *ctx,
 				struct rte_crypto_op *op)
@@ -209,14 +205,12 @@ txq_write_tail(struct qat_qp *qp, struct qat_queue *q) {
 	q->csr_tail = q->tail;
 }
 
-uint16_t
-qat_sym_pmd_enqueue_op_burst(void *qp, struct rte_crypto_op **ops,
-		uint16_t nb_ops)
+static uint16_t
+qat_enqueue_op_burst(void *qp, void **ops, uint16_t nb_ops)
 {
 	register struct qat_queue *queue;
 	struct qat_qp *tmp_qp = (struct qat_qp *)qp;
 	register uint32_t nb_ops_sent = 0;
-	register struct rte_crypto_op **cur_op = ops;
 	register int ret;
 	uint16_t nb_ops_possible = nb_ops;
 	register uint8_t *base_addr;
@@ -242,8 +236,9 @@ qat_sym_pmd_enqueue_op_burst(void *qp, struct rte_crypto_op **ops,
 	}
 
 	while (nb_ops_sent != nb_ops_possible) {
-		ret = qat_sym_build_request(*cur_op, base_addr + tail,
-			tmp_qp->op_cookies[tail / queue->msg_size], tmp_qp);
+		ret = tmp_qp->build_request(*ops, base_addr + tail,
+				tmp_qp->op_cookies[tail / queue->msg_size],
+				tmp_qp->qat_dev_gen);
 		if (ret != 0) {
 			tmp_qp->stats.enqueue_err_count++;
 			/*
@@ -257,8 +252,8 @@ qat_sym_pmd_enqueue_op_burst(void *qp, struct rte_crypto_op **ops,
 		}
 
 		tail = adf_modulo(tail + queue->msg_size, queue->modulo);
+		ops++;
 		nb_ops_sent++;
-		cur_op++;
 	}
 kick_tail:
 	queue->tail = tail;
@@ -298,6 +293,13 @@ void rxq_free_desc(struct qat_qp *qp, struct qat_queue *q)
 			q->hw_queue_number, new_head);
 }
 
+uint16_t
+qat_sym_pmd_enqueue_op_burst(void *qp, struct rte_crypto_op **ops,
+		uint16_t nb_ops)
+{
+	return qat_enqueue_op_burst(qp, (void **)ops, nb_ops);
+}
+
 uint16_t
 qat_sym_pmd_dequeue_op_burst(void *qp, struct rte_crypto_op **ops,
 		uint16_t nb_ops)
@@ -456,9 +458,10 @@ set_cipher_iv_ccm(uint16_t iv_length, uint16_t iv_offset,
 					iv_length);
 }
 
-static inline int
-qat_sym_build_request(struct rte_crypto_op *op, uint8_t *out_msg,
-		struct qat_sym_op_cookie *qat_op_cookie, struct qat_qp *qp)
+
+int
+qat_sym_build_request(void *in_op, uint8_t *out_msg,
+		void *op_cookie, enum qat_device_gen qat_dev_gen)
 {
 	int ret = 0;
 	struct qat_sym_session *ctx;
@@ -471,6 +474,9 @@ qat_sym_build_request(struct rte_crypto_op *op, uint8_t *out_msg,
 	uint32_t min_ofs = 0;
 	uint64_t src_buf_start = 0, dst_buf_start = 0;
 	uint8_t do_sgl = 0;
+	struct rte_crypto_op *op = (struct rte_crypto_op *)in_op;
+	struct qat_sym_op_cookie *cookie =
+			(struct qat_sym_op_cookie *)op_cookie;
 
 #ifdef RTE_LIBRTE_PMD_QAT_DEBUG_TX
 	if (unlikely(op->type != RTE_CRYPTO_OP_TYPE_SYMMETRIC)) {
@@ -494,7 +500,7 @@ qat_sym_build_request(struct rte_crypto_op *op, uint8_t *out_msg,
 		return -EINVAL;
 	}
 
-	if (unlikely(ctx->min_qat_dev_gen > qp->qat_dev_gen)) {
+	if (unlikely(ctx->min_qat_dev_gen > qat_dev_gen)) {
 		PMD_DRV_LOG(ERR, "Session alg not supported on this device gen");
 		op->status = RTE_CRYPTO_OP_STATUS_INVALID_SESSION;
 		return -EINVAL;
@@ -807,7 +813,7 @@ qat_sym_build_request(struct rte_crypto_op *op, uint8_t *out_msg,
 		ICP_QAT_FW_COMN_PTR_TYPE_SET(qat_req->comn_hdr.comn_req_flags,
 				QAT_COMN_PTR_TYPE_SGL);
 		ret = qat_sgl_fill_array(op->sym->m_src, src_buf_start,
-				&qat_op_cookie->qat_sgl_list_src,
+				&cookie->qat_sgl_list_src,
 				qat_req->comn_mid.src_length);
 		if (ret) {
 			PMD_DRV_LOG(ERR, "QAT PMD Cannot fill sgl array");
@@ -817,11 +823,11 @@ qat_sym_build_request(struct rte_crypto_op *op, uint8_t *out_msg,
 		if (likely(op->sym->m_dst == NULL))
 			qat_req->comn_mid.dest_data_addr =
				qat_req->comn_mid.src_data_addr =
-					qat_op_cookie->qat_sgl_src_phys_addr;
+					cookie->qat_sgl_src_phys_addr;
 		else {
 			ret = qat_sgl_fill_array(op->sym->m_dst,
 					dst_buf_start,
-					&qat_op_cookie->qat_sgl_list_dst,
+					&cookie->qat_sgl_list_dst,
 					qat_req->comn_mid.dst_length);
 
 			if (ret) {
@@ -831,9 +837,9 @@ qat_sym_build_request(struct rte_crypto_op *op, uint8_t *out_msg,
 			}
 
 			qat_req->comn_mid.src_data_addr =
-					qat_op_cookie->qat_sgl_src_phys_addr;
+					cookie->qat_sgl_src_phys_addr;
 			qat_req->comn_mid.dest_data_addr =
-					qat_op_cookie->qat_sgl_dst_phys_addr;
+					cookie->qat_sgl_dst_phys_addr;
 		}
 	} else {
 		qat_req->comn_mid.src_data_addr = src_buf_start;
diff --git a/drivers/crypto/qat/qat_sym.h b/drivers/crypto/qat/qat_sym.h
index 18c77ea11..b1ddb6e93 100644
--- a/drivers/crypto/qat/qat_sym.h
+++ b/drivers/crypto/qat/qat_sym.h
@@ -27,6 +27,11 @@
 #define QAT_CSR_TAIL_FORCE_WRITE_THRESH 256U
 /* number of inflights below which no tail write coalescing should occur */
 
+typedef int (*build_request_t)(void *op,
+		uint8_t *req, void *op_cookie,
+		enum qat_device_gen qat_dev_gen);
+/**< Build a request from an op. */
+
 struct qat_sym_session;
 
 /**
@@ -63,8 +68,14 @@ struct qat_qp {
 	void **op_cookies;
 	uint32_t nb_descriptors;
 	enum qat_device_gen qat_dev_gen;
+	build_request_t build_request;
 } __rte_cache_aligned;
 
+
+int
+qat_sym_build_request(void *in_op, uint8_t *out_msg,
+		void *op_cookie, enum qat_device_gen qat_dev_gen);
+
 void qat_sym_stats_get(struct rte_cryptodev *dev,
 		struct rte_cryptodev_stats *stats);
 void qat_sym_stats_reset(struct rte_cryptodev *dev);
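
A closing note on the stated motivation ("so it can be used by other
services in the future"): with the build_request_t hook in the qp, a
future non-symmetric service would only need to install its own callback
at queue-pair setup and could reuse the generic enqueue unchanged. The
fragment below is a purely hypothetical, self-contained sketch of that
registration step (my_qp, my_comp_build_request and my_comp_qp_setup are
invented names; no such service exists in this patch).

  #include <stdint.h>

  /* Same shape as the earlier sketch: a generic callback plus a qp holding it. */
  typedef int (*build_request_fn)(void *op, uint8_t *req, void *cookie);

  struct my_qp {
          build_request_fn build_request;
  };

  /* Hypothetical future service: only a new request builder is needed. */
  static int my_comp_build_request(void *op, uint8_t *req, void *cookie)
  {
          /* a real service would cast op/cookie back to its own structs here */
          (void)op; (void)req; (void)cookie;
          return 0;
  }

  /* Installing the callback at qp-setup time is the only service-specific step. */
  static inline void my_comp_qp_setup(struct my_qp *qp)
  {
          qp->build_request = my_comp_build_request;
  }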