From patchwork Thu Sep 24 13:04:14 2020
X-Patchwork-Submitter: Nagadheeraj Rottela
X-Patchwork-Id: 78745
X-Patchwork-Delegate: gakhil@marvell.com
From: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
To: dev@dpdk.org
Cc: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
Date: Thu, 24 Sep 2020 18:34:14 +0530
Message-ID: <20200924130414.59881-4-rnagadheeraj@marvell.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200924130414.59881-1-rnagadheeraj@marvell.com>
References: <20200924130414.59881-1-rnagadheeraj@marvell.com>
Subject: [dpdk-dev] [PATCH v2 3/3] crypto/nitrox: support cipher only crypto operations

This patch adds support for cipher-only crypto operations.
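A cipher-only session is one created from a single cipher transform with no
chained auth transform (xform->next == NULL), which get_crypto_chain_order()
classifies as NITROX_CHAIN_CIPHER_ONLY below. For illustration only, a minimal
sketch of such a session setup through the generic cryptodev API of this
release follows; the AES-128-CBC parameters, IV_OFFSET convention, dev_id and
the two session mempools are assumptions of the sketch, not part of the patch:

	#include <rte_cryptodev.h>

	/* App-side convention: carry the IV in the op's private area,
	 * right after the symmetric op. Not a PMD requirement.
	 */
	#define IV_OFFSET (sizeof(struct rte_crypto_op) + \
			   sizeof(struct rte_crypto_sym_op))

	static struct rte_cryptodev_sym_session *
	create_cipher_only_session(uint8_t dev_id,
				   struct rte_mempool *sess_mp,
				   struct rte_mempool *sess_priv_mp,
				   const uint8_t key[16])
	{
		struct rte_crypto_sym_xform xform = {
			.next = NULL,	/* no auth xform: cipher-only */
			.type = RTE_CRYPTO_SYM_XFORM_CIPHER,
			.cipher = {
				.op = RTE_CRYPTO_CIPHER_OP_ENCRYPT,
				.algo = RTE_CRYPTO_CIPHER_AES_CBC,
				.key = { .data = key, .length = 16 },
				.iv = { .offset = IV_OFFSET, .length = 16 },
			},
		};
		struct rte_cryptodev_sym_session *sess;

		sess = rte_cryptodev_sym_session_create(sess_mp);
		if (sess == NULL)
			return NULL;
		if (rte_cryptodev_sym_session_init(dev_id, sess, &xform,
						   sess_priv_mp) < 0) {
			rte_cryptodev_sym_session_free(sess);
			return NULL;
		}
		return sess;
	}

The application-side enqueue/dequeue flow is unchanged; the patch only adds
the PMD-internal request construction for this chain type.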
Signed-off-by: Nagadheeraj Rottela <rnagadheeraj@marvell.com>
---
 doc/guides/cryptodevs/nitrox.rst          |   2 -
 doc/guides/rel_notes/release_20_11.rst    |   5 +
 drivers/crypto/nitrox/nitrox_sym.c        |   3 +
 drivers/crypto/nitrox/nitrox_sym_reqmgr.c | 191 ++++++++++++++++------
 4 files changed, 149 insertions(+), 52 deletions(-)

diff --git a/doc/guides/cryptodevs/nitrox.rst b/doc/guides/cryptodevs/nitrox.rst
index 91fca905a..095e545c6 100644
--- a/doc/guides/cryptodevs/nitrox.rst
+++ b/doc/guides/cryptodevs/nitrox.rst
@@ -33,8 +33,6 @@ Supported AEAD algorithms:
 Limitations
 -----------
 
-* AES_CBC Cipher Only combination is not supported.
-* 3DES Cipher Only combination is not supported.
 * Session-less APIs are not supported.
 
 Installation
diff --git a/doc/guides/rel_notes/release_20_11.rst b/doc/guides/rel_notes/release_20_11.rst
index 73ac08fb0..ddcf90356 100644
--- a/doc/guides/rel_notes/release_20_11.rst
+++ b/doc/guides/rel_notes/release_20_11.rst
@@ -55,6 +55,11 @@ New Features
      Also, make sure to start the actual text at the margin.
      =======================================================
 
+* **Updated Marvell NITROX symmetric crypto PMD.**
+
+  * Added cipher only offload support.
+  * Added AES-GCM support.
+
 Removed Items
 -------------
diff --git a/drivers/crypto/nitrox/nitrox_sym.c b/drivers/crypto/nitrox/nitrox_sym.c
index fe3ee6e23..2768bdd2e 100644
--- a/drivers/crypto/nitrox/nitrox_sym.c
+++ b/drivers/crypto/nitrox/nitrox_sym.c
@@ -550,6 +550,9 @@ nitrox_sym_dev_sess_configure(struct rte_cryptodev *cdev,
 	ctx = mp_obj;
 	ctx->nitrox_chain = get_crypto_chain_order(xform);
 	switch (ctx->nitrox_chain) {
+	case NITROX_CHAIN_CIPHER_ONLY:
+		cipher_xform = &xform->cipher;
+		break;
 	case NITROX_CHAIN_CIPHER_AUTH:
 		cipher_xform = &xform->cipher;
 		auth_xform = &xform->next->auth;
diff --git a/drivers/crypto/nitrox/nitrox_sym_reqmgr.c b/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
index 47f5244b1..fe3ca25a0 100644
--- a/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
+++ b/drivers/crypto/nitrox/nitrox_sym_reqmgr.c
@@ -247,38 +247,6 @@ softreq_copy_iv(struct nitrox_softreq *sr, uint8_t salt_size)
 	sr->iv.len = sr->ctx->iv.length - salt_size;
 }
 
-static int
-extract_cipher_auth_digest(struct nitrox_softreq *sr,
-			   struct nitrox_sglist *digest)
-{
-	struct rte_crypto_op *op = sr->op;
-	struct rte_mbuf *mdst = op->sym->m_dst ? op->sym->m_dst :
-		op->sym->m_src;
-
-	if (sr->ctx->req_op == NITROX_OP_DECRYPT &&
-	    unlikely(!op->sym->auth.digest.data))
-		return -EINVAL;
-
-	digest->len = sr->ctx->digest_length;
-	if (op->sym->auth.digest.data) {
-		digest->iova = op->sym->auth.digest.phys_addr;
-		digest->virt = op->sym->auth.digest.data;
-		return 0;
-	}
-
-	if (unlikely(rte_pktmbuf_data_len(mdst) < op->sym->auth.data.offset +
-		     op->sym->auth.data.length + digest->len))
-		return -EINVAL;
-
-	digest->iova = rte_pktmbuf_iova_offset(mdst,
-					       op->sym->auth.data.offset +
-					       op->sym->auth.data.length);
-	digest->virt = rte_pktmbuf_mtod_offset(mdst, uint8_t *,
-					       op->sym->auth.data.offset +
-					       op->sym->auth.data.length);
-	return 0;
-}
-
 static void
 fill_sglist(struct nitrox_sgtable *sgtbl, uint16_t len, rte_iova_t iova,
 	    void *virt)
@@ -340,6 +308,143 @@ create_sglist_from_mbuf(struct nitrox_sgtable *sgtbl, struct rte_mbuf *mbuf,
 	return 0;
 }
 
+static void
+create_sgcomp(struct nitrox_sgtable *sgtbl)
+{
+	int i, j, nr_sgcomp;
+	struct nitrox_sgcomp *sgcomp = sgtbl->sgcomp;
+	struct nitrox_sglist *sglist = sgtbl->sglist;
+
+	nr_sgcomp = RTE_ALIGN_MUL_CEIL(sgtbl->map_bufs_cnt, 4) / 4;
+	sgtbl->nr_sgcomp = nr_sgcomp;
+	for (i = 0; i < nr_sgcomp; i++, sgcomp++) {
+		for (j = 0; j < 4; j++, sglist++) {
+			sgcomp->len[j] = rte_cpu_to_be_16(sglist->len);
+			sgcomp->iova[j] = rte_cpu_to_be_64(sglist->iova);
+		}
+	}
+}
+
+static int
+create_cipher_inbuf(struct nitrox_softreq *sr)
+{
+	int err;
+	struct rte_crypto_op *op = sr->op;
+
+	fill_sglist(&sr->in, sr->iv.len, sr->iv.iova, sr->iv.virt);
+	err = create_sglist_from_mbuf(&sr->in, op->sym->m_src,
+				      op->sym->cipher.data.offset,
+				      op->sym->cipher.data.length);
+	if (unlikely(err))
+		return err;
+
+	create_sgcomp(&sr->in);
+	sr->dptr = sr->iova + offsetof(struct nitrox_softreq, in.sgcomp);
+
+	return 0;
+}
+
+static int
+create_cipher_outbuf(struct nitrox_softreq *sr)
+{
+	struct rte_crypto_op *op = sr->op;
+	int err, cnt = 0;
+	struct rte_mbuf *m_dst = op->sym->m_dst ? op->sym->m_dst :
+		op->sym->m_src;
+
+	sr->resp.orh = PENDING_SIG;
+	sr->out.sglist[cnt].len = sizeof(sr->resp.orh);
+	sr->out.sglist[cnt].iova = sr->iova + offsetof(struct nitrox_softreq,
+						       resp.orh);
+	sr->out.sglist[cnt].virt = &sr->resp.orh;
+	cnt++;
+
+	sr->out.map_bufs_cnt = cnt;
+	fill_sglist(&sr->out, sr->iv.len, sr->iv.iova, sr->iv.virt);
+	err = create_sglist_from_mbuf(&sr->out, m_dst,
+				      op->sym->cipher.data.offset,
+				      op->sym->cipher.data.length);
+	if (unlikely(err))
+		return err;
+
+	cnt = sr->out.map_bufs_cnt;
+	sr->resp.completion = PENDING_SIG;
+	sr->out.sglist[cnt].len = sizeof(sr->resp.completion);
+	sr->out.sglist[cnt].iova = sr->iova + offsetof(struct nitrox_softreq,
+						       resp.completion);
+	sr->out.sglist[cnt].virt = &sr->resp.completion;
+	cnt++;
+
+	RTE_VERIFY(cnt <= MAX_SGBUF_CNT);
+	sr->out.map_bufs_cnt = cnt;
+
+	create_sgcomp(&sr->out);
+	sr->rptr = sr->iova + offsetof(struct nitrox_softreq, out.sgcomp);
+
+	return 0;
+}
+
+static void
+create_cipher_gph(uint32_t cryptlen, uint16_t ivlen, struct gphdr *gph)
+{
+	gph->param0 = rte_cpu_to_be_16(cryptlen);
+	gph->param1 = 0;
+	gph->param2 = rte_cpu_to_be_16(ivlen);
+	gph->param3 = 0;
+}
+
+static int
+process_cipher_data(struct nitrox_softreq *sr)
+{
+	struct rte_crypto_op *op = sr->op;
+	int err;
+
+	softreq_copy_iv(sr, 0);
+	err = create_cipher_inbuf(sr);
+	if (unlikely(err))
+		return err;
+
+	err = create_cipher_outbuf(sr);
+	if (unlikely(err))
+		return err;
+
+	create_cipher_gph(op->sym->cipher.data.length, sr->iv.len, &sr->gph);
+
+	return 0;
+}
+
+static int
+extract_cipher_auth_digest(struct nitrox_softreq *sr,
+			   struct nitrox_sglist *digest)
+{
+	struct rte_crypto_op *op = sr->op;
+	struct rte_mbuf *mdst = op->sym->m_dst ? op->sym->m_dst :
+					op->sym->m_src;
+
+	if (sr->ctx->req_op == NITROX_OP_DECRYPT &&
+	    unlikely(!op->sym->auth.digest.data))
+		return -EINVAL;
+
+	digest->len = sr->ctx->digest_length;
+	if (op->sym->auth.digest.data) {
+		digest->iova = op->sym->auth.digest.phys_addr;
+		digest->virt = op->sym->auth.digest.data;
+		return 0;
+	}
+
+	if (unlikely(rte_pktmbuf_data_len(mdst) < op->sym->auth.data.offset +
+		     op->sym->auth.data.length + digest->len))
+		return -EINVAL;
+
+	digest->iova = rte_pktmbuf_iova_offset(mdst,
+					       op->sym->auth.data.offset +
+					       op->sym->auth.data.length);
+	digest->virt = rte_pktmbuf_mtod_offset(mdst, uint8_t *,
+					       op->sym->auth.data.offset +
+					       op->sym->auth.data.length);
+	return 0;
+}
+
 static int
 create_cipher_auth_sglist(struct nitrox_softreq *sr,
 			  struct nitrox_sgtable *sgtbl, struct rte_mbuf *mbuf)
@@ -408,23 +513,6 @@ create_aead_sglist(struct nitrox_softreq *sr, struct nitrox_sgtable *sgtbl,
 	return err;
 }
 
-static void
-create_sgcomp(struct nitrox_sgtable *sgtbl)
-{
-	int i, j, nr_sgcomp;
-	struct nitrox_sgcomp *sgcomp = sgtbl->sgcomp;
-	struct nitrox_sglist *sglist = sgtbl->sglist;
-
-	nr_sgcomp = RTE_ALIGN_MUL_CEIL(sgtbl->map_bufs_cnt, 4) / 4;
-	sgtbl->nr_sgcomp = nr_sgcomp;
-	for (i = 0; i < nr_sgcomp; i++, sgcomp++) {
-		for (j = 0; j < 4; j++, sglist++) {
-			sgcomp->len[j] = rte_cpu_to_be_16(sglist->len);
-			sgcomp->iova[j] = rte_cpu_to_be_64(sglist->iova);
-		}
-	}
-}
-
 static int
 create_aead_inbuf(struct nitrox_softreq *sr, struct nitrox_sglist *digest)
 {
@@ -613,7 +701,7 @@ extract_combined_digest(struct nitrox_softreq *sr, struct nitrox_sglist *digest)
 				    op->sym->aead.data.length + digest->len))
 		return -EINVAL;
 
-	digest->iova = rte_pktmbuf_mtophys_offset(mdst,
+	digest->iova = rte_pktmbuf_iova_offset(mdst,
 					    op->sym->aead.data.offset +
 					    op->sym->aead.data.length);
 	digest->virt = rte_pktmbuf_mtod_offset(mdst, uint8_t *,
@@ -661,6 +749,9 @@ process_softreq(struct nitrox_softreq *sr)
 	int err = 0;
 
 	switch (ctx->nitrox_chain) {
+	case NITROX_CHAIN_CIPHER_ONLY:
+		err = process_cipher_data(sr);
+		break;
 	case NITROX_CHAIN_CIPHER_AUTH:
 	case NITROX_CHAIN_AUTH_CIPHER:
 		err = process_cipher_auth_data(sr);
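
A note on the scatter-gather code added above: create_sgcomp() packs the
mapped SG entries into hardware components of four (length, address) pairs
each, stored big-endian, and create_cipher_outbuf() brackets the output data
with the ORH and completion words, both pre-seeded with PENDING_SIG so
completion can be polled. The component-count arithmetic can be shown
standalone; this sketch is illustrative only (nr_sgcomp_for is not a driver
symbol):

	#include <rte_common.h>	/* RTE_ALIGN_MUL_CEIL() */

	/* Mirrors the sizing in create_sgcomp(): round the mapped-entry
	 * count up to a multiple of 4, then divide by 4 entries per
	 * component.
	 */
	static inline int
	nr_sgcomp_for(int map_bufs_cnt)
	{
		return RTE_ALIGN_MUL_CEIL(map_bufs_cnt, 4) / 4;
	}

	/* e.g. 5 mapped buffers: RTE_ALIGN_MUL_CEIL(5, 4) / 4 = 8 / 4 = 2
	 * components; the unused slots of the tail component carry
	 * zero-length entries.
	 */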