From patchwork Fri Dec 25 08:03:58 2020
X-Patchwork-Submitter: AMARANATH SOMALAPURAM
X-Patchwork-Id: 85732
X-Patchwork-Delegate: gakhil@marvell.com
From: asomalap@amd.com
To: dev@dpdk.org
Cc: akhil.goyal@nxp.com
Date: Fri, 25 Dec 2020 13:33:58 +0530
Message-Id: <20201225080358.366162-1-asomalap@amd.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200128083819.32834-1-asomalap@amd.com>
References: <20200128083819.32834-1-asomalap@amd.com>
Subject: [dpdk-dev] [PATCH v3] crypto/ccp: enable IOMMU for CCP

From: Amaranath Somalapuram <asomalap@amd.com>

CCP uses the vdev framework, and the vdev framework does not support
IOMMU. Add custom IOMMU support to the AMD CCP driver.

Signed-off-by: Amaranath Somalapuram <asomalap@amd.com>
---
 drivers/crypto/ccp/ccp_crypto.c  | 114 ++++++++++++++++++++++++-------
 drivers/crypto/ccp/ccp_dev.c     |  54 +++------------
 drivers/crypto/ccp/ccp_pci.c     |   1 +
 drivers/crypto/ccp/rte_ccp_pmd.c |   3 +
 4 files changed, 104 insertions(+), 68 deletions(-)

diff --git a/drivers/crypto/ccp/ccp_crypto.c b/drivers/crypto/ccp/ccp_crypto.c
index db3fb6eff..f37d35f18 100644
--- a/drivers/crypto/ccp/ccp_crypto.c
+++ b/drivers/crypto/ccp/ccp_crypto.c
@@ -31,8 +31,10 @@
 #include
 #include

+extern int iommu_mode;
+void *sha_ctx;
 /* SHA initial context values */
-static uint32_t ccp_sha1_init[SHA_COMMON_DIGEST_SIZE / sizeof(uint32_t)] = {
+uint32_t ccp_sha1_init[SHA_COMMON_DIGEST_SIZE / sizeof(uint32_t)] = {
 	SHA1_H4, SHA1_H3,
 	SHA1_H2, SHA1_H1,
 	SHA1_H0, 0x0U,
@@ -744,8 +746,13 @@ ccp_configure_session_cipher(struct ccp_session *sess,
 		CCP_LOG_ERR("Invalid CCP Engine");
 		return -ENOTSUP;
 	}
-	sess->cipher.nonce_phys = rte_mem_virt2phy(sess->cipher.nonce);
-	sess->cipher.key_phys = rte_mem_virt2phy(sess->cipher.key_ccp);
+	if (iommu_mode == 2) {
+		sess->cipher.nonce_phys = rte_mem_virt2iova(sess->cipher.nonce);
+		sess->cipher.key_phys = rte_mem_virt2iova(sess->cipher.key_ccp);
+	} else {
+		sess->cipher.nonce_phys = rte_mem_virt2phy(sess->cipher.nonce);
+		sess->cipher.key_phys = rte_mem_virt2phy(sess->cipher.key_ccp);
+	}
 	return 0;
 }

@@ -784,6 +791,7 @@ ccp_configure_session_auth(struct ccp_session *sess,
 		sess->auth.ctx = (void *)ccp_sha1_init;
 		sess->auth.ctx_len = CCP_SB_BYTES;
 		sess->auth.offset = CCP_SB_BYTES - SHA1_DIGEST_SIZE;
+		rte_memcpy(sha_ctx, sess->auth.ctx, SHA_COMMON_DIGEST_SIZE);
 		break;
 	case RTE_CRYPTO_AUTH_SHA1_HMAC:
 		if (sess->auth_opt) {
@@ -822,6 +830,7 @@ ccp_configure_session_auth(struct ccp_session *sess,
 		sess->auth.ctx = (void *)ccp_sha224_init;
 		sess->auth.ctx_len = CCP_SB_BYTES;
 		sess->auth.offset = CCP_SB_BYTES - SHA224_DIGEST_SIZE;
+		rte_memcpy(sha_ctx, sess->auth.ctx, SHA256_DIGEST_SIZE);
 		break;
 	case RTE_CRYPTO_AUTH_SHA224_HMAC:
 		if (sess->auth_opt) {
@@ -884,6 +893,7 @@ ccp_configure_session_auth(struct ccp_session *sess,
 		sess->auth.ctx = (void *)ccp_sha256_init;
 		sess->auth.ctx_len = CCP_SB_BYTES;
 		sess->auth.offset = CCP_SB_BYTES - SHA256_DIGEST_SIZE;
+		rte_memcpy(sha_ctx, sess->auth.ctx, SHA256_DIGEST_SIZE);
 		break;
 	case RTE_CRYPTO_AUTH_SHA256_HMAC:
 		if (sess->auth_opt) {
@@ -946,6 +956,7 @@ ccp_configure_session_auth(struct ccp_session *sess,
 		sess->auth.ctx = (void *)ccp_sha384_init;
 		sess->auth.ctx_len = CCP_SB_BYTES << 1;
 		sess->auth.offset = (CCP_SB_BYTES << 1) - SHA384_DIGEST_SIZE;
+		rte_memcpy(sha_ctx, sess->auth.ctx, SHA512_DIGEST_SIZE);
 		break;
 	case RTE_CRYPTO_AUTH_SHA384_HMAC:
 		if (sess->auth_opt) {
@@ -1010,6 +1021,7 @@ ccp_configure_session_auth(struct ccp_session *sess,
 		sess->auth.ctx = (void *)ccp_sha512_init;
 		sess->auth.ctx_len = CCP_SB_BYTES << 1;
 		sess->auth.offset = (CCP_SB_BYTES << 1) - SHA512_DIGEST_SIZE;
+		rte_memcpy(sha_ctx, sess->auth.ctx, SHA512_DIGEST_SIZE);
 		break;
 	case RTE_CRYPTO_AUTH_SHA512_HMAC:
 		if (sess->auth_opt) {
@@ -1159,8 +1171,13 @@ ccp_configure_session_aead(struct ccp_session *sess,
 		CCP_LOG_ERR("Unsupported aead algo");
 		return -ENOTSUP;
 	}
-	sess->cipher.nonce_phys = rte_mem_virt2phy(sess->cipher.nonce);
-	sess->cipher.key_phys = rte_mem_virt2phy(sess->cipher.key_ccp);
+	if (iommu_mode == 2) {
+		sess->cipher.nonce_phys = rte_mem_virt2iova(sess->cipher.nonce);
+		sess->cipher.key_phys = rte_mem_virt2iova(sess->cipher.key_ccp);
+	} else {
+		sess->cipher.nonce_phys = rte_mem_virt2phy(sess->cipher.nonce);
+		sess->cipher.key_phys = rte_mem_virt2phy(sess->cipher.key_ccp);
+	}
 	return 0;
 }

@@ -1575,11 +1592,16 @@ ccp_perform_hmac(struct rte_crypto_op *op,
 				   op->sym->auth.data.offset);
 	append_ptr = (void *)rte_pktmbuf_append(op->sym->m_src,
 				session->auth.ctx_len);
-	dest_addr = (phys_addr_t)rte_mem_virt2phy(append_ptr);
+	if (iommu_mode == 2) {
+		dest_addr = (phys_addr_t)rte_mem_virt2iova(append_ptr);
+		pst.src_addr = (phys_addr_t)rte_mem_virt2iova((void *)addr);
+	} else {
+		dest_addr = (phys_addr_t)rte_mem_virt2phy(append_ptr);
+		pst.src_addr = (phys_addr_t)rte_mem_virt2phy((void *)addr);
+	}
 	dest_addr_t = dest_addr;

 	/** Load PHash1 to LSB*/
-	pst.src_addr = (phys_addr_t)rte_mem_virt2phy((void *)addr);
 	pst.dest_addr = (phys_addr_t)(cmd_q->sb_sha * CCP_SB_BYTES);
 	pst.len = session->auth.ctx_len;
 	pst.dir = 1;
@@ -1659,7 +1681,10 @@ ccp_perform_hmac(struct rte_crypto_op *op,

 	/** Load PHash2 to LSB*/
 	addr += session->auth.ctx_len;
-	pst.src_addr = (phys_addr_t)rte_mem_virt2phy((void *)addr);
+	if (iommu_mode == 2)
+		pst.src_addr = (phys_addr_t)rte_mem_virt2iova((void *)addr);
+	else
+		pst.src_addr = (phys_addr_t)rte_mem_virt2phy((void *)addr);
 	pst.dest_addr = (phys_addr_t)(cmd_q->sb_sha * CCP_SB_BYTES);
 	pst.len = session->auth.ctx_len;
 	pst.dir = 1;
@@ -1745,15 +1770,19 @@ ccp_perform_sha(struct rte_crypto_op *op,
 	src_addr = rte_pktmbuf_iova_offset(op->sym->m_src,
 					   op->sym->auth.data.offset);
-
 	append_ptr = (void *)rte_pktmbuf_append(op->sym->m_src,
 					session->auth.ctx_len);
-	dest_addr = (phys_addr_t)rte_mem_virt2phy(append_ptr);
+	if (iommu_mode == 2) {
+		dest_addr = (phys_addr_t)rte_mem_virt2iova(append_ptr);
+		pst.src_addr = (phys_addr_t)sha_ctx;
+	} else {
+		dest_addr = (phys_addr_t)rte_mem_virt2phy(append_ptr);
+		pst.src_addr = (phys_addr_t)rte_mem_virt2phy((void *)
+						session->auth.ctx);
+	}

 	/** Passthru sha context*/
-	pst.src_addr = (phys_addr_t)rte_mem_virt2phy((void *)
-						session->auth.ctx);
 	pst.dest_addr = (phys_addr_t)(cmd_q->sb_sha * CCP_SB_BYTES);
 	pst.len = session->auth.ctx_len;
 	pst.dir = 1;
@@ -1840,10 +1869,16 @@ ccp_perform_sha3_hmac(struct rte_crypto_op *op,
 		CCP_LOG_ERR("CCP MBUF append failed\n");
 		return -1;
 	}
-	dest_addr = (phys_addr_t)rte_mem_virt2phy((void *)append_ptr);
+	if (iommu_mode == 2) {
+		dest_addr = (phys_addr_t)rte_mem_virt2iova((void *)append_ptr);
+		ctx_paddr = (phys_addr_t)rte_mem_virt2iova(
+					session->auth.pre_compute);
+	} else {
+		dest_addr = (phys_addr_t)rte_mem_virt2phy((void *)append_ptr);
+		ctx_paddr = (phys_addr_t)rte_mem_virt2phy(
+					session->auth.pre_compute);
+	}
 	dest_addr_t = dest_addr + (session->auth.ctx_len / 2);
-	ctx_paddr = (phys_addr_t)rte_mem_virt2phy((void
-			*)session->auth.pre_compute);
 	desc = &cmd_q->qbase_desc[cmd_q->qidx];
 	memset(desc, 0, Q_DESC_SIZE);
@@ -1964,7 +1999,7 @@ ccp_perform_sha3(struct rte_crypto_op *op,
 	struct ccp_session *session;
 	union ccp_function function;
 	struct ccp_desc *desc;
-	uint8_t *ctx_addr, *append_ptr;
+	uint8_t *ctx_addr = NULL, *append_ptr = NULL;
 	uint32_t tail;
 	phys_addr_t src_addr, dest_addr, ctx_paddr;
@@ -1980,9 +2015,15 @@ ccp_perform_sha3(struct rte_crypto_op *op,
 		CCP_LOG_ERR("CCP MBUF append failed\n");
 		return -1;
 	}
-	dest_addr = (phys_addr_t)rte_mem_virt2phy((void *)append_ptr);
+	if (iommu_mode == 2) {
+		dest_addr = (phys_addr_t)rte_mem_virt2iova((void *)append_ptr);
+		ctx_paddr = (phys_addr_t)rte_mem_virt2iova((void *)ctx_addr);
+	} else {
+		dest_addr = (phys_addr_t)rte_mem_virt2phy((void *)append_ptr);
+		ctx_paddr = (phys_addr_t)rte_mem_virt2phy((void *)ctx_addr);
+	}
+
 	ctx_addr = session->auth.sha3_ctx;
-	ctx_paddr = (phys_addr_t)rte_mem_virt2phy((void *)ctx_addr);

 	desc = &cmd_q->qbase_desc[cmd_q->qidx];
 	memset(desc, 0, Q_DESC_SIZE);
@@ -2056,7 +2097,13 @@ ccp_perform_aes_cmac(struct rte_crypto_op *op,
 		ctx_addr = session->auth.pre_compute;
 		memset(ctx_addr, 0, AES_BLOCK_SIZE);
-		pst.src_addr = (phys_addr_t)rte_mem_virt2phy((void *)ctx_addr);
+		if (iommu_mode == 2)
+			pst.src_addr = (phys_addr_t)rte_mem_virt2iova(
+							(void *)ctx_addr);
+		else
+			pst.src_addr = (phys_addr_t)rte_mem_virt2phy(
+							(void *)ctx_addr);
+
 		pst.dest_addr = (phys_addr_t)(cmd_q->sb_iv * CCP_SB_BYTES);
 		pst.len = CCP_SB_BYTES;
 		pst.dir = 1;
@@ -2094,7 +2141,12 @@ ccp_perform_aes_cmac(struct rte_crypto_op *op,
 	} else {
 		ctx_addr = session->auth.pre_compute + CCP_SB_BYTES;
 		memset(ctx_addr, 0, AES_BLOCK_SIZE);
-		pst.src_addr = (phys_addr_t)rte_mem_virt2phy((void *)ctx_addr);
+		if (iommu_mode == 2)
+			pst.src_addr = (phys_addr_t)rte_mem_virt2iova(
+							(void *)ctx_addr);
+		else
+			pst.src_addr = (phys_addr_t)rte_mem_virt2phy(
+							(void *)ctx_addr);
 		pst.dest_addr = (phys_addr_t)(cmd_q->sb_iv * CCP_SB_BYTES);
 		pst.len = CCP_SB_BYTES;
 		pst.dir = 1;
@@ -2288,8 +2340,12 @@ ccp_perform_3des(struct rte_crypto_op *op,
 		rte_memcpy(lsb_buf + (CCP_SB_BYTES - session->iv.length),
 			   iv, session->iv.length);
-
-	pst.src_addr = (phys_addr_t)rte_mem_virt2phy((void *) lsb_buf);
+	if (iommu_mode == 2)
+		pst.src_addr = (phys_addr_t)rte_mem_virt2iova(
+						(void *) lsb_buf);
+	else
+		pst.src_addr = (phys_addr_t)rte_mem_virt2phy(
+						(void *) lsb_buf);
 	pst.dest_addr = (phys_addr_t)(cmd_q->sb_iv * CCP_SB_BYTES);
 	pst.len = CCP_SB_BYTES;
 	pst.dir = 1;
@@ -2312,7 +2368,10 @@ ccp_perform_3des(struct rte_crypto_op *op,
 	else
 		dest_addr = src_addr;

-	key_addr = rte_mem_virt2phy(session->cipher.key_ccp);
+	if (iommu_mode == 2)
+		key_addr = rte_mem_virt2iova(session->cipher.key_ccp);
+	else
+		key_addr = rte_mem_virt2phy(session->cipher.key_ccp);

 	desc = &cmd_q->qbase_desc[cmd_q->qidx];
@@ -2707,8 +2766,13 @@ process_ops_to_enqueue(struct ccp_qp *qp,
 	b_info->lsb_buf_idx = 0;
 	b_info->desccnt = 0;
 	b_info->cmd_q = cmd_q;
-	b_info->lsb_buf_phys =
-		(phys_addr_t)rte_mem_virt2phy((void *)b_info->lsb_buf);
+	if (iommu_mode == 2)
+		b_info->lsb_buf_phys =
+		(phys_addr_t)rte_mem_virt2iova((void *)b_info->lsb_buf);
+	else
+		b_info->lsb_buf_phys =
+		(phys_addr_t)rte_mem_virt2phy((void *)b_info->lsb_buf);
+
 	rte_atomic64_sub(&b_info->cmd_q->free_slots, slots_req);

 	b_info->head_offset = (uint32_t)(cmd_q->qbase_phys_addr + cmd_q->qidx *
diff --git a/drivers/crypto/ccp/ccp_dev.c b/drivers/crypto/ccp/ccp_dev.c
index 664ddc174..ee6882b8a 100644
--- a/drivers/crypto/ccp/ccp_dev.c
+++ b/drivers/crypto/ccp/ccp_dev.c
@@ -23,6 +23,7 @@
 #include "ccp_pci.h"
 #include "ccp_pmd_private.h"

+int iommu_mode;
 struct ccp_list ccp_list = TAILQ_HEAD_INITIALIZER(ccp_list);
 static int ccp_dev_id;

@@ -512,7 +513,7 @@ ccp_add_device(struct ccp_device *dev, int type)
 			CCP_WRITE_REG(vaddr, CMD_CLK_GATE_CTL_OFFSET, 0x00108823);
 		}
-		CCP_WRITE_REG(vaddr, CMD_REQID_CONFIG_OFFSET, 0x00001249);
+		CCP_WRITE_REG(vaddr, CMD_REQID_CONFIG_OFFSET, 0x0);

 		/* Copy the private LSB mask to the public registers */
 		status_lo = CCP_READ_REG(vaddr, LSB_PRIVATE_MASK_LO_OFFSET);
@@ -657,9 +658,7 @@ ccp_probe_device(const char *dirname, uint16_t domain,
 	struct rte_pci_device *pci;
 	char filename[PATH_MAX];
 	unsigned long tmp;
-	int uio_fd = -1, i, uio_num;
-	char uio_devname[PATH_MAX];
-	void *map_addr;
+	int uio_fd = -1;

 	ccp_dev = rte_zmalloc("ccp_device", sizeof(*ccp_dev),
 			      RTE_CACHE_LINE_SIZE);
@@ -710,46 +709,14 @@ ccp_probe_device(const char *dirname, uint16_t domain,
 	snprintf(filename, sizeof(filename), "%s/resource", dirname);
 	if (ccp_pci_parse_sysfs_resource(filename, pci) < 0)
 		goto fail;
+	if (iommu_mode == 2)
+		pci->kdrv = RTE_PCI_KDRV_VFIO;
+	else if (iommu_mode == 0)
+		pci->kdrv = RTE_PCI_KDRV_IGB_UIO;
+	else if (iommu_mode == 1)
+		pci->kdrv = RTE_PCI_KDRV_UIO_GENERIC;

-	uio_num = ccp_find_uio_devname(dirname);
-	if (uio_num < 0) {
-		/*
-		 * It may take time for uio device to appear,
-		 * wait here and try again
-		 */
-		usleep(100000);
-		uio_num = ccp_find_uio_devname(dirname);
-		if (uio_num < 0)
-			goto fail;
-	}
-	snprintf(uio_devname, sizeof(uio_devname), "/dev/uio%u", uio_num);
-
-	uio_fd = open(uio_devname, O_RDWR | O_NONBLOCK);
-	if (uio_fd < 0)
-		goto fail;
-	if (flock(uio_fd, LOCK_EX | LOCK_NB))
-		goto fail;
-
-	/* Map the PCI memory resource of device */
-	for (i = 0; i < PCI_MAX_RESOURCE; i++) {
-
-		char devname[PATH_MAX];
-		int res_fd;
-
-		if (pci->mem_resource[i].phys_addr == 0)
-			continue;
-		snprintf(devname, sizeof(devname), "%s/resource%d", dirname, i);
-		res_fd = open(devname, O_RDWR);
-		if (res_fd < 0)
-			goto fail;
-		map_addr = mmap(NULL, pci->mem_resource[i].len,
-				PROT_READ | PROT_WRITE,
-				MAP_SHARED, res_fd, 0);
-		if (map_addr == MAP_FAILED)
-			goto fail;
-
-		pci->mem_resource[i].addr = map_addr;
-	}
+	rte_pci_map_device(pci);

 	/* device is valid, add in list */
 	if (ccp_add_device(ccp_dev, ccp_type)) {
@@ -784,6 +751,7 @@ ccp_probe_devices(const struct rte_pci_id *ccp_id)
 	if (module_idx < 0)
 		return -1;

+	iommu_mode = module_idx;
 	TAILQ_INIT(&ccp_list);
 	dir = opendir(SYSFS_PCI_DEVICES);
 	if (dir == NULL)
diff --git a/drivers/crypto/ccp/ccp_pci.c b/drivers/crypto/ccp/ccp_pci.c
index 1702a09c4..38029a908 100644
--- a/drivers/crypto/ccp/ccp_pci.c
+++ b/drivers/crypto/ccp/ccp_pci.c
@@ -15,6 +15,7 @@
 static const char * const uio_module_names[] = {
 	"igb_uio",
 	"uio_pci_generic",
+	"vfio_pci"
 };

 int
diff --git a/drivers/crypto/ccp/rte_ccp_pmd.c b/drivers/crypto/ccp/rte_ccp_pmd.c
index 000b2f4fe..ba379a19f 100644
--- a/drivers/crypto/ccp/rte_ccp_pmd.c
+++ b/drivers/crypto/ccp/rte_ccp_pmd.c
@@ -22,6 +22,7 @@
 static unsigned int ccp_pmd_init_done;
 uint8_t ccp_cryptodev_driver_id;
 uint8_t cryptodev_cnt;
+extern void *sha_ctx;

 struct ccp_pmd_init_params {
 	struct rte_cryptodev_pmd_init_params def_p;
@@ -305,6 +306,7 @@ cryptodev_ccp_remove(struct rte_vdev_device *dev)
 	ccp_pmd_init_done = 0;
 	name = rte_vdev_device_name(dev);
+	rte_free(sha_ctx);
 	if (name == NULL)
 		return -EINVAL;

@@ -388,6 +390,7 @@ cryptodev_ccp_probe(struct rte_vdev_device *vdev)
 	};
 	const char *input_args;

+	sha_ctx = (void *)rte_malloc(NULL, SHA512_DIGEST_SIZE, 64);
 	if (ccp_pmd_init_done) {
 		RTE_LOG(INFO, PMD, "CCP PMD already initialized\n");
 		return -EFAULT;
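
Note for readers skimming the diff: nearly every hunk in ccp_crypto.c repeats one
substitution. When the device is bound through vfio-pci (iommu_mode == 2) the CCP
engine must be handed IOVAs from rte_mem_virt2iova(); on the igb_uio /
uio_pci_generic paths it keeps using rte_mem_virt2phy() physical addresses. Below
is a minimal sketch of that selection. The helper name ccp_virt2hw() is
hypothetical and not part of the patch (the patch open-codes the branch at each
call site); rte_mem_virt2iova() and rte_mem_virt2phy() are the rte_memory.h APIs
used above.

/*
 * Hypothetical helper, not part of the patch: the address-selection pattern
 * repeated throughout ccp_crypto.c, assuming the patch's iommu_mode
 * convention (2 = vfio-pci, 0 = igb_uio, 1 = uio_pci_generic).
 */
#include <rte_memory.h>

extern int iommu_mode;	/* set from the detected kernel module, as in ccp_dev.c */

static inline phys_addr_t
ccp_virt2hw(const void *vaddr)
{
	if (iommu_mode == 2)
		/* IOMMU present: give the engine an IOVA */
		return (phys_addr_t)rte_mem_virt2iova(vaddr);
	/* UIO paths: fall back to the physical address */
	return (phys_addr_t)rte_mem_virt2phy(vaddr);
}

With such a helper, sess->cipher.key_phys = ccp_virt2hw(sess->cipher.key_ccp)
would collapse the if/else blocks seen in ccp_configure_session_cipher() and
ccp_configure_session_aead(); the patch keeps the explicit branches instead.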