From patchwork Mon Oct 24 13:44:04 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zhangfei Gao X-Patchwork-Id: 119026 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D4E3AA034C; Mon, 24 Oct 2022 15:50:55 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 4ED8F42BAB; Mon, 24 Oct 2022 15:50:52 +0200 (CEST) Received: from mail-wm1-f54.google.com (mail-wm1-f54.google.com [209.85.128.54]) by mails.dpdk.org (Postfix) with ESMTP id 541BC40A8B for ; Mon, 24 Oct 2022 15:50:50 +0200 (CEST) Received: by mail-wm1-f54.google.com with SMTP id v130-20020a1cac88000000b003bcde03bd44so10015580wme.5 for ; Mon, 24 Oct 2022 06:50:50 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=98I8HwuNAbA6VPZFGRWCIvcNqdqle9pejvNWleSOUJc=; b=ZP8XS/GBrnfGKsT7Zis1gStph29s0ZmwFnUOS5z5aii+T+FriWNKMiEYh/SbyTnl9y F0lBKUECFiOlpM0Tjb0qt+GV7q5d/bvvHne5hTLz60AXVtx0oAQGlCqKDYNvbnGYXi+x oIxHZqKt6uvHCxJ2PVEAPZS/PvTcTcLupWqE/81N4MeH82OqwV9YEJTevo6IWZjc5ok/ 4bvf3p50AKHVuUBWTSr6CEJh2d5Q7YK1GiWkd5Bg4n0PZViMa8wud/204SC3//CGxo/G RrWcxJPEhhYd/1jL1hSuW7bc73uinXedjkA2cGQrZd8mhWQNrbYAjyQQoYlqAI6LF6ob UxTA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=98I8HwuNAbA6VPZFGRWCIvcNqdqle9pejvNWleSOUJc=; b=cpzZ/aZ2aqDPXtqoIedhFLkb+roqECGCwf6kT3Y8cKqejSPZIukjwSlrd09D9tCrki 2jIEcHUx5qmHMdLVIq8AqfnbSYn/UFGwjKlsc9D/7ZqFJeTjH6X7YdqIVssPSCN3B8pi fltckbLlhzE2BgFNTxQU3Nu4hoS5dKst6M3xKeD8CWPYK9OonrmfTT952g4EnRyYHwhK Rc5i2biZ/eI4KYiV+q9rNvC/esAXb6yToDrB8UVMqOIMZ0kGw8BBX3OMkO2Ax1/mC3E3 jzc9zzmr7dxHFnBLPF+EY2F9cIoMFSHI5xRYVCoz1d99Vik3wyz2F9SG78OrRLLq24V+ NtaA== X-Gm-Message-State: ACrzQf0+KW8j7bau6PJDc3WtviUp8fZ0cqmULJU+AQHFg00EHPSQJZVB LW/1hHGeT7qFpkPUTQ9ArxI6ng== X-Google-Smtp-Source: AMsMyM40WBnPQ2Oavn3BmsGz6upLNITL54Hi4I9JugO/K6j+rrg10q55PgOHRMhSHCXnKVabyNLlcA== X-Received: by 2002:a1c:770f:0:b0:3c8:33ba:150f with SMTP id t15-20020a1c770f000000b003c833ba150fmr10672567wmi.194.1666619449991; Mon, 24 Oct 2022 06:50:49 -0700 (PDT) Received: from localhost.localdomain ([213.146.143.36]) by smtp.gmail.com with ESMTPSA id t8-20020a0560001a4800b00236644228cfsm5764108wry.43.2022.10.24.06.50.49 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 24 Oct 2022 06:50:49 -0700 (PDT) From: Zhangfei Gao To: Akhil Goyal , Declan Doherty , Fan Zhang , Ashish Gupta , Ray Kinsella , "thomas@monjalon.net" Cc: dev@dpdk.org, acc@openeuler.org, Zhangfei Gao Subject: [PATCH resend v5 1/6] crypto/uadk: introduce uadk crypto driver Date: Mon, 24 Oct 2022 21:44:04 +0800 Message-Id: <20221024134409.1896776-2-zhangfei.gao@linaro.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20221024134409.1896776-1-zhangfei.gao@linaro.org> References: <20221024134409.1896776-1-zhangfei.gao@linaro.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: 
List-Subscribe: , Errors-To: dev-bounces@dpdk.org Introduce a new crypto PMD for hardware accelerators based on UADK [1]. UADK is a framework for user applications to access hardware accelerators. UADK relies on IOMMU SVA (Shared Virtual Address) feature, which share the same page table between IOMMU and MMU. Thereby user application can directly use virtual address for device dma, which enhances the performance as well as easy usability. This patch adds the basic framework. [1] https://github.com/Linaro/uadk Signed-off-by: Zhangfei Gao --- MAINTAINERS | 6 ++ doc/guides/cryptodevs/features/uadk.ini | 33 +++++++ doc/guides/cryptodevs/index.rst | 1 + doc/guides/cryptodevs/uadk.rst | 73 ++++++++++++++ drivers/crypto/meson.build | 1 + drivers/crypto/uadk/meson.build | 30 ++++++ drivers/crypto/uadk/uadk_crypto_pmd.c | 121 ++++++++++++++++++++++++ drivers/crypto/uadk/version.map | 3 + 8 files changed, 268 insertions(+) create mode 100644 doc/guides/cryptodevs/features/uadk.ini create mode 100644 doc/guides/cryptodevs/uadk.rst create mode 100644 drivers/crypto/uadk/meson.build create mode 100644 drivers/crypto/uadk/uadk_crypto_pmd.c create mode 100644 drivers/crypto/uadk/version.map diff --git a/MAINTAINERS b/MAINTAINERS index 6f56111323..bf9baa9070 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1060,6 +1060,12 @@ M: Kai Ji F: drivers/crypto/scheduler/ F: doc/guides/cryptodevs/scheduler.rst +HiSilicon UADK crypto +M: Zhangfei Gao +F: drivers/crypto/uadk/ +F: doc/guides/cryptodevs/uadk.rst +F: doc/guides/cryptodevs/features/uadk.ini + Intel QuickAssist M: Kai Ji F: drivers/crypto/qat/ diff --git a/doc/guides/cryptodevs/features/uadk.ini b/doc/guides/cryptodevs/features/uadk.ini new file mode 100644 index 0000000000..df5ad40e3d --- /dev/null +++ b/doc/guides/cryptodevs/features/uadk.ini @@ -0,0 +1,33 @@ +; +; Supported features of the 'uadk' crypto driver. +; +; Refer to default.ini for the full list of available PMD features. +; +[Features] +HW Accelerated = Y + +; +; Supported crypto algorithms of the 'uadk' crypto driver. +; +[Cipher] + +; +; Supported authentication algorithms of the 'uadk' crypto driver. +; +[Auth] + +; +; Supported AEAD algorithms of the 'uadk' crypto driver. +; +[AEAD] + +; +; Supported Asymmetric algorithms of the 'uadk' crypto driver. +; +[Asymmetric] + +; +; Supported Operating systems of the 'uadk' crypto driver. +; +[OS] +Linux = Y diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst index 39cca6dbde..cb4ce227e9 100644 --- a/doc/guides/cryptodevs/index.rst +++ b/doc/guides/cryptodevs/index.rst @@ -30,5 +30,6 @@ Crypto Device Drivers scheduler snow3g qat + uadk virtio zuc diff --git a/doc/guides/cryptodevs/uadk.rst b/doc/guides/cryptodevs/uadk.rst new file mode 100644 index 0000000000..1dfaab73c8 --- /dev/null +++ b/doc/guides/cryptodevs/uadk.rst @@ -0,0 +1,73 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright 2022-2023 Huawei Technologies Co.,Ltd. All rights reserved. + Copyright 2022-2023 Linaro ltd. + +UADK Crypto Poll Mode Driver +======================================================= + +UADK crypto PMD provides poll mode driver +All cryptographic operations are using UADK crypto API. +Hardware accelerators using UADK are supposed to be supported. + + +Features +-------- + +UADK crypto PMD has support for: + + +Test steps +---------- + + .. code-block:: console + + 1. 
Build UADK + $ git clone https://github.com/Linaro/uadk.git + $ cd uadk + $ mkdir build + $ ./autogen.sh + $ ./configure --prefix=$PWD/build + $ make + $ make install + + * Without --prefix, UADK will be installed to /usr/local/lib by default + * If get error:"cannot find -lnuma", please install the libnuma-dev + + 2. Run pkg-config libwd to ensure env is setup correctly + $ export PKG_CONFIG_PATH=$PWD/build/lib/pkgconfig + $ pkg-config libwd --cflags --libs + -I/usr/local/include -L/usr/local/lib -lwd + + * export PKG_CONFIG_PATH is required on demand, + not needed if UADK is installed to /usr/local/lib + + 3. Build DPDK + $ cd dpdk + $ mkdir build + $ meson build (--reconfigure) + $ cd build + $ ninja + $ sudo ninja install + + 4. Prepare hugepage for dpdk + $ echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages + $ echo 1024 > /sys/devices/system/node/node1/hugepages/hugepages-2048kB/nr_hugepages + $ echo 1024 > /sys/devices/system/node/node2/hugepages/hugepages-2048kB/nr_hugepages + $ echo 1024 > /sys/devices/system/node/node3/hugepages/hugepages-2048kB/nr_hugepages + $ mkdir -p /mnt/huge_2mb + $ mount -t hugetlbfs none /mnt/huge_2mb -o pagesize=2MB + + 5. Run test app + +Dependency +---------- + +UADK crypto PMD relies on UADK library [1] + +UADK is a framework for user applications to access hardware accelerators. +UADK relies on IOMMU SVA (Shared Virtual Address) feature, which share +the same page table between IOMMU and MMU. +As a result, user application can directly use virtual address for device dma, +which enhances the performance as well as easy usability. + +[1] https://github.com/Linaro/uadk diff --git a/drivers/crypto/meson.build b/drivers/crypto/meson.build index 147b8cf633..ee5377deff 100644 --- a/drivers/crypto/meson.build +++ b/drivers/crypto/meson.build @@ -18,6 +18,7 @@ drivers = [ 'octeontx', 'openssl', 'scheduler', + 'uadk', 'virtio', ] diff --git a/drivers/crypto/uadk/meson.build b/drivers/crypto/uadk/meson.build new file mode 100644 index 0000000000..f6fae0a239 --- /dev/null +++ b/drivers/crypto/uadk/meson.build @@ -0,0 +1,30 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright 2022-2023 Huawei Technologies Co.,Ltd. All rights reserved. +# Copyright 2022-2023 Linaro ltd. + +if not is_linux + build = false + reason = 'only supported on Linux' + subdir_done() +endif + +sources = files( + 'uadk_crypto_pmd.c', +) + +deps += 'bus_vdev' +dep = dependency('libwd_crypto', required: false, method: 'pkg-config') +if not dep.found() + build = false + reason = 'missing dependency, "libwd_crypto"' +else + ext_deps += dep +endif + +dep = dependency('libwd', required: false, method: 'pkg-config') +if not dep.found() + build = false + reason = 'missing dependency, "libwd"' +else + ext_deps += dep +endif diff --git a/drivers/crypto/uadk/uadk_crypto_pmd.c b/drivers/crypto/uadk/uadk_crypto_pmd.c new file mode 100644 index 0000000000..2ae2b33bd7 --- /dev/null +++ b/drivers/crypto/uadk/uadk_crypto_pmd.c @@ -0,0 +1,121 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright 2022-2023 Huawei Technologies Co.,Ltd. All rights reserved. + * Copyright 2022-2023 Linaro ltd. + */ + +#include +#include +#include +#include +#include +#include +#include + +enum uadk_crypto_version { + UADK_CRYPTO_V2, + UADK_CRYPTO_V3, +}; + +struct uadk_crypto_priv { + enum uadk_crypto_version version; +} __rte_cache_aligned; + +static uint8_t uadk_cryptodev_driver_id; + +RTE_LOG_REGISTER_DEFAULT(uadk_crypto_logtype, INFO); + +#define UADK_LOG(level, fmt, ...) 
\ + rte_log(RTE_LOG_ ## level, uadk_crypto_logtype, \ + "%s() line %u: " fmt "\n", __func__, __LINE__, \ + ## __VA_ARGS__) + +static struct rte_cryptodev_ops uadk_crypto_pmd_ops = { + .dev_configure = NULL, + .dev_start = NULL, + .dev_stop = NULL, + .dev_close = NULL, + .stats_get = NULL, + .stats_reset = NULL, + .dev_infos_get = NULL, + .queue_pair_setup = NULL, + .queue_pair_release = NULL, + .sym_session_get_size = NULL, + .sym_session_configure = NULL, + .sym_session_clear = NULL, +}; + +static int +uadk_cryptodev_probe(struct rte_vdev_device *vdev) +{ + struct rte_cryptodev_pmd_init_params init_params = { + .name = "", + .private_data_size = sizeof(struct uadk_crypto_priv), + .max_nb_queue_pairs = + RTE_CRYPTODEV_PMD_DEFAULT_MAX_NB_QUEUE_PAIRS, + }; + enum uadk_crypto_version version = UADK_CRYPTO_V2; + struct uadk_crypto_priv *priv; + struct rte_cryptodev *dev; + struct uacce_dev *udev; + const char *name; + + udev = wd_get_accel_dev("cipher"); + if (!udev) + return -ENODEV; + + if (!strcmp(udev->api, "hisi_qm_v2")) + version = UADK_CRYPTO_V2; + + free(udev); + + name = rte_vdev_device_name(vdev); + if (name == NULL) + return -EINVAL; + + dev = rte_cryptodev_pmd_create(name, &vdev->device, &init_params); + if (dev == NULL) { + UADK_LOG(ERR, "driver %s: create failed", init_params.name); + return -ENODEV; + } + + dev->dev_ops = &uadk_crypto_pmd_ops; + dev->driver_id = uadk_cryptodev_driver_id; + dev->dequeue_burst = NULL; + dev->enqueue_burst = NULL; + dev->feature_flags = RTE_CRYPTODEV_FF_HW_ACCELERATED; + priv = dev->data->dev_private; + priv->version = version; + + rte_cryptodev_pmd_probing_finish(dev); + + return 0; +} + +static int +uadk_cryptodev_remove(struct rte_vdev_device *vdev) +{ + struct rte_cryptodev *cryptodev; + const char *name; + + name = rte_vdev_device_name(vdev); + if (name == NULL) + return -EINVAL; + + cryptodev = rte_cryptodev_pmd_get_named_dev(name); + if (cryptodev == NULL) + return -ENODEV; + + return rte_cryptodev_pmd_destroy(cryptodev); +} + +static struct rte_vdev_driver uadk_crypto_pmd = { + .probe = uadk_cryptodev_probe, + .remove = uadk_cryptodev_remove, +}; + +static struct cryptodev_driver uadk_crypto_drv; + +#define UADK_CRYPTO_DRIVER_NAME crypto_uadk +RTE_PMD_REGISTER_VDEV(UADK_CRYPTO_DRIVER_NAME, uadk_crypto_pmd); +RTE_PMD_REGISTER_CRYPTO_DRIVER(uadk_crypto_drv, uadk_crypto_pmd.driver, + uadk_cryptodev_driver_id); diff --git a/drivers/crypto/uadk/version.map b/drivers/crypto/uadk/version.map new file mode 100644 index 0000000000..c2e0723b4c --- /dev/null +++ b/drivers/crypto/uadk/version.map @@ -0,0 +1,3 @@ +DPDK_22 { + local: *; +}; From patchwork Mon Oct 24 13:44:05 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zhangfei Gao X-Patchwork-Id: 119027 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id B0485A034C; Mon, 24 Oct 2022 15:51:01 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 559C542BA7; Mon, 24 Oct 2022 15:50:55 +0200 (CEST) Received: from mail-wr1-f49.google.com (mail-wr1-f49.google.com [209.85.221.49]) by mails.dpdk.org (Postfix) with ESMTP id F113E40A8B for ; Mon, 24 Oct 2022 15:50:50 +0200 (CEST) Received: by mail-wr1-f49.google.com with SMTP id k8so7682709wrh.1 for ; Mon, 24 Oct 2022 06:50:50 -0700 (PDT) 
From: Zhangfei Gao
To: Akhil Goyal, Declan Doherty, Fan Zhang, Ashish Gupta, Ray Kinsella, thomas@monjalon.net
Cc: dev@dpdk.org, acc@openeuler.org, Zhangfei Gao
Subject: [PATCH resend v5 2/6] crypto/uadk: support basic operations
Date: Mon, 24 Oct 2022 21:44:05 +0800
Message-Id: <20221024134409.1896776-3-zhangfei.gao@linaro.org>
In-Reply-To: <20221024134409.1896776-1-zhangfei.gao@linaro.org>
References: <20221024134409.1896776-1-zhangfei.gao@linaro.org>

Support the basic dev control operations: configure, close, start, stop
and get info, as well as the queue pair operations.
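Illustration only, not part of the patch: a minimal sketch of how an
application would exercise these ops through the generic cryptodev API once
the PMD is probed. The "crypto_uadk" vdev name comes from this series; the
queue depth, socket id, and the omitted session-mempool and error handling
are placeholder assumptions.

    #include <rte_bus_vdev.h>
    #include <rte_cryptodev.h>

    /* Assumes rte_eal_init() has already been called. */
    static int uadk_dev_bringup(void)
    {
            struct rte_cryptodev_config conf = {
                    .socket_id = 0, .nb_queue_pairs = 1 };
            struct rte_cryptodev_qp_conf qp_conf = { .nb_descriptors = 1024 };
            struct rte_cryptodev_info info;
            int dev_id;

            if (rte_vdev_init("crypto_uadk", "") < 0)   /* uadk_cryptodev_probe() */
                    return -1;

            dev_id = rte_cryptodev_get_dev_id("crypto_uadk");
            if (dev_id < 0)
                    return -1;

            rte_cryptodev_info_get(dev_id, &info);      /* dev_infos_get */

            if (rte_cryptodev_configure(dev_id, &conf) < 0)   /* dev_configure */
                    return -1;

            /* queue_pair_setup; session mempool handling is left out here */
            if (rte_cryptodev_queue_pair_setup(dev_id, 0, &qp_conf,
                                               conf.socket_id) < 0)
                    return -1;

            return rte_cryptodev_start(dev_id);         /* dev_start */
    }

With only these hooks in place the device can be probed, configured and
started; actual crypto processing arrives with the enqueue/dequeue and
session patches later in the series.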
Signed-off-by: Zhangfei Gao --- drivers/crypto/uadk/uadk_crypto_pmd.c | 213 ++++++++++++++++++++++++-- 1 file changed, 204 insertions(+), 9 deletions(-) diff --git a/drivers/crypto/uadk/uadk_crypto_pmd.c b/drivers/crypto/uadk/uadk_crypto_pmd.c index 2ae2b33bd7..7116899ca5 100644 --- a/drivers/crypto/uadk/uadk_crypto_pmd.c +++ b/drivers/crypto/uadk/uadk_crypto_pmd.c @@ -11,6 +11,25 @@ #include #include +/* Maximum length for digest (SHA-512 needs 64 bytes) */ +#define DIGEST_LENGTH_MAX 64 + +struct uadk_qp { + /* Ring for placing process packets */ + struct rte_ring *processed_pkts; + /* Queue pair statistics */ + struct rte_cryptodev_stats qp_stats; + /* Queue Pair Identifier */ + uint16_t id; + /* Unique Queue Pair Name */ + char name[RTE_CRYPTODEV_NAME_MAX_LEN]; + /* Buffer used to store the digest generated + * by the driver when verifying a digest provided + * by the user (using authentication verify operation) + */ + uint8_t temp_digest[DIGEST_LENGTH_MAX]; +} __rte_cache_aligned; + enum uadk_crypto_version { UADK_CRYPTO_V2, UADK_CRYPTO_V3, @@ -29,16 +48,192 @@ RTE_LOG_REGISTER_DEFAULT(uadk_crypto_logtype, INFO); "%s() line %u: " fmt "\n", __func__, __LINE__, \ ## __VA_ARGS__) +static const struct rte_cryptodev_capabilities uadk_crypto_v2_capabilities[] = { + /* End of capabilities */ + RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() +}; + +/* Configure device */ +static int +uadk_crypto_pmd_config(struct rte_cryptodev *dev __rte_unused, + struct rte_cryptodev_config *config __rte_unused) +{ + return 0; +} + +/* Start device */ +static int +uadk_crypto_pmd_start(struct rte_cryptodev *dev __rte_unused) +{ + return 0; +} + +/* Stop device */ +static void +uadk_crypto_pmd_stop(struct rte_cryptodev *dev __rte_unused) +{ +} + +/* Close device */ +static int +uadk_crypto_pmd_close(struct rte_cryptodev *dev __rte_unused) +{ + return 0; +} + +/* Get device statistics */ +static void +uadk_crypto_pmd_stats_get(struct rte_cryptodev *dev, + struct rte_cryptodev_stats *stats) +{ + int qp_id; + + for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { + struct uadk_qp *qp = dev->data->queue_pairs[qp_id]; + + stats->enqueued_count += qp->qp_stats.enqueued_count; + stats->dequeued_count += qp->qp_stats.dequeued_count; + stats->enqueue_err_count += qp->qp_stats.enqueue_err_count; + stats->dequeue_err_count += qp->qp_stats.dequeue_err_count; + } +} + +/* Reset device statistics */ +static void +uadk_crypto_pmd_stats_reset(struct rte_cryptodev *dev __rte_unused) +{ + int qp_id; + + for (qp_id = 0; qp_id < dev->data->nb_queue_pairs; qp_id++) { + struct uadk_qp *qp = dev->data->queue_pairs[qp_id]; + + memset(&qp->qp_stats, 0, sizeof(qp->qp_stats)); + } +} + +/* Get device info */ +static void +uadk_crypto_pmd_info_get(struct rte_cryptodev *dev, + struct rte_cryptodev_info *dev_info) +{ + struct uadk_crypto_priv *priv = dev->data->dev_private; + + if (dev_info != NULL) { + dev_info->driver_id = dev->driver_id; + dev_info->driver_name = dev->device->driver->name; + dev_info->max_nb_queue_pairs = 128; + /* No limit of number of sessions */ + dev_info->sym.max_nb_sessions = 0; + dev_info->feature_flags = dev->feature_flags; + + if (priv->version == UADK_CRYPTO_V2) + dev_info->capabilities = uadk_crypto_v2_capabilities; + } +} + +/* Release queue pair */ +static int +uadk_crypto_pmd_qp_release(struct rte_cryptodev *dev, uint16_t qp_id) +{ + struct uadk_qp *qp = dev->data->queue_pairs[qp_id]; + + if (qp) { + rte_ring_free(qp->processed_pkts); + rte_free(qp); + dev->data->queue_pairs[qp_id] = NULL; + } + + return 
0; +} + +/* set a unique name for the queue pair based on its name, dev_id and qp_id */ +static int +uadk_pmd_qp_set_unique_name(struct rte_cryptodev *dev, + struct uadk_qp *qp) +{ + unsigned int n = snprintf(qp->name, sizeof(qp->name), + "uadk_crypto_pmd_%u_qp_%u", + dev->data->dev_id, qp->id); + + if (n >= sizeof(qp->name)) + return -EINVAL; + + return 0; +} + +/* Create a ring to place process packets on */ +static struct rte_ring * +uadk_pmd_qp_create_processed_pkts_ring(struct uadk_qp *qp, + unsigned int ring_size, int socket_id) +{ + struct rte_ring *r = qp->processed_pkts; + + if (r) { + if (rte_ring_get_size(r) >= ring_size) { + UADK_LOG(INFO, "Reusing existing ring %s for processed packets", + qp->name); + return r; + } + + UADK_LOG(ERR, "Unable to reuse existing ring %s for processed packets", + qp->name); + return NULL; + } + + return rte_ring_create(qp->name, ring_size, socket_id, + RING_F_EXACT_SZ); +} + +static int +uadk_crypto_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id, + const struct rte_cryptodev_qp_conf *qp_conf, + int socket_id) +{ + struct uadk_qp *qp; + + /* Free memory prior to re-allocation if needed. */ + if (dev->data->queue_pairs[qp_id] != NULL) + uadk_crypto_pmd_qp_release(dev, qp_id); + + /* Allocate the queue pair data structure. */ + qp = rte_zmalloc_socket("uadk PMD Queue Pair", sizeof(*qp), + RTE_CACHE_LINE_SIZE, socket_id); + if (qp == NULL) + return (-ENOMEM); + + qp->id = qp_id; + dev->data->queue_pairs[qp_id] = qp; + + if (uadk_pmd_qp_set_unique_name(dev, qp)) + goto qp_setup_cleanup; + + qp->processed_pkts = uadk_pmd_qp_create_processed_pkts_ring(qp, + qp_conf->nb_descriptors, socket_id); + if (qp->processed_pkts == NULL) + goto qp_setup_cleanup; + + memset(&qp->qp_stats, 0, sizeof(qp->qp_stats)); + + return 0; + +qp_setup_cleanup: + if (qp) { + rte_free(qp); + qp = NULL; + } + return -EINVAL; +} + static struct rte_cryptodev_ops uadk_crypto_pmd_ops = { - .dev_configure = NULL, - .dev_start = NULL, - .dev_stop = NULL, - .dev_close = NULL, - .stats_get = NULL, - .stats_reset = NULL, - .dev_infos_get = NULL, - .queue_pair_setup = NULL, - .queue_pair_release = NULL, + .dev_configure = uadk_crypto_pmd_config, + .dev_start = uadk_crypto_pmd_start, + .dev_stop = uadk_crypto_pmd_stop, + .dev_close = uadk_crypto_pmd_close, + .stats_get = uadk_crypto_pmd_stats_get, + .stats_reset = uadk_crypto_pmd_stats_reset, + .dev_infos_get = uadk_crypto_pmd_info_get, + .queue_pair_setup = uadk_crypto_pmd_qp_setup, + .queue_pair_release = uadk_crypto_pmd_qp_release, .sym_session_get_size = NULL, .sym_session_configure = NULL, .sym_session_clear = NULL, From patchwork Mon Oct 24 13:44:06 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zhangfei Gao X-Patchwork-Id: 119028 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 634FAA034C; Mon, 24 Oct 2022 15:51:07 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 3B38042BBB; Mon, 24 Oct 2022 15:50:56 +0200 (CEST) Received: from mail-wm1-f51.google.com (mail-wm1-f51.google.com [209.85.128.51]) by mails.dpdk.org (Postfix) with ESMTP id BEF6042BA5 for ; Mon, 24 Oct 2022 15:50:51 +0200 (CEST) Received: by mail-wm1-f51.google.com with SMTP id y10so6668420wma.0 for ; Mon, 24 Oct 2022 06:50:51 -0700 (PDT) 
From: Zhangfei Gao
To: Akhil Goyal, Declan Doherty, Fan Zhang, Ashish Gupta, Ray Kinsella, thomas@monjalon.net
Cc: dev@dpdk.org, acc@openeuler.org, Zhangfei Gao
Subject: [PATCH resend v5 3/6] crypto/uadk: support enqueue/dequeue operations
Date: Mon, 24 Oct 2022 21:44:06 +0800
Message-Id: <20221024134409.1896776-4-zhangfei.gao@linaro.org>
In-Reply-To: <20221024134409.1896776-1-zhangfei.gao@linaro.org>
References: <20221024134409.1896776-1-zhangfei.gao@linaro.org>

This commit adds the enqueue and dequeue operations.
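Illustration only, not part of the patch: the application-side data path
that drives these two hooks. The sketch assumes the device and queue pair 0
were already configured and started, and that `op` is a prepared
rte_crypto_op; a real application would of course submit bursts.

    #include <rte_crypto.h>
    #include <rte_cryptodev.h>

    static int uadk_run_one_op(uint8_t dev_id, struct rte_crypto_op *op)
    {
            struct rte_crypto_op *done = NULL;

            if (rte_cryptodev_enqueue_burst(dev_id, 0, &op, 1) == 0)
                    return -1;      /* processed_pkts ring is full, retry later */

            /*
             * At this stage of the series ops are completed on enqueue and
             * parked on the qp ring, so a dequeue poll simply collects them.
             */
            while (rte_cryptodev_dequeue_burst(dev_id, 0, &done, 1) == 0)
                    ;

            return done->status == RTE_CRYPTO_OP_STATUS_SUCCESS ? 0 : -1;
    }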
Signed-off-by: Zhangfei Gao --- drivers/crypto/uadk/uadk_crypto_pmd.c | 53 ++++++++++++++++++++++++++- 1 file changed, 51 insertions(+), 2 deletions(-) diff --git a/drivers/crypto/uadk/uadk_crypto_pmd.c b/drivers/crypto/uadk/uadk_crypto_pmd.c index 7116899ca5..84cb1dab25 100644 --- a/drivers/crypto/uadk/uadk_crypto_pmd.c +++ b/drivers/crypto/uadk/uadk_crypto_pmd.c @@ -239,6 +239,55 @@ static struct rte_cryptodev_ops uadk_crypto_pmd_ops = { .sym_session_clear = NULL, }; +static uint16_t +uadk_crypto_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops, + uint16_t nb_ops) +{ + struct uadk_qp *qp = queue_pair; + struct rte_crypto_op *op; + uint16_t enqd = 0; + int i, ret; + + for (i = 0; i < nb_ops; i++) { + op = ops[i]; + op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED; + + if (op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED) + op->status = RTE_CRYPTO_OP_STATUS_SUCCESS; + + if (op->status != RTE_CRYPTO_OP_STATUS_ERROR) { + ret = rte_ring_enqueue(qp->processed_pkts, (void *)op); + if (ret < 0) + goto enqueue_err; + qp->qp_stats.enqueued_count++; + enqd++; + } else { + /* increment count if failed to enqueue op */ + qp->qp_stats.enqueue_err_count++; + } + } + + return enqd; + +enqueue_err: + qp->qp_stats.enqueue_err_count++; + return enqd; +} + +static uint16_t +uadk_crypto_dequeue_burst(void *queue_pair, struct rte_crypto_op **ops, + uint16_t nb_ops) +{ + struct uadk_qp *qp = queue_pair; + unsigned int nb_dequeued; + + nb_dequeued = rte_ring_dequeue_burst(qp->processed_pkts, + (void **)ops, nb_ops, NULL); + qp->qp_stats.dequeued_count += nb_dequeued; + + return nb_dequeued; +} + static int uadk_cryptodev_probe(struct rte_vdev_device *vdev) { @@ -275,8 +324,8 @@ uadk_cryptodev_probe(struct rte_vdev_device *vdev) dev->dev_ops = &uadk_crypto_pmd_ops; dev->driver_id = uadk_cryptodev_driver_id; - dev->dequeue_burst = NULL; - dev->enqueue_burst = NULL; + dev->dequeue_burst = uadk_crypto_dequeue_burst; + dev->enqueue_burst = uadk_crypto_enqueue_burst; dev->feature_flags = RTE_CRYPTODEV_FF_HW_ACCELERATED; priv = dev->data->dev_private; priv->version = version; From patchwork Mon Oct 24 13:44:07 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zhangfei Gao X-Patchwork-Id: 119029 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id DA8FCA04FD; Mon, 24 Oct 2022 15:51:13 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 7CEE142BC4; Mon, 24 Oct 2022 15:50:57 +0200 (CEST) Received: from mail-wm1-f49.google.com (mail-wm1-f49.google.com [209.85.128.49]) by mails.dpdk.org (Postfix) with ESMTP id B030342BAF for ; Mon, 24 Oct 2022 15:50:52 +0200 (CEST) Received: by mail-wm1-f49.google.com with SMTP id c7-20020a05600c0ac700b003c6cad86f38so9844017wmr.2 for ; Mon, 24 Oct 2022 06:50:52 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=9wb0zMGn2afa2f4i5v3/bBV3r0nrExICG6ya5aJIDMg=; b=PBkr9jDPtmzwEZexbZWQ0qPC6gfcbh0gSYFsvscs/Qhon4VBX2vE3nol3elxrGA2M+ C/Flczv8oNr0Wosrdv8l+53dgIFrbUd6ub/ifoqehrv7jv/5Zb0aLRjcKUqVv4zjjsv3 kXOmqJZ8HPR6QP0mFVkWMGYd5SsLGSMTfqcm+4wwGHR3sibZrBPKJPd1A+0arCssp+W4 
xgHVXRiMDbML5yIWis3ytP2O3H2QDTRqdlAq2GYGB1p6BB0kuXwRQyGa0S65GfHQVgs3 xMfKBiAUPEklXFIuBTTQzjLz2vBPya0tFwjWMoEGiVEkbrvOgsGQZJtRkqLO+egg5jRb 2OgA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=9wb0zMGn2afa2f4i5v3/bBV3r0nrExICG6ya5aJIDMg=; b=x/ob9fm0KvLCt295kQmAuogwH28uyos4tvYDG30KpY6AgE8ZA+Ak5R2d13F08THPWU Mx390iA9CXLTVEKc2UXPw/DeIIh8WKTrkQmF5i8jOB65iVUdB2r6p9t1GLglp04iYSJo UVI8Y1UwTc8rQuA6upwkXItRitPlm1tM3WDeYFSzdJrT2nq1AEc/ekdyf3r8TkOzMBAN 3Ugft60T9ogh3pyUUxQ74x4Fxw+05LejadDYdPY4sEai2kHUGC79nS2n+qc2yi0eAV6u EIFcvM5PgPS08bGDmmUse3i2UmNnhKfnM5393QA5DYlKiMM2zvOg3OV6LJ00u+kacJC+ TOkw== X-Gm-Message-State: ACrzQf11ThMeNFdEcnH7qJP3to5YY7BLeUhDsvE+MxkaKwcrUBPQBDYh sHQzPAMyWY0JgB5qryki6Rk8xQ== X-Google-Smtp-Source: AMsMyM6CkmiCCcTDEmqqg3phmocOEK9DYS9n4OK36Nn3CPJr+7tPgqt+UuGpUhMsVbAUqY3Cl6fkWA== X-Received: by 2002:a7b:c30a:0:b0:3c1:bf95:e17b with SMTP id k10-20020a7bc30a000000b003c1bf95e17bmr23388581wmj.31.1666619452366; Mon, 24 Oct 2022 06:50:52 -0700 (PDT) Received: from localhost.localdomain ([213.146.143.36]) by smtp.gmail.com with ESMTPSA id t8-20020a0560001a4800b00236644228cfsm5764108wry.43.2022.10.24.06.50.51 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 24 Oct 2022 06:50:52 -0700 (PDT) From: Zhangfei Gao To: Akhil Goyal , Declan Doherty , Fan Zhang , Ashish Gupta , Ray Kinsella , "thomas@monjalon.net" Cc: dev@dpdk.org, acc@openeuler.org, Zhangfei Gao Subject: [PATCH resend v5 4/6] crypto/uadk: support cipher algorithms Date: Mon, 24 Oct 2022 21:44:07 +0800 Message-Id: <20221024134409.1896776-5-zhangfei.gao@linaro.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20221024134409.1896776-1-zhangfei.gao@linaro.org> References: <20221024134409.1896776-1-zhangfei.gao@linaro.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Cipher algorithms: * ``RTE_CRYPTO_CIPHER_AES_ECB`` * ``RTE_CRYPTO_CIPHER_AES_CBC`` * ``RTE_CRYPTO_CIPHER_AES_XTS`` * ``RTE_CRYPTO_CIPHER_DES_CBC`` Signed-off-by: Zhangfei Gao --- doc/guides/cryptodevs/features/uadk.ini | 10 + doc/guides/cryptodevs/uadk.rst | 6 + drivers/crypto/uadk/uadk_crypto_pmd.c | 327 +++++++++++++++++++++++- 3 files changed, 338 insertions(+), 5 deletions(-) diff --git a/doc/guides/cryptodevs/features/uadk.ini b/doc/guides/cryptodevs/features/uadk.ini index df5ad40e3d..005e08ac8d 100644 --- a/doc/guides/cryptodevs/features/uadk.ini +++ b/doc/guides/cryptodevs/features/uadk.ini @@ -4,12 +4,22 @@ ; Refer to default.ini for the full list of available PMD features. ; [Features] +Symmetric crypto = Y HW Accelerated = Y ; ; Supported crypto algorithms of the 'uadk' crypto driver. ; [Cipher] +AES CBC (128) = Y +AES CBC (192) = Y +AES CBC (256) = Y +AES ECB (128) = Y +AES ECB (192) = Y +AES ECB (256) = Y +AES XTS (128) = Y +AES XTS (256) = Y +DES CBC = Y ; ; Supported authentication algorithms of the 'uadk' crypto driver. 
diff --git a/doc/guides/cryptodevs/uadk.rst b/doc/guides/cryptodevs/uadk.rst index 1dfaab73c8..7b5c1af144 100644 --- a/doc/guides/cryptodevs/uadk.rst +++ b/doc/guides/cryptodevs/uadk.rst @@ -15,6 +15,12 @@ Features UADK crypto PMD has support for: +Cipher algorithms: + +* ``RTE_CRYPTO_CIPHER_AES_ECB`` +* ``RTE_CRYPTO_CIPHER_AES_CBC`` +* ``RTE_CRYPTO_CIPHER_AES_XTS`` +* ``RTE_CRYPTO_CIPHER_DES_CBC`` Test steps ---------- diff --git a/drivers/crypto/uadk/uadk_crypto_pmd.c b/drivers/crypto/uadk/uadk_crypto_pmd.c index 84cb1dab25..f9699e6b7e 100644 --- a/drivers/crypto/uadk/uadk_crypto_pmd.c +++ b/drivers/crypto/uadk/uadk_crypto_pmd.c @@ -30,12 +30,35 @@ struct uadk_qp { uint8_t temp_digest[DIGEST_LENGTH_MAX]; } __rte_cache_aligned; +enum uadk_chain_order { + UADK_CHAIN_ONLY_CIPHER, + UADK_CHAIN_NOT_SUPPORTED +}; + +struct uadk_crypto_session { + handle_t handle_cipher; + enum uadk_chain_order chain_order; + + /* IV parameters */ + struct { + uint16_t length; + uint16_t offset; + } iv; + + /* Cipher Parameters */ + struct { + enum rte_crypto_cipher_operation direction; + struct wd_cipher_req req; + } cipher; +} __rte_cache_aligned; + enum uadk_crypto_version { UADK_CRYPTO_V2, UADK_CRYPTO_V3, }; struct uadk_crypto_priv { + bool env_cipher_init; enum uadk_crypto_version version; } __rte_cache_aligned; @@ -49,6 +72,86 @@ RTE_LOG_REGISTER_DEFAULT(uadk_crypto_logtype, INFO); ## __VA_ARGS__) static const struct rte_cryptodev_capabilities uadk_crypto_v2_capabilities[] = { + { /* AES ECB */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, + {.cipher = { + .algo = RTE_CRYPTO_CIPHER_AES_ECB, + .block_size = 16, + .key_size = { + .min = 16, + .max = 32, + .increment = 8 + }, + .iv_size = { + .min = 0, + .max = 0, + .increment = 0 + } + }, } + }, } + }, + { /* AES CBC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, + {.cipher = { + .algo = RTE_CRYPTO_CIPHER_AES_CBC, + .block_size = 16, + .key_size = { + .min = 16, + .max = 32, + .increment = 8 + }, + .iv_size = { + .min = 16, + .max = 16, + .increment = 0 + } + }, } + }, } + }, + { /* AES XTS */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, + {.cipher = { + .algo = RTE_CRYPTO_CIPHER_AES_XTS, + .block_size = 1, + .key_size = { + .min = 32, + .max = 64, + .increment = 32 + }, + .iv_size = { + .min = 0, + .max = 0, + .increment = 0 + } + }, } + }, } + }, + { /* DES CBC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, + {.cipher = { + .algo = RTE_CRYPTO_CIPHER_DES_CBC, + .block_size = 8, + .key_size = { + .min = 8, + .max = 8, + .increment = 0 + }, + .iv_size = { + .min = 8, + .max = 8, + .increment = 0 + } + }, } + }, } + }, /* End of capabilities */ RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() }; @@ -76,8 +179,15 @@ uadk_crypto_pmd_stop(struct rte_cryptodev *dev __rte_unused) /* Close device */ static int -uadk_crypto_pmd_close(struct rte_cryptodev *dev __rte_unused) +uadk_crypto_pmd_close(struct rte_cryptodev *dev) { + struct uadk_crypto_priv *priv = dev->data->dev_private; + + if (priv->env_cipher_init) { + wd_cipher_env_uninit(); + priv->env_cipher_init = false; + } + return 0; } @@ -224,6 +334,159 @@ uadk_crypto_pmd_qp_setup(struct rte_cryptodev *dev, uint16_t qp_id, return -EINVAL; } +static unsigned int +uadk_crypto_sym_session_get_size(struct rte_cryptodev *dev __rte_unused) +{ + return sizeof(struct uadk_crypto_session); +} + +static enum uadk_chain_order 
+uadk_get_chain_order(const struct rte_crypto_sym_xform *xform) +{ + enum uadk_chain_order res = UADK_CHAIN_NOT_SUPPORTED; + + if (xform != NULL) { + if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER) { + if (xform->next == NULL) + res = UADK_CHAIN_ONLY_CIPHER; + } + } + + return res; +} + +static int +uadk_set_session_cipher_parameters(struct rte_cryptodev *dev, + struct uadk_crypto_session *sess, + struct rte_crypto_sym_xform *xform) +{ + struct uadk_crypto_priv *priv = dev->data->dev_private; + struct rte_crypto_cipher_xform *cipher = &xform->cipher; + struct wd_cipher_sess_setup setup = {0}; + struct sched_params params = {0}; + int ret; + + if (!priv->env_cipher_init) { + ret = wd_cipher_env_init(NULL); + if (ret < 0) + return -EINVAL; + priv->env_cipher_init = true; + } + + sess->cipher.direction = cipher->op; + sess->iv.offset = cipher->iv.offset; + sess->iv.length = cipher->iv.length; + + switch (cipher->algo) { + /* Cover supported cipher algorithms */ + case RTE_CRYPTO_CIPHER_AES_CTR: + setup.alg = WD_CIPHER_AES; + setup.mode = WD_CIPHER_CTR; + sess->cipher.req.out_bytes = 64; + break; + case RTE_CRYPTO_CIPHER_AES_ECB: + setup.alg = WD_CIPHER_AES; + setup.mode = WD_CIPHER_ECB; + sess->cipher.req.out_bytes = 16; + break; + case RTE_CRYPTO_CIPHER_AES_CBC: + setup.alg = WD_CIPHER_AES; + setup.mode = WD_CIPHER_CBC; + if (cipher->key.length == 16) + sess->cipher.req.out_bytes = 16; + else + sess->cipher.req.out_bytes = 64; + break; + case RTE_CRYPTO_CIPHER_AES_XTS: + setup.alg = WD_CIPHER_AES; + setup.mode = WD_CIPHER_XTS; + if (cipher->key.length == 16) + sess->cipher.req.out_bytes = 32; + else + sess->cipher.req.out_bytes = 512; + break; + default: + ret = -ENOTSUP; + goto env_uninit; + } + + params.numa_id = -1; /* choose nearby numa node */ + setup.sched_param = ¶ms; + sess->handle_cipher = wd_cipher_alloc_sess(&setup); + if (!sess->handle_cipher) { + UADK_LOG(ERR, "uadk failed to alloc session!\n"); + ret = -EINVAL; + goto env_uninit; + } + + ret = wd_cipher_set_key(sess->handle_cipher, cipher->key.data, cipher->key.length); + if (ret) { + wd_cipher_free_sess(sess->handle_cipher); + UADK_LOG(ERR, "uadk failed to set key!\n"); + ret = -EINVAL; + goto env_uninit; + } + + return 0; + +env_uninit: + wd_cipher_env_uninit(); + priv->env_cipher_init = false; + return ret; +} + +static int +uadk_crypto_sym_session_configure(struct rte_cryptodev *dev, + struct rte_crypto_sym_xform *xform, + struct rte_cryptodev_sym_session *session) +{ + struct rte_crypto_sym_xform *cipher_xform = NULL; + struct uadk_crypto_session *sess = CRYPTODEV_GET_SYM_SESS_PRIV(session); + int ret; + + if (unlikely(!sess)) { + UADK_LOG(ERR, "Session not available"); + return -EINVAL; + } + + sess->chain_order = uadk_get_chain_order(xform); + switch (sess->chain_order) { + case UADK_CHAIN_ONLY_CIPHER: + cipher_xform = xform; + break; + default: + return -ENOTSUP; + } + + if (cipher_xform) { + ret = uadk_set_session_cipher_parameters(dev, sess, cipher_xform); + if (ret != 0) { + UADK_LOG(ERR, + "Invalid/unsupported cipher parameters"); + return ret; + } + } + + return 0; +} + +static void +uadk_crypto_sym_session_clear(struct rte_cryptodev *dev __rte_unused, + struct rte_cryptodev_sym_session *session) +{ + struct uadk_crypto_session *sess = CRYPTODEV_GET_SYM_SESS_PRIV(session); + + if (unlikely(sess == NULL)) { + UADK_LOG(ERR, "Session not available"); + return; + } + + if (sess->handle_cipher) { + wd_cipher_free_sess(sess->handle_cipher); + sess->handle_cipher = 0; + } +} + static struct rte_cryptodev_ops 
uadk_crypto_pmd_ops = { .dev_configure = uadk_crypto_pmd_config, .dev_start = uadk_crypto_pmd_start, @@ -234,16 +497,51 @@ static struct rte_cryptodev_ops uadk_crypto_pmd_ops = { .dev_infos_get = uadk_crypto_pmd_info_get, .queue_pair_setup = uadk_crypto_pmd_qp_setup, .queue_pair_release = uadk_crypto_pmd_qp_release, - .sym_session_get_size = NULL, - .sym_session_configure = NULL, - .sym_session_clear = NULL, + .sym_session_get_size = uadk_crypto_sym_session_get_size, + .sym_session_configure = uadk_crypto_sym_session_configure, + .sym_session_clear = uadk_crypto_sym_session_clear, }; +static void +uadk_process_cipher_op(struct rte_crypto_op *op, + struct uadk_crypto_session *sess, + struct rte_mbuf *msrc, struct rte_mbuf *mdst) +{ + uint32_t off = op->sym->cipher.data.offset; + int ret; + + if (!sess) { + op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS; + return; + } + + sess->cipher.req.src = rte_pktmbuf_mtod_offset(msrc, uint8_t *, off); + sess->cipher.req.in_bytes = op->sym->cipher.data.length; + sess->cipher.req.dst = rte_pktmbuf_mtod_offset(mdst, uint8_t *, off); + sess->cipher.req.out_buf_bytes = sess->cipher.req.in_bytes; + sess->cipher.req.iv_bytes = sess->iv.length; + sess->cipher.req.iv = rte_crypto_op_ctod_offset(op, uint8_t *, + sess->iv.offset); + if (sess->cipher.direction == RTE_CRYPTO_CIPHER_OP_ENCRYPT) + sess->cipher.req.op_type = WD_CIPHER_ENCRYPTION; + else + sess->cipher.req.op_type = WD_CIPHER_DECRYPTION; + + do { + ret = wd_do_cipher_sync(sess->handle_cipher, &sess->cipher.req); + } while (ret == -WD_EBUSY); + + if (ret) + op->status = RTE_CRYPTO_OP_STATUS_ERROR; +} + static uint16_t uadk_crypto_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops, uint16_t nb_ops) { struct uadk_qp *qp = queue_pair; + struct uadk_crypto_session *sess = NULL; + struct rte_mbuf *msrc, *mdst; struct rte_crypto_op *op; uint16_t enqd = 0; int i, ret; @@ -251,6 +549,23 @@ uadk_crypto_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops, for (i = 0; i < nb_ops; i++) { op = ops[i]; op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED; + msrc = op->sym->m_src; + mdst = op->sym->m_dst ? 
op->sym->m_dst : op->sym->m_src; + + if (op->sess_type == RTE_CRYPTO_OP_WITH_SESSION) { + if (likely(op->sym->session != NULL)) + sess = CRYPTODEV_GET_SYM_SESS_PRIV( + op->sym->session); + } + + switch (sess->chain_order) { + case UADK_CHAIN_ONLY_CIPHER: + uadk_process_cipher_op(op, sess, msrc, mdst); + break; + default: + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + break; + } if (op->status == RTE_CRYPTO_OP_STATUS_NOT_PROCESSED) op->status = RTE_CRYPTO_OP_STATUS_SUCCESS; @@ -326,7 +641,9 @@ uadk_cryptodev_probe(struct rte_vdev_device *vdev) dev->driver_id = uadk_cryptodev_driver_id; dev->dequeue_burst = uadk_crypto_dequeue_burst; dev->enqueue_burst = uadk_crypto_enqueue_burst; - dev->feature_flags = RTE_CRYPTODEV_FF_HW_ACCELERATED; + dev->feature_flags = RTE_CRYPTODEV_FF_HW_ACCELERATED | + RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO | + RTE_CRYPTODEV_FF_SYM_SESSIONLESS; priv = dev->data->dev_private; priv->version = version; From patchwork Mon Oct 24 13:44:08 2022 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Zhangfei Gao X-Patchwork-Id: 119030 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 65641A034C; Mon, 24 Oct 2022 15:51:20 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 515B842BCC; Mon, 24 Oct 2022 15:50:58 +0200 (CEST) Received: from mail-wm1-f54.google.com (mail-wm1-f54.google.com [209.85.128.54]) by mails.dpdk.org (Postfix) with ESMTP id 638A842BB4 for ; Mon, 24 Oct 2022 15:50:53 +0200 (CEST) Received: by mail-wm1-f54.google.com with SMTP id l16-20020a05600c4f1000b003c6c0d2a445so6670282wmq.4 for ; Mon, 24 Oct 2022 06:50:53 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:from:to:cc:subject:date :message-id:reply-to; bh=/JBYhvf0yJ5Fb2ws3Ja0zSA7JAY69V1MEUf3sQRg0t4=; b=bdlgwn76x+HTg2CsATzHjh++tpJUwi2gIFfTMUT+QtuOMVkGZ90lFYnQO6HlLTIOiy DY0R2skksBVaDwnye0KLjjb2c0gXNAfkW1LDLlybQi3genKhD8lkay22IKEcaVs6oL/e 2ubnkesfBaWSOwpXYoa8yuB0cgrJd/dxpHfZaqa436JvFb5PX7n5CXBjSkiEmBaU6OA8 yW0uarYeSZZzTIcZ+Zw3ypqbh/K7CZwwQTR3ChS1OPxKeiAMj/wdh7Lj1avnpOrbCQh5 KbUcGz9HQ4hv7n9QCSBCvMuYU7owfi5ALFXQ9u0DdThs97lc3YzifTfdHdZxnPTddfBH kHOg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20210112; h=content-transfer-encoding:mime-version:references:in-reply-to :message-id:date:subject:cc:to:from:x-gm-message-state:from:to:cc :subject:date:message-id:reply-to; bh=/JBYhvf0yJ5Fb2ws3Ja0zSA7JAY69V1MEUf3sQRg0t4=; b=SkD4wUbqjD2+w2ph7H/TugW6XBfpBeJgg6VZvLUxe3kkIgNwY0WhZpY2rBREBhiZ73 Rh2ojTh90lwPBW4XJ9K3/wJW0Uf3X5UZ+Pxu2cLXib0n5Q/clWqFqFIo9iUCDi2F+WEn nbeiTUZyWCyK0I7ueMNpf4fRK3gjmmchjVBkLmF2lUS5TGYtzntKhmgLP5YGWTaEfVfI lmv64pJhXoom8s0ul8w4gNCjCZfa6LvIYlUwQdiiW7cvWVRwmyM8/qyNl0b5AIhbCOI6 Sxmz28f5PJzcWrbHpcn4s6dcjckX+kGv3ROj9FmrZIuUynTjyL27iLSdTL78hSEjqsmc fxqA== X-Gm-Message-State: ACrzQf0ScwpiKgArIYtU1H4mK5nBJtqNK/CoZTpU6+lUE/WuSqVtK3wz czQCo7dCwunofKmxtM0ZS8J9Xg== X-Google-Smtp-Source: AMsMyM6T2FEC4oLIn3mT+gyLMYMTuOB6IeF6oeB6PKoxAyNznvRDuUHoHODH6z1gHuT3KLKNO6Rt6Q== X-Received: by 2002:a05:600c:4e8f:b0:3c9:9657:9c0a with SMTP id f15-20020a05600c4e8f00b003c996579c0amr8531335wmq.157.1666619453054; Mon, 24 Oct 2022 06:50:53 -0700 (PDT) Received: 
from localhost.localdomain ([213.146.143.36]) by smtp.gmail.com with ESMTPSA id t8-20020a0560001a4800b00236644228cfsm5764108wry.43.2022.10.24.06.50.52 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Mon, 24 Oct 2022 06:50:52 -0700 (PDT) From: Zhangfei Gao To: Akhil Goyal , Declan Doherty , Fan Zhang , Ashish Gupta , Ray Kinsella , "thomas@monjalon.net" Cc: dev@dpdk.org, acc@openeuler.org, Zhangfei Gao Subject: [PATCH resend v5 5/6] crypto/uadk: support auth algorithms Date: Mon, 24 Oct 2022 21:44:08 +0800 Message-Id: <20221024134409.1896776-6-zhangfei.gao@linaro.org> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20221024134409.1896776-1-zhangfei.gao@linaro.org> References: <20221024134409.1896776-1-zhangfei.gao@linaro.org> MIME-Version: 1.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Hash algorithms: * ``RTE_CRYPTO_AUTH_MD5`` * ``RTE_CRYPTO_AUTH_MD5_HMAC`` * ``RTE_CRYPTO_AUTH_SHA1`` * ``RTE_CRYPTO_AUTH_SHA1_HMAC`` * ``RTE_CRYPTO_AUTH_SHA224`` * ``RTE_CRYPTO_AUTH_SHA224_HMAC`` * ``RTE_CRYPTO_AUTH_SHA256`` * ``RTE_CRYPTO_AUTH_SHA256_HMAC`` * ``RTE_CRYPTO_AUTH_SHA384`` * ``RTE_CRYPTO_AUTH_SHA384_HMAC`` * ``RTE_CRYPTO_AUTH_SHA512`` * ``RTE_CRYPTO_AUTH_SHA512_HMAC`` Signed-off-by: Zhangfei Gao --- doc/guides/cryptodevs/features/uadk.ini | 12 + doc/guides/cryptodevs/uadk.rst | 15 + drivers/crypto/uadk/uadk_crypto_pmd.c | 459 ++++++++++++++++++++++++ 3 files changed, 486 insertions(+) diff --git a/doc/guides/cryptodevs/features/uadk.ini b/doc/guides/cryptodevs/features/uadk.ini index 005e08ac8d..2e8a37a2b3 100644 --- a/doc/guides/cryptodevs/features/uadk.ini +++ b/doc/guides/cryptodevs/features/uadk.ini @@ -25,6 +25,18 @@ DES CBC = Y ; Supported authentication algorithms of the 'uadk' crypto driver. ; [Auth] +MD5 = Y +MD5 HMAC = Y +SHA1 = Y +SHA1 HMAC = Y +SHA224 = Y +SHA224 HMAC = Y +SHA256 = Y +SHA256 HMAC = Y +SHA384 = Y +SHA384 HMAC = Y +SHA512 = Y +SHA512 HMAC = Y ; ; Supported AEAD algorithms of the 'uadk' crypto driver. 
diff --git a/doc/guides/cryptodevs/uadk.rst b/doc/guides/cryptodevs/uadk.rst index 7b5c1af144..cae23c6b91 100644 --- a/doc/guides/cryptodevs/uadk.rst +++ b/doc/guides/cryptodevs/uadk.rst @@ -22,6 +22,21 @@ Cipher algorithms: * ``RTE_CRYPTO_CIPHER_AES_XTS`` * ``RTE_CRYPTO_CIPHER_DES_CBC`` +Hash algorithms: + +* ``RTE_CRYPTO_AUTH_MD5`` +* ``RTE_CRYPTO_AUTH_MD5_HMAC`` +* ``RTE_CRYPTO_AUTH_SHA1`` +* ``RTE_CRYPTO_AUTH_SHA1_HMAC`` +* ``RTE_CRYPTO_AUTH_SHA224`` +* ``RTE_CRYPTO_AUTH_SHA224_HMAC`` +* ``RTE_CRYPTO_AUTH_SHA256`` +* ``RTE_CRYPTO_AUTH_SHA256_HMAC`` +* ``RTE_CRYPTO_AUTH_SHA384`` +* ``RTE_CRYPTO_AUTH_SHA384_HMAC`` +* ``RTE_CRYPTO_AUTH_SHA512`` +* ``RTE_CRYPTO_AUTH_SHA512_HMAC`` + Test steps ---------- diff --git a/drivers/crypto/uadk/uadk_crypto_pmd.c b/drivers/crypto/uadk/uadk_crypto_pmd.c index f9699e6b7e..58ae5ca32a 100644 --- a/drivers/crypto/uadk/uadk_crypto_pmd.c +++ b/drivers/crypto/uadk/uadk_crypto_pmd.c @@ -32,11 +32,15 @@ struct uadk_qp { enum uadk_chain_order { UADK_CHAIN_ONLY_CIPHER, + UADK_CHAIN_ONLY_AUTH, + UADK_CHAIN_CIPHER_AUTH, + UADK_CHAIN_AUTH_CIPHER, UADK_CHAIN_NOT_SUPPORTED }; struct uadk_crypto_session { handle_t handle_cipher; + handle_t handle_digest; enum uadk_chain_order chain_order; /* IV parameters */ @@ -50,6 +54,13 @@ struct uadk_crypto_session { enum rte_crypto_cipher_operation direction; struct wd_cipher_req req; } cipher; + + /* Authentication Parameters */ + struct { + struct wd_digest_req req; + enum rte_crypto_auth_operation operation; + uint16_t digest_length; + } auth; } __rte_cache_aligned; enum uadk_crypto_version { @@ -59,6 +70,7 @@ enum uadk_crypto_version { struct uadk_crypto_priv { bool env_cipher_init; + bool env_auth_init; enum uadk_crypto_version version; } __rte_cache_aligned; @@ -72,6 +84,252 @@ RTE_LOG_REGISTER_DEFAULT(uadk_crypto_logtype, INFO); ## __VA_ARGS__) static const struct rte_cryptodev_capabilities uadk_crypto_v2_capabilities[] = { + { /* MD5 HMAC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_MD5_HMAC, + .block_size = 64, + .key_size = { + .min = 0, + .max = 0, + .increment = 0 + }, + .digest_size = { + .min = 16, + .max = 16, + .increment = 0 + }, + .iv_size = { 0 } + }, } + }, } + }, + { /* MD5 */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_MD5, + .block_size = 64, + .key_size = { + .min = 0, + .max = 0, + .increment = 0 + }, + .digest_size = { + .min = 16, + .max = 16, + .increment = 0 + }, + }, } + }, } + }, + { /* SHA1 HMAC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA1_HMAC, + .block_size = 64, + .key_size = { + .min = 0, + .max = 0, + .increment = 0 + }, + .digest_size = { + .min = 20, + .max = 20, + .increment = 0 + }, + .iv_size = { 0 } + }, } + }, } + }, + { /* SHA1 */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA1, + .block_size = 64, + .key_size = { + .min = 0, + .max = 0, + .increment = 0 + }, + .digest_size = { + .min = 20, + .max = 20, + .increment = 0 + }, + }, } + }, } + }, + { /* SHA224 HMAC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA224_HMAC, + .block_size = 64, + .key_size = { + .min = 0, + .max = 0, + .increment = 0 + }, + .digest_size = { + .min = 28, + .max = 
28, + .increment = 0 + }, + .iv_size = { 0 } + }, } + }, } + }, + { /* SHA224 */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA224, + .block_size = 64, + .key_size = { + .min = 0, + .max = 0, + .increment = 0 + }, + .digest_size = { + .min = 28, + .max = 28, + .increment = 0 + }, + }, } + }, } + }, + { /* SHA256 HMAC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA256_HMAC, + .block_size = 64, + .key_size = { + .min = 0, + .max = 0, + .increment = 0 + }, + .digest_size = { + .min = 32, + .max = 32, + .increment = 0 + }, + .iv_size = { 0 } + }, } + }, } + }, + { /* SHA256 */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA256, + .block_size = 64, + .key_size = { + .min = 0, + .max = 0, + .increment = 0 + }, + .digest_size = { + .min = 32, + .max = 32, + .increment = 0 + }, + }, } + }, } + }, + { /* SHA384 HMAC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA384_HMAC, + .block_size = 128, + .key_size = { + .min = 0, + .max = 0, + .increment = 0 + }, + .digest_size = { + .min = 48, + .max = 48, + .increment = 0 + }, + .iv_size = { 0 } + }, } + }, } + }, + { /* SHA384 */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA384, + .block_size = 64, + .key_size = { + .min = 0, + .max = 0, + .increment = 0 + }, + .digest_size = { + .min = 48, + .max = 48, + .increment = 0 + }, + }, } + }, } + }, + { /* SHA512 HMAC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA512_HMAC, + .block_size = 128, + .key_size = { + .min = 0, + .max = 0, + .increment = 0 + }, + .digest_size = { + .min = 64, + .max = 64, + .increment = 0 + }, + .iv_size = { 0 } + }, } + }, } + }, + { /* SHA512 */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA512, + .block_size = 128, + .key_size = { + .min = 0, + .max = 0, + .increment = 0 + }, + .digest_size = { + .min = 64, + .max = 64, + .increment = 0 + }, + }, } + }, } + }, { /* AES ECB */ .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, {.sym = { @@ -188,6 +446,11 @@ uadk_crypto_pmd_close(struct rte_cryptodev *dev) priv->env_cipher_init = false; } + if (priv->env_auth_init) { + wd_digest_env_uninit(); + priv->env_auth_init = false; + } + return 0; } @@ -346,9 +609,19 @@ uadk_get_chain_order(const struct rte_crypto_sym_xform *xform) enum uadk_chain_order res = UADK_CHAIN_NOT_SUPPORTED; if (xform != NULL) { + if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) { + if (xform->next == NULL) + res = UADK_CHAIN_ONLY_AUTH; + else if (xform->next->type == + RTE_CRYPTO_SYM_XFORM_CIPHER) + res = UADK_CHAIN_AUTH_CIPHER; + } + if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER) { if (xform->next == NULL) res = UADK_CHAIN_ONLY_CIPHER; + else if (xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH) + res = UADK_CHAIN_CIPHER_AUTH; } } @@ -435,12 +708,119 @@ uadk_set_session_cipher_parameters(struct rte_cryptodev *dev, return ret; } +/* Set session auth parameters */ +static int +uadk_set_session_auth_parameters(struct rte_cryptodev *dev, + struct uadk_crypto_session *sess, + struct rte_crypto_sym_xform *xform) +{ + 
struct uadk_crypto_priv *priv = dev->data->dev_private; + struct wd_digest_sess_setup setup = {0}; + struct sched_params params = {0}; + int ret; + + if (!priv->env_auth_init) { + ret = wd_digest_env_init(NULL); + if (ret < 0) + return -EINVAL; + priv->env_auth_init = true; + } + + sess->auth.operation = xform->auth.op; + sess->auth.digest_length = xform->auth.digest_length; + + switch (xform->auth.algo) { + case RTE_CRYPTO_AUTH_MD5: + case RTE_CRYPTO_AUTH_MD5_HMAC: + setup.mode = (xform->auth.algo == RTE_CRYPTO_AUTH_MD5) ? + WD_DIGEST_NORMAL : WD_DIGEST_HMAC; + setup.alg = WD_DIGEST_MD5; + sess->auth.req.out_buf_bytes = 16; + sess->auth.req.out_bytes = 16; + break; + case RTE_CRYPTO_AUTH_SHA1: + case RTE_CRYPTO_AUTH_SHA1_HMAC: + setup.mode = (xform->auth.algo == RTE_CRYPTO_AUTH_SHA1) ? + WD_DIGEST_NORMAL : WD_DIGEST_HMAC; + setup.alg = WD_DIGEST_SHA1; + sess->auth.req.out_buf_bytes = 20; + sess->auth.req.out_bytes = 20; + break; + case RTE_CRYPTO_AUTH_SHA224: + case RTE_CRYPTO_AUTH_SHA224_HMAC: + setup.mode = (xform->auth.algo == RTE_CRYPTO_AUTH_SHA224) ? + WD_DIGEST_NORMAL : WD_DIGEST_HMAC; + setup.alg = WD_DIGEST_SHA224; + sess->auth.req.out_buf_bytes = 28; + sess->auth.req.out_bytes = 28; + break; + case RTE_CRYPTO_AUTH_SHA256: + case RTE_CRYPTO_AUTH_SHA256_HMAC: + setup.mode = (xform->auth.algo == RTE_CRYPTO_AUTH_SHA256) ? + WD_DIGEST_NORMAL : WD_DIGEST_HMAC; + setup.alg = WD_DIGEST_SHA256; + sess->auth.req.out_buf_bytes = 32; + sess->auth.req.out_bytes = 32; + break; + case RTE_CRYPTO_AUTH_SHA384: + case RTE_CRYPTO_AUTH_SHA384_HMAC: + setup.mode = (xform->auth.algo == RTE_CRYPTO_AUTH_SHA384) ? + WD_DIGEST_NORMAL : WD_DIGEST_HMAC; + setup.alg = WD_DIGEST_SHA384; + sess->auth.req.out_buf_bytes = 48; + sess->auth.req.out_bytes = 48; + break; + case RTE_CRYPTO_AUTH_SHA512: + case RTE_CRYPTO_AUTH_SHA512_HMAC: + setup.mode = (xform->auth.algo == RTE_CRYPTO_AUTH_SHA512) ? 
+			     WD_DIGEST_NORMAL : WD_DIGEST_HMAC;
+		setup.alg = WD_DIGEST_SHA512;
+		sess->auth.req.out_buf_bytes = 64;
+		sess->auth.req.out_bytes = 64;
+		break;
+	default:
+		ret = -ENOTSUP;
+		goto env_uninit;
+	}
+
+	params.numa_id = -1;	/* choose nearby numa node */
+	setup.sched_param = &params;
+	sess->handle_digest = wd_digest_alloc_sess(&setup);
+	if (!sess->handle_digest) {
+		UADK_LOG(ERR, "uadk failed to alloc session!\n");
+		ret = -EINVAL;
+		goto env_uninit;
+	}
+
+	/* if mode is HMAC, the key must be set */
+	if (setup.mode == WD_DIGEST_HMAC) {
+		ret = wd_digest_set_key(sess->handle_digest,
+					xform->auth.key.data,
+					xform->auth.key.length);
+		if (ret) {
+			UADK_LOG(ERR, "uadk failed to set digest key!\n");
+			wd_digest_free_sess(sess->handle_digest);
+			sess->handle_digest = 0;
+			ret = -EINVAL;
+			goto env_uninit;
+		}
+	}
+
+	return 0;
+
+env_uninit:
+	wd_digest_env_uninit();
+	priv->env_auth_init = false;
+	return ret;
+}
+
 static int
 uadk_crypto_sym_session_configure(struct rte_cryptodev *dev,
 				  struct rte_crypto_sym_xform *xform,
 				  struct rte_cryptodev_sym_session *session)
 {
 	struct rte_crypto_sym_xform *cipher_xform = NULL;
+	struct rte_crypto_sym_xform *auth_xform = NULL;
 	struct uadk_crypto_session *sess = CRYPTODEV_GET_SYM_SESS_PRIV(session);
 	int ret;
 
@@ -454,6 +834,17 @@ uadk_crypto_sym_session_configure(struct rte_cryptodev *dev,
 	case UADK_CHAIN_ONLY_CIPHER:
 		cipher_xform = xform;
 		break;
+	case UADK_CHAIN_ONLY_AUTH:
+		auth_xform = xform;
+		break;
+	case UADK_CHAIN_CIPHER_AUTH:
+		cipher_xform = xform;
+		auth_xform = xform->next;
+		break;
+	case UADK_CHAIN_AUTH_CIPHER:
+		auth_xform = xform;
+		cipher_xform = xform->next;
+		break;
 	default:
 		return -ENOTSUP;
 	}
@@ -467,6 +858,15 @@ uadk_crypto_sym_session_configure(struct rte_cryptodev *dev,
 		}
 	}
 
+	if (auth_xform) {
+		ret = uadk_set_session_auth_parameters(dev, sess, auth_xform);
+		if (ret != 0) {
+			UADK_LOG(ERR,
+				 "Invalid/unsupported auth parameters");
+			return ret;
+		}
+	}
+
 	return 0;
 }
 
@@ -485,6 +885,11 @@ uadk_crypto_sym_session_clear(struct rte_cryptodev *dev __rte_unused,
 		wd_cipher_free_sess(sess->handle_cipher);
 		sess->handle_cipher = 0;
 	}
+
+	if (sess->handle_digest) {
+		wd_digest_free_sess(sess->handle_digest);
+		sess->handle_digest = 0;
+	}
 }
 
 static struct rte_cryptodev_ops uadk_crypto_pmd_ops = {
@@ -535,6 +940,49 @@ uadk_process_cipher_op(struct rte_crypto_op *op,
 		op->status = RTE_CRYPTO_OP_STATUS_ERROR;
 }
 
+static void
+uadk_process_auth_op(struct uadk_qp *qp, struct rte_crypto_op *op,
+		     struct uadk_crypto_session *sess,
+		     struct rte_mbuf *msrc, struct rte_mbuf *mdst)
+{
+	uint32_t srclen = op->sym->auth.data.length;
+	uint32_t off = op->sym->auth.data.offset;
+	uint8_t *dst = qp->temp_digest;
+	int ret;
+
+	if (!sess) {
+		op->status = RTE_CRYPTO_OP_STATUS_INVALID_ARGS;
+		return;
+	}
+
+	sess->auth.req.in = rte_pktmbuf_mtod_offset(msrc, uint8_t *, off);
+	sess->auth.req.in_bytes = srclen;
+	sess->auth.req.out = dst;
+
+	do {
+		ret = wd_do_digest_sync(sess->handle_digest, &sess->auth.req);
+	} while (ret == -WD_EBUSY);
+
+	if (sess->auth.operation == RTE_CRYPTO_AUTH_OP_VERIFY) {
+		if (memcmp(dst, op->sym->auth.digest.data,
+			   sess->auth.digest_length) != 0) {
+			op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED;
+		}
+	} else {
+		uint8_t *auth_dst;
+
+		auth_dst = op->sym->auth.digest.data;
+		if (auth_dst == NULL)
+			auth_dst = rte_pktmbuf_mtod_offset(mdst, uint8_t *,
+					op->sym->auth.data.offset +
+					op->sym->auth.data.length);
+		memcpy(auth_dst, dst, sess->auth.digest_length);
+	}
+
+	if (ret)
+		op->status = RTE_CRYPTO_OP_STATUS_ERROR;
+}
+
 static uint16_t
 uadk_crypto_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
 			  uint16_t nb_ops)
@@ -562,6 +1010,17 @@ uadk_crypto_enqueue_burst(void *queue_pair, struct rte_crypto_op **ops,
 		case UADK_CHAIN_ONLY_CIPHER:
 			uadk_process_cipher_op(op, sess, msrc, mdst);
 			break;
+		case UADK_CHAIN_ONLY_AUTH:
+			uadk_process_auth_op(qp, op, sess, msrc, mdst);
+			break;
+		case UADK_CHAIN_CIPHER_AUTH:
+			uadk_process_cipher_op(op, sess, msrc, mdst);
+			uadk_process_auth_op(qp, op, sess, mdst, mdst);
+			break;
+		case UADK_CHAIN_AUTH_CIPHER:
+			uadk_process_auth_op(qp, op, sess, msrc, mdst);
+			uadk_process_cipher_op(op, sess, msrc, mdst);
+			break;
 		default:
 			op->status = RTE_CRYPTO_OP_STATUS_ERROR;
 			break;

From patchwork Mon Oct 24 13:44:09 2022
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Zhangfei Gao
X-Patchwork-Id: 119031
X-Patchwork-Delegate: gakhil@marvell.com
From: Zhangfei Gao
To: Akhil Goyal , Declan Doherty , Fan Zhang , Ashish Gupta ,
 Ray Kinsella , "thomas@monjalon.net"
Cc: dev@dpdk.org, acc@openeuler.org, Zhangfei Gao
Subject: [PATCH resend v5 6/6] test/crypto: add cryptodev_uadk_autotest
Date: Mon, 24 Oct 2022 21:44:09 +0800
Message-Id: <20221024134409.1896776-7-zhangfei.gao@linaro.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20221024134409.1896776-1-zhangfei.gao@linaro.org>
References: <20221024134409.1896776-1-zhangfei.gao@linaro.org>
MIME-Version: 1.0
X-BeenThere: dev@dpdk.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: DPDK patches and discussions
Errors-To: dev-bounces@dpdk.org

Example:
  sudo dpdk-test --vdev=crypto_uadk --log-level=6
  RTE>>cryptodev_uadk_autotest
  RTE>>quit

Signed-off-by: Zhangfei Gao
---
 app/test/test_cryptodev.c | 7 +++++++
 app/test/test_cryptodev.h | 1 +
 2 files changed, 8 insertions(+)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index c2b33686ed..183e24f150 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -16467,6 +16467,12 @@ test_cryptodev_qat(void)
 	return run_cryptodev_testsuite(RTE_STR(CRYPTODEV_NAME_QAT_SYM_PMD));
 }
 
+static int
+test_cryptodev_uadk(void)
+{
+	return run_cryptodev_testsuite(RTE_STR(CRYPTODEV_NAME_UADK_PMD));
+}
+
 static int
 test_cryptodev_virtio(void)
 {
@@ -16810,6 +16816,7 @@ REGISTER_TEST_COMMAND(cryptodev_sw_mvsam_autotest, test_cryptodev_mrvl);
 REGISTER_TEST_COMMAND(cryptodev_dpaa2_sec_autotest, test_cryptodev_dpaa2_sec);
 REGISTER_TEST_COMMAND(cryptodev_dpaa_sec_autotest, test_cryptodev_dpaa_sec);
 REGISTER_TEST_COMMAND(cryptodev_ccp_autotest, test_cryptodev_ccp);
+REGISTER_TEST_COMMAND(cryptodev_uadk_autotest, test_cryptodev_uadk);
 REGISTER_TEST_COMMAND(cryptodev_virtio_autotest, test_cryptodev_virtio);
 REGISTER_TEST_COMMAND(cryptodev_octeontx_autotest, test_cryptodev_octeontx);
 REGISTER_TEST_COMMAND(cryptodev_caam_jr_autotest, test_cryptodev_caam_jr);
diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
index 29a7d4db2b..abd795f54a 100644
--- a/app/test/test_cryptodev.h
+++ b/app/test/test_cryptodev.h
@@ -74,6 +74,7 @@
 #define CRYPTODEV_NAME_CN9K_PMD		crypto_cn9k
 #define CRYPTODEV_NAME_CN10K_PMD	crypto_cn10k
 #define CRYPTODEV_NAME_MLX5_PMD		crypto_mlx5
+#define CRYPTODEV_NAME_UADK_PMD		crypto_uadk
 
 enum cryptodev_api_test_type {