From patchwork Thu Aug 13 17:23:37 2020
X-Patchwork-Submitter: Vikas Gupta
X-Patchwork-Id: 75502
X-Patchwork-Delegate: gakhil@marvell.com
From: Vikas Gupta
To: dev@dpdk.org, akhil.goyal@nxp.com
Cc: vikram.prakash@broadcom.com, Vikas Gupta, Raveendra Padasalagi
Date: Thu, 13 Aug 2020 22:53:37 +0530
Message-Id: <20200813172344.3228-2-vikas.gupta@broadcom.com>
In-Reply-To: <20200813172344.3228-1-vikas.gupta@broadcom.com>
References: <20200812063127.8687-1-vikas.gupta@broadcom.com> <20200813172344.3228-1-vikas.gupta@broadcom.com>
Subject: [dpdk-dev] [PATCH v2 1/8] crypto/bcmfs: add BCMFS driver
List-Id: DPDK patches and discussions

Add the Broadcom FlexSparc (FS) device creation driver, which registers
a vdev driver and creates the device on probe. Add APIs for logs,
supporting documentation and a MAINTAINERS entry.
Signed-off-by: Vikas Gupta Signed-off-by: Raveendra Padasalagi Reviewed-by: Ajit Khaparde --- MAINTAINERS | 7 + config/common_base | 5 + doc/guides/cryptodevs/bcmfs.rst | 26 ++ doc/guides/cryptodevs/index.rst | 1 + drivers/crypto/bcmfs/bcmfs_device.c | 256 ++++++++++++++++++ drivers/crypto/bcmfs/bcmfs_device.h | 40 +++ drivers/crypto/bcmfs/bcmfs_logs.c | 38 +++ drivers/crypto/bcmfs/bcmfs_logs.h | 34 +++ drivers/crypto/bcmfs/meson.build | 10 + .../crypto/bcmfs/rte_pmd_bcmfs_version.map | 3 + drivers/crypto/meson.build | 3 +- mk/rte.app.mk | 1 + 12 files changed, 423 insertions(+), 1 deletion(-) create mode 100644 doc/guides/cryptodevs/bcmfs.rst create mode 100644 drivers/crypto/bcmfs/bcmfs_device.c create mode 100644 drivers/crypto/bcmfs/bcmfs_device.h create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.c create mode 100644 drivers/crypto/bcmfs/bcmfs_logs.h create mode 100644 drivers/crypto/bcmfs/meson.build create mode 100644 drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map diff --git a/MAINTAINERS b/MAINTAINERS index 3cd402b34..7c2d7ff1b 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1099,6 +1099,13 @@ F: drivers/crypto/zuc/ F: doc/guides/cryptodevs/zuc.rst F: doc/guides/cryptodevs/features/zuc.ini +Broadcom FlexSparc +M: Vikas Gupta +M: Raveendra Padasalagi +M: Ajit Khaparde +F: drivers/crypto/bcmfs/ +F: doc/guides/cryptodevs/bcmfs.rst +F: doc/guides/cryptodevs/features/bcmfs.ini Compression Drivers ------------------- diff --git a/config/common_base b/config/common_base index f7a8824f5..21daadcdd 100644 --- a/config/common_base +++ b/config/common_base @@ -705,6 +705,11 @@ CONFIG_RTE_LIBRTE_PMD_MVSAM_CRYPTO=n # CONFIG_RTE_LIBRTE_PMD_NITROX=y +# +# Compile PMD for Broadcom crypto device +# +CONFIG_RTE_LIBRTE_PMD_BCMFS=y + # # Compile generic security library # diff --git a/doc/guides/cryptodevs/bcmfs.rst b/doc/guides/cryptodevs/bcmfs.rst new file mode 100644 index 000000000..752ce028a --- /dev/null +++ b/doc/guides/cryptodevs/bcmfs.rst @@ -0,0 +1,26 @@ +.. 
SPDX-License-Identifier: BSD-3-Clause
+   Copyright(C) 2020 Broadcom
+
+Broadcom FlexSparc Crypto Poll Mode Driver
+==========================================
+
+The FlexSparc crypto poll mode driver provides support for offloading
+cryptographic operations to Broadcom SoCs that contain a
+FlexSparc4/FlexSparc5 unit. Detailed information about these SoCs can be
+found at
+
+* https://www.broadcom.com/
+
+Installation
+------------
+
+To compile the Broadcom FlexSparc crypto PMD, make sure the
+``CONFIG_RTE_LIBRTE_PMD_BCMFS`` option is set to ``y`` in the
+config/common_base file:
+
+* ``CONFIG_RTE_LIBRTE_PMD_BCMFS=y``
+
+Initialization
+--------------
+The BCMFS crypto PMD depends upon the device nodes present under
+/sys/bus/platform/devices/fs/ on the platform.
+Each cryptodev PMD instance can be attached to one of the nodes present
+in the mentioned path.
diff --git a/doc/guides/cryptodevs/index.rst b/doc/guides/cryptodevs/index.rst
index a67ed5a28..5d7e028bd 100644
--- a/doc/guides/cryptodevs/index.rst
+++ b/doc/guides/cryptodevs/index.rst
@@ -29,3 +29,4 @@ Crypto Device Drivers
     qat
     virtio
     zuc
+    bcmfs
diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
new file mode 100644
index 000000000..47c776de6
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -0,0 +1,256 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include
+#include
+#include
+
+#include
+
+#include "bcmfs_device.h"
+#include "bcmfs_logs.h"
+
+struct bcmfs_device_attr {
+	const char name[BCMFS_MAX_PATH_LEN];
+	const char suffix[BCMFS_DEV_NAME_LEN];
+	const enum bcmfs_device_type type;
+	const uint32_t offset;
+	const uint32_t version;
+};
+
+/* BCMFS supported devices */
+static struct bcmfs_device_attr dev_table[] = {
+	{
+		.name = "fs4",
+		.suffix = "crypto_mbox",
+		.type = BCMFS_SYM_FS4,
+		.offset = 0,
+		.version = 0x76303031
+	},
+	{
+		.name = "fs5",
+		.suffix = "mbox",
+		.type = BCMFS_SYM_FS5,
+		.offset = 0,
+		.version = 0x76303032
+	},
+	{
+		/* sentinel */
+	}
+};
+
+TAILQ_HEAD(fsdev_list, bcmfs_device);
+static struct fsdev_list fsdev_list = TAILQ_HEAD_INITIALIZER(fsdev_list);
+
+static struct bcmfs_device *
+fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
+		       char *dirpath,
+		       char *devname,
+		       enum bcmfs_device_type dev_type __rte_unused)
+{
+	struct bcmfs_device *fsdev;
+
+	fsdev = calloc(1, sizeof(*fsdev));
+	if (!fsdev)
+		return NULL;
+
+	/* use >= so the terminating NUL still fits after strcpy() */
+	if (strlen(dirpath) >= sizeof(fsdev->dirname)) {
+		BCMFS_LOG(ERR, "dir path name is too long");
+		goto cleanup;
+	}
+
+	if (strlen(devname) >= sizeof(fsdev->name)) {
+		BCMFS_LOG(ERR, "devname is too long");
+		goto cleanup;
+	}
+
+	strcpy(fsdev->dirname, dirpath);
+	strcpy(fsdev->name, devname);
+
+	fsdev->vdev = vdev;
+
+	TAILQ_INSERT_TAIL(&fsdev_list, fsdev, next);
+
+	return fsdev;
+
+cleanup:
+	free(fsdev);
+
+	return NULL;
+}
+
+static struct bcmfs_device *
+find_fsdev(struct rte_vdev_device *vdev)
+{
+	struct bcmfs_device *fsdev;
+
+	TAILQ_FOREACH(fsdev, &fsdev_list, next)
+		if (fsdev->vdev == vdev)
+			return fsdev;
+
+	return NULL;
+}
+
+static void
+fsdev_release(struct bcmfs_device *fsdev)
+{
+	if (fsdev == NULL)
+		return;
+
+	TAILQ_REMOVE(&fsdev_list, fsdev, next);
+	free(fsdev);
+}
+
+static int
+cmprator(const void *a, const void *b)
+{
+	const uint32_t x = *(const uint32_t *)a;
+	const uint32_t y = *(const uint32_t *)b;
+
+	/* avoid "x - y": the subtraction can wrap for large IO addresses */
+	return (x > y) - (x < y);
+}
+
+static int
+fsdev_find_all_devs(const
char *path, const char *search,
+		    uint32_t *devs)
+{
+	DIR *dir;
+	struct dirent *entry;
+	int count = 0;
+	char addr[BCMFS_MAX_NODES][BCMFS_MAX_PATH_LEN];
+	int i;
+
+	dir = opendir(path);
+	if (dir == NULL) {
+		BCMFS_LOG(ERR, "Unable to open directory");
+		return 0;
+	}
+
+	while ((entry = readdir(dir)) != NULL) {
+		/* don't overrun the fixed-size addr[] table */
+		if (count == BCMFS_MAX_NODES)
+			break;
+		if (strstr(entry->d_name, search)) {
+			strlcpy(addr[count], entry->d_name,
+				BCMFS_MAX_PATH_LEN);
+			count++;
+		}
+	}
+
+	closedir(dir);
+
+	for (i = 0; i < count; i++)
+		devs[i] = (uint32_t)strtoul(addr[i], NULL, 16);
+	/* sort the devices based on IO addresses */
+	qsort(devs, count, sizeof(uint32_t), cmprator);
+
+	return count;
+}
+
+static bool
+fsdev_find_sub_dir(char *path, const char *search, char *output)
+{
+	DIR *dir;
+	struct dirent *entry;
+
+	dir = opendir(path);
+	if (dir == NULL) {
+		BCMFS_LOG(ERR, "Unable to open directory");
+		/* this function returns bool, so -ENODEV would read as true */
+		return false;
+	}
+
+	while ((entry = readdir(dir)) != NULL) {
+		if (!strcmp(entry->d_name, search)) {
+			strlcpy(output, entry->d_name, BCMFS_MAX_PATH_LEN);
+			closedir(dir);
+			return true;
+		}
+	}
+
+	closedir(dir);
+
+	return false;
+}
+
+
+static int
+bcmfs_vdev_probe(struct rte_vdev_device *vdev)
+{
+	struct bcmfs_device *fsdev = NULL;
+	char top_dirpath[BCMFS_MAX_PATH_LEN];
+	char sub_dirpath[BCMFS_MAX_PATH_LEN];
+	char out_dirpath[BCMFS_MAX_PATH_LEN];
+	char out_dirname[BCMFS_MAX_PATH_LEN];
+	uint32_t fsdev_dev[BCMFS_MAX_NODES];
+	enum bcmfs_device_type dtype;
+	int i = 0;
+	int dev_idx;
+	int count = 0;
+	bool found = false;
+
+	snprintf(top_dirpath, sizeof(top_dirpath), "%s",
+		 SYSFS_BCM_PLTFORM_DEVICES);
+	while (strlen(dev_table[i].name)) {
+		found = fsdev_find_sub_dir(top_dirpath,
+					   dev_table[i].name,
+					   sub_dirpath);
+		if (found)
+			break;
+		i++;
+	}
+	if (!found) {
+		BCMFS_LOG(ERR, "No supported bcmfs dev found");
+		return -ENODEV;
+	}
+
+	dev_idx = i;
+	dtype = dev_table[i].type;
+
+	snprintf(out_dirpath, sizeof(out_dirpath), "%s/%s",
+		 top_dirpath, sub_dirpath);
+	count = fsdev_find_all_devs(out_dirpath,
dev_table[dev_idx].suffix,
+				    fsdev_dev);
+	if (!count) {
+		BCMFS_LOG(ERR, "No supported bcmfs dev found");
+		return -ENODEV;
+	}
+
+	i = 0;
+	while (count) {
+		/* format the device name present in the path */
+		snprintf(out_dirname, sizeof(out_dirname), "%x.%s",
+			 fsdev_dev[i], dev_table[dev_idx].suffix);
+		fsdev = fsdev_allocate_one_dev(vdev, out_dirpath,
+					       out_dirname, dtype);
+		if (!fsdev) {
+			count--;
+			i++;
+			continue;
+		}
+		break;
+	}
+	if (fsdev == NULL) {
+		BCMFS_LOG(ERR, "All supported devs busy");
+		return -ENODEV;
+	}
+
+	return 0;
+}
+
+static int
+bcmfs_vdev_remove(struct rte_vdev_device *vdev)
+{
+	struct bcmfs_device *fsdev;
+
+	fsdev = find_fsdev(vdev);
+	if (fsdev == NULL)
+		return -ENODEV;
+
+	fsdev_release(fsdev);
+	return 0;
+}
+
+/* Register with vdev */
+static struct rte_vdev_driver rte_bcmfs_pmd = {
+	.probe = bcmfs_vdev_probe,
+	.remove = bcmfs_vdev_remove
+};
+
+RTE_PMD_REGISTER_VDEV(bcmfs_pmd,
+		      rte_bcmfs_pmd);
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
new file mode 100644
index 000000000..cc64a8df2
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */ + +#ifndef _BCMFS_DEV_H_ +#define _BCMFS_DEV_H_ + +#include + +#include + +#include "bcmfs_logs.h" + +/* max number of dev nodes */ +#define BCMFS_MAX_NODES 4 +#define BCMFS_MAX_PATH_LEN 512 +#define BCMFS_DEV_NAME_LEN 64 + +/* Path for BCM-Platform device directory */ +#define SYSFS_BCM_PLTFORM_DEVICES "/sys/bus/platform/devices" + +/* Supported devices */ +enum bcmfs_device_type { + BCMFS_SYM_FS4, + BCMFS_SYM_FS5, + BCMFS_UNKNOWN +}; + +struct bcmfs_device { + TAILQ_ENTRY(bcmfs_device) next; + /* Directory path for vfio */ + char dirname[BCMFS_MAX_PATH_LEN]; + /* BCMFS device name */ + char name[BCMFS_DEV_NAME_LEN]; + /* Parent vdev */ + struct rte_vdev_device *vdev; +}; + +#endif /* _BCMFS_DEV_H_ */ diff --git a/drivers/crypto/bcmfs/bcmfs_logs.c b/drivers/crypto/bcmfs/bcmfs_logs.c new file mode 100644 index 000000000..86f4ff3b5 --- /dev/null +++ b/drivers/crypto/bcmfs/bcmfs_logs.c @@ -0,0 +1,38 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2020 Broadcom + * All rights reserved. 
+ */ + +#include +#include + +#include "bcmfs_logs.h" + +int bcmfs_conf_logtype; +int bcmfs_dp_logtype; + +int +bcmfs_hexdump_log(uint32_t level, uint32_t logtype, const char *title, + const void *buf, unsigned int len) +{ + if (level > rte_log_get_global_level()) + return 0; + if (level > (uint32_t)(rte_log_get_level(logtype))) + return 0; + + rte_hexdump(rte_log_get_stream(), title, buf, len); + return 0; +} + +RTE_INIT(bcmfs_device_init_log) +{ + /* Configuration and general logs */ + bcmfs_conf_logtype = rte_log_register("pmd.bcmfs_config"); + if (bcmfs_conf_logtype >= 0) + rte_log_set_level(bcmfs_conf_logtype, RTE_LOG_NOTICE); + + /* data-path logs */ + bcmfs_dp_logtype = rte_log_register("pmd.bcmfs_fp"); + if (bcmfs_dp_logtype >= 0) + rte_log_set_level(bcmfs_dp_logtype, RTE_LOG_NOTICE); +} diff --git a/drivers/crypto/bcmfs/bcmfs_logs.h b/drivers/crypto/bcmfs/bcmfs_logs.h new file mode 100644 index 000000000..c03a49b75 --- /dev/null +++ b/drivers/crypto/bcmfs/bcmfs_logs.h @@ -0,0 +1,34 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2020 Broadcom + * All rights reserved. + */ + +#ifndef _BCMFS_LOGS_H_ +#define _BCMFS_LOGS_H_ + +#include + +extern int bcmfs_conf_logtype; +extern int bcmfs_dp_logtype; + +#define BCMFS_LOG(level, fmt, args...) \ + rte_log(RTE_LOG_ ## level, bcmfs_conf_logtype, \ + "%s(): " fmt "\n", __func__, ## args) + +#define BCMFS_DP_LOG(level, fmt, args...) \ + rte_log(RTE_LOG_ ## level, bcmfs_dp_logtype, \ + "%s(): " fmt "\n", __func__, ## args) + +#define BCMFS_DP_HEXDUMP_LOG(level, title, buf, len) \ + bcmfs_hexdump_log(RTE_LOG_ ## level, bcmfs_dp_logtype, title, buf, len) + +/** + * bcmfs_hexdump_log Dump out memory in a special hex dump format. + * + * The message will be sent to the stream used by the rte_log infrastructure. 
+ */ +int +bcmfs_hexdump_log(uint32_t level, uint32_t logtype, const char *heading, + const void *buf, unsigned int len); + +#endif /* _BCMFS_LOGS_H_ */ diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build new file mode 100644 index 000000000..a4bdd8ee5 --- /dev/null +++ b/drivers/crypto/bcmfs/meson.build @@ -0,0 +1,10 @@ +# SPDX-License-Identifier: BSD-3-Clause +# Copyright(C) 2020 Broadcom +# All rights reserved. +# + +deps += ['eal', 'bus_vdev'] +sources = files( + 'bcmfs_logs.c', + 'bcmfs_device.c' + ) diff --git a/drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map b/drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map new file mode 100644 index 000000000..299ae632d --- /dev/null +++ b/drivers/crypto/bcmfs/rte_pmd_bcmfs_version.map @@ -0,0 +1,3 @@ +DPDK_21.0 { + local: *; +}; diff --git a/drivers/crypto/meson.build b/drivers/crypto/meson.build index a2423507a..8e06d0533 100644 --- a/drivers/crypto/meson.build +++ b/drivers/crypto/meson.build @@ -23,7 +23,8 @@ drivers = ['aesni_gcm', 'scheduler', 'snow3g', 'virtio', - 'zuc'] + 'zuc', + 'bcmfs'] std_deps = ['cryptodev'] # cryptodev pulls in all other needed deps config_flag_fmt = 'RTE_LIBRTE_@0@_PMD' diff --git a/mk/rte.app.mk b/mk/rte.app.mk index 0ce8cf541..5e268f8c0 100644 --- a/mk/rte.app.mk +++ b/mk/rte.app.mk @@ -308,6 +308,7 @@ ifeq ($(CONFIG_RTE_LIBRTE_SECURITY),y) _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_CAAM_JR) += -lrte_pmd_caam_jr endif # CONFIG_RTE_LIBRTE_SECURITY _LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_VIRTIO_CRYPTO) += -lrte_pmd_virtio_crypto +_LDLIBS-$(CONFIG_RTE_LIBRTE_PMD_BCMFS) += -lrte_pmd_bcmfs endif # CONFIG_RTE_LIBRTE_CRYPTODEV ifeq ($(CONFIG_RTE_LIBRTE_COMPRESSDEV),y) From patchwork Thu Aug 13 17:23:38 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vikas Gupta X-Patchwork-Id: 75503 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org 
From: Vikas Gupta
To: dev@dpdk.org, akhil.goyal@nxp.com
Cc: vikram.prakash@broadcom.com, Vikas Gupta, Raveendra Padasalagi
Date: Thu, 13 Aug 2020 22:53:38 +0530
Message-Id: <20200813172344.3228-3-vikas.gupta@broadcom.com>
In-Reply-To: <20200813172344.3228-1-vikas.gupta@broadcom.com>
References: <20200812063127.8687-1-vikas.gupta@broadcom.com> <20200813172344.3228-1-vikas.gupta@broadcom.com>
Subject: [dpdk-dev] [PATCH v2 2/8] crypto/bcmfs: add vfio support
List-Id: DPDK patches and discussions

Add VFIO support for the BCMFS device.

Signed-off-by: Vikas Gupta
Signed-off-by: Raveendra Padasalagi
Reviewed-by: Ajit Khaparde
---
 drivers/crypto/bcmfs/bcmfs_device.c |   5 ++
 drivers/crypto/bcmfs/bcmfs_device.h |   6 ++
 drivers/crypto/bcmfs/bcmfs_vfio.c   | 107 ++++++++++++++++++++++++++++
 drivers/crypto/bcmfs/bcmfs_vfio.h   |  17 +++++
 drivers/crypto/bcmfs/meson.build    |   3 +-
 5 files changed, 137 insertions(+), 1 deletion(-)
 create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.c
 create mode 100644 drivers/crypto/bcmfs/bcmfs_vfio.h

diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
index 47c776de6..3b5cc9e98 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.c
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -11,6 +11,7 @@
 
 #include "bcmfs_device.h"
 #include "bcmfs_logs.h"
+#include "bcmfs_vfio.h"
 
 struct bcmfs_device_attr {
 	const char name[BCMFS_MAX_PATH_LEN];
@@ -71,6 +72,10 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev,
 
 	fsdev->vdev = vdev;
 
+	/* attach to VFIO */
+	if (bcmfs_attach_vfio(fsdev))
+		goto cleanup;
+
 	TAILQ_INSERT_TAIL(&fsdev_list, fsdev,
next);
 
 	return fsdev;
diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h
index cc64a8df2..c41cc0031 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.h
+++ b/drivers/crypto/bcmfs/bcmfs_device.h
@@ -35,6 +35,12 @@ struct bcmfs_device {
 	char name[BCMFS_DEV_NAME_LEN];
 	/* Parent vdev */
 	struct rte_vdev_device *vdev;
+	/* vfio handle */
+	int vfio_dev_fd;
+	/* mapped address */
+	uint8_t *mmap_addr;
+	/* mapped size */
+	uint32_t mmap_size;
 };
 
 #endif /* _BCMFS_DEV_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_vfio.c b/drivers/crypto/bcmfs/bcmfs_vfio.c
new file mode 100644
index 000000000..dc2def580
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_vfio.c
@@ -0,0 +1,107 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */
+
+#include
+#include
+#include
+
+#include
+
+#include "bcmfs_device.h"
+#include "bcmfs_logs.h"
+#include "bcmfs_vfio.h"
+
+#ifdef VFIO_PRESENT
+static int
+vfio_map_dev_obj(const char *path, const char *dev_obj,
+		 uint32_t *size, void **addr, int *dev_fd)
+{
+	int32_t ret;
+	struct vfio_device_info d_info = { .argsz = sizeof(d_info) };
+	struct vfio_region_info reg_info = { .argsz = sizeof(reg_info) };
+
+	ret = rte_vfio_setup_device(path, dev_obj, dev_fd, &d_info);
+	if (ret) {
+		BCMFS_LOG(ERR, "VFIO setup for device failed");
+		return ret;
+	}
+
+	/* get device region info */
+	ret = ioctl(*dev_fd, VFIO_DEVICE_GET_REGION_INFO, &reg_info);
+	if (ret < 0) {
+		BCMFS_LOG(ERR, "Error in VFIO getting REGION_INFO");
+		goto map_failed;
+	}
+
+	*addr = mmap(NULL, reg_info.size,
+		     PROT_WRITE | PROT_READ, MAP_SHARED,
+		     *dev_fd, reg_info.offset);
+	if (*addr == MAP_FAILED) {
+		BCMFS_LOG(ERR, "Error mapping region (errno = %d)", errno);
+		ret = errno;
+		goto map_failed;
+	}
+	*size = reg_info.size;
+
+	return 0;
+
+map_failed:
+	rte_vfio_release_device(path, dev_obj, *dev_fd);
+
+	return ret;
+}
+
+int
+bcmfs_attach_vfio(struct bcmfs_device *dev) +{ + int ret; + int vfio_dev_fd; + void *v_addr = NULL; + uint32_t size = 0; + + ret = vfio_map_dev_obj(dev->dirname, dev->name, + &size, &v_addr, &vfio_dev_fd); + if (ret) + return -1; + + dev->mmap_size = size; + dev->mmap_addr = v_addr; + dev->vfio_dev_fd = vfio_dev_fd; + + return 0; +} + +void +bcmfs_release_vfio(struct bcmfs_device *dev) +{ + int ret; + + if (dev == NULL) + return; + + /* unmap the addr */ + munmap(dev->mmap_addr, dev->mmap_size); + /* release the device */ + ret = rte_vfio_release_device(dev->dirname, dev->name, + dev->vfio_dev_fd); + if (ret < 0) { + BCMFS_LOG(ERR, "cannot release device"); + return; + } +} +#else +int +bcmfs_attach_vfio(struct bcmfs_device *dev __rte_unused) +{ + return -1; +} + +void +bcmfs_release_vfio(struct bcmfs_device *dev __rte_unused) +{ +} +#endif diff --git a/drivers/crypto/bcmfs/bcmfs_vfio.h b/drivers/crypto/bcmfs/bcmfs_vfio.h new file mode 100644 index 000000000..d0fdf6483 --- /dev/null +++ b/drivers/crypto/bcmfs/bcmfs_vfio.h @@ -0,0 +1,17 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2020 Broadcom + * All rights reserved. 
+ */
+
+#ifndef _BCMFS_VFIO_H_
+#define _BCMFS_VFIO_H_
+
+/* Attach the bcmfs device to vfio */
+int
+bcmfs_attach_vfio(struct bcmfs_device *dev);
+
+/* Release the bcmfs device from vfio */
+void
+bcmfs_release_vfio(struct bcmfs_device *dev);
+
+#endif /* _BCMFS_VFIO_H_ */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index a4bdd8ee5..fd39eba20 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -6,5 +6,6 @@
 deps += ['eal', 'bus_vdev']
 sources = files(
 		'bcmfs_logs.c',
-		'bcmfs_device.c'
+		'bcmfs_device.c',
+		'bcmfs_vfio.c'
 	       )

From patchwork Thu Aug 13 17:23:39 2020
X-Patchwork-Submitter: Vikas Gupta
X-Patchwork-Id: 75504
X-Patchwork-Delegate: gakhil@marvell.com
From: Vikas Gupta
To: dev@dpdk.org, akhil.goyal@nxp.com
Cc: vikram.prakash@broadcom.com, Vikas Gupta, Raveendra Padasalagi
Date: Thu, 13 Aug 2020 22:53:39 +0530
Message-Id: <20200813172344.3228-4-vikas.gupta@broadcom.com>
In-Reply-To: <20200813172344.3228-1-vikas.gupta@broadcom.com>
References: <20200812063127.8687-1-vikas.gupta@broadcom.com> <20200813172344.3228-1-vikas.gupta@broadcom.com>
Subject: [dpdk-dev] [PATCH v2 3/8] crypto/bcmfs: add apis for queue pair management
List-Id: DPDK patches and discussions

Add queue pair management APIs, which will be used by the crypto
device to manage the h/w queues.
A bcmfs device structure owns multiple queue-pairs based on the mapped address allocated to it. Signed-off-by: Vikas Gupta Signed-off-by: Raveendra Padasalagi Reviewed-by: Ajit Khaparde --- drivers/crypto/bcmfs/bcmfs_device.c | 4 + drivers/crypto/bcmfs/bcmfs_device.h | 5 + drivers/crypto/bcmfs/bcmfs_hw_defs.h | 38 +++ drivers/crypto/bcmfs/bcmfs_qp.c | 345 +++++++++++++++++++++++++++ drivers/crypto/bcmfs/bcmfs_qp.h | 122 ++++++++++ drivers/crypto/bcmfs/meson.build | 3 +- 6 files changed, 516 insertions(+), 1 deletion(-) create mode 100644 drivers/crypto/bcmfs/bcmfs_hw_defs.h create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.c create mode 100644 drivers/crypto/bcmfs/bcmfs_qp.h diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c index 3b5cc9e98..b475c2933 100644 --- a/drivers/crypto/bcmfs/bcmfs_device.c +++ b/drivers/crypto/bcmfs/bcmfs_device.c @@ -11,6 +11,7 @@ #include "bcmfs_device.h" #include "bcmfs_logs.h" +#include "bcmfs_qp.h" #include "bcmfs_vfio.h" struct bcmfs_device_attr { @@ -76,6 +77,9 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev, if (bcmfs_attach_vfio(fsdev)) goto cleanup; + /* Maximum number of QPs supported */ + fsdev->max_hw_qps = fsdev->mmap_size / BCMFS_HW_QUEUE_IO_ADDR_LEN; + TAILQ_INSERT_TAIL(&fsdev_list, fsdev, next); return fsdev; diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h index c41cc0031..a47537332 100644 --- a/drivers/crypto/bcmfs/bcmfs_device.h +++ b/drivers/crypto/bcmfs/bcmfs_device.h @@ -11,6 +11,7 @@ #include #include "bcmfs_logs.h" +#include "bcmfs_qp.h" /* max number of dev nodes */ #define BCMFS_MAX_NODES 4 @@ -41,6 +42,10 @@ struct bcmfs_device { uint8_t *mmap_addr; /* mapped size */ uint32_t mmap_size; + /* max number of h/w queue pairs detected */ + uint16_t max_hw_qps; + /* current qpairs in use */ + struct bcmfs_qp *qps_in_use[BCMFS_MAX_HW_QUEUES]; }; #endif /* _BCMFS_DEV_H_ */ diff --git a/drivers/crypto/bcmfs/bcmfs_hw_defs.h 
b/drivers/crypto/bcmfs/bcmfs_hw_defs.h
new file mode 100644
index 000000000..ecb0c09ba
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_hw_defs.h
@@ -0,0 +1,38 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) 2020 Broadcom
+ * All rights reserved.
+ */
+
+#ifndef _BCMFS_HW_DEFS_H_
+#define _BCMFS_HW_DEFS_H_
+
+#include
+#include
+#include
+#include
+
+/* 32-bit MMIO register write */
+#define FS_MMIO_WRITE32(value, addr) rte_write32_relaxed((value), (addr))
+
+/* 32-bit MMIO register read */
+#define FS_MMIO_READ32(addr) rte_read32_relaxed((addr))
+
+#ifndef BIT
+#define BIT(nr) (1UL << (nr))
+#endif
+
+#define FS_RING_REGS_SIZE 0x10000
+#define FS_RING_DESC_SIZE 8
+#define FS_RING_BD_ALIGN_ORDER 12
+#define FS_RING_BD_DESC_PER_REQ 32
+#define FS_RING_CMPL_ALIGN_ORDER 13
+#define FS_RING_CMPL_SIZE (1024 * FS_RING_DESC_SIZE)
+#define FS_RING_MAX_REQ_COUNT 1024
+#define FS_RING_PAGE_SHFT 12
+#define FS_RING_PAGE_SIZE BIT(FS_RING_PAGE_SHFT)
+
+/* Minimum and maximum number of requests supported */
+#define FS_RM_MAX_REQS 1024
+#define FS_RM_MIN_REQS 32
+
+#endif /* _BCMFS_HW_DEFS_H_ */
diff --git a/drivers/crypto/bcmfs/bcmfs_qp.c b/drivers/crypto/bcmfs/bcmfs_qp.c
new file mode 100644
index 000000000..864e7bb74
--- /dev/null
+++ b/drivers/crypto/bcmfs/bcmfs_qp.c
@@ -0,0 +1,345 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(C) 2020 Broadcom.
+ * All rights reserved.
+ */ + +#include + +#include +#include +#include +#include +#include +#include +#include +#include + +#include "bcmfs_logs.h" +#include "bcmfs_qp.h" +#include "bcmfs_hw_defs.h" + +/* TX or submission queue name */ +static const char *txq_name = "tx"; +/* Completion or receive queue name */ +static const char *cmplq_name = "cmpl"; + +/* Helper function */ +static int +bcmfs_qp_check_queue_alignment(uint64_t phys_addr, + uint32_t align) +{ + if (((align - 1) & phys_addr) != 0) + return -EINVAL; + return 0; +} + +static void +bcmfs_queue_delete(struct bcmfs_queue *queue, + uint16_t queue_pair_id) +{ + const struct rte_memzone *mz; + int status = 0; + + if (queue == NULL) { + BCMFS_LOG(DEBUG, "Invalid queue"); + return; + } + BCMFS_LOG(DEBUG, "Free ring %d type %d, memzone: %s", + queue_pair_id, queue->q_type, queue->memz_name); + + mz = rte_memzone_lookup(queue->memz_name); + if (mz != NULL) { + /* Write an unused pattern to the queue memory. */ + memset(queue->base_addr, 0x9B, queue->queue_size); + status = rte_memzone_free(mz); + if (status != 0) + BCMFS_LOG(ERR, "Error %d on freeing queue %s", + status, queue->memz_name); + } else { + BCMFS_LOG(DEBUG, "queue %s doesn't exist", + queue->memz_name); + } +} + +static const struct rte_memzone * +queue_dma_zone_reserve(const char *queue_name, uint32_t queue_size, + int socket_id, unsigned int align) +{ + const struct rte_memzone *mz; + + mz = rte_memzone_lookup(queue_name); + if (mz != NULL) { + if (((size_t)queue_size <= mz->len) && + (socket_id == SOCKET_ID_ANY || + socket_id == mz->socket_id)) { + BCMFS_LOG(DEBUG, "re-use memzone already " + "allocated for %s", queue_name); + return mz; + } + + BCMFS_LOG(ERR, "Incompatible memzone already " + "allocated %s, size %u, socket %d. 
" + "Requested size %u, socket %u", + queue_name, (uint32_t)mz->len, + mz->socket_id, queue_size, socket_id); + return NULL; + } + + BCMFS_LOG(DEBUG, "Allocate memzone for %s, size %u on socket %u", + queue_name, queue_size, socket_id); + return rte_memzone_reserve_aligned(queue_name, queue_size, + socket_id, RTE_MEMZONE_IOVA_CONTIG, align); +} + +static int +bcmfs_queue_create(struct bcmfs_queue *queue, + struct bcmfs_qp_config *qp_conf, + uint16_t queue_pair_id, + enum bcmfs_queue_type qtype) +{ + const struct rte_memzone *qp_mz; + char q_name[16]; + unsigned int align; + uint32_t queue_size_bytes; + int ret; + + if (qtype == BCMFS_RM_TXQ) { + strlcpy(q_name, txq_name, sizeof(q_name)); + align = 1U << FS_RING_BD_ALIGN_ORDER; + queue_size_bytes = qp_conf->nb_descriptors * + qp_conf->max_descs_req * FS_RING_DESC_SIZE; + queue_size_bytes = RTE_ALIGN_MUL_CEIL(queue_size_bytes, + FS_RING_PAGE_SIZE); + /* make queue size to multiple for 4K pages */ + } else if (qtype == BCMFS_RM_CPLQ) { + strlcpy(q_name, cmplq_name, sizeof(q_name)); + align = 1U << FS_RING_CMPL_ALIGN_ORDER; + + /* + * Memory size for cmpl + MSI + * For MSI allocate here itself and so we allocate twice + */ + queue_size_bytes = 2 * FS_RING_CMPL_SIZE; + } else { + BCMFS_LOG(ERR, "Invalid queue selection"); + return -EINVAL; + } + + queue->q_type = qtype; + + /* + * Allocate a memzone for the queue - create a unique name. 
+ */ + snprintf(queue->memz_name, sizeof(queue->memz_name), + "%s_%d_%s_%d_%s", "bcmfs", qtype, "qp_mem", + queue_pair_id, q_name); + qp_mz = queue_dma_zone_reserve(queue->memz_name, queue_size_bytes, + 0, align); + if (qp_mz == NULL) { + BCMFS_LOG(ERR, "Failed to allocate ring memzone"); + return -ENOMEM; + } + + if (bcmfs_qp_check_queue_alignment(qp_mz->iova, align)) { + BCMFS_LOG(ERR, "Invalid alignment on queue create " + " 0x%" PRIx64 "\n", + qp_mz->iova); + ret = -EFAULT; + goto queue_create_err; + } + + queue->base_addr = (char *)qp_mz->addr; + queue->base_phys_addr = qp_mz->iova; + queue->queue_size = queue_size_bytes; + + return 0; + +queue_create_err: + rte_memzone_free(qp_mz); + + return ret; +} + +int +bcmfs_qp_release(struct bcmfs_qp **qp_addr) +{ + struct bcmfs_qp *qp = *qp_addr; + + if (qp == NULL) { + BCMFS_LOG(DEBUG, "qp already freed"); + return 0; + } + + /* Don't free memory if there are still responses to be processed */ + if ((qp->stats.enqueued_count - qp->stats.dequeued_count) == 0) { + /* Stop the h/w ring */ + qp->ops->stopq(qp); + /* Delete the queue pairs */ + bcmfs_queue_delete(&qp->tx_q, qp->qpair_id); + bcmfs_queue_delete(&qp->cmpl_q, qp->qpair_id); + } else { + return -EAGAIN; + } + + rte_bitmap_reset(qp->ctx_bmp); + rte_free(qp->ctx_bmp_mem); + rte_free(qp->ctx_pool); + + rte_free(qp); + *qp_addr = NULL; + + return 0; +} + +int +bcmfs_qp_setup(struct bcmfs_qp **qp_addr, + uint16_t queue_pair_id, + struct bcmfs_qp_config *qp_conf) +{ + struct bcmfs_qp *qp; + uint32_t bmp_size; + uint32_t nb_descriptors = qp_conf->nb_descriptors; + uint16_t i; + int rc; + + if (nb_descriptors < FS_RM_MIN_REQS) { + BCMFS_LOG(ERR, "Can't create qp for %u descriptors", + nb_descriptors); + return -EINVAL; + } + + if (nb_descriptors > FS_RM_MAX_REQS) + nb_descriptors = FS_RM_MAX_REQS; + + if (qp_conf->iobase == NULL) { + BCMFS_LOG(ERR, "IO config space null"); + return -EINVAL; + } + + qp = rte_zmalloc_socket("BCM FS PMD qp metadata", +
sizeof(*qp), RTE_CACHE_LINE_SIZE, + qp_conf->socket_id); + if (qp == NULL) { + BCMFS_LOG(ERR, "Failed to alloc mem for qp struct"); + return -ENOMEM; + } + + qp->qpair_id = queue_pair_id; + qp->ioreg = qp_conf->iobase; + qp->nb_descriptors = nb_descriptors; + + qp->stats.enqueued_count = 0; + qp->stats.dequeued_count = 0; + + rc = bcmfs_queue_create(&qp->tx_q, qp_conf, qp->qpair_id, + BCMFS_RM_TXQ); + if (rc) { + BCMFS_LOG(ERR, "Tx queue create failed queue_pair_id %u", + queue_pair_id); + goto create_err; + } + + rc = bcmfs_queue_create(&qp->cmpl_q, qp_conf, qp->qpair_id, + BCMFS_RM_CPLQ); + if (rc) { + BCMFS_LOG(ERR, "Cmpl queue create failed queue_pair_id= %u", + queue_pair_id); + goto q_create_err; + } + + /* ctx saving bitmap */ + bmp_size = rte_bitmap_get_memory_footprint(nb_descriptors); + + /* Allocate memory for bitmap */ + qp->ctx_bmp_mem = rte_zmalloc("ctx_bmp_mem", bmp_size, + RTE_CACHE_LINE_SIZE); + if (qp->ctx_bmp_mem == NULL) { + rc = -ENOMEM; + goto qp_create_err; + } + + /* Initialize pool resource bitmap array */ + qp->ctx_bmp = rte_bitmap_init(nb_descriptors, qp->ctx_bmp_mem, + bmp_size); + if (qp->ctx_bmp == NULL) { + rc = -EINVAL; + goto bmap_mem_free; + } + + /* Mark all pools available */ + for (i = 0; i < nb_descriptors; i++) + rte_bitmap_set(qp->ctx_bmp, i); + + /* Allocate memory for context */ + qp->ctx_pool = rte_zmalloc("qp_ctx_pool", + sizeof(unsigned long) * + nb_descriptors, 0); + if (qp->ctx_pool == NULL) { + BCMFS_LOG(ERR, "ctx allocation pool fails"); + rc = -ENOMEM; + goto bmap_free; + } + + /* Start h/w ring */ + qp->ops->startq(qp); + + *qp_addr = qp; + + return 0; + +bmap_free: + rte_bitmap_reset(qp->ctx_bmp); +bmap_mem_free: + rte_free(qp->ctx_bmp_mem); +qp_create_err: + bcmfs_queue_delete(&qp->cmpl_q, queue_pair_id); +q_create_err: + bcmfs_queue_delete(&qp->tx_q, queue_pair_id); +create_err: + rte_free(qp); + + return rc; +} + +uint16_t +bcmfs_enqueue_op_burst(void *qp, void **ops, uint16_t nb_ops) +{ + struct bcmfs_qp 
*tmp_qp = (struct bcmfs_qp *)qp; + register uint32_t nb_ops_sent = 0; + uint16_t nb_ops_possible = nb_ops; + int ret; + + if (unlikely(nb_ops == 0)) + return 0; + + while (nb_ops_sent != nb_ops_possible) { + ret = tmp_qp->ops->enq_one_req(qp, *ops); + if (ret != 0) { + tmp_qp->stats.enqueue_err_count++; + /* This message cannot be enqueued */ + if (nb_ops_sent == 0) + return 0; + goto ring_db; + } + + ops++; + nb_ops_sent++; + } + +ring_db: + tmp_qp->stats.enqueued_count += nb_ops_sent; + tmp_qp->ops->ring_db(tmp_qp); + + return nb_ops_sent; +} + +uint16_t +bcmfs_dequeue_op_burst(void *qp, void **ops, uint16_t nb_ops) +{ + struct bcmfs_qp *tmp_qp = (struct bcmfs_qp *)qp; + uint32_t deq = tmp_qp->ops->dequeue(tmp_qp, ops, nb_ops); + + tmp_qp->stats.dequeued_count += deq; + + return deq; +} diff --git a/drivers/crypto/bcmfs/bcmfs_qp.h b/drivers/crypto/bcmfs/bcmfs_qp.h new file mode 100644 index 000000000..027d7a50c --- /dev/null +++ b/drivers/crypto/bcmfs/bcmfs_qp.h @@ -0,0 +1,122 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2020 Broadcom + * All rights reserved. 
+ */ + +#ifndef _BCMFS_QP_H_ +#define _BCMFS_QP_H_ + +#include + +/* Maximum number of h/w queues supported by device */ +#define BCMFS_MAX_HW_QUEUES 32 + +/* H/W queue IO address space len */ +#define BCMFS_HW_QUEUE_IO_ADDR_LEN (64 * 1024) + +/* Maximum size of device ops name */ +#define BCMFS_HW_OPS_NAMESIZE 32 + +enum bcmfs_queue_type { + /* TX or submission queue */ + BCMFS_RM_TXQ, + /* Completion or receive queue */ + BCMFS_RM_CPLQ +}; + +struct bcmfs_qp_stats { + /* Count of all operations enqueued */ + uint64_t enqueued_count; + /* Count of all operations dequeued */ + uint64_t dequeued_count; + /* Total error count on operations enqueued */ + uint64_t enqueue_err_count; + /* Total error count on operations dequeued */ + uint64_t dequeue_err_count; +}; + +struct bcmfs_qp_config { + /* Socket to allocate memory on */ + int socket_id; + /* Mapped iobase for qp */ + void *iobase; + /* nb_descriptors or requests a h/w queue can accommodate */ + uint16_t nb_descriptors; + /* Maximum number of h/w descriptors needed by a request */ + uint16_t max_descs_req; +}; + +struct bcmfs_queue { + /* Base virt address */ + void *base_addr; + /* Base iova */ + rte_iova_t base_phys_addr; + /* Queue type */ + enum bcmfs_queue_type q_type; + /* Queue size based on nb_descriptors and max_descs_reqs */ + uint32_t queue_size; + union { + /* s/w pointer for tx h/w queue */ + uint32_t tx_write_ptr; + /* s/w pointer for completion h/w queue */ + uint32_t cmpl_read_ptr; + }; + /* Memzone name */ + char memz_name[RTE_MEMZONE_NAMESIZE]; +}; + +struct bcmfs_qp { + /* Queue-pair ID */ + uint16_t qpair_id; + /* Mapped IO address */ + void *ioreg; + /* A TX queue */ + struct bcmfs_queue tx_q; + /* A Completion queue */ + struct bcmfs_queue cmpl_q; + /* Number of requests the queue can accommodate */ + uint32_t nb_descriptors; + /* Number of pending requests enqueued to the h/w queue */ + uint16_t nb_pending_requests; + /* A pool which acts as a hash mapping reqid to request context */ + unsigned long *ctx_pool; + /* virt
address for mem allocated for bitmap */ + void *ctx_bmp_mem; + /* Bitmap */ + struct rte_bitmap *ctx_bmp; + /* Associated stats */ + struct bcmfs_qp_stats stats; + /* h/w ops associated with qp */ + struct bcmfs_hw_queue_pair_ops *ops; + +} __rte_cache_aligned; + +/* Structure defining h/w queue pair operations */ +struct bcmfs_hw_queue_pair_ops { + /* ops name */ + char name[BCMFS_HW_OPS_NAMESIZE]; + /* Enqueue an object */ + int (*enq_one_req)(struct bcmfs_qp *qp, void *obj); + /* Ring doorbell */ + void (*ring_db)(struct bcmfs_qp *qp); + /* Dequeue objects */ + uint16_t (*dequeue)(struct bcmfs_qp *qp, void **obj, + uint16_t nb_ops); + /* Start the h/w queue */ + int (*startq)(struct bcmfs_qp *qp); + /* Stop the h/w queue */ + void (*stopq)(struct bcmfs_qp *qp); +}; + +uint16_t +bcmfs_enqueue_op_burst(void *qp, void **ops, uint16_t nb_ops); +uint16_t +bcmfs_dequeue_op_burst(void *qp, void **ops, uint16_t nb_ops); +int +bcmfs_qp_release(struct bcmfs_qp **qp_addr); +int +bcmfs_qp_setup(struct bcmfs_qp **qp_addr, + uint16_t queue_pair_id, + struct bcmfs_qp_config *bcmfs_conf); + +#endif /* _BCMFS_QP_H_ */ diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build index fd39eba20..7e2bcbf14 100644 --- a/drivers/crypto/bcmfs/meson.build +++ b/drivers/crypto/bcmfs/meson.build @@ -7,5 +7,6 @@ deps += ['eal', 'bus_vdev'] sources = files( 'bcmfs_logs.c', 'bcmfs_device.c', - 'bcmfs_vfio.c' + 'bcmfs_vfio.c', + 'bcmfs_qp.c' ) From patchwork Thu Aug 13 17:23:40 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vikas Gupta X-Patchwork-Id: 75505 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 10763A04B0; Thu, 13 Aug 2020 19:24:45 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org 
(Postfix) with ESMTP id 6FD191C0DC; Thu, 13 Aug 2020 19:24:15 +0200 (CEST) From: Vikas Gupta To:
dev@dpdk.org, akhil.goyal@nxp.com Cc: vikram.prakash@broadcom.com, Vikas Gupta , Raveendra Padasalagi Date: Thu, 13 Aug 2020 22:53:40 +0530 Message-Id: <20200813172344.3228-5-vikas.gupta@broadcom.com> X-Mailer: git-send-email 2.17.1 In-Reply-To: <20200813172344.3228-1-vikas.gupta@broadcom.com> References: <20200812063127.8687-1-vikas.gupta@broadcom.com> <20200813172344.3228-1-vikas.gupta@broadcom.com> Subject: [dpdk-dev] [PATCH v2 4/8] crypto/bcmfs: add hw queue pair operations X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Add queue pair operations exported by supported devices. Signed-off-by: Vikas Gupta Signed-off-by: Raveendra Padasalagi Reviewed-by: Ajit Khaparde --- drivers/crypto/bcmfs/bcmfs_dev_msg.h | 29 + drivers/crypto/bcmfs/bcmfs_device.c | 51 ++ drivers/crypto/bcmfs/bcmfs_device.h | 16 + drivers/crypto/bcmfs/bcmfs_qp.c | 1 + drivers/crypto/bcmfs/bcmfs_qp.h | 4 + drivers/crypto/bcmfs/hw/bcmfs4_rm.c | 742 ++++++++++++++++++++++ drivers/crypto/bcmfs/hw/bcmfs5_rm.c | 677 ++++++++++++++++++++ drivers/crypto/bcmfs/hw/bcmfs_rm_common.c | 82 +++ drivers/crypto/bcmfs/hw/bcmfs_rm_common.h | 46 ++ drivers/crypto/bcmfs/meson.build | 5 +- 10 files changed, 1652 insertions(+), 1 deletion(-) create mode 100644 drivers/crypto/bcmfs/bcmfs_dev_msg.h create mode 100644 drivers/crypto/bcmfs/hw/bcmfs4_rm.c create mode 100644 drivers/crypto/bcmfs/hw/bcmfs5_rm.c create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.c create mode 100644 drivers/crypto/bcmfs/hw/bcmfs_rm_common.h diff --git a/drivers/crypto/bcmfs/bcmfs_dev_msg.h b/drivers/crypto/bcmfs/bcmfs_dev_msg.h new file mode 100644 index 000000000..5b50bde35 --- /dev/null +++ b/drivers/crypto/bcmfs/bcmfs_dev_msg.h @@ -0,0 +1,29 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2020 Broadcom + * All rights reserved. 
+ */ + +#ifndef _BCMFS_DEV_MSG_H_ +#define _BCMFS_DEV_MSG_H_ + +#define MAX_SRC_ADDR_BUFFERS 8 +#define MAX_DST_ADDR_BUFFERS 3 + +struct bcmfs_qp_message { + /** Physical address of each source */ + uint64_t srcs_addr[MAX_SRC_ADDR_BUFFERS]; + /** Length of each source */ + uint32_t srcs_len[MAX_SRC_ADDR_BUFFERS]; + /** Total number of sources */ + unsigned int srcs_count; + /** Physical address of each destination */ + uint64_t dsts_addr[MAX_DST_ADDR_BUFFERS]; + /** Length of each destination */ + uint32_t dsts_len[MAX_DST_ADDR_BUFFERS]; + /** Total number of destinations */ + unsigned int dsts_count; + + void *ctx; +}; + +#endif /* _BCMFS_DEV_MSG_H_ */ diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c index b475c2933..bd2d64acf 100644 --- a/drivers/crypto/bcmfs/bcmfs_device.c +++ b/drivers/crypto/bcmfs/bcmfs_device.c @@ -43,6 +43,47 @@ static struct bcmfs_device_attr dev_table[] = { } }; +struct bcmfs_hw_queue_pair_ops_table bcmfs_hw_queue_pair_ops_table = { + .tl = RTE_SPINLOCK_INITIALIZER, + .num_ops = 0 +}; + +int bcmfs_hw_queue_pair_register_ops(const struct bcmfs_hw_queue_pair_ops *h) +{ + struct bcmfs_hw_queue_pair_ops *ops; + int16_t ops_index; + + rte_spinlock_lock(&bcmfs_hw_queue_pair_ops_table.tl); + + if (h->enq_one_req == NULL || h->dequeue == NULL || + h->ring_db == NULL || h->startq == NULL || h->stopq == NULL) { + rte_spinlock_unlock(&bcmfs_hw_queue_pair_ops_table.tl); + BCMFS_LOG(ERR, + "Missing callback while registering device ops"); + return -EINVAL; + } + + if (strlen(h->name) >= sizeof(ops->name) - 1) { + rte_spinlock_unlock(&bcmfs_hw_queue_pair_ops_table.tl); + BCMFS_LOG(ERR, "%s(): fs device_ops <%s>: name too long", + __func__, h->name); + return -EEXIST; + } + + ops_index = bcmfs_hw_queue_pair_ops_table.num_ops++; + ops = &bcmfs_hw_queue_pair_ops_table.qp_ops[ops_index]; + strlcpy(ops->name, h->name, sizeof(ops->name)); + ops->enq_one_req = h->enq_one_req; + ops->dequeue = h->dequeue; + ops->ring_db =
h->ring_db; + ops->startq = h->startq; + ops->stopq = h->stopq; + + rte_spinlock_unlock(&bcmfs_hw_queue_pair_ops_table.tl); + + return ops_index; +} + TAILQ_HEAD(fsdev_list, bcmfs_device); static struct fsdev_list fsdev_list = TAILQ_HEAD_INITIALIZER(fsdev_list); @@ -53,6 +94,7 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev, enum bcmfs_device_type dev_type __rte_unused) { struct bcmfs_device *fsdev; + uint32_t i; fsdev = calloc(1, sizeof(*fsdev)); if (!fsdev) @@ -68,6 +110,15 @@ fsdev_allocate_one_dev(struct rte_vdev_device *vdev, goto cleanup; } + /* check if registered ops name is present in directory path */ + for (i = 0; i < bcmfs_hw_queue_pair_ops_table.num_ops; i++) + if (strstr(dirpath, + bcmfs_hw_queue_pair_ops_table.qp_ops[i].name)) + fsdev->sym_hw_qp_ops = + &bcmfs_hw_queue_pair_ops_table.qp_ops[i]; + if (!fsdev->sym_hw_qp_ops) + goto cleanup; + strcpy(fsdev->dirname, dirpath); strcpy(fsdev->name, devname); diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h index a47537332..9e40c5d74 100644 --- a/drivers/crypto/bcmfs/bcmfs_device.h +++ b/drivers/crypto/bcmfs/bcmfs_device.h @@ -8,6 +8,7 @@ #include +#include #include #include "bcmfs_logs.h" @@ -28,6 +29,19 @@ enum bcmfs_device_type { BCMFS_UNKNOWN }; +/* A table to store registered queue pair operations */ +struct bcmfs_hw_queue_pair_ops_table { + rte_spinlock_t tl; + /* Number of used ops structs in the table. */ + uint32_t num_ops; + /* Storage for all possible ops structs.
*/ + struct bcmfs_hw_queue_pair_ops qp_ops[BCMFS_MAX_NODES]; +}; + +/* HW queue pair ops register function */ +int bcmfs_hw_queue_pair_register_ops(const struct bcmfs_hw_queue_pair_ops + *qp_ops); + struct bcmfs_device { TAILQ_ENTRY(bcmfs_device) next; /* Directory path for vfio */ @@ -46,6 +60,8 @@ struct bcmfs_device { uint16_t max_hw_qps; /* current qpairs in use */ struct bcmfs_qp *qps_in_use[BCMFS_MAX_HW_QUEUES]; + /* queue pair ops exported by symmetric crypto hw */ + struct bcmfs_hw_queue_pair_ops *sym_hw_qp_ops; }; #endif /* _BCMFS_DEV_H_ */ diff --git a/drivers/crypto/bcmfs/bcmfs_qp.c b/drivers/crypto/bcmfs/bcmfs_qp.c index 864e7bb74..ec1327b78 100644 --- a/drivers/crypto/bcmfs/bcmfs_qp.c +++ b/drivers/crypto/bcmfs/bcmfs_qp.c @@ -227,6 +227,7 @@ bcmfs_qp_setup(struct bcmfs_qp **qp_addr, qp->qpair_id = queue_pair_id; qp->ioreg = qp_conf->iobase; qp->nb_descriptors = nb_descriptors; + qp->ops = qp_conf->ops; qp->stats.enqueued_count = 0; qp->stats.dequeued_count = 0; diff --git a/drivers/crypto/bcmfs/bcmfs_qp.h b/drivers/crypto/bcmfs/bcmfs_qp.h index 027d7a50c..e4b0c3f2f 100644 --- a/drivers/crypto/bcmfs/bcmfs_qp.h +++ b/drivers/crypto/bcmfs/bcmfs_qp.h @@ -44,6 +44,8 @@ struct bcmfs_qp_config { uint16_t nb_descriptors; /* Maximum number of h/w descriptors needed by a request */ uint16_t max_descs_req; + /* h/w ops associated with qp */ + struct bcmfs_hw_queue_pair_ops *ops; }; struct bcmfs_queue { @@ -61,6 +63,8 @@ struct bcmfs_queue { /* s/w pointer for completion h/w queue*/ uint32_t cmpl_read_ptr; }; + /* number of inflight descriptor accumulated before next db ring */ + uint16_t descs_inflight; /* Memzone name */ char memz_name[RTE_MEMZONE_NAMESIZE]; }; diff --git a/drivers/crypto/bcmfs/hw/bcmfs4_rm.c b/drivers/crypto/bcmfs/hw/bcmfs4_rm.c new file mode 100644 index 000000000..82b1cf9c5 --- /dev/null +++ b/drivers/crypto/bcmfs/hw/bcmfs4_rm.c @@ -0,0 +1,742 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2020 Broadcom + * All rights reserved. 
+ */ + +#include + +#include + +#include "bcmfs_device.h" +#include "bcmfs_dev_msg.h" +#include "bcmfs_hw_defs.h" +#include "bcmfs_logs.h" +#include "bcmfs_qp.h" +#include "bcmfs_rm_common.h" + +/* FS4 configuration */ +#define RING_BD_TOGGLE_INVALID(offset) \ + (((offset) >> FS_RING_BD_ALIGN_ORDER) & 0x1) +#define RING_BD_TOGGLE_VALID(offset) \ + (!RING_BD_TOGGLE_INVALID(offset)) + +#define RING_VER_MAGIC 0x76303031 + +/* Per-Ring register offsets */ +#define RING_VER 0x000 +#define RING_BD_START_ADDR 0x004 +#define RING_BD_READ_PTR 0x008 +#define RING_BD_WRITE_PTR 0x00c +#define RING_BD_READ_PTR_DDR_LS 0x010 +#define RING_BD_READ_PTR_DDR_MS 0x014 +#define RING_CMPL_START_ADDR 0x018 +#define RING_CMPL_WRITE_PTR 0x01c +#define RING_NUM_REQ_RECV_LS 0x020 +#define RING_NUM_REQ_RECV_MS 0x024 +#define RING_NUM_REQ_TRANS_LS 0x028 +#define RING_NUM_REQ_TRANS_MS 0x02c +#define RING_NUM_REQ_OUTSTAND 0x030 +#define RING_CONTROL 0x034 +#define RING_FLUSH_DONE 0x038 +#define RING_MSI_ADDR_LS 0x03c +#define RING_MSI_ADDR_MS 0x040 +#define RING_MSI_CONTROL 0x048 +#define RING_BD_READ_PTR_DDR_CONTROL 0x04c +#define RING_MSI_DATA_VALUE 0x064 + +/* Register RING_BD_START_ADDR fields */ +#define BD_LAST_UPDATE_HW_SHIFT 28 +#define BD_LAST_UPDATE_HW_MASK 0x1 +#define BD_START_ADDR_VALUE(pa) \ + ((uint32_t)((((uint64_t)(pa)) >> FS_RING_BD_ALIGN_ORDER) & 0x0fffffff)) +#define BD_START_ADDR_DECODE(val) \ + ((uint64_t)((val) & 0x0fffffff) << FS_RING_BD_ALIGN_ORDER) + +/* Register RING_CMPL_START_ADDR fields */ +#define CMPL_START_ADDR_VALUE(pa) \ + ((uint32_t)((((uint64_t)(pa)) >> FS_RING_CMPL_ALIGN_ORDER) & 0x7ffffff)) + +/* Register RING_CONTROL fields */ +#define CONTROL_MASK_DISABLE_CONTROL 12 +#define CONTROL_FLUSH_SHIFT 5 +#define CONTROL_ACTIVE_SHIFT 4 +#define CONTROL_RATE_ADAPT_MASK 0xf +#define CONTROL_RATE_DYNAMIC 0x0 +#define CONTROL_RATE_FAST 0x8 +#define CONTROL_RATE_MEDIUM 0x9 +#define CONTROL_RATE_SLOW 0xa +#define CONTROL_RATE_IDLE 0xb + +/* Register RING_FLUSH_DONE 
fields */ +#define FLUSH_DONE_MASK 0x1 + +/* Register RING_MSI_CONTROL fields */ +#define MSI_TIMER_VAL_SHIFT 16 +#define MSI_TIMER_VAL_MASK 0xffff +#define MSI_ENABLE_SHIFT 15 +#define MSI_ENABLE_MASK 0x1 +#define MSI_COUNT_SHIFT 0 +#define MSI_COUNT_MASK 0x3ff + +/* Register RING_BD_READ_PTR_DDR_CONTROL fields */ +#define BD_READ_PTR_DDR_TIMER_VAL_SHIFT 16 +#define BD_READ_PTR_DDR_TIMER_VAL_MASK 0xffff +#define BD_READ_PTR_DDR_ENABLE_SHIFT 15 +#define BD_READ_PTR_DDR_ENABLE_MASK 0x1 + +/* ====== Broadcom FS4-RM ring descriptor defines ===== */ + + +/* General descriptor format */ +#define DESC_TYPE_SHIFT 60 +#define DESC_TYPE_MASK 0xf +#define DESC_PAYLOAD_SHIFT 0 +#define DESC_PAYLOAD_MASK 0x0fffffffffffffff + +/* Null descriptor format */ +#define NULL_TYPE 0 +#define NULL_TOGGLE_SHIFT 58 +#define NULL_TOGGLE_MASK 0x1 + +/* Header descriptor format */ +#define HEADER_TYPE 1 +#define HEADER_TOGGLE_SHIFT 58 +#define HEADER_TOGGLE_MASK 0x1 +#define HEADER_ENDPKT_SHIFT 57 +#define HEADER_ENDPKT_MASK 0x1 +#define HEADER_STARTPKT_SHIFT 56 +#define HEADER_STARTPKT_MASK 0x1 +#define HEADER_BDCOUNT_SHIFT 36 +#define HEADER_BDCOUNT_MASK 0x1f +#define HEADER_BDCOUNT_MAX HEADER_BDCOUNT_MASK +#define HEADER_FLAGS_SHIFT 16 +#define HEADER_FLAGS_MASK 0xffff +#define HEADER_OPAQUE_SHIFT 0 +#define HEADER_OPAQUE_MASK 0xffff + +/* Source (SRC) descriptor format */ +#define SRC_TYPE 2 +#define SRC_LENGTH_SHIFT 44 +#define SRC_LENGTH_MASK 0xffff +#define SRC_ADDR_SHIFT 0 +#define SRC_ADDR_MASK 0x00000fffffffffff + +/* Destination (DST) descriptor format */ +#define DST_TYPE 3 +#define DST_LENGTH_SHIFT 44 +#define DST_LENGTH_MASK 0xffff +#define DST_ADDR_SHIFT 0 +#define DST_ADDR_MASK 0x00000fffffffffff + +/* Next pointer (NPTR) descriptor format */ +#define NPTR_TYPE 5 +#define NPTR_TOGGLE_SHIFT 58 +#define NPTR_TOGGLE_MASK 0x1 +#define NPTR_ADDR_SHIFT 0 +#define NPTR_ADDR_MASK 0x00000fffffffffff + +/* Mega source (MSRC) descriptor format */ +#define MSRC_TYPE 6 +#define 
MSRC_LENGTH_SHIFT 44 +#define MSRC_LENGTH_MASK 0xffff +#define MSRC_ADDR_SHIFT 0 +#define MSRC_ADDR_MASK 0x00000fffffffffff + +/* Mega destination (MDST) descriptor format */ +#define MDST_TYPE 7 +#define MDST_LENGTH_SHIFT 44 +#define MDST_LENGTH_MASK 0xffff +#define MDST_ADDR_SHIFT 0 +#define MDST_ADDR_MASK 0x00000fffffffffff + +static uint8_t +bcmfs4_is_next_table_desc(void *desc_ptr) +{ + uint64_t desc = rm_read_desc(desc_ptr); + uint32_t type = FS_DESC_DEC(desc, DESC_TYPE_SHIFT, DESC_TYPE_MASK); + + return (type == NPTR_TYPE) ? true : false; +} + +static uint64_t +bcmfs4_next_table_desc(uint32_t toggle, uint64_t next_addr) +{ + return (rm_build_desc(NPTR_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) | + rm_build_desc(toggle, NPTR_TOGGLE_SHIFT, NPTR_TOGGLE_MASK) | + rm_build_desc(next_addr, NPTR_ADDR_SHIFT, NPTR_ADDR_MASK)); +} + +static uint64_t +bcmfs4_null_desc(uint32_t toggle) +{ + return (rm_build_desc(NULL_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) | + rm_build_desc(toggle, NULL_TOGGLE_SHIFT, NULL_TOGGLE_MASK)); +} + +static void +bcmfs4_flip_header_toggle(void *desc_ptr) +{ + uint64_t desc = rm_read_desc(desc_ptr); + + if (desc & ((uint64_t)0x1 << HEADER_TOGGLE_SHIFT)) + desc &= ~((uint64_t)0x1 << HEADER_TOGGLE_SHIFT); + else + desc |= ((uint64_t)0x1 << HEADER_TOGGLE_SHIFT); + + rm_write_desc(desc_ptr, desc); +} + +static uint64_t +bcmfs4_header_desc(uint32_t toggle, uint32_t startpkt, + uint32_t endpkt, uint32_t bdcount, + uint32_t flags, uint32_t opaque) +{ + return (rm_build_desc(HEADER_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) | + rm_build_desc(toggle, HEADER_TOGGLE_SHIFT, HEADER_TOGGLE_MASK) | + rm_build_desc(startpkt, HEADER_STARTPKT_SHIFT, + HEADER_STARTPKT_MASK) | + rm_build_desc(endpkt, HEADER_ENDPKT_SHIFT, HEADER_ENDPKT_MASK) | + rm_build_desc(bdcount, HEADER_BDCOUNT_SHIFT, + HEADER_BDCOUNT_MASK) | + rm_build_desc(flags, HEADER_FLAGS_SHIFT, HEADER_FLAGS_MASK) | + rm_build_desc(opaque, HEADER_OPAQUE_SHIFT, HEADER_OPAQUE_MASK)); +} + +static void 
+bcmfs4_enqueue_desc(uint32_t nhpos, uint32_t nhcnt, + uint32_t reqid, uint64_t desc, + void **desc_ptr, uint32_t *toggle, + void *start_desc, void *end_desc) +{ + uint64_t d; + uint32_t nhavail, _toggle, _startpkt, _endpkt, _bdcount; + + /* + * Each request or packet start with a HEADER descriptor followed + * by one or more non-HEADER descriptors (SRC, SRCT, MSRC, DST, + * DSTT, MDST, IMM, and IMMT). The number of non-HEADER descriptors + * following a HEADER descriptor is represented by BDCOUNT field + * of HEADER descriptor. The max value of BDCOUNT field is 31 which + * means we can only have 31 non-HEADER descriptors following one + * HEADER descriptor. + * + * In general use, number of non-HEADER descriptors can easily go + * beyond 31. To tackle this situation, we have packet (or request) + * extension bits (STARTPKT and ENDPKT) in the HEADER descriptor. + * + * To use packet extension, the first HEADER descriptor of request + * (or packet) will have STARTPKT=1 and ENDPKT=0. The intermediate + * HEADER descriptors will have STARTPKT=0 and ENDPKT=0. The last + * HEADER descriptor will have STARTPKT=0 and ENDPKT=1. Also, the + * TOGGLE bit of the first HEADER will be set to invalid state to + * ensure that FlexDMA engine does not start fetching descriptors + * till all descriptors are enqueued. The user of this function + * will flip the TOGGLE bit of first HEADER after all descriptors + * are enqueued. + */ + + if ((nhpos % HEADER_BDCOUNT_MAX == 0) && (nhcnt - nhpos)) { + /* Prepare the header descriptor */ + nhavail = (nhcnt - nhpos); + _toggle = (nhpos == 0) ? !(*toggle) : (*toggle); + _startpkt = (nhpos == 0) ? 0x1 : 0x0; + _endpkt = (nhavail <= HEADER_BDCOUNT_MAX) ? 0x1 : 0x0; + _bdcount = (nhavail <= HEADER_BDCOUNT_MAX) ? 
+ nhavail : HEADER_BDCOUNT_MAX; + d = bcmfs4_header_desc(_toggle, _startpkt, _endpkt, + _bdcount, 0x0, reqid); + + /* Write header descriptor */ + rm_write_desc(*desc_ptr, d); + + /* Point to next descriptor */ + *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc); + if (*desc_ptr == end_desc) + *desc_ptr = start_desc; + + /* Skip next pointer descriptors */ + while (bcmfs4_is_next_table_desc(*desc_ptr)) { + *toggle = (*toggle) ? 0 : 1; + *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc); + if (*desc_ptr == end_desc) + *desc_ptr = start_desc; + } + } + + /* Write desired descriptor */ + rm_write_desc(*desc_ptr, desc); + + /* Point to next descriptor */ + *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc); + if (*desc_ptr == end_desc) + *desc_ptr = start_desc; + + /* Skip next pointer descriptors */ + while (bcmfs4_is_next_table_desc(*desc_ptr)) { + *toggle = (*toggle) ? 0 : 1; + *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc); + if (*desc_ptr == end_desc) + *desc_ptr = start_desc; + } +} + +static uint64_t +bcmfs4_src_desc(uint64_t addr, unsigned int length) +{ + return (rm_build_desc(SRC_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) | + rm_build_desc(length, SRC_LENGTH_SHIFT, SRC_LENGTH_MASK) | + rm_build_desc(addr, SRC_ADDR_SHIFT, SRC_ADDR_MASK)); +} + +static uint64_t +bcmfs4_msrc_desc(uint64_t addr, unsigned int length_div_16) +{ + return (rm_build_desc(MSRC_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) | + rm_build_desc(length_div_16, MSRC_LENGTH_SHIFT, MSRC_LENGTH_MASK) | + rm_build_desc(addr, MSRC_ADDR_SHIFT, MSRC_ADDR_MASK)); +} + +static uint64_t +bcmfs4_dst_desc(uint64_t addr, unsigned int length) +{ + return (rm_build_desc(DST_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) | + rm_build_desc(length, DST_LENGTH_SHIFT, DST_LENGTH_MASK) | + rm_build_desc(addr, DST_ADDR_SHIFT, DST_ADDR_MASK)); +} + +static uint64_t +bcmfs4_mdst_desc(uint64_t addr, unsigned int length_div_16) +{ + return
(rm_build_desc(MDST_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) | + rm_build_desc(length_div_16, MDST_LENGTH_SHIFT, MDST_LENGTH_MASK) | + rm_build_desc(addr, MDST_ADDR_SHIFT, MDST_ADDR_MASK)); +} + +static bool +bcmfs4_sanity_check(struct bcmfs_qp_message *msg) +{ + unsigned int i = 0; + + if (msg == NULL) + return false; + + for (i = 0; i < msg->srcs_count; i++) { + if (msg->srcs_len[i] & 0xf) { + if (msg->srcs_len[i] > SRC_LENGTH_MASK) + return false; + } else { + if (msg->srcs_len[i] > (MSRC_LENGTH_MASK * 16)) + return false; + } + } + for (i = 0; i < msg->dsts_count; i++) { + if (msg->dsts_len[i] & 0xf) { + if (msg->dsts_len[i] > DST_LENGTH_MASK) + return false; + } else { + if (msg->dsts_len[i] > (MDST_LENGTH_MASK * 16)) + return false; + } + } + + return true; +} + +static uint32_t +estimate_nonheader_desc_count(struct bcmfs_qp_message *msg) +{ + uint32_t cnt = 0; + unsigned int src = 0; + unsigned int dst = 0; + unsigned int dst_target = 0; + + while (src < msg->srcs_count || + dst < msg->dsts_count) { + if (src < msg->srcs_count) { + cnt++; + dst_target = msg->srcs_len[src]; + src++; + } else { + dst_target = UINT_MAX; + } + while (dst_target && dst < msg->dsts_count) { + cnt++; + if (msg->dsts_len[dst] < dst_target) + dst_target -= msg->dsts_len[dst]; + else + dst_target = 0; + dst++; + } + } + + return cnt; +} + +static void * +bcmfs4_enqueue_msg(struct bcmfs_qp_message *msg, + uint32_t nhcnt, uint32_t reqid, + void *desc_ptr, uint32_t toggle, + void *start_desc, void *end_desc) +{ + uint64_t d; + uint32_t nhpos = 0; + unsigned int src = 0; + unsigned int dst = 0; + unsigned int dst_target = 0; + void *orig_desc_ptr = desc_ptr; + + if (!desc_ptr || !start_desc || !end_desc) + return NULL; + + if (desc_ptr < start_desc || end_desc <= desc_ptr) + return NULL; + + while (src < msg->srcs_count || dst < msg->dsts_count) { + if (src < msg->srcs_count) { + if (msg->srcs_len[src] & 0xf) { + d = bcmfs4_src_desc(msg->srcs_addr[src], + msg->srcs_len[src]); + } else { + d 
= bcmfs4_msrc_desc(msg->srcs_addr[src], + msg->srcs_len[src] / 16); + } + bcmfs4_enqueue_desc(nhpos, nhcnt, reqid, + d, &desc_ptr, &toggle, + start_desc, end_desc); + nhpos++; + dst_target = msg->srcs_len[src]; + src++; + } else { + dst_target = UINT_MAX; + } + + while (dst_target && (dst < msg->dsts_count)) { + if (msg->dsts_len[dst] & 0xf) { + d = bcmfs4_dst_desc(msg->dsts_addr[dst], + msg->dsts_len[dst]); + } else { + d = bcmfs4_mdst_desc(msg->dsts_addr[dst], + msg->dsts_len[dst] / 16); + } + bcmfs4_enqueue_desc(nhpos, nhcnt, reqid, + d, &desc_ptr, &toggle, + start_desc, end_desc); + nhpos++; + if (msg->dsts_len[dst] < dst_target) + dst_target -= msg->dsts_len[dst]; + else + dst_target = 0; + dst++; /* for next buffer */ + } + } + + /* Null descriptor with invalid toggle bit */ + rm_write_desc(desc_ptr, bcmfs4_null_desc(!toggle)); + + /* Ensure that descriptors have been written to memory */ + rte_smp_wmb(); + + bcmfs4_flip_header_toggle(orig_desc_ptr); + + return desc_ptr; +} + +static int +bcmfs4_enqueue_single_request_qp(struct bcmfs_qp *qp, void *op) +{ + int reqid; + void *next; + uint32_t nhcnt; + int ret = 0; + uint32_t pos = 0; + uint64_t slab = 0; + uint8_t exit_cleanup = false; + struct bcmfs_queue *txq = &qp->tx_q; + struct bcmfs_qp_message *msg = (struct bcmfs_qp_message *)op; + + /* Do sanity check on message */ + if (!bcmfs4_sanity_check(msg)) { + BCMFS_DP_LOG(ERR, "Invalid msg on queue %d", qp->qpair_id); + return -EIO; + } + + /* Scan from the beginning */ + __rte_bitmap_scan_init(qp->ctx_bmp); + /* Scan bitmap to get the free pool */ + ret = rte_bitmap_scan(qp->ctx_bmp, &pos, &slab); + if (ret == 0) { + BCMFS_DP_LOG(ERR, "BD memory exhausted"); + return -ERANGE; + } + + reqid = pos + __builtin_ctzll(slab); + rte_bitmap_clear(qp->ctx_bmp, reqid); + qp->ctx_pool[reqid] = (unsigned long)msg; + + /* + * Number required descriptors = number of non-header descriptors + + * number of header descriptors + + * 1x null descriptor + */ + nhcnt = 
estimate_nonheader_desc_count(msg); + + /* Write descriptors to ring */ + next = bcmfs4_enqueue_msg(msg, nhcnt, reqid, + (uint8_t *)txq->base_addr + txq->tx_write_ptr, + RING_BD_TOGGLE_VALID(txq->tx_write_ptr), + txq->base_addr, + (uint8_t *)txq->base_addr + txq->queue_size); + if (next == NULL) { + BCMFS_DP_LOG(ERR, "Enqueue for desc failed on queue %d", + qp->qpair_id); + ret = -EINVAL; + exit_cleanup = true; + goto exit; + } + + /* Save ring BD write offset */ + txq->tx_write_ptr = (uint32_t)((uint8_t *)next - + (uint8_t *)txq->base_addr); + + qp->nb_pending_requests++; + + return 0; + +exit: + /* Cleanup if we failed */ + if (exit_cleanup) + rte_bitmap_set(qp->ctx_bmp, reqid); + + return ret; +} + +static void +bcmfs4_ring_doorbell_qp(struct bcmfs_qp *qp __rte_unused) +{ + /* no door bell method supported */ +} + +static uint16_t +bcmfs4_dequeue_qp(struct bcmfs_qp *qp, void **ops, uint16_t budget) +{ + int err; + uint16_t reqid; + uint64_t desc; + uint16_t count = 0; + unsigned long context = 0; + struct bcmfs_queue *hwq = &qp->cmpl_q; + uint32_t cmpl_read_offset, cmpl_write_offset; + + /* + * Check whether budget is valid, else set the budget to maximum + * so that all the available completions will be processed. + */ + if (budget > qp->nb_pending_requests) + budget = qp->nb_pending_requests; + + /* + * Get current completion read and write offset + * Note: We should read completion write pointer at least once + * after we get a MSI interrupt because HW maintains internal + * MSI status which will allow next MSI interrupt only after + * completion write pointer is read. 
+ */ + cmpl_write_offset = FS_MMIO_READ32((uint8_t *)qp->ioreg + + RING_CMPL_WRITE_PTR); + cmpl_write_offset *= FS_RING_DESC_SIZE; + cmpl_read_offset = hwq->cmpl_read_ptr; + + rte_smp_rmb(); + + /* For each completed request notify mailbox clients */ + reqid = 0; + while ((cmpl_read_offset != cmpl_write_offset) && (budget > 0)) { + /* Dequeue next completion descriptor */ + desc = *((uint64_t *)((uint8_t *)hwq->base_addr + + cmpl_read_offset)); + + /* Next read offset */ + cmpl_read_offset += FS_RING_DESC_SIZE; + if (cmpl_read_offset == FS_RING_CMPL_SIZE) + cmpl_read_offset = 0; + + /* Decode error from completion descriptor */ + err = rm_cmpl_desc_to_error(desc); + if (err < 0) + BCMFS_DP_LOG(ERR, "error desc rcvd"); + + /* Determine request id from completion descriptor */ + reqid = rm_cmpl_desc_to_reqid(desc); + + /* Determine message pointer based on reqid */ + context = qp->ctx_pool[reqid]; + if (context == 0) + BCMFS_DP_LOG(ERR, "HW error detected"); + + /* Release reqid for recycling */ + qp->ctx_pool[reqid] = 0; + rte_bitmap_set(qp->ctx_bmp, reqid); + + *ops = (void *)context; + + /* Increment number of completions processed */ + count++; + budget--; + ops++; + } + + hwq->cmpl_read_ptr = cmpl_read_offset; + + qp->nb_pending_requests -= count; + + return count; +} + +static int +bcmfs4_start_qp(struct bcmfs_qp *qp) +{ + int timeout; + uint32_t val, off; + uint64_t d, next_addr, msi; + struct bcmfs_queue *tx_queue = &qp->tx_q; + struct bcmfs_queue *cmpl_queue = &qp->cmpl_q; + + /* Disable/deactivate ring */ + FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL); + + /* Configure next table pointer entries in BD memory */ + for (off = 0; off < tx_queue->queue_size; off += FS_RING_DESC_SIZE) { + next_addr = off + FS_RING_DESC_SIZE; + if (next_addr == tx_queue->queue_size) + next_addr = 0; + next_addr += (uint64_t)tx_queue->base_phys_addr; + if (FS_RING_BD_ALIGN_CHECK(next_addr)) + d = bcmfs4_next_table_desc(RING_BD_TOGGLE_VALID(off), + next_addr); + else + 
d = bcmfs4_null_desc(RING_BD_TOGGLE_INVALID(off)); + rm_write_desc((uint8_t *)tx_queue->base_addr + off, d); + } + + /* + * If the user interrupts a test run midway (Ctrl+C), every + * subsequent run will fail because the sw cmpl_read_offset and hw + * cmpl_write_offset will point at different completion BDs. To + * handle this, flush all the rings at startup instead of in the + * shutdown function. + * Ring flush will reset the hw cmpl_write_offset. + */ + + /* Set ring flush state */ + timeout = 1000; + FS_MMIO_WRITE32(BIT(CONTROL_FLUSH_SHIFT), + (uint8_t *)qp->ioreg + RING_CONTROL); + do { + /* + * If the previous test was stopped midway, sw has to read + * cmpl_write_offset, else the DME/AE will not come out of + * the flush state. + */ + FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_CMPL_WRITE_PTR); + + if (FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_FLUSH_DONE) & + FLUSH_DONE_MASK) + break; + usleep(1000); + } while (--timeout); + if (!timeout) { + BCMFS_DP_LOG(ERR, "Ring flush timeout hw-queue %d", + qp->qpair_id); + } + + /* Clear ring flush state */ + timeout = 1000; + FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL); + do { + if (!(FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_FLUSH_DONE) & + FLUSH_DONE_MASK)) + break; + usleep(1000); + } while (--timeout); + if (!timeout) { + BCMFS_DP_LOG(ERR, "Ring clear flush timeout hw-queue %d", + qp->qpair_id); + } + + /* Program BD start address */ + val = BD_START_ADDR_VALUE(tx_queue->base_phys_addr); + FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_BD_START_ADDR); + + /* BD write pointer will be the same as the HW write pointer */ + tx_queue->tx_write_ptr = FS_MMIO_READ32((uint8_t *)qp->ioreg + + RING_BD_WRITE_PTR); + tx_queue->tx_write_ptr *= FS_RING_DESC_SIZE; + + for (off = 0; off < FS_RING_CMPL_SIZE; off += FS_RING_DESC_SIZE) + rm_write_desc((uint8_t *)cmpl_queue->base_addr + off, 0x0); + + /* Program completion start address */ + val = CMPL_START_ADDR_VALUE(cmpl_queue->base_phys_addr); + 
FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_CMPL_START_ADDR); + + /* Completion read pointer will be same as HW write pointer */ + cmpl_queue->cmpl_read_ptr = FS_MMIO_READ32((uint8_t *)qp->ioreg + + RING_CMPL_WRITE_PTR); + cmpl_queue->cmpl_read_ptr *= FS_RING_DESC_SIZE; + + /* Read ring Tx, Rx, and Outstanding counts to clear */ + FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_RECV_LS); + FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_RECV_MS); + FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_TRANS_LS); + FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_TRANS_MS); + FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_OUTSTAND); + + /* Configure per-Ring MSI registers with dummy location */ + /* We leave 1k * FS_RING_DESC_SIZE size from base phys for MSI */ + msi = cmpl_queue->base_phys_addr + (1024 * FS_RING_DESC_SIZE); + FS_MMIO_WRITE32((msi & 0xFFFFFFFF), + (uint8_t *)qp->ioreg + RING_MSI_ADDR_LS); + FS_MMIO_WRITE32(((msi >> 32) & 0xFFFFFFFF), + (uint8_t *)qp->ioreg + RING_MSI_ADDR_MS); + FS_MMIO_WRITE32(qp->qpair_id, + (uint8_t *)qp->ioreg + RING_MSI_DATA_VALUE); + + /* Configure RING_MSI_CONTROL */ + val = 0; + val |= (MSI_TIMER_VAL_MASK << MSI_TIMER_VAL_SHIFT); + val |= BIT(MSI_ENABLE_SHIFT); + val |= (0x1 & MSI_COUNT_MASK) << MSI_COUNT_SHIFT; + FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_MSI_CONTROL); + + /* Enable/activate ring */ + val = BIT(CONTROL_ACTIVE_SHIFT); + FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_CONTROL); + + return 0; +} + +static void +bcmfs4_shutdown_qp(struct bcmfs_qp *qp) +{ + /* Disable/deactivate ring */ + FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL); +} + +struct bcmfs_hw_queue_pair_ops bcmfs4_qp_ops = { + .name = "fs4", + .enq_one_req = bcmfs4_enqueue_single_request_qp, + .ring_db = bcmfs4_ring_doorbell_qp, + .dequeue = bcmfs4_dequeue_qp, + .startq = bcmfs4_start_qp, + .stopq = bcmfs4_shutdown_qp, +}; + +RTE_INIT(bcmfs4_register_qp_ops) +{ + bcmfs_hw_queue_pair_register_ops(&bcmfs4_qp_ops); +} 
diff --git a/drivers/crypto/bcmfs/hw/bcmfs5_rm.c b/drivers/crypto/bcmfs/hw/bcmfs5_rm.c new file mode 100644 index 000000000..00ea7a1b3 --- /dev/null +++ b/drivers/crypto/bcmfs/hw/bcmfs5_rm.c @@ -0,0 +1,677 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2020 Broadcom + * All rights reserved. + */ + +#include + +#include + +#include "bcmfs_qp.h" +#include "bcmfs_logs.h" +#include "bcmfs_dev_msg.h" +#include "bcmfs_device.h" +#include "bcmfs_hw_defs.h" +#include "bcmfs_rm_common.h" + +/* Ring version */ +#define RING_VER_MAGIC 0x76303032 + +/* Per-Ring register offsets */ +#define RING_VER 0x000 +#define RING_BD_START_ADDRESS_LSB 0x004 +#define RING_BD_READ_PTR 0x008 +#define RING_BD_WRITE_PTR 0x00c +#define RING_BD_READ_PTR_DDR_LS 0x010 +#define RING_BD_READ_PTR_DDR_MS 0x014 +#define RING_CMPL_START_ADDR_LSB 0x018 +#define RING_CMPL_WRITE_PTR 0x01c +#define RING_NUM_REQ_RECV_LS 0x020 +#define RING_NUM_REQ_RECV_MS 0x024 +#define RING_NUM_REQ_TRANS_LS 0x028 +#define RING_NUM_REQ_TRANS_MS 0x02c +#define RING_NUM_REQ_OUTSTAND 0x030 +#define RING_CONTROL 0x034 +#define RING_FLUSH_DONE 0x038 +#define RING_MSI_ADDR_LS 0x03c +#define RING_MSI_ADDR_MS 0x040 +#define RING_MSI_CONTROL 0x048 +#define RING_BD_READ_PTR_DDR_CONTROL 0x04c +#define RING_MSI_DATA_VALUE 0x064 +#define RING_BD_START_ADDRESS_MSB 0x078 +#define RING_CMPL_START_ADDR_MSB 0x07c +#define RING_DOORBELL_BD_WRITE_COUNT 0x074 + +/* Register RING_BD_START_ADDR fields */ +#define BD_LAST_UPDATE_HW_SHIFT 28 +#define BD_LAST_UPDATE_HW_MASK 0x1 +#define BD_START_ADDR_VALUE(pa) \ + ((uint32_t)((((uint64_t)(pa)) >> RING_BD_ALIGN_ORDER) & 0x0fffffff)) +#define BD_START_ADDR_DECODE(val) \ + ((uint64_t)((val) & 0x0fffffff) << RING_BD_ALIGN_ORDER) + +/* Register RING_CMPL_START_ADDR fields */ +#define CMPL_START_ADDR_VALUE(pa) \ + ((uint32_t)((((uint64_t)(pa)) >> RING_CMPL_ALIGN_ORDER) & 0x07ffffff)) + +/* Register RING_CONTROL fields */ +#define CONTROL_MASK_DISABLE_CONTROL 12 +#define CONTROL_FLUSH_SHIFT 5 
+#define CONTROL_ACTIVE_SHIFT 4 +#define CONTROL_RATE_ADAPT_MASK 0xf +#define CONTROL_RATE_DYNAMIC 0x0 +#define CONTROL_RATE_FAST 0x8 +#define CONTROL_RATE_MEDIUM 0x9 +#define CONTROL_RATE_SLOW 0xa +#define CONTROL_RATE_IDLE 0xb + +/* Register RING_FLUSH_DONE fields */ +#define FLUSH_DONE_MASK 0x1 + +/* Register RING_MSI_CONTROL fields */ +#define MSI_TIMER_VAL_SHIFT 16 +#define MSI_TIMER_VAL_MASK 0xffff +#define MSI_ENABLE_SHIFT 15 +#define MSI_ENABLE_MASK 0x1 +#define MSI_COUNT_SHIFT 0 +#define MSI_COUNT_MASK 0x3ff + +/* Register RING_BD_READ_PTR_DDR_CONTROL fields */ +#define BD_READ_PTR_DDR_TIMER_VAL_SHIFT 16 +#define BD_READ_PTR_DDR_TIMER_VAL_MASK 0xffff +#define BD_READ_PTR_DDR_ENABLE_SHIFT 15 +#define BD_READ_PTR_DDR_ENABLE_MASK 0x1 + +/* General descriptor format */ +#define DESC_TYPE_SHIFT 60 +#define DESC_TYPE_MASK 0xf +#define DESC_PAYLOAD_SHIFT 0 +#define DESC_PAYLOAD_MASK 0x0fffffffffffffff + +/* Null descriptor format */ +#define NULL_TYPE 0 +#define NULL_TOGGLE_SHIFT 59 +#define NULL_TOGGLE_MASK 0x1 + +/* Header descriptor format */ +#define HEADER_TYPE 1 +#define HEADER_TOGGLE_SHIFT 59 +#define HEADER_TOGGLE_MASK 0x1 +#define HEADER_ENDPKT_SHIFT 57 +#define HEADER_ENDPKT_MASK 0x1 +#define HEADER_STARTPKT_SHIFT 56 +#define HEADER_STARTPKT_MASK 0x1 +#define HEADER_BDCOUNT_SHIFT 36 +#define HEADER_BDCOUNT_MASK 0x1f +#define HEADER_BDCOUNT_MAX HEADER_BDCOUNT_MASK +#define HEADER_FLAGS_SHIFT 16 +#define HEADER_FLAGS_MASK 0xffff +#define HEADER_OPAQUE_SHIFT 0 +#define HEADER_OPAQUE_MASK 0xffff + +/* Source (SRC) descriptor format */ + +#define SRC_TYPE 2 +#define SRC_LENGTH_SHIFT 44 +#define SRC_LENGTH_MASK 0xffff +#define SRC_ADDR_SHIFT 0 +#define SRC_ADDR_MASK 0x00000fffffffffff + +/* Destination (DST) descriptor format */ +#define DST_TYPE 3 +#define DST_LENGTH_SHIFT 44 +#define DST_LENGTH_MASK 0xffff +#define DST_ADDR_SHIFT 0 +#define DST_ADDR_MASK 0x00000fffffffffff + +/* Next pointer (NPTR) descriptor format */ +#define NPTR_TYPE 5 +#define 
NPTR_TOGGLE_SHIFT 59 +#define NPTR_TOGGLE_MASK 0x1 +#define NPTR_ADDR_SHIFT 0 +#define NPTR_ADDR_MASK 0x00000fffffffffff + +/* Mega source (MSRC) descriptor format */ +#define MSRC_TYPE 6 +#define MSRC_LENGTH_SHIFT 44 +#define MSRC_LENGTH_MASK 0xffff +#define MSRC_ADDR_SHIFT 0 +#define MSRC_ADDR_MASK 0x00000fffffffffff + +/* Mega destination (MDST) descriptor format */ +#define MDST_TYPE 7 +#define MDST_LENGTH_SHIFT 44 +#define MDST_LENGTH_MASK 0xffff +#define MDST_ADDR_SHIFT 0 +#define MDST_ADDR_MASK 0x00000fffffffffff + +static uint8_t +bcmfs5_is_next_table_desc(void *desc_ptr) +{ + uint64_t desc = rm_read_desc(desc_ptr); + uint32_t type = FS_DESC_DEC(desc, DESC_TYPE_SHIFT, DESC_TYPE_MASK); + + return (type == NPTR_TYPE) ? true : false; +} + +static uint64_t +bcmfs5_next_table_desc(uint64_t next_addr) +{ + return (rm_build_desc(NPTR_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) | + rm_build_desc(next_addr, NPTR_ADDR_SHIFT, NPTR_ADDR_MASK)); +} + +static uint64_t +bcmfs5_null_desc(void) +{ + return rm_build_desc(NULL_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK); +} + +static uint64_t +bcmfs5_header_desc(uint32_t startpkt, uint32_t endpkt, + uint32_t bdcount, uint32_t flags, + uint32_t opaque) +{ + return (rm_build_desc(HEADER_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) | + rm_build_desc(startpkt, HEADER_STARTPKT_SHIFT, + HEADER_STARTPKT_MASK) | + rm_build_desc(endpkt, HEADER_ENDPKT_SHIFT, HEADER_ENDPKT_MASK) | + rm_build_desc(bdcount, HEADER_BDCOUNT_SHIFT, HEADER_BDCOUNT_MASK) | + rm_build_desc(flags, HEADER_FLAGS_SHIFT, HEADER_FLAGS_MASK) | + rm_build_desc(opaque, HEADER_OPAQUE_SHIFT, HEADER_OPAQUE_MASK)); +} + +static int +bcmfs5_enqueue_desc(uint32_t nhpos, uint32_t nhcnt, + uint32_t reqid, uint64_t desc, + void **desc_ptr, void *start_desc, + void *end_desc) +{ + uint64_t d; + uint32_t nhavail, _startpkt, _endpkt, _bdcount; + int is_nxt_page = 0; + + /* + * Each request (or packet) starts with a HEADER descriptor followed + * by one or more non-HEADER descriptors (SRC, SRCT, MSRC, DST, + * DSTT, MDST, IMM, and IMMT). The number of non-HEADER descriptors + * following a HEADER descriptor is given by the BDCOUNT field of + * the HEADER descriptor. The maximum value of BDCOUNT is 31, so at + * most 31 non-HEADER descriptors can follow one HEADER descriptor. + * + * In general use, the number of non-HEADER descriptors can easily + * exceed 31. To handle this, the HEADER descriptor carries packet + * (or request) extension bits (STARTPKT and ENDPKT). + * + * With packet extension, the first HEADER descriptor of a request + * (or packet) has STARTPKT=1 and ENDPKT=0, intermediate HEADER + * descriptors have STARTPKT=0 and ENDPKT=0, and the last HEADER + * descriptor has STARTPKT=0 and ENDPKT=1. + */ + + if ((nhpos % HEADER_BDCOUNT_MAX == 0) && (nhcnt - nhpos)) { + /* Prepare the header descriptor */ + nhavail = (nhcnt - nhpos); + _startpkt = (nhpos == 0) ? 0x1 : 0x0; + _endpkt = (nhavail <= HEADER_BDCOUNT_MAX) ? 0x1 : 0x0; + _bdcount = (nhavail <= HEADER_BDCOUNT_MAX) ? 
+ nhavail : HEADER_BDCOUNT_MAX; + d = bcmfs5_header_desc(_startpkt, _endpkt, + _bdcount, 0x0, reqid); + + /* Write header descriptor */ + rm_write_desc(*desc_ptr, d); + + /* Point to next descriptor */ + *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc); + if (*desc_ptr == end_desc) + *desc_ptr = start_desc; + + /* Skip next pointer descriptors */ + while (bcmfs5_is_next_table_desc(*desc_ptr)) { + is_nxt_page = 1; + *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc); + if (*desc_ptr == end_desc) + *desc_ptr = start_desc; + } + } + + /* Write desired descriptor */ + rm_write_desc(*desc_ptr, desc); + + /* Point to next descriptor */ + *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc); + if (*desc_ptr == end_desc) + *desc_ptr = start_desc; + + /* Skip next pointer descriptors */ + while (bcmfs5_is_next_table_desc(*desc_ptr)) { + is_nxt_page = 1; + *desc_ptr = (uint8_t *)*desc_ptr + sizeof(desc); + if (*desc_ptr == end_desc) + *desc_ptr = start_desc; + } + + return is_nxt_page; +} + +static uint64_t +bcmfs5_src_desc(uint64_t addr, unsigned int len) +{ + return (rm_build_desc(SRC_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) | + rm_build_desc(len, SRC_LENGTH_SHIFT, SRC_LENGTH_MASK) | + rm_build_desc(addr, SRC_ADDR_SHIFT, SRC_ADDR_MASK)); +} + +static uint64_t +bcmfs5_msrc_desc(uint64_t addr, unsigned int len_div_16) +{ + return (rm_build_desc(MSRC_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) | + rm_build_desc(len_div_16, MSRC_LENGTH_SHIFT, MSRC_LENGTH_MASK) | + rm_build_desc(addr, MSRC_ADDR_SHIFT, MSRC_ADDR_MASK)); +} + +static uint64_t +bcmfs5_dst_desc(uint64_t addr, unsigned int len) +{ + return (rm_build_desc(DST_TYPE, DESC_TYPE_SHIFT, DESC_TYPE_MASK) | + rm_build_desc(len, DST_LENGTH_SHIFT, DST_LENGTH_MASK) | + rm_build_desc(addr, DST_ADDR_SHIFT, DST_ADDR_MASK)); +} + +static uint64_t +bcmfs5_mdst_desc(uint64_t addr, unsigned int len_div_16) +{ + return (rm_build_desc(MDST_TYPE, 
DESC_TYPE_SHIFT, DESC_TYPE_MASK) | + rm_build_desc(len_div_16, MDST_LENGTH_SHIFT, MDST_LENGTH_MASK) | + rm_build_desc(addr, MDST_ADDR_SHIFT, MDST_ADDR_MASK)); +} + +static bool +bcmfs5_sanity_check(struct bcmfs_qp_message *msg) +{ + unsigned int i = 0; + + if (msg == NULL) + return false; + + for (i = 0; i < msg->srcs_count; i++) { + if (msg->srcs_len[i] & 0xf) { + if (msg->srcs_len[i] > SRC_LENGTH_MASK) + return false; + } else { + if (msg->srcs_len[i] > (MSRC_LENGTH_MASK * 16)) + return false; + } + } + for (i = 0; i < msg->dsts_count; i++) { + if (msg->dsts_len[i] & 0xf) { + if (msg->dsts_len[i] > DST_LENGTH_MASK) + return false; + } else { + if (msg->dsts_len[i] > (MDST_LENGTH_MASK * 16)) + return false; + } + } + + return true; +} + +static void * +bcmfs5_enqueue_msg(struct bcmfs_queue *txq, + struct bcmfs_qp_message *msg, + uint32_t reqid, void *desc_ptr, + void *start_desc, void *end_desc) +{ + uint64_t d; + unsigned int src, dst; + uint32_t nhpos = 0; + int nxt_page = 0; + uint32_t nhcnt = msg->srcs_count + msg->dsts_count; + + if (desc_ptr == NULL || start_desc == NULL || end_desc == NULL) + return NULL; + + if (desc_ptr < start_desc || end_desc <= desc_ptr) + return NULL; + + for (src = 0; src < msg->srcs_count; src++) { + if (msg->srcs_len[src] & 0xf) + d = bcmfs5_src_desc(msg->srcs_addr[src], + msg->srcs_len[src]); + else + d = bcmfs5_msrc_desc(msg->srcs_addr[src], + msg->srcs_len[src] / 16); + + nxt_page = bcmfs5_enqueue_desc(nhpos, nhcnt, reqid, + d, &desc_ptr, start_desc, + end_desc); + if (nxt_page) + txq->descs_inflight++; + nhpos++; + } + + for (dst = 0; dst < msg->dsts_count; dst++) { + if (msg->dsts_len[dst] & 0xf) + d = bcmfs5_dst_desc(msg->dsts_addr[dst], + msg->dsts_len[dst]); + else + d = bcmfs5_mdst_desc(msg->dsts_addr[dst], + msg->dsts_len[dst] / 16); + + nxt_page = bcmfs5_enqueue_desc(nhpos, nhcnt, reqid, + d, &desc_ptr, start_desc, + end_desc); + if (nxt_page) + txq->descs_inflight++; + nhpos++; + } + + txq->descs_inflight += nhcnt + 1; 
+ + return desc_ptr; +} + +static int +bcmfs5_enqueue_single_request_qp(struct bcmfs_qp *qp, void *op) +{ + void *next; + int reqid; + int ret = 0; + uint64_t slab = 0; + uint32_t pos = 0; + uint8_t exit_cleanup = false; + struct bcmfs_queue *txq = &qp->tx_q; + struct bcmfs_qp_message *msg = (struct bcmfs_qp_message *)op; + + /* Do sanity check on message */ + if (!bcmfs5_sanity_check(msg)) { + BCMFS_DP_LOG(ERR, "Invalid msg on queue %d", qp->qpair_id); + return -EIO; + } + + /* Scan from the beginning */ + __rte_bitmap_scan_init(qp->ctx_bmp); + /* Scan bitmap to get the free pool */ + ret = rte_bitmap_scan(qp->ctx_bmp, &pos, &slab); + if (ret == 0) { + BCMFS_DP_LOG(ERR, "BD memory exhausted"); + return -ERANGE; + } + + reqid = pos + __builtin_ctzll(slab); + rte_bitmap_clear(qp->ctx_bmp, reqid); + qp->ctx_pool[reqid] = (unsigned long)msg; + + /* Write descriptors to ring */ + next = bcmfs5_enqueue_msg(txq, msg, reqid, + (uint8_t *)txq->base_addr + txq->tx_write_ptr, + txq->base_addr, + (uint8_t *)txq->base_addr + txq->queue_size); + if (next == NULL) { + BCMFS_DP_LOG(ERR, "Enqueue for desc failed on queue %d", + qp->qpair_id); + ret = -EINVAL; + exit_cleanup = true; + goto exit; + } + + /* Save ring BD write offset */ + txq->tx_write_ptr = (uint32_t)((uint8_t *)next - + (uint8_t *)txq->base_addr); + + qp->nb_pending_requests++; + + return 0; + +exit: + /* Cleanup if we failed */ + if (exit_cleanup) + rte_bitmap_set(qp->ctx_bmp, reqid); + + return ret; +} + +static void bcmfs5_write_doorbell(struct bcmfs_qp *qp) +{ + struct bcmfs_queue *txq = &qp->tx_q; + + /* sync before ringing the doorbell */ + rte_wmb(); + + FS_MMIO_WRITE32(txq->descs_inflight, + (uint8_t *)qp->ioreg + RING_DOORBELL_BD_WRITE_COUNT); + + /* reset the count */ + txq->descs_inflight = 0; +} + +static uint16_t +bcmfs5_dequeue_qp(struct bcmfs_qp *qp, void **ops, uint16_t budget) +{ + int err; + uint16_t reqid; + uint64_t desc; + uint16_t count = 0; + unsigned long context = 0; + struct 
bcmfs_queue *hwq = &qp->cmpl_q; + uint32_t cmpl_read_offset, cmpl_write_offset; + + /* + * Check whether budget is valid, else set the budget to maximum + * so that all the available completions will be processed. + */ + if (budget > qp->nb_pending_requests) + budget = qp->nb_pending_requests; + + /* + * Get current completion read and write offset + * + * Note: We should read completion write pointer at least once + * after we get a MSI interrupt because HW maintains internal + * MSI status which will allow next MSI interrupt only after + * completion write pointer is read. + */ + cmpl_write_offset = FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_CMPL_WRITE_PTR); + cmpl_write_offset *= FS_RING_DESC_SIZE; + cmpl_read_offset = hwq->cmpl_read_ptr; + + /* read the ring cmpl write ptr before cmpl read offset */ + rte_smp_rmb(); + + /* For each completed request notify mailbox clients */ + reqid = 0; + while ((cmpl_read_offset != cmpl_write_offset) && (budget > 0)) { + /* Dequeue next completion descriptor */ + desc = *((uint64_t *)((uint8_t *)hwq->base_addr + + cmpl_read_offset)); + + /* Next read offset */ + cmpl_read_offset += FS_RING_DESC_SIZE; + if (cmpl_read_offset == FS_RING_CMPL_SIZE) + cmpl_read_offset = 0; + + /* Decode error from completion descriptor */ + err = rm_cmpl_desc_to_error(desc); + if (err < 0) + BCMFS_DP_LOG(ERR, "error desc rcvd"); + + /* Determine request id from completion descriptor */ + reqid = rm_cmpl_desc_to_reqid(desc); + + /* Retrieve context */ + context = qp->ctx_pool[reqid]; + if (context == 0) + BCMFS_DP_LOG(ERR, "HW error detected"); + + /* Release reqid for recycling */ + qp->ctx_pool[reqid] = 0; + rte_bitmap_set(qp->ctx_bmp, reqid); + + *ops = (void *)context; + + /* Increment number of completions processed */ + count++; + budget--; + ops++; + } + + hwq->cmpl_read_ptr = cmpl_read_offset; + + qp->nb_pending_requests -= count; + + return count; +} + +static int +bcmfs5_start_qp(struct bcmfs_qp *qp) +{ + uint32_t val, off; + uint64_t d, 
next_addr, msi; + int timeout; + uint32_t bd_high, bd_low, cmpl_high, cmpl_low; + struct bcmfs_queue *tx_queue = &qp->tx_q; + struct bcmfs_queue *cmpl_queue = &qp->cmpl_q; + + /* Disable/deactivate ring */ + FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL); + + /* Configure next table pointer entries in BD memory */ + for (off = 0; off < tx_queue->queue_size; off += FS_RING_DESC_SIZE) { + next_addr = off + FS_RING_DESC_SIZE; + if (next_addr == tx_queue->queue_size) + next_addr = 0; + next_addr += (uint64_t)tx_queue->base_phys_addr; + if (FS_RING_BD_ALIGN_CHECK(next_addr)) + d = bcmfs5_next_table_desc(next_addr); + else + d = bcmfs5_null_desc(); + rm_write_desc((uint8_t *)tx_queue->base_addr + off, d); + } + + /* + * If the user interrupts a test run midway (Ctrl+C), every + * subsequent run will fail because the sw cmpl_read_offset and hw + * cmpl_write_offset will point at different completion BDs. To + * handle this, flush all the rings at startup instead of in the + * shutdown function. + * Ring flush will reset the hw cmpl_write_offset. + */ + + /* Set ring flush state */ + timeout = 1000; + FS_MMIO_WRITE32(BIT(CONTROL_FLUSH_SHIFT), + (uint8_t *)qp->ioreg + RING_CONTROL); + do { + /* + * If the previous test was stopped midway, sw has to read + * cmpl_write_offset, else the DME/AE will not come out of + * the flush state. 
+ */ + FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_CMPL_WRITE_PTR); + + if (FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_FLUSH_DONE) & + FLUSH_DONE_MASK) + break; + usleep(1000); + } while (--timeout); + if (!timeout) { + BCMFS_DP_LOG(ERR, "Ring flush timeout hw-queue %d", + qp->qpair_id); + } + + /* Clear ring flush state */ + timeout = 1000; + FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL); + do { + if (!(FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_FLUSH_DONE) & + FLUSH_DONE_MASK)) + break; + usleep(1000); + } while (--timeout); + if (!timeout) { + BCMFS_DP_LOG(ERR, "Ring clear flush timeout hw-queue %d", + qp->qpair_id); + } + + /* Program BD start address */ + bd_low = lower_32_bits(tx_queue->base_phys_addr); + bd_high = upper_32_bits(tx_queue->base_phys_addr); + FS_MMIO_WRITE32(bd_low, (uint8_t *)qp->ioreg + + RING_BD_START_ADDRESS_LSB); + FS_MMIO_WRITE32(bd_high, (uint8_t *)qp->ioreg + + RING_BD_START_ADDRESS_MSB); + + tx_queue->tx_write_ptr = 0; + + for (off = 0; off < FS_RING_CMPL_SIZE; off += FS_RING_DESC_SIZE) + rm_write_desc((uint8_t *)cmpl_queue->base_addr + off, 0x0); + + /* Completion read pointer will be same as HW write pointer */ + cmpl_queue->cmpl_read_ptr = FS_MMIO_READ32((uint8_t *)qp->ioreg + + RING_CMPL_WRITE_PTR); + /* Program completion start address */ + cmpl_low = lower_32_bits(cmpl_queue->base_phys_addr); + cmpl_high = upper_32_bits(cmpl_queue->base_phys_addr); + FS_MMIO_WRITE32(cmpl_low, (uint8_t *)qp->ioreg + + RING_CMPL_START_ADDR_LSB); + FS_MMIO_WRITE32(cmpl_high, (uint8_t *)qp->ioreg + + RING_CMPL_START_ADDR_MSB); + + cmpl_queue->cmpl_read_ptr *= FS_RING_DESC_SIZE; + + /* Read ring Tx, Rx, and Outstanding counts to clear */ + FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_RECV_LS); + FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_RECV_MS); + FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_TRANS_LS); + FS_MMIO_READ32((uint8_t *)qp->ioreg + RING_NUM_REQ_TRANS_MS); + FS_MMIO_READ32((uint8_t *)qp->ioreg + 
RING_NUM_REQ_OUTSTAND); + + /* Configure per-Ring MSI registers with dummy location */ + msi = cmpl_queue->base_phys_addr + (1024 * FS_RING_DESC_SIZE); + FS_MMIO_WRITE32((msi & 0xFFFFFFFF), + (uint8_t *)qp->ioreg + RING_MSI_ADDR_LS); + FS_MMIO_WRITE32(((msi >> 32) & 0xFFFFFFFF), + (uint8_t *)qp->ioreg + RING_MSI_ADDR_MS); + FS_MMIO_WRITE32(qp->qpair_id, (uint8_t *)qp->ioreg + + RING_MSI_DATA_VALUE); + + /* Configure RING_MSI_CONTROL */ + val = 0; + val |= (MSI_TIMER_VAL_MASK << MSI_TIMER_VAL_SHIFT); + val |= BIT(MSI_ENABLE_SHIFT); + val |= (0x1 & MSI_COUNT_MASK) << MSI_COUNT_SHIFT; + FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_MSI_CONTROL); + + /* Enable/activate ring */ + val = BIT(CONTROL_ACTIVE_SHIFT); + FS_MMIO_WRITE32(val, (uint8_t *)qp->ioreg + RING_CONTROL); + + return 0; +} + +static void +bcmfs5_shutdown_qp(struct bcmfs_qp *qp) +{ + /* Disable/deactivate ring */ + FS_MMIO_WRITE32(0x0, (uint8_t *)qp->ioreg + RING_CONTROL); +} + +struct bcmfs_hw_queue_pair_ops bcmfs5_qp_ops = { + .name = "fs5", + .enq_one_req = bcmfs5_enqueue_single_request_qp, + .ring_db = bcmfs5_write_doorbell, + .dequeue = bcmfs5_dequeue_qp, + .startq = bcmfs5_start_qp, + .stopq = bcmfs5_shutdown_qp, +}; + +RTE_INIT(bcmfs5_register_qp_ops) +{ + bcmfs_hw_queue_pair_register_ops(&bcmfs5_qp_ops); +} diff --git a/drivers/crypto/bcmfs/hw/bcmfs_rm_common.c b/drivers/crypto/bcmfs/hw/bcmfs_rm_common.c new file mode 100644 index 000000000..9445d28f9 --- /dev/null +++ b/drivers/crypto/bcmfs/hw/bcmfs_rm_common.c @@ -0,0 +1,82 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2020 Broadcom. + * All rights reserved. 
+ */ + +#include "bcmfs_hw_defs.h" +#include "bcmfs_rm_common.h" + +/* Completion descriptor format */ +#define FS_CMPL_OPAQUE_SHIFT 0 +#define FS_CMPL_OPAQUE_MASK 0xffff +#define FS_CMPL_ENGINE_STATUS_SHIFT 16 +#define FS_CMPL_ENGINE_STATUS_MASK 0xffff +#define FS_CMPL_DME_STATUS_SHIFT 32 +#define FS_CMPL_DME_STATUS_MASK 0xffff +#define FS_CMPL_RM_STATUS_SHIFT 48 +#define FS_CMPL_RM_STATUS_MASK 0xffff +/* Completion RM status code */ +#define FS_RM_STATUS_CODE_SHIFT 0 +#define FS_RM_STATUS_CODE_MASK 0x3ff +#define FS_RM_STATUS_CODE_GOOD 0x0 +#define FS_RM_STATUS_CODE_AE_TIMEOUT 0x3ff + + +/* Completion DME status code */ +#define FS_DME_STATUS_MEM_COR_ERR BIT(0) +#define FS_DME_STATUS_MEM_UCOR_ERR BIT(1) +#define FS_DME_STATUS_FIFO_UNDRFLOW BIT(2) +#define FS_DME_STATUS_FIFO_OVERFLOW BIT(3) +#define FS_DME_STATUS_RRESP_ERR BIT(4) +#define FS_DME_STATUS_BRESP_ERR BIT(5) +#define FS_DME_STATUS_ERROR_MASK (FS_DME_STATUS_MEM_COR_ERR | \ + FS_DME_STATUS_MEM_UCOR_ERR | \ + FS_DME_STATUS_FIFO_UNDRFLOW | \ + FS_DME_STATUS_FIFO_OVERFLOW | \ + FS_DME_STATUS_RRESP_ERR | \ + FS_DME_STATUS_BRESP_ERR) + +/* APIs related to ring manager descriptors */ +uint64_t +rm_build_desc(uint64_t val, uint32_t shift, + uint64_t mask) +{ + return((val & mask) << shift); +} + +uint64_t +rm_read_desc(void *desc_ptr) +{ + return le64_to_cpu(*((uint64_t *)desc_ptr)); +} + +void +rm_write_desc(void *desc_ptr, uint64_t desc) +{ + *((uint64_t *)desc_ptr) = cpu_to_le64(desc); +} + +uint32_t +rm_cmpl_desc_to_reqid(uint64_t cmpl_desc) +{ + return (uint32_t)(cmpl_desc & FS_CMPL_OPAQUE_MASK); +} + +int +rm_cmpl_desc_to_error(uint64_t cmpl_desc) +{ + uint32_t status; + + status = FS_DESC_DEC(cmpl_desc, FS_CMPL_DME_STATUS_SHIFT, + FS_CMPL_DME_STATUS_MASK); + if (status & FS_DME_STATUS_ERROR_MASK) + return -EIO; + + status = FS_DESC_DEC(cmpl_desc, FS_CMPL_RM_STATUS_SHIFT, + FS_CMPL_RM_STATUS_MASK); + status &= FS_RM_STATUS_CODE_MASK; + if (status == FS_RM_STATUS_CODE_AE_TIMEOUT) + return -ETIMEDOUT; + + 
return 0; +} diff --git a/drivers/crypto/bcmfs/hw/bcmfs_rm_common.h b/drivers/crypto/bcmfs/hw/bcmfs_rm_common.h new file mode 100644 index 000000000..5cbafa0da --- /dev/null +++ b/drivers/crypto/bcmfs/hw/bcmfs_rm_common.h @@ -0,0 +1,46 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2020 Broadcom + * All rights reserved. + */ + +#ifndef _BCMFS_RM_COMMON_H_ +#define _BCMFS_RM_COMMON_H_ + +#include +#include +#include + +/* Descriptor helper macros */ +#define FS_DESC_DEC(d, s, m) (((d) >> (s)) & (m)) + +#define FS_RING_BD_ALIGN_CHECK(addr) \ + (!((addr) & ((0x1 << FS_RING_BD_ALIGN_ORDER) - 1))) + +#define cpu_to_le64 rte_cpu_to_le_64 +#define cpu_to_le32 rte_cpu_to_le_32 +#define cpu_to_le16 rte_cpu_to_le_16 + +#define le64_to_cpu rte_le_to_cpu_64 +#define le32_to_cpu rte_le_to_cpu_32 +#define le16_to_cpu rte_le_to_cpu_16 + +#define lower_32_bits(x) ((uint32_t)(x)) +#define upper_32_bits(x) ((uint32_t)(((x) >> 16) >> 16)) + +uint64_t +rm_build_desc(uint64_t val, uint32_t shift, + uint64_t mask); +uint64_t +rm_read_desc(void *desc_ptr); + +void +rm_write_desc(void *desc_ptr, uint64_t desc); + +uint32_t +rm_cmpl_desc_to_reqid(uint64_t cmpl_desc); + +int +rm_cmpl_desc_to_error(uint64_t cmpl_desc); + +#endif /* _BCMFS_RM_COMMON_H_ */ + diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build index 7e2bcbf14..cd58bd5e2 100644 --- a/drivers/crypto/bcmfs/meson.build +++ b/drivers/crypto/bcmfs/meson.build @@ -8,5 +8,8 @@ sources = files( 'bcmfs_logs.c', 'bcmfs_device.c', 'bcmfs_vfio.c', - 'bcmfs_qp.c' + 'bcmfs_qp.c', + 'hw/bcmfs4_rm.c', + 'hw/bcmfs5_rm.c', + 'hw/bcmfs_rm_common.c' ) From patchwork Thu Aug 13 17:23:41 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vikas Gupta X-Patchwork-Id: 75506 X-Patchwork-Delegate: gakhil@marvell.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org 
[92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id B501AA04B0; Thu, 13 Aug 2020 19:25:00 +0200 (CEST) Received: from rahul_yocto_ubuntu18.ibn.broadcom.net ([192.19.234.250]) by 
From: Vikas Gupta
To: dev@dpdk.org, akhil.goyal@nxp.com
Cc: vikram.prakash@broadcom.com, Vikas Gupta , Raveendra Padasalagi
Date: Thu, 13 Aug 2020 22:53:41 +0530
Message-Id: <20200813172344.3228-6-vikas.gupta@broadcom.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200813172344.3228-1-vikas.gupta@broadcom.com>
References: <20200812063127.8687-1-vikas.gupta@broadcom.com> <20200813172344.3228-1-vikas.gupta@broadcom.com>
Subject: [dpdk-dev] [PATCH v2 5/8] crypto/bcmfs: create a symmetric cryptodev

Create a symmetric crypto device and supported cryptodev ops.

Signed-off-by: Vikas Gupta
Signed-off-by: Raveendra Padasalagi
Reviewed-by: Ajit Khaparde
---
 drivers/crypto/bcmfs/bcmfs_device.c  |  15 ++
 drivers/crypto/bcmfs/bcmfs_device.h  |   9 +
 drivers/crypto/bcmfs/bcmfs_qp.c      |  37 +++
 drivers/crypto/bcmfs/bcmfs_qp.h      |  16 ++
 drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 387 +++++++++++++++++++++++++++
 drivers/crypto/bcmfs/bcmfs_sym_pmd.h |  38 +++
 drivers/crypto/bcmfs/bcmfs_sym_req.h |  22 ++
 drivers/crypto/bcmfs/meson.build     |   3 +-
 8 files changed, 526 insertions(+), 1 deletion(-)
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.c
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_pmd.h
 create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_req.h

diff --git a/drivers/crypto/bcmfs/bcmfs_device.c b/drivers/crypto/bcmfs/bcmfs_device.c
index bd2d64acf..c9263ec28 100644
--- a/drivers/crypto/bcmfs/bcmfs_device.c
+++ b/drivers/crypto/bcmfs/bcmfs_device.c
@@ -13,6 +13,7 @@
 #include "bcmfs_logs.h"
 #include "bcmfs_qp.h"
 #include "bcmfs_vfio.h"
+#include "bcmfs_sym_pmd.h"

 struct bcmfs_device_attr {
	const
char name[BCMFS_MAX_PATH_LEN]; @@ -239,6 +240,7 @@ bcmfs_vdev_probe(struct rte_vdev_device *vdev) char out_dirname[BCMFS_MAX_PATH_LEN]; uint32_t fsdev_dev[BCMFS_MAX_NODES]; enum bcmfs_device_type dtype; + int err; int i = 0; int dev_idx; int count = 0; @@ -290,7 +292,20 @@ bcmfs_vdev_probe(struct rte_vdev_device *vdev) return -ENODEV; } + err = bcmfs_sym_dev_create(fsdev); + if (err) { + BCMFS_LOG(WARNING, + "Failed to create BCMFS SYM PMD for device %s", + fsdev->name); + goto pmd_create_fail; + } + return 0; + +pmd_create_fail: + fsdev_release(fsdev); + + return err; } static int diff --git a/drivers/crypto/bcmfs/bcmfs_device.h b/drivers/crypto/bcmfs/bcmfs_device.h index 9e40c5d74..e8a9c4091 100644 --- a/drivers/crypto/bcmfs/bcmfs_device.h +++ b/drivers/crypto/bcmfs/bcmfs_device.h @@ -62,6 +62,15 @@ struct bcmfs_device { struct bcmfs_qp *qps_in_use[BCMFS_MAX_HW_QUEUES]; /* queue pair ops exported by symmetric crypto hw */ struct bcmfs_hw_queue_pair_ops *sym_hw_qp_ops; + /* a cryptodevice attached to bcmfs device */ + struct rte_cryptodev *cdev; + /* a rte_device to register with cryptodev */ + struct rte_device sym_rte_dev; + /* private info to keep with cryptodev */ + struct bcmfs_sym_dev_private *sym_dev; }; +/* stats exported by device */ + + #endif /* _BCMFS_DEV_H_ */ diff --git a/drivers/crypto/bcmfs/bcmfs_qp.c b/drivers/crypto/bcmfs/bcmfs_qp.c index ec1327b78..cb5ff6c61 100644 --- a/drivers/crypto/bcmfs/bcmfs_qp.c +++ b/drivers/crypto/bcmfs/bcmfs_qp.c @@ -344,3 +344,40 @@ bcmfs_dequeue_op_burst(void *qp, void **ops, uint16_t nb_ops) return deq; } + +void bcmfs_qp_stats_get(struct bcmfs_qp **qp, int num_qp, + struct bcmfs_qp_stats *stats) +{ + int i; + + if (stats == NULL) { + BCMFS_LOG(ERR, "invalid param: stats %p", + stats); + return; + } + + for (i = 0; i < num_qp; i++) { + if (qp[i] == NULL) { + BCMFS_LOG(DEBUG, "Uninitialised qp %d", i); + continue; + } + + stats->enqueued_count += qp[i]->stats.enqueued_count; + stats->dequeued_count += 
qp[i]->stats.dequeued_count; + stats->enqueue_err_count += qp[i]->stats.enqueue_err_count; + stats->dequeue_err_count += qp[i]->stats.dequeue_err_count; + } +} + +void bcmfs_qp_stats_reset(struct bcmfs_qp **qp, int num_qp) +{ + int i; + + for (i = 0; i < num_qp; i++) { + if (qp[i] == NULL) { + BCMFS_LOG(DEBUG, "Uninitialised qp %d", i); + continue; + } + memset(&qp[i]->stats, 0, sizeof(qp[i]->stats)); + } +} diff --git a/drivers/crypto/bcmfs/bcmfs_qp.h b/drivers/crypto/bcmfs/bcmfs_qp.h index e4b0c3f2f..fec58ca71 100644 --- a/drivers/crypto/bcmfs/bcmfs_qp.h +++ b/drivers/crypto/bcmfs/bcmfs_qp.h @@ -24,6 +24,13 @@ enum bcmfs_queue_type { BCMFS_RM_CPLQ }; +#define BCMFS_QP_IOBASE_XLATE(base, idx) \ + ((base) + ((idx) * BCMFS_HW_QUEUE_IO_ADDR_LEN)) + +/* Max pkts for preprocessing before submitting to h/w qp */ +#define BCMFS_MAX_REQS_BUFF 64 + +/* qp stats */ struct bcmfs_qp_stats { /* Count of all operations enqueued */ uint64_t enqueued_count; @@ -92,6 +99,10 @@ struct bcmfs_qp { struct bcmfs_qp_stats stats; /* h/w ops associated with qp */ struct bcmfs_hw_queue_pair_ops *ops; + /* bcmfs requests pool*/ + struct rte_mempool *sr_mp; + /* a temporary buffer to keep message pointers */ + struct bcmfs_qp_message *infl_msgs[BCMFS_MAX_REQS_BUFF]; } __rte_cache_aligned; @@ -123,4 +134,9 @@ bcmfs_qp_setup(struct bcmfs_qp **qp_addr, uint16_t queue_pair_id, struct bcmfs_qp_config *bcmfs_conf); +/* stats functions*/ +void bcmfs_qp_stats_get(struct bcmfs_qp **qp, int num_qp, + struct bcmfs_qp_stats *stats); +void bcmfs_qp_stats_reset(struct bcmfs_qp **qp, int num_qp); + #endif /* _BCMFS_QP_H_ */ diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c new file mode 100644 index 000000000..0f96915f7 --- /dev/null +++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c @@ -0,0 +1,387 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2020 Broadcom + * All rights reserved. 
+ */ + +#include +#include +#include +#include +#include + +#include "bcmfs_device.h" +#include "bcmfs_logs.h" +#include "bcmfs_qp.h" +#include "bcmfs_sym_pmd.h" +#include "bcmfs_sym_req.h" + +uint8_t cryptodev_bcmfs_driver_id; + +static int bcmfs_sym_qp_release(struct rte_cryptodev *dev, + uint16_t queue_pair_id); + +static int +bcmfs_sym_dev_config(__rte_unused struct rte_cryptodev *dev, + __rte_unused struct rte_cryptodev_config *config) +{ + return 0; +} + +static int +bcmfs_sym_dev_start(__rte_unused struct rte_cryptodev *dev) +{ + return 0; +} + +static void +bcmfs_sym_dev_stop(__rte_unused struct rte_cryptodev *dev) +{ +} + +static int +bcmfs_sym_dev_close(struct rte_cryptodev *dev) +{ + int i, ret; + + for (i = 0; i < dev->data->nb_queue_pairs; i++) { + ret = bcmfs_sym_qp_release(dev, i); + if (ret < 0) + return ret; + } + + return 0; +} + +static void +bcmfs_sym_dev_info_get(struct rte_cryptodev *dev, + struct rte_cryptodev_info *dev_info) +{ + struct bcmfs_sym_dev_private *internals = dev->data->dev_private; + struct bcmfs_device *fsdev = internals->fsdev; + + if (dev_info != NULL) { + dev_info->driver_id = cryptodev_bcmfs_driver_id; + dev_info->feature_flags = dev->feature_flags; + dev_info->max_nb_queue_pairs = fsdev->max_hw_qps; + /* No limit of number of sessions */ + dev_info->sym.max_nb_sessions = 0; + } +} + +static void +bcmfs_sym_stats_get(struct rte_cryptodev *dev, + struct rte_cryptodev_stats *stats) +{ + struct bcmfs_qp_stats bcmfs_stats = {0}; + struct bcmfs_sym_dev_private *bcmfs_priv; + struct bcmfs_device *fsdev; + + if (stats == NULL || dev == NULL) { + BCMFS_LOG(ERR, "invalid ptr: stats %p, dev %p", stats, dev); + return; + } + bcmfs_priv = dev->data->dev_private; + fsdev = bcmfs_priv->fsdev; + + bcmfs_qp_stats_get(fsdev->qps_in_use, fsdev->max_hw_qps, &bcmfs_stats); + + stats->enqueued_count = bcmfs_stats.enqueued_count; + stats->dequeued_count = bcmfs_stats.dequeued_count; + stats->enqueue_err_count = bcmfs_stats.enqueue_err_count; + 
stats->dequeue_err_count = bcmfs_stats.dequeue_err_count; +} + +static void +bcmfs_sym_stats_reset(struct rte_cryptodev *dev) +{ + struct bcmfs_sym_dev_private *bcmfs_priv; + struct bcmfs_device *fsdev; + + if (dev == NULL) { + BCMFS_LOG(ERR, "invalid cryptodev ptr %p", dev); + return; + } + bcmfs_priv = dev->data->dev_private; + fsdev = bcmfs_priv->fsdev; + + bcmfs_qp_stats_reset(fsdev->qps_in_use, fsdev->max_hw_qps); +} + +static int +bcmfs_sym_qp_release(struct rte_cryptodev *dev, uint16_t queue_pair_id) +{ + struct bcmfs_sym_dev_private *bcmfs_private = dev->data->dev_private; + struct bcmfs_qp *qp = (struct bcmfs_qp *) + (dev->data->queue_pairs[queue_pair_id]); + + BCMFS_LOG(DEBUG, "Release sym qp %u on device %d", + queue_pair_id, dev->data->dev_id); + + rte_mempool_free(qp->sr_mp); + + bcmfs_private->fsdev->qps_in_use[queue_pair_id] = NULL; + + return bcmfs_qp_release((struct bcmfs_qp **) + &dev->data->queue_pairs[queue_pair_id]); +} + +static void +spu_req_init(struct bcmfs_sym_request *sr, rte_iova_t iova __rte_unused) +{ + memset(sr, 0, sizeof(*sr)); +} + +static void +req_pool_obj_init(__rte_unused struct rte_mempool *mp, + __rte_unused void *opaque, void *obj, + __rte_unused unsigned int obj_idx) +{ + spu_req_init(obj, rte_mempool_virt2iova(obj)); +} + +static struct rte_mempool * +bcmfs_sym_req_pool_create(struct rte_cryptodev *cdev __rte_unused, + uint32_t nobjs, uint16_t qp_id, + int socket_id) +{ + char softreq_pool_name[RTE_RING_NAMESIZE]; + struct rte_mempool *mp; + + snprintf(softreq_pool_name, RTE_RING_NAMESIZE, "%s_%d", + "bcm_sym", qp_id); + + mp = rte_mempool_create(softreq_pool_name, + RTE_ALIGN_MUL_CEIL(nobjs, 64), + sizeof(struct bcmfs_sym_request), + 64, 0, NULL, NULL, req_pool_obj_init, NULL, + socket_id, 0); + if (mp == NULL) + BCMFS_LOG(ERR, "Failed to create req pool, qid %d, err %d", + qp_id, rte_errno); + + return mp; +} + +static int +bcmfs_sym_qp_setup(struct rte_cryptodev *cdev, uint16_t qp_id, + const struct 
rte_cryptodev_qp_conf *qp_conf, + int socket_id) +{ + int ret = 0; + struct bcmfs_qp *qp = NULL; + struct bcmfs_qp_config bcmfs_qp_conf; + + struct bcmfs_qp **qp_addr = + (struct bcmfs_qp **)&cdev->data->queue_pairs[qp_id]; + struct bcmfs_sym_dev_private *bcmfs_private = cdev->data->dev_private; + struct bcmfs_device *fsdev = bcmfs_private->fsdev; + + + /* If qp is already in use free ring memory and qp metadata. */ + if (*qp_addr != NULL) { + ret = bcmfs_sym_qp_release(cdev, qp_id); + if (ret < 0) + return ret; + } + + if (qp_id >= fsdev->max_hw_qps) { + BCMFS_LOG(ERR, "qp_id %u invalid for this device", qp_id); + return -EINVAL; + } + + bcmfs_qp_conf.nb_descriptors = qp_conf->nb_descriptors; + bcmfs_qp_conf.socket_id = socket_id; + bcmfs_qp_conf.max_descs_req = BCMFS_CRYPTO_MAX_HW_DESCS_PER_REQ; + bcmfs_qp_conf.iobase = BCMFS_QP_IOBASE_XLATE(fsdev->mmap_addr, qp_id); + bcmfs_qp_conf.ops = fsdev->sym_hw_qp_ops; + + ret = bcmfs_qp_setup(qp_addr, qp_id, &bcmfs_qp_conf); + if (ret != 0) + return ret; + + qp = (struct bcmfs_qp *)*qp_addr; + + qp->sr_mp = bcmfs_sym_req_pool_create(cdev, qp_conf->nb_descriptors, + qp_id, socket_id); + if (qp->sr_mp == NULL) + return -ENOMEM; + + /* store a link to the qp in the bcmfs_device */ + bcmfs_private->fsdev->qps_in_use[qp_id] = *qp_addr; + + cdev->data->queue_pairs[qp_id] = qp; + BCMFS_LOG(NOTICE, "queue %d setup done\n", qp_id); + + return 0; +} + +static struct rte_cryptodev_ops crypto_bcmfs_ops = { + /* Device related operations */ + .dev_configure = bcmfs_sym_dev_config, + .dev_start = bcmfs_sym_dev_start, + .dev_stop = bcmfs_sym_dev_stop, + .dev_close = bcmfs_sym_dev_close, + .dev_infos_get = bcmfs_sym_dev_info_get, + /* Stats Collection */ + .stats_get = bcmfs_sym_stats_get, + .stats_reset = bcmfs_sym_stats_reset, + /* Queue-Pair management */ + .queue_pair_setup = bcmfs_sym_qp_setup, + .queue_pair_release = bcmfs_sym_qp_release, +}; + +/** Enqueue burst */ +static uint16_t +bcmfs_sym_pmd_enqueue_op_burst(void 
*queue_pair, + struct rte_crypto_op **ops, + uint16_t nb_ops) +{ + int i, j; + uint16_t enq = 0; + struct bcmfs_sym_request *sreq; + struct bcmfs_qp *qp = (struct bcmfs_qp *)queue_pair; + + if (nb_ops == 0) + return 0; + + if (nb_ops > BCMFS_MAX_REQS_BUFF) + nb_ops = BCMFS_MAX_REQS_BUFF; + + /* We do not process more than available space */ + if (nb_ops > (qp->nb_descriptors - qp->nb_pending_requests)) + nb_ops = qp->nb_descriptors - qp->nb_pending_requests; + + for (i = 0; i < nb_ops; i++) { + if (rte_mempool_get(qp->sr_mp, (void **)&sreq)) + goto enqueue_err; + + /* save rte_crypto_op */ + sreq->op = ops[i]; + + /* save context */ + qp->infl_msgs[i] = &sreq->msgs; + qp->infl_msgs[i]->ctx = (void *)sreq; + } + /* Send burst request to hw QP */ + enq = bcmfs_enqueue_op_burst(qp, (void **)qp->infl_msgs, i); + + for (j = enq; j < i; j++) + rte_mempool_put(qp->sr_mp, qp->infl_msgs[j]->ctx); + + return enq; + +enqueue_err: + for (j = 0; j < i; j++) + rte_mempool_put(qp->sr_mp, qp->infl_msgs[j]->ctx); + + return enq; +} + +static uint16_t +bcmfs_sym_pmd_dequeue_op_burst(void *queue_pair, + struct rte_crypto_op **ops, + uint16_t nb_ops) +{ + int i; + uint16_t deq = 0; + unsigned int pkts = 0; + struct bcmfs_sym_request *sreq; + struct bcmfs_qp *qp = queue_pair; + + if (nb_ops > BCMFS_MAX_REQS_BUFF) + nb_ops = BCMFS_MAX_REQS_BUFF; + + deq = bcmfs_dequeue_op_burst(qp, (void **)qp->infl_msgs, nb_ops); + /* get rte_crypto_ops */ + for (i = 0; i < deq; i++) { + sreq = (struct bcmfs_sym_request *)qp->infl_msgs[i]->ctx; + + ops[pkts++] = sreq->op; + + rte_mempool_put(qp->sr_mp, sreq); + } + + return pkts; +} + +/* + * An rte_driver is needed in the registration of both the + * device and the driver with cryptodev. 
+ */ +static const char bcmfs_sym_drv_name[] = RTE_STR(CRYPTODEV_NAME_BCMFS_SYM_PMD); +static const struct rte_driver cryptodev_bcmfs_sym_driver = { + .name = bcmfs_sym_drv_name, + .alias = bcmfs_sym_drv_name +}; + +int +bcmfs_sym_dev_create(struct bcmfs_device *fsdev) +{ + struct rte_cryptodev_pmd_init_params init_params = { + .name = "", + .socket_id = rte_socket_id(), + .private_data_size = sizeof(struct bcmfs_sym_dev_private) + }; + char name[RTE_CRYPTODEV_NAME_MAX_LEN]; + struct rte_cryptodev *cryptodev; + struct bcmfs_sym_dev_private *internals; + + snprintf(name, RTE_CRYPTODEV_NAME_MAX_LEN, "%s_%s", + fsdev->name, "sym"); + + /* Populate subset device to use in cryptodev device creation */ + fsdev->sym_rte_dev.driver = &cryptodev_bcmfs_sym_driver; + fsdev->sym_rte_dev.numa_node = 0; + fsdev->sym_rte_dev.devargs = NULL; + + cryptodev = rte_cryptodev_pmd_create(name, + &fsdev->sym_rte_dev, + &init_params); + if (cryptodev == NULL) + return -ENODEV; + + fsdev->sym_rte_dev.name = cryptodev->data->name; + cryptodev->driver_id = cryptodev_bcmfs_driver_id; + cryptodev->dev_ops = &crypto_bcmfs_ops; + + cryptodev->enqueue_burst = bcmfs_sym_pmd_enqueue_op_burst; + cryptodev->dequeue_burst = bcmfs_sym_pmd_dequeue_op_burst; + + cryptodev->feature_flags = RTE_CRYPTODEV_FF_SYMMETRIC_CRYPTO | + RTE_CRYPTODEV_FF_SYM_OPERATION_CHAINING | + RTE_CRYPTODEV_FF_OOP_LB_IN_LB_OUT; + + internals = cryptodev->data->dev_private; + internals->fsdev = fsdev; + fsdev->sym_dev = internals; + + internals->sym_dev_id = cryptodev->data->dev_id; + + BCMFS_LOG(DEBUG, "Created bcmfs-sym device %s as cryptodev instance %d", + cryptodev->data->name, internals->sym_dev_id); + return 0; +} + +int +bcmfs_sym_dev_destroy(struct bcmfs_device *fsdev) +{ + struct rte_cryptodev *cryptodev; + + if (fsdev == NULL) + return -ENODEV; + if (fsdev->sym_dev == NULL) + return 0; + + /* free crypto device */ + cryptodev = rte_cryptodev_pmd_get_dev(fsdev->sym_dev->sym_dev_id); + 
rte_cryptodev_pmd_destroy(cryptodev); + fsdev->sym_rte_dev.name = NULL; + fsdev->sym_dev = NULL; + + return 0; +} + +static struct cryptodev_driver bcmfs_crypto_drv; +RTE_PMD_REGISTER_CRYPTO_DRIVER(bcmfs_crypto_drv, + cryptodev_bcmfs_sym_driver, + cryptodev_bcmfs_driver_id); diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.h b/drivers/crypto/bcmfs/bcmfs_sym_pmd.h new file mode 100644 index 000000000..65d704609 --- /dev/null +++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.h @@ -0,0 +1,38 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2020 Broadcom + * All rights reserved. + */ + +#ifndef _BCMFS_SYM_PMD_H_ +#define _BCMFS_SYM_PMD_H_ + +#include + +#include "bcmfs_device.h" + +#define CRYPTODEV_NAME_BCMFS_SYM_PMD crypto_bcmfs + +#define BCMFS_CRYPTO_MAX_HW_DESCS_PER_REQ 16 + +extern uint8_t cryptodev_bcmfs_driver_id; + +/** private data structure for a BCMFS device. + * This BCMFS device is a device offering only symmetric crypto service, + * there can be one of these on each bcmfs_pci_device (VF). + */ +struct bcmfs_sym_dev_private { + /* The bcmfs device hosting the service */ + struct bcmfs_device *fsdev; + /* Device instance for this rte_cryptodev */ + uint8_t sym_dev_id; + /* BCMFS device symmetric crypto capabilities */ + const struct rte_cryptodev_capabilities *fsdev_capabilities; +}; + +int +bcmfs_sym_dev_create(struct bcmfs_device *fdev); + +int +bcmfs_sym_dev_destroy(struct bcmfs_device *fdev); + +#endif /* _BCMFS_SYM_PMD_H_ */ diff --git a/drivers/crypto/bcmfs/bcmfs_sym_req.h b/drivers/crypto/bcmfs/bcmfs_sym_req.h new file mode 100644 index 000000000..0f0b051f1 --- /dev/null +++ b/drivers/crypto/bcmfs/bcmfs_sym_req.h @@ -0,0 +1,22 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2020 Broadcom + * All rights reserved. 
+ */
+
+#ifndef _BCMFS_SYM_REQ_H_
+#define _BCMFS_SYM_REQ_H_
+
+#include "bcmfs_dev_msg.h"
+
+/*
+ * This structure holds the supportive data required to process a
+ * rte_crypto_op
+ */
+struct bcmfs_sym_request {
+	/* bcmfs qp message for h/w queues to process */
+	struct bcmfs_qp_message msgs;
+	/* crypto op */
+	struct rte_crypto_op *op;
+};
+
+#endif /* _BCMFS_SYM_REQ_H_ */
diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build
index cd58bd5e2..d9a3d73e9 100644
--- a/drivers/crypto/bcmfs/meson.build
+++ b/drivers/crypto/bcmfs/meson.build
@@ -11,5 +11,6 @@ sources = files(
 	'bcmfs_qp.c',
 	'hw/bcmfs4_rm.c',
 	'hw/bcmfs5_rm.c',
-	'hw/bcmfs_rm_common.c'
+	'hw/bcmfs_rm_common.c',
+	'bcmfs_sym_pmd.c'
 )

From patchwork Thu Aug 13 17:23:42 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Vikas Gupta
X-Patchwork-Id: 75507
X-Patchwork-Delegate: gakhil@marvell.com
From: Vikas Gupta
To: dev@dpdk.org, akhil.goyal@nxp.com
Cc: vikram.prakash@broadcom.com, Vikas Gupta , Raveendra Padasalagi
Date: Thu, 13 Aug 2020 22:53:42 +0530
Message-Id: <20200813172344.3228-7-vikas.gupta@broadcom.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200813172344.3228-1-vikas.gupta@broadcom.com>
References: <20200812063127.8687-1-vikas.gupta@broadcom.com> <20200813172344.3228-1-vikas.gupta@broadcom.com>
Subject: [dpdk-dev] [PATCH v2 6/8] crypto/bcmfs: add session handling and capabilities

Add
session handling and capabilities supported by crypto h/w accelerator. Signed-off-by: Vikas Gupta Signed-off-by: Raveendra Padasalagi Reviewed-by: Ajit Khaparde --- doc/guides/cryptodevs/bcmfs.rst | 46 ++ doc/guides/cryptodevs/features/bcmfs.ini | 56 ++ drivers/crypto/bcmfs/bcmfs_sym_capabilities.c | 764 ++++++++++++++++++ drivers/crypto/bcmfs/bcmfs_sym_capabilities.h | 16 + drivers/crypto/bcmfs/bcmfs_sym_defs.h | 170 ++++ drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 13 + drivers/crypto/bcmfs/bcmfs_sym_session.c | 424 ++++++++++ drivers/crypto/bcmfs/bcmfs_sym_session.h | 99 +++ drivers/crypto/bcmfs/meson.build | 4 +- 9 files changed, 1591 insertions(+), 1 deletion(-) create mode 100644 doc/guides/cryptodevs/features/bcmfs.ini create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.c create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_capabilities.h create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_defs.h create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.c create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_session.h diff --git a/doc/guides/cryptodevs/bcmfs.rst b/doc/guides/cryptodevs/bcmfs.rst index 752ce028a..2488b19f7 100644 --- a/doc/guides/cryptodevs/bcmfs.rst +++ b/doc/guides/cryptodevs/bcmfs.rst @@ -18,9 +18,55 @@ CONFIG_RTE_LIBRTE_PMD_BCMFS setting is set to `y` in config/common_base file. 
* ``CONFIG_RTE_LIBRTE_PMD_BCMFS=y`` +Features +~~~~~~~~ + +The BCMFS SYM PMD has support for: + +Cipher algorithms: + +* ``RTE_CRYPTO_CIPHER_3DES_CBC`` +* ``RTE_CRYPTO_CIPHER_3DES_CTR`` +* ``RTE_CRYPTO_CIPHER_AES128_CBC`` +* ``RTE_CRYPTO_CIPHER_AES192_CBC`` +* ``RTE_CRYPTO_CIPHER_AES256_CBC`` +* ``RTE_CRYPTO_CIPHER_AES128_CTR`` +* ``RTE_CRYPTO_CIPHER_AES192_CTR`` +* ``RTE_CRYPTO_CIPHER_AES256_CTR`` +* ``RTE_CRYPTO_CIPHER_AES_XTS`` +* ``RTE_CRYPTO_CIPHER_DES_CBC`` + +Hash algorithms: + +* ``RTE_CRYPTO_AUTH_SHA1`` +* ``RTE_CRYPTO_AUTH_SHA1_HMAC`` +* ``RTE_CRYPTO_AUTH_SHA224`` +* ``RTE_CRYPTO_AUTH_SHA224_HMAC`` +* ``RTE_CRYPTO_AUTH_SHA256`` +* ``RTE_CRYPTO_AUTH_SHA256_HMAC`` +* ``RTE_CRYPTO_AUTH_SHA384`` +* ``RTE_CRYPTO_AUTH_SHA384_HMAC`` +* ``RTE_CRYPTO_AUTH_SHA512`` +* ``RTE_CRYPTO_AUTH_SHA512_HMAC`` +* ``RTE_CRYPTO_AUTH_AES_XCBC_MAC`` +* ``RTE_CRYPTO_AUTH_MD5_HMAC`` +* ``RTE_CRYPTO_AUTH_AES_GMAC`` +* ``RTE_CRYPTO_AUTH_AES_CMAC`` + +Supported AEAD algorithms: + +* ``RTE_CRYPTO_AEAD_AES_GCM`` +* ``RTE_CRYPTO_AEAD_AES_CCM`` + Initialization -------------- BCMFS crypto PMD depend upon the devices present in the path /sys/bus/platform/devices/fs/ on the platform. Each cryptodev PMD instance can be attached to the nodes present in the mentioned path. + +Limitations +~~~~~~~~~~~ + +* Only supports the session-oriented API implementation (session-less APIs are not supported). +* CCM is not supported on Broadcom`s SoCs having FlexSparc4 unit. diff --git a/doc/guides/cryptodevs/features/bcmfs.ini b/doc/guides/cryptodevs/features/bcmfs.ini new file mode 100644 index 000000000..82d2c639d --- /dev/null +++ b/doc/guides/cryptodevs/features/bcmfs.ini @@ -0,0 +1,56 @@ +; +; Supported features of the 'bcmfs' crypto driver. +; +; Refer to default.ini for the full list of available PMD features. +; +[Features] +Symmetric crypto = Y +Sym operation chaining = Y +HW Accelerated = Y +Protocol offload = Y +In Place SGL = Y + +; +; Supported crypto algorithms of the 'bcmfs' crypto driver. 
+; +[Cipher] +AES CBC (128) = Y +AES CBC (192) = Y +AES CBC (256) = Y +AES CTR (128) = Y +AES CTR (192) = Y +AES CTR (256) = Y +AES XTS (128) = Y +AES XTS (256) = Y +3DES CBC = Y +DES CBC = Y +; +; Supported authentication algorithms of the 'bcmfs' crypto driver. +; +[Auth] +MD5 HMAC = Y +SHA1 = Y +SHA1 HMAC = Y +SHA224 = Y +SHA224 HMAC = Y +SHA256 = Y +SHA256 HMAC = Y +SHA384 = Y +SHA384 HMAC = Y +SHA512 = Y +SHA512 HMAC = Y +AES GMAC = Y +AES CMAC (128) = Y +AES CBC = Y +AES XCBC = Y + +; +; Supported AEAD algorithms of the 'bcmfs' crypto driver. +; +[AEAD] +AES GCM (128) = Y +AES GCM (192) = Y +AES GCM (256) = Y +AES CCM (128) = Y +AES CCM (192) = Y +AES CCM (256) = Y diff --git a/drivers/crypto/bcmfs/bcmfs_sym_capabilities.c b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.c new file mode 100644 index 000000000..dee88ed4a --- /dev/null +++ b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.c @@ -0,0 +1,764 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2020 Broadcom + * All rights reserved. 
+ */ + +#include + +#include "bcmfs_sym_capabilities.h" + +static const struct rte_cryptodev_capabilities bcmfs_sym_capabilities[] = { + { + /* SHA1 */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA1, + .block_size = 64, + .key_size = { + .min = 0, + .max = 0, + .increment = 0 + }, + .digest_size = { + .min = 20, + .max = 20, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { + /* MD5 */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_MD5, + .block_size = 64, + .key_size = { + .min = 0, + .max = 0, + .increment = 0 + }, + .digest_size = { + .min = 16, + .max = 16, + .increment = 0 + }, + }, } + }, } + }, + { + /* SHA224 */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA224, + .block_size = 64, + .key_size = { + .min = 0, + .max = 0, + .increment = 0 + }, + .digest_size = { + .min = 28, + .max = 28, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { + /* SHA256 */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA256, + .block_size = 64, + .key_size = { + .min = 0, + .max = 0, + .increment = 0 + }, + .digest_size = { + .min = 32, + .max = 32, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { + /* SHA384 */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA384, + .block_size = 64, + .key_size = { + .min = 0, + .max = 0, + .increment = 0 + }, + .digest_size = { + .min = 48, + .max = 48, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { + /* SHA512 */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA512, 
+ .block_size = 64, + .key_size = { + .min = 0, + .max = 0, + .increment = 0 + }, + .digest_size = { + .min = 64, + .max = 64, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { + /* SHA3_224 */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA3_224, + .block_size = 144, + .key_size = { + .min = 0, + .max = 0, + .increment = 0 + }, + .digest_size = { + .min = 28, + .max = 28, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { + /* SHA3_256 */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA3_256, + .block_size = 136, + .key_size = { + .min = 0, + .max = 0, + .increment = 0 + }, + .digest_size = { + .min = 32, + .max = 32, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { + /* SHA3_384 */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA3_384, + .block_size = 104, + .key_size = { + .min = 0, + .max = 0, + .increment = 0 + }, + .digest_size = { + .min = 48, + .max = 48, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { + /* SHA3_512 */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA3_512, + .block_size = 72, + .key_size = { + .min = 0, + .max = 0, + .increment = 0 + }, + .digest_size = { + .min = 64, + .max = 64, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { + /* SHA1 HMAC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA1_HMAC, + .block_size = 64, + .key_size = { + .min = 1, + .max = 64, + .increment = 0 + }, + .digest_size = { + .min = 20, + .max = 20, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { + /* MD5 HMAC */ + .op = 
RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_MD5_HMAC, + .block_size = 64, + .key_size = { + .min = 1, + .max = 64, + .increment = 0 + }, + .digest_size = { + .min = 16, + .max = 16, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { + /* SHA224 HMAC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA224_HMAC, + .block_size = 64, + .key_size = { + .min = 1, + .max = 64, + .increment = 0 + }, + .digest_size = { + .min = 28, + .max = 28, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { + /* SHA256 HMAC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA256_HMAC, + .block_size = 64, + .key_size = { + .min = 1, + .max = 64, + .increment = 0 + }, + .digest_size = { + .min = 32, + .max = 32, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { + /* SHA384 HMAC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA384_HMAC, + .block_size = 128, + .key_size = { + .min = 1, + .max = 128, + .increment = 0 + }, + .digest_size = { + .min = 48, + .max = 48, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { + /* SHA512 HMAC*/ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA512_HMAC, + .block_size = 128, + .key_size = { + .min = 1, + .max = 128, + .increment = 0 + }, + .digest_size = { + .min = 64, + .max = 64, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { + /* SHA3_224 HMAC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA3_224_HMAC, + .block_size = 144, + .key_size = { + .min = 1, + .max = 144, + 
.increment = 0 + }, + .digest_size = { + .min = 28, + .max = 28, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { + /* SHA3_256 HMAC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA3_256_HMAC, + .block_size = 136, + .key_size = { + .min = 1, + .max = 136, + .increment = 0 + }, + .digest_size = { + .min = 32, + .max = 32, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { + /* SHA3_384 HMAC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA3_384_HMAC, + .block_size = 104, + .key_size = { + .min = 1, + .max = 104, + .increment = 0 + }, + .digest_size = { + .min = 48, + .max = 48, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { + /* SHA3_512 HMAC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_SHA3_512_HMAC, + .block_size = 72, + .key_size = { + .min = 1, + .max = 72, + .increment = 0 + }, + .digest_size = { + .min = 64, + .max = 64, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { + /* AES XCBC MAC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_AES_XCBC_MAC, + .block_size = 16, + .key_size = { + .min = 1, + .max = 16, + .increment = 0 + }, + .digest_size = { + .min = 16, + .max = 16, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { + /* AES GMAC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_AES_GMAC, + .block_size = 16, + .key_size = { + .min = 16, + .max = 32, + .increment = 8 + }, + .digest_size = { + .min = 16, + .max = 16, + .increment = 0 + }, + .aad_size = { + .min = 0, + .max = 65535, + .increment = 1 + }, + .iv_size = { + .min = 12, + .max = 16, + 
.increment = 4 + }, + }, } + }, } + }, + { + /* AES CMAC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_AES_CMAC, + .block_size = 16, + .key_size = { + .min = 1, + .max = 16, + .increment = 0 + }, + .digest_size = { + .min = 16, + .max = 16, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { + /* AES CBC MAC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AUTH, + {.auth = { + .algo = RTE_CRYPTO_AUTH_AES_CBC_MAC, + .block_size = 16, + .key_size = { + .min = 1, + .max = 16, + .increment = 0 + }, + .digest_size = { + .min = 16, + .max = 16, + .increment = 0 + }, + .aad_size = { 0 } + }, } + }, } + }, + { + /* AES ECB */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, + {.cipher = { + .algo = RTE_CRYPTO_CIPHER_AES_ECB, + .block_size = 16, + .key_size = { + .min = 16, + .max = 32, + .increment = 8 + }, + .iv_size = { + .min = 0, + .max = 0, + .increment = 0 + } + }, } + }, } + }, + { + /* AES CBC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, + {.cipher = { + .algo = RTE_CRYPTO_CIPHER_AES_CBC, + .block_size = 16, + .key_size = { + .min = 16, + .max = 32, + .increment = 8 + }, + .iv_size = { + .min = 16, + .max = 16, + .increment = 0 + } + }, } + }, } + }, + { + /* AES CTR */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, + {.cipher = { + .algo = RTE_CRYPTO_CIPHER_AES_CTR, + .block_size = 16, + .key_size = { + .min = 16, + .max = 32, + .increment = 8 + }, + .iv_size = { + .min = 16, + .max = 16, + .increment = 0 + } + }, } + }, } + }, + { + /* AES XTS */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, + {.cipher = { + .algo = RTE_CRYPTO_CIPHER_AES_XTS, + .block_size = 16, + .key_size = { + .min = 32, + .max = 64, + .increment = 32 + }, + 
.iv_size = { + .min = 16, + .max = 16, + .increment = 0 + } + }, } + }, } + }, + { + /* DES CBC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, + {.cipher = { + .algo = RTE_CRYPTO_CIPHER_DES_CBC, + .block_size = 8, + .key_size = { + .min = 8, + .max = 8, + .increment = 0 + }, + .iv_size = { + .min = 16, + .max = 16, + .increment = 0 + } + }, } + }, } + }, + { + /* 3DES CBC */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, + {.cipher = { + .algo = RTE_CRYPTO_CIPHER_3DES_CBC, + .block_size = 8, + .key_size = { + .min = 24, + .max = 24, + .increment = 0 + }, + .iv_size = { + .min = 16, + .max = 16, + .increment = 0 + } + }, } + }, } + }, + { + /* 3DES ECB */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_CIPHER, + {.cipher = { + .algo = RTE_CRYPTO_CIPHER_3DES_ECB, + .block_size = 8, + .key_size = { + .min = 24, + .max = 24, + .increment = 0 + }, + .iv_size = { + .min = 0, + .max = 0, + .increment = 0 + } + }, } + }, } + }, + { + /* AES GCM */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD, + {.aead = { + .algo = RTE_CRYPTO_AEAD_AES_GCM, + .block_size = 16, + .key_size = { + .min = 16, + .max = 32, + .increment = 8 + }, + .digest_size = { + .min = 16, + .max = 16, + .increment = 0 + }, + .aad_size = { + .min = 0, + .max = 65535, + .increment = 1 + }, + .iv_size = { + .min = 12, + .max = 16, + .increment = 4 + }, + }, } + }, } + }, + { + /* AES CCM */ + .op = RTE_CRYPTO_OP_TYPE_SYMMETRIC, + {.sym = { + .xform_type = RTE_CRYPTO_SYM_XFORM_AEAD, + {.aead = { + .algo = RTE_CRYPTO_AEAD_AES_CCM, + .block_size = 16, + .key_size = { + .min = 16, + .max = 32, + .increment = 8 + }, + .digest_size = { + .min = 4, + .max = 16, + .increment = 2 + }, + .aad_size = { + .min = 0, + .max = 65535, + .increment = 1 + }, + .iv_size = { + .min = 7, + .max = 13, + .increment = 1 + }, + }, } + }, } + }, + + 
RTE_CRYPTODEV_END_OF_CAPABILITIES_LIST() +}; + +const struct rte_cryptodev_capabilities * +bcmfs_sym_get_capabilities(void) +{ + return bcmfs_sym_capabilities; +} diff --git a/drivers/crypto/bcmfs/bcmfs_sym_capabilities.h b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.h new file mode 100644 index 000000000..3ff61b7d2 --- /dev/null +++ b/drivers/crypto/bcmfs/bcmfs_sym_capabilities.h @@ -0,0 +1,16 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2020 Broadcom + * All rights reserved. + */ + +#ifndef _BCMFS_SYM_CAPABILITIES_H_ +#define _BCMFS_SYM_CAPABILITIES_H_ + +/* + * Get capabilities list for the device + * + */ +const struct rte_cryptodev_capabilities *bcmfs_sym_get_capabilities(void); + +#endif /* _BCMFS_SYM_CAPABILITIES_H_ */ + diff --git a/drivers/crypto/bcmfs/bcmfs_sym_defs.h b/drivers/crypto/bcmfs/bcmfs_sym_defs.h new file mode 100644 index 000000000..d94446d35 --- /dev/null +++ b/drivers/crypto/bcmfs/bcmfs_sym_defs.h @@ -0,0 +1,170 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2020 Broadcom + * All rights reserved. + */ + +#ifndef _BCMFS_SYM_DEFS_H_ +#define _BCMFS_SYM_DEFS_H_ + +/* + * Maximum key size, bounded by the largest hash block size; + * SHA3 currently has the largest block size + * of 144 bytes + */ +#define BCMFS_MAX_KEY_SIZE 144 +#define BCMFS_MAX_IV_SIZE 16 +#define BCMFS_MAX_DIGEST_SIZE 64 + +/** Symmetric Cipher Direction */ +enum bcmfs_crypto_cipher_op { + /** Encrypt cipher operation */ + BCMFS_CRYPTO_CIPHER_OP_ENCRYPT, + + /** Decrypt cipher operation */ + BCMFS_CRYPTO_CIPHER_OP_DECRYPT, +}; + +/** Symmetric Cipher Algorithms */ +enum bcmfs_crypto_cipher_algorithm { + /** NULL cipher algorithm. No mode applies to the NULL algorithm.
*/ + BCMFS_CRYPTO_CIPHER_NONE = 0, + + /** DES algorithm in CBC mode */ + BCMFS_CRYPTO_CIPHER_DES_CBC, + + /** DES algorithm in ECB mode */ + BCMFS_CRYPTO_CIPHER_DES_ECB, + + /** Triple DES algorithm in CBC mode */ + BCMFS_CRYPTO_CIPHER_3DES_CBC, + + /** Triple DES algorithm in ECB mode */ + BCMFS_CRYPTO_CIPHER_3DES_ECB, + + /** AES algorithm in CBC mode */ + BCMFS_CRYPTO_CIPHER_AES_CBC, + + /** AES algorithm in CCM mode. */ + BCMFS_CRYPTO_CIPHER_AES_CCM, + + /** AES algorithm in Counter mode */ + BCMFS_CRYPTO_CIPHER_AES_CTR, + + /** AES algorithm in ECB mode */ + BCMFS_CRYPTO_CIPHER_AES_ECB, + + /** AES algorithm in GCM mode. */ + BCMFS_CRYPTO_CIPHER_AES_GCM, + + /** AES algorithm in XTS mode */ + BCMFS_CRYPTO_CIPHER_AES_XTS, + + /** AES algorithm in OFB mode */ + BCMFS_CRYPTO_CIPHER_AES_OFB, +}; + +/** Symmetric Authentication Algorithms */ +enum bcmfs_crypto_auth_algorithm { + /** NULL hash algorithm. */ + BCMFS_CRYPTO_AUTH_NONE = 0, + + /** MD5 algorithm */ + BCMFS_CRYPTO_AUTH_MD5, + + /** MD5-HMAC algorithm */ + BCMFS_CRYPTO_AUTH_MD5_HMAC, + + /** SHA1 algorithm */ + BCMFS_CRYPTO_AUTH_SHA1, + + /** SHA1-HMAC algorithm */ + BCMFS_CRYPTO_AUTH_SHA1_HMAC, + + /** 224 bit SHA algorithm. */ + BCMFS_CRYPTO_AUTH_SHA224, + + /** 224 bit SHA-HMAC algorithm. */ + BCMFS_CRYPTO_AUTH_SHA224_HMAC, + + /** 256 bit SHA algorithm. */ + BCMFS_CRYPTO_AUTH_SHA256, + + /** 256 bit SHA-HMAC algorithm. */ + BCMFS_CRYPTO_AUTH_SHA256_HMAC, + + /** 384 bit SHA algorithm. */ + BCMFS_CRYPTO_AUTH_SHA384, + + /** 384 bit SHA-HMAC algorithm. */ + BCMFS_CRYPTO_AUTH_SHA384_HMAC, + + /** 512 bit SHA algorithm. */ + BCMFS_CRYPTO_AUTH_SHA512, + + /** 512 bit SHA-HMAC algorithm. */ + BCMFS_CRYPTO_AUTH_SHA512_HMAC, + + /** 224 bit SHA3 algorithm. */ + BCMFS_CRYPTO_AUTH_SHA3_224, + + /** 224 bit SHA3-HMAC algorithm. */ + BCMFS_CRYPTO_AUTH_SHA3_224_HMAC, + + /** 256 bit SHA3 algorithm. */ + BCMFS_CRYPTO_AUTH_SHA3_256, + + /** 256 bit SHA3-HMAC algorithm.
*/ + BCMFS_CRYPTO_AUTH_SHA3_256_HMAC, + + /** 384 bit SHA3 algorithm. */ + BCMFS_CRYPTO_AUTH_SHA3_384, + + /** 384 bit SHA3-HMAC algorithm. */ + BCMFS_CRYPTO_AUTH_SHA3_384_HMAC, + + /** 512 bit SHA3 algorithm. */ + BCMFS_CRYPTO_AUTH_SHA3_512, + + /** 512 bit SHA3-HMAC algorithm. */ + BCMFS_CRYPTO_AUTH_SHA3_512_HMAC, + + /** AES XCBC MAC algorithm */ + BCMFS_CRYPTO_AUTH_AES_XCBC_MAC, + + /** AES CMAC algorithm */ + BCMFS_CRYPTO_AUTH_AES_CMAC, + + /** AES CBC-MAC algorithm */ + BCMFS_CRYPTO_AUTH_AES_CBC_MAC, + + /** AES GMAC algorithm */ + BCMFS_CRYPTO_AUTH_AES_GMAC, + + /** AES algorithm in GCM mode. */ + BCMFS_CRYPTO_AUTH_AES_GCM, + + /** AES algorithm in CCM mode. */ + BCMFS_CRYPTO_AUTH_AES_CCM, +}; + +/** Symmetric Authentication Operations */ +enum bcmfs_crypto_auth_op { + /** Verify authentication digest */ + BCMFS_CRYPTO_AUTH_OP_VERIFY, + + /** Generate authentication digest */ + BCMFS_CRYPTO_AUTH_OP_GENERATE, +}; + +enum bcmfs_sym_crypto_class { + /** Cipher algorithm */ + BCMFS_CRYPTO_CIPHER, + + /** Hash algorithm */ + BCMFS_CRYPTO_HASH, + + /** Authenticated Encryption with Associated Data algorithm */ + BCMFS_CRYPTO_AEAD, +}; + +#endif /* _BCMFS_SYM_DEFS_H_ */ diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c index 0f96915f7..381ca8ea4 100644 --- a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c +++ b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c @@ -14,6 +14,8 @@ #include "bcmfs_qp.h" #include "bcmfs_sym_pmd.h" #include "bcmfs_sym_req.h" +#include "bcmfs_sym_session.h" +#include "bcmfs_sym_capabilities.h" uint8_t cryptodev_bcmfs_driver_id; @@ -65,6 +67,7 @@ bcmfs_sym_dev_info_get(struct rte_cryptodev *dev, dev_info->max_nb_queue_pairs = fsdev->max_hw_qps; /* No limit of number of sessions */ dev_info->sym.max_nb_sessions = 0; + dev_info->capabilities = bcmfs_sym_get_capabilities(); } } @@ -228,6 +231,10 @@ static struct rte_cryptodev_ops crypto_bcmfs_ops = { /* Queue-Pair management */ .queue_pair_setup = bcmfs_sym_qp_setup,
.queue_pair_release = bcmfs_sym_qp_release, + /* Crypto session related operations */ + .sym_session_get_size = bcmfs_sym_session_get_private_size, + .sym_session_configure = bcmfs_sym_session_configure, + .sym_session_clear = bcmfs_sym_session_clear }; /** Enqueue burst */ @@ -239,6 +246,7 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair, int i, j; uint16_t enq = 0; struct bcmfs_sym_request *sreq; + struct bcmfs_sym_session *sess; struct bcmfs_qp *qp = (struct bcmfs_qp *)queue_pair; if (nb_ops == 0) @@ -252,6 +260,10 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair, nb_ops = qp->nb_descriptors - qp->nb_pending_requests; for (i = 0; i < nb_ops; i++) { + sess = bcmfs_sym_get_session(ops[i]); + if (unlikely(sess == NULL)) + goto enqueue_err; + if (rte_mempool_get(qp->sr_mp, (void **)&sreq)) goto enqueue_err; @@ -356,6 +368,7 @@ bcmfs_sym_dev_create(struct bcmfs_device *fsdev) fsdev->sym_dev = internals; internals->sym_dev_id = cryptodev->data->dev_id; + internals->fsdev_capabilities = bcmfs_sym_get_capabilities(); BCMFS_LOG(DEBUG, "Created bcmfs-sym device %s as cryptodev instance %d", cryptodev->data->name, internals->sym_dev_id); diff --git a/drivers/crypto/bcmfs/bcmfs_sym_session.c b/drivers/crypto/bcmfs/bcmfs_sym_session.c new file mode 100644 index 000000000..8853b4d12 --- /dev/null +++ b/drivers/crypto/bcmfs/bcmfs_sym_session.c @@ -0,0 +1,424 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2020 Broadcom + * All rights reserved. 
+ */ + +#include +#include +#include + +#include "bcmfs_logs.h" +#include "bcmfs_sym_defs.h" +#include "bcmfs_sym_pmd.h" +#include "bcmfs_sym_session.h" + +/** Configure the session from a crypto xform chain */ +static enum bcmfs_sym_chain_order +crypto_get_chain_order(const struct rte_crypto_sym_xform *xform) +{ + enum bcmfs_sym_chain_order res = BCMFS_SYM_CHAIN_NOT_SUPPORTED; + + + if (xform != NULL) { + if (xform->type == RTE_CRYPTO_SYM_XFORM_AEAD) + res = BCMFS_SYM_CHAIN_AEAD; + + if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) { + if (xform->next == NULL) + res = BCMFS_SYM_CHAIN_ONLY_AUTH; + else if (xform->next->type == + RTE_CRYPTO_SYM_XFORM_CIPHER) + res = BCMFS_SYM_CHAIN_AUTH_CIPHER; + } + if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER) { + if (xform->next == NULL) + res = BCMFS_SYM_CHAIN_ONLY_CIPHER; + else if (xform->next->type == RTE_CRYPTO_SYM_XFORM_AUTH) + res = BCMFS_SYM_CHAIN_CIPHER_AUTH; + } + } + + return res; +} + +/* Get session cipher key from input cipher key */ +static void +get_key(const uint8_t *input_key, int keylen, uint8_t *session_key) +{ + memcpy(session_key, input_key, keylen); +} + +/* Set session cipher parameters */ +static int +crypto_set_session_cipher_parameters + (struct bcmfs_sym_session *sess, + const struct rte_crypto_cipher_xform *cipher_xform) +{ + int rc = 0; + + sess->cipher.key.length = cipher_xform->key.length; + sess->cipher.iv.offset = cipher_xform->iv.offset; + sess->cipher.iv.length = cipher_xform->iv.length; + sess->cipher.direction = (enum bcmfs_crypto_cipher_op)cipher_xform->op; + + /* Select cipher algo */ + switch (cipher_xform->algo) { + case RTE_CRYPTO_CIPHER_3DES_CBC: + sess->cipher.algo = BCMFS_CRYPTO_CIPHER_3DES_CBC; + break; + case RTE_CRYPTO_CIPHER_3DES_ECB: + sess->cipher.algo = BCMFS_CRYPTO_CIPHER_3DES_ECB; + break; + case RTE_CRYPTO_CIPHER_DES_CBC: + sess->cipher.algo = BCMFS_CRYPTO_CIPHER_DES_CBC; + break; + case RTE_CRYPTO_CIPHER_AES_CBC: + sess->cipher.algo = BCMFS_CRYPTO_CIPHER_AES_CBC; + break; + 
case RTE_CRYPTO_CIPHER_AES_ECB: + sess->cipher.algo = BCMFS_CRYPTO_CIPHER_AES_ECB; + break; + case RTE_CRYPTO_CIPHER_AES_CTR: + sess->cipher.algo = BCMFS_CRYPTO_CIPHER_AES_CTR; + break; + case RTE_CRYPTO_CIPHER_AES_XTS: + sess->cipher.algo = BCMFS_CRYPTO_CIPHER_AES_XTS; + break; + default: + BCMFS_DP_LOG(ERR, "set session failed. unknown algo"); + rc = -EINVAL; + break; + } + + if (!rc) + get_key(cipher_xform->key.data, + sess->cipher.key.length, + sess->cipher.key.data); + + return rc; +} + +/* Set session auth parameters */ +static int +crypto_set_session_auth_parameters(struct bcmfs_sym_session *sess, + const struct rte_crypto_auth_xform + *auth_xform) +{ + int rc = 0; + + /* Select auth generate/verify */ + sess->auth.operation = auth_xform->op ? + BCMFS_CRYPTO_AUTH_OP_GENERATE : + BCMFS_CRYPTO_AUTH_OP_VERIFY; + sess->auth.key.length = auth_xform->key.length; + sess->auth.digest_length = auth_xform->digest_length; + sess->auth.iv.length = auth_xform->iv.length; + sess->auth.iv.offset = auth_xform->iv.offset; + + /* Select auth algo */ + switch (auth_xform->algo) { + case RTE_CRYPTO_AUTH_MD5: + sess->auth.algo = BCMFS_CRYPTO_AUTH_MD5; + break; + case RTE_CRYPTO_AUTH_SHA1: + sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA1; + break; + case RTE_CRYPTO_AUTH_SHA224: + sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA224; + break; + case RTE_CRYPTO_AUTH_SHA256: + sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA256; + break; + case RTE_CRYPTO_AUTH_SHA384: + sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA384; + break; + case RTE_CRYPTO_AUTH_SHA512: + sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA512; + break; + case RTE_CRYPTO_AUTH_SHA3_224: + sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_224; + break; + case RTE_CRYPTO_AUTH_SHA3_256: + sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_256; + break; + case RTE_CRYPTO_AUTH_SHA3_384: + sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_384; + break; + case RTE_CRYPTO_AUTH_SHA3_512: + sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_512; + break; + case RTE_CRYPTO_AUTH_MD5_HMAC: + 
sess->auth.algo = BCMFS_CRYPTO_AUTH_MD5_HMAC; + break; + case RTE_CRYPTO_AUTH_SHA1_HMAC: + sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA1_HMAC; + break; + case RTE_CRYPTO_AUTH_SHA224_HMAC: + sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA224_HMAC; + break; + case RTE_CRYPTO_AUTH_SHA256_HMAC: + sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA256_HMAC; + break; + case RTE_CRYPTO_AUTH_SHA384_HMAC: + sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA384_HMAC; + break; + case RTE_CRYPTO_AUTH_SHA512_HMAC: + sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA512_HMAC; + break; + case RTE_CRYPTO_AUTH_SHA3_224_HMAC: + sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_224_HMAC; + break; + case RTE_CRYPTO_AUTH_SHA3_256_HMAC: + sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_256_HMAC; + break; + case RTE_CRYPTO_AUTH_SHA3_384_HMAC: + sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_384_HMAC; + break; + case RTE_CRYPTO_AUTH_SHA3_512_HMAC: + sess->auth.algo = BCMFS_CRYPTO_AUTH_SHA3_512_HMAC; + break; + case RTE_CRYPTO_AUTH_AES_XCBC_MAC: + sess->auth.algo = BCMFS_CRYPTO_AUTH_AES_XCBC_MAC; + break; + case RTE_CRYPTO_AUTH_AES_GMAC: + sess->auth.algo = BCMFS_CRYPTO_AUTH_AES_GMAC; + break; + case RTE_CRYPTO_AUTH_AES_CBC_MAC: + sess->auth.algo = BCMFS_CRYPTO_AUTH_AES_CBC_MAC; + break; + case RTE_CRYPTO_AUTH_AES_CMAC: + sess->auth.algo = BCMFS_CRYPTO_AUTH_AES_CMAC; + break; + default: + BCMFS_DP_LOG(ERR, "Invalid Auth algorithm\n"); + rc = -EINVAL; + break; + } + + if (!rc) + get_key(auth_xform->key.data, + auth_xform->key.length, + sess->auth.key.data); + + return rc; +} + +/* Set session aead parameters */ +static int +crypto_set_session_aead_parameters(struct bcmfs_sym_session *sess, + const struct rte_crypto_sym_xform *xform) +{ + int rc = 0; + + sess->cipher.iv.offset = xform->aead.iv.offset; + sess->cipher.iv.length = xform->aead.iv.length; + sess->aead.aad_length = xform->aead.aad_length; + sess->cipher.key.length = xform->aead.key.length; + sess->auth.digest_length = xform->aead.digest_length; + sess->cipher.direction = (enum 
bcmfs_crypto_cipher_op)xform->aead.op; + + /* Select aead algo */ + switch (xform->aead.algo) { + case RTE_CRYPTO_AEAD_AES_CCM: + sess->auth.algo = BCMFS_CRYPTO_AUTH_AES_CCM; + sess->cipher.algo = BCMFS_CRYPTO_CIPHER_AES_CCM; + break; + case RTE_CRYPTO_AEAD_AES_GCM: + sess->auth.algo = BCMFS_CRYPTO_AUTH_AES_GCM; + sess->cipher.algo = BCMFS_CRYPTO_CIPHER_AES_GCM; + break; + default: + BCMFS_DP_LOG(ERR, "Invalid aead algorithm\n"); + rc = -EINVAL; + break; + } + + if (!rc) + get_key(xform->aead.key.data, + xform->aead.key.length, + sess->cipher.key.data); + + return rc; +} + +static struct rte_crypto_auth_xform * +crypto_get_auth_xform(struct rte_crypto_sym_xform *xform) +{ + do { + if (xform->type == RTE_CRYPTO_SYM_XFORM_AUTH) + return &xform->auth; + + xform = xform->next; + } while (xform); + + return NULL; +} + +static struct rte_crypto_cipher_xform * +crypto_get_cipher_xform(struct rte_crypto_sym_xform *xform) +{ + do { + if (xform->type == RTE_CRYPTO_SYM_XFORM_CIPHER) + return &xform->cipher; + + xform = xform->next; + } while (xform); + + return NULL; +} + + +/** Parse crypto xform chain and set private session parameters */ +static int +crypto_set_session_parameters(struct bcmfs_sym_session *sess, + struct rte_crypto_sym_xform *xform) +{ + int rc = 0; + struct rte_crypto_cipher_xform *cipher_xform = + crypto_get_cipher_xform(xform); + struct rte_crypto_auth_xform *auth_xform = + crypto_get_auth_xform(xform); + + sess->chain_order = crypto_get_chain_order(xform); + + switch (sess->chain_order) { + case BCMFS_SYM_CHAIN_ONLY_CIPHER: + if (crypto_set_session_cipher_parameters(sess, + cipher_xform)) { + BCMFS_DP_LOG(ERR, "Invalid cipher"); + rc = -EINVAL; + } + break; + case BCMFS_SYM_CHAIN_ONLY_AUTH: + if (crypto_set_session_auth_parameters(sess, + auth_xform)) { + BCMFS_DP_LOG(ERR, "Invalid auth"); + rc = -EINVAL; + } + break; + case BCMFS_SYM_CHAIN_AUTH_CIPHER: + sess->cipher_first = false; + if (crypto_set_session_auth_parameters(sess, + auth_xform)) { + 
BCMFS_DP_LOG(ERR, "Invalid auth"); + rc = -EINVAL; + goto error; + } + + if (crypto_set_session_cipher_parameters(sess, + cipher_xform)) { + BCMFS_DP_LOG(ERR, "Invalid cipher"); + rc = -EINVAL; + } + break; + case BCMFS_SYM_CHAIN_CIPHER_AUTH: + sess->cipher_first = true; + if (crypto_set_session_auth_parameters(sess, + auth_xform)) { + BCMFS_DP_LOG(ERR, "Invalid auth"); + rc = -EINVAL; + goto error; + } + + if (crypto_set_session_cipher_parameters(sess, + cipher_xform)) { + BCMFS_DP_LOG(ERR, "Invalid cipher"); + rc = -EINVAL; + } + break; + case BCMFS_SYM_CHAIN_AEAD: + if (crypto_set_session_aead_parameters(sess, + xform)) { + BCMFS_DP_LOG(ERR, "Invalid aead"); + rc = -EINVAL; + } + break; + default: + BCMFS_DP_LOG(ERR, "Invalid chain order"); + rc = -EINVAL; + break; + } + +error: + return rc; +} + +struct bcmfs_sym_session * +bcmfs_sym_get_session(struct rte_crypto_op *op) +{ + struct bcmfs_sym_session *sess = NULL; + + if (unlikely(op->sess_type == RTE_CRYPTO_OP_SESSIONLESS)) { + BCMFS_DP_LOG(ERR, "operation op(%p) is sessionless", op); + } else if (likely(op->sym->session != NULL)) { + /* get existing session */ + sess = (struct bcmfs_sym_session *) + get_sym_session_private_data(op->sym->session, + cryptodev_bcmfs_driver_id); + } + + if (sess == NULL) + op->status = RTE_CRYPTO_OP_STATUS_INVALID_SESSION; + + return sess; +} + +int +bcmfs_sym_session_configure(struct rte_cryptodev *dev, + struct rte_crypto_sym_xform *xform, + struct rte_cryptodev_sym_session *sess, + struct rte_mempool *mempool) +{ + void *sess_private_data; + int ret; + + if (unlikely(sess == NULL)) { + BCMFS_DP_LOG(ERR, "Invalid session struct"); + return -EINVAL; + } + + if (rte_mempool_get(mempool, &sess_private_data)) { + BCMFS_DP_LOG(ERR, + "Couldn't get object from session mempool"); + return -ENOMEM; + } + + ret = crypto_set_session_parameters(sess_private_data, xform); + + if (ret != 0) { + BCMFS_DP_LOG(ERR, "Failed to configure session parameters"); + /* Return session to mempool */ +
rte_mempool_put(mempool, sess_private_data); + return ret; + } + + set_sym_session_private_data(sess, dev->driver_id, + sess_private_data); + + return 0; +} + +/* Clear the memory of session so it doesn't leave key material behind */ +void +bcmfs_sym_session_clear(struct rte_cryptodev *dev, + struct rte_cryptodev_sym_session *sess) +{ + uint8_t index = dev->driver_id; + void *sess_priv = get_sym_session_private_data(sess, index); + + if (sess_priv) { + struct rte_mempool *sess_mp; + + memset(sess_priv, 0, sizeof(struct bcmfs_sym_session)); + sess_mp = rte_mempool_from_obj(sess_priv); + + set_sym_session_private_data(sess, index, NULL); + rte_mempool_put(sess_mp, sess_priv); + } +} + +unsigned int +bcmfs_sym_session_get_private_size(struct rte_cryptodev *dev __rte_unused) +{ + return sizeof(struct bcmfs_sym_session); +} diff --git a/drivers/crypto/bcmfs/bcmfs_sym_session.h b/drivers/crypto/bcmfs/bcmfs_sym_session.h new file mode 100644 index 000000000..43deedcf8 --- /dev/null +++ b/drivers/crypto/bcmfs/bcmfs_sym_session.h @@ -0,0 +1,99 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2020 Broadcom + * All rights reserved. 
+ */ + +#ifndef _BCMFS_SYM_SESSION_H_ +#define _BCMFS_SYM_SESSION_H_ + +#include +#include +#include + +#include "bcmfs_sym_defs.h" +#include "bcmfs_sym_req.h" + +/* BCMFS_SYM operation order mode enumerator */ +enum bcmfs_sym_chain_order { + BCMFS_SYM_CHAIN_ONLY_CIPHER, + BCMFS_SYM_CHAIN_ONLY_AUTH, + BCMFS_SYM_CHAIN_CIPHER_AUTH, + BCMFS_SYM_CHAIN_AUTH_CIPHER, + BCMFS_SYM_CHAIN_AEAD, + BCMFS_SYM_CHAIN_NOT_SUPPORTED +}; + +/* BCMFS_SYM crypto private session structure */ +struct bcmfs_sym_session { + enum bcmfs_sym_chain_order chain_order; + + /* Cipher Parameters */ + struct { + enum bcmfs_crypto_cipher_op direction; + /* cipher operation direction */ + enum bcmfs_crypto_cipher_algorithm algo; + /* cipher algorithm */ + + struct { + uint8_t data[BCMFS_MAX_KEY_SIZE]; + /* key data */ + size_t length; + /* key length in bytes */ + } key; + + struct { + uint16_t offset; + uint16_t length; + } iv; + } cipher; + + /* Authentication Parameters */ + struct { + enum bcmfs_crypto_auth_op operation; + /* auth operation generate or verify */ + enum bcmfs_crypto_auth_algorithm algo; + /* auth algorithm */ + + struct { + uint8_t data[BCMFS_MAX_KEY_SIZE]; + /* key data */ + size_t length; + /* key length in bytes */ + } key; + struct { + uint16_t offset; + uint16_t length; + } iv; + + uint16_t digest_length; + } auth; + + /* AEAD Parameters */ + struct { + uint16_t aad_length; + } aead; + bool cipher_first; +} __rte_cache_aligned; + +int +bcmfs_process_crypto_op(struct rte_crypto_op *op, + struct bcmfs_sym_session *sess, + struct bcmfs_sym_request *req); + +int +bcmfs_sym_session_configure(struct rte_cryptodev *dev, + struct rte_crypto_sym_xform *xform, + struct rte_cryptodev_sym_session *sess, + struct rte_mempool *mempool); + +void +bcmfs_sym_session_clear(struct rte_cryptodev *dev, + struct rte_cryptodev_sym_session *sess); + +unsigned int +bcmfs_sym_session_get_private_size(struct rte_cryptodev *dev __rte_unused); + +struct bcmfs_sym_session * +bcmfs_sym_get_session(struct
rte_crypto_op *op); + +#endif /* _BCMFS_SYM_SESSION_H_ */ diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build index d9a3d73e9..2e86c733e 100644 --- a/drivers/crypto/bcmfs/meson.build +++ b/drivers/crypto/bcmfs/meson.build @@ -12,5 +12,7 @@ sources = files( 'hw/bcmfs4_rm.c', 'hw/bcmfs5_rm.c', 'hw/bcmfs_rm_common.c', - 'bcmfs_sym_pmd.c' + 'bcmfs_sym_pmd.c', + 'bcmfs_sym_capabilities.c', + 'bcmfs_sym_session.c' ) From patchwork Thu Aug 13 17:23:43 2020 X-Patchwork-Submitter: Vikas Gupta X-Patchwork-Id: 75508 X-Patchwork-Delegate: gakhil@marvell.com From: Vikas Gupta To: dev@dpdk.org, akhil.goyal@nxp.com Cc: vikram.prakash@broadcom.com, Vikas Gupta , Raveendra Padasalagi Date: Thu, 13 Aug 2020 22:53:43 +0530 Message-Id: <20200813172344.3228-8-vikas.gupta@broadcom.com> In-Reply-To: <20200813172344.3228-1-vikas.gupta@broadcom.com> References: <20200812063127.8687-1-vikas.gupta@broadcom.com> <20200813172344.3228-1-vikas.gupta@broadcom.com> Subject: [dpdk-dev] [PATCH v2 7/8] crypto/bcmfs: add crypto h/w module Add crypto h/w module to process crypto op. Crypto op is processed via sym_engine module before submitting the crypto request to h/w queues.
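The code in this patch describes every buffer fragment handed to the engine through `fsattr` accessors (`fsattr_va`, `fsattr_pa`, `fsattr_sz`), used as lvalues on both reads and writes. The real definition lives in `bcmfs_sym_engine.h`, which is not part of this excerpt; the field names below are assumptions, shown only as a minimal sketch of the pattern:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Hypothetical stand-in for the driver's fsattr buffer descriptor.
 * The actual struct is defined in bcmfs_sym_engine.h (not shown in
 * this excerpt); field names here are assumptions.
 */
struct fsattr {
	void *va;     /* virtual address of the data */
	uint64_t pa;  /* IOVA/physical address for the DMA engine */
	size_t sz;    /* fragment length in bytes */
};

/* Accessors usable as lvalues, mirroring fsattr_va(&src) = ... below. */
#define fsattr_va(__ptr) ((__ptr)->va)
#define fsattr_pa(__ptr) ((__ptr)->pa)
#define fsattr_sz(__ptr) ((__ptr)->sz)
```

Using lvalue macros keeps the request-building code symmetric: the same accessor names appear whether the driver fills a descriptor or reads it back.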
Signed-off-by: Vikas Gupta Signed-off-by: Raveendra Padasalagi Reviewed-by: Ajit Khaparde --- drivers/crypto/bcmfs/bcmfs_sym.c | 316 ++++++++ drivers/crypto/bcmfs/bcmfs_sym_defs.h | 16 + drivers/crypto/bcmfs/bcmfs_sym_engine.c | 994 ++++++++++++++++++++++++ drivers/crypto/bcmfs/bcmfs_sym_engine.h | 103 +++ drivers/crypto/bcmfs/bcmfs_sym_pmd.c | 26 + drivers/crypto/bcmfs/bcmfs_sym_req.h | 40 + drivers/crypto/bcmfs/meson.build | 4 +- 7 files changed, 1498 insertions(+), 1 deletion(-) create mode 100644 drivers/crypto/bcmfs/bcmfs_sym.c create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.c create mode 100644 drivers/crypto/bcmfs/bcmfs_sym_engine.h diff --git a/drivers/crypto/bcmfs/bcmfs_sym.c b/drivers/crypto/bcmfs/bcmfs_sym.c new file mode 100644 index 000000000..8f9415b5e --- /dev/null +++ b/drivers/crypto/bcmfs/bcmfs_sym.c @@ -0,0 +1,316 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2020 Broadcom + * All rights reserved. + */ + +#include + +#include +#include +#include +#include +#include + +#include "bcmfs_sym_defs.h" +#include "bcmfs_sym_engine.h" +#include "bcmfs_sym_req.h" +#include "bcmfs_sym_session.h" + +/** Process cipher operation */ +static int +process_crypto_cipher_op(struct rte_crypto_op *op, + struct rte_mbuf *mbuf_src, + struct rte_mbuf *mbuf_dst, + struct bcmfs_sym_session *sess, + struct bcmfs_sym_request *req) +{ + int rc = 0; + struct fsattr src, dst, iv, key; + struct rte_crypto_sym_op *sym_op = op->sym; + + fsattr_sz(&src) = sym_op->cipher.data.length; + fsattr_sz(&dst) = sym_op->cipher.data.length; + + fsattr_va(&src) = rte_pktmbuf_mtod_offset + (mbuf_src, + uint8_t *, + op->sym->cipher.data.offset); + + fsattr_va(&dst) = rte_pktmbuf_mtod_offset + (mbuf_dst, + uint8_t *, + op->sym->cipher.data.offset); + + fsattr_pa(&src) = rte_pktmbuf_iova(mbuf_src); + fsattr_pa(&dst) = rte_pktmbuf_iova(mbuf_dst); + + fsattr_va(&iv) = rte_crypto_op_ctod_offset(op, + uint8_t *, + sess->cipher.iv.offset); + + fsattr_sz(&iv) = 
sess->cipher.iv.length; + + fsattr_va(&key) = sess->cipher.key.data; + fsattr_pa(&key) = 0; + fsattr_sz(&key) = sess->cipher.key.length; + + rc = bcmfs_crypto_build_cipher_req(req, sess->cipher.algo, + sess->cipher.direction, &src, + &dst, &key, &iv); + if (rc) + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + + return rc; +} + +/** Process auth operation */ +static int +process_crypto_auth_op(struct rte_crypto_op *op, + struct rte_mbuf *mbuf_src, + struct bcmfs_sym_session *sess, + struct bcmfs_sym_request *req) +{ + int rc = 0; + struct fsattr src, dst, mac, key; + + fsattr_sz(&src) = op->sym->auth.data.length; + fsattr_va(&src) = rte_pktmbuf_mtod_offset(mbuf_src, + uint8_t *, + op->sym->auth.data.offset); + fsattr_pa(&src) = rte_pktmbuf_iova(mbuf_src); + + if (!sess->auth.operation) { + fsattr_va(&mac) = op->sym->auth.digest.data; + fsattr_pa(&mac) = op->sym->auth.digest.phys_addr; + fsattr_sz(&mac) = sess->auth.digest_length; + } else { + fsattr_va(&dst) = op->sym->auth.digest.data; + fsattr_pa(&dst) = op->sym->auth.digest.phys_addr; + fsattr_sz(&dst) = sess->auth.digest_length; + } + + fsattr_va(&key) = sess->auth.key.data; + fsattr_pa(&key) = 0; + fsattr_sz(&key) = sess->auth.key.length; + + /* AES-GMAC uses AES-GCM-128 authenticator */ + if (sess->auth.algo == BCMFS_CRYPTO_AUTH_AES_GMAC) { + struct fsattr iv; + fsattr_va(&iv) = rte_crypto_op_ctod_offset(op, + uint8_t *, + sess->auth.iv.offset); + fsattr_pa(&iv) = 0; + fsattr_sz(&iv) = sess->auth.iv.length; + + rc = bcmfs_crypto_build_aead_request(req, + BCMFS_CRYPTO_CIPHER_NONE, + 0, + BCMFS_CRYPTO_AUTH_AES_GMAC, + sess->auth.operation, + &src, NULL, NULL, &key, + &iv, NULL, + sess->auth.operation ? + (&dst) : &(mac), + 0); + } else { + rc = bcmfs_crypto_build_auth_req(req, sess->auth.algo, + sess->auth.operation, + &src, + (sess->auth.operation) ? (&dst) : NULL, + (sess->auth.operation) ? 
NULL : (&mac), + &key); + } + + if (rc) + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + + return rc; +} + +/** Process combined/chained mode operation */ +static int +process_crypto_combined_op(struct rte_crypto_op *op, + struct rte_mbuf *mbuf_src, + struct rte_mbuf *mbuf_dst, + struct bcmfs_sym_session *sess, + struct bcmfs_sym_request *req) +{ + int rc = 0, aad_size = 0; + struct fsattr src, dst, iv; + struct rte_crypto_sym_op *sym_op = op->sym; + struct fsattr cipher_key, aad, mac, auth_key; + + fsattr_sz(&src) = sym_op->cipher.data.length; + fsattr_sz(&dst) = sym_op->cipher.data.length; + + fsattr_va(&src) = rte_pktmbuf_mtod_offset + (mbuf_src, + uint8_t *, + sym_op->cipher.data.offset); + + fsattr_va(&dst) = rte_pktmbuf_mtod_offset + (mbuf_dst, + uint8_t *, + sym_op->cipher.data.offset); + + fsattr_pa(&src) = rte_pktmbuf_iova_offset(mbuf_src, + sym_op->cipher.data.offset); + fsattr_pa(&dst) = rte_pktmbuf_iova_offset(mbuf_dst, + sym_op->cipher.data.offset); + + fsattr_va(&iv) = rte_crypto_op_ctod_offset(op, + uint8_t *, + sess->cipher.iv.offset); + + fsattr_pa(&iv) = 0; + fsattr_sz(&iv) = sess->cipher.iv.length; + + fsattr_va(&cipher_key) = sess->cipher.key.data; + fsattr_pa(&cipher_key) = 0; + fsattr_sz(&cipher_key) = sess->cipher.key.length; + + fsattr_va(&auth_key) = sess->auth.key.data; + fsattr_pa(&auth_key) = 0; + fsattr_sz(&auth_key) = sess->auth.key.length; + + fsattr_va(&mac) = op->sym->auth.digest.data; + fsattr_pa(&mac) = op->sym->auth.digest.phys_addr; + fsattr_sz(&mac) = sess->auth.digest_length; + + aad_size = sym_op->auth.data.length - sym_op->cipher.data.length; + + if (aad_size > 0) { + fsattr_sz(&aad) = aad_size; + fsattr_va(&aad) = rte_pktmbuf_mtod_offset + (mbuf_src, + uint8_t *, + sym_op->auth.data.offset); + fsattr_pa(&aad) = rte_pktmbuf_iova_offset(mbuf_src, + sym_op->auth.data.offset); + } + + rc = bcmfs_crypto_build_aead_request(req, sess->cipher.algo, + sess->cipher.direction, + sess->auth.algo, + sess->auth.operation, + &src, &dst, 
&cipher_key, + &auth_key, &iv, + (aad_size > 0) ? (&aad) : NULL, + &mac, sess->cipher_first); + + if (rc) + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + + return rc; +} + +/** Process AEAD operation */ +static int +process_crypto_aead_op(struct rte_crypto_op *op, + struct rte_mbuf *mbuf_src, + struct rte_mbuf *mbuf_dst, + struct bcmfs_sym_session *sess, + struct bcmfs_sym_request *req) +{ + int rc = 0; + struct fsattr src, dst, iv; + struct rte_crypto_sym_op *sym_op = op->sym; + struct fsattr cipher_key, aad, mac, auth_key; + enum bcmfs_crypto_cipher_op cipher_op; + enum bcmfs_crypto_auth_op auth_op; + + if (sess->cipher.direction) { + auth_op = BCMFS_CRYPTO_AUTH_OP_VERIFY; + cipher_op = BCMFS_CRYPTO_CIPHER_OP_DECRYPT; + } else { + auth_op = BCMFS_CRYPTO_AUTH_OP_GENERATE; + cipher_op = BCMFS_CRYPTO_CIPHER_OP_ENCRYPT; + } + + fsattr_sz(&src) = sym_op->aead.data.length; + fsattr_sz(&dst) = sym_op->aead.data.length; + + fsattr_va(&src) = rte_pktmbuf_mtod_offset + (mbuf_src, + uint8_t *, + sym_op->aead.data.offset); + + fsattr_va(&dst) = rte_pktmbuf_mtod_offset + (mbuf_dst, + uint8_t *, + sym_op->aead.data.offset); + + fsattr_pa(&src) = rte_pktmbuf_iova_offset(mbuf_src, + sym_op->aead.data.offset); + fsattr_pa(&dst) = rte_pktmbuf_iova_offset(mbuf_dst, + sym_op->aead.data.offset); + + fsattr_va(&iv) = rte_crypto_op_ctod_offset(op, + uint8_t *, + sess->cipher.iv.offset); + + fsattr_pa(&iv) = 0; + fsattr_sz(&iv) = sess->cipher.iv.length; + + fsattr_va(&cipher_key) = sess->cipher.key.data; + fsattr_pa(&cipher_key) = 0; + fsattr_sz(&cipher_key) = sess->cipher.key.length; + + fsattr_va(&auth_key) = sess->auth.key.data; + fsattr_pa(&auth_key) = 0; + fsattr_sz(&auth_key) = sess->auth.key.length; + + fsattr_va(&mac) = op->sym->aead.digest.data; + fsattr_pa(&mac) = op->sym->aead.digest.phys_addr; + fsattr_sz(&mac) = sess->auth.digest_length; + + fsattr_va(&aad) = op->sym->aead.aad.data; + fsattr_pa(&aad) = op->sym->aead.aad.phys_addr; + fsattr_sz(&aad) = sess->aead.aad_length; + + 
rc = bcmfs_crypto_build_aead_request(req, sess->cipher.algo, + cipher_op, sess->auth.algo, + auth_op, &src, &dst, &cipher_key, + &auth_key, &iv, &aad, &mac, + sess->cipher.direction ? 0 : 1); + + if (rc) + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + + return rc; +} + +/** Process crypto operation for mbuf */ +int +bcmfs_process_sym_crypto_op(struct rte_crypto_op *op, + struct bcmfs_sym_session *sess, + struct bcmfs_sym_request *req) +{ + struct rte_mbuf *msrc, *mdst; + int rc = 0; + + msrc = op->sym->m_src; + mdst = op->sym->m_dst ? op->sym->m_dst : op->sym->m_src; + op->status = RTE_CRYPTO_OP_STATUS_NOT_PROCESSED; + + switch (sess->chain_order) { + case BCMFS_SYM_CHAIN_ONLY_CIPHER: + rc = process_crypto_cipher_op(op, msrc, mdst, sess, req); + break; + case BCMFS_SYM_CHAIN_ONLY_AUTH: + rc = process_crypto_auth_op(op, msrc, sess, req); + break; + case BCMFS_SYM_CHAIN_CIPHER_AUTH: + case BCMFS_SYM_CHAIN_AUTH_CIPHER: + rc = process_crypto_combined_op(op, msrc, mdst, sess, req); + break; + case BCMFS_SYM_CHAIN_AEAD: + rc = process_crypto_aead_op(op, msrc, mdst, sess, req); + break; + default: + op->status = RTE_CRYPTO_OP_STATUS_ERROR; + break; + } + + return rc; +} diff --git a/drivers/crypto/bcmfs/bcmfs_sym_defs.h b/drivers/crypto/bcmfs/bcmfs_sym_defs.h index d94446d35..90280dba5 100644 --- a/drivers/crypto/bcmfs/bcmfs_sym_defs.h +++ b/drivers/crypto/bcmfs/bcmfs_sym_defs.h @@ -15,6 +15,18 @@ #define BCMFS_MAX_IV_SIZE 16 #define BCMFS_MAX_DIGEST_SIZE 64 +struct bcmfs_sym_session; +struct bcmfs_sym_request; + +/** Crypto Request processing successful. */ +#define BCMFS_SYM_RESPONSE_SUCCESS (0) +/** Crypto Request processing protocol failure. */ +#define BCMFS_SYM_RESPONSE_PROTO_FAILURE (1) +/** Crypto Request processing completion failure. */ +#define BCMFS_SYM_RESPONSE_COMPL_ERROR (2) +/** Crypto Request processing hash tag check error. 
*/ +#define BCMFS_SYM_RESPONSE_HASH_TAG_ERROR (3) + /** Symmetric Cipher Direction */ enum bcmfs_crypto_cipher_op { /** Encrypt cipher operation */ @@ -167,4 +179,8 @@ enum bcmfs_sym_crypto_class { BCMFS_CRYPTO_AEAD, }; +int +bcmfs_process_sym_crypto_op(struct rte_crypto_op *op, + struct bcmfs_sym_session *sess, + struct bcmfs_sym_request *req); #endif /* _BCMFS_SYM_DEFS_H_ */ diff --git a/drivers/crypto/bcmfs/bcmfs_sym_engine.c b/drivers/crypto/bcmfs/bcmfs_sym_engine.c new file mode 100644 index 000000000..c17174fc0 --- /dev/null +++ b/drivers/crypto/bcmfs/bcmfs_sym_engine.c @@ -0,0 +1,994 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(C) 2020 Broadcom. + * All rights reserved. + */ + +#include +#include + +#include +#include + +#include "bcmfs_logs.h" +#include "bcmfs_sym_defs.h" +#include "bcmfs_dev_msg.h" +#include "bcmfs_sym_req.h" +#include "bcmfs_sym_engine.h" + +enum spu2_cipher_type { + SPU2_CIPHER_TYPE_NONE = 0x0, + SPU2_CIPHER_TYPE_AES128 = 0x1, + SPU2_CIPHER_TYPE_AES192 = 0x2, + SPU2_CIPHER_TYPE_AES256 = 0x3, + SPU2_CIPHER_TYPE_DES = 0x4, + SPU2_CIPHER_TYPE_3DES = 0x5, + SPU2_CIPHER_TYPE_LAST +}; + +enum spu2_cipher_mode { + SPU2_CIPHER_MODE_ECB = 0x0, + SPU2_CIPHER_MODE_CBC = 0x1, + SPU2_CIPHER_MODE_CTR = 0x2, + SPU2_CIPHER_MODE_CFB = 0x3, + SPU2_CIPHER_MODE_OFB = 0x4, + SPU2_CIPHER_MODE_XTS = 0x5, + SPU2_CIPHER_MODE_CCM = 0x6, + SPU2_CIPHER_MODE_GCM = 0x7, + SPU2_CIPHER_MODE_LAST +}; + +enum spu2_hash_type { + SPU2_HASH_TYPE_NONE = 0x0, + SPU2_HASH_TYPE_AES128 = 0x1, + SPU2_HASH_TYPE_AES192 = 0x2, + SPU2_HASH_TYPE_AES256 = 0x3, + SPU2_HASH_TYPE_MD5 = 0x6, + SPU2_HASH_TYPE_SHA1 = 0x7, + SPU2_HASH_TYPE_SHA224 = 0x8, + SPU2_HASH_TYPE_SHA256 = 0x9, + SPU2_HASH_TYPE_SHA384 = 0xa, + SPU2_HASH_TYPE_SHA512 = 0xb, + SPU2_HASH_TYPE_SHA512_224 = 0xc, + SPU2_HASH_TYPE_SHA512_256 = 0xd, + SPU2_HASH_TYPE_SHA3_224 = 0xe, + SPU2_HASH_TYPE_SHA3_256 = 0xf, + SPU2_HASH_TYPE_SHA3_384 = 0x10, + SPU2_HASH_TYPE_SHA3_512 = 0x11, + SPU2_HASH_TYPE_LAST +}; + +enum 
spu2_hash_mode { + SPU2_HASH_MODE_CMAC = 0x0, + SPU2_HASH_MODE_CBC_MAC = 0x1, + SPU2_HASH_MODE_XCBC_MAC = 0x2, + SPU2_HASH_MODE_HMAC = 0x3, + SPU2_HASH_MODE_RABIN = 0x4, + SPU2_HASH_MODE_CCM = 0x5, + SPU2_HASH_MODE_GCM = 0x6, + SPU2_HASH_MODE_RESERVED = 0x7, + SPU2_HASH_MODE_LAST +}; + +enum spu2_proto_sel { + SPU2_PROTO_RESV = 0, + SPU2_MACSEC_SECTAG8_ECB = 1, + SPU2_MACSEC_SECTAG8_SCB = 2, + SPU2_MACSEC_SECTAG16 = 3, + SPU2_MACSEC_SECTAG16_8_XPN = 4, + SPU2_IPSEC = 5, + SPU2_IPSEC_ESN = 6, + SPU2_TLS_CIPHER = 7, + SPU2_TLS_AEAD = 8, + SPU2_DTLS_CIPHER = 9, + SPU2_DTLS_AEAD = 10 +}; + +/* SPU2 response size */ +#define SPU2_STATUS_LEN 2 + +/* Metadata settings in response */ +enum spu2_ret_md_opts { + SPU2_RET_NO_MD = 0, /* return no metadata */ + SPU2_RET_FMD_OMD = 1, /* return both FMD and OMD */ + SPU2_RET_FMD_ONLY = 2, /* return only FMD */ + SPU2_RET_FMD_OMD_IV = 3, /* return FMD and OMD with just IVs */ +}; + +/* FMD ctrl0 field masks */ +#define SPU2_CIPH_ENCRYPT_EN 0x1 /* 0: decrypt, 1: encrypt */ +#define SPU2_CIPH_TYPE_SHIFT 4 +#define SPU2_CIPH_MODE 0xF00 /* one of spu2_cipher_mode */ +#define SPU2_CIPH_MODE_SHIFT 8 +#define SPU2_CFB_MASK 0x7000 /* cipher feedback mask */ +#define SPU2_CFB_MASK_SHIFT 12 +#define SPU2_PROTO_SEL 0xF00000 /* MACsec, IPsec, TLS... 
*/ +#define SPU2_PROTO_SEL_SHIFT 20 +#define SPU2_HASH_FIRST 0x1000000 /* 1: hash input is input pkt + * data + */ +#define SPU2_CHK_TAG 0x2000000 /* 1: check digest provided */ +#define SPU2_HASH_TYPE 0x1F0000000 /* one of spu2_hash_type */ +#define SPU2_HASH_TYPE_SHIFT 28 +#define SPU2_HASH_MODE 0xF000000000 /* one of spu2_hash_mode */ +#define SPU2_HASH_MODE_SHIFT 36 +#define SPU2_CIPH_PAD_EN 0x100000000000 /* 1: Add pad to end of payload for + * enc + */ +#define SPU2_CIPH_PAD 0xFF000000000000 /* cipher pad value */ +#define SPU2_CIPH_PAD_SHIFT 48 + +/* FMD ctrl1 field masks */ +#define SPU2_TAG_LOC 0x1 /* 1: end of payload, 0: undef */ +#define SPU2_HAS_FR_DATA 0x2 /* 1: msg has frame data */ +#define SPU2_HAS_AAD1 0x4 /* 1: msg has AAD1 field */ +#define SPU2_HAS_NAAD 0x8 /* 1: msg has NAAD field */ +#define SPU2_HAS_AAD2 0x10 /* 1: msg has AAD2 field */ +#define SPU2_HAS_ESN 0x20 /* 1: msg has ESN field */ +#define SPU2_HASH_KEY_LEN 0xFF00 /* len of hash key in bytes. + * HMAC only. + */ +#define SPU2_HASH_KEY_LEN_SHIFT 8 +#define SPU2_CIPH_KEY_LEN 0xFF00000 /* len of cipher key in bytes */ +#define SPU2_CIPH_KEY_LEN_SHIFT 20 +#define SPU2_GENIV 0x10000000 /* 1: hw generates IV */ +#define SPU2_HASH_IV 0x20000000 /* 1: IV incl in hash */ +#define SPU2_RET_IV 0x40000000 /* 1: return IV in output msg + * b4 payload + */ +#define SPU2_RET_IV_LEN 0xF00000000 /* length in bytes of IV returned. 
+ * 0 = 16 bytes + */ +#define SPU2_RET_IV_LEN_SHIFT 32 +#define SPU2_IV_OFFSET 0xF000000000 /* gen IV offset */ +#define SPU2_IV_OFFSET_SHIFT 36 +#define SPU2_IV_LEN 0x1F0000000000 /* length of input IV in bytes */ +#define SPU2_IV_LEN_SHIFT 40 +#define SPU2_HASH_TAG_LEN 0x7F000000000000 /* hash tag length in bytes */ +#define SPU2_HASH_TAG_LEN_SHIFT 48 +#define SPU2_RETURN_MD 0x300000000000000 /* return metadata */ +#define SPU2_RETURN_MD_SHIFT 56 +#define SPU2_RETURN_FD 0x400000000000000 +#define SPU2_RETURN_AAD1 0x800000000000000 +#define SPU2_RETURN_NAAD 0x1000000000000000 +#define SPU2_RETURN_AAD2 0x2000000000000000 +#define SPU2_RETURN_PAY 0x4000000000000000 /* return payload */ + +/* FMD ctrl2 field masks */ +#define SPU2_AAD1_OFFSET 0xFFF /* byte offset of AAD1 field */ +#define SPU2_AAD1_LEN 0xFF000 /* length of AAD1 in bytes */ +#define SPU2_AAD1_LEN_SHIFT 12 +#define SPU2_AAD2_OFFSET 0xFFF00000 /* byte offset of AAD2 field */ +#define SPU2_AAD2_OFFSET_SHIFT 20 +#define SPU2_PL_OFFSET 0xFFFFFFFF00000000 /* payload offset from AAD2 */ +#define SPU2_PL_OFFSET_SHIFT 32 + +/* FMD ctrl3 field masks */ +#define SPU2_PL_LEN 0xFFFFFFFF /* payload length in bytes */ +#define SPU2_TLS_LEN 0xFFFF00000000 /* TLS encrypt: cipher len + * TLS decrypt: compressed len + */ +#define SPU2_TLS_LEN_SHIFT 32 + +/* + * Max value that can be represented in the Payload Length field of the + * ctrl3 word of FMD. 
+ */ +#define SPU2_MAX_PAYLOAD SPU2_PL_LEN + +#define SPU2_VAL_NONE 0 + +/* CCM B_0 field definitions, common for SPU-M and SPU2 */ +#define CCM_B0_ADATA 0x40 +#define CCM_B0_ADATA_SHIFT 6 +#define CCM_B0_M_PRIME 0x38 +#define CCM_B0_M_PRIME_SHIFT 3 +#define CCM_B0_L_PRIME 0x07 +#define CCM_B0_L_PRIME_SHIFT 0 +#define CCM_ESP_L_VALUE 4 + +static uint16_t +spu2_cipher_type_xlate(enum bcmfs_crypto_cipher_algorithm cipher_alg, + enum spu2_cipher_type *spu2_type, + struct fsattr *key) +{ + int ret = 0; + int key_size = fsattr_sz(key); + + if (cipher_alg == BCMFS_CRYPTO_CIPHER_AES_XTS) + key_size = key_size / 2; + + switch (key_size) { + case BCMFS_CRYPTO_AES128: + *spu2_type = SPU2_CIPHER_TYPE_AES128; + break; + case BCMFS_CRYPTO_AES192: + *spu2_type = SPU2_CIPHER_TYPE_AES192; + break; + case BCMFS_CRYPTO_AES256: + *spu2_type = SPU2_CIPHER_TYPE_AES256; + break; + default: + ret = -EINVAL; + } + + return ret; +} + +static int +spu2_hash_xlate(enum bcmfs_crypto_auth_algorithm auth_alg, + struct fsattr *key, + enum spu2_hash_type *spu2_type, + enum spu2_hash_mode *spu2_mode) +{ + *spu2_mode = 0; + + switch (auth_alg) { + case BCMFS_CRYPTO_AUTH_NONE: + *spu2_type = SPU2_HASH_TYPE_NONE; + break; + case BCMFS_CRYPTO_AUTH_MD5: + *spu2_type = SPU2_HASH_TYPE_MD5; + break; + case BCMFS_CRYPTO_AUTH_MD5_HMAC: + *spu2_type = SPU2_HASH_TYPE_MD5; + *spu2_mode = SPU2_HASH_MODE_HMAC; + break; + case BCMFS_CRYPTO_AUTH_SHA1: + *spu2_type = SPU2_HASH_TYPE_SHA1; + break; + case BCMFS_CRYPTO_AUTH_SHA1_HMAC: + *spu2_type = SPU2_HASH_TYPE_SHA1; + *spu2_mode = SPU2_HASH_MODE_HMAC; + break; + case BCMFS_CRYPTO_AUTH_SHA224: + *spu2_type = SPU2_HASH_TYPE_SHA224; + break; + case BCMFS_CRYPTO_AUTH_SHA224_HMAC: + *spu2_type = SPU2_HASH_TYPE_SHA224; + *spu2_mode = SPU2_HASH_MODE_HMAC; + break; + case BCMFS_CRYPTO_AUTH_SHA256: + *spu2_type = SPU2_HASH_TYPE_SHA256; + break; + case BCMFS_CRYPTO_AUTH_SHA256_HMAC: + *spu2_type = SPU2_HASH_TYPE_SHA256; + *spu2_mode = SPU2_HASH_MODE_HMAC; + break; + case 
BCMFS_CRYPTO_AUTH_SHA384: + *spu2_type = SPU2_HASH_TYPE_SHA384; + break; + case BCMFS_CRYPTO_AUTH_SHA384_HMAC: + *spu2_type = SPU2_HASH_TYPE_SHA384; + *spu2_mode = SPU2_HASH_MODE_HMAC; + break; + case BCMFS_CRYPTO_AUTH_SHA512: + *spu2_type = SPU2_HASH_TYPE_SHA512; + break; + case BCMFS_CRYPTO_AUTH_SHA512_HMAC: + *spu2_type = SPU2_HASH_TYPE_SHA512; + *spu2_mode = SPU2_HASH_MODE_HMAC; + break; + case BCMFS_CRYPTO_AUTH_SHA3_224: + *spu2_type = SPU2_HASH_TYPE_SHA3_224; + break; + case BCMFS_CRYPTO_AUTH_SHA3_224_HMAC: + *spu2_type = SPU2_HASH_TYPE_SHA3_224; + *spu2_mode = SPU2_HASH_MODE_HMAC; + break; + case BCMFS_CRYPTO_AUTH_SHA3_256: + *spu2_type = SPU2_HASH_TYPE_SHA3_256; + break; + case BCMFS_CRYPTO_AUTH_SHA3_256_HMAC: + *spu2_type = SPU2_HASH_TYPE_SHA3_256; + *spu2_mode = SPU2_HASH_MODE_HMAC; + break; + case BCMFS_CRYPTO_AUTH_SHA3_384: + *spu2_type = SPU2_HASH_TYPE_SHA3_384; + break; + case BCMFS_CRYPTO_AUTH_SHA3_384_HMAC: + *spu2_type = SPU2_HASH_TYPE_SHA3_384; + *spu2_mode = SPU2_HASH_MODE_HMAC; + break; + case BCMFS_CRYPTO_AUTH_SHA3_512: + *spu2_type = SPU2_HASH_TYPE_SHA3_512; + break; + case BCMFS_CRYPTO_AUTH_SHA3_512_HMAC: + *spu2_type = SPU2_HASH_TYPE_SHA3_512; + *spu2_mode = SPU2_HASH_MODE_HMAC; + break; + case BCMFS_CRYPTO_AUTH_AES_XCBC_MAC: + *spu2_mode = SPU2_HASH_MODE_XCBC_MAC; + switch (fsattr_sz(key)) { + case BCMFS_CRYPTO_AES128: + *spu2_type = SPU2_HASH_TYPE_AES128; + break; + case BCMFS_CRYPTO_AES192: + *spu2_type = SPU2_HASH_TYPE_AES192; + break; + case BCMFS_CRYPTO_AES256: + *spu2_type = SPU2_HASH_TYPE_AES256; + break; + default: + return -EINVAL; + } + break; + case BCMFS_CRYPTO_AUTH_AES_CMAC: + *spu2_mode = SPU2_HASH_MODE_CMAC; + switch (fsattr_sz(key)) { + case BCMFS_CRYPTO_AES128: + *spu2_type = SPU2_HASH_TYPE_AES128; + break; + case BCMFS_CRYPTO_AES192: + *spu2_type = SPU2_HASH_TYPE_AES192; + break; + case BCMFS_CRYPTO_AES256: + *spu2_type = SPU2_HASH_TYPE_AES256; + break; + default: + return -EINVAL; + } + break; + case 
BCMFS_CRYPTO_AUTH_AES_GMAC: + *spu2_mode = SPU2_HASH_MODE_GCM; + switch (fsattr_sz(key)) { + case BCMFS_CRYPTO_AES128: + *spu2_type = SPU2_HASH_TYPE_AES128; + break; + case BCMFS_CRYPTO_AES192: + *spu2_type = SPU2_HASH_TYPE_AES192; + break; + case BCMFS_CRYPTO_AES256: + *spu2_type = SPU2_HASH_TYPE_AES256; + break; + default: + return -EINVAL; + } + break; + case BCMFS_CRYPTO_AUTH_AES_CBC_MAC: + *spu2_mode = SPU2_HASH_MODE_CBC_MAC; + switch (fsattr_sz(key)) { + case BCMFS_CRYPTO_AES128: + *spu2_type = SPU2_HASH_TYPE_AES128; + break; + case BCMFS_CRYPTO_AES192: + *spu2_type = SPU2_HASH_TYPE_AES192; + break; + case BCMFS_CRYPTO_AES256: + *spu2_type = SPU2_HASH_TYPE_AES256; + break; + default: + return -EINVAL; + } + break; + case BCMFS_CRYPTO_AUTH_AES_GCM: + *spu2_mode = SPU2_HASH_MODE_GCM; + break; + case BCMFS_CRYPTO_AUTH_AES_CCM: + *spu2_mode = SPU2_HASH_MODE_CCM; + break; + } + + return 0; +} + +static int +spu2_cipher_xlate(enum bcmfs_crypto_cipher_algorithm cipher_alg, + struct fsattr *key, + enum spu2_cipher_type *spu2_type, + enum spu2_cipher_mode *spu2_mode) +{ + int ret = 0; + + switch (cipher_alg) { + case BCMFS_CRYPTO_CIPHER_NONE: + *spu2_type = SPU2_CIPHER_TYPE_NONE; + break; + case BCMFS_CRYPTO_CIPHER_DES_ECB: + *spu2_mode = SPU2_CIPHER_MODE_ECB; + *spu2_type = SPU2_CIPHER_TYPE_DES; + break; + case BCMFS_CRYPTO_CIPHER_DES_CBC: + *spu2_mode = SPU2_CIPHER_MODE_CBC; + *spu2_type = SPU2_CIPHER_TYPE_DES; + break; + case BCMFS_CRYPTO_CIPHER_3DES_ECB: + *spu2_mode = SPU2_CIPHER_MODE_ECB; + *spu2_type = SPU2_CIPHER_TYPE_3DES; + break; + case BCMFS_CRYPTO_CIPHER_3DES_CBC: + *spu2_mode = SPU2_CIPHER_MODE_CBC; + *spu2_type = SPU2_CIPHER_TYPE_3DES; + break; + case BCMFS_CRYPTO_CIPHER_AES_CBC: + *spu2_mode = SPU2_CIPHER_MODE_CBC; + ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key); + break; + case BCMFS_CRYPTO_CIPHER_AES_ECB: + *spu2_mode = SPU2_CIPHER_MODE_ECB; + ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key); + break; + case 
BCMFS_CRYPTO_CIPHER_AES_CTR: + *spu2_mode = SPU2_CIPHER_MODE_CTR; + ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key); + break; + case BCMFS_CRYPTO_CIPHER_AES_CCM: + *spu2_mode = SPU2_CIPHER_MODE_CCM; + ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key); + break; + case BCMFS_CRYPTO_CIPHER_AES_GCM: + *spu2_mode = SPU2_CIPHER_MODE_GCM; + ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key); + break; + case BCMFS_CRYPTO_CIPHER_AES_XTS: + *spu2_mode = SPU2_CIPHER_MODE_XTS; + ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key); + break; + case BCMFS_CRYPTO_CIPHER_AES_OFB: + *spu2_mode = SPU2_CIPHER_MODE_OFB; + ret = spu2_cipher_type_xlate(cipher_alg, spu2_type, key); + break; + } + + return ret; +} + +static void +spu2_fmd_ctrl0_write(struct spu2_fmd *fmd, + bool is_inbound, bool auth_first, + enum spu2_proto_sel protocol, + enum spu2_cipher_type cipher_type, + enum spu2_cipher_mode cipher_mode, + enum spu2_hash_type auth_type, + enum spu2_hash_mode auth_mode) +{ + uint64_t ctrl0 = 0; + + if (cipher_type != SPU2_CIPHER_TYPE_NONE && !is_inbound) + ctrl0 |= SPU2_CIPH_ENCRYPT_EN; + + ctrl0 |= ((uint64_t)cipher_type << SPU2_CIPH_TYPE_SHIFT) | + ((uint64_t)cipher_mode << SPU2_CIPH_MODE_SHIFT); + + if (protocol != SPU2_PROTO_RESV) + ctrl0 |= (uint64_t)protocol << SPU2_PROTO_SEL_SHIFT; + + if (auth_first) + ctrl0 |= SPU2_HASH_FIRST; + + if (is_inbound && auth_type != SPU2_HASH_TYPE_NONE) + ctrl0 |= SPU2_CHK_TAG; + + ctrl0 |= (((uint64_t)auth_type << SPU2_HASH_TYPE_SHIFT) | + ((uint64_t)auth_mode << SPU2_HASH_MODE_SHIFT)); + + fmd->ctrl0 = ctrl0; + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + BCMFS_DP_HEXDUMP_LOG(DEBUG, "ctrl0:", &fmd->ctrl0, sizeof(uint64_t)); +#endif +} + +static void +spu2_fmd_ctrl1_write(struct spu2_fmd *fmd, bool is_inbound, + uint64_t assoc_size, uint64_t auth_key_len, + uint64_t cipher_key_len, bool gen_iv, bool hash_iv, + bool return_iv, uint64_t ret_iv_len, + uint64_t ret_iv_offset, uint64_t cipher_iv_len, + uint64_t digest_size, 
bool return_payload, bool return_md) +{ + uint64_t ctrl1 = 0; + + if (is_inbound && digest_size != 0) + ctrl1 |= SPU2_TAG_LOC; + + if (assoc_size != 0) + ctrl1 |= SPU2_HAS_AAD2; + + if (auth_key_len != 0) + ctrl1 |= ((auth_key_len << SPU2_HASH_KEY_LEN_SHIFT) & + SPU2_HASH_KEY_LEN); + + if (cipher_key_len != 0) + ctrl1 |= ((cipher_key_len << SPU2_CIPH_KEY_LEN_SHIFT) & + SPU2_CIPH_KEY_LEN); + + if (gen_iv) + ctrl1 |= SPU2_GENIV; + + if (hash_iv) + ctrl1 |= SPU2_HASH_IV; + + if (return_iv) { + ctrl1 |= SPU2_RET_IV; + ctrl1 |= ret_iv_len << SPU2_RET_IV_LEN_SHIFT; + ctrl1 |= ret_iv_offset << SPU2_IV_OFFSET_SHIFT; + } + + ctrl1 |= ((cipher_iv_len << SPU2_IV_LEN_SHIFT) & SPU2_IV_LEN); + + if (digest_size != 0) { + ctrl1 |= ((digest_size << SPU2_HASH_TAG_LEN_SHIFT) & + SPU2_HASH_TAG_LEN); + } + + /* + * Let's ask for the output pkt to include FMD, but don't need to + * get keys and IVs back in OMD. + */ + if (return_md) + ctrl1 |= ((uint64_t)SPU2_RET_FMD_ONLY << SPU2_RETURN_MD_SHIFT); + else + ctrl1 |= ((uint64_t)SPU2_RET_NO_MD << SPU2_RETURN_MD_SHIFT); + + /* Crypto API does not get assoc data back. So no need for AAD2. */ + + if (return_payload) + ctrl1 |= SPU2_RETURN_PAY; + + fmd->ctrl1 = ctrl1; + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + BCMFS_DP_HEXDUMP_LOG(DEBUG, "ctrl1:", &fmd->ctrl1, sizeof(uint64_t)); +#endif +} + +static void +spu2_fmd_ctrl2_write(struct spu2_fmd *fmd, uint64_t cipher_offset, + uint64_t auth_key_len __rte_unused, + uint64_t auth_iv_len __rte_unused, + uint64_t cipher_key_len __rte_unused, + uint64_t cipher_iv_len __rte_unused) +{ + uint64_t aad1_offset; + uint64_t aad2_offset; + uint16_t aad1_len = 0; + uint64_t payload_offset; + + /* AAD1 offset is from start of FD. FD length always 0. 
*/ + aad1_offset = 0; + + aad2_offset = aad1_offset; + payload_offset = cipher_offset; + fmd->ctrl2 = aad1_offset | + (aad1_len << SPU2_AAD1_LEN_SHIFT) | + (aad2_offset << SPU2_AAD2_OFFSET_SHIFT) | + (payload_offset << SPU2_PL_OFFSET_SHIFT); + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + BCMFS_DP_HEXDUMP_LOG(DEBUG, "ctrl2:", &fmd->ctrl2, sizeof(uint64_t)); +#endif +} + +static void +spu2_fmd_ctrl3_write(struct spu2_fmd *fmd, uint64_t payload_len) +{ + fmd->ctrl3 = payload_len & SPU2_PL_LEN; + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + BCMFS_DP_HEXDUMP_LOG(DEBUG, "ctrl3:", &fmd->ctrl3, sizeof(uint64_t)); +#endif +} + +int +bcmfs_crypto_build_auth_req(struct bcmfs_sym_request *sreq, + enum bcmfs_crypto_auth_algorithm a_alg, + enum bcmfs_crypto_auth_op auth_op, + struct fsattr *src, struct fsattr *dst, + struct fsattr *mac, struct fsattr *auth_key) +{ + int ret; + uint64_t dst_size; + int src_index = 0; + struct spu2_fmd *fmd; + enum spu2_hash_mode spu2_auth_mode; + enum spu2_hash_type spu2_auth_type = SPU2_HASH_TYPE_NONE; + uint64_t auth_ksize = (auth_key != NULL) ? fsattr_sz(auth_key) : 0; + bool is_inbound = (auth_op == BCMFS_CRYPTO_AUTH_OP_VERIFY); + + if (src == NULL) + return -EINVAL; + + /* one of dst or mac should not be NULL */ + if (dst == NULL && mac == NULL) + return -EINVAL; + + dst_size = (auth_op == BCMFS_CRYPTO_AUTH_OP_GENERATE) ? 
+ fsattr_sz(dst) : fsattr_sz(mac); + + /* spu2 hash algorithm and hash algorithm mode */ + ret = spu2_hash_xlate(a_alg, auth_key, &spu2_auth_type, + &spu2_auth_mode); + if (ret) + return -EINVAL; + + fmd = &sreq->fmd; + + spu2_fmd_ctrl0_write(fmd, is_inbound, SPU2_VAL_NONE, + SPU2_PROTO_RESV, SPU2_VAL_NONE, + SPU2_VAL_NONE, spu2_auth_type, spu2_auth_mode); + + spu2_fmd_ctrl1_write(fmd, is_inbound, SPU2_VAL_NONE, + auth_ksize, SPU2_VAL_NONE, false, + false, SPU2_VAL_NONE, SPU2_VAL_NONE, + SPU2_VAL_NONE, SPU2_VAL_NONE, + dst_size, SPU2_VAL_NONE, SPU2_VAL_NONE); + + memset(&fmd->ctrl2, 0, sizeof(uint64_t)); + + spu2_fmd_ctrl3_write(fmd, fsattr_sz(src)); + + /* Source metadata and data pointers */ + sreq->msgs.srcs_addr[src_index] = sreq->fptr; + sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd); + src_index++; + + if (auth_key != NULL && fsattr_sz(auth_key) != 0) { + memcpy(sreq->auth_key, fsattr_va(auth_key), + fsattr_sz(auth_key)); + + sreq->msgs.srcs_addr[src_index] = sreq->aptr; + sreq->msgs.srcs_len[src_index] = fsattr_sz(auth_key); + src_index++; + } + + sreq->msgs.srcs_addr[src_index] = fsattr_pa(src); + sreq->msgs.srcs_len[src_index] = fsattr_sz(src); + src_index++; + + /* + * In case of authentication verify operation, use input mac data to + * SPU2 engine. + */ + if (auth_op == BCMFS_CRYPTO_AUTH_OP_VERIFY && mac != NULL) { + sreq->msgs.srcs_addr[src_index] = fsattr_pa(mac); + sreq->msgs.srcs_len[src_index] = fsattr_sz(mac); + src_index++; + } + sreq->msgs.srcs_count = src_index; + + /* + * Output packet contains actual output from SPU2 and + * the status packet, so the dsts_count is always 2 below. + */ + if (auth_op == BCMFS_CRYPTO_AUTH_OP_GENERATE) { + sreq->msgs.dsts_addr[0] = fsattr_pa(dst); + sreq->msgs.dsts_len[0] = fsattr_sz(dst); + } else { + /* + * In case of authentication verify operation, provide dummy + * location to SPU2 engine to generate hash. This is needed + * because SPU2 generates hash even in case of verify operation. 
+ */ + sreq->msgs.dsts_addr[0] = sreq->dptr; + sreq->msgs.dsts_len[0] = fsattr_sz(mac); + } + + sreq->msgs.dsts_addr[1] = sreq->rptr; + sreq->msgs.dsts_len[1] = SPU2_STATUS_LEN; + sreq->msgs.dsts_count = 2; + + return 0; +} + +int +bcmfs_crypto_build_cipher_req(struct bcmfs_sym_request *sreq, + enum bcmfs_crypto_cipher_algorithm calgo, + enum bcmfs_crypto_cipher_op cipher_op, + struct fsattr *src, struct fsattr *dst, + struct fsattr *cipher_key, struct fsattr *iv) +{ + int ret = 0; + int src_index = 0; + struct spu2_fmd *fmd; + unsigned int xts_keylen; + enum spu2_cipher_mode spu2_ciph_mode = 0; + enum spu2_cipher_type spu2_ciph_type = SPU2_CIPHER_TYPE_NONE; + bool is_inbound = (cipher_op == BCMFS_CRYPTO_CIPHER_OP_DECRYPT); + + if (src == NULL || dst == NULL || iv == NULL) + return -EINVAL; + + fmd = &sreq->fmd; + + /* spu2 cipher algorithm and cipher algorithm mode */ + ret = spu2_cipher_xlate(calgo, cipher_key, + &spu2_ciph_type, &spu2_ciph_mode); + if (ret) + return -EINVAL; + + spu2_fmd_ctrl0_write(fmd, is_inbound, SPU2_VAL_NONE, + SPU2_PROTO_RESV, spu2_ciph_type, spu2_ciph_mode, + SPU2_VAL_NONE, SPU2_VAL_NONE); + + spu2_fmd_ctrl1_write(fmd, SPU2_VAL_NONE, SPU2_VAL_NONE, SPU2_VAL_NONE, + fsattr_sz(cipher_key), false, false, + SPU2_VAL_NONE, SPU2_VAL_NONE, SPU2_VAL_NONE, + fsattr_sz(iv), SPU2_VAL_NONE, SPU2_VAL_NONE, + SPU2_VAL_NONE); + + /* Nothing for FMD2 */ + memset(&fmd->ctrl2, 0, sizeof(uint64_t)); + + spu2_fmd_ctrl3_write(fmd, fsattr_sz(src)); + + /* Source metadata and data pointers */ + sreq->msgs.srcs_addr[src_index] = sreq->fptr; + sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd); + src_index++; + + if (cipher_key != NULL && fsattr_sz(cipher_key) != 0) { + if (calgo == BCMFS_CRYPTO_CIPHER_AES_XTS) { + xts_keylen = fsattr_sz(cipher_key) / 2; + memcpy(sreq->cipher_key, + (uint8_t *)fsattr_va(cipher_key) + xts_keylen, + xts_keylen); + memcpy(sreq->cipher_key + xts_keylen, + fsattr_va(cipher_key), xts_keylen); + } else { + memcpy(sreq->cipher_key, 
+ fsattr_va(cipher_key), fsattr_sz(cipher_key)); + } + + sreq->msgs.srcs_addr[src_index] = sreq->cptr; + sreq->msgs.srcs_len[src_index] = fsattr_sz(cipher_key); + src_index++; + } + + if (iv != NULL && fsattr_sz(iv) != 0) { + memcpy(sreq->iv, + fsattr_va(iv), fsattr_sz(iv)); + sreq->msgs.srcs_addr[src_index] = sreq->iptr; + sreq->msgs.srcs_len[src_index] = fsattr_sz(iv); + src_index++; + } + + sreq->msgs.srcs_addr[src_index] = fsattr_pa(src); + sreq->msgs.srcs_len[src_index] = fsattr_sz(src); + src_index++; + sreq->msgs.srcs_count = src_index; + + /** + * Output packet contains actual output from SPU2 and + * the status packet, so the dsts_count is always 2 below. + */ + sreq->msgs.dsts_addr[0] = fsattr_pa(dst); + sreq->msgs.dsts_len[0] = fsattr_sz(dst); + + sreq->msgs.dsts_addr[1] = sreq->rptr; + sreq->msgs.dsts_len[1] = SPU2_STATUS_LEN; + sreq->msgs.dsts_count = 2; + + return 0; +} + +static void +bcmfs_crypto_ccm_update_iv(uint8_t *ivbuf, + unsigned int *ivlen, bool is_esp) +{ + int L; /* size of length field, in bytes */ + + /* + * In RFC4309 mode, L is fixed at 4 bytes; otherwise, IV from + * testmgr contains (L-1) in bottom 3 bits of first byte, + * per RFC 3610. + */ + if (is_esp) + L = CCM_ESP_L_VALUE; + else + L = ((ivbuf[0] & CCM_B0_L_PRIME) >> + CCM_B0_L_PRIME_SHIFT) + 1; + + /* SPU2 doesn't want these length bytes nor the first byte... 
*/ + *ivlen -= (1 + L); + memmove(ivbuf, &ivbuf[1], *ivlen); +} + +int +bcmfs_crypto_build_aead_request(struct bcmfs_sym_request *sreq, + enum bcmfs_crypto_cipher_algorithm cipher_alg, + enum bcmfs_crypto_cipher_op cipher_op, + enum bcmfs_crypto_auth_algorithm auth_alg, + enum bcmfs_crypto_auth_op auth_op, + struct fsattr *src, struct fsattr *dst, + struct fsattr *cipher_key, + struct fsattr *auth_key, + struct fsattr *iv, struct fsattr *aad, + struct fsattr *digest, bool cipher_first) +{ + int ret = 0; + int src_index = 0; + int dst_index = 0; + bool auth_first = 0; + struct spu2_fmd *fmd; + unsigned int payload_len; + enum spu2_cipher_mode spu2_ciph_mode = 0; + enum spu2_hash_mode spu2_auth_mode = 0; + uint64_t aad_size = (aad != NULL) ? fsattr_sz(aad) : 0; + unsigned int iv_size = (iv != NULL) ? fsattr_sz(iv) : 0; + enum spu2_cipher_type spu2_ciph_type = SPU2_CIPHER_TYPE_NONE; + uint64_t auth_ksize = (auth_key != NULL) ? + fsattr_sz(auth_key) : 0; + uint64_t cipher_ksize = (cipher_key != NULL) ? + fsattr_sz(cipher_key) : 0; + uint64_t digest_size = (digest != NULL) ? + fsattr_sz(digest) : 0; + enum spu2_hash_type spu2_auth_type = SPU2_HASH_TYPE_NONE; + bool is_inbound = (auth_op == BCMFS_CRYPTO_AUTH_OP_VERIFY); + + if (src == NULL) + return -EINVAL; + + payload_len = fsattr_sz(src); + if (!payload_len) { + BCMFS_DP_LOG(ERR, "null payload not supported"); + return -EINVAL; + } + + /* spu2 hash algorithm and hash algorithm mode */ + ret = spu2_hash_xlate(auth_alg, auth_key, &spu2_auth_type, + &spu2_auth_mode); + if (ret) + return -EINVAL; + + /* spu2 cipher algorithm and cipher algorithm mode */ + ret = spu2_cipher_xlate(cipher_alg, cipher_key, &spu2_ciph_type, + &spu2_ciph_mode); + if (ret) { + BCMFS_DP_LOG(ERR, "cipher xlate error"); + return -EINVAL; + } + + auth_first = cipher_first ? 
0 : 1; + + if (cipher_alg == BCMFS_CRYPTO_CIPHER_AES_GCM) { + spu2_auth_type = (enum spu2_hash_type)spu2_ciph_type; + /* + * SPU2 needs in total 12 bytes of IV + * ie IV of 8 bytes(random number) and 4 bytes of salt. + */ + if (fsattr_sz(iv) > 12) + iv_size = 12; + + /* + * On SPU 2, aes gcm cipher first on encrypt, auth first on + * decrypt + */ + + auth_first = (cipher_op == BCMFS_CRYPTO_CIPHER_OP_ENCRYPT) ? + 0 : 1; + } + + if (iv != NULL && fsattr_sz(iv) != 0) + memcpy(sreq->iv, fsattr_va(iv), fsattr_sz(iv)); + + if (cipher_alg == BCMFS_CRYPTO_CIPHER_AES_CCM) { + spu2_auth_type = (enum spu2_hash_type)spu2_ciph_type; + if (iv != NULL) { + memcpy(sreq->iv, fsattr_va(iv), + fsattr_sz(iv)); + iv_size = fsattr_sz(iv); + bcmfs_crypto_ccm_update_iv(sreq->iv, &iv_size, false); + } + + /* opposite for ccm (auth 1st on encrypt) */ + auth_first = (cipher_op == BCMFS_CRYPTO_CIPHER_OP_ENCRYPT) ? + 1 : 0; + } + + fmd = &sreq->fmd; + + spu2_fmd_ctrl0_write(fmd, is_inbound, auth_first, SPU2_PROTO_RESV, + spu2_ciph_type, spu2_ciph_mode, + spu2_auth_type, spu2_auth_mode); + + spu2_fmd_ctrl1_write(fmd, is_inbound, aad_size, auth_ksize, + cipher_ksize, false, false, SPU2_VAL_NONE, + SPU2_VAL_NONE, SPU2_VAL_NONE, iv_size, + digest_size, false, SPU2_VAL_NONE); + + spu2_fmd_ctrl2_write(fmd, aad_size, auth_ksize, 0, + cipher_ksize, iv_size); + + spu2_fmd_ctrl3_write(fmd, payload_len); + + /* Source metadata and data pointers */ + sreq->msgs.srcs_addr[src_index] = sreq->fptr; + sreq->msgs.srcs_len[src_index] = sizeof(struct spu2_fmd); + src_index++; + + if (auth_key != NULL && fsattr_sz(auth_key) != 0) { + memcpy(sreq->auth_key, + fsattr_va(auth_key), fsattr_sz(auth_key)); + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + BCMFS_DP_HEXDUMP_LOG(DEBUG, "auth key:", fsattr_va(auth_key), + fsattr_sz(auth_key)); +#endif + sreq->msgs.srcs_addr[src_index] = sreq->aptr; + sreq->msgs.srcs_len[src_index] = fsattr_sz(auth_key); + src_index++; + } + + if (cipher_key != NULL && fsattr_sz(cipher_key) != 0) 
{ + memcpy(sreq->cipher_key, + fsattr_va(cipher_key), fsattr_sz(cipher_key)); + +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + BCMFS_DP_HEXDUMP_LOG(DEBUG, "cipher key:", fsattr_va(cipher_key), + fsattr_sz(cipher_key)); +#endif + sreq->msgs.srcs_addr[src_index] = sreq->cptr; + sreq->msgs.srcs_len[src_index] = fsattr_sz(cipher_key); + src_index++; + } + + if (iv != NULL && fsattr_sz(iv) != 0) { +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + BCMFS_DP_HEXDUMP_LOG(DEBUG, "iv key:", fsattr_va(iv), + fsattr_sz(iv)); +#endif + sreq->msgs.srcs_addr[src_index] = sreq->iptr; + sreq->msgs.srcs_len[src_index] = iv_size; + src_index++; + } + + if (aad != NULL && fsattr_sz(aad) != 0) { +#if RTE_LOG_DP_LEVEL >= RTE_LOG_DEBUG + BCMFS_DP_HEXDUMP_LOG(DEBUG, "aad :", fsattr_va(aad), + fsattr_sz(aad)); +#endif + sreq->msgs.srcs_addr[src_index] = fsattr_pa(aad); + sreq->msgs.srcs_len[src_index] = fsattr_sz(aad); + src_index++; + } + + sreq->msgs.srcs_addr[src_index] = fsattr_pa(src); + sreq->msgs.srcs_len[src_index] = fsattr_sz(src); + src_index++; + + + if (auth_op == BCMFS_CRYPTO_AUTH_OP_VERIFY && digest != NULL && + fsattr_sz(digest) != 0) { + sreq->msgs.srcs_addr[src_index] = fsattr_pa(digest); + sreq->msgs.srcs_len[src_index] = fsattr_sz(digest); + src_index++; + } + sreq->msgs.srcs_count = src_index; + + if (dst != NULL) { + sreq->msgs.dsts_addr[dst_index] = fsattr_pa(dst); + sreq->msgs.dsts_len[dst_index] = fsattr_sz(dst); + dst_index++; + } + + if (auth_op == BCMFS_CRYPTO_AUTH_OP_VERIFY) { + /* + * In case of decryption digest data is generated by + * SPU2 engine but application doesn't need digest + * as such. 
So program dummy location to capture + * digest data + */ + if (digest != NULL && fsattr_sz(digest) != 0) { + sreq->msgs.dsts_addr[dst_index] = + sreq->dptr; + sreq->msgs.dsts_len[dst_index] = + fsattr_sz(digest); + dst_index++; + } + } else { + if (digest != NULL && fsattr_sz(digest) != 0) { + sreq->msgs.dsts_addr[dst_index] = + fsattr_pa(digest); + sreq->msgs.dsts_len[dst_index] = + fsattr_sz(digest); + dst_index++; + } + } + + sreq->msgs.dsts_addr[dst_index] = sreq->rptr; + sreq->msgs.dsts_len[dst_index] = SPU2_STATUS_LEN; + dst_index++; + sreq->msgs.dsts_count = dst_index; + + return 0; +} diff --git a/drivers/crypto/bcmfs/bcmfs_sym_engine.h b/drivers/crypto/bcmfs/bcmfs_sym_engine.h new file mode 100644 index 000000000..29cfb4dc2 --- /dev/null +++ b/drivers/crypto/bcmfs/bcmfs_sym_engine.h @@ -0,0 +1,103 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright(c) 2020 Broadcom + * All rights reserved. + */ + +#ifndef _BCMFS_SYM_ENGINE_H_ +#define _BCMFS_SYM_ENGINE_H_ + +#include "bcmfs_dev_msg.h" +#include "bcmfs_sym_defs.h" +#include "bcmfs_sym_req.h" + +/* structure to hold element's attributes */ +struct fsattr { + void *va; + uint64_t pa; + uint64_t sz; +}; + +#define fsattr_va(__ptr) ((__ptr)->va) +#define fsattr_pa(__ptr) ((__ptr)->pa) +#define fsattr_sz(__ptr) ((__ptr)->sz) + +/* + * Macros for Crypto h/w constraints + */ + +#define BCMFS_CRYPTO_AES_BLOCK_SIZE 16 +#define BCMFS_CRYPTO_AES_MIN_KEY_SIZE 16 +#define BCMFS_CRYPTO_AES_MAX_KEY_SIZE 32 + +#define BCMFS_CRYPTO_DES_BLOCK_SIZE 8 +#define BCMFS_CRYPTO_DES_KEY_SIZE 8 + +#define BCMFS_CRYPTO_3DES_BLOCK_SIZE 8 +#define BCMFS_CRYPTO_3DES_KEY_SIZE (3 * 8) + +#define BCMFS_CRYPTO_MD5_DIGEST_SIZE 16 +#define BCMFS_CRYPTO_MD5_BLOCK_SIZE 64 + +#define BCMFS_CRYPTO_SHA1_DIGEST_SIZE 20 +#define BCMFS_CRYPTO_SHA1_BLOCK_SIZE 64 + +#define BCMFS_CRYPTO_SHA224_DIGEST_SIZE 28 +#define BCMFS_CRYPTO_SHA224_BLOCK_SIZE 64 + +#define BCMFS_CRYPTO_SHA256_DIGEST_SIZE 32 +#define BCMFS_CRYPTO_SHA256_BLOCK_SIZE 64 + 
+#define BCMFS_CRYPTO_SHA384_DIGEST_SIZE 48 +#define BCMFS_CRYPTO_SHA384_BLOCK_SIZE 128 + +#define BCMFS_CRYPTO_SHA512_DIGEST_SIZE 64 +#define BCMFS_CRYPTO_SHA512_BLOCK_SIZE 128 + +#define BCMFS_CRYPTO_SHA3_224_DIGEST_SIZE (224 / 8) +#define BCMFS_CRYPTO_SHA3_224_BLOCK_SIZE (200 - 2 * \ + BCMFS_CRYPTO_SHA3_224_DIGEST_SIZE) + +#define BCMFS_CRYPTO_SHA3_256_DIGEST_SIZE (256 / 8) +#define BCMFS_CRYPTO_SHA3_256_BLOCK_SIZE (200 - 2 * \ + BCMFS_CRYPTO_SHA3_256_DIGEST_SIZE) + +#define BCMFS_CRYPTO_SHA3_384_DIGEST_SIZE (384 / 8) +#define BCMFS_CRYPTO_SHA3_384_BLOCK_SIZE (200 - 2 * \ + BCMFS_CRYPTO_SHA3_384_DIGEST_SIZE) + +#define BCMFS_CRYPTO_SHA3_512_DIGEST_SIZE (512 / 8) +#define BCMFS_CRYPTO_SHA3_512_BLOCK_SIZE (200 - 2 * \ + BCMFS_CRYPTO_SHA3_512_DIGEST_SIZE) + +enum bcmfs_crypto_aes_cipher_key { + BCMFS_CRYPTO_AES128 = 16, + BCMFS_CRYPTO_AES192 = 24, + BCMFS_CRYPTO_AES256 = 32, +}; + +int +bcmfs_crypto_build_cipher_req(struct bcmfs_sym_request *req, + enum bcmfs_crypto_cipher_algorithm c_algo, + enum bcmfs_crypto_cipher_op cop, + struct fsattr *src, struct fsattr *dst, + struct fsattr *key, struct fsattr *iv); + +int +bcmfs_crypto_build_auth_req(struct bcmfs_sym_request *req, + enum bcmfs_crypto_auth_algorithm a_algo, + enum bcmfs_crypto_auth_op aop, + struct fsattr *src, struct fsattr *dst, + struct fsattr *mac, struct fsattr *key); + +int +bcmfs_crypto_build_aead_request(struct bcmfs_sym_request *req, + enum bcmfs_crypto_cipher_algorithm c_algo, + enum bcmfs_crypto_cipher_op cop, + enum bcmfs_crypto_auth_algorithm a_algo, + enum bcmfs_crypto_auth_op aop, + struct fsattr *src, struct fsattr *dst, + struct fsattr *cipher_key, struct fsattr *auth_key, + struct fsattr *iv, struct fsattr *aad, + struct fsattr *digest, bool cipher_first); + +#endif /* _BCMFS_SYM_ENGINE_H_ */ diff --git a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c index 381ca8ea4..568797b4f 100644 --- a/drivers/crypto/bcmfs/bcmfs_sym_pmd.c +++ 
b/drivers/crypto/bcmfs/bcmfs_sym_pmd.c @@ -132,6 +132,12 @@ static void spu_req_init(struct bcmfs_sym_request *sr, rte_iova_t iova __rte_unused) { memset(sr, 0, sizeof(*sr)); + sr->fptr = iova; + sr->cptr = iova + offsetof(struct bcmfs_sym_request, cipher_key); + sr->aptr = iova + offsetof(struct bcmfs_sym_request, auth_key); + sr->iptr = iova + offsetof(struct bcmfs_sym_request, iv); + sr->dptr = iova + offsetof(struct bcmfs_sym_request, digest); + sr->rptr = iova + offsetof(struct bcmfs_sym_request, resp); } static void @@ -244,6 +250,7 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair, uint16_t nb_ops) { int i, j; + int retval; uint16_t enq = 0; struct bcmfs_sym_request *sreq; struct bcmfs_sym_session *sess; @@ -273,6 +280,11 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair, /* save context */ qp->infl_msgs[i] = &sreq->msgs; qp->infl_msgs[i]->ctx = (void *)sreq; + + /* pre process the request crypto h/w acceleration */ + retval = bcmfs_process_sym_crypto_op(ops[i], sess, sreq); + if (unlikely(retval < 0)) + goto enqueue_err; } /* Send burst request to hw QP */ enq = bcmfs_enqueue_op_burst(qp, (void **)qp->infl_msgs, i); @@ -289,6 +301,17 @@ bcmfs_sym_pmd_enqueue_op_burst(void *queue_pair, return enq; } +static void bcmfs_sym_set_request_status(struct rte_crypto_op *op, + struct bcmfs_sym_request *out) +{ + if (*out->resp == BCMFS_SYM_RESPONSE_SUCCESS) + op->status = RTE_CRYPTO_OP_STATUS_SUCCESS; + else if (*out->resp == BCMFS_SYM_RESPONSE_HASH_TAG_ERROR) + op->status = RTE_CRYPTO_OP_STATUS_AUTH_FAILED; + else + op->status = RTE_CRYPTO_OP_STATUS_ERROR; +} + static uint16_t bcmfs_sym_pmd_dequeue_op_burst(void *queue_pair, struct rte_crypto_op **ops, @@ -308,6 +331,9 @@ bcmfs_sym_pmd_dequeue_op_burst(void *queue_pair, for (i = 0; i < deq; i++) { sreq = (struct bcmfs_sym_request *)qp->infl_msgs[i]->ctx; + /* set the status based on the response from the crypto h/w */ + bcmfs_sym_set_request_status(sreq->op, sreq); + ops[pkts++] = sreq->op; 
rte_mempool_put(qp->sr_mp, sreq); diff --git a/drivers/crypto/bcmfs/bcmfs_sym_req.h b/drivers/crypto/bcmfs/bcmfs_sym_req.h index 0f0b051f1..e53c50adc 100644 --- a/drivers/crypto/bcmfs/bcmfs_sym_req.h +++ b/drivers/crypto/bcmfs/bcmfs_sym_req.h @@ -6,13 +6,53 @@ #ifndef _BCMFS_SYM_REQ_H_ #define _BCMFS_SYM_REQ_H_ +#include + #include "bcmfs_dev_msg.h" +#include "bcmfs_sym_defs.h" + +/* Fixed SPU2 Metadata */ +struct spu2_fmd { + uint64_t ctrl0; + uint64_t ctrl1; + uint64_t ctrl2; + uint64_t ctrl3; +}; /* * This structure hold the supportive data required to process a * rte_crypto_op */ struct bcmfs_sym_request { + /* spu2 engine related data */ + struct spu2_fmd fmd; + /* cipher key */ + uint8_t cipher_key[BCMFS_MAX_KEY_SIZE]; + /* auth key */ + uint8_t auth_key[BCMFS_MAX_KEY_SIZE]; + /* iv key */ + uint8_t iv[BCMFS_MAX_IV_SIZE]; + /* digest data output from crypto h/w */ + uint8_t digest[BCMFS_MAX_DIGEST_SIZE]; + /* 2-Bytes response from crypto h/w */ + uint8_t resp[2]; + /* + * Below are all iovas for above members + * from top + */ + /* iova for fmd */ + rte_iova_t fptr; + /* iova for cipher key */ + rte_iova_t cptr; + /* iova for auth key */ + rte_iova_t aptr; + /* iova for iv key */ + rte_iova_t iptr; + /* iova for digest */ + rte_iova_t dptr; + /* iova for response */ + rte_iova_t rptr; + /* bcmfs qp message for h/w queues to process */ struct bcmfs_qp_message msgs; /* crypto op */ diff --git a/drivers/crypto/bcmfs/meson.build b/drivers/crypto/bcmfs/meson.build index 2e86c733e..7aa0f05db 100644 --- a/drivers/crypto/bcmfs/meson.build +++ b/drivers/crypto/bcmfs/meson.build @@ -14,5 +14,7 @@ sources = files( 'hw/bcmfs_rm_common.c', 'bcmfs_sym_pmd.c', 'bcmfs_sym_capabilities.c', - 'bcmfs_sym_session.c' + 'bcmfs_sym_session.c', + 'bcmfs_sym.c', + 'bcmfs_sym_engine.c' ) From patchwork Thu Aug 13 17:23:44 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vikas Gupta X-Patchwork-Id: 75509 
X-Patchwork-Delegate: gakhil@marvell.com
From: Vikas Gupta
To: dev@dpdk.org, akhil.goyal@nxp.com
Cc: vikram.prakash@broadcom.com, Vikas Gupta, Raveendra Padasalagi
Date: Thu, 13 Aug 2020 22:53:44 +0530
Message-Id: <20200813172344.3228-9-vikas.gupta@broadcom.com>
In-Reply-To: <20200813172344.3228-1-vikas.gupta@broadcom.com>
References: <20200812063127.8687-1-vikas.gupta@broadcom.com> <20200813172344.3228-1-vikas.gupta@broadcom.com>
Subject: [dpdk-dev] [PATCH v2 8/8] crypto/bcmfs: add crypto pmd into cryptodev test

Add global test suite for the bcmfs crypto pmd.

Signed-off-by: Vikas Gupta
Signed-off-by: Raveendra Padasalagi
Reviewed-by: Ajit Khaparde
Acked-by: Akhil Goyal
---
 app/test/test_cryptodev.c | 17 +++++++++++++++++
 app/test/test_cryptodev.h |  1 +
 2 files changed, 18 insertions(+)

diff --git a/app/test/test_cryptodev.c b/app/test/test_cryptodev.c
index 70bf6fe2c..9157115ab 100644
--- a/app/test/test_cryptodev.c
+++ b/app/test/test_cryptodev.c
@@ -13041,6 +13041,22 @@ test_cryptodev_nitrox(void)
 	return unit_test_suite_runner(&cryptodev_nitrox_testsuite);
 }
 
+static int
+test_cryptodev_bcmfs(void)
+{
+	gbl_driver_id = rte_cryptodev_driver_id_get(
+			RTE_STR(CRYPTODEV_NAME_BCMFS_PMD));
+
+	if (gbl_driver_id == -1) {
+		RTE_LOG(ERR, USER1, "BCMFS PMD must be loaded. Check if "
+				"CONFIG_RTE_LIBRTE_PMD_BCMFS is enabled "
+				"in config file to run this testsuite.\n");
+		return TEST_FAILED;
+	}
+
+	return unit_test_suite_runner(&cryptodev_testsuite);
+}
+
 REGISTER_TEST_COMMAND(cryptodev_qat_autotest, test_cryptodev_qat);
 REGISTER_TEST_COMMAND(cryptodev_aesni_mb_autotest, test_cryptodev_aesni_mb);
 REGISTER_TEST_COMMAND(cryptodev_cpu_aesni_mb_autotest,
@@ -13063,3 +13079,4 @@ REGISTER_TEST_COMMAND(cryptodev_octeontx_autotest, test_cryptodev_octeontx);
 REGISTER_TEST_COMMAND(cryptodev_octeontx2_autotest, test_cryptodev_octeontx2);
 REGISTER_TEST_COMMAND(cryptodev_caam_jr_autotest, test_cryptodev_caam_jr);
 REGISTER_TEST_COMMAND(cryptodev_nitrox_autotest, test_cryptodev_nitrox);
+REGISTER_TEST_COMMAND(cryptodev_bcmfs_autotest, test_cryptodev_bcmfs);

diff --git a/app/test/test_cryptodev.h b/app/test/test_cryptodev.h
index 41542e055..c58126368 100644
--- a/app/test/test_cryptodev.h
+++ b/app/test/test_cryptodev.h
@@ -70,6 +70,7 @@
 #define CRYPTODEV_NAME_OCTEONTX2_PMD	crypto_octeontx2
 #define CRYPTODEV_NAME_CAAM_JR_PMD	crypto_caam_jr
 #define CRYPTODEV_NAME_NITROX_PMD	crypto_nitrox_sym
+#define CRYPTODEV_NAME_BCMFS_PMD	crypto_bcmfs
 
 /**
  * Write (spread) data from buffer to mbuf data