From patchwork Tue Jul 6 16:44:09 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vijay Srivastava X-Patchwork-Id: 95463 X-Patchwork-Delegate: maxime.coquelin@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id A4632A0C4A; Wed, 7 Jul 2021 10:25:35 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 83FD54147E; Wed, 7 Jul 2021 10:25:31 +0200 (CEST) Received: from NAM12-MW2-obe.outbound.protection.outlook.com (mail-mw2nam12on2054.outbound.protection.outlook.com [40.107.244.54]) by mails.dpdk.org (Postfix) with ESMTP id 71E444120E for ; Tue, 6 Jul 2021 18:45:56 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=ee4G8ndcaAeEl6HHjD5MBAqSM+W2aioOvM3veZf/TsLlqvhLQsQhrh7sgsd4Ap94HrplwiYINRAkQhjcVIDs8xz2bgq71o7K4wrVUpv3wwvjL2/KjB2cSVQkyj4QYAE8y3CMtXp7xdRCpoW5PfJuBHYoG6FpbCHtkUABm17QynFxlk61SJoWO2TUEUOSEevO6UKsVmtX4fNPPORLjC9ZuoptYD2cJWtXQHkK/KQzXykYIKArlHIBe6/xumCkeol5datssUv9F7C/ueDtA99yg4ScVdmoT6IQydxtad6dLUuLcvVjGYZSlMsPjzQHwJy921qKICqc0VdHCgJi4tP0wA== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=n4N2D6snkhTwdlVy1vN2rCO9H8Gxsy8KP3GonTNzptw=; b=WQWGY7sf+TFOUgIOVAiV0a/WIBZOY9ei8GaahREI+fziy2o/LPOBAxkWwKGlTEUb2S0ZFOlfMRw4lTlEc0dLvo96bsQNCnxoZKsu0UbUpmHXzo4FhQQE/ArwJlKI/WnwoaxfEVRHpup2V5oXn4wGHgiplXVhL+zUSbzE0wGIjIjAFDnYLUw/NkiNDtkdG4L4Jx8Ri8Oi4xwd3Xlf0w3AYvUqW+pRPrCocq4KAIdjn3bpUxYPVccrA2Qrq7xQzNAnRoItAnc8mWeVWN1ZWJ6QR7Zz3CJ6PCgB8kBa3rSpY35iL6vRrpv+u5AMFdLIuHZe3PXWFafkaSesIy+HdqzqrA== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 149.199.62.198) smtp.rcpttodomain=dpdk.org smtp.mailfrom=xilinx.com; dmarc=pass (p=none sp=none pct=100) action=none header.from=xilinx.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=xilinx.onmicrosoft.com; s=selector2-xilinx-onmicrosoft-com; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=n4N2D6snkhTwdlVy1vN2rCO9H8Gxsy8KP3GonTNzptw=; b=pjU0p4U2rTv5PBd0mOoyAankBCGiPJkJ7rZxbAm4osaLd++kJjkjZJGMIseUjh/kNe0k7+Qg7x4dpqwy/PMEB0L60noN/kmX+BF2eq7uVtr9UTpgItCUUtXRK5aMjZeEmBS5ZNktrOCYfY8MjQfcclfoy2VeoYQH1PoCUlZT8Vk= Received: from DS7PR05CA0004.namprd05.prod.outlook.com (2603:10b6:5:3b9::9) by CH2PR02MB6950.namprd02.prod.outlook.com (2603:10b6:610:5c::21) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4287.31; Tue, 6 Jul 2021 16:45:54 +0000 Received: from DM3NAM02FT056.eop-nam02.prod.protection.outlook.com (2603:10b6:5:3b9:cafe::c5) by DS7PR05CA0004.outlook.office365.com (2603:10b6:5:3b9::9) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4308.8 via Frontend Transport; Tue, 6 Jul 2021 16:45:54 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 149.199.62.198) smtp.mailfrom=xilinx.com; dpdk.org; dkim=none (message not signed) header.d=none;dpdk.org; dmarc=pass action=none header.from=xilinx.com; Received-SPF: Pass (protection.outlook.com: domain of xilinx.com designates 149.199.62.198 as permitted sender) receiver=protection.outlook.com; client-ip=149.199.62.198; 
helo=xsj-pvapexch01.xlnx.xilinx.com; Received: from xsj-pvapexch01.xlnx.xilinx.com (149.199.62.198) by DM3NAM02FT056.mail.protection.outlook.com (10.13.4.177) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.20.4287.22 via Frontend Transport; Tue, 6 Jul 2021 16:45:53 +0000 Received: from xsj-pvapexch02.xlnx.xilinx.com (172.19.86.41) by xsj-pvapexch01.xlnx.xilinx.com (172.19.86.40) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.2; Tue, 6 Jul 2021 09:45:53 -0700 Received: from smtp.xilinx.com (172.19.127.96) by xsj-pvapexch02.xlnx.xilinx.com (172.19.86.41) with Microsoft SMTP Server id 15.1.2176.2 via Frontend Transport; Tue, 6 Jul 2021 09:45:53 -0700 Envelope-to: dev@dpdk.org, maxime.coquelin@redhat.com, chenbo.xia@intel.com, andrew.rybchenko@oktetlabs.ru Received: from [10.177.4.108] (port=54950 helo=xndengvm004108.xilinx.com) by smtp.xilinx.com with esmtp (Exim 4.90) (envelope-from ) id 1m0oCm-0000pF-Dt; Tue, 06 Jul 2021 09:45:53 -0700 From: Vijay Srivastava To: CC: , , , Vijay Kumar Srivastava Date: Tue, 6 Jul 2021 22:14:09 +0530 Message-ID: <20210706164418.32615-2-vsrivast@xilinx.com> X-Mailer: git-send-email 2.25.0 In-Reply-To: <20210706164418.32615-1-vsrivast@xilinx.com> References: <20210706164418.32615-1-vsrivast@xilinx.com> MIME-Version: 1.0 X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-Office365-Filtering-Correlation-Id: a35f68bf-a90c-4ba7-68dd-08d9409d850c X-MS-TrafficTypeDiagnostic: CH2PR02MB6950: X-Microsoft-Antispam-PRVS: X-MS-Oob-TLC-OOBClassifiers: OLM:2582; X-MS-Exchange-SenderADCheck: 1 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: RTqXbaMQtsoG6a7eIl8tuFGwDTYvM0mxkhUX0wUi5ZIsgYXCIrHTy+ccYgYtkJ7ZcMGxMLYdd8MDX0hzqlJLr4H7bHbwq61/5P+J9NbyU6aHtkRmbzJkl279p/TlsJjlWgjLU4Kyasvb945236npoeEGA3N8pHEWqDz1BTgziVgjWL7oAyf5AXuRPEoDqo8EUEhvOIvaFRtAg8+vFaS/Q81KX4W2kTMBF72pn/T/jXr71SifEnrWSvHR7zuBP6qPHu2LbCJxN7fdHAl7OsMXG+COyjlWpimV010E3NztbZOly3BnvasSDX99feutCsAFDoajhm3ms7fCVBD3KjlawCZveZ737qK84wtvZfsI9tMXI5TScc+tqNJuK00PwRT4h0RPp5M7ybYE1jnlCPfYZkcKaRSfS/xLNb16XYQTODa6eyTu/1hmIes+TUwGtPps5Uq3ZXILTRrE79b855bSmd343rDHCvllXnMLGgERNRiYV+JAyoE7EK7bdgH5zdkxJingbZW/OcW3FJ9gtNqZbPgXT22EqzNsKwoIGkfRDne5bQgOIoApNE27eDz7yXqahIATicDcldo1lMmgACIBp60IlS6Pz7qi5UvJShZ5z0w/fk+mH+cq2WZyvUAzqa0FZ8I9dUOKuxpB/UAMHmiUmmS9Y/Th4tF+6D1c9hRUx8QLUCYcRHqqz4sRqCDfcBUSN+eX7Zz22bXFzNGYkN1Qvcf12D8k29WNiQwt/+pPR7xEpuq59fwpI2EQoid116JvV+HQcc7R07Wz0ZXWNd2OJ7eHTtUpKciu/OiCv4cI6wV0Rhvj+Wu3jxoUywK7r7yRsTUM9l5R28qM05RngSj8jd0rMpGeC6iOpf/GSmReleg= X-Forefront-Antispam-Report: CIP:149.199.62.198; CTRY:US; LANG:en; SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:xsj-pvapexch01.xlnx.xilinx.com; PTR:unknown-62-198.xilinx.com; CAT:NONE; SFS:(4636009)(396003)(136003)(376002)(39860400002)(346002)(36840700001)(46966006)(478600001)(30864003)(966005)(6666004)(2616005)(316002)(36860700001)(186003)(47076005)(36756003)(36906005)(5660300002)(70206006)(9786002)(8936002)(7696005)(336012)(1076003)(2906002)(8676002)(107886003)(426003)(26005)(4326008)(6916009)(356005)(82740400003)(54906003)(83380400001)(44832011)(82310400003)(70586007)(7636003)(102446001); DIR:OUT; SFP:1101; X-OriginatorOrg: xilinx.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Jul 2021 16:45:53.4952 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: a35f68bf-a90c-4ba7-68dd-08d9409d850c X-MS-Exchange-CrossTenant-Id: 657af505-d5df-48d0-8300-c31994686c5c X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: 
TenantId=657af505-d5df-48d0-8300-c31994686c5c; Ip=[149.199.62.198]; Helo=[xsj-pvapexch01.xlnx.xilinx.com] X-MS-Exchange-CrossTenant-AuthSource: DM3NAM02FT056.eop-nam02.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR02MB6950 X-Mailman-Approved-At: Wed, 07 Jul 2021 10:25:28 +0200 Subject: [dpdk-dev] [PATCH 01/10] vdpa/sfc: introduce Xilinx vDPA driver X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vijay Kumar Srivastava Add new vDPA PMD to support vDPA operation by Xilinx devices. This patch implements probe and remove functions. Signed-off-by: Vijay Kumar Srivastava --- MAINTAINERS | 6 + doc/guides/rel_notes/release_21_08.rst | 5 + doc/guides/vdpadevs/features/sfc.ini | 9 ++ doc/guides/vdpadevs/sfc.rst | 97 +++++++++++ drivers/vdpa/meson.build | 1 + drivers/vdpa/sfc/meson.build | 33 ++++ drivers/vdpa/sfc/sfc_vdpa.c | 286 +++++++++++++++++++++++++++++++++ drivers/vdpa/sfc/sfc_vdpa.h | 40 +++++ drivers/vdpa/sfc/sfc_vdpa_log.h | 77 +++++++++ drivers/vdpa/sfc/version.map | 3 + 10 files changed, 557 insertions(+) create mode 100644 doc/guides/vdpadevs/features/sfc.ini create mode 100644 doc/guides/vdpadevs/sfc.rst create mode 100644 drivers/vdpa/sfc/meson.build create mode 100644 drivers/vdpa/sfc/sfc_vdpa.c create mode 100644 drivers/vdpa/sfc/sfc_vdpa.h create mode 100644 drivers/vdpa/sfc/sfc_vdpa_log.h create mode 100644 drivers/vdpa/sfc/version.map diff --git a/MAINTAINERS b/MAINTAINERS index 5877a16..ccc0a2a 100644 --- a/MAINTAINERS +++ b/MAINTAINERS @@ -1197,6 +1197,12 @@ F: drivers/vdpa/mlx5/ F: doc/guides/vdpadevs/mlx5.rst F: doc/guides/vdpadevs/features/mlx5.ini +Xilinx sfc vDPA +M: Vijay Kumar Srivastava +F: drivers/vdpa/sfc/ +F: doc/guides/vdpadevs/sfc.rst +F: doc/guides/vdpadevs/features/sfc.ini + Eventdev Drivers ---------------- diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst index a6ecfdf..bb9aa83 100644 --- a/doc/guides/rel_notes/release_21_08.rst +++ b/doc/guides/rel_notes/release_21_08.rst @@ -55,6 +55,11 @@ New Features Also, make sure to start the actual text at the margin. ======================================================= +* **Add new vDPA PMD based on Xilinx devices.** + + Added a new Xilinx vDPA (``sfc_vdpa``) PMD. + See the :doc:`../vdpadevs/sfc` guide for more details on this driver. + Removed Items ------------- diff --git a/doc/guides/vdpadevs/features/sfc.ini b/doc/guides/vdpadevs/features/sfc.ini new file mode 100644 index 0000000..71b6158 --- /dev/null +++ b/doc/guides/vdpadevs/features/sfc.ini @@ -0,0 +1,9 @@ +; +; Supported features of the 'sfc' vDPA driver. +; +; Refer to default.ini for the full list of available driver features. +; +[Features] +Linux = Y +x86-64 = Y +Usage doc = Y diff --git a/doc/guides/vdpadevs/sfc.rst b/doc/guides/vdpadevs/sfc.rst new file mode 100644 index 0000000..59f990b --- /dev/null +++ b/doc/guides/vdpadevs/sfc.rst @@ -0,0 +1,97 @@ +.. SPDX-License-Identifier: BSD-3-Clause + Copyright(c) 2021 Xilinx Corporation. 
+ +Xilinx vDPA driver +================== + +The Xilinx vDPA (vhost data path acceleration) driver (**librte_pmd_sfc_vdpa**) +provides support for the Xilinx SN1022 SmartNICs family of 10/25/40/50/100 Gbps +adapters has support for latest Linux and FreeBSD operating systems. + +More information can be found at Xilinx website https://www.xilinx.com. + + +Xilinx vDPA implementation +-------------------------- + +ef100 device can be configured in the net device or vDPA mode. +Adding "class=vdpa" parameter helps to specify that this +device is to be used in vDPA mode. If this parameter is not specified, device +will be probed by net/sfc driver and will used as a net device. + +This PMD uses libefx (common/sfc_efx) code to access the device firmware. + + +Supported NICs +-------------- + +- Xilinx SN1022 SmartNICs + + +Features +-------- + +Features of the Xilinx vDPA driver are: + +- Compatibility with virtio 0.95 and 1.0 + + +Non-supported Features +---------------------- + +- Control Queue +- Multi queue +- Live Migration + + +Prerequisites +------------- + +Requires firmware version: v1.0.7.0 or higher + +Visit `Xilinx Support Downloads `_ +to get Xilinx Utilities with the latest firmware. +Follow instructions from Alveo SN1000 SmartNICs User Guide to +update firmware and configure the adapter. + + +Per-Device Parameters +~~~~~~~~~~~~~~~~~~~~~ + +The following per-device parameters can be passed via EAL PCI device +whitelist option like "-w 02:00.0,arg1=value1,...". + +Case-insensitive 1/y/yes/on or 0/n/no/off may be used to specify +boolean parameters value. + +- ``class`` [net|vdpa] (default **net**) + + Choose the mode of operation of ef100 device. + **net** device will work as network device and will be probed by net/sfc driver. + **vdpa** device will work as vdpa device and will be probed by vdpa/sfc driver. + If this parameter is not specified then ef100 device will operate as network device. + + +Dynamic Logging Parameters +~~~~~~~~~~~~~~~~~~~~~~~~~~ + +One may leverage EAL option "--log-level" to change default levels +for the log types supported by the driver. The option is used with +an argument typically consisting of two parts separated by a colon. + +Level value is the last part which takes a symbolic name (or integer). +Log type is the former part which may shell match syntax. +Depending on the choice of the expression, the given log level may +be used either for some specific log type or for a subset of types. + +SFC vDPA PMD provides the following log types available for control: + +- ``pmd.vdpa.sfc.driver`` (default level is **notice**) + + Affects driver-wide messages unrelated to any particular devices. + +- ``pmd.vdpa.sfc.main`` (default level is **notice**) + + Matches a subset of per-port log types registered during runtime. + A full name for a particular type may be obtained by appending a + dot and a PCI device identifier (``XXXX:XX:XX.X``) to the prefix. diff --git a/drivers/vdpa/meson.build b/drivers/vdpa/meson.build index f765fe3..77412c7 100644 --- a/drivers/vdpa/meson.build +++ b/drivers/vdpa/meson.build @@ -8,6 +8,7 @@ endif drivers = [ 'ifc', 'mlx5', + 'sfc', ] std_deps = ['bus_pci', 'kvargs'] std_deps += ['vhost'] diff --git a/drivers/vdpa/sfc/meson.build b/drivers/vdpa/sfc/meson.build new file mode 100644 index 0000000..d916389 --- /dev/null +++ b/drivers/vdpa/sfc/meson.build @@ -0,0 +1,33 @@ +# SPDX-License-Identifier: BSD-3-Clause +# +# Copyright(c) 2020-2021 Xilinx, Inc. 
+ +if (arch_subdir != 'x86' or not dpdk_conf.get('RTE_ARCH_64')) and (arch_subdir != 'arm' or not host_machine.cpu_family().startswith('aarch64')) + build = false + reason = 'only supported on x86_64 and aarch64' +endif + +fmt_name = 'sfc_vdpa' +extra_flags = [] + +# Enable more warnings +extra_flags += [ + '-Wdisabled-optimization' +] + +# Compiler and version dependent flags +extra_flags += [ + '-Waggregate-return', + '-Wbad-function-cast' +] + +foreach flag: extra_flags + if cc.has_argument(flag) + cflags += flag + endif +endforeach + +deps += ['common_sfc_efx', 'bus_pci'] +sources = files( + 'sfc_vdpa.c', +) diff --git a/drivers/vdpa/sfc/sfc_vdpa.c b/drivers/vdpa/sfc/sfc_vdpa.c new file mode 100644 index 0000000..d8faaca --- /dev/null +++ b/drivers/vdpa/sfc/sfc_vdpa.c @@ -0,0 +1,286 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * + * Copyright(c) 2020-2021 Xilinx, Inc. + */ + +#include +#include +#include + +#include +#include +#include +#include +#include + +#include "efx.h" +#include "sfc_efx.h" +#include "sfc_vdpa.h" + +TAILQ_HEAD(sfc_vdpa_adapter_list_head, sfc_vdpa_adapter); +static struct sfc_vdpa_adapter_list_head sfc_vdpa_adapter_list = + TAILQ_HEAD_INITIALIZER(sfc_vdpa_adapter_list); + +static pthread_mutex_t sfc_vdpa_adapter_list_lock = PTHREAD_MUTEX_INITIALIZER; + +struct sfc_vdpa_adapter * +sfc_vdpa_get_adapter_by_dev(struct rte_pci_device *pdev) +{ + bool found = false; + struct sfc_vdpa_adapter *sva; + + pthread_mutex_lock(&sfc_vdpa_adapter_list_lock); + + TAILQ_FOREACH(sva, &sfc_vdpa_adapter_list, next) { + if (pdev == sva->pdev) { + found = true; + break; + } + } + + pthread_mutex_unlock(&sfc_vdpa_adapter_list_lock); + + return found ? sva : NULL; +} + +static int +sfc_vdpa_vfio_setup(struct sfc_vdpa_adapter *sva) +{ + struct rte_pci_device *dev = sva->pdev; + char dev_name[RTE_DEV_NAME_MAX_LEN] = {0}; + int rc; + + if (dev == NULL) + goto fail_inval; + + rte_pci_device_name(&dev->addr, dev_name, RTE_DEV_NAME_MAX_LEN); + + sva->vfio_container_fd = rte_vfio_container_create(); + if (sva->vfio_container_fd < 0) { + sfc_vdpa_err(sva, "failed to create VFIO container"); + goto fail_container_create; + } + + rc = rte_vfio_get_group_num(rte_pci_get_sysfs_path(), dev_name, + &sva->iommu_group_num); + if (rc <= 0) { + sfc_vdpa_err(sva, "failed to get IOMMU group for %s : %s", + dev_name, rte_strerror(-rc)); + goto fail_get_group_num; + } + + sva->vfio_group_fd = + rte_vfio_container_group_bind(sva->vfio_container_fd, + sva->iommu_group_num); + if (sva->vfio_group_fd < 0) { + sfc_vdpa_err(sva, + "failed to bind IOMMU group %d to container %d", + sva->iommu_group_num, sva->vfio_container_fd); + goto fail_group_bind; + } + + if (rte_pci_map_device(dev) != 0) { + sfc_vdpa_err(sva, "failed to map PCI device %s : %s", + dev_name, rte_strerror(rte_errno)); + goto fail_pci_map_device; + } + + sva->vfio_dev_fd = dev->intr_handle.vfio_dev_fd; + + return 0; + +fail_pci_map_device: + if (rte_vfio_container_group_unbind(sva->vfio_container_fd, + sva->iommu_group_num) != 0) { + sfc_vdpa_err(sva, + "failed to unbind IOMMU group %d from container %d", + sva->iommu_group_num, sva->vfio_container_fd); + } + +fail_group_bind: +fail_get_group_num: + if (rte_vfio_container_destroy(sva->vfio_container_fd) != 0) { + sfc_vdpa_err(sva, "failed to destroy container %d", + sva->vfio_container_fd); + } + +fail_container_create: +fail_inval: + return -1; +} + +static void +sfc_vdpa_vfio_teardown(struct sfc_vdpa_adapter *sva) +{ + rte_pci_unmap_device(sva->pdev); + + if 
(rte_vfio_container_group_unbind(sva->vfio_container_fd, + sva->iommu_group_num) != 0) { + sfc_vdpa_err(sva, + "failed to unbind IOMMU group %d from container %d", + sva->iommu_group_num, sva->vfio_container_fd); + } + + if (rte_vfio_container_destroy(sva->vfio_container_fd) != 0) { + sfc_vdpa_err(sva, + "failed to destroy container %d", + sva->vfio_container_fd); + } +} + +static int +sfc_vdpa_set_log_prefix(struct sfc_vdpa_adapter *sva) +{ + struct rte_pci_device *pci_dev = sva->pdev; + int ret; + + ret = snprintf(sva->log_prefix, sizeof(sva->log_prefix), + "PMD: sfc_vdpa " PCI_PRI_FMT " : ", + pci_dev->addr.domain, pci_dev->addr.bus, + pci_dev->addr.devid, pci_dev->addr.function); + + if (ret < 0 || ret >= (int)sizeof(sva->log_prefix)) { + SFC_VDPA_GENERIC_LOG(ERR, + "reserved log prefix is too short for " PCI_PRI_FMT, + pci_dev->addr.domain, pci_dev->addr.bus, + pci_dev->addr.devid, pci_dev->addr.function); + return -EINVAL; + } + + return 0; +} + +uint32_t +sfc_vdpa_register_logtype(const struct rte_pci_addr *pci_addr, + const char *lt_prefix_str, uint32_t ll_default) +{ + size_t lt_prefix_str_size = strlen(lt_prefix_str); + size_t lt_str_size_max; + char *lt_str = NULL; + int ret; + + if (SIZE_MAX - PCI_PRI_STR_SIZE - 1 > lt_prefix_str_size) { + ++lt_prefix_str_size; /* Reserve space for prefix separator */ + lt_str_size_max = lt_prefix_str_size + PCI_PRI_STR_SIZE + 1; + } else { + return RTE_LOGTYPE_PMD; + } + + lt_str = rte_zmalloc("logtype_str", lt_str_size_max, 0); + if (lt_str == NULL) + return RTE_LOGTYPE_PMD; + + strncpy(lt_str, lt_prefix_str, lt_prefix_str_size); + lt_str[lt_prefix_str_size - 1] = '.'; + rte_pci_device_name(pci_addr, lt_str + lt_prefix_str_size, + lt_str_size_max - lt_prefix_str_size); + lt_str[lt_str_size_max - 1] = '\0'; + + ret = rte_log_register_type_and_pick_level(lt_str, ll_default); + rte_free(lt_str); + + return (ret < 0) ? RTE_LOGTYPE_PMD : ret; +} + +static struct rte_pci_id pci_id_sfc_vdpa_efx_map[] = { + { RTE_PCI_DEVICE(EFX_PCI_VENID_XILINX, EFX_PCI_DEVID_RIVERHEAD_VF) }, + { .vendor_id = 0, /* sentinel */ }, +}; + +static int +sfc_vdpa_pci_probe(struct rte_pci_driver *pci_drv __rte_unused, + struct rte_pci_device *pci_dev) +{ + struct sfc_vdpa_adapter *sva = NULL; + uint32_t logtype_main; + int ret = 0; + + if (sfc_efx_dev_class_get(pci_dev->device.devargs) != + SFC_EFX_DEV_CLASS_VDPA) { + SFC_VDPA_GENERIC_LOG(INFO, + "Incompatible device class: skip probing, should be probed by other sfc driver."); + return 1; + } + + /* + * It will not be probed in the secondary process. 
As device class + * is vdpa so return 0 to avoid probe by other sfc driver + */ + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return 0; + + logtype_main = sfc_vdpa_register_logtype(&pci_dev->addr, + SFC_VDPA_LOGTYPE_MAIN_STR, + RTE_LOG_NOTICE); + + sva = rte_zmalloc("sfc_vdpa", sizeof(struct sfc_vdpa_adapter), 0); + if (sva == NULL) + goto fail_zmalloc; + + sva->pdev = pci_dev; + sva->logtype_main = logtype_main; + + ret = sfc_vdpa_set_log_prefix(sva); + if (ret != 0) + goto fail_set_log_prefix; + + sfc_vdpa_log_init(sva, "entry"); + + sfc_vdpa_log_init(sva, "vfio init"); + if (sfc_vdpa_vfio_setup(sva) < 0) { + sfc_vdpa_err(sva, "failed to setup device %s", pci_dev->name); + goto fail_vfio_setup; + } + + pthread_mutex_lock(&sfc_vdpa_adapter_list_lock); + TAILQ_INSERT_TAIL(&sfc_vdpa_adapter_list, sva, next); + pthread_mutex_unlock(&sfc_vdpa_adapter_list_lock); + + sfc_vdpa_log_init(sva, "done"); + + return 0; + +fail_vfio_setup: +fail_set_log_prefix: + rte_free(sva); + +fail_zmalloc: + return -1; +} + +static int +sfc_vdpa_pci_remove(struct rte_pci_device *pci_dev) +{ + struct sfc_vdpa_adapter *sva = NULL; + + if (rte_eal_process_type() != RTE_PROC_PRIMARY) + return -1; + + sva = sfc_vdpa_get_adapter_by_dev(pci_dev); + if (sva == NULL) { + sfc_vdpa_info(sva, "invalid device: %s", pci_dev->name); + return -1; + } + + pthread_mutex_lock(&sfc_vdpa_adapter_list_lock); + TAILQ_REMOVE(&sfc_vdpa_adapter_list, sva, next); + pthread_mutex_unlock(&sfc_vdpa_adapter_list_lock); + + sfc_vdpa_vfio_teardown(sva); + + rte_free(sva); + + return 0; +} + +static struct rte_pci_driver rte_sfc_vdpa = { + .id_table = pci_id_sfc_vdpa_efx_map, + .drv_flags = 0, + .probe = sfc_vdpa_pci_probe, + .remove = sfc_vdpa_pci_remove, +}; + +RTE_PMD_REGISTER_PCI(net_sfc_vdpa, rte_sfc_vdpa); +RTE_PMD_REGISTER_PCI_TABLE(net_sfc_vdpa, pci_id_sfc_vdpa_efx_map); +RTE_PMD_REGISTER_KMOD_DEP(net_sfc_vdpa, "* vfio-pci"); +RTE_LOG_REGISTER_SUFFIX(sfc_vdpa_logtype_driver, driver, NOTICE); diff --git a/drivers/vdpa/sfc/sfc_vdpa.h b/drivers/vdpa/sfc/sfc_vdpa.h new file mode 100644 index 0000000..3b77900 --- /dev/null +++ b/drivers/vdpa/sfc/sfc_vdpa.h @@ -0,0 +1,40 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * + * Copyright(c) 2020-2021 Xilinx, Inc. + */ + +#ifndef _SFC_VDPA_H +#define _SFC_VDPA_H + +#include +#include + +#include + +#include "sfc_vdpa_log.h" + +/* Adapter private data */ +struct sfc_vdpa_adapter { + TAILQ_ENTRY(sfc_vdpa_adapter) next; + struct rte_pci_device *pdev; + struct rte_pci_addr pci_addr; + + char log_prefix[SFC_VDPA_LOG_PREFIX_MAX]; + uint32_t logtype_main; + + int vfio_group_fd; + int vfio_dev_fd; + int vfio_container_fd; + int iommu_group_num; +}; + +uint32_t +sfc_vdpa_register_logtype(const struct rte_pci_addr *pci_addr, + const char *lt_prefix_str, + uint32_t ll_default); + +struct sfc_vdpa_adapter * +sfc_vdpa_get_adapter_by_dev(struct rte_pci_device *pdev); + +#endif /* _SFC_VDPA_H */ + diff --git a/drivers/vdpa/sfc/sfc_vdpa_log.h b/drivers/vdpa/sfc/sfc_vdpa_log.h new file mode 100644 index 0000000..0a3d6ad --- /dev/null +++ b/drivers/vdpa/sfc/sfc_vdpa_log.h @@ -0,0 +1,77 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * + * Copyright(c) 2020-2021 Xilinx, Inc. + */ + +#ifndef _SFC_VDPA_LOG_H_ +#define _SFC_VDPA_LOG_H_ + +/** Generic driver log type */ +extern int sfc_vdpa_logtype_driver; + +/** Common log type name prefix */ +#define SFC_VDPA_LOGTYPE_PREFIX "pmd.vdpa.sfc." + +/** Log PMD generic message, add a prefix and a line break */ +#define SFC_VDPA_GENERIC_LOG(level, ...) 
\ + rte_log(RTE_LOG_ ## level, sfc_vdpa_logtype_driver, \ + RTE_FMT("PMD: " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ + RTE_FMT_TAIL(__VA_ARGS__ ,))) + +/** Name prefix for the per-device log type used to report basic information */ +#define SFC_VDPA_LOGTYPE_MAIN_STR SFC_VDPA_LOGTYPE_PREFIX "main" + +#define SFC_VDPA_LOG_PREFIX_MAX 32 + +/* Log PMD message, automatically add prefix and \n */ +#define SFC_VDPA_LOG(sva, level, type, ...) \ + rte_log(level, type, \ + RTE_FMT("%s" RTE_FMT_HEAD(__VA_ARGS__ ,) "\n", \ + sva->log_prefix, \ + RTE_FMT_TAIL(__VA_ARGS__ ,))) + +#define sfc_vdpa_err(sva, ...) \ + do { \ + const struct sfc_vdpa_adapter *_sva = (sva); \ + \ + SFC_VDPA_LOG(_sva, RTE_LOG_ERR, \ + _sva->logtype_main, __VA_ARGS__); \ + } while (0) + +#define sfc_vdpa_warn(sva, ...) \ + do { \ + const struct sfc_vdpa_adapter *_sva = (sva); \ + \ + SFC_VDPA_LOG(_sva, RTE_LOG_WARNING, \ + _sva->logtype_main, __VA_ARGS__); \ + } while (0) + +#define sfc_vdpa_notice(sva, ...) \ + do { \ + const struct sfc_vdpa_adapter *_sva = (sva); \ + \ + SFC_VDPA_LOG(_sva, RTE_LOG_NOTICE, \ + _sva->logtype_main, __VA_ARGS__); \ + } while (0) + +#define sfc_vdpa_info(sva, ...) \ + do { \ + const struct sfc_vdpa_adapter *_sva = (sva); \ + \ + SFC_VDPA_LOG(_sva, RTE_LOG_INFO, \ + _sva->logtype_main, __VA_ARGS__); \ + } while (0) + +#define sfc_vdpa_log_init(sva, ...) \ + do { \ + const struct sfc_vdpa_adapter *_sva = (sva); \ + \ + SFC_VDPA_LOG(_sva, RTE_LOG_INFO, \ + _sva->logtype_main, \ + RTE_FMT("%s(): " \ + RTE_FMT_HEAD(__VA_ARGS__ ,), \ + __func__, \ + RTE_FMT_TAIL(__VA_ARGS__ ,))); \ + } while (0) + +#endif /* _SFC_VDPA_LOG_H_ */ diff --git a/drivers/vdpa/sfc/version.map b/drivers/vdpa/sfc/version.map new file mode 100644 index 0000000..4a76d1d --- /dev/null +++ b/drivers/vdpa/sfc/version.map @@ -0,0 +1,3 @@ +DPDK_21 { + local: *; +}; From patchwork Tue Jul 6 16:44:10 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vijay Srivastava X-Patchwork-Id: 95465 X-Patchwork-Delegate: maxime.coquelin@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 60BA2A0C4A; Wed, 7 Jul 2021 10:25:51 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id F3EC24148B; Wed, 7 Jul 2021 10:25:33 +0200 (CEST) Received: from NAM10-DM6-obe.outbound.protection.outlook.com (mail-dm6nam10on2080.outbound.protection.outlook.com [40.107.93.80]) by mails.dpdk.org (Postfix) with ESMTP id 829A74120E for ; Tue, 6 Jul 2021 18:49:22 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=X4JEqOuEtUnVA+1o0sB4voK8S2hQw0cP74kIZcneg3nQ50ANV3h055o0NbAcWO/aHALwc7SYveVf8MlB3gG9+GgI4TLXdPmfQC7hPaKcTBk+slIC0JnFuWUV6ytPZxcti/jHMQiYEnx5s1s3VTx1uRdMneowZ6c7iaEyKGbaC0mCDzW1oHx70y3iifcyk5H7E4mRZ0R/i7V3mLcyNcF31KO1ubiY+6+hd0aGC/G1bYebJ5ClSj+GQlTxLDscUv8vzKs0f9NX0RLEPScb/Z60ZQY33AkgLWWntLBsfBeX74e6m78cpoNCGdLjIlmCYCwN+hzko7SuIvgyIqYR/Qlo5w== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=9G/kXqnPyP/+93HG60mebkZmzw9w+UGJF13I1QOXSP0=; 
b=ji5TK9wMRWwVI8faX7inuA4FAxcgBY6exhSl5gr1YKkSRbjPUzI2dFLev+/dX8ndx8N8/v/N23X+lJMV7RtZ5v30YXuS5PYcqk8vfA+BGxGtFR6p1VHhWU7esb7BoQ+OeyscB3lufZODNU3RSjQL2xAPOICs6SRAc7iad3/wvjjli6PrzZp+b4wioGnfM41kWAOiyT+Oy5Kv6vz6nKHii7K0KJkEx/CNCj4jCLnaKBs5O/rTUxIeVibWDhrciPH3H6RjAorqHgRJ15XBZrkBRQRIG8Z+ubrtEzecPBT2/VxDdRUU1DNguTLHT3AiIMdGiLnC92YldkG2/LFF8Hlv0Q== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 149.199.62.198) smtp.rcpttodomain=dpdk.org smtp.mailfrom=xilinx.com; dmarc=pass (p=none sp=none pct=100) action=none header.from=xilinx.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=xilinx.onmicrosoft.com; s=selector2-xilinx-onmicrosoft-com; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=9G/kXqnPyP/+93HG60mebkZmzw9w+UGJF13I1QOXSP0=; b=BmGQKVd8UFfu8kuvnY74FWkK4BDhwuO9xwlHqlykspDvju18I6/yUHcfaeWsgFsosI0t67jmvgOE2WNvgYqgIbyaQXiXt6pa4fwoUfqrWlp1QvtzBtN7LJE6cnhCWgAHcl6iM2OgcYTcQ8QikgIK3YGPrYh+wFIK3QoV2Yw0400= Received: from BN1PR13CA0007.namprd13.prod.outlook.com (2603:10b6:408:e2::12) by CH2PR02MB6038.namprd02.prod.outlook.com (2603:10b6:610:12::24) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4287.27; Tue, 6 Jul 2021 16:49:18 +0000 Received: from BN1NAM02FT022.eop-nam02.prod.protection.outlook.com (2603:10b6:408:e2:cafe::a1) by BN1PR13CA0007.outlook.office365.com (2603:10b6:408:e2::12) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4308.8 via Frontend Transport; Tue, 6 Jul 2021 16:49:18 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 149.199.62.198) smtp.mailfrom=xilinx.com; dpdk.org; dkim=none (message not signed) header.d=none;dpdk.org; dmarc=pass action=none header.from=xilinx.com; Received-SPF: Pass (protection.outlook.com: domain of xilinx.com designates 149.199.62.198 as permitted sender) receiver=protection.outlook.com; client-ip=149.199.62.198; helo=xsj-pvapexch01.xlnx.xilinx.com; Received: from xsj-pvapexch01.xlnx.xilinx.com (149.199.62.198) by BN1NAM02FT022.mail.protection.outlook.com (10.13.2.136) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.20.4287.22 via Frontend Transport; Tue, 6 Jul 2021 16:49:18 +0000 Received: from xsj-pvapexch02.xlnx.xilinx.com (172.19.86.41) by xsj-pvapexch01.xlnx.xilinx.com (172.19.86.40) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.2; Tue, 6 Jul 2021 09:49:17 -0700 Received: from smtp.xilinx.com (172.19.127.96) by xsj-pvapexch02.xlnx.xilinx.com (172.19.86.41) with Microsoft SMTP Server id 15.1.2176.2 via Frontend Transport; Tue, 6 Jul 2021 09:49:17 -0700 Envelope-to: dev@dpdk.org, maxime.coquelin@redhat.com, chenbo.xia@intel.com, andrew.rybchenko@oktetlabs.ru Received: from [10.177.4.108] (port=54950 helo=xndengvm004108.xilinx.com) by smtp.xilinx.com with esmtp (Exim 4.90) (envelope-from ) id 1m0oFz-0000pF-LP; Tue, 06 Jul 2021 09:49:12 -0700 From: Vijay Srivastava To: CC: , , , Vijay Kumar Srivastava Date: Tue, 6 Jul 2021 22:14:10 +0530 Message-ID: <20210706164418.32615-3-vsrivast@xilinx.com> X-Mailer: git-send-email 2.25.0 In-Reply-To: <20210706164418.32615-1-vsrivast@xilinx.com> References: <20210706164418.32615-1-vsrivast@xilinx.com> MIME-Version: 1.0 X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-Office365-Filtering-Correlation-Id: e84b5f69-c158-4440-f5cf-08d9409dff3e X-MS-TrafficTypeDiagnostic: 
CH2PR02MB6038: X-Microsoft-Antispam-PRVS: X-MS-Oob-TLC-OOBClassifiers: OLM:1227; X-MS-Exchange-SenderADCheck: 1 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: jHmyQOk29bTdx4hWC15rR599n0vJBd9aBV6tPZ2Z/c2ud8DHO0jnNDAC0Io6zK09KB02VfNamkgR+0UmppnFTbnOOgp8QSUDMYvCCmT9dm6FppVY9X5b7fUn0BZQNn165kLgU0FWtMr+6lb/Pf7vhx0rUcwR4gszmAMZ/XH0ii4wdB+DPM1RvTuPLQ7WQm3LqII/nLQsmN2LoQT5LOlV7xV4Pqk08B0FdUz4Tbf+mcmxHK4P3CQjrSr1agPXekcveEGjJFzZ4OOQ49rfz1HO+5M+FSiSBBGvcyTf12Ze//bnxG+BTk4WzJyycI4Uyo1El7hN2nW0YmP5buUBQMDcwJ1WYay97MasiMEhCOVwfAh5Poia1K90elP+qWPdYLZdOUXKBwQ7WBi270di/SiA5lPQLCeAdSfplXg5wSyxrXL1IfBowppNZsv9Yj17KUqC+9wa8XpIg5K6c41LATq76Un3gZYeVj94sDM7HqQoDP/QrVpEebT6IvrrwWNGowoPoP1HRjFXGnnH+Ft1GZHGfhDcofSRB7qGnN/e1A/bFRjbG6qUyhg+j7jgCfjwgD7ZGD17FaAdPCfUEuDmVtWMmQC9lnFT6lrJBARBtuI7xyc0QVUVXGE0Afhz3fiZnQcwXP9WHYX9tP4hvIxwFCQbUiRvY0/Z+vV2hz9CTu3bvCMm3sKnkEV04oVA1MERTrYV4bKVfiCCITc8/qSGWvtYQzRGWVZpPG1hdaKwK8KOnu8= X-Forefront-Antispam-Report: CIP:149.199.62.198; CTRY:US; LANG:en; SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:xsj-pvapexch01.xlnx.xilinx.com; PTR:unknown-62-198.xilinx.com; CAT:NONE; SFS:(4636009)(36840700001)(46966006)(82310400003)(426003)(26005)(4326008)(54906003)(186003)(2616005)(36860700001)(83380400001)(30864003)(8936002)(356005)(8676002)(2906002)(36756003)(1076003)(47076005)(44832011)(70586007)(7636003)(36906005)(107886003)(7696005)(6916009)(70206006)(336012)(5660300002)(9786002)(498600001)(102446001); DIR:OUT; SFP:1101; X-OriginatorOrg: xilinx.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Jul 2021 16:49:18.4515 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: e84b5f69-c158-4440-f5cf-08d9409dff3e X-MS-Exchange-CrossTenant-Id: 657af505-d5df-48d0-8300-c31994686c5c X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=657af505-d5df-48d0-8300-c31994686c5c; Ip=[149.199.62.198]; Helo=[xsj-pvapexch01.xlnx.xilinx.com] X-MS-Exchange-CrossTenant-AuthSource: BN1NAM02FT022.eop-nam02.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR02MB6038 X-Mailman-Approved-At: Wed, 07 Jul 2021 10:25:28 +0200 Subject: [dpdk-dev] [PATCH 02/10] vdpa/sfc: add support for device initialization X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vijay Kumar Srivastava Add HW initialization and vDPA device registration support. 
Signed-off-by: Vijay Kumar Srivastava --- doc/guides/vdpadevs/sfc.rst | 6 + drivers/vdpa/sfc/meson.build | 3 + drivers/vdpa/sfc/sfc_vdpa.c | 23 +++ drivers/vdpa/sfc/sfc_vdpa.h | 49 +++++- drivers/vdpa/sfc/sfc_vdpa_debug.h | 21 +++ drivers/vdpa/sfc/sfc_vdpa_hw.c | 322 ++++++++++++++++++++++++++++++++++++++ drivers/vdpa/sfc/sfc_vdpa_log.h | 3 + drivers/vdpa/sfc/sfc_vdpa_mcdi.c | 74 +++++++++ drivers/vdpa/sfc/sfc_vdpa_ops.c | 129 +++++++++++++++ drivers/vdpa/sfc/sfc_vdpa_ops.h | 36 +++++ 10 files changed, 665 insertions(+), 1 deletion(-) create mode 100644 drivers/vdpa/sfc/sfc_vdpa_debug.h create mode 100644 drivers/vdpa/sfc/sfc_vdpa_hw.c create mode 100644 drivers/vdpa/sfc/sfc_vdpa_mcdi.c create mode 100644 drivers/vdpa/sfc/sfc_vdpa_ops.c create mode 100644 drivers/vdpa/sfc/sfc_vdpa_ops.h diff --git a/doc/guides/vdpadevs/sfc.rst b/doc/guides/vdpadevs/sfc.rst index 59f990b..abb5900 100644 --- a/doc/guides/vdpadevs/sfc.rst +++ b/doc/guides/vdpadevs/sfc.rst @@ -95,3 +95,9 @@ SFC vDPA PMD provides the following log types available for control: Matches a subset of per-port log types registered during runtime. A full name for a particular type may be obtained by appending a dot and a PCI device identifier (``XXXX:XX:XX.X``) to the prefix. + +- ``pmd.vdpa.sfc.mcdi`` (default level is **notice**) + + Extra logging of the communication with the NIC's management CPU. + The format of the log is consumed by the netlogdecode cross-platform + tool. May be managed per-port, as explained above. diff --git a/drivers/vdpa/sfc/meson.build b/drivers/vdpa/sfc/meson.build index d916389..aac7c51 100644 --- a/drivers/vdpa/sfc/meson.build +++ b/drivers/vdpa/sfc/meson.build @@ -30,4 +30,7 @@ endforeach deps += ['common_sfc_efx', 'bus_pci'] sources = files( 'sfc_vdpa.c', + 'sfc_vdpa_hw.c', + 'sfc_vdpa_mcdi.c', + 'sfc_vdpa_ops.c', ) diff --git a/drivers/vdpa/sfc/sfc_vdpa.c b/drivers/vdpa/sfc/sfc_vdpa.c index d8faaca..12e8d6e 100644 --- a/drivers/vdpa/sfc/sfc_vdpa.c +++ b/drivers/vdpa/sfc/sfc_vdpa.c @@ -232,6 +232,19 @@ struct sfc_vdpa_adapter * goto fail_vfio_setup; } + sfc_vdpa_log_init(sva, "hw init"); + if (sfc_vdpa_hw_init(sva) != 0) { + sfc_vdpa_err(sva, "failed to init HW %s", pci_dev->name); + goto fail_hw_init; + } + + sfc_vdpa_log_init(sva, "dev init"); + sva->ops_data = sfc_vdpa_device_init(sva, SFC_VDPA_AS_VF); + if (sva->ops_data == NULL) { + sfc_vdpa_err(sva, "failed vDPA dev init %s", pci_dev->name); + goto fail_dev_init; + } + pthread_mutex_lock(&sfc_vdpa_adapter_list_lock); TAILQ_INSERT_TAIL(&sfc_vdpa_adapter_list, sva, next); pthread_mutex_unlock(&sfc_vdpa_adapter_list_lock); @@ -240,6 +253,12 @@ struct sfc_vdpa_adapter * return 0; +fail_dev_init: + sfc_vdpa_hw_fini(sva); + +fail_hw_init: + sfc_vdpa_vfio_teardown(sva); + fail_vfio_setup: fail_set_log_prefix: rte_free(sva); @@ -266,6 +285,10 @@ struct sfc_vdpa_adapter * TAILQ_REMOVE(&sfc_vdpa_adapter_list, sva, next); pthread_mutex_unlock(&sfc_vdpa_adapter_list_lock); + sfc_vdpa_device_fini(sva->ops_data); + + sfc_vdpa_hw_fini(sva); + sfc_vdpa_vfio_teardown(sva); rte_free(sva); diff --git a/drivers/vdpa/sfc/sfc_vdpa.h b/drivers/vdpa/sfc/sfc_vdpa.h index 3b77900..fb97258 100644 --- a/drivers/vdpa/sfc/sfc_vdpa.h +++ b/drivers/vdpa/sfc/sfc_vdpa.h @@ -11,14 +11,38 @@ #include +#include "sfc_efx.h" +#include "sfc_efx_mcdi.h" +#include "sfc_vdpa_debug.h" #include "sfc_vdpa_log.h" +#include "sfc_vdpa_ops.h" + +#define SFC_VDPA_DEFAULT_MCDI_IOVA 0x200000000000 /* Adapter private data */ struct sfc_vdpa_adapter { TAILQ_ENTRY(sfc_vdpa_adapter) next; + /* + * PMD 
setup and configuration is not thread safe. Since it is not + * performance sensitive, it is better to guarantee thread-safety + * and add device level lock. vDPA control operations which + * change its state should acquire the lock. + */ + rte_spinlock_t lock; struct rte_pci_device *pdev; struct rte_pci_addr pci_addr; + efx_family_t family; + efx_nic_t *nic; + rte_spinlock_t nic_lock; + + efsys_bar_t mem_bar; + + struct sfc_efx_mcdi mcdi; + size_t mcdi_buff_size; + + uint32_t max_queue_count; + char log_prefix[SFC_VDPA_LOG_PREFIX_MAX]; uint32_t logtype_main; @@ -26,6 +50,7 @@ struct sfc_vdpa_adapter { int vfio_dev_fd; int vfio_container_fd; int iommu_group_num; + struct sfc_vdpa_ops_data *ops_data; }; uint32_t @@ -36,5 +61,27 @@ struct sfc_vdpa_adapter { struct sfc_vdpa_adapter * sfc_vdpa_get_adapter_by_dev(struct rte_pci_device *pdev); -#endif /* _SFC_VDPA_H */ +int +sfc_vdpa_hw_init(struct sfc_vdpa_adapter *sva); +void +sfc_vdpa_hw_fini(struct sfc_vdpa_adapter *sa); +int +sfc_vdpa_mcdi_init(struct sfc_vdpa_adapter *sva); +void +sfc_vdpa_mcdi_fini(struct sfc_vdpa_adapter *sva); + +int +sfc_vdpa_dma_alloc(struct sfc_vdpa_adapter *sva, const char *name, + size_t len, efsys_mem_t *esmp); + +void +sfc_vdpa_dma_free(struct sfc_vdpa_adapter *sva, efsys_mem_t *esmp); + +static inline struct sfc_vdpa_adapter * +sfc_vdpa_adapter_by_dev_handle(void *dev_handle) +{ + return (struct sfc_vdpa_adapter *)dev_handle; +} + +#endif /* _SFC_VDPA_H */ diff --git a/drivers/vdpa/sfc/sfc_vdpa_debug.h b/drivers/vdpa/sfc/sfc_vdpa_debug.h new file mode 100644 index 0000000..cfa8cc5 --- /dev/null +++ b/drivers/vdpa/sfc/sfc_vdpa_debug.h @@ -0,0 +1,21 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * + * Copyright(c) 2020-2021 Xilinx, Inc. + */ + +#ifndef _SFC_VDPA_DEBUG_H_ +#define _SFC_VDPA_DEBUG_H_ + +#include + +#ifdef RTE_LIBRTE_SFC_VDPA_DEBUG +/* Avoid dependency from RTE_LOG_DP_LEVEL to be able to enable debug check + * in the driver only. + */ +#define SFC_VDPA_ASSERT(exp) RTE_VERIFY(exp) +#else +/* If the driver debug is not enabled, follow DPDK debug/non-debug */ +#define SFC_VDPA_ASSERT(exp) RTE_ASSERT(exp) +#endif + +#endif /* _SFC_VDPA_DEBUG_H_ */ diff --git a/drivers/vdpa/sfc/sfc_vdpa_hw.c b/drivers/vdpa/sfc/sfc_vdpa_hw.c new file mode 100644 index 0000000..83f3696 --- /dev/null +++ b/drivers/vdpa/sfc/sfc_vdpa_hw.c @@ -0,0 +1,322 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * + * Copyright(c) 2020-2021 Xilinx, Inc. + */ + +#include + +#include +#include +#include + +#include "efx.h" +#include "sfc_vdpa.h" +#include "sfc_vdpa_ops.h" + +extern uint32_t sfc_logtype_driver; + +#ifndef PAGE_SIZE +#define PAGE_SIZE (sysconf(_SC_PAGESIZE)) +#endif + +int +sfc_vdpa_dma_alloc(struct sfc_vdpa_adapter *sva, const char *name, + size_t len, efsys_mem_t *esmp) +{ + void *mcdi_buf; + uint64_t mcdi_iova; + size_t mcdi_buff_size; + int ret; + + mcdi_buff_size = RTE_ALIGN_CEIL(len, PAGE_SIZE); + + sfc_vdpa_log_init(sva, "name=%s, len=%zu", name, len); + + mcdi_buf = rte_zmalloc(name, mcdi_buff_size, PAGE_SIZE); + if (mcdi_buf == NULL) { + sfc_vdpa_err(sva, "cannot reserve memory for %s: len=%#x: %s", + name, (unsigned int)len, rte_strerror(rte_errno)); + return -ENOMEM; + } + + /* IOVA address for MCDI would be re-calculated if mapping + * using default IOVA would fail. + * TODO: Earlier there was no way to get valid IOVA range. + * Recently a patch has been submitted to get the IOVA range + * using ioctl. VFIO_IOMMU_GET_INFO. This patch is available + * in the kernel version >= 5.4. 
Support to get the default + * IOVA address for MCDI buffer using available IOVA range + * would be added later. Meanwhile default IOVA for MCDI buffer + * is kept at high mem at 2TB. In case of overlap new available + * addresses would be searched and same would be used. + */ + mcdi_iova = SFC_VDPA_DEFAULT_MCDI_IOVA; + + do { + ret = rte_vfio_container_dma_map(sva->vfio_container_fd, + (uint64_t)mcdi_buf, mcdi_iova, + mcdi_buff_size); + if (ret == 0) + break; + + mcdi_iova = mcdi_iova >> 1; + if (mcdi_iova < mcdi_buff_size) { + sfc_vdpa_err(sva, + "DMA mapping failed for MCDI : %s", + rte_strerror(rte_errno)); + return ret; + } + + } while (ret < 0); + + esmp->esm_addr = mcdi_iova; + esmp->esm_base = mcdi_buf; + sva->mcdi_buff_size = mcdi_buff_size; + + sfc_vdpa_info(sva, + "DMA name=%s len=%zu => virt=%p iova=%" PRIx64, + name, len, esmp->esm_base, esmp->esm_addr); + + return 0; +} + +void +sfc_vdpa_dma_free(struct sfc_vdpa_adapter *sva, efsys_mem_t *esmp) +{ + int ret; + + sfc_vdpa_log_init(sva, "name=%s", esmp->esm_mz->name); + + ret = rte_vfio_container_dma_unmap(sva->vfio_container_fd, + (uint64_t)esmp->esm_base, + esmp->esm_addr, sva->mcdi_buff_size); + if (ret < 0) + sfc_vdpa_err(sva, "DMA unmap failed for MCDI : %s", + rte_strerror(rte_errno)); + + sfc_vdpa_info(sva, + "DMA free name=%s => virt=%p iova=%" PRIx64, + esmp->esm_mz->name, esmp->esm_base, esmp->esm_addr); + + rte_free((void *)(esmp->esm_base)); + + sva->mcdi_buff_size = 0; + memset(esmp, 0, sizeof(*esmp)); +} + +static int +sfc_vdpa_mem_bar_init(struct sfc_vdpa_adapter *sva, + const efx_bar_region_t *mem_ebrp) +{ + struct rte_pci_device *pci_dev = sva->pdev; + efsys_bar_t *ebp = &sva->mem_bar; + struct rte_mem_resource *res = + &pci_dev->mem_resource[mem_ebrp->ebr_index]; + + SFC_BAR_LOCK_INIT(ebp, pci_dev->name); + ebp->esb_rid = mem_ebrp->ebr_index; + ebp->esb_dev = pci_dev; + ebp->esb_base = res->addr; + + return 0; +} + +static void +sfc_vdpa_mem_bar_fini(struct sfc_vdpa_adapter *sva) +{ + efsys_bar_t *ebp = &sva->mem_bar; + + SFC_BAR_LOCK_DESTROY(ebp); + memset(ebp, 0, sizeof(*ebp)); +} + +static int +sfc_vdpa_nic_probe(struct sfc_vdpa_adapter *sva) +{ + efx_nic_t *enp = sva->nic; + int rc; + + rc = efx_nic_probe(enp, EFX_FW_VARIANT_DONT_CARE); + if (rc != 0) + sfc_vdpa_err(sva, "nic probe failed: %s", rte_strerror(rc)); + + return rc; +} + +static int +sfc_vdpa_estimate_resource_limits(struct sfc_vdpa_adapter *sva) +{ + efx_drv_limits_t limits; + int rc; + uint32_t evq_allocated; + uint32_t rxq_allocated; + uint32_t txq_allocated; + uint32_t max_queue_cnt; + + memset(&limits, 0, sizeof(limits)); + + /* Request at least one Rx and Tx queue */ + limits.edl_min_rxq_count = 1; + limits.edl_min_txq_count = 1; + /* Management event queue plus event queue for Tx/Rx queue */ + limits.edl_min_evq_count = + 1 + RTE_MAX(limits.edl_min_rxq_count, limits.edl_min_txq_count); + + limits.edl_max_rxq_count = SFC_VDPA_MAX_QUEUE_PAIRS; + limits.edl_max_txq_count = SFC_VDPA_MAX_QUEUE_PAIRS; + limits.edl_max_evq_count = 1 + SFC_VDPA_MAX_QUEUE_PAIRS; + + SFC_VDPA_ASSERT(limits.edl_max_evq_count >= limits.edl_min_rxq_count); + SFC_VDPA_ASSERT(limits.edl_max_rxq_count >= limits.edl_min_rxq_count); + SFC_VDPA_ASSERT(limits.edl_max_txq_count >= limits.edl_min_rxq_count); + + /* Configure the minimum required resources needed for the + * driver to operate, and the maximum desired resources that the + * driver is capable of using. 
+ */ + sfc_vdpa_log_init(sva, "set drv limit"); + efx_nic_set_drv_limits(sva->nic, &limits); + + sfc_vdpa_log_init(sva, "init nic"); + rc = efx_nic_init(sva->nic); + if (rc != 0) { + sfc_vdpa_err(sva, "nic init failed: %s", rte_strerror(rc)); + goto fail_nic_init; + } + + /* Find resource dimensions assigned by firmware to this function */ + rc = efx_nic_get_vi_pool(sva->nic, &evq_allocated, &rxq_allocated, + &txq_allocated); + if (rc != 0) { + sfc_vdpa_err(sva, "vi pool get failed: %s", rte_strerror(rc)); + goto fail_get_vi_pool; + } + + /* It still may allocate more than maximum, ensure limit */ + evq_allocated = RTE_MIN(evq_allocated, limits.edl_max_evq_count); + rxq_allocated = RTE_MIN(rxq_allocated, limits.edl_max_rxq_count); + txq_allocated = RTE_MIN(txq_allocated, limits.edl_max_txq_count); + + + max_queue_cnt = RTE_MIN(rxq_allocated, txq_allocated); + /* Subtract management EVQ not used for traffic */ + max_queue_cnt = RTE_MIN(evq_allocated - 1, max_queue_cnt); + + SFC_VDPA_ASSERT(max_queue_cnt > 0); + + sva->max_queue_count = max_queue_cnt; + + return 0; + +fail_get_vi_pool: + efx_nic_fini(sva->nic); +fail_nic_init: + sfc_vdpa_log_init(sva, "failed: %s", rte_strerror(rc)); + return rc; +} + +int +sfc_vdpa_hw_init(struct sfc_vdpa_adapter *sva) +{ + efx_bar_region_t mem_ebr; + efx_nic_t *enp; + int rc; + + sfc_vdpa_log_init(sva, "entry"); + + sfc_vdpa_log_init(sva, "get family"); + rc = sfc_efx_family(sva->pdev, &mem_ebr, &sva->family); + if (rc != 0) + goto fail_family; + sfc_vdpa_log_init(sva, + "family is %u, membar is %u," + "function control window offset is %#" PRIx64, + sva->family, mem_ebr.ebr_index, mem_ebr.ebr_offset); + + sfc_vdpa_log_init(sva, "init mem bar"); + rc = sfc_vdpa_mem_bar_init(sva, &mem_ebr); + if (rc != 0) + goto fail_mem_bar_init; + + sfc_vdpa_log_init(sva, "create nic"); + rte_spinlock_init(&sva->nic_lock); + rc = efx_nic_create(sva->family, (efsys_identifier_t *)sva, + &sva->mem_bar, mem_ebr.ebr_offset, + &sva->nic_lock, &enp); + if (rc != 0) { + sfc_vdpa_err(sva, "nic create failed: %s", rte_strerror(rc)); + goto fail_nic_create; + } + sva->nic = enp; + + sfc_vdpa_log_init(sva, "init mcdi"); + rc = sfc_vdpa_mcdi_init(sva); + if (rc != 0) { + sfc_vdpa_err(sva, "mcdi init failed: %s", rte_strerror(rc)); + goto fail_mcdi_init; + } + + sfc_vdpa_log_init(sva, "probe nic"); + rc = sfc_vdpa_nic_probe(sva); + if (rc != 0) + goto fail_nic_probe; + + sfc_vdpa_log_init(sva, "reset nic"); + rc = efx_nic_reset(enp); + if (rc != 0) { + sfc_vdpa_err(sva, "nic reset failed: %s", rte_strerror(rc)); + goto fail_nic_reset; + } + + sfc_vdpa_log_init(sva, "estimate resource limits"); + rc = sfc_vdpa_estimate_resource_limits(sva); + if (rc != 0) + goto fail_estimate_rsrc_limits; + + sfc_vdpa_log_init(sva, "done"); + + return 0; + +fail_estimate_rsrc_limits: +fail_nic_reset: + efx_nic_unprobe(enp); + +fail_nic_probe: + sfc_vdpa_mcdi_fini(sva); + +fail_mcdi_init: + sfc_vdpa_log_init(sva, "destroy nic"); + sva->nic = NULL; + efx_nic_destroy(enp); + +fail_nic_create: + sfc_vdpa_mem_bar_fini(sva); + +fail_mem_bar_init: +fail_family: + sfc_vdpa_log_init(sva, "failed: %s", rte_strerror(rc)); + return rc; +} + +void +sfc_vdpa_hw_fini(struct sfc_vdpa_adapter *sva) +{ + efx_nic_t *enp = sva->nic; + + sfc_vdpa_log_init(sva, "entry"); + + sfc_vdpa_log_init(sva, "unprobe nic"); + efx_nic_unprobe(enp); + + sfc_vdpa_log_init(sva, "mcdi fini"); + sfc_vdpa_mcdi_fini(sva); + + sfc_vdpa_log_init(sva, "nic fini"); + efx_nic_fini(enp); + + sfc_vdpa_log_init(sva, "destroy nic"); + sva->nic = NULL; 
+ efx_nic_destroy(enp); + + sfc_vdpa_mem_bar_fini(sva); +} diff --git a/drivers/vdpa/sfc/sfc_vdpa_log.h b/drivers/vdpa/sfc/sfc_vdpa_log.h index 0a3d6ad..59af790 100644 --- a/drivers/vdpa/sfc/sfc_vdpa_log.h +++ b/drivers/vdpa/sfc/sfc_vdpa_log.h @@ -21,6 +21,9 @@ /** Name prefix for the per-device log type used to report basic information */ #define SFC_VDPA_LOGTYPE_MAIN_STR SFC_VDPA_LOGTYPE_PREFIX "main" +/** Device MCDI log type name prefix */ +#define SFC_VDPA_LOGTYPE_MCDI_STR SFC_VDPA_LOGTYPE_PREFIX "mcdi" + #define SFC_VDPA_LOG_PREFIX_MAX 32 /* Log PMD message, automatically add prefix and \n */ diff --git a/drivers/vdpa/sfc/sfc_vdpa_mcdi.c b/drivers/vdpa/sfc/sfc_vdpa_mcdi.c new file mode 100644 index 0000000..961d2d3 --- /dev/null +++ b/drivers/vdpa/sfc/sfc_vdpa_mcdi.c @@ -0,0 +1,74 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * + * Copyright(c) 2020-2021 Xilinx, Inc. + */ + +#include "sfc_efx_mcdi.h" + +#include "sfc_vdpa.h" +#include "sfc_vdpa_debug.h" +#include "sfc_vdpa_log.h" + +static sfc_efx_mcdi_dma_alloc_cb sfc_vdpa_mcdi_dma_alloc; +static int +sfc_vdpa_mcdi_dma_alloc(void *cookie, const char *name, size_t len, + efsys_mem_t *esmp) +{ + struct sfc_vdpa_adapter *sva = cookie; + + return sfc_vdpa_dma_alloc(sva, name, len, esmp); +} + +static sfc_efx_mcdi_dma_free_cb sfc_vdpa_mcdi_dma_free; +static void +sfc_vdpa_mcdi_dma_free(void *cookie, efsys_mem_t *esmp) +{ + struct sfc_vdpa_adapter *sva = cookie; + + sfc_vdpa_dma_free(sva, esmp); +} + +static sfc_efx_mcdi_sched_restart_cb sfc_vdpa_mcdi_sched_restart; +static void +sfc_vdpa_mcdi_sched_restart(void *cookie) +{ + RTE_SET_USED(cookie); +} + +static sfc_efx_mcdi_mgmt_evq_poll_cb sfc_vdpa_mcdi_mgmt_evq_poll; +static void +sfc_vdpa_mcdi_mgmt_evq_poll(void *cookie) +{ + RTE_SET_USED(cookie); +} + +static const struct sfc_efx_mcdi_ops sfc_vdpa_mcdi_ops = { + .dma_alloc = sfc_vdpa_mcdi_dma_alloc, + .dma_free = sfc_vdpa_mcdi_dma_free, + .sched_restart = sfc_vdpa_mcdi_sched_restart, + .mgmt_evq_poll = sfc_vdpa_mcdi_mgmt_evq_poll, + +}; + +int +sfc_vdpa_mcdi_init(struct sfc_vdpa_adapter *sva) +{ + uint32_t logtype; + + sfc_vdpa_log_init(sva, "entry"); + + logtype = sfc_vdpa_register_logtype(&(sva->pdev->addr), + SFC_VDPA_LOGTYPE_MCDI_STR, + RTE_LOG_NOTICE); + + return sfc_efx_mcdi_init(&sva->mcdi, logtype, + sva->log_prefix, sva->nic, + &sfc_vdpa_mcdi_ops, sva); +} + +void +sfc_vdpa_mcdi_fini(struct sfc_vdpa_adapter *sva) +{ + sfc_vdpa_log_init(sva, "entry"); + sfc_efx_mcdi_fini(&sva->mcdi); +} diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c new file mode 100644 index 0000000..71696be --- /dev/null +++ b/drivers/vdpa/sfc/sfc_vdpa_ops.c @@ -0,0 +1,129 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * + * Copyright(c) 2020-2021 Xilinx, Inc. + */ + +#include +#include +#include +#include + +#include "sfc_vdpa_ops.h" +#include "sfc_vdpa.h" + +/* Dummy functions for mandatory vDPA ops to pass vDPA device registration. + * In subsequent patches these ops would be implemented. 
+ */ +static int +sfc_vdpa_get_queue_num(struct rte_vdpa_device *vdpa_dev, uint32_t *queue_num) +{ + RTE_SET_USED(vdpa_dev); + RTE_SET_USED(queue_num); + + return -1; +} + +static int +sfc_vdpa_get_features(struct rte_vdpa_device *vdpa_dev, uint64_t *features) +{ + RTE_SET_USED(vdpa_dev); + RTE_SET_USED(features); + + return -1; +} + +static int +sfc_vdpa_get_protocol_features(struct rte_vdpa_device *vdpa_dev, + uint64_t *features) +{ + RTE_SET_USED(vdpa_dev); + RTE_SET_USED(features); + + return -1; +} + +static int +sfc_vdpa_dev_config(int vid) +{ + RTE_SET_USED(vid); + + return -1; +} + +static int +sfc_vdpa_dev_close(int vid) +{ + RTE_SET_USED(vid); + + return -1; +} + +static int +sfc_vdpa_set_vring_state(int vid, int vring, int state) +{ + RTE_SET_USED(vid); + RTE_SET_USED(vring); + RTE_SET_USED(state); + + return -1; +} + +static int +sfc_vdpa_set_features(int vid) +{ + RTE_SET_USED(vid); + + return -1; +} + +static struct rte_vdpa_dev_ops sfc_vdpa_ops = { + .get_queue_num = sfc_vdpa_get_queue_num, + .get_features = sfc_vdpa_get_features, + .get_protocol_features = sfc_vdpa_get_protocol_features, + .dev_conf = sfc_vdpa_dev_config, + .dev_close = sfc_vdpa_dev_close, + .set_vring_state = sfc_vdpa_set_vring_state, + .set_features = sfc_vdpa_set_features, +}; + +struct sfc_vdpa_ops_data * +sfc_vdpa_device_init(void *dev_handle, enum sfc_vdpa_context context) +{ + struct sfc_vdpa_ops_data *ops_data; + struct rte_pci_device *pci_dev; + + /* Create vDPA ops context */ + ops_data = rte_zmalloc("vdpa", sizeof(struct sfc_vdpa_ops_data), 0); + if (ops_data == NULL) + return NULL; + + ops_data->vdpa_context = context; + ops_data->dev_handle = dev_handle; + + pci_dev = sfc_vdpa_adapter_by_dev_handle(dev_handle)->pdev; + + /* Register vDPA Device */ + sfc_vdpa_log_init(dev_handle, "register vDPA device"); + ops_data->vdpa_dev = + rte_vdpa_register_device(&pci_dev->device, &sfc_vdpa_ops); + if (ops_data->vdpa_dev == NULL) { + sfc_vdpa_err(dev_handle, "vDPA device registration failed"); + goto fail_register_device; + } + + ops_data->state = SFC_VDPA_STATE_INITIALIZED; + + return ops_data; + +fail_register_device: + rte_free(ops_data); + return NULL; +} + +void +sfc_vdpa_device_fini(struct sfc_vdpa_ops_data *ops_data) +{ + rte_vdpa_unregister_device(ops_data->vdpa_dev); + + rte_free(ops_data); +} diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.h b/drivers/vdpa/sfc/sfc_vdpa_ops.h new file mode 100644 index 0000000..817b302 --- /dev/null +++ b/drivers/vdpa/sfc/sfc_vdpa_ops.h @@ -0,0 +1,36 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * + * Copyright(c) 2020-2021 Xilinx, Inc. 
+ */ + +#ifndef _SFC_VDPA_OPS_H +#define _SFC_VDPA_OPS_H + +#include + +#define SFC_VDPA_MAX_QUEUE_PAIRS 1 + +enum sfc_vdpa_context { + SFC_VDPA_AS_PF = 0, + SFC_VDPA_AS_VF +}; + +enum sfc_vdpa_state { + SFC_VDPA_STATE_UNINITIALIZED = 0, + SFC_VDPA_STATE_INITIALIZED, + SFC_VDPA_STATE_NSTATES +}; + +struct sfc_vdpa_ops_data { + void *dev_handle; + struct rte_vdpa_device *vdpa_dev; + enum sfc_vdpa_context vdpa_context; + enum sfc_vdpa_state state; +}; + +struct sfc_vdpa_ops_data * +sfc_vdpa_device_init(void *adapter, enum sfc_vdpa_context context); +void +sfc_vdpa_device_fini(struct sfc_vdpa_ops_data *ops_data); + +#endif /* _SFC_VDPA_OPS_H */ From patchwork Tue Jul 6 16:44:11 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vijay Srivastava X-Patchwork-Id: 95464 X-Patchwork-Delegate: maxime.coquelin@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 06467A0C4A; Wed, 7 Jul 2021 10:25:43 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id B1A4841484; Wed, 7 Jul 2021 10:25:32 +0200 (CEST) Received: from NAM11-BN8-obe.outbound.protection.outlook.com (mail-bn8nam11on2056.outbound.protection.outlook.com [40.107.236.56]) by mails.dpdk.org (Postfix) with ESMTP id BF5994120E for ; Tue, 6 Jul 2021 18:49:21 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; b=oao/+GZwOexvFfCrvFdtjM8UKrFHd7KkGf6v0dGrCOu357NqQkl1QwPLlj3lrves+zfDCHPyzJv0gsXyyjQGy1Ugh1OvVT9e/Fqzk2hpCq/TOJ6nrSUOSYmeoP5clbaKBd7VohgsTgBJ5+Q6EYu1QfMmtpzgKQNkLtNXxYmNCLSCa/oqAu7XaSR7wtyBxUKzYaWbJEtkua/FfBoTB72JwtmLd/I1kcuMV9BvbZCzYkC6h8j0bekFwDMiPpJh7QK/meItuoYDetJVwqwineLR47Hx5YGTCUbG3bb/PhESgxQhzzCj3ukLNH6ZbVYa902tPKa5w2nSAq1B7dXOq5BGgg== ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; s=arcselector9901; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=G2RmbTQ8/HVqBDUSMprUqc3IZuq5bxG47l9rb1f126Q=; b=LG786V/dX0jhnXdi6H7Q8J4c8RM2spPOjUXqU0t/76Wfvnuz+11AxhuqoM4jb+IHKhVIrrjsyhBENY8Vy+ANHR7GZTDu1uqFaTwROPzJudC+JSr+lNtfkNf8L9CHeUICUyRH6OT1+9w0RNpE8jQpTtQx92Lvli36RkN9L0DpzlH0x+RkRQ0ve9pHrNECXlmbihKFKJYTT/5NyLln9drMAtzYBi8WPax30tfwF3sINtZvitBC0UMwUtwPcDApgBJm/xQI0x6aFZU7bVRrFaUOhZ/raYJagHO6J4q5x42WXKPPVFFmednltWgI2DZH250o6p9G+sFse/KpGju7vQFFYw== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 149.199.62.198) smtp.rcpttodomain=dpdk.org smtp.mailfrom=xilinx.com; dmarc=pass (p=none sp=none pct=100) action=none header.from=xilinx.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=xilinx.onmicrosoft.com; s=selector2-xilinx-onmicrosoft-com; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=G2RmbTQ8/HVqBDUSMprUqc3IZuq5bxG47l9rb1f126Q=; b=be5wtuQQnS+tTPuWCWU/mDTC9XX56rtKrcxoAsvZI0ai1r+r4d/KnbPzPsORo4A5flnFNYR6lrfPLtnjIX1765nYf/qOjP0br0E/L0jlCkm7BozcIiSfx8scqt905O9L3jKHGRRbz7Ydj/QIUAmskmclumuPklZjXu0TV9F38JE= Received: from BN1PR13CA0015.namprd13.prod.outlook.com (2603:10b6:408:e2::20) by CO6PR02MB7700.namprd02.prod.outlook.com (2603:10b6:303:a7::14) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4242.22; Tue, 6 Jul 2021 16:49:19 +0000 Received: from BN1NAM02FT022.eop-nam02.prod.protection.outlook.com 
(2603:10b6:408:e2:cafe::cf) by BN1PR13CA0015.outlook.office365.com (2603:10b6:408:e2::20) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4308.7 via Frontend Transport; Tue, 6 Jul 2021 16:49:19 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 149.199.62.198) smtp.mailfrom=xilinx.com; dpdk.org; dkim=none (message not signed) header.d=none;dpdk.org; dmarc=pass action=none header.from=xilinx.com; Received-SPF: Pass (protection.outlook.com: domain of xilinx.com designates 149.199.62.198 as permitted sender) receiver=protection.outlook.com; client-ip=149.199.62.198; helo=xsj-pvapexch01.xlnx.xilinx.com; Received: from xsj-pvapexch01.xlnx.xilinx.com (149.199.62.198) by BN1NAM02FT022.mail.protection.outlook.com (10.13.2.136) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.20.4287.22 via Frontend Transport; Tue, 6 Jul 2021 16:49:19 +0000 Received: from xsj-pvapexch02.xlnx.xilinx.com (172.19.86.41) by xsj-pvapexch01.xlnx.xilinx.com (172.19.86.40) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.2; Tue, 6 Jul 2021 09:49:17 -0700 Received: from smtp.xilinx.com (172.19.127.96) by xsj-pvapexch02.xlnx.xilinx.com (172.19.86.41) with Microsoft SMTP Server id 15.1.2176.2 via Frontend Transport; Tue, 6 Jul 2021 09:49:17 -0700 Envelope-to: dev@dpdk.org, maxime.coquelin@redhat.com, chenbo.xia@intel.com, andrew.rybchenko@oktetlabs.ru Received: from [10.177.4.108] (port=54950 helo=xndengvm004108.xilinx.com) by smtp.xilinx.com with esmtp (Exim 4.90) (envelope-from ) id 1m0oG5-0000pF-4X; Tue, 06 Jul 2021 09:49:17 -0700 From: Vijay Srivastava To: CC: , , , Vijay Kumar Srivastava Date: Tue, 6 Jul 2021 22:14:11 +0530 Message-ID: <20210706164418.32615-4-vsrivast@xilinx.com> X-Mailer: git-send-email 2.25.0 In-Reply-To: <20210706164418.32615-1-vsrivast@xilinx.com> References: <20210706164418.32615-1-vsrivast@xilinx.com> MIME-Version: 1.0 X-EOPAttributedMessage: 0 X-MS-PublicTrafficType: Email X-MS-Office365-Filtering-Correlation-Id: b147b3bc-8c13-4530-e0bf-08d9409dffa0 X-MS-TrafficTypeDiagnostic: CO6PR02MB7700: X-Microsoft-Antispam-PRVS: X-MS-Oob-TLC-OOBClassifiers: OLM:1751; X-MS-Exchange-SenderADCheck: 1 X-Microsoft-Antispam: BCL:0; X-Microsoft-Antispam-Message-Info: 73XNugL7cpEXtewSYuuL6EvY8wNmLXlMlhsJAp2TDXWz4EyvdDnnaHk/tet36W38acqdpXqFAIZQtL0JyOiDGhnlhHTGcs/ReAfPQBMCy8OXVr+uzxwG9Is2E8kQo/cWGTIUBsYvDNzWYHAMGQhkoqNaIc4+e2eefN4Jl6U2LRA0nvWS9lgQeG9ju1eUuK3qiPHrGJ+sUH35lROy2SA0M0Wgq+/wQQvaDkCF5cxgp0ZJf4XuGC7RO32JN1qJg+4Lts9XBVghYiZPpONb8nzq1l1LSP5nGLeUoTjSslMypfF70oV4ADxPzwoypDJ91kgoveCsdIih7X10zn/lmjHEibNBT3paFT9CAHLbSO7eq+0804hksOW++EOzFkesQ6CMcc1aPldIqdwN3SY0ok2dfF/Mgo0gOGyAUOmZbLyRgkdxmCT8siZsA8dQz6uMebuJrEzrW925mjV3jFo7p8NMA8XQY0E3Xxnh890NhtEE0m30i6z6eZf6RSs3AOnFJa1ZKSlBPHDl4xYLAKKcf0hvbJhE8MUpJnRf+T8lRPwgcXFgT738jGyEoj2kcKyeLjELyvurKlruXos+dgRsNUk0kXYv6ur7Y7TQ0Y4B3O/n2ADvBxUFXNqcvf9H6A7xtZm5xbfB/fDPuZDCRrKdsJcTvLrSwbPfP6zW5c6fV7+ng613hLubW3vRXLVL0R/seIdHn1URcReJZzOJC7irmWwu7fvZv6gysM/9AD5wvgjkPd0= X-Forefront-Antispam-Report: CIP:149.199.62.198; CTRY:US; LANG:en; SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:xsj-pvapexch01.xlnx.xilinx.com; PTR:unknown-62-198.xilinx.com; CAT:NONE; 
From: Vijay Kumar Srivastava <vsrivast@xilinx.com>

Implement vDPA ops get_feature and get_protocol_features. This patch
retrieves device supported features and enables protocol features.

Signed-off-by: Vijay Kumar Srivastava <vsrivast@xilinx.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 doc/guides/vdpadevs/features/sfc.ini | 10 ++++
 drivers/common/sfc_efx/efsys.h       |  2 +-
 drivers/common/sfc_efx/version.map   | 10 ++++
 drivers/vdpa/sfc/sfc_vdpa.c          | 20 ++++++++
 drivers/vdpa/sfc/sfc_vdpa.h          |  2 +
 drivers/vdpa/sfc/sfc_vdpa_hw.c       | 13 ++++++
 drivers/vdpa/sfc/sfc_vdpa_ops.c      | 91 ++++++++++++++++++++++++++++++----
 drivers/vdpa/sfc/sfc_vdpa_ops.h      |  3 ++
 8 files changed, 142 insertions(+), 9 deletions(-)
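For context, a minimal application-side sketch of how the features exposed by
these two ops reach a vhost-user frontend (the socket path and device name
below are placeholders, not part of this patch): once the socket is attached
to the vDPA device, the vhost library consults the driver's get_features()
and get_protocol_features() ops.

#include <inttypes.h>
#include <stdio.h>
#include <rte_vdpa.h>
#include <rte_vhost.h>

static int
attach_and_report(const char *socket_path, const char *vdpa_name)
{
	struct rte_vdpa_device *dev;
	uint64_t features;

	dev = rte_vdpa_find_device_by_name(vdpa_name);
	if (dev == NULL)
		return -1;

	if (rte_vhost_driver_register(socket_path, RTE_VHOST_USER_CLIENT) != 0)
		return -1;

	/* Binding the socket to the vDPA device makes vhost call the
	 * driver's get_features()/get_protocol_features() ops. */
	if (rte_vhost_driver_attach_vdpa_device(socket_path, dev) != 0)
		return -1;

	if (rte_vhost_driver_get_features(socket_path, &features) != 0)
		return -1;

	printf("advertised features: 0x%" PRIx64 "\n", features);
	return rte_vhost_driver_start(socket_path);
}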
diff --git a/doc/guides/vdpadevs/features/sfc.ini b/doc/guides/vdpadevs/features/sfc.ini
index 71b6158..700d061 100644
--- a/doc/guides/vdpadevs/features/sfc.ini
+++ b/doc/guides/vdpadevs/features/sfc.ini
@@ -4,6 +4,16 @@
 ; Refer to default.ini for the full list of available driver features.
 ;
 [Features]
+csum                 = Y
+guest csum           = Y
+host tso4            = Y
+host tso6            = Y
+version 1            = Y
+mrg rxbuf            = Y
+any layout           = Y
+in_order             = Y
+proto host notifier  = Y
+IOMMU platform       = Y
 Linux                = Y
 x86-64               = Y
 Usage doc            = Y
diff --git a/drivers/common/sfc_efx/efsys.h b/drivers/common/sfc_efx/efsys.h
index d133d61..37ec6b9 100644
--- a/drivers/common/sfc_efx/efsys.h
+++ b/drivers/common/sfc_efx/efsys.h
@@ -187,7 +187,7 @@
 #define EFSYS_OPT_MAE 1
 
-#define EFSYS_OPT_VIRTIO 0
+#define EFSYS_OPT_VIRTIO 1
 
 /* ID */
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index 5e724fd..03670c8 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -226,6 +226,16 @@ INTERNAL {
 	efx_txq_nbufs;
 	efx_txq_size;
 
+	efx_virtio_fini;
+	efx_virtio_get_doorbell_offset;
+	efx_virtio_get_features;
+	efx_virtio_init;
+	efx_virtio_qcreate;
+	efx_virtio_qdestroy;
+	efx_virtio_qstart;
+	efx_virtio_qstop;
+	efx_virtio_verify_features;
+
 	sfc_efx_dev_class_get;
 	sfc_efx_family;
diff --git a/drivers/vdpa/sfc/sfc_vdpa.c b/drivers/vdpa/sfc/sfc_vdpa.c
index 12e8d6e..9c12dcb 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.c
+++ b/drivers/vdpa/sfc/sfc_vdpa.c
@@ -43,6 +43,26 @@ struct sfc_vdpa_adapter *
 	return found ? sva : NULL;
 }
 
+struct sfc_vdpa_ops_data *
+sfc_vdpa_get_data_by_dev(struct rte_vdpa_device *vdpa_dev)
+{
+	bool found = false;
+	struct sfc_vdpa_adapter *sva;
+
+	pthread_mutex_lock(&sfc_vdpa_adapter_list_lock);
+
+	TAILQ_FOREACH(sva, &sfc_vdpa_adapter_list, next) {
+		if (vdpa_dev == sva->ops_data->vdpa_dev) {
+			found = true;
+			break;
+		}
+	}
+
+	pthread_mutex_unlock(&sfc_vdpa_adapter_list_lock);
+
+	return found ? sva->ops_data : NULL;
+}
+
 static int
 sfc_vdpa_vfio_setup(struct sfc_vdpa_adapter *sva)
 {
diff --git a/drivers/vdpa/sfc/sfc_vdpa.h b/drivers/vdpa/sfc/sfc_vdpa.h
index fb97258..08075e5 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.h
+++ b/drivers/vdpa/sfc/sfc_vdpa.h
@@ -60,6 +60,8 @@ struct sfc_vdpa_adapter {
 struct sfc_vdpa_adapter *
 sfc_vdpa_get_adapter_by_dev(struct rte_pci_device *pdev);
 
+struct sfc_vdpa_ops_data *
+sfc_vdpa_get_data_by_dev(struct rte_vdpa_device *vdpa_dev);
 int
 sfc_vdpa_hw_init(struct sfc_vdpa_adapter *sva);
diff --git a/drivers/vdpa/sfc/sfc_vdpa_hw.c b/drivers/vdpa/sfc/sfc_vdpa_hw.c
index 83f3696..84e680f 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_hw.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_hw.c
@@ -273,10 +273,20 @@
 	if (rc != 0)
 		goto fail_estimate_rsrc_limits;
 
+	sfc_vdpa_log_init(sva, "init virtio");
+	rc = efx_virtio_init(enp);
+	if (rc != 0) {
+		sfc_vdpa_err(sva, "virtio init failed: %s", rte_strerror(rc));
+		goto fail_virtio_init;
+	}
+
 	sfc_vdpa_log_init(sva, "done");
 
 	return 0;
 
+fail_virtio_init:
+	efx_nic_fini(enp);
+
 fail_estimate_rsrc_limits:
 fail_nic_reset:
 	efx_nic_unprobe(enp);
@@ -305,6 +315,9 @@
 	sfc_vdpa_log_init(sva, "entry");
 
+	sfc_vdpa_log_init(sva, "virtio fini");
+	efx_virtio_fini(enp);
+
 	sfc_vdpa_log_init(sva, "unprobe nic");
 	efx_nic_unprobe(enp);
diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c
index 71696be..5750944 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.c
@@ -3,17 +3,31 @@
  * Copyright(c) 2020-2021 Xilinx, Inc.
  */
 
+#include
 #include
 #include
 #include
 #include
 
+#include "efx.h"
 #include "sfc_vdpa_ops.h"
 #include "sfc_vdpa.h"
 
-/* Dummy functions for mandatory vDPA ops to pass vDPA device registration.
- * In subsequent patches these ops would be implemented.
+/* These protocol features are needed to enable notifier ctrl */
+#define SFC_VDPA_PROTOCOL_FEATURES \
+		((1ULL << VHOST_USER_PROTOCOL_F_REPLY_ACK) | \
+		 (1ULL << VHOST_USER_PROTOCOL_F_SLAVE_REQ) | \
+		 (1ULL << VHOST_USER_PROTOCOL_F_SLAVE_SEND_FD) | \
+		 (1ULL << VHOST_USER_PROTOCOL_F_HOST_NOTIFIER) | \
+		 (1ULL << VHOST_USER_PROTOCOL_F_LOG_SHMFD))
+
+/*
+ * Set of features which are enabled by default.
+ * Protocol feature bit is needed to enable notification notifier ctrl.
+ */
+#define SFC_VDPA_DEFAULT_FEATURES \
+		(1ULL << VHOST_USER_F_PROTOCOL_FEATURES)
+
 static int
 sfc_vdpa_get_queue_num(struct rte_vdpa_device *vdpa_dev, uint32_t *queue_num)
 {
@@ -24,22 +38,67 @@
 }
 
 static int
+sfc_vdpa_get_device_features(struct sfc_vdpa_ops_data *ops_data)
+{
+	int rc;
+	uint64_t dev_features;
+	efx_nic_t *nic;
+
+	nic = sfc_vdpa_adapter_by_dev_handle(ops_data->dev_handle)->nic;
+
+	rc = efx_virtio_get_features(nic, EFX_VIRTIO_DEVICE_TYPE_NET,
+				     &dev_features);
+	if (rc != 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "could not read device feature: %s",
+			     rte_strerror(rc));
+		return rc;
+	}
+
+	ops_data->dev_features = dev_features;
+
+	sfc_vdpa_info(ops_data->dev_handle,
+		      "device supported virtio features : 0x%" PRIx64,
+		      ops_data->dev_features);
+
+	return 0;
+}
+
+static int
 sfc_vdpa_get_features(struct rte_vdpa_device *vdpa_dev, uint64_t *features)
 {
-	RTE_SET_USED(vdpa_dev);
-	RTE_SET_USED(features);
+	struct sfc_vdpa_ops_data *ops_data;
 
-	return -1;
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL)
+		return -1;
+
+	*features = ops_data->drv_features;
+
+	sfc_vdpa_info(ops_data->dev_handle,
+		      "vDPA ops get_feature :: features : 0x%" PRIx64,
+		      *features);
+
+	return 0;
 }
 
 static int
 sfc_vdpa_get_protocol_features(struct rte_vdpa_device *vdpa_dev,
 			       uint64_t *features)
 {
-	RTE_SET_USED(vdpa_dev);
-	RTE_SET_USED(features);
+	struct sfc_vdpa_ops_data *ops_data;
 
-	return -1;
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL)
+		return -1;
+
+	*features = SFC_VDPA_PROTOCOL_FEATURES;
+
+	sfc_vdpa_info(ops_data->dev_handle,
+		      "vDPA ops get_protocol_feature :: features : 0x%" PRIx64,
+		      *features);
+
+	return 0;
 }
 
 static int
@@ -91,6 +150,7 @@ struct sfc_vdpa_ops_data *
 {
 	struct sfc_vdpa_ops_data *ops_data;
 	struct rte_pci_device *pci_dev;
+	int rc;
 
 	/* Create vDPA ops context */
 	ops_data = rte_zmalloc("vdpa", sizeof(struct sfc_vdpa_ops_data), 0);
@@ -111,10 +171,25 @@ struct sfc_vdpa_ops_data *
 		goto fail_register_device;
 	}
 
+	/* Read supported device features */
+	sfc_vdpa_log_init(dev_handle, "get device feature");
+	rc = sfc_vdpa_get_device_features(ops_data);
+	if (rc != 0)
+		goto fail_get_dev_feature;
+
+	/* Driver features are superset of device supported feature
+	 * and any additional features supported by the driver.
+	 */
+	ops_data->drv_features =
+		ops_data->dev_features | SFC_VDPA_DEFAULT_FEATURES;
+
 	ops_data->state = SFC_VDPA_STATE_INITIALIZED;
 
 	return ops_data;
 
+fail_get_dev_feature:
+	rte_vdpa_unregister_device(ops_data->vdpa_dev);
+
 fail_register_device:
 	rte_free(ops_data);
 
 	return NULL;
diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.h b/drivers/vdpa/sfc/sfc_vdpa_ops.h
index 817b302..21cbb73 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.h
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.h
@@ -26,6 +26,9 @@ struct sfc_vdpa_ops_data {
 	struct rte_vdpa_device	*vdpa_dev;
 	enum sfc_vdpa_context	vdpa_context;
 	enum sfc_vdpa_state	state;
+
+	uint64_t		dev_features;
+	uint64_t		drv_features;
 };
 
 struct sfc_vdpa_ops_data *
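To make the feature words printed by the new ops easier to read, here is a
small illustrative decoder (not part of the patch; bit numbers follow the
VIRTIO 1.x and vhost-user specifications) for a few of the bits that sfc.ini
above declares as supported. Because the driver ORs SFC_VDPA_DEFAULT_FEATURES
into drv_features, bit 30 should always read back as set.

#include <stdbool.h>
#include <stdint.h>

#define VIRTIO_NET_F_CSUM		0	/* "csum" in sfc.ini */
#define VIRTIO_NET_F_GUEST_CSUM		1	/* "guest csum" */
#define VIRTIO_NET_F_MRG_RXBUF		15	/* "mrg rxbuf" */
#define VHOST_USER_F_PROTOCOL_FEATURES	30	/* gates protocol features */

static inline bool
feature_bit_is_set(uint64_t features, unsigned int bit)
{
	return (features & (1ULL << bit)) != 0;
}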
From patchwork Tue Jul 6 16:44:12 2021
X-Patchwork-Submitter: Vijay Srivastava
X-Patchwork-Id: 95466
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Vijay Srivastava <vsrivast@xilinx.com>
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com, andrew.rybchenko@oktetlabs.ru, Vijay Kumar Srivastava <vsrivast@xilinx.com>
Date: Tue, 6 Jul 2021 22:14:12 +0530
Message-ID: <20210706164418.32615-5-vsrivast@xilinx.com>
In-Reply-To: <20210706164418.32615-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com>
Subject: [dpdk-dev] [PATCH 04/10] vdpa/sfc: get device supported max queue count
SFS:(4636009)(346002)(39860400002)(376002)(396003)(136003)(46966006)(36840700001)(8676002)(36756003)(70586007)(26005)(186003)(70206006)(36906005)(44832011)(9786002)(316002)(478600001)(1076003)(2616005)(6916009)(7636003)(6666004)(107886003)(82740400003)(83380400001)(47076005)(54906003)(5660300002)(4326008)(336012)(356005)(8936002)(7696005)(82310400003)(36860700001)(426003)(2906002)(102446001); DIR:OUT; SFP:1101; X-OriginatorOrg: xilinx.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 Jul 2021 16:49:28.6350 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: 824e5379-60dd-4692-233b-08d9409e0551 X-MS-Exchange-CrossTenant-Id: 657af505-d5df-48d0-8300-c31994686c5c X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=657af505-d5df-48d0-8300-c31994686c5c; Ip=[149.199.62.198]; Helo=[xsj-pvapexch01.xlnx.xilinx.com] X-MS-Exchange-CrossTenant-AuthSource: BN1NAM02FT022.eop-nam02.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR02MB4443 X-Mailman-Approved-At: Wed, 07 Jul 2021 10:25:28 +0200 Subject: [dpdk-dev] [PATCH 04/10] vdpa/sfc: get device supported max queue count X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vijay Kumar Srivastava Implement vDPA ops get_queue_num to get the maximum number of queues supported by the device. Signed-off-by: Vijay Kumar Srivastava Reviewed-by: Maxime Coquelin --- drivers/vdpa/sfc/sfc_vdpa_ops.c | 16 +++++++++++++--- 1 file changed, 13 insertions(+), 3 deletions(-) diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c index 5750944..6c702e1 100644 --- a/drivers/vdpa/sfc/sfc_vdpa_ops.c +++ b/drivers/vdpa/sfc/sfc_vdpa_ops.c @@ -31,10 +31,20 @@ static int sfc_vdpa_get_queue_num(struct rte_vdpa_device *vdpa_dev, uint32_t *queue_num) { - RTE_SET_USED(vdpa_dev); - RTE_SET_USED(queue_num); + struct sfc_vdpa_ops_data *ops_data; + void *dev; - return -1; + ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev); + if (ops_data == NULL) + return -1; + + dev = ops_data->dev_handle; + *queue_num = sfc_vdpa_adapter_by_dev_handle(dev)->max_queue_count; + + sfc_vdpa_info(dev, "vDPA ops get_queue_num :: supported queue num : %d", + *queue_num); + + return 0; } static int From patchwork Tue Jul 6 16:44:13 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Vijay Srivastava X-Patchwork-Id: 95467 X-Patchwork-Delegate: maxime.coquelin@redhat.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 0BD34A0C4A; Wed, 7 Jul 2021 10:26:07 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 793404149C; Wed, 7 Jul 2021 10:25:36 +0200 (CEST) Received: from NAM11-CO1-obe.outbound.protection.outlook.com (mail-co1nam11on2074.outbound.protection.outlook.com [40.107.220.74]) by mails.dpdk.org (Postfix) with ESMTP id B8CD34128B for ; Tue, 6 Jul 2021 18:49:30 +0200 (CEST) ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none; 
From patchwork Tue Jul 6 16:44:13 2021
X-Patchwork-Submitter: Vijay Srivastava
X-Patchwork-Id: 95467
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Vijay Srivastava <vsrivast@xilinx.com>
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com, andrew.rybchenko@oktetlabs.ru, Vijay Kumar Srivastava <vsrivast@xilinx.com>
Date: Tue, 6 Jul 2021 22:14:13 +0530
Message-ID: <20210706164418.32615-6-vsrivast@xilinx.com>
In-Reply-To: <20210706164418.32615-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com>
Subject: [dpdk-dev] [PATCH 05/10] vdpa/sfc: add support to get VFIO device fd
From: Vijay Kumar Srivastava <vsrivast@xilinx.com>

Implement vDPA ops get_vfio_device_fd to get the VFIO device fd.
Signed-off-by: Vijay Kumar Srivastava <vsrivast@xilinx.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 drivers/vdpa/sfc/sfc_vdpa_ops.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c
index 6c702e1..5253adb 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.c
@@ -145,6 +145,29 @@
 	return -1;
 }
 
+static int
+sfc_vdpa_get_vfio_device_fd(int vid)
+{
+	struct rte_vdpa_device *vdpa_dev;
+	struct sfc_vdpa_ops_data *ops_data;
+	int vfio_dev_fd;
+	void *dev;
+
+	vdpa_dev = rte_vhost_get_vdpa_device(vid);
+
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL)
+		return -1;
+
+	dev = ops_data->dev_handle;
+	vfio_dev_fd = sfc_vdpa_adapter_by_dev_handle(dev)->vfio_dev_fd;
+
+	sfc_vdpa_info(dev, "vDPA ops get_vfio_device_fd :: vfio fd : %d",
+		      vfio_dev_fd);
+
+	return vfio_dev_fd;
+}
+
 static struct rte_vdpa_dev_ops sfc_vdpa_ops = {
 	.get_queue_num = sfc_vdpa_get_queue_num,
 	.get_features = sfc_vdpa_get_features,
@@ -153,6 +176,7 @@
 	.dev_close = sfc_vdpa_dev_close,
 	.set_vring_state = sfc_vdpa_set_vring_state,
 	.set_features = sfc_vdpa_set_features,
+	.get_vfio_device_fd = sfc_vdpa_get_vfio_device_fd,
 };
 
 struct sfc_vdpa_ops_data *
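What the returned fd is typically used for, as a sketch (the region index and
mapping length are assumptions for illustration): the vhost side queries a
BAR region on the VFIO device fd and mmap()s part of it so doorbells can be
written directly. Both ioctls below are standard VFIO uAPI.

#include <stddef.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/vfio.h>

static void *
map_device_region(int vfio_dev_fd, unsigned int index, size_t len)
{
	struct vfio_region_info reg = { .argsz = sizeof(reg) };
	void *addr;

	reg.index = index;
	if (ioctl(vfio_dev_fd, VFIO_DEVICE_GET_REGION_INFO, &reg) != 0)
		return NULL;
	if (len > reg.size)
		return NULL;

	/* reg.offset is the mmap cookie for this region on the VFIO fd */
	addr = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED,
		    vfio_dev_fd, reg.offset);
	return addr == MAP_FAILED ? NULL : addr;
}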
From patchwork Tue Jul 6 16:44:14 2021
X-Patchwork-Submitter: Vijay Srivastava
X-Patchwork-Id: 95468
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Vijay Srivastava <vsrivast@xilinx.com>
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com, andrew.rybchenko@oktetlabs.ru, Vijay Kumar Srivastava <vsrivast@xilinx.com>
Date: Tue, 6 Jul 2021 22:14:14 +0530
Message-ID: <20210706164418.32615-7-vsrivast@xilinx.com>
In-Reply-To: <20210706164418.32615-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com>
Subject: [dpdk-dev] [PATCH 06/10] vdpa/sfc: add support for dev conf and dev close ops
From: Vijay Kumar Srivastava <vsrivast@xilinx.com>

Implement vDPA ops dev_conf and dev_close for DMA mapping, interrupt and
virtqueue configurations.
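A condensed call-flow of the two ops added here; the helpers below only
restate, for readability, the sequences the patch implements (every named
function is defined in the diff that follows):

/* dev_conf: under the adapter lock, map guest memory and create the
 * virtqueues, then enable MSI-X eventfds and start every virtqueue. */
static int
dev_conf_flow(struct sfc_vdpa_ops_data *ops_data)
{
	if (sfc_vdpa_configure(ops_data) != 0)
		return -1;
	if (sfc_vdpa_start(ops_data) != 0) {
		sfc_vdpa_close(ops_data);	/* undo configure */
		return -1;
	}
	return 0;
}

/* dev_close: the mirror image - stop virtqueues (saving ring indexes),
 * disable interrupts, destroy virtqueues and unmap guest memory. */
static void
dev_close_flow(struct sfc_vdpa_ops_data *ops_data)
{
	sfc_vdpa_stop(ops_data);
	sfc_vdpa_close(ops_data);
}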
Signed-off-by: Vijay Kumar Srivastava <vsrivast@xilinx.com>
---
 drivers/vdpa/sfc/sfc_vdpa.c     |   6 +
 drivers/vdpa/sfc/sfc_vdpa.h     |  43 ++++
 drivers/vdpa/sfc/sfc_vdpa_hw.c  |  70 ++++++
 drivers/vdpa/sfc/sfc_vdpa_ops.c | 527 ++++++++++++++++++++++++++++++++++++++--
 drivers/vdpa/sfc/sfc_vdpa_ops.h |  28 +++
 5 files changed, 654 insertions(+), 20 deletions(-)

diff --git a/drivers/vdpa/sfc/sfc_vdpa.c b/drivers/vdpa/sfc/sfc_vdpa.c
index 9c12dcb..ca13483 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.c
+++ b/drivers/vdpa/sfc/sfc_vdpa.c
@@ -246,6 +246,8 @@ struct sfc_vdpa_ops_data *
 	sfc_vdpa_log_init(sva, "entry");
 
+	sfc_vdpa_adapter_lock_init(sva);
+
 	sfc_vdpa_log_init(sva, "vfio init");
 	if (sfc_vdpa_vfio_setup(sva) < 0) {
 		sfc_vdpa_err(sva, "failed to setup device %s", pci_dev->name);
@@ -280,6 +282,8 @@ struct sfc_vdpa_ops_data *
 	sfc_vdpa_vfio_teardown(sva);
 
 fail_vfio_setup:
+	sfc_vdpa_adapter_lock_fini(sva);
+
 fail_set_log_prefix:
 	rte_free(sva);
 
@@ -311,6 +315,8 @@ struct sfc_vdpa_ops_data *
 	sfc_vdpa_vfio_teardown(sva);
 
+	sfc_vdpa_adapter_lock_fini(sva);
+
 	rte_free(sva);
 
 	return 0;
diff --git a/drivers/vdpa/sfc/sfc_vdpa.h b/drivers/vdpa/sfc/sfc_vdpa.h
index 08075e5..b103b0a 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.h
+++ b/drivers/vdpa/sfc/sfc_vdpa.h
@@ -80,10 +80,53 @@ struct sfc_vdpa_ops_data *
 void
 sfc_vdpa_dma_free(struct sfc_vdpa_adapter *sva, efsys_mem_t *esmp);
 
+int
+sfc_vdpa_dma_map(struct sfc_vdpa_ops_data *vdpa_data, bool do_map);
+
 static inline struct sfc_vdpa_adapter *
 sfc_vdpa_adapter_by_dev_handle(void *dev_handle)
 {
 	return (struct sfc_vdpa_adapter *)dev_handle;
 }
 
+/*
+ * Add wrapper functions to acquire/release lock to be able to remove or
+ * change the lock in one place.
+ */
+static inline void
+sfc_vdpa_adapter_lock_init(struct sfc_vdpa_adapter *sva)
+{
+	rte_spinlock_init(&sva->lock);
+}
+
+static inline int
+sfc_vdpa_adapter_is_locked(struct sfc_vdpa_adapter *sva)
+{
+	return rte_spinlock_is_locked(&sva->lock);
+}
+
+static inline void
+sfc_vdpa_adapter_lock(struct sfc_vdpa_adapter *sva)
+{
+	rte_spinlock_lock(&sva->lock);
+}
+
+static inline int
+sfc_vdpa_adapter_trylock(struct sfc_vdpa_adapter *sva)
+{
+	return rte_spinlock_trylock(&sva->lock);
+}
+
+static inline void
+sfc_vdpa_adapter_unlock(struct sfc_vdpa_adapter *sva)
+{
+	rte_spinlock_unlock(&sva->lock);
+}
+
+static inline void
+sfc_vdpa_adapter_lock_fini(__rte_unused struct sfc_vdpa_adapter *sva)
+{
+	/* Just for symmetry of the API */
+}
+
 #endif /* _SFC_VDPA_H */
diff --git a/drivers/vdpa/sfc/sfc_vdpa_hw.c b/drivers/vdpa/sfc/sfc_vdpa_hw.c
index 84e680f..047bcc4 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_hw.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_hw.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include
 
 #include "efx.h"
 #include "sfc_vdpa.h"
@@ -104,6 +105,75 @@
 	memset(esmp, 0, sizeof(*esmp));
 }
 
+int
+sfc_vdpa_dma_map(struct sfc_vdpa_ops_data *ops_data, bool do_map)
+{
+	uint32_t i, j;
+	int rc;
+	struct rte_vhost_memory *vhost_mem = NULL;
+	struct rte_vhost_mem_region *mem_reg = NULL;
+	int vfio_container_fd;
+	void *dev;
+
+	dev = ops_data->dev_handle;
+	vfio_container_fd =
+		sfc_vdpa_adapter_by_dev_handle(dev)->vfio_container_fd;
+
+	rc = rte_vhost_get_mem_table(ops_data->vid, &vhost_mem);
+	if (rc < 0) {
+		sfc_vdpa_err(dev,
+			     "failed to get VM memory layout");
+		goto error;
+	}
+
+	for (i = 0; i < vhost_mem->nregions; i++) {
+		mem_reg = &vhost_mem->regions[i];
+
+		if (do_map) {
+			rc = rte_vfio_container_dma_map(vfio_container_fd,
+						mem_reg->host_user_addr,
+						mem_reg->guest_phys_addr,
+						mem_reg->size);
+			if (rc < 0) {
+				sfc_vdpa_err(dev,
%s", + rte_strerror(rte_errno)); + goto failed_vfio_dma_map; + } + } else { + rc = rte_vfio_container_dma_unmap(vfio_container_fd, + mem_reg->host_user_addr, + mem_reg->guest_phys_addr, + mem_reg->size); + if (rc < 0) { + sfc_vdpa_err(dev, + "DMA unmap failed : %s", + rte_strerror(rte_errno)); + goto error; + } + } + } + + free(vhost_mem); + + return 0; + +failed_vfio_dma_map: + for (j = 0; j < i; j++) { + mem_reg = &vhost_mem->regions[j]; + rc = rte_vfio_container_dma_unmap(vfio_container_fd, + mem_reg->host_user_addr, + mem_reg->guest_phys_addr, + mem_reg->size); + } + +error: + if (vhost_mem) + free(vhost_mem); + + return rc; +} + static int sfc_vdpa_mem_bar_init(struct sfc_vdpa_adapter *sva, const efx_bar_region_t *mem_ebrp) diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c index 5253adb..4228044 100644 --- a/drivers/vdpa/sfc/sfc_vdpa_ops.c +++ b/drivers/vdpa/sfc/sfc_vdpa_ops.c @@ -3,10 +3,13 @@ * Copyright(c) 2020-2021 Xilinx, Inc. */ +#include + #include #include #include #include +#include #include #include "efx.h" @@ -28,24 +31,12 @@ #define SFC_VDPA_DEFAULT_FEATURES \ (1ULL << VHOST_USER_F_PROTOCOL_FEATURES) -static int -sfc_vdpa_get_queue_num(struct rte_vdpa_device *vdpa_dev, uint32_t *queue_num) -{ - struct sfc_vdpa_ops_data *ops_data; - void *dev; - - ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev); - if (ops_data == NULL) - return -1; - - dev = ops_data->dev_handle; - *queue_num = sfc_vdpa_adapter_by_dev_handle(dev)->max_queue_count; +#define SFC_VDPA_MSIX_IRQ_SET_BUF_LEN \ + (sizeof(struct vfio_irq_set) + \ + sizeof(int) * (SFC_VDPA_MAX_QUEUE_PAIRS * 2 + 1)) - sfc_vdpa_info(dev, "vDPA ops get_queue_num :: supported queue num : %d", - *queue_num); - - return 0; -} +/* It will be used for target VF when calling function is not PF */ +#define SFC_VDPA_VF_NULL 0xFFFF static int sfc_vdpa_get_device_features(struct sfc_vdpa_ops_data *ops_data) @@ -74,6 +65,438 @@ return 0; } +static uint64_t +hva_to_gpa(int vid, uint64_t hva) +{ + struct rte_vhost_memory *vhost_mem = NULL; + struct rte_vhost_mem_region *mem_reg = NULL; + uint32_t i; + uint64_t gpa = 0; + + if (rte_vhost_get_mem_table(vid, &vhost_mem) < 0) + goto error; + + for (i = 0; i < vhost_mem->nregions; i++) { + mem_reg = &vhost_mem->regions[i]; + + if (hva >= mem_reg->host_user_addr && + hva < mem_reg->host_user_addr + mem_reg->size) { + gpa = (hva - mem_reg->host_user_addr) + + mem_reg->guest_phys_addr; + break; + } + } + +error: + if (vhost_mem) + free(vhost_mem); + return gpa; +} + +static int +sfc_vdpa_enable_vfio_intr(struct sfc_vdpa_ops_data *ops_data) +{ + int rc; + int *irq_fd_ptr; + int vfio_dev_fd; + uint32_t i, num_vring; + struct rte_vhost_vring vring; + struct vfio_irq_set *irq_set; + struct rte_pci_device *pci_dev; + char irq_set_buf[SFC_VDPA_MSIX_IRQ_SET_BUF_LEN]; + void *dev; + + num_vring = rte_vhost_get_vring_num(ops_data->vid); + dev = ops_data->dev_handle; + vfio_dev_fd = sfc_vdpa_adapter_by_dev_handle(dev)->vfio_dev_fd; + pci_dev = sfc_vdpa_adapter_by_dev_handle(dev)->pdev; + + irq_set = (struct vfio_irq_set *)irq_set_buf; + irq_set->argsz = sizeof(irq_set_buf); + irq_set->count = num_vring + 1; + irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | + VFIO_IRQ_SET_ACTION_TRIGGER; + irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX; + irq_set->start = 0; + irq_fd_ptr = (int *)&irq_set->data; + irq_fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = pci_dev->intr_handle.fd; + + for (i = 0; i < num_vring; i++) { + rte_vhost_get_vhost_vring(ops_data->vid, i, &vring); + irq_fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] = 
+							vring.callfd;
+	}
+
+	rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	if (rc) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "error enabling MSI-X interrupts: %s",
+			     strerror(errno));
+		return -1;
+	}
+
+	return 0;
+}
+
+static int
+sfc_vdpa_disable_vfio_intr(struct sfc_vdpa_ops_data *ops_data)
+{
+	int rc;
+	int vfio_dev_fd;
+	struct vfio_irq_set *irq_set;
+	char irq_set_buf[SFC_VDPA_MSIX_IRQ_SET_BUF_LEN];
+	void *dev;
+
+	dev = ops_data->dev_handle;
+	vfio_dev_fd = sfc_vdpa_adapter_by_dev_handle(dev)->vfio_dev_fd;
+
+	irq_set = (struct vfio_irq_set *)irq_set_buf;
+	irq_set->argsz = sizeof(irq_set_buf);
+	irq_set->count = 0;
+	irq_set->flags = VFIO_IRQ_SET_DATA_NONE | VFIO_IRQ_SET_ACTION_TRIGGER;
+	irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
+	irq_set->start = 0;
+
+	rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	if (rc) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "error disabling MSI-X interrupts: %s",
+			     strerror(errno));
+		return -1;
+	}
+
+	return 0;
+}
+
+static int
+sfc_vdpa_get_vring_info(struct sfc_vdpa_ops_data *ops_data,
+			int vq_num, struct sfc_vdpa_vring_info *vring)
+{
+	int rc;
+	uint64_t gpa;
+	struct rte_vhost_vring vq;
+
+	rc = rte_vhost_get_vhost_vring(ops_data->vid, vq_num, &vq);
+	if (rc < 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "get vhost vring failed: %s", rte_strerror(rc));
+		return rc;
+	}
+
+	gpa = hva_to_gpa(ops_data->vid, (uint64_t)(uintptr_t)vq.desc);
+	if (gpa == 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "fail to get GPA for descriptor ring.");
+		goto fail_vring_map;
+	}
+	vring->desc = gpa;
+
+	gpa = hva_to_gpa(ops_data->vid, (uint64_t)(uintptr_t)vq.avail);
+	if (gpa == 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "fail to get GPA for available ring.");
+		goto fail_vring_map;
+	}
+	vring->avail = gpa;
+
+	gpa = hva_to_gpa(ops_data->vid, (uint64_t)(uintptr_t)vq.used);
+	if (gpa == 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "fail to get GPA for used ring.");
+		goto fail_vring_map;
+	}
+	vring->used = gpa;
+
+	vring->size = vq.size;
+
+	rc = rte_vhost_get_vring_base(ops_data->vid, vq_num,
+				      &vring->last_avail_idx,
+				      &vring->last_used_idx);
+
+	return rc;
+
+fail_vring_map:
+	return -1;
+}
+
+static int
+sfc_vdpa_virtq_start(struct sfc_vdpa_ops_data *ops_data, int vq_num)
+{
+	int rc;
+	efx_virtio_vq_t *vq;
+	struct sfc_vdpa_vring_info vring;
+	efx_virtio_vq_cfg_t vq_cfg;
+	efx_virtio_vq_dyncfg_t vq_dyncfg;
+
+	vq = ops_data->vq_cxt[vq_num].vq;
+	if (vq == NULL)
+		return -1;
+
+	rc = sfc_vdpa_get_vring_info(ops_data, vq_num, &vring);
+	if (rc < 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "get vring info failed: %s", rte_strerror(rc));
+		goto fail_vring_info;
+	}
+
+	vq_cfg.evvc_target_vf = SFC_VDPA_VF_NULL;
+
+	/* even virtqueue for RX and odd for TX */
+	if (vq_num % 2) {
+		vq_cfg.evvc_type = EFX_VIRTIO_VQ_TYPE_NET_TXQ;
+		sfc_vdpa_info(ops_data->dev_handle,
+			      "configure virtqueue # %d (TXQ)", vq_num);
+	} else {
+		vq_cfg.evvc_type = EFX_VIRTIO_VQ_TYPE_NET_RXQ;
+		sfc_vdpa_info(ops_data->dev_handle,
+			      "configure virtqueue # %d (RXQ)", vq_num);
+	}
+
+	vq_cfg.evvc_vq_num = vq_num;
+	vq_cfg.evvc_desc_tbl_addr = vring.desc;
+	vq_cfg.evvc_avail_ring_addr = vring.avail;
+	vq_cfg.evvc_used_ring_addr = vring.used;
+	vq_cfg.evvc_vq_size = vring.size;
+
+	vq_dyncfg.evvd_vq_pidx = vring.last_used_idx;
+	vq_dyncfg.evvd_vq_cidx = vring.last_avail_idx;
+
+	/* MSI-X vector is function-relative */
+	vq_cfg.evvc_msix_vector = RTE_INTR_VEC_RXTX_OFFSET + vq_num;
+	if (ops_data->vdpa_context == SFC_VDPA_AS_VF)
+		vq_cfg.evvc_pas_id = 0;
+	vq_cfg.evcc_features =
+				ops_data->dev_features &
+				ops_data->req_features;
+
+	/* Start virtqueue */
+	rc = efx_virtio_qstart(vq, &vq_cfg, &vq_dyncfg);
+	if (rc != 0) {
+		/* destroy virtqueue */
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "virtqueue start failed: %s",
+			     rte_strerror(rc));
+		efx_virtio_qdestroy(vq);
+		goto fail_virtio_qstart;
+	}
+
+	sfc_vdpa_info(ops_data->dev_handle,
+		      "virtqueue started successfully for vq_num %d", vq_num);
+
+	ops_data->vq_cxt[vq_num].enable = B_TRUE;
+
+	return rc;
+
+fail_virtio_qstart:
+fail_vring_info:
+	return rc;
+}
+
+static int
+sfc_vdpa_virtq_stop(struct sfc_vdpa_ops_data *ops_data, int vq_num)
+{
+	int rc;
+	efx_virtio_vq_dyncfg_t vq_idx;
+	efx_virtio_vq_t *vq;
+
+	if (ops_data->vq_cxt[vq_num].enable != B_TRUE)
+		return -1;
+
+	vq = ops_data->vq_cxt[vq_num].vq;
+	if (vq == NULL)
+		return -1;
+
+	/* stop the vq */
+	rc = efx_virtio_qstop(vq, &vq_idx);
+	if (rc == 0) {
+		ops_data->vq_cxt[vq_num].cidx = vq_idx.evvd_vq_cidx;
+		ops_data->vq_cxt[vq_num].pidx = vq_idx.evvd_vq_pidx;
+	}
+	ops_data->vq_cxt[vq_num].enable = B_FALSE;
+
+	return rc;
+}
+
+static int
+sfc_vdpa_configure(struct sfc_vdpa_ops_data *ops_data)
+{
+	int rc, i;
+	int nr_vring;
+	int max_vring_cnt;
+	efx_virtio_vq_t *vq;
+	efx_nic_t *nic;
+	void *dev;
+
+	dev = ops_data->dev_handle;
+	nic = sfc_vdpa_adapter_by_dev_handle(dev)->nic;
+
+	SFC_EFX_ASSERT(ops_data->state == SFC_VDPA_STATE_INITIALIZED);
+
+	ops_data->state = SFC_VDPA_STATE_CONFIGURING;
+
+	nr_vring = rte_vhost_get_vring_num(ops_data->vid);
+	max_vring_cnt =
+		(sfc_vdpa_adapter_by_dev_handle(dev)->max_queue_count * 2);
+
+	/* number of vring should not be more than supported max vq count */
+	if (nr_vring > max_vring_cnt) {
+		sfc_vdpa_err(dev,
+			     "nr_vring (%d) is > max vring count (%d)",
+			     nr_vring, max_vring_cnt);
+		goto fail_vring_num;
+	}
+
+	rc = sfc_vdpa_dma_map(ops_data, true);
+	if (rc) {
+		sfc_vdpa_err(dev,
+			     "DMA map failed: %s", rte_strerror(rc));
+		goto fail_dma_map;
+	}
+
+	for (i = 0; i < nr_vring; i++) {
+		rc = efx_virtio_qcreate(nic, &vq);
+		if ((rc != 0) || (vq == NULL)) {
+			sfc_vdpa_err(dev,
+				     "virtqueue create failed: %s",
+				     rte_strerror(rc));
+			goto fail_vq_create;
+		}
+
+		/* store created virtqueue context */
+		ops_data->vq_cxt[i].vq = vq;
+	}
+
+	ops_data->vq_count = i;
+
+	ops_data->state = SFC_VDPA_STATE_CONFIGURED;
+
+	return 0;
+
+fail_vq_create:
+	sfc_vdpa_dma_map(ops_data, false);
+
+fail_dma_map:
+fail_vring_num:
+	ops_data->state = SFC_VDPA_STATE_INITIALIZED;
+
+	return -1;
+}
+
+static void
+sfc_vdpa_close(struct sfc_vdpa_ops_data *ops_data)
+{
+	int i;
+
+	if (ops_data->state != SFC_VDPA_STATE_CONFIGURED)
+		return;
+
+	ops_data->state = SFC_VDPA_STATE_CLOSING;
+
+	for (i = 0; i < ops_data->vq_count; i++) {
+		if (ops_data->vq_cxt[i].vq == NULL)
+			continue;
+
+		efx_virtio_qdestroy(ops_data->vq_cxt[i].vq);
+	}
+
+	sfc_vdpa_dma_map(ops_data, false);
+
+	ops_data->state = SFC_VDPA_STATE_INITIALIZED;
+}
+
+static void
+sfc_vdpa_stop(struct sfc_vdpa_ops_data *ops_data)
+{
+	int i;
+	int rc;
+
+	if (ops_data->state != SFC_VDPA_STATE_STARTED)
+		return;
+
+	ops_data->state = SFC_VDPA_STATE_STOPPING;
+
+	for (i = 0; i < ops_data->vq_count; i++) {
+		rc = sfc_vdpa_virtq_stop(ops_data, i);
+		if (rc != 0)
+			continue;
+	}
+
+	sfc_vdpa_disable_vfio_intr(ops_data);
+
+	ops_data->state = SFC_VDPA_STATE_CONFIGURED;
+}
+
+static int
+sfc_vdpa_start(struct sfc_vdpa_ops_data *ops_data)
+{
+	int i, j;
+	int rc;
+
+	SFC_EFX_ASSERT(ops_data->state == SFC_VDPA_STATE_CONFIGURED);
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "entry");
+
+	ops_data->state = SFC_VDPA_STATE_STARTING;
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "enable interrupts");
+	rc = sfc_vdpa_enable_vfio_intr(ops_data);
+	if (rc < 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "vfio intr allocation failed: %s",
+			     rte_strerror(rc));
+		goto fail_enable_vfio_intr;
+	}
+
+	rte_vhost_get_negotiated_features(ops_data->vid,
+					  &ops_data->req_features);
+
+	sfc_vdpa_info(ops_data->dev_handle,
+		      "negotiated feature : 0x%" PRIx64,
+		      ops_data->req_features);
+
+	for (i = 0; i < ops_data->vq_count; i++) {
+		sfc_vdpa_log_init(ops_data->dev_handle,
+				  "starting vq# %d", i);
+		rc = sfc_vdpa_virtq_start(ops_data, i);
+		if (rc != 0)
+			goto fail_vq_start;
+	}
+
+	ops_data->state = SFC_VDPA_STATE_STARTED;
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "done");
+
+	return 0;
+
+fail_vq_start:
+	/* stop already started virtqueues */
+	for (j = 0; j < i; j++)
+		sfc_vdpa_virtq_stop(ops_data, j);
+	sfc_vdpa_disable_vfio_intr(ops_data);
+
+fail_enable_vfio_intr:
+	ops_data->state = SFC_VDPA_STATE_CONFIGURED;
+
+	return rc;
+}
+
+static int
+sfc_vdpa_get_queue_num(struct rte_vdpa_device *vdpa_dev, uint32_t *queue_num)
+{
+	struct sfc_vdpa_ops_data *ops_data;
+	void *dev;
+
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL)
+		return -1;
+
+	dev = ops_data->dev_handle;
+	*queue_num = sfc_vdpa_adapter_by_dev_handle(dev)->max_queue_count;
+
+	sfc_vdpa_info(dev, "vDPA ops get_queue_num :: supported queue num : %d",
+		      *queue_num);
+
+	return 0;
+}
+
 static int
 sfc_vdpa_get_features(struct rte_vdpa_device *vdpa_dev, uint64_t *features)
 {
@@ -114,7 +537,53 @@
 static int
 sfc_vdpa_dev_config(int vid)
 {
-	RTE_SET_USED(vid);
+	struct rte_vdpa_device *vdpa_dev;
+	int rc;
+	struct sfc_vdpa_ops_data *ops_data;
+
+	vdpa_dev = rte_vhost_get_vdpa_device(vid);
+
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "invalid vDPA device : %p, vid : %d",
+			     vdpa_dev, vid);
+		return -1;
+	}
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "entry");
+
+	ops_data->vid = vid;
+
+	sfc_vdpa_adapter_lock(ops_data->dev_handle);
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "configuring");
+	rc = sfc_vdpa_configure(ops_data);
+	if (rc != 0)
+		goto fail_vdpa_config;
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "starting");
+	rc = sfc_vdpa_start(ops_data);
+	if (rc != 0)
+		goto fail_vdpa_start;
+
+	sfc_vdpa_adapter_unlock(ops_data->dev_handle);
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "vhost notifier ctrl");
+	if (rte_vhost_host_notifier_ctrl(vid, RTE_VHOST_QUEUE_ALL, true) != 0)
+		sfc_vdpa_info(ops_data->dev_handle,
+			      "vDPA (%s): software relay for notify is used.",
+			      vdpa_dev->device->name);
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "done");
+
+	return 0;
+
+fail_vdpa_start:
+	sfc_vdpa_close(ops_data);
+
+fail_vdpa_config:
+	sfc_vdpa_adapter_unlock(ops_data->dev_handle);
 
 	return -1;
 }
@@ -122,9 +591,27 @@
 static int
 sfc_vdpa_dev_close(int vid)
 {
-	RTE_SET_USED(vid);
+	struct rte_vdpa_device *vdpa_dev;
+	struct sfc_vdpa_ops_data *ops_data;
 
-	return -1;
+	vdpa_dev = rte_vhost_get_vdpa_device(vid);
+
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "invalid vDPA device : %p, vid : %d",
+			     vdpa_dev, vid);
+		return -1;
+	}
+
+	sfc_vdpa_adapter_lock(ops_data->dev_handle);
+
+	sfc_vdpa_stop(ops_data);
+	sfc_vdpa_close(ops_data);
+
+	sfc_vdpa_adapter_unlock(ops_data->dev_handle);
+
+	return 0;
 }
 
 static int
diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.h b/drivers/vdpa/sfc/sfc_vdpa_ops.h
index 21cbb73..8d553c5 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.h
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.h
@@ -18,17 +18,45 @@ enum sfc_vdpa_context {
 enum sfc_vdpa_state {
 	SFC_VDPA_STATE_UNINITIALIZED = 0,
 	SFC_VDPA_STATE_INITIALIZED,
+	SFC_VDPA_STATE_CONFIGURING,
+	SFC_VDPA_STATE_CONFIGURED,
+	SFC_VDPA_STATE_CLOSING,
+	SFC_VDPA_STATE_CLOSED,
+	SFC_VDPA_STATE_STARTING,
+	SFC_VDPA_STATE_STARTED,
+	SFC_VDPA_STATE_STOPPING,
 	SFC_VDPA_STATE_NSTATES
 };
 
+struct sfc_vdpa_vring_info {
+	uint64_t	desc;
+	uint64_t	avail;
+	uint64_t	used;
+	uint64_t	size;
+	uint16_t	last_avail_idx;
+	uint16_t	last_used_idx;
+};
+
+typedef struct sfc_vdpa_vq_context_s {
+	uint8_t		enable;
+	uint32_t	pidx;
+	uint32_t	cidx;
+	efx_virtio_vq_t	*vq;
+} sfc_vdpa_vq_context_t;
+
 struct sfc_vdpa_ops_data {
 	void			*dev_handle;
+	int			vid;
 	struct rte_vdpa_device	*vdpa_dev;
 	enum sfc_vdpa_context	vdpa_context;
 	enum sfc_vdpa_state	state;
 
 	uint64_t		dev_features;
 	uint64_t		drv_features;
+	uint64_t		req_features;
+
+	uint16_t		vq_count;
+	struct sfc_vdpa_vq_context_s	vq_cxt[SFC_VDPA_MAX_QUEUE_PAIRS * 2];
 };
 
 struct sfc_vdpa_ops_data *
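One detail worth noting in sfc_vdpa_virtq_stop() above: the consumer/producer
indexes returned by efx_virtio_qstop() are saved in the per-vq context. A
sketch (an assumed helper, not part of the patch) of how such saved indexes
can be handed back to the vhost library so a reconnecting guest resumes
cleanly, using the public rte_vhost API:

#include <stdint.h>
#include <rte_vhost.h>

static int
restore_vring_base(int vid, uint16_t vq_num,
		   uint16_t last_avail_idx, uint16_t last_used_idx)
{
	/* Push the saved indexes back into the vhost library's view of
	 * the ring so the next start resumes where the HW stopped. */
	return rte_vhost_set_vring_base(vid, vq_num,
					last_avail_idx, last_used_idx);
}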
From patchwork Tue Jul 6 16:44:15 2021
X-Patchwork-Submitter: Vijay Srivastava
X-Patchwork-Id: 95471
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Vijay Srivastava <vsrivast@xilinx.com>
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com, andrew.rybchenko@oktetlabs.ru, Vijay Kumar Srivastava <vsrivast@xilinx.com>
Date: Tue, 6 Jul 2021 22:14:15 +0530
Message-ID: <20210706164418.32615-8-vsrivast@xilinx.com>
In-Reply-To: <20210706164418.32615-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com>
Subject: [dpdk-dev] [PATCH 07/10] vdpa/sfc: add support to get queue notify area info

From: Vijay Kumar Srivastava

Implement the vDPA ops get_notify_area to get the notify area info of
the queue.

Signed-off-by: Vijay Kumar Srivastava
---
 drivers/vdpa/sfc/sfc_vdpa_ops.c | 166 ++++++++++++++++++++++++++++++++++++++--
 drivers/vdpa/sfc/sfc_vdpa_ops.h |   2 +
 2 files changed, 162 insertions(+), 6 deletions(-)

diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c
index 4228044..a7b9085 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.c
@@ -3,6 +3,8 @@
  * Copyright(c) 2020-2021 Xilinx, Inc.
  */

+#include <pthread.h>
+#include <unistd.h>
 #include <sys/ioctl.h>

 #include <rte_errno.h>
@@ -534,6 +536,67 @@
 	return 0;
 }

+static void *
+sfc_vdpa_notify_ctrl(void *arg)
+{
+	struct sfc_vdpa_ops_data *ops_data;
+	int vid;
+
+	ops_data = arg;
+	if (ops_data == NULL)
+		return NULL;
+
+	sfc_vdpa_adapter_lock(ops_data->dev_handle);
+
+	vid = ops_data->vid;
+
+	if (rte_vhost_host_notifier_ctrl(vid, RTE_VHOST_QUEUE_ALL, true) != 0)
+		sfc_vdpa_info(ops_data->dev_handle,
+			      "vDPA (%s): notifier could not be configured",
+			      ops_data->vdpa_dev->device->name);
+
+	sfc_vdpa_adapter_unlock(ops_data->dev_handle);
+
+	return NULL;
+}
+
+static int
+sfc_vdpa_setup_notify_ctrl(int vid)
+{
+	int ret;
+	struct rte_vdpa_device *vdpa_dev;
+	struct sfc_vdpa_ops_data *ops_data;
+
+	vdpa_dev = rte_vhost_get_vdpa_device(vid);
+
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL) {
+		/*
+		 * Do not log via ops_data->dev_handle here: ops_data is
+		 * NULL, so there is no adapter context to log against.
+		 */
+		return -1;
+	}
+
+	ops_data->is_notify_thread_started = false;
+
+	/*
+	 * Use rte_vhost_host_notifier_ctrl() in a dedicated thread to avoid
+	 * a deadlock when multiple VFs are used by a single vdpa
+	 * application and passed through to one VM.
+	 */
+	ret = pthread_create(&ops_data->notify_tid, NULL,
+			     sfc_vdpa_notify_ctrl, ops_data);
+	if (ret != 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "failed to create notify_ctrl thread: %s",
+			     rte_strerror(ret));
+		return -1;
+	}
+	ops_data->is_notify_thread_started = true;
+
+	return 0;
+}
+
 static int
 sfc_vdpa_dev_config(int vid)
 {
@@ -567,18 +630,19 @@
 	if (rc != 0)
 		goto fail_vdpa_start;

-	sfc_vdpa_adapter_unlock(ops_data->dev_handle);
+	rc = sfc_vdpa_setup_notify_ctrl(vid);
+	if (rc != 0)
+		goto fail_vdpa_notify;

-	sfc_vdpa_log_init(ops_data->dev_handle, "vhost notifier ctrl");
-	if (rte_vhost_host_notifier_ctrl(vid, RTE_VHOST_QUEUE_ALL, true) != 0)
-		sfc_vdpa_info(ops_data->dev_handle,
-			      "vDPA (%s): software relay for notify is used.",
-			      vdpa_dev->device->name);
+	sfc_vdpa_adapter_unlock(ops_data->dev_handle);

 	sfc_vdpa_log_init(ops_data->dev_handle, "done");

 	return 0;

+fail_vdpa_notify:
+	sfc_vdpa_stop(ops_data);
+
 fail_vdpa_start:
 	sfc_vdpa_close(ops_data);

@@ -591,6 +655,7 @@
 static int
 sfc_vdpa_dev_close(int vid)
 {
+	int ret;
 	struct rte_vdpa_device *vdpa_dev;
 	struct sfc_vdpa_ops_data *ops_data;

@@ -605,6 +670,23 @@
 	}

 	sfc_vdpa_adapter_lock(ops_data->dev_handle);
+	if (ops_data->is_notify_thread_started == true) {
+		void *status;
+		ret = pthread_cancel(ops_data->notify_tid);
+		if (ret != 0) {
+			sfc_vdpa_err(ops_data->dev_handle,
+				     "failed to cancel notify_ctrl thread: %s",
+				     rte_strerror(ret));
+		}
+
+		ret = pthread_join(ops_data->notify_tid, &status);
+		if (ret != 0) {
+			sfc_vdpa_err(ops_data->dev_handle,
+				     "failed to join terminated notify_ctrl thread: %s",
+				     rte_strerror(ret));
+		}
+	}
+	ops_data->is_notify_thread_started = false;

 	sfc_vdpa_stop(ops_data);
 	sfc_vdpa_close(ops_data);

@@ -655,6 +737,77 @@
 	return vfio_dev_fd;
 }

+static int
+sfc_vdpa_get_notify_area(int vid, int qid, uint64_t *offset, uint64_t *size)
+{
+	int ret;
+	efx_nic_t *nic;
+	int vfio_dev_fd;
+	efx_rc_t rc;
+	unsigned int bar_offset;
+	struct rte_vdpa_device *vdpa_dev;
+	struct sfc_vdpa_ops_data *ops_data;
+	struct vfio_region_info reg = { .argsz = sizeof(reg) };
+	const efx_nic_cfg_t *encp;
+	int max_vring_cnt;
+	int64_t len;
+	void *dev;
+
+	vdpa_dev = rte_vhost_get_vdpa_device(vid);
+
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL)
+		return -1;
+
+	dev = ops_data->dev_handle;
+
+	vfio_dev_fd = sfc_vdpa_adapter_by_dev_handle(dev)->vfio_dev_fd;
+	max_vring_cnt =
+		(sfc_vdpa_adapter_by_dev_handle(dev)->max_queue_count * 2);
+
+	nic = sfc_vdpa_adapter_by_dev_handle(ops_data->dev_handle)->nic;
+	encp = efx_nic_cfg_get(nic);
+
+	if (qid >= max_vring_cnt) {
+		sfc_vdpa_err(dev, "invalid qid : %d", qid);
+		return -1;
+	}
+
+	if (ops_data->vq_cxt[qid].enable != B_TRUE) {
+		sfc_vdpa_err(dev, "vq is not enabled");
+		return -1;
+	}
+
+	rc = efx_virtio_get_doorbell_offset(ops_data->vq_cxt[qid].vq,
+					    &bar_offset);
+	if (rc != 0) {
+		sfc_vdpa_err(dev, "failed to get doorbell offset: %s",
+			     rte_strerror(rc));
+		return rc;
+	}
+
+	reg.index = sfc_vdpa_adapter_by_dev_handle(dev)->mem_bar.esb_rid;
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_REGION_INFO, &reg);
+	if (ret != 0) {
+		sfc_vdpa_err(dev, "could not get device region info: %s",
+			     strerror(errno));
+		return ret;
+	}
+
+	*offset = reg.offset + bar_offset;
+
+	/* Each VI window is split between RX and TX doorbells */
+	len = (1U << encp->enc_vi_window_shift) / 2;
+	if (len >= sysconf(_SC_PAGESIZE))
+		*size = sysconf(_SC_PAGESIZE);
+	else
+		return -1;
+
+	sfc_vdpa_info(dev, "vDPA ops get_notify_area :: offset : 0x%" PRIx64,
+		      *offset);
+
+	return 0;
+}
+
 static struct rte_vdpa_dev_ops sfc_vdpa_ops = {
 	.get_queue_num = sfc_vdpa_get_queue_num,
 	.get_features = sfc_vdpa_get_features,
@@ -664,6 +817,7 @@
 	.set_vring_state = sfc_vdpa_set_vring_state,
 	.set_features = sfc_vdpa_set_features,
 	.get_vfio_device_fd = sfc_vdpa_get_vfio_device_fd,
+	.get_notify_area = sfc_vdpa_get_notify_area,
 };

 struct sfc_vdpa_ops_data *
diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.h b/drivers/vdpa/sfc/sfc_vdpa_ops.h
index 8d553c5..f7523ef 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.h
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.h
@@ -50,6 +50,8 @@ struct sfc_vdpa_ops_data {
 	struct rte_vdpa_device		*vdpa_dev;
 	enum sfc_vdpa_context		vdpa_context;
 	enum sfc_vdpa_state		state;
+	pthread_t			notify_tid;
+	bool				is_notify_thread_started;

 	uint64_t			dev_features;
 	uint64_t			drv_features;
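For context, a sketch of what a consumer typically does with the
(offset, size) pair returned by get_notify_area: map the doorbell page
over the VFIO device fd and kick a queue with a single 16-bit write.
rte_vhost performs the equivalent mapping internally when
rte_vhost_host_notifier_ctrl() succeeds. The helper name below is
invented, and the page alignment of offset is an assumption (implied
by the page-size check in the patch), so treat this as illustrative:

#include <stdint.h>
#include <sys/mman.h>

/* Sketch: map the per-queue doorbell page over the VFIO device fd */
static volatile uint16_t *
example_map_notify_area(int vfio_dev_fd, uint64_t offset, uint64_t size)
{
	void *p;

	p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		 vfio_dev_fd, (off_t)offset);
	if (p == MAP_FAILED)
		return NULL;

	/*
	 * Kicking virtqueue qid is then a single doorbell write:
	 *   *db = (uint16_t)qid;
	 */
	return (volatile uint16_t *)p;
}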
From patchwork Tue Jul 6 16:44:16 2021
X-Patchwork-Submitter: Vijay Srivastava
X-Patchwork-Id: 95470
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Vijay Srivastava <vsrivast@xilinx.com>
To: dev@dpdk.org
CC: maxime.coquelin@redhat.com, chenbo.xia@intel.com,
 andrew.rybchenko@oktetlabs.ru, Vijay Kumar Srivastava
Date: Tue, 6 Jul 2021 22:14:16 +0530
Message-ID: <20210706164418.32615-9-vsrivast@xilinx.com>
In-Reply-To: <20210706164418.32615-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com>
Subject: [dpdk-dev] [PATCH 08/10] vdpa/sfc: add support for MAC filter config

From: Vijay Kumar Srivastava

Add support for unicast and broadcast MAC filter configuration.

Signed-off-by: Vijay Kumar Srivastava
---
 doc/guides/vdpadevs/sfc.rst        |   4 ++
 drivers/vdpa/sfc/meson.build       |   1 +
 drivers/vdpa/sfc/sfc_vdpa.c        |  32 +++++++++
 drivers/vdpa/sfc/sfc_vdpa.h        |  30 ++++++++
 drivers/vdpa/sfc/sfc_vdpa_filter.c | 144 +++++++++++++++++++++++++++++++++++++
 drivers/vdpa/sfc/sfc_vdpa_hw.c     |  10 +++
 drivers/vdpa/sfc/sfc_vdpa_ops.c    |  17 +++++
 7 files changed, 238 insertions(+)
 create mode 100644 drivers/vdpa/sfc/sfc_vdpa_filter.c

diff --git a/doc/guides/vdpadevs/sfc.rst b/doc/guides/vdpadevs/sfc.rst
index abb5900..ae5ef42 100644
--- a/doc/guides/vdpadevs/sfc.rst
+++ b/doc/guides/vdpadevs/sfc.rst
@@ -71,6 +71,10 @@ boolean parameters value.
   **vdpa** device will work as vdpa device and will be probed by vdpa/sfc driver.
   If this parameter is not specified then ef100 device will operate as network device.
+- ``mac`` [mac address]
+
+  Configures the MAC address used to set up MAC filters.
+
 Dynamic Logging Parameters
 ~~~~~~~~~~~~~~~~~~~~~~~~~~

diff --git a/drivers/vdpa/sfc/meson.build b/drivers/vdpa/sfc/meson.build
index aac7c51..f69cba9 100644
--- a/drivers/vdpa/sfc/meson.build
+++ b/drivers/vdpa/sfc/meson.build
@@ -33,4 +33,5 @@ sources = files(
 	'sfc_vdpa_hw.c',
 	'sfc_vdpa_mcdi.c',
 	'sfc_vdpa_ops.c',
+	'sfc_vdpa_filter.c',
 )

diff --git a/drivers/vdpa/sfc/sfc_vdpa.c b/drivers/vdpa/sfc/sfc_vdpa.c
index ca13483..703aa9e 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.c
+++ b/drivers/vdpa/sfc/sfc_vdpa.c
@@ -8,7 +8,9 @@
 #include
 #include
+#include <rte_devargs.h>
 #include
+#include <rte_kvargs.h>
 #include
 #include
 #include
@@ -202,6 +204,31 @@ struct sfc_vdpa_ops_data *
 	return (ret < 0) ? RTE_LOGTYPE_PMD : ret;
 }

+static int
+sfc_vdpa_kvargs_parse(struct sfc_vdpa_adapter *sva)
+{
+	struct rte_pci_device *pci_dev = sva->pdev;
+	struct rte_devargs *devargs = pci_dev->device.devargs;
+	/*
+	 * The mandatory 'class' parameter selects the device class, so
+	 * SFC_EFX_KVARG_DEV_CLASS is included in the list of accepted keys.
+	 */
+	const char **params = (const char *[]){
+		SFC_EFX_KVARG_DEV_CLASS,
+		SFC_VDPA_MAC_ADDR,
+		NULL,
+	};
+
+	if (devargs == NULL)
+		return 0;
+
+	sva->kvargs = rte_kvargs_parse(devargs->args, params);
+	if (sva->kvargs == NULL)
+		return -EINVAL;
+
+	return 0;
+}
+
 static struct rte_pci_id pci_id_sfc_vdpa_efx_map[] = {
 	{ RTE_PCI_DEVICE(EFX_PCI_VENID_XILINX, EFX_PCI_DEVID_RIVERHEAD_VF) },
 	{ .vendor_id = 0, /* sentinel */ },
@@ -244,6 +271,10 @@ struct sfc_vdpa_ops_data *
 	if (ret != 0)
 		goto fail_set_log_prefix;

+	ret = sfc_vdpa_kvargs_parse(sva);
+	if (ret != 0)
+		goto fail_kvargs_parse;
+
 	sfc_vdpa_log_init(sva, "entry");

 	sfc_vdpa_adapter_lock_init(sva);
@@ -284,6 +315,7 @@ struct sfc_vdpa_ops_data *
 fail_vfio_setup:
 	sfc_vdpa_adapter_lock_fini(sva);

+fail_kvargs_parse:
 fail_set_log_prefix:
 	rte_free(sva);

diff --git a/drivers/vdpa/sfc/sfc_vdpa.h b/drivers/vdpa/sfc/sfc_vdpa.h
index b103b0a..fd480ca 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.h
+++ b/drivers/vdpa/sfc/sfc_vdpa.h
@@ -17,8 +17,29 @@
 #include "sfc_vdpa_log.h"
 #include "sfc_vdpa_ops.h"

+#define SFC_VDPA_MAC_ADDR			"mac"
 #define SFC_VDPA_DEFAULT_MCDI_IOVA		0x200000000000

+/* Broadcast & Unicast MAC filters are supported */
+#define SFC_MAX_SUPPORTED_FILTERS		2
+
+/*
+ * Get function-local index of the associated VI from the
+ * virtqueue number.
+ * Queue 0 is reserved for MCDI.
+ */
+#define SFC_VDPA_GET_VI_INDEX(vq_num)	(((vq_num) / 2) + 1)
+
+enum sfc_vdpa_filter_type {
+	SFC_VDPA_BCAST_MAC_FILTER = 0,
+	SFC_VDPA_UCAST_MAC_FILTER = 1,
+	SFC_VDPA_FILTER_NTYPE
+};
+
+typedef struct sfc_vdpa_filter_s {
+	int			filter_cnt;
+	efx_filter_spec_t	spec[SFC_MAX_SUPPORTED_FILTERS];
+} sfc_vdpa_filter_t;
+
 /* Adapter private data */
 struct sfc_vdpa_adapter {
 	TAILQ_ENTRY(sfc_vdpa_adapter)	next;
@@ -32,6 +53,8 @@ struct sfc_vdpa_adapter {
 	struct rte_pci_device		*pdev;
 	struct rte_pci_addr		pci_addr;

+	struct rte_kvargs		*kvargs;
+
 	efx_family_t			family;
 	efx_nic_t			*nic;
 	rte_spinlock_t			nic_lock;
@@ -46,6 +69,8 @@ struct sfc_vdpa_adapter {
 	char				log_prefix[SFC_VDPA_LOG_PREFIX_MAX];
 	uint32_t			logtype_main;

+	sfc_vdpa_filter_t		filters;
+
 	int				vfio_group_fd;
 	int				vfio_dev_fd;
 	int				vfio_container_fd;
@@ -83,6 +108,11 @@ struct sfc_vdpa_ops_data *
 int
 sfc_vdpa_dma_map(struct sfc_vdpa_ops_data *vdpa_data, bool do_map);

+int
+sfc_vdpa_filter_remove(struct sfc_vdpa_ops_data *ops_data);
+int
+sfc_vdpa_filter_config(struct sfc_vdpa_ops_data *ops_data);
+
 static inline struct sfc_vdpa_adapter *
 sfc_vdpa_adapter_by_dev_handle(void *dev_handle)
 {

diff --git a/drivers/vdpa/sfc/sfc_vdpa_filter.c b/drivers/vdpa/sfc/sfc_vdpa_filter.c
new file mode 100644
index 0000000..03b6a5d
--- /dev/null
+++ b/drivers/vdpa/sfc/sfc_vdpa_filter.c
@@ -0,0 +1,144 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#include <rte_errno.h>
+#include <rte_ether.h>
+#include <rte_kvargs.h>
+
+#include "efx.h"
+#include "efx_impl.h"
+#include "sfc_vdpa.h"
+
+static inline int
+sfc_vdpa_get_eth_addr(const char *key __rte_unused,
+		      const char *value, void *extra_args)
+{
+	struct rte_ether_addr *mac_addr = extra_args;
+
+	if (value == NULL || extra_args == NULL)
+		return -EINVAL;
+
+	/* Convert the string with an Ethernet address to rte_ether_addr */
+	if (rte_ether_unformat_addr(value, mac_addr) != 0)
+		return -EINVAL;
+
+	return 0;
+}
+
+static int
+sfc_vdpa_set_mac_filter(efx_nic_t *nic, efx_filter_spec_t *spec,
+			int qid, uint8_t *eth_addr)
+{
+	int rc;
+
+	if (nic == NULL || spec == NULL)
+		return -1;
+
+	spec->efs_priority = EFX_FILTER_PRI_MANUAL;
+	spec->efs_flags = EFX_FILTER_FLAG_RX;
+	spec->efs_dmaq_id = qid;
+
+	rc = efx_filter_spec_set_eth_local(spec, EFX_FILTER_SPEC_VID_UNSPEC,
+					   eth_addr);
+	if (rc != 0)
+		return rc;
+
+	rc = efx_filter_insert(nic, spec);
+	if (rc != 0)
+		return rc;
+
+	return rc;
+}
+
+int sfc_vdpa_filter_config(struct sfc_vdpa_ops_data *ops_data)
+{
+	int rc;
+	int qid;
+	efx_nic_t *nic;
+	struct rte_ether_addr bcast_eth_addr;
+	struct rte_ether_addr ucast_eth_addr;
+	struct sfc_vdpa_adapter *sva;
+	efx_filter_spec_t *spec;
+
+	if (ops_data == NULL)
+		return -1;
+
+	/* Dereference dev_handle only after the NULL check above */
+	sva = ops_data->dev_handle;
+
+	sfc_vdpa_log_init(sva, "entry");
+
+	nic = sva->nic;
+
+	sfc_vdpa_log_init(sva, "process kvarg");
+
+	/* Skip MAC filter configuration if no MAC address is provided */
+	if (rte_kvargs_count(sva->kvargs, SFC_VDPA_MAC_ADDR) == 0) {
+		sfc_vdpa_warn(sva,
+			      "MAC address is not provided, skipping MAC filter configuration");
+		return -1;
+	}
+
+	rc = rte_kvargs_process(sva->kvargs, SFC_VDPA_MAC_ADDR,
+				&sfc_vdpa_get_eth_addr,
+				&ucast_eth_addr);
+	if (rc < 0)
+		return -1;
+
+	/* Create filters on the base queue */
+	qid = SFC_VDPA_GET_VI_INDEX(0);
+
+	sfc_vdpa_log_init(sva, "insert broadcast mac filter");
+
+	EFX_MAC_BROADCAST_ADDR_SET(bcast_eth_addr.addr_bytes);
+	spec = &sva->filters.spec[SFC_VDPA_BCAST_MAC_FILTER];
+
+	rc = sfc_vdpa_set_mac_filter(nic,
+				     spec, qid,
+				     bcast_eth_addr.addr_bytes);
+	if (rc != 0)
+		sfc_vdpa_err(sva,
+			     "broadcast MAC filter insertion failed: %s",
+			     rte_strerror(rc));
+	else
+		sva->filters.filter_cnt++;
+
+	sfc_vdpa_log_init(sva, "insert unicast mac filter");
+	spec = &sva->filters.spec[SFC_VDPA_UCAST_MAC_FILTER];
+
+	rc = sfc_vdpa_set_mac_filter(nic,
+				     spec, qid,
+				     ucast_eth_addr.addr_bytes);
+	if (rc != 0)
+		sfc_vdpa_err(sva,
+			     "unicast MAC filter insertion failed: %s",
+			     rte_strerror(rc));
+	else
+		sva->filters.filter_cnt++;
+
+	sfc_vdpa_log_init(sva, "done");
+
+	return rc;
+}
+
+int sfc_vdpa_filter_remove(struct sfc_vdpa_ops_data *ops_data)
+{
+	int i, rc = 0;
+	struct sfc_vdpa_adapter *sva;
+	efx_nic_t *nic;
+
+	if (ops_data == NULL)
+		return -1;
+
+	/* Dereference dev_handle only after the NULL check above */
+	sva = ops_data->dev_handle;
+	nic = sva->nic;
+
+	for (i = 0; i < sva->filters.filter_cnt; i++) {
+		rc = efx_filter_remove(nic, &(sva->filters.spec[i]));
+		if (rc != 0)
+			sfc_vdpa_err(sva,
+				     "remove HW filter failed for entry %d: %s",
+				     i, rte_strerror(rc));
+	}
+
+	sva->filters.filter_cnt = 0;
+
+	return rc;
+}

diff --git a/drivers/vdpa/sfc/sfc_vdpa_hw.c b/drivers/vdpa/sfc/sfc_vdpa_hw.c
index 047bcc4..3a98c8c 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_hw.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_hw.c
@@ -350,10 +350,20 @@
 		goto fail_virtio_init;
 	}

+	sfc_vdpa_log_init(sva, "init filter");
+	rc = efx_filter_init(enp);
+	if (rc != 0) {
+		sfc_vdpa_err(sva, "filter init failed: %s", rte_strerror(rc));
+		goto fail_filter_init;
+	}
+
 	sfc_vdpa_log_init(sva, "done");

 	return 0;

+fail_filter_init:
+	efx_virtio_fini(enp);
+
 fail_virtio_init:
 	efx_nic_fini(enp);

diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c
index a7b9085..f14b385 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.c
@@ -423,6 +423,8 @@
 	sfc_vdpa_disable_vfio_intr(ops_data);

+	sfc_vdpa_filter_remove(ops_data);
+
 	ops_data->state = SFC_VDPA_STATE_CONFIGURED;
 }

@@ -462,12 +464,27 @@
 		goto fail_vq_start;
 	}

+	ops_data->vq_count = i;
+
+	sfc_vdpa_log_init(ops_data->dev_handle,
+			  "configure MAC filters");
+	rc = sfc_vdpa_filter_config(ops_data);
+	if (rc != 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "MAC filter config failed: %s",
+			     rte_strerror(rc));
+		goto fail_filter_cfg;
+	}
+
 	ops_data->state = SFC_VDPA_STATE_STARTED;

 	sfc_vdpa_log_init(ops_data->dev_handle, "done");

 	return 0;

+fail_filter_cfg:
+	/* remove already created filters */
+	sfc_vdpa_filter_remove(ops_data);
 fail_vq_start:
 	/* stop already started virtqueues */
 	for (j = 0; j < i; j++)
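A self-contained sketch of the devargs handling this patch relies on,
e.g. for a device bound with something like
-a 0000:03:00.1,class=vdpa,mac=00:11:22:33:44:55 (the PCI address and
MAC here are made-up examples). example_parse_mac and
example_get_eth_addr are invented names; the key list and callback
shape mirror sfc_vdpa_kvargs_parse()/sfc_vdpa_get_eth_addr() above:

#include <errno.h>
#include <rte_ether.h>
#include <rte_kvargs.h>

static int
example_get_eth_addr(const char *key, const char *value, void *extra_args)
{
	(void)key;

	if (value == NULL || extra_args == NULL)
		return -EINVAL;

	/* Returns 0 on success, negative if the string is not a MAC */
	return rte_ether_unformat_addr(value, extra_args);
}

static int
example_parse_mac(const char *devargs_str, struct rte_ether_addr *mac)
{
	static const char * const keys[] = { "class", "mac", NULL };
	struct rte_kvargs *kv;
	int rc = -1;

	kv = rte_kvargs_parse(devargs_str, keys);
	if (kv == NULL)
		return -1;

	/* Only process the 'mac' key if the user actually supplied it */
	if (rte_kvargs_count(kv, "mac") != 0)
		rc = rte_kvargs_process(kv, "mac", example_get_eth_addr, mac);

	rte_kvargs_free(kv);
	return rc;
}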
From patchwork Tue Jul 6 16:44:17 2021
X-Patchwork-Submitter: Vijay Srivastava
X-Patchwork-Id: 95469
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Vijay Srivastava <vsrivast@xilinx.com>
To: dev@dpdk.org
CC: maxime.coquelin@redhat.com, chenbo.xia@intel.com,
 andrew.rybchenko@oktetlabs.ru, Vijay Kumar Srivastava
Date: Tue, 6 Jul 2021 22:14:17 +0530
Message-ID: <20210706164418.32615-10-vsrivast@xilinx.com>
In-Reply-To: <20210706164418.32615-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com>
Subject: [dpdk-dev] [PATCH 09/10] vdpa/sfc: add support to set vring state

From: Vijay Kumar Srivastava

Implement the vDPA ops set_vring_state to configure vring state.
Signed-off-by: Vijay Kumar Srivastava
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 drivers/vdpa/sfc/sfc_vdpa_ops.c | 54 ++++++++++++++++++++++++++++++++++++++---
 1 file changed, 50 insertions(+), 4 deletions(-)

diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c
index f14b385..5a3b766 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.c
@@ -716,11 +716,57 @@
 static int
 sfc_vdpa_set_vring_state(int vid, int vring, int state)
 {
-	RTE_SET_USED(vid);
-	RTE_SET_USED(vring);
-	RTE_SET_USED(state);
+	struct sfc_vdpa_ops_data *ops_data;
+	struct rte_vdpa_device *vdpa_dev;
+	efx_rc_t rc;
+	int vring_max;
+	void *dev;

-	return -1;
+	vdpa_dev = rte_vhost_get_vdpa_device(vid);
+
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL)
+		return -1;
+
+	dev = ops_data->dev_handle;
+
+	sfc_vdpa_info(dev,
+		      "vDPA ops set_vring_state: vid: %d, vring: %d, state: %d",
+		      vid, vring, state);
+
+	vring_max = (sfc_vdpa_adapter_by_dev_handle(dev)->max_queue_count * 2);
+
+	/* Valid vring ids are 0 .. vring_max - 1 */
+	if (vring < 0 || vring >= vring_max) {
+		sfc_vdpa_err(dev, "received invalid vring id: %d to set state",
+			     vring);
+		return -1;
+	}
+
+	/*
+	 * Skip if the device is not yet started. Virtqueue state can be
+	 * changed once it is created and other configurations are done.
+	 */
+	if (ops_data->state != SFC_VDPA_STATE_STARTED)
+		return 0;
+
+	if (ops_data->vq_cxt[vring].enable == state)
+		return 0;
+
+	if (state == 0) {
+		rc = sfc_vdpa_virtq_stop(ops_data, vring);
+		if (rc != 0) {
+			sfc_vdpa_err(dev, "virtqueue stop failed: %s",
+				     rte_strerror(rc));
+		}
+	} else {
+		rc = sfc_vdpa_virtq_start(ops_data, vring);
+		if (rc != 0) {
+			sfc_vdpa_err(dev, "virtqueue start failed: %s",
+				     rte_strerror(rc));
+		}
+	}
+
+	return rc;
 }

 static int
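To show where this op sits in the call flow, a simplified sketch of
how the vhost layer hands a guest's ring enable/disable request to the
driver callback. This is an illustration of the contract only, not the
actual rte_vhost code; the function name is invented and the ops
member layout is assumed from the rte_vdpa_dev.h driver header of this
era:

#include <rte_vdpa_dev.h>

/* Sketch: a vring enable/disable request reaching the driver callback */
static int
example_notify_vring_state(struct rte_vdpa_device *vdpa_dev,
			   int vid, int vring, int enable)
{
	if (vdpa_dev == NULL || vdpa_dev->ops == NULL ||
	    vdpa_dev->ops->set_vring_state == NULL)
		return -1;

	/* For the sfc driver this lands in sfc_vdpa_set_vring_state() */
	return vdpa_dev->ops->set_vring_state(vid, vring, enable);
}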
From patchwork Tue Jul 6 16:44:18 2021
X-Patchwork-Submitter: Vijay Srivastava
X-Patchwork-Id: 95472
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Vijay Srivastava <vsrivast@xilinx.com>
To: dev@dpdk.org
CC: maxime.coquelin@redhat.com, chenbo.xia@intel.com,
 andrew.rybchenko@oktetlabs.ru, Vijay Kumar Srivastava
Date: Tue, 6 Jul 2021 22:14:18 +0530
Message-ID: <20210706164418.32615-11-vsrivast@xilinx.com>
In-Reply-To: <20210706164418.32615-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com>
Subject: [dpdk-dev] [PATCH 10/10] vdpa/sfc: set a multicast filter during vDPA init

From: Vijay Kumar Srivastava

Insert an unknown multicast filter to allow IPv6 neighbor discovery.

Signed-off-by: Vijay Kumar Srivastava
---
 drivers/vdpa/sfc/sfc_vdpa.h        |  5 +++--
 drivers/vdpa/sfc/sfc_vdpa_filter.c | 19 +++++++++++++++++--
 2 files changed, 20 insertions(+), 4 deletions(-)

diff --git a/drivers/vdpa/sfc/sfc_vdpa.h b/drivers/vdpa/sfc/sfc_vdpa.h
index fd480ca..68bf79a 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.h
+++ b/drivers/vdpa/sfc/sfc_vdpa.h
@@ -21,7 +21,7 @@
 #define SFC_VDPA_DEFAULT_MCDI_IOVA		0x200000000000

-/* Broadcast & Unicast MAC filters are supported */
-#define SFC_MAX_SUPPORTED_FILTERS		2
+/* Broadcast, unicast & unknown multicast MAC filters are supported */
+#define SFC_MAX_SUPPORTED_FILTERS		3

 /*
  * Get function-local index of the associated VI from the
@@ -32,6 +32,7 @@
 enum sfc_vdpa_filter_type {
 	SFC_VDPA_BCAST_MAC_FILTER = 0,
 	SFC_VDPA_UCAST_MAC_FILTER = 1,
+	SFC_VDPA_MCAST_DST_FILTER = 2,
 	SFC_VDPA_FILTER_NTYPE
 };

diff --git a/drivers/vdpa/sfc/sfc_vdpa_filter.c b/drivers/vdpa/sfc/sfc_vdpa_filter.c
index
03b6a5d..74204d3 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_filter.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_filter.c
@@ -39,8 +39,12 @@
 	spec->efs_flags = EFX_FILTER_FLAG_RX;
 	spec->efs_dmaq_id = qid;

-	rc = efx_filter_spec_set_eth_local(spec, EFX_FILTER_SPEC_VID_UNSPEC,
-					   eth_addr);
+	if (eth_addr == NULL)
+		rc = efx_filter_spec_set_mc_def(spec);
+	else
+		rc = efx_filter_spec_set_eth_local(spec,
+						   EFX_FILTER_SPEC_VID_UNSPEC,
+						   eth_addr);
 	if (rc != 0)
 		return rc;

@@ -114,6 +118,17 @@ int sfc_vdpa_filter_config(struct sfc_vdpa_ops_data *ops_data)
 	else
 		sva->filters.filter_cnt++;

+	sfc_vdpa_log_init(sva, "insert unknown mcast filter");
+	spec = &sva->filters.spec[SFC_VDPA_MCAST_DST_FILTER];
+
+	rc = sfc_vdpa_set_mac_filter(nic, spec, qid, NULL);
+	if (rc != 0)
+		sfc_vdpa_err(sva,
+			     "mcast filter insertion failed: %s",
+			     rte_strerror(rc));
+	else
+		sva->filters.filter_cnt++;
+
 	sfc_vdpa_log_init(sva, "done");

 	return rc;
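After this change a successfully configured device carries three RX
filters: a NULL MAC passed to sfc_vdpa_set_mac_filter() now selects
the unknown multicast match via efx_filter_spec_set_mc_def(). A small
sketch summarizing the slot layout; it only restates the code above,
and the array name is invented:

/* Sketch: what each spec[] slot matches after sfc_vdpa_filter_config() */
static const char *const example_filter_slots[SFC_VDPA_FILTER_NTYPE] = {
	[SFC_VDPA_BCAST_MAC_FILTER] = "broadcast MAC ff:ff:ff:ff:ff:ff",
	[SFC_VDPA_UCAST_MAC_FILTER] = "unicast MAC from the 'mac' devarg",
	[SFC_VDPA_MCAST_DST_FILTER] = "unknown multicast (mc_def), e.g. IPv6 ND",
};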