From patchwork Thu Oct 28 07:54:43 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Vijay Srivastava
X-Patchwork-Id: 103113
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Vijay Srivastava
To: <dev@dpdk.org>
CC: <maxime.coquelin@redhat.com>, <chenbo.xia@intel.com>,
 <andrew.rybchenko@oktetlabs.ru>, Vijay Kumar Srivastava
Date: Thu, 28 Oct 2021 13:24:43 +0530
Message-ID: <20211028075452.11804-2-vsrivast@xilinx.com>
X-Mailer: git-send-email 2.25.0
In-Reply-To: <20211028075452.11804-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com>
 <20211028075452.11804-1-vsrivast@xilinx.com>
Subject: [dpdk-dev] [PATCH v2 01/10] vdpa/sfc: introduce Xilinx vDPA driver

From: Vijay Kumar Srivastava

Add a new vDPA PMD to support vDPA operations on Xilinx devices.
This patch implements the probe and remove functions.

Signed-off-by: Vijay Kumar Srivastava
---
v2:
* Updated logging macros to remove redundant code.

 MAINTAINERS                            |   6 +
 doc/guides/rel_notes/release_21_11.rst |   5 +
 doc/guides/vdpadevs/features/sfc.ini   |   9 ++
 doc/guides/vdpadevs/sfc.rst            |  97 +++++++++++
 drivers/vdpa/meson.build               |   1 +
 drivers/vdpa/sfc/meson.build           |  33 ++++
 drivers/vdpa/sfc/sfc_vdpa.c            | 286 +++++++++++++++++++++++++++++++++
 drivers/vdpa/sfc/sfc_vdpa.h            |  40 +++++
 drivers/vdpa/sfc/sfc_vdpa_log.h        |  56 +++++++
 drivers/vdpa/sfc/version.map           |   3 +
 10 files changed, 536 insertions(+)
 create mode 100644 doc/guides/vdpadevs/features/sfc.ini
 create mode 100644 doc/guides/vdpadevs/sfc.rst
 create mode 100644 drivers/vdpa/sfc/meson.build
 create mode 100644 drivers/vdpa/sfc/sfc_vdpa.c
 create mode 100644 drivers/vdpa/sfc/sfc_vdpa.h
 create mode 100644 drivers/vdpa/sfc/sfc_vdpa_log.h
 create mode 100644 drivers/vdpa/sfc/version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index be2c9b6..5d12c49 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1236,6 +1236,12 @@ F: drivers/vdpa/mlx5/
 F: doc/guides/vdpadevs/mlx5.rst
 F: doc/guides/vdpadevs/features/mlx5.ini
 
+Xilinx sfc vDPA
+M: Vijay Kumar Srivastava
+F: drivers/vdpa/sfc/
+F: doc/guides/vdpadevs/sfc.rst
+F: doc/guides/vdpadevs/features/sfc.ini
+
 Eventdev Drivers
 ----------------
 
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 1ccac87..bd0a604 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -305,6 +305,11 @@ New Features
   * Pcapng format with timestamps and meta-data.
   * Fixes packet capture with stripped VLAN tags.
 
+* **Added new vDPA PMD based on Xilinx devices.**
+
+  Added a new Xilinx vDPA (``sfc_vdpa``) PMD.
+  See the :doc:`../vdpadevs/sfc` guide for more details on this driver.
+
 Removed Items
 -------------
 
diff --git a/doc/guides/vdpadevs/features/sfc.ini b/doc/guides/vdpadevs/features/sfc.ini
new file mode 100644
index 0000000..71b6158
--- /dev/null
+++ b/doc/guides/vdpadevs/features/sfc.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'sfc' vDPA driver.
+;
+; Refer to default.ini for the full list of available driver features.
+;
+[Features]
+Linux                = Y
+x86-64               = Y
+Usage doc            = Y
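For context, the guide added below explains that this driver binds only when
the device arguments carry "class=vdpa". A minimal standalone sketch of that
kind of devargs check (illustrative names only; the driver itself relies on
sfc_efx_dev_class_get() from common/sfc_efx):

#include <string.h>
#include <rte_devargs.h>
#include <rte_kvargs.h>

/* Sketch: return 1 if devargs contain "class=vdpa", 0 otherwise. */
static int
example_is_vdpa_class(const struct rte_devargs *devargs)
{
	struct rte_kvargs *kvargs;
	unsigned int i;
	int is_vdpa = 0;

	if (devargs == NULL)
		return 0;	/* no devargs: default is the net class */

	kvargs = rte_kvargs_parse(devargs->args, NULL);
	if (kvargs == NULL)
		return 0;

	for (i = 0; i < kvargs->count; i++) {
		if (strcmp(kvargs->pairs[i].key, "class") == 0 &&
		    strcmp(kvargs->pairs[i].value, "vdpa") == 0)
			is_vdpa = 1;
	}

	rte_kvargs_free(kvargs);
	return is_vdpa;
}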
diff --git a/doc/guides/vdpadevs/sfc.rst b/doc/guides/vdpadevs/sfc.rst
new file mode 100644
index 0000000..59f990b
--- /dev/null
+++ b/doc/guides/vdpadevs/sfc.rst
@@ -0,0 +1,97 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2021 Xilinx Corporation.
+
+Xilinx vDPA driver
+==================
+
+The Xilinx vDPA (vhost data path acceleration) driver (**librte_pmd_sfc_vdpa**)
+provides support for the Xilinx SN1022 SmartNIC family of 10/25/40/50/100 Gbps
+adapters and supports the latest Linux and FreeBSD operating systems.
+
+More information can be found at the Xilinx website https://www.xilinx.com.
+
+
+Xilinx vDPA implementation
+--------------------------
+
+An ef100 device can be configured in either net device or vDPA mode.
+Adding the "class=vdpa" parameter specifies that the device is to be used
+in vDPA mode. If this parameter is not specified, the device is probed
+by the net/sfc driver and used as a net device.
+
+This PMD uses libefx (common/sfc_efx) code to access the device firmware.
+
+
+Supported NICs
+--------------
+
+- Xilinx SN1022 SmartNICs
+
+
+Features
+--------
+
+Features of the Xilinx vDPA driver are:
+
+- Compatibility with virtio 0.95 and 1.0
+
+
+Non-supported Features
+----------------------
+
+- Control Queue
+- Multi queue
+- Live Migration
+
+
+Prerequisites
+-------------
+
+Requires firmware version v1.0.7.0 or higher.
+
+Visit `Xilinx Support Downloads `_
+to get Xilinx Utilities with the latest firmware.
+Follow the instructions in the Alveo SN1000 SmartNICs User Guide to
+update the firmware and configure the adapter.
+
+
+Per-Device Parameters
+~~~~~~~~~~~~~~~~~~~~~
+
+The following per-device parameters can be passed via the EAL PCI device
+whitelist option like "-w 02:00.0,arg1=value1,...".
+
+Case-insensitive 1/y/yes/on or 0/n/no/off may be used to specify
+boolean parameter values.
+
+- ``class`` [net|vdpa] (default **net**)
+
+  Choose the mode of operation of an ef100 device.
+  A **net** device works as a network device and is probed by the net/sfc driver.
+  A **vdpa** device works as a vdpa device and is probed by the vdpa/sfc driver.
+  If this parameter is not specified then the ef100 device operates as a
+  network device.
+
+
+Dynamic Logging Parameters
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+One may leverage the EAL option "--log-level" to change the default levels
+of the log types supported by the driver. The option is used with
+an argument typically consisting of two parts separated by a colon.
+
+The level value is the last part and takes a symbolic name (or integer).
+The log type is the first part and may use shell match syntax.
+Depending on the choice of the expression, the given log level may
+be used either for some specific log type or for a subset of types.
+
+The SFC vDPA PMD provides the following log types available for control:
+
+- ``pmd.vdpa.sfc.driver`` (default level is **notice**)
+
+  Affects driver-wide messages unrelated to any particular device.
+
+- ``pmd.vdpa.sfc.main`` (default level is **notice**)
+
+  Matches a subset of per-port log types registered during runtime.
+  A full name for a particular type may be obtained by appending a
+  dot and a PCI device identifier (``XXXX:XX:XX.X``) to the prefix.
diff --git a/drivers/vdpa/meson.build b/drivers/vdpa/meson.build
index f765fe3..77412c7 100644
--- a/drivers/vdpa/meson.build
+++ b/drivers/vdpa/meson.build
@@ -8,6 +8,7 @@ endif
 drivers = [
         'ifc',
         'mlx5',
+        'sfc',
 ]
 std_deps = ['bus_pci', 'kvargs']
 std_deps += ['vhost']
diff --git a/drivers/vdpa/sfc/meson.build b/drivers/vdpa/sfc/meson.build
new file mode 100644
index 0000000..d916389
--- /dev/null
+++ b/drivers/vdpa/sfc/meson.build
@@ -0,0 +1,33 @@
+# SPDX-License-Identifier: BSD-3-Clause
+#
+# Copyright(c) 2020-2021 Xilinx, Inc.
+
+if (arch_subdir != 'x86' or not dpdk_conf.get('RTE_ARCH_64')) and (arch_subdir != 'arm' or not host_machine.cpu_family().startswith('aarch64'))
+    build = false
+    reason = 'only supported on x86_64 and aarch64'
+endif
+
+fmt_name = 'sfc_vdpa'
+extra_flags = []
+
+# Enable more warnings
+extra_flags += [
+    '-Wdisabled-optimization'
+]
+
+# Compiler and version dependent flags
+extra_flags += [
+    '-Waggregate-return',
+    '-Wbad-function-cast'
+]
+
+foreach flag: extra_flags
+    if cc.has_argument(flag)
+        cflags += flag
+    endif
+endforeach
+
+deps += ['common_sfc_efx', 'bus_pci']
+sources = files(
+    'sfc_vdpa.c',
+)
diff --git a/drivers/vdpa/sfc/sfc_vdpa.c b/drivers/vdpa/sfc/sfc_vdpa.c
new file mode 100644
index 0000000..a6e1a9e
--- /dev/null
+++ b/drivers/vdpa/sfc/sfc_vdpa.c
@@ -0,0 +1,286 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#include
+#include
+#include
+
+#include
+#include
+#include
+#include
+#include
+
+#include "efx.h"
+#include "sfc_efx.h"
+#include "sfc_vdpa.h"
+
+TAILQ_HEAD(sfc_vdpa_adapter_list_head, sfc_vdpa_adapter);
+static struct sfc_vdpa_adapter_list_head sfc_vdpa_adapter_list =
+	TAILQ_HEAD_INITIALIZER(sfc_vdpa_adapter_list);
+
+static pthread_mutex_t sfc_vdpa_adapter_list_lock = PTHREAD_MUTEX_INITIALIZER;
+
+struct sfc_vdpa_adapter *
+sfc_vdpa_get_adapter_by_dev(struct rte_pci_device *pdev)
+{
+	bool found = false;
+	struct sfc_vdpa_adapter *sva;
+
+	pthread_mutex_lock(&sfc_vdpa_adapter_list_lock);
+
+	TAILQ_FOREACH(sva, &sfc_vdpa_adapter_list, next) {
+		if (pdev == sva->pdev) {
+			found = true;
+			break;
+		}
+	}
+
+	pthread_mutex_unlock(&sfc_vdpa_adapter_list_lock);
+
+	return found ? sva : NULL;
+}
+
+static int
+sfc_vdpa_vfio_setup(struct sfc_vdpa_adapter *sva)
+{
+	struct rte_pci_device *dev = sva->pdev;
+	char dev_name[RTE_DEV_NAME_MAX_LEN] = {0};
+	int rc;
+
+	if (dev == NULL)
+		goto fail_inval;
+
+	rte_pci_device_name(&dev->addr, dev_name, RTE_DEV_NAME_MAX_LEN);
+
+	sva->vfio_container_fd = rte_vfio_container_create();
+	if (sva->vfio_container_fd < 0) {
+		sfc_vdpa_err(sva, "failed to create VFIO container");
+		goto fail_container_create;
+	}
+
+	rc = rte_vfio_get_group_num(rte_pci_get_sysfs_path(), dev_name,
+				    &sva->iommu_group_num);
+	if (rc <= 0) {
+		sfc_vdpa_err(sva, "failed to get IOMMU group for %s : %s",
+			     dev_name, rte_strerror(-rc));
+		goto fail_get_group_num;
+	}
+
+	sva->vfio_group_fd =
+		rte_vfio_container_group_bind(sva->vfio_container_fd,
+					      sva->iommu_group_num);
+	if (sva->vfio_group_fd < 0) {
+		sfc_vdpa_err(sva,
+			     "failed to bind IOMMU group %d to container %d",
+			     sva->iommu_group_num, sva->vfio_container_fd);
+		goto fail_group_bind;
+	}
+
+	if (rte_pci_map_device(dev) != 0) {
+		sfc_vdpa_err(sva, "failed to map PCI device %s : %s",
+			     dev_name, rte_strerror(rte_errno));
+		goto fail_pci_map_device;
+	}
+
+	sva->vfio_dev_fd = rte_intr_dev_fd_get(dev->intr_handle);
+
+	return 0;
+
+fail_pci_map_device:
+	if (rte_vfio_container_group_unbind(sva->vfio_container_fd,
+					    sva->iommu_group_num) != 0) {
+		sfc_vdpa_err(sva,
+			     "failed to unbind IOMMU group %d from container %d",
+			     sva->iommu_group_num, sva->vfio_container_fd);
+	}
+
+fail_group_bind:
+fail_get_group_num:
+	if (rte_vfio_container_destroy(sva->vfio_container_fd) != 0) {
+		sfc_vdpa_err(sva, "failed to destroy container %d",
+			     sva->vfio_container_fd);
+	}
+
+fail_container_create:
+fail_inval:
+	return -1;
+}
+
+static void
+sfc_vdpa_vfio_teardown(struct sfc_vdpa_adapter *sva)
+{
+	rte_pci_unmap_device(sva->pdev);
+
+	if (rte_vfio_container_group_unbind(sva->vfio_container_fd,
+					    sva->iommu_group_num) != 0) {
+		sfc_vdpa_err(sva,
+			     "failed to unbind IOMMU group %d from container %d",
+			     sva->iommu_group_num, sva->vfio_container_fd);
+	}
+
+	if (rte_vfio_container_destroy(sva->vfio_container_fd) != 0) {
+		sfc_vdpa_err(sva,
+			     "failed to destroy container %d",
+			     sva->vfio_container_fd);
+	}
+}
+
+static int
+sfc_vdpa_set_log_prefix(struct sfc_vdpa_adapter *sva)
+{
+	struct rte_pci_device *pci_dev = sva->pdev;
+	int ret;
+
+	ret = snprintf(sva->log_prefix, sizeof(sva->log_prefix),
+		       "PMD: sfc_vdpa " PCI_PRI_FMT " : ",
+		       pci_dev->addr.domain, pci_dev->addr.bus,
+		       pci_dev->addr.devid, pci_dev->addr.function);
+
+	if (ret < 0 || ret >= (int)sizeof(sva->log_prefix)) {
+		SFC_VDPA_GENERIC_LOG(ERR,
+			"reserved log prefix is too short for " PCI_PRI_FMT,
+			pci_dev->addr.domain, pci_dev->addr.bus,
+			pci_dev->addr.devid, pci_dev->addr.function);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+uint32_t
+sfc_vdpa_register_logtype(const struct rte_pci_addr *pci_addr,
+			  const char *lt_prefix_str, uint32_t ll_default)
+{
+	size_t lt_prefix_str_size = strlen(lt_prefix_str);
+	size_t lt_str_size_max;
+	char *lt_str = NULL;
+	int ret;
+
+	if (SIZE_MAX - PCI_PRI_STR_SIZE - 1 > lt_prefix_str_size) {
+		++lt_prefix_str_size; /* Reserve space for prefix separator */
+		lt_str_size_max = lt_prefix_str_size + PCI_PRI_STR_SIZE + 1;
+	} else {
+		return RTE_LOGTYPE_PMD;
+	}
+
+	lt_str = rte_zmalloc("logtype_str", lt_str_size_max, 0);
+	if (lt_str == NULL)
+		return RTE_LOGTYPE_PMD;
+
+	strncpy(lt_str, lt_prefix_str, lt_prefix_str_size);
+	lt_str[lt_prefix_str_size - 1] = '.';
+	rte_pci_device_name(pci_addr, lt_str + lt_prefix_str_size,
+			    lt_str_size_max - lt_prefix_str_size);
+	lt_str[lt_str_size_max - 1] = '\0';
+
+	ret = rte_log_register_type_and_pick_level(lt_str, ll_default);
+	rte_free(lt_str);
+
+	return (ret < 0) ? RTE_LOGTYPE_PMD : ret;
+}
+
+static struct rte_pci_id pci_id_sfc_vdpa_efx_map[] = {
+	{ RTE_PCI_DEVICE(EFX_PCI_VENID_XILINX, EFX_PCI_DEVID_RIVERHEAD_VF) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static int
+sfc_vdpa_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+		   struct rte_pci_device *pci_dev)
+{
+	struct sfc_vdpa_adapter *sva = NULL;
+	uint32_t logtype_main;
+	int ret = 0;
+
+	if (sfc_efx_dev_class_get(pci_dev->device.devargs) !=
+			SFC_EFX_DEV_CLASS_VDPA) {
+		SFC_VDPA_GENERIC_LOG(INFO,
+			"Incompatible device class: skip probing, should be probed by other sfc driver.");
+		return 1;
+	}
+
+	/*
+	 * This device is not probed in the secondary process. Return 0
+	 * (rather than an error) because the device class is vdpa, so no
+	 * other sfc driver should probe it either.
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	logtype_main = sfc_vdpa_register_logtype(&pci_dev->addr,
+						 SFC_VDPA_LOGTYPE_MAIN_STR,
+						 RTE_LOG_NOTICE);
+
+	sva = rte_zmalloc("sfc_vdpa", sizeof(struct sfc_vdpa_adapter), 0);
+	if (sva == NULL)
+		goto fail_zmalloc;
+
+	sva->pdev = pci_dev;
+	sva->logtype_main = logtype_main;
+
+	ret = sfc_vdpa_set_log_prefix(sva);
+	if (ret != 0)
+		goto fail_set_log_prefix;
+
+	sfc_vdpa_log_init(sva, "entry");
+
+	sfc_vdpa_log_init(sva, "vfio init");
+	if (sfc_vdpa_vfio_setup(sva) < 0) {
+		sfc_vdpa_err(sva, "failed to setup device %s", pci_dev->name);
+		goto fail_vfio_setup;
+	}
+
+	pthread_mutex_lock(&sfc_vdpa_adapter_list_lock);
+	TAILQ_INSERT_TAIL(&sfc_vdpa_adapter_list, sva, next);
+	pthread_mutex_unlock(&sfc_vdpa_adapter_list_lock);
+
+	sfc_vdpa_log_init(sva, "done");
+
+	return 0;
+
+fail_vfio_setup:
+fail_set_log_prefix:
+	rte_free(sva);
+
+fail_zmalloc:
+	return -1;
+}
+
+static int
+sfc_vdpa_pci_remove(struct rte_pci_device *pci_dev)
+{
+	struct sfc_vdpa_adapter *sva = NULL;
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -1;
+
+	sva = sfc_vdpa_get_adapter_by_dev(pci_dev);
+	if (sva == NULL) {
+		/* sva is NULL here, so the per-adapter log macros must
+		 * not be used; log via the generic driver log type.
+		 */
+		SFC_VDPA_GENERIC_LOG(INFO, "invalid device: %s",
+				     pci_dev->name);
+		return -1;
+	}
+
+	pthread_mutex_lock(&sfc_vdpa_adapter_list_lock);
+	TAILQ_REMOVE(&sfc_vdpa_adapter_list, sva, next);
+	pthread_mutex_unlock(&sfc_vdpa_adapter_list_lock);
+
+	sfc_vdpa_vfio_teardown(sva);
+
+	rte_free(sva);
+
+	return 0;
+}
+
+static struct rte_pci_driver rte_sfc_vdpa = {
+	.id_table = pci_id_sfc_vdpa_efx_map,
+	.drv_flags = 0,
+	.probe = sfc_vdpa_pci_probe,
+	.remove = sfc_vdpa_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_sfc_vdpa, rte_sfc_vdpa);
+RTE_PMD_REGISTER_PCI_TABLE(net_sfc_vdpa, pci_id_sfc_vdpa_efx_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_sfc_vdpa, "* vfio-pci");
+RTE_LOG_REGISTER_SUFFIX(sfc_vdpa_logtype_driver, driver, NOTICE);
diff --git a/drivers/vdpa/sfc/sfc_vdpa.h b/drivers/vdpa/sfc/sfc_vdpa.h
new file mode 100644
index 0000000..3b77900
--- /dev/null
+++ b/drivers/vdpa/sfc/sfc_vdpa.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#ifndef _SFC_VDPA_H
+#define _SFC_VDPA_H
+
+#include
+#include
+
+#include
+
+#include "sfc_vdpa_log.h"
+
+/* Adapter private data */
+struct sfc_vdpa_adapter {
+	TAILQ_ENTRY(sfc_vdpa_adapter) next;
+	struct rte_pci_device *pdev;
+	struct rte_pci_addr pci_addr;
+
+	char log_prefix[SFC_VDPA_LOG_PREFIX_MAX];
+	uint32_t logtype_main;
+
+	int vfio_group_fd;
+	int vfio_dev_fd;
+	int vfio_container_fd;
+	int iommu_group_num;
+};
+
+uint32_t
+sfc_vdpa_register_logtype(const struct rte_pci_addr *pci_addr,
+			  const char *lt_prefix_str,
+			  uint32_t ll_default);
+
+struct sfc_vdpa_adapter *
+sfc_vdpa_get_adapter_by_dev(struct rte_pci_device *pdev);
+
+#endif /* _SFC_VDPA_H */
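The adapter bookkeeping in sfc_vdpa.c above is a small reusable pattern:
a TAILQ of per-device contexts guarded by a process-local mutex, looked up
by PCI device pointer. A minimal standalone sketch of the same pattern
(illustrative type and names only):

#include <pthread.h>
#include <stdbool.h>
#include <sys/queue.h>

struct example_adapter {
	TAILQ_ENTRY(example_adapter) next;
	const void *pdev;	/* key: rte_pci_device pointer in the driver */
};

TAILQ_HEAD(example_list_head, example_adapter);
static struct example_list_head example_list =
	TAILQ_HEAD_INITIALIZER(example_list);
static pthread_mutex_t example_lock = PTHREAD_MUTEX_INITIALIZER;

static struct example_adapter *
example_lookup(const void *pdev)
{
	struct example_adapter *ea;
	bool found = false;

	pthread_mutex_lock(&example_lock);
	TAILQ_FOREACH(ea, &example_list, next) {
		if (ea->pdev == pdev) {
			found = true;
			break;
		}
	}
	pthread_mutex_unlock(&example_lock);

	return found ? ea : NULL;
}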
diff --git a/drivers/vdpa/sfc/sfc_vdpa_log.h b/drivers/vdpa/sfc/sfc_vdpa_log.h
new file mode 100644
index 0000000..858e5ee
--- /dev/null
+++ b/drivers/vdpa/sfc/sfc_vdpa_log.h
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#ifndef _SFC_VDPA_LOG_H_
+#define _SFC_VDPA_LOG_H_
+
+/** Generic driver log type */
+extern int sfc_vdpa_logtype_driver;
+
+/** Common log type name prefix */
+#define SFC_VDPA_LOGTYPE_PREFIX	"pmd.vdpa.sfc."
+
+/** Log PMD generic message, add a prefix and a line break */
+#define SFC_VDPA_GENERIC_LOG(level, ...) \
+	rte_log(RTE_LOG_ ## level, sfc_vdpa_logtype_driver,		\
+		RTE_FMT("PMD: " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n",	\
+			RTE_FMT_TAIL(__VA_ARGS__ ,)))
+
+/** Name prefix for the per-device log type used to report basic information */
+#define SFC_VDPA_LOGTYPE_MAIN_STR	SFC_VDPA_LOGTYPE_PREFIX "main"
+
+#define SFC_VDPA_LOG_PREFIX_MAX	32
+
+/* Log PMD message, automatically add prefix and \n */
+#define SFC_VDPA_LOG(sva, level, ...) \
+	do {								\
+		const struct sfc_vdpa_adapter *_sva = (sva);		\
+									\
+		rte_log(RTE_LOG_ ## level, _sva->logtype_main,		\
+			RTE_FMT("%s" RTE_FMT_HEAD(__VA_ARGS__ ,) "\n",	\
+				_sva->log_prefix,			\
+				RTE_FMT_TAIL(__VA_ARGS__ ,)));		\
+	} while (0)
+
+#define sfc_vdpa_err(sva, ...) \
+	SFC_VDPA_LOG(sva, ERR, __VA_ARGS__)
+
+#define sfc_vdpa_warn(sva, ...) \
+	SFC_VDPA_LOG(sva, WARNING, __VA_ARGS__)
+
+#define sfc_vdpa_notice(sva, ...) \
+	SFC_VDPA_LOG(sva, NOTICE, __VA_ARGS__)
+
+#define sfc_vdpa_info(sva, ...) \
+	SFC_VDPA_LOG(sva, INFO, __VA_ARGS__)
+
+#define sfc_vdpa_log_init(sva, ...) \
+	SFC_VDPA_LOG(sva, INFO,					\
+		     RTE_FMT("%s(): "				\
+			     RTE_FMT_HEAD(__VA_ARGS__ ,),	\
+			     __func__,				\
+			     RTE_FMT_TAIL(__VA_ARGS__ ,)))
+
+#endif /* _SFC_VDPA_LOG_H_ */
diff --git a/drivers/vdpa/sfc/version.map b/drivers/vdpa/sfc/version.map
new file mode 100644
index 0000000..4a76d1d
--- /dev/null
+++ b/drivers/vdpa/sfc/version.map
@@ -0,0 +1,3 @@
+DPDK_21 {
+	local: *;
+};
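A usage sketch for the log helpers introduced by this patch (illustrative
only; assumes an sva initialized as in sfc_vdpa_pci_probe()). At run time the
levels can be raised with an EAL option such as
--log-level='pmd.vdpa.sfc.*:debug', matching the log type names documented
in sfc.rst:

#include "sfc_vdpa.h"

static void
example_log_usage(struct sfc_vdpa_adapter *sva)
{
	sfc_vdpa_log_init(sva, "entry");	/* "<prefix><func>(): entry" */
	sfc_vdpa_info(sva, "probed %s", sva->pdev->name);
	sfc_vdpa_err(sva, "failed to setup device");
	SFC_VDPA_GENERIC_LOG(NOTICE, "driver-wide message, no device context");
}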
From patchwork Thu Oct 28 07:54:44 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Vijay Srivastava
X-Patchwork-Id: 103114
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Vijay Srivastava
To: <dev@dpdk.org>
CC: <maxime.coquelin@redhat.com>, <chenbo.xia@intel.com>,
 <andrew.rybchenko@oktetlabs.ru>, Vijay Kumar Srivastava
Date: Thu, 28 Oct 2021 13:24:44 +0530
Message-ID: <20211028075452.11804-3-vsrivast@xilinx.com>
X-Mailer: git-send-email 2.25.0
In-Reply-To: <20211028075452.11804-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com>
 <20211028075452.11804-1-vsrivast@xilinx.com>
Subject: [dpdk-dev] [PATCH v2 02/10] vdpa/sfc: add support for device initialization

From: Vijay Kumar Srivastava

Add HW initialization and vDPA device registration support.

Signed-off-by: Vijay Kumar Srivastava
---
v2:
* Used rte_memzone_reserve_aligned for MCDI buffer allocation.
* Freed the MCDI buffer when DMA mapping fails.
* Fixed one typo.
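Before the diff: the heart of this patch, reduced to its essentials, is
registering a vDPA device with the vhost library so it can later be attached
to a vhost-user port. A hedged sketch with hypothetical names (the real code
is in sfc_vdpa_device_init() below):

#include <rte_bus_pci.h>
#include <rte_vdpa.h>

static struct rte_vdpa_dev_ops example_ops;	/* ops table filled elsewhere */

/* Associate the generic rte_device with a vDPA ops table; vhost invokes
 * the ops once the device is attached to a vhost-user port.
 */
static struct rte_vdpa_device *
example_register(struct rte_pci_device *pci_dev)
{
	return rte_vdpa_register_device(&pci_dev->device, &example_ops);
}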
 doc/guides/vdpadevs/sfc.rst       |   6 +
 drivers/vdpa/sfc/meson.build      |   3 +
 drivers/vdpa/sfc/sfc_vdpa.c       |  23 +++
 drivers/vdpa/sfc/sfc_vdpa.h       |  49 +++++-
 drivers/vdpa/sfc/sfc_vdpa_debug.h |  21 +++
 drivers/vdpa/sfc/sfc_vdpa_hw.c    | 327 ++++++++++++++++++++++++++++++++++
 drivers/vdpa/sfc/sfc_vdpa_log.h   |   3 +
 drivers/vdpa/sfc/sfc_vdpa_mcdi.c  |  74 +++++++++
 drivers/vdpa/sfc/sfc_vdpa_ops.c   | 129 +++++++++++++++
 drivers/vdpa/sfc/sfc_vdpa_ops.h   |  36 +++++
 10 files changed, 670 insertions(+), 1 deletion(-)
 create mode 100644 drivers/vdpa/sfc/sfc_vdpa_debug.h
 create mode 100644 drivers/vdpa/sfc/sfc_vdpa_hw.c
 create mode 100644 drivers/vdpa/sfc/sfc_vdpa_mcdi.c
 create mode 100644 drivers/vdpa/sfc/sfc_vdpa_ops.c
 create mode 100644 drivers/vdpa/sfc/sfc_vdpa_ops.h

diff --git a/doc/guides/vdpadevs/sfc.rst b/doc/guides/vdpadevs/sfc.rst
index 59f990b..abb5900 100644
--- a/doc/guides/vdpadevs/sfc.rst
+++ b/doc/guides/vdpadevs/sfc.rst
@@ -95,3 +95,9 @@ SFC vDPA PMD provides the following log types available for control:
   Matches a subset of per-port log types registered during runtime.
   A full name for a particular type may be obtained by appending a
   dot and a PCI device identifier (``XXXX:XX:XX.X``) to the prefix.
+
+- ``pmd.vdpa.sfc.mcdi`` (default level is **notice**)
+
+  Extra logging of the communication with the NIC's management CPU.
+  The format of the log is consumed by the netlogdecode cross-platform
+  tool. May be managed per-port, as explained above.
diff --git a/drivers/vdpa/sfc/meson.build b/drivers/vdpa/sfc/meson.build
index d916389..aac7c51 100644
--- a/drivers/vdpa/sfc/meson.build
+++ b/drivers/vdpa/sfc/meson.build
@@ -30,4 +30,7 @@ endforeach
 deps += ['common_sfc_efx', 'bus_pci']
 sources = files(
     'sfc_vdpa.c',
+    'sfc_vdpa_hw.c',
+    'sfc_vdpa_mcdi.c',
+    'sfc_vdpa_ops.c',
 )
diff --git a/drivers/vdpa/sfc/sfc_vdpa.c b/drivers/vdpa/sfc/sfc_vdpa.c
index a6e1a9e..00fa94a 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.c
+++ b/drivers/vdpa/sfc/sfc_vdpa.c
@@ -232,6 +232,19 @@ struct sfc_vdpa_adapter *
 		goto fail_vfio_setup;
 	}
 
+	sfc_vdpa_log_init(sva, "hw init");
+	if (sfc_vdpa_hw_init(sva) != 0) {
+		sfc_vdpa_err(sva, "failed to init HW %s", pci_dev->name);
+		goto fail_hw_init;
+	}
+
+	sfc_vdpa_log_init(sva, "dev init");
+	sva->ops_data = sfc_vdpa_device_init(sva, SFC_VDPA_AS_VF);
+	if (sva->ops_data == NULL) {
+		sfc_vdpa_err(sva, "failed vDPA dev init %s", pci_dev->name);
+		goto fail_dev_init;
+	}
+
 	pthread_mutex_lock(&sfc_vdpa_adapter_list_lock);
 	TAILQ_INSERT_TAIL(&sfc_vdpa_adapter_list, sva, next);
 	pthread_mutex_unlock(&sfc_vdpa_adapter_list_lock);
@@ -240,6 +253,12 @@ struct sfc_vdpa_adapter *
 
 	return 0;
 
+fail_dev_init:
+	sfc_vdpa_hw_fini(sva);
+
+fail_hw_init:
+	sfc_vdpa_vfio_teardown(sva);
+
 fail_vfio_setup:
 fail_set_log_prefix:
 	rte_free(sva);
@@ -266,6 +285,10 @@ struct sfc_vdpa_adapter *
 	TAILQ_REMOVE(&sfc_vdpa_adapter_list, sva, next);
 	pthread_mutex_unlock(&sfc_vdpa_adapter_list_lock);
 
+	sfc_vdpa_device_fini(sva->ops_data);
+
+	sfc_vdpa_hw_fini(sva);
+
 	sfc_vdpa_vfio_teardown(sva);
 
 	rte_free(sva);
diff --git a/drivers/vdpa/sfc/sfc_vdpa.h b/drivers/vdpa/sfc/sfc_vdpa.h
index 3b77900..046f25d 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.h
+++ b/drivers/vdpa/sfc/sfc_vdpa.h
@@ -11,14 +11,38 @@
 
 #include
 
+#include "sfc_efx.h"
+#include "sfc_efx_mcdi.h"
+#include "sfc_vdpa_debug.h"
 #include "sfc_vdpa_log.h"
+#include "sfc_vdpa_ops.h"
+
+#define SFC_VDPA_DEFAULT_MCDI_IOVA		0x200000000000
 
 /* Adapter private data */
 struct sfc_vdpa_adapter {
 	TAILQ_ENTRY(sfc_vdpa_adapter) next;
+	/*
+	 * PMD setup and configuration are not thread safe. Since they
+	 * are not performance sensitive, it is better to guarantee
+	 * thread safety with a device-level lock. vDPA control
+	 * operations which change device state should acquire the lock.
+	 */
+	rte_spinlock_t lock;
 	struct rte_pci_device *pdev;
 	struct rte_pci_addr pci_addr;
 
+	efx_family_t family;
+	efx_nic_t *nic;
+	rte_spinlock_t nic_lock;
+
+	efsys_bar_t mem_bar;
+
+	struct sfc_efx_mcdi mcdi;
+	size_t mcdi_buff_size;
+
+	uint32_t max_queue_count;
+
 	char log_prefix[SFC_VDPA_LOG_PREFIX_MAX];
 	uint32_t logtype_main;
 
@@ -26,6 +50,7 @@ struct sfc_vdpa_adapter {
 	int vfio_dev_fd;
 	int vfio_container_fd;
 	int iommu_group_num;
+	struct sfc_vdpa_ops_data *ops_data;
 };
 
 uint32_t
@@ -36,5 +61,27 @@ struct sfc_vdpa_adapter {
 struct sfc_vdpa_adapter *
 sfc_vdpa_get_adapter_by_dev(struct rte_pci_device *pdev);
 
-#endif /* _SFC_VDPA_H */
+int
+sfc_vdpa_hw_init(struct sfc_vdpa_adapter *sva);
+void
+sfc_vdpa_hw_fini(struct sfc_vdpa_adapter *sva);
 
+int
+sfc_vdpa_mcdi_init(struct sfc_vdpa_adapter *sva);
+void
+sfc_vdpa_mcdi_fini(struct sfc_vdpa_adapter *sva);
+
+int
+sfc_vdpa_dma_alloc(struct sfc_vdpa_adapter *sva, const char *name,
+		   size_t len, efsys_mem_t *esmp);
+
+void
+sfc_vdpa_dma_free(struct sfc_vdpa_adapter *sva, efsys_mem_t *esmp);
+
+static inline struct sfc_vdpa_adapter *
+sfc_vdpa_adapter_by_dev_handle(void *dev_handle)
+{
+	return (struct sfc_vdpa_adapter *)dev_handle;
+}
+
+#endif /* _SFC_VDPA_H */
diff --git a/drivers/vdpa/sfc/sfc_vdpa_debug.h b/drivers/vdpa/sfc/sfc_vdpa_debug.h
new file mode 100644
index 0000000..cfa8cc5
--- /dev/null
+++ b/drivers/vdpa/sfc/sfc_vdpa_debug.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#ifndef _SFC_VDPA_DEBUG_H_
+#define _SFC_VDPA_DEBUG_H_
+
+#include
+
+#ifdef RTE_LIBRTE_SFC_VDPA_DEBUG
+/* Avoid dependency from RTE_LOG_DP_LEVEL to be able to enable debug check
+ * in the driver only.
+ */
+#define SFC_VDPA_ASSERT(exp)			RTE_VERIFY(exp)
+#else
+/* If the driver debug is not enabled, follow DPDK debug/non-debug */
+#define SFC_VDPA_ASSERT(exp)			RTE_ASSERT(exp)
+#endif
+
+#endif /* _SFC_VDPA_DEBUG_H_ */
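The comment on the new ``lock`` field above describes the intended locking
convention. A hypothetical control-path entry point following it would look
like this (sketch only; such ops are not added until later patches):

#include <rte_spinlock.h>

#include "sfc_vdpa.h"

static int
example_dev_conf(struct sfc_vdpa_adapter *sva)
{
	rte_spinlock_lock(&sva->lock);
	/* ... state-changing configuration would go here ... */
	rte_spinlock_unlock(&sva->lock);

	return 0;
}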
diff --git a/drivers/vdpa/sfc/sfc_vdpa_hw.c b/drivers/vdpa/sfc/sfc_vdpa_hw.c
new file mode 100644
index 0000000..7c256ff
--- /dev/null
+++ b/drivers/vdpa/sfc/sfc_vdpa_hw.c
@@ -0,0 +1,327 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#include
+
+#include
+#include
+#include
+
+#include "efx.h"
+#include "sfc_vdpa.h"
+#include "sfc_vdpa_ops.h"
+
+extern uint32_t sfc_logtype_driver;
+
+#ifndef PAGE_SIZE
+#define PAGE_SIZE   (sysconf(_SC_PAGESIZE))
+#endif
+
+int
+sfc_vdpa_dma_alloc(struct sfc_vdpa_adapter *sva, const char *name,
+		   size_t len, efsys_mem_t *esmp)
+{
+	uint64_t mcdi_iova;
+	size_t mcdi_buff_size;
+	const struct rte_memzone *mz = NULL;
+	int numa_node = sva->pdev->device.numa_node;
+	int ret;
+
+	mcdi_buff_size = RTE_ALIGN_CEIL(len, PAGE_SIZE);
+
+	sfc_vdpa_log_init(sva, "name=%s, len=%zu", name, len);
+
+	mz = rte_memzone_reserve_aligned(name, mcdi_buff_size,
+					 numa_node,
+					 RTE_MEMZONE_IOVA_CONTIG,
+					 PAGE_SIZE);
+	if (mz == NULL) {
+		sfc_vdpa_err(sva, "cannot reserve memory for %s: len=%#x: %s",
+			     name, (unsigned int)len, rte_strerror(rte_errno));
+		return -ENOMEM;
+	}
+
+	/* The IOVA address for MCDI is re-calculated if mapping
+	 * at the default IOVA fails.
+	 * TODO: Earlier there was no way to get a valid IOVA range.
+	 * Recently a patch has been submitted to get the IOVA range
+	 * using the VFIO_IOMMU_GET_INFO ioctl. That patch is available
+	 * in kernel versions >= 5.4. Support to derive the default
+	 * IOVA address for the MCDI buffer from the available IOVA range
+	 * will be added later. Meanwhile the default IOVA for the MCDI
+	 * buffer is kept in high memory at 2TB. In case of overlap, new
+	 * available addresses are searched and used instead.
+	 */
+	mcdi_iova = SFC_VDPA_DEFAULT_MCDI_IOVA;
+
+	do {
+		ret = rte_vfio_container_dma_map(sva->vfio_container_fd,
+						 (uint64_t)mz->addr, mcdi_iova,
+						 mcdi_buff_size);
+		if (ret == 0)
+			break;
+
+		mcdi_iova = mcdi_iova >> 1;
+		if (mcdi_iova < mcdi_buff_size) {
+			sfc_vdpa_err(sva,
+				     "DMA mapping failed for MCDI : %s",
+				     rte_strerror(rte_errno));
+			rte_memzone_free(mz);
+			return ret;
+		}
+
+	} while (ret < 0);
+
+	esmp->esm_mz = mz;	/* needed by sfc_vdpa_dma_free() */
+	esmp->esm_addr = mcdi_iova;
+	esmp->esm_base = mz->addr;
+	sva->mcdi_buff_size = mcdi_buff_size;
+
+	sfc_vdpa_info(sva,
+		      "DMA name=%s len=%zu => virt=%p iova=%" PRIx64,
+		      name, len, esmp->esm_base, esmp->esm_addr);
+
+	return 0;
+}
+
+void
+sfc_vdpa_dma_free(struct sfc_vdpa_adapter *sva, efsys_mem_t *esmp)
+{
+	int ret;
+
+	sfc_vdpa_log_init(sva, "name=%s", esmp->esm_mz->name);
+
+	ret = rte_vfio_container_dma_unmap(sva->vfio_container_fd,
+					   (uint64_t)esmp->esm_base,
+					   esmp->esm_addr, sva->mcdi_buff_size);
+	if (ret < 0)
+		sfc_vdpa_err(sva, "DMA unmap failed for MCDI : %s",
+			     rte_strerror(rte_errno));
+
+	sfc_vdpa_info(sva,
+		      "DMA free name=%s => virt=%p iova=%" PRIx64,
+		      esmp->esm_mz->name, esmp->esm_base, esmp->esm_addr);
+
+	/* The buffer comes from rte_memzone_reserve_aligned(), so it
+	 * must be returned with rte_memzone_free(), not rte_free().
+	 */
+	rte_memzone_free(esmp->esm_mz);
+
+	sva->mcdi_buff_size = 0;
+	memset(esmp, 0, sizeof(*esmp));
+}
+
+static int
+sfc_vdpa_mem_bar_init(struct sfc_vdpa_adapter *sva,
+		      const efx_bar_region_t *mem_ebrp)
+{
+	struct rte_pci_device *pci_dev = sva->pdev;
+	efsys_bar_t *ebp = &sva->mem_bar;
+	struct rte_mem_resource *res =
+		&pci_dev->mem_resource[mem_ebrp->ebr_index];
+
+	SFC_BAR_LOCK_INIT(ebp, pci_dev->name);
+	ebp->esb_rid = mem_ebrp->ebr_index;
+	ebp->esb_dev = pci_dev;
+	ebp->esb_base = res->addr;
+
+	return 0;
+}
+
+static void
+sfc_vdpa_mem_bar_fini(struct sfc_vdpa_adapter *sva)
+{
+	efsys_bar_t *ebp = &sva->mem_bar;
+
+	SFC_BAR_LOCK_DESTROY(ebp);
+	memset(ebp, 0, sizeof(*ebp));
+}
+
+static int
+sfc_vdpa_nic_probe(struct sfc_vdpa_adapter *sva)
+{
+	efx_nic_t *enp = sva->nic;
+	int rc;
+
+	rc = efx_nic_probe(enp, EFX_FW_VARIANT_DONT_CARE);
+	if (rc != 0)
+		sfc_vdpa_err(sva, "nic probe failed: %s", rte_strerror(rc));
+
+	return rc;
+}
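/*
 * Aside (illustrative, not part of the patch): the IOVA search in
 * sfc_vdpa_dma_alloc() above starts at SFC_VDPA_DEFAULT_MCDI_IOVA (2 TiB)
 * and halves the candidate address on each mapping failure until it would
 * no longer fit the buffer. Reduced to a standalone sketch with a
 * hypothetical map callback:
 */
#include <stddef.h>
#include <stdint.h>

typedef int (example_map_fn)(uint64_t iova, size_t len);

static int64_t
example_pick_iova(example_map_fn *try_map, size_t len)
{
	uint64_t iova = 0x200000000000;	/* SFC_VDPA_DEFAULT_MCDI_IOVA */

	for (;;) {
		if (try_map(iova, len) == 0)
			return (int64_t)iova;	/* mapped successfully */
		iova >>= 1;
		if (iova < len)
			return -1;	/* candidate range exhausted */
	}
}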
+static int
+sfc_vdpa_estimate_resource_limits(struct sfc_vdpa_adapter *sva)
+{
+	efx_drv_limits_t limits;
+	int rc;
+	uint32_t evq_allocated;
+	uint32_t rxq_allocated;
+	uint32_t txq_allocated;
+	uint32_t max_queue_cnt;
+
+	memset(&limits, 0, sizeof(limits));
+
+	/* Request at least one Rx and Tx queue */
+	limits.edl_min_rxq_count = 1;
+	limits.edl_min_txq_count = 1;
+	/* Management event queue plus event queue for Tx/Rx queue */
+	limits.edl_min_evq_count =
+		1 + RTE_MAX(limits.edl_min_rxq_count, limits.edl_min_txq_count);
+
+	limits.edl_max_rxq_count = SFC_VDPA_MAX_QUEUE_PAIRS;
+	limits.edl_max_txq_count = SFC_VDPA_MAX_QUEUE_PAIRS;
+	limits.edl_max_evq_count = 1 + SFC_VDPA_MAX_QUEUE_PAIRS;
+
+	SFC_VDPA_ASSERT(limits.edl_max_evq_count >= limits.edl_min_rxq_count);
+	SFC_VDPA_ASSERT(limits.edl_max_rxq_count >= limits.edl_min_rxq_count);
+	SFC_VDPA_ASSERT(limits.edl_max_txq_count >= limits.edl_min_rxq_count);
+
+	/* Configure the minimum resources needed for the driver
+	 * to operate, and the maximum desired resources that the
+	 * driver is capable of using.
+	 */
+	sfc_vdpa_log_init(sva, "set drv limit");
+	efx_nic_set_drv_limits(sva->nic, &limits);
+
+	sfc_vdpa_log_init(sva, "init nic");
+	rc = efx_nic_init(sva->nic);
+	if (rc != 0) {
+		sfc_vdpa_err(sva, "nic init failed: %s", rte_strerror(rc));
+		goto fail_nic_init;
+	}
+
+	/* Find resource dimensions assigned by firmware to this function */
+	rc = efx_nic_get_vi_pool(sva->nic, &evq_allocated, &rxq_allocated,
+				 &txq_allocated);
+	if (rc != 0) {
+		sfc_vdpa_err(sva, "vi pool get failed: %s", rte_strerror(rc));
+		goto fail_get_vi_pool;
+	}
+
+	/* It still may allocate more than maximum, ensure limit */
+	evq_allocated = RTE_MIN(evq_allocated, limits.edl_max_evq_count);
+	rxq_allocated = RTE_MIN(rxq_allocated, limits.edl_max_rxq_count);
+	txq_allocated = RTE_MIN(txq_allocated, limits.edl_max_txq_count);
+
+	max_queue_cnt = RTE_MIN(rxq_allocated, txq_allocated);
+	/* Subtract management EVQ not used for traffic */
+	max_queue_cnt = RTE_MIN(evq_allocated - 1, max_queue_cnt);
+
+	SFC_VDPA_ASSERT(max_queue_cnt > 0);
+
+	sva->max_queue_count = max_queue_cnt;
+
+	return 0;
+
+fail_get_vi_pool:
+	efx_nic_fini(sva->nic);
+fail_nic_init:
+	sfc_vdpa_log_init(sva, "failed: %s", rte_strerror(rc));
+	return rc;
+}
+
+int
+sfc_vdpa_hw_init(struct sfc_vdpa_adapter *sva)
+{
+	efx_bar_region_t mem_ebr;
+	efx_nic_t *enp;
+	int rc;
+
+	sfc_vdpa_log_init(sva, "entry");
+
+	sfc_vdpa_log_init(sva, "get family");
+	rc = sfc_efx_family(sva->pdev, &mem_ebr, &sva->family);
+	if (rc != 0)
+		goto fail_family;
+	sfc_vdpa_log_init(sva,
+			  "family is %u, membar is %u, "
+			  "function control window offset is %#" PRIx64,
+			  sva->family, mem_ebr.ebr_index, mem_ebr.ebr_offset);
+
+	sfc_vdpa_log_init(sva, "init mem bar");
+	rc = sfc_vdpa_mem_bar_init(sva, &mem_ebr);
+	if (rc != 0)
+		goto fail_mem_bar_init;
+
+	sfc_vdpa_log_init(sva, "create nic");
+	rte_spinlock_init(&sva->nic_lock);
+	rc = efx_nic_create(sva->family, (efsys_identifier_t *)sva,
+			    &sva->mem_bar, mem_ebr.ebr_offset,
+			    &sva->nic_lock, &enp);
+	if (rc != 0) {
+		sfc_vdpa_err(sva, "nic create failed: %s", rte_strerror(rc));
+		goto fail_nic_create;
+	}
+	sva->nic = enp;
+
+	sfc_vdpa_log_init(sva, "init mcdi");
+	rc = sfc_vdpa_mcdi_init(sva);
+	if (rc != 0) {
+		sfc_vdpa_err(sva, "mcdi init failed: %s", rte_strerror(rc));
+		goto fail_mcdi_init;
+	}
+
+	sfc_vdpa_log_init(sva, "probe nic");
+	rc = sfc_vdpa_nic_probe(sva);
+	if (rc != 0)
+		goto fail_nic_probe;
+
+	sfc_vdpa_log_init(sva, "reset nic");
+	rc = efx_nic_reset(enp);
+	if (rc != 0) {
+		sfc_vdpa_err(sva, "nic reset failed: %s", rte_strerror(rc));
+		goto fail_nic_reset;
+	}
+
+	sfc_vdpa_log_init(sva, "estimate resource limits");
+	rc = sfc_vdpa_estimate_resource_limits(sva);
+	if (rc != 0)
+		goto fail_estimate_rsrc_limits;
+
+	sfc_vdpa_log_init(sva, "done");
+
+	return 0;
+
+fail_estimate_rsrc_limits:
+fail_nic_reset:
+	efx_nic_unprobe(enp);
+
+fail_nic_probe:
+	sfc_vdpa_mcdi_fini(sva);
+
+fail_mcdi_init:
+	sfc_vdpa_log_init(sva, "destroy nic");
+	sva->nic = NULL;
+	efx_nic_destroy(enp);
+
+fail_nic_create:
+	sfc_vdpa_mem_bar_fini(sva);
+
+fail_mem_bar_init:
+fail_family:
+	sfc_vdpa_log_init(sva, "failed: %s", rte_strerror(rc));
+	return rc;
+}
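/*
 * Aside (illustrative, not part of the patch): sfc_vdpa_hw_init() above
 * follows the common DPDK unwind idiom - each fail_* label releases exactly
 * what was acquired before the failing step, in reverse order. In miniature,
 * with hypothetical steps:
 */
static int example_step_a(void) { return 0; }
static int example_step_b(void) { return 0; }
static void example_undo_a(void) { }

static int
example_init(void)
{
	if (example_step_a() != 0)
		goto fail_a;

	if (example_step_b() != 0)
		goto fail_b;

	return 0;

fail_b:
	example_undo_a();	/* undo only the steps that succeeded */
fail_a:
	return -1;
}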
+
+void
+sfc_vdpa_hw_fini(struct sfc_vdpa_adapter *sva)
+{
+	efx_nic_t *enp = sva->nic;
+
+	sfc_vdpa_log_init(sva, "entry");
+
+	sfc_vdpa_log_init(sva, "unprobe nic");
+	efx_nic_unprobe(enp);
+
+	sfc_vdpa_log_init(sva, "mcdi fini");
+	sfc_vdpa_mcdi_fini(sva);
+
+	sfc_vdpa_log_init(sva, "nic fini");
+	efx_nic_fini(enp);
+
+	sfc_vdpa_log_init(sva, "destroy nic");
+	sva->nic = NULL;
+	efx_nic_destroy(enp);
+
+	sfc_vdpa_mem_bar_fini(sva);
+}
diff --git a/drivers/vdpa/sfc/sfc_vdpa_log.h b/drivers/vdpa/sfc/sfc_vdpa_log.h
index 858e5ee..4e7a84f 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_log.h
+++ b/drivers/vdpa/sfc/sfc_vdpa_log.h
@@ -21,6 +21,9 @@
 /** Name prefix for the per-device log type used to report basic information */
 #define SFC_VDPA_LOGTYPE_MAIN_STR	SFC_VDPA_LOGTYPE_PREFIX "main"
 
+/** Device MCDI log type name prefix */
+#define SFC_VDPA_LOGTYPE_MCDI_STR	SFC_VDPA_LOGTYPE_PREFIX "mcdi"
+
 #define SFC_VDPA_LOG_PREFIX_MAX	32
 
 /* Log PMD message, automatically add prefix and \n */
diff --git a/drivers/vdpa/sfc/sfc_vdpa_mcdi.c b/drivers/vdpa/sfc/sfc_vdpa_mcdi.c
new file mode 100644
index 0000000..961d2d3
--- /dev/null
+++ b/drivers/vdpa/sfc/sfc_vdpa_mcdi.c
@@ -0,0 +1,74 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#include "sfc_efx_mcdi.h"
+
+#include "sfc_vdpa.h"
+#include "sfc_vdpa_debug.h"
+#include "sfc_vdpa_log.h"
+
+static sfc_efx_mcdi_dma_alloc_cb sfc_vdpa_mcdi_dma_alloc;
+static int
+sfc_vdpa_mcdi_dma_alloc(void *cookie, const char *name, size_t len,
+			efsys_mem_t *esmp)
+{
+	struct sfc_vdpa_adapter *sva = cookie;
+
+	return sfc_vdpa_dma_alloc(sva, name, len, esmp);
+}
+
+static sfc_efx_mcdi_dma_free_cb sfc_vdpa_mcdi_dma_free;
+static void
+sfc_vdpa_mcdi_dma_free(void *cookie, efsys_mem_t *esmp)
+{
+	struct sfc_vdpa_adapter *sva = cookie;
+
+	sfc_vdpa_dma_free(sva, esmp);
+}
+
+static sfc_efx_mcdi_sched_restart_cb sfc_vdpa_mcdi_sched_restart;
+static void
+sfc_vdpa_mcdi_sched_restart(void *cookie)
+{
+	RTE_SET_USED(cookie);
+}
+
+static sfc_efx_mcdi_mgmt_evq_poll_cb sfc_vdpa_mcdi_mgmt_evq_poll;
+static void
+sfc_vdpa_mcdi_mgmt_evq_poll(void *cookie)
+{
+	RTE_SET_USED(cookie);
+}
+
+static const struct sfc_efx_mcdi_ops sfc_vdpa_mcdi_ops = {
+	.dma_alloc	= sfc_vdpa_mcdi_dma_alloc,
+	.dma_free	= sfc_vdpa_mcdi_dma_free,
+	.sched_restart	= sfc_vdpa_mcdi_sched_restart,
+	.mgmt_evq_poll	= sfc_vdpa_mcdi_mgmt_evq_poll,
+};
+
+int
+sfc_vdpa_mcdi_init(struct sfc_vdpa_adapter *sva)
+{
+	uint32_t logtype;
+
+	sfc_vdpa_log_init(sva, "entry");
+
+	logtype = sfc_vdpa_register_logtype(&(sva->pdev->addr),
+					    SFC_VDPA_LOGTYPE_MCDI_STR,
+					    RTE_LOG_NOTICE);
+
+	return sfc_efx_mcdi_init(&sva->mcdi, logtype,
+				 sva->log_prefix, sva->nic,
+				 &sfc_vdpa_mcdi_ops, sva);
+}
+
+void
+sfc_vdpa_mcdi_fini(struct sfc_vdpa_adapter *sva)
+{
+	sfc_vdpa_log_init(sva, "entry");
+	sfc_efx_mcdi_fini(&sva->mcdi);
+}
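A small idiom worth noting in sfc_vdpa_mcdi.c above: each callback is first
declared through its sfc_efx_mcdi_*_cb function typedef, so the compiler
verifies that the definition matches the expected signature. A generic
illustration with a hypothetical typedef:

#include <stddef.h>

typedef int (example_cb_t)(void *cookie, size_t len);

/* Declaring via the typedef makes a signature mismatch a compile error */
static example_cb_t example_impl;

static int
example_impl(void *cookie, size_t len)
{
	(void)cookie;
	return (len == 0) ? -1 : 0;
}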
diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c
new file mode 100644
index 0000000..71696be
--- /dev/null
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.c
@@ -0,0 +1,129 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#include
+#include
+#include
+#include
+
+#include "sfc_vdpa_ops.h"
+#include "sfc_vdpa.h"
+
+/* Dummy functions for mandatory vDPA ops to pass vDPA device registration.
+ * These ops will be implemented in subsequent patches.
+ */
+static int
+sfc_vdpa_get_queue_num(struct rte_vdpa_device *vdpa_dev, uint32_t *queue_num)
+{
+	RTE_SET_USED(vdpa_dev);
+	RTE_SET_USED(queue_num);
+
+	return -1;
+}
+
+static int
+sfc_vdpa_get_features(struct rte_vdpa_device *vdpa_dev, uint64_t *features)
+{
+	RTE_SET_USED(vdpa_dev);
+	RTE_SET_USED(features);
+
+	return -1;
+}
+
+static int
+sfc_vdpa_get_protocol_features(struct rte_vdpa_device *vdpa_dev,
+			       uint64_t *features)
+{
+	RTE_SET_USED(vdpa_dev);
+	RTE_SET_USED(features);
+
+	return -1;
+}
+
+static int
+sfc_vdpa_dev_config(int vid)
+{
+	RTE_SET_USED(vid);
+
+	return -1;
+}
+
+static int
+sfc_vdpa_dev_close(int vid)
+{
+	RTE_SET_USED(vid);
+
+	return -1;
+}
+
+static int
+sfc_vdpa_set_vring_state(int vid, int vring, int state)
+{
+	RTE_SET_USED(vid);
+	RTE_SET_USED(vring);
+	RTE_SET_USED(state);
+
+	return -1;
+}
+
+static int
+sfc_vdpa_set_features(int vid)
+{
+	RTE_SET_USED(vid);
+
+	return -1;
+}
+
+static struct rte_vdpa_dev_ops sfc_vdpa_ops = {
+	.get_queue_num = sfc_vdpa_get_queue_num,
+	.get_features = sfc_vdpa_get_features,
+	.get_protocol_features = sfc_vdpa_get_protocol_features,
+	.dev_conf = sfc_vdpa_dev_config,
+	.dev_close = sfc_vdpa_dev_close,
+	.set_vring_state = sfc_vdpa_set_vring_state,
+	.set_features = sfc_vdpa_set_features,
+};
+
+struct sfc_vdpa_ops_data *
+sfc_vdpa_device_init(void *dev_handle, enum sfc_vdpa_context context)
+{
+	struct sfc_vdpa_ops_data *ops_data;
+	struct rte_pci_device *pci_dev;
+
+	/* Create vDPA ops context */
+	ops_data = rte_zmalloc("vdpa", sizeof(struct sfc_vdpa_ops_data), 0);
+	if (ops_data == NULL)
+		return NULL;
+
+	ops_data->vdpa_context = context;
+	ops_data->dev_handle = dev_handle;
+
+	pci_dev = sfc_vdpa_adapter_by_dev_handle(dev_handle)->pdev;
+
+	/* Register vDPA Device */
+	sfc_vdpa_log_init(dev_handle, "register vDPA device");
+	ops_data->vdpa_dev =
+		rte_vdpa_register_device(&pci_dev->device, &sfc_vdpa_ops);
+	if (ops_data->vdpa_dev == NULL) {
+		sfc_vdpa_err(dev_handle, "vDPA device registration failed");
+		goto fail_register_device;
+	}
+
+	ops_data->state = SFC_VDPA_STATE_INITIALIZED;
+
+	return ops_data;
+
+fail_register_device:
+	rte_free(ops_data);
+	return NULL;
+}
+
+void
+sfc_vdpa_device_fini(struct sfc_vdpa_ops_data *ops_data)
+{
+	rte_vdpa_unregister_device(ops_data->vdpa_dev);
+
+	rte_free(ops_data);
+}
diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.h b/drivers/vdpa/sfc/sfc_vdpa_ops.h
new file mode 100644
index 0000000..817b302
--- /dev/null
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#ifndef _SFC_VDPA_OPS_H
+#define _SFC_VDPA_OPS_H
+
+#include
+
+#define SFC_VDPA_MAX_QUEUE_PAIRS		1
+
+enum sfc_vdpa_context {
+	SFC_VDPA_AS_PF = 0,
+	SFC_VDPA_AS_VF
+};
+
+enum sfc_vdpa_state {
+	SFC_VDPA_STATE_UNINITIALIZED = 0,
+	SFC_VDPA_STATE_INITIALIZED,
+	SFC_VDPA_STATE_NSTATES
+};
+
+struct sfc_vdpa_ops_data {
+	void *dev_handle;
+	struct rte_vdpa_device *vdpa_dev;
+	enum sfc_vdpa_context vdpa_context;
+	enum sfc_vdpa_state state;
+};
+
+struct sfc_vdpa_ops_data *
+sfc_vdpa_device_init(void *adapter, enum sfc_vdpa_context context);
+void
+sfc_vdpa_device_fini(struct sfc_vdpa_ops_data *ops_data);
+
+#endif /* _SFC_VDPA_OPS_H */
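The next patch in the series advertises virtio-net features (see its sfc.ini
update below). As a rough sketch of how such a feature set is composed into a
64-bit mask: example_feature_mask() is hypothetical, the bit macros come from
the Linux uapi virtio headers, and the driver itself reads the real set from
the device via efx_virtio_get_features():

#include <stdint.h>
#include <linux/virtio_config.h>
#include <linux/virtio_net.h>

/* Hypothetical composition mirroring the sfc.ini feature list below;
 * VIRTIO_F_IN_ORDER requires reasonably recent kernel headers.
 */
static uint64_t
example_feature_mask(void)
{
	return (1ULL << VIRTIO_NET_F_CSUM) |
	       (1ULL << VIRTIO_NET_F_GUEST_CSUM) |
	       (1ULL << VIRTIO_NET_F_HOST_TSO4) |
	       (1ULL << VIRTIO_NET_F_HOST_TSO6) |
	       (1ULL << VIRTIO_F_VERSION_1) |
	       (1ULL << VIRTIO_NET_F_MRG_RXBUF) |
	       (1ULL << VIRTIO_F_ANY_LAYOUT) |
	       (1ULL << VIRTIO_F_IN_ORDER) |
	       (1ULL << VIRTIO_F_IOMMU_PLATFORM);
}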
From patchwork Thu Oct 28 07:54:45 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Vijay Srivastava
X-Patchwork-Id: 103115
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Vijay Srivastava
To: <dev@dpdk.org>
CC: <maxime.coquelin@redhat.com>, <chenbo.xia@intel.com>,
 <andrew.rybchenko@oktetlabs.ru>, Vijay Kumar Srivastava
Date: Thu, 28 Oct 2021 13:24:45 +0530
Message-ID: <20211028075452.11804-4-vsrivast@xilinx.com>
X-Mailer: git-send-email 2.25.0
In-Reply-To: <20211028075452.11804-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com>
 <20211028075452.11804-1-vsrivast@xilinx.com>
Subject: [dpdk-dev] [PATCH v2 03/10] vdpa/sfc: add support to get device and protocol features

From: Vijay Kumar Srivastava

Implement vDPA ops get_features and get_protocol_features. This patch
retrieves the virtio features supported by the device and reports the
vhost-user protocol features supported by the driver.
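As a usage sketch, not part of this patch: an application can read back what these two ops report through the generic rte_vdpa query helpers, assuming those helpers are available in the vhost library. The device name below is a placeholder.

#include <stdio.h>
#include <inttypes.h>
#include <rte_vdpa.h>

/* Placeholder device name; use the PCI address of the vDPA VF. */
static void
example_show_features(void)
{
	struct rte_vdpa_device *vdev;
	uint64_t features = 0, protocol_features = 0;

	vdev = rte_vdpa_find_device_by_name("0000:03:00.1");
	if (vdev == NULL)
		return;

	/* Reaches the driver's get_features op (drv_features). */
	if (rte_vdpa_get_features(vdev, &features) == 0)
		printf("virtio features   : 0x%" PRIx64 "\n", features);

	/* Reaches get_protocol_features (SFC_VDPA_PROTOCOL_FEATURES). */
	if (rte_vdpa_get_protocol_features(vdev, &protocol_features) == 0)
		printf("protocol features : 0x%" PRIx64 "\n", protocol_features);
}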
Signed-off-by: Vijay Kumar Srivastava
---
 doc/guides/vdpadevs/features/sfc.ini | 10 ++++
 drivers/common/sfc_efx/efsys.h       |  2 +-
 drivers/common/sfc_efx/version.map   | 10 ++++
 drivers/vdpa/sfc/sfc_vdpa.c          | 20 ++++++++
 drivers/vdpa/sfc/sfc_vdpa.h          |  2 +
 drivers/vdpa/sfc/sfc_vdpa_hw.c       | 13 ++++++
 drivers/vdpa/sfc/sfc_vdpa_ops.c      | 91 ++++++++++++++++++++++++++++++++----
 drivers/vdpa/sfc/sfc_vdpa_ops.h      |  3 ++
 8 files changed, 142 insertions(+), 9 deletions(-)

diff --git a/doc/guides/vdpadevs/features/sfc.ini b/doc/guides/vdpadevs/features/sfc.ini
index 71b6158..700d061 100644
--- a/doc/guides/vdpadevs/features/sfc.ini
+++ b/doc/guides/vdpadevs/features/sfc.ini
@@ -4,6 +4,16 @@
 ; Refer to default.ini for the full list of available driver features.
 ;
 [Features]
+csum                 = Y
+guest csum           = Y
+host tso4            = Y
+host tso6            = Y
+version 1            = Y
+mrg rxbuf            = Y
+any layout           = Y
+in_order             = Y
+proto host notifier  = Y
+IOMMU platform       = Y
 Linux                = Y
 x86-64               = Y
 Usage doc            = Y

diff --git a/drivers/common/sfc_efx/efsys.h b/drivers/common/sfc_efx/efsys.h
index d133d61..37ec6b9 100644
--- a/drivers/common/sfc_efx/efsys.h
+++ b/drivers/common/sfc_efx/efsys.h
@@ -187,7 +187,7 @@

 #define EFSYS_OPT_MAE 1

-#define EFSYS_OPT_VIRTIO 0
+#define EFSYS_OPT_VIRTIO 1

 /* ID */

diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index 642a62e..ec86220 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -247,6 +247,16 @@ INTERNAL {
	efx_txq_nbufs;
	efx_txq_size;

+	efx_virtio_fini;
+	efx_virtio_get_doorbell_offset;
+	efx_virtio_get_features;
+	efx_virtio_init;
+	efx_virtio_qcreate;
+	efx_virtio_qdestroy;
+	efx_virtio_qstart;
+	efx_virtio_qstop;
+	efx_virtio_verify_features;
+
	sfc_efx_dev_class_get;
	sfc_efx_family;

diff --git a/drivers/vdpa/sfc/sfc_vdpa.c b/drivers/vdpa/sfc/sfc_vdpa.c
index 00fa94a..4927698 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.c
+++ b/drivers/vdpa/sfc/sfc_vdpa.c
@@ -43,6 +43,26 @@ struct sfc_vdpa_adapter *
	return found ? sva : NULL;
 }

+struct sfc_vdpa_ops_data *
+sfc_vdpa_get_data_by_dev(struct rte_vdpa_device *vdpa_dev)
+{
+	bool found = false;
+	struct sfc_vdpa_adapter *sva;
+
+	pthread_mutex_lock(&sfc_vdpa_adapter_list_lock);
+
+	TAILQ_FOREACH(sva, &sfc_vdpa_adapter_list, next) {
+		if (vdpa_dev == sva->ops_data->vdpa_dev) {
+			found = true;
+			break;
+		}
+	}
+
+	pthread_mutex_unlock(&sfc_vdpa_adapter_list_lock);
+
+	return found ? sva->ops_data : NULL;
+}
+
 static int
 sfc_vdpa_vfio_setup(struct sfc_vdpa_adapter *sva)
 {

diff --git a/drivers/vdpa/sfc/sfc_vdpa.h b/drivers/vdpa/sfc/sfc_vdpa.h
index 046f25d..c10c3d3 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.h
+++ b/drivers/vdpa/sfc/sfc_vdpa.h
@@ -60,6 +60,8 @@ struct sfc_vdpa_adapter {
 struct sfc_vdpa_adapter *
 sfc_vdpa_get_adapter_by_dev(struct rte_pci_device *pdev);

+struct sfc_vdpa_ops_data *
+sfc_vdpa_get_data_by_dev(struct rte_vdpa_device *vdpa_dev);

 int
 sfc_vdpa_hw_init(struct sfc_vdpa_adapter *sva);

diff --git a/drivers/vdpa/sfc/sfc_vdpa_hw.c b/drivers/vdpa/sfc/sfc_vdpa_hw.c
index 7c256ff..7a67bd8 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_hw.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_hw.c
@@ -278,10 +278,20 @@
	if (rc != 0)
		goto fail_estimate_rsrc_limits;

+	sfc_vdpa_log_init(sva, "init virtio");
+	rc = efx_virtio_init(enp);
+	if (rc != 0) {
+		sfc_vdpa_err(sva, "virtio init failed: %s", rte_strerror(rc));
+		goto fail_virtio_init;
+	}
+
	sfc_vdpa_log_init(sva, "done");

	return 0;

+fail_virtio_init:
+	efx_nic_fini(enp);
+
 fail_estimate_rsrc_limits:
 fail_nic_reset:
	efx_nic_unprobe(enp);
@@ -310,6 +320,9 @@
	sfc_vdpa_log_init(sva, "entry");

+	sfc_vdpa_log_init(sva, "virtio fini");
+	efx_virtio_fini(enp);
+
	sfc_vdpa_log_init(sva, "unprobe nic");
	efx_nic_unprobe(enp);

diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c
index 71696be..5750944 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.c
@@ -3,17 +3,31 @@
  * Copyright(c) 2020-2021 Xilinx, Inc.
  */

+#include
 #include
 #include
 #include
 #include

+#include "efx.h"
 #include "sfc_vdpa_ops.h"
 #include "sfc_vdpa.h"

-/* Dummy functions for mandatory vDPA ops to pass vDPA device registration.
- * In subsequent patches these ops would be implemented.
+/* These protocol features are needed to enable notifier ctrl */
+#define SFC_VDPA_PROTOCOL_FEATURES \
+	((1ULL << VHOST_USER_PROTOCOL_F_REPLY_ACK) | \
+	 (1ULL << VHOST_USER_PROTOCOL_F_SLAVE_REQ) | \
+	 (1ULL << VHOST_USER_PROTOCOL_F_SLAVE_SEND_FD) | \
+	 (1ULL << VHOST_USER_PROTOCOL_F_HOST_NOTIFIER) | \
+	 (1ULL << VHOST_USER_PROTOCOL_F_LOG_SHMFD))
+
+/*
+ * Set of features which are enabled by default.
+ * Protocol feature bit is needed to enable notification notifier ctrl.
  */
+#define SFC_VDPA_DEFAULT_FEATURES \
+	(1ULL << VHOST_USER_F_PROTOCOL_FEATURES)
+
 static int
 sfc_vdpa_get_queue_num(struct rte_vdpa_device *vdpa_dev, uint32_t *queue_num)
 {
@@ -24,22 +38,67 @@
 }

 static int
+sfc_vdpa_get_device_features(struct sfc_vdpa_ops_data *ops_data)
+{
+	int rc;
+	uint64_t dev_features;
+	efx_nic_t *nic;
+
+	nic = sfc_vdpa_adapter_by_dev_handle(ops_data->dev_handle)->nic;
+
+	rc = efx_virtio_get_features(nic, EFX_VIRTIO_DEVICE_TYPE_NET,
+				     &dev_features);
+	if (rc != 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "could not read device feature: %s",
+			     rte_strerror(rc));
+		return rc;
+	}
+
+	ops_data->dev_features = dev_features;
+
+	sfc_vdpa_info(ops_data->dev_handle,
+		      "device supported virtio features : 0x%" PRIx64,
+		      ops_data->dev_features);
+
+	return 0;
+}
+
+static int
 sfc_vdpa_get_features(struct rte_vdpa_device *vdpa_dev, uint64_t *features)
 {
-	RTE_SET_USED(vdpa_dev);
-	RTE_SET_USED(features);
+	struct sfc_vdpa_ops_data *ops_data;

-	return -1;
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL)
+		return -1;
+
+	*features = ops_data->drv_features;
+
+	sfc_vdpa_info(ops_data->dev_handle,
+		      "vDPA ops get_feature :: features : 0x%" PRIx64,
+		      *features);
+
+	return 0;
 }

 static int
 sfc_vdpa_get_protocol_features(struct rte_vdpa_device *vdpa_dev,
			       uint64_t *features)
 {
-	RTE_SET_USED(vdpa_dev);
-	RTE_SET_USED(features);
+	struct sfc_vdpa_ops_data *ops_data;

-	return -1;
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL)
+		return -1;
+
+	*features = SFC_VDPA_PROTOCOL_FEATURES;
+
+	sfc_vdpa_info(ops_data->dev_handle,
+		      "vDPA ops get_protocol_feature :: features : 0x%" PRIx64,
+		      *features);
+
+	return 0;
 }

 static int
@@ -91,6 +150,7 @@ struct sfc_vdpa_ops_data *
 {
	struct sfc_vdpa_ops_data *ops_data;
	struct rte_pci_device *pci_dev;
+	int rc;

	/* Create vDPA ops context */
	ops_data = rte_zmalloc("vdpa", sizeof(struct sfc_vdpa_ops_data), 0);
@@ -111,10 +171,25 @@ struct sfc_vdpa_ops_data *
		goto fail_register_device;
	}

+	/* Read supported device features */
+	sfc_vdpa_log_init(dev_handle, "get device feature");
+	rc = sfc_vdpa_get_device_features(ops_data);
+	if (rc != 0)
+		goto fail_get_dev_feature;
+
+	/* Driver features are superset of device supported feature
+	 * and any additional features supported by the driver.
+	 */
+	ops_data->drv_features =
+		ops_data->dev_features | SFC_VDPA_DEFAULT_FEATURES;
+
	ops_data->state = SFC_VDPA_STATE_INITIALIZED;

	return ops_data;

+fail_get_dev_feature:
+	rte_vdpa_unregister_device(ops_data->vdpa_dev);
+
 fail_register_device:
	rte_free(ops_data);

	return NULL;

diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.h b/drivers/vdpa/sfc/sfc_vdpa_ops.h
index 817b302..21cbb73 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.h
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.h
@@ -26,6 +26,9 @@ struct sfc_vdpa_ops_data {
	struct rte_vdpa_device	*vdpa_dev;
	enum sfc_vdpa_context	vdpa_context;
	enum sfc_vdpa_state	state;
+
+	uint64_t		dev_features;
+	uint64_t		drv_features;
 };

 struct sfc_vdpa_ops_data *

From patchwork Thu Oct 28 07:54:46 2021
X-Patchwork-Submitter: Vijay Srivastava
X-Patchwork-Id: 103116
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Vijay Srivastava
Date: Thu, 28 Oct 2021 13:24:46 +0530
Message-ID: <20211028075452.11804-5-vsrivast@xilinx.com>
In-Reply-To: <20211028075452.11804-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com> <20211028075452.11804-1-vsrivast@xilinx.com>
Subject: [dpdk-dev] [PATCH v2 04/10] vdpa/sfc: get device supported max queue count

From: Vijay Kumar Srivastava

Implement vDPA ops get_queue_num to get the maximum number of queues
supported by the device.
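For illustration only, not part of the patch: the op is reached through the generic rte_vdpa wrapper, so an application can size its queue setup as below (the example_* name is hypothetical; 'vdev' would come from rte_vdpa_find_device_by_name()).

#include <stdint.h>
#include <rte_vdpa.h>

static uint32_t
example_max_queues(struct rte_vdpa_device *vdev)
{
	uint32_t queue_num = 0;

	/* Invokes the driver's get_queue_num op under the hood. */
	if (rte_vdpa_get_queue_num(vdev, &queue_num) != 0)
		return 0;

	return queue_num;
}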
Signed-off-by: Vijay Kumar Srivastava
---
 drivers/vdpa/sfc/sfc_vdpa_ops.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c
index 5750944..6c702e1 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.c
@@ -31,10 +31,20 @@
 static int
 sfc_vdpa_get_queue_num(struct rte_vdpa_device *vdpa_dev, uint32_t *queue_num)
 {
-	RTE_SET_USED(vdpa_dev);
-	RTE_SET_USED(queue_num);
+	struct sfc_vdpa_ops_data *ops_data;
+	void *dev;

-	return -1;
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL)
+		return -1;
+
+	dev = ops_data->dev_handle;
+	*queue_num = sfc_vdpa_adapter_by_dev_handle(dev)->max_queue_count;
+
+	sfc_vdpa_info(dev, "vDPA ops get_queue_num :: supported queue num : %d",
+		      *queue_num);
+
+	return 0;
 }

 static int

From patchwork Thu Oct 28 07:54:47 2021
X-Patchwork-Submitter: Vijay Srivastava
X-Patchwork-Id: 103117
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Vijay Srivastava
Date: Thu, 28 Oct 2021 13:24:47 +0530
Message-ID: <20211028075452.11804-6-vsrivast@xilinx.com>
In-Reply-To: <20211028075452.11804-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com> <20211028075452.11804-1-vsrivast@xilinx.com>
Subject: [dpdk-dev] [PATCH v2 05/10] vdpa/sfc: add support to get VFIO device fd

From: Vijay Kumar Srivastava

Implement vDPA ops get_vfio_device_fd to get the VFIO device fd.
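For context, a sketch of what a consumer of the returned fd can do with it, using only standard VFIO ioctls. This mirrors how vhost inspects device regions; the example_* helper is illustrative and not code from this series.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

/* Query the size of one region (e.g. a BAR) of the device behind the
 * fd returned by the get_vfio_device_fd op.
 */
static int
example_region_size(int vfio_dev_fd, unsigned int index, uint64_t *size)
{
	struct vfio_region_info reg = { .argsz = sizeof(reg) };

	reg.index = index;
	if (ioctl(vfio_dev_fd, VFIO_DEVICE_GET_REGION_INFO, &reg) < 0)
		return -1;

	*size = reg.size;
	return 0;
}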
Signed-off-by: Vijay Kumar Srivastava
---
 drivers/vdpa/sfc/sfc_vdpa_ops.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c
index 6c702e1..5253adb 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.c
@@ -145,6 +145,29 @@
	return -1;
 }

+static int
+sfc_vdpa_get_vfio_device_fd(int vid)
+{
+	struct rte_vdpa_device *vdpa_dev;
+	struct sfc_vdpa_ops_data *ops_data;
+	int vfio_dev_fd;
+	void *dev;
+
+	vdpa_dev = rte_vhost_get_vdpa_device(vid);
+
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL)
+		return -1;
+
+	dev = ops_data->dev_handle;
+	vfio_dev_fd = sfc_vdpa_adapter_by_dev_handle(dev)->vfio_dev_fd;
+
+	sfc_vdpa_info(dev, "vDPA ops get_vfio_device_fd :: vfio fd : %d",
+		      vfio_dev_fd);
+
+	return vfio_dev_fd;
+}
+
 static struct rte_vdpa_dev_ops sfc_vdpa_ops = {
	.get_queue_num = sfc_vdpa_get_queue_num,
	.get_features = sfc_vdpa_get_features,
@@ -153,6 +176,7 @@
	.dev_close = sfc_vdpa_dev_close,
	.set_vring_state = sfc_vdpa_set_vring_state,
	.set_features = sfc_vdpa_set_features,
+	.get_vfio_device_fd = sfc_vdpa_get_vfio_device_fd,
 };

 struct sfc_vdpa_ops_data *

From patchwork Thu Oct 28 07:54:48 2021
X-Patchwork-Submitter: Vijay Srivastava
X-Patchwork-Id: 103118
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Vijay Srivastava
Date: Thu, 28 Oct 2021 13:24:48 +0530
Message-ID: <20211028075452.11804-7-vsrivast@xilinx.com>
In-Reply-To: <20211028075452.11804-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com> <20211028075452.11804-1-vsrivast@xilinx.com>
Subject: [dpdk-dev] [PATCH v2 06/10] vdpa/sfc: add support for dev conf and dev close ops

From: Vijay Kumar Srivastava

Implement vDPA ops dev_conf and dev_close for DMA mapping, interrupt
and virtqueue configuration.

Signed-off-by: Vijay Kumar Srivastava
---
v2:
* Removed redundant null check while calling free().
* Added error handling for rte_vhost_get_vhost_vring().
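Before the diff, a condensed sketch of the DMA mapping pattern that sfc_vdpa_dma_map() below implements: walk the vhost memory table and map each region's host virtual address to its guest physical address in the VFIO container. The example_* helper restates the pattern under simplified error handling and is illustrative only.

#include <stdint.h>
#include <stdlib.h>
#include <rte_vhost.h>
#include <rte_vfio.h>

static int
example_map_guest_memory(int vid, int vfio_container_fd)
{
	struct rte_vhost_memory *mem = NULL;
	uint32_t i;

	if (rte_vhost_get_mem_table(vid, &mem) < 0)
		return -1;

	for (i = 0; i < mem->nregions; i++) {
		struct rte_vhost_mem_region *r = &mem->regions[i];

		/* HVA is the IOMMU input address; IOVA is the GPA. */
		if (rte_vfio_container_dma_map(vfio_container_fd,
					       r->host_user_addr,
					       r->guest_phys_addr,
					       r->size) < 0) {
			free(mem);
			return -1;
		}
	}

	free(mem);
	return 0;
}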
 drivers/vdpa/sfc/sfc_vdpa.c     |   6 +
 drivers/vdpa/sfc/sfc_vdpa.h     |  43 ++++
 drivers/vdpa/sfc/sfc_vdpa_hw.c  |  69 ++++++
 drivers/vdpa/sfc/sfc_vdpa_ops.c | 530 ++++++++++++++++++++++++++++++++++++++--
 drivers/vdpa/sfc/sfc_vdpa_ops.h |  28 +++
 5 files changed, 656 insertions(+), 20 deletions(-)

diff --git a/drivers/vdpa/sfc/sfc_vdpa.c b/drivers/vdpa/sfc/sfc_vdpa.c
index 4927698..9ffea59 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.c
+++ b/drivers/vdpa/sfc/sfc_vdpa.c
@@ -246,6 +246,8 @@ struct sfc_vdpa_ops_data *
	sfc_vdpa_log_init(sva, "entry");

+	sfc_vdpa_adapter_lock_init(sva);
+
	sfc_vdpa_log_init(sva, "vfio init");
	if (sfc_vdpa_vfio_setup(sva) < 0) {
		sfc_vdpa_err(sva, "failed to setup device %s", pci_dev->name);
@@ -280,6 +282,8 @@ struct sfc_vdpa_ops_data *
	sfc_vdpa_vfio_teardown(sva);

 fail_vfio_setup:
+	sfc_vdpa_adapter_lock_fini(sva);
+
 fail_set_log_prefix:
	rte_free(sva);

@@ -311,6 +315,8 @@ struct sfc_vdpa_ops_data *

	sfc_vdpa_vfio_teardown(sva);

+	sfc_vdpa_adapter_lock_fini(sva);
+
	rte_free(sva);

	return 0;

diff --git a/drivers/vdpa/sfc/sfc_vdpa.h b/drivers/vdpa/sfc/sfc_vdpa.h
index c10c3d3..1bf96e7 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.h
+++ b/drivers/vdpa/sfc/sfc_vdpa.h
@@ -80,10 +80,53 @@ struct sfc_vdpa_ops_data *
 void
 sfc_vdpa_dma_free(struct sfc_vdpa_adapter *sva, efsys_mem_t *esmp);

+int
+sfc_vdpa_dma_map(struct sfc_vdpa_ops_data *vdpa_data, bool do_map);
+
 static inline struct sfc_vdpa_adapter *
 sfc_vdpa_adapter_by_dev_handle(void *dev_handle)
 {
	return (struct sfc_vdpa_adapter *)dev_handle;
 }

+/*
+ * Add wrapper functions to acquire/release lock to be able to remove or
+ * change the lock in one place.
+ */
+static inline void
+sfc_vdpa_adapter_lock_init(struct sfc_vdpa_adapter *sva)
+{
+	rte_spinlock_init(&sva->lock);
+}
+
+static inline int
+sfc_vdpa_adapter_is_locked(struct sfc_vdpa_adapter *sva)
+{
+	return rte_spinlock_is_locked(&sva->lock);
+}
+
+static inline void
+sfc_vdpa_adapter_lock(struct sfc_vdpa_adapter *sva)
+{
+	rte_spinlock_lock(&sva->lock);
+}
+
+static inline int
+sfc_vdpa_adapter_trylock(struct sfc_vdpa_adapter *sva)
+{
+	return rte_spinlock_trylock(&sva->lock);
+}
+
+static inline void
+sfc_vdpa_adapter_unlock(struct sfc_vdpa_adapter *sva)
+{
+	rte_spinlock_unlock(&sva->lock);
+}
+
+static inline void
+sfc_vdpa_adapter_lock_fini(__rte_unused struct sfc_vdpa_adapter *sva)
+{
+	/* Just for symmetry of the API */
+}
+
 #endif /* _SFC_VDPA_H */
diff --git a/drivers/vdpa/sfc/sfc_vdpa_hw.c b/drivers/vdpa/sfc/sfc_vdpa_hw.c
index 7a67bd8..b473708 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_hw.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_hw.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include

 #include "efx.h"
 #include "sfc_vdpa.h"
@@ -109,6 +110,74 @@
	memset(esmp, 0, sizeof(*esmp));
 }

+int
+sfc_vdpa_dma_map(struct sfc_vdpa_ops_data *ops_data, bool do_map)
+{
+	uint32_t i, j;
+	int rc;
+	struct rte_vhost_memory *vhost_mem = NULL;
+	struct rte_vhost_mem_region *mem_reg = NULL;
+	int vfio_container_fd;
+	void *dev;
+
+	dev = ops_data->dev_handle;
+	vfio_container_fd =
+		sfc_vdpa_adapter_by_dev_handle(dev)->vfio_container_fd;
+
+	rc = rte_vhost_get_mem_table(ops_data->vid, &vhost_mem);
+	if (rc < 0) {
+		sfc_vdpa_err(dev, "failed to get VM memory layout");
+		goto error;
+	}
+
+	for (i = 0; i < vhost_mem->nregions; i++) {
+		mem_reg = &vhost_mem->regions[i];
+
+		if (do_map) {
+			rc = rte_vfio_container_dma_map(vfio_container_fd,
+						mem_reg->host_user_addr,
+						mem_reg->guest_phys_addr,
+						mem_reg->size);
+			if (rc < 0) {
+				sfc_vdpa_err(dev, "DMA map failed : %s",
+					     rte_strerror(rte_errno));
+				goto failed_vfio_dma_map;
+			}
+		} else {
+			rc = rte_vfio_container_dma_unmap(vfio_container_fd,
+						mem_reg->host_user_addr,
+						mem_reg->guest_phys_addr,
+						mem_reg->size);
+			if (rc < 0) {
+				sfc_vdpa_err(dev, "DMA unmap failed : %s",
+					     rte_strerror(rte_errno));
+				goto error;
+			}
+		}
+	}
+
+	free(vhost_mem);
+
+	return 0;
+
+failed_vfio_dma_map:
+	for (j = 0; j < i; j++) {
+		mem_reg = &vhost_mem->regions[j];
+		rc = rte_vfio_container_dma_unmap(vfio_container_fd,
+						  mem_reg->host_user_addr,
+						  mem_reg->guest_phys_addr,
+						  mem_reg->size);
+	}
+
+error:
+	free(vhost_mem);
+
+	return rc;
+}
+
 static int
 sfc_vdpa_mem_bar_init(struct sfc_vdpa_adapter *sva,
		      const efx_bar_region_t *mem_ebrp)

diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c
index 5253adb..de1c81a 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.c
@@ -3,10 +3,13 @@
  * Copyright(c) 2020-2021 Xilinx, Inc.
  */

+#include
+
 #include
 #include
 #include
 #include
+#include
 #include

 #include "efx.h"
@@ -28,24 +31,12 @@
 #define SFC_VDPA_DEFAULT_FEATURES \
	(1ULL << VHOST_USER_F_PROTOCOL_FEATURES)

-static int
-sfc_vdpa_get_queue_num(struct rte_vdpa_device *vdpa_dev, uint32_t *queue_num)
-{
-	struct sfc_vdpa_ops_data *ops_data;
-	void *dev;
-
-	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
-	if (ops_data == NULL)
-		return -1;
-
-	dev = ops_data->dev_handle;
-	*queue_num = sfc_vdpa_adapter_by_dev_handle(dev)->max_queue_count;
+#define SFC_VDPA_MSIX_IRQ_SET_BUF_LEN \
+	(sizeof(struct vfio_irq_set) + \
+	 sizeof(int) * (SFC_VDPA_MAX_QUEUE_PAIRS * 2 + 1))

-	sfc_vdpa_info(dev, "vDPA ops get_queue_num :: supported queue num : %d",
-		      *queue_num);
-
-	return 0;
-}
+/* It will be used for target VF when calling function is not PF */
+#define SFC_VDPA_VF_NULL	0xFFFF

 static int
 sfc_vdpa_get_device_features(struct sfc_vdpa_ops_data *ops_data)
@@ -74,6 +65,441 @@
	return 0;
 }

+static uint64_t
+hva_to_gpa(int vid, uint64_t hva)
+{
+	struct rte_vhost_memory *vhost_mem = NULL;
+	struct rte_vhost_mem_region *mem_reg = NULL;
+	uint32_t i;
+	uint64_t gpa = 0;
+
+	if (rte_vhost_get_mem_table(vid, &vhost_mem) < 0)
+		goto error;
+
+	for (i = 0; i < vhost_mem->nregions; i++) {
+		mem_reg = &vhost_mem->regions[i];
+
+		if (hva >= mem_reg->host_user_addr &&
+		    hva < mem_reg->host_user_addr + mem_reg->size) {
+			gpa = (hva - mem_reg->host_user_addr) +
+			      mem_reg->guest_phys_addr;
+			break;
+		}
+	}
+
+error:
+	free(vhost_mem);
+	return gpa;
+}
+
+static int
+sfc_vdpa_enable_vfio_intr(struct sfc_vdpa_ops_data *ops_data)
+{
+	int rc;
+	int *irq_fd_ptr;
+	int vfio_dev_fd;
+	uint32_t i, num_vring;
+	struct rte_vhost_vring vring;
+	struct vfio_irq_set *irq_set;
+	struct rte_pci_device *pci_dev;
+	char irq_set_buf[SFC_VDPA_MSIX_IRQ_SET_BUF_LEN];
+	void *dev;
+
+	num_vring = rte_vhost_get_vring_num(ops_data->vid);
+	dev = ops_data->dev_handle;
+	vfio_dev_fd = sfc_vdpa_adapter_by_dev_handle(dev)->vfio_dev_fd;
+	pci_dev = sfc_vdpa_adapter_by_dev_handle(dev)->pdev;
+
+	irq_set = (struct vfio_irq_set *)irq_set_buf;
+	irq_set->argsz = sizeof(irq_set_buf);
+	irq_set->count = num_vring + 1;
+	irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
+			 VFIO_IRQ_SET_ACTION_TRIGGER;
+	irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
+	irq_set->start = 0;
+	irq_fd_ptr = (int *)&irq_set->data;
+	irq_fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] =
+		rte_intr_fd_get(pci_dev->intr_handle);
+
+	for (i = 0; i < num_vring; i++) {
+		rc = rte_vhost_get_vhost_vring(ops_data->vid, i, &vring);
+		if (rc)
+			return -1;
+
+		irq_fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] = vring.callfd;
+	}
+
+	rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	if (rc) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "error enabling MSI-X interrupts: %s",
+			     strerror(errno));
+		return -1;
+	}
+
+	return 0;
+}
+
+static int
+sfc_vdpa_disable_vfio_intr(struct sfc_vdpa_ops_data *ops_data)
+{
+	int rc;
+	int vfio_dev_fd;
+	struct vfio_irq_set *irq_set;
+	char irq_set_buf[SFC_VDPA_MSIX_IRQ_SET_BUF_LEN];
+	void *dev;
+
+	dev = ops_data->dev_handle;
+	vfio_dev_fd = sfc_vdpa_adapter_by_dev_handle(dev)->vfio_dev_fd;
+
+	irq_set = (struct vfio_irq_set *)irq_set_buf;
+	irq_set->argsz = sizeof(irq_set_buf);
+	irq_set->count = 0;
+	irq_set->flags = VFIO_IRQ_SET_DATA_NONE | VFIO_IRQ_SET_ACTION_TRIGGER;
+	irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
+	irq_set->start = 0;
+
+	rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	if (rc) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "error disabling MSI-X interrupts: %s",
+			     strerror(errno));
+		return -1;
+	}
+
+	return 0;
+}
+
+static int
+sfc_vdpa_get_vring_info(struct sfc_vdpa_ops_data *ops_data,
+			int vq_num, struct sfc_vdpa_vring_info *vring)
+{
+	int rc;
+	uint64_t gpa;
+	struct rte_vhost_vring vq;
+
+	rc = rte_vhost_get_vhost_vring(ops_data->vid, vq_num, &vq);
+	if (rc < 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "get vhost vring failed: %s", rte_strerror(rc));
+		return rc;
+	}
+
+	gpa = hva_to_gpa(ops_data->vid, (uint64_t)(uintptr_t)vq.desc);
+	if (gpa == 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "fail to get GPA for descriptor ring.");
+		goto fail_vring_map;
+	}
+	vring->desc = gpa;
+
+	gpa = hva_to_gpa(ops_data->vid, (uint64_t)(uintptr_t)vq.avail);
+	if (gpa == 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "fail to get GPA for available ring.");
+		goto fail_vring_map;
+	}
+	vring->avail = gpa;
+
+	gpa = hva_to_gpa(ops_data->vid, (uint64_t)(uintptr_t)vq.used);
+	if (gpa == 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "fail to get GPA for used ring.");
+		goto fail_vring_map;
+	}
+	vring->used = gpa;
+
+	vring->size = vq.size;
+
+	rc = rte_vhost_get_vring_base(ops_data->vid, vq_num,
+				      &vring->last_avail_idx,
+				      &vring->last_used_idx);
+
+	return rc;
+
+fail_vring_map:
+	return -1;
+}
+
+static int
+sfc_vdpa_virtq_start(struct sfc_vdpa_ops_data *ops_data, int vq_num)
+{
+	int rc;
+	efx_virtio_vq_t *vq;
+	struct sfc_vdpa_vring_info vring;
+	efx_virtio_vq_cfg_t vq_cfg;
+	efx_virtio_vq_dyncfg_t vq_dyncfg;
+
+	vq = ops_data->vq_cxt[vq_num].vq;
+	if (vq == NULL)
+		return -1;
+
+	rc = sfc_vdpa_get_vring_info(ops_data, vq_num, &vring);
+	if (rc < 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "get vring info failed: %s", rte_strerror(rc));
+		goto fail_vring_info;
+	}
+
+	vq_cfg.evvc_target_vf = SFC_VDPA_VF_NULL;
+
+	/* even virtqueue for RX and odd for TX */
+	if (vq_num % 2) {
+		vq_cfg.evvc_type = EFX_VIRTIO_VQ_TYPE_NET_TXQ;
+		sfc_vdpa_info(ops_data->dev_handle,
+			      "configure virtqueue # %d (TXQ)", vq_num);
+	} else {
+		vq_cfg.evvc_type = EFX_VIRTIO_VQ_TYPE_NET_RXQ;
+		sfc_vdpa_info(ops_data->dev_handle,
+			      "configure virtqueue # %d (RXQ)", vq_num);
+	}
+
+	vq_cfg.evvc_vq_num = vq_num;
+	vq_cfg.evvc_desc_tbl_addr = vring.desc;
+	vq_cfg.evvc_avail_ring_addr = vring.avail;
+	vq_cfg.evvc_used_ring_addr = vring.used;
+	vq_cfg.evvc_vq_size = vring.size;
+
+	vq_dyncfg.evvd_vq_pidx = vring.last_used_idx;
+	vq_dyncfg.evvd_vq_cidx = vring.last_avail_idx;
+
+	/* MSI-X vector is function-relative */
+	vq_cfg.evvc_msix_vector = RTE_INTR_VEC_RXTX_OFFSET + vq_num;
+	if (ops_data->vdpa_context == SFC_VDPA_AS_VF)
+		vq_cfg.evvc_pas_id = 0;
+	vq_cfg.evcc_features = ops_data->dev_features &
+			       ops_data->req_features;
+
+	/* Start virtqueue */
+	rc = efx_virtio_qstart(vq, &vq_cfg, &vq_dyncfg);
+	if (rc != 0) {
+		/* destroy virtqueue */
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "virtqueue start failed: %s",
+			     rte_strerror(rc));
+		efx_virtio_qdestroy(vq);
+		goto fail_virtio_qstart;
+	}
+
+	sfc_vdpa_info(ops_data->dev_handle,
+		      "virtqueue started successfully for vq_num %d", vq_num);
+
+	ops_data->vq_cxt[vq_num].enable = B_TRUE;
+
+	return rc;
+
+fail_virtio_qstart:
+fail_vring_info:
+	return rc;
+}
+
+static int
+sfc_vdpa_virtq_stop(struct sfc_vdpa_ops_data *ops_data, int vq_num)
+{
+	int rc;
+	efx_virtio_vq_dyncfg_t vq_idx;
+	efx_virtio_vq_t *vq;
+
+	if (ops_data->vq_cxt[vq_num].enable != B_TRUE)
+		return -1;
+
+	vq = ops_data->vq_cxt[vq_num].vq;
+	if (vq == NULL)
+		return -1;
+
+	/* stop the vq */
+	rc = efx_virtio_qstop(vq, &vq_idx);
+	if (rc == 0) {
+		ops_data->vq_cxt[vq_num].cidx = vq_idx.evvd_vq_cidx;
+		ops_data->vq_cxt[vq_num].pidx = vq_idx.evvd_vq_pidx;
+	}
+	ops_data->vq_cxt[vq_num].enable = B_FALSE;
+
+	return rc;
+}
+
+static int
+sfc_vdpa_configure(struct sfc_vdpa_ops_data *ops_data)
+{
+	int rc, i;
+	int nr_vring;
+	int max_vring_cnt;
+	efx_virtio_vq_t *vq;
+	efx_nic_t *nic;
+	void *dev;
+
+	dev = ops_data->dev_handle;
+	nic = sfc_vdpa_adapter_by_dev_handle(dev)->nic;
+
+	SFC_EFX_ASSERT(ops_data->state == SFC_VDPA_STATE_INITIALIZED);
+
+	ops_data->state = SFC_VDPA_STATE_CONFIGURING;
+
+	nr_vring = rte_vhost_get_vring_num(ops_data->vid);
+	max_vring_cnt =
+		(sfc_vdpa_adapter_by_dev_handle(dev)->max_queue_count * 2);
+
+	/* number of vring should not be more than supported max vq count */
+	if (nr_vring > max_vring_cnt) {
+		sfc_vdpa_err(dev,
+			     "nr_vring (%d) is > max vring count (%d)",
+			     nr_vring, max_vring_cnt);
+		goto fail_vring_num;
+	}
+
+	rc = sfc_vdpa_dma_map(ops_data, true);
+	if (rc) {
+		sfc_vdpa_err(dev,
+			     "DMA map failed: %s", rte_strerror(rc));
+		goto fail_dma_map;
+	}
+
+	for (i = 0; i < nr_vring; i++) {
+		rc = efx_virtio_qcreate(nic, &vq);
+		if ((rc != 0) || (vq == NULL)) {
+			sfc_vdpa_err(dev,
+				     "virtqueue create failed: %s",
+				     rte_strerror(rc));
+			goto fail_vq_create;
+		}
+
+		/* store created virtqueue context */
+		ops_data->vq_cxt[i].vq = vq;
+	}
+
+	ops_data->vq_count = i;
+
+	ops_data->state = SFC_VDPA_STATE_CONFIGURED;
+
+	return 0;
+
+fail_vq_create:
+	sfc_vdpa_dma_map(ops_data, false);
+
+fail_dma_map:
+fail_vring_num:
+	ops_data->state = SFC_VDPA_STATE_INITIALIZED;
+
+	return -1;
+}
+
+static void
+sfc_vdpa_close(struct sfc_vdpa_ops_data *ops_data)
+{
+	int i;
+
+	if (ops_data->state != SFC_VDPA_STATE_CONFIGURED)
+		return;
+
+	ops_data->state = SFC_VDPA_STATE_CLOSING;
+
+	for (i = 0; i < ops_data->vq_count; i++) {
+		if (ops_data->vq_cxt[i].vq == NULL)
+			continue;
+
+		efx_virtio_qdestroy(ops_data->vq_cxt[i].vq);
+	}
+
+	sfc_vdpa_dma_map(ops_data, false);
+
+	ops_data->state = SFC_VDPA_STATE_INITIALIZED;
+}
+
+static void
+sfc_vdpa_stop(struct sfc_vdpa_ops_data *ops_data)
+{
+	int i;
+	int rc;
+
+	if (ops_data->state != SFC_VDPA_STATE_STARTED)
+		return;
+
+	ops_data->state = SFC_VDPA_STATE_STOPPING;
+
+	for (i = 0; i < ops_data->vq_count; i++) {
+		rc = sfc_vdpa_virtq_stop(ops_data, i);
+		if (rc != 0)
+			continue;
+	}
+
+	sfc_vdpa_disable_vfio_intr(ops_data);
+
+	ops_data->state = SFC_VDPA_STATE_CONFIGURED;
+}
+
+static int
+sfc_vdpa_start(struct sfc_vdpa_ops_data *ops_data)
+{
+	int i, j;
+	int rc;
+
+	SFC_EFX_ASSERT(ops_data->state == SFC_VDPA_STATE_CONFIGURED);
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "entry");
+
+	ops_data->state = SFC_VDPA_STATE_STARTING;
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "enable interrupts");
+	rc = sfc_vdpa_enable_vfio_intr(ops_data);
+	if (rc < 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "vfio intr allocation failed: %s",
+			     rte_strerror(rc));
+		goto fail_enable_vfio_intr;
+	}
+
+	rte_vhost_get_negotiated_features(ops_data->vid,
+					  &ops_data->req_features);
+
+	sfc_vdpa_info(ops_data->dev_handle,
+		      "negotiated feature : 0x%" PRIx64,
+		      ops_data->req_features);
+
+	for (i = 0; i < ops_data->vq_count; i++) {
+		sfc_vdpa_log_init(ops_data->dev_handle,
+				  "starting vq# %d", i);
+		rc = sfc_vdpa_virtq_start(ops_data, i);
+		if (rc != 0)
+			goto fail_vq_start;
+	}
+
+	ops_data->state = SFC_VDPA_STATE_STARTED;
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "done");
+
+	return 0;
+
+fail_vq_start:
+	/* stop already started virtqueues */
+	for (j = 0; j < i; j++)
+		sfc_vdpa_virtq_stop(ops_data, j);
+	sfc_vdpa_disable_vfio_intr(ops_data);
+
+fail_enable_vfio_intr:
+	ops_data->state = SFC_VDPA_STATE_CONFIGURED;
+
+	return rc;
+}
+
+static int
+sfc_vdpa_get_queue_num(struct rte_vdpa_device *vdpa_dev, uint32_t *queue_num)
+{
+	struct sfc_vdpa_ops_data *ops_data;
+	void *dev;
+
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL)
+		return -1;
+
+	dev = ops_data->dev_handle;
+	*queue_num = sfc_vdpa_adapter_by_dev_handle(dev)->max_queue_count;
+
+	sfc_vdpa_info(dev, "vDPA ops get_queue_num :: supported queue num : %d",
+		      *queue_num);
+
+	return 0;
+}
+
 static int
 sfc_vdpa_get_features(struct rte_vdpa_device *vdpa_dev, uint64_t *features)
 {
@@ -114,7 +540,53 @@
 static int
 sfc_vdpa_dev_config(int vid)
 {
-	RTE_SET_USED(vid);
+	struct rte_vdpa_device *vdpa_dev;
+	int rc;
+	struct sfc_vdpa_ops_data *ops_data;
+
+	vdpa_dev = rte_vhost_get_vdpa_device(vid);
+
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "invalid vDPA device : %p, vid : %d",
+			     vdpa_dev, vid);
+		return -1;
+	}
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "entry");
+
+	ops_data->vid = vid;
+
+	sfc_vdpa_adapter_lock(ops_data->dev_handle);
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "configuring");
+	rc = sfc_vdpa_configure(ops_data);
+	if (rc != 0)
+		goto fail_vdpa_config;
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "starting");
+	rc = sfc_vdpa_start(ops_data);
+	if (rc != 0)
+		goto fail_vdpa_start;
+
+	sfc_vdpa_adapter_unlock(ops_data->dev_handle);
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "vhost notifier ctrl");
+	if (rte_vhost_host_notifier_ctrl(vid, RTE_VHOST_QUEUE_ALL, true) != 0)
+		sfc_vdpa_info(ops_data->dev_handle,
+			      "vDPA (%s): software relay for notify is used.",
+			      vdpa_dev->device->name);
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "done");
+
+	return 0;
+
+fail_vdpa_start:
+	sfc_vdpa_close(ops_data);
+
+fail_vdpa_config:
+	sfc_vdpa_adapter_unlock(ops_data->dev_handle);

	return -1;
 }
@@ -122,9 +594,27 @@
 static int
 sfc_vdpa_dev_close(int vid)
 {
-	RTE_SET_USED(vid);
+	struct rte_vdpa_device *vdpa_dev;
+	struct sfc_vdpa_ops_data *ops_data;

-	return -1;
+	vdpa_dev = rte_vhost_get_vdpa_device(vid);
+
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "invalid vDPA device : %p, vid : %d",
+			     vdpa_dev, vid);
+		return -1;
+	}
+
+	sfc_vdpa_adapter_lock(ops_data->dev_handle);
+
+	sfc_vdpa_stop(ops_data);
+	sfc_vdpa_close(ops_data);
+
+	sfc_vdpa_adapter_unlock(ops_data->dev_handle);
+
+	return 0;
 }

 static int
diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.h b/drivers/vdpa/sfc/sfc_vdpa_ops.h
index 21cbb73..8d553c5 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.h
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.h
@@ -18,17 +18,45 @@ enum sfc_vdpa_context {
 enum sfc_vdpa_state {
	SFC_VDPA_STATE_UNINITIALIZED = 0,
	SFC_VDPA_STATE_INITIALIZED,
+	SFC_VDPA_STATE_CONFIGURING,
+	SFC_VDPA_STATE_CONFIGURED,
+	SFC_VDPA_STATE_CLOSING,
+	SFC_VDPA_STATE_CLOSED,
+	SFC_VDPA_STATE_STARTING,
+	SFC_VDPA_STATE_STARTED,
+	SFC_VDPA_STATE_STOPPING,
	SFC_VDPA_STATE_NSTATES
 };

+struct sfc_vdpa_vring_info {
+	uint64_t	desc;
+	uint64_t	avail;
+	uint64_t	used;
+	uint64_t	size;
+	uint16_t	last_avail_idx;
+	uint16_t	last_used_idx;
+};
+
+typedef struct sfc_vdpa_vq_context_s {
+	uint8_t		enable;
+	uint32_t	pidx;
+	uint32_t	cidx;
+	efx_virtio_vq_t	*vq;
+} sfc_vdpa_vq_context_t;
+
 struct sfc_vdpa_ops_data {
	void			*dev_handle;
+	int			vid;
	struct rte_vdpa_device	*vdpa_dev;
	enum sfc_vdpa_context	vdpa_context;
	enum sfc_vdpa_state	state;

	uint64_t		dev_features;
	uint64_t		drv_features;
+	uint64_t		req_features;
+
+	uint16_t		vq_count;
+	struct sfc_vdpa_vq_context_s	vq_cxt[SFC_VDPA_MAX_QUEUE_PAIRS * 2];
 };

 struct sfc_vdpa_ops_data *
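To summarize the lifecycle encoded by the extended sfc_vdpa_state enum, here is an illustrative helper, not part of the patch, that expresses the transitions dev_conf and dev_close drive: INITIALIZED to CONFIGURING to CONFIGURED to STARTING to STARTED on configure, and STARTED to STOPPING to CONFIGURED to CLOSING to INITIALIZED on close. The example_* name is hypothetical.

#include <stdbool.h>

static bool
example_state_transition_ok(enum sfc_vdpa_state from, enum sfc_vdpa_state to)
{
	switch (to) {
	case SFC_VDPA_STATE_CONFIGURING:
		return from == SFC_VDPA_STATE_INITIALIZED;
	case SFC_VDPA_STATE_STARTING:
		return from == SFC_VDPA_STATE_CONFIGURED;
	case SFC_VDPA_STATE_STOPPING:
		return from == SFC_VDPA_STATE_STARTED;
	case SFC_VDPA_STATE_CLOSING:
		return from == SFC_VDPA_STATE_CONFIGURED;
	default:
		/* Terminal states are reached from the transient ones. */
		return true;
	}
}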
From patchwork Thu Oct 28 07:54:49 2021
X-Patchwork-Submitter: Vijay Srivastava
X-Patchwork-Id: 103119
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Vijay Srivastava
Date: Thu, 28 Oct 2021 13:24:49 +0530
Message-ID: <20211028075452.11804-8-vsrivast@xilinx.com>
In-Reply-To: <20211028075452.11804-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com> <20211028075452.11804-1-vsrivast@xilinx.com>
6bmJvkjwTvmGYEWHSk8xx4xt3AYeZnYwXh2qsDdj/jVKpYKz9PWll41JFW4kTZNDIVOO9Zc58uGHtK6tv2qur5NvOoyr1ZuVPLdSgDi2MXMMtmTUAZVwnmfNnC/qEhZK8yOQIVGMJ5qjfVf312yaTE+LMbahrRq++1Ecp8OEv9tj31HliAeP3oU8sVBlYzqfFH2NZIRHD2nqAVImmYELoXFLHNcCETy53evfvqZ+HCUn9LECTvT4dqYE2FjmSQMqVn9QVpqvFRvcdIUsPWl2cqnOIFydq/pksLrsQS2QbcjQScBMk/7e89C7rD5qxBlKg1SfkIs8Z7BIb/EQrom7/BGJIWrG5Bvy1zukDYu9m/GSbtiJ8w9lb05bsoIVRdCJNjV/xGWqqFjUAxbz/kGF53+vWGr3xStonza+l++8sIAHEvSn819NJwKLyElSiOTCqULt29tib+vFqb6BPlSdGjTpPl9o+ML4cR9JQneRhfkhKV/KPoERHyI5Qohge8aT9V4Kgk4+FBDKyXMceDzUrbd9bBfU7GojrCWnR67mg9vaSoOB/k30pEDvyrrbx+O8amB6grFwanaVZt2CsGq+IfMYd1M5PdkOngrUUCIG0qTUHC/0ygA8YFJX/OX6eZvJvd8HbOQGu2gq8ZsdQyeMv1jGOz7kSFPyJUUDDCJ79+RwEWla+7Upk9arLKl+0EItZdpbH8sOnDN0OIP/JpIEos/EgsnsahM/XJRqSNXTAJI= X-Forefront-Antispam-Report: CIP:149.199.62.198; CTRY:US; LANG:en; SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:xsj-pvapexch02.xlnx.xilinx.com; PTR:unknown-62-198.xilinx.com; CAT:NONE; SFS:(36840700001)(46966006)(36860700001)(82310400003)(2616005)(44832011)(426003)(47076005)(336012)(36756003)(6916009)(70206006)(70586007)(2906002)(5660300002)(316002)(54906003)(6666004)(508600001)(36906005)(107886003)(1076003)(7696005)(186003)(26005)(8676002)(8936002)(4326008)(9786002)(7636003)(83380400001)(356005)(102446001); DIR:OUT; SFP:1101; X-OriginatorOrg: xilinx.com X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 Oct 2021 07:56:58.9299 (UTC) X-MS-Exchange-CrossTenant-Network-Message-Id: cb183c53-1841-4303-cad8-08d999e884e3 X-MS-Exchange-CrossTenant-Id: 657af505-d5df-48d0-8300-c31994686c5c X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=657af505-d5df-48d0-8300-c31994686c5c; Ip=[149.199.62.198]; Helo=[xsj-pvapexch02.xlnx.xilinx.com] X-MS-Exchange-CrossTenant-AuthSource: BN1NAM02FT045.eop-nam02.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR02MB6182 Subject: [dpdk-dev] [PATCH v2 07/10] vdpa/sfc: add support to get queue notify area info X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vijay Kumar Srivastava Implement the vDPA ops get_notify_area to get the notify area info of the queue. Signed-off-by: Vijay Kumar Srivastava --- v2: * Added error log in sfc_vdpa_get_notify_area. drivers/vdpa/sfc/sfc_vdpa_ops.c | 168 ++++++++++++++++++++++++++++++++++++++-- drivers/vdpa/sfc/sfc_vdpa_ops.h | 2 + 2 files changed, 164 insertions(+), 6 deletions(-) diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c index de1c81a..774d73e 100644 --- a/drivers/vdpa/sfc/sfc_vdpa_ops.c +++ b/drivers/vdpa/sfc/sfc_vdpa_ops.c @@ -3,6 +3,8 @@ * Copyright(c) 2020-2021 Xilinx, Inc. 
  */
 
+#include
+#include
 #include
 #include
@@ -537,6 +539,67 @@
 	return 0;
 }
 
+static void *
+sfc_vdpa_notify_ctrl(void *arg)
+{
+	struct sfc_vdpa_ops_data *ops_data;
+	int vid;
+
+	ops_data = arg;
+	if (ops_data == NULL)
+		return NULL;
+
+	sfc_vdpa_adapter_lock(ops_data->dev_handle);
+
+	vid = ops_data->vid;
+
+	if (rte_vhost_host_notifier_ctrl(vid, RTE_VHOST_QUEUE_ALL, true) != 0)
+		sfc_vdpa_info(ops_data->dev_handle,
+			      "vDPA (%s): failed to configure host notifier",
+			      ops_data->vdpa_dev->device->name);
+
+	sfc_vdpa_adapter_unlock(ops_data->dev_handle);
+
+	return NULL;
+}
+
+static int
+sfc_vdpa_setup_notify_ctrl(int vid)
+{
+	int ret;
+	struct rte_vdpa_device *vdpa_dev;
+	struct sfc_vdpa_ops_data *ops_data;
+
+	vdpa_dev = rte_vhost_get_vdpa_device(vid);
+
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL) {
+		/* No ops_data, so its log handle cannot be used here */
+		return -1;
+	}
+
+	ops_data->is_notify_thread_started = false;
+
+	/*
+	 * Use rte_vhost_host_notifier_ctrl in a thread to avoid a
+	 * deadlock scenario when multiple VFs are used by a single vdpa
+	 * application and multiple VFs are passed to a single VM.
+	 */
+	ret = pthread_create(&ops_data->notify_tid, NULL,
+			     sfc_vdpa_notify_ctrl, ops_data);
+	if (ret != 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "failed to create notify_ctrl thread: %s",
+			     rte_strerror(ret));
+		return -1;
+	}
+	ops_data->is_notify_thread_started = true;
+
+	return 0;
+}
+
 static int
 sfc_vdpa_dev_config(int vid)
 {
@@ -570,18 +633,19 @@
 	if (rc != 0)
 		goto fail_vdpa_start;
 
-	sfc_vdpa_adapter_unlock(ops_data->dev_handle);
+	rc = sfc_vdpa_setup_notify_ctrl(vid);
+	if (rc != 0)
+		goto fail_vdpa_notify;
 
-	sfc_vdpa_log_init(ops_data->dev_handle, "vhost notifier ctrl");
-	if (rte_vhost_host_notifier_ctrl(vid, RTE_VHOST_QUEUE_ALL, true) != 0)
-		sfc_vdpa_info(ops_data->dev_handle,
-			      "vDPA (%s): software relay for notify is used.",
-			      vdpa_dev->device->name);
+	sfc_vdpa_adapter_unlock(ops_data->dev_handle);
 
 	sfc_vdpa_log_init(ops_data->dev_handle, "done");
 
 	return 0;
 
+fail_vdpa_notify:
+	sfc_vdpa_stop(ops_data);
+
 fail_vdpa_start:
 	sfc_vdpa_close(ops_data);
 
@@ -594,6 +658,7 @@
 static int
 sfc_vdpa_dev_close(int vid)
 {
+	int ret;
 	struct rte_vdpa_device *vdpa_dev;
 	struct sfc_vdpa_ops_data *ops_data;
 
@@ -608,6 +673,23 @@
 	}
 
 	sfc_vdpa_adapter_lock(ops_data->dev_handle);
+	if (ops_data->is_notify_thread_started == true) {
+		void *status;
+		ret = pthread_cancel(ops_data->notify_tid);
+		if (ret != 0) {
+			sfc_vdpa_err(ops_data->dev_handle,
+				     "failed to cancel notify_ctrl thread: %s",
+				     rte_strerror(ret));
+		}
+
+		ret = pthread_join(ops_data->notify_tid, &status);
+		if (ret != 0) {
+			sfc_vdpa_err(ops_data->dev_handle,
+				     "failed to join terminated notify_ctrl thread: %s",
+				     rte_strerror(ret));
+		}
+	}
+	ops_data->is_notify_thread_started = false;
 
 	sfc_vdpa_stop(ops_data);
 	sfc_vdpa_close(ops_data);
@@ -658,6 +740,79 @@
 	return vfio_dev_fd;
 }
 
+static int
+sfc_vdpa_get_notify_area(int vid, int qid, uint64_t *offset, uint64_t *size)
+{
+	int ret;
+	efx_nic_t *nic;
+	int vfio_dev_fd;
+	efx_rc_t rc;
+	unsigned int bar_offset;
+	struct rte_vdpa_device *vdpa_dev;
+	struct sfc_vdpa_ops_data *ops_data;
+	struct vfio_region_info reg = { .argsz = sizeof(reg) };
+	const efx_nic_cfg_t *encp;
+	int max_vring_cnt;
+	int64_t len;
+	void *dev;
+
+	vdpa_dev = rte_vhost_get_vdpa_device(vid);
+
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL)
+		return -1;
+
+	dev = ops_data->dev_handle;
+
+	vfio_dev_fd = sfc_vdpa_adapter_by_dev_handle(dev)->vfio_dev_fd;
+	max_vring_cnt =
+		(sfc_vdpa_adapter_by_dev_handle(dev)->max_queue_count * 2);
+
+	nic = sfc_vdpa_adapter_by_dev_handle(ops_data->dev_handle)->nic;
+	encp = efx_nic_cfg_get(nic);
+
+	if (qid >= max_vring_cnt) {
+		sfc_vdpa_err(dev, "invalid qid: %d", qid);
+		return -1;
+	}
+
+	if (ops_data->vq_cxt[qid].enable != B_TRUE) {
+		sfc_vdpa_err(dev, "vq is not enabled");
+		return -1;
+	}
+
+	rc = efx_virtio_get_doorbell_offset(ops_data->vq_cxt[qid].vq,
+					    &bar_offset);
+	if (rc != 0) {
+		sfc_vdpa_err(dev, "failed to get doorbell offset: %s",
+			     rte_strerror(rc));
+		return rc;
+	}
+
+	reg.index = sfc_vdpa_adapter_by_dev_handle(dev)->mem_bar.esb_rid;
+	ret = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_REGION_INFO, &reg);
+	if (ret != 0) {
+		sfc_vdpa_err(dev, "could not get device region info: %s",
+			     strerror(errno));
+		return ret;
+	}
+
+	*offset = reg.offset + bar_offset;
+
+	len = (1U << encp->enc_vi_window_shift) / 2;
+	if (len >= sysconf(_SC_PAGESIZE)) {
+		*size = sysconf(_SC_PAGESIZE);
+	} else {
+		sfc_vdpa_err(dev, "invalid VI window size: 0x%" PRIx64, len);
+		return -1;
+	}
+
+	sfc_vdpa_info(dev, "vDPA ops get_notify_area: offset: 0x%" PRIx64,
+		      *offset);
+
+	return 0;
+}
+
 static struct rte_vdpa_dev_ops sfc_vdpa_ops = {
 	.get_queue_num = sfc_vdpa_get_queue_num,
 	.get_features = sfc_vdpa_get_features,
@@ -667,6 +822,7 @@
 	.set_vring_state = sfc_vdpa_set_vring_state,
 	.set_features = sfc_vdpa_set_features,
 	.get_vfio_device_fd = sfc_vdpa_get_vfio_device_fd,
+	.get_notify_area = sfc_vdpa_get_notify_area,
 };
 
 struct sfc_vdpa_ops_data *
diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.h b/drivers/vdpa/sfc/sfc_vdpa_ops.h
index 8d553c5..f7523ef 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.h
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.h
@@ -50,6 +50,8 @@ struct sfc_vdpa_ops_data {
 	struct rte_vdpa_device		*vdpa_dev;
 	enum sfc_vdpa_context		vdpa_context;
 	enum sfc_vdpa_state		state;
+	pthread_t			notify_tid;
+	bool				is_notify_thread_started;
 
 	uint64_t			dev_features;
 	uint64_t			drv_features;
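The (offset, size) pair reported by get_notify_area is consumed on the vhost
side: the device's VFIO fd is mapped at that offset so that queue kicks land
directly on the hardware doorbell instead of going through a software relay.
A minimal consumer-side sketch of that mapping follows; it assumes the offset
is page-aligned (which the driver arranges by sizing the notify area to one
page), and the harness function name is made up for illustration:

#include <stdint.h>
#include <sys/mman.h>

/*
 * Map one queue's notify area and ring its doorbell once.
 * vfio_dev_fd, offset and size are the values reported by the
 * get_vfio_device_fd() and get_notify_area() vDPA ops.
 */
static int
kick_queue(int vfio_dev_fd, uint64_t offset, uint64_t size, uint16_t qid)
{
	volatile uint16_t *doorbell;
	void *base;

	base = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED,
		    vfio_dev_fd, (off_t)offset);
	if (base == MAP_FAILED)
		return -1;

	/* virtio-style notification: write the queue index to the doorbell */
	doorbell = base;
	*doorbell = qid;

	munmap(base, size);
	return 0;
}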
From patchwork Thu Oct 28 07:54:50 2021
X-Patchwork-Submitter: Vijay Srivastava
X-Patchwork-Id: 103120
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Vijay Srivastava
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com, andrew.rybchenko@oktetlabs.ru
Date: Thu, 28 Oct 2021 13:24:50 +0530
Message-ID: <20211028075452.11804-9-vsrivast@xilinx.com>
In-Reply-To: <20211028075452.11804-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com> <20211028075452.11804-1-vsrivast@xilinx.com>
Subject: [dpdk-dev] [PATCH v2 08/10] vdpa/sfc: add support for MAC filter config

From: Vijay Kumar Srivastava

Add support for unicast and broadcast MAC filter configuration.

Signed-off-by: Vijay Kumar Srivastava
---
 doc/guides/vdpadevs/sfc.rst        |   4 ++
 drivers/vdpa/sfc/meson.build       |   1 +
 drivers/vdpa/sfc/sfc_vdpa.c        |  32 +++++++++
 drivers/vdpa/sfc/sfc_vdpa.h        |  30 ++++++++
 drivers/vdpa/sfc/sfc_vdpa_filter.c | 144 +++++++++++++++++++++++++++++++++++++
 drivers/vdpa/sfc/sfc_vdpa_hw.c     |  10 +++
 drivers/vdpa/sfc/sfc_vdpa_ops.c    |  17 +++++
 7 files changed, 238 insertions(+)
 create mode 100644 drivers/vdpa/sfc/sfc_vdpa_filter.c

diff --git a/doc/guides/vdpadevs/sfc.rst b/doc/guides/vdpadevs/sfc.rst
index abb5900..ae5ef42 100644
--- a/doc/guides/vdpadevs/sfc.rst
+++ b/doc/guides/vdpadevs/sfc.rst
@@ -71,6 +71,10 @@ boolean parameters value.
 **vdpa** device will work as vdpa device and will be probed by vdpa/sfc driver.
 If this parameter is not specified then ef100 device will operate as network device.
 
+- ``mac`` [mac address]
+
+  Configures the MAC address that is used to set up MAC filters.
+
 Dynamic Logging Parameters
 ~~~~~~~~~~~~~~~~~~~~~~~~~~
diff --git a/drivers/vdpa/sfc/meson.build b/drivers/vdpa/sfc/meson.build
index aac7c51..f69cba9 100644
--- a/drivers/vdpa/sfc/meson.build
+++ b/drivers/vdpa/sfc/meson.build
@@ -33,4 +33,5 @@ sources = files(
 	'sfc_vdpa_hw.c',
 	'sfc_vdpa_mcdi.c',
 	'sfc_vdpa_ops.c',
+	'sfc_vdpa_filter.c',
 )
diff --git a/drivers/vdpa/sfc/sfc_vdpa.c b/drivers/vdpa/sfc/sfc_vdpa.c
index 9ffea59..012616b 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.c
+++ b/drivers/vdpa/sfc/sfc_vdpa.c
@@ -8,7 +8,9 @@
 #include
 #include
+#include
 #include
+#include
 #include
 #include
 #include
@@ -202,6 +204,31 @@ struct sfc_vdpa_ops_data *
 	return (ret < 0) ? RTE_LOGTYPE_PMD : ret;
 }
 
+static int
+sfc_vdpa_kvargs_parse(struct sfc_vdpa_adapter *sva)
+{
+	struct rte_pci_device *pci_dev = sva->pdev;
+	struct rte_devargs *devargs = pci_dev->device.devargs;
+	/*
+	 * The 'class' parameter is mandatory for device class selection,
+	 * so RTE_DEVARGS_KEY_CLASS is included in the accepted list.
+	 */
+	const char **params = (const char *[]){
+		RTE_DEVARGS_KEY_CLASS,
+		SFC_VDPA_MAC_ADDR,
+		NULL,
+	};
+
+	if (devargs == NULL)
+		return 0;
+
+	sva->kvargs = rte_kvargs_parse(devargs->args, params);
+	if (sva->kvargs == NULL)
+		return -EINVAL;
+
+	return 0;
+}
+
 static struct rte_pci_id pci_id_sfc_vdpa_efx_map[] = {
 	{ RTE_PCI_DEVICE(EFX_PCI_VENID_XILINX, EFX_PCI_DEVID_RIVERHEAD_VF) },
 	{ .vendor_id = 0, /* sentinel */ },
@@ -244,6 +271,10 @@ struct sfc_vdpa_ops_data *
 	if (ret != 0)
 		goto fail_set_log_prefix;
 
+	ret = sfc_vdpa_kvargs_parse(sva);
+	if (ret != 0)
+		goto fail_kvargs_parse;
+
 	sfc_vdpa_log_init(sva, "entry");
 
 	sfc_vdpa_adapter_lock_init(sva);
@@ -284,6 +315,7 @@ struct sfc_vdpa_ops_data *
 fail_vfio_setup:
 	sfc_vdpa_adapter_lock_fini(sva);
 
+fail_kvargs_parse:
 fail_set_log_prefix:
 	rte_free(sva);
diff --git a/drivers/vdpa/sfc/sfc_vdpa.h b/drivers/vdpa/sfc/sfc_vdpa.h
index 1bf96e7..dbd099f 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.h
+++ b/drivers/vdpa/sfc/sfc_vdpa.h
@@ -17,8 +17,29 @@
 #include "sfc_vdpa_log.h"
 #include "sfc_vdpa_ops.h"
 
+#define SFC_VDPA_MAC_ADDR			"mac"
 #define SFC_VDPA_DEFAULT_MCDI_IOVA		0x200000000000
 
+/* Broadcast & Unicast MAC filters are supported */
+#define SFC_MAX_SUPPORTED_FILTERS		2
+
+/*
+ * Get function-local index of the associated VI from the
+ * virtqueue number. Queue 0 is reserved for MCDI.
+ */
+#define SFC_VDPA_GET_VI_INDEX(vq_num)	(((vq_num) / 2) + 1)
+
+enum sfc_vdpa_filter_type {
+	SFC_VDPA_BCAST_MAC_FILTER = 0,
+	SFC_VDPA_UCAST_MAC_FILTER = 1,
+	SFC_VDPA_FILTER_NTYPE
+};
+
+typedef struct sfc_vdpa_filter_s {
+	int			filter_cnt;
+	efx_filter_spec_t	spec[SFC_MAX_SUPPORTED_FILTERS];
+} sfc_vdpa_filter_t;
+
 /* Adapter private data */
 struct sfc_vdpa_adapter {
 	TAILQ_ENTRY(sfc_vdpa_adapter)	next;
@@ -32,6 +53,8 @@ struct sfc_vdpa_adapter {
 	struct rte_pci_device		*pdev;
 	struct rte_pci_addr		pci_addr;
 
+	struct rte_kvargs		*kvargs;
+
 	efx_family_t			family;
 	efx_nic_t			*nic;
 	rte_spinlock_t			nic_lock;
@@ -46,6 +69,8 @@ struct sfc_vdpa_adapter {
 	char				log_prefix[SFC_VDPA_LOG_PREFIX_MAX];
 	uint32_t			logtype_main;
 
+	sfc_vdpa_filter_t		filters;
+
 	int				vfio_group_fd;
 	int				vfio_dev_fd;
 	int				vfio_container_fd;
@@ -83,6 +108,11 @@ struct sfc_vdpa_ops_data *
 int
 sfc_vdpa_dma_map(struct sfc_vdpa_ops_data *vdpa_data, bool do_map);
 
+int
+sfc_vdpa_filter_remove(struct sfc_vdpa_ops_data *ops_data);
+int
+sfc_vdpa_filter_config(struct sfc_vdpa_ops_data *ops_data);
+
 static inline struct sfc_vdpa_adapter *
 sfc_vdpa_adapter_by_dev_handle(void *dev_handle)
 {
diff --git a/drivers/vdpa/sfc/sfc_vdpa_filter.c b/drivers/vdpa/sfc/sfc_vdpa_filter.c
new file mode 100644
index 0000000..03b6a5d
--- /dev/null
+++ b/drivers/vdpa/sfc/sfc_vdpa_filter.c
@@ -0,0 +1,144 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#include
+#include
+#include
+
+#include "efx.h"
+#include "efx_impl.h"
+#include "sfc_vdpa.h"
+
+static inline int
+sfc_vdpa_get_eth_addr(const char *key __rte_unused,
+		      const char *value, void *extra_args)
+{
+	struct rte_ether_addr *mac_addr = extra_args;
+
+	if (value == NULL || extra_args == NULL)
+		return -EINVAL;
+
+	/* Convert the string with an Ethernet address to an ether_addr */
+	if (rte_ether_unformat_addr(value, mac_addr) != 0)
+		return -EINVAL;
+
+	return 0;
+}
+
+static int
+sfc_vdpa_set_mac_filter(efx_nic_t *nic, efx_filter_spec_t *spec,
+			int qid, uint8_t *eth_addr)
+{
+	int rc;
+
+	if (nic == NULL || spec == NULL)
+		return -1;
+
+	spec->efs_priority = EFX_FILTER_PRI_MANUAL;
+	spec->efs_flags = EFX_FILTER_FLAG_RX;
+	spec->efs_dmaq_id = qid;
+
+	rc = efx_filter_spec_set_eth_local(spec, EFX_FILTER_SPEC_VID_UNSPEC,
+					   eth_addr);
+	if (rc != 0)
+		return rc;
+
+	rc = efx_filter_insert(nic, spec);
+	if (rc != 0)
+		return rc;
+
+	return rc;
+}
+
+int sfc_vdpa_filter_config(struct sfc_vdpa_ops_data *ops_data)
+{
+	int rc;
+	int qid;
+	efx_nic_t *nic;
+	struct rte_ether_addr bcast_eth_addr;
+	struct rte_ether_addr ucast_eth_addr;
+	struct sfc_vdpa_adapter *sva;
+	efx_filter_spec_t *spec;
+
+	if (ops_data == NULL)
+		return -1;
+
+	sva = ops_data->dev_handle;
+
+	sfc_vdpa_log_init(sva, "entry");
+
+	nic = sva->nic;
+
+	sfc_vdpa_log_init(sva, "process kvarg");
+
+	/* Skip MAC filter configuration if the MAC address is not provided */
+	if (rte_kvargs_count(sva->kvargs, SFC_VDPA_MAC_ADDR) == 0) {
+		sfc_vdpa_warn(sva,
+			      "MAC address is not provided, skipping MAC filter config");
+		return -1;
+	}
+
+	rc = rte_kvargs_process(sva->kvargs, SFC_VDPA_MAC_ADDR,
+				&sfc_vdpa_get_eth_addr,
+				&ucast_eth_addr);
+	if (rc < 0)
+		return -1;
+
+	/* Create filters on the base queue */
+	qid = SFC_VDPA_GET_VI_INDEX(0);
+
+	sfc_vdpa_log_init(sva, "insert broadcast mac filter");
+
+	EFX_MAC_BROADCAST_ADDR_SET(bcast_eth_addr.addr_bytes);
+	spec = &sva->filters.spec[SFC_VDPA_BCAST_MAC_FILTER];
+
+	rc = sfc_vdpa_set_mac_filter(nic,
+				     spec, qid,
+				     bcast_eth_addr.addr_bytes);
+	if (rc != 0)
+		sfc_vdpa_err(sva,
+			     "broadcast MAC filter insertion failed: %s",
+			     rte_strerror(rc));
+	else
+		sva->filters.filter_cnt++;
+
+	sfc_vdpa_log_init(sva, "insert unicast mac filter");
+	spec = &sva->filters.spec[SFC_VDPA_UCAST_MAC_FILTER];
+
+	rc = sfc_vdpa_set_mac_filter(nic,
+				     spec, qid,
+				     ucast_eth_addr.addr_bytes);
+	if (rc != 0)
+		sfc_vdpa_err(sva,
+			     "unicast MAC filter insertion failed: %s",
+			     rte_strerror(rc));
+	else
+		sva->filters.filter_cnt++;
+
+	sfc_vdpa_log_init(sva, "done");
+
+	return rc;
+}
+
+int sfc_vdpa_filter_remove(struct sfc_vdpa_ops_data *ops_data)
+{
+	int i, rc = 0;
+	struct sfc_vdpa_adapter *sva;
+	efx_nic_t *nic;
+
+	if (ops_data == NULL)
+		return -1;
+
+	sva = ops_data->dev_handle;
+	nic = sva->nic;
+
+	for (i = 0; i < sva->filters.filter_cnt; i++) {
+		rc = efx_filter_remove(nic, &(sva->filters.spec[i]));
+		if (rc != 0)
+			sfc_vdpa_err(sva,
+				     "remove HW filter failed for entry %d: %s",
+				     i, rte_strerror(rc));
+	}
+
+	sva->filters.filter_cnt = 0;
+
+	return rc;
+}
diff --git a/drivers/vdpa/sfc/sfc_vdpa_hw.c b/drivers/vdpa/sfc/sfc_vdpa_hw.c
index b473708..5307b03 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_hw.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_hw.c
@@ -354,10 +354,20 @@
 		goto fail_virtio_init;
 	}
 
+	sfc_vdpa_log_init(sva, "init filter");
+	rc = efx_filter_init(enp);
+	if (rc != 0) {
+		sfc_vdpa_err(sva, "filter init failed: %s", rte_strerror(rc));
+		goto fail_filter_init;
+	}
+
 	sfc_vdpa_log_init(sva, "done");
 
 	return 0;
 
+fail_filter_init:
+	efx_virtio_fini(enp);
+
 fail_virtio_init:
 	efx_nic_fini(enp);
diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c
index 774d73e..8551b65 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.c
@@ -426,6 +426,8 @@
 
 	sfc_vdpa_disable_vfio_intr(ops_data);
 
+	sfc_vdpa_filter_remove(ops_data);
+
 	ops_data->state = SFC_VDPA_STATE_CONFIGURED;
 }
 
@@ -465,12 +467,27 @@
 		goto fail_vq_start;
 	}
 
+	ops_data->vq_count = i;
+
+	sfc_vdpa_log_init(ops_data->dev_handle,
+			  "configure MAC filters");
+	rc = sfc_vdpa_filter_config(ops_data);
+	if (rc != 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "MAC filter config failed: %s",
+			     rte_strerror(rc));
+		goto fail_filter_cfg;
+	}
+
 	ops_data->state = SFC_VDPA_STATE_STARTED;
 
 	sfc_vdpa_log_init(ops_data->dev_handle, "done");
 
 	return 0;
 
+fail_filter_cfg:
+	/* remove already created filters */
+	sfc_vdpa_filter_remove(ops_data);
 fail_vq_start:
 	/* stop already started virtqueues */
 	for (j = 0; j < i; j++)
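To make the devargs flow above concrete, here is a small standalone sketch of
the same rte_kvargs pattern; the devargs string, the PCI address implied by
it, and the handler name are illustrative only (the driver's accepted keys,
'class' and 'mac', match sfc_vdpa_kvargs_parse above):

#include <errno.h>
#include <stdio.h>

#include <rte_ether.h>
#include <rte_kvargs.h>

/* Handler invoked by rte_kvargs_process() for each "mac" occurrence */
static int
mac_handler(const char *key, const char *value, void *opaque)
{
	struct rte_ether_addr *mac = opaque;

	if (value == NULL || opaque == NULL)
		return -EINVAL;

	/* rte_ether_unformat_addr() returns 0 on success */
	if (rte_ether_unformat_addr(value, mac) != 0) {
		printf("bad %s value: %s\n", key, value);
		return -EINVAL;
	}
	return 0;
}

int
main(void)
{
	/* Devargs as they might follow "-a <BDF>," on the EAL command line */
	const char *args = "class=vdpa,mac=00:11:22:33:44:55";
	const char *valid_keys[] = { "class", "mac", NULL };
	struct rte_ether_addr mac;
	struct rte_kvargs *kvargs;

	kvargs = rte_kvargs_parse(args, valid_keys);
	if (kvargs == NULL)
		return 1;

	if (rte_kvargs_count(kvargs, "mac") != 0)
		rte_kvargs_process(kvargs, "mac", mac_handler, &mac);

	rte_kvargs_free(kvargs);
	return 0;
}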
From patchwork Thu Oct 28 07:54:51 2021
X-Patchwork-Submitter: Vijay Srivastava
X-Patchwork-Id: 103121
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Vijay Srivastava
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com, andrew.rybchenko@oktetlabs.ru
Date: Thu, 28 Oct 2021 13:24:51 +0530
Message-ID: <20211028075452.11804-10-vsrivast@xilinx.com>
In-Reply-To: <20211028075452.11804-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com> <20211028075452.11804-1-vsrivast@xilinx.com>
Subject: [dpdk-dev] [PATCH v2 09/10] vdpa/sfc: add support to set vring state

From: Vijay Kumar Srivastava

Implements the vDPA ops set_vring_state to configure the vring state.
Signed-off-by: Vijay Kumar Srivastava
---
 drivers/vdpa/sfc/sfc_vdpa_ops.c | 54 ++++++++++++++++++++++++++++++++++++++---
 1 file changed, 50 insertions(+), 4 deletions(-)

diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c
index 8551b65..3430643 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.c
@@ -719,11 +719,57 @@
 static int
 sfc_vdpa_set_vring_state(int vid, int vring, int state)
 {
-	RTE_SET_USED(vid);
-	RTE_SET_USED(vring);
-	RTE_SET_USED(state);
+	struct sfc_vdpa_ops_data *ops_data;
+	struct rte_vdpa_device *vdpa_dev;
+	efx_rc_t rc;
+	int vring_max;
+	void *dev;
 
-	return -1;
+	vdpa_dev = rte_vhost_get_vdpa_device(vid);
+
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL)
+		return -1;
+
+	dev = ops_data->dev_handle;
+
+	sfc_vdpa_info(dev,
+		      "vDPA ops set_vring_state: vid: %d, vring: %d, state: %d",
+		      vid, vring, state);
+
+	vring_max = (sfc_vdpa_adapter_by_dev_handle(dev)->max_queue_count * 2);
+
+	if (vring < 0 || vring >= vring_max) {
+		sfc_vdpa_err(dev, "received invalid vring id %d to set state",
+			     vring);
+		return -1;
+	}
+
+	/*
+	 * Skip if the device is not yet started; the virtqueue state can
+	 * only be changed once the virtqueue has been created and the
+	 * other configuration is done.
+	 */
+	if (ops_data->state != SFC_VDPA_STATE_STARTED)
+		return 0;
+
+	if (ops_data->vq_cxt[vring].enable == state)
+		return 0;
+
+	if (state == 0) {
+		rc = sfc_vdpa_virtq_stop(ops_data, vring);
+		if (rc != 0) {
+			sfc_vdpa_err(dev, "virtqueue stop failed: %s",
+				     rte_strerror(rc));
+		}
+	} else {
+		rc = sfc_vdpa_virtq_start(ops_data, vring);
+		if (rc != 0) {
+			sfc_vdpa_err(dev, "virtqueue start failed: %s",
+				     rte_strerror(rc));
+		}
+	}
+
+	return rc;
 }
 
 static int
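For orientation, set_vring_state is one of the callbacks a vDPA driver hands
to the vhost library at registration time; vhost invokes it with the queue
index and a 0/1 state whenever the guest enables or disables a queue. Below is
a bare-bones sketch of that registration plumbing using the public rte_vdpa
API; the toy driver names are made up, and the header split is assumed to be
the one of the DPDK 21.11 era (rte_vdpa_dev.h carrying the driver-side
declarations):

#include <rte_vdpa.h>
#include <rte_vdpa_dev.h>

/* Toy callback with the set_vring_state signature used above */
static int
toy_set_vring_state(int vid, int vring, int state)
{
	/*
	 * vid names the vhost device, vring is the queue index and
	 * state is 1 to enable or 0 to disable; return 0 on success.
	 */
	(void)vid;
	(void)vring;
	(void)state;
	return 0;
}

static struct rte_vdpa_dev_ops toy_vdpa_ops = {
	.set_vring_state = toy_set_vring_state,
	/* the other mandatory callbacks are omitted in this sketch */
};

/* Called from a driver probe path; rte_dev is the underlying bus device */
static int
toy_vdpa_register(struct rte_device *rte_dev)
{
	struct rte_vdpa_device *vdev;

	vdev = rte_vdpa_register_device(rte_dev, &toy_vdpa_ops);
	return (vdev == NULL) ? -1 : 0;
}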
From patchwork Thu Oct 28 07:54:52 2021
X-Patchwork-Submitter: Vijay Srivastava
X-Patchwork-Id: 103122
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Vijay Srivastava
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com, andrew.rybchenko@oktetlabs.ru
Date: Thu, 28 Oct 2021 13:24:52 +0530
Message-ID: <20211028075452.11804-11-vsrivast@xilinx.com>
In-Reply-To: <20211028075452.11804-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com> <20211028075452.11804-1-vsrivast@xilinx.com>
Subject: [dpdk-dev] [PATCH v2 10/10] vdpa/sfc: set a multicast filter during vDPA init

From: Vijay Kumar Srivastava

Insert an unknown multicast filter to allow IPv6 neighbor discovery.

Signed-off-by: Vijay Kumar Srivastava
---
 drivers/vdpa/sfc/sfc_vdpa.h        |  3 ++-
 drivers/vdpa/sfc/sfc_vdpa_filter.c | 19 +++++++++++++++++--
 2 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/drivers/vdpa/sfc/sfc_vdpa.h b/drivers/vdpa/sfc/sfc_vdpa.h
index dbd099f..bedc76c 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.h
+++ b/drivers/vdpa/sfc/sfc_vdpa.h
@@ -21,7 +21,7 @@
 #define SFC_VDPA_DEFAULT_MCDI_IOVA		0x200000000000
 
 /* Broadcast & Unicast MAC filters are supported */
-#define SFC_MAX_SUPPORTED_FILTERS		2
+#define SFC_MAX_SUPPORTED_FILTERS		3
 
 /*
  * Get function-local index of the associated VI from the
@@ -32,6 +32,7 @@
 enum sfc_vdpa_filter_type {
 	SFC_VDPA_BCAST_MAC_FILTER = 0,
 	SFC_VDPA_UCAST_MAC_FILTER = 1,
+	SFC_VDPA_MCAST_DST_FILTER = 2,
 	SFC_VDPA_FILTER_NTYPE
 };
 
diff --git a/drivers/vdpa/sfc/sfc_vdpa_filter.c b/drivers/vdpa/sfc/sfc_vdpa_filter.c
index 03b6a5d..74204d3 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_filter.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_filter.c
@@ -39,8 +39,12 @@
 	spec->efs_flags = EFX_FILTER_FLAG_RX;
 	spec->efs_dmaq_id = qid;
 
-	rc = efx_filter_spec_set_eth_local(spec, EFX_FILTER_SPEC_VID_UNSPEC,
-					   eth_addr);
+	if (eth_addr == NULL)
+		rc = efx_filter_spec_set_mc_def(spec);
+	else
+		rc = efx_filter_spec_set_eth_local(spec,
+						   EFX_FILTER_SPEC_VID_UNSPEC,
+						   eth_addr);
 	if (rc != 0)
 		return rc;
 
@@ -114,6 +118,17 @@ int sfc_vdpa_filter_config(struct sfc_vdpa_ops_data *ops_data)
 	else
 		sva->filters.filter_cnt++;
 
+	sfc_vdpa_log_init(sva, "insert unknown mcast filter");
+	spec = &sva->filters.spec[SFC_VDPA_MCAST_DST_FILTER];
+
+	rc = sfc_vdpa_set_mac_filter(nic, spec, qid, NULL);
+	if (rc != 0)
+		sfc_vdpa_err(sva,
+			     "mcast filter insertion failed: %s",
+			     rte_strerror(rc));
+	else
+		sva->filters.filter_cnt++;
+
 	sfc_vdpa_log_init(sva, "done");
 
 	return rc;
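Background on why this catch-all multicast filter is needed: IPv6 neighbor
discovery is sent to per-address solicited-node multicast groups, and each
group maps to a distinct Ethernet destination (per RFC 2464, 33:33 followed
by the low 32 bits of the IPv6 address), so no fixed set of unicast or
broadcast filters can cover them. A small illustrative sketch of that
mapping, with a made-up example address:

#include <stdint.h>
#include <stdio.h>

/*
 * Map an IPv6 multicast address to its Ethernet destination per
 * RFC 2464: 33:33 followed by the last four bytes of the address.
 */
static void
ipv6_mcast_to_mac(const uint8_t ip6[16], uint8_t mac[6])
{
	mac[0] = 0x33;
	mac[1] = 0x33;
	mac[2] = ip6[12];
	mac[3] = ip6[13];
	mac[4] = ip6[14];
	mac[5] = ip6[15];
}

int
main(void)
{
	/* Example solicited-node multicast address ff02::1:ff33:4455 */
	const uint8_t snm[16] = { 0xff, 0x02, 0, 0, 0, 0, 0, 0,
				  0, 0, 0, 1, 0xff, 0x33, 0x44, 0x55 };
	uint8_t mac[6];

	ipv6_mcast_to_mac(snm, mac);
	printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
	       mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
	return 0;
}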