From patchwork Fri Oct 29 14:46:36 2021
X-Patchwork-Submitter: Vijay Srivastava
X-Patchwork-Id: 103244
X-Patchwork-Delegate: maxime.coquelin@redhat.com
Envelope-to: dev@dpdk.org, maxime.coquelin@redhat.com, chenbo.xia@intel.com, andrew.rybchenko@oktetlabs.ru
From: Vijay Srivastava
Date: Fri, 29 Oct 2021 20:16:36 +0530
Message-ID: <20211029144645.30295-2-vsrivast@xilinx.com>
In-Reply-To: <20211029144645.30295-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com> <20211029144645.30295-1-vsrivast@xilinx.com>
Subject: [dpdk-dev] [PATCH v3 01/10] vdpa/sfc: introduce Xilinx vDPA driver

From: Vijay Kumar Srivastava

Add a new vDPA PMD to support vDPA operations of Xilinx devices.
This patch implements the probe and remove functions.

Signed-off-by: Vijay Kumar Srivastava
Acked-by: Andrew Rybchenko
---
v2:
* Updated logging macros to remove redundant code.

v3:
* Replaced deprecated whitelist with allowlist.
* Text corrections in sfc.rst and the commit message.
* Added sfc to the toctree of doc/guides/vdpadevs/index.rst.
* Removed extra compiler flags.

 MAINTAINERS                            |   6 +
 doc/guides/rel_notes/release_21_11.rst |   5 +
 doc/guides/vdpadevs/features/sfc.ini   |   9 ++
 doc/guides/vdpadevs/index.rst          |   2 +
 doc/guides/vdpadevs/sfc.rst            |  97 +++++++++++
 drivers/vdpa/meson.build               |   1 +
 drivers/vdpa/sfc/meson.build           |  22 +++
 drivers/vdpa/sfc/sfc_vdpa.c            | 286 +++++++++++++++++++++++++++++++++
 drivers/vdpa/sfc/sfc_vdpa.h            |  40 +++++
 drivers/vdpa/sfc/sfc_vdpa_log.h        |  56 +++++++
 drivers/vdpa/sfc/version.map           |   3 +
 11 files changed, 527 insertions(+)
 create mode 100644 doc/guides/vdpadevs/features/sfc.ini
 create mode 100644 doc/guides/vdpadevs/sfc.rst
 create mode 100644 drivers/vdpa/sfc/meson.build
 create mode 100644 drivers/vdpa/sfc/sfc_vdpa.c
 create mode 100644 drivers/vdpa/sfc/sfc_vdpa.h
 create mode 100644 drivers/vdpa/sfc/sfc_vdpa_log.h
 create mode 100644 drivers/vdpa/sfc/version.map

diff --git a/MAINTAINERS b/MAINTAINERS
index be2c9b6..5d12c49 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -1236,6 +1236,12 @@ F: drivers/vdpa/mlx5/
 F: doc/guides/vdpadevs/mlx5.rst
 F: doc/guides/vdpadevs/features/mlx5.ini
 
+Xilinx sfc vDPA
+M: Vijay Kumar Srivastava
+F: drivers/vdpa/sfc/
+F: doc/guides/vdpadevs/sfc.rst
+F: doc/guides/vdpadevs/features/sfc.ini
+
 Eventdev Drivers
 ----------------
 
diff --git a/doc/guides/rel_notes/release_21_11.rst b/doc/guides/rel_notes/release_21_11.rst
index 1ccac87..bd0a604 100644
--- a/doc/guides/rel_notes/release_21_11.rst
+++ b/doc/guides/rel_notes/release_21_11.rst
@@ -305,6 +305,11 @@ New Features
   * Pcapng format with timestamps and meta-data.
   * Fixes packet capture with stripped VLAN tags.
 
+* **Add new vDPA PMD based on Xilinx devices.**
+
+  Added a new Xilinx vDPA (``sfc_vdpa``) PMD.
+  See the :doc:`../vdpadevs/sfc` guide for more details on this driver.
+
 Removed Items
 -------------
 
diff --git a/doc/guides/vdpadevs/features/sfc.ini b/doc/guides/vdpadevs/features/sfc.ini
new file mode 100644
index 0000000..71b6158
--- /dev/null
+++ b/doc/guides/vdpadevs/features/sfc.ini
@@ -0,0 +1,9 @@
+;
+; Supported features of the 'sfc' vDPA driver.
+;
+; Refer to default.ini for the full list of available driver features.
+;
+[Features]
+Linux = Y
+x86-64 = Y
+Usage doc = Y
diff --git a/doc/guides/vdpadevs/index.rst b/doc/guides/vdpadevs/index.rst
index 1a13efe..f1a946e 100644
--- a/doc/guides/vdpadevs/index.rst
+++ b/doc/guides/vdpadevs/index.rst
@@ -14,3 +14,5 @@ which can be used from an application through vhost API.
     features_overview
     ifc
     mlx5
+    sfc
+
diff --git a/doc/guides/vdpadevs/sfc.rst b/doc/guides/vdpadevs/sfc.rst
new file mode 100644
index 0000000..44e694f
--- /dev/null
+++ b/doc/guides/vdpadevs/sfc.rst
@@ -0,0 +1,97 @@
+.. SPDX-License-Identifier: BSD-3-Clause
+   Copyright(c) 2021 Xilinx Corporation.
+
+Xilinx vDPA driver
+==================
+
+The Xilinx vDPA (vhost data path acceleration) driver (**librte_pmd_sfc_vdpa**)
+provides support for the Xilinx SN1022 SmartNICs family of 10/25/40/50/100 Gbps
+adapters, which support the latest Linux and FreeBSD operating systems.
+
+More information can be found at the Xilinx website https://www.xilinx.com.
+
+
+Xilinx vDPA implementation
+--------------------------
+
+An ef100 device can be configured in either net device or vDPA mode.
+Adding the "class=vdpa" parameter specifies that this
+device is to be used in vDPA mode. If this parameter is not specified, the
+device will be probed by the net/sfc driver and used as a net device.
+
+This PMD uses libefx (common/sfc_efx) code to access the device firmware.
+
+
+Supported NICs
+--------------
+
+- Xilinx SN1022 SmartNICs
+
+
+Features
+--------
+
+Features of the Xilinx vDPA driver are:
+
+- Compatibility with virtio 0.95 and 1.0
+
+
+Non-supported Features
+----------------------
+
+- Control Queue
+- Multi queue
+- Live Migration
+
+
+Prerequisites
+-------------
+
+Requires firmware version v1.0.7.0 or higher.
+
+Visit `Xilinx Support Downloads `_
+to get Xilinx Utilities with the latest firmware.
+Follow the instructions in the Alveo SN1000 SmartNICs User Guide to
+update the firmware and configure the adapter.
+
+
+Per-Device Parameters
+~~~~~~~~~~~~~~~~~~~~~
+
+The following per-device parameters can be passed via the EAL PCI device
+allowlist option, e.g. "-a 02:00.0,arg1=value1,...".
+
+Case-insensitive 1/y/yes/on or 0/n/no/off may be used to specify
+boolean parameter values.
+
+- ``class`` [net|vdpa] (default **net**)
+
+  Choose the mode of operation of the ef100 device.
+  A **net** device works as a network device and is probed by the net/sfc driver.
+  A **vdpa** device works as a vDPA device and is probed by the vdpa/sfc driver.
+  If this parameter is not specified, the ef100 device operates as a network device.
+
+
+Dynamic Logging Parameters
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+One may leverage the EAL option "--log-level" to change the default levels
+of the log types supported by the driver. The option is used with
+an argument typically consisting of two parts separated by a colon.
+
+The level value is the last part, which takes a symbolic name (or an integer).
+The log type is the former part, which may use shell match syntax.
+Depending on the choice of the expression, the given log level may
+be used either for some specific log type or for a subset of types.
+
+SFC vDPA PMD provides the following log types available for control:
+
+- ``pmd.vdpa.sfc.driver`` (default level is **notice**)
+
+  Affects driver-wide messages unrelated to any particular devices.
+
+- ``pmd.vdpa.sfc.main`` (default level is **notice**)
+
+  Matches a subset of per-port log types registered during runtime.
+  A full name for a particular type may be obtained by appending a
+  dot and a PCI device identifier (``XXXX:XX:XX.X``) to the prefix.
diff --git a/drivers/vdpa/meson.build b/drivers/vdpa/meson.build
index f765fe3..77412c7 100644
--- a/drivers/vdpa/meson.build
+++ b/drivers/vdpa/meson.build
@@ -8,6 +8,7 @@ endif
 drivers = [
         'ifc',
         'mlx5',
+        'sfc',
 ]
 std_deps = ['bus_pci', 'kvargs']
 std_deps += ['vhost']
diff --git a/drivers/vdpa/sfc/meson.build b/drivers/vdpa/sfc/meson.build
new file mode 100644
index 0000000..4255d65
--- /dev/null
+++ b/drivers/vdpa/sfc/meson.build
@@ -0,0 +1,22 @@
+# SPDX-License-Identifier: BSD-3-Clause
+#
+# Copyright(c) 2020-2021 Xilinx, Inc.
+
+if (arch_subdir != 'x86' or not dpdk_conf.get('RTE_ARCH_64')) and (arch_subdir != 'arm' or not host_machine.cpu_family().startswith('aarch64'))
+    build = false
+    reason = 'only supported on x86_64 and aarch64'
+endif
+
+fmt_name = 'sfc_vdpa'
+extra_flags = []
+
+foreach flag: extra_flags
+    if cc.has_argument(flag)
+        cflags += flag
+    endif
+endforeach
+
+deps += ['common_sfc_efx', 'bus_pci']
+sources = files(
+    'sfc_vdpa.c',
+)
diff --git a/drivers/vdpa/sfc/sfc_vdpa.c b/drivers/vdpa/sfc/sfc_vdpa.c
new file mode 100644
index 0000000..d85c52b
--- /dev/null
+++ b/drivers/vdpa/sfc/sfc_vdpa.c
@@ -0,0 +1,286 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */ + +#include +#include +#include + +#include +#include +#include +#include +#include + +#include "efx.h" +#include "sfc_efx.h" +#include "sfc_vdpa.h" + +TAILQ_HEAD(sfc_vdpa_adapter_list_head, sfc_vdpa_adapter); +static struct sfc_vdpa_adapter_list_head sfc_vdpa_adapter_list = + TAILQ_HEAD_INITIALIZER(sfc_vdpa_adapter_list); + +static pthread_mutex_t sfc_vdpa_adapter_list_lock = PTHREAD_MUTEX_INITIALIZER; + +struct sfc_vdpa_adapter * +sfc_vdpa_get_adapter_by_dev(struct rte_pci_device *pdev) +{ + bool found = false; + struct sfc_vdpa_adapter *sva; + + pthread_mutex_lock(&sfc_vdpa_adapter_list_lock); + + TAILQ_FOREACH(sva, &sfc_vdpa_adapter_list, next) { + if (pdev == sva->pdev) { + found = true; + break; + } + } + + pthread_mutex_unlock(&sfc_vdpa_adapter_list_lock); + + return found ? sva : NULL; +} + +static int +sfc_vdpa_vfio_setup(struct sfc_vdpa_adapter *sva) +{ + struct rte_pci_device *dev = sva->pdev; + char dev_name[RTE_DEV_NAME_MAX_LEN] = {0}; + int rc; + + if (dev == NULL) + goto fail_inval; + + rte_pci_device_name(&dev->addr, dev_name, RTE_DEV_NAME_MAX_LEN); + + sva->vfio_container_fd = rte_vfio_container_create(); + if (sva->vfio_container_fd < 0) { + sfc_vdpa_err(sva, "failed to create VFIO container"); + goto fail_container_create; + } + + rc = rte_vfio_get_group_num(rte_pci_get_sysfs_path(), dev_name, + &sva->iommu_group_num); + if (rc <= 0) { + sfc_vdpa_err(sva, "failed to get IOMMU group for %s : %s", + dev_name, rte_strerror(-rc)); + goto fail_get_group_num; + } + + sva->vfio_group_fd = + rte_vfio_container_group_bind(sva->vfio_container_fd, + sva->iommu_group_num); + if (sva->vfio_group_fd < 0) { + sfc_vdpa_err(sva, + "failed to bind IOMMU group %d to container %d", + sva->iommu_group_num, sva->vfio_container_fd); + goto fail_group_bind; + } + + if (rte_pci_map_device(dev) != 0) { + sfc_vdpa_err(sva, "failed to map PCI device %s : %s", + dev_name, rte_strerror(rte_errno)); + goto fail_pci_map_device; + } + + sva->vfio_dev_fd = 
+		rte_intr_dev_fd_get(dev->intr_handle);
+
+	return 0;
+
+fail_pci_map_device:
+	if (rte_vfio_container_group_unbind(sva->vfio_container_fd,
+					    sva->iommu_group_num) != 0) {
+		sfc_vdpa_err(sva,
+			     "failed to unbind IOMMU group %d from container %d",
+			     sva->iommu_group_num, sva->vfio_container_fd);
+	}
+
+fail_group_bind:
+fail_get_group_num:
+	if (rte_vfio_container_destroy(sva->vfio_container_fd) != 0) {
+		sfc_vdpa_err(sva, "failed to destroy container %d",
+			     sva->vfio_container_fd);
+	}
+
+fail_container_create:
+fail_inval:
+	return -1;
+}
+
+static void
+sfc_vdpa_vfio_teardown(struct sfc_vdpa_adapter *sva)
+{
+	rte_pci_unmap_device(sva->pdev);
+
+	if (rte_vfio_container_group_unbind(sva->vfio_container_fd,
+					    sva->iommu_group_num) != 0) {
+		sfc_vdpa_err(sva,
+			     "failed to unbind IOMMU group %d from container %d",
+			     sva->iommu_group_num, sva->vfio_container_fd);
+	}
+
+	if (rte_vfio_container_destroy(sva->vfio_container_fd) != 0) {
+		sfc_vdpa_err(sva,
+			     "failed to destroy container %d",
+			     sva->vfio_container_fd);
+	}
+}
+
+static int
+sfc_vdpa_set_log_prefix(struct sfc_vdpa_adapter *sva)
+{
+	struct rte_pci_device *pci_dev = sva->pdev;
+	int ret;
+
+	ret = snprintf(sva->log_prefix, sizeof(sva->log_prefix),
+		       "PMD: sfc_vdpa " PCI_PRI_FMT " : ",
+		       pci_dev->addr.domain, pci_dev->addr.bus,
+		       pci_dev->addr.devid, pci_dev->addr.function);
+
+	if (ret < 0 || ret >= (int)sizeof(sva->log_prefix)) {
+		SFC_VDPA_GENERIC_LOG(ERR,
+			"reserved log prefix is too short for " PCI_PRI_FMT,
+			pci_dev->addr.domain, pci_dev->addr.bus,
+			pci_dev->addr.devid, pci_dev->addr.function);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+uint32_t
+sfc_vdpa_register_logtype(const struct rte_pci_addr *pci_addr,
+			  const char *lt_prefix_str, uint32_t ll_default)
+{
+	size_t lt_prefix_str_size = strlen(lt_prefix_str);
+	size_t lt_str_size_max;
+	char *lt_str = NULL;
+	int ret;
+
+	if (SIZE_MAX - PCI_PRI_STR_SIZE - 1 > lt_prefix_str_size) {
+		++lt_prefix_str_size; /* Reserve space for prefix separator */
+		lt_str_size_max = lt_prefix_str_size + PCI_PRI_STR_SIZE + 1;
+	} else {
+		return RTE_LOGTYPE_PMD;
+	}
+
+	lt_str = rte_zmalloc("logtype_str", lt_str_size_max, 0);
+	if (lt_str == NULL)
+		return RTE_LOGTYPE_PMD;
+
+	strncpy(lt_str, lt_prefix_str, lt_prefix_str_size);
+	lt_str[lt_prefix_str_size - 1] = '.';
+	rte_pci_device_name(pci_addr, lt_str + lt_prefix_str_size,
+			    lt_str_size_max - lt_prefix_str_size);
+	lt_str[lt_str_size_max - 1] = '\0';
+
+	ret = rte_log_register_type_and_pick_level(lt_str, ll_default);
+	rte_free(lt_str);
+
+	return ret < 0 ? RTE_LOGTYPE_PMD : ret;
+}
+
+static struct rte_pci_id pci_id_sfc_vdpa_efx_map[] = {
+	{ RTE_PCI_DEVICE(EFX_PCI_VENID_XILINX, EFX_PCI_DEVID_RIVERHEAD_VF) },
+	{ .vendor_id = 0, /* sentinel */ },
+};
+
+static int
+sfc_vdpa_pci_probe(struct rte_pci_driver *pci_drv __rte_unused,
+		   struct rte_pci_device *pci_dev)
+{
+	struct sfc_vdpa_adapter *sva = NULL;
+	uint32_t logtype_main;
+	int ret = 0;
+
+	if (sfc_efx_dev_class_get(pci_dev->device.devargs) !=
+	    SFC_EFX_DEV_CLASS_VDPA) {
+		SFC_VDPA_GENERIC_LOG(INFO,
+			"Incompatible device class: skip probing, should be probed by other sfc driver.");
+		return 1;
+	}
+
+	/*
+	 * It will not be probed in the secondary process.
+	 * As the device class
+	 * is vdpa, return 0 to avoid probe by the other sfc driver.
+	 */
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return 0;
+
+	logtype_main = sfc_vdpa_register_logtype(&pci_dev->addr,
+						 SFC_VDPA_LOGTYPE_MAIN_STR,
+						 RTE_LOG_NOTICE);
+
+	sva = rte_zmalloc("sfc_vdpa", sizeof(struct sfc_vdpa_adapter), 0);
+	if (sva == NULL)
+		goto fail_zmalloc;
+
+	sva->pdev = pci_dev;
+	sva->logtype_main = logtype_main;
+
+	ret = sfc_vdpa_set_log_prefix(sva);
+	if (ret != 0)
+		goto fail_set_log_prefix;
+
+	sfc_vdpa_log_init(sva, "entry");
+
+	sfc_vdpa_log_init(sva, "vfio init");
+	if (sfc_vdpa_vfio_setup(sva) < 0) {
+		sfc_vdpa_err(sva, "failed to setup device %s", pci_dev->name);
+		goto fail_vfio_setup;
+	}
+
+	pthread_mutex_lock(&sfc_vdpa_adapter_list_lock);
+	TAILQ_INSERT_TAIL(&sfc_vdpa_adapter_list, sva, next);
+	pthread_mutex_unlock(&sfc_vdpa_adapter_list_lock);
+
+	sfc_vdpa_log_init(sva, "done");
+
+	return 0;
+
+fail_vfio_setup:
+fail_set_log_prefix:
+	rte_free(sva);
+
+fail_zmalloc:
+	return -1;
+}
+
+static int
+sfc_vdpa_pci_remove(struct rte_pci_device *pci_dev)
+{
+	struct sfc_vdpa_adapter *sva = NULL;
+
+	if (rte_eal_process_type() != RTE_PROC_PRIMARY)
+		return -1;
+
+	sva = sfc_vdpa_get_adapter_by_dev(pci_dev);
+	if (sva == NULL) {
+		sfc_vdpa_info(sva, "invalid device: %s", pci_dev->name);
+		return -1;
+	}
+
+	pthread_mutex_lock(&sfc_vdpa_adapter_list_lock);
+	TAILQ_REMOVE(&sfc_vdpa_adapter_list, sva, next);
+	pthread_mutex_unlock(&sfc_vdpa_adapter_list_lock);
+
+	sfc_vdpa_vfio_teardown(sva);
+
+	rte_free(sva);
+
+	return 0;
+}
+
+static struct rte_pci_driver rte_sfc_vdpa = {
+	.id_table = pci_id_sfc_vdpa_efx_map,
+	.drv_flags = 0,
+	.probe = sfc_vdpa_pci_probe,
+	.remove = sfc_vdpa_pci_remove,
+};
+
+RTE_PMD_REGISTER_PCI(net_sfc_vdpa, rte_sfc_vdpa);
+RTE_PMD_REGISTER_PCI_TABLE(net_sfc_vdpa, pci_id_sfc_vdpa_efx_map);
+RTE_PMD_REGISTER_KMOD_DEP(net_sfc_vdpa, "* vfio-pci");
+RTE_LOG_REGISTER_SUFFIX(sfc_vdpa_logtype_driver, driver, NOTICE);
diff --git a/drivers/vdpa/sfc/sfc_vdpa.h b/drivers/vdpa/sfc/sfc_vdpa.h
new file mode 100644
index 0000000..3b77900
--- /dev/null
+++ b/drivers/vdpa/sfc/sfc_vdpa.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#ifndef _SFC_VDPA_H
+#define _SFC_VDPA_H
+
+#include
+#include
+
+#include
+
+#include "sfc_vdpa_log.h"
+
+/* Adapter private data */
+struct sfc_vdpa_adapter {
+	TAILQ_ENTRY(sfc_vdpa_adapter) next;
+	struct rte_pci_device *pdev;
+	struct rte_pci_addr pci_addr;
+
+	char log_prefix[SFC_VDPA_LOG_PREFIX_MAX];
+	uint32_t logtype_main;
+
+	int vfio_group_fd;
+	int vfio_dev_fd;
+	int vfio_container_fd;
+	int iommu_group_num;
+};
+
+uint32_t
+sfc_vdpa_register_logtype(const struct rte_pci_addr *pci_addr,
+			  const char *lt_prefix_str,
+			  uint32_t ll_default);
+
+struct sfc_vdpa_adapter *
+sfc_vdpa_get_adapter_by_dev(struct rte_pci_device *pdev);
+
+#endif /* _SFC_VDPA_H */
+
diff --git a/drivers/vdpa/sfc/sfc_vdpa_log.h b/drivers/vdpa/sfc/sfc_vdpa_log.h
new file mode 100644
index 0000000..858e5ee
--- /dev/null
+++ b/drivers/vdpa/sfc/sfc_vdpa_log.h
@@ -0,0 +1,56 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#ifndef _SFC_VDPA_LOG_H_
+#define _SFC_VDPA_LOG_H_
+
+/** Generic driver log type */
+extern int sfc_vdpa_logtype_driver;
+
+/** Common log type name prefix */
+#define SFC_VDPA_LOGTYPE_PREFIX	"pmd.vdpa.sfc."
+
+/** Log PMD generic message, add a prefix and a line break */
+#define SFC_VDPA_GENERIC_LOG(level, ...) \
+	rte_log(RTE_LOG_ ## level, sfc_vdpa_logtype_driver,		\
+		RTE_FMT("PMD: " RTE_FMT_HEAD(__VA_ARGS__ ,) "\n",	\
+			RTE_FMT_TAIL(__VA_ARGS__ ,)))
+
+/** Name prefix for the per-device log type used to report basic information */
+#define SFC_VDPA_LOGTYPE_MAIN_STR	SFC_VDPA_LOGTYPE_PREFIX "main"
+
+#define SFC_VDPA_LOG_PREFIX_MAX	32
+
+/* Log PMD message, automatically add prefix and \n */
+#define SFC_VDPA_LOG(sva, level, ...) \
+	do {								\
+		const struct sfc_vdpa_adapter *_sva = (sva);		\
+									\
+		rte_log(RTE_LOG_ ## level, _sva->logtype_main,		\
+			RTE_FMT("%s" RTE_FMT_HEAD(__VA_ARGS__ ,) "\n",	\
+				_sva->log_prefix,			\
+				RTE_FMT_TAIL(__VA_ARGS__ ,)));		\
+	} while (0)
+
+#define sfc_vdpa_err(sva, ...) \
+	SFC_VDPA_LOG(sva, ERR, __VA_ARGS__)
+
+#define sfc_vdpa_warn(sva, ...) \
+	SFC_VDPA_LOG(sva, WARNING, __VA_ARGS__)
+
+#define sfc_vdpa_notice(sva, ...) \
+	SFC_VDPA_LOG(sva, NOTICE, __VA_ARGS__)
+
+#define sfc_vdpa_info(sva, ...) \
+	SFC_VDPA_LOG(sva, INFO, __VA_ARGS__)
+
+#define sfc_vdpa_log_init(sva, ...) \
+	SFC_VDPA_LOG(sva, INFO,					\
+		     RTE_FMT("%s(): "				\
+			     RTE_FMT_HEAD(__VA_ARGS__ ,),	\
+			     __func__,				\
+			     RTE_FMT_TAIL(__VA_ARGS__ ,)))
+
+#endif /* _SFC_VDPA_LOG_H_ */
diff --git a/drivers/vdpa/sfc/version.map b/drivers/vdpa/sfc/version.map
new file mode 100644
index 0000000..4a76d1d
--- /dev/null
+++ b/drivers/vdpa/sfc/version.map
@@ -0,0 +1,3 @@
+DPDK_21 {
+	local: *;
+};

From patchwork Fri Oct 29 14:46:37 2021
X-Patchwork-Submitter: Vijay Srivastava
X-Patchwork-Id: 103245
X-Patchwork-Delegate: maxime.coquelin@redhat.com
Envelope-to: dev@dpdk.org, maxime.coquelin@redhat.com, chenbo.xia@intel.com, andrew.rybchenko@oktetlabs.ru
From: Vijay Srivastava
Date: Fri, 29 Oct 2021 20:16:37 +0530
Message-ID: <20211029144645.30295-3-vsrivast@xilinx.com>
In-Reply-To: <20211029144645.30295-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com> <20211029144645.30295-1-vsrivast@xilinx.com>
657af505-d5df-48d0-8300-c31994686c5c X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=657af505-d5df-48d0-8300-c31994686c5c; Ip=[149.199.62.198]; Helo=[xsj-pvapexch02.xlnx.xilinx.com] X-MS-Exchange-CrossTenant-AuthSource: DM3NAM02FT035.eop-nam02.prod.protection.outlook.com X-MS-Exchange-CrossTenant-AuthAs: Anonymous X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR02MB8795 Subject: [dpdk-dev] [PATCH v3 02/10] vdpa/sfc: add support for device initialization X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vijay Kumar Srivastava Add HW initialization and vDPA device registration support. Signed-off-by: Vijay Kumar Srivastava Acked-by: Andrew Rybchenko --- v2: * Used rte_memzone_reserve_aligned for mcdi buffer allocation. * Freeing mcdi buff when DMA map fails. * Fixed one typo. 
 doc/guides/vdpadevs/sfc.rst       |   6 +
 drivers/vdpa/sfc/meson.build      |   3 +
 drivers/vdpa/sfc/sfc_vdpa.c       |  23 +++
 drivers/vdpa/sfc/sfc_vdpa.h       |  49 +++++-
 drivers/vdpa/sfc/sfc_vdpa_debug.h |  21 +++
 drivers/vdpa/sfc/sfc_vdpa_hw.c    | 327 ++++++++++++++++++++++++++++++++++++++
 drivers/vdpa/sfc/sfc_vdpa_log.h   |   3 +
 drivers/vdpa/sfc/sfc_vdpa_mcdi.c  |  74 +++++++++
 drivers/vdpa/sfc/sfc_vdpa_ops.c   | 129 +++++++++++++++
 drivers/vdpa/sfc/sfc_vdpa_ops.h   |  36 +++++
 10 files changed, 670 insertions(+), 1 deletion(-)
 create mode 100644 drivers/vdpa/sfc/sfc_vdpa_debug.h
 create mode 100644 drivers/vdpa/sfc/sfc_vdpa_hw.c
 create mode 100644 drivers/vdpa/sfc/sfc_vdpa_mcdi.c
 create mode 100644 drivers/vdpa/sfc/sfc_vdpa_ops.c
 create mode 100644 drivers/vdpa/sfc/sfc_vdpa_ops.h

diff --git a/doc/guides/vdpadevs/sfc.rst b/doc/guides/vdpadevs/sfc.rst
index 44e694f..d06c427 100644
--- a/doc/guides/vdpadevs/sfc.rst
+++ b/doc/guides/vdpadevs/sfc.rst
@@ -95,3 +95,9 @@ SFC vDPA PMD provides the following log types available for control:
   Matches a subset of per-port log types registered during runtime.
   A full name for a particular type may be obtained by appending a dot and
   a PCI device identifier (``XXXX:XX:XX.X``) to the prefix.
+
+- ``pmd.vdpa.sfc.mcdi`` (default level is **notice**)
+
+  Extra logging of the communication with the NIC's management CPU.
+  The format of the log is consumed by the netlogdecode cross-platform
+  tool. May be managed per-port, as explained above.
diff --git a/drivers/vdpa/sfc/meson.build b/drivers/vdpa/sfc/meson.build
index 4255d65..dc333de 100644
--- a/drivers/vdpa/sfc/meson.build
+++ b/drivers/vdpa/sfc/meson.build
@@ -19,4 +19,7 @@ endforeach
 deps += ['common_sfc_efx', 'bus_pci']
 sources = files(
     'sfc_vdpa.c',
+    'sfc_vdpa_hw.c',
+    'sfc_vdpa_mcdi.c',
+    'sfc_vdpa_ops.c',
 )
diff --git a/drivers/vdpa/sfc/sfc_vdpa.c b/drivers/vdpa/sfc/sfc_vdpa.c
index d85c52b..b7eca56 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.c
+++ b/drivers/vdpa/sfc/sfc_vdpa.c
@@ -232,6 +232,19 @@ struct sfc_vdpa_adapter *
 		goto fail_vfio_setup;
 	}
 
+	sfc_vdpa_log_init(sva, "hw init");
+	if (sfc_vdpa_hw_init(sva) != 0) {
+		sfc_vdpa_err(sva, "failed to init HW %s", pci_dev->name);
+		goto fail_hw_init;
+	}
+
+	sfc_vdpa_log_init(sva, "dev init");
+	sva->ops_data = sfc_vdpa_device_init(sva, SFC_VDPA_AS_VF);
+	if (sva->ops_data == NULL) {
+		sfc_vdpa_err(sva, "failed vDPA dev init %s", pci_dev->name);
+		goto fail_dev_init;
+	}
+
 	pthread_mutex_lock(&sfc_vdpa_adapter_list_lock);
 	TAILQ_INSERT_TAIL(&sfc_vdpa_adapter_list, sva, next);
 	pthread_mutex_unlock(&sfc_vdpa_adapter_list_lock);
@@ -240,6 +253,12 @@ struct sfc_vdpa_adapter *
 
 	return 0;
 
+fail_dev_init:
+	sfc_vdpa_hw_fini(sva);
+
+fail_hw_init:
+	sfc_vdpa_vfio_teardown(sva);
+
 fail_vfio_setup:
 fail_set_log_prefix:
 	rte_free(sva);
@@ -266,6 +285,10 @@ struct sfc_vdpa_adapter *
 	TAILQ_REMOVE(&sfc_vdpa_adapter_list, sva, next);
 	pthread_mutex_unlock(&sfc_vdpa_adapter_list_lock);
 
+	sfc_vdpa_device_fini(sva->ops_data);
+
+	sfc_vdpa_hw_fini(sva);
+
 	sfc_vdpa_vfio_teardown(sva);
 	rte_free(sva);
diff --git a/drivers/vdpa/sfc/sfc_vdpa.h b/drivers/vdpa/sfc/sfc_vdpa.h
index 3b77900..046f25d 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.h
+++ b/drivers/vdpa/sfc/sfc_vdpa.h
@@ -11,14 +11,38 @@
 
 #include
 
+#include "sfc_efx.h"
+#include "sfc_efx_mcdi.h"
+#include "sfc_vdpa_debug.h"
 #include "sfc_vdpa_log.h"
+#include "sfc_vdpa_ops.h"
+
+#define SFC_VDPA_DEFAULT_MCDI_IOVA	0x200000000000
 
 /* Adapter private data */
 struct sfc_vdpa_adapter {
 	TAILQ_ENTRY(sfc_vdpa_adapter) next;
+	/*
+	 * PMD setup and configuration is not thread safe. Since it is not
+	 * performance sensitive, it is better to guarantee thread-safety
+	 * and add device level lock. vDPA control operations which
+	 * change its state should acquire the lock.
+	 */
+	rte_spinlock_t lock;
 	struct rte_pci_device *pdev;
 	struct rte_pci_addr pci_addr;
 
+	efx_family_t family;
+	efx_nic_t *nic;
+	rte_spinlock_t nic_lock;
+
+	efsys_bar_t mem_bar;
+
+	struct sfc_efx_mcdi mcdi;
+	size_t mcdi_buff_size;
+
+	uint32_t max_queue_count;
+
 	char log_prefix[SFC_VDPA_LOG_PREFIX_MAX];
 	uint32_t logtype_main;
 
@@ -26,6 +50,7 @@ struct sfc_vdpa_adapter {
 	int vfio_dev_fd;
 	int vfio_container_fd;
 	int iommu_group_num;
+	struct sfc_vdpa_ops_data *ops_data;
 };
 
 uint32_t
@@ -36,5 +61,27 @@ struct sfc_vdpa_adapter {
 struct sfc_vdpa_adapter *
 sfc_vdpa_get_adapter_by_dev(struct rte_pci_device *pdev);
 
-#endif /* _SFC_VDPA_H */
+int
+sfc_vdpa_hw_init(struct sfc_vdpa_adapter *sva);
+void
+sfc_vdpa_hw_fini(struct sfc_vdpa_adapter *sva);
+int
+sfc_vdpa_mcdi_init(struct sfc_vdpa_adapter *sva);
+void
+sfc_vdpa_mcdi_fini(struct sfc_vdpa_adapter *sva);
+
+int
+sfc_vdpa_dma_alloc(struct sfc_vdpa_adapter *sva, const char *name,
+		   size_t len, efsys_mem_t *esmp);
+
+void
+sfc_vdpa_dma_free(struct sfc_vdpa_adapter *sva, efsys_mem_t *esmp);
+
+static inline struct sfc_vdpa_adapter *
+sfc_vdpa_adapter_by_dev_handle(void *dev_handle)
+{
+	return (struct sfc_vdpa_adapter *)dev_handle;
+}
+
+#endif /* _SFC_VDPA_H */
diff --git a/drivers/vdpa/sfc/sfc_vdpa_debug.h b/drivers/vdpa/sfc/sfc_vdpa_debug.h
new file mode 100644
index 0000000..cfa8cc5
--- /dev/null
+++ b/drivers/vdpa/sfc/sfc_vdpa_debug.h
@@ -0,0 +1,21 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#ifndef _SFC_VDPA_DEBUG_H_
+#define _SFC_VDPA_DEBUG_H_
+
+#include
+
+#ifdef RTE_LIBRTE_SFC_VDPA_DEBUG
+/* Avoid dependency from RTE_LOG_DP_LEVEL to be able to enable debug check
+ * in the driver only.
+ */
+#define SFC_VDPA_ASSERT(exp)	RTE_VERIFY(exp)
+#else
+/* If the driver debug is not enabled, follow DPDK debug/non-debug */
+#define SFC_VDPA_ASSERT(exp)	RTE_ASSERT(exp)
+#endif
+
+#endif /* _SFC_VDPA_DEBUG_H_ */
diff --git a/drivers/vdpa/sfc/sfc_vdpa_hw.c b/drivers/vdpa/sfc/sfc_vdpa_hw.c
new file mode 100644
index 0000000..7c256ff
--- /dev/null
+++ b/drivers/vdpa/sfc/sfc_vdpa_hw.c
@@ -0,0 +1,327 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#include
+
+#include
+#include
+#include
+
+#include "efx.h"
+#include "sfc_vdpa.h"
+#include "sfc_vdpa_ops.h"
+
+extern uint32_t sfc_logtype_driver;
+
+#ifndef PAGE_SIZE
+#define PAGE_SIZE	(sysconf(_SC_PAGESIZE))
+#endif
+
+int
+sfc_vdpa_dma_alloc(struct sfc_vdpa_adapter *sva, const char *name,
+		   size_t len, efsys_mem_t *esmp)
+{
+	uint64_t mcdi_iova;
+	size_t mcdi_buff_size;
+	const struct rte_memzone *mz = NULL;
+	int numa_node = sva->pdev->device.numa_node;
+	int ret;
+
+	mcdi_buff_size = RTE_ALIGN_CEIL(len, PAGE_SIZE);
+
+	sfc_vdpa_log_init(sva, "name=%s, len=%zu", name, len);
+
+	mz = rte_memzone_reserve_aligned(name, mcdi_buff_size,
+					 numa_node,
+					 RTE_MEMZONE_IOVA_CONTIG,
+					 PAGE_SIZE);
+	if (mz == NULL) {
+		sfc_vdpa_err(sva, "cannot reserve memory for %s: len=%#x: %s",
+			     name, (unsigned int)len, rte_strerror(rte_errno));
+		return -ENOMEM;
+	}
+
+	/* IOVA address for MCDI would be re-calculated if mapping
+	 * using default IOVA would fail.
+	 * TODO: Earlier there was no way to get valid IOVA range.
+	 * Recently a patch has been submitted to get the IOVA range
+	 * using ioctl. VFIO_IOMMU_GET_INFO. This patch is available
+	 * in the kernel version >= 5.4. Support to get the default
+	 * IOVA address for MCDI buffer using available IOVA range
+	 * would be added later. Meanwhile default IOVA for MCDI buffer
+	 * is kept at high mem at 2TB. In case of overlap new available
+	 * addresses would be searched and same would be used.
+	 */
+	mcdi_iova = SFC_VDPA_DEFAULT_MCDI_IOVA;
+
+	do {
+		ret = rte_vfio_container_dma_map(sva->vfio_container_fd,
+						 (uint64_t)mz->addr, mcdi_iova,
+						 mcdi_buff_size);
+		if (ret == 0)
+			break;
+
+		mcdi_iova = mcdi_iova >> 1;
+		if (mcdi_iova < mcdi_buff_size) {
+			sfc_vdpa_err(sva,
+				     "DMA mapping failed for MCDI : %s",
+				     rte_strerror(rte_errno));
+			rte_memzone_free(mz);
+			return ret;
+		}
+
+	} while (ret < 0);
+
+	esmp->esm_addr = mcdi_iova;
+	esmp->esm_base = mz->addr;
+	sva->mcdi_buff_size = mcdi_buff_size;
+
+	sfc_vdpa_info(sva,
+		      "DMA name=%s len=%zu => virt=%p iova=%" PRIx64,
+		      name, len, esmp->esm_base, esmp->esm_addr);
+
+	return 0;
+}
+
+void
+sfc_vdpa_dma_free(struct sfc_vdpa_adapter *sva, efsys_mem_t *esmp)
+{
+	int ret;
+
+	sfc_vdpa_log_init(sva, "name=%s", esmp->esm_mz->name);
+
+	ret = rte_vfio_container_dma_unmap(sva->vfio_container_fd,
+					   (uint64_t)esmp->esm_base,
+					   esmp->esm_addr, sva->mcdi_buff_size);
+	if (ret < 0)
+		sfc_vdpa_err(sva, "DMA unmap failed for MCDI : %s",
+			     rte_strerror(rte_errno));
+
+	sfc_vdpa_info(sva,
+		      "DMA free name=%s => virt=%p iova=%" PRIx64,
+		      esmp->esm_mz->name, esmp->esm_base, esmp->esm_addr);
+
+	rte_free((void *)(esmp->esm_base));
+
+	sva->mcdi_buff_size = 0;
+	memset(esmp, 0, sizeof(*esmp));
+}
+
+static int
+sfc_vdpa_mem_bar_init(struct sfc_vdpa_adapter *sva,
+		      const efx_bar_region_t *mem_ebrp)
+{
+	struct rte_pci_device *pci_dev = sva->pdev;
+	efsys_bar_t *ebp = &sva->mem_bar;
+	struct rte_mem_resource *res =
+		&pci_dev->mem_resource[mem_ebrp->ebr_index];
+
+	SFC_BAR_LOCK_INIT(ebp, pci_dev->name);
+	ebp->esb_rid = mem_ebrp->ebr_index;
+	ebp->esb_dev = pci_dev;
+	ebp->esb_base = res->addr;
+
+	return 0;
+}
+
+static void
+sfc_vdpa_mem_bar_fini(struct sfc_vdpa_adapter *sva)
+{
+	efsys_bar_t *ebp = &sva->mem_bar;
+
+	SFC_BAR_LOCK_DESTROY(ebp);
+	memset(ebp, 0, sizeof(*ebp));
+}
+
+static int
+sfc_vdpa_nic_probe(struct sfc_vdpa_adapter *sva)
+{
+	efx_nic_t *enp = sva->nic;
+	int rc;
+
+	rc = efx_nic_probe(enp, EFX_FW_VARIANT_DONT_CARE);
+	if (rc != 0)
+		sfc_vdpa_err(sva, "nic probe failed: %s", rte_strerror(rc));
+
+	return rc;
+}
+
+static int
+sfc_vdpa_estimate_resource_limits(struct sfc_vdpa_adapter *sva)
+{
+	efx_drv_limits_t limits;
+	int rc;
+	uint32_t evq_allocated;
+	uint32_t rxq_allocated;
+	uint32_t txq_allocated;
+	uint32_t max_queue_cnt;
+
+	memset(&limits, 0, sizeof(limits));
+
+	/* Request at least one Rx and Tx queue */
+	limits.edl_min_rxq_count = 1;
+	limits.edl_min_txq_count = 1;
+	/* Management event queue plus event queue for Tx/Rx queue */
+	limits.edl_min_evq_count =
+		1 + RTE_MAX(limits.edl_min_rxq_count, limits.edl_min_txq_count);
+
+	limits.edl_max_rxq_count = SFC_VDPA_MAX_QUEUE_PAIRS;
+	limits.edl_max_txq_count = SFC_VDPA_MAX_QUEUE_PAIRS;
+	limits.edl_max_evq_count = 1 + SFC_VDPA_MAX_QUEUE_PAIRS;
+
+	SFC_VDPA_ASSERT(limits.edl_max_evq_count >= limits.edl_min_rxq_count);
+	SFC_VDPA_ASSERT(limits.edl_max_rxq_count >= limits.edl_min_rxq_count);
+	SFC_VDPA_ASSERT(limits.edl_max_txq_count >= limits.edl_min_rxq_count);
+
+	/* Configure the minimum required resources needed for the
+	 * driver to operate, and the maximum desired resources that the
+	 * driver is capable of using.
+	 */
+	sfc_vdpa_log_init(sva, "set drv limit");
+	efx_nic_set_drv_limits(sva->nic, &limits);
+
+	sfc_vdpa_log_init(sva, "init nic");
+	rc = efx_nic_init(sva->nic);
+	if (rc != 0) {
+		sfc_vdpa_err(sva, "nic init failed: %s", rte_strerror(rc));
+		goto fail_nic_init;
+	}
+
+	/* Find resource dimensions assigned by firmware to this function */
+	rc = efx_nic_get_vi_pool(sva->nic, &evq_allocated, &rxq_allocated,
+				 &txq_allocated);
+	if (rc != 0) {
+		sfc_vdpa_err(sva, "vi pool get failed: %s", rte_strerror(rc));
+		goto fail_get_vi_pool;
+	}
+
+	/* It still may allocate more than maximum, ensure limit */
+	evq_allocated = RTE_MIN(evq_allocated, limits.edl_max_evq_count);
+	rxq_allocated = RTE_MIN(rxq_allocated, limits.edl_max_rxq_count);
+	txq_allocated = RTE_MIN(txq_allocated, limits.edl_max_txq_count);
+
+	max_queue_cnt = RTE_MIN(rxq_allocated, txq_allocated);
+	/* Subtract management EVQ not used for traffic */
+	max_queue_cnt = RTE_MIN(evq_allocated - 1, max_queue_cnt);
+
+	SFC_VDPA_ASSERT(max_queue_cnt > 0);
+
+	sva->max_queue_count = max_queue_cnt;
+
+	return 0;
+
+fail_get_vi_pool:
+	efx_nic_fini(sva->nic);
+fail_nic_init:
+	sfc_vdpa_log_init(sva, "failed: %s", rte_strerror(rc));
+	return rc;
+}
+
+int
+sfc_vdpa_hw_init(struct sfc_vdpa_adapter *sva)
+{
+	efx_bar_region_t mem_ebr;
+	efx_nic_t *enp;
+	int rc;
+
+	sfc_vdpa_log_init(sva, "entry");
+
+	sfc_vdpa_log_init(sva, "get family");
+	rc = sfc_efx_family(sva->pdev, &mem_ebr, &sva->family);
+	if (rc != 0)
+		goto fail_family;
+	sfc_vdpa_log_init(sva,
+			  "family is %u, membar is %u,"
+			  "function control window offset is %#" PRIx64,
+			  sva->family, mem_ebr.ebr_index, mem_ebr.ebr_offset);
+
+	sfc_vdpa_log_init(sva, "init mem bar");
+	rc = sfc_vdpa_mem_bar_init(sva, &mem_ebr);
+	if (rc != 0)
+		goto fail_mem_bar_init;
+
+	sfc_vdpa_log_init(sva, "create nic");
+	rte_spinlock_init(&sva->nic_lock);
+	rc = efx_nic_create(sva->family, (efsys_identifier_t *)sva,
+			    &sva->mem_bar, mem_ebr.ebr_offset,
+			    &sva->nic_lock, &enp);
+	if (rc != 0) {
+		sfc_vdpa_err(sva, "nic create failed: %s", rte_strerror(rc));
+		goto fail_nic_create;
+	}
+	sva->nic = enp;
+
+	sfc_vdpa_log_init(sva, "init mcdi");
+	rc = sfc_vdpa_mcdi_init(sva);
+	if (rc != 0) {
+		sfc_vdpa_err(sva, "mcdi init failed: %s", rte_strerror(rc));
+		goto fail_mcdi_init;
+	}
+
+	sfc_vdpa_log_init(sva, "probe nic");
+	rc = sfc_vdpa_nic_probe(sva);
+	if (rc != 0)
+		goto fail_nic_probe;
+
+	sfc_vdpa_log_init(sva, "reset nic");
+	rc = efx_nic_reset(enp);
+	if (rc != 0) {
+		sfc_vdpa_err(sva, "nic reset failed: %s", rte_strerror(rc));
+		goto fail_nic_reset;
+	}
+
+	sfc_vdpa_log_init(sva, "estimate resource limits");
+	rc = sfc_vdpa_estimate_resource_limits(sva);
+	if (rc != 0)
+		goto fail_estimate_rsrc_limits;
+
+	sfc_vdpa_log_init(sva, "done");
+
+	return 0;
+
+fail_estimate_rsrc_limits:
+fail_nic_reset:
+	efx_nic_unprobe(enp);
+
+fail_nic_probe:
+	sfc_vdpa_mcdi_fini(sva);
+
+fail_mcdi_init:
+	sfc_vdpa_log_init(sva, "destroy nic");
+	sva->nic = NULL;
+	efx_nic_destroy(enp);
+
+fail_nic_create:
+	sfc_vdpa_mem_bar_fini(sva);
+
+fail_mem_bar_init:
+fail_family:
+	sfc_vdpa_log_init(sva, "failed: %s", rte_strerror(rc));
+	return rc;
+}
+
+void
+sfc_vdpa_hw_fini(struct sfc_vdpa_adapter *sva)
+{
+	efx_nic_t *enp = sva->nic;
+
+	sfc_vdpa_log_init(sva, "entry");
+
+	sfc_vdpa_log_init(sva, "unprobe nic");
+	efx_nic_unprobe(enp);
+
+	sfc_vdpa_log_init(sva, "mcdi fini");
+	sfc_vdpa_mcdi_fini(sva);
+
+	sfc_vdpa_log_init(sva, "nic fini");
+	efx_nic_fini(enp);
+
+	sfc_vdpa_log_init(sva, "destroy nic");
+	sva->nic = NULL;
+	efx_nic_destroy(enp);
+
+	sfc_vdpa_mem_bar_fini(sva);
+}
diff --git a/drivers/vdpa/sfc/sfc_vdpa_log.h b/drivers/vdpa/sfc/sfc_vdpa_log.h
index 858e5ee..4e7a84f 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_log.h
+++ b/drivers/vdpa/sfc/sfc_vdpa_log.h
@@ -21,6 +21,9 @@
 /** Name prefix for the per-device log type used to report basic information */
 #define SFC_VDPA_LOGTYPE_MAIN_STR	SFC_VDPA_LOGTYPE_PREFIX "main"
 
+/** Device MCDI log type name prefix */
+#define SFC_VDPA_LOGTYPE_MCDI_STR	SFC_VDPA_LOGTYPE_PREFIX "mcdi"
+
 #define SFC_VDPA_LOG_PREFIX_MAX	32
 
 /* Log PMD message, automatically add prefix and \n */
diff --git a/drivers/vdpa/sfc/sfc_vdpa_mcdi.c b/drivers/vdpa/sfc/sfc_vdpa_mcdi.c
new file mode 100644
index 0000000..961d2d3
--- /dev/null
+++ b/drivers/vdpa/sfc/sfc_vdpa_mcdi.c
@@ -0,0 +1,74 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#include "sfc_efx_mcdi.h"
+
+#include "sfc_vdpa.h"
+#include "sfc_vdpa_debug.h"
+#include "sfc_vdpa_log.h"
+
+static sfc_efx_mcdi_dma_alloc_cb sfc_vdpa_mcdi_dma_alloc;
+static int
+sfc_vdpa_mcdi_dma_alloc(void *cookie, const char *name, size_t len,
+			efsys_mem_t *esmp)
+{
+	struct sfc_vdpa_adapter *sva = cookie;
+
+	return sfc_vdpa_dma_alloc(sva, name, len, esmp);
+}
+
+static sfc_efx_mcdi_dma_free_cb sfc_vdpa_mcdi_dma_free;
+static void
+sfc_vdpa_mcdi_dma_free(void *cookie, efsys_mem_t *esmp)
+{
+	struct sfc_vdpa_adapter *sva = cookie;
+
+	sfc_vdpa_dma_free(sva, esmp);
+}
+
+static sfc_efx_mcdi_sched_restart_cb sfc_vdpa_mcdi_sched_restart;
+static void
+sfc_vdpa_mcdi_sched_restart(void *cookie)
+{
+	RTE_SET_USED(cookie);
+}
+
+static sfc_efx_mcdi_mgmt_evq_poll_cb sfc_vdpa_mcdi_mgmt_evq_poll;
+static void
+sfc_vdpa_mcdi_mgmt_evq_poll(void *cookie)
+{
+	RTE_SET_USED(cookie);
+}
+
+static const struct sfc_efx_mcdi_ops sfc_vdpa_mcdi_ops = {
+	.dma_alloc	= sfc_vdpa_mcdi_dma_alloc,
+	.dma_free	= sfc_vdpa_mcdi_dma_free,
+	.sched_restart	= sfc_vdpa_mcdi_sched_restart,
+	.mgmt_evq_poll	= sfc_vdpa_mcdi_mgmt_evq_poll,
+};
+
+int
+sfc_vdpa_mcdi_init(struct sfc_vdpa_adapter *sva)
+{
+	uint32_t logtype;
+
+	sfc_vdpa_log_init(sva, "entry");
+
+	logtype = sfc_vdpa_register_logtype(&(sva->pdev->addr),
+					    SFC_VDPA_LOGTYPE_MCDI_STR,
+					    RTE_LOG_NOTICE);
+
+	return sfc_efx_mcdi_init(&sva->mcdi, logtype,
+				 sva->log_prefix, sva->nic,
+				 &sfc_vdpa_mcdi_ops, sva);
+}
+
+void
+sfc_vdpa_mcdi_fini(struct sfc_vdpa_adapter *sva)
+{
+	sfc_vdpa_log_init(sva, "entry");
+	sfc_efx_mcdi_fini(&sva->mcdi);
+}
diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c
new file mode 100644
index 0000000..71696be
--- /dev/null
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.c
@@ -0,0 +1,129 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#include
+#include
+#include
+#include
+
+#include "sfc_vdpa_ops.h"
+#include "sfc_vdpa.h"
+
+/* Dummy functions for mandatory vDPA ops to pass vDPA device registration.
+ * In subsequent patches these ops would be implemented.
+ */
+static int
+sfc_vdpa_get_queue_num(struct rte_vdpa_device *vdpa_dev, uint32_t *queue_num)
+{
+	RTE_SET_USED(vdpa_dev);
+	RTE_SET_USED(queue_num);
+
+	return -1;
+}
+
+static int
+sfc_vdpa_get_features(struct rte_vdpa_device *vdpa_dev, uint64_t *features)
+{
+	RTE_SET_USED(vdpa_dev);
+	RTE_SET_USED(features);
+
+	return -1;
+}
+
+static int
+sfc_vdpa_get_protocol_features(struct rte_vdpa_device *vdpa_dev,
+			       uint64_t *features)
+{
+	RTE_SET_USED(vdpa_dev);
+	RTE_SET_USED(features);
+
+	return -1;
+}
+
+static int
+sfc_vdpa_dev_config(int vid)
+{
+	RTE_SET_USED(vid);
+
+	return -1;
+}
+
+static int
+sfc_vdpa_dev_close(int vid)
+{
+	RTE_SET_USED(vid);
+
+	return -1;
+}
+
+static int
+sfc_vdpa_set_vring_state(int vid, int vring, int state)
+{
+	RTE_SET_USED(vid);
+	RTE_SET_USED(vring);
+	RTE_SET_USED(state);
+
+	return -1;
+}
+
+static int
+sfc_vdpa_set_features(int vid)
+{
+	RTE_SET_USED(vid);
+
+	return -1;
+}
+
+static struct rte_vdpa_dev_ops sfc_vdpa_ops = {
+	.get_queue_num = sfc_vdpa_get_queue_num,
+	.get_features = sfc_vdpa_get_features,
+	.get_protocol_features = sfc_vdpa_get_protocol_features,
+	.dev_conf = sfc_vdpa_dev_config,
+	.dev_close = sfc_vdpa_dev_close,
+	.set_vring_state = sfc_vdpa_set_vring_state,
+	.set_features = sfc_vdpa_set_features,
+};
+
+struct sfc_vdpa_ops_data *
+sfc_vdpa_device_init(void *dev_handle, enum sfc_vdpa_context context)
+{
+	struct sfc_vdpa_ops_data *ops_data;
+	struct rte_pci_device *pci_dev;
+
+	/* Create vDPA ops context */
+	ops_data = rte_zmalloc("vdpa", sizeof(struct sfc_vdpa_ops_data), 0);
+	if (ops_data == NULL)
+		return NULL;
+
+	ops_data->vdpa_context = context;
+	ops_data->dev_handle = dev_handle;
+
+	pci_dev = sfc_vdpa_adapter_by_dev_handle(dev_handle)->pdev;
+
+	/* Register vDPA Device */
+	sfc_vdpa_log_init(dev_handle, "register vDPA device");
+	ops_data->vdpa_dev =
+		rte_vdpa_register_device(&pci_dev->device, &sfc_vdpa_ops);
+	if (ops_data->vdpa_dev == NULL) {
+		sfc_vdpa_err(dev_handle, "vDPA device registration failed");
+		goto fail_register_device;
+	}
+
+	ops_data->state = SFC_VDPA_STATE_INITIALIZED;
+
+	return ops_data;
+
+fail_register_device:
+	rte_free(ops_data);
+	return NULL;
+}
+
+void
+sfc_vdpa_device_fini(struct sfc_vdpa_ops_data *ops_data)
+{
+	rte_vdpa_unregister_device(ops_data->vdpa_dev);
+
+	rte_free(ops_data);
+}
diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.h b/drivers/vdpa/sfc/sfc_vdpa_ops.h
new file mode 100644
index 0000000..817b302
--- /dev/null
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.h
@@ -0,0 +1,36 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#ifndef _SFC_VDPA_OPS_H
+#define _SFC_VDPA_OPS_H
+
+#include
+
+#define SFC_VDPA_MAX_QUEUE_PAIRS	1
+
+enum sfc_vdpa_context {
+	SFC_VDPA_AS_PF = 0,
+	SFC_VDPA_AS_VF
+};
+
+enum sfc_vdpa_state {
+	SFC_VDPA_STATE_UNINITIALIZED = 0,
+	SFC_VDPA_STATE_INITIALIZED,
+	SFC_VDPA_STATE_NSTATES
+};
+
+struct sfc_vdpa_ops_data {
+	void			*dev_handle;
+	struct rte_vdpa_device	*vdpa_dev;
+	enum sfc_vdpa_context	vdpa_context;
+	enum sfc_vdpa_state	state;
+};
+
+struct sfc_vdpa_ops_data *
+sfc_vdpa_device_init(void *adapter, enum sfc_vdpa_context context);
+void
+sfc_vdpa_device_fini(struct sfc_vdpa_ops_data *ops_data);
+
+#endif /* _SFC_VDPA_OPS_H */

From patchwork Fri Oct 29 14:46:38 2021
X-Patchwork-Submitter: Vijay Srivastava
X-Patchwork-Id: 103246
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Vijay Srivastava
To: dev@dpdk.org
CC: maxime.coquelin@redhat.com, chenbo.xia@intel.com, andrew.rybchenko@oktetlabs.ru, Vijay Kumar Srivastava
Date: Fri, 29 Oct 2021 20:16:38 +0530
Message-ID: <20211029144645.30295-4-vsrivast@xilinx.com>
In-Reply-To: <20211029144645.30295-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com> <20211029144645.30295-1-vsrivast@xilinx.com>
Subject: [dpdk-dev] [PATCH v3 03/10] vdpa/sfc: add support to get device and protocol features

From: Vijay Kumar Srivastava

Implement vDPA ops get_feature and get_protocol_features.
This patch retrieves device supported features and enables protocol features.

Signed-off-by: Vijay Kumar Srivastava
Acked-by: Andrew Rybchenko
Reviewed-by: Maxime Coquelin
Reviewed-by: Chenbo Xia
---
 doc/guides/vdpadevs/features/sfc.ini |  10 ++++
 drivers/common/sfc_efx/efsys.h       |   2 +-
 drivers/common/sfc_efx/version.map   |  10 ++++
 drivers/vdpa/sfc/sfc_vdpa.c          |  20 ++++++++
 drivers/vdpa/sfc/sfc_vdpa.h          |   2 +
 drivers/vdpa/sfc/sfc_vdpa_hw.c       |  13 ++++++
 drivers/vdpa/sfc/sfc_vdpa_ops.c      |  91 ++++++++++++++++++++++++++++----
 drivers/vdpa/sfc/sfc_vdpa_ops.h      |   3 ++
 8 files changed, 142 insertions(+), 9 deletions(-)

diff --git a/doc/guides/vdpadevs/features/sfc.ini b/doc/guides/vdpadevs/features/sfc.ini
index 71b6158..700d061 100644
--- a/doc/guides/vdpadevs/features/sfc.ini
+++ b/doc/guides/vdpadevs/features/sfc.ini
@@ -4,6 +4,16 @@
 ; Refer to default.ini for the full list of available driver features.
 ;
 [Features]
+csum                 = Y
+guest csum           = Y
+host tso4            = Y
+host tso6            = Y
+version 1            = Y
+mrg rxbuf            = Y
+any layout           = Y
+in_order             = Y
+proto host notifier  = Y
+IOMMU platform       = Y
 Linux                = Y
 x86-64               = Y
 Usage doc            = Y
diff --git a/drivers/common/sfc_efx/efsys.h b/drivers/common/sfc_efx/efsys.h
index d133d61..37ec6b9 100644
--- a/drivers/common/sfc_efx/efsys.h
+++ b/drivers/common/sfc_efx/efsys.h
@@ -187,7 +187,7 @@
 #define EFSYS_OPT_MAE 1
 
-#define EFSYS_OPT_VIRTIO 0
+#define EFSYS_OPT_VIRTIO 1
 
 /* ID */
diff --git a/drivers/common/sfc_efx/version.map b/drivers/common/sfc_efx/version.map
index 642a62e..ec86220 100644
--- a/drivers/common/sfc_efx/version.map
+++ b/drivers/common/sfc_efx/version.map
@@ -247,6 +247,16 @@ INTERNAL {
 	efx_txq_nbufs;
 	efx_txq_size;
 
+	efx_virtio_fini;
+	efx_virtio_get_doorbell_offset;
+	efx_virtio_get_features;
+	efx_virtio_init;
+	efx_virtio_qcreate;
+	efx_virtio_qdestroy;
+	efx_virtio_qstart;
+	efx_virtio_qstop;
+	efx_virtio_verify_features;
+
 	sfc_efx_dev_class_get;
 	sfc_efx_family;
diff --git a/drivers/vdpa/sfc/sfc_vdpa.c b/drivers/vdpa/sfc/sfc_vdpa.c
index b7eca56..ccbd243 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.c
+++ b/drivers/vdpa/sfc/sfc_vdpa.c
@@ -43,6 +43,26 @@ struct sfc_vdpa_adapter *
 	return found ? sva : NULL;
 }
 
+struct sfc_vdpa_ops_data *
+sfc_vdpa_get_data_by_dev(struct rte_vdpa_device *vdpa_dev)
+{
+	bool found = false;
+	struct sfc_vdpa_adapter *sva;
+
+	pthread_mutex_lock(&sfc_vdpa_adapter_list_lock);
+
+	TAILQ_FOREACH(sva, &sfc_vdpa_adapter_list, next) {
+		if (vdpa_dev == sva->ops_data->vdpa_dev) {
+			found = true;
+			break;
+		}
+	}
+
+	pthread_mutex_unlock(&sfc_vdpa_adapter_list_lock);
+
+	return found ? sva->ops_data : NULL;
+}
+
 static int
 sfc_vdpa_vfio_setup(struct sfc_vdpa_adapter *sva)
 {
diff --git a/drivers/vdpa/sfc/sfc_vdpa.h b/drivers/vdpa/sfc/sfc_vdpa.h
index 046f25d..c10c3d3 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.h
+++ b/drivers/vdpa/sfc/sfc_vdpa.h
@@ -60,6 +60,8 @@ struct sfc_vdpa_adapter {
 struct sfc_vdpa_adapter *
 sfc_vdpa_get_adapter_by_dev(struct rte_pci_device *pdev);
+struct sfc_vdpa_ops_data *
+sfc_vdpa_get_data_by_dev(struct rte_vdpa_device *vdpa_dev);
 
 int
 sfc_vdpa_hw_init(struct sfc_vdpa_adapter *sva);
diff --git a/drivers/vdpa/sfc/sfc_vdpa_hw.c b/drivers/vdpa/sfc/sfc_vdpa_hw.c
index 7c256ff..7a67bd8 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_hw.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_hw.c
@@ -278,10 +278,20 @@
 	if (rc != 0)
 		goto fail_estimate_rsrc_limits;
 
+	sfc_vdpa_log_init(sva, "init virtio");
+	rc = efx_virtio_init(enp);
+	if (rc != 0) {
+		sfc_vdpa_err(sva, "virtio init failed: %s", rte_strerror(rc));
+		goto fail_virtio_init;
+	}
+
 	sfc_vdpa_log_init(sva, "done");
 
 	return 0;
 
+fail_virtio_init:
+	efx_nic_fini(enp);
+
 fail_estimate_rsrc_limits:
 fail_nic_reset:
 	efx_nic_unprobe(enp);
@@ -310,6 +320,9 @@
 	sfc_vdpa_log_init(sva, "entry");
 
+	sfc_vdpa_log_init(sva, "virtio fini");
+	efx_virtio_fini(enp);
+
 	sfc_vdpa_log_init(sva, "unprobe nic");
 	efx_nic_unprobe(enp);
diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c
index 71696be..5750944 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.c
@@ -3,17 +3,31 @@
  * Copyright(c) 2020-2021 Xilinx, Inc.
  */
 
+#include
 #include
 #include
 #include
 #include
 
+#include "efx.h"
 #include "sfc_vdpa_ops.h"
 #include "sfc_vdpa.h"
 
-/* Dummy functions for mandatory vDPA ops to pass vDPA device registration.
- * In subsequent patches these ops would be implemented.
+/* These protocol features are needed to enable notifier ctrl */
+#define SFC_VDPA_PROTOCOL_FEATURES \
+		((1ULL << VHOST_USER_PROTOCOL_F_REPLY_ACK) | \
+		 (1ULL << VHOST_USER_PROTOCOL_F_SLAVE_REQ) | \
+		 (1ULL << VHOST_USER_PROTOCOL_F_SLAVE_SEND_FD) | \
+		 (1ULL << VHOST_USER_PROTOCOL_F_HOST_NOTIFIER) | \
+		 (1ULL << VHOST_USER_PROTOCOL_F_LOG_SHMFD))
+
+/*
+ * Set of features which are enabled by default.
+ * Protocol feature bit is needed to enable notification notifier ctrl.
 */
+#define SFC_VDPA_DEFAULT_FEATURES \
+		(1ULL << VHOST_USER_F_PROTOCOL_FEATURES)
+
 static int
 sfc_vdpa_get_queue_num(struct rte_vdpa_device *vdpa_dev, uint32_t *queue_num)
 {
@@ -24,22 +38,67 @@
 }
 
 static int
+sfc_vdpa_get_device_features(struct sfc_vdpa_ops_data *ops_data)
+{
+	int rc;
+	uint64_t dev_features;
+	efx_nic_t *nic;
+
+	nic = sfc_vdpa_adapter_by_dev_handle(ops_data->dev_handle)->nic;
+
+	rc = efx_virtio_get_features(nic, EFX_VIRTIO_DEVICE_TYPE_NET,
+				     &dev_features);
+	if (rc != 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "could not read device feature: %s",
+			     rte_strerror(rc));
+		return rc;
+	}
+
+	ops_data->dev_features = dev_features;
+
+	sfc_vdpa_info(ops_data->dev_handle,
+		      "device supported virtio features : 0x%" PRIx64,
+		      ops_data->dev_features);
+
+	return 0;
+}
+
+static int
 sfc_vdpa_get_features(struct rte_vdpa_device *vdpa_dev, uint64_t *features)
 {
-	RTE_SET_USED(vdpa_dev);
-	RTE_SET_USED(features);
+	struct sfc_vdpa_ops_data *ops_data;
 
-	return -1;
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL)
+		return -1;
+
+	*features = ops_data->drv_features;
+
+	sfc_vdpa_info(ops_data->dev_handle,
+		      "vDPA ops get_feature :: features : 0x%" PRIx64,
+		      *features);
+
+	return 0;
 }
 
 static int
 sfc_vdpa_get_protocol_features(struct rte_vdpa_device *vdpa_dev,
 			       uint64_t *features)
 {
-	RTE_SET_USED(vdpa_dev);
-	RTE_SET_USED(features);
+	struct sfc_vdpa_ops_data *ops_data;
 
-	return -1;
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL)
+		return -1;
+
+	*features = SFC_VDPA_PROTOCOL_FEATURES;
+
+	sfc_vdpa_info(ops_data->dev_handle,
+		      "vDPA ops get_protocol_feature :: features : 0x%" PRIx64,
+		      *features);
+
+	return 0;
 }
 
 static int
@@ -91,6 +150,7 @@ struct sfc_vdpa_ops_data *
 {
 	struct sfc_vdpa_ops_data *ops_data;
 	struct rte_pci_device *pci_dev;
+	int rc;
 
 	/* Create vDPA ops context */
 	ops_data = rte_zmalloc("vdpa", sizeof(struct sfc_vdpa_ops_data), 0);
@@ -111,10 +171,25 @@ struct sfc_vdpa_ops_data *
 		goto fail_register_device;
 	}
 
+	/* Read supported device features */
+	sfc_vdpa_log_init(dev_handle, "get device feature");
+	rc = sfc_vdpa_get_device_features(ops_data);
+	if (rc != 0)
+		goto fail_get_dev_feature;
+
+	/* Driver features are superset of device supported feature
+	 * and any additional features supported by the driver.
+	 */
+	ops_data->drv_features =
+		ops_data->dev_features | SFC_VDPA_DEFAULT_FEATURES;
+
 	ops_data->state = SFC_VDPA_STATE_INITIALIZED;
 
 	return ops_data;
 
+fail_get_dev_feature:
+	rte_vdpa_unregister_device(ops_data->vdpa_dev);
+
 fail_register_device:
 	rte_free(ops_data);
 	return NULL;
diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.h b/drivers/vdpa/sfc/sfc_vdpa_ops.h
index 817b302..21cbb73 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.h
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.h
@@ -26,6 +26,9 @@ struct sfc_vdpa_ops_data {
 	struct rte_vdpa_device *vdpa_dev;
 	enum sfc_vdpa_context vdpa_context;
 	enum sfc_vdpa_state state;
+
+	uint64_t dev_features;
+	uint64_t drv_features;
 };
 
 struct sfc_vdpa_ops_data *

From patchwork Fri Oct 29 14:46:39 2021
X-Patchwork-Submitter: Vijay Srivastava
X-Patchwork-Id: 103248
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Vijay Srivastava
Date: Fri, 29 Oct 2021 20:16:39 +0530
Message-ID: <20211029144645.30295-5-vsrivast@xilinx.com>
In-Reply-To: <20211029144645.30295-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com> <20211029144645.30295-1-vsrivast@xilinx.com>
Subject: [dpdk-dev] [PATCH v3 04/10] vdpa/sfc: get device supported max queue count

From: Vijay Kumar Srivastava

Implement vDPA ops get_queue_num to get the maximum number of queues
supported by the device.
Signed-off-by: Vijay Kumar Srivastava
Acked-by: Andrew Rybchenko
Reviewed-by: Maxime Coquelin
Reviewed-by: Chenbo Xia
---
 drivers/vdpa/sfc/sfc_vdpa_ops.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c
index 5750944..6c702e1 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.c
@@ -31,10 +31,20 @@
 static int
 sfc_vdpa_get_queue_num(struct rte_vdpa_device *vdpa_dev, uint32_t *queue_num)
 {
-	RTE_SET_USED(vdpa_dev);
-	RTE_SET_USED(queue_num);
+	struct sfc_vdpa_ops_data *ops_data;
+	void *dev;

-	return -1;
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL)
+		return -1;
+
+	dev = ops_data->dev_handle;
+	*queue_num = sfc_vdpa_adapter_by_dev_handle(dev)->max_queue_count;
+
+	sfc_vdpa_info(dev, "vDPA ops get_queue_num :: supported queue num : %d",
+		      *queue_num);
+
+	return 0;
 }

 static int

From patchwork Fri Oct 29 14:46:40 2021
X-Patchwork-Submitter: Vijay Srivastava
X-Patchwork-Id: 103247
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Vijay Srivastava
Date: Fri, 29 Oct 2021 20:16:40 +0530
Message-ID: <20211029144645.30295-6-vsrivast@xilinx.com>
In-Reply-To: <20211029144645.30295-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com> <20211029144645.30295-1-vsrivast@xilinx.com>
Subject: [dpdk-dev] [PATCH v3 05/10] vdpa/sfc: add support to get VFIO device fd

From: Vijay Kumar Srivastava

Implement vDPA ops get_vfio_device_fd to get the VFIO device fd.

Signed-off-by: Vijay Kumar Srivastava
Acked-by: Andrew Rybchenko
Reviewed-by: Maxime Coquelin
Reviewed-by: Chenbo Xia
---
 drivers/vdpa/sfc/sfc_vdpa_ops.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c
index 6c702e1..5253adb 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.c
@@ -145,6 +145,29 @@
 	return -1;
 }

+static int
+sfc_vdpa_get_vfio_device_fd(int vid)
+{
+	struct rte_vdpa_device *vdpa_dev;
+	struct sfc_vdpa_ops_data *ops_data;
+	int vfio_dev_fd;
+	void *dev;
+
+	vdpa_dev = rte_vhost_get_vdpa_device(vid);
+
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL)
+		return -1;
+
+	dev = ops_data->dev_handle;
+	vfio_dev_fd = sfc_vdpa_adapter_by_dev_handle(dev)->vfio_dev_fd;
+
+	sfc_vdpa_info(dev, "vDPA ops get_vfio_device_fd :: vfio fd : %d",
+		      vfio_dev_fd);
+
+	return vfio_dev_fd;
+}
+
 static struct rte_vdpa_dev_ops sfc_vdpa_ops = {
 	.get_queue_num = sfc_vdpa_get_queue_num,
 	.get_features = sfc_vdpa_get_features,
@@ -153,6 +176,7 @@
 	.dev_close = sfc_vdpa_dev_close,
 	.set_vring_state = sfc_vdpa_set_vring_state,
 	.set_features = sfc_vdpa_set_features,
+	.get_vfio_device_fd = sfc_vdpa_get_vfio_device_fd,
 };

 struct sfc_vdpa_ops_data *

From patchwork Fri Oct 29 14:46:41 2021
X-Patchwork-Submitter: Vijay Srivastava
X-Patchwork-Id: 103250
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Vijay Srivastava
Date: Fri, 29 Oct 2021 20:16:41 +0530
Message-ID: <20211029144645.30295-7-vsrivast@xilinx.com>
In-Reply-To: <20211029144645.30295-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com> <20211029144645.30295-1-vsrivast@xilinx.com>
Subject: [dpdk-dev] [PATCH v3 06/10] vdpa/sfc: add support for dev conf and dev close ops

From: Vijay Kumar Srivastava

Implement vDPA ops dev_conf and dev_close for DMA mapping, interrupt and
virtqueue configurations.

Signed-off-by: Vijay Kumar Srivastava
Acked-by: Andrew Rybchenko
---
v2:
* Removed redundant null check while calling free().
* Added error handling for rte_vhost_get_vhost_vring().

 drivers/vdpa/sfc/sfc_vdpa.c     |   6 +
 drivers/vdpa/sfc/sfc_vdpa.h     |  43 ++++
 drivers/vdpa/sfc/sfc_vdpa_hw.c  |  69 ++++++
 drivers/vdpa/sfc/sfc_vdpa_ops.c | 530 ++++++++++++++++++++++++++++++++++++++--
 drivers/vdpa/sfc/sfc_vdpa_ops.h |  28 +++
 5 files changed, 656 insertions(+), 20 deletions(-)

diff --git a/drivers/vdpa/sfc/sfc_vdpa.c b/drivers/vdpa/sfc/sfc_vdpa.c
index ccbd243..b3c82e5 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.c
+++ b/drivers/vdpa/sfc/sfc_vdpa.c
@@ -246,6 +246,8 @@ struct sfc_vdpa_ops_data *
 	sfc_vdpa_log_init(sva, "entry");

+	sfc_vdpa_adapter_lock_init(sva);
+
 	sfc_vdpa_log_init(sva, "vfio init");
 	if (sfc_vdpa_vfio_setup(sva) < 0) {
 		sfc_vdpa_err(sva, "failed to setup device %s", pci_dev->name);
@@ -280,6 +282,8 @@ struct sfc_vdpa_ops_data *
 	sfc_vdpa_vfio_teardown(sva);

 fail_vfio_setup:
+	sfc_vdpa_adapter_lock_fini(sva);
+
 fail_set_log_prefix:
 	rte_free(sva);

@@ -311,6 +315,8 @@ struct sfc_vdpa_ops_data *
 	sfc_vdpa_vfio_teardown(sva);

+	sfc_vdpa_adapter_lock_fini(sva);
+
 	rte_free(sva);

 	return 0;
diff --git a/drivers/vdpa/sfc/sfc_vdpa.h b/drivers/vdpa/sfc/sfc_vdpa.h
index c10c3d3..1bf96e7 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.h
+++ b/drivers/vdpa/sfc/sfc_vdpa.h
@@ -80,10 +80,53 @@ struct sfc_vdpa_ops_data *
 void
 sfc_vdpa_dma_free(struct sfc_vdpa_adapter *sva, efsys_mem_t *esmp);

+int
+sfc_vdpa_dma_map(struct sfc_vdpa_ops_data *vdpa_data, bool do_map);
+
 static inline struct
 sfc_vdpa_adapter *
 sfc_vdpa_adapter_by_dev_handle(void *dev_handle)
 {
 	return (struct sfc_vdpa_adapter *)dev_handle;
 }

+/*
+ * Add wrapper functions to acquire/release lock to be able to remove or
+ * change the lock in one place.
+ */
+static inline void
+sfc_vdpa_adapter_lock_init(struct sfc_vdpa_adapter *sva)
+{
+	rte_spinlock_init(&sva->lock);
+}
+
+static inline int
+sfc_vdpa_adapter_is_locked(struct sfc_vdpa_adapter *sva)
+{
+	return rte_spinlock_is_locked(&sva->lock);
+}
+
+static inline void
+sfc_vdpa_adapter_lock(struct sfc_vdpa_adapter *sva)
+{
+	rte_spinlock_lock(&sva->lock);
+}
+
+static inline int
+sfc_vdpa_adapter_trylock(struct sfc_vdpa_adapter *sva)
+{
+	return rte_spinlock_trylock(&sva->lock);
+}
+
+static inline void
+sfc_vdpa_adapter_unlock(struct sfc_vdpa_adapter *sva)
+{
+	rte_spinlock_unlock(&sva->lock);
+}
+
+static inline void
+sfc_vdpa_adapter_lock_fini(__rte_unused struct sfc_vdpa_adapter *sva)
+{
+	/* Just for symmetry of the API */
+}
+
 #endif /* _SFC_VDPA_H */
diff --git a/drivers/vdpa/sfc/sfc_vdpa_hw.c b/drivers/vdpa/sfc/sfc_vdpa_hw.c
index 7a67bd8..b473708 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_hw.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_hw.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include

 #include "efx.h"
 #include "sfc_vdpa.h"
@@ -109,6 +110,74 @@
 	memset(esmp, 0, sizeof(*esmp));
 }

+int
+sfc_vdpa_dma_map(struct sfc_vdpa_ops_data *ops_data, bool do_map)
+{
+	uint32_t i, j;
+	int rc;
+	struct rte_vhost_memory *vhost_mem = NULL;
+	struct rte_vhost_mem_region *mem_reg = NULL;
+	int vfio_container_fd;
+	void *dev;
+
+	dev = ops_data->dev_handle;
+	vfio_container_fd =
+		sfc_vdpa_adapter_by_dev_handle(dev)->vfio_container_fd;
+
+	rc = rte_vhost_get_mem_table(ops_data->vid, &vhost_mem);
+	if (rc < 0) {
+		sfc_vdpa_err(dev,
+			     "failed to get VM memory layout");
+		goto error;
+	}
+
+	for (i = 0; i < vhost_mem->nregions; i++) {
+		mem_reg = &vhost_mem->regions[i];
+
+		if (do_map) {
+			rc = rte_vfio_container_dma_map(vfio_container_fd,
+						mem_reg->host_user_addr,
+						mem_reg->guest_phys_addr,
+						mem_reg->size);
+			if (rc < 0) {
+				sfc_vdpa_err(dev,
+					     "DMA map failed : %s",
+					     rte_strerror(rte_errno));
+				goto failed_vfio_dma_map;
+			}
+		} else {
+			rc = rte_vfio_container_dma_unmap(vfio_container_fd,
+						mem_reg->host_user_addr,
+						mem_reg->guest_phys_addr,
+						mem_reg->size);
+			if (rc < 0) {
+				sfc_vdpa_err(dev,
+					     "DMA unmap failed : %s",
+					     rte_strerror(rte_errno));
+				goto error;
+			}
+		}
+	}
+
+	free(vhost_mem);
+
+	return 0;
+
+failed_vfio_dma_map:
+	for (j = 0; j < i; j++) {
+		mem_reg = &vhost_mem->regions[j];
+		rc = rte_vfio_container_dma_unmap(vfio_container_fd,
+						mem_reg->host_user_addr,
+						mem_reg->guest_phys_addr,
+						mem_reg->size);
+	}
+
+error:
+	free(vhost_mem);
+
+	return rc;
+}
+
 static int
 sfc_vdpa_mem_bar_init(struct sfc_vdpa_adapter *sva,
		      const efx_bar_region_t *mem_ebrp)
diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c
index 5253adb..de1c81a 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.c
@@ -3,10 +3,13 @@
  * Copyright(c) 2020-2021 Xilinx, Inc.
  */

+#include
+
 #include
 #include
 #include
 #include
+#include
 #include

 #include "efx.h"
@@ -28,24 +31,12 @@
 #define SFC_VDPA_DEFAULT_FEATURES \
	(1ULL << VHOST_USER_F_PROTOCOL_FEATURES)

-static int
-sfc_vdpa_get_queue_num(struct rte_vdpa_device *vdpa_dev, uint32_t *queue_num)
-{
-	struct sfc_vdpa_ops_data *ops_data;
-	void *dev;
-
-	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
-	if (ops_data == NULL)
-		return -1;
-
-	dev = ops_data->dev_handle;
-	*queue_num = sfc_vdpa_adapter_by_dev_handle(dev)->max_queue_count;
+#define SFC_VDPA_MSIX_IRQ_SET_BUF_LEN \
+	(sizeof(struct vfio_irq_set) + \
+	sizeof(int) * (SFC_VDPA_MAX_QUEUE_PAIRS * 2 + 1))

-	sfc_vdpa_info(dev, "vDPA ops get_queue_num :: supported queue num : %d",
-		      *queue_num);
-
-	return 0;
-}
+/* It will be used for target VF when calling function is not PF */
+#define SFC_VDPA_VF_NULL 0xFFFF

 static int
 sfc_vdpa_get_device_features(struct sfc_vdpa_ops_data *ops_data)
@@ -74,6 +65,441 @@
 	return 0;
 }

+static uint64_t
+hva_to_gpa(int vid, uint64_t hva)
+{
+	struct rte_vhost_memory *vhost_mem = NULL;
+	struct rte_vhost_mem_region *mem_reg = NULL;
+	uint32_t i;
+	uint64_t gpa = 0;
+
+	if (rte_vhost_get_mem_table(vid, &vhost_mem) < 0)
+		goto error;
+
+	for (i = 0; i < vhost_mem->nregions; i++) {
+		mem_reg = &vhost_mem->regions[i];
+
+		if (hva >= mem_reg->host_user_addr &&
+		    hva < mem_reg->host_user_addr + mem_reg->size) {
+			gpa = (hva - mem_reg->host_user_addr) +
+				mem_reg->guest_phys_addr;
+			break;
+		}
+	}
+
+error:
+	free(vhost_mem);
+	return gpa;
+}
+
+static int
+sfc_vdpa_enable_vfio_intr(struct sfc_vdpa_ops_data *ops_data)
+{
+	int rc;
+	int *irq_fd_ptr;
+	int vfio_dev_fd;
+	uint32_t i, num_vring;
+	struct rte_vhost_vring vring;
+	struct vfio_irq_set *irq_set;
+	struct rte_pci_device *pci_dev;
+	char irq_set_buf[SFC_VDPA_MSIX_IRQ_SET_BUF_LEN];
+	void *dev;
+
+	num_vring = rte_vhost_get_vring_num(ops_data->vid);
+	dev = ops_data->dev_handle;
+	vfio_dev_fd = sfc_vdpa_adapter_by_dev_handle(dev)->vfio_dev_fd;
+	pci_dev
= sfc_vdpa_adapter_by_dev_handle(dev)->pdev; + + irq_set = (struct vfio_irq_set *)irq_set_buf; + irq_set->argsz = sizeof(irq_set_buf); + irq_set->count = num_vring + 1; + irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD | + VFIO_IRQ_SET_ACTION_TRIGGER; + irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX; + irq_set->start = 0; + irq_fd_ptr = (int *)&irq_set->data; + irq_fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] = + rte_intr_fd_get(pci_dev->intr_handle); + + for (i = 0; i < num_vring; i++) { + rc = rte_vhost_get_vhost_vring(ops_data->vid, i, &vring); + if (rc) + return -1; + + irq_fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] = vring.callfd; + } + + rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); + if (rc) { + sfc_vdpa_err(ops_data->dev_handle, + "error enabling MSI-X interrupts: %s", + strerror(errno)); + return -1; + } + + return 0; +} + +static int +sfc_vdpa_disable_vfio_intr(struct sfc_vdpa_ops_data *ops_data) +{ + int rc; + int vfio_dev_fd; + struct vfio_irq_set *irq_set; + char irq_set_buf[SFC_VDPA_MSIX_IRQ_SET_BUF_LEN]; + void *dev; + + dev = ops_data->dev_handle; + vfio_dev_fd = sfc_vdpa_adapter_by_dev_handle(dev)->vfio_dev_fd; + + irq_set = (struct vfio_irq_set *)irq_set_buf; + irq_set->argsz = sizeof(irq_set_buf); + irq_set->count = 0; + irq_set->flags = VFIO_IRQ_SET_DATA_NONE | VFIO_IRQ_SET_ACTION_TRIGGER; + irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX; + irq_set->start = 0; + + rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set); + if (rc) { + sfc_vdpa_err(ops_data->dev_handle, + "error disabling MSI-X interrupts: %s", + strerror(errno)); + return -1; + } + + return 0; +} + +static int +sfc_vdpa_get_vring_info(struct sfc_vdpa_ops_data *ops_data, + int vq_num, struct sfc_vdpa_vring_info *vring) +{ + int rc; + uint64_t gpa; + struct rte_vhost_vring vq; + + rc = rte_vhost_get_vhost_vring(ops_data->vid, vq_num, &vq); + if (rc < 0) { + sfc_vdpa_err(ops_data->dev_handle, + "get vhost vring failed: %s", rte_strerror(rc)); + return rc; + } + + gpa = hva_to_gpa(ops_data->vid, 
(uint64_t)(uintptr_t)vq.desc); + if (gpa == 0) { + sfc_vdpa_err(ops_data->dev_handle, + "fail to get GPA for descriptor ring."); + goto fail_vring_map; + } + vring->desc = gpa; + + gpa = hva_to_gpa(ops_data->vid, (uint64_t)(uintptr_t)vq.avail); + if (gpa == 0) { + sfc_vdpa_err(ops_data->dev_handle, + "fail to get GPA for available ring."); + goto fail_vring_map; + } + vring->avail = gpa; + + gpa = hva_to_gpa(ops_data->vid, (uint64_t)(uintptr_t)vq.used); + if (gpa == 0) { + sfc_vdpa_err(ops_data->dev_handle, + "fail to get GPA for used ring."); + goto fail_vring_map; + } + vring->used = gpa; + + vring->size = vq.size; + + rc = rte_vhost_get_vring_base(ops_data->vid, vq_num, + &vring->last_avail_idx, + &vring->last_used_idx); + + return rc; + +fail_vring_map: + return -1; +} + +static int +sfc_vdpa_virtq_start(struct sfc_vdpa_ops_data *ops_data, int vq_num) +{ + int rc; + efx_virtio_vq_t *vq; + struct sfc_vdpa_vring_info vring; + efx_virtio_vq_cfg_t vq_cfg; + efx_virtio_vq_dyncfg_t vq_dyncfg; + + vq = ops_data->vq_cxt[vq_num].vq; + if (vq == NULL) + return -1; + + rc = sfc_vdpa_get_vring_info(ops_data, vq_num, &vring); + if (rc < 0) { + sfc_vdpa_err(ops_data->dev_handle, + "get vring info failed: %s", rte_strerror(rc)); + goto fail_vring_info; + } + + vq_cfg.evvc_target_vf = SFC_VDPA_VF_NULL; + + /* even virtqueue for RX and odd for TX */ + if (vq_num % 2) { + vq_cfg.evvc_type = EFX_VIRTIO_VQ_TYPE_NET_TXQ; + sfc_vdpa_info(ops_data->dev_handle, + "configure virtqueue # %d (TXQ)", vq_num); + } else { + vq_cfg.evvc_type = EFX_VIRTIO_VQ_TYPE_NET_RXQ; + sfc_vdpa_info(ops_data->dev_handle, + "configure virtqueue # %d (RXQ)", vq_num); + } + + vq_cfg.evvc_vq_num = vq_num; + vq_cfg.evvc_desc_tbl_addr = vring.desc; + vq_cfg.evvc_avail_ring_addr = vring.avail; + vq_cfg.evvc_used_ring_addr = vring.used; + vq_cfg.evvc_vq_size = vring.size; + + vq_dyncfg.evvd_vq_pidx = vring.last_used_idx; + vq_dyncfg.evvd_vq_cidx = vring.last_avail_idx; + + /* MSI-X vector is function-relative */ 
+ vq_cfg.evvc_msix_vector = RTE_INTR_VEC_RXTX_OFFSET + vq_num; + if (ops_data->vdpa_context == SFC_VDPA_AS_VF) + vq_cfg.evvc_pas_id = 0; + vq_cfg.evcc_features = ops_data->dev_features & + ops_data->req_features; + + /* Start virtqueue */ + rc = efx_virtio_qstart(vq, &vq_cfg, &vq_dyncfg); + if (rc != 0) { + /* destroy virtqueue */ + sfc_vdpa_err(ops_data->dev_handle, + "virtqueue start failed: %s", + rte_strerror(rc)); + efx_virtio_qdestroy(vq); + goto fail_virtio_qstart; + } + + sfc_vdpa_info(ops_data->dev_handle, + "virtqueue started successfully for vq_num %d", vq_num); + + ops_data->vq_cxt[vq_num].enable = B_TRUE; + + return rc; + +fail_virtio_qstart: +fail_vring_info: + return rc; +} + +static int +sfc_vdpa_virtq_stop(struct sfc_vdpa_ops_data *ops_data, int vq_num) +{ + int rc; + efx_virtio_vq_dyncfg_t vq_idx; + efx_virtio_vq_t *vq; + + if (ops_data->vq_cxt[vq_num].enable != B_TRUE) + return -1; + + vq = ops_data->vq_cxt[vq_num].vq; + if (vq == NULL) + return -1; + + /* stop the vq */ + rc = efx_virtio_qstop(vq, &vq_idx); + if (rc == 0) { + ops_data->vq_cxt[vq_num].cidx = vq_idx.evvd_vq_cidx; + ops_data->vq_cxt[vq_num].pidx = vq_idx.evvd_vq_pidx; + } + ops_data->vq_cxt[vq_num].enable = B_FALSE; + + return rc; +} + +static int +sfc_vdpa_configure(struct sfc_vdpa_ops_data *ops_data) +{ + int rc, i; + int nr_vring; + int max_vring_cnt; + efx_virtio_vq_t *vq; + efx_nic_t *nic; + void *dev; + + dev = ops_data->dev_handle; + nic = sfc_vdpa_adapter_by_dev_handle(dev)->nic; + + SFC_EFX_ASSERT(ops_data->state == SFC_VDPA_STATE_INITIALIZED); + + ops_data->state = SFC_VDPA_STATE_CONFIGURING; + + nr_vring = rte_vhost_get_vring_num(ops_data->vid); + max_vring_cnt = + (sfc_vdpa_adapter_by_dev_handle(dev)->max_queue_count * 2); + + /* number of vring should not be more than supported max vq count */ + if (nr_vring > max_vring_cnt) { + sfc_vdpa_err(dev, + "nr_vring (%d) is > max vring count (%d)", + nr_vring, max_vring_cnt); + goto fail_vring_num; + } + + rc = 
sfc_vdpa_dma_map(ops_data, true); + if (rc) { + sfc_vdpa_err(dev, + "DMA map failed: %s", rte_strerror(rc)); + goto fail_dma_map; + } + + for (i = 0; i < nr_vring; i++) { + rc = efx_virtio_qcreate(nic, &vq); + if ((rc != 0) || (vq == NULL)) { + sfc_vdpa_err(dev, + "virtqueue create failed: %s", + rte_strerror(rc)); + goto fail_vq_create; + } + + /* store created virtqueue context */ + ops_data->vq_cxt[i].vq = vq; + } + + ops_data->vq_count = i; + + ops_data->state = SFC_VDPA_STATE_CONFIGURED; + + return 0; + +fail_vq_create: + sfc_vdpa_dma_map(ops_data, false); + +fail_dma_map: +fail_vring_num: + ops_data->state = SFC_VDPA_STATE_INITIALIZED; + + return -1; +} + +static void +sfc_vdpa_close(struct sfc_vdpa_ops_data *ops_data) +{ + int i; + + if (ops_data->state != SFC_VDPA_STATE_CONFIGURED) + return; + + ops_data->state = SFC_VDPA_STATE_CLOSING; + + for (i = 0; i < ops_data->vq_count; i++) { + if (ops_data->vq_cxt[i].vq == NULL) + continue; + + efx_virtio_qdestroy(ops_data->vq_cxt[i].vq); + } + + sfc_vdpa_dma_map(ops_data, false); + + ops_data->state = SFC_VDPA_STATE_INITIALIZED; +} + +static void +sfc_vdpa_stop(struct sfc_vdpa_ops_data *ops_data) +{ + int i; + int rc; + + if (ops_data->state != SFC_VDPA_STATE_STARTED) + return; + + ops_data->state = SFC_VDPA_STATE_STOPPING; + + for (i = 0; i < ops_data->vq_count; i++) { + rc = sfc_vdpa_virtq_stop(ops_data, i); + if (rc != 0) + continue; + } + + sfc_vdpa_disable_vfio_intr(ops_data); + + ops_data->state = SFC_VDPA_STATE_CONFIGURED; +} + +static int +sfc_vdpa_start(struct sfc_vdpa_ops_data *ops_data) +{ + int i, j; + int rc; + + SFC_EFX_ASSERT(ops_data->state == SFC_VDPA_STATE_CONFIGURED); + + sfc_vdpa_log_init(ops_data->dev_handle, "entry"); + + ops_data->state = SFC_VDPA_STATE_STARTING; + + sfc_vdpa_log_init(ops_data->dev_handle, "enable interrupts"); + rc = sfc_vdpa_enable_vfio_intr(ops_data); + if (rc < 0) { + sfc_vdpa_err(ops_data->dev_handle, + "vfio intr allocation failed: %s", + rte_strerror(rc)); + goto 
fail_enable_vfio_intr; + } + + rte_vhost_get_negotiated_features(ops_data->vid, + &ops_data->req_features); + + sfc_vdpa_info(ops_data->dev_handle, + "negotiated feature : 0x%" PRIx64, + ops_data->req_features); + + for (i = 0; i < ops_data->vq_count; i++) { + sfc_vdpa_log_init(ops_data->dev_handle, + "starting vq# %d", i); + rc = sfc_vdpa_virtq_start(ops_data, i); + if (rc != 0) + goto fail_vq_start; + } + + ops_data->state = SFC_VDPA_STATE_STARTED; + + sfc_vdpa_log_init(ops_data->dev_handle, "done"); + + return 0; + +fail_vq_start: + /* stop already started virtqueues */ + for (j = 0; j < i; j++) + sfc_vdpa_virtq_stop(ops_data, j); + sfc_vdpa_disable_vfio_intr(ops_data); + +fail_enable_vfio_intr: + ops_data->state = SFC_VDPA_STATE_CONFIGURED; + + return rc; +} + +static int +sfc_vdpa_get_queue_num(struct rte_vdpa_device *vdpa_dev, uint32_t *queue_num) +{ + struct sfc_vdpa_ops_data *ops_data; + void *dev; + + ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev); + if (ops_data == NULL) + return -1; + + dev = ops_data->dev_handle; + *queue_num = sfc_vdpa_adapter_by_dev_handle(dev)->max_queue_count; + + sfc_vdpa_info(dev, "vDPA ops get_queue_num :: supported queue num : %d", + *queue_num); + + return 0; +} + static int sfc_vdpa_get_features(struct rte_vdpa_device *vdpa_dev, uint64_t *features) { @@ -114,7 +540,53 @@ static int sfc_vdpa_dev_config(int vid) { - RTE_SET_USED(vid); + struct rte_vdpa_device *vdpa_dev; + int rc; + struct sfc_vdpa_ops_data *ops_data; + + vdpa_dev = rte_vhost_get_vdpa_device(vid); + + ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev); + if (ops_data == NULL) { + sfc_vdpa_err(ops_data->dev_handle, + "invalid vDPA device : %p, vid : %d", + vdpa_dev, vid); + return -1; + } + + sfc_vdpa_log_init(ops_data->dev_handle, "entry"); + + ops_data->vid = vid; + + sfc_vdpa_adapter_lock(ops_data->dev_handle); + + sfc_vdpa_log_init(ops_data->dev_handle, "configuring"); + rc = sfc_vdpa_configure(ops_data); + if (rc != 0) + goto fail_vdpa_config; + + 
sfc_vdpa_log_init(ops_data->dev_handle, "starting"); + rc = sfc_vdpa_start(ops_data); + if (rc != 0) + goto fail_vdpa_start; + + sfc_vdpa_adapter_unlock(ops_data->dev_handle); + + sfc_vdpa_log_init(ops_data->dev_handle, "vhost notifier ctrl"); + if (rte_vhost_host_notifier_ctrl(vid, RTE_VHOST_QUEUE_ALL, true) != 0) + sfc_vdpa_info(ops_data->dev_handle, + "vDPA (%s): software relay for notify is used.", + vdpa_dev->device->name); + + sfc_vdpa_log_init(ops_data->dev_handle, "done"); + + return 0; + +fail_vdpa_start: + sfc_vdpa_close(ops_data); + +fail_vdpa_config: + sfc_vdpa_adapter_unlock(ops_data->dev_handle); return -1; } @@ -122,9 +594,27 @@ static int sfc_vdpa_dev_close(int vid) { - RTE_SET_USED(vid); + struct rte_vdpa_device *vdpa_dev; + struct sfc_vdpa_ops_data *ops_data; - return -1; + vdpa_dev = rte_vhost_get_vdpa_device(vid); + + ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev); + if (ops_data == NULL) { + sfc_vdpa_err(ops_data->dev_handle, + "invalid vDPA device : %p, vid : %d", + vdpa_dev, vid); + return -1; + } + + sfc_vdpa_adapter_lock(ops_data->dev_handle); + + sfc_vdpa_stop(ops_data); + sfc_vdpa_close(ops_data); + + sfc_vdpa_adapter_unlock(ops_data->dev_handle); + + return 0; } static int diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.h b/drivers/vdpa/sfc/sfc_vdpa_ops.h index 21cbb73..8d553c5 100644 --- a/drivers/vdpa/sfc/sfc_vdpa_ops.h +++ b/drivers/vdpa/sfc/sfc_vdpa_ops.h @@ -18,17 +18,45 @@ enum sfc_vdpa_context { enum sfc_vdpa_state { SFC_VDPA_STATE_UNINITIALIZED = 0, SFC_VDPA_STATE_INITIALIZED, + SFC_VDPA_STATE_CONFIGURING, + SFC_VDPA_STATE_CONFIGURED, + SFC_VDPA_STATE_CLOSING, + SFC_VDPA_STATE_CLOSED, + SFC_VDPA_STATE_STARTING, + SFC_VDPA_STATE_STARTED, + SFC_VDPA_STATE_STOPPING, SFC_VDPA_STATE_NSTATES }; +struct sfc_vdpa_vring_info { + uint64_t desc; + uint64_t avail; + uint64_t used; + uint64_t size; + uint16_t last_avail_idx; + uint16_t last_used_idx; +}; + +typedef struct sfc_vdpa_vq_context_s { + uint8_t enable; + uint32_t pidx; + uint32_t cidx; + 
efx_virtio_vq_t *vq; +} sfc_vdpa_vq_context_t; + struct sfc_vdpa_ops_data { void *dev_handle; + int vid; struct rte_vdpa_device *vdpa_dev; enum sfc_vdpa_context vdpa_context; enum sfc_vdpa_state state; uint64_t dev_features; uint64_t drv_features; + uint64_t req_features; + + uint16_t vq_count; + struct sfc_vdpa_vq_context_s vq_cxt[SFC_VDPA_MAX_QUEUE_PAIRS * 2]; }; struct sfc_vdpa_ops_data * From patchwork Fri Oct 29 14:46:42 2021 X-Patchwork-Submitter: Vijay Srivastava X-Patchwork-Id: 103249 X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Vijay Srivastava
Date: Fri, 29 Oct 2021 20:16:42 +0530
Message-ID: <20211029144645.30295-8-vsrivast@xilinx.com>
In-Reply-To: <20211029144645.30295-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com> <20211029144645.30295-1-vsrivast@xilinx.com>
Subject: [dpdk-dev] [PATCH v3 07/10]
vdpa/sfc: add support to get queue notify area info X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" From: Vijay Kumar Srivastava Implement the vDPA ops get_notify_area to get the notify area info of the queue. Signed-off-by: Vijay Kumar Srivastava Acked-by: Andrew Rybchenko --- v2: * Added error log in sfc_vdpa_get_notify_area. drivers/vdpa/sfc/sfc_vdpa_ops.c | 168 ++++++++++++++++++++++++++++++++++++++-- drivers/vdpa/sfc/sfc_vdpa_ops.h | 2 + 2 files changed, 164 insertions(+), 6 deletions(-) diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c index de1c81a..774d73e 100644 --- a/drivers/vdpa/sfc/sfc_vdpa_ops.c +++ b/drivers/vdpa/sfc/sfc_vdpa_ops.c @@ -3,6 +3,8 @@ * Copyright(c) 2020-2021 Xilinx, Inc. */ +#include +#include #include #include @@ -537,6 +539,67 @@ return 0; } +static void * +sfc_vdpa_notify_ctrl(void *arg) +{ + struct sfc_vdpa_ops_data *ops_data; + int vid; + + ops_data = arg; + if (ops_data == NULL) + return NULL; + + sfc_vdpa_adapter_lock(ops_data->dev_handle); + + vid = ops_data->vid; + + if (rte_vhost_host_notifier_ctrl(vid, RTE_VHOST_QUEUE_ALL, true) != 0) + sfc_vdpa_info(ops_data->dev_handle, + "vDPA (%s): Notifier could not get configured", + ops_data->vdpa_dev->device->name); + + sfc_vdpa_adapter_unlock(ops_data->dev_handle); + + return NULL; +} + +static int +sfc_vdpa_setup_notify_ctrl(int vid) +{ + int ret; + struct rte_vdpa_device *vdpa_dev; + struct sfc_vdpa_ops_data *ops_data; + + vdpa_dev = rte_vhost_get_vdpa_device(vid); + + ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev); + if (ops_data == NULL) { + sfc_vdpa_err(ops_data->dev_handle, + "invalid vDPA device : %p, vid : %d", + vdpa_dev, vid); + return -1; + } + + ops_data->is_notify_thread_started = false; + + /* + * Use rte_vhost_host_notifier_ctrl in a thread to avoid + * dead lock 
scenario when multiple VFs are used in single vdpa + * application and multiple VFs are passed to a single VM. + */ + ret = pthread_create(&ops_data->notify_tid, NULL, + sfc_vdpa_notify_ctrl, ops_data); + if (ret != 0) { + sfc_vdpa_err(ops_data->dev_handle, + "failed to create notify_ctrl thread: %s", + rte_strerror(ret)); + return -1; + } + ops_data->is_notify_thread_started = true; + + return 0; +} + static int sfc_vdpa_dev_config(int vid) { @@ -570,18 +633,19 @@ if (rc != 0) goto fail_vdpa_start; - sfc_vdpa_adapter_unlock(ops_data->dev_handle); + rc = sfc_vdpa_setup_notify_ctrl(vid); + if (rc != 0) + goto fail_vdpa_notify; - sfc_vdpa_log_init(ops_data->dev_handle, "vhost notifier ctrl"); - if (rte_vhost_host_notifier_ctrl(vid, RTE_VHOST_QUEUE_ALL, true) != 0) - sfc_vdpa_info(ops_data->dev_handle, - "vDPA (%s): software relay for notify is used.", - vdpa_dev->device->name); + sfc_vdpa_adapter_unlock(ops_data->dev_handle); sfc_vdpa_log_init(ops_data->dev_handle, "done"); return 0; +fail_vdpa_notify: + sfc_vdpa_stop(ops_data); + fail_vdpa_start: sfc_vdpa_close(ops_data); @@ -594,6 +658,7 @@ static int sfc_vdpa_dev_close(int vid) { + int ret; struct rte_vdpa_device *vdpa_dev; struct sfc_vdpa_ops_data *ops_data; @@ -608,6 +673,23 @@ } sfc_vdpa_adapter_lock(ops_data->dev_handle); + if (ops_data->is_notify_thread_started == true) { + void *status; + ret = pthread_cancel(ops_data->notify_tid); + if (ret != 0) { + sfc_vdpa_err(ops_data->dev_handle, + "failed to cancel notify_ctrl thread: %s", + rte_strerror(ret)); + } + + ret = pthread_join(ops_data->notify_tid, &status); + if (ret != 0) { + sfc_vdpa_err(ops_data->dev_handle, + "failed to join terminated notify_ctrl thread: %s", + rte_strerror(ret)); + } + } + ops_data->is_notify_thread_started = false; sfc_vdpa_stop(ops_data); sfc_vdpa_close(ops_data); @@ -658,6 +740,79 @@ return vfio_dev_fd; } +static int +sfc_vdpa_get_notify_area(int vid, int qid, uint64_t *offset, uint64_t *size) +{ + int ret; + efx_nic_t *nic; + int 
vfio_dev_fd; + efx_rc_t rc; + unsigned int bar_offset; + struct rte_vdpa_device *vdpa_dev; + struct sfc_vdpa_ops_data *ops_data; + struct vfio_region_info reg = { .argsz = sizeof(reg) }; + const efx_nic_cfg_t *encp; + int max_vring_cnt; + int64_t len; + void *dev; + + vdpa_dev = rte_vhost_get_vdpa_device(vid); + + ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev); + if (ops_data == NULL) + return -1; + + dev = ops_data->dev_handle; + + vfio_dev_fd = sfc_vdpa_adapter_by_dev_handle(dev)->vfio_dev_fd; + max_vring_cnt = + (sfc_vdpa_adapter_by_dev_handle(dev)->max_queue_count * 2); + + nic = sfc_vdpa_adapter_by_dev_handle(ops_data->dev_handle)->nic; + encp = efx_nic_cfg_get(nic); + + if (qid >= max_vring_cnt) { + sfc_vdpa_err(dev, "invalid qid : %d", qid); + return -1; + } + + if (ops_data->vq_cxt[qid].enable != B_TRUE) { + sfc_vdpa_err(dev, "vq is not enabled"); + return -1; + } + + rc = efx_virtio_get_doorbell_offset(ops_data->vq_cxt[qid].vq, + &bar_offset); + if (rc != 0) { + sfc_vdpa_err(dev, "failed to get doorbell offset: %s", + rte_strerror(rc)); + return rc; + } + + reg.index = sfc_vdpa_adapter_by_dev_handle(dev)->mem_bar.esb_rid; + ret = ioctl(vfio_dev_fd, VFIO_DEVICE_GET_REGION_INFO, ®); + if (ret != 0) { + sfc_vdpa_err(dev, "could not get device region info: %s", + strerror(errno)); + return ret; + } + + *offset = reg.offset + bar_offset; + + len = (1U << encp->enc_vi_window_shift) / 2; + if (len >= sysconf(_SC_PAGESIZE)) { + *size = sysconf(_SC_PAGESIZE); + } else { + sfc_vdpa_err(dev, "invalid VI window size : 0x%" PRIx64, len); + return -1; + } + + sfc_vdpa_info(dev, "vDPA ops get_notify_area :: offset : 0x%" PRIx64, + *offset); + + return 0; +} + static struct rte_vdpa_dev_ops sfc_vdpa_ops = { .get_queue_num = sfc_vdpa_get_queue_num, .get_features = sfc_vdpa_get_features, @@ -667,6 +822,7 @@ .set_vring_state = sfc_vdpa_set_vring_state, .set_features = sfc_vdpa_set_features, .get_vfio_device_fd = sfc_vdpa_get_vfio_device_fd, + .get_notify_area = 
sfc_vdpa_get_notify_area, }; struct sfc_vdpa_ops_data * diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.h b/drivers/vdpa/sfc/sfc_vdpa_ops.h index 8d553c5..f7523ef 100644 --- a/drivers/vdpa/sfc/sfc_vdpa_ops.h +++ b/drivers/vdpa/sfc/sfc_vdpa_ops.h @@ -50,6 +50,8 @@ struct sfc_vdpa_ops_data { struct rte_vdpa_device *vdpa_dev; enum sfc_vdpa_context vdpa_context; enum sfc_vdpa_state state; + pthread_t notify_tid; + bool is_notify_thread_started; uint64_t dev_features; uint64_t drv_features; From patchwork Fri Oct 29 14:46:43 2021 X-Patchwork-Submitter: Vijay Srivastava X-Patchwork-Id: 103252 X-Patchwork-Delegate: maxime.coquelin@redhat.com
h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-AntiSpam-MessageData-ChunkCount:X-MS-Exchange-AntiSpam-MessageData-0:X-MS-Exchange-AntiSpam-MessageData-1; bh=AdiyFnz8uhD9WK3Q3GH/Jtkh95LJ5pA+pT/H8Bvsg5c=; b=hgcaGurphr6iljW5ibpB5GOA1CkZrAKEI43cbPD/1pwBu+I4bsNPuZLUw7aWMzSHBlqTvLRyyuHc/itFKuXmb5Hbcxsx9iegD6Lu3ekTo2S7bMLQTiE3WQhBMnrZXwm3nH5dScTgMLBbdsrEXIszDPm/vmAfSji2PiM+fEAyUk78cyZS45ESocmOqsp9AueusLq2TVXaYAxUKWX3hWGe7MKhQir1VbDE7teMmDA3jknchTVAPHDAEa962TlkDIceClrLZ8IB+V9nt1TKgWNa9ZFP/YLQnu7XoHOBaqy+KyMHedUT49i6YP/H40VIKl3EsLv1G53duTYZ8nFqrz5qqA== ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is 149.199.62.198) smtp.rcpttodomain=dpdk.org smtp.mailfrom=xilinx.com; dmarc=pass (p=none sp=none pct=100) action=none header.from=xilinx.com; dkim=none (message not signed); arc=none DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=xilinx.onmicrosoft.com; s=selector2-xilinx-onmicrosoft-com; h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck; bh=AdiyFnz8uhD9WK3Q3GH/Jtkh95LJ5pA+pT/H8Bvsg5c=; b=ZD8LXUAnk6LK5+EXqg+YHEqb166lji0uHJcch5RTntuhvD7pxT9vuFar38Ro4voE2kgWSy5FfShgJVFuP+ghMenTBqXFRajIrDnuQLgD0NMD+89ioGFprrekCckRwoLiflEMwwKIauCoo15/peR/DeNxwizgz+Yo0Cx5kLQndCU= Received: from DM5PR2201CA0022.namprd22.prod.outlook.com (2603:10b6:4:14::32) by PH0PR02MB7144.namprd02.prod.outlook.com (2603:10b6:510:9::13) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4649.13; Fri, 29 Oct 2021 14:49:20 +0000 Received: from DM3NAM02FT022.eop-nam02.prod.protection.outlook.com (2603:10b6:4:14:cafe::cf) by DM5PR2201CA0022.outlook.office365.com (2603:10b6:4:14::32) with Microsoft SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4649.14 via Frontend Transport; Fri, 29 Oct 2021 14:49:20 +0000 X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 149.199.62.198) smtp.mailfrom=xilinx.com; dpdk.org; dkim=none (message not 
Envelope-to: dev@dpdk.org, maxime.coquelin@redhat.com, chenbo.xia@intel.com, andrew.rybchenko@oktetlabs.ru
From: Vijay Srivastava
Date: Fri, 29 Oct 2021 20:16:43 +0530
Message-ID: <20211029144645.30295-9-vsrivast@xilinx.com>
In-Reply-To: <20211029144645.30295-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com> <20211029144645.30295-1-vsrivast@xilinx.com>
Subject: [dpdk-dev] [PATCH v3 08/10] vdpa/sfc: add support for MAC filter config
List-Id: DPDK patches and discussions

From: Vijay Kumar Srivastava

Add support for unicast and broadcast MAC filter configuration.

Signed-off-by: Vijay Kumar Srivastava
Acked-by: Andrew Rybchenko
---
 doc/guides/vdpadevs/sfc.rst        |   4 ++
 drivers/vdpa/sfc/meson.build       |   1 +
 drivers/vdpa/sfc/sfc_vdpa.c        |  32 +++++++++
 drivers/vdpa/sfc/sfc_vdpa.h        |  30 ++++++++
 drivers/vdpa/sfc/sfc_vdpa_filter.c | 144 +++++++++++++++++++++++++++++++++++++
 drivers/vdpa/sfc/sfc_vdpa_hw.c     |  10 +++
 drivers/vdpa/sfc/sfc_vdpa_ops.c    |  17 +++++
 7 files changed, 238 insertions(+)
 create mode 100644 drivers/vdpa/sfc/sfc_vdpa_filter.c

diff --git a/doc/guides/vdpadevs/sfc.rst b/doc/guides/vdpadevs/sfc.rst
index d06c427..512f23e 100644
--- a/doc/guides/vdpadevs/sfc.rst
+++ b/doc/guides/vdpadevs/sfc.rst
@@ -71,6 +71,10 @@ boolean parameters value.
   **vdpa** device will work as vdpa device and will be probed by vdpa/sfc driver.
   If this parameter is not specified then ef100 device will operate as network device.
 
+- ``mac`` [mac address]
+
+  Configures MAC address which would be used to setup MAC filters.
+
 Dynamic Logging Parameters
 ~~~~~~~~~~~~~~~~~~~~~~~~~~

diff --git a/drivers/vdpa/sfc/meson.build b/drivers/vdpa/sfc/meson.build
index dc333de..2ca33bc 100644
--- a/drivers/vdpa/sfc/meson.build
+++ b/drivers/vdpa/sfc/meson.build
@@ -22,4 +22,5 @@ sources = files(
 	'sfc_vdpa_hw.c',
 	'sfc_vdpa_mcdi.c',
 	'sfc_vdpa_ops.c',
+	'sfc_vdpa_filter.c',
 )

diff --git a/drivers/vdpa/sfc/sfc_vdpa.c b/drivers/vdpa/sfc/sfc_vdpa.c
index b3c82e5..d18cd61 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.c
+++ b/drivers/vdpa/sfc/sfc_vdpa.c
@@ -8,7 +8,9 @@
 #include
 #include
+#include
 #include
+#include
 #include
 #include
 #include
@@ -202,6 +204,31 @@ struct sfc_vdpa_ops_data *
 	return ret < 0 ? RTE_LOGTYPE_PMD : ret;
 }
 
+static int
+sfc_vdpa_kvargs_parse(struct sfc_vdpa_adapter *sva)
+{
+	struct rte_pci_device *pci_dev = sva->pdev;
+	struct rte_devargs *devargs = pci_dev->device.devargs;
+	/*
+	 * To get the device class a mandatory param 'class' is being
+	 * used so included SFC_EFX_KVARG_DEV_CLASS in the param list.
+	 */
+	const char **params = (const char *[]){
+		RTE_DEVARGS_KEY_CLASS,
+		SFC_VDPA_MAC_ADDR,
+		NULL,
+	};
+
+	if (devargs == NULL)
+		return 0;
+
+	sva->kvargs = rte_kvargs_parse(devargs->args, params);
+	if (sva->kvargs == NULL)
+		return -EINVAL;
+
+	return 0;
+}
+
 static struct rte_pci_id pci_id_sfc_vdpa_efx_map[] = {
 	{ RTE_PCI_DEVICE(EFX_PCI_VENID_XILINX, EFX_PCI_DEVID_RIVERHEAD_VF) },
 	{ .vendor_id = 0, /* sentinel */ },
@@ -244,6 +271,10 @@ struct sfc_vdpa_ops_data *
 	if (ret != 0)
 		goto fail_set_log_prefix;
 
+	ret = sfc_vdpa_kvargs_parse(sva);
+	if (ret != 0)
+		goto fail_kvargs_parse;
+
 	sfc_vdpa_log_init(sva, "entry");
 
 	sfc_vdpa_adapter_lock_init(sva);
@@ -284,6 +315,7 @@ struct sfc_vdpa_ops_data *
 fail_vfio_setup:
 	sfc_vdpa_adapter_lock_fini(sva);
 
+fail_kvargs_parse:
 fail_set_log_prefix:
 	rte_free(sva);

diff --git a/drivers/vdpa/sfc/sfc_vdpa.h b/drivers/vdpa/sfc/sfc_vdpa.h
index 1bf96e7..dbd099f 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.h
+++ b/drivers/vdpa/sfc/sfc_vdpa.h
@@ -17,8 +17,29 @@
 #include "sfc_vdpa_log.h"
 #include "sfc_vdpa_ops.h"
 
+#define SFC_VDPA_MAC_ADDR		"mac"
 #define SFC_VDPA_DEFAULT_MCDI_IOVA	0x200000000000
 
+/* Broadcast & Unicast MAC filters are supported */
+#define SFC_MAX_SUPPORTED_FILTERS	2
+
+/*
+ * Get function-local index of the associated VI from the
+ * virtqueue number. Queue 0 is reserved for MCDI
+ */
+#define SFC_VDPA_GET_VI_INDEX(vq_num)	(((vq_num) / 2) + 1)
+
+enum sfc_vdpa_filter_type {
+	SFC_VDPA_BCAST_MAC_FILTER = 0,
+	SFC_VDPA_UCAST_MAC_FILTER = 1,
+	SFC_VDPA_FILTER_NTYPE
+};
+
+typedef struct sfc_vdpa_filter_s {
+	int			filter_cnt;
+	efx_filter_spec_t	spec[SFC_MAX_SUPPORTED_FILTERS];
+} sfc_vdpa_filter_t;
+
 /* Adapter private data */
 struct sfc_vdpa_adapter {
 	TAILQ_ENTRY(sfc_vdpa_adapter) next;
@@ -32,6 +53,8 @@ struct sfc_vdpa_adapter {
 	struct rte_pci_device *pdev;
 	struct rte_pci_addr pci_addr;
 
+	struct rte_kvargs *kvargs;
+
 	efx_family_t family;
 	efx_nic_t *nic;
 	rte_spinlock_t nic_lock;
@@ -46,6 +69,8 @@ struct sfc_vdpa_adapter {
 	char log_prefix[SFC_VDPA_LOG_PREFIX_MAX];
 	uint32_t logtype_main;
 
+	sfc_vdpa_filter_t filters;
+
 	int vfio_group_fd;
 	int vfio_dev_fd;
 	int vfio_container_fd;
@@ -83,6 +108,11 @@ struct sfc_vdpa_ops_data *
 int
 sfc_vdpa_dma_map(struct sfc_vdpa_ops_data *vdpa_data, bool do_map);
 
+int
+sfc_vdpa_filter_remove(struct sfc_vdpa_ops_data *ops_data);
+int
+sfc_vdpa_filter_config(struct sfc_vdpa_ops_data *ops_data);
+
 static inline struct sfc_vdpa_adapter *
 sfc_vdpa_adapter_by_dev_handle(void *dev_handle)
 {

diff --git a/drivers/vdpa/sfc/sfc_vdpa_filter.c b/drivers/vdpa/sfc/sfc_vdpa_filter.c
new file mode 100644
index 0000000..03b6a5d
--- /dev/null
+++ b/drivers/vdpa/sfc/sfc_vdpa_filter.c
@@ -0,0 +1,144 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ *
+ * Copyright(c) 2020-2021 Xilinx, Inc.
+ */
+
+#include
+#include
+#include
+
+#include "efx.h"
+#include "efx_impl.h"
+#include "sfc_vdpa.h"
+
+static inline int
+sfc_vdpa_get_eth_addr(const char *key __rte_unused,
+		      const char *value, void *extra_args)
+{
+	struct rte_ether_addr *mac_addr = extra_args;
+
+	if (value == NULL || extra_args == NULL)
+		return -EINVAL;
+
+	/* Convert string with Ethernet address to an ether_addr */
+	rte_ether_unformat_addr(value, mac_addr);
+
+	return 0;
+}
+
+static int
+sfc_vdpa_set_mac_filter(efx_nic_t *nic, efx_filter_spec_t *spec,
+			int qid, uint8_t *eth_addr)
+{
+	int rc;
+
+	if (nic == NULL || spec == NULL)
+		return -1;
+
+	spec->efs_priority = EFX_FILTER_PRI_MANUAL;
+	spec->efs_flags = EFX_FILTER_FLAG_RX;
+	spec->efs_dmaq_id = qid;
+
+	rc = efx_filter_spec_set_eth_local(spec, EFX_FILTER_SPEC_VID_UNSPEC,
+					   eth_addr);
+	if (rc != 0)
+		return rc;
+
+	rc = efx_filter_insert(nic, spec);
+	if (rc != 0)
+		return rc;
+
+	return rc;
+}
+
+int sfc_vdpa_filter_config(struct sfc_vdpa_ops_data *ops_data)
+{
+	int rc;
+	int qid;
+	efx_nic_t *nic;
+	struct rte_ether_addr bcast_eth_addr;
+	struct rte_ether_addr ucast_eth_addr;
+	struct sfc_vdpa_adapter *sva = ops_data->dev_handle;
+	efx_filter_spec_t *spec;
+
+	if (ops_data == NULL)
+		return -1;
+
+	sfc_vdpa_log_init(sva, "entry");
+
+	nic = sva->nic;
+
+	sfc_vdpa_log_init(sva, "process kvarg");
+
+	/* skip MAC filter configuration if mac address is not provided */
+	if (rte_kvargs_count(sva->kvargs, SFC_VDPA_MAC_ADDR) == 0) {
+		sfc_vdpa_warn(sva,
+			      "MAC address is not provided, skipping MAC Filter Config");
+		return -1;
+	}
+
+	rc = rte_kvargs_process(sva->kvargs, SFC_VDPA_MAC_ADDR,
+				&sfc_vdpa_get_eth_addr,
+				&ucast_eth_addr);
+	if (rc < 0)
+		return -1;
+
+	/* create filters on the base queue */
+	qid = SFC_VDPA_GET_VI_INDEX(0);
+
+	sfc_vdpa_log_init(sva, "insert broadcast mac filter");
+
+	EFX_MAC_BROADCAST_ADDR_SET(bcast_eth_addr.addr_bytes);
+	spec = &sva->filters.spec[SFC_VDPA_BCAST_MAC_FILTER];
+
+	rc =
+	sfc_vdpa_set_mac_filter(nic,
+				spec, qid,
+				bcast_eth_addr.addr_bytes);
+	if (rc != 0)
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "broadcast MAC filter insertion failed: %s",
+			     rte_strerror(rc));
+	else
+		sva->filters.filter_cnt++;
+
+	sfc_vdpa_log_init(sva, "insert unicast mac filter");
+	spec = &sva->filters.spec[SFC_VDPA_UCAST_MAC_FILTER];
+
+	rc = sfc_vdpa_set_mac_filter(nic,
+				     spec, qid,
+				     ucast_eth_addr.addr_bytes);
+	if (rc != 0)
+		sfc_vdpa_err(sva,
+			     "unicast MAC filter insertion failed: %s",
+			     rte_strerror(rc));
+	else
+		sva->filters.filter_cnt++;
+
+	sfc_vdpa_log_init(sva, "done");
+
+	return rc;
+}
+
+int sfc_vdpa_filter_remove(struct sfc_vdpa_ops_data *ops_data)
+{
+	int i, rc = 0;
+	struct sfc_vdpa_adapter *sva = ops_data->dev_handle;
+	efx_nic_t *nic;
+
+	if (ops_data == NULL)
+		return -1;
+
+	nic = sva->nic;
+
+	for (i = 0; i < sva->filters.filter_cnt; i++) {
+		rc = efx_filter_remove(nic, &(sva->filters.spec[i]));
+		if (rc != 0)
+			sfc_vdpa_err(sva,
+				     "remove HW filter failed for entry %d: %s",
+				     i, rte_strerror(rc));
+	}
+
+	sva->filters.filter_cnt = 0;
+
+	return rc;
+}

diff --git a/drivers/vdpa/sfc/sfc_vdpa_hw.c b/drivers/vdpa/sfc/sfc_vdpa_hw.c
index b473708..5307b03 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_hw.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_hw.c
@@ -354,10 +354,20 @@
 		goto fail_virtio_init;
 	}
 
+	sfc_vdpa_log_init(sva, "init filter");
+	rc = efx_filter_init(enp);
+	if (rc != 0) {
+		sfc_vdpa_err(sva, "filter init failed: %s", rte_strerror(rc));
+		goto fail_filter_init;
+	}
+
 	sfc_vdpa_log_init(sva, "done");
 
 	return 0;
 
+fail_filter_init:
+	efx_virtio_fini(enp);
+
 fail_virtio_init:
 	efx_nic_fini(enp);

diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c
index 774d73e..8551b65 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.c
@@ -426,6 +426,8 @@
 	sfc_vdpa_disable_vfio_intr(ops_data);
 
+	sfc_vdpa_filter_remove(ops_data);
+
 	ops_data->state = SFC_VDPA_STATE_CONFIGURED;
 }
@@ -465,12 +467,27 @@
 		goto
 fail_vq_start;
 	}
 
+	ops_data->vq_count = i;
+
+	sfc_vdpa_log_init(ops_data->dev_handle,
+			  "configure MAC filters");
+	rc = sfc_vdpa_filter_config(ops_data);
+	if (rc != 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "MAC filter config failed: %s",
+			     rte_strerror(rc));
+		goto fail_filter_cfg;
+	}
+
 	ops_data->state = SFC_VDPA_STATE_STARTED;
 
 	sfc_vdpa_log_init(ops_data->dev_handle, "done");
 
 	return 0;
 
+fail_filter_cfg:
+	/* remove already created filters */
+	sfc_vdpa_filter_remove(ops_data);
 fail_vq_start:
 	/* stop already started virtqueues */
 	for (j = 0; j < i; j++)

From patchwork Fri Oct 29 14:46:44 2021
X-Patchwork-Submitter: Vijay Srivastava
X-Patchwork-Id: 103251
X-Patchwork-Delegate: maxime.coquelin@redhat.com
Envelope-to: dev@dpdk.org, maxime.coquelin@redhat.com, chenbo.xia@intel.com, andrew.rybchenko@oktetlabs.ru
From: Vijay Srivastava
Date: Fri, 29 Oct 2021 20:16:44 +0530
Message-ID: <20211029144645.30295-10-vsrivast@xilinx.com>
In-Reply-To: <20211029144645.30295-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com> <20211029144645.30295-1-vsrivast@xilinx.com>
Subject: [dpdk-dev] [PATCH v3 09/10] vdpa/sfc: add support to set vring state

From: Vijay Kumar Srivastava

Implements vDPA ops set_vring_state to configure vring state.

Signed-off-by: Vijay Kumar Srivastava
Acked-by: Andrew Rybchenko
Reviewed-by: Maxime Coquelin
Reviewed-by: Chenbo Xia
---
 drivers/vdpa/sfc/sfc_vdpa_ops.c | 54 ++++++++++++++++++++++++++++++++++++++---
 1 file changed, 50 insertions(+), 4 deletions(-)

diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c
index 8551b65..3430643 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.c
@@ -719,11 +719,57 @@
 static int
 sfc_vdpa_set_vring_state(int vid, int vring, int state)
 {
-	RTE_SET_USED(vid);
-	RTE_SET_USED(vring);
-	RTE_SET_USED(state);
+	struct sfc_vdpa_ops_data *ops_data;
+	struct rte_vdpa_device *vdpa_dev;
+	efx_rc_t rc;
+	int vring_max;
+	void *dev;
 
-	return -1;
+	vdpa_dev = rte_vhost_get_vdpa_device(vid);
+
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL)
+		return -1;
+
+	dev = ops_data->dev_handle;
+
+	sfc_vdpa_info(dev,
+		      "vDPA ops set_vring_state: vid: %d, vring: %d, state:%d",
+		      vid, vring, state);
+
+	vring_max = (sfc_vdpa_adapter_by_dev_handle(dev)->max_queue_count * 2);
+
+	if (vring < 0 || vring > vring_max) {
+		sfc_vdpa_err(dev, "received invalid vring id : %d to set state",
+			     vring);
+		return -1;
+	}
+
+	/*
+	 * Skip if device is not yet started. virtqueues state can be
+	 * changed once it is created and other configurations are done.
+	 */
+	if (ops_data->state != SFC_VDPA_STATE_STARTED)
+		return 0;
+
+	if (ops_data->vq_cxt[vring].enable == state)
+		return 0;
+
+	if (state == 0) {
+		rc = sfc_vdpa_virtq_stop(ops_data, vring);
+		if (rc != 0) {
+			sfc_vdpa_err(dev, "virtqueue stop failed: %s",
+				     rte_strerror(rc));
+		}
+	} else {
+		rc = sfc_vdpa_virtq_start(ops_data, vring);
+		if (rc != 0) {
+			sfc_vdpa_err(dev, "virtqueue start failed: %s",
+				     rte_strerror(rc));
+		}
+	}
+
+	return rc;
 }
 
 static int

From patchwork Fri Oct 29 14:46:45 2021
X-Patchwork-Submitter: Vijay Srivastava
X-Patchwork-Id: 103253
X-Patchwork-Delegate: maxime.coquelin@redhat.com
Envelope-to: dev@dpdk.org, maxime.coquelin@redhat.com, chenbo.xia@intel.com, andrew.rybchenko@oktetlabs.ru
From: Vijay Srivastava
Date: Fri, 29 Oct 2021 20:16:45 +0530
Message-ID: <20211029144645.30295-11-vsrivast@xilinx.com>
In-Reply-To: <20211029144645.30295-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com> <20211029144645.30295-1-vsrivast@xilinx.com>
Subject: [dpdk-dev] [PATCH v3 10/10] vdpa/sfc: set a multicast filter during vDPA init

From: Vijay Kumar Srivastava

Insert unknown multicast filter to allow IPv6 neighbor discovery

Signed-off-by: Vijay Kumar Srivastava
Acked-by: Andrew Rybchenko
Reviewed-by: Chenbo Xia
---
 drivers/vdpa/sfc/sfc_vdpa.h        |  3 ++-
 drivers/vdpa/sfc/sfc_vdpa_filter.c | 19 +++++++++++++++++--
 2 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/drivers/vdpa/sfc/sfc_vdpa.h b/drivers/vdpa/sfc/sfc_vdpa.h
index dbd099f..bedc76c 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.h
+++ b/drivers/vdpa/sfc/sfc_vdpa.h
@@ -21,7 +21,7 @@
 #define SFC_VDPA_DEFAULT_MCDI_IOVA	0x200000000000
 
 /* Broadcast & Unicast MAC filters are supported */
-#define SFC_MAX_SUPPORTED_FILTERS	2
+#define SFC_MAX_SUPPORTED_FILTERS	3
 
 /*
  * Get function-local index of the associated VI from the
@@ -32,6 +32,7 @@
 enum sfc_vdpa_filter_type {
 	SFC_VDPA_BCAST_MAC_FILTER = 0,
 	SFC_VDPA_UCAST_MAC_FILTER = 1,
+	SFC_VDPA_MCAST_DST_FILTER = 2,
 	SFC_VDPA_FILTER_NTYPE
 };

diff --git a/drivers/vdpa/sfc/sfc_vdpa_filter.c b/drivers/vdpa/sfc/sfc_vdpa_filter.c
index 03b6a5d..74204d3 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_filter.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_filter.c
@@ -39,8 +39,12 @@
 	spec->efs_flags = EFX_FILTER_FLAG_RX;
 	spec->efs_dmaq_id = qid;
 
-	rc = efx_filter_spec_set_eth_local(spec, EFX_FILTER_SPEC_VID_UNSPEC,
-					   eth_addr);
+	if (eth_addr == NULL)
+		rc = efx_filter_spec_set_mc_def(spec);
+	else
+		rc = efx_filter_spec_set_eth_local(spec,
+						   EFX_FILTER_SPEC_VID_UNSPEC,
+						   eth_addr);
 	if (rc != 0)
 		return rc;
@@ -114,6 +118,17 @@ int sfc_vdpa_filter_config(struct sfc_vdpa_ops_data *ops_data)
 	else
 		sva->filters.filter_cnt++;
 
+	sfc_vdpa_log_init(sva, "insert unknown mcast filter");
+	spec =
+	&sva->filters.spec[SFC_VDPA_MCAST_DST_FILTER];
+
+	rc = sfc_vdpa_set_mac_filter(nic, spec, qid, NULL);
+	if (rc != 0)
+		sfc_vdpa_err(sva,
+			     "mcast filter insertion failed: %s",
+			     rte_strerror(rc));
+	else
+		sva->filters.filter_cnt++;
+
 	sfc_vdpa_log_init(sva, "done");
 
 	return rc;