From patchwork Thu Oct 28 07:54:48 2021
X-Patchwork-Submitter: Vijay Srivastava
X-Patchwork-Id: 103118
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Vijay Srivastava <vsrivast@xilinx.com>
To: dev@dpdk.org
Cc: maxime.coquelin@redhat.com, chenbo.xia@intel.com,
    andrew.rybchenko@oktetlabs.ru, Vijay Kumar Srivastava
Date: Thu, 28 Oct 2021 13:24:48 +0530
Message-ID: <20211028075452.11804-7-vsrivast@xilinx.com>
X-Mailer: git-send-email 2.25.0
In-Reply-To: <20211028075452.11804-1-vsrivast@xilinx.com>
References: <20210706164418.32615-1-vsrivast@xilinx.com>
 <20211028075452.11804-1-vsrivast@xilinx.com>
MIME-Version: 1.0
Subject: [dpdk-dev] [PATCH v2 06/10] vdpa/sfc: add support for dev conf and
 dev close ops
List-Id: DPDK patches and discussions

From: Vijay Kumar Srivastava <vsrivast@xilinx.com>

Implement vDPA ops dev_conf and dev_close for DMA mapping, interrupt and
virtqueue configurations.

Signed-off-by: Vijay Kumar Srivastava <vsrivast@xilinx.com>
---
v2:
* Removed redundant null check while calling free().
* Added error handling for rte_vhost_get_vhost_vring().

 drivers/vdpa/sfc/sfc_vdpa.c     |   6 +
 drivers/vdpa/sfc/sfc_vdpa.h     |  43 ++++
 drivers/vdpa/sfc/sfc_vdpa_hw.c  |  69 ++++++
 drivers/vdpa/sfc/sfc_vdpa_ops.c | 530 ++++++++++++++++++++++++++++++++++++++--
 drivers/vdpa/sfc/sfc_vdpa_ops.h |  28 +++
 5 files changed, 656 insertions(+), 20 deletions(-)

diff --git a/drivers/vdpa/sfc/sfc_vdpa.c b/drivers/vdpa/sfc/sfc_vdpa.c
index 4927698..9ffea59 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.c
+++ b/drivers/vdpa/sfc/sfc_vdpa.c
@@ -246,6 +246,8 @@ struct sfc_vdpa_ops_data *
 
 	sfc_vdpa_log_init(sva, "entry");
 
+	sfc_vdpa_adapter_lock_init(sva);
+
 	sfc_vdpa_log_init(sva, "vfio init");
 	if (sfc_vdpa_vfio_setup(sva) < 0) {
 		sfc_vdpa_err(sva, "failed to setup device %s", pci_dev->name);
@@ -280,6 +282,8 @@ struct sfc_vdpa_ops_data *
 	sfc_vdpa_vfio_teardown(sva);
 
 fail_vfio_setup:
+	sfc_vdpa_adapter_lock_fini(sva);
+
 fail_set_log_prefix:
 	rte_free(sva);
 
@@ -311,6 +315,8 @@ struct sfc_vdpa_ops_data *
 	sfc_vdpa_vfio_teardown(sva);
 
+	sfc_vdpa_adapter_lock_fini(sva);
+
 	rte_free(sva);
 
 	return 0;
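The three hunks above thread lock creation through probe, its error unwind, and remove. For context, a minimal self-contained sketch of that ordering; all names below are hypothetical reductions, not driver symbols:

#include <rte_spinlock.h>

static rte_spinlock_t example_lock;

static void example_lock_init(void) { rte_spinlock_init(&example_lock); }
static void example_lock_fini(void) { /* symmetry only, like the driver */ }
static int example_vfio_setup(void) { return 0; /* stub */ }

static int
example_probe(void)
{
	/* Lock is created first so every later error path may rely on it. */
	example_lock_init();

	if (example_vfio_setup() < 0)
		goto fail_vfio_setup;

	return 0;

fail_vfio_setup:
	/* Unwind mirrors init order: the lock goes away last. */
	example_lock_fini();
	return -1;
}

The remove path follows the same mirror image: teardown first, then lock_fini(), then the free of the adapter itself.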
diff --git a/drivers/vdpa/sfc/sfc_vdpa.h b/drivers/vdpa/sfc/sfc_vdpa.h
index c10c3d3..1bf96e7 100644
--- a/drivers/vdpa/sfc/sfc_vdpa.h
+++ b/drivers/vdpa/sfc/sfc_vdpa.h
@@ -80,10 +80,53 @@ struct sfc_vdpa_ops_data *
 void
 sfc_vdpa_dma_free(struct sfc_vdpa_adapter *sva, efsys_mem_t *esmp);
 
+int
+sfc_vdpa_dma_map(struct sfc_vdpa_ops_data *vdpa_data, bool do_map);
+
 static inline struct sfc_vdpa_adapter *
 sfc_vdpa_adapter_by_dev_handle(void *dev_handle)
 {
 	return (struct sfc_vdpa_adapter *)dev_handle;
 }
 
+/*
+ * Add wrapper functions to acquire/release lock to be able to remove or
+ * change the lock in one place.
+ */
+static inline void
+sfc_vdpa_adapter_lock_init(struct sfc_vdpa_adapter *sva)
+{
+	rte_spinlock_init(&sva->lock);
+}
+
+static inline int
+sfc_vdpa_adapter_is_locked(struct sfc_vdpa_adapter *sva)
+{
+	return rte_spinlock_is_locked(&sva->lock);
+}
+
+static inline void
+sfc_vdpa_adapter_lock(struct sfc_vdpa_adapter *sva)
+{
+	rte_spinlock_lock(&sva->lock);
+}
+
+static inline int
+sfc_vdpa_adapter_trylock(struct sfc_vdpa_adapter *sva)
+{
+	return rte_spinlock_trylock(&sva->lock);
+}
+
+static inline void
+sfc_vdpa_adapter_unlock(struct sfc_vdpa_adapter *sva)
+{
+	rte_spinlock_unlock(&sva->lock);
+}
+
+static inline void
+sfc_vdpa_adapter_lock_fini(__rte_unused struct sfc_vdpa_adapter *sva)
+{
+	/* Just for symmetry of the API */
+}
+
 #endif /* _SFC_VDPA_H */

diff --git a/drivers/vdpa/sfc/sfc_vdpa_hw.c b/drivers/vdpa/sfc/sfc_vdpa_hw.c
index 7a67bd8..b473708 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_hw.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_hw.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include
 
 #include "efx.h"
 #include "sfc_vdpa.h"
@@ -109,6 +110,74 @@
 	memset(esmp, 0, sizeof(*esmp));
 }
 
+int
+sfc_vdpa_dma_map(struct sfc_vdpa_ops_data *ops_data, bool do_map)
+{
+	uint32_t i, j;
+	int rc;
+	struct rte_vhost_memory *vhost_mem = NULL;
+	struct rte_vhost_mem_region *mem_reg = NULL;
+	int vfio_container_fd;
+	void *dev;
+
+	dev = ops_data->dev_handle;
+	vfio_container_fd =
+		sfc_vdpa_adapter_by_dev_handle(dev)->vfio_container_fd;
+
+	rc = rte_vhost_get_mem_table(ops_data->vid, &vhost_mem);
+	if (rc < 0) {
+		sfc_vdpa_err(dev,
+			     "failed to get VM memory layout");
+		goto error;
+	}
+
+	for (i = 0; i < vhost_mem->nregions; i++) {
+		mem_reg = &vhost_mem->regions[i];
+
+		if (do_map) {
+			rc = rte_vfio_container_dma_map(vfio_container_fd,
+						mem_reg->host_user_addr,
+						mem_reg->guest_phys_addr,
+						mem_reg->size);
+			if (rc < 0) {
+				sfc_vdpa_err(dev,
+					     "DMA map failed : %s",
+					     rte_strerror(rte_errno));
+				goto failed_vfio_dma_map;
+			}
+		} else {
+			rc = rte_vfio_container_dma_unmap(vfio_container_fd,
+						mem_reg->host_user_addr,
+						mem_reg->guest_phys_addr,
+						mem_reg->size);
+			if (rc < 0) {
+				sfc_vdpa_err(dev,
+					     "DMA unmap failed : %s",
+					     rte_strerror(rte_errno));
+				goto error;
+			}
+		}
+	}
+
+	free(vhost_mem);
+
+	return 0;
+
+failed_vfio_dma_map:
+	for (j = 0; j < i; j++) {
+		mem_reg = &vhost_mem->regions[j];
+		rc = rte_vfio_container_dma_unmap(vfio_container_fd,
+						  mem_reg->host_user_addr,
+						  mem_reg->guest_phys_addr,
+						  mem_reg->size);
+	}
+
+error:
+	free(vhost_mem);
+
+	return rc;
+}
+
 static int
 sfc_vdpa_mem_bar_init(struct sfc_vdpa_adapter *sva,
 		      const efx_bar_region_t *mem_ebrp)
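sfc_vdpa_dma_map() above walks the vhost memory table and maps (or unmaps) each region into the VFIO container, rolling back the regions already mapped when a map fails. A stand-alone sketch of the same map-with-rollback pattern; the container fd and vid are assumed to come from the caller, and the function name is a placeholder:

#include <stdlib.h>
#include <rte_vfio.h>
#include <rte_vhost.h>

static int
example_map_guest_memory(int container_fd, int vid)
{
	struct rte_vhost_memory *mem = NULL;
	uint32_t i, j;
	int rc;

	if (rte_vhost_get_mem_table(vid, &mem) < 0)
		return -1;

	for (i = 0; i < mem->nregions; i++) {
		struct rte_vhost_mem_region *r = &mem->regions[i];

		/*
		 * IOVA is the guest physical address: the device can then
		 * DMA straight to the guest buffers the rings describe.
		 */
		rc = rte_vfio_container_dma_map(container_fd,
						r->host_user_addr,
						r->guest_phys_addr,
						r->size);
		if (rc < 0)
			goto rollback;
	}

	free(mem);
	return 0;

rollback:
	/* Undo only the regions mapped so far. */
	for (j = 0; j < i; j++) {
		struct rte_vhost_mem_region *r = &mem->regions[j];

		rte_vfio_container_dma_unmap(container_fd,
					     r->host_user_addr,
					     r->guest_phys_addr,
					     r->size);
	}
	free(mem);
	return rc;
}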
diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.c b/drivers/vdpa/sfc/sfc_vdpa_ops.c
index 5253adb..de1c81a 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.c
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.c
@@ -3,10 +3,13 @@
  * Copyright(c) 2020-2021 Xilinx, Inc.
  */
 
+#include
+
 #include
 #include
 #include
 #include
+#include
 #include
 
 #include "efx.h"
@@ -28,24 +31,12 @@
 #define SFC_VDPA_DEFAULT_FEATURES \
 	(1ULL << VHOST_USER_F_PROTOCOL_FEATURES)
 
-static int
-sfc_vdpa_get_queue_num(struct rte_vdpa_device *vdpa_dev, uint32_t *queue_num)
-{
-	struct sfc_vdpa_ops_data *ops_data;
-	void *dev;
-
-	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
-	if (ops_data == NULL)
-		return -1;
-
-	dev = ops_data->dev_handle;
-	*queue_num = sfc_vdpa_adapter_by_dev_handle(dev)->max_queue_count;
+#define SFC_VDPA_MSIX_IRQ_SET_BUF_LEN \
+	(sizeof(struct vfio_irq_set) + \
+	sizeof(int) * (SFC_VDPA_MAX_QUEUE_PAIRS * 2 + 1))
 
-	sfc_vdpa_info(dev, "vDPA ops get_queue_num :: supported queue num : %d",
-		      *queue_num);
-
-	return 0;
-}
+/* It will be used for target VF when calling function is not PF */
+#define SFC_VDPA_VF_NULL	0xFFFF
 
 static int
 sfc_vdpa_get_device_features(struct sfc_vdpa_ops_data *ops_data)
@@ -74,6 +65,441 @@
 	return 0;
 }
 
+static uint64_t
+hva_to_gpa(int vid, uint64_t hva)
+{
+	struct rte_vhost_memory *vhost_mem = NULL;
+	struct rte_vhost_mem_region *mem_reg = NULL;
+	uint32_t i;
+	uint64_t gpa = 0;
+
+	if (rte_vhost_get_mem_table(vid, &vhost_mem) < 0)
+		goto error;
+
+	for (i = 0; i < vhost_mem->nregions; i++) {
+		mem_reg = &vhost_mem->regions[i];
+
+		if (hva >= mem_reg->host_user_addr &&
+		    hva < mem_reg->host_user_addr + mem_reg->size) {
+			gpa = (hva - mem_reg->host_user_addr) +
+			      mem_reg->guest_phys_addr;
+			break;
+		}
+	}
+
+error:
+	free(vhost_mem);
+	return gpa;
+}
+
+static int
+sfc_vdpa_enable_vfio_intr(struct sfc_vdpa_ops_data *ops_data)
+{
+	int rc;
+	int *irq_fd_ptr;
+	int vfio_dev_fd;
+	uint32_t i, num_vring;
+	struct rte_vhost_vring vring;
+	struct vfio_irq_set *irq_set;
+	struct rte_pci_device *pci_dev;
+	char irq_set_buf[SFC_VDPA_MSIX_IRQ_SET_BUF_LEN];
+	void *dev;
+
+	num_vring = rte_vhost_get_vring_num(ops_data->vid);
+	dev = ops_data->dev_handle;
+	vfio_dev_fd = sfc_vdpa_adapter_by_dev_handle(dev)->vfio_dev_fd;
+	pci_dev = sfc_vdpa_adapter_by_dev_handle(dev)->pdev;
+
+	irq_set = (struct vfio_irq_set *)irq_set_buf;
+	irq_set->argsz = sizeof(irq_set_buf);
+	irq_set->count = num_vring + 1;
+	irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
+			 VFIO_IRQ_SET_ACTION_TRIGGER;
+	irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
+	irq_set->start = 0;
+	irq_fd_ptr = (int *)&irq_set->data;
+	irq_fd_ptr[RTE_INTR_VEC_ZERO_OFFSET] =
+		rte_intr_fd_get(pci_dev->intr_handle);
+
+	for (i = 0; i < num_vring; i++) {
+		rc = rte_vhost_get_vhost_vring(ops_data->vid, i, &vring);
+		if (rc)
+			return -1;
+
+		irq_fd_ptr[RTE_INTR_VEC_RXTX_OFFSET + i] = vring.callfd;
+	}
+
+	rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	if (rc) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "error enabling MSI-X interrupts: %s",
+			     strerror(errno));
+		return -1;
+	}
+
+	return 0;
+}
+
+static int
+sfc_vdpa_disable_vfio_intr(struct sfc_vdpa_ops_data *ops_data)
+{
+	int rc;
+	int vfio_dev_fd;
+	struct vfio_irq_set *irq_set;
+	char irq_set_buf[SFC_VDPA_MSIX_IRQ_SET_BUF_LEN];
+	void *dev;
+
+	dev = ops_data->dev_handle;
+	vfio_dev_fd = sfc_vdpa_adapter_by_dev_handle(dev)->vfio_dev_fd;
+
+	irq_set = (struct vfio_irq_set *)irq_set_buf;
+	irq_set->argsz = sizeof(irq_set_buf);
+	irq_set->count = 0;
+	irq_set->flags = VFIO_IRQ_SET_DATA_NONE | VFIO_IRQ_SET_ACTION_TRIGGER;
+	irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
+	irq_set->start = 0;
+
+	rc = ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
+	if (rc) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "error disabling MSI-X interrupts: %s",
+			     strerror(errno));
+		return -1;
+	}
+
+	return 0;
+}
+
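+/*
+ * Aside, not part of the patch: both helpers above drive the same
+ * VFIO_DEVICE_SET_IRQS ioctl. The subtle part is the variable-length
+ * buffer, where an int eventfd array immediately follows the fixed
+ * vfio_irq_set header. A self-contained sketch with hypothetical names;
+ * the fds are assumed to be valid eventfds supplied by the caller.
+ */
#include <string.h>
#include <sys/ioctl.h>
#include <linux/vfio.h>

#define EXAMPLE_NUM_VRINGS 2

static int
example_enable_msix(int vfio_dev_fd, int cfg_fd, const int *vring_call_fds)
{
	char buf[sizeof(struct vfio_irq_set) +
		 sizeof(int) * (EXAMPLE_NUM_VRINGS + 1)];
	struct vfio_irq_set *irq_set = (struct vfio_irq_set *)buf;
	int *fd_ptr = (int *)&irq_set->data;
	unsigned int i;

	memset(buf, 0, sizeof(buf));
	irq_set->argsz = sizeof(buf);
	irq_set->count = EXAMPLE_NUM_VRINGS + 1;   /* +1 for config vector */
	irq_set->flags = VFIO_IRQ_SET_DATA_EVENTFD |
			 VFIO_IRQ_SET_ACTION_TRIGGER;
	irq_set->index = VFIO_PCI_MSIX_IRQ_INDEX;
	irq_set->start = 0;

	fd_ptr[0] = cfg_fd;                        /* vector 0: device config */
	for (i = 0; i < EXAMPLE_NUM_VRINGS; i++)
		fd_ptr[i + 1] = vring_call_fds[i]; /* per-queue callfd */

	return ioctl(vfio_dev_fd, VFIO_DEVICE_SET_IRQS, irq_set);
}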
+static int
+sfc_vdpa_get_vring_info(struct sfc_vdpa_ops_data *ops_data,
+			int vq_num, struct sfc_vdpa_vring_info *vring)
+{
+	int rc;
+	uint64_t gpa;
+	struct rte_vhost_vring vq;
+
+	rc = rte_vhost_get_vhost_vring(ops_data->vid, vq_num, &vq);
+	if (rc < 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "get vhost vring failed: %s", rte_strerror(rc));
+		return rc;
+	}
+
+	gpa = hva_to_gpa(ops_data->vid, (uint64_t)(uintptr_t)vq.desc);
+	if (gpa == 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "fail to get GPA for descriptor ring.");
+		goto fail_vring_map;
+	}
+	vring->desc = gpa;
+
+	gpa = hva_to_gpa(ops_data->vid, (uint64_t)(uintptr_t)vq.avail);
+	if (gpa == 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "fail to get GPA for available ring.");
+		goto fail_vring_map;
+	}
+	vring->avail = gpa;
+
+	gpa = hva_to_gpa(ops_data->vid, (uint64_t)(uintptr_t)vq.used);
+	if (gpa == 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "fail to get GPA for used ring.");
+		goto fail_vring_map;
+	}
+	vring->used = gpa;
+
+	vring->size = vq.size;
+
+	rc = rte_vhost_get_vring_base(ops_data->vid, vq_num,
+				      &vring->last_avail_idx,
+				      &vring->last_used_idx);
+
+	return rc;
+
+fail_vring_map:
+	return -1;
+}
+
+static int
+sfc_vdpa_virtq_start(struct sfc_vdpa_ops_data *ops_data, int vq_num)
+{
+	int rc;
+	efx_virtio_vq_t *vq;
+	struct sfc_vdpa_vring_info vring;
+	efx_virtio_vq_cfg_t vq_cfg;
+	efx_virtio_vq_dyncfg_t vq_dyncfg;
+
+	vq = ops_data->vq_cxt[vq_num].vq;
+	if (vq == NULL)
+		return -1;
+
+	rc = sfc_vdpa_get_vring_info(ops_data, vq_num, &vring);
+	if (rc < 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "get vring info failed: %s", rte_strerror(rc));
+		goto fail_vring_info;
+	}
+
+	vq_cfg.evvc_target_vf = SFC_VDPA_VF_NULL;
+
+	/* even virtqueue for RX and odd for TX */
+	if (vq_num % 2) {
+		vq_cfg.evvc_type = EFX_VIRTIO_VQ_TYPE_NET_TXQ;
+		sfc_vdpa_info(ops_data->dev_handle,
+			      "configure virtqueue # %d (TXQ)", vq_num);
+	} else {
+		vq_cfg.evvc_type = EFX_VIRTIO_VQ_TYPE_NET_RXQ;
+		sfc_vdpa_info(ops_data->dev_handle,
+			      "configure virtqueue # %d (RXQ)", vq_num);
+	}
+
+	vq_cfg.evvc_vq_num = vq_num;
+	vq_cfg.evvc_desc_tbl_addr = vring.desc;
+	vq_cfg.evvc_avail_ring_addr = vring.avail;
+	vq_cfg.evvc_used_ring_addr = vring.used;
+	vq_cfg.evvc_vq_size = vring.size;
+
+	vq_dyncfg.evvd_vq_pidx = vring.last_used_idx;
+	vq_dyncfg.evvd_vq_cidx = vring.last_avail_idx;
+
+	/* MSI-X vector is function-relative */
+	vq_cfg.evvc_msix_vector = RTE_INTR_VEC_RXTX_OFFSET + vq_num;
+	if (ops_data->vdpa_context == SFC_VDPA_AS_VF)
+		vq_cfg.evvc_pas_id = 0;
+	vq_cfg.evcc_features = ops_data->dev_features &
+			       ops_data->req_features;
+
+	/* Start virtqueue */
+	rc = efx_virtio_qstart(vq, &vq_cfg, &vq_dyncfg);
+	if (rc != 0) {
+		/* destroy virtqueue */
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "virtqueue start failed: %s",
+			     rte_strerror(rc));
+		efx_virtio_qdestroy(vq);
+		goto fail_virtio_qstart;
+	}
+
+	sfc_vdpa_info(ops_data->dev_handle,
+		      "virtqueue started successfully for vq_num %d", vq_num);
+
+	ops_data->vq_cxt[vq_num].enable = B_TRUE;
+
+	return rc;
+
+fail_virtio_qstart:
+fail_vring_info:
+	return rc;
+}
+
+static int
+sfc_vdpa_virtq_stop(struct sfc_vdpa_ops_data *ops_data, int vq_num)
+{
+	int rc;
+	efx_virtio_vq_dyncfg_t vq_idx;
+	efx_virtio_vq_t *vq;
+
+	if (ops_data->vq_cxt[vq_num].enable != B_TRUE)
+		return -1;
+
+	vq = ops_data->vq_cxt[vq_num].vq;
+	if (vq == NULL)
+		return -1;
+
+	/* stop the vq */
+	rc = efx_virtio_qstop(vq, &vq_idx);
+	if (rc == 0) {
+		ops_data->vq_cxt[vq_num].cidx = vq_idx.evvd_vq_cidx;
+		ops_data->vq_cxt[vq_num].pidx = vq_idx.evvd_vq_pidx;
+	}
+	ops_data->vq_cxt[vq_num].enable = B_FALSE;
+
+	return rc;
+}
+
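+/*
+ * Aside, not part of the patch: sfc_vdpa_virtq_start() above derives the
+ * queue type from the index parity. For virtio-net the queue indices
+ * alternate receiveq0, transmitq0, receiveq1, transmitq1, ..., so pair n
+ * owns indices 2n (RX) and 2n + 1 (TX). A tiny runnable restatement:
+ */
#include <stdbool.h>
#include <stdio.h>

static bool example_vq_is_tx(int vq_num) { return (vq_num % 2) == 1; }
static int  example_vq_pair(int vq_num)  { return vq_num / 2; }

int
main(void)
{
	int vq;

	for (vq = 0; vq < 4; vq++)
		printf("vq %d: pair %d, %s\n", vq, example_vq_pair(vq),
		       example_vq_is_tx(vq) ? "TXQ" : "RXQ");
	return 0;
}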
+static int
+sfc_vdpa_configure(struct sfc_vdpa_ops_data *ops_data)
+{
+	int rc, i;
+	int nr_vring;
+	int max_vring_cnt;
+	efx_virtio_vq_t *vq;
+	efx_nic_t *nic;
+	void *dev;
+
+	dev = ops_data->dev_handle;
+	nic = sfc_vdpa_adapter_by_dev_handle(dev)->nic;
+
+	SFC_EFX_ASSERT(ops_data->state == SFC_VDPA_STATE_INITIALIZED);
+
+	ops_data->state = SFC_VDPA_STATE_CONFIGURING;
+
+	nr_vring = rte_vhost_get_vring_num(ops_data->vid);
+	max_vring_cnt =
+		(sfc_vdpa_adapter_by_dev_handle(dev)->max_queue_count * 2);
+
+	/* number of vring should not be more than supported max vq count */
+	if (nr_vring > max_vring_cnt) {
+		sfc_vdpa_err(dev,
+			     "nr_vring (%d) is > max vring count (%d)",
+			     nr_vring, max_vring_cnt);
+		goto fail_vring_num;
+	}
+
+	rc = sfc_vdpa_dma_map(ops_data, true);
+	if (rc) {
+		sfc_vdpa_err(dev,
+			     "DMA map failed: %s", rte_strerror(rc));
+		goto fail_dma_map;
+	}
+
+	for (i = 0; i < nr_vring; i++) {
+		rc = efx_virtio_qcreate(nic, &vq);
+		if ((rc != 0) || (vq == NULL)) {
+			sfc_vdpa_err(dev,
+				     "virtqueue create failed: %s",
+				     rte_strerror(rc));
+			goto fail_vq_create;
+		}
+
+		/* store created virtqueue context */
+		ops_data->vq_cxt[i].vq = vq;
+	}
+
+	ops_data->vq_count = i;
+
+	ops_data->state = SFC_VDPA_STATE_CONFIGURED;
+
+	return 0;
+
+fail_vq_create:
+	sfc_vdpa_dma_map(ops_data, false);
+
+fail_dma_map:
+fail_vring_num:
+	ops_data->state = SFC_VDPA_STATE_INITIALIZED;
+
+	return -1;
+}
+
+static void
+sfc_vdpa_close(struct sfc_vdpa_ops_data *ops_data)
+{
+	int i;
+
+	if (ops_data->state != SFC_VDPA_STATE_CONFIGURED)
+		return;
+
+	ops_data->state = SFC_VDPA_STATE_CLOSING;
+
+	for (i = 0; i < ops_data->vq_count; i++) {
+		if (ops_data->vq_cxt[i].vq == NULL)
+			continue;
+
+		efx_virtio_qdestroy(ops_data->vq_cxt[i].vq);
+	}
+
+	sfc_vdpa_dma_map(ops_data, false);
+
+	ops_data->state = SFC_VDPA_STATE_INITIALIZED;
+}
+
+static void
+sfc_vdpa_stop(struct sfc_vdpa_ops_data *ops_data)
+{
+	int i;
+	int rc;
+
+	if (ops_data->state != SFC_VDPA_STATE_STARTED)
+		return;
+
+	ops_data->state = SFC_VDPA_STATE_STOPPING;
+
+	for (i = 0; i < ops_data->vq_count; i++) {
+		rc = sfc_vdpa_virtq_stop(ops_data, i);
+		if (rc != 0)
+			continue;
+	}
+
+	sfc_vdpa_disable_vfio_intr(ops_data);
+
+	ops_data->state = SFC_VDPA_STATE_CONFIGURED;
+}
+
+static int
+sfc_vdpa_start(struct sfc_vdpa_ops_data *ops_data)
+{
+	int i, j;
+	int rc;
+
+	SFC_EFX_ASSERT(ops_data->state == SFC_VDPA_STATE_CONFIGURED);
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "entry");
+
+	ops_data->state = SFC_VDPA_STATE_STARTING;
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "enable interrupts");
+	rc = sfc_vdpa_enable_vfio_intr(ops_data);
+	if (rc < 0) {
+		sfc_vdpa_err(ops_data->dev_handle,
+			     "vfio intr allocation failed: %s",
+			     rte_strerror(rc));
+		goto fail_enable_vfio_intr;
+	}
+
+	rte_vhost_get_negotiated_features(ops_data->vid,
+					  &ops_data->req_features);
+
+	sfc_vdpa_info(ops_data->dev_handle,
+		      "negotiated feature : 0x%" PRIx64,
+		      ops_data->req_features);
+
+	for (i = 0; i < ops_data->vq_count; i++) {
+		sfc_vdpa_log_init(ops_data->dev_handle,
+				  "starting vq# %d", i);
+		rc = sfc_vdpa_virtq_start(ops_data, i);
+		if (rc != 0)
+			goto fail_vq_start;
+	}
+
+	ops_data->state = SFC_VDPA_STATE_STARTED;
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "done");
+
+	return 0;
+
+fail_vq_start:
+	/* stop already started virtqueues */
+	for (j = 0; j < i; j++)
+		sfc_vdpa_virtq_stop(ops_data, j);
+	sfc_vdpa_disable_vfio_intr(ops_data);
+
+fail_enable_vfio_intr:
+	ops_data->state = SFC_VDPA_STATE_CONFIGURED;
+
+	return rc;
+}
+
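+/*
+ * Aside, not part of the patch: configure/close/start/stop above form a
+ * small state machine over enum sfc_vdpa_state, each entry point checking
+ * its source state and passing through an intermediate *_ING state. A
+ * compact sketch of the legal source states, mirroring the asserts and
+ * early returns in the driver (names here are illustrative):
+ */
#include <stdbool.h>

enum example_state {
	EX_INITIALIZED,   /* after probe                          */
	EX_CONFIGURING,   /* dev_conf in progress                 */
	EX_CONFIGURED,    /* queues created, datapath idle        */
	EX_STARTING,      /* virtqueue start in progress          */
	EX_STARTED,       /* datapath live                        */
	EX_STOPPING,      /* virtqueue stop in progress           */
	EX_CLOSING        /* unwinding back to EX_INITIALIZED     */
};

static bool
example_may_configure(enum example_state s) { return s == EX_INITIALIZED; }
static bool
example_may_start(enum example_state s) { return s == EX_CONFIGURED; }
static bool
example_may_stop(enum example_state s) { return s == EX_STARTED; }
static bool
example_may_close(enum example_state s) { return s == EX_CONFIGURED; }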
+static int
+sfc_vdpa_get_queue_num(struct rte_vdpa_device *vdpa_dev, uint32_t *queue_num)
+{
+	struct sfc_vdpa_ops_data *ops_data;
+	void *dev;
+
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL)
+		return -1;
+
+	dev = ops_data->dev_handle;
+	*queue_num = sfc_vdpa_adapter_by_dev_handle(dev)->max_queue_count;
+
+	sfc_vdpa_info(dev, "vDPA ops get_queue_num :: supported queue num : %d",
+		      *queue_num);
+
+	return 0;
+}
+
 static int
 sfc_vdpa_get_features(struct rte_vdpa_device *vdpa_dev, uint64_t *features)
 {
@@ -114,7 +540,53 @@
 static int
 sfc_vdpa_dev_config(int vid)
 {
-	RTE_SET_USED(vid);
+	struct rte_vdpa_device *vdpa_dev;
+	int rc;
+	struct sfc_vdpa_ops_data *ops_data;
+
+	vdpa_dev = rte_vhost_get_vdpa_device(vid);
+
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL) {
+		/*
+		 * ops_data is NULL here, so it must not be dereferenced
+		 * for logging the invalid device (%p) / vid (%d) pair.
+		 */
+		return -1;
+	}
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "entry");
+
+	ops_data->vid = vid;
+
+	sfc_vdpa_adapter_lock(ops_data->dev_handle);
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "configuring");
+	rc = sfc_vdpa_configure(ops_data);
+	if (rc != 0)
+		goto fail_vdpa_config;
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "starting");
+	rc = sfc_vdpa_start(ops_data);
+	if (rc != 0)
+		goto fail_vdpa_start;
+
+	sfc_vdpa_adapter_unlock(ops_data->dev_handle);
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "vhost notifier ctrl");
+	if (rte_vhost_host_notifier_ctrl(vid, RTE_VHOST_QUEUE_ALL, true) != 0)
+		sfc_vdpa_info(ops_data->dev_handle,
+			      "vDPA (%s): software relay for notify is used.",
+			      vdpa_dev->device->name);
+
+	sfc_vdpa_log_init(ops_data->dev_handle, "done");
+
+	return 0;
+
+fail_vdpa_start:
+	sfc_vdpa_close(ops_data);
+
+fail_vdpa_config:
+	sfc_vdpa_adapter_unlock(ops_data->dev_handle);
 
 	return -1;
 }
@@ -122,9 +594,27 @@
 static int
 sfc_vdpa_dev_close(int vid)
 {
-	RTE_SET_USED(vid);
+	struct rte_vdpa_device *vdpa_dev;
+	struct sfc_vdpa_ops_data *ops_data;
 
-	return -1;
+	vdpa_dev = rte_vhost_get_vdpa_device(vid);
+
+	ops_data = sfc_vdpa_get_data_by_dev(vdpa_dev);
+	if (ops_data == NULL) {
+		/*
+		 * ops_data is NULL here, so it must not be dereferenced
+		 * for logging the invalid device (%p) / vid (%d) pair.
+		 */
+		return -1;
+	}
+
+	sfc_vdpa_adapter_lock(ops_data->dev_handle);
+
+	sfc_vdpa_stop(ops_data);
+	sfc_vdpa_close(ops_data);
+
+	sfc_vdpa_adapter_unlock(ops_data->dev_handle);
+
+	return 0;
 }
 
 static int
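vhost reaches sfc_vdpa_dev_config()/sfc_vdpa_dev_close() through the rte_vdpa_dev_ops table that the driver registers at probe time (wired up elsewhere in this series). As a reminder of that plumbing, a hedged sketch of such a table; the handler names are placeholders, only a subset of callbacks is shown, and the declarations live in rte_vdpa.h or vdpa_driver.h depending on the DPDK release:

#include <stdint.h>
#include <rte_vdpa.h>	/* vdpa_driver.h on newer DPDK releases */

static int
example_get_queue_num(struct rte_vdpa_device *dev, uint32_t *queue_num)
{
	(void)dev;
	*queue_num = 1;		/* stub */
	return 0;
}

static int
example_get_features(struct rte_vdpa_device *dev, uint64_t *features)
{
	(void)dev;
	*features = 0;		/* stub */
	return 0;
}

static int example_dev_conf(int vid)  { (void)vid; return 0; }
static int example_dev_close(int vid) { (void)vid; return 0; }

static struct rte_vdpa_dev_ops example_vdpa_ops = {
	.get_queue_num = example_get_queue_num,
	.get_features = example_get_features,
	.dev_conf = example_dev_conf,
	.dev_close = example_dev_close,
	/* remaining callbacks omitted in this sketch */
};

Registering the table with rte_vdpa_register_device() attaches it to the device; when a vhost-user frontend completes negotiation and sets DRIVER_OK, dev_conf fires with that connection's vid, and dev_close fires on disconnect.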
diff --git a/drivers/vdpa/sfc/sfc_vdpa_ops.h b/drivers/vdpa/sfc/sfc_vdpa_ops.h
index 21cbb73..8d553c5 100644
--- a/drivers/vdpa/sfc/sfc_vdpa_ops.h
+++ b/drivers/vdpa/sfc/sfc_vdpa_ops.h
@@ -18,17 +18,45 @@ enum sfc_vdpa_context {
 enum sfc_vdpa_state {
 	SFC_VDPA_STATE_UNINITIALIZED = 0,
 	SFC_VDPA_STATE_INITIALIZED,
+	SFC_VDPA_STATE_CONFIGURING,
+	SFC_VDPA_STATE_CONFIGURED,
+	SFC_VDPA_STATE_CLOSING,
+	SFC_VDPA_STATE_CLOSED,
+	SFC_VDPA_STATE_STARTING,
+	SFC_VDPA_STATE_STARTED,
+	SFC_VDPA_STATE_STOPPING,
 	SFC_VDPA_STATE_NSTATES
 };
 
+struct sfc_vdpa_vring_info {
+	uint64_t	desc;
+	uint64_t	avail;
+	uint64_t	used;
+	uint64_t	size;
+	uint16_t	last_avail_idx;
+	uint16_t	last_used_idx;
+};
+
+typedef struct sfc_vdpa_vq_context_s {
+	uint8_t		enable;
+	uint32_t	pidx;
+	uint32_t	cidx;
+	efx_virtio_vq_t	*vq;
+} sfc_vdpa_vq_context_t;
+
 struct sfc_vdpa_ops_data {
 	void				*dev_handle;
+	int				vid;
 	struct rte_vdpa_device		*vdpa_dev;
 	enum sfc_vdpa_context		vdpa_context;
 	enum sfc_vdpa_state		state;
 
 	uint64_t			dev_features;
 	uint64_t			drv_features;
+	uint64_t			req_features;
+
+	uint16_t			vq_count;
+	struct sfc_vdpa_vq_context_s	vq_cxt[SFC_VDPA_MAX_QUEUE_PAIRS * 2];
 };
 
 struct sfc_vdpa_ops_data *
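One consistency worth noting across the new definitions: vq_cxt[] is sized SFC_VDPA_MAX_QUEUE_PAIRS * 2 (one RX and one TX queue per pair), and SFC_VDPA_MSIX_IRQ_SET_BUF_LEN in sfc_vdpa_ops.c reserves one eventfd slot per queue plus one for the device-config interrupt. A compile-time restatement of that relationship, using illustrative macros that mirror the patch (the pair count is an assumed value, not taken from the driver):

#include <assert.h>
#include <linux/vfio.h>

#define EXAMPLE_MAX_QUEUE_PAIRS	8	/* assumed value, not from the patch */
#define EXAMPLE_NUM_VQS		(EXAMPLE_MAX_QUEUE_PAIRS * 2)
#define EXAMPLE_IRQ_SET_BUF_LEN \
	(sizeof(struct vfio_irq_set) + sizeof(int) * (EXAMPLE_NUM_VQS + 1))

/* The tail of the buffer must hold one eventfd per virtqueue plus the
 * device-config vector, exactly what enable_vfio_intr() writes into it.
 */
static_assert(EXAMPLE_IRQ_SET_BUF_LEN - sizeof(struct vfio_irq_set) >=
	      sizeof(int) * (EXAMPLE_NUM_VQS + 1),
	      "irq buffer must hold all queue vectors plus config");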