From patchwork Thu Jun 17 14:17:17 2021
X-Patchwork-Submitter: Maxime Coquelin <maxime.coquelin@redhat.com>
X-Patchwork-Id: 94357
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: dev@dpdk.org, chenbo.xia@intel.com, amorenoz@redhat.com,
	david.marchand@redhat.com
Cc: Maxime Coquelin <maxime.coquelin@redhat.com>
Date: Thu, 17 Jun 2021 16:17:17 +0200
Message-Id: <20210617141718.173396-3-maxime.coquelin@redhat.com>
In-Reply-To: <20210617141718.173396-1-maxime.coquelin@redhat.com>
References: <20210617141718.173396-1-maxime.coquelin@redhat.com>
Subject: [dpdk-dev] [PATCH v2 2/3] net/virtio: add device config support to vDPA
List-Id: DPDK patches and discussions

This patch introduces two virtio-user callbacks to get and set the device's
config space, and implements them for the vDPA backend.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
---
 drivers/net/virtio/virtio_user/vhost.h      |  3 +
 drivers/net/virtio/virtio_user/vhost_vdpa.c | 69 +++++++++++++++++++++
 2 files changed, 72 insertions(+)

diff --git a/drivers/net/virtio/virtio_user/vhost.h b/drivers/net/virtio/virtio_user/vhost.h
index c49e88036d..dfbf6be033 100644
--- a/drivers/net/virtio/virtio_user/vhost.h
+++ b/drivers/net/virtio/virtio_user/vhost.h
@@ -79,6 +79,9 @@ struct virtio_user_backend_ops {
 	int (*set_vring_addr)(struct virtio_user_dev *dev, struct vhost_vring_addr *addr);
 	int (*get_status)(struct virtio_user_dev *dev, uint8_t *status);
 	int (*set_status)(struct virtio_user_dev *dev, uint8_t status);
+	int (*get_config)(struct virtio_user_dev *dev, uint8_t *data, uint32_t off, uint32_t len);
+	int (*set_config)(struct virtio_user_dev *dev, const uint8_t *data, uint32_t off,
+			uint32_t len);
 	int (*enable_qp)(struct virtio_user_dev *dev, uint16_t pair_idx, int enable);
 	int (*dma_map)(struct virtio_user_dev *dev, void *addr, uint64_t iova, size_t len);
 	int (*dma_unmap)(struct virtio_user_dev *dev, void *addr, uint64_t iova, size_t len);
diff --git a/drivers/net/virtio/virtio_user/vhost_vdpa.c b/drivers/net/virtio/virtio_user/vhost_vdpa.c
index e2d6d3504d..3d65f0079a 100644
--- a/drivers/net/virtio/virtio_user/vhost_vdpa.c
+++ b/drivers/net/virtio/virtio_user/vhost_vdpa.c
@@ -41,6 +41,8 @@ struct vhost_vdpa_data {
 #define VHOST_VDPA_GET_DEVICE_ID _IOR(VHOST_VIRTIO, 0x70, __u32)
 #define VHOST_VDPA_GET_STATUS _IOR(VHOST_VIRTIO, 0x71, __u8)
 #define VHOST_VDPA_SET_STATUS _IOW(VHOST_VIRTIO, 0x72, __u8)
+#define VHOST_VDPA_GET_CONFIG _IOR(VHOST_VIRTIO, 0x73, struct vhost_vdpa_config)
+#define VHOST_VDPA_SET_CONFIG _IOW(VHOST_VIRTIO, 0x74, struct vhost_vdpa_config)
 #define VHOST_VDPA_SET_VRING_ENABLE _IOW(VHOST_VIRTIO, 0x75, struct vhost_vring_state)
 #define VHOST_SET_BACKEND_FEATURES _IOW(VHOST_VIRTIO, 0x25, __u64)
 #define VHOST_GET_BACKEND_FEATURES _IOR(VHOST_VIRTIO, 0x26, __u64)
@@ -65,6 +67,12 @@ struct vhost_iotlb_msg {
 
 #define VHOST_IOTLB_MSG_V2 0x2
 
+struct vhost_vdpa_config {
+	uint32_t off;
+	uint32_t len;
+	uint8_t buf[0];
+};
+
 struct vhost_msg {
 	uint32_t type;
 	uint32_t reserved;
@@ -440,6 +448,65 @@ vhost_vdpa_set_status(struct virtio_user_dev *dev, uint8_t status)
 	return vhost_vdpa_ioctl(data->vhostfd, VHOST_VDPA_SET_STATUS, &status);
 }
 
+static int
+vhost_vdpa_get_config(struct virtio_user_dev *dev, uint8_t *data, uint32_t off, uint32_t len)
+{
+	struct vhost_vdpa_data *vdpa_data = dev->backend_data;
+	struct vhost_vdpa_config *config;
+	int ret = 0;
+
+	config = malloc(sizeof(*config) + len);
+	if (!config) {
+		PMD_DRV_LOG(ERR, "Failed to allocate vDPA config data");
+		return -1;
+	}
+
+	config->off = off;
+	config->len = len;
+
+	ret = vhost_vdpa_ioctl(vdpa_data->vhostfd, VHOST_VDPA_GET_CONFIG, config);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to get vDPA config (offset 0x%x, len 0x%x)", off, len);
+		ret = -1;
+		goto out;
+	}
+
+	memcpy(data, config->buf, len);
+out:
+	free(config);
+
+	return ret;
+}
+
+static int
+vhost_vdpa_set_config(struct virtio_user_dev *dev, const uint8_t *data, uint32_t off, uint32_t len)
+{
+	struct vhost_vdpa_data *vdpa_data = dev->backend_data;
+	struct vhost_vdpa_config *config;
+	int ret = 0;
+
+	config = malloc(sizeof(*config) + len);
+	if (!config) {
+		PMD_DRV_LOG(ERR, "Failed to allocate vDPA config data");
+		return -1;
+	}
+
+	config->off = off;
+	config->len = len;
+
+	memcpy(config->buf, data, len);
+
+	ret = vhost_vdpa_ioctl(vdpa_data->vhostfd, VHOST_VDPA_SET_CONFIG, config);
+	if (ret) {
+		PMD_DRV_LOG(ERR, "Failed to set vDPA config (offset 0x%x, len 0x%x)", off, len);
+		ret = -1;
+	}
+
+	free(config);
+
+	return ret;
+}
+
 /**
  * Set up environment to talk with a vhost vdpa backend.
  *
@@ -559,6 +626,8 @@ struct virtio_user_backend_ops virtio_ops_vdpa = {
 	.set_vring_addr = vhost_vdpa_set_vring_addr,
 	.get_status = vhost_vdpa_get_status,
 	.set_status = vhost_vdpa_set_status,
+	.get_config = vhost_vdpa_get_config,
+	.set_config = vhost_vdpa_set_config,
 	.enable_qp = vhost_vdpa_enable_queue_pair,
 	.dma_map = vhost_vdpa_dma_map_batch,
 	.dma_unmap = vhost_vdpa_dma_unmap_batch,
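
For illustration only, and not part of this patch: with these ops in place, a
virtio-user caller could reach the device config space through the backend ops.
The helper name below is made up for this sketch, and it assumes the ops are
reachable as dev->ops as elsewhere in virtio_user; only get_config()/set_config()
themselves come from the patch.

	/*
	 * Illustrative sketch: read the MAC address from the virtio-net
	 * device config space via the new get_config() backend op.
	 */
	static int
	virtio_user_get_mac_example(struct virtio_user_dev *dev,
			uint8_t mac[RTE_ETHER_ADDR_LEN])
	{
		if (!dev->ops->get_config)
			return -1;

		/* struct virtio_net_config starts with the 6-byte MAC at offset 0 */
		return dev->ops->get_config(dev, mac, 0, RTE_ETHER_ADDR_LEN);
	}

A set_config() call would be symmetrical, passing a const buffer with the bytes
to write at the given offset; the vDPA backend forwards either request to the
kernel through the VHOST_VDPA_GET_CONFIG/SET_CONFIG ioctls added above.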