From patchwork Tue Feb 27 05:56:35 2024
X-Patchwork-Submitter: Srujana Challa
X-Patchwork-Id: 137318
X-Patchwork-Delegate: maxime.coquelin@redhat.com
From: Srujana Challa
Subject: [PATCH] net/virtio-user: support IOVA as PA mode for vDPA backend
Date: Tue, 27 Feb 2024 11:26:35 +0530
Message-ID: <20240227055635.2135782-1-schalla@marvell.com>
X-Mailer: git-send-email 2.25.1
List-Id: DPDK patches and discussions

Disable the use_va flag for the vDPA backend type and fix the issues with
shadow control command processing when it is disabled. This lets the
virtio-user driver work in IOVA as PA mode with a vDPA backend.
Signed-off-by: Srujana Challa --- drivers/net/virtio/virtio_ring.h | 12 ++- .../net/virtio/virtio_user/virtio_user_dev.c | 94 ++++++++++--------- drivers/net/virtio/virtio_user_ethdev.c | 10 +- drivers/net/virtio/virtqueue.c | 4 +- 4 files changed, 69 insertions(+), 51 deletions(-) diff --git a/drivers/net/virtio/virtio_ring.h b/drivers/net/virtio/virtio_ring.h index e848c0b73b..998605dbb5 100644 --- a/drivers/net/virtio/virtio_ring.h +++ b/drivers/net/virtio/virtio_ring.h @@ -83,6 +83,7 @@ struct vring_packed_desc_event { struct vring_packed { unsigned int num; + rte_iova_t desc_iova; struct vring_packed_desc *desc; struct vring_packed_desc_event *driver; struct vring_packed_desc_event *device; @@ -90,6 +91,7 @@ struct vring_packed { struct vring { unsigned int num; + rte_iova_t desc_iova; struct vring_desc *desc; struct vring_avail *avail; struct vring_used *used; @@ -149,11 +151,12 @@ vring_size(struct virtio_hw *hw, unsigned int num, unsigned long align) return size; } static inline void -vring_init_split(struct vring *vr, uint8_t *p, unsigned long align, - unsigned int num) +vring_init_split(struct vring *vr, uint8_t *p, rte_iova_t iova, + unsigned long align, unsigned int num) { vr->num = num; vr->desc = (struct vring_desc *) p; + vr->desc_iova = iova; vr->avail = (struct vring_avail *) (p + num * sizeof(struct vring_desc)); vr->used = (void *) @@ -161,11 +164,12 @@ vring_init_split(struct vring *vr, uint8_t *p, unsigned long align, } static inline void -vring_init_packed(struct vring_packed *vr, uint8_t *p, unsigned long align, - unsigned int num) +vring_init_packed(struct vring_packed *vr, uint8_t *p, rte_iova_t iova, + unsigned long align, unsigned int num) { vr->num = num; vr->desc = (struct vring_packed_desc *)p; + vr->desc_iova = iova; vr->driver = (struct vring_packed_desc_event *)(p + vr->num * sizeof(struct vring_packed_desc)); vr->device = (struct vring_packed_desc_event *) diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c index d395fc1676..8ad10e6354 100644 --- a/drivers/net/virtio/virtio_user/virtio_user_dev.c +++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c @@ -62,6 +62,7 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel) struct vhost_vring_state state; struct vring *vring = &dev->vrings.split[queue_sel]; struct vring_packed *pq_vring = &dev->vrings.packed[queue_sel]; + uint64_t desc_addr, avail_addr, used_addr; struct vhost_vring_addr addr = { .index = queue_sel, .log_guest_addr = 0, @@ -81,16 +82,23 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel) } if (dev->features & (1ULL << VIRTIO_F_RING_PACKED)) { - addr.desc_user_addr = - (uint64_t)(uintptr_t)pq_vring->desc; - addr.avail_user_addr = - (uint64_t)(uintptr_t)pq_vring->driver; - addr.used_user_addr = - (uint64_t)(uintptr_t)pq_vring->device; + desc_addr = pq_vring->desc_iova; + avail_addr = desc_addr + pq_vring->num * sizeof(struct vring_packed_desc); + used_addr = RTE_ALIGN_CEIL(avail_addr + sizeof(struct vring_packed_desc_event), + VIRTIO_VRING_ALIGN); + + addr.desc_user_addr = desc_addr; + addr.avail_user_addr = avail_addr; + addr.used_user_addr = used_addr; } else { - addr.desc_user_addr = (uint64_t)(uintptr_t)vring->desc; - addr.avail_user_addr = (uint64_t)(uintptr_t)vring->avail; - addr.used_user_addr = (uint64_t)(uintptr_t)vring->used; + desc_addr = vring->desc_iova; + avail_addr = desc_addr + vring->num * sizeof(struct vring_desc); + used_addr = 
RTE_ALIGN_CEIL((uintptr_t)(&vring->avail->ring[vring->num]), + VIRTIO_VRING_ALIGN); + + addr.desc_user_addr = desc_addr; + addr.avail_user_addr = avail_addr; + addr.used_user_addr = used_addr; } state.index = queue_sel; @@ -885,11 +893,11 @@ static uint32_t virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vring, uint16_t idx_hdr) { - struct virtio_net_ctrl_hdr *hdr; virtio_net_ctrl_ack status = ~0; - uint16_t i, idx_data, idx_status; + uint16_t i, idx_data; uint32_t n_descs = 0; int dlen[CVQ_MAX_DATA_DESCS], nb_dlen = 0; + struct virtio_pmd_ctrl *ctrl; /* locate desc for header, data, and status */ idx_data = vring->desc[idx_hdr].next; @@ -902,34 +910,33 @@ virtio_user_handle_ctrl_msg_split(struct virtio_user_dev *dev, struct vring *vri n_descs++; } - /* locate desc for status */ - idx_status = i; n_descs++; - hdr = (void *)(uintptr_t)vring->desc[idx_hdr].addr; - if (hdr->class == VIRTIO_NET_CTRL_MQ && - hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) { - uint16_t queues; + /* Access control command via VA from CVQ */ + ctrl = (struct virtio_pmd_ctrl *)dev->hw.cvq->hdr_mz->addr; + if (ctrl->hdr.class == VIRTIO_NET_CTRL_MQ && + ctrl->hdr.cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) { + uint16_t *queues; - queues = *(uint16_t *)(uintptr_t)vring->desc[idx_data].addr; - status = virtio_user_handle_mq(dev, queues); - } else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) { + queues = (uint16_t *)ctrl->data; + status = virtio_user_handle_mq(dev, *queues); + } else if (ctrl->hdr.class == VIRTIO_NET_CTRL_MQ && + ctrl->hdr.cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) { struct virtio_net_ctrl_rss *rss; - rss = (struct virtio_net_ctrl_rss *)(uintptr_t)vring->desc[idx_data].addr; + rss = (struct virtio_net_ctrl_rss *)ctrl->data; status = virtio_user_handle_mq(dev, rss->max_tx_vq); - } else if (hdr->class == VIRTIO_NET_CTRL_RX || - hdr->class == VIRTIO_NET_CTRL_MAC || - hdr->class == VIRTIO_NET_CTRL_VLAN) { + } else if (ctrl->hdr.class == VIRTIO_NET_CTRL_RX || + ctrl->hdr.class == VIRTIO_NET_CTRL_MAC || + ctrl->hdr.class == VIRTIO_NET_CTRL_VLAN) { status = 0; } if (!status && dev->scvq) - status = virtio_send_command(&dev->scvq->cq, - (struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen); + status = virtio_send_command(&dev->scvq->cq, ctrl, dlen, nb_dlen); /* Update status */ - *(virtio_net_ctrl_ack *)(uintptr_t)vring->desc[idx_status].addr = status; + ctrl->status = status; return n_descs; } @@ -948,7 +955,7 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev, struct vring_packed *vring, uint16_t idx_hdr) { - struct virtio_net_ctrl_hdr *hdr; + struct virtio_pmd_ctrl *ctrl; virtio_net_ctrl_ack status = ~0; uint16_t idx_data, idx_status; /* initialize to one, header is first */ @@ -971,32 +978,31 @@ virtio_user_handle_ctrl_msg_packed(struct virtio_user_dev *dev, n_descs++; } - hdr = (void *)(uintptr_t)vring->desc[idx_hdr].addr; - if (hdr->class == VIRTIO_NET_CTRL_MQ && - hdr->cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) { - uint16_t queues; + /* Access control command via VA from CVQ */ + ctrl = (struct virtio_pmd_ctrl *)dev->hw.cvq->hdr_mz->addr; + if (ctrl->hdr.class == VIRTIO_NET_CTRL_MQ && + ctrl->hdr.cmd == VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET) { + uint16_t *queues; - queues = *(uint16_t *)(uintptr_t) - vring->desc[idx_data].addr; - status = virtio_user_handle_mq(dev, queues); - } else if (hdr->class == VIRTIO_NET_CTRL_MQ && hdr->cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) { + queues = (uint16_t *)ctrl->data; + status = virtio_user_handle_mq(dev, *queues); + } 
else if (ctrl->hdr.class == VIRTIO_NET_CTRL_MQ && + ctrl->hdr.cmd == VIRTIO_NET_CTRL_MQ_RSS_CONFIG) { struct virtio_net_ctrl_rss *rss; - rss = (struct virtio_net_ctrl_rss *)(uintptr_t)vring->desc[idx_data].addr; + rss = (struct virtio_net_ctrl_rss *)ctrl->data; status = virtio_user_handle_mq(dev, rss->max_tx_vq); - } else if (hdr->class == VIRTIO_NET_CTRL_RX || - hdr->class == VIRTIO_NET_CTRL_MAC || - hdr->class == VIRTIO_NET_CTRL_VLAN) { + } else if (ctrl->hdr.class == VIRTIO_NET_CTRL_RX || + ctrl->hdr.class == VIRTIO_NET_CTRL_MAC || + ctrl->hdr.class == VIRTIO_NET_CTRL_VLAN) { status = 0; } if (!status && dev->scvq) - status = virtio_send_command(&dev->scvq->cq, - (struct virtio_pmd_ctrl *)hdr, dlen, nb_dlen); + status = virtio_send_command(&dev->scvq->cq, ctrl, dlen, nb_dlen); /* Update status */ - *(virtio_net_ctrl_ack *)(uintptr_t) - vring->desc[idx_status].addr = status; + ctrl->status = status; /* Update used descriptor */ vring->desc[idx_hdr].id = vring->desc[idx_status].id; diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c index bf9de36d8f..ae6593ba0b 100644 --- a/drivers/net/virtio/virtio_user_ethdev.c +++ b/drivers/net/virtio/virtio_user_ethdev.c @@ -198,6 +198,7 @@ virtio_user_setup_queue_packed(struct virtqueue *vq, sizeof(struct vring_packed_desc_event), VIRTIO_VRING_ALIGN); vring->num = vq->vq_nentries; + vring->desc_iova = vq->vq_ring_mem; vring->desc = (void *)(uintptr_t)desc_addr; vring->driver = (void *)(uintptr_t)avail_addr; vring->device = (void *)(uintptr_t)used_addr; @@ -221,6 +222,7 @@ virtio_user_setup_queue_split(struct virtqueue *vq, struct virtio_user_dev *dev) VIRTIO_VRING_ALIGN); dev->vrings.split[queue_idx].num = vq->vq_nentries; + dev->vrings.split[queue_idx].desc_iova = vq->vq_ring_mem; dev->vrings.split[queue_idx].desc = (void *)(uintptr_t)desc_addr; dev->vrings.split[queue_idx].avail = (void *)(uintptr_t)avail_addr; dev->vrings.split[queue_idx].used = (void *)(uintptr_t)used_addr; @@ -689,7 +691,13 @@ virtio_user_pmd_probe(struct rte_vdev_device *vdev) * Virtio-user requires using virtual addresses for the descriptors * buffers, whatever other devices require */ - hw->use_va = true; + if (backend_type == VIRTIO_USER_BACKEND_VHOST_VDPA) + /* VDPA backend requires using iova for the buffers to make it + * work in IOVA as PA mode also. + */ + hw->use_va = false; + else + hw->use_va = true; /* previously called by pci probing for physical dev */ if (eth_virtio_dev_init(eth_dev) < 0) { diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c index 6f419665f1..cf46abfd06 100644 --- a/drivers/net/virtio/virtqueue.c +++ b/drivers/net/virtio/virtqueue.c @@ -282,13 +282,13 @@ virtio_init_vring(struct virtqueue *vq) vq->vq_free_cnt = vq->vq_nentries; memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries); if (virtio_with_packed_queue(vq->hw)) { - vring_init_packed(&vq->vq_packed.ring, ring_mem, + vring_init_packed(&vq->vq_packed.ring, ring_mem, vq->vq_ring_mem, VIRTIO_VRING_ALIGN, size); vring_desc_init_packed(vq, size); } else { struct vring *vr = &vq->vq_split.ring; - vring_init_split(vr, ring_mem, VIRTIO_VRING_ALIGN, size); + vring_init_split(vr, ring_mem, vq->vq_ring_mem, VIRTIO_VRING_ALIGN, size); vring_desc_init_split(vr->desc, size); } /*
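
For context, below is a minimal stand-alone sketch of the split-ring address
arithmetic the patch relies on once desc_iova is recorded: the avail ring
directly follows the 16-byte descriptor table, and the used ring starts at
the next VIRTIO_VRING_ALIGN boundary after the avail entries, so the backend
can be handed IOVAs instead of guest virtual addresses. The helper names,
the 4096-byte alignment constant and the standalone main() are illustrative
assumptions and are not part of the patch.

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define VRING_ALIGN 4096 /* assumed value of VIRTIO_VRING_ALIGN (page alignment) */

/* Round addr up to the next 'align' boundary (align must be a power of two). */
static uint64_t
align_ceil(uint64_t addr, uint64_t align)
{
	return (addr + align - 1) & ~(align - 1);
}

/*
 * Derive the avail and used ring IOVAs of a split virtqueue from the
 * descriptor table IOVA: the avail ring follows the descriptor table,
 * and the used ring starts at the next aligned boundary after the
 * avail ring entries.
 */
static void
split_ring_iovas(uint64_t desc_iova, unsigned int num,
		 uint64_t *avail_iova, uint64_t *used_iova)
{
	uint64_t avail_end;

	*avail_iova = desc_iova + (uint64_t)num * 16;    /* sizeof(struct vring_desc) */
	avail_end = *avail_iova + 4 + (uint64_t)num * 2; /* flags + idx + ring[num] */
	*used_iova = align_ceil(avail_end, VRING_ALIGN);
}

int
main(void)
{
	uint64_t avail, used;

	split_ring_iovas(0x10000000, 256, &avail, &used);
	printf("avail 0x%" PRIx64 " used 0x%" PRIx64 "\n", avail, used);
	return 0;
}

As a usage note, the vDPA path in IOVA-as-PA mode can be exercised with
testpmd started with the EAL option --iova-mode=pa and a virtio-user vdev
whose path points at a vhost-vDPA character device, for example
--vdev=net_virtio_user0,path=/dev/vhost-vdpa-0; the exact device path
depends on the local setup.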