From patchwork Thu Dec 13 12:34:44 2018
From: Jens Freimann
To: dev@dpdk.org
Cc: tiwei.bie@intel.com, maxime.coquelin@redhat.com, Gavin.Hu@arm.com
Date: Thu, 13 Dec 2018 13:34:44 +0100
Message-Id: <20181213123453.15035-2-jfreimann@redhat.com>
In-Reply-To: <20181213123453.15035-1-jfreimann@redhat.com>
References: <20181213123453.15035-1-jfreimann@redhat.com>
Subject: [dpdk-dev] [PATCH v12 01/10] net/virtio: vring init for packed queues

Add and initialize descriptor data structures.
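For orientation, a packed virtqueue replaces the split ring's three areas
(desc/avail/used) with a single descriptor array plus two event suppression
structures. A minimal sketch of the memzone size calculation, mirroring the
vring_size() change in this patch:

    size_t sz;

    sz  = num * sizeof(struct vring_packed_desc); /* descriptor ring */
    sz += sizeof(struct vring_packed_desc_event); /* driver event suppression */
    sz  = RTE_ALIGN_CEIL(sz, align);              /* align the device area */
    sz += sizeof(struct vring_packed_desc_event); /* device event suppression */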
Signed-off-by: Jens Freimann
Signed-off-by: Tiwei Bie
---
 drivers/net/virtio/virtio_ethdev.c | 36 +++++++++++-------
 drivers/net/virtio/virtio_pci.h    |  7 ++++
 drivers/net/virtio/virtio_ring.h   | 60 ++++++++++++++++++++++++++----
 drivers/net/virtio/virtqueue.h     | 19 +++++++++-
 4 files changed, 100 insertions(+), 22 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index cb2b2e0bf..b1232df59 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -299,24 +299,26 @@ virtio_init_vring(struct virtqueue *vq)
 
     PMD_INIT_FUNC_TRACE();
 
-    /*
-     * Reinitialise since virtio port might have been stopped and restarted
-     */
     memset(ring_mem, 0, vq->vq_ring_size);
-    vring_init(vr, size, ring_mem, VIRTIO_PCI_VRING_ALIGN);
+
     vq->vq_used_cons_idx = 0;
     vq->vq_desc_head_idx = 0;
     vq->vq_avail_idx = 0;
     vq->vq_desc_tail_idx = (uint16_t)(vq->vq_nentries - 1);
     vq->vq_free_cnt = vq->vq_nentries;
     memset(vq->vq_descx, 0, sizeof(struct vq_desc_extra) * vq->vq_nentries);
-
-    vring_desc_init(vr->desc, size);
-
-    /*
-     * Disable device(host) interrupting guest
-     */
-    virtqueue_disable_intr(vq);
+    if (vtpci_packed_queue(vq->hw)) {
+        vring_init_packed(&vq->ring_packed, ring_mem, VIRTIO_PCI_VRING_ALIGN, size);
+        vring_desc_init_packed(vq, size);
+        virtqueue_disable_intr_packed(vq);
+    } else {
+        vring_init_split(vr, ring_mem, VIRTIO_PCI_VRING_ALIGN, size);
+        vring_desc_init_split(vr->desc, size);
+        /*
+         * Disable device(host) interrupting guest
+         */
+        virtqueue_disable_intr(vq);
+    }
 }
 
 static int
@@ -384,11 +386,16 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
     vq->hw = hw;
     vq->vq_queue_index = vtpci_queue_idx;
     vq->vq_nentries = vq_size;
+    vq->event_flags_shadow = 0;
+    if (vtpci_packed_queue(hw)) {
+        vq->avail_wrap_counter = 1;
+        vq->used_wrap_counter = 1;
+    }
 
     /*
      * Reserve a memzone for vring elements
      */
-    size = vring_size(vq_size, VIRTIO_PCI_VRING_ALIGN);
+    size = vring_size(hw, vq_size, VIRTIO_PCI_VRING_ALIGN);
     vq->vq_ring_size = RTE_ALIGN_CEIL(size, VIRTIO_PCI_VRING_ALIGN);
     PMD_INIT_LOG(DEBUG, "vring_size: %d, rounded_vring_size: %d",
              size, vq->vq_ring_size);
@@ -491,7 +498,7 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
         for (i = 0; i < vq_size; i++) {
             struct vring_desc *start_dp = txr[i].tx_indir;
 
-            vring_desc_init(start_dp, RTE_DIM(txr[i].tx_indir));
+            vring_desc_init_split(start_dp, RTE_DIM(txr[i].tx_indir));
 
             /* first indirect descriptor is always the tx header */
             start_dp->addr = txvq->virtio_net_hdr_mem
                 + i * sizeof(*txr)
                 + offsetof(struct virtio_tx_region, tx_hdr);
@@ -1908,7 +1915,8 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 
     if (vtpci_with_feature(hw, VIRTIO_F_IN_ORDER)) {
         hw->use_inorder_tx = 1;
-        if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
+        if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF) &&
+            !vtpci_packed_queue(hw)) {
             hw->use_inorder_rx = 1;
             hw->use_simple_rx = 0;
         } else {
diff --git a/drivers/net/virtio/virtio_pci.h b/drivers/net/virtio/virtio_pci.h
index e961a58ca..b22b62dad 100644
--- a/drivers/net/virtio/virtio_pci.h
+++ b/drivers/net/virtio/virtio_pci.h
@@ -113,6 +113,7 @@ struct virtnet_ctl;
 #define VIRTIO_F_VERSION_1      32
 #define VIRTIO_F_IOMMU_PLATFORM 33
+#define VIRTIO_F_RING_PACKED    34
 
 /*
  * Some VirtIO feature bits (currently bits 28 through 31) are
@@ -314,6 +315,12 @@ vtpci_with_feature(struct virtio_hw *hw, uint64_t bit)
     return (hw->guest_features & (1ULL << bit)) != 0;
 }
 
+static inline int
+vtpci_packed_queue(struct virtio_hw *hw)
+{
+    return vtpci_with_feature(hw, VIRTIO_F_RING_PACKED);
+}
+
 /*
  * Function declaration from virtio_pci.c
  */
diff --git a/drivers/net/virtio/virtio_ring.h b/drivers/net/virtio/virtio_ring.h
index 9e3c2a015..e155eb17d 100644
--- a/drivers/net/virtio/virtio_ring.h
+++ b/drivers/net/virtio/virtio_ring.h
@@ -54,9 +54,35 @@ struct vring_used {
     struct vring_used_elem ring[0];
 };
 
+/* For support of packed virtqueues in Virtio 1.1 the format of descriptors
+ * looks like this.
+ */
+struct vring_packed_desc {
+    uint64_t addr;
+    uint32_t len;
+    uint16_t id;
+    uint16_t flags;
+};
+
+#define RING_EVENT_FLAGS_ENABLE 0x0
+#define RING_EVENT_FLAGS_DISABLE 0x1
+#define RING_EVENT_FLAGS_DESC 0x2
+struct vring_packed_desc_event {
+    uint16_t desc_event_off_wrap;
+    uint16_t desc_event_flags;
+};
+
+struct vring_packed {
+    unsigned int num;
+    struct vring_packed_desc *desc_packed;
+    struct vring_packed_desc_event *driver_event;
+    struct vring_packed_desc_event *device_event;
+
+};
+
 struct vring {
     unsigned int num;
-    struct vring_desc  *desc;
+    struct vring_desc *desc;
     struct vring_avail *avail;
     struct vring_used  *used;
 };
@@ -95,10 +121,18 @@ struct vring {
 #define vring_avail_event(vr) (*(uint16_t *)&(vr)->used->ring[(vr)->num])
 
 static inline size_t
-vring_size(unsigned int num, unsigned long align)
+vring_size(struct virtio_hw *hw, unsigned int num, unsigned long align)
 {
     size_t size;
 
+    if (vtpci_packed_queue(hw)) {
+        size = num * sizeof(struct vring_packed_desc);
+        size += sizeof(struct vring_packed_desc_event);
+        size = RTE_ALIGN_CEIL(size, align);
+        size += sizeof(struct vring_packed_desc_event);
+        return size;
+    }
+
     size = num * sizeof(struct vring_desc);
     size += sizeof(struct vring_avail) + (num * sizeof(uint16_t));
     size = RTE_ALIGN_CEIL(size, align);
@@ -106,17 +140,29 @@ vring_size(unsigned int num, unsigned long align)
            (num * sizeof(struct vring_used_elem));
     return size;
 }
-
 static inline void
-vring_init(struct vring *vr, unsigned int num, uint8_t *p,
-    unsigned long align)
+vring_init_split(struct vring *vr, uint8_t *p, unsigned long align,
+    unsigned int num)
 {
     vr->num = num;
     vr->desc = (struct vring_desc *) p;
     vr->avail = (struct vring_avail *) (p +
-        num * sizeof(struct vring_desc));
+        vr->num * sizeof(struct vring_desc));
     vr->used = (void *)
-        RTE_ALIGN_CEIL((uintptr_t)(&vr->avail->ring[num]), align);
+        RTE_ALIGN_CEIL((uintptr_t)(&vr->avail->ring[vr->num]), align);
+}
+
+static inline void
+vring_init_packed(struct vring_packed *vr, uint8_t *p, unsigned long align,
+    unsigned int num)
+{
+    vr->num = num;
+    vr->desc_packed = (struct vring_packed_desc *)p;
+    vr->driver_event = (struct vring_packed_desc_event *)(p +
+        vr->num * sizeof(struct vring_packed_desc));
+    vr->device_event = (struct vring_packed_desc_event *)
+        RTE_ALIGN_CEIL((uintptr_t)(vr->driver_event +
+            sizeof(struct vring_packed_desc_event)), align);
 }
 
 /*
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 26518ed98..1401a9844 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -161,11 +161,16 @@ struct virtio_pmd_ctrl {
 struct vq_desc_extra {
     void *cookie;
     uint16_t ndescs;
+    uint16_t next;
 };
 
 struct virtqueue {
     struct virtio_hw *hw; /**< virtio_hw structure pointer. */
     struct vring vq_ring; /**< vring keeping desc, used and avail */
+    struct vring_packed ring_packed; /**< vring keeping desc, used and avail */
+    bool avail_wrap_counter;
+    bool used_wrap_counter;
+
     /**
      * Last consumed descriptor in the used table,
      * trails vq_ring.used->idx.
@@ -245,9 +250,21 @@ struct virtio_tx_region {
                __attribute__((__aligned__(16)));
 };
 
+static inline void
+vring_desc_init_packed(struct virtqueue *vq, int n)
+{
+    int i;
+    for (i = 0; i < n - 1; i++) {
+        vq->ring_packed.desc_packed[i].id = i;
+        vq->vq_descx[i].next = i + 1;
+    }
+    vq->ring_packed.desc_packed[i].id = i;
+    vq->vq_descx[i].next = VQ_RING_DESC_CHAIN_END;
+}
+
 /* Chain all the descriptors in the ring with an END */
 static inline void
-vring_desc_init(struct vring_desc *dp, uint16_t n)
+vring_desc_init_split(struct vring_desc *dp, uint16_t n)
 {
     uint16_t i;

From patchwork Thu Dec 13 12:34:45 2018
From: Jens Freimann
To: dev@dpdk.org
Cc: tiwei.bie@intel.com, maxime.coquelin@redhat.com, Gavin.Hu@arm.com
Date: Thu, 13 Dec 2018 13:34:45 +0100
Message-Id: <20181213123453.15035-3-jfreimann@redhat.com>
In-Reply-To: <20181213123453.15035-1-jfreimann@redhat.com>
References: <20181213123453.15035-1-jfreimann@redhat.com>
Subject: [dpdk-dev] [PATCH v12 02/10] net/virtio: add packed virtqueue defines

Signed-off-by: Jens Freimann
---
 drivers/net/virtio/virtio_ring.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/net/virtio/virtio_ring.h b/drivers/net/virtio/virtio_ring.h
index e155eb17d..9ab007c11 100644
--- a/drivers/net/virtio/virtio_ring.h
+++ b/drivers/net/virtio/virtio_ring.h
@@ -15,6 +15,10 @@
 #define VRING_DESC_F_WRITE 2
 /* This means the buffer contains a list of buffer descriptors. */
 #define VRING_DESC_F_INDIRECT 4
+/* This flag means the descriptor was made available by the driver */
+#define VRING_DESC_F_AVAIL(b) ((uint16_t)(b) << 7)
+/* This flag means the descriptor was used by the device */
+#define VRING_DESC_F_USED(b) ((uint16_t)(b) << 15)
 
 /* The Host uses this in used->flags to advise the Guest: don't kick me
  * when you add a buffer. It's unreliable, so it's simply an

From patchwork Thu Dec 13 12:34:46 2018
From: Jens Freimann
To: dev@dpdk.org
Cc: tiwei.bie@intel.com, maxime.coquelin@redhat.com, Gavin.Hu@arm.com
Date: Thu, 13 Dec 2018 13:34:46 +0100
Message-Id: <20181213123453.15035-4-jfreimann@redhat.com>
In-Reply-To: <20181213123453.15035-1-jfreimann@redhat.com>
References: <20181213123453.15035-1-jfreimann@redhat.com>
Subject: [dpdk-dev] [PATCH v12 03/10] net/virtio: add packed virtqueue helpers

Add helper functions to set/clear and check descriptor flags.

Signed-off-by: Jens Freimann
---
 drivers/net/virtio/virtqueue.h | 72 +++++++++++++++++++++++++++++++++-
 1 file changed, 70 insertions(+), 2 deletions(-)

diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 1401a9844..b7a5a5963 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -250,6 +250,32 @@ struct virtio_tx_region {
                __attribute__((__aligned__(16)));
 };
 
+static inline void
+_set_desc_avail(struct vring_packed_desc *desc, int wrap_counter)
+{
+    desc->flags |= VRING_DESC_F_AVAIL(wrap_counter) |
+               VRING_DESC_F_USED(!wrap_counter);
+}
+
+static inline void
+set_desc_avail(struct virtqueue *vq, struct vring_packed_desc *desc)
+{
+    _set_desc_avail(desc, vq->avail_wrap_counter);
+}
+
+static inline int
+desc_is_used(struct vring_packed_desc *desc, struct virtqueue *vq)
+{
+    uint16_t used, avail, flags;
+
+    flags = desc->flags;
+    used = !!(flags & VRING_DESC_F_USED(1));
+    avail = !!(flags & VRING_DESC_F_AVAIL(1));
+
+    return avail == used && used == vq->used_wrap_counter;
+}
+
 static inline void
 vring_desc_init_packed(struct virtqueue *vq, int n)
 {
@@ -273,22 +299,64 @@ vring_desc_init_split(struct vring_desc *dp, uint16_t n)
     dp[i].next = VQ_RING_DESC_CHAIN_END;
 }
 
+/**
+ * Tell the backend not to interrupt us.
+ */
+static inline void
+virtqueue_disable_intr_packed(struct virtqueue *vq)
+{
+    uint16_t *event_flags = &vq->ring_packed.driver_event->desc_event_flags;
+
+    if (*event_flags != RING_EVENT_FLAGS_DISABLE) {
+        *event_flags = RING_EVENT_FLAGS_DISABLE;
+    }
+}
+
 /**
  * Tell the backend not to interrupt us.
 */
static inline void
virtqueue_disable_intr(struct virtqueue *vq)
{
-    vq->vq_ring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT;
+    if (vtpci_packed_queue(vq->hw))
+        virtqueue_disable_intr_packed(vq);
+    else
+        vq->vq_ring.avail->flags |= VRING_AVAIL_F_NO_INTERRUPT;
+}
+
+/**
+ * Tell the backend to interrupt us.
+ */
+static inline void
+virtqueue_enable_intr_packed(struct virtqueue *vq)
+{
+    uint16_t *off_wrap = &vq->ring_packed.driver_event->desc_event_off_wrap;
+    uint16_t *event_flags = &vq->ring_packed.driver_event->desc_event_flags;
+
+    *off_wrap = vq->vq_used_cons_idx |
+        ((uint16_t)(vq->used_wrap_counter << 15));
+
+    if (vq->event_flags_shadow == RING_EVENT_FLAGS_DISABLE) {
+        virtio_wmb();
+        vq->event_flags_shadow =
+            vtpci_with_feature(vq->hw, VIRTIO_RING_F_EVENT_IDX) ?
+                RING_EVENT_FLAGS_DESC : RING_EVENT_FLAGS_ENABLE;
+        *event_flags = vq->event_flags_shadow;
+    }
+}
+
 /**
  * Tell the backend to interrupt us.
  */
 static inline void
 virtqueue_enable_intr(struct virtqueue *vq)
 {
-    vq->vq_ring.avail->flags &= (~VRING_AVAIL_F_NO_INTERRUPT);
+    if (vtpci_packed_queue(vq->hw))
+        virtqueue_enable_intr_packed(vq);
+    else
+        vq->vq_ring.avail->flags &= (~VRING_AVAIL_F_NO_INTERRUPT);
 }
 
 /**

From patchwork Thu Dec 13 12:34:47 2018
From: Jens Freimann
To: dev@dpdk.org
Cc: tiwei.bie@intel.com, maxime.coquelin@redhat.com, Gavin.Hu@arm.com
Date: Thu, 13 Dec 2018 13:34:47 +0100
Message-Id: <20181213123453.15035-5-jfreimann@redhat.com>
In-Reply-To: <20181213123453.15035-1-jfreimann@redhat.com>
References: <20181213123453.15035-1-jfreimann@redhat.com>
Subject: [dpdk-dev] [PATCH v12 04/10] net/virtio: dump packed virtqueue data

Add support to dump packed virtqueue data to the VIRTQUEUE_DUMP() macro.
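With this change, VIRTQUEUE_DUMP(vq) on a packed queue reports the wrap
counters instead of the split ring's avail/used indices. A hypothetical
debug trace for a freshly initialized 256-entry packed queue (values are
illustrative only, following the format string added below):

    VQ: - size=256; free=256; used_cons_idx=0; avail_idx=0;
    VQ: - avail_wrap_counter=1; used_wrap_counter=1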
Signed-off-by: Jens Freimann
Reviewed-by: Maxime Coquelin
---
 drivers/net/virtio/virtqueue.h | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index b7a5a5963..e6e0518a3 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -440,6 +440,15 @@ virtqueue_notify(struct virtqueue *vq)
     uint16_t used_idx, nused; \
     used_idx = (vq)->vq_ring.used->idx; \
     nused = (uint16_t)(used_idx - (vq)->vq_used_cons_idx); \
+    if (vtpci_packed_queue((vq)->hw)) { \
+        PMD_INIT_LOG(DEBUG, \
+        "VQ: - size=%d; free=%d; used_cons_idx=%d; avail_idx=%d;" \
+        "VQ: - avail_wrap_counter=%d; used_wrap_counter=%d", \
+        (vq)->vq_nentries, (vq)->vq_free_cnt, (vq)->vq_used_cons_idx, \
+        (vq)->vq_avail_idx, (vq)->avail_wrap_counter, \
+        (vq)->used_wrap_counter); \
+        break; \
+    } \
     PMD_INIT_LOG(DEBUG, \
       "VQ: - size=%d; free=%d; used=%d; desc_head_idx=%d;" \
       " avail.idx=%d; used_cons_idx=%d; used.idx=%d;" \

From patchwork Thu Dec 13 12:34:48 2018
From: Jens Freimann
To: dev@dpdk.org
Cc: tiwei.bie@intel.com, maxime.coquelin@redhat.com, Gavin.Hu@arm.com
Date: Thu, 13 Dec 2018 13:34:48 +0100
Message-Id: <20181213123453.15035-6-jfreimann@redhat.com>
In-Reply-To: <20181213123453.15035-1-jfreimann@redhat.com>
References: <20181213123453.15035-1-jfreimann@redhat.com>
Subject: [dpdk-dev] [PATCH v12 05/10] net/virtio: implement transmit path for packed queues

This implements the transmit path for devices with support for packed
virtqueues.
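The key ordering rule on the transmit side is that a descriptor's body must
be visible to the device before its flags mark it available; the flags
encode the current wrap counter through VRING_DESC_F_AVAIL/VRING_DESC_F_USED.
A condensed sketch of this handshake (buf_addr and buf_len are placeholder
names, not fields used by the patch):

    uint16_t flags = VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
             VRING_DESC_F_USED(!vq->avail_wrap_counter);

    desc->addr = buf_addr;   /* fill the descriptor body first... */
    desc->len = buf_len;
    rte_smp_wmb();           /* ...order the body before the flags... */
    desc->flags = flags;     /* ...then hand the descriptor to the device */

This is why virtqueue_enqueue_xmit_packed() below writes the head
descriptor's flags last, after an rte_smp_wmb().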
Signed-off-by: Jens Freimann
Signed-off-by: Tiwei Bie
---
 drivers/net/virtio/virtio_ethdev.c |  54 ++++---
 drivers/net/virtio/virtio_ethdev.h |   2 +
 drivers/net/virtio/virtio_rxtx.c   | 235 ++++++++++++++++++++++++++++-
 drivers/net/virtio/virtqueue.h     |  22 ++-
 4 files changed, 291 insertions(+), 22 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index b1232df59..6fdc18245 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -390,6 +390,9 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
     if (vtpci_packed_queue(hw)) {
         vq->avail_wrap_counter = 1;
         vq->used_wrap_counter = 1;
+        vq->avail_used_flags =
+            VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
+            VRING_DESC_F_USED(!vq->avail_wrap_counter);
     }
 
     /*
@@ -497,16 +500,22 @@ virtio_init_queue(struct rte_eth_dev *dev, uint16_t vtpci_queue_idx)
         memset(txr, 0, vq_size * sizeof(*txr));
         for (i = 0; i < vq_size; i++) {
             struct vring_desc *start_dp = txr[i].tx_indir;
-
-            vring_desc_init_split(start_dp, RTE_DIM(txr[i].tx_indir));
-
+            struct vring_packed_desc *start_dp_packed = txr[i].tx_indir_pq;
+
             /* first indirect descriptor is always the tx header */
-            start_dp->addr = txvq->virtio_net_hdr_mem
-                + i * sizeof(*txr)
-                + offsetof(struct virtio_tx_region, tx_hdr);
-
-            start_dp->len = hw->vtnet_hdr_size;
-            start_dp->flags = VRING_DESC_F_NEXT;
+            if (vtpci_packed_queue(hw)) {
+                start_dp_packed->addr = txvq->virtio_net_hdr_mem
+                    + i * sizeof(*txr)
+                    + offsetof(struct virtio_tx_region, tx_hdr);
+                start_dp_packed->len = hw->vtnet_hdr_size;
+            } else {
+                vring_desc_init_split(start_dp, RTE_DIM(txr[i].tx_indir));
+                start_dp->addr = txvq->virtio_net_hdr_mem
+                    + i * sizeof(*txr)
+                    + offsetof(struct virtio_tx_region, tx_hdr);
+                start_dp->len = hw->vtnet_hdr_size;
+                start_dp->flags = VRING_DESC_F_NEXT;
+            }
         }
     }
@@ -1335,6 +1344,23 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
 {
     struct virtio_hw *hw = eth_dev->data->dev_private;
 
+    if (vtpci_packed_queue(hw)) {
+        PMD_INIT_LOG(INFO,
+            "virtio: using packed ring standard Tx path on port %u",
+            eth_dev->data->port_id);
+        eth_dev->tx_pkt_burst = virtio_xmit_pkts_packed;
+    } else {
+        if (hw->use_inorder_tx) {
+            PMD_INIT_LOG(INFO, "virtio: using inorder Tx path on port %u",
+                eth_dev->data->port_id);
+            eth_dev->tx_pkt_burst = virtio_xmit_pkts_inorder;
+        } else {
+            PMD_INIT_LOG(INFO, "virtio: using standard Tx path on port %u",
+                eth_dev->data->port_id);
+            eth_dev->tx_pkt_burst = virtio_xmit_pkts;
+        }
+    }
+
     if (hw->use_simple_rx) {
         PMD_INIT_LOG(INFO, "virtio: using simple Rx path on port %u",
             eth_dev->data->port_id);
@@ -1355,15 +1381,7 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
         eth_dev->rx_pkt_burst = &virtio_recv_pkts;
     }
 
-    if (hw->use_inorder_tx) {
-        PMD_INIT_LOG(INFO, "virtio: using inorder Tx path on port %u",
-            eth_dev->data->port_id);
-        eth_dev->tx_pkt_burst = virtio_xmit_pkts_inorder;
-    } else {
-        PMD_INIT_LOG(INFO, "virtio: using standard Tx path on port %u",
-            eth_dev->data->port_id);
-        eth_dev->tx_pkt_burst = virtio_xmit_pkts;
-    }
+
 }
 
 /* Only support 1:1 queue/interrupt mapping so far.
diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
index e0f80e5a4..05d355180 100644
--- a/drivers/net/virtio/virtio_ethdev.h
+++ b/drivers/net/virtio/virtio_ethdev.h
@@ -82,6 +82,8 @@ uint16_t virtio_recv_mergeable_pkts_inorder(void *rx_queue,
 uint16_t virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
         uint16_t nb_pkts);
 
+uint16_t virtio_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts,
+        uint16_t nb_pkts);
+
 uint16_t virtio_xmit_pkts_inorder(void *tx_queue, struct rte_mbuf **tx_pkts,
         uint16_t nb_pkts);
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index cb8f89f18..d1c294e1b 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -88,6 +88,23 @@ vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx)
     dp->next = VQ_RING_DESC_CHAIN_END;
 }
 
+static void
+vq_ring_free_id_packed(struct virtqueue *vq, uint16_t id)
+{
+    struct vq_desc_extra *dxp;
+
+    dxp = &vq->vq_descx[id];
+    vq->vq_free_cnt += dxp->ndescs;
+
+    if (vq->vq_desc_tail_idx == VQ_RING_DESC_CHAIN_END)
+        vq->vq_desc_head_idx = id;
+    else
+        vq->vq_descx[vq->vq_desc_tail_idx].next = id;
+
+    vq->vq_desc_tail_idx = id;
+    dxp->next = VQ_RING_DESC_CHAIN_END;
+}
+
 static uint16_t
 virtqueue_dequeue_burst_rx(struct virtqueue *vq, struct rte_mbuf **rx_pkts,
                uint32_t *len, uint16_t num)
@@ -165,6 +182,33 @@ virtqueue_dequeue_rx_inorder(struct virtqueue *vq,
 #endif
 
 /* Cleanup from completed transmits. */
+static void
+virtio_xmit_cleanup_packed(struct virtqueue *vq, int num)
+{
+    uint16_t used_idx, id;
+    uint16_t size = vq->vq_nentries;
+    struct vring_packed_desc *desc = vq->ring_packed.desc_packed;
+    struct vq_desc_extra *dxp;
+
+    used_idx = vq->vq_used_cons_idx;
+    while (num-- && desc_is_used(&desc[used_idx], vq)) {
+        used_idx = vq->vq_used_cons_idx;
+        id = desc[used_idx].id;
+        dxp = &vq->vq_descx[id];
+        vq->vq_used_cons_idx += dxp->ndescs;
+        if (vq->vq_used_cons_idx >= size) {
+            vq->vq_used_cons_idx -= size;
+            vq->used_wrap_counter ^= 1;
+        }
+        vq_ring_free_id_packed(vq, id);
+        if (dxp->cookie != NULL) {
+            rte_pktmbuf_free(dxp->cookie);
+            dxp->cookie = NULL;
+        }
+        used_idx = vq->vq_used_cons_idx;
+    }
+}
+
 static void
 virtio_xmit_cleanup(struct virtqueue *vq, uint16_t num)
 {
@@ -456,6 +500,107 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
     vq->vq_desc_head_idx = idx & (vq->vq_nentries - 1);
 }
 
+static inline void
+virtqueue_enqueue_xmit_packed(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
+            uint16_t needed, int can_push)
+{
+    struct virtio_tx_region *txr = txvq->virtio_net_hdr_mz->addr;
+    struct vq_desc_extra *dxp;
+    struct virtqueue *vq = txvq->vq;
+    struct vring_packed_desc *start_dp, *head_dp;
+    uint16_t idx, id, head_idx, head_flags;
+    uint16_t head_size = vq->hw->vtnet_hdr_size;
+    struct virtio_net_hdr *hdr;
+    uint16_t prev;
+
+    id = vq->vq_desc_head_idx;
+
+    dxp = &vq->vq_descx[id];
+    dxp->ndescs = needed;
+    dxp->cookie = cookie;
+
+    head_idx = vq->vq_avail_idx;
+    idx = head_idx;
+    prev = head_idx;
+    start_dp = vq->ring_packed.desc_packed;
+
+    head_dp = &vq->ring_packed.desc_packed[idx];
+    head_flags = cookie->next ? VRING_DESC_F_NEXT : 0;
+    head_flags |= vq->avail_used_flags;
+
+    if (can_push) {
+        /* prepend cannot fail, checked by caller */
+        hdr = (struct virtio_net_hdr *)
+            rte_pktmbuf_prepend(cookie, head_size);
+        /* rte_pktmbuf_prepend() counts the hdr size to the pkt length,
+         * which is wrong. Below subtract restores correct pkt size.
+         */
+        cookie->pkt_len -= head_size;
+
+        /* if offload disabled, it is not zeroed below, do it now */
+        if (!vq->hw->has_tx_offload) {
+            ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);
+            ASSIGN_UNLESS_EQUAL(hdr->csum_offset, 0);
+            ASSIGN_UNLESS_EQUAL(hdr->flags, 0);
+            ASSIGN_UNLESS_EQUAL(hdr->gso_type, 0);
+            ASSIGN_UNLESS_EQUAL(hdr->gso_size, 0);
+            ASSIGN_UNLESS_EQUAL(hdr->hdr_len, 0);
+        }
+    } else {
+        /* setup first tx ring slot to point to header
+         * stored in reserved region.
+         */
+        start_dp[idx].addr = txvq->virtio_net_hdr_mem +
+            RTE_PTR_DIFF(&txr[idx].tx_hdr, txr);
+        start_dp[idx].len = vq->hw->vtnet_hdr_size;
+        hdr = (struct virtio_net_hdr *)&txr[idx].tx_hdr;
+        idx++;
+        if (idx >= vq->vq_nentries) {
+            idx -= vq->vq_nentries;
+            vq->avail_wrap_counter ^= 1;
+            vq->avail_used_flags =
+                VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
+                VRING_DESC_F_USED(!vq->avail_wrap_counter);
+        }
+    }
+
+    virtqueue_xmit_offload(hdr, cookie, vq->hw->has_tx_offload);
+
+    do {
+        uint16_t flags;
+
+        start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
+        start_dp[idx].len = cookie->data_len;
+        if (likely(idx != head_idx)) {
+            flags = cookie->next ? VRING_DESC_F_NEXT : 0;
+            flags |= vq->avail_used_flags;
+            start_dp[idx].flags = flags;
+        }
+        prev = idx;
+        idx++;
+        if (idx >= vq->vq_nentries) {
+            idx -= vq->vq_nentries;
+            vq->avail_wrap_counter ^= 1;
+            vq->avail_used_flags =
+                VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
+                VRING_DESC_F_USED(!vq->avail_wrap_counter);
+        }
+    } while ((cookie = cookie->next) != NULL);
+
+    start_dp[prev].id = id;
+
+    vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - needed);
+
+    vq->vq_desc_head_idx = dxp->next;
+    if (vq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
+        vq->vq_desc_tail_idx = VQ_RING_DESC_CHAIN_END;
+
+    vq->vq_avail_idx = idx;
+
+    rte_smp_wmb();
+    head_dp->flags = head_flags;
+}
+
 static inline void
 virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
             uint16_t needed, int use_indirect, int can_push,
@@ -733,8 +878,10 @@ virtio_dev_tx_queue_setup_finish(struct rte_eth_dev *dev,
 
     PMD_INIT_FUNC_TRACE();
 
-    if (hw->use_inorder_tx)
-        vq->vq_ring.desc[vq->vq_nentries - 1].next = 0;
+    if (!vtpci_packed_queue(hw)) {
+        if (hw->use_inorder_tx)
+            vq->vq_ring.desc[vq->vq_nentries - 1].next = 0;
+    }
 
     VIRTQUEUE_DUMP(vq);
 
@@ -1346,6 +1493,90 @@ virtio_recv_mergeable_pkts(void *rx_queue,
     return nb_rx;
 }
 
+uint16_t
+virtio_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts,
+            uint16_t nb_pkts)
+{
+    struct virtnet_tx *txvq = tx_queue;
+    struct virtqueue *vq = txvq->vq;
+    struct virtio_hw *hw = vq->hw;
+    uint16_t hdr_size = hw->vtnet_hdr_size;
+    uint16_t nb_tx = 0;
+    int error;
+
+    if (unlikely(hw->started == 0 && tx_pkts != hw->inject_pkts))
+        return nb_tx;
+
+    if (unlikely(nb_pkts < 1))
+        return nb_pkts;
+
+    PMD_TX_LOG(DEBUG, "%d packets to xmit", nb_pkts);
+
+    if (nb_pkts > vq->vq_free_cnt)
+        virtio_xmit_cleanup_packed(vq, nb_pkts - vq->vq_free_cnt);
+
+    for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
+        struct rte_mbuf *txm = tx_pkts[nb_tx];
+        int can_push = 0, slots, need;
+
+        /* Do VLAN tag insertion */
+        if (unlikely(txm->ol_flags & PKT_TX_VLAN_PKT)) {
+            error = rte_vlan_insert(&txm);
+            if (unlikely(error)) {
+                rte_pktmbuf_free(txm);
+                continue;
+            }
+        }
+
+        /* optimize ring usage */
+        if ((vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) ||
+             vtpci_with_feature(hw, VIRTIO_F_VERSION_1)) &&
+            rte_mbuf_refcnt_read(txm) == 1 &&
+            RTE_MBUF_DIRECT(txm) &&
+            txm->nb_segs == 1 &&
+            rte_pktmbuf_headroom(txm) >= hdr_size &&
+            rte_is_aligned(rte_pktmbuf_mtod(txm, char *),
+                __alignof__(struct virtio_net_hdr_mrg_rxbuf)))
+            can_push = 1;
+
+        /* How many main ring entries are needed to this Tx?
+         * any_layout => number of segments
+         * default    => number of segments + 1
+         */
+        slots = txm->nb_segs + !can_push;
+        need = slots - vq->vq_free_cnt;
+
+        /* Positive value indicates it need free vring descriptors */
+        if (unlikely(need > 0)) {
+            virtio_rmb();
+            need = RTE_MIN(need, (int)nb_pkts);
+            virtio_xmit_cleanup_packed(vq, need);
+            need = slots - vq->vq_free_cnt;
+            if (unlikely(need > 0)) {
+                PMD_TX_LOG(ERR,
+                    "No free tx descriptors to transmit");
+                break;
+            }
+        }
+
+        /* Enqueue Packet buffers */
+        virtqueue_enqueue_xmit_packed(txvq, txm, slots, can_push);
+
+        txvq->stats.bytes += txm->pkt_len;
+        virtio_update_packet_stats(&txvq->stats, txm);
+    }
+
+    txvq->stats.packets += nb_tx;
+
+    if (likely(nb_tx)) {
+        if (unlikely(virtqueue_kick_prepare_packed(vq))) {
+            virtqueue_notify(vq);
+            PMD_TX_LOG(DEBUG, "Notified backend after xmit");
+        }
+    }
+
+    return nb_tx;
+}
+
 uint16_t
 virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index e6e0518a3..5119818e1 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -170,6 +170,8 @@ struct virtqueue {
     struct vring_packed ring_packed; /**< vring keeping desc, used and avail */
     bool avail_wrap_counter;
     bool used_wrap_counter;
+    uint16_t event_flags_shadow;
+    uint16_t avail_used_flags;
 
     /**
      * Last consumed descriptor in the used table,
@@ -246,8 +248,12 @@ struct virtio_net_hdr_mrg_rxbuf {
 #define VIRTIO_MAX_TX_INDIRECT 8
 struct virtio_tx_region {
     struct virtio_net_hdr_mrg_rxbuf tx_hdr;
-    struct vring_desc tx_indir[VIRTIO_MAX_TX_INDIRECT]
-        __attribute__((__aligned__(16)));
+    union {
+        struct vring_desc tx_indir[VIRTIO_MAX_TX_INDIRECT]
+            __attribute__((__aligned__(16)));
+        struct vring_packed_desc tx_indir_pq[VIRTIO_MAX_TX_INDIRECT]
+            __attribute__((__aligned__(16)));
+    };
 };
 
 static inline void
@@ -391,6 +397,7 @@ virtio_get_queue_type(struct virtio_hw *hw, uint16_t vtpci_queue_idx)
 #define VIRTQUEUE_NUSED(vq) ((uint16_t)((vq)->vq_ring.used->idx - (vq)->vq_used_cons_idx))
 
 void vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx);
+void vq_ring_free_chain_packed(struct virtqueue *vq, uint16_t desc_idx);
 void vq_ring_free_inorder(struct virtqueue *vq, uint16_t desc_idx,
               uint16_t num);
 
@@ -424,6 +431,17 @@ virtqueue_kick_prepare(struct virtqueue *vq)
     return !(vq->vq_ring.used->flags & VRING_USED_F_NO_NOTIFY);
 }
 
+static inline int
+virtqueue_kick_prepare_packed(struct virtqueue *vq)
+{
+    uint16_t flags;
+
+    virtio_mb();
+    flags = vq->ring_packed.device_event->desc_event_flags;
+
+    return flags != RING_EVENT_FLAGS_DISABLE;
+}
+
 static inline void
 virtqueue_notify(struct virtqueue *vq)
 {

From patchwork Thu Dec 13 12:34:49 2018
From: Jens Freimann
To: dev@dpdk.org
Cc: tiwei.bie@intel.com, maxime.coquelin@redhat.com, Gavin.Hu@arm.com
Date: Thu, 13 Dec 2018 13:34:49 +0100
Message-Id: <20181213123453.15035-7-jfreimann@redhat.com>
In-Reply-To: <20181213123453.15035-1-jfreimann@redhat.com>
References: <20181213123453.15035-1-jfreimann@redhat.com>
Subject: [dpdk-dev] [PATCH v12 06/10] net/virtio: implement receive path for packed queues

Implement the receive part.

Signed-off-by: Jens Freimann
Signed-off-by: Tiwei Bie
---
 drivers/net/virtio/virtio_ethdev.c |  61 +++--
 drivers/net/virtio/virtio_ethdev.h |   5 +
 drivers/net/virtio/virtio_rxtx.c   | 385 ++++++++++++++++++++++++++++-
 drivers/net/virtio/virtqueue.c     |  22 ++
 drivers/net/virtio/virtqueue.h     |   2 +-
 5 files changed, 444 insertions(+), 31 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 6fdc18245..381816adf 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -1361,27 +1361,39 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
         }
     }
 
-    if (hw->use_simple_rx) {
-        PMD_INIT_LOG(INFO, "virtio: using simple Rx path on port %u",
-            eth_dev->data->port_id);
-        eth_dev->rx_pkt_burst = virtio_recv_pkts_vec;
-    } else if (hw->use_inorder_rx) {
-        PMD_INIT_LOG(INFO,
-            "virtio: using inorder mergeable buffer Rx path on port %u",
-            eth_dev->data->port_id);
-        eth_dev->rx_pkt_burst = &virtio_recv_mergeable_pkts_inorder;
-    } else if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
-        PMD_INIT_LOG(INFO,
-            "virtio: using mergeable buffer Rx path on port %u",
-            eth_dev->data->port_id);
-        eth_dev->rx_pkt_burst = &virtio_recv_mergeable_pkts;
-    } else {
-        PMD_INIT_LOG(INFO, "virtio: using standard Rx path on port %u",
-            eth_dev->data->port_id);
-        eth_dev->rx_pkt_burst = &virtio_recv_pkts;
+    if (vtpci_packed_queue(hw)) {
+        if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
+            PMD_INIT_LOG(INFO,
+                "virtio: using packed ring mergeable buffer Rx path on port %u",
+                eth_dev->data->port_id);
+            eth_dev->rx_pkt_burst = &virtio_recv_mergeable_pkts_packed;
+        } else {
+            PMD_INIT_LOG(INFO,
+                "virtio: using packed ring standard Rx path on port %u",
+                eth_dev->data->port_id);
+            eth_dev->rx_pkt_burst = &virtio_recv_pkts_packed;
+        }
+    } else {
+        if (hw->use_simple_rx) {
+            PMD_INIT_LOG(INFO, "virtio: using simple Rx path on port %u",
+                eth_dev->data->port_id);
+            eth_dev->rx_pkt_burst = virtio_recv_pkts_vec;
+        } else if (hw->use_inorder_rx) {
+            PMD_INIT_LOG(INFO,
+                "virtio: using inorder mergeable buffer Rx path on port %u",
+                eth_dev->data->port_id);
+            eth_dev->rx_pkt_burst = &virtio_recv_mergeable_pkts_inorder;
+        } else if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
+            PMD_INIT_LOG(INFO,
+                "virtio: using mergeable buffer Rx path on port %u",
+                eth_dev->data->port_id);
+            eth_dev->rx_pkt_burst = &virtio_recv_mergeable_pkts;
+        } else {
+            PMD_INIT_LOG(INFO, "virtio: using standard Rx path on port %u",
+                eth_dev->data->port_id);
+            eth_dev->rx_pkt_burst = &virtio_recv_pkts;
+        }
     }
-
-    }
 
 /* Only support 1:1 queue/interrupt mapping so far.
@@ -1513,7 +1525,8 @@ virtio_init_device(struct rte_eth_dev *eth_dev, uint64_t req_features)
 
     /* Setting up rx_header size for the device */
     if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF) ||
-        vtpci_with_feature(hw, VIRTIO_F_VERSION_1))
+        vtpci_with_feature(hw, VIRTIO_F_VERSION_1) ||
+        vtpci_with_feature(hw, VIRTIO_F_RING_PACKED))
         hw->vtnet_hdr_size = sizeof(struct virtio_net_hdr_mrg_rxbuf);
     else
         hw->vtnet_hdr_size = sizeof(struct virtio_net_hdr);
@@ -1942,6 +1955,12 @@ virtio_dev_configure(struct rte_eth_dev *dev)
         }
     }
 
+    if (vtpci_packed_queue(hw)) {
+        hw->use_simple_rx = 0;
+        hw->use_inorder_rx = 0;
+        hw->use_inorder_tx = 0;
+    }
+
 #if defined RTE_ARCH_ARM64 || defined RTE_ARCH_ARM
     if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_NEON)) {
         hw->use_simple_rx = 0;
diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
index 05d355180..5cf295418 100644
--- a/drivers/net/virtio/virtio_ethdev.h
+++ b/drivers/net/virtio/virtio_ethdev.h
@@ -73,10 +73,15 @@ int virtio_dev_tx_queue_setup_finish(struct rte_eth_dev *dev,
 uint16_t virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
         uint16_t nb_pkts);
 
+uint16_t virtio_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts,
+        uint16_t nb_pkts);
+
 uint16_t virtio_recv_mergeable_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
         uint16_t nb_pkts);
 
+uint16_t virtio_recv_mergeable_pkts_packed(void *rx_queue,
+        struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
+
 uint16_t virtio_recv_mergeable_pkts_inorder(void *rx_queue,
         struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index d1c294e1b..db61c4197 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -31,6 +31,7 @@
 #include "virtqueue.h"
 #include "virtio_rxtx.h"
 #include "virtio_rxtx_simple.h"
+#include "virtio_ring.h"
 
 #ifdef RTE_LIBRTE_VIRTIO_DEBUG_DUMP
 #define VIRTIO_DUMP_PACKET(m, len) rte_pktmbuf_dump(stdout, m, len)
@@ -105,6 +106,47 @@ vq_ring_free_id_packed(struct virtqueue *vq, uint16_t id)
     dxp->next = VQ_RING_DESC_CHAIN_END;
 }
 
+static uint16_t
+virtqueue_dequeue_burst_rx_packed(struct virtqueue *vq,
+                  struct rte_mbuf **rx_pkts,
+                  uint32_t *len,
+                  uint16_t num)
+{
+    struct rte_mbuf *cookie;
+    uint16_t used_idx;
+    uint16_t id;
+    struct vring_packed_desc *desc;
+    uint16_t i;
+
+    desc = vq->ring_packed.desc_packed;
+
+    for (i = 0; i < num; i++) {
+        used_idx = vq->vq_used_cons_idx;
+        if (!desc_is_used(&desc[used_idx], vq))
+            return i;
+        len[i] = desc[used_idx].len;
+        id = desc[used_idx].id;
+        cookie = (struct rte_mbuf *)vq->vq_descx[id].cookie;
+        if (unlikely(cookie == NULL)) {
+            PMD_DRV_LOG(ERR, "vring descriptor with no mbuf cookie at %u",
+                vq->vq_used_cons_idx);
+            break;
+        }
+        rte_prefetch0(cookie);
+        rte_packet_prefetch(rte_pktmbuf_mtod(cookie, void *));
+        rx_pkts[i] = cookie;
+
+        vq->vq_free_cnt++;
+        vq->vq_used_cons_idx++;
+        if (vq->vq_used_cons_idx >= vq->vq_nentries) {
+            vq->vq_used_cons_idx -= vq->vq_nentries;
+            vq->used_wrap_counter ^= 1;
+        }
+    }
+
+    return i;
+}
+
 static uint16_t
 virtqueue_dequeue_burst_rx(struct virtqueue *vq, struct rte_mbuf **rx_pkts,
                uint32_t *len, uint16_t num)
@@ -350,6 +392,50 @@ virtqueue_enqueue_recv_refill(struct virtqueue *vq, struct rte_mbuf *cookie)
     return 0;
 }
 
+static inline int
+virtqueue_enqueue_recv_refill_packed(struct virtqueue *vq, struct rte_mbuf **cookie,
+                     uint16_t num)
+{
+    struct vring_packed_desc *start_dp = vq->ring_packed.desc_packed;
+    uint16_t flags = VRING_DESC_F_WRITE | vq->avail_used_flags;
+    struct virtio_hw *hw = vq->hw;
+    struct vq_desc_extra *dxp;
+    uint16_t idx;
+    int i;
+
+    if (unlikely(vq->vq_free_cnt == 0))
+        return -ENOSPC;
+    if (unlikely(vq->vq_free_cnt < num))
+        return -EMSGSIZE;
+
+    for (i = 0; i < num; i++) {
+        idx = vq->vq_avail_idx;
+        dxp = &vq->vq_descx[idx];
+        dxp->cookie = (void *)cookie[i];
+        dxp->ndescs = 1;
+
+        start_dp[idx].addr = VIRTIO_MBUF_ADDR(cookie[i], vq) +
+            RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
+        start_dp[idx].len = cookie[i]->buf_len - RTE_PKTMBUF_HEADROOM +
+            hw->vtnet_hdr_size;
+
+        vq->vq_desc_head_idx = dxp->next;
+        if (vq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
+            vq->vq_desc_tail_idx = vq->vq_desc_head_idx;
+        rte_smp_wmb();
+        start_dp[idx].flags = flags;
+        if (++vq->vq_avail_idx >= vq->vq_nentries) {
+            vq->vq_avail_idx -= vq->vq_nentries;
+            vq->avail_wrap_counter ^= 1;
+            vq->avail_used_flags =
+                VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
+                VRING_DESC_F_USED(!vq->avail_wrap_counter);
+            flags = VRING_DESC_F_WRITE | vq->avail_used_flags;
+        }
+    }
+    vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - num);
+    return 0;
+}
+
 /* When doing TSO, the IP length is not included in the pseudo header
  * checksum of the packet given to the PMD, but for virtio it is
  * expected.
@@ -801,7 +887,10 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t queue_idx)
             break;
 
         /* Enqueue allocated buffers */
-        error = virtqueue_enqueue_recv_refill(vq, m);
+        if (vtpci_packed_queue(vq->hw))
+            error = virtqueue_enqueue_recv_refill_packed(vq, &m, 1);
+        else
+            error = virtqueue_enqueue_recv_refill(vq, m);
         if (error) {
             rte_pktmbuf_free(m);
             break;
@@ -809,7 +898,8 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t queue_idx)
         nbufs++;
     }
 
-    vq_update_avail_idx(vq);
+    if (!vtpci_packed_queue(vq->hw))
+        vq_update_avail_idx(vq);
 }
 
 PMD_INIT_LOG(DEBUG, "Allocated %d bufs", nbufs);
@@ -896,7 +986,10 @@ virtio_discard_rxbuf(struct virtqueue *vq, struct rte_mbuf *m)
      * Requeue the discarded mbuf. This should always be
      * successful since it was just dequeued.
      */
-    error = virtqueue_enqueue_recv_refill(vq, m);
+    if (vtpci_packed_queue(vq->hw))
+        error = virtqueue_enqueue_recv_refill_packed(vq, &m, 1);
+    else
+        error = virtqueue_enqueue_recv_refill(vq, m);
 
     if (unlikely(error)) {
         RTE_LOG(ERR, PMD, "cannot requeue discarded mbuf");
@@ -1136,6 +1229,103 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
     return nb_rx;
 }
 
+uint16_t
+virtio_recv_pkts_packed(void *rx_queue, struct rte_mbuf **rx_pkts,
+            uint16_t nb_pkts)
+{
+    struct virtnet_rx *rxvq = rx_queue;
+    struct virtqueue *vq = rxvq->vq;
+    struct virtio_hw *hw = vq->hw;
+    struct rte_mbuf *rxm, *new_mbuf;
+    uint16_t num, nb_rx;
+    uint32_t len[VIRTIO_MBUF_BURST_SZ];
+    struct rte_mbuf *rcv_pkts[VIRTIO_MBUF_BURST_SZ];
+    int error;
+    uint32_t i, nb_enqueued;
+    uint32_t hdr_size;
+    struct virtio_net_hdr *hdr;
+
+    nb_rx = 0;
+    if (unlikely(hw->started == 0))
+        return nb_rx;
+
+    num = RTE_MIN(VIRTIO_MBUF_BURST_SZ, nb_pkts);
+    if (likely(num > DESC_PER_CACHELINE))
+        num = num - ((vq->vq_used_cons_idx + num) % DESC_PER_CACHELINE);
+
+    num = virtqueue_dequeue_burst_rx_packed(vq, rcv_pkts, len, num);
+    PMD_RX_LOG(DEBUG, "dequeue:%d", num);
+
+    nb_enqueued = 0;
+    hdr_size = hw->vtnet_hdr_size;
+
+    for (i = 0; i < num; i++) {
+        rxm = rcv_pkts[i];
+
+        PMD_RX_LOG(DEBUG, "packet len:%d", len[i]);
+
+        if (unlikely(len[i] < hdr_size + ETHER_HDR_LEN)) {
+            PMD_RX_LOG(ERR, "Packet drop");
+            nb_enqueued++;
+            virtio_discard_rxbuf(vq, rxm);
+            rxvq->stats.errors++;
+            continue;
+        }
+
+        rxm->port = rxvq->port_id;
+        rxm->data_off = RTE_PKTMBUF_HEADROOM;
+        rxm->ol_flags = 0;
+        rxm->vlan_tci = 0;
+
+        rxm->pkt_len = (uint32_t)(len[i] - hdr_size);
+        rxm->data_len = (uint16_t)(len[i] - hdr_size);
+
+        hdr = (struct virtio_net_hdr *)((char *)rxm->buf_addr +
+            RTE_PKTMBUF_HEADROOM - hdr_size);
+
+        if (hw->vlan_strip)
+            rte_vlan_strip(rxm);
+
+        if (hw->has_rx_offload && virtio_rx_offload(rxm, hdr) < 0) {
+            virtio_discard_rxbuf(vq, rxm);
+            rxvq->stats.errors++;
+            continue;
+        }
+
+        virtio_rx_stats_updated(rxvq, rxm);
+
+        rx_pkts[nb_rx++] = rxm;
+    }
+
+    rxvq->stats.packets += nb_rx;
+
+    /* Allocate new mbuf for the used descriptor */
+    while (likely(!virtqueue_full(vq))) {
+        new_mbuf = rte_mbuf_raw_alloc(rxvq->mpool);
+        if (unlikely(new_mbuf == NULL)) {
+            struct rte_eth_dev *dev
+                = &rte_eth_devices[rxvq->port_id];
+            dev->data->rx_mbuf_alloc_failed++;
+            break;
+        }
+        error = virtqueue_enqueue_recv_refill_packed(vq, &new_mbuf, 1);
+        if (unlikely(error)) {
+            rte_pktmbuf_free(new_mbuf);
+            break;
+        }
+        nb_enqueued++;
+    }
+
+    if (likely(nb_enqueued)) {
+        if (unlikely(virtqueue_kick_prepare_packed(vq))) {
+            virtqueue_notify(vq);
+            PMD_RX_LOG(DEBUG, "Notified");
+        }
+    }
+
+    return nb_rx;
+}
+
 uint16_t
 virtio_recv_mergeable_pkts_inorder(void *rx_queue,
             struct rte_mbuf **rx_pkts,
@@ -1342,6 +1532,7 @@ virtio_recv_mergeable_pkts(void *rx_queue,
     uint16_t extra_idx;
     uint32_t seg_res;
     uint32_t hdr_size;
+    uint32_t rx_num = 0;
 
     nb_rx = 0;
     if (unlikely(hw->started == 0))
@@ -1367,6 +1558,8 @@ virtio_recv_mergeable_pkts(void *rx_queue,
             break;
 
         num = virtqueue_dequeue_burst_rx(vq, rcv_pkts, len, 1);
+        if (num == 0)
+            return nb_rx;
         if (num != 1)
             continue;
@@ -1419,11 +1612,8 @@ virtio_recv_mergeable_pkts(void *rx_queue,
             uint16_t rcv_cnt = RTE_MIN(seg_res, RTE_DIM(rcv_pkts));
             if (likely(VIRTQUEUE_NUSED(vq) >= rcv_cnt)) {
-                uint32_t rx_num =
-                    virtqueue_dequeue_burst_rx(vq,
-                    rcv_pkts, len, rcv_cnt);
-                i += rx_num;
-                rcv_cnt = rx_num;
+                rx_num = virtqueue_dequeue_burst_rx(vq,
+                    rcv_pkts, len, rcv_cnt);
             } else {
                 PMD_RX_LOG(ERR,
                     "No enough segments for packet.");
@@ -1432,6 +1622,8 @@ virtio_recv_mergeable_pkts(void *rx_queue,
                 rxvq->stats.errors++;
                 break;
             }
+            i += rx_num;
+            rcv_cnt = rx_num;
 
             extra_idx = 0;
@@ -1483,7 +1675,6 @@ virtio_recv_mergeable_pkts(void *rx_queue,
     if (likely(nb_enqueued)) {
         vq_update_avail_idx(vq);
-
         if (unlikely(virtqueue_kick_prepare(vq))) {
             virtqueue_notify(vq);
             PMD_RX_LOG(DEBUG, "Notified");
@@ -1493,6 +1684,182 @@ virtio_recv_mergeable_pkts(void *rx_queue,
     return nb_rx;
 }
 
+uint16_t
+virtio_recv_mergeable_pkts_packed(void *rx_queue,
+            struct rte_mbuf **rx_pkts,
+            uint16_t nb_pkts)
+{
+    struct virtnet_rx *rxvq = rx_queue;
+    struct virtqueue *vq = rxvq->vq;
+    struct virtio_hw *hw = vq->hw;
+    struct rte_mbuf *rxm;
+    struct rte_mbuf *prev = NULL;
+    uint16_t num, nb_rx = 0;
+    uint32_t len[VIRTIO_MBUF_BURST_SZ];
+    struct rte_mbuf *rcv_pkts[VIRTIO_MBUF_BURST_SZ];
+    uint32_t nb_enqueued = 0;
+    uint32_t seg_num = 0;
+    uint32_t seg_res = 0;
+    uint32_t hdr_size = hw->vtnet_hdr_size;
+    int32_t i;
+    int error;
+
+    if (unlikely(hw->started == 0))
+        return nb_rx;
+
+    num = nb_pkts;
+    if (unlikely(num > VIRTIO_MBUF_BURST_SZ))
+        num = VIRTIO_MBUF_BURST_SZ;
+    if (likely(num > DESC_PER_CACHELINE))
+        num = num - ((vq->vq_used_cons_idx + num) % DESC_PER_CACHELINE);
+
+    num = virtqueue_dequeue_burst_rx_packed(vq, rcv_pkts, len, num);
+
+    for (i = 0; i < num; i++) {
+        struct virtio_net_hdr_mrg_rxbuf *header;
+
+        PMD_RX_LOG(DEBUG, "dequeue:%d", num);
+        PMD_RX_LOG(DEBUG, "packet len:%d", len[i]);
+
+        rxm = rcv_pkts[i];
+
+        if (unlikely(len[i] < hdr_size + ETHER_HDR_LEN)) {
+            PMD_RX_LOG(ERR, "Packet drop");
+            nb_enqueued++;
+            virtio_discard_rxbuf(vq, rxm);
+            rxvq->stats.errors++;
+            continue;
+        }
+
+        header = (struct virtio_net_hdr_mrg_rxbuf *)((char *)rxm->buf_addr +
+            RTE_PKTMBUF_HEADROOM - hdr_size);
+        seg_num = header->num_buffers;
+
+        if (seg_num == 0)
+            seg_num = 1;
+
+        rxm->data_off = RTE_PKTMBUF_HEADROOM;
+        rxm->nb_segs = seg_num;
+        rxm->ol_flags = 0;
+        rxm->vlan_tci = 0;
+        rxm->pkt_len = (uint32_t)(len[i] - hdr_size);
+        rxm->data_len = (uint16_t)(len[i] - hdr_size);
+
+        rxm->port = rxvq->port_id;
+        rx_pkts[nb_rx] = rxm;
+        prev = rxm;
+
+        if (hw->has_rx_offload &&
+                virtio_rx_offload(rxm, &header->hdr) < 0) {
+            virtio_discard_rxbuf(vq, rxm);
+            rxvq->stats.errors++;
+            continue;
+        }
+
+        if (hw->vlan_strip)
+            rte_vlan_strip(rx_pkts[nb_rx]);
+
+        seg_res = seg_num - 1;
+
+        /* Merge remaining segments */
+        while (seg_res != 0 && i < (num - 1)) {
+            i++;
+
+            rxm = rcv_pkts[i];
+            rxm->data_off = RTE_PKTMBUF_HEADROOM - hdr_size;
+            rxm->pkt_len = (uint32_t)(len[i]);
+            rxm->data_len = (uint16_t)(len[i]);
+
+            rx_pkts[nb_rx]->pkt_len += (uint32_t)(len[i]);
+            rx_pkts[nb_rx]->data_len += (uint16_t)(len[i]);
+
+            if (prev)
+                prev->next = rxm;
+
+            prev = rxm;
+            seg_res -= 1;
+        }
+
+        if (!seg_res) {
+            virtio_rx_stats_updated(rxvq, rx_pkts[nb_rx]);
+            nb_rx++;
+        }
+    }
+
+    /* Last packet still need merge segments */
+    while (seg_res != 0) {
+        uint16_t rcv_cnt = RTE_MIN((uint16_t)seg_res,
+                    VIRTIO_MBUF_BURST_SZ);
+        if (likely(vq->vq_free_cnt >= rcv_cnt)) {
+            num = virtqueue_dequeue_burst_rx_packed(vq, rcv_pkts, len,
+                    rcv_cnt);
+            uint16_t extra_idx = 0;
+
+            rcv_cnt = num;
+
+            while (extra_idx < rcv_cnt) {
+                rxm = rcv_pkts[extra_idx];
+
+                rxm->data_off =
+                    RTE_PKTMBUF_HEADROOM - hdr_size;
+                rxm->pkt_len = (uint32_t)(len[extra_idx]);
+                rxm->data_len = (uint16_t)(len[extra_idx]);
+
+                prev->next = rxm;
+                prev = rxm;
+                rx_pkts[nb_rx]->pkt_len += len[extra_idx];
+                rx_pkts[nb_rx]->data_len += len[extra_idx];
+                extra_idx += 1;
+            }
+            seg_res -= rcv_cnt;
+            if (!seg_res) {
+                virtio_rx_stats_updated(rxvq, rx_pkts[nb_rx]);
+                nb_rx++;
+            }
+        } else {
+            PMD_RX_LOG(ERR,
+                "No enough segments for packet.");
+            if (prev)
+                virtio_discard_rxbuf(vq, prev);
+            rxvq->stats.errors++;
+            break;
+        }
+    }
+
+    rxvq->stats.packets += nb_rx;
+
+    /* Allocate new mbuf for the used descriptor */
+    if (likely(!virtqueue_full(vq))) {
+        /* free_cnt may include mrg descs */
+        uint16_t free_cnt = vq->vq_free_cnt;
+        struct rte_mbuf *new_pkts[free_cnt];
+
+        if (!rte_pktmbuf_alloc_bulk(rxvq->mpool, new_pkts, free_cnt)) {
+            error = virtqueue_enqueue_recv_refill_packed(vq, new_pkts,
+                    free_cnt);
+            if (unlikely(error)) {
+                for (i = 0; i < free_cnt; i++)
+                    rte_pktmbuf_free(new_pkts[i]);
+            }
+            nb_enqueued += free_cnt;
+        } else {
+            struct rte_eth_dev *dev =
+                &rte_eth_devices[rxvq->port_id];
+            dev->data->rx_mbuf_alloc_failed += free_cnt;
+        }
+    }
+
+    if (likely(nb_enqueued)) {
+        if (unlikely(virtqueue_kick_prepare_packed(vq))) {
+            virtqueue_notify(vq);
+            PMD_RX_LOG(DEBUG, "Notified");
+        }
+    }
+
+    return nb_rx;
+}
+
 uint16_t
 virtio_xmit_pkts_packed(void *tx_queue, struct rte_mbuf **tx_pkts,
             uint16_t nb_pkts)
 {
diff --git a/drivers/net/virtio/virtqueue.c b/drivers/net/virtio/virtqueue.c
index 56a77cc71..f6dbb7d82 100644
--- a/drivers/net/virtio/virtqueue.c
+++ b/drivers/net/virtio/virtqueue.c
@@ -65,6 +65,28 @@ virtqueue_rxvq_flush(struct virtqueue *vq)
     uint16_t used_idx, desc_idx;
     uint16_t nb_used, i;
 
+    if (vtpci_packed_queue(vq->hw)) {
+        struct vring_packed_desc *descs = vq->ring_packed.desc_packed;
+        int cnt = 0;
+
+        i = vq->vq_used_cons_idx;
+        while (desc_is_used(&descs[i], vq) && cnt++ < vq->vq_nentries) {
+            dxp = &vq->vq_descx[descs[i].id];
+            if (dxp->cookie != NULL) {
+                rte_pktmbuf_free(dxp->cookie);
+                dxp->cookie = NULL;
+            }
+            vq->vq_free_cnt++;
+            vq->vq_used_cons_idx++;
+            if (vq->vq_used_cons_idx >= vq->vq_nentries) {
+                vq->vq_used_cons_idx -= vq->vq_nentries;
+                vq->used_wrap_counter ^= 1;
+            }
+            i = vq->vq_used_cons_idx;
+        }
+        return;
+    }
+
     nb_used = VIRTQUEUE_NUSED(vq);
 
     for (i = 0; i < nb_used; i++) {
diff --git a/drivers/net/virtio/virtqueue.h b/drivers/net/virtio/virtqueue.h
index 5119818e1..bd8645019 100644
--- a/drivers/net/virtio/virtqueue.h
+++ b/drivers/net/virtio/virtqueue.h
@@ -397,7 +397,7 @@ virtio_get_queue_type(struct virtio_hw *hw, uint16_t vtpci_queue_idx)
 #define VIRTQUEUE_NUSED(vq) ((uint16_t)((vq)->vq_ring.used->idx - (vq)->vq_used_cons_idx))
 
 void vq_ring_free_chain(struct virtqueue *vq, uint16_t desc_idx);
-void vq_ring_free_chain_packed(struct virtqueue *vq, uint16_t desc_idx);
+void vq_ring_free_chain_packed(struct virtqueue *vq, uint16_t used_idx);
 void vq_ring_free_inorder(struct virtqueue *vq, uint16_t desc_idx,
               uint16_t num);

From patchwork Thu Dec 13 12:34:50 2018
(Postfix) with ESMTPS id C236B3082AFB; Thu, 13 Dec 2018 12:36:00 +0000 (UTC) Received: from localhost (unknown [10.36.118.169]) by smtp.corp.redhat.com (Postfix) with ESMTPS id 2D3EB60C66; Thu, 13 Dec 2018 12:36:00 +0000 (UTC) From: Jens Freimann To: dev@dpdk.org Cc: tiwei.bie@intel.com, maxime.coquelin@redhat.com, Gavin.Hu@arm.com Date: Thu, 13 Dec 2018 13:34:50 +0100 Message-Id: <20181213123453.15035-8-jfreimann@redhat.com> In-Reply-To: <20181213123453.15035-1-jfreimann@redhat.com> References: <20181213123453.15035-1-jfreimann@redhat.com> X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12 X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.5.16 (mx1.redhat.com [10.5.110.45]); Thu, 13 Dec 2018 12:36:00 +0000 (UTC) Subject: [dpdk-dev] [PATCH v12 07/10] net/virtio: add virtio send command packed queue support X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" Use packed virtqueue format when reading and writing descriptors to/from the ring. Signed-off-by: Jens Freimann --- drivers/net/virtio/virtio_ethdev.c | 90 ++++++++++++++++++++++++++++++ 1 file changed, 90 insertions(+) diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c index 381816adf..e464df9ca 100644 --- a/drivers/net/virtio/virtio_ethdev.c +++ b/drivers/net/virtio/virtio_ethdev.c @@ -141,6 +141,90 @@ static const struct rte_virtio_xstats_name_off rte_virtio_txq_stat_strings[] = { struct virtio_hw_internal virtio_hw_internal[RTE_MAX_ETHPORTS]; +static struct virtio_pmd_ctrl * +virtio_pq_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl, + int *dlen, int pkt_num) +{ + struct virtqueue *vq = cvq->vq; + int head; + struct vring_packed_desc *desc = vq->ring_packed.desc_packed; + struct virtio_pmd_ctrl *result; + int wrap_counter; + int sum = 0; + int k; + + /* + * Format is enforced in qemu code: + * One TX packet for header; + * At least one TX packet per argument; + * One RX packet for ACK. 
diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 381816adf..e464df9ca 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -141,6 +141,90 @@ static const struct rte_virtio_xstats_name_off rte_virtio_txq_stat_strings[] = {
 
 struct virtio_hw_internal virtio_hw_internal[RTE_MAX_ETHPORTS];
 
+static struct virtio_pmd_ctrl *
+virtio_pq_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
+		       int *dlen, int pkt_num)
+{
+	struct virtqueue *vq = cvq->vq;
+	int head;
+	struct vring_packed_desc *desc = vq->ring_packed.desc_packed;
+	struct virtio_pmd_ctrl *result;
+	int wrap_counter;
+	int sum = 0;
+	int k;
+
+	/*
+	 * Format is enforced in qemu code:
+	 * One TX packet for header;
+	 * At least one TX packet per argument;
+	 * One RX packet for ACK.
+	 */
+	head = vq->vq_avail_idx;
+	wrap_counter = vq->avail_wrap_counter;
+	desc[head].flags = VRING_DESC_F_NEXT;
+	desc[head].addr = cvq->virtio_net_hdr_mem;
+	desc[head].len = sizeof(struct virtio_net_ctrl_hdr);
+	vq->vq_free_cnt--;
+	if (++vq->vq_avail_idx >= vq->vq_nentries) {
+		vq->vq_avail_idx -= vq->vq_nentries;
+		vq->avail_wrap_counter ^= 1;
+	}
+
+	for (k = 0; k < pkt_num; k++) {
+		desc[vq->vq_avail_idx].addr = cvq->virtio_net_hdr_mem
+			+ sizeof(struct virtio_net_ctrl_hdr)
+			+ sizeof(ctrl->status) + sizeof(uint8_t) * sum;
+		desc[vq->vq_avail_idx].len = dlen[k];
+		desc[vq->vq_avail_idx].flags = VRING_DESC_F_NEXT;
+		sum += dlen[k];
+		_set_desc_avail(&desc[vq->vq_avail_idx],
+				vq->avail_wrap_counter);
+		rte_smp_wmb();
+		vq->vq_free_cnt--;
+		if (++vq->vq_avail_idx >= vq->vq_nentries) {
+			vq->vq_avail_idx -= vq->vq_nentries;
+			vq->avail_wrap_counter ^= 1;
+		}
+	}
+
+	desc[vq->vq_avail_idx].addr = cvq->virtio_net_hdr_mem
+		+ sizeof(struct virtio_net_ctrl_hdr);
+	desc[vq->vq_avail_idx].len = sizeof(ctrl->status);
+	desc[vq->vq_avail_idx].flags = VRING_DESC_F_WRITE;
+	_set_desc_avail(&desc[vq->vq_avail_idx],
+			vq->avail_wrap_counter);
+	_set_desc_avail(&desc[head], wrap_counter);
+	rte_smp_wmb();
+
+	vq->vq_free_cnt--;
+	if (++vq->vq_avail_idx >= vq->vq_nentries) {
+		vq->vq_avail_idx -= vq->vq_nentries;
+		vq->avail_wrap_counter ^= 1;
+	}
+
+	virtqueue_notify(vq);
+
+	/* wait for used descriptors in virtqueue */
+	do {
+		rte_rmb();
+		usleep(100);
+	} while (!desc_is_used(&desc[head], vq));
+
+	/* now get used descriptors */
+	while (desc_is_used(&desc[vq->vq_used_cons_idx], vq)) {
+		vq->vq_free_cnt++;
+		if (++vq->vq_used_cons_idx >= vq->vq_nentries) {
+			vq->vq_used_cons_idx -= vq->vq_nentries;
+			vq->used_wrap_counter ^= 1;
+		}
+	}
+
+	result = cvq->virtio_net_hdr_mz->addr;
+	return result;
+}
+
 static int
 virtio_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
 		    int *dlen, int pkt_num)
@@ -174,6 +258,11 @@ virtio_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
 	memcpy(cvq->virtio_net_hdr_mz->addr, ctrl,
 		sizeof(struct virtio_pmd_ctrl));
 
+	if (vtpci_packed_queue(vq->hw)) {
+		result = virtio_pq_send_command(cvq, ctrl, dlen, pkt_num);
+		goto out_unlock;
+	}
+
 	/*
 	 * Format is enforced in qemu code:
 	 * One TX packet for header;
@@ -245,6 +334,7 @@ virtio_send_command(struct virtnet_ctl *cvq, struct virtio_pmd_ctrl *ctrl,
 
 	result = cvq->virtio_net_hdr_mz->addr;
 
+out_unlock:
 	rte_spinlock_unlock(&cvq->lock);
 	return result->status;
 }
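
As a usage sketch, this is how a control command typically reaches this
path; once VIRTIO_F_RING_PACKED is negotiated, the same call dispatches to
virtio_pq_send_command() internally. The wrapper name below is hypothetical
and assumes the driver's internal headers; the ctrl/dlen layout mirrors the
existing virtio_set_multiple_queues() in this file:

    /* Sketch: ask the device to enable nb_pairs rx/tx queue pairs via
     * the control virtqueue. Needs <string.h> for memcpy(). */
    static int
    set_mq_pairs_example(struct virtnet_ctl *cvq, uint16_t nb_pairs)
    {
        struct virtio_pmd_ctrl ctrl;
        int dlen[1];

        ctrl.hdr.class = VIRTIO_NET_CTRL_MQ;
        ctrl.hdr.cmd = VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET;
        memcpy(ctrl.data, &nb_pairs, sizeof(nb_pairs));
        dlen[0] = sizeof(nb_pairs);

        /* returns the device's status byte (0 on success) */
        return virtio_send_command(cvq, &ctrl, dlen, 1);
    }
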
From patchwork Thu Dec 13 12:34:51 2018
X-Patchwork-Id: 48777
From: Jens Freimann
To: dev@dpdk.org
Cc: tiwei.bie@intel.com, maxime.coquelin@redhat.com, Gavin.Hu@arm.com
Date: Thu, 13 Dec 2018 13:34:51 +0100
Message-Id: <20181213123453.15035-9-jfreimann@redhat.com>
In-Reply-To: <20181213123453.15035-1-jfreimann@redhat.com>
References: <20181213123453.15035-1-jfreimann@redhat.com>
Subject: [dpdk-dev] [PATCH v12 08/10] net/virtio-user: add option to use packed queues

From: Yuanhan Liu

Add option to enable packed queue support for virtio-user devices.

Signed-off-by: Yuanhan Liu
---
 .../net/virtio/virtio_user/virtio_user_dev.c | 18 ++++++++++++++----
 .../net/virtio/virtio_user/virtio_user_dev.h |  2 +-
 drivers/net/virtio/virtio_user_ethdev.c      | 14 +++++++++++++-
 3 files changed, 28 insertions(+), 6 deletions(-)

diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 20816c936..697ba4ae8 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -58,6 +58,8 @@ virtio_user_kick_queue(struct virtio_user_dev *dev, uint32_t queue_sel)
 
 	state.index = queue_sel;
 	state.num = 0; /* no reservation */
+	if (dev->features & (1ULL << VIRTIO_F_RING_PACKED))
+		state.num |= (1 << 15);
 	dev->ops->send_request(dev, VHOST_USER_SET_VRING_BASE, &state);
 
 	dev->ops->send_request(dev, VHOST_USER_SET_VRING_ADDR, &addr);
@@ -407,12 +409,13 @@ virtio_user_dev_setup(struct virtio_user_dev *dev)
 	 1ULL << VIRTIO_NET_F_GUEST_TSO4 |	\
 	 1ULL << VIRTIO_NET_F_GUEST_TSO6 |	\
 	 1ULL << VIRTIO_F_IN_ORDER       |	\
-	 1ULL << VIRTIO_F_VERSION_1)
+	 1ULL << VIRTIO_F_VERSION_1      |	\
+	 1ULL << VIRTIO_F_RING_PACKED)
 
 int
 virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
 		     int cq, int queue_size, const char *mac, char **ifname,
-		     int mrg_rxbuf, int in_order)
+		     int mrg_rxbuf, int in_order, int packed_vq)
 {
 	pthread_mutex_init(&dev->mutex, NULL);
 	snprintf(dev->path, PATH_MAX, "%s", path);
@@ -464,10 +467,17 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
 	if (!in_order)
 		dev->unsupported_features |= (1ull << VIRTIO_F_IN_ORDER);
 
-	if (dev->mac_specified)
-		dev->frontend_features |= (1ull << VIRTIO_NET_F_MAC);
+	if (packed_vq)
+		dev->device_features |= (1ull << VIRTIO_F_RING_PACKED);
 	else
+		dev->device_features &= ~(1ull << VIRTIO_F_RING_PACKED);
+
+	if (dev->mac_specified) {
+		dev->device_features |= (1ull << VIRTIO_NET_F_MAC);
+	} else {
+		dev->device_features &= ~(1ull << VIRTIO_NET_F_MAC);
 		dev->unsupported_features |= (1ull << VIRTIO_NET_F_MAC);
+	}
 
 	if (cq) {
 		/* device does not really need to know anything about CQ,
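
The state.num |= (1 << 15) in the kick path above seeds the initial avail
wrap counter for the backend. For reference, a sketch of the full base-index
encoding vhost uses for packed rings on GET/SET_VRING_BASE (bit layout as in
the kernel vhost implementation; the helper itself is illustrative and not
part of the patch, which only needs the avail half at setup):

    #include <stdint.h>

    /* Packed-ring vring base, packed into one 32-bit value:
     *   bits  0..14  last_avail_idx
     *   bit      15  last_avail_wrap_counter
     *   bits 16..30  last_used_idx
     *   bit      31  last_used_wrap_counter
     * At queue setup both indexes are 0 and both counters are 1. */
    static inline uint32_t
    packed_vring_base_encode(uint16_t avail_idx, int avail_wrap,
                             uint16_t used_idx, int used_wrap)
    {
        return (avail_idx & 0x7fff) |
               ((uint32_t)!!avail_wrap << 15) |
               ((uint32_t)(used_idx & 0x7fff) << 16) |
               ((uint32_t)!!used_wrap << 31);
    }
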
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.h b/drivers/net/virtio/virtio_user/virtio_user_dev.h
index c42ce5d4b..672a8161a 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.h
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.h
@@ -50,7 +50,7 @@ int virtio_user_start_device(struct virtio_user_dev *dev);
 int virtio_user_stop_device(struct virtio_user_dev *dev);
 int virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
 			 int cq, int queue_size, const char *mac, char **ifname,
-			 int mrg_rxbuf, int in_order);
+			 int mrg_rxbuf, int in_order, int packed_vq);
 void virtio_user_dev_uninit(struct virtio_user_dev *dev);
 void virtio_user_handle_cq(struct virtio_user_dev *dev, uint16_t queue_idx);
 uint8_t virtio_user_handle_mq(struct virtio_user_dev *dev, uint16_t q_pairs);
diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
index f8791391a..af2800605 100644
--- a/drivers/net/virtio/virtio_user_ethdev.c
+++ b/drivers/net/virtio/virtio_user_ethdev.c
@@ -361,6 +361,8 @@ static const char *valid_args[] = {
 	VIRTIO_USER_ARG_MRG_RXBUF,
 #define VIRTIO_USER_ARG_IN_ORDER       "in_order"
 	VIRTIO_USER_ARG_IN_ORDER,
+#define VIRTIO_USER_ARG_PACKED_VQ      "packed_vq"
+	VIRTIO_USER_ARG_PACKED_VQ,
 	NULL
 };
 
@@ -468,6 +470,7 @@ virtio_user_pmd_probe(struct rte_vdev_device *dev)
 	char *ifname = NULL;
 	char *mac_addr = NULL;
 	int ret = -1;
+	uint64_t packed_vq = 0;
 
 	kvlist = rte_kvargs_parse(rte_vdev_device_args(dev), valid_args);
 	if (!kvlist) {
@@ -551,6 +554,15 @@ virtio_user_pmd_probe(struct rte_vdev_device *dev)
 		cq = 1;
 	}
 
+	if (rte_kvargs_count(kvlist, VIRTIO_USER_ARG_PACKED_VQ) == 1) {
+		if (rte_kvargs_process(kvlist, VIRTIO_USER_ARG_PACKED_VQ,
+				       &get_integer_arg, &packed_vq) < 0) {
+			PMD_INIT_LOG(ERR, "error to parse %s",
+				     VIRTIO_USER_ARG_PACKED_VQ);
+			goto end;
+		}
+	}
+
 	if (queues > 1 && cq == 0) {
 		PMD_INIT_LOG(ERR, "multi-q requires ctrl-q");
 		goto end;
@@ -598,7 +610,7 @@ virtio_user_pmd_probe(struct rte_vdev_device *dev)
 	vu_dev->is_server = false;
 	if (virtio_user_dev_init(hw->virtio_user_dev, path, queues, cq,
 				 queue_size, mac_addr, &ifname, mrg_rxbuf,
-				 in_order) < 0) {
+				 in_order, packed_vq) < 0) {
 		PMD_INIT_LOG(ERR, "virtio_user_dev_init fails");
 		virtio_user_eth_dev_free(eth_dev);
 		goto end;
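
With the devarg wired up, packed queues can be requested per device, for
example (illustrative invocation; the socket path is a placeholder):

    ./testpmd -l 0-1 --no-pci \
        --vdev 'net_virtio_user0,path=/tmp/vhost.sock,queues=1,packed_vq=1' \
        -- -i

Note that packed_vq=1 only advertises the feature on the device side; it is
still subject to normal feature negotiation with the vhost backend.
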
From patchwork Thu Dec 13 12:34:52 2018
X-Patchwork-Id: 48778
From: Jens Freimann
To: dev@dpdk.org
Cc: tiwei.bie@intel.com, maxime.coquelin@redhat.com, Gavin.Hu@arm.com
Date: Thu, 13 Dec 2018 13:34:52 +0100
Message-Id: <20181213123453.15035-10-jfreimann@redhat.com>
In-Reply-To: <20181213123453.15035-1-jfreimann@redhat.com>
References: <20181213123453.15035-1-jfreimann@redhat.com>
Subject: [dpdk-dev] [PATCH v12 09/10] net/virtio-user: ctrl virtqueue not supported with packed vqs

Until control virtqueues are supported with packed queues, disable them
and exit if one is requested as a device parameter.

Signed-off-by: Jens Freimann
---
 drivers/net/virtio/virtio_user/virtio_user_dev.c | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index 697ba4ae8..f9b83c423 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -467,10 +467,14 @@ virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,
 	if (!in_order)
 		dev->unsupported_features |= (1ull << VIRTIO_F_IN_ORDER);
 
-	if (packed_vq)
+	if (packed_vq) {
+		if (cq)
+			rte_exit(EXIT_FAILURE,
+				 "control vq not supported with packed virtqueues\n");
 		dev->device_features |= (1ull << VIRTIO_F_RING_PACKED);
-	else
+	} else {
 		dev->device_features &= ~(1ull << VIRTIO_F_RING_PACKED);
+	}
 
 	if (dev->mac_specified) {
 		dev->device_features |= (1ull << VIRTIO_NET_F_MAC);

From patchwork Thu Dec 13 12:34:53 2018
X-Patchwork-Id: 48779
From: Jens Freimann
To: dev@dpdk.org
Cc: tiwei.bie@intel.com, maxime.coquelin@redhat.com, Gavin.Hu@arm.com
Date: Thu, 13 Dec 2018 13:34:53 +0100
Message-Id: <20181213123453.15035-11-jfreimann@redhat.com>
In-Reply-To: <20181213123453.15035-1-jfreimann@redhat.com>
References: <20181213123453.15035-1-jfreimann@redhat.com>
Subject: [dpdk-dev] [PATCH v12 10/10] net/virtio: enable packed virtqueues by default

Add VIRTIO_F_RING_PACKED to the supported feature sets so that packed
virtqueues are negotiated by default when the device offers them.

Signed-off-by: Jens Freimann
---
 drivers/net/virtio/virtio_ethdev.h               | 1 +
 drivers/net/virtio/virtio_user/virtio_user_dev.c | 3 ++-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
index 5cf295418..a668544e4 100644
--- a/drivers/net/virtio/virtio_ethdev.h
+++ b/drivers/net/virtio/virtio_ethdev.h
@@ -34,6 +34,7 @@
 	 1u << VIRTIO_RING_F_INDIRECT_DESC |	\
 	 1ULL << VIRTIO_F_VERSION_1      |	\
 	 1ULL << VIRTIO_F_IN_ORDER       |	\
+	 1ULL << VIRTIO_F_RING_PACKED    |	\
 	 1ULL << VIRTIO_F_IOMMU_PLATFORM)
 
 #define VIRTIO_PMD_SUPPORTED_GUEST_FEATURES	\
diff --git a/drivers/net/virtio/virtio_user/virtio_user_dev.c b/drivers/net/virtio/virtio_user/virtio_user_dev.c
index f9b83c423..957f34759 100644
--- a/drivers/net/virtio/virtio_user/virtio_user_dev.c
+++ b/drivers/net/virtio/virtio_user/virtio_user_dev.c
@@ -410,7 +410,8 @@ virtio_user_dev_setup(struct virtio_user_dev *dev)
 	 1ULL << VIRTIO_NET_F_GUEST_TSO6 |	\
 	 1ULL << VIRTIO_F_IN_ORDER       |	\
 	 1ULL << VIRTIO_F_VERSION_1      |	\
-	 1ULL << VIRTIO_F_RING_PACKED)
+	 1ULL << VIRTIO_F_RING_PACKED    |	\
+	 1ULL << VIRTIO_RING_F_EVENT_IDX)
 
 int
 virtio_user_dev_init(struct virtio_user_dev *dev, char *path, int queues,