From patchwork Tue Dec 11 13:48:02 2018
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: dev@dpdk.org, jfreimann@redhat.com, tiwei.bie@intel.com,
	zhihong.wang@intel.com
Date: Tue, 11 Dec 2018 14:48:02 +0100
Message-Id: <20181211134804.10318-2-maxime.coquelin@redhat.com>
In-Reply-To: <20181211134804.10318-1-maxime.coquelin@redhat.com>
References: <20181211134804.10318-1-maxime.coquelin@redhat.com>
Subject: [dpdk-dev] [PATCH v2 1/3] net/virtio: inline refill and offload helpers

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Jens Freimann <jfreimann@redhat.com>
---
 drivers/net/virtio/virtio_rxtx.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index eb891433e..e1c270b1c 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -741,7 +741,7 @@ virtio_dev_tx_queue_setup_finish(struct rte_eth_dev *dev,
 	return 0;
 }
 
-static void
+static inline void
 virtio_discard_rxbuf(struct virtqueue *vq, struct rte_mbuf *m)
 {
 	int error;
@@ -757,7 +757,7 @@ virtio_discard_rxbuf(struct virtqueue *vq, struct rte_mbuf *m)
 	}
 }
 
-static void
+static inline void
 virtio_discard_rxbuf_inorder(struct virtqueue *vq, struct rte_mbuf *m)
 {
 	int error;
@@ -769,7 +769,7 @@ virtio_discard_rxbuf_inorder(struct virtqueue *vq, struct rte_mbuf *m)
 	}
 }
 
-static void
+static inline void
 virtio_update_packet_stats(struct virtnet_stats *stats, struct rte_mbuf *mbuf)
 {
 	uint32_t s = mbuf->pkt_len;
@@ -811,7 +811,7 @@ virtio_rx_stats_updated(struct virtnet_rx *rxvq, struct rte_mbuf *m)
 }
 
 /* Optionally fill offload information in structure */
-static int
+static inline int
 virtio_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 {
 	struct rte_net_hdr_lens hdr_lens;
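The change above is mechanical, but the intent is easy to miss: these
helpers run once per received packet, so letting the compiler inline
them removes a call/return from the hottest loop in the driver. A
minimal sketch of the idea, with hypothetical names (this is an
illustration, not driver code):

    #include <stdint.h>

    struct pkt_stats {
            uint64_t packets;
            uint64_t bytes;
    };

    /* Small per-packet helper: declared static inline so the compiler
     * can fold it into the burst loop instead of emitting a call per
     * packet. */
    static inline void
    stats_update(struct pkt_stats *st, uint32_t pkt_len)
    {
            st->packets++;
            st->bytes += pkt_len;
    }

    static void
    burst_loop(struct pkt_stats *st, const uint32_t *lens, uint16_t n)
    {
            uint16_t i;

            for (i = 0; i < n; i++)
                    stats_update(st, lens[i]); /* no call overhead */
    }

The keyword is only a hint; with the definition in the same file a
compiler may inline anyway, but the annotation documents that these are
hot-path helpers.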
From patchwork Tue Dec 11 13:48:03 2018
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: dev@dpdk.org, jfreimann@redhat.com, tiwei.bie@intel.com,
	zhihong.wang@intel.com
Date: Tue, 11 Dec 2018 14:48:03 +0100
Message-Id: <20181211134804.10318-3-maxime.coquelin@redhat.com>
In-Reply-To: <20181211134804.10318-1-maxime.coquelin@redhat.com>
References: <20181211134804.10318-1-maxime.coquelin@redhat.com>
Subject: [dpdk-dev] [PATCH v2 2/3] net/virtio: add non-mergeable support to in-order path

This patch adds support for the in-order path when the mergeable
buffers feature hasn't been negotiated.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
---
 drivers/net/virtio/virtio_ethdev.c | 11 +++--------
 drivers/net/virtio/virtio_ethdev.h |  2 +-
 drivers/net/virtio/virtio_rxtx.c   | 10 +++++++---
 3 files changed, 11 insertions(+), 12 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 2ba66d291..330b0d7d8 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -1332,9 +1332,9 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
 		eth_dev->rx_pkt_burst = virtio_recv_pkts_vec;
 	} else if (hw->use_inorder_rx) {
 		PMD_INIT_LOG(INFO,
-			"virtio: using inorder mergeable buffer Rx path on port %u",
+			"virtio: using in-order Rx path on port %u",
 			eth_dev->data->port_id);
-		eth_dev->rx_pkt_burst = &virtio_recv_mergeable_pkts_inorder;
+		eth_dev->rx_pkt_burst = &virtio_recv_pkts_inorder;
 	} else if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
 		PMD_INIT_LOG(INFO,
 			"virtio: using mergeable buffer Rx path on port %u",
@@ -1906,12 +1906,7 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 
 	if (vtpci_with_feature(hw, VIRTIO_F_IN_ORDER)) {
 		hw->use_inorder_tx = 1;
-		if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
-			hw->use_inorder_rx = 1;
-			hw->use_simple_rx = 0;
-		} else {
-			hw->use_inorder_rx = 0;
-		}
+		hw->use_inorder_rx = 1;
 	}
 
 #if defined RTE_ARCH_ARM64 || defined RTE_ARCH_ARM
diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
index e0f80e5a4..8c1e326af 100644
--- a/drivers/net/virtio/virtio_ethdev.h
+++ b/drivers/net/virtio/virtio_ethdev.h
@@ -77,7 +77,7 @@ uint16_t virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 uint16_t virtio_recv_mergeable_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		uint16_t nb_pkts);
 
-uint16_t virtio_recv_mergeable_pkts_inorder(void *rx_queue,
+uint16_t virtio_recv_pkts_inorder(void *rx_queue,
 		struct rte_mbuf **rx_pkts, uint16_t nb_pkts);
 
 uint16_t virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index e1c270b1c..ebe5c74b5 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -989,7 +989,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 }
 
 uint16_t
-virtio_recv_mergeable_pkts_inorder(void *rx_queue,
+virtio_recv_pkts_inorder(void *rx_queue,
 			struct rte_mbuf **rx_pkts,
 			uint16_t nb_pkts)
 {
@@ -1046,10 +1046,14 @@ virtio_recv_pkts_inorder(void *rx_queue,
 		header = (struct virtio_net_hdr_mrg_rxbuf *)
 			((char *)rxm->buf_addr + RTE_PKTMBUF_HEADROOM
 			- hdr_size);
-		seg_num = header->num_buffers;
 
-		if (seg_num == 0)
+		if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
+			seg_num = header->num_buffers;
+			if (seg_num == 0)
+				seg_num = 1;
+		} else {
 			seg_num = 1;
+		}
 
 		rxm->data_off = RTE_PKTMBUF_HEADROOM;
 		rxm->nb_segs = seg_num;
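The behavioral core of the change above is the seg_num selection: the
num_buffers field of the virtio-net header is only meaningful when
VIRTIO_NET_F_MRG_RXBUF has been negotiated. A self-contained sketch of
that rule, using a simplified stand-in for the driver's header type
(an illustration, not driver code):

    #include <stdint.h>
    #include <stdbool.h>

    /* Stand-in for struct virtio_net_hdr_mrg_rxbuf (hypothetical). */
    struct net_hdr_mrg {
            uint16_t num_buffers;
    };

    static uint32_t
    rx_seg_num(bool mrg_rxbuf_negotiated, const struct net_hdr_mrg *hdr)
    {
            if (mrg_rxbuf_negotiated) {
                    /* The device reports how many descriptors the
                     * packet spans; treat 0 defensively as 1. */
                    return hdr->num_buffers ? hdr->num_buffers : 1;
            }
            /* Feature not negotiated: every packet fits in a single
             * buffer, so the header field must not be trusted. */
            return 1;
    }

This is what lets one receive function serve both the mergeable and
non-mergeable in-order cases.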
From patchwork Tue Dec 11 13:48:04 2018
From: Maxime Coquelin <maxime.coquelin@redhat.com>
To: dev@dpdk.org, jfreimann@redhat.com, tiwei.bie@intel.com,
	zhihong.wang@intel.com
Date: Tue, 11 Dec 2018 14:48:04 +0100
Message-Id: <20181211134804.10318-4-maxime.coquelin@redhat.com>
In-Reply-To: <20181211134804.10318-1-maxime.coquelin@redhat.com>
References: <20181211134804.10318-1-maxime.coquelin@redhat.com>
Subject: [dpdk-dev] [PATCH v2 3/3] net/virtio: improve batching in mergeable path

This patch improves both descriptor dequeue and refill by using the
same batching strategy as done in the in-order path.

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Tested-by: Jens Freimann <jfreimann@redhat.com>
Reviewed-by: Jens Freimann <jfreimann@redhat.com>
---
 drivers/net/virtio/virtio_rxtx.c | 237 ++++++++++++++++---------------
 1 file changed, 126 insertions(+), 111 deletions(-)

diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index ebe5c74b5..59bcac2f7 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -267,41 +267,42 @@ virtqueue_enqueue_refill_inorder(struct virtqueue *vq,
 }
 
 static inline int
-virtqueue_enqueue_recv_refill(struct virtqueue *vq, struct rte_mbuf *cookie)
+virtqueue_enqueue_recv_refill(struct virtqueue *vq, struct rte_mbuf **cookie,
+				uint16_t num)
 {
 	struct vq_desc_extra *dxp;
 	struct virtio_hw *hw = vq->hw;
-	struct vring_desc *start_dp;
-	uint16_t needed = 1;
-	uint16_t head_idx, idx;
+	struct vring_desc *start_dp = vq->vq_ring.desc;
+	uint16_t idx, i;
 
 	if (unlikely(vq->vq_free_cnt == 0))
 		return -ENOSPC;
-	if (unlikely(vq->vq_free_cnt < needed))
+	if (unlikely(vq->vq_free_cnt < num))
 		return -EMSGSIZE;
 
-	head_idx = vq->vq_desc_head_idx;
-	if (unlikely(head_idx >= vq->vq_nentries))
+	if (unlikely(vq->vq_desc_head_idx >= vq->vq_nentries))
 		return -EFAULT;
 
-	idx = head_idx;
-	dxp = &vq->vq_descx[idx];
-	dxp->cookie = (void *)cookie;
-	dxp->ndescs = needed;
+	for (i = 0; i < num; i++) {
+		idx = vq->vq_desc_head_idx;
+		dxp = &vq->vq_descx[idx];
+		dxp->cookie = (void *)cookie[i];
+		dxp->ndescs = 1;
 
-	start_dp = vq->vq_ring.desc;
-	start_dp[idx].addr =
-		VIRTIO_MBUF_ADDR(cookie, vq) +
-		RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
-	start_dp[idx].len =
-		cookie->buf_len - RTE_PKTMBUF_HEADROOM + hw->vtnet_hdr_size;
-	start_dp[idx].flags = VRING_DESC_F_WRITE;
-	idx = start_dp[idx].next;
-	vq->vq_desc_head_idx = idx;
-	if (vq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
-		vq->vq_desc_tail_idx = idx;
-	vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - needed);
-	vq_update_avail_ring(vq, head_idx);
+		start_dp[idx].addr =
+			VIRTIO_MBUF_ADDR(cookie[i], vq) +
+			RTE_PKTMBUF_HEADROOM - hw->vtnet_hdr_size;
+		start_dp[idx].len =
+			cookie[i]->buf_len - RTE_PKTMBUF_HEADROOM +
+			hw->vtnet_hdr_size;
+		start_dp[idx].flags = VRING_DESC_F_WRITE;
+		vq->vq_desc_head_idx = start_dp[idx].next;
+		if (vq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
+			vq->vq_desc_tail_idx = vq->vq_desc_head_idx;
+		vq_update_avail_ring(vq, idx);
+	}
+
+	vq->vq_free_cnt = (uint16_t)(vq->vq_free_cnt - num);
 
 	return 0;
 }
@@ -656,7 +657,7 @@ virtio_dev_rx_queue_setup_finish(struct rte_eth_dev *dev, uint16_t queue_idx)
 			break;
 
 		/* Enqueue allocated buffers */
-		error = virtqueue_enqueue_recv_refill(vq, m);
+		error = virtqueue_enqueue_recv_refill(vq, &m, 1);
 		if (error) {
 			rte_pktmbuf_free(m);
 			break;
@@ -749,7 +750,7 @@ virtio_discard_rxbuf(struct virtqueue *vq, struct rte_mbuf *m)
 	 * Requeue the discarded mbuf. This should always be
 	 * successful since it was just dequeued.
 	 */
-	error = virtqueue_enqueue_recv_refill(vq, m);
+	error = virtqueue_enqueue_recv_refill(vq, &m, 1);
 
 	if (unlikely(error)) {
 		RTE_LOG(ERR, PMD, "cannot requeue discarded mbuf");
@@ -968,7 +969,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 			dev->data->rx_mbuf_alloc_failed++;
 			break;
 		}
-		error = virtqueue_enqueue_recv_refill(vq, new_mbuf);
+		error = virtqueue_enqueue_recv_refill(vq, &new_mbuf, 1);
 		if (unlikely(error)) {
 			rte_pktmbuf_free(new_mbuf);
 			break;
@@ -1187,19 +1188,18 @@ virtio_recv_mergeable_pkts(void *rx_queue,
 	struct virtnet_rx *rxvq = rx_queue;
 	struct virtqueue *vq = rxvq->vq;
 	struct virtio_hw *hw = vq->hw;
-	struct rte_mbuf *rxm, *new_mbuf;
-	uint16_t nb_used, num, nb_rx;
+	struct rte_mbuf *rxm;
+	struct rte_mbuf *prev;
+	uint16_t nb_used, num, nb_rx = 0;
 	uint32_t len[VIRTIO_MBUF_BURST_SZ];
 	struct rte_mbuf *rcv_pkts[VIRTIO_MBUF_BURST_SZ];
-	struct rte_mbuf *prev;
 	int error;
-	uint32_t i, nb_enqueued;
-	uint32_t seg_num;
-	uint16_t extra_idx;
-	uint32_t seg_res;
-	uint32_t hdr_size;
+	uint32_t nb_enqueued = 0;
+	uint32_t seg_num = 0;
+	uint32_t seg_res = 0;
+	uint32_t hdr_size = hw->vtnet_hdr_size;
+	int32_t i;
 
-	nb_rx = 0;
 	if (unlikely(hw->started == 0))
 		return nb_rx;
@@ -1209,31 +1209,25 @@ virtio_recv_mergeable_pkts(void *rx_queue,
 
 	PMD_RX_LOG(DEBUG, "used:%d", nb_used);
 
-	i = 0;
-	nb_enqueued = 0;
-	seg_num = 0;
-	extra_idx = 0;
-	seg_res = 0;
-	hdr_size = hw->vtnet_hdr_size;
-
-	while (i < nb_used) {
-		struct virtio_net_hdr_mrg_rxbuf *header;
+	num = likely(nb_used <= nb_pkts) ? nb_used : nb_pkts;
+	if (unlikely(num > VIRTIO_MBUF_BURST_SZ))
+		num = VIRTIO_MBUF_BURST_SZ;
+	if (likely(num > DESC_PER_CACHELINE))
+		num = num - ((vq->vq_used_cons_idx + num) %
+				DESC_PER_CACHELINE);
 
-		if (nb_rx == nb_pkts)
-			break;
-		num = virtqueue_dequeue_burst_rx(vq, rcv_pkts, len, 1);
-		if (num != 1)
-			continue;
+	num = virtqueue_dequeue_burst_rx(vq, rcv_pkts, len, num);
 
-		i++;
+	for (i = 0; i < num; i++) {
+		struct virtio_net_hdr_mrg_rxbuf *header;
 
 		PMD_RX_LOG(DEBUG, "dequeue:%d", num);
-		PMD_RX_LOG(DEBUG, "packet len:%d", len[0]);
+		PMD_RX_LOG(DEBUG, "packet len:%d", len[i]);
 
-		rxm = rcv_pkts[0];
+		rxm = rcv_pkts[i];
 
-		if (unlikely(len[0] < hdr_size + ETHER_HDR_LEN)) {
+		if (unlikely(len[i] < hdr_size + ETHER_HDR_LEN)) {
 			PMD_RX_LOG(ERR, "Packet drop");
 			nb_enqueued++;
 			virtio_discard_rxbuf(vq, rxm);
@@ -1241,10 +1235,10 @@ virtio_recv_mergeable_pkts(void *rx_queue,
 			continue;
 		}
 
-		header = (struct virtio_net_hdr_mrg_rxbuf *)((char *)rxm->buf_addr +
-			RTE_PKTMBUF_HEADROOM - hdr_size);
+		header = (struct virtio_net_hdr_mrg_rxbuf *)
+			((char *)rxm->buf_addr + RTE_PKTMBUF_HEADROOM
+			- hdr_size);
 		seg_num = header->num_buffers;
-
 		if (seg_num == 0)
 			seg_num = 1;
@@ -1252,10 +1246,11 @@ virtio_recv_mergeable_pkts(void *rx_queue,
 		rxm->nb_segs = seg_num;
 		rxm->ol_flags = 0;
 		rxm->vlan_tci = 0;
-		rxm->pkt_len = (uint32_t)(len[0] - hdr_size);
-		rxm->data_len = (uint16_t)(len[0] - hdr_size);
+		rxm->pkt_len = (uint32_t)(len[i] - hdr_size);
+		rxm->data_len = (uint16_t)(len[i] - hdr_size);
 
 		rxm->port = rxvq->port_id;
+
 		rx_pkts[nb_rx] = rxm;
 		prev = rxm;
@@ -1266,76 +1261,96 @@ virtio_recv_mergeable_pkts(void *rx_queue,
 			continue;
 		}
 
+		if (hw->vlan_strip)
+			rte_vlan_strip(rx_pkts[nb_rx]);
+
 		seg_res = seg_num - 1;
 
-		while (seg_res != 0) {
-			/*
-			 * Get extra segments for current uncompleted packet.
-			 */
-			uint16_t rcv_cnt =
-				RTE_MIN(seg_res, RTE_DIM(rcv_pkts));
-			if (likely(VIRTQUEUE_NUSED(vq) >= rcv_cnt)) {
-				uint32_t rx_num =
-					virtqueue_dequeue_burst_rx(vq,
-					rcv_pkts, len, rcv_cnt);
-				i += rx_num;
-				rcv_cnt = rx_num;
-			} else {
-				PMD_RX_LOG(ERR,
-					   "No enough segments for packet.");
-				nb_enqueued++;
-				virtio_discard_rxbuf(vq, rxm);
-				rxvq->stats.errors++;
-				break;
-			}
+		/* Merge remaining segments */
+		while (seg_res != 0 && i < (num - 1)) {
+			i++;
 
-			extra_idx = 0;
+			rxm = rcv_pkts[i];
+			rxm->data_off = RTE_PKTMBUF_HEADROOM - hdr_size;
+			rxm->pkt_len = (uint32_t)(len[i]);
+			rxm->data_len = (uint16_t)(len[i]);
 
+			rx_pkts[nb_rx]->pkt_len += (uint32_t)(len[i]);
+			rx_pkts[nb_rx]->data_len += (uint16_t)(len[i]);
+
+			if (prev)
+				prev->next = rxm;
+
+			prev = rxm;
+			seg_res -= 1;
+		}
+
+		if (!seg_res) {
+			virtio_rx_stats_updated(rxvq, rx_pkts[nb_rx]);
+			nb_rx++;
+		}
+	}
+
+	/* Last packet still need merge segments */
+	while (seg_res != 0) {
+		uint16_t rcv_cnt = RTE_MIN((uint16_t)seg_res,
+					VIRTIO_MBUF_BURST_SZ);
+
+		prev = rcv_pkts[nb_rx];
+		if (likely(VIRTQUEUE_NUSED(vq) >= rcv_cnt)) {
+			num = virtqueue_dequeue_burst_rx(vq, rcv_pkts, len,
+							rcv_cnt);
+			uint16_t extra_idx = 0;
 
+			rcv_cnt = num;
 			while (extra_idx < rcv_cnt) {
 				rxm = rcv_pkts[extra_idx];
-
-				rxm->data_off = RTE_PKTMBUF_HEADROOM - hdr_size;
+				rxm->data_off =
+					RTE_PKTMBUF_HEADROOM - hdr_size;
 				rxm->pkt_len = (uint32_t)(len[extra_idx]);
 				rxm->data_len = (uint16_t)(len[extra_idx]);
-
-				if (prev)
-					prev->next = rxm;
-
+				prev->next = rxm;
 				prev = rxm;
-				rx_pkts[nb_rx]->pkt_len += rxm->pkt_len;
-				extra_idx++;
+				rx_pkts[nb_rx]->pkt_len += len[extra_idx];
+				rx_pkts[nb_rx]->data_len += len[extra_idx];
+				extra_idx += 1;
 			};
 
 			seg_res -= rcv_cnt;
-		}
-
-		if (hw->vlan_strip)
-			rte_vlan_strip(rx_pkts[nb_rx]);
-
-		VIRTIO_DUMP_PACKET(rx_pkts[nb_rx],
-			rx_pkts[nb_rx]->data_len);
-
-		rxvq->stats.bytes += rx_pkts[nb_rx]->pkt_len;
-		virtio_update_packet_stats(&rxvq->stats, rx_pkts[nb_rx]);
-		nb_rx++;
+			if (!seg_res) {
+				virtio_rx_stats_updated(rxvq, rx_pkts[nb_rx]);
+				nb_rx++;
+			}
+		} else {
+			PMD_RX_LOG(ERR,
+					"No enough segments for packet.");
+			virtio_discard_rxbuf(vq, prev);
+			rxvq->stats.errors++;
+			break;
+		}
 	}
 
 	rxvq->stats.packets += nb_rx;
 
 	/* Allocate new mbuf for the used descriptor */
-	while (likely(!virtqueue_full(vq))) {
-		new_mbuf = rte_mbuf_raw_alloc(rxvq->mpool);
-		if (unlikely(new_mbuf == NULL)) {
-			struct rte_eth_dev *dev
-				= &rte_eth_devices[rxvq->port_id];
-			dev->data->rx_mbuf_alloc_failed++;
-			break;
-		}
-		error = virtqueue_enqueue_recv_refill(vq, new_mbuf);
-		if (unlikely(error)) {
-			rte_pktmbuf_free(new_mbuf);
-			break;
+	if (likely(!virtqueue_full(vq))) {
+		/* free_cnt may include mrg descs */
+		uint16_t free_cnt = vq->vq_free_cnt;
+		struct rte_mbuf *new_pkts[free_cnt];
+
+		if (!rte_pktmbuf_alloc_bulk(rxvq->mpool, new_pkts, free_cnt)) {
+			error = virtqueue_enqueue_recv_refill(vq, new_pkts,
+					free_cnt);
+			if (unlikely(error)) {
+				for (i = 0; i < free_cnt; i++)
+					rte_pktmbuf_free(new_pkts[i]);
+			}
+			nb_enqueued += free_cnt;
+		} else {
+			struct rte_eth_dev *dev =
+				&rte_eth_devices[rxvq->port_id];
+			dev->data->rx_mbuf_alloc_failed += free_cnt;
 		}
-		nb_enqueued++;
 	}
 
 	if (likely(nb_enqueued)) {
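Two batching techniques in this patch are worth a closer look. First,
the dequeue size is trimmed so each burst ends on a used-ring cacheline
boundary, which keeps the next burst from re-reading a cacheline the
device may still be writing. Second, the refill switches from a
per-mbuf rte_mbuf_raw_alloc() loop to a single rte_pktmbuf_alloc_bulk()
call followed by one batched refill. A hedged, self-contained sketch of
the trimming arithmetic (DESC_PER_CACHELINE is hard-coded to a
plausible value here; the driver derives it from the ring layout):

    #include <stdint.h>
    #include <stdio.h>

    /* 64-byte cacheline / 16-byte used-ring element = 4 entries per
     * line (illustrative value, not taken from the driver). */
    #define DESC_PER_CACHELINE 4

    static uint16_t
    trim_to_cacheline(uint16_t cons_idx, uint16_t num)
    {
            /* End the burst on a cacheline boundary of the used ring
             * so the next burst starts on a fresh line. */
            if (num > DESC_PER_CACHELINE)
                    num -= (cons_idx + num) % DESC_PER_CACHELINE;
            return num;
    }

    int
    main(void)
    {
            /* cons_idx 6 with 21 entries ready: (6 + 21) % 4 = 3, so
             * dequeue 18 and stop at index 24, a multiple of 4. */
            printf("%u\n", trim_to_cacheline(6, 21)); /* prints 18 */
            return 0;
    }

On the refill side, rte_pktmbuf_alloc_bulk() returns 0 on success, in
which case all free_cnt mbufs were allocated and can be handed to the
batched virtqueue_enqueue_recv_refill() in a single call, as the last
hunk above shows.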