From patchwork Mon Oct 19 05:16:11 2015
X-Patchwork-Submitter: Stephen Hemminger
X-Patchwork-Id: 7732
From: Stephen Hemminger <stephen@networkplumber.org>
To: huawei.xie@intel.com, changchun.ouyang@intel.com
Cc: dev@dpdk.org
Date: Sun, 18 Oct 2015 22:16:11 -0700
Message-Id: <1445231772-17467-5-git-send-email-stephen@networkplumber.org>
In-Reply-To: <1445231772-17467-1-git-send-email-stephen@networkplumber.org>
References: <1445231772-17467-1-git-send-email-stephen@networkplumber.org>
Subject: [dpdk-dev] [PATCH 4/5] virtio: use any layout on transmit

Virtio supports a feature that allows the sender to prepend the transmit
header to the packet data in the same buffer.  Using it requires that the
mbuf be writeable, that the data be suitably aligned, and that the
VIRTIO_F_ANY_LAYOUT feature has been negotiated.  When all of these
conditions hold, it is the optimal way to transmit a single-segment packet.
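
For illustration, the eligibility test that this patch open-codes in
virtio_xmit_pkts() amounts to the checks below (a sketch only; the helper
name tx_can_push_hdr and the exact includes are hypothetical and not part
of the patch):

    #include <rte_mbuf.h>
    #include <rte_common.h>      /* rte_is_aligned() */
    #include "virtio_pci.h"      /* struct virtio_hw, vtpci_with_feature() */

    /* Return non-zero when the virtio-net header can be prepended
     * directly into the mbuf headroom instead of using a separate
     * descriptor. */
    static inline int
    tx_can_push_hdr(struct virtio_hw *hw, struct rte_mbuf *m)
    {
        return vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) &&    /* negotiated */
               rte_mbuf_refcnt_read(m) == 1 &&                   /* mbuf writeable */
               m->nb_segs == 1 &&                                /* single segment */
               rte_pktmbuf_headroom(m) >= hw->vtnet_hdr_size &&  /* room for header */
               rte_is_aligned(rte_pktmbuf_mtod(m, char *),       /* aligned data */
                              __alignof__(struct virtio_net_hdr_mrg_rxbuf));
    }

Cloned, multi-segment, or headroom-poor mbufs fall back to the indirect or
per-segment descriptor paths as before.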
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
---
 drivers/net/virtio/virtio_ethdev.h |  3 +-
 drivers/net/virtio/virtio_rxtx.c   | 66 +++++++++++++++++++++++---------------
 2 files changed, 42 insertions(+), 27 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
index 07a9265..f260fbb 100644
--- a/drivers/net/virtio/virtio_ethdev.h
+++ b/drivers/net/virtio/virtio_ethdev.h
@@ -65,7 +65,8 @@
 	 1u << VIRTIO_NET_F_CTRL_RX       |	\
 	 1u << VIRTIO_NET_F_CTRL_VLAN     |	\
 	 1u << VIRTIO_NET_F_MRG_RXBUF     |	\
-	 1u << VIRTIO_RING_F_INDIRECT_DESC)
+	 1u << VIRTIO_RING_F_INDIRECT_DESC|	\
+	 1u << VIRTIO_F_ANY_LAYOUT)
 
 /*
  * CQ function prototype
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index f68ab8f..dbedcc3 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -200,13 +200,13 @@ virtqueue_enqueue_recv_refill(struct virtqueue *vq, struct rte_mbuf *cookie)
 
 static int
 virtqueue_enqueue_xmit(struct virtqueue *txvq, struct rte_mbuf *cookie,
-		       int use_indirect)
+		       uint16_t needed, int use_indirect, int can_push)
 {
 	struct vq_desc_extra *dxp;
 	struct vring_desc *start_dp;
 	uint16_t seg_num = cookie->nb_segs;
-	uint16_t needed = use_indirect ? 1 : 1 + seg_num;
 	uint16_t head_idx, idx;
+	uint16_t head_size = txvq->hw->vtnet_hdr_size;
 	unsigned long offs;
 
 	if (unlikely(txvq->vq_free_cnt == 0))
@@ -223,7 +223,12 @@ virtqueue_enqueue_xmit(struct virtqueue *txvq, struct rte_mbuf *cookie,
 	dxp->ndescs = needed;
 
 	start_dp = txvq->vq_ring.desc;
-	if (use_indirect) {
+	if (can_push) {
+		/* put on zero'd transmit header (no offloads) */
+		void *hdr = rte_pktmbuf_prepend(cookie, head_size);
+
+		memset(hdr, 0, head_size);
+	} else if (use_indirect) {
 		struct virtio_tx_region *txr
 			= txvq->virtio_net_hdr_mz->addr;
 
@@ -235,7 +240,7 @@ virtqueue_enqueue_xmit(struct virtqueue *txvq, struct rte_mbuf *cookie,
 		start_dp[idx].flags = VRING_DESC_F_INDIRECT;
 
 		start_dp = txr[idx].tx_indir;
-		idx = 0;
+		idx = 1;
 	} else {
 		offs = idx * sizeof(struct virtio_tx_region)
 			+ offsetof(struct virtio_tx_region, tx_hdr);
@@ -243,22 +248,19 @@ virtqueue_enqueue_xmit(struct virtqueue *txvq, struct rte_mbuf *cookie,
 		start_dp[idx].addr  = txvq->virtio_net_hdr_mem + offs;
 		start_dp[idx].len   = txvq->hw->vtnet_hdr_size;
 		start_dp[idx].flags = VRING_DESC_F_NEXT;
+		idx = start_dp[idx].next;
 	}
 
-	for (; ((seg_num > 0) && (cookie != NULL)); seg_num--) {
-		idx = start_dp[idx].next;
+	while (cookie != NULL) {
 		start_dp[idx].addr  = RTE_MBUF_DATA_DMA_ADDR(cookie);
 		start_dp[idx].len   = cookie->data_len;
-		start_dp[idx].flags = VRING_DESC_F_NEXT;
+		start_dp[idx].flags = cookie->next ? VRING_DESC_F_NEXT : 0;
 		cookie = cookie->next;
+		idx = start_dp[idx].next;
 	}
 
-	start_dp[idx].flags &= ~VRING_DESC_F_NEXT;
-
 	if (use_indirect)
 		idx = txvq->vq_ring.desc[head_idx].next;
-	else
-		idx = start_dp[idx].next;
 
 	txvq->vq_desc_head_idx = idx;
 	if (txvq->vq_desc_head_idx == VQ_RING_DESC_CHAIN_END)
@@ -761,10 +763,13 @@ virtio_recv_mergeable_pkts(void *rx_queue,
 	return nb_rx;
 }
 
+
 uint16_t
 virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 {
 	struct virtqueue *txvq = tx_queue;
+	struct virtio_hw *hw = txvq->hw;
+	uint16_t hdr_size = hw->vtnet_hdr_size;
 	uint16_t nb_used, nb_tx;
 	int error;
 
@@ -780,14 +785,31 @@ virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 
 	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
 		struct rte_mbuf *txm = tx_pkts[nb_tx];
-		int use_indirect, slots, need;
+		int can_push = 0, use_indirect = 0, slots, need;
+
+		/* Do VLAN tag insertion */
+		if (txm->ol_flags & PKT_TX_VLAN_PKT) {
+			error = rte_vlan_insert(&txm);
+			if (unlikely(error)) {
+				rte_pktmbuf_free(txm);
+				continue;
+			}
+		}
 
-		use_indirect = vtpci_with_feature(txvq->hw,
-						  VIRTIO_RING_F_INDIRECT_DESC)
-			&& (txm->nb_segs < VIRTIO_MAX_TX_INDIRECT);
+		/* optimize ring usage */
+		if (vtpci_with_feature(hw, VIRTIO_F_ANY_LAYOUT) &&
+		    rte_mbuf_refcnt_read(txm) == 1 &&
+		    txm->nb_segs == 1 &&
+		    rte_pktmbuf_headroom(txm) >= hdr_size &&
+		    rte_is_aligned(rte_pktmbuf_mtod(txm, char *),
+				   __alignof__(struct virtio_net_hdr_mrg_rxbuf)))
+			can_push = 1;
+		else if (vtpci_with_feature(hw, VIRTIO_RING_F_INDIRECT_DESC) &&
+			 txm->nb_segs < VIRTIO_MAX_TX_INDIRECT)
+			use_indirect = 1;
 
 		/* How many ring entries are needed to this Tx? */
-		slots = use_indirect ? 1 : 1 + txm->nb_segs;
+		slots = use_indirect ? 1 : !can_push + txm->nb_segs;
 		need = slots - txvq->vq_free_cnt;
 
 		/* Positive value indicates it need free vring descriptors */
@@ -805,17 +827,9 @@ virtio_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 			}
 		}
 
-		/* Do VLAN tag insertion */
-		if (txm->ol_flags & PKT_TX_VLAN_PKT) {
-			error = rte_vlan_insert(&txm);
-			if (unlikely(error)) {
-				rte_pktmbuf_free(txm);
-				continue;
-			}
-		}
-
 		/* Enqueue Packet buffers */
-		error = virtqueue_enqueue_xmit(txvq, txm, use_indirect);
+		error = virtqueue_enqueue_xmit(txvq, txm, slots,
+					       use_indirect, can_push);
 		if (unlikely(error)) {
 			if (error == ENOSPC)
 				PMD_TX_LOG(ERR, "virtqueue_enqueue Free count = 0");