From patchwork Mon Jul 2 15:25:44 2018
X-Patchwork-Id: 42100
From: Maxime Coquelin
To: tiwei.bie@intel.com, zhihong.wang@intel.com, dev@dpdk.org
Cc: yong.liu@intel.com, Maxime Coquelin
Date: Mon, 2 Jul 2018 17:25:44 +0200
Message-Id: <20180702152547.31423-2-maxime.coquelin@redhat.com>
In-Reply-To: <20180702152547.31423-1-maxime.coquelin@redhat.com>
References: <20180702152547.31423-1-maxime.coquelin@redhat.com>
Subject: [dpdk-dev] [PATCH v4 1/4] net/virtio: remove simple Tx path

The simple Tx path does not comply with the Virtio specification.
Now that the VIRTIO_F_IN_ORDER feature is supported by the Virtio PMD,
let's use this optimized path instead.
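In condensed form, after this patch the Tx burst selection in set_rxtx_funcs()
only has to distinguish the in-order path from the default path. The sketch
below assumes the surrounding driver code: the else branch, its log string and
virtio_xmit_pkts are taken from the existing function, not from this diff.

/* Sketch: Tx path selection once the simple Tx branch is removed. */
static void
set_rxtx_funcs(struct rte_eth_dev *eth_dev)
{
	struct virtio_hw *hw = eth_dev->data->dev_private;

	/* Rx path selection is unchanged by this patch. */

	if (hw->use_inorder_tx) {
		PMD_INIT_LOG(INFO, "virtio: using inorder Tx path on port %u",
			eth_dev->data->port_id);
		eth_dev->tx_pkt_burst = virtio_xmit_pkts_inorder;
	} else {
		PMD_INIT_LOG(INFO, "virtio: using standard Tx path on port %u",
			eth_dev->data->port_id);
		eth_dev->tx_pkt_burst = virtio_xmit_pkts;
	}
}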
Signed-off-by: Maxime Coquelin
---
 drivers/net/virtio/virtio_ethdev.c      | 11 +-----
 drivers/net/virtio/virtio_pci.h         |  1 -
 drivers/net/virtio/virtio_rxtx.c        | 28 +-------------
 drivers/net/virtio/virtio_rxtx_simple.c | 67 ---------------------------------
 drivers/net/virtio/virtio_rxtx_simple.h | 49 ------------------------
 drivers/net/virtio/virtio_user_ethdev.c |  1 -
 6 files changed, 2 insertions(+), 155 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 02980e3d1..73c428734 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -1336,11 +1336,7 @@ set_rxtx_funcs(struct rte_eth_dev *eth_dev)
 		eth_dev->rx_pkt_burst = &virtio_recv_pkts;
 	}
 
-	if (hw->use_simple_tx) {
-		PMD_INIT_LOG(INFO, "virtio: using simple Tx path on port %u",
-			eth_dev->data->port_id);
-		eth_dev->tx_pkt_burst = virtio_xmit_pkts_simple;
-	} else if (hw->use_inorder_tx) {
+	if (hw->use_inorder_tx) {
 		PMD_INIT_LOG(INFO, "virtio: using inorder Tx path on port %u",
 			eth_dev->data->port_id);
 		eth_dev->tx_pkt_burst = virtio_xmit_pkts_inorder;
@@ -1881,12 +1877,9 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 	rte_spinlock_init(&hw->state_lock);
 
 	hw->use_simple_rx = 1;
-	hw->use_simple_tx = 1;
 
 	if (vtpci_with_feature(hw, VIRTIO_F_IN_ORDER)) {
-		/* Simple Tx not compatible with in-order ring */
 		hw->use_inorder_tx = 1;
-		hw->use_simple_tx = 0;
 		if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
 			hw->use_inorder_rx = 1;
 			hw->use_simple_rx = 0;
@@ -1898,12 +1891,10 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 #if defined RTE_ARCH_ARM64 || defined RTE_ARCH_ARM
 	if (!rte_cpu_get_flag_enabled(RTE_CPUFLAG_NEON)) {
 		hw->use_simple_rx = 0;
-		hw->use_simple_tx = 0;
 	}
 #endif
 	if (vtpci_with_feature(hw, VIRTIO_NET_F_MRG_RXBUF)) {
 		hw->use_simple_rx = 0;
-		hw->use_simple_tx = 0;
 	}
 
 	if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
diff --git a/drivers/net/virtio/virtio_pci.h b/drivers/net/virtio/virtio_pci.h
index 77f805df6..0ec216874 100644
--- a/drivers/net/virtio/virtio_pci.h
+++ b/drivers/net/virtio/virtio_pci.h
@@ -238,7 +238,6 @@ struct virtio_hw {
 	uint8_t     use_msix;
 	uint8_t     modern;
 	uint8_t     use_simple_rx;
-	uint8_t     use_simple_tx;
 	uint8_t     use_inorder_rx;
 	uint8_t     use_inorder_tx;
 	uint16_t    port_id;
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 6394071b8..522e5dd41 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -708,10 +708,6 @@ virtio_dev_tx_queue_setup(struct rte_eth_dev *dev,
 
 	PMD_INIT_FUNC_TRACE();
 
-	/* cannot use simple rxtx funcs with multisegs or offloads */
-	if (dev->data->dev_conf.txmode.offloads)
-		hw->use_simple_tx = 0;
-
 	if (nb_desc == 0 || nb_desc > vq->vq_nentries)
 		nb_desc = vq->vq_nentries;
 	vq->vq_free_cnt = RTE_MIN(vq->vq_free_cnt, nb_desc);
@@ -746,33 +742,11 @@ virtio_dev_tx_queue_setup_finish(struct rte_eth_dev *dev,
 	uint8_t vtpci_queue_idx = 2 * queue_idx + VTNET_SQ_TQ_QUEUE_IDX;
 	struct virtio_hw *hw = dev->data->dev_private;
 	struct virtqueue *vq = hw->vqs[vtpci_queue_idx];
-	uint16_t mid_idx = vq->vq_nentries >> 1;
-	struct virtnet_tx *txvq = &vq->txq;
-	uint16_t desc_idx;
 
 	PMD_INIT_FUNC_TRACE();
 
-	if (hw->use_simple_tx) {
-		for (desc_idx = 0; desc_idx < mid_idx; desc_idx++) {
-			vq->vq_ring.avail->ring[desc_idx] =
-				desc_idx + mid_idx;
-			vq->vq_ring.desc[desc_idx + mid_idx].next =
-				desc_idx;
-			vq->vq_ring.desc[desc_idx + mid_idx].addr =
-				txvq->virtio_net_hdr_mem +
-				offsetof(struct virtio_tx_region, tx_hdr);
-			vq->vq_ring.desc[desc_idx + mid_idx].len =
-				vq->hw->vtnet_hdr_size;
-			vq->vq_ring.desc[desc_idx + mid_idx].flags =
-				VRING_DESC_F_NEXT;
-			vq->vq_ring.desc[desc_idx].flags = 0;
-		}
-		for (desc_idx = mid_idx; desc_idx < vq->vq_nentries;
-				desc_idx++)
-			vq->vq_ring.avail->ring[desc_idx] = desc_idx;
-	} else if (hw->use_inorder_tx) {
+	if (hw->use_inorder_tx)
 		vq->vq_ring.desc[vq->vq_nentries - 1].next = 0;
-	}
 
 	VIRTQUEUE_DUMP(vq);
diff --git a/drivers/net/virtio/virtio_rxtx_simple.c b/drivers/net/virtio/virtio_rxtx_simple.c
index 515207581..31e565b4c 100644
--- a/drivers/net/virtio/virtio_rxtx_simple.c
+++ b/drivers/net/virtio/virtio_rxtx_simple.c
@@ -27,73 +27,6 @@
 #pragma GCC diagnostic ignored "-Wcast-qual"
 #endif
 
-uint16_t
-virtio_xmit_pkts_simple(void *tx_queue, struct rte_mbuf **tx_pkts,
-	uint16_t nb_pkts)
-{
-	struct virtnet_tx *txvq = tx_queue;
-	struct virtqueue *vq = txvq->vq;
-	struct virtio_hw *hw = vq->hw;
-	uint16_t nb_used;
-	uint16_t desc_idx;
-	struct vring_desc *start_dp;
-	uint16_t nb_tail, nb_commit;
-	int i;
-	uint16_t desc_idx_max = (vq->vq_nentries >> 1) - 1;
-	uint16_t nb_tx = 0;
-
-	if (unlikely(hw->started == 0 && tx_pkts != hw->inject_pkts))
-		return nb_tx;
-
-	nb_used = VIRTQUEUE_NUSED(vq);
-	rte_compiler_barrier();
-
-	if (nb_used >= VIRTIO_TX_FREE_THRESH)
-		virtio_xmit_cleanup_simple(vq);
-
-	nb_commit = nb_pkts = RTE_MIN((vq->vq_free_cnt >> 1), nb_pkts);
-	desc_idx = (uint16_t)(vq->vq_avail_idx & desc_idx_max);
-	start_dp = vq->vq_ring.desc;
-	nb_tail = (uint16_t) (desc_idx_max + 1 - desc_idx);
-
-	if (nb_commit >= nb_tail) {
-		for (i = 0; i < nb_tail; i++)
-			vq->vq_descx[desc_idx + i].cookie = tx_pkts[i];
-		for (i = 0; i < nb_tail; i++) {
-			start_dp[desc_idx].addr =
-				VIRTIO_MBUF_DATA_DMA_ADDR(*tx_pkts, vq);
-			start_dp[desc_idx].len = (*tx_pkts)->pkt_len;
-			tx_pkts++;
-			desc_idx++;
-		}
-		nb_commit -= nb_tail;
-		desc_idx = 0;
-	}
-	for (i = 0; i < nb_commit; i++)
-		vq->vq_descx[desc_idx + i].cookie = tx_pkts[i];
-	for (i = 0; i < nb_commit; i++) {
-		start_dp[desc_idx].addr =
-			VIRTIO_MBUF_DATA_DMA_ADDR(*tx_pkts, vq);
-		start_dp[desc_idx].len = (*tx_pkts)->pkt_len;
-		tx_pkts++;
-		desc_idx++;
-	}
-
-	rte_compiler_barrier();
-
-	vq->vq_free_cnt -= (uint16_t)(nb_pkts << 1);
-	vq->vq_avail_idx += nb_pkts;
-	vq->vq_ring.avail->idx = vq->vq_avail_idx;
-	txvq->stats.packets += nb_pkts;
-
-	if (likely(nb_pkts)) {
-		if (unlikely(virtqueue_kick_prepare(vq)))
-			virtqueue_notify(vq);
-	}
-
-	return nb_pkts;
-}
-
 int __attribute__((cold))
 virtio_rxq_vec_setup(struct virtnet_rx *rxq)
 {
diff --git a/drivers/net/virtio/virtio_rxtx_simple.h b/drivers/net/virtio/virtio_rxtx_simple.h
index 303904d64..dc97e4ccf 100644
--- a/drivers/net/virtio/virtio_rxtx_simple.h
+++ b/drivers/net/virtio/virtio_rxtx_simple.h
@@ -55,53 +55,4 @@ virtio_rxq_rearm_vec(struct virtnet_rx *rxvq)
 	vq_update_avail_idx(vq);
 }
 
-#define VIRTIO_TX_FREE_THRESH 32
-#define VIRTIO_TX_MAX_FREE_BUF_SZ 32
-#define VIRTIO_TX_FREE_NR 32
-/* TODO: vq->tx_free_cnt could mean num of free slots so we could avoid shift */
-static inline void
-virtio_xmit_cleanup_simple(struct virtqueue *vq)
-{
-	uint16_t i, desc_idx;
-	uint32_t nb_free = 0;
-	struct rte_mbuf *m, *free[VIRTIO_TX_MAX_FREE_BUF_SZ];
-
-	desc_idx = (uint16_t)(vq->vq_used_cons_idx &
-		   ((vq->vq_nentries >> 1) - 1));
-	m = (struct rte_mbuf *)vq->vq_descx[desc_idx++].cookie;
-	m = rte_pktmbuf_prefree_seg(m);
-	if (likely(m != NULL)) {
-		free[0] = m;
-		nb_free = 1;
-		for (i = 1; i < VIRTIO_TX_FREE_NR; i++) {
-			m = (struct rte_mbuf *)vq->vq_descx[desc_idx++].cookie;
-			m = rte_pktmbuf_prefree_seg(m);
-			if (likely(m != NULL)) {
-				if (likely(m->pool == free[0]->pool))
-					free[nb_free++] = m;
-				else {
-					rte_mempool_put_bulk(free[0]->pool,
-						(void **)free,
-						RTE_MIN(RTE_DIM(free),
-						nb_free));
-					free[0] = m;
-					nb_free = 1;
-				}
-			}
-		}
-		rte_mempool_put_bulk(free[0]->pool, (void **)free,
-			RTE_MIN(RTE_DIM(free), nb_free));
-	} else {
-		for (i = 1; i < VIRTIO_TX_FREE_NR; i++) {
-			m = (struct rte_mbuf *)vq->vq_descx[desc_idx++].cookie;
-			m = rte_pktmbuf_prefree_seg(m);
-			if (m != NULL)
-				rte_mempool_put(m->pool, m);
-		}
-	}
-
-	vq->vq_used_cons_idx += VIRTIO_TX_FREE_NR;
-	vq->vq_free_cnt += (VIRTIO_TX_FREE_NR << 1);
-}
-
 #endif /* _VIRTIO_RXTX_SIMPLE_H_ */
diff --git a/drivers/net/virtio/virtio_user_ethdev.c b/drivers/net/virtio/virtio_user_ethdev.c
index fcd30251f..5ee59d5e5 100644
--- a/drivers/net/virtio/virtio_user_ethdev.c
+++ b/drivers/net/virtio/virtio_user_ethdev.c
@@ -436,7 +436,6 @@ virtio_user_eth_dev_alloc(struct rte_vdev_device *vdev)
 	hw->use_msix = 1;
 	hw->modern = 0;
 	hw->use_simple_rx = 0;
-	hw->use_simple_tx = 0;
 	hw->use_inorder_rx = 0;
 	hw->use_inorder_tx = 0;
 	hw->virtio_user_dev = dev;

From patchwork Mon Jul 2 15:25:45 2018
X-Patchwork-Id: 42101
From: Maxime Coquelin
To: tiwei.bie@intel.com, zhihong.wang@intel.com, dev@dpdk.org
Cc: yong.liu@intel.com, Maxime Coquelin
Date: Mon, 2 Jul 2018 17:25:45 +0200
Message-Id: <20180702152547.31423-3-maxime.coquelin@redhat.com>
In-Reply-To: <20180702152547.31423-1-maxime.coquelin@redhat.com>
References: <20180702152547.31423-1-maxime.coquelin@redhat.com>
Subject: [dpdk-dev] [PATCH v4 2/4] net/virtio: improve Tx offload features negotiation

This patch improves the Tx offload feature selection depending on whether
the application requests offloads. When the application doesn't request
Tx offload features, the corresponding feature bits aren't negotiated.
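In condensed form, the negotiation change amounts to the following (a sketch
of the virtio_dev_configure() hunks below; initializing req_features directly
from VIRTIO_PMD_DEFAULT_GUEST_FEATURES is a simplification of the existing
code, not part of this diff):

	uint64_t tx_offloads = dev->data->dev_conf.txmode.offloads;
	uint64_t req_features = VIRTIO_PMD_DEFAULT_GUEST_FEATURES;

	/* Request host checksum offload only if the application asked for it. */
	if (tx_offloads & (DEV_TX_OFFLOAD_UDP_CKSUM |
			   DEV_TX_OFFLOAD_TCP_CKSUM))
		req_features |= (1ULL << VIRTIO_NET_F_CSUM);

	/* Likewise for TSO. */
	if (tx_offloads & DEV_TX_OFFLOAD_TCP_TSO)
		req_features |= (1ULL << VIRTIO_NET_F_HOST_TSO4) |
				(1ULL << VIRTIO_NET_F_HOST_TSO6);

	/* CSUM/HOST_TSO4/HOST_TSO6 are dropped from
	 * VIRTIO_PMD_DEFAULT_GUEST_FEATURES (see the virtio_ethdev.h hunk),
	 * so these bits are now negotiated only on demand.
	 */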
Signed-off-by: Maxime Coquelin
---
 drivers/net/virtio/virtio_ethdev.c | 15 +++++++++++++--
 drivers/net/virtio/virtio_ethdev.h |  3 ---
 2 files changed, 13 insertions(+), 5 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 73c428734..1d223d029 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -1806,8 +1806,10 @@ static int
 virtio_dev_configure(struct rte_eth_dev *dev)
 {
 	const struct rte_eth_rxmode *rxmode = &dev->data->dev_conf.rxmode;
+	const struct rte_eth_txmode *txmode = &dev->data->dev_conf.txmode;
 	struct virtio_hw *hw = dev->data->dev_private;
 	uint64_t rx_offloads = rxmode->offloads;
+	uint64_t tx_offloads = txmode->offloads;
 	uint64_t req_features;
 	int ret;
 
@@ -1829,6 +1831,15 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 			(1ULL << VIRTIO_NET_F_GUEST_TSO4) |
 			(1ULL << VIRTIO_NET_F_GUEST_TSO6);
 
+	if (tx_offloads & (DEV_TX_OFFLOAD_UDP_CKSUM |
+			   DEV_TX_OFFLOAD_TCP_CKSUM))
+		req_features |= (1ULL << VIRTIO_NET_F_CSUM);
+
+	if (tx_offloads & DEV_TX_OFFLOAD_TCP_TSO)
+		req_features |=
+			(1ULL << VIRTIO_NET_F_HOST_TSO4) |
+			(1ULL << VIRTIO_NET_F_HOST_TSO6);
+
 	/* if request features changed, reinit the device */
 	if (req_features != hw->req_guest_features) {
 		ret = virtio_init_device(dev, req_features);
@@ -2155,14 +2166,14 @@ virtio_dev_info_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info)
 	dev_info->tx_offload_capa = DEV_TX_OFFLOAD_MULTI_SEGS |
 				    DEV_TX_OFFLOAD_VLAN_INSERT;
-	if (hw->guest_features & (1ULL << VIRTIO_NET_F_CSUM)) {
+	if (host_features & (1ULL << VIRTIO_NET_F_CSUM)) {
 		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_UDP_CKSUM |
 			DEV_TX_OFFLOAD_TCP_CKSUM;
 	}
 	tso_mask = (1ULL << VIRTIO_NET_F_HOST_TSO4) |
 		(1ULL << VIRTIO_NET_F_HOST_TSO6);
-	if ((hw->guest_features & tso_mask) == tso_mask)
+	if ((host_features & tso_mask) == tso_mask)
 		dev_info->tx_offload_capa |= DEV_TX_OFFLOAD_TCP_TSO;
 }
diff --git a/drivers/net/virtio/virtio_ethdev.h b/drivers/net/virtio/virtio_ethdev.h
index 350e9ce73..f4d09df71 100644
--- a/drivers/net/virtio/virtio_ethdev.h
+++ b/drivers/net/virtio/virtio_ethdev.h
@@ -28,9 +28,6 @@
 	 1u << VIRTIO_NET_F_CTRL_VQ     | \
 	 1u << VIRTIO_NET_F_CTRL_RX     | \
 	 1u << VIRTIO_NET_F_CTRL_VLAN   | \
-	 1u << VIRTIO_NET_F_CSUM        | \
-	 1u << VIRTIO_NET_F_HOST_TSO4   | \
-	 1u << VIRTIO_NET_F_HOST_TSO6   | \
 	 1u << VIRTIO_NET_F_MRG_RXBUF   | \
 	 1u << VIRTIO_NET_F_MTU         | \
 	 1ULL << VIRTIO_NET_F_GUEST_ANNOUNCE | \

From patchwork Mon Jul 2 15:25:46 2018
X-Patchwork-Id: 42102
From: Maxime Coquelin
To: tiwei.bie@intel.com, zhihong.wang@intel.com, dev@dpdk.org
Cc: yong.liu@intel.com, Maxime Coquelin
Date: Mon, 2 Jul 2018 17:25:46 +0200
Message-Id: <20180702152547.31423-4-maxime.coquelin@redhat.com>
In-Reply-To: <20180702152547.31423-1-maxime.coquelin@redhat.com>
References: <20180702152547.31423-1-maxime.coquelin@redhat.com>
Subject: [dpdk-dev] [PATCH v4 3/4] net/virtio: don't use simple Rx if TCP LRO or VLAN strip

Reviewed-by: Tiwei Bie
Signed-off-by: Maxime Coquelin
---
 drivers/net/virtio/virtio_ethdev.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 1d223d029..15d5b4f79 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -1909,7 +1909,9 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 	}
 
 	if (rx_offloads & (DEV_RX_OFFLOAD_UDP_CKSUM |
-			   DEV_RX_OFFLOAD_TCP_CKSUM))
+			   DEV_RX_OFFLOAD_TCP_CKSUM |
+			   DEV_RX_OFFLOAD_TCP_LRO |
+			   DEV_RX_OFFLOAD_VLAN_STRIP))
 		hw->use_simple_rx = 0;
 
 	return 0;

From patchwork Mon Jul 2 15:25:47 2018
X-Patchwork-Id: 42103
From: Maxime Coquelin
To: tiwei.bie@intel.com, zhihong.wang@intel.com, dev@dpdk.org
Cc: yong.liu@intel.com, Maxime Coquelin
Date: Mon, 2 Jul 2018 17:25:47 +0200
Message-Id: <20180702152547.31423-5-maxime.coquelin@redhat.com>
In-Reply-To: <20180702152547.31423-1-maxime.coquelin@redhat.com>
References: <20180702152547.31423-1-maxime.coquelin@redhat.com>
Subject: [dpdk-dev] [PATCH v4 4/4] net/virtio: improve offload check performance

Instead of checking the multiple Virtio feature bits for every packet,
let's do the check once at configure time and store the result in the
virtio_hw struct.

Reviewed-by: Tiwei Bie
Signed-off-by: Maxime Coquelin
---
(A condensed sketch of this check-once pattern follows the diff below.)

 drivers/net/virtio/virtio_ethdev.c | 19 ++++++++++++++
 drivers/net/virtio/virtio_pci.h    |  3 +++
 drivers/net/virtio/virtio_rxtx.c   | 46 +++++++++-----------------------------
 3 files changed, 33 insertions(+), 35 deletions(-)

diff --git a/drivers/net/virtio/virtio_ethdev.c b/drivers/net/virtio/virtio_ethdev.c
index 15d5b4f79..8c636b5f9 100644
--- a/drivers/net/virtio/virtio_ethdev.c
+++ b/drivers/net/virtio/virtio_ethdev.c
@@ -1798,6 +1798,22 @@ rte_virtio_pmd_init(void)
 	rte_pci_register(&rte_virtio_pmd);
 }
 
+static bool
+rx_offload_enabled(struct virtio_hw *hw)
+{
+	return vtpci_with_feature(hw, VIRTIO_NET_F_GUEST_CSUM) ||
+		vtpci_with_feature(hw, VIRTIO_NET_F_GUEST_TSO4) ||
+		vtpci_with_feature(hw, VIRTIO_NET_F_GUEST_TSO6);
+}
+
+static bool
+tx_offload_enabled(struct virtio_hw *hw)
+{
+	return vtpci_with_feature(hw, VIRTIO_NET_F_CSUM) ||
+		vtpci_with_feature(hw, VIRTIO_NET_F_HOST_TSO4) ||
+		vtpci_with_feature(hw, VIRTIO_NET_F_HOST_TSO6);
+}
+
 /*
  * Configure virtio device
  * It returns 0 on success.
@@ -1877,6 +1893,9 @@ virtio_dev_configure(struct rte_eth_dev *dev)
 		return -ENOTSUP;
 	}
 
+	hw->has_tx_offload = tx_offload_enabled(hw);
+	hw->has_rx_offload = rx_offload_enabled(hw);
+
 	if (dev->data->dev_flags & RTE_ETH_DEV_INTR_LSC)
 		/* Enable vector (0) for Link State Intrerrupt */
 		if (VTPCI_OPS(hw)->set_config_irq(hw, 0) ==
diff --git a/drivers/net/virtio/virtio_pci.h b/drivers/net/virtio/virtio_pci.h
index 0ec216874..58fdd3d45 100644
--- a/drivers/net/virtio/virtio_pci.h
+++ b/drivers/net/virtio/virtio_pci.h
@@ -6,6 +6,7 @@
 #define _VIRTIO_PCI_H_
 
 #include <stdint.h>
+#include <stdbool.h>
 
 #include <rte_pci.h>
 #include <rte_bus_pci.h>
@@ -240,6 +241,8 @@ struct virtio_hw {
 	uint8_t     use_simple_rx;
 	uint8_t     use_inorder_rx;
 	uint8_t     use_inorder_tx;
+	bool        has_tx_offload;
+	bool        has_rx_offload;
 	uint16_t    port_id;
 	uint8_t     mac_addr[ETHER_ADDR_LEN];
 	uint32_t    notify_off_multiplier;
diff --git a/drivers/net/virtio/virtio_rxtx.c b/drivers/net/virtio/virtio_rxtx.c
index 522e5dd41..7d59bb5c1 100644
--- a/drivers/net/virtio/virtio_rxtx.c
+++ b/drivers/net/virtio/virtio_rxtx.c
@@ -347,13 +347,6 @@ virtio_tso_fix_cksum(struct rte_mbuf *m)
 	}
 }
 
-static inline int
-tx_offload_enabled(struct virtio_hw *hw)
-{
-	return vtpci_with_feature(hw, VIRTIO_NET_F_CSUM) ||
-		vtpci_with_feature(hw, VIRTIO_NET_F_HOST_TSO4) ||
-		vtpci_with_feature(hw, VIRTIO_NET_F_HOST_TSO6);
-}
 
 /* avoid write operation when necessary, to lessen cache issues */
 #define ASSIGN_UNLESS_EQUAL(var, val) do {	\
@@ -364,7 +357,7 @@ tx_offload_enabled(struct virtio_hw *hw)
 static inline void
 virtqueue_xmit_offload(struct virtio_net_hdr *hdr,
 			struct rte_mbuf *cookie,
-			int offload)
+			bool offload)
 {
 	if (offload) {
 		if (cookie->ol_flags & PKT_TX_TCP_SEG)
@@ -421,14 +414,11 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
 	struct virtio_net_hdr *hdr;
 	uint16_t idx;
 	uint16_t head_size = vq->hw->vtnet_hdr_size;
-	int offload;
 	uint16_t i = 0;
 
 	idx = vq->vq_desc_head_idx;
 	start_dp = vq->vq_ring.desc;
 
-	offload = tx_offload_enabled(vq->hw);
-
 	while (i < num) {
 		idx = idx & (vq->vq_nentries - 1);
 		dxp = &vq->vq_descx[idx];
@@ -440,7 +430,7 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
 		cookies[i]->pkt_len -= head_size;
 
 		/* if offload disabled, it is not zeroed below, do it now */
-		if (offload == 0) {
+		if (!vq->hw->has_tx_offload) {
 			ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);
 			ASSIGN_UNLESS_EQUAL(hdr->csum_offset, 0);
 			ASSIGN_UNLESS_EQUAL(hdr->flags, 0);
@@ -449,7 +439,8 @@ virtqueue_enqueue_xmit_inorder(struct virtnet_tx *txvq,
 			ASSIGN_UNLESS_EQUAL(hdr->hdr_len, 0);
 		}
 
-		virtqueue_xmit_offload(hdr, cookies[i], offload);
+		virtqueue_xmit_offload(hdr, cookies[i],
+				vq->hw->has_tx_offload);
 
 		start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookies[i], vq);
 		start_dp[idx].len = cookies[i]->data_len;
@@ -478,9 +469,6 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
 	uint16_t head_idx, idx;
 	uint16_t head_size = vq->hw->vtnet_hdr_size;
 	struct virtio_net_hdr *hdr;
-	int offload;
-
-	offload = tx_offload_enabled(vq->hw);
 
 	head_idx = vq->vq_desc_head_idx;
 	idx = head_idx;
@@ -500,7 +488,7 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
 		cookie->pkt_len -= head_size;
 
 		/* if offload disabled, it is not zeroed below, do it now */
-		if (offload == 0) {
+		if (!vq->hw->has_tx_offload) {
 			ASSIGN_UNLESS_EQUAL(hdr->csum_start, 0);
 			ASSIGN_UNLESS_EQUAL(hdr->csum_offset, 0);
 			ASSIGN_UNLESS_EQUAL(hdr->flags, 0);
@@ -537,7 +525,7 @@ virtqueue_enqueue_xmit(struct virtnet_tx *txvq, struct rte_mbuf *cookie,
 		idx = start_dp[idx].next;
 	}
 
-	virtqueue_xmit_offload(hdr, cookie, offload);
+	virtqueue_xmit_offload(hdr, cookie, vq->hw->has_tx_offload);
 
 	do {
 		start_dp[idx].addr = VIRTIO_MBUF_DATA_DMA_ADDR(cookie, vq);
@@ -894,14 +882,6 @@ virtio_rx_offload(struct rte_mbuf *m, struct virtio_net_hdr *hdr)
 	return 0;
 }
 
-static inline int
-rx_offload_enabled(struct virtio_hw *hw)
-{
-	return vtpci_with_feature(hw, VIRTIO_NET_F_GUEST_CSUM) ||
-		vtpci_with_feature(hw, VIRTIO_NET_F_GUEST_TSO4) ||
-		vtpci_with_feature(hw, VIRTIO_NET_F_GUEST_TSO6);
-}
-
 #define VIRTIO_MBUF_BURST_SZ 64
 #define DESC_PER_CACHELINE (RTE_CACHE_LINE_SIZE / sizeof(struct vring_desc))
 uint16_t
@@ -917,7 +897,6 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 	int error;
 	uint32_t i, nb_enqueued;
 	uint32_t hdr_size;
-	int offload;
 	struct virtio_net_hdr *hdr;
 
 	nb_rx = 0;
@@ -939,7 +918,6 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 
 	nb_enqueued = 0;
 	hdr_size = hw->vtnet_hdr_size;
-	offload = rx_offload_enabled(hw);
 
 	for (i = 0; i < num ; i++) {
 		rxm = rcv_pkts[i];
@@ -968,7 +946,7 @@ virtio_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)
 		if (hw->vlan_strip)
 			rte_vlan_strip(rxm);
 
-		if (offload && virtio_rx_offload(rxm, hdr) < 0) {
+		if (hw->has_rx_offload && virtio_rx_offload(rxm, hdr) < 0) {
 			virtio_discard_rxbuf(vq, rxm);
 			rxvq->stats.errors++;
 			continue;
@@ -1030,7 +1008,6 @@ virtio_recv_mergeable_pkts_inorder(void *rx_queue,
 	uint32_t seg_res;
 	uint32_t hdr_size;
 	int32_t i;
-	int offload;
 
 	nb_rx = 0;
 	if (unlikely(hw->started == 0))
@@ -1048,7 +1025,6 @@ virtio_recv_mergeable_pkts_inorder(void *rx_queue,
 	seg_num = 1;
 	seg_res = 0;
 	hdr_size = hw->vtnet_hdr_size;
-	offload = rx_offload_enabled(hw);
 
 	num = virtqueue_dequeue_rx_inorder(vq, rcv_pkts, len, nb_used);
 
@@ -1088,7 +1064,8 @@ virtio_recv_mergeable_pkts_inorder(void *rx_queue,
 		rx_pkts[nb_rx] = rxm;
 		prev = rxm;
 
-		if (offload && virtio_rx_offload(rxm, &header->hdr) < 0) {
+		if (vq->hw->has_rx_offload &&
+				virtio_rx_offload(rxm, &header->hdr) < 0) {
 			virtio_discard_rxbuf_inorder(vq, rxm);
 			rxvq->stats.errors++;
 			continue;
@@ -1218,7 +1195,6 @@ virtio_recv_mergeable_pkts(void *rx_queue,
 	uint16_t extra_idx;
 	uint32_t seg_res;
 	uint32_t hdr_size;
-	int offload;
 
 	nb_rx = 0;
 	if (unlikely(hw->started == 0))
@@ -1236,7 +1212,6 @@ virtio_recv_mergeable_pkts(void *rx_queue,
 	extra_idx = 0;
 	seg_res = 0;
 	hdr_size = hw->vtnet_hdr_size;
-	offload = rx_offload_enabled(hw);
 
 	while (i < nb_used) {
 		struct virtio_net_hdr_mrg_rxbuf *header;
@@ -1281,7 +1256,8 @@ virtio_recv_mergeable_pkts(void *rx_queue,
 		rx_pkts[nb_rx] = rxm;
 		prev = rxm;
 
-		if (offload && virtio_rx_offload(rxm, &header->hdr) < 0) {
+		if (hw->has_rx_offload &&
+				virtio_rx_offload(rxm, &header->hdr) < 0) {
 			virtio_discard_rxbuf(vq, rxm);
 			rxvq->stats.errors++;
 			continue;
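As announced above for patch 4/4, here is a minimal sketch of the
check-once-then-cache pattern. The function and field names are taken from
the diff; the surrounding variables (rxm, hdr, vq, rxvq) belong to the
existing receive loop and are assumed, not shown in full here.

	/* Configure time (virtio_dev_configure): evaluate the negotiated
	 * feature bits once and cache the result in virtio_hw.
	 */
	hw->has_tx_offload = tx_offload_enabled(hw);  /* CSUM/HOST_TSO4/6      */
	hw->has_rx_offload = rx_offload_enabled(hw);  /* GUEST_CSUM/GUEST_TSO4/6 */

	/* Datapath (e.g. virtio_recv_pkts): a single cached flag is read per
	 * packet instead of calling vtpci_with_feature() three times.
	 */
	if (hw->has_rx_offload && virtio_rx_offload(rxm, hdr) < 0) {
		virtio_discard_rxbuf(vq, rxm);
		rxvq->stats.errors++;
	}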