From patchwork Thu Aug 9 17:50:08 2018
From: Stephen Hemminger
To: dev@dpdk.org
Cc: Stephen Hemminger, Stephen Hemminger
Date: Thu, 9 Aug 2018 10:50:08 -0700
Message-Id: <20180809175008.5787-5-stephen@networkplumber.org>
In-Reply-To: <20180809175008.5787-1-stephen@networkplumber.org>
References: <20180809175008.5787-1-stephen@networkplumber.org>
Subject: [dpdk-dev] [PATCH 4/4] netvsc: implement tx_done_cleanup
X-Patchwork-Id: 43659
X-Patchwork-Delegate: ferruh.yigit@amd.com
List-Id: DPDK patches and discussions

Add the tx_done_cleanup ethdev hook to allow the application to control
if and when it wants Tx completions to be handled.
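For illustration, here is a minimal application-side sketch of how this
hook is reached through the generic ethdev API. The helper name
app_tx_cleanup and the error handling are placeholders, not part of
this patch:

#include <stdio.h>
#include <rte_ethdev.h>

static void
app_tx_cleanup(uint16_t port_id, uint16_t queue_id)
{
	/* free_cnt == 0 asks the PMD to reclaim as many completed
	 * Tx descriptors as it can; a positive value caps the count. */
	int done = rte_eth_tx_done_cleanup(port_id, queue_id, 0);

	if (done < 0)
		printf("tx_done_cleanup on port %u failed: %d\n",
		       port_id, done);
}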
Signed-off-by: Stephen Hemminger
---
 drivers/net/netvsc/hn_ethdev.c |  1 +
 drivers/net/netvsc/hn_rndis.c  |  2 +-
 drivers/net/netvsc/hn_rxtx.c   | 26 +++++++++++++++++++++-----
 drivers/net/netvsc/hn_var.h   |  4 +++-
 4 files changed, 26 insertions(+), 7 deletions(-)

diff --git a/drivers/net/netvsc/hn_ethdev.c b/drivers/net/netvsc/hn_ethdev.c
index 148e6a33d682..2200ee319f98 100644
--- a/drivers/net/netvsc/hn_ethdev.c
+++ b/drivers/net/netvsc/hn_ethdev.c
@@ -547,6 +547,7 @@ static const struct eth_dev_ops hn_eth_dev_ops = {
 	.allmulticast_disable = hn_dev_allmulticast_disable,
 	.tx_queue_setup = hn_dev_tx_queue_setup,
 	.tx_queue_release = hn_dev_tx_queue_release,
+	.tx_done_cleanup = hn_dev_tx_done_cleanup,
 	.rx_queue_setup = hn_dev_rx_queue_setup,
 	.rx_queue_release = hn_dev_rx_queue_release,
 	.link_update = hn_dev_link_update,
diff --git a/drivers/net/netvsc/hn_rndis.c b/drivers/net/netvsc/hn_rndis.c
index bde33969331e..f44add726b91 100644
--- a/drivers/net/netvsc/hn_rndis.c
+++ b/drivers/net/netvsc/hn_rndis.c
@@ -382,7 +382,7 @@ static int hn_rndis_exec1(struct hn_data *hv,
 	if (comp) {
 		/* Poll primary channel until response received */
 		while (hv->rndis_pending == rid)
-			hn_process_events(hv, 0);
+			hn_process_events(hv, 0, 1);

 		memcpy(comp, hv->rndis_resp, comp_len);
 	}
diff --git a/drivers/net/netvsc/hn_rxtx.c b/drivers/net/netvsc/hn_rxtx.c
index 02ef27e363cc..24abc2a91cea 100644
--- a/drivers/net/netvsc/hn_rxtx.c
+++ b/drivers/net/netvsc/hn_rxtx.c
@@ -801,6 +801,14 @@ hn_dev_rx_queue_release(void *arg)
 	}
 }

+int
+hn_dev_tx_done_cleanup(void *arg, uint32_t free_cnt)
+{
+	struct hn_tx_queue *txq = arg;
+
+	return hn_process_events(txq->hv, txq->queue_id, free_cnt);
+}
+
 void
 hn_dev_rx_queue_info(struct rte_eth_dev *dev, uint16_t queue_idx,
 		     struct rte_eth_rxq_info *qinfo)
@@ -831,25 +839,27 @@ hn_nvs_handle_notify(const struct vmbus_chanpkt_hdr *pkthdr,
  * Process pending events on the channel.
  * Called from both Rx queue poll and Tx cleanup
  */
-void hn_process_events(struct hn_data *hv, uint16_t queue_id)
+uint32_t hn_process_events(struct hn_data *hv, uint16_t queue_id,
+			   uint32_t tx_limit)
 {
 	struct rte_eth_dev *dev = &rte_eth_devices[hv->port_id];
 	struct hn_rx_queue *rxq;
 	uint32_t bytes_read = 0;
+	uint32_t tx_done = 0;
 	int ret = 0;

 	rxq = queue_id == 0 ? hv->primary : dev->data->rx_queues[queue_id];

 	/* If no pending data then nothing to do */
 	if (rte_vmbus_chan_rx_empty(rxq->chan))
-		return;
+		return 0;

 	/*
 	 * Since channel is shared between Rx and TX queue need to have a lock
 	 * since DPDK does not force same CPU to be used for Rx/Tx.
 	 */
 	if (unlikely(!rte_spinlock_trylock(&rxq->ring_lock)))
-		return;
+		return 0;

 	for (;;) {
 		const struct vmbus_chanpkt_hdr *pkt;
@@ -873,6 +883,7 @@ void hn_process_events(struct hn_data *hv, uint16_t queue_id)

 		switch (pkt->type) {
 		case VMBUS_CHANPKT_TYPE_COMP:
+			++tx_done;
 			hn_nvs_handle_comp(dev, queue_id, pkt, data);
 			break;

@@ -889,6 +900,9 @@ void hn_process_events(struct hn_data *hv, uint16_t queue_id)
 			break;
 		}

+		if (tx_limit && tx_done >= tx_limit)
+			break;
+
 		if (rxq->rx_ring && rte_ring_full(rxq->rx_ring))
 			break;
 	}
@@ -897,6 +911,8 @@ void hn_process_events(struct hn_data *hv, uint16_t queue_id)
 		rte_vmbus_chan_signal_read(rxq->chan, bytes_read);

 	rte_spinlock_unlock(&rxq->ring_lock);
+
+	return tx_done;
 }

 static void hn_append_to_chim(struct hn_tx_queue *txq,
@@ -1244,7 +1260,7 @@ hn_xmit_pkts(void *ptxq, struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
 		return 0;

 	if (rte_mempool_avail_count(hv->tx_pool) <= txq->free_thresh)
-		hn_process_events(hv, txq->queue_id);
+		hn_process_events(hv, txq->queue_id, 0);

 	for (nb_tx = 0; nb_tx < nb_pkts; nb_tx++) {
 		struct rte_mbuf *m = tx_pkts[nb_tx];
@@ -1326,7 +1342,7 @@ hn_recv_pkts(void *prxq, struct rte_mbuf **rx_pkts, uint16_t nb_pkts)

 	/* If ring is empty then process more */
 	if (rte_ring_count(rxq->rx_ring) < nb_pkts)
-		hn_process_events(hv, rxq->queue_id);
+		hn_process_events(hv, rxq->queue_id, 0);

 	/* Get mbufs off staging ring */
 	return rte_ring_sc_dequeue_burst(rxq->rx_ring, (void **)rx_pkts,
diff --git a/drivers/net/netvsc/hn_var.h b/drivers/net/netvsc/hn_var.h
index b3e0a93d45df..fec8d7c402a5 100644
--- a/drivers/net/netvsc/hn_var.h
+++ b/drivers/net/netvsc/hn_var.h
@@ -133,7 +133,8 @@ hn_primary_chan(const struct hn_data *hv)
 	return hv->channels[0];
 }

-void hn_process_events(struct hn_data *hv, uint16_t queue_id);
+uint32_t hn_process_events(struct hn_data *hv, uint16_t queue_id,
+			   uint32_t tx_limit);

 uint16_t hn_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		      uint16_t nb_pkts);
@@ -147,6 +148,7 @@ int hn_dev_tx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 void hn_dev_tx_queue_release(void *arg);
 void hn_dev_tx_queue_info(struct rte_eth_dev *dev, uint16_t queue_idx,
 			  struct rte_eth_txq_info *qinfo);
+int hn_dev_tx_done_cleanup(void *arg, uint32_t free_cnt);

 struct hn_rx_queue *hn_rx_queue_alloc(struct hn_data *hv,
 				      uint16_t queue_id,
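
A note on the new hn_process_events() contract: tx_limit == 0 keeps the
old drain-everything behaviour used by the rx/tx burst paths, while a
non-zero limit bounds how many completion packets are consumed per
call. A hypothetical helper (not in this patch, names are illustrative)
that drains a queue in bounded passes could look like:

static void
hn_drain_tx(struct hn_tx_queue *txq)
{
	uint32_t done;

	/* Process at most 32 Tx completions per pass; stop once a
	 * pass returns fewer than the limit, i.e. nothing left to
	 * clean (or the ring lock/rx ring cut the pass short). */
	do {
		done = hn_process_events(txq->hv, txq->queue_id, 32);
	} while (done == 32);
}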