From patchwork Thu Sep 20 13:55:51 2018
X-Patchwork-Submitter: Andrew Rybchenko
X-Patchwork-Id: 45038
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Andrew Rybchenko
To: Gaetan Rivet
CC: , Ian Dolzhansky
Date: Thu, 20 Sep 2018 14:55:51 +0100
Message-ID: <1537451752-28759-4-git-send-email-arybchenko@solarflare.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1537451752-28759-1-git-send-email-arybchenko@solarflare.com>
References: <1535526966-32456-1-git-send-email-arybchenko@solarflare.com>
 <1537451752-28759-1-git-send-email-arybchenko@solarflare.com>
Subject: [dpdk-dev] [PATCH v2 3/4] net/failsafe: add Rx queue start and stop functions
List-Id: DPDK patches and discussions

From: Ian Dolzhansky

Support Rx queue deferred start.
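For reviewers' convenience, a minimal sketch of the intended application-side
usage follows. It is not part of the patch; the helper name, port_id, queue
size and mempool are placeholders, and device configure plus Tx queue setup
are assumed to have been done by the caller.

#include <string.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Hypothetical helper: set up Rx queue 0 with deferred start, start the
 * port, then start the queue explicitly through the ethdev API. */
static int
setup_deferred_rxq(uint16_t port_id, struct rte_mempool *mb_pool)
{
	struct rte_eth_rxconf rx_conf;
	int ret;

	memset(&rx_conf, 0, sizeof(rx_conf));
	/* Queue will not be started by rte_eth_dev_start(). */
	rx_conf.rx_deferred_start = 1;

	ret = rte_eth_rx_queue_setup(port_id, 0, 512,
				     rte_eth_dev_socket_id(port_id),
				     &rx_conf, mb_pool);
	if (ret != 0)
		return ret;
	ret = rte_eth_dev_start(port_id);
	if (ret != 0)
		return ret;
	/* With this patch the failsafe driver propagates the request to each
	 * active subdevice and records the queue state. */
	return rte_eth_dev_rx_queue_start(port_id, 0);
}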
Signed-off-by: Ian Dolzhansky
Signed-off-by: Andrew Rybchenko
Acked-by: Gaetan Rivet
---
 doc/guides/nics/features/failsafe.ini  |  1 +
 doc/guides/rel_notes/release_18_11.rst |  7 ++
 drivers/net/failsafe/failsafe_ether.c  | 44 ++++++++++++
 drivers/net/failsafe/failsafe_ops.c    | 96 ++++++++++++++++++++++++--
 4 files changed, 143 insertions(+), 5 deletions(-)

diff --git a/doc/guides/nics/features/failsafe.ini b/doc/guides/nics/features/failsafe.ini
index 39ee57965..712c0b7f7 100644
--- a/doc/guides/nics/features/failsafe.ini
+++ b/doc/guides/nics/features/failsafe.ini
@@ -7,6 +7,7 @@
 Link status = Y
 Link status event = Y
 Rx interrupt = Y
+Queue start/stop = P
 MTU update = Y
 Jumbo frame = Y
 Promiscuous mode = Y
diff --git a/doc/guides/rel_notes/release_18_11.rst b/doc/guides/rel_notes/release_18_11.rst
index e9f3a415c..d6af403b1 100644
--- a/doc/guides/rel_notes/release_18_11.rst
+++ b/doc/guides/rel_notes/release_18_11.rst
@@ -72,6 +72,13 @@ New Features
   choose the latest vector path that the platform supported. For example, users
   can use AVX2 vector path on BDW/HSW to get better performance.
 
+* **Updated failsafe driver.**
+
+  Updated the failsafe driver including the following changes:
+
+  * Support for Rx queues start and stop.
+  * Support for Rx queues deferred start.
+
 * **Added ability to switch queue deferred start flag on testpmd app.**
 
   Added a console command to testpmd app, giving ability to switch
diff --git a/drivers/net/failsafe/failsafe_ether.c b/drivers/net/failsafe/failsafe_ether.c
index 5b5cb3b49..305deed63 100644
--- a/drivers/net/failsafe/failsafe_ether.c
+++ b/drivers/net/failsafe/failsafe_ether.c
@@ -366,6 +366,47 @@ failsafe_dev_remove(struct rte_eth_dev *dev)
 	}
 }
 
+static int
+failsafe_eth_dev_rx_queues_sync(struct rte_eth_dev *dev)
+{
+	struct rxq *rxq;
+	int ret;
+	uint16_t i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+
+		if (rxq->info.conf.rx_deferred_start &&
+		    dev->data->rx_queue_state[i] ==
+						RTE_ETH_QUEUE_STATE_STARTED) {
+			/*
+			 * The subdevice Rx queue does not launch on device
+			 * start if deferred start flag is set. It needs to be
+			 * started manually in case an appropriate failsafe Rx
+			 * queue has been started earlier.
+			 */
+			ret = dev->dev_ops->rx_queue_start(dev, i);
+			if (ret) {
+				ERROR("Could not synchronize Rx queue %d", i);
+				return ret;
+			}
+		} else if (dev->data->rx_queue_state[i] ==
+					RTE_ETH_QUEUE_STATE_STOPPED) {
+			/*
+			 * The subdevice Rx queue needs to be stopped manually
+			 * in case an appropriate failsafe Rx queue has been
+			 * stopped earlier.
+			 */
+			ret = dev->dev_ops->rx_queue_stop(dev, i);
+			if (ret) {
+				ERROR("Could not synchronize Rx queue %d", i);
+				return ret;
+			}
+		}
+	}
+	return 0;
+}
+
 int
 failsafe_eth_dev_state_sync(struct rte_eth_dev *dev)
 {
@@ -422,6 +463,9 @@ failsafe_eth_dev_state_sync(struct rte_eth_dev *dev)
 	if (PRIV(dev)->state < DEV_STARTED)
 		return 0;
 	ret = dev->dev_ops->dev_start(dev);
+	if (ret)
+		goto err_remove;
+	ret = failsafe_eth_dev_rx_queues_sync(dev);
 	if (ret)
 		goto err_remove;
 	return 0;
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index 9608d09cb..402c5b62c 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -168,6 +168,20 @@ fs_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
+static void
+fs_set_queues_state_start(struct rte_eth_dev *dev)
+{
+	struct rxq *rxq;
+	uint16_t i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (!rxq->info.conf.rx_deferred_start)
+			dev->data->rx_queue_state[i] =
+						RTE_ETH_QUEUE_STATE_STARTED;
+	}
+}
+
 static int
 fs_dev_start(struct rte_eth_dev *dev)
 {
@@ -202,13 +216,24 @@ fs_dev_start(struct rte_eth_dev *dev)
 		}
 		sdev->state = DEV_STARTED;
 	}
-	if (PRIV(dev)->state < DEV_STARTED)
+	if (PRIV(dev)->state < DEV_STARTED) {
 		PRIV(dev)->state = DEV_STARTED;
+		fs_set_queues_state_start(dev);
+	}
 	fs_switch_dev(dev, NULL);
 	fs_unlock(dev, 0);
 	return 0;
 }
 
+static void
+fs_set_queues_state_stop(struct rte_eth_dev *dev)
+{
+	uint16_t i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++)
+		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+}
+
 static void
 fs_dev_stop(struct rte_eth_dev *dev)
 {
@@ -223,6 +248,7 @@ fs_dev_stop(struct rte_eth_dev *dev)
 		sdev->state = DEV_STARTED - 1;
 	}
 	failsafe_rx_intr_uninstall(dev);
+	fs_set_queues_state_stop(dev);
 	fs_unlock(dev, 0);
 }
 
@@ -292,6 +318,59 @@ fs_dev_close(struct rte_eth_dev *dev)
 	fs_unlock(dev, 0);
 }
 
+static int
+fs_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct sub_device *sdev;
+	uint8_t i;
+	int ret;
+	int err = 0;
+	bool failure = true;
+
+	fs_lock(dev, 0);
+	FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_ACTIVE) {
+		uint16_t port_id = ETH(sdev)->data->port_id;
+
+		ret = rte_eth_dev_rx_queue_stop(port_id, rx_queue_id);
+		ret = fs_err(sdev, ret);
+		if (ret) {
+			ERROR("Rx queue stop failed for subdevice %d", i);
+			err = ret;
+		} else {
+			failure = false;
+		}
+	}
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+	fs_unlock(dev, 0);
+	/* Return 0 in case of at least one successful queue stop */
+	return (failure) ? err : 0;
+}
+
+static int
+fs_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct sub_device *sdev;
+	uint8_t i;
+	int ret;
+
+	fs_lock(dev, 0);
+	FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_ACTIVE) {
+		uint16_t port_id = ETH(sdev)->data->port_id;
+
+		ret = rte_eth_dev_rx_queue_start(port_id, rx_queue_id);
+		ret = fs_err(sdev, ret);
+		if (ret) {
+			ERROR("Rx queue start failed for subdevice %d", i);
+			fs_rx_queue_stop(dev, rx_queue_id);
+			fs_unlock(dev, 0);
+			return ret;
+		}
+	}
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
+	fs_unlock(dev, 0);
+	return 0;
+}
+
 static void
 fs_rx_queue_release(void *queue)
 {
@@ -338,12 +417,17 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
 	uint8_t i;
 	int ret;
 
+	fs_lock(dev, 0);
 	if (rx_conf->rx_deferred_start) {
-		ERROR("Rx queue deferred start is not supported");
-		return -EINVAL;
+		FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_PROBED) {
+			if (SUBOPS(sdev, rx_queue_start) == NULL) {
+				ERROR("Rx queue deferred start is not "
+					"supported for subdevice %d", i);
+				fs_unlock(dev, 0);
+				return -EINVAL;
+			}
+		}
 	}
-
-	fs_lock(dev, 0);
 	rxq = dev->data->rx_queues[rx_queue_id];
 	if (rxq != NULL) {
 		fs_rx_queue_release(rxq);
@@ -1033,6 +1117,8 @@ const struct eth_dev_ops failsafe_ops = {
 	.dev_supported_ptypes_get = fs_dev_supported_ptypes_get,
 	.mtu_set = fs_mtu_set,
 	.vlan_filter_set = fs_vlan_filter_set,
+	.rx_queue_start = fs_rx_queue_start,
+	.rx_queue_stop = fs_rx_queue_stop,
 	.rx_queue_setup = fs_rx_queue_setup,
 	.tx_queue_setup = fs_tx_queue_setup,
 	.rx_queue_release = fs_rx_queue_release,
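A note for readers, not part of the patch: the new
failsafe_eth_dev_rx_queues_sync() exists so that queue states requested
through the ethdev API survive subdevice hotplug. A sketch of the scenario,
with port_id and the hotplug event assumed for illustration:

/* Queue states recorded by the failsafe port: */
rte_eth_dev_rx_queue_stop(port_id, 0);   /* queue 0 recorded as STOPPED */
rte_eth_dev_rx_queue_start(port_id, 1);  /* deferred-start queue 1 recorded as STARTED */
/*
 * If a subdevice is later removed and re-plugged, the failsafe driver
 * restarts it and, during the state sync, stops queue 0 and starts queue 1
 * on that subdevice so it matches the state the application last requested.
 */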