From patchwork Wed Aug 29 07:16:03 2018
X-Patchwork-Submitter: Andrew Rybchenko
X-Patchwork-Id: 43939
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Andrew Rybchenko
To: Gaetan Rivet
CC: , Ian Dolzhansky
Date: Wed, 29 Aug 2018 08:16:03 +0100
Message-ID: <1535526966-32456-2-git-send-email-arybchenko@solarflare.com>
In-Reply-To: <1535526966-32456-1-git-send-email-arybchenko@solarflare.com>
Subject: [dpdk-dev] [PATCH 1/4] app/testpmd: add queue deferred start switch

From: Ian Dolzhansky

Signed-off-by: Ian Dolzhansky
Signed-off-by: Andrew Rybchenko
---
 app/test-pmd/cmdline.c                 | 91 ++++++++++++++++++++++++++
 doc/guides/rel_notes/release_18_11.rst |  6 ++
 2 files changed, 97 insertions(+)

diff --git a/app/test-pmd/cmdline.c b/app/test-pmd/cmdline.c
index 589121d69..f47ec99f1 100644
--- a/app/test-pmd/cmdline.c
+++ b/app/test-pmd/cmdline.c
@@ -883,6 +883,10 @@ static void cmd_help_long_parsed(void *parsed_result,
 			"    Start/stop a rx/tx queue of port X. Only take effect"
 			" when port X is started\n\n"

+			"port (port_id) (rxq|txq) (queue_id) deferred_start (on|off)\n"
+			"    Switch on/off a deferred start of port X rx/tx queue. Only"
+			" take effect when port X is stopped.\n\n"
+
 			"port (port_id) (rxq|txq) (queue_id) setup\n"
 			"    Setup a rx/tx queue of port X.\n\n"

@@ -2441,6 +2445,92 @@ cmdline_parse_inst_t cmd_config_rxtx_queue = {
 	},
 };

+/* *** configure port rxq/txq deferred start on/off *** */
+struct cmd_config_deferred_start_rxtx_queue {
+	cmdline_fixed_string_t port;
+	portid_t port_id;
+	cmdline_fixed_string_t rxtxq;
+	uint16_t qid;
+	cmdline_fixed_string_t opname;
+	cmdline_fixed_string_t state;
+};
+
+static void
+cmd_config_deferred_start_rxtx_queue_parsed(void *parsed_result,
+			__attribute__((unused)) struct cmdline *cl,
+			__attribute__((unused)) void *data)
+{
+	struct cmd_config_deferred_start_rxtx_queue *res = parsed_result;
+	struct rte_port *port;
+	uint8_t isrx;
+	uint8_t ison;
+	uint8_t needreconfig = 0;
+
+	if (port_id_is_invalid(res->port_id, ENABLED_WARN))
+		return;
+
+	if (port_is_started(res->port_id) != 0) {
+		printf("Please stop port %u first\n", res->port_id);
+		return;
+	}
+
+	port = &ports[res->port_id];
+
+	isrx = !strcmp(res->rxtxq, "rxq");
+
+	if (isrx && rx_queue_id_is_invalid(res->qid))
+		return;
+	else if (!isrx && tx_queue_id_is_invalid(res->qid))
+		return;
+
+	ison = !strcmp(res->state, "on");
+
+	if (isrx && port->rx_conf[res->qid].rx_deferred_start != ison) {
+		port->rx_conf[res->qid].rx_deferred_start = ison;
+		needreconfig = 1;
+	} else if (!isrx && port->tx_conf[res->qid].tx_deferred_start != ison) {
+		port->tx_conf[res->qid].tx_deferred_start = ison;
+		needreconfig = 1;
+	}
+
+	if (needreconfig)
+		cmd_reconfig_device_queue(res->port_id, 0, 1);
+}
+
+cmdline_parse_token_string_t cmd_config_deferred_start_rxtx_queue_port =
+	TOKEN_STRING_INITIALIZER(struct cmd_config_deferred_start_rxtx_queue,
+						port, "port");
+cmdline_parse_token_num_t cmd_config_deferred_start_rxtx_queue_port_id =
+	TOKEN_NUM_INITIALIZER(struct cmd_config_deferred_start_rxtx_queue,
+						port_id, UINT16);
+cmdline_parse_token_string_t cmd_config_deferred_start_rxtx_queue_rxtxq =
+	TOKEN_STRING_INITIALIZER(struct cmd_config_deferred_start_rxtx_queue,
+						rxtxq, "rxq#txq");
+cmdline_parse_token_num_t cmd_config_deferred_start_rxtx_queue_qid =
+	TOKEN_NUM_INITIALIZER(struct cmd_config_deferred_start_rxtx_queue,
+						qid, UINT16);
+cmdline_parse_token_string_t cmd_config_deferred_start_rxtx_queue_opname =
+	TOKEN_STRING_INITIALIZER(struct cmd_config_deferred_start_rxtx_queue,
+						opname, "deferred_start");
+cmdline_parse_token_string_t cmd_config_deferred_start_rxtx_queue_state =
+	TOKEN_STRING_INITIALIZER(struct cmd_config_deferred_start_rxtx_queue,
+						state, "on#off");
+
+cmdline_parse_inst_t cmd_config_deferred_start_rxtx_queue = {
+	.f = cmd_config_deferred_start_rxtx_queue_parsed,
+	.data = NULL,
+	.help_str = "port <port_id> rxq|txq <queue_id> deferred_start on|off",
+	.tokens = {
+		(void *)&cmd_config_deferred_start_rxtx_queue_port,
+		(void *)&cmd_config_deferred_start_rxtx_queue_port_id,
+		(void *)&cmd_config_deferred_start_rxtx_queue_rxtxq,
+		(void *)&cmd_config_deferred_start_rxtx_queue_qid,
+		(void *)&cmd_config_deferred_start_rxtx_queue_opname,
+		(void *)&cmd_config_deferred_start_rxtx_queue_state,
+		NULL,
+	},
+};
+
 /* *** configure port rxq/txq setup *** */
 struct cmd_setup_rxtx_queue {
 	cmdline_fixed_string_t port;
@@ -17711,6 +17801,7 @@ cmdline_parse_ctx_t main_ctx[] = {
 	(cmdline_parse_inst_t *)&cmd_config_rss,
 	(cmdline_parse_inst_t *)&cmd_config_rxtx_ring_size,
 	(cmdline_parse_inst_t *)&cmd_config_rxtx_queue,
+	(cmdline_parse_inst_t *)&cmd_config_deferred_start_rxtx_queue,
 	(cmdline_parse_inst_t *)&cmd_setup_rxtx_queue,
 	(cmdline_parse_inst_t *)&cmd_config_rss_reta,
 	(cmdline_parse_inst_t *)&cmd_showport_reta,
diff --git a/doc/guides/rel_notes/release_18_11.rst b/doc/guides/rel_notes/release_18_11.rst
index 24204e67b..1f17befd8 100644
--- a/doc/guides/rel_notes/release_18_11.rst
+++ b/doc/guides/rel_notes/release_18_11.rst
@@ -54,6 +54,12 @@ New Features
      Also, make sure to start the actual text at the margin.
      =========================================================

+* **Added ability to switch queue deferred start flag on testpmd app.**
+
+  Added a console command to testpmd app, giving ability to switch
+  ``rx_deferred_start`` or ``tx_deferred_start`` flag of the specified queue of
+  the specified port. The port must be stopped before the command call in order
+  to reconfigure queues.

 API Changes
 -----------
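For readers who want to try the new command, a minimal testpmd session could
look like the following sketch (port 0 and Rx queue 1 are arbitrary choices;
the last line uses the pre-existing per-queue "port <port_id> rxq|txq
<queue_id> start|stop" command to start the deferred queue once the port is
running):

    testpmd> port stop 0
    testpmd> port 0 rxq 1 deferred_start on
    testpmd> port start 0
    testpmd> port 0 rxq 1 start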
From patchwork Wed Aug 29 07:16:04 2018
X-Patchwork-Submitter: Andrew Rybchenko
X-Patchwork-Id: 43941
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Andrew Rybchenko
To: Gaetan Rivet
CC: , Ian Dolzhansky ,
Date: Wed, 29 Aug 2018 08:16:04 +0100
Message-ID: <1535526966-32456-3-git-send-email-arybchenko@solarflare.com>
In-Reply-To: <1535526966-32456-1-git-send-email-arybchenko@solarflare.com>
Subject: [dpdk-dev] [PATCH 2/4] net/failsafe: add checks for deferred queue setup

From: Ian Dolzhansky

Fixes: a46f8d584eb8 ("net/failsafe: add fail-safe PMD")
Cc: stable@dpdk.org

Signed-off-by: Ian Dolzhansky
Signed-off-by: Andrew Rybchenko
---
 drivers/net/failsafe/failsafe_ops.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index 24e91c931..f7cce0d8f 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -340,6 +340,11 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
 	uint8_t i;
 	int ret;

+	if (rx_conf->rx_deferred_start) {
+		ERROR("Rx queue deferred start is not supported");
+		return -EINVAL;
+	}
+
 	fs_lock(dev, 0);
 	rxq = dev->data->rx_queues[rx_queue_id];
 	if (rxq != NULL) {
@@ -497,6 +502,11 @@ fs_tx_queue_setup(struct rte_eth_dev *dev,
 	uint8_t i;
 	int ret;

+	if (tx_conf->tx_deferred_start) {
+		ERROR("Tx queue deferred start is not supported");
+		return -EINVAL;
+	}
+
 	fs_lock(dev, 0);
 	txq = dev->data->tx_queues[tx_queue_id];
 	if (txq != NULL) {
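From an application's point of view, the effect of this check is that queue
setup with the deferred start flag now fails cleanly instead of silently
ignoring the flag. A rough sketch of handling that is shown below; the helper
name and the fallback policy are made up for illustration, only the rte_eth_*
calls are standard ethdev API, and the port is assumed to be configured
already.

/* Illustrative only: request a deferred-start Rx queue and fall back to an
 * immediately started queue if the PMD rejects the flag with -EINVAL.
 */
#include <rte_ethdev.h>

static int
setup_rx_queue_maybe_deferred(uint16_t port_id, uint16_t queue_id,
			      uint16_t nb_desc, struct rte_mempool *mp)
{
	struct rte_eth_rxconf rx_conf;
	struct rte_eth_dev_info dev_info;
	int ret;

	rte_eth_dev_info_get(port_id, &dev_info);
	rx_conf = dev_info.default_rxconf;
	rx_conf.rx_deferred_start = 1;

	ret = rte_eth_rx_queue_setup(port_id, queue_id, nb_desc,
				     rte_eth_dev_socket_id(port_id),
				     &rx_conf, mp);
	if (ret == -EINVAL) {
		/* Deferred start not supported by this port: retry without. */
		rx_conf.rx_deferred_start = 0;
		ret = rte_eth_rx_queue_setup(port_id, queue_id, nb_desc,
					     rte_eth_dev_socket_id(port_id),
					     &rx_conf, mp);
	}
	return ret;
}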
From patchwork Wed Aug 29 07:16:05 2018
X-Patchwork-Submitter: Andrew Rybchenko
X-Patchwork-Id: 43942
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Andrew Rybchenko
To: Gaetan Rivet
CC: , Ian Dolzhansky
Date: Wed, 29 Aug 2018 08:16:05 +0100
Message-ID: <1535526966-32456-4-git-send-email-arybchenko@solarflare.com>
In-Reply-To: <1535526966-32456-1-git-send-email-arybchenko@solarflare.com>
Subject: [dpdk-dev] [PATCH 3/4] net/failsafe: add Rx queue start and stop functions

From: Ian Dolzhansky

Support Rx queue deferred start.

Signed-off-by: Ian Dolzhansky
Signed-off-by: Andrew Rybchenko
---
 doc/guides/nics/features/failsafe.ini  |  1 +
 doc/guides/rel_notes/release_18_11.rst |  7 ++
 drivers/net/failsafe/failsafe_ether.c  | 44 ++++++++++++
 drivers/net/failsafe/failsafe_ops.c    | 96 ++++++++++++++++++++++++--
 4 files changed, 143 insertions(+), 5 deletions(-)

diff --git a/doc/guides/nics/features/failsafe.ini b/doc/guides/nics/features/failsafe.ini
index 39ee57965..712c0b7f7 100644
--- a/doc/guides/nics/features/failsafe.ini
+++ b/doc/guides/nics/features/failsafe.ini
@@ -7,6 +7,7 @@
 Link status          = Y
 Link status event    = Y
 Rx interrupt         = Y
+Queue start/stop     = P
 MTU update           = Y
 Jumbo frame          = Y
 Promiscuous mode     = Y
diff --git a/doc/guides/rel_notes/release_18_11.rst b/doc/guides/rel_notes/release_18_11.rst
index 1f17befd8..882ef8ac6 100644
--- a/doc/guides/rel_notes/release_18_11.rst
+++ b/doc/guides/rel_notes/release_18_11.rst
@@ -54,6 +54,13 @@ New Features
      Also, make sure to start the actual text at the margin.
      =========================================================

+* **Updated failsafe driver.**
+
+  Updated the failsafe driver including the following changes:
+
+  * Support for Rx queues start and stop.
+  * Support for Rx queues deferred start.
+
 * **Added ability to switch queue deferred start flag on testpmd app.**

   Added a console command to testpmd app, giving ability to switch
diff --git a/drivers/net/failsafe/failsafe_ether.c b/drivers/net/failsafe/failsafe_ether.c
index 5b5cb3b49..305deed63 100644
--- a/drivers/net/failsafe/failsafe_ether.c
+++ b/drivers/net/failsafe/failsafe_ether.c
@@ -366,6 +366,47 @@ failsafe_dev_remove(struct rte_eth_dev *dev)
 	}
 }

+static int
+failsafe_eth_dev_rx_queues_sync(struct rte_eth_dev *dev)
+{
+	struct rxq *rxq;
+	int ret;
+	uint16_t i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+
+		if (rxq->info.conf.rx_deferred_start &&
+		    dev->data->rx_queue_state[i] ==
+						RTE_ETH_QUEUE_STATE_STARTED) {
+			/*
+			 * The subdevice Rx queue does not launch on device
+			 * start if deferred start flag is set. It needs to be
+			 * started manually in case an appropriate failsafe Rx
+			 * queue has been started earlier.
+			 */
+			ret = dev->dev_ops->rx_queue_start(dev, i);
+			if (ret) {
+				ERROR("Could not synchronize Rx queue %d", i);
+				return ret;
+			}
+		} else if (dev->data->rx_queue_state[i] ==
+						RTE_ETH_QUEUE_STATE_STOPPED) {
+			/*
+			 * The subdevice Rx queue needs to be stopped manually
+			 * in case an appropriate failsafe Rx queue has been
+			 * stopped earlier.
+			 */
+			ret = dev->dev_ops->rx_queue_stop(dev, i);
+			if (ret) {
+				ERROR("Could not synchronize Rx queue %d", i);
+				return ret;
+			}
+		}
+	}
+	return 0;
+}
+
 int
 failsafe_eth_dev_state_sync(struct rte_eth_dev *dev)
 {
@@ -422,6 +463,9 @@ failsafe_eth_dev_state_sync(struct rte_eth_dev *dev)
 	if (PRIV(dev)->state < DEV_STARTED)
 		return 0;
 	ret = dev->dev_ops->dev_start(dev);
+	if (ret)
+		goto err_remove;
+	ret = failsafe_eth_dev_rx_queues_sync(dev);
 	if (ret)
 		goto err_remove;
 	return 0;
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index f7cce0d8f..412d522cf 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -170,6 +170,20 @@ fs_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }

+static void
+fs_set_queues_state_start(struct rte_eth_dev *dev)
+{
+	struct rxq *rxq;
+	uint16_t i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++) {
+		rxq = dev->data->rx_queues[i];
+		if (!rxq->info.conf.rx_deferred_start)
+			dev->data->rx_queue_state[i] =
+						RTE_ETH_QUEUE_STATE_STARTED;
+	}
+}
+
 static int
 fs_dev_start(struct rte_eth_dev *dev)
 {
@@ -204,13 +218,24 @@ fs_dev_start(struct rte_eth_dev *dev)
 		}
 		sdev->state = DEV_STARTED;
 	}
-	if (PRIV(dev)->state < DEV_STARTED)
+	if (PRIV(dev)->state < DEV_STARTED) {
 		PRIV(dev)->state = DEV_STARTED;
+		fs_set_queues_state_start(dev);
+	}
 	fs_switch_dev(dev, NULL);
 	fs_unlock(dev, 0);
 	return 0;
 }

+static void
+fs_set_queues_state_stop(struct rte_eth_dev *dev)
+{
+	uint16_t i;
+
+	for (i = 0; i < dev->data->nb_rx_queues; i++)
+		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+}
+
 static void
 fs_dev_stop(struct rte_eth_dev *dev)
 {
@@ -225,6 +250,7 @@ fs_dev_stop(struct rte_eth_dev *dev)
 		sdev->state = DEV_STARTED - 1;
 	}
 	failsafe_rx_intr_uninstall(dev);
+	fs_set_queues_state_stop(dev);
 	fs_unlock(dev, 0);
 }

@@ -340,12 +366,17 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
 	uint8_t i;
 	int ret;

+	fs_lock(dev, 0);
 	if (rx_conf->rx_deferred_start) {
-		ERROR("Rx queue deferred start is not supported");
-		return -EINVAL;
+		FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_PROBED) {
+			if (SUBOPS(sdev, rx_queue_start) == NULL) {
+				ERROR("Rx queue deferred start is not "
+					"supported for subdevice %d", i);
+				fs_unlock(dev, 0);
+				return -EINVAL;
+			}
+		}
 	}
-
-	fs_lock(dev, 0);
 	rxq = dev->data->rx_queues[rx_queue_id];
 	if (rxq != NULL) {
 		fs_rx_queue_release(rxq);
@@ -393,6 +424,59 @@ fs_rx_queue_setup(struct rte_eth_dev *dev,
 	return ret;
 }

+static int
+fs_rx_queue_stop(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct sub_device *sdev;
+	uint8_t i;
+	int ret;
+	int err = 0;
+	bool failure = true;
+
+	fs_lock(dev, 0);
+	FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_ACTIVE) {
+		uint16_t port_id = ETH(sdev)->data->port_id;
+
+		ret = rte_eth_dev_rx_queue_stop(port_id, rx_queue_id);
+		ret = fs_err(sdev, ret);
+		if (ret) {
+			ERROR("Rx queue stop failed for subdevice %d", i);
+			err = ret;
+		} else {
+			failure = false;
+		}
+	}
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+	fs_unlock(dev, 0);
+	/* Return 0 in case of at least one successful queue stop */
+	return (failure) ? err : 0;
+}
+
+static int
+fs_rx_queue_start(struct rte_eth_dev *dev, uint16_t rx_queue_id)
+{
+	struct sub_device *sdev;
+	uint8_t i;
+	int ret;
+
+	fs_lock(dev, 0);
+	FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_ACTIVE) {
+		uint16_t port_id = ETH(sdev)->data->port_id;
+
+		ret = rte_eth_dev_rx_queue_start(port_id, rx_queue_id);
+		ret = fs_err(sdev, ret);
+		if (ret) {
+			ERROR("Rx queue start failed for subdevice %d", i);
+			fs_rx_queue_stop(dev, rx_queue_id);
+			fs_unlock(dev, 0);
+			return ret;
+		}
+	}
+	dev->data->rx_queue_state[rx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
+	fs_unlock(dev, 0);
+	return 0;
+}
+
 static int
 fs_rx_intr_enable(struct rte_eth_dev *dev, uint16_t idx)
 {
@@ -1037,6 +1121,8 @@ const struct eth_dev_ops failsafe_ops = {
 	.vlan_filter_set = fs_vlan_filter_set,
 	.rx_queue_setup = fs_rx_queue_setup,
 	.tx_queue_setup = fs_tx_queue_setup,
+	.rx_queue_start = fs_rx_queue_start,
+	.rx_queue_stop = fs_rx_queue_stop,
 	.rx_queue_release = fs_rx_queue_release,
 	.tx_queue_release = fs_tx_queue_release,
 	.rx_queue_intr_enable = fs_rx_intr_enable,
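For illustration, the Rx deferred start flow this patch enables looks roughly
as follows from the application side. This is a minimal sketch: the helper
name, queue id 0 and descriptor count are arbitrary, the port is assumed to be
configured (rte_eth_dev_configure() and Tx queue setup done elsewhere), and
only the rte_eth_* calls are standard ethdev API.

#include <rte_ethdev.h>

/* Sketch: set up Rx queue 0 as deferred, start the port (queue 0 stays
 * stopped), then start the queue explicitly when ready to receive.
 */
static int
start_port_with_deferred_rxq(uint16_t port_id, struct rte_mempool *mp)
{
	struct rte_eth_rxconf rx_conf = { .rx_deferred_start = 1 };
	int ret;

	ret = rte_eth_rx_queue_setup(port_id, 0, 512,
				     rte_eth_dev_socket_id(port_id),
				     &rx_conf, mp);
	if (ret != 0)
		return ret;

	ret = rte_eth_dev_start(port_id);	/* Rx queue 0 is not started */
	if (ret != 0)
		return ret;

	/* Later, once the application is ready to receive on queue 0: */
	return rte_eth_dev_rx_queue_start(port_id, 0);
}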
From patchwork Wed Aug 29 07:16:06 2018
X-Patchwork-Submitter: Andrew Rybchenko
X-Patchwork-Id: 43943
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Andrew Rybchenko
To: Gaetan Rivet
CC: , Ian Dolzhansky
Date: Wed, 29 Aug 2018 08:16:06 +0100
Message-ID: <1535526966-32456-5-git-send-email-arybchenko@solarflare.com>
In-Reply-To: <1535526966-32456-1-git-send-email-arybchenko@solarflare.com>
Subject: [dpdk-dev] [PATCH 4/4] net/failsafe: add Tx queue start and stop functions

From: Ian Dolzhansky

Support Tx queue deferred start.

Signed-off-by: Ian Dolzhansky
Signed-off-by: Andrew Rybchenko
---
 doc/guides/nics/features/failsafe.ini  |  2 +-
 doc/guides/rel_notes/release_18_11.rst |  4 +-
 drivers/net/failsafe/failsafe_ether.c  | 44 +++++++++++++
 drivers/net/failsafe/failsafe_ops.c    | 77 ++++++++++++++++++++++--
 4 files changed, 120 insertions(+), 7 deletions(-)

diff --git a/doc/guides/nics/features/failsafe.ini b/doc/guides/nics/features/failsafe.ini
index 712c0b7f7..74eae4a62 100644
--- a/doc/guides/nics/features/failsafe.ini
+++ b/doc/guides/nics/features/failsafe.ini
@@ -7,7 +7,7 @@
 Link status          = Y
 Link status event    = Y
 Rx interrupt         = Y
-Queue start/stop     = P
+Queue start/stop     = Y
 MTU update           = Y
 Jumbo frame          = Y
 Promiscuous mode     = Y
diff --git a/doc/guides/rel_notes/release_18_11.rst b/doc/guides/rel_notes/release_18_11.rst
index 882ef8ac6..ad08a204f 100644
--- a/doc/guides/rel_notes/release_18_11.rst
+++ b/doc/guides/rel_notes/release_18_11.rst
@@ -58,8 +58,8 @@ New Features

   Updated the failsafe driver including the following changes:

-  * Support for Rx queues start and stop.
-  * Support for Rx queues deferred start.
+  * Support for Rx and Tx queues start and stop.
+  * Support for Rx and Tx queues deferred start.

 * **Added ability to switch queue deferred start flag on testpmd app.**

diff --git a/drivers/net/failsafe/failsafe_ether.c b/drivers/net/failsafe/failsafe_ether.c
index 305deed63..191f95f14 100644
--- a/drivers/net/failsafe/failsafe_ether.c
+++ b/drivers/net/failsafe/failsafe_ether.c
@@ -407,6 +407,47 @@ failsafe_eth_dev_rx_queues_sync(struct rte_eth_dev *dev)
 	return 0;
 }

+static int
+failsafe_eth_dev_tx_queues_sync(struct rte_eth_dev *dev)
+{
+	struct txq *txq;
+	int ret;
+	uint16_t i;
+
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+
+		if (txq->info.conf.tx_deferred_start &&
+		    dev->data->tx_queue_state[i] ==
+						RTE_ETH_QUEUE_STATE_STARTED) {
+			/*
+			 * The subdevice Tx queue does not launch on device
+			 * start if deferred start flag is set. It needs to be
+			 * started manually in case an appropriate failsafe Tx
+			 * queue has been started earlier.
+			 */
+			ret = dev->dev_ops->tx_queue_start(dev, i);
+			if (ret) {
+				ERROR("Could not synchronize Tx queue %d", i);
+				return ret;
+			}
+		} else if (dev->data->tx_queue_state[i] ==
+						RTE_ETH_QUEUE_STATE_STOPPED) {
+			/*
+			 * The subdevice Tx queue needs to be stopped manually
+			 * in case an appropriate failsafe Tx queue has been
+			 * stopped earlier.
+			 */
+			ret = dev->dev_ops->tx_queue_stop(dev, i);
+			if (ret) {
+				ERROR("Could not synchronize Tx queue %d", i);
+				return ret;
+			}
+		}
+	}
+	return 0;
+}
+
 int
 failsafe_eth_dev_state_sync(struct rte_eth_dev *dev)
 {
@@ -466,6 +507,9 @@ failsafe_eth_dev_state_sync(struct rte_eth_dev *dev)
 	if (ret)
 		goto err_remove;
 	ret = failsafe_eth_dev_rx_queues_sync(dev);
+	if (ret)
+		goto err_remove;
+	ret = failsafe_eth_dev_tx_queues_sync(dev);
 	if (ret)
 		goto err_remove;
 	return 0;
diff --git a/drivers/net/failsafe/failsafe_ops.c b/drivers/net/failsafe/failsafe_ops.c
index 412d522cf..4d30eb22d 100644
--- a/drivers/net/failsafe/failsafe_ops.c
+++ b/drivers/net/failsafe/failsafe_ops.c
@@ -174,6 +174,7 @@ static void
 fs_set_queues_state_start(struct rte_eth_dev *dev)
 {
 	struct rxq *rxq;
+	struct txq *txq;
 	uint16_t i;

 	for (i = 0; i < dev->data->nb_rx_queues; i++) {
@@ -182,6 +183,12 @@ fs_set_queues_state_start(struct rte_eth_dev *dev)
 			dev->data->rx_queue_state[i] =
 						RTE_ETH_QUEUE_STATE_STARTED;
 	}
+	for (i = 0; i < dev->data->nb_tx_queues; i++) {
+		txq = dev->data->tx_queues[i];
+		if (!txq->info.conf.tx_deferred_start)
+			dev->data->tx_queue_state[i] =
+						RTE_ETH_QUEUE_STATE_STARTED;
+	}
 }

 static int
@@ -234,6 +241,8 @@ fs_set_queues_state_stop(struct rte_eth_dev *dev)

 	for (i = 0; i < dev->data->nb_rx_queues; i++)
 		dev->data->rx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
+	for (i = 0; i < dev->data->nb_tx_queues; i++)
+		dev->data->tx_queue_state[i] = RTE_ETH_QUEUE_STATE_STOPPED;
 }

 static void
@@ -586,12 +595,17 @@ fs_tx_queue_setup(struct rte_eth_dev *dev,
 	uint8_t i;
 	int ret;

+	fs_lock(dev, 0);
 	if (tx_conf->tx_deferred_start) {
-		ERROR("Tx queue deferred start is not supported");
-		return -EINVAL;
+		FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_PROBED) {
+			if (SUBOPS(sdev, tx_queue_start) == NULL) {
+				ERROR("Tx queue deferred start is not "
+					"supported for subdevice %d", i);
+				fs_unlock(dev, 0);
+				return -EINVAL;
+			}
+		}
 	}
-
-	fs_lock(dev, 0);
 	txq = dev->data->tx_queues[tx_queue_id];
 	if (txq != NULL) {
 		fs_tx_queue_release(txq);
@@ -631,6 +645,59 @@ fs_tx_queue_setup(struct rte_eth_dev *dev,
 	return ret;
 }

+static int
+fs_tx_queue_stop(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct sub_device *sdev;
+	uint8_t i;
+	int ret;
+	int err = 0;
+	bool failure = true;
+
+	fs_lock(dev, 0);
+	FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_ACTIVE) {
+		uint16_t port_id = ETH(sdev)->data->port_id;
+
+		ret = rte_eth_dev_tx_queue_stop(port_id, tx_queue_id);
+		ret = fs_err(sdev, ret);
+		if (ret) {
+			ERROR("Tx queue stop failed for subdevice %d", i);
+			err = ret;
+		} else {
+			failure = false;
+		}
+	}
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STOPPED;
+	fs_unlock(dev, 0);
+	/* Return 0 in case of at least one successful queue stop */
+	return (failure) ? err : 0;
+}
+
+static int
+fs_tx_queue_start(struct rte_eth_dev *dev, uint16_t tx_queue_id)
+{
+	struct sub_device *sdev;
+	uint8_t i;
+	int ret;
+
+	fs_lock(dev, 0);
+	FOREACH_SUBDEV_STATE(sdev, i, dev, DEV_ACTIVE) {
+		uint16_t port_id = ETH(sdev)->data->port_id;
+
+		ret = rte_eth_dev_tx_queue_start(port_id, tx_queue_id);
+		ret = fs_err(sdev, ret);
+		if (ret) {
+			ERROR("Tx queue start failed for subdevice %d", i);
+			fs_tx_queue_stop(dev, tx_queue_id);
+			fs_unlock(dev, 0);
+			return ret;
+		}
+	}
+	dev->data->tx_queue_state[tx_queue_id] = RTE_ETH_QUEUE_STATE_STARTED;
+	fs_unlock(dev, 0);
+	return 0;
+}
+
 static void
 fs_dev_free_queues(struct rte_eth_dev *dev)
 {
@@ -1122,7 +1189,9 @@ const struct eth_dev_ops failsafe_ops = {
 	.rx_queue_setup = fs_rx_queue_setup,
 	.tx_queue_setup = fs_tx_queue_setup,
 	.rx_queue_start = fs_rx_queue_start,
+	.tx_queue_start = fs_tx_queue_start,
 	.rx_queue_stop = fs_rx_queue_stop,
+	.tx_queue_stop = fs_tx_queue_stop,
 	.rx_queue_release = fs_rx_queue_release,
 	.tx_queue_release = fs_tx_queue_release,
 	.rx_queue_intr_enable = fs_rx_intr_enable,
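The Tx side mirrors the Rx flow above. A minimal sketch is shown below under
the same assumptions: the helper name, queue id and descriptor count are
arbitrary, the port is assumed to be already configured with its Rx queues set
up, and only the rte_eth_* calls are standard ethdev API.

#include <rte_ethdev.h>

/* Sketch: set up a Tx queue with tx_deferred_start, leave it stopped across
 * rte_eth_dev_start(), then start and later stop it on demand.
 */
static int
toggle_deferred_txq(uint16_t port_id, uint16_t queue_id)
{
	struct rte_eth_txconf tx_conf = { .tx_deferred_start = 1 };
	int ret;

	ret = rte_eth_tx_queue_setup(port_id, queue_id, 512,
				     rte_eth_dev_socket_id(port_id),
				     &tx_conf);
	if (ret != 0)
		return ret;

	ret = rte_eth_dev_start(port_id);	/* this Tx queue is not started */
	if (ret != 0)
		return ret;

	ret = rte_eth_dev_tx_queue_start(port_id, queue_id);
	if (ret != 0)
		return ret;

	/* ... transmit on the queue ... */

	return rte_eth_dev_tx_queue_stop(port_id, queue_id);
}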