From patchwork Wed May 9 12:45:52 2018
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 39580
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk
To: Marcin Wojtas, Michal Krawczyk, Guy Tzalik, Evgeny Schemeilin
Cc: dev@dpdk.org, matua@amazon.com
Date: Wed, 9 May 2018 14:45:52 +0200
Message-Id: <20180509124552.22854-8-mk@semihalf.com>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20180509124552.22854-1-mk@semihalf.com>
References: <20180509124552.22854-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v1 08/24] net/ena: add reset routine

The reset routine can be used by a DPDK application to reset the device
after it receives an RTE_ETH_EVENT_INTR_RESET event from the PMD. The
driver does not trigger the reset event yet; that will be added in
upcoming commits to enable error recovery when the device malfunctions.
(An illustrative application-side sketch is appended after the diff.)
Signed-off-by: Michal Krawczyk
---
 drivers/net/ena/ena_ethdev.c | 82 +++++++++++++++++++++++++++++++++++++++-----
 drivers/net/ena/ena_ethdev.h |  2 ++
 2 files changed, 76 insertions(+), 8 deletions(-)

-- 
2.14.1

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index fc2ee6a4f..160fcf04a 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -225,6 +225,7 @@ static int ena_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 static int ena_start(struct rte_eth_dev *dev);
 static void ena_stop(struct rte_eth_dev *dev);
 static void ena_close(struct rte_eth_dev *dev);
+static int ena_dev_reset(struct rte_eth_dev *dev);
 static int ena_stats_get(struct rte_eth_dev *dev, struct rte_eth_stats *stats);
 static void ena_rx_queue_release_all(struct rte_eth_dev *dev);
 static void ena_tx_queue_release_all(struct rte_eth_dev *dev);
@@ -266,6 +267,7 @@ static const struct eth_dev_ops ena_dev_ops = {
 	.rx_queue_release = ena_rx_queue_release,
 	.tx_queue_release = ena_tx_queue_release,
 	.dev_close = ena_close,
+	.dev_reset = ena_dev_reset,
 	.reta_update = ena_rss_reta_update,
 	.reta_query = ena_rss_reta_query,
 };
@@ -474,6 +476,63 @@ static void ena_close(struct rte_eth_dev *dev)
 	ena_tx_queue_release_all(dev);
 }
 
+static int
+ena_dev_reset(struct rte_eth_dev *dev)
+{
+	struct rte_mempool *mb_pool_rx[ENA_MAX_NUM_QUEUES];
+	struct rte_eth_dev *eth_dev;
+	struct rte_pci_device *pci_dev;
+	struct rte_intr_handle *intr_handle;
+	struct ena_com_dev *ena_dev;
+	struct ena_com_dev_get_features_ctx get_feat_ctx;
+	struct ena_adapter *adapter;
+	int nb_queues;
+	int rc, i;
+
+	adapter = (struct ena_adapter *)(dev->data->dev_private);
+	ena_dev = &adapter->ena_dev;
+	eth_dev = adapter->rte_dev;
+	pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev);
+	intr_handle = &pci_dev->intr_handle;
+	nb_queues = eth_dev->data->nb_rx_queues;
+
+	ena_com_set_admin_running_state(ena_dev, false);
+
+	ena_com_dev_reset(ena_dev, adapter->reset_reason);
+
+	for (i = 0; i < nb_queues; i++)
+		mb_pool_rx[i] = adapter->rx_ring[i].mb_pool;
+
+	ena_rx_queue_release_all(eth_dev);
+	ena_tx_queue_release_all(eth_dev);
+
+	rte_intr_disable(intr_handle);
+
+	ena_com_abort_admin_commands(ena_dev);
+	ena_com_wait_for_abort_completion(ena_dev);
+	ena_com_admin_destroy(ena_dev);
+	ena_com_mmio_reg_read_request_destroy(ena_dev);
+
+	rc = ena_device_init(ena_dev, &get_feat_ctx);
+	if (rc) {
+		PMD_INIT_LOG(CRIT, "Cannot initialize device\n");
+		return rc;
+	}
+
+	rte_intr_enable(intr_handle);
+	ena_com_set_admin_polling_mode(ena_dev, false);
+	ena_com_admin_aenq_enable(ena_dev);
+
+	for (i = 0; i < nb_queues; ++i)
+		ena_rx_queue_setup(eth_dev, i, adapter->rx_ring_size, 0, NULL,
+				   mb_pool_rx[i]);
+
+	for (i = 0; i < nb_queues; ++i)
+		ena_tx_queue_setup(eth_dev, i, adapter->tx_ring_size, 0, NULL);
+
+	return 0;
+}
+
 static int ena_rss_reta_update(struct rte_eth_dev *dev,
 			       struct rte_eth_rss_reta_entry64 *reta_conf,
 			       uint16_t reta_size)
@@ -1024,10 +1083,13 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev,
 		return -EINVAL;
 	}
 
-	if (tx_conf->txq_flags == ETH_TXQ_FLAGS_IGNORE &&
-	    !ena_are_tx_queue_offloads_allowed(adapter, tx_conf->offloads)) {
-		RTE_LOG(ERR, PMD, "Unsupported queue offloads\n");
-		return -EINVAL;
+	if (tx_conf != NULL) {
+		if (tx_conf->txq_flags == ETH_TXQ_FLAGS_IGNORE &&
+		    !ena_are_tx_queue_offloads_allowed(adapter,
+						       tx_conf->offloads)) {
+			RTE_LOG(ERR, PMD, "Unsupported queue offloads\n");
+			return -EINVAL;
+		}
 	}
 
 	ena_qid = ENA_IO_TXQ_IDX(queue_idx);
@@ -1084,7 +1146,8 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev,
 	for (i = 0; i < txq->ring_size; i++)
 		txq->empty_tx_reqs[i] = i;
 
-	txq->offloads = tx_conf->offloads;
+	if (tx_conf != NULL)
+		txq->offloads = tx_conf->offloads;
 
 	/* Store pointer to this queue in upper layer */
 	txq->configured = 1;
@@ -1133,9 +1196,12 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev,
 		return -EINVAL;
 	}
 
-	if (!ena_are_rx_queue_offloads_allowed(adapter, rx_conf->offloads)) {
-		RTE_LOG(ERR, PMD, "Unsupported queue offloads\n");
-		return -EINVAL;
+	if (rx_conf != NULL) {
+		if (!ena_are_rx_queue_offloads_allowed(adapter,
+						       rx_conf->offloads)) {
+			RTE_LOG(ERR, PMD, "Unsupported queue offloads\n");
+			return -EINVAL;
+		}
 	}
 
 	ena_qid = ENA_IO_RXQ_IDX(queue_idx);
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 16172a54a..79e9e655d 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -183,6 +183,8 @@ struct ena_adapter {
 	uint64_t rx_selected_offloads;
 
 	bool link_status;
+
+	enum ena_regs_reset_reason_types reset_reason;
 };
 
 #endif /* _ENA_ETHDEV_H_ */
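
---

Illustrative note (not part of the patch): once later commits in this series
start emitting RTE_ETH_EVENT_INTR_RESET, an application would consume it
roughly as sketched below. This is a minimal sketch against the public ethdev
API of this era (rte_eth_dev_callback_register(), rte_eth_dev_reset() -- the
latter stops the port and invokes the PMD's .dev_reset op, i.e.
ena_dev_reset() above). The callback and helper names and the flag are made
up for illustration, and error handling is trimmed.

#include <stdint.h>

#include <rte_common.h>
#include <rte_ethdev.h>

static volatile int reset_pending;

/* Runs in the host interrupt thread: only record the request and let the
 * application's main loop do the actual reset work. */
static int
eth_event_cb(uint16_t port_id, enum rte_eth_event_type event,
	     void *cb_arg, void *ret_param)
{
	RTE_SET_USED(port_id);
	RTE_SET_USED(cb_arg);
	RTE_SET_USED(ret_param);

	if (event == RTE_ETH_EVENT_INTR_RESET)
		reset_pending = 1;

	return 0;
}

/* Called from the application's main loop when reset_pending is set. */
static void
handle_reset(uint16_t port_id, const struct rte_eth_conf *conf)
{
	/* rte_eth_dev_reset() stops the port and then calls the PMD's
	 * .dev_reset op -- ena_dev_reset() for this driver. */
	if (rte_eth_dev_reset(port_id) != 0)
		return;

	/* After a successful reset the port is reconfigured and restarted
	 * just as at application startup. */
	rte_eth_dev_configure(port_id, 1, 1, conf);
	/* ... rte_eth_rx_queue_setup()/rte_eth_tx_queue_setup() ... */
	rte_eth_dev_start(port_id);

	reset_pending = 0;
}

The callback would be registered once at initialization time, e.g.
rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_RESET,
eth_event_cb, NULL).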