From patchwork Thu Oct 25 17:59:21 2018
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 47433
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk <mk@semihalf.com>
To: mk@semihalf.com, mw@semihalf.com, gtzalik@amazon.com, zorik@amazon.com, matua@amazon.com
Cc: dev@dpdk.org
Date: Thu, 25 Oct 2018 19:59:21 +0200
Message-Id: <20181025175923.10858-2-mk@semihalf.com>
In-Reply-To: <20181025175923.10858-1-mk@semihalf.com>
References: <20181025175923.10858-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH 1/3] net/ena: recreate HW IO rings on start and stop

On start, the driver was refilling all Rx buffers, but the old ones were
never released. As a result, running start/stop a few times caused the
device to run out of descriptors. To fix the issue, the IO rings are now
destroyed on stop and recreated on start, so the device no longer loses
any descriptors. This also fixes a memory leak of the Rx mbufs, which
were allocated on start but never freed on stop.
Fixes: eb0ef49dd5d5 ("net/ena: add stop and uninit routines")

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
---
 drivers/net/ena/ena_ethdev.c | 196 ++++++++++++++++++++-----------------------
 1 file changed, 91 insertions(+), 105 deletions(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index c29a581e8..186ab0e6b 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -239,6 +239,8 @@ static void ena_rx_queue_release_bufs(struct ena_ring *ring);
 static void ena_tx_queue_release_bufs(struct ena_ring *ring);
 static int ena_link_update(struct rte_eth_dev *dev,
			   int wait_to_complete);
+static int ena_create_io_queue(struct ena_ring *ring);
+static void ena_free_io_queues_all(struct ena_adapter *adapter);
 static int ena_queue_restart(struct ena_ring *ring);
 static int ena_queue_restart_all(struct rte_eth_dev *dev,
				 enum ena_ring_type ring_type);
@@ -510,7 +512,8 @@ static void ena_close(struct rte_eth_dev *dev)
	struct ena_adapter *adapter =
		(struct ena_adapter *)(dev->data->dev_private);

-	ena_stop(dev);
+	if (adapter->state == ENA_ADAPTER_STATE_RUNNING)
+		ena_stop(dev);
	adapter->state = ENA_ADAPTER_STATE_CLOSED;

	ena_rx_queue_release_all(dev);
@@ -746,21 +749,12 @@ static void ena_tx_queue_release_all(struct rte_eth_dev *dev)
 static void ena_rx_queue_release(void *queue)
 {
	struct ena_ring *ring = (struct ena_ring *)queue;
-	struct ena_adapter *adapter = ring->adapter;
-	int ena_qid;

	ena_assert_msg(ring->configured,
		       "API violation - releasing not configured queue");
	ena_assert_msg(ring->adapter->state != ENA_ADAPTER_STATE_RUNNING,
		       "API violation");

-	/* Destroy HW queue */
-	ena_qid = ENA_IO_RXQ_IDX(ring->id);
-	ena_com_destroy_io_queue(&adapter->ena_dev, ena_qid);
-
-	/* Free all bufs */
-	ena_rx_queue_release_bufs(ring);
-
	/* Free ring resources */
	if (ring->rx_buffer_info)
		rte_free(ring->rx_buffer_info);
@@ -779,18 +773,12 @@ static void ena_rx_queue_release(void *queue)
 static void ena_tx_queue_release(void *queue)
 {
	struct ena_ring *ring = (struct ena_ring *)queue;
-	struct ena_adapter *adapter = ring->adapter;
-	int ena_qid;

	ena_assert_msg(ring->configured,
		       "API violation. Releasing not configured queue");
	ena_assert_msg(ring->adapter->state != ENA_ADAPTER_STATE_RUNNING,
		       "API violation");

-	/* Destroy HW queue */
-	ena_qid = ENA_IO_TXQ_IDX(ring->id);
-	ena_com_destroy_io_queue(&adapter->ena_dev, ena_qid);
-
	/* Free all bufs */
	ena_tx_queue_release_bufs(ring);
@@ -1078,10 +1066,86 @@ static void ena_stop(struct rte_eth_dev *dev)
		(struct ena_adapter *)(dev->data->dev_private);

	rte_timer_stop_sync(&adapter->timer_wd);
+	ena_free_io_queues_all(adapter);

	adapter->state = ENA_ADAPTER_STATE_STOPPED;
 }

+static int ena_create_io_queue(struct ena_ring *ring)
+{
+	struct ena_adapter *adapter;
+	struct ena_com_dev *ena_dev;
+	struct ena_com_create_io_ctx ctx =
+		/* policy set to _HOST just to satisfy icc compiler */
+		{ ENA_ADMIN_PLACEMENT_POLICY_HOST,
+		  0, 0, 0, 0, 0 };
+	uint16_t ena_qid;
+	int rc;
+
+	adapter = ring->adapter;
+	ena_dev = &adapter->ena_dev;
+
+	if (ring->type == ENA_RING_TYPE_TX) {
+		ena_qid = ENA_IO_TXQ_IDX(ring->id);
+		ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_TX;
+		ctx.mem_queue_type = ena_dev->tx_mem_queue_type;
+		ctx.queue_size = adapter->tx_ring_size;
+	} else {
+		ena_qid = ENA_IO_RXQ_IDX(ring->id);
+		ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_RX;
+		ctx.queue_size = adapter->rx_ring_size;
+	}
+	ctx.qid = ena_qid;
+	ctx.msix_vector = -1; /* interrupts not used */
+	ctx.numa_node = ena_cpu_to_node(ring->id);
+
+	rc = ena_com_create_io_queue(ena_dev, &ctx);
+	if (rc) {
+		RTE_LOG(ERR, PMD,
+			"failed to create io queue #%d (qid:%d) rc: %d\n",
+			ring->id, ena_qid, rc);
+		return rc;
+	}
+
+	rc = ena_com_get_io_handlers(ena_dev, ena_qid,
+				     &ring->ena_com_io_sq,
+				     &ring->ena_com_io_cq);
+	if (rc) {
+		RTE_LOG(ERR, PMD,
+			"Failed to get io queue handlers. queue num %d rc: %d\n",
+			ring->id, rc);
+		ena_com_destroy_io_queue(ena_dev, ena_qid);
+		return rc;
+	}
+
+	if (ring->type == ENA_RING_TYPE_TX)
+		ena_com_update_numa_node(ring->ena_com_io_cq, ctx.numa_node);
+
+	return 0;
+}
+
+static void ena_free_io_queues_all(struct ena_adapter *adapter)
+{
+	struct rte_eth_dev *eth_dev = adapter->rte_dev;
+	struct ena_com_dev *ena_dev = &adapter->ena_dev;
+	int i;
+	uint16_t ena_qid;
+	uint16_t nb_rxq = eth_dev->data->nb_rx_queues;
+	uint16_t nb_txq = eth_dev->data->nb_tx_queues;
+
+	for (i = 0; i < nb_txq; ++i) {
+		ena_qid = ENA_IO_TXQ_IDX(i);
+		ena_com_destroy_io_queue(ena_dev, ena_qid);
+	}
+
+	for (i = 0; i < nb_rxq; ++i) {
+		ena_qid = ENA_IO_RXQ_IDX(i);
+		ena_com_destroy_io_queue(ena_dev, ena_qid);
+
+		ena_rx_queue_release_bufs(&adapter->rx_ring[i]);
+	}
+}
+
 static int ena_queue_restart(struct ena_ring *ring)
 {
	int rc, bufs_num;
@@ -1089,6 +1153,12 @@ static int ena_queue_restart(struct ena_ring *ring)
	ena_assert_msg(ring->configured == 1,
		       "Trying to restart unconfigured queue\n");

+	rc = ena_create_io_queue(ring);
+	if (rc) {
+		PMD_INIT_LOG(ERR, "Failed to create IO queue!\n");
+		return rc;
+	}
+
	ring->next_to_clean = 0;
	ring->next_to_use = 0;
@@ -1111,17 +1181,10 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev,
			      __rte_unused unsigned int socket_id,
			      const struct rte_eth_txconf *tx_conf)
 {
-	struct ena_com_create_io_ctx ctx =
-		/* policy set to _HOST just to satisfy icc compiler */
-		{ ENA_ADMIN_PLACEMENT_POLICY_HOST,
-		  ENA_COM_IO_QUEUE_DIRECTION_TX, 0, 0, 0, 0 };
	struct ena_ring *txq = NULL;
	struct ena_adapter *adapter =
		(struct ena_adapter *)(dev->data->dev_private);
	unsigned int i;
-	int ena_qid;
-	int rc;
-	struct ena_com_dev *ena_dev = &adapter->ena_dev;

	txq = &adapter->tx_ring[queue_idx];
@@ -1146,37 +1209,6 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev,
		return -EINVAL;
	}

-	ena_qid = ENA_IO_TXQ_IDX(queue_idx);
-
-	ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_TX;
-	ctx.qid = ena_qid;
-	ctx.msix_vector = -1; /* admin interrupts not used */
-	ctx.mem_queue_type = ena_dev->tx_mem_queue_type;
-	ctx.queue_size = adapter->tx_ring_size;
-	ctx.numa_node = ena_cpu_to_node(queue_idx);
-
-	rc = ena_com_create_io_queue(ena_dev, &ctx);
-	if (rc) {
-		RTE_LOG(ERR, PMD,
-			"failed to create io TX queue #%d (qid:%d) rc: %d\n",
-			queue_idx, ena_qid, rc);
-		return rc;
-	}
-	txq->ena_com_io_cq = &ena_dev->io_cq_queues[ena_qid];
-	txq->ena_com_io_sq = &ena_dev->io_sq_queues[ena_qid];
-
-	rc = ena_com_get_io_handlers(ena_dev, ena_qid,
-				     &txq->ena_com_io_sq,
-				     &txq->ena_com_io_cq);
-	if (rc) {
-		RTE_LOG(ERR, PMD,
-			"Failed to get TX queue handlers. TX queue num %d rc: %d\n",
-			queue_idx, rc);
-		goto err_destroy_io_queue;
-	}
-
-	ena_com_update_numa_node(txq->ena_com_io_cq, ctx.numa_node);
-
	txq->port_id = dev->data->port_id;
	txq->next_to_clean = 0;
	txq->next_to_use = 0;
@@ -1188,8 +1220,7 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev,
					 RTE_CACHE_LINE_SIZE);
	if (!txq->tx_buffer_info) {
		RTE_LOG(ERR, PMD, "failed to alloc mem for tx buffer info\n");
-		rc = -ENOMEM;
-		goto err_destroy_io_queue;
+		return -ENOMEM;
	}

	txq->empty_tx_reqs = rte_zmalloc("txq->empty_tx_reqs",
@@ -1197,8 +1228,8 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev,
					 RTE_CACHE_LINE_SIZE);
	if (!txq->empty_tx_reqs) {
		RTE_LOG(ERR, PMD, "failed to alloc mem for tx reqs\n");
-		rc = -ENOMEM;
-		goto err_free;
+		rte_free(txq->tx_buffer_info);
+		return -ENOMEM;
	}

	for (i = 0; i < txq->ring_size; i++)
@@ -1214,13 +1245,6 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev,
	dev->data->tx_queues[queue_idx] = txq;

	return 0;
-
-err_free:
-	rte_free(txq->tx_buffer_info);
-
-err_destroy_io_queue:
-	ena_com_destroy_io_queue(ena_dev, ena_qid);
-
-	return rc;
 }

 static int ena_rx_queue_setup(struct rte_eth_dev *dev,
@@ -1230,16 +1254,10 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev,
			      __rte_unused const struct rte_eth_rxconf *rx_conf,
			      struct rte_mempool *mp)
 {
-	struct ena_com_create_io_ctx ctx =
-		/* policy set to _HOST just to satisfy icc compiler */
-		{ ENA_ADMIN_PLACEMENT_POLICY_HOST,
-		  ENA_COM_IO_QUEUE_DIRECTION_RX, 0, 0, 0, 0 };
	struct ena_adapter *adapter =
		(struct ena_adapter *)(dev->data->dev_private);
	struct ena_ring *rxq = NULL;
-	uint16_t ena_qid = 0;
-	int i, rc = 0;
-	struct ena_com_dev *ena_dev = &adapter->ena_dev;
+	int i;

	rxq = &adapter->rx_ring[queue_idx];
	if (rxq->configured) {
@@ -1263,36 +1281,6 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev,
		return -EINVAL;
	}

-	ena_qid = ENA_IO_RXQ_IDX(queue_idx);
-
-	ctx.qid = ena_qid;
-	ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_RX;
-	ctx.mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST;
-	ctx.msix_vector = -1; /* admin interrupts not used */
-	ctx.queue_size = adapter->rx_ring_size;
-	ctx.numa_node = ena_cpu_to_node(queue_idx);
-
-	rc = ena_com_create_io_queue(ena_dev, &ctx);
-	if (rc) {
-		RTE_LOG(ERR, PMD, "failed to create io RX queue #%d rc: %d\n",
-			queue_idx, rc);
-		return rc;
-	}
-
-	rxq->ena_com_io_cq = &ena_dev->io_cq_queues[ena_qid];
-	rxq->ena_com_io_sq = &ena_dev->io_sq_queues[ena_qid];
-
-	rc = ena_com_get_io_handlers(ena_dev, ena_qid,
-				     &rxq->ena_com_io_sq,
-				     &rxq->ena_com_io_cq);
-	if (rc) {
-		RTE_LOG(ERR, PMD,
-			"Failed to get RX queue handlers. RX queue num %d rc: %d\n",
-			queue_idx, rc);
-		ena_com_destroy_io_queue(ena_dev, ena_qid);
-		return rc;
-	}
-
	rxq->port_id = dev->data->port_id;
	rxq->next_to_clean = 0;
	rxq->next_to_use = 0;
@@ -1304,7 +1292,6 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev,
					 RTE_CACHE_LINE_SIZE);
	if (!rxq->rx_buffer_info) {
		RTE_LOG(ERR, PMD, "failed to alloc mem for rx buffer info\n");
-		ena_com_destroy_io_queue(ena_dev, ena_qid);
		return -ENOMEM;
	}

@@ -1315,7 +1302,6 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev,
		RTE_LOG(ERR, PMD, "failed to alloc mem for empty rx reqs\n");
		rte_free(rxq->rx_buffer_info);
		rxq->rx_buffer_info = NULL;
-		ena_com_destroy_io_queue(ena_dev, ena_qid);
		return -ENOMEM;
	}

@@ -1326,7 +1312,7 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev,
	rxq->configured = 1;
	dev->data->rx_queues[queue_idx] = rxq;

-	return rc;
+	return 0;
 }

 static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count)