From patchwork Wed Jul 14 10:34:30 2021
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 95846
From: Michal Krawczyk
To: dev@dpdk.org
Cc: ndagan@amazon.com, shaibran@amazon.com, upstream@semihalf.com, Michal Krawczyk, Igor Chauskin
Date: Wed, 14 Jul 2021 12:34:30 +0200
Message-Id: <20210714103435.3388-2-mk@semihalf.com>
In-Reply-To: <20210714103435.3388-1-mk@semihalf.com>
References: <20210713154118.32111-1-mk@semihalf.com> <20210714103435.3388-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v2 1/6] net/ena: adjust driver logs

ENA logs were not consistent regarding the new line character.
Few of them were relying on the new line character added by the PMD_*_LOG macros, but most were adding the new line character by themselves. It was causing ENA logs to add extra empty line after almost each log. To unify this behavior, the missing new line characters were added to the driver logs, and they were removed from the logging macros. After this patch, every ENA log message should add '\n' at the end. Moreover, the logging messages were adjusted in terms of wording (removed unnecessary abbreviations), capitalizing of the words (start sentences with capital letters, and use 'Tx/Rx' instead of 'tx/TX' etc. Some of the logs were rephrased to make them more clear for the reader. Signed-off-by: Michal Krawczyk Reviewed-by: Igor Chauskin Reviewed-by: Shai Brandes Change-Id: I6ee031014fce4dd95c91fa27ac4714514da365b3 --- drivers/net/ena/ena_ethdev.c | 150 ++++++++++++++++++----------------- drivers/net/ena/ena_logs.h | 10 +-- 2 files changed, 84 insertions(+), 76 deletions(-) diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index dfe68279fa..f5e812d507 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -544,7 +544,7 @@ ena_dev_reset(struct rte_eth_dev *dev) ena_destroy_device(dev); rc = eth_ena_dev_init(dev); if (rc) - PMD_INIT_LOG(CRIT, "Cannot initialize device"); + PMD_INIT_LOG(CRIT, "Cannot initialize device\n"); return rc; } @@ -565,7 +565,7 @@ static int ena_rss_reta_update(struct rte_eth_dev *dev, if (reta_size > ENA_RX_RSS_TABLE_SIZE) { PMD_DRV_LOG(WARNING, - "indirection table %d is bigger than supported (%d)\n", + "Requested indirection table size (%d) is bigger than supported: %d\n", reta_size, ENA_RX_RSS_TABLE_SIZE); return -EINVAL; } @@ -599,8 +599,8 @@ static int ena_rss_reta_update(struct rte_eth_dev *dev, return rc; } - PMD_DRV_LOG(DEBUG, "%s(): RSS configured %d entries for port %d\n", - __func__, reta_size, dev->data->port_id); + PMD_DRV_LOG(DEBUG, "RSS configured %d entries for port %d\n", + reta_size, dev->data->port_id); return 0; } @@ -626,7 +626,7 @@ static int ena_rss_reta_query(struct rte_eth_dev *dev, rc = ena_com_indirect_table_get(ena_dev, indirect_table); rte_spinlock_unlock(&adapter->admin_lock); if (unlikely(rc && rc != ENA_COM_UNSUPPORTED)) { - PMD_DRV_LOG(ERR, "cannot get indirect table\n"); + PMD_DRV_LOG(ERR, "Cannot get indirection table\n"); return -ENOTSUP; } @@ -650,7 +650,7 @@ static int ena_rss_init_default(struct ena_adapter *adapter) rc = ena_com_rss_init(ena_dev, ENA_RX_RSS_TABLE_LOG_SIZE); if (unlikely(rc)) { - PMD_DRV_LOG(ERR, "Cannot init indirect table\n"); + PMD_DRV_LOG(ERR, "Cannot init indirection table\n"); goto err_rss_init; } @@ -659,7 +659,7 @@ static int ena_rss_init_default(struct ena_adapter *adapter) rc = ena_com_indirect_table_fill_entry(ena_dev, i, ENA_IO_RXQ_IDX(val)); if (unlikely(rc && (rc != ENA_COM_UNSUPPORTED))) { - PMD_DRV_LOG(ERR, "Cannot fill indirect table\n"); + PMD_DRV_LOG(ERR, "Cannot fill indirection table\n"); goto err_fill_indir; } } @@ -679,7 +679,7 @@ static int ena_rss_init_default(struct ena_adapter *adapter) rc = ena_com_indirect_table_set(ena_dev); if (unlikely(rc && (rc != ENA_COM_UNSUPPORTED))) { - PMD_DRV_LOG(ERR, "Cannot flush the indirect table\n"); + PMD_DRV_LOG(ERR, "Cannot flush indirection table\n"); goto err_fill_indir; } PMD_DRV_LOG(DEBUG, "RSS configured for port %d\n", @@ -733,7 +733,7 @@ static void ena_rx_queue_release(void *queue) ring->configured = 0; - PMD_DRV_LOG(NOTICE, "RX Queue %d:%d released\n", + PMD_DRV_LOG(NOTICE, "Rx queue %d:%d 
released\n", ring->port_id, ring->id); } @@ -757,7 +757,7 @@ static void ena_tx_queue_release(void *queue) ring->configured = 0; - PMD_DRV_LOG(NOTICE, "TX Queue %d:%d released\n", + PMD_DRV_LOG(NOTICE, "Tx queue %d:%d released\n", ring->port_id, ring->id); } @@ -822,19 +822,19 @@ static int ena_queue_start_all(struct rte_eth_dev *dev, if (ring_type == ENA_RING_TYPE_RX) { ena_assert_msg( dev->data->rx_queues[i] == &queues[i], - "Inconsistent state of rx queues\n"); + "Inconsistent state of Rx queues\n"); } else { ena_assert_msg( dev->data->tx_queues[i] == &queues[i], - "Inconsistent state of tx queues\n"); + "Inconsistent state of Tx queues\n"); } rc = ena_queue_start(&queues[i]); if (rc) { PMD_INIT_LOG(ERR, - "failed to start queue %d type(%d)", - i, ring_type); + "Failed to start queue[%d] of type(%d)\n", + i, ring_type); goto err; } } @@ -867,9 +867,9 @@ static int ena_check_valid_conf(struct ena_adapter *adapter) uint32_t max_frame_len = ena_get_mtu_conf(adapter); if (max_frame_len > adapter->max_mtu || max_frame_len < ENA_MIN_MTU) { - PMD_INIT_LOG(ERR, "Unsupported MTU of %d. " - "max mtu: %d, min mtu: %d", - max_frame_len, adapter->max_mtu, ENA_MIN_MTU); + PMD_INIT_LOG(ERR, + "Unsupported MTU of %d. Max MTU: %d, min MTU: %d\n", + max_frame_len, adapter->max_mtu, ENA_MIN_MTU); return ENA_COM_UNSUPPORTED; } @@ -938,7 +938,7 @@ ena_calc_io_queue_size(struct ena_calc_queue_size_ctx *ctx, ENA_ADMIN_PLACEMENT_POLICY_DEV)) { max_tx_queue_size /= 2; PMD_INIT_LOG(INFO, - "Forcing large headers and decreasing maximum TX queue size to %d\n", + "Forcing large headers and decreasing maximum Tx queue size to %d\n", max_tx_queue_size); } else { PMD_INIT_LOG(ERR, @@ -947,7 +947,7 @@ ena_calc_io_queue_size(struct ena_calc_queue_size_ctx *ctx, } if (unlikely(max_rx_queue_size == 0 || max_tx_queue_size == 0)) { - PMD_INIT_LOG(ERR, "Invalid queue size"); + PMD_INIT_LOG(ERR, "Invalid queue size\n"); return -EFAULT; } @@ -1044,8 +1044,7 @@ static int ena_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) if (mtu > ena_get_mtu_conf(adapter) || mtu < ENA_MIN_MTU) { PMD_DRV_LOG(ERR, - "Invalid MTU setting. new_mtu: %d " - "max mtu: %d min mtu: %d\n", + "Invalid MTU setting. New MTU: %d, max MTU: %d, min MTU: %d\n", mtu, ena_get_mtu_conf(adapter), ENA_MIN_MTU); return -EINVAL; } @@ -1054,7 +1053,7 @@ static int ena_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) if (rc) PMD_DRV_LOG(ERR, "Could not set MTU: %d\n", mtu); else - PMD_DRV_LOG(NOTICE, "Set MTU: %d\n", mtu); + PMD_DRV_LOG(NOTICE, "MTU set to: %d\n", mtu); return rc; } @@ -1130,7 +1129,7 @@ static int ena_stop(struct rte_eth_dev *dev) if (adapter->trigger_reset) { rc = ena_com_dev_reset(ena_dev, adapter->reset_reason); if (rc) - PMD_DRV_LOG(ERR, "Device reset failed rc=%d\n", rc); + PMD_DRV_LOG(ERR, "Device reset failed, rc: %d\n", rc); } ++adapter->dev_stats.dev_stop; @@ -1175,7 +1174,7 @@ static int ena_create_io_queue(struct ena_ring *ring) rc = ena_com_create_io_queue(ena_dev, &ctx); if (rc) { PMD_DRV_LOG(ERR, - "failed to create io queue #%d (qid:%d) rc: %d\n", + "Failed to create IO queue[%d] (qid:%d), rc: %d\n", ring->id, ena_qid, rc); return rc; } @@ -1185,7 +1184,7 @@ static int ena_create_io_queue(struct ena_ring *ring) &ring->ena_com_io_cq); if (rc) { PMD_DRV_LOG(ERR, - "Failed to get io queue handlers. 
queue num %d rc: %d\n", + "Failed to get IO queue[%d] handlers, rc: %d\n", ring->id, rc); ena_com_destroy_io_queue(ena_dev, ena_qid); return rc; @@ -1239,7 +1238,7 @@ static int ena_queue_start(struct ena_ring *ring) rc = ena_create_io_queue(ring); if (rc) { - PMD_INIT_LOG(ERR, "Failed to create IO queue!"); + PMD_INIT_LOG(ERR, "Failed to create IO queue\n"); return rc; } @@ -1257,7 +1256,7 @@ static int ena_queue_start(struct ena_ring *ring) if (rc != bufs_num) { ena_com_destroy_io_queue(&ring->adapter->ena_dev, ENA_IO_RXQ_IDX(ring->id)); - PMD_INIT_LOG(ERR, "Failed to populate rx ring !"); + PMD_INIT_LOG(ERR, "Failed to populate Rx ring\n"); return ENA_COM_FAULT; } /* Flush per-core RX buffers pools cache as they can be used on other @@ -1282,21 +1281,21 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev, if (txq->configured) { PMD_DRV_LOG(CRIT, - "API violation. Queue %d is already configured\n", + "API violation. Queue[%d] is already configured\n", queue_idx); return ENA_COM_FAULT; } if (!rte_is_power_of_2(nb_desc)) { PMD_DRV_LOG(ERR, - "Unsupported size of TX queue: %d is not a power of 2.\n", + "Unsupported size of Tx queue: %d is not a power of 2.\n", nb_desc); return -EINVAL; } if (nb_desc > adapter->max_tx_ring_size) { PMD_DRV_LOG(ERR, - "Unsupported size of TX queue (max size: %d)\n", + "Unsupported size of Tx queue (max size: %d)\n", adapter->max_tx_ring_size); return -EINVAL; } @@ -1314,7 +1313,8 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev, txq->ring_size, RTE_CACHE_LINE_SIZE); if (!txq->tx_buffer_info) { - PMD_DRV_LOG(ERR, "failed to alloc mem for tx buffer info\n"); + PMD_DRV_LOG(ERR, + "Failed to allocate memory for Tx buffer info\n"); return -ENOMEM; } @@ -1322,7 +1322,8 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev, sizeof(u16) * txq->ring_size, RTE_CACHE_LINE_SIZE); if (!txq->empty_tx_reqs) { - PMD_DRV_LOG(ERR, "failed to alloc mem for tx reqs\n"); + PMD_DRV_LOG(ERR, + "Failed to allocate memory for empty Tx requests\n"); rte_free(txq->tx_buffer_info); return -ENOMEM; } @@ -1332,7 +1333,7 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev, txq->tx_max_header_size, RTE_CACHE_LINE_SIZE); if (!txq->push_buf_intermediate_buf) { - PMD_DRV_LOG(ERR, "failed to alloc push buff for LLQ\n"); + PMD_DRV_LOG(ERR, "Failed to alloc push buffer for LLQ\n"); rte_free(txq->tx_buffer_info); rte_free(txq->empty_tx_reqs); return -ENOMEM; @@ -1367,21 +1368,21 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, rxq = &adapter->rx_ring[queue_idx]; if (rxq->configured) { PMD_DRV_LOG(CRIT, - "API violation. Queue %d is already configured\n", + "API violation. 
Queue[%d] is already configured\n", queue_idx); return ENA_COM_FAULT; } if (!rte_is_power_of_2(nb_desc)) { PMD_DRV_LOG(ERR, - "Unsupported size of RX queue: %d is not a power of 2.\n", + "Unsupported size of Rx queue: %d is not a power of 2.\n", nb_desc); return -EINVAL; } if (nb_desc > adapter->max_rx_ring_size) { PMD_DRV_LOG(ERR, - "Unsupported size of RX queue (max size: %d)\n", + "Unsupported size of Rx queue (max size: %d)\n", adapter->max_rx_ring_size); return -EINVAL; } @@ -1390,7 +1391,7 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, buffer_size = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM; if (buffer_size < ENA_RX_BUF_MIN_SIZE) { PMD_DRV_LOG(ERR, - "Unsupported size of RX buffer: %zu (min size: %d)\n", + "Unsupported size of Rx buffer: %zu (min size: %d)\n", buffer_size, ENA_RX_BUF_MIN_SIZE); return -EINVAL; } @@ -1407,7 +1408,8 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, sizeof(struct ena_rx_buffer) * nb_desc, RTE_CACHE_LINE_SIZE); if (!rxq->rx_buffer_info) { - PMD_DRV_LOG(ERR, "failed to alloc mem for rx buffer info\n"); + PMD_DRV_LOG(ERR, + "Failed to allocate memory for Rx buffer info\n"); return -ENOMEM; } @@ -1416,7 +1418,8 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, RTE_CACHE_LINE_SIZE); if (!rxq->rx_refill_buffer) { - PMD_DRV_LOG(ERR, "failed to alloc mem for rx refill buffer\n"); + PMD_DRV_LOG(ERR, + "Failed to allocate memory for Rx refill buffer\n"); rte_free(rxq->rx_buffer_info); rxq->rx_buffer_info = NULL; return -ENOMEM; @@ -1426,7 +1429,8 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, sizeof(uint16_t) * nb_desc, RTE_CACHE_LINE_SIZE); if (!rxq->empty_rx_reqs) { - PMD_DRV_LOG(ERR, "failed to alloc mem for empty rx reqs\n"); + PMD_DRV_LOG(ERR, + "Failed to allocate memory for empty Rx requests\n"); rte_free(rxq->rx_buffer_info); rxq->rx_buffer_info = NULL; rte_free(rxq->rx_refill_buffer); @@ -1457,7 +1461,7 @@ static int ena_add_single_rx_desc(struct ena_com_io_sq *io_sq, /* pass resource to device */ rc = ena_com_add_single_rx_desc(io_sq, &ebuf, id); if (unlikely(rc != 0)) - PMD_DRV_LOG(WARNING, "failed adding rx desc\n"); + PMD_DRV_LOG(WARNING, "Failed adding Rx desc\n"); return rc; } @@ -1483,7 +1487,7 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count) if (unlikely(rc < 0)) { rte_atomic64_inc(&rxq->adapter->drv_stats->rx_nombuf); ++rxq->rx_stats.mbuf_alloc_fail; - PMD_RX_LOG(DEBUG, "there are no enough free buffers"); + PMD_RX_LOG(DEBUG, "There are not enough free buffers\n"); return 0; } @@ -1506,8 +1510,9 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count) } if (unlikely(i < count)) { - PMD_DRV_LOG(WARNING, "refilled rx qid %d with only %d " - "buffers (from %d)\n", rxq->id, i, count); + PMD_DRV_LOG(WARNING, + "Refilled Rx queue[%d] with only %d/%d buffers\n", + rxq->id, i, count); rte_pktmbuf_free_bulk(&mbufs[i], count - i); ++rxq->rx_stats.refill_partial; } @@ -1535,7 +1540,7 @@ static int ena_device_init(struct ena_com_dev *ena_dev, /* Initialize mmio registers */ rc = ena_com_mmio_reg_read_request_init(ena_dev); if (rc) { - PMD_DRV_LOG(ERR, "failed to init mmio read less\n"); + PMD_DRV_LOG(ERR, "Failed to init MMIO read less\n"); return rc; } @@ -1548,14 +1553,14 @@ static int ena_device_init(struct ena_com_dev *ena_dev, /* reset device */ rc = ena_com_dev_reset(ena_dev, ENA_REGS_RESET_NORMAL); if (rc) { - PMD_DRV_LOG(ERR, "cannot reset device\n"); + PMD_DRV_LOG(ERR, "Cannot reset device\n"); goto err_mmio_read_less; } /* check FW version */ rc = 
ena_com_validate_version(ena_dev); if (rc) { - PMD_DRV_LOG(ERR, "device version is too low\n"); + PMD_DRV_LOG(ERR, "Device version is too low\n"); goto err_mmio_read_less; } @@ -1565,7 +1570,7 @@ static int ena_device_init(struct ena_com_dev *ena_dev, rc = ena_com_admin_init(ena_dev, &aenq_handlers); if (rc) { PMD_DRV_LOG(ERR, - "cannot initialize ena admin queue with device\n"); + "Cannot initialize ENA admin queue\n"); goto err_mmio_read_less; } @@ -1581,7 +1586,7 @@ static int ena_device_init(struct ena_com_dev *ena_dev, rc = ena_com_get_dev_attr_feat(ena_dev, get_feat_ctx); if (rc) { PMD_DRV_LOG(ERR, - "cannot get attribute for ena device rc= %d\n", rc); + "Cannot get attribute for ENA device, rc: %d\n", rc); goto err_admin_init; } @@ -1594,7 +1599,7 @@ static int ena_device_init(struct ena_com_dev *ena_dev, aenq_groups &= get_feat_ctx->aenq.supported_groups; rc = ena_com_set_aenq_config(ena_dev, aenq_groups); if (rc) { - PMD_DRV_LOG(ERR, "Cannot configure aenq groups rc: %d\n", rc); + PMD_DRV_LOG(ERR, "Cannot configure AENQ groups, rc: %d\n", rc); goto err_admin_init; } @@ -1643,7 +1648,7 @@ static void check_for_missing_keep_alive(struct ena_adapter *adapter) static void check_for_admin_com_state(struct ena_adapter *adapter) { if (unlikely(!ena_com_get_admin_running_state(&adapter->ena_dev))) { - PMD_DRV_LOG(ERR, "ENA admin queue is not in running state!\n"); + PMD_DRV_LOG(ERR, "ENA admin queue is not in running state\n"); adapter->reset_reason = ENA_REGS_RESET_ADMIN_TO; adapter->trigger_reset = true; } @@ -1706,8 +1711,8 @@ ena_set_queues_placement_policy(struct ena_adapter *adapter, rc = ena_com_config_dev_mode(ena_dev, llq, llq_default_configurations); if (unlikely(rc)) { - PMD_INIT_LOG(WARNING, "Failed to config dev mode. " - "Fallback to host mode policy."); + PMD_INIT_LOG(WARNING, + "Failed to config dev mode. Fallback to host mode policy.\n"); ena_dev->tx_mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST; return 0; } @@ -1717,8 +1722,8 @@ ena_set_queues_placement_policy(struct ena_adapter *adapter, return 0; if (!adapter->dev_mem_base) { - PMD_DRV_LOG(ERR, "Unable to access LLQ bar resource. " - "Fallback to host mode policy.\n."); + PMD_DRV_LOG(ERR, + "Unable to access LLQ BAR resource. 
Fallback to host mode policy.\n"); ena_dev->tx_mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST; return 0; } @@ -1758,7 +1763,7 @@ static uint32_t ena_calc_max_io_queue_num(struct ena_com_dev *ena_dev, max_num_io_queues = RTE_MIN(max_num_io_queues, io_tx_cq_num); if (unlikely(max_num_io_queues == 0)) { - PMD_DRV_LOG(ERR, "Number of IO queues should not be 0\n"); + PMD_DRV_LOG(ERR, "Number of IO queues cannot not be 0\n"); return -EFAULT; } @@ -1798,7 +1803,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); - PMD_INIT_LOG(INFO, "Initializing %x:%x:%x.%d", + PMD_INIT_LOG(INFO, "Initializing %x:%x:%x.%d\n", pci_dev->addr.domain, pci_dev->addr.bus, pci_dev->addr.devid, @@ -1810,7 +1815,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) adapter->dev_mem_base = pci_dev->mem_resource[ENA_MEM_BAR].addr; if (!adapter->regs) { - PMD_INIT_LOG(CRIT, "Failed to access registers BAR(%d)", + PMD_INIT_LOG(CRIT, "Failed to access registers BAR(%d)\n", ENA_REGS_BAR); return -ENXIO; } @@ -1833,7 +1838,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) /* device specific initialization routine */ rc = ena_device_init(ena_dev, pci_dev, &get_feat_ctx, &wd_state); if (rc) { - PMD_INIT_LOG(CRIT, "Failed to init ENA device"); + PMD_INIT_LOG(CRIT, "Failed to init ENA device\n"); goto err; } adapter->wd_state = wd_state; @@ -1843,7 +1848,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) rc = ena_set_queues_placement_policy(adapter, ena_dev, &get_feat_ctx.llq, &llq_config); if (unlikely(rc)) { - PMD_INIT_LOG(CRIT, "Failed to set placement policy"); + PMD_INIT_LOG(CRIT, "Failed to set placement policy\n"); return rc; } @@ -1905,7 +1910,8 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) sizeof(*adapter->drv_stats), RTE_CACHE_LINE_SIZE); if (!adapter->drv_stats) { - PMD_DRV_LOG(ERR, "failed to alloc mem for adapter stats\n"); + PMD_DRV_LOG(ERR, + "Failed to allocate memory for adapter statistics\n"); rc = -ENOMEM; goto err_delete_debug_area; } @@ -2233,7 +2239,9 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, rx_ring->ena_com_io_sq, &ena_rx_ctx); if (unlikely(rc)) { - PMD_DRV_LOG(ERR, "ena_com_rx_pkt error %d\n", rc); + PMD_DRV_LOG(ERR, + "Failed to get the packet from the device, rc: %d\n", + rc); if (rc == ENA_COM_NO_SPACE) { ++rx_ring->rx_stats.bad_desc_num; rx_ring->adapter->reset_reason = @@ -2408,7 +2416,7 @@ static int ena_check_space_and_linearize_mbuf(struct ena_ring *tx_ring, * be needed so we reduce the segments number from num_segments to 1 */ if (!ena_com_sq_have_enough_space(tx_ring->ena_com_io_sq, 3)) { - PMD_DRV_LOG(DEBUG, "Not enough space in the tx queue\n"); + PMD_DRV_LOG(DEBUG, "Not enough space in the Tx queue\n"); return ENA_COM_NO_MEM; } ++tx_ring->tx_stats.linearize; @@ -2428,7 +2436,7 @@ static int ena_check_space_and_linearize_mbuf(struct ena_ring *tx_ring, */ if (!ena_com_sq_have_enough_space(tx_ring->ena_com_io_sq, num_segments + 2)) { - PMD_DRV_LOG(DEBUG, "Not enough space in the tx queue\n"); + PMD_DRV_LOG(DEBUG, "Not enough space in the Tx queue\n"); return ENA_COM_NO_MEM; } @@ -2544,7 +2552,7 @@ static int ena_xmit_mbuf(struct ena_ring *tx_ring, struct rte_mbuf *mbuf) if (unlikely(ena_com_is_doorbell_needed(tx_ring->ena_com_io_sq, &ena_tx_ctx))) { PMD_DRV_LOG(DEBUG, - "llq tx max burst size of queue %d achieved, writing doorbell to send burst\n", + "LLQ Tx max burst size of queue %d achieved, writing doorbell to send burst\n", tx_ring->id); 
ena_com_write_sq_doorbell(tx_ring->ena_com_io_sq); tx_ring->tx_stats.doorbells++; @@ -2666,10 +2674,10 @@ int ena_copy_eni_stats(struct ena_adapter *adapter) if (rc != 0) { if (rc == ENA_COM_UNSUPPORTED) { PMD_DRV_LOG(DEBUG, - "Retrieving ENI metrics is not supported.\n"); + "Retrieving ENI metrics is not supported\n"); } else { PMD_DRV_LOG(WARNING, - "Failed to get ENI metrics: %d\n", rc); + "Failed to get ENI metrics, rc: %d\n", rc); } return rc; } @@ -2993,7 +3001,7 @@ static void ena_notification(void *adapter_data, struct ena_admin_ena_hw_hints *hints; if (aenq_e->aenq_common_desc.group != ENA_ADMIN_NOTIFICATION) - PMD_DRV_LOG(WARNING, "Invalid group(%x) expected %x\n", + PMD_DRV_LOG(WARNING, "Invalid AENQ group: %x. Expected: %x\n", aenq_e->aenq_common_desc.group, ENA_ADMIN_NOTIFICATION); @@ -3004,7 +3012,7 @@ static void ena_notification(void *adapter_data, ena_update_hints(adapter, hints); break; default: - PMD_DRV_LOG(ERR, "Invalid aenq notification link state %d\n", + PMD_DRV_LOG(ERR, "Invalid AENQ notification link state: %d\n", aenq_e->aenq_common_desc.syndrome); } } @@ -3034,8 +3042,8 @@ static void ena_keep_alive(void *adapter_data, static void unimplemented_aenq_handler(__rte_unused void *data, __rte_unused struct ena_admin_aenq_entry *aenq_e) { - PMD_DRV_LOG(ERR, "Unknown event was received or event with " - "unimplemented handler\n"); + PMD_DRV_LOG(ERR, + "Unknown event was received or event with unimplemented handler\n"); } static struct ena_aenq_handlers aenq_handlers = { diff --git a/drivers/net/ena/ena_logs.h b/drivers/net/ena/ena_logs.h index 9053c9183f..040bebfb98 100644 --- a/drivers/net/ena/ena_logs.h +++ b/drivers/net/ena/ena_logs.h @@ -9,13 +9,13 @@ extern int ena_logtype_init; #define PMD_INIT_LOG(level, fmt, args...) \ rte_log(RTE_LOG_ ## level, ena_logtype_init, \ - "%s(): " fmt "\n", __func__, ## args) + "%s(): " fmt, __func__, ## args) #ifdef RTE_LIBRTE_ENA_DEBUG_RX extern int ena_logtype_rx; #define PMD_RX_LOG(level, fmt, args...) \ rte_log(RTE_LOG_ ## level, ena_logtype_rx, \ - "%s(): " fmt "\n", __func__, ## args) + "%s(): " fmt, __func__, ## args) #else #define PMD_RX_LOG(level, fmt, args...) do { } while (0) #endif @@ -24,7 +24,7 @@ extern int ena_logtype_rx; extern int ena_logtype_tx; #define PMD_TX_LOG(level, fmt, args...) \ rte_log(RTE_LOG_ ## level, ena_logtype_tx, \ - "%s(): " fmt "\n", __func__, ## args) + "%s(): " fmt, __func__, ## args) #else #define PMD_TX_LOG(level, fmt, args...) do { } while (0) #endif @@ -33,7 +33,7 @@ extern int ena_logtype_tx; extern int ena_logtype_tx_free; #define PMD_TX_FREE_LOG(level, fmt, args...) \ rte_log(RTE_LOG_ ## level, ena_logtype_tx_free, \ - "%s(): " fmt "\n", __func__, ## args) + "%s(): " fmt, __func__, ## args) #else #define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0) #endif @@ -41,6 +41,6 @@ extern int ena_logtype_tx_free; extern int ena_logtype_driver; #define PMD_DRV_LOG(level, fmt, args...) 
\ rte_log(RTE_LOG_ ## level, ena_logtype_driver, \ - "%s(): " fmt "\n", __func__, ## args) + "%s(): " fmt, __func__, ## args) #endif /* _ENA_LOGS_H_ */
From patchwork Wed Jul 14 10:34:31 2021
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 95847
Received: from DESKTOP-U5LNN3J.localdomain (89-79-189-199.dynamic.chello.pl.
[89.79.189.199]) by smtp.gmail.com with ESMTPSA id l2sm191642ljc.78.2021.07.14.03.34.54 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 14 Jul 2021 03:34:54 -0700 (PDT) From: Michal Krawczyk To: dev@dpdk.org Cc: ndagan@amazon.com, shaibran@amazon.com, upstream@semihalf.com, Michal Krawczyk , Igor Chauskin Date: Wed, 14 Jul 2021 12:34:31 +0200 Message-Id: <20210714103435.3388-3-mk@semihalf.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210714103435.3388-1-mk@semihalf.com> References: <20210713154118.32111-1-mk@semihalf.com> <20210714103435.3388-1-mk@semihalf.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v2 2/6] net/ena: make use of the IO debug build option X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" ENA defined its own logger flags for Tx and Rx, but they weren't technically used anywhere. Those data path loggers weren't used anywhere after the definition. This commit uses the generic RTE_ETHDEV_DEBUG_RX and RTE_ETHDEV_DEBUG_TX flags to define PMD_TX_LOG and PMD_RX_LOG which are now being used on the data path. The PMD_TX_FREE_LOG was removed, as it has no usage in the current version of the driver. RTE_ETH_DEBUG_[TR]X now wraps extra checks for the driver state in the IO path - this saves extra conditionals on the hot path. ena_com logger is no longer optional (previously it had to be explicitly enabled by defining this flag: RTE_LIBRTE_ENA_COM_DEBUG). Having this logger optional makes tracing of ena_com errors much harder. Due to ena_com design, it's impossible to separate IO path logs from the management path logs, so for now they will be always enabled. Default levels for the affected loggers were modified. Hot path loggers are initialized with the default level of DEBUG instead of NOTICE, as they have to be explicitly enabled. ena_com logging level was reduced from NOTICE to WARNING - as it's no longer optional, the driver should report just a warnings in the ena_com layer. Signed-off-by: Michal Krawczyk Reviewed-by: Igor Chauskin Reviewed-by: Shai Brandes Change-Id: Iad694891070137797aed84360f14bc55a21a296d --- drivers/net/ena/base/ena_plat_dpdk.h | 7 ---- drivers/net/ena/ena_ethdev.c | 52 +++++++++++++++------------- drivers/net/ena/ena_logs.h | 13 ++----- 3 files changed, 30 insertions(+), 42 deletions(-) diff --git a/drivers/net/ena/base/ena_plat_dpdk.h b/drivers/net/ena/base/ena_plat_dpdk.h index f66df95591..4e7f52881a 100644 --- a/drivers/net/ena/base/ena_plat_dpdk.h +++ b/drivers/net/ena/base/ena_plat_dpdk.h @@ -108,7 +108,6 @@ extern int ena_logtype_com; #define GENMASK_ULL(h, l) (((~0ULL) - (1ULL << (l)) + 1) & \ (~0ULL >> (BITS_PER_LONG_LONG - 1 - (h)))) -#ifdef RTE_LIBRTE_ENA_COM_DEBUG #define ena_trc_log(dev, level, fmt, arg...) \ ( \ ENA_TOUCH(dev), \ @@ -121,12 +120,6 @@ extern int ena_logtype_com; #define ena_trc_warn(dev, format, arg...) \ ena_trc_log(dev, WARNING, format, ##arg) #define ena_trc_err(dev, format, arg...) ena_trc_log(dev, ERR, format, ##arg) -#else -#define ena_trc_dbg(dev, format, arg...) ENA_TOUCH(dev) -#define ena_trc_info(dev, format, arg...) ENA_TOUCH(dev) -#define ena_trc_warn(dev, format, arg...) ENA_TOUCH(dev) -#define ena_trc_err(dev, format, arg...) ENA_TOUCH(dev) -#endif /* RTE_LIBRTE_ENA_COM_DEBUG */ #define ENA_WARN(cond, dev, format, arg...) 
\ do { \ diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index f5e812d507..2335436b6c 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -399,9 +399,9 @@ static int validate_tx_req_id(struct ena_ring *tx_ring, u16 req_id) } if (tx_info) - PMD_DRV_LOG(ERR, "tx_info doesn't have valid mbuf\n"); + PMD_TX_LOG(ERR, "tx_info doesn't have valid mbuf\n"); else - PMD_DRV_LOG(ERR, "Invalid req_id: %hu\n", req_id); + PMD_TX_LOG(ERR, "Invalid req_id: %hu\n", req_id); /* Trigger device reset */ ++tx_ring->tx_stats.bad_req_id; @@ -1461,7 +1461,7 @@ static int ena_add_single_rx_desc(struct ena_com_io_sq *io_sq, /* pass resource to device */ rc = ena_com_add_single_rx_desc(io_sq, &ebuf, id); if (unlikely(rc != 0)) - PMD_DRV_LOG(WARNING, "Failed adding Rx desc\n"); + PMD_RX_LOG(WARNING, "Failed adding Rx desc\n"); return rc; } @@ -1471,16 +1471,21 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count) unsigned int i; int rc; uint16_t next_to_use = rxq->next_to_use; - uint16_t in_use, req_id; + uint16_t req_id; +#ifdef RTE_ETHDEV_DEBUG_RX + uint16_t in_use; +#endif struct rte_mbuf **mbufs = rxq->rx_refill_buffer; if (unlikely(!count)) return 0; +#ifdef RTE_ETHDEV_DEBUG_RX in_use = rxq->ring_size - 1 - ena_com_free_q_entries(rxq->ena_com_io_sq); - ena_assert_msg(((in_use + count) < rxq->ring_size), - "bad ring state\n"); + if (unlikely((in_use + count) >= rxq->ring_size)) + PMD_RX_LOG(ERR, "Bad Rx ring state\n"); +#endif /* get resources for incoming packets */ rc = rte_pktmbuf_alloc_bulk(rxq->mb_pool, mbufs, count); @@ -1510,7 +1515,7 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count) } if (unlikely(i < count)) { - PMD_DRV_LOG(WARNING, + PMD_RX_LOG(WARNING, "Refilled Rx queue[%d] with only %d/%d buffers\n", rxq->id, i, count); rte_pktmbuf_free_bulk(&mbufs[i], count - i); @@ -2218,12 +2223,14 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, struct ena_com_rx_ctx ena_rx_ctx; int i, rc = 0; +#ifdef RTE_ETHDEV_DEBUG_RX /* Check adapter state */ if (unlikely(rx_ring->adapter->state != ENA_ADAPTER_STATE_RUNNING)) { - PMD_DRV_LOG(ALERT, + PMD_RX_LOG(ALERT, "Trying to receive pkts while device is NOT running\n"); return 0; } +#endif descs_in_use = rx_ring->ring_size - ena_com_free_q_entries(rx_ring->ena_com_io_sq) - 1; @@ -2239,7 +2246,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, rx_ring->ena_com_io_sq, &ena_rx_ctx); if (unlikely(rc)) { - PMD_DRV_LOG(ERR, + PMD_RX_LOG(ERR, "Failed to get the packet from the device, rc: %d\n", rc); if (rc == ENA_COM_NO_SPACE) { @@ -2416,13 +2423,13 @@ static int ena_check_space_and_linearize_mbuf(struct ena_ring *tx_ring, * be needed so we reduce the segments number from num_segments to 1 */ if (!ena_com_sq_have_enough_space(tx_ring->ena_com_io_sq, 3)) { - PMD_DRV_LOG(DEBUG, "Not enough space in the Tx queue\n"); + PMD_TX_LOG(DEBUG, "Not enough space in the Tx queue\n"); return ENA_COM_NO_MEM; } ++tx_ring->tx_stats.linearize; rc = rte_pktmbuf_linearize(mbuf); if (unlikely(rc)) { - PMD_DRV_LOG(WARNING, "Mbuf linearize failed\n"); + PMD_TX_LOG(WARNING, "Mbuf linearize failed\n"); rte_atomic64_inc(&tx_ring->adapter->drv_stats->ierrors); ++tx_ring->tx_stats.linearize_failed; return rc; @@ -2436,7 +2443,7 @@ static int ena_check_space_and_linearize_mbuf(struct ena_ring *tx_ring, */ if (!ena_com_sq_have_enough_space(tx_ring->ena_com_io_sq, num_segments + 2)) { - PMD_DRV_LOG(DEBUG, "Not enough space in the Tx queue\n"); 
+ PMD_TX_LOG(DEBUG, "Not enough space in the Tx queue\n"); return ENA_COM_NO_MEM; } @@ -2551,7 +2558,7 @@ static int ena_xmit_mbuf(struct ena_ring *tx_ring, struct rte_mbuf *mbuf) if (unlikely(ena_com_is_doorbell_needed(tx_ring->ena_com_io_sq, &ena_tx_ctx))) { - PMD_DRV_LOG(DEBUG, + PMD_TX_LOG(DEBUG, "LLQ Tx max burst size of queue %d achieved, writing doorbell to send burst\n", tx_ring->id); ena_com_write_sq_doorbell(tx_ring->ena_com_io_sq); @@ -2628,12 +2635,14 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, struct ena_ring *tx_ring = (struct ena_ring *)(tx_queue); uint16_t sent_idx = 0; +#ifdef RTE_ETHDEV_DEBUG_TX /* Check adapter state */ if (unlikely(tx_ring->adapter->state != ENA_ADAPTER_STATE_RUNNING)) { - PMD_DRV_LOG(ALERT, + PMD_TX_LOG(ALERT, "Trying to xmit pkts while device is NOT running\n"); return 0; } +#endif for (sent_idx = 0; sent_idx < nb_pkts; sent_idx++) { if (ena_xmit_mbuf(tx_ring, tx_pkts[sent_idx])) @@ -2960,18 +2969,13 @@ RTE_PMD_REGISTER_KMOD_DEP(net_ena, "* igb_uio | uio_pci_generic | vfio-pci"); RTE_PMD_REGISTER_PARAM_STRING(net_ena, ENA_DEVARG_LARGE_LLQ_HDR "=<0|1>"); RTE_LOG_REGISTER_SUFFIX(ena_logtype_init, init, NOTICE); RTE_LOG_REGISTER_SUFFIX(ena_logtype_driver, driver, NOTICE); -#ifdef RTE_LIBRTE_ENA_DEBUG_RX -RTE_LOG_REGISTER_SUFFIX(ena_logtype_rx, rx, NOTICE); -#endif -#ifdef RTE_LIBRTE_ENA_DEBUG_TX -RTE_LOG_REGISTER_SUFFIX(ena_logtype_tx, tx, NOTICE); -#endif -#ifdef RTE_LIBRTE_ENA_DEBUG_TX_FREE -RTE_LOG_REGISTER_SUFFIX(ena_logtype_tx_free, tx_free, NOTICE); +#ifdef RTE_ETHDEV_DEBUG_RX +RTE_LOG_REGISTER_SUFFIX(ena_logtype_rx, rx, DEBUG); #endif -#ifdef RTE_LIBRTE_ENA_COM_DEBUG -RTE_LOG_REGISTER_SUFFIX(ena_logtype_com, com, NOTICE); +#ifdef RTE_ETHDEV_DEBUG_TX +RTE_LOG_REGISTER_SUFFIX(ena_logtype_tx, tx, DEBUG); #endif +RTE_LOG_REGISTER_SUFFIX(ena_logtype_com, com, WARNING); /****************************************************************************** ******************************** AENQ Handlers ******************************* diff --git a/drivers/net/ena/ena_logs.h b/drivers/net/ena/ena_logs.h index 040bebfb98..43f16458ea 100644 --- a/drivers/net/ena/ena_logs.h +++ b/drivers/net/ena/ena_logs.h @@ -11,7 +11,7 @@ extern int ena_logtype_init; rte_log(RTE_LOG_ ## level, ena_logtype_init, \ "%s(): " fmt, __func__, ## args) -#ifdef RTE_LIBRTE_ENA_DEBUG_RX +#ifdef RTE_ETHDEV_DEBUG_RX extern int ena_logtype_rx; #define PMD_RX_LOG(level, fmt, args...) \ rte_log(RTE_LOG_ ## level, ena_logtype_rx, \ @@ -20,7 +20,7 @@ extern int ena_logtype_rx; #define PMD_RX_LOG(level, fmt, args...) do { } while (0) #endif -#ifdef RTE_LIBRTE_ENA_DEBUG_TX +#ifdef RTE_ETHDEV_DEBUG_TX extern int ena_logtype_tx; #define PMD_TX_LOG(level, fmt, args...) \ rte_log(RTE_LOG_ ## level, ena_logtype_tx, \ @@ -29,15 +29,6 @@ extern int ena_logtype_tx; #define PMD_TX_LOG(level, fmt, args...) do { } while (0) #endif -#ifdef RTE_LIBRTE_ENA_DEBUG_TX_FREE -extern int ena_logtype_tx_free; -#define PMD_TX_FREE_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, ena_logtype_tx_free, \ - "%s(): " fmt, __func__, ## args) -#else -#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0) -#endif - extern int ena_logtype_driver; #define PMD_DRV_LOG(level, fmt, args...) 
\ rte_log(RTE_LOG_ ## level, ena_logtype_driver, \
From patchwork Wed Jul 14 10:34:32 2021
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 95848
Received: from DESKTOP-U5LNN3J.localdomain (89-79-189-199.dynamic.chello.pl.
[89.79.189.199]) by smtp.gmail.com with ESMTPSA id l2sm191642ljc.78.2021.07.14.03.34.55 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 14 Jul 2021 03:34:56 -0700 (PDT) From: Michal Krawczyk To: dev@dpdk.org Cc: ndagan@amazon.com, shaibran@amazon.com, upstream@semihalf.com, Michal Krawczyk , stable@dpdk.org, Shay Agroskin Date: Wed, 14 Jul 2021 12:34:32 +0200 Message-Id: <20210714103435.3388-4-mk@semihalf.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210714103435.3388-1-mk@semihalf.com> References: <20210713154118.32111-1-mk@semihalf.com> <20210714103435.3388-1-mk@semihalf.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v2 3/6] net/ena: trigger reset when Tx prepare fails X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" If the prepare function failed, then it means the descriptors are in the invalid state. This condition now triggers the reset, which should be further handled by the application. To notify the application about prepare function failure, the error log was added. In general, it should never fail in normal conditions, as the Tx function checks for the available space in the Tx ring before the preparation even starts. Fixes: 2081d5e2e92d ("net/ena: add reset routine") Cc: stable@dpdk.org Signed-off-by: Michal Krawczyk Reviewed-by: Shai Brandes Reviewed-by: Shay Agroskin Change-Id: Iff7b3a0e8b0b8e52f5230331b8486bb04a076d5b --- drivers/net/ena/ena_ethdev.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index 2335436b6c..67cd91046a 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -2570,7 +2570,11 @@ static int ena_xmit_mbuf(struct ena_ring *tx_ring, struct rte_mbuf *mbuf) rc = ena_com_prepare_tx(tx_ring->ena_com_io_sq, &ena_tx_ctx, &nb_hw_desc); if (unlikely(rc)) { + PMD_DRV_LOG(ERR, "Failed to prepare Tx buffers, rc: %d\n", rc); ++tx_ring->tx_stats.prepare_ctx_err; + tx_ring->adapter->reset_reason = + ENA_REGS_RESET_DRIVER_INVALID_STATE; + tx_ring->adapter->trigger_reset = true; return rc; } From patchwork Wed Jul 14 10:34:33 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Krawczyk X-Patchwork-Id: 95849 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id D2F47A0C4B; Wed, 14 Jul 2021 12:35:19 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 8DA544131D; Wed, 14 Jul 2021 12:35:01 +0200 (CEST) Received: from mail-lj1-f178.google.com (mail-lj1-f178.google.com [209.85.208.178]) by mails.dpdk.org (Postfix) with ESMTP id 1EAE241318 for ; Wed, 14 Jul 2021 12:35:00 +0200 (CEST) Received: by mail-lj1-f178.google.com with SMTP id a6so2674298ljq.3 for ; Wed, 14 Jul 2021 03:35:00 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=semihalf-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=tXKwv9QdqYYF9LojqeKsej5FUUOok5xNMLGCni/N8LQ=; b=qa5OFIrFt/c3rYZWOuMLrl+UJBg7/FkgPXu/j77x3+e5tSdpTNdXBPDvpjJ8ZOvEbX RoxkgdRFOZEVp3UzU0N20ZZK9t8epcloM6i6GXmatLZh9GOMabGw0xHkvyShTlo8oW34 
APmGej/3dK4isBzKt+jIpvnusds7yy0dmhI6W5sYBLmzUJWs4KyY18dnrqW1RgzWn6v2 Uw9WqMFUNpEKQhPhivXnIwDBEvmF1pgKP6FErgozxEoZcrWABlgPUxUmC3m7RDFtgRHp on/ndEC00+RDAn7uozV6CwmMzdjQB/hVL3HwWfDGNdGjVZEI4Y/lZ2ofkpP05YAeNrAb BgDg== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=tXKwv9QdqYYF9LojqeKsej5FUUOok5xNMLGCni/N8LQ=; b=suqH/cXB2tjZUkCh9lsESsinaqdqiQQGHQXpsp7I5rTbf1XR1lPJYYVsGNzDbStseE ePOoD7GkQesAno7Q9/kJhhmaiKfo+Spj0XN8o7qNpLm2RmYFqiJROtkRgAAf0dWBZV/u In0MRBfVKKAbONcO0JJYcYpt+9F89eNUC7JjngjOe8d/bZVU1a4K0rtd6rdtwyx/hRgU qn3tdyGA0grpwH9WRvRiNgSnAAA49++ZChmZeMlkYiwzJ7C41o1TdiFx711lC7gl46Dq ZPVvU5HMSM5eWTfFljyhn83rq8DvHeJTAUv1aD8h5RRoeqyuoqgaFoTEF7DhfXxGnKpX xSkw== X-Gm-Message-State: AOAM5310+hDEvEg3PGYLzzTLra41ao5wdqUmN5A//qp5OqTt+pNljlxW 3M9dpvaI+7LXRMtxXxGEApSlE/x3JXbFY/p5 X-Google-Smtp-Source: ABdhPJzWEF2lDOugafNbvLwXCJ9aUXZduLSLZVw00xo6Ik6qmmQV8wmE6IovJZMbDvYy5DUoZlO40w== X-Received: by 2002:a2e:93c4:: with SMTP id p4mr7106680ljh.38.1626258899312; Wed, 14 Jul 2021 03:34:59 -0700 (PDT) Received: from DESKTOP-U5LNN3J.localdomain (89-79-189-199.dynamic.chello.pl. [89.79.189.199]) by smtp.gmail.com with ESMTPSA id l2sm191642ljc.78.2021.07.14.03.34.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 14 Jul 2021 03:34:58 -0700 (PDT) From: Michal Krawczyk To: dev@dpdk.org Cc: ndagan@amazon.com, shaibran@amazon.com, upstream@semihalf.com, Michal Krawczyk , Artur Rojek , Igor Chauskin , Shay Agroskin Date: Wed, 14 Jul 2021 12:34:33 +0200 Message-Id: <20210714103435.3388-5-mk@semihalf.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210714103435.3388-1-mk@semihalf.com> References: <20210713154118.32111-1-mk@semihalf.com> <20210714103435.3388-1-mk@semihalf.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v2 4/6] net/ena: add support for Rx interrupts X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" In order to support asynchronous Rx in the applications, the driver has to configure the event file descriptors and configure the HW. This patch configures appropriate data structures for the rte_ethdev layer, adds .rx_queue_intr_enable and .rx_queue_intr_disable API handlers, and configures IO queues to work in the interrupt mode, if it was requested by the application. Signed-off-by: Michal Krawczyk Reviewed-by: Artur Rojek Reviewed-by: Igor Chauskin Reviewed-by: Shai Brandes Reviewed-by: Shay Agroskin Change-Id: Ib68d4caa68b7441d53b47ad81bfec37560d102d9 --- doc/guides/nics/ena.rst | 12 ++ doc/guides/nics/features/ena.ini | 1 + doc/guides/rel_notes/release_21_08.rst | 7 ++ drivers/net/ena/ena_ethdev.c | 146 +++++++++++++++++++++++-- 4 files changed, 154 insertions(+), 12 deletions(-) diff --git a/doc/guides/nics/ena.rst b/doc/guides/nics/ena.rst index 0f1f63f722..63951098ea 100644 --- a/doc/guides/nics/ena.rst +++ b/doc/guides/nics/ena.rst @@ -141,6 +141,7 @@ Supported features * LSC event notification * Watchdog (requires handling of timers in the application) * Device reset upon failure +* Rx interrupts Prerequisites ------------- @@ -180,6 +181,17 @@ At this point the system should be ready to run DPDK applications. Once the application runs to completion, the ENA can be detached from attached module if necessary. 
+**Rx interrupts support** + +ENA PMD supports Rx interrupts, which can be used to wake up lcores waiting for +input. Please note that it won't work with ``igb_uio``, so to use this feature, +the ``vfio-pci`` should be used. + +ENA handles admin interrupts and AENQ notifications on separate interrupt. +There is possibility that there won't be enough event file descriptors to +handle both admin and Rx interrupts. In that situation the Rx interrupt request +will fail. + **Note about usage on \*.metal instances** On AWS, the metal instances are supporting IOMMU for both arm64 and x86_64 diff --git a/doc/guides/nics/features/ena.ini b/doc/guides/nics/features/ena.ini index 2595ff53f9..3976bbbda6 100644 --- a/doc/guides/nics/features/ena.ini +++ b/doc/guides/nics/features/ena.ini @@ -6,6 +6,7 @@ [Features] Link status = Y Link status event = Y +Rx interrupt = Y MTU update = Y Jumbo frame = Y Scattered Rx = Y diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst index efcb0f3584..dac86a9d3e 100644 --- a/doc/guides/rel_notes/release_21_08.rst +++ b/doc/guides/rel_notes/release_21_08.rst @@ -94,6 +94,13 @@ New Features Added a new PMD driver for Wangxun 1 Gigabit Ethernet NICs. See the :doc:`../nics/ngbe` for more details. +* **Updated Amazon ENA PMD.** + + The new driver version (v2.4.0) introduced bug fixes and improvements, + including: + + * Added Rx interrupt support. + * **Added support for Marvell CNXK crypto driver.** * Added cnxk crypto PMD which provides support for an integrated diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index 67cd91046a..72f9887797 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -213,11 +213,11 @@ static void ena_rx_queue_release_bufs(struct ena_ring *ring); static void ena_tx_queue_release_bufs(struct ena_ring *ring); static int ena_link_update(struct rte_eth_dev *dev, int wait_to_complete); -static int ena_create_io_queue(struct ena_ring *ring); +static int ena_create_io_queue(struct rte_eth_dev *dev, struct ena_ring *ring); static void ena_queue_stop(struct ena_ring *ring); static void ena_queue_stop_all(struct rte_eth_dev *dev, enum ena_ring_type ring_type); -static int ena_queue_start(struct ena_ring *ring); +static int ena_queue_start(struct rte_eth_dev *dev, struct ena_ring *ring); static int ena_queue_start_all(struct rte_eth_dev *dev, enum ena_ring_type ring_type); static void ena_stats_restart(struct rte_eth_dev *dev); @@ -249,6 +249,11 @@ static int ena_process_bool_devarg(const char *key, static int ena_parse_devargs(struct ena_adapter *adapter, struct rte_devargs *devargs); static int ena_copy_eni_stats(struct ena_adapter *adapter); +static int ena_setup_rx_intr(struct rte_eth_dev *dev); +static int ena_rx_queue_intr_enable(struct rte_eth_dev *dev, + uint16_t queue_id); +static int ena_rx_queue_intr_disable(struct rte_eth_dev *dev, + uint16_t queue_id); static const struct eth_dev_ops ena_dev_ops = { .dev_configure = ena_dev_configure, @@ -269,6 +274,8 @@ static const struct eth_dev_ops ena_dev_ops = { .dev_reset = ena_dev_reset, .reta_update = ena_rss_reta_update, .reta_query = ena_rss_reta_query, + .rx_queue_intr_enable = ena_rx_queue_intr_enable, + .rx_queue_intr_disable = ena_rx_queue_intr_disable, }; void ena_rss_key_fill(void *key, size_t size) @@ -829,7 +836,7 @@ static int ena_queue_start_all(struct rte_eth_dev *dev, "Inconsistent state of Tx queues\n"); } - rc = ena_queue_start(&queues[i]); + rc = ena_queue_start(dev, &queues[i]); if (rc) { 
PMD_INIT_LOG(ERR, @@ -1074,6 +1081,10 @@ static int ena_start(struct rte_eth_dev *dev) if (rc) return rc; + rc = ena_setup_rx_intr(dev); + if (rc) + return rc; + rc = ena_queue_start_all(dev, ENA_RING_TYPE_RX); if (rc) return rc; @@ -1114,6 +1125,8 @@ static int ena_stop(struct rte_eth_dev *dev) { struct ena_adapter *adapter = dev->data->dev_private; struct ena_com_dev *ena_dev = &adapter->ena_dev; + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; int rc; /* Cannot free memory in secondary process */ @@ -1132,6 +1145,16 @@ static int ena_stop(struct rte_eth_dev *dev) PMD_DRV_LOG(ERR, "Device reset failed, rc: %d\n", rc); } + rte_intr_disable(intr_handle); + + rte_intr_efd_disable(intr_handle); + if (intr_handle->intr_vec != NULL) { + rte_free(intr_handle->intr_vec); + intr_handle->intr_vec = NULL; + } + + rte_intr_enable(intr_handle); + ++adapter->dev_stats.dev_stop; adapter->state = ENA_ADAPTER_STATE_STOPPED; dev->data->dev_started = 0; @@ -1139,10 +1162,12 @@ static int ena_stop(struct rte_eth_dev *dev) return 0; } -static int ena_create_io_queue(struct ena_ring *ring) +static int ena_create_io_queue(struct rte_eth_dev *dev, struct ena_ring *ring) { - struct ena_adapter *adapter; - struct ena_com_dev *ena_dev; + struct ena_adapter *adapter = ring->adapter; + struct ena_com_dev *ena_dev = &adapter->ena_dev; + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; struct ena_com_create_io_ctx ctx = /* policy set to _HOST just to satisfy icc compiler */ { ENA_ADMIN_PLACEMENT_POLICY_HOST, @@ -1151,9 +1176,7 @@ static int ena_create_io_queue(struct ena_ring *ring) unsigned int i; int rc; - adapter = ring->adapter; - ena_dev = &adapter->ena_dev; - + ctx.msix_vector = -1; if (ring->type == ENA_RING_TYPE_TX) { ena_qid = ENA_IO_TXQ_IDX(ring->id); ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_TX; @@ -1163,12 +1186,13 @@ static int ena_create_io_queue(struct ena_ring *ring) } else { ena_qid = ENA_IO_RXQ_IDX(ring->id); ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_RX; + if (rte_intr_dp_is_en(intr_handle)) + ctx.msix_vector = intr_handle->intr_vec[ring->id]; for (i = 0; i < ring->ring_size; i++) ring->empty_rx_reqs[i] = i; } ctx.queue_size = ring->ring_size; ctx.qid = ena_qid; - ctx.msix_vector = -1; /* interrupts not used */ ctx.numa_node = ring->numa_socket_id; rc = ena_com_create_io_queue(ena_dev, &ctx); @@ -1193,6 +1217,10 @@ static int ena_create_io_queue(struct ena_ring *ring) if (ring->type == ENA_RING_TYPE_TX) ena_com_update_numa_node(ring->ena_com_io_cq, ctx.numa_node); + /* Start with Rx interrupts being masked. 
*/ + if (ring->type == ENA_RING_TYPE_RX && rte_intr_dp_is_en(intr_handle)) + ena_rx_queue_intr_disable(dev, ring->id); + return 0; } @@ -1229,14 +1257,14 @@ static void ena_queue_stop_all(struct rte_eth_dev *dev, ena_queue_stop(&queues[i]); } -static int ena_queue_start(struct ena_ring *ring) +static int ena_queue_start(struct rte_eth_dev *dev, struct ena_ring *ring) { int rc, bufs_num; ena_assert_msg(ring->configured == 1, "Trying to start unconfigured queue\n"); - rc = ena_create_io_queue(ring); + rc = ena_create_io_queue(dev, ring); if (rc) { PMD_INIT_LOG(ERR, "Failed to create IO queue\n"); return rc; @@ -2944,6 +2972,100 @@ static int ena_parse_devargs(struct ena_adapter *adapter, return rc; } +static int ena_setup_rx_intr(struct rte_eth_dev *dev) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + int rc; + uint16_t vectors_nb, i; + bool rx_intr_requested = dev->data->dev_conf.intr_conf.rxq; + + if (!rx_intr_requested) + return 0; + + if (!rte_intr_cap_multiple(intr_handle)) { + PMD_DRV_LOG(ERR, + "Rx interrupt requested, but it isn't supported by the PCI driver\n"); + return -ENOTSUP; + } + + /* Disable interrupt mapping before the configuration starts. */ + rte_intr_disable(intr_handle); + + /* Verify if there are enough vectors available. */ + vectors_nb = dev->data->nb_rx_queues; + if (vectors_nb > RTE_MAX_RXTX_INTR_VEC_ID) { + PMD_DRV_LOG(ERR, + "Too many Rx interrupts requested, maximum number: %d\n", + RTE_MAX_RXTX_INTR_VEC_ID); + rc = -ENOTSUP; + goto enable_intr; + } + + intr_handle->intr_vec = rte_zmalloc("intr_vec", + dev->data->nb_rx_queues * sizeof(*intr_handle->intr_vec), 0); + if (intr_handle->intr_vec == NULL) { + PMD_DRV_LOG(ERR, + "Failed to allocate interrupt vector for %d queues\n", + dev->data->nb_rx_queues); + rc = -ENOMEM; + goto enable_intr; + } + + rc = rte_intr_efd_enable(intr_handle, vectors_nb); + if (rc != 0) + goto free_intr_vec; + + if (!rte_intr_allow_others(intr_handle)) { + PMD_DRV_LOG(ERR, + "Not enough interrupts available to use both ENA Admin and Rx interrupts\n"); + goto disable_intr_efd; + } + + for (i = 0; i < vectors_nb; ++i) + intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + i; + + rte_intr_enable(intr_handle); + return 0; + +disable_intr_efd: + rte_intr_efd_disable(intr_handle); +free_intr_vec: + rte_free(intr_handle->intr_vec); + intr_handle->intr_vec = NULL; +enable_intr: + rte_intr_enable(intr_handle); + return rc; +} + +static void ena_rx_queue_intr_set(struct rte_eth_dev *dev, + uint16_t queue_id, + bool unmask) +{ + struct ena_adapter *adapter = dev->data->dev_private; + struct ena_ring *rxq = &adapter->rx_ring[queue_id]; + struct ena_eth_io_intr_reg intr_reg; + + ena_com_update_intr_reg(&intr_reg, 0, 0, unmask); + ena_com_unmask_intr(rxq->ena_com_io_cq, &intr_reg); +} + +static int ena_rx_queue_intr_enable(struct rte_eth_dev *dev, + uint16_t queue_id) +{ + ena_rx_queue_intr_set(dev, queue_id, true); + + return 0; +} + +static int ena_rx_queue_intr_disable(struct rte_eth_dev *dev, + uint16_t queue_id) +{ + ena_rx_queue_intr_set(dev, queue_id, false); + + return 0; +} + /********************************************************************* * PMD configuration *********************************************************************/ From patchwork Wed Jul 14 10:34:34 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Krawczyk X-Patchwork-Id: 95850 Return-Path: X-Original-To: 
patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org From: Michal Krawczyk To: dev@dpdk.org Cc: ndagan@amazon.com, shaibran@amazon.com, upstream@semihalf.com, Michal Krawczyk , Shay Agroskin , Amit Bernstein Date: Wed, 14 Jul 2021 12:34:34 +0200 Message-Id: <20210714103435.3388-6-mk@semihalf.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210714103435.3388-1-mk@semihalf.com> References: <20210713154118.32111-1-mk@semihalf.com> <20210714103435.3388-1-mk@semihalf.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v2 5/6] net/ena: rework RSS configuration List-Id: DPDK patches and discussions Sender: "dev" Allow the user to specify their own hash key and hash ctrl if the device supports that. The HW interprets the key in reverse byte order, so the PMD reorders the key before passing it to the ena_com layer (a minimal illustration of this reordering follows below). The default key is generated randomly each time the device is initialized.
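To make the byte-order handling described above concrete, here is a minimal, self-contained sketch of the reordering step. It mirrors the ena_reorder_rss_hash_key() helper added by this patch, but it is not the driver code itself and the example_* name is illustrative only:

#include <stddef.h>
#include <stdint.h>

/* The ENA device expects the (40-byte) RSS key in reverse byte order, so the
 * key supplied by the application is flipped before it reaches ena_com. */
static void example_reorder_rss_key(uint8_t *reordered_key,
				    const uint8_t *key, size_t key_size)
{
	size_t i;

	for (i = 0; i < key_size; i++)
		reordered_key[i] = key[key_size - 1 - i];
}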
Moreover, make minor adjustments for reta size setting in terms of returning error values. RSS code was moved to ena_rss.c file to improve readability. Signed-off-by: Michal Krawczyk Reviewed-by: Shai Brandes Reviewed-by: Shay Agroskin Reviewed-by: Amit Bernstein Change-Id: Ia6d9a63b8b8ed777d68762ec8fa21d025f6da6dd --- doc/guides/nics/features/ena.ini | 1 + doc/guides/rel_notes/release_21_08.rst | 1 + drivers/net/ena/ena_ethdev.c | 230 ++-------- drivers/net/ena/ena_ethdev.h | 34 ++ drivers/net/ena/ena_rss.c | 591 +++++++++++++++++++++++++ drivers/net/ena/meson.build | 1 + 6 files changed, 663 insertions(+), 195 deletions(-) create mode 100644 drivers/net/ena/ena_rss.c diff --git a/doc/guides/nics/features/ena.ini b/doc/guides/nics/features/ena.ini index 3976bbbda6..b971243ff0 100644 --- a/doc/guides/nics/features/ena.ini +++ b/doc/guides/nics/features/ena.ini @@ -12,6 +12,7 @@ Jumbo frame = Y Scattered Rx = Y TSO = Y RSS hash = Y +RSS key update = Y RSS reta update = Y L3 checksum offload = Y L4 checksum offload = Y diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst index dac86a9d3e..d01784860e 100644 --- a/doc/guides/rel_notes/release_21_08.rst +++ b/doc/guides/rel_notes/release_21_08.rst @@ -100,6 +100,7 @@ New Features including: * Added Rx interrupt support. + * RSS hash function key reconfiguration support. * **Added support for Marvell CNXK crypto driver.** diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index 72f9887797..ee059fc165 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -4,12 +4,6 @@ */ #include -#include -#include -#include -#include -#include -#include #include #include #include @@ -30,21 +24,12 @@ #define DRV_MODULE_VER_MINOR 3 #define DRV_MODULE_VER_SUBMINOR 0 -#define ENA_IO_TXQ_IDX(q) (2 * (q)) -#define ENA_IO_RXQ_IDX(q) (2 * (q) + 1) -/*reverse version of ENA_IO_RXQ_IDX*/ -#define ENA_IO_RXQ_IDX_REV(q) ((q - 1) / 2) - #define __MERGE_64B_H_L(h, l) (((uint64_t)h << 32) | l) -#define TEST_BIT(val, bit_shift) (val & (1UL << bit_shift)) #define GET_L4_HDR_LEN(mbuf) \ ((rte_pktmbuf_mtod_offset(mbuf, struct rte_tcp_hdr *, \ mbuf->l3_len + mbuf->l2_len)->data_off) >> 4) -#define ENA_RX_RSS_TABLE_LOG_SIZE 7 -#define ENA_RX_RSS_TABLE_SIZE (1 << ENA_RX_RSS_TABLE_LOG_SIZE) -#define ENA_HASH_KEY_SIZE 40 #define ETH_GSTRING_LEN 32 #define ARRAY_SIZE(x) RTE_DIM(x) @@ -223,12 +208,6 @@ static int ena_queue_start_all(struct rte_eth_dev *dev, static void ena_stats_restart(struct rte_eth_dev *dev); static int ena_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info); -static int ena_rss_reta_update(struct rte_eth_dev *dev, - struct rte_eth_rss_reta_entry64 *reta_conf, - uint16_t reta_size); -static int ena_rss_reta_query(struct rte_eth_dev *dev, - struct rte_eth_rss_reta_entry64 *reta_conf, - uint16_t reta_size); static void ena_interrupt_handler_rte(void *cb_arg); static void ena_timer_wd_callback(struct rte_timer *timer, void *arg); static void ena_destroy_device(struct rte_eth_dev *eth_dev); @@ -276,27 +255,13 @@ static const struct eth_dev_ops ena_dev_ops = { .reta_query = ena_rss_reta_query, .rx_queue_intr_enable = ena_rx_queue_intr_enable, .rx_queue_intr_disable = ena_rx_queue_intr_disable, + .rss_hash_update = ena_rss_hash_update, + .rss_hash_conf_get = ena_rss_hash_conf_get, }; -void ena_rss_key_fill(void *key, size_t size) -{ - static bool key_generated; - static uint8_t default_key[ENA_HASH_KEY_SIZE]; - size_t i; - - RTE_ASSERT(size <= ENA_HASH_KEY_SIZE); - - 
if (!key_generated) { - for (i = 0; i < ENA_HASH_KEY_SIZE; ++i) - default_key[i] = rte_rand() & 0xff; - key_generated = true; - } - - rte_memcpy(key, default_key, size); -} - static inline void ena_rx_mbuf_prepare(struct rte_mbuf *mbuf, - struct ena_com_rx_ctx *ena_rx_ctx) + struct ena_com_rx_ctx *ena_rx_ctx, + bool fill_hash) { uint64_t ol_flags = 0; uint32_t packet_type = 0; @@ -324,7 +289,8 @@ static inline void ena_rx_mbuf_prepare(struct rte_mbuf *mbuf, else ol_flags |= PKT_RX_L4_CKSUM_GOOD; - if (likely((packet_type & ENA_PTYPE_HAS_HASH) && !ena_rx_ctx->frag)) { + if (fill_hash && + likely((packet_type & ENA_PTYPE_HAS_HASH) && !ena_rx_ctx->frag)) { ol_flags |= PKT_RX_RSS_HASH; mbuf->hash.rss = ena_rx_ctx->hash; } @@ -446,7 +412,8 @@ static void ena_config_host_info(struct ena_com_dev *ena_dev) host_info->num_cpus = rte_lcore_count(); host_info->driver_supported_features = - ENA_ADMIN_HOST_INFO_RX_OFFSET_MASK; + ENA_ADMIN_HOST_INFO_RX_OFFSET_MASK | + ENA_ADMIN_HOST_INFO_RSS_CONFIGURABLE_FUNCTION_KEY_MASK; rc = ena_com_set_host_attributes(ena_dev); if (rc) { @@ -556,151 +523,6 @@ ena_dev_reset(struct rte_eth_dev *dev) return rc; } -static int ena_rss_reta_update(struct rte_eth_dev *dev, - struct rte_eth_rss_reta_entry64 *reta_conf, - uint16_t reta_size) -{ - struct ena_adapter *adapter = dev->data->dev_private; - struct ena_com_dev *ena_dev = &adapter->ena_dev; - int rc, i; - u16 entry_value; - int conf_idx; - int idx; - - if ((reta_size == 0) || (reta_conf == NULL)) - return -EINVAL; - - if (reta_size > ENA_RX_RSS_TABLE_SIZE) { - PMD_DRV_LOG(WARNING, - "Requested indirection table size (%d) is bigger than supported: %d\n", - reta_size, ENA_RX_RSS_TABLE_SIZE); - return -EINVAL; - } - - for (i = 0 ; i < reta_size ; i++) { - /* each reta_conf is for 64 entries. - * to support 128 we use 2 conf of 64 - */ - conf_idx = i / RTE_RETA_GROUP_SIZE; - idx = i % RTE_RETA_GROUP_SIZE; - if (TEST_BIT(reta_conf[conf_idx].mask, idx)) { - entry_value = - ENA_IO_RXQ_IDX(reta_conf[conf_idx].reta[idx]); - - rc = ena_com_indirect_table_fill_entry(ena_dev, - i, - entry_value); - if (unlikely(rc && rc != ENA_COM_UNSUPPORTED)) { - PMD_DRV_LOG(ERR, - "Cannot fill indirect table\n"); - return rc; - } - } - } - - rte_spinlock_lock(&adapter->admin_lock); - rc = ena_com_indirect_table_set(ena_dev); - rte_spinlock_unlock(&adapter->admin_lock); - if (unlikely(rc && rc != ENA_COM_UNSUPPORTED)) { - PMD_DRV_LOG(ERR, "Cannot flush the indirect table\n"); - return rc; - } - - PMD_DRV_LOG(DEBUG, "RSS configured %d entries for port %d\n", - reta_size, dev->data->port_id); - - return 0; -} - -/* Query redirection table. 
*/ -static int ena_rss_reta_query(struct rte_eth_dev *dev, - struct rte_eth_rss_reta_entry64 *reta_conf, - uint16_t reta_size) -{ - struct ena_adapter *adapter = dev->data->dev_private; - struct ena_com_dev *ena_dev = &adapter->ena_dev; - int rc; - int i; - u32 indirect_table[ENA_RX_RSS_TABLE_SIZE] = {0}; - int reta_conf_idx; - int reta_idx; - - if (reta_size == 0 || reta_conf == NULL || - (reta_size > RTE_RETA_GROUP_SIZE && ((reta_conf + 1) == NULL))) - return -EINVAL; - - rte_spinlock_lock(&adapter->admin_lock); - rc = ena_com_indirect_table_get(ena_dev, indirect_table); - rte_spinlock_unlock(&adapter->admin_lock); - if (unlikely(rc && rc != ENA_COM_UNSUPPORTED)) { - PMD_DRV_LOG(ERR, "Cannot get indirection table\n"); - return -ENOTSUP; - } - - for (i = 0 ; i < reta_size ; i++) { - reta_conf_idx = i / RTE_RETA_GROUP_SIZE; - reta_idx = i % RTE_RETA_GROUP_SIZE; - if (TEST_BIT(reta_conf[reta_conf_idx].mask, reta_idx)) - reta_conf[reta_conf_idx].reta[reta_idx] = - ENA_IO_RXQ_IDX_REV(indirect_table[i]); - } - - return 0; -} - -static int ena_rss_init_default(struct ena_adapter *adapter) -{ - struct ena_com_dev *ena_dev = &adapter->ena_dev; - uint16_t nb_rx_queues = adapter->edev_data->nb_rx_queues; - int rc, i; - u32 val; - - rc = ena_com_rss_init(ena_dev, ENA_RX_RSS_TABLE_LOG_SIZE); - if (unlikely(rc)) { - PMD_DRV_LOG(ERR, "Cannot init indirection table\n"); - goto err_rss_init; - } - - for (i = 0; i < ENA_RX_RSS_TABLE_SIZE; i++) { - val = i % nb_rx_queues; - rc = ena_com_indirect_table_fill_entry(ena_dev, i, - ENA_IO_RXQ_IDX(val)); - if (unlikely(rc && (rc != ENA_COM_UNSUPPORTED))) { - PMD_DRV_LOG(ERR, "Cannot fill indirection table\n"); - goto err_fill_indir; - } - } - - rc = ena_com_fill_hash_function(ena_dev, ENA_ADMIN_CRC32, NULL, - ENA_HASH_KEY_SIZE, 0xFFFFFFFF); - if (unlikely(rc && (rc != ENA_COM_UNSUPPORTED))) { - PMD_DRV_LOG(INFO, "Cannot fill hash function\n"); - goto err_fill_indir; - } - - rc = ena_com_set_default_hash_ctrl(ena_dev); - if (unlikely(rc && (rc != ENA_COM_UNSUPPORTED))) { - PMD_DRV_LOG(INFO, "Cannot fill hash control\n"); - goto err_fill_indir; - } - - rc = ena_com_indirect_table_set(ena_dev); - if (unlikely(rc && (rc != ENA_COM_UNSUPPORTED))) { - PMD_DRV_LOG(ERR, "Cannot flush indirection table\n"); - goto err_fill_indir; - } - PMD_DRV_LOG(DEBUG, "RSS configured for port %d\n", - adapter->edev_data->port_id); - - return 0; - -err_fill_indir: - ena_com_rss_destroy(ena_dev); -err_rss_init: - - return rc; -} - static void ena_rx_queue_release_all(struct rte_eth_dev *dev) { struct ena_ring **queues = (struct ena_ring **)dev->data->rx_queues; @@ -1093,9 +915,8 @@ static int ena_start(struct rte_eth_dev *dev) if (rc) goto err_start_tx; - if (adapter->edev_data->dev_conf.rxmode.mq_mode & - ETH_MQ_RX_RSS_FLAG && adapter->edev_data->nb_rx_queues > 0) { - rc = ena_rss_init_default(adapter); + if (adapter->edev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) { + rc = ena_rss_configure(adapter); if (rc) goto err_rss_init; } @@ -1385,7 +1206,7 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, uint16_t nb_desc, unsigned int socket_id, - __rte_unused const struct rte_eth_rxconf *rx_conf, + const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mp) { struct ena_adapter *adapter = dev->data->dev_private; @@ -1469,6 +1290,8 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, for (i = 0; i < nb_desc; i++) rxq->empty_rx_reqs[i] = i; + rxq->offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads; + /* Store pointer to this queue in 
upper layer */ rxq->configured = 1; dev->data->rx_queues[queue_idx] = rxq; @@ -1932,6 +1755,9 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) adapter->offloads.rx_csum_supported = (get_feat_ctx.offload.rx_supported & ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV4_CSUM_MASK) != 0; + adapter->offloads.rss_hash_supported = + (get_feat_ctx.offload.rx_supported & + ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_HASH_MASK) != 0; /* Copy MAC address and point DPDK to it */ eth_dev->data->mac_addrs = (struct rte_ether_addr *)adapter->mac_addr; @@ -1939,6 +1765,12 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) get_feat_ctx.dev_attr.mac_addr, (struct rte_ether_addr *)adapter->mac_addr); + rc = ena_com_rss_init(ena_dev, ENA_RX_RSS_TABLE_LOG_SIZE); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(ERR, "Failed to initialize RSS in ENA device\n"); + goto err_delete_debug_area; + } + adapter->drv_stats = rte_zmalloc("adapter stats", sizeof(*adapter->drv_stats), RTE_CACHE_LINE_SIZE); @@ -1946,7 +1778,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) PMD_DRV_LOG(ERR, "Failed to allocate memory for adapter statistics\n"); rc = -ENOMEM; - goto err_delete_debug_area; + goto err_rss_destroy; } rte_spinlock_init(&adapter->admin_lock); @@ -1967,6 +1799,8 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) return 0; +err_rss_destroy: + ena_com_rss_destroy(ena_dev); err_delete_debug_area: ena_com_delete_debug_area(ena_dev); @@ -1991,6 +1825,8 @@ static void ena_destroy_device(struct rte_eth_dev *eth_dev) if (adapter->state != ENA_ADAPTER_STATE_CLOSED) ena_close(eth_dev); + ena_com_rss_destroy(ena_dev); + ena_com_delete_debug_area(ena_dev); ena_com_delete_host_info(ena_dev); @@ -2097,13 +1933,14 @@ static int ena_infos_get(struct rte_eth_dev *dev, /* Inform framework about available features */ dev_info->rx_offload_capa = rx_feat; - dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH; + if (adapter->offloads.rss_hash_supported) + dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH; dev_info->rx_queue_offload_capa = rx_feat; dev_info->tx_offload_capa = tx_feat; dev_info->tx_queue_offload_capa = tx_feat; - dev_info->flow_type_rss_offloads = ETH_RSS_IP | ETH_RSS_TCP | - ETH_RSS_UDP; + dev_info->flow_type_rss_offloads = ENA_ALL_RSS_HF; + dev_info->hash_key_size = ENA_HASH_KEY_SIZE; dev_info->min_rx_bufsize = ENA_MIN_FRAME_LEN; dev_info->max_rx_pktlen = adapter->max_mtu; @@ -2250,6 +2087,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t completed; struct ena_com_rx_ctx ena_rx_ctx; int i, rc = 0; + bool fill_hash; #ifdef RTE_ETHDEV_DEBUG_RX /* Check adapter state */ @@ -2260,6 +2098,8 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, } #endif + fill_hash = rx_ring->offloads & DEV_RX_OFFLOAD_RSS_HASH; + descs_in_use = rx_ring->ring_size - ena_com_free_q_entries(rx_ring->ena_com_io_sq) - 1; nb_pkts = RTE_MIN(descs_in_use, nb_pkts); @@ -2306,7 +2146,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, } /* fill mbuf attributes if any */ - ena_rx_mbuf_prepare(mbuf, &ena_rx_ctx); + ena_rx_mbuf_prepare(mbuf, &ena_rx_ctx, fill_hash); if (unlikely(mbuf->ol_flags & (PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD))) { diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h index 78718b759b..06ac8b06b5 100644 --- a/drivers/net/ena/ena_ethdev.h +++ b/drivers/net/ena/ena_ethdev.h @@ -6,10 +6,16 @@ #ifndef _ENA_ETHDEV_H_ #define _ENA_ETHDEV_H_ +#include +#include +#include +#include #include #include 
#include #include +#include +#include #include "ena_com.h" @@ -43,6 +49,21 @@ #define ENA_IDX_NEXT_MASKED(idx, mask) (((idx) + 1) & (mask)) #define ENA_IDX_ADD_MASKED(idx, n, mask) (((idx) + (n)) & (mask)) +#define ENA_RX_RSS_TABLE_LOG_SIZE 7 +#define ENA_RX_RSS_TABLE_SIZE (1 << ENA_RX_RSS_TABLE_LOG_SIZE) + +#define ENA_HASH_KEY_SIZE 40 + +#define ENA_ALL_RSS_HF (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP | \ + ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_NONFRAG_IPV6_UDP) + +#define ENA_IO_TXQ_IDX(q) (2 * (q)) +#define ENA_IO_RXQ_IDX(q) (2 * (q) + 1) +/* Reversed version of ENA_IO_RXQ_IDX */ +#define ENA_IO_RXQ_IDX_REV(q) (((q) - 1) / 2) + +extern struct ena_shared_data *ena_shared_data; + struct ena_adapter; enum ena_ring_type { @@ -205,6 +226,7 @@ struct ena_offloads { bool tso4_supported; bool tx_csum_supported; bool rx_csum_supported; + bool rss_hash_supported; }; /* board specific private data structure */ @@ -268,4 +290,16 @@ struct ena_adapter { bool use_large_llq_hdr; }; +int ena_rss_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); +int ena_rss_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); +int ena_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf); +int ena_rss_hash_conf_get(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf); +int ena_rss_configure(struct ena_adapter *adapter); + #endif /* _ENA_ETHDEV_H_ */ diff --git a/drivers/net/ena/ena_rss.c b/drivers/net/ena/ena_rss.c new file mode 100644 index 0000000000..d718f877bc --- /dev/null +++ b/drivers/net/ena/ena_rss.c @@ -0,0 +1,591 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2020 Amazon.com, Inc. or its affiliates. + * All rights reserved. 
+ */ + +#include "ena_ethdev.h" +#include "ena_logs.h" + +#include + +#define TEST_BIT(val, bit_shift) ((val) & (1UL << (bit_shift))) + +#define ENA_HF_RSS_ALL_L2 (ENA_ADMIN_RSS_L2_SA | ENA_ADMIN_RSS_L2_DA) +#define ENA_HF_RSS_ALL_L3 (ENA_ADMIN_RSS_L3_SA | ENA_ADMIN_RSS_L3_DA) +#define ENA_HF_RSS_ALL_L4 (ENA_ADMIN_RSS_L4_SP | ENA_ADMIN_RSS_L4_DP) +#define ENA_HF_RSS_ALL_L3_L4 (ENA_HF_RSS_ALL_L3 | ENA_HF_RSS_ALL_L4) +#define ENA_HF_RSS_ALL_L2_L3_L4 (ENA_HF_RSS_ALL_L2 | ENA_HF_RSS_ALL_L3_L4) + +enum ena_rss_hash_fields { + ENA_HF_RSS_TCP4 = ENA_HF_RSS_ALL_L3_L4, + ENA_HF_RSS_UDP4 = ENA_HF_RSS_ALL_L3_L4, + ENA_HF_RSS_TCP6 = ENA_HF_RSS_ALL_L3_L4, + ENA_HF_RSS_UDP6 = ENA_HF_RSS_ALL_L3_L4, + ENA_HF_RSS_IP4 = ENA_HF_RSS_ALL_L3, + ENA_HF_RSS_IP6 = ENA_HF_RSS_ALL_L3, + ENA_HF_RSS_IP4_FRAG = ENA_HF_RSS_ALL_L3, + ENA_HF_RSS_NOT_IP = ENA_HF_RSS_ALL_L2, + ENA_HF_RSS_TCP6_EX = ENA_HF_RSS_ALL_L3_L4, + ENA_HF_RSS_IP6_EX = ENA_HF_RSS_ALL_L3, +}; + +static int ena_fill_indirect_table_default(struct ena_com_dev *ena_dev, + size_t tbl_size, + size_t queue_num); +static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto, + uint16_t field); +static uint16_t ena_eth_hf_to_admin_hf(enum ena_admin_flow_hash_proto proto, + uint64_t rss_hf); +static int ena_set_hash_fields(struct ena_com_dev *ena_dev, uint64_t rss_hf); +static int ena_rss_hash_set(struct ena_com_dev *ena_dev, + struct rte_eth_rss_conf *rss_conf, + bool default_allowed); +static void ena_reorder_rss_hash_key(uint8_t *reordered_key, + uint8_t *key, + size_t key_size); +static int ena_get_rss_hash_key(struct ena_com_dev *ena_dev, uint8_t *rss_key); + +void ena_rss_key_fill(void *key, size_t size) +{ + static bool key_generated; + static uint8_t default_key[ENA_HASH_KEY_SIZE]; + size_t i; + + RTE_ASSERT(size <= ENA_HASH_KEY_SIZE); + + if (!key_generated) { + for (i = 0; i < ENA_HASH_KEY_SIZE; ++i) + default_key[i] = rte_rand() & 0xff; + key_generated = true; + } + + rte_memcpy(key, default_key, size); +} + +int ena_rss_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct ena_adapter *adapter = dev->data->dev_private; + struct ena_com_dev *ena_dev = &adapter->ena_dev; + int rc, i; + u16 entry_value; + int conf_idx; + int idx; + + if (reta_size == 0 || reta_conf == NULL) + return -EINVAL; + + if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) { + PMD_DRV_LOG(ERR, + "RSS was not configured for the PMD\n"); + return -ENOTSUP; + } + + if (reta_size > ENA_RX_RSS_TABLE_SIZE) { + PMD_DRV_LOG(WARNING, + "Requested indirection table size (%d) is bigger than supported: %d\n", + reta_size, ENA_RX_RSS_TABLE_SIZE); + return -EINVAL; + } + + for (i = 0 ; i < reta_size ; i++) { + /* Each reta_conf is for 64 entries. + * To support 128 we use 2 conf of 64.
+ */ + conf_idx = i / RTE_RETA_GROUP_SIZE; + idx = i % RTE_RETA_GROUP_SIZE; + if (TEST_BIT(reta_conf[conf_idx].mask, idx)) { + entry_value = + ENA_IO_RXQ_IDX(reta_conf[conf_idx].reta[idx]); + + rc = ena_com_indirect_table_fill_entry(ena_dev, i, + entry_value); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(ERR, + "Cannot fill indirection table\n"); + return rc; + } + } + } + + rte_spinlock_lock(&adapter->admin_lock); + rc = ena_com_indirect_table_set(ena_dev); + rte_spinlock_unlock(&adapter->admin_lock); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(ERR, "Cannot set the indirection table\n"); + return rc; + } + + PMD_DRV_LOG(DEBUG, "RSS configured %d entries for port %d\n", + reta_size, dev->data->port_id); + + return 0; +} + +/* Query redirection table. */ +int ena_rss_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + uint32_t indirect_table[ENA_RX_RSS_TABLE_SIZE] = { 0 }; + struct ena_adapter *adapter = dev->data->dev_private; + struct ena_com_dev *ena_dev = &adapter->ena_dev; + int rc; + int i; + int reta_conf_idx; + int reta_idx; + + if (reta_size == 0 || reta_conf == NULL || + (reta_size > RTE_RETA_GROUP_SIZE && ((reta_conf + 1) == NULL))) + return -EINVAL; + + if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) { + PMD_DRV_LOG(ERR, + "RSS was not configured for the PMD\n"); + return -ENOTSUP; + } + + rte_spinlock_lock(&adapter->admin_lock); + rc = ena_com_indirect_table_get(ena_dev, indirect_table); + rte_spinlock_unlock(&adapter->admin_lock); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(ERR, "Cannot get indirection table\n"); + return rc; + } + + for (i = 0 ; i < reta_size ; i++) { + reta_conf_idx = i / RTE_RETA_GROUP_SIZE; + reta_idx = i % RTE_RETA_GROUP_SIZE; + if (TEST_BIT(reta_conf[reta_conf_idx].mask, reta_idx)) + reta_conf[reta_conf_idx].reta[reta_idx] = + ENA_IO_RXQ_IDX_REV(indirect_table[i]); + } + + return 0; +} + +static int ena_fill_indirect_table_default(struct ena_com_dev *ena_dev, + size_t tbl_size, + size_t queue_num) +{ + size_t i; + int rc; + uint16_t val; + + for (i = 0; i < tbl_size; ++i) { + val = i % queue_num; + rc = ena_com_indirect_table_fill_entry(ena_dev, i, + ENA_IO_RXQ_IDX(val)); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(DEBUG, + "Failed to set %zu indirection table entry with val %" PRIu16 "\n", + i, val); + return rc; + } + } + + return 0; +} + +static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto, + uint16_t fields) +{ + uint64_t rss_hf = 0; + + /* If no fields are activated, then RSS is disabled for this proto */ + if ((fields & ENA_HF_RSS_ALL_L2_L3_L4) == 0) + return 0; + + /* Convert proto to ETH flag */ + switch (proto) { + case ENA_ADMIN_RSS_TCP4: + rss_hf |= ETH_RSS_NONFRAG_IPV4_TCP; + break; + case ENA_ADMIN_RSS_UDP4: + rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP; + break; + case ENA_ADMIN_RSS_TCP6: + rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP; + break; + case ENA_ADMIN_RSS_UDP6: + rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP; + break; + case ENA_ADMIN_RSS_IP4: + rss_hf |= ETH_RSS_IPV4; + break; + case ENA_ADMIN_RSS_IP6: + rss_hf |= ETH_RSS_IPV6; + break; + case ENA_ADMIN_RSS_IP4_FRAG: + rss_hf |= ETH_RSS_FRAG_IPV4; + break; + case ENA_ADMIN_RSS_NOT_IP: + rss_hf |= ETH_RSS_L2_PAYLOAD; + break; + case ENA_ADMIN_RSS_TCP6_EX: + rss_hf |= ETH_RSS_IPV6_TCP_EX; + break; + case ENA_ADMIN_RSS_IP6_EX: + rss_hf |= ETH_RSS_IPV6_EX; + break; + default: + break; + }; + + /* Check if only DA or SA is being used for L3. 
*/ + switch (fields & ENA_HF_RSS_ALL_L3) { + case ENA_ADMIN_RSS_L3_SA: + rss_hf |= ETH_RSS_L3_SRC_ONLY; + break; + case ENA_ADMIN_RSS_L3_DA: + rss_hf |= ETH_RSS_L3_DST_ONLY; + break; + default: + break; + }; + + /* Check if only DA or SA is being used for L4. */ + switch (fields & ENA_HF_RSS_ALL_L4) { + case ENA_ADMIN_RSS_L4_SP: + rss_hf |= ETH_RSS_L4_SRC_ONLY; + break; + case ENA_ADMIN_RSS_L4_DP: + rss_hf |= ETH_RSS_L4_DST_ONLY; + break; + default: + break; + }; + + return rss_hf; +} + +static uint16_t ena_eth_hf_to_admin_hf(enum ena_admin_flow_hash_proto proto, + uint64_t rss_hf) +{ + uint16_t fields_mask = 0; + + /* L2 always uses source and destination addresses. */ + fields_mask = ENA_ADMIN_RSS_L2_DA | ENA_ADMIN_RSS_L2_SA; + + /* Determine which fields of L3 should be used. */ + switch (rss_hf & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY)) { + case ETH_RSS_L3_DST_ONLY: + fields_mask |= ENA_ADMIN_RSS_L3_DA; + break; + case ETH_RSS_L3_SRC_ONLY: + fields_mask |= ENA_ADMIN_RSS_L3_SA; + break; + default: + /* + * If SRC nor DST aren't set, it means both of them should be + * used. + */ + fields_mask |= ENA_HF_RSS_ALL_L3; + } + + /* Determine which fields of L4 should be used. */ + switch (rss_hf & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)) { + case ETH_RSS_L4_DST_ONLY: + fields_mask |= ENA_ADMIN_RSS_L4_DP; + break; + case ETH_RSS_L4_SRC_ONLY: + fields_mask |= ENA_ADMIN_RSS_L4_SP; + break; + default: + /* + * If SRC nor DST aren't set, it means both of them should be + * used. + */ + fields_mask |= ENA_HF_RSS_ALL_L4; + } + + /* Return appropriate hash fields. */ + switch (proto) { + case ENA_ADMIN_RSS_TCP4: + return ENA_HF_RSS_TCP4 & fields_mask; + case ENA_ADMIN_RSS_UDP4: + return ENA_HF_RSS_UDP4 & fields_mask; + case ENA_ADMIN_RSS_TCP6: + return ENA_HF_RSS_TCP6 & fields_mask; + case ENA_ADMIN_RSS_UDP6: + return ENA_HF_RSS_UDP6 & fields_mask; + case ENA_ADMIN_RSS_IP4: + return ENA_HF_RSS_IP4 & fields_mask; + case ENA_ADMIN_RSS_IP6: + return ENA_HF_RSS_IP6 & fields_mask; + case ENA_ADMIN_RSS_IP4_FRAG: + return ENA_HF_RSS_IP4_FRAG & fields_mask; + case ENA_ADMIN_RSS_NOT_IP: + return ENA_HF_RSS_NOT_IP & fields_mask; + case ENA_ADMIN_RSS_TCP6_EX: + return ENA_HF_RSS_TCP6_EX & fields_mask; + case ENA_ADMIN_RSS_IP6_EX: + return ENA_HF_RSS_IP6_EX & fields_mask; + default: + break; + } + + return 0; +} + +static int ena_set_hash_fields(struct ena_com_dev *ena_dev, uint64_t rss_hf) +{ + struct ena_admin_proto_input selected_fields[ENA_ADMIN_RSS_PROTO_NUM] = {}; + int rc, i; + + /* Turn on appropriate fields for each requested packet type */ + if ((rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) != 0) + selected_fields[ENA_ADMIN_RSS_TCP4].fields = + ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP4, rss_hf); + + if ((rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) != 0) + selected_fields[ENA_ADMIN_RSS_UDP4].fields = + ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_UDP4, rss_hf); + + if ((rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) != 0) + selected_fields[ENA_ADMIN_RSS_TCP6].fields = + ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP6, rss_hf); + + if ((rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) != 0) + selected_fields[ENA_ADMIN_RSS_UDP6].fields = + ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_UDP6, rss_hf); + + if ((rss_hf & ETH_RSS_IPV4) != 0) + selected_fields[ENA_ADMIN_RSS_IP4].fields = + ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP4, rss_hf); + + if ((rss_hf & ETH_RSS_IPV6) != 0) + selected_fields[ENA_ADMIN_RSS_IP6].fields = + ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP6, rss_hf); + + if ((rss_hf & ETH_RSS_FRAG_IPV4) != 0) + selected_fields[ENA_ADMIN_RSS_IP4_FRAG].fields = + 
ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP4_FRAG, rss_hf); + + if ((rss_hf & ETH_RSS_L2_PAYLOAD) != 0) + selected_fields[ENA_ADMIN_RSS_NOT_IP].fields = + ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_NOT_IP, rss_hf); + + if ((rss_hf & ETH_RSS_IPV6_TCP_EX) != 0) + selected_fields[ENA_ADMIN_RSS_TCP6_EX].fields = + ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP6_EX, rss_hf); + + if ((rss_hf & ETH_RSS_IPV6_EX) != 0) + selected_fields[ENA_ADMIN_RSS_IP6_EX].fields = + ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP6_EX, rss_hf); + + /* Try to write them to the device */ + for (i = 0; i < ENA_ADMIN_RSS_PROTO_NUM; i++) { + rc = ena_com_fill_hash_ctrl(ena_dev, + (enum ena_admin_flow_hash_proto)i, + selected_fields[i].fields); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(DEBUG, + "Failed to set ENA HF %d with fields %" PRIu16 "\n", + i, selected_fields[i].fields); + return rc; + } + } + + return 0; +} + +static int ena_rss_hash_set(struct ena_com_dev *ena_dev, + struct rte_eth_rss_conf *rss_conf, + bool default_allowed) +{ + uint8_t hw_rss_key[ENA_HASH_KEY_SIZE]; + uint8_t *rss_key; + int rc; + + if (rss_conf->rss_key != NULL) { + /* Reorder the RSS key bytes for the hardware requirements. */ + ena_reorder_rss_hash_key(hw_rss_key, rss_conf->rss_key, + ENA_HASH_KEY_SIZE); + rss_key = hw_rss_key; + } else { + rss_key = NULL; + } + + /* If the rss_key is NULL, then the randomized key will be used. */ + rc = ena_com_fill_hash_function(ena_dev, ENA_ADMIN_TOEPLITZ, + rss_key, ENA_HASH_KEY_SIZE, 0); + if (rc != 0 && !(default_allowed && rc == ENA_COM_UNSUPPORTED)) { + PMD_DRV_LOG(ERR, + "Failed to set RSS hash function in the device\n"); + return rc; + } + + rc = ena_set_hash_fields(ena_dev, rss_conf->rss_hf); + if (rc == ENA_COM_UNSUPPORTED) { + if (rss_conf->rss_key == NULL && !default_allowed) { + PMD_DRV_LOG(ERR, + "Setting RSS hash fields is not supported\n"); + return -ENOTSUP; + } + PMD_DRV_LOG(WARNING, + "Setting RSS hash fields is not supported. Using default values: 0x%llx\n", + ENA_ALL_RSS_HF); + } else if (rc != 0) { + PMD_DRV_LOG(ERR, "Failed to set RSS hash fields\n"); + return rc; + } + + return 0; +} + +/* ENA HW interprets the RSS key in reverse bytes order. Because of that, the + * key must be processed upon interaction with ena_com layer. + */ +static void ena_reorder_rss_hash_key(uint8_t *reordered_key, + uint8_t *key, + size_t key_size) +{ + size_t i, rev_i; + + for (i = 0, rev_i = key_size - 1; i < key_size; ++i, --rev_i) + reordered_key[i] = key[rev_i]; +} + +static int ena_get_rss_hash_key(struct ena_com_dev *ena_dev, uint8_t *rss_key) +{ + uint8_t hw_rss_key[ENA_HASH_KEY_SIZE]; + int rc; + + /* The default RSS hash key cannot be retrieved from the HW. Unless it's + * explicitly set, this operation shouldn't be supported. + */ + if (ena_dev->rss.hash_key == NULL) { + PMD_DRV_LOG(WARNING, + "Retrieving default RSS hash key is not supported\n"); + return -ENOTSUP; + } + + rc = ena_com_get_hash_key(ena_dev, hw_rss_key); + if (rc != 0) + return rc; + + ena_reorder_rss_hash_key(rss_key, hw_rss_key, ENA_HASH_KEY_SIZE); + + return 0; +} + +int ena_rss_configure(struct ena_adapter *adapter) +{ + struct rte_eth_rss_conf *rss_conf; + struct ena_com_dev *ena_dev; + int rc; + + ena_dev = &adapter->ena_dev; + rss_conf = &adapter->edev_data->dev_conf.rx_adv_conf.rss_conf; + + if (adapter->edev_data->nb_rx_queues == 0) + return 0; + + /* Restart the indirection table. The number of queues could change + * between start/stop calls, so it must be reinitialized with default + * values. 
+ */ + rc = ena_fill_indirect_table_default(ena_dev, ENA_RX_RSS_TABLE_SIZE, + adapter->edev_data->nb_rx_queues); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(ERR, + "Failed to fill indirection table with default values\n"); + return rc; + } + + rc = ena_com_indirect_table_set(ena_dev); + if (unlikely(rc != 0 && rc != ENA_COM_UNSUPPORTED)) { + PMD_DRV_LOG(ERR, + "Failed to set indirection table in the device\n"); + return rc; + } + + rc = ena_rss_hash_set(ena_dev, rss_conf, true); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(ERR, "Failed to set RSS hash\n"); + return rc; + } + + PMD_DRV_LOG(DEBUG, "RSS configured for port %d\n", + adapter->edev_data->port_id); + + return 0; +} + +int ena_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct ena_adapter *adapter = dev->data->dev_private; + int rc; + + rte_spinlock_lock(&adapter->admin_lock); + rc = ena_rss_hash_set(&adapter->ena_dev, rss_conf, false); + rte_spinlock_unlock(&adapter->admin_lock); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(ERR, "Failed to set RSS hash\n"); + return rc; + } + + return 0; +} + +int ena_rss_hash_conf_get(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct ena_adapter *adapter = dev->data->dev_private; + struct ena_com_dev *ena_dev = &adapter->ena_dev; + enum ena_admin_flow_hash_proto proto; + uint64_t rss_hf = 0; + int rc, i; + uint16_t admin_hf; + static bool warn_once; + + if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) { + PMD_DRV_LOG(ERR, "RSS was not configured for the PMD\n"); + return -ENOTSUP; + } + + if (rss_conf->rss_key != NULL) { + rc = ena_get_rss_hash_key(ena_dev, rss_conf->rss_key); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(ERR, + "Cannot retrieve RSS hash key, err: %d\n", + rc); + return rc; + } + } + + for (i = 0; i < ENA_ADMIN_RSS_PROTO_NUM; ++i) { + proto = (enum ena_admin_flow_hash_proto)i; + rte_spinlock_lock(&adapter->admin_lock); + rc = ena_com_get_hash_ctrl(ena_dev, proto, &admin_hf); + rte_spinlock_unlock(&adapter->admin_lock); + if (rc == ENA_COM_UNSUPPORTED) { + /* As some devices may support only reading rss hash + * key and not the hash ctrl, we want to notify the + * caller that this feature is only partially supported + * and do not return an error - the caller could be + * interested only in the key value. + */ + if (!warn_once) { + PMD_DRV_LOG(WARNING, + "Reading hash control from the device is not supported. 
rss_hf will contain a default value.\n"); + warn_once = true; + } + rss_hf = ENA_ALL_RSS_HF; + break; + } else if (rc != 0) { + PMD_DRV_LOG(ERR, + "Failed to retrieve hash ctrl for proto: %d with err: %d\n", + i, rc); + return rc; + } + + rss_hf |= ena_admin_hf_to_eth_hf(proto, admin_hf); + } + + rss_conf->rss_hf = rss_hf; + return 0; +} diff --git a/drivers/net/ena/meson.build b/drivers/net/ena/meson.build index cc912fceba..d02ed3f64f 100644 --- a/drivers/net/ena/meson.build +++ b/drivers/net/ena/meson.build @@ -9,6 +9,7 @@ endif sources = files( 'ena_ethdev.c', + 'ena_rss.c', 'base/ena_com.c', 'base/ena_eth_com.c', ) From patchwork Wed Jul 14 10:34:35 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Krawczyk X-Patchwork-Id: 95851 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org
From: Michal Krawczyk To: dev@dpdk.org Cc: ndagan@amazon.com, shaibran@amazon.com, upstream@semihalf.com, Michal Krawczyk Date: Wed, 14 Jul 2021 12:34:35 +0200 Message-Id: <20210714103435.3388-7-mk@semihalf.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210714103435.3388-1-mk@semihalf.com> References: <20210713154118.32111-1-mk@semihalf.com> <20210714103435.3388-1-mk@semihalf.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v2 6/6] net/ena: update version to v2.4.0 List-Id: DPDK patches and discussions Sender: "dev" This version update contains: * Rx interrupts feature, * Support for the RSS hash function reconfiguration, * Small rework of the driver logs, * Reset trigger on Tx path fix. Signed-off-by: Michal Krawczyk Change-Id: I66798cdbe5b980eab7e11b036eed256e27f80e8a --- drivers/net/ena/ena_ethdev.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index ee059fc165..14f776b5ad 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -21,7 +21,7 @@ #include #define DRV_MODULE_VER_MAJOR 2 -#define DRV_MODULE_VER_MINOR 3 +#define DRV_MODULE_VER_MINOR 4 #define DRV_MODULE_VER_SUBMINOR 0 #define __MERGE_64B_H_L(h, l) (((uint64_t)h << 32) | l)
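The two headline features listed in this version update are reached through the standard ethdev API. As an application-side sketch only (assuming the 21.08-era rte_ethdev interface, a port configured with dev_conf.intr_conf.rxq = 1, and illustrative example_* names; error handling is omitted for brevity), a polling thread can arm the per-queue Rx interrupt added by this series and sleep until traffic arrives:

#include <rte_ethdev.h>
#include <rte_interrupts.h>

/* Wait up to 10 ms for traffic on one Rx queue instead of busy-polling. */
static void example_rx_intr_wait(uint16_t port_id, uint16_t queue_id)
{
	struct rte_epoll_event event;

	/* Register the queue's interrupt with the per-thread epoll instance. */
	rte_eth_dev_rx_intr_ctl_q(port_id, queue_id, RTE_EPOLL_PER_THREAD,
				  RTE_INTR_EVENT_ADD, NULL);

	/* Unmask the interrupt (ena_rx_queue_intr_enable() in the PMD), sleep
	 * until an event or the timeout, then mask it again before resuming
	 * polling. */
	rte_eth_dev_rx_intr_enable(port_id, queue_id);
	rte_epoll_wait(RTE_EPOLL_PER_THREAD, &event, 1, 10);
	rte_eth_dev_rx_intr_disable(port_id, queue_id);
}

Similarly, the RSS key and hash-field reconfiguration from patch 5/6 can be exercised via rte_eth_dev_rss_hash_update(). A minimal sketch, again under the same assumptions and with the pre-21.11 ETH_RSS_* flag names; the 40-byte key length matches ENA_HASH_KEY_SIZE from the series:

static int example_update_rss_key(uint16_t port_id)
{
	/* Application-chosen key; the ENA PMD reverses the byte order before
	 * programming the device, so the key is passed here as-is. */
	static uint8_t rss_key[40] = { 0x6d, 0x5a, /* remaining bytes zero */ };
	struct rte_eth_rss_conf rss_conf = {
		.rss_key = rss_key,
		.rss_key_len = sizeof(rss_key),
		.rss_hf = ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP,
	};

	return rte_eth_dev_rss_hash_update(port_id, &rss_conf);
}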