From patchwork Fri Jul 23 10:24:49 2021
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 96238
From: Michal Krawczyk
To: dev@dpdk.org
Cc: ndagan@amazon.com, shaibran@amazon.com, upstream@semihalf.com, Michal Krawczyk, Igor Chauskin
Date: Fri, 23 Jul 2021 12:24:49 +0200
Message-Id: <20210723102454.12206-2-mk@semihalf.com>
In-Reply-To: <20210723102454.12206-1-mk@semihalf.com>
References: <20210714104320.4096-1-mk@semihalf.com> <20210723102454.12206-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v4 1/6] net/ena: adjust driver logs

ENA logs were not consistent regarding the new line character. A few of them relied on the new line character added by the PMD_*_LOG macros, but most added the new line character themselves. This caused ENA logs to print an extra empty line after almost every log. To unify this behavior, the missing new line characters were added to the driver logs, and they were removed from the logging macros. After this patch, every ENA log message should add '\n' at the end. 
Moreover, the logging messages were adjusted in terms of wording (unnecessary abbreviations were removed) and capitalization (sentences now start with capital letters, and 'Tx/Rx' is used instead of 'tx/TX', etc.). Some of the logs were rephrased to make them clearer for the reader. Signed-off-by: Michal Krawczyk Reviewed-by: Igor Chauskin Reviewed-by: Shai Brandes --- drivers/net/ena/ena_ethdev.c | 150 ++++++++++++++++++----------------- drivers/net/ena/ena_logs.h | 10 +-- 2 files changed, 84 insertions(+), 76 deletions(-) diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index dfe68279fa..f5e812d507 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -544,7 +544,7 @@ ena_dev_reset(struct rte_eth_dev *dev) ena_destroy_device(dev); rc = eth_ena_dev_init(dev); if (rc) - PMD_INIT_LOG(CRIT, "Cannot initialize device"); + PMD_INIT_LOG(CRIT, "Cannot initialize device\n"); return rc; } @@ -565,7 +565,7 @@ static int ena_rss_reta_update(struct rte_eth_dev *dev, if (reta_size > ENA_RX_RSS_TABLE_SIZE) { PMD_DRV_LOG(WARNING, - "indirection table %d is bigger than supported (%d)\n", + "Requested indirection table size (%d) is bigger than supported: %d\n", reta_size, ENA_RX_RSS_TABLE_SIZE); return -EINVAL; } @@ -599,8 +599,8 @@ static int ena_rss_reta_update(struct rte_eth_dev *dev, return rc; } - PMD_DRV_LOG(DEBUG, "%s(): RSS configured %d entries for port %d\n", - __func__, reta_size, dev->data->port_id); + PMD_DRV_LOG(DEBUG, "RSS configured %d entries for port %d\n", + reta_size, dev->data->port_id); return 0; } @@ -626,7 +626,7 @@ static int ena_rss_reta_query(struct rte_eth_dev *dev, rc = ena_com_indirect_table_get(ena_dev, indirect_table); rte_spinlock_unlock(&adapter->admin_lock); if (unlikely(rc && rc != ENA_COM_UNSUPPORTED)) { - PMD_DRV_LOG(ERR, "cannot get indirect table\n"); + PMD_DRV_LOG(ERR, "Cannot get indirection table\n"); return -ENOTSUP; } @@ -650,7 +650,7 @@ static int ena_rss_init_default(struct ena_adapter
*adapter) rc = ena_com_rss_init(ena_dev, ENA_RX_RSS_TABLE_LOG_SIZE); if (unlikely(rc)) { - PMD_DRV_LOG(ERR, "Cannot init indirect table\n"); + PMD_DRV_LOG(ERR, "Cannot init indirection table\n"); goto err_rss_init; } @@ -659,7 +659,7 @@ static int ena_rss_init_default(struct ena_adapter *adapter) rc = ena_com_indirect_table_fill_entry(ena_dev, i, ENA_IO_RXQ_IDX(val)); if (unlikely(rc && (rc != ENA_COM_UNSUPPORTED))) { - PMD_DRV_LOG(ERR, "Cannot fill indirect table\n"); + PMD_DRV_LOG(ERR, "Cannot fill indirection table\n"); goto err_fill_indir; } } @@ -679,7 +679,7 @@ static int ena_rss_init_default(struct ena_adapter *adapter) rc = ena_com_indirect_table_set(ena_dev); if (unlikely(rc && (rc != ENA_COM_UNSUPPORTED))) { - PMD_DRV_LOG(ERR, "Cannot flush the indirect table\n"); + PMD_DRV_LOG(ERR, "Cannot flush indirection table\n"); goto err_fill_indir; } PMD_DRV_LOG(DEBUG, "RSS configured for port %d\n", @@ -733,7 +733,7 @@ static void ena_rx_queue_release(void *queue) ring->configured = 0; - PMD_DRV_LOG(NOTICE, "RX Queue %d:%d released\n", + PMD_DRV_LOG(NOTICE, "Rx queue %d:%d released\n", ring->port_id, ring->id); } @@ -757,7 +757,7 @@ static void ena_tx_queue_release(void *queue) ring->configured = 0; - PMD_DRV_LOG(NOTICE, "TX Queue %d:%d released\n", + PMD_DRV_LOG(NOTICE, "Tx queue %d:%d released\n", ring->port_id, ring->id); } @@ -822,19 +822,19 @@ static int ena_queue_start_all(struct rte_eth_dev *dev, if (ring_type == ENA_RING_TYPE_RX) { ena_assert_msg( dev->data->rx_queues[i] == &queues[i], - "Inconsistent state of rx queues\n"); + "Inconsistent state of Rx queues\n"); } else { ena_assert_msg( dev->data->tx_queues[i] == &queues[i], - "Inconsistent state of tx queues\n"); + "Inconsistent state of Tx queues\n"); } rc = ena_queue_start(&queues[i]); if (rc) { PMD_INIT_LOG(ERR, - "failed to start queue %d type(%d)", - i, ring_type); + "Failed to start queue[%d] of type(%d)\n", + i, ring_type); goto err; } } @@ -867,9 +867,9 @@ static int ena_check_valid_conf(struct 
ena_adapter *adapter) uint32_t max_frame_len = ena_get_mtu_conf(adapter); if (max_frame_len > adapter->max_mtu || max_frame_len < ENA_MIN_MTU) { - PMD_INIT_LOG(ERR, "Unsupported MTU of %d. " - "max mtu: %d, min mtu: %d", - max_frame_len, adapter->max_mtu, ENA_MIN_MTU); + PMD_INIT_LOG(ERR, + "Unsupported MTU of %d. Max MTU: %d, min MTU: %d\n", + max_frame_len, adapter->max_mtu, ENA_MIN_MTU); return ENA_COM_UNSUPPORTED; } @@ -938,7 +938,7 @@ ena_calc_io_queue_size(struct ena_calc_queue_size_ctx *ctx, ENA_ADMIN_PLACEMENT_POLICY_DEV)) { max_tx_queue_size /= 2; PMD_INIT_LOG(INFO, - "Forcing large headers and decreasing maximum TX queue size to %d\n", + "Forcing large headers and decreasing maximum Tx queue size to %d\n", max_tx_queue_size); } else { PMD_INIT_LOG(ERR, @@ -947,7 +947,7 @@ ena_calc_io_queue_size(struct ena_calc_queue_size_ctx *ctx, } if (unlikely(max_rx_queue_size == 0 || max_tx_queue_size == 0)) { - PMD_INIT_LOG(ERR, "Invalid queue size"); + PMD_INIT_LOG(ERR, "Invalid queue size\n"); return -EFAULT; } @@ -1044,8 +1044,7 @@ static int ena_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) if (mtu > ena_get_mtu_conf(adapter) || mtu < ENA_MIN_MTU) { PMD_DRV_LOG(ERR, - "Invalid MTU setting. new_mtu: %d " - "max mtu: %d min mtu: %d\n", + "Invalid MTU setting. 
New MTU: %d, max MTU: %d, min MTU: %d\n", mtu, ena_get_mtu_conf(adapter), ENA_MIN_MTU); return -EINVAL; } @@ -1054,7 +1053,7 @@ static int ena_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) if (rc) PMD_DRV_LOG(ERR, "Could not set MTU: %d\n", mtu); else - PMD_DRV_LOG(NOTICE, "Set MTU: %d\n", mtu); + PMD_DRV_LOG(NOTICE, "MTU set to: %d\n", mtu); return rc; } @@ -1130,7 +1129,7 @@ static int ena_stop(struct rte_eth_dev *dev) if (adapter->trigger_reset) { rc = ena_com_dev_reset(ena_dev, adapter->reset_reason); if (rc) - PMD_DRV_LOG(ERR, "Device reset failed rc=%d\n", rc); + PMD_DRV_LOG(ERR, "Device reset failed, rc: %d\n", rc); } ++adapter->dev_stats.dev_stop; @@ -1175,7 +1174,7 @@ static int ena_create_io_queue(struct ena_ring *ring) rc = ena_com_create_io_queue(ena_dev, &ctx); if (rc) { PMD_DRV_LOG(ERR, - "failed to create io queue #%d (qid:%d) rc: %d\n", + "Failed to create IO queue[%d] (qid:%d), rc: %d\n", ring->id, ena_qid, rc); return rc; } @@ -1185,7 +1184,7 @@ static int ena_create_io_queue(struct ena_ring *ring) &ring->ena_com_io_cq); if (rc) { PMD_DRV_LOG(ERR, - "Failed to get io queue handlers. 
queue num %d rc: %d\n", + "Failed to get IO queue[%d] handlers, rc: %d\n", ring->id, rc); ena_com_destroy_io_queue(ena_dev, ena_qid); return rc; @@ -1239,7 +1238,7 @@ static int ena_queue_start(struct ena_ring *ring) rc = ena_create_io_queue(ring); if (rc) { - PMD_INIT_LOG(ERR, "Failed to create IO queue!"); + PMD_INIT_LOG(ERR, "Failed to create IO queue\n"); return rc; } @@ -1257,7 +1256,7 @@ static int ena_queue_start(struct ena_ring *ring) if (rc != bufs_num) { ena_com_destroy_io_queue(&ring->adapter->ena_dev, ENA_IO_RXQ_IDX(ring->id)); - PMD_INIT_LOG(ERR, "Failed to populate rx ring !"); + PMD_INIT_LOG(ERR, "Failed to populate Rx ring\n"); return ENA_COM_FAULT; } /* Flush per-core RX buffers pools cache as they can be used on other @@ -1282,21 +1281,21 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev, if (txq->configured) { PMD_DRV_LOG(CRIT, - "API violation. Queue %d is already configured\n", + "API violation. Queue[%d] is already configured\n", queue_idx); return ENA_COM_FAULT; } if (!rte_is_power_of_2(nb_desc)) { PMD_DRV_LOG(ERR, - "Unsupported size of TX queue: %d is not a power of 2.\n", + "Unsupported size of Tx queue: %d is not a power of 2.\n", nb_desc); return -EINVAL; } if (nb_desc > adapter->max_tx_ring_size) { PMD_DRV_LOG(ERR, - "Unsupported size of TX queue (max size: %d)\n", + "Unsupported size of Tx queue (max size: %d)\n", adapter->max_tx_ring_size); return -EINVAL; } @@ -1314,7 +1313,8 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev, txq->ring_size, RTE_CACHE_LINE_SIZE); if (!txq->tx_buffer_info) { - PMD_DRV_LOG(ERR, "failed to alloc mem for tx buffer info\n"); + PMD_DRV_LOG(ERR, + "Failed to allocate memory for Tx buffer info\n"); return -ENOMEM; } @@ -1322,7 +1322,8 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev, sizeof(u16) * txq->ring_size, RTE_CACHE_LINE_SIZE); if (!txq->empty_tx_reqs) { - PMD_DRV_LOG(ERR, "failed to alloc mem for tx reqs\n"); + PMD_DRV_LOG(ERR, + "Failed to allocate memory for empty Tx 
requests\n"); rte_free(txq->tx_buffer_info); return -ENOMEM; } @@ -1332,7 +1333,7 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev, txq->tx_max_header_size, RTE_CACHE_LINE_SIZE); if (!txq->push_buf_intermediate_buf) { - PMD_DRV_LOG(ERR, "failed to alloc push buff for LLQ\n"); + PMD_DRV_LOG(ERR, "Failed to alloc push buffer for LLQ\n"); rte_free(txq->tx_buffer_info); rte_free(txq->empty_tx_reqs); return -ENOMEM; @@ -1367,21 +1368,21 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, rxq = &adapter->rx_ring[queue_idx]; if (rxq->configured) { PMD_DRV_LOG(CRIT, - "API violation. Queue %d is already configured\n", + "API violation. Queue[%d] is already configured\n", queue_idx); return ENA_COM_FAULT; } if (!rte_is_power_of_2(nb_desc)) { PMD_DRV_LOG(ERR, - "Unsupported size of RX queue: %d is not a power of 2.\n", + "Unsupported size of Rx queue: %d is not a power of 2.\n", nb_desc); return -EINVAL; } if (nb_desc > adapter->max_rx_ring_size) { PMD_DRV_LOG(ERR, - "Unsupported size of RX queue (max size: %d)\n", + "Unsupported size of Rx queue (max size: %d)\n", adapter->max_rx_ring_size); return -EINVAL; } @@ -1390,7 +1391,7 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, buffer_size = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM; if (buffer_size < ENA_RX_BUF_MIN_SIZE) { PMD_DRV_LOG(ERR, - "Unsupported size of RX buffer: %zu (min size: %d)\n", + "Unsupported size of Rx buffer: %zu (min size: %d)\n", buffer_size, ENA_RX_BUF_MIN_SIZE); return -EINVAL; } @@ -1407,7 +1408,8 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, sizeof(struct ena_rx_buffer) * nb_desc, RTE_CACHE_LINE_SIZE); if (!rxq->rx_buffer_info) { - PMD_DRV_LOG(ERR, "failed to alloc mem for rx buffer info\n"); + PMD_DRV_LOG(ERR, + "Failed to allocate memory for Rx buffer info\n"); return -ENOMEM; } @@ -1416,7 +1418,8 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, RTE_CACHE_LINE_SIZE); if (!rxq->rx_refill_buffer) { - PMD_DRV_LOG(ERR, "failed to alloc mem for rx 
refill buffer\n"); + PMD_DRV_LOG(ERR, + "Failed to allocate memory for Rx refill buffer\n"); rte_free(rxq->rx_buffer_info); rxq->rx_buffer_info = NULL; return -ENOMEM; @@ -1426,7 +1429,8 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, sizeof(uint16_t) * nb_desc, RTE_CACHE_LINE_SIZE); if (!rxq->empty_rx_reqs) { - PMD_DRV_LOG(ERR, "failed to alloc mem for empty rx reqs\n"); + PMD_DRV_LOG(ERR, + "Failed to allocate memory for empty Rx requests\n"); rte_free(rxq->rx_buffer_info); rxq->rx_buffer_info = NULL; rte_free(rxq->rx_refill_buffer); @@ -1457,7 +1461,7 @@ static int ena_add_single_rx_desc(struct ena_com_io_sq *io_sq, /* pass resource to device */ rc = ena_com_add_single_rx_desc(io_sq, &ebuf, id); if (unlikely(rc != 0)) - PMD_DRV_LOG(WARNING, "failed adding rx desc\n"); + PMD_DRV_LOG(WARNING, "Failed adding Rx desc\n"); return rc; } @@ -1483,7 +1487,7 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count) if (unlikely(rc < 0)) { rte_atomic64_inc(&rxq->adapter->drv_stats->rx_nombuf); ++rxq->rx_stats.mbuf_alloc_fail; - PMD_RX_LOG(DEBUG, "there are no enough free buffers"); + PMD_RX_LOG(DEBUG, "There are not enough free buffers\n"); return 0; } @@ -1506,8 +1510,9 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count) } if (unlikely(i < count)) { - PMD_DRV_LOG(WARNING, "refilled rx qid %d with only %d " - "buffers (from %d)\n", rxq->id, i, count); + PMD_DRV_LOG(WARNING, + "Refilled Rx queue[%d] with only %d/%d buffers\n", + rxq->id, i, count); rte_pktmbuf_free_bulk(&mbufs[i], count - i); ++rxq->rx_stats.refill_partial; } @@ -1535,7 +1540,7 @@ static int ena_device_init(struct ena_com_dev *ena_dev, /* Initialize mmio registers */ rc = ena_com_mmio_reg_read_request_init(ena_dev); if (rc) { - PMD_DRV_LOG(ERR, "failed to init mmio read less\n"); + PMD_DRV_LOG(ERR, "Failed to init MMIO read less\n"); return rc; } @@ -1548,14 +1553,14 @@ static int ena_device_init(struct ena_com_dev *ena_dev, /* reset device */ rc = 
ena_com_dev_reset(ena_dev, ENA_REGS_RESET_NORMAL); if (rc) { - PMD_DRV_LOG(ERR, "cannot reset device\n"); + PMD_DRV_LOG(ERR, "Cannot reset device\n"); goto err_mmio_read_less; } /* check FW version */ rc = ena_com_validate_version(ena_dev); if (rc) { - PMD_DRV_LOG(ERR, "device version is too low\n"); + PMD_DRV_LOG(ERR, "Device version is too low\n"); goto err_mmio_read_less; } @@ -1565,7 +1570,7 @@ static int ena_device_init(struct ena_com_dev *ena_dev, rc = ena_com_admin_init(ena_dev, &aenq_handlers); if (rc) { PMD_DRV_LOG(ERR, - "cannot initialize ena admin queue with device\n"); + "Cannot initialize ENA admin queue\n"); goto err_mmio_read_less; } @@ -1581,7 +1586,7 @@ static int ena_device_init(struct ena_com_dev *ena_dev, rc = ena_com_get_dev_attr_feat(ena_dev, get_feat_ctx); if (rc) { PMD_DRV_LOG(ERR, - "cannot get attribute for ena device rc= %d\n", rc); + "Cannot get attribute for ENA device, rc: %d\n", rc); goto err_admin_init; } @@ -1594,7 +1599,7 @@ static int ena_device_init(struct ena_com_dev *ena_dev, aenq_groups &= get_feat_ctx->aenq.supported_groups; rc = ena_com_set_aenq_config(ena_dev, aenq_groups); if (rc) { - PMD_DRV_LOG(ERR, "Cannot configure aenq groups rc: %d\n", rc); + PMD_DRV_LOG(ERR, "Cannot configure AENQ groups, rc: %d\n", rc); goto err_admin_init; } @@ -1643,7 +1648,7 @@ static void check_for_missing_keep_alive(struct ena_adapter *adapter) static void check_for_admin_com_state(struct ena_adapter *adapter) { if (unlikely(!ena_com_get_admin_running_state(&adapter->ena_dev))) { - PMD_DRV_LOG(ERR, "ENA admin queue is not in running state!\n"); + PMD_DRV_LOG(ERR, "ENA admin queue is not in running state\n"); adapter->reset_reason = ENA_REGS_RESET_ADMIN_TO; adapter->trigger_reset = true; } @@ -1706,8 +1711,8 @@ ena_set_queues_placement_policy(struct ena_adapter *adapter, rc = ena_com_config_dev_mode(ena_dev, llq, llq_default_configurations); if (unlikely(rc)) { - PMD_INIT_LOG(WARNING, "Failed to config dev mode. 
" - "Fallback to host mode policy."); + PMD_INIT_LOG(WARNING, + "Failed to config dev mode. Fallback to host mode policy.\n"); ena_dev->tx_mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST; return 0; } @@ -1717,8 +1722,8 @@ ena_set_queues_placement_policy(struct ena_adapter *adapter, return 0; if (!adapter->dev_mem_base) { - PMD_DRV_LOG(ERR, "Unable to access LLQ bar resource. " - "Fallback to host mode policy.\n."); + PMD_DRV_LOG(ERR, + "Unable to access LLQ BAR resource. Fallback to host mode policy.\n"); ena_dev->tx_mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST; return 0; } @@ -1758,7 +1763,7 @@ static uint32_t ena_calc_max_io_queue_num(struct ena_com_dev *ena_dev, max_num_io_queues = RTE_MIN(max_num_io_queues, io_tx_cq_num); if (unlikely(max_num_io_queues == 0)) { - PMD_DRV_LOG(ERR, "Number of IO queues should not be 0\n"); + PMD_DRV_LOG(ERR, "Number of IO queues cannot not be 0\n"); return -EFAULT; } @@ -1798,7 +1803,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); - PMD_INIT_LOG(INFO, "Initializing %x:%x:%x.%d", + PMD_INIT_LOG(INFO, "Initializing %x:%x:%x.%d\n", pci_dev->addr.domain, pci_dev->addr.bus, pci_dev->addr.devid, @@ -1810,7 +1815,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) adapter->dev_mem_base = pci_dev->mem_resource[ENA_MEM_BAR].addr; if (!adapter->regs) { - PMD_INIT_LOG(CRIT, "Failed to access registers BAR(%d)", + PMD_INIT_LOG(CRIT, "Failed to access registers BAR(%d)\n", ENA_REGS_BAR); return -ENXIO; } @@ -1833,7 +1838,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) /* device specific initialization routine */ rc = ena_device_init(ena_dev, pci_dev, &get_feat_ctx, &wd_state); if (rc) { - PMD_INIT_LOG(CRIT, "Failed to init ENA device"); + PMD_INIT_LOG(CRIT, "Failed to init ENA device\n"); goto err; } adapter->wd_state = wd_state; @@ -1843,7 +1848,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) rc = ena_set_queues_placement_policy(adapter, ena_dev, 
&get_feat_ctx.llq, &llq_config); if (unlikely(rc)) { - PMD_INIT_LOG(CRIT, "Failed to set placement policy"); + PMD_INIT_LOG(CRIT, "Failed to set placement policy\n"); return rc; } @@ -1905,7 +1910,8 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) sizeof(*adapter->drv_stats), RTE_CACHE_LINE_SIZE); if (!adapter->drv_stats) { - PMD_DRV_LOG(ERR, "failed to alloc mem for adapter stats\n"); + PMD_DRV_LOG(ERR, + "Failed to allocate memory for adapter statistics\n"); rc = -ENOMEM; goto err_delete_debug_area; } @@ -2233,7 +2239,9 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, rx_ring->ena_com_io_sq, &ena_rx_ctx); if (unlikely(rc)) { - PMD_DRV_LOG(ERR, "ena_com_rx_pkt error %d\n", rc); + PMD_DRV_LOG(ERR, + "Failed to get the packet from the device, rc: %d\n", + rc); if (rc == ENA_COM_NO_SPACE) { ++rx_ring->rx_stats.bad_desc_num; rx_ring->adapter->reset_reason = @@ -2408,7 +2416,7 @@ static int ena_check_space_and_linearize_mbuf(struct ena_ring *tx_ring, * be needed so we reduce the segments number from num_segments to 1 */ if (!ena_com_sq_have_enough_space(tx_ring->ena_com_io_sq, 3)) { - PMD_DRV_LOG(DEBUG, "Not enough space in the tx queue\n"); + PMD_DRV_LOG(DEBUG, "Not enough space in the Tx queue\n"); return ENA_COM_NO_MEM; } ++tx_ring->tx_stats.linearize; @@ -2428,7 +2436,7 @@ static int ena_check_space_and_linearize_mbuf(struct ena_ring *tx_ring, */ if (!ena_com_sq_have_enough_space(tx_ring->ena_com_io_sq, num_segments + 2)) { - PMD_DRV_LOG(DEBUG, "Not enough space in the tx queue\n"); + PMD_DRV_LOG(DEBUG, "Not enough space in the Tx queue\n"); return ENA_COM_NO_MEM; } @@ -2544,7 +2552,7 @@ static int ena_xmit_mbuf(struct ena_ring *tx_ring, struct rte_mbuf *mbuf) if (unlikely(ena_com_is_doorbell_needed(tx_ring->ena_com_io_sq, &ena_tx_ctx))) { PMD_DRV_LOG(DEBUG, - "llq tx max burst size of queue %d achieved, writing doorbell to send burst\n", + "LLQ Tx max burst size of queue %d achieved, writing doorbell to send burst\n", 
tx_ring->id); ena_com_write_sq_doorbell(tx_ring->ena_com_io_sq); tx_ring->tx_stats.doorbells++; @@ -2666,10 +2674,10 @@ int ena_copy_eni_stats(struct ena_adapter *adapter) if (rc != 0) { if (rc == ENA_COM_UNSUPPORTED) { PMD_DRV_LOG(DEBUG, - "Retrieving ENI metrics is not supported.\n"); + "Retrieving ENI metrics is not supported\n"); } else { PMD_DRV_LOG(WARNING, - "Failed to get ENI metrics: %d\n", rc); + "Failed to get ENI metrics, rc: %d\n", rc); } return rc; } @@ -2993,7 +3001,7 @@ static void ena_notification(void *adapter_data, struct ena_admin_ena_hw_hints *hints; if (aenq_e->aenq_common_desc.group != ENA_ADMIN_NOTIFICATION) - PMD_DRV_LOG(WARNING, "Invalid group(%x) expected %x\n", + PMD_DRV_LOG(WARNING, "Invalid AENQ group: %x. Expected: %x\n", aenq_e->aenq_common_desc.group, ENA_ADMIN_NOTIFICATION); @@ -3004,7 +3012,7 @@ static void ena_notification(void *adapter_data, ena_update_hints(adapter, hints); break; default: - PMD_DRV_LOG(ERR, "Invalid aenq notification link state %d\n", + PMD_DRV_LOG(ERR, "Invalid AENQ notification link state: %d\n", aenq_e->aenq_common_desc.syndrome); } } @@ -3034,8 +3042,8 @@ static void ena_keep_alive(void *adapter_data, static void unimplemented_aenq_handler(__rte_unused void *data, __rte_unused struct ena_admin_aenq_entry *aenq_e) { - PMD_DRV_LOG(ERR, "Unknown event was received or event with " - "unimplemented handler\n"); + PMD_DRV_LOG(ERR, + "Unknown event was received or event with unimplemented handler\n"); } static struct ena_aenq_handlers aenq_handlers = { diff --git a/drivers/net/ena/ena_logs.h b/drivers/net/ena/ena_logs.h index 9053c9183f..040bebfb98 100644 --- a/drivers/net/ena/ena_logs.h +++ b/drivers/net/ena/ena_logs.h @@ -9,13 +9,13 @@ extern int ena_logtype_init; #define PMD_INIT_LOG(level, fmt, args...) 
\ rte_log(RTE_LOG_ ## level, ena_logtype_init, \ - "%s(): " fmt "\n", __func__, ## args) + "%s(): " fmt, __func__, ## args) #ifdef RTE_LIBRTE_ENA_DEBUG_RX extern int ena_logtype_rx; #define PMD_RX_LOG(level, fmt, args...) \ rte_log(RTE_LOG_ ## level, ena_logtype_rx, \ - "%s(): " fmt "\n", __func__, ## args) + "%s(): " fmt, __func__, ## args) #else #define PMD_RX_LOG(level, fmt, args...) do { } while (0) #endif @@ -24,7 +24,7 @@ extern int ena_logtype_rx; extern int ena_logtype_tx; #define PMD_TX_LOG(level, fmt, args...) \ rte_log(RTE_LOG_ ## level, ena_logtype_tx, \ - "%s(): " fmt "\n", __func__, ## args) + "%s(): " fmt, __func__, ## args) #else #define PMD_TX_LOG(level, fmt, args...) do { } while (0) #endif @@ -33,7 +33,7 @@ extern int ena_logtype_tx; extern int ena_logtype_tx_free; #define PMD_TX_FREE_LOG(level, fmt, args...) \ rte_log(RTE_LOG_ ## level, ena_logtype_tx_free, \ - "%s(): " fmt "\n", __func__, ## args) + "%s(): " fmt, __func__, ## args) #else #define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0) #endif @@ -41,6 +41,6 @@ extern int ena_logtype_tx_free; extern int ena_logtype_driver; #define PMD_DRV_LOG(level, fmt, args...) 
\ rte_log(RTE_LOG_ ## level, ena_logtype_driver, \ - "%s(): " fmt "\n", __func__, ## args) + "%s(): " fmt, __func__, ## args) #endif /* _ENA_LOGS_H_ */

From patchwork Fri Jul 23 10:24:50 2021
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 96239
From: Michal Krawczyk
To: dev@dpdk.org
Cc: ndagan@amazon.com, shaibran@amazon.com, upstream@semihalf.com, Michal Krawczyk, Igor Chauskin
Date: Fri, 23 Jul 2021 12:24:50 +0200
Message-Id: <20210723102454.12206-3-mk@semihalf.com>
In-Reply-To: <20210723102454.12206-1-mk@semihalf.com>
References: <20210714104320.4096-1-mk@semihalf.com> <20210723102454.12206-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v4 2/6] net/ena: make use of the IO debug build option

ENA defined its own logger flags for Tx and Rx, but those data path loggers were never actually used after their definition. This commit uses the generic RTE_ETHDEV_DEBUG_RX and RTE_ETHDEV_DEBUG_TX flags to define PMD_TX_LOG and PMD_RX_LOG, which are now used on the data path. 
The PMD_TX_FREE_LOG was removed, as it is not used in the current version of the driver. RTE_ETHDEV_DEBUG_[TR]X now wraps extra checks for the driver state in the IO path - this saves extra conditionals on the hot path. The ena_com logger is no longer optional (previously it had to be explicitly enabled by defining the flag RTE_LIBRTE_ENA_COM_DEBUG). Having this logger optional makes tracing of ena_com errors much harder. Due to ena_com design, it's impossible to separate IO path logs from the management path logs, so for now they will always be enabled. Default levels for the affected loggers were modified. Hot path loggers are initialized with the default level of DEBUG instead of NOTICE, as they have to be explicitly enabled. The ena_com logging level was reduced from NOTICE to WARNING - as it's no longer optional, the driver should report just warnings in the ena_com layer. Signed-off-by: Michal Krawczyk Reviewed-by: Igor Chauskin Reviewed-by: Shai Brandes --- drivers/net/ena/base/ena_plat_dpdk.h | 7 ---- drivers/net/ena/ena_ethdev.c | 52 +++++++++++++++------------- drivers/net/ena/ena_logs.h | 13 ++----- 3 files changed, 30 insertions(+), 42 deletions(-) diff --git a/drivers/net/ena/base/ena_plat_dpdk.h b/drivers/net/ena/base/ena_plat_dpdk.h index f66df95591..4e7f52881a 100644 --- a/drivers/net/ena/base/ena_plat_dpdk.h +++ b/drivers/net/ena/base/ena_plat_dpdk.h @@ -108,7 +108,6 @@ extern int ena_logtype_com; #define GENMASK_ULL(h, l) (((~0ULL) - (1ULL << (l)) + 1) & \ (~0ULL >> (BITS_PER_LONG_LONG - 1 - (h)))) -#ifdef RTE_LIBRTE_ENA_COM_DEBUG #define ena_trc_log(dev, level, fmt, arg...) \ ( \ ENA_TOUCH(dev), \ @@ -121,12 +120,6 @@ extern int ena_logtype_com; #define ena_trc_warn(dev, format, arg...) \ ena_trc_log(dev, WARNING, format, ##arg) #define ena_trc_err(dev, format, arg...) ena_trc_log(dev, ERR, format, ##arg) -#else -#define ena_trc_dbg(dev, format, arg...) ENA_TOUCH(dev) -#define ena_trc_info(dev, format, arg...)
ENA_TOUCH(dev) -#define ena_trc_warn(dev, format, arg...) ENA_TOUCH(dev) -#define ena_trc_err(dev, format, arg...) ENA_TOUCH(dev) -#endif /* RTE_LIBRTE_ENA_COM_DEBUG */ #define ENA_WARN(cond, dev, format, arg...) \ do { \ diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index f5e812d507..2335436b6c 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -399,9 +399,9 @@ static int validate_tx_req_id(struct ena_ring *tx_ring, u16 req_id) } if (tx_info) - PMD_DRV_LOG(ERR, "tx_info doesn't have valid mbuf\n"); + PMD_TX_LOG(ERR, "tx_info doesn't have valid mbuf\n"); else - PMD_DRV_LOG(ERR, "Invalid req_id: %hu\n", req_id); + PMD_TX_LOG(ERR, "Invalid req_id: %hu\n", req_id); /* Trigger device reset */ ++tx_ring->tx_stats.bad_req_id; @@ -1461,7 +1461,7 @@ static int ena_add_single_rx_desc(struct ena_com_io_sq *io_sq, /* pass resource to device */ rc = ena_com_add_single_rx_desc(io_sq, &ebuf, id); if (unlikely(rc != 0)) - PMD_DRV_LOG(WARNING, "Failed adding Rx desc\n"); + PMD_RX_LOG(WARNING, "Failed adding Rx desc\n"); return rc; } @@ -1471,16 +1471,21 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count) unsigned int i; int rc; uint16_t next_to_use = rxq->next_to_use; - uint16_t in_use, req_id; + uint16_t req_id; +#ifdef RTE_ETHDEV_DEBUG_RX + uint16_t in_use; +#endif struct rte_mbuf **mbufs = rxq->rx_refill_buffer; if (unlikely(!count)) return 0; +#ifdef RTE_ETHDEV_DEBUG_RX in_use = rxq->ring_size - 1 - ena_com_free_q_entries(rxq->ena_com_io_sq); - ena_assert_msg(((in_use + count) < rxq->ring_size), - "bad ring state\n"); + if (unlikely((in_use + count) >= rxq->ring_size)) + PMD_RX_LOG(ERR, "Bad Rx ring state\n"); +#endif /* get resources for incoming packets */ rc = rte_pktmbuf_alloc_bulk(rxq->mb_pool, mbufs, count); @@ -1510,7 +1515,7 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count) } if (unlikely(i < count)) { - PMD_DRV_LOG(WARNING, + PMD_RX_LOG(WARNING, "Refilled Rx 
queue[%d] with only %d/%d buffers\n", rxq->id, i, count); rte_pktmbuf_free_bulk(&mbufs[i], count - i); @@ -2218,12 +2223,14 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, struct ena_com_rx_ctx ena_rx_ctx; int i, rc = 0; +#ifdef RTE_ETHDEV_DEBUG_RX /* Check adapter state */ if (unlikely(rx_ring->adapter->state != ENA_ADAPTER_STATE_RUNNING)) { - PMD_DRV_LOG(ALERT, + PMD_RX_LOG(ALERT, "Trying to receive pkts while device is NOT running\n"); return 0; } +#endif descs_in_use = rx_ring->ring_size - ena_com_free_q_entries(rx_ring->ena_com_io_sq) - 1; @@ -2239,7 +2246,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, rx_ring->ena_com_io_sq, &ena_rx_ctx); if (unlikely(rc)) { - PMD_DRV_LOG(ERR, + PMD_RX_LOG(ERR, "Failed to get the packet from the device, rc: %d\n", rc); if (rc == ENA_COM_NO_SPACE) { @@ -2416,13 +2423,13 @@ static int ena_check_space_and_linearize_mbuf(struct ena_ring *tx_ring, * be needed so we reduce the segments number from num_segments to 1 */ if (!ena_com_sq_have_enough_space(tx_ring->ena_com_io_sq, 3)) { - PMD_DRV_LOG(DEBUG, "Not enough space in the Tx queue\n"); + PMD_TX_LOG(DEBUG, "Not enough space in the Tx queue\n"); return ENA_COM_NO_MEM; } ++tx_ring->tx_stats.linearize; rc = rte_pktmbuf_linearize(mbuf); if (unlikely(rc)) { - PMD_DRV_LOG(WARNING, "Mbuf linearize failed\n"); + PMD_TX_LOG(WARNING, "Mbuf linearize failed\n"); rte_atomic64_inc(&tx_ring->adapter->drv_stats->ierrors); ++tx_ring->tx_stats.linearize_failed; return rc; @@ -2436,7 +2443,7 @@ static int ena_check_space_and_linearize_mbuf(struct ena_ring *tx_ring, */ if (!ena_com_sq_have_enough_space(tx_ring->ena_com_io_sq, num_segments + 2)) { - PMD_DRV_LOG(DEBUG, "Not enough space in the Tx queue\n"); + PMD_TX_LOG(DEBUG, "Not enough space in the Tx queue\n"); return ENA_COM_NO_MEM; } @@ -2551,7 +2558,7 @@ static int ena_xmit_mbuf(struct ena_ring *tx_ring, struct rte_mbuf *mbuf) if 
(unlikely(ena_com_is_doorbell_needed(tx_ring->ena_com_io_sq, &ena_tx_ctx))) { - PMD_DRV_LOG(DEBUG, + PMD_TX_LOG(DEBUG, "LLQ Tx max burst size of queue %d achieved, writing doorbell to send burst\n", tx_ring->id); ena_com_write_sq_doorbell(tx_ring->ena_com_io_sq); @@ -2628,12 +2635,14 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, struct ena_ring *tx_ring = (struct ena_ring *)(tx_queue); uint16_t sent_idx = 0; +#ifdef RTE_ETHDEV_DEBUG_TX /* Check adapter state */ if (unlikely(tx_ring->adapter->state != ENA_ADAPTER_STATE_RUNNING)) { - PMD_DRV_LOG(ALERT, + PMD_TX_LOG(ALERT, "Trying to xmit pkts while device is NOT running\n"); return 0; } +#endif for (sent_idx = 0; sent_idx < nb_pkts; sent_idx++) { if (ena_xmit_mbuf(tx_ring, tx_pkts[sent_idx])) @@ -2960,18 +2969,13 @@ RTE_PMD_REGISTER_KMOD_DEP(net_ena, "* igb_uio | uio_pci_generic | vfio-pci"); RTE_PMD_REGISTER_PARAM_STRING(net_ena, ENA_DEVARG_LARGE_LLQ_HDR "=<0|1>"); RTE_LOG_REGISTER_SUFFIX(ena_logtype_init, init, NOTICE); RTE_LOG_REGISTER_SUFFIX(ena_logtype_driver, driver, NOTICE); -#ifdef RTE_LIBRTE_ENA_DEBUG_RX -RTE_LOG_REGISTER_SUFFIX(ena_logtype_rx, rx, NOTICE); -#endif -#ifdef RTE_LIBRTE_ENA_DEBUG_TX -RTE_LOG_REGISTER_SUFFIX(ena_logtype_tx, tx, NOTICE); -#endif -#ifdef RTE_LIBRTE_ENA_DEBUG_TX_FREE -RTE_LOG_REGISTER_SUFFIX(ena_logtype_tx_free, tx_free, NOTICE); +#ifdef RTE_ETHDEV_DEBUG_RX +RTE_LOG_REGISTER_SUFFIX(ena_logtype_rx, rx, DEBUG); #endif -#ifdef RTE_LIBRTE_ENA_COM_DEBUG -RTE_LOG_REGISTER_SUFFIX(ena_logtype_com, com, NOTICE); +#ifdef RTE_ETHDEV_DEBUG_TX +RTE_LOG_REGISTER_SUFFIX(ena_logtype_tx, tx, DEBUG); #endif +RTE_LOG_REGISTER_SUFFIX(ena_logtype_com, com, WARNING); /****************************************************************************** ******************************** AENQ Handlers ******************************* diff --git a/drivers/net/ena/ena_logs.h b/drivers/net/ena/ena_logs.h index 040bebfb98..43f16458ea 100644 --- a/drivers/net/ena/ena_logs.h +++ 
b/drivers/net/ena/ena_logs.h @@ -11,7 +11,7 @@ extern int ena_logtype_init; rte_log(RTE_LOG_ ## level, ena_logtype_init, \ "%s(): " fmt, __func__, ## args) -#ifdef RTE_LIBRTE_ENA_DEBUG_RX +#ifdef RTE_ETHDEV_DEBUG_RX extern int ena_logtype_rx; #define PMD_RX_LOG(level, fmt, args...) \ rte_log(RTE_LOG_ ## level, ena_logtype_rx, \ @@ -20,7 +20,7 @@ extern int ena_logtype_rx; #define PMD_RX_LOG(level, fmt, args...) do { } while (0) #endif -#ifdef RTE_LIBRTE_ENA_DEBUG_TX +#ifdef RTE_ETHDEV_DEBUG_TX extern int ena_logtype_tx; #define PMD_TX_LOG(level, fmt, args...) \ rte_log(RTE_LOG_ ## level, ena_logtype_tx, \ @@ -29,15 +29,6 @@ extern int ena_logtype_tx; #define PMD_TX_LOG(level, fmt, args...) do { } while (0) #endif -#ifdef RTE_LIBRTE_ENA_DEBUG_TX_FREE -extern int ena_logtype_tx_free; -#define PMD_TX_FREE_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, ena_logtype_tx_free, \ - "%s(): " fmt, __func__, ## args) -#else -#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0) -#endif - extern int ena_logtype_driver; #define PMD_DRV_LOG(level, fmt, args...) 
\ rte_log(RTE_LOG_ ## level, ena_logtype_driver, \

From patchwork Fri Jul 23 10:24:51 2021
From: Michal Krawczyk
To: dev@dpdk.org
Cc: ndagan@amazon.com, shaibran@amazon.com, upstream@semihalf.com, Michal Krawczyk, stable@dpdk.org, Shay Agroskin
Date: Fri, 23 Jul 2021 12:24:51 +0200
Message-Id: <20210723102454.12206-4-mk@semihalf.com>
In-Reply-To: <20210723102454.12206-1-mk@semihalf.com>
References: <20210714104320.4096-1-mk@semihalf.com> <20210723102454.12206-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v4 3/6] net/ena: trigger reset when Tx prepare fails

If the prepare function fails, the descriptors are in an invalid state. This condition now triggers a reset, which should be further handled by the application. To notify the application about the prepare function failure, an error log was added. In general, the function should never fail under normal conditions, as the Tx function checks for available space in the Tx ring before the preparation even starts.
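The recovery pattern this patch describes (count the failure, record a reset reason, and raise a flag for the watchdog/application to act on) can be sketched in plain C. All names below (adapter_state, handle_prepare_failure, RESET_DRIVER_INVALID_STATE) are simplified placeholders for illustration, not the actual ENA driver types:

```c
#include <stdbool.h>

/* Illustrative reset-reason code; the driver uses the
 * ENA_REGS_RESET_DRIVER_INVALID_STATE register value. */
enum { RESET_DRIVER_INVALID_STATE = 1 };

/* Simplified placeholder for the driver's adapter/ring state; the real
 * driver keeps these fields on struct ena_adapter and struct ena_ring. */
struct adapter_state {
	unsigned int prepare_ctx_err; /* statistics counter */
	int reset_reason;             /* why the reset was requested */
	bool trigger_reset;           /* polled later by the watchdog */
};

/* On a prepare failure: bump the stat, record the reason, request the
 * reset, and propagate the original error code to the caller. */
int handle_prepare_failure(struct adapter_state *st, int rc)
{
	if (rc != 0) {
		st->prepare_ctx_err++;
		st->reset_reason = RESET_DRIVER_INVALID_STATE;
		st->trigger_reset = true;
	}
	return rc;
}
```

The application is then expected to notice the reset request (in DPDK, via the RTE_ETH_EVENT_INTR_RESET callback raised by the driver's watchdog) and perform the device reset.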
Fixes: 2081d5e2e92d ("net/ena: add reset routine")
Cc: stable@dpdk.org
Signed-off-by: Michal Krawczyk Reviewed-by: Shai Brandes Reviewed-by: Shay Agroskin
---
 drivers/net/ena/ena_ethdev.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index 2335436b6c..67cd91046a 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -2570,7 +2570,11 @@ static int ena_xmit_mbuf(struct ena_ring *tx_ring, struct rte_mbuf *mbuf) rc = ena_com_prepare_tx(tx_ring->ena_com_io_sq, &ena_tx_ctx, &nb_hw_desc); if (unlikely(rc)) { + PMD_DRV_LOG(ERR, "Failed to prepare Tx buffers, rc: %d\n", rc); ++tx_ring->tx_stats.prepare_ctx_err; + tx_ring->adapter->reset_reason = + ENA_REGS_RESET_DRIVER_INVALID_STATE; + tx_ring->adapter->trigger_reset = true; return rc; }

From patchwork Fri Jul 23 10:24:52 2021
From: Michal Krawczyk
To: dev@dpdk.org
Cc: ndagan@amazon.com, shaibran@amazon.com, upstream@semihalf.com, Michal Krawczyk, Artur Rojek, Igor Chauskin, Shay Agroskin
Date: Fri, 23 Jul 2021 12:24:52 +0200
Message-Id: <20210723102454.12206-5-mk@semihalf.com>
In-Reply-To: <20210723102454.12206-1-mk@semihalf.com>
References: <20210714104320.4096-1-mk@semihalf.com> <20210723102454.12206-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v4 4/6] net/ena: add support for Rx interrupts

In order to support asynchronous Rx in the applications, the driver has to configure the event file descriptors and the HW. This patch configures the appropriate data structures for the rte_ethdev layer, adds the .rx_queue_intr_enable and .rx_queue_intr_disable API handlers, and configures the IO queues to work in interrupt mode if the application requested it.
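A hedged sketch of the per-queue vector assignment this patch performs: Rx queue i is mapped to event-fd-backed vector offset + i, while Tx queues (and Rx queues when interrupts were not requested) keep -1, meaning no MSI-X vector, leaving vector 0 free for the admin/AENQ interrupt. The macro and helper below are illustrative stand-ins, not driver code; in the driver the offset is DPDK's RTE_INTR_VEC_RXTX_OFFSET and the assignment lives in ena_setup_rx_intr()/ena_create_io_queue():

```c
#include <stdbool.h>

/* Illustrative stand-in for RTE_INTR_VEC_RXTX_OFFSET: vector 0 is kept
 * for the admin/AENQ interrupt, queue vectors start right after it. */
#define VEC_RXTX_OFFSET 1

/* Mirrors the ctx.msix_vector choice in ena_create_io_queue(): only Rx
 * queues get a vector, and only when Rx interrupts were requested. */
int queue_msix_vector(bool rx_intr_enabled, bool is_rx_queue,
		      unsigned int queue_id)
{
	if (!rx_intr_enabled || !is_rx_queue)
		return -1; /* "interrupts not used" */
	return VEC_RXTX_OFFSET + (int)queue_id;
}
```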
Signed-off-by: Michal Krawczyk Reviewed-by: Artur Rojek Reviewed-by: Igor Chauskin Reviewed-by: Shai Brandes Reviewed-by: Shay Agroskin --- doc/guides/nics/ena.rst | 12 ++ doc/guides/nics/features/ena.ini | 1 + doc/guides/rel_notes/release_21_08.rst | 7 ++ drivers/net/ena/ena_ethdev.c | 146 +++++++++++++++++++++++-- 4 files changed, 154 insertions(+), 12 deletions(-) diff --git a/doc/guides/nics/ena.rst b/doc/guides/nics/ena.rst index 0f1f63f722..63951098ea 100644 --- a/doc/guides/nics/ena.rst +++ b/doc/guides/nics/ena.rst @@ -141,6 +141,7 @@ Supported features * LSC event notification * Watchdog (requires handling of timers in the application) * Device reset upon failure +* Rx interrupts Prerequisites ------------- @@ -180,6 +181,17 @@ At this point the system should be ready to run DPDK applications. Once the application runs to completion, the ENA can be detached from attached module if necessary. +**Rx interrupts support** + +ENA PMD supports Rx interrupts, which can be used to wake up lcores waiting for +input. Please note that it won't work with ``igb_uio``, so to use this feature, +the ``vfio-pci`` should be used. + +ENA handles admin interrupts and AENQ notifications on separate interrupt. +There is possibility that there won't be enough event file descriptors to +handle both admin and Rx interrupts. In that situation the Rx interrupt request +will fail. 
+ **Note about usage on \*.metal instances** On AWS, the metal instances are supporting IOMMU for both arm64 and x86_64 diff --git a/doc/guides/nics/features/ena.ini b/doc/guides/nics/features/ena.ini index 2595ff53f9..3976bbbda6 100644 --- a/doc/guides/nics/features/ena.ini +++ b/doc/guides/nics/features/ena.ini @@ -6,6 +6,7 @@ [Features] Link status = Y Link status event = Y +Rx interrupt = Y MTU update = Y Jumbo frame = Y Scattered Rx = Y diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst index 247e672f02..616e2cdea9 100644 --- a/doc/guides/rel_notes/release_21_08.rst +++ b/doc/guides/rel_notes/release_21_08.rst @@ -95,6 +95,13 @@ New Features Added a new PMD driver for Wangxun 1 Gigabit Ethernet NICs. See the :doc:`../nics/ngbe` for more details. +* **Updated Amazon ENA PMD.** + + The new driver version (v2.4.0) introduced bug fixes and improvements, + including: + + * Added Rx interrupt support. + * **Added support for Marvell CNXK crypto driver.** * Added cnxk crypto PMD which provides support for an integrated diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index 67cd91046a..72f9887797 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -213,11 +213,11 @@ static void ena_rx_queue_release_bufs(struct ena_ring *ring); static void ena_tx_queue_release_bufs(struct ena_ring *ring); static int ena_link_update(struct rte_eth_dev *dev, int wait_to_complete); -static int ena_create_io_queue(struct ena_ring *ring); +static int ena_create_io_queue(struct rte_eth_dev *dev, struct ena_ring *ring); static void ena_queue_stop(struct ena_ring *ring); static void ena_queue_stop_all(struct rte_eth_dev *dev, enum ena_ring_type ring_type); -static int ena_queue_start(struct ena_ring *ring); +static int ena_queue_start(struct rte_eth_dev *dev, struct ena_ring *ring); static int ena_queue_start_all(struct rte_eth_dev *dev, enum ena_ring_type ring_type); static void 
ena_stats_restart(struct rte_eth_dev *dev); @@ -249,6 +249,11 @@ static int ena_process_bool_devarg(const char *key, static int ena_parse_devargs(struct ena_adapter *adapter, struct rte_devargs *devargs); static int ena_copy_eni_stats(struct ena_adapter *adapter); +static int ena_setup_rx_intr(struct rte_eth_dev *dev); +static int ena_rx_queue_intr_enable(struct rte_eth_dev *dev, + uint16_t queue_id); +static int ena_rx_queue_intr_disable(struct rte_eth_dev *dev, + uint16_t queue_id); static const struct eth_dev_ops ena_dev_ops = { .dev_configure = ena_dev_configure, @@ -269,6 +274,8 @@ static const struct eth_dev_ops ena_dev_ops = { .dev_reset = ena_dev_reset, .reta_update = ena_rss_reta_update, .reta_query = ena_rss_reta_query, + .rx_queue_intr_enable = ena_rx_queue_intr_enable, + .rx_queue_intr_disable = ena_rx_queue_intr_disable, }; void ena_rss_key_fill(void *key, size_t size) @@ -829,7 +836,7 @@ static int ena_queue_start_all(struct rte_eth_dev *dev, "Inconsistent state of Tx queues\n"); } - rc = ena_queue_start(&queues[i]); + rc = ena_queue_start(dev, &queues[i]); if (rc) { PMD_INIT_LOG(ERR, @@ -1074,6 +1081,10 @@ static int ena_start(struct rte_eth_dev *dev) if (rc) return rc; + rc = ena_setup_rx_intr(dev); + if (rc) + return rc; + rc = ena_queue_start_all(dev, ENA_RING_TYPE_RX); if (rc) return rc; @@ -1114,6 +1125,8 @@ static int ena_stop(struct rte_eth_dev *dev) { struct ena_adapter *adapter = dev->data->dev_private; struct ena_com_dev *ena_dev = &adapter->ena_dev; + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; int rc; /* Cannot free memory in secondary process */ @@ -1132,6 +1145,16 @@ static int ena_stop(struct rte_eth_dev *dev) PMD_DRV_LOG(ERR, "Device reset failed, rc: %d\n", rc); } + rte_intr_disable(intr_handle); + + rte_intr_efd_disable(intr_handle); + if (intr_handle->intr_vec != NULL) { + rte_free(intr_handle->intr_vec); + intr_handle->intr_vec = NULL; + } + + 
rte_intr_enable(intr_handle); + ++adapter->dev_stats.dev_stop; adapter->state = ENA_ADAPTER_STATE_STOPPED; dev->data->dev_started = 0; @@ -1139,10 +1162,12 @@ static int ena_stop(struct rte_eth_dev *dev) return 0; } -static int ena_create_io_queue(struct ena_ring *ring) +static int ena_create_io_queue(struct rte_eth_dev *dev, struct ena_ring *ring) { - struct ena_adapter *adapter; - struct ena_com_dev *ena_dev; + struct ena_adapter *adapter = ring->adapter; + struct ena_com_dev *ena_dev = &adapter->ena_dev; + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; struct ena_com_create_io_ctx ctx = /* policy set to _HOST just to satisfy icc compiler */ { ENA_ADMIN_PLACEMENT_POLICY_HOST, @@ -1151,9 +1176,7 @@ static int ena_create_io_queue(struct ena_ring *ring) unsigned int i; int rc; - adapter = ring->adapter; - ena_dev = &adapter->ena_dev; - + ctx.msix_vector = -1; if (ring->type == ENA_RING_TYPE_TX) { ena_qid = ENA_IO_TXQ_IDX(ring->id); ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_TX; @@ -1163,12 +1186,13 @@ static int ena_create_io_queue(struct ena_ring *ring) } else { ena_qid = ENA_IO_RXQ_IDX(ring->id); ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_RX; + if (rte_intr_dp_is_en(intr_handle)) + ctx.msix_vector = intr_handle->intr_vec[ring->id]; for (i = 0; i < ring->ring_size; i++) ring->empty_rx_reqs[i] = i; } ctx.queue_size = ring->ring_size; ctx.qid = ena_qid; - ctx.msix_vector = -1; /* interrupts not used */ ctx.numa_node = ring->numa_socket_id; rc = ena_com_create_io_queue(ena_dev, &ctx); @@ -1193,6 +1217,10 @@ static int ena_create_io_queue(struct ena_ring *ring) if (ring->type == ENA_RING_TYPE_TX) ena_com_update_numa_node(ring->ena_com_io_cq, ctx.numa_node); + /* Start with Rx interrupts being masked. 
*/ + if (ring->type == ENA_RING_TYPE_RX && rte_intr_dp_is_en(intr_handle)) + ena_rx_queue_intr_disable(dev, ring->id); + return 0; } @@ -1229,14 +1257,14 @@ static void ena_queue_stop_all(struct rte_eth_dev *dev, ena_queue_stop(&queues[i]); } -static int ena_queue_start(struct ena_ring *ring) +static int ena_queue_start(struct rte_eth_dev *dev, struct ena_ring *ring) { int rc, bufs_num; ena_assert_msg(ring->configured == 1, "Trying to start unconfigured queue\n"); - rc = ena_create_io_queue(ring); + rc = ena_create_io_queue(dev, ring); if (rc) { PMD_INIT_LOG(ERR, "Failed to create IO queue\n"); return rc; @@ -2944,6 +2972,100 @@ static int ena_parse_devargs(struct ena_adapter *adapter, return rc; } +static int ena_setup_rx_intr(struct rte_eth_dev *dev) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + int rc; + uint16_t vectors_nb, i; + bool rx_intr_requested = dev->data->dev_conf.intr_conf.rxq; + + if (!rx_intr_requested) + return 0; + + if (!rte_intr_cap_multiple(intr_handle)) { + PMD_DRV_LOG(ERR, + "Rx interrupt requested, but it isn't supported by the PCI driver\n"); + return -ENOTSUP; + } + + /* Disable interrupt mapping before the configuration starts. */ + rte_intr_disable(intr_handle); + + /* Verify if there are enough vectors available. 
*/ + vectors_nb = dev->data->nb_rx_queues; + if (vectors_nb > RTE_MAX_RXTX_INTR_VEC_ID) { + PMD_DRV_LOG(ERR, + "Too many Rx interrupts requested, maximum number: %d\n", + RTE_MAX_RXTX_INTR_VEC_ID); + rc = -ENOTSUP; + goto enable_intr; + } + + intr_handle->intr_vec = rte_zmalloc("intr_vec", + dev->data->nb_rx_queues * sizeof(*intr_handle->intr_vec), 0); + if (intr_handle->intr_vec == NULL) { + PMD_DRV_LOG(ERR, + "Failed to allocate interrupt vector for %d queues\n", + dev->data->nb_rx_queues); + rc = -ENOMEM; + goto enable_intr; + } + + rc = rte_intr_efd_enable(intr_handle, vectors_nb); + if (rc != 0) + goto free_intr_vec; + + if (!rte_intr_allow_others(intr_handle)) { + PMD_DRV_LOG(ERR, + "Not enough interrupts available to use both ENA Admin and Rx interrupts\n"); + goto disable_intr_efd; + } + + for (i = 0; i < vectors_nb; ++i) + intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + i; + + rte_intr_enable(intr_handle); + return 0; + +disable_intr_efd: + rte_intr_efd_disable(intr_handle); +free_intr_vec: + rte_free(intr_handle->intr_vec); + intr_handle->intr_vec = NULL; +enable_intr: + rte_intr_enable(intr_handle); + return rc; +} + +static void ena_rx_queue_intr_set(struct rte_eth_dev *dev, + uint16_t queue_id, + bool unmask) +{ + struct ena_adapter *adapter = dev->data->dev_private; + struct ena_ring *rxq = &adapter->rx_ring[queue_id]; + struct ena_eth_io_intr_reg intr_reg; + + ena_com_update_intr_reg(&intr_reg, 0, 0, unmask); + ena_com_unmask_intr(rxq->ena_com_io_cq, &intr_reg); +} + +static int ena_rx_queue_intr_enable(struct rte_eth_dev *dev, + uint16_t queue_id) +{ + ena_rx_queue_intr_set(dev, queue_id, true); + + return 0; +} + +static int ena_rx_queue_intr_disable(struct rte_eth_dev *dev, + uint16_t queue_id) +{ + ena_rx_queue_intr_set(dev, queue_id, false); + + return 0; +} + /********************************************************************* * PMD configuration *********************************************************************/ From patchwork Fri 
Jul 23 10:24:53 2021
From: Michal Krawczyk
To: dev@dpdk.org
Cc: ndagan@amazon.com, shaibran@amazon.com, upstream@semihalf.com, Michal Krawczyk, Shay Agroskin, Amit Bernstein
Date: Fri, 23 Jul 2021 12:24:53 +0200
Message-Id: <20210723102454.12206-6-mk@semihalf.com>
In-Reply-To: <20210723102454.12206-1-mk@semihalf.com>
References: <20210714104320.4096-1-mk@semihalf.com> <20210723102454.12206-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v4 5/6] net/ena: rework RSS configuration

Allow the user to specify their own hash key and hash ctrl if the device supports that. The HW interprets the key in reverse byte order, so the PMD reorders the key before passing it to the ena_com layer. The default key is generated randomly each time the device is initialized. Moreover, make minor adjustments to the reta size setting in terms of returned error values. The RSS code was moved to the ena_rss.c file to improve readability.
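The key reordering described above can be illustrated with a few lines of C. This is a sketch of the stated behavior (the device reads the key bytes back-to-front), not the actual ena_rss.c implementation; the helper name is hypothetical:

```c
#include <stddef.h>
#include <stdint.h>

#define HASH_KEY_SIZE 40 /* same length as the driver's ENA_HASH_KEY_SIZE */

/* Copy the application-supplied RSS key into the buffer handed down to
 * the device, flipping the byte order as the HW expects. */
void rss_key_reorder(uint8_t *dst, const uint8_t *src, size_t len)
{
	size_t i;

	for (i = 0; i < len; i++)
		dst[i] = src[len - 1 - i];
}
```

Applying the helper twice yields the original key, so the same routine can presumably be used in both directions: when programming a key and when presenting a key read back from the device to the application.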
Signed-off-by: Michal Krawczyk Reviewed-by: Shai Brandes Reviewed-by: Shay Agroskin Reviewed-by: Amit Bernstein --- doc/guides/nics/features/ena.ini | 1 + doc/guides/rel_notes/release_21_08.rst | 1 + drivers/net/ena/ena_ethdev.c | 230 ++-------- drivers/net/ena/ena_ethdev.h | 34 ++ drivers/net/ena/ena_rss.c | 591 +++++++++++++++++++++++++ drivers/net/ena/meson.build | 1 + 6 files changed, 663 insertions(+), 195 deletions(-) create mode 100644 drivers/net/ena/ena_rss.c diff --git a/doc/guides/nics/features/ena.ini b/doc/guides/nics/features/ena.ini index 3976bbbda6..b971243ff0 100644 --- a/doc/guides/nics/features/ena.ini +++ b/doc/guides/nics/features/ena.ini @@ -12,6 +12,7 @@ Jumbo frame = Y Scattered Rx = Y TSO = Y RSS hash = Y +RSS key update = Y RSS reta update = Y L3 checksum offload = Y L4 checksum offload = Y diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst index 616e2cdea9..2586360439 100644 --- a/doc/guides/rel_notes/release_21_08.rst +++ b/doc/guides/rel_notes/release_21_08.rst @@ -101,6 +101,7 @@ New Features including: * Added Rx interrupt support. + * RSS hash function key reconfiguration support. 
* **Added support for Marvell CNXK crypto driver.** diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index 72f9887797..ee059fc165 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -4,12 +4,6 @@ */ #include -#include -#include -#include -#include -#include -#include #include #include #include @@ -30,21 +24,12 @@ #define DRV_MODULE_VER_MINOR 3 #define DRV_MODULE_VER_SUBMINOR 0 -#define ENA_IO_TXQ_IDX(q) (2 * (q)) -#define ENA_IO_RXQ_IDX(q) (2 * (q) + 1) -/*reverse version of ENA_IO_RXQ_IDX*/ -#define ENA_IO_RXQ_IDX_REV(q) ((q - 1) / 2) - #define __MERGE_64B_H_L(h, l) (((uint64_t)h << 32) | l) -#define TEST_BIT(val, bit_shift) (val & (1UL << bit_shift)) #define GET_L4_HDR_LEN(mbuf) \ ((rte_pktmbuf_mtod_offset(mbuf, struct rte_tcp_hdr *, \ mbuf->l3_len + mbuf->l2_len)->data_off) >> 4) -#define ENA_RX_RSS_TABLE_LOG_SIZE 7 -#define ENA_RX_RSS_TABLE_SIZE (1 << ENA_RX_RSS_TABLE_LOG_SIZE) -#define ENA_HASH_KEY_SIZE 40 #define ETH_GSTRING_LEN 32 #define ARRAY_SIZE(x) RTE_DIM(x) @@ -223,12 +208,6 @@ static int ena_queue_start_all(struct rte_eth_dev *dev, static void ena_stats_restart(struct rte_eth_dev *dev); static int ena_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info); -static int ena_rss_reta_update(struct rte_eth_dev *dev, - struct rte_eth_rss_reta_entry64 *reta_conf, - uint16_t reta_size); -static int ena_rss_reta_query(struct rte_eth_dev *dev, - struct rte_eth_rss_reta_entry64 *reta_conf, - uint16_t reta_size); static void ena_interrupt_handler_rte(void *cb_arg); static void ena_timer_wd_callback(struct rte_timer *timer, void *arg); static void ena_destroy_device(struct rte_eth_dev *eth_dev); @@ -276,27 +255,13 @@ static const struct eth_dev_ops ena_dev_ops = { .reta_query = ena_rss_reta_query, .rx_queue_intr_enable = ena_rx_queue_intr_enable, .rx_queue_intr_disable = ena_rx_queue_intr_disable, + .rss_hash_update = ena_rss_hash_update, + .rss_hash_conf_get = ena_rss_hash_conf_get, }; -void 
ena_rss_key_fill(void *key, size_t size) -{ - static bool key_generated; - static uint8_t default_key[ENA_HASH_KEY_SIZE]; - size_t i; - - RTE_ASSERT(size <= ENA_HASH_KEY_SIZE); - - if (!key_generated) { - for (i = 0; i < ENA_HASH_KEY_SIZE; ++i) - default_key[i] = rte_rand() & 0xff; - key_generated = true; - } - - rte_memcpy(key, default_key, size); -} - static inline void ena_rx_mbuf_prepare(struct rte_mbuf *mbuf, - struct ena_com_rx_ctx *ena_rx_ctx) + struct ena_com_rx_ctx *ena_rx_ctx, + bool fill_hash) { uint64_t ol_flags = 0; uint32_t packet_type = 0; @@ -324,7 +289,8 @@ static inline void ena_rx_mbuf_prepare(struct rte_mbuf *mbuf, else ol_flags |= PKT_RX_L4_CKSUM_GOOD; - if (likely((packet_type & ENA_PTYPE_HAS_HASH) && !ena_rx_ctx->frag)) { + if (fill_hash && + likely((packet_type & ENA_PTYPE_HAS_HASH) && !ena_rx_ctx->frag)) { ol_flags |= PKT_RX_RSS_HASH; mbuf->hash.rss = ena_rx_ctx->hash; } @@ -446,7 +412,8 @@ static void ena_config_host_info(struct ena_com_dev *ena_dev) host_info->num_cpus = rte_lcore_count(); host_info->driver_supported_features = - ENA_ADMIN_HOST_INFO_RX_OFFSET_MASK; + ENA_ADMIN_HOST_INFO_RX_OFFSET_MASK | + ENA_ADMIN_HOST_INFO_RSS_CONFIGURABLE_FUNCTION_KEY_MASK; rc = ena_com_set_host_attributes(ena_dev); if (rc) { @@ -556,151 +523,6 @@ ena_dev_reset(struct rte_eth_dev *dev) return rc; } -static int ena_rss_reta_update(struct rte_eth_dev *dev, - struct rte_eth_rss_reta_entry64 *reta_conf, - uint16_t reta_size) -{ - struct ena_adapter *adapter = dev->data->dev_private; - struct ena_com_dev *ena_dev = &adapter->ena_dev; - int rc, i; - u16 entry_value; - int conf_idx; - int idx; - - if ((reta_size == 0) || (reta_conf == NULL)) - return -EINVAL; - - if (reta_size > ENA_RX_RSS_TABLE_SIZE) { - PMD_DRV_LOG(WARNING, - "Requested indirection table size (%d) is bigger than supported: %d\n", - reta_size, ENA_RX_RSS_TABLE_SIZE); - return -EINVAL; - } - - for (i = 0 ; i < reta_size ; i++) { - /* each reta_conf is for 64 entries. 
- * to support 128 we use 2 conf of 64 - */ - conf_idx = i / RTE_RETA_GROUP_SIZE; - idx = i % RTE_RETA_GROUP_SIZE; - if (TEST_BIT(reta_conf[conf_idx].mask, idx)) { - entry_value = - ENA_IO_RXQ_IDX(reta_conf[conf_idx].reta[idx]); - - rc = ena_com_indirect_table_fill_entry(ena_dev, - i, - entry_value); - if (unlikely(rc && rc != ENA_COM_UNSUPPORTED)) { - PMD_DRV_LOG(ERR, - "Cannot fill indirect table\n"); - return rc; - } - } - } - - rte_spinlock_lock(&adapter->admin_lock); - rc = ena_com_indirect_table_set(ena_dev); - rte_spinlock_unlock(&adapter->admin_lock); - if (unlikely(rc && rc != ENA_COM_UNSUPPORTED)) { - PMD_DRV_LOG(ERR, "Cannot flush the indirect table\n"); - return rc; - } - - PMD_DRV_LOG(DEBUG, "RSS configured %d entries for port %d\n", - reta_size, dev->data->port_id); - - return 0; -} - -/* Query redirection table. */ -static int ena_rss_reta_query(struct rte_eth_dev *dev, - struct rte_eth_rss_reta_entry64 *reta_conf, - uint16_t reta_size) -{ - struct ena_adapter *adapter = dev->data->dev_private; - struct ena_com_dev *ena_dev = &adapter->ena_dev; - int rc; - int i; - u32 indirect_table[ENA_RX_RSS_TABLE_SIZE] = {0}; - int reta_conf_idx; - int reta_idx; - - if (reta_size == 0 || reta_conf == NULL || - (reta_size > RTE_RETA_GROUP_SIZE && ((reta_conf + 1) == NULL))) - return -EINVAL; - - rte_spinlock_lock(&adapter->admin_lock); - rc = ena_com_indirect_table_get(ena_dev, indirect_table); - rte_spinlock_unlock(&adapter->admin_lock); - if (unlikely(rc && rc != ENA_COM_UNSUPPORTED)) { - PMD_DRV_LOG(ERR, "Cannot get indirection table\n"); - return -ENOTSUP; - } - - for (i = 0 ; i < reta_size ; i++) { - reta_conf_idx = i / RTE_RETA_GROUP_SIZE; - reta_idx = i % RTE_RETA_GROUP_SIZE; - if (TEST_BIT(reta_conf[reta_conf_idx].mask, reta_idx)) - reta_conf[reta_conf_idx].reta[reta_idx] = - ENA_IO_RXQ_IDX_REV(indirect_table[i]); - } - - return 0; -} - -static int ena_rss_init_default(struct ena_adapter *adapter) -{ - struct ena_com_dev *ena_dev = &adapter->ena_dev; - 
uint16_t nb_rx_queues = adapter->edev_data->nb_rx_queues; - int rc, i; - u32 val; - - rc = ena_com_rss_init(ena_dev, ENA_RX_RSS_TABLE_LOG_SIZE); - if (unlikely(rc)) { - PMD_DRV_LOG(ERR, "Cannot init indirection table\n"); - goto err_rss_init; - } - - for (i = 0; i < ENA_RX_RSS_TABLE_SIZE; i++) { - val = i % nb_rx_queues; - rc = ena_com_indirect_table_fill_entry(ena_dev, i, - ENA_IO_RXQ_IDX(val)); - if (unlikely(rc && (rc != ENA_COM_UNSUPPORTED))) { - PMD_DRV_LOG(ERR, "Cannot fill indirection table\n"); - goto err_fill_indir; - } - } - - rc = ena_com_fill_hash_function(ena_dev, ENA_ADMIN_CRC32, NULL, - ENA_HASH_KEY_SIZE, 0xFFFFFFFF); - if (unlikely(rc && (rc != ENA_COM_UNSUPPORTED))) { - PMD_DRV_LOG(INFO, "Cannot fill hash function\n"); - goto err_fill_indir; - } - - rc = ena_com_set_default_hash_ctrl(ena_dev); - if (unlikely(rc && (rc != ENA_COM_UNSUPPORTED))) { - PMD_DRV_LOG(INFO, "Cannot fill hash control\n"); - goto err_fill_indir; - } - - rc = ena_com_indirect_table_set(ena_dev); - if (unlikely(rc && (rc != ENA_COM_UNSUPPORTED))) { - PMD_DRV_LOG(ERR, "Cannot flush indirection table\n"); - goto err_fill_indir; - } - PMD_DRV_LOG(DEBUG, "RSS configured for port %d\n", - adapter->edev_data->port_id); - - return 0; - -err_fill_indir: - ena_com_rss_destroy(ena_dev); -err_rss_init: - - return rc; -} - static void ena_rx_queue_release_all(struct rte_eth_dev *dev) { struct ena_ring **queues = (struct ena_ring **)dev->data->rx_queues; @@ -1093,9 +915,8 @@ static int ena_start(struct rte_eth_dev *dev) if (rc) goto err_start_tx; - if (adapter->edev_data->dev_conf.rxmode.mq_mode & - ETH_MQ_RX_RSS_FLAG && adapter->edev_data->nb_rx_queues > 0) { - rc = ena_rss_init_default(adapter); + if (adapter->edev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) { + rc = ena_rss_configure(adapter); if (rc) goto err_rss_init; } @@ -1385,7 +1206,7 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, uint16_t nb_desc, unsigned int socket_id, - __rte_unused const 
struct rte_eth_rxconf *rx_conf, + const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mp) { struct ena_adapter *adapter = dev->data->dev_private; @@ -1469,6 +1290,8 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, for (i = 0; i < nb_desc; i++) rxq->empty_rx_reqs[i] = i; + rxq->offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads; + /* Store pointer to this queue in upper layer */ rxq->configured = 1; dev->data->rx_queues[queue_idx] = rxq; @@ -1932,6 +1755,9 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) adapter->offloads.rx_csum_supported = (get_feat_ctx.offload.rx_supported & ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV4_CSUM_MASK) != 0; + adapter->offloads.rss_hash_supported = + (get_feat_ctx.offload.rx_supported & + ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_HASH_MASK) != 0; /* Copy MAC address and point DPDK to it */ eth_dev->data->mac_addrs = (struct rte_ether_addr *)adapter->mac_addr; @@ -1939,6 +1765,12 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) get_feat_ctx.dev_attr.mac_addr, (struct rte_ether_addr *)adapter->mac_addr); + rc = ena_com_rss_init(ena_dev, ENA_RX_RSS_TABLE_LOG_SIZE); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(ERR, "Failed to initialize RSS in ENA device\n"); + goto err_delete_debug_area; + } + adapter->drv_stats = rte_zmalloc("adapter stats", sizeof(*adapter->drv_stats), RTE_CACHE_LINE_SIZE); @@ -1946,7 +1778,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) PMD_DRV_LOG(ERR, "Failed to allocate memory for adapter statistics\n"); rc = -ENOMEM; - goto err_delete_debug_area; + goto err_rss_destroy; } rte_spinlock_init(&adapter->admin_lock); @@ -1967,6 +1799,8 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) return 0; +err_rss_destroy: + ena_com_rss_destroy(ena_dev); err_delete_debug_area: ena_com_delete_debug_area(ena_dev); @@ -1991,6 +1825,8 @@ static void ena_destroy_device(struct rte_eth_dev *eth_dev) if (adapter->state != ENA_ADAPTER_STATE_CLOSED) ena_close(eth_dev); + 
ena_com_rss_destroy(ena_dev); + ena_com_delete_debug_area(ena_dev); ena_com_delete_host_info(ena_dev); @@ -2097,13 +1933,14 @@ static int ena_infos_get(struct rte_eth_dev *dev, /* Inform framework about available features */ dev_info->rx_offload_capa = rx_feat; - dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH; + if (adapter->offloads.rss_hash_supported) + dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH; dev_info->rx_queue_offload_capa = rx_feat; dev_info->tx_offload_capa = tx_feat; dev_info->tx_queue_offload_capa = tx_feat; - dev_info->flow_type_rss_offloads = ETH_RSS_IP | ETH_RSS_TCP | - ETH_RSS_UDP; + dev_info->flow_type_rss_offloads = ENA_ALL_RSS_HF; + dev_info->hash_key_size = ENA_HASH_KEY_SIZE; dev_info->min_rx_bufsize = ENA_MIN_FRAME_LEN; dev_info->max_rx_pktlen = adapter->max_mtu; @@ -2250,6 +2087,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t completed; struct ena_com_rx_ctx ena_rx_ctx; int i, rc = 0; + bool fill_hash; #ifdef RTE_ETHDEV_DEBUG_RX /* Check adapter state */ @@ -2260,6 +2098,8 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, } #endif + fill_hash = rx_ring->offloads & DEV_RX_OFFLOAD_RSS_HASH; + descs_in_use = rx_ring->ring_size - ena_com_free_q_entries(rx_ring->ena_com_io_sq) - 1; nb_pkts = RTE_MIN(descs_in_use, nb_pkts); @@ -2306,7 +2146,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, } /* fill mbuf attributes if any */ - ena_rx_mbuf_prepare(mbuf, &ena_rx_ctx); + ena_rx_mbuf_prepare(mbuf, &ena_rx_ctx, fill_hash); if (unlikely(mbuf->ol_flags & (PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD))) { diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h index 78718b759b..06ac8b06b5 100644 --- a/drivers/net/ena/ena_ethdev.h +++ b/drivers/net/ena/ena_ethdev.h @@ -6,10 +6,16 @@ #ifndef _ENA_ETHDEV_H_ #define _ENA_ETHDEV_H_ +#include +#include +#include +#include #include #include #include #include +#include +#include 
#include "ena_com.h" @@ -43,6 +49,21 @@ #define ENA_IDX_NEXT_MASKED(idx, mask) (((idx) + 1) & (mask)) #define ENA_IDX_ADD_MASKED(idx, n, mask) (((idx) + (n)) & (mask)) +#define ENA_RX_RSS_TABLE_LOG_SIZE 7 +#define ENA_RX_RSS_TABLE_SIZE (1 << ENA_RX_RSS_TABLE_LOG_SIZE) + +#define ENA_HASH_KEY_SIZE 40 + +#define ENA_ALL_RSS_HF (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP | \ + ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_NONFRAG_IPV6_UDP) + +#define ENA_IO_TXQ_IDX(q) (2 * (q)) +#define ENA_IO_RXQ_IDX(q) (2 * (q) + 1) +/* Reversed version of ENA_IO_RXQ_IDX */ +#define ENA_IO_RXQ_IDX_REV(q) (((q) - 1) / 2) + +extern struct ena_shared_data *ena_shared_data; + struct ena_adapter; enum ena_ring_type { @@ -205,6 +226,7 @@ struct ena_offloads { bool tso4_supported; bool tx_csum_supported; bool rx_csum_supported; + bool rss_hash_supported; }; /* board specific private data structure */ @@ -268,4 +290,16 @@ struct ena_adapter { bool use_large_llq_hdr; }; +int ena_rss_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); +int ena_rss_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); +int ena_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf); +int ena_rss_hash_conf_get(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf); +int ena_rss_configure(struct ena_adapter *adapter); + #endif /* _ENA_ETHDEV_H_ */ diff --git a/drivers/net/ena/ena_rss.c b/drivers/net/ena/ena_rss.c new file mode 100644 index 0000000000..88afe13da0 --- /dev/null +++ b/drivers/net/ena/ena_rss.c @@ -0,0 +1,591 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2020 Amazon.com, Inc. or its affiliates. + * All rights reserved. 
+ */ + +#include "ena_ethdev.h" +#include "ena_logs.h" + +#include + +#define TEST_BIT(val, bit_shift) ((val) & (1UL << (bit_shift))) + +#define ENA_HF_RSS_ALL_L2 (ENA_ADMIN_RSS_L3_SA | ENA_ADMIN_RSS_L3_DA) +#define ENA_HF_RSS_ALL_L3 (ENA_ADMIN_RSS_L3_SA | ENA_ADMIN_RSS_L3_DA) +#define ENA_HF_RSS_ALL_L4 (ENA_ADMIN_RSS_L4_SP | ENA_ADMIN_RSS_L4_DP) +#define ENA_HF_RSS_ALL_L3_L4 (ENA_HF_RSS_ALL_L3 | ENA_HF_RSS_ALL_L4) +#define ENA_HF_RSS_ALL_L2_L3_L4 (ENA_HF_RSS_ALL_L2 | ENA_HF_RSS_ALL_L3_L4) + +enum ena_rss_hash_fields { + ENA_HF_RSS_TCP4 = ENA_HF_RSS_ALL_L3_L4, + ENA_HF_RSS_UDP4 = ENA_HF_RSS_ALL_L3_L4, + ENA_HF_RSS_TCP6 = ENA_HF_RSS_ALL_L3_L4, + ENA_HF_RSS_UDP6 = ENA_HF_RSS_ALL_L3_L4, + ENA_HF_RSS_IP4 = ENA_HF_RSS_ALL_L3, + ENA_HF_RSS_IP6 = ENA_HF_RSS_ALL_L3, + ENA_HF_RSS_IP4_FRAG = ENA_HF_RSS_ALL_L3, + ENA_HF_RSS_NOT_IP = ENA_HF_RSS_ALL_L2, + ENA_HF_RSS_TCP6_EX = ENA_HF_RSS_ALL_L3_L4, + ENA_HF_RSS_IP6_EX = ENA_HF_RSS_ALL_L3, +}; + +static int ena_fill_indirect_table_default(struct ena_com_dev *ena_dev, + size_t tbl_size, + size_t queue_num); +static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto, + uint16_t field); +static uint16_t ena_eth_hf_to_admin_hf(enum ena_admin_flow_hash_proto proto, + uint64_t rss_hf); +static int ena_set_hash_fields(struct ena_com_dev *ena_dev, uint64_t rss_hf); +static int ena_rss_hash_set(struct ena_com_dev *ena_dev, + struct rte_eth_rss_conf *rss_conf, + bool default_allowed); +static void ena_reorder_rss_hash_key(uint8_t *reordered_key, + uint8_t *key, + size_t key_size); +static int ena_get_rss_hash_key(struct ena_com_dev *ena_dev, uint8_t *rss_key); + +void ena_rss_key_fill(void *key, size_t size) +{ + static bool key_generated; + static uint8_t default_key[ENA_HASH_KEY_SIZE]; + size_t i; + + RTE_ASSERT(size <= ENA_HASH_KEY_SIZE); + + if (!key_generated) { + for (i = 0; i < ENA_HASH_KEY_SIZE; ++i) + default_key[i] = rte_rand() & 0xff; + key_generated = true; + } + + rte_memcpy(key, default_key, size); +} + +int 
ena_rss_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct ena_adapter *adapter = dev->data->dev_private; + struct ena_com_dev *ena_dev = &adapter->ena_dev; + int rc, i; + u16 entry_value; + int conf_idx; + int idx; + + if (reta_size == 0 || reta_conf == NULL) + return -EINVAL; + + if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) { + PMD_DRV_LOG(ERR, + "RSS was not configured for the PMD\n"); + return -ENOTSUP; + } + + if (reta_size > ENA_RX_RSS_TABLE_SIZE) { + PMD_DRV_LOG(WARNING, + "Requested indirection table size (%d) is bigger than supported: %d\n", + reta_size, ENA_RX_RSS_TABLE_SIZE); + return -EINVAL; + } + + for (i = 0 ; i < reta_size ; i++) { + /* Each reta_conf is for 64 entries. + * To support 128 we use 2 conf of 64. + */ + conf_idx = i / RTE_RETA_GROUP_SIZE; + idx = i % RTE_RETA_GROUP_SIZE; + if (TEST_BIT(reta_conf[conf_idx].mask, idx)) { + entry_value = + ENA_IO_RXQ_IDX(reta_conf[conf_idx].reta[idx]); + + rc = ena_com_indirect_table_fill_entry(ena_dev, i, + entry_value); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(ERR, + "Cannot fill indirection table\n"); + return rc; + } + } + } + + rte_spinlock_lock(&adapter->admin_lock); + rc = ena_com_indirect_table_set(ena_dev); + rte_spinlock_unlock(&adapter->admin_lock); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(ERR, "Cannot set the indirection table\n"); + return rc; + } + + PMD_DRV_LOG(DEBUG, "RSS configured %d entries for port %d\n", + reta_size, dev->data->port_id); + + return 0; +} + +/* Query redirection table. 
*/ +int ena_rss_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + uint32_t indirect_table[ENA_RX_RSS_TABLE_SIZE] = { 0 }; + struct ena_adapter *adapter = dev->data->dev_private; + struct ena_com_dev *ena_dev = &adapter->ena_dev; + int rc; + int i; + int reta_conf_idx; + int reta_idx; + + if (reta_size == 0 || reta_conf == NULL || + (reta_size > RTE_RETA_GROUP_SIZE && ((reta_conf + 1) == NULL))) + return -EINVAL; + + if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) { + PMD_DRV_LOG(ERR, + "RSS was not configured for the PMD\n"); + return -ENOTSUP; + } + + rte_spinlock_lock(&adapter->admin_lock); + rc = ena_com_indirect_table_get(ena_dev, indirect_table); + rte_spinlock_unlock(&adapter->admin_lock); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(ERR, "Cannot get indirection table\n"); + return rc; + } + + for (i = 0 ; i < reta_size ; i++) { + reta_conf_idx = i / RTE_RETA_GROUP_SIZE; + reta_idx = i % RTE_RETA_GROUP_SIZE; + if (TEST_BIT(reta_conf[reta_conf_idx].mask, reta_idx)) + reta_conf[reta_conf_idx].reta[reta_idx] = + ENA_IO_RXQ_IDX_REV(indirect_table[i]); + } + + return 0; +} + +static int ena_fill_indirect_table_default(struct ena_com_dev *ena_dev, + size_t tbl_size, + size_t queue_num) +{ + size_t i; + int rc; + uint16_t val; + + for (i = 0; i < tbl_size; ++i) { + val = i % queue_num; + rc = ena_com_indirect_table_fill_entry(ena_dev, i, + ENA_IO_RXQ_IDX(val)); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(DEBUG, + "Failed to set %zu indirection table entry with val %" PRIu16 "\n", + i, val); + return rc; + } + } + + return 0; +} + +static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto, + uint16_t fields) +{ + uint64_t rss_hf = 0; + + /* If no fields are activated, then RSS is disabled for this proto */ + if ((fields & ENA_HF_RSS_ALL_L2_L3_L4) == 0) + return 0; + + /* Convert proto to ETH flag */ + switch (proto) { + case ENA_ADMIN_RSS_TCP4: + rss_hf |= 
ETH_RSS_NONFRAG_IPV4_TCP; + break; + case ENA_ADMIN_RSS_UDP4: + rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP; + break; + case ENA_ADMIN_RSS_TCP6: + rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP; + break; + case ENA_ADMIN_RSS_UDP6: + rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP; + break; + case ENA_ADMIN_RSS_IP4: + rss_hf |= ETH_RSS_IPV4; + break; + case ENA_ADMIN_RSS_IP6: + rss_hf |= ETH_RSS_IPV6; + break; + case ENA_ADMIN_RSS_IP4_FRAG: + rss_hf |= ETH_RSS_FRAG_IPV4; + break; + case ENA_ADMIN_RSS_NOT_IP: + rss_hf |= ETH_RSS_L2_PAYLOAD; + break; + case ENA_ADMIN_RSS_TCP6_EX: + rss_hf |= ETH_RSS_IPV6_TCP_EX; + break; + case ENA_ADMIN_RSS_IP6_EX: + rss_hf |= ETH_RSS_IPV6_EX; + break; + default: + break; + }; + + /* Check if only DA or SA is being used for L3. */ + switch (fields & ENA_HF_RSS_ALL_L3) { + case ENA_ADMIN_RSS_L3_SA: + rss_hf |= ETH_RSS_L3_SRC_ONLY; + break; + case ENA_ADMIN_RSS_L3_DA: + rss_hf |= ETH_RSS_L3_DST_ONLY; + break; + default: + break; + }; + + /* Check if only DA or SA is being used for L4. */ + switch (fields & ENA_HF_RSS_ALL_L4) { + case ENA_ADMIN_RSS_L4_SP: + rss_hf |= ETH_RSS_L4_SRC_ONLY; + break; + case ENA_ADMIN_RSS_L4_DP: + rss_hf |= ETH_RSS_L4_DST_ONLY; + break; + default: + break; + }; + + return rss_hf; +} + +static uint16_t ena_eth_hf_to_admin_hf(enum ena_admin_flow_hash_proto proto, + uint64_t rss_hf) +{ + uint16_t fields_mask = 0; + + /* L2 always uses source and destination addresses. */ + fields_mask = ENA_ADMIN_RSS_L2_DA | ENA_ADMIN_RSS_L2_SA; + + /* Determine which fields of L3 should be used. */ + switch (rss_hf & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY)) { + case ETH_RSS_L3_DST_ONLY: + fields_mask |= ENA_ADMIN_RSS_L3_DA; + break; + case ETH_RSS_L3_SRC_ONLY: + fields_mask |= ENA_ADMIN_RSS_L3_SA; + break; + default: + /* + * If SRC nor DST aren't set, it means both of them should be + * used. + */ + fields_mask |= ENA_HF_RSS_ALL_L3; + } + + /* Determine which fields of L4 should be used. 
*/ + switch (rss_hf & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)) { + case ETH_RSS_L4_DST_ONLY: + fields_mask |= ENA_ADMIN_RSS_L4_DP; + break; + case ETH_RSS_L4_SRC_ONLY: + fields_mask |= ENA_ADMIN_RSS_L4_SP; + break; + default: + /* + * If SRC nor DST aren't set, it means both of them should be + * used. + */ + fields_mask |= ENA_HF_RSS_ALL_L4; + } + + /* Return appropriate hash fields. */ + switch (proto) { + case ENA_ADMIN_RSS_TCP4: + return ENA_HF_RSS_TCP4 & fields_mask; + case ENA_ADMIN_RSS_UDP4: + return ENA_HF_RSS_UDP4 & fields_mask; + case ENA_ADMIN_RSS_TCP6: + return ENA_HF_RSS_TCP6 & fields_mask; + case ENA_ADMIN_RSS_UDP6: + return ENA_HF_RSS_UDP6 & fields_mask; + case ENA_ADMIN_RSS_IP4: + return ENA_HF_RSS_IP4 & fields_mask; + case ENA_ADMIN_RSS_IP6: + return ENA_HF_RSS_IP6 & fields_mask; + case ENA_ADMIN_RSS_IP4_FRAG: + return ENA_HF_RSS_IP4_FRAG & fields_mask; + case ENA_ADMIN_RSS_NOT_IP: + return ENA_HF_RSS_NOT_IP & fields_mask; + case ENA_ADMIN_RSS_TCP6_EX: + return ENA_HF_RSS_TCP6_EX & fields_mask; + case ENA_ADMIN_RSS_IP6_EX: + return ENA_HF_RSS_IP6_EX & fields_mask; + default: + break; + } + + return 0; +} + +static int ena_set_hash_fields(struct ena_com_dev *ena_dev, uint64_t rss_hf) +{ + struct ena_admin_proto_input selected_fields[ENA_ADMIN_RSS_PROTO_NUM] = {}; + int rc, i; + + /* Turn on appropriate fields for each requested packet type */ + if ((rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) != 0) + selected_fields[ENA_ADMIN_RSS_TCP4].fields = + ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP4, rss_hf); + + if ((rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) != 0) + selected_fields[ENA_ADMIN_RSS_UDP4].fields = + ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_UDP4, rss_hf); + + if ((rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) != 0) + selected_fields[ENA_ADMIN_RSS_TCP6].fields = + ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP6, rss_hf); + + if ((rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) != 0) + selected_fields[ENA_ADMIN_RSS_UDP6].fields = + ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_UDP6, rss_hf); + + if 
((rss_hf & ETH_RSS_IPV4) != 0) + selected_fields[ENA_ADMIN_RSS_IP4].fields = + ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP4, rss_hf); + + if ((rss_hf & ETH_RSS_IPV6) != 0) + selected_fields[ENA_ADMIN_RSS_IP6].fields = + ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP6, rss_hf); + + if ((rss_hf & ETH_RSS_FRAG_IPV4) != 0) + selected_fields[ENA_ADMIN_RSS_IP4_FRAG].fields = + ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP4_FRAG, rss_hf); + + if ((rss_hf & ETH_RSS_L2_PAYLOAD) != 0) + selected_fields[ENA_ADMIN_RSS_NOT_IP].fields = + ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_NOT_IP, rss_hf); + + if ((rss_hf & ETH_RSS_IPV6_TCP_EX) != 0) + selected_fields[ENA_ADMIN_RSS_TCP6_EX].fields = + ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP6_EX, rss_hf); + + if ((rss_hf & ETH_RSS_IPV6_EX) != 0) + selected_fields[ENA_ADMIN_RSS_IP6_EX].fields = + ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP6_EX, rss_hf); + + /* Try to write them to the device */ + for (i = 0; i < ENA_ADMIN_RSS_PROTO_NUM; i++) { + rc = ena_com_fill_hash_ctrl(ena_dev, + (enum ena_admin_flow_hash_proto)i, + selected_fields[i].fields); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(DEBUG, + "Failed to set ENA HF %d with fields %" PRIu16 "\n", + i, selected_fields[i].fields); + return rc; + } + } + + return 0; +} + +static int ena_rss_hash_set(struct ena_com_dev *ena_dev, + struct rte_eth_rss_conf *rss_conf, + bool default_allowed) +{ + uint8_t hw_rss_key[ENA_HASH_KEY_SIZE]; + uint8_t *rss_key; + int rc; + + if (rss_conf->rss_key != NULL) { + /* Reorder the RSS key bytes for the hardware requirements. */ + ena_reorder_rss_hash_key(hw_rss_key, rss_conf->rss_key, + ENA_HASH_KEY_SIZE); + rss_key = hw_rss_key; + } else { + rss_key = NULL; + } + + /* If the rss_key is NULL, then the randomized key will be used. 
*/ + rc = ena_com_fill_hash_function(ena_dev, ENA_ADMIN_TOEPLITZ, + rss_key, ENA_HASH_KEY_SIZE, 0); + if (rc != 0 && !(default_allowed && rc == ENA_COM_UNSUPPORTED)) { + PMD_DRV_LOG(ERR, + "Failed to set RSS hash function in the device\n"); + return rc; + } + + rc = ena_set_hash_fields(ena_dev, rss_conf->rss_hf); + if (rc == ENA_COM_UNSUPPORTED) { + if (rss_conf->rss_key == NULL && !default_allowed) { + PMD_DRV_LOG(ERR, + "Setting RSS hash fields is not supported\n"); + return -ENOTSUP; + } + PMD_DRV_LOG(WARNING, + "Setting RSS hash fields is not supported. Using default values: 0x%" PRIx64 "\n", + (uint64_t)(ENA_ALL_RSS_HF)); + } else if (rc != 0) { + PMD_DRV_LOG(ERR, "Failed to set RSS hash fields\n"); + return rc; + } + + return 0; +} + +/* ENA HW interprets the RSS key in reverse bytes order. Because of that, the + * key must be processed upon interaction with ena_com layer. + */ +static void ena_reorder_rss_hash_key(uint8_t *reordered_key, + uint8_t *key, + size_t key_size) +{ + size_t i, rev_i; + + for (i = 0, rev_i = key_size - 1; i < key_size; ++i, --rev_i) + reordered_key[i] = key[rev_i]; +} + +static int ena_get_rss_hash_key(struct ena_com_dev *ena_dev, uint8_t *rss_key) +{ + uint8_t hw_rss_key[ENA_HASH_KEY_SIZE]; + int rc; + + /* The default RSS hash key cannot be retrieved from the HW. Unless it's + * explicitly set, this operation shouldn't be supported. 
+ */ + if (ena_dev->rss.hash_key == NULL) { + PMD_DRV_LOG(WARNING, + "Retrieving default RSS hash key is not supported\n"); + return -ENOTSUP; + } + + rc = ena_com_get_hash_key(ena_dev, hw_rss_key); + if (rc != 0) + return rc; + + ena_reorder_rss_hash_key(rss_key, hw_rss_key, ENA_HASH_KEY_SIZE); + + return 0; +} + +int ena_rss_configure(struct ena_adapter *adapter) +{ + struct rte_eth_rss_conf *rss_conf; + struct ena_com_dev *ena_dev; + int rc; + + ena_dev = &adapter->ena_dev; + rss_conf = &adapter->edev_data->dev_conf.rx_adv_conf.rss_conf; + + if (adapter->edev_data->nb_rx_queues == 0) + return 0; + + /* Restart the indirection table. The number of queues could change + * between start/stop calls, so it must be reinitialized with default + * values. + */ + rc = ena_fill_indirect_table_default(ena_dev, ENA_RX_RSS_TABLE_SIZE, + adapter->edev_data->nb_rx_queues); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(ERR, + "Failed to fill indirection table with default values\n"); + return rc; + } + + rc = ena_com_indirect_table_set(ena_dev); + if (unlikely(rc != 0 && rc != ENA_COM_UNSUPPORTED)) { + PMD_DRV_LOG(ERR, + "Failed to set indirection table in the device\n"); + return rc; + } + + rc = ena_rss_hash_set(ena_dev, rss_conf, true); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(ERR, "Failed to set RSS hash\n"); + return rc; + } + + PMD_DRV_LOG(DEBUG, "RSS configured for port %d\n", + adapter->edev_data->port_id); + + return 0; +} + +int ena_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct ena_adapter *adapter = dev->data->dev_private; + int rc; + + rte_spinlock_lock(&adapter->admin_lock); + rc = ena_rss_hash_set(&adapter->ena_dev, rss_conf, false); + rte_spinlock_unlock(&adapter->admin_lock); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(ERR, "Failed to set RSS hash\n"); + return rc; + } + + return 0; +} + +int ena_rss_hash_conf_get(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct ena_adapter *adapter = 
dev->data->dev_private; + struct ena_com_dev *ena_dev = &adapter->ena_dev; + enum ena_admin_flow_hash_proto proto; + uint64_t rss_hf = 0; + int rc, i; + uint16_t admin_hf; + static bool warn_once; + + if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) { + PMD_DRV_LOG(ERR, "RSS was not configured for the PMD\n"); + return -ENOTSUP; + } + + if (rss_conf->rss_key != NULL) { + rc = ena_get_rss_hash_key(ena_dev, rss_conf->rss_key); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(ERR, + "Cannot retrieve RSS hash key, err: %d\n", + rc); + return rc; + } + } + + for (i = 0; i < ENA_ADMIN_RSS_PROTO_NUM; ++i) { + proto = (enum ena_admin_flow_hash_proto)i; + rte_spinlock_lock(&adapter->admin_lock); + rc = ena_com_get_hash_ctrl(ena_dev, proto, &admin_hf); + rte_spinlock_unlock(&adapter->admin_lock); + if (rc == ENA_COM_UNSUPPORTED) { + /* As some devices may support only reading rss hash + * key and not the hash ctrl, we want to notify the + * caller that this feature is only partially supported + * and do not return an error - the caller could be + * interested only in the key value. + */ + if (!warn_once) { + PMD_DRV_LOG(WARNING, + "Reading hash control from the device is not supported. 
.rss_hf will contain a default value.\n"); + warn_once = true; + } + rss_hf = ENA_ALL_RSS_HF; + break; + } else if (rc != 0) { + PMD_DRV_LOG(ERR, + "Failed to retrieve hash ctrl for proto: %d with err: %d\n", + i, rc); + return rc; + } + + rss_hf |= ena_admin_hf_to_eth_hf(proto, admin_hf); + } + + rss_conf->rss_hf = rss_hf; + return 0; +} diff --git a/drivers/net/ena/meson.build b/drivers/net/ena/meson.build index cc912fceba..d02ed3f64f 100644 --- a/drivers/net/ena/meson.build +++ b/drivers/net/ena/meson.build @@ -9,6 +9,7 @@ endif sources = files( 'ena_ethdev.c', + 'ena_rss.c', 'base/ena_com.c', 'base/ena_eth_com.c', ) From patchwork Fri Jul 23 10:24:54 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Krawczyk X-Patchwork-Id: 96243 Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id 0285FA0C46; Fri, 23 Jul 2021 12:27:21 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 57486406A2; Fri, 23 Jul 2021 12:26:51 +0200 (CEST) Received: from mail-ed1-f52.google.com (mail-ed1-f52.google.com [209.85.208.52]) by mails.dpdk.org (Postfix) with ESMTP id 525524003C for ; Fri, 23 Jul 2021 12:26:45 +0200 (CEST) Received: by mail-ed1-f52.google.com with SMTP id da26so1165628edb.1 for ; Fri, 23 Jul 2021 03:26:45 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=semihalf-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=OtP5ZPIU1Gx9Vu0Xg0GTkuhPw3ywGGSh6f/WboE4pJw=; b=ISlG9Zf2FoiFoI7FmOIsbXrYwSv8BE6kTM3uyzV3SiJbEfxMASwnx6qbGUu7tFbs+w 2D/cpqO8A5y3uIf/TOryRREvj9dPFkFyXdL82683DTTiU/UEbwvGgrHZfvK3Z4IMYFcX ddAxbgmQ5DD5Ugo5d0m6DB/qxhcmtNDA8YnQKRpMb0MiLblrFL7pUvoik9KR1TIk7PbK 
Srq8sXv/poHKFSBYRBFS8Rbh2rsXk+It0JlzxRwzcjvN7tj+ldonBwRbHNButRInH1wt J4qjsamCU+Qod1dh0kaELlbtQq42A47W4qZKd5k1LHWqg6n+owiMypS00vVPXxM773An wiyA== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=OtP5ZPIU1Gx9Vu0Xg0GTkuhPw3ywGGSh6f/WboE4pJw=; b=DS8ObjE8DxWuFlbL7HgA0XzT7laCU3Vs3JAM1R707mB22KievEYblum82TRiMuKSz4 cuV0Uw7F2AQtSkkbo5u/vrYTbhrI12Fr+c+9KLiumznLOfzIRvmGREZRy7OlzmGrqLLo 02K2nGW8mltiK675nuKs2S5/2UxZR9h3jtTrA4x0zmG+oUDTDJdCusgx2+gkN5bZnX4q FcouX0PkSpaENW3xlcTIGR4FgaORtASDjpBsHxh98PnzJgJ+bI8/6s3OJfNy5y4lPChe CYWvGg68cMpyCpQQR3kMJjbCWr12qyMhgoLlpOObe6azC78bkcR3/xH20ghXiCKdOCdL qYwA== X-Gm-Message-State: AOAM531GqxghKN3JTAObY7cq6tPXX6uSGZrom0DyYAx9s4xBDjt/PDEy v0dF6grTGcnbBaXas8acmHPUPsf5QljNvPoi X-Google-Smtp-Source: ABdhPJyRbiXoYn1MXcWUHWoTSavZiS1LNzOQs2F9f8Q3UPS/J5ll+SGjPvJneuXiyV/mdj1bgjlqig== X-Received: by 2002:aa7:c857:: with SMTP id g23mr4641155edt.100.1627036004817; Fri, 23 Jul 2021 03:26:44 -0700 (PDT) Received: from DESKTOP-U5LNN3J.localdomain (093105178068.dynamic-ww-01.vectranet.pl. 
[93.105.178.68]) by smtp.gmail.com with ESMTPSA id jp26sm10506142ejb.28.2021.07.23.03.26.43 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Fri, 23 Jul 2021 03:26:43 -0700 (PDT) From: Michal Krawczyk To: dev@dpdk.org Cc: ndagan@amazon.com, shaibran@amazon.com, upstream@semihalf.com, Michal Krawczyk Date: Fri, 23 Jul 2021 12:24:54 +0200 Message-Id: <20210723102454.12206-7-mk@semihalf.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210723102454.12206-1-mk@semihalf.com> References: <20210714104320.4096-1-mk@semihalf.com> <20210723102454.12206-1-mk@semihalf.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v4 6/6] net/ena: update version to v2.4.0 X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This version update contains: * Rx interrupts feature, * Support for the RSS hash function reconfiguration, * Small internal reworks, * Reset trigger on Tx path fix. Signed-off-by: Michal Krawczyk --- drivers/net/ena/ena_ethdev.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index ee059fc165..14f776b5ad 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -21,7 +21,7 @@ #include #define DRV_MODULE_VER_MAJOR 2 -#define DRV_MODULE_VER_MINOR 3 +#define DRV_MODULE_VER_MINOR 4 #define DRV_MODULE_VER_SUBMINOR 0 #define __MERGE_64B_H_L(h, l) (((uint64_t)h << 32) | l)
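[Editor's note] A few standalone sketches may help readers follow the RSS rework in this series. First, the queue-index macros that patch 1/6 moves into ena_ethdev.h: ENA interleaves each DPDK queue pair across the IO queue space, TX queues at even indexes and RX queues at odd ones, and ENA_IO_RXQ_IDX_REV() undoes the RX mapping when the indirection table is read back. The macro bodies below are copied from the patch; everything around them is illustrative only, not driver code:

```c
#include <assert.h>
#include <stdint.h>

/* TX queue q occupies IO queue 2q; RX queue q occupies IO queue 2q + 1. */
#define ENA_IO_TXQ_IDX(q) (2 * (q))
#define ENA_IO_RXQ_IDX(q) (2 * (q) + 1)
/* Reversed version of ENA_IO_RXQ_IDX: recover the RX queue number
 * from an odd IO queue index, e.g. when parsing the indirection table.
 */
#define ENA_IO_RXQ_IDX_REV(q) (((q) - 1) / 2)
```

ena_rss_reta_update() stores ENA_IO_RXQ_IDX(reta[i]) into the device table, and ena_rss_reta_query() applies ENA_IO_RXQ_IDX_REV() to each fetched entry, so the two macros must stay exact inverses for odd indexes.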
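[Editor's note] Second, both RETA loops in ena_rss.c map a flat table index onto DPDK's 64-entry rte_eth_rss_reta_entry64 groups before testing the group's mask bit, which is what the patch's TEST_BIT() macro does. The hypothetical helper below shows only that indexing arithmetic (the function name and RETA_GROUP_SIZE stand-in are illustrative, not from the driver):

```c
#include <assert.h>
#include <stdint.h>

#define RETA_GROUP_SIZE 64 /* stands in for RTE_RETA_GROUP_SIZE */

/* Return 1 when flat RETA entry i is selected for update: entry i lives
 * in 64-entry group i / 64, at slot i % 64, and is only touched when the
 * corresponding bit of that group's mask is set (cf. TEST_BIT()).
 */
static int reta_entry_selected(const uint64_t *group_masks, int i)
{
	int conf_idx = i / RETA_GROUP_SIZE; /* which rte_eth_rss_reta_entry64 */
	int idx = i % RETA_GROUP_SIZE;      /* bit/slot within that group */

	return (int)((group_masks[conf_idx] >> idx) & 1u);
}
```

With ENA_RX_RSS_TABLE_SIZE of 128, this is why the patch comment notes that two 64-entry conf structures cover the whole table.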
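[Editor's note] Finally, the key handling in ena_rss.c relies on the comment above ena_reorder_rss_hash_key(): the ENA hardware consumes the RSS hash key in reverse byte order, so the driver mirrors the key both when writing it and when reading it back, and the same function works in both directions. A minimal sketch of that reordering (simplified from the patch, with const added for clarity):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Mirror a key buffer byte-for-byte: out[0] = key[n-1], ..., out[n-1] = key[0].
 * Applying the function twice yields the original key, which is why one
 * helper suffices for both the set and get paths.
 */
static void reorder_rss_hash_key(uint8_t *reordered_key,
				 const uint8_t *key, size_t key_size)
{
	size_t i;

	for (i = 0; i < key_size; i++)
		reordered_key[i] = key[key_size - 1 - i];
}
```

In the patch, ena_rss_hash_set() mirrors a user-supplied key into a local ENA_HASH_KEY_SIZE buffer before ena_com_fill_hash_function(), and ena_get_rss_hash_key() mirrors the value returned by ena_com_get_hash_key() back into host order.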