From patchwork Wed Jul 14 10:43:15 2021
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 95853
X-Patchwork-Delegate: thomas@monjalon.net
From: Michal Krawczyk
To: dev@dpdk.org
Cc: ndagan@amazon.com, shaibran@amazon.com, upstream@semihalf.com, Michal Krawczyk, Igor Chauskin
Date: Wed, 14 Jul 2021 12:43:15 +0200
Message-Id: <20210714104320.4096-2-mk@semihalf.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210714104320.4096-1-mk@semihalf.com>
References: <20210714103435.3388-1-mk@semihalf.com> <20210714104320.4096-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 1/6] net/ena: adjust driver logs
List-Id: DPDK patches and discussions

ENA logs were not consistent regarding the newline character. A few of them relied on the newline character added by the PMD_*_LOG macros, but most added the newline character themselves. This caused ENA logs to print an extra empty line after almost every message. To unify this behavior, the missing newline characters were added to the driver logs and removed from the logging macros. After this patch, every ENA log message should end with '\n'.
Moreover, the log messages were adjusted in terms of wording (unnecessary abbreviations were removed) and capitalization (sentences now start with capital letters, and 'Tx/Rx' is used instead of 'tx/TX', etc.). Some of the logs were rephrased to make them clearer to the reader. Signed-off-by: Michal Krawczyk Reviewed-by: Igor Chauskin Reviewed-by: Shai Brandes --- drivers/net/ena/ena_ethdev.c | 150 ++++++++++++++++++----------------- drivers/net/ena/ena_logs.h | 10 +-- 2 files changed, 84 insertions(+), 76 deletions(-) diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index dfe68279fa..f5e812d507 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -544,7 +544,7 @@ ena_dev_reset(struct rte_eth_dev *dev) ena_destroy_device(dev); rc = eth_ena_dev_init(dev); if (rc) - PMD_INIT_LOG(CRIT, "Cannot initialize device"); + PMD_INIT_LOG(CRIT, "Cannot initialize device\n"); return rc; } @@ -565,7 +565,7 @@ static int ena_rss_reta_update(struct rte_eth_dev *dev, if (reta_size > ENA_RX_RSS_TABLE_SIZE) { PMD_DRV_LOG(WARNING, - "indirection table %d is bigger than supported (%d)\n", + "Requested indirection table size (%d) is bigger than supported: %d\n", reta_size, ENA_RX_RSS_TABLE_SIZE); return -EINVAL; } @@ -599,8 +599,8 @@ static int ena_rss_reta_update(struct rte_eth_dev *dev, return rc; } - PMD_DRV_LOG(DEBUG, "%s(): RSS configured %d entries for port %d\n", - __func__, reta_size, dev->data->port_id); + PMD_DRV_LOG(DEBUG, "RSS configured %d entries for port %d\n", + reta_size, dev->data->port_id); return 0; } @@ -626,7 +626,7 @@ static int ena_rss_reta_query(struct rte_eth_dev *dev, rc = ena_com_indirect_table_get(ena_dev, indirect_table); rte_spinlock_unlock(&adapter->admin_lock); if (unlikely(rc && rc != ENA_COM_UNSUPPORTED)) { - PMD_DRV_LOG(ERR, "cannot get indirect table\n"); + PMD_DRV_LOG(ERR, "Cannot get indirection table\n"); return -ENOTSUP; } @@ -650,7 +650,7 @@ static int ena_rss_init_default(struct ena_adapter
*adapter) rc = ena_com_rss_init(ena_dev, ENA_RX_RSS_TABLE_LOG_SIZE); if (unlikely(rc)) { - PMD_DRV_LOG(ERR, "Cannot init indirect table\n"); + PMD_DRV_LOG(ERR, "Cannot init indirection table\n"); goto err_rss_init; } @@ -659,7 +659,7 @@ static int ena_rss_init_default(struct ena_adapter *adapter) rc = ena_com_indirect_table_fill_entry(ena_dev, i, ENA_IO_RXQ_IDX(val)); if (unlikely(rc && (rc != ENA_COM_UNSUPPORTED))) { - PMD_DRV_LOG(ERR, "Cannot fill indirect table\n"); + PMD_DRV_LOG(ERR, "Cannot fill indirection table\n"); goto err_fill_indir; } } @@ -679,7 +679,7 @@ static int ena_rss_init_default(struct ena_adapter *adapter) rc = ena_com_indirect_table_set(ena_dev); if (unlikely(rc && (rc != ENA_COM_UNSUPPORTED))) { - PMD_DRV_LOG(ERR, "Cannot flush the indirect table\n"); + PMD_DRV_LOG(ERR, "Cannot flush indirection table\n"); goto err_fill_indir; } PMD_DRV_LOG(DEBUG, "RSS configured for port %d\n", @@ -733,7 +733,7 @@ static void ena_rx_queue_release(void *queue) ring->configured = 0; - PMD_DRV_LOG(NOTICE, "RX Queue %d:%d released\n", + PMD_DRV_LOG(NOTICE, "Rx queue %d:%d released\n", ring->port_id, ring->id); } @@ -757,7 +757,7 @@ static void ena_tx_queue_release(void *queue) ring->configured = 0; - PMD_DRV_LOG(NOTICE, "TX Queue %d:%d released\n", + PMD_DRV_LOG(NOTICE, "Tx queue %d:%d released\n", ring->port_id, ring->id); } @@ -822,19 +822,19 @@ static int ena_queue_start_all(struct rte_eth_dev *dev, if (ring_type == ENA_RING_TYPE_RX) { ena_assert_msg( dev->data->rx_queues[i] == &queues[i], - "Inconsistent state of rx queues\n"); + "Inconsistent state of Rx queues\n"); } else { ena_assert_msg( dev->data->tx_queues[i] == &queues[i], - "Inconsistent state of tx queues\n"); + "Inconsistent state of Tx queues\n"); } rc = ena_queue_start(&queues[i]); if (rc) { PMD_INIT_LOG(ERR, - "failed to start queue %d type(%d)", - i, ring_type); + "Failed to start queue[%d] of type(%d)\n", + i, ring_type); goto err; } } @@ -867,9 +867,9 @@ static int ena_check_valid_conf(struct 
ena_adapter *adapter) uint32_t max_frame_len = ena_get_mtu_conf(adapter); if (max_frame_len > adapter->max_mtu || max_frame_len < ENA_MIN_MTU) { - PMD_INIT_LOG(ERR, "Unsupported MTU of %d. " - "max mtu: %d, min mtu: %d", - max_frame_len, adapter->max_mtu, ENA_MIN_MTU); + PMD_INIT_LOG(ERR, + "Unsupported MTU of %d. Max MTU: %d, min MTU: %d\n", + max_frame_len, adapter->max_mtu, ENA_MIN_MTU); return ENA_COM_UNSUPPORTED; } @@ -938,7 +938,7 @@ ena_calc_io_queue_size(struct ena_calc_queue_size_ctx *ctx, ENA_ADMIN_PLACEMENT_POLICY_DEV)) { max_tx_queue_size /= 2; PMD_INIT_LOG(INFO, - "Forcing large headers and decreasing maximum TX queue size to %d\n", + "Forcing large headers and decreasing maximum Tx queue size to %d\n", max_tx_queue_size); } else { PMD_INIT_LOG(ERR, @@ -947,7 +947,7 @@ ena_calc_io_queue_size(struct ena_calc_queue_size_ctx *ctx, } if (unlikely(max_rx_queue_size == 0 || max_tx_queue_size == 0)) { - PMD_INIT_LOG(ERR, "Invalid queue size"); + PMD_INIT_LOG(ERR, "Invalid queue size\n"); return -EFAULT; } @@ -1044,8 +1044,7 @@ static int ena_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) if (mtu > ena_get_mtu_conf(adapter) || mtu < ENA_MIN_MTU) { PMD_DRV_LOG(ERR, - "Invalid MTU setting. new_mtu: %d " - "max mtu: %d min mtu: %d\n", + "Invalid MTU setting. 
New MTU: %d, max MTU: %d, min MTU: %d\n", mtu, ena_get_mtu_conf(adapter), ENA_MIN_MTU); return -EINVAL; } @@ -1054,7 +1053,7 @@ static int ena_mtu_set(struct rte_eth_dev *dev, uint16_t mtu) if (rc) PMD_DRV_LOG(ERR, "Could not set MTU: %d\n", mtu); else - PMD_DRV_LOG(NOTICE, "Set MTU: %d\n", mtu); + PMD_DRV_LOG(NOTICE, "MTU set to: %d\n", mtu); return rc; } @@ -1130,7 +1129,7 @@ static int ena_stop(struct rte_eth_dev *dev) if (adapter->trigger_reset) { rc = ena_com_dev_reset(ena_dev, adapter->reset_reason); if (rc) - PMD_DRV_LOG(ERR, "Device reset failed rc=%d\n", rc); + PMD_DRV_LOG(ERR, "Device reset failed, rc: %d\n", rc); } ++adapter->dev_stats.dev_stop; @@ -1175,7 +1174,7 @@ static int ena_create_io_queue(struct ena_ring *ring) rc = ena_com_create_io_queue(ena_dev, &ctx); if (rc) { PMD_DRV_LOG(ERR, - "failed to create io queue #%d (qid:%d) rc: %d\n", + "Failed to create IO queue[%d] (qid:%d), rc: %d\n", ring->id, ena_qid, rc); return rc; } @@ -1185,7 +1184,7 @@ static int ena_create_io_queue(struct ena_ring *ring) &ring->ena_com_io_cq); if (rc) { PMD_DRV_LOG(ERR, - "Failed to get io queue handlers. 
queue num %d rc: %d\n", + "Failed to get IO queue[%d] handlers, rc: %d\n", ring->id, rc); ena_com_destroy_io_queue(ena_dev, ena_qid); return rc; @@ -1239,7 +1238,7 @@ static int ena_queue_start(struct ena_ring *ring) rc = ena_create_io_queue(ring); if (rc) { - PMD_INIT_LOG(ERR, "Failed to create IO queue!"); + PMD_INIT_LOG(ERR, "Failed to create IO queue\n"); return rc; } @@ -1257,7 +1256,7 @@ static int ena_queue_start(struct ena_ring *ring) if (rc != bufs_num) { ena_com_destroy_io_queue(&ring->adapter->ena_dev, ENA_IO_RXQ_IDX(ring->id)); - PMD_INIT_LOG(ERR, "Failed to populate rx ring !"); + PMD_INIT_LOG(ERR, "Failed to populate Rx ring\n"); return ENA_COM_FAULT; } /* Flush per-core RX buffers pools cache as they can be used on other @@ -1282,21 +1281,21 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev, if (txq->configured) { PMD_DRV_LOG(CRIT, - "API violation. Queue %d is already configured\n", + "API violation. Queue[%d] is already configured\n", queue_idx); return ENA_COM_FAULT; } if (!rte_is_power_of_2(nb_desc)) { PMD_DRV_LOG(ERR, - "Unsupported size of TX queue: %d is not a power of 2.\n", + "Unsupported size of Tx queue: %d is not a power of 2.\n", nb_desc); return -EINVAL; } if (nb_desc > adapter->max_tx_ring_size) { PMD_DRV_LOG(ERR, - "Unsupported size of TX queue (max size: %d)\n", + "Unsupported size of Tx queue (max size: %d)\n", adapter->max_tx_ring_size); return -EINVAL; } @@ -1314,7 +1313,8 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev, txq->ring_size, RTE_CACHE_LINE_SIZE); if (!txq->tx_buffer_info) { - PMD_DRV_LOG(ERR, "failed to alloc mem for tx buffer info\n"); + PMD_DRV_LOG(ERR, + "Failed to allocate memory for Tx buffer info\n"); return -ENOMEM; } @@ -1322,7 +1322,8 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev, sizeof(u16) * txq->ring_size, RTE_CACHE_LINE_SIZE); if (!txq->empty_tx_reqs) { - PMD_DRV_LOG(ERR, "failed to alloc mem for tx reqs\n"); + PMD_DRV_LOG(ERR, + "Failed to allocate memory for empty Tx 
requests\n"); rte_free(txq->tx_buffer_info); return -ENOMEM; } @@ -1332,7 +1333,7 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev, txq->tx_max_header_size, RTE_CACHE_LINE_SIZE); if (!txq->push_buf_intermediate_buf) { - PMD_DRV_LOG(ERR, "failed to alloc push buff for LLQ\n"); + PMD_DRV_LOG(ERR, "Failed to alloc push buffer for LLQ\n"); rte_free(txq->tx_buffer_info); rte_free(txq->empty_tx_reqs); return -ENOMEM; @@ -1367,21 +1368,21 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, rxq = &adapter->rx_ring[queue_idx]; if (rxq->configured) { PMD_DRV_LOG(CRIT, - "API violation. Queue %d is already configured\n", + "API violation. Queue[%d] is already configured\n", queue_idx); return ENA_COM_FAULT; } if (!rte_is_power_of_2(nb_desc)) { PMD_DRV_LOG(ERR, - "Unsupported size of RX queue: %d is not a power of 2.\n", + "Unsupported size of Rx queue: %d is not a power of 2.\n", nb_desc); return -EINVAL; } if (nb_desc > adapter->max_rx_ring_size) { PMD_DRV_LOG(ERR, - "Unsupported size of RX queue (max size: %d)\n", + "Unsupported size of Rx queue (max size: %d)\n", adapter->max_rx_ring_size); return -EINVAL; } @@ -1390,7 +1391,7 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, buffer_size = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM; if (buffer_size < ENA_RX_BUF_MIN_SIZE) { PMD_DRV_LOG(ERR, - "Unsupported size of RX buffer: %zu (min size: %d)\n", + "Unsupported size of Rx buffer: %zu (min size: %d)\n", buffer_size, ENA_RX_BUF_MIN_SIZE); return -EINVAL; } @@ -1407,7 +1408,8 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, sizeof(struct ena_rx_buffer) * nb_desc, RTE_CACHE_LINE_SIZE); if (!rxq->rx_buffer_info) { - PMD_DRV_LOG(ERR, "failed to alloc mem for rx buffer info\n"); + PMD_DRV_LOG(ERR, + "Failed to allocate memory for Rx buffer info\n"); return -ENOMEM; } @@ -1416,7 +1418,8 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, RTE_CACHE_LINE_SIZE); if (!rxq->rx_refill_buffer) { - PMD_DRV_LOG(ERR, "failed to alloc mem for rx 
refill buffer\n"); + PMD_DRV_LOG(ERR, + "Failed to allocate memory for Rx refill buffer\n"); rte_free(rxq->rx_buffer_info); rxq->rx_buffer_info = NULL; return -ENOMEM; @@ -1426,7 +1429,8 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, sizeof(uint16_t) * nb_desc, RTE_CACHE_LINE_SIZE); if (!rxq->empty_rx_reqs) { - PMD_DRV_LOG(ERR, "failed to alloc mem for empty rx reqs\n"); + PMD_DRV_LOG(ERR, + "Failed to allocate memory for empty Rx requests\n"); rte_free(rxq->rx_buffer_info); rxq->rx_buffer_info = NULL; rte_free(rxq->rx_refill_buffer); @@ -1457,7 +1461,7 @@ static int ena_add_single_rx_desc(struct ena_com_io_sq *io_sq, /* pass resource to device */ rc = ena_com_add_single_rx_desc(io_sq, &ebuf, id); if (unlikely(rc != 0)) - PMD_DRV_LOG(WARNING, "failed adding rx desc\n"); + PMD_DRV_LOG(WARNING, "Failed adding Rx desc\n"); return rc; } @@ -1483,7 +1487,7 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count) if (unlikely(rc < 0)) { rte_atomic64_inc(&rxq->adapter->drv_stats->rx_nombuf); ++rxq->rx_stats.mbuf_alloc_fail; - PMD_RX_LOG(DEBUG, "there are no enough free buffers"); + PMD_RX_LOG(DEBUG, "There are not enough free buffers\n"); return 0; } @@ -1506,8 +1510,9 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count) } if (unlikely(i < count)) { - PMD_DRV_LOG(WARNING, "refilled rx qid %d with only %d " - "buffers (from %d)\n", rxq->id, i, count); + PMD_DRV_LOG(WARNING, + "Refilled Rx queue[%d] with only %d/%d buffers\n", + rxq->id, i, count); rte_pktmbuf_free_bulk(&mbufs[i], count - i); ++rxq->rx_stats.refill_partial; } @@ -1535,7 +1540,7 @@ static int ena_device_init(struct ena_com_dev *ena_dev, /* Initialize mmio registers */ rc = ena_com_mmio_reg_read_request_init(ena_dev); if (rc) { - PMD_DRV_LOG(ERR, "failed to init mmio read less\n"); + PMD_DRV_LOG(ERR, "Failed to init MMIO read less\n"); return rc; } @@ -1548,14 +1553,14 @@ static int ena_device_init(struct ena_com_dev *ena_dev, /* reset device */ rc = 
ena_com_dev_reset(ena_dev, ENA_REGS_RESET_NORMAL); if (rc) { - PMD_DRV_LOG(ERR, "cannot reset device\n"); + PMD_DRV_LOG(ERR, "Cannot reset device\n"); goto err_mmio_read_less; } /* check FW version */ rc = ena_com_validate_version(ena_dev); if (rc) { - PMD_DRV_LOG(ERR, "device version is too low\n"); + PMD_DRV_LOG(ERR, "Device version is too low\n"); goto err_mmio_read_less; } @@ -1565,7 +1570,7 @@ static int ena_device_init(struct ena_com_dev *ena_dev, rc = ena_com_admin_init(ena_dev, &aenq_handlers); if (rc) { PMD_DRV_LOG(ERR, - "cannot initialize ena admin queue with device\n"); + "Cannot initialize ENA admin queue\n"); goto err_mmio_read_less; } @@ -1581,7 +1586,7 @@ static int ena_device_init(struct ena_com_dev *ena_dev, rc = ena_com_get_dev_attr_feat(ena_dev, get_feat_ctx); if (rc) { PMD_DRV_LOG(ERR, - "cannot get attribute for ena device rc= %d\n", rc); + "Cannot get attribute for ENA device, rc: %d\n", rc); goto err_admin_init; } @@ -1594,7 +1599,7 @@ static int ena_device_init(struct ena_com_dev *ena_dev, aenq_groups &= get_feat_ctx->aenq.supported_groups; rc = ena_com_set_aenq_config(ena_dev, aenq_groups); if (rc) { - PMD_DRV_LOG(ERR, "Cannot configure aenq groups rc: %d\n", rc); + PMD_DRV_LOG(ERR, "Cannot configure AENQ groups, rc: %d\n", rc); goto err_admin_init; } @@ -1643,7 +1648,7 @@ static void check_for_missing_keep_alive(struct ena_adapter *adapter) static void check_for_admin_com_state(struct ena_adapter *adapter) { if (unlikely(!ena_com_get_admin_running_state(&adapter->ena_dev))) { - PMD_DRV_LOG(ERR, "ENA admin queue is not in running state!\n"); + PMD_DRV_LOG(ERR, "ENA admin queue is not in running state\n"); adapter->reset_reason = ENA_REGS_RESET_ADMIN_TO; adapter->trigger_reset = true; } @@ -1706,8 +1711,8 @@ ena_set_queues_placement_policy(struct ena_adapter *adapter, rc = ena_com_config_dev_mode(ena_dev, llq, llq_default_configurations); if (unlikely(rc)) { - PMD_INIT_LOG(WARNING, "Failed to config dev mode. 
" - "Fallback to host mode policy."); + PMD_INIT_LOG(WARNING, + "Failed to config dev mode. Fallback to host mode policy.\n"); ena_dev->tx_mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST; return 0; } @@ -1717,8 +1722,8 @@ ena_set_queues_placement_policy(struct ena_adapter *adapter, return 0; if (!adapter->dev_mem_base) { - PMD_DRV_LOG(ERR, "Unable to access LLQ bar resource. " - "Fallback to host mode policy.\n."); + PMD_DRV_LOG(ERR, + "Unable to access LLQ BAR resource. Fallback to host mode policy.\n"); ena_dev->tx_mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST; return 0; } @@ -1758,7 +1763,7 @@ static uint32_t ena_calc_max_io_queue_num(struct ena_com_dev *ena_dev, max_num_io_queues = RTE_MIN(max_num_io_queues, io_tx_cq_num); if (unlikely(max_num_io_queues == 0)) { - PMD_DRV_LOG(ERR, "Number of IO queues should not be 0\n"); + PMD_DRV_LOG(ERR, "Number of IO queues cannot not be 0\n"); return -EFAULT; } @@ -1798,7 +1803,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) pci_dev = RTE_ETH_DEV_TO_PCI(eth_dev); - PMD_INIT_LOG(INFO, "Initializing %x:%x:%x.%d", + PMD_INIT_LOG(INFO, "Initializing %x:%x:%x.%d\n", pci_dev->addr.domain, pci_dev->addr.bus, pci_dev->addr.devid, @@ -1810,7 +1815,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) adapter->dev_mem_base = pci_dev->mem_resource[ENA_MEM_BAR].addr; if (!adapter->regs) { - PMD_INIT_LOG(CRIT, "Failed to access registers BAR(%d)", + PMD_INIT_LOG(CRIT, "Failed to access registers BAR(%d)\n", ENA_REGS_BAR); return -ENXIO; } @@ -1833,7 +1838,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) /* device specific initialization routine */ rc = ena_device_init(ena_dev, pci_dev, &get_feat_ctx, &wd_state); if (rc) { - PMD_INIT_LOG(CRIT, "Failed to init ENA device"); + PMD_INIT_LOG(CRIT, "Failed to init ENA device\n"); goto err; } adapter->wd_state = wd_state; @@ -1843,7 +1848,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) rc = ena_set_queues_placement_policy(adapter, ena_dev, 
&get_feat_ctx.llq, &llq_config); if (unlikely(rc)) { - PMD_INIT_LOG(CRIT, "Failed to set placement policy"); + PMD_INIT_LOG(CRIT, "Failed to set placement policy\n"); return rc; } @@ -1905,7 +1910,8 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) sizeof(*adapter->drv_stats), RTE_CACHE_LINE_SIZE); if (!adapter->drv_stats) { - PMD_DRV_LOG(ERR, "failed to alloc mem for adapter stats\n"); + PMD_DRV_LOG(ERR, + "Failed to allocate memory for adapter statistics\n"); rc = -ENOMEM; goto err_delete_debug_area; } @@ -2233,7 +2239,9 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, rx_ring->ena_com_io_sq, &ena_rx_ctx); if (unlikely(rc)) { - PMD_DRV_LOG(ERR, "ena_com_rx_pkt error %d\n", rc); + PMD_DRV_LOG(ERR, + "Failed to get the packet from the device, rc: %d\n", + rc); if (rc == ENA_COM_NO_SPACE) { ++rx_ring->rx_stats.bad_desc_num; rx_ring->adapter->reset_reason = @@ -2408,7 +2416,7 @@ static int ena_check_space_and_linearize_mbuf(struct ena_ring *tx_ring, * be needed so we reduce the segments number from num_segments to 1 */ if (!ena_com_sq_have_enough_space(tx_ring->ena_com_io_sq, 3)) { - PMD_DRV_LOG(DEBUG, "Not enough space in the tx queue\n"); + PMD_DRV_LOG(DEBUG, "Not enough space in the Tx queue\n"); return ENA_COM_NO_MEM; } ++tx_ring->tx_stats.linearize; @@ -2428,7 +2436,7 @@ static int ena_check_space_and_linearize_mbuf(struct ena_ring *tx_ring, */ if (!ena_com_sq_have_enough_space(tx_ring->ena_com_io_sq, num_segments + 2)) { - PMD_DRV_LOG(DEBUG, "Not enough space in the tx queue\n"); + PMD_DRV_LOG(DEBUG, "Not enough space in the Tx queue\n"); return ENA_COM_NO_MEM; } @@ -2544,7 +2552,7 @@ static int ena_xmit_mbuf(struct ena_ring *tx_ring, struct rte_mbuf *mbuf) if (unlikely(ena_com_is_doorbell_needed(tx_ring->ena_com_io_sq, &ena_tx_ctx))) { PMD_DRV_LOG(DEBUG, - "llq tx max burst size of queue %d achieved, writing doorbell to send burst\n", + "LLQ Tx max burst size of queue %d achieved, writing doorbell to send burst\n", 
tx_ring->id); ena_com_write_sq_doorbell(tx_ring->ena_com_io_sq); tx_ring->tx_stats.doorbells++; @@ -2666,10 +2674,10 @@ int ena_copy_eni_stats(struct ena_adapter *adapter) if (rc != 0) { if (rc == ENA_COM_UNSUPPORTED) { PMD_DRV_LOG(DEBUG, - "Retrieving ENI metrics is not supported.\n"); + "Retrieving ENI metrics is not supported\n"); } else { PMD_DRV_LOG(WARNING, - "Failed to get ENI metrics: %d\n", rc); + "Failed to get ENI metrics, rc: %d\n", rc); } return rc; } @@ -2993,7 +3001,7 @@ static void ena_notification(void *adapter_data, struct ena_admin_ena_hw_hints *hints; if (aenq_e->aenq_common_desc.group != ENA_ADMIN_NOTIFICATION) - PMD_DRV_LOG(WARNING, "Invalid group(%x) expected %x\n", + PMD_DRV_LOG(WARNING, "Invalid AENQ group: %x. Expected: %x\n", aenq_e->aenq_common_desc.group, ENA_ADMIN_NOTIFICATION); @@ -3004,7 +3012,7 @@ static void ena_notification(void *adapter_data, ena_update_hints(adapter, hints); break; default: - PMD_DRV_LOG(ERR, "Invalid aenq notification link state %d\n", + PMD_DRV_LOG(ERR, "Invalid AENQ notification link state: %d\n", aenq_e->aenq_common_desc.syndrome); } } @@ -3034,8 +3042,8 @@ static void ena_keep_alive(void *adapter_data, static void unimplemented_aenq_handler(__rte_unused void *data, __rte_unused struct ena_admin_aenq_entry *aenq_e) { - PMD_DRV_LOG(ERR, "Unknown event was received or event with " - "unimplemented handler\n"); + PMD_DRV_LOG(ERR, + "Unknown event was received or event with unimplemented handler\n"); } static struct ena_aenq_handlers aenq_handlers = { diff --git a/drivers/net/ena/ena_logs.h b/drivers/net/ena/ena_logs.h index 9053c9183f..040bebfb98 100644 --- a/drivers/net/ena/ena_logs.h +++ b/drivers/net/ena/ena_logs.h @@ -9,13 +9,13 @@ extern int ena_logtype_init; #define PMD_INIT_LOG(level, fmt, args...) 
\ rte_log(RTE_LOG_ ## level, ena_logtype_init, \ - "%s(): " fmt "\n", __func__, ## args) + "%s(): " fmt, __func__, ## args) #ifdef RTE_LIBRTE_ENA_DEBUG_RX extern int ena_logtype_rx; #define PMD_RX_LOG(level, fmt, args...) \ rte_log(RTE_LOG_ ## level, ena_logtype_rx, \ - "%s(): " fmt "\n", __func__, ## args) + "%s(): " fmt, __func__, ## args) #else #define PMD_RX_LOG(level, fmt, args...) do { } while (0) #endif @@ -24,7 +24,7 @@ extern int ena_logtype_rx; extern int ena_logtype_tx; #define PMD_TX_LOG(level, fmt, args...) \ rte_log(RTE_LOG_ ## level, ena_logtype_tx, \ - "%s(): " fmt "\n", __func__, ## args) + "%s(): " fmt, __func__, ## args) #else #define PMD_TX_LOG(level, fmt, args...) do { } while (0) #endif @@ -33,7 +33,7 @@ extern int ena_logtype_tx; extern int ena_logtype_tx_free; #define PMD_TX_FREE_LOG(level, fmt, args...) \ rte_log(RTE_LOG_ ## level, ena_logtype_tx_free, \ - "%s(): " fmt "\n", __func__, ## args) + "%s(): " fmt, __func__, ## args) #else #define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0) #endif @@ -41,6 +41,6 @@ extern int ena_logtype_tx_free; extern int ena_logtype_driver; #define PMD_DRV_LOG(level, fmt, args...) 
\ rte_log(RTE_LOG_ ## level, ena_logtype_driver, \ - "%s(): " fmt "\n", __func__, ## args) + "%s(): " fmt, __func__, ## args) #endif /* _ENA_LOGS_H_ */

From patchwork Wed Jul 14 10:43:16 2021
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 95854
X-Patchwork-Delegate: thomas@monjalon.net
From: Michal Krawczyk
To: dev@dpdk.org
Cc: ndagan@amazon.com, shaibran@amazon.com, upstream@semihalf.com, Michal Krawczyk, Igor Chauskin
Date: Wed, 14 Jul 2021 12:43:16 +0200
Message-Id: <20210714104320.4096-3-mk@semihalf.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210714104320.4096-1-mk@semihalf.com>
References: <20210714103435.3388-1-mk@semihalf.com> <20210714104320.4096-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 2/6] net/ena: make use of the IO debug build option

ENA defined its own logger flags for Tx and Rx, but those data path loggers were never actually used anywhere after their definition.
This commit uses the generic RTE_ETHDEV_DEBUG_RX and RTE_ETHDEV_DEBUG_TX flags to define PMD_RX_LOG and PMD_TX_LOG, which are now used on the data path. PMD_TX_FREE_LOG was removed, as it is unused in the current version of the driver. RTE_ETHDEV_DEBUG_[TR]X now wraps the extra checks for the driver state in the IO path - this saves extra conditionals on the hot path. The ena_com logger is no longer optional (previously it had to be explicitly enabled by defining the RTE_LIBRTE_ENA_COM_DEBUG flag). Having this logger optional made tracing of ena_com errors much harder. Due to the ena_com design, it is impossible to separate IO path logs from management path logs, so for now they will always be enabled. Default levels for the affected loggers were modified. Hot path loggers are initialized with a default level of DEBUG instead of NOTICE, as they have to be explicitly enabled anyway. The ena_com logging level was reduced from NOTICE to WARNING - as it is no longer optional, the driver should report just warnings in the ena_com layer. Signed-off-by: Michal Krawczyk Reviewed-by: Igor Chauskin Reviewed-by: Shai Brandes --- drivers/net/ena/base/ena_plat_dpdk.h | 7 ---- drivers/net/ena/ena_ethdev.c | 52 +++++++++++++++------------- drivers/net/ena/ena_logs.h | 13 ++----- 3 files changed, 30 insertions(+), 42 deletions(-) diff --git a/drivers/net/ena/base/ena_plat_dpdk.h b/drivers/net/ena/base/ena_plat_dpdk.h index f66df95591..4e7f52881a 100644 --- a/drivers/net/ena/base/ena_plat_dpdk.h +++ b/drivers/net/ena/base/ena_plat_dpdk.h @@ -108,7 +108,6 @@ extern int ena_logtype_com; #define GENMASK_ULL(h, l) (((~0ULL) - (1ULL << (l)) + 1) & \ (~0ULL >> (BITS_PER_LONG_LONG - 1 - (h)))) -#ifdef RTE_LIBRTE_ENA_COM_DEBUG #define ena_trc_log(dev, level, fmt, arg...) \ ( \ ENA_TOUCH(dev), \ @@ -121,12 +120,6 @@ extern int ena_logtype_com; #define ena_trc_warn(dev, format, arg...) \ ena_trc_log(dev, WARNING, format, ##arg) #define ena_trc_err(dev, format, arg...)
ena_trc_log(dev, ERR, format, ##arg) -#else -#define ena_trc_dbg(dev, format, arg...) ENA_TOUCH(dev) -#define ena_trc_info(dev, format, arg...) ENA_TOUCH(dev) -#define ena_trc_warn(dev, format, arg...) ENA_TOUCH(dev) -#define ena_trc_err(dev, format, arg...) ENA_TOUCH(dev) -#endif /* RTE_LIBRTE_ENA_COM_DEBUG */ #define ENA_WARN(cond, dev, format, arg...) \ do { \ diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index f5e812d507..2335436b6c 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -399,9 +399,9 @@ static int validate_tx_req_id(struct ena_ring *tx_ring, u16 req_id) } if (tx_info) - PMD_DRV_LOG(ERR, "tx_info doesn't have valid mbuf\n"); + PMD_TX_LOG(ERR, "tx_info doesn't have valid mbuf\n"); else - PMD_DRV_LOG(ERR, "Invalid req_id: %hu\n", req_id); + PMD_TX_LOG(ERR, "Invalid req_id: %hu\n", req_id); /* Trigger device reset */ ++tx_ring->tx_stats.bad_req_id; @@ -1461,7 +1461,7 @@ static int ena_add_single_rx_desc(struct ena_com_io_sq *io_sq, /* pass resource to device */ rc = ena_com_add_single_rx_desc(io_sq, &ebuf, id); if (unlikely(rc != 0)) - PMD_DRV_LOG(WARNING, "Failed adding Rx desc\n"); + PMD_RX_LOG(WARNING, "Failed adding Rx desc\n"); return rc; } @@ -1471,16 +1471,21 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count) unsigned int i; int rc; uint16_t next_to_use = rxq->next_to_use; - uint16_t in_use, req_id; + uint16_t req_id; +#ifdef RTE_ETHDEV_DEBUG_RX + uint16_t in_use; +#endif struct rte_mbuf **mbufs = rxq->rx_refill_buffer; if (unlikely(!count)) return 0; +#ifdef RTE_ETHDEV_DEBUG_RX in_use = rxq->ring_size - 1 - ena_com_free_q_entries(rxq->ena_com_io_sq); - ena_assert_msg(((in_use + count) < rxq->ring_size), - "bad ring state\n"); + if (unlikely((in_use + count) >= rxq->ring_size)) + PMD_RX_LOG(ERR, "Bad Rx ring state\n"); +#endif /* get resources for incoming packets */ rc = rte_pktmbuf_alloc_bulk(rxq->mb_pool, mbufs, count); @@ -1510,7 +1515,7 @@ static int 
ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count) } if (unlikely(i < count)) { - PMD_DRV_LOG(WARNING, + PMD_RX_LOG(WARNING, "Refilled Rx queue[%d] with only %d/%d buffers\n", rxq->id, i, count); rte_pktmbuf_free_bulk(&mbufs[i], count - i); @@ -2218,12 +2223,14 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, struct ena_com_rx_ctx ena_rx_ctx; int i, rc = 0; +#ifdef RTE_ETHDEV_DEBUG_RX /* Check adapter state */ if (unlikely(rx_ring->adapter->state != ENA_ADAPTER_STATE_RUNNING)) { - PMD_DRV_LOG(ALERT, + PMD_RX_LOG(ALERT, "Trying to receive pkts while device is NOT running\n"); return 0; } +#endif descs_in_use = rx_ring->ring_size - ena_com_free_q_entries(rx_ring->ena_com_io_sq) - 1; @@ -2239,7 +2246,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, rx_ring->ena_com_io_sq, &ena_rx_ctx); if (unlikely(rc)) { - PMD_DRV_LOG(ERR, + PMD_RX_LOG(ERR, "Failed to get the packet from the device, rc: %d\n", rc); if (rc == ENA_COM_NO_SPACE) { @@ -2416,13 +2423,13 @@ static int ena_check_space_and_linearize_mbuf(struct ena_ring *tx_ring, * be needed so we reduce the segments number from num_segments to 1 */ if (!ena_com_sq_have_enough_space(tx_ring->ena_com_io_sq, 3)) { - PMD_DRV_LOG(DEBUG, "Not enough space in the Tx queue\n"); + PMD_TX_LOG(DEBUG, "Not enough space in the Tx queue\n"); return ENA_COM_NO_MEM; } ++tx_ring->tx_stats.linearize; rc = rte_pktmbuf_linearize(mbuf); if (unlikely(rc)) { - PMD_DRV_LOG(WARNING, "Mbuf linearize failed\n"); + PMD_TX_LOG(WARNING, "Mbuf linearize failed\n"); rte_atomic64_inc(&tx_ring->adapter->drv_stats->ierrors); ++tx_ring->tx_stats.linearize_failed; return rc; @@ -2436,7 +2443,7 @@ static int ena_check_space_and_linearize_mbuf(struct ena_ring *tx_ring, */ if (!ena_com_sq_have_enough_space(tx_ring->ena_com_io_sq, num_segments + 2)) { - PMD_DRV_LOG(DEBUG, "Not enough space in the Tx queue\n"); + PMD_TX_LOG(DEBUG, "Not enough space in the Tx queue\n"); return 
ENA_COM_NO_MEM; } @@ -2551,7 +2558,7 @@ static int ena_xmit_mbuf(struct ena_ring *tx_ring, struct rte_mbuf *mbuf) if (unlikely(ena_com_is_doorbell_needed(tx_ring->ena_com_io_sq, &ena_tx_ctx))) { - PMD_DRV_LOG(DEBUG, + PMD_TX_LOG(DEBUG, "LLQ Tx max burst size of queue %d achieved, writing doorbell to send burst\n", tx_ring->id); ena_com_write_sq_doorbell(tx_ring->ena_com_io_sq); @@ -2628,12 +2635,14 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, struct ena_ring *tx_ring = (struct ena_ring *)(tx_queue); uint16_t sent_idx = 0; +#ifdef RTE_ETHDEV_DEBUG_TX /* Check adapter state */ if (unlikely(tx_ring->adapter->state != ENA_ADAPTER_STATE_RUNNING)) { - PMD_DRV_LOG(ALERT, + PMD_TX_LOG(ALERT, "Trying to xmit pkts while device is NOT running\n"); return 0; } +#endif for (sent_idx = 0; sent_idx < nb_pkts; sent_idx++) { if (ena_xmit_mbuf(tx_ring, tx_pkts[sent_idx])) @@ -2960,18 +2969,13 @@ RTE_PMD_REGISTER_KMOD_DEP(net_ena, "* igb_uio | uio_pci_generic | vfio-pci"); RTE_PMD_REGISTER_PARAM_STRING(net_ena, ENA_DEVARG_LARGE_LLQ_HDR "=<0|1>"); RTE_LOG_REGISTER_SUFFIX(ena_logtype_init, init, NOTICE); RTE_LOG_REGISTER_SUFFIX(ena_logtype_driver, driver, NOTICE); -#ifdef RTE_LIBRTE_ENA_DEBUG_RX -RTE_LOG_REGISTER_SUFFIX(ena_logtype_rx, rx, NOTICE); -#endif -#ifdef RTE_LIBRTE_ENA_DEBUG_TX -RTE_LOG_REGISTER_SUFFIX(ena_logtype_tx, tx, NOTICE); -#endif -#ifdef RTE_LIBRTE_ENA_DEBUG_TX_FREE -RTE_LOG_REGISTER_SUFFIX(ena_logtype_tx_free, tx_free, NOTICE); +#ifdef RTE_ETHDEV_DEBUG_RX +RTE_LOG_REGISTER_SUFFIX(ena_logtype_rx, rx, DEBUG); #endif -#ifdef RTE_LIBRTE_ENA_COM_DEBUG -RTE_LOG_REGISTER_SUFFIX(ena_logtype_com, com, NOTICE); +#ifdef RTE_ETHDEV_DEBUG_TX +RTE_LOG_REGISTER_SUFFIX(ena_logtype_tx, tx, DEBUG); #endif +RTE_LOG_REGISTER_SUFFIX(ena_logtype_com, com, WARNING); /****************************************************************************** ******************************** AENQ Handlers ******************************* diff --git 
a/drivers/net/ena/ena_logs.h b/drivers/net/ena/ena_logs.h index 040bebfb98..43f16458ea 100644 --- a/drivers/net/ena/ena_logs.h +++ b/drivers/net/ena/ena_logs.h @@ -11,7 +11,7 @@ extern int ena_logtype_init; rte_log(RTE_LOG_ ## level, ena_logtype_init, \ "%s(): " fmt, __func__, ## args) -#ifdef RTE_LIBRTE_ENA_DEBUG_RX +#ifdef RTE_ETHDEV_DEBUG_RX extern int ena_logtype_rx; #define PMD_RX_LOG(level, fmt, args...) \ rte_log(RTE_LOG_ ## level, ena_logtype_rx, \ @@ -20,7 +20,7 @@ extern int ena_logtype_rx; #define PMD_RX_LOG(level, fmt, args...) do { } while (0) #endif -#ifdef RTE_LIBRTE_ENA_DEBUG_TX +#ifdef RTE_ETHDEV_DEBUG_TX extern int ena_logtype_tx; #define PMD_TX_LOG(level, fmt, args...) \ rte_log(RTE_LOG_ ## level, ena_logtype_tx, \ @@ -29,15 +29,6 @@ extern int ena_logtype_tx; #define PMD_TX_LOG(level, fmt, args...) do { } while (0) #endif -#ifdef RTE_LIBRTE_ENA_DEBUG_TX_FREE -extern int ena_logtype_tx_free; -#define PMD_TX_FREE_LOG(level, fmt, args...) \ - rte_log(RTE_LOG_ ## level, ena_logtype_tx_free, \ - "%s(): " fmt, __func__, ## args) -#else -#define PMD_TX_FREE_LOG(level, fmt, args...) do { } while (0) -#endif - extern int ena_logtype_driver; #define PMD_DRV_LOG(level, fmt, args...) 
\ rte_log(RTE_LOG_ ## level, ena_logtype_driver, \ From patchwork Wed Jul 14 10:43:17 2021 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Krawczyk X-Patchwork-Id: 95855 X-Patchwork-Delegate: thomas@monjalon.net Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from mails.dpdk.org (mails.dpdk.org [217.70.189.124]) by inbox.dpdk.org (Postfix) with ESMTP id BC707A0C4B; Wed, 14 Jul 2021 12:43:46 +0200 (CEST) Received: from [217.70.189.124] (localhost [127.0.0.1]) by mails.dpdk.org (Postfix) with ESMTP id 3058041329; Wed, 14 Jul 2021 12:43:33 +0200 (CEST) Received: from mail-lf1-f52.google.com (mail-lf1-f52.google.com [209.85.167.52]) by mails.dpdk.org (Postfix) with ESMTP id C90DA4131F for ; Wed, 14 Jul 2021 12:43:31 +0200 (CEST) Received: by mail-lf1-f52.google.com with SMTP id a18so2804603lfs.10 for ; Wed, 14 Jul 2021 03:43:31 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=semihalf-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=V4S5VuFuIVM0xDiwmw+bkutu1YZQ0xs0EYeC39TGHVY=; b=MZVb4Vqy6GbTQqHahMSqen3J60dnKf/tFo9gipS/ULYG2S2O0Oi3lyJpJf7LcrWlN3 hGwgzv/MBzRrStWihkxda+vVluIKD2tnnn9JQkzKIu8fDxWsVDUz/Zk4YbCCksHvrrR1 8rde0US/Jh2Eh0//r8IzYFpNPD1k7iLStmexTG6x/9SX0jduiR3rsE8gIjh0c6tQfOjC AxlWCVECzhz+TOqKXAfERpJWQoFz1FzTzClTpluh703QnmlfAdb/4rLYebFdN9st094E f6n+aD8iZMohR16W2VfP3Un8XOyf9Fppyr9bspSc0oFvSln/v+N2kTfD5zQqZdOfMzSi T3aw== X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=1e100.net; s=20161025; h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to :references:mime-version:content-transfer-encoding; bh=V4S5VuFuIVM0xDiwmw+bkutu1YZQ0xs0EYeC39TGHVY=; b=KSNiXg5XCkYIXoLBHzSaQfBjZ+vtbSMd3wywNJr4FVsxHvJ7VnS9a49Od1H0YF/Lia guzSA7FdDuocCmsUb3ilQMXXGarylhbDCInmKyI702ZwzFc/pNPKMp6pcdi2cS/8/8E5 
SWxVzWQqyZZPaVh3/kWhuejrjHGBLIMvIREniDOse5pONsKFmqCmRRwgFNqqkwAYlJ/h pme4z263+IVNcuPIllIrq2amof02vRU3Wyv5uUg0DAnzTJkFfgSMY4havxfi6kBaBsqU xIoAHIehY9b+yBMDgChqc/l5To5mN4q+IVwr7nBDqaseZOUoKalsGS5xfZ77VzjEfzdz NonA== X-Gm-Message-State: AOAM531K4af1t46JQIko046ygkps3me3DI1aIhIlvHL7m5JzvzH8hQTZ rACDU+nekM9PvhRI02XrifjU//E4UJEU+AUu X-Google-Smtp-Source: ABdhPJxAgeM/+CuajwnVNpJOmwxz4xa/AaRcBxpCRWngeTKKVDVkuB6awjGl3wj5ORNIJkiPdF10tA== X-Received: by 2002:a05:6512:3182:: with SMTP id i2mr7721854lfe.652.1626259411125; Wed, 14 Jul 2021 03:43:31 -0700 (PDT) Received: from DESKTOP-U5LNN3J.localdomain (89-79-189-199.dynamic.chello.pl. [89.79.189.199]) by smtp.gmail.com with ESMTPSA id d3sm188928lji.101.2021.07.14.03.43.29 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 14 Jul 2021 03:43:30 -0700 (PDT) From: Michal Krawczyk To: dev@dpdk.org Cc: ndagan@amazon.com, shaibran@amazon.com, upstream@semihalf.com, Michal Krawczyk , stable@dpdk.org, Shay Agroskin Date: Wed, 14 Jul 2021 12:43:17 +0200 Message-Id: <20210714104320.4096-4-mk@semihalf.com> X-Mailer: git-send-email 2.25.1 In-Reply-To: <20210714104320.4096-1-mk@semihalf.com> References: <20210714103435.3388-1-mk@semihalf.com> <20210714104320.4096-1-mk@semihalf.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v3 3/6] net/ena: trigger reset when Tx prepare fails X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.29 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" If the prepare function failed, then it means the descriptors are in the invalid state. This condition now triggers the reset, which should be further handled by the application. To notify the application about prepare function failure, the error log was added. In general, it should never fail in normal conditions, as the Tx function checks for the available space in the Tx ring before the preparation even starts. 
Fixes: 2081d5e2e92d ("net/ena: add reset routine") Cc: stable@dpdk.org Signed-off-by: Michal Krawczyk Reviewed-by: Shai Brandes Reviewed-by: Shay Agroskin --- drivers/net/ena/ena_ethdev.c | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index 2335436b6c..67cd91046a 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -2570,7 +2570,11 @@ static int ena_xmit_mbuf(struct ena_ring *tx_ring, struct rte_mbuf *mbuf) rc = ena_com_prepare_tx(tx_ring->ena_com_io_sq, &ena_tx_ctx, &nb_hw_desc); if (unlikely(rc)) { + PMD_DRV_LOG(ERR, "Failed to prepare Tx buffers, rc: %d\n", rc); ++tx_ring->tx_stats.prepare_ctx_err; + tx_ring->adapter->reset_reason = + ENA_REGS_RESET_DRIVER_INVALID_STATE; + tx_ring->adapter->trigger_reset = true; return rc; } From patchwork Wed Jul 14 10:43:18 2021
From: Michal Krawczyk
Date: Wed, 14 Jul 2021 12:43:18 +0200
Message-Id: <20210714104320.4096-5-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 4/6] net/ena: add support for Rx interrupts

In order to support asynchronous Rx in applications, the driver has to configure the event file descriptors and configure the HW. This patch configures the appropriate data structures for the rte_ethdev layer, adds the .rx_queue_intr_enable and .rx_queue_intr_disable API handlers, and configures the IO queues to work in interrupt mode, if it was requested by the application.
Signed-off-by: Michal Krawczyk Reviewed-by: Artur Rojek Reviewed-by: Igor Chauskin Reviewed-by: Shai Brandes Reviewed-by: Shay Agroskin --- doc/guides/nics/ena.rst | 12 ++ doc/guides/nics/features/ena.ini | 1 + doc/guides/rel_notes/release_21_08.rst | 7 ++ drivers/net/ena/ena_ethdev.c | 146 +++++++++++++++++++++++-- 4 files changed, 154 insertions(+), 12 deletions(-) diff --git a/doc/guides/nics/ena.rst b/doc/guides/nics/ena.rst index 0f1f63f722..63951098ea 100644 --- a/doc/guides/nics/ena.rst +++ b/doc/guides/nics/ena.rst @@ -141,6 +141,7 @@ Supported features * LSC event notification * Watchdog (requires handling of timers in the application) * Device reset upon failure +* Rx interrupts Prerequisites ------------- @@ -180,6 +181,17 @@ At this point the system should be ready to run DPDK applications. Once the application runs to completion, the ENA can be detached from attached module if necessary. +**Rx interrupts support** + +ENA PMD supports Rx interrupts, which can be used to wake up lcores waiting for +input. Please note that it won't work with ``igb_uio``, so to use this feature, +``vfio-pci`` should be used. + +ENA handles admin interrupts and AENQ notifications on a separate interrupt. +There is a possibility that there won't be enough event file descriptors to +handle both admin and Rx interrupts. In that situation the Rx interrupt request +will fail.
+ **Note about usage on \*.metal instances** On AWS, the metal instances are supporting IOMMU for both arm64 and x86_64 diff --git a/doc/guides/nics/features/ena.ini b/doc/guides/nics/features/ena.ini index 2595ff53f9..3976bbbda6 100644 --- a/doc/guides/nics/features/ena.ini +++ b/doc/guides/nics/features/ena.ini @@ -6,6 +6,7 @@ [Features] Link status = Y Link status event = Y +Rx interrupt = Y MTU update = Y Jumbo frame = Y Scattered Rx = Y diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst index efcb0f3584..dac86a9d3e 100644 --- a/doc/guides/rel_notes/release_21_08.rst +++ b/doc/guides/rel_notes/release_21_08.rst @@ -94,6 +94,13 @@ New Features Added a new PMD driver for Wangxun 1 Gigabit Ethernet NICs. See the :doc:`../nics/ngbe` for more details. +* **Updated Amazon ENA PMD.** + + The new driver version (v2.4.0) introduced bug fixes and improvements, + including: + + * Added Rx interrupt support. + * **Added support for Marvell CNXK crypto driver.** * Added cnxk crypto PMD which provides support for an integrated diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index 67cd91046a..72f9887797 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -213,11 +213,11 @@ static void ena_rx_queue_release_bufs(struct ena_ring *ring); static void ena_tx_queue_release_bufs(struct ena_ring *ring); static int ena_link_update(struct rte_eth_dev *dev, int wait_to_complete); -static int ena_create_io_queue(struct ena_ring *ring); +static int ena_create_io_queue(struct rte_eth_dev *dev, struct ena_ring *ring); static void ena_queue_stop(struct ena_ring *ring); static void ena_queue_stop_all(struct rte_eth_dev *dev, enum ena_ring_type ring_type); -static int ena_queue_start(struct ena_ring *ring); +static int ena_queue_start(struct rte_eth_dev *dev, struct ena_ring *ring); static int ena_queue_start_all(struct rte_eth_dev *dev, enum ena_ring_type ring_type); static void 
ena_stats_restart(struct rte_eth_dev *dev); @@ -249,6 +249,11 @@ static int ena_process_bool_devarg(const char *key, static int ena_parse_devargs(struct ena_adapter *adapter, struct rte_devargs *devargs); static int ena_copy_eni_stats(struct ena_adapter *adapter); +static int ena_setup_rx_intr(struct rte_eth_dev *dev); +static int ena_rx_queue_intr_enable(struct rte_eth_dev *dev, + uint16_t queue_id); +static int ena_rx_queue_intr_disable(struct rte_eth_dev *dev, + uint16_t queue_id); static const struct eth_dev_ops ena_dev_ops = { .dev_configure = ena_dev_configure, @@ -269,6 +274,8 @@ static const struct eth_dev_ops ena_dev_ops = { .dev_reset = ena_dev_reset, .reta_update = ena_rss_reta_update, .reta_query = ena_rss_reta_query, + .rx_queue_intr_enable = ena_rx_queue_intr_enable, + .rx_queue_intr_disable = ena_rx_queue_intr_disable, }; void ena_rss_key_fill(void *key, size_t size) @@ -829,7 +836,7 @@ static int ena_queue_start_all(struct rte_eth_dev *dev, "Inconsistent state of Tx queues\n"); } - rc = ena_queue_start(&queues[i]); + rc = ena_queue_start(dev, &queues[i]); if (rc) { PMD_INIT_LOG(ERR, @@ -1074,6 +1081,10 @@ static int ena_start(struct rte_eth_dev *dev) if (rc) return rc; + rc = ena_setup_rx_intr(dev); + if (rc) + return rc; + rc = ena_queue_start_all(dev, ENA_RING_TYPE_RX); if (rc) return rc; @@ -1114,6 +1125,8 @@ static int ena_stop(struct rte_eth_dev *dev) { struct ena_adapter *adapter = dev->data->dev_private; struct ena_com_dev *ena_dev = &adapter->ena_dev; + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; int rc; /* Cannot free memory in secondary process */ @@ -1132,6 +1145,16 @@ static int ena_stop(struct rte_eth_dev *dev) PMD_DRV_LOG(ERR, "Device reset failed, rc: %d\n", rc); } + rte_intr_disable(intr_handle); + + rte_intr_efd_disable(intr_handle); + if (intr_handle->intr_vec != NULL) { + rte_free(intr_handle->intr_vec); + intr_handle->intr_vec = NULL; + } + + 
rte_intr_enable(intr_handle); + ++adapter->dev_stats.dev_stop; adapter->state = ENA_ADAPTER_STATE_STOPPED; dev->data->dev_started = 0; @@ -1139,10 +1162,12 @@ static int ena_stop(struct rte_eth_dev *dev) return 0; } -static int ena_create_io_queue(struct ena_ring *ring) +static int ena_create_io_queue(struct rte_eth_dev *dev, struct ena_ring *ring) { - struct ena_adapter *adapter; - struct ena_com_dev *ena_dev; + struct ena_adapter *adapter = ring->adapter; + struct ena_com_dev *ena_dev = &adapter->ena_dev; + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; struct ena_com_create_io_ctx ctx = /* policy set to _HOST just to satisfy icc compiler */ { ENA_ADMIN_PLACEMENT_POLICY_HOST, @@ -1151,9 +1176,7 @@ static int ena_create_io_queue(struct ena_ring *ring) unsigned int i; int rc; - adapter = ring->adapter; - ena_dev = &adapter->ena_dev; - + ctx.msix_vector = -1; if (ring->type == ENA_RING_TYPE_TX) { ena_qid = ENA_IO_TXQ_IDX(ring->id); ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_TX; @@ -1163,12 +1186,13 @@ static int ena_create_io_queue(struct ena_ring *ring) } else { ena_qid = ENA_IO_RXQ_IDX(ring->id); ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_RX; + if (rte_intr_dp_is_en(intr_handle)) + ctx.msix_vector = intr_handle->intr_vec[ring->id]; for (i = 0; i < ring->ring_size; i++) ring->empty_rx_reqs[i] = i; } ctx.queue_size = ring->ring_size; ctx.qid = ena_qid; - ctx.msix_vector = -1; /* interrupts not used */ ctx.numa_node = ring->numa_socket_id; rc = ena_com_create_io_queue(ena_dev, &ctx); @@ -1193,6 +1217,10 @@ static int ena_create_io_queue(struct ena_ring *ring) if (ring->type == ENA_RING_TYPE_TX) ena_com_update_numa_node(ring->ena_com_io_cq, ctx.numa_node); + /* Start with Rx interrupts being masked. 
*/ + if (ring->type == ENA_RING_TYPE_RX && rte_intr_dp_is_en(intr_handle)) + ena_rx_queue_intr_disable(dev, ring->id); + return 0; } @@ -1229,14 +1257,14 @@ static void ena_queue_stop_all(struct rte_eth_dev *dev, ena_queue_stop(&queues[i]); } -static int ena_queue_start(struct ena_ring *ring) +static int ena_queue_start(struct rte_eth_dev *dev, struct ena_ring *ring) { int rc, bufs_num; ena_assert_msg(ring->configured == 1, "Trying to start unconfigured queue\n"); - rc = ena_create_io_queue(ring); + rc = ena_create_io_queue(dev, ring); if (rc) { PMD_INIT_LOG(ERR, "Failed to create IO queue\n"); return rc; @@ -2944,6 +2972,100 @@ static int ena_parse_devargs(struct ena_adapter *adapter, return rc; } +static int ena_setup_rx_intr(struct rte_eth_dev *dev) +{ + struct rte_pci_device *pci_dev = RTE_ETH_DEV_TO_PCI(dev); + struct rte_intr_handle *intr_handle = &pci_dev->intr_handle; + int rc; + uint16_t vectors_nb, i; + bool rx_intr_requested = dev->data->dev_conf.intr_conf.rxq; + + if (!rx_intr_requested) + return 0; + + if (!rte_intr_cap_multiple(intr_handle)) { + PMD_DRV_LOG(ERR, + "Rx interrupt requested, but it isn't supported by the PCI driver\n"); + return -ENOTSUP; + } + + /* Disable interrupt mapping before the configuration starts. */ + rte_intr_disable(intr_handle); + + /* Verify if there are enough vectors available. 
*/ + vectors_nb = dev->data->nb_rx_queues; + if (vectors_nb > RTE_MAX_RXTX_INTR_VEC_ID) { + PMD_DRV_LOG(ERR, + "Too many Rx interrupts requested, maximum number: %d\n", + RTE_MAX_RXTX_INTR_VEC_ID); + rc = -ENOTSUP; + goto enable_intr; + } + + intr_handle->intr_vec = rte_zmalloc("intr_vec", + dev->data->nb_rx_queues * sizeof(*intr_handle->intr_vec), 0); + if (intr_handle->intr_vec == NULL) { + PMD_DRV_LOG(ERR, + "Failed to allocate interrupt vector for %d queues\n", + dev->data->nb_rx_queues); + rc = -ENOMEM; + goto enable_intr; + } + + rc = rte_intr_efd_enable(intr_handle, vectors_nb); + if (rc != 0) + goto free_intr_vec; + + if (!rte_intr_allow_others(intr_handle)) { + PMD_DRV_LOG(ERR, + "Not enough interrupts available to use both ENA Admin and Rx interrupts\n"); + goto disable_intr_efd; + } + + for (i = 0; i < vectors_nb; ++i) + intr_handle->intr_vec[i] = RTE_INTR_VEC_RXTX_OFFSET + i; + + rte_intr_enable(intr_handle); + return 0; + +disable_intr_efd: + rte_intr_efd_disable(intr_handle); +free_intr_vec: + rte_free(intr_handle->intr_vec); + intr_handle->intr_vec = NULL; +enable_intr: + rte_intr_enable(intr_handle); + return rc; +} + +static void ena_rx_queue_intr_set(struct rte_eth_dev *dev, + uint16_t queue_id, + bool unmask) +{ + struct ena_adapter *adapter = dev->data->dev_private; + struct ena_ring *rxq = &adapter->rx_ring[queue_id]; + struct ena_eth_io_intr_reg intr_reg; + + ena_com_update_intr_reg(&intr_reg, 0, 0, unmask); + ena_com_unmask_intr(rxq->ena_com_io_cq, &intr_reg); +} + +static int ena_rx_queue_intr_enable(struct rte_eth_dev *dev, + uint16_t queue_id) +{ + ena_rx_queue_intr_set(dev, queue_id, true); + + return 0; +} + +static int ena_rx_queue_intr_disable(struct rte_eth_dev *dev, + uint16_t queue_id) +{ + ena_rx_queue_intr_set(dev, queue_id, false); + + return 0; +} + /********************************************************************* * PMD configuration *********************************************************************/ From patchwork Wed 
Jul 14 10:43:19 2021
From: Michal Krawczyk
Date: Wed, 14 Jul 2021 12:43:19 +0200
Message-Id: <20210714104320.4096-6-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 5/6] net/ena: rework RSS configuration

Allow the user to specify their own hash key and hash ctrl if the device supports that. The HW interprets the key in reverse byte order, so the PMD reorders the key before passing it to the ena_com layer. The default key is generated randomly each time the device is initialized. Moreover, make minor adjustments to the reta size setting in terms of the returned error values. The RSS code was moved to the ena_rss.c file to improve readability.
Signed-off-by: Michal Krawczyk Reviewed-by: Shai Brandes Reviewed-by: Shay Agroskin Reviewed-by: Amit Bernstein --- doc/guides/nics/features/ena.ini | 1 + doc/guides/rel_notes/release_21_08.rst | 1 + drivers/net/ena/ena_ethdev.c | 230 ++-------- drivers/net/ena/ena_ethdev.h | 34 ++ drivers/net/ena/ena_rss.c | 591 +++++++++++++++++++++++++ drivers/net/ena/meson.build | 1 + 6 files changed, 663 insertions(+), 195 deletions(-) create mode 100644 drivers/net/ena/ena_rss.c diff --git a/doc/guides/nics/features/ena.ini b/doc/guides/nics/features/ena.ini index 3976bbbda6..b971243ff0 100644 --- a/doc/guides/nics/features/ena.ini +++ b/doc/guides/nics/features/ena.ini @@ -12,6 +12,7 @@ Jumbo frame = Y Scattered Rx = Y TSO = Y RSS hash = Y +RSS key update = Y RSS reta update = Y L3 checksum offload = Y L4 checksum offload = Y diff --git a/doc/guides/rel_notes/release_21_08.rst b/doc/guides/rel_notes/release_21_08.rst index dac86a9d3e..d01784860e 100644 --- a/doc/guides/rel_notes/release_21_08.rst +++ b/doc/guides/rel_notes/release_21_08.rst @@ -100,6 +100,7 @@ New Features including: * Added Rx interrupt support. + * RSS hash function key reconfiguration support. 
* **Added support for Marvell CNXK crypto driver.** diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index 72f9887797..ee059fc165 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -4,12 +4,6 @@ */ #include -#include -#include -#include -#include -#include -#include #include #include #include @@ -30,21 +24,12 @@ #define DRV_MODULE_VER_MINOR 3 #define DRV_MODULE_VER_SUBMINOR 0 -#define ENA_IO_TXQ_IDX(q) (2 * (q)) -#define ENA_IO_RXQ_IDX(q) (2 * (q) + 1) -/*reverse version of ENA_IO_RXQ_IDX*/ -#define ENA_IO_RXQ_IDX_REV(q) ((q - 1) / 2) - #define __MERGE_64B_H_L(h, l) (((uint64_t)h << 32) | l) -#define TEST_BIT(val, bit_shift) (val & (1UL << bit_shift)) #define GET_L4_HDR_LEN(mbuf) \ ((rte_pktmbuf_mtod_offset(mbuf, struct rte_tcp_hdr *, \ mbuf->l3_len + mbuf->l2_len)->data_off) >> 4) -#define ENA_RX_RSS_TABLE_LOG_SIZE 7 -#define ENA_RX_RSS_TABLE_SIZE (1 << ENA_RX_RSS_TABLE_LOG_SIZE) -#define ENA_HASH_KEY_SIZE 40 #define ETH_GSTRING_LEN 32 #define ARRAY_SIZE(x) RTE_DIM(x) @@ -223,12 +208,6 @@ static int ena_queue_start_all(struct rte_eth_dev *dev, static void ena_stats_restart(struct rte_eth_dev *dev); static int ena_infos_get(struct rte_eth_dev *dev, struct rte_eth_dev_info *dev_info); -static int ena_rss_reta_update(struct rte_eth_dev *dev, - struct rte_eth_rss_reta_entry64 *reta_conf, - uint16_t reta_size); -static int ena_rss_reta_query(struct rte_eth_dev *dev, - struct rte_eth_rss_reta_entry64 *reta_conf, - uint16_t reta_size); static void ena_interrupt_handler_rte(void *cb_arg); static void ena_timer_wd_callback(struct rte_timer *timer, void *arg); static void ena_destroy_device(struct rte_eth_dev *eth_dev); @@ -276,27 +255,13 @@ static const struct eth_dev_ops ena_dev_ops = { .reta_query = ena_rss_reta_query, .rx_queue_intr_enable = ena_rx_queue_intr_enable, .rx_queue_intr_disable = ena_rx_queue_intr_disable, + .rss_hash_update = ena_rss_hash_update, + .rss_hash_conf_get = ena_rss_hash_conf_get, }; -void 
ena_rss_key_fill(void *key, size_t size) -{ - static bool key_generated; - static uint8_t default_key[ENA_HASH_KEY_SIZE]; - size_t i; - - RTE_ASSERT(size <= ENA_HASH_KEY_SIZE); - - if (!key_generated) { - for (i = 0; i < ENA_HASH_KEY_SIZE; ++i) - default_key[i] = rte_rand() & 0xff; - key_generated = true; - } - - rte_memcpy(key, default_key, size); -} - static inline void ena_rx_mbuf_prepare(struct rte_mbuf *mbuf, - struct ena_com_rx_ctx *ena_rx_ctx) + struct ena_com_rx_ctx *ena_rx_ctx, + bool fill_hash) { uint64_t ol_flags = 0; uint32_t packet_type = 0; @@ -324,7 +289,8 @@ static inline void ena_rx_mbuf_prepare(struct rte_mbuf *mbuf, else ol_flags |= PKT_RX_L4_CKSUM_GOOD; - if (likely((packet_type & ENA_PTYPE_HAS_HASH) && !ena_rx_ctx->frag)) { + if (fill_hash && + likely((packet_type & ENA_PTYPE_HAS_HASH) && !ena_rx_ctx->frag)) { ol_flags |= PKT_RX_RSS_HASH; mbuf->hash.rss = ena_rx_ctx->hash; } @@ -446,7 +412,8 @@ static void ena_config_host_info(struct ena_com_dev *ena_dev) host_info->num_cpus = rte_lcore_count(); host_info->driver_supported_features = - ENA_ADMIN_HOST_INFO_RX_OFFSET_MASK; + ENA_ADMIN_HOST_INFO_RX_OFFSET_MASK | + ENA_ADMIN_HOST_INFO_RSS_CONFIGURABLE_FUNCTION_KEY_MASK; rc = ena_com_set_host_attributes(ena_dev); if (rc) { @@ -556,151 +523,6 @@ ena_dev_reset(struct rte_eth_dev *dev) return rc; } -static int ena_rss_reta_update(struct rte_eth_dev *dev, - struct rte_eth_rss_reta_entry64 *reta_conf, - uint16_t reta_size) -{ - struct ena_adapter *adapter = dev->data->dev_private; - struct ena_com_dev *ena_dev = &adapter->ena_dev; - int rc, i; - u16 entry_value; - int conf_idx; - int idx; - - if ((reta_size == 0) || (reta_conf == NULL)) - return -EINVAL; - - if (reta_size > ENA_RX_RSS_TABLE_SIZE) { - PMD_DRV_LOG(WARNING, - "Requested indirection table size (%d) is bigger than supported: %d\n", - reta_size, ENA_RX_RSS_TABLE_SIZE); - return -EINVAL; - } - - for (i = 0 ; i < reta_size ; i++) { - /* each reta_conf is for 64 entries. 
- * to support 128 we use 2 conf of 64 - */ - conf_idx = i / RTE_RETA_GROUP_SIZE; - idx = i % RTE_RETA_GROUP_SIZE; - if (TEST_BIT(reta_conf[conf_idx].mask, idx)) { - entry_value = - ENA_IO_RXQ_IDX(reta_conf[conf_idx].reta[idx]); - - rc = ena_com_indirect_table_fill_entry(ena_dev, - i, - entry_value); - if (unlikely(rc && rc != ENA_COM_UNSUPPORTED)) { - PMD_DRV_LOG(ERR, - "Cannot fill indirect table\n"); - return rc; - } - } - } - - rte_spinlock_lock(&adapter->admin_lock); - rc = ena_com_indirect_table_set(ena_dev); - rte_spinlock_unlock(&adapter->admin_lock); - if (unlikely(rc && rc != ENA_COM_UNSUPPORTED)) { - PMD_DRV_LOG(ERR, "Cannot flush the indirect table\n"); - return rc; - } - - PMD_DRV_LOG(DEBUG, "RSS configured %d entries for port %d\n", - reta_size, dev->data->port_id); - - return 0; -} - -/* Query redirection table. */ -static int ena_rss_reta_query(struct rte_eth_dev *dev, - struct rte_eth_rss_reta_entry64 *reta_conf, - uint16_t reta_size) -{ - struct ena_adapter *adapter = dev->data->dev_private; - struct ena_com_dev *ena_dev = &adapter->ena_dev; - int rc; - int i; - u32 indirect_table[ENA_RX_RSS_TABLE_SIZE] = {0}; - int reta_conf_idx; - int reta_idx; - - if (reta_size == 0 || reta_conf == NULL || - (reta_size > RTE_RETA_GROUP_SIZE && ((reta_conf + 1) == NULL))) - return -EINVAL; - - rte_spinlock_lock(&adapter->admin_lock); - rc = ena_com_indirect_table_get(ena_dev, indirect_table); - rte_spinlock_unlock(&adapter->admin_lock); - if (unlikely(rc && rc != ENA_COM_UNSUPPORTED)) { - PMD_DRV_LOG(ERR, "Cannot get indirection table\n"); - return -ENOTSUP; - } - - for (i = 0 ; i < reta_size ; i++) { - reta_conf_idx = i / RTE_RETA_GROUP_SIZE; - reta_idx = i % RTE_RETA_GROUP_SIZE; - if (TEST_BIT(reta_conf[reta_conf_idx].mask, reta_idx)) - reta_conf[reta_conf_idx].reta[reta_idx] = - ENA_IO_RXQ_IDX_REV(indirect_table[i]); - } - - return 0; -} - -static int ena_rss_init_default(struct ena_adapter *adapter) -{ - struct ena_com_dev *ena_dev = &adapter->ena_dev; - 
uint16_t nb_rx_queues = adapter->edev_data->nb_rx_queues; - int rc, i; - u32 val; - - rc = ena_com_rss_init(ena_dev, ENA_RX_RSS_TABLE_LOG_SIZE); - if (unlikely(rc)) { - PMD_DRV_LOG(ERR, "Cannot init indirection table\n"); - goto err_rss_init; - } - - for (i = 0; i < ENA_RX_RSS_TABLE_SIZE; i++) { - val = i % nb_rx_queues; - rc = ena_com_indirect_table_fill_entry(ena_dev, i, - ENA_IO_RXQ_IDX(val)); - if (unlikely(rc && (rc != ENA_COM_UNSUPPORTED))) { - PMD_DRV_LOG(ERR, "Cannot fill indirection table\n"); - goto err_fill_indir; - } - } - - rc = ena_com_fill_hash_function(ena_dev, ENA_ADMIN_CRC32, NULL, - ENA_HASH_KEY_SIZE, 0xFFFFFFFF); - if (unlikely(rc && (rc != ENA_COM_UNSUPPORTED))) { - PMD_DRV_LOG(INFO, "Cannot fill hash function\n"); - goto err_fill_indir; - } - - rc = ena_com_set_default_hash_ctrl(ena_dev); - if (unlikely(rc && (rc != ENA_COM_UNSUPPORTED))) { - PMD_DRV_LOG(INFO, "Cannot fill hash control\n"); - goto err_fill_indir; - } - - rc = ena_com_indirect_table_set(ena_dev); - if (unlikely(rc && (rc != ENA_COM_UNSUPPORTED))) { - PMD_DRV_LOG(ERR, "Cannot flush indirection table\n"); - goto err_fill_indir; - } - PMD_DRV_LOG(DEBUG, "RSS configured for port %d\n", - adapter->edev_data->port_id); - - return 0; - -err_fill_indir: - ena_com_rss_destroy(ena_dev); -err_rss_init: - - return rc; -} - static void ena_rx_queue_release_all(struct rte_eth_dev *dev) { struct ena_ring **queues = (struct ena_ring **)dev->data->rx_queues; @@ -1093,9 +915,8 @@ static int ena_start(struct rte_eth_dev *dev) if (rc) goto err_start_tx; - if (adapter->edev_data->dev_conf.rxmode.mq_mode & - ETH_MQ_RX_RSS_FLAG && adapter->edev_data->nb_rx_queues > 0) { - rc = ena_rss_init_default(adapter); + if (adapter->edev_data->dev_conf.rxmode.mq_mode & ETH_MQ_RX_RSS_FLAG) { + rc = ena_rss_configure(adapter); if (rc) goto err_rss_init; } @@ -1385,7 +1206,7 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx, uint16_t nb_desc, unsigned int socket_id, - __rte_unused const 
struct rte_eth_rxconf *rx_conf, + const struct rte_eth_rxconf *rx_conf, struct rte_mempool *mp) { struct ena_adapter *adapter = dev->data->dev_private; @@ -1469,6 +1290,8 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, for (i = 0; i < nb_desc; i++) rxq->empty_rx_reqs[i] = i; + rxq->offloads = rx_conf->offloads | dev->data->dev_conf.rxmode.offloads; + /* Store pointer to this queue in upper layer */ rxq->configured = 1; dev->data->rx_queues[queue_idx] = rxq; @@ -1932,6 +1755,9 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) adapter->offloads.rx_csum_supported = (get_feat_ctx.offload.rx_supported & ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_L4_IPV4_CSUM_MASK) != 0; + adapter->offloads.rss_hash_supported = + (get_feat_ctx.offload.rx_supported & + ENA_ADMIN_FEATURE_OFFLOAD_DESC_RX_HASH_MASK) != 0; /* Copy MAC address and point DPDK to it */ eth_dev->data->mac_addrs = (struct rte_ether_addr *)adapter->mac_addr; @@ -1939,6 +1765,12 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) get_feat_ctx.dev_attr.mac_addr, (struct rte_ether_addr *)adapter->mac_addr); + rc = ena_com_rss_init(ena_dev, ENA_RX_RSS_TABLE_LOG_SIZE); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(ERR, "Failed to initialize RSS in ENA device\n"); + goto err_delete_debug_area; + } + adapter->drv_stats = rte_zmalloc("adapter stats", sizeof(*adapter->drv_stats), RTE_CACHE_LINE_SIZE); @@ -1946,7 +1778,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) PMD_DRV_LOG(ERR, "Failed to allocate memory for adapter statistics\n"); rc = -ENOMEM; - goto err_delete_debug_area; + goto err_rss_destroy; } rte_spinlock_init(&adapter->admin_lock); @@ -1967,6 +1799,8 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) return 0; +err_rss_destroy: + ena_com_rss_destroy(ena_dev); err_delete_debug_area: ena_com_delete_debug_area(ena_dev); @@ -1991,6 +1825,8 @@ static void ena_destroy_device(struct rte_eth_dev *eth_dev) if (adapter->state != ENA_ADAPTER_STATE_CLOSED) ena_close(eth_dev); + 
ena_com_rss_destroy(ena_dev); + ena_com_delete_debug_area(ena_dev); ena_com_delete_host_info(ena_dev); @@ -2097,13 +1933,14 @@ static int ena_infos_get(struct rte_eth_dev *dev, /* Inform framework about available features */ dev_info->rx_offload_capa = rx_feat; - dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH; + if (adapter->offloads.rss_hash_supported) + dev_info->rx_offload_capa |= DEV_RX_OFFLOAD_RSS_HASH; dev_info->rx_queue_offload_capa = rx_feat; dev_info->tx_offload_capa = tx_feat; dev_info->tx_queue_offload_capa = tx_feat; - dev_info->flow_type_rss_offloads = ETH_RSS_IP | ETH_RSS_TCP | - ETH_RSS_UDP; + dev_info->flow_type_rss_offloads = ENA_ALL_RSS_HF; + dev_info->hash_key_size = ENA_HASH_KEY_SIZE; dev_info->min_rx_bufsize = ENA_MIN_FRAME_LEN; dev_info->max_rx_pktlen = adapter->max_mtu; @@ -2250,6 +2087,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t completed; struct ena_com_rx_ctx ena_rx_ctx; int i, rc = 0; + bool fill_hash; #ifdef RTE_ETHDEV_DEBUG_RX /* Check adapter state */ @@ -2260,6 +2098,8 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, } #endif + fill_hash = rx_ring->offloads & DEV_RX_OFFLOAD_RSS_HASH; + descs_in_use = rx_ring->ring_size - ena_com_free_q_entries(rx_ring->ena_com_io_sq) - 1; nb_pkts = RTE_MIN(descs_in_use, nb_pkts); @@ -2306,7 +2146,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, } /* fill mbuf attributes if any */ - ena_rx_mbuf_prepare(mbuf, &ena_rx_ctx); + ena_rx_mbuf_prepare(mbuf, &ena_rx_ctx, fill_hash); if (unlikely(mbuf->ol_flags & (PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD))) { diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h index 78718b759b..06ac8b06b5 100644 --- a/drivers/net/ena/ena_ethdev.h +++ b/drivers/net/ena/ena_ethdev.h @@ -6,10 +6,16 @@ #ifndef _ENA_ETHDEV_H_ #define _ENA_ETHDEV_H_ +#include +#include +#include +#include #include #include #include #include +#include +#include 
#include "ena_com.h" @@ -43,6 +49,21 @@ #define ENA_IDX_NEXT_MASKED(idx, mask) (((idx) + 1) & (mask)) #define ENA_IDX_ADD_MASKED(idx, n, mask) (((idx) + (n)) & (mask)) +#define ENA_RX_RSS_TABLE_LOG_SIZE 7 +#define ENA_RX_RSS_TABLE_SIZE (1 << ENA_RX_RSS_TABLE_LOG_SIZE) + +#define ENA_HASH_KEY_SIZE 40 + +#define ENA_ALL_RSS_HF (ETH_RSS_NONFRAG_IPV4_TCP | ETH_RSS_NONFRAG_IPV4_UDP | \ + ETH_RSS_NONFRAG_IPV6_TCP | ETH_RSS_NONFRAG_IPV6_UDP) + +#define ENA_IO_TXQ_IDX(q) (2 * (q)) +#define ENA_IO_RXQ_IDX(q) (2 * (q) + 1) +/* Reversed version of ENA_IO_RXQ_IDX */ +#define ENA_IO_RXQ_IDX_REV(q) (((q) - 1) / 2) + +extern struct ena_shared_data *ena_shared_data; + struct ena_adapter; enum ena_ring_type { @@ -205,6 +226,7 @@ struct ena_offloads { bool tso4_supported; bool tx_csum_supported; bool rx_csum_supported; + bool rss_hash_supported; }; /* board specific private data structure */ @@ -268,4 +290,16 @@ struct ena_adapter { bool use_large_llq_hdr; }; +int ena_rss_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); +int ena_rss_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size); +int ena_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf); +int ena_rss_hash_conf_get(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf); +int ena_rss_configure(struct ena_adapter *adapter); + #endif /* _ENA_ETHDEV_H_ */ diff --git a/drivers/net/ena/ena_rss.c b/drivers/net/ena/ena_rss.c new file mode 100644 index 0000000000..d718f877bc --- /dev/null +++ b/drivers/net/ena/ena_rss.c @@ -0,0 +1,591 @@ +/* SPDX-License-Identifier: BSD-3-Clause + * Copyright (c) 2020 Amazon.com, Inc. or its affiliates. + * All rights reserved. 
+ */ + +#include "ena_ethdev.h" +#include "ena_logs.h" + +#include + +#define TEST_BIT(val, bit_shift) ((val) & (1UL << (bit_shift))) + +#define ENA_HF_RSS_ALL_L2 (ENA_ADMIN_RSS_L3_SA | ENA_ADMIN_RSS_L3_DA) +#define ENA_HF_RSS_ALL_L3 (ENA_ADMIN_RSS_L3_SA | ENA_ADMIN_RSS_L3_DA) +#define ENA_HF_RSS_ALL_L4 (ENA_ADMIN_RSS_L4_SP | ENA_ADMIN_RSS_L4_DP) +#define ENA_HF_RSS_ALL_L3_L4 (ENA_HF_RSS_ALL_L3 | ENA_HF_RSS_ALL_L4) +#define ENA_HF_RSS_ALL_L2_L3_L4 (ENA_HF_RSS_ALL_L2 | ENA_HF_RSS_ALL_L3_L4) + +enum ena_rss_hash_fields { + ENA_HF_RSS_TCP4 = ENA_HF_RSS_ALL_L3_L4, + ENA_HF_RSS_UDP4 = ENA_HF_RSS_ALL_L3_L4, + ENA_HF_RSS_TCP6 = ENA_HF_RSS_ALL_L3_L4, + ENA_HF_RSS_UDP6 = ENA_HF_RSS_ALL_L3_L4, + ENA_HF_RSS_IP4 = ENA_HF_RSS_ALL_L3, + ENA_HF_RSS_IP6 = ENA_HF_RSS_ALL_L3, + ENA_HF_RSS_IP4_FRAG = ENA_HF_RSS_ALL_L3, + ENA_HF_RSS_NOT_IP = ENA_HF_RSS_ALL_L2, + ENA_HF_RSS_TCP6_EX = ENA_HF_RSS_ALL_L3_L4, + ENA_HF_RSS_IP6_EX = ENA_HF_RSS_ALL_L3, +}; + +static int ena_fill_indirect_table_default(struct ena_com_dev *ena_dev, + size_t tbl_size, + size_t queue_num); +static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto, + uint16_t field); +static uint16_t ena_eth_hf_to_admin_hf(enum ena_admin_flow_hash_proto proto, + uint64_t rss_hf); +static int ena_set_hash_fields(struct ena_com_dev *ena_dev, uint64_t rss_hf); +static int ena_rss_hash_set(struct ena_com_dev *ena_dev, + struct rte_eth_rss_conf *rss_conf, + bool default_allowed); +static void ena_reorder_rss_hash_key(uint8_t *reordered_key, + uint8_t *key, + size_t key_size); +static int ena_get_rss_hash_key(struct ena_com_dev *ena_dev, uint8_t *rss_key); + +void ena_rss_key_fill(void *key, size_t size) +{ + static bool key_generated; + static uint8_t default_key[ENA_HASH_KEY_SIZE]; + size_t i; + + RTE_ASSERT(size <= ENA_HASH_KEY_SIZE); + + if (!key_generated) { + for (i = 0; i < ENA_HASH_KEY_SIZE; ++i) + default_key[i] = rte_rand() & 0xff; + key_generated = true; + } + + rte_memcpy(key, default_key, size); +} + +int 
ena_rss_reta_update(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + struct ena_adapter *adapter = dev->data->dev_private; + struct ena_com_dev *ena_dev = &adapter->ena_dev; + int rc, i; + u16 entry_value; + int conf_idx; + int idx; + + if (reta_size == 0 || reta_conf == NULL) + return -EINVAL; + + if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) { + PMD_DRV_LOG(ERR, + "RSS was not configured for the PMD\n"); + return -ENOTSUP; + } + + if (reta_size > ENA_RX_RSS_TABLE_SIZE) { + PMD_DRV_LOG(WARNING, + "Requested indirection table size (%d) is bigger than supported: %d\n", + reta_size, ENA_RX_RSS_TABLE_SIZE); + return -EINVAL; + } + + for (i = 0 ; i < reta_size ; i++) { + /* Each reta_conf is for 64 entries. + * To support 128 we use 2 conf of 64. + */ + conf_idx = i / RTE_RETA_GROUP_SIZE; + idx = i % RTE_RETA_GROUP_SIZE; + if (TEST_BIT(reta_conf[conf_idx].mask, idx)) { + entry_value = + ENA_IO_RXQ_IDX(reta_conf[conf_idx].reta[idx]); + + rc = ena_com_indirect_table_fill_entry(ena_dev, i, + entry_value); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(ERR, + "Cannot fill indirection table\n"); + return rc; + } + } + } + + rte_spinlock_lock(&adapter->admin_lock); + rc = ena_com_indirect_table_set(ena_dev); + rte_spinlock_unlock(&adapter->admin_lock); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(ERR, "Cannot set the indirection table\n"); + return rc; + } + + PMD_DRV_LOG(DEBUG, "RSS configured %d entries for port %d\n", + reta_size, dev->data->port_id); + + return 0; +} + +/* Query redirection table. 
*/ +int ena_rss_reta_query(struct rte_eth_dev *dev, + struct rte_eth_rss_reta_entry64 *reta_conf, + uint16_t reta_size) +{ + uint32_t indirect_table[ENA_RX_RSS_TABLE_SIZE] = { 0 }; + struct ena_adapter *adapter = dev->data->dev_private; + struct ena_com_dev *ena_dev = &adapter->ena_dev; + int rc; + int i; + int reta_conf_idx; + int reta_idx; + + if (reta_size == 0 || reta_conf == NULL || + (reta_size > RTE_RETA_GROUP_SIZE && ((reta_conf + 1) == NULL))) + return -EINVAL; + + if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) { + PMD_DRV_LOG(ERR, + "RSS was not configured for the PMD\n"); + return -ENOTSUP; + } + + rte_spinlock_lock(&adapter->admin_lock); + rc = ena_com_indirect_table_get(ena_dev, indirect_table); + rte_spinlock_unlock(&adapter->admin_lock); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(ERR, "Cannot get indirection table\n"); + return rc; + } + + for (i = 0 ; i < reta_size ; i++) { + reta_conf_idx = i / RTE_RETA_GROUP_SIZE; + reta_idx = i % RTE_RETA_GROUP_SIZE; + if (TEST_BIT(reta_conf[reta_conf_idx].mask, reta_idx)) + reta_conf[reta_conf_idx].reta[reta_idx] = + ENA_IO_RXQ_IDX_REV(indirect_table[i]); + } + + return 0; +} + +static int ena_fill_indirect_table_default(struct ena_com_dev *ena_dev, + size_t tbl_size, + size_t queue_num) +{ + size_t i; + int rc; + uint16_t val; + + for (i = 0; i < tbl_size; ++i) { + val = i % queue_num; + rc = ena_com_indirect_table_fill_entry(ena_dev, i, + ENA_IO_RXQ_IDX(val)); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(DEBUG, + "Failed to set %zu indirection table entry with val %" PRIu16 "\n", + i, val); + return rc; + } + } + + return 0; +} + +static uint64_t ena_admin_hf_to_eth_hf(enum ena_admin_flow_hash_proto proto, + uint16_t fields) +{ + uint64_t rss_hf = 0; + + /* If no fields are activated, then RSS is disabled for this proto */ + if ((fields & ENA_HF_RSS_ALL_L2_L3_L4) == 0) + return 0; + + /* Convert proto to ETH flag */ + switch (proto) { + case ENA_ADMIN_RSS_TCP4: + rss_hf |= 
ETH_RSS_NONFRAG_IPV4_TCP; + break; + case ENA_ADMIN_RSS_UDP4: + rss_hf |= ETH_RSS_NONFRAG_IPV4_UDP; + break; + case ENA_ADMIN_RSS_TCP6: + rss_hf |= ETH_RSS_NONFRAG_IPV6_TCP; + break; + case ENA_ADMIN_RSS_UDP6: + rss_hf |= ETH_RSS_NONFRAG_IPV6_UDP; + break; + case ENA_ADMIN_RSS_IP4: + rss_hf |= ETH_RSS_IPV4; + break; + case ENA_ADMIN_RSS_IP6: + rss_hf |= ETH_RSS_IPV6; + break; + case ENA_ADMIN_RSS_IP4_FRAG: + rss_hf |= ETH_RSS_FRAG_IPV4; + break; + case ENA_ADMIN_RSS_NOT_IP: + rss_hf |= ETH_RSS_L2_PAYLOAD; + break; + case ENA_ADMIN_RSS_TCP6_EX: + rss_hf |= ETH_RSS_IPV6_TCP_EX; + break; + case ENA_ADMIN_RSS_IP6_EX: + rss_hf |= ETH_RSS_IPV6_EX; + break; + default: + break; + }; + + /* Check if only DA or SA is being used for L3. */ + switch (fields & ENA_HF_RSS_ALL_L3) { + case ENA_ADMIN_RSS_L3_SA: + rss_hf |= ETH_RSS_L3_SRC_ONLY; + break; + case ENA_ADMIN_RSS_L3_DA: + rss_hf |= ETH_RSS_L3_DST_ONLY; + break; + default: + break; + }; + + /* Check if only DA or SA is being used for L4. */ + switch (fields & ENA_HF_RSS_ALL_L4) { + case ENA_ADMIN_RSS_L4_SP: + rss_hf |= ETH_RSS_L4_SRC_ONLY; + break; + case ENA_ADMIN_RSS_L4_DP: + rss_hf |= ETH_RSS_L4_DST_ONLY; + break; + default: + break; + }; + + return rss_hf; +} + +static uint16_t ena_eth_hf_to_admin_hf(enum ena_admin_flow_hash_proto proto, + uint64_t rss_hf) +{ + uint16_t fields_mask = 0; + + /* L2 always uses source and destination addresses. */ + fields_mask = ENA_ADMIN_RSS_L2_DA | ENA_ADMIN_RSS_L2_SA; + + /* Determine which fields of L3 should be used. */ + switch (rss_hf & (ETH_RSS_L3_SRC_ONLY | ETH_RSS_L3_DST_ONLY)) { + case ETH_RSS_L3_DST_ONLY: + fields_mask |= ENA_ADMIN_RSS_L3_DA; + break; + case ETH_RSS_L3_SRC_ONLY: + fields_mask |= ENA_ADMIN_RSS_L3_SA; + break; + default: + /* + * If SRC nor DST aren't set, it means both of them should be + * used. + */ + fields_mask |= ENA_HF_RSS_ALL_L3; + } + + /* Determine which fields of L4 should be used. 
*/ + switch (rss_hf & (ETH_RSS_L4_SRC_ONLY | ETH_RSS_L4_DST_ONLY)) { + case ETH_RSS_L4_DST_ONLY: + fields_mask |= ENA_ADMIN_RSS_L4_DP; + break; + case ETH_RSS_L4_SRC_ONLY: + fields_mask |= ENA_ADMIN_RSS_L4_SP; + break; + default: + /* + * If SRC nor DST aren't set, it means both of them should be + * used. + */ + fields_mask |= ENA_HF_RSS_ALL_L4; + } + + /* Return appropriate hash fields. */ + switch (proto) { + case ENA_ADMIN_RSS_TCP4: + return ENA_HF_RSS_TCP4 & fields_mask; + case ENA_ADMIN_RSS_UDP4: + return ENA_HF_RSS_UDP4 & fields_mask; + case ENA_ADMIN_RSS_TCP6: + return ENA_HF_RSS_TCP6 & fields_mask; + case ENA_ADMIN_RSS_UDP6: + return ENA_HF_RSS_UDP6 & fields_mask; + case ENA_ADMIN_RSS_IP4: + return ENA_HF_RSS_IP4 & fields_mask; + case ENA_ADMIN_RSS_IP6: + return ENA_HF_RSS_IP6 & fields_mask; + case ENA_ADMIN_RSS_IP4_FRAG: + return ENA_HF_RSS_IP4_FRAG & fields_mask; + case ENA_ADMIN_RSS_NOT_IP: + return ENA_HF_RSS_NOT_IP & fields_mask; + case ENA_ADMIN_RSS_TCP6_EX: + return ENA_HF_RSS_TCP6_EX & fields_mask; + case ENA_ADMIN_RSS_IP6_EX: + return ENA_HF_RSS_IP6_EX & fields_mask; + default: + break; + } + + return 0; +} + +static int ena_set_hash_fields(struct ena_com_dev *ena_dev, uint64_t rss_hf) +{ + struct ena_admin_proto_input selected_fields[ENA_ADMIN_RSS_PROTO_NUM] = {}; + int rc, i; + + /* Turn on appropriate fields for each requested packet type */ + if ((rss_hf & ETH_RSS_NONFRAG_IPV4_TCP) != 0) + selected_fields[ENA_ADMIN_RSS_TCP4].fields = + ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP4, rss_hf); + + if ((rss_hf & ETH_RSS_NONFRAG_IPV4_UDP) != 0) + selected_fields[ENA_ADMIN_RSS_UDP4].fields = + ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_UDP4, rss_hf); + + if ((rss_hf & ETH_RSS_NONFRAG_IPV6_TCP) != 0) + selected_fields[ENA_ADMIN_RSS_TCP6].fields = + ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP6, rss_hf); + + if ((rss_hf & ETH_RSS_NONFRAG_IPV6_UDP) != 0) + selected_fields[ENA_ADMIN_RSS_UDP6].fields = + ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_UDP6, rss_hf); + + if 
((rss_hf & ETH_RSS_IPV4) != 0) + selected_fields[ENA_ADMIN_RSS_IP4].fields = + ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP4, rss_hf); + + if ((rss_hf & ETH_RSS_IPV6) != 0) + selected_fields[ENA_ADMIN_RSS_IP6].fields = + ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP6, rss_hf); + + if ((rss_hf & ETH_RSS_FRAG_IPV4) != 0) + selected_fields[ENA_ADMIN_RSS_IP4_FRAG].fields = + ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP4_FRAG, rss_hf); + + if ((rss_hf & ETH_RSS_L2_PAYLOAD) != 0) + selected_fields[ENA_ADMIN_RSS_NOT_IP].fields = + ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_NOT_IP, rss_hf); + + if ((rss_hf & ETH_RSS_IPV6_TCP_EX) != 0) + selected_fields[ENA_ADMIN_RSS_TCP6_EX].fields = + ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_TCP6_EX, rss_hf); + + if ((rss_hf & ETH_RSS_IPV6_EX) != 0) + selected_fields[ENA_ADMIN_RSS_IP6_EX].fields = + ena_eth_hf_to_admin_hf(ENA_ADMIN_RSS_IP6_EX, rss_hf); + + /* Try to write them to the device */ + for (i = 0; i < ENA_ADMIN_RSS_PROTO_NUM; i++) { + rc = ena_com_fill_hash_ctrl(ena_dev, + (enum ena_admin_flow_hash_proto)i, + selected_fields[i].fields); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(DEBUG, + "Failed to set ENA HF %d with fields %" PRIu16 "\n", + i, selected_fields[i].fields); + return rc; + } + } + + return 0; +} + +static int ena_rss_hash_set(struct ena_com_dev *ena_dev, + struct rte_eth_rss_conf *rss_conf, + bool default_allowed) +{ + uint8_t hw_rss_key[ENA_HASH_KEY_SIZE]; + uint8_t *rss_key; + int rc; + + if (rss_conf->rss_key != NULL) { + /* Reorder the RSS key bytes for the hardware requirements. */ + ena_reorder_rss_hash_key(hw_rss_key, rss_conf->rss_key, + ENA_HASH_KEY_SIZE); + rss_key = hw_rss_key; + } else { + rss_key = NULL; + } + + /* If the rss_key is NULL, then the randomized key will be used. 
*/ + rc = ena_com_fill_hash_function(ena_dev, ENA_ADMIN_TOEPLITZ, + rss_key, ENA_HASH_KEY_SIZE, 0); + if (rc != 0 && !(default_allowed && rc == ENA_COM_UNSUPPORTED)) { + PMD_DRV_LOG(ERR, + "Failed to set RSS hash function in the device\n"); + return rc; + } + + rc = ena_set_hash_fields(ena_dev, rss_conf->rss_hf); + if (rc == ENA_COM_UNSUPPORTED) { + if (rss_conf->rss_key == NULL && !default_allowed) { + PMD_DRV_LOG(ERR, + "Setting RSS hash fields is not supported\n"); + return -ENOTSUP; + } + PMD_DRV_LOG(WARNING, + "Setting RSS hash fields is not supported. Using default values: 0x%llx\n", + ENA_ALL_RSS_HF); + } else if (rc != 0) { + PMD_DRV_LOG(ERR, "Failed to set RSS hash fields\n"); + return rc; + } + + return 0; +} + +/* ENA HW interprets the RSS key in reverse bytes order. Because of that, the + * key must be processed upon interaction with ena_com layer. + */ +static void ena_reorder_rss_hash_key(uint8_t *reordered_key, + uint8_t *key, + size_t key_size) +{ + size_t i, rev_i; + + for (i = 0, rev_i = key_size - 1; i < key_size; ++i, --rev_i) + reordered_key[i] = key[rev_i]; +} + +static int ena_get_rss_hash_key(struct ena_com_dev *ena_dev, uint8_t *rss_key) +{ + uint8_t hw_rss_key[ENA_HASH_KEY_SIZE]; + int rc; + + /* The default RSS hash key cannot be retrieved from the HW. Unless it's + * explicitly set, this operation shouldn't be supported. 
+ */ + if (ena_dev->rss.hash_key == NULL) { + PMD_DRV_LOG(WARNING, + "Retrieving default RSS hash key is not supported\n"); + return -ENOTSUP; + } + + rc = ena_com_get_hash_key(ena_dev, hw_rss_key); + if (rc != 0) + return rc; + + ena_reorder_rss_hash_key(rss_key, hw_rss_key, ENA_HASH_KEY_SIZE); + + return 0; +} + +int ena_rss_configure(struct ena_adapter *adapter) +{ + struct rte_eth_rss_conf *rss_conf; + struct ena_com_dev *ena_dev; + int rc; + + ena_dev = &adapter->ena_dev; + rss_conf = &adapter->edev_data->dev_conf.rx_adv_conf.rss_conf; + + if (adapter->edev_data->nb_rx_queues == 0) + return 0; + + /* Restart the indirection table. The number of queues could change + * between start/stop calls, so it must be reinitialized with default + * values. + */ + rc = ena_fill_indirect_table_default(ena_dev, ENA_RX_RSS_TABLE_SIZE, + adapter->edev_data->nb_rx_queues); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(ERR, + "Failed to fill indirection table with default values\n"); + return rc; + } + + rc = ena_com_indirect_table_set(ena_dev); + if (unlikely(rc != 0 && rc != ENA_COM_UNSUPPORTED)) { + PMD_DRV_LOG(ERR, + "Failed to set indirection table in the device\n"); + return rc; + } + + rc = ena_rss_hash_set(ena_dev, rss_conf, true); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(ERR, "Failed to set RSS hash\n"); + return rc; + } + + PMD_DRV_LOG(DEBUG, "RSS configured for port %d\n", + adapter->edev_data->port_id); + + return 0; +} + +int ena_rss_hash_update(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct ena_adapter *adapter = dev->data->dev_private; + int rc; + + rte_spinlock_lock(&adapter->admin_lock); + rc = ena_rss_hash_set(&adapter->ena_dev, rss_conf, false); + rte_spinlock_unlock(&adapter->admin_lock); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(ERR, "Failed to set RSS hash\n"); + return rc; + } + + return 0; +} + +int ena_rss_hash_conf_get(struct rte_eth_dev *dev, + struct rte_eth_rss_conf *rss_conf) +{ + struct ena_adapter *adapter = 
dev->data->dev_private; + struct ena_com_dev *ena_dev = &adapter->ena_dev; + enum ena_admin_flow_hash_proto proto; + uint64_t rss_hf = 0; + int rc, i; + uint16_t admin_hf; + static bool warn_once; + + if (!(dev->data->dev_conf.rxmode.offloads & DEV_RX_OFFLOAD_RSS_HASH)) { + PMD_DRV_LOG(ERR, "RSS was not configured for the PMD\n"); + return -ENOTSUP; + } + + if (rss_conf->rss_key != NULL) { + rc = ena_get_rss_hash_key(ena_dev, rss_conf->rss_key); + if (unlikely(rc != 0)) { + PMD_DRV_LOG(ERR, + "Cannot retrieve RSS hash key, err: %d\n", + rc); + return rc; + } + } + + for (i = 0; i < ENA_ADMIN_RSS_PROTO_NUM; ++i) { + proto = (enum ena_admin_flow_hash_proto)i; + rte_spinlock_lock(&adapter->admin_lock); + rc = ena_com_get_hash_ctrl(ena_dev, proto, &admin_hf); + rte_spinlock_unlock(&adapter->admin_lock); + if (rc == ENA_COM_UNSUPPORTED) { + /* As some devices may support only reading rss hash + * key and not the hash ctrl, we want to notify the + * caller that this feature is only partially supported + * and do not return an error - the caller could be + * interested only in the key value. + */ + if (!warn_once) { + PMD_DRV_LOG(WARNING, + "Reading hash control from the device is not supported. 
.rss_hf will contain a default value.\n"); + warn_once = true; + } + rss_hf = ENA_ALL_RSS_HF; + break; + } else if (rc != 0) { + PMD_DRV_LOG(ERR, + "Failed to retrieve hash ctrl for proto: %d with err: %d\n", + i, rc); + return rc; + } + + rss_hf |= ena_admin_hf_to_eth_hf(proto, admin_hf); + } + + rss_conf->rss_hf = rss_hf; + return 0; +} diff --git a/drivers/net/ena/meson.build b/drivers/net/ena/meson.build index cc912fceba..d02ed3f64f 100644 --- a/drivers/net/ena/meson.build +++ b/drivers/net/ena/meson.build @@ -9,6 +9,7 @@ endif sources = files( 'ena_ethdev.c', + 'ena_rss.c', 'base/ena_com.c', 'base/ena_eth_com.c', )

From patchwork Wed Jul 14 10:43:20 2021
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 95858
X-Patchwork-Delegate: thomas@monjalon.net
From: Michal Krawczyk
To: dev@dpdk.org
Cc: ndagan@amazon.com, shaibran@amazon.com, upstream@semihalf.com, Michal Krawczyk
Date: Wed, 14 Jul 2021 12:43:20 +0200
Message-Id: <20210714104320.4096-7-mk@semihalf.com>
In-Reply-To: <20210714104320.4096-1-mk@semihalf.com>
References: <20210714103435.3388-1-mk@semihalf.com> <20210714104320.4096-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 6/6] net/ena: update version to v2.4.0

This version update contains:
* Rx interrupts feature,
* Support for the RSS hash function reconfiguration,
* Small rework of the driver logs,
* Reset trigger on Tx path fix.

Signed-off-by: Michal Krawczyk
---
 drivers/net/ena/ena_ethdev.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index ee059fc165..14f776b5ad 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -21,7 +21,7 @@
 #include 

 #define DRV_MODULE_VER_MAJOR 2
-#define DRV_MODULE_VER_MINOR 3
+#define DRV_MODULE_VER_MINOR 4
 #define DRV_MODULE_VER_SUBMINOR 0

 #define __MERGE_64B_H_L(h, l) (((uint64_t)h << 32) | l)