From patchwork Wed Apr 1 14:21:14 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 67601
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk
To: dev@dpdk.org
Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com,
 igorch@amazon.com, Michal Krawczyk
Date: Wed, 1 Apr 2020 16:21:14 +0200
Message-Id: <20200401142127.13715-17-mk@semihalf.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200401142127.13715-1-mk@semihalf.com>
References: <20200401142127.13715-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v2 16/29] net/ena: refactor getting IO queues
 capabilities

The values read from the device describe its maximum capabilities.
Because of that, the names of the fields storing those values, as well as
the functions and temporary variables using them, should be more
descriptive in order to improve the self-documentation of the code.

In connection with this, the way of getting the maximum queue size can be
simplified: no hardcoded values are needed, since the device sends its
capabilities anyway.

Signed-off-by: Michal Krawczyk
Reviewed-by: Igor Chauskin
Reviewed-by: Guy Tzalik
---
 drivers/net/ena/ena_ethdev.c | 101 ++++++++++++++++-------------------
 drivers/net/ena/ena_ethdev.h |  11 ++--
 2 files changed, 52 insertions(+), 60 deletions(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 13a016227c..d5f700093f 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -82,9 +82,6 @@ struct ena_stats {
 #define ENA_STAT_GLOBAL_ENTRY(stat) \
         ENA_STAT_ENTRY(stat, dev)
 
-#define ENA_MAX_RING_SIZE_RX 8192
-#define ENA_MAX_RING_SIZE_TX 1024
-
 /*
  * Each rte_memzone should have unique name.
  * To satisfy it, count number of allocation and add it to name.
@@ -845,29 +842,26 @@ static int ena_check_valid_conf(struct ena_adapter *adapter)
 }
 
 static int
-ena_calc_queue_size(struct ena_calc_queue_size_ctx *ctx)
+ena_calc_io_queue_size(struct ena_calc_queue_size_ctx *ctx)
 {
         struct ena_admin_feature_llq_desc *llq = &ctx->get_feat_ctx->llq;
         struct ena_com_dev *ena_dev = ctx->ena_dev;
-        uint32_t tx_queue_size = ENA_MAX_RING_SIZE_TX;
-        uint32_t rx_queue_size = ENA_MAX_RING_SIZE_RX;
+        uint32_t max_tx_queue_size;
+        uint32_t max_rx_queue_size;
 
         if (ena_dev->supported_features & BIT(ENA_ADMIN_MAX_QUEUES_EXT)) {
                 struct ena_admin_queue_ext_feature_fields *max_queue_ext =
                         &ctx->get_feat_ctx->max_queue_ext.max_queue_ext;
-                rx_queue_size = RTE_MIN(rx_queue_size,
-                        max_queue_ext->max_rx_cq_depth);
-                rx_queue_size = RTE_MIN(rx_queue_size,
+                max_rx_queue_size = RTE_MIN(max_queue_ext->max_rx_cq_depth,
                         max_queue_ext->max_rx_sq_depth);
-                tx_queue_size = RTE_MIN(tx_queue_size,
-                        max_queue_ext->max_tx_cq_depth);
+                max_tx_queue_size = max_queue_ext->max_tx_cq_depth;
 
                 if (ena_dev->tx_mem_queue_type ==
                     ENA_ADMIN_PLACEMENT_POLICY_DEV) {
-                        tx_queue_size = RTE_MIN(tx_queue_size,
+                        max_tx_queue_size = RTE_MIN(max_tx_queue_size,
                                 llq->max_llq_depth);
                 } else {
-                        tx_queue_size = RTE_MIN(tx_queue_size,
+                        max_tx_queue_size = RTE_MIN(max_tx_queue_size,
                                 max_queue_ext->max_tx_sq_depth);
                 }
 
@@ -878,39 +872,36 @@ ena_calc_queue_size(struct ena_calc_queue_size_ctx *ctx)
         } else {
                 struct ena_admin_queue_feature_desc *max_queues =
                         &ctx->get_feat_ctx->max_queues;
-                rx_queue_size = RTE_MIN(rx_queue_size,
-                        max_queues->max_cq_depth);
-                rx_queue_size = RTE_MIN(rx_queue_size,
+                max_rx_queue_size = RTE_MIN(max_queues->max_cq_depth,
                         max_queues->max_sq_depth);
-                tx_queue_size = RTE_MIN(tx_queue_size,
-                        max_queues->max_cq_depth);
+                max_tx_queue_size = max_queues->max_cq_depth;
 
                 if (ena_dev->tx_mem_queue_type ==
                     ENA_ADMIN_PLACEMENT_POLICY_DEV) {
-                        tx_queue_size = RTE_MIN(tx_queue_size,
+                        max_tx_queue_size = RTE_MIN(max_tx_queue_size,
                                 llq->max_llq_depth);
                 } else {
-                        tx_queue_size = RTE_MIN(tx_queue_size,
+                        max_tx_queue_size = RTE_MIN(max_tx_queue_size,
                                 max_queues->max_sq_depth);
                 }
 
                 ctx->max_rx_sgl_size = RTE_MIN(ENA_PKT_MAX_BUFS,
-                        max_queues->max_packet_tx_descs);
-                ctx->max_tx_sgl_size = RTE_MIN(ENA_PKT_MAX_BUFS,
                         max_queues->max_packet_rx_descs);
+                ctx->max_tx_sgl_size = RTE_MIN(ENA_PKT_MAX_BUFS,
+                        max_queues->max_packet_tx_descs);
         }
 
         /* Round down to the nearest power of 2 */
-        rx_queue_size = rte_align32prevpow2(rx_queue_size);
-        tx_queue_size = rte_align32prevpow2(tx_queue_size);
+        max_rx_queue_size = rte_align32prevpow2(max_rx_queue_size);
+        max_tx_queue_size = rte_align32prevpow2(max_tx_queue_size);
 
-        if (unlikely(rx_queue_size == 0 || tx_queue_size == 0)) {
+        if (unlikely(max_rx_queue_size == 0 || max_tx_queue_size == 0)) {
                 PMD_INIT_LOG(ERR, "Invalid queue size");
                 return -EFAULT;
         }
 
-        ctx->rx_queue_size = rx_queue_size;
-        ctx->tx_queue_size = tx_queue_size;
+        ctx->max_tx_queue_size = max_tx_queue_size;
+        ctx->max_rx_queue_size = max_rx_queue_size;
 
         return 0;
 }
@@ -1230,15 +1221,15 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev,
                 return -EINVAL;
         }
 
-        if (nb_desc > adapter->tx_ring_size) {
+        if (nb_desc > adapter->max_tx_ring_size) {
                 PMD_DRV_LOG(ERR,
                         "Unsupported size of TX queue (max size: %d)\n",
-                        adapter->tx_ring_size);
+                        adapter->max_tx_ring_size);
                 return -EINVAL;
         }
 
         if (nb_desc == RTE_ETH_DEV_FALLBACK_TX_RINGSIZE)
-                nb_desc = adapter->tx_ring_size;
+                nb_desc = adapter->max_tx_ring_size;
 
         txq->port_id = dev->data->port_id;
         txq->next_to_clean = 0;
@@ -1310,7 +1301,7 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev,
         }
 
         if (nb_desc == RTE_ETH_DEV_FALLBACK_RX_RINGSIZE)
-                nb_desc = adapter->rx_ring_size;
+                nb_desc = adapter->max_rx_ring_size;
 
         if (!rte_is_power_of_2(nb_desc)) {
                 PMD_DRV_LOG(ERR,
@@ -1319,10 +1310,10 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev,
                 return -EINVAL;
         }
 
-        if (nb_desc > adapter->rx_ring_size) {
+        if (nb_desc > adapter->max_rx_ring_size) {
                 PMD_DRV_LOG(ERR,
                         "Unsupported size of RX queue (max size: %d)\n",
-                        adapter->rx_ring_size);
+                        adapter->max_rx_ring_size);
                 return -EINVAL;
         }
 
@@ -1654,10 +1645,10 @@ ena_set_queues_placement_policy(struct ena_adapter *adapter,
         return 0;
 }
 
-static int ena_calc_io_queue_num(struct ena_com_dev *ena_dev,
-                                 struct ena_com_dev_get_features_ctx *get_feat_ctx)
+static uint32_t ena_calc_max_io_queue_num(struct ena_com_dev *ena_dev,
+                        struct ena_com_dev_get_features_ctx *get_feat_ctx)
 {
-        uint32_t io_tx_sq_num, io_tx_cq_num, io_rx_num, io_queue_num;
+        uint32_t io_tx_sq_num, io_tx_cq_num, io_rx_num, max_num_io_queues;
 
         /* Regular queues capabilities */
         if (ena_dev->supported_features & BIT(ENA_ADMIN_MAX_QUEUES_EXT)) {
@@ -1679,16 +1670,16 @@ static int ena_calc_io_queue_num(struct ena_com_dev *ena_dev,
         if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV)
                 io_tx_sq_num = get_feat_ctx->llq.max_llq_num;
 
-        io_queue_num = RTE_MIN(ENA_MAX_NUM_IO_QUEUES, io_rx_num);
-        io_queue_num = RTE_MIN(io_queue_num, io_tx_sq_num);
-        io_queue_num = RTE_MIN(io_queue_num, io_tx_cq_num);
+        max_num_io_queues = RTE_MIN(ENA_MAX_NUM_IO_QUEUES, io_rx_num);
+        max_num_io_queues = RTE_MIN(max_num_io_queues, io_tx_sq_num);
+        max_num_io_queues = RTE_MIN(max_num_io_queues, io_tx_cq_num);
 
-        if (unlikely(io_queue_num == 0)) {
+        if (unlikely(max_num_io_queues == 0)) {
                 PMD_DRV_LOG(ERR, "Number of IO queues should not be 0\n");
                 return -EFAULT;
         }
 
-        return io_queue_num;
+        return max_num_io_queues;
 }
 
 static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
@@ -1701,6 +1692,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
         struct ena_com_dev_get_features_ctx get_feat_ctx;
         struct ena_llq_configurations llq_config;
         const char *queue_type_str;
+        uint32_t max_num_io_queues;
         int rc;
 
         static int adapters_found;
@@ -1772,20 +1764,19 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
         calc_queue_ctx.ena_dev = ena_dev;
         calc_queue_ctx.get_feat_ctx = &get_feat_ctx;
 
-        adapter->num_queues = ena_calc_io_queue_num(ena_dev,
-                &get_feat_ctx);
-        rc = ena_calc_queue_size(&calc_queue_ctx);
-        if (unlikely((rc != 0) || (adapter->num_queues <= 0))) {
+        max_num_io_queues = ena_calc_max_io_queue_num(ena_dev, &get_feat_ctx);
+        rc = ena_calc_io_queue_size(&calc_queue_ctx);
+        if (unlikely((rc != 0) || (max_num_io_queues == 0))) {
                 rc = -EFAULT;
                 goto err_device_destroy;
         }
 
-        adapter->tx_ring_size = calc_queue_ctx.tx_queue_size;
-        adapter->rx_ring_size = calc_queue_ctx.rx_queue_size;
-
+        adapter->max_tx_ring_size = calc_queue_ctx.max_tx_queue_size;
+        adapter->max_rx_ring_size = calc_queue_ctx.max_rx_queue_size;
         adapter->max_tx_sgl_size = calc_queue_ctx.max_tx_sgl_size;
         adapter->max_rx_sgl_size = calc_queue_ctx.max_rx_sgl_size;
+        adapter->max_num_io_queues = max_num_io_queues;
 
         /* prepare ring structures */
         ena_init_rings(adapter);
 
@@ -1904,9 +1895,9 @@ static int ena_dev_configure(struct rte_eth_dev *dev)
 
 static void ena_init_rings(struct ena_adapter *adapter)
 {
-        int i;
+        size_t i;
 
-        for (i = 0; i < adapter->num_queues; i++) {
+        for (i = 0; i < adapter->max_num_io_queues; i++) {
                 struct ena_ring *ring = &adapter->tx_ring[i];
 
                 ring->configured = 0;
@@ -1918,7 +1909,7 @@ static void ena_init_rings(struct ena_adapter *adapter)
                 ring->sgl_size = adapter->max_tx_sgl_size;
         }
 
-        for (i = 0; i < adapter->num_queues; i++) {
+        for (i = 0; i < adapter->max_num_io_queues; i++) {
                 struct ena_ring *ring = &adapter->rx_ring[i];
 
                 ring->configured = 0;
@@ -1982,21 +1973,21 @@ static int ena_infos_get(struct rte_eth_dev *dev,
         dev_info->max_rx_pktlen = adapter->max_mtu;
         dev_info->max_mac_addrs = 1;
 
-        dev_info->max_rx_queues = adapter->num_queues;
-        dev_info->max_tx_queues = adapter->num_queues;
+        dev_info->max_rx_queues = adapter->max_num_io_queues;
+        dev_info->max_tx_queues = adapter->max_num_io_queues;
         dev_info->reta_size = ENA_RX_RSS_TABLE_SIZE;
 
         adapter->tx_supported_offloads = tx_feat;
         adapter->rx_supported_offloads = rx_feat;
 
-        dev_info->rx_desc_lim.nb_max = adapter->rx_ring_size;
+        dev_info->rx_desc_lim.nb_max = adapter->max_rx_ring_size;
         dev_info->rx_desc_lim.nb_min = ENA_MIN_RING_DESC;
         dev_info->rx_desc_lim.nb_seg_max = RTE_MIN(ENA_PKT_MAX_BUFS,
                                         adapter->max_rx_sgl_size);
         dev_info->rx_desc_lim.nb_mtu_seg_max = RTE_MIN(ENA_PKT_MAX_BUFS,
                                         adapter->max_rx_sgl_size);
 
-        dev_info->tx_desc_lim.nb_max = adapter->tx_ring_size;
+        dev_info->tx_desc_lim.nb_max = adapter->max_tx_ring_size;
         dev_info->tx_desc_lim.nb_min = ENA_MIN_RING_DESC;
         dev_info->tx_desc_lim.nb_seg_max = RTE_MIN(ENA_PKT_MAX_BUFS,
                                         adapter->max_tx_sgl_size);
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index c1457defeb..99d1fba64d 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -21,6 +21,7 @@
 #define ENA_NAME_MAX_LEN        20
 #define ENA_PKT_MAX_BUFS        17
 #define ENA_RX_BUF_MIN_SIZE     1400
+#define ENA_DEFAULT_RING_SIZE   1024
 
 #define ENA_MIN_MTU     128
 
@@ -46,8 +47,8 @@ struct ena_tx_buffer {
 struct ena_calc_queue_size_ctx {
         struct ena_com_dev_get_features_ctx *get_feat_ctx;
         struct ena_com_dev *ena_dev;
-        u16 rx_queue_size;
-        u16 tx_queue_size;
+        u32 max_rx_queue_size;
+        u32 max_tx_queue_size;
         u16 max_tx_sgl_size;
         u16 max_rx_sgl_size;
 };
@@ -159,15 +160,15 @@ struct ena_adapter {
         /* TX */
         struct ena_ring tx_ring[ENA_MAX_NUM_QUEUES] __rte_cache_aligned;
-        int tx_ring_size;
+        u32 max_tx_ring_size;
         u16 max_tx_sgl_size;
 
         /* RX */
         struct ena_ring rx_ring[ENA_MAX_NUM_QUEUES] __rte_cache_aligned;
-        int rx_ring_size;
+        u32 max_rx_ring_size;
         u16 max_rx_sgl_size;
 
-        u16 num_queues;
+        u32 max_num_io_queues;
         u16 max_mtu;
         struct ena_offloads offloads;
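
An illustrative aside, not part of the patch: the sizing scheme this change
adopts can be tried out in isolation. The short C program below is a minimal
sketch of the same pattern: start from the device-reported limits instead of
a hardcoded ceiling, clamp the TX ring to the LLQ depth when the placement
policy keeps TX descriptors in device memory, and round down to a power of 2.
The dev_caps struct and its sample numbers are hypothetical stand-ins for the
get_feat_ctx limits, and prev_pow2() mimics what rte_align32prevpow2() does
in DPDK.

#include <stdint.h>
#include <stdio.h>

#define MIN(a, b) ((a) < (b) ? (a) : (b))

/* Hypothetical device-reported limits (stand-in for get_feat_ctx). */
struct dev_caps {
        uint32_t max_rx_sq_depth;
        uint32_t max_rx_cq_depth;
        uint32_t max_tx_sq_depth;
        uint32_t max_tx_cq_depth;
        uint32_t max_llq_depth;
        int tx_in_llq; /* TX descriptors pushed to device (LLQ) memory */
};

/* Round down to the nearest power of 2, like rte_align32prevpow2(). */
static uint32_t prev_pow2(uint32_t x)
{
        uint32_t p = 1;

        while ((p << 1) != 0 && (p << 1) <= x)
                p <<= 1;
        return x ? p : 0;
}

int main(void)
{
        struct dev_caps caps = {
                .max_rx_sq_depth = 8192, .max_rx_cq_depth = 8192,
                .max_tx_sq_depth = 1000, .max_tx_cq_depth = 1024,
                .max_llq_depth = 512, .tx_in_llq = 1,
        };
        /* No hardcoded starting point: begin from the device's limits. */
        uint32_t max_rx = MIN(caps.max_rx_cq_depth, caps.max_rx_sq_depth);
        uint32_t max_tx = caps.max_tx_cq_depth;

        /* The TX SQ lives either in LLQ (device) or in host memory. */
        max_tx = MIN(max_tx, caps.tx_in_llq ? caps.max_llq_depth
                                            : caps.max_tx_sq_depth);

        /* Ring sizes must be powers of 2, so round down. */
        max_rx = prev_pow2(max_rx);
        max_tx = prev_pow2(max_tx);

        printf("max_rx_queue_size=%u max_tx_queue_size=%u\n", max_rx, max_tx);
        return (max_rx == 0 || max_tx == 0); /* empty caps -> error */
}

Deriving the limits from what the device reports means a future device
generation with deeper queues is picked up without any driver change, which
is what allows this patch to drop the hardcoded ENA_MAX_RING_SIZE_RX and
ENA_MAX_RING_SIZE_TX defines.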