From patchwork Fri Dec 14 13:18:30 2018
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 48862
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk
To: dev@dpdk.org
Cc: gtzalik@dpdk.org, mw@dpdk.org, matua@amazon.com, rk@semihalf.com
Date: Fri, 14 Dec 2018 14:18:30 +0100
Message-Id: <20181214131846.22439-5-mk@semihalf.com>
X-Mailer: git-send-email 2.14.1
In-Reply-To: <20181214131846.22439-1-mk@semihalf.com>
References: <20181214131846.22439-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH 04/20] net/ena: add hw queues depth setup

From: Rafal Kozik

The device now allows the driver to reconfigure the Tx and Rx queue depths
independently. Moreover, the maximum queue size can differ between Tx and Rx.
Those maximum values are received from the device. After a reset, the
previous ring configuration is restored. If the number of descriptors is set
to RTE_ETH_DEV_FALLBACK_RX_RINGSIZE or RTE_ETH_DEV_FALLBACK_TX_RINGSIZE, the
maximum value is used instead.
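
For illustration only (not part of this patch): an application that does not
care about ring sizes can pass 0 descriptors; the ethdev layer then
substitutes RTE_ETH_DEV_FALLBACK_RX_RINGSIZE / RTE_ETH_DEV_FALLBACK_TX_RINGSIZE
(512 in this DPDK generation) when the PMD advertises no preferred size, and
the ENA PMD replaces that fallback with the device maximum. A minimal sketch,
assuming a configured port and mempool (helper name is hypothetical):

#include <rte_ethdev.h>
#include <rte_lcore.h>

/* Hypothetical application-side helper. Passing nb_desc == 0 lets the
 * ethdev layer pick the fallback ring size, which ena_rx/tx_queue_setup()
 * then replaces with the maximum depth reported by the device.
 */
static int
ena_setup_default_queues(uint16_t port_id, struct rte_mempool *mp)
{
	int rc;

	rc = rte_eth_rx_queue_setup(port_id, 0, 0, rte_socket_id(), NULL, mp);
	if (rc != 0)
		return rc;

	return rte_eth_tx_queue_setup(port_id, 0, 0, rte_socket_id(), NULL);
}
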
Remove the checks verifying that the provided number of descriptors is not
too big, as this is already done in the generic functions
(rte_eth_rx_queue_setup and rte_eth_tx_queue_setup).

The maximum number of segments is set for Rx packets and provided to
ena_com_rx_pkt() for validation.

Unused definitions were removed.

Signed-off-by: Rafal Kozik
Acked-by: Michal Krawczyk
---
 drivers/net/ena/ena_ethdev.c | 125 ++++++++++++++++++++++++++++---------------
 drivers/net/ena/ena_ethdev.h |  11 +++-
 2 files changed, 93 insertions(+), 43 deletions(-)

-- 
2.14.1

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index a47f5f7fa..617f85f02 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -85,7 +85,6 @@
 
 #define ARRAY_SIZE(x) (sizeof(x) / sizeof((x)[0]))
 
-#define ENA_MAX_RING_DESC ENA_DEFAULT_RING_SIZE
 #define ENA_MIN_RING_DESC 128
 
 enum ethtool_stringset {
@@ -591,11 +590,12 @@ ena_dev_reset(struct rte_eth_dev *dev)
 	ena_com_admin_aenq_enable(ena_dev);
 
 	for (i = 0; i < nb_queues; ++i)
-		ena_rx_queue_setup(eth_dev, i, adapter->rx_ring_size, 0, NULL,
-			mb_pool_rx[i]);
+		ena_rx_queue_setup(eth_dev, i, adapter->rx_ring[i].ring_size, 0,
+			NULL, mb_pool_rx[i]);
 
 	for (i = 0; i < nb_queues; ++i)
-		ena_tx_queue_setup(eth_dev, i, adapter->tx_ring_size, 0, NULL);
+		ena_tx_queue_setup(eth_dev, i, adapter->tx_ring[i].ring_size, 0,
+			NULL);
 
 	adapter->trigger_reset = false;
 
@@ -935,34 +935,46 @@ static int ena_check_valid_conf(struct ena_adapter *adapter)
 }
 
 static int
-ena_calc_queue_size(struct ena_com_dev *ena_dev,
-		    u16 *max_tx_sgl_size,
-		    struct ena_com_dev_get_features_ctx *get_feat_ctx)
+ena_calc_queue_size(struct ena_calc_queue_size_ctx *ctx)
 {
-	uint32_t queue_size = ENA_DEFAULT_RING_SIZE;
-
-	queue_size = RTE_MIN(queue_size,
-		get_feat_ctx->max_queues.max_cq_depth);
-	queue_size = RTE_MIN(queue_size,
-		get_feat_ctx->max_queues.max_sq_depth);
-
-	if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV)
-		queue_size = RTE_MIN(queue_size,
-			get_feat_ctx->max_queues.max_legacy_llq_depth);
-
-	/* Round down to power of 2 */
-	if (!rte_is_power_of_2(queue_size))
-		queue_size = rte_align32pow2(queue_size >> 1);
-
-	if (unlikely(queue_size == 0)) {
+	uint32_t tx_queue_size, rx_queue_size;
+
+	if (ctx->ena_dev->supported_features & BIT(ENA_ADMIN_MAX_QUEUES_EXT)) {
+		struct ena_admin_queue_ext_feature_fields *max_queue_ext =
+			&ctx->get_feat_ctx->max_queue_ext.max_queue_ext;
+		rx_queue_size = RTE_MIN(max_queue_ext->max_rx_cq_depth,
+			max_queue_ext->max_rx_sq_depth);
+		tx_queue_size = RTE_MIN(max_queue_ext->max_tx_cq_depth,
+			max_queue_ext->max_tx_sq_depth);
+		ctx->max_rx_sgl_size = RTE_MIN(ENA_PKT_MAX_BUFS,
+			max_queue_ext->max_per_packet_rx_descs);
+		ctx->max_tx_sgl_size = RTE_MIN(ENA_PKT_MAX_BUFS,
+			max_queue_ext->max_per_packet_tx_descs);
+	} else {
+		struct ena_admin_queue_feature_desc *max_queues =
+			&ctx->get_feat_ctx->max_queues;
+		rx_queue_size = RTE_MIN(max_queues->max_cq_depth,
+			max_queues->max_sq_depth);
+		tx_queue_size = rx_queue_size;
+		ctx->max_rx_sgl_size = RTE_MIN(ENA_PKT_MAX_BUFS,
+			max_queues->max_packet_tx_descs);
+		ctx->max_tx_sgl_size = RTE_MIN(ENA_PKT_MAX_BUFS,
+			max_queues->max_packet_rx_descs);
+	}
+
+	/* Round down to the nearest power of 2 */
+	rx_queue_size = rte_align32prevpow2(rx_queue_size);
+	tx_queue_size = rte_align32prevpow2(tx_queue_size);
+
+	if (unlikely(rx_queue_size == 0 || tx_queue_size == 0)) {
 		PMD_INIT_LOG(ERR, "Invalid queue size");
 		return -EFAULT;
 	}
 
-	*max_tx_sgl_size = RTE_MIN(ENA_PKT_MAX_BUFS,
-		get_feat_ctx->max_queues.max_packet_tx_descs);
+	ctx->rx_queue_size = rx_queue_size;
+	ctx->tx_queue_size = tx_queue_size;
 
-	return queue_size;
+	return 0;
 }
 
 static void ena_stats_restart(struct rte_eth_dev *dev)
@@ -1239,6 +1251,9 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev,
 		return -EINVAL;
 	}
 
+	if (nb_desc == RTE_ETH_DEV_FALLBACK_TX_RINGSIZE)
+		nb_desc = adapter->tx_ring_size;
+
 	txq->port_id = dev->data->port_id;
 	txq->next_to_clean = 0;
 	txq->next_to_use = 0;
@@ -1297,6 +1312,9 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev,
 		return ENA_COM_FAULT;
 	}
 
+	if (nb_desc == RTE_ETH_DEV_FALLBACK_RX_RINGSIZE)
+		nb_desc = adapter->rx_ring_size;
+
 	if (!rte_is_power_of_2(nb_desc)) {
 		RTE_LOG(ERR, PMD,
 			"Unsupported size of RX queue: %d is not a power of 2.",
@@ -1574,13 +1592,26 @@ static void ena_timer_wd_callback(__rte_unused struct rte_timer *timer,
 	}
 }
 
-static int ena_calc_io_queue_num(__rte_unused struct ena_com_dev *ena_dev,
+static int ena_calc_io_queue_num(struct ena_com_dev *ena_dev,
 				 struct ena_com_dev_get_features_ctx *get_feat_ctx)
 {
-	int io_sq_num, io_cq_num, io_queue_num;
+	uint32_t io_sq_num, io_cq_num, io_queue_num;
+
+	/* Regular queues capabilities */
+	if (ena_dev->supported_features & BIT(ENA_ADMIN_MAX_QUEUES_EXT)) {
+		struct ena_admin_queue_ext_feature_fields *max_queue_ext =
+			&get_feat_ctx->max_queue_ext.max_queue_ext;
+		io_sq_num = max_queue_ext->max_rx_sq_num;
+		io_sq_num = RTE_MIN(io_sq_num, max_queue_ext->max_tx_sq_num);
 
-	io_sq_num = get_feat_ctx->max_queues.max_sq_num;
-	io_cq_num = get_feat_ctx->max_queues.max_cq_num;
+		io_cq_num = max_queue_ext->max_rx_cq_num;
+		io_cq_num = RTE_MIN(io_cq_num, max_queue_ext->max_tx_cq_num);
+	} else {
+		struct ena_admin_queue_feature_desc *max_queues =
+			&get_feat_ctx->max_queues;
+		io_sq_num = max_queues->max_sq_num;
+		io_cq_num = max_queues->max_cq_num;
+	}
 
 	io_queue_num = RTE_MIN(io_sq_num, io_cq_num);
 
@@ -1594,14 +1625,14 @@ static int ena_calc_io_queue_num(__rte_unused struct ena_com_dev *ena_dev,
 
 static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
 {
+	struct ena_calc_queue_size_ctx calc_queue_ctx = { 0 };
 	struct rte_pci_device *pci_dev;
 	struct rte_intr_handle *intr_handle;
 	struct ena_adapter *adapter =
 		(struct ena_adapter *)(eth_dev->data->dev_private);
 	struct ena_com_dev *ena_dev = &adapter->ena_dev;
 	struct ena_com_dev_get_features_ctx get_feat_ctx;
-	int queue_size, rc;
-	u16 tx_sgl_size = 0;
+	int rc;
 	static int adapters_found;
 	bool wd_state;
 
@@ -1656,19 +1687,24 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
 	adapter->wd_state = wd_state;
 
 	ena_dev->tx_mem_queue_type = ENA_ADMIN_PLACEMENT_POLICY_HOST;
+
+	calc_queue_ctx.ena_dev = ena_dev;
+	calc_queue_ctx.get_feat_ctx = &get_feat_ctx;
+
 	adapter->num_queues = ena_calc_io_queue_num(ena_dev,
 						    &get_feat_ctx);
 
-	queue_size = ena_calc_queue_size(ena_dev, &tx_sgl_size, &get_feat_ctx);
-	if (queue_size <= 0 || adapter->num_queues <= 0) {
+	rc = ena_calc_queue_size(&calc_queue_ctx);
+	if (unlikely((rc != 0) || (adapter->num_queues <= 0))) {
 		rc = -EFAULT;
 		goto err_device_destroy;
 	}
 
-	adapter->tx_ring_size = queue_size;
-	adapter->rx_ring_size = queue_size;
+	adapter->tx_ring_size = calc_queue_ctx.tx_queue_size;
+	adapter->rx_ring_size = calc_queue_ctx.rx_queue_size;
 
-	adapter->max_tx_sgl_size = tx_sgl_size;
+	adapter->max_tx_sgl_size = calc_queue_ctx.max_tx_sgl_size;
+	adapter->max_rx_sgl_size = calc_queue_ctx.max_rx_sgl_size;
 
 	/* prepare ring structures */
 	ena_init_rings(adapter);
@@ -1785,6 +1821,7 @@ static void ena_init_rings(struct ena_adapter *adapter)
 		ring->type = ENA_RING_TYPE_RX;
 		ring->adapter = adapter;
 		ring->id = i;
+		ring->sgl_size = adapter->max_rx_sgl_size;
 	}
 }
 
@@ -1857,15 +1894,19 @@ static void ena_infos_get(struct rte_eth_dev *dev,
 	adapter->tx_supported_offloads = tx_feat;
 	adapter->rx_supported_offloads = rx_feat;
 
-	dev_info->rx_desc_lim.nb_max = ENA_MAX_RING_DESC;
+	dev_info->rx_desc_lim.nb_max = adapter->rx_ring_size;
 	dev_info->rx_desc_lim.nb_min = ENA_MIN_RING_DESC;
+	dev_info->rx_desc_lim.nb_seg_max = RTE_MIN(ENA_PKT_MAX_BUFS,
+					adapter->max_rx_sgl_size);
+	dev_info->rx_desc_lim.nb_mtu_seg_max = RTE_MIN(ENA_PKT_MAX_BUFS,
+					adapter->max_rx_sgl_size);
 
-	dev_info->tx_desc_lim.nb_max = ENA_MAX_RING_DESC;
+	dev_info->tx_desc_lim.nb_max = adapter->tx_ring_size;
 	dev_info->tx_desc_lim.nb_min = ENA_MIN_RING_DESC;
 	dev_info->tx_desc_lim.nb_seg_max = RTE_MIN(ENA_PKT_MAX_BUFS,
-					feat.max_queues.max_packet_tx_descs);
+					adapter->max_tx_sgl_size);
 	dev_info->tx_desc_lim.nb_mtu_seg_max = RTE_MIN(ENA_PKT_MAX_BUFS,
-					feat.max_queues.max_packet_tx_descs);
+					adapter->max_tx_sgl_size);
 }
 
 static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
@@ -1901,7 +1942,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	for (completed = 0; completed < nb_pkts; completed++) {
 		int segments = 0;
 
-		ena_rx_ctx.max_bufs = rx_ring->ring_size;
+		ena_rx_ctx.max_bufs = rx_ring->sgl_size;
 		ena_rx_ctx.ena_bufs = rx_ring->ena_bufs;
 		ena_rx_ctx.descs = 0;
 		/* receive packet context */
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 322e90ace..e6f7bd012 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -45,7 +45,6 @@
 #define ENA_MEM_BAR 2
 
 #define ENA_MAX_NUM_QUEUES 128
-#define ENA_DEFAULT_RING_SIZE (1024)
 #define ENA_MIN_FRAME_LEN 64
 #define ENA_NAME_MAX_LEN 20
 #define ENA_PKT_MAX_BUFS 17
@@ -71,6 +70,15 @@ struct ena_tx_buffer {
 	struct ena_com_buf bufs[ENA_PKT_MAX_BUFS];
 };
 
+struct ena_calc_queue_size_ctx {
+	struct ena_com_dev_get_features_ctx *get_feat_ctx;
+	struct ena_com_dev *ena_dev;
+	u16 rx_queue_size;
+	u16 tx_queue_size;
+	u16 max_tx_sgl_size;
+	u16 max_rx_sgl_size;
+};
+
 struct ena_ring {
 	u16 next_to_use;
 	u16 next_to_clean;
@@ -176,6 +184,7 @@ struct ena_adapter {
 	/* RX */
 	struct ena_ring rx_ring[ENA_MAX_NUM_QUEUES] __rte_cache_aligned;
 	int rx_ring_size;
+	u16 max_rx_sgl_size;
 
 	u16 num_queues;
 	u16 max_mtu;
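
With the descriptor and segment limits now reported per direction in
ena_infos_get(), an application can size the Rx and Tx rings independently.
A minimal application-side sketch, not part of this patch (helper name and
error handling are illustrative only):

#include <rte_ethdev.h>
#include <rte_lcore.h>

/* Hypothetical helper: clamp the requested depths to the per-direction
 * limits advertised by the PMD and configure queue 0 with different
 * Rx and Tx ring sizes.
 */
static int
ena_setup_asymmetric_rings(uint16_t port_id, struct rte_mempool *mp,
			   uint16_t rx_desc, uint16_t tx_desc)
{
	struct rte_eth_dev_info dev_info;
	int rc;

	rte_eth_dev_info_get(port_id, &dev_info);

	if (rx_desc > dev_info.rx_desc_lim.nb_max)
		rx_desc = dev_info.rx_desc_lim.nb_max;
	if (tx_desc > dev_info.tx_desc_lim.nb_max)
		tx_desc = dev_info.tx_desc_lim.nb_max;

	rc = rte_eth_rx_queue_setup(port_id, 0, rx_desc, rte_socket_id(),
				    NULL, mp);
	if (rc != 0)
		return rc;

	return rte_eth_tx_queue_setup(port_id, 0, tx_desc, rte_socket_id(),
				      NULL);
}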