From patchwork Tue Mar 12 18:06:51 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Brandes, Shai"
X-Patchwork-Id: 138255
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: 
To: 
CC: , Shai Brandes
Subject: [PATCH v4 06/31] net/ena: restructure the llq policy setting process
Date: Tue, 12 Mar 2024 20:06:51 +0200
Message-ID: <20240312180716.8515-7-shaibran@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20240312180716.8515-1-shaibran@amazon.com>
References: <20240312180716.8515-1-shaibran@amazon.com>
MIME-Version: 1.0
List-Id: DPDK patches and discussions

From: Shai Brandes

The driver sets the LLQ header size according to the recommendation
from the device. The user can bypass this recommendation via devargs:

- The existing devarg 'large_llq_hdr' (default 0) allows the user to
  enforce the large LLQ header policy.
- The existing devarg 'enable_llq' (default 1) allows the user to
  disable LLQ usage.
- A new devarg 'normal_llq_hdr' (default 0) allows the user to enforce
  the normal LLQ header policy.
Signed-off-by: Shai Brandes
Reviewed-by: Amit Bernstein
---
 doc/guides/nics/ena.rst                |  4 ++
 doc/guides/rel_notes/release_24_03.rst |  1 +
 drivers/net/ena/ena_ethdev.c           | 60 ++++++++++++++++++++++----
 drivers/net/ena/ena_ethdev.h           | 11 ++++-
 4 files changed, 67 insertions(+), 9 deletions(-)

diff --git a/doc/guides/nics/ena.rst b/doc/guides/nics/ena.rst
index b039e75ead..725215b36d 100644
--- a/doc/guides/nics/ena.rst
+++ b/doc/guides/nics/ena.rst
@@ -113,6 +113,10 @@ Runtime Configuration
   effect only if the device also supports large LLQ headers. Otherwise,
   the default value will be used.
 
+* **normal_llq_hdr** (default 0)
+
+  Enforce normal LLQ policy.
+
 * **miss_txc_to** (default 5)
 
   Number of seconds after which the Tx packet will be considered missing.
diff --git a/doc/guides/rel_notes/release_24_03.rst b/doc/guides/rel_notes/release_24_03.rst
index cbd4669cbb..58d092194e 100644
--- a/doc/guides/rel_notes/release_24_03.rst
+++ b/doc/guides/rel_notes/release_24_03.rst
@@ -108,6 +108,7 @@ New Features
   * Removed the reporting of `rx_overruns` errors from xstats and instead updated `imissed` stat with its value.
   * Added support for sub-optimal configuration notifications from the device.
   * Restructured fast release of mbufs when RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE optimization is enabled.
+  * Added `normal_llq_hdr` devarg that enforces the normal llq header policy.
 
 * **Updated Atomic Rules' Arkville driver.**
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 537ee9f8c3..e23edd4bd2 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -40,6 +40,8 @@
 
 #define BITS_PER_TYPE(type) (sizeof(type) * BITS_PER_BYTE)
 
+#define DECIMAL_BASE 10
+
 /*
  * We should try to keep ENA_CLEANUP_BUF_SIZE lower than
  * RTE_MEMPOOL_CACHE_MAX_SIZE, so we can fit this in mempool local cache.
@@ -75,6 +77,7 @@ struct ena_stats {
 
 /* Device arguments */
 #define ENA_DEVARG_LARGE_LLQ_HDR "large_llq_hdr"
+#define ENA_DEVARG_NORMAL_LLQ_HDR "normal_llq_hdr"
 /* Timeout in seconds after which a single uncompleted Tx packet should be
  * considered as a missing.
  */
@@ -297,6 +300,8 @@ static int ena_rx_queue_intr_disable(struct rte_eth_dev *dev,
 static int ena_configure_aenq(struct ena_adapter *adapter);
 static int ena_mp_primary_handle(const struct rte_mp_msg *mp_msg,
 				 const void *peer);
+static ena_llq_policy ena_define_llq_hdr_policy(struct ena_adapter *adapter);
+static bool ena_use_large_llq_hdr(struct ena_adapter *adapter, uint8_t recommended_entry_size);
 
 static const struct eth_dev_ops ena_dev_ops = {
 	.dev_configure            = ena_dev_configure,
@@ -1135,6 +1140,7 @@ ena_calc_io_queue_size(struct ena_calc_queue_size_ctx *ctx,
 	ctx->max_tx_queue_size = max_tx_queue_size;
 	ctx->max_rx_queue_size = max_rx_queue_size;
 
+	PMD_DRV_LOG(INFO, "tx queue size %u\n", max_tx_queue_size);
 	return 0;
 }
 
@@ -2034,7 +2040,7 @@ ena_set_queues_placement_policy(struct ena_adapter *adapter,
 	int rc;
 	u32 llq_feature_mask;
 
-	if (!adapter->enable_llq) {
+	if (adapter->llq_header_policy == ENA_LLQ_POLICY_DISABLED) {
 		PMD_DRV_LOG(WARNING,
 			"NOTE: LLQ has been disabled as per user's request. "
 			"This may lead to a huge performance degradation!\n");
@@ -2241,12 +2247,16 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
 	adapter->missing_tx_completion_to = ENA_TX_TIMEOUT;
 	adapter->enable_llq = true;
 	adapter->use_large_llq_hdr = false;
+	adapter->use_normal_llq_hdr = false;
 
+	/* Get user bypass */
 	rc = ena_parse_devargs(adapter, pci_dev->device.devargs);
 	if (rc != 0) {
 		PMD_INIT_LOG(CRIT, "Failed to parse devargs\n");
 		goto err;
 	}
+	adapter->llq_header_policy = ena_define_llq_hdr_policy(adapter);
+
 	rc = ena_com_allocate_customer_metrics_buffer(ena_dev);
 	if (rc != 0) {
 		PMD_INIT_LOG(CRIT, "Failed to allocate customer metrics buffer\n");
@@ -2264,8 +2274,9 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
 	if (!(adapter->all_aenq_groups & BIT(ENA_ADMIN_LINK_CHANGE)))
 		adapter->edev_data->dev_flags &= ~RTE_ETH_DEV_INTR_LSC;
 
-	set_default_llq_configurations(&llq_config, &get_feat_ctx.llq,
-		adapter->use_large_llq_hdr);
+	bool use_large_llq_hdr = ena_use_large_llq_hdr(adapter,
+		get_feat_ctx.llq.entry_size_recommended);
+	set_default_llq_configurations(&llq_config, &get_feat_ctx.llq, use_large_llq_hdr);
 	rc = ena_set_queues_placement_policy(adapter, ena_dev,
 		&get_feat_ctx.llq, &llq_config);
 	if (unlikely(rc)) {
@@ -2273,18 +2284,19 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
 		return rc;
 	}
 
-	if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_HOST)
+	if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_HOST) {
 		queue_type_str = "Regular";
-	else
+	} else {
 		queue_type_str = "Low latency";
+		PMD_DRV_LOG(INFO, "LLQ entry size %uB\n", llq_config.llq_ring_entry_size_value);
+	}
 	PMD_DRV_LOG(INFO, "Placement policy: %s\n", queue_type_str);
 
 	calc_queue_ctx.ena_dev = ena_dev;
 	calc_queue_ctx.get_feat_ctx = &get_feat_ctx;
 
 	max_num_io_queues = ena_calc_max_io_queue_num(ena_dev, &get_feat_ctx);
-	rc = ena_calc_io_queue_size(&calc_queue_ctx,
-		adapter->use_large_llq_hdr);
+	rc = ena_calc_io_queue_size(&calc_queue_ctx, use_large_llq_hdr);
 	if (unlikely((rc != 0) || (max_num_io_queues == 0))) {
 		rc = -EFAULT;
 		goto err_device_destroy;
@@ -3632,7 +3644,7 @@ static int ena_process_uint_devarg(const char *key,
 	char *str_end;
 	uint64_t uint_value;
 
-	uint_value = strtoull(value, &str_end, 10);
+	uint_value = strtoull(value, &str_end, DECIMAL_BASE);
 	if (value == str_end) {
 		PMD_INIT_LOG(ERR,
 			"Invalid value for key '%s'. Only uint values are accepted.\n",
@@ -3685,6 +3697,8 @@ static int ena_process_bool_devarg(const char *key,
 	/* Now, assign it to the proper adapter field. */
 	if (strcmp(key, ENA_DEVARG_LARGE_LLQ_HDR) == 0)
 		adapter->use_large_llq_hdr = bool_value;
+	else if (strcmp(key, ENA_DEVARG_NORMAL_LLQ_HDR) == 0)
+		adapter->use_normal_llq_hdr = bool_value;
 	else if (strcmp(key, ENA_DEVARG_ENABLE_LLQ) == 0)
 		adapter->enable_llq = bool_value;
 
@@ -3696,6 +3710,7 @@ static int ena_parse_devargs(struct ena_adapter *adapter,
 {
 	static const char * const allowed_args[] = {
 		ENA_DEVARG_LARGE_LLQ_HDR,
+		ENA_DEVARG_NORMAL_LLQ_HDR,
 		ENA_DEVARG_MISS_TXC_TO,
 		ENA_DEVARG_ENABLE_LLQ,
 		NULL,
@@ -3717,6 +3732,10 @@ static int ena_parse_devargs(struct ena_adapter *adapter,
 			ena_process_bool_devarg, adapter);
 	if (rc != 0)
 		goto exit;
+	rc = rte_kvargs_process(kvlist, ENA_DEVARG_NORMAL_LLQ_HDR,
+			ena_process_bool_devarg, adapter);
+	if (rc != 0)
+		goto exit;
 	rc = rte_kvargs_process(kvlist, ENA_DEVARG_MISS_TXC_TO,
 			ena_process_uint_devarg, adapter);
 	if (rc != 0)
@@ -3943,6 +3962,7 @@
 RTE_PMD_REGISTER_PCI_TABLE(net_ena, pci_id_ena_map);
 RTE_PMD_REGISTER_KMOD_DEP(net_ena, "* igb_uio | uio_pci_generic | vfio-pci");
 RTE_PMD_REGISTER_PARAM_STRING(net_ena,
 	ENA_DEVARG_LARGE_LLQ_HDR "=<0|1> "
+	ENA_DEVARG_NORMAL_LLQ_HDR "=<0|1> "
 	ENA_DEVARG_ENABLE_LLQ "=<0|1> "
 	ENA_DEVARG_MISS_TXC_TO "=<uint>");
 RTE_LOG_REGISTER_SUFFIX(ena_logtype_init, init, NOTICE);
@@ -4129,3 +4149,27 @@ ena_mp_primary_handle(const struct rte_mp_msg *mp_msg, const void *peer)
 	/* Return just IPC processing status */
 	return rte_mp_reply(&mp_rsp, peer);
 }
+
+static ena_llq_policy ena_define_llq_hdr_policy(struct ena_adapter *adapter)
+{
+	if (!adapter->enable_llq)
+		return ENA_LLQ_POLICY_DISABLED;
+	if (adapter->use_large_llq_hdr)
+		return ENA_LLQ_POLICY_LARGE;
+	if (adapter->use_normal_llq_hdr)
+		return ENA_LLQ_POLICY_NORMAL;
+	return ENA_LLQ_POLICY_RECOMMENDED;
+}
+
+static bool ena_use_large_llq_hdr(struct ena_adapter *adapter, uint8_t recommended_entry_size)
+{
+	if (adapter->llq_header_policy == ENA_LLQ_POLICY_LARGE) {
+		return true;
+	} else if (adapter->llq_header_policy == ENA_LLQ_POLICY_RECOMMENDED) {
+		PMD_DRV_LOG(INFO, "Recommended device entry size policy %u\n",
+			recommended_entry_size);
+		if (recommended_entry_size == ENA_ADMIN_LIST_ENTRY_SIZE_256B)
+			return true;
+	}
+	return false;
+}
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 20b8307836..7358f28caf 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -85,6 +85,14 @@ enum ena_ring_type {
 	ENA_RING_TYPE_TX = 2,
 };
 
+typedef enum ena_llq_policy_t {
+	ENA_LLQ_POLICY_DISABLED    = 0, /* Host queues */
+	ENA_LLQ_POLICY_RECOMMENDED = 1, /* Device recommendation */
+	ENA_LLQ_POLICY_NORMAL      = 2, /* 128B long LLQ entry */
+	ENA_LLQ_POLICY_LARGE       = 3, /* 256B long LLQ entry */
+	ENA_LLQ_POLICY_LAST,
+} ena_llq_policy;
+
 struct ena_tx_buffer {
 	struct rte_mbuf *mbuf;
 	unsigned int tx_descs;
@@ -328,9 +336,10 @@ struct ena_adapter {
 	uint32_t active_aenq_groups;
 
 	bool trigger_reset;
-
 	bool enable_llq;
 	bool use_large_llq_hdr;
+	bool use_normal_llq_hdr;
+	ena_llq_policy llq_header_policy;
 
 	uint32_t last_tx_comp_qid;
 	uint64_t missing_tx_completion_to;