From patchwork Wed Apr 8 08:28:52 2020
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 67967
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk
To: dev@dpdk.org
Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com, igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com, Michal Krawczyk
Date: Wed, 8 Apr 2020 10:28:52 +0200
Message-Id: <20200408082921.31000-2-mk@semihalf.com>
In-Reply-To: <20200408082921.31000-1-mk@semihalf.com>
References: <20200408082921.31000-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 01/30] net/ena: check if size of buffer is at least 1400B

Some of the ENA devices can't handle buffers which are smaller than 1400 bytes. Because of this limitation, the size of the buffer is checked and limited during the Rx queue setup. If it's below the allowed value, the PMD won't finish its configuration successfully.
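As an illustration only (not part of the patch), the new check can be sketched in plain C. `ENA_RX_BUF_MIN_SIZE` mirrors the value added by the patch; `EXAMPLE_HEADROOM` and `ena_rx_buf_size_ok()` are hypothetical stand-ins for `RTE_PKTMBUF_HEADROOM` and the in-driver logic:

```c
#include <stddef.h>

#define ENA_RX_BUF_MIN_SIZE 1400
#define EXAMPLE_HEADROOM 128 /* stand-in for RTE_PKTMBUF_HEADROOM */

/* Returns 0 when the usable buffer is large enough, -1 (EINVAL-style)
 * otherwise -- mirroring the check added in ena_rx_queue_setup(). */
int ena_rx_buf_size_ok(size_t data_room_size)
{
	size_t buffer_size = data_room_size - EXAMPLE_HEADROOM;

	return (buffer_size < ENA_RX_BUF_MIN_SIZE) ? -1 : 0;
}
```

With a typical 2048-byte data room the check passes; a mempool leaving less than 1400 usable bytes after the headroom is rejected at queue setup instead of failing later in the device.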
Signed-off-by: Michal Krawczyk
Reviewed-by: Igor Chauskin
Reviewed-by: Guy Tzalik
---
v2:
* Remove debug printout
* Change the way of acquiring mempool size
v3:
* Update the copyright date in modified files

 drivers/net/ena/ena_ethdev.c | 12 +++++++++++-
 drivers/net/ena/ena_ethdev.h |  3 ++-
 2 files changed, 13 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 665afee4f0..64aabbbb19 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (c) 2015-2019 Amazon.com, Inc. or its affiliates.
+ * Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates.
  * All rights reserved.
  */
@@ -1282,6 +1282,7 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev,
 {
 	struct ena_adapter *adapter = dev->data->dev_private;
 	struct ena_ring *rxq = NULL;
+	size_t buffer_size;
 	int i;

 	rxq = &adapter->rx_ring[queue_idx];
@@ -1309,6 +1310,15 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev,
 		return -EINVAL;
 	}

+	/* ENA isn't supporting buffers smaller than 1400 bytes */
+	buffer_size = rte_pktmbuf_data_room_size(mp) - RTE_PKTMBUF_HEADROOM;
+	if (buffer_size < ENA_RX_BUF_MIN_SIZE) {
+		PMD_DRV_LOG(ERR,
+			"Unsupported size of RX buffer: %zu (min size: %d)\n",
+			buffer_size, ENA_RX_BUF_MIN_SIZE);
+		return -EINVAL;
+	}
+
 	rxq->port_id = dev->data->port_id;
 	rxq->next_to_clean = 0;
 	rxq->next_to_use = 0;
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index af5eeea280..e9b55dc029 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (c) 2015-2019 Amazon.com, Inc. or its affiliates.
+ * Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates.
  * All rights reserved.
  */
@@ -20,6 +20,7 @@
 #define ENA_MIN_FRAME_LEN 64
 #define ENA_NAME_MAX_LEN 20
 #define ENA_PKT_MAX_BUFS 17
+#define ENA_RX_BUF_MIN_SIZE 1400
 #define ENA_MIN_MTU 128

From patchwork Wed Apr 8 08:28:53 2020
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 67968
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk
To: dev@dpdk.org
Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com, igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com, stable@dpdk.org, Michal Krawczyk
Date: Wed, 8 Apr 2020 10:28:53 +0200
Message-Id: <20200408082921.31000-3-mk@semihalf.com>
In-Reply-To: <20200408082921.31000-1-mk@semihalf.com>
References: <20200408082921.31000-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 02/30] net/ena/base: make allocation macros thread-safe

From: Igor Chauskin

The memory allocation region id could possibly be non-unique due to a non-atomic increment, causing allocation failure.
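The race being fixed can be sketched outside of DPDK with C11 atomics; this is an illustration, not the driver code, and `ena_alloc_cnt_example`, `next_zone_id()`, and `make_zone_name()` are hypothetical names. With a plain `uint32_t` and `cnt++`, two threads could observe the same counter value and build colliding memzone names:

```c
#include <stdatomic.h>
#include <stdio.h>

/* Shared allocation counter; the atomic read-modify-write guarantees
 * each caller gets a distinct value, like rte_atomic32_add_return(). */
static atomic_uint ena_alloc_cnt_example;

unsigned int next_zone_id(void)
{
	/* atomic_fetch_add returns the old value, so add 1 to mimic
	 * rte_atomic32_add_return(), which returns the new value. */
	return atomic_fetch_add(&ena_alloc_cnt_example, 1) + 1;
}

void make_zone_name(char *buf, size_t len)
{
	snprintf(buf, len, "ena_alloc_%u", next_zone_id());
}
```

Each memzone name is therefore unique even when several queues allocate coherent memory concurrently.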
Fixes: 9ba7981ec992 ("ena: add communication layer for DPDK")
Cc: stable@dpdk.org
Signed-off-by: Igor Chauskin
Reviewed-by: Michal Krawczyk
Reviewed-by: Guy Tzalik
---
v3:
* Update the copyright date in modified files

 drivers/net/ena/base/ena_plat_dpdk.h | 10 ++++++----
 drivers/net/ena/ena_ethdev.c         |  2 +-
 2 files changed, 7 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ena/base/ena_plat_dpdk.h b/drivers/net/ena/base/ena_plat_dpdk.h
index b611fb204b..70261bdbc6 100644
--- a/drivers/net/ena/base/ena_plat_dpdk.h
+++ b/drivers/net/ena/base/ena_plat_dpdk.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (c) 2015-2019 Amazon.com, Inc. or its affiliates.
+ * Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates.
  * All rights reserved.
  */
@@ -180,7 +180,7 @@ do { \
  * Each rte_memzone should have unique name.
  * To satisfy it, count number of allocations and add it to name.
  */
-extern uint32_t ena_alloc_cnt;
+extern rte_atomic32_t ena_alloc_cnt;

 #define ENA_MEM_ALLOC_COHERENT(dmadev, size, virt, phys, handle) \
 do { \
@@ -188,7 +188,8 @@ extern uint32_t ena_alloc_cnt;
 		char z_name[RTE_MEMZONE_NAMESIZE]; \
 		ENA_TOUCH(dmadev); ENA_TOUCH(handle); \
 		snprintf(z_name, sizeof(z_name), \
-			"ena_alloc_%d", ena_alloc_cnt++); \
+			"ena_alloc_%d", \
+			rte_atomic32_add_return(&ena_alloc_cnt, 1)); \
 		mz = rte_memzone_reserve(z_name, size, SOCKET_ID_ANY, \
 			RTE_MEMZONE_IOVA_CONTIG); \
 		handle = mz; \
@@ -213,7 +214,8 @@ extern uint32_t ena_alloc_cnt;
 		char z_name[RTE_MEMZONE_NAMESIZE]; \
 		ENA_TOUCH(dmadev); ENA_TOUCH(dev_node); \
 		snprintf(z_name, sizeof(z_name), \
-			"ena_alloc_%d", ena_alloc_cnt++); \
+			"ena_alloc_%d", \
+			rte_atomic32_add_return(&ena_alloc_cnt, 1)); \
 		mz = rte_memzone_reserve(z_name, size, node, \
 			RTE_MEMZONE_IOVA_CONTIG); \
 		mem_handle = mz; \
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 64aabbbb19..e0ed28419c 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -89,7 +89,7 @@ struct ena_stats {
  * Each rte_memzone should have unique name.
  * To satisfy it, count number of allocation and add it to name.
  */
-uint32_t ena_alloc_cnt;
+rte_atomic32_t ena_alloc_cnt;

 static const struct ena_stats ena_stats_global_strings[] = {
 	ENA_STAT_GLOBAL_ENTRY(wd_expired),

From patchwork Wed Apr 8 08:28:54 2020
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 67969
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk
To: dev@dpdk.org
Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com, igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com, stable@dpdk.org, Michal Krawczyk
Date: Wed, 8 Apr 2020 10:28:54 +0200
Message-Id: <20200408082921.31000-4-mk@semihalf.com>
In-Reply-To: <20200408082921.31000-1-mk@semihalf.com>
References: <20200408082921.31000-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 03/30] net/ena/base: prevent allocation of 0-sized memory

From: Igor Chauskin

rte_memzone_reserve() will reserve the biggest contiguous memzone available if it receives 0 as the size param.
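The shape of the guard the patch adds can be shown in isolation. This is a sketch, not DPDK code: `reserve_zone()` is a hypothetical stand-in for `rte_memzone_reserve()` (whose real behavior on size 0, grabbing the largest free contiguous zone, is exactly what the patch avoids):

```c
#include <stddef.h>
#include <stdlib.h>

/* Hypothetical stand-in for rte_memzone_reserve(). */
void *reserve_zone(size_t size)
{
	return malloc(size);
}

/* Mirrors the patched ENA_MEM_ALLOC_COHERENT flow: mz starts as NULL
 * and the reservation only happens for a strictly positive size, so a
 * 0-sized request falls through to the NULL/failure path. */
void *ena_mem_alloc_coherent(size_t size)
{
	void *mz = NULL;

	if (size > 0)
		mz = reserve_zone(size);
	return mz;
}
```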
Fixes: 9ba7981ec992 ("ena: add communication layer for DPDK")
Cc: stable@dpdk.org
Signed-off-by: Igor Chauskin
Reviewed-by: Michal Krawczyk
Reviewed-by: Guy Tzalik
---
 drivers/net/ena/base/ena_plat_dpdk.h | 29 ++++++++++++++++------------
 1 file changed, 17 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ena/base/ena_plat_dpdk.h b/drivers/net/ena/base/ena_plat_dpdk.h
index 70261bdbc6..4b8fe017dd 100644
--- a/drivers/net/ena/base/ena_plat_dpdk.h
+++ b/drivers/net/ena/base/ena_plat_dpdk.h
@@ -184,15 +184,18 @@ extern rte_atomic32_t ena_alloc_cnt;

 #define ENA_MEM_ALLOC_COHERENT(dmadev, size, virt, phys, handle) \
 do { \
-	const struct rte_memzone *mz; \
-	char z_name[RTE_MEMZONE_NAMESIZE]; \
+	const struct rte_memzone *mz = NULL; \
 	ENA_TOUCH(dmadev); ENA_TOUCH(handle); \
-	snprintf(z_name, sizeof(z_name), \
+	if (size > 0) { \
+		char z_name[RTE_MEMZONE_NAMESIZE]; \
+		snprintf(z_name, sizeof(z_name), \
 			"ena_alloc_%d", \
 			rte_atomic32_add_return(&ena_alloc_cnt, 1)); \
-	mz = rte_memzone_reserve(z_name, size, SOCKET_ID_ANY, \
-		RTE_MEMZONE_IOVA_CONTIG); \
-	handle = mz; \
+		mz = rte_memzone_reserve(z_name, size, \
+			SOCKET_ID_ANY, \
+			RTE_MEMZONE_IOVA_CONTIG); \
+		handle = mz; \
+	} \
 	if (mz == NULL) { \
 		virt = NULL; \
 		phys = 0; \
@@ -210,15 +213,17 @@ extern rte_atomic32_t ena_alloc_cnt;
 #define ENA_MEM_ALLOC_COHERENT_NODE( \
 	dmadev, size, virt, phys, mem_handle, node, dev_node) \
 do { \
-	const struct rte_memzone *mz; \
-	char z_name[RTE_MEMZONE_NAMESIZE]; \
+	const struct rte_memzone *mz = NULL; \
 	ENA_TOUCH(dmadev); ENA_TOUCH(dev_node); \
-	snprintf(z_name, sizeof(z_name), \
+	if (size > 0) { \
+		char z_name[RTE_MEMZONE_NAMESIZE]; \
+		snprintf(z_name, sizeof(z_name), \
 			"ena_alloc_%d", \
-			rte_atomic32_add_return(&ena_alloc_cnt, 1)); \
-	mz = rte_memzone_reserve(z_name, size, node, \
+			rte_atomic32_add_return(&ena_alloc_cnt, 1)); \
+		mz = rte_memzone_reserve(z_name, size, node, \
 			RTE_MEMZONE_IOVA_CONTIG); \
-	mem_handle = mz; \
+		mem_handle = mz; \
+	} \
 	if (mz == NULL) { \
 		virt = NULL; \
 		phys = 0; \

From patchwork Wed Apr 8 08:28:55 2020
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 67970
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk
To: dev@dpdk.org
Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com, igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com, Michal Krawczyk
Date: Wed, 8 Apr 2020 10:28:55 +0200
Message-Id: <20200408082921.31000-5-mk@semihalf.com>
In-Reply-To: <20200408082921.31000-1-mk@semihalf.com>
References: <20200408082921.31000-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 04/30] net/ena/base: generate default, random RSS hash key

Although the RSS key still cannot be set, it is now being generated every time the driver is initialized. Multiple devices can still have the same key if they're used by the same driver.
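The lazy one-time generation pattern used here can be sketched in portable C. This is an illustration only: `example_rss_key_fill()` and `example_keys_match()` are hypothetical names, and plain `rand()` stands in for `rte_rand()`. The point is that the key is generated once per process and every later fill hands out a copy of the same key, which is also why multiple devices driven by one process share it:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define ENA_HASH_KEY_SIZE 40

void example_rss_key_fill(void *key, size_t size)
{
	static bool key_generated;
	static uint8_t default_key[ENA_HASH_KEY_SIZE];
	size_t i;

	/* Generate the default key only on the first call. */
	if (!key_generated) {
		for (i = 0; i < ENA_HASH_KEY_SIZE; ++i)
			default_key[i] = (uint8_t)(rand() & 0xff);
		key_generated = true;
	}
	memcpy(key, default_key, size);
}

/* Two consecutive fills must return the same key within one process. */
int example_keys_match(void)
{
	uint8_t a[ENA_HASH_KEY_SIZE], b[ENA_HASH_KEY_SIZE];

	example_rss_key_fill(a, sizeof(a));
	example_rss_key_fill(b, sizeof(b));
	return memcmp(a, b, sizeof(a)) == 0;
}
```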
Signed-off-by: Michal Krawczyk
Reviewed-by: Igor Chauskin
Reviewed-by: Guy Tzalik
---
v2:
* Remove variable declaration inside the for loop
* Remove unlikely in condition check
v3:
* Fixed commit logs
* Move unrelated changes to the separate patches
* Update the copyright date in the modified files

 drivers/net/ena/base/ena_com.c       | 34 ++++++++++++++++++++--------
 drivers/net/ena/base/ena_com.h       |  3 ++-
 drivers/net/ena/base/ena_plat_dpdk.h |  4 ++++
 drivers/net/ena/ena_ethdev.c         | 17 ++++++++++++++
 4 files changed, 48 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ena/base/ena_com.c b/drivers/net/ena/base/ena_com.c
index 17b51b5a11..38a474b1bd 100644
--- a/drivers/net/ena/base/ena_com.c
+++ b/drivers/net/ena/base/ena_com.c
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (c) 2015-2019 Amazon.com, Inc. or its affiliates.
+ * Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates.
  * All rights reserved.
  */
@@ -1032,6 +1032,19 @@ static int ena_com_get_feature(struct ena_com_dev *ena_dev,
				feature_ver);
 }

+static void ena_com_hash_key_fill_default_key(struct ena_com_dev *ena_dev)
+{
+	struct ena_admin_feature_rss_flow_hash_control *hash_key =
+		(ena_dev->rss).hash_key;
+
+	ENA_RSS_FILL_KEY(&hash_key->key, sizeof(hash_key->key));
+	/* The key is stored in the device in uint32_t array
+	 * as well as the API requires the key to be passed in this
+	 * format. Thus the size of our array should be divided by 4
+	 */
+	hash_key->keys_num = sizeof(hash_key->key) / sizeof(uint32_t);
+}
+
 static int ena_com_hash_key_allocate(struct ena_com_dev *ena_dev)
 {
 	struct ena_rss *rss = &ena_dev->rss;
@@ -2405,15 +2418,16 @@ int ena_com_fill_hash_function(struct ena_com_dev *ena_dev,

 	switch (func) {
 	case ENA_ADMIN_TOEPLITZ:
-		if (key_len > sizeof(hash_key->key)) {
-			ena_trc_err("key len (%hu) is bigger than the max supported (%zu)\n",
-				key_len, sizeof(hash_key->key));
-			return ENA_COM_INVAL;
+		if (key) {
+			if (key_len != sizeof(hash_key->key)) {
+				ena_trc_err("key len (%hu) doesn't equal the supported size (%zu)\n",
+					key_len, sizeof(hash_key->key));
+				return ENA_COM_INVAL;
+			}
+			memcpy(hash_key->key, key, key_len);
+			rss->hash_init_val = init_val;
+			hash_key->keys_num = key_len / sizeof(u32);
 		}
-
-		memcpy(hash_key->key, key, key_len);
-		rss->hash_init_val = init_val;
-		hash_key->keys_num = key_len >> 2;
 		break;
 	case ENA_ADMIN_CRC32:
 		rss->hash_init_val = init_val;
@@ -2738,6 +2752,8 @@ int ena_com_rss_init(struct ena_com_dev *ena_dev, u16 indr_tbl_log_size)
 	if (unlikely(rc))
 		goto err_hash_key;

+	ena_com_hash_key_fill_default_key(ena_dev);
+
 	rc = ena_com_hash_ctrl_init(ena_dev);
 	if (unlikely(rc))
 		goto err_hash_ctrl;
diff --git a/drivers/net/ena/base/ena_com.h b/drivers/net/ena/base/ena_com.h
index f2ef26c91b..d58c802edf 100644
--- a/drivers/net/ena/base/ena_com.h
+++ b/drivers/net/ena/base/ena_com.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (c) 2015-2019 Amazon.com, Inc. or its affiliates.
+ * Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates.
  * All rights reserved.
  */
@@ -53,6 +53,7 @@
 #define ENA_INTR_DELAY_NEW_VALUE_WEIGHT 4
 #define ENA_INTR_MODER_LEVEL_STRIDE 1
 #define ENA_INTR_BYTE_COUNT_NOT_SUPPORTED 0xFFFFFF
+#define ENA_HASH_KEY_SIZE 40

 #define ENA_HW_HINTS_NO_TIMEOUT 0xFFFF
diff --git a/drivers/net/ena/base/ena_plat_dpdk.h b/drivers/net/ena/base/ena_plat_dpdk.h
index 4b8fe017dd..e9b33bc36c 100644
--- a/drivers/net/ena/base/ena_plat_dpdk.h
+++ b/drivers/net/ena/base/ena_plat_dpdk.h
@@ -301,6 +301,10 @@ extern rte_atomic32_t ena_alloc_cnt;

 #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

+void ena_rss_key_fill(void *key, size_t size);
+
+#define ENA_RSS_FILL_KEY(key, size) ena_rss_key_fill(key, size)
+
 #include "ena_includes.h"

 #endif /* DPDK_ENA_COM_ENA_PLAT_DPDK_H_ */
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index e0ed28419c..f1202d99f2 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -256,6 +256,23 @@ static const struct eth_dev_ops ena_dev_ops = {
 	.reta_query = ena_rss_reta_query,
 };

+void ena_rss_key_fill(void *key, size_t size)
+{
+	static bool key_generated;
+	static uint8_t default_key[ENA_HASH_KEY_SIZE];
+	size_t i;
+
+	RTE_ASSERT(size <= ENA_HASH_KEY_SIZE);
+
+	if (!key_generated) {
+		for (i = 0; i < ENA_HASH_KEY_SIZE; ++i)
+			default_key[i] = rte_rand() & 0xff;
+		key_generated = true;
+	}
+
+	rte_memcpy(key, default_key, size);
+}
+
 static inline void ena_rx_mbuf_prepare(struct rte_mbuf *mbuf,
				       struct ena_com_rx_ctx *ena_rx_ctx)
 {

From patchwork Wed Apr 8 08:28:56 2020
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 67971
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk
To: dev@dpdk.org
Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com, igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com, Michal Krawczyk
Date: Wed, 8 Apr 2020 10:28:56 +0200
Message-Id: <20200408082921.31000-6-mk@semihalf.com>
In-Reply-To: <20200408082921.31000-1-mk@semihalf.com>
References: <20200408082921.31000-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 05/30] net/ena/base: fix testing for supported hash func

There was a bug in ena_com_fill_hash_function(), which was causing the bit to be shifted left one bit too much. To fix that, the ENA_FFS macro is used (returning the location of the first set bit), the hash_function value is decremented by 1 if any hash function is supported by the device, and the BIT macro is used for shifting for better verbosity.
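The mask-to-index conversion at the heart of the fix can be shown in isolation. This is a sketch, not the driver code: `hash_func_from_mask()` is a hypothetical helper, but `ENA_FFS` maps to `ffs()` exactly as in the patch. The device reports the selected function as a one-hot bitmask, while the driver enum is a 0-based index; `ffs()` returns the 1-based position of the lowest set bit (or 0 for an empty mask), so the result is decremented only when a bit was actually found:

```c
#include <strings.h> /* ffs() */

#define ENA_FFS(x) ffs(x)

int hash_func_from_mask(unsigned int selected_func_mask)
{
	/* ENA_FFS returns 1 in case the lsb is set */
	int func = ENA_FFS(selected_func_mask);

	if (func)
		func--; /* convert 1-based bit position to 0-based enum */
	return func;
}
```

For example, a mask of 0x1 (Toeplitz as bit 0) maps back to enum value 0 instead of being treated as a shift count and shifted again.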
Signed-off-by: Michal Krawczyk
---
v3:
* This patch was added - previously part of the v2-04

 drivers/net/ena/base/ena_com.c       | 19 +++++++++++++------
 drivers/net/ena/base/ena_plat_dpdk.h |  2 ++
 2 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ena/base/ena_com.c b/drivers/net/ena/base/ena_com.c
index 38a474b1bd..04f5d21d6f 100644
--- a/drivers/net/ena/base/ena_com.c
+++ b/drivers/net/ena/base/ena_com.c
@@ -2394,12 +2394,14 @@ int ena_com_fill_hash_function(struct ena_com_dev *ena_dev,
			       enum ena_admin_hash_functions func,
			       const u8 *key, u16 key_len, u32 init_val)
 {
-	struct ena_rss *rss = &ena_dev->rss;
+	struct ena_admin_feature_rss_flow_hash_control *hash_key;
 	struct ena_admin_get_feat_resp get_resp;
-	struct ena_admin_feature_rss_flow_hash_control *hash_key =
-		rss->hash_key;
+	enum ena_admin_hash_functions old_func;
+	struct ena_rss *rss = &ena_dev->rss;
 	int rc;

+	hash_key = rss->hash_key;
+
 	/* Make sure size is a mult of DWs */
 	if (unlikely(key_len & 0x3))
 		return ENA_COM_INVAL;
@@ -2411,7 +2413,7 @@ int ena_com_fill_hash_function(struct ena_com_dev *ena_dev,
 	if (unlikely(rc))
 		return rc;

-	if (!((1 << func) & get_resp.u.flow_hash_func.supported_func)) {
+	if (!(BIT(func) & get_resp.u.flow_hash_func.supported_func)) {
 		ena_trc_err("Flow hash function %d isn't supported\n", func);
 		return ENA_COM_UNSUPPORTED;
 	}
@@ -2437,12 +2439,13 @@ int ena_com_fill_hash_function(struct ena_com_dev *ena_dev,
 		return ENA_COM_INVAL;
 	}

+	old_func = rss->hash_func;
 	rss->hash_func = func;
 	rc = ena_com_set_hash_function(ena_dev);

 	/* Restore the old function */
 	if (unlikely(rc))
-		ena_com_get_hash_function(ena_dev, NULL, NULL);
+		rss->hash_func = old_func;

 	return rc;
 }
@@ -2464,7 +2467,11 @@ int ena_com_get_hash_function(struct ena_com_dev *ena_dev,
 	if (unlikely(rc))
 		return rc;

-	rss->hash_func = get_resp.u.flow_hash_func.selected_func;
+	/* ENA_FFS returns 1 in case the lsb is set */
+	rss->hash_func = ENA_FFS(get_resp.u.flow_hash_func.selected_func);
+	if (rss->hash_func)
+		rss->hash_func--;
+
 	if (func)
 		*func = rss->hash_func;
diff --git a/drivers/net/ena/base/ena_plat_dpdk.h b/drivers/net/ena/base/ena_plat_dpdk.h
index e9b33bc36c..e9b3c02270 100644
--- a/drivers/net/ena/base/ena_plat_dpdk.h
+++ b/drivers/net/ena/base/ena_plat_dpdk.h
@@ -301,6 +301,8 @@ extern rte_atomic32_t ena_alloc_cnt;

 #define DIV_ROUND_UP(n, d) (((n) + (d) - 1) / (d))

+#define ENA_FFS(x) ffs(x)
+
 void ena_rss_key_fill(void *key, size_t size);

 #define ENA_RSS_FILL_KEY(key, size) ena_rss_key_fill(key, size)

From patchwork Wed Apr 8 08:28:57 2020
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 67972
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk To: dev@dpdk.org Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com, igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com, Michal Krawczyk Date: Wed, 8 Apr 2020 10:28:57 +0200 Message-Id: <20200408082921.31000-7-mk@semihalf.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200408082921.31000-1-mk@semihalf.com> References: <20200408082921.31000-1-mk@semihalf.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v3 06/30] net/ena/base: remove conversion of the ind tbl X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" After the indirection table is saved in the device, there is no need to convert it back, as it is already saved in the host_rss_ind_tbl array. As a result, the call to ena_com_ind_tbl_convert_from_device() is not needed.
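The pattern this patch relies on can be sketched in a standalone way. This is a minimal illustration, not the driver code: the table size, the host_to_dev queue-ID map, and the function names are hypothetical; only the idea mirrors ena_com, namely that the host-order indirection table is stored at "set" time, so a "get" copies it back directly and no device-to-host translation is ever required.

```c
#include <stdint.h>

/* Standalone sketch (hypothetical names) of why the reverse conversion
 * can go away: the host-order table is saved when the table is set, so
 * reading it back needs no translation of device queue IDs back to host
 * queue indices.
 */
#define TBL_SIZE 8

static const uint16_t host_to_dev[4] = { 10, 11, 12, 13 }; /* host idx -> io_sq ID */
static uint16_t host_ind_tbl[TBL_SIZE]; /* host queue indices (source of truth) */
static uint16_t dev_ind_tbl[TBL_SIZE];  /* device queue IDs, written to HW */

static void ind_tbl_set(const uint16_t *tbl)
{
	for (int i = 0; i < TBL_SIZE; i++) {
		host_ind_tbl[i] = tbl[i];             /* host copy kept here */
		dev_ind_tbl[i] = host_to_dev[tbl[i]]; /* one-way conversion  */
	}
}

static void ind_tbl_get(uint16_t *tbl)
{
	/* No equivalent of ena_com_ind_tbl_convert_from_device() is needed:
	 * the host copy is authoritative.
	 */
	for (int i = 0; i < TBL_SIZE; i++)
		tbl[i] = host_ind_tbl[i];
}
```

The one-way conversion still runs on the set path (the real driver keeps ena_com_ind_tbl_convert_to_device()); only the read-back path is simplified.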
Signed-off-by: Michal Krawczyk --- v3: * This patch was added - previously part of the v2-04 drivers/net/ena/base/ena_com.c | 28 ---------------------------- 1 file changed, 28 deletions(-) diff --git a/drivers/net/ena/base/ena_com.c b/drivers/net/ena/base/ena_com.c index 04f5d21d6f..d500e1ddfb 100644 --- a/drivers/net/ena/base/ena_com.c +++ b/drivers/net/ena/base/ena_com.c @@ -1279,30 +1279,6 @@ static int ena_com_ind_tbl_convert_to_device(struct ena_com_dev *ena_dev) return 0; } -static int ena_com_ind_tbl_convert_from_device(struct ena_com_dev *ena_dev) -{ - u16 dev_idx_to_host_tbl[ENA_TOTAL_NUM_QUEUES] = { (u16)-1 }; - struct ena_rss *rss = &ena_dev->rss; - u8 idx; - u16 i; - - for (i = 0; i < ENA_TOTAL_NUM_QUEUES; i++) - dev_idx_to_host_tbl[ena_dev->io_sq_queues[i].idx] = i; - - for (i = 0; i < 1 << rss->tbl_log_size; i++) { - if (rss->rss_ind_tbl[i].cq_idx > ENA_TOTAL_NUM_QUEUES) - return ENA_COM_INVAL; - idx = (u8)rss->rss_ind_tbl[i].cq_idx; - - if (dev_idx_to_host_tbl[idx] > ENA_TOTAL_NUM_QUEUES) - return ENA_COM_INVAL; - - rss->host_rss_ind_tbl[i] = dev_idx_to_host_tbl[idx]; - } - - return 0; -} - static int ena_com_init_interrupt_moderation_table(struct ena_com_dev *ena_dev) { size_t size; @@ -2735,10 +2711,6 @@ int ena_com_indirect_table_get(struct ena_com_dev *ena_dev, u32 *ind_tbl) if (!ind_tbl) return 0; - rc = ena_com_ind_tbl_convert_from_device(ena_dev); - if (unlikely(rc)) - return rc; - for (i = 0; i < (1 << rss->tbl_log_size); i++) ind_tbl[i] = rss->host_rss_ind_tbl[i]; From patchwork Wed Apr 8 08:28:58 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Krawczyk X-Patchwork-Id: 67973 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id 59123A0597; Wed, 8 Apr 2020 10:30:40 +0200 (CEST) Received: from 
From: Michal Krawczyk To: dev@dpdk.org Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com, igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com, Michal Krawczyk Date: Wed, 8 Apr 2020 10:28:58 +0200 Message-Id: <20200408082921.31000-8-mk@semihalf.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200408082921.31000-1-mk@semihalf.com> References: <20200408082921.31000-1-mk@semihalf.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v3 07/30] net/ena/base: rework interrupt moderation X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This feature allows for adaptive interrupt moderation. It's not used by the DPDK PMD, but is a part of the newest HAL version.
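The interval rescaling at the heart of this rework can be modeled in isolation. A minimal sketch, assuming the same arithmetic as the patched ena_com_update_intr_delay_resolution(): intervals are stored in units of the device-supplied delay resolution, so when the resolution changes, each stored interval is multiplied by the previous resolution and divided by the new one, keeping the usec value it represents unchanged. The struct and function names here are illustrative, not the real ena_com types.

```c
#include <stdint.h>

/* Mirrors ENA_DEFAULT_INTR_DELAY_RESOLUTION from the patch. */
#define DEFAULT_INTR_DELAY_RESOLUTION 1

struct moder_state {
	uint16_t intr_delay_resolution;
	uint32_t intr_moder_tx_interval; /* in resolution units */
	uint32_t intr_moder_rx_interval; /* in resolution units */
};

static void update_intr_delay_resolution(struct moder_state *s, uint16_t res)
{
	uint16_t prev = s->intr_delay_resolution;

	/* Illegal (zero) resolution: fall back to 1 usec, as the patch does. */
	if (res == 0)
		res = DEFAULT_INTR_DELAY_RESOLUTION;

	/* Rx and Tx are rescaled the same way: new = old * prev / res. */
	s->intr_moder_rx_interval = s->intr_moder_rx_interval * prev / res;
	s->intr_moder_tx_interval = s->intr_moder_tx_interval * prev / res;
	s->intr_delay_resolution = res;
}
```

This is also why the patch records prev_intr_delay_resolution before overwriting the field: the old value is needed for the rescaling, and intr_delay_resolution is only updated once both intervals have been converted.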
Signed-off-by: Michal Krawczyk Reviewed-by: Igor Chauskin Reviewed-by: Guy Tzalik --- drivers/net/ena/base/ena_com.c | 171 +++++---------------------- drivers/net/ena/base/ena_com.h | 154 ++---------------------- drivers/net/ena/base/ena_plat_dpdk.h | 3 +- 3 files changed, 42 insertions(+), 286 deletions(-) diff --git a/drivers/net/ena/base/ena_com.c b/drivers/net/ena/base/ena_com.c index d500e1ddfb..8a1cac3944 100644 --- a/drivers/net/ena/base/ena_com.c +++ b/drivers/net/ena/base/ena_com.c @@ -1279,39 +1279,29 @@ static int ena_com_ind_tbl_convert_to_device(struct ena_com_dev *ena_dev) return 0; } -static int ena_com_init_interrupt_moderation_table(struct ena_com_dev *ena_dev) -{ - size_t size; - - size = sizeof(struct ena_intr_moder_entry) * ENA_INTR_MAX_NUM_OF_LEVELS; - - ena_dev->intr_moder_tbl = ENA_MEM_ALLOC(ena_dev->dmadev, size); - if (!ena_dev->intr_moder_tbl) - return ENA_COM_NO_MEM; - - ena_com_config_default_interrupt_moderation_table(ena_dev); - - return 0; -} - static void ena_com_update_intr_delay_resolution(struct ena_com_dev *ena_dev, u16 intr_delay_resolution) { - struct ena_intr_moder_entry *intr_moder_tbl = ena_dev->intr_moder_tbl; - unsigned int i; + u16 prev_intr_delay_resolution = ena_dev->intr_delay_resolution; - if (!intr_delay_resolution) { + if (unlikely(!intr_delay_resolution)) { ena_trc_err("Illegal intr_delay_resolution provided. 
Going to use default 1 usec resolution\n"); - intr_delay_resolution = 1; + intr_delay_resolution = ENA_DEFAULT_INTR_DELAY_RESOLUTION; } - ena_dev->intr_delay_resolution = intr_delay_resolution; /* update Rx */ - for (i = 0; i < ENA_INTR_MAX_NUM_OF_LEVELS; i++) - intr_moder_tbl[i].intr_moder_interval /= intr_delay_resolution; + ena_dev->intr_moder_rx_interval = + ena_dev->intr_moder_rx_interval * + prev_intr_delay_resolution / + intr_delay_resolution; /* update Tx */ - ena_dev->intr_moder_tx_interval /= intr_delay_resolution; + ena_dev->intr_moder_tx_interval = + ena_dev->intr_moder_tx_interval * + prev_intr_delay_resolution / + intr_delay_resolution; + + ena_dev->intr_delay_resolution = intr_delay_resolution; } /*****************************************************************************/ @@ -2880,44 +2870,35 @@ bool ena_com_interrupt_moderation_supported(struct ena_com_dev *ena_dev) ENA_ADMIN_INTERRUPT_MODERATION); } -int ena_com_update_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_dev, - u32 tx_coalesce_usecs) +static int ena_com_update_nonadaptive_moderation_interval(u32 coalesce_usecs, + u32 intr_delay_resolution, + u32 *intr_moder_interval) { - if (!ena_dev->intr_delay_resolution) { + if (!intr_delay_resolution) { ena_trc_err("Illegal interrupt delay granularity value\n"); return ENA_COM_FAULT; } - ena_dev->intr_moder_tx_interval = tx_coalesce_usecs / - ena_dev->intr_delay_resolution; + *intr_moder_interval = coalesce_usecs / intr_delay_resolution; return 0; } -int ena_com_update_nonadaptive_moderation_interval_rx(struct ena_com_dev *ena_dev, - u32 rx_coalesce_usecs) -{ - if (!ena_dev->intr_delay_resolution) { - ena_trc_err("Illegal interrupt delay granularity value\n"); - return ENA_COM_FAULT; - } - /* We use LOWEST entry of moderation table for storing - * nonadaptive interrupt coalescing values - */ - ena_dev->intr_moder_tbl[ENA_INTR_MODER_LOWEST].intr_moder_interval = - rx_coalesce_usecs / ena_dev->intr_delay_resolution; - - return 0; +int 
ena_com_update_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_dev, + u32 tx_coalesce_usecs) +{ + return ena_com_update_nonadaptive_moderation_interval(tx_coalesce_usecs, + ena_dev->intr_delay_resolution, + &ena_dev->intr_moder_tx_interval); } -void ena_com_destroy_interrupt_moderation(struct ena_com_dev *ena_dev) +int ena_com_update_nonadaptive_moderation_interval_rx(struct ena_com_dev *ena_dev, + u32 rx_coalesce_usecs) { - if (ena_dev->intr_moder_tbl) - ENA_MEM_FREE(ena_dev->dmadev, - ena_dev->intr_moder_tbl, - (sizeof(struct ena_intr_moder_entry) * ENA_INTR_MAX_NUM_OF_LEVELS)); - ena_dev->intr_moder_tbl = NULL; + return ena_com_update_nonadaptive_moderation_interval(rx_coalesce_usecs, + ena_dev->intr_delay_resolution, + &ena_dev->intr_moder_rx_interval); } int ena_com_init_interrupt_moderation(struct ena_com_dev *ena_dev) @@ -2944,10 +2925,6 @@ int ena_com_init_interrupt_moderation(struct ena_com_dev *ena_dev) return rc; } - rc = ena_com_init_interrupt_moderation_table(ena_dev); - if (rc) - goto err; - /* if moderation is supported by device we set adaptive moderation */ delay_resolution = get_resp.u.intr_moderation.intr_delay_resolution; ena_com_update_intr_delay_resolution(ena_dev, delay_resolution); @@ -2956,52 +2933,6 @@ int ena_com_init_interrupt_moderation(struct ena_com_dev *ena_dev) ena_com_disable_adaptive_moderation(ena_dev); return 0; -err: - ena_com_destroy_interrupt_moderation(ena_dev); - return rc; -} - -void ena_com_config_default_interrupt_moderation_table(struct ena_com_dev *ena_dev) -{ - struct ena_intr_moder_entry *intr_moder_tbl = ena_dev->intr_moder_tbl; - - if (!intr_moder_tbl) - return; - - intr_moder_tbl[ENA_INTR_MODER_LOWEST].intr_moder_interval = - ENA_INTR_LOWEST_USECS; - intr_moder_tbl[ENA_INTR_MODER_LOWEST].pkts_per_interval = - ENA_INTR_LOWEST_PKTS; - intr_moder_tbl[ENA_INTR_MODER_LOWEST].bytes_per_interval = - ENA_INTR_LOWEST_BYTES; - - intr_moder_tbl[ENA_INTR_MODER_LOW].intr_moder_interval = - ENA_INTR_LOW_USECS; - 
intr_moder_tbl[ENA_INTR_MODER_LOW].pkts_per_interval = - ENA_INTR_LOW_PKTS; - intr_moder_tbl[ENA_INTR_MODER_LOW].bytes_per_interval = - ENA_INTR_LOW_BYTES; - - intr_moder_tbl[ENA_INTR_MODER_MID].intr_moder_interval = - ENA_INTR_MID_USECS; - intr_moder_tbl[ENA_INTR_MODER_MID].pkts_per_interval = - ENA_INTR_MID_PKTS; - intr_moder_tbl[ENA_INTR_MODER_MID].bytes_per_interval = - ENA_INTR_MID_BYTES; - - intr_moder_tbl[ENA_INTR_MODER_HIGH].intr_moder_interval = - ENA_INTR_HIGH_USECS; - intr_moder_tbl[ENA_INTR_MODER_HIGH].pkts_per_interval = - ENA_INTR_HIGH_PKTS; - intr_moder_tbl[ENA_INTR_MODER_HIGH].bytes_per_interval = - ENA_INTR_HIGH_BYTES; - - intr_moder_tbl[ENA_INTR_MODER_HIGHEST].intr_moder_interval = - ENA_INTR_HIGHEST_USECS; - intr_moder_tbl[ENA_INTR_MODER_HIGHEST].pkts_per_interval = - ENA_INTR_HIGHEST_PKTS; - intr_moder_tbl[ENA_INTR_MODER_HIGHEST].bytes_per_interval = - ENA_INTR_HIGHEST_BYTES; } unsigned int ena_com_get_nonadaptive_moderation_interval_tx(struct ena_com_dev *ena_dev) @@ -3011,49 +2942,7 @@ unsigned int ena_com_get_nonadaptive_moderation_interval_tx(struct ena_com_dev * unsigned int ena_com_get_nonadaptive_moderation_interval_rx(struct ena_com_dev *ena_dev) { - struct ena_intr_moder_entry *intr_moder_tbl = ena_dev->intr_moder_tbl; - - if (intr_moder_tbl) - return intr_moder_tbl[ENA_INTR_MODER_LOWEST].intr_moder_interval; - - return 0; -} - -void ena_com_init_intr_moderation_entry(struct ena_com_dev *ena_dev, - enum ena_intr_moder_level level, - struct ena_intr_moder_entry *entry) -{ - struct ena_intr_moder_entry *intr_moder_tbl = ena_dev->intr_moder_tbl; - - if (level >= ENA_INTR_MAX_NUM_OF_LEVELS) - return; - - intr_moder_tbl[level].intr_moder_interval = entry->intr_moder_interval; - if (ena_dev->intr_delay_resolution) - intr_moder_tbl[level].intr_moder_interval /= - ena_dev->intr_delay_resolution; - intr_moder_tbl[level].pkts_per_interval = entry->pkts_per_interval; - - /* use hardcoded value until ethtool supports bytecount parameter */ - if 
(entry->bytes_per_interval != ENA_INTR_BYTE_COUNT_NOT_SUPPORTED) - intr_moder_tbl[level].bytes_per_interval = entry->bytes_per_interval; -} - -void ena_com_get_intr_moderation_entry(struct ena_com_dev *ena_dev, - enum ena_intr_moder_level level, - struct ena_intr_moder_entry *entry) -{ - struct ena_intr_moder_entry *intr_moder_tbl = ena_dev->intr_moder_tbl; - - if (level >= ENA_INTR_MAX_NUM_OF_LEVELS) - return; - - entry->intr_moder_interval = intr_moder_tbl[level].intr_moder_interval; - if (ena_dev->intr_delay_resolution) - entry->intr_moder_interval *= ena_dev->intr_delay_resolution; - entry->pkts_per_interval = - intr_moder_tbl[level].pkts_per_interval; - entry->bytes_per_interval = intr_moder_tbl[level].bytes_per_interval; + return ena_dev->intr_moder_rx_interval; } int ena_com_config_dev_mode(struct ena_com_dev *ena_dev, diff --git a/drivers/net/ena/base/ena_com.h b/drivers/net/ena/base/ena_com.h index d58c802edf..88cf6a896c 100644 --- a/drivers/net/ena/base/ena_com.h +++ b/drivers/net/ena/base/ena_com.h @@ -27,47 +27,16 @@ /*****************************************************************************/ /* ENA adaptive interrupt moderation settings */ -#define ENA_INTR_LOWEST_USECS (0) -#define ENA_INTR_LOWEST_PKTS (3) -#define ENA_INTR_LOWEST_BYTES (2 * 1524) - -#define ENA_INTR_LOW_USECS (32) -#define ENA_INTR_LOW_PKTS (12) -#define ENA_INTR_LOW_BYTES (16 * 1024) - -#define ENA_INTR_MID_USECS (80) -#define ENA_INTR_MID_PKTS (48) -#define ENA_INTR_MID_BYTES (64 * 1024) - -#define ENA_INTR_HIGH_USECS (128) -#define ENA_INTR_HIGH_PKTS (96) -#define ENA_INTR_HIGH_BYTES (128 * 1024) - -#define ENA_INTR_HIGHEST_USECS (192) -#define ENA_INTR_HIGHEST_PKTS (128) -#define ENA_INTR_HIGHEST_BYTES (192 * 1024) - -#define ENA_INTR_INITIAL_TX_INTERVAL_USECS 196 -#define ENA_INTR_INITIAL_RX_INTERVAL_USECS 4 -#define ENA_INTR_DELAY_OLD_VALUE_WEIGHT 6 -#define ENA_INTR_DELAY_NEW_VALUE_WEIGHT 4 -#define ENA_INTR_MODER_LEVEL_STRIDE 1 -#define ENA_INTR_BYTE_COUNT_NOT_SUPPORTED 
0xFFFFFF +#define ENA_INTR_INITIAL_TX_INTERVAL_USECS ENA_INTR_INITIAL_TX_INTERVAL_USECS_PLAT +#define ENA_INTR_INITIAL_RX_INTERVAL_USECS 0 +#define ENA_DEFAULT_INTR_DELAY_RESOLUTION 1 + #define ENA_HASH_KEY_SIZE 40 #define ENA_HW_HINTS_NO_TIMEOUT 0xFFFF #define ENA_FEATURE_MAX_QUEUE_EXT_VER 1 -enum ena_intr_moder_level { - ENA_INTR_MODER_LOWEST = 0, - ENA_INTR_MODER_LOW, - ENA_INTR_MODER_MID, - ENA_INTR_MODER_HIGH, - ENA_INTR_MODER_HIGHEST, - ENA_INTR_MAX_NUM_OF_LEVELS, -}; - struct ena_llq_configurations { enum ena_admin_llq_header_location llq_header_location; enum ena_admin_llq_ring_entry_size llq_ring_entry_size; @@ -76,12 +45,6 @@ struct ena_llq_configurations { u16 llq_ring_entry_size_value; }; -struct ena_intr_moder_entry { - unsigned int intr_moder_interval; - unsigned int pkts_per_interval; - unsigned int bytes_per_interval; -}; - enum queue_direction { ENA_COM_IO_QUEUE_DIRECTION_TX, ENA_COM_IO_QUEUE_DIRECTION_RX @@ -353,7 +316,13 @@ struct ena_com_dev { struct ena_host_attribute host_attr; bool adaptive_coalescing; u16 intr_delay_resolution; + + /* interrupt moderation intervals are in usec divided by + * intr_delay_resolution, which is supplied by the device. + */ u32 intr_moder_tx_interval; + u32 intr_moder_rx_interval; + struct ena_intr_moder_entry *intr_moder_tbl; struct ena_com_llq_info llq_info; @@ -921,11 +890,6 @@ int ena_com_execute_admin_command(struct ena_com_admin_queue *admin_queue, */ int ena_com_init_interrupt_moderation(struct ena_com_dev *ena_dev); -/* ena_com_destroy_interrupt_moderation - Destroy interrupt moderation resources - * @ena_dev: ENA communication layer struct - */ -void ena_com_destroy_interrupt_moderation(struct ena_com_dev *ena_dev); - /* ena_com_interrupt_moderation_supported - Return if interrupt moderation * capability is supported by the device. 
* @@ -933,12 +897,6 @@ void ena_com_destroy_interrupt_moderation(struct ena_com_dev *ena_dev); */ bool ena_com_interrupt_moderation_supported(struct ena_com_dev *ena_dev); -/* ena_com_config_default_interrupt_moderation_table - Restore the interrupt - * moderation table back to the default parameters. - * @ena_dev: ENA communication layer struct - */ -void ena_com_config_default_interrupt_moderation_table(struct ena_com_dev *ena_dev); - /* ena_com_update_nonadaptive_moderation_interval_tx - Update the * non-adaptive interval in Tx direction. * @ena_dev: ENA communication layer struct @@ -975,29 +933,6 @@ unsigned int ena_com_get_nonadaptive_moderation_interval_tx(struct ena_com_dev * */ unsigned int ena_com_get_nonadaptive_moderation_interval_rx(struct ena_com_dev *ena_dev); -/* ena_com_init_intr_moderation_entry - Update a single entry in the interrupt - * moderation table. - * @ena_dev: ENA communication layer struct - * @level: Interrupt moderation table level - * @entry: Entry value - * - * Update a single entry in the interrupt moderation table. - */ -void ena_com_init_intr_moderation_entry(struct ena_com_dev *ena_dev, - enum ena_intr_moder_level level, - struct ena_intr_moder_entry *entry); - -/* ena_com_get_intr_moderation_entry - Init ena_intr_moder_entry. - * @ena_dev: ENA communication layer struct - * @level: Interrupt moderation table level - * @entry: Entry to fill. - * - * Initialize the entry according to the adaptive interrupt moderation table. - */ -void ena_com_get_intr_moderation_entry(struct ena_com_dev *ena_dev, - enum ena_intr_moder_level level, - struct ena_intr_moder_entry *entry); - /* ena_com_config_dev_mode - Configure the placement policy of the device. 
* @ena_dev: ENA communication layer struct * @llq_features: LLQ feature descriptor, retrieve via @@ -1023,75 +958,6 @@ static inline void ena_com_disable_adaptive_moderation(struct ena_com_dev *ena_d ena_dev->adaptive_coalescing = false; } -/* ena_com_calculate_interrupt_delay - Calculate new interrupt delay - * @ena_dev: ENA communication layer struct - * @pkts: Number of packets since the last update - * @bytes: Number of bytes received since the last update. - * @smoothed_interval: Returned interval - * @moder_tbl_idx: Current table level as input update new level as return - * value. - */ -static inline void ena_com_calculate_interrupt_delay(struct ena_com_dev *ena_dev, - unsigned int pkts, - unsigned int bytes, - unsigned int *smoothed_interval, - unsigned int *moder_tbl_idx) -{ - enum ena_intr_moder_level curr_moder_idx, new_moder_idx; - struct ena_intr_moder_entry *curr_moder_entry; - struct ena_intr_moder_entry *pred_moder_entry; - struct ena_intr_moder_entry *new_moder_entry; - struct ena_intr_moder_entry *intr_moder_tbl = ena_dev->intr_moder_tbl; - unsigned int interval; - - /* We apply adaptive moderation on Rx path only. - * Tx uses static interrupt moderation. 
- */ - if (!pkts || !bytes) - /* Tx interrupt, or spurious interrupt, - * in both cases we just use same delay values - */ - return; - - curr_moder_idx = (enum ena_intr_moder_level)(*moder_tbl_idx); - if (unlikely(curr_moder_idx >= ENA_INTR_MAX_NUM_OF_LEVELS)) { - ena_trc_err("Wrong moderation index %u\n", curr_moder_idx); - return; - } - - curr_moder_entry = &intr_moder_tbl[curr_moder_idx]; - new_moder_idx = curr_moder_idx; - - if (curr_moder_idx == ENA_INTR_MODER_LOWEST) { - if ((pkts > curr_moder_entry->pkts_per_interval) || - (bytes > curr_moder_entry->bytes_per_interval)) - new_moder_idx = - (enum ena_intr_moder_level)(curr_moder_idx + ENA_INTR_MODER_LEVEL_STRIDE); - } else { - pred_moder_entry = &intr_moder_tbl[curr_moder_idx - ENA_INTR_MODER_LEVEL_STRIDE]; - - if ((pkts <= pred_moder_entry->pkts_per_interval) || - (bytes <= pred_moder_entry->bytes_per_interval)) - new_moder_idx = - (enum ena_intr_moder_level)(curr_moder_idx - ENA_INTR_MODER_LEVEL_STRIDE); - else if ((pkts > curr_moder_entry->pkts_per_interval) || - (bytes > curr_moder_entry->bytes_per_interval)) { - if (curr_moder_idx != ENA_INTR_MODER_HIGHEST) - new_moder_idx = - (enum ena_intr_moder_level)(curr_moder_idx + ENA_INTR_MODER_LEVEL_STRIDE); - } - } - new_moder_entry = &intr_moder_tbl[new_moder_idx]; - - interval = new_moder_entry->intr_moder_interval; - *smoothed_interval = ( - (interval * ENA_INTR_DELAY_NEW_VALUE_WEIGHT + - ENA_INTR_DELAY_OLD_VALUE_WEIGHT * (*smoothed_interval)) + 5) / - 10; - - *moder_tbl_idx = new_moder_idx; -} - /* ena_com_update_intr_reg - Prepare interrupt register * @intr_reg: interrupt register to update. 
* @rx_delay_interval: Rx interval in usecs diff --git a/drivers/net/ena/base/ena_plat_dpdk.h b/drivers/net/ena/base/ena_plat_dpdk.h index e9b3c02270..a0f088c9b8 100644 --- a/drivers/net/ena/base/ena_plat_dpdk.h +++ b/drivers/net/ena/base/ena_plat_dpdk.h @@ -307,6 +307,7 @@ void ena_rss_key_fill(void *key, size_t size); #define ENA_RSS_FILL_KEY(key, size) ena_rss_key_fill(key, size) -#include "ena_includes.h" +#define ENA_INTR_INITIAL_TX_INTERVAL_USECS_PLAT 0 +#include "ena_includes.h" #endif /* DPDK_ENA_COM_ENA_PLAT_DPDK_H_ */ From patchwork Wed Apr 8 08:28:59 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Krawczyk X-Patchwork-Id: 67974 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org Received: from dpdk.org (dpdk.org [92.243.14.124]) by inbox.dpdk.org (Postfix) with ESMTP id BF93AA0597; Wed, 8 Apr 2020 10:30:53 +0200 (CEST) Received: from [92.243.14.124] (localhost [127.0.0.1]) by dpdk.org (Postfix) with ESMTP id B50921C10D; Wed, 8 Apr 2020 10:29:40 +0200 (CEST) Received: from mail-lj1-f196.google.com (mail-lj1-f196.google.com [209.85.208.196]) by dpdk.org (Postfix) with ESMTP id 6391B1C068 for ; Wed, 8 Apr 2020 10:29:35 +0200 (CEST) Received: by mail-lj1-f196.google.com with SMTP id k21so6720387ljh.2 for ; Wed, 08 Apr 2020 01:29:35 -0700 (PDT) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=semihalf-com.20150623.gappssmtp.com; s=20150623; h=from:to:cc:subject:date:message-id:in-reply-to:references :mime-version:content-transfer-encoding; bh=4icfGNlPaVSpnOipmNSHlP+zHcMn1gYBH+vHWNedBho=; b=jy5WQhIO/e+GH4Ce1tAkSKcgOykt8UGiScEAa3tYa/wKl2vpINgowQNxe0ghVZC9bq acBXi9swyYCFdcIQ824IuYaaCv1kWT1JsiO0YIG3KamkWiDanUiRoDEbMPLoUMzGeZB9 NqSg1T2NVr3qxXId79JxeM9WV1GaYSebe9uZiASFgtcsx/5A6yerhxxH/Lcdv/8KqkEi zs3/O99z096xI1jJPXkjO8/wal1gRaEYKU+2Qsi3w5yZKXAwjfjnyq3zswhhLbzmLeMm 
From: Michal Krawczyk To: dev@dpdk.org Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com, igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com, Michal Krawczyk Date: Wed, 8 Apr 2020 10:28:59 +0200 Message-Id: <20200408082921.31000-9-mk@semihalf.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200408082921.31000-1-mk@semihalf.com> References: <20200408082921.31000-1-mk@semihalf.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v3 08/30] net/ena/base: remove extra properties strings X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" This buffer was never used by the ENA PMD. It could be used for debugging, but its presence is redundant now.
Signed-off-by: Michal Krawczyk Reviewed-by: Igor Chauskin Reviewed-by: Guy Tzalik --- drivers/net/ena/base/ena_com.c | 56 ---------------------------------- drivers/net/ena/base/ena_com.h | 33 -------------------- 2 files changed, 89 deletions(-) diff --git a/drivers/net/ena/base/ena_com.c b/drivers/net/ena/base/ena_com.c index 8a1cac3944..7e0aa21e03 100644 --- a/drivers/net/ena/base/ena_com.c +++ b/drivers/net/ena/base/ena_com.c @@ -1905,62 +1905,6 @@ int ena_com_get_link_params(struct ena_com_dev *ena_dev, return ena_com_get_feature(ena_dev, resp, ENA_ADMIN_LINK_CONFIG, 0); } -int ena_com_extra_properties_strings_init(struct ena_com_dev *ena_dev) -{ - struct ena_admin_get_feat_resp resp; - struct ena_extra_properties_strings *extra_properties_strings = - &ena_dev->extra_properties_strings; - u32 rc; - extra_properties_strings->size = ENA_ADMIN_EXTRA_PROPERTIES_COUNT * - ENA_ADMIN_EXTRA_PROPERTIES_STRING_LEN; - - ENA_MEM_ALLOC_COHERENT(ena_dev->dmadev, - extra_properties_strings->size, - extra_properties_strings->virt_addr, - extra_properties_strings->dma_addr, - extra_properties_strings->dma_handle); - if (unlikely(!extra_properties_strings->virt_addr)) { - ena_trc_err("Failed to allocate extra properties strings\n"); - return 0; - } - - rc = ena_com_get_feature_ex(ena_dev, &resp, - ENA_ADMIN_EXTRA_PROPERTIES_STRINGS, - extra_properties_strings->dma_addr, - extra_properties_strings->size, 0); - if (rc) { - ena_trc_dbg("Failed to get extra properties strings\n"); - goto err; - } - - return resp.u.extra_properties_strings.count; -err: - ena_com_delete_extra_properties_strings(ena_dev); - return 0; -} - -void ena_com_delete_extra_properties_strings(struct ena_com_dev *ena_dev) -{ - struct ena_extra_properties_strings *extra_properties_strings = - &ena_dev->extra_properties_strings; - - if (extra_properties_strings->virt_addr) { - ENA_MEM_FREE_COHERENT(ena_dev->dmadev, - extra_properties_strings->size, - extra_properties_strings->virt_addr, - 
extra_properties_strings->dma_addr, - extra_properties_strings->dma_handle); - extra_properties_strings->virt_addr = NULL; - } -} - -int ena_com_get_extra_properties_flags(struct ena_com_dev *ena_dev, - struct ena_admin_get_feat_resp *resp) -{ - return ena_com_get_feature(ena_dev, resp, - ENA_ADMIN_EXTRA_PROPERTIES_FLAGS, 0); -} - int ena_com_get_dev_attr_feat(struct ena_com_dev *ena_dev, struct ena_com_dev_get_features_ctx *get_feat_ctx) { diff --git a/drivers/net/ena/base/ena_com.h b/drivers/net/ena/base/ena_com.h index 88cf6a896c..b613df1c77 100644 --- a/drivers/net/ena/base/ena_com.h +++ b/drivers/net/ena/base/ena_com.h @@ -284,13 +284,6 @@ struct ena_host_attribute { ena_mem_handle_t host_info_dma_handle; }; -struct ena_extra_properties_strings { - u8 *virt_addr; - dma_addr_t dma_addr; - ena_mem_handle_t dma_handle; - u32 size; -}; - /* Each ena_dev is a PCI function. */ struct ena_com_dev { struct ena_com_admin_queue admin_queue; @@ -326,7 +319,6 @@ struct ena_com_dev { struct ena_intr_moder_entry *intr_moder_tbl; struct ena_com_llq_info llq_info; - struct ena_extra_properties_strings extra_properties_strings; }; struct ena_com_dev_get_features_ctx { @@ -564,31 +556,6 @@ int ena_com_validate_version(struct ena_com_dev *ena_dev); int ena_com_get_link_params(struct ena_com_dev *ena_dev, struct ena_admin_get_feat_resp *resp); -/* ena_com_extra_properties_strings_init - Initialize the extra properties strings buffer. - * @ena_dev: ENA communication layer struct - * - * Initialize the extra properties strings buffer. - */ -int ena_com_extra_properties_strings_init(struct ena_com_dev *ena_dev); - -/* ena_com_delete_extra_properties_strings - Free the extra properties strings buffer. - * @ena_dev: ENA communication layer struct - * - * Free the allocated extra properties strings buffer. - */ -void ena_com_delete_extra_properties_strings(struct ena_com_dev *ena_dev); - -/* ena_com_get_extra_properties_flags - Retrieve extra properties flags. 
- * @ena_dev: ENA communication layer struct - * @resp: Extra properties flags. - * - * Retrieve the extra properties flags. - * - * @return - 0 on Success negative value otherwise. - */ -int ena_com_get_extra_properties_flags(struct ena_com_dev *ena_dev, - struct ena_admin_get_feat_resp *resp); - /* ena_com_get_dma_width - Retrieve physical dma address width the device * supports. * @ena_dev: ENA communication layer struct
From patchwork Wed Apr 8 08:29:00 2020 X-Patchwork-Submitter: Michal Krawczyk X-Patchwork-Id: 67975 From: Michal Krawczyk To: dev@dpdk.org Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com, igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com, Michal Krawczyk Date: Wed, 8 Apr 2020 10:29:00 +0200 Message-Id: <20200408082921.31000-10-mk@semihalf.com> In-Reply-To: <20200408082921.31000-1-mk@semihalf.com> References: <20200408082921.31000-1-mk@semihalf.com> Subject: [dpdk-dev] [PATCH v3 09/30] net/ena/base: add accelerated LLQ mode
In order to use the accelerated LLQ (Low-latency queue) mode, the driver must limit the Tx burst and be
aware that the device has the meta caching disabled. In that situation, the meta descriptor must be valid on each Tx packet. Signed-off-by: Michal Krawczyk Reviewed-by: Igor Chauskin Reviewed-by: Guy Tzalik --- v3: * Fix commit log - LLQ abbreviation is now explained * Update copyright date of the modified file drivers/net/ena/base/ena_com.c | 20 +++++++- drivers/net/ena/base/ena_com.h | 3 ++ .../net/ena/base/ena_defs/ena_admin_defs.h | 41 +++++++++++++-- drivers/net/ena/base/ena_eth_com.c | 51 +++++++++++++------ 4 files changed, 93 insertions(+), 22 deletions(-) diff --git a/drivers/net/ena/base/ena_com.c b/drivers/net/ena/base/ena_com.c index 7e0aa21e03..962baf6024 100644 --- a/drivers/net/ena/base/ena_com.c +++ b/drivers/net/ena/base/ena_com.c @@ -378,6 +378,8 @@ static int ena_com_init_io_sq(struct ena_com_dev *ena_dev, 0x0, io_sq->llq_info.desc_list_entry_size); io_sq->llq_buf_ctrl.descs_left_in_line = io_sq->llq_info.descs_num_before_header; + io_sq->disable_meta_caching = + io_sq->llq_info.disable_meta_caching; if (io_sq->llq_info.max_entries_in_tx_burst > 0) io_sq->entries_in_tx_burst_left = @@ -595,6 +597,14 @@ static int ena_com_set_llq(struct ena_com_dev *ena_dev) cmd.u.llq.desc_num_before_header_enabled = llq_info->descs_num_before_header; cmd.u.llq.descriptors_stride_ctrl_enabled = llq_info->desc_stride_ctrl; + if (llq_info->disable_meta_caching) + cmd.u.llq.accel_mode.u.set.enabled_flags |= + BIT(ENA_ADMIN_DISABLE_META_CACHING); + + if (llq_info->max_entries_in_tx_burst) + cmd.u.llq.accel_mode.u.set.enabled_flags |= + BIT(ENA_ADMIN_LIMIT_TX_BURST); + ret = ena_com_execute_admin_command(admin_queue, (struct ena_admin_aq_entry *)&cmd, sizeof(cmd), @@ -714,9 +724,15 @@ static int ena_com_config_llq_info(struct ena_com_dev *ena_dev, supported_feat, llq_info->descs_num_before_header); } + /* Check for accelerated queue supported */ + llq_info->disable_meta_caching = + llq_features->accel_mode.u.get.supported_flags & + BIT(ENA_ADMIN_DISABLE_META_CACHING); 
- llq_info->max_entries_in_tx_burst = - (u16)(llq_features->max_tx_burst_size / llq_default_cfg->llq_ring_entry_size_value); + if (llq_features->accel_mode.u.get.supported_flags & BIT(ENA_ADMIN_LIMIT_TX_BURST)) + llq_info->max_entries_in_tx_burst = + llq_features->accel_mode.u.get.max_tx_burst_size / + llq_default_cfg->llq_ring_entry_size_value; rc = ena_com_set_llq(ena_dev); if (rc) diff --git a/drivers/net/ena/base/ena_com.h b/drivers/net/ena/base/ena_com.h index b613df1c77..07f63f44af 100644 --- a/drivers/net/ena/base/ena_com.h +++ b/drivers/net/ena/base/ena_com.h @@ -82,6 +82,7 @@ struct ena_com_llq_info { u16 descs_num_before_header; u16 descs_per_entry; u16 max_entries_in_tx_burst; + bool disable_meta_caching; }; struct ena_com_io_cq { @@ -146,6 +147,8 @@ struct ena_com_io_sq { enum queue_direction direction; enum ena_admin_placement_policy_type mem_queue_type; + bool disable_meta_caching; + u32 msix_vector; struct ena_com_tx_meta cached_tx_meta; struct ena_com_llq_info llq_info; diff --git a/drivers/net/ena/base/ena_defs/ena_admin_defs.h b/drivers/net/ena/base/ena_defs/ena_admin_defs.h index fb4d4d03f0..c36a525d12 100644 --- a/drivers/net/ena/base/ena_defs/ena_admin_defs.h +++ b/drivers/net/ena/base/ena_defs/ena_admin_defs.h @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright (c) 2015-2019 Amazon.com, Inc. or its affiliates. + * Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates. * All rights reserved. */ @@ -469,6 +469,36 @@ enum ena_admin_llq_stride_ctrl { ENA_ADMIN_MULTIPLE_DESCS_PER_ENTRY = 2, }; +enum ena_admin_accel_mode_feat { + ENA_ADMIN_DISABLE_META_CACHING = 0, + ENA_ADMIN_LIMIT_TX_BURST = 1, +}; + +struct ena_admin_accel_mode_get { + /* bit field of enum ena_admin_accel_mode_feat */ + uint16_t supported_flags; + + /* maximum burst size between two doorbells. 
The size is in bytes */ + uint16_t max_tx_burst_size; +}; + +struct ena_admin_accel_mode_set { + /* bit field of enum ena_admin_accel_mode_feat */ + uint16_t enabled_flags; + + uint16_t reserved; +}; + +struct ena_admin_accel_mode_req { + union { + uint32_t raw[2]; + + struct ena_admin_accel_mode_get get; + + struct ena_admin_accel_mode_set set; + } u; +}; + struct ena_admin_feature_llq_desc { uint32_t max_llq_num; @@ -514,10 +544,13 @@ struct ena_admin_feature_llq_desc { /* the stride control the driver selected to use */ uint16_t descriptors_stride_ctrl_enabled; - /* Maximum size in bytes taken by llq entries in a single tx burst. - * Set to 0 when there is no such limit. + /* reserved */ + uint32_t reserved1; + + /* accelerated low latency queues requirement. The driver needs to + * support those requirements in order to use the accelerated llq */ - uint32_t max_tx_burst_size; + struct ena_admin_accel_mode_req accel_mode; }; struct ena_admin_queue_ext_feature_fields { diff --git a/drivers/net/ena/base/ena_eth_com.c b/drivers/net/ena/base/ena_eth_com.c index d4d44226df..8f9528bdff 100644 --- a/drivers/net/ena/base/ena_eth_com.c +++ b/drivers/net/ena/base/ena_eth_com.c @@ -1,5 +1,5 @@ /* SPDX-License-Identifier: BSD-3-Clause - * Copyright (c) 2015-2019 Amazon.com, Inc. or its affiliates. + * Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates. * All rights reserved.
*/ @@ -258,11 +258,10 @@ static u16 ena_com_cdesc_rx_pkt_get(struct ena_com_io_cq *io_cq, return count; } -static int ena_com_create_and_store_tx_meta_desc(struct ena_com_io_sq *io_sq, - struct ena_com_tx_ctx *ena_tx_ctx) +static int ena_com_create_meta(struct ena_com_io_sq *io_sq, + struct ena_com_tx_meta *ena_meta) { struct ena_eth_io_tx_meta_desc *meta_desc = NULL; - struct ena_com_tx_meta *ena_meta = &ena_tx_ctx->ena_meta; meta_desc = get_sq_desc(io_sq); memset(meta_desc, 0x0, sizeof(struct ena_eth_io_tx_meta_desc)); @@ -282,12 +281,13 @@ static int ena_com_create_and_store_tx_meta_desc(struct ena_com_io_sq *io_sq, /* Extended meta desc */ meta_desc->len_ctrl |= ENA_ETH_IO_TX_META_DESC_ETH_META_TYPE_MASK; - meta_desc->len_ctrl |= ENA_ETH_IO_TX_META_DESC_META_STORE_MASK; meta_desc->len_ctrl |= (io_sq->phase << ENA_ETH_IO_TX_META_DESC_PHASE_SHIFT) & ENA_ETH_IO_TX_META_DESC_PHASE_MASK; meta_desc->len_ctrl |= ENA_ETH_IO_TX_META_DESC_FIRST_MASK; + meta_desc->len_ctrl |= ENA_ETH_IO_TX_META_DESC_META_STORE_MASK; + meta_desc->word2 |= ena_meta->l3_hdr_len & ENA_ETH_IO_TX_META_DESC_L3_HDR_LEN_MASK; meta_desc->word2 |= (ena_meta->l3_hdr_offset << @@ -298,13 +298,34 @@ static int ena_com_create_and_store_tx_meta_desc(struct ena_com_io_sq *io_sq, ENA_ETH_IO_TX_META_DESC_L4_HDR_LEN_IN_WORDS_SHIFT) & ENA_ETH_IO_TX_META_DESC_L4_HDR_LEN_IN_WORDS_MASK; - meta_desc->len_ctrl |= ENA_ETH_IO_TX_META_DESC_META_STORE_MASK; + return ena_com_sq_update_tail(io_sq); +} - /* Cached the meta desc */ - memcpy(&io_sq->cached_tx_meta, ena_meta, - sizeof(struct ena_com_tx_meta)); +static int ena_com_create_and_store_tx_meta_desc(struct ena_com_io_sq *io_sq, + struct ena_com_tx_ctx *ena_tx_ctx, + bool *have_meta) +{ + struct ena_com_tx_meta *ena_meta = &ena_tx_ctx->ena_meta; - return ena_com_sq_update_tail(io_sq); + /* When disable meta caching is set, don't bother to save the meta and + * compare it to the stored version, just create the meta + */ + if (io_sq->disable_meta_caching) { + if 
(unlikely(!ena_tx_ctx->meta_valid)) + return ENA_COM_INVAL; + + *have_meta = true; + return ena_com_create_meta(io_sq, ena_meta); + } else if (ena_com_meta_desc_changed(io_sq, ena_tx_ctx)) { + *have_meta = true; + /* Cache the meta desc */ + memcpy(&io_sq->cached_tx_meta, ena_meta, + sizeof(struct ena_com_tx_meta)); + return ena_com_create_meta(io_sq, ena_meta); + } else { + *have_meta = false; + return ENA_COM_OK; + } } static void ena_com_rx_set_flags(struct ena_com_rx_ctx *ena_rx_ctx, @@ -380,12 +401,10 @@ int ena_com_prepare_tx(struct ena_com_io_sq *io_sq, if (unlikely(rc)) return rc; - have_meta = ena_tx_ctx->meta_valid && ena_com_meta_desc_changed(io_sq, - ena_tx_ctx); - if (have_meta) { - rc = ena_com_create_and_store_tx_meta_desc(io_sq, ena_tx_ctx); - if (unlikely(rc)) - return rc; + rc = ena_com_create_and_store_tx_meta_desc(io_sq, ena_tx_ctx, &have_meta); + if (unlikely(rc)) { + ena_trc_err("failed to create and store tx meta desc\n"); + return rc; } /* If the caller doesn't want to send packets */
From patchwork Wed Apr 8 08:29:01 2020 X-Patchwork-Submitter: Michal Krawczyk X-Patchwork-Id: 67976 From: Michal Krawczyk To: dev@dpdk.org Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com, igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com, Michal Krawczyk Date: Wed, 8 Apr 2020 10:29:01 +0200 Message-Id: <20200408082921.31000-11-mk@semihalf.com> In-Reply-To: <20200408082921.31000-1-mk@semihalf.com> References: <20200408082921.31000-1-mk@semihalf.com> Subject: [dpdk-dev] [PATCH v3 10/30] net/ena/base: fix documentation of the functions
The documentation format was aligned and a few typos were fixed.
Signed-off-by: Michal Krawczyk Reviewed-by: Igor Chauskin Reviewed-by: Guy Tzalik --- drivers/net/ena/base/ena_com.h | 24 ++++++++++++------------ 1 file changed, 12 insertions(+), 12 deletions(-) diff --git a/drivers/net/ena/base/ena_com.h b/drivers/net/ena/base/ena_com.h index 07f63f44af..6c9943df79 100644 --- a/drivers/net/ena/base/ena_com.h +++ b/drivers/net/ena/base/ena_com.h @@ -370,7 +370,7 @@ extern "C" { */ int ena_com_mmio_reg_read_request_init(struct ena_com_dev *ena_dev); -/* ena_com_set_mmio_read_mode - Enable/disable the mmio reg read mechanism +/* ena_com_set_mmio_read_mode - Enable/disable the indirect mmio reg read mechanism * @ena_dev: ENA communication layer struct * @readless_supported: readless mode (enable/disable) */ @@ -504,7 +504,7 @@ void ena_com_set_admin_auto_polling_mode(struct ena_com_dev *ena_dev, /* ena_com_admin_q_comp_intr_handler - admin queue interrupt handler * @ena_dev: ENA communication layer struct * - * This method go over the admin completion queue and wake up all the pending + * This method goes over the admin completion queue and wakes up all the pending * threads that wait on the commands wait event. * * @note: Should be called after MSI-X interrupt. @@ -514,7 +514,7 @@ void ena_com_admin_q_comp_intr_handler(struct ena_com_dev *ena_dev); /* ena_com_aenq_intr_handler - AENQ interrupt handler * @ena_dev: ENA communication layer struct * - * This method go over the async event notification queue and call the proper + * This method goes over the async event notification queue and calls the proper * aenq handler. */ void ena_com_aenq_intr_handler(struct ena_com_dev *dev, void *data); @@ -531,14 +531,14 @@ void ena_com_abort_admin_commands(struct ena_com_dev *ena_dev); /* ena_com_wait_for_abort_completion - Wait for admin commands abort. * @ena_dev: ENA communication layer struct * - * This method wait until all the outstanding admin commands will be completed. 
+ * This method waits until all the outstanding admin commands are completed. */ void ena_com_wait_for_abort_completion(struct ena_com_dev *ena_dev); /* ena_com_validate_version - Validate the device parameters * @ena_dev: ENA communication layer struct * - * This method validate the device parameters are the same as the saved + * This method verifies the device parameters are the same as the saved * parameters in ena_dev. * This method is useful after device reset, to validate the device mac address * and the device offloads are the same as before the reset. @@ -715,7 +715,7 @@ int ena_com_set_hash_ctrl(struct ena_com_dev *ena_dev); * * Retrieve the hash control from the device. * - * @note, If the caller called ena_com_fill_hash_ctrl but didn't flash + * @note: If the caller called ena_com_fill_hash_ctrl but didn't flash * it to the device, the new configuration will be lost. * * @return: 0 on Success and negative value otherwise. @@ -767,7 +767,7 @@ int ena_com_indirect_table_set(struct ena_com_dev *ena_dev); * * Retrieve the RSS indirection table from the device. * - * @note: If the caller called ena_com_indirect_table_fill_entry but didn't flash + * @note: If the caller called ena_com_indirect_table_fill_entry but didn't flush * it to the device, the new configuration will be lost. * * @return: 0 on Success and negative value otherwise. @@ -793,14 +793,14 @@ int ena_com_allocate_debug_area(struct ena_com_dev *ena_dev, /* ena_com_delete_debug_area - Free the debug area resources. * @ena_dev: ENA communication layer struct * - * Free the allocate debug area. + * Free the allocated debug area. */ void ena_com_delete_debug_area(struct ena_com_dev *ena_dev); /* ena_com_delete_host_info - Free the host info resources. * @ena_dev: ENA communication layer struct * - * Free the allocate host info. + * Free the allocated host info. 
*/ void ena_com_delete_host_info(struct ena_com_dev *ena_dev); @@ -841,9 +841,9 @@ int ena_com_destroy_io_cq(struct ena_com_dev *ena_dev, * @cmd_completion: command completion return value. * @cmd_comp_size: command completion size. - * Submit an admin command and then wait until the device will return a + * Submit an admin command and then wait until the device returns a * completion. - * The completion will be copyed into cmd_comp. + * The completion will be copied into cmd_comp. * * @return - 0 on success, negative value on failure. */ @@ -932,7 +932,7 @@ static inline void ena_com_disable_adaptive_moderation(struct ena_com_dev *ena_d * @intr_reg: interrupt register to update. * @rx_delay_interval: Rx interval in usecs * @tx_delay_interval: Tx interval in usecs - * @unmask: unask enable/disable + * @unmask: unmask enable/disable * * Prepare interrupt update register with the supplied parameters. */
From patchwork Wed Apr 8 08:29:02 2020 X-Patchwork-Submitter: Michal Krawczyk X-Patchwork-Id: 67977 From: Michal Krawczyk To: dev@dpdk.org Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com, igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com, Michal Krawczyk Date: Wed, 8 Apr 2020 10:29:02 +0200 Message-Id: <20200408082921.31000-12-mk@semihalf.com> In-Reply-To: <20200408082921.31000-1-mk@semihalf.com> References: <20200408082921.31000-1-mk@semihalf.com> Subject: [dpdk-dev] [PATCH v3 11/30] net/ena/base: fix indentation in cq polling
Spaces were used instead of tabs for the indentation.
Signed-off-by: Michal Krawczyk Reviewed-by: Igor Chauskin Reviewed-by: Guy Tzalik --- drivers/net/ena/base/ena_com.c | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/drivers/net/ena/base/ena_com.c b/drivers/net/ena/base/ena_com.c index 962baf6024..f128d3c4f3 100644 --- a/drivers/net/ena/base/ena_com.c +++ b/drivers/net/ena/base/ena_com.c @@ -532,11 +532,11 @@ static int ena_com_wait_and_process_admin_cq_polling(struct ena_comp_ctx *comp_c timeout = ENA_GET_SYSTEM_TIMEOUT(admin_queue->completion_timeout); while (1) { - ENA_SPINLOCK_LOCK(admin_queue->q_lock, flags); - ena_com_handle_admin_completion(admin_queue); - ENA_SPINLOCK_UNLOCK(admin_queue->q_lock, flags); + ENA_SPINLOCK_LOCK(admin_queue->q_lock, flags); + ena_com_handle_admin_completion(admin_queue); + ENA_SPINLOCK_UNLOCK(admin_queue->q_lock, flags); - if (comp_ctx->status != ENA_CMD_SUBMITTED) + if (comp_ctx->status != ENA_CMD_SUBMITTED) break; if (ENA_TIME_EXPIRE(timeout)) { From patchwork Wed Apr 8
08:29:03 2020 X-Patchwork-Submitter: Michal Krawczyk X-Patchwork-Id: 67978 From: Michal Krawczyk To: dev@dpdk.org Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com, igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com, Michal Krawczyk Date: Wed, 8 Apr 2020 10:29:03 +0200 Message-Id: <20200408082921.31000-13-mk@semihalf.com> In-Reply-To: <20200408082921.31000-1-mk@semihalf.com> References: <20200408082921.31000-1-mk@semihalf.com> Subject: [dpdk-dev] [PATCH v3 12/30] net/ena/base: add error logs when preparing Tx
To make debugging easier, error logs were added in the Tx path.
Signed-off-by: Michal Krawczyk Reviewed-by: Igor Chauskin Reviewed-by: Guy Tzalik --- drivers/net/ena/base/ena_eth_com.c | 24 +++++++++++++++++++----- 1 file changed, 19 insertions(+), 5 deletions(-) diff --git a/drivers/net/ena/base/ena_eth_com.c b/drivers/net/ena/base/ena_eth_com.c index 8f9528bdff..80d35556cb 100644 --- a/drivers/net/ena/base/ena_eth_com.c +++ b/drivers/net/ena/base/ena_eth_com.c @@ -148,8 +148,10 @@ static int ena_com_close_bounce_buffer(struct ena_com_io_sq *io_sq) if (pkt_ctrl->idx) { rc = ena_com_write_bounce_buffer_to_dev(io_sq, pkt_ctrl->curr_bounce_buf); - if (unlikely(rc)) + if (unlikely(rc)) { + ena_trc_err("failed to write bounce buffer to device\n"); return rc; + } pkt_ctrl->curr_bounce_buf = ena_com_get_next_bounce_buffer(&io_sq->bounce_buf_ctrl); @@ -179,8 +181,10 @@ static int ena_com_sq_update_llq_tail(struct ena_com_io_sq *io_sq) if (!pkt_ctrl->descs_left_in_line) { rc = ena_com_write_bounce_buffer_to_dev(io_sq, pkt_ctrl->curr_bounce_buf); - if (unlikely(rc)) + if (unlikely(rc)) { + ena_trc_err("failed to write bounce buffer to device\n"); return rc; + } pkt_ctrl->curr_bounce_buf = ena_com_get_next_bounce_buffer(&io_sq->bounce_buf_ctrl); @@ -394,8 +398,10 @@ int ena_com_prepare_tx(struct ena_com_io_sq *io_sq, } if (unlikely(io_sq->mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV - && !buffer_to_push)) + && !buffer_to_push)) { + ena_trc_err("push header wasn't provided on LLQ mode\n"); return ENA_COM_INVAL; + } rc = ena_com_write_header_to_bounce(io_sq, buffer_to_push, header_len); if (unlikely(rc)) @@ -410,6 +416,8 @@ int ena_com_prepare_tx(struct ena_com_io_sq *io_sq, /* If the caller doesn't want to send packets */ if (unlikely(!num_bufs && !header_len)) { rc = ena_com_close_bounce_buffer(io_sq); + if (rc) + ena_trc_err("failed to write buffers to LLQ\n"); *nb_hw_desc = io_sq->tail - start_tail; return rc; } @@ -469,8 +477,10 @@ int ena_com_prepare_tx(struct ena_com_io_sq *io_sq, /* The first desc share the same desc as the 
header */ if (likely(i != 0)) { rc = ena_com_sq_update_tail(io_sq); - if (unlikely(rc)) + if (unlikely(rc)) { + ena_trc_err("failed to update sq tail\n"); return rc; + } desc = get_sq_desc(io_sq); if (unlikely(!desc)) @@ -499,10 +509,14 @@ int ena_com_prepare_tx(struct ena_com_io_sq *io_sq, desc->len_ctrl |= ENA_ETH_IO_TX_DESC_LAST_MASK; rc = ena_com_sq_update_tail(io_sq); - if (unlikely(rc)) + if (unlikely(rc)) { + ena_trc_err("failed to update sq tail of the last descriptor\n"); return rc; + } rc = ena_com_close_bounce_buffer(io_sq); + if (rc) + ena_trc_err("failed when closing bounce buffer\n"); *nb_hw_desc = io_sq->tail - start_tail; return rc;
From patchwork Wed Apr 8 08:29:04 2020 X-Patchwork-Submitter: Michal Krawczyk X-Patchwork-Id: 67979 From: Michal Krawczyk To: dev@dpdk.org Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com, igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com, Michal Krawczyk Date: Wed, 8 Apr 2020 10:29:04 +0200 Message-Id: <20200408082921.31000-14-mk@semihalf.com> In-Reply-To: <20200408082921.31000-1-mk@semihalf.com> References: <20200408082921.31000-1-mk@semihalf.com> Subject: [dpdk-dev] [PATCH v3 13/30] net/ena/base: use 48-bit memory addresses in ena_com
The ENA device uses 48-bit memory addresses for IO. Because of that, the upper limit had to be updated. From the driver's perspective, it's just a cosmetic change that makes the definition of the structure 'ena_common_mem_addr' more descriptive; the address value was already verified for the valid range in the function 'ena_com_mem_addr_set()'.
Signed-off-by: Michal Krawczyk
Reviewed-by: Igor Chauskin
Reviewed-by: Guy Tzalik
---
v3:
* Explain impact of this change in the commit log
* Update copyright date of the modified file

 drivers/net/ena/base/ena_com.c                  | 2 +-
 drivers/net/ena/base/ena_defs/ena_common_defs.h | 8 ++++++--
 2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ena/base/ena_com.c b/drivers/net/ena/base/ena_com.c
index f128d3c4f3..4968054a99 100644
--- a/drivers/net/ena/base/ena_com.c
+++ b/drivers/net/ena/base/ena_com.c
@@ -73,7 +73,7 @@ static int ena_com_mem_addr_set(struct ena_com_dev *ena_dev,
 	}
 
 	ena_addr->mem_addr_low = lower_32_bits(addr);
-	ena_addr->mem_addr_high = upper_32_bits(addr);
+	ena_addr->mem_addr_high = (u16)upper_32_bits(addr);
 
 	return 0;
 }

diff --git a/drivers/net/ena/base/ena_defs/ena_common_defs.h b/drivers/net/ena/base/ena_defs/ena_common_defs.h
index 1818c29a87..d1ee40de32 100644
--- a/drivers/net/ena/base/ena_defs/ena_common_defs.h
+++ b/drivers/net/ena/base/ena_defs/ena_common_defs.h
@@ -1,5 +1,5 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (c) 2015-2019 Amazon.com, Inc. or its affiliates.
+ * Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates.
 * All rights reserved.
 */
@@ -9,10 +9,14 @@
 #define ENA_COMMON_SPEC_VERSION_MAJOR 2
 #define ENA_COMMON_SPEC_VERSION_MINOR 0
 
+/* ENA operates with 48-bit memory addresses. ena_mem_addr_t */
 struct ena_common_mem_addr {
 	uint32_t mem_addr_low;
-	uint32_t mem_addr_high;
+	uint16_t mem_addr_high;
+
+	/* MBZ */
+	uint16_t reserved16;
 };
 
 #endif /* _ENA_COMMON_H_ */

From patchwork Wed Apr 8 08:29:05 2020
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 67980
From: Michal Krawczyk
To: dev@dpdk.org
Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com, igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com, Michal Krawczyk
Date: Wed, 8 Apr 2020 10:29:05 +0200
Message-Id: <20200408082921.31000-15-mk@semihalf.com>
In-Reply-To: <20200408082921.31000-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 14/30] net/ena/base: fix types for printing timestamps

Because ena_com is used by multiple platforms built with different C versions, PRIu64 cannot be used directly; it must be defined in the platform file.
Signed-off-by: Michal Krawczyk
Reviewed-by: Igor Chauskin
Reviewed-by: Guy Tzalik
---
 drivers/net/ena/base/ena_com.c       | 2 +-
 drivers/net/ena/base/ena_plat_dpdk.h | 2 ++
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ena/base/ena_com.c b/drivers/net/ena/base/ena_com.c
index 4968054a99..6257c535b1 100644
--- a/drivers/net/ena/base/ena_com.c
+++ b/drivers/net/ena/base/ena_com.c
@@ -2063,7 +2063,7 @@ void ena_com_aenq_intr_handler(struct ena_com_dev *dev, void *data)
 		timestamp = (u64)aenq_common->timestamp_low |
 			((u64)aenq_common->timestamp_high << 32);
 		ENA_TOUCH(timestamp); /* In case debug is disabled */
-		ena_trc_dbg("AENQ! Group[%x] Syndrom[%x] timestamp: [%"PRIu64"]\n",
+		ena_trc_dbg("AENQ! Group[%x] Syndrom[%x] timestamp: [%" ENA_PRIu64 "s]\n",
 			    aenq_common->group,
 			    aenq_common->syndrom,
 			    timestamp);

diff --git a/drivers/net/ena/base/ena_plat_dpdk.h b/drivers/net/ena/base/ena_plat_dpdk.h
index a0f088c9b8..595967e6e3 100644
--- a/drivers/net/ena/base/ena_plat_dpdk.h
+++ b/drivers/net/ena/base/ena_plat_dpdk.h
@@ -309,5 +309,7 @@ void ena_rss_key_fill(void *key, size_t size);
 
 #define ENA_INTR_INITIAL_TX_INTERVAL_USECS_PLAT 0
 
+#define ENA_PRIu64 PRIu64
+
 #include "ena_includes.h"
 #endif /* DPDK_ENA_COM_ENA_PLAT_DPDK_H_ */

From patchwork Wed Apr 8 08:29:06 2020
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 67981
From: Michal Krawczyk
To: dev@dpdk.org
Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com, igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com, Michal Krawczyk
Date: Wed, 8 Apr 2020 10:29:06 +0200
Message-Id: <20200408082921.31000-16-mk@semihalf.com>
In-Reply-To: <20200408082921.31000-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 15/30] net/ena/base: fix indentation of multiple defines

As the alignment of the defines was not consistent, it was removed altogether; instead of multiple spaces or tabs, a single space after the define name is now used.
Signed-off-by: Michal Krawczyk
Reviewed-by: Igor Chauskin
Reviewed-by: Guy Tzalik
---
 drivers/net/ena/base/ena_com.h | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ena/base/ena_com.h b/drivers/net/ena/base/ena_com.h
index 6c9943df79..61074eaf63 100644
--- a/drivers/net/ena/base/ena_com.h
+++ b/drivers/net/ena/base/ena_com.h
@@ -8,9 +8,9 @@
 
 #include "ena_plat.h"
 
-#define ENA_MAX_NUM_IO_QUEUES		128U
+#define ENA_MAX_NUM_IO_QUEUES 128U
 /* We need to queues for each IO (on for Tx and one for Rx) */
-#define ENA_TOTAL_NUM_QUEUES		(2 * (ENA_MAX_NUM_IO_QUEUES))
+#define ENA_TOTAL_NUM_QUEUES (2 * (ENA_MAX_NUM_IO_QUEUES))
 
 #define ENA_MAX_HANDLERS 256
 
@@ -33,9 +33,9 @@
 
 #define ENA_HASH_KEY_SIZE 40
 
-#define ENA_HW_HINTS_NO_TIMEOUT		0xFFFF
+#define ENA_HW_HINTS_NO_TIMEOUT 0xFFFF
 
-#define ENA_FEATURE_MAX_QUEUE_EXT_VER	1
+#define ENA_FEATURE_MAX_QUEUE_EXT_VER 1
 
 struct ena_llq_configurations {
 	enum ena_admin_llq_header_location llq_header_location;

From patchwork Wed Apr 8 08:29:07 2020
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 67982
From: Michal Krawczyk
To: dev@dpdk.org
Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com, igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com, Michal Krawczyk
Date: Wed, 8 Apr 2020 10:29:07 +0200
Message-Id: <20200408082921.31000-17-mk@semihalf.com>
In-Reply-To: <20200408082921.31000-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 16/30] net/ena/base: update gen date and commit

The current ena_com version was generated on 25.09.2019.

Signed-off-by: Michal Krawczyk
Reviewed-by: Igor Chauskin
Reviewed-by: Guy Tzalik
---
v3:
* Update copyright date of the modified file.

 drivers/net/ena/base/ena_defs/ena_gen_info.h | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ena/base/ena_defs/ena_gen_info.h b/drivers/net/ena/base/ena_defs/ena_gen_info.h
index 019b1fdb79..f486e9fe6e 100644
--- a/drivers/net/ena/base/ena_defs/ena_gen_info.h
+++ b/drivers/net/ena/base/ena_defs/ena_gen_info.h
@@ -1,7 +1,7 @@
 /* SPDX-License-Identifier: BSD-3-Clause
- * Copyright (c) 2015-2019 Amazon.com, Inc. or its affiliates.
+ * Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates.
 * All rights reserved.
 */
-#define ENA_GEN_DATE "Wed Mar 20 10:40:42 STD 2019"
-#define ENA_GEN_COMMIT "1476830"
+#define ENA_GEN_DATE "Wed Sep 25 11:32:57 UTC 2019"
+#define ENA_GEN_COMMIT "952697a9e0d3"

From patchwork Wed Apr 8 08:29:08 2020
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 67983
From: Michal Krawczyk
To: dev@dpdk.org
Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com, igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com, Michal Krawczyk, stable@dpdk.org
Date: Wed, 8 Apr 2020 10:29:08 +0200
Message-Id: <20200408082921.31000-18-mk@semihalf.com>
In-Reply-To: <20200408082921.31000-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 17/30] net/ena: set IO ring size to the valid value

IO rings were configured with the maximum allowed size for the Tx/Rx rings. However, the application could decide to create smaller rings. This patch uses the value stored in the ring instead of the value from the adapter, which indicates only the maximum allowed value.
Fixes: df238f84c0a2 ("net/ena: recreate HW IO rings on start and stop")
Cc: stable@dpdk.org

Signed-off-by: Michal Krawczyk
Reviewed-by: Igor Chauskin
Reviewed-by: Guy Tzalik
---
 drivers/net/ena/ena_ethdev.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index f1202d99f2..62e26a2a16 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -1099,16 +1099,15 @@ static int ena_create_io_queue(struct ena_ring *ring)
 		ena_qid = ENA_IO_TXQ_IDX(ring->id);
 		ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_TX;
 		ctx.mem_queue_type = ena_dev->tx_mem_queue_type;
-		ctx.queue_size = adapter->tx_ring_size;
 		for (i = 0; i < ring->ring_size; i++)
 			ring->empty_tx_reqs[i] = i;
 	} else {
 		ena_qid = ENA_IO_RXQ_IDX(ring->id);
 		ctx.direction = ENA_COM_IO_QUEUE_DIRECTION_RX;
-		ctx.queue_size = adapter->rx_ring_size;
 		for (i = 0; i < ring->ring_size; i++)
 			ring->empty_rx_reqs[i] = i;
 	}
+	ctx.queue_size = ring->ring_size;
 	ctx.qid = ena_qid;
 	ctx.msix_vector = -1; /* interrupts not used */
 	ctx.numa_node = ring->numa_socket_id;

From patchwork Wed Apr 8 08:29:09 2020
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 67984
From: Michal Krawczyk
To: dev@dpdk.org
Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com, igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com, Michal Krawczyk
Date: Wed, 8 Apr 2020 10:29:09 +0200
Message-Id: <20200408082921.31000-19-mk@semihalf.com>
In-Reply-To: <20200408082921.31000-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 18/30] net/ena: refactor getting IO queues capabilities

The values read from the device describe its maximum capabilities. Because of that, the names of the fields storing those values, as well as of the functions and temporary variables, should be more descriptive to improve the self-documentation of the code. In connection with this, the way of getting the maximum queue size could be simplified: no hardcoded values are needed, as the device reports its capabilities anyway.
Signed-off-by: Michal Krawczyk Reviewed-by: Igor Chauskin Reviewed-by: Guy Tzalik --- drivers/net/ena/ena_ethdev.c | 101 ++++++++++++++++------------------- drivers/net/ena/ena_ethdev.h | 11 ++-- 2 files changed, 52 insertions(+), 60 deletions(-) diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index 62e26a2a16..d0cd0e49c8 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -82,9 +82,6 @@ struct ena_stats { #define ENA_STAT_GLOBAL_ENTRY(stat) \ ENA_STAT_ENTRY(stat, dev) -#define ENA_MAX_RING_SIZE_RX 8192 -#define ENA_MAX_RING_SIZE_TX 1024 - /* * Each rte_memzone should have unique name. * To satisfy it, count number of allocation and add it to name. @@ -845,29 +842,26 @@ static int ena_check_valid_conf(struct ena_adapter *adapter) } static int -ena_calc_queue_size(struct ena_calc_queue_size_ctx *ctx) +ena_calc_io_queue_size(struct ena_calc_queue_size_ctx *ctx) { struct ena_admin_feature_llq_desc *llq = &ctx->get_feat_ctx->llq; struct ena_com_dev *ena_dev = ctx->ena_dev; - uint32_t tx_queue_size = ENA_MAX_RING_SIZE_TX; - uint32_t rx_queue_size = ENA_MAX_RING_SIZE_RX; + uint32_t max_tx_queue_size; + uint32_t max_rx_queue_size; if (ena_dev->supported_features & BIT(ENA_ADMIN_MAX_QUEUES_EXT)) { struct ena_admin_queue_ext_feature_fields *max_queue_ext = &ctx->get_feat_ctx->max_queue_ext.max_queue_ext; - rx_queue_size = RTE_MIN(rx_queue_size, - max_queue_ext->max_rx_cq_depth); - rx_queue_size = RTE_MIN(rx_queue_size, + max_rx_queue_size = RTE_MIN(max_queue_ext->max_rx_cq_depth, max_queue_ext->max_rx_sq_depth); - tx_queue_size = RTE_MIN(tx_queue_size, - max_queue_ext->max_tx_cq_depth); + max_tx_queue_size = max_queue_ext->max_tx_cq_depth; if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) { - tx_queue_size = RTE_MIN(tx_queue_size, + max_tx_queue_size = RTE_MIN(max_tx_queue_size, llq->max_llq_depth); } else { - tx_queue_size = RTE_MIN(tx_queue_size, + max_tx_queue_size = RTE_MIN(max_tx_queue_size, 
max_queue_ext->max_tx_sq_depth); } @@ -878,39 +872,36 @@ ena_calc_queue_size(struct ena_calc_queue_size_ctx *ctx) } else { struct ena_admin_queue_feature_desc *max_queues = &ctx->get_feat_ctx->max_queues; - rx_queue_size = RTE_MIN(rx_queue_size, - max_queues->max_cq_depth); - rx_queue_size = RTE_MIN(rx_queue_size, + max_rx_queue_size = RTE_MIN(max_queues->max_cq_depth, max_queues->max_sq_depth); - tx_queue_size = RTE_MIN(tx_queue_size, - max_queues->max_cq_depth); + max_tx_queue_size = max_queues->max_cq_depth; if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) { - tx_queue_size = RTE_MIN(tx_queue_size, + max_tx_queue_size = RTE_MIN(max_tx_queue_size, llq->max_llq_depth); } else { - tx_queue_size = RTE_MIN(tx_queue_size, + max_tx_queue_size = RTE_MIN(max_tx_queue_size, max_queues->max_sq_depth); } ctx->max_rx_sgl_size = RTE_MIN(ENA_PKT_MAX_BUFS, - max_queues->max_packet_tx_descs); - ctx->max_tx_sgl_size = RTE_MIN(ENA_PKT_MAX_BUFS, max_queues->max_packet_rx_descs); + ctx->max_tx_sgl_size = RTE_MIN(ENA_PKT_MAX_BUFS, + max_queues->max_packet_tx_descs); } /* Round down to the nearest power of 2 */ - rx_queue_size = rte_align32prevpow2(rx_queue_size); - tx_queue_size = rte_align32prevpow2(tx_queue_size); + max_rx_queue_size = rte_align32prevpow2(max_rx_queue_size); + max_tx_queue_size = rte_align32prevpow2(max_tx_queue_size); - if (unlikely(rx_queue_size == 0 || tx_queue_size == 0)) { + if (unlikely(max_rx_queue_size == 0 || max_tx_queue_size == 0)) { PMD_INIT_LOG(ERR, "Invalid queue size"); return -EFAULT; } - ctx->rx_queue_size = rx_queue_size; - ctx->tx_queue_size = tx_queue_size; + ctx->max_tx_queue_size = max_tx_queue_size; + ctx->max_rx_queue_size = max_rx_queue_size; return 0; } @@ -1230,15 +1221,15 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev, return -EINVAL; } - if (nb_desc > adapter->tx_ring_size) { + if (nb_desc > adapter->max_tx_ring_size) { PMD_DRV_LOG(ERR, "Unsupported size of TX queue (max size: %d)\n", - adapter->tx_ring_size); 
+ adapter->max_tx_ring_size); return -EINVAL; } if (nb_desc == RTE_ETH_DEV_FALLBACK_TX_RINGSIZE) - nb_desc = adapter->tx_ring_size; + nb_desc = adapter->max_tx_ring_size; txq->port_id = dev->data->port_id; txq->next_to_clean = 0; @@ -1310,7 +1301,7 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, } if (nb_desc == RTE_ETH_DEV_FALLBACK_RX_RINGSIZE) - nb_desc = adapter->rx_ring_size; + nb_desc = adapter->max_rx_ring_size; if (!rte_is_power_of_2(nb_desc)) { PMD_DRV_LOG(ERR, @@ -1319,10 +1310,10 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, return -EINVAL; } - if (nb_desc > adapter->rx_ring_size) { + if (nb_desc > adapter->max_rx_ring_size) { PMD_DRV_LOG(ERR, "Unsupported size of RX queue (max size: %d)\n", - adapter->rx_ring_size); + adapter->max_rx_ring_size); return -EINVAL; } @@ -1654,10 +1645,10 @@ ena_set_queues_placement_policy(struct ena_adapter *adapter, return 0; } -static int ena_calc_io_queue_num(struct ena_com_dev *ena_dev, - struct ena_com_dev_get_features_ctx *get_feat_ctx) +static uint32_t ena_calc_max_io_queue_num(struct ena_com_dev *ena_dev, + struct ena_com_dev_get_features_ctx *get_feat_ctx) { - uint32_t io_tx_sq_num, io_tx_cq_num, io_rx_num, io_queue_num; + uint32_t io_tx_sq_num, io_tx_cq_num, io_rx_num, max_num_io_queues; /* Regular queues capabilities */ if (ena_dev->supported_features & BIT(ENA_ADMIN_MAX_QUEUES_EXT)) { @@ -1679,16 +1670,16 @@ static int ena_calc_io_queue_num(struct ena_com_dev *ena_dev, if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) io_tx_sq_num = get_feat_ctx->llq.max_llq_num; - io_queue_num = RTE_MIN(ENA_MAX_NUM_IO_QUEUES, io_rx_num); - io_queue_num = RTE_MIN(io_queue_num, io_tx_sq_num); - io_queue_num = RTE_MIN(io_queue_num, io_tx_cq_num); + max_num_io_queues = RTE_MIN(ENA_MAX_NUM_IO_QUEUES, io_rx_num); + max_num_io_queues = RTE_MIN(max_num_io_queues, io_tx_sq_num); + max_num_io_queues = RTE_MIN(max_num_io_queues, io_tx_cq_num); - if (unlikely(io_queue_num == 0)) { + if 
(unlikely(max_num_io_queues == 0)) { PMD_DRV_LOG(ERR, "Number of IO queues should not be 0\n"); return -EFAULT; } - return io_queue_num; + return max_num_io_queues; } static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) @@ -1701,6 +1692,7 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) struct ena_com_dev_get_features_ctx get_feat_ctx; struct ena_llq_configurations llq_config; const char *queue_type_str; + uint32_t max_num_io_queues; int rc; static int adapters_found; @@ -1772,20 +1764,19 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) calc_queue_ctx.ena_dev = ena_dev; calc_queue_ctx.get_feat_ctx = &get_feat_ctx; - adapter->num_queues = ena_calc_io_queue_num(ena_dev, - &get_feat_ctx); - rc = ena_calc_queue_size(&calc_queue_ctx); - if (unlikely((rc != 0) || (adapter->num_queues <= 0))) { + max_num_io_queues = ena_calc_max_io_queue_num(ena_dev, &get_feat_ctx); + rc = ena_calc_io_queue_size(&calc_queue_ctx); + if (unlikely((rc != 0) || (max_num_io_queues == 0))) { rc = -EFAULT; goto err_device_destroy; } - adapter->tx_ring_size = calc_queue_ctx.tx_queue_size; - adapter->rx_ring_size = calc_queue_ctx.rx_queue_size; - + adapter->max_tx_ring_size = calc_queue_ctx.max_tx_queue_size; + adapter->max_rx_ring_size = calc_queue_ctx.max_rx_queue_size; adapter->max_tx_sgl_size = calc_queue_ctx.max_tx_sgl_size; adapter->max_rx_sgl_size = calc_queue_ctx.max_rx_sgl_size; + adapter->max_num_io_queues = max_num_io_queues; /* prepare ring structures */ ena_init_rings(adapter); @@ -1904,9 +1895,9 @@ static int ena_dev_configure(struct rte_eth_dev *dev) static void ena_init_rings(struct ena_adapter *adapter) { - int i; + size_t i; - for (i = 0; i < adapter->num_queues; i++) { + for (i = 0; i < adapter->max_num_io_queues; i++) { struct ena_ring *ring = &adapter->tx_ring[i]; ring->configured = 0; @@ -1918,7 +1909,7 @@ static void ena_init_rings(struct ena_adapter *adapter) ring->sgl_size = adapter->max_tx_sgl_size; } - for (i = 0; i < adapter->num_queues; i++) { 
+ for (i = 0; i < adapter->max_num_io_queues; i++) { struct ena_ring *ring = &adapter->rx_ring[i]; ring->configured = 0; @@ -1982,21 +1973,21 @@ static int ena_infos_get(struct rte_eth_dev *dev, dev_info->max_rx_pktlen = adapter->max_mtu; dev_info->max_mac_addrs = 1; - dev_info->max_rx_queues = adapter->num_queues; - dev_info->max_tx_queues = adapter->num_queues; + dev_info->max_rx_queues = adapter->max_num_io_queues; + dev_info->max_tx_queues = adapter->max_num_io_queues; dev_info->reta_size = ENA_RX_RSS_TABLE_SIZE; adapter->tx_supported_offloads = tx_feat; adapter->rx_supported_offloads = rx_feat; - dev_info->rx_desc_lim.nb_max = adapter->rx_ring_size; + dev_info->rx_desc_lim.nb_max = adapter->max_rx_ring_size; dev_info->rx_desc_lim.nb_min = ENA_MIN_RING_DESC; dev_info->rx_desc_lim.nb_seg_max = RTE_MIN(ENA_PKT_MAX_BUFS, adapter->max_rx_sgl_size); dev_info->rx_desc_lim.nb_mtu_seg_max = RTE_MIN(ENA_PKT_MAX_BUFS, adapter->max_rx_sgl_size); - dev_info->tx_desc_lim.nb_max = adapter->tx_ring_size; + dev_info->tx_desc_lim.nb_max = adapter->max_tx_ring_size; dev_info->tx_desc_lim.nb_min = ENA_MIN_RING_DESC; dev_info->tx_desc_lim.nb_seg_max = RTE_MIN(ENA_PKT_MAX_BUFS, adapter->max_tx_sgl_size); diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h index e9b55dc029..1f320088ac 100644 --- a/drivers/net/ena/ena_ethdev.h +++ b/drivers/net/ena/ena_ethdev.h @@ -21,6 +21,7 @@ #define ENA_NAME_MAX_LEN 20 #define ENA_PKT_MAX_BUFS 17 #define ENA_RX_BUF_MIN_SIZE 1400 +#define ENA_DEFAULT_RING_SIZE 1024 #define ENA_MIN_MTU 128 @@ -46,8 +47,8 @@ struct ena_tx_buffer { struct ena_calc_queue_size_ctx { struct ena_com_dev_get_features_ctx *get_feat_ctx; struct ena_com_dev *ena_dev; - u16 rx_queue_size; - u16 tx_queue_size; + u32 max_rx_queue_size; + u32 max_tx_queue_size; u16 max_tx_sgl_size; u16 max_rx_sgl_size; }; @@ -159,15 +160,15 @@ struct ena_adapter { /* TX */ struct ena_ring tx_ring[ENA_MAX_NUM_QUEUES] __rte_cache_aligned; - int tx_ring_size; + u32 
max_tx_ring_size; u16 max_tx_sgl_size; /* RX */ struct ena_ring rx_ring[ENA_MAX_NUM_QUEUES] __rte_cache_aligned; - int rx_ring_size; + u32 max_rx_ring_size; u16 max_rx_sgl_size; - u16 num_queues; + u32 max_num_io_queues; u16 max_mtu; struct ena_offloads offloads;

From patchwork Wed Apr 8 08:29:10 2020
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 67985
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk
To: dev@dpdk.org
Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com, igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com, Michal Krawczyk
Date: Wed, 8 Apr 2020 10:29:10 +0200
Message-Id: <20200408082921.31000-20-mk@semihalf.com>
In-Reply-To: <20200408082921.31000-1-mk@semihalf.com>
References: <20200408082921.31000-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 19/30] net/ena: add support for large LLQ headers

The default LLQ (Low-latency queue) maximum header size is 96 bytes, which can be too small for some types of packets, like IPv6 packets with multiple extension headers. This can be fixed by using large LLQ headers.
If the device supports larger LLQ headers, the user can activate them by using the device argument 'large_llq_hdr' with value '1'. If the device doesn't support this feature, the default value (96B) will be used.

Signed-off-by: Michal Krawczyk
Reviewed-by: Igor Chauskin
Reviewed-by: Guy Tzalik
---
v2: * Use devargs instead of compilation options
v3: * Fix commit log - explain LLQ abbreviation, motivation behind this change and mention the new device argument * Update copyright date of the modified file * Add release notes

doc/guides/nics/ena.rst | 10 ++- doc/guides/rel_notes/release_20_05.rst | 6 ++ drivers/net/ena/ena_ethdev.c | 110 +++++++++++++++++++++++-- drivers/net/ena/ena_ethdev.h | 2 + 4 files changed, 121 insertions(+), 7 deletions(-) diff --git a/doc/guides/nics/ena.rst b/doc/guides/nics/ena.rst index bbf27f235a..0b9622ac85 100644 --- a/doc/guides/nics/ena.rst +++ b/doc/guides/nics/ena.rst @@ -1,5 +1,5 @@ .. SPDX-License-Identifier: BSD-3-Clause - Copyright (c) 2015-2019 Amazon.com, Inc. or its affiliates. + Copyright (c) 2015-2020 Amazon.com, Inc. or its affiliates. All rights reserved. ENA Poll Mode Driver @@ -95,6 +95,14 @@ Configuration information * **CONFIG_RTE_LIBRTE_ENA_COM_DEBUG** (default n): Enables or disables debug logging of low level tx/rx logic in ena_com(base) within the ENA PMD driver. +**Runtime Configuration Parameters** + + * **large_llq_hdr** (default 0) + + Enables or disables usage of large LLQ headers. This option will have + effect only if the device also supports large LLQ headers. Otherwise, the + default value will be used. + **ENA Configuration Parameters** * **Number of Queues** diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst index 2596269da5..7c73fe8fd5 100644 --- a/doc/guides/rel_notes/release_20_05.rst +++ b/doc/guides/rel_notes/release_20_05.rst @@ -78,6 +78,12 @@ New Features * Hierarchial Scheduling with DWRR and SP. * Single rate - Two color, Two rate - Three color shaping.
+* **Updated Amazon ena driver.** + + Updated ena PMD with new features and improvements, including: + + * Added support for large LLQ (Low-latency queue) headers. + Removed Items ------------- diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index d0cd0e49c8..fdcbe53c1c 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -13,6 +13,7 @@ #include #include #include +#include #include "ena_ethdev.h" #include "ena_logs.h" @@ -82,6 +83,9 @@ struct ena_stats { #define ENA_STAT_GLOBAL_ENTRY(stat) \ ENA_STAT_ENTRY(stat, dev) +/* Device arguments */ +#define ENA_DEVARG_LARGE_LLQ_HDR "large_llq_hdr" + /* * Each rte_memzone should have unique name. * To satisfy it, count number of allocation and add it to name. @@ -231,6 +235,11 @@ static int ena_xstats_get_by_id(struct rte_eth_dev *dev, const uint64_t *ids, uint64_t *values, unsigned int n); +static int ena_process_bool_devarg(const char *key, + const char *value, + void *opaque); +static int ena_parse_devargs(struct ena_adapter *adapter, + struct rte_devargs *devargs); static const struct eth_dev_ops ena_dev_ops = { .dev_configure = ena_dev_configure, @@ -842,7 +851,8 @@ static int ena_check_valid_conf(struct ena_adapter *adapter) } static int -ena_calc_io_queue_size(struct ena_calc_queue_size_ctx *ctx) +ena_calc_io_queue_size(struct ena_calc_queue_size_ctx *ctx, + bool use_large_llq_hdr) { struct ena_admin_feature_llq_desc *llq = &ctx->get_feat_ctx->llq; struct ena_com_dev *ena_dev = ctx->ena_dev; @@ -895,6 +905,21 @@ ena_calc_io_queue_size(struct ena_calc_queue_size_ctx *ctx) max_rx_queue_size = rte_align32prevpow2(max_rx_queue_size); max_tx_queue_size = rte_align32prevpow2(max_tx_queue_size); + if (use_large_llq_hdr) { + if ((llq->entry_size_ctrl_supported & + ENA_ADMIN_LIST_ENTRY_SIZE_256B) && + (ena_dev->tx_mem_queue_type == + ENA_ADMIN_PLACEMENT_POLICY_DEV)) { + max_tx_queue_size /= 2; + PMD_INIT_LOG(INFO, + "Forcing large headers and decreasing maximum TX queue size 
to %d\n", + max_tx_queue_size); + } else { + PMD_INIT_LOG(ERR, + "Forcing large headers failed: LLQ is disabled or device does not support large headers\n"); + } + } + if (unlikely(max_rx_queue_size == 0 || max_tx_queue_size == 0)) { PMD_INIT_LOG(ERR, "Invalid queue size"); return -EFAULT; @@ -1594,14 +1619,25 @@ static void ena_timer_wd_callback(__rte_unused struct rte_timer *timer, } static inline void -set_default_llq_configurations(struct ena_llq_configurations *llq_config) +set_default_llq_configurations(struct ena_llq_configurations *llq_config, + struct ena_admin_feature_llq_desc *llq, + bool use_large_llq_hdr) { llq_config->llq_header_location = ENA_ADMIN_INLINE_HEADER; - llq_config->llq_ring_entry_size = ENA_ADMIN_LIST_ENTRY_SIZE_128B; llq_config->llq_stride_ctrl = ENA_ADMIN_MULTIPLE_DESCS_PER_ENTRY; llq_config->llq_num_decs_before_header = ENA_ADMIN_LLQ_NUM_DESCS_BEFORE_HEADER_2; - llq_config->llq_ring_entry_size_value = 128; + + if (use_large_llq_hdr && + (llq->entry_size_ctrl_supported & ENA_ADMIN_LIST_ENTRY_SIZE_256B)) { + llq_config->llq_ring_entry_size = + ENA_ADMIN_LIST_ENTRY_SIZE_256B; + llq_config->llq_ring_entry_size_value = 256; + } else { + llq_config->llq_ring_entry_size = + ENA_ADMIN_LIST_ENTRY_SIZE_128B; + llq_config->llq_ring_entry_size_value = 128; + } } static int @@ -1740,6 +1776,12 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) snprintf(adapter->name, ENA_NAME_MAX_LEN, "ena_%d", adapter->id_number); + rc = ena_parse_devargs(adapter, pci_dev->device.devargs); + if (rc != 0) { + PMD_INIT_LOG(CRIT, "Failed to parse devargs\n"); + goto err; + } + /* device specific initialization routine */ rc = ena_device_init(ena_dev, &get_feat_ctx, &wd_state); if (rc) { @@ -1748,7 +1790,8 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) } adapter->wd_state = wd_state; - set_default_llq_configurations(&llq_config); + set_default_llq_configurations(&llq_config, &get_feat_ctx.llq, + adapter->use_large_llq_hdr); rc = 
ena_set_queues_placement_policy(adapter, ena_dev, &get_feat_ctx.llq, &llq_config); if (unlikely(rc)) { @@ -1766,7 +1809,8 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev) calc_queue_ctx.get_feat_ctx = &get_feat_ctx; max_num_io_queues = ena_calc_max_io_queue_num(ena_dev, &get_feat_ctx); - rc = ena_calc_io_queue_size(&calc_queue_ctx); + rc = ena_calc_io_queue_size(&calc_queue_ctx, + adapter->use_large_llq_hdr); if (unlikely((rc != 0) || (max_num_io_queues == 0))) { rc = -EFAULT; goto err_device_destroy; @@ -2582,6 +2626,59 @@ static int ena_xstats_get_by_id(struct rte_eth_dev *dev, return valid; } +static int ena_process_bool_devarg(const char *key, + const char *value, + void *opaque) +{ + struct ena_adapter *adapter = opaque; + bool bool_value; + + /* Parse the value. */ + if (strcmp(value, "1") == 0) { + bool_value = true; + } else if (strcmp(value, "0") == 0) { + bool_value = false; + } else { + PMD_INIT_LOG(ERR, + "Invalid value: '%s' for key '%s'. Accepted: '0' or '1'\n", + value, key); + return -EINVAL; + } + + /* Now, assign it to the proper adapter field. 
*/ + if (strcmp(key, ENA_DEVARG_LARGE_LLQ_HDR) == 0) + adapter->use_large_llq_hdr = bool_value; + + return 0; +} + +static int ena_parse_devargs(struct ena_adapter *adapter, + struct rte_devargs *devargs) +{ + static const char * const allowed_args[] = { + ENA_DEVARG_LARGE_LLQ_HDR, + }; + struct rte_kvargs *kvlist; + int rc; + + if (devargs == NULL) + return 0; + + kvlist = rte_kvargs_parse(devargs->args, allowed_args); + if (kvlist == NULL) { + PMD_INIT_LOG(ERR, "Invalid device arguments: %s\n", + devargs->args); + return -EINVAL; + } + + rc = rte_kvargs_process(kvlist, ENA_DEVARG_LARGE_LLQ_HDR, + ena_process_bool_devarg, adapter); + + rte_kvargs_free(kvlist); + + return rc; +} + /********************************************************************* * PMD configuration *********************************************************************/ @@ -2608,6 +2705,7 @@ static struct rte_pci_driver rte_ena_pmd = { RTE_PMD_REGISTER_PCI(net_ena, rte_ena_pmd); RTE_PMD_REGISTER_PCI_TABLE(net_ena, pci_id_ena_map); RTE_PMD_REGISTER_KMOD_DEP(net_ena, "* igb_uio | uio_pci_generic | vfio-pci"); +RTE_PMD_REGISTER_PARAM_STRING(net_ena, ENA_DEVARG_LARGE_LLQ_HDR "=<0|1>"); RTE_INIT(ena_init_log) { diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h index 1f320088ac..ed3674b202 100644 --- a/drivers/net/ena/ena_ethdev.h +++ b/drivers/net/ena/ena_ethdev.h @@ -200,6 +200,8 @@ struct ena_adapter { bool trigger_reset; bool wd_state; + + bool use_large_llq_hdr; }; #endif /* _ENA_ETHDEV_H_ */

From patchwork Wed Apr 8 08:29:11 2020
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 67986
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk
To: dev@dpdk.org
Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com, igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com, Michal Krawczyk
Date: Wed, 8 Apr 2020 10:29:11 +0200
Message-Id: <20200408082921.31000-21-mk@semihalf.com>
In-Reply-To: <20200408082921.31000-1-mk@semihalf.com>
References: <20200408082921.31000-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 20/30] net/ena: remove memory barriers before doorbells

The doorbell code already issues the doorbell by using rte_write, so there is no need for an explicit memory barrier before calling the function.

Signed-off-by: Michal Krawczyk
Reviewed-by: Igor Chauskin
Reviewed-by: Guy Tzalik
---
drivers/net/ena/ena_ethdev.c | 9 +-------- 1 file changed, 1 insertion(+), 8 deletions(-) diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index fdcbe53c1c..07feb62c3f 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -1462,12 +1462,7 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count) /* When we submitted free recources to device... */ if (likely(i > 0)) { - /* ...let HW know that it can fill buffers with data - * - * Add memory barrier to make sure the desc were written before - * issue a doorbell - */ - rte_wmb(); + /* ...let HW know that it can fill buffers with data. 
*/ ena_com_write_sq_doorbell(rxq->ena_com_io_sq); rxq->next_to_use = next_to_use; @@ -2405,7 +2400,6 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, PMD_DRV_LOG(DEBUG, "llq tx max burst size of queue %d" " achieved, writing doorbell to send burst\n", tx_ring->id); - rte_wmb(); ena_com_write_sq_doorbell(tx_ring->ena_com_io_sq); } @@ -2428,7 +2422,6 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, /* If there are ready packets to be xmitted... */ if (sent_idx > 0) { /* ...let HW do its best :-) */ - rte_wmb(); ena_com_write_sq_doorbell(tx_ring->ena_com_io_sq); tx_ring->tx_stats.doorbells++; tx_ring->next_to_use = next_to_use;

From patchwork Wed Apr 8 08:29:12 2020
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 67987
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk
To: dev@dpdk.org
Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com, igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com, Michal Krawczyk
Date: Wed, 8 Apr 2020 10:29:12 +0200
Message-Id: <20200408082921.31000-22-mk@semihalf.com>
In-Reply-To: <20200408082921.31000-1-mk@semihalf.com>
References: <20200408082921.31000-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 21/30] net/ena: add Tx drops statistic

The ENA device can report, in the AENQ handler, the number of Tx packets that were dropped and not sent. This statistic shows a global value for the device, and because rte_eth_stats is missing a field that could hold this value (it isn't a Tx error), it is presented as an extended statistic. As the current design of extended statistics prevents tx_drops from being an atomic variable, and both tx_drops and rx_drops are only updated from the AENQ handler, both were made non-atomic for alignment.
Signed-off-by: Michal Krawczyk Reviewed-by: Igor Chauskin Reviewed-by: Guy Tzalik --- v3: * Update release notes doc/guides/rel_notes/release_20_05.rst | 1 + drivers/net/ena/ena_ethdev.c | 11 ++++++++--- drivers/net/ena/ena_ethdev.h | 8 +++++++- 3 files changed, 16 insertions(+), 4 deletions(-) diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst index 7c73fe8fd5..bcd8d86299 100644 --- a/doc/guides/rel_notes/release_20_05.rst +++ b/doc/guides/rel_notes/release_20_05.rst @@ -83,6 +83,7 @@ New Features Updated ena PMD with new features and improvements, including: * Added support for large LLQ (Low-latency queue) headers. + * Added Tx drops as new extended driver statistic. Removed Items diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index 07feb62c3f..0d4523c1da 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -96,6 +96,7 @@ static const struct ena_stats ena_stats_global_strings[] = { ENA_STAT_GLOBAL_ENTRY(wd_expired), ENA_STAT_GLOBAL_ENTRY(dev_start), ENA_STAT_GLOBAL_ENTRY(dev_stop), + ENA_STAT_GLOBAL_ENTRY(tx_drops), }; static const struct ena_stats ena_stats_tx_strings[] = { @@ -938,7 +939,7 @@ static void ena_stats_restart(struct rte_eth_dev *dev) rte_atomic64_init(&adapter->drv_stats->ierrors); rte_atomic64_init(&adapter->drv_stats->oerrors); rte_atomic64_init(&adapter->drv_stats->rx_nombuf); - rte_atomic64_init(&adapter->drv_stats->rx_drops); + adapter->drv_stats->rx_drops = 0; } static int ena_stats_get(struct rte_eth_dev *dev, @@ -972,7 +973,7 @@ static int ena_stats_get(struct rte_eth_dev *dev, ena_stats.tx_bytes_low); /* Driver related stats */ - stats->imissed = rte_atomic64_read(&adapter->drv_stats->rx_drops); + stats->imissed = adapter->drv_stats->rx_drops; stats->ierrors = rte_atomic64_read(&adapter->drv_stats->ierrors); stats->oerrors = rte_atomic64_read(&adapter->drv_stats->oerrors); stats->rx_nombuf = rte_atomic64_read(&adapter->drv_stats->rx_nombuf); @@ 
-2785,12 +2786,16 @@ static void ena_keep_alive(void *adapter_data, struct ena_adapter *adapter = adapter_data; struct ena_admin_aenq_keep_alive_desc *desc; uint64_t rx_drops; + uint64_t tx_drops; adapter->timestamp_wd = rte_get_timer_cycles(); desc = (struct ena_admin_aenq_keep_alive_desc *)aenq_e; rx_drops = ((uint64_t)desc->rx_drops_high << 32) | desc->rx_drops_low; - rte_atomic64_set(&adapter->drv_stats->rx_drops, rx_drops); + tx_drops = ((uint64_t)desc->tx_drops_high << 32) | desc->tx_drops_low; + + adapter->drv_stats->rx_drops = rx_drops; + adapter->dev_stats.tx_drops = tx_drops; } /** diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h index ed3674b202..5afce25f13 100644 --- a/drivers/net/ena/ena_ethdev.h +++ b/drivers/net/ena/ena_ethdev.h @@ -134,13 +134,19 @@ struct ena_driver_stats { rte_atomic64_t ierrors; rte_atomic64_t oerrors; rte_atomic64_t rx_nombuf; - rte_atomic64_t rx_drops; + u64 rx_drops; }; struct ena_stats_dev { u64 wd_expired; u64 dev_start; u64 dev_stop; + /* + * Tx drops cannot be reported as the driver statistic, because DPDK + * rte_eth_stats structure isn't providing appropriate field for that. + * As a workaround it is being published as an extended statistic. 
+ */ + u64 tx_drops; }; struct ena_offloads {

From patchwork Wed Apr 8 08:29:13 2020
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 67988
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk
To: dev@dpdk.org
Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com, igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com, Michal Krawczyk
Date: Wed, 8 Apr 2020 10:29:13 +0200
Message-Id: <20200408082921.31000-23-mk@semihalf.com>
In-Reply-To: <20200408082921.31000-1-mk@semihalf.com>
References: <20200408082921.31000-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 22/30] net/ena: disable meta caching

In the LLQ (Low-latency queue) mode, the device can indicate that metadata descriptor caching is disabled. In that case, the driver must send a valid metadata descriptor on every Tx packet.
Signed-off-by: Michal Krawczyk
Reviewed-by: Igor Chauskin
Reviewed-by: Guy Tzalik
---
v3:
* Explain LLQ abbreviation
* Update release notes

 doc/guides/rel_notes/release_20_05.rst |  1 +
 drivers/net/ena/ena_ethdev.c           | 28 ++++++++++++++++++++------
 drivers/net/ena/ena_ethdev.h           |  2 ++
 3 files changed, 25 insertions(+), 6 deletions(-)

diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst
index bcd8d86299..e6b2f1b972 100644
--- a/doc/guides/rel_notes/release_20_05.rst
+++ b/doc/guides/rel_notes/release_20_05.rst
@@ -84,6 +84,7 @@ New Features

   * Added support for large LLQ (Low-latency queue) headers.
   * Added Tx drops as new extended driver statistic.
+  * Added support for accelerated LLQ mode.

 Removed Items

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 0d4523c1da..9ba7bcbdc0 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -191,7 +191,8 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 				  uint16_t nb_pkts);
 static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count);
-static void ena_init_rings(struct ena_adapter *adapter);
+static void ena_init_rings(struct ena_adapter *adapter,
+			   bool disable_meta_caching);
 static int ena_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 static int ena_start(struct rte_eth_dev *dev);
 static void ena_stop(struct rte_eth_dev *dev);
@@ -313,7 +314,8 @@ static inline void ena_rx_mbuf_prepare(struct rte_mbuf *mbuf,

 static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 				       struct ena_com_tx_ctx *ena_tx_ctx,
-				       uint64_t queue_offloads)
+				       uint64_t queue_offloads,
+				       bool disable_meta_caching)
 {
 	struct ena_com_tx_meta *ena_meta = &ena_tx_ctx->ena_meta;

@@ -363,6 +365,9 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 		ena_meta->l3_hdr_len = mbuf->l3_len;
 		ena_meta->l3_hdr_offset = mbuf->l2_len;

+		ena_tx_ctx->meta_valid = true;
+	} else if (disable_meta_caching) {
+		memset(ena_meta, 0, sizeof(*ena_meta));
 		ena_tx_ctx->meta_valid = true;
 	} else {
 		ena_tx_ctx->meta_valid = false;
@@ -1726,8 +1731,8 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
 	const char *queue_type_str;
 	uint32_t max_num_io_queues;
 	int rc;
-
 	static int adapters_found;
+	bool disable_meta_caching;
 	bool wd_state;

 	eth_dev->dev_ops = &ena_dev_ops;
@@ -1818,8 +1823,16 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
 	adapter->max_rx_sgl_size = calc_queue_ctx.max_rx_sgl_size;
 	adapter->max_num_io_queues = max_num_io_queues;

+	if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) {
+		disable_meta_caching =
+			!!(get_feat_ctx.llq.accel_mode.u.get.supported_flags &
+				BIT(ENA_ADMIN_DISABLE_META_CACHING));
+	} else {
+		disable_meta_caching = false;
+	}
+
 	/* prepare ring structures */
-	ena_init_rings(adapter);
+	ena_init_rings(adapter, disable_meta_caching);

 	ena_config_debug_area(adapter);
@@ -1933,7 +1946,8 @@ static int ena_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }

-static void ena_init_rings(struct ena_adapter *adapter)
+static void ena_init_rings(struct ena_adapter *adapter,
+			   bool disable_meta_caching)
 {
 	size_t i;
@@ -1947,6 +1961,7 @@ static void ena_init_rings(struct ena_adapter *adapter)
 		ring->tx_mem_queue_type = adapter->ena_dev.tx_mem_queue_type;
 		ring->tx_max_header_size = adapter->ena_dev.tx_max_header_size;
 		ring->sgl_size = adapter->max_tx_sgl_size;
+		ring->disable_meta_caching = disable_meta_caching;
 	}

 	for (i = 0; i < adapter->max_num_io_queues; i++) {
@@ -2359,7 +2374,8 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		}
 		/* there's no else as we take advantage of memset zeroing */

 		/* Set TX offloads flags, if applicable */
-		ena_tx_mbuf_prepare(mbuf, &ena_tx_ctx, tx_ring->offloads);
+		ena_tx_mbuf_prepare(mbuf, &ena_tx_ctx, tx_ring->offloads,
+			tx_ring->disable_meta_caching);

 		rte_prefetch0(tx_pkts[(sent_idx + 4) & ring_mask]);

diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 5afce25f13..cf0b4c0763 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -113,6 +113,8 @@ struct ena_ring {
 	uint64_t offloads;
 	u16 sgl_size;

+	bool disable_meta_caching;
+
 	union {
 		struct ena_stats_rx rx_stats;
 		struct ena_stats_tx tx_stats;

From patchwork Wed Apr 8 08:29:14 2020
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 67989
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk
To: dev@dpdk.org
Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com,
 igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com,
 Michal Krawczyk
Date: Wed, 8 Apr 2020 10:29:14 +0200
Message-Id: <20200408082921.31000-24-mk@semihalf.com>
In-Reply-To: <20200408082921.31000-1-mk@semihalf.com>
References: <20200408082921.31000-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 23/30] net/ena: refactor Rx path

* Split the main Rx function into multiple ones - its body was very big
  and contained two nested loops, which made the code hard to read.
* Rework how the Rx mbuf chains are created - instead of a while loop
  with a conditional check for the first segment, handle the first
  segment outside the loop and process any further fragments inside it.
* Initialize Rx mbufs with a simple helper function - it is common to
  the first and the subsequent segments.
* Create a structure for the Rx buffer, to align with the Tx path and
  other ENA drivers and to make the variable name more descriptive - in
  DPDK the Rx buffer only needs to hold an mbuf, so initially an array
  of mbufs was used as the buffers. However, the name "rx_buffer_info"
  made that misleading. A structure holding the mbuf pointer makes the
  intent clear and can be expanded in the future without reworking the
  driver.
* Remove redundant variables and conditional checks.

Signed-off-by: Michal Krawczyk
Reviewed-by: Igor Chauskin
Reviewed-by: Guy Tzalik
---
 drivers/net/ena/ena_ethdev.c | 182 ++++++++++++++++++++++-------------
 drivers/net/ena/ena_ethdev.h |   8 +-
 2 files changed, 124 insertions(+), 66 deletions(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 9ba7bcbdc0..e43ba51ac7 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -188,6 +188,12 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 			      uint16_t nb_desc, unsigned int socket_id,
 			      const struct rte_eth_rxconf *rx_conf,
 			      struct rte_mempool *mp);
+static inline void ena_init_rx_mbuf(struct rte_mbuf *mbuf, uint16_t len);
+static struct rte_mbuf *ena_rx_mbuf(struct ena_ring *rx_ring,
+				    struct ena_com_rx_buf_info *ena_bufs,
+				    uint32_t descs,
+				    uint16_t *next_to_clean,
+				    uint8_t offset);
 static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 				  uint16_t nb_pkts);
 static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count);
@@ -749,11 +755,13 @@ static void ena_rx_queue_release_bufs(struct ena_ring *ring)
 {
 	unsigned int i;

-	for (i = 0; i < ring->ring_size; ++i)
-		if (ring->rx_buffer_info[i]) {
-			rte_mbuf_raw_free(ring->rx_buffer_info[i]);
-			ring->rx_buffer_info[i] = NULL;
+	for (i = 0; i < ring->ring_size; ++i) {
+		struct ena_rx_buffer *rx_info = &ring->rx_buffer_info[i];
+
+		if (rx_info->mbuf) {
+			rte_mbuf_raw_free(rx_info->mbuf);
+			rx_info->mbuf = NULL;
 		}
+	}
 }

 static void ena_tx_queue_release_bufs(struct ena_ring *ring)
@@ -1365,8 +1373,8 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev,
 	rxq->mb_pool = mp;

 	rxq->rx_buffer_info = rte_zmalloc("rxq->buffer_info",
-		sizeof(struct rte_mbuf *) * nb_desc,
-		RTE_CACHE_LINE_SIZE);
+		sizeof(struct ena_rx_buffer) * nb_desc,
+		RTE_CACHE_LINE_SIZE);
 	if (!rxq->rx_buffer_info) {
 		PMD_DRV_LOG(ERR, "failed to alloc mem for rx buffer info\n");
 		return -ENOMEM;
@@ -1434,15 +1442,17 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count)
 		uint16_t next_to_use_masked = next_to_use & ring_mask;
 		struct rte_mbuf *mbuf = mbufs[i];
 		struct ena_com_buf ebuf;
+		struct ena_rx_buffer *rx_info;

 		if (likely((i + 4) < count))
 			rte_prefetch0(mbufs[i + 4]);

 		req_id = rxq->empty_rx_reqs[next_to_use_masked];
 		rc = validate_rx_req_id(rxq, req_id);
-		if (unlikely(rc < 0))
+		if (unlikely(rc))
 			break;
-		rxq->rx_buffer_info[req_id] = mbuf;
+
+		rx_info = &rxq->rx_buffer_info[req_id];

 		/* prepare physical address for DMA transaction */
 		ebuf.paddr = mbuf->buf_iova + RTE_PKTMBUF_HEADROOM;
@@ -1452,9 +1462,9 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count)
 				&ebuf, req_id);
 		if (unlikely(rc)) {
 			PMD_DRV_LOG(WARNING, "failed adding rx desc\n");
-			rxq->rx_buffer_info[req_id] = NULL;
 			break;
 		}
+		rx_info->mbuf = mbuf;
 		next_to_use++;
 	}
@@ -2052,6 +2062,83 @@ static int ena_infos_get(struct rte_eth_dev *dev,
 	return 0;
 }

+static inline void ena_init_rx_mbuf(struct rte_mbuf *mbuf, uint16_t len)
+{
+	mbuf->data_len = len;
+	mbuf->data_off = RTE_PKTMBUF_HEADROOM;
+	mbuf->refcnt = 1;
+	mbuf->next = NULL;
+}
+
+static struct rte_mbuf *ena_rx_mbuf(struct ena_ring *rx_ring,
+				    struct ena_com_rx_buf_info *ena_bufs,
+				    uint32_t descs,
+				    uint16_t *next_to_clean,
+				    uint8_t offset)
+{
+	struct rte_mbuf *mbuf;
+	struct rte_mbuf *mbuf_head;
+	struct ena_rx_buffer *rx_info;
+	unsigned int ring_mask = rx_ring->ring_size - 1;
+	uint16_t ntc, len, req_id, buf = 0;
+
+	if (unlikely(descs == 0))
+		return NULL;
+
+	ntc = *next_to_clean;
+
+	len = ena_bufs[buf].len;
+	req_id = ena_bufs[buf].req_id;
+	if (unlikely(validate_rx_req_id(rx_ring, req_id)))
+		return NULL;
+
+	rx_info = &rx_ring->rx_buffer_info[req_id];
+
+	mbuf = rx_info->mbuf;
+	RTE_ASSERT(mbuf != NULL);
+
+	ena_init_rx_mbuf(mbuf, len);
+
+	/* Fill the mbuf head with the data specific for 1st segment. */
+	mbuf_head = mbuf;
+	mbuf_head->nb_segs = descs;
+	mbuf_head->port = rx_ring->port_id;
+	mbuf_head->pkt_len = len;
+	mbuf_head->data_off += offset;
+
+	rx_info->mbuf = NULL;
+	rx_ring->empty_rx_reqs[ntc & ring_mask] = req_id;
+	++ntc;
+
+	while (--descs) {
+		++buf;
+		len = ena_bufs[buf].len;
+		req_id = ena_bufs[buf].req_id;
+		if (unlikely(validate_rx_req_id(rx_ring, req_id))) {
+			rte_mbuf_raw_free(mbuf_head);
+			return NULL;
+		}
+
+		rx_info = &rx_ring->rx_buffer_info[req_id];
+		RTE_ASSERT(rx_info->mbuf != NULL);
+
+		/* Create an mbuf chain. */
+		mbuf->next = rx_info->mbuf;
+		mbuf = mbuf->next;
+
+		ena_init_rx_mbuf(mbuf, len);
+		mbuf_head->pkt_len += len;
+
+		rx_info->mbuf = NULL;
+		rx_ring->empty_rx_reqs[ntc & ring_mask] = req_id;
+		++ntc;
+	}
+
+	*next_to_clean = ntc;
+
+	return mbuf_head;
+}
+
 static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 				  uint16_t nb_pkts)
 {
@@ -2060,16 +2147,10 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	unsigned int ring_mask = ring_size - 1;
 	uint16_t next_to_clean = rx_ring->next_to_clean;
 	uint16_t desc_in_use = 0;
-	uint16_t req_id;
-	unsigned int recv_idx = 0;
-	struct rte_mbuf *mbuf = NULL;
-	struct rte_mbuf *mbuf_head = NULL;
-	struct rte_mbuf *mbuf_prev = NULL;
-	struct rte_mbuf **rx_buff_info = rx_ring->rx_buffer_info;
-	unsigned int completed;
-
+	struct rte_mbuf *mbuf;
+	uint16_t completed;
 	struct ena_com_rx_ctx ena_rx_ctx;
-	int rc = 0;
+	int i, rc = 0;

 	/* Check adapter state */
 	if (unlikely(rx_ring->adapter->state != ENA_ADAPTER_STATE_RUNNING)) {
@@ -2083,8 +2164,6 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		nb_pkts = desc_in_use;

 	for (completed = 0; completed < nb_pkts; completed++) {
-		int segments = 0;
-
 		ena_rx_ctx.max_bufs = rx_ring->sgl_size;
 		ena_rx_ctx.ena_bufs = rx_ring->ena_bufs;
 		ena_rx_ctx.descs = 0;
@@ -2102,63 +2181,36 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 			return 0;
 		}

-		if (unlikely(ena_rx_ctx.descs == 0))
-			break;
-
-		while (segments < ena_rx_ctx.descs) {
-			req_id = ena_rx_ctx.ena_bufs[segments].req_id;
-			rc = validate_rx_req_id(rx_ring, req_id);
-			if (unlikely(rc)) {
-				if (segments != 0)
-					rte_mbuf_raw_free(mbuf_head);
-				break;
-			}
-
-			mbuf = rx_buff_info[req_id];
-			rx_buff_info[req_id] = NULL;
-			mbuf->data_len = ena_rx_ctx.ena_bufs[segments].len;
-			mbuf->data_off = RTE_PKTMBUF_HEADROOM;
-			mbuf->refcnt = 1;
-			mbuf->next = NULL;
-			if (unlikely(segments == 0)) {
-				mbuf->nb_segs = ena_rx_ctx.descs;
-				mbuf->port = rx_ring->port_id;
-				mbuf->pkt_len = 0;
-				mbuf->data_off += ena_rx_ctx.pkt_offset;
-				mbuf_head = mbuf;
-			} else {
-				/* for multi-segment pkts create mbuf chain */
-				mbuf_prev->next = mbuf;
+		mbuf = ena_rx_mbuf(rx_ring,
+			ena_rx_ctx.ena_bufs,
+			ena_rx_ctx.descs,
+			&next_to_clean,
+			ena_rx_ctx.pkt_offset);
+		if (unlikely(mbuf == NULL)) {
+			for (i = 0; i < ena_rx_ctx.descs; ++i) {
+				rx_ring->empty_rx_reqs[next_to_clean & ring_mask] =
+					rx_ring->ena_bufs[i].req_id;
+				++next_to_clean;
 			}
-			mbuf_head->pkt_len += mbuf->data_len;
-
-			mbuf_prev = mbuf;
-			rx_ring->empty_rx_reqs[next_to_clean & ring_mask] =
-				req_id;
-			segments++;
-			next_to_clean++;
-		}
-
-		if (unlikely(rc))
 			break;
+		}

 		/* fill mbuf attributes if any */
-		ena_rx_mbuf_prepare(mbuf_head, &ena_rx_ctx);
+		ena_rx_mbuf_prepare(mbuf, &ena_rx_ctx);

-		if (unlikely(mbuf_head->ol_flags &
+		if (unlikely(mbuf->ol_flags &
 		    (PKT_RX_IP_CKSUM_BAD | PKT_RX_L4_CKSUM_BAD))) {
 			rte_atomic64_inc(&rx_ring->adapter->drv_stats->ierrors);
 			++rx_ring->rx_stats.bad_csum;
 		}

-		mbuf_head->hash.rss = ena_rx_ctx.hash;
+		mbuf->hash.rss = ena_rx_ctx.hash;

-		/* pass to DPDK application head mbuf */
-		rx_pkts[recv_idx] = mbuf_head;
-		recv_idx++;
-		rx_ring->rx_stats.bytes += mbuf_head->pkt_len;
+		rx_pkts[completed] = mbuf;
+		rx_ring->rx_stats.bytes += mbuf->pkt_len;
 	}

-	rx_ring->rx_stats.cnt += recv_idx;
+	rx_ring->rx_stats.cnt += completed;
 	rx_ring->next_to_clean = next_to_clean;

 	desc_in_use = desc_in_use - completed + 1;
@@ -2168,7 +2220,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		ena_populate_rx_queue(rx_ring, ring_size - desc_in_use);
 	}

-	return recv_idx;
+	return completed;
 }

 static uint16_t

diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index cf0b4c0763..6bcca08563 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -44,6 +44,12 @@ struct ena_tx_buffer {
 	struct ena_com_buf bufs[ENA_PKT_MAX_BUFS];
 };

+/* Rx buffer holds only pointer to the mbuf - may be expanded in the future */
+struct ena_rx_buffer {
+	struct rte_mbuf *mbuf;
+	struct ena_com_buf ena_buf;
+};
+
 struct ena_calc_queue_size_ctx {
 	struct ena_com_dev_get_features_ctx *get_feat_ctx;
 	struct ena_com_dev *ena_dev;
@@ -89,7 +95,7 @@ struct ena_ring {
 	union {
 		struct ena_tx_buffer *tx_buffer_info; /* contex of tx packet */
-		struct rte_mbuf **rx_buffer_info; /* contex of rx packet */
+		struct ena_rx_buffer *rx_buffer_info; /* contex of rx packet */
 	};
 	struct rte_mbuf **rx_refill_buffer;
 	unsigned int ring_size; /* number of tx/rx_buffer_info's entries */

From patchwork Wed Apr 8 08:29:15 2020
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 67990
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk
To: dev@dpdk.org
Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com,
 igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com,
 Michal Krawczyk
Date: Wed, 8 Apr 2020 10:29:15 +0200
Message-Id: <20200408082921.31000-25-mk@semihalf.com>
In-Reply-To: <20200408082921.31000-1-mk@semihalf.com>
References: <20200408082921.31000-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 24/30] net/ena: rework getting number of available descs

The ena_com API should be preferred for getting the number of
used/available descriptors, unless an extra calculation needs to be
performed. Some helper variables were added to store values that are
reused later. Moreover, to limit the number of sent/received packets to
the number of available descriptors, RTE_MIN is used instead of an if
statement, which did a similar thing but was less descriptive.
Signed-off-by: Michal Krawczyk
Reviewed-by: Igor Chauskin
Reviewed-by: Guy Tzalik
---
 drivers/net/ena/ena_ethdev.c | 25 +++++++++++++------------
 1 file changed, 13 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index e43ba51ac7..9d76ebb0d9 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -1426,7 +1426,8 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count)
 	if (unlikely(!count))
 		return 0;

-	in_use = rxq->next_to_use - rxq->next_to_clean;
+	in_use = ring_size - ena_com_free_q_entries(rxq->ena_com_io_sq) - 1;
+
 	ena_assert_msg(((in_use + count) < ring_size), "bad ring state\n");

 	/* get resources for incoming packets */
@@ -2145,8 +2146,9 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	struct ena_ring *rx_ring = (struct ena_ring *)(rx_queue);
 	unsigned int ring_size = rx_ring->ring_size;
 	unsigned int ring_mask = ring_size - 1;
+	unsigned int refill_required;
 	uint16_t next_to_clean = rx_ring->next_to_clean;
-	uint16_t desc_in_use = 0;
+	uint16_t descs_in_use;
 	struct rte_mbuf *mbuf;
 	uint16_t completed;
 	struct ena_com_rx_ctx ena_rx_ctx;
@@ -2159,9 +2161,9 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 		return 0;
 	}

-	desc_in_use = rx_ring->next_to_use - next_to_clean;
-	if (unlikely(nb_pkts > desc_in_use))
-		nb_pkts = desc_in_use;
+	descs_in_use = ring_size -
+		ena_com_free_q_entries(rx_ring->ena_com_io_sq) - 1;
+	nb_pkts = RTE_MIN(descs_in_use, nb_pkts);

 	for (completed = 0; completed < nb_pkts; completed++) {
 		ena_rx_ctx.max_bufs = rx_ring->sgl_size;
@@ -2213,11 +2215,11 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	rx_ring->rx_stats.cnt += completed;
 	rx_ring->next_to_clean = next_to_clean;

-	desc_in_use = desc_in_use - completed + 1;
+	refill_required = ena_com_free_q_entries(rx_ring->ena_com_io_sq);
+
 	/* Burst refill to save doorbells, memory barriers, const interval */
-	if (ring_size - desc_in_use > ENA_RING_DESCS_RATIO(ring_size)) {
+	if (refill_required > ENA_RING_DESCS_RATIO(ring_size)) {
 		ena_com_update_dev_comp_head(rx_ring->ena_com_io_cq);
-		ena_populate_rx_queue(rx_ring, ring_size - desc_in_use);
+		ena_populate_rx_queue(rx_ring, refill_required);
 	}

 	return completed;
@@ -2360,7 +2362,7 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	struct ena_tx_buffer *tx_info;
 	struct ena_com_buf *ebuf;
 	uint16_t rc, req_id, total_tx_descs = 0;
-	uint16_t sent_idx = 0, empty_tx_reqs;
+	uint16_t sent_idx = 0;
 	uint16_t push_len = 0;
 	uint16_t delta = 0;
 	int nb_hw_desc;
@@ -2373,9 +2375,8 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		return 0;
 	}

-	empty_tx_reqs = ring_size - (next_to_use - next_to_clean);
-	if (nb_pkts > empty_tx_reqs)
-		nb_pkts = empty_tx_reqs;
+	nb_pkts = RTE_MIN(ena_com_free_q_entries(tx_ring->ena_com_io_sq),
+		nb_pkts);

 	for (sent_idx = 0; sent_idx < nb_pkts; sent_idx++) {
 		mbuf = tx_pkts[sent_idx];

From patchwork Wed Apr 8 08:29:16 2020
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 67991
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk
To: dev@dpdk.org
Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com,
 igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com,
 Michal Krawczyk
Date: Wed, 8 Apr 2020 10:29:16 +0200
Message-Id: <20200408082921.31000-26-mk@semihalf.com>
In-Reply-To: <20200408082921.31000-1-mk@semihalf.com>
References: <20200408082921.31000-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH v3 25/30] net/ena: limit refill threshold by fixed value

The divider used for both the Tx and Rx cleanup/refill thresholds can
cause too long a delay for really big rings - for example, with an 8k Rx
ring the refill won't trigger until a threshold of 1024 descriptors is
reached, and the driver will then try to allocate that many descriptors
at once. Capping the threshold at a fixed value - 256 in that case -
limits the maximum time spent in the repopulate function.
Signed-off-by: Michal Krawczyk
Reviewed-by: Igor Chauskin
Reviewed-by: Guy Tzalik
---
 drivers/net/ena/ena_ethdev.c | 27 ++++++++++++++-------------
 drivers/net/ena/ena_ethdev.h | 10 ++++++++++
 2 files changed, 24 insertions(+), 13 deletions(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 9d76ebb0d9..7804a5c85d 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -35,14 +35,6 @@
 /*reverse version of ENA_IO_RXQ_IDX*/
 #define ENA_IO_RXQ_IDX_REV(q)	((q - 1) / 2)

-/* While processing submitted and completed descriptors (rx and tx path
- * respectively) in a loop it is desired to:
- *  - perform batch submissions while populating sumbissmion queue
- *  - avoid blocking transmission of other packets during cleanup phase
- * Hence the utilization ratio of 1/8 of a queue size.
- */
-#define ENA_RING_DESCS_RATIO(ring_size)	(ring_size / 8)
-
 #define __MERGE_64B_H_L(h, l) (((uint64_t)h << 32) | l)
 #define TEST_BIT(val, bit_shift) (val & (1UL << bit_shift))
@@ -2146,7 +2138,8 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	struct ena_ring *rx_ring = (struct ena_ring *)(rx_queue);
 	unsigned int ring_size = rx_ring->ring_size;
 	unsigned int ring_mask = ring_size - 1;
-	unsigned int refill_required;
+	unsigned int free_queue_entries;
+	unsigned int refill_threshold;
 	uint16_t next_to_clean = rx_ring->next_to_clean;
 	uint16_t descs_in_use;
 	struct rte_mbuf *mbuf;
@@ -2215,11 +2208,15 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 	rx_ring->rx_stats.cnt += completed;
 	rx_ring->next_to_clean = next_to_clean;

-	refill_required = ena_com_free_q_entries(rx_ring->ena_com_io_sq);
+	free_queue_entries = ena_com_free_q_entries(rx_ring->ena_com_io_sq);
+	refill_threshold =
+		RTE_MIN(ring_size / ENA_REFILL_THRESH_DIVIDER,
+			(unsigned int)ENA_REFILL_THRESH_PACKET);
+
 	/* Burst refill to save doorbells, memory barriers, const interval */
-	if (refill_required > ENA_RING_DESCS_RATIO(ring_size)) {
+	if (free_queue_entries > refill_threshold) {
 		ena_com_update_dev_comp_head(rx_ring->ena_com_io_cq);
-		ena_populate_rx_queue(rx_ring, refill_required);
+		ena_populate_rx_queue(rx_ring, free_queue_entries);
 	}

 	return completed;
@@ -2358,6 +2355,7 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 	uint16_t seg_len;
 	unsigned int ring_size = tx_ring->ring_size;
 	unsigned int ring_mask = ring_size - 1;
+	unsigned int cleanup_budget;
 	struct ena_com_tx_ctx ena_tx_ctx;
 	struct ena_tx_buffer *tx_info;
 	struct ena_com_buf *ebuf;
@@ -2515,9 +2513,12 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		/* Put back descriptor to the ring for reuse */
 		tx_ring->empty_tx_reqs[next_to_clean & ring_mask] = req_id;
 		next_to_clean++;
+		cleanup_budget =
+			RTE_MIN(ring_size / ENA_REFILL_THRESH_DIVIDER,
+				(unsigned int)ENA_REFILL_THRESH_PACKET);

 		/* If too many descs to clean, leave it for another run */
-		if (unlikely(total_tx_descs > ENA_RING_DESCS_RATIO(ring_size)))
+		if (unlikely(total_tx_descs > cleanup_budget))
 			break;
 	}
 	tx_ring->tx_stats.available_desc =

diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index 6bcca08563..13d87d48f0 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -30,6 +30,16 @@
 #define ENA_WD_TIMEOUT_SEC	3
 #define ENA_DEVICE_KALIVE_TIMEOUT	(ENA_WD_TIMEOUT_SEC * rte_get_timer_hz())

+/* While processing submitted and completed descriptors (rx and tx path
+ * respectively) in a loop it is desired to:
+ *  - perform batch submissions while populating sumbissmion queue
+ *  - avoid blocking transmission of other packets during cleanup phase
+ * Hence the utilization ratio of 1/8 of a queue size or max value if the size
+ * of the ring is very big - like 8k Rx rings.
+ */
+#define ENA_REFILL_THRESH_DIVIDER      8
+#define ENA_REFILL_THRESH_PACKET       256
+
 struct ena_adapter;

 enum ena_ring_type {

From patchwork Wed Apr 8 08:29:17 2020
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 67992
X-Patchwork-Delegate: ferruh.yigit@amd.com
ZIU7/4hWU7mbLFllBnS7YVaMSN23pLmroOR1Vxh3gmGLVlw50N488CCwFhuhqE8aFLNA Y2cySUxrpzge3mJIz6jQr4lg9ZXSqugZRPeGsT4Miyzj8eK8E5fkGtZa9o/1Jonha08j 5YChH9ndIEpODdHClnPVzHmOMh8ojd01xYljdELxum2RNwJZsK/qS/O6yAfEQolK0kBo lMnfMEGhzXrqZIwx4C13cCF0DZ56o1ge4LS8jU8p988+sus4nQPY6emH/1xKzau5Hik7 Xxfg== X-Gm-Message-State: AGi0PuYDQjsRsdVItnlvpr2OYo/+LeVPBNLirZSDjWXmCxDYJP0DQUCJ xMakMYyD/UXv/M/MKNeKsDPHTyItBBA= X-Google-Smtp-Source: APiQypK+KZdP7WAgZ+8UC/RM7g8U9HrfilUos1ZLM2hLuHTNJ5GKcMuG/RCo+e0P5DqHcH10XrBtFw== X-Received: by 2002:ac2:4112:: with SMTP id b18mr3871389lfi.106.1586334598731; Wed, 08 Apr 2020 01:29:58 -0700 (PDT) Received: from mkPC.semihalf.local (193-106-246-138.noc.fibertech.net.pl. [193.106.246.138]) by smtp.gmail.com with ESMTPSA id e8sm765685lja.3.2020.04.08.01.29.57 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256); Wed, 08 Apr 2020 01:29:57 -0700 (PDT) From: Michal Krawczyk To: dev@dpdk.org Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com, igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com, Michal Krawczyk Date: Wed, 8 Apr 2020 10:29:17 +0200 Message-Id: <20200408082921.31000-27-mk@semihalf.com> X-Mailer: git-send-email 2.20.1 In-Reply-To: <20200408082921.31000-1-mk@semihalf.com> References: <20200408082921.31000-1-mk@semihalf.com> MIME-Version: 1.0 Subject: [dpdk-dev] [PATCH v3 26/30] net/ena: use macros for ring idx operations X-BeenThere: dev@dpdk.org X-Mailman-Version: 2.1.15 Precedence: list List-Id: DPDK patches and discussions List-Unsubscribe: , List-Archive: List-Post: List-Help: List-Subscribe: , Errors-To: dev-bounces@dpdk.org Sender: "dev" To improve code readability, abstraction was added for operating on IO rings indexes. Driver was defining local variable for ring mask in each function that needed to operate on the ring indexes. Now it is being stored in the ring as this value won't change unless size of the ring will change and macros for advancing indexes using the mask has been added. 
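The masked-index scheme introduced by this patch relies on the ring size being a power of two, so `(idx + n) & (size - 1)` wraps an index without a modulo or a branch. A minimal, self-contained sketch is below; the two macros are taken from the patch, while `ring_advance()` is a hypothetical helper added only for illustration:

```c
#include <stdint.h>

/* Ring index helpers as added to ena_ethdev.h by this patch. The ring size
 * must be a power of two so that (size - 1) forms a valid wrap-around mask. */
#define ENA_IDX_NEXT_MASKED(idx, mask) (((idx) + 1) & (mask))
#define ENA_IDX_ADD_MASKED(idx, n, mask) (((idx) + (n)) & (mask))

/* Advance a producer/consumer index by one slot, wrapping at ring end. */
static inline uint16_t ring_advance(uint16_t idx, uint16_t ring_size)
{
	return ENA_IDX_NEXT_MASKED(idx, (uint16_t)(ring_size - 1));
}
```

For an 8-entry ring, advancing from index 7 wraps back to 0, and jumping several slots ahead (as done for the `rte_prefetch0()` of `tx_pkts`) wraps the same way.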
Signed-off-by: Michal Krawczyk Reviewed-by: Igor Chauskin Reviewed-by: Guy Tzalik --- drivers/net/ena/ena_ethdev.c | 53 ++++++++++++++++++------------------ drivers/net/ena/ena_ethdev.h | 4 +++ 2 files changed, 30 insertions(+), 27 deletions(-) diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index 7804a5c85d..f6d0a75819 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -1266,6 +1266,7 @@ static int ena_tx_queue_setup(struct rte_eth_dev *dev, txq->next_to_clean = 0; txq->next_to_use = 0; txq->ring_size = nb_desc; + txq->size_mask = nb_desc - 1; txq->numa_socket_id = socket_id; txq->tx_buffer_info = rte_zmalloc("txq->tx_buffer_info", @@ -1361,6 +1362,7 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, rxq->next_to_clean = 0; rxq->next_to_use = 0; rxq->ring_size = nb_desc; + rxq->size_mask = nb_desc - 1; rxq->numa_socket_id = socket_id; rxq->mb_pool = mp; @@ -1409,8 +1411,6 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count) { unsigned int i; int rc; - uint16_t ring_size = rxq->ring_size; - uint16_t ring_mask = ring_size - 1; uint16_t next_to_use = rxq->next_to_use; uint16_t in_use, req_id; struct rte_mbuf **mbufs = rxq->rx_refill_buffer; @@ -1418,9 +1418,10 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count) if (unlikely(!count)) return 0; - in_use = ring_size - ena_com_free_q_entries(rxq->ena_com_io_sq) - 1; - - ena_assert_msg(((in_use + count) < ring_size), "bad ring state\n"); + in_use = rxq->ring_size - 1 - + ena_com_free_q_entries(rxq->ena_com_io_sq); + ena_assert_msg(((in_use + count) < rxq->ring_size), + "bad ring state\n"); /* get resources for incoming packets */ rc = rte_mempool_get_bulk(rxq->mb_pool, (void **)mbufs, count); @@ -1432,7 +1433,6 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count) } for (i = 0; i < count; i++) { - uint16_t next_to_use_masked = next_to_use & ring_mask; struct rte_mbuf *mbuf = mbufs[i]; 
struct ena_com_buf ebuf; struct ena_rx_buffer *rx_info; @@ -1440,7 +1440,7 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count) if (likely((i + 4) < count)) rte_prefetch0(mbufs[i + 4]); - req_id = rxq->empty_rx_reqs[next_to_use_masked]; + req_id = rxq->empty_rx_reqs[next_to_use]; rc = validate_rx_req_id(rxq, req_id); if (unlikely(rc)) break; @@ -1458,7 +1458,7 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count) break; } rx_info->mbuf = mbuf; - next_to_use++; + next_to_use = ENA_IDX_NEXT_MASKED(next_to_use, rxq->size_mask); } if (unlikely(i < count)) { @@ -2072,7 +2072,6 @@ static struct rte_mbuf *ena_rx_mbuf(struct ena_ring *rx_ring, struct rte_mbuf *mbuf; struct rte_mbuf *mbuf_head; struct ena_rx_buffer *rx_info; - unsigned int ring_mask = rx_ring->ring_size - 1; uint16_t ntc, len, req_id, buf = 0; if (unlikely(descs == 0)) @@ -2100,8 +2099,8 @@ static struct rte_mbuf *ena_rx_mbuf(struct ena_ring *rx_ring, mbuf_head->data_off += offset; rx_info->mbuf = NULL; - rx_ring->empty_rx_reqs[ntc & ring_mask] = req_id; - ++ntc; + rx_ring->empty_rx_reqs[ntc] = req_id; + ntc = ENA_IDX_NEXT_MASKED(ntc, rx_ring->size_mask); while (--descs) { ++buf; @@ -2123,8 +2122,8 @@ static struct rte_mbuf *ena_rx_mbuf(struct ena_ring *rx_ring, mbuf_head->pkt_len += len; rx_info->mbuf = NULL; - rx_ring->empty_rx_reqs[ntc & ring_mask] = req_id; - ++ntc; + rx_ring->empty_rx_reqs[ntc] = req_id; + ntc = ENA_IDX_NEXT_MASKED(ntc, rx_ring->size_mask); } *next_to_clean = ntc; @@ -2136,8 +2135,6 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts) { struct ena_ring *rx_ring = (struct ena_ring *)(rx_queue); - unsigned int ring_size = rx_ring->ring_size; - unsigned int ring_mask = ring_size - 1; unsigned int free_queue_entries; unsigned int refill_threshold; uint16_t next_to_clean = rx_ring->next_to_clean; @@ -2154,7 +2151,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, 
return 0; } - descs_in_use = ring_size - + descs_in_use = rx_ring->ring_size - ena_com_free_q_entries(rx_ring->ena_com_io_sq) - 1; nb_pkts = RTE_MIN(descs_in_use, nb_pkts); @@ -2183,9 +2180,10 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, ena_rx_ctx.pkt_offset); if (unlikely(mbuf == NULL)) { for (i = 0; i < ena_rx_ctx.descs; ++i) { - rx_ring->empty_rx_reqs[next_to_clean & ring_mask] = + rx_ring->empty_rx_reqs[next_to_clean] = rx_ring->ena_bufs[i].req_id; - ++next_to_clean; + next_to_clean = ENA_IDX_NEXT_MASKED( + next_to_clean, rx_ring->size_mask); } break; } @@ -2210,7 +2208,7 @@ static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, free_queue_entries = ena_com_free_q_entries(rx_ring->ena_com_io_sq); refill_threshold = - RTE_MIN(ring_size / ENA_REFILL_THRESH_DIVIDER, + RTE_MIN(rx_ring->ring_size / ENA_REFILL_THRESH_DIVIDER, (unsigned int)ENA_REFILL_THRESH_PACKET); /* Burst refill to save doorbells, memory barriers, const interval */ @@ -2353,8 +2351,6 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t next_to_clean = tx_ring->next_to_clean; struct rte_mbuf *mbuf; uint16_t seg_len; - unsigned int ring_size = tx_ring->ring_size; - unsigned int ring_mask = ring_size - 1; unsigned int cleanup_budget; struct ena_com_tx_ctx ena_tx_ctx; struct ena_tx_buffer *tx_info; @@ -2384,7 +2380,7 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, if (unlikely(rc)) break; - req_id = tx_ring->empty_tx_reqs[next_to_use & ring_mask]; + req_id = tx_ring->empty_tx_reqs[next_to_use]; tx_info = &tx_ring->tx_buffer_info[req_id]; tx_info->mbuf = mbuf; tx_info->num_of_bufs = 0; @@ -2428,7 +2424,8 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, ena_tx_mbuf_prepare(mbuf, &ena_tx_ctx, tx_ring->offloads, tx_ring->disable_meta_caching); - rte_prefetch0(tx_pkts[(sent_idx + 4) & ring_mask]); + rte_prefetch0(tx_pkts[ENA_IDX_ADD_MASKED( + sent_idx, 
4, tx_ring->size_mask)]); /* Process first segment taking into * consideration pushed header @@ -2480,7 +2477,8 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, } tx_info->tx_descs = nb_hw_desc; - next_to_use++; + next_to_use = ENA_IDX_NEXT_MASKED(next_to_use, + tx_ring->size_mask); tx_ring->tx_stats.cnt++; tx_ring->tx_stats.bytes += total_length; } @@ -2511,10 +2509,11 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, tx_info->mbuf = NULL; /* Put back descriptor to the ring for reuse */ - tx_ring->empty_tx_reqs[next_to_clean & ring_mask] = req_id; - next_to_clean++; + tx_ring->empty_tx_reqs[next_to_clean] = req_id; + next_to_clean = ENA_IDX_NEXT_MASKED(next_to_clean, + tx_ring->size_mask); cleanup_budget = - RTE_MIN(ring_size / ENA_REFILL_THRESH_DIVIDER, + RTE_MIN(tx_ring->ring_size / ENA_REFILL_THRESH_DIVIDER, (unsigned int)ENA_REFILL_THRESH_PACKET); /* If too many descs to clean, leave it for another run */ diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h index 13d87d48f0..6e24a4e582 100644 --- a/drivers/net/ena/ena_ethdev.h +++ b/drivers/net/ena/ena_ethdev.h @@ -40,6 +40,9 @@ #define ENA_REFILL_THRESH_DIVIDER 8 #define ENA_REFILL_THRESH_PACKET 256 +#define ENA_IDX_NEXT_MASKED(idx, mask) (((idx) + 1) & (mask)) +#define ENA_IDX_ADD_MASKED(idx, n, mask) (((idx) + (n)) & (mask)) + struct ena_adapter; enum ena_ring_type { @@ -109,6 +112,7 @@ struct ena_ring { }; struct rte_mbuf **rx_refill_buffer; unsigned int ring_size; /* number of tx/rx_buffer_info's entries */ + unsigned int size_mask; struct ena_com_io_cq *ena_com_io_cq; struct ena_com_io_sq *ena_com_io_sq; From patchwork Wed Apr 8 08:29:18 2020 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Michal Krawczyk X-Patchwork-Id: 67993 X-Patchwork-Delegate: ferruh.yigit@amd.com Return-Path: X-Original-To: patchwork@inbox.dpdk.org Delivered-To: patchwork@inbox.dpdk.org 
From: Michal Krawczyk To: dev@dpdk.org Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com, igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com, Michal Krawczyk Date: Wed, 8 Apr 2020 10:29:18 +0200 Message-Id: <20200408082921.31000-28-mk@semihalf.com> In-Reply-To: <20200408082921.31000-1-mk@semihalf.com> References: <20200408082921.31000-1-mk@semihalf.com> Subject: [dpdk-dev] [PATCH v3 27/30] net/ena: refactor Tx path

The original Tx function was very long and contained both the cleanup and the sending sections. Because of that, it had a lot of local variables, deep indentation, and was hard to read. The function was split into two sections: * Sending - responsible for preparing the mbuf, mapping it to the device descriptors and, finally, sending the packet to the HW * Cleanup - releasing the packets that were sent by the HW. The loop releasing the packets was also reworked a bit to make the intention more visible and aligned with the other parts of the driver.
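The resulting control flow - a per-packet send step followed by a budgeted cleanup step - can be sketched as follows. This is a simplified model, not the driver code: `struct mini_ring`, `xmit_one()`, and `tx_cleanup()` are hypothetical stand-ins, and only the budget computation (ring size / 8, capped at 256) mirrors the patch:

```c
#define REFILL_THRESH_DIVIDER 8
#define REFILL_THRESH_PACKET 256

/* Hypothetical mini-ring: just enough state to show the send/cleanup split. */
struct mini_ring {
	unsigned int ring_size;   /* power of two */
	unsigned int in_flight;   /* descriptors currently owned by the "HW" */
};

/* Sending section: map one packet and hand it to the device. */
static int xmit_one(struct mini_ring *r)
{
	if (r->in_flight == r->ring_size)
		return -1;              /* ring full, stop the burst */
	r->in_flight++;
	return 0;
}

/* Cleanup section: release completed packets, bounded by a budget so one
 * call cannot monopolize the burst function and stall transmission. */
static unsigned int tx_cleanup(struct mini_ring *r, unsigned int completed)
{
	unsigned int budget = r->ring_size / REFILL_THRESH_DIVIDER;
	unsigned int done = 0;

	if (budget > REFILL_THRESH_PACKET)
		budget = REFILL_THRESH_PACKET;
	while (done < budget && completed > 0) {
		r->in_flight--;
		completed--;
		done++;
	}
	return done;
}
```

With a 1024-entry ring the cleanup budget is 128 descriptors per call; for an 8k ring the divider would give 1024, so the 256-packet cap takes effect, matching the rationale in the refill-threshold comment.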
Signed-off-by: Michal Krawczyk Reviewed-by: Igor Chauskin Reviewed-by: Guy Tzalik --- v2: * Fix compilation error on icc by adding braces around 0 drivers/net/ena/ena_ethdev.c | 323 +++++++++++++++++++---------------- 1 file changed, 179 insertions(+), 144 deletions(-) diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index f6d0a75819..1a7cc686f5 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -169,6 +169,13 @@ static int ena_device_init(struct ena_com_dev *ena_dev, struct ena_com_dev_get_features_ctx *get_feat_ctx, bool *wd_state); static int ena_dev_configure(struct rte_eth_dev *dev); +static void ena_tx_map_mbuf(struct ena_ring *tx_ring, + struct ena_tx_buffer *tx_info, + struct rte_mbuf *mbuf, + void **push_header, + uint16_t *header_len); +static int ena_xmit_mbuf(struct ena_ring *tx_ring, struct rte_mbuf *mbuf); +static void ena_tx_cleanup(struct ena_ring *tx_ring); static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, uint16_t nb_pkts); static uint16_t eth_ena_prep_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, @@ -2343,193 +2350,221 @@ static int ena_check_and_linearize_mbuf(struct ena_ring *tx_ring, return rc; } -static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, - uint16_t nb_pkts) +static void ena_tx_map_mbuf(struct ena_ring *tx_ring, + struct ena_tx_buffer *tx_info, + struct rte_mbuf *mbuf, + void **push_header, + uint16_t *header_len) { - struct ena_ring *tx_ring = (struct ena_ring *)(tx_queue); - uint16_t next_to_use = tx_ring->next_to_use; - uint16_t next_to_clean = tx_ring->next_to_clean; - struct rte_mbuf *mbuf; - uint16_t seg_len; - unsigned int cleanup_budget; - struct ena_com_tx_ctx ena_tx_ctx; - struct ena_tx_buffer *tx_info; - struct ena_com_buf *ebuf; - uint16_t rc, req_id, total_tx_descs = 0; - uint16_t sent_idx = 0; - uint16_t push_len = 0; - uint16_t delta = 0; - int nb_hw_desc; - uint32_t total_length; - - /* Check adapter state */ - 
if (unlikely(tx_ring->adapter->state != ENA_ADAPTER_STATE_RUNNING)) { - PMD_DRV_LOG(ALERT, - "Trying to xmit pkts while device is NOT running\n"); - return 0; - } + struct ena_com_buf *ena_buf; + uint16_t delta, seg_len, push_len; - nb_pkts = RTE_MIN(ena_com_free_q_entries(tx_ring->ena_com_io_sq), - nb_pkts); + delta = 0; + seg_len = mbuf->data_len; - for (sent_idx = 0; sent_idx < nb_pkts; sent_idx++) { - mbuf = tx_pkts[sent_idx]; - total_length = 0; + tx_info->mbuf = mbuf; + ena_buf = tx_info->bufs; - rc = ena_check_and_linearize_mbuf(tx_ring, mbuf); - if (unlikely(rc)) - break; + if (tx_ring->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) { + /* + * Tx header might be (and will be in most cases) smaller than + * tx_max_header_size. But it's not an issue to send more data + * to the device, than actually needed if the mbuf size is + * greater than tx_max_header_size. + */ + push_len = RTE_MIN(mbuf->pkt_len, tx_ring->tx_max_header_size); + *header_len = push_len; - req_id = tx_ring->empty_tx_reqs[next_to_use]; - tx_info = &tx_ring->tx_buffer_info[req_id]; - tx_info->mbuf = mbuf; - tx_info->num_of_bufs = 0; - ebuf = tx_info->bufs; + if (likely(push_len <= seg_len)) { + /* If the push header is in the single segment, then + * just point it to the 1st mbuf data. + */ + *push_header = rte_pktmbuf_mtod(mbuf, uint8_t *); + } else { + /* If the push header lays in the several segments, copy + * it to the intermediate buffer. 
+ */ + rte_pktmbuf_read(mbuf, 0, push_len, + tx_ring->push_buf_intermediate_buf); + *push_header = tx_ring->push_buf_intermediate_buf; + delta = push_len - seg_len; + } + } else { + *push_header = NULL; + *header_len = 0; + push_len = 0; + } - /* Prepare TX context */ - memset(&ena_tx_ctx, 0x0, sizeof(struct ena_com_tx_ctx)); - memset(&ena_tx_ctx.ena_meta, 0x0, - sizeof(struct ena_com_tx_meta)); - ena_tx_ctx.ena_bufs = ebuf; - ena_tx_ctx.req_id = req_id; + /* Process first segment taking into consideration pushed header */ + if (seg_len > push_len) { + ena_buf->paddr = mbuf->buf_iova + + mbuf->data_off + + push_len; + ena_buf->len = seg_len - push_len; + ena_buf++; + tx_info->num_of_bufs++; + } - delta = 0; + while ((mbuf = mbuf->next) != NULL) { seg_len = mbuf->data_len; - if (tx_ring->tx_mem_queue_type == - ENA_ADMIN_PLACEMENT_POLICY_DEV) { - push_len = RTE_MIN(mbuf->pkt_len, - tx_ring->tx_max_header_size); - ena_tx_ctx.header_len = push_len; - - if (likely(push_len <= seg_len)) { - /* If the push header is in the single segment, - * then just point it to the 1st mbuf data. - */ - ena_tx_ctx.push_header = - rte_pktmbuf_mtod(mbuf, uint8_t *); - } else { - /* If the push header lays in the several - * segments, copy it to the intermediate buffer. 
- */ - rte_pktmbuf_read(mbuf, 0, push_len, - tx_ring->push_buf_intermediate_buf); - ena_tx_ctx.push_header = - tx_ring->push_buf_intermediate_buf; - delta = push_len - seg_len; - } - } /* there's no else as we take advantage of memset zeroing */ + /* Skip mbufs if whole data is pushed as a header */ + if (unlikely(delta > seg_len)) { + delta -= seg_len; + continue; + } - /* Set TX offloads flags, if applicable */ - ena_tx_mbuf_prepare(mbuf, &ena_tx_ctx, tx_ring->offloads, - tx_ring->disable_meta_caching); + ena_buf->paddr = mbuf->buf_iova + mbuf->data_off + delta; + ena_buf->len = seg_len - delta; + ena_buf++; + tx_info->num_of_bufs++; - rte_prefetch0(tx_pkts[ENA_IDX_ADD_MASKED( - sent_idx, 4, tx_ring->size_mask)]); + delta = 0; + } +} - /* Process first segment taking into - * consideration pushed header - */ - if (seg_len > push_len) { - ebuf->paddr = mbuf->buf_iova + - mbuf->data_off + - push_len; - ebuf->len = seg_len - push_len; - ebuf++; - tx_info->num_of_bufs++; - } - total_length += mbuf->data_len; +static int ena_xmit_mbuf(struct ena_ring *tx_ring, struct rte_mbuf *mbuf) +{ + struct ena_tx_buffer *tx_info; + struct ena_com_tx_ctx ena_tx_ctx = { { 0 } }; + uint16_t next_to_use; + uint16_t header_len; + uint16_t req_id; + void *push_header; + int nb_hw_desc; + int rc; - while ((mbuf = mbuf->next) != NULL) { - seg_len = mbuf->data_len; + rc = ena_check_and_linearize_mbuf(tx_ring, mbuf); + if (unlikely(rc)) + return rc; - /* Skip mbufs if whole data is pushed as a header */ - if (unlikely(delta > seg_len)) { - delta -= seg_len; - continue; - } + next_to_use = tx_ring->next_to_use; - ebuf->paddr = mbuf->buf_iova + mbuf->data_off + delta; - ebuf->len = seg_len - delta; - total_length += ebuf->len; - ebuf++; - tx_info->num_of_bufs++; + req_id = tx_ring->empty_tx_reqs[next_to_use]; + tx_info = &tx_ring->tx_buffer_info[req_id]; + tx_info->num_of_bufs = 0; - delta = 0; - } + ena_tx_map_mbuf(tx_ring, tx_info, mbuf, &push_header, &header_len); - ena_tx_ctx.num_bufs = 
tx_info->num_of_bufs; + ena_tx_ctx.ena_bufs = tx_info->bufs; + ena_tx_ctx.push_header = push_header; + ena_tx_ctx.num_bufs = tx_info->num_of_bufs; + ena_tx_ctx.req_id = req_id; + ena_tx_ctx.header_len = header_len; - if (ena_com_is_doorbell_needed(tx_ring->ena_com_io_sq, - &ena_tx_ctx)) { - PMD_DRV_LOG(DEBUG, "llq tx max burst size of queue %d" - " achieved, writing doorbell to send burst\n", - tx_ring->id); - ena_com_write_sq_doorbell(tx_ring->ena_com_io_sq); - } - - /* prepare the packet's descriptors to dma engine */ - rc = ena_com_prepare_tx(tx_ring->ena_com_io_sq, - &ena_tx_ctx, &nb_hw_desc); - if (unlikely(rc)) { - ++tx_ring->tx_stats.prepare_ctx_err; - break; - } - tx_info->tx_descs = nb_hw_desc; + /* Set Tx offloads flags, if applicable */ + ena_tx_mbuf_prepare(mbuf, &ena_tx_ctx, tx_ring->offloads, + tx_ring->disable_meta_caching); - next_to_use = ENA_IDX_NEXT_MASKED(next_to_use, - tx_ring->size_mask); - tx_ring->tx_stats.cnt++; - tx_ring->tx_stats.bytes += total_length; + if (unlikely(ena_com_is_doorbell_needed(tx_ring->ena_com_io_sq, + &ena_tx_ctx))) { + PMD_DRV_LOG(DEBUG, + "llq tx max burst size of queue %d achieved, writing doorbell to send burst\n", + tx_ring->id); + ena_com_write_sq_doorbell(tx_ring->ena_com_io_sq); } - tx_ring->tx_stats.available_desc = - ena_com_free_q_entries(tx_ring->ena_com_io_sq); - /* If there are ready packets to be xmitted... 
*/ - if (sent_idx > 0) { - /* ...let HW do its best :-) */ - ena_com_write_sq_doorbell(tx_ring->ena_com_io_sq); - tx_ring->tx_stats.doorbells++; - tx_ring->next_to_use = next_to_use; + /* prepare the packet's descriptors to dma engine */ + rc = ena_com_prepare_tx(tx_ring->ena_com_io_sq, &ena_tx_ctx, + &nb_hw_desc); + if (unlikely(rc)) { + ++tx_ring->tx_stats.prepare_ctx_err; + return rc; } - /* Clear complete packets */ - while (ena_com_tx_comp_req_id_get(tx_ring->ena_com_io_cq, &req_id) >= 0) { - rc = validate_tx_req_id(tx_ring, req_id); - if (rc) + tx_info->tx_descs = nb_hw_desc; + + tx_ring->tx_stats.cnt++; + tx_ring->tx_stats.bytes += mbuf->pkt_len; + + tx_ring->next_to_use = ENA_IDX_NEXT_MASKED(next_to_use, + tx_ring->size_mask); + + return 0; +} + +static void ena_tx_cleanup(struct ena_ring *tx_ring) +{ + unsigned int cleanup_budget; + unsigned int total_tx_descs = 0; + uint16_t next_to_clean = tx_ring->next_to_clean; + + cleanup_budget = RTE_MIN(tx_ring->ring_size / ENA_REFILL_THRESH_DIVIDER, + (unsigned int)ENA_REFILL_THRESH_PACKET); + + while (likely(total_tx_descs < cleanup_budget)) { + struct rte_mbuf *mbuf; + struct ena_tx_buffer *tx_info; + uint16_t req_id; + + if (ena_com_tx_comp_req_id_get(tx_ring->ena_com_io_cq, &req_id) != 0) + break; + + if (unlikely(validate_tx_req_id(tx_ring, req_id) != 0)) break; /* Get Tx info & store how many descs were processed */ tx_info = &tx_ring->tx_buffer_info[req_id]; - total_tx_descs += tx_info->tx_descs; - /* Free whole mbuf chain */ mbuf = tx_info->mbuf; rte_pktmbuf_free(mbuf); + tx_info->mbuf = NULL; + tx_ring->empty_tx_reqs[next_to_clean] = req_id; + + total_tx_descs += tx_info->tx_descs; /* Put back descriptor to the ring for reuse */ - tx_ring->empty_tx_reqs[next_to_clean] = req_id; next_to_clean = ENA_IDX_NEXT_MASKED(next_to_clean, tx_ring->size_mask); - cleanup_budget = - RTE_MIN(tx_ring->ring_size / ENA_REFILL_THRESH_DIVIDER, - (unsigned int)ENA_REFILL_THRESH_PACKET); - - /* If too many descs to clean, leave 
it for another run */ - if (unlikely(total_tx_descs > cleanup_budget)) - break; } - tx_ring->tx_stats.available_desc = - ena_com_free_q_entries(tx_ring->ena_com_io_sq); - if (total_tx_descs > 0) { + if (likely(total_tx_descs > 0)) { /* acknowledge completion of sent packets */ tx_ring->next_to_clean = next_to_clean; ena_com_comp_ack(tx_ring->ena_com_io_sq, total_tx_descs); ena_com_update_dev_comp_head(tx_ring->ena_com_io_cq); } +} + +static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts, + uint16_t nb_pkts) +{ + struct ena_ring *tx_ring = (struct ena_ring *)(tx_queue); + uint16_t sent_idx = 0; + + /* Check adapter state */ + if (unlikely(tx_ring->adapter->state != ENA_ADAPTER_STATE_RUNNING)) { + PMD_DRV_LOG(ALERT, + "Trying to xmit pkts while device is NOT running\n"); + return 0; + } + + nb_pkts = RTE_MIN(ena_com_free_q_entries(tx_ring->ena_com_io_sq), + nb_pkts); + + for (sent_idx = 0; sent_idx < nb_pkts; sent_idx++) { + if (ena_xmit_mbuf(tx_ring, tx_pkts[sent_idx])) + break; + rte_prefetch0(tx_pkts[ENA_IDX_ADD_MASKED(sent_idx, 4, + tx_ring->size_mask)]); + } + + tx_ring->tx_stats.available_desc = + ena_com_free_q_entries(tx_ring->ena_com_io_sq); + + /* If there are ready packets to be xmitted... 
*/ + if (sent_idx > 0) { + /* ...let HW do its best :-) */ + ena_com_write_sq_doorbell(tx_ring->ena_com_io_sq); + tx_ring->tx_stats.doorbells++; + } + + ena_tx_cleanup(tx_ring); + + tx_ring->tx_stats.available_desc = + ena_com_free_q_entries(tx_ring->ena_com_io_sq); tx_ring->tx_stats.tx_poll++; return sent_idx;

From patchwork Wed Apr 8 08:29:19 2020 X-Patchwork-Submitter: Michal Krawczyk X-Patchwork-Id: 67994 X-Patchwork-Delegate: ferruh.yigit@amd.com From: Michal Krawczyk To: dev@dpdk.org Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com, igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com, Michal Krawczyk Date: Wed, 8 Apr 2020 10:29:19 +0200 Message-Id: <20200408082921.31000-29-mk@semihalf.com> In-Reply-To: <20200408082921.31000-1-mk@semihalf.com> References: <20200408082921.31000-1-mk@semihalf.com> Subject: [dpdk-dev] [PATCH v3 28/30] net/ena: reuse 0 length Rx descriptor

Some ENA devices can pass a descriptor with length 0 to the driver. To avoid an extra mbuf allocation, the descriptor can be reused by simply putting it back to the device.
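The recycle-instead-of-allocate idea can be sketched as below. This is a simplified model under stated assumptions: `struct rx_slot`, `repost_rx_desc()`, and `handle_rx_desc()` are hypothetical stand-ins for `ena_rx_buffer`, `ena_add_single_rx_desc()`, and the Rx loop body - only the decision logic for a zero-length descriptor mirrors the patch:

```c
#include <stddef.h>

/* Hypothetical buffer slot: stands in for ena_rx_buffer. */
struct rx_slot {
	void *mbuf;        /* buffer currently attached to this req_id */
	int posted;        /* 1 while the device owns the buffer */
};

/* Re-post the buffer already attached to the slot instead of allocating a
 * new one; returns 0 on success, like ena_add_single_rx_desc() would. */
static int repost_rx_desc(struct rx_slot *slot)
{
	if (slot->mbuf == NULL)
		return -1;
	slot->posted = 1;
	return 0;
}

/* Handle one completed descriptor of the given length. Returns 1 when the
 * buffer is consumed by the stack, 0 when it is recycled to the device. */
static int handle_rx_desc(struct rx_slot *slot, unsigned int len)
{
	if (len == 0) {
		/* The device produced an empty descriptor: give the very
		 * same buffer back, skipping the allocation path entirely. */
		if (repost_rx_desc(slot) == 0)
			return 0;
	}
	slot->posted = 0;   /* buffer leaves the ring with the packet */
	return 1;
}
```

As in the patch, the mbuf is freed only when re-posting fails; on success the zero-length descriptor costs no allocation at all.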
Signed-off-by: Michal Krawczyk Reviewed-by: Igor Chauskin Reviewed-by: Guy Tzalik --- v2: * Compare rc (error code) to 0 value explicitely v3: * Update release notes doc/guides/rel_notes/release_20_05.rst | 1 + drivers/net/ena/ena_ethdev.c | 75 ++++++++++++++++++++------ 2 files changed, 61 insertions(+), 15 deletions(-) diff --git a/doc/guides/rel_notes/release_20_05.rst b/doc/guides/rel_notes/release_20_05.rst index e6b2f1b972..89a48a1ff8 100644 --- a/doc/guides/rel_notes/release_20_05.rst +++ b/doc/guides/rel_notes/release_20_05.rst @@ -85,6 +85,7 @@ New Features * Added support for large LLQ (Low-latency queue) headers. * Added Tx drops as new extended driver statistic. * Added support for accelerated LLQ mode. + * Handling of the 0 length descriptors on the Rx path. Removed Items diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index 1a7cc686f5..156a3e441b 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -195,6 +195,8 @@ static struct rte_mbuf *ena_rx_mbuf(struct ena_ring *rx_ring, uint8_t offset); static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts, uint16_t nb_pkts); +static int ena_add_single_rx_desc(struct ena_com_io_sq *io_sq, + struct rte_mbuf *mbuf, uint16_t id); static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count); static void ena_init_rings(struct ena_adapter *adapter, bool disable_meta_caching); @@ -1414,6 +1416,24 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, return 0; } +static int ena_add_single_rx_desc(struct ena_com_io_sq *io_sq, + struct rte_mbuf *mbuf, uint16_t id) +{ + struct ena_com_buf ebuf; + int rc; + + /* prepare physical address for DMA transaction */ + ebuf.paddr = mbuf->buf_iova + RTE_PKTMBUF_HEADROOM; + ebuf.len = mbuf->buf_len - RTE_PKTMBUF_HEADROOM; + + /* pass resource to device */ + rc = ena_com_add_single_rx_desc(io_sq, &ebuf, id); + if (unlikely(rc != 0)) + PMD_DRV_LOG(WARNING, "failed adding rx desc\n"); + + return 
rc; +} + static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count) { unsigned int i; @@ -1441,7 +1461,6 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count) for (i = 0; i < count; i++) { struct rte_mbuf *mbuf = mbufs[i]; - struct ena_com_buf ebuf; struct ena_rx_buffer *rx_info; if (likely((i + 4) < count)) @@ -1454,16 +1473,10 @@ static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count) rx_info = &rxq->rx_buffer_info[req_id]; - /* prepare physical address for DMA transaction */ - ebuf.paddr = mbuf->buf_iova + RTE_PKTMBUF_HEADROOM; - ebuf.len = mbuf->buf_len - RTE_PKTMBUF_HEADROOM; - /* pass resource to device */ - rc = ena_com_add_single_rx_desc(rxq->ena_com_io_sq, - &ebuf, req_id); - if (unlikely(rc)) { - PMD_DRV_LOG(WARNING, "failed adding rx desc\n"); + rc = ena_add_single_rx_desc(rxq->ena_com_io_sq, mbuf, req_id); + if (unlikely(rc != 0)) break; - } + rx_info->mbuf = mbuf; next_to_use = ENA_IDX_NEXT_MASKED(next_to_use, rxq->size_mask); } @@ -2079,6 +2092,7 @@ static struct rte_mbuf *ena_rx_mbuf(struct ena_ring *rx_ring, struct rte_mbuf *mbuf; struct rte_mbuf *mbuf_head; struct ena_rx_buffer *rx_info; + int rc; uint16_t ntc, len, req_id, buf = 0; if (unlikely(descs == 0)) @@ -2121,13 +2135,44 @@ static struct rte_mbuf *ena_rx_mbuf(struct ena_ring *rx_ring, rx_info = &rx_ring->rx_buffer_info[req_id]; RTE_ASSERT(rx_info->mbuf != NULL); - /* Create an mbuf chain. */ - mbuf->next = rx_info->mbuf; - mbuf = mbuf->next; + if (unlikely(len == 0)) { + /* + * Some devices can pass descriptor with the length 0. + * To avoid confusion, the PMD is simply putting the + * descriptor back, as it was never used. We'll avoid + * mbuf allocation that way. + */ + rc = ena_add_single_rx_desc(rx_ring->ena_com_io_sq, + rx_info->mbuf, req_id); + if (unlikely(rc != 0)) { + /* Free the mbuf in case of an error. 
*/ + rte_mbuf_raw_free(rx_info->mbuf); + } else { + /* + * If there was no error, just exit the loop as + * 0 length descriptor is always the last one. + */ + break; + } + } else { + /* Create an mbuf chain. */ + mbuf->next = rx_info->mbuf; + mbuf = mbuf->next; - ena_init_rx_mbuf(mbuf, len); - mbuf_head->pkt_len += len; + ena_init_rx_mbuf(mbuf, len); + mbuf_head->pkt_len += len; + } + /* + * Mark the descriptor as depleted and perform necessary + * cleanup. + * This code will execute in two cases: + * 1. Descriptor len was greater than 0 - normal situation. + * 2. Descriptor len was 0 and we failed to add the descriptor + * to the device. In that situation, we should try to add + * the mbuf again in the populate routine and mark the + * descriptor as used up by the device. + */ rx_info->mbuf = NULL; rx_ring->empty_rx_reqs[ntc] = req_id; ntc = ENA_IDX_NEXT_MASKED(ntc, rx_ring->size_mask); From patchwork Wed Apr 8 08:29:20 2020 X-Patchwork-Submitter: Michal Krawczyk X-Patchwork-Id: 67995 X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk To: dev@dpdk.org Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com, igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com, Michal Krawczyk Date: Wed, 8 Apr 2020 10:29:20 +0200 Message-Id: <20200408082921.31000-30-mk@semihalf.com> In-Reply-To: <20200408082921.31000-1-mk@semihalf.com> Subject: [dpdk-dev] [PATCH v3 29/30] doc: add notes on ENA usage on metal instances AWS metal instances support IOMMU, so the usage of igb_uio or vfio-pci can lead to problems (knowing when to use which module), especially since vfio-pci doesn't support SMMU on arm64. To clear up how those modules should be used in various setup conditions (with or without IOMMU) on metal instances, a more detailed explanation was added. Signed-off-by: Michal Krawczyk Reviewed-by: Igor Chauskin Reviewed-by: Guy Tzalik --- doc/guides/nics/ena.rst | 48 +++++++++++++++++++++++++++++++++++++++-- 1 file changed, 46 insertions(+), 2 deletions(-) diff --git a/doc/guides/nics/ena.rst b/doc/guides/nics/ena.rst index 0b9622ac85..bec97c3326 100644 --- a/doc/guides/nics/ena.rst +++ b/doc/guides/nics/ena.rst @@ -187,11 +187,55 @@ Prerequisites echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode -#. Bind the intended ENA device to ``vfio-pci`` or ``igb_uio`` module. + To use ``noiommu`` mode, the ``vfio-pci`` must be built with flag + ``CONFIG_VFIO_NOIOMMU``. +#.
Bind the intended ENA device to ``vfio-pci`` or ``igb_uio`` module. At this point the system should be ready to run DPDK applications. Once the -application runs to completion, the ENA can be detached from igb_uio if necessary. +application runs to completion, the ENA can be detached from the attached +module if necessary. + +**Note about usage on \*.metal instances** + +On AWS, the metal instances support IOMMU for both arm64 and x86_64 hosts. + +* x86_64 (e.g. c5.metal, i3.metal): + IOMMU should be disabled by default. In that situation, ``igb_uio`` can be + used as it is, but ``vfio-pci`` has to work in no-IOMMU mode (please see + above). + + When IOMMU is enabled, ``igb_uio`` cannot be used as it does not support + this feature, while ``vfio-pci`` should work without any changes. + To enable IOMMU on those hosts, please update ``GRUB_CMDLINE_LINUX`` in file + ``/etc/default/grub`` with the below extra boot arguments:: + + iommu=1 intel_iommu=on + + Then, make the changes live by executing as root:: + + # grub2-mkconfig > /boot/grub2/grub.cfg + + Finally, reboot; after that, IOMMU should be enabled. + +* arm64 (a1.metal): + IOMMU should be enabled by default. Unfortunately, ``vfio-pci`` does not + support SMMU, which is the arm64 implementation of IOMMU, and ``igb_uio`` + does not support IOMMU at all, so to use DPDK with ENA on those hosts, one + must disable IOMMU. This can be done by updating ``GRUB_CMDLINE_LINUX`` in + file ``/etc/default/grub`` with the extra boot argument:: + + iommu.passthrough=1 + + Then, make the changes live by executing as root:: + + # grub2-mkconfig > /boot/grub2/grub.cfg + + Finally, reboot; after that, IOMMU should be disabled. + Without IOMMU, ``igb_uio`` can be used as it is, but ``vfio-pci`` has to + work in no-IOMMU mode (please see above).
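[Editorial sketch] The decision flow in the doc hunk above can be condensed into a few shell commands. This is an illustrative, hedged recap only: the PCI address ``0000:00:05.0`` is an assumed example, and ``usertools/dpdk-devbind.py`` is assumed to be run from a DPDK source tree.

```shell
# 1. Check whether an IOMMU is active; an empty /sys/class/iommu means no IOMMU.
ls -A /sys/class/iommu

# 2a. No IOMMU (x86_64 default, or arm64 booted with iommu.passthrough=1):
#     vfio-pci can still be used, but only with unsafe no-IOMMU mode enabled.
modprobe vfio-pci
echo 1 > /sys/module/vfio/parameters/enable_unsafe_noiommu_mode

# 2b. IOMMU enabled (x86_64 only): plain vfio-pci works; igb_uio cannot be used.
# modprobe vfio-pci

# 3. Bind the ENA device (example address) to the chosen module.
./usertools/dpdk-devbind.py --bind=vfio-pci 0000:00:05.0
```

These are system-configuration commands (root required); they are not meant to be run outside an EC2 metal instance.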
Usage example ------------- From patchwork Wed Apr 8 08:29:21 2020 X-Patchwork-Submitter: Michal Krawczyk X-Patchwork-Id: 67996 X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk To: dev@dpdk.org Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com, evgenys@amazon.com, igorch@amazon.com, ferruh.yigit@intel.com, arybchenko@solarflare.com, Michal Krawczyk Date: Wed, 8 Apr 2020 10:29:21 +0200 Message-Id: <20200408082921.31000-31-mk@semihalf.com> In-Reply-To: <20200408082921.31000-1-mk@semihalf.com> Subject: [dpdk-dev] [PATCH v3 30/30] net/ena: update version of the driver to v2.1.0 The v2.1.0 release refactors the Tx and Rx paths, includes a few bug fixes, and also adds new features which are going to be available with the newest hardware.
Signed-off-by: Michal Krawczyk Reviewed-by: Igor Chauskin Reviewed-by: Guy Tzalik --- v3: * Remove features listed in this commit log as they were added to the release notes file drivers/net/ena/ena_ethdev.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c index 156a3e441b..c3fd3a4ac0 100644 --- a/drivers/net/ena/ena_ethdev.c +++ b/drivers/net/ena/ena_ethdev.c @@ -27,8 +27,8 @@ #include #define DRV_MODULE_VER_MAJOR 2 -#define DRV_MODULE_VER_MINOR 0 -#define DRV_MODULE_VER_SUBMINOR 3 +#define DRV_MODULE_VER_MINOR 1 +#define DRV_MODULE_VER_SUBMINOR 0 #define ENA_IO_TXQ_IDX(q) (2 * (q)) #define ENA_IO_RXQ_IDX(q) (2 * (q) + 1)