From patchwork Fri Mar 27 10:18:14 2020
From: Michal Krawczyk <mk@semihalf.com>
To: dev@dpdk.org
Cc: mw@semihalf.com, mba@semihalf.com, gtzalik@amazon.com,
 evgenys@amazon.com, igorch@amazon.com
Date: Fri, 27 Mar 2020 11:18:14 +0100
Message-Id: <20200327101823.12646-21-mk@semihalf.com>
In-Reply-To: <20200327101823.12646-1-mk@semihalf.com>
References: <20200327101823.12646-1-mk@semihalf.com>
Subject: [dpdk-dev] [PATCH 20/29] net/ena: disable meta caching

In LLQ mode, the device can indicate that meta descriptor caching is
disabled.
In that case, the driver must send a valid meta descriptor on every Tx
packet.

Signed-off-by: Michal Krawczyk <mk@semihalf.com>
Reviewed-by: Igor Chauskin <igorch@amazon.com>
Reviewed-by: Guy Tzalik <gtzalik@amazon.com>
---
 drivers/net/ena/ena_ethdev.c | 28 ++++++++++++++++++++++------
 drivers/net/ena/ena_ethdev.h |  2 ++
 2 files changed, 24 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 52eac0f1b9..45c5d26ce8 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -187,7 +187,8 @@ static int ena_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
 static uint16_t eth_ena_recv_pkts(void *rx_queue, struct rte_mbuf **rx_pkts,
 				  uint16_t nb_pkts);
 static int ena_populate_rx_queue(struct ena_ring *rxq, unsigned int count);
-static void ena_init_rings(struct ena_adapter *adapter);
+static void ena_init_rings(struct ena_adapter *adapter,
+			   bool disable_meta_caching);
 static int ena_mtu_set(struct rte_eth_dev *dev, uint16_t mtu);
 static int ena_start(struct rte_eth_dev *dev);
 static void ena_stop(struct rte_eth_dev *dev);
@@ -303,7 +304,8 @@ static inline void ena_rx_mbuf_prepare(struct rte_mbuf *mbuf,
 
 static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 				       struct ena_com_tx_ctx *ena_tx_ctx,
-				       uint64_t queue_offloads)
+				       uint64_t queue_offloads,
+				       bool disable_meta_caching)
 {
 	struct ena_com_tx_meta *ena_meta = &ena_tx_ctx->ena_meta;
 
@@ -353,6 +355,9 @@ static inline void ena_tx_mbuf_prepare(struct rte_mbuf *mbuf,
 		ena_meta->l3_hdr_len = mbuf->l3_len;
 		ena_meta->l3_hdr_offset = mbuf->l2_len;
 
+		ena_tx_ctx->meta_valid = true;
+	} else if (disable_meta_caching) {
+		memset(ena_meta, 0, sizeof(*ena_meta));
 		ena_tx_ctx->meta_valid = true;
 	} else {
 		ena_tx_ctx->meta_valid = false;
@@ -1718,8 +1723,8 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
 	const char *queue_type_str;
 	uint32_t max_num_io_queues;
 	int rc;
-
 	static int adapters_found;
+	bool disable_meta_caching;
 	bool wd_state;
 
 	eth_dev->dev_ops = &ena_dev_ops;
@@ -1802,8 +1807,16 @@ static int eth_ena_dev_init(struct rte_eth_dev *eth_dev)
 	adapter->max_rx_sgl_size = calc_queue_ctx.max_rx_sgl_size;
 	adapter->max_num_io_queues = max_num_io_queues;
 
+	if (ena_dev->tx_mem_queue_type == ENA_ADMIN_PLACEMENT_POLICY_DEV) {
+		disable_meta_caching =
+			!!(get_feat_ctx.llq.accel_mode.u.get.supported_flags &
+			BIT(ENA_ADMIN_DISABLE_META_CACHING));
+	} else {
+		disable_meta_caching = false;
+	}
+
 	/* prepare ring structures */
-	ena_init_rings(adapter);
+	ena_init_rings(adapter, disable_meta_caching);
 
 	ena_config_debug_area(adapter);
 
@@ -1917,7 +1930,8 @@ static int ena_dev_configure(struct rte_eth_dev *dev)
 	return 0;
 }
 
-static void ena_init_rings(struct ena_adapter *adapter)
+static void ena_init_rings(struct ena_adapter *adapter,
+			   bool disable_meta_caching)
 {
 	size_t i;
 
@@ -1931,6 +1945,7 @@ static void ena_init_rings(struct ena_adapter *adapter)
 		ring->tx_mem_queue_type = adapter->ena_dev.tx_mem_queue_type;
 		ring->tx_max_header_size = adapter->ena_dev.tx_max_header_size;
 		ring->sgl_size = adapter->max_tx_sgl_size;
+		ring->disable_meta_caching = disable_meta_caching;
 	}
 
 	for (i = 0; i < adapter->max_num_io_queues; i++) {
@@ -2343,7 +2358,8 @@ static uint16_t eth_ena_xmit_pkts(void *tx_queue, struct rte_mbuf **tx_pkts,
 		} /* there's no else as we take advantage of memset zeroing */
 
 		/* Set TX offloads flags, if applicable */
-		ena_tx_mbuf_prepare(mbuf, &ena_tx_ctx, tx_ring->offloads);
+		ena_tx_mbuf_prepare(mbuf, &ena_tx_ctx, tx_ring->offloads,
+			tx_ring->disable_meta_caching);
 
 		rte_prefetch0(tx_pkts[(sent_idx + 4) & ring_mask]);
 
diff --git a/drivers/net/ena/ena_ethdev.h b/drivers/net/ena/ena_ethdev.h
index a7f87eddc3..98ae75f5af 100644
--- a/drivers/net/ena/ena_ethdev.h
+++ b/drivers/net/ena/ena_ethdev.h
@@ -113,6 +113,8 @@ struct ena_ring {
 	uint64_t offloads;
 	u16 sgl_size;
 
+	bool disable_meta_caching;
+
 	union {
 		struct ena_stats_rx rx_stats;
 		struct ena_stats_tx tx_stats;
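
The change in ena_tx_mbuf_prepare() boils down to a three-way decision,
paired with a one-time capability check at device init. The standalone
sketch below illustrates that logic under simplified assumptions: the
struct layout, the bit position, and the helper names
(detect_disable_meta_caching, prepare_meta) are hypothetical stand-ins
for the driver's real ena_com types and the
ENA_ADMIN_DISABLE_META_CACHING constant, not actual DPDK code.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical, simplified stand-ins for ena_com_tx_meta /
 * ena_com_tx_ctx and the disable-meta-caching capability bit;
 * for illustration only, not the driver's real definitions. */
#define BIT(n)                   (1U << (n))
#define DISABLE_META_CACHING_BIT 2

struct tx_meta {
	uint32_t mss;
	uint32_t l3_hdr_len;
	uint32_t l3_hdr_offset;
};

struct tx_ctx {
	struct tx_meta meta;
	bool meta_valid;
};

/* Device init: in LLQ mode, read the capability bit advertised by
 * the device; in any other placement policy, meta caching stays on. */
static bool detect_disable_meta_caching(bool llq_mode,
					uint32_t supported_flags)
{
	if (!llq_mode)
		return false;
	return !!(supported_flags & BIT(DISABLE_META_CACHING_BIT));
}

/* Per-packet Tx path, mirroring the three-way decision the patch
 * adds: with offloads, fill the meta descriptor and mark it valid;
 * with meta caching disabled, send a zeroed but still *valid*
 * descriptor so the device never reuses stale cached meta;
 * otherwise let the device keep using the cached descriptor. */
static void prepare_meta(struct tx_ctx *ctx, bool needs_offload,
			 bool disable_meta_caching)
{
	if (needs_offload) {
		ctx->meta.mss = 1448;        /* example values only */
		ctx->meta.l3_hdr_len = 20;
		ctx->meta.l3_hdr_offset = 14;
		ctx->meta_valid = true;
	} else if (disable_meta_caching) {
		memset(&ctx->meta, 0, sizeof(ctx->meta));
		ctx->meta_valid = true;
	} else {
		ctx->meta_valid = false;
	}
}

int main(void)
{
	/* Pretend the device runs in LLQ mode and advertises the bit. */
	bool disable = detect_disable_meta_caching(true,
			BIT(DISABLE_META_CACHING_BIT));
	struct tx_ctx ctx = {0};

	prepare_meta(&ctx, false, disable);
	printf("meta_valid=%d mss=%u\n", ctx.meta_valid,
	       (unsigned)ctx.meta.mss);
	return 0;
}

The branch worth noting is the middle one: when the device cannot cache
meta descriptors, a zeroed descriptor is still marked valid, so a meta
descriptor accompanies every packet instead of being skipped.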