From patchwork Tue Jun 7 16:43:38 2022
X-Patchwork-Submitter: Michal Krawczyk
X-Patchwork-Id: 112486
X-Patchwork-Delegate: ferruh.yigit@amd.com
From: Michal Krawczyk
To: ferruh.yigit@intel.com
Cc: shaibran@amazon.com, upstream@semihalf.com, mw@semihalf.com, dev@dpdk.org,
 Dawid Gorecki, Michal Krawczyk, Amit Bernstein
Subject: [PATCH 1/4] net/ena: add fast mbuf free support
Date: Tue, 7 Jun 2022 18:43:38 +0200
Message-Id: <20220607164341.19088-2-mk@semihalf.com>
In-Reply-To: <20220607164341.19088-1-mk@semihalf.com>
References: <20220607164341.19088-1-mk@semihalf.com>
List-Id: DPDK patches and discussions

From: Dawid Gorecki

Add support for the RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE offload. It can be
enabled if all the mbufs for a given queue belong to the same mempool and
their reference count is equal to 1.
Signed-off-by: Dawid Gorecki
Reviewed-by: Michal Krawczyk
Reviewed-by: Shai Brandes
Reviewed-by: Amit Bernstein
---
 doc/guides/nics/features/ena.ini       |  1 +
 doc/guides/rel_notes/release_22_07.rst |  7 ++++
 drivers/net/ena/ena_ethdev.c           | 49 ++++++++++++++++++++++++--
 3 files changed, 55 insertions(+), 2 deletions(-)

diff --git a/doc/guides/nics/features/ena.ini b/doc/guides/nics/features/ena.ini
index 59c1ae85fa..1fe7a71e3d 100644
--- a/doc/guides/nics/features/ena.ini
+++ b/doc/guides/nics/features/ena.ini
@@ -7,6 +7,7 @@
 Link status            = Y
 Link status event      = Y
 Rx interrupt           = Y
+Fast mbuf free         = Y
 Free Tx mbuf on demand = Y
 MTU update             = Y
 Scattered Rx           = Y
diff --git a/doc/guides/rel_notes/release_22_07.rst b/doc/guides/rel_notes/release_22_07.rst
index d46f773df0..73f566e5fc 100644
--- a/doc/guides/rel_notes/release_22_07.rst
+++ b/doc/guides/rel_notes/release_22_07.rst
@@ -87,6 +87,13 @@ New Features

   Added an API which can get the device type of vDPA device.

+
+* **Updated Amazon ena driver.**
+
+  The new driver version (v2.7.0) includes:
+
+  * Added fast mbuf free feature support.
+
 * **Updated Intel iavf driver.**

   * Added Tx QoS queue rate limitation support.
diff --git a/drivers/net/ena/ena_ethdev.c b/drivers/net/ena/ena_ethdev.c
index 68768cab70..68a4478410 100644
--- a/drivers/net/ena/ena_ethdev.c
+++ b/drivers/net/ena/ena_ethdev.c
@@ -36,6 +36,12 @@

 #define ENA_MIN_RING_DESC	128

+/*
+ * We should try to keep ENA_CLEANUP_BUF_SIZE lower than
+ * RTE_MEMPOOL_CACHE_MAX_SIZE, so we can fit this in mempool local cache.
+ */
+#define ENA_CLEANUP_BUF_SIZE	256
+
 #define ENA_PTYPE_HAS_HASH	(RTE_PTYPE_L4_TCP | RTE_PTYPE_L4_UDP)

 struct ena_stats {
@@ -2402,6 +2408,8 @@ static uint64_t ena_get_tx_port_offloads(struct ena_adapter *adapter)

 	port_offloads |= RTE_ETH_TX_OFFLOAD_MULTI_SEGS;

+	port_offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
+
 	return port_offloads;
 }

@@ -2414,9 +2422,12 @@ static uint64_t ena_get_rx_queue_offloads(struct ena_adapter *adapter)

 static uint64_t ena_get_tx_queue_offloads(struct ena_adapter *adapter)
 {
+	uint64_t queue_offloads = 0;
 	RTE_SET_USED(adapter);

-	return 0;
+	queue_offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
+
+	return queue_offloads;
 }

 static int ena_infos_get(struct rte_eth_dev *dev,
@@ -3001,13 +3012,38 @@ static int ena_xmit_mbuf(struct ena_ring *tx_ring, struct rte_mbuf *mbuf)
 	return 0;
 }

+static __rte_always_inline size_t
+ena_tx_cleanup_mbuf_fast(struct rte_mbuf **mbufs_to_clean,
+	struct rte_mbuf *mbuf,
+	size_t mbuf_cnt,
+	size_t buf_size)
+{
+	struct rte_mbuf *m_next;
+
+	while (mbuf != NULL) {
+		m_next = mbuf->next;
+		mbufs_to_clean[mbuf_cnt++] = mbuf;
+		if (mbuf_cnt == buf_size) {
+			rte_mempool_put_bulk(mbufs_to_clean[0]->pool, (void **)mbufs_to_clean,
+				(unsigned int)mbuf_cnt);
+			mbuf_cnt = 0;
+		}
+		mbuf = m_next;
+	}
+
+	return mbuf_cnt;
+}
+
 static int ena_tx_cleanup(void *txp, uint32_t free_pkt_cnt)
 {
+	struct rte_mbuf *mbufs_to_clean[ENA_CLEANUP_BUF_SIZE];
 	struct ena_ring *tx_ring = (struct ena_ring *)txp;
+	size_t mbuf_cnt = 0;
 	unsigned int total_tx_descs = 0;
 	unsigned int total_tx_pkts = 0;
 	uint16_t cleanup_budget;
 	uint16_t next_to_clean = tx_ring->next_to_clean;
+	bool fast_free = tx_ring->offloads & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;

 	/*
 	 * If free_pkt_cnt is equal to 0, it means that the user requested
@@ -3032,7 +3068,12 @@ static int ena_tx_cleanup(void *txp, uint32_t free_pkt_cnt)
 		tx_info->timestamp = 0;

 		mbuf = tx_info->mbuf;
-		rte_pktmbuf_free(mbuf);
+		if (fast_free) {
+			mbuf_cnt = ena_tx_cleanup_mbuf_fast(mbufs_to_clean, mbuf,
+				mbuf_cnt, ENA_CLEANUP_BUF_SIZE);
+		} else {
+			rte_pktmbuf_free(mbuf);
+		}

 		tx_info->mbuf = NULL;
 		tx_ring->empty_tx_reqs[next_to_clean] = req_id;
@@ -3052,6 +3093,10 @@ static int ena_tx_cleanup(void *txp, uint32_t free_pkt_cnt)
 		ena_com_update_dev_comp_head(tx_ring->ena_com_io_cq);
 	}

+	if (mbuf_cnt != 0)
+		rte_mempool_put_bulk(mbufs_to_clean[0]->pool,
+			(void **)mbufs_to_clean, mbuf_cnt);
+
 	/* Notify completion handler that full cleanup was performed */
 	if (free_pkt_cnt == 0 || total_tx_pkts < cleanup_budget)
 		tx_ring->last_cleanup_ticks = rte_get_timer_cycles();