From patchwork Tue Oct 1 23:48:52 2024
Subject: [PATCH] net/gve: fix mbuf allocation memory leak for DQ Rx
From: Joshua Washington <joshwash@google.com>
To: Jeroen de Borst, Rushil Gupta, Joshua Washington, Junfeng Guo
Cc: dev@dpdk.org, stable@dpdk.org, Ferruh Yigit, Praveen Kaligineedi
Date: Tue, 1 Oct 2024 16:48:52 -0700
Message-ID: <20241001234852.3312594-1-joshwash@google.com>
X-Patchwork-Id: 144889
X-Patchwork-Delegate: ferruh.yigit@amd.com

Currently, gve_rxq_mbufs_alloc_dqo() allocates RING_SIZE buffers, but
only posts RING_SIZE - 1 of them, inevitably leaking a buffer every
time queues are stopped/started. This could eventually lead to running
out of mbufs if an application stops/starts traffic enough.
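[Editor's illustration] To make the leak arithmetic concrete, below is a
minimal standalone model of the mempool accounting, not driver code: the
names RING_SIZE, POOL_SIZE, pool_free, and start_stop_cycle are all
hypothetical, and the stop path is assumed (per the commit message) to
return only the buffers that were actually posted.

	#include <stdio.h>

	#define RING_SIZE 1024	/* hypothetical queue size */
	#define POOL_SIZE 4096	/* hypothetical mempool size */

	static int pool_free = POOL_SIZE;

	/* One queue start/stop: start takes alloc_cnt mbufs from the
	 * pool, stop gives back only the post_cnt that were posted. */
	static void start_stop_cycle(int alloc_cnt, int post_cnt)
	{
		pool_free -= alloc_cnt;
		pool_free += post_cnt;
	}

	int main(void)
	{
		int cycle;

		/* Old behavior: allocate RING_SIZE, post RING_SIZE - 1;
		 * one mbuf is stranded per cycle. */
		for (cycle = 0; cycle < 100; cycle++)
			start_stop_cycle(RING_SIZE, RING_SIZE - 1);
		printf("buggy: %d mbufs leaked\n", POOL_SIZE - pool_free);

		/* Fixed behavior: allocate exactly what is posted. */
		pool_free = POOL_SIZE;
		for (cycle = 0; cycle < 100; cycle++)
			start_stop_cycle(RING_SIZE - 1, RING_SIZE - 1);
		printf("fixed: %d mbufs leaked\n", POOL_SIZE - pool_free);
		return 0;
	}

Running this prints 100 leaked mbufs for the old accounting and 0 for
the fixed one, matching the one-buffer-per-restart leak described above.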
Fixes: b044845bb015 ("net/gve: support queue start/stop")
Cc: stable@dpdk.org

Signed-off-by: Joshua Washington <joshwash@google.com>
Reviewed-by: Rushil Gupta
Reviewed-by: Praveen Kaligineedi
---
 drivers/net/gve/gve_rx_dqo.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/drivers/net/gve/gve_rx_dqo.c b/drivers/net/gve/gve_rx_dqo.c
index 60702d4100..e4084bc0dd 100644
--- a/drivers/net/gve/gve_rx_dqo.c
+++ b/drivers/net/gve/gve_rx_dqo.c
@@ -393,34 +393,36 @@ static int
 gve_rxq_mbufs_alloc_dqo(struct gve_rx_queue *rxq)
 {
 	struct rte_mbuf *nmb;
+	uint16_t rx_mask;
 	uint16_t i;
 	int diag;
 
-	diag = rte_pktmbuf_alloc_bulk(rxq->mpool, &rxq->sw_ring[0], rxq->nb_rx_desc);
+	rx_mask = rxq->nb_rx_desc - 1;
+	diag = rte_pktmbuf_alloc_bulk(rxq->mpool, &rxq->sw_ring[0],
+			rx_mask);
 	if (diag < 0) {
 		rxq->stats.no_mbufs_bulk++;
-		for (i = 0; i < rxq->nb_rx_desc - 1; i++) {
+		for (i = 0; i < rx_mask; i++) {
 			nmb = rte_pktmbuf_alloc(rxq->mpool);
 			if (!nmb)
 				break;
 			rxq->sw_ring[i] = nmb;
 		}
 		if (i < rxq->nb_rx_desc - 1) {
-			rxq->stats.no_mbufs += rxq->nb_rx_desc - 1 - i;
+			rxq->stats.no_mbufs += rx_mask - i;
 			return -ENOMEM;
 		}
 	}
 
-	for (i = 0; i < rxq->nb_rx_desc; i++) {
-		if (i == rxq->nb_rx_desc - 1)
-			break;
+	for (i = 0; i < rx_mask; i++) {
 		nmb = rxq->sw_ring[i];
 		rxq->rx_ring[i].buf_addr = rte_cpu_to_le_64(rte_mbuf_data_iova_default(nmb));
 		rxq->rx_ring[i].buf_id = rte_cpu_to_le_16(i);
 	}
 
+	rxq->rx_ring[rx_mask].buf_id = rte_cpu_to_le_16(rx_mask);
 	rxq->nb_rx_hold = 0;
-	rxq->bufq_tail = rxq->nb_rx_desc - 1;
+	rxq->bufq_tail = rx_mask;
 
 	rte_write32(rxq->bufq_tail, rxq->qrx_tail);
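[Editor's note] The patch itself does not state why only nb_rx_desc - 1
buffers may be posted, so treat the following rationale as an
assumption: on a power-of-two descriptor ring tracked by head and tail
indices, head == tail conventionally means "empty", so at most
size - 1 entries can ever be outstanding; hence rx_mask both as the
post count and as the final bufq_tail value. A minimal sketch of that
occupancy arithmetic (ring_count is a hypothetical helper, not a DPDK
API):

	#include <stdio.h>

	/* Classic power-of-two ring occupancy: head == tail reads as
	 * "empty", so the count ranges over 0..size-1, never size. */
	static unsigned int ring_count(unsigned int head,
				       unsigned int tail,
				       unsigned int size)
	{
		return (tail - head) & (size - 1);
	}

	int main(void)
	{
		/* Posting all 1024 slots would wrap tail back onto
		 * head and be indistinguishable from an empty ring. */
		printf("%u\n", ring_count(0, 1023, 1024)); /* 1023 */
		printf("%u\n", ring_count(0, 1024, 1024)); /* 0 (!) */
		return 0;
	}

Under that assumption, the fix keeps the allocation count, the post
count, and the tail write all derived from the single rx_mask value, so
they can no longer drift apart the way the original RING_SIZE versus
RING_SIZE - 1 split allowed.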