From patchwork Wed May 25 05:18:37 2022
X-Patchwork-Submitter: Fidaullah Noonari
X-Patchwork-Id: 111781
X-Patchwork-Delegate: david.marchand@redhat.com
From: Fidaullah Noonari
To: anatoly.burakov@intel.com
Cc: dev@dpdk.org, Fidaullah Noonari
Subject: [PATCH] eal: fix bug where rte_malloc() fails to allocate memory
Date: Wed, 25 May 2022 10:18:37 +0500
Message-Id: <20220525051837.247255-1-fidaullah.noonari@emumba.com>

If rte_malloc() is called with a size that falls between a multiple of
the hugepage size minus the malloc element header length and that
multiple of the hugepage size, the allocation fails.
This fix replaces MALLOC_ELEM_TRAILER_LEN with MALLOC_ELEM_OVERHEAD in
try_expand_heap_primary() and handle_alloc_request() so that
MALLOC_ELEM_HEADER_LEN is also accounted for when calculating n_segs.

Bugzilla ID: 800

Signed-off-by: Fidaullah Noonari
Acked-by: Dmitry Kozlyuk
---
 lib/eal/common/malloc_heap.c | 2 +-
 lib/eal/common/malloc_mp.c   | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c
index a3d26fcbea..27a52266ad 100644
--- a/lib/eal/common/malloc_heap.c
+++ b/lib/eal/common/malloc_heap.c
@@ -403,7 +403,7 @@ try_expand_heap_primary(struct malloc_heap *heap, uint64_t pg_sz,
 	bool callback_triggered = false;
 
 	alloc_sz = RTE_ALIGN_CEIL(align + elt_size +
-			MALLOC_ELEM_TRAILER_LEN, pg_sz);
+			MALLOC_ELEM_OVERHEAD, pg_sz);
 	n_segs = alloc_sz / pg_sz;
 
 	/* we can't know in advance how many pages we'll need, so we malloc */
diff --git a/lib/eal/common/malloc_mp.c b/lib/eal/common/malloc_mp.c
index 207b90847e..2b8eb51067 100644
--- a/lib/eal/common/malloc_mp.c
+++ b/lib/eal/common/malloc_mp.c
@@ -250,7 +250,7 @@ handle_alloc_request(const struct malloc_mp_req *m,
 	}
 
 	alloc_sz = RTE_ALIGN_CEIL(ar->align + ar->elt_size +
-			MALLOC_ELEM_TRAILER_LEN, ar->page_sz);
+			MALLOC_ELEM_OVERHEAD, ar->page_sz);
 	n_segs = alloc_sz / ar->page_sz;
 
 	/* we can't know in advance how many pages we'll need, so we malloc */
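
To illustrate the arithmetic behind the change, here is a minimal stand-alone
sketch (not part of the patch): it compares the old size calculation, which
reserves only the element trailer, with the new one that reserves the full
element overhead. The 2 MiB page size and the 64-byte header/trailer lengths
are assumed example values, not the actual DPDK macro definitions.

/*
 * Illustrative sketch only -- not part of the patch. PG_SZ, HEADER_LEN and
 * TRAILER_LEN are assumed stand-ins for the real DPDK constants.
 */
#include <stdio.h>
#include <stdint.h>

#define PG_SZ       (2UL * 1024 * 1024)   /* assume a 2 MiB hugepage */
#define HEADER_LEN  64UL                  /* stand-in for MALLOC_ELEM_HEADER_LEN */
#define TRAILER_LEN 64UL                  /* stand-in for MALLOC_ELEM_TRAILER_LEN */
#define OVERHEAD    (HEADER_LEN + TRAILER_LEN)
#define ALIGN_CEIL(v, a) ((((v) + (a) - 1) / (a)) * (a))

int main(void)
{
	/* request a size just below a hugepage multiple minus the header */
	uint64_t elt_size = PG_SZ - HEADER_LEN;
	uint64_t align = 0;

	/* old: only the trailer is reserved -> rounds up to 1 page */
	uint64_t old_sz = ALIGN_CEIL(align + elt_size + TRAILER_LEN, PG_SZ);
	/* new: header + trailer are reserved -> rounds up to 2 pages */
	uint64_t new_sz = ALIGN_CEIL(align + elt_size + OVERHEAD, PG_SZ);

	/*
	 * The element actually needs header + data + trailer, i.e. PG_SZ + 64
	 * bytes, so the old calculation undercounts by one page and the heap
	 * expansion cannot fit the element.
	 */
	printf("old: %lu page(s), new: %lu page(s)\n",
			(unsigned long)(old_sz / PG_SZ),
			(unsigned long)(new_sz / PG_SZ));
	return 0;
}

With these assumed values the old formula asks for one page while the element
needs slightly more than one, which is the failure described in the commit
message; reserving the full overhead rounds up to two pages instead.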