From patchwork Mon Sep 11 08:09:53 2023
X-Patchwork-Submitter: Artur Paszkiewicz
X-Patchwork-Id: 131331
X-Patchwork-Delegate: david.marchand@redhat.com
From: Artur Paszkiewicz
To: dev@dpdk.org
Cc: anatoly.burakov@intel.com, Artur Paszkiewicz
Subject: [PATCH v2] malloc: fix allocation for a specific case with ASAN
Date: Mon, 11 Sep 2023 10:09:53 +0200
Message-Id: <20230911080953.10355-1-artur.paszkiewicz@intel.com>

Allocation would fail with ASAN enabled if the size and alignment were
both equal to half of the page size, e.g.:

  size_t pg_sz = 2 * (1 << 20);
  rte_malloc(NULL, pg_sz / 2, pg_sz / 2);

In such a case, try_expand_heap_primary() allocated only one page, which
is not enough to fit this allocation with such alignment and
MALLOC_ELEM_TRAILER_LEN > 0, as correctly checked by
malloc_elem_can_hold().
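For context, the failing call can be wrapped in a minimal standalone
program such as the sketch below. The EAL setup, the hard-coded 2 MB page
size and the output message are illustrative assumptions, not part of the
patch; only the rte_malloc() call itself comes from the description above.

  #include <stdio.h>
  #include <rte_eal.h>
  #include <rte_malloc.h>

  int main(int argc, char **argv)
  {
          /* ASAN must be enabled at build time (e.g. meson setup
           * -Db_sanitize=address) for MALLOC_ELEM_TRAILER_LEN to be
           * non-zero and the failure to be reproducible. */
          if (rte_eal_init(argc, argv) < 0)
                  return 1;

          size_t pg_sz = 2 * (1 << 20);   /* assumes 2 MB hugepages */

          /* size == align == half the page size; returned NULL before the fix */
          void *p = rte_malloc(NULL, pg_sz / 2, pg_sz / 2);
          printf("rte_malloc %s\n", p != NULL ? "succeeded" : "failed");

          rte_free(p);
          rte_eal_cleanup();
          return 0;
  }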
Signed-off-by: Artur Paszkiewicz
---
v2:
- fix commit message typo

 lib/eal/common/malloc_heap.c | 4 ++--
 lib/eal/common/malloc_mp.c   | 4 ++--
 2 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/eal/common/malloc_heap.c b/lib/eal/common/malloc_heap.c
index 6b6cf9174c..bb7da0d2ef 100644
--- a/lib/eal/common/malloc_heap.c
+++ b/lib/eal/common/malloc_heap.c
@@ -402,8 +402,8 @@ try_expand_heap_primary(struct malloc_heap *heap, uint64_t pg_sz,
 	int n_segs;
 	bool callback_triggered = false;
 
-	alloc_sz = RTE_ALIGN_CEIL(RTE_ALIGN_CEIL(elt_size, align) +
-			MALLOC_ELEM_OVERHEAD, pg_sz);
+	alloc_sz = RTE_ALIGN_CEIL(RTE_MAX(MALLOC_ELEM_HEADER_LEN, align) +
+			elt_size + MALLOC_ELEM_TRAILER_LEN, pg_sz);
 	n_segs = alloc_sz / pg_sz;
 
 	/* we can't know in advance how many pages we'll need, so we malloc */
diff --git a/lib/eal/common/malloc_mp.c b/lib/eal/common/malloc_mp.c
index 7270c2ec90..62deaca9eb 100644
--- a/lib/eal/common/malloc_mp.c
+++ b/lib/eal/common/malloc_mp.c
@@ -250,8 +250,8 @@ handle_alloc_request(const struct malloc_mp_req *m,
 		return -1;
 	}
 
-	alloc_sz = RTE_ALIGN_CEIL(RTE_ALIGN_CEIL(ar->elt_size, ar->align) +
-			MALLOC_ELEM_OVERHEAD, ar->page_sz);
+	alloc_sz = RTE_ALIGN_CEIL(RTE_MAX(MALLOC_ELEM_HEADER_LEN, ar->align) +
+			ar->elt_size + MALLOC_ELEM_TRAILER_LEN, ar->page_sz);
 	n_segs = alloc_sz / ar->page_sz;
 
 	/* we can't know in advance how many pages we'll need, so we malloc */
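To illustrate why the new expression allocates enough, take the example
above with 2 MB pages and elt_size = align = 1 MiB. The exact
MALLOC_ELEM_HEADER_LEN/MALLOC_ELEM_TRAILER_LEN values depend on the build
configuration, so the figures below are indicative rather than exact:

  old: RTE_ALIGN_CEIL(RTE_ALIGN_CEIL(1 MiB, 1 MiB) + MALLOC_ELEM_OVERHEAD, 2 MiB)
       = RTE_ALIGN_CEIL(1 MiB + small per-element overhead, 2 MiB)
       = 2 MiB, i.e. one page

  new: RTE_ALIGN_CEIL(RTE_MAX(MALLOC_ELEM_HEADER_LEN, 1 MiB) + 1 MiB
                      + MALLOC_ELEM_TRAILER_LEN, 2 MiB)
       = RTE_ALIGN_CEIL(2 MiB + trailer, 2 MiB)
       = 4 MiB, i.e. two pages

A single 2 MiB page cannot hold the element: the only 1 MiB-aligned
payload start that leaves room for the header is offset 1 MiB, which
leaves exactly 1 MiB for the 1 MiB payload and no room for a non-zero
trailer, so malloc_elem_can_hold() rightly rejects it. With two pages the
header, the aligned payload and the trailer all fit.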