From patchwork Wed Apr 26 10:22:54 2023
From: Wenjing Qiao
To: jingjing.wu@intel.com, beilei.xing@intel.com, qi.z.zhang@intel.com
Cc: dev@dpdk.org, mingxia.liu@intel.com, Wenjing Qiao, stable@dpdk.org, Christopher Pau
Subject: [PATCH v3 10/15] common/idpf/base: fix memory leaks on ctrlq functions
Date: Wed, 26 Apr 2023 06:22:54 -0400
Message-Id: <20230426102259.205992-11-wenjing.qiao@intel.com>
In-Reply-To: <20230426102259.205992-1-wenjing.qiao@intel.com>
References: <20230421084043.135503-2-wenjing.qiao@intel.com> <20230426102259.205992-1-wenjing.qiao@intel.com>
List-Id: DPDK patches and discussions

idpf_init_hw needs to free its q_info.
idpf_clean_arq_element needs to return buffers via
idpf_ctlq_post_rx_buffs.

Fixes: fb4ac04e9bfa ("common/idpf: introduce common library")
Cc: stable@dpdk.org

Signed-off-by: Christopher Pau
Signed-off-by: Wenjing Qiao
---
 drivers/common/idpf/base/idpf_common.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/drivers/common/idpf/base/idpf_common.c b/drivers/common/idpf/base/idpf_common.c
index 2394f85580..da352ea88f 100644
--- a/drivers/common/idpf/base/idpf_common.c
+++ b/drivers/common/idpf/base/idpf_common.c
@@ -130,6 +130,8 @@ int idpf_init_hw(struct idpf_hw *hw, struct idpf_ctlq_size ctlq_size)
 	hw->mac.addr[4] = 0x03;
 	hw->mac.addr[5] = 0x14;
 
+	idpf_free(hw, q_info);
+
 	return 0;
 }
 
@@ -219,6 +221,7 @@ bool idpf_check_asq_alive(struct idpf_hw *hw)
 int idpf_clean_arq_element(struct idpf_hw *hw,
 			   struct idpf_arq_event_info *e, u16 *pending)
 {
+	struct idpf_dma_mem *dma_mem = NULL;
 	struct idpf_ctlq_msg msg = { 0 };
 	int status;
 	u16 msg_data_len;
@@ -226,6 +229,8 @@ int idpf_clean_arq_element(struct idpf_hw *hw,
 	*pending = 1;
 	status = idpf_ctlq_recv(hw->arq, pending, &msg);
+	if (status == -ENOMSG)
+		goto exit;
 
 	/* ctlq_msg does not align to ctlq_desc, so copy relevant data here */
 	e->desc.opcode = msg.opcode;
@@ -240,7 +245,14 @@ int idpf_clean_arq_element(struct idpf_hw *hw,
 		msg_data_len = msg.data_len;
 		idpf_memcpy(e->msg_buf, msg.ctx.indirect.payload->va, msg_data_len,
 			    IDPF_DMA_TO_NONDMA);
+		dma_mem = msg.ctx.indirect.payload;
+	} else {
+		*pending = 0;
 	}
+
+	status = idpf_ctlq_post_rx_buffs(hw, hw->arq, pending, &dma_mem);
+
+exit:
 	return status;
 }
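
The diff above is the authoritative change; for readers outside the driver tree, here is a minimal standalone sketch of the receive-path pattern the idpf_clean_arq_element fix restores: bail out early when no message is pending, otherwise always recycle the consumed buffer back to the RX queue so the pool does not drain. All names here (fake_ctlq_recv, fake_post_rx_buffs, the counter-based buffer pool) are hypothetical stand-ins, not the driver's real API.

```c
#include <assert.h>
#include <errno.h>

/* Hypothetical stand-in for the ARQ's DMA buffer pool. */
#define POOL_SIZE 4

static int free_bufs = POOL_SIZE;

/* Pretend to receive a control-queue message; a successful receive
 * consumes one buffer from the pool. */
static int fake_ctlq_recv(int have_msg)
{
	if (!have_msg)
		return -ENOMSG;	/* nothing pending: no buffer consumed */
	free_bufs--;
	return 0;
}

/* Pretend to hand buffers back to the hardware RX queue. */
static void fake_post_rx_buffs(void)
{
	free_bufs++;
}

/* Mirrors the fixed flow: on -ENOMSG jump straight to exit; otherwise
 * process the message and repost the buffer before returning. Without
 * the repost, every received message would leak one buffer. */
int clean_arq_element(int have_msg)
{
	int status = fake_ctlq_recv(have_msg);

	if (status == -ENOMSG)
		goto exit;

	/* ... copy the payload out for the caller here ... */

	fake_post_rx_buffs();	/* the fix: recycle the DMA buffer */
exit:
	return status;
}
```

Under this toy model, any number of successful receives leaves the pool at its original size, which is exactly the invariant the patch restores.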