From patchwork Thu Jul 16 05:19:02 2020
X-Patchwork-Submitter: Ruifeng Wang
X-Patchwork-Id: 74162
X-Patchwork-Delegate: thomas@monjalon.net
From: Ruifeng Wang <ruifeng.wang@arm.com>
To: Bruce Richardson, Vladimir Medvedkin
Cc: dev@dpdk.org, nd@arm.com, honnappa.nagarahalli@arm.com,
 phil.yang@arm.com, Ruifeng Wang <ruifeng.wang@arm.com>
Date: Thu, 16 Jul 2020 13:19:02 +0800
Message-Id: <20200716051903.94195-1-ruifeng.wang@arm.com>
X-Mailer: git-send-email 2.17.1
Subject: [dpdk-dev] [PATCH] lpm: fix unchecked return value

Coverity complains about the unchecked return value of
rte_rcu_qsbr_dq_enqueue(). By default, the defer queue size is big
enough to hold all tbl8 groups. When enqueue fails, return an error to
the user to indicate a system issue.

Coverity issue: 360832
Fixes: 8a9f8564e9f9 ("lpm: implement RCU rule reclamation")

Signed-off-by: Ruifeng Wang <ruifeng.wang@arm.com>
---
 lib/librte_lpm/rte_lpm.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
index 2db9e16a2..a6d3a7894 100644
--- a/lib/librte_lpm/rte_lpm.c
+++ b/lib/librte_lpm/rte_lpm.c
@@ -532,11 +532,12 @@ tbl8_alloc(struct rte_lpm *lpm)
 	return group_idx;
 }
 
-static void
+static int
 tbl8_free(struct rte_lpm *lpm, uint32_t tbl8_group_start)
 {
 	struct rte_lpm_tbl_entry zero_tbl8_entry = {0};
 	struct __rte_lpm *internal_lpm;
+	int rc = 0;
 
 	internal_lpm = container_of(lpm, struct __rte_lpm, lpm);
 	if (internal_lpm->v == NULL) {
@@ -552,9 +553,13 @@ tbl8_free(struct rte_lpm *lpm, uint32_t tbl8_group_start)
 				__ATOMIC_RELAXED);
 	} else if (internal_lpm->rcu_mode == RTE_LPM_QSBR_MODE_DQ) {
 		/* Push into QSBR defer queue. */
-		rte_rcu_qsbr_dq_enqueue(internal_lpm->dq,
+		rc = rte_rcu_qsbr_dq_enqueue(internal_lpm->dq,
 				(void *)&tbl8_group_start);
+		if (rc != 0)
+			RTE_LOG(ERR, LPM, "Failed to push QSBR FIFO\n");
 	}
+
+	return rc;
 }
 
 static __rte_noinline int32_t
@@ -1041,6 +1046,7 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
 	uint32_t tbl24_index, tbl8_group_index, tbl8_group_start, tbl8_index,
 			tbl8_range, i;
 	int32_t tbl8_recycle_index;
+	int rc = 0;
 
 	/*
 	 * Calculate the index into tbl24 and range. Note: All depths larger
@@ -1097,7 +1103,7 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
 	 */
 		lpm->tbl24[tbl24_index].valid = 0;
 		__atomic_thread_fence(__ATOMIC_RELEASE);
-		tbl8_free(lpm, tbl8_group_start);
+		rc = tbl8_free(lpm, tbl8_group_start);
 	} else if (tbl8_recycle_index > -1) {
 		/* Update tbl24 entry. */
 		struct rte_lpm_tbl_entry new_tbl24_entry = {
@@ -1113,10 +1119,10 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
 		__atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
 				__ATOMIC_RELAXED);
 		__atomic_thread_fence(__ATOMIC_RELEASE);
-		tbl8_free(lpm, tbl8_group_start);
+		rc = tbl8_free(lpm, tbl8_group_start);
 	}
 #undef group_idx
-	return 0;
+	return (int32_t)rc;
 }
 
 /*
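
For context, the sketch below shows the caller-visible effect of this
change: with RCU reclamation enabled in defer-queue mode, a failed
rte_rcu_qsbr_dq_enqueue() inside tbl8_free() now propagates out of
rte_lpm_delete() instead of being silently dropped. This is only an
illustrative sketch against the rte_lpm_rcu_config API introduced by
commit 8a9f8564e9f9 (still experimental at this point); the helper
names lpm_rcu_setup()/lpm_rule_delete() are hypothetical, and the
assumption that a dq_size of 0 selects the default size (large enough
for all tbl8 groups, per the commit message) should be checked against
rte_lpm.h.

#include <rte_lpm.h>
#include <rte_log.h>
#include <rte_rcu_qsbr.h>

/*
 * One-time setup after rte_lpm_create(): attach a QSBR variable and
 * reclaim freed tbl8 groups through the defer queue.
 */
static int
lpm_rcu_setup(struct rte_lpm *lpm, struct rte_rcu_qsbr *qsv)
{
	struct rte_lpm_rcu_config rcu_cfg = {
		.v = qsv,			/* QSBR variable shared with readers */
		.mode = RTE_LPM_QSBR_MODE_DQ,	/* reclaim via defer queue */
		.dq_size = 0,			/* assumed: 0 selects the default size */
	};

	return rte_lpm_rcu_qsbr_add(lpm, &rcu_cfg);
}

/*
 * Writer path: with this patch, an enqueue failure in tbl8_free() is
 * no longer lost; it surfaces here as a non-zero return.
 */
static int
lpm_rule_delete(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
{
	int rc = rte_lpm_delete(lpm, ip, depth);

	if (rc != 0)
		/* e.g. defer queue full: a system issue for the caller */
		RTE_LOG(ERR, LPM, "LPM delete failed: %d\n", rc);

	return rc;
}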