From patchwork Thu Jul 16 15:49:19 2020
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Ruifeng Wang
X-Patchwork-Id: 74254
X-Patchwork-Delegate: david.marchand@redhat.com
From: Ruifeng Wang
To: Bruce Richardson, Vladimir Medvedkin
Cc: dev@dpdk.org, nd@arm.com, honnappa.nagarahalli@arm.com,
 phil.yang@arm.com, Ruifeng Wang
Date: Thu, 16 Jul 2020 23:49:19 +0800
Message-Id: <20200716154920.167185-1-ruifeng.wang@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200716051903.94195-1-ruifeng.wang@arm.com>
References: <20200716051903.94195-1-ruifeng.wang@arm.com>
Subject: [dpdk-dev] [PATCH v2] lpm: fix unchecked return value

Coverity complains about the unchecked return value of
rte_rcu_qsbr_dq_enqueue. By default, the defer queue size is big enough
to hold all tbl8 groups. When enqueue fails, return an error to the user
to indicate a system issue.

Coverity issue: 360832
Fixes: 8a9f8564e9f9 ("lpm: implement RCU rule reclamation")

Signed-off-by: Ruifeng Wang
Acked-by: Vladimir Medvedkin
Acked-by: Bruce Richardson
---
v2:
Converted return value to conform to LPM API convention. (Vladimir)

 lib/librte_lpm/rte_lpm.c | 19 +++++++++++++------
 1 file changed, 13 insertions(+), 6 deletions(-)

diff --git a/lib/librte_lpm/rte_lpm.c b/lib/librte_lpm/rte_lpm.c
index 2db9e16a2..757436f49 100644
--- a/lib/librte_lpm/rte_lpm.c
+++ b/lib/librte_lpm/rte_lpm.c
@@ -532,11 +532,12 @@ tbl8_alloc(struct rte_lpm *lpm)
 	return group_idx;
 }
 
-static void
+static int32_t
 tbl8_free(struct rte_lpm *lpm, uint32_t tbl8_group_start)
 {
 	struct rte_lpm_tbl_entry zero_tbl8_entry = {0};
 	struct __rte_lpm *internal_lpm;
+	int status;
 
 	internal_lpm = container_of(lpm, struct __rte_lpm, lpm);
 	if (internal_lpm->v == NULL) {
@@ -552,9 +553,15 @@ tbl8_free(struct rte_lpm *lpm, uint32_t tbl8_group_start)
 				__ATOMIC_RELAXED);
 	} else if (internal_lpm->rcu_mode == RTE_LPM_QSBR_MODE_DQ) {
 		/* Push into QSBR defer queue. */
-		rte_rcu_qsbr_dq_enqueue(internal_lpm->dq,
+		status = rte_rcu_qsbr_dq_enqueue(internal_lpm->dq,
 				(void *)&tbl8_group_start);
+		if (status == 1) {
+			RTE_LOG(ERR, LPM, "Failed to push QSBR FIFO\n");
+			return -rte_errno;
+		}
 	}
+
+	return 0;
 }
 
 static __rte_noinline int32_t
@@ -1040,7 +1047,7 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
 #define group_idx next_hop
 	uint32_t tbl24_index, tbl8_group_index, tbl8_group_start,
 			tbl8_index, tbl8_range, i;
-	int32_t tbl8_recycle_index;
+	int32_t tbl8_recycle_index, status = 0;
 
 	/*
 	 * Calculate the index into tbl24 and range. Note: All depths larger
@@ -1097,7 +1104,7 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
 		 */
 		lpm->tbl24[tbl24_index].valid = 0;
 		__atomic_thread_fence(__ATOMIC_RELEASE);
-		tbl8_free(lpm, tbl8_group_start);
+		status = tbl8_free(lpm, tbl8_group_start);
 	} else if (tbl8_recycle_index > -1) {
 		/* Update tbl24 entry. */
 		struct rte_lpm_tbl_entry new_tbl24_entry = {
@@ -1113,10 +1120,10 @@ delete_depth_big(struct rte_lpm *lpm, uint32_t ip_masked,
 		__atomic_store(&lpm->tbl24[tbl24_index], &new_tbl24_entry,
 				__ATOMIC_RELAXED);
 		__atomic_thread_fence(__ATOMIC_RELEASE);
-		tbl8_free(lpm, tbl8_group_start);
+		status = tbl8_free(lpm, tbl8_group_start);
 	}
 #undef group_idx
-	return 0;
+	return status;
 }
 
 /*
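
As a usage illustration only (not part of this patch): after this change,
rte_lpm_delete() can propagate a negative error code when the tbl8 group
cannot be pushed to the RCU defer queue, so callers should check the
return value. The helper name remove_route() and the log message below
are hypothetical.

	#include <rte_lpm.h>
	#include <rte_log.h>

	/* Sketch of caller-side handling; ip and depth come from the
	 * application's own route table. */
	static int
	remove_route(struct rte_lpm *lpm, uint32_t ip, uint8_t depth)
	{
		int ret = rte_lpm_delete(lpm, ip, depth);

		if (ret < 0) {
			/* e.g. the defer queue was full and the tbl8 group
			 * could not be reclaimed */
			RTE_LOG(ERR, LPM, "rte_lpm_delete() failed: %d\n", ret);
			return ret;
		}

		return 0;
	}

Checking for a negative value rather than a specific constant keeps the
caller aligned with the LPM API convention mentioned in the v2 changelog,
where failures are reported as negative error codes.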