From patchwork Thu Mar 18 07:18:37 2021
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Feifei Wang
X-Patchwork-Id: 89453
X-Patchwork-Delegate: rasland@nvidia.com
From: Feifei Wang
To: Matan Azrad, Shahaf Shuler, Yongseok Koh
Cc: dev@dpdk.org, nd@arm.com, Feifei Wang, stable@dpdk.org, Ruifeng Wang
Date: Thu, 18 Mar 2021 15:18:37 +0800
Message-Id: <20210318071840.359957-2-feifei.wang2@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210318071840.359957-1-feifei.wang2@arm.com>
References: <20210318071840.359957-1-feifei.wang2@arm.com>
Subject: [dpdk-dev] [PATCH v1 1/4] net/mlx4: fix rebuild bug for Memory Region cache

'dev_gen' is a variable that tells other cores to flush their local
cache when the global cache is rebuilt. However, if 'dev_gen' is only
updated after the global cache has been rebuilt, other cores may still
load a wrong memory region lkey value from their old local cache.

Timeslot        main core                   worker core
   1        rebuild global cache
   2                                    load unchanged dev_gen
   3        update dev_gen
   4                                    look up old local cache

From the example above, we can see that even though the global cache
has been rebuilt, the worker core still sees the old 'dev_gen', so it
looks up its old local cache table and gets a wrong memory region lkey
value.

To fix this, the 'dev_gen' update should be moved before the global
cache rebuild, so that worker cores are told to flush their local cache
as soon as the rebuild starts. A write memory barrier (wmb) guarantees
this ordering.

Fixes: 9797bfcce1c9 ("net/mlx4: add new memory region support")
Cc: stable@dpdk.org

Suggested-by: Ruifeng Wang
Signed-off-by: Feifei Wang
Reviewed-by: Ruifeng Wang
---
 drivers/net/mlx4/mlx4_mr.c | 19 ++++++++-----------
 1 file changed, 8 insertions(+), 11 deletions(-)

diff --git a/drivers/net/mlx4/mlx4_mr.c b/drivers/net/mlx4/mlx4_mr.c
index 6b2f0cf18..cfd7d4a9c 100644
--- a/drivers/net/mlx4/mlx4_mr.c
+++ b/drivers/net/mlx4/mlx4_mr.c
@@ -946,20 +946,17 @@ mlx4_mr_mem_event_free_cb(struct rte_eth_dev *dev, const void *addr, size_t len)
 		rebuild = 1;
 	}
 	if (rebuild) {
-		mr_rebuild_dev_cache(dev);
-		/*
-		 * Flush local caches by propagating invalidation across cores.
-		 * rte_smp_wmb() is enough to synchronize this event. If one of
-		 * freed memsegs is seen by other core, that means the memseg
-		 * has been allocated by allocator, which will come after this
-		 * free call. Therefore, this store instruction (incrementing
-		 * generation below) will be guaranteed to be seen by other core
-		 * before the core sees the newly allocated memory.
-		 */
 		++priv->mr.dev_gen;
 		DEBUG("broadcasting local cache flush, gen=%d",
-		      priv->mr.dev_gen);
+			priv->mr.dev_gen);
+
+		/* Flush local caches by propagating invalidation across cores.
+		 * rte_smp_wmb is to keep the order that dev_gen updated before
+		 * rebuilding global cache. Therefore, other core can flush their
+		 * local cache on time.
+		 */
 		rte_smp_wmb();
+		mr_rebuild_dev_cache(dev);
 	}
 	rte_rwlock_write_unlock(&priv->mr.rwlock);
 #ifdef RTE_LIBRTE_MLX4_DEBUG
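
To make the ordering concrete, below is a minimal, self-contained C11 sketch
of the generation-counter pattern the commit message describes. It is not
mlx4 code: the names mr_cache, mr_local_cache, mr_writer_rebuild and
mr_reader_lookup are hypothetical stand-ins for priv->mr and the per-core
caches, and atomic_thread_fence(memory_order_release) plays the role that
rte_smp_wmb() plays in the patch.

#include <stdatomic.h>

/* Hypothetical stand-in for the shared state kept in priv->mr. */
struct mr_cache {
	atomic_uint dev_gen;	/* bumped whenever the global cache is rebuilt */
	/* ... global Memory Region table would live here ... */
};

/* Hypothetical stand-in for one worker core's local cache. */
struct mr_local_cache {
	unsigned int dev_gen;	/* generation the local entries were built for */
	/* ... per-core lkey lookup table would live here ... */
};

/* Writer side (main core): the ordering the patch enforces. */
static void
mr_writer_rebuild(struct mr_cache *g)
{
	/* 1. Bump the generation first, so workers know a rebuild is coming. */
	atomic_fetch_add_explicit(&g->dev_gen, 1, memory_order_relaxed);
	/* 2. Write barrier: make the new dev_gen visible to other cores
	 * before any effect of the rebuild (the rte_smp_wmb() in the patch).
	 */
	atomic_thread_fence(memory_order_release);
	/* 3. Only now rebuild the global cache (placeholder call). */
	/* mr_rebuild_global(g); */
}

/* Reader side (worker core): flush the local cache when dev_gen moved. */
static unsigned int
mr_reader_lookup(struct mr_cache *g, struct mr_local_cache *l)
{
	unsigned int gen = atomic_load_explicit(&g->dev_gen,
						memory_order_acquire);

	if (gen != l->dev_gen) {
		/* Generation changed: drop stale entries before looking up. */
		/* mr_local_flush(l); */
		l->dev_gen = gen;
	}
	/* ... look up the lkey locally, fall back to the global cache ... */
	return 0;
}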