From patchwork Thu Mar 18 07:18:39 2021
X-Patchwork-Submitter: Feifei Wang
X-Patchwork-Id: 89455
X-Patchwork-Delegate: rasland@nvidia.com
From: Feifei Wang
To: Matan Azrad, Shahaf Shuler, Viacheslav Ovsiienko, Yongseok Koh
Cc: dev@dpdk.org, nd@arm.com, Feifei Wang, stable@dpdk.org, Ruifeng Wang
Date: Thu, 18 Mar 2021 15:18:39 +0800
Message-Id: <20210318071840.359957-4-feifei.wang2@arm.com>
In-Reply-To: <20210318071840.359957-1-feifei.wang2@arm.com>
References: <20210318071840.359957-1-feifei.wang2@arm.com>
Subject: [dpdk-dev] [PATCH v1 3/4] net/mlx5: fix rebuild bug for Memory Region cache

'dev_gen' is a variable used to inform other cores to flush their local
caches when the global
cache is rebuilt. However, if 'dev_gen' is updated after the global cache
is rebuilt, other cores may load a wrong memory region lkey value from
their old local caches:

	Timeslot	main core		worker core
	 1	rebuild global cache
	 2				load unchanged dev_gen
	 3	update dev_gen
	 4				look up old local cache

From the example above, we can see that although the global cache has been
rebuilt, dev_gen has not yet been updated, so the worker core may look up
the old cache table and receive a wrong memory region lkey value.

To fix this, the 'dev_gen' update should be moved before the global cache
rebuild, so that worker cores are informed to flush their local caches as
soon as the global cache rebuild starts. A write memory barrier (wmb)
ensures the ordering of this sequence.

Fixes: 974f1e7ef146 ("net/mlx5: add new memory region support")
Cc: stable@dpdk.org

Suggested-by: Ruifeng Wang
Signed-off-by: Feifei Wang
Reviewed-by: Ruifeng Wang
---
 drivers/net/mlx5/mlx5_mr.c | 37 +++++++++++++++++--------------------
 1 file changed, 17 insertions(+), 20 deletions(-)

diff --git a/drivers/net/mlx5/mlx5_mr.c b/drivers/net/mlx5/mlx5_mr.c
index da4e91fc2..7ce1d3e64 100644
--- a/drivers/net/mlx5/mlx5_mr.c
+++ b/drivers/net/mlx5/mlx5_mr.c
@@ -103,20 +103,18 @@ mlx5_mr_mem_event_free_cb(struct mlx5_dev_ctx_shared *sh,
 		rebuild = 1;
 	}
 	if (rebuild) {
-		mlx5_mr_rebuild_cache(&sh->share_cache);
+		++sh->share_cache.dev_gen;
+		DEBUG("broadcasting local cache flush, gen=%d",
+		      sh->share_cache.dev_gen);
+
 		/*
 		 * Flush local caches by propagating invalidation across cores.
-		 * rte_smp_wmb() is enough to synchronize this event. If one of
-		 * freed memsegs is seen by other core, that means the memseg
-		 * has been allocated by allocator, which will come after this
-		 * free call. Therefore, this store instruction (incrementing
-		 * generation below) will be guaranteed to be seen by other core
-		 * before the core sees the newly allocated memory.
+		 * rte_smp_wmb() keeps the order that dev_gen is updated before
+		 * the global cache is rebuilt. Therefore, other cores can
+		 * flush their local caches in time.
 		 */
-		++sh->share_cache.dev_gen;
-		DEBUG("broadcasting local cache flush, gen=%d",
-		      sh->share_cache.dev_gen);
 		rte_smp_wmb();
+		mlx5_mr_rebuild_cache(&sh->share_cache);
 	}
 	rte_rwlock_write_unlock(&sh->share_cache.rwlock);
 }
@@ -407,20 +405,19 @@ mlx5_dma_unmap(struct rte_pci_device *pdev, void *addr,
 	mlx5_mr_free(mr, sh->share_cache.dereg_mr_cb);
 	DEBUG("port %u remove MR(%p) from list", dev->data->port_id,
 	      (void *)mr);
-	mlx5_mr_rebuild_cache(&sh->share_cache);
+
+	++sh->share_cache.dev_gen;
+	DEBUG("broadcasting local cache flush, gen=%d",
+	      sh->share_cache.dev_gen);
+
 	/*
 	 * Flush local caches by propagating invalidation across cores.
-	 * rte_smp_wmb() is enough to synchronize this event. If one of
-	 * freed memsegs is seen by other core, that means the memseg
-	 * has been allocated by allocator, which will come after this
-	 * free call. Therefore, this store instruction (incrementing
-	 * generation below) will be guaranteed to be seen by other core
-	 * before the core sees the newly allocated memory.
+	 * rte_smp_wmb() keeps the order that dev_gen is updated before
+	 * the global cache is rebuilt. Therefore, other cores can flush
+	 * their local caches in time.
 	 */
-	++sh->share_cache.dev_gen;
-	DEBUG("broadcasting local cache flush, gen=%d",
-	      sh->share_cache.dev_gen);
 	rte_smp_wmb();
+	mlx5_mr_rebuild_cache(&sh->share_cache);
 	rte_rwlock_read_unlock(&sh->share_cache.rwlock);
 	return 0;
 }