From patchwork Mon Jun 28 15:06:14 2021
X-Patchwork-Submitter: Michael Baum
X-Patchwork-Id: 94897
X-Patchwork-Delegate: rasland@nvidia.com
From: Michael Baum
To:
Cc: Matan Azrad, Raslan Darawsheh, Viacheslav Ovsiienko
Date: Mon, 28 Jun 2021 18:06:14 +0300
Message-ID: <20210628150614.1769507-1-michaelba@nvidia.com>
X-Mailer: git-send-email 2.25.1
Subject: [dpdk-dev] [PATCH] common/mlx5: share memory free callback

All the mlx5 drivers that use MRs for the data path must unregister the
mapped memory when it is freed by the DPDK process.

Currently, only the net/eth driver unregisters MRs on the free event.

Move the callback handler from the net driver to the common code.

Cc: stable@dpdk.org

Signed-off-by: Michael Baum
Acked-by: Matan Azrad
---
 drivers/common/mlx5/mlx5_common_mr.c | 89 +++++++++++++++++++++++++++
 drivers/common/mlx5/mlx5_common_mr.h |  3 +
 drivers/common/mlx5/version.map      |  1 +
 drivers/net/mlx5/mlx5_mr.c           | 90 +---------------------------
 4 files changed, 95 insertions(+), 88 deletions(-)

diff --git a/drivers/common/mlx5/mlx5_common_mr.c b/drivers/common/mlx5/mlx5_common_mr.c
index afb5b3d0a7..98fe8698e2 100644
--- a/drivers/common/mlx5/mlx5_common_mr.c
+++ b/drivers/common/mlx5/mlx5_common_mr.c
@@ -1062,6 +1062,95 @@ mlx5_create_mr_ext(void *pd, uintptr_t addr, size_t len, int socket_id,
         return mr;
 }
 
+/**
+ * Callback for memory free event. Iterate freed memsegs and check whether it
+ * belongs to an existing MR. If found, clear the bit from bitmap of MR. As a
+ * result, the MR would be fragmented. If it becomes empty, the MR will be freed
+ * later by mlx5_mr_garbage_collect(). Even if this callback is called from a
+ * secondary process, the garbage collector will be called in primary process
+ * as the secondary process can't call mlx5_mr_create().
+ *
+ * The global cache must be rebuilt if there's any change and this event has to
+ * be propagated to dataplane threads to flush the local caches.
+ *
+ * @param share_cache
+ *   Pointer to a global shared MR cache.
+ * @param ibdev_name
+ *   Name of ibv device.
+ * @param addr
+ *   Address of freed memory.
+ * @param len
+ *   Size of freed memory.
+ */
+void
+mlx5_free_mr_by_addr(struct mlx5_mr_share_cache *share_cache,
+                     const char *ibdev_name, const void *addr, size_t len)
+{
+        const struct rte_memseg_list *msl;
+        struct mlx5_mr *mr;
+        int ms_n;
+        int i;
+        int rebuild = 0;
+
+        DRV_LOG(DEBUG, "device %s free callback: addr=%p, len=%zu",
+                ibdev_name, addr, len);
+        msl = rte_mem_virt2memseg_list(addr);
+        /* addr and len must be page-aligned. */
+        MLX5_ASSERT((uintptr_t)addr ==
+                    RTE_ALIGN((uintptr_t)addr, msl->page_sz));
+        MLX5_ASSERT(len == RTE_ALIGN(len, msl->page_sz));
+        ms_n = len / msl->page_sz;
+        rte_rwlock_write_lock(&share_cache->rwlock);
+        /* Clear bits of freed memsegs from MR. */
+        for (i = 0; i < ms_n; ++i) {
+                const struct rte_memseg *ms;
+                struct mr_cache_entry entry;
+                uintptr_t start;
+                int ms_idx;
+                uint32_t pos;
+
+                /* Find MR having this memseg. */
+                start = (uintptr_t)addr + i * msl->page_sz;
+                mr = mlx5_mr_lookup_list(share_cache, &entry, start);
+                if (mr == NULL)
+                        continue;
+                MLX5_ASSERT(mr->msl); /* Can't be external memory. */
+                ms = rte_mem_virt2memseg((void *)start, msl);
+                MLX5_ASSERT(ms != NULL);
+                MLX5_ASSERT(msl->page_sz == ms->hugepage_sz);
+                ms_idx = rte_fbarray_find_idx(&msl->memseg_arr, ms);
+                pos = ms_idx - mr->ms_base_idx;
+                MLX5_ASSERT(rte_bitmap_get(mr->ms_bmp, pos));
+                MLX5_ASSERT(pos < mr->ms_bmp_n);
+                DRV_LOG(DEBUG, "device %s MR(%p): clear bitmap[%u] for addr %p",
+                        ibdev_name, (void *)mr, pos, (void *)start);
+                rte_bitmap_clear(mr->ms_bmp, pos);
+                if (--mr->ms_n == 0) {
+                        LIST_REMOVE(mr, mr);
+                        LIST_INSERT_HEAD(&share_cache->mr_free_list, mr, mr);
+                        DRV_LOG(DEBUG, "device %s remove MR(%p) from list",
+                                ibdev_name, (void *)mr);
+                }
+                /*
+                 * MR is fragmented or will be freed. the global cache must be
+                 * rebuilt.
+                 */
+                rebuild = 1;
+        }
+        if (rebuild) {
+                mlx5_mr_rebuild_cache(share_cache);
+                /*
+                 * No explicit wmb is needed after updating dev_gen due to
+                 * store-release ordering in unlock that provides the
+                 * implicit barrier at the software visible level.
+                 */
+                ++share_cache->dev_gen;
+                DRV_LOG(DEBUG, "broadcasting local cache flush, gen=%d",
+                        share_cache->dev_gen);
+        }
+        rte_rwlock_write_unlock(&share_cache->rwlock);
+}
+
 /**
  * Dump all the created MRs and the global cache entries.
  *
diff --git a/drivers/common/mlx5/mlx5_common_mr.h b/drivers/common/mlx5/mlx5_common_mr.h
index 5cc3f097c2..6e465a05e9 100644
--- a/drivers/common/mlx5/mlx5_common_mr.h
+++ b/drivers/common/mlx5/mlx5_common_mr.h
@@ -144,6 +144,9 @@ void mlx5_mr_rebuild_cache(struct mlx5_mr_share_cache *share_cache);
 __rte_internal
 void mlx5_mr_flush_local_cache(struct mlx5_mr_ctrl *mr_ctrl);
 __rte_internal
+void mlx5_free_mr_by_addr(struct mlx5_mr_share_cache *share_cache,
+                          const char *ibdev_name, const void *addr, size_t len);
+__rte_internal
 int
 mlx5_mr_insert_cache(struct mlx5_mr_share_cache *share_cache,
                      struct mlx5_mr *mr);
diff --git a/drivers/common/mlx5/version.map b/drivers/common/mlx5/version.map
index db4f13f1f7..b8be73a77b 100644
--- a/drivers/common/mlx5/version.map
+++ b/drivers/common/mlx5/version.map
@@ -103,6 +103,7 @@ INTERNAL {
         mlx5_mr_insert_cache;
         mlx5_mr_lookup_cache;
         mlx5_mr_lookup_list;
+        mlx5_free_mr_by_addr;
         mlx5_mr_rebuild_cache;
         mlx5_mr_release_cache;
diff --git a/drivers/net/mlx5/mlx5_mr.c b/drivers/net/mlx5/mlx5_mr.c
index 0c5403e493..0b6cfc8cb9 100644
--- a/drivers/net/mlx5/mlx5_mr.c
+++ b/drivers/net/mlx5/mlx5_mr.c
@@ -31,93 +31,6 @@ struct mr_update_mp_data {
         int ret;
 };
 
-/**
- * Callback for memory free event. Iterate freed memsegs and check whether it
- * belongs to an existing MR. If found, clear the bit from bitmap of MR. As a
- * result, the MR would be fragmented. If it becomes empty, the MR will be freed
- * later by mlx5_mr_garbage_collect(). Even if this callback is called from a
- * secondary process, the garbage collector will be called in primary process
- * as the secondary process can't call mlx5_mr_create().
- *
- * The global cache must be rebuilt if there's any change and this event has to
- * be propagated to dataplane threads to flush the local caches.
- *
- * @param sh
- *   Pointer to the Ethernet device shared context.
- * @param addr
- *   Address of freed memory.
- * @param len
- *   Size of freed memory.
- */
-static void
-mlx5_mr_mem_event_free_cb(struct mlx5_dev_ctx_shared *sh,
-                          const void *addr, size_t len)
-{
-        const struct rte_memseg_list *msl;
-        struct mlx5_mr *mr;
-        int ms_n;
-        int i;
-        int rebuild = 0;
-
-        DRV_LOG(DEBUG, "device %s free callback: addr=%p, len=%zu",
-                sh->ibdev_name, addr, len);
-        msl = rte_mem_virt2memseg_list(addr);
-        /* addr and len must be page-aligned. */
-        MLX5_ASSERT((uintptr_t)addr ==
-                    RTE_ALIGN((uintptr_t)addr, msl->page_sz));
-        MLX5_ASSERT(len == RTE_ALIGN(len, msl->page_sz));
-        ms_n = len / msl->page_sz;
-        rte_rwlock_write_lock(&sh->share_cache.rwlock);
-        /* Clear bits of freed memsegs from MR. */
-        for (i = 0; i < ms_n; ++i) {
-                const struct rte_memseg *ms;
-                struct mr_cache_entry entry;
-                uintptr_t start;
-                int ms_idx;
-                uint32_t pos;
-
-                /* Find MR having this memseg. */
-                start = (uintptr_t)addr + i * msl->page_sz;
-                mr = mlx5_mr_lookup_list(&sh->share_cache, &entry, start);
-                if (mr == NULL)
-                        continue;
-                MLX5_ASSERT(mr->msl); /* Can't be external memory. */
-                ms = rte_mem_virt2memseg((void *)start, msl);
-                MLX5_ASSERT(ms != NULL);
-                MLX5_ASSERT(msl->page_sz == ms->hugepage_sz);
-                ms_idx = rte_fbarray_find_idx(&msl->memseg_arr, ms);
-                pos = ms_idx - mr->ms_base_idx;
-                MLX5_ASSERT(rte_bitmap_get(mr->ms_bmp, pos));
-                MLX5_ASSERT(pos < mr->ms_bmp_n);
-                DRV_LOG(DEBUG, "device %s MR(%p): clear bitmap[%u] for addr %p",
-                        sh->ibdev_name, (void *)mr, pos, (void *)start);
-                rte_bitmap_clear(mr->ms_bmp, pos);
-                if (--mr->ms_n == 0) {
-                        LIST_REMOVE(mr, mr);
-                        LIST_INSERT_HEAD(&sh->share_cache.mr_free_list, mr, mr);
-                        DRV_LOG(DEBUG, "device %s remove MR(%p) from list",
-                                sh->ibdev_name, (void *)mr);
-                }
-                /*
-                 * MR is fragmented or will be freed. the global cache must be
-                 * rebuilt.
-                 */
-                rebuild = 1;
-        }
-        if (rebuild) {
-                mlx5_mr_rebuild_cache(&sh->share_cache);
-                /*
-                 * No explicit wmb is needed after updating dev_gen due to
-                 * store-release ordering in unlock that provides the
-                 * implicit barrier at the software visible level.
-                 */
-                ++sh->share_cache.dev_gen;
-                DRV_LOG(DEBUG, "broadcasting local cache flush, gen=%d",
-                        sh->share_cache.dev_gen);
-        }
-        rte_rwlock_write_unlock(&sh->share_cache.rwlock);
-}
-
 /**
  * Callback for memory event. This can be called from both primary and secondary
  * process.
@@ -143,7 +56,8 @@ mlx5_mr_mem_event_cb(enum rte_mem_event event_type, const void *addr,
                 rte_rwlock_write_lock(&mlx5_shared_data->mem_event_rwlock);
                 /* Iterate all the existing mlx5 devices. */
                 LIST_FOREACH(sh, dev_list, mem_event_cb)
-                        mlx5_mr_mem_event_free_cb(sh, addr, len);
+                        mlx5_free_mr_by_addr(&sh->share_cache,
+                                             sh->ibdev_name, addr, len);
                 rte_rwlock_write_unlock(&mlx5_shared_data->mem_event_rwlock);
                 break;
         case RTE_MEM_EVENT_ALLOC:
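
For reviewers, a minimal usage sketch (not part of the patch): how another
mlx5 class driver could reuse the now-shared helper from its own rte_mem
event callback. "struct my_dev", "my_dev_list" and "my_mem_event_cb" are
hypothetical placeholders; the rte_mem event callback signature,
RTE_MEM_EVENT_FREE and mlx5_free_mr_by_addr() are the real interfaces used
by the patch above.

#include <sys/queue.h>

#include <rte_common.h>
#include <rte_memory.h>

#include <mlx5_common_mr.h>

/* Hypothetical per-device context of a driver sharing the MR cache. */
struct my_dev {
        LIST_ENTRY(my_dev) next;                /* Device list linkage. */
        struct mlx5_mr_share_cache share_cache; /* Global MR cache. */
        char ibdev_name[64];                    /* IB device name for logs. */
};

/* Hypothetical list of probed devices, filled at probe time. */
static LIST_HEAD(, my_dev) my_dev_list = LIST_HEAD_INITIALIZER(my_dev_list);

static void
my_mem_event_cb(enum rte_mem_event event_type, const void *addr,
                size_t len, void *arg __rte_unused)
{
        struct my_dev *dev;

        if (event_type != RTE_MEM_EVENT_FREE)
                return;
        /* Drop the freed range from every device's MR cache. */
        LIST_FOREACH(dev, &my_dev_list, next)
                mlx5_free_mr_by_addr(&dev->share_cache,
                                     dev->ibdev_name, addr, len);
}

The callback would be registered once, e.g. with
rte_mem_event_callback_register("my_mem_event_cb", my_mem_event_cb, NULL);
locking of the hypothetical device list is left out of this sketch.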