From patchwork Fri Dec 14 11:50:58 2018
X-Patchwork-Submitter: Anatoly Burakov
X-Patchwork-Id: 48852
X-Patchwork-Delegate: thomas@monjalon.net
From: Anatoly Burakov
To: dev@dpdk.org
Cc: John McNamara, Marko Kovacevic, shahafs@mellanox.com, yskoh@mellanox.com,
 thomas@monjalon.net, shreyansh.jain@nxp.com
Date: Fri, 14 Dec 2018 11:50:58 +0000
Message-Id: <7dd880b42a204c1e90433bd0f102fdba67baace9.1544788118.git.anatoly.burakov@intel.com>
X-Mailer: git-send-email 1.7.0.7
Subject: [dpdk-dev] [PATCH v2 3/4] mem: allow registering external memory areas

The general use case of external memory is well covered by the existing
external memory API's. However, certain use cases require manual management
of externally allocated memory areas, so this memory should not be added to
the heap. It should, however, be added to DPDK's internal structures, so
that API's like ``rte_virt2memseg`` work on such external memory segments.

This commit adds such an API to DPDK. The new functions allow registering
and unregistering externally allocated memory areas, and documentation for
them is added as well.
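A minimal usage sketch of the new calls (illustrative only: ``buf``, the page
size, the page count and the helper names below are assumptions of the
example, not part of this patch):

#include <rte_errno.h>
#include <rte_memory.h>

/* Illustrative geometry - the real page size and page count come from
 * however the application allocated the area.
 */
#define EXT_PAGE_SZ ((size_t)2 * 1024 * 1024)
#define EXT_NPAGES  16
#define EXT_LEN     (EXT_NPAGES * EXT_PAGE_SZ)

/* buf must be aligned to EXT_PAGE_SZ and EXT_LEN must be a multiple of it;
 * iovas holds one IOVA per page and may be NULL, in which case all page
 * IOVA's are recorded as RTE_BAD_IOVA.
 */
static int
ext_area_register(void *buf, rte_iova_t iovas[EXT_NPAGES])
{
	if (rte_extmem_register(buf, EXT_LEN, iovas, EXT_NPAGES,
			EXT_PAGE_SZ) != 0)
		return -rte_errno; /* EINVAL, EEXIST or ENOSPC */
	return 0;
}

static int
ext_area_unregister(void *buf)
{
	/* any DMA mappings for the area must already be removed */
	return rte_extmem_unregister(buf, EXT_LEN);
}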
Signed-off-by: Anatoly Burakov
Acked-by: Yongseok Koh
---

Notes:
    v2:
    - Do more stringent alignment checks
    - Fix a bug where n_pages was used as-is without parameter checking

 .../prog_guide/env_abstraction_layer.rst    | 60 ++++++++++++---
 doc/guides/rel_notes/release_19_02.rst      |  6 ++
 lib/librte_eal/common/eal_common_memory.c   | 77 +++++++++++++++++++
 lib/librte_eal/common/include/rte_memory.h  | 63 +++++++++++++++
 lib/librte_eal/rte_eal_version.map          |  2 +
 5 files changed, 198 insertions(+), 10 deletions(-)

diff --git a/doc/guides/prog_guide/env_abstraction_layer.rst b/doc/guides/prog_guide/env_abstraction_layer.rst
index 8b5d050c7..d7799b626 100644
--- a/doc/guides/prog_guide/env_abstraction_layer.rst
+++ b/doc/guides/prog_guide/env_abstraction_layer.rst
@@ -212,17 +212,26 @@ Normally, these options do not need to be changed.
 Support for Externally Allocated Memory
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-It is possible to use externally allocated memory in DPDK, using a set of malloc
-heap API's. Support for externally allocated memory is implemented through
-overloading the socket ID - externally allocated heaps will have socket ID's
-that would be considered invalid under normal circumstances. Requesting an
-allocation to take place from a specified externally allocated memory is a
-matter of supplying the correct socket ID to DPDK allocator, either directly
-(e.g. through a call to ``rte_malloc``) or indirectly (through data
-structure-specific allocation API's such as ``rte_ring_create``).
+It is possible to use externally allocated memory in DPDK. There are two ways
+in which externally allocated memory can be used: the malloc heap API's, and
+manual memory management.
 
-Since there is no way DPDK can verify whether memory are is available or valid,
-this responsibility falls on the shoulders of the user. All multiprocess
++ Using heap API's for externally allocated memory
+
+Using a set of malloc heap API's is the recommended way to use externally
+allocated memory in DPDK. In this way, support for externally allocated memory
+is implemented through overloading the socket ID - externally allocated heaps
+will have socket ID's that would be considered invalid under normal
+circumstances. Requesting an allocation to take place from a specified
+externally allocated memory is a matter of supplying the correct socket ID to
+DPDK allocator, either directly (e.g. through a call to ``rte_malloc``) or
+indirectly (through data structure-specific allocation API's such as
+``rte_ring_create``). Using these API's also ensures that mapping of externally
+allocated memory for DMA is performed on any memory segment that is added to
+a DPDK malloc heap.
+
+Since there is no way DPDK can verify whether memory is available or valid, this
+responsibility falls on the shoulders of the user. All multiprocess
 synchronization is also user's responsibility, as well as ensuring that all
 calls to add/attach/detach/remove memory are done in the correct order. It is
 not required to attach to a memory area in all processes - only attach to memory
@@ -246,6 +255,37 @@ The expected workflow is as follows:
 
 For more information, please refer to ``rte_malloc`` API documentation,
 specifically the ``rte_malloc_heap_*`` family of function calls.
 
++ Using externally allocated memory without DPDK API's
+
+While using heap API's is the recommended method of using externally allocated
+memory in DPDK, there are certain use cases where the overhead of DPDK heap API
+is undesirable - for example, when manual memory management is performed on an
+externally allocated area. To support use cases where externally allocated
+memory will not be used as part of normal DPDK workflow, there is also another
+set of API's under the ``rte_extmem_*`` namespace.
+
+These API's are (as their name implies) intended to allow registering or
+unregistering externally allocated memory to/from DPDK's internal page table, to
+allow API's like ``rte_virt2memseg`` etc. to work with externally allocated
+memory. Memory added this way will not be available for any regular DPDK
+allocators; DPDK will leave this memory for the user application to manage.
+
+The expected workflow is as follows:
+
+* Get a pointer to memory area
+* Register memory within DPDK
+  - If IOVA table is not specified, IOVA addresses will be assumed to be
+    unavailable
+* Perform DMA mapping with ``rte_vfio_dma_map`` if needed
+* Use the memory area in your application
+* If memory area is no longer needed, it can be unregistered
+  - If the area was mapped for DMA, unmapping must be performed before
+    unregistering memory
+
+Since these externally allocated memory areas will not be managed by DPDK, it is
+up to the user application to decide how to use them and what to do with them
+once they're registered.
+
 Per-lcore and Shared Variables
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
diff --git a/doc/guides/rel_notes/release_19_02.rst b/doc/guides/rel_notes/release_19_02.rst
index e86ef9511..0b79918a9 100644
--- a/doc/guides/rel_notes/release_19_02.rst
+++ b/doc/guides/rel_notes/release_19_02.rst
@@ -54,6 +54,12 @@ New Features
      Also, make sure to start the actual text at the margin.
      =========================================================
 
+* **Added API to register external memory in DPDK.**
+
+  A new ``rte_extmem_register``/``rte_extmem_unregister`` API was added to allow
+  chunks of external memory to be registered with DPDK without adding them to
+  the malloc heap.
+
 * **Updated the enic driver.**
 
   * Added support for ``RTE_ETH_DEV_CLOSE_REMOVE`` flag.
diff --git a/lib/librte_eal/common/eal_common_memory.c b/lib/librte_eal/common/eal_common_memory.c
index d47ea4938..ea43c1362 100644
--- a/lib/librte_eal/common/eal_common_memory.c
+++ b/lib/librte_eal/common/eal_common_memory.c
@@ -24,6 +24,7 @@
 #include "eal_memalloc.h"
 #include "eal_private.h"
 #include "eal_internal_cfg.h"
+#include "malloc_heap.h"
 
 /*
  * Try to mmap *size bytes in /dev/zero. If it is successful, return the
@@ -775,6 +776,82 @@ rte_memseg_get_fd_offset(const struct rte_memseg *ms, size_t *offset)
 	return ret;
 }
 
+int __rte_experimental
+rte_extmem_register(void *va_addr, size_t len, rte_iova_t iova_addrs[],
+		unsigned int n_pages, size_t page_sz)
+{
+	struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+	unsigned int socket_id, n;
+	int ret = 0;
+
+	if (va_addr == NULL || page_sz == 0 || len == 0 ||
+			!rte_is_power_of_2(page_sz) ||
+			RTE_ALIGN(len, page_sz) != len ||
+			((len / page_sz) != n_pages && iova_addrs != NULL) ||
+			!rte_is_aligned(va_addr, page_sz)) {
+		rte_errno = EINVAL;
+		return -1;
+	}
+	rte_rwlock_write_lock(&mcfg->memory_hotplug_lock);
+
+	/* make sure the segment doesn't already exist */
+	if (malloc_heap_find_external_seg(va_addr, len) != NULL) {
+		rte_errno = EEXIST;
+		ret = -1;
+		goto unlock;
+	}
+
+	/* get next available socket ID */
+	socket_id = mcfg->next_socket_id;
+	if (socket_id > INT32_MAX) {
+		RTE_LOG(ERR, EAL, "Cannot assign new socket ID's\n");
+		rte_errno = ENOSPC;
+		ret = -1;
+		goto unlock;
+	}
+
+	/* we can create a new memseg */
+	n = len / page_sz;
+	if (malloc_heap_create_external_seg(va_addr, iova_addrs, n,
+			page_sz, "extmem", socket_id) == NULL) {
+		ret = -1;
+		goto unlock;
+	}
+
+	/* memseg list successfully created - increment next socket ID */
+	mcfg->next_socket_id++;
+unlock:
+	rte_rwlock_write_unlock(&mcfg->memory_hotplug_lock);
+	return ret;
+}
+
+int __rte_experimental
+rte_extmem_unregister(void *va_addr, size_t len)
+{
+	struct rte_mem_config *mcfg = rte_eal_get_configuration()->mem_config;
+	struct rte_memseg_list *msl;
+	int ret = 0;
+
+	if (va_addr == NULL || len == 0) {
+		rte_errno = EINVAL;
+		return -1;
+	}
+	rte_rwlock_write_lock(&mcfg->memory_hotplug_lock);
+
+	/* find our segment */
+	msl = malloc_heap_find_external_seg(va_addr, len);
+	if (msl == NULL) {
+		rte_errno = ENOENT;
+		ret = -1;
+		goto unlock;
+	}
+
+	ret = malloc_heap_destroy_external_seg(msl);
+unlock:
+	rte_rwlock_write_unlock(&mcfg->memory_hotplug_lock);
+	return ret;
+}
+
 /* init memory subsystem */
 int
 rte_eal_memory_init(void)
diff --git a/lib/librte_eal/common/include/rte_memory.h b/lib/librte_eal/common/include/rte_memory.h
index d970825df..ff23fc2c1 100644
--- a/lib/librte_eal/common/include/rte_memory.h
+++ b/lib/librte_eal/common/include/rte_memory.h
@@ -423,6 +423,69 @@ int __rte_experimental
 rte_memseg_get_fd_offset_thread_unsafe(const struct rte_memseg *ms,
 		size_t *offset);
 
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Register external memory chunk with DPDK.
+ *
+ * @note Using this API is mutually exclusive with ``rte_malloc`` family of
+ *   API's.
+ *
+ * @note This API will not perform any DMA mapping. It is expected that user
+ *   will do that themselves.
+ *
+ * @param va_addr
+ *   Start of virtual area to register. Must be aligned by ``page_sz``.
+ * @param len
+ *   Length of virtual area to register. Must be aligned by ``page_sz``.
+ * @param iova_addrs
+ *   Array of page IOVA addresses corresponding to each page in this memory
+ *   area. Can be NULL, in which case page IOVA addresses will be set to
+ *   RTE_BAD_IOVA.
+ * @param n_pages
+ *   Number of elements in the iova_addrs array. Ignored if ``iova_addrs``
+ *   is NULL.
+ * @param page_sz
+ *   Page size of the underlying memory
+ *
+ * @return
+ *   - 0 on success
+ *   - -1 in case of error, with rte_errno set to one of the following:
+ *     EINVAL - one of the parameters was invalid
+ *     EEXIST - memory chunk is already registered
+ *     ENOSPC - no more space in internal config to store a new memory chunk
+ */
+int __rte_experimental
+rte_extmem_register(void *va_addr, size_t len, rte_iova_t iova_addrs[],
+		unsigned int n_pages, size_t page_sz);
+
+/**
+ * @warning
+ * @b EXPERIMENTAL: this API may change without prior notice
+ *
+ * Unregister external memory chunk with DPDK.
+ *
+ * @note Using this API is mutually exclusive with ``rte_malloc`` family of
+ *   API's.
+ *
+ * @note This API will not perform any DMA unmapping. It is expected that user
+ *   will do that themselves.
+ *
+ * @param va_addr
+ *   Start of virtual area to unregister
+ * @param len
+ *   Length of virtual area to unregister
+ *
+ * @return
+ *   - 0 on success
+ *   - -1 in case of error, with rte_errno set to one of the following:
+ *     EINVAL - one of the parameters was invalid
+ *     ENOENT - memory chunk was not found
+ */
+int __rte_experimental
+rte_extmem_unregister(void *va_addr, size_t len);
+
 /**
  * Dump the physical memory layout to a file.
  *
diff --git a/lib/librte_eal/rte_eal_version.map b/lib/librte_eal/rte_eal_version.map
index 3fe78260d..593691a14 100644
--- a/lib/librte_eal/rte_eal_version.map
+++ b/lib/librte_eal/rte_eal_version.map
@@ -296,6 +296,8 @@ EXPERIMENTAL {
 	rte_devargs_remove;
 	rte_devargs_type_count;
 	rte_eal_cleanup;
+	rte_extmem_register;
+	rte_extmem_unregister;
 	rte_fbarray_attach;
 	rte_fbarray_destroy;
 	rte_fbarray_detach;
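For reference, an end-to-end sketch of the workflow described in the updated
documentation (illustrative only: the buffer geometry, the assumption of an
IOVA-contiguous area and the helper name are not part of this patch; the DMA
mapping step uses ``rte_vfio_dma_map``/``rte_vfio_dma_unmap`` as mentioned in
the documentation, assuming a VFIO-bound device):

#include <stdint.h>

#include <rte_errno.h>
#include <rte_memory.h>
#include <rte_vfio.h>

/* assumed geometry of the externally allocated area */
#define EXT_PAGE_SZ ((size_t)2 * 1024 * 1024)
#define EXT_NPAGES  16
#define EXT_LEN     (EXT_NPAGES * EXT_PAGE_SZ)

static int
ext_area_lifecycle(void *buf, rte_iova_t base_iova)
{
	rte_iova_t iovas[EXT_NPAGES];
	unsigned int i;

	/* build the per-page IOVA table; an IOVA-contiguous area is assumed */
	for (i = 0; i < EXT_NPAGES; i++)
		iovas[i] = base_iova + i * EXT_PAGE_SZ;

	/* 1. register the area in DPDK's internal page table */
	if (rte_extmem_register(buf, EXT_LEN, iovas, EXT_NPAGES,
			EXT_PAGE_SZ) != 0)
		return -rte_errno;

	/* 2. map the area for DMA if a VFIO-bound device will access it */
	if (rte_vfio_dma_map((uint64_t)(uintptr_t)buf, base_iova,
			EXT_LEN) != 0) {
		rte_extmem_unregister(buf, EXT_LEN);
		return -1;
	}

	/* 3. use the area; rte_virt2memseg() now resolves addresses in it */

	/* 4. unmap for DMA before unregistering */
	rte_vfio_dma_unmap((uint64_t)(uintptr_t)buf, base_iova, EXT_LEN);

	/* 5. unregister the area */
	return rte_extmem_unregister(buf, EXT_LEN);
}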