From patchwork Fri Sep 10 12:30:08 2021
X-Patchwork-Submitter: "Burakov, Anatoly"
X-Patchwork-Id: 98621
X-Patchwork-Delegate: david.marchand@redhat.com
From: Anatoly Burakov
To: dev@dpdk.org, Ray Kinsella
Date: Fri, 10 Sep 2021 12:30:08 +0000
Message-Id: <59c3f27466100834f4d8fc2d43843fc581458903.1631277001.git.anatoly.burakov@intel.com>
In-Reply-To: <029012e59f555be16bed829229e8b48016157371.1631277001.git.anatoly.burakov@intel.com>
References: <029012e59f555be16bed829229e8b48016157371.1631277001.git.anatoly.burakov@intel.com>
Subject: [dpdk-dev] [PATCH v1 6/7] mem: promote DMA mask APIs to stable
List-Id: DPDK patches and discussions
As per ABI policy, move the formerly experimental APIs to the stable
section.

Signed-off-by: Anatoly Burakov
Acked-by: Ray Kinsella
---
 lib/eal/include/rte_memory.h | 12 ------------
 lib/eal/version.map          |  6 +++---
 2 files changed, 3 insertions(+), 15 deletions(-)

diff --git a/lib/eal/include/rte_memory.h b/lib/eal/include/rte_memory.h
index c68b9d5e62..6d018629ae 100644
--- a/lib/eal/include/rte_memory.h
+++ b/lib/eal/include/rte_memory.h
@@ -553,22 +553,15 @@ unsigned rte_memory_get_nchannel(void);
 unsigned rte_memory_get_nrank(void);
 
 /**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
  * Check if all currently allocated memory segments are compliant with
  * supplied DMA address width.
  *
  * @param maskbits
  *   Address width to check against.
  */
-__rte_experimental
 int rte_mem_check_dma_mask(uint8_t maskbits);
 
 /**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
  * Check if all currently allocated memory segments are compliant with
  * supplied DMA address width. This function will use
  * rte_memseg_walk_thread_unsafe instead of rte_memseg_walk implying
@@ -581,18 +574,13 @@ int rte_mem_check_dma_mask(uint8_t maskbits);
  * @param maskbits
  *   Address width to check against.
  */
-__rte_experimental
 int rte_mem_check_dma_mask_thread_unsafe(uint8_t maskbits);
 
 /**
- * @warning
- * @b EXPERIMENTAL: this API may change without prior notice
- *
  * Set dma mask to use once memory initialization is done. Previous functions
  * rte_mem_check_dma_mask and rte_mem_check_dma_mask_thread_unsafe can not be
  * used safely until memory has been initialized.
  */
-__rte_experimental
 void rte_mem_set_dma_mask(uint8_t maskbits);
 
 /**
diff --git a/lib/eal/version.map b/lib/eal/version.map
index 420779e1aa..ebafb05e90 100644
--- a/lib/eal/version.map
+++ b/lib/eal/version.map
@@ -174,10 +174,13 @@ DPDK_22 {
 	rte_mcfg_tailq_write_unlock;
 	rte_mem_alloc_validator_register;
 	rte_mem_alloc_validator_unregister;
+	rte_mem_check_dma_mask;
+	rte_mem_check_dma_mask_thread_unsafe;
 	rte_mem_event_callback_register;
 	rte_mem_event_callback_unregister;
 	rte_mem_iova2virt;
 	rte_mem_lock_page;
+	rte_mem_set_dma_mask;
 	rte_mem_virt2iova;
 	rte_mem_virt2memseg;
 	rte_mem_virt2memseg_list;
@@ -293,7 +296,6 @@ EXPERIMENTAL {
 	rte_dev_event_monitor_start; # WINDOWS_NO_EXPORT
 	rte_dev_event_monitor_stop; # WINDOWS_NO_EXPORT
 	rte_log_register_type_and_pick_level;
-	rte_mem_check_dma_mask;
 
 	# added in 18.08
 	rte_class_find;
@@ -308,8 +310,6 @@ EXPERIMENTAL {
 	rte_dev_event_callback_process;
 	rte_dev_hotplug_handle_disable; # WINDOWS_NO_EXPORT
 	rte_dev_hotplug_handle_enable; # WINDOWS_NO_EXPORT
-	rte_mem_check_dma_mask_thread_unsafe;
-	rte_mem_set_dma_mask;
 
 	# added in 19.05
 	rte_dev_dma_map;