From patchwork Tue Nov 25 22:17:15 2014
X-Patchwork-Submitter: Chao Zhu <chaozhu@linux.vnet.ibm.com>
X-Patchwork-Id: 1539
From: Chao Zhu <chaozhu@linux.vnet.ibm.com>
To: dev@dpdk.org
Date: Tue, 25 Nov 2014 17:17:15 -0500
Message-Id: <1416953837-15894-13-git-send-email-chaozhu@linux.vnet.ibm.com>
In-Reply-To: <1416953837-15894-1-git-send-email-chaozhu@linux.vnet.ibm.com>
References: <1416953837-15894-1-git-send-email-chaozhu@linux.vnet.ibm.com>
Subject: [dpdk-dev] [PATCH v5 12/14] Add eal memory support for IBM Power Architecture
List-Id: patches and discussions about DPDK

The mmap of hugepage files on IBM Power starts from high address to low
address, which is different from x86. This patch modifies the memory
segment detection code to get the correct memory segment layout on Power
architecture. It also adds a common RTE_ARCH_64 definition for 64-bit
systems.
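For context, the heart of the change is the contiguity walk over the
hugepage table. The minimal, self-contained sketch below (not part of the
patch; count_contig_pages, the 16MB page size, and the addresses are all
invented for illustration) mirrors the descending-order check the patch
adds for PPC64 next to the existing ascending x86 check:

#include <stdint.h>
#include <stdio.h>

/* Count how many pages starting at index i form one physically
 * contiguous block. On PPC64 the table is sorted by descending
 * physical address, so each following page must sit exactly
 * hugepage_sz *below* its predecessor; on x86 it sits above. */
static unsigned
count_contig_pages(const uint64_t *physaddr, unsigned n, unsigned i,
		uint64_t hugepage_sz, int descending)
{
	unsigned j;

	for (j = i + 1; j < n; j++) {
		if (descending) {
			if (physaddr[j] != physaddr[j-1] - hugepage_sz)
				break;
		} else {
			if (physaddr[j] != physaddr[j-1] + hugepage_sz)
				break;
		}
	}
	return j - i;
}

int
main(void)
{
	/* invented 16MB hugepages, sorted descending as on PPC64 */
	const uint64_t sz = 16 * 1024 * 1024;
	const uint64_t pa[] = { 5 * sz, 4 * sz, 3 * sz, 1 * sz };

	/* pages 0..2 are contiguous; page 3 leaves a hole */
	printf("%u\n", count_contig_pages(pa, 4, 0, sz, 1)); /* prints 3 */
	return 0;
}

The same "look at the neighbour one page below" rule is what the patch
threads through map_all_hugepages(), remap_all_hugepages(), and the
memseg detection in rte_eal_hugepage_init().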
Signed-off-by: Chao Zhu <chaozhu@linux.vnet.ibm.com>
---
 config/defconfig_ppc_64-power8-linuxapp-gcc   |  1 +
 config/defconfig_x86_64-native-linuxapp-clang |  1 +
 config/defconfig_x86_64-native-linuxapp-gcc   |  1 +
 config/defconfig_x86_64-native-linuxapp-icc   |  1 +
 lib/librte_eal/linuxapp/eal/eal_memory.c      | 91 ++++++++++++++++++-------
 5 files changed, 71 insertions(+), 24 deletions(-)

diff --git a/config/defconfig_ppc_64-power8-linuxapp-gcc b/config/defconfig_ppc_64-power8-linuxapp-gcc
index 4060eca..89d623e 100644
--- a/config/defconfig_ppc_64-power8-linuxapp-gcc
+++ b/config/defconfig_ppc_64-power8-linuxapp-gcc
@@ -35,6 +35,7 @@ CONFIG_RTE_MACHINE="power8"
 CONFIG_RTE_ARCH="ppc_64"
 CONFIG_RTE_ARCH_PPC_64=y
 CONFIG_RTE_ARCH_BIG_ENDIAN=y
+CONFIG_RTE_ARCH_64=y
 
 CONFIG_RTE_TOOLCHAIN="gcc"
 CONFIG_RTE_TOOLCHAIN_GCC=y
diff --git a/config/defconfig_x86_64-native-linuxapp-clang b/config/defconfig_x86_64-native-linuxapp-clang
index bbda080..5f3074e 100644
--- a/config/defconfig_x86_64-native-linuxapp-clang
+++ b/config/defconfig_x86_64-native-linuxapp-clang
@@ -36,6 +36,7 @@ CONFIG_RTE_MACHINE="native"
 
 CONFIG_RTE_ARCH="x86_64"
 CONFIG_RTE_ARCH_X86_64=y
+CONFIG_RTE_ARCH_64=y
 
 CONFIG_RTE_TOOLCHAIN="clang"
 CONFIG_RTE_TOOLCHAIN_CLANG=y
diff --git a/config/defconfig_x86_64-native-linuxapp-gcc b/config/defconfig_x86_64-native-linuxapp-gcc
index 3de818a..60baf5b 100644
--- a/config/defconfig_x86_64-native-linuxapp-gcc
+++ b/config/defconfig_x86_64-native-linuxapp-gcc
@@ -36,6 +36,7 @@ CONFIG_RTE_MACHINE="native"
 
 CONFIG_RTE_ARCH="x86_64"
 CONFIG_RTE_ARCH_X86_64=y
+CONFIG_RTE_ARCH_64=y
 
 CONFIG_RTE_TOOLCHAIN="gcc"
 CONFIG_RTE_TOOLCHAIN_GCC=y
diff --git a/config/defconfig_x86_64-native-linuxapp-icc b/config/defconfig_x86_64-native-linuxapp-icc
index 795333b..71d1e28 100644
--- a/config/defconfig_x86_64-native-linuxapp-icc
+++ b/config/defconfig_x86_64-native-linuxapp-icc
@@ -36,6 +36,7 @@ CONFIG_RTE_MACHINE="native"
 
 CONFIG_RTE_ARCH="x86_64"
 CONFIG_RTE_ARCH_X86_64=y
+CONFIG_RTE_ARCH_64=y
 
 CONFIG_RTE_TOOLCHAIN="icc"
 CONFIG_RTE_TOOLCHAIN_ICC=y
diff --git a/lib/librte_eal/linuxapp/eal/eal_memory.c b/lib/librte_eal/linuxapp/eal/eal_memory.c
index f2454f4..e6cb919 100644
--- a/lib/librte_eal/linuxapp/eal/eal_memory.c
+++ b/lib/librte_eal/linuxapp/eal/eal_memory.c
@@ -316,11 +316,12 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl,
 #endif
 			hugepg_tbl[i].filepath[sizeof(hugepg_tbl[i].filepath) - 1] = '\0';
 		}
-#ifndef RTE_ARCH_X86_64
-		/* for 32-bit systems, don't remap 1G pages, just reuse original
-		 * map address as final map address.
+#ifndef RTE_ARCH_64
+		/* for 32-bit systems, don't remap 1G and 16G pages, just reuse
+		 * original map address as final map address.
 		 */
-		else if (hugepage_sz == RTE_PGSIZE_1G){
+		else if ((hugepage_sz == RTE_PGSIZE_1G)
+			|| (hugepage_sz == RTE_PGSIZE_16G)) {
 			hugepg_tbl[i].final_va = hugepg_tbl[i].orig_va;
 			hugepg_tbl[i].orig_va = NULL;
 			continue;
@@ -335,9 +336,17 @@ map_all_hugepages(struct hugepage_file *hugepg_tbl,
 			 * physical block: count the number of
 			 * contiguous physical pages. */
 			for (j = i+1; j < hpi->num_pages[0] ; j++) {
+#ifdef RTE_ARCH_PPC_64
+				/* The physical addresses are sorted in
+				 * descending order on PPC64 */
+				if (hugepg_tbl[j].physaddr !=
+				    hugepg_tbl[j-1].physaddr - hugepage_sz)
+					break;
+#else
 				if (hugepg_tbl[j].physaddr !=
 				    hugepg_tbl[j-1].physaddr + hugepage_sz)
 					break;
+#endif
 			}
 			num_pages = j - i;
 			vma_len = num_pages * hugepage_sz;
@@ -412,11 +421,12 @@ remap_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi)
 
 	while (i < hpi->num_pages[0]) {
 
-#ifndef RTE_ARCH_X86_64
-	/* for 32-bit systems, don't remap 1G pages, just reuse original
-	 * map address as final map address.
+#ifndef RTE_ARCH_64
+	/* for 32-bit systems, don't remap 1G pages and 16G pages,
+	 * just reuse original map address as final map address.
 	 */
-		if (hugepage_sz == RTE_PGSIZE_1G){
+		if ((hugepage_sz == RTE_PGSIZE_1G)
+			|| (hugepage_sz == RTE_PGSIZE_16G)) {
 			hugepg_tbl[i].final_va = hugepg_tbl[i].orig_va;
 			hugepg_tbl[i].orig_va = NULL;
 			i++;
@@ -428,8 +438,17 @@ remap_all_hugepages(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi)
 		 * physical block: count the number of
 		 * contiguous physical pages. */
 		for (j = i+1; j < hpi->num_pages[0] ; j++) {
-			if (hugepg_tbl[j].physaddr != hugepg_tbl[j-1].physaddr + hugepage_sz)
+#ifdef RTE_ARCH_PPC_64
+			/* The physical addresses are sorted in descending
+			 * order on PPC64 */
+			if (hugepg_tbl[j].physaddr !=
+				hugepg_tbl[j-1].physaddr - hugepage_sz)
 				break;
+#else
+			if (hugepg_tbl[j].physaddr !=
+				hugepg_tbl[j-1].physaddr + hugepage_sz)
+				break;
+#endif
 		}
 		num_pages = j - i;
 		vma_len = num_pages * hugepage_sz;
@@ -652,21 +671,21 @@ error:
 }
 
 /*
- * Sort the hugepg_tbl by physical address (lower addresses first). We
- * use a slow algorithm, but we won't have millions of pages, and this
- * is only done at init time.
+ * Sort the hugepg_tbl by physical address (lower addresses first on x86,
+ * higher address first on powerpc). We use a slow algorithm, but we won't
+ * have millions of pages, and this is only done at init time.
 */
 static int
 sort_by_physaddr(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi)
 {
 	unsigned i, j;
-	int smallest_idx;
-	uint64_t smallest_addr;
+	int compare_idx;
+	uint64_t compare_addr;
 	struct hugepage_file tmp;
 
 	for (i = 0; i < hpi->num_pages[0]; i++) {
-		smallest_addr = 0;
-		smallest_idx = -1;
+		compare_addr = 0;
+		compare_idx = -1;
 
 		/*
 		 * browse all entries starting at 'i', and find the
@@ -674,23 +693,28 @@ sort_by_physaddr(struct hugepage_file *hugepg_tbl, struct hugepage_info *hpi)
 		 */
 		for (j=i; j< hpi->num_pages[0]; j++) {
-			if (smallest_addr == 0 ||
-			    hugepg_tbl[j].physaddr < smallest_addr) {
-				smallest_addr = hugepg_tbl[j].physaddr;
-				smallest_idx = j;
+			if (compare_addr == 0 ||
+#ifdef RTE_ARCH_PPC_64
+				hugepg_tbl[j].physaddr > compare_addr) {
+#else
+				hugepg_tbl[j].physaddr < compare_addr) {
+#endif
+				compare_addr = hugepg_tbl[j].physaddr;
+				compare_idx = j;
 			}
 		}
 
 		/* should not happen */
-		if (smallest_idx == -1) {
+		if (compare_idx == -1) {
 			RTE_LOG(ERR, EAL, "%s(): error in physaddr sorting\n",
 					__func__);
 			return -1;
 		}
 
 		/* swap the 2 entries in the table */
-		memcpy(&tmp, &hugepg_tbl[smallest_idx], sizeof(struct hugepage_file));
-		memcpy(&hugepg_tbl[smallest_idx], &hugepg_tbl[i],
-				sizeof(struct hugepage_file));
+		memcpy(&tmp, &hugepg_tbl[compare_idx],
+			sizeof(struct hugepage_file));
+		memcpy(&hugepg_tbl[compare_idx], &hugepg_tbl[i],
+			sizeof(struct hugepage_file));
 		memcpy(&hugepg_tbl[i], &tmp, sizeof(struct hugepage_file));
 	}
 	return 0;
@@ -1260,12 +1284,25 @@ rte_eal_hugepage_init(void)
 			new_memseg = 1;
 		else if (hugepage[i].size != hugepage[i-1].size)
 			new_memseg = 1;
+
+#ifdef RTE_ARCH_PPC_64
+		/* On PPC64 architecture, the mmap always start from higher
+		 * virtual address to lower address. Here, both the physical
+		 * address and virtual address are in descending order */
+		else if ((hugepage[i-1].physaddr - hugepage[i].physaddr) !=
+		    hugepage[i].size)
+			new_memseg = 1;
+		else if (((unsigned long)hugepage[i-1].final_va -
+		    (unsigned long)hugepage[i].final_va) != hugepage[i].size)
+			new_memseg = 1;
+#else
 		else if ((hugepage[i].physaddr - hugepage[i-1].physaddr) !=
 		    hugepage[i].size)
 			new_memseg = 1;
 		else if (((unsigned long)hugepage[i].final_va -
 		    (unsigned long)hugepage[i-1].final_va) != hugepage[i].size)
 			new_memseg = 1;
+#endif
 
 		if (new_memseg) {
 			j += 1;
@@ -1284,6 +1321,12 @@ rte_eal_hugepage_init(void)
 		}
 		/* continuation of previous memseg */
 		else {
+#ifdef RTE_ARCH_PPC_64
+		/* Use the phy and virt address of the last page as segment
+		 * address for IBM Power architecture */
+			mcfg->memseg[j].phys_addr = hugepage[i].physaddr;
+			mcfg->memseg[j].addr = hugepage[i].final_va;
+#endif
 			mcfg->memseg[j].len += mcfg->memseg[j].hugepage_sz;
 		}
 		hugepage[i].memseg_id = j;
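
A note on the last hunk, which is the least obvious part of the patch:
because pages arrive in descending virtual/physical order on PPC64, every
page merged into an existing memseg has a *lower* base address than the
segment accumulated so far, so the segment's phys_addr/addr must be moved
down to the newest page while len grows. A small self-contained sketch of
that merge rule (toy structs and invented addresses, not the real EAL
types) is:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* toy stand-ins for struct hugepage_file / struct rte_memseg */
struct page { uint64_t pa, va, size; };
struct seg  { uint64_t pa, va, len; };

int
main(void)
{
	/* three invented 16MB pages in descending PA/VA order (PPC64) */
	const uint64_t sz = 16 * 1024 * 1024;
	const struct page pg[] = {
		{ 3 * sz, 0x100000000ULL + 2 * sz, sz },
		{ 2 * sz, 0x100000000ULL + 1 * sz, sz },
		{ 1 * sz, 0x100000000ULL,          sz },
	};
	struct seg s = { pg[0].pa, pg[0].va, pg[0].size };
	unsigned i;

	for (i = 1; i < 3; i++) {
		/* descending-order continuation test, as in the patch */
		if (pg[i-1].pa - pg[i].pa != pg[i].size)
			break;
		/* the newest page has the lowest address: it becomes the
		 * segment base, and the length grows by one page */
		s.pa = pg[i].pa;
		s.va = pg[i].va;
		s.len += pg[i].size;
	}
	/* prints one 48MB segment based at the lowest page */
	printf("pa=%#" PRIx64 " len=%" PRIu64 "MB\n", s.pa, s.len >> 20);
	return 0;
}

On x86 the equivalent merge never touches the segment base, which is why
the pre-existing branch only extends len.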