From patchwork Fri Sep 26 09:33:33 2014
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Chao Zhu
X-Patchwork-Id: 564
From: Chao Zhu
To: dev@dpdk.org
Date: Fri, 26 Sep 2014 05:33:33 -0400
Message-Id: <1411724018-7738-3-git-send-email-bjzhuc@cn.ibm.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1411724018-7738-1-git-send-email-bjzhuc@cn.ibm.com>
References: <1411724018-7738-1-git-send-email-bjzhuc@cn.ibm.com>
Subject: [dpdk-dev] [PATCH 2/7] Split byte order operations to architecture specific

This patch splits the byte order operations out of the common DPDK headers and pushes them into architecture-specific arch directories, so that support for other processor architectures can be added to DPDK easily.
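
For example, a port to an architecture without these x86 instructions could supply its own rte_byteorder_arch.h built on compiler builtins. A minimal, illustrative sketch (not part of this patch; it assumes a GCC/clang toolchain, and GCC >= 4.8 for __builtin_bswap16):

#ifndef _RTE_BYTEORDER_ARCH_H_
#define _RTE_BYTEORDER_ARCH_H_

#include <stdint.h>

/* Generic fallbacks: let the compiler emit the best byte-swap sequence. */
static inline uint16_t rte_arch_bswap16(uint16_t _x)
{
	return __builtin_bswap16(_x);
}

static inline uint32_t rte_arch_bswap32(uint32_t _x)
{
	return __builtin_bswap32(_x);
}

static inline uint64_t rte_arch_bswap64(uint64_t _x)
{
	return __builtin_bswap64(_x);
}

#endif /* _RTE_BYTEORDER_ARCH_H_ */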
Signed-off-by: Chao Zhu
---
 lib/librte_eal/common/Makefile                     |    2 +-
 .../common/include/i686/arch/rte_byteorder_arch.h  |   95 ++++++++++++++++++++
 lib/librte_eal/common/include/rte_byteorder.h      |   58 +------------
 .../include/x86_64/arch/rte_byteorder_arch.h       |   95 ++++++++++++++++++++
 4 files changed, 193 insertions(+), 57 deletions(-)
 create mode 100644 lib/librte_eal/common/include/i686/arch/rte_byteorder_arch.h
 create mode 100644 lib/librte_eal/common/include/x86_64/arch/rte_byteorder_arch.h

diff --git a/lib/librte_eal/common/Makefile b/lib/librte_eal/common/Makefile
index d730de5..d588c94 100644
--- a/lib/librte_eal/common/Makefile
+++ b/lib/librte_eal/common/Makefile
@@ -46,7 +46,7 @@ ifeq ($(CONFIG_RTE_INSECURE_FUNCTION_WARNING),y)
 INC += rte_warnings.h
 endif
 
-ARCH_INC := rte_atomic.h rte_atomic_arch.h
+ARCH_INC := rte_atomic.h rte_atomic_arch.h rte_byteorder_arch.h
 
 SYMLINK-$(CONFIG_RTE_LIBRTE_EAL)-include := $(addprefix include/,$(INC))
 SYMLINK-$(CONFIG_RTE_LIBRTE_EAL)-include/arch := \
diff --git a/lib/librte_eal/common/include/i686/arch/rte_byteorder_arch.h b/lib/librte_eal/common/include/i686/arch/rte_byteorder_arch.h
new file mode 100644
index 0000000..06c1afc
--- /dev/null
+++ b/lib/librte_eal/common/include/i686/arch/rte_byteorder_arch.h
@@ -0,0 +1,95 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_BYTEORDER_ARCH_H_
+#define _RTE_BYTEORDER_ARCH_H_
+
+#include <stdint.h>
+
+/*
+ * An architecture-optimized byte swap for a 16-bit value.
+ *
+ * Do not use this function directly. The preferred function is rte_bswap16().
+ */
+static inline uint16_t rte_arch_bswap16(uint16_t _x)
+{
+	register uint16_t x = _x;
+	asm volatile ("xchgb %b[x1],%h[x2]"
+		      : [x1] "=Q" (x)
+		      : [x2] "0" (x)
+		      );
+	return x;
+}
+
+/*
+ * An architecture-optimized byte swap for a 32-bit value.
+ *
+ * Do not use this function directly. The preferred function is rte_bswap32().
+ */
+static inline uint32_t rte_arch_bswap32(uint32_t _x)
+{
+	register uint32_t x = _x;
+	asm volatile ("bswap %[x]"
+		      : [x] "+r" (x)
+		      );
+	return x;
+}
+
+/*
+ * An architecture-optimized byte swap for a 64-bit value.
+ *
+ * Do not use this function directly. The preferred function is rte_bswap64().
+ */
+#ifdef RTE_ARCH_X86_64
+/* 64-bit mode */
+static inline uint64_t rte_arch_bswap64(uint64_t _x)
+{
+	register uint64_t x = _x;
+	asm volatile ("bswap %[x]"
+		      : [x] "+r" (x)
+		      );
+	return x;
+}
+#else /* ! RTE_ARCH_X86_64 */
+/* Compat./Leg. mode */
+static inline uint64_t rte_arch_bswap64(uint64_t x)
+{
+	uint64_t ret = 0;
+	ret |= ((uint64_t)rte_arch_bswap32(x & 0xffffffffUL) << 32);
+	ret |= ((uint64_t)rte_arch_bswap32((x >> 32) & 0xffffffffUL));
+	return ret;
+}
+#endif /* RTE_ARCH_X86_64 */
+
+#endif /* _RTE_BYTEORDER_ARCH_H_ */
+
diff --git a/lib/librte_eal/common/include/rte_byteorder.h b/lib/librte_eal/common/include/rte_byteorder.h
index 30fbd56..98e3764 100644
--- a/lib/librte_eal/common/include/rte_byteorder.h
+++ b/lib/librte_eal/common/include/rte_byteorder.h
@@ -34,6 +34,8 @@
 #ifndef _RTE_BYTEORDER_H_
 #define _RTE_BYTEORDER_H_
 
+#include "arch/rte_byteorder_arch.h"
+
 /**
  * @file
  *
@@ -96,62 +98,6 @@ rte_constant_bswap64(uint64_t x)
 		((x & 0xff00000000000000ULL) >> 56);
 }
 
-/*
- * An architecture-optimized byte swap for a 16-bit value.
- *
- * Do not use this function directly. The preferred function is rte_bswap16().
- */
-static inline uint16_t rte_arch_bswap16(uint16_t _x)
-{
-	register uint16_t x = _x;
-	asm volatile ("xchgb %b[x1],%h[x2]"
-		      : [x1] "=Q" (x)
-		      : [x2] "0" (x)
-		      );
-	return x;
-}
-
-/*
- * An architecture-optimized byte swap for a 32-bit value.
- *
- * Do not use this function directly. The preferred function is rte_bswap32().
- */
-static inline uint32_t rte_arch_bswap32(uint32_t _x)
-{
-	register uint32_t x = _x;
-	asm volatile ("bswap %[x]"
-		      : [x] "+r" (x)
-		      );
-	return x;
-}
-
-/*
- * An architecture-optimized byte swap for a 64-bit value.
- *
- * Do not use this function directly. The preferred function is rte_bswap64().
- */
-#ifdef RTE_ARCH_X86_64
-/* 64-bit mode */
-static inline uint64_t rte_arch_bswap64(uint64_t _x)
-{
-	register uint64_t x = _x;
-	asm volatile ("bswap %[x]"
-		      : [x] "+r" (x)
-		      );
-	return x;
-}
-#else /* ! RTE_ARCH_X86_64 */
-/* Compat./Leg. mode */
-static inline uint64_t rte_arch_bswap64(uint64_t x)
-{
-	uint64_t ret = 0;
-	ret |= ((uint64_t)rte_arch_bswap32(x & 0xffffffffUL) << 32);
-	ret |= ((uint64_t)rte_arch_bswap32((x >> 32) & 0xffffffffUL));
-	return ret;
-}
-#endif /* RTE_ARCH_X86_64 */
-
-
 #ifndef RTE_FORCE_INTRINSICS
 /**
  * Swap bytes in a 16-bit value.
diff --git a/lib/librte_eal/common/include/x86_64/arch/rte_byteorder_arch.h b/lib/librte_eal/common/include/x86_64/arch/rte_byteorder_arch.h
new file mode 100644
index 0000000..06c1afc
--- /dev/null
+++ b/lib/librte_eal/common/include/x86_64/arch/rte_byteorder_arch.h
@@ -0,0 +1,95 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_BYTEORDER_ARCH_H_
+#define _RTE_BYTEORDER_ARCH_H_
+
+#include <stdint.h>
+
+/*
+ * An architecture-optimized byte swap for a 16-bit value.
+ *
+ * Do not use this function directly. The preferred function is rte_bswap16().
+ */
+static inline uint16_t rte_arch_bswap16(uint16_t _x)
+{
+	register uint16_t x = _x;
+	asm volatile ("xchgb %b[x1],%h[x2]"
+		      : [x1] "=Q" (x)
+		      : [x2] "0" (x)
+		      );
+	return x;
+}
+
+/*
+ * An architecture-optimized byte swap for a 32-bit value.
+ *
+ * Do not use this function directly. The preferred function is rte_bswap32().
+ */
+static inline uint32_t rte_arch_bswap32(uint32_t _x)
+{
+	register uint32_t x = _x;
+	asm volatile ("bswap %[x]"
+		      : [x] "+r" (x)
+		      );
+	return x;
+}
+
+/*
+ * An architecture-optimized byte swap for a 64-bit value.
+ *
+ * Do not use this function directly. The preferred function is rte_bswap64().
+ */
+#ifdef RTE_ARCH_X86_64
+/* 64-bit mode */
+static inline uint64_t rte_arch_bswap64(uint64_t _x)
+{
+	register uint64_t x = _x;
+	asm volatile ("bswap %[x]"
+		      : [x] "+r" (x)
+		      );
+	return x;
+}
+#else /* ! RTE_ARCH_X86_64 */
+/* Compat./Leg. mode */
+static inline uint64_t rte_arch_bswap64(uint64_t x)
+{
+	uint64_t ret = 0;
+	ret |= ((uint64_t)rte_arch_bswap32(x & 0xffffffffUL) << 32);
+	ret |= ((uint64_t)rte_arch_bswap32((x >> 32) & 0xffffffffUL));
+	return ret;
+}
+#endif /* RTE_ARCH_X86_64 */
+
+#endif /* _RTE_BYTEORDER_ARCH_H_ */
+
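
Caller-visible behaviour is unchanged: code keeps including rte_byteorder.h and using its wrappers, which now reach the per-architecture implementations through the arch/rte_byteorder_arch.h symlink installed by the Makefile. A small illustrative example (assumes a configured DPDK build environment; not part of this patch):

#include <inttypes.h>
#include <stdio.h>
#include <rte_byteorder.h>

int main(void)
{
	uint32_t host = 0x11223344;
	/* On little-endian CPUs this resolves to the arch-specific rte_arch_bswap32(). */
	uint32_t be = rte_cpu_to_be_32(host);

	printf("host 0x%08" PRIx32 " -> big-endian 0x%08" PRIx32 "\n", host, be);
	return 0;
}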