From patchwork Thu Oct 16 10:44:29 2014
X-Patchwork-Submitter: Chao Zhu
X-Patchwork-Id: 835
From: Chao Zhu
To: dev@dpdk.org
Date: Thu, 16 Oct 2014 06:44:29 -0400
Message-Id: <1413456274-22649-3-git-send-email-bjzhuc@cn.ibm.com>
X-Mailer: git-send-email 1.7.1
In-Reply-To: <1413456274-22649-1-git-send-email-bjzhuc@cn.ibm.com>
References: <1413456274-22649-1-git-send-email-bjzhuc@cn.ibm.com>
Subject: [dpdk-dev] [PATCH v2 2/7] Split byte order operations to architecture specific
List-Id: patches and discussions about DPDK

This patch splits the byte order operations out of the common DPDK code and pushes them
into architecture-specific directories, so that support for other processor
architectures can be added to DPDK more easily.
Signed-off-by: Chao Zhu

---
 lib/librte_eal/common/Makefile                  |    4 +-
 .../common/include/arch/i686/rte_byteorder.h    |  194 ++++++++++++++
 .../common/include/arch/x86_64/rte_byteorder.h  |  195 ++++++++++++++
 .../common/include/generic/rte_byteorder.h      |  124 +++++++++
 lib/librte_eal/common/include/rte_byteorder.h   |  270 --------------------
 5 files changed, 515 insertions(+), 272 deletions(-)
 create mode 100644 lib/librte_eal/common/include/arch/i686/rte_byteorder.h
 create mode 100644 lib/librte_eal/common/include/arch/x86_64/rte_byteorder.h
 create mode 100644 lib/librte_eal/common/include/generic/rte_byteorder.h
 delete mode 100644 lib/librte_eal/common/include/rte_byteorder.h

diff --git a/lib/librte_eal/common/Makefile b/lib/librte_eal/common/Makefile
index 8ab363b..62a39cd 100644
--- a/lib/librte_eal/common/Makefile
+++ b/lib/librte_eal/common/Makefile
@@ -31,7 +31,7 @@ include $(RTE_SDK)/mk/rte.vars.mk
-INC := rte_branch_prediction.h rte_byteorder.h rte_common.h
+INC := rte_branch_prediction.h rte_common.h
 INC += rte_cycles.h rte_debug.h rte_eal.h rte_errno.h rte_launch.h rte_lcore.h
 INC += rte_log.h rte_memcpy.h rte_memory.h rte_memzone.h rte_pci.h
 INC += rte_pci_dev_ids.h rte_per_lcore.h rte_prefetch.h rte_random.h
@@ -46,7 +46,7 @@ ifeq ($(CONFIG_RTE_INSECURE_FUNCTION_WARNING),y)
 INC += rte_warnings.h
 endif
-GENERIC_INC := rte_atomic.h
+GENERIC_INC := rte_atomic.h rte_byteorder.h
 ARCH_INC := $(GENERIC_INC)
 SYMLINK-$(CONFIG_RTE_LIBRTE_EAL)-include := $(addprefix include/,$(INC))
diff --git a/lib/librte_eal/common/include/arch/i686/rte_byteorder.h b/lib/librte_eal/common/include/arch/i686/rte_byteorder.h
new file mode 100644
index 0000000..de5cc83
--- /dev/null
+++ b/lib/librte_eal/common/include/arch/i686/rte_byteorder.h
@@ -0,0 +1,194 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above copyright
+ *     notice, this list of conditions and the following disclaimer in
+ *     the documentation and/or other materials provided with the
+ *     distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_BYTEORDER_I686_H_
+#define _RTE_BYTEORDER_I686_H_
+
+/**
+ * @file
+ *
+ * Byte Swap Operations
+ *
+ * This file defines a architecture specific API for byte swap operations.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "generic/rte_byteorder.h"
+
+/*
+ * An architecture-optimized byte swap for a 16-bit value.
+ *
+ * Do not use this function directly. The preferred function is rte_bswap16().
+ */
+static inline uint16_t rte_arch_bswap16(uint16_t _x)
+{
+        register uint16_t x = _x;
+        asm volatile ("xchgb %b[x1],%h[x2]"
+                      : [x1] "=Q" (x)
+                      : [x2] "0" (x)
+                      );
+        return x;
+}
+
+/*
+ * An architecture-optimized byte swap for a 32-bit value.
+ *
+ * Do not use this function directly. The preferred function is rte_bswap32().
+ */
+static inline uint32_t rte_arch_bswap32(uint32_t _x)
+{
+        register uint32_t x = _x;
+        asm volatile ("bswap %[x]"
+                      : [x] "+r" (x)
+                      );
+        return x;
+}
+
+/*
+ * An architecture-optimized byte swap for a 64-bit value.
+ *
+ * Do not use this function directly. The preferred function is rte_bswap64().
+ */
+/* Compat./Leg. mode */
+static inline uint64_t rte_arch_bswap64(uint64_t x)
+{
+        uint64_t ret = 0;
+        ret |= ((uint64_t)rte_arch_bswap32(x & 0xffffffffUL) << 32);
+        ret |= ((uint64_t)rte_arch_bswap32((x >> 32) & 0xffffffffUL));
+        return ret;
+}
+
+#ifndef RTE_FORCE_INTRINSICS
+/**
+ * Swap bytes in a 16-bit value.
+ */
+#define rte_bswap16(x) ((uint16_t)(__builtin_constant_p(x) ? \
+        rte_constant_bswap16(x) : \
+        rte_arch_bswap16(x)))
+
+/**
+ * Swap bytes in a 32-bit value.
+ */
+#define rte_bswap32(x) ((uint32_t)(__builtin_constant_p(x) ? \
+        rte_constant_bswap32(x) : \
+        rte_arch_bswap32(x)))
+
+/**
+ * Swap bytes in a 64-bit value.
+ */
+#define rte_bswap64(x) ((uint64_t)(__builtin_constant_p(x) ? \
+        rte_constant_bswap64(x) : \
+        rte_arch_bswap64(x)))
+#else
+/**
+ * Swap bytes in a 16-bit value.
+ * __builtin_bswap16 is only available gcc 4.8 and upwards
+ */
+#if __GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 8)
+#define rte_bswap16(x) ((uint16_t)(__builtin_constant_p(x) ? \
+        rte_constant_bswap16(x) : \
+        rte_arch_bswap16(x)))
+#endif
+#endif
+
+/**
+ * Convert a 16-bit value from CPU order to little endian.
+ */
+#define rte_cpu_to_le_16(x) (x)
+
+/**
+ * Convert a 32-bit value from CPU order to little endian.
+ */
+#define rte_cpu_to_le_32(x) (x)
+
+/**
+ * Convert a 64-bit value from CPU order to little endian.
+ */
+#define rte_cpu_to_le_64(x) (x)
+
+
+/**
+ * Convert a 16-bit value from CPU order to big endian.
+ */
+#define rte_cpu_to_be_16(x) rte_bswap16(x)
+
+/**
+ * Convert a 32-bit value from CPU order to big endian.
+ */
+#define rte_cpu_to_be_32(x) rte_bswap32(x)
+
+/**
+ * Convert a 64-bit value from CPU order to big endian.
+ */
+#define rte_cpu_to_be_64(x) rte_bswap64(x)
+
+
+/**
+ * Convert a 16-bit value from little endian to CPU order.
+ */
+#define rte_le_to_cpu_16(x) (x)
+
+/**
+ * Convert a 32-bit value from little endian to CPU order.
+ */
+#define rte_le_to_cpu_32(x) (x)
+
+/**
+ * Convert a 64-bit value from little endian to CPU order.
+ */
+#define rte_le_to_cpu_64(x) (x)
+
+
+/**
+ * Convert a 16-bit value from big endian to CPU order.
+ */
+#define rte_be_to_cpu_16(x) rte_bswap16(x)
+
+/**
+ * Convert a 32-bit value from big endian to CPU order.
+ */
+#define rte_be_to_cpu_32(x) rte_bswap32(x)
+
+/**
+ * Convert a 64-bit value from big endian to CPU order.
+ */
+#define rte_be_to_cpu_64(x) rte_bswap64(x)
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_BYTEORDER_I686_H_ */
\ No newline at end of file
diff --git a/lib/librte_eal/common/include/arch/x86_64/rte_byteorder.h b/lib/librte_eal/common/include/arch/x86_64/rte_byteorder.h
new file mode 100644
index 0000000..089aeae
--- /dev/null
+++ b/lib/librte_eal/common/include/arch/x86_64/rte_byteorder.h
@@ -0,0 +1,195 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above copyright
+ *     notice, this list of conditions and the following disclaimer in
+ *     the documentation and/or other materials provided with the
+ *     distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_BYTEORDER_X86_64_H_
+#define _RTE_BYTEORDER_X86_64_H_
+
+/**
+ * @file
+ *
+ * Byte Swap Operations
+ *
+ * This file defines a architecture specific API for byte swap operations.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include "generic/rte_byteorder.h"
+
+/*
+ * An architecture-optimized byte swap for a 16-bit value.
+ *
+ * Do not use this function directly. The preferred function is rte_bswap16().
+ */
+static inline uint16_t rte_arch_bswap16(uint16_t _x)
+{
+        register uint16_t x = _x;
+        asm volatile ("xchgb %b[x1],%h[x2]"
+                      : [x1] "=Q" (x)
+                      : [x2] "0" (x)
+                      );
+        return x;
+}
+
+/*
+ * An architecture-optimized byte swap for a 32-bit value.
+ *
+ * Do not use this function directly. The preferred function is rte_bswap32().
+ */
+static inline uint32_t rte_arch_bswap32(uint32_t _x)
+{
+        register uint32_t x = _x;
+        asm volatile ("bswap %[x]"
+                      : [x] "+r" (x)
+                      );
+        return x;
+}
+
+/*
+ * An architecture-optimized byte swap for a 64-bit value.
+ *
+ * Do not use this function directly. The preferred function is rte_bswap64().
+ */
+/* 64-bit mode */
+static inline uint64_t rte_arch_bswap64(uint64_t _x)
+{
+        register uint64_t x = _x;
+        asm volatile ("bswap %[x]"
+                      : [x] "+r" (x)
+                      );
+        return x;
+}
+
+#ifndef RTE_FORCE_INTRINSICS
+/**
+ * Swap bytes in a 16-bit value.
+ */
+#define rte_bswap16(x) ((uint16_t)(__builtin_constant_p(x) ? \
+        rte_constant_bswap16(x) : \
+        rte_arch_bswap16(x)))
+
+/**
+ * Swap bytes in a 32-bit value.
+ */
+#define rte_bswap32(x) ((uint32_t)(__builtin_constant_p(x) ? \
+        rte_constant_bswap32(x) : \
+        rte_arch_bswap32(x)))
+
+/**
+ * Swap bytes in a 64-bit value.
+ */
+#define rte_bswap64(x) ((uint64_t)(__builtin_constant_p(x) ? \
+        rte_constant_bswap64(x) : \
+        rte_arch_bswap64(x)))
+#else
+/**
+ * Swap bytes in a 16-bit value.
+ * __builtin_bswap16 is only available gcc 4.8 and upwards
+ */
+#if __GNUC__ < 4 || (__GNUC__ == 4 && __GNUC_MINOR__ < 8)
+#define rte_bswap16(x) ((uint16_t)(__builtin_constant_p(x) ? \
+        rte_constant_bswap16(x) : \
+        rte_arch_bswap16(x)))
+#endif
+#endif
+
+/**
+ * Convert a 16-bit value from CPU order to little endian.
+ */
+#define rte_cpu_to_le_16(x) (x)
+
+/**
+ * Convert a 32-bit value from CPU order to little endian.
+ */
+#define rte_cpu_to_le_32(x) (x)
+
+/**
+ * Convert a 64-bit value from CPU order to little endian.
+ */
+#define rte_cpu_to_le_64(x) (x)
+
+
+/**
+ * Convert a 16-bit value from CPU order to big endian.
+ */
+#define rte_cpu_to_be_16(x) rte_bswap16(x)
+
+/**
+ * Convert a 32-bit value from CPU order to big endian.
+ */
+#define rte_cpu_to_be_32(x) rte_bswap32(x)
+
+/**
+ * Convert a 64-bit value from CPU order to big endian.
+ */
+#define rte_cpu_to_be_64(x) rte_bswap64(x)
+
+
+/**
+ * Convert a 16-bit value from little endian to CPU order.
+ */
+#define rte_le_to_cpu_16(x) (x)
+
+/**
+ * Convert a 32-bit value from little endian to CPU order.
+ */
+#define rte_le_to_cpu_32(x) (x)
+
+/**
+ * Convert a 64-bit value from little endian to CPU order.
+ */
+#define rte_le_to_cpu_64(x) (x)
+
+
+/**
+ * Convert a 16-bit value from big endian to CPU order.
+ */
+#define rte_be_to_cpu_16(x) rte_bswap16(x)
+
+/**
+ * Convert a 32-bit value from big endian to CPU order.
+ */
+#define rte_be_to_cpu_32(x) rte_bswap32(x)
+
+/**
+ * Convert a 64-bit value from big endian to CPU order.
+ */
+#define rte_be_to_cpu_64(x) rte_bswap64(x)
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_BYTEORDER_X86_64_H_ */
\ No newline at end of file
diff --git a/lib/librte_eal/common/include/generic/rte_byteorder.h b/lib/librte_eal/common/include/generic/rte_byteorder.h
new file mode 100644
index 0000000..729d378
--- /dev/null
+++ b/lib/librte_eal/common/include/generic/rte_byteorder.h
@@ -0,0 +1,124 @@
+/*-
+ * BSD LICENSE
+ *
+ * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ * All rights reserved.
+ *
+ * Redistribution and use in source and binary forms, with or without
+ * modification, are permitted provided that the following conditions
+ * are met:
+ *
+ *   * Redistributions of source code must retain the above copyright
+ *     notice, this list of conditions and the following disclaimer.
+ *   * Redistributions in binary form must reproduce the above copyright
+ *     notice, this list of conditions and the following disclaimer in
+ *     the documentation and/or other materials provided with the
+ *     distribution.
+ *   * Neither the name of Intel Corporation nor the names of its
+ *     contributors may be used to endorse or promote products derived
+ *     from this software without specific prior written permission.
+ *
+ * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_BYTEORDER_H_
+#define _RTE_BYTEORDER_H_
+
+/**
+ * @file
+ *
+ * Byte Swap Operations
+ *
+ * This file defines a generic API for byte swap operations. Part of
+ * the implementation is architecture-specific.
+ */
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+#include <stdint.h>
+
+/*
+ * An internal function to swap bytes in a 16-bit value.
+ *
+ * It is used by rte_bswap16() when the value is constant. Do not use
+ * this function directly; rte_bswap16() is preferred.
+ */
+static inline uint16_t
+rte_constant_bswap16(uint16_t x)
+{
+        return (uint16_t)(((x & 0x00ffU) << 8) |
+                ((x & 0xff00U) >> 8));
+}
+
+/*
+ * An internal function to swap bytes in a 32-bit value.
+ *
+ * It is used by rte_bswap32() when the value is constant. Do not use
+ * this function directly; rte_bswap32() is preferred.
+ */
+static inline uint32_t
+rte_constant_bswap32(uint32_t x)
+{
+        return ((x & 0x000000ffUL) << 24) |
+                ((x & 0x0000ff00UL) << 8) |
+                ((x & 0x00ff0000UL) >> 8) |
+                ((x & 0xff000000UL) >> 24);
+}
+
+/*
+ * An internal function to swap bytes of a 64-bit value.
+ *
+ * It is used by rte_bswap64() when the value is constant. Do not use
+ * this function directly; rte_bswap64() is preferred.
+ */
+static inline uint64_t
+rte_constant_bswap64(uint64_t x)
+{
+        return ((x & 0x00000000000000ffULL) << 56) |
+                ((x & 0x000000000000ff00ULL) << 40) |
+                ((x & 0x0000000000ff0000ULL) << 24) |
+                ((x & 0x00000000ff000000ULL) << 8) |
+                ((x & 0x000000ff00000000ULL) >> 8) |
+                ((x & 0x0000ff0000000000ULL) >> 24) |
+                ((x & 0x00ff000000000000ULL) >> 40) |
+                ((x & 0xff00000000000000ULL) >> 56);
+}
+
+#ifdef RTE_FORCE_INTRINSICS
+/**
+ * Swap bytes in a 16-bit value.
+ * __builtin_bswap16 is only available gcc 4.8 and upwards
+ */
+#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 8)
+#define rte_bswap16(x) __builtin_bswap16(x)
+#endif
+
+/**
+ * Swap bytes in a 32-bit value.
+ */
+#define rte_bswap32(x) __builtin_bswap32(x)
+
+/**
+ * Swap bytes in a 64-bit value.
+ */
+#define rte_bswap64(x) __builtin_bswap64(x)
+
+#endif
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_BYTEORDER_H_ */
diff --git a/lib/librte_eal/common/include/rte_byteorder.h b/lib/librte_eal/common/include/rte_byteorder.h
deleted file mode 100644
index 30fbd56..0000000
--- a/lib/librte_eal/common/include/rte_byteorder.h
+++ /dev/null
@@ -1,270 +0,0 @@
-/*-
- * BSD LICENSE
- *
- * Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
- * All rights reserved.
- *
- * Redistribution and use in source and binary forms, with or without
- * modification, are permitted provided that the following conditions
- * are met:
- *
- *   * Redistributions of source code must retain the above copyright
- *     notice, this list of conditions and the following disclaimer.
- *   * Redistributions in binary form must reproduce the above copyright
- *     notice, this list of conditions and the following disclaimer in
- *     the documentation and/or other materials provided with the
- *     distribution.
-   * Neither the name of Intel Corporation nor the names of its
-     contributors may be used to endorse or promote products derived
-     from this software without specific prior written permission.
- *
- * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
- * "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
- * LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
- * A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
- * OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
- * SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
- * LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
- * DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
- * THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
- * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
- * OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
- */
-
-#ifndef _RTE_BYTEORDER_H_
-#define _RTE_BYTEORDER_H_
-
-/**
- * @file
- *
- * Byte Swap Operations
- *
- * This file defines a generic API for byte swap operations. Part of
- * the implementation is architecture-specific.
- */
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-#include <stdint.h>
-
-/*
- * An internal function to swap bytes in a 16-bit value.
- *
- * It is used by rte_bswap16() when the value is constant. Do not use
- * this function directly; rte_bswap16() is preferred.
- */
-static inline uint16_t
-rte_constant_bswap16(uint16_t x)
-{
-        return (uint16_t)(((x & 0x00ffU) << 8) |
-                ((x & 0xff00U) >> 8));
-}
-
-/*
- * An internal function to swap bytes in a 32-bit value.
- *
- * It is used by rte_bswap32() when the value is constant. Do not use
- * this function directly; rte_bswap32() is preferred.
- */
-static inline uint32_t
-rte_constant_bswap32(uint32_t x)
-{
-        return ((x & 0x000000ffUL) << 24) |
-                ((x & 0x0000ff00UL) << 8) |
-                ((x & 0x00ff0000UL) >> 8) |
-                ((x & 0xff000000UL) >> 24);
-}
-
-/*
- * An internal function to swap bytes of a 64-bit value.
- *
- * It is used by rte_bswap64() when the value is constant. Do not use
- * this function directly; rte_bswap64() is preferred.
- */
-static inline uint64_t
-rte_constant_bswap64(uint64_t x)
-{
-        return ((x & 0x00000000000000ffULL) << 56) |
-                ((x & 0x000000000000ff00ULL) << 40) |
-                ((x & 0x0000000000ff0000ULL) << 24) |
-                ((x & 0x00000000ff000000ULL) << 8) |
-                ((x & 0x000000ff00000000ULL) >> 8) |
-                ((x & 0x0000ff0000000000ULL) >> 24) |
-                ((x & 0x00ff000000000000ULL) >> 40) |
-                ((x & 0xff00000000000000ULL) >> 56);
-}
-
-/*
- * An architecture-optimized byte swap for a 16-bit value.
- *
- * Do not use this function directly. The preferred function is rte_bswap16().
- */
-static inline uint16_t rte_arch_bswap16(uint16_t _x)
-{
-        register uint16_t x = _x;
-        asm volatile ("xchgb %b[x1],%h[x2]"
-                      : [x1] "=Q" (x)
-                      : [x2] "0" (x)
-                      );
-        return x;
-}
-
-/*
- * An architecture-optimized byte swap for a 32-bit value.
- *
- * Do not use this function directly. The preferred function is rte_bswap32().
- */
-static inline uint32_t rte_arch_bswap32(uint32_t _x)
-{
-        register uint32_t x = _x;
-        asm volatile ("bswap %[x]"
-                      : [x] "+r" (x)
-                      );
-        return x;
-}
-
-/*
- * An architecture-optimized byte swap for a 64-bit value.
- *
- * Do not use this function directly. The preferred function is rte_bswap64().
- */
-#ifdef RTE_ARCH_X86_64
-/* 64-bit mode */
-static inline uint64_t rte_arch_bswap64(uint64_t _x)
-{
-        register uint64_t x = _x;
-        asm volatile ("bswap %[x]"
-                      : [x] "+r" (x)
-                      );
-        return x;
-}
-#else /* ! RTE_ARCH_X86_64 */
-/* Compat./Leg. mode */
-static inline uint64_t rte_arch_bswap64(uint64_t x)
-{
-        uint64_t ret = 0;
-        ret |= ((uint64_t)rte_arch_bswap32(x & 0xffffffffUL) << 32);
-        ret |= ((uint64_t)rte_arch_bswap32((x >> 32) & 0xffffffffUL));
-        return ret;
-}
-#endif /* RTE_ARCH_X86_64 */
-
-
-#ifndef RTE_FORCE_INTRINSICS
-/**
- * Swap bytes in a 16-bit value.
- */
-#define rte_bswap16(x) ((uint16_t)(__builtin_constant_p(x) ? \
-        rte_constant_bswap16(x) : \
-        rte_arch_bswap16(x)))
-
-/**
- * Swap bytes in a 32-bit value.
- */
-#define rte_bswap32(x) ((uint32_t)(__builtin_constant_p(x) ? \
-        rte_constant_bswap32(x) : \
-        rte_arch_bswap32(x)))
-
-/**
- * Swap bytes in a 64-bit value.
- */
-#define rte_bswap64(x) ((uint64_t)(__builtin_constant_p(x) ? \
-        rte_constant_bswap64(x) : \
-        rte_arch_bswap64(x)))
-
-#else
-
-/**
- * Swap bytes in a 16-bit value.
- * __builtin_bswap16 is only available gcc 4.8 and upwards
- */
-#if __GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 8)
-#define rte_bswap16(x) __builtin_bswap16(x)
-#else
-#define rte_bswap16(x) ((uint16_t)(__builtin_constant_p(x) ? \
-        rte_constant_bswap16(x) : \
-        rte_arch_bswap16(x)))
-#endif
-
-/**
- * Swap bytes in a 32-bit value.
- */
-#define rte_bswap32(x) __builtin_bswap32(x)
-
-/**
- * Swap bytes in a 64-bit value.
- */
-#define rte_bswap64(x) __builtin_bswap64(x)
-
-#endif
-
-/**
- * Convert a 16-bit value from CPU order to little endian.
- */
-#define rte_cpu_to_le_16(x) (x)
-
-/**
- * Convert a 32-bit value from CPU order to little endian.
- */
-#define rte_cpu_to_le_32(x) (x)
-
-/**
- * Convert a 64-bit value from CPU order to little endian.
- */
-#define rte_cpu_to_le_64(x) (x)
-
-
-/**
- * Convert a 16-bit value from CPU order to big endian.
- */
-#define rte_cpu_to_be_16(x) rte_bswap16(x)
-
-/**
- * Convert a 32-bit value from CPU order to big endian.
- */
-#define rte_cpu_to_be_32(x) rte_bswap32(x)
-
-/**
- * Convert a 64-bit value from CPU order to big endian.
- */
-#define rte_cpu_to_be_64(x) rte_bswap64(x)
-
-
-/**
- * Convert a 16-bit value from little endian to CPU order.
- */
-#define rte_le_to_cpu_16(x) (x)
-
-/**
- * Convert a 32-bit value from little endian to CPU order.
- */
-#define rte_le_to_cpu_32(x) (x)
-
-/**
- * Convert a 64-bit value from little endian to CPU order.
- */
-#define rte_le_to_cpu_64(x) (x)
-
-
-/**
- * Convert a 16-bit value from big endian to CPU order.
- */
-#define rte_be_to_cpu_16(x) rte_bswap16(x)
-
-/**
- * Convert a 32-bit value from big endian to CPU order.
- */
-#define rte_be_to_cpu_32(x) rte_bswap32(x)
-
-/**
- * Convert a 64-bit value from big endian to CPU order.
- */
-#define rte_be_to_cpu_64(x) rte_bswap64(x)
-
-#ifdef __cplusplus
-}
-#endif
-
-#endif /* _RTE_BYTEORDER_H_ */
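
[Editor's note, not part of the patch: a minimal usage sketch of the public byte-order
API that this series rearranges. The include path and the build command are assumptions
for a DPDK tree that already contains this change; on these x86 headers the cpu/be
conversions always swap, while a big-endian port would define them as no-ops.]

/* demo.c -- illustrative only, assuming something like:
 *   gcc -I$RTE_SDK/$RTE_TARGET/include demo.c -o demo
 */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>

#include <rte_byteorder.h>

int main(void)
{
	uint32_t host = 0x11223344;
	uint32_t wire;

	/* rte_bswap32() expands to rte_constant_bswap32() for compile-time
	 * constants and to the arch-specific bswap instruction otherwise. */
	printf("bswap32(0x%08" PRIx32 ")      = 0x%08" PRIx32 "\n",
	       host, rte_bswap32(host));

	/* CPU order -> big endian (network order) and back again.  On the
	 * i686/x86_64 headers above this is a byte swap in both directions. */
	wire = rte_cpu_to_be_32(host);
	printf("cpu_to_be_32(0x%08" PRIx32 ") = 0x%08" PRIx32 "\n", host, wire);
	printf("be_to_cpu_32 round trip       = 0x%08" PRIx32 "\n",
	       rte_be_to_cpu_32(wire));

	return 0;
}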