[dpdk-dev,v4,1/3] eal/x86: run-time dispatch over memcpy

Message ID 1506960796-71620-2-git-send-email-xiaoyun.li@intel.com (mailing list archive)
State Superseded, archived
Headers

Checks

Context Check Description
ci/checkpatch warning coding style issues
ci/Intel-compilation fail Compilation issues

Commit Message

Li, Xiaoyun Oct. 2, 2017, 4:13 p.m. UTC
This patch dynamically selects functions of memcpy at run-time based
on CPU flags that current machine supports. This patch uses function
pointers which are bind to the relative functions at constrctor time.
In addition, AVX512 instructions set would be compiled only if users
config it enabled and the compiler supports it.

Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
---
v2
* Use gcc function multi-versioning to avoid compilation issues.
* Add macros for AVX512 and AVX2. Only if users enable AVX512 and the
compiler supports it, the AVX512 codes would be compiled. Only if the
compiler supports AVX2, the AVX2 codes would be compiled.

v3
* Reduce function calls via only keep rte_memcpy_xxx.
* Add conditions that when copy size is small, use inline code path.
Otherwise, use dynamic code path.
* To support attribute target, clang version must be greater than 3.7.
Otherwise, would choose SSE/AVX code path, the same as before.
* Move two mocro functions to the top of the code since they would be
used in inline SSE/AVX and dynamic SSE/AVX codes.

v4
* Modify rte_memcpy.h to several .c files and modify makefiles to compile
AVX2 and AVX512 files.

 lib/librte_eal/bsdapp/eal/Makefile                 |  17 +
 .../common/include/arch/x86/rte_memcpy.c           |  59 ++
 .../common/include/arch/x86/rte_memcpy.h           | 861 +------------------
 .../common/include/arch/x86/rte_memcpy_avx2.c      | 291 +++++++
 .../common/include/arch/x86/rte_memcpy_avx512f.c   | 316 +++++++
 .../common/include/arch/x86/rte_memcpy_internal.h  | 909 +++++++++++++++++++++
 .../common/include/arch/x86/rte_memcpy_sse.c       | 585 +++++++++++++
 lib/librte_eal/linuxapp/eal/Makefile               |  17 +
 mk/rte.cpuflags.mk                                 |  14 +
 9 files changed, 2223 insertions(+), 846 deletions(-)
 create mode 100644 lib/librte_eal/common/include/arch/x86/rte_memcpy.c
 create mode 100644 lib/librte_eal/common/include/arch/x86/rte_memcpy_avx2.c
 create mode 100644 lib/librte_eal/common/include/arch/x86/rte_memcpy_avx512f.c
 create mode 100644 lib/librte_eal/common/include/arch/x86/rte_memcpy_internal.h
 create mode 100644 lib/librte_eal/common/include/arch/x86/rte_memcpy_sse.c
  

Comments

Ananyev, Konstantin Oct. 2, 2017, 4:39 p.m. UTC | #1
> -----Original Message-----
> From: Li, Xiaoyun
> Sent: Monday, October 2, 2017 5:13 PM
> To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Richardson, Bruce <bruce.richardson@intel.com>
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang, Helin <helin.zhang@intel.com>; dev@dpdk.org; Li, Xiaoyun <xiaoyun.li@intel.com>
> Subject: [PATCH v4 1/3] eal/x86: run-time dispatch over memcpy
> 
> This patch dynamically selects functions of memcpy at run-time based
> on CPU flags that current machine supports. This patch uses function
> pointers which are bind to the relative functions at constrctor time.
> In addition, AVX512 instructions set would be compiled only if users
> config it enabled and the compiler supports it.
> 
> Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> ---
> v2
> * Use gcc function multi-versioning to avoid compilation issues.
> * Add macros for AVX512 and AVX2. Only if users enable AVX512 and the
> compiler supports it, the AVX512 codes would be compiled. Only if the
> compiler supports AVX2, the AVX2 codes would be compiled.
> 
> v3
> * Reduce function calls via only keep rte_memcpy_xxx.
> * Add conditions that when copy size is small, use inline code path.
> Otherwise, use dynamic code path.
> * To support attribute target, clang version must be greater than 3.7.
> Otherwise, would choose SSE/AVX code path, the same as before.
> * Move two mocro functions to the top of the code since they would be
> used in inline SSE/AVX and dynamic SSE/AVX codes.
> 
> v4
> * Modify rte_memcpy.h to several .c files and modify makefiles to compile
> AVX2 and AVX512 files.

Could you explain to me why instead of reusing existing rte_memcpy() code
to generate _sse/_avx2/ax512f flavors you keep pushing changes with 3 separate implementations?
Obviously that is much more expensive in terms of maintenance and doesn't look like
feasible solution to me.
Is existing rte_memcpy() implementation is not good enough in terms of functionality and/or performance?
If so, can you outline these problems and try to fix them first.
Konstantin

> 
>  lib/librte_eal/bsdapp/eal/Makefile                 |  17 +
>  .../common/include/arch/x86/rte_memcpy.c           |  59 ++
>  .../common/include/arch/x86/rte_memcpy.h           | 861 +------------------
>  .../common/include/arch/x86/rte_memcpy_avx2.c      | 291 +++++++
>  .../common/include/arch/x86/rte_memcpy_avx512f.c   | 316 +++++++
>  .../common/include/arch/x86/rte_memcpy_internal.h  | 909 +++++++++++++++++++++
>  .../common/include/arch/x86/rte_memcpy_sse.c       | 585 +++++++++++++
>  lib/librte_eal/linuxapp/eal/Makefile               |  17 +
>  mk/rte.cpuflags.mk                                 |  14 +
>  9 files changed, 2223 insertions(+), 846 deletions(-)
>  create mode 100644 lib/librte_eal/common/include/arch/x86/rte_memcpy.c
>  create mode 100644 lib/librte_eal/common/include/arch/x86/rte_memcpy_avx2.c
>  create mode 100644 lib/librte_eal/common/include/arch/x86/rte_memcpy_avx512f.c
>  create mode 100644 lib/librte_eal/common/include/arch/x86/rte_memcpy_internal.h
>  create mode 100644 lib/librte_eal/common/include/arch/x86/rte_memcpy_sse.c
> 
> diff --git a/lib/librte_eal/bsdapp/eal/Makefile b/lib/librte_eal/bsdapp/eal/Makefile
> index 005019e..27023c6 100644
> --- a/lib/librte_eal/bsdapp/eal/Makefile
> +++ b/lib/librte_eal/bsdapp/eal/Makefile
> @@ -36,6 +36,7 @@ LIB = librte_eal.a
>  ARCH_DIR ?= $(RTE_ARCH)
>  VPATH += $(RTE_SDK)/lib/librte_eal/common
>  VPATH += $(RTE_SDK)/lib/librte_eal/common/arch/$(ARCH_DIR)
> +VPATH += $(RTE_SDK)/lib/librte_eal/common/include/arch/$(ARCH_DIR)
> 
>  CFLAGS += -I$(SRCDIR)/include
>  CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common
> @@ -93,6 +94,22 @@ SRCS-$(CONFIG_RTE_EXEC_ENV_BSDAPP) += rte_service.c
>  SRCS-$(CONFIG_RTE_EXEC_ENV_BSDAPP) += rte_cpuflags.c
>  SRCS-$(CONFIG_RTE_ARCH_X86) += rte_spinlock.c
> 
> +# for run-time dispatch of memcpy
> +SRCS-$(CONFIG_RTE_ARCH_X86) += rte_memcpy.c
> +SRCS-$(CONFIG_RTE_ARCH_X86) += rte_memcpy_sse.c
> +
> +# if the compiler supports AVX512, add avx512 file
> +ifneq ($(filter $(MACHINE_CFLAGS),CC_SUPPORT_AVX512F),)
> +SRCS-$(CONFIG_RTE_ARCH_X86) += rte_memcpy_avx512f.c
> +CFLAGS_rte_memcpy_avx512f.o += -mavx512f
> +endif
> +
> +# if the compiler supports AVX2, add avx2 file
> +ifneq ($(filter $(MACHINE_CFLAGS),CC_SUPPORT_AVX2),)
> +SRCS-$(CONFIG_RTE_ARCH_X86) += rte_memcpy_avx2.c
> +CFLAGS_rte_memcpy_avx2.o += -mavx2
> +endif
> +
>  CFLAGS_eal_common_cpuflags.o := $(CPUFLAGS_LIST)
> 
>  CFLAGS_eal.o := -D_GNU_SOURCE
> diff --git a/lib/librte_eal/common/include/arch/x86/rte_memcpy.c b/lib/librte_eal/common/include/arch/x86/rte_memcpy.c
> new file mode 100644
> index 0000000..74ae702
> --- /dev/null
> +++ b/lib/librte_eal/common/include/arch/x86/rte_memcpy.c
> @@ -0,0 +1,59 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
> + *   All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include <rte_memcpy.h>
> +#include <rte_cpuflags.h>
> +#include <rte_log.h>
> +
> +void *(*rte_memcpy_ptr)(void *dst, const void *src, size_t n) = NULL;
> +
> +static void __attribute__((constructor))
> +rte_memcpy_init(void)
> +{
> +#ifdef CC_SUPPORT_AVX512F
> +	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F)) {
> +		rte_memcpy_ptr = rte_memcpy_avx512f;
> +		RTE_LOG(DEBUG, EAL, "AVX512 memcpy is using!\n");
> +		return;
> +	}
> +#endif
> +#ifdef CC_SUPPORT_AVX2
> +	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2)) {
> +		rte_memcpy_ptr = rte_memcpy_avx2;
> +		RTE_LOG(DEBUG, EAL, "AVX2 memcpy is using!\n");
> +		return;
> +	}
> +#endif
> +	rte_memcpy_ptr = rte_memcpy_sse;
> +	RTE_LOG(DEBUG, EAL, "Default SSE/AVX memcpy is using!\n");
> +}
> diff --git a/lib/librte_eal/common/include/arch/x86/rte_memcpy.h b/lib/librte_eal/common/include/arch/x86/rte_memcpy.h
> index 74c280c..460dcdb 100644
> --- a/lib/librte_eal/common/include/arch/x86/rte_memcpy.h
> +++ b/lib/librte_eal/common/include/arch/x86/rte_memcpy.h
> @@ -1,7 +1,7 @@
>  /*-
>   *   BSD LICENSE
>   *
> - *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
> + *   Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
>   *   All rights reserved.
>   *
>   *   Redistribution and use in source and binary forms, with or without
> @@ -34,867 +34,36 @@
>  #ifndef _RTE_MEMCPY_X86_64_H_
>  #define _RTE_MEMCPY_X86_64_H_
> 
> -/**
> - * @file
> - *
> - * Functions for SSE/AVX/AVX2/AVX512 implementation of memcpy().
> - */
> -
> -#include <stdio.h>
> -#include <stdint.h>
> -#include <string.h>
> -#include <rte_vect.h>
> -#include <rte_common.h>
> +#include <rte_memcpy_internal.h>
> 
>  #ifdef __cplusplus
>  extern "C" {
>  #endif
> 
> -/**
> - * Copy bytes from one location to another. The locations must not overlap.
> - *
> - * @note This is implemented as a macro, so it's address should not be taken
> - * and care is needed as parameter expressions may be evaluated multiple times.
> - *
> - * @param dst
> - *   Pointer to the destination of the data.
> - * @param src
> - *   Pointer to the source data.
> - * @param n
> - *   Number of bytes to copy.
> - * @return
> - *   Pointer to the destination data.
> - */
> -static __rte_always_inline void *
> -rte_memcpy(void *dst, const void *src, size_t n);
> -
> -#ifdef RTE_MACHINE_CPUFLAG_AVX512F
> +#define RTE_X86_MEMCPY_THRESH 128
> 
> -#define ALIGNMENT_MASK 0x3F
> +extern void *
> +(*rte_memcpy_ptr)(void *dst, const void *src, size_t n);
> 
>  /**
> - * AVX512 implementation below
> + * Different implementations of memcpy.
>   */
> +extern void*
> +rte_memcpy_avx512f(void *dst, const void *src, size_t n);
> 
> -/**
> - * Copy 16 bytes from one location to another,
> - * locations should not overlap.
> - */
> -static inline void
> -rte_mov16(uint8_t *dst, const uint8_t *src)
> -{
> -	__m128i xmm0;
> -
> -	xmm0 = _mm_loadu_si128((const __m128i *)src);
> -	_mm_storeu_si128((__m128i *)dst, xmm0);
> -}
> -
> -/**
> - * Copy 32 bytes from one location to another,
> - * locations should not overlap.
> - */
> -static inline void
> -rte_mov32(uint8_t *dst, const uint8_t *src)
> -{
> -	__m256i ymm0;
> +extern void *
> +rte_memcpy_avx2(void *dst, const void *src, size_t n);
> 
> -	ymm0 = _mm256_loadu_si256((const __m256i *)src);
> -	_mm256_storeu_si256((__m256i *)dst, ymm0);
> -}
> -
> -/**
> - * Copy 64 bytes from one location to another,
> - * locations should not overlap.
> - */
> -static inline void
> -rte_mov64(uint8_t *dst, const uint8_t *src)
> -{
> -	__m512i zmm0;
> -
> -	zmm0 = _mm512_loadu_si512((const void *)src);
> -	_mm512_storeu_si512((void *)dst, zmm0);
> -}
> -
> -/**
> - * Copy 128 bytes from one location to another,
> - * locations should not overlap.
> - */
> -static inline void
> -rte_mov128(uint8_t *dst, const uint8_t *src)
> -{
> -	rte_mov64(dst + 0 * 64, src + 0 * 64);
> -	rte_mov64(dst + 1 * 64, src + 1 * 64);
> -}
> -
> -/**
> - * Copy 256 bytes from one location to another,
> - * locations should not overlap.
> - */
> -static inline void
> -rte_mov256(uint8_t *dst, const uint8_t *src)
> -{
> -	rte_mov64(dst + 0 * 64, src + 0 * 64);
> -	rte_mov64(dst + 1 * 64, src + 1 * 64);
> -	rte_mov64(dst + 2 * 64, src + 2 * 64);
> -	rte_mov64(dst + 3 * 64, src + 3 * 64);
> -}
> -
> -/**
> - * Copy 128-byte blocks from one location to another,
> - * locations should not overlap.
> - */
> -static inline void
> -rte_mov128blocks(uint8_t *dst, const uint8_t *src, size_t n)
> -{
> -	__m512i zmm0, zmm1;
> -
> -	while (n >= 128) {
> -		zmm0 = _mm512_loadu_si512((const void *)(src + 0 * 64));
> -		n -= 128;
> -		zmm1 = _mm512_loadu_si512((const void *)(src + 1 * 64));
> -		src = src + 128;
> -		_mm512_storeu_si512((void *)(dst + 0 * 64), zmm0);
> -		_mm512_storeu_si512((void *)(dst + 1 * 64), zmm1);
> -		dst = dst + 128;
> -	}
> -}
> -
> -/**
> - * Copy 512-byte blocks from one location to another,
> - * locations should not overlap.
> - */
> -static inline void
> -rte_mov512blocks(uint8_t *dst, const uint8_t *src, size_t n)
> -{
> -	__m512i zmm0, zmm1, zmm2, zmm3, zmm4, zmm5, zmm6, zmm7;
> -
> -	while (n >= 512) {
> -		zmm0 = _mm512_loadu_si512((const void *)(src + 0 * 64));
> -		n -= 512;
> -		zmm1 = _mm512_loadu_si512((const void *)(src + 1 * 64));
> -		zmm2 = _mm512_loadu_si512((const void *)(src + 2 * 64));
> -		zmm3 = _mm512_loadu_si512((const void *)(src + 3 * 64));
> -		zmm4 = _mm512_loadu_si512((const void *)(src + 4 * 64));
> -		zmm5 = _mm512_loadu_si512((const void *)(src + 5 * 64));
> -		zmm6 = _mm512_loadu_si512((const void *)(src + 6 * 64));
> -		zmm7 = _mm512_loadu_si512((const void *)(src + 7 * 64));
> -		src = src + 512;
> -		_mm512_storeu_si512((void *)(dst + 0 * 64), zmm0);
> -		_mm512_storeu_si512((void *)(dst + 1 * 64), zmm1);
> -		_mm512_storeu_si512((void *)(dst + 2 * 64), zmm2);
> -		_mm512_storeu_si512((void *)(dst + 3 * 64), zmm3);
> -		_mm512_storeu_si512((void *)(dst + 4 * 64), zmm4);
> -		_mm512_storeu_si512((void *)(dst + 5 * 64), zmm5);
> -		_mm512_storeu_si512((void *)(dst + 6 * 64), zmm6);
> -		_mm512_storeu_si512((void *)(dst + 7 * 64), zmm7);
> -		dst = dst + 512;
> -	}
> -}
> -
> -static inline void *
> -rte_memcpy_generic(void *dst, const void *src, size_t n)
> -{
> -	uintptr_t dstu = (uintptr_t)dst;
> -	uintptr_t srcu = (uintptr_t)src;
> -	void *ret = dst;
> -	size_t dstofss;
> -	size_t bits;
> -
> -	/**
> -	 * Copy less than 16 bytes
> -	 */
> -	if (n < 16) {
> -		if (n & 0x01) {
> -			*(uint8_t *)dstu = *(const uint8_t *)srcu;
> -			srcu = (uintptr_t)((const uint8_t *)srcu + 1);
> -			dstu = (uintptr_t)((uint8_t *)dstu + 1);
> -		}
> -		if (n & 0x02) {
> -			*(uint16_t *)dstu = *(const uint16_t *)srcu;
> -			srcu = (uintptr_t)((const uint16_t *)srcu + 1);
> -			dstu = (uintptr_t)((uint16_t *)dstu + 1);
> -		}
> -		if (n & 0x04) {
> -			*(uint32_t *)dstu = *(const uint32_t *)srcu;
> -			srcu = (uintptr_t)((const uint32_t *)srcu + 1);
> -			dstu = (uintptr_t)((uint32_t *)dstu + 1);
> -		}
> -		if (n & 0x08)
> -			*(uint64_t *)dstu = *(const uint64_t *)srcu;
> -		return ret;
> -	}
> -
> -	/**
> -	 * Fast way when copy size doesn't exceed 512 bytes
> -	 */
> -	if (n <= 32) {
> -		rte_mov16((uint8_t *)dst, (const uint8_t *)src);
> -		rte_mov16((uint8_t *)dst - 16 + n,
> -				  (const uint8_t *)src - 16 + n);
> -		return ret;
> -	}
> -	if (n <= 64) {
> -		rte_mov32((uint8_t *)dst, (const uint8_t *)src);
> -		rte_mov32((uint8_t *)dst - 32 + n,
> -				  (const uint8_t *)src - 32 + n);
> -		return ret;
> -	}
> -	if (n <= 512) {
> -		if (n >= 256) {
> -			n -= 256;
> -			rte_mov256((uint8_t *)dst, (const uint8_t *)src);
> -			src = (const uint8_t *)src + 256;
> -			dst = (uint8_t *)dst + 256;
> -		}
> -		if (n >= 128) {
> -			n -= 128;
> -			rte_mov128((uint8_t *)dst, (const uint8_t *)src);
> -			src = (const uint8_t *)src + 128;
> -			dst = (uint8_t *)dst + 128;
> -		}
> -COPY_BLOCK_128_BACK63:
> -		if (n > 64) {
> -			rte_mov64((uint8_t *)dst, (const uint8_t *)src);
> -			rte_mov64((uint8_t *)dst - 64 + n,
> -					  (const uint8_t *)src - 64 + n);
> -			return ret;
> -		}
> -		if (n > 0)
> -			rte_mov64((uint8_t *)dst - 64 + n,
> -					  (const uint8_t *)src - 64 + n);
> -		return ret;
> -	}
> -
> -	/**
> -	 * Make store aligned when copy size exceeds 512 bytes
> -	 */
> -	dstofss = ((uintptr_t)dst & 0x3F);
> -	if (dstofss > 0) {
> -		dstofss = 64 - dstofss;
> -		n -= dstofss;
> -		rte_mov64((uint8_t *)dst, (const uint8_t *)src);
> -		src = (const uint8_t *)src + dstofss;
> -		dst = (uint8_t *)dst + dstofss;
> -	}
> -
> -	/**
> -	 * Copy 512-byte blocks.
> -	 * Use copy block function for better instruction order control,
> -	 * which is important when load is unaligned.
> -	 */
> -	rte_mov512blocks((uint8_t *)dst, (const uint8_t *)src, n);
> -	bits = n;
> -	n = n & 511;
> -	bits -= n;
> -	src = (const uint8_t *)src + bits;
> -	dst = (uint8_t *)dst + bits;
> -
> -	/**
> -	 * Copy 128-byte blocks.
> -	 * Use copy block function for better instruction order control,
> -	 * which is important when load is unaligned.
> -	 */
> -	if (n >= 128) {
> -		rte_mov128blocks((uint8_t *)dst, (const uint8_t *)src, n);
> -		bits = n;
> -		n = n & 127;
> -		bits -= n;
> -		src = (const uint8_t *)src + bits;
> -		dst = (uint8_t *)dst + bits;
> -	}
> -
> -	/**
> -	 * Copy whatever left
> -	 */
> -	goto COPY_BLOCK_128_BACK63;
> -}
> -
> -#elif defined RTE_MACHINE_CPUFLAG_AVX2
> -
> -#define ALIGNMENT_MASK 0x1F
> -
> -/**
> - * AVX2 implementation below
> - */
> -
> -/**
> - * Copy 16 bytes from one location to another,
> - * locations should not overlap.
> - */
> -static inline void
> -rte_mov16(uint8_t *dst, const uint8_t *src)
> -{
> -	__m128i xmm0;
> -
> -	xmm0 = _mm_loadu_si128((const __m128i *)src);
> -	_mm_storeu_si128((__m128i *)dst, xmm0);
> -}
> -
> -/**
> - * Copy 32 bytes from one location to another,
> - * locations should not overlap.
> - */
> -static inline void
> -rte_mov32(uint8_t *dst, const uint8_t *src)
> -{
> -	__m256i ymm0;
> -
> -	ymm0 = _mm256_loadu_si256((const __m256i *)src);
> -	_mm256_storeu_si256((__m256i *)dst, ymm0);
> -}
> -
> -/**
> - * Copy 64 bytes from one location to another,
> - * locations should not overlap.
> - */
> -static inline void
> -rte_mov64(uint8_t *dst, const uint8_t *src)
> -{
> -	rte_mov32((uint8_t *)dst + 0 * 32, (const uint8_t *)src + 0 * 32);
> -	rte_mov32((uint8_t *)dst + 1 * 32, (const uint8_t *)src + 1 * 32);
> -}
> -
> -/**
> - * Copy 128 bytes from one location to another,
> - * locations should not overlap.
> - */
> -static inline void
> -rte_mov128(uint8_t *dst, const uint8_t *src)
> -{
> -	rte_mov32((uint8_t *)dst + 0 * 32, (const uint8_t *)src + 0 * 32);
> -	rte_mov32((uint8_t *)dst + 1 * 32, (const uint8_t *)src + 1 * 32);
> -	rte_mov32((uint8_t *)dst + 2 * 32, (const uint8_t *)src + 2 * 32);
> -	rte_mov32((uint8_t *)dst + 3 * 32, (const uint8_t *)src + 3 * 32);
> -}
> -
> -/**
> - * Copy 128-byte blocks from one location to another,
> - * locations should not overlap.
> - */
> -static inline void
> -rte_mov128blocks(uint8_t *dst, const uint8_t *src, size_t n)
> -{
> -	__m256i ymm0, ymm1, ymm2, ymm3;
> -
> -	while (n >= 128) {
> -		ymm0 = _mm256_loadu_si256((const __m256i *)((const uint8_t *)src + 0 * 32));
> -		n -= 128;
> -		ymm1 = _mm256_loadu_si256((const __m256i *)((const uint8_t *)src + 1 * 32));
> -		ymm2 = _mm256_loadu_si256((const __m256i *)((const uint8_t *)src + 2 * 32));
> -		ymm3 = _mm256_loadu_si256((const __m256i *)((const uint8_t *)src + 3 * 32));
> -		src = (const uint8_t *)src + 128;
> -		_mm256_storeu_si256((__m256i *)((uint8_t *)dst + 0 * 32), ymm0);
> -		_mm256_storeu_si256((__m256i *)((uint8_t *)dst + 1 * 32), ymm1);
> -		_mm256_storeu_si256((__m256i *)((uint8_t *)dst + 2 * 32), ymm2);
> -		_mm256_storeu_si256((__m256i *)((uint8_t *)dst + 3 * 32), ymm3);
> -		dst = (uint8_t *)dst + 128;
> -	}
> -}
> -
> -static inline void *
> -rte_memcpy_generic(void *dst, const void *src, size_t n)
> -{
> -	uintptr_t dstu = (uintptr_t)dst;
> -	uintptr_t srcu = (uintptr_t)src;
> -	void *ret = dst;
> -	size_t dstofss;
> -	size_t bits;
> -
> -	/**
> -	 * Copy less than 16 bytes
> -	 */
> -	if (n < 16) {
> -		if (n & 0x01) {
> -			*(uint8_t *)dstu = *(const uint8_t *)srcu;
> -			srcu = (uintptr_t)((const uint8_t *)srcu + 1);
> -			dstu = (uintptr_t)((uint8_t *)dstu + 1);
> -		}
> -		if (n & 0x02) {
> -			*(uint16_t *)dstu = *(const uint16_t *)srcu;
> -			srcu = (uintptr_t)((const uint16_t *)srcu + 1);
> -			dstu = (uintptr_t)((uint16_t *)dstu + 1);
> -		}
> -		if (n & 0x04) {
> -			*(uint32_t *)dstu = *(const uint32_t *)srcu;
> -			srcu = (uintptr_t)((const uint32_t *)srcu + 1);
> -			dstu = (uintptr_t)((uint32_t *)dstu + 1);
> -		}
> -		if (n & 0x08) {
> -			*(uint64_t *)dstu = *(const uint64_t *)srcu;
> -		}
> -		return ret;
> -	}
> -
> -	/**
> -	 * Fast way when copy size doesn't exceed 256 bytes
> -	 */
> -	if (n <= 32) {
> -		rte_mov16((uint8_t *)dst, (const uint8_t *)src);
> -		rte_mov16((uint8_t *)dst - 16 + n,
> -				(const uint8_t *)src - 16 + n);
> -		return ret;
> -	}
> -	if (n <= 48) {
> -		rte_mov16((uint8_t *)dst, (const uint8_t *)src);
> -		rte_mov16((uint8_t *)dst + 16, (const uint8_t *)src + 16);
> -		rte_mov16((uint8_t *)dst - 16 + n,
> -				(const uint8_t *)src - 16 + n);
> -		return ret;
> -	}
> -	if (n <= 64) {
> -		rte_mov32((uint8_t *)dst, (const uint8_t *)src);
> -		rte_mov32((uint8_t *)dst - 32 + n,
> -				(const uint8_t *)src - 32 + n);
> -		return ret;
> -	}
> -	if (n <= 256) {
> -		if (n >= 128) {
> -			n -= 128;
> -			rte_mov128((uint8_t *)dst, (const uint8_t *)src);
> -			src = (const uint8_t *)src + 128;
> -			dst = (uint8_t *)dst + 128;
> -		}
> -COPY_BLOCK_128_BACK31:
> -		if (n >= 64) {
> -			n -= 64;
> -			rte_mov64((uint8_t *)dst, (const uint8_t *)src);
> -			src = (const uint8_t *)src + 64;
> -			dst = (uint8_t *)dst + 64;
> -		}
> -		if (n > 32) {
> -			rte_mov32((uint8_t *)dst, (const uint8_t *)src);
> -			rte_mov32((uint8_t *)dst - 32 + n,
> -					(const uint8_t *)src - 32 + n);
> -			return ret;
> -		}
> -		if (n > 0) {
> -			rte_mov32((uint8_t *)dst - 32 + n,
> -					(const uint8_t *)src - 32 + n);
> -		}
> -		return ret;
> -	}
> -
> -	/**
> -	 * Make store aligned when copy size exceeds 256 bytes
> -	 */
> -	dstofss = (uintptr_t)dst & 0x1F;
> -	if (dstofss > 0) {
> -		dstofss = 32 - dstofss;
> -		n -= dstofss;
> -		rte_mov32((uint8_t *)dst, (const uint8_t *)src);
> -		src = (const uint8_t *)src + dstofss;
> -		dst = (uint8_t *)dst + dstofss;
> -	}
> -
> -	/**
> -	 * Copy 128-byte blocks
> -	 */
> -	rte_mov128blocks((uint8_t *)dst, (const uint8_t *)src, n);
> -	bits = n;
> -	n = n & 127;
> -	bits -= n;
> -	src = (const uint8_t *)src + bits;
> -	dst = (uint8_t *)dst + bits;
> -
> -	/**
> -	 * Copy whatever left
> -	 */
> -	goto COPY_BLOCK_128_BACK31;
> -}
> -
> -#else /* RTE_MACHINE_CPUFLAG */
> -
> -#define ALIGNMENT_MASK 0x0F
> -
> -/**
> - * SSE & AVX implementation below
> - */
> -
> -/**
> - * Copy 16 bytes from one location to another,
> - * locations should not overlap.
> - */
> -static inline void
> -rte_mov16(uint8_t *dst, const uint8_t *src)
> -{
> -	__m128i xmm0;
> -
> -	xmm0 = _mm_loadu_si128((const __m128i *)(const __m128i *)src);
> -	_mm_storeu_si128((__m128i *)dst, xmm0);
> -}
> -
> -/**
> - * Copy 32 bytes from one location to another,
> - * locations should not overlap.
> - */
> -static inline void
> -rte_mov32(uint8_t *dst, const uint8_t *src)
> -{
> -	rte_mov16((uint8_t *)dst + 0 * 16, (const uint8_t *)src + 0 * 16);
> -	rte_mov16((uint8_t *)dst + 1 * 16, (const uint8_t *)src + 1 * 16);
> -}
> -
> -/**
> - * Copy 64 bytes from one location to another,
> - * locations should not overlap.
> - */
> -static inline void
> -rte_mov64(uint8_t *dst, const uint8_t *src)
> -{
> -	rte_mov16((uint8_t *)dst + 0 * 16, (const uint8_t *)src + 0 * 16);
> -	rte_mov16((uint8_t *)dst + 1 * 16, (const uint8_t *)src + 1 * 16);
> -	rte_mov16((uint8_t *)dst + 2 * 16, (const uint8_t *)src + 2 * 16);
> -	rte_mov16((uint8_t *)dst + 3 * 16, (const uint8_t *)src + 3 * 16);
> -}
> -
> -/**
> - * Copy 128 bytes from one location to another,
> - * locations should not overlap.
> - */
> -static inline void
> -rte_mov128(uint8_t *dst, const uint8_t *src)
> -{
> -	rte_mov16((uint8_t *)dst + 0 * 16, (const uint8_t *)src + 0 * 16);
> -	rte_mov16((uint8_t *)dst + 1 * 16, (const uint8_t *)src + 1 * 16);
> -	rte_mov16((uint8_t *)dst + 2 * 16, (const uint8_t *)src + 2 * 16);
> -	rte_mov16((uint8_t *)dst + 3 * 16, (const uint8_t *)src + 3 * 16);
> -	rte_mov16((uint8_t *)dst + 4 * 16, (const uint8_t *)src + 4 * 16);
> -	rte_mov16((uint8_t *)dst + 5 * 16, (const uint8_t *)src + 5 * 16);
> -	rte_mov16((uint8_t *)dst + 6 * 16, (const uint8_t *)src + 6 * 16);
> -	rte_mov16((uint8_t *)dst + 7 * 16, (const uint8_t *)src + 7 * 16);
> -}
> -
> -/**
> - * Copy 256 bytes from one location to another,
> - * locations should not overlap.
> - */
> -static inline void
> -rte_mov256(uint8_t *dst, const uint8_t *src)
> -{
> -	rte_mov16((uint8_t *)dst + 0 * 16, (const uint8_t *)src + 0 * 16);
> -	rte_mov16((uint8_t *)dst + 1 * 16, (const uint8_t *)src + 1 * 16);
> -	rte_mov16((uint8_t *)dst + 2 * 16, (const uint8_t *)src + 2 * 16);
> -	rte_mov16((uint8_t *)dst + 3 * 16, (const uint8_t *)src + 3 * 16);
> -	rte_mov16((uint8_t *)dst + 4 * 16, (const uint8_t *)src + 4 * 16);
> -	rte_mov16((uint8_t *)dst + 5 * 16, (const uint8_t *)src + 5 * 16);
> -	rte_mov16((uint8_t *)dst + 6 * 16, (const uint8_t *)src + 6 * 16);
> -	rte_mov16((uint8_t *)dst + 7 * 16, (const uint8_t *)src + 7 * 16);
> -	rte_mov16((uint8_t *)dst + 8 * 16, (const uint8_t *)src + 8 * 16);
> -	rte_mov16((uint8_t *)dst + 9 * 16, (const uint8_t *)src + 9 * 16);
> -	rte_mov16((uint8_t *)dst + 10 * 16, (const uint8_t *)src + 10 * 16);
> -	rte_mov16((uint8_t *)dst + 11 * 16, (const uint8_t *)src + 11 * 16);
> -	rte_mov16((uint8_t *)dst + 12 * 16, (const uint8_t *)src + 12 * 16);
> -	rte_mov16((uint8_t *)dst + 13 * 16, (const uint8_t *)src + 13 * 16);
> -	rte_mov16((uint8_t *)dst + 14 * 16, (const uint8_t *)src + 14 * 16);
> -	rte_mov16((uint8_t *)dst + 15 * 16, (const uint8_t *)src + 15 * 16);
> -}
> -
> -/**
> - * Macro for copying unaligned block from one location to another with constant load offset,
> - * 47 bytes leftover maximum,
> - * locations should not overlap.
> - * Requirements:
> - * - Store is aligned
> - * - Load offset is <offset>, which must be immediate value within [1, 15]
> - * - For <src>, make sure <offset> bit backwards & <16 - offset> bit forwards are available for loading
> - * - <dst>, <src>, <len> must be variables
> - * - __m128i <xmm0> ~ <xmm8> must be pre-defined
> - */
> -#define MOVEUNALIGNED_LEFT47_IMM(dst, src, len, offset)                                                     \
> -__extension__ ({                                                                                            \
> -    int tmp;                                                                                                \
> -    while (len >= 128 + 16 - offset) {                                                                      \
> -        xmm0 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 0 * 16));                  \
> -        len -= 128;                                                                                         \
> -        xmm1 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 1 * 16));                  \
> -        xmm2 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 2 * 16));                  \
> -        xmm3 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 3 * 16));                  \
> -        xmm4 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 4 * 16));                  \
> -        xmm5 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 5 * 16));                  \
> -        xmm6 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 6 * 16));                  \
> -        xmm7 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 7 * 16));                  \
> -        xmm8 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 8 * 16));                  \
> -        src = (const uint8_t *)src + 128;                                                                   \
> -        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 0 * 16), _mm_alignr_epi8(xmm1, xmm0, offset));        \
> -        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 1 * 16), _mm_alignr_epi8(xmm2, xmm1, offset));        \
> -        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 2 * 16), _mm_alignr_epi8(xmm3, xmm2, offset));        \
> -        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 3 * 16), _mm_alignr_epi8(xmm4, xmm3, offset));        \
> -        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 4 * 16), _mm_alignr_epi8(xmm5, xmm4, offset));        \
> -        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 5 * 16), _mm_alignr_epi8(xmm6, xmm5, offset));        \
> -        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 6 * 16), _mm_alignr_epi8(xmm7, xmm6, offset));        \
> -        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 7 * 16), _mm_alignr_epi8(xmm8, xmm7, offset));        \
> -        dst = (uint8_t *)dst + 128;                                                                         \
> -    }                                                                                                       \
> -    tmp = len;                                                                                              \
> -    len = ((len - 16 + offset) & 127) + 16 - offset;                                                        \
> -    tmp -= len;                                                                                             \
> -    src = (const uint8_t *)src + tmp;                                                                       \
> -    dst = (uint8_t *)dst + tmp;                                                                             \
> -    if (len >= 32 + 16 - offset) {                                                                          \
> -        while (len >= 32 + 16 - offset) {                                                                   \
> -            xmm0 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 0 * 16));              \
> -            len -= 32;                                                                                      \
> -            xmm1 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 1 * 16));              \
> -            xmm2 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 2 * 16));              \
> -            src = (const uint8_t *)src + 32;                                                                \
> -            _mm_storeu_si128((__m128i *)((uint8_t *)dst + 0 * 16), _mm_alignr_epi8(xmm1, xmm0, offset));    \
> -            _mm_storeu_si128((__m128i *)((uint8_t *)dst + 1 * 16), _mm_alignr_epi8(xmm2, xmm1, offset));    \
> -            dst = (uint8_t *)dst + 32;                                                                      \
> -        }                                                                                                   \
> -        tmp = len;                                                                                          \
> -        len = ((len - 16 + offset) & 31) + 16 - offset;                                                     \
> -        tmp -= len;                                                                                         \
> -        src = (const uint8_t *)src + tmp;                                                                   \
> -        dst = (uint8_t *)dst + tmp;                                                                         \
> -    }                                                                                                       \
> -})
> -
> -/**
> - * Macro for copying unaligned block from one location to another,
> - * 47 bytes leftover maximum,
> - * locations should not overlap.
> - * Use switch here because the aligning instruction requires immediate value for shift count.
> - * Requirements:
> - * - Store is aligned
> - * - Load offset is <offset>, which must be within [1, 15]
> - * - For <src>, make sure <offset> bit backwards & <16 - offset> bit forwards are available for loading
> - * - <dst>, <src>, <len> must be variables
> - * - __m128i <xmm0> ~ <xmm8> used in MOVEUNALIGNED_LEFT47_IMM must be pre-defined
> - */
> -#define MOVEUNALIGNED_LEFT47(dst, src, len, offset)                   \
> -__extension__ ({                                                      \
> -    switch (offset) {                                                 \
> -    case 0x01: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x01); break;    \
> -    case 0x02: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x02); break;    \
> -    case 0x03: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x03); break;    \
> -    case 0x04: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x04); break;    \
> -    case 0x05: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x05); break;    \
> -    case 0x06: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x06); break;    \
> -    case 0x07: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x07); break;    \
> -    case 0x08: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x08); break;    \
> -    case 0x09: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x09); break;    \
> -    case 0x0A: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0A); break;    \
> -    case 0x0B: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0B); break;    \
> -    case 0x0C: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0C); break;    \
> -    case 0x0D: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0D); break;    \
> -    case 0x0E: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0E); break;    \
> -    case 0x0F: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0F); break;    \
> -    default:;                                                         \
> -    }                                                                 \
> -})
> -
> -static inline void *
> -rte_memcpy_generic(void *dst, const void *src, size_t n)
> -{
> -	__m128i xmm0, xmm1, xmm2, xmm3, xmm4, xmm5, xmm6, xmm7, xmm8;
> -	uintptr_t dstu = (uintptr_t)dst;
> -	uintptr_t srcu = (uintptr_t)src;
> -	void *ret = dst;
> -	size_t dstofss;
> -	size_t srcofs;
> -
> -	/**
> -	 * Copy less than 16 bytes
> -	 */
> -	if (n < 16) {
> -		if (n & 0x01) {
> -			*(uint8_t *)dstu = *(const uint8_t *)srcu;
> -			srcu = (uintptr_t)((const uint8_t *)srcu + 1);
> -			dstu = (uintptr_t)((uint8_t *)dstu + 1);
> -		}
> -		if (n & 0x02) {
> -			*(uint16_t *)dstu = *(const uint16_t *)srcu;
> -			srcu = (uintptr_t)((const uint16_t *)srcu + 1);
> -			dstu = (uintptr_t)((uint16_t *)dstu + 1);
> -		}
> -		if (n & 0x04) {
> -			*(uint32_t *)dstu = *(const uint32_t *)srcu;
> -			srcu = (uintptr_t)((const uint32_t *)srcu + 1);
> -			dstu = (uintptr_t)((uint32_t *)dstu + 1);
> -		}
> -		if (n & 0x08) {
> -			*(uint64_t *)dstu = *(const uint64_t *)srcu;
> -		}
> -		return ret;
> -	}
> -
> -	/**
> -	 * Fast way when copy size doesn't exceed 512 bytes
> -	 */
> -	if (n <= 32) {
> -		rte_mov16((uint8_t *)dst, (const uint8_t *)src);
> -		rte_mov16((uint8_t *)dst - 16 + n, (const uint8_t *)src - 16 + n);
> -		return ret;
> -	}
> -	if (n <= 48) {
> -		rte_mov32((uint8_t *)dst, (const uint8_t *)src);
> -		rte_mov16((uint8_t *)dst - 16 + n, (const uint8_t *)src - 16 + n);
> -		return ret;
> -	}
> -	if (n <= 64) {
> -		rte_mov32((uint8_t *)dst, (const uint8_t *)src);
> -		rte_mov16((uint8_t *)dst + 32, (const uint8_t *)src + 32);
> -		rte_mov16((uint8_t *)dst - 16 + n, (const uint8_t *)src - 16 + n);
> -		return ret;
> -	}
> -	if (n <= 128) {
> -		goto COPY_BLOCK_128_BACK15;
> -	}
> -	if (n <= 512) {
> -		if (n >= 256) {
> -			n -= 256;
> -			rte_mov128((uint8_t *)dst, (const uint8_t *)src);
> -			rte_mov128((uint8_t *)dst + 128, (const uint8_t *)src + 128);
> -			src = (const uint8_t *)src + 256;
> -			dst = (uint8_t *)dst + 256;
> -		}
> -COPY_BLOCK_255_BACK15:
> -		if (n >= 128) {
> -			n -= 128;
> -			rte_mov128((uint8_t *)dst, (const uint8_t *)src);
> -			src = (const uint8_t *)src + 128;
> -			dst = (uint8_t *)dst + 128;
> -		}
> -COPY_BLOCK_128_BACK15:
> -		if (n >= 64) {
> -			n -= 64;
> -			rte_mov64((uint8_t *)dst, (const uint8_t *)src);
> -			src = (const uint8_t *)src + 64;
> -			dst = (uint8_t *)dst + 64;
> -		}
> -COPY_BLOCK_64_BACK15:
> -		if (n >= 32) {
> -			n -= 32;
> -			rte_mov32((uint8_t *)dst, (const uint8_t *)src);
> -			src = (const uint8_t *)src + 32;
> -			dst = (uint8_t *)dst + 32;
> -		}
> -		if (n > 16) {
> -			rte_mov16((uint8_t *)dst, (const uint8_t *)src);
> -			rte_mov16((uint8_t *)dst - 16 + n, (const uint8_t *)src - 16 + n);
> -			return ret;
> -		}
> -		if (n > 0) {
> -			rte_mov16((uint8_t *)dst - 16 + n, (const uint8_t *)src - 16 + n);
> -		}
> -		return ret;
> -	}
> -
> -	/**
> -	 * Make store aligned when copy size exceeds 512 bytes,
> -	 * and make sure the first 15 bytes are copied, because
> -	 * unaligned copy functions require up to 15 bytes
> -	 * backwards access.
> -	 */
> -	dstofss = (uintptr_t)dst & 0x0F;
> -	if (dstofss > 0) {
> -		dstofss = 16 - dstofss + 16;
> -		n -= dstofss;
> -		rte_mov32((uint8_t *)dst, (const uint8_t *)src);
> -		src = (const uint8_t *)src + dstofss;
> -		dst = (uint8_t *)dst + dstofss;
> -	}
> -	srcofs = ((uintptr_t)src & 0x0F);
> -
> -	/**
> -	 * For aligned copy
> -	 */
> -	if (srcofs == 0) {
> -		/**
> -		 * Copy 256-byte blocks
> -		 */
> -		for (; n >= 256; n -= 256) {
> -			rte_mov256((uint8_t *)dst, (const uint8_t *)src);
> -			dst = (uint8_t *)dst + 256;
> -			src = (const uint8_t *)src + 256;
> -		}
> -
> -		/**
> -		 * Copy whatever left
> -		 */
> -		goto COPY_BLOCK_255_BACK15;
> -	}
> -
> -	/**
> -	 * For copy with unaligned load
> -	 */
> -	MOVEUNALIGNED_LEFT47(dst, src, n, srcofs);
> -
> -	/**
> -	 * Copy whatever left
> -	 */
> -	goto COPY_BLOCK_64_BACK15;
> -}
> -
> -#endif /* RTE_MACHINE_CPUFLAG */
> -
> -static inline void *
> -rte_memcpy_aligned(void *dst, const void *src, size_t n)
> -{
> -	void *ret = dst;
> -
> -	/* Copy size <= 16 bytes */
> -	if (n < 16) {
> -		if (n & 0x01) {
> -			*(uint8_t *)dst = *(const uint8_t *)src;
> -			src = (const uint8_t *)src + 1;
> -			dst = (uint8_t *)dst + 1;
> -		}
> -		if (n & 0x02) {
> -			*(uint16_t *)dst = *(const uint16_t *)src;
> -			src = (const uint16_t *)src + 1;
> -			dst = (uint16_t *)dst + 1;
> -		}
> -		if (n & 0x04) {
> -			*(uint32_t *)dst = *(const uint32_t *)src;
> -			src = (const uint32_t *)src + 1;
> -			dst = (uint32_t *)dst + 1;
> -		}
> -		if (n & 0x08)
> -			*(uint64_t *)dst = *(const uint64_t *)src;
> -
> -		return ret;
> -	}
> -
> -	/* Copy 16 <= size <= 32 bytes */
> -	if (n <= 32) {
> -		rte_mov16((uint8_t *)dst, (const uint8_t *)src);
> -		rte_mov16((uint8_t *)dst - 16 + n,
> -				(const uint8_t *)src - 16 + n);
> -
> -		return ret;
> -	}
> -
> -	/* Copy 32 < size <= 64 bytes */
> -	if (n <= 64) {
> -		rte_mov32((uint8_t *)dst, (const uint8_t *)src);
> -		rte_mov32((uint8_t *)dst - 32 + n,
> -				(const uint8_t *)src - 32 + n);
> -
> -		return ret;
> -	}
> -
> -	/* Copy 64 bytes blocks */
> -	for (; n >= 64; n -= 64) {
> -		rte_mov64((uint8_t *)dst, (const uint8_t *)src);
> -		dst = (uint8_t *)dst + 64;
> -		src = (const uint8_t *)src + 64;
> -	}
> -
> -	/* Copy whatever left */
> -	rte_mov64((uint8_t *)dst - 64 + n,
> -			(const uint8_t *)src - 64 + n);
> -
> -	return ret;
> -}
> +extern void *
> +rte_memcpy_sse(void *dst, const void *src, size_t n);
> 
>  static inline void *
>  rte_memcpy(void *dst, const void *src, size_t n)
>  {
> -	if (!(((uintptr_t)dst | (uintptr_t)src) & ALIGNMENT_MASK))
> -		return rte_memcpy_aligned(dst, src, n);
> +	if (n <= RTE_X86_MEMCPY_THRESH)
> +		return rte_memcpy_internal(dst, src, n);
>  	else
> -		return rte_memcpy_generic(dst, src, n);
> +		return (*rte_memcpy_ptr)(dst, src, n);
>  }
> 
>  #ifdef __cplusplus
> diff --git a/lib/librte_eal/common/include/arch/x86/rte_memcpy_avx2.c b/lib/librte_eal/common/include/arch/x86/rte_memcpy_avx2.c
> new file mode 100644
> index 0000000..c83351a
> --- /dev/null
> +++ b/lib/librte_eal/common/include/arch/x86/rte_memcpy_avx2.c
> @@ -0,0 +1,291 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
> + *   All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include <rte_memcpy.h>
> +
> +#ifndef CC_SUPPORT_AVX2
> +#error CC_SUPPORT_AVX2 not defined
> +#endif
> +
> +void *
> +rte_memcpy_avx2(void *dst, const void *src, size_t n)
> +{
> +	if (!(((uintptr_t)dst | (uintptr_t)src) & 0x1F)) {
> +		void *ret = dst;
> +
> +		/* Copy size <= 16 bytes */
> +		if (n < 16) {
> +			if (n & 0x01) {
> +				*(uint8_t *)dst = *(const uint8_t *)src;
> +				src = (const uint8_t *)src + 1;
> +				dst = (uint8_t *)dst + 1;
> +			}
> +			if (n & 0x02) {
> +				*(uint16_t *)dst = *(const uint16_t *)src;
> +				src = (const uint16_t *)src + 1;
> +				dst = (uint16_t *)dst + 1;
> +			}
> +			if (n & 0x04) {
> +				*(uint32_t *)dst = *(const uint32_t *)src;
> +				src = (const uint32_t *)src + 1;
> +				dst = (uint32_t *)dst + 1;
> +			}
> +			if (n & 0x08)
> +				*(uint64_t *)dst = *(const uint64_t *)src;
> +
> +			return ret;
> +		}
> +
> +		/* Copy 16 <= size <= 32 bytes */
> +		if (n <= 32) {
> +			__m128i xmm0, xmm1;
> +			xmm0 = _mm_loadu_si128((const __m128i *)src);
> +			xmm1 = _mm_loadu_si128((const __m128i *)
> +				((const uint8_t *)src - 16 + n));
> +			_mm_storeu_si128((__m128i *)dst, xmm0);
> +			_mm_storeu_si128((__m128i *)
> +				((uint8_t *)dst - 16 + n), xmm1);
> +
> +			return ret;
> +		}
> +
> +		/* Copy 32 < size <= 64 bytes */
> +		if (n <= 64) {
> +			__m256i ymm0, ymm1;
> +			ymm0 = _mm256_loadu_si256((const __m256i *)src);
> +			ymm1 = _mm256_loadu_si256((const __m256i *)
> +				((const uint8_t *)src - 32 + n));
> +			_mm256_storeu_si256((__m256i *)dst, ymm0);
> +			_mm256_storeu_si256((__m256i *)
> +				((uint8_t *)dst - 32 + n), ymm1);
> +
> +			return ret;
> +		}
> +
> +		/* Copy 64 bytes blocks */
> +		for (; n >= 64; n -= 64) {
> +			__m256i ymm0, ymm1;
> +			ymm0 = _mm256_loadu_si256((const __m256i *)src);
> +			ymm1 = _mm256_loadu_si256((const __m256i *)
> +				((const uint8_t *)src + 32));
> +			_mm256_storeu_si256((__m256i *)dst, ymm0);
> +			_mm256_storeu_si256((__m256i *)
> +				((uint8_t *)dst + 32), ymm1);
> +			dst = (uint8_t *)dst + 64;
> +			src = (const uint8_t *)src + 64;
> +		}
> +
> +		/* Copy whatever left */
> +		__m256i ymm0, ymm1;
> +		ymm0 = _mm256_loadu_si256((const __m256i *)
> +			((const uint8_t *)src - 64 + n));
> +		ymm1 = _mm256_loadu_si256((const __m256i *)
> +			((const uint8_t *)src - 32 + n));
> +		_mm256_storeu_si256((__m256i *)((uint8_t *)dst - 64 + n), ymm0);
> +		_mm256_storeu_si256((__m256i *)((uint8_t *)dst - 32 + n), ymm1);
> +
> +		return ret;
> +	} else {
> +		uintptr_t dstu = (uintptr_t)dst;
> +		uintptr_t srcu = (uintptr_t)src;
> +		void *ret = dst;
> +		size_t dstofss;
> +		size_t bits;
> +
> +		/**
> +		 * Copy less than 16 bytes
> +		 */
> +		if (n < 16) {
> +			if (n & 0x01) {
> +				*(uint8_t *)dstu = *(const uint8_t *)srcu;
> +				srcu = (uintptr_t)((const uint8_t *)srcu + 1);
> +				dstu = (uintptr_t)((uint8_t *)dstu + 1);
> +			}
> +			if (n & 0x02) {
> +				*(uint16_t *)dstu = *(const uint16_t *)srcu;
> +				srcu = (uintptr_t)((const uint16_t *)srcu + 1);
> +				dstu = (uintptr_t)((uint16_t *)dstu + 1);
> +			}
> +			if (n & 0x04) {
> +				*(uint32_t *)dstu = *(const uint32_t *)srcu;
> +				srcu = (uintptr_t)((const uint32_t *)srcu + 1);
> +				dstu = (uintptr_t)((uint32_t *)dstu + 1);
> +			}
> +			if (n & 0x08)
> +				*(uint64_t *)dstu = *(const uint64_t *)srcu;
> +			return ret;
> +		}
> +
> +		/**
> +		 * Fast way when copy size doesn't exceed 256 bytes
> +		 */
> +		if (n <= 32) {
> +			__m128i xmm0, xmm1;
> +			xmm0 = _mm_loadu_si128((const __m128i *)src);
> +			xmm1 = _mm_loadu_si128((const __m128i *)
> +				((const uint8_t *)src - 16 + n));
> +			_mm_storeu_si128((__m128i *)dst, xmm0);
> +			_mm_storeu_si128((__m128i *)
> +				((uint8_t *)dst - 16 + n), xmm1);
> +			return ret;
> +		}
> +		if (n <= 48) {
> +			__m128i xmm0, xmm1, xmm2;
> +			xmm0 = _mm_loadu_si128((const __m128i *)src);
> +			xmm1 = _mm_loadu_si128((const __m128i *)
> +				((const uint8_t *)src + 16));
> +			xmm2 = _mm_loadu_si128((const __m128i *)
> +				((const uint8_t *)src - 16 + n));
> +			_mm_storeu_si128((__m128i *)dst, xmm0);
> +			_mm_storeu_si128((__m128i *)
> +				((uint8_t *)dst + 16), xmm1);
> +			_mm_storeu_si128((__m128i *)
> +				((uint8_t *)dst - 16 + n), xmm2);
> +			return ret;
> +		}
> +		if (n <= 64) {
> +			__m256i ymm0, ymm1;
> +			ymm0 = _mm256_loadu_si256((const __m256i *)src);
> +			ymm1 = _mm256_loadu_si256((const __m256i *)
> +				((const uint8_t *)src - 32 + n));
> +			_mm256_storeu_si256((__m256i *)dst, ymm0);
> +			_mm256_storeu_si256((__m256i *)
> +				((uint8_t *)dst - 32 + n), ymm1);
> +			return ret;
> +		}
> +		if (n <= 256) {
> +			if (n >= 128) {
> +				n -= 128;
> +				__m256i ymm0, ymm1, ymm2, ymm3;
> +				ymm0 = _mm256_loadu_si256((const __m256i *)src);
> +				ymm1 = _mm256_loadu_si256((const __m256i *)
> +					((const uint8_t *)src + 32));
> +				ymm2 = _mm256_loadu_si256((const __m256i *)
> +					((const uint8_t *)src + 2*32));
> +				ymm3 = _mm256_loadu_si256((const __m256i *)
> +					((const uint8_t *)src + 3*32));
> +				_mm256_storeu_si256((__m256i *)dst, ymm0);
> +				_mm256_storeu_si256((__m256i *)
> +					((uint8_t *)dst + 32), ymm1);
> +				_mm256_storeu_si256((__m256i *)
> +					((uint8_t *)dst + 2*32), ymm2);
> +				_mm256_storeu_si256((__m256i *)
> +					((uint8_t *)dst + 3*32), ymm3);
> +				src = (const uint8_t *)src + 128;
> +				dst = (uint8_t *)dst + 128;
> +			}
> +COPY_BLOCK_128_BACK31:
> +			if (n >= 64) {
> +				n -= 64;
> +				__m256i ymm0, ymm1;
> +				ymm0 = _mm256_loadu_si256((const __m256i *)src);
> +				ymm1 = _mm256_loadu_si256((const __m256i *)
> +					((const uint8_t *)src + 32));
> +				_mm256_storeu_si256((__m256i *)dst, ymm0);
> +				_mm256_storeu_si256((__m256i *)
> +					((uint8_t *)dst + 32), ymm1);
> +				src = (const uint8_t *)src + 64;
> +				dst = (uint8_t *)dst + 64;
> +			}
> +			if (n > 32) {
> +				__m256i ymm0, ymm1;
> +				ymm0 = _mm256_loadu_si256((const __m256i *)src);
> +				ymm1 = _mm256_loadu_si256((const __m256i *)
> +					((const uint8_t *)src - 32 + n));
> +				_mm256_storeu_si256((__m256i *)dst, ymm0);
> +				_mm256_storeu_si256((__m256i *)
> +					((uint8_t *)dst - 32 + n), ymm1);
> +				return ret;
> +			}
> +			if (n > 0) {
> +				__m256i ymm0;
> +				ymm0 = _mm256_loadu_si256((const __m256i *)
> +					((const uint8_t *)src - 32 + n));
> +				_mm256_storeu_si256((__m256i *)
> +					((uint8_t *)dst - 32 + n), ymm0);
> +			}
> +			return ret;
> +		}
> +
> +		/**
> +		 * Make store aligned when copy size exceeds 256 bytes
> +		 */
> +		dstofss = (uintptr_t)dst & 0x1F;
> +		if (dstofss > 0) {
> +			dstofss = 32 - dstofss;
> +			n -= dstofss;
> +			__m256i ymm0;
> +			ymm0 = _mm256_loadu_si256((const __m256i *)src);
> +			_mm256_storeu_si256((__m256i *)dst, ymm0);
> +			src = (const uint8_t *)src + dstofss;
> +			dst = (uint8_t *)dst + dstofss;
> +		}
> +
> +		/**
> +		 * Copy 128-byte blocks
> +		 */
> +		__m256i ymm0, ymm1, ymm2, ymm3;
> +
> +		while (n >= 128) {
> +			ymm0 = _mm256_loadu_si256((const __m256i *)
> +				((const uint8_t *)src + 0 * 32));
> +			n -= 128;
> +			ymm1 = _mm256_loadu_si256((const __m256i *)
> +				((const uint8_t *)src + 1 * 32));
> +			ymm2 = _mm256_loadu_si256((const __m256i *)
> +				((const uint8_t *)src + 2 * 32));
> +			ymm3 = _mm256_loadu_si256((const __m256i *)
> +				((const uint8_t *)src + 3 * 32));
> +			src = (const uint8_t *)src + 128;
> +			_mm256_storeu_si256((__m256i *)
> +				((uint8_t *)dst + 0 * 32), ymm0);
> +			_mm256_storeu_si256((__m256i *)
> +				((uint8_t *)dst + 1 * 32), ymm1);
> +			_mm256_storeu_si256((__m256i *)
> +				((uint8_t *)dst + 2 * 32), ymm2);
> +			_mm256_storeu_si256((__m256i *)
> +				((uint8_t *)dst + 3 * 32), ymm3);
> +			dst = (uint8_t *)dst + 128;
> +		}
> +		bits = n;
> +		n = n & 127;
> +		bits -= n;
> +		src = (const uint8_t *)src + bits;
> +		dst = (uint8_t *)dst + bits;
> +
> +		/**
> +		 * Copy whatever left
> +		 */
> +		goto COPY_BLOCK_128_BACK31;
> +	}
> +}
> diff --git a/lib/librte_eal/common/include/arch/x86/rte_memcpy_avx512f.c
> b/lib/librte_eal/common/include/arch/x86/rte_memcpy_avx512f.c
> new file mode 100644
> index 0000000..c8a9d20
> --- /dev/null
> +++ b/lib/librte_eal/common/include/arch/x86/rte_memcpy_avx512f.c
> @@ -0,0 +1,316 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
> + *   All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include <rte_memcpy.h>
> +
> +#ifndef CC_SUPPORT_AVX512F
> +#error CC_SUPPORT_AVX512F not defined
> +#endif
> +
> +void *
> +rte_memcpy_avx512f(void *dst, const void *src, size_t n)
> +{
> +	if (!(((uintptr_t)dst | (uintptr_t)src) & 0x3F)) {
> +		void *ret = dst;
> +
> +		/* Copy size <= 16 bytes */
> +		if (n < 16) {
> +			if (n & 0x01) {
> +				*(uint8_t *)dst = *(const uint8_t *)src;
> +				src = (const uint8_t *)src + 1;
> +				dst = (uint8_t *)dst + 1;
> +			}
> +			if (n & 0x02) {
> +				*(uint16_t *)dst = *(const uint16_t *)src;
> +				src = (const uint16_t *)src + 1;
> +				dst = (uint16_t *)dst + 1;
> +			}
> +			if (n & 0x04) {
> +				*(uint32_t *)dst = *(const uint32_t *)src;
> +				src = (const uint32_t *)src + 1;
> +				dst = (uint32_t *)dst + 1;
> +			}
> +			if (n & 0x08)
> +				*(uint64_t *)dst = *(const uint64_t *)src;
> +
> +			return ret;
> +		}
> +
> +		/* Copy 16 <= size <= 32 bytes */
> +		if (n <= 32) {
> +			__m128i xmm0, xmm1;
> +			xmm0 = _mm_loadu_si128((const __m128i *)src);
> +			xmm1 = _mm_loadu_si128((const __m128i *)
> +				((const uint8_t *)src - 16 + n));
> +			_mm_storeu_si128((__m128i *)dst, xmm0);
> +			_mm_storeu_si128((__m128i *)
> +				((uint8_t *)dst - 16 + n), xmm1);
> +
> +			return ret;
> +		}
> +
> +		/* Copy 32 < size <= 64 bytes */
> +		if (n <= 64) {
> +			__m256i ymm0, ymm1;
> +			ymm0 = _mm256_loadu_si256((const __m256i *)src);
> +			ymm1 = _mm256_loadu_si256((const __m256i *)
> +				((const uint8_t *)src - 32 + n));
> +			_mm256_storeu_si256((__m256i *)dst, ymm0);
> +			_mm256_storeu_si256((__m256i *)
> +				((uint8_t *)dst - 32 + n), ymm1);
> +
> +			return ret;
> +		}
> +
> +		/* Copy 64 bytes blocks */
> +		for (; n >= 64; n -= 64) {
> +			__m512i zmm0;
> +			zmm0 = _mm512_loadu_si512((const void *)src);
> +			_mm512_storeu_si512((void *)dst, zmm0);
> +			dst = (uint8_t *)dst + 64;
> +			src = (const uint8_t *)src + 64;
> +		}
> +
> +		/* Copy whatever left */
> +		__m512i zmm0;
> +		zmm0 = _mm512_loadu_si512((const void *)
> +			((const uint8_t *)src - 64 + n));
> +		_mm512_storeu_si512((void *)((uint8_t *)dst - 64 + n), zmm0);
> +
> +		return ret;
> +	} else {
> +		uintptr_t dstu = (uintptr_t)dst;
> +		uintptr_t srcu = (uintptr_t)src;
> +		void *ret = dst;
> +		size_t dstofss;
> +		size_t bits;
> +
> +		/**
> +		 * Copy less than 16 bytes
> +		 */
> +		if (n < 16) {
> +			if (n & 0x01) {
> +				*(uint8_t *)dstu = *(const uint8_t *)srcu;
> +				srcu = (uintptr_t)((const uint8_t *)srcu + 1);
> +				dstu = (uintptr_t)((uint8_t *)dstu + 1);
> +			}
> +			if (n & 0x02) {
> +				*(uint16_t *)dstu = *(const uint16_t *)srcu;
> +				srcu = (uintptr_t)((const uint16_t *)srcu + 1);
> +				dstu = (uintptr_t)((uint16_t *)dstu + 1);
> +			}
> +			if (n & 0x04) {
> +				*(uint32_t *)dstu = *(const uint32_t *)srcu;
> +				srcu = (uintptr_t)((const uint32_t *)srcu + 1);
> +				dstu = (uintptr_t)((uint32_t *)dstu + 1);
> +			}
> +			if (n & 0x08)
> +				*(uint64_t *)dstu = *(const uint64_t *)srcu;
> +			return ret;
> +		}
> +
> +		/**
> +		 * Fast way when copy size doesn't exceed 512 bytes
> +		 */
> +		if (n <= 32) {
> +			__m128i xmm0, xmm1;
> +			xmm0 = _mm_loadu_si128((const __m128i *)src);
> +			xmm1 = _mm_loadu_si128((const __m128i *)
> +				((const uint8_t *)src - 16 + n));
> +			_mm_storeu_si128((__m128i *)dst, xmm0);
> +			_mm_storeu_si128((__m128i *)
> +				((uint8_t *)dst - 16 + n), xmm1);
> +			return ret;
> +		}
> +		if (n <= 64) {
> +			__m256i ymm0, ymm1;
> +			ymm0 = _mm256_loadu_si256((const __m256i *)src);
> +			ymm1 = _mm256_loadu_si256((const __m256i *)
> +				((const uint8_t *)src - 32 + n));
> +			_mm256_storeu_si256((__m256i *)dst, ymm0);
> +			_mm256_storeu_si256((__m256i *)
> +				((uint8_t *)dst - 32 + n), ymm1);
> +			return ret;
> +		}
> +		if (n <= 512) {
> +			if (n >= 256) {
> +				n -= 256;
> +				__m512i zmm0, zmm1, zmm2, zmm3;
> +				zmm0 = _mm512_loadu_si512((const void *)src);
> +				zmm1 = _mm512_loadu_si512((const void *)
> +					((const uint8_t *)src + 64));
> +				zmm2 = _mm512_loadu_si512((const void *)
> +					((const uint8_t *)src + 2*64));
> +				zmm3 = _mm512_loadu_si512((const void *)
> +					((const uint8_t *)src + 3*64));
> +				_mm512_storeu_si512((void *)dst, zmm0);
> +				_mm512_storeu_si512((void *)
> +					((uint8_t *)dst + 64), zmm1);
> +				_mm512_storeu_si512((void *)
> +					((uint8_t *)dst + 2*64), zmm2);
> +				_mm512_storeu_si512((void *)
> +					((uint8_t *)dst + 3*64), zmm3);
> +				src = (const uint8_t *)src + 256;
> +				dst = (uint8_t *)dst + 256;
> +			}
> +			if (n >= 128) {
> +				n -= 128;
> +				__m512i zmm0, zmm1;
> +				zmm0 = _mm512_loadu_si512((const void *)src);
> +				zmm1 = _mm512_loadu_si512((const void *)
> +					((const uint8_t *)src + 64));
> +				_mm512_storeu_si512((void *)dst, zmm0);
> +				_mm512_storeu_si512((void *)
> +					((uint8_t *)dst + 64), zmm1);
> +				src = (const uint8_t *)src + 128;
> +				dst = (uint8_t *)dst + 128;
> +			}
> +COPY_BLOCK_128_BACK63:
> +			if (n > 64) {
> +				__m512i zmm0, zmm1;
> +				zmm0 = _mm512_loadu_si512((const void *)src);
> +				zmm1 = _mm512_loadu_si512((const void *)
> +					((const uint8_t *)src - 64 + n));
> +				_mm512_storeu_si512((void *)dst, zmm0);
> +				_mm512_storeu_si512((void *)
> +					((uint8_t *)dst - 64 + n), zmm1);
> +				return ret;
> +			}
> +			if (n > 0) {
> +				__m512i zmm0;
> +				zmm0 = _mm512_loadu_si512((const void *)
> +					((const uint8_t *)src - 64 + n));
> +				_mm512_storeu_si512((void *)
> +					((uint8_t *)dst - 64 + n), zmm0);
> +			}
> +			return ret;
> +		}
> +
> +		/**
> +		 * Make store aligned when copy size exceeds 512 bytes
> +		 */
> +		dstofss = ((uintptr_t)dst & 0x3F);
> +		if (dstofss > 0) {
> +			dstofss = 64 - dstofss;
> +			n -= dstofss;
> +			__m512i zmm0;
> +			zmm0 = _mm512_loadu_si512((const void *)src);
> +			_mm512_storeu_si512((void *)dst, zmm0);
> +			src = (const uint8_t *)src + dstofss;
> +			dst = (uint8_t *)dst + dstofss;
> +		}
> +
> +		/**
> +		 * Copy 512-byte blocks.
> +		 * Use copy block function for better instruction order control,
> +		 * which is important when load is unaligned.
> +		 */
> +		__m512i zmm0, zmm1, zmm2, zmm3, zmm4, zmm5, zmm6, zmm7;
> +
> +		while (n >= 512) {
> +			zmm0 = _mm512_loadu_si512((const void *)
> +				((const uint8_t *)src + 0 * 64));
> +			n -= 512;
> +			zmm1 = _mm512_loadu_si512((const void *)
> +				((const uint8_t *)src + 1 * 64));
> +			zmm2 = _mm512_loadu_si512((const void *)
> +				((const uint8_t *)src + 2 * 64));
> +			zmm3 = _mm512_loadu_si512((const void *)
> +				((const uint8_t *)src + 3 * 64));
> +			zmm4 = _mm512_loadu_si512((const void *)
> +				((const uint8_t *)src + 4 * 64));
> +			zmm5 = _mm512_loadu_si512((const void *)
> +				((const uint8_t *)src + 5 * 64));
> +			zmm6 = _mm512_loadu_si512((const void *)
> +				((const uint8_t *)src + 6 * 64));
> +			zmm7 = _mm512_loadu_si512((const void *)
> +				((const uint8_t *)src + 7 * 64));
> +			src = (const uint8_t *)src + 512;
> +			_mm512_storeu_si512((void *)
> +				((uint8_t *)dst + 0 * 64), zmm0);
> +			_mm512_storeu_si512((void *)
> +				((uint8_t *)dst + 1 * 64), zmm1);
> +			_mm512_storeu_si512((void *)
> +				((uint8_t *)dst + 2 * 64), zmm2);
> +			_mm512_storeu_si512((void *)
> +				((uint8_t *)dst + 3 * 64), zmm3);
> +			_mm512_storeu_si512((void *)
> +				((uint8_t *)dst + 4 * 64), zmm4);
> +			_mm512_storeu_si512((void *)
> +				((uint8_t *)dst + 5 * 64), zmm5);
> +			_mm512_storeu_si512((void *)
> +				((uint8_t *)dst + 6 * 64), zmm6);
> +			_mm512_storeu_si512((void *)
> +				((uint8_t *)dst + 7 * 64), zmm7);
> +			dst = (uint8_t *)dst + 512;
> +		}
> +		bits = n;
> +		n = n & 511;
> +		bits -= n;
> +		src = (const uint8_t *)src + bits;
> +		dst = (uint8_t *)dst + bits;
> +
> +		/**
> +		 * Copy 128-byte blocks.
> +		 * Use copy block function for better instruction order control,
> +		 * which is important when load is unaligned.
> +		 */
> +		if (n >= 128) {
> +			__m512i zmm0, zmm1;
> +
> +			while (n >= 128) {
> +				zmm0 = _mm512_loadu_si512((const void *)
> +					((const uint8_t *)src + 0 * 64));
> +				n -= 128;
> +				zmm1 = _mm512_loadu_si512((const void *)
> +					((const uint8_t *)src + 1 * 64));
> +				src = (const uint8_t *)src + 128;
> +				_mm512_storeu_si512((void *)
> +					((uint8_t *)dst + 0 * 64), zmm0);
> +				_mm512_storeu_si512((void *)
> +					((uint8_t *)dst + 1 * 64), zmm1);
> +				dst = (uint8_t *)dst + 128;
> +			}
> +			bits = n;
> +			n = n & 127;
> +			bits -= n;
> +			src = (const uint8_t *)src + bits;
> +			dst = (uint8_t *)dst + bits;
> +		}
> +
> +		/**
> +		 * Copy whatever left
> +		 */
> +		goto COPY_BLOCK_128_BACK63;
> +	}
> +}
> diff --git a/lib/librte_eal/common/include/arch/x86/rte_memcpy_internal.h
> b/lib/librte_eal/common/include/arch/x86/rte_memcpy_internal.h
> new file mode 100644
> index 0000000..d17fb5b
> --- /dev/null
> +++ b/lib/librte_eal/common/include/arch/x86/rte_memcpy_internal.h
> @@ -0,0 +1,909 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
> + *   All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#ifndef _RTE_MEMCPY_INTERNAL_X86_64_H_
> +#define _RTE_MEMCPY_INTERNAL_X86_64_H_
> +
> +/**
> + * @file
> + *
> + * Functions for SSE/AVX/AVX2/AVX512 implementation of memcpy().
> + */
> +
> +#include <stdio.h>
> +#include <stdint.h>
> +#include <string.h>
> +#include <rte_vect.h>
> +#include <rte_common.h>
> +
> +#ifdef __cplusplus
> +extern "C" {
> +#endif
> +
> +/**
> + * Copy bytes from one location to another. The locations must not overlap.
> + *
> + * @note This is implemented as a macro, so it's address should not be taken
> + * and care is needed as parameter expressions may be evaluated multiple times.
> + *
> + * @param dst
> + *   Pointer to the destination of the data.
> + * @param src
> + *   Pointer to the source data.
> + * @param n
> + *   Number of bytes to copy.
> + * @return
> + *   Pointer to the destination data.
> + */
> +
> +#ifdef RTE_MACHINE_CPUFLAG_AVX512F
> +
> +#define ALIGNMENT_MASK 0x3F
> +
> +/**
> + * AVX512 implementation below
> + */
> +
> +/**
> + * Copy 16 bytes from one location to another,
> + * locations should not overlap.
> + */
> +static inline void
> +rte_mov16(uint8_t *dst, const uint8_t *src)
> +{
> +	__m128i xmm0;
> +
> +	xmm0 = _mm_loadu_si128((const __m128i *)src);
> +	_mm_storeu_si128((__m128i *)dst, xmm0);
> +}
> +
> +/**
> + * Copy 32 bytes from one location to another,
> + * locations should not overlap.
> + */
> +static inline void
> +rte_mov32(uint8_t *dst, const uint8_t *src)
> +{
> +	__m256i ymm0;
> +
> +	ymm0 = _mm256_loadu_si256((const __m256i *)src);
> +	_mm256_storeu_si256((__m256i *)dst, ymm0);
> +}
> +
> +/**
> + * Copy 64 bytes from one location to another,
> + * locations should not overlap.
> + */
> +static inline void
> +rte_mov64(uint8_t *dst, const uint8_t *src)
> +{
> +	__m512i zmm0;
> +
> +	zmm0 = _mm512_loadu_si512((const void *)src);
> +	_mm512_storeu_si512((void *)dst, zmm0);
> +}
> +
> +/**
> + * Copy 128 bytes from one location to another,
> + * locations should not overlap.
> + */
> +static inline void
> +rte_mov128(uint8_t *dst, const uint8_t *src)
> +{
> +	rte_mov64(dst + 0 * 64, src + 0 * 64);
> +	rte_mov64(dst + 1 * 64, src + 1 * 64);
> +}
> +
> +/**
> + * Copy 256 bytes from one location to another,
> + * locations should not overlap.
> + */
> +static inline void
> +rte_mov256(uint8_t *dst, const uint8_t *src)
> +{
> +	rte_mov64(dst + 0 * 64, src + 0 * 64);
> +	rte_mov64(dst + 1 * 64, src + 1 * 64);
> +	rte_mov64(dst + 2 * 64, src + 2 * 64);
> +	rte_mov64(dst + 3 * 64, src + 3 * 64);
> +}
> +
> +/**
> + * Copy 128-byte blocks from one location to another,
> + * locations should not overlap.
> + */
> +static inline void
> +rte_mov128blocks(uint8_t *dst, const uint8_t *src, size_t n)
> +{
> +	__m512i zmm0, zmm1;
> +
> +	while (n >= 128) {
> +		zmm0 = _mm512_loadu_si512((const void *)(src + 0 * 64));
> +		n -= 128;
> +		zmm1 = _mm512_loadu_si512((const void *)(src + 1 * 64));
> +		src = src + 128;
> +		_mm512_storeu_si512((void *)(dst + 0 * 64), zmm0);
> +		_mm512_storeu_si512((void *)(dst + 1 * 64), zmm1);
> +		dst = dst + 128;
> +	}
> +}
> +
> +/**
> + * Copy 512-byte blocks from one location to another,
> + * locations should not overlap.
> + */
> +static inline void
> +rte_mov512blocks(uint8_t *dst, const uint8_t *src, size_t n)
> +{
> +	__m512i zmm0, zmm1, zmm2, zmm3, zmm4, zmm5, zmm6, zmm7;
> +
> +	while (n >= 512) {
> +		zmm0 = _mm512_loadu_si512((const void *)(src + 0 * 64));
> +		n -= 512;
> +		zmm1 = _mm512_loadu_si512((const void *)(src + 1 * 64));
> +		zmm2 = _mm512_loadu_si512((const void *)(src + 2 * 64));
> +		zmm3 = _mm512_loadu_si512((const void *)(src + 3 * 64));
> +		zmm4 = _mm512_loadu_si512((const void *)(src + 4 * 64));
> +		zmm5 = _mm512_loadu_si512((const void *)(src + 5 * 64));
> +		zmm6 = _mm512_loadu_si512((const void *)(src + 6 * 64));
> +		zmm7 = _mm512_loadu_si512((const void *)(src + 7 * 64));
> +		src = src + 512;
> +		_mm512_storeu_si512((void *)(dst + 0 * 64), zmm0);
> +		_mm512_storeu_si512((void *)(dst + 1 * 64), zmm1);
> +		_mm512_storeu_si512((void *)(dst + 2 * 64), zmm2);
> +		_mm512_storeu_si512((void *)(dst + 3 * 64), zmm3);
> +		_mm512_storeu_si512((void *)(dst + 4 * 64), zmm4);
> +		_mm512_storeu_si512((void *)(dst + 5 * 64), zmm5);
> +		_mm512_storeu_si512((void *)(dst + 6 * 64), zmm6);
> +		_mm512_storeu_si512((void *)(dst + 7 * 64), zmm7);
> +		dst = dst + 512;
> +	}
> +}
> +
> +static inline void *
> +rte_memcpy_generic(void *dst, const void *src, size_t n)
> +{
> +	uintptr_t dstu = (uintptr_t)dst;
> +	uintptr_t srcu = (uintptr_t)src;
> +	void *ret = dst;
> +	size_t dstofss;
> +	size_t bits;
> +
> +	/**
> +	 * Copy less than 16 bytes
> +	 */
> +	if (n < 16) {
> +		if (n & 0x01) {
> +			*(uint8_t *)dstu = *(const uint8_t *)srcu;
> +			srcu = (uintptr_t)((const uint8_t *)srcu + 1);
> +			dstu = (uintptr_t)((uint8_t *)dstu + 1);
> +		}
> +		if (n & 0x02) {
> +			*(uint16_t *)dstu = *(const uint16_t *)srcu;
> +			srcu = (uintptr_t)((const uint16_t *)srcu + 1);
> +			dstu = (uintptr_t)((uint16_t *)dstu + 1);
> +		}
> +		if (n & 0x04) {
> +			*(uint32_t *)dstu = *(const uint32_t *)srcu;
> +			srcu = (uintptr_t)((const uint32_t *)srcu + 1);
> +			dstu = (uintptr_t)((uint32_t *)dstu + 1);
> +		}
> +		if (n & 0x08)
> +			*(uint64_t *)dstu = *(const uint64_t *)srcu;
> +		return ret;
> +	}
> +
> +	/**
> +	 * Fast way when copy size doesn't exceed 512 bytes
> +	 */
> +	if (n <= 32) {
> +		rte_mov16((uint8_t *)dst, (const uint8_t *)src);
> +		rte_mov16((uint8_t *)dst - 16 + n,
> +				  (const uint8_t *)src - 16 + n);
> +		return ret;
> +	}
> +	if (n <= 64) {
> +		rte_mov32((uint8_t *)dst, (const uint8_t *)src);
> +		rte_mov32((uint8_t *)dst - 32 + n,
> +				  (const uint8_t *)src - 32 + n);
> +		return ret;
> +	}
> +	if (n <= 512) {
> +		if (n >= 256) {
> +			n -= 256;
> +			rte_mov256((uint8_t *)dst, (const uint8_t *)src);
> +			src = (const uint8_t *)src + 256;
> +			dst = (uint8_t *)dst + 256;
> +		}
> +		if (n >= 128) {
> +			n -= 128;
> +			rte_mov128((uint8_t *)dst, (const uint8_t *)src);
> +			src = (const uint8_t *)src + 128;
> +			dst = (uint8_t *)dst + 128;
> +		}
> +COPY_BLOCK_128_BACK63:
> +		if (n > 64) {
> +			rte_mov64((uint8_t *)dst, (const uint8_t *)src);
> +			rte_mov64((uint8_t *)dst - 64 + n,
> +					  (const uint8_t *)src - 64 + n);
> +			return ret;
> +		}
> +		if (n > 0)
> +			rte_mov64((uint8_t *)dst - 64 + n,
> +					  (const uint8_t *)src - 64 + n);
> +		return ret;
> +	}
> +
> +	/**
> +	 * Make store aligned when copy size exceeds 512 bytes
> +	 */
> +	dstofss = ((uintptr_t)dst & 0x3F);
> +	if (dstofss > 0) {
> +		dstofss = 64 - dstofss;
> +		n -= dstofss;
> +		rte_mov64((uint8_t *)dst, (const uint8_t *)src);
> +		src = (const uint8_t *)src + dstofss;
> +		dst = (uint8_t *)dst + dstofss;
> +	}
> +
> +	/**
> +	 * Copy 512-byte blocks.
> +	 * Use copy block function for better instruction order control,
> +	 * which is important when load is unaligned.
> +	 */
> +	rte_mov512blocks((uint8_t *)dst, (const uint8_t *)src, n);
> +	bits = n;
> +	n = n & 511;
> +	bits -= n;
> +	src = (const uint8_t *)src + bits;
> +	dst = (uint8_t *)dst + bits;
> +
> +	/**
> +	 * Copy 128-byte blocks.
> +	 * Use copy block function for better instruction order control,
> +	 * which is important when load is unaligned.
> +	 */
> +	if (n >= 128) {
> +		rte_mov128blocks((uint8_t *)dst, (const uint8_t *)src, n);
> +		bits = n;
> +		n = n & 127;
> +		bits -= n;
> +		src = (const uint8_t *)src + bits;
> +		dst = (uint8_t *)dst + bits;
> +	}
> +
> +	/**
> +	 * Copy whatever left
> +	 */
> +	goto COPY_BLOCK_128_BACK63;
> +}
> +
> +#elif defined RTE_MACHINE_CPUFLAG_AVX2
> +
> +#define ALIGNMENT_MASK 0x1F
> +
> +/**
> + * AVX2 implementation below
> + */
> +
> +/**
> + * Copy 16 bytes from one location to another,
> + * locations should not overlap.
> + */
> +static inline void
> +rte_mov16(uint8_t *dst, const uint8_t *src)
> +{
> +	__m128i xmm0;
> +
> +	xmm0 = _mm_loadu_si128((const __m128i *)src);
> +	_mm_storeu_si128((__m128i *)dst, xmm0);
> +}
> +
> +/**
> + * Copy 32 bytes from one location to another,
> + * locations should not overlap.
> + */
> +static inline void
> +rte_mov32(uint8_t *dst, const uint8_t *src)
> +{
> +	__m256i ymm0;
> +
> +	ymm0 = _mm256_loadu_si256((const __m256i *)src);
> +	_mm256_storeu_si256((__m256i *)dst, ymm0);
> +}
> +
> +/**
> + * Copy 64 bytes from one location to another,
> + * locations should not overlap.
> + */
> +static inline void
> +rte_mov64(uint8_t *dst, const uint8_t *src)
> +{
> +	rte_mov32((uint8_t *)dst + 0 * 32, (const uint8_t *)src + 0 * 32);
> +	rte_mov32((uint8_t *)dst + 1 * 32, (const uint8_t *)src + 1 * 32);
> +}
> +
> +/**
> + * Copy 128 bytes from one location to another,
> + * locations should not overlap.
> + */
> +static inline void
> +rte_mov128(uint8_t *dst, const uint8_t *src)
> +{
> +	rte_mov32((uint8_t *)dst + 0 * 32, (const uint8_t *)src + 0 * 32);
> +	rte_mov32((uint8_t *)dst + 1 * 32, (const uint8_t *)src + 1 * 32);
> +	rte_mov32((uint8_t *)dst + 2 * 32, (const uint8_t *)src + 2 * 32);
> +	rte_mov32((uint8_t *)dst + 3 * 32, (const uint8_t *)src + 3 * 32);
> +}
> +
> +/**
> + * Copy 128-byte blocks from one location to another,
> + * locations should not overlap.
> + */
> +static inline void
> +rte_mov128blocks(uint8_t *dst, const uint8_t *src, size_t n)
> +{
> +	__m256i ymm0, ymm1, ymm2, ymm3;
> +
> +	while (n >= 128) {
> +		ymm0 = _mm256_loadu_si256((const __m256i *)
> +				((const uint8_t *)src + 0 * 32));
> +		n -= 128;
> +		ymm1 = _mm256_loadu_si256((const __m256i *)
> +				((const uint8_t *)src + 1 * 32));
> +		ymm2 = _mm256_loadu_si256((const __m256i *)
> +				((const uint8_t *)src + 2 * 32));
> +		ymm3 = _mm256_loadu_si256((const __m256i *)
> +				((const uint8_t *)src + 3 * 32));
> +		src = (const uint8_t *)src + 128;
> +		_mm256_storeu_si256((__m256i *)((uint8_t *)dst + 0 * 32), ymm0);
> +		_mm256_storeu_si256((__m256i *)((uint8_t *)dst + 1 * 32), ymm1);
> +		_mm256_storeu_si256((__m256i *)((uint8_t *)dst + 2 * 32), ymm2);
> +		_mm256_storeu_si256((__m256i *)((uint8_t *)dst + 3 * 32), ymm3);
> +		dst = (uint8_t *)dst + 128;
> +	}
> +}
> +
> +static inline void *
> +rte_memcpy_generic(void *dst, const void *src, size_t n)
> +{
> +	uintptr_t dstu = (uintptr_t)dst;
> +	uintptr_t srcu = (uintptr_t)src;
> +	void *ret = dst;
> +	size_t dstofss;
> +	size_t bits;
> +
> +	/**
> +	 * Copy less than 16 bytes
> +	 */
> +	if (n < 16) {
> +		if (n & 0x01) {
> +			*(uint8_t *)dstu = *(const uint8_t *)srcu;
> +			srcu = (uintptr_t)((const uint8_t *)srcu + 1);
> +			dstu = (uintptr_t)((uint8_t *)dstu + 1);
> +		}
> +		if (n & 0x02) {
> +			*(uint16_t *)dstu = *(const uint16_t *)srcu;
> +			srcu = (uintptr_t)((const uint16_t *)srcu + 1);
> +			dstu = (uintptr_t)((uint16_t *)dstu + 1);
> +		}
> +		if (n & 0x04) {
> +			*(uint32_t *)dstu = *(const uint32_t *)srcu;
> +			srcu = (uintptr_t)((const uint32_t *)srcu + 1);
> +			dstu = (uintptr_t)((uint32_t *)dstu + 1);
> +		}
> +		if (n & 0x08)
> +			*(uint64_t *)dstu = *(const uint64_t *)srcu;
> +		return ret;
> +	}
> +
> +	/**
> +	 * Fast way when copy size doesn't exceed 256 bytes
> +	 */
> +	if (n <= 32) {
> +		rte_mov16((uint8_t *)dst, (const uint8_t *)src);
> +		rte_mov16((uint8_t *)dst - 16 + n,
> +				(const uint8_t *)src - 16 + n);
> +		return ret;
> +	}
> +	if (n <= 48) {
> +		rte_mov16((uint8_t *)dst, (const uint8_t *)src);
> +		rte_mov16((uint8_t *)dst + 16, (const uint8_t *)src + 16);
> +		rte_mov16((uint8_t *)dst - 16 + n,
> +				(const uint8_t *)src - 16 + n);
> +		return ret;
> +	}
> +	if (n <= 64) {
> +		rte_mov32((uint8_t *)dst, (const uint8_t *)src);
> +		rte_mov32((uint8_t *)dst - 32 + n,
> +				(const uint8_t *)src - 32 + n);
> +		return ret;
> +	}
> +	if (n <= 256) {
> +		if (n >= 128) {
> +			n -= 128;
> +			rte_mov128((uint8_t *)dst, (const uint8_t *)src);
> +			src = (const uint8_t *)src + 128;
> +			dst = (uint8_t *)dst + 128;
> +		}
> +COPY_BLOCK_128_BACK31:
> +		if (n >= 64) {
> +			n -= 64;
> +			rte_mov64((uint8_t *)dst, (const uint8_t *)src);
> +			src = (const uint8_t *)src + 64;
> +			dst = (uint8_t *)dst + 64;
> +		}
> +		if (n > 32) {
> +			rte_mov32((uint8_t *)dst, (const uint8_t *)src);
> +			rte_mov32((uint8_t *)dst - 32 + n,
> +					(const uint8_t *)src - 32 + n);
> +			return ret;
> +		}
> +		if (n > 0) {
> +			rte_mov32((uint8_t *)dst - 32 + n,
> +					(const uint8_t *)src - 32 + n);
> +		}
> +		return ret;
> +	}
> +
> +	/**
> +	 * Make store aligned when copy size exceeds 256 bytes
> +	 */
> +	dstofss = (uintptr_t)dst & 0x1F;
> +	if (dstofss > 0) {
> +		dstofss = 32 - dstofss;
> +		n -= dstofss;
> +		rte_mov32((uint8_t *)dst, (const uint8_t *)src);
> +		src = (const uint8_t *)src + dstofss;
> +		dst = (uint8_t *)dst + dstofss;
> +	}
> +
> +	/**
> +	 * Copy 128-byte blocks
> +	 */
> +	rte_mov128blocks((uint8_t *)dst, (const uint8_t *)src, n);
> +	bits = n;
> +	n = n & 127;
> +	bits -= n;
> +	src = (const uint8_t *)src + bits;
> +	dst = (uint8_t *)dst + bits;
> +
> +	/**
> +	 * Copy whatever left
> +	 */
> +	goto COPY_BLOCK_128_BACK31;
> +}
> +
> +#else /* RTE_MACHINE_CPUFLAG */
> +
> +#define ALIGNMENT_MASK 0x0F
> +
> +/**
> + * SSE & AVX implementation below
> + */
> +
> +/**
> + * Copy 16 bytes from one location to another,
> + * locations should not overlap.
> + */
> +static inline void
> +rte_mov16(uint8_t *dst, const uint8_t *src)
> +{
> +	__m128i xmm0;
> +
> +	xmm0 = _mm_loadu_si128((const __m128i *)(const __m128i *)src);
> +	_mm_storeu_si128((__m128i *)dst, xmm0);
> +}
> +
> +/**
> + * Copy 32 bytes from one location to another,
> + * locations should not overlap.
> + */
> +static inline void
> +rte_mov32(uint8_t *dst, const uint8_t *src)
> +{
> +	rte_mov16((uint8_t *)dst + 0 * 16, (const uint8_t *)src + 0 * 16);
> +	rte_mov16((uint8_t *)dst + 1 * 16, (const uint8_t *)src + 1 * 16);
> +}
> +
> +/**
> + * Copy 64 bytes from one location to another,
> + * locations should not overlap.
> + */
> +static inline void
> +rte_mov64(uint8_t *dst, const uint8_t *src)
> +{
> +	rte_mov16((uint8_t *)dst + 0 * 16, (const uint8_t *)src + 0 * 16);
> +	rte_mov16((uint8_t *)dst + 1 * 16, (const uint8_t *)src + 1 * 16);
> +	rte_mov16((uint8_t *)dst + 2 * 16, (const uint8_t *)src + 2 * 16);
> +	rte_mov16((uint8_t *)dst + 3 * 16, (const uint8_t *)src + 3 * 16);
> +}
> +
> +/**
> + * Copy 128 bytes from one location to another,
> + * locations should not overlap.
> + */
> +static inline void
> +rte_mov128(uint8_t *dst, const uint8_t *src)
> +{
> +	rte_mov16((uint8_t *)dst + 0 * 16, (const uint8_t *)src + 0 * 16);
> +	rte_mov16((uint8_t *)dst + 1 * 16, (const uint8_t *)src + 1 * 16);
> +	rte_mov16((uint8_t *)dst + 2 * 16, (const uint8_t *)src + 2 * 16);
> +	rte_mov16((uint8_t *)dst + 3 * 16, (const uint8_t *)src + 3 * 16);
> +	rte_mov16((uint8_t *)dst + 4 * 16, (const uint8_t *)src + 4 * 16);
> +	rte_mov16((uint8_t *)dst + 5 * 16, (const uint8_t *)src + 5 * 16);
> +	rte_mov16((uint8_t *)dst + 6 * 16, (const uint8_t *)src + 6 * 16);
> +	rte_mov16((uint8_t *)dst + 7 * 16, (const uint8_t *)src + 7 * 16);
> +}
> +
> +/**
> + * Copy 256 bytes from one location to another,
> + * locations should not overlap.
> + */
> +static inline void
> +rte_mov256(uint8_t *dst, const uint8_t *src)
> +{
> +	rte_mov16((uint8_t *)dst + 0 * 16, (const uint8_t *)src + 0 * 16);
> +	rte_mov16((uint8_t *)dst + 1 * 16, (const uint8_t *)src + 1 * 16);
> +	rte_mov16((uint8_t *)dst + 2 * 16, (const uint8_t *)src + 2 * 16);
> +	rte_mov16((uint8_t *)dst + 3 * 16, (const uint8_t *)src + 3 * 16);
> +	rte_mov16((uint8_t *)dst + 4 * 16, (const uint8_t *)src + 4 * 16);
> +	rte_mov16((uint8_t *)dst + 5 * 16, (const uint8_t *)src + 5 * 16);
> +	rte_mov16((uint8_t *)dst + 6 * 16, (const uint8_t *)src + 6 * 16);
> +	rte_mov16((uint8_t *)dst + 7 * 16, (const uint8_t *)src + 7 * 16);
> +	rte_mov16((uint8_t *)dst + 8 * 16, (const uint8_t *)src + 8 * 16);
> +	rte_mov16((uint8_t *)dst + 9 * 16, (const uint8_t *)src + 9 * 16);
> +	rte_mov16((uint8_t *)dst + 10 * 16, (const uint8_t *)src + 10 * 16);
> +	rte_mov16((uint8_t *)dst + 11 * 16, (const uint8_t *)src + 11 * 16);
> +	rte_mov16((uint8_t *)dst + 12 * 16, (const uint8_t *)src + 12 * 16);
> +	rte_mov16((uint8_t *)dst + 13 * 16, (const uint8_t *)src + 13 * 16);
> +	rte_mov16((uint8_t *)dst + 14 * 16, (const uint8_t *)src + 14 * 16);
> +	rte_mov16((uint8_t *)dst + 15 * 16, (const uint8_t *)src + 15 * 16);
> +}
> +
> +/**
> + * Macro for copying unaligned block from one location to another with constant load offset,
> + * 47 bytes leftover maximum,
> + * locations should not overlap.
> + * Requirements:
> + * - Store is aligned
> + * - Load offset is <offset>, which must be immediate value within [1, 15]
> + * - For <src>, make sure <offset> bit backwards & <16 - offset> bit forwards are available for loading
> + * - <dst>, <src>, <len> must be variables
> + * - __m128i <xmm0> ~ <xmm8> must be pre-defined
> + */
> +#define MOVEUNALIGNED_LEFT47_IMM(dst, src, len, offset)                                                     \
> +__extension__ ({                                                                                            \
> +    int tmp;                                                                                                \
> +    while (len >= 128 + 16 - offset) {                                                                      \
> +        xmm0 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 0 * 16));                  \
> +        len -= 128;                                                                                         \
> +        xmm1 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 1 * 16));                  \
> +        xmm2 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 2 * 16));                  \
> +        xmm3 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 3 * 16));                  \
> +        xmm4 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 4 * 16));                  \
> +        xmm5 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 5 * 16));                  \
> +        xmm6 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 6 * 16));                  \
> +        xmm7 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 7 * 16));                  \
> +        xmm8 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 8 * 16));                  \
> +        src = (const uint8_t *)src + 128;                                                                   \
> +        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 0 * 16), _mm_alignr_epi8(xmm1, xmm0, offset));        \
> +        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 1 * 16), _mm_alignr_epi8(xmm2, xmm1, offset));        \
> +        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 2 * 16), _mm_alignr_epi8(xmm3, xmm2, offset));        \
> +        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 3 * 16), _mm_alignr_epi8(xmm4, xmm3, offset));        \
> +        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 4 * 16), _mm_alignr_epi8(xmm5, xmm4, offset));        \
> +        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 5 * 16), _mm_alignr_epi8(xmm6, xmm5, offset));        \
> +        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 6 * 16), _mm_alignr_epi8(xmm7, xmm6, offset));        \
> +        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 7 * 16), _mm_alignr_epi8(xmm8, xmm7, offset));        \
> +        dst = (uint8_t *)dst + 128;                                                                         \
> +    }                                                                                                       \
> +    tmp = len;                                                                                              \
> +    len = ((len - 16 + offset) & 127) + 16 - offset;                                                        \
> +    tmp -= len;                                                                                             \
> +    src = (const uint8_t *)src + tmp;                                                                       \
> +    dst = (uint8_t *)dst + tmp;                                                                             \
> +    if (len >= 32 + 16 - offset) {                                                                          \
> +        while (len >= 32 + 16 - offset) {                                                                   \
> +            xmm0 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 0 * 16));              \
> +            len -= 32;                                                                                      \
> +            xmm1 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 1 * 16));              \
> +            xmm2 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 2 * 16));              \
> +            src = (const uint8_t *)src + 32;                                                                \
> +            _mm_storeu_si128((__m128i *)((uint8_t *)dst + 0 * 16), _mm_alignr_epi8(xmm1, xmm0, offset));    \
> +            _mm_storeu_si128((__m128i *)((uint8_t *)dst + 1 * 16), _mm_alignr_epi8(xmm2, xmm1, offset));    \
> +            dst = (uint8_t *)dst + 32;                                                                      \
> +        }                                                                                                   \
> +        tmp = len;                                                                                          \
> +        len = ((len - 16 + offset) & 31) + 16 - offset;                                                     \
> +        tmp -= len;                                                                                         \
> +        src = (const uint8_t *)src + tmp;                                                                   \
> +        dst = (uint8_t *)dst + tmp;                                                                         \
> +    }                                                                                                       \
> +})
> +
> +/**
> + * Macro for copying unaligned block from one location to another,
> + * 47 bytes leftover maximum,
> + * locations should not overlap.
> + * Use switch here because the aligning instruction requires immediate value for shift count.
> + * Requirements:
> + * - Store is aligned
> + * - Load offset is <offset>, which must be within [1, 15]
> + * - For <src>, make sure <offset> bit backwards & <16 - offset> bit forwards are available for loading
> + * - <dst>, <src>, <len> must be variables
> + * - __m128i <xmm0> ~ <xmm8> used in MOVEUNALIGNED_LEFT47_IMM must be pre-defined
> + */
> +#define MOVEUNALIGNED_LEFT47(dst, src, len, offset)                   \
> +__extension__ ({                                                      \
> +    switch (offset) {                                                 \
> +    case 0x01: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x01); break;    \
> +    case 0x02: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x02); break;    \
> +    case 0x03: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x03); break;    \
> +    case 0x04: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x04); break;    \
> +    case 0x05: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x05); break;    \
> +    case 0x06: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x06); break;    \
> +    case 0x07: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x07); break;    \
> +    case 0x08: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x08); break;    \
> +    case 0x09: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x09); break;    \
> +    case 0x0A: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0A); break;    \
> +    case 0x0B: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0B); break;    \
> +    case 0x0C: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0C); break;    \
> +    case 0x0D: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0D); break;    \
> +    case 0x0E: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0E); break;    \
> +    case 0x0F: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0F); break;    \
> +    default:;                                                         \
> +    }                                                                 \
> +})
> +
> +static inline void *
> +rte_memcpy_generic(void *dst, const void *src, size_t n)
> +{
> +	__m128i xmm0, xmm1, xmm2, xmm3, xmm4, xmm5, xmm6, xmm7, xmm8;
> +	uintptr_t dstu = (uintptr_t)dst;
> +	uintptr_t srcu = (uintptr_t)src;
> +	void *ret = dst;
> +	size_t dstofss;
> +	size_t srcofs;
> +
> +	/**
> +	 * Copy less than 16 bytes
> +	 */
> +	if (n < 16) {
> +		if (n & 0x01) {
> +			*(uint8_t *)dstu = *(const uint8_t *)srcu;
> +			srcu = (uintptr_t)((const uint8_t *)srcu + 1);
> +			dstu = (uintptr_t)((uint8_t *)dstu + 1);
> +		}
> +		if (n & 0x02) {
> +			*(uint16_t *)dstu = *(const uint16_t *)srcu;
> +			srcu = (uintptr_t)((const uint16_t *)srcu + 1);
> +			dstu = (uintptr_t)((uint16_t *)dstu + 1);
> +		}
> +		if (n & 0x04) {
> +			*(uint32_t *)dstu = *(const uint32_t *)srcu;
> +			srcu = (uintptr_t)((const uint32_t *)srcu + 1);
> +			dstu = (uintptr_t)((uint32_t *)dstu + 1);
> +		}
> +		if (n & 0x08)
> +			*(uint64_t *)dstu = *(const uint64_t *)srcu;
> +		return ret;
> +	}
> +
> +	/**
> +	 * Fast way when copy size doesn't exceed 512 bytes
> +	 */
> +	if (n <= 32) {
> +		rte_mov16((uint8_t *)dst, (const uint8_t *)src);
> +		rte_mov16((uint8_t *)dst - 16 + n,
> +				(const uint8_t *)src - 16 + n);
> +		return ret;
> +	}
> +	if (n <= 48) {
> +		rte_mov32((uint8_t *)dst, (const uint8_t *)src);
> +		rte_mov16((uint8_t *)dst - 16 + n,
> +				(const uint8_t *)src - 16 + n);
> +		return ret;
> +	}
> +	if (n <= 64) {
> +		rte_mov32((uint8_t *)dst, (const uint8_t *)src);
> +		rte_mov16((uint8_t *)dst + 32, (const uint8_t *)src + 32);
> +		rte_mov16((uint8_t *)dst - 16 + n,
> +				(const uint8_t *)src - 16 + n);
> +		return ret;
> +	}
> +	if (n <= 128)
> +		goto COPY_BLOCK_128_BACK15;
> +	if (n <= 512) {
> +		if (n >= 256) {
> +			n -= 256;
> +			rte_mov128((uint8_t *)dst, (const uint8_t *)src);
> +			rte_mov128((uint8_t *)dst + 128,
> +					(const uint8_t *)src + 128);
> +			src = (const uint8_t *)src + 256;
> +			dst = (uint8_t *)dst + 256;
> +		}
> +COPY_BLOCK_255_BACK15:
> +		if (n >= 128) {
> +			n -= 128;
> +			rte_mov128((uint8_t *)dst, (const uint8_t *)src);
> +			src = (const uint8_t *)src + 128;
> +			dst = (uint8_t *)dst + 128;
> +		}
> +COPY_BLOCK_128_BACK15:
> +		if (n >= 64) {
> +			n -= 64;
> +			rte_mov64((uint8_t *)dst, (const uint8_t *)src);
> +			src = (const uint8_t *)src + 64;
> +			dst = (uint8_t *)dst + 64;
> +		}
> +COPY_BLOCK_64_BACK15:
> +		if (n >= 32) {
> +			n -= 32;
> +			rte_mov32((uint8_t *)dst, (const uint8_t *)src);
> +			src = (const uint8_t *)src + 32;
> +			dst = (uint8_t *)dst + 32;
> +		}
> +		if (n > 16) {
> +			rte_mov16((uint8_t *)dst, (const uint8_t *)src);
> +			rte_mov16((uint8_t *)dst - 16 + n,
> +					(const uint8_t *)src - 16 + n);
> +			return ret;
> +		}
> +		if (n > 0) {
> +			rte_mov16((uint8_t *)dst - 16 + n,
> +					(const uint8_t *)src - 16 + n);
> +		}
> +		return ret;
> +	}
> +
> +	/**
> +	 * Make store aligned when copy size exceeds 512 bytes,
> +	 * and make sure the first 15 bytes are copied, because
> +	 * unaligned copy functions require up to 15 bytes
> +	 * backwards access.
> +	 */
> +	dstofss = (uintptr_t)dst & 0x0F;
> +	if (dstofss > 0) {
> +		dstofss = 16 - dstofss + 16;
> +		n -= dstofss;
> +		rte_mov32((uint8_t *)dst, (const uint8_t *)src);
> +		src = (const uint8_t *)src + dstofss;
> +		dst = (uint8_t *)dst + dstofss;
> +	}
> +	srcofs = ((uintptr_t)src & 0x0F);
> +
> +	/**
> +	 * For aligned copy
> +	 */
> +	if (srcofs == 0) {
> +		/**
> +		 * Copy 256-byte blocks
> +		 */
> +		for (; n >= 256; n -= 256) {
> +			rte_mov256((uint8_t *)dst, (const uint8_t *)src);
> +			dst = (uint8_t *)dst + 256;
> +			src = (const uint8_t *)src + 256;
> +		}
> +
> +		/**
> +		 * Copy whatever left
> +		 */
> +		goto COPY_BLOCK_255_BACK15;
> +	}
> +
> +	/**
> +	 * For copy with unaligned load
> +	 */
> +	MOVEUNALIGNED_LEFT47(dst, src, n, srcofs);
> +
> +	/**
> +	 * Copy whatever left
> +	 */
> +	goto COPY_BLOCK_64_BACK15;
> +}
> +
> +#endif /* RTE_MACHINE_CPUFLAG */
> +
> +static inline void *
> +rte_memcpy_aligned(void *dst, const void *src, size_t n)
> +{
> +	void *ret = dst;
> +
> +	/* Copy size <= 16 bytes */
> +	if (n < 16) {
> +		if (n & 0x01) {
> +			*(uint8_t *)dst = *(const uint8_t *)src;
> +			src = (const uint8_t *)src + 1;
> +			dst = (uint8_t *)dst + 1;
> +		}
> +		if (n & 0x02) {
> +			*(uint16_t *)dst = *(const uint16_t *)src;
> +			src = (const uint16_t *)src + 1;
> +			dst = (uint16_t *)dst + 1;
> +		}
> +		if (n & 0x04) {
> +			*(uint32_t *)dst = *(const uint32_t *)src;
> +			src = (const uint32_t *)src + 1;
> +			dst = (uint32_t *)dst + 1;
> +		}
> +		if (n & 0x08)
> +			*(uint64_t *)dst = *(const uint64_t *)src;
> +
> +		return ret;
> +	}
> +
> +	/* Copy 16 <= size <= 32 bytes */
> +	if (n <= 32) {
> +		rte_mov16((uint8_t *)dst, (const uint8_t *)src);
> +		rte_mov16((uint8_t *)dst - 16 + n,
> +				(const uint8_t *)src - 16 + n);
> +
> +		return ret;
> +	}
> +
> +	/* Copy 32 < size <= 64 bytes */
> +	if (n <= 64) {
> +		rte_mov32((uint8_t *)dst, (const uint8_t *)src);
> +		rte_mov32((uint8_t *)dst - 32 + n,
> +				(const uint8_t *)src - 32 + n);
> +
> +		return ret;
> +	}
> +
> +	/* Copy 64 bytes blocks */
> +	for (; n >= 64; n -= 64) {
> +		rte_mov64((uint8_t *)dst, (const uint8_t *)src);
> +		dst = (uint8_t *)dst + 64;
> +		src = (const uint8_t *)src + 64;
> +	}
> +
> +	/* Copy whatever left */
> +	rte_mov64((uint8_t *)dst - 64 + n,
> +			(const uint8_t *)src - 64 + n);
> +
> +	return ret;
> +}
> +
> +static inline void *
> +rte_memcpy_internal(void *dst, const void *src, size_t n)
> +{
> +	if (!(((uintptr_t)dst | (uintptr_t)src) & ALIGNMENT_MASK))
> +		return rte_memcpy_aligned(dst, src, n);
> +	else
> +		return rte_memcpy_generic(dst, src, n);
> +}
> +
> +#ifdef __cplusplus
> +}
> +#endif
> +
> +#endif /* _RTE_MEMCPY_INTERNAL_X86_64_H_ */
> diff --git a/lib/librte_eal/common/include/arch/x86/rte_memcpy_sse.c b/lib/librte_eal/common/include/arch/x86/rte_memcpy_sse.c
> new file mode 100644
> index 0000000..2532696
> --- /dev/null
> +++ b/lib/librte_eal/common/include/arch/x86/rte_memcpy_sse.c
> @@ -0,0 +1,585 @@
> +/*-
> + *   BSD LICENSE
> + *
> + *   Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
> + *   All rights reserved.
> + *
> + *   Redistribution and use in source and binary forms, with or without
> + *   modification, are permitted provided that the following conditions
> + *   are met:
> + *
> + *     * Redistributions of source code must retain the above copyright
> + *       notice, this list of conditions and the following disclaimer.
> + *     * Redistributions in binary form must reproduce the above copyright
> + *       notice, this list of conditions and the following disclaimer in
> + *       the documentation and/or other materials provided with the
> + *       distribution.
> + *     * Neither the name of Intel Corporation nor the names of its
> + *       contributors may be used to endorse or promote products derived
> + *       from this software without specific prior written permission.
> + *
> + *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
> + *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
> + *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
> + *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
> + *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
> + *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
> + *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
> + *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
> + *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
> + *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
> + *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
> + */
> +
> +#include <rte_memcpy.h>
> +
> +/**
> + * Macro for copying unaligned block from one location to another with constant load offset,
> + * 47 bytes leftover maximum,
> + * locations should not overlap.
> + * Requirements:
> + * - Store is aligned
> + * - Load offset is <offset>, which must be immediate value within [1, 15]
> + * - For <src>, make sure <offset> bit backwards & <16 - offset> bit forwards are available for loading
> + * - <dst>, <src>, <len> must be variables
> + * - __m128i <xmm0> ~ <xmm8> must be pre-defined
> + */
> +#define MOVEUNALIGNED_LEFT47_IMM(dst, src, len, offset)                                                     \
> +__extension__ ({                                                                                            \
> +    int tmp;                                                                                                \
> +    while (len >= 128 + 16 - offset) {                                                                      \
> +        xmm0 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 0 * 16));                  \
> +        len -= 128;                                                                                         \
> +        xmm1 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 1 * 16));                  \
> +        xmm2 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 2 * 16));                  \
> +        xmm3 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 3 * 16));                  \
> +        xmm4 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 4 * 16));                  \
> +        xmm5 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 5 * 16));                  \
> +        xmm6 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 6 * 16));                  \
> +        xmm7 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 7 * 16));                  \
> +        xmm8 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 8 * 16));                  \
> +        src = (const uint8_t *)src + 128;                                                                   \
> +        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 0 * 16), _mm_alignr_epi8(xmm1, xmm0, offset));        \
> +        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 1 * 16), _mm_alignr_epi8(xmm2, xmm1, offset));        \
> +        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 2 * 16), _mm_alignr_epi8(xmm3, xmm2, offset));        \
> +        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 3 * 16), _mm_alignr_epi8(xmm4, xmm3, offset));        \
> +        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 4 * 16), _mm_alignr_epi8(xmm5, xmm4, offset));        \
> +        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 5 * 16), _mm_alignr_epi8(xmm6, xmm5, offset));        \
> +        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 6 * 16), _mm_alignr_epi8(xmm7, xmm6, offset));        \
> +        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 7 * 16), _mm_alignr_epi8(xmm8, xmm7, offset));        \
> +        dst = (uint8_t *)dst + 128;                                                                         \
> +    }                                                                                                       \
> +    tmp = len;                                                                                              \
> +    len = ((len - 16 + offset) & 127) + 16 - offset;                                                        \
> +    tmp -= len;                                                                                             \
> +    src = (const uint8_t *)src + tmp;                                                                       \
> +    dst = (uint8_t *)dst + tmp;                                                                             \
> +    if (len >= 32 + 16 - offset) {                                                                          \
> +        while (len >= 32 + 16 - offset) {                                                                   \
> +            xmm0 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 0 * 16));              \
> +            len -= 32;                                                                                      \
> +            xmm1 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 1 * 16));              \
> +            xmm2 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 2 * 16));              \
> +            src = (const uint8_t *)src + 32;                                                                \
> +            _mm_storeu_si128((__m128i *)((uint8_t *)dst + 0 * 16), _mm_alignr_epi8(xmm1, xmm0, offset));    \
> +            _mm_storeu_si128((__m128i *)((uint8_t *)dst + 1 * 16), _mm_alignr_epi8(xmm2, xmm1, offset));    \
> +            dst = (uint8_t *)dst + 32;                                                                      \
> +        }                                                                                                   \
> +        tmp = len;                                                                                          \
> +        len = ((len - 16 + offset) & 31) + 16 - offset;                                                     \
> +        tmp -= len;                                                                                         \
> +        src = (const uint8_t *)src + tmp;                                                                   \
> +        dst = (uint8_t *)dst + tmp;                                                                         \
> +    }                                                                                                       \
> +})
> +
> +/**
> + * Macro for copying unaligned block from one location to another,
> + * 47 bytes leftover maximum,
> + * locations should not overlap.
> + * Use switch here because the aligning instruction requires immediate value for shift count.
> + * Requirements:
> + * - Store is aligned
> + * - Load offset is <offset>, which must be within [1, 15]
> + * - For <src>, make sure <offset> bit backwards & <16 - offset> bit forwards are available for loading
> + * - <dst>, <src>, <len> must be variables
> + * - __m128i <xmm0> ~ <xmm8> used in MOVEUNALIGNED_LEFT47_IMM must be pre-defined
> + */
> +#define MOVEUNALIGNED_LEFT47(dst, src, len, offset)                   \
> +__extension__ ({                                                      \
> +    switch (offset) {                                                 \
> +    case 0x01: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x01); break;    \
> +    case 0x02: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x02); break;    \
> +    case 0x03: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x03); break;    \
> +    case 0x04: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x04); break;    \
> +    case 0x05: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x05); break;    \
> +    case 0x06: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x06); break;    \
> +    case 0x07: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x07); break;    \
> +    case 0x08: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x08); break;    \
> +    case 0x09: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x09); break;    \
> +    case 0x0A: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0A); break;    \
> +    case 0x0B: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0B); break;    \
> +    case 0x0C: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0C); break;    \
> +    case 0x0D: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0D); break;    \
> +    case 0x0E: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0E); break;    \
> +    case 0x0F: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0F); break;    \
> +    default:;                                                         \
> +    }                                                                 \
> +})
> +
> +void *
> +rte_memcpy_sse(void *dst, const void *src, size_t n)
> +{
> +	if (!(((uintptr_t)dst | (uintptr_t)src) & 0x0F)) {
> +		void *ret = dst;
> +
> +		/* Copy size <= 16 bytes */
> +		if (n < 16) {
> +			if (n & 0x01) {
> +				*(uint8_t *)dst = *(const uint8_t *)src;
> +				src = (const uint8_t *)src + 1;
> +				dst = (uint8_t *)dst + 1;
> +			}
> +			if (n & 0x02) {
> +				*(uint16_t *)dst = *(const uint16_t *)src;
> +				src = (const uint16_t *)src + 1;
> +				dst = (uint16_t *)dst + 1;
> +			}
> +			if (n & 0x04) {
> +				*(uint32_t *)dst = *(const uint32_t *)src;
> +				src = (const uint32_t *)src + 1;
> +				dst = (uint32_t *)dst + 1;
> +			}
> +			if (n & 0x08)
> +				*(uint64_t *)dst = *(const uint64_t *)src;
> +
> +			return ret;
> +		}
> +
> +		/* Copy 16 <= size <= 32 bytes */
> +		if (n <= 32) {
> +			__m128i xmm0, xmm1;
> +			xmm0 = _mm_loadu_si128((const __m128i *)src);
> +			xmm1 = _mm_loadu_si128((const __m128i *)
> +				((const uint8_t *)src - 16 + n));
> +			_mm_storeu_si128((__m128i *)dst, xmm0);
> +			_mm_storeu_si128((__m128i *)
> +				((uint8_t *)dst - 16 + n), xmm1);
> +
> +			return ret;
> +		}
> +
> +		/* Copy 32 < size <= 64 bytes */
> +		if (n <= 64) {
> +			__m128i xmm0, xmm1, xmm2, xmm3;
> +			xmm0 = _mm_loadu_si128((const __m128i *)src);
> +			xmm1 = _mm_loadu_si128((const __m128i *)
> +				((const uint8_t *)src + 16));
> +			xmm2 = _mm_loadu_si128((const __m128i *)
> +				((const uint8_t *)src - 32 + n));
> +			xmm3 = _mm_loadu_si128((const __m128i *)
> +				((const uint8_t *)src - 16 + n));
> +			_mm_storeu_si128((__m128i *)dst, xmm0);
> +			_mm_storeu_si128((__m128i *)
> +				((uint8_t *)dst + 16), xmm1);
> +			_mm_storeu_si128((__m128i *)
> +				((uint8_t *)dst - 32 + n), xmm2);
> +			_mm_storeu_si128((__m128i *)
> +				((uint8_t *)dst - 16 + n), xmm3);
> +
> +			return ret;
> +		}
> +
> +		/* Copy 64 bytes blocks */
> +		for (; n >= 64; n -= 64) {
> +			__m128i xmm0, xmm1, xmm2, xmm3;
> +			xmm0 = _mm_loadu_si128((const __m128i *)src);
> +			xmm1 = _mm_loadu_si128((const __m128i *)
> +				((const uint8_t *)src + 16));
> +			xmm2 = _mm_loadu_si128((const __m128i *)
> +				((const uint8_t *)src + 2*16));
> +			xmm3 = _mm_loadu_si128((const __m128i *)
> +				((const uint8_t *)src + 3*16));
> +			_mm_storeu_si128((__m128i *)dst, xmm0);
> +			_mm_storeu_si128((__m128i *)
> +				((uint8_t *)dst + 16), xmm1);
> +			_mm_storeu_si128((__m128i *)
> +				((uint8_t *)dst + 2*16), xmm2);
> +			_mm_storeu_si128((__m128i *)
> +				((uint8_t *)dst + 3*16), xmm3);
> +			dst = (uint8_t *)dst + 64;
> +			src = (const uint8_t *)src + 64;
> +		}
> +
> +		/* Copy whatever left */
> +		__m128i xmm0, xmm1, xmm2, xmm3;
> +		xmm0 = _mm_loadu_si128((const __m128i *)
> +			((const uint8_t *)src - 64 + n));
> +		xmm1 = _mm_loadu_si128((const __m128i *)
> +			((const uint8_t *)src - 48 + n));
> +		xmm2 = _mm_loadu_si128((const __m128i *)
> +			((const uint8_t *)src - 32 + n));
> +		xmm3 = _mm_loadu_si128((const __m128i *)
> +			((const uint8_t *)src - 16 + n));
> +		_mm_storeu_si128((__m128i *)((uint8_t *)dst - 64 + n), xmm0);
> +		_mm_storeu_si128((__m128i *)((uint8_t *)dst - 48 + n), xmm1);
> +		_mm_storeu_si128((__m128i *)((uint8_t *)dst - 32 + n), xmm2);
> +		_mm_storeu_si128((__m128i *)((uint8_t *)dst - 16 + n), xmm3);
> +
> +		return ret;
> +	} else {
> +		__m128i xmm0, xmm1, xmm2, xmm3, xmm4, xmm5, xmm6, xmm7, xmm8;
> +		uintptr_t dstu = (uintptr_t)dst;
> +		uintptr_t srcu = (uintptr_t)src;
> +		void *ret = dst;
> +		size_t dstofss;
> +		size_t srcofs;
> +
> +		/**
> +		 * Copy less than 16 bytes
> +		 */
> +		if (n < 16) {
> +			if (n & 0x01) {
> +				*(uint8_t *)dstu = *(const uint8_t *)srcu;
> +				srcu = (uintptr_t)((const uint8_t *)srcu + 1);
> +				dstu = (uintptr_t)((uint8_t *)dstu + 1);
> +			}
> +			if (n & 0x02) {
> +				*(uint16_t *)dstu = *(const uint16_t *)srcu;
> +				srcu = (uintptr_t)((const uint16_t *)srcu + 1);
> +				dstu = (uintptr_t)((uint16_t *)dstu + 1);
> +			}
> +			if (n & 0x04) {
> +				*(uint32_t *)dstu = *(const uint32_t *)srcu;
> +				srcu = (uintptr_t)((const uint32_t *)srcu + 1);
> +				dstu = (uintptr_t)((uint32_t *)dstu + 1);
> +			}
> +			if (n & 0x08)
> +				*(uint64_t *)dstu = *(const uint64_t *)srcu;
> +			return ret;
> +		}
> +
> +		/**
> +		 * Fast way when copy size doesn't exceed 512 bytes
> +		 */
> +		if (n <= 32) {
> +			__m128i xmm0, xmm1;
> +			xmm0 = _mm_loadu_si128((const __m128i *)src);
> +			xmm1 = _mm_loadu_si128((const __m128i *)
> +				((const uint8_t *)src - 16 + n));
> +			_mm_storeu_si128((__m128i *)dst, xmm0);
> +			_mm_storeu_si128((__m128i *)
> +				((uint8_t *)dst - 16 + n), xmm1);
> +			return ret;
> +		}
> +		if (n <= 48) {
> +			__m128i xmm0, xmm1, xmm2;
> +			xmm0 = _mm_loadu_si128((const __m128i *)src);
> +			xmm1 = _mm_loadu_si128((const __m128i *)
> +				((const uint8_t *)src + 16));
> +			xmm2 = _mm_loadu_si128((const __m128i *)
> +				((const uint8_t *)src - 16 + n));
> +			_mm_storeu_si128((__m128i *)dst, xmm0);
> +			_mm_storeu_si128((__m128i *)
> +				((uint8_t *)dst + 16), xmm1);
> +			_mm_storeu_si128((__m128i *)
> +				((uint8_t *)dst - 16 + n), xmm2);
> +			return ret;
> +		}
> +		if (n <= 64) {
> +			__m128i xmm0, xmm1, xmm2, xmm3;
> +			xmm0 = _mm_loadu_si128((const __m128i *)src);
> +			xmm1 = _mm_loadu_si128((const __m128i *)
> +				((const uint8_t *)src + 16));
> +			xmm2 = _mm_loadu_si128((const __m128i *)
> +				((const uint8_t *)src + 32));
> +			xmm3 = _mm_loadu_si128((const __m128i *)
> +				((const uint8_t *)src - 16 + n));
> +			_mm_storeu_si128((__m128i *)dst, xmm0);
> +			_mm_storeu_si128((__m128i *)
> +				((uint8_t *)dst + 16), xmm1);
> +			_mm_storeu_si128((__m128i *)
> +				((uint8_t *)dst + 32), xmm2);
> +			_mm_storeu_si128((__m128i *)
> +				((uint8_t *)dst - 16 + n), xmm3);
> +			return ret;
> +		}
> +		if (n <= 128)
> +			goto COPY_BLOCK_128_BACK15;
> +		if (n <= 512) {
> +			if (n >= 256) {
> +				n -= 256;
> +				__m128i xmm0, xmm1;
> +				xmm0 = _mm_loadu_si128((const __m128i *)src);
> +				xmm1 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 16));
> +				_mm_storeu_si128((__m128i *)dst, xmm0);
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 16), xmm1);
> +				xmm0 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 2*16));
> +				xmm1 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 3*16));
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 2*16), xmm0);
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 3*16), xmm1);
> +				xmm0 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 4*16));
> +				xmm1 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 5*16));
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 4*16), xmm0);
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 5*16), xmm1);
> +				xmm0 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 6*16));
> +				xmm1 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 7*16));
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 6*16), xmm0);
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 7*16), xmm1);
> +
> +				xmm0 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 128));
> +				xmm1 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 128 + 16));
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 128), xmm0);
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 128 + 16), xmm1);
> +				xmm0 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 128 + 2*16));
> +				xmm1 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 128 + 3*16));
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 128 + 2*16), xmm0);
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 128 + 3*16), xmm1);
> +				xmm0 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 128 + 4*16));
> +				xmm1 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 128 + 5*16));
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 128 + 4*16), xmm0);
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 128 + 5*16), xmm1);
> +				xmm0 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 128 + 6*16));
> +				xmm1 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 128 + 7*16));
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 128 + 6*16), xmm0);
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 128 + 7*16), xmm1);
> +				src = (const uint8_t *)src + 256;
> +				dst = (uint8_t *)dst + 256;
> +			}
> +COPY_BLOCK_255_BACK15:
> +			if (n >= 128) {
> +				n -= 128;
> +				__m128i xmm0, xmm1;
> +				xmm0 = _mm_loadu_si128((const __m128i *)src);
> +				xmm1 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 16));
> +				_mm_storeu_si128((__m128i *)dst, xmm0);
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 16), xmm1);
> +				xmm0 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 2*16));
> +				xmm1 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 3*16));
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 2*16), xmm0);
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 3*16), xmm1);
> +				xmm0 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 4*16));
> +				xmm1 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 5*16));
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 4*16), xmm0);
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 5*16), xmm1);
> +				xmm0 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 6*16));
> +				xmm1 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 7*16));
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 6*16), xmm0);
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 7*16), xmm1);
> +				src = (const uint8_t *)src + 128;
> +				dst = (uint8_t *)dst + 128;
> +			}
> +COPY_BLOCK_128_BACK15:
> +			if (n >= 64) {
> +				n -= 64;
> +				__m128i xmm0, xmm1;
> +				xmm0 = _mm_loadu_si128((const __m128i *)src);
> +				xmm1 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 16));
> +				_mm_storeu_si128((__m128i *)dst, xmm0);
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 16), xmm1);
> +
> +				xmm0 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 2*16));
> +				xmm1 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 3*16));
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 2*16), xmm0);
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 3*16), xmm1);
> +				src = (const uint8_t *)src + 64;
> +				dst = (uint8_t *)dst + 64;
> +			}
> +COPY_BLOCK_64_BACK15:
> +			if (n >= 32) {
> +				n -= 32;
> +				__m128i xmm0, xmm1;
> +				xmm0 = _mm_loadu_si128((const __m128i *)src);
> +				xmm1 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 16));
> +				_mm_storeu_si128((__m128i *)dst, xmm0);
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 16), xmm1);
> +				src = (const uint8_t *)src + 32;
> +				dst = (uint8_t *)dst + 32;
> +			}
> +			if (n > 16) {
> +				__m128i xmm0, xmm1;
> +				xmm0 = _mm_loadu_si128((const __m128i *)src);
> +				xmm1 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src - 16 + n));
> +				_mm_storeu_si128((__m128i *)dst, xmm0);
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst - 16 + n), xmm1);
> +				return ret;
> +			}
> +			if (n > 0) {
> +				__m128i xmm0;
> +				xmm0 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src - 16 + n));
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst - 16 + n), xmm0);
> +			}
> +			return ret;
> +		}
> +
> +		/**
> +		 * Make store aligned when copy size exceeds 512 bytes,
> +		 * and make sure the first 15 bytes are copied, because
> +		 * unaligned copy functions require up to 15 bytes
> +		 * backwards access.
> +		 */
> +		dstofss = (uintptr_t)dst & 0x0F;
> +		if (dstofss > 0) {
> +			dstofss = 16 - dstofss + 16;
> +			n -= dstofss;
> +			__m128i xmm0, xmm1;
> +			xmm0 = _mm_loadu_si128((const __m128i *)src);
> +			xmm1 = _mm_loadu_si128((const __m128i *)
> +				((const uint8_t *)src + 16));
> +			_mm_storeu_si128((__m128i *)dst, xmm0);
> +			_mm_storeu_si128((__m128i *)
> +				((uint8_t *)dst + 16), xmm1);
> +			src = (const uint8_t *)src + dstofss;
> +			dst = (uint8_t *)dst + dstofss;
> +		}
> +		srcofs = ((uintptr_t)src & 0x0F);
> +
> +		/**
> +		 * For aligned copy
> +		 */
> +		if (srcofs == 0) {
> +			/**
> +			 * Copy 256-byte blocks
> +			 */
> +			for (; n >= 256; n -= 256) {
> +				__m128i xmm0, xmm1;
> +				xmm0 = _mm_loadu_si128((const __m128i *)src);
> +				xmm1 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 16));
> +				_mm_storeu_si128((__m128i *)dst, xmm0);
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 16), xmm1);
> +				xmm0 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 2*16));
> +				xmm1 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 3*16));
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 2*16), xmm0);
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 3*16), xmm1);
> +				xmm0 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 4*16));
> +				xmm1 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 5*16));
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 4*16), xmm0);
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 5*16), xmm1);
> +				xmm0 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 6*16));
> +				xmm1 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 7*16));
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 6*16), xmm0);
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 7*16), xmm1);
> +
> +				xmm0 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 8*16));
> +				xmm1 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 9*16));
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 8*16), xmm0);
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 9*16), xmm1);
> +				xmm0 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 10*16));
> +				xmm1 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 11*16));
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 10*16), xmm0);
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 11*16), xmm1);
> +				xmm0 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 12*16));
> +				xmm1 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 13*16));
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 12*16), xmm0);
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 13*16), xmm1);
> +				xmm0 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 14*16));
> +				xmm1 = _mm_loadu_si128((const __m128i *)
> +					((const uint8_t *)src + 15*16));
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 14*16), xmm0);
> +				_mm_storeu_si128((__m128i *)
> +					((uint8_t *)dst + 15*16), xmm1);
> +				dst = (uint8_t *)dst + 256;
> +				src = (const uint8_t *)src + 256;
> +			}
> +
> +			/**
> +			 * Copy whatever left
> +			 */
> +			goto COPY_BLOCK_255_BACK15;
> +		}
> +
> +		/**
> +		 * For copy with unaligned load
> +		 */
> +		MOVEUNALIGNED_LEFT47(dst, src, n, srcofs);
> +
> +		/**
> +		 * Copy whatever left
> +		 */
> +		goto COPY_BLOCK_64_BACK15;
> +	}
> +}
> diff --git a/lib/librte_eal/linuxapp/eal/Makefile b/lib/librte_eal/linuxapp/eal/Makefile
> index 90bca4d..88d3298 100644
> --- a/lib/librte_eal/linuxapp/eal/Makefile
> +++ b/lib/librte_eal/linuxapp/eal/Makefile
> @@ -40,6 +40,7 @@ VPATH += $(RTE_SDK)/lib/librte_eal/common/arch/$(ARCH_DIR)
>  LIBABIVER := 5
> 
>  VPATH += $(RTE_SDK)/lib/librte_eal/common
> +VPATH += $(RTE_SDK)/lib/librte_eal/common/include/arch/$(ARCH_DIR)
> 
>  CFLAGS += -I$(SRCDIR)/include
>  CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common
> @@ -105,6 +106,22 @@ SRCS-$(CONFIG_RTE_EXEC_ENV_LINUXAPP) += rte_service.c
>  SRCS-$(CONFIG_RTE_EXEC_ENV_LINUXAPP) += rte_cpuflags.c
>  SRCS-$(CONFIG_RTE_ARCH_X86) += rte_spinlock.c
> 
> +# for run-time dispatch of memcpy
> +SRCS-$(CONFIG_RTE_ARCH_X86) += rte_memcpy.c
> +SRCS-$(CONFIG_RTE_ARCH_X86) += rte_memcpy_sse.c
> +
> +# if the compiler supports AVX512, add avx512 file
> +ifneq ($(filter $(MACHINE_CFLAGS),CC_SUPPORT_AVX512F),)
> +SRCS-$(CONFIG_RTE_ARCH_X86) += rte_memcpy_avx512f.c
> +CFLAGS_rte_memcpy_avx512f.o += -mavx512f
> +endif
> +
> +# if the compiler supports AVX2, add avx2 file
> +ifneq ($(filter $(MACHINE_CFLAGS),CC_SUPPORT_AVX2),)
> +SRCS-$(CONFIG_RTE_ARCH_X86) += rte_memcpy_avx2.c
> +CFLAGS_rte_memcpy_avx2.o += -mavx2
> +endif
> +
>  CFLAGS_eal_common_cpuflags.o := $(CPUFLAGS_LIST)
> 
>  CFLAGS_eal.o := -D_GNU_SOURCE
> diff --git a/mk/rte.cpuflags.mk b/mk/rte.cpuflags.mk
> index a813c91..8a7a1e7 100644
> --- a/mk/rte.cpuflags.mk
> +++ b/mk/rte.cpuflags.mk
> @@ -134,6 +134,20 @@ endif
> 
>  MACHINE_CFLAGS += $(addprefix -DRTE_MACHINE_CPUFLAG_,$(CPUFLAGS))
> 
> +# Check if the compiler suppoerts AVX512
> +CC_SUPPORT_AVX512F := $(shell $(CC) -mavx512f -dM -E - < /dev/null 2>&1 | grep -q AVX512 && echo 1)
> +ifeq ($(CC_SUPPORT_AVX512F),1)
> +ifeq ($(CONFIG_RTE_ENABLE_AVX512),y)
> +MACHINE_CFLAGS += -DCC_SUPPORT_AVX512F
> +endif
> +endif
> +
> +# Check if the compiler supports AVX2
> +CC_SUPPORT_AVX2 := $(shell $(CC) -mavx2 -dM -E - < /dev/null 2>&1 | grep -q AVX2 && echo 1)
> +ifeq ($(CC_SUPPORT_AVX2),1)
> +MACHINE_CFLAGS += -DCC_SUPPORT_AVX2
> +endif
> +
>  # To strip whitespace
>  comma:= ,
>  empty:=
> --
> 2.7.4
  
Li, Xiaoyun Oct. 2, 2017, 11:10 p.m. UTC | #2
Hi

> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Tuesday, October 3, 2017 00:39
> To: Li, Xiaoyun <xiaoyun.li@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang, Helin
> <helin.zhang@intel.com>; dev@dpdk.org
> Subject: RE: [PATCH v4 1/3] eal/x86: run-time dispatch over memcpy
> 
> 
> 
> > -----Original Message-----
> > From: Li, Xiaoyun
> > Sent: Monday, October 2, 2017 5:13 PM
> > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Richardson,
> Bruce <bruce.richardson@intel.com>
> > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang, Helin
> <helin.zhang@intel.com>; dev@dpdk.org; Li, Xiaoyun <xiaoyun.li@intel.com>
> > Subject: [PATCH v4 1/3] eal/x86: run-time dispatch over memcpy
> >
> > This patch dynamically selects functions of memcpy at run-time based
> > on CPU flags that current machine supports. This patch uses function
> > pointers which are bind to the relative functions at constrctor time.
> > In addition, AVX512 instructions set would be compiled only if users
> > config it enabled and the compiler supports it.
> >
> > Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> > ---
> > v2
> > * Use gcc function multi-versioning to avoid compilation issues.
> > * Add macros for AVX512 and AVX2. Only if users enable AVX512 and the
> > compiler supports it, the AVX512 codes would be compiled. Only if the
> > compiler supports AVX2, the AVX2 codes would be compiled.
> >
> > v3
> > * Reduce function calls via only keep rte_memcpy_xxx.
> > * Add conditions that when copy size is small, use inline code path.
> > Otherwise, use dynamic code path.
> > * To support attribute target, clang version must be greater than 3.7.
> > Otherwise, would choose SSE/AVX code path, the same as before.
> > * Move two mocro functions to the top of the code since they would be
> > used in inline SSE/AVX and dynamic SSE/AVX codes.
> >
> > v4
> > * Modify rte_memcpy.h to several .c files and modify makefiles to compile
> > AVX2 and AVX512 files.
> 
> Could you explain to me why instead of reusing existing rte_memcpy() code
> to generate _sse/_avx2/ax512f flavors you keep pushing changes with 3
> separate implementations?
> Obviously that is much more expensive in terms of maintenance and doesn't
> look like
> feasible solution to me.
> Is existing rte_memcpy() implementation is not good enough in terms of
> functionality and/or performance?
> If so, can you outline these problems and try to fix them first.
> Konstantin
> 

I just change many small functions to one function in those 3 separate functions.
Because the existing codes are totally inline, including rte_memcpy() itself. So the compilation will 
change all rte_memcpy() calls into the basic codes like xmm0=xxx.

The existing codes in this way are OK. But when run-time, it will bring lots of function calls
and cause perf drop.


Best Regards,
Xiaoyun Li
  
Ananyev, Konstantin Oct. 3, 2017, 11:15 a.m. UTC | #3
Hi,

> 
> Hi
> 
> > -----Original Message-----
> > From: Ananyev, Konstantin
> > Sent: Tuesday, October 3, 2017 00:39
> > To: Li, Xiaoyun <xiaoyun.li@intel.com>; Richardson, Bruce
> > <bruce.richardson@intel.com>
> > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang, Helin
> > <helin.zhang@intel.com>; dev@dpdk.org
> > Subject: RE: [PATCH v4 1/3] eal/x86: run-time dispatch over memcpy
> >
> >
> >
> > > -----Original Message-----
> > > From: Li, Xiaoyun
> > > Sent: Monday, October 2, 2017 5:13 PM
> > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>; Richardson,
> > Bruce <bruce.richardson@intel.com>
> > > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang, Helin
> > <helin.zhang@intel.com>; dev@dpdk.org; Li, Xiaoyun <xiaoyun.li@intel.com>
> > > Subject: [PATCH v4 1/3] eal/x86: run-time dispatch over memcpy
> > >
> > > This patch dynamically selects functions of memcpy at run-time based
> > > on CPU flags that current machine supports. This patch uses function
> > > pointers which are bind to the relative functions at constrctor time.
> > > In addition, AVX512 instructions set would be compiled only if users
> > > config it enabled and the compiler supports it.
> > >
> > > Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> > > ---
> > > v2
> > > * Use gcc function multi-versioning to avoid compilation issues.
> > > * Add macros for AVX512 and AVX2. Only if users enable AVX512 and the
> > > compiler supports it, the AVX512 codes would be compiled. Only if the
> > > compiler supports AVX2, the AVX2 codes would be compiled.
> > >
> > > v3
> > > * Reduce function calls via only keep rte_memcpy_xxx.
> > > * Add conditions that when copy size is small, use inline code path.
> > > Otherwise, use dynamic code path.
> > > * To support attribute target, clang version must be greater than 3.7.
> > > Otherwise, would choose SSE/AVX code path, the same as before.
> > > * Move two mocro functions to the top of the code since they would be
> > > used in inline SSE/AVX and dynamic SSE/AVX codes.
> > >
> > > v4
> > > * Modify rte_memcpy.h to several .c files and modify makefiles to compile
> > > AVX2 and AVX512 files.
> >
> > Could you explain to me why instead of reusing existing rte_memcpy() code
> > to generate _sse/_avx2/ax512f flavors you keep pushing changes with 3
> > separate implementations?
> > Obviously that is much more expensive in terms of maintenance and doesn't
> > look like
> > feasible solution to me.
> > Is existing rte_memcpy() implementation is not good enough in terms of
> > functionality and/or performance?
> > If so, can you outline these problems and try to fix them first.
> > Konstantin
> >
> 
> I just change many small functions to one function in those 3 separate functions.

Yes, so with what you suggest  we'll have 4 implementations  for rte_memcpy to support.
That's very expensive terms of maintenance and I believe totally unnecessary.

> Because the existing codes are totally inline, including rte_memcpy() itself. So the compilation will
> change all rte_memcpy() calls into the basic codes like xmm0=xxx.
> 
> The existing codes in this way are OK. 

Good.

>But when run-time, it will bring lots of function calls
> and cause perf drop.

I believe it wouldn't if we do it properly.
All internal functions (mov16, mov32, etc.) will still be unlined by the compiler for each flavor (sse/avx2/etc.) -
have a look at the patch I sent.

Konstantin
  
Li, Xiaoyun Oct. 3, 2017, 11:39 a.m. UTC | #4
Hi
You mean just use rte_memcpy_internal in rte_memcpy_avx2, rte_memcpy_avx512?
But if RTE_MACHINE_CPUFLAGS_AVX2 means only whether the compiler supports avx2, then internal would only compiled
With avx2 codes, then cannot choose other code path. What if the HW cannot support avx2?
If RTE_MACHINE_CPUFLAGS_AVX2 means as before, suggests whether both compiler and HW supports avx2. Then the function
has no difference right now.
The mocro is determined at compilation time. But selection is hoped to be at runtime.
Did I consider something wrong?

Best Regards,
Xiaoyun Li




> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Tuesday, October 3, 2017 19:16
> To: Li, Xiaoyun <xiaoyun.li@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang, Helin
> <helin.zhang@intel.com>; dev@dpdk.org
> Subject: RE: [PATCH v4 1/3] eal/x86: run-time dispatch over memcpy
> 
> Hi,
> 
> >
> > Hi
> >
> > > -----Original Message-----
> > > From: Ananyev, Konstantin
> > > Sent: Tuesday, October 3, 2017 00:39
> > > To: Li, Xiaoyun <xiaoyun.li@intel.com>; Richardson, Bruce
> > > <bruce.richardson@intel.com>
> > > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang, Helin
> > > <helin.zhang@intel.com>; dev@dpdk.org
> > > Subject: RE: [PATCH v4 1/3] eal/x86: run-time dispatch over memcpy
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: Li, Xiaoyun
> > > > Sent: Monday, October 2, 2017 5:13 PM
> > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > > > Richardson,
> > > Bruce <bruce.richardson@intel.com>
> > > > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang, Helin
> > > <helin.zhang@intel.com>; dev@dpdk.org; Li, Xiaoyun
> > > <xiaoyun.li@intel.com>
> > > > Subject: [PATCH v4 1/3] eal/x86: run-time dispatch over memcpy
> > > >
> > > > This patch dynamically selects functions of memcpy at run-time
> > > > based on CPU flags that current machine supports. This patch uses
> > > > function pointers which are bind to the relative functions at constrctor
> time.
> > > > In addition, AVX512 instructions set would be compiled only if
> > > > users config it enabled and the compiler supports it.
> > > >
> > > > Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> > > > ---
> > > > v2
> > > > * Use gcc function multi-versioning to avoid compilation issues.
> > > > * Add macros for AVX512 and AVX2. Only if users enable AVX512 and
> > > > the compiler supports it, the AVX512 codes would be compiled. Only
> > > > if the compiler supports AVX2, the AVX2 codes would be compiled.
> > > >
> > > > v3
> > > > * Reduce function calls via only keep rte_memcpy_xxx.
> > > > * Add conditions that when copy size is small, use inline code path.
> > > > Otherwise, use dynamic code path.
> > > > * To support attribute target, clang version must be greater than 3.7.
> > > > Otherwise, would choose SSE/AVX code path, the same as before.
> > > > * Move two mocro functions to the top of the code since they would
> > > > be used in inline SSE/AVX and dynamic SSE/AVX codes.
> > > >
> > > > v4
> > > > * Modify rte_memcpy.h to several .c files and modify makefiles to
> > > > compile
> > > > AVX2 and AVX512 files.
> > >
> > > Could you explain to me why instead of reusing existing rte_memcpy()
> > > code to generate _sse/_avx2/ax512f flavors you keep pushing changes
> > > with 3 separate implementations?
> > > Obviously that is much more expensive in terms of maintenance and
> > > doesn't look like feasible solution to me.
> > > Is existing rte_memcpy() implementation is not good enough in terms
> > > of functionality and/or performance?
> > > If so, can you outline these problems and try to fix them first.
> > > Konstantin
> > >
> >
> > I just change many small functions to one function in those 3 separate
> functions.
> 
> Yes, so with what you suggest  we'll have 4 implementations  for rte_memcpy
> to support.
> That's very expensive terms of maintenance and I believe totally unnecessary.
> 
> > Because the existing codes are totally inline, including rte_memcpy()
> > itself. So the compilation will change all rte_memcpy() calls into the basic
> codes like xmm0=xxx.
> >
> > The existing codes in this way are OK.
> 
> Good.
> 
> >But when run-time, it will bring lots of function calls  and cause perf
> >drop.
> 
> I believe it wouldn't if we do it properly.
> All internal functions (mov16, mov32, etc.) will still be unlined by the
> compiler for each flavor (sse/avx2/etc.) - have a look at the patch I sent.
> 
> Konstantin
  
Ananyev, Konstantin Oct. 3, 2017, 12:12 p.m. UTC | #5
> 
> Hi
> You mean just use rte_memcpy_internal in rte_memcpy_avx2, rte_memcpy_avx512?

Yes, exactly and for rte_memcpy_sse() too.
Basically we for rte_memcpy_avx512() we force compiler to use AVX512F path inside rte_memcpy_iternal(),
for rte_memcpy_avx2() we use AVX2 path inside rte_memcpy_internal(), etc.
To do that we setup:
CFLAGS_rte_memcpy_avx512f.o += -mavx512f
CFLAGS_rte_memcpy_avx512f.o += -DRTE_MACHINE_CPUFLAG_AVX512F
inside the Makefile.

For rte_memcpy_avx2() we force compiler to use AVX2 path inside rte_memcpy_internal(), etc.

> But if RTE_MACHINE_CPUFLAGS_AVX2 means only whether the compiler supports avx2, then internal would only compiled
> With avx2 codes, then cannot choose other code path. What if the HW cannot support avx2?

If the HW can't support AVX2 then rte_memcpy_init() just wouldn't select rte_memcpy_avx2(),
it would select rte_memcpy_sse() instead:

if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2)) {...} -
that is a runtime check that underlying HW does support AVX2.

Konstantin

> If RTE_MACHINE_CPUFLAGS_AVX2 means as before, suggests whether both compiler and HW supports avx2. Then the function
> has no difference right now.
> The mocro is determined at compilation time. But selection is hoped to be at runtime.
> Did I consider something wrong?
> 
> Best Regards,
> Xiaoyun Li
> 
> 
> 
> 
> > -----Original Message-----
> > From: Ananyev, Konstantin
> > Sent: Tuesday, October 3, 2017 19:16
> > To: Li, Xiaoyun <xiaoyun.li@intel.com>; Richardson, Bruce
> > <bruce.richardson@intel.com>
> > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang, Helin
> > <helin.zhang@intel.com>; dev@dpdk.org
> > Subject: RE: [PATCH v4 1/3] eal/x86: run-time dispatch over memcpy
> >
> > Hi,
> >
> > >
> > > Hi
> > >
> > > > -----Original Message-----
> > > > From: Ananyev, Konstantin
> > > > Sent: Tuesday, October 3, 2017 00:39
> > > > To: Li, Xiaoyun <xiaoyun.li@intel.com>; Richardson, Bruce
> > > > <bruce.richardson@intel.com>
> > > > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang, Helin
> > > > <helin.zhang@intel.com>; dev@dpdk.org
> > > > Subject: RE: [PATCH v4 1/3] eal/x86: run-time dispatch over memcpy
> > > >
> > > >
> > > >
> > > > > -----Original Message-----
> > > > > From: Li, Xiaoyun
> > > > > Sent: Monday, October 2, 2017 5:13 PM
> > > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > > > > Richardson,
> > > > Bruce <bruce.richardson@intel.com>
> > > > > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang, Helin
> > > > <helin.zhang@intel.com>; dev@dpdk.org; Li, Xiaoyun
> > > > <xiaoyun.li@intel.com>
> > > > > Subject: [PATCH v4 1/3] eal/x86: run-time dispatch over memcpy
> > > > >
> > > > > This patch dynamically selects functions of memcpy at run-time
> > > > > based on CPU flags that current machine supports. This patch uses
> > > > > function pointers which are bind to the relative functions at constrctor
> > time.
> > > > > In addition, AVX512 instructions set would be compiled only if
> > > > > users config it enabled and the compiler supports it.
> > > > >
> > > > > Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> > > > > ---
> > > > > v2
> > > > > * Use gcc function multi-versioning to avoid compilation issues.
> > > > > * Add macros for AVX512 and AVX2. Only if users enable AVX512 and
> > > > > the compiler supports it, the AVX512 codes would be compiled. Only
> > > > > if the compiler supports AVX2, the AVX2 codes would be compiled.
> > > > >
> > > > > v3
> > > > > * Reduce function calls via only keep rte_memcpy_xxx.
> > > > > * Add conditions that when copy size is small, use inline code path.
> > > > > Otherwise, use dynamic code path.
> > > > > * To support attribute target, clang version must be greater than 3.7.
> > > > > Otherwise, would choose SSE/AVX code path, the same as before.
> > > > > * Move two mocro functions to the top of the code since they would
> > > > > be used in inline SSE/AVX and dynamic SSE/AVX codes.
> > > > >
> > > > > v4
> > > > > * Modify rte_memcpy.h to several .c files and modify makefiles to
> > > > > compile
> > > > > AVX2 and AVX512 files.
> > > >
> > > > Could you explain to me why instead of reusing existing rte_memcpy()
> > > > code to generate _sse/_avx2/ax512f flavors you keep pushing changes
> > > > with 3 separate implementations?
> > > > Obviously that is much more expensive in terms of maintenance and
> > > > doesn't look like feasible solution to me.
> > > > Is existing rte_memcpy() implementation is not good enough in terms
> > > > of functionality and/or performance?
> > > > If so, can you outline these problems and try to fix them first.
> > > > Konstantin
> > > >
> > >
> > > I just change many small functions to one function in those 3 separate
> > functions.
> >
> > Yes, so with what you suggest  we'll have 4 implementations  for rte_memcpy
> > to support.
> > That's very expensive terms of maintenance and I believe totally unnecessary.
> >
> > > Because the existing codes are totally inline, including rte_memcpy()
> > > itself. So the compilation will change all rte_memcpy() calls into the basic
> > codes like xmm0=xxx.
> > >
> > > The existing codes in this way are OK.
> >
> > Good.
> >
> > >But when run-time, it will bring lots of function calls  and cause perf
> > >drop.
> >
> > I believe it wouldn't if we do it properly.
> > All internal functions (mov16, mov32, etc.) will still be unlined by the
> > compiler for each flavor (sse/avx2/etc.) - have a look at the patch I sent.
> >
> > Konstantin
  
Li, Xiaoyun Oct. 3, 2017, 12:23 p.m. UTC | #6
OK. Got it. Thanks!

> -----Original Message-----
> From: Ananyev, Konstantin
> Sent: Tuesday, October 3, 2017 20:12
> To: Li, Xiaoyun <xiaoyun.li@intel.com>; Richardson, Bruce
> <bruce.richardson@intel.com>
> Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang, Helin
> <helin.zhang@intel.com>; dev@dpdk.org
> Subject: RE: [PATCH v4 1/3] eal/x86: run-time dispatch over memcpy
> 
> 
> 
> >
> > Hi
> > You mean just use rte_memcpy_internal in rte_memcpy_avx2,
> rte_memcpy_avx512?
> 
> Yes, exactly and for rte_memcpy_sse() too.
> Basically we for rte_memcpy_avx512() we force compiler to use AVX512F
> path inside rte_memcpy_iternal(), for rte_memcpy_avx2() we use AVX2 path
> inside rte_memcpy_internal(), etc.
> To do that we setup:
> CFLAGS_rte_memcpy_avx512f.o += -mavx512f
> CFLAGS_rte_memcpy_avx512f.o += -DRTE_MACHINE_CPUFLAG_AVX512F
> inside the Makefile.
> 
> For rte_memcpy_avx2() we force compiler to use AVX2 path inside
> rte_memcpy_internal(), etc.
> 
> > But if RTE_MACHINE_CPUFLAGS_AVX2 means only whether the compiler
> > supports avx2, then internal would only compiled With avx2 codes, then
> cannot choose other code path. What if the HW cannot support avx2?
> 
> If the HW can't support AVX2 then rte_memcpy_init() just wouldn't select
> rte_memcpy_avx2(), it would select rte_memcpy_sse() instead:
> 
> if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2)) {...} - that is a runtime
> check that underlying HW does support AVX2.
> 
> Konstantin
> 
> > If RTE_MACHINE_CPUFLAGS_AVX2 means as before, suggests whether
> both
> > compiler and HW supports avx2. Then the function has no difference right
> now.
> > The mocro is determined at compilation time. But selection is hoped to be
> at runtime.
> > Did I consider something wrong?
> >
> > Best Regards,
> > Xiaoyun Li
> >
> >
> >
> >
> > > -----Original Message-----
> > > From: Ananyev, Konstantin
> > > Sent: Tuesday, October 3, 2017 19:16
> > > To: Li, Xiaoyun <xiaoyun.li@intel.com>; Richardson, Bruce
> > > <bruce.richardson@intel.com>
> > > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang, Helin
> > > <helin.zhang@intel.com>; dev@dpdk.org
> > > Subject: RE: [PATCH v4 1/3] eal/x86: run-time dispatch over memcpy
> > >
> > > Hi,
> > >
> > > >
> > > > Hi
> > > >
> > > > > -----Original Message-----
> > > > > From: Ananyev, Konstantin
> > > > > Sent: Tuesday, October 3, 2017 00:39
> > > > > To: Li, Xiaoyun <xiaoyun.li@intel.com>; Richardson, Bruce
> > > > > <bruce.richardson@intel.com>
> > > > > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang, Helin
> > > > > <helin.zhang@intel.com>; dev@dpdk.org
> > > > > Subject: RE: [PATCH v4 1/3] eal/x86: run-time dispatch over
> > > > > memcpy
> > > > >
> > > > >
> > > > >
> > > > > > -----Original Message-----
> > > > > > From: Li, Xiaoyun
> > > > > > Sent: Monday, October 2, 2017 5:13 PM
> > > > > > To: Ananyev, Konstantin <konstantin.ananyev@intel.com>;
> > > > > > Richardson,
> > > > > Bruce <bruce.richardson@intel.com>
> > > > > > Cc: Lu, Wenzhuo <wenzhuo.lu@intel.com>; Zhang, Helin
> > > > > <helin.zhang@intel.com>; dev@dpdk.org; Li, Xiaoyun
> > > > > <xiaoyun.li@intel.com>
> > > > > > Subject: [PATCH v4 1/3] eal/x86: run-time dispatch over memcpy
> > > > > >
> > > > > > This patch dynamically selects functions of memcpy at run-time
> > > > > > based on CPU flags that current machine supports. This patch
> > > > > > uses function pointers which are bind to the relative
> > > > > > functions at constrctor
> > > time.
> > > > > > In addition, AVX512 instructions set would be compiled only if
> > > > > > users config it enabled and the compiler supports it.
> > > > > >
> > > > > > Signed-off-by: Xiaoyun Li <xiaoyun.li@intel.com>
> > > > > > ---
> > > > > > v2
> > > > > > * Use gcc function multi-versioning to avoid compilation issues.
> > > > > > * Add macros for AVX512 and AVX2. Only if users enable AVX512
> > > > > > and the compiler supports it, the AVX512 codes would be
> > > > > > compiled. Only if the compiler supports AVX2, the AVX2 codes
> would be compiled.
> > > > > >
> > > > > > v3
> > > > > > * Reduce function calls via only keep rte_memcpy_xxx.
> > > > > > * Add conditions that when copy size is small, use inline code path.
> > > > > > Otherwise, use dynamic code path.
> > > > > > * To support attribute target, clang version must be greater than 3.7.
> > > > > > Otherwise, would choose SSE/AVX code path, the same as before.
> > > > > > * Move two mocro functions to the top of the code since they
> > > > > > would be used in inline SSE/AVX and dynamic SSE/AVX codes.
> > > > > >
> > > > > > v4
> > > > > > * Modify rte_memcpy.h to several .c files and modify makefiles
> > > > > > to compile
> > > > > > AVX2 and AVX512 files.
> > > > >
> > > > > Could you explain to me why instead of reusing existing
> > > > > rte_memcpy() code to generate _sse/_avx2/ax512f flavors you keep
> > > > > pushing changes with 3 separate implementations?
> > > > > Obviously that is much more expensive in terms of maintenance
> > > > > and doesn't look like feasible solution to me.
> > > > > Is existing rte_memcpy() implementation is not good enough in
> > > > > terms of functionality and/or performance?
> > > > > If so, can you outline these problems and try to fix them first.
> > > > > Konstantin
> > > > >
> > > >
> > > > I just change many small functions to one function in those 3
> > > > separate
> > > functions.
> > >
> > > Yes, so with what you suggest  we'll have 4 implementations  for
> > > rte_memcpy to support.
> > > That's very expensive terms of maintenance and I believe totally
> unnecessary.
> > >
> > > > Because the existing codes are totally inline, including
> > > > rte_memcpy() itself. So the compilation will change all
> > > > rte_memcpy() calls into the basic
> > > codes like xmm0=xxx.
> > > >
> > > > The existing codes in this way are OK.
> > >
> > > Good.
> > >
> > > >But when run-time, it will bring lots of function calls  and cause
> > > >perf drop.
> > >
> > > I believe it wouldn't if we do it properly.
> > > All internal functions (mov16, mov32, etc.) will still be unlined by
> > > the compiler for each flavor (sse/avx2/etc.) - have a look at the patch I
> sent.
> > >
> > > Konstantin
  

Patch

diff --git a/lib/librte_eal/bsdapp/eal/Makefile b/lib/librte_eal/bsdapp/eal/Makefile
index 005019e..27023c6 100644
--- a/lib/librte_eal/bsdapp/eal/Makefile
+++ b/lib/librte_eal/bsdapp/eal/Makefile
@@ -36,6 +36,7 @@  LIB = librte_eal.a
 ARCH_DIR ?= $(RTE_ARCH)
 VPATH += $(RTE_SDK)/lib/librte_eal/common
 VPATH += $(RTE_SDK)/lib/librte_eal/common/arch/$(ARCH_DIR)
+VPATH += $(RTE_SDK)/lib/librte_eal/common/include/arch/$(ARCH_DIR)
 
 CFLAGS += -I$(SRCDIR)/include
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common
@@ -93,6 +94,22 @@  SRCS-$(CONFIG_RTE_EXEC_ENV_BSDAPP) += rte_service.c
 SRCS-$(CONFIG_RTE_EXEC_ENV_BSDAPP) += rte_cpuflags.c
 SRCS-$(CONFIG_RTE_ARCH_X86) += rte_spinlock.c
 
+# for run-time dispatch of memcpy
+SRCS-$(CONFIG_RTE_ARCH_X86) += rte_memcpy.c
+SRCS-$(CONFIG_RTE_ARCH_X86) += rte_memcpy_sse.c
+
+# if the compiler supports AVX512, add avx512 file
+ifneq ($(filter $(MACHINE_CFLAGS),CC_SUPPORT_AVX512F),)
+SRCS-$(CONFIG_RTE_ARCH_X86) += rte_memcpy_avx512f.c
+CFLAGS_rte_memcpy_avx512f.o += -mavx512f
+endif
+
+# if the compiler supports AVX2, add avx2 file
+ifneq ($(filter $(MACHINE_CFLAGS),CC_SUPPORT_AVX2),)
+SRCS-$(CONFIG_RTE_ARCH_X86) += rte_memcpy_avx2.c
+CFLAGS_rte_memcpy_avx2.o += -mavx2
+endif
+
 CFLAGS_eal_common_cpuflags.o := $(CPUFLAGS_LIST)
 
 CFLAGS_eal.o := -D_GNU_SOURCE
diff --git a/lib/librte_eal/common/include/arch/x86/rte_memcpy.c b/lib/librte_eal/common/include/arch/x86/rte_memcpy.c
new file mode 100644
index 0000000..74ae702
--- /dev/null
+++ b/lib/librte_eal/common/include/arch/x86/rte_memcpy.c
@@ -0,0 +1,59 @@ 
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_memcpy.h>
+#include <rte_cpuflags.h>
+#include <rte_log.h>
+
+void *(*rte_memcpy_ptr)(void *dst, const void *src, size_t n) = NULL;
+
+static void __attribute__((constructor))
+rte_memcpy_init(void)
+{
+#ifdef CC_SUPPORT_AVX512F
+	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX512F)) {
+		rte_memcpy_ptr = rte_memcpy_avx512f;
+		RTE_LOG(DEBUG, EAL, "AVX512 memcpy is using!\n");
+		return;
+	}
+#endif
+#ifdef CC_SUPPORT_AVX2
+	if (rte_cpu_get_flag_enabled(RTE_CPUFLAG_AVX2)) {
+		rte_memcpy_ptr = rte_memcpy_avx2;
+		RTE_LOG(DEBUG, EAL, "AVX2 memcpy is using!\n");
+		return;
+	}
+#endif
+	rte_memcpy_ptr = rte_memcpy_sse;
+	RTE_LOG(DEBUG, EAL, "Default SSE/AVX memcpy is using!\n");
+}
diff --git a/lib/librte_eal/common/include/arch/x86/rte_memcpy.h b/lib/librte_eal/common/include/arch/x86/rte_memcpy.h
index 74c280c..460dcdb 100644
--- a/lib/librte_eal/common/include/arch/x86/rte_memcpy.h
+++ b/lib/librte_eal/common/include/arch/x86/rte_memcpy.h
@@ -1,7 +1,7 @@ 
 /*-
  *   BSD LICENSE
  *
- *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
  *   All rights reserved.
  *
  *   Redistribution and use in source and binary forms, with or without
@@ -34,867 +34,36 @@ 
 #ifndef _RTE_MEMCPY_X86_64_H_
 #define _RTE_MEMCPY_X86_64_H_
 
-/**
- * @file
- *
- * Functions for SSE/AVX/AVX2/AVX512 implementation of memcpy().
- */
-
-#include <stdio.h>
-#include <stdint.h>
-#include <string.h>
-#include <rte_vect.h>
-#include <rte_common.h>
+#include <rte_memcpy_internal.h>
 
 #ifdef __cplusplus
 extern "C" {
 #endif
 
-/**
- * Copy bytes from one location to another. The locations must not overlap.
- *
- * @note This is implemented as a macro, so it's address should not be taken
- * and care is needed as parameter expressions may be evaluated multiple times.
- *
- * @param dst
- *   Pointer to the destination of the data.
- * @param src
- *   Pointer to the source data.
- * @param n
- *   Number of bytes to copy.
- * @return
- *   Pointer to the destination data.
- */
-static __rte_always_inline void *
-rte_memcpy(void *dst, const void *src, size_t n);
-
-#ifdef RTE_MACHINE_CPUFLAG_AVX512F
+#define RTE_X86_MEMCPY_THRESH 128
 
-#define ALIGNMENT_MASK 0x3F
+extern void *
+(*rte_memcpy_ptr)(void *dst, const void *src, size_t n);
 
 /**
- * AVX512 implementation below
+ * Different implementations of memcpy.
  */
+extern void*
+rte_memcpy_avx512f(void *dst, const void *src, size_t n);
 
-/**
- * Copy 16 bytes from one location to another,
- * locations should not overlap.
- */
-static inline void
-rte_mov16(uint8_t *dst, const uint8_t *src)
-{
-	__m128i xmm0;
-
-	xmm0 = _mm_loadu_si128((const __m128i *)src);
-	_mm_storeu_si128((__m128i *)dst, xmm0);
-}
-
-/**
- * Copy 32 bytes from one location to another,
- * locations should not overlap.
- */
-static inline void
-rte_mov32(uint8_t *dst, const uint8_t *src)
-{
-	__m256i ymm0;
+extern void *
+rte_memcpy_avx2(void *dst, const void *src, size_t n);
 
-	ymm0 = _mm256_loadu_si256((const __m256i *)src);
-	_mm256_storeu_si256((__m256i *)dst, ymm0);
-}
-
-/**
- * Copy 64 bytes from one location to another,
- * locations should not overlap.
- */
-static inline void
-rte_mov64(uint8_t *dst, const uint8_t *src)
-{
-	__m512i zmm0;
-
-	zmm0 = _mm512_loadu_si512((const void *)src);
-	_mm512_storeu_si512((void *)dst, zmm0);
-}
-
-/**
- * Copy 128 bytes from one location to another,
- * locations should not overlap.
- */
-static inline void
-rte_mov128(uint8_t *dst, const uint8_t *src)
-{
-	rte_mov64(dst + 0 * 64, src + 0 * 64);
-	rte_mov64(dst + 1 * 64, src + 1 * 64);
-}
-
-/**
- * Copy 256 bytes from one location to another,
- * locations should not overlap.
- */
-static inline void
-rte_mov256(uint8_t *dst, const uint8_t *src)
-{
-	rte_mov64(dst + 0 * 64, src + 0 * 64);
-	rte_mov64(dst + 1 * 64, src + 1 * 64);
-	rte_mov64(dst + 2 * 64, src + 2 * 64);
-	rte_mov64(dst + 3 * 64, src + 3 * 64);
-}
-
-/**
- * Copy 128-byte blocks from one location to another,
- * locations should not overlap.
- */
-static inline void
-rte_mov128blocks(uint8_t *dst, const uint8_t *src, size_t n)
-{
-	__m512i zmm0, zmm1;
-
-	while (n >= 128) {
-		zmm0 = _mm512_loadu_si512((const void *)(src + 0 * 64));
-		n -= 128;
-		zmm1 = _mm512_loadu_si512((const void *)(src + 1 * 64));
-		src = src + 128;
-		_mm512_storeu_si512((void *)(dst + 0 * 64), zmm0);
-		_mm512_storeu_si512((void *)(dst + 1 * 64), zmm1);
-		dst = dst + 128;
-	}
-}
-
-/**
- * Copy 512-byte blocks from one location to another,
- * locations should not overlap.
- */
-static inline void
-rte_mov512blocks(uint8_t *dst, const uint8_t *src, size_t n)
-{
-	__m512i zmm0, zmm1, zmm2, zmm3, zmm4, zmm5, zmm6, zmm7;
-
-	while (n >= 512) {
-		zmm0 = _mm512_loadu_si512((const void *)(src + 0 * 64));
-		n -= 512;
-		zmm1 = _mm512_loadu_si512((const void *)(src + 1 * 64));
-		zmm2 = _mm512_loadu_si512((const void *)(src + 2 * 64));
-		zmm3 = _mm512_loadu_si512((const void *)(src + 3 * 64));
-		zmm4 = _mm512_loadu_si512((const void *)(src + 4 * 64));
-		zmm5 = _mm512_loadu_si512((const void *)(src + 5 * 64));
-		zmm6 = _mm512_loadu_si512((const void *)(src + 6 * 64));
-		zmm7 = _mm512_loadu_si512((const void *)(src + 7 * 64));
-		src = src + 512;
-		_mm512_storeu_si512((void *)(dst + 0 * 64), zmm0);
-		_mm512_storeu_si512((void *)(dst + 1 * 64), zmm1);
-		_mm512_storeu_si512((void *)(dst + 2 * 64), zmm2);
-		_mm512_storeu_si512((void *)(dst + 3 * 64), zmm3);
-		_mm512_storeu_si512((void *)(dst + 4 * 64), zmm4);
-		_mm512_storeu_si512((void *)(dst + 5 * 64), zmm5);
-		_mm512_storeu_si512((void *)(dst + 6 * 64), zmm6);
-		_mm512_storeu_si512((void *)(dst + 7 * 64), zmm7);
-		dst = dst + 512;
-	}
-}
-
-static inline void *
-rte_memcpy_generic(void *dst, const void *src, size_t n)
-{
-	uintptr_t dstu = (uintptr_t)dst;
-	uintptr_t srcu = (uintptr_t)src;
-	void *ret = dst;
-	size_t dstofss;
-	size_t bits;
-
-	/**
-	 * Copy less than 16 bytes
-	 */
-	if (n < 16) {
-		if (n & 0x01) {
-			*(uint8_t *)dstu = *(const uint8_t *)srcu;
-			srcu = (uintptr_t)((const uint8_t *)srcu + 1);
-			dstu = (uintptr_t)((uint8_t *)dstu + 1);
-		}
-		if (n & 0x02) {
-			*(uint16_t *)dstu = *(const uint16_t *)srcu;
-			srcu = (uintptr_t)((const uint16_t *)srcu + 1);
-			dstu = (uintptr_t)((uint16_t *)dstu + 1);
-		}
-		if (n & 0x04) {
-			*(uint32_t *)dstu = *(const uint32_t *)srcu;
-			srcu = (uintptr_t)((const uint32_t *)srcu + 1);
-			dstu = (uintptr_t)((uint32_t *)dstu + 1);
-		}
-		if (n & 0x08)
-			*(uint64_t *)dstu = *(const uint64_t *)srcu;
-		return ret;
-	}
-
-	/**
-	 * Fast way when copy size doesn't exceed 512 bytes
-	 */
-	if (n <= 32) {
-		rte_mov16((uint8_t *)dst, (const uint8_t *)src);
-		rte_mov16((uint8_t *)dst - 16 + n,
-				  (const uint8_t *)src - 16 + n);
-		return ret;
-	}
-	if (n <= 64) {
-		rte_mov32((uint8_t *)dst, (const uint8_t *)src);
-		rte_mov32((uint8_t *)dst - 32 + n,
-				  (const uint8_t *)src - 32 + n);
-		return ret;
-	}
-	if (n <= 512) {
-		if (n >= 256) {
-			n -= 256;
-			rte_mov256((uint8_t *)dst, (const uint8_t *)src);
-			src = (const uint8_t *)src + 256;
-			dst = (uint8_t *)dst + 256;
-		}
-		if (n >= 128) {
-			n -= 128;
-			rte_mov128((uint8_t *)dst, (const uint8_t *)src);
-			src = (const uint8_t *)src + 128;
-			dst = (uint8_t *)dst + 128;
-		}
-COPY_BLOCK_128_BACK63:
-		if (n > 64) {
-			rte_mov64((uint8_t *)dst, (const uint8_t *)src);
-			rte_mov64((uint8_t *)dst - 64 + n,
-					  (const uint8_t *)src - 64 + n);
-			return ret;
-		}
-		if (n > 0)
-			rte_mov64((uint8_t *)dst - 64 + n,
-					  (const uint8_t *)src - 64 + n);
-		return ret;
-	}
-
-	/**
-	 * Make store aligned when copy size exceeds 512 bytes
-	 */
-	dstofss = ((uintptr_t)dst & 0x3F);
-	if (dstofss > 0) {
-		dstofss = 64 - dstofss;
-		n -= dstofss;
-		rte_mov64((uint8_t *)dst, (const uint8_t *)src);
-		src = (const uint8_t *)src + dstofss;
-		dst = (uint8_t *)dst + dstofss;
-	}
-
-	/**
-	 * Copy 512-byte blocks.
-	 * Use copy block function for better instruction order control,
-	 * which is important when load is unaligned.
-	 */
-	rte_mov512blocks((uint8_t *)dst, (const uint8_t *)src, n);
-	bits = n;
-	n = n & 511;
-	bits -= n;
-	src = (const uint8_t *)src + bits;
-	dst = (uint8_t *)dst + bits;
-
-	/**
-	 * Copy 128-byte blocks.
-	 * Use copy block function for better instruction order control,
-	 * which is important when load is unaligned.
-	 */
-	if (n >= 128) {
-		rte_mov128blocks((uint8_t *)dst, (const uint8_t *)src, n);
-		bits = n;
-		n = n & 127;
-		bits -= n;
-		src = (const uint8_t *)src + bits;
-		dst = (uint8_t *)dst + bits;
-	}
-
-	/**
-	 * Copy whatever left
-	 */
-	goto COPY_BLOCK_128_BACK63;
-}
-
-#elif defined RTE_MACHINE_CPUFLAG_AVX2
-
-#define ALIGNMENT_MASK 0x1F
-
-/**
- * AVX2 implementation below
- */
-
-/**
- * Copy 16 bytes from one location to another,
- * locations should not overlap.
- */
-static inline void
-rte_mov16(uint8_t *dst, const uint8_t *src)
-{
-	__m128i xmm0;
-
-	xmm0 = _mm_loadu_si128((const __m128i *)src);
-	_mm_storeu_si128((__m128i *)dst, xmm0);
-}
-
-/**
- * Copy 32 bytes from one location to another,
- * locations should not overlap.
- */
-static inline void
-rte_mov32(uint8_t *dst, const uint8_t *src)
-{
-	__m256i ymm0;
-
-	ymm0 = _mm256_loadu_si256((const __m256i *)src);
-	_mm256_storeu_si256((__m256i *)dst, ymm0);
-}
-
-/**
- * Copy 64 bytes from one location to another,
- * locations should not overlap.
- */
-static inline void
-rte_mov64(uint8_t *dst, const uint8_t *src)
-{
-	rte_mov32((uint8_t *)dst + 0 * 32, (const uint8_t *)src + 0 * 32);
-	rte_mov32((uint8_t *)dst + 1 * 32, (const uint8_t *)src + 1 * 32);
-}
-
-/**
- * Copy 128 bytes from one location to another,
- * locations should not overlap.
- */
-static inline void
-rte_mov128(uint8_t *dst, const uint8_t *src)
-{
-	rte_mov32((uint8_t *)dst + 0 * 32, (const uint8_t *)src + 0 * 32);
-	rte_mov32((uint8_t *)dst + 1 * 32, (const uint8_t *)src + 1 * 32);
-	rte_mov32((uint8_t *)dst + 2 * 32, (const uint8_t *)src + 2 * 32);
-	rte_mov32((uint8_t *)dst + 3 * 32, (const uint8_t *)src + 3 * 32);
-}
-
-/**
- * Copy 128-byte blocks from one location to another,
- * locations should not overlap.
- */
-static inline void
-rte_mov128blocks(uint8_t *dst, const uint8_t *src, size_t n)
-{
-	__m256i ymm0, ymm1, ymm2, ymm3;
-
-	while (n >= 128) {
-		ymm0 = _mm256_loadu_si256((const __m256i *)((const uint8_t *)src + 0 * 32));
-		n -= 128;
-		ymm1 = _mm256_loadu_si256((const __m256i *)((const uint8_t *)src + 1 * 32));
-		ymm2 = _mm256_loadu_si256((const __m256i *)((const uint8_t *)src + 2 * 32));
-		ymm3 = _mm256_loadu_si256((const __m256i *)((const uint8_t *)src + 3 * 32));
-		src = (const uint8_t *)src + 128;
-		_mm256_storeu_si256((__m256i *)((uint8_t *)dst + 0 * 32), ymm0);
-		_mm256_storeu_si256((__m256i *)((uint8_t *)dst + 1 * 32), ymm1);
-		_mm256_storeu_si256((__m256i *)((uint8_t *)dst + 2 * 32), ymm2);
-		_mm256_storeu_si256((__m256i *)((uint8_t *)dst + 3 * 32), ymm3);
-		dst = (uint8_t *)dst + 128;
-	}
-}
-
-static inline void *
-rte_memcpy_generic(void *dst, const void *src, size_t n)
-{
-	uintptr_t dstu = (uintptr_t)dst;
-	uintptr_t srcu = (uintptr_t)src;
-	void *ret = dst;
-	size_t dstofss;
-	size_t bits;
-
-	/**
-	 * Copy less than 16 bytes
-	 */
-	if (n < 16) {
-		if (n & 0x01) {
-			*(uint8_t *)dstu = *(const uint8_t *)srcu;
-			srcu = (uintptr_t)((const uint8_t *)srcu + 1);
-			dstu = (uintptr_t)((uint8_t *)dstu + 1);
-		}
-		if (n & 0x02) {
-			*(uint16_t *)dstu = *(const uint16_t *)srcu;
-			srcu = (uintptr_t)((const uint16_t *)srcu + 1);
-			dstu = (uintptr_t)((uint16_t *)dstu + 1);
-		}
-		if (n & 0x04) {
-			*(uint32_t *)dstu = *(const uint32_t *)srcu;
-			srcu = (uintptr_t)((const uint32_t *)srcu + 1);
-			dstu = (uintptr_t)((uint32_t *)dstu + 1);
-		}
-		if (n & 0x08) {
-			*(uint64_t *)dstu = *(const uint64_t *)srcu;
-		}
-		return ret;
-	}
-
-	/**
-	 * Fast way when copy size doesn't exceed 256 bytes
-	 */
-	if (n <= 32) {
-		rte_mov16((uint8_t *)dst, (const uint8_t *)src);
-		rte_mov16((uint8_t *)dst - 16 + n,
-				(const uint8_t *)src - 16 + n);
-		return ret;
-	}
-	if (n <= 48) {
-		rte_mov16((uint8_t *)dst, (const uint8_t *)src);
-		rte_mov16((uint8_t *)dst + 16, (const uint8_t *)src + 16);
-		rte_mov16((uint8_t *)dst - 16 + n,
-				(const uint8_t *)src - 16 + n);
-		return ret;
-	}
-	if (n <= 64) {
-		rte_mov32((uint8_t *)dst, (const uint8_t *)src);
-		rte_mov32((uint8_t *)dst - 32 + n,
-				(const uint8_t *)src - 32 + n);
-		return ret;
-	}
-	if (n <= 256) {
-		if (n >= 128) {
-			n -= 128;
-			rte_mov128((uint8_t *)dst, (const uint8_t *)src);
-			src = (const uint8_t *)src + 128;
-			dst = (uint8_t *)dst + 128;
-		}
-COPY_BLOCK_128_BACK31:
-		if (n >= 64) {
-			n -= 64;
-			rte_mov64((uint8_t *)dst, (const uint8_t *)src);
-			src = (const uint8_t *)src + 64;
-			dst = (uint8_t *)dst + 64;
-		}
-		if (n > 32) {
-			rte_mov32((uint8_t *)dst, (const uint8_t *)src);
-			rte_mov32((uint8_t *)dst - 32 + n,
-					(const uint8_t *)src - 32 + n);
-			return ret;
-		}
-		if (n > 0) {
-			rte_mov32((uint8_t *)dst - 32 + n,
-					(const uint8_t *)src - 32 + n);
-		}
-		return ret;
-	}
-
-	/**
-	 * Make store aligned when copy size exceeds 256 bytes
-	 */
-	dstofss = (uintptr_t)dst & 0x1F;
-	if (dstofss > 0) {
-		dstofss = 32 - dstofss;
-		n -= dstofss;
-		rte_mov32((uint8_t *)dst, (const uint8_t *)src);
-		src = (const uint8_t *)src + dstofss;
-		dst = (uint8_t *)dst + dstofss;
-	}
-
-	/**
-	 * Copy 128-byte blocks
-	 */
-	rte_mov128blocks((uint8_t *)dst, (const uint8_t *)src, n);
-	bits = n;
-	n = n & 127;
-	bits -= n;
-	src = (const uint8_t *)src + bits;
-	dst = (uint8_t *)dst + bits;
-
-	/**
-	 * Copy whatever left
-	 */
-	goto COPY_BLOCK_128_BACK31;
-}
-
-#else /* RTE_MACHINE_CPUFLAG */
-
-#define ALIGNMENT_MASK 0x0F
-
-/**
- * SSE & AVX implementation below
- */
-
-/**
- * Copy 16 bytes from one location to another,
- * locations should not overlap.
- */
-static inline void
-rte_mov16(uint8_t *dst, const uint8_t *src)
-{
-	__m128i xmm0;
-
-	xmm0 = _mm_loadu_si128((const __m128i *)(const __m128i *)src);
-	_mm_storeu_si128((__m128i *)dst, xmm0);
-}
-
-/**
- * Copy 32 bytes from one location to another,
- * locations should not overlap.
- */
-static inline void
-rte_mov32(uint8_t *dst, const uint8_t *src)
-{
-	rte_mov16((uint8_t *)dst + 0 * 16, (const uint8_t *)src + 0 * 16);
-	rte_mov16((uint8_t *)dst + 1 * 16, (const uint8_t *)src + 1 * 16);
-}
-
-/**
- * Copy 64 bytes from one location to another,
- * locations should not overlap.
- */
-static inline void
-rte_mov64(uint8_t *dst, const uint8_t *src)
-{
-	rte_mov16((uint8_t *)dst + 0 * 16, (const uint8_t *)src + 0 * 16);
-	rte_mov16((uint8_t *)dst + 1 * 16, (const uint8_t *)src + 1 * 16);
-	rte_mov16((uint8_t *)dst + 2 * 16, (const uint8_t *)src + 2 * 16);
-	rte_mov16((uint8_t *)dst + 3 * 16, (const uint8_t *)src + 3 * 16);
-}
-
-/**
- * Copy 128 bytes from one location to another,
- * locations should not overlap.
- */
-static inline void
-rte_mov128(uint8_t *dst, const uint8_t *src)
-{
-	rte_mov16((uint8_t *)dst + 0 * 16, (const uint8_t *)src + 0 * 16);
-	rte_mov16((uint8_t *)dst + 1 * 16, (const uint8_t *)src + 1 * 16);
-	rte_mov16((uint8_t *)dst + 2 * 16, (const uint8_t *)src + 2 * 16);
-	rte_mov16((uint8_t *)dst + 3 * 16, (const uint8_t *)src + 3 * 16);
-	rte_mov16((uint8_t *)dst + 4 * 16, (const uint8_t *)src + 4 * 16);
-	rte_mov16((uint8_t *)dst + 5 * 16, (const uint8_t *)src + 5 * 16);
-	rte_mov16((uint8_t *)dst + 6 * 16, (const uint8_t *)src + 6 * 16);
-	rte_mov16((uint8_t *)dst + 7 * 16, (const uint8_t *)src + 7 * 16);
-}
-
-/**
- * Copy 256 bytes from one location to another,
- * locations should not overlap.
- */
-static inline void
-rte_mov256(uint8_t *dst, const uint8_t *src)
-{
-	rte_mov16((uint8_t *)dst + 0 * 16, (const uint8_t *)src + 0 * 16);
-	rte_mov16((uint8_t *)dst + 1 * 16, (const uint8_t *)src + 1 * 16);
-	rte_mov16((uint8_t *)dst + 2 * 16, (const uint8_t *)src + 2 * 16);
-	rte_mov16((uint8_t *)dst + 3 * 16, (const uint8_t *)src + 3 * 16);
-	rte_mov16((uint8_t *)dst + 4 * 16, (const uint8_t *)src + 4 * 16);
-	rte_mov16((uint8_t *)dst + 5 * 16, (const uint8_t *)src + 5 * 16);
-	rte_mov16((uint8_t *)dst + 6 * 16, (const uint8_t *)src + 6 * 16);
-	rte_mov16((uint8_t *)dst + 7 * 16, (const uint8_t *)src + 7 * 16);
-	rte_mov16((uint8_t *)dst + 8 * 16, (const uint8_t *)src + 8 * 16);
-	rte_mov16((uint8_t *)dst + 9 * 16, (const uint8_t *)src + 9 * 16);
-	rte_mov16((uint8_t *)dst + 10 * 16, (const uint8_t *)src + 10 * 16);
-	rte_mov16((uint8_t *)dst + 11 * 16, (const uint8_t *)src + 11 * 16);
-	rte_mov16((uint8_t *)dst + 12 * 16, (const uint8_t *)src + 12 * 16);
-	rte_mov16((uint8_t *)dst + 13 * 16, (const uint8_t *)src + 13 * 16);
-	rte_mov16((uint8_t *)dst + 14 * 16, (const uint8_t *)src + 14 * 16);
-	rte_mov16((uint8_t *)dst + 15 * 16, (const uint8_t *)src + 15 * 16);
-}
-
-/**
- * Macro for copying unaligned block from one location to another with constant load offset,
- * 47 bytes leftover maximum,
- * locations should not overlap.
- * Requirements:
- * - Store is aligned
- * - Load offset is <offset>, which must be immediate value within [1, 15]
- * - For <src>, make sure <offset> bit backwards & <16 - offset> bit forwards are available for loading
- * - <dst>, <src>, <len> must be variables
- * - __m128i <xmm0> ~ <xmm8> must be pre-defined
- */
-#define MOVEUNALIGNED_LEFT47_IMM(dst, src, len, offset)                                                     \
-__extension__ ({                                                                                            \
-    int tmp;                                                                                                \
-    while (len >= 128 + 16 - offset) {                                                                      \
-        xmm0 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 0 * 16));                  \
-        len -= 128;                                                                                         \
-        xmm1 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 1 * 16));                  \
-        xmm2 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 2 * 16));                  \
-        xmm3 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 3 * 16));                  \
-        xmm4 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 4 * 16));                  \
-        xmm5 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 5 * 16));                  \
-        xmm6 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 6 * 16));                  \
-        xmm7 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 7 * 16));                  \
-        xmm8 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 8 * 16));                  \
-        src = (const uint8_t *)src + 128;                                                                   \
-        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 0 * 16), _mm_alignr_epi8(xmm1, xmm0, offset));        \
-        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 1 * 16), _mm_alignr_epi8(xmm2, xmm1, offset));        \
-        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 2 * 16), _mm_alignr_epi8(xmm3, xmm2, offset));        \
-        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 3 * 16), _mm_alignr_epi8(xmm4, xmm3, offset));        \
-        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 4 * 16), _mm_alignr_epi8(xmm5, xmm4, offset));        \
-        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 5 * 16), _mm_alignr_epi8(xmm6, xmm5, offset));        \
-        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 6 * 16), _mm_alignr_epi8(xmm7, xmm6, offset));        \
-        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 7 * 16), _mm_alignr_epi8(xmm8, xmm7, offset));        \
-        dst = (uint8_t *)dst + 128;                                                                         \
-    }                                                                                                       \
-    tmp = len;                                                                                              \
-    len = ((len - 16 + offset) & 127) + 16 - offset;                                                        \
-    tmp -= len;                                                                                             \
-    src = (const uint8_t *)src + tmp;                                                                       \
-    dst = (uint8_t *)dst + tmp;                                                                             \
-    if (len >= 32 + 16 - offset) {                                                                          \
-        while (len >= 32 + 16 - offset) {                                                                   \
-            xmm0 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 0 * 16));              \
-            len -= 32;                                                                                      \
-            xmm1 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 1 * 16));              \
-            xmm2 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 2 * 16));              \
-            src = (const uint8_t *)src + 32;                                                                \
-            _mm_storeu_si128((__m128i *)((uint8_t *)dst + 0 * 16), _mm_alignr_epi8(xmm1, xmm0, offset));    \
-            _mm_storeu_si128((__m128i *)((uint8_t *)dst + 1 * 16), _mm_alignr_epi8(xmm2, xmm1, offset));    \
-            dst = (uint8_t *)dst + 32;                                                                      \
-        }                                                                                                   \
-        tmp = len;                                                                                          \
-        len = ((len - 16 + offset) & 31) + 16 - offset;                                                     \
-        tmp -= len;                                                                                         \
-        src = (const uint8_t *)src + tmp;                                                                   \
-        dst = (uint8_t *)dst + tmp;                                                                         \
-    }                                                                                                       \
-})
-
-/**
- * Macro for copying unaligned block from one location to another,
- * 47 bytes leftover maximum,
- * locations should not overlap.
- * Use switch here because the aligning instruction requires immediate value for shift count.
- * Requirements:
- * - Store is aligned
- * - Load offset is <offset>, which must be within [1, 15]
- * - For <src>, make sure <offset> bit backwards & <16 - offset> bit forwards are available for loading
- * - <dst>, <src>, <len> must be variables
- * - __m128i <xmm0> ~ <xmm8> used in MOVEUNALIGNED_LEFT47_IMM must be pre-defined
- */
-#define MOVEUNALIGNED_LEFT47(dst, src, len, offset)                   \
-__extension__ ({                                                      \
-    switch (offset) {                                                 \
-    case 0x01: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x01); break;    \
-    case 0x02: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x02); break;    \
-    case 0x03: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x03); break;    \
-    case 0x04: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x04); break;    \
-    case 0x05: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x05); break;    \
-    case 0x06: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x06); break;    \
-    case 0x07: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x07); break;    \
-    case 0x08: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x08); break;    \
-    case 0x09: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x09); break;    \
-    case 0x0A: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0A); break;    \
-    case 0x0B: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0B); break;    \
-    case 0x0C: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0C); break;    \
-    case 0x0D: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0D); break;    \
-    case 0x0E: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0E); break;    \
-    case 0x0F: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0F); break;    \
-    default:;                                                         \
-    }                                                                 \
-})
-
-static inline void *
-rte_memcpy_generic(void *dst, const void *src, size_t n)
-{
-	__m128i xmm0, xmm1, xmm2, xmm3, xmm4, xmm5, xmm6, xmm7, xmm8;
-	uintptr_t dstu = (uintptr_t)dst;
-	uintptr_t srcu = (uintptr_t)src;
-	void *ret = dst;
-	size_t dstofss;
-	size_t srcofs;
-
-	/**
-	 * Copy less than 16 bytes
-	 */
-	if (n < 16) {
-		if (n & 0x01) {
-			*(uint8_t *)dstu = *(const uint8_t *)srcu;
-			srcu = (uintptr_t)((const uint8_t *)srcu + 1);
-			dstu = (uintptr_t)((uint8_t *)dstu + 1);
-		}
-		if (n & 0x02) {
-			*(uint16_t *)dstu = *(const uint16_t *)srcu;
-			srcu = (uintptr_t)((const uint16_t *)srcu + 1);
-			dstu = (uintptr_t)((uint16_t *)dstu + 1);
-		}
-		if (n & 0x04) {
-			*(uint32_t *)dstu = *(const uint32_t *)srcu;
-			srcu = (uintptr_t)((const uint32_t *)srcu + 1);
-			dstu = (uintptr_t)((uint32_t *)dstu + 1);
-		}
-		if (n & 0x08) {
-			*(uint64_t *)dstu = *(const uint64_t *)srcu;
-		}
-		return ret;
-	}
-
-	/**
-	 * Fast way when copy size doesn't exceed 512 bytes
-	 */
-	if (n <= 32) {
-		rte_mov16((uint8_t *)dst, (const uint8_t *)src);
-		rte_mov16((uint8_t *)dst - 16 + n, (const uint8_t *)src - 16 + n);
-		return ret;
-	}
-	if (n <= 48) {
-		rte_mov32((uint8_t *)dst, (const uint8_t *)src);
-		rte_mov16((uint8_t *)dst - 16 + n, (const uint8_t *)src - 16 + n);
-		return ret;
-	}
-	if (n <= 64) {
-		rte_mov32((uint8_t *)dst, (const uint8_t *)src);
-		rte_mov16((uint8_t *)dst + 32, (const uint8_t *)src + 32);
-		rte_mov16((uint8_t *)dst - 16 + n, (const uint8_t *)src - 16 + n);
-		return ret;
-	}
-	if (n <= 128) {
-		goto COPY_BLOCK_128_BACK15;
-	}
-	if (n <= 512) {
-		if (n >= 256) {
-			n -= 256;
-			rte_mov128((uint8_t *)dst, (const uint8_t *)src);
-			rte_mov128((uint8_t *)dst + 128, (const uint8_t *)src + 128);
-			src = (const uint8_t *)src + 256;
-			dst = (uint8_t *)dst + 256;
-		}
-COPY_BLOCK_255_BACK15:
-		if (n >= 128) {
-			n -= 128;
-			rte_mov128((uint8_t *)dst, (const uint8_t *)src);
-			src = (const uint8_t *)src + 128;
-			dst = (uint8_t *)dst + 128;
-		}
-COPY_BLOCK_128_BACK15:
-		if (n >= 64) {
-			n -= 64;
-			rte_mov64((uint8_t *)dst, (const uint8_t *)src);
-			src = (const uint8_t *)src + 64;
-			dst = (uint8_t *)dst + 64;
-		}
-COPY_BLOCK_64_BACK15:
-		if (n >= 32) {
-			n -= 32;
-			rte_mov32((uint8_t *)dst, (const uint8_t *)src);
-			src = (const uint8_t *)src + 32;
-			dst = (uint8_t *)dst + 32;
-		}
-		if (n > 16) {
-			rte_mov16((uint8_t *)dst, (const uint8_t *)src);
-			rte_mov16((uint8_t *)dst - 16 + n, (const uint8_t *)src - 16 + n);
-			return ret;
-		}
-		if (n > 0) {
-			rte_mov16((uint8_t *)dst - 16 + n, (const uint8_t *)src - 16 + n);
-		}
-		return ret;
-	}
-
-	/**
-	 * Make store aligned when copy size exceeds 512 bytes,
-	 * and make sure the first 15 bytes are copied, because
-	 * unaligned copy functions require up to 15 bytes
-	 * backwards access.
-	 */
-	dstofss = (uintptr_t)dst & 0x0F;
-	if (dstofss > 0) {
-		dstofss = 16 - dstofss + 16;
-		n -= dstofss;
-		rte_mov32((uint8_t *)dst, (const uint8_t *)src);
-		src = (const uint8_t *)src + dstofss;
-		dst = (uint8_t *)dst + dstofss;
-	}
-	srcofs = ((uintptr_t)src & 0x0F);
-
-	/**
-	 * For aligned copy
-	 */
-	if (srcofs == 0) {
-		/**
-		 * Copy 256-byte blocks
-		 */
-		for (; n >= 256; n -= 256) {
-			rte_mov256((uint8_t *)dst, (const uint8_t *)src);
-			dst = (uint8_t *)dst + 256;
-			src = (const uint8_t *)src + 256;
-		}
-
-		/**
-		 * Copy whatever left
-		 */
-		goto COPY_BLOCK_255_BACK15;
-	}
-
-	/**
-	 * For copy with unaligned load
-	 */
-	MOVEUNALIGNED_LEFT47(dst, src, n, srcofs);
-
-	/**
-	 * Copy whatever left
-	 */
-	goto COPY_BLOCK_64_BACK15;
-}
-
-#endif /* RTE_MACHINE_CPUFLAG */
-
-static inline void *
-rte_memcpy_aligned(void *dst, const void *src, size_t n)
-{
-	void *ret = dst;
-
-	/* Copy size <= 16 bytes */
-	if (n < 16) {
-		if (n & 0x01) {
-			*(uint8_t *)dst = *(const uint8_t *)src;
-			src = (const uint8_t *)src + 1;
-			dst = (uint8_t *)dst + 1;
-		}
-		if (n & 0x02) {
-			*(uint16_t *)dst = *(const uint16_t *)src;
-			src = (const uint16_t *)src + 1;
-			dst = (uint16_t *)dst + 1;
-		}
-		if (n & 0x04) {
-			*(uint32_t *)dst = *(const uint32_t *)src;
-			src = (const uint32_t *)src + 1;
-			dst = (uint32_t *)dst + 1;
-		}
-		if (n & 0x08)
-			*(uint64_t *)dst = *(const uint64_t *)src;
-
-		return ret;
-	}
-
-	/* Copy 16 <= size <= 32 bytes */
-	if (n <= 32) {
-		rte_mov16((uint8_t *)dst, (const uint8_t *)src);
-		rte_mov16((uint8_t *)dst - 16 + n,
-				(const uint8_t *)src - 16 + n);
-
-		return ret;
-	}
-
-	/* Copy 32 < size <= 64 bytes */
-	if (n <= 64) {
-		rte_mov32((uint8_t *)dst, (const uint8_t *)src);
-		rte_mov32((uint8_t *)dst - 32 + n,
-				(const uint8_t *)src - 32 + n);
-
-		return ret;
-	}
-
-	/* Copy 64 bytes blocks */
-	for (; n >= 64; n -= 64) {
-		rte_mov64((uint8_t *)dst, (const uint8_t *)src);
-		dst = (uint8_t *)dst + 64;
-		src = (const uint8_t *)src + 64;
-	}
-
-	/* Copy whatever left */
-	rte_mov64((uint8_t *)dst - 64 + n,
-			(const uint8_t *)src - 64 + n);
-
-	return ret;
-}
+extern void *
+rte_memcpy_sse(void *dst, const void *src, size_t n);
 
 static inline void *
 rte_memcpy(void *dst, const void *src, size_t n)
 {
-	if (!(((uintptr_t)dst | (uintptr_t)src) & ALIGNMENT_MASK))
-		return rte_memcpy_aligned(dst, src, n);
+	if (n <= RTE_X86_MEMCPY_THRESH)
+		return rte_memcpy_internal(dst, src, n);
 	else
-		return rte_memcpy_generic(dst, src, n);
+		return (*rte_memcpy_ptr)(dst, src, n);
 }
 
 #ifdef __cplusplus
diff --git a/lib/librte_eal/common/include/arch/x86/rte_memcpy_avx2.c b/lib/librte_eal/common/include/arch/x86/rte_memcpy_avx2.c
new file mode 100644
index 0000000..c83351a
--- /dev/null
+++ b/lib/librte_eal/common/include/arch/x86/rte_memcpy_avx2.c
@@ -0,0 +1,291 @@ 
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_memcpy.h>
+
+#ifndef CC_SUPPORT_AVX2
+#error CC_SUPPORT_AVX2 not defined
+#endif
+
+void *
+rte_memcpy_avx2(void *dst, const void *src, size_t n)
+{
+	if (!(((uintptr_t)dst | (uintptr_t)src) & 0x1F)) {
+		void *ret = dst;
+
+		/* Copy size <= 16 bytes */
+		if (n < 16) {
+			if (n & 0x01) {
+				*(uint8_t *)dst = *(const uint8_t *)src;
+				src = (const uint8_t *)src + 1;
+				dst = (uint8_t *)dst + 1;
+			}
+			if (n & 0x02) {
+				*(uint16_t *)dst = *(const uint16_t *)src;
+				src = (const uint16_t *)src + 1;
+				dst = (uint16_t *)dst + 1;
+			}
+			if (n & 0x04) {
+				*(uint32_t *)dst = *(const uint32_t *)src;
+				src = (const uint32_t *)src + 1;
+				dst = (uint32_t *)dst + 1;
+			}
+			if (n & 0x08)
+				*(uint64_t *)dst = *(const uint64_t *)src;
+
+			return ret;
+		}
+
+		/* Copy 16 <= size <= 32 bytes */
+		if (n <= 32) {
+			__m128i xmm0, xmm1;
+			xmm0 = _mm_loadu_si128((const __m128i *)src);
+			xmm1 = _mm_loadu_si128((const __m128i *)
+				((const uint8_t *)src - 16 + n));
+			_mm_storeu_si128((__m128i *)dst, xmm0);
+			_mm_storeu_si128((__m128i *)
+				((uint8_t *)dst - 16 + n), xmm1);
+
+			return ret;
+		}
+
+		/* Copy 32 < size <= 64 bytes */
+		if (n <= 64) {
+			__m256i ymm0, ymm1;
+			ymm0 = _mm256_loadu_si256((const __m256i *)src);
+			ymm1 = _mm256_loadu_si256((const __m256i *)
+				((const uint8_t *)src - 32 + n));
+			_mm256_storeu_si256((__m256i *)dst, ymm0);
+			_mm256_storeu_si256((__m256i *)
+				((uint8_t *)dst - 32 + n), ymm1);
+
+			return ret;
+		}
+
+		/* Copy 64 bytes blocks */
+		for (; n >= 64; n -= 64) {
+			__m256i ymm0, ymm1;
+			ymm0 = _mm256_loadu_si256((const __m256i *)src);
+			ymm1 = _mm256_loadu_si256((const __m256i *)
+				((const uint8_t *)src + 32));
+			_mm256_storeu_si256((__m256i *)dst, ymm0);
+			_mm256_storeu_si256((__m256i *)
+				((uint8_t *)dst + 32), ymm1);
+			dst = (uint8_t *)dst + 64;
+			src = (const uint8_t *)src + 64;
+		}
+
+		/* Copy whatever left */
+		__m256i ymm0, ymm1;
+		ymm0 = _mm256_loadu_si256((const __m256i *)
+			((const uint8_t *)src - 64 + n));
+		ymm1 = _mm256_loadu_si256((const __m256i *)
+			((const uint8_t *)src - 32 + n));
+		_mm256_storeu_si256((__m256i *)((uint8_t *)dst - 64 + n), ymm0);
+		_mm256_storeu_si256((__m256i *)((uint8_t *)dst - 32 + n), ymm1);
+
+		return ret;
+	} else {
+		uintptr_t dstu = (uintptr_t)dst;
+		uintptr_t srcu = (uintptr_t)src;
+		void *ret = dst;
+		size_t dstofss;
+		size_t bits;
+
+		/**
+		 * Copy less than 16 bytes
+		 */
+		if (n < 16) {
+			if (n & 0x01) {
+				*(uint8_t *)dstu = *(const uint8_t *)srcu;
+				srcu = (uintptr_t)((const uint8_t *)srcu + 1);
+				dstu = (uintptr_t)((uint8_t *)dstu + 1);
+			}
+			if (n & 0x02) {
+				*(uint16_t *)dstu = *(const uint16_t *)srcu;
+				srcu = (uintptr_t)((const uint16_t *)srcu + 1);
+				dstu = (uintptr_t)((uint16_t *)dstu + 1);
+			}
+			if (n & 0x04) {
+				*(uint32_t *)dstu = *(const uint32_t *)srcu;
+				srcu = (uintptr_t)((const uint32_t *)srcu + 1);
+				dstu = (uintptr_t)((uint32_t *)dstu + 1);
+			}
+			if (n & 0x08)
+				*(uint64_t *)dstu = *(const uint64_t *)srcu;
+			return ret;
+		}
+
+		/**
+		 * Fast way when copy size doesn't exceed 256 bytes
+		 */
+		if (n <= 32) {
+			__m128i xmm0, xmm1;
+			xmm0 = _mm_loadu_si128((const __m128i *)src);
+			xmm1 = _mm_loadu_si128((const __m128i *)
+				((const uint8_t *)src - 16 + n));
+			_mm_storeu_si128((__m128i *)dst, xmm0);
+			_mm_storeu_si128((__m128i *)
+				((uint8_t *)dst - 16 + n), xmm1);
+			return ret;
+		}
+		if (n <= 48) {
+			__m128i xmm0, xmm1, xmm2;
+			xmm0 = _mm_loadu_si128((const __m128i *)src);
+			xmm1 = _mm_loadu_si128((const __m128i *)
+				((const uint8_t *)src + 16));
+			xmm2 = _mm_loadu_si128((const __m128i *)
+				((const uint8_t *)src - 16 + n));
+			_mm_storeu_si128((__m128i *)dst, xmm0);
+			_mm_storeu_si128((__m128i *)
+				((uint8_t *)dst + 16), xmm1);
+			_mm_storeu_si128((__m128i *)
+				((uint8_t *)dst - 16 + n), xmm2);
+			return ret;
+		}
+		if (n <= 64) {
+			__m256i ymm0, ymm1;
+			ymm0 = _mm256_loadu_si256((const __m256i *)src);
+			ymm1 = _mm256_loadu_si256((const __m256i *)
+				((const uint8_t *)src - 32 + n));
+			_mm256_storeu_si256((__m256i *)dst, ymm0);
+			_mm256_storeu_si256((__m256i *)
+				((uint8_t *)dst - 32 + n), ymm1);
+			return ret;
+		}
+		if (n <= 256) {
+			if (n >= 128) {
+				n -= 128;
+				__m256i ymm0, ymm1, ymm2, ymm3;
+				ymm0 = _mm256_loadu_si256((const __m256i *)src);
+				ymm1 = _mm256_loadu_si256((const __m256i *)
+					((const uint8_t *)src + 32));
+				ymm2 = _mm256_loadu_si256((const __m256i *)
+					((const uint8_t *)src + 2*32));
+				ymm3 = _mm256_loadu_si256((const __m256i *)
+					((const uint8_t *)src + 3*32));
+				_mm256_storeu_si256((__m256i *)dst, ymm0);
+				_mm256_storeu_si256((__m256i *)
+					((uint8_t *)dst + 32), ymm1);
+				_mm256_storeu_si256((__m256i *)
+					((uint8_t *)dst + 2*32), ymm2);
+				_mm256_storeu_si256((__m256i *)
+					((uint8_t *)dst + 3*32), ymm3);
+				src = (const uint8_t *)src + 128;
+				dst = (uint8_t *)dst + 128;
+			}
+COPY_BLOCK_128_BACK31:
+			if (n >= 64) {
+				n -= 64;
+				__m256i ymm0, ymm1;
+				ymm0 = _mm256_loadu_si256((const __m256i *)src);
+				ymm1 = _mm256_loadu_si256((const __m256i *)
+					((const uint8_t *)src + 32));
+				_mm256_storeu_si256((__m256i *)dst, ymm0);
+				_mm256_storeu_si256((__m256i *)
+					((uint8_t *)dst + 32), ymm1);
+				src = (const uint8_t *)src + 64;
+				dst = (uint8_t *)dst + 64;
+			}
+			if (n > 32) {
+				__m256i ymm0, ymm1;
+				ymm0 = _mm256_loadu_si256((const __m256i *)src);
+				ymm1 = _mm256_loadu_si256((const __m256i *)
+					((const uint8_t *)src - 32 + n));
+				_mm256_storeu_si256((__m256i *)dst, ymm0);
+				_mm256_storeu_si256((__m256i *)
+					((uint8_t *)dst - 32 + n), ymm1);
+				return ret;
+			}
+			if (n > 0) {
+				__m256i ymm0;
+				ymm0 = _mm256_loadu_si256((const __m256i *)
+					((const uint8_t *)src - 32 + n));
+				_mm256_storeu_si256((__m256i *)
+					((uint8_t *)dst - 32 + n), ymm0);
+			}
+			return ret;
+		}
+
+		/**
+		 * Make store aligned when copy size exceeds 256 bytes
+		 */
+		dstofss = (uintptr_t)dst & 0x1F;
+		if (dstofss > 0) {
+			dstofss = 32 - dstofss;
+			n -= dstofss;
+			__m256i ymm0;
+			ymm0 = _mm256_loadu_si256((const __m256i *)src);
+			_mm256_storeu_si256((__m256i *)dst, ymm0);
+			src = (const uint8_t *)src + dstofss;
+			dst = (uint8_t *)dst + dstofss;
+		}
+
+		/**
+		 * Copy 128-byte blocks
+		 */
+		__m256i ymm0, ymm1, ymm2, ymm3;
+
+		while (n >= 128) {
+			ymm0 = _mm256_loadu_si256((const __m256i *)
+				((const uint8_t *)src + 0 * 32));
+			n -= 128;
+			ymm1 = _mm256_loadu_si256((const __m256i *)
+				((const uint8_t *)src + 1 * 32));
+			ymm2 = _mm256_loadu_si256((const __m256i *)
+				((const uint8_t *)src + 2 * 32));
+			ymm3 = _mm256_loadu_si256((const __m256i *)
+				((const uint8_t *)src + 3 * 32));
+			src = (const uint8_t *)src + 128;
+			_mm256_storeu_si256((__m256i *)
+				((uint8_t *)dst + 0 * 32), ymm0);
+			_mm256_storeu_si256((__m256i *)
+				((uint8_t *)dst + 1 * 32), ymm1);
+			_mm256_storeu_si256((__m256i *)
+				((uint8_t *)dst + 2 * 32), ymm2);
+			_mm256_storeu_si256((__m256i *)
+				((uint8_t *)dst + 3 * 32), ymm3);
+			dst = (uint8_t *)dst + 128;
+		}
+		bits = n;
+		n = n & 127;
+		bits -= n;
+		src = (const uint8_t *)src + bits;
+		dst = (uint8_t *)dst + bits;
+
+		/**
+		 * Copy whatever left
+		 */
+		goto COPY_BLOCK_128_BACK31;
+	}
+}
diff --git a/lib/librte_eal/common/include/arch/x86/rte_memcpy_avx512f.c b/lib/librte_eal/common/include/arch/x86/rte_memcpy_avx512f.c
new file mode 100644
index 0000000..c8a9d20
--- /dev/null
+++ b/lib/librte_eal/common/include/arch/x86/rte_memcpy_avx512f.c
@@ -0,0 +1,316 @@ 
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_memcpy.h>
+
+#ifndef CC_SUPPORT_AVX512F
+#error CC_SUPPORT_AVX512F not defined
+#endif
+
+void *
+rte_memcpy_avx512f(void *dst, const void *src, size_t n)
+{
+	if (!(((uintptr_t)dst | (uintptr_t)src) & 0x3F)) {
+		void *ret = dst;
+
+		/* Copy size <= 16 bytes */
+		if (n < 16) {
+			if (n & 0x01) {
+				*(uint8_t *)dst = *(const uint8_t *)src;
+				src = (const uint8_t *)src + 1;
+				dst = (uint8_t *)dst + 1;
+			}
+			if (n & 0x02) {
+				*(uint16_t *)dst = *(const uint16_t *)src;
+				src = (const uint16_t *)src + 1;
+				dst = (uint16_t *)dst + 1;
+			}
+			if (n & 0x04) {
+				*(uint32_t *)dst = *(const uint32_t *)src;
+				src = (const uint32_t *)src + 1;
+				dst = (uint32_t *)dst + 1;
+			}
+			if (n & 0x08)
+				*(uint64_t *)dst = *(const uint64_t *)src;
+
+			return ret;
+		}
+
+		/* Copy 16 <= size <= 32 bytes */
+		if (n <= 32) {
+			__m128i xmm0, xmm1;
+			xmm0 = _mm_loadu_si128((const __m128i *)src);
+			xmm1 = _mm_loadu_si128((const __m128i *)
+				((const uint8_t *)src - 16 + n));
+			_mm_storeu_si128((__m128i *)dst, xmm0);
+			_mm_storeu_si128((__m128i *)
+				((uint8_t *)dst - 16 + n), xmm1);
+
+			return ret;
+		}
+
+		/* Copy 32 < size <= 64 bytes */
+		if (n <= 64) {
+			__m256i ymm0, ymm1;
+			ymm0 = _mm256_loadu_si256((const __m256i *)src);
+			ymm1 = _mm256_loadu_si256((const __m256i *)
+				((const uint8_t *)src - 32 + n));
+			_mm256_storeu_si256((__m256i *)dst, ymm0);
+			_mm256_storeu_si256((__m256i *)
+				((uint8_t *)dst - 32 + n), ymm1);
+
+			return ret;
+		}
+
+		/* Copy 64 bytes blocks */
+		for (; n >= 64; n -= 64) {
+			__m512i zmm0;
+			zmm0 = _mm512_loadu_si512((const void *)src);
+			_mm512_storeu_si512((void *)dst, zmm0);
+			dst = (uint8_t *)dst + 64;
+			src = (const uint8_t *)src + 64;
+		}
+
+		/* Copy whatever left */
+		__m512i zmm0;
+		zmm0 = _mm512_loadu_si512((const void *)
+			((const uint8_t *)src - 64 + n));
+		_mm512_storeu_si512((void *)((uint8_t *)dst - 64 + n), zmm0);
+
+		return ret;
+	} else {
+		uintptr_t dstu = (uintptr_t)dst;
+		uintptr_t srcu = (uintptr_t)src;
+		void *ret = dst;
+		size_t dstofss;
+		size_t bits;
+
+		/**
+		 * Copy less than 16 bytes
+		 */
+		if (n < 16) {
+			if (n & 0x01) {
+				*(uint8_t *)dstu = *(const uint8_t *)srcu;
+				srcu = (uintptr_t)((const uint8_t *)srcu + 1);
+				dstu = (uintptr_t)((uint8_t *)dstu + 1);
+			}
+			if (n & 0x02) {
+				*(uint16_t *)dstu = *(const uint16_t *)srcu;
+				srcu = (uintptr_t)((const uint16_t *)srcu + 1);
+				dstu = (uintptr_t)((uint16_t *)dstu + 1);
+			}
+			if (n & 0x04) {
+				*(uint32_t *)dstu = *(const uint32_t *)srcu;
+				srcu = (uintptr_t)((const uint32_t *)srcu + 1);
+				dstu = (uintptr_t)((uint32_t *)dstu + 1);
+			}
+			if (n & 0x08)
+				*(uint64_t *)dstu = *(const uint64_t *)srcu;
+			return ret;
+		}
+
+		/**
+		 * Fast way when copy size doesn't exceed 512 bytes
+		 */
+		if (n <= 32) {
+			__m128i xmm0, xmm1;
+			xmm0 = _mm_loadu_si128((const __m128i *)src);
+			xmm1 = _mm_loadu_si128((const __m128i *)
+				((const uint8_t *)src - 16 + n));
+			_mm_storeu_si128((__m128i *)dst, xmm0);
+			_mm_storeu_si128((__m128i *)
+				((uint8_t *)dst - 16 + n), xmm1);
+			return ret;
+		}
+		if (n <= 64) {
+			__m256i ymm0, ymm1;
+			ymm0 = _mm256_loadu_si256((const __m256i *)src);
+			ymm1 = _mm256_loadu_si256((const __m256i *)
+				((const uint8_t *)src - 32 + n));
+			_mm256_storeu_si256((__m256i *)dst, ymm0);
+			_mm256_storeu_si256((__m256i *)
+				((uint8_t *)dst - 32 + n), ymm1);
+			return ret;
+		}
+		if (n <= 512) {
+			if (n >= 256) {
+				n -= 256;
+				__m512i zmm0, zmm1, zmm2, zmm3;
+				zmm0 = _mm512_loadu_si512((const void *)src);
+				zmm1 = _mm512_loadu_si512((const void *)
+					((const uint8_t *)src + 64));
+				zmm2 = _mm512_loadu_si512((const void *)
+					((const uint8_t *)src + 2*64));
+				zmm3 = _mm512_loadu_si512((const void *)
+					((const uint8_t *)src + 3*64));
+				_mm512_storeu_si512((void *)dst, zmm0);
+				_mm512_storeu_si512((void *)
+					((uint8_t *)dst + 64), zmm1);
+				_mm512_storeu_si512((void *)
+					((uint8_t *)dst + 2*64), zmm2);
+				_mm512_storeu_si512((void *)
+					((uint8_t *)dst + 3*64), zmm3);
+				src = (const uint8_t *)src + 256;
+				dst = (uint8_t *)dst + 256;
+			}
+			if (n >= 128) {
+				n -= 128;
+				__m512i zmm0, zmm1;
+				zmm0 = _mm512_loadu_si512((const void *)src);
+				zmm1 = _mm512_loadu_si512((const void *)
+					((const uint8_t *)src + 64));
+				_mm512_storeu_si512((void *)dst, zmm0);
+				_mm512_storeu_si512((void *)
+					((uint8_t *)dst + 64), zmm1);
+				src = (const uint8_t *)src + 128;
+				dst = (uint8_t *)dst + 128;
+			}
+COPY_BLOCK_128_BACK63:
+			if (n > 64) {
+				__m512i zmm0, zmm1;
+				zmm0 = _mm512_loadu_si512((const void *)src);
+				zmm1 = _mm512_loadu_si512((const void *)
+					((const uint8_t *)src - 64 + n));
+				_mm512_storeu_si512((void *)dst, zmm0);
+				_mm512_storeu_si512((void *)
+					((uint8_t *)dst - 64 + n), zmm1);
+				return ret;
+			}
+			if (n > 0) {
+				__m512i zmm0;
+				zmm0 = _mm512_loadu_si512((const void *)
+					((const uint8_t *)src - 64 + n));
+				_mm512_storeu_si512((void *)
+					((uint8_t *)dst - 64 + n), zmm0);
+			}
+			return ret;
+		}
+
+		/**
+		 * Make store aligned when copy size exceeds 512 bytes
+		 */
+		dstofss = ((uintptr_t)dst & 0x3F);
+		if (dstofss > 0) {
+			dstofss = 64 - dstofss;
+			n -= dstofss;
+			__m512i zmm0;
+			zmm0 = _mm512_loadu_si512((const void *)src);
+			_mm512_storeu_si512((void *)dst, zmm0);
+			src = (const uint8_t *)src + dstofss;
+			dst = (uint8_t *)dst + dstofss;
+		}
+
+		/**
+		 * Copy 512-byte blocks.
+		 * Use copy block function for better instruction order control,
+		 * which is important when load is unaligned.
+		 */
+		__m512i zmm0, zmm1, zmm2, zmm3, zmm4, zmm5, zmm6, zmm7;
+
+		while (n >= 512) {
+			zmm0 = _mm512_loadu_si512((const void *)
+				((const uint8_t *)src + 0 * 64));
+			n -= 512;
+			zmm1 = _mm512_loadu_si512((const void *)
+				((const uint8_t *)src + 1 * 64));
+			zmm2 = _mm512_loadu_si512((const void *)
+				((const uint8_t *)src + 2 * 64));
+			zmm3 = _mm512_loadu_si512((const void *)
+				((const uint8_t *)src + 3 * 64));
+			zmm4 = _mm512_loadu_si512((const void *)
+				((const uint8_t *)src + 4 * 64));
+			zmm5 = _mm512_loadu_si512((const void *)
+				((const uint8_t *)src + 5 * 64));
+			zmm6 = _mm512_loadu_si512((const void *)
+				((const uint8_t *)src + 6 * 64));
+			zmm7 = _mm512_loadu_si512((const void *)
+				((const uint8_t *)src + 7 * 64));
+			src = (const uint8_t *)src + 512;
+			_mm512_storeu_si512((void *)
+				((uint8_t *)dst + 0 * 64), zmm0);
+			_mm512_storeu_si512((void *)
+				((uint8_t *)dst + 1 * 64), zmm1);
+			_mm512_storeu_si512((void *)
+				((uint8_t *)dst + 2 * 64), zmm2);
+			_mm512_storeu_si512((void *)
+				((uint8_t *)dst + 3 * 64), zmm3);
+			_mm512_storeu_si512((void *)
+				((uint8_t *)dst + 4 * 64), zmm4);
+			_mm512_storeu_si512((void *)
+				((uint8_t *)dst + 5 * 64), zmm5);
+			_mm512_storeu_si512((void *)
+				((uint8_t *)dst + 6 * 64), zmm6);
+			_mm512_storeu_si512((void *)
+				((uint8_t *)dst + 7 * 64), zmm7);
+			dst = (uint8_t *)dst + 512;
+		}
+		bits = n;
+		n = n & 511;
+		bits -= n;
+		src = (const uint8_t *)src + bits;
+		dst = (uint8_t *)dst + bits;
+
+		/**
+		 * Copy 128-byte blocks.
+		 * Use copy block function for better instruction order control,
+		 * which is important when load is unaligned.
+		 */
+		if (n >= 128) {
+			__m512i zmm0, zmm1;
+
+			while (n >= 128) {
+				zmm0 = _mm512_loadu_si512((const void *)
+					((const uint8_t *)src + 0 * 64));
+				n -= 128;
+				zmm1 = _mm512_loadu_si512((const void *)
+					((const uint8_t *)src + 1 * 64));
+				src = (const uint8_t *)src + 128;
+				_mm512_storeu_si512((void *)
+					((uint8_t *)dst + 0 * 64), zmm0);
+				_mm512_storeu_si512((void *)
+					((uint8_t *)dst + 1 * 64), zmm1);
+				dst = (uint8_t *)dst + 128;
+			}
+			bits = n;
+			n = n & 127;
+			bits -= n;
+			src = (const uint8_t *)src + bits;
+			dst = (uint8_t *)dst + bits;
+		}
+
+		/**
+		 * Copy whatever left
+		 */
+		goto COPY_BLOCK_128_BACK63;
+	}
+}
diff --git a/lib/librte_eal/common/include/arch/x86/rte_memcpy_internal.h b/lib/librte_eal/common/include/arch/x86/rte_memcpy_internal.h
new file mode 100644
index 0000000..d17fb5b
--- /dev/null
+++ b/lib/librte_eal/common/include/arch/x86/rte_memcpy_internal.h
@@ -0,0 +1,909 @@ 
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#ifndef _RTE_MEMCPY_INTERNAL_X86_64_H_
+#define _RTE_MEMCPY_INTERNAL_X86_64_H_
+
+/**
+ * @file
+ *
+ * Functions for SSE/AVX/AVX2/AVX512 implementation of memcpy().
+ */
+
+#include <stdio.h>
+#include <stdint.h>
+#include <string.h>
+#include <rte_vect.h>
+#include <rte_common.h>
+
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+/**
+ * Copy bytes from one location to another. The locations must not overlap.
+ *
+ * @note This is implemented as a macro, so it's address should not be taken
+ * and care is needed as parameter expressions may be evaluated multiple times.
+ *
+ * @param dst
+ *   Pointer to the destination of the data.
+ * @param src
+ *   Pointer to the source data.
+ * @param n
+ *   Number of bytes to copy.
+ * @return
+ *   Pointer to the destination data.
+ */
+
+#ifdef RTE_MACHINE_CPUFLAG_AVX512F
+
+#define ALIGNMENT_MASK 0x3F
+
+/**
+ * AVX512 implementation below
+ */
+
+/**
+ * Copy 16 bytes from one location to another,
+ * locations should not overlap.
+ */
+static inline void
+rte_mov16(uint8_t *dst, const uint8_t *src)
+{
+	__m128i xmm0;
+
+	xmm0 = _mm_loadu_si128((const __m128i *)src);
+	_mm_storeu_si128((__m128i *)dst, xmm0);
+}
+
+/**
+ * Copy 32 bytes from one location to another,
+ * locations should not overlap.
+ */
+static inline void
+rte_mov32(uint8_t *dst, const uint8_t *src)
+{
+	__m256i ymm0;
+
+	ymm0 = _mm256_loadu_si256((const __m256i *)src);
+	_mm256_storeu_si256((__m256i *)dst, ymm0);
+}
+
+/**
+ * Copy 64 bytes from one location to another,
+ * locations should not overlap.
+ */
+static inline void
+rte_mov64(uint8_t *dst, const uint8_t *src)
+{
+	__m512i zmm0;
+
+	zmm0 = _mm512_loadu_si512((const void *)src);
+	_mm512_storeu_si512((void *)dst, zmm0);
+}
+
+/**
+ * Copy 128 bytes from one location to another,
+ * locations should not overlap.
+ */
+static inline void
+rte_mov128(uint8_t *dst, const uint8_t *src)
+{
+	rte_mov64(dst + 0 * 64, src + 0 * 64);
+	rte_mov64(dst + 1 * 64, src + 1 * 64);
+}
+
+/**
+ * Copy 256 bytes from one location to another,
+ * locations should not overlap.
+ */
+static inline void
+rte_mov256(uint8_t *dst, const uint8_t *src)
+{
+	rte_mov64(dst + 0 * 64, src + 0 * 64);
+	rte_mov64(dst + 1 * 64, src + 1 * 64);
+	rte_mov64(dst + 2 * 64, src + 2 * 64);
+	rte_mov64(dst + 3 * 64, src + 3 * 64);
+}
+
+/**
+ * Copy 128-byte blocks from one location to another,
+ * locations should not overlap.
+ */
+static inline void
+rte_mov128blocks(uint8_t *dst, const uint8_t *src, size_t n)
+{
+	__m512i zmm0, zmm1;
+
+	while (n >= 128) {
+		zmm0 = _mm512_loadu_si512((const void *)(src + 0 * 64));
+		n -= 128;
+		zmm1 = _mm512_loadu_si512((const void *)(src + 1 * 64));
+		src = src + 128;
+		_mm512_storeu_si512((void *)(dst + 0 * 64), zmm0);
+		_mm512_storeu_si512((void *)(dst + 1 * 64), zmm1);
+		dst = dst + 128;
+	}
+}
+
+/**
+ * Copy 512-byte blocks from one location to another,
+ * locations should not overlap.
+ */
+static inline void
+rte_mov512blocks(uint8_t *dst, const uint8_t *src, size_t n)
+{
+	__m512i zmm0, zmm1, zmm2, zmm3, zmm4, zmm5, zmm6, zmm7;
+
+	while (n >= 512) {
+		zmm0 = _mm512_loadu_si512((const void *)(src + 0 * 64));
+		n -= 512;
+		zmm1 = _mm512_loadu_si512((const void *)(src + 1 * 64));
+		zmm2 = _mm512_loadu_si512((const void *)(src + 2 * 64));
+		zmm3 = _mm512_loadu_si512((const void *)(src + 3 * 64));
+		zmm4 = _mm512_loadu_si512((const void *)(src + 4 * 64));
+		zmm5 = _mm512_loadu_si512((const void *)(src + 5 * 64));
+		zmm6 = _mm512_loadu_si512((const void *)(src + 6 * 64));
+		zmm7 = _mm512_loadu_si512((const void *)(src + 7 * 64));
+		src = src + 512;
+		_mm512_storeu_si512((void *)(dst + 0 * 64), zmm0);
+		_mm512_storeu_si512((void *)(dst + 1 * 64), zmm1);
+		_mm512_storeu_si512((void *)(dst + 2 * 64), zmm2);
+		_mm512_storeu_si512((void *)(dst + 3 * 64), zmm3);
+		_mm512_storeu_si512((void *)(dst + 4 * 64), zmm4);
+		_mm512_storeu_si512((void *)(dst + 5 * 64), zmm5);
+		_mm512_storeu_si512((void *)(dst + 6 * 64), zmm6);
+		_mm512_storeu_si512((void *)(dst + 7 * 64), zmm7);
+		dst = dst + 512;
+	}
+}
+
+static inline void *
+rte_memcpy_generic(void *dst, const void *src, size_t n)
+{
+	uintptr_t dstu = (uintptr_t)dst;
+	uintptr_t srcu = (uintptr_t)src;
+	void *ret = dst;
+	size_t dstofss;
+	size_t bits;
+
+	/**
+	 * Copy less than 16 bytes
+	 */
+	if (n < 16) {
+		if (n & 0x01) {
+			*(uint8_t *)dstu = *(const uint8_t *)srcu;
+			srcu = (uintptr_t)((const uint8_t *)srcu + 1);
+			dstu = (uintptr_t)((uint8_t *)dstu + 1);
+		}
+		if (n & 0x02) {
+			*(uint16_t *)dstu = *(const uint16_t *)srcu;
+			srcu = (uintptr_t)((const uint16_t *)srcu + 1);
+			dstu = (uintptr_t)((uint16_t *)dstu + 1);
+		}
+		if (n & 0x04) {
+			*(uint32_t *)dstu = *(const uint32_t *)srcu;
+			srcu = (uintptr_t)((const uint32_t *)srcu + 1);
+			dstu = (uintptr_t)((uint32_t *)dstu + 1);
+		}
+		if (n & 0x08)
+			*(uint64_t *)dstu = *(const uint64_t *)srcu;
+		return ret;
+	}
+
+	/**
+	 * Fast way when copy size doesn't exceed 512 bytes
+	 */
+	if (n <= 32) {
+		rte_mov16((uint8_t *)dst, (const uint8_t *)src);
+		rte_mov16((uint8_t *)dst - 16 + n,
+				  (const uint8_t *)src - 16 + n);
+		return ret;
+	}
+	if (n <= 64) {
+		rte_mov32((uint8_t *)dst, (const uint8_t *)src);
+		rte_mov32((uint8_t *)dst - 32 + n,
+				  (const uint8_t *)src - 32 + n);
+		return ret;
+	}
+	if (n <= 512) {
+		if (n >= 256) {
+			n -= 256;
+			rte_mov256((uint8_t *)dst, (const uint8_t *)src);
+			src = (const uint8_t *)src + 256;
+			dst = (uint8_t *)dst + 256;
+		}
+		if (n >= 128) {
+			n -= 128;
+			rte_mov128((uint8_t *)dst, (const uint8_t *)src);
+			src = (const uint8_t *)src + 128;
+			dst = (uint8_t *)dst + 128;
+		}
+COPY_BLOCK_128_BACK63:
+		if (n > 64) {
+			rte_mov64((uint8_t *)dst, (const uint8_t *)src);
+			rte_mov64((uint8_t *)dst - 64 + n,
+					  (const uint8_t *)src - 64 + n);
+			return ret;
+		}
+		if (n > 0)
+			rte_mov64((uint8_t *)dst - 64 + n,
+					  (const uint8_t *)src - 64 + n);
+		return ret;
+	}
+
+	/**
+	 * Make store aligned when copy size exceeds 512 bytes
+	 */
+	dstofss = ((uintptr_t)dst & 0x3F);
+	if (dstofss > 0) {
+		dstofss = 64 - dstofss;
+		n -= dstofss;
+		rte_mov64((uint8_t *)dst, (const uint8_t *)src);
+		src = (const uint8_t *)src + dstofss;
+		dst = (uint8_t *)dst + dstofss;
+	}
+
+	/**
+	 * Copy 512-byte blocks.
+	 * Use copy block function for better instruction order control,
+	 * which is important when load is unaligned.
+	 */
+	rte_mov512blocks((uint8_t *)dst, (const uint8_t *)src, n);
+	bits = n;
+	n = n & 511;
+	bits -= n;
+	src = (const uint8_t *)src + bits;
+	dst = (uint8_t *)dst + bits;
+
+	/**
+	 * Copy 128-byte blocks.
+	 * Use copy block function for better instruction order control,
+	 * which is important when load is unaligned.
+	 */
+	if (n >= 128) {
+		rte_mov128blocks((uint8_t *)dst, (const uint8_t *)src, n);
+		bits = n;
+		n = n & 127;
+		bits -= n;
+		src = (const uint8_t *)src + bits;
+		dst = (uint8_t *)dst + bits;
+	}
+
+	/**
+	 * Copy whatever left
+	 */
+	goto COPY_BLOCK_128_BACK63;
+}
+
+#elif defined RTE_MACHINE_CPUFLAG_AVX2
+
+#define ALIGNMENT_MASK 0x1F
+
+/**
+ * AVX2 implementation below
+ */
+
+/**
+ * Copy 16 bytes from one location to another,
+ * locations should not overlap.
+ */
+static inline void
+rte_mov16(uint8_t *dst, const uint8_t *src)
+{
+	__m128i xmm0;
+
+	xmm0 = _mm_loadu_si128((const __m128i *)src);
+	_mm_storeu_si128((__m128i *)dst, xmm0);
+}
+
+/**
+ * Copy 32 bytes from one location to another,
+ * locations should not overlap.
+ */
+static inline void
+rte_mov32(uint8_t *dst, const uint8_t *src)
+{
+	__m256i ymm0;
+
+	ymm0 = _mm256_loadu_si256((const __m256i *)src);
+	_mm256_storeu_si256((__m256i *)dst, ymm0);
+}
+
+/**
+ * Copy 64 bytes from one location to another,
+ * locations should not overlap.
+ */
+static inline void
+rte_mov64(uint8_t *dst, const uint8_t *src)
+{
+	rte_mov32((uint8_t *)dst + 0 * 32, (const uint8_t *)src + 0 * 32);
+	rte_mov32((uint8_t *)dst + 1 * 32, (const uint8_t *)src + 1 * 32);
+}
+
+/**
+ * Copy 128 bytes from one location to another,
+ * locations should not overlap.
+ */
+static inline void
+rte_mov128(uint8_t *dst, const uint8_t *src)
+{
+	rte_mov32((uint8_t *)dst + 0 * 32, (const uint8_t *)src + 0 * 32);
+	rte_mov32((uint8_t *)dst + 1 * 32, (const uint8_t *)src + 1 * 32);
+	rte_mov32((uint8_t *)dst + 2 * 32, (const uint8_t *)src + 2 * 32);
+	rte_mov32((uint8_t *)dst + 3 * 32, (const uint8_t *)src + 3 * 32);
+}
+
+/**
+ * Copy 128-byte blocks from one location to another,
+ * locations should not overlap.
+ */
+static inline void
+rte_mov128blocks(uint8_t *dst, const uint8_t *src, size_t n)
+{
+	__m256i ymm0, ymm1, ymm2, ymm3;
+
+	while (n >= 128) {
+		ymm0 = _mm256_loadu_si256((const __m256i *)
+				((const uint8_t *)src + 0 * 32));
+		n -= 128;
+		ymm1 = _mm256_loadu_si256((const __m256i *)
+				((const uint8_t *)src + 1 * 32));
+		ymm2 = _mm256_loadu_si256((const __m256i *)
+				((const uint8_t *)src + 2 * 32));
+		ymm3 = _mm256_loadu_si256((const __m256i *)
+				((const uint8_t *)src + 3 * 32));
+		src = (const uint8_t *)src + 128;
+		_mm256_storeu_si256((__m256i *)((uint8_t *)dst + 0 * 32), ymm0);
+		_mm256_storeu_si256((__m256i *)((uint8_t *)dst + 1 * 32), ymm1);
+		_mm256_storeu_si256((__m256i *)((uint8_t *)dst + 2 * 32), ymm2);
+		_mm256_storeu_si256((__m256i *)((uint8_t *)dst + 3 * 32), ymm3);
+		dst = (uint8_t *)dst + 128;
+	}
+}
+
+static inline void *
+rte_memcpy_generic(void *dst, const void *src, size_t n)
+{
+	uintptr_t dstu = (uintptr_t)dst;
+	uintptr_t srcu = (uintptr_t)src;
+	void *ret = dst;
+	size_t dstofss;
+	size_t bits;
+
+	/**
+	 * Copy less than 16 bytes
+	 */
+	if (n < 16) {
+		if (n & 0x01) {
+			*(uint8_t *)dstu = *(const uint8_t *)srcu;
+			srcu = (uintptr_t)((const uint8_t *)srcu + 1);
+			dstu = (uintptr_t)((uint8_t *)dstu + 1);
+		}
+		if (n & 0x02) {
+			*(uint16_t *)dstu = *(const uint16_t *)srcu;
+			srcu = (uintptr_t)((const uint16_t *)srcu + 1);
+			dstu = (uintptr_t)((uint16_t *)dstu + 1);
+		}
+		if (n & 0x04) {
+			*(uint32_t *)dstu = *(const uint32_t *)srcu;
+			srcu = (uintptr_t)((const uint32_t *)srcu + 1);
+			dstu = (uintptr_t)((uint32_t *)dstu + 1);
+		}
+		if (n & 0x08)
+			*(uint64_t *)dstu = *(const uint64_t *)srcu;
+		return ret;
+	}
+
+	/**
+	 * Fast way when copy size doesn't exceed 256 bytes
+	 */
+	if (n <= 32) {
+		rte_mov16((uint8_t *)dst, (const uint8_t *)src);
+		rte_mov16((uint8_t *)dst - 16 + n,
+				(const uint8_t *)src - 16 + n);
+		return ret;
+	}
+	if (n <= 48) {
+		rte_mov16((uint8_t *)dst, (const uint8_t *)src);
+		rte_mov16((uint8_t *)dst + 16, (const uint8_t *)src + 16);
+		rte_mov16((uint8_t *)dst - 16 + n,
+				(const uint8_t *)src - 16 + n);
+		return ret;
+	}
+	if (n <= 64) {
+		rte_mov32((uint8_t *)dst, (const uint8_t *)src);
+		rte_mov32((uint8_t *)dst - 32 + n,
+				(const uint8_t *)src - 32 + n);
+		return ret;
+	}
+	if (n <= 256) {
+		if (n >= 128) {
+			n -= 128;
+			rte_mov128((uint8_t *)dst, (const uint8_t *)src);
+			src = (const uint8_t *)src + 128;
+			dst = (uint8_t *)dst + 128;
+		}
+COPY_BLOCK_128_BACK31:
+		if (n >= 64) {
+			n -= 64;
+			rte_mov64((uint8_t *)dst, (const uint8_t *)src);
+			src = (const uint8_t *)src + 64;
+			dst = (uint8_t *)dst + 64;
+		}
+		if (n > 32) {
+			rte_mov32((uint8_t *)dst, (const uint8_t *)src);
+			rte_mov32((uint8_t *)dst - 32 + n,
+					(const uint8_t *)src - 32 + n);
+			return ret;
+		}
+		if (n > 0) {
+			rte_mov32((uint8_t *)dst - 32 + n,
+					(const uint8_t *)src - 32 + n);
+		}
+		return ret;
+	}
+
+	/**
+	 * Make store aligned when copy size exceeds 256 bytes
+	 */
+	dstofss = (uintptr_t)dst & 0x1F;
+	if (dstofss > 0) {
+		dstofss = 32 - dstofss;
+		n -= dstofss;
+		rte_mov32((uint8_t *)dst, (const uint8_t *)src);
+		src = (const uint8_t *)src + dstofss;
+		dst = (uint8_t *)dst + dstofss;
+	}
+
+	/**
+	 * Copy 128-byte blocks
+	 */
+	rte_mov128blocks((uint8_t *)dst, (const uint8_t *)src, n);
+	bits = n;
+	n = n & 127;
+	bits -= n;
+	src = (const uint8_t *)src + bits;
+	dst = (uint8_t *)dst + bits;
+
+	/**
+	 * Copy whatever left
+	 */
+	goto COPY_BLOCK_128_BACK31;
+}
+
+#else /* RTE_MACHINE_CPUFLAG */
+
+#define ALIGNMENT_MASK 0x0F
+
+/**
+ * SSE & AVX implementation below
+ */
+
+/**
+ * Copy 16 bytes from one location to another,
+ * locations should not overlap.
+ */
+static inline void
+rte_mov16(uint8_t *dst, const uint8_t *src)
+{
+	__m128i xmm0;
+
+	xmm0 = _mm_loadu_si128((const __m128i *)(const __m128i *)src);
+	_mm_storeu_si128((__m128i *)dst, xmm0);
+}
+
+/**
+ * Copy 32 bytes from one location to another,
+ * locations should not overlap.
+ */
+static inline void
+rte_mov32(uint8_t *dst, const uint8_t *src)
+{
+	rte_mov16((uint8_t *)dst + 0 * 16, (const uint8_t *)src + 0 * 16);
+	rte_mov16((uint8_t *)dst + 1 * 16, (const uint8_t *)src + 1 * 16);
+}
+
+/**
+ * Copy 64 bytes from one location to another,
+ * locations should not overlap.
+ */
+static inline void
+rte_mov64(uint8_t *dst, const uint8_t *src)
+{
+	rte_mov16((uint8_t *)dst + 0 * 16, (const uint8_t *)src + 0 * 16);
+	rte_mov16((uint8_t *)dst + 1 * 16, (const uint8_t *)src + 1 * 16);
+	rte_mov16((uint8_t *)dst + 2 * 16, (const uint8_t *)src + 2 * 16);
+	rte_mov16((uint8_t *)dst + 3 * 16, (const uint8_t *)src + 3 * 16);
+}
+
+/**
+ * Copy 128 bytes from one location to another,
+ * locations should not overlap.
+ */
+static inline void
+rte_mov128(uint8_t *dst, const uint8_t *src)
+{
+	rte_mov16((uint8_t *)dst + 0 * 16, (const uint8_t *)src + 0 * 16);
+	rte_mov16((uint8_t *)dst + 1 * 16, (const uint8_t *)src + 1 * 16);
+	rte_mov16((uint8_t *)dst + 2 * 16, (const uint8_t *)src + 2 * 16);
+	rte_mov16((uint8_t *)dst + 3 * 16, (const uint8_t *)src + 3 * 16);
+	rte_mov16((uint8_t *)dst + 4 * 16, (const uint8_t *)src + 4 * 16);
+	rte_mov16((uint8_t *)dst + 5 * 16, (const uint8_t *)src + 5 * 16);
+	rte_mov16((uint8_t *)dst + 6 * 16, (const uint8_t *)src + 6 * 16);
+	rte_mov16((uint8_t *)dst + 7 * 16, (const uint8_t *)src + 7 * 16);
+}
+
+/**
+ * Copy 256 bytes from one location to another,
+ * locations should not overlap.
+ */
+static inline void
+rte_mov256(uint8_t *dst, const uint8_t *src)
+{
+	rte_mov16((uint8_t *)dst + 0 * 16, (const uint8_t *)src + 0 * 16);
+	rte_mov16((uint8_t *)dst + 1 * 16, (const uint8_t *)src + 1 * 16);
+	rte_mov16((uint8_t *)dst + 2 * 16, (const uint8_t *)src + 2 * 16);
+	rte_mov16((uint8_t *)dst + 3 * 16, (const uint8_t *)src + 3 * 16);
+	rte_mov16((uint8_t *)dst + 4 * 16, (const uint8_t *)src + 4 * 16);
+	rte_mov16((uint8_t *)dst + 5 * 16, (const uint8_t *)src + 5 * 16);
+	rte_mov16((uint8_t *)dst + 6 * 16, (const uint8_t *)src + 6 * 16);
+	rte_mov16((uint8_t *)dst + 7 * 16, (const uint8_t *)src + 7 * 16);
+	rte_mov16((uint8_t *)dst + 8 * 16, (const uint8_t *)src + 8 * 16);
+	rte_mov16((uint8_t *)dst + 9 * 16, (const uint8_t *)src + 9 * 16);
+	rte_mov16((uint8_t *)dst + 10 * 16, (const uint8_t *)src + 10 * 16);
+	rte_mov16((uint8_t *)dst + 11 * 16, (const uint8_t *)src + 11 * 16);
+	rte_mov16((uint8_t *)dst + 12 * 16, (const uint8_t *)src + 12 * 16);
+	rte_mov16((uint8_t *)dst + 13 * 16, (const uint8_t *)src + 13 * 16);
+	rte_mov16((uint8_t *)dst + 14 * 16, (const uint8_t *)src + 14 * 16);
+	rte_mov16((uint8_t *)dst + 15 * 16, (const uint8_t *)src + 15 * 16);
+}
+
+/**
+ * Macro for copying unaligned block from one location to another with constant load offset,
+ * 47 bytes leftover maximum,
+ * locations should not overlap.
+ * Requirements:
+ * - Store is aligned
+ * - Load offset is <offset>, which must be immediate value within [1, 15]
+ * - For <src>, make sure <offset> bit backwards & <16 - offset> bit forwards are available for loading
+ * - <dst>, <src>, <len> must be variables
+ * - __m128i <xmm0> ~ <xmm8> must be pre-defined
+ */
+#define MOVEUNALIGNED_LEFT47_IMM(dst, src, len, offset)                                                     \
+__extension__ ({                                                                                            \
+    int tmp;                                                                                                \
+    while (len >= 128 + 16 - offset) {                                                                      \
+        xmm0 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 0 * 16));                  \
+        len -= 128;                                                                                         \
+        xmm1 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 1 * 16));                  \
+        xmm2 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 2 * 16));                  \
+        xmm3 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 3 * 16));                  \
+        xmm4 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 4 * 16));                  \
+        xmm5 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 5 * 16));                  \
+        xmm6 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 6 * 16));                  \
+        xmm7 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 7 * 16));                  \
+        xmm8 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 8 * 16));                  \
+        src = (const uint8_t *)src + 128;                                                                   \
+        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 0 * 16), _mm_alignr_epi8(xmm1, xmm0, offset));        \
+        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 1 * 16), _mm_alignr_epi8(xmm2, xmm1, offset));        \
+        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 2 * 16), _mm_alignr_epi8(xmm3, xmm2, offset));        \
+        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 3 * 16), _mm_alignr_epi8(xmm4, xmm3, offset));        \
+        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 4 * 16), _mm_alignr_epi8(xmm5, xmm4, offset));        \
+        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 5 * 16), _mm_alignr_epi8(xmm6, xmm5, offset));        \
+        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 6 * 16), _mm_alignr_epi8(xmm7, xmm6, offset));        \
+        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 7 * 16), _mm_alignr_epi8(xmm8, xmm7, offset));        \
+        dst = (uint8_t *)dst + 128;                                                                         \
+    }                                                                                                       \
+    tmp = len;                                                                                              \
+    len = ((len - 16 + offset) & 127) + 16 - offset;                                                        \
+    tmp -= len;                                                                                             \
+    src = (const uint8_t *)src + tmp;                                                                       \
+    dst = (uint8_t *)dst + tmp;                                                                             \
+    if (len >= 32 + 16 - offset) {                                                                          \
+        while (len >= 32 + 16 - offset) {                                                                   \
+            xmm0 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 0 * 16));              \
+            len -= 32;                                                                                      \
+            xmm1 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 1 * 16));              \
+            xmm2 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 2 * 16));              \
+            src = (const uint8_t *)src + 32;                                                                \
+            _mm_storeu_si128((__m128i *)((uint8_t *)dst + 0 * 16), _mm_alignr_epi8(xmm1, xmm0, offset));    \
+            _mm_storeu_si128((__m128i *)((uint8_t *)dst + 1 * 16), _mm_alignr_epi8(xmm2, xmm1, offset));    \
+            dst = (uint8_t *)dst + 32;                                                                      \
+        }                                                                                                   \
+        tmp = len;                                                                                          \
+        len = ((len - 16 + offset) & 31) + 16 - offset;                                                     \
+        tmp -= len;                                                                                         \
+        src = (const uint8_t *)src + tmp;                                                                   \
+        dst = (uint8_t *)dst + tmp;                                                                         \
+    }                                                                                                       \
+})
+
+/**
+ * Macro for copying unaligned block from one location to another,
+ * 47 bytes leftover maximum,
+ * locations should not overlap.
+ * Use switch here because the aligning instruction requires immediate value for shift count.
+ * Requirements:
+ * - Store is aligned
+ * - Load offset is <offset>, which must be within [1, 15]
+ * - For <src>, make sure <offset> bit backwards & <16 - offset> bit forwards are available for loading
+ * - <dst>, <src>, <len> must be variables
+ * - __m128i <xmm0> ~ <xmm8> used in MOVEUNALIGNED_LEFT47_IMM must be pre-defined
+ */
+#define MOVEUNALIGNED_LEFT47(dst, src, len, offset)                   \
+__extension__ ({                                                      \
+    switch (offset) {                                                 \
+    case 0x01: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x01); break;    \
+    case 0x02: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x02); break;    \
+    case 0x03: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x03); break;    \
+    case 0x04: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x04); break;    \
+    case 0x05: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x05); break;    \
+    case 0x06: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x06); break;    \
+    case 0x07: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x07); break;    \
+    case 0x08: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x08); break;    \
+    case 0x09: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x09); break;    \
+    case 0x0A: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0A); break;    \
+    case 0x0B: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0B); break;    \
+    case 0x0C: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0C); break;    \
+    case 0x0D: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0D); break;    \
+    case 0x0E: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0E); break;    \
+    case 0x0F: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0F); break;    \
+    default:;                                                         \
+    }                                                                 \
+})
+
+static inline void *
+rte_memcpy_generic(void *dst, const void *src, size_t n)
+{
+	__m128i xmm0, xmm1, xmm2, xmm3, xmm4, xmm5, xmm6, xmm7, xmm8;
+	uintptr_t dstu = (uintptr_t)dst;
+	uintptr_t srcu = (uintptr_t)src;
+	void *ret = dst;
+	size_t dstofss;
+	size_t srcofs;
+
+	/**
+	 * Copy less than 16 bytes
+	 */
+	if (n < 16) {
+		if (n & 0x01) {
+			*(uint8_t *)dstu = *(const uint8_t *)srcu;
+			srcu = (uintptr_t)((const uint8_t *)srcu + 1);
+			dstu = (uintptr_t)((uint8_t *)dstu + 1);
+		}
+		if (n & 0x02) {
+			*(uint16_t *)dstu = *(const uint16_t *)srcu;
+			srcu = (uintptr_t)((const uint16_t *)srcu + 1);
+			dstu = (uintptr_t)((uint16_t *)dstu + 1);
+		}
+		if (n & 0x04) {
+			*(uint32_t *)dstu = *(const uint32_t *)srcu;
+			srcu = (uintptr_t)((const uint32_t *)srcu + 1);
+			dstu = (uintptr_t)((uint32_t *)dstu + 1);
+		}
+		if (n & 0x08)
+			*(uint64_t *)dstu = *(const uint64_t *)srcu;
+		return ret;
+	}
+
+	/**
+	 * Fast way when copy size doesn't exceed 512 bytes
+	 */
+	if (n <= 32) {
+		rte_mov16((uint8_t *)dst, (const uint8_t *)src);
+		rte_mov16((uint8_t *)dst - 16 + n,
+				(const uint8_t *)src - 16 + n);
+		return ret;
+	}
+	if (n <= 48) {
+		rte_mov32((uint8_t *)dst, (const uint8_t *)src);
+		rte_mov16((uint8_t *)dst - 16 + n,
+				(const uint8_t *)src - 16 + n);
+		return ret;
+	}
+	if (n <= 64) {
+		rte_mov32((uint8_t *)dst, (const uint8_t *)src);
+		rte_mov16((uint8_t *)dst + 32, (const uint8_t *)src + 32);
+		rte_mov16((uint8_t *)dst - 16 + n,
+				(const uint8_t *)src - 16 + n);
+		return ret;
+	}
+	if (n <= 128)
+		goto COPY_BLOCK_128_BACK15;
+	if (n <= 512) {
+		if (n >= 256) {
+			n -= 256;
+			rte_mov128((uint8_t *)dst, (const uint8_t *)src);
+			rte_mov128((uint8_t *)dst + 128,
+					(const uint8_t *)src + 128);
+			src = (const uint8_t *)src + 256;
+			dst = (uint8_t *)dst + 256;
+		}
+COPY_BLOCK_255_BACK15:
+		if (n >= 128) {
+			n -= 128;
+			rte_mov128((uint8_t *)dst, (const uint8_t *)src);
+			src = (const uint8_t *)src + 128;
+			dst = (uint8_t *)dst + 128;
+		}
+COPY_BLOCK_128_BACK15:
+		if (n >= 64) {
+			n -= 64;
+			rte_mov64((uint8_t *)dst, (const uint8_t *)src);
+			src = (const uint8_t *)src + 64;
+			dst = (uint8_t *)dst + 64;
+		}
+COPY_BLOCK_64_BACK15:
+		if (n >= 32) {
+			n -= 32;
+			rte_mov32((uint8_t *)dst, (const uint8_t *)src);
+			src = (const uint8_t *)src + 32;
+			dst = (uint8_t *)dst + 32;
+		}
+		if (n > 16) {
+			rte_mov16((uint8_t *)dst, (const uint8_t *)src);
+			rte_mov16((uint8_t *)dst - 16 + n,
+					(const uint8_t *)src - 16 + n);
+			return ret;
+		}
+		if (n > 0) {
+			rte_mov16((uint8_t *)dst - 16 + n,
+					(const uint8_t *)src - 16 + n);
+		}
+		return ret;
+	}
+
+	/**
+	 * Make store aligned when copy size exceeds 512 bytes,
+	 * and make sure the first 15 bytes are copied, because
+	 * unaligned copy functions require up to 15 bytes
+	 * backwards access.
+	 */
+	dstofss = (uintptr_t)dst & 0x0F;
+	if (dstofss > 0) {
+		dstofss = 16 - dstofss + 16;
+		n -= dstofss;
+		rte_mov32((uint8_t *)dst, (const uint8_t *)src);
+		src = (const uint8_t *)src + dstofss;
+		dst = (uint8_t *)dst + dstofss;
+	}
+	srcofs = ((uintptr_t)src & 0x0F);
+
+	/**
+	 * For aligned copy
+	 */
+	if (srcofs == 0) {
+		/**
+		 * Copy 256-byte blocks
+		 */
+		for (; n >= 256; n -= 256) {
+			rte_mov256((uint8_t *)dst, (const uint8_t *)src);
+			dst = (uint8_t *)dst + 256;
+			src = (const uint8_t *)src + 256;
+		}
+
+		/**
+		 * Copy whatever left
+		 */
+		goto COPY_BLOCK_255_BACK15;
+	}
+
+	/**
+	 * For copy with unaligned load
+	 */
+	MOVEUNALIGNED_LEFT47(dst, src, n, srcofs);
+
+	/**
+	 * Copy whatever left
+	 */
+	goto COPY_BLOCK_64_BACK15;
+}
+
+#endif /* RTE_MACHINE_CPUFLAG */
+
+static inline void *
+rte_memcpy_aligned(void *dst, const void *src, size_t n)
+{
+	void *ret = dst;
+
+	/* Copy size <= 16 bytes */
+	if (n < 16) {
+		if (n & 0x01) {
+			*(uint8_t *)dst = *(const uint8_t *)src;
+			src = (const uint8_t *)src + 1;
+			dst = (uint8_t *)dst + 1;
+		}
+		if (n & 0x02) {
+			*(uint16_t *)dst = *(const uint16_t *)src;
+			src = (const uint16_t *)src + 1;
+			dst = (uint16_t *)dst + 1;
+		}
+		if (n & 0x04) {
+			*(uint32_t *)dst = *(const uint32_t *)src;
+			src = (const uint32_t *)src + 1;
+			dst = (uint32_t *)dst + 1;
+		}
+		if (n & 0x08)
+			*(uint64_t *)dst = *(const uint64_t *)src;
+
+		return ret;
+	}
+
+	/* Copy 16 <= size <= 32 bytes */
+	if (n <= 32) {
+		rte_mov16((uint8_t *)dst, (const uint8_t *)src);
+		rte_mov16((uint8_t *)dst - 16 + n,
+				(const uint8_t *)src - 16 + n);
+
+		return ret;
+	}
+
+	/* Copy 32 < size <= 64 bytes */
+	if (n <= 64) {
+		rte_mov32((uint8_t *)dst, (const uint8_t *)src);
+		rte_mov32((uint8_t *)dst - 32 + n,
+				(const uint8_t *)src - 32 + n);
+
+		return ret;
+	}
+
+	/* Copy 64 bytes blocks */
+	for (; n >= 64; n -= 64) {
+		rte_mov64((uint8_t *)dst, (const uint8_t *)src);
+		dst = (uint8_t *)dst + 64;
+		src = (const uint8_t *)src + 64;
+	}
+
+	/* Copy whatever left */
+	rte_mov64((uint8_t *)dst - 64 + n,
+			(const uint8_t *)src - 64 + n);
+
+	return ret;
+}
+
+static inline void *
+rte_memcpy_internal(void *dst, const void *src, size_t n)
+{
+	if (!(((uintptr_t)dst | (uintptr_t)src) & ALIGNMENT_MASK))
+		return rte_memcpy_aligned(dst, src, n);
+	else
+		return rte_memcpy_generic(dst, src, n);
+}
+
+#ifdef __cplusplus
+}
+#endif
+
+#endif /* _RTE_MEMCPY_INTERNAL_X86_64_H_ */
diff --git a/lib/librte_eal/common/include/arch/x86/rte_memcpy_sse.c b/lib/librte_eal/common/include/arch/x86/rte_memcpy_sse.c
new file mode 100644
index 0000000..2532696
--- /dev/null
+++ b/lib/librte_eal/common/include/arch/x86/rte_memcpy_sse.c
@@ -0,0 +1,585 @@ 
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2010-2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <rte_memcpy.h>
+
+/**
+ * Macro for copying unaligned block from one location to another with constant load offset,
+ * 47 bytes leftover maximum,
+ * locations should not overlap.
+ * Requirements:
+ * - Store is aligned
+ * - Load offset is <offset>, which must be immediate value within [1, 15]
+ * - For <src>, make sure <offset> bit backwards & <16 - offset> bit forwards are available for loading
+ * - <dst>, <src>, <len> must be variables
+ * - __m128i <xmm0> ~ <xmm8> must be pre-defined
+ */
+#define MOVEUNALIGNED_LEFT47_IMM(dst, src, len, offset)                                                     \
+__extension__ ({                                                                                            \
+    int tmp;                                                                                                \
+    while (len >= 128 + 16 - offset) {                                                                      \
+        xmm0 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 0 * 16));                  \
+        len -= 128;                                                                                         \
+        xmm1 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 1 * 16));                  \
+        xmm2 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 2 * 16));                  \
+        xmm3 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 3 * 16));                  \
+        xmm4 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 4 * 16));                  \
+        xmm5 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 5 * 16));                  \
+        xmm6 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 6 * 16));                  \
+        xmm7 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 7 * 16));                  \
+        xmm8 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 8 * 16));                  \
+        src = (const uint8_t *)src + 128;                                                                   \
+        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 0 * 16), _mm_alignr_epi8(xmm1, xmm0, offset));        \
+        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 1 * 16), _mm_alignr_epi8(xmm2, xmm1, offset));        \
+        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 2 * 16), _mm_alignr_epi8(xmm3, xmm2, offset));        \
+        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 3 * 16), _mm_alignr_epi8(xmm4, xmm3, offset));        \
+        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 4 * 16), _mm_alignr_epi8(xmm5, xmm4, offset));        \
+        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 5 * 16), _mm_alignr_epi8(xmm6, xmm5, offset));        \
+        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 6 * 16), _mm_alignr_epi8(xmm7, xmm6, offset));        \
+        _mm_storeu_si128((__m128i *)((uint8_t *)dst + 7 * 16), _mm_alignr_epi8(xmm8, xmm7, offset));        \
+        dst = (uint8_t *)dst + 128;                                                                         \
+    }                                                                                                       \
+    tmp = len;                                                                                              \
+    len = ((len - 16 + offset) & 127) + 16 - offset;                                                        \
+    tmp -= len;                                                                                             \
+    src = (const uint8_t *)src + tmp;                                                                       \
+    dst = (uint8_t *)dst + tmp;                                                                             \
+    if (len >= 32 + 16 - offset) {                                                                          \
+        while (len >= 32 + 16 - offset) {                                                                   \
+            xmm0 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 0 * 16));              \
+            len -= 32;                                                                                      \
+            xmm1 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 1 * 16));              \
+            xmm2 = _mm_loadu_si128((const __m128i *)((const uint8_t *)src - offset + 2 * 16));              \
+            src = (const uint8_t *)src + 32;                                                                \
+            _mm_storeu_si128((__m128i *)((uint8_t *)dst + 0 * 16), _mm_alignr_epi8(xmm1, xmm0, offset));    \
+            _mm_storeu_si128((__m128i *)((uint8_t *)dst + 1 * 16), _mm_alignr_epi8(xmm2, xmm1, offset));    \
+            dst = (uint8_t *)dst + 32;                                                                      \
+        }                                                                                                   \
+        tmp = len;                                                                                          \
+        len = ((len - 16 + offset) & 31) + 16 - offset;                                                     \
+        tmp -= len;                                                                                         \
+        src = (const uint8_t *)src + tmp;                                                                   \
+        dst = (uint8_t *)dst + tmp;                                                                         \
+    }                                                                                                       \
+})
+
+/**
+ * Macro for copying unaligned block from one location to another,
+ * 47 bytes leftover maximum,
+ * locations should not overlap.
+ * Use switch here because the aligning instruction requires immediate value for shift count.
+ * Requirements:
+ * - Store is aligned
+ * - Load offset is <offset>, which must be within [1, 15]
+ * - For <src>, make sure <offset> bit backwards & <16 - offset> bit forwards are available for loading
+ * - <dst>, <src>, <len> must be variables
+ * - __m128i <xmm0> ~ <xmm8> used in MOVEUNALIGNED_LEFT47_IMM must be pre-defined
+ */
+#define MOVEUNALIGNED_LEFT47(dst, src, len, offset)                   \
+__extension__ ({                                                      \
+    switch (offset) {                                                 \
+    case 0x01: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x01); break;    \
+    case 0x02: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x02); break;    \
+    case 0x03: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x03); break;    \
+    case 0x04: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x04); break;    \
+    case 0x05: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x05); break;    \
+    case 0x06: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x06); break;    \
+    case 0x07: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x07); break;    \
+    case 0x08: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x08); break;    \
+    case 0x09: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x09); break;    \
+    case 0x0A: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0A); break;    \
+    case 0x0B: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0B); break;    \
+    case 0x0C: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0C); break;    \
+    case 0x0D: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0D); break;    \
+    case 0x0E: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0E); break;    \
+    case 0x0F: MOVEUNALIGNED_LEFT47_IMM(dst, src, n, 0x0F); break;    \
+    default:;                                                         \
+    }                                                                 \
+})
+
+void *
+rte_memcpy_sse(void *dst, const void *src, size_t n)
+{
+	if (!(((uintptr_t)dst | (uintptr_t)src) & 0x0F)) {
+		void *ret = dst;
+
+		/* Copy size <= 16 bytes */
+		if (n < 16) {
+			if (n & 0x01) {
+				*(uint8_t *)dst = *(const uint8_t *)src;
+				src = (const uint8_t *)src + 1;
+				dst = (uint8_t *)dst + 1;
+			}
+			if (n & 0x02) {
+				*(uint16_t *)dst = *(const uint16_t *)src;
+				src = (const uint16_t *)src + 1;
+				dst = (uint16_t *)dst + 1;
+			}
+			if (n & 0x04) {
+				*(uint32_t *)dst = *(const uint32_t *)src;
+				src = (const uint32_t *)src + 1;
+				dst = (uint32_t *)dst + 1;
+			}
+			if (n & 0x08)
+				*(uint64_t *)dst = *(const uint64_t *)src;
+
+			return ret;
+		}
+
+		/* Copy 16 <= size <= 32 bytes */
+		if (n <= 32) {
+			__m128i xmm0, xmm1;
+			xmm0 = _mm_loadu_si128((const __m128i *)src);
+			xmm1 = _mm_loadu_si128((const __m128i *)
+				((const uint8_t *)src - 16 + n));
+			_mm_storeu_si128((__m128i *)dst, xmm0);
+			_mm_storeu_si128((__m128i *)
+				((uint8_t *)dst - 16 + n), xmm1);
+
+			return ret;
+		}
+
+		/* Copy 32 < size <= 64 bytes */
+		if (n <= 64) {
+			__m128i xmm0, xmm1, xmm2, xmm3;
+			xmm0 = _mm_loadu_si128((const __m128i *)src);
+			xmm1 = _mm_loadu_si128((const __m128i *)
+				((const uint8_t *)src + 16));
+			xmm2 = _mm_loadu_si128((const __m128i *)
+				((const uint8_t *)src - 32 + n));
+			xmm3 = _mm_loadu_si128((const __m128i *)
+				((const uint8_t *)src - 16 + n));
+			_mm_storeu_si128((__m128i *)dst, xmm0);
+			_mm_storeu_si128((__m128i *)
+				((uint8_t *)dst + 16), xmm1);
+			_mm_storeu_si128((__m128i *)
+				((uint8_t *)dst - 32 + n), xmm2);
+			_mm_storeu_si128((__m128i *)
+				((uint8_t *)dst - 16 + n), xmm3);
+
+			return ret;
+		}
+
+		/* Copy 64 bytes blocks */
+		for (; n >= 64; n -= 64) {
+			__m128i xmm0, xmm1, xmm2, xmm3;
+			xmm0 = _mm_loadu_si128((const __m128i *)src);
+			xmm1 = _mm_loadu_si128((const __m128i *)
+				((const uint8_t *)src + 16));
+			xmm2 = _mm_loadu_si128((const __m128i *)
+				((const uint8_t *)src + 2*16));
+			xmm3 = _mm_loadu_si128((const __m128i *)
+				((const uint8_t *)src + 3*16));
+			_mm_storeu_si128((__m128i *)dst, xmm0);
+			_mm_storeu_si128((__m128i *)
+				((uint8_t *)dst + 16), xmm1);
+			_mm_storeu_si128((__m128i *)
+				((uint8_t *)dst + 2*16), xmm2);
+			_mm_storeu_si128((__m128i *)
+				((uint8_t *)dst + 3*16), xmm3);
+			dst = (uint8_t *)dst + 64;
+			src = (const uint8_t *)src + 64;
+		}
+
+		/* Copy whatever left */
+		__m128i xmm0, xmm1, xmm2, xmm3;
+		xmm0 = _mm_loadu_si128((const __m128i *)
+			((const uint8_t *)src - 64 + n));
+		xmm1 = _mm_loadu_si128((const __m128i *)
+			((const uint8_t *)src - 48 + n));
+		xmm2 = _mm_loadu_si128((const __m128i *)
+			((const uint8_t *)src - 32 + n));
+		xmm3 = _mm_loadu_si128((const __m128i *)
+			((const uint8_t *)src - 16 + n));
+		_mm_storeu_si128((__m128i *)((uint8_t *)dst - 64 + n), xmm0);
+		_mm_storeu_si128((__m128i *)((uint8_t *)dst - 48 + n), xmm1);
+		_mm_storeu_si128((__m128i *)((uint8_t *)dst - 32 + n), xmm2);
+		_mm_storeu_si128((__m128i *)((uint8_t *)dst - 16 + n), xmm3);
+
+		return ret;
+	} else {
+		__m128i xmm0, xmm1, xmm2, xmm3, xmm4, xmm5, xmm6, xmm7, xmm8;
+		uintptr_t dstu = (uintptr_t)dst;
+		uintptr_t srcu = (uintptr_t)src;
+		void *ret = dst;
+		size_t dstofss;
+		size_t srcofs;
+
+		/**
+		 * Copy less than 16 bytes
+		 */
+		if (n < 16) {
+			if (n & 0x01) {
+				*(uint8_t *)dstu = *(const uint8_t *)srcu;
+				srcu = (uintptr_t)((const uint8_t *)srcu + 1);
+				dstu = (uintptr_t)((uint8_t *)dstu + 1);
+			}
+			if (n & 0x02) {
+				*(uint16_t *)dstu = *(const uint16_t *)srcu;
+				srcu = (uintptr_t)((const uint16_t *)srcu + 1);
+				dstu = (uintptr_t)((uint16_t *)dstu + 1);
+			}
+			if (n & 0x04) {
+				*(uint32_t *)dstu = *(const uint32_t *)srcu;
+				srcu = (uintptr_t)((const uint32_t *)srcu + 1);
+				dstu = (uintptr_t)((uint32_t *)dstu + 1);
+			}
+			if (n & 0x08)
+				*(uint64_t *)dstu = *(const uint64_t *)srcu;
+			return ret;
+		}
+
+		/**
+		 * Fast way when copy size doesn't exceed 512 bytes
+		 */
+		if (n <= 32) {
+			__m128i xmm0, xmm1;
+			xmm0 = _mm_loadu_si128((const __m128i *)src);
+			xmm1 = _mm_loadu_si128((const __m128i *)
+				((const uint8_t *)src - 16 + n));
+			_mm_storeu_si128((__m128i *)dst, xmm0);
+			_mm_storeu_si128((__m128i *)
+				((uint8_t *)dst - 16 + n), xmm1);
+			return ret;
+		}
+		if (n <= 48) {
+			__m128i xmm0, xmm1, xmm2;
+			xmm0 = _mm_loadu_si128((const __m128i *)src);
+			xmm1 = _mm_loadu_si128((const __m128i *)
+				((const uint8_t *)src + 16));
+			xmm2 = _mm_loadu_si128((const __m128i *)
+				((const uint8_t *)src - 16 + n));
+			_mm_storeu_si128((__m128i *)dst, xmm0);
+			_mm_storeu_si128((__m128i *)
+				((uint8_t *)dst + 16), xmm1);
+			_mm_storeu_si128((__m128i *)
+				((uint8_t *)dst - 16 + n), xmm2);
+			return ret;
+		}
+		if (n <= 64) {
+			__m128i xmm0, xmm1, xmm2, xmm3;
+			xmm0 = _mm_loadu_si128((const __m128i *)src);
+			xmm1 = _mm_loadu_si128((const __m128i *)
+				((const uint8_t *)src + 16));
+			xmm2 = _mm_loadu_si128((const __m128i *)
+				((const uint8_t *)src + 32));
+			xmm3 = _mm_loadu_si128((const __m128i *)
+				((const uint8_t *)src - 16 + n));
+			_mm_storeu_si128((__m128i *)dst, xmm0);
+			_mm_storeu_si128((__m128i *)
+				((uint8_t *)dst + 16), xmm1);
+			_mm_storeu_si128((__m128i *)
+				((uint8_t *)dst + 32), xmm2);
+			_mm_storeu_si128((__m128i *)
+				((uint8_t *)dst - 16 + n), xmm3);
+			return ret;
+		}
+		if (n <= 128)
+			goto COPY_BLOCK_128_BACK15;
+		if (n <= 512) {
+			if (n >= 256) {
+				n -= 256;
+				__m128i xmm0, xmm1;
+				xmm0 = _mm_loadu_si128((const __m128i *)src);
+				xmm1 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 16));
+				_mm_storeu_si128((__m128i *)dst, xmm0);
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 16), xmm1);
+				xmm0 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 2*16));
+				xmm1 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 3*16));
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 2*16), xmm0);
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 3*16), xmm1);
+				xmm0 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 4*16));
+				xmm1 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 5*16));
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 4*16), xmm0);
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 5*16), xmm1);
+				xmm0 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 6*16));
+				xmm1 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 7*16));
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 6*16), xmm0);
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 7*16), xmm1);
+
+				xmm0 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 128));
+				xmm1 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 128 + 16));
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 128), xmm0);
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 128 + 16), xmm1);
+				xmm0 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 128 + 2*16));
+				xmm1 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 128 + 3*16));
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 128 + 2*16), xmm0);
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 128 + 3*16), xmm1);
+				xmm0 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 128 + 4*16));
+				xmm1 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 128 + 5*16));
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 128 + 4*16), xmm0);
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 128 + 5*16), xmm1);
+				xmm0 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 128 + 6*16));
+				xmm1 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 128 + 7*16));
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 128 + 6*16), xmm0);
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 128 + 7*16), xmm1);
+				src = (const uint8_t *)src + 256;
+				dst = (uint8_t *)dst + 256;
+			}
+COPY_BLOCK_255_BACK15:
+			if (n >= 128) {
+				n -= 128;
+				__m128i xmm0, xmm1;
+				xmm0 = _mm_loadu_si128((const __m128i *)src);
+				xmm1 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 16));
+				_mm_storeu_si128((__m128i *)dst, xmm0);
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 16), xmm1);
+				xmm0 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 2*16));
+				xmm1 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 3*16));
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 2*16), xmm0);
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 3*16), xmm1);
+				xmm0 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 4*16));
+				xmm1 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 5*16));
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 4*16), xmm0);
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 5*16), xmm1);
+				xmm0 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 6*16));
+				xmm1 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 7*16));
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 6*16), xmm0);
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 7*16), xmm1);
+				src = (const uint8_t *)src + 128;
+				dst = (uint8_t *)dst + 128;
+			}
+COPY_BLOCK_128_BACK15:
+			if (n >= 64) {
+				n -= 64;
+				__m128i xmm0, xmm1;
+				xmm0 = _mm_loadu_si128((const __m128i *)src);
+				xmm1 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 16));
+				_mm_storeu_si128((__m128i *)dst, xmm0);
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 16), xmm1);
+
+				xmm0 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 2*16));
+				xmm1 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 3*16));
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 2*16), xmm0);
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 3*16), xmm1);
+				src = (const uint8_t *)src + 64;
+				dst = (uint8_t *)dst + 64;
+			}
+COPY_BLOCK_64_BACK15:
+			if (n >= 32) {
+				n -= 32;
+				__m128i xmm0, xmm1;
+				xmm0 = _mm_loadu_si128((const __m128i *)src);
+				xmm1 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 16));
+				_mm_storeu_si128((__m128i *)dst, xmm0);
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 16), xmm1);
+				src = (const uint8_t *)src + 32;
+				dst = (uint8_t *)dst + 32;
+			}
+			if (n > 16) {
+				__m128i xmm0, xmm1;
+				xmm0 = _mm_loadu_si128((const __m128i *)src);
+				xmm1 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src - 16 + n));
+				_mm_storeu_si128((__m128i *)dst, xmm0);
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst - 16 + n), xmm1);
+				return ret;
+			}
+			if (n > 0) {
+				__m128i xmm0;
+				xmm0 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src - 16 + n));
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst - 16 + n), xmm0);
+			}
+			return ret;
+		}
+
+		/**
+		 * Make store aligned when copy size exceeds 512 bytes,
+		 * and make sure the first 15 bytes are copied, because
+		 * unaligned copy functions require up to 15 bytes
+		 * backwards access.
+		 */
+		dstofss = (uintptr_t)dst & 0x0F;
+		if (dstofss > 0) {
+			dstofss = 16 - dstofss + 16;
+			n -= dstofss;
+			__m128i xmm0, xmm1;
+			xmm0 = _mm_loadu_si128((const __m128i *)src);
+			xmm1 = _mm_loadu_si128((const __m128i *)
+				((const uint8_t *)src + 16));
+			_mm_storeu_si128((__m128i *)dst, xmm0);
+			_mm_storeu_si128((__m128i *)
+				((uint8_t *)dst + 16), xmm1);
+			src = (const uint8_t *)src + dstofss;
+			dst = (uint8_t *)dst + dstofss;
+		}
+		srcofs = ((uintptr_t)src & 0x0F);
+
+		/**
+		 * For aligned copy
+		 */
+		if (srcofs == 0) {
+			/**
+			 * Copy 256-byte blocks
+			 */
+			for (; n >= 256; n -= 256) {
+				__m128i xmm0, xmm1;
+				xmm0 = _mm_loadu_si128((const __m128i *)src);
+				xmm1 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 16));
+				_mm_storeu_si128((__m128i *)dst, xmm0);
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 16), xmm1);
+				xmm0 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 2*16));
+				xmm1 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 3*16));
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 2*16), xmm0);
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 3*16), xmm1);
+				xmm0 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 4*16));
+				xmm1 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 5*16));
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 4*16), xmm0);
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 5*16), xmm1);
+				xmm0 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 6*16));
+				xmm1 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 7*16));
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 6*16), xmm0);
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 7*16), xmm1);
+
+				xmm0 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 8*16));
+				xmm1 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 9*16));
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 8*16), xmm0);
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 9*16), xmm1);
+				xmm0 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 10*16));
+				xmm1 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 11*16));
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 10*16), xmm0);
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 11*16), xmm1);
+				xmm0 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 12*16));
+				xmm1 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 13*16));
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 12*16), xmm0);
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 13*16), xmm1);
+				xmm0 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 14*16));
+				xmm1 = _mm_loadu_si128((const __m128i *)
+					((const uint8_t *)src + 15*16));
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 14*16), xmm0);
+				_mm_storeu_si128((__m128i *)
+					((uint8_t *)dst + 15*16), xmm1);
+				dst = (uint8_t *)dst + 256;
+				src = (const uint8_t *)src + 256;
+			}
+
+			/**
+			 * Copy whatever left
+			 */
+			goto COPY_BLOCK_255_BACK15;
+		}
+
+		/**
+		 * For copy with unaligned load
+		 */
+		MOVEUNALIGNED_LEFT47(dst, src, n, srcofs);
+
+		/**
+		 * Copy whatever left
+		 */
+		goto COPY_BLOCK_64_BACK15;
+	}
+}
diff --git a/lib/librte_eal/linuxapp/eal/Makefile b/lib/librte_eal/linuxapp/eal/Makefile
index 90bca4d..88d3298 100644
--- a/lib/librte_eal/linuxapp/eal/Makefile
+++ b/lib/librte_eal/linuxapp/eal/Makefile
@@ -40,6 +40,7 @@  VPATH += $(RTE_SDK)/lib/librte_eal/common/arch/$(ARCH_DIR)
 LIBABIVER := 5
 
 VPATH += $(RTE_SDK)/lib/librte_eal/common
+VPATH += $(RTE_SDK)/lib/librte_eal/common/include/arch/$(ARCH_DIR)
 
 CFLAGS += -I$(SRCDIR)/include
 CFLAGS += -I$(RTE_SDK)/lib/librte_eal/common
@@ -105,6 +106,22 @@  SRCS-$(CONFIG_RTE_EXEC_ENV_LINUXAPP) += rte_service.c
 SRCS-$(CONFIG_RTE_EXEC_ENV_LINUXAPP) += rte_cpuflags.c
 SRCS-$(CONFIG_RTE_ARCH_X86) += rte_spinlock.c
 
+# for run-time dispatch of memcpy
+SRCS-$(CONFIG_RTE_ARCH_X86) += rte_memcpy.c
+SRCS-$(CONFIG_RTE_ARCH_X86) += rte_memcpy_sse.c
+
+# if the compiler supports AVX512, add avx512 file
+ifneq ($(filter $(MACHINE_CFLAGS),CC_SUPPORT_AVX512F),)
+SRCS-$(CONFIG_RTE_ARCH_X86) += rte_memcpy_avx512f.c
+CFLAGS_rte_memcpy_avx512f.o += -mavx512f
+endif
+
+# if the compiler supports AVX2, add avx2 file
+ifneq ($(filter $(MACHINE_CFLAGS),CC_SUPPORT_AVX2),)
+SRCS-$(CONFIG_RTE_ARCH_X86) += rte_memcpy_avx2.c
+CFLAGS_rte_memcpy_avx2.o += -mavx2
+endif
+
 CFLAGS_eal_common_cpuflags.o := $(CPUFLAGS_LIST)
 
 CFLAGS_eal.o := -D_GNU_SOURCE
diff --git a/mk/rte.cpuflags.mk b/mk/rte.cpuflags.mk
index a813c91..8a7a1e7 100644
--- a/mk/rte.cpuflags.mk
+++ b/mk/rte.cpuflags.mk
@@ -134,6 +134,20 @@  endif
 
 MACHINE_CFLAGS += $(addprefix -DRTE_MACHINE_CPUFLAG_,$(CPUFLAGS))
 
+# Check if the compiler suppoerts AVX512
+CC_SUPPORT_AVX512F := $(shell $(CC) -mavx512f -dM -E - < /dev/null 2>&1 | grep -q AVX512 && echo 1)
+ifeq ($(CC_SUPPORT_AVX512F),1)
+ifeq ($(CONFIG_RTE_ENABLE_AVX512),y)
+MACHINE_CFLAGS += -DCC_SUPPORT_AVX512F
+endif
+endif
+
+# Check if the compiler supports AVX2
+CC_SUPPORT_AVX2 := $(shell $(CC) -mavx2 -dM -E - < /dev/null 2>&1 | grep -q AVX2 && echo 1)
+ifeq ($(CC_SUPPORT_AVX2),1)
+MACHINE_CFLAGS += -DCC_SUPPORT_AVX2
+endif
+
 # To strip whitespace
 comma:= ,
 empty:=