From patchwork Wed Aug 23 13:51:14 2017
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: "Iremonger, Bernard" <bernard.iremonger@intel.com>
X-Patchwork-Id: 27769
From: Bernard Iremonger <bernard.iremonger@intel.com>
To: dev@dpdk.org, ferruh.yigit@intel.com, konstantin.ananyev@intel.com,
 cristian.dumitrescu@intel.com, adrien.mazarguil@6wind.com
Cc: Bernard Iremonger <bernard.iremonger@intel.com>
Date: Wed, 23 Aug 2017 14:51:14 +0100
Message-Id: <1503496275-27492-6-git-send-email-bernard.iremonger@intel.com>
X-Mailer: git-send-email 1.7.0.7
In-Reply-To: <20170525154634.44352-1-ferruh.yigit@intel.com>
References: <20170525154634.44352-1-ferruh.yigit@intel.com>
Subject: [dpdk-dev] [PATCH v1 5/6] examples/flow_classify: flow classify sample application
List-Id: DPDK patches and discussions <dev.dpdk.org>

The flow_classify sample application exercises the following
librte_flow_classify APIs:

rte_flow_classify_create
rte_flow_classify_validate
rte_flow_classify_destroy
rte_flow_classify_query

It sets up the IPv4 ACL field definitions and creates table_acl
using the librte_table API.

Signed-off-by: Bernard Iremonger <bernard.iremonger@intel.com>
---
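Note for reviewers: a minimal sketch of the call sequence the sample
drives, assuming a classify table already created with
rte_table_acl_ops.f_create() as below; error handling is trimmed and
"flow", "pattern" and "stats" are illustrative names:

	/* validate the rule, then add it to the classify table */
	ret = rte_flow_classify_validate(table_acl, &attr, pattern,
			actions, &error);
	if (ret == 0)
		flow = rte_flow_classify_create(table_acl, &attr, pattern,
				actions, &error);

	/* for each received burst, update stats for matching packets */
	ret = rte_flow_classify_query(table_acl, flow, bufs, nb_rx,
			&stats, &error);

	/* remove the rule when it is no longer needed */
	ret = rte_flow_classify_destroy(table_acl, flow, &error);
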
 examples/flow_classify/Makefile        |  57 +++
 examples/flow_classify/flow_classify.c | 625 +++++++++++++++++++++++++++++++++
 2 files changed, 682 insertions(+)
 create mode 100644 examples/flow_classify/Makefile
 create mode 100644 examples/flow_classify/flow_classify.c

diff --git a/examples/flow_classify/Makefile b/examples/flow_classify/Makefile
new file mode 100644
index 0000000..eecdde1
--- /dev/null
+++ b/examples/flow_classify/Makefile
@@ -0,0 +1,57 @@
+# BSD LICENSE
+#
+# Copyright(c) 2017 Intel Corporation. All rights reserved.
+# All rights reserved.
+#
+# Redistribution and use in source and binary forms, with or without
+# modification, are permitted provided that the following conditions
+# are met:
+#
+#   * Redistributions of source code must retain the above copyright
+#     notice, this list of conditions and the following disclaimer.
+#   * Redistributions in binary form must reproduce the above copyright
+#     notice, this list of conditions and the following disclaimer in
+#     the documentation and/or other materials provided with the
+#     distribution.
+#   * Neither the name of Intel Corporation nor the names of its
+#     contributors may be used to endorse or promote products derived
+#     from this software without specific prior written permission.
+#
+# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+# A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+# OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+# SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+# LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+# DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+# THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+# (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+# OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+ifeq ($(RTE_SDK),)
+$(error "Please define RTE_SDK environment variable")
+endif
+
+# Default target, can be overridden by command line or environment
+RTE_TARGET ?= x86_64-native-linuxapp-gcc
+
+include $(RTE_SDK)/mk/rte.vars.mk
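+
+# Build sketch (paths are illustrative; assumes a DPDK tree already
+# built for the chosen target):
+#   export RTE_SDK=/path/to/dpdk
+#   export RTE_TARGET=x86_64-native-linuxapp-gcc
+#   make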
+
+# binary name
+APP = flow_classify
+
+# all sources are stored in SRCS-y
+SRCS-y := flow_classify.c
+
+CFLAGS += -O3
+CFLAGS += $(WERROR_FLAGS)
+
+# workaround for a gcc bug with noreturn attribute
+# http://gcc.gnu.org/bugzilla/show_bug.cgi?id=12603
+ifeq ($(CONFIG_RTE_TOOLCHAIN_GCC),y)
+CFLAGS_main.o += -Wno-return-type
+endif
+
+include $(RTE_SDK)/mk/rte.extapp.mk

diff --git a/examples/flow_classify/flow_classify.c b/examples/flow_classify/flow_classify.c
new file mode 100644
index 0000000..61b0241
--- /dev/null
+++ b/examples/flow_classify/flow_classify.c
@@ -0,0 +1,625 @@
+/*-
+ *   BSD LICENSE
+ *
+ *   Copyright(c) 2017 Intel Corporation. All rights reserved.
+ *   All rights reserved.
+ *
+ *   Redistribution and use in source and binary forms, with or without
+ *   modification, are permitted provided that the following conditions
+ *   are met:
+ *
+ *     * Redistributions of source code must retain the above copyright
+ *       notice, this list of conditions and the following disclaimer.
+ *     * Redistributions in binary form must reproduce the above copyright
+ *       notice, this list of conditions and the following disclaimer in
+ *       the documentation and/or other materials provided with the
+ *       distribution.
+ *     * Neither the name of Intel Corporation nor the names of its
+ *       contributors may be used to endorse or promote products derived
+ *       from this software without specific prior written permission.
+ *
+ *   THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
+ *   "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
+ *   LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
+ *   A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
+ *   OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
+ *   SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
+ *   LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+ *   DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+ *   THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+ *   (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+ *   OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+ */
+
+#include <stdint.h>
+#include <inttypes.h>
+
+#include <rte_eal.h>
+#include <rte_ethdev.h>
+#include <rte_cycles.h>
+#include <rte_lcore.h>
+#include <rte_mbuf.h>
+#include <rte_flow.h>
+#include <rte_flow_classify.h>
+#include <rte_table_acl.h>
+
+#define RX_RING_SIZE 128
+#define TX_RING_SIZE 512
+
+#define NUM_MBUFS 8191
+#define MBUF_CACHE_SIZE 250
+#define BURST_SIZE 32
+#define MAX_NUM_CLASSIFY 5
+#define FLOW_CLASSIFY_MAX_RULE_NUM 10
+
+static const struct rte_eth_conf port_conf_default = {
+	.rxmode = { .max_rx_pkt_len = ETHER_MAX_LEN }
+};
+
+static void *table_acl;
+
+/* ACL field definitions for IPv4 5-tuple rule */
+
+enum {
+	PROTO_FIELD_IPV4,
+	SRC_FIELD_IPV4,
+	DST_FIELD_IPV4,
+	SRCP_FIELD_IPV4,
+	DSTP_FIELD_IPV4,
+	NUM_FIELDS_IPV4
+};
+
+enum {
+	PROTO_INPUT_IPV4,
+	SRC_INPUT_IPV4,
+	DST_INPUT_IPV4,
+	SRCP_DESTP_INPUT_IPV4
+};
+
+static struct rte_acl_field_def ipv4_defs[NUM_FIELDS_IPV4] = {
+	/* first input field - always one byte long. */
+	{
+		.type = RTE_ACL_FIELD_TYPE_BITMASK,
+		.size = sizeof(uint8_t),
+		.field_index = PROTO_FIELD_IPV4,
+		.input_index = PROTO_INPUT_IPV4,
+		.offset = 0,
+	},
+	/* next input field (IPv4 source address) - 4 consecutive bytes. */
+	{
+		/* rte_flow uses a bit mask for IPv4 addresses */
+		.type = RTE_ACL_FIELD_TYPE_BITMASK,
+		.size = sizeof(uint32_t),
+		.field_index = SRC_FIELD_IPV4,
+		.input_index = SRC_INPUT_IPV4,
+		.offset = offsetof(struct ipv4_hdr, src_addr) -
+			offsetof(struct ipv4_hdr, next_proto_id),
+	},
+	/* next input field (IPv4 destination address) - 4 consecutive bytes. */
+	{
+		/* rte_flow uses a bit mask for IPv4 addresses */
+		.type = RTE_ACL_FIELD_TYPE_BITMASK,
+		.size = sizeof(uint32_t),
+		.field_index = DST_FIELD_IPV4,
+		.input_index = DST_INPUT_IPV4,
+		.offset = offsetof(struct ipv4_hdr, dst_addr) -
+			offsetof(struct ipv4_hdr, next_proto_id),
+	},
+	/*
+	 * Next 2 fields (src & dst ports) form 4 consecutive bytes.
+	 * They share the same input index.
+	 */
+	{
+		/* rte_flow uses a bit mask for protocol ports */
+		.type = RTE_ACL_FIELD_TYPE_BITMASK,
+		.size = sizeof(uint16_t),
+		.field_index = SRCP_FIELD_IPV4,
+		.input_index = SRCP_DESTP_INPUT_IPV4,
+		.offset = sizeof(struct ipv4_hdr) -
+			offsetof(struct ipv4_hdr, next_proto_id),
+	},
+	{
+		/* rte_flow uses a bit mask for protocol ports */
+		.type = RTE_ACL_FIELD_TYPE_BITMASK,
+		.size = sizeof(uint16_t),
+		.field_index = DSTP_FIELD_IPV4,
+		.input_index = SRCP_DESTP_INPUT_IPV4,
+		.offset = sizeof(struct ipv4_hdr) -
+			offsetof(struct ipv4_hdr, next_proto_id) +
+			sizeof(uint16_t),
+	},
+};
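+
+/*
+ * Note: the field offsets above are all taken relative to
+ * offsetof(struct ipv4_hdr, next_proto_id), i.e. the region matched by
+ * the ACL table is assumed to start at the IPv4 next_proto_id byte and
+ * run through the L4 source/destination ports.
+ */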
+
+/* flow classify data */
+static struct rte_flow_classify *udp_flow_classify[MAX_NUM_CLASSIFY];
+static struct rte_flow_classify *tcp_flow_classify[MAX_NUM_CLASSIFY];
+static struct rte_flow_classify *sctp_flow_classify[MAX_NUM_CLASSIFY];
+
+static struct rte_flow_classify_5tuple_stats udp_ntuple_stats;
+static struct rte_flow_classify_stats udp_classify_stats = {
+	.available_space = BURST_SIZE,
+	.used_space = 0,
+	.stats = (void **)&udp_ntuple_stats
+};
+
+static struct rte_flow_classify_5tuple_stats tcp_ntuple_stats;
+static struct rte_flow_classify_stats tcp_classify_stats = {
+	.available_space = BURST_SIZE,
+	.used_space = 0,
+	.stats = (void **)&tcp_ntuple_stats
+};
+
+static struct rte_flow_classify_5tuple_stats sctp_ntuple_stats;
+static struct rte_flow_classify_stats sctp_classify_stats = {
+	.available_space = BURST_SIZE,
+	.used_space = 0,
+	.stats = (void **)&sctp_ntuple_stats
+};
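+
+/*
+ * rte_flow_classify_query() reports its results through these buffers:
+ * for each queried rule the 5-tuple stats (counter1) and the
+ * used_space/available_space accounting are updated, and are printed
+ * from lcore_main().
+ */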
+
+/* parameters for rte_flow_classify_validate and rte_flow_classify_create */
+
+/* first sample UDP pattern:
+ * "eth / ipv4 src spec 2.2.2.3 src mask 255.255.255.0 dst spec 2.2.2.7
+ * dst mask 255.255.255.0 / udp src is 32 dst is 33 / end"
+ */
+static struct rte_flow_item_ipv4 ipv4_udp_spec_1 = {
+	{ 0, 0, 0, 0, 0, 0, 17, 0, IPv4(2, 2, 2, 3), IPv4(2, 2, 2, 7)}
+};
+static const struct rte_flow_item_ipv4 ipv4_mask_24 = {
+	.hdr = {
+		.next_proto_id = 0xff,
+		.src_addr = 0xffffff00,
+		.dst_addr = 0xffffff00,
+	},
+};
+static struct rte_flow_item_udp udp_spec_1 = {
+	{ 32, 33, 0, 0 }
+};
+
+static struct rte_flow_item eth_item = { RTE_FLOW_ITEM_TYPE_ETH,
+	0, 0, 0 };
+static struct rte_flow_item ipv4_udp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4,
+	&ipv4_udp_spec_1, 0, &ipv4_mask_24};
+static struct rte_flow_item udp_item_1 = { RTE_FLOW_ITEM_TYPE_UDP,
+	&udp_spec_1, 0, &rte_flow_item_udp_mask};
+static struct rte_flow_item end_item = { RTE_FLOW_ITEM_TYPE_END,
+	0, 0, 0 };
+static struct rte_flow_item pattern_udp_1[4];
+
+/* second sample UDP pattern:
+ * "eth / ipv4 src is 9.9.9.3 dst is 9.9.9.7 / udp src is 32 dst is 33 / end"
+ */
+static struct rte_flow_item_ipv4 ipv4_udp_spec_2 = {
+	{ 0, 0, 0, 0, 0, 0, 17, 0, IPv4(9, 9, 9, 3), IPv4(9, 9, 9, 7)}
+};
+static struct rte_flow_item_udp udp_spec_2 = {
+	{ 32, 33, 0, 0 }
+};
+
+static struct rte_flow_item ipv4_udp_item_2 = { RTE_FLOW_ITEM_TYPE_IPV4,
+	&ipv4_udp_spec_2, 0, &rte_flow_item_ipv4_mask};
+static struct rte_flow_item udp_item_2 = { RTE_FLOW_ITEM_TYPE_UDP,
+	&udp_spec_2, 0, &rte_flow_item_udp_mask};
+static struct rte_flow_item pattern_udp_2[4];
+
+/* first sample TCP pattern:
+ * "eth / ipv4 src spec 9.9.9.3 src mask 255.255.255.0 dst spec 9.9.9.7
+ * dst mask 255.255.255.0 / tcp src is 32 dst is 33 / end"
+ */
+static struct rte_flow_item_ipv4 ipv4_tcp_spec_1 = {
+	{ 0, 0, 0, 0, 0, 0, 6, 0, IPv4(9, 9, 9, 3), IPv4(9, 9, 9, 7)}
+};
+static struct rte_flow_item_tcp tcp_spec_1 = {
+	{ 32, 33, 0, 0, 0, 0, 0, 0, 0 }
+};
+
+static struct rte_flow_item ipv4_tcp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4,
+	&ipv4_tcp_spec_1, 0, &ipv4_mask_24};
+static struct rte_flow_item tcp_item_1 = { RTE_FLOW_ITEM_TYPE_TCP,
+	&tcp_spec_1, 0, &rte_flow_item_tcp_mask};
+static struct rte_flow_item pattern_tcp_1[4];
+
+/* second sample TCP pattern:
+ * "eth / ipv4 src is 9.9.8.3 dst is 9.9.8.7 / tcp src is 32 dst is 33 / end"
+ */
+static struct rte_flow_item_ipv4 ipv4_tcp_spec_2 = {
+	{ 0, 0, 0, 0, 0, 0, 6, 0, IPv4(9, 9, 8, 3), IPv4(9, 9, 8, 7)}
+};
+static struct rte_flow_item_tcp tcp_spec_2 = {
+	{ 32, 33, 0, 0, 0, 0, 0, 0, 0 }
+};
+
+static struct rte_flow_item ipv4_tcp_item_2 = { RTE_FLOW_ITEM_TYPE_IPV4,
+	&ipv4_tcp_spec_2, 0, &rte_flow_item_ipv4_mask};
+static struct rte_flow_item tcp_item_2 = { RTE_FLOW_ITEM_TYPE_TCP,
+	&tcp_spec_2, 0, &rte_flow_item_tcp_mask};
+static struct rte_flow_item pattern_tcp_2[4];
+
+/* first sample SCTP pattern:
+ * "eth / ipv4 src is 6.7.8.9 dst is 2.3.4.5 / sctp src is 32 dst is 33 / end"
+ */
+static struct rte_flow_item_ipv4 ipv4_sctp_spec_1 = {
+	{ 0, 0, 0, 0, 0, 0, 132, 0, IPv4(6, 7, 8, 9), IPv4(2, 3, 4, 5)}
+};
+static struct rte_flow_item_sctp sctp_spec_1 = {
+	{ 32, 33, 0, 0 }
+};
+
+static struct rte_flow_item ipv4_sctp_item_1 = { RTE_FLOW_ITEM_TYPE_IPV4,
+	&ipv4_sctp_spec_1, 0, &rte_flow_item_ipv4_mask};
+static struct rte_flow_item sctp_item_1 = { RTE_FLOW_ITEM_TYPE_SCTP,
+	&sctp_spec_1, 0, &rte_flow_item_sctp_mask};
+static struct rte_flow_item pattern_sctp_1[4];
+
+/* sample actions:
+ * "actions count / end"
+ */
+static struct rte_flow_action count_action = { RTE_FLOW_ACTION_TYPE_COUNT, 0};
+static struct rte_flow_action end_action = { RTE_FLOW_ACTION_TYPE_END, 0};
+static struct rte_flow_action actions[2];
+
+/* sample attributes */
+static struct rte_flow_attr attr;
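+
+/*
+ * The pattern_*[4] arrays above are assembled in main() as
+ * { eth_item, ipv4_item, l4_item, end_item }, matching the rule
+ * strings quoted in the comments.
+ */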
+
+/* flow_classify.c: Based on DPDK skeleton forwarding example. */
+
+/*
+ * Initializes a given port using global settings and with the RX buffers
+ * coming from the mbuf_pool passed as a parameter.
+ */
+static inline int
+port_init(uint8_t port, struct rte_mempool *mbuf_pool)
+{
+	struct rte_eth_conf port_conf = port_conf_default;
+	struct ether_addr addr;
+	const uint16_t rx_rings = 1, tx_rings = 1;
+	int retval;
+	uint16_t q;
+
+	if (port >= rte_eth_dev_count())
+		return -1;
+
+	/* Configure the Ethernet device. */
+	retval = rte_eth_dev_configure(port, rx_rings, tx_rings, &port_conf);
+	if (retval != 0)
+		return retval;
+
+	/* Allocate and set up 1 RX queue per Ethernet port. */
+	for (q = 0; q < rx_rings; q++) {
+		retval = rte_eth_rx_queue_setup(port, q, RX_RING_SIZE,
+				rte_eth_dev_socket_id(port), NULL, mbuf_pool);
+		if (retval < 0)
+			return retval;
+	}
+
+	/* Allocate and set up 1 TX queue per Ethernet port. */
+	for (q = 0; q < tx_rings; q++) {
+		retval = rte_eth_tx_queue_setup(port, q, TX_RING_SIZE,
+				rte_eth_dev_socket_id(port), NULL);
+		if (retval < 0)
+			return retval;
+	}
+
+	/* Start the Ethernet port. */
+	retval = rte_eth_dev_start(port);
+	if (retval < 0)
+		return retval;
+
+	/* Display the port MAC address. */
+	rte_eth_macaddr_get(port, &addr);
+	printf("Port %u MAC: %02" PRIx8 " %02" PRIx8 " %02" PRIx8
+		" %02" PRIx8 " %02" PRIx8 " %02" PRIx8 "\n",
+		port,
+		addr.addr_bytes[0], addr.addr_bytes[1],
+		addr.addr_bytes[2], addr.addr_bytes[3],
+		addr.addr_bytes[4], addr.addr_bytes[5]);
+
+	/* Enable RX in promiscuous mode for the Ethernet device. */
+	rte_eth_promiscuous_enable(port);
+
+	return 0;
+}
+
+/*
+ * The lcore main. This is the main thread that does the work, reading
+ * from an input port, classifying the packets and writing to an output
+ * port.
+ */
+static __attribute__((noreturn)) void
+lcore_main(void)
+{
+	struct rte_flow_error error;
+	const uint8_t nb_ports = rte_eth_dev_count();
+	uint8_t port;
+	int ret;
+	int i;
+
+	/*
+	 * Check that the port is on the same NUMA node as the polling thread
+	 * for best performance.
+	 */
+	for (port = 0; port < nb_ports; port++)
+		if (rte_eth_dev_socket_id(port) > 0 &&
+				rte_eth_dev_socket_id(port) !=
+						(int)rte_socket_id())
+			printf("\n\nWARNING: port %u is on remote NUMA node "
+				"to polling thread.\n"
+				"Performance will not be optimal.\n", port);
+
+	printf("\nCore %u forwarding packets. [Ctrl+C to quit]\n",
+		rte_lcore_id());
+
+	/* Run until the application is quit or killed. */
+	for (;;) {
+		/*
+		 * Receive packets on a port, classify them and forward them
+		 * on the paired port.
+		 * The mapping is 0 -> 1, 1 -> 0, 2 -> 3, 3 -> 2, etc.
+		 */
+		for (port = 0; port < nb_ports; port++) {
+
+			/* Get burst of RX packets, from first port of pair. */
+			struct rte_mbuf *bufs[BURST_SIZE];
+			const uint16_t nb_rx = rte_eth_rx_burst(port, 0,
+					bufs, BURST_SIZE);
+
+			if (unlikely(nb_rx == 0))
+				continue;
+
+			for (i = 0; i < MAX_NUM_CLASSIFY; i++) {
+				if (udp_flow_classify[i]) {
+					ret = rte_flow_classify_query(
+						table_acl,
+						udp_flow_classify[i],
+						bufs, nb_rx,
+						&udp_classify_stats, &error);
+					if (ret)
+						printf("udp flow classify[%d] query failed port=%u\n\n",
+							i, port);
+					else
+						printf("udp rule [%d] counter1=%" PRIu64 " used_space=%d\n\n",
+							i,
+							udp_ntuple_stats.counter1,
+							udp_classify_stats.used_space);
+				}
+			}
+
+			for (i = 0; i < MAX_NUM_CLASSIFY; i++) {
+				if (tcp_flow_classify[i]) {
+					ret = rte_flow_classify_query(
+						table_acl,
+						tcp_flow_classify[i],
+						bufs, nb_rx,
+						&tcp_classify_stats, &error);
+					if (ret)
+						printf("tcp flow classify[%d] query failed port=%u\n\n",
+							i, port);
+					else
+						printf("tcp rule [%d] counter1=%" PRIu64 " used_space=%d\n\n",
+							i,
+							tcp_ntuple_stats.counter1,
+							tcp_classify_stats.used_space);
+				}
+			}
+
+			for (i = 0; i < MAX_NUM_CLASSIFY; i++) {
+				if (sctp_flow_classify[i]) {
+					ret = rte_flow_classify_query(
+						table_acl,
+						sctp_flow_classify[i],
+						bufs, nb_rx,
+						&sctp_classify_stats, &error);
+					if (ret)
+						printf("sctp flow classify[%d] query failed port=%u\n\n",
+							i, port);
+					else
+						printf("sctp rule [%d] counter1=%" PRIu64 " used_space=%d\n\n",
+							i,
+							sctp_ntuple_stats.counter1,
+							sctp_classify_stats.used_space);
+				}
+			}
+
+			/* Send burst of TX packets, to second port of pair. */
+			const uint16_t nb_tx = rte_eth_tx_burst(port ^ 1, 0,
+					bufs, nb_rx);
+
+			/* Free any unsent packets. */
+			if (unlikely(nb_tx < nb_rx)) {
+				uint16_t buf;
+
+				for (buf = nb_tx; buf < nb_rx; buf++)
+					rte_pktmbuf_free(bufs[buf]);
+			}
+		}
+	}
+}
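+
+/*
+ * Example invocation (EAL options are illustrative; adjust to the
+ * platform):
+ *   ./build/flow_classify -c 4 -n 4
+ * An even number of ports is required; a burst received on port 0 is
+ * forwarded on port 1 and vice versa.
+ */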
+
+/*
+ * The main function, which does initialization and calls the per-lcore
+ * functions.
+ */
+int
+main(int argc, char *argv[])
+{
+	struct rte_mempool *mbuf_pool;
+	struct rte_flow_error error;
+	uint8_t nb_ports;
+	uint8_t portid;
+	int ret;
+	int udp_num_classify = 0;
+	int tcp_num_classify = 0;
+	int sctp_num_classify = 0;
+	int socket_id;
+	struct rte_table_acl_params table_acl_params;
+
+	/* Initialize the Environment Abstraction Layer (EAL). */
+	ret = rte_eal_init(argc, argv);
+	if (ret < 0)
+		rte_exit(EXIT_FAILURE, "Error with EAL initialization\n");
+
+	argc -= ret;
+	argv += ret;
+
+	/* Check that there is an even number of ports to send/receive on. */
+	nb_ports = rte_eth_dev_count();
+	if (nb_ports < 2 || (nb_ports & 1))
+		rte_exit(EXIT_FAILURE, "Error: number of ports must be even\n");
+
+	/* Creates a new mempool in memory to hold the mbufs. */
+	mbuf_pool = rte_pktmbuf_pool_create("MBUF_POOL", NUM_MBUFS * nb_ports,
+		MBUF_CACHE_SIZE, 0, RTE_MBUF_DEFAULT_BUF_SIZE, rte_socket_id());
+	if (mbuf_pool == NULL)
+		rte_exit(EXIT_FAILURE, "Cannot create mbuf pool\n");
+
+	/* Initialize all ports. */
+	for (portid = 0; portid < nb_ports; portid++)
+		if (port_init(portid, mbuf_pool) != 0)
+			rte_exit(EXIT_FAILURE, "Cannot init port %"PRIu8 "\n",
+				portid);
+
+	if (rte_lcore_count() > 1)
+		printf("\nWARNING: Too many lcores enabled. Only 1 used.\n");
+
+	socket_id = rte_eth_dev_socket_id(0);
+
+	/* initialise ACL table params */
+	table_acl_params.n_rule_fields = RTE_DIM(ipv4_defs);
+	table_acl_params.name = "table_acl_ipv4_5tuple";
+	table_acl_params.n_rules = FLOW_CLASSIFY_MAX_RULE_NUM;
+	memcpy(table_acl_params.field_format, ipv4_defs, sizeof(ipv4_defs));
+
+	table_acl = rte_table_acl_ops.f_create(&table_acl_params, socket_id,
+		RTE_ACL_RULE_SZ(RTE_DIM(ipv4_defs)));
+	if (table_acl == NULL)
+		rte_exit(EXIT_FAILURE, "Failed to create table_acl\n");
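+
+	/*
+	 * Each rule below follows the same sequence: fill the attribute,
+	 * pattern and action arrays, validate the rule, then create it.
+	 * Each rule gets its own priority (1 to 5).
+	 */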
+
+	/* set up parameters for rte_flow_classify_validate and
+	 * rte_flow_classify_create
+	 */
+	attr.ingress = 1;
+	attr.priority = 1;
+	pattern_udp_1[0] = eth_item;
+	pattern_udp_1[1] = ipv4_udp_item_1;
+	pattern_udp_1[2] = udp_item_1;
+	pattern_udp_1[3] = end_item;
+	actions[0] = count_action;
+	actions[1] = end_action;
+
+	ret = rte_flow_classify_validate(table_acl, &attr,
+			pattern_udp_1, actions, &error);
+	if (ret)
+		rte_exit(EXIT_FAILURE, "udp_1 flow classify validate failed\n");
+
+	udp_flow_classify[udp_num_classify] = rte_flow_classify_create(
+		table_acl, &attr, pattern_udp_1, actions, &error);
+	if (udp_flow_classify[udp_num_classify] == NULL)
+		rte_exit(EXIT_FAILURE, "udp_1 flow classify create failed\n");
+	udp_num_classify++;
+
+	attr.ingress = 1;
+	attr.priority = 2;
+	pattern_udp_2[0] = eth_item;
+	pattern_udp_2[1] = ipv4_udp_item_2;
+	pattern_udp_2[2] = udp_item_2;
+	pattern_udp_2[3] = end_item;
+	actions[0] = count_action;
+	actions[1] = end_action;
+
+	ret = rte_flow_classify_validate(table_acl, &attr, pattern_udp_2,
+			actions, &error);
+	if (ret)
+		rte_exit(EXIT_FAILURE, "udp_2 flow classify validate failed\n");
+
+	udp_flow_classify[udp_num_classify] = rte_flow_classify_create(
+		table_acl, &attr, pattern_udp_2, actions, &error);
+	if (udp_flow_classify[udp_num_classify] == NULL)
+		rte_exit(EXIT_FAILURE, "udp_2 flow classify create failed\n");
+	udp_num_classify++;
+
+	attr.ingress = 1;
+	attr.priority = 3;
+	pattern_tcp_1[0] = eth_item;
+	pattern_tcp_1[1] = ipv4_tcp_item_1;
+	pattern_tcp_1[2] = tcp_item_1;
+	pattern_tcp_1[3] = end_item;
+	actions[0] = count_action;
+	actions[1] = end_action;
+
+	ret = rte_flow_classify_validate(table_acl, &attr, pattern_tcp_1,
+			actions, &error);
+	if (ret)
+		rte_exit(EXIT_FAILURE, "tcp_1 flow classify validate failed\n");
+
+	tcp_flow_classify[tcp_num_classify] = rte_flow_classify_create(
+		table_acl, &attr, pattern_tcp_1, actions, &error);
+	if (tcp_flow_classify[tcp_num_classify] == NULL)
+		rte_exit(EXIT_FAILURE, "tcp_1 flow classify create failed\n");
+	tcp_num_classify++;
+
+	attr.ingress = 1;
+	attr.priority = 4;
+	pattern_tcp_2[0] = eth_item;
+	pattern_tcp_2[1] = ipv4_tcp_item_2;
+	pattern_tcp_2[2] = tcp_item_2;
+	pattern_tcp_2[3] = end_item;
+	actions[0] = count_action;
+	actions[1] = end_action;
+
+	ret = rte_flow_classify_validate(table_acl, &attr, pattern_tcp_2,
+			actions, &error);
+	if (ret)
+		rte_exit(EXIT_FAILURE, "tcp_2 flow classify validate failed\n");
+
+	tcp_flow_classify[tcp_num_classify] = rte_flow_classify_create(
+		table_acl, &attr, pattern_tcp_2, actions, &error);
+	if (tcp_flow_classify[tcp_num_classify] == NULL)
+		rte_exit(EXIT_FAILURE, "tcp_2 flow classify create failed\n");
+	tcp_num_classify++;
+
+	attr.ingress = 1;
+	attr.priority = 5;
+	pattern_sctp_1[0] = eth_item;
+	pattern_sctp_1[1] = ipv4_sctp_item_1;
+	pattern_sctp_1[2] = sctp_item_1;
+	pattern_sctp_1[3] = end_item;
+	actions[0] = count_action;
+	actions[1] = end_action;
+
+	ret = rte_flow_classify_validate(table_acl, &attr, pattern_sctp_1,
+			actions, &error);
+	if (ret)
+		rte_exit(EXIT_FAILURE,
+			"sctp_1 flow classify validate failed\n");
+
+	sctp_flow_classify[sctp_num_classify] = rte_flow_classify_create(
+		table_acl, &attr, pattern_sctp_1, actions, &error);
+	if (sctp_flow_classify[sctp_num_classify] == NULL)
+		rte_exit(EXIT_FAILURE, "sctp_1 flow classify create failed\n");
+	sctp_num_classify++;
+
+	ret = rte_flow_classify_destroy(table_acl, sctp_flow_classify[0],
+			&error);
+	if (ret)
+		rte_exit(EXIT_FAILURE,
+			"sctp_1 flow classify destroy failed\n");
+	else {
+		sctp_num_classify--;
+		sctp_flow_classify[0] = NULL;
+	}
+
+	/* Call lcore_main on the master core only. */
+	lcore_main();
+
+	return 0;
+}